KernelCensoredReg.fit()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.fit

KernelCensoredReg.fit(data_predict=None) [source]

Returns the marginal effects at the data_predict points.

KernelCensoredReg.r_squared()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.r_squared

KernelCensoredReg.r_squared()

Returns the R-Squared for the nonparametric regression.

Notes

For more details see p. 45 in [2]. The R-Squared is calculated by:

R^{2} = \frac{\left[ \sum_{i=1}^{n} (Y_{i} - \bar{y})(\hat{Y}_{i} - \bar{y}) \right]^{2}}{\sum_{i=1}^{n} (Y_{i} - \bar{y})^{2} \sum_{i=1}^{n} (\hat{Y}_{i} - \bar{y})^{2}}

where \hat{Y}_{i} is the mean calculated in fit at the exog points.

KernelCensoredReg.loo_likelihood()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.loo_likelihood

KernelCensoredReg.loo_likelihood()

KernelCensoredReg.cv_loo()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.cv_loo

KernelCensoredReg.cv_loo(bw, func) [source]

The cross-validation function with leave-one-out estimator.

Parameters:
bw : array_like
    Vector of bandwidth values.
func : callable function
    Returns the estimator of g(x). Can be either _est_loc_constant (local constant) or _est_loc_linear (local linear).

Returns:
L : float
    The value of the CV function.

Notes

Calculates the cross-validation least-squares function. This function is minimized by compute_bw to calculate the optimal value of bw.

KernelCensoredReg.censored()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.censored

KernelCensoredReg.censored(censor_val) [source]

KernelCensoredReg.aic_hurvich()

statsmodels.nonparametric.kernel_regression.KernelCensoredReg.aic_hurvich

KernelCensoredReg.aic_hurvich(bw, func=None)

Computes the AIC Hurvich criteria for the estimation of the bandwidth.

Parameters:
bw : str or array_like
    See the bw parameter of KernelReg for details.
func : None
    Unused here; needed in the signature because it's used in cv_loo.

Returns:
aic : ndarray
    The AIC Hurvich criteria, one element for each variable.

References

See ch. 2 in [1] and p. 35 in [2].

Kernel Density Estimation

In [1]: import numpy as np
        from scipy import stats
        import statsmodels.api as sm
        import matplotlib.pyplot as plt
        from statsmodels.distributions.mixture_rvs import mixture_rvs

A univariate example.

In [2]: np.random.seed(12345)

In [3]: obs_dist1 = mixture_rvs([.25, .75], size=10000, dist=[stats.norm, stats.norm],
                                kwargs=(dict(loc=-1, scale=.5), dict(loc=1, scale=.5)))

In [4]: kde = sm.nonparametric.KDEUnivariate(obs_dist1)

KDEMultivariateConditional.pdf()

statsmodels.nonparametric.kernel_density.KDEMultivariateConditional.pdf

KDEMultivariateConditional.pdf(endog_predict=None, exog_predict=None) [source]

Evaluate the probability density function.

Parameters:
endog_predict : array_like, optional
    Evaluation data for the dependent variables. If unspecified, the training data is used.
exog_predict : array_like, optional
    Evaluation data for the independent variables.

Returns:
pdf : array_like
    The value of the probability density at endog_predict and exog_predict.

KDEUnivariate.fit()

statsmodels.nonparametric.kde.KDEUnivariate.fit

KDEUnivariate.fit(kernel='gau', bw='normal_reference', fft=True, weights=None, gridsize=None, adjust=1, cut=3, clip=(-inf, inf)) [source]

Attach the density estimate to the KDEUnivariate class.

Parameters:
kernel : str
    The Kernel to be used. Choices are:
    'biw' for biweight
    'cos' for cosine
    'epa' for Epanechnikov
    'gau' for Gaussian
    'tri' for triangular
    'triw' for triweight
    'uni' for uniform
bw : str, float
    The bandwidth to use. Choices are 'scott', 'silverman', 'normal_reference', or a float giving the bandwidth directly.

KDEUnivariate.evaluate()

statsmodels.nonparametric.kde.KDEUnivariate.evaluate

KDEUnivariate.evaluate(point) [source]

Evaluate density at a single point.

Parameters:
point : float
    Point at which to evaluate the density.