SquareFunc.derivplus()

statsmodels.sandbox.distributions.transformed.SquareFunc.derivplus SquareFunc.derivplus(x) [source]

SimpleTable.label_cells()

statsmodels.iolib.table.SimpleTable.label_cells SimpleTable.label_cells(func) [source] Return None. Labels cells based on func. If func(cell) is None then its datatype is not changed; otherwise it is set to func(cell).
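A minimal sketch of how `label_cells` might be used; the table contents and the labeling rule here are illustrative assumptions, not part of the documented API.

```python
from statsmodels.iolib.table import SimpleTable

# A small table; headers and stubs are chosen for illustration only.
table = SimpleTable(
    [[1, -2], [-3, 4]],
    headers=["a", "b"],
    stubs=["r1", "r2"],
)

# Label negative-valued data cells; returning None leaves a cell's
# datatype unchanged, as the docstring describes.
def flag_negative(cell):
    if isinstance(cell.data, int) and cell.data < 0:
        return "negative"
    return None

table.label_cells(flag_negative)
```

The labels set this way can then drive per-datatype formatting when the table is rendered.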

static PHRegResults.pvalues()

statsmodels.duration.hazard_regression.PHRegResults.pvalues static PHRegResults.pvalues()
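Despite the rendering above, `pvalues` is read as an attribute of a fitted results object rather than called. A sketch on simulated survival data (the data-generating choices below are assumptions for illustration):

```python
import numpy as np
from statsmodels.duration.hazard_regression import PHReg

# Simulated survival data; sizes and coefficients are illustrative.
rng = np.random.default_rng(0)
n = 100
exog = rng.standard_normal((n, 2))
durations = -np.log(rng.uniform(size=n)) * np.exp(-exog[:, 0])
status = np.ones(n)  # 1 = event observed, 0 = censored

result = PHReg(durations, exog, status=status).fit()
# pvalues is accessed as an attribute of the results object.
print(result.pvalues)
```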

GEE.update_cached_means()

statsmodels.genmod.generalized_estimating_equations.GEE.update_cached_means GEE.update_cached_means(mean_params) [source] cached_means should always contain the most recent calculation of the group-wise mean vectors. This function should be called every time the regression parameters are changed, to keep the cached means up to date.

KDEMultivariateConditional.cdf()

statsmodels.nonparametric.kernel_density.KDEMultivariateConditional.cdf KDEMultivariateConditional.cdf(endog_predict=None, exog_predict=None) [source]
Cumulative distribution function for the conditional density.
Parameters:
endog_predict : array_like, optional
The evaluation dependent variables at which the cdf is estimated. If not specified, the training dependent variables are used.
exog_predict : array_like, optional
The evaluation independent variables at which the cdf is estimated.
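A sketch evaluating the conditional CDF at a single point; the simulated linear relationship and the bandwidth choice are illustrative assumptions.

```python
import numpy as np
from statsmodels.nonparametric.kernel_density import KDEMultivariateConditional

# Simulated data with a linear dependence of y on x.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 0.5 * x + rng.standard_normal(200)

# One continuous dependent and one continuous independent variable.
kde = KDEMultivariateConditional(
    endog=[y], exog=[x], dep_type="c", indep_type="c", bw="normal_reference"
)
# Estimate P(Y <= 0 | X = 0) at a single evaluation point.
cdf_val = kde.cdf(endog_predict=[0.0], exog_predict=[0.0])
print(cdf_val)
```

By symmetry of the simulated data, the estimate at this point should be near 0.5.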

stats.correlation_tools.cov_nearest()

statsmodels.stats.correlation_tools.cov_nearest statsmodels.stats.correlation_tools.cov_nearest(cov, method='clipped', threshold=1e-15, n_fact=100, return_all=False) [source]
Find the nearest covariance matrix that is positive (semi-) definite. This leaves the diagonal, i.e. the variance, unchanged.
Parameters:
cov : ndarray, (k,k)
initial covariance matrix
method : string
if 'clipped', then the faster but less accurate corr_clipped is used. If 'nearest', then corr_nearest is used.
threshold
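A sketch with a symmetric matrix that is not positive semi-definite; the matrix itself is an illustrative assumption.

```python
import numpy as np
from statsmodels.stats.correlation_tools import cov_nearest

# A symmetric "covariance" matrix with a negative eigenvalue.
cov = np.array([[1.0, 0.9, 0.9],
                [0.9, 1.0, -0.9],
                [0.9, -0.9, 1.0]])

near = cov_nearest(cov, method="nearest", threshold=1e-15)
# The result is positive semi-definite and the diagonal
# (the variances) is left unchanged.
print(np.linalg.eigvalsh(near).min())
```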

Maximum Likelihood Estimation (Generic models)

Maximum Likelihood Estimation (Generic models) This tutorial explains how to quickly implement new maximum likelihood models in statsmodels. We give two examples: a Probit model for binary dependent variables, and a negative binomial model for count data. The GenericLikelihoodModel class eases the process by providing tools such as automatic numeric differentiation and a unified interface to scipy optimization functions. Using statsmodels, users can fit new MLE models simply by plugging in a log-likelihood function.

NegativeBinomialResults.initialize()

statsmodels.discrete.discrete_model.NegativeBinomialResults.initialize NegativeBinomialResults.initialize(model, params, **kwd)

RegressionResults.conf_int()

statsmodels.regression.linear_model.RegressionResults.conf_int RegressionResults.conf_int(alpha=0.05, cols=None) [source]
Returns the confidence interval of the fitted parameters.
Parameters:
alpha : float, optional
The alpha level for the confidence interval, i.e., the default alpha = .05 returns a 95% confidence interval.
cols : array-like, optional
cols specifies which confidence intervals to return.
Notes
The confidence interval is based on Student's t-distribution.

QuantRegResults.save()

statsmodels.regression.quantile_regression.QuantRegResults.save QuantRegResults.save(fname, remove_data=False)
Save a pickle of this instance.
Parameters:
fname : string or filehandle
fname can be a string to a file path or filename, or a filehandle.
remove_data : bool
If False (default), then the instance is pickled without changes. If True, then all arrays with length nobs are set to None before pickling. See the remove_data method. In some cases not all arrays will be set to None.