NegativeBinomial.cov_params_func_l1()

statsmodels.discrete.discrete_model.NegativeBinomial.cov_params_func_l1 NegativeBinomial.cov_params_func_l1(likelihood_model, xopt, retvals) Computes cov_params on a reduced parameter space corresponding to the nonzero parameters resulting from the l1 regularized fit. Returns a full cov_params matrix, with entries corresponding to zero'd values set to np.nan.
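This is the routine behind the covariance matrix reported after an L1-regularized fit. A minimal sketch with simulated data (the penalty value and variable names are illustrative, not from this page):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
n = 500
X = sm.add_constant(np.random.standard_normal((n, 3)))
y = np.random.poisson(np.exp(X @ [0.5, 1.0, 0.0, 0.0]))

# fit_regularized with method='l1' drives some coefficients exactly to zero
res = sm.NegativeBinomial(y, X).fit_regularized(method='l1', alpha=10.0, disp=0)
print(res.params)        # trimmed coefficients are exactly 0
print(res.cov_params())  # rows/columns of the zero'd parameters are np.nan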

static IVRegressionResults.HC1_se()

statsmodels.sandbox.regression.gmm.IVRegressionResults.HC1_se static IVRegressionResults.HC1_se() See statsmodels.RegressionResults
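A minimal sketch of where this attribute appears in practice, using IV2SLS from the same sandbox module (simulated data; names are illustrative):

import numpy as np
import statsmodels.api as sm
from statsmodels.sandbox.regression.gmm import IV2SLS

np.random.seed(0)
n = 200
z = np.random.standard_normal(n)             # instrument
x = z + 0.3 * np.random.standard_normal(n)   # instrumented regressor
y = 1.0 + 2.0 * x + np.random.standard_normal(n)

res = IV2SLS(y, sm.add_constant(x), sm.add_constant(z)).fit()
print(res.HC1_se)   # heteroscedasticity-robust (HC1) standard errors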

NegativeBinomial.predict()

statsmodels.genmod.families.family.NegativeBinomial.predict NegativeBinomial.predict(mu) Linear predictors based on given mu values. Parameters: mu : array The mean response variables Returns: lin_pred : array Linear predictors based on the mean response variables. The value of the link function at the given mu.
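A small sketch of the round trip through the default log link (the mu values are illustrative):

import numpy as np
from statsmodels.genmod.families.family import NegativeBinomial

fam = NegativeBinomial()          # default link is log
mu = np.array([0.5, 1.0, 2.0])
lin_pred = fam.predict(mu)        # the link evaluated at mu, i.e. np.log(mu)
print(np.allclose(lin_pred, np.log(mu)))       # True
print(np.allclose(fam.fitted(lin_pred), mu))   # fitted() inverts the link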

GLS.whiten()

statsmodels.regression.linear_model.GLS.whiten GLS.whiten(X) [source] GLS whiten method. Parameters: X : array-like Data to be whitened. Returns: np.dot(cholsigmainv,X) : See also regression.GLS
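A minimal sketch with a known diagonal error covariance (the sigma used here is illustrative):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.standard_normal((5, 2)))
y = np.random.standard_normal(5)
sigma = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])   # known error covariance

model = sm.GLS(y, X, sigma=sigma)
wX = model.whiten(X)                          # np.dot(model.cholsigmainv, X)
print(np.allclose(wX, model.cholsigmainv @ X))   # True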

QuantReg.hessian()

statsmodels.regression.quantile_regression.QuantReg.hessian QuantReg.hessian(params) The Hessian matrix of the model

OLSResults.initialize()

statsmodels.regression.linear_model.OLSResults.initialize OLSResults.initialize(model, params, **kwd)

graphics.functional.banddepth()

statsmodels.graphics.functional.banddepth statsmodels.graphics.functional.banddepth(data, method='MBD') [source] Calculate the band depth for a set of functional curves. Band depth is an order statistic for functional data (see fboxplot), with a higher band depth indicating larger 'centrality'. In analogy to scalar data, the functional curve with the highest band depth is called the median curve, and the band made up from the first N/2 of N curves is the 50% central region. Parameters: data : ndarray
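A minimal sketch with simulated curves (grid size and noise level are illustrative); each row of data is one curve evaluated on a common grid:

import numpy as np
from statsmodels.graphics.functional import banddepth

np.random.seed(0)
t = np.linspace(0, 1, 50)
data = np.array([np.sin(2 * np.pi * t) + 0.2 * np.random.standard_normal(t.size)
                 for _ in range(20)])

depth = banddepth(data, method='MBD')   # one depth value per curve
median_curve = data[np.argmax(depth)]   # the most central curve
print(depth.shape, np.argmax(depth))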

Kernel Density Estimation

In [1]: import numpy as np
        from scipy import stats
        import statsmodels.api as sm
        import matplotlib.pyplot as plt
        from statsmodels.distributions.mixture_rvs import mixture_rvs

A univariate example.

In [2]: np.random.seed(12345)

In [3]: obs_dist1 = mixture_rvs([.25, .75], size=10000, dist=[stats.norm, stats.norm],
                                kwargs=(dict(loc=-1, scale=.5), dict(loc=1, scale=.5)))

In [4]: kde = sm.nonparametric.KDEUnivariate(obs_dist1)
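The notebook is cut off above; a sketch of the usual next steps, assuming the KDEUnivariate estimator created in the last cell (figure size and bin count are illustrative):

kde.fit()   # fit the kernel density estimate (Gaussian kernel by default)

fig, ax = plt.subplots(figsize=(8, 5))
ax.hist(obs_dist1, bins=50, density=True, alpha=0.4, label='histogram')
ax.plot(kde.support, kde.density, lw=2, label='KDE')
ax.legend()
plt.show()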

SimpleTable.extend_right()

statsmodels.iolib.table.SimpleTable.extend_right SimpleTable.extend_right(table) [source] Return None. Extend each row of self with the corresponding row of table. Does not import formatting from table. This generally makes sense only if the two tables have the same number of rows, but that is not enforced. Note: to append a table below, just use extend, which is the ordinary list method; that generally makes sense only if the two tables have the same number of columns, but that is also not enforced.
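A minimal sketch (table contents are illustrative) joining two tables with matching row counts side by side:

from statsmodels.iolib.table import SimpleTable

left = SimpleTable([[1, 2], [3, 4]], headers=['a', 'b'])
right = SimpleTable([[5], [6]], headers=['c'])

left.extend_right(right)   # modifies left in place and returns None
print(left)                # three columns: a, b, c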

OLSInfluence.summary_frame()

statsmodels.stats.outliers_influence.OLSInfluence.summary_frame OLSInfluence.summary_frame() [source] Creates a DataFrame with all available influence results. Returns: frame : DataFrame A DataFrame with all results. Notes The resultant DataFrame contains six variables in addition to the DFBETAS. These are: cooks_d : Cook's distance defined in Influence.cooks_distance standard_resid : Standardized residuals defined in Influence.resid_studentized_internal hat_diag : The diagonal of the hat matrix defined in Influence.hat_matrix_diag dffits_internal : DFFITS statistics using internally studentized residuals student_resid : Externally studentized residuals dffits : DFFITS statistics using externally studentized residuals
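A minimal sketch with simulated data (coefficients are illustrative):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

np.random.seed(0)
X = sm.add_constant(np.random.standard_normal((50, 2)))
y = X @ [1.0, 2.0, -1.0] + np.random.standard_normal(50)

res = sm.OLS(y, X).fit()
frame = OLSInfluence(res).summary_frame()
print(frame[['cooks_d', 'standard_resid', 'hat_diag', 'dffits']].head())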