Transf_gen.est_loc_scale()

statsmodels.sandbox.distributions.transformed.Transf_gen.est_loc_scale(*args, **kwds)

est_loc_scale is deprecated; use self.fit_loc_scale(data) instead.
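The replacement, fit_loc_scale, is inherited from scipy.stats.rv_continuous, so the same call works on any continuous distribution object. A minimal sketch using scipy's norm distribution and synthetic data (the seed and sample parameters are illustrative, not from the original docs):

```python
# Sketch: fit_loc_scale gives method-of-moments estimates of loc and scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=3.0, size=1000)

# Replacement for the deprecated est_loc_scale(data)
loc, scale = stats.norm.fit_loc_scale(data)
print(loc, scale)
```

The estimates should land close to the true loc=2 and scale=3 used to generate the sample.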

static LogitResults.aic()

static statsmodels.discrete.discrete_model.LogitResults.aic()

sandbox.stats.multicomp.StepDown()

class statsmodels.sandbox.stats.multicomp.StepDown(vals, nobs_all, var_all, df=None)

A class for step-down methods. This currently implements a simple tree-subset descent, similar to homogeneous_subsets, but it checks all leave-one-out subsets instead of assuming an ordered set. Comment in the SAS manual: SAS only uses interval subsets of the sorted list, which is sufficient for range tests (maybe equal variances and balanced sample sizes are also required).

tsa.filters.cf_filter.cffilter()

statsmodels.tsa.filters.cf_filter.cffilter(X, low=6, high=32, drift=True)

Christiano-Fitzgerald asymmetric, random walk filter.

Parameters:
X : array-like
    1d or 2d array to filter. If 2d, variables are assumed to be in columns.
low : float
    Minimum period of oscillations. Features below low periodicity are filtered out. Default is 6 for quarterly data, giving a 1.5 year periodicity.
high : float
    Maximum period of oscillations. Features above high periodicity are filtered out. Default is 32 for quarterly data.
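A hedged sketch of cffilter on a synthetic "quarterly" series: a slow linear trend plus a cycle with a 20-period (five-year) oscillation, which falls inside the default 6-32 period band. The series and its parameters are made up for illustration:

```python
# Sketch: extract the cyclical component of a trending series with cffilter.
import numpy as np
from statsmodels.tsa.filters.cf_filter import cffilter

t = np.arange(200)
x = 0.05 * t + np.sin(2 * np.pi * t / 20)  # linear trend + 20-period cycle

# Returns the cyclical and trend components, each the same length as x.
cycle, trend = cffilter(x, low=6, high=32, drift=True)
print(cycle.shape, trend.shape)
```

The 20-period oscillation should end up mostly in `cycle`, and the drift mostly in `trend`.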

MultinomialResults.f_test()

MultinomialResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
r_matrix : array-like, str, or tuple
    array : an r x k array, where r is the number of restrictions to test and k is the number of regressors. The linear combination is assumed to be equal to zero.
    str : the full hypotheses to test can be given as a string.

static ARResults.scale()

static statsmodels.tsa.ar_model.ARResults.scale()

IVRegressionResults.summary2()

IVRegressionResults.summary2(yname=None, xname=None, title=None, alpha=0.05, float_format='%.4f')

Experimental summary function to summarize the regression results.

Parameters:
xname : list of strings, of length equal to the number of parameters
    Names of the independent variables (optional).
yname : string
    Name of the dependent variable (optional).
title : string, optional
    Title for the top table. If not None, it replaces the default title.

static VARResults.pvalues()

static statsmodels.tsa.vector_ar.var_model.VARResults.pvalues()

Two-sided p-values for the model coefficients, from the Student t-distribution.

stats.diagnostic.het_goldfeldquandt

statsmodels.stats.diagnostic.het_goldfeldquandt

het_goldfeldquandt is a module-level instance of the Goldfeld-Quandt heteroskedasticity test class; see that class's docstring for details.

BinaryModel.fit_regularized()

BinaryModel.fit_regularized(start_params=None, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03, **kwargs)

Fit the model using a regularized maximum likelihood. Both the regularization method and the solver are determined by the method argument.

Parameters:
start_params : array-like, optional
    Initial guess of the solution for the loglikelihood maximization.