static RegressionResults.aic()

statsmodels.regression.linear_model.RegressionResults.aic

static RegressionResults.aic() [source]

Akaike's information criterion; for a model with a constant this is -2*llf + 2*(df_model + 1).
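
A minimal sketch (simulated data, all names illustrative): despite the method-style rendering above, aic is read as an attribute of a fitted results instance, and can be reproduced from the log-likelihood.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(50, 2))
y = X @ [1.0, 0.5, -0.25] + np.random.randn(50)

res = sm.OLS(y, X).fit()
print(res.aic)
# for a model with a constant: -2*llf + 2*(df_model + 1)
print(-2 * res.llf + 2 * (res.df_model + 1))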

IVGMM.score()

statsmodels.sandbox.regression.gmm.IVGMM.score

IVGMM.score(params, weights, epsilon=None, centered=True)
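
The docs give only the signature, so here is a hedged sketch (simulated data, identity weighting matrix, all names illustrative) of evaluating the score, which in this sandbox implementation is a numerical gradient of the GMM objective, at a candidate parameter vector:

import numpy as np
from statsmodels.sandbox.regression.gmm import IVGMM

np.random.seed(0)
nobs = 200
z = np.random.randn(nobs, 2)                    # instruments
x = z[:, :1] + 0.5 * np.random.randn(nobs, 1)   # one endogenous regressor
y = x[:, 0] + np.random.randn(nobs)

mod = IVGMM(y, x, z)
params = np.array([1.0])                        # candidate parameter vector
weights = np.eye(z.shape[1])                    # identity weighting matrix
grad = mod.score(params, weights, epsilon=1e-6)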

stats.diagnostic.kstest_normal()

statsmodels.stats.diagnostic.kstest_normal

statsmodels.stats.diagnostic.kstest_normal(x, pvalmethod='approx')

Lilliefors test for normality: a Kolmogorov-Smirnov test with estimated mean and variance.

Parameters:
x : array_like, 1d
    data series, sample
pvalmethod : 'approx', 'table'
    'approx' uses the approximation formula of Dalal and Wilkinson, valid for p-values < 0.1. If the p-value is larger than 0.1, the result of 'table' is returned. 'table' uses the table from Dalal and Wilkinson.
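
A minimal usage sketch (simulated data): the function returns the KS statistic and the p-value for the null hypothesis that the sample is drawn from a normal distribution.

import numpy as np
from statsmodels.stats.diagnostic import kstest_normal

np.random.seed(12345)
x = np.random.randn(100)
ksstat, pvalue = kstest_normal(x, pvalmethod='approx')
# a small p-value rejects normality; here the data are normal by construction
print(ksstat, pvalue)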

OLS.whiten()

statsmodels.regression.linear_model.OLS.whiten

OLS.whiten(Y) [source]

The OLS model whitener does nothing: it returns Y unchanged, since OLS assumes homoskedastic, uncorrelated errors.
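
A short sketch confirming the identity behaviour on toy data:

import numpy as np
import statsmodels.api as sm

y = np.array([1.0, 2.0, 3.0, 4.0])
X = sm.add_constant(np.arange(4.0))
model = sm.OLS(y, X)
# whiten is the identity transform for OLS
assert np.array_equal(model.whiten(y), y)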

Regression with Discrete Dependent Variable

Regression with Discrete Dependent Variable

Regression models for limited and qualitative dependent variables. The module currently allows the estimation of models with binary (Logit, Probit), nominal (MNLogit), or count (Poisson) data. See Module Reference for commands and arguments.

Examples

# Load the data from Spector and Mazzeo (1980)
spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog)

# Logit Model
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
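
The excerpt stops at the model construction; a runnable continuation of the same example (the import, fit, and summary lines are a standard completion, not part of the excerpt):

import statsmodels.api as sm

spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog)

logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit()
print(logit_res.summary())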

Summary.add_extra_txt()

statsmodels.iolib.summary.Summary.add_extra_txt

Summary.add_extra_txt(etext) [source]

Add additional text that will be appended at the end of the text output.

Parameters:
etext : list of strings
    lines that are added to the text output.
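
A hedged sketch (simulated data, all names illustrative): the lines are joined internally, so a list of strings is the safest input.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(40, 1))
y = X @ [2.0, -1.0] + np.random.randn(40)

smry = sm.OLS(y, X).fit().summary()
smry.add_extra_txt(['Note: simulated data, shown for illustration only.'])
print(smry.as_text())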

sandbox.stats.multicomp.maxzerodown()

statsmodels.sandbox.stats.multicomp.maxzerodown

statsmodels.sandbox.stats.multicomp.maxzerodown(x) [source]

Find all up zero crossings and return the index of the highest. Not used anymore.

>>> np.random.seed(12345)
>>> x = np.random.randn(8)
>>> x
array([-0.20470766,  0.47894334, -0.51943872, -0.5557303 ,  1.96578057,
        1.39340583,  0.09290788,  0.28174615])
>>> maxzero(x)
(4, array([1, 4]))

no up-zero-crossing at end

RLMResults.f_test()

statsmodels.robust.robust_linear_model.RLMResults.f_test

RLMResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
r_matrix : array-like, str, or tuple
    array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. See the examples.
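
A hedged sketch (simulated data, all names illustrative): testing the joint hypothesis that both slope coefficients of a robust linear model are zero, using an explicit restriction matrix.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(60, 2))
y = X @ [1.0, 0.5, 0.0] + np.random.randn(60)

rlm_res = sm.RLM(y, X).fit()
# each row restricts one coefficient: x1 = 0 and x2 = 0
r_matrix = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
print(rlm_res.f_test(r_matrix))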

iolib.summary2.Summary

statsmodels.iolib.summary2.Summary

class statsmodels.iolib.summary2.Summary [source]

Methods

add_array(array[, align, float_format])
    Add the contents of a Numpy array to summary table
add_base(results[, alpha, float_format, ...])
    Try to construct a basic summary instance.
add_df(df[, index, header, float_format, align])
    Add the contents of a DataFrame to summary table
add_dict(d[, ncols, align, float_format])
    Add the contents of a Dict to summary table
add_text(string)
    Append a note to the bottom of the summary table
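
A minimal sketch (hand-made numbers, for illustration only) of assembling a summary from a DataFrame and appending a note:

import pandas as pd
from statsmodels.iolib.summary2 import Summary

df = pd.DataFrame({'coef': [1.25, -0.40], 'std err': [0.30, 0.12]},
                  index=['const', 'x1'])

smry = Summary()
smry.add_df(df)  # table from a DataFrame
smry.add_text('Values above are illustrative only.')
print(smry.as_text())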

GroupsStats.runbasic()

statsmodels.sandbox.stats.multicomp.GroupsStats.runbasic

GroupsStats.runbasic(useranks=False) [source]