OLSInfluence.resid_press

statsmodels.stats.outliers_influence.OLSInfluence.resid_press OLSInfluence.resid_press [source] (cached attribute) PRESS residuals (leave-one-out prediction errors).
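A minimal usage sketch (the toy data and variable names are illustrative, not part of the docstring); the attribute is cached, so it is read without calling it:

>>> import numpy as np
>>> import statsmodels.api as sm
>>> from statsmodels.stats.outliers_influence import OLSInfluence
>>> np.random.seed(0)
>>> X = sm.add_constant(np.random.randn(50, 2))
>>> y = X @ np.array([1.0, 2.0, -1.0]) + np.random.randn(50)
>>> influence = OLSInfluence(sm.OLS(y, X).fit())
>>> influence.resid_press.shape   # one PRESS residual per observation
(50,)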

OLSInfluence.resid_studentized_internal

statsmodels.stats.outliers_influence.OLSInfluence.resid_studentized_internal OLSInfluence.resid_studentized_internal [source] (cached attribute) Studentized residuals using the variance from the OLS fit. This uses sigma from the original estimate and does not require a leave-one-out loop.
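A short sketch along the same lines (toy data is illustrative); results.get_influence() is the usual way to obtain the OLSInfluence instance:

>>> import numpy as np
>>> import statsmodels.api as sm
>>> np.random.seed(0)
>>> X = sm.add_constant(np.random.randn(50, 2))
>>> y = X @ np.array([1.0, 2.0, -1.0]) + np.random.randn(50)
>>> infl = sm.OLS(y, X).fit().get_influence()
>>> infl.resid_studentized_internal.shape   # scaled with sigma from the full-sample fit
(50,)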

discrete.discrete_model.MultinomialResults()

statsmodels.discrete.discrete_model.MultinomialResults class statsmodels.discrete.discrete_model.MultinomialResults(model, mlefit, cov_type='nonrobust', cov_kwds=None, use_t=None) [source] A results class for multinomial data. Parameters: model : A DiscreteModel instance. params : array-like The parameters of a fitted model. hessian : array-like The hessian of the fitted model. scale : float A scale parameter for the covariance matrix. Attributes: aic : float Akaike information criterion.
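In practice a MultinomialResults instance is obtained by fitting MNLogit rather than constructed directly; a minimal sketch with toy data (variable names are illustrative):

>>> import numpy as np
>>> import statsmodels.api as sm
>>> np.random.seed(0)
>>> X = sm.add_constant(np.random.randn(300, 2))
>>> y = np.random.randint(0, 3, size=300)   # three outcome categories
>>> res = sm.MNLogit(y, X).fit(disp=0)
>>> aic = res.aic        # Akaike information criterion, as listed above
>>> params = res.params  # one column of coefficients per non-reference outcome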

GEEResults.remove_data()

statsmodels.genmod.generalized_estimating_equations.GEEResults.remove_data GEEResults.remove_data() Remove data arrays (all nobs-length arrays) from the result and model. This reduces the size of the instance, so it can be pickled with less memory. Currently tested for use with predict from an unpickled results and model instance. Warning: since data and some intermediate results have been removed, calculating new statistics that require them will raise exceptions. The exception will occur the first time such a removed attribute is accessed.
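A hedged sketch of the documented workflow, with toy data and illustrative names: remove the arrays, pickle, then predict from the unpickled results instance by passing new exog explicitly.

>>> import pickle
>>> import numpy as np
>>> import statsmodels.api as sm
>>> np.random.seed(0)
>>> X = sm.add_constant(np.random.randn(100, 1))
>>> y = X @ np.array([1.0, 0.5]) + np.random.randn(100)
>>> groups = np.repeat(np.arange(20), 5)
>>> res = sm.GEE(y, X, groups=groups).fit()
>>> res.remove_data()                       # drops nobs-length arrays from result and model
>>> res2 = pickle.loads(pickle.dumps(res))  # the pickle is now much smaller
>>> pred = res2.predict(np.array([[1.0, 0.0], [1.0, 1.0]]))  # new exog supplied explicitly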

TransfTwo_gen.mean()

statsmodels.sandbox.distributions.transformed.TransfTwo_gen.mean TransfTwo_gen.mean(*args, **kwds) Mean of the distribution. Parameters: arg1, arg2, arg3, ... : array_like The shape parameter(s) for the distribution (see the docstring of the instance object for more information). loc : array_like, optional Location parameter (default=0). scale : array_like, optional Scale parameter (default=1). Returns: mean : float The mean of the distribution.
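A short sketch, assuming one of the pre-built TransfTwo_gen instances in the module is used (squarenormalg, the square-transformed standard normal, is assumed here); the loc and scale keywords follow the usual scipy.stats convention.

>>> from statsmodels.sandbox.distributions.transformed import squarenormalg
>>> m = squarenormalg.mean()           # mean of the squared standard normal, about 1
>>> m_shifted = squarenormalg.mean(loc=2.0, scale=3.0)   # rescaled and shifted mean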

sandbox.stats.multicomp.varcorrection_unequal()

statsmodels.sandbox.stats.multicomp.varcorrection_unequal statsmodels.sandbox.stats.multicomp.varcorrection_unequal(var_all, nobs_all, df_all) [source] Return the joint variance from samples with unequal variances and unequal sample sizes. (Docstring note: something is wrong.) Parameters: var_all : array_like The variance for each sample. nobs_all : array_like The number of observations for each sample. df_all : array_like Degrees of freedom for each sample. Returns: varjoint : float Joint variance. dfjoint : float Joint degrees of freedom.
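A minimal numeric sketch (the per-sample values are toy inputs, not from the docstring):

>>> import numpy as np
>>> from statsmodels.sandbox.stats.multicomp import varcorrection_unequal
>>> var_all = np.array([1.0, 2.5, 0.8])   # per-sample variances
>>> nobs_all = np.array([10, 15, 12])     # per-sample observation counts
>>> df_all = nobs_all - 1                 # per-sample degrees of freedom
>>> varjoint, dfjoint = varcorrection_unequal(var_all, nobs_all, df_all)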

stats.power.tt_solve_power

statsmodels.stats.power.tt_solve_power Solve for any one parameter of the power of a one-sample t-test. For the one-sample t-test the keywords are: effect_size, nobs, alpha, power. Exactly one needs to be None; all others need numeric values. This test can also be used for a paired t-test, where the effect size is defined in terms of the mean difference, and nobs is the number of pairs. Parameters: effect_size : float Standardized effect size, the mean divided by the standard deviation.
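A minimal sketch: leave exactly one keyword as None and fix the other three (the numeric values below are illustrative):

>>> from statsmodels.stats.power import tt_solve_power
>>> n = tt_solve_power(effect_size=0.5, nobs=None, alpha=0.05, power=0.8)   # required sample size
>>> pw = tt_solve_power(effect_size=0.5, nobs=30, alpha=0.05, power=None)   # achieved power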

MultiComparison.kruskal()

statsmodels.sandbox.stats.multicomp.MultiComparison.kruskal MultiComparison.kruskal(pairs=None, multimethod='T') [source] Pairwise comparison for the Kruskal-Wallis test. This is just a reimplementation of scipy.stats.kruskal and does not yet use a multiple comparison correction.
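A hedged sketch with three toy groups (data and names are illustrative); in current sandbox code the method may also print intermediate results:

>>> import numpy as np
>>> from statsmodels.sandbox.stats.multicomp import MultiComparison
>>> np.random.seed(0)
>>> data = np.r_[np.random.randn(20), np.random.randn(20) + 0.5, np.random.randn(20) + 1.0]
>>> groups = np.repeat([0, 1, 2], 20)
>>> mc = MultiComparison(data, groups)
>>> res = mc.kruskal()   # pairwise Kruskal-Wallis comparisons, no multiplicity correction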

sandbox.stats.multicomp.maxzero()

statsmodels.sandbox.stats.multicomp.maxzero statsmodels.sandbox.stats.multicomp.maxzero(x) [source] Find all upward zero crossings and return the index of the highest one. Not used anymore.
>>> np.random.seed(12345)
>>> x = np.random.randn(8)
>>> x
array([-0.20470766,  0.47894334, -0.51943872, -0.5557303 ,  1.96578057,  1.39340583,  0.09290788,  0.28174615])
>>> maxzero(x)
(4, array([1, 4]))

MultiComparison.getranks()

statsmodels.sandbox.stats.multicomp.MultiComparison.getranks MultiComparison.getranks() [source] Convert the data to rankdata and attach it. This creates the rank data as used for non-parametric tests, where in the case of ties the average rank is assigned.
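A short sketch (toy data; which attribute the ranks are attached to is an implementation detail of the sandbox code):

>>> import numpy as np
>>> from statsmodels.sandbox.stats.multicomp import MultiComparison
>>> np.random.seed(0)
>>> data = np.r_[np.random.randn(15), np.random.randn(15) + 1.0]
>>> groups = np.repeat([0, 1], 15)
>>> mc = MultiComparison(data, groups)
>>> mc.getranks()   # attaches tie-averaged rank data for the non-parametric pairwise tests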