Transf_gen.mean()

statsmodels.sandbox.distributions.transformed.Transf_gen.mean

Transf_gen.mean(*args, **kwds)

Mean of the distribution.

Parameters:
  arg1, arg2, arg3, ... : array_like
    The shape parameter(s) for the distribution (see the docstring of the instance object for more information).
  loc : array_like, optional
    Location parameter (default=0).
  scale : array_like, optional
    Scale parameter (default=1).

Returns:
  mean : float
    The mean of the distribution.
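Transf_gen subclasses scipy.stats.rv_continuous, so mean() follows the standard rv_continuous interface. A minimal sketch of that interface, shown with scipy.stats.norm rather than a transformed instance (constructing a Transf_gen requires transform functions not covered here):

```python
from scipy import stats

# loc and scale shift and rescale the distribution; for the normal
# distribution the mean equals loc.
m = stats.norm.mean(loc=2.0, scale=3.0)  # mean of N(2, 9)
```

The same call pattern, `dist.mean(shape_args, loc=..., scale=...)`, applies to any rv_continuous subclass, including Transf_gen instances.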

ACSkewT_gen.freeze()

statsmodels.sandbox.distributions.extras.ACSkewT_gen.freeze

ACSkewT_gen.freeze(*args, **kwds)

Freeze the distribution for the given arguments.

Parameters:
  arg1, arg2, arg3, ... : array_like
    The shape parameter(s) for the distribution. Should include all the non-optional arguments; may include loc and scale.

Returns:
  rv_frozen : rv_frozen instance
    The frozen distribution.
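freeze() fixes the shape, loc, and scale arguments so they need not be repeated on every call. ACSkewT_gen inherits this from scipy.stats.rv_continuous; a sketch using scipy.stats.t in place of an ACSkewT_gen instance:

```python
from scipy import stats

# Freeze a t distribution with 10 degrees of freedom, shifted and scaled.
frozen = stats.t.freeze(10, loc=1.0, scale=2.0)
# frozen.pdf / frozen.cdf / frozen.rvs now take no distribution arguments;
# the mean of a (shifted) t distribution with df > 1 is loc.
```

`stats.t(10, loc=1.0, scale=2.0)` is the equivalent, more common spelling of the same freeze.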

MixedLMResults.summary()

statsmodels.regression.mixed_linear_model.MixedLMResults.summary

MixedLMResults.summary(yname=None, xname_fe=None, xname_re=None, title=None, alpha=0.05) [source]

Summarize the mixed model regression results.

Parameters:
  yname : string, optional
    Name of the dependent variable; default is "y".
  xname_fe : list of strings, optional
    Fixed effects covariate names.
  xname_re : list of strings, optional
    Random effects covariate names.
  title : string, optional
    Title for the top table. If not None, this replaces the default title.
  alpha : float, optional
    The significance level for the confidence intervals.

Returns:
  smry : Summary instance
    Holds the summary tables for printing.

Poisson.jac()

statsmodels.discrete.discrete_model.Poisson.jac

Poisson.jac(*args, **kwds)

jac is deprecated; use the score_obs method instead. jac will be removed in 0.7.

Poisson model Jacobian of the log-likelihood for each observation.

Parameters:
  params : array-like
    The parameters of the model.

Returns:
  score : ndarray (nobs, k_vars)
    The score vector of the model evaluated at params.

Notes:
  d ln L_i / d beta = (y_i - lambda_i) x_i

  for observations i = 1, ..., nobs, where the loglinear model is assumed: ln lambda_i = x_i' beta.

tools.numdiff.approx_fprime_cs()

statsmodels.tools.numdiff.approx_fprime_cs

statsmodels.tools.numdiff.approx_fprime_cs(x, f, epsilon=None, args=(), kwargs={}) [source]

Calculate the gradient or Jacobian with a complex step derivative approximation.

Parameters:
  x : array
    Parameters at which the derivative is evaluated.
  f : function
    f(*((x,)+args), **kwargs) returning either one value or a 1d array.
  epsilon : float, optional
    Step size; if None, the optimal step size is used. The optimal step size is EPS*x. See note.
  args : tuple
    Tuple of additional arguments for the function f.
  kwargs : dict
    Dictionary of additional keyword arguments for the function f.

Returns:
  partials : ndarray
    The gradient (for scalar f) or Jacobian (for array-valued f) of f evaluated at x.
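A sketch with a small vector-valued function whose Jacobian is known analytically; the complex-step approximation is accurate to machine precision for functions like this that accept complex input:

```python
import numpy as np
from statsmodels.tools.numdiff import approx_fprime_cs

def f(x):
    # f: R^2 -> R^2; both components are analytic, so the complex
    # step method applies.
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])

x0 = np.array([1.0, 2.0])
jac = approx_fprime_cs(x0, f)
# analytic Jacobian: [[2*x0, 1], [x1*cos(x0), sin(x0)]]
```

Row j, column i of the result is the partial derivative of f_j with respect to x_i.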

graphics.regressionplots.plot_leverage_resid2()

statsmodels.graphics.regressionplots.plot_leverage_resid2

statsmodels.graphics.regressionplots.plot_leverage_resid2(results, alpha=0.05, label_kwargs={}, ax=None, **kwargs) [source]

Plot leverage statistics vs. normalized residuals squared.

Parameters:
  results : results instance
    A regression results instance.
  alpha : float
    Specifies the cut-off for large-standardized residuals. Residuals are assumed to be distributed N(0, 1) with alpha=alpha.
  label_kwargs : dict
    The keywords to pass to annotate when labeling the points.
  ax : Axes instance, optional
    A matplotlib Axes to draw into; if None, a new figure and Axes are created.

Returns:
  fig : matplotlib Figure
    The figure containing the plot.

tools.tools.isestimable()

statsmodels.tools.tools.isestimable

statsmodels.tools.tools.isestimable(C, D) [source]

True if the (Q, P) contrast C is estimable for the (N, P) design D.

From a Q x P contrast matrix C and an N x P design matrix D, checks whether the contrast C is estimable by comparing the rank of vstack([C, D]) with the rank of D; the two agree exactly when C is estimable.

Parameters:
  C : (Q, P) array-like
    Contrast matrix. If C is 1-dimensional, shape (1, P) is assumed.
  D : (N, P) array-like
    Design matrix.

Returns:
  tf : bool
    True if the contrast C is estimable on design D.
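A sketch using the classic over-parameterized one-way design, where group differences are estimable but individual group effects are not (the 6-observation design below is made up for illustration):

```python
import numpy as np
from statsmodels.tools.tools import isestimable

# Intercept plus two group indicators: the intercept column equals the
# sum of the indicator columns, so D has rank 2, not 3.
D = np.column_stack([np.ones(6),
                     [1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]])

diff = np.array([0.0, 1.0, -1.0])   # group difference: in the row space of D
single = np.array([0.0, 1.0, 0.0])  # a single group effect: not in it
```

isestimable(diff, D) is True while isestimable(single, D) is False, because stacking `single` onto D raises the rank from 2 to 3.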

RegressionResults.conf_int()

statsmodels.regression.linear_model.RegressionResults.conf_int

RegressionResults.conf_int(alpha=0.05, cols=None) [source]

Returns the confidence interval of the fitted parameters.

Parameters:
  alpha : float, optional
    The alpha level for the confidence interval, i.e. the default alpha = .05 returns a 95% confidence interval.
  cols : array-like, optional
    Specifies which confidence intervals to return.

Notes:
  The confidence interval is based on Student's t-distribution.

IVRegressionResults.remove_data()

statsmodels.sandbox.regression.gmm.IVRegressionResults.remove_data

IVRegressionResults.remove_data()

Remove data arrays (all nobs-length arrays) from the result and model.

This reduces the size of the instance, so it can be pickled with less memory. Currently tested for use with predict from an unpickled results and model instance.

Warning: Since the data and some intermediate results have been removed, calculating new statistics that require them will raise exceptions. The exception will occur the first time an attribute is accessed that has been set to None.

FTestPower.power()

statsmodels.stats.power.FTestPower.power

FTestPower.power(effect_size, df_num, df_denom, alpha, ncc=1) [source]

Calculate the power of an F-test.

Parameters:
  effect_size : float
    Standardized effect size: the mean divided by the standard deviation. The effect size has to be positive.
  df_num : int or float
    Numerator degrees of freedom.
  df_denom : int or float
    Denominator degrees of freedom.
  alpha : float in interval (0, 1)
    Significance level, e.g. 0.05; the probability of a type I error, that is, of wrongly rejecting the null hypothesis when it is true.
  ncc : int
    Degrees of freedom correction for the non-centrality parameter (default=1).

Returns:
  power : float
    Power of the test.
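A sketch comparing the power at two effect sizes with the other arguments held fixed (the degrees of freedom below are arbitrary); holding df and alpha constant, power increases with effect size:

```python
from statsmodels.stats.power import FTestPower

# Same test, two effect sizes: the larger effect is easier to detect.
p_small = FTestPower().power(effect_size=0.1, df_num=2, df_denom=50,
                             alpha=0.05)
p_large = FTestPower().power(effect_size=0.5, df_num=2, df_denom=50,
                             alpha=0.05)
```

Both values lie in (0, 1), with p_small < p_large.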