ArmaFft.spdroots()

statsmodels.sandbox.tsa.fftarma.ArmaFft.spdroots
ArmaFft.spdroots(w)
Spectral density at the frequencies w, computed from the polynomial roots. Builds two arrays of shape (number of roots, number of frequencies).
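
A minimal usage sketch (not part of the docstring above); the ARMA coefficients, the sample size n=256 and the frequency grid are illustrative assumptions.

import numpy as np
from statsmodels.sandbox.tsa.fftarma import ArmaFft

# ARMA(1, 1) process; ar and ma include the leading 1 of the lag polynomials
arma = ArmaFft(ar=[1, -0.5], ma=[1, 0.4], n=256)

w = np.linspace(0, np.pi, 100)   # frequencies in radians
sd = arma.spdroots(w)            # spectral density evaluated at the frequencies w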

tsa.filters.filtertools.convolution_filter()

statsmodels.tsa.filters.filtertools.convolution_filter
statsmodels.tsa.filters.filtertools.convolution_filter(x, filt, nsides=2)
Linear filtering via convolution. Centered or backward-displaced moving weighted average.
Parameters:
x : array_like
    Data array, 1d or 2d. If 2d, observations are in rows.
filt : array_like
    Linear filter coefficients in reverse time-order. Should have the same number of dimensions as x, though if 1d and x is 2d it will be coerced to 2d.
nsides : int, optional
    If 2 (default), a centered moving weighted average is computed; if 1, only past values are used.
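
A short example of the centered (nsides=2) case; the random data and the equal-weight 5-term filter are illustrative assumptions.

import numpy as np
from statsmodels.tsa.filters.filtertools import convolution_filter

x = np.random.randn(100)              # 1d data array
filt = np.repeat(1.0 / 5, 5)          # 5-term moving-average weights
smoothed = convolution_filter(x, filt, nsides=2)   # centered moving average; edge values are NaN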

graphics.regressionplots.plot_regress_exog()

statsmodels.graphics.regressionplots.plot_regress_exog
statsmodels.graphics.regressionplots.plot_regress_exog(results, exog_idx, fig=None)
Plot regression results against one regressor. This plots four graphs in a 2 by 2 figure: 'endog versus exog', 'residuals versus exog', 'fitted versus exog' and 'fitted plus residual versus exog'.
Parameters:
results : result instance
    Result instance with resid, model.endog and model.exog as attributes.
exog_idx : int
    Index of the regressor in exog.
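
A minimal sketch of a call; the simulated data and OLS model are assumptions used only to produce a results instance with resid, model.endog and model.exog.

import numpy as np
import statsmodels.api as sm
from statsmodels.graphics.regressionplots import plot_regress_exog

rng = np.random.default_rng(0)
X = sm.add_constant(rng.standard_normal((100, 2)))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(100)
results = sm.OLS(y, X).fit()

fig = plot_regress_exog(results, exog_idx=1)   # 2 by 2 panel for the second regressor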

inverse_power.inverse()

statsmodels.genmod.families.links.inverse_power.inverse
inverse_power.inverse(z)
Inverse of the power transform link function.
Parameters:
z : array-like
    Value of the transformed mean parameters at p.
Returns:
p : array
    Mean parameters.
Notes
g^(-1)(z) = z**(1/power)
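
A brief sketch, assuming the inverse_power link class (power = -1) is instantiated directly; the mean values are illustrative.

import numpy as np
from statsmodels.genmod.families.links import inverse_power

link = inverse_power()
mu = np.array([0.5, 1.0, 2.0])
z = link(mu)             # transformed mean parameters, z = mu**power with power = -1
p = link.inverse(z)      # recovers mu: p = z**(1/power)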

MixedLM.loglike()

statsmodels.regression.mixed_linear_model.MixedLM.loglike
MixedLM.loglike(params)
Evaluate the (profile) log-likelihood of the linear mixed effects model.
Parameters:
params : MixedLMParams or array-like
    The parameter value. If array-like, must be a packed parameter vector compatible with this model.
Returns:
The log-likelihood value at `params`.
Notes
This is the profile likelihood in which the scale parameter has been profiled out.
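
A minimal sketch; the simulated grouped data, the formula interface and the candidate parameter values are assumptions, not part of the docstring above.

import numpy as np
import pandas as pd
from statsmodels.regression.mixed_linear_model import MixedLM, MixedLMParams

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(20), 5)                 # 20 groups, 5 observations each
x = rng.standard_normal(100)
y = 1.0 + 2.0 * x + rng.standard_normal(20)[groups] + rng.standard_normal(100)
data = pd.DataFrame({"y": y, "x": x, "group": groups})

model = MixedLM.from_formula("y ~ x", groups="group", data=data)

# Build a MixedLMParams object (two fixed effects, 1x1 random-intercept
# covariance) and evaluate the profile log-likelihood at that point.
params = MixedLMParams.from_components(fe_params=np.array([1.0, 2.0]),
                                       cov_re=np.array([[1.0]]))
llf = model.loglike(params)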

tools.eval_measures.vare()

statsmodels.tools.eval_measures.vare
statsmodels.tools.eval_measures.vare(x1, x2, ddof=0, axis=0)
Variance of the error.
Parameters:
x1, x2 : array_like
    The performance measure depends on the difference between these two arrays.
axis : int
    Axis along which the summary statistic is calculated.
Returns:
vare : ndarray or float
    Variance of the difference along the given axis.
Notes
If x1 and x2 have different shapes, they need to broadcast. This uses numpy.asanyarray to convert the input.
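
A short example; the two arrays are illustrative.

import numpy as np
from statsmodels.tools.eval_measures import vare

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
err_var = vare(y_true, y_pred)    # variance of (y_true - y_pred) along axis 0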

robust.scale.hubers_scale

statsmodels.robust.scale.hubers_scale
Huber's scaling for fitting robust linear models. Huber's scale is intended to be used as the scale estimate in the IRLS algorithm and is slightly different than the Huber class.
Parameters:
d : float, optional
    The tuning constant for Huber's scale. Default is 2.5.
tol : float, optional
    The convergence tolerance.
maxiter : int, optional
    The maximum number of iterations. The default is 30.
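
A hedged sketch of the intended use as the scale estimate inside IRLS; the simulated heavy-tailed data and the HuberT norm are assumptions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.standard_normal((50, 1)))
y = X @ np.array([1.0, 2.0]) + rng.standard_t(3, size=50)   # heavy-tailed errors

rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT())
res = rlm.fit(scale_est=sm.robust.scale.hubers_scale)       # Huber's scale in the IRLS loop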

DescrStatsW.ztest_mean()

statsmodels.stats.weightstats.DescrStatsW.ztest_mean
DescrStatsW.ztest_mean(value=0, alternative='two-sided')
z-test of the null hypothesis that the mean is equal to value. The alternative hypothesis H1 is defined as follows:
'two-sided': H1: mean not equal to value
'larger': H1: mean larger than value
'smaller': H1: mean smaller than value
Parameters:
value : float or array
    The hypothesized value for the mean.
alternative : string
    The alternative hypothesis, H1, has to be one of 'two-sided' (default), 'larger' or 'smaller'.
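
A minimal sketch; the sample data and the small shift are illustrative assumptions.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

rng = np.random.default_rng(0)
x = rng.standard_normal(200) + 0.1

d = DescrStatsW(x)
zstat, pvalue = d.ztest_mean(value=0, alternative='two-sided')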

InverseGaussian.loglike()

statsmodels.genmod.families.family.InverseGaussian.loglike
InverseGaussian.loglike(endog, mu, scale=1.0)
Log-likelihood function for the inverse Gaussian distribution.
Parameters:
endog : array-like
    Endogenous response variable.
mu : array-like
    Fitted mean response variable.
scale : float, optional
    The default is 1.
Returns:
llf : float
    The value of the log-likelihood function evaluated at (endog, mu, scale) as defined below.
Notes
llf = -(1/2.)*sum((endog - mu)**2/(endog*mu**2*scale) + log(scale*endog**3) + log(2*pi))
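
A short example; the endog and mu arrays are illustrative.

import numpy as np
from statsmodels.genmod.families.family import InverseGaussian

fam = InverseGaussian()
endog = np.array([1.0, 2.0, 0.5])    # observed responses
mu = np.array([1.2, 1.8, 0.7])       # fitted mean responses
llf = fam.loglike(endog, mu, scale=1.0)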

sandbox.stats.multicomp.mcfdr()

statsmodels.sandbox.stats.multicomp.mcfdr
statsmodels.sandbox.stats.multicomp.mcfdr(nrepl=100, nobs=50, ntests=10, ntrue=6, mu=0.5, alpha=0.05, rho=0.0)
Monte Carlo simulation to test fdrcorrection.
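
A minimal sketch using the default arguments shown in the signature; the interpretation of the returned array is not spelled out in the docstring above.

from statsmodels.sandbox.stats.multicomp import mcfdr

# 100 Monte Carlo replications, 10 tests per replication, 6 of them with a
# true effect of size mu = 0.5, tested at level alpha = 0.05
res = mcfdr(nrepl=100, nobs=50, ntests=10, ntrue=6, mu=0.5, alpha=0.05, rho=0.0)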