ACSkewT_gen.ppf()

statsmodels.sandbox.distributions.extras.ACSkewT_gen.ppf

ACSkewT_gen.ppf(q, *args, **kwds)

Percent point function (inverse of cdf) at q of the given RV.

Parameters:
    q : array_like
        lower tail probability
    arg1, arg2, arg3, ... : array_like
        The shape parameter(s) for the distribution (see docstring of the instance object for more information)
    loc : array_like, optional
        location parameter (default=0)
    scale : array_like, optional
        scale parameter (default=1)

Returns:
    x : array_like
        quantile corresponding to the lower tail probability q
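A minimal usage sketch, assuming the generator can be instantiated directly and that its shape parameters are (df, alpha), the degrees of freedom and skewness of the Azzalini skew-t; both assumptions come from the class docstring rather than from the entry above.

    import numpy as np
    from statsmodels.sandbox.distributions.extras import ACSkewT_gen

    skewt = ACSkewT_gen()
    # median and 95th percentile for df=5, alpha=2 (assumed shape parameters)
    x = skewt.ppf([0.5, 0.95], 5, 2, loc=0.0, scale=1.0)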

FEVD.summary()

statsmodels.tsa.vector_ar.var_model.FEVD.summary

FEVD.summary() [source]
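FEVD instances are normally obtained from a fitted VAR via VARResults.fevd(); a short sketch with toy data (column names are illustrative):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    data = pd.DataFrame(np.random.randn(200, 2), columns=["y1", "y2"])
    res = VAR(data).fit(maxlags=2)
    fevd = res.fevd(periods=5)   # forecast error variance decomposition
    fevd.summary()               # prints one decomposition table per variable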

tsa.filters.filtertools.fftconvolve3()

statsmodels.tsa.filters.filtertools.fftconvolve3

statsmodels.tsa.filters.filtertools.fftconvolve3(in1, in2=None, in3=None, mode='full') [source]

Convolve two N-dimensional arrays using FFT. See convolve.

For use with arma. Old version: in1=num, in2=den, in3=data; better for consistency with other functions: in1=data, in2=num, in3=den. Note that in2 and in3 need to have consistent dimension/shape, since I'm using the max of the in2, in3 shapes and not the sum. Copied from scipy.signal.signaltools, but here used to
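A rough call sketch under the newer argument convention noted above (in1=data, in2=num, in3=den); the lag polynomials here are purely illustrative:

    import numpy as np
    from statsmodels.tsa.filters.filtertools import fftconvolve3

    data = np.random.randn(200)      # toy series
    num = np.array([1.0, 0.5])       # numerator (MA) lag polynomial, illustrative
    den = np.array([1.0, -0.6])      # denominator (AR) lag polynomial, illustrative
    filtered = fftconvolve3(data, num, den, mode='full')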

tsa.arima_process.arma2ar()

statsmodels.tsa.arima_process.arma2ar

statsmodels.tsa.arima_process.arma2ar(ar, ma, nobs=100) [source]

Get the AR representation of an ARMA process.

Parameters:
    ar : array_like, 1d
        auto regressive lag polynomial
    ma : array_like, 1d
        moving average lag polynomial
    nobs : int
        number of observations to calculate

Returns:
    ar : array, 1d
        coefficients of AR lag polynomial with nobs elements

Notes:
    This is just an alias for
    ar_representation = arma_impulse_response(ma, ar, nobs=100)
    fully tested against matlab
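A short example; the lag polynomials include the zero-lag coefficient and follow the lag-polynomial sign convention (e.g. [1, -0.5] for an AR(1) with coefficient 0.5):

    from statsmodels.tsa.arima_process import arma2ar

    ar = [1.0, -0.5]   # (1 - 0.5 L)
    ma = [1.0, 0.3]    # (1 + 0.3 L)
    ar_rep = arma2ar(ar, ma, nobs=10)   # first 10 coefficients of the AR representation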

inverse_power.inverse()

statsmodels.genmod.families.links.inverse_power.inverse

inverse_power.inverse(z)

Inverse of the power transform link function.

Parameters:
    z : array-like
        Value of the transformed mean parameters at p

Returns:
    p : array
        Mean parameters

Notes:
    g^(-1)(z) = z**(1/power)
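A small sketch; for the inverse-power link the exponent is -1, so the inverse reduces to g^(-1)(z) = 1/z:

    import numpy as np
    from statsmodels.genmod.families.links import inverse_power

    link = inverse_power()
    z = np.array([0.5, 1.0, 2.0])   # linear predictor values
    p = link.inverse(z)             # mean parameters: 1/z -> [2.0, 1.0, 0.5]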

ANOVA

ANOVA

Analysis of Variance models

Examples

In [1]: import statsmodels.api as sm

In [2]: from statsmodels.formula.api import ols

In [3]: moore = sm.datasets.get_rdataset("Moore", "car",
   ...:                                  cache=True) # load data
   ...:

In [4]: data = moore.data

In [5]: data = data.rename(columns={"partner.status" :
   ...:                             "partner_status"}) # make name pythonic
   ...:

In [6]: moore_lm = ols('conformity ~ C(fcategory, Sum)*C(partner_status, Sum)',
   ...:                data=data).fit()
   ...:
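The session above is cut off before the ANOVA step itself; a self-contained sketch of the presumable continuation follows (the typ=2 Type II table is an assumption about where the example is headed):

    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    moore = sm.datasets.get_rdataset("Moore", "car", cache=True)
    data = moore.data.rename(columns={"partner.status": "partner_status"})
    moore_lm = ols('conformity ~ C(fcategory, Sum)*C(partner_status, Sum)',
                   data=data).fit()
    table = sm.stats.anova_lm(moore_lm, typ=2)   # Type II ANOVA table
    print(table)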

BinaryResults.initialize()

statsmodels.discrete.discrete_model.BinaryResults.initialize

BinaryResults.initialize(model, params, **kwd)

stats.proportion.proportion_effectsize()

statsmodels.stats.proportion.proportion_effectsize

statsmodels.stats.proportion.proportion_effectsize(prop1, prop2, method='normal') [source]

Effect size for a test comparing two proportions, for use in power functions.

Parameters:
    prop1, prop2 : float or array_like

Returns:
    es : float or ndarray
        effect size for (transformed) prop1 - prop2

Notes:
    Only method='normal' is implemented to match pwr.p2.test; see http://www.statmethods.net/stats/power.html
    Effect size for normal is defined as
    2 * (arcsin(sqrt(prop1)) - arcsin(sqrt(prop2)))
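A quick usage sketch; the value is Cohen's h on the arcsine-transformed proportions and can be passed to a power calculator such as NormalIndPower:

    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    es = proportion_effectsize(0.5, 0.4)   # effect size for 0.5 vs 0.4
    n = NormalIndPower().solve_power(effect_size=es, power=0.8, alpha=0.05)  # nobs1 per group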

ARMAResults.summary()

statsmodels.tsa.arima_model.ARMAResults.summary

ARMAResults.summary(alpha=0.05) [source]

Summarize the Model.

Parameters:
    alpha : float, optional
        Significance level for the confidence intervals.

Returns:
    smry : Summary instance
        This holds the summary table and text, which can be printed or converted to various output formats.

See also
    statsmodels.iolib.summary.Summary
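A hedged sketch producing an ARMAResults instance via the statsmodels.tsa.arima_model.ARMA estimator referenced above (simulated data; order and coefficients are illustrative):

    import numpy as np
    from statsmodels.tsa.arima_process import arma_generate_sample
    from statsmodels.tsa.arima_model import ARMA

    np.random.seed(12345)
    y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.3], nsample=250)
    res = ARMA(y, order=(1, 1)).fit(disp=0)
    print(res.summary(alpha=0.05))   # 95% confidence intervals in the coefficient table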

NegativeBinomialResults.conf_int()

statsmodels.discrete.discrete_model.NegativeBinomialResults.conf_int

NegativeBinomialResults.conf_int(alpha=0.05, cols=None, method='default')

Returns the confidence interval of the fitted parameters.

Parameters:
    alpha : float, optional
        The significance level for the confidence interval, i.e., the default alpha = .05 returns a 95% confidence interval.
    cols : array-like, optional
        cols specifies which confidence intervals to return
    method : string
        Not Implemented Yet
        Method to estimate the confidence interval
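A hedged sketch with toy count data (the simulated response is not actually related to the regressor; names are illustrative):

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(0)
    X = sm.add_constant(np.random.randn(500))
    y = np.random.negative_binomial(2, 0.5, size=500)   # toy counts
    res = sm.NegativeBinomial(y, X).fit(disp=0)
    print(res.conf_int(alpha=0.05))   # 95% intervals for const, x1 and the dispersion alpha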