VARProcess.acf()

statsmodels.tsa.vector_ar.var_model.VARProcess.acf

VARProcess.acf(nlags=None) [source]

Compute the theoretical autocovariance function.

Returns:
    acf : ndarray (p x k x k)

CLogLog.inverse()

statsmodels.genmod.families.links.CLogLog.inverse

CLogLog.inverse(z) [source]

Inverse of the C-Log-Log transform link function.

Parameters:
    z : array-like
        The value at which to evaluate the inverse of the CLogLog link function.

Returns:
    p : array
        Mean parameters.

Notes:
    g^(-1)(z) = 1 - exp(-exp(z))
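A short sketch checking the inverse link against the closed form from the Notes; the input values are arbitrary.

```python
import numpy as np
from statsmodels.genmod.families.links import CLogLog

link = CLogLog()
z = np.array([-1.0, 0.0, 1.0])
p = link.inverse(z)

# By the Notes, the inverse link is 1 - exp(-exp(z))
manual = 1.0 - np.exp(-np.exp(z))
print(p)
```

The output p lies in (0, 1), as expected for mean parameters of a binary-response model.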

QuantRegResults.f_test()

statsmodels.regression.quantile_regression.QuantRegResults.f_test

QuantRegResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
    r_matrix : array-like, str, or tuple
        array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
        str : The full hypotheses to test can be given as a string.

sandbox.stats.multicomp.MultiComparison()

statsmodels.sandbox.stats.multicomp.MultiComparison

class statsmodels.sandbox.stats.multicomp.MultiComparison(data, groups, group_order=None) [source]

Tests for multiple comparisons.

Parameters:
    data : array
        Independent data samples.
    groups : array
        Group labels corresponding to each data point.
    group_order : list of strings, optional
        The desired order for the group mean results to be reported in. If not specified, results are reported in increasing order. If group_order does not contain all labels that are in groups, then only the observations whose label is in group_order are kept.
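A small sketch of the class with Tukey's HSD, one of the comparison methods it exposes; the three groups below are synthetic, with group "c" deliberately shifted so at least one comparison is significant.

```python
import numpy as np
from statsmodels.sandbox.stats.multicomp import MultiComparison

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 30),
                       rng.normal(0, 1, 30),
                       rng.normal(3, 1, 30)])   # "c" has a shifted mean
groups = np.repeat(["a", "b", "c"], 30)

mc = MultiComparison(data, groups)
result = mc.tukeyhsd()      # pairwise comparisons with Tukey's HSD
print(result.summary())
```

The summary table reports one row per group pair, with the mean difference, confidence interval, and reject decision.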

stats.weightstats.ttost_paired()

statsmodels.stats.weightstats.ttost_paired

statsmodels.stats.weightstats.ttost_paired(x1, x2, low, upp, transform=None, weights=None) [source]

Test of (non-)equivalence for two dependent, paired samples.

TOST: two one-sided t tests

null hypothesis: md < low or md > upp
alternative hypothesis: low < md < upp

where md is the mean (expected value) of the difference x1 - x2. If the pvalue is smaller than a threshold, say 0.05, then we reject the hypothesis that the difference between the two samples lies outside the interval given by low and upp.
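A sketch of the TOST on synthetic paired measurements that are nearly identical, so the test should conclude equivalence within the chosen margins (the interval [-0.5, 0.5] is an arbitrary choice for illustration).

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(3)
x1 = rng.normal(5.0, 1.0, 50)
x2 = x1 + rng.normal(0.0, 0.2, 50)   # paired remeasurement, tiny difference

# Overall TOST p-value, plus the results of the two one-sided tests
pvalue, res_low, res_upp = ttost_paired(x1, x2, low=-0.5, upp=0.5)
print(pvalue)
```

A small pvalue here is evidence that the mean difference lies inside [-0.5, 0.5], i.e. the samples are equivalent within that margin.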

LogitResults.predict()

statsmodels.discrete.discrete_model.LogitResults.predict

LogitResults.predict(exog=None, transform=True, *args, **kwargs)

Call self.model.predict with self.params as the first argument.

Parameters:
    exog : array-like, optional
        The values for which you want to predict.
    transform : bool, optional
        If the model was fit via a formula, do you want to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form. Otherwise, you'd need to log the data first.
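A sketch of the transform behaviour described above: the model is fit via a formula containing np.log(x1), and new data is passed in original (unlogged) form. The data-generating process below is made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({"x1": rng.uniform(1, 10, 200)})
df["y"] = (np.log(df["x1"]) + rng.normal(0, 1, 200) > 1).astype(int)

res = smf.logit("y ~ np.log(x1)", data=df).fit(disp=0)

# transform=True (the default): pass x1 untransformed, the formula logs it
probs = res.predict(pd.DataFrame({"x1": [2.0, 5.0, 9.0]}))
print(probs)
```

With transform=False you would instead have to supply the already-transformed design matrix, including the intercept column.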

ProbitResults.summary()

statsmodels.discrete.discrete_model.ProbitResults.summary

ProbitResults.summary(yname=None, xname=None, title=None, alpha=0.05, yname_list=None)

Summarize the regression results.

Parameters:
    yname : string, optional
        Default is y.
    xname : list of strings, optional
        Default is var_## for each of the p regressors.
    title : string, optional
        Title for the top table. If not None, then this replaces the default title.
    alpha : float
        Significance level for the confidence intervals.

Returns:
    smry : Summary instance
        Holds the summary tables and text, which can be printed or converted to various output formats.

IV2SLS.hessian()

statsmodels.sandbox.regression.gmm.IV2SLS.hessian

IV2SLS.hessian(params)

The Hessian matrix of the model.
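hessian(params) is the generic likelihood-model hook on IV2SLS and may not be usefully implemented for this sandbox class, so the sketch below only sets up and fits the 2SLS model and confirms the method is present without evaluating it. The instrument/regressor construction is synthetic.

```python
import numpy as np
from statsmodels.sandbox.regression.gmm import IV2SLS

rng = np.random.default_rng(6)
n = 200
z = rng.standard_normal((n, 2))                         # instruments
x = z @ np.array([1.0, 0.5]) + rng.standard_normal(n)   # instrumented regressor
y = 1.0 + 2.0 * x + rng.standard_normal(n)

exog = np.column_stack([np.ones(n), x])
instrument = np.column_stack([np.ones(n), z])

mod = IV2SLS(y, exog, instrument)
res = mod.fit()                 # two-stage least squares estimates
print(res.params)

# The hessian(params) method exists on the model instance
print(callable(mod.hessian))
```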

TLinearModel.from_formula()

statsmodels.miscmodels.tmodel.TLinearModel.from_formula

classmethod TLinearModel.from_formula(formula, data, subset=None, *args, **kwargs) [source]

Create a Model from a formula and dataframe.

Parameters:
    formula : str or generic Formula object
        The formula specifying the model.
    data : array-like
        The data for the model. See Notes.
    subset : array-like
        An array-like object of booleans, integers, or index values that indicate the subset of df to use in the model. Assumes df is a pandas.DataFrame.

CountResults.aic()

statsmodels.discrete.discrete_model.CountResults.aic

static CountResults.aic()