FTestAnovaPower.plot_power()

statsmodels.stats.power.FTestAnovaPower.plot_power

FTestAnovaPower.plot_power(dep_var='nobs', nobs=None, effect_size=None, alpha=0.05, ax=None, title=None, plt_kwds=None, **kwds)

Plot power with the number of observations or the effect size on the x-axis.

Parameters:
    dep_var : string in ['nobs', 'effect_size', 'alpha']
        Specifies which variable is used for the horizontal axis. If dep_var='nobs' (default), one curve is created for each value of effect_size. If dep_var='effect_size' or 'alpha', one curve is created for each value of nobs.
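A minimal sketch of the call above, with hypothetical sample sizes and effect sizes; k_groups is not part of plot_power's own signature and is assumed here to be forwarded to FTestAnovaPower.power via **kwds:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
from statsmodels.stats.power import FTestAnovaPower

# One power curve per effect size, plotted against the sample size.
analysis = FTestAnovaPower()
fig = analysis.plot_power(dep_var='nobs',
                          nobs=np.arange(10, 200, 10),
                          effect_size=np.array([0.2, 0.5, 0.8]),
                          alpha=0.05,
                          k_groups=3)  # assumed extra keyword for the ANOVA F-test
```

The returned matplotlib figure can be shown with fig.show() or saved with fig.savefig().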

sandbox.stats.multicomp.varcorrection_unequal()

statsmodels.sandbox.stats.multicomp.varcorrection_unequal

statsmodels.sandbox.stats.multicomp.varcorrection_unequal(var_all, nobs_all, df_all) [source]

Return the joint variance from samples with unequal variances and unequal sample sizes.

Parameters:
    var_all : array_like
        The variance for each sample.
    nobs_all : array_like
        The number of observations for each sample.
    df_all : array_like
        Degrees of freedom for each sample.

Returns:
    varjoint : float
        Joint variance.
    dfjoint : float
        Joint degrees of freedom (Satterthwaite approximation).

Notes:
    The upstream docstring carries the caveat "something is wrong"; treat this sandbox function with care.
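A small sketch of the call, using made-up variances and sample sizes for three samples:

```python
import numpy as np
from statsmodels.sandbox.stats.multicomp import varcorrection_unequal

# Hypothetical samples with unequal variances and unequal sizes.
var_all = np.array([1.0, 2.5, 0.5])
nobs_all = np.array([10, 15, 8])
df_all = nobs_all - 1

# Returns the joint variance and the joint (Satterthwaite-style) df.
varjoint, dfjoint = varcorrection_unequal(var_all, nobs_all, df_all)
```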

OLSResults.conf_int()

statsmodels.regression.linear_model.OLSResults.conf_int

OLSResults.conf_int(alpha=0.05, cols=None)

Returns the confidence interval of the fitted parameters.

Parameters:
    alpha : float, optional
        The alpha level for the confidence interval; i.e., the default alpha = .05 returns a 95% confidence interval.
    cols : array-like, optional
        Specifies which confidence intervals to return.

Notes:
    The confidence interval is based on Student's t-distribution.

static RegressionResults.HC0_se()

statsmodels.regression.linear_model.RegressionResults.HC0_se

static RegressionResults.HC0_se() [source]

White's (1980) heteroskedasticity-robust standard errors; see statsmodels.RegressionResults.

NegativeBinomialResults.t_test()

statsmodels.discrete.discrete_model.NegativeBinomialResults.t_test

NegativeBinomialResults.t_test(r_matrix, cov_p=None, scale=None, use_t=None)

Compute a t-test for each linear hypothesis of the form Rb = q.

Parameters:
    r_matrix : array-like, str, tuple
        array : If an array is given, a p x k 2d array or length-k 1d array specifying the linear restrictions. It is assumed that the linear combination is equal to zero.
        str : The full hypotheses to test can be given as a string. See the examples.

PHReg.robust_covariance()

statsmodels.duration.hazard_regression.PHReg.robust_covariance

PHReg.robust_covariance(params) [source]

Returns a covariance matrix for the proportional hazards model regression coefficient estimates that is robust to certain forms of model misspecification.

Parameters:
    params : ndarray
        The parameter vector at which the covariance matrix is calculated.

Returns:
    The robust covariance matrix as a square ndarray.

Notes:
    This function uses the groups argument to determine groups within which observations may be dependent.
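A sketch on simulated clustered survival data, assuming (per the note above) that the groups passed to fit are the ones robust_covariance uses; the data are entirely made up:

```python
import numpy as np
from statsmodels.duration.hazard_regression import PHReg

rng = np.random.default_rng(0)
n = 100
exog = rng.normal(size=(n, 2))
endog = rng.exponential(scale=np.exp(-exog[:, 0]))  # survival times depend on x0
status = rng.integers(0, 2, size=n)                 # 1 = event observed, 0 = censored
groups = np.repeat(np.arange(20), 5)                # 20 clusters of 5 observations

mod = PHReg(endog, exog, status=status)
rslt = mod.fit(groups=groups)                       # groups stored for robust inference

cov = mod.robust_covariance(rslt.params)            # square (k, k) ndarray
```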

stats.power.zt_ind_solve_power

statsmodels.stats.power.zt_ind_solve_power

statsmodels.stats.power.zt_ind_solve_power(...)

Solve for any one parameter of the power of a two-sample z-test.

For the z-test the keywords are: effect_size, nobs1, alpha, power, ratio. Exactly one needs to be None; all others need numeric values.

Parameters:
    effect_size : float
        Standardized effect size: the difference between the two means divided by the standard deviation. If ratio=0, then this is the standardized mean in the one-sample test.
    nobs1 : int
        Number of observations of sample 1; the number of observations of sample 2 is ratio times the size of sample 1, i.e. nobs2 = nobs1 * ratio.
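A sketch of the "exactly one keyword is None" convention: first solve for the per-group sample size, then plug it back and solve for power as a cross-check (the effect size and levels are made up):

```python
from statsmodels.stats.power import zt_ind_solve_power

# nobs1 is the one unknown (left unset, so it defaults to None).
n1 = zt_ind_solve_power(effect_size=0.5, alpha=0.05, power=0.8, ratio=1.0)

# Now power is the one unknown; it should recover approximately 0.8.
p = zt_ind_solve_power(effect_size=0.5, nobs1=n1, alpha=0.05, ratio=1.0)
```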

ARMAResults.summary()

statsmodels.tsa.arima_model.ARMAResults.summary

ARMAResults.summary(alpha=0.05) [source]

Summarize the model.

Parameters:
    alpha : float, optional
        Significance level for the confidence intervals.

Returns:
    smry : Summary instance
        Holds the summary table and text, which can be printed or converted to various output formats.

See also: statsmodels.iolib.summary.Summary

stats.multicomp.pairwise_tukeyhsd()

statsmodels.stats.multicomp.pairwise_tukeyhsd

statsmodels.stats.multicomp.pairwise_tukeyhsd(endog, groups, alpha=0.05) [source]

Calculate all pairwise comparisons with Tukey HSD confidence intervals. This is a thin wrapper around the tukeyhsd method of MultiComparison.

Parameters:
    endog : ndarray, float, 1d
        Response variable.
    groups : ndarray, 1d
        Array with groups; can be string or integers.
    alpha : float
        Significance level for the test.

Returns:
    results : TukeyHSDResults instance
        A results instance with the pairwise comparison tables and related post-hoc calculations.
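A sketch with three made-up groups, one of which has a clearly shifted mean:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
# Groups "a" and "b" share a mean near 0; group "c" is shifted by 2.
endog = np.concatenate([rng.normal(0.0, 1, 30),
                        rng.normal(0.2, 1, 30),
                        rng.normal(2.0, 1, 30)])
groups = np.repeat(["a", "b", "c"], 30)

res = pairwise_tukeyhsd(endog, groups, alpha=0.05)
print(res.summary())   # one row per pair: (a,b), (a,c), (b,c)
```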

Patsy: Contrast Coding Systems for categorical variables

Note: This document is based heavily on this excellent resource from UCLA.

A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means: each test statistic for these variables amounts to testing whether the mean for that level is statistically significantly different from the mean of the base category. This dummy coding is called Treatment coding in R parlance.
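The K-1 dummy-variable scheme can be sketched with patsy directly; the three-level factor below is made up, and Treatment coding is requested explicitly:

```python
import pandas as pd
from patsy import dmatrix

# Hypothetical factor with K = 3 levels.
data = pd.DataFrame({"level": ["a", "b", "c", "a", "b", "c"]})

# Treatment (dummy) coding: intercept plus K-1 = 2 indicator columns,
# with the first level ("a") as the base category.
dm = dmatrix("C(level, Treatment)", data)
print(dm.design_info.column_names)
```

Each dummy column's coefficient then measures the difference between that level's mean and the base category's mean, which is exactly the linear hypothesis described above.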