CountModel.fit_regularized()

statsmodels.discrete.discrete_model.CountModel.fit_regularized

CountModel.fit_regularized(start_params=None, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03, **kwargs)

Fit the model using regularized maximum likelihood. Both the regularization method and the solver are determined by the argument method.

Parameters:
    start_params : array-like, optional
        Initial guess of the solution for the loglikelihood maximization. …

CountModel.fit()

statsmodels.discrete.discrete_model.CountModel.fit

CountModel.fit(start_params=None, method='newton', maxiter=35, full_output=1, disp=1, callback=None, **kwargs)

Fit the model using maximum likelihood. The rest of the docstring is from statsmodels.base.model.LikelihoodModel.fit, the fit method for likelihood-based models.

Parameters:
    start_params : array-like, optional
        Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.
    method : str, optional
        …

CountModel.cov_params_func_l1()

statsmodels.discrete.discrete_model.CountModel.cov_params_func_l1

CountModel.cov_params_func_l1(likelihood_model, xopt, retvals)

Computes cov_params on the reduced parameter space corresponding to the nonzero parameters that result from the l1-regularized fit. Returns a full cov_params matrix, with entries corresponding to zeroed values set to np.nan.

CountModel.cdf()

statsmodels.discrete.discrete_model.CountModel.cdf

CountModel.cdf(X)

The cumulative distribution function of the model.

Contrasts Overview

In [1]: from __future__ import print_function
        import numpy as np
        import statsmodels.api as sm

This document is based heavily on this excellent resource from UCLA: http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm

A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these v…
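The K-1 dummy expansion described above can be sketched with pandas; the levels and data here are made up:

```python
# A categorical variable with K=3 levels expands to K-1 dummy columns;
# the dropped level ('a' here) serves as the reference category.
import pandas as pd

levels = pd.Series(['a', 'b', 'c', 'a', 'b'], dtype='category')
dummies = pd.get_dummies(levels, drop_first=True)
print(list(dummies.columns))  # K-1 = 2 columns: ['b', 'c']
```

Each dummy column then tests its level against the reference level, which is the linear hypothesis on level means that the text refers to.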

CompareMeans.ztost_ind()

statsmodels.stats.weightstats.CompareMeans.ztost_ind

CompareMeans.ztost_ind(low, upp, usevar='pooled')

Test of equivalence for two independent samples, based on the z-test.

Parameters:
    low, upp : float
        Equivalence interval low < m1 - m2 < upp.
    usevar : string, 'pooled' or 'unequal'
        If 'pooled', then the standard deviation of the samples is assumed to be the same. If 'unequal', then a Welch t-test with Satterthwaite degrees of freedom is used.

Returns:
    pvalue : float
        pvalue of the non-equivalence test. …
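A hedged sketch of an equivalence (TOST) test with this method; the samples and the ±0.5 equivalence margin are invented for illustration:

```python
# TOST equivalence test: both one-sided z-tests against the interval
# bounds must be significant to conclude equivalence. Data are synthetic.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, size=100)
x2 = rng.normal(0.1, 1.0, size=100)

cm = CompareMeans(DescrStatsW(x1), DescrStatsW(x2))
# Returns the TOST pvalue plus the two one-sided test results.
pvalue, test_low, test_upp = cm.ztost_ind(low=-0.5, upp=0.5, usevar='pooled')
print(pvalue)  # small p -> means are equivalent within (-0.5, 0.5)
```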

CompareMeans.ztest_ind()

statsmodels.stats.weightstats.CompareMeans.ztest_ind

CompareMeans.ztest_ind(alternative='two-sided', usevar='pooled', value=0)

z-test for the null hypothesis of identical means.

Parameters:
    x1, x2 : array_like, 1-D or 2-D
        Two independent samples; see notes for the 2-D case.
    alternative : string
        The alternative hypothesis, H1, has to be one of the following:
        'two-sided': H1: difference in means not equal to value (default)
        'larger': H1: difference in means larger than value
        'smaller': H1: difference in means smaller than value
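A minimal sketch of the two-sample z-test; x1 and x2 are the synthetic samples held by the CompareMeans instance (the x1/x2 parameters in the docstring):

```python
# Two-sample z-test for equal means on synthetic data.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW

rng = np.random.default_rng(2)
x1 = rng.normal(0.3, 1.0, size=150)
x2 = rng.normal(0.0, 1.0, size=150)

cm = CompareMeans(DescrStatsW(x1), DescrStatsW(x2))
zstat, pvalue = cm.ztest_ind(alternative='two-sided', usevar='pooled', value=0)
print(zstat, pvalue)
```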

CompareMeans.zconfint_diff()

statsmodels.stats.weightstats.CompareMeans.zconfint_diff

CompareMeans.zconfint_diff(alpha=0.05, alternative='two-sided', usevar='pooled')

Confidence interval for the difference in means.

Parameters:
    alpha : float
        Significance level for the confidence interval; coverage is 1-alpha.
    alternative : string
        This specifies the alternative hypothesis for the test that corresponds to the confidence interval. The alternative hypothesis, H1, has to be one of the following:
        'two-sided': H1: …
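A sketch of the confidence interval this method produces, using synthetic samples:

```python
# 95% z-based confidence interval for m1 - m2; data are synthetic.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW

rng = np.random.default_rng(3)
x1 = rng.normal(0.5, 1.0, size=120)
x2 = rng.normal(0.0, 1.0, size=120)

cm = CompareMeans(DescrStatsW(x1), DescrStatsW(x2))
low, upp = cm.zconfint_diff(alpha=0.05, alternative='two-sided', usevar='pooled')
print(low, upp)  # interval for the difference in means
```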

CompareMeans.ttost_ind()

statsmodels.stats.weightstats.CompareMeans.ttost_ind

CompareMeans.ttost_ind(low, upp, usevar='pooled')

Test of equivalence for two independent samples, based on the t-test.

Parameters:
    low, upp : float
        Equivalence interval low < m1 - m2 < upp.
    usevar : string, 'pooled' or 'unequal'
        If 'pooled', then the standard deviation of the samples is assumed to be the same. If 'unequal', then a Welch t-test with Satterthwaite degrees of freedom is used.

Returns:
    pvalue : float
        pvalue of the non-equivalence test. …

CompareMeans.ttest_ind()

statsmodels.stats.weightstats.CompareMeans.ttest_ind

CompareMeans.ttest_ind(alternative='two-sided', usevar='pooled', value=0)

t-test for the null hypothesis of identical means. This should also be the same as onewaygls, except for ddof differences.

Parameters:
    x1, x2 : array_like, 1-D or 2-D
        Two independent samples; see notes for the 2-D case.
    alternative : string
        The alternative hypothesis, H1, has to be one of the following:
        'two-sided': H1: difference in means not equal to value (default)
        …
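A minimal sketch of the t-test; with usevar='pooled' the degrees of freedom are nobs1 + nobs2 - 2, which the example checks:

```python
# Two-sample t-test on synthetic data; pooled variance gives
# df = nobs1 + nobs2 - 2.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW

rng = np.random.default_rng(5)
x1 = rng.normal(0.2, 1.0, size=60)
x2 = rng.normal(0.0, 1.0, size=60)

cm = CompareMeans(DescrStatsW(x1), DescrStatsW(x2))
tstat, pvalue, df = cm.ttest_ind(alternative='two-sided', usevar='pooled', value=0)
print(df)  # 60 + 60 - 2 = 118 for the pooled case
```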