CountResults.predict()

statsmodels.discrete.discrete_model.CountResults.predict

CountResults.predict(exog=None, transform=True, *args, **kwargs)

Call self.model.predict with self.params as the first argument.

Parameters:
  exog : array-like, optional
    The values for which you want to predict.
  transform : bool, optional
    If the model was fit via a formula, whether to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form; otherwise you would need to log the data first.

RLMResults.llf

statsmodels.robust.robust_linear_model.RLMResults.llf

RLMResults.llf (cached attribute)

NonlinearIVGMM.gradient_momcond()

statsmodels.sandbox.regression.gmm.NonlinearIVGMM.gradient_momcond

NonlinearIVGMM.gradient_momcond(params, epsilon=0.0001, centered=True)

Gradient of the moment conditions.

Parameters:
  params : ndarray
    Parameters at which the moment conditions are evaluated.
  epsilon : float
    Step size for the finite difference calculation.
  centered : bool
    Refers to the finite difference calculation. If centered is True, the centered finite difference is used; otherwise the one-sided forward difference is used.
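A sketch of how this might be called, assuming a simple instrumental-variables setup with a hand-written (here linear, purely for illustration) conditional mean function; the data, instruments, and parameter values are all invented:

```python
import numpy as np
from statsmodels.sandbox.regression.gmm import NonlinearIVGMM

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=(n, 3))                    # instruments
x = z[:, :2] + 0.1 * rng.normal(size=(n, 2))   # regressors correlated with z
beta = np.array([1.0, -0.5])
y = x @ beta + rng.normal(size=n)

def f(params, exog):
    # Conditional mean function; linear here only to keep the sketch short.
    return exog @ params

mod = NonlinearIVGMM(y, x, z, f)

# Jacobian of the averaged moment conditions at beta, by finite differences.
g = mod.gradient_momcond(beta, epsilon=1e-4, centered=True)
print(g.shape)
```

This module lives in the sandbox, so the API is less stable than the rest of the library.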

OLSInfluence.dffits

statsmodels.stats.outliers_influence.OLSInfluence.dffits [source] (cached attribute)

DFFITS measure for the influence of an observation, based on resid_studentized_external; uses results from the leave-one-observation-out loop. It is recommended that observations with DFFITS larger than a threshold of 2*sqrt(k/n), where k is the number of parameters and n the number of observations, be investigated.

Returns:
  dffits : float
  dffits_threshold : float

References: Wikipedia

MixedLMResults.predict()

statsmodels.regression.mixed_linear_model.MixedLMResults.predict

MixedLMResults.predict(exog=None, transform=True, *args, **kwargs)

Call self.model.predict with self.params as the first argument.

Parameters:
  exog : array-like, optional
    The values for which you want to predict.
  transform : bool, optional
    If the model was fit via a formula, whether to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form; otherwise you would need to log the data first.

GLMResults.null_deviance

statsmodels.genmod.generalized_linear_model.GLMResults.null_deviance [source] (cached attribute)

The deviance of the model fit with only an intercept (the null model).

NegativeBinomialResults.load()

statsmodels.discrete.discrete_model.NegativeBinomialResults.load

classmethod NegativeBinomialResults.load(fname)

Load a pickled results instance (class method).

Parameters:
  fname : string or file handle
    A file path or an open file handle.

Returns:
  unpickled instance

TukeyHSDResults.summary()

statsmodels.sandbox.stats.multicomp.TukeyHSDResults.summary

TukeyHSDResults.summary() [source]

Summary table that can be printed.
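A TukeyHSDResults object is typically obtained from `pairwise_tukeyhsd`; a sketch with three invented groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(0.0, 1.0, 30),
    rng.normal(1.0, 1.0, 30),
    rng.normal(1.5, 1.0, 30),
])
groups = np.repeat(["a", "b", "c"], 30)

res = pairwise_tukeyhsd(data, groups, alpha=0.05)
print(res.summary())   # printable table of all pairwise comparisons
```

Each row of the table gives the mean difference for one pair of groups, a confidence interval, and whether the null of equal means is rejected at the chosen family-wise error rate.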

NegativeBinomial.fit()

statsmodels.discrete.discrete_model.NegativeBinomial.fit

NegativeBinomial.fit(start_params=None, method='bfgs', maxiter=35, full_output=1, disp=1, callback=None, cov_type='nonrobust', cov_kwds=None, use_t=None, **kwargs) [source]

Fit the model using maximum likelihood.

Discrete Choice Models Overview

In [1]:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm

Data

Load data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).

In [2]:
spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog, prepend=False)

Inspect the data:

In [3]:
print(spector_data.exog[:5, :])
print(spector_data.endog[:5])