NegativeBinomial.predict()

statsmodels.discrete.discrete_model.NegativeBinomial.predict NegativeBinomial.predict(params, exog=None, exposure=None, offset=None, linear=False) Predict the response variable of a count model given exogenous variables. Notes: If exposure is specified, it is logged by the method; the user does not need to log it first.

static MultinomialResults.llnull()

statsmodels.discrete.discrete_model.MultinomialResults.llnull static MultinomialResults.llnull() Log-likelihood of the null model, i.e. the model fit with only an intercept.

tsa.vector_ar.var_model.VARResults()

statsmodels.tsa.vector_ar.var_model.VARResults class statsmodels.tsa.vector_ar.var_model.VARResults(endog, endog_lagged, params, sigma_u, lag_order, model=None, trend='c', names=None, dates=None) [source] Estimate VAR(p) process with a fixed number of lags Parameters: endog : array endog_lagged : array params : array sigma_u : array lag_order : int model : VAR model instance trend : str {'nc', 'c', 'ct'} names : array-like List of names of the endogenous variables in order of appearance in endog

sandbox.distributions.transformed.TransfTwo_gen()

statsmodels.sandbox.distributions.transformed.TransfTwo_gen class statsmodels.sandbox.distributions.transformed.TransfTwo_gen(kls, func, funcinvplus, funcinvminus, derivplus, derivminus, *args, **kwargs) [source] Distribution based on a non-monotonic (u- or hump-shaped) transformation. The constructor can be called with a distribution class and functions that define the non-linear transformation, and it generates the distribution of the transformed random variable. Note: the transformation, it's

QuantRegResults.load()

statsmodels.regression.quantile_regression.QuantRegResults.load classmethod QuantRegResults.load(fname) Load a pickle (class method). Parameters: fname : string or filehandle A file path or filename, or an open filehandle. Returns: unpickled instance

Contrasts Overview

Contrasts Overview In [1]: from __future__ import print_function import numpy as np import statsmodels.api as sm This document is based heavily on this excellent resource from UCLA: http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these v
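The K-minus-one point above can be seen directly with treatment (dummy) coding; here is a minimal sketch using `pandas.get_dummies` (the column name "level" and the category labels are illustrative assumptions):

```python
import pandas as pd

df = pd.DataFrame({"level": ["a", "b", "c", "a", "b", "c"]})
# K = 3 categories enter as K - 1 = 2 dummy columns under treatment
# coding; dropping the first level makes "a" the reference category
dummies = pd.get_dummies(df["level"], drop_first=True)
print(list(dummies.columns))  # ['b', 'c']
print(dummies.shape[1])       # 2
```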

static OLSResults.mse_resid()

statsmodels.regression.linear_model.OLSResults.mse_resid static OLSResults.mse_resid() Mean squared error of the residuals: the sum of squared residuals divided by the residual degrees of freedom.

RegressionResults.wald_test()

statsmodels.regression.linear_model.RegressionResults.wald_test RegressionResults.wald_test(r_matrix, cov_p=None, scale=1.0, invcov=None, use_f=None) Compute a Wald test for a joint linear hypothesis. Parameters: r_matrix : array-like, str, or tuple array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero. str : The full hypotheses to test can be given as a string. See the examples. tu

genmod.generalized_linear_model.GLMResults()

statsmodels.genmod.generalized_linear_model.GLMResults class statsmodels.genmod.generalized_linear_model.GLMResults(model, params, normalized_cov_params, scale, cov_type='nonrobust', cov_kwds=None, use_t=None) [source] Class to contain GLM results. GLMResults inherits from statsmodels.LikelihoodModelResults Parameters: See statsmodels.LikelihoodModelResults : Returns: **Attributes** : aic : float Akaike Information Criterion -2 * llf + 2*(df_model + 1) bic : float Bayes Information Criterion

robust.robust_linear_model.RLM()

statsmodels.robust.robust_linear_model.RLM class statsmodels.robust.robust_linear_model.RLM(endog, exog, M=HuberT(), missing='none', **kwargs) [source] Robust Linear Models Estimate a robust linear model via iteratively reweighted least squares given a robust criterion estimator (the default M is the HuberT norm). Parameters: endog : array-like 1-d endogenous response variable. The dependent variable. exog : array-like A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user.