Weighted Least Squares

In [1]:

from __future__ import print_function
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)

np.random.seed(1024)

WLS Estimation

Artificial data: heteroscedasticity, 2 groups.

Model assumptions: misspecification (the true model is quadratic, but only a linear model is estimated).
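A minimal WLS sketch in the same spirit (the data-generating step, the variance split, and the parameter values below are illustrative assumptions, not the notebook's exact setup):

import numpy as np
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std

np.random.seed(1024)
nsample = 50
x = np.linspace(0, 20, nsample)
X = sm.add_constant(x)
sig = np.where(x < 10, 1.0, 3.0)              # low- and high-variance groups
y = 5.0 + 0.5 * x + sig * np.random.normal(size=nsample)

# WLS takes weights proportional to 1 / variance of each observation.
wls_res = sm.WLS(y, X, weights=1.0 / sig**2).fit()
print(wls_res.params)

# Prediction standard errors and interval bounds for the weighted fit.
prstd, iv_l, iv_u = wls_prediction_std(wls_res)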

NegativeBinomial.fit_regularized()

statsmodels.discrete.discrete_model.NegativeBinomial.fit_regularized

NegativeBinomial.fit_regularized(start_params=None, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03, **kwargs) [source]
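A hedged usage sketch for fit_regularized (the simulated count data and the alpha value are illustrative assumptions):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
nobs = 500
X = sm.add_constant(np.random.normal(size=(nobs, 3)))
beta = np.array([0.5, 0.3, 0.0, -0.2])
y = np.random.poisson(np.exp(np.dot(X, beta)))   # simulated counts

# L1-penalized fit; alpha sets the penalty weight (scalar or per-parameter array).
mod = sm.NegativeBinomial(y, X)
res = mod.fit_regularized(method='l1', alpha=0.1, disp=0)
print(res.params)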

StepDown.run()

statsmodels.sandbox.stats.multicomp.StepDown.run

StepDown.run(alpha) [source]

Main function to run the test. This could be done in __call__ instead, which could also hold the initialization code.

StepDown.check_set()

statsmodels.sandbox.stats.multicomp.StepDown.check_set

StepDown.check_set(indices) [source]

Check whether the pairwise distances of the given indices satisfy the condition.

tsa.arima_model.ARIMA()

statsmodels.tsa.arima_model.ARIMA

class statsmodels.tsa.arima_model.ARIMA(endog, order, exog=None, dates=None, freq=None, missing='none') [source]

Autoregressive Integrated Moving Average ARIMA(p, d, q) model.

Parameters:
endog : array-like
    The endogenous variable.
order : iterable
    The (p, d, q) order of the model for the number of AR parameters, differences, and MA parameters to use.
exog : array-like, optional
    An optional array of exogenous variables. This should not include a constant or trend.
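A minimal usage sketch for this ARIMA class (the simulated series and the (1, 1, 1) order are illustrative assumptions):

import numpy as np
from statsmodels.tsa.arima_model import ARIMA

np.random.seed(12345)
y = np.cumsum(0.1 + np.random.normal(size=200))   # random walk with drift

# p=1 AR term, d=1 difference, q=1 MA term.
model = ARIMA(y, order=(1, 1, 1))
res = model.fit(disp=0)
print(res.summary())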

static GMMResults.llf()

statsmodels.sandbox.regression.gmm.GMMResults.llf

static GMMResults.llf()

ARMAResults.remove_data()

statsmodels.tsa.arima_model.ARMAResults.remove_data

ARMAResults.remove_data()

Remove data arrays, i.e. all nobs-length arrays, from the result and model instances.

This reduces the size of the instance so that it can be pickled with less memory. Currently tested for use with predict from an unpickled results and model instance.

Warning: Since data and some intermediate results have been removed, calculating new statistics that require them will raise exceptions. The exception will occur the first time such an attribute is accessed.
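A hedged sketch of trimming and pickling a fitted ARMA result (the simulated series and the file name are illustrative assumptions):

import pickle
import numpy as np
from statsmodels.tsa.arima_model import ARMA
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(0)
y = arma_generate_sample(ar=[1, -0.6], ma=[1], nsample=300)   # AR(1)-like series

res = ARMA(y, order=(1, 0)).fit(disp=0)

# Drop the nobs-length arrays so the pickled result stays small.
res.remove_data()
with open('arma_results.pkl', 'wb') as f:
    pickle.dump(res, f)

# Statistics that need the removed data will now raise on first access.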

regression.linear_model.GLS()

statsmodels.regression.linear_model.GLS

class statsmodels.regression.linear_model.GLS(endog, exog, sigma=None, missing='none', hasconst=None, **kwargs) [source]

Generalized least squares model with a general covariance structure.

Parameters:
endog : array-like
    1-d endogenous response variable. The dependent variable.
exog : array-like
    A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user.
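A minimal GLS sketch with an assumed AR(1)-style error covariance (the data, rho, and the construction of sigma are illustrative assumptions):

import numpy as np
import statsmodels.api as sm

np.random.seed(1)
nobs = 100
x = np.linspace(0, 10, nobs)
X = sm.add_constant(x)

# Assumed AR(1) error covariance: sigma[i, j] = rho ** |i - j|.
rho = 0.5
sigma = rho ** np.abs(np.subtract.outer(np.arange(nobs), np.arange(nobs)))
y = 1.0 + 2.0 * x + np.random.multivariate_normal(np.zeros(nobs), sigma)

gls_res = sm.GLS(y, X, sigma=sigma).fit()
print(gls_res.params)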

DiscreteResults.cov_params()

statsmodels.discrete.discrete_model.DiscreteResults.cov_params

DiscreteResults.cov_params(r_matrix=None, column=None, scale=None, cov_p=None, other=None)

Returns the variance/covariance matrix.

The variance/covariance matrix can be of a linear contrast of the estimates of params or of all params multiplied by scale, which will usually be an estimate of sigma^2. Scale is assumed to be a scalar.

Parameters:
r_matrix : array-like
    Can be 1d or 2d. Can be used alone or with other.
column : array-like, optional
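A hedged sketch of reading the covariance matrix from a fitted discrete-choice result (the Logit model and simulated data are illustrative assumptions):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
nobs = 200
X = sm.add_constant(np.random.normal(size=(nobs, 2)))
p = 1.0 / (1.0 + np.exp(-np.dot(X, [0.25, 1.0, -0.5])))
y = np.random.binomial(1, p)

res = sm.Logit(y, X).fit(disp=0)

# Full variance/covariance matrix of the parameter estimates.
print(res.cov_params())

# Variance of a linear contrast of the parameters (difference of two slopes).
r = np.array([0.0, 1.0, -1.0])
print(res.cov_params(r_matrix=r))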

static QuantRegResults.mse()

statsmodels.regression.quantile_regression.QuantRegResults.mse

static QuantRegResults.mse() [source]
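A minimal sketch of fitting a median regression and reading its mean squared error; the data are an illustrative assumption, and mse is accessed here as a results attribute corresponding to the entry above:

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
nobs = 200
x = np.random.uniform(0, 10, nobs)
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + np.random.standard_t(3, size=nobs)

res = sm.QuantReg(y, X).fit(q=0.5)   # median regression
print(res.params)
print(res.mse)                       # mean squared error of the fit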