graphics.regressionplots.plot_partregress()

statsmodels.graphics.regressionplots.plot_partregress

plot_partregress(endog, exog_i, exog_others, data=None, title_kwargs={}, obs_labels=True, label_kwargs={}, ax=None, ret_coords=False, **kwargs) [source]

Plot partial regression for a single regressor.

Parameters:
endog : ndarray or string
    endogenous or response variable. If a string is given, you can use arbitrary transformations, as with a formula.
exog_i : ndarray or string
    exogenous, explanatory variable …
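A minimal sketch of the string-based calling convention, using synthetic data; the column names y, x1, x2 and the output filename are illustrative, and matplotlib's Agg backend is assumed for headless use:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, assumed for headless environments

import numpy as np
import pandas as pd
from statsmodels.graphics.regressionplots import plot_partregress

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 1.0 + 2.0 * df["x1"] - df["x2"] + rng.normal(size=100)

# Partial regression of y on x1, controlling for x2;
# the strings are resolved against the `data` DataFrame
fig = plot_partregress("y", "x1", ["x2"], data=df, obs_labels=False)
fig.savefig("partregress.png")
```

Passing ndarrays instead of strings works the same way, but strings plus `data` keep the axis labels readable.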

OLSResults.f_test()

statsmodels.regression.linear_model.OLSResults.f_test

OLSResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
r_matrix : array-like, str, or tuple
    array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. …

IVGMMResults.wald_test()

statsmodels.sandbox.regression.gmm.IVGMMResults.wald_test

IVGMMResults.wald_test(r_matrix, cov_p=None, scale=1.0, invcov=None, use_f=None)

Compute a Wald-test for a joint linear hypothesis.

Parameters:
r_matrix : array-like, str, or tuple
    array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. See the examples.
    tuple : A tuple …
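The wald_test interface is shared across statsmodels results classes, so the string form of r_matrix can be sketched on an OLS fit rather than a full GMM setup (fitting IVGMM would also require instruments); the data and variable names below are synthetic and illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 0.5 + 2.0 * df["x1"] + rng.normal(size=200)
res = smf.ols("y ~ x1 + x2", data=df).fit()

# Joint hypothesis given as a comma-separated string;
# use_f=True reports the statistic on the F scale
wt = res.wald_test("x1 = 0, x2 = 0", use_f=True)
print(float(wt.fvalue), float(wt.pvalue))
```

Named coefficients (here via the formula interface) are what make the string form usable.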

PoissonGMLE.hessian()

statsmodels.miscmodels.count.PoissonGMLE.hessian

PoissonGMLE.hessian(params)

Hessian of the log-likelihood, evaluated at params.
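A minimal sketch with synthetic count data; the parameter vector [0.2, 0.5] at which the Hessian is evaluated is arbitrary:

```python
import numpy as np
from statsmodels.miscmodels.count import PoissonGMLE

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = rng.poisson(np.exp(X @ np.array([0.2, 0.5])))

mod = PoissonGMLE(y, X)
# Numerically approximated k x k Hessian of the log-likelihood
H = mod.hessian(np.array([0.2, 0.5]))
print(H.shape)
```

PoissonGMLE inherits this method from GenericLikelihoodModel, which differentiates the log-likelihood numerically rather than analytically.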

Linear Regression

Linear Regression

Linear models with independently and identically distributed errors, and for errors with heteroscedasticity or autocorrelation. This module allows estimation by ordinary least squares (OLS), weighted least squares (WLS), generalized least squares (GLS), and feasible generalized least squares with autocorrelated AR(p) errors. See Module Reference for commands and arguments.

Examples

# Load modules and data
import numpy as np
import statsmodels.api as sm
spector_data = sm.datas…

static IRAnalysis.H()

statsmodels.tsa.vector_ar.irf.IRAnalysis.H

static IRAnalysis.H() [source]

BinaryResults.t_test()

statsmodels.discrete.discrete_model.BinaryResults.t_test

BinaryResults.t_test(r_matrix, cov_p=None, scale=None, use_t=None)

Compute a t-test for each linear hypothesis of the form Rb = q.

Parameters:
r_matrix : array-like, str, tuple
    array : If an array is given, a p x k 2d array or length k 1d array specifying the linear restrictions. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. See the examples.
    tuple : A tuple …

NegativeBinomialResults.wald_test()

statsmodels.discrete.discrete_model.NegativeBinomialResults.wald_test

NegativeBinomialResults.wald_test(r_matrix, cov_p=None, scale=1.0, invcov=None, use_f=None)

Compute a Wald-test for a joint linear hypothesis.

Parameters:
r_matrix : array-like, str, or tuple
    array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. See the examples. …

LinearIVGMM.get_error()

statsmodels.sandbox.regression.gmm.LinearIVGMM.get_error

LinearIVGMM.get_error(params)

NonlinearIVGMM.score()

statsmodels.sandbox.regression.gmm.NonlinearIVGMM.score

NonlinearIVGMM.score(params, weights, **kwds) [source]