RegressionResults.f_test()

statsmodels.regression.linear_model.RegressionResults.f_test

RegressionResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
r_matrix : array-like, str, or tuple
  array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
  str : The full hypotheses to test can be given as a string. See the examples.

Regression diagnostics

This example shows how to use a few of the statsmodels regression diagnostic tests in a real-life context. You can learn about more tests and find more information about them on the Regression Diagnostics page. Note that most of the tests described here only return a tuple of numbers, without any annotation. A full description of the outputs is always included in the docstring and in the online statsmodels documentation.

QuantReg.fit_regularized()

statsmodels.regression.quantile_regression.QuantReg.fit_regularized

QuantReg.fit_regularized(method='coord_descent', maxiter=1000, alpha=0.0, L1_wt=1.0, start_params=None, cnvrg_tol=1e-08, zero_tol=1e-08, **kwargs)

Return a regularized fit to a linear regression model.

Parameters:
method : string
  Only the coordinate descent algorithm is implemented.
maxiter : integer
  The maximum number of iteration cycles (an iteration cycle involves running coordinate descent on all variables).
alpha :

graphics.regressionplots.plot_partregress()

statsmodels.graphics.regressionplots.plot_partregress

statsmodels.graphics.regressionplots.plot_partregress(endog, exog_i, exog_others, data=None, title_kwargs={}, obs_labels=True, label_kwargs={}, ax=None, ret_coords=False, **kwargs) [source]

Plot partial regression for a single regressor.

Parameters:
endog : ndarray or string
  Endogenous or response variable. If a string is given, you can use arbitrary transformations, as with a formula.
exog_i : ndarray or string
  Exogenous, explanatory variable.
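What the plot displays can be sketched in plain NumPy: residualize both the response and the regressor of interest on the remaining regressors, then regress one set of residuals on the other (the Frisch-Waugh-Lovell result). The helper below is hypothetical, written only to illustrate the computation:

```python
import numpy as np

def partial_residuals(y, x_i, X_others):
    """Residualize y and x_i on the remaining regressors (plus an intercept)."""
    Z = np.column_stack([np.ones(len(y)), X_others])
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rx = x_i - Z @ np.linalg.lstsq(Z, x_i, rcond=None)[0]
    return rx, ry

rng = np.random.RandomState(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

rx, ry = partial_residuals(y, x1, x2[:, None])
# Slope of ry on rx equals the multiple-regression coefficient on x1
slope = (rx @ ry) / (rx @ rx)
```

Plotting ry against rx is exactly the scatter that plot_partregress produces for x1.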

nonparametric.kernel_regression.KernelReg()

statsmodels.nonparametric.kernel_regression.KernelReg

class statsmodels.nonparametric.kernel_regression.KernelReg(endog, exog, var_type, reg_type='ll', bw='cv_ls', defaults=) [source]

Nonparametric kernel regression class. Calculates the conditional mean E[y|X] where y = g(X) + e. Note that the 'local constant' type of regression provided here is also known as Nadaraya-Watson kernel regression; 'local linear' is an extension of that which suffers less from bias issues at the edge of the support.
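The local-constant estimator mentioned above is simple enough to sketch directly; this is a NumPy illustration of the Nadaraya-Watson idea with a Gaussian kernel and a hand-picked bandwidth, not the class's own implementation:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bw):
    """Local-constant kernel regression: a kernel-weighted average of y at each grid point."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bw) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.RandomState(0)
x = rng.uniform(-2, 2, size=400)
y = np.sin(x) + rng.normal(scale=0.2, size=400)

grid = np.linspace(-1.5, 1.5, 7)
m_hat = nadaraya_watson(grid, x, y, bw=0.3)
```

KernelReg automates the bandwidth choice (e.g. bw='cv_ls') that is hard-coded here.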

Dates in timeseries models

In [1]:
from __future__ import print_function
import statsmodels.api as sm
import numpy as np
import pandas as pd

Getting started

In [2]:
data = sm.datasets.sunspots.load()

Right now an annual date series must be datetimes at the end of the year.

In [3]:
from datetime import datetime
dates = sm.tsa.datetools.dates_from_range('1700', length=len(data.endog))

Using Pandas

Make a pandas TimeSeries
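The year-end stamps that dates_from_range produces can also be built directly with pandas; a sketch, assuming the 309-year span 1700-2008 of the sunspots data:

```python
from datetime import datetime

import numpy as np
import pandas as pd

# Annual observations stamped at year end (Dec 31), matching
# sm.tsa.datetools.dates_from_range('1700', length=309)
dates = [datetime(year, 12, 31) for year in range(1700, 2009)]
endog = pd.Series(np.arange(len(dates), dtype=float), index=pd.DatetimeIndex(dates))
```

A Series indexed this way can be passed to the tsa models in place of separate endog/dates arguments.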

CountResults.t_test()

statsmodels.discrete.discrete_model.CountResults.t_test

CountResults.t_test(r_matrix, cov_p=None, scale=None, use_t=None)

Compute a t-test for each linear hypothesis of the form Rb = q.

Parameters:
r_matrix : array-like, str, tuple
  array : If an array is given, a p x k 2d array or length k 1d array specifying the linear restrictions. It is assumed that the linear combination is equal to zero.
  str : The full hypotheses to test can be given as a string. See the examples.
  tuple : A tuple of arrays in the form (R, q).

Nonparametric Methods nonparametric

This section collects various methods in nonparametric statistics. This includes kernel density estimation for univariate and multivariate data, kernel regression, and locally weighted scatterplot smoothing (lowess). sandbox.nonparametric contains additional functions that are work in progress or don't have unit tests yet. We are planning to include here nonparametric density estimators, especially based on kernel or orthogonal polynomials, smoothers, and tools.

GMM.fit()

statsmodels.sandbox.regression.gmm.GMM.fit

GMM.fit(start_params=None, maxiter=10, inv_weights=None, weights_method='cov', wargs=(), has_optimal_weights=True, optim_method='bfgs', optim_args=None) [source]

Estimate parameters using GMM and return GMMResults.

TODO: weight and covariance arguments still need to be made consistent with similar options in other models; see RegressionResult.get_robustcov_results.

Parameters:
start_params : array (optional)
  Starting value for parameters in the minimization.
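Since the sandbox API is still in flux, here is a NumPy-only sketch of the objective GMM minimizes: the quadratic form gbar' W gbar in the averaged moment conditions. The two moments (for a mean and a variance) and the function name are invented for illustration:

```python
import numpy as np

def gmm_objective(theta, data, W):
    """Quadratic form in the sample moments E[x - mu] = 0 and E[(x - mu)^2 - sigma2] = 0."""
    mu, sigma2 = theta
    g = np.column_stack([data - mu, (data - mu) ** 2 - sigma2])
    gbar = g.mean(axis=0)
    return gbar @ W @ gbar

rng = np.random.RandomState(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

# With exactly identified moments, the sample analogues drive gbar to zero,
# so they minimize the objective for any positive definite weight matrix W.
theta_hat = np.array([data.mean(), data.var()])
```

GMM.fit does the same minimization numerically (optim_method='bfgs' by default) and iterates on the weight matrix W.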

static QuantRegResults.resid_pearson()

statsmodels.regression.quantile_regression.QuantRegResults.resid_pearson

static QuantRegResults.resid_pearson()

Residuals, normalized to have unit variance.

Returns: An array, wresid/sqrt(scale).