DiscreteModel.loglike()

statsmodels.discrete.discrete_model.DiscreteModel.loglike DiscreteModel.loglike(params) Log-likelihood of the model.
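A minimal usage sketch (the data and the choice of Logit, a concrete DiscreteModel subclass, are illustrative, not from the docs): loglike evaluates the log-likelihood at an arbitrary parameter vector, so at the MLE it should agree with the fitted results' llf.

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(0)
    X = sm.add_constant(np.random.normal(size=(100, 2)))
    y = (np.dot(X, [0.5, 1.0, -1.0]) + np.random.normal(size=100) > 0).astype(float)

    model = sm.Logit(y, X)   # Logit is a concrete DiscreteModel subclass
    res = model.fit(disp=0)
    print(model.loglike(res.params))  # matches res.llf at the MLE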

GMM.start_weights()

statsmodels.sandbox.regression.gmm.GMM.start_weights GMM.start_weights(inv=True)

BinaryModel.fit()

statsmodels.discrete.discrete_model.BinaryModel.fit BinaryModel.fit(start_params=None, method='newton', maxiter=35, full_output=1, disp=1, callback=None, **kwargs)

Fit the model using maximum likelihood. The rest of the docstring is from statsmodels.base.model.LikelihoodModel.fit, the fit method for likelihood-based models.

Parameters:
start_params : array-like, optional
    Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.
method : str, optional
    …
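For illustration, a short sketch of calling fit with the parameters above on a binary model (the data are made up):

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(1)
    X = sm.add_constant(np.random.normal(size=(200, 2)))
    y = (np.dot(X, [0.25, 0.5, -0.5]) + np.random.normal(size=200) > 0).astype(float)

    # explicit start_params and the default Newton optimizer
    res = sm.Logit(y, X).fit(start_params=np.zeros(X.shape[1]),
                             method='newton', maxiter=35, disp=0)
    print(res.params)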

Sem2SLS.whiten()

statsmodels.sandbox.sysreg.Sem2SLS.whiten Sem2SLS.whiten(Y) Runs the first stage of the 2SLS. Returns the RHS variables that include the instruments.
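A minimal sketch of what the first stage amounts to, assuming y_endog is one endogenous right-hand-side variable and Z the matrix of instruments (both names and the data are hypothetical):

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(2)
    Z = sm.add_constant(np.random.normal(size=(100, 2)))   # instruments
    y_endog = np.dot(Z, [1.0, 0.5, -0.5]) + np.random.normal(size=100)

    # regress the endogenous regressor on the instruments; its fitted
    # values stand in for y_endog in the second-stage regression
    first_stage = sm.OLS(y_endog, Z).fit()
    y_hat = first_stage.fittedvalues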

static IVRegressionResults.tvalues()

statsmodels.sandbox.regression.gmm.IVRegressionResults.tvalues static IVRegressionResults.tvalues() Return the t-statistic for a given parameter estimate.
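The t-statistic is the ratio of each estimate to its standard error; a quick check against the attribute (data here is illustrative):

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(3)
    X = sm.add_constant(np.random.normal(size=(100, 2)))
    y = np.dot(X, [1.0, 2.0, -1.0]) + np.random.normal(size=100)

    res = sm.OLS(y, X).fit()
    print(np.allclose(res.params / res.bse, res.tvalues))  # True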

static OLSResults.cov_HC0()

statsmodels.regression.linear_model.OLSResults.cov_HC0 static OLSResults.cov_HC0() See statsmodels.RegressionResults
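cov_HC0 is the White heteroscedasticity-robust covariance, (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}; a sketch reproducing it by hand on illustrative data:

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(4)
    X = sm.add_constant(np.random.normal(size=(100, 2)))
    y = np.dot(X, [1.0, 0.5, -0.5]) + np.random.normal(size=100)

    res = sm.OLS(y, X).fit()
    e2 = res.resid ** 2
    XtXi = np.linalg.inv(np.dot(X.T, X))
    cov_hc0 = XtXi.dot((X.T * e2).dot(X)).dot(XtXi)  # sandwich estimator
    print(np.allclose(cov_hc0, res.cov_HC0))  # True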

static IVRegressionResults.mse_model()

statsmodels.sandbox.regression.gmm.IVRegressionResults.mse_model static IVRegressionResults.mse_model()
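In the statsmodels regression results, mse_model is the explained sum of squares divided by the model degrees of freedom; a quick check on illustrative data:

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(5)
    X = sm.add_constant(np.random.normal(size=(100, 2)))
    y = np.dot(X, [1.0, 0.5, -0.5]) + np.random.normal(size=100)

    res = sm.OLS(y, X).fit()
    print(np.allclose(res.ess / res.df_model, res.mse_model))  # True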

Prediction (out of sample)

In [1]:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm

Artificial data

In [2]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1-5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)

Estimation

In [3]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
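The excerpt is truncated here; the notebook presumably continues with in-sample and out-of-sample prediction along these lines (the new-data grid below is an assumption):

In [4]:
ypred = olsres.predict(X)  # in-sample prediction

In [5]:
# out of sample: build the new explanatory variables the same way
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5)**2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew)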

IVGMMResults.jtest()

statsmodels.sandbox.regression.gmm.IVGMMResults.jtest IVGMMResults.jtest() Overidentification test. (Note in the docstring: I guess this is missing a division by nobs; what's the normalization in jval?)
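For reference on the normalization question, Hansen's J statistic is conventionally J = nobs * gbar' W gbar, where gbar is the vector of average moment conditions at the estimate; a numerical sketch with stand-in moments:

    import numpy as np

    np.random.seed(6)
    g = np.random.normal(size=(200, 3))  # stand-in (nobs, k) moment conditions
    W = np.eye(3)                        # weighting matrix
    gbar = g.mean(axis=0)
    J = len(g) * np.dot(gbar, np.dot(W, gbar))  # nobs factor included here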

NegativeBinomial.jac()

statsmodels.discrete.discrete_model.NegativeBinomial.jac NegativeBinomial.jac(*args, **kwds) jac is deprecated; use the score_obs method instead. jac will be removed in 0.7.
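A sketch of the replacement call on illustrative count data; score_obs returns per-observation score (gradient) contributions, which sum to roughly zero at the MLE:

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(7)
    X = sm.add_constant(np.random.normal(size=(200, 2)))
    y = np.random.poisson(np.exp(np.dot(X, [0.5, 0.2, -0.1])))

    res = sm.NegativeBinomial(y, X).fit(disp=0)
    s = res.model.score_obs(res.params)  # shape (nobs, k + 1), incl. alpha
    print(s.sum(axis=0))                 # approximately zero at the MLE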