stats.weightstats._zstat_generic2()

statsmodels.stats.weightstats._zstat_generic2(value, std_diff, alternative) [source]

Generic (normal) z-test, provided to save typing; can be used as a z-test based on summary statistics.
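A minimal sketch of the computation this helper performs, assuming the standard statsmodels alternatives ('two-sided', 'larger', 'smaller'); the function name zstat_from_summary is hypothetical:

from scipy import stats

def zstat_from_summary(value, std_diff, alternative="two-sided"):
    # z statistic for testing `value` (e.g. a mean difference) against zero,
    # given its standard error `std_diff`
    zstat = value / std_diff
    if alternative == "two-sided":
        pvalue = 2 * stats.norm.sf(abs(zstat))
    elif alternative == "larger":
        pvalue = stats.norm.sf(zstat)
    elif alternative == "smaller":
        pvalue = stats.norm.cdf(zstat)
    else:
        raise ValueError("invalid alternative")
    return zstat, pvalue

# Example: mean difference 0.5 with standard error 0.2
print(zstat_from_summary(0.5, 0.2))  # z = 2.5, two-sided p ~ 0.0124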

SkewNorm_gen.expect()

statsmodels.sandbox.distributions.extras.SkewNorm_gen.expect(func=None, args=(), loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds)

Calculate the expected value of a function with respect to the distribution. The expected value of a function f(x) with respect to a distribution dist is defined as

    E[f(x)] = Integral(f(x) * dist.pdf(x)) from lb to ub

Parameters:
func : callable, optional
    Function for which the integral is calculated. Takes only one argument.
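A short usage sketch: SkewNorm_gen has one shape parameter alpha, so (assuming the no-argument constructor the sandbox module itself uses) the mean and variance of a skew normal can be computed via expect:

import numpy as np
from statsmodels.sandbox.distributions.extras import SkewNorm_gen

skewnorm = SkewNorm_gen()
alpha = 2.0  # skewness shape parameter

mean = skewnorm.expect(lambda x: x, args=(alpha,))
second = skewnorm.expect(lambda x: x**2, args=(alpha,))
print(mean, second - mean**2)  # mean and variance of the skew normal

# Check against the closed form: mean = sqrt(2/pi) * alpha / sqrt(1 + alpha**2)
print(np.sqrt(2 / np.pi) * alpha / np.sqrt(1 + alpha**2))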

GLMResults.resid_response

statsmodels.genmod.generalized_linear_model.GLMResults.resid_response [source]

Response residuals, defined as the difference between endog and the fitted values.
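For illustration, a quick way to inspect these residuals on a fitted GLM, using the Scotland dataset that ships with statsmodels:

import statsmodels.api as sm

data = sm.datasets.scotland.load_pandas()
X = sm.add_constant(data.exog)
res = sm.GLM(data.endog, X, family=sm.families.Gamma()).fit()

# Response residuals are on the scale of the observations
print(res.resid_response[:5])  # equals data.endog - res.fittedvalues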

PoissonZiGMLE.score_obs()

statsmodels.miscmodels.count.PoissonZiGMLE.score_obs(params, **kwds)

Jacobian (gradient) of the log-likelihood evaluated at params, for each observation.
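Since PoissonZiGMLE is a GenericLikelihoodModel subclass, score_obs is obtained by numerically differentiating loglikeobs. A hedged sketch on simulated zero-inflated data (the start_params values are illustrative guesses):

import numpy as np
from statsmodels.miscmodels.count import PoissonZiGMLE

np.random.seed(0)
nobs = 500
X = np.column_stack((np.ones(nobs), np.random.randn(nobs)))
y = np.random.poisson(np.exp(X @ np.array([0.5, 0.3])))
y[np.random.rand(nobs) < 0.15] = 0  # inject extra zeros

mod = PoissonZiGMLE(y, X)
res = mod.fit(start_params=np.array([0.5, 0.3, -1.0]), disp=0)

s = mod.score_obs(res.params)
print(s.shape)   # (nobs, 3): two regression parameters plus the zero-inflation parameter
print(s.sum(0))  # column sums should be approximately zero at the optimum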

IV2SLS.loglike()

statsmodels.sandbox.regression.gmm.IV2SLS.loglike(params)

Log-likelihood of the model.
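IV2SLS itself is estimated by two-stage least squares rather than by maximizing this likelihood, but as a rough sketch, a Gaussian log-likelihood evaluated at a residual vector (one plausible reading of what a normal loglike computes, not a confirmed implementation) looks like:

import numpy as np

def gaussian_loglike(resid):
    # Concentrated normal log-likelihood with sigma^2 = SSR / nobs
    nobs = resid.shape[0]
    sigma2 = resid @ resid / nobs
    return -nobs / 2.0 * (np.log(2 * np.pi * sigma2) + 1)

print(gaussian_loglike(np.random.randn(100)))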

Poisson.score_obs()

statsmodels.discrete.discrete_model.Poisson.score_obs(params) [source]

Poisson model Jacobian of the log-likelihood for each observation.

Parameters:
params : array-like
    The parameters of the model.

Returns:
score : ndarray, (nobs, k_vars)
    The score vector of the model evaluated at params.

Notes:
The score for observation i is

    d ln L_i / d beta = (y_i - lambda_i) * x_i

for observations i = 1, ..., n, where the loglinear model ln lambda_i = x_i * beta is assumed.
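The analytic form above is easy to verify directly against score_obs on simulated data:

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(100, 2))
y = np.random.poisson(np.exp(X @ np.array([0.2, 0.5, -0.3])))

mod = sm.Poisson(y, X)
res = mod.fit(disp=0)

s = mod.score_obs(res.params)
lam = np.exp(X @ res.params)
assert np.allclose(s, (y - lam)[:, None] * X)  # (y_i - lambda_i) * x_i
print(s.sum(0))  # approximately zero at the MLE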

Ordinary Least Squares

In [1]:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std

np.random.seed(9876789)

OLS estimation

Artificial data:

In [2]:
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)

Our model needs an intercept, so we add a column of 1s:
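The excerpt is cut off at this point; following the code above, the notebook presumably continues by adding the intercept column, generating the response, and fitting the model, along these lines:

X = sm.add_constant(X)
y = np.dot(X, beta) + e

model = sm.OLS(y, X)
results = model.fit()
print(results.summary())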

stats.diagnostic.het_arch()

statsmodels.stats.diagnostic.het_arch(resid, maxlag=None, autolag=None, store=False, regresults=False, ddof=0)

Engle's Test for Autoregressive Conditional Heteroscedasticity (ARCH).

Parameters:
resid : ndarray, (nobs,)
    Residuals from an estimation, or a time series.
maxlag : int
    Highest lag to use.
autolag : None or string
    If None, then a fixed number of lags given by maxlag is used.
store : bool
    If true, the intermediate results are also returned.
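A small usage example: applied to homoskedastic noise, the test should not reject (large p-value):

import numpy as np
from statsmodels.stats.diagnostic import het_arch

np.random.seed(0)
resid = np.random.randn(500)  # no ARCH effects by construction

lm, lm_pvalue, fval, f_pvalue = het_arch(resid, maxlag=5)
print(lm, lm_pvalue)  # expect a p-value well above conventional levels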

stats.moment_helpers.mnc2mc()

statsmodels.stats.moment_helpers.mnc2mc(mnc, wmean=True) [source]

Convert non-central to central moments using a recursive formula. If wmean is true, the first moment is returned as the mean rather than as the (zero) first central moment.
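A quick numeric check using the normal distribution N(mu=1, sigma^2=4), whose first four non-central moments are 1, 5, 13, 73; with wmean=True the first entry of the result should be the mean:

from statsmodels.stats.moment_helpers import mnc2mc

mnc = [1, 5, 13, 73]  # non-central moments of N(1, 4)
print(mnc2mc(mnc))    # expect (1, 4, 0, 48): mean, variance, 3rd and 4th central moments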

GMMResults.f_test()

statsmodels.sandbox.regression.gmm.GMMResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis. This is a special case of wald_test that always uses the F distribution.

Parameters:
r_matrix : array-like, str, or tuple
    array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
    str : The full hypotheses to test can be given as a string. See the examples.
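f_test is shared by the common results classes, so a quick OLS illustration carries over to GMMResults (the data here are simulated purely for demonstration):

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(50, 2))
y = X @ np.array([1.0, 0.5, 0.0]) + np.random.randn(50)
res = sm.OLS(y, X).fit()

# Joint hypothesis: both slope coefficients are zero
r_matrix = np.array([[0, 1, 0], [0, 0, 1]])
print(res.f_test(r_matrix))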