ARMA.loglike_kalman()

statsmodels.tsa.arima_model.ARMA.loglike_kalman
ARMA.loglike_kalman(params, set_sigma2=True)
Compute the exact log-likelihood for an ARMA(p, q) model by the Kalman filter.
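
A minimal sketch of calling loglike_kalman on a fitted model, assuming the legacy statsmodels.tsa.arima_model.ARMA API (replaced by statsmodels.tsa.arima.model.ARIMA in recent releases) and a simulated series; the series and order here are purely illustrative.

import numpy as np
from statsmodels.tsa.arima_model import ARMA

np.random.seed(0)
y = np.random.randn(200).cumsum()    # illustrative series (assumption)

mod = ARMA(y, order=(1, 1))
res = mod.fit(disp=0)

# Evaluate the exact Kalman-filter log-likelihood at the fitted parameters;
# for an exact-MLE fit this should roughly match res.llf.
print(mod.loglike_kalman(res.params))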

static QuantRegResults.rsquared()

statsmodels.regression.quantile_regression.QuantRegResults.rsquared
static QuantRegResults.rsquared()
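
A hedged sketch of reading this attribute after fitting a median regression on simulated data; the data and coefficients below are assumptions for illustration only.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
x = np.random.randn(100)
y = 2.0 * x + np.random.standard_t(3, size=100)   # heavy-tailed noise (assumption)
X = sm.add_constant(x)

res = sm.QuantReg(y, X).fit(q=0.5)
print(res.rsquared)     # R-squared computed from the quantile-regression residuals
print(res.prsquared)    # pseudo R-squared usually reported for quantile regression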

TLinearModel.initialize()

statsmodels.miscmodels.tmodel.TLinearModel.initialize
TLinearModel.initialize()
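
initialize() runs as part of model setup rather than being called by user code; the sketch below is a hedged illustration, assuming simulated data with t-distributed errors and illustrative start values (OLS coefficients plus guesses for the degrees-of-freedom and scale parameters).

import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.tmodel import TLinearModel

np.random.seed(0)
X = sm.add_constant(np.random.randn(200, 2))
beta = np.array([1.0, 0.5, -0.25])                  # illustrative coefficients (assumption)
y = X @ beta + np.random.standard_t(3, size=200)    # t-distributed errors

mod = TLinearModel(y, X)    # initialize() is invoked during model construction

# start values: OLS betas plus guesses for df and scale (assumptions)
start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], 5.0, 1.0]
res = mod.fit(start_params=start, disp=0)
print(res.params)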

CountResults.initialize()

statsmodels.discrete.discrete_model.CountResults.initialize
CountResults.initialize(model, params, **kwd)
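
CountResults.initialize is called internally when fit() wraps the optimizer output in a results object, so user code normally never calls it directly; a hedged sketch with simulated Poisson data is shown for context.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(500, 2))
mu = np.exp(X @ np.array([0.5, 0.2, -0.1]))    # illustrative mean function (assumption)
y = np.random.poisson(mu)

# fit() returns a CountResults-based instance whose initialize(model, params) has already run
res = sm.Poisson(y, X).fit(disp=0)
print(res.params)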

PoissonGMLE.nloglike()

statsmodels.miscmodels.count.PoissonGMLE.nloglike
PoissonGMLE.nloglike(params)
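
A minimal sketch evaluating the negated log-likelihood (the objective the generic likelihood machinery minimizes) at an arbitrary parameter vector; the data and evaluation point are assumptions for illustration.

import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.count import PoissonGMLE

np.random.seed(0)
X = sm.add_constant(np.random.randn(300, 1))
y = np.random.poisson(np.exp(X @ np.array([0.3, 0.5])))

mod = PoissonGMLE(y, X)
params = np.array([0.3, 0.5])      # evaluation point (assumption)
print(mod.nloglike(params))        # scalar negative log-likelihood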

PoissonGMLE.score_obs()

statsmodels.miscmodels.count.PoissonGMLE.score_obs
PoissonGMLE.score_obs(params, **kwds)
Jacobian/Gradient of log-likelihood evaluated at params for each observation.
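
A hedged sketch, again with simulated Poisson data: score_obs returns one gradient row per observation, and at the maximum-likelihood estimate its column sums should be close to zero.

import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.count import PoissonGMLE

np.random.seed(0)
X = sm.add_constant(np.random.randn(300, 1))
y = np.random.poisson(np.exp(X @ np.array([0.3, 0.5])))

mod = PoissonGMLE(y, X)
res = mod.fit(start_params=np.zeros(2), disp=0)

grad = mod.score_obs(res.params)    # shape (nobs, k_params)
print(grad.shape)
print(grad.sum(axis=0))             # approximately zero at the maximum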

CountModel.fit()

statsmodels.discrete.discrete_model.CountModel.fit
CountModel.fit(start_params=None, method='newton', maxiter=35, full_output=1, disp=1, callback=None, **kwargs)
Fit the model using maximum likelihood. The rest of the docstring is from statsmodels.base.model.LikelihoodModel.fit.

Fit method for likelihood-based models.

Parameters:
start_params : array-like, optional
    Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.
method : str, optional
    The method determines which solver from scipy.optimize is used.
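
A minimal sketch using Poisson, a CountModel subclass, to illustrate the fit signature above; the data are simulated for illustration.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(400, 2))
y = np.random.poisson(np.exp(X @ np.array([0.2, 0.4, -0.3])))

res = sm.Poisson(y, X).fit(start_params=None, method='newton', maxiter=35, disp=0)
print(res.summary())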

tsa.filters.hp_filter.hpfilter()

statsmodels.tsa.filters.hp_filter.hpfilter
statsmodels.tsa.filters.hp_filter.hpfilter(X, lamb=1600)
Hodrick-Prescott filter.

Parameters:
X : array-like
    The 1d ndarray timeseries to filter, of length (nobs,) or (nobs, 1).
lamb : float
    The Hodrick-Prescott smoothing parameter. A value of 1600 is suggested for quarterly data. Ravn and Uhlig suggest using a value of 6.25 (1600/4**4) for annual data and 129600 (1600*3**4) for monthly data.

Returns:
cycle : array
    The estimated cycle in the data given lamb.
trend : array
    The estimated trend in the data given lamb.
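
A minimal sketch on the quarterly real GDP series from the macrodata dataset that ships with statsmodels.

import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

gdp = sm.datasets.macrodata.load_pandas().data['realgdp']

# lamb=1600 is the conventional value for quarterly data
cycle, trend = hpfilter(gdp, lamb=1600)
print(cycle.head())
print(trend.head())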

Robust Linear Models

Estimation

In [1]:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std

Load data:

In [2]:
data = sm.datasets.stackloss.load()
data.exog = sm.add_constant(data.exog)

Huber's T norm with the (default) median absolute deviation scaling:

In [3]:
huber_t = sm.RLM(data.endog, data.exog, M=sm.robust.norms.HuberT())
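A hedged sketch of how the example typically continues, assuming the cells above have been run: fit the Huber-T model and inspect the robust coefficient estimates and their standard errors.

In [4]:
hub_results = huber_t.fit()
print(hub_results.params)
print(hub_results.bse)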

Logit.fit_regularized()

statsmodels.discrete.discrete_model.Logit.fit_regularized
Logit.fit_regularized(start_params=None, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03, **kwargs)
Fit the model using a regularized maximum likelihood. The regularization method AND the solver used are determined by the argument method.

Parameters:
start_params : array-like, optional
    Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.
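
A minimal sketch of an L1-penalized logit fit on simulated data; alpha sets the penalty weight, and with the default trim_mode small coefficients may be trimmed to exact zeros. The coefficient vector below is an assumption for illustration.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.randn(500, 5))
beta = np.array([0.5, 1.0, 0.0, 0.0, -1.5, 0.0])    # sparse truth (assumption)
p = 1.0 / (1.0 + np.exp(-X @ beta))
y = (np.random.rand(500) < p).astype(float)

res = sm.Logit(y, X).fit_regularized(method='l1', alpha=1.0, disp=0)
print(res.params)    # penalized coefficients; some may be exactly zero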