tools.eval_measures.aic()

statsmodels.tools.eval_measures.aic

statsmodels.tools.eval_measures.aic(llf, nobs, df_modelwc)

Akaike information criterion.

Parameters:
    llf : float
        value of the log-likelihood
    nobs : int
        number of observations
    df_modelwc : int
        number of parameters including the constant

Returns:
    aic : float
        information criterion

References:
    http://en.wikipedia.org/wiki/Akaike_information_criterion
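A quick usage sketch (the numbers are purely illustrative). Assuming the usual definition aic = -2*llf + 2*df_modelwc, the result can be checked by hand:

>>> from statsmodels.tools.eval_measures import aic
>>> llf, nobs, df_modelwc = -1520.5, 200, 4   # illustrative values
>>> aic(llf, nobs, df_modelwc)                # -2*(-1520.5) + 2*4
3049.0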

Transf_gen.cdf()

statsmodels.sandbox.distributions.transformed.Transf_gen.cdf

Transf_gen.cdf(x, *args, **kwds)

Cumulative distribution function of the given RV.

Parameters:
    x : array_like
        quantiles
    arg1, arg2, arg3, ... : array_like
        the shape parameter(s) for the distribution (see the docstring of the
        instance object for more information)
    loc : array_like, optional
        location parameter (default=0)
    scale : array_like, optional
        scale parameter (default=1)

Returns:
    cdf : ndarray
        cumulative distribution function evaluated at x
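A hedged sketch of calling cdf on a transformed distribution from this sandbox module. The module-level lognormalg instance used here is an assumption (one of the prebuilt example distributions); any Transf_gen-style instance exposes the same cdf interface:

>>> import numpy as np
>>> from statsmodels.sandbox.distributions.transformed import lognormalg
>>> x = np.array([0.5, 1.0, 2.0])
>>> lognormalg.cdf(x, loc=0, scale=1)   # ndarray of probabilities in [0, 1]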

ARIMAResults.wald_test()

statsmodels.tsa.arima_model.ARIMAResults.wald_test

ARIMAResults.wald_test(r_matrix, cov_p=None, scale=1.0, invcov=None, use_f=None)

Compute a Wald-test for a joint linear hypothesis.

Parameters:
    r_matrix : array-like, str, or tuple
        array : An r x k array where r is the number of restrictions to test
        and k is the number of regressors. It is assumed that the linear
        combination is equal to zero.
        str : The full hypotheses to test can be given as a string. See the
        examples.
        tuple : A tuple of arrays in the form (R, q), where q gives the
        value(s) the restrictions are tested against (a scalar or a vector
        with one entry per restriction).
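A hedged sketch using the legacy statsmodels.tsa.arima_model API that this entry documents (deprecated and removed in recent statsmodels releases). The data are simulated, and the identity restriction matrix tests that all estimated parameters are jointly zero:

>>> import numpy as np
>>> from statsmodels.tsa.arima_model import ARIMA
>>> np.random.seed(0)
>>> y = np.random.randn(200).cumsum()      # placeholder series
>>> res = ARIMA(y, order=(1, 1, 0)).fit(disp=0)
>>> R = np.eye(len(res.params))            # one restriction per parameter
>>> res.wald_test(R)                       # joint test: all parameters equal zero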

static RegressionResults.rsquared_adj()

statsmodels.regression.linear_model.RegressionResults.rsquared_adj

RegressionResults.rsquared_adj

Adjusted R-squared of the regression, exposed as a (cached) attribute on fitted results objects.
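Minimal usage sketch with simulated data (rsquared_adj is read as an attribute, not called):

>>> import numpy as np
>>> import statsmodels.api as sm
>>> np.random.seed(0)
>>> X = sm.add_constant(np.random.randn(100, 2))
>>> y = X @ [1.0, 2.0, -1.0] + np.random.randn(100)
>>> res = sm.OLS(y, X).fit()
>>> res.rsquared_adj          # adjusted R-squared as a float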

GMMResults.get_bse()

statsmodels.sandbox.regression.gmm.GMMResults.get_bse

GMMResults.get_bse(**kwds)

Standard error of the parameter estimates, with options.

Parameters:
    kwds : optional keywords
        options for calculating cov_params

Returns:
    bse : ndarray
        estimated standard error of parameter estimates

IVGMMResults.get_bse()

statsmodels.sandbox.regression.gmm.IVGMMResults.get_bse

IVGMMResults.get_bse(**kwds)

Standard error of the parameter estimates, with options.

Parameters:
    kwds : optional keywords
        options for calculating cov_params

Returns:
    bse : ndarray
        estimated standard error of parameter estimates
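A hedged sketch covering both GMMResults.get_bse and IVGMMResults.get_bse, using the sandbox IVGMM class on simulated instrumental-variable data. The keyword options are forwarded to cov_params and their exact names depend on the statsmodels version, so only the no-argument call is shown:

>>> import numpy as np
>>> from statsmodels.sandbox.regression.gmm import IVGMM
>>> np.random.seed(0)
>>> n = 500
>>> z = np.random.randn(n, 2)                    # excluded instruments
>>> x = z @ [0.5, 0.5] + np.random.randn(n)      # endogenous regressor
>>> exog = np.column_stack([np.ones(n), x])
>>> instrument = np.column_stack([np.ones(n), z])
>>> y = exog @ [1.0, 2.0] + np.random.randn(n)
>>> res = IVGMM(y, exog, instrument).fit()
>>> res.get_bse()            # ndarray of standard errors, one per parameter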

stats.correlation_tools.cov_nearest()

statsmodels.stats.correlation_tools.cov_nearest

statsmodels.stats.correlation_tools.cov_nearest(cov, method='clipped', threshold=1e-15, n_fact=100, return_all=False)

Find the nearest covariance matrix that is positive (semi-) definite. This leaves the diagonal, i.e. the variance, unchanged.

Parameters:
    cov : ndarray, (k, k)
        initial covariance matrix
    method : string
        if 'clipped', then the faster but less accurate corr_clipped is used;
        if 'nearest', then corr_nearest is used
    threshold : float
        clipping threshold for the smallest eigenvalue
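A runnable sketch: the matrix below has correlation-like entries but a negative eigenvalue, so it is not a valid covariance matrix; cov_nearest projects it to a nearby positive (semi-) definite matrix while keeping the diagonal fixed:

>>> import numpy as np
>>> from statsmodels.stats.correlation_tools import cov_nearest
>>> c = np.array([[1.0,  0.9,  0.9],
...               [0.9,  1.0, -0.9],
...               [0.9, -0.9,  1.0]])       # indefinite: smallest eigenvalue < 0
>>> np.linalg.eigvalsh(c)                   # shows the negative eigenvalue
>>> c_near = cov_nearest(c, method='nearest')
>>> np.linalg.eigvalsh(c_near)              # eigenvalues now >= 0, up to numerical noise
>>> np.diag(c_near)                         # diagonal (variances) unchanged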

SquareFunc.derivplus()

statsmodels.sandbox.distributions.transformed.SquareFunc.derivplus

SquareFunc.derivplus(x)
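The upstream docstring is empty. Judging from the name and the role of SquareFunc as the square-transform helper in this module, derivplus(x) presumably evaluates the derivative of the positive inverse branch of the square function, i.e. d/dx sqrt(x) = 0.5/sqrt(x); treat that reading as an assumption. A minimal call:

>>> from statsmodels.sandbox.distributions.transformed import SquareFunc
>>> sq = SquareFunc()
>>> sq.derivplus(4.0)   # assumed to equal 0.5 / sqrt(4.0) = 0.25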

NonlinearIVGMM.fitgmm()

statsmodels.sandbox.regression.gmm.NonlinearIVGMM.fitgmm

NonlinearIVGMM.fitgmm(start, weights=None, optim_method='bfgs', optim_args=None)

Estimate parameters using GMM.

Parameters:
    start : array_like
        starting values for minimization
    weights : array
        weighting matrix for moment conditions. If weights is None, then the
        identity matrix is used.

Returns:
    paramest : array
        estimated parameters

Notes:
    todo: add fixed parameter option, not here ???
    uses scipy.optimize.fmin
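A hedged sketch: fitgmm minimizes the GMM objective for one fixed weighting matrix (the identity when weights is None), given the user-supplied mean function passed to NonlinearIVGMM. The mean function here is linear just to keep the example short, the data are simulated, and constructor defaults may differ across statsmodels versions:

>>> import numpy as np
>>> from statsmodels.sandbox.regression.gmm import NonlinearIVGMM
>>> np.random.seed(0)
>>> n = 500
>>> z = np.column_stack([np.ones(n), np.random.randn(n, 2)])             # instruments
>>> x = np.column_stack([np.ones(n), z[:, 1] + 0.5 * np.random.randn(n)])
>>> y = x @ [1.0, 2.0] + np.random.randn(n)
>>> def mean_func(params, exog):
...     # conditional mean of y given exog; the moment conditions are formed
...     # from the residual y - mean_func(params, exog) and the instruments
...     return exog @ params
>>> mod = NonlinearIVGMM(y, x, z, mean_func)
>>> mod.fitgmm(start=np.zeros(2))      # estimates under the identity weight matrix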

graphics.functional.rainbowplot()

statsmodels.graphics.functional.rainbowplot

statsmodels.graphics.functional.rainbowplot(data, xdata=None, depth=None, method='MBD', ax=None, cmap=None)

Create a rainbow plot for a set of curves.

A rainbow plot contains line plots of all curves in the dataset, colored in order of functional depth. The median curve is shown in black.

Parameters:
    data : sequence of ndarrays or 2-D ndarray
        The vectors of functions to create a functional boxplot from. If a
        sequence of 1-D arrays, these should all be the same size.
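A runnable sketch with simulated curves (30 noisy sinusoids); rainbowplot returns a matplotlib Figure:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from statsmodels.graphics.functional import rainbowplot
>>> np.random.seed(0)
>>> xdata = np.linspace(0, 2 * np.pi, 100)
>>> data = [np.sin(xdata + phase) + 0.2 * np.random.randn(100)
...         for phase in np.random.uniform(0, 1, size=30)]   # 30 noisy curves
>>> fig = rainbowplot(data, xdata=xdata)
>>> plt.show()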