tools.eval_measures.mse()

statsmodels.tools.eval_measures.mse(x1, x2, axis=0)

Mean squared error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    mse : ndarray or float
        Mean squared error along the given axis.

Notes:
    If x1 and x2 have different shapes, then they need to broadcast. This uses numpy.asanyarray to convert the input; whether that is the desired result depends on the array subclass.
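Example (a minimal sketch; the arrays below are illustrative and not part of the original documentation):

    import numpy as np
    from statsmodels.tools.eval_measures import mse

    x1 = np.array([1.0, 2.0, 3.0])
    x2 = np.array([1.5, 2.0, 2.0])
    print(mse(x1, x2))  # mean of the squared differences along axis 0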

tools.tools.fullrank()

statsmodels.tools.tools.fullrank(X, r=None)

Return a matrix whose column span is the same as X.

If the rank of X is known it can be specified as r; no check is made to ensure that this really is the rank of X.
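Example (a minimal sketch using a deliberately rank-deficient matrix; the data are illustrative):

    import numpy as np
    from statsmodels.tools.tools import fullrank

    # The third column is a multiple of the second, so X has rank 2.
    X = np.column_stack([np.ones(5), np.arange(5), 2 * np.arange(5)])
    Xf = fullrank(X)
    print(X.shape, Xf.shape)  # the returned matrix keeps only a full-rank column basis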

tools.eval_measures.medianbias()

statsmodels.tools.eval_measures.medianbias(x1, x2, axis=0)

Median bias, median error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    medianbias : ndarray or float
        Median bias, or median difference, along the given axis.

Notes:
    If x1 and x2 have different shapes, then they need to broadcast. This uses numpy.asanyarray to convert the input.
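Example (a minimal sketch showing the axis argument; the arrays are illustrative):

    import numpy as np
    from statsmodels.tools.eval_measures import medianbias

    x1 = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    x2 = np.zeros((3, 2))
    print(medianbias(x1, x2, axis=0))  # median difference computed per column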

tools.eval_measures.bias()

statsmodels.tools.eval_measures.bias(x1, x2, axis=0)

Bias, mean error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    bias : ndarray or float
        Bias, or mean difference, along the given axis.

Notes:
    If x1 and x2 have different shapes, then they need to broadcast. This uses numpy.asanyarray to convert the input; whether that is the desired result depends on the array subclass.
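Example (a minimal sketch; the "actual" and "forecast" arrays are made up for illustration):

    import numpy as np
    from statsmodels.tools.eval_measures import bias

    actual = np.array([1.0, 2.0, 3.0, 4.0])
    forecast = np.array([1.2, 1.8, 3.5, 4.1])
    print(bias(actual, forecast))  # mean difference between the two arrays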

Probit.jac()

statsmodels.discrete.discrete_model.Probit.jac(*args, **kwds)

jac is deprecated and will be removed in 0.7; use the score_obs method instead.

Probit model Jacobian for each observation.

Parameters:
    params : array-like
        The parameters of the model.

Returns:
    jac : ndarray, (nobs, k_vars)
        The derivative of the loglikelihood for each observation evaluated at params.

Notes:
    \frac{\partial \ln L_i}{\partial \beta} = \left[ \frac{q_i \phi(q_i x_i'\beta)}{\Phi(q_i x_i'\beta)} \right] x_i

    for observations i = 1, ..., n, where q = 2y - 1. This simplification comes from the fact that the normal distribution is symmetric.
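Since jac is deprecated, the sketch below uses score_obs as the documentation advises; the simulated data are illustrative only:

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(0)
    X = sm.add_constant(np.random.randn(100, 2))
    y = (np.random.rand(100) < 0.5).astype(float)   # binary outcome
    res = sm.Probit(y, X).fit(disp=0)
    scores = res.model.score_obs(res.params)        # per-observation derivative of the loglikelihood
    print(scores.shape)                             # (nobs, k_vars)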

NegativeBinomialResults.remove_data()

statsmodels.discrete.discrete_model.NegativeBinomialResults.remove_data()

Remove data arrays, i.e. all nobs-length arrays, from the result and the model.

This reduces the size of the instance, so it can be pickled with less memory. Currently tested for use with predict from an unpickled results and model instance.

Warning: Since data and some intermediate results have been removed, calculating new statistics that require them will raise exceptions. The exception will occur the first time an attribute that has been set to None is accessed.
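A minimal sketch of the intended use (simulated data; the exact size reduction will vary):

    import pickle
    import numpy as np
    import statsmodels.api as sm

    np.random.seed(0)
    X = sm.add_constant(np.random.randn(500, 2))
    mu = np.exp(0.5 + 0.3 * X[:, 1])
    y = np.random.negative_binomial(2, 2.0 / (2.0 + mu))   # overdispersed counts
    res = sm.NegativeBinomial(y, X).fit(disp=0)

    before = len(pickle.dumps(res))
    res.remove_data()                 # drop the nobs-length arrays
    after = len(pickle.dumps(res))
    print(before, after)              # the pickled result is smaller
    print(res.predict(X[:5]))         # predict still works when exog is supplied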

GLS.hessian()

statsmodels.regression.linear_model.GLS.hessian(params)

The Hessian matrix of the model.
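A sketch of the hessian(params) interface. It is shown here with OLS, a GLS subclass, because the analytic Hessian may not be implemented for every GLS configuration in older releases; the data are simulated:

    import numpy as np
    import statsmodels.api as sm

    np.random.seed(0)
    X = sm.add_constant(np.random.randn(50, 2))
    y = X @ np.array([1.0, 0.5, -0.2]) + np.random.randn(50)
    model = sm.OLS(y, X)
    res = model.fit()
    H = model.hessian(res.params)   # (k_vars, k_vars) second derivatives of the loglikelihood
    print(H.shape)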

SimpleTable.as_latex_tabular()

statsmodels.iolib.table.SimpleTable.as_latex_tabular(center=True, **fmt_dict)

Return a string: the table as a LaTeX tabular environment. Note: the output requires the booktabs package.
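A minimal sketch (the cell values, headers, and stubs are arbitrary):

    from statsmodels.iolib.table import SimpleTable

    data = [["1.30", "2.50"], ["0.20", "0.95"]]
    headers = ["coef", "p-value"]
    stubs = ["x1", "x2"]
    tbl = SimpleTable(data, headers, stubs, title="Example table")
    print(tbl.as_latex_tabular())   # paste into a document that loads \usepackage{booktabs}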

DescStatMV.mv_mean_contour()

statsmodels.emplike.descriptive.DescStatMV.mv_mean_contour(mu1_low, mu1_upp, mu2_low, mu2_upp, step1, step2, levs=[0.2, 0.1, 0.05, 0.01, 0.001], var1_name=None, var2_name=None, plot_dta=False)

Creates a confidence region plot for the mean of bivariate data.

Parameters:
    mu1_low : float
        Minimum value of the mean for variable 1.
    mu1_upp : float
        Maximum value of the mean for variable 1.
    mu2_low : float
        Minimum value of the mean for variable 2.
    mu2_upp : float
        Maximum value of the mean for variable 2.
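A sketch of a call (simulated bivariate data; the grid bounds and step sizes are chosen only to bracket the sample mean, and matplotlib is required for the plot):

    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.emplike.descriptive import DescStatMV

    np.random.seed(0)
    data = np.random.randn(75, 2) + np.array([1.0, 2.0])
    desc = DescStatMV(data)
    desc.mv_mean_contour(0.7, 1.3, 1.7, 2.3, 0.1, 0.1,
                         var1_name="x1", var2_name="x2", plot_dta=True)
    plt.show()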

DescStatMV.mv_test_mean()

statsmodels.emplike.descriptive.DescStatMV.mv_test_mean(mu_array, return_weights=False)

Returns -2 x log-likelihood and the p-value for a multivariate hypothesis test of the mean.

Parameters:
    mu_array : 1d array
        Hypothesized values for the mean. Must have the same number of elements as columns in endog.
    return_weights : bool
        If True, returns the weights that maximize the likelihood of mu_array. Default is False.

Returns:
    test_results : tuple
        The log-likelihood ratio and p-value for the hypothesized mean.
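A minimal sketch (simulated data; the hypothesized mean equals the true mean, so a large p-value is expected):

    import numpy as np
    from statsmodels.emplike.descriptive import DescStatMV

    np.random.seed(0)
    data = np.random.randn(100, 2) + np.array([1.0, 2.0])
    desc = DescStatMV(data)
    llr, pval = desc.mv_test_mean(np.array([1.0, 2.0]))
    print(llr, pval)   # -2 x log-likelihood ratio and its p-value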