Logit.predict()

statsmodels.discrete.discrete_model.Logit.predict Logit.predict(params, exog=None, linear=False) Predict response variable of a model given exogenous variables. Parameters: params : array-like Fitted parameters of the model. exog : array-like 1d or 2d array of exogenous values. If not supplied, the whole exog attribute of the model is used. linear : bool, optional If True, returns the linear predictor dot(exog, params). Else, returns the value of the cdf at the linear predictor. Returns: array Fitted values at exog.
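
A minimal usage sketch with synthetic data, assuming the linear flag documented above (newer statsmodels releases expose the same switch through a which argument):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(100, 2)))    # design matrix with intercept
    y = (rng.uniform(size=100) < 0.5).astype(float)   # toy binary response

    model = sm.Logit(y, X)
    result = model.fit(disp=0)

    probs = model.predict(result.params, exog=X)              # cdf at the linear predictor, P(y=1 | x)
    xb = model.predict(result.params, exog=X, linear=True)    # linear predictor dot(exog, params)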

inverse_power.inverse()

statsmodels.genmod.families.links.inverse_power.inverse inverse_power.inverse(z) Inverse of the power transform link function Parameters: z : array-like Value of the transformed mean parameters at p Returns: p : array Mean parameters Notes: g^(-1)(z) = z**(1/power)
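
A short sketch, assuming the lowercase inverse_power class is importable from statsmodels.genmod.families.links (newer releases also expose it as InversePower):

    import numpy as np
    from statsmodels.genmod.families import links

    link = links.inverse_power()
    z = np.array([0.5, 1.0, 2.0])   # transformed mean parameters
    p = link.inverse(z)             # z**(1/power) with power = -1, i.e. 1/z
    print(p)                        # [2.  1.  0.5]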

static PHRegResults.baseline_cumulative_hazard()

statsmodels.duration.hazard_regression.PHRegResults.baseline_cumulative_hazard static PHRegResults.baseline_cumulative_hazard() [source] A list (corresponding to the strata) containing the baseline cumulative hazard function evaluated at the event points.
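
A rough sketch with simulated survival data; in the statsmodels versions I have seen this is a cache_readonly attribute, so it is accessed without parentheses despite the method-style listing above:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    exog = rng.normal(size=(n, 2))
    endog = rng.exponential(scale=np.exp(-exog[:, 0]))   # simulated event/censoring times
    status = (rng.uniform(size=n) < 0.8).astype(int)     # 1 = event observed, 0 = censored

    results = sm.PHReg(endog, exog, status=status).fit()
    bch = results.baseline_cumulative_hazard             # one entry per stratum (a single stratum here)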

stats.proportion.proportions_chisquare()

statsmodels.stats.proportion.proportions_chisquare statsmodels.stats.proportion.proportions_chisquare(count, nobs, value=None) [source] Test for proportions based on the chisquare test. Parameters: count : integer or array_like The number of successes in nobs trials. If this is array_like, then the assumption is that this represents the number of successes for each independent sample. nobs : integer The number of trials or observations, with the same length as count. value : None or float or array_like The value of the proportion(s) under the null hypothesis; if None and count is array_like, the null hypothesis is that the proportions are equal across samples.
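
A quick sketch for two independent samples; the three-part return value (statistic, p-value, (table, expected)) is how I recall the function's output and is worth verifying:

    from statsmodels.stats.proportion import proportions_chisquare

    count = [45, 60]    # successes in each independent sample
    nobs = [100, 110]   # trials in each sample

    chi2, p_value, (table, expected) = proportions_chisquare(count, nobs)
    print(chi2, p_value)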

static OLSInfluence.cooks_distance()

statsmodels.stats.outliers_influence.OLSInfluence.cooks_distance static OLSInfluence.cooks_distance() [source] (cached attribute) Cook's distance; uses the original results, no nobs loop.
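
A minimal sketch; cooks_distance is a cached attribute that, in the versions I have used, yields the distances together with p-values:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import OLSInfluence

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(50, 2)))
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)

    influence = OLSInfluence(sm.OLS(y, X).fit())
    cooks_d, pvals = influence.cooks_distance    # attribute access, no call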

tsa.tsatools.add_trend()

statsmodels.tsa.tsatools.add_trend statsmodels.tsa.tsatools.add_trend(X, trend='c', prepend=False, has_constant='skip') [source] Adds a trend and/or constant to an array. Parameters: X : array-like Original array of data. trend : str {'c', 't', 'ct', 'ctt'} 'c' add constant only, 't' add trend only, 'ct' add constant and linear trend, 'ctt' add constant and linear and quadratic trend. prepend : bool If True, prepends the new data to the columns of X. has_constant : str {'raise', 'add', 'skip'} Controls what happens when trend is 'c' and a constant column already exists in X.
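
A minimal sketch adding deterministic terms to a small matrix:

    import numpy as np
    from statsmodels.tsa.tsatools import add_trend

    X = np.random.randn(10, 2)
    X_ct = add_trend(X, trend='ct')                  # appends constant and linear trend columns
    X_pre = add_trend(X, trend='c', prepend=True)    # constant placed before the original columns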

MNLogit.hessian()

statsmodels.discrete.discrete_model.MNLogit.hessian MNLogit.hessian(params) [source] Multinomial logit Hessian matrix of the log-likelihood Parameters: params : array-like The parameters of the model Returns: hess : ndarray, (J*K, J*K) The Hessian, second derivative of loglikelihood function with respect to the flattened parameters, evaluated at params Notes: \frac{\partial^{2}\ln L}{\partial\beta_{j}\partial\beta_{l}} = -\sum_{i=1}^{n} \frac{\exp(\beta_{j}'x_{i})}{\sum_{k=0}^{J}\exp(\beta_{k}'x_{i})} \left[ \mathbf{1}(j=l) - \frac{\exp(\beta_{l}'x_{i})}{\sum_{k=0}^{J}\exp(\beta_{k}'x_{i})} \right] x_{i}x_{i}', where \mathbf{1}(j=l) equals 1 if j = l and 0 otherwise. The actual Hessian matrix has J**2 * K x K elements. Our Hessian is reshaped to be square (J*K, J*K) so that it can be used with scipy.optimize.
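
A sketch that evaluates the Hessian at the fitted parameters; the column-major flattening is an assumption about how MNLogit lays out its (K, J-1) parameter array and should be checked against your version:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(300, 2)))   # K = 3 regressors including the constant
    y = rng.integers(0, 3, size=300)                 # three outcome categories

    model = sm.MNLogit(y, X)
    results = model.fit(disp=0)

    flat_params = np.asarray(results.params).ravel(order='F')   # assumed column-major layout
    H = model.hessian(flat_params)    # second derivative of the log-likelihood at flat_params
    print(H.shape)                    # square, one row/column per free parameter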

ARMA.predict()

statsmodels.tsa.arima_model.ARMA.predict ARMA.predict(params, start=None, end=None, exog=None, dynamic=False) [source] ARMA model in-sample and out-of-sample prediction Parameters: params : array-like The fitted parameters of the model. start : int, str, or datetime Zero-indexed observation number at which to start forecasting, i.e., the first forecast is start. Can also be a date string to parse or a datetime type. end : int, str, or datetime Zero-indexed observation number at which to end forecasting. Can also be a date string to parse or a datetime type.
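
A sketch assuming an older statsmodels release that still ships statsmodels.tsa.arima_model.ARMA (removed in 0.13; newer code uses statsmodels.tsa.arima.model.ARIMA instead):

    import numpy as np
    from statsmodels.tsa.arima_process import arma_generate_sample
    from statsmodels.tsa.arima_model import ARMA

    np.random.seed(0)
    y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=250)

    model = ARMA(y, order=(1, 1))
    results = model.fit(disp=0)

    # In-sample fit plus 10 out-of-sample steps, evaluated at the fitted parameters.
    pred = model.predict(results.params, start=0, end=len(y) + 9)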

ACSkewT_gen.fit()

statsmodels.sandbox.distributions.extras.ACSkewT_gen.fit ACSkewT_gen.fit(data, *args, **kwds) Return MLEs for shape, location, and scale parameters from data. MLE stands for Maximum Likelihood Estimate. Starting estimates for the fit are given by input arguments; for any arguments not provided with starting estimates, self._fitstart(data) is called to generate such. One can hold some parameters fixed to specific values by passing in keyword arguments f0, f1, ..., fn (for shape parameters) and floc and fscale (for location and scale parameters, respectively).
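
A sketch fitting the skew-t to data drawn from an ordinary Student t; the shape-parameter order (degrees of freedom, then skewness) is an assumption about ACSkewT_gen's parameterization:

    import numpy as np
    from scipy import stats
    from statsmodels.sandbox.distributions.extras import ACSkewT_gen

    skewt = ACSkewT_gen()
    data = stats.t.rvs(df=5, size=1000, random_state=0)

    # Starting values: two shapes (df, skew), then loc and scale; anything omitted
    # would come from self._fitstart(data). floc/fscale could be used to fix values.
    estimates = skewt.fit(data, 5.0, 0.5, loc=0.0, scale=1.0)
    print(estimates)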

tsa.varma_process.VarmaPoly()

statsmodels.tsa.varma_process.VarmaPoly class statsmodels.tsa.varma_process.VarmaPoly(ar, ma=None) [source] Class to keep track of the VARMA polynomial format. Examples ar23 = np.array([[[ 1. , 0. ], [ 0. , 1. ]], [[-0.6, 0. ], [ 0.2, -0.6]], [[-0.1, 0. ], [ 0.1, -0.1]]]) ma22 = np.array([[[ 1. , 0. ], [ 0. , 1. ]], [[ 0.4, 0. ], [ 0.2, 0.3]]]) Methods getisinvertible([a]) check whether the moving-average lag-polynomial is invertible getisstationary([a]) check whether the auto-regressive lag-polynomial is stationary
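
A small self-contained sketch with a 2-variable VARMA(1,1) polynomial pair; the no-argument calls assume the methods fall back to the stored ar and ma arrays:

    import numpy as np
    from statsmodels.tsa.varma_process import VarmaPoly

    # Lag-0 identity plus one lag matrix for both the AR and MA sides.
    ar = np.array([[[1.0, 0.0],
                    [0.0, 1.0]],
                   [[-0.5, 0.1],
                    [0.0, -0.3]]])
    ma = np.array([[[1.0, 0.0],
                    [0.0, 1.0]],
                   [[0.2, 0.0],
                    [0.0, 0.2]]])

    vp = VarmaPoly(ar, ma)
    print(vp.getisstationary())    # roots check on the auto-regressive lag-polynomial
    print(vp.getisinvertible())    # roots check on the moving-average lag-polynomial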