CountResults.summary()

statsmodels.discrete.discrete_model.CountResults.summary

CountResults.summary(yname=None, xname=None, title=None, alpha=0.05, yname_list=None)

Summarize the Regression Results.

Parameters:
    yname : string, optional
        Name of the dependent variable. Default is 'y'.
    xname : list of strings, optional
        Names of the regressors. Default is 'var_##' for each of the p regressors.
    title : string, optional
        Title for the top table. If not None, this replaces the default title.
    alpha : float
        Significance level for the confidence intervals.

Returns:
    smry : Summary instance
        Holds the summary tables and text, which can be printed or converted
        to various output formats.
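
Example (an illustrative sketch, not taken from the statsmodels docs: it assumes a Poisson model fit on simulated data, whose results object is CountResults-based):

    import numpy as np
    import statsmodels.api as sm

    # Simulated count data (made up for this example)
    rng = np.random.RandomState(0)
    X = sm.add_constant(rng.normal(size=(200, 2)))
    y = rng.poisson(np.exp(X @ np.array([0.5, 0.3, -0.2])))

    # Poisson results subclass CountResults, so summary() is available
    res = sm.Poisson(y, X).fit(disp=0)

    # Default summary, then one with custom names and 90% confidence intervals
    print(res.summary())
    print(res.summary(yname="counts", xname=["const", "x1", "x2"], alpha=0.10))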

CountResults.summary2()

statsmodels.discrete.discrete_model.CountResults.summary2

CountResults.summary2(yname=None, xname=None, title=None, alpha=0.05, float_format='%.4f')

Experimental function to summarize regression results.

Parameters:
    yname : string
        Name of the dependent variable (optional).
    xname : list of strings of length equal to the number of parameters
        Names of the independent variables (optional).
    title : string, optional
        Title for the top table. If not None, this replaces the default title.
    alpha : float
        Significance level for the confidence intervals.
    float_format : string
        Print format for floats in the parameters summary.
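
Example (continuing the simulated fit res from the sketch above; the title text is made up):

    # summary2 gives the more customizable, experimental summary
    smry = res.summary2(float_format="%.3f", title="Poisson fit on simulated counts")
    print(smry)

    # The returned Summary object can also be exported as plain text
    print(smry.as_text())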

CountResults.save()

statsmodels.discrete.discrete_model.CountResults.save

CountResults.save(fname, remove_data=False)

Save a pickle of this instance.

Parameters:
    fname : string or filehandle
        fname can be a string to a file path or filename, or a filehandle.
    remove_data : bool
        If False (default), then the instance is pickled without changes.
        If True, then all arrays with length nobs are set to None before
        pickling. See the remove_data method. In some cases not all arrays
        will be set to None.

Notes:
    If remove_data is true and the model result does not implement a
    remove_data method, then this will raise an exception.
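
Example (a sketch assuming the fitted res from the first example; the file name is arbitrary):

    # Full pickle of the results (and the attached model) to disk
    res.save("poisson_results.pkl")

    # Passing remove_data=True would drop the length-nobs arrays first,
    # producing a much smaller file (see remove_data below).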

CountResults.remove_data()

statsmodels.discrete.discrete_model.CountResults.remove_data

CountResults.remove_data()

Remove data arrays, all nobs arrays, from the result and model.

This reduces the size of the instance, so it can be pickled with less memory.
Currently tested for use with predict from an unpickled results and model
instance.

Warning:
    Since data and some intermediate results have been removed, calculating
    new statistics that require them will raise exceptions. The exception
    will occur the first time an attribute that has been set to None is
    accessed.
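
Example (a sketch continuing the first example; it trims a deep copy so the original res keeps its data arrays):

    import copy

    slim = copy.deepcopy(res)
    slim.remove_data()    # nobs-length arrays on result and model are now None

    # Prediction still works as long as exog is supplied explicitly
    new_X = sm.add_constant(rng.normal(size=(5, 2)))
    print(slim.predict(new_X))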

CountResults.normalized_cov_params()

statsmodels.discrete.discrete_model.CountResults.normalized_cov_params

CountResults.normalized_cov_params()

CountResults.predict()

statsmodels.discrete.discrete_model.CountResults.predict

CountResults.predict(exog=None, transform=True, *args, **kwargs)

Call self.model.predict with self.params as the first argument.

Parameters:
    exog : array-like, optional
        The values for which you want to predict.
    transform : bool, optional
        If the model was fit via a formula, do you want to pass exog through
        the formula. Default is True. E.g., if you fit a model y ~ log(x1) +
        log(x2), and transform is True, then you can pass a data structure
        that contains x1 and x2 in their original form. Otherwise, you would
        need to log the data first.
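
Example (continuing the fitted res from the first sketch):

    # In-sample predicted means, using the exog stored on the model
    print(res.predict()[:5])

    # Out-of-sample predicted mean counts for new regressor values
    new_X = sm.add_constant(rng.normal(size=(5, 2)))
    print(res.predict(exog=new_X))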

CountResults.load()

statsmodels.discrete.discrete_model.CountResults.load

classmethod CountResults.load(fname)

Load a pickle (class method).

Parameters:
    fname : string or filehandle
        fname can be a string to a file path or filename, or a filehandle.

Returns:
    unpickled instance
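
Example (a sketch assuming the pickle written in the save example above):

    from statsmodels.discrete.discrete_model import CountResults

    loaded = CountResults.load("poisson_results.pkl")
    print(loaded.params)    # the unpickled results behave like the original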

CountResults.initialize()

statsmodels.discrete.discrete_model.CountResults.initialize

CountResults.initialize(model, params, **kwd)

CountResults.f_test()

statsmodels.discrete.discrete_model.CountResults.f_test

CountResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis.

This is a special case of wald_test that always uses the F distribution.

Parameters:
    r_matrix : array-like, str, or tuple
        array : An r x k array where r is the number of restrictions to test
            and k is the number of regressors. It is assumed that the linear
            combination is equal to zero.
        str : The full hypotheses to test can be given as a string.
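
Example (continuing the fitted res from the first sketch; with a plain array design matrix statsmodels names the columns const, x1, x2):

    import numpy as np

    # Jointly test that both slope coefficients are zero, first with an
    # explicit restriction matrix, then with an equivalent string hypothesis
    R = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    print(res.f_test(R))
    print(res.f_test("x1 = 0, x2 = 0"))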

CountResults.get_margeff()

statsmodels.discrete.discrete_model.CountResults.get_margeff

CountResults.get_margeff(at='overall', method='dydx', atexog=None, dummy=False, count=False)

Get marginal effects of the fitted model.

Parameters:
    at : str, optional
        Options are:
        'overall' : The average of the marginal effects at each observation.
        'mean' : The marginal effects at the mean of each regressor.
        'median' : The marginal effects at the median of each regressor.
        'zero' : The marginal effects at zero for each regressor.
        'all' : The marginal effects at each observation.
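
Example (continuing the fitted res from the first sketch):

    # Average marginal effects of each regressor on the expected count
    mfx = res.get_margeff(at="overall", method="dydx")
    print(mfx.summary())

    # Elasticities evaluated at the regressor means
    print(res.get_margeff(at="mean", method="eyex").summary())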