GMMResults.summary()

statsmodels.sandbox.regression.gmm.GMMResults.summary

GMMResults.summary(yname=None, xname=None, title=None, alpha=0.05)

Summarize the regression results.

Parameters:
    yname : string, optional
        Default is 'y'.
    xname : list of strings, optional
        Default is 'var_##' for ## indexing the p regressors.
    title : string, optional
        Title for the top table. If not None, then this replaces the default title.
    alpha : float
        Significance level for the confidence intervals.

Returns:
    smry : Summary instance
        Holds the summary tables and text.
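A minimal sketch of summary() in use. The simulated instrumental-variables data and the fit through the IVGMM subclass (whose results inherit the GMMResults methods) are illustrative assumptions, not part of the documented signature:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    # simulate a small IV setup: one endogenous regressor, two instruments
    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))                         # instruments
    u = rng.normal(size=n)                              # structural error
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)   # endogenous regressor
    y = 1.0 + 2.0 * x + u
    exog = np.column_stack([np.ones(n), x])
    instrument = np.column_stack([np.ones(n), z])

    res = IVGMM(y, exog, instrument).fit()

    # default summary, then one with custom names and 90% confidence intervals
    print(res.summary())
    print(res.summary(yname='y', xname=['const', 'x'], alpha=0.10))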

GMMResults.save()

statsmodels.sandbox.regression.gmm.GMMResults.save

GMMResults.save(fname, remove_data=False)

Save a pickle of this instance.

Parameters:
    fname : string or filehandle
        fname can be a string to a file path or filename, or a filehandle.
    remove_data : bool
        If False (default), then the instance is pickled without changes. If True, then all arrays with length nobs are set to None before pickling. See the remove_data method. In some cases not all arrays will be set to None.

Notes:
    If remove_data is True, see the remove_data method for details and caveats about which arrays are removed.
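A sketch of pickling a fitted results instance, with and without the nobs-length data arrays; the simulated data, the IVGMM fit, and the file names are illustrative assumptions:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    # illustrative fit (same simulated IV data as in the summary sketch)
    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()

    # full pickle, and a smaller one with nobs-length arrays set to None
    res.save('gmm_results.pkl')
    res.save('gmm_results_small.pkl', remove_data=True)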

GMMResults.remove_data()

statsmodels.sandbox.regression.gmm.GMMResults.remove_data

GMMResults.remove_data()

Remove data arrays, i.e. all nobs-length arrays, from the result and model.

This reduces the size of the instance, so it can be pickled with less memory. Currently tested for use with predict from an unpickled results and model instance.

Warning: since data and some intermediate results have been removed, calculating new statistics that require them will raise exceptions. The exception will occur the first time an attribute that has been set to None is accessed.
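A sketch of the purge-then-predict workflow the docstring describes; the simulated data and the IVGMM fit are illustrative assumptions:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    exog = np.column_stack([np.ones(n), x])
    res = IVGMM(y, exog, np.column_stack([np.ones(n), z])).fit()

    res.remove_data()    # nobs-length arrays on result and model become None

    # prediction with an explicit exog still works; statistics that need the
    # removed arrays would now raise when first accessed
    print(res.predict(exog[:5]))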

GMMResults.predict()

statsmodels.sandbox.regression.gmm.GMMResults.predict

GMMResults.predict(exog=None, transform=True, *args, **kwargs)

Call self.model.predict with self.params as the first argument.

Parameters:
    exog : array-like, optional
        The values for which you want to predict.
    transform : bool, optional
        If the model was fit via a formula, do you want to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form.
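A sketch of in-sample and out-of-sample prediction; the simulated data and the IVGMM fit are illustrative assumptions (no formula is used here, so transform plays no role):

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()

    fitted = res.predict()    # exog omitted: uses the model's own design matrix

    # predictions for new observations, with the same column layout (const, x)
    exog_new = np.column_stack([np.ones(3), [-1.0, 0.0, 1.0]])
    print(res.predict(exog_new))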

GMMResults.normalized_cov_params()

statsmodels.sandbox.regression.gmm.GMMResults.normalized_cov_params

GMMResults.normalized_cov_params()

GMMResults.load()

statsmodels.sandbox.regression.gmm.GMMResults.load

classmethod GMMResults.load(fname)

Load a pickle (class method).

Parameters:
    fname : string or filehandle
        fname can be a string to a file path or filename, or a filehandle.

Returns:
    unpickled instance
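A sketch of the save/load round trip; the file name, the simulated data, and the IVGMM fit are illustrative assumptions:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM, GMMResults

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()
    res.save('gmm_results.pkl')

    # the class method unpickles whatever results instance was saved
    res2 = GMMResults.load('gmm_results.pkl')
    print(np.allclose(res.params, res2.params))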

GMMResults.jtest()

statsmodels.sandbox.regression.gmm.GMMResults.jtest

GMMResults.jtest()

Overidentification test.

I guess this is missing a division by nobs, what's the normalization in jval?
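A sketch of the J test on an over-identified model; the simulated data and the IVGMM fit are illustrative assumptions. With a constant plus two instruments there are three moment conditions for two parameters, so one overidentifying restriction is tested:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()

    # Hansen J test of the overidentifying restrictions
    print(res.jtest())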

GMMResults.initialize()

statsmodels.sandbox.regression.gmm.GMMResults.initialize

GMMResults.initialize(model, params, **kwd)

GMMResults.get_bse()

statsmodels.sandbox.regression.gmm.GMMResults.get_bse

GMMResults.get_bse(**kwds)

Standard error of the parameter estimates, with options.

Parameters:
    kwds : optional keywords
        Options for calculating cov_params.

Returns:
    bse : ndarray
        Estimated standard error of parameter estimates.
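A sketch of get_bse(); the simulated data and the IVGMM fit are illustrative assumptions. With no keywords this recomputes the standard errors from cov_params; any keywords are forwarded to cov_params:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()

    print(res.get_bse())    # standard errors with default covariance options
    print(res.bse)          # the cached standard errors, for comparison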

GMMResults.f_test()

statsmodels.sandbox.regression.gmm.GMMResults.f_test

GMMResults.f_test(r_matrix, cov_p=None, scale=1.0, invcov=None)

Compute the F-test for a joint linear hypothesis.

This is a special case of wald_test that always uses the F distribution.

Parameters:
    r_matrix : array-like, str, or tuple
        array : An r x k array where r is the number of restrictions to test and k is the number of regressors. It is assumed that the linear combination is equal to zero.
        str : The full hypotheses to test can be given as a string.
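A sketch of f_test() with an explicit restriction matrix; the simulated data, the IVGMM fit, and the particular restriction are illustrative assumptions. Since GMM inference here is asymptotic, the related wald_test (of which f_test is a special case) is often used instead for chi-square based inference:

    import numpy as np
    from statsmodels.sandbox.regression.gmm import IVGMM

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ [0.5, 0.5] + 0.8 * u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + u
    res = IVGMM(y, np.column_stack([np.ones(n), x]),
                np.column_stack([np.ones(n), z])).fit()

    # one row per restriction, one column per parameter (const, x);
    # this single row tests whether the slope on x is zero
    r_matrix = np.array([[0.0, 1.0]])
    print(res.f_test(r_matrix))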