sandbox.distributions.transformed.squarenormalg

statsmodels.sandbox.distributions.transformed.squarenormalg

A distribution instance based on a non-monotonic (u- or hump-shaped) transformation. The constructor can be called with a distribution class and with functions that define the non-linear transformation, and it generates the distribution of the transformed random variable.

Note: the transformation, its inverse, and its derivatives need to be fully specified: func, funcinvplus, funcinvminus, and the corresponding derivative functions.
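
A minimal sketch of how this instance might be used, assuming it exposes the usual scipy.stats-style pdf method. Since the square of a standard normal variable is chi-square with one degree of freedom, the chi2(1) density serves as a sanity check.

    import numpy as np
    from scipy import stats
    from statsmodels.sandbox.distributions.transformed import squarenormalg

    x = np.linspace(0.5, 4.0, 4)
    # density of the squared standard normal, via the transformed distribution
    print(squarenormalg.pdf(x))
    # reference: chi-square with 1 degree of freedom should give (nearly) the same values
    print(stats.chi2.pdf(x, df=1))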

static CountResults.bse()

statsmodels.discrete.discrete_model.CountResults.bse

static CountResults.bse()

The standard errors of the parameter estimates.
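
A short sketch with made-up data, assuming a Poisson model (whose fit returns a CountResults instance):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = sm.add_constant(rng.normal(size=100))          # [const, x] design matrix
    y = rng.poisson(np.exp(x @ np.array([0.5, 0.3])))  # Poisson counts

    res = sm.Poisson(y, x).fit(disp=0)
    print(res.params)  # coefficient estimates
    print(res.bse)     # their standard errors, aligned with res.params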

sandbox.stats.multicomp.set_partition()

statsmodels.sandbox.stats.multicomp.set_partition(ssli)

Extract a partition from a list of tuples; this would more accurately be called "select largest disjoint sets". Begun and Gabriel (1981) do not seem to be bothered by sets of accepted hypotheses with joint elements, e.g. maximal_accepted_sets = { {1,2,3}, {2,3,4} }. This creates a set partition from a list of sets given as tuples and tries to find the partition with the largest sets.
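
An illustrative call with made-up overlapping tuples; the exact structure of the return value (the selected disjoint sets plus any leftover elements) is an assumption and is best checked against the printed output.

    from statsmodels.sandbox.stats.multicomp import set_partition

    # hypothetical "maximal accepted sets" with shared elements
    accepted = [(1, 2, 3), (2, 3, 4), (5, 6)]
    print(set_partition(accepted))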

MixedLM.EM()

statsmodels.regression.mixed_linear_model.MixedLM.EM(fe_params, cov_re, scale, niter_em=10, hist=None)

Run the EM algorithm from a given starting point. This is for ML (not REML), but it seems to be good enough to use for REML starting values.

Returns:
    fe_params : 1d ndarray
        The final value of the fixed effects coefficients.
    cov_re : 2d ndarray
        The final value of the random effects covariance matrix.
    scale : float
        The final value of the error variance.
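
A rough sketch against the signature shown above, with made-up data. It assumes the default random-effects structure (a single random intercept per group, hence the 1x1 cov_re) and uses crude starting values; newer releases may organize this method differently.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.mixed_linear_model import MixedLM

    # hypothetical data: 20 groups of 5 observations, random intercept per group
    rng = np.random.default_rng(0)
    groups = np.repeat(np.arange(20), 5)
    x = rng.normal(size=100)
    y = 1.0 + 2.0 * x + rng.normal(size=20)[groups] + rng.normal(size=100)
    exog = sm.add_constant(x)

    model = MixedLM(y, exog, groups)
    # starting point: OLS coefficients, unit random-effects variance, unit scale
    fe_start, *_ = np.linalg.lstsq(exog, y, rcond=None)
    fe_params, cov_re, scale = model.EM(fe_start, np.eye(1), 1.0, niter_em=10)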

tsa.stattools.q_stat()

statsmodels.tsa.stattools.q_stat(x, nobs, type='ljungbox')

Returns the Ljung-Box Q statistic.

Parameters:
    x : array-like
        Array of autocorrelation coefficients. Can be obtained from acf.
    nobs : int
        Number of observations in the entire sample (i.e., not just the length of the autocorrelation function results).

Returns:
    q-stat : array
        Ljung-Box Q-statistic for autocorrelation parameters.
    p-value : array
        P-value of the Q statistic.

Notes: Written to be used with acf.
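
A minimal sketch with simulated data, assuming the lag-0 autocorrelation is dropped before the coefficients are passed in:

    import numpy as np
    from statsmodels.tsa.stattools import acf, q_stat

    rng = np.random.default_rng(0)
    y = rng.normal(size=200)

    acfs = acf(y, nlags=10)[1:]               # autocorrelations at lags 1..10
    qstats, pvalues = q_stat(acfs, nobs=len(y))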

GMM.from_formula()

classmethod statsmodels.sandbox.regression.gmm.GMM.from_formula(formula, data, subset=None, *args, **kwargs)

Create a Model from a formula and dataframe.

Parameters:
    formula : str or generic Formula object
        The formula specifying the model.
    data : array-like
        The data for the model. See Notes.
    subset : array-like
        An array-like object of booleans, integers, or index values that indicate the subset of df to use in the model. Assumes df is a pandas.DataFrame.
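
This from_formula signature is the one shared by all statsmodels models; a GMM model additionally needs instruments and moment conditions beyond what a formula specifies. The following sketch therefore illustrates the general call pattern with OLS and made-up column names rather than with GMM itself.

    import pandas as pd
    import statsmodels.api as sm

    df = pd.DataFrame({"y": [1.0, 2.1, 2.9, 4.2, 5.1],
                       "x": [0.0, 1.0, 2.0, 3.0, 4.0]})
    model = sm.OLS.from_formula("y ~ x", data=df)   # formula + dataframe
    res = model.fit()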

CountResults.save()

statsmodels.discrete.discrete_model.CountResults.save(fname, remove_data=False)

Save a pickle of this instance.

Parameters:
    fname : string or filehandle
        fname can be a string to a file path or filename, or a filehandle.
    remove_data : bool
        If False (default), then the instance is pickled without changes. If True, then all arrays with length nobs are set to None before pickling; see the remove_data method. In some cases not all arrays will be set to None.
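
A brief sketch, assuming a Poisson fit (which yields a CountResults instance); the file name is arbitrary, and loading back is shown here via statsmodels.iolib.smpickle.load_pickle.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.iolib.smpickle import load_pickle

    rng = np.random.default_rng(0)
    x = sm.add_constant(rng.normal(size=100))
    y = rng.poisson(np.exp(x @ np.array([0.2, 0.5])))

    res = sm.Poisson(y, x).fit(disp=0)
    res.save("poisson_results.pkl", remove_data=True)  # drop nobs-length arrays
    reloaded = load_pickle("poisson_results.pkl")
    print(reloaded.params)                             # parameters survive remove_data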

TTestIndPower.solve_power()

statsmodels.stats.power.TTestIndPower.solve_power(effect_size=None, nobs1=None, alpha=None, power=None, ratio=1.0, alternative='two-sided')

Solve for any one parameter of the power of a two-sample t-test. For the t-test the keywords are effect_size, nobs1, alpha, power, and ratio; exactly one of them needs to be None, and all others need numeric values.

Parameters:
    effect_size : float
        Standardized effect size: the difference between the two means divided by the standard deviation.
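
For example, to find the required size of the first sample for a medium effect at the usual significance level and power (the unknown parameter is simply left as None):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n1 = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                              ratio=1.0, alternative='two-sided')
    print(n1)  # nobs1; the second sample has size ratio * nobs1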

VARProcess.forecast_interval()

statsmodels.tsa.vector_ar.var_model.VARProcess.forecast_interval(y, steps, alpha=0.05)

Construct forecast interval estimates assuming the y are Gaussian.

Returns:
    (lower, mid, upper) : (ndarray, ndarray, ndarray)

Notes: Lütkepohl, pp. 39-40.
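
A minimal sketch with simulated data; VARResults (returned by VAR(...).fit) is a VARProcess subclass, and the last k_ar observations are passed as y.

    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 2))  # two made-up stationary series

    res = VAR(data).fit(2)
    lower, mid, upper = res.forecast_interval(data[-res.k_ar:], steps=5, alpha=0.05)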

static OLSInfluence.cooks_distance()

statsmodels.stats.outliers_influence.OLSInfluence.cooks_distance

static OLSInfluence.cooks_distance() (cached attribute)

Cook's distance. Uses the original regression results; no loop over the nobs observations (no leave-one-out refits) is needed.
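
A short sketch with made-up data; the attribute returns the distances together with p-values.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import OLSInfluence

    rng = np.random.default_rng(0)
    x = sm.add_constant(rng.normal(size=50))
    y = x @ np.array([1.0, 2.0]) + rng.normal(size=50)

    res = sm.OLS(y, x).fit()
    cooks_d, pvals = OLSInfluence(res).cooks_distance  # one value per observation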