DescStatUV.ci_skew()

statsmodels.emplike.descriptive.DescStatUV.ci_skew

DescStatUV.ci_skew(sig=0.05, upper_bound=None, lower_bound=None) [source]

Returns the confidence interval for skewness.

Parameters:
sig : float
    The significance level. Default is .05.
upper_bound : float
    Maximum value of skewness the upper limit can be. Default is the .99 confidence limit assuming normality.
lower_bound : float
    Minimum value of skewness the lower limit can be. Default is the .99 confidence limit assuming normality.

Returns: …
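A minimal usage sketch on a simulated standard normal sample (the data and names are illustrative; the empirical likelihood interval is found numerically, so larger samples behave better):

>>> import numpy as np
>>> from statsmodels.emplike.descriptive import DescStatUV
>>> np.random.seed(0)
>>> data = np.random.standard_normal(500)
>>> el = DescStatUV(data)
>>> lower, upper = el.ci_skew(sig=0.05)   # 95% empirical likelihood CI for skewness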

DescStatUV.test_var()

statsmodels.emplike.descriptive.DescStatUV.test_var

DescStatUV.test_var(sig2_0, return_weights=False) [source]

Returns -2 x log-likelihood ratio and the p-value for the hypothesized variance.

Parameters:
sig2_0 : float
    Hypothesized variance to be tested.
return_weights : bool
    If True, returns the weights that maximize the likelihood of observing sig2_0. Default is False.

Returns:
test_results : tuple
    The log-likelihood ratio and the p-value of sig2_0.

Examples
>>> random_numbe…
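The example above is cut off; a minimal sketch along the same lines (simulated data with a known variance, illustrative names):

>>> import numpy as np
>>> from statsmodels.emplike.descriptive import DescStatUV
>>> np.random.seed(0)
>>> random_numbers = np.random.standard_normal(300) * 2   # true variance is 4
>>> el = DescStatUV(random_numbers)
>>> llr, pval = el.test_var(4)   # test H0: variance == 4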

PoissonOffsetGMLE.expandparams()

statsmodels.miscmodels.count.PoissonOffsetGMLE.expandparams

PoissonOffsetGMLE.expandparams(params)

Expand to the full parameter array when some parameters are fixed.

Parameters:
params : array
    Reduced parameter array.

Returns:
paramsfull : array
    Expanded parameter array where fixed parameters are included.

Notes
Calling this requires that self.fixed_params and self.fixed_paramsmask are defined.

Developer notes: This can be used in the log-likelihood to ... this could also be replaced by a m…
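A sketch of the mechanics, assuming fixed_params holds the full-length array of fixed values and fixed_paramsmask is True for the free parameters; the data, constructor call, and attribute setup here are illustrative, not part of the documented API:

>>> import numpy as np
>>> from statsmodels.miscmodels.count import PoissonOffsetGMLE
>>> np.random.seed(0)
>>> exog = np.column_stack([np.ones(100), np.random.standard_normal((100, 2))])
>>> endog = np.random.poisson(np.exp(exog.dot([0.5, 0.2, -0.1])))
>>> mod = PoissonOffsetGMLE(endog, exog, offset=np.zeros(100))
>>> mod.fixed_params = np.array([0.5, np.nan, np.nan])   # first coefficient fixed at 0.5
>>> mod.fixed_paramsmask = np.isnan(mod.fixed_params)    # True where the parameter is free
>>> mod.expandparams(np.array([0.25, -0.05]))            # reduced -> full parameter array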

static IVGMMResults.jval()

statsmodels.sandbox.regression.gmm.IVGMMResults.jval

static IVGMMResults.jval()

TTestIndPower.solve_power()

statsmodels.stats.power.TTestIndPower.solve_power

TTestIndPower.solve_power(effect_size=None, nobs1=None, alpha=None, power=None, ratio=1.0, alternative='two-sided') [source]

Solve for any one parameter of the power of a two-sample t-test.

For the t-test the keywords are: effect_size, nobs1, alpha, power, ratio. Exactly one needs to be None; all others need numeric values.

Parameters:
effect_size : float
    Standardized effect size, difference between the two means divided by the standard deviation. …
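For example, leaving nobs1 as None solves for the group-1 sample size (the effect size and target power are illustrative):

>>> from statsmodels.stats.power import TTestIndPower
>>> analysis = TTestIndPower()
>>> analysis.solve_power(effect_size=0.5, nobs1=None, alpha=0.05, power=0.8,
...                      ratio=1.0, alternative='two-sided')   # roughly 64 observations per group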

MixedLMResults.profile_re()

statsmodels.regression.mixed_linear_model.MixedLMResults.profile_re

MixedLMResults.profile_re(re_ix, num_low=5, dist_low=1.0, num_high=5, dist_high=1.0) [source]

Calculate a series of values along a 1-dimensional profile likelihood.

Parameters:
re_ix : integer
    The index of the variance parameter for which to construct a profile likelihood.
num_low : integer
    The number of points at which to calculate the likelihood below the MLE of the parameter of interest.
dist_low : float
    The distanc…
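A sketch with simulated grouped data (the data, formula, and distance values are illustrative, and the signature shown above is assumed):

>>> import numpy as np
>>> import pandas as pd
>>> import statsmodels.api as sm
>>> np.random.seed(0)
>>> groups = np.repeat(np.arange(30), 10)
>>> x = np.random.standard_normal(300)
>>> y = 1.0 + 0.5 * x + np.random.standard_normal(30)[groups] + np.random.standard_normal(300)
>>> data = pd.DataFrame({"y": y, "x": x, "g": groups})
>>> result = sm.MixedLM.from_formula("y ~ x", data, groups=data["g"]).fit()
>>> prof = result.profile_re(0, dist_low=0.2, dist_high=0.2)   # profile the first variance parameter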

DescStatUV.ci_kurt()

statsmodels.emplike.descriptive.DescStatUV.ci_kurt

DescStatUV.ci_kurt(sig=0.05, upper_bound=None, lower_bound=None) [source]

Returns the confidence interval for kurtosis.

Parameters:
sig : float
    The significance level. Default is .05.
upper_bound : float
    Maximum value of kurtosis the upper limit can be. Default is the .99 confidence limit assuming normality.
lower_bound : float
    Minimum value of kurtosis the lower limit can be. Default is the .99 confidence limit assuming normality.

Returns: …
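A minimal sketch, mirroring ci_skew above; the explicit bounds only narrow the numerical search and are illustrative values:

>>> import numpy as np
>>> from statsmodels.emplike.descriptive import DescStatUV
>>> np.random.seed(1)
>>> data = np.random.standard_normal(1000)
>>> el = DescStatUV(data)
>>> lower, upper = el.ci_kurt(sig=0.05, upper_bound=1.0, lower_bound=-1.0)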

DescStatUV.test_joint_skew_kurt()

statsmodels.emplike.descriptive.DescStatUV.test_joint_skew_kurt

DescStatUV.test_joint_skew_kurt(skew0, kurt0, return_weights=False) [source]

Returns -2 x log-likelihood and the p-value for the joint hypothesis test for skewness and kurtosis.

Parameters:
skew0 : float
    Skewness value to be tested.
kurt0 : float
    Kurtosis value to be tested.
return_weights : bool
    If True, function also returns the weights that maximize the likelihood ratio. Default is False.

Returns:
test_results : tuple …
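A minimal sketch, jointly testing the values a normal sample should have; zero is assumed here for both skewness and (excess) kurtosis, and the data are illustrative:

>>> import numpy as np
>>> from statsmodels.emplike.descriptive import DescStatUV
>>> np.random.seed(2)
>>> data = np.random.standard_normal(500)
>>> el = DescStatUV(data)
>>> llr, pval = el.test_joint_skew_kurt(0, 0)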

Runs.runs_test()

statsmodels.sandbox.stats.runs.Runs.runs_test

Runs.runs_test(correction=True) [source]

Basic version of the runs test.

Parameters:
correction : bool
    Following the SAS manual, for sample sizes below 50 the test statistic is corrected by 0.5. This can be turned off with correction=False; it was included to match R's tseries package, which does not use any correction.

Notes
The p-value is based on the normal distribution, with integer correction.
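A sketch on a hypothetical binary (0/1) sequence, for example the signs of residuals (the data are illustrative):

>>> import numpy as np
>>> from statsmodels.sandbox.stats.runs import Runs
>>> np.random.seed(3)
>>> x = (np.random.standard_normal(40) > 0).astype(int)
>>> z, pval = Runs(x).runs_test()                  # n < 50, so the 0.5 correction applies
>>> z, pval = Runs(x).runs_test(correction=False)  # match R's tseries behaviour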

stats.power.GofChisquarePower()

statsmodels.stats.power.GofChisquarePower

class statsmodels.stats.power.GofChisquarePower(**kwds) [source]

Statistical power calculations for a one-sample chi-square test.

Methods
plot_power([dep_var, nobs, effect_size, ...])
    Plot power with number of observations or effect size on the x-axis.
power(effect_size, nobs, alpha, n_bins[, ddof])
    Calculate the power of a chi-square test for one sample.
solve_power([effect_size, nobs, alpha, ...])
    Solve for any one parameter of the power of a one-sample ch…
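A brief sketch (the effect size, bin count, and target power are illustrative):

>>> from statsmodels.stats.power import GofChisquarePower
>>> analysis = GofChisquarePower()
>>> analysis.power(effect_size=0.3, nobs=100, alpha=0.05, n_bins=5)               # power at n=100
>>> analysis.solve_power(effect_size=0.3, nobs=None, alpha=0.05, power=0.8, n_bins=5)  # n for 80% power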