sandbox.stats.multicomp.maxzero()

statsmodels.sandbox.stats.multicomp.maxzero(x) [source]

Find all up zero crossings and return the index of the highest. Not used anymore.

>>> np.random.seed(12345)
>>> x = np.random.randn(8)
>>> x
array([-0.20470766,  0.47894334, -0.51943872, -0.5557303 ,  1.96578057,
        1.39340583,  0.09290788,  0.28174615])
>>> maxzero(x)
(4, array([1, 4]))

no up-zero-crossing at end

>>> np.random.seed(0)
…
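The excerpt does not show the implementation; a minimal NumPy sketch of such an up-zero-crossing search, written only to reproduce the (highest index, all indices) pair from the doctest above, could look like this (maxzero_sketch is an illustrative name, not the statsmodels code):

import numpy as np

def maxzero_sketch(x):
    # "up" zero crossings: positions i where x[i-1] < 0 and x[i] >= 0
    x = np.asarray(x)
    crossings = np.nonzero((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    maxind = crossings.max() if crossings.size else None
    return maxind, crossings

np.random.seed(12345)
x = np.random.randn(8)
print(maxzero_sketch(x))   # matches the doctest: (4, array([1, 4]))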

sandbox.stats.multicomp.line

statsmodels.sandbox.stats.multicomp.line = ''

str(object='') -> string. Return a nice string representation of the object. If the argument is a string, the return value is the same object.

sandbox.stats.multicomp.homogeneous_subsets()

statsmodels.sandbox.stats.multicomp.homogeneous_subsets(vals, dcrit) [source]

Recursively check all pairs of vals for minimum distance.

Step-down method as in the Newman-Keuls and Ryan procedures. This is not a closed procedure since not all partitions are checked.

Parameters:
vals : array_like
    values that are pairwise compared
dcrit : array_like or float
    critical distance for rejecting, either float, or 2-dimensional array with distance …
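A minimal call sketch based on the signature shown above; the group means and the single float critical distance are illustrative assumptions, and since the return structure is not shown in this excerpt the result is simply printed:

import numpy as np
from statsmodels.sandbox.stats.multicomp import homogeneous_subsets

# hypothetical ordered group means and one critical distance below which
# a pair is treated as "not significantly different"
vals = np.array([10.1, 10.4, 11.9, 12.0, 13.5])
dcrit = 1.0
print(homogeneous_subsets(vals, dcrit))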

sandbox.stats.multicomp.GroupsStats()

class statsmodels.sandbox.stats.multicomp.GroupsStats(x, useranks=False, uni=None, intlab=None) [source]

Statistics by groups (another version).

Group statistics as a class with lazy evaluation (not yet - decorators are still missing). Written this time as an equivalent of scipy.stats.rankdata:

gs = GroupsStats(X, useranks=True)
assert_almost_equal(gs.groupmeanfilter, stats.rankdata(X[:,0]), 15)

TODO: incomplete docstrings

Methods
groupdemean()
groupss…
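A usage sketch following the rankdata comparison quoted above; the two-column layout of X (first column observations, second column integer group labels) is an assumption inferred from that example rather than stated in this excerpt:

import numpy as np
from statsmodels.sandbox.stats.multicomp import GroupsStats

# assumed layout: column 0 = observations, column 1 = integer group labels
X = np.array([[1.2, 0], [0.7, 0], [2.5, 1], [2.1, 1], [3.3, 2], [2.9, 2]])
gs = GroupsStats(X, useranks=True)
print(gs.groupmeanfilter)   # per-observation group statistic, broadcast back to each row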

sandbox.stats.multicomp.get_tukeyQcrit()

statsmodels.sandbox.stats.multicomp.get_tukeyQcrit(k, df, alpha=0.05) [source]

Return critical values for Tukey's HSD (Q).

Parameters:
k : int in {2, ..., 10}
    number of tests
df : int
    degrees of freedom of error term
alpha : {0.05, 0.01}
    type 1 error, 1 - confidence level

Not enough error checking for limitations: …
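A usage sketch with the documented signature; the numeric inputs are illustrative:

from statsmodels.sandbox.stats.multicomp import get_tukeyQcrit

# studentized-range critical value for k=3 groups, 20 error df, alpha = 0.05
q_crit = get_tukeyQcrit(3, 20, alpha=0.05)
print(q_crit)   # about 3.58 according to standard studentized-range tables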

sandbox.stats.multicomp.fdrcorrection0()

statsmodels.sandbox.stats.multicomp.fdrcorrection0(pvals, alpha=0.05, method='indep', is_sorted=False)

P-value correction for false discovery rate.

This covers Benjamini/Hochberg for independent or positively correlated tests and Benjamini/Yekutieli for general or negatively correlated tests. Both are available in the function multipletests, as method='fdr_bh' and method='fdr_by' respectively.

Parameters:
pvals : array_like
    set of p-values of the individual tests.
…
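A usage sketch; the p-values are made up, and the two-element return (reject flags, corrected p-values) follows the behaviour of the equivalent multipletests call with method='fdr_bh' mentioned above:

import numpy as np
from statsmodels.sandbox.stats.multicomp import fdrcorrection0

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
reject, pvals_corrected = fdrcorrection0(pvals, alpha=0.05, method='indep')
print(reject)            # boolean array: which hypotheses are rejected at FDR 0.05
print(pvals_corrected)   # Benjamini/Hochberg adjusted p-values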

sandbox.stats.multicomp.ecdf()

statsmodels.sandbox.stats.multicomp.ecdf(x)

No frills empirical cdf, used in fdrcorrection.
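A small call sketch; the input is assumed to already be sorted, as it is when this helper is used inside fdrcorrection, and the evenly spaced plotting positions in the comment are the expected output under that assumption:

import numpy as np
from statsmodels.sandbox.stats.multicomp import ecdf

p_sorted = np.array([0.01, 0.03, 0.20, 0.50])   # assumed already sorted
print(ecdf(p_sorted))                            # expected: [0.25 0.5  0.75 1.  ]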

sandbox.stats.multicomp.distance_st_range()

statsmodels.sandbox.stats.multicomp.distance_st_range(mean_all, nobs_all, var_all, df=None, triu=False) [source]

Pairwise distance matrix, outsourced from tukeyhsd.

CHANGED: meandiffs are with sign, studentized range uses abs. q_crit added for testing.

TODO: error in variance calculation when nobs_all is scalar, missing 1/n
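A heavily hedged call sketch: the means, per-group observation counts, variances, and df below are made up; nobs_all is passed as an array to avoid the scalar-nobs issue noted in the TODO; whether var_all is per-group variances or a pooled scalar is not shown in this excerpt (per-group values are assumed here); and the return structure is not shown, so it is simply printed:

import numpy as np
from statsmodels.sandbox.stats.multicomp import distance_st_range

means = np.array([10.0, 11.5, 13.2])      # hypothetical group means
nobs = np.array([12, 15, 10])             # observations per group
var_ = np.array([2.3, 2.7, 2.1])          # assumed per-group variances
print(distance_st_range(means, nobs, var_, df=34))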

sandbox.stats.multicomp.compare_ordered()

statsmodels.sandbox.stats.multicomp.compare_ordered(vals, alpha) [source]

Simple ordered sequential comparison of means.

vals : array_like
    means or rankmeans for independent groups

Incomplete: no return value, not used yet.

sandbox.stats.multicomp.ccols

statsmodels.sandbox.stats.multicomp.ccols = array([ 2,  3,  4,  5,  6,  7,  8,  9, 10])