sklearn.cross_validation.permutation_test_score()

Warning: DEPRECATED

sklearn.cross_validation.permutation_test_score(estimator, X, y, cv=None, n_permutations=100, n_jobs=1, labels=None, random_state=0, verbose=0, scoring=None) [source]

Evaluate the significance of a cross-validated score with permutations.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.permutation_test_score instead.

Read more in the User Guide.

Parameters:
estimator : estimator object implementing 'fit'
    The object to use to fit the data.
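A minimal sketch of the recommended replacement, sklearn.model_selection.permutation_test_score; the iris data and the linear SVC are illustrative assumptions, not part of the reference entry.

# Minimal sketch of the non-deprecated replacement (illustrative dataset and estimator).
from sklearn.datasets import load_iris
from sklearn.model_selection import permutation_test_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Returns the true cross-validated score, the scores under label permutation,
# and an empirical p-value for the observed score.
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel='linear'), X, y, cv=5, n_permutations=100, random_state=0)
print(score, pvalue)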

sklearn.cross_validation.train_test_split()

Warning: DEPRECATED

sklearn.cross_validation.train_test_split(*arrays, **options) [source]

Split arrays or matrices into random train and test subsets.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.train_test_split instead.

Quick utility that wraps input validation and next(iter(ShuffleSplit(n_samples))) and application to input data into a single call for splitting (and optionally subsampling) data in a one-liner.

Read more in the User Guide.
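A minimal sketch using the non-deprecated sklearn.model_selection version; the toy arrays are illustrative only.

# Minimal sketch of the replacement (toy data for illustration).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape((10, 2))
y = np.arange(10)
# Hold out 25% of the samples as a test set, with a fixed seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)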

sklearn.cross_validation.cross_val_predict()

Warning: DEPRECATED

sklearn.cross_validation.cross_val_predict(estimator, X, y=None, cv=None, n_jobs=1, verbose=0, fit_params=None, pre_dispatch='2*n_jobs') [source]

Generate cross-validated estimates for each input data point.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.cross_val_predict instead.

Read more in the User Guide.

Parameters:
estimator : estimator object implementing 'fit' and 'predict'
    The object to use to fit the data.
X : array-like
    The data to fit.
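A minimal sketch of the replacement in sklearn.model_selection; the diabetes dataset and Lasso are illustrative assumptions.

# Minimal sketch of the non-deprecated replacement (illustrative dataset and estimator).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_predict

X, y = load_diabetes(return_X_y=True)
# Each sample receives exactly one prediction, made by the fold that held it out.
y_pred = cross_val_predict(Lasso(), X, y, cv=5)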

sklearn.cross_validation.cross_val_score()

Warning: DEPRECATED

sklearn.cross_validation.cross_val_score(estimator, X, y=None, scoring=None, cv=None, n_jobs=1, verbose=0, fit_params=None, pre_dispatch='2*n_jobs') [source]

Evaluate a score by cross-validation.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.cross_val_score instead.

Read more in the User Guide.

Parameters:
estimator : estimator object implementing 'fit'
    The object to use to fit the data.
X : array-like
    The data to fit.
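A minimal sketch of the replacement in sklearn.model_selection; iris and logistic regression are illustrative choices.

# Minimal sketch of the non-deprecated replacement (illustrative dataset and estimator).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# One accuracy value per fold; summarize with mean and standard deviation.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring='accuracy')
print(scores.mean(), scores.std())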

sklearn.covariance.shrunk_covariance()

sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1) [source]

Calculates a covariance matrix shrunk on the diagonal.

Read more in the User Guide.

Parameters:
emp_cov : array-like, shape (n_features, n_features)
    Covariance matrix to be shrunk.
shrinkage : float, 0 <= shrinkage <= 1
    Coefficient in the convex combination used for the computation of the shrunk estimate.

Returns:
shrunk_cov : array-like
    Shrunk covariance.

Notes
The regularized (shrunk) covariance is given by
(1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features),
where mu = trace(cov) / n_features.
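A minimal sketch; the random Gaussian data stands in for real observations.

# Minimal sketch (illustrative data).
import numpy as np
from sklearn.covariance import empirical_covariance, shrunk_covariance

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
emp_cov = empirical_covariance(X)
# Convex combination of the empirical covariance and a scaled identity matrix.
shrunk_cov = shrunk_covariance(emp_cov, shrinkage=0.1)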

sklearn.cross_validation.check_cv()

Warning: DEPRECATED

sklearn.cross_validation.check_cv(cv, X=None, y=None, classifier=False) [source]

Input checker utility for building a CV in a user-friendly way.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.check_cv instead.

Parameters:
cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    None, to use the default 3-fold cross-validation;
    an integer, to specify the number of folds.
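A minimal sketch of the replacement in sklearn.model_selection; the toy labels are illustrative only.

# Minimal sketch of the non-deprecated replacement (toy labels).
import numpy as np
from sklearn.model_selection import check_cv

y = np.array([0, 0, 1, 1, 0, 1])
# With classifier=True and an integer cv, this resolves to a stratified K-fold splitter.
cv = check_cv(cv=3, y=y, classifier=True)
for train_idx, test_idx in cv.split(np.zeros((6, 1)), y):
    print(train_idx, test_idx)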

sklearn.covariance.empirical_covariance()

sklearn.covariance.empirical_covariance(X, assume_centered=False) [source]

Computes the Maximum likelihood covariance estimator.

Parameters:
X : ndarray, shape (n_samples, n_features)
    Data from which to compute the covariance estimate.
assume_centered : boolean
    If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False, data are centered before computation.

Returns:
covariance : 2D ndarray, shape (n_features, n_features)
    Empirical covariance (Maximum Likelihood Estimator).
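A minimal sketch; random data stands in for real observations.

# Minimal sketch (illustrative data).
import numpy as np
from sklearn.covariance import empirical_covariance

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
# Maximum likelihood estimate of the covariance; shape (n_features, n_features).
cov = empirical_covariance(X, assume_centered=False)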

sklearn.covariance.graph_lasso()

sklearn.covariance.graph_lasso(emp_cov, alpha, cov_init=None, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.2204460492503131e-16, return_n_iter=False) [source]

l1-penalized covariance estimator.

Read more in the User Guide.

Parameters:
emp_cov : 2D ndarray, shape (n_features, n_features)
    Empirical covariance from which to compute the covariance estimate.
alpha : positive float
    The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance.
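A minimal sketch; the empirical covariance of random data and alpha=0.05 are illustrative assumptions, and the two-value return assumes the default return_costs and return_n_iter.

# Minimal sketch (illustrative data and alpha).
import numpy as np
from sklearn.covariance import empirical_covariance, graph_lasso

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
emp_cov = empirical_covariance(X)
# Returns the estimated covariance and its (sparse) inverse, the precision matrix.
covariance, precision = graph_lasso(emp_cov, alpha=0.05)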

sklearn.covariance.ledoit_wolf()

sklearn.covariance.ledoit_wolf(X, assume_centered=False, block_size=1000) [source]

Estimates the shrunk Ledoit-Wolf covariance matrix.

Read more in the User Guide.

Parameters:
X : array-like, shape (n_samples, n_features)
    Data from which to compute the covariance estimate.
assume_centered : boolean, default=False
    If True, data are not centered before computation. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, data are centered before computation.
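A minimal sketch; random data is illustrative only.

# Minimal sketch (illustrative data).
import numpy as np
from sklearn.covariance import ledoit_wolf

rng = np.random.RandomState(0)
X = rng.randn(40, 10)
# Returns the shrunk covariance together with the shrinkage coefficient estimated from the data.
shrunk_cov, shrinkage = ledoit_wolf(X)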

sklearn.cluster.ward_tree()

sklearn.cluster.ward_tree(X, connectivity=None, n_clusters=None, return_distance=False) [source]

Ward clustering based on a feature matrix.

Recursively merges the pair of clusters that minimally increases within-cluster variance. The inertia matrix uses a Heapq-based representation. This is the structured version, that takes into account some topological structure between samples.

Read more in the User Guide.

Parameters:
X : array, shape (n_samples, n_features)
    Feature matrix representing n_samples samples to be clustered.
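A minimal sketch of the unstructured case (no connectivity matrix); the random data is illustrative, and the four-value return shown here assumes return_distance=False.

# Minimal sketch (illustrative data, no connectivity constraint).
import numpy as np
from sklearn.cluster import ward_tree

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
# children encodes the merge tree; parents is None when no connectivity matrix is given.
children, n_components, n_leaves, parents = ward_tree(X)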