cross_validation.LabelShuffleSplit()

Warning: DEPRECATED.
class sklearn.cross_validation.LabelShuffleSplit(labels, n_iter=5, test_size=0.2, train_size=None, random_state=None)
Shuffle-Labels-Out cross-validation iterator.
Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.GroupShuffleSplit instead.
Provides randomized train/test indices to split data according to a third-party provided label. This label information can be used to encode arbitrary domain specific stratifications of the samples as integers.
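A minimal usage sketch of the recommended replacement, sklearn.model_selection.GroupShuffleSplit; the toy arrays X, y and groups are illustrative values, not taken from the documentation:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy data: 8 samples belonging to 4 groups (illustrative values)
X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])

# Randomized splits in which each group lands entirely in train or in test
gss = GroupShuffleSplit(n_splits=5, test_size=0.25, random_state=0)
for train_idx, test_idx in gss.split(X, y, groups):
    print("train groups:", np.unique(groups[train_idx]),
          "test groups:", np.unique(groups[test_idx]))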

cross_validation.LabelKFold()

Warning: DEPRECATED.
class sklearn.cross_validation.LabelKFold(labels, n_folds=3)
K-fold iterator variant with non-overlapping labels.
Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.GroupKFold instead.
The same label will not appear in two different folds (the number of distinct labels has to be at least equal to the number of folds). The folds are approximately balanced in the sense that the number of distinct labels is approximately the same in each fold.
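A minimal sketch of the recommended replacement, sklearn.model_selection.GroupKFold, with illustrative toy data:

import numpy as np
from sklearn.model_selection import GroupKFold

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]])
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 2, 2])  # 3 distinct groups >= n_splits

gkf = GroupKFold(n_splits=3)
for train_idx, test_idx in gkf.split(X, y, groups):
    # No group appears in both the training and the test indices
    print("train:", train_idx, "test:", test_idx)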

cross_validation.KFold()

Warning: DEPRECATED.
class sklearn.cross_validation.KFold(n, n_folds=3, shuffle=False, random_state=None)
K-Folds cross-validation iterator.
Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.KFold instead.
Provides train/test indices to split data into train/test sets. Splits the dataset into k consecutive folds (without shuffling by default). Each fold is then used as a validation set once while the k - 1 remaining folds form the training set.
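A minimal sketch of the replacement class sklearn.model_selection.KFold; note that the new API takes n_splits and infers the number of samples from the data passed to split(). The toy data is illustrative:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    print("train:", train_idx, "test:", test_idx)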

cross_decomposition.PLSSVD()

class sklearn.cross_decomposition.PLSSVD(n_components=2, scale=True, copy=True)
Partial Least Square SVD.
Simply performs an SVD on the cross-covariance matrix X'Y; there is no iterative deflation here. Read more in the User Guide.
Parameters:
n_components : int, default 2. Number of components to keep.
scale : boolean, default True. Whether to scale X and Y.
copy : boolean, default True. Whether to copy X and Y, or perform in-place computations.
Attributes:
x_weights_ : array, …
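A short usage sketch; the two toy data blocks are illustrative values:

import numpy as np
from sklearn.cross_decomposition import PLSSVD

# Two-block toy dataset (illustrative values)
X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]])
Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])

pls_svd = PLSSVD(n_components=2)
pls_svd.fit(X, Y)
# Project both blocks onto the singular directions of X'Y
X_c, Y_c = pls_svd.transform(X, Y)
print(X_c.shape, Y_c.shape)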

cross_decomposition.PLSRegression()

class sklearn.cross_decomposition.PLSRegression(n_components=2, scale=True, max_iter=500, tol=1e-06, copy=True)
PLS regression.
PLSRegression implements the PLS 2-blocks regression known as PLS2, or PLS1 in the case of a one-dimensional response. This class inherits from _PLS with mode="A", deflation_mode="regression", norm_y_weights=False and algorithm="nipals". Read more in the User Guide.
Parameters:
n_components : int, (default 2). Number of components to keep.
scale : boolean, (default True). Whether to scale the data. …
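A short usage sketch on two-block toy data; the values are illustrative:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]])
Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])

pls2 = PLSRegression(n_components=2)
pls2.fit(X, Y)
# Predict the Y block from X using the fitted latent components
Y_pred = pls2.predict(X)
print(Y_pred)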

cross_decomposition.PLSCanonical()

class sklearn.cross_decomposition.PLSCanonical(n_components=2, scale=True, algorithm='nipals', max_iter=500, tol=1e-06, copy=True)
PLSCanonical implements the 2-blocks canonical PLS of the original Wold algorithm [Tenenhaus 1998] p.204, referred to as PLS-C2A in [Wegelin 2000]. This class inherits from PLS with mode="A", deflation_mode="canonical", norm_y_weights=True and algorithm="nipals", but svd should provide similar results up to numerical errors. Read more in the User Guide.
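A short usage sketch on two-block toy data; the values are illustrative:

import numpy as np
from sklearn.cross_decomposition import PLSCanonical

X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]])
Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])

plsca = PLSCanonical(n_components=2)
plsca.fit(X, Y)
# Canonical scores for both blocks
X_c, Y_c = plsca.transform(X, Y)
print(X_c.shape, Y_c.shape)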

cross_decomposition.CCA()

class sklearn.cross_decomposition.CCA(n_components=2, scale=True, max_iter=500, tol=1e-06, copy=True)
CCA: Canonical Correlation Analysis. CCA inherits from PLS with mode="B" and deflation_mode="canonical". Read more in the User Guide.
Parameters:
n_components : int, (default 2). Number of components to keep.
scale : boolean, (default True). Whether to scale the data.
max_iter : an integer, (default 500). The maximum number of iterations of the NIPALS inner loop.
tol : non-negative real, (default 1e-06). The tolerance used in the iterative algorithm. …
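A short usage sketch on two-block toy data; the values are illustrative:

import numpy as np
from sklearn.cross_decomposition import CCA

X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [3., 5., 4.]])
Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])

cca = CCA(n_components=1)
cca.fit(X, Y)
# Maximally correlated projections of the two blocks
X_c, Y_c = cca.transform(X, Y)
print(X_c.shape, Y_c.shape)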

Cross-validation on Digits Dataset Exercise

A tutorial exercise using cross-validation with an SVM on the Digits dataset. This exercise is used in the Cross-validation generators part of the "Model selection: choosing estimators and their parameters" section of "A tutorial on statistical-learning for scientific data processing". The snippet below is truncated; a completed sketch follows it.

print(__doc__)
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn import datasets, svm

digits = datasets.load_digits()
X = digits.data
y = digits.target

svc = svm.SVC(kernel='linear')
…
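A self-contained sketch of how the truncated exercise above might continue, scoring the linear SVC over a grid of C values with cross_val_score. The C grid, n_jobs=1 and the default 3-fold cross-validation are assumptions, not taken from the snippet:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn import datasets, svm

digits = datasets.load_digits()
X, y = digits.data, digits.target

svc = svm.SVC(kernel='linear')
C_s = np.logspace(-10, 0, 10)   # assumed grid of regularization values

scores = []
for C in C_s:
    svc.C = C
    # Mean cross-validated accuracy for this value of C
    scores.append(np.mean(cross_val_score(svc, X, y, n_jobs=1)))
print(scores)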

Cross-validation on diabetes Dataset Exercise

A tutorial exercise which uses cross-validation with linear models. This exercise is used in the Cross-validated estimators part of the "Model selection: choosing estimators and their parameters" section of "A tutorial on statistical-learning for scientific data processing". The snippet below is truncated; a completed sketch follows it.

from __future__ import print_function
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.linear_model import LassoCV
from sklearn.linear_model import Lasso
…
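A self-contained sketch along the lines of this exercise: score a plain Lasso over a grid of alphas with cross_val_score, then let LassoCV pick alpha by internal cross-validation. The alpha grid, cv=3 and random_state values are assumptions, not taken from the snippet:

import numpy as np
from sklearn import datasets
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import cross_val_score

diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target

# Cross-validated score of Lasso for each alpha in an assumed grid
alphas = np.logspace(-4, -0.5, 30)
lasso = Lasso(random_state=0)
scores = [np.mean(cross_val_score(lasso.set_params(alpha=a), X, y, cv=3))
          for a in alphas]

# LassoCV selects alpha itself by internal cross-validation
lasso_cv = LassoCV(alphas=alphas, random_state=0).fit(X, y)
print(lasso_cv.alpha_)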

covariance.ShrunkCovariance()

class sklearn.covariance.ShrunkCovariance(store_precision=True, assume_centered=False, shrinkage=0.1)
Covariance estimator with shrinkage. Read more in the User Guide.
Parameters:
store_precision : boolean, default True. Specify if the estimated precision is stored.
shrinkage : float, 0 <= shrinkage <= 1, default 0.1. Coefficient in the convex combination used for the computation of the shrunk estimate.
assume_centered : boolean, default False. If True, data are not centered before computation. …
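A minimal usage sketch; the random Gaussian data is illustrative:

import numpy as np
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
X = rng.randn(100, 5)          # illustrative Gaussian samples

cov = ShrunkCovariance(shrinkage=0.1).fit(X)
print(cov.covariance_)         # shrunk covariance estimate
print(cov.precision_)          # stored because store_precision=True by default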