model_selection.RandomizedSearchCV()

class sklearn.model_selection.RandomizedSearchCV(estimator, param_distributions, n_iter=10, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score='raise', return_train_score=True) [source] Randomized search on hyperparameters. RandomizedSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used.
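
Example (a minimal sketch; the estimator, dataset, parameter distributions, and n_iter below are illustrative choices, not part of the signature above):

>>> from scipy.stats import randint
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.model_selection import RandomizedSearchCV
>>> X, y = load_iris(return_X_y=True)
>>> # distributions are sampled with replacement; plain lists would be sampled without
>>> param_distributions = {"n_estimators": randint(10, 200), "max_depth": randint(2, 10)}
>>> search = RandomizedSearchCV(RandomForestClassifier(random_state=0), param_distributions,
...                             n_iter=20, cv=5, random_state=0)
>>> search.fit(X, y)
>>> print(search.best_params_, search.best_score_)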

model_selection.PredefinedSplit()

class sklearn.model_selection.PredefinedSplit(test_fold) [source] Predefined split cross-validator. Splits the data into training/test set folds according to a predefined scheme. Each sample can be assigned to at most one test set fold, as specified by the user through the test_fold parameter. Read more in the User Guide. An example follows below.
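
Example (a minimal sketch with illustrative fold assignments; a test_fold value of -1 keeps that sample out of every test set, so it only ever appears in training sets):

>>> import numpy as np
>>> from sklearn.model_selection import PredefinedSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> test_fold = [0, 1, -1, 1]  # sample 0 -> fold 0, samples 1 and 3 -> fold 1
>>> ps = PredefinedSplit(test_fold)
>>> ps.get_n_splits()  # 2
>>> for train_index, test_index in ps.split():
...     print("TRAIN:", train_index, "TEST:", test_index)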

model_selection.ParameterSampler()

class sklearn.model_selection.ParameterSampler(param_distributions, n_iter, random_state=None) [source] Generator on parameters sampled from given distributions. Non-deterministic iterable over random candidate combinations for hyperparameter search. If all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution, sampling with replacement is used. It is highly recommended to use continuous distributions for continuous parameters.
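
Example (a minimal sketch; the distributions and n_iter are illustrative):

>>> from scipy.stats import expon
>>> from sklearn.model_selection import ParameterSampler
>>> param_distributions = {"C": expon(scale=1.0),        # continuous distribution
...                        "kernel": ["linear", "rbf"]}  # discrete list of candidates
>>> # sampling is with replacement here because a distribution is present
>>> sampler = ParameterSampler(param_distributions, n_iter=4, random_state=0)
>>> for params in sampler:
...     print(params)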

model_selection.ParameterGrid()

class sklearn.model_selection.ParameterGrid(param_grid) [source] Grid of parameters with a discrete number of values for each. Can be used to iterate over parameter value combinations with the Python built-in function iter. Read more in the User Guide. Parameters: param_grid : dict of string to sequence, or sequence of such The parameter grid to explore, as a dictionary mapping estimator parameters to sequences of allowed values. An empty dict signifies default parameters. A sequence of dicts signifies a sequence of grids to search, and is useful to avoid exploring parameter combinations that make no sense or have no effect.
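
Example (a minimal sketch; the grid below is illustrative):

>>> from sklearn.model_selection import ParameterGrid
>>> param_grid = {"kernel": ["linear", "rbf"], "C": [1, 10]}
>>> grid = ParameterGrid(param_grid)
>>> len(grid)  # 4 combinations
>>> for params in grid:
...     print(params)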

model_selection.LeavePOut()

class sklearn.model_selection.LeavePOut(p) [source] Leave-P-Out cross-validator. Provides train/test indices to split data in train/test sets. This results in testing on all distinct samples of size p, while the remaining n - p samples form the training set in each iteration. Note: LeavePOut(p) is NOT equivalent to KFold(n_splits=n_samples // p), which creates non-overlapping test sets. Due to the high number of iterations, which grows combinatorically with the number of samples, this cross-validation method can be very costly.
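
Example (a minimal sketch on a toy dataset; with 4 samples and p=2 there are C(4, 2) = 6 train/test splits):

>>> import numpy as np
>>> from sklearn.model_selection import LeavePOut
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([0, 0, 1, 1])
>>> lpo = LeavePOut(p=2)
>>> lpo.get_n_splits(X)  # 6
>>> for train_index, test_index in lpo.split(X, y):
...     print("TRAIN:", train_index, "TEST:", test_index)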

model_selection.LeavePGroupsOut()

class sklearn.model_selection.LeavePGroupsOut(n_groups) [source] Leave P Group(s) Out cross-validator. Provides train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain-specific stratifications of the samples as integers. For instance the groups could be the year of collection of the samples, allowing cross-validation against time-based splits. The difference between LeavePGroupsOut and LeaveOneGroupOut is that the former builds the test sets with all the samples assigned to p different values of the groups, while the latter uses samples all assigned to the same group.
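
Example (a minimal sketch with illustrative data and group labels; with 3 distinct groups and n_groups=2 there are 3 splits):

>>> import numpy as np
>>> from sklearn.model_selection import LeavePGroupsOut
>>> X = np.array([[1], [2], [3], [4], [5], [6]])
>>> y = np.array([0, 1, 0, 1, 0, 1])
>>> groups = np.array([1, 1, 2, 2, 3, 3])
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> lpgo.get_n_splits(X, y, groups)  # 3
>>> for train_index, test_index in lpgo.split(X, y, groups):
...     print("TRAIN:", train_index, "TEST:", test_index)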

model_selection.LeaveOneOut

class sklearn.model_selection.LeaveOneOut [source] Leave-One-Out cross-validator. Provides train/test indices to split data in train/test sets. Each sample is used once as a test set (singleton) while the remaining samples form the training set. Note: LeaveOneOut() is equivalent to KFold(n_splits=n) and LeavePOut(p=1) where n is the number of samples. Due to the high number of test sets (which is the same as the number of samples) this cross-validation method can be very costly. For large datasets one should favor KFold, ShuffleSplit or StratifiedKFold.
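
Example (a minimal sketch on a toy dataset; each of the 2 samples is used once as the singleton test set):

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)  # 2
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)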

model_selection.LeaveOneGroupOut

class sklearn.model_selection.LeaveOneGroupOut [source] Leave One Group Out cross-validator. Provides train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain-specific stratifications of the samples as integers. For instance the groups could be the year of collection of the samples, allowing cross-validation against time-based splits. Read more in the User Guide. An example follows below.
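
Example (a minimal sketch with illustrative group labels; each distinct group value is held out once as the test set):

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneGroupOut
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([1, 2, 1, 2])
>>> groups = np.array([1, 1, 2, 2])
>>> logo = LeaveOneGroupOut()
>>> logo.get_n_splits(X, y, groups)  # 2
>>> for train_index, test_index in logo.split(X, y, groups):
...     print("TRAIN:", train_index, "TEST:", test_index)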

model_selection.KFold()

class sklearn.model_selection.KFold(n_splits=3, shuffle=False, random_state=None) [source] K-Folds cross-validator. Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds (without shuffling by default). Each fold is then used once as a validation set while the k - 1 remaining folds form the training set. Read more in the User Guide. Parameters: n_splits : int, default=3 Number of folds. Must be at least 2. shuffle : boolean, optional Whether to shuffle the data before splitting into batches.
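
Example (a minimal sketch; n_splits=2 on a 4-sample toy dataset gives two consecutive folds, each used once as the validation set):

>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4])
>>> kf = KFold(n_splits=2)
>>> kf.get_n_splits(X)  # 2
>>> for train_index, test_index in kf.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)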

model_selection.GroupShuffleSplit()

class sklearn.model_selection.GroupShuffleSplit(n_splits=5, test_size=0.2, train_size=None, random_state=None) [source] Shuffle-Group(s)-Out cross-validation iterator. Provides randomized train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain-specific stratifications of the samples as integers. For instance the groups could be the year of collection of the samples, allowing cross-validation against time-based splits.
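
Example (a minimal sketch; the data, group labels, and test_size are illustrative; whole groups are assigned to either the training or the test side of each randomized split):

>>> import numpy as np
>>> from sklearn.model_selection import GroupShuffleSplit
>>> X = np.array([[1], [2], [3], [4], [5], [6]])
>>> y = np.array([0, 0, 1, 1, 0, 1])
>>> groups = np.array([1, 1, 2, 2, 3, 3])
>>> gss = GroupShuffleSplit(n_splits=3, test_size=0.33, random_state=0)
>>> for train_index, test_index in gss.split(X, y, groups):
...     print("TRAIN:", train_index, "TEST:", test_index)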