multiclass.OutputCodeClassifier()

class sklearn.multiclass.OutputCodeClassifier(estimator, code_size=1.5, random_state=None, n_jobs=1) [source] (Error-Correcting) Output-Code multiclass strategy. Output-code based strategies consist in representing each class with a binary code (an array of 0s and 1s). At fitting time, one binary classifier per bit in the code book is fitted. At prediction time, the classifiers are used to project new points into the class space and the class closest to the points is chosen. The main advantage of these strategies is that the number of classifiers used can be controlled by the user, either for compressing the model (0 < code_size < 1) or for making the model more robust to errors (code_size > 1).
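A minimal usage sketch, not part of the original snippet: it fits an OutputCodeClassifier on the iris data with a LogisticRegression base estimator (both assumed choices), using code_size=2 to add redundant bits to the code book.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# code_size > 1 makes the code book longer than strictly needed,
# adding redundancy that can correct individual classifier errors
clf = OutputCodeClassifier(LogisticRegression(), code_size=2, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))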

multiclass.OneVsRestClassifier()

class sklearn.multiclass.OneVsRestClassifier(estimator, n_jobs=1) [source] One-vs-the-rest (OvR) multiclass/multilabel strategy. Also known as one-vs-all, this strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only n_classes classifiers are needed), one advantage of this approach is its interpretability: since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier.
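A short usage sketch, with LinearSVC as an assumed base estimator (the snippet itself names none):

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

iris = load_iris()
X, y = iris.data, iris.target

# one binary LinearSVC is fitted per class; for iris this means 3 classifiers
ovr = OneVsRestClassifier(LinearSVC(random_state=0))
ovr.fit(X, y)
print(len(ovr.estimators_))   # 3, one per class
print(ovr.predict(X[:5]))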

multiclass.OneVsOneClassifier()

class sklearn.multiclass.OneVsOneClassifier(estimator, n_jobs=1) [source] One-vs-one multiclass strategy. This strategy consists in fitting one classifier per pair of classes. At prediction time, the class which received the most votes is selected. Since it requires fitting n_classes * (n_classes - 1) / 2 classifiers, this method is usually slower than one-vs-the-rest, due to its O(n_classes^2) complexity. However, it may be advantageous for algorithms such as kernel algorithms which don't scale well with n_samples, because each individual learning problem only involves a small subset of the data.
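A minimal sketch, with a linear-kernel SVC as an assumed base estimator, showing the pairwise classifier count:

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target

# 3 classes -> 3 * (3 - 1) / 2 = 3 pairwise classifiers
ovo = OneVsOneClassifier(SVC(kernel='linear'))
ovo.fit(X, y)
print(len(ovo.estimators_))   # 3
print(ovo.predict(X[:5]))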

Multi-output Decision Tree Regression

An example to illustrate multi-output regression with decision trees. A decision tree is used to predict simultaneously the noisy x and y observations of a circle given a single underlying feature. As a result, it learns local linear regressions approximating the circle. We can see that if the maximum depth of the tree (controlled by the max_depth parameter) is set too high, the decision tree learns overly fine details of the training data and fits the noise, i.e. it overfits.
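A condensed sketch in the spirit of the example; the sample counts and noise constants here are assumptions, not the example's own code. A single DecisionTreeRegressor with a two-column target predicts both circle coordinates at once.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(100, 1) - 100, axis=0)     # single underlying feature
y = np.array([np.pi * np.sin(X).ravel(),
              np.pi * np.cos(X).ravel()]).T            # (x, y) points on a circle
y[::5, :] += 0.5 - rng.rand(20, 2)                     # add noise to every 5th sample

# a moderate max_depth avoids fitting the noise; a very deep tree would overfit
regr = DecisionTreeRegressor(max_depth=5)
regr.fit(X, y)
print(regr.predict([[0.0]]))                           # predicted (x, y) pair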

Multi-dimensional scaling

An illustration of metric and non-metric MDS on generated noisy data. The reconstructed points from metric MDS and non-metric MDS are slightly shifted to avoid overlapping. The example begins with the following header and imports:

# Author: Nelle Varoquaux <nelle.varoquaux@gmail.com>
# License: BSD

print(__doc__)

import numpy as np
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
from sklearn import manifold
from sklearn.metrics import euclidean_distances
from sklearn.decomposition import PCA
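A minimal sketch of fitting metric and non-metric MDS on a precomputed distance matrix; the random data and parameter values are assumptions, not taken from the example:

import numpy as np
from sklearn import manifold
from sklearn.metrics import euclidean_distances

rng = np.random.RandomState(0)
X_true = rng.rand(20, 2)                          # 20 points in the plane
similarities = euclidean_distances(X_true)        # precomputed dissimilarities

# metric MDS tries to preserve the distances themselves
mds = manifold.MDS(n_components=2, dissimilarity="precomputed", random_state=0)
pos = mds.fit_transform(similarities)

# non-metric MDS only tries to preserve their rank order
nmds = manifold.MDS(n_components=2, metric=False, dissimilarity="precomputed",
                    random_state=0, n_init=1)
npos = nmds.fit_transform(similarities)
print(pos.shape, npos.shape)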

Multi-class AdaBoosted Decision Trees

This example reproduces Figure 1 of Zhu et al. [1] and shows how boosting can improve prediction accuracy on a multi-class problem. The classification dataset is constructed by taking a ten-dimensional standard normal distribution and defining three classes separated by nested concentric ten-dimensional spheres, such that roughly equal numbers of samples are in each class (quantiles of the distribution). The performance of the SAMME and SAMME.R [1] algorithms is compared. SAMME.R uses the probability estimates to update the additive model, while SAMME uses the classifications only.
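A hedged sketch of the comparison; the sample count, tree depth, and estimator count are assumptions, and the dataset is generated with make_gaussian_quantiles (nested concentric quantile shells), matching the description above:

from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# three roughly equal classes separated by nested 10-dimensional shells
X, y = make_gaussian_quantiles(n_samples=3000, n_features=10,
                               n_classes=3, random_state=1)

# SAMME uses only the discrete class predictions of each tree,
# SAMME.R uses the class probability estimates
samme = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                           n_estimators=300, algorithm="SAMME")
samme_r = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                             n_estimators=300, algorithm="SAMME.R")
samme.fit(X, y)
samme_r.fit(X, y)
print(samme.score(X, y), samme_r.score(X, y))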

model_selection.TimeSeriesSplit()

class sklearn.model_selection.TimeSeriesSplit(n_splits=3) [source] Time Series cross-validator. Provides train/test indices to split time series data samples that are observed at fixed time intervals. In each split, the test indices must be higher than those in previous splits, so shuffling within the cross-validator is inappropriate. This cross-validation object is a variation of KFold. In the kth split, it returns the first k folds as the train set and the (k+1)th fold as the test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them.
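A small sketch showing how the splits grow (the toy array is an assumption):

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(6, 2)      # 6 time-ordered samples
tscv = TimeSeriesSplit(n_splits=3)
for train_index, test_index in tscv.split(X):
    # each training set is a superset of the previous one,
    # and the test indices always come after the training indices
    print("TRAIN:", train_index, "TEST:", test_index)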

model_selection.StratifiedShuffleSplit()

class sklearn.model_selection.StratifiedShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None) [source] Stratified ShuffleSplit cross-validator. Provides train/test indices to split data into train/test sets. This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class. Note: like the ShuffleSplit strategy, stratified random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.
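A minimal sketch on toy data (the arrays and test_size value are assumptions):

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.zeros((10, 2))
y = np.array([0] * 5 + [1] * 5)                   # balanced two-class labels

sss = StratifiedShuffleSplit(n_splits=3, test_size=0.4, random_state=0)
for train_index, test_index in sss.split(X, y):
    # every random split keeps the 50/50 class ratio in train and test
    print("TRAIN:", train_index, "TEST:", test_index)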

model_selection.StratifiedKFold()

class sklearn.model_selection.StratifiedKFold(n_splits=3, shuffle=False, random_state=None) [source] Stratified K-Folds cross-validator. Provides train/test indices to split data into train/test sets. This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class. Read more in the User Guide.

Parameters:
n_splits : int, default=3
    Number of folds. Must be at least 2.
shuffle : boolean, optional
    Whether to shuffle each stratification of the data before splitting into batches.
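A minimal sketch on toy data (the arrays and n_splits value are assumptions):

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((8, 2))
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

skf = StratifiedKFold(n_splits=4)
for train_index, test_index in skf.split(X, y):
    # each of the 4 folds contains one sample from each class
    print("TRAIN:", train_index, "TEST:", test_index)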

model_selection.ShuffleSplit()

class sklearn.model_selection.ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None) [source] Random permutation cross-validator. Yields indices to split data into training and test sets. Note: contrary to other cross-validation strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets. Read more in the User Guide.

Parameters:
n_splits : int (default 10)
    Number of re-shuffling & splitting iterations.
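A minimal sketch on toy data (the array and test_size value are assumptions):

import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(10).reshape(5, 2)

ss = ShuffleSplit(n_splits=3, test_size=0.4, random_state=0)
for train_index, test_index in ss.split(X):
    # indices are re-shuffled for each iteration, so test sets may overlap
    print("TRAIN:", train_index, "TEST:", test_index)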