multioutput.MultiOutputClassifier()

class sklearn.multioutput.MultiOutputClassifier(estimator, n_jobs=1) [source]

Multi target classification. This strategy consists of fitting one classifier per target. It is a simple strategy for extending classifiers that do not natively support multi-target classification.

Parameters:
estimator : estimator object
    An estimator object implementing fit, score and predict_proba.
n_jobs : int, optional, default=1
    The number of jobs to use for the computation. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging.
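A minimal usage sketch (illustrative, not from the original entry; the second target below is synthetic, derived from the first):

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y1 = make_classification(n_samples=100, n_informative=5, random_state=0)
>>> y2 = np.logical_xor(y1, X[:, 0] > 0).astype(int)  # second, related target
>>> Y = np.c_[y1, y2]  # shape (n_samples, 2): one column per target
>>> clf = MultiOutputClassifier(RandomForestClassifier(random_state=0), n_jobs=1)
>>> clf.fit(X, Y).predict(X[:3]).shape  # one prediction per target
(3, 2)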

manifold.LocallyLinearEmbedding()

class sklearn.manifold.LocallyLinearEmbedding(n_neighbors=5, n_components=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto', random_state=None, n_jobs=1) [source]

Locally Linear Embedding. Read more in the User Guide.

Parameters:
n_neighbors : integer
    Number of neighbors to consider for each point.
n_components : integer
    Number of coordinates for the manifold.
reg : float
    Regularization constant, multiplies the trace of the local covariance matrix of the distances.
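For instance, a quick sketch embedding the digits dataset into two dimensions (the choice of dataset is illustrative):

>>> from sklearn.datasets import load_digits
>>> from sklearn.manifold import LocallyLinearEmbedding
>>> X, _ = load_digits(return_X_y=True)
>>> lle = LocallyLinearEmbedding(n_neighbors=5, n_components=2)
>>> X_2d = lle.fit_transform(X)  # each 64-dim digit image becomes a 2-D point
>>> X_2d.shape
(1797, 2)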

sklearn.metrics.hamming_loss()

sklearn.metrics.hamming_loss(y_true, y_pred, labels=None, sample_weight=None, classes=None) [source]

Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix
    Ground truth (correct) labels.
y_pred : 1d array-like, or label indicator array / sparse matrix
    Predicted labels, as returned by a classifier.
labels : array, shape = [n_labels], optional
    Integer array of labels. If not provided, labels are inferred from y_true and y_pred.
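Two small examples make the definition concrete (the values follow directly from counting mismatched entries):

>>> import numpy as np
>>> from sklearn.metrics import hamming_loss
>>> hamming_loss([1, 2, 3, 4], [2, 2, 3, 4])  # 1 of 4 labels wrong
0.25
>>> # multilabel case: averaged over all entries of the indicator matrices
>>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
0.75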

sklearn.decomposition.dict_learning()

sklearn.decomposition.dict_learning(X, n_components, alpha, max_iter=100, tol=1e-08, method='lars', n_jobs=1, dict_init=None, code_init=None, callback=None, verbose=False, random_state=None, return_n_iter=False) [source]

Solves a dictionary learning matrix factorization problem. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

    (U^*, V^*) = argmin_{(U, V)} 0.5 * ||X - U V||_2^2 + alpha * ||U||_1
                 with ||V_k||_2 = 1 for all 0 <= k < n_components

where V is the dictionary and U is the sparse code.
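A minimal sketch on toy random data (shapes and parameter values are illustrative):

>>> import numpy as np
>>> from sklearn.decomposition import dict_learning
>>> X = np.random.RandomState(0).randn(20, 8)  # toy data matrix
>>> code, dictionary, errors = dict_learning(X, n_components=5, alpha=1.0)
>>> code.shape, dictionary.shape  # U is (n_samples, n_components), V is (n_components, n_features)
((20, 5), (5, 8))
>>> X_approx = np.dot(code, dictionary)  # U V approximates X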

Iso-probability lines for Gaussian Processes classification

A two-dimensional classification example showing iso-probability lines for the predicted probabilities.

Out:
    Learned kernel: 0.0256**2 * DotProduct(sigma_0=5.72) ** 2

print(__doc__)

# Author: Vincent Dubourg <vincent.dubourg@gmail.com>
# Adapted to GaussianProcessClassifier:
#     Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# License: BSD 3 clause

import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.gaussian_process import GaussianProcessClassifier
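The listing above is cut off; a self-contained sketch of the same idea (made-up data, illustrative kernel hyperparameters) could look like:

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, ConstantKernel as C

# Toy 2-D data: class is determined by a linear boundary
rng = np.random.RandomState(0)
X = rng.uniform(-2, 2, (50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

kernel = C(0.1) * DotProduct(sigma_0=1.0) ** 2
gp = GaussianProcessClassifier(kernel=kernel).fit(X, y)
print("Learned kernel:", gp.kernel_)

# Probabilities on a grid; iso-probability lines are level sets of this surface
xx, yy = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
proba = gp.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape)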

linear_model.ElasticNet()

class sklearn.linear_model.ElasticNet(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic') [source]

Linear regression with combined L1 and L2 priors as regularizer. Minimizes the objective function:

    1 / (2 * n_samples) * ||y - Xw||^2_2
        + alpha * l1_ratio * ||w||_1
        + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

If you are interested in controlling the L1 and L2 penalties separately, keep in mind that this is equivalent to a * ||w||_1 + 0.5 * b * ||w||^2_2, where alpha = a + b and l1_ratio = a / (a + b).
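A short sketch on synthetic regression data (alpha and l1_ratio chosen arbitrarily):

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNet
>>> X, y = make_regression(n_samples=100, n_features=10, random_state=0)
>>> enet = ElasticNet(alpha=0.5, l1_ratio=0.7).fit(X, y)  # 70% of the penalty weight on L1
>>> enet.coef_.shape  # the L1 part tends to zero out some of these coefficients
(10,)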

multiclass.OneVsRestClassifier()

class sklearn.multiclass.OneVsRestClassifier(estimator, n_jobs=1) [source]

One-vs-the-rest (OvR) multiclass/multilabel strategy. Also known as one-vs-all, this strategy consists of fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only n_classes classifiers are needed), one advantage of this approach is its interpretability: since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier.
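A minimal sketch on the iris dataset (the base estimator is an illustrative choice):

>>> from sklearn.datasets import load_iris
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = load_iris(return_X_y=True)
>>> ovr = OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y)
>>> len(ovr.estimators_)  # one binary classifier per class
3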

5. Dataset loading utilities

The sklearn.datasets package embeds some small toy datasets as introduced in the Getting Started section. To evaluate the impact of the scale of the dataset (n_samples and n_features) while controlling the statistical properties of the data (typically the correlation and informativeness of the features), it is also possible to generate synthetic data. This package also features helpers to fetch larger datasets commonly used by the machine learning community to benchmark algorithms on data that comes from the 'real world'.
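For example, loading a bundled toy dataset and generating a synthetic one of controlled size (the generator parameters are arbitrary):

>>> from sklearn.datasets import load_iris, make_classification
>>> X, y = load_iris(return_X_y=True)  # small bundled toy dataset
>>> X.shape
(150, 4)
>>> # synthetic data with chosen n_samples, n_features and informativeness
>>> X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
>>> X.shape
(500, 20)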

decomposition.MiniBatchSparsePCA()

class sklearn.decomposition.MiniBatchSparsePCA(n_components=None, alpha=1, ridge_alpha=0.01, n_iter=100, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=1, method='lars', random_state=None) [source]

Mini-batch Sparse Principal Components Analysis. Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide.

Parameters:
n_components : int
    Number of sparse atoms to extract.
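A minimal sketch on random data (shapes and parameters are illustrative):

>>> import numpy as np
>>> from sklearn.decomposition import MiniBatchSparsePCA
>>> X = np.random.RandomState(0).randn(50, 10)
>>> spca = MiniBatchSparsePCA(n_components=3, alpha=1, random_state=0)
>>> codes = spca.fit_transform(X)
>>> spca.components_.shape  # sparse dictionary; larger alpha means more zeros
(3, 10)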

cross_decomposition.PLSRegression()

class sklearn.cross_decomposition.PLSRegression(n_components=2, scale=True, max_iter=500, tol=1e-06, copy=True) [source]

PLS regression. PLSRegression implements the PLS 2 blocks regression known as PLS2, or PLS1 in the case of a one-dimensional response. This class inherits from _PLS with mode="A", deflation_mode="regression", norm_y_weights=False and algorithm="nipals". Read more in the User Guide.

Parameters:
n_components : int, (default 2)
    Number of components to keep.
scale : boolean, (default True)
    Whether to scale the data.
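A short sketch with a two-column response (the numeric values are illustrative):

>>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2).fit(X, Y)
>>> pls2.predict(X).shape  # one column per response variable (PLS2)
(4, 2)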