exceptions.EfficiencyWarning

class sklearn.exceptions.EfficiencyWarning [source] Warning used to notify the user of inefficient computation. This warning notifies the user that the computation may be inefficient; the reason may be included as part of the warning message. It may be subclassed into a more specific Warning class. New in version 0.18.
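
A minimal sketch of handling this warning with Python's standard warnings machinery (the filter choices below are illustrative, not sklearn defaults):

import warnings
from sklearn.exceptions import EfficiencyWarning

# Escalate EfficiencyWarning to an error during development...
warnings.simplefilter("error", EfficiencyWarning)

# ...or suppress it entirely once the inefficiency is understood.
warnings.simplefilter("ignore", EfficiencyWarning)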

discriminant_analysis.QuadraticDiscriminantAnalysis()

class sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0, store_covariances=False, tol=0.0001) [source] Quadratic Discriminant Analysis. A classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes' rule. The model fits a Gaussian density to each class. New in version 0.17: QuadraticDiscriminantAnalysis. Read more in the User Guide. Parameters: priors : array, optional, shape = [n_classes] Priors on classes.
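
A minimal usage sketch on the iris data (the dataset choice is illustrative):

from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
# Defaults: uniform treatment of classes, no covariance regularization.
qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)
print(qda.predict(X[:5]))  # predicted class labels for the first five samples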

sklearn.metrics.pairwise.laplacian_kernel()

sklearn.metrics.pairwise.laplacian_kernel(X, Y=None, gamma=None) [source] Compute the Laplacian kernel between X and Y. The Laplacian kernel is defined as: K(x, y) = exp(-gamma ||x-y||_1) for each pair of rows x in X and y in Y. Read more in the User Guide. New in version 0.17. Parameters: X : array of shape (n_samples_X, n_features) Y : array of shape (n_samples_Y, n_features) gamma : float, default None If None, defaults to 1.0 / n_features Returns: kernel_matrix : array of shape (n_samples_X, n_samples_Y)
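
A minimal sketch computing the kernel matrix for two small arrays (the values are arbitrary):

import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0]])
# With gamma=1, K[i, j] = exp(-||X[i] - Y[j]||_1)
K = laplacian_kernel(X, Y, gamma=1.0)
print(K)  # shape (2, 1); both entries equal exp(-1)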

Univariate Feature Selection

An example showing univariate feature selection. Noisy (non-informative) features are added to the iris data and univariate feature selection is applied. For each feature, we plot the p-values for the univariate feature selection and the corresponding weights of an SVM. We can see that univariate feature selection selects the informative features and that these have larger SVM weights. In the total set of features, only the first four are significant, and we can see that they receive the highest univariate feature selection scores.
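
A minimal sketch of the setup this example describes (the number of noise features and the scoring function are illustrative choices):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
# Append 20 uninformative noise features to the 4 real ones.
rng = np.random.RandomState(42)
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 20))])

selector = SelectKBest(f_classif, k=4).fit(X_noisy, y)
print(selector.get_support(indices=True))  # typically recovers features 0-3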

Sparse inverse covariance estimation

Using the GraphLasso estimator to learn a covariance and sparse precision from a small number of samples. To estimate a probabilistic model (e.g. a Gaussian model), estimating the precision matrix, that is, the inverse covariance matrix, is as important as estimating the covariance matrix; indeed, a Gaussian model is parametrized by the precision matrix. To be in favorable recovery conditions, we sample the data from a model with a sparse inverse covariance matrix. In addition, we ensure that the data are not too correlated, so that the sparse structure remains recoverable.
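
A minimal sketch of fitting the estimator (in this 0.18-era API the class is GraphLasso; in sklearn >= 0.20 it is renamed GraphicalLasso; the data here are illustrative):

import numpy as np
from sklearn.covariance import GraphLasso

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=np.zeros(5), cov=np.eye(5), size=60)

model = GraphLasso(alpha=0.1)  # alpha controls sparsity of the precision matrix
model.fit(X)
print(model.precision_.round(2))  # estimated sparse inverse covariance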

decomposition.RandomizedPCA()

Warning: DEPRECATED. class sklearn.decomposition.RandomizedPCA(*args, **kwargs) [source] Principal component analysis (PCA) using randomized SVD. Deprecated since version 0.18: This class will be removed in 0.20. Use PCA with parameter svd_solver='randomized' instead. The new implementation DOES NOT store whiten components_; apply transform to get them. Linear dimensionality reduction using an approximated Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower-dimensional space.
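
A minimal sketch of the recommended replacement (the component count and data are illustrative):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 20))

# Equivalent of the deprecated RandomizedPCA: PCA with the randomized solver.
pca = PCA(n_components=5, svd_solver='randomized', random_state=0)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (100, 5)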

Simple 1D Kernel Density Estimation

This example uses the sklearn.neighbors.KernelDensity class to demonstrate the principles of Kernel Density Estimation in one dimension. The first plot shows one of the problems with using histograms to visualize the density of points in 1D. Intuitively, a histogram can be thought of as a scheme in which a unit 'block' is stacked above each point on a regular grid. As the top two panels show, however, the choice of gridding for these blocks can lead to wildly divergent ideas about the underlying shape of the density distribution.
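
A minimal 1D sketch (the sample data and bandwidth are illustrative):

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(0, 1, size=(100, 1))  # KernelDensity expects a 2D array

kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
grid = np.linspace(-4, 4, 200)[:, None]
log_density = kde.score_samples(grid)  # log of the estimated density at grid points
density = np.exp(log_density)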

SGD: Penalties

Plot the contours of the three penalties. All of the above are supported by sklearn.linear_model.stochastic_gradient.

from __future__ import division
print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

def l1(xs):
    # L1 unit-ball boundary: |y| = 1 - |x|
    return np.array([np.sqrt((1 - np.sqrt(x ** 2.0)) ** 2.0) for x in xs])

def l2(xs):
    # L2 unit-ball boundary: y = sqrt(1 - x^2)
    return np.array([np.sqrt(1.0 - x ** 2.0) for x in xs])

def el(xs, z):
    # Elastic-net unit-ball boundary for mixing parameter z (z != 0.5)
    return np.array([(2 - 2 * x - 2 * z + 4 * x * z -
                      (4 * z ** 2 - 8 * x * z ** 2 + 8 * x ** 2 * z ** 2 -
                       16 * x ** 2 * z ** 3 + 8 * x * z ** 3 +
                       4 * x ** 2 * z ** 4) ** (1. / 2) -
                      2 * x * z ** 2) / (2 - 4 * z) for x in xs])
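
A minimal sketch of plotting the penalty contours with the helpers above (the line styles and elastic-net mixing value are illustrative; the original example also mirrors the curves into the other quadrants):

xs = np.linspace(0, 1, 100)
plt.plot(xs, l1(xs), 'r-', label='L1')
plt.plot(xs, l2(xs), 'b-', label='L2')
plt.plot(xs, el(xs, 0.3), 'y-', label='Elastic Net')  # 0.3 is a hypothetical mixing value
plt.legend()
plt.axis('equal')
plt.show()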

sklearn.metrics.roc_curve()

sklearn.metrics.roc_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=True) [source] Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the User Guide. Parameters: y_true : array, shape = [n_samples] True binary labels in range {0, 1} or {-1, 1}. If labels are not binary, pos_label should be explicitly given. y_score : array, shape = [n_samples] Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by decision_function on some classifiers).
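
A minimal sketch with hand-picked scores (the values are illustrative):

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(fpr)  # false positive rate at each threshold
print(tpr)  # true positive rate at each threshold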

gaussian_process.kernels.ConstantKernel()

class sklearn.gaussian_process.kernels.ConstantKernel(constant_value=1.0, constant_value_bounds=(1e-05, 100000.0)) [source] Constant kernel. Can be used as part of a product kernel, where it scales the magnitude of the other factor (kernel), or as part of a sum kernel, where it modifies the mean of the Gaussian process. k(x_1, x_2) = constant_value for all x_1, x_2. New in version 0.18. Parameters: constant_value : float, default: 1.0 The constant value which defines the covariance: k(x_1, x_2) = constant_value
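
A minimal sketch of the product-kernel use, scaling an RBF kernel inside a Gaussian process (the data and hyperparameters are illustrative):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(20, 1))
y = np.sin(X).ravel()

# ConstantKernel scales the magnitude of the RBF factor.
kernel = ConstantKernel(constant_value=1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
print(gpr.kernel_)  # kernel with hyperparameters optimized during fit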