sklearn.datasets.dump_svmlight_file()

sklearn.datasets.dump_svmlight_file(X, y, f, zero_based=True, comment=None, query_id=None, multilabel=False) [source]

Dump the dataset in svmlight / libsvm file format. This format is text-based, with one sample per line. It does not store zero-valued features and is hence suitable for sparse datasets. The first element of each line can be used to store a target variable to predict.

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Training vectors, where n_samples is the number of samples and n_features is the number of features.
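A minimal sketch of a round trip through the format, assuming a small sparse matrix and an illustrative file name (data.svmlight) of our choosing:

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

# Two samples, three features; the zero entries are simply not written out
X = csr_matrix(np.array([[0.0, 1.0, 2.0], [3.0, 0.0, 0.0]]))
y = np.array([0, 1])

dump_svmlight_file(X, y, "data.svmlight", zero_based=True)

# Reload and check the round trip
X2, y2 = load_svmlight_file("data.svmlight", zero_based=True)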

RBF SVM parameters

This example illustrates the effect of the parameters gamma and C of the Radial Basis Function (RBF) kernel SVM. Intuitively, the gamma parameter defines how far the influence of a single training example reaches, with low values meaning "far" and high values meaning "close". The gamma parameter can be seen as the inverse of the radius of influence of samples selected by the model as support vectors. The C parameter trades off misclassification of training examples against simplicity of the decision surface.
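As a rough sketch of how these two parameters are typically explored (the grid below is illustrative, not the one used in the example):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Search a small logarithmic grid over C and gamma
param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-4, 1, 6)}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid=param_grid, cv=5)
grid.fit(X, y)

print(grid.best_params_)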

sklearn.metrics.precision_recall_fscore_support()

sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, beta=1.0, labels=None, pos_label=1, average=None, warn_for=('precision', 'recall', 'f-score'), sample_weight=None) [source]

Compute precision, recall, F-measure and support for each class.

The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The recall is the ratio tp / (tp + fn) where fn is the number of false negatives.
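A short sketch with hand-made labels, just to show the shape of the return values:

from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# average=None returns one value per class: arrays of precision, recall,
# F-score and support, each of length n_classes
precision, recall, fscore, support = precision_recall_fscore_support(
    y_true, y_pred, average=None)

# With an averaging strategy a single value is returned per metric
# (support is None in that case)
macro_p, macro_r, macro_f, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")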

sklearn.metrics.hamming_loss()

sklearn.metrics.hamming_loss(y_true, y_pred, labels=None, sample_weight=None, classes=None) [source]

Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix
    Ground truth (correct) labels.
y_pred : 1d array-like, or label indicator array / sparse matrix
    Predicted labels, as returned by a classifier.
labels : array, shape = [n_labels]
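A minimal sketch covering both the multiclass and the multilabel case:

import numpy as np
from sklearn.metrics import hamming_loss

# Multiclass labels: one of four predictions is wrong -> loss of 0.25
print(hamming_loss([1, 2, 3, 4], [2, 2, 3, 4]))

# Multilabel indicator matrices: one of four label entries is wrong -> 0.25
y_true = np.array([[0, 1], [1, 1]])
y_pred = np.array([[0, 0], [1, 1]])
print(hamming_loss(y_true, y_pred))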

sklearn.decomposition.dict_learning()

sklearn.decomposition.dict_learning(X, n_components, alpha, max_iter=100, tol=1e-08, method='lars', n_jobs=1, dict_init=None, code_init=None, callback=None, verbose=False, random_state=None, return_n_iter=False) [source]

Solve a dictionary learning matrix factorization problem. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

    (U^*, V^*) = argmin 0.5 || X - U V ||_2^2 + alpha * || U ||_1
                 (U,V)
                 with || V_k ||_2 = 1 for all 0 <= k < n_components
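A small sketch on random data, just to show the call and its return values (the data and hyperparameters here are arbitrary):

import numpy as np
from sklearn.decomposition import dict_learning

rng = np.random.RandomState(0)
X = rng.randn(20, 10)

# code: (n_samples, n_components) sparse coefficients U
# dictionary: (n_components, n_features) atoms V with unit-norm rows
code, dictionary, errors = dict_learning(
    X, n_components=5, alpha=1.0, random_state=rng)

reconstruction = np.dot(code, dictionary)  # approximates X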

Iso-probability lines for Gaussian Processes classification

A two-dimensional classification example showing iso-probability lines for the predicted probabilities.

Out:

Learned kernel: 0.0256**2 * DotProduct(sigma_0=5.72) ** 2

print(__doc__)

# Author: Vincent Dubourg <vincent.dubourg@gmail.com>
# Adapted to GaussianProcessClassifier:
#     Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# License: BSD 3 clause

import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.gaussian_process import GaussianProcessClassifier
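The listing above is cut short; as a separate minimal sketch of the underlying API (the kernel and data here are illustrative, not the example's):

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import ConstantKernel as C, DotProduct

# Toy 2D data with a roughly linear boundary (not the example's dataset)
rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

kernel = C(0.1) * DotProduct(sigma_0=1.0) ** 2
clf = GaussianProcessClassifier(kernel=kernel).fit(X, y)

# predict_proba gives the probabilities whose level sets are the iso-probability lines
proba = clf.predict_proba(X[:5])
print(clf.kernel_)  # kernel with hyperparameters fitted to the data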

Spectral clustering for image segmentation

In this example, an image with connected circles is generated and spectral clustering is used to separate the circles. In these settings, the spectral clustering approach solves the problem known as "normalized graph cuts": the image is seen as a graph of connected voxels, and the spectral clustering algorithm amounts to choosing graph cuts that define regions while minimizing the ratio of the gradient along the cut to the volume of the region. As the algorithm tries to balance the volume (i.e. balance the region sizes), the segmentation can fail if the circles have very different sizes.
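A condensed sketch of the approach, assuming a simple two-circle image rather than the example's exact setup:

import numpy as np
from sklearn.cluster import spectral_clustering
from sklearn.feature_extraction import image

# A small synthetic image with two overlapping bright circles
l = 50
x, y = np.indices((l, l))
circle1 = (x - 18) ** 2 + (y - 22) ** 2 < 8 ** 2
circle2 = (x - 30) ** 2 + (y - 28) ** 2 < 8 ** 2

mask = circle1 | circle2                     # restrict the graph to the circles
img = mask.astype(float)
img += 1 + 0.2 * np.random.RandomState(0).randn(*img.shape)  # noise so gradients vary

# Edge weights decrease with the gradient between neighbouring pixels
graph = image.img_to_graph(img, mask=mask)
graph.data = np.exp(-graph.data / graph.data.std())

# Cut the graph into two regions, one per circle
labels = spectral_clustering(graph, n_clusters=2, eigen_solver="arpack")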

Gaussian Mixture Model Ellipsoids

Plot the confidence ellipsoids of a mixture of two Gaussians obtained with Expectation Maximisation (the GaussianMixture class) and Variational Inference (the BayesianGaussianMixture class with a Dirichlet process prior). Both models have access to five components with which to fit the data. Note that the Expectation Maximisation model will necessarily use all five components, while the Variational Inference model will effectively only use as many as are needed for a good fit.
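A short sketch of the fitting step, assuming some arbitrary two-blob data rather than the example's generated sample:

import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(200, 2) + [2, 2],
               0.5 * rng.randn(200, 2) - [2, 1]])

# Plain EM: all five components end up with non-negligible weight
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(X)

# Variational inference with a Dirichlet process prior: unneeded components
# get weights close to zero
dpgmm = BayesianGaussianMixture(
    n_components=5, covariance_type="full",
    weight_concentration_prior_type="dirichlet_process").fit(X)

print(gmm.weights_)
print(dpgmm.weights_)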

cluster.FeatureAgglomeration()

class sklearn.cluster.FeatureAgglomeration(n_clusters=2, affinity='euclidean', memory=Memory(cachedir=None), connectivity=None, compute_full_tree='auto', linkage='ward', pooling_func=<function mean>) [source]

Agglomerate features. Similar to AgglomerativeClustering, but recursively merges features instead of samples. Read more in the User Guide.

Parameters:
n_clusters : int, default 2
    The number of clusters to find.
connectivity : array-like or callable, optional
    Connectivity matrix. Defines for each feature the neighboring features following a given structure of the data.
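A minimal sketch on the digits data, merging the 64 pixel features into 16 feature clusters:

from sklearn.datasets import load_digits
from sklearn.cluster import FeatureAgglomeration

X = load_digits().data                          # shape (1797, 64)

agglo = FeatureAgglomeration(n_clusters=16)
X_reduced = agglo.fit_transform(X)              # shape (1797, 16), one column per feature cluster
X_approx = agglo.inverse_transform(X_reduced)   # back to 64 columns, values pooled per cluster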

sklearn.model_selection.permutation_test_score()

sklearn.model_selection.permutation_test_score(estimator, X, y, groups=None, cv=None, n_permutations=100, n_jobs=1, random_state=0, verbose=0, scoring=None) [source]

Evaluate the significance of a cross-validated score with permutations. Read more in the User Guide.

Parameters:
estimator : estimator object implementing 'fit'
    The object to use to fit the data.
X : array-like of shape at least 2D
    The data to fit.
y : array-like
    The target variable to try to predict in the case of supervised learning.
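A minimal sketch on the iris data (classifier and settings are illustrative):

from sklearn.datasets import load_iris
from sklearn.model_selection import permutation_test_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# score: cross-validated score on the true labels
# perm_scores: one score per permutation of y
# pvalue: approximate fraction of permutations scoring at least as well as the true labels
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=5, n_permutations=100, random_state=0)

print(score, pvalue)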