The dataset used in this example is a preprocessed excerpt of the "Labeled Faces in the Wild", aka LFW.
This example compares non-nested and nested cross-validation strategies on a classifier of the iris data set. Nested cross-validation (CV) is often used to train a model in which hyperparameters also need to be optimized.
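A minimal sketch of the two strategies, assuming scikit-learn is installed (the classifier and parameter grid below are illustrative, not necessarily those of the original example):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [1, 10], "gamma": [0.01, 0.1]}

inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=0)

# Non-nested: the same folds both tune the hyperparameters and score the
# model, which tends to give an optimistically biased estimate.
clf = GridSearchCV(SVC(), param_grid, cv=inner_cv)
clf.fit(X, y)
non_nested_score = clf.best_score_

# Nested: an outer CV loop scores the whole tuning procedure.
nested_score = cross_val_score(clf, X, y, cv=outer_cv).mean()
```

The nested score is the one to report, since the non-nested score has "seen" the test folds during tuning.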
An illustration of various linkage options for agglomerative clustering on a 2D embedding of the digits dataset. The goal is to show intuitively how the different linkage options behave, not to find good clusters for the digits.
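A minimal sketch of comparing linkage options, assuming scikit-learn is installed; random 2D points stand in for the digits embedding:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.RandomState(0)
X = rng.rand(50, 2)  # placeholder for a 2D embedding of the digits

# Fit one model per linkage criterion and collect the cluster labels.
labels = {}
for linkage in ("ward", "average", "complete", "single"):
    model = AgglomerativeClustering(linkage=linkage, n_clusters=3)
    labels[linkage] = model.fit_predict(X)
```

The label assignments typically differ between linkages: "single" chains along nearby points, while "ward" favors compact, similarly sized clusters.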
Three different types of SVM kernels are displayed below. The polynomial and RBF kernels are especially useful when the data points are not linearly separable.
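A small sketch of this point, assuming scikit-learn: concentric circles are not linearly separable, so the RBF kernel fits them while the linear kernel cannot (the dataset here is illustrative, not the one from the example):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line separates the classes.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

scores = {}
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    scores[kernel] = clf.score(X, y)  # training accuracy
```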
Well calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level.
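A minimal sketch, assuming scikit-learn: LinearSVC has no predict_proba of its own, and sigmoid (Platt) calibration via CalibratedClassifierCV adds one whose outputs can be read as confidence levels (dataset and parameters are illustrative):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)

# Wrap an uncalibrated margin classifier to obtain calibrated probabilities.
clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
clf.fit(X, y)
proba = clf.predict_proba(X[:5])  # one (P(class 0), P(class 1)) row per sample
```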
Plot the confidence ellipsoids of a mixture of two Gaussians obtained with Expectation Maximisation (GaussianMixture class) and Variational Inference (BayesianGaussianMixture class).
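A minimal sketch of fitting both estimators, assuming scikit-learn (synthetic blobs stand in for the example's data):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

rng = np.random.RandomState(0)
# Two well-separated Gaussian blobs.
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 6])

# EM fits exactly the requested number of components.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Variational inference can be given more components than needed;
# the extra ones end up with near-zero weight.
bgmm = BayesianGaussianMixture(n_components=5, random_state=0).fit(X)
active = int((bgmm.weights_ > 0.01).sum())
```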
This example shows how to perform univariate feature selection before running an SVC (support vector classifier) to improve the classification scores.
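A minimal sketch of the pattern, assuming scikit-learn (the choice of iris, f_classif, and k=2 is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Keep the 2 features with the highest ANOVA F-score, then fit the SVC.
pipe = make_pipeline(SelectKBest(f_classif, k=2), SVC())
pipe.fit(X, y)
score = pipe.score(X, y)  # training accuracy
```

Putting the selector inside the pipeline matters: when the pipeline is cross-validated, feature selection is refit on each training fold, avoiding leakage.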
Simple usage of various cross decomposition algorithms:
- PLSCanonical
- PLSRegression, with multivariate response, a.k.a. PLS2
- PLSRegression, with univariate response, a.k.a. PLS1
- CCA
Plot the contours of the three penalties (L1, L2 and elastic-net). All of the above are supported by sklearn.linear_model.stochastic_gradient.
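The three penalty functions can be sketched directly; this is a hedged reconstruction (the mixing parameter `alpha` and the exact elastic-net form are assumptions, not copied from the example):

```python
import numpy as np

def l1(w):
    # Sum of absolute coefficients: contours are diamonds.
    return np.abs(w).sum(axis=-1)

def l2(w):
    # Euclidean norm: contours are circles.
    return np.sqrt((w ** 2).sum(axis=-1))

def elastic_net(w, alpha=0.5):
    # Convex blend of the two; `alpha` trades sparsity against smoothness.
    return alpha * l1(w) + (1 - alpha) * l2(w) ** 2

w = np.array([0.6, -0.8])
```

Plotting `penalty(w) == 1` level sets over a grid of (w1, w2) reproduces the contour figure.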
This shows an example of a neighbors-based query (in particular a kernel density estimate) on geospatial data, using a Ball Tree built upon the haversine distance metric.
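A minimal sketch, assuming scikit-learn; the coordinates are random placeholders, and the haversine metric expects (latitude, longitude) in radians:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
# Random (lat, lon) pairs in degrees, converted to radians for haversine.
points = np.radians(rng.uniform([-10, 30], [10, 50], size=(100, 2)))

kde = KernelDensity(bandwidth=0.05, kernel="gaussian",
                    metric="haversine", algorithm="ball_tree")
kde.fit(points)
log_density = kde.score_samples(points[:5])  # log-density at each point
```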
Page 7 of 22