Probabilistic PCA and Factor Analysis are probabilistic models. The consequence is that the likelihood of new data can be used for model selection and covariance estimation.
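A minimal sketch of this idea, on hypothetical synthetic data (a rank-5 signal embedded in 20 dimensions, my own assumption): both estimators expose a score method returning the average log-likelihood, so cross-validation can compare choices of the number of components.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
# Hypothetical toy data: rank-5 signal embedded in 20 dimensions, plus noise
X = rng.randn(200, 5) @ rng.randn(5, 20) + 0.5 * rng.randn(200, 20)

# score() gives the average log-likelihood of held-out data, so
# cross_val_score can rank different values of n_components
pca_ll = {n: cross_val_score(PCA(n_components=n, svd_solver="full"), X).mean()
          for n in (2, 5, 10)}
fa_ll = {n: cross_val_score(FactorAnalysis(n_components=n), X).mean()
         for n in (2, 5, 10)}
```

On data like this, the held-out likelihood should peak near the true latent dimensionality.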
Demonstrates solving a regression problem with k-Nearest Neighbors, interpolating the target using both barycenter and constant weights.
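A short sketch of k-NN regression on a noisy sine curve (my own toy data); I use the current API's "uniform" and "distance" weighting options, which I take to be the modern counterparts of the constant and barycenter weights mentioned above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(40, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 1.0 * (0.5 - rng.rand(8))  # perturb every 5th target

T = np.linspace(0, 5, 500)[:, np.newaxis]
preds = {}
for weights in ("uniform", "distance"):
    model = KNeighborsRegressor(n_neighbors=5, weights=weights)
    preds[weights] = model.fit(X, y).predict(T)
```

With "uniform" weights each neighbor contributes equally; with "distance" weights closer neighbors dominate the interpolated value.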
The usual covariance maximum likelihood estimate is very sensitive to the presence of outliers in the data set. In such a case, it would be better to use a robust estimator of covariance, so that the estimation is resistant to erroneous observations in the data set.
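A sketch of the contrast, assuming synthetic Gaussian data with 10% injected outliers (my own setup): the empirical estimate is pulled toward the outliers, while the Minimum Covariance Determinant estimator largely ignores them.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.RandomState(0)
true_cov = np.array([[1.0, 0.7], [0.7, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], true_cov, size=200)
X[:20] = rng.uniform(5, 10, size=(20, 2))  # inject 10% gross outliers

emp = EmpiricalCovariance().fit(X)
robust = MinCovDet(random_state=0).fit(X)

# Compare each estimate against the known generating covariance
emp_err = np.abs(emp.covariance_ - true_cov).max()
robust_err = np.abs(robust.covariance_ - true_cov).max()
```

The robust estimate stays close to the generating covariance, while the empirical one is badly inflated.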
Example builds a swiss roll dataset and runs hierarchical clustering on their position. For more information, see
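The gist can be sketched as follows; the cluster count and neighborhood size are illustrative choices of mine, not the example's exact settings. The k-nearest-neighbor connectivity graph constrains merges to the local structure of the manifold.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph

X, _ = make_swiss_roll(n_samples=500, noise=0.05, random_state=0)
# Connectivity constraint: only merge clusters that are neighbors in space
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
labels = AgglomerativeClustering(
    n_clusters=6, connectivity=connectivity, linkage="ward"
).fit_predict(X)
```

Without the connectivity constraint, ward linkage can merge points that are close in 3D but far apart along the roll.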
Example of confusion matrix usage to evaluate the quality of the output of a classifier on the iris data set. The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier.
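A minimal sketch of the workflow; the linear-kernel SVM and the train/test split parameters are my own illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
y_pred = SVC(kernel="linear").fit(X_train, y_train).predict(X_test)

# Rows are true classes, columns are predicted classes;
# diagonal entries count correct predictions per class
cm = confusion_matrix(y_test, y_pred)
```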
Reference: Brendan J. Frey and Delbert Dueck, "Clustering by Passing Messages Between Data Points", Science, Feb. 2007
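A sketch of affinity propagation on three Gaussian blobs; the preference value of -50 follows the library's own demonstration and is a data-dependent knob, not a universal setting.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

centers = [[1, 1], [-1, -1], [1, -1]]
X, _ = make_blobs(n_samples=300, centers=centers,
                  cluster_std=0.5, random_state=0)

# preference controls how many exemplars emerge; lower values -> fewer clusters
af = AffinityPropagation(preference=-50, random_state=0).fit(X)
n_clusters = len(af.cluster_centers_indices_)
```

Unlike k-means, the number of clusters is not fixed in advance; it falls out of the message-passing procedure and the preference value.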
An illustration of the isotonic regression on generated data. The isotonic regression finds a non-decreasing approximation of a function while minimizing the mean squared error on the training data.
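A compact sketch on a noisy increasing trend (the data-generating formula is my own illustrative choice): the fitted values are guaranteed to be monotonically non-decreasing.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(0)
n = 100
x = np.arange(n)
# Noisy but broadly increasing target
y = rng.randint(-50, 50, size=(n,)) + 50.0 * np.log1p(x)

# fit_transform returns the monotone least-squares fit at the training points
y_fit = IsotonicRegression().fit_transform(x, y)
```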
This example illustrates the effect of the parameters gamma and C of the Radial Basis Function (RBF) kernel SVM. Intuitively, the gamma parameter defines how far the influence of a single training example reaches, with low values meaning "far" and high values meaning "close"; the C parameter trades off misclassification of training examples against simplicity of the decision surface.
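In practice the two parameters are tuned jointly; a minimal grid-search sketch on the iris data set (the grid values below are small illustrative choices, not recommended defaults):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Tiny illustrative grid; real searches usually span wider logarithmic ranges
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
```

search.best_params_ then holds the (C, gamma) pair with the best cross-validated accuracy.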
An example illustrating the approximation of the feature map of an RBF kernel. It shows how to use RBFSampler and Nystroem to approximate the feature map of an RBF kernel.
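A sketch of the comparison on random data of my own choosing: an explicit feature map Z approximates the kernel through the inner product Z Zᵀ, which can be checked against the exact kernel matrix.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(60, 5)
K_exact = rbf_kernel(X, gamma=0.5)

errors = {}
for Approx in (RBFSampler, Nystroem):
    # Both transformers map X to an explicit feature space where
    # inner products approximate the RBF kernel
    Z = Approx(gamma=0.5, n_components=50, random_state=0).fit_transform(X)
    errors[Approx.__name__] = np.abs(Z @ Z.T - K_exact).mean()
```

With n_components close to the number of samples, the data-dependent Nystroem approximation is typically tighter than the random Fourier features of RBFSampler.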
This example shows the characteristics of different clustering algorithms on datasets that are "interesting" but still two-dimensional.
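One such contrast, sketched on the two-moons dataset with parameter values of my own choosing: k-means partitions the plane into convex regions and splits the moons, while density-based DBSCAN can follow their curved shape.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# k-means produces convex partitions regardless of the data's shape
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# DBSCAN grows clusters through dense regions (label -1 marks noise points)
db_labels = DBSCAN(eps=0.3).fit_predict(X)
```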