This example shows how kernel density estimation (KDE), a powerful non-parametric density estimation technique, can be used to learn a generative model for a dataset.
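A minimal sketch of the KDE-as-generative-model idea, using a toy 2-D Gaussian dataset rather than the example's actual data; the bandwidth and sample counts here are illustrative assumptions, not the example's settings:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # toy 2-D training data

# Fit a Gaussian KDE to the data, then draw new synthetic points from it
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
new_samples = kde.sample(50, random_state=0)
print(new_samples.shape)
```

The fitted `KernelDensity` object acts as a generative model: `sample` draws points from the estimated density rather than from the original data.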
An example showing univariate feature selection. Noisy (non-informative) features are added to the iris data and univariate feature selection is applied.
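The setup above can be sketched as follows; the choice of 20 noise features and the `SelectKBest`/`f_classif` scorer are assumptions for illustration, not necessarily the example's exact configuration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(42)
# Append 20 random, non-informative features to the 4 real iris features
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 20))])

# Univariate selection scores each feature independently (ANOVA F-test here)
selector = SelectKBest(f_classif, k=4).fit(X_noisy, y)
selected = selector.get_support(indices=True)
print(selected)  # indices of the 4 highest-scoring features
```

Because the iris features carry real class signal, their F-scores dwarf those of the noise columns, so the selector recovers the original four.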
In this example we compare the various initialization strategies for K-means in terms of runtime and quality of the results.
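A hedged sketch of such a comparison, contrasting the `"k-means++"` and `"random"` initializations by final inertia on synthetic blobs; the dataset and `n_init=1` setting are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Run K-means once per init strategy and record the final inertia
# (sum of squared distances to the nearest centroid; lower is better)
results = {}
for init in ("k-means++", "random"):
    km = KMeans(n_clusters=4, init=init, n_init=1, random_state=0).fit(X)
    results[init] = km.inertia_
print(results)
```

Timing each `fit` call as well would reproduce the runtime half of the comparison.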
This example fits an AdaBoosted decision stump on a non-linearly separable classification dataset composed of two "Gaussian quantiles" clusters (see
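A sketch of this setup, assuming `make_gaussian_quantiles` for the two clusters and a depth-1 tree (decision stump) as the boosted base learner; the sample sizes and estimator count are illustrative, not the example's exact values:

```python
import numpy as np
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Two overlapping "Gaussian quantiles" clusters: not linearly separable
X1, y1 = make_gaussian_quantiles(cov=2.0, n_samples=200, n_features=2,
                                 n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5, n_samples=300,
                                 n_features=2, n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, -y2 + 1))  # flip labels so the clusters interleave

# Boost a decision stump: each weak learner is a single axis-aligned split
stump = DecisionTreeClassifier(max_depth=1)
clf = AdaBoostClassifier(stump, n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))
```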
Modeling species' geographic distributions is an important problem in conservation biology. In this example we model the geographic distribution of two South American
This example shows the effect of imposing a connectivity graph to capture local structure in the data. The graph is simply the graph of 20
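A minimal sketch of imposing a connectivity graph, assuming a k-nearest-neighbors graph fed to ward-linkage agglomerative clustering; the spiral-like toy data and cluster count are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.RandomState(0)
t = 1.5 * np.pi * (1 + 2 * rng.rand(150))
X = np.column_stack([t * np.cos(t), t * np.sin(t)])  # curved, non-convex data

# The connectivity graph restricts merges to each point's 20 nearest
# neighbors, so the clustering respects local structure
connectivity = kneighbors_graph(X, n_neighbors=20, include_self=False)
labels = AgglomerativeClustering(n_clusters=3, connectivity=connectivity,
                                 linkage="ward").fit_predict(X)
print(np.unique(labels))
```

Without the `connectivity` argument, ward linkage considers all pairwise merges and can cut across the curved structure.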
This example uses
Plot the maximum margin separating hyperplane within a two-class separable dataset using a linear Support Vector Machine (SVM) classifier trained using SGD.
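The training step can be sketched as below; `SGDClassifier` with hinge loss fits a linear SVM, and the blob parameters and `alpha` value are illustrative assumptions:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import SGDClassifier

# Two well-separated clusters: a linearly separable two-class problem
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=0)

# Hinge loss + L2 penalty trained by SGD approximates a linear SVM;
# coef_ and intercept_ define the separating hyperplane w.x + b = 0
clf = SGDClassifier(loss="hinge", alpha=0.01, max_iter=1000,
                    random_state=0).fit(X, y)
print(clf.coef_, clf.intercept_, clf.score(X, y))
```

Plotting the line `clf.coef_[0] @ x + clf.intercept_ == 0` over a scatter of `X` reproduces the figure described above.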
This example is meant to illustrate situations where k-means will produce unintuitive and possibly unexpected clusters. In the first three plots, the input
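One such failure mode can be sketched as follows: k-means assumes isotropic, similarly sized clusters, so linearly skewed (anisotropic) blobs get partitioned along unintuitive boundaries. The skew matrix here is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y_true = make_blobs(n_samples=500, random_state=170)
# Apply a linear skew so the three blobs become elongated and tilted
X_aniso = X @ np.array([[0.6, -0.6], [-0.4, 0.8]])

# k-means still returns 3 clusters, but their boundaries can slice
# across the true elongated groups
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_aniso)
print(len(np.unique(labels)))
```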
A decision tree is boosted using the AdaBoost.R2 [1] algorithm on a 1D sinusoidal dataset with a small amount of Gaussian noise. 299 boosts (300 decision
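A sketch of that setup; scikit-learn's `AdaBoostRegressor` implements the AdaBoost.R2 algorithm, and the tree depth, noise level, and sample count below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
# 1D sinusoidal target with a small amount of Gaussian noise
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

# Boost a shallow regression tree with AdaBoost.R2
regr = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
                         n_estimators=300, random_state=rng)
regr.fit(X, y)
pred = regr.predict(X)
print(pred.shape)
```

Comparing `pred` against a single unboosted tree's predictions shows how boosting smooths the fit toward the underlying sinusoid.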