class sklearn.neighbors.NearestNeighbors(n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1)
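As a minimal usage sketch (the toy points and the query below are illustrative, not part of the reference entry):

import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [8.0, 8.0]])
nn = NearestNeighbors(n_neighbors=2, metric='minkowski', p=2)
nn.fit(X)
# distances to, and indices of, the 2 nearest training points for the query
distances, indices = nn.kneighbors([[1.1, 1.1]])
# all training points within radius 1.0 of the query
radius_dist, radius_idx = nn.radius_neighbors([[1.1, 1.1]], radius=1.0)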
class sklearn.model_selection.LeavePGroupsOut(n_groups)
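A short sketch of how the splitter is typically iterated (the arrays and group labels are made up for illustration):

import numpy as np
from sklearn.model_selection import LeavePGroupsOut

X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array([1, 1, 2, 2, 3, 3])

# every combination of 2 groups is held out as the test set in turn
lpgo = LeavePGroupsOut(n_groups=2)
for train_idx, test_idx in lpgo.split(X, y, groups=groups):
    print(train_idx, test_idx)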
An example to illustrate multi-output regression with decision trees. The decision tree is used to predict simultaneously the noisy x and y observations of a circle given a single underlying feature.
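A condensed sketch of that setup (the circle data here is a simplified stand-in for the example's):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
# one input feature, two noisy outputs (the x and y coordinates of a circle)
X = np.sort(200 * rng.rand(100, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y += 0.5 - rng.rand(*y.shape)

regr = DecisionTreeRegressor(max_depth=5)
regr.fit(X, y)            # y has shape (n_samples, 2): multi-output regression
y_pred = regr.predict(X)  # predictions likewise have two columns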
sklearn.datasets.fetch_20newsgroups(data_home=None, subset='train', categories=None, shuffle=True, random_state=42, remove=(), download_if_missing=True)
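A hedged usage sketch (the two category names come from the 20 newsgroups dataset; the data is downloaded on first use):

from sklearn.datasets import fetch_20newsgroups

categories = ['alt.atheism', 'sci.space']
train = fetch_20newsgroups(subset='train', categories=categories,
                           shuffle=True, random_state=42,
                           remove=('headers', 'footers', 'quotes'))
print(len(train.data))       # raw text of each post
print(train.target_names)    # the selected category labels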
The plots below illustrate the effect the parameter C has on the separation line. A large value of C basically tells our model that we do not have much faith in our data's distribution, and will only consider points close to the line of separation. A small value of C includes more or all of the observations, allowing the margins to be calculated using all the data in the area.
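A minimal sketch of how the effect of C can be observed directly (the toy blobs and the two C values are illustrative):

import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y = [0] * 20 + [1] * 20

for C in (1.0, 0.05):
    clf = svm.SVC(kernel='linear', C=C)
    clf.fit(X, y)
    # a smaller C gives a softer margin, typically retaining more support vectors
    print(C, len(clf.support_vectors_))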
sklearn.isotonic.check_increasing(x, y)
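A toy sketch of the call (assumed behaviour: a Spearman-correlation-based check that returns True when y tends to increase with x):

from sklearn.isotonic import check_increasing

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 7, 11]
print(check_increasing(x, y))   # expected True for this monotonically rising toy data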
class sklearn.semi_supervised.LabelSpreading(kernel='rbf', gamma=20, n_neighbors=7, alpha=0.2, max_iter=30, tol=0.001, n_jobs=1)
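A brief semi-supervised sketch, where unlabeled samples are marked with -1 (the masking fraction is arbitrary):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

iris = load_iris()
rng = np.random.RandomState(42)
labels = np.copy(iris.target)
labels[rng.rand(len(labels)) < 0.5] = -1   # hide about half of the labels

model = LabelSpreading(kernel='rbf', gamma=20, alpha=0.2, max_iter=30)
model.fit(iris.data, labels)
# transduction_ holds the inferred labels for every sample, labeled or not
print(model.transduction_[:10])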
This example shows that model selection can be performed with Gaussian Mixture Models using information-theoretic criteria (BIC). Model selection concerns both the covariance type and the number of components in the model.
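A condensed sketch of that selection loop (synthetic two-cluster data; the grid of covariance types and component counts is illustrative):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + [4, 4]])

best_bic, best_gmm = np.inf, None
for cov_type in ('spherical', 'tied', 'diag', 'full'):
    for n_components in range(1, 5):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type=cov_type, random_state=0)
        gmm.fit(X)
        bic = gmm.bic(X)           # lower BIC is better
        if bic < best_bic:
            best_bic, best_gmm = bic, gmm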
The goal of this guide is to explore some of the main scikit-learn tools on a single practical task: analysing a collection of text documents (newsgroups posts) on twenty different topics.
Probabilistic PCA and Factor Analysis are probabilistic models. The consequence is that the likelihood of new data can be used for model selection and covariance estimation.
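A brief sketch of using held-out likelihood to compare component counts (toy data; cross_val_score relies on each model's score method, the average log-likelihood, so higher is better):

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(200, 10)

for n in (2, 5, 8):
    pca_ll = cross_val_score(PCA(n_components=n), X).mean()
    fa_ll = cross_val_score(FactorAnalysis(n_components=n), X).mean()
    print(n, pca_ll, fa_ll)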