Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data.
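A minimal sketch of avoiding that mistake by holding out a separate test set; the iris data and the SVC estimator are illustrative choices, not taken from the text above.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Keep 40% of the samples aside so the score is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
print(clf.score(X_test, y_test))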
Sample usage of Nearest Neighbors classification. It will plot the decision boundaries for each class.
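A rough sketch of such a plot, assuming the first two iris features so the boundary is two-dimensional; k=15 is an arbitrary illustrative choice.

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, neighbors

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target  # two features only, so the boundary can be drawn

clf = neighbors.KNeighborsClassifier(n_neighbors=15, weights='uniform').fit(X, y)

# Predict over a grid and colour each cell by the predicted class.
xx, yy = np.meshgrid(np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, 0.02),
                     np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k')
plt.show()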
Plot several randomly generated 2D classification datasets. This example illustrates the datasets.make_classification, datasets.make_blobs and datasets.make_gaussian_quantiles functions.
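One possible way to generate and plot a single such dataset; the parameter values below are illustrative, not prescribed by the example.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, n_features=2, n_redundant=0,
                           n_informative=2, n_clusters_per_class=1, random_state=1)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k')
plt.title("make_classification: 2 informative features, 1 cluster per class")
plt.show()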
sklearn.cluster.k_means(X, n_clusters, init='k-means++', precompute_distances='auto', n_init=10, max_iter=300, verbose=False, tol=0.0001, random_state=None, copy_x=True, n_jobs=1, algorithm='auto', return_n_iter=False)
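A short usage sketch of the functional k-means interface; the toy data are made up for illustration, and the return value is the (centers, labels, inertia) triple.

import numpy as np
from sklearn.cluster import k_means

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])
centers, labels, inertia = k_means(X, n_clusters=2, random_state=0)
print(centers)   # one row per cluster center
print(labels)    # cluster index assigned to each sample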
sklearn.datasets.make_friedman2(n_samples=100, noise=0.0, random_state=None)
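A quick sketch of drawing samples from this generator; the shapes printed below follow from its fixed four-dimensional input space.

from sklearn.datasets import make_friedman2

X, y = make_friedman2(n_samples=100, noise=0.0, random_state=0)
print(X.shape, y.shape)  # (100, 4) inputs and (100,) regression targets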
sklearn.metrics.hamming_loss(y_true, y_pred, labels=None, sample_weight=None, classes=None)
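A minimal usage sketch: the Hamming loss is the fraction of labels that are predicted incorrectly.

from sklearn.metrics import hamming_loss

y_true = [1, 2, 3, 4]
y_pred = [2, 2, 3, 4]
print(hamming_loss(y_true, y_pred))  # 0.25: one of four labels disagrees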
sklearn.ensemble.partial_dependence.plot_partial_dependence(gbrt, X, features, feature_names=None, label=None, n_cols=3, grid_resolution=100, percentiles=(0.05, 0.95), n_jobs=1, verbose=0, ax=None, line_kw=None, contour_kw=None, **fig_kw)
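A hedged usage sketch for this function, assuming a release in which the sklearn.ensemble.partial_dependence module is still available (later versions provide this functionality in sklearn.inspection instead); the Friedman #1 data and the feature indices are illustrative.

from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import plot_partial_dependence

X, y = make_friedman1(n_samples=200, random_state=0)
gbrt = GradientBoostingRegressor(random_state=0).fit(X, y)  # gbrt must be a fitted model
# One-way dependence on features 0 and 1, plus their two-way interaction.
fig, axes = plot_partial_dependence(gbrt, X, features=[0, 1, (0, 1)])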
class sklearn.mixture.GaussianMixture(n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10)
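A minimal sketch of fitting a two-component mixture; the synthetic blobs are illustrative.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.concatenate([rng.randn(100, 2), rng.randn(100, 2) + 5])  # two well-separated blobs
gm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(X)
print(gm.means_)          # estimated component means
print(gm.predict(X[:5]))  # hard component assignments for the first samples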
This example demonstrates the behavior of Gaussian mixture models fit on data that was not sampled from a mixture of Gaussian random variables. The dataset is formed by 100 points loosely spaced following a noisy sine curve.
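A rough sketch of that setup, assuming a noisy sine curve as the non-Gaussian source; the number of components (10) is an illustrative choice, not taken from the example.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
t = 6 * np.pi * rng.rand(100)
X = np.column_stack([t, np.sin(t) + 0.1 * rng.randn(100)])  # points along a noisy sine
gm = GaussianMixture(n_components=10, covariance_type='full', random_state=0).fit(X)
print(gm.bic(X))  # lower BIC means a better fit for the chosen number of components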
Out-of-bag (OOB) estimates can be a useful heuristic to estimate the "optimal" number of boosting iterations. OOB estimates are almost identical to cross-validation estimates but they can be computed on-the-fly without the need for repeated model fitting.
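A hedged sketch of reading OOB improvements from a fitted model; subsample < 1.0 is required for the oob_improvement_ attribute to be populated, and the data here are synthetic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5, random_state=0).fit(X, y)
# Cumulative OOB improvement; its argmax is the heuristic for the best iteration count.
cumulative = np.cumsum(clf.oob_improvement_)
print("Heuristic number of iterations:", np.argmax(cumulative) + 1)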