This example compares two dimensionality reduction strategies: univariate feature selection with ANOVA, and feature agglomeration with Ward hierarchical clustering.
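A minimal sketch of the comparison, not the full example: both strategies reduce a 100-feature regression problem to 10 components before a BayesianRidge fit. The dataset and the target dimensionality of 10 are illustrative assumptions, not values taken from the original example.

from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.cluster import FeatureAgglomeration
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=100, n_informative=10, random_state=0)

# Strategy 1: keep the 10 features with the strongest ANOVA F-score.
anova = Pipeline([("reduce", SelectKBest(f_regression, k=10)),
                  ("ridge", BayesianRidge())])
# Strategy 2: merge the 100 features into 10 Ward clusters of features.
ward = Pipeline([("reduce", FeatureAgglomeration(n_clusters=10)),
                 ("ridge", BayesianRidge())])

print("ANOVA selection    R^2:", cross_val_score(anova, X, y, cv=5).mean())
print("Ward agglomeration R^2:", cross_val_score(ward, X, y, cv=5).mean())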
class sklearn.gaussian_process.kernels.PairwiseKernel(gamma=1.0, gamma_bounds=(1e-05, 100000.0), metric='linear', pairwise_kernels_kwargs=None)
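A minimal usage sketch for the signature above, wrapping the pairwise "rbf" metric as a Gaussian-process kernel; the data and the alpha value are illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import PairwiseKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(30, 1))
y = np.sin(X).ravel()

# Wrap sklearn's pairwise "rbf" metric as a GP kernel; gamma is optimised
# within gamma_bounds during fitting.
kernel = PairwiseKernel(gamma=1.0, gamma_bounds=(1e-5, 1e5), metric="rbf")
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
print(gpr.predict(X[:3]))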
This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions.
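A minimal sketch of the idea: fit polynomials of increasing degree to a noisy cosine target and compare cross-validated mean squared error. The cosine target and the degrees 1/4/15 follow the usual under/overfitting illustration, but the exact numbers here are illustrative.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = np.sort(rng.rand(30))[:, np.newaxis]
y = np.cos(1.5 * np.pi * X).ravel() + rng.randn(30) * 0.1

for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5).mean()
    print(f"degree {degree:2d}: mean CV MSE = {mse:.3f}")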
Compares FeatureHasher and DictVectorizer by using both to vectorize text documents. The example demonstrates syntax and speed only; it doesn't actually do anything useful with the extracted vectors.
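A minimal sketch of the two vectorizers applied to small token-count dicts; the real example works on a text corpus and measures speed, both of which are omitted here.

from sklearn.feature_extraction import DictVectorizer, FeatureHasher

token_counts = [{"the": 3, "cat": 1, "sat": 1},
                {"the": 2, "dog": 1, "barked": 1}]

dv = DictVectorizer()
X_vocab = dv.fit_transform(token_counts)   # builds an explicit vocabulary
print(X_vocab.shape, dv.vocabulary_)

fh = FeatureHasher(n_features=16)
X_hashed = fh.transform(token_counts)      # no vocabulary, fixed-size hashed space
print(X_hashed.shape)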
This example demonstrates the behavior of Gaussian mixture models fit on data that was not sampled from a mixture of Gaussian random variables. The dataset is formed by 100 points loosely spaced following a noisy sine curve.
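A minimal sketch of the setup: points strung along a noisy sine curve (so not drawn from a Gaussian mixture) fitted with a five-component GaussianMixture; the component count and noise level are illustrative.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
t = 4 * np.pi * rng.rand(100)
X = np.column_stack([t, np.sin(t) + 0.1 * rng.randn(100)])  # noisy sine curve

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(X)
print(gmm.means_)          # component centres strung along the curve
print(gmm.predict(X[:5]))  # hard component assignments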
class sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
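A minimal usage sketch for the signature above on the iris data; the n_neighbors and weights values are illustrative.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out split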
Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training set decreases while the performance on the test set is optimal within a range of values of the regularization parameter.
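A minimal sketch of the train-versus-test curve: a regularized linear model is fit for a grid of regularization strengths and scored on both splits. Ridge stands in here for the regularized estimator used in the original example; the dataset and alpha grid are illustrative.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=200, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in np.logspace(-3, 3, 7):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(f"alpha={alpha:g}  train R^2={model.score(X_train, y_train):.3f}"
          f"  test R^2={model.score(X_test, y_test):.3f}")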
sklearn.cluster.ward_tree(X, connectivity=None, n_clusters=None, return_distance=False)
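A minimal usage sketch for the low-level helper above; in practice the same tree is usually built through AgglomerativeClustering(linkage='ward'). The unpacking below assumes the documented four-value return when return_distance=False (children, number of connected components, number of leaves, parents), and the data is illustrative.

import numpy as np
from sklearn.cluster import ward_tree

rng = np.random.RandomState(0)
X = rng.randn(20, 3)

children, n_components, n_leaves, parents = ward_tree(X)
print(children.shape)  # one row per merge: (n_samples - 1, 2)
print(n_leaves)        # number of original samples, 20 here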
sklearn.datasets.make_friedman2(n_samples=100, noise=0.0, random_state=None)
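A minimal usage sketch for the signature above; the noise value is illustrative. make_friedman2 returns a four-feature design matrix and a continuous target.

from sklearn.datasets import make_friedman2

X, y = make_friedman2(n_samples=100, noise=0.1, random_state=0)
print(X.shape, y.shape)  # (100, 4) inputs, (100,) regression target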
class sklearn.neighbors.LSHForest(n_estimators=10, radius=1.0, n_candidates=50, n_neighbors=5, min_hash_match=4, radius_cutoff_ratio=0.9, random_state=None)
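A minimal usage sketch for the signature above. LSHForest ships only with older scikit-learn releases (it has since been deprecated and removed), so this assumes such a version is installed; the data is illustrative.

import numpy as np
from sklearn.neighbors import LSHForest

rng = np.random.RandomState(0)
X_train = rng.randn(200, 10)
X_query = rng.randn(5, 10)

lshf = LSHForest(n_estimators=10, n_neighbors=5, random_state=0).fit(X_train)
distances, indices = lshf.kneighbors(X_query, n_neighbors=3)
print(indices)  # approximate nearest-neighbour indices into X_train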