grid_search.RandomizedSearchCV()

Warning: DEPRECATED

class sklearn.grid_search.RandomizedSearchCV(estimator, param_distributions, n_iter=10, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score='raise') [source]

Randomized search on hyperparameters. Deprecated since version 0.18: this module will be removed in 0.20. Use sklearn.model_selection.RandomizedSearchCV instead. RandomizedSearchCV implements a "fit" and a "score" method.
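A minimal sketch of the replacement API in sklearn.model_selection; the SVC estimator and the parameter distributions below are illustrative assumptions, not part of the original entry.

# Randomized search over continuous distributions instead of a fixed grid.
from scipy.stats import uniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# C and gamma are drawn at random from these distributions for each candidate.
param_distributions = {"C": uniform(0.1, 10.0), "gamma": uniform(0.001, 1.0)}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=20, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)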

grid_search.ParameterGrid()

Warning: DEPRECATED

class sklearn.grid_search.ParameterGrid(param_grid) [source]

Grid of parameters with a discrete number of values for each. Deprecated since version 0.18: this module will be removed in 0.20. Use sklearn.model_selection.ParameterGrid instead. Can be used to iterate over parameter value combinations with the Python built-in function iter. Read more in the User Guide.

Parameters:
param_grid : dict of string to sequence, or sequence of such
    The parameter grid to explore.
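A small sketch of iterating over a parameter grid with the non-deprecated class from sklearn.model_selection; the parameter values are illustrative.

from sklearn.model_selection import ParameterGrid

param_grid = {"kernel": ["linear", "rbf"], "C": [1, 10]}
# Iteration yields every combination as a dict, e.g. {'C': 1, 'kernel': 'linear'}.
for params in ParameterGrid(param_grid):
    print(params)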

grid_search.ParameterSampler()

Warning: DEPRECATED

class sklearn.grid_search.ParameterSampler(param_distributions, n_iter, random_state=None) [source]

Generator on parameters sampled from given distributions. Deprecated since version 0.18: this module will be removed in 0.20. Use sklearn.model_selection.ParameterSampler instead. Non-deterministic iterable over random candidate combinations for hyperparameter search. If all parameters are presented as a list, sampling without replacement is performed.
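A sketch of the replacement class sklearn.model_selection.ParameterSampler; the parameter space mixes a scipy distribution and a list, both chosen here only for illustration.

from scipy.stats import randint
from sklearn.model_selection import ParameterSampler

param_distributions = {"n_estimators": randint(50, 200), "max_depth": [3, 5, None]}
# Draw five random candidate settings from the space above.
sampler = ParameterSampler(param_distributions, n_iter=5, random_state=0)
for params in sampler:
    print(params)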

grid_search.GridSearchCV()

Warning: DEPRECATED

class sklearn.grid_search.GridSearchCV(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score='raise') [source]

Exhaustive search over specified parameter values for an estimator. Deprecated since version 0.18: this module will be removed in 0.20. Use sklearn.model_selection.GridSearchCV instead. Important members are fit, predict. GridSearchCV implements a "fit" and a "score" method.
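A minimal sketch using the replacement class sklearn.model_selection.GridSearchCV; the SVC estimator and grid values are illustrative assumptions.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Every combination of kernel and C is fitted and scored with 5-fold CV.
param_grid = {"kernel": ["linear", "rbf"], "C": [1, 10, 100]}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)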

Gradient Boosting regularization

Illustration of the effect of different regularization strategies for Gradient Boosting. The example is taken from Hastie et al. (2009). The loss function used is binomial deviance. Regularization via shrinkage (learning_rate < 1.0) improves performance considerably. In combination with shrinkage, stochastic gradient boosting (subsample < 1.0) can produce more accurate models by reducing the variance via bagging. Subsampling without shrinkage usually does poorly.
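A hedged sketch of the two regularization knobs the blurb discusses, shrinkage via learning_rate and stochastic gradient boosting via subsample; the dataset and parameter values are illustrative, not taken from the original example.

from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_hastie_10_2(n_samples=4000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shrinkage (learning_rate < 1.0) combined with subsampling (subsample < 1.0).
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, subsample=0.5, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))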

Gradient Boosting regression

Demonstrate Gradient Boosting on the Boston housing dataset. This example fits a Gradient Boosting model with least squares loss and 500 regression trees of depth 4.

print(__doc__)

# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
#
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt

from sklearn import ensemble
from sklearn import datasets
from sklearn.utils import shuffle
from sklearn.metrics import mean_squared_error

# Load data
boston = datasets.load_boston()
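A hedged continuation sketch of the truncated example: fitting the model the text describes (500 trees of depth 4, least squares loss). The train/test split and the learning rate are assumptions, not taken from the original.

# Shuffle and split off 90% of the data for training.
X, y = shuffle(boston.data, boston.target, random_state=13)
offset = int(X.shape[0] * 0.9)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]

# 500 regression trees of depth 4 with least squares ('ls') loss.
params = {'n_estimators': 500, 'max_depth': 4, 'learning_rate': 0.01, 'loss': 'ls'}
clf = ensemble.GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)
print("MSE: %.4f" % mean_squared_error(y_test, clf.predict(X_test)))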

GMM covariances

Demonstration of several covariance types for Gaussian mixture models. See Gaussian mixture models for more information on the estimator. Although GMMs are often used for clustering, we can compare the obtained clusters with the actual classes from the dataset. We initialize the means of the Gaussians with the means of the classes from the training set to make this comparison valid. We plot predicted labels on both training and held-out test data using a variety of GMM covariance types.
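A minimal sketch of comparing covariance types with sklearn.mixture.GaussianMixture, initializing each component's mean from the class means as the blurb describes; the dataset and the rest of the setup are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)
for covariance_type in ('spherical', 'diag', 'tied', 'full'):
    gmm = GaussianMixture(n_components=3, covariance_type=covariance_type, random_state=0)
    # Initialize each component's mean with the mean of one class.
    gmm.means_init = np.array([X[y == i].mean(axis=0) for i in range(3)])
    gmm.fit(X)
    print(covariance_type, gmm.predict(X)[:5])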

Gradient Boosting Out-of-Bag estimates

Out-of-bag (OOB) estimates can be a useful heuristic to estimate the "optimal" number of boosting iterations. OOB estimates are almost identical to cross-validation estimates, but they can be computed on the fly without the need for repeated model fitting. OOB estimates are only available for Stochastic Gradient Boosting (i.e. subsample < 1.0); the estimates are derived from the improvement in loss on the examples not included in the bootstrap sample (the so-called out-of-bag examples).
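A hedged sketch of reading OOB estimates: when subsample < 1.0, the fitted model exposes oob_improvement_, whose cumulative sum can suggest a stopping iteration. The dataset and parameter values are illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5, random_state=0)
clf.fit(X, y)
# Cumulative OOB improvement; its argmax is one heuristic for the number of iterations.
cumulative_oob = np.cumsum(clf.oob_improvement_)
print("OOB-suggested number of iterations:", np.argmax(cumulative_oob) + 1)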

gaussian_process.kernels.Sum()

class sklearn.gaussian_process.kernels.Sum(k1, k2) [source]

Sum-kernel k1 + k2 of two kernels k1 and k2. The resulting kernel is defined as k_sum(X, Y) = k1(X, Y) + k2(X, Y). New in version 0.18.

Parameters:
k1 : Kernel object
    The first base-kernel of the sum-kernel.
k2 : Kernel object
    The second base-kernel of the sum-kernel.

Methods:
clone_with_theta(theta) : Returns a clone of self with given hyperparameters theta.
diag(X) : Returns the diagonal of the kernel k(X, X).
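A small sketch of composing two kernels; Sum(k1, k2) is also what the "+" operator builds. The particular base kernels chosen here are illustrative.

from sklearn.gaussian_process.kernels import RBF, Sum, WhiteKernel

kernel = Sum(RBF(length_scale=1.0), WhiteKernel(noise_level=0.1))
# Equivalent shorthand using the overloaded "+" operator.
kernel_via_operator = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
print(kernel)
print(kernel_via_operator)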

gaussian_process.kernels.WhiteKernel()

class sklearn.gaussian_process.kernels.WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-05, 100000.0)) [source]

White kernel. The main use case of this kernel is as part of a sum-kernel where it explains the noise component of the signal. Tuning its parameter corresponds to estimating the noise level. k(x_1, x_2) = noise_level if x_1 == x_2 else 0. New in version 0.18.

Parameters:
noise_level : float, default: 1.0
    Parameter controlling the noise level.
noise_level_bounds : pair of floats
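A hedged sketch of the main use case described above: WhiteKernel as the noise component of a sum-kernel in Gaussian process regression. The synthetic data and kernel settings are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, 20)[:, np.newaxis]
y = np.sin(X).ravel() + rng.normal(0, 0.1, X.shape[0])

# RBF models the signal; WhiteKernel absorbs the observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)
# The fitted noise_level estimates the noise variance in the targets.
print(gpr.kernel_)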