Out-of-bag (OOB) estimates can be a useful heuristic to estimate the "optimal" number of boosting iterations. OOB estimates are almost identical to cross-validation estimates, but they can be computed on-the-fly without the need for repeated model fitting.
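A minimal sketch of this heuristic, assuming a GradientBoostingClassifier fit with subsample < 1.0 (required for OOB estimates to be computed); the dataset and parameter values are illustrative:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# subsample < 1.0 enables out-of-bag estimates
clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5, random_state=0)
clf.fit(X, y)

# oob_improvement_[i] holds the OOB loss improvement at iteration i;
# the cumulative improvement peaks near the "optimal" iteration count
best_n = np.argmax(np.cumsum(clf.oob_improvement_)) + 1
print("OOB estimate of the optimal number of iterations: %d" % best_n)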
class sklearn.ensemble.ExtraTreesRegressor(n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, ...)
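A hedged usage sketch for ExtraTreesRegressor on synthetic data (the dataset and cross-validation settings are illustrative, not from this page):

from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

# extremely randomized trees: split thresholds are drawn at random
reg = ExtraTreesRegressor(n_estimators=10, random_state=0)
scores = cross_val_score(reg, X, y, cv=5)
print("R^2: %0.3f (+/- %0.3f)" % (scores.mean(), scores.std() * 2))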
sklearn.preprocessing.scale(X, axis=0, with_mean=True, with_std=True, copy=True)
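For example, scale standardizes each column to zero mean and unit variance (the toy matrix below is illustrative):

import numpy as np
from sklearn.preprocessing import scale

X = np.array([[1., -1., 2.],
              [2., 0., 0.],
              [0., 1., -1.]])

X_scaled = scale(X)           # standardize along axis=0 (columns)
print(X_scaled.mean(axis=0))  # approximately [0. 0. 0.]
print(X_scaled.std(axis=0))   # [1. 1. 1.]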
class sklearn.linear_model.ElasticNet(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, ...)
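A brief ElasticNet sketch combining L1 and L2 penalties (the alpha and l1_ratio values here are illustrative):

from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=100, n_features=20, random_state=0)

# l1_ratio interpolates between ridge (0.0) and lasso (1.0)
enet = ElasticNet(alpha=0.1, l1_ratio=0.7)
enet.fit(X, y)
print(enet.coef_)  # some coefficients shrink toward zero via the L1 term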
class sklearn.ensemble.GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', ...)
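A short fitting sketch with the default least-squares loss ('ls'), on the Friedman #1 synthetic benchmark (the split and parameter values are illustrative):

from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=1200, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                max_depth=1, loss='ls')
est.fit(X_train, y_train)
print(mean_squared_error(y_test, est.predict(X_test)))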
Using orthogonal matching pursuit for recovering a sparse signal from a noisy measurement encoded with a dictionary.
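A condensed sketch of that example, assuming the older make_sparse_coded_signal API in which the returned signal, dictionary, and code are squeezed arrays (newer releases transpose these shapes):

import numpy as np
from sklearn.datasets import make_sparse_coded_signal
from sklearn.linear_model import OrthogonalMatchingPursuit

n_components, n_features, n_nonzero_coefs = 512, 100, 17

# y = Xw, where w has exactly n_nonzero_coefs non-zero entries
y, X, w = make_sparse_coded_signal(n_samples=1,
                                   n_components=n_components,
                                   n_features=n_features,
                                   n_nonzero_coefs=n_nonzero_coefs,
                                   random_state=0)

# distort the clean measurement with Gaussian noise
y_noisy = y + 0.05 * np.random.randn(len(y))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs)
omp.fit(X, y_noisy)
print(np.flatnonzero(omp.coef_))  # indices of the recovered atoms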
A plot that compares the various convex loss functions supported by sklearn.linear_model.SGDClassifier.
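Which losses the plot includes is not stated here; as an assumption, a sketch of the usual candidates, evaluated as functions of the margin z = y * f(x):

import numpy as np

z = np.linspace(-4, 4, 200)
zero_one = (z <= 0).astype(int)            # 0-1 loss (non-convex reference)
hinge = np.maximum(0, 1 - z)               # hinge loss (linear SVM)
perceptron = np.maximum(0, -z)             # perceptron loss
log_loss = np.log2(1 + np.exp(-z))         # log loss (logistic regression)
squared_hinge = np.maximum(0, 1 - z) ** 2  # squared hinge loss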
sklearn.metrics.calinski_harabaz_score(X, labels)
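An illustrative call, using the spelling of the score as it appears in this version of the library (later releases rename it calinski_harabasz_score):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabaz_score

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# ratio of between-cluster to within-cluster dispersion; higher is better
print(calinski_harabaz_score(X, labels))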
class sklearn.feature_extraction.text.CountVectorizer(input=u'content', encoding=u'utf-8', decode_error=u'strict', ...)
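A small usage sketch on a toy corpus (the documents are illustrative; get_feature_names is the vocabulary accessor in this version of the library):

from sklearn.feature_extraction.text import CountVectorizer

corpus = ['This is the first document.',
          'This is the second document.',
          'And the third one.']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)   # sparse matrix of token counts
print(vectorizer.get_feature_names())  # learned vocabulary, sorted
print(X.toarray())                     # dense view of the counts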
class sklearn.gaussian_process.kernels.Matern(length_scale=1.0, length_scale_bounds=(1e-05, 100000.0), nu=1.5)
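A hedged sketch of the Matern kernel inside a Gaussian process regressor (the data and nu value are illustrative; nu=0.5 gives the absolute exponential kernel, and nu -> inf approaches the RBF kernel):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

X = np.linspace(0, 10, 20)[:, np.newaxis]
y = np.sin(X).ravel()

# nu controls the smoothness of the learned function
kernel = Matern(length_scale=1.0, nu=1.5)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
gpr.fit(X, y)
print(gpr.predict(np.array([[5.0]])))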