class sklearn.model_selection.LeavePGroupsOut(n_groups)
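A minimal usage sketch; the group labels below are invented for illustration. With three distinct groups and n_groups=2, every pair of groups is held out in turn:

>>> import numpy as np
>>> from sklearn.model_selection import LeavePGroupsOut
>>> X = np.arange(8).reshape(4, 2)      # 4 samples, 2 features
>>> y = np.array([0, 1, 0, 1])
>>> groups = np.array([1, 1, 2, 3])     # three distinct groups
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> for train_idx, test_idx in lpgo.split(X, y, groups):
...     print(train_idx, test_idx)
[3] [0 1 2]
[2] [0 1 3]
[0 1] [2 3]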
sklearn.metrics.recall_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
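A toy example: recall is the fraction of true positives recovered, TP / (TP + FN). The labels below are made up for illustration:

>>> from sklearn.metrics import recall_score
>>> y_true = [1, 1, 1, 1, 0, 0]
>>> y_pred = [1, 1, 0, 0, 0, 1]
>>> recall_score(y_true, y_pred)    # 2 of the 4 positives are found
0.5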
sklearn.metrics.pairwise_distances(X, Y=None, metric='euclidean', n_jobs=1, **kwds)
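A short sketch with hand-picked points (a 3-4-5 right triangle) so the distances are easy to verify:

>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances
>>> X = np.array([[0., 0.], [3., 4.]])
>>> D = pairwise_distances(X)       # metric='euclidean' by default
>>> D[0, 1], D[1, 0]                # symmetric: both are sqrt(3**2 + 4**2)
(5.0, 5.0)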
sklearn.datasets.make_sparse_uncorrelated(n_samples=100, n_features=10, random_state=None)
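A minimal sketch showing the returned shapes; only the first four features contribute to the target, the rest are uninformative:

>>> from sklearn.datasets import make_sparse_uncorrelated
>>> X, y = make_sparse_uncorrelated(n_samples=100, n_features=10,
...                                 random_state=0)
>>> X.shape, y.shape
((100, 10), (100,))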
sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, effective_rank=10, tail_strength=0.5, random_state=None)
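A minimal sketch; the matrix has full size, but its singular value spectrum decays quickly, so it is approximately low-rank:

>>> from sklearn.datasets import make_low_rank_matrix
>>> X = make_low_rank_matrix(n_samples=50, n_features=30,
...                          effective_rank=5, tail_strength=0.1,
...                          random_state=0)
>>> X.shape
(50, 30)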
class sklearn.semi_supervised.LabelSpreading(kernel='rbf', gamma=20, n_neighbors=7, alpha=0.2, max_iter=30, tol=0.001, n_jobs=1)
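A sketch on iris, following the semi-supervised convention that unlabeled points are marked with -1; the 30% masking rate below is an arbitrary choice for illustration:

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.semi_supervised import LabelSpreading
>>> iris = load_iris()
>>> labels = np.copy(iris.target)
>>> rng = np.random.RandomState(42)
>>> labels[rng.rand(len(labels)) < 0.3] = -1    # hide ~30% of the labels
>>> model = LabelSpreading(kernel='rbf', gamma=20).fit(iris.data, labels)
>>> model.transduction_.shape                   # labels inferred for every point
(150,)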
class sklearn.decomposition.DictionaryLearning(n_components=None, alpha=1, max_iter=1000, tol=1e-08, fit_algorithm='lars', transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, n_jobs=1, code_init=None, dict_init=None, verbose=False, split_sign=False, random_state=None)
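A minimal sketch on random data (real use would start from meaningful signals); fit_transform learns the dictionary atoms and returns the sparse codes:

>>> import numpy as np
>>> from sklearn.decomposition import DictionaryLearning
>>> X = np.random.RandomState(0).randn(20, 8)
>>> dico = DictionaryLearning(n_components=5, alpha=1, max_iter=50,
...                           random_state=0)
>>> code = dico.fit_transform(X)
>>> code.shape, dico.components_.shape          # codes and dictionary atoms
((20, 5), (5, 8))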
class sklearn.feature_selection.SelectFromModel(estimator, threshold=None, prefit=False)
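A sketch using a random forest's feature importances to keep only features at or above the median importance; on iris this retains the two dominant petal measurements:

>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.feature_selection import SelectFromModel
>>> X, y = load_iris(return_X_y=True)
>>> clf = RandomForestClassifier(n_estimators=50, random_state=0)
>>> selector = SelectFromModel(clf, threshold='median')
>>> selector.fit_transform(X, y).shape          # 4 features reduced to 2
(150, 2)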
class sklearn.decomposition.NMF(n_components=None, init=None, solver='cd', tol=0.0001, max_iter=200, random_state=None, alpha=0.0, l1_ratio=0.0, verbose=0, shuffle=False)
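A minimal sketch of the factorization X ≈ WH; the non-negative input below is random, purely for illustration:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(6, 4))  # NMF requires X >= 0
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)                         # W: (6, 2)
>>> H = model.components_                              # H: (2, 4)
>>> W.shape, H.shape
((6, 2), (2, 4))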
Every estimator has its advantages and drawbacks. Its generalization error can be decomposed in terms of bias, variance and noise. The