Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training data decreases, while the performance on test data is optimal within a range of values of the regularization parameter.
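A minimal sketch of the idea (assumed setup, not the original example's code): fit Ridge models with increasing regularization strength alpha and compare the score on the training data with the score on held-out test data.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic regression problem, purely for illustration
X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    # Train score keeps dropping as alpha grows; test score peaks in between
    print(alpha, model.score(X_train, y_train), model.score(X_test, y_test))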
class sklearn.model_selection.TimeSeriesSplit(n_splits=3)
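A hedged usage sketch on toy data: each successive split trains only on past observations and tests on the observations that follow them.

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(6, 2)   # 6 samples ordered in time
y = np.arange(6)
tscv = TimeSeriesSplit(n_splits=3)
for train_index, test_index in tscv.split(X):
    # Test indices always come after the training indices
    print("TRAIN:", train_index, "TEST:", test_index)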
class sklearn.preprocessing.PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)
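A small illustrative sketch (toy input, assumed): with degree=2 the transformer emits the bias column, the original features, and all degree-2 terms (squares and pairwise interactions).

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.arange(6).reshape(3, 2)    # two input features per sample
poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)
# Columns: 1, x1, x2, x1^2, x1*x2, x2^2
print(poly.fit_transform(X))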
There are 3 different approaches to evaluate the quality of a model's predictions: the estimator score method, the scoring parameter, and metric functions.
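A hedged sketch contrasting the three routes on the same classifier (the dataset and estimator are chosen here only for illustration).

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(clf.score(X_test, y_test))                       # 1) estimator score method
print(cross_val_score(clf, X, y, scoring="accuracy"))  # 2) scoring parameter
print(accuracy_score(y_test, clf.predict(X_test)))     # 3) metric function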
class sklearn.feature_selection.SelectFdr(score_func=f_classif, alpha=0.05)
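A hedged sketch, assuming the default f_classif score function and a built-in dataset chosen only for illustration: features are kept if their p-values pass a Benjamini-Hochberg style false discovery rate threshold.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFdr, f_classif

X, y = load_breast_cancer(return_X_y=True)
selector = SelectFdr(score_func=f_classif, alpha=0.05)
X_new = selector.fit_transform(X, y)
# Number of features before and after FDR-controlled selection
print(X.shape, "->", X_new.shape)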
The RandomForestClassifier is trained using bootstrap aggregation, where each new tree is fit from a bootstrap sample of the training observations.
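A minimal sketch (synthetic data, assumed parameters): bootstrap=True, the default, is what makes each tree see its own bootstrap sample of the training observations.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))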
Biclustering can be performed with the module sklearn.cluster.bicluster.
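A hedged sketch using SpectralCoclustering, one of the biclustering estimators, on a synthetic matrix with planted biclusters (data generator and parameters are assumptions for illustration).

from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

# Matrix with three planted biclusters
data, rows, cols = make_biclusters(shape=(30, 30), n_clusters=3, random_state=0)
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data)
# Each row (and each column) is assigned to one bicluster
print(model.row_labels_)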
sklearn.svm.libsvm.cross_validation(): binding of the cross-validation routine (low-level routine)
sklearn.neighbors.kneighbors_graph(X, n_neighbors, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False)
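A small sketch (toy points, assumed): the result is a sparse adjacency matrix with a 1 marking each of the k nearest neighbors of every sample.

import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [2.0], [5.0]])
A = kneighbors_graph(X, n_neighbors=2, mode='connectivity', include_self=False)
print(A.toarray())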
sklearn.metrics.matthews_corrcoef(y_true, y_pred, sample_weight=None)
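A hedged sketch on a four-sample binary problem (labels are illustrative): the coefficient is +1 for perfect prediction, 0 for chance-level prediction, and -1 for total disagreement.

from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0]
y_pred = [1, 0, 1, 1]
print(matthews_corrcoef(y_true, y_pred))  # approximately -0.33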