Well-calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level.
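A minimal sketch of probability calibration; the GaussianNB base classifier and the synthetic dataset are illustrative choices, not part of the original text:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.calibration import CalibratedClassifierCV

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Uncalibrated classifier: predict_proba may be over- or under-confident.
    clf = GaussianNB().fit(X_train, y_train)
    proba_raw = clf.predict_proba(X_test)[:, 1]

    # Sigmoid (Platt) calibration fitted with internal cross-validation.
    calibrated = CalibratedClassifierCV(GaussianNB(), method='sigmoid', cv=3)
    calibrated.fit(X_train, y_train)
    proba_cal = calibrated.predict_proba(X_test)[:, 1]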
sklearn.feature_selection.mutual_info_regression(X, y, discrete_features='auto', n_neighbors=3, copy=True, random_state=None)
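A short usage sketch on synthetic data, where only the first feature carries (non-linear) information about the target:

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.RandomState(0)
    X = rng.uniform(size=(500, 3))
    # The target depends non-linearly on the first feature only.
    y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=500)

    mi = mutual_info_regression(X, y, n_neighbors=3, random_state=0)
    print(mi)  # the first entry should dominate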
sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1)
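A brief sketch combining empirical_covariance with shrunk_covariance on random data:

    import numpy as np
    from sklearn.covariance import empirical_covariance, shrunk_covariance

    rng = np.random.RandomState(0)
    X = rng.normal(size=(50, 5))

    emp_cov = empirical_covariance(X)
    # Convex blend of the empirical covariance and a scaled identity matrix.
    reg_cov = shrunk_covariance(emp_cov, shrinkage=0.1)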
class sklearn.linear_model.PassiveAggressiveRegressor(C=1.0, fit_intercept=True, n_iter=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False)
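A usage sketch; the parameter names used here (max_iter, tol) assume a newer scikit-learn release than the n_iter-based signature listed above:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import PassiveAggressiveRegressor

    X, y = make_regression(n_samples=200, n_features=10, noise=1.0, random_state=0)

    # Online-style learner: updates only when the epsilon-insensitive loss is non-zero.
    reg = PassiveAggressiveRegressor(C=1.0, max_iter=100, tol=1e-3, random_state=0)
    reg.fit(X, y)
    print(reg.predict(X[:3]))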
class sklearn.preprocessing.LabelBinarizer(neg_label=0, pos_label=1, sparse_output=False)
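A small example binarizing string labels and mapping the indicator matrix back to labels:

    from sklearn.preprocessing import LabelBinarizer

    lb = LabelBinarizer(neg_label=0, pos_label=1)
    y = ['spam', 'ham', 'spam', 'eggs']

    Y = lb.fit_transform(y)           # one indicator column per class
    print(lb.classes_)                # ['eggs' 'ham' 'spam']
    labels = lb.inverse_transform(Y)  # back to the original string labels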
Transform a signal as a sparse combination of Ricker wavelets. This example visually compares different sparse coding methods using the sklearn.decomposition.SparseCoder estimator.
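A condensed sketch of the idea; the Ricker dictionary construction and the piecewise-constant test signal below are simplified stand-ins for the full example:

    import numpy as np
    from sklearn.decomposition import SparseCoder

    def ricker(resolution, center, width):
        # Discrete Ricker ("Mexican hat") wavelet centered at `center`.
        x = np.linspace(0, resolution - 1, resolution)
        a = (x - center) / width
        return (1 - a ** 2) * np.exp(-a ** 2 / 2)

    resolution = 1024
    # Dictionary: one normalized Ricker atom per center position.
    D = np.array([ricker(resolution, c, width=10.0) for c in range(0, resolution, 8)])
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    # Piecewise-constant test signal to be sparsely encoded.
    signal = np.zeros(resolution)
    signal[300:700] = 1.0

    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=20)
    code = coder.transform(signal.reshape(1, -1))
    reconstruction = code @ D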
The usual covariance maximum likelihood estimate can be regularized using shrinkage. Ledoit and Wolf proposed a closed formula to compute the asymptotically optimal shrinkage parameter (minimizing a mean squared error criterion), yielding the Ledoit-Wolf covariance estimate.
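A sketch using the LedoitWolf estimator object, which applies that closed formula internally:

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.RandomState(42)
    X = rng.normal(size=(40, 20))  # few samples relative to the number of features

    lw = LedoitWolf().fit(X)
    print(lw.shrinkage_)   # shrinkage intensity chosen by the closed formula
    print(lw.covariance_)  # regularized covariance matrix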
sklearn.random_projection.johnson_lindenstrauss_min_dim(n_samples, eps=0.1)
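Example call:

    from sklearn.random_projection import johnson_lindenstrauss_min_dim

    # Minimum number of random-projection components needed to preserve the
    # pairwise distances of 10,000 samples within a 10% distortion (eps).
    n_components = johnson_lindenstrauss_min_dim(n_samples=10000, eps=0.1)
    print(n_components)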
sklearn.utils.shuffle(*arrays, **options)
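Example keeping two arrays row-aligned while shuffling:

    import numpy as np
    from sklearn.utils import shuffle

    X = np.arange(10).reshape(5, 2)
    y = np.array([0, 1, 0, 1, 0])

    # Both arrays are permuted with the same random indices, so rows stay aligned.
    X_shuffled, y_shuffled = shuffle(X, y, random_state=0)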
Principal Component Analysis applied to the Iris dataset.
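A minimal sketch of the reduction of the four Iris measurements to two principal components:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    iris = load_iris()
    # Project the 4-dimensional measurements onto the 2 leading principal components.
    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(iris.data)
    print(pca.explained_variance_ratio_)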