class sklearn.cluster.FeatureAgglomeration(n_clusters=2, affinity='euclidean', memory=Memory(cachedir=None), connectivity=None, compute_full_tree='auto', linkage='ward', pooling_func=<function mean>)
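A minimal usage sketch on toy data (the array values are illustrative, not from the docs). Identical feature columns get merged into one pooled column per cluster; only `n_clusters` is passed so the sketch works across scikit-learn versions:

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration

# Toy data: 6 samples, 4 features; columns 0-1 and 2-3 are duplicates.
X = np.array([[0., 0., 1., 1.],
              [0., 0., 1., 1.],
              [1., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 1., 0., 0.]])

# Agglomerate the 4 features into 2 clusters; each output column is the
# (mean-)pooled value of one feature cluster.
agglo = FeatureAgglomeration(n_clusters=2)
X_reduced = agglo.fit_transform(X)
print(X_reduced.shape)  # → (6, 2)
```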
class sklearn.gaussian_process.kernels.WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-05, 100000.0))
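WhiteKernel is typically used as a noise term added to a signal kernel; a sketch with synthetic data (the RBF length scale and noise values are arbitrary assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, 20)[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, X.shape[0])

# WhiteKernel models i.i.d. observation noise; noise_level is optimized
# within noise_level_bounds during fitting.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0,
                                             noise_level_bounds=(1e-5, 1e5))
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)
print(gpr.kernel_)  # optimized kernel, including the fitted noise level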
sklearn.datasets.load_svmlight_files(files, n_features=None, dtype=, multilabel=False, zero_based='auto', query_id=False)
class sklearn.naive_bayes.GaussianNB(priors=None)
class sklearn.dummy.DummyRegressor(strategy='mean', constant=None, quantile=None)
sklearn.metrics.pairwise.laplacian_kernel(X, Y=None, gamma=None)
This example shows that model selection can be performed with Gaussian Mixture Models using information-theoretic criteria (BIC). Model selection concerns both
sklearn.metrics.r2_score(y_true, y_pred, sample_weight=None, multioutput=None)
Computes a Bayesian Ridge Regression on a synthetic dataset. See
class sklearn.preprocessing.KernelCenterer
Page 68 of 70