Label Propagation learning a complex structure

Example of LabelPropagation learning a complex internal structure to demonstrate "manifold learning". The outer circle should be labeled "red" and the inner circle "blue". Because both label groups lie inside their own distinct shape, we can see that the labels propagate correctly around the circle.

print(__doc__)

# Authors: Clay Woolam <clay@woolam.org>
#          Andreas Mueller <amueller@ais.uni-bonn.de>
# License: BSD

import numpy as np
import matplotlib.pyplot as plt
from sk ...
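The original example's code is cut off above; the following is a minimal sketch of the same idea, assuming a two-circles dataset from make_circles and the knn kernel of LabelSpreading (the semi-supervised sibling of LabelPropagation), with one labeled point per circle and the rest unlabeled:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.semi_supervised import LabelSpreading

# ring dataset: with shuffle=False the outer circle comes first, the inner second
X, _ = make_circles(n_samples=200, shuffle=False, factor=0.3, noise=0.05)
labels = np.full(200, -1, dtype=int)   # -1 marks unlabeled samples
labels[0] = 0                          # one labeled point on the outer circle
labels[-1] = 1                         # one labeled point on the inner circle

model = LabelSpreading(kernel='knn', alpha=0.8)
model.fit(X, labels)

# transduction_ holds the labels inferred for every sample
plt.scatter(X[:, 0], X[:, 1], c=model.transduction_, cmap=plt.cm.coolwarm, s=20)
plt.title("Labels propagated around both circles")
plt.show()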

covariance.OAS()

class sklearn.covariance.OAS(store_precision=True, assume_centered=False) [source] Oracle Approximating Shrinkage Estimator. Read more in the User Guide. OAS is a particular form of shrinkage described in "Shrinkage Algorithms for MMSE Covariance Estimation", Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. The formula used here does not correspond to the one given in the article. It has been taken from the Matlab program available from the authors' webpage (http:// ...
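A minimal sketch of fitting OAS to toy Gaussian data; the covariance matrix and sample size below are illustrative assumptions:

import numpy as np
from sklearn.covariance import OAS

rng = np.random.RandomState(0)
real_cov = np.array([[0.8, 0.3],
                     [0.3, 0.4]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=real_cov, size=500)

oas = OAS(store_precision=True, assume_centered=False)
oas.fit(X)

print("estimated covariance:\n", oas.covariance_)   # shrunk covariance estimate
print("shrinkage coefficient:", oas.shrinkage_)     # amount of shrinkage applied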

grid_search.RandomizedSearchCV()

Warning: DEPRECATED. class sklearn.grid_search.RandomizedSearchCV(estimator, param_distributions, n_iter=10, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score='raise') [source] Randomized search on hyper parameters. Deprecated since version 0.18: this module will be removed in 0.20; use sklearn.model_selection.RandomizedSearchCV instead. RandomizedSearchCV implements a "fit" and a "score" method. ...
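A minimal sketch using the replacement class from sklearn.model_selection, as the deprecation notice recommends; the estimator, parameter distributions, and dataset are illustrative assumptions:

from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# sample hyperparameters from distributions instead of an exhaustive grid
param_distributions = {
    "n_estimators": randint(10, 100),
    "max_depth": randint(2, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)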

sklearn.metrics.brier_score_loss()

sklearn.metrics.brier_score_loss(y_true, y_prob, sample_weight=None, pos_label=None) [source] Compute the Brier score. The smaller the Brier score, the better, hence the naming with "loss". Across all items in a set of N predictions, the Brier score measures the mean squared difference between (1) the predicted probability assigned to the possible outcomes for item i, and (2) the actual outcome. Therefore, the lower the Brier score is for a set of predictions, the better the predictions are calibrated. ...
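A minimal sketch of the score on a few hand-made probabilities; the labels and predicted probabilities are illustrative assumptions:

import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])

# mean squared difference between predicted probability and actual outcome:
# ((0.1-0)^2 + (0.9-1)^2 + (0.8-1)^2 + (0.3-0)^2) / 4 = 0.0375
print(brier_score_loss(y_true, y_prob))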

Robust linear estimator fitting

Here a sine function is fit with a polynomial of order 3, for values close to zero. Robust fitting is demoed in different situations: no measurement errors, only modelling errors (fitting a sine with a polynomial); measurement errors in X; measurement errors in y. The median absolute deviation to non-corrupt new data is used to judge the quality of the prediction. What we can see is that RANSAC is good for strong outliers in the y direction, while TheilSen is good for small outliers, both in direction X and y ...
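A minimal sketch of this comparison, fitting a degree-3 polynomial to a sine with corrupted targets; the estimators compared, noise pattern, and sample sizes are illustrative assumptions:

import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor, TheilSenRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(42)
X = rng.normal(size=400)
y = np.sin(X)
X_test = rng.normal(size=200)
y_test = np.sin(X_test)

# corrupt a third of the targets to simulate measurement errors in y
y_errors = y.copy()
y_errors[::3] = 3

for name, reg in [("OLS", LinearRegression()),
                  ("TheilSen", TheilSenRegressor(random_state=42)),
                  ("RANSAC", RANSACRegressor(random_state=42))]:
    model = make_pipeline(PolynomialFeatures(3), reg)
    model.fit(X[:, np.newaxis], y_errors)
    # median absolute deviation to non-corrupt new data
    mad = np.median(np.abs(model.predict(X_test[:, np.newaxis]) - y_test))
    print(name, "MAD on clean test data:", mad)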

sklearn.linear_model.logistic_regression_path()

sklearn.linear_model.logistic_regression_path(X, y, pos_class=None, Cs=10, fit_intercept=True, max_iter=100, tol=0.0001, verbose=0, solver='lbfgs', coef=None, copy=False, class_weight=None, dual=False, penalty='l2', intercept_scaling=1.0, multi_class='ovr', random_state=None, check_input=True, max_squared_sum=None, sample_weight=None) [source] Compute a Logistic Regression model for a list of regularization parameters. This is an implementation that uses the result of the previous model to speed up computations along the set of solutions ...
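A minimal sketch of computing a path over several C values on toy binary data; the dataset and Cs grid are illustrative assumptions, and the (coefs, Cs, n_iter) return layout reflects the version documented here (this helper was removed from later scikit-learn releases):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import logistic_regression_path

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

Cs = np.logspace(-3, 3, 7)
coefs, Cs_out, n_iter = logistic_regression_path(X, y, Cs=Cs, fit_intercept=True)

# one coefficient vector (plus intercept) per regularization value
print(coefs.shape)   # (7, 6)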

sklearn.datasets.load_svmlight_files()

sklearn.datasets.load_svmlight_files(files, n_features=None, dtype=numpy.float64, multilabel=False, zero_based='auto', query_id=False) [source] Load dataset from multiple files in SVMlight format. This function is equivalent to mapping load_svmlight_file over a list of files, except that the results are concatenated into a single, flat list and the sample vectors are constrained to all have the same number of features. In case the file contains a pairwise preference constraint (known as "qid" in the svmlight format) ...
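A minimal sketch of loading a train/test pair with a shared feature space; the file names are hypothetical placeholders:

from sklearn.datasets import load_svmlight_files

# returns X_train, y_train, X_test, y_test in file order
X_train, y_train, X_test, y_test = load_svmlight_files(
    ("train.svmlight", "test.svmlight"))

# both sparse matrices end up with the same number of columns
assert X_train.shape[1] == X_test.shape[1]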

exceptions.ChangedBehaviorWarning

class sklearn.exceptions.ChangedBehaviorWarning [source] Warning class used to notify the user of any change in the behavior. Changed in version 0.18: Moved from sklearn.base.
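A minimal sketch of handling this warning class with the standard warnings module; the warning message below is an illustrative assumption:

import warnings
from sklearn.exceptions import ChangedBehaviorWarning

# silence behaviour-change notices emitted by scikit-learn
warnings.simplefilter("ignore", ChangedBehaviorWarning)

# or turn them into errors while auditing an upgrade
with warnings.catch_warnings():
    warnings.simplefilter("error", ChangedBehaviorWarning)
    try:
        warnings.warn("example behaviour change", ChangedBehaviorWarning)
    except ChangedBehaviorWarning as exc:
        print("caught:", exc)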

model_selection.ShuffleSplit()

class sklearn.model_selection.ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None) [source] Random permutation cross-validator. Yields indices to split data into training and test sets. Note: contrary to other cross-validation strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets. Read more in the User Guide. Parameters: n_splits : int (default 10) Number of re-shuffling & splitting iterations. ...
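A minimal sketch of iterating over the random train/test splits; the toy arrays and split sizes are illustrative assumptions:

import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(10).reshape(5, 2)
y = np.array([0, 1, 0, 1, 0])

ss = ShuffleSplit(n_splits=3, test_size=0.4, random_state=0)
for train_idx, test_idx in ss.split(X, y):
    # each iteration is an independent random permutation of the samples
    print("train:", train_idx, "test:", test_idx)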

linear_model.RidgeClassifierCV()

class sklearn.linear_model.RidgeClassifierCV(alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None) [source] Ridge classifier with built-in cross-validation. By default, it performs Generalized Cross-Validation, which is a form of efficient Leave-One-Out cross-validation. Currently, only the n_features > n_samples case is handled efficiently. Read more in the User Guide. Parameters: alphas : numpy array of shape [n_alphas] Array of alpha values to try ...
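A minimal sketch of letting RidgeClassifierCV pick an alpha by its built-in (generalized) cross-validation; the dataset and alpha grid are illustrative assumptions:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifierCV

X, y = load_iris(return_X_y=True)

clf = RidgeClassifierCV(alphas=np.array([0.1, 1.0, 10.0]))
clf.fit(X, y)

print("chosen alpha:", clf.alpha_)           # best regularization strength found
print("training accuracy:", clf.score(X, y))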