sklearn.metrics.adjusted_rand_score()

sklearn.metrics.adjusted_rand_score(labels_true, labels_pred) [source]

Rand index adjusted for chance. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned to the same or different clusters in the predicted and true clusterings. The raw RI score is then "adjusted for chance" into the ARI score using the following scheme:

    ARI = (RI - Expected_RI) / (max(RI) - Expected_RI)

The adjusted Rand index is thus ensured to have a value close to 0.0 for random labeling independently of the number of clusters and samples, and exactly 1.0 when the clusterings are identical (up to a permutation).
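A minimal usage sketch (the label values are illustrative; only the induced partition matters, so permuting label names does not change the score):

    from sklearn.metrics import adjusted_rand_score

    # Identical partitions score exactly 1.0, regardless of label names.
    print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0

    # Unrelated partitions score near 0.0; the ARI can be negative.
    print(adjusted_rand_score([0, 0, 1, 1], [0, 1, 0, 1]))  # -0.5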

sklearn.metrics.auc()

sklearn.metrics.auc(x, y, reorder=False) [source]

Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC curve, see roc_auc_score.

Parameters:
x : array, shape = [n]
    x coordinates.
y : array, shape = [n]
    y coordinates.
reorder : boolean, optional (default=False)
    If True, assume that the curve is ascending in the case of ties, as for an ROC curve. If the curve is non-ascending, the result will be incorrect.
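A minimal sketch with illustrative points (note that the reorder parameter shown in this signature was deprecated and removed in later scikit-learn releases; with monotonic x the call works across versions):

    import numpy as np
    from sklearn.metrics import auc

    # Trapezoidal area under arbitrary (x, y) points; x must be monotonic.
    x = np.array([0.0, 0.5, 1.0])
    y = np.array([0.0, 0.8, 1.0])
    print(auc(x, y))  # 0.65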

sklearn.metrics.adjusted_mutual_info_score()

sklearn.metrics.adjusted_mutual_info_score(labels_true, labels_pred) [source]

Adjusted Mutual Information between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information (MI) score to account for chance. It accounts for the fact that the MI is generally higher for two clusterings with a larger number of clusters, regardless of whether there is actually more information shared. For two clusterings U and V, the AMI is given as:

    AMI(U, V) = [MI(U, V) - E(MI(U, V))] / [max(H(U), H(V)) - E(MI(U, V))]
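A minimal usage sketch with illustrative labels:

    from sklearn.metrics import adjusted_mutual_info_score

    # Perfect agreement (up to label permutation) scores 1.0.
    print(adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0

    # Independent labelings score close to 0.0 (may be slightly negative).
    print(adjusted_mutual_info_score([0, 0, 1, 1], [0, 1, 0, 1]))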

sklearn.manifold.spectral_embedding()

sklearn.manifold.spectral_embedding(adjacency, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source]

Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a normalized graph Laplacian whose spectrum (especially the eigenvectors associated to the smallest eigenvalues) has an interpretation in terms of the minimal number of cuts necessary to split the graph into comparably sized components.
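A hedged sketch on hypothetical random data: the adjacency is built with kneighbors_graph and then symmetrized, since the function expects an undirected graph.

    import numpy as np
    from sklearn.manifold import spectral_embedding
    from sklearn.neighbors import kneighbors_graph

    # Hypothetical data: 100 random points in 3-D.
    X = np.random.RandomState(0).rand(100, 3)
    adjacency = kneighbors_graph(X, n_neighbors=10, include_self=True)
    # Symmetrize the k-NN graph so it represents an undirected graph.
    adjacency = 0.5 * (adjacency + adjacency.T)
    embedding = spectral_embedding(adjacency, n_components=2, random_state=0)
    print(embedding.shape)  # (100, 2)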

sklearn.metrics.accuracy_score()

sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None) [source]

Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix
    Ground truth (correct) labels.
y_pred : 1d array-like, or label indicator array / sparse matrix
    Predicted labels, as returned by a classifier.
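A minimal usage sketch with illustrative labels, showing the effect of normalize:

    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 2, 3]
    y_pred = [0, 2, 1, 3]
    print(accuracy_score(y_true, y_pred))                   # 0.5 (fraction)
    print(accuracy_score(y_true, y_pred, normalize=False))  # 2 (raw count)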

sklearn.manifold.locally_linear_embedding()

sklearn.manifold.locally_linear_embedding(X, n_neighbors, n_components, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, random_state=None, n_jobs=1) [source]

Perform a Locally Linear Embedding analysis on the data. Read more in the User Guide.

Parameters:
X : {array-like, sparse matrix, BallTree, KDTree, NearestNeighbors}
    Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse array, precomputed tree, or NearestNeighbors object.
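A hedged sketch on hypothetical random data; the function returns both the embedding and the reconstruction error:

    import numpy as np
    from sklearn.manifold import locally_linear_embedding

    # Hypothetical data: embed 3-D points into 2 dimensions.
    X = np.random.RandomState(0).rand(100, 3)
    Y, err = locally_linear_embedding(X, n_neighbors=10, n_components=2)
    print(Y.shape)  # (100, 2)
    print(err)      # reconstruction error of the embedding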

sklearn.linear_model.orthogonal_mp_gram()

sklearn.linear_model.orthogonal_mp_gram(Gram, Xy, n_nonzero_coefs=None, tol=None, norms_squared=None, copy_Gram=True, copy_Xy=True, return_path=False, return_n_iter=False) [source]

Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the User Guide.

Parameters:
Gram : array, shape (n_features, n_features)
    Gram matrix of the input data: X.T * X.
Xy : array, shape (n_features,) or (n_features, n_targets)
    Input targets multiplied by X: X.T * y.
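A hedged sketch on hypothetical data, showing that the Gram variant recovers the sparse support while touching X only through the precomputed products X.T * X and X.T * y:

    import numpy as np
    from sklearn.linear_model import orthogonal_mp_gram

    # Hypothetical design matrix and a target built from 2 active features.
    rng = np.random.RandomState(0)
    X = rng.randn(50, 20)
    true_coef = np.zeros(20)
    true_coef[[2, 7]] = [1.5, -2.0]
    y = X @ true_coef

    gram = X.T @ X   # Gram matrix
    Xy = X.T @ y     # correlations with the target
    coef = orthogonal_mp_gram(gram, Xy, n_nonzero_coefs=2)
    print(np.flatnonzero(coef))  # expected support: [2 7]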

sklearn.linear_model.logistic_regression_path()

sklearn.linear_model.logistic_regression_path(X, y, pos_class=None, Cs=10, fit_intercept=True, max_iter=100, tol=0.0001, verbose=0, solver='lbfgs', coef=None, copy=False, class_weight=None, dual=False, penalty='l2', intercept_scaling=1.0, multi_class='ovr', random_state=None, check_input=True, max_squared_sum=None, sample_weight=None) [source]

Compute a Logistic Regression model for a list of regularization parameters. This is an implementation that uses the result of the previous model to speed up computations along the set of solutions, making it faster than sequentially calling LogisticRegression for the different parameters.
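A hedged sketch, assuming an older scikit-learn release where this function was still public (it was later deprecated and removed; LogisticRegressionCV covers the same use case in current releases):

    # Assumes scikit-learn <= 0.21, where logistic_regression_path existed.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import logistic_regression_path

    X, y = make_classification(n_samples=100, n_features=5, random_state=0)
    # One coefficient vector per regularization value C along the path.
    coefs, Cs, n_iter = logistic_regression_path(X, y, Cs=5)
    print(len(Cs), coefs.shape)  # 5 values of C; coefs include intercept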

sklearn.linear_model.orthogonal_mp()

sklearn.linear_model.orthogonal_mp(X, y, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False) [source]

Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form:

When parametrized by the number of non-zero coefficients using n_nonzero_coefs:

    argmin ||y - X gamma||^2  subject to  ||gamma||_0 <= n_nonzero_coefs

When parametrized by error using the parameter tol:

    argmin ||gamma||_0  subject to  ||y - X gamma||^2 <= tol
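A hedged sketch of sparse recovery on hypothetical data, using the n_nonzero_coefs parametrization:

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    # Hypothetical target generated from 2 active features out of 20.
    rng = np.random.RandomState(0)
    X = rng.randn(50, 20)
    true_coef = np.zeros(20)
    true_coef[[3, 11]] = [2.0, -1.0]
    y = X @ true_coef

    coef = orthogonal_mp(X, y, n_nonzero_coefs=2)
    print(np.flatnonzero(coef))  # expected support: [ 3 11]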

sklearn.linear_model.lasso_stability_path()

sklearn.linear_model.lasso_stability_path(X, y, scaling=0.5, random_state=None, n_resampling=200, n_grid=100, sample_fraction=0.75, eps=8.8817841970012523e-16, n_jobs=1, verbose=False) [source]

Stability path based on randomized Lasso estimates. Read more in the User Guide.

Parameters:
X : array-like, shape = [n_samples, n_features]
    Training data.
y : array-like, shape = [n_samples]
    Target values.
scaling : float, optional, default=0.5
    The alpha parameter in the stability selection article used to randomly scale the features. Should be between 0 and 1.
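A hedged sketch, assuming an older scikit-learn release where this function was still available (it was deprecated along with RandomizedLasso and later removed):

    # Assumes scikit-learn <= 0.19, where lasso_stability_path existed.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import lasso_stability_path

    X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                           random_state=0)
    # scores_path holds, per feature, the selection frequency along the
    # grid of regularization values; stable features stay near 1.0.
    alpha_grid, scores_path = lasso_stability_path(X, y, random_state=0)
    print(scores_path.shape)  # (n_features, n_grid)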