sklearn.metrics.label_ranking_average_precision_score()

sklearn.metrics.label_ranking_average_precision_score(y_true, y_score) [source] Compute ranking-based average precision. Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample of the ratio of true vs. total labels with lower score. This metric is used in multilabel ranking problems, where the goal is to give a better rank to the labels associated with each sample. The obtained score is always strictly greater than 0, and the best value is 1. Read more in the User Guide.
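
A minimal usage sketch with a toy two-sample, three-label indicator matrix (values chosen only for illustration); the result is the mean of the per-sample ratios 1/2 and 1/3.

>>> import numpy as np
>>> from sklearn.metrics import label_ranking_average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
>>> # sample 1: the true label ranks 2nd -> 1/2; sample 2: it ranks 3rd -> 1/3
>>> label_ranking_average_precision_score(y_true, y_score)
0.416...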

sklearn.metrics.label_ranking_loss()

sklearn.metrics.label_ranking_loss(y_true, y_score, sample_weight=None) [source] Compute ranking loss measure. Compute the average number of label pairs that are incorrectly ordered given y_score, weighted by the size of the label set and the number of labels not in the label set. This is similar to the error set size, but weighted by the number of relevant and irrelevant labels. The best performance is achieved with a ranking loss of zero. Read more in the User Guide. New in version 0.17.
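
A small sketch on the same toy data as above; the loss of 0.75 is the mean of the per-sample fractions of wrongly ordered relevant/irrelevant label pairs (1/2 and 2/2).

>>> import numpy as np
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
>>> label_ranking_loss(y_true, y_score)
0.75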

sklearn.metrics.jaccard_similarity_score()

sklearn.metrics.jaccard_similarity_score(y_true, y_pred, normalize=True, sample_weight=None) [source] Jaccard similarity coefficient score. The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of two label sets, is used to compare the set of predicted labels for a sample to the corresponding set of labels in y_true. Read more in the User Guide. Parameters: y_true : 1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels.
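
A short sketch with toy labels: in the multiclass case the score reduces to accuracy, while in the multilabel case it averages per-sample Jaccard coefficients. (Note that newer scikit-learn releases replace this function with sklearn.metrics.jaccard_score.)

>>> import numpy as np
>>> from sklearn.metrics import jaccard_similarity_score
>>> y_true = [0, 1, 2, 3]
>>> y_pred = [0, 2, 1, 3]
>>> jaccard_similarity_score(y_true, y_pred)
0.5
>>> jaccard_similarity_score(y_true, y_pred, normalize=False)
2
>>> # multilabel: per-sample |intersection| / |union|, averaged -> (0.5 + 1.0) / 2
>>> jaccard_similarity_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.75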

sklearn.metrics.homogeneity_completeness_v_measure()

sklearn.metrics.homogeneity_completeness_v_measure(labels_true, labels_pred) [source] Compute the homogeneity, completeness and V-Measure scores at once. These metrics are based on normalized conditional entropy measures of the clustering labeling, evaluated against ground truth class labels of the same samples. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster.
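
A minimal sketch with hand-picked labelings: the predicted clusters are pure (homogeneity 1.0) but split class 1 across two clusters, so completeness and the V-measure drop below 1.

>>> from sklearn.metrics import homogeneity_completeness_v_measure
>>> labels_true = [0, 0, 1, 1]
>>> labels_pred = [0, 0, 1, 2]   # class 1 is split into two clusters
>>> h, c, v = homogeneity_completeness_v_measure(labels_true, labels_pred)
>>> # h == 1.0, c ~ 0.67, v ~ 0.80 for this toy example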

sklearn.metrics.homogeneity_score()

sklearn.metrics.homogeneity_score(labels_true, labels_pred) [source] Homogeneity metric of a cluster labeling given a ground truth. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won't change the score value in any way. This metric is not symmetric: switching label_true with label_pred will return the completeness_score, which will be different in general.
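
A short sketch illustrating the properties above: permuted or over-split (but pure) clusterings still score 1.0, while a single cluster mixing both classes scores 0.0.

>>> from sklearn.metrics import homogeneity_score
>>> homogeneity_score([0, 0, 1, 1], [1, 1, 0, 0])   # label permutation: still 1.0
>>> homogeneity_score([0, 0, 1, 1], [0, 1, 2, 3])   # over-split but pure clusters: 1.0
>>> homogeneity_score([0, 0, 1, 1], [0, 0, 0, 0])   # one cluster mixing both classes: 0.0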

sklearn.metrics.hamming_loss()

sklearn.metrics.hamming_loss(y_true, y_pred, labels=None, sample_weight=None, classes=None) [source] Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide. Parameters: y_true : 1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred : 1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. labels : array, shape = [n_labels], optional (default=None) Integer array of labels.
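
A minimal sketch: in the multiclass case below one of four labels is wrong (loss 0.25); in the multilabel case three of the four label slots are wrong (loss 0.75).

>>> import numpy as np
>>> from sklearn.metrics import hamming_loss
>>> hamming_loss([1, 2, 3, 4], [2, 2, 3, 4])
0.25
>>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
0.75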

sklearn.metrics.hinge_loss()

sklearn.metrics.hinge_loss(y_true, pred_decision, labels=None, sample_weight=None) [source] Average hinge loss (non-regularized). In the binary case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always negative (since the signs disagree), implying 1 - margin is always greater than 1. The cumulated hinge loss is therefore an upper bound on the number of mistakes made by the classifier. In the multiclass case, the function expects that either all the labels are included in y_true or an optional labels argument is provided which contains all the labels.
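
A small sketch with hand-picked decision values (no fitted estimator), so the per-sample hinge terms are easy to check by hand.

>>> from sklearn.metrics import hinge_loss
>>> y_true = [1, 1, -1]                 # binary labels encoded as +1 / -1
>>> pred_decision = [2.0, 0.4, 0.3]     # confident hit, hit inside the margin, miss
>>> # per-sample losses: max(0, 1 - 2.0) = 0, max(0, 1 - 0.4) = 0.6, max(0, 1 + 0.3) = 1.3
>>> hinge_loss(y_true, pred_decision)   # mean of the three terms, ~0.633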

sklearn.metrics.fbeta_score()

sklearn.metrics.fbeta_score(y_true, y_pred, beta, labels=None, pos_label=1, average='binary', sample_weight=None) [source] Compute the F-beta score. The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0. The beta parameter determines the weight of precision in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall (beta -> 0 considers only precision, beta -> inf only recall).
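
A brief sketch with toy labels showing how beta trades precision against recall; with precision 1.0 and recall 0.5 here, beta=0.5 pulls the score toward precision and beta=2 toward recall.

>>> from sklearn.metrics import fbeta_score
>>> y_true = [0, 1, 1, 0, 1, 1]
>>> y_pred = [0, 1, 0, 0, 0, 1]             # precision = 1.0, recall = 0.5
>>> fbeta_score(y_true, y_pred, beta=0.5)   # ~0.83, closer to precision
>>> fbeta_score(y_true, y_pred, beta=2.0)   # ~0.56, closer to recall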

sklearn.metrics.fowlkes_mallows_score()

sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, sparse=False) [source] Measure the similarity of two clusterings of a set of points. The Fowlkes-Mallows index (FMI) is defined as the geometric mean of the pairwise precision and recall: FMI = TP / sqrt((TP + FP) * (TP + FN)) where TP is the number of True Positives (i.e. the number of pairs of points that belong to the same cluster in both labels_true and labels_pred), and FP is the number of False Positives (i.e. the number of pairs of points that belong to the same cluster in labels_true but not in labels_pred).
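
A tiny sketch: the index is invariant to label permutation, and it drops to 0.0 when the two clusterings share no co-clustered pair of points.

>>> from sklearn.metrics import fowlkes_mallows_score
>>> fowlkes_mallows_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> fowlkes_mallows_score([0, 0, 1, 1], [1, 1, 0, 0])   # permuted labels, same clustering
1.0
>>> fowlkes_mallows_score([0, 0, 0, 0], [0, 1, 2, 3])   # no pair is co-clustered in both
0.0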

sklearn.metrics.get_scorer()

sklearn.metrics.get_scorer(scoring) [source] Get a scorer from string. Read more in the User Guide.
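
A minimal sketch (toy estimator and data, chosen only for illustration): the returned scorer is a callable invoked as scorer(estimator, X, y).

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import get_scorer
>>> X, y = make_classification(random_state=0)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> scorer = get_scorer('accuracy')      # look up a scorer by its string name
>>> scorer(clf, X, y)                    # evaluated as scorer(estimator, X, y)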