sklearn.metrics.f1_score()

sklearn.metrics.f1_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)

Compute the F1 score, also known as the balanced F-score or F-measure. The F1 score can be interpreted as a weighted average of precision and recall, where an F1 score reaches its best value at 1 and its worst at 0. The relative contributions of precision and recall to the F1 score are equal. The formula for the F1 score is:

    F1 = 2 * (precision * recall) / (precision + recall)
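
A short usage sketch, assuming binary 0/1 labels; here precision = 1.0 and recall = 0.75, so the formula above gives roughly 0.857:

>>> from sklearn.metrics import f1_score
>>> y_true = [0, 1, 1, 0, 1, 1]
>>> y_pred = [0, 1, 0, 0, 1, 1]
>>> f1_score(y_true, y_pred)
0.857...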

sklearn.metrics.explained_variance_score()

sklearn.metrics.explained_variance_score(y_true, y_pred, sample_weight=None, multioutput='uniform_average')

Explained variance regression score function. The best possible score is 1.0; lower values are worse. Read more in the User Guide.

Parameters:
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
    Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
    Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
    Sample weights.
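
A small single-output sketch; the value follows from the definition 1 - Var(y_true - y_pred) / Var(y_true):

>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3.0, -0.5, 2.0, 7.0]
>>> y_pred = [2.5, 0.0, 2.0, 8.0]
>>> explained_variance_score(y_true, y_pred)
0.957...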

sklearn.metrics.coverage_error()

sklearn.metrics.coverage_error(y_true, y_score, sample_weight=None)

Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values. Read more in the User Guide.

Parameters:
y_true : array, shape = [n_samples, n_labels]
    True binary labels in binary indicator format.
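
A minimal multilabel sketch: the first sample's true label is found at rank 2 of its scores, the second sample's at rank 3, so the average coverage is 2.5:

>>> import numpy as np
>>> from sklearn.metrics import coverage_error
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
>>> coverage_error(y_true, y_score)
2.5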

sklearn.metrics.consensus_score()

sklearn.metrics.consensus_score(a, b, similarity='jaccard')

The similarity of two sets of biclusters. Similarity between individual biclusters is computed first; then the best matching between the two sets is found using the Hungarian algorithm. The final score is the sum of similarities divided by the size of the larger set. Read more in the User Guide.

Parameters:
a : (rows, columns)
    Tuple of row and column indicators for a set of biclusters.
b : (rows, columns)
    Another set of biclusters like a.
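
A toy sketch with two identical biclusterings (rows and columns given as boolean indicator arrays, one row per bicluster); identical sets have a Jaccard consensus of 1.0:

>>> import numpy as np
>>> from sklearn.metrics import consensus_score
>>> rows = np.array([[True, True, False, False], [False, False, True, True]])
>>> cols = np.array([[True, True, False, False], [False, False, True, True]])
>>> consensus_score((rows, cols), (rows, cols))
1.0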

sklearn.metrics.confusion_matrix()

sklearn.metrics.confusion_matrix(y_true, y_pred, labels=None, sample_weight=None)

Compute a confusion matrix to evaluate the accuracy of a classification. By definition a confusion matrix C is such that C[i, j] is equal to the number of observations known to be in group i but predicted to be in group j. Thus in binary classification, the count of true negatives is C[0, 0], false negatives is C[1, 0], true positives is C[1, 1] and false positives is C[0, 1]. Read more in the User Guide.

Parameters:
y_true : array, shape = [n_samples]
    Ground truth (correct) target values.
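
A quick multiclass sketch; row i / column j counts samples of true class i predicted as class j:

>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
       [0, 0, 1],
       [1, 0, 2]])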

sklearn.metrics.completeness_score()

sklearn.metrics.completeness_score(labels_true, labels_pred)

Completeness metric of a cluster labeling given a ground truth. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won't change the score value in any way. This metric is not symmetric: switching label_true with label_pred will return the homogeneity_score, which will be different in general.
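
A small sketch illustrating label-permutation invariance: the clustering below matches the ground truth perfectly even though the label values are swapped, so completeness is 1.0:

>>> from sklearn.metrics import completeness_score
>>> completeness_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0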

sklearn.metrics.classification_report()

sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2)

Build a text report showing the main classification metrics. Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix
    Ground truth (correct) target values.
y_pred : 1d array-like, or label indicator array / sparse matrix
    Estimated targets as returned by a classifier.
labels : array, shape = [n_labels]
    Optional list of label indices to include in the report.
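
A small usage sketch; the exact layout of the printed table varies across scikit-learn versions, but it reports per-class precision, recall, f1-score and support, roughly like this:

>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
             precision    recall  f1-score   support

    class 0       0.50      1.00      0.67         1
    class 1       0.00      0.00      0.00         1
    class 2       1.00      0.67      0.80         3

avg / total       0.70      0.60      0.61         5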

sklearn.metrics.calinski_harabaz_score()

sklearn.metrics.calinski_harabaz_score(X, labels)

Compute the Calinski and Harabaz score. The score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion; higher values indicate better-separated clusters. Read more in the User Guide.

Parameters:
X : array-like, shape (n_samples, n_features)
    List of n_features-dimensional data points. Each row corresponds to a single data point.
labels : array-like, shape (n_samples,)
    Predicted labels for each sample.

Returns:
score : float
    The resulting Calinski-Harabaz score.
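
A toy sketch with two tight, well-separated clusters (the value was worked out by hand from the dispersion ratio, scaled by (n_samples - n_clusters) / (n_clusters - 1)):

>>> import numpy as np
>>> from sklearn.metrics import calinski_harabaz_score
>>> X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
>>> labels = np.array([0, 0, 1, 1])
>>> calinski_harabaz_score(X, labels)
400.0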

sklearn.metrics.brier_score_loss()

sklearn.metrics.brier_score_loss(y_true, y_prob, sample_weight=None, pos_label=None)

Compute the Brier score. The smaller the Brier score, the better, hence the naming with "loss". Across all items in a set of N predictions, the Brier score measures the mean squared difference between (1) the predicted probability assigned to the possible outcomes for item i, and (2) the actual outcome. Therefore, the lower the Brier score is for a set of predictions, the better the predictions are calibrated.
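
A small binary sketch; the score is the mean of the squared differences between each predicted probability and the 0/1 outcome, here (0.1^2 + 0.1^2 + 0.2^2 + 0.3^2) / 4:

>>> import numpy as np
>>> from sklearn.metrics import brier_score_loss
>>> y_true = np.array([0, 1, 1, 0])
>>> y_prob = np.array([0.1, 0.9, 0.8, 0.3])
>>> brier_score_loss(y_true, y_prob)
0.037...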

sklearn.metrics.average_precision_score()

sklearn.metrics.average_precision_score(y_true, y_score, average='macro', sample_weight=None)

Compute average precision (AP) from prediction scores. This score corresponds to the area under the precision-recall curve. Note: this implementation is restricted to the binary classification task or multilabel classification task. Read more in the User Guide.

Parameters:
y_true : array, shape = [n_samples] or [n_samples, n_classes]
    True binary labels in binary label indicators.
y_score : array, shape = [n_samples] or [n_samples, n_classes]
    Target scores, which can either be probability estimates of the positive class, confidence values, or non-thresholded measures of decisions.
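
A minimal binary sketch; the exact numeric result depends on the scikit-learn version, since the way AP summarizes the precision-recall curve has changed across releases:

>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> ap = average_precision_score(y_true, y_scores)  # roughly 0.8 for this toy ranking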