sklearn.metrics.coverage_error(y_true, y_score, sample_weight=None)
Coverage error measure
Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values. Read more in the User Guide.
Parameters: y_true : array, shape = [n_samples, n_labels]
True binary labels in binary indicator format.
y_score : array, shape = [n_samples, n_labels]
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by decision_function on some classifiers).
sample_weight : array-like of shape = [n_samples], optional
Sample weights.
Returns: coverage_error : float
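A minimal usage sketch (the toy arrays below are illustrative and not part of the original reference):

import numpy as np
from sklearn.metrics import coverage_error

# Two samples, three possible labels, in binary indicator format.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1]])
# Per-label scores, e.g. as returned by decision_function.
y_score = np.array([[0.75, 0.5, 1.0],
                    [1.0, 0.2, 0.1]])

# Sample 1: the single true label is outranked by one higher score, so its rank is 2.
# Sample 2: the lowest-scored true label has rank 3.
# The coverage error averages these maximal ranks: (2 + 3) / 2 = 2.5.
print(coverage_error(y_true, y_score))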
References
Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data Mining and Knowledge Discovery Handbook (pp. 667-685). Springer US.