tf.contrib.metrics.streaming_concat()

tf.contrib.metrics.streaming_concat(values, axis=0, max_size=None, metrics_collections=None, updates_collections=None, name=None)

Concatenate values along an axis across batches.

The function streaming_concat creates two local variables, array and size, that are used to store concatenated values. Internally, array is used as storage for a dynamic array (if max_size is None), which ensures that updates can be run in amortized constant time.

For estimation of the metric over a stream of data, the function creates an update_op operation that appends the values of a batch to array.
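
A minimal graph-mode sketch, assuming TF 1.x (tf.contrib was removed in TF 2.x); the sample values are illustrative:

  import tensorflow as tf

  batch = tf.placeholder(tf.float32, shape=[None])
  concatenated, update_op = tf.contrib.metrics.streaming_concat(batch)

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())          # initializes array and size
      sess.run(update_op, feed_dict={batch: [1.0, 2.0]})  # append first batch
      sess.run(update_op, feed_dict={batch: [3.0]})       # append second batch
      print(sess.run(concatenated))                       # [1. 2. 3.]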

tf.contrib.metrics.set_intersection()

tf.contrib.metrics.set_intersection(a, b, validate_indices=True)

Compute set intersection of elements in the last dimension of a and b. All but the last dimension of a and b must match.

Args:
  a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
  b: Tensor or SparseTensor of the same type as a. Must be SparseTensor if a is SparseTensor. If sparse, indices must be sorted in row-major order.
  validate_indices: Whether to validate the order and range of sparse indices in a and b.

Returns:
  A SparseTensor with the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the intersections.
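
A minimal TF 1.x sketch with dense inputs (sample values are illustrative); the result is a SparseTensor, densified here only for printing:

  import tensorflow as tf

  a = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64)
  b = tf.constant([[2, 3, 9], [4, 9, 9]], dtype=tf.int64)

  intersection = tf.contrib.metrics.set_intersection(a, b)

  with tf.Session() as sess:
      print(sess.run(tf.sparse_tensor_to_dense(intersection)))
      # [[2 3]
      #  [4 0]]  -- row 0: {2, 3}; row 1: {4}; zero-padded in the dense view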

tf.contrib.metrics.streaming_accuracy()

tf.contrib.metrics.streaming_accuracy(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)

Calculates how often predictions matches labels.

The streaming_accuracy function creates two local variables, total and count, that are used to compute the frequency with which predictions matches labels. This frequency is ultimately returned as accuracy: an idempotent operation that simply divides total by count.

For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the accuracy.
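
A minimal TF 1.x sketch (sample values are illustrative); the local variables must be initialized, and accuracy only reflects batches for which update_op has already run:

  import tensorflow as tf

  predictions = tf.constant([1, 2, 3, 4], dtype=tf.int64)
  labels = tf.constant([1, 2, 0, 4], dtype=tf.int64)

  accuracy, update_op = tf.contrib.metrics.streaming_accuracy(predictions, labels)

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())  # initializes total and count
      sess.run(update_op)                         # accumulate one batch
      print(sess.run(accuracy))                   # 0.75 (3 of 4 match)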

tf.contrib.metrics.set_union()

tf.contrib.metrics.set_union(a, b, validate_indices=True)

Compute set union of elements in the last dimension of a and b. All but the last dimension of a and b must match.

Args:
  a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
  b: Tensor or SparseTensor of the same type as a. Must be SparseTensor if a is SparseTensor. If sparse, indices must be sorted in row-major order.
  validate_indices: Whether to validate the order and range of sparse indices in a and b.

Returns:
  A SparseTensor with the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the unions.
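
A minimal TF 1.x sketch (sample values are illustrative):

  import tensorflow as tf

  a = tf.constant([[1, 2], [3, 4]], dtype=tf.int64)
  b = tf.constant([[2, 5], [3, 4]], dtype=tf.int64)

  union = tf.contrib.metrics.set_union(a, b)  # SparseTensor

  with tf.Session() as sess:
      print(sess.run(tf.sparse_tensor_to_dense(union)))
      # [[1 2 5]
      #  [3 4 0]]  -- row 0: {1, 2, 5}; row 1: {3, 4}; zero-padded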

tf.contrib.metrics.set_size()

tf.contrib.metrics.set_size(a, validate_indices=True)

Compute number of unique elements along last dimension of a.

Args:
  a: SparseTensor, with indices sorted in row-major order.
  validate_indices: Whether to validate the order and range of sparse indices in a.

Returns:
  int32 Tensor of set sizes. For a ranked n, this is a Tensor with rank n-1, and the same 1st n-1 dimensions as a. Each value is the number of unique elements in the corresponding [0...n-1] dimension of a.

Raises:
  TypeError: If a is an invalid SparseTensor.
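
A minimal TF 1.x sketch; set_size takes a SparseTensor, built directly here with illustrative values:

  import tensorflow as tf

  # Row 0 holds {1, 2, 2} (2 unique values); row 1 holds {3} (1 unique value).
  a = tf.SparseTensor(
      indices=[[0, 0], [0, 1], [0, 2], [1, 0]],  # sorted in row-major order
      values=tf.constant([1, 2, 2, 3], dtype=tf.int64),
      dense_shape=[2, 3])

  sizes = tf.contrib.metrics.set_size(a)

  with tf.Session() as sess:
      print(sess.run(sizes))  # [2 1]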

tf.contrib.metrics.aggregate_metrics()

tf.contrib.metrics.aggregate_metrics(*value_update_tuples)

Aggregates the metric value tensors and update ops into two lists.

Args:
  *value_update_tuples: a variable number of tuples, each of which contains the pair of (value_tensor, update_op) from a streaming metric.

Returns:
  A list of value tensors and a list of update ops.

Raises:
  ValueError: if value_update_tuples is empty.
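
A minimal TF 1.x sketch (metric choices and sample values are illustrative):

  import tensorflow as tf

  predictions = tf.constant([0.2, 0.4, 0.6])
  labels = tf.constant([0.0, 0.5, 1.0])

  value_tensors, update_ops = tf.contrib.metrics.aggregate_metrics(
      tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels),
      tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels))

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())
      sess.run(update_ops)            # run all update ops in one step
      print(sess.run(value_tensors))  # [MAE, RMSE] after one batch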

tf.contrib.metrics.confusion_matrix()

tf.contrib.metrics.confusion_matrix(predictions, labels, num_classes=None, dtype=tf.int32, name=None, weights=None)

Computes the confusion matrix from predictions and labels.

Calculate the confusion matrix for a pair of prediction and label 1-D int arrays. Considering a prediction array such as [1, 2, 3] and a label array such as [2, 2, 3], the confusion matrix returned would be the following one:

  [[0, 0, 0, 0]
   [0, 0, 1, 0]
   [0, 0, 1, 0]
   [0, 0, 0, 1]]

If weights is not None, then each prediction contributes its corresponding weight to the value of its confusion matrix cell.
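
A minimal TF 1.x sketch reproducing the example above:

  import tensorflow as tf

  predictions = tf.constant([1, 2, 3], dtype=tf.int64)
  labels = tf.constant([2, 2, 3], dtype=tf.int64)

  cm = tf.contrib.metrics.confusion_matrix(predictions, labels)

  with tf.Session() as sess:
      print(sess.run(cm))
      # [[0 0 0 0]
      #  [0 0 1 0]
      #  [0 0 1 0]
      #  [0 0 0 1]]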

tf.contrib.metrics.set_difference()

tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)

Compute set difference of elements in the last dimension of a and b. All but the last dimension of a and b must match.

Args:
  a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
  b: Tensor or SparseTensor of the same type as a. Must be SparseTensor if a is SparseTensor. If sparse, indices must be sorted in row-major order.
  aminusb: Whether to subtract b from a, vs vice versa.
  validate_indices: Whether to validate the order and range of sparse indices in a and b.

Returns:
  A SparseTensor with the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the differences.
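
A minimal TF 1.x sketch (sample values are illustrative):

  import tensorflow as tf

  a = tf.constant([[1, 2, 3]], dtype=tf.int64)
  b = tf.constant([[2, 9, 9]], dtype=tf.int64)

  diff = tf.contrib.metrics.set_difference(a, b)  # a - b per row (aminusb=True)

  with tf.Session() as sess:
      print(sess.run(tf.sparse_tensor_to_dense(diff)))  # [[1 3]]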

tf.contrib.metrics.auc_using_histogram()

tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)

AUC computed by maintaining histograms.

Rather than computing AUC directly, this Op maintains Variables containing histograms of the scores associated with True and False labels. By comparing these the AUC is generated, with some discretization error. See "Efficient AUC Learning Curve Calculation" by Bouckaert.

This AUC Op updates in O(batch_size + nbins) time and works well even with large class imbalance.
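
A minimal TF 1.x sketch (placeholder shapes and sample values are illustrative); the histogram Variables live in the local-variables collection by default:

  import tensorflow as tf

  scores = tf.placeholder(tf.float32, shape=[None])
  labels = tf.placeholder(tf.bool, shape=[None])

  auc, update_op = tf.contrib.metrics.auc_using_histogram(
      labels, scores, score_range=[0.0, 1.0], nbins=100)

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())
      sess.run(update_op, feed_dict={scores: [0.1, 0.4, 0.8, 0.9],
                                     labels: [False, False, True, True]})
      print(sess.run(auc))  # approximate AUC; accuracy limited by nbins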

tf.contrib.metrics.aggregate_metric_map()

tf.contrib.metrics.aggregate_metric_map(names_to_tuples)

Aggregates the metric names to tuple dictionary. This function is useful for pairing metric names with their associated value and update ops when the list of metrics is long. For example:

  metrics_to_values, metrics_to_updates = slim.metrics.aggregate_metric_map({
      'Mean Absolute Error': new_slim.metrics.streaming_mean_absolute_error(
          predictions, labels, weights),
      'Mean Relative Error': new_slim.metrics.streaming_mean_relative_error(
          predictions, labels, labels, weights),
  })
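
A minimal TF 1.x sketch of the same pattern without slim (metric names, choices, and sample values are illustrative):

  import tensorflow as tf

  predictions = tf.constant([0.2, 0.4, 0.6])
  labels = tf.constant([0.0, 0.5, 1.0])

  names_to_values, names_to_updates = tf.contrib.metrics.aggregate_metric_map({
      'MAE': tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels),
      'RMSE': tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels),
  })

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())
      sess.run(list(names_to_updates.values()))  # run every update op once
      for name, value in names_to_values.items():
          print(name, sess.run(value))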