tf.contrib.metrics.accuracy()

tf.contrib.metrics.accuracy(predictions, labels, weights=None)

Computes the fraction of predictions that match labels.

Args:
  predictions: the predicted values, a Tensor whose dtype and shape match 'labels'.
  labels: the ground truth values, a Tensor of any shape and of bool, integer, or string dtype.
  weights: None or a Tensor of float values to reweight the accuracy.

Returns:
  Accuracy Tensor.

Raises:
  ValueError: if the dtypes don't match, or if the dtype is not bool, integer, or string.
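
A minimal sketch of computing batch accuracy with this call, assuming the contrib-era API shown above; the inputs are made-up toy values:

    import tensorflow as tf

    # Toy integer predictions and labels; dtypes and shapes must match.
    predictions = tf.constant([1, 0, 1, 1])
    labels = tf.constant([1, 0, 0, 1])

    acc = tf.contrib.metrics.accuracy(predictions, labels)

    with tf.Session() as sess:
        print(sess.run(acc))  # 0.75: three of the four predictions match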

tf.contrib.losses.sum_of_squares()

tf.contrib.losses.sum_of_squares(*args, **kwargs)

Adds a Sum-of-Squares loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: use mean_squared_error instead.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector.
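
Since this function is deprecated, a migration sketch is more useful than a usage example; this assumes mean_squared_error keeps the same (predictions, targets, weight) argument order, as its entry below suggests:

    import tensorflow as tf

    predictions = tf.constant([0.0, 1.0, 2.0])
    targets = tf.constant([0.0, 1.0, 4.0])

    # Deprecated: loss = tf.contrib.losses.sum_of_squares(predictions, targets)
    loss = tf.contrib.losses.mean_squared_error(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))  # mean of squared differences: (0 + 0 + 4) / 3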

tf.contrib.losses.sum_of_pairwise_squares()

tf.contrib.losses.sum_of_pairwise_squares(*args, **kwargs)

Adds a pairwise-errors-squared loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: use mean_pairwise_squared_error instead.

Unlike the sum_of_squares loss, which is a measure of the differences between corresponding elements of predictions and targets, sum_of_pairwise_squares is a measure of the differences between pairs of corresponding elements of predictions and targets.
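
Again the replacement is a drop-in rename; a minimal sketch, assuming the same argument order as the deprecated call (the pairwise loss needs at least rank-2 inputs, so the toy batch below has shape [1, 3]):

    import tensorflow as tf

    predictions = tf.constant([[4.0, 8.0, 12.0]])
    targets = tf.constant([[1.0, 2.0, 3.0]])

    # Deprecated: tf.contrib.losses.sum_of_pairwise_squares(...)
    loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))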

tf.contrib.losses.sparse_softmax_cross_entropy()

tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels, weight=1.0, scope=None)

Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

Args:
  logits: [batch_size, num_classes] logits outputs of the network.
  labels: [batch_size, 1] or [batch_size] target labels of dtype int32 or int64 in the range [0, num_classes).
  weight: Coefficients for the loss, a scalar or a tensor of shape [batch_size].
  scope: the scope for the operations performed in computing the loss.
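
A minimal sketch with made-up logits for a 3-class problem; the labels are plain class indices, not one-hot vectors:

    import tensorflow as tf

    # Batch of two examples, three classes each.
    logits = tf.constant([[2.0, 0.5, -1.0],
                          [0.1, 0.1, 3.0]])
    labels = tf.constant([0, 2])  # integer class indices, shape [batch_size]

    loss = tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels)

    with tf.Session() as sess:
        print(sess.run(loss))  # scalar mean cross-entropy over the batch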

tf.contrib.losses.softmax_cross_entropy()

tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/num_classes:

  new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes
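
The same toy problem as above, but with one-hot labels and a nonzero label_smoothing; the smoothing value is made up for illustration:

    import tensorflow as tf

    logits = tf.constant([[2.0, 0.5, -1.0],
                          [0.1, 0.1, 3.0]])
    onehot_labels = tf.constant([[1.0, 0.0, 0.0],
                                 [0.0, 0.0, 1.0]])

    # label_smoothing=0.1 pulls each hard 0/1 target toward 1/num_classes.
    loss = tf.contrib.losses.softmax_cross_entropy(
        logits, onehot_labels, label_smoothing=0.1)

    with tf.Session() as sess:
        print(sess.run(loss))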

tf.contrib.losses.sigmoid_cross_entropy()

tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/2:

  new_multiclass_labels = multi_class_labels * (1 - label_smoothing) + 0.5 * label_smoothing
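
A sketch for the multi-label case this loss is meant for, where each output is an independent 0/1 target rather than one class out of many; values are made up:

    import tensorflow as tf

    # One example with three independent binary labels.
    logits = tf.constant([[1.5, -2.0, 0.3]])
    multi_class_labels = tf.constant([[1.0, 0.0, 1.0]])

    # Here label_smoothing pulls targets toward 1/2 instead of 1/num_classes.
    loss = tf.contrib.losses.sigmoid_cross_entropy(
        logits, multi_class_labels, label_smoothing=0.05)

    with tf.Session() as sess:
        print(sess.run(loss))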

tf.contrib.losses.mean_squared_error()

tf.contrib.losses.mean_squared_error(*args, **kwargs)

Adds a Sum-of-Squares loss to the training procedure. This is the current name for the loss formerly exposed as sum_of_squares (deprecated above), which is why the two entries read alike.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector.
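
A sketch of the [batch_size] weight form described above, assuming the contrib-era keyword is weight; the second sample's error counts at half strength (toy values):

    import tensorflow as tf

    predictions = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    targets = tf.constant([[0.0, 2.0], [3.0, 0.0]])

    # One weight per sample in the batch rescales that sample's loss.
    loss = tf.contrib.losses.mean_squared_error(
        predictions, targets, weight=tf.constant([1.0, 0.5]))

    with tf.Session() as sess:
        print(sess.run(loss))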

tf.contrib.losses.mean_pairwise_squared_error()

tf.contrib.losses.mean_pairwise_squared_error(*args, **kwargs)

Adds a pairwise-errors-squared loss to the training procedure. This is the current name for the loss formerly exposed as sum_of_pairwise_squares (deprecated above).

Unlike mean_squared_error, which is a measure of the differences between corresponding elements of predictions and targets, mean_pairwise_squared_error is a measure of the differences between pairs of corresponding elements of predictions and targets.
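
A sketch that highlights what "pairwise" buys you: predictions that are off from the targets by a constant still give zero loss, because only within-sample differences between elements are compared (toy values):

    import tensorflow as tf

    # Each prediction is exactly 1.0 above its target.
    predictions = tf.constant([[1.0, 2.0, 4.0]])
    targets = tf.constant([[0.0, 1.0, 3.0]])

    loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))  # 0.0: all pairwise differences agree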

tf.contrib.losses.log_loss()

tf.contrib.losses.log_loss(predictions, targets, weight=1.0, epsilon=1e-07, scope=None)

Adds a Log Loss term to the training procedure.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.
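
A minimal sketch; note that predictions here are probabilities in [0, 1], not logits, and epsilon keeps the computation away from log(0):

    import tensorflow as tf

    # Probabilistic predictions for three binary outcomes (toy values).
    predictions = tf.constant([0.9, 0.2, 0.7])
    targets = tf.constant([1.0, 0.0, 1.0])

    loss = tf.contrib.losses.log_loss(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))  # mean of -t*log(p) - (1-t)*log(1-p)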

tf.contrib.losses.hinge_loss()

tf.contrib.losses.hinge_loss(logits, target, scope=None)

Method that returns the loss tensor for hinge loss.

Args:
  logits: The logits, a float tensor.
  target: The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0.
  scope: The scope for the operations performed in computing the loss.

Returns:
  A Tensor of the same shape as logits and target representing the loss values across the batch.

Raises:
  ValueError: If the shapes of logits and target don't match.
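
A minimal sketch; unlike the other losses in this section, hinge_loss returns an unreduced tensor of per-element losses, and the 0/1 targets are mapped to -1/+1 margins internally:

    import tensorflow as tf

    logits = tf.constant([0.8, -0.5, 2.0])
    target = tf.constant([1.0, 0.0, 0.0])  # 0/1 labels, same shape as logits

    loss = tf.contrib.losses.hinge_loss(logits, target)

    with tf.Session() as sess:
        # Elementwise max(0, 1 - (2*target - 1) * logits): [0.2, 0.5, 3.0]
        print(sess.run(loss))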