tf.contrib.losses.sum_of_pairwise_squares()

tf.contrib.losses.sum_of_pairwise_squares(*args, **kwargs)

Adds a pairwise-errors-squared loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: Use mean_pairwise_squared_error.

Unlike the sum_of_squares loss, which is a measure of the differences between corresponding elements of predictions and targets, sum_of_pairwise_squares is a measure of the differences between pairs of corresponding elements of predictions and targets.
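For illustration, a minimal sketch of migrating off the deprecated alias, assuming TensorFlow 0.x with tf.contrib available (tensor values are illustrative):

    import tensorflow as tf

    predictions = tf.constant([[4.0, 8.0, 12.0]])
    targets = tf.constant([[1.0, 9.0, 2.0]])

    # Deprecated alias; logs a deprecation warning at call time.
    loss_old = tf.contrib.losses.sum_of_pairwise_squares(predictions, targets)

    # Drop-in replacement with the same semantics.
    loss_new = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)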

tf.contrib.losses.sum_of_squares()

tf.contrib.losses.sum_of_squares(*args, **kwargs)

Adds a Sum-of-Squares loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: Use mean_squared_error.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector.
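A minimal sketch of the same migration for the sum-of-squares loss, again assuming TensorFlow 0.x; the weight keyword on the replacement is assumed to mirror the deprecated signature described above (values illustrative):

    import tensorflow as tf

    predictions = tf.constant([[0.5, 1.0], [2.0, 3.0]])
    targets = tf.constant([[1.0, 1.0], [2.0, 2.0]])

    # Deprecated alias and its replacement.
    loss_old = tf.contrib.losses.sum_of_squares(predictions, targets)
    loss_new = tf.contrib.losses.mean_squared_error(predictions, targets)

    # One weight per batch sample rescales that sample's loss.
    weighted = tf.contrib.losses.mean_squared_error(
        predictions, targets, weight=tf.constant([1.0, 0.5]))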

tf.contrib.losses.sparse_softmax_cross_entropy()

tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels, weight=1.0, scope=None)

Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

Args:
  logits: [batch_size, num_classes] logits outputs of the network.
  labels: [batch_size, 1] or [batch_size] target labels of dtype int32 or int64 in the range [0, num_classes).
  weight: Coefficients for the loss. A scalar or a tensor of shape [batch_size].
  scope: the scope for the operations performed in computing the loss.
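A minimal usage sketch, assuming TensorFlow 0.x (logits and labels are illustrative):

    import tensorflow as tf

    logits = tf.constant([[2.0, 0.5, 0.1],
                          [0.1, 0.2, 3.0]])        # [batch_size, num_classes]
    labels = tf.constant([0, 2], dtype=tf.int64)   # [batch_size] class indices

    loss = tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels)

    # Per-sample weighting with a [batch_size] tensor.
    weighted = tf.contrib.losses.sparse_softmax_cross_entropy(
        logits, labels, weight=tf.constant([1.0, 0.2]))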

tf.contrib.losses.sigmoid_cross_entropy()

tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/2:

  new_multiclass_labels = multi_class_labels * (1 - label_smoothing) + 0.5 * label_smoothing
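A minimal usage sketch, assuming TensorFlow 0.x (values illustrative):

    import tensorflow as tf

    logits = tf.constant([[1.2, -0.5],
                          [-2.0, 3.0]])             # [batch_size, num_classes]
    multi_class_labels = tf.constant([[1.0, 0.0],
                                      [0.0, 1.0]])  # independent binary targets

    loss = tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels)

    # label_smoothing pulls the hard 0/1 targets towards 1/2.
    smoothed = tf.contrib.losses.sigmoid_cross_entropy(
        logits, multi_class_labels, label_smoothing=0.1)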

tf.contrib.metrics.accuracy()

tf.contrib.metrics.accuracy(predictions, labels, weights=None)

Computes the percentage of times that predictions matches labels.

Args:
  predictions: the predicted values, a Tensor whose dtype and shape matches 'labels'.
  labels: the ground truth values, a Tensor of any shape and bool, integer, or string dtype.
  weights: None or Tensor of float values to reweight the accuracy.

Returns:
  Accuracy Tensor.

Raises:
  ValueError: if dtypes don't match or if dtype is not bool, integer, or string.
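A minimal usage sketch, assuming TensorFlow 0.x (values illustrative):

    import tensorflow as tf

    predictions = tf.constant([1, 0, 1, 1])
    labels = tf.constant([1, 0, 0, 1])

    acc = tf.contrib.metrics.accuracy(predictions, labels)  # evaluates to 0.75

    # Float weights reweight each element's contribution; a zero
    # weight drops that comparison from the average.
    weighted_acc = tf.contrib.metrics.accuracy(
        predictions, labels, weights=tf.constant([1.0, 1.0, 0.0, 1.0]))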

tf.contrib.losses.softmax_cross_entropy()

tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/num_classes:

  new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes
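A minimal usage sketch, assuming TensorFlow 0.x (values illustrative):

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])         # [batch_size, num_classes]
    onehot_labels = tf.constant([[1.0, 0.0, 0.0]])

    loss = tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels)

    # Smoothing moves each one-hot row towards the uniform
    # distribution 1/num_classes.
    smoothed = tf.contrib.losses.softmax_cross_entropy(
        logits, onehot_labels, label_smoothing=0.1)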

tf.contrib.losses.log_loss()

tf.contrib.losses.log_loss(predictions, targets, weight=1.0, epsilon=1e-07, scope=None)

Adds a Log Loss term to the training procedure.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.
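A minimal usage sketch, assuming TensorFlow 0.x; predictions are probabilities in [0, 1] (values illustrative):

    import tensorflow as tf

    predictions = tf.constant([[0.9, 0.2], [0.4, 0.8]])
    targets = tf.constant([[1.0, 0.0], [0.0, 1.0]])

    loss = tf.contrib.losses.log_loss(predictions, targets)

    # A weight shaped like predictions scales each element's loss.
    weighted = tf.contrib.losses.log_loss(
        predictions, targets, weight=tf.constant([[1.0, 0.5], [0.5, 1.0]]))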

tf.contrib.losses.get_total_loss()

tf.contrib.losses.get_total_loss(add_regularization_losses=True, name='total_loss')

Returns a tensor whose value represents the total loss.

Notice that the function adds the given losses to the regularization losses.

Args:
  add_regularization_losses: A boolean indicating whether or not to use the regularization losses in the sum.
  name: The name of the returned tensor.

Returns:
  A Tensor whose value represents the total loss.

Raises:
  ValueError: if losses is not iterable.
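A minimal sketch of how the collected losses combine, assuming TensorFlow 0.x; the predictions and targets tensors are carried over from the log_loss example above:

    # Losses created via tf.contrib.losses are added to a collection
    # automatically, so the total can be fetched in one call.
    loss = tf.contrib.losses.log_loss(predictions, targets)
    total = tf.contrib.losses.get_total_loss()  # loss + regularization losses

    # Sum the collected losses without the regularization terms.
    total_plain = tf.contrib.losses.get_total_loss(add_regularization_losses=False)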

tf.contrib.losses.mean_pairwise_squared_error()

tf.contrib.losses.mean_pairwise_squared_error(predictions, targets, weight=1.0, scope=None)

Adds a pairwise-errors-squared loss to the training procedure.

Unlike mean_squared_error, which is a measure of the differences between corresponding elements of predictions and targets, mean_pairwise_squared_error is a measure of the differences between pairs of corresponding elements of predictions and targets.
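A minimal sketch of the pairwise property, assuming TensorFlow 0.x (values illustrative):

    import tensorflow as tf

    predictions = tf.constant([[4.0, 8.0, 12.0]])
    targets = tf.constant([[1.0, 9.0, 2.0]])

    # Only differences between pairs of elements are penalized, so
    # adding a constant offset to every prediction leaves the loss
    # unchanged.
    loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)
    same = tf.contrib.losses.mean_pairwise_squared_error(predictions + 10.0, targets)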

tf.contrib.losses.get_regularization_losses()

tf.contrib.losses.get_regularization_losses(scope=None)

Gets the regularization losses.

Args:
  scope: an optional scope for filtering the losses to return.

Returns:
  A list of loss variables.
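A minimal sketch, assuming TensorFlow 0.x; the variable name 'w' and the choice of tf.contrib.layers.l2_regularizer are illustrative:

    import tensorflow as tf

    # A regularizer attached to a variable registers its penalty in
    # the REGULARIZATION_LOSSES collection.
    w = tf.get_variable(
        'w', shape=[10, 10],
        regularizer=tf.contrib.layers.l2_regularizer(0.1))

    # Returns the collected regularization loss tensors, optionally
    # filtered by scope.
    reg_losses = tf.contrib.losses.get_regularization_losses()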