tf.contrib.losses.get_total_loss()

tf.contrib.losses.get_total_loss(add_regularization_losses=True, name='total_loss')

Returns a tensor whose value represents the total loss. Note that the function adds the given losses to the regularization losses.

Args:
  add_regularization_losses: A boolean indicating whether or not to include the regularization losses in the sum.
  name: The name of the returned tensor.

Returns:
  A Tensor whose value represents the total loss.

Raises:
  ValueError: If losses is not iterable.
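
A minimal usage sketch, assuming the TF 0.x-era tf.contrib API; the constants below are made-up inputs:

    import tensorflow as tf

    predictions = tf.constant([[0.5, 0.2], [0.1, 0.9]])
    targets = tf.constant([[1.0, 0.0], [0.0, 1.0]])

    # Each tf.contrib.losses function registers its loss in the default
    # 'losses' collection as a side effect.
    tf.contrib.losses.absolute_difference(predictions, targets)

    # Sums every registered loss, plus regularization losses by default.
    total_loss = tf.contrib.losses.get_total_loss(add_regularization_losses=True)

    with tf.Session() as sess:
        print(sess.run(total_loss))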

tf.contrib.losses.get_regularization_losses()

tf.contrib.losses.get_regularization_losses(scope=None)

Gets the list of regularization losses.

Args:
  scope: An optional scope for filtering the losses to return.

Returns:
  A list of loss variables.
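
A sketch of how a regularization loss ends up in this list; layer builders normally attach regularizers for you, so registering one by hand here is just for illustration:

    import tensorflow as tf

    weights = tf.Variable(tf.random_normal([10, 10]), name='weights')

    # Regularization losses live in the standard collection that
    # get_regularization_losses() reads from.
    tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES,
                         0.01 * tf.nn.l2_loss(weights))

    reg_losses = tf.contrib.losses.get_regularization_losses()
    # reg_losses is a Python list containing the tensor added above.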

tf.contrib.losses.get_losses()

tf.contrib.losses.get_losses(scope=None, loss_collection='losses')

Gets the list of losses from the loss_collection.

Args:
  scope: An optional scope for filtering the losses to return.
  loss_collection: Optional losses collection.

Returns:
  A list of loss tensors.
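
A sketch of scope filtering; the 'tower_0'/'tower_1' scopes are hypothetical names:

    import tensorflow as tf

    predictions = tf.constant([1.0, 2.0])
    targets = tf.constant([1.5, 1.5])

    with tf.name_scope('tower_0'):
        tf.contrib.losses.absolute_difference(predictions, targets)
    with tf.name_scope('tower_1'):
        tf.contrib.losses.absolute_difference(predictions, targets)

    all_losses = tf.contrib.losses.get_losses()
    tower_losses = tf.contrib.losses.get_losses(scope='tower_0')  # one entry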

tf.contrib.losses.cosine_distance()

tf.contrib.losses.cosine_distance(predictions, targets, dim, weight=1.0, scope=None)

Adds a cosine-distance loss to the training procedure. Note that the function assumes that the predictions and targets are already unit-normalized.

Args:
  predictions: An arbitrary matrix.
  targets: A Tensor whose shape matches 'predictions'.
  dim: The dimension along which the cosine distance is computed.
  weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  scope: The scope for the operations performed in computing the loss.
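
Because the function assumes unit-normalized inputs, a typical call normalizes along the distance dimension first; a sketch with made-up vectors:

    import tensorflow as tf

    raw_predictions = tf.constant([[1.0, 2.0, 2.0]])
    raw_targets = tf.constant([[3.0, 0.0, 4.0]])

    # Normalize along the same dimension the distance is computed over.
    predictions = tf.nn.l2_normalize(raw_predictions, 1)
    targets = tf.nn.l2_normalize(raw_targets, 1)

    loss = tf.contrib.losses.cosine_distance(predictions, targets, dim=1)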

tf.contrib.losses.compute_weighted_loss()

tf.contrib.losses.compute_weighted_loss(losses, weight=1.0)

Computes the weighted loss.

Args:
  losses: A tensor of size [batch_size, d1, ... dN].
  weight: A tensor of size [1] or [batch_size, d1, ... dK], where K < N.

Returns:
  A scalar Tensor that returns the weighted loss.

Raises:
  ValueError: If weight is None, if its shape is not compatible with the shape of losses, or if the number of dimensions (rank) of either losses or weight is missing.
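
A sketch of per-sample weighting; a zero weight drops a sample from the average:

    import tensorflow as tf

    # Per-sample losses for a batch of three examples.
    losses = tf.constant([0.5, 2.0, 1.0])

    # Mask the second sample out of the batch entirely.
    weight = tf.constant([1.0, 0.0, 1.0])

    weighted = tf.contrib.losses.compute_weighted_loss(losses, weight=weight)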

tf.contrib.losses.add_loss()

tf.contrib.losses.add_loss(*args, **kwargs)

Adds an externally defined loss to the collection of losses.

Args:
  loss: A loss Tensor.
  loss_collection: Optional collection to add the loss to.
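
A sketch of registering a hand-rolled loss so the collection helpers above see it:

    import tensorflow as tf

    predictions = tf.constant([1.0, 2.0])
    targets = tf.constant([1.5, 1.5])

    # A custom loss that no tf.contrib.losses function produced.
    custom_loss = tf.reduce_mean(tf.square(predictions - targets))

    # After this call, get_losses() and get_total_loss() pick it up.
    tf.contrib.losses.add_loss(custom_loss)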

tf.contrib.losses.absolute_difference()

tf.contrib.losses.absolute_difference(predictions, targets, weight=1.0, scope=None)

Adds an Absolute Difference loss to the training procedure.

weight acts as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.
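
A sketch of the [batch_size] weighting case described above:

    import tensorflow as tf

    predictions = tf.constant([[0.0, 1.0], [2.0, 3.0]])
    targets = tf.constant([[0.5, 1.0], [2.0, 4.0]])

    # The first sample counts twice as much as the second.
    weight = tf.constant([2.0, 1.0])

    loss = tf.contrib.losses.absolute_difference(predictions, targets,
                                                 weight=weight)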

tf.contrib.learn.train()

tf.contrib.learn.train(graph, output_dir, train_op, loss_op, global_step_tensor=None, init_op=None, init_feed_dict=None, init_fn=None, log_every_steps=10, supervisor_is_chief=True, supervisor_master='', supervisor_save_model_secs=600, keep_checkpoint_max=5, supervisor_save_summaries_steps=100, feed_fn=None, steps=None, fail_on_nan_loss=True, monitors=None, max_steps=None)

Train a model. Given graph, a directory to write outputs to (output_dir), and some ops, run a training loop. The given train_op performs one step of training on the model.
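
A minimal end-to-end sketch; the toy regression graph and the '/tmp/toy_model' output directory are hypothetical:

    import tensorflow as tf

    graph = tf.Graph()
    with graph.as_default():
        # A toy least-squares objective standing in for a real model.
        x = tf.constant([[1.0], [2.0], [3.0]])
        y = tf.constant([[2.0], [4.0], [6.0]])
        w = tf.Variable([[0.0]])
        loss_op = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

        # Passing the global step explicitly avoids relying on lookup.
        global_step = tf.Variable(0, name='global_step', trainable=False)
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss_op, global_step=global_step)

    tf.contrib.learn.train(graph, '/tmp/toy_model', train_op, loss_op,
                           global_step_tensor=global_step, steps=100)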

tf.contrib.learn.TensorFlowRNNRegressor.__repr__()

tf.contrib.learn.TensorFlowRNNRegressor.__repr__()

tf.contrib.learn.TensorFlowRNNRegressor.__init__()

tf.contrib.learn.TensorFlowRNNRegressor.__init__(rnn_size, cell_type='gru', num_layers=1, input_op_fn=null_input_op_fn, initial_state=None, bidirectional=False, sequence_length=None, attn_length=None, attn_size=None, attn_vec_size=None, n_classes=0, batch_size=32, steps=50, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)

Initializes a TensorFlowRNNRegressor instance.

Args:
  rnn_size: The size of the RNN cell, e.g. the size of your word embeddings.
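
A construction sketch with made-up sequence data (20 sequences of 5 steps with 8 features each); real use often supplies an input_op_fn that reshapes features into the sequence layout the cell expects:

    import numpy as np
    import tensorflow as tf

    x_train = np.random.rand(20, 5, 8).astype(np.float32)
    y_train = np.random.rand(20).astype(np.float32)

    regressor = tf.contrib.learn.TensorFlowRNNRegressor(
        rnn_size=16, cell_type='gru', num_layers=1,
        steps=100, learning_rate=0.05)

    # The estimator follows the fit/predict interface of this era.
    regressor.fit(x_train, y_train)
    predictions = regressor.predict(x_train)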