tf.contrib.layers.optimize_loss()

tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None)

Given a loss and parameters for the optimizer, returns a training op.

There are several ways to pass an optimizer; each style is shown in the sketch after this list:
  • string: the name of an optimizer, like 'SGD' or 'Adam'; see OPTIMIZER_CLS_NAMES for the full list. E.g. optimize_loss(..., optimizer='Adam').
  • function: takes the learning rate Tensor as its argument and must return an Optimizer instance. E.g. optimize_loss(..., optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5)). Alternatively, if learning_rate is None, the function takes no arguments. E.g. optimize_loss(..., learning_rate=None, optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5)).
  • class: a subclass of Optimizer that takes only one required argument, the learning rate, such as AdamOptimizer or AdagradOptimizer. E.g. optimize_loss(..., optimizer=tf.train.AdagradOptimizer).
  • object: an instance of an Optimizer subclass. E.g. optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5)).
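A minimal sketch of the four styles, assuming a TF 1.x graph where tf.contrib is available; the toy loss and global_step variable exist only for illustration:

  import tensorflow as tf

  # Toy setup: a scalar loss over a single trainable variable.
  x = tf.Variable(3.0)
  loss = tf.square(x)  # 0-dimensional loss Tensor
  global_step = tf.Variable(0, trainable=False, name='global_step')

  # 1. String: the optimizer is looked up by name.
  train_op = tf.contrib.layers.optimize_loss(
      loss, global_step, learning_rate=0.1, optimizer='Adam')

  # 2. Function: receives the learning rate Tensor, returns an Optimizer.
  train_op = tf.contrib.layers.optimize_loss(
      loss, global_step, learning_rate=0.1,
      optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))

  # 3. Class: instantiated internally with the learning rate.
  train_op = tf.contrib.layers.optimize_loss(
      loss, global_step, learning_rate=0.1,
      optimizer=tf.train.AdagradOptimizer)

  # 4. Instance: the learning rate is already bound, so pass learning_rate=None.
  train_op = tf.contrib.layers.optimize_loss(
      loss, global_step, learning_rate=None,
      optimizer=tf.train.AdagradOptimizer(0.5))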

Args:
  • loss: Tensor, 0-dimensional (a scalar).
  • global_step: Tensor, step counter for each update.
  • learning_rate: float or Tensor, magnitude of the update at each training step.
  • optimizer: string, class, or optimizer instance used as the trainer. A string should be the name of an optimizer, like 'SGD', 'Adam', or 'Adagrad'; the full list is in the OPTIMIZER_CLS_NAMES constant. A class should be a subclass of tf.Optimizer that implements the compute_gradients and apply_gradients functions. An optimizer instance should be an instantiation of a tf.Optimizer subclass that has compute_gradients and apply_gradients functions.
  • gradient_noise_scale: float or None, adds 0-mean normal noise scaled by this value.
  • gradient_multipliers: dict of variables or variable names to floats. If present, gradients for specified variables will be multiplied by given constant.
  • clip_gradients: float or None, clips gradients by this value.
  • learning_rate_decay_fn: function that takes learning_rate and global_step Tensors and returns a Tensor. Can be used to implement any learning rate decay function, for example tf.train.exponential_decay.
  • update_ops: list of update Operations to execute at each step. If None, uses elements of UPDATE_OPS collection. The order of execution between update_ops and loss is non-deterministic.
  • variables: list of variables to optimize or None to use all trainable variables.
  • name: The name for this operation, used to scope operations and summaries.
  • summaries: List of internal quantities to visualize on TensorBoard. If not set, only the loss and the learning rate will be reported. The complete list is in OPTIMIZER_SUMMARIES.
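As a hedged sketch, several of these arguments combined in one call. The clipping threshold and decay schedule below are arbitrary illustration values, not recommendations, and the summary names are assumed to be members of OPTIMIZER_SUMMARIES:

  import tensorflow as tf

  x = tf.Variable(3.0)
  loss = tf.square(x)
  global_step = tf.Variable(0, trainable=False, name='global_step')

  train_op = tf.contrib.layers.optimize_loss(
      loss,
      global_step,
      learning_rate=0.1,
      optimizer='SGD',
      clip_gradients=5.0,  # clip gradients at this value
      # Wrap tf.train.exponential_decay to match the (lr, step) signature.
      learning_rate_decay_fn=lambda lr, step: tf.train.exponential_decay(
          lr, step, decay_steps=1000, decay_rate=0.96),
      summaries=['loss', 'learning_rate', 'gradient_norm'])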
Returns:

Training op.
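A possible end-to-end usage in TF 1.x graph mode. Running the returned op performs one update and increments global_step; the returned tensor is commonly reported to evaluate to the loss value as well, but treat that as an assumption rather than a guarantee:

  import tensorflow as tf

  x = tf.Variable(3.0)
  loss = tf.square(x)
  global_step = tf.Variable(0, trainable=False, name='global_step')

  train_op = tf.contrib.layers.optimize_loss(
      loss, global_step, learning_rate=0.1, optimizer='SGD')

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      for _ in range(100):
          loss_value = sess.run(train_op)  # one training step
      print('steps taken:', sess.run(global_step))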

Raises:
  • ValueError: if optimizer has the wrong type.