tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None)
Given loss and parameters for optimizer, returns a training op.
There are several ways to pass an optimizer:
- string: the name of the optimizer, like 'SGD' or 'Adam'; see OPTIMIZER_CLS_NAMES for the full list. E.g. optimize_loss(..., optimizer='Adam').
- function: takes the learning rate Tensor as its argument and must return an Optimizer instance. E.g. optimize_loss(..., optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5)). Alternatively, if learning_rate is None, the function takes no arguments. E.g. optimize_loss(..., learning_rate=None, optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5)).
- class: a subclass of Optimizer that takes only one required argument, the learning rate, such as AdamOptimizer or AdagradOptimizer. E.g. optimize_loss(..., optimizer=tf.train.AdagradOptimizer).
- object: an instance of a subclass of Optimizer. E.g. optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5)).
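For instance, a minimal sketch of the string form; the toy variable, loss, and step counter below stand in for a real model's graph:

    import tensorflow as tf

    # Toy 0-D loss and step counter; a real model defines its own.
    x = tf.Variable([1.0, 2.0])
    loss = tf.reduce_mean(tf.square(x))
    global_step = tf.Variable(0, name='global_step', trainable=False)

    # String form: 'SGD' is resolved via OPTIMIZER_CLS_NAMES.
    train_op = tf.contrib.layers.optimize_loss(
        loss, global_step, learning_rate=0.01, optimizer='SGD')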
Args:
- loss: Tensor, 0-dimensional.
- global_step: Tensor, step counter for each update.
- learning_rate: float or Tensor, magnitude of the update per training step.
- optimizer: string, class, or optimizer instance, used as the trainer. A string should name an optimizer, like 'SGD', 'Adam', or 'Adagrad'; the full list is in the OPTIMIZER_CLS_NAMES constant. A class should be a subclass of tf.Optimizer that implements the compute_gradients and apply_gradients functions. An optimizer instance should be an instantiation of a tf.Optimizer subclass and have the compute_gradients and apply_gradients functions.
- gradient_noise_scale: float or None, adds zero-mean normal noise scaled by this value.
- gradient_multipliers: dict of variables or variable names to floats. If present, gradients for the specified variables are multiplied by the given constant.
- clip_gradients: float or None, clips gradients by this value.
- learning_rate_decay_fn: function that takes the learning_rate and global_step Tensors and returns a Tensor. Can be used to implement any learning-rate decay function, for example tf.train.exponential_decay; see the sketch after this list.
- update_ops: list of update Operations to execute at each step. If None, the elements of the UPDATE_OPS collection are used. The order of execution between update_ops and loss is non-deterministic.
- variables: list of variables to optimize, or None to use all trainable variables.
- name: the name for this operation, used to scope operations and summaries.
- summaries: list of internal quantities to visualize on TensorBoard. If not set, only the loss and the learning rate are reported. The complete list is in OPTIMIZER_SUMMARIES.
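As an illustration of learning_rate_decay_fn and clip_gradients together, here is a minimal sketch; the decay constants and clip value are arbitrary choices, not defaults:

    import tensorflow as tf

    # A decay function receives the learning_rate and global_step
    # Tensors and returns the decayed learning-rate Tensor.
    def decay_fn(learning_rate, global_step):
        return tf.train.exponential_decay(
            learning_rate, global_step,
            decay_steps=1000, decay_rate=0.96, staircase=True)

    x = tf.Variable([1.0, 2.0])
    loss = tf.reduce_mean(tf.square(x))  # toy 0-D loss
    global_step = tf.Variable(0, name='global_step', trainable=False)

    train_op = tf.contrib.layers.optimize_loss(
        loss, global_step, learning_rate=0.1, optimizer='Adam',
        clip_gradients=5.0,              # clip gradients at 5.0
        learning_rate_decay_fn=decay_fn)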
Returns:
Training op.
Raises:
- ValueError: if optimizer is of the wrong type.
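Putting the pieces together, a sketch of running the returned op in a session, assuming a toy linear-regression graph:

    import tensorflow as tf

    # Illustrative linear-regression graph.
    x = tf.placeholder(tf.float32, [None, 1])
    y = tf.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = tf.contrib.layers.optimize_loss(
        loss, global_step, learning_rate=0.01,
        optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.9))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            sess.run(train_op, feed_dict={x: [[1.0]], y: [[2.0]]})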