tf.contrib.layers.batch_norm()

tf.contrib.layers.batch_norm(*args, **kwargs)

Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167.

"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"

Sergey Ioffe, Christian Szegedy

Can be used as a normalizer function for conv2d and fully_connected.
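For example, a minimal sketch of passing it as the normalizer_fn of conv2d (the input shape, layer size, and decay value here are illustrative assumptions, not part of the API):

    import tensorflow as tf

    inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])  # example shape

    # normalizer_fn applies batch_norm to the conv2d output;
    # normalizer_params forwards keyword arguments to batch_norm.
    net = tf.contrib.layers.conv2d(
        inputs, num_outputs=64, kernel_size=3,
        normalizer_fn=tf.contrib.layers.batch_norm,
        normalizer_params={'is_training': True, 'decay': 0.9})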

Note: when is_training is True, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op. For example:

    # control_flow_ops lives in TensorFlow's internal ops package in 1.x.
    from tensorflow.python.ops import control_flow_ops

    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    if update_ops:
        # Make total_loss depend on the update ops so that the moving
        # averages are refreshed whenever the loss is evaluated.
        updates = tf.group(*update_ops)
        total_loss = control_flow_ops.with_dependencies([updates], total_loss)

One can set updates_collections=None to force the updates in place, but that can incur a speed penalty, especially in distributed settings.
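For instance, a minimal sketch of forcing in-place updates (the decay value is an illustrative assumption):

    # With updates_collections=None, batch_norm adds a control dependency so
    # moving_mean and moving_variance are updated in place during training.
    net = tf.contrib.layers.batch_norm(
        net, decay=0.9, is_training=True, updates_collections=None)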

Args:
  • inputs: a tensor with 2 or more dimensions, where the first dimension is the batch size. The normalization is over all but the last dimension.
  • decay: decay for the moving average.
  • center: If True, subtract beta. If False, beta is ignored.
  • scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (this also applies to, e.g., nn.relu), this can be disabled, since the scaling can be done by the next layer.
  • epsilon: small float added to variance to avoid dividing by zero.
  • activation_fn: activation function; defaults to None, which skips it and maintains a linear activation.
  • updates_collections: collections to collect the update ops into. The update ops need to be executed with the train_op. If None, a control dependency will be added to make sure the updates are computed in place.
  • is_training: whether or not the layer is in training mode. In training mode it accumulates the statistics of the moments into moving_mean and moving_variance, using an exponential moving average with the given decay. When it is not in training mode, it uses the values of moving_mean and moving_variance.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given (see the combined sketch after the Raises section).
  • variables_collections: optional collections for the variables.
  • outputs_collections: collections to add the outputs to.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.
Returns:

A Tensor representing the output of the operation.

Raises:
  • ValueError: if rank or last dimension of inputs is undefined.
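Putting the pieces together, a sketch of sharing one batch_norm layer between a training and an evaluation graph (the input shape and the scope name 'bn' are illustrative assumptions):

    import tensorflow as tf

    inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])  # example shape

    # Training graph: normalize with batch statistics and accumulate the
    # moving averages (their update ops go to tf.GraphKeys.UPDATE_OPS).
    train_out = tf.contrib.layers.batch_norm(
        inputs, is_training=True, scope='bn')

    # Evaluation graph: reuse the same beta/gamma/moving_mean/moving_variance
    # variables and normalize with the accumulated moving statistics.
    eval_out = tf.contrib.layers.batch_norm(
        inputs, is_training=False, reuse=True, scope='bn')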