tf.contrib.layers.layer_norm(*args, **kwargs)
Adds a Layer Normalization layer from https://arxiv.org/abs/1607.06450.
"Layer Normalization"
Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
Can be used as a normalizer function for conv2d and fully_connected.
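For example, a minimal sketch of passing it as the normalizer_fn of fully_connected (assuming TensorFlow 1.x, where tf.contrib is available; the shapes and layer width are illustrative):

```python
import tensorflow as tf

# Batch of 128-dimensional inputs (shape chosen for illustration).
x = tf.placeholder(tf.float32, shape=[None, 128])

# Layer normalization is applied to each unit's pre-activation,
# before fully_connected's default ReLU activation.
h = tf.contrib.layers.fully_connected(
    x, 64, normalizer_fn=tf.contrib.layers.layer_norm)
```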
Args:
- inputs: a tensor with 2 or more dimensions. The normalization occurs over all but the first dimension.
- center: If True, subtract beta. If False, beta is ignored.
- scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
- activation_fn: activation function, default set to None to skip it and maintain a linear activation.
- reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
- variables_collections: optional collections for the variables.
- outputs_collections: collections to add the outputs to.
- trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
- scope: Optional scope for variable_op_scope.
Returns:
A Tensor representing the output of the operation.
Raises:
- ValueError: if rank or last dimension of inputs is undefined.
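Calling the layer directly might look like the following sketch (again assuming TensorFlow 1.x graph mode; the input shape is illustrative):

```python
import tensorflow as tf

# 4 examples with 10 features each; statistics are computed per
# example over all dimensions but the first.
inputs = tf.random_normal([4, 10])
normed = tf.contrib.layers.layer_norm(inputs, center=True, scale=True)

with tf.Session() as sess:
    # Initializes the beta and gamma variables created by the layer.
    sess.run(tf.global_variables_initializer())
    print(sess.run(normed))
```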