tf.contrib.layers.xavier_initializer_conv2d()

tf.contrib.layers.xavier_initializer_conv2d(uniform=True, seed=None, dtype=tf.float32)

Returns an initializer performing "Xavier" initialization for weights. This function implements the weight initialization from: Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics. This initializer is designed to keep the scale of the gradients roughly the same in all layers. In uniform distribution this ends up being the range: x = sqrt(6. / (in + out)).
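A minimal usage sketch for a convolution kernel (assuming a TF 1.x graph where tf.contrib is available; the kernel shape and variable name are illustrative):

    import tensorflow as tf

    # 3x3 kernel mapping 16 input channels to 32 output channels;
    # fan_in and fan_out are derived from the kernel shape.
    kernel = tf.get_variable(
        "conv1/weights", shape=[3, 3, 16, 32],
        initializer=tf.contrib.layers.xavier_initializer_conv2d())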

tf.contrib.layers.xavier_initializer()

tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)

Returns an initializer performing "Xavier" initialization for weights. This function implements the weight initialization from: Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics. This initializer is designed to keep the scale of the gradients roughly the same in all layers. In uniform distribution this ends up being the range: x = sqrt(6. / (in + out)).
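A brief sketch for a fully connected layer (same tf import as above; the layer shape is illustrative):

    # Weight matrix of a fully connected layer; uniform=False samples
    # from a truncated normal distribution instead of a uniform one.
    weights = tf.get_variable(
        "fc1/weights", shape=[784, 256],
        initializer=tf.contrib.layers.xavier_initializer(uniform=False))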

tf.contrib.layers.variance_scaling_initializer()

tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)

Returns an initializer that generates tensors without scaling variance. When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. This initializer uses the following formula:

    if mode='FAN_IN':    # Count only number of input connections.
      n = fan_in
    elif mode='FAN_OUT': # Count only number of output connections.
      n = fan_out
    elif mode='FAN_AVG': # Average number of input and output connections.
      n = (fan_in + fan_out) / 2.0

    truncated_normal(shape, 0.0, stddev=sqrt(factor / n))
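As a sketch, the defaults (factor=2.0, mode='FAN_IN', uniform=False) correspond to the ReLU-oriented initialization of He et al. (2015); the layer shape below is illustrative:

    he_init = tf.contrib.layers.variance_scaling_initializer(
        factor=2.0, mode='FAN_IN', uniform=False)
    weights = tf.get_variable("fc2/weights", shape=[784, 256],
                              initializer=he_init)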

tf.contrib.layers.unit_norm()

tf.contrib.layers.unit_norm(*args, **kwargs)

Normalizes the given input across the specified dimension to unit length. Note that the rank of input must be known.

Args:
  inputs: A Tensor of arbitrary size.
  dim: The dimension along which the input is normalized.
  epsilon: A small value to add to the inputs to avoid dividing by zero.
  scope: Optional scope for variable_scope.

Returns:
  The normalized Tensor.

Raises:
  ValueError: If dim is smaller than the number of dimensions in 'inputs'.
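A minimal sketch (same tf import as above; the feature width is illustrative):

    # Normalize each row of a batch of feature vectors to unit length.
    # The placeholder has a known rank (2), as unit_norm requires.
    features = tf.placeholder(tf.float32, shape=[None, 128])
    normalized = tf.contrib.layers.unit_norm(features, dim=1, epsilon=1e-7)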

tf.contrib.layers.sum_regularizer()

tf.contrib.layers.sum_regularizer(regularizer_list, scope=None)

Returns a function that applies the sum of multiple regularizers.

Args:
  regularizer_list: A list of regularizers to apply.
  scope: An optional scope name.

Returns:
  A function with signature sum_reg(weights) that applies the sum of all the input regularizers.
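A brief sketch combining the contrib L1 and L2 regularizers into an elastic-net-style penalty (scales are illustrative):

    l1 = tf.contrib.layers.l1_regularizer(scale=0.001)
    l2 = tf.contrib.layers.l2_regularizer(scale=0.01)
    elastic = tf.contrib.layers.sum_regularizer([l1, l2])
    # Passing the regularizer to get_variable adds the penalty to the
    # REGULARIZATION_LOSSES collection.
    weights = tf.get_variable("w", shape=[10, 10], regularizer=elastic)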

tf.contrib.layers.summarize_tensors()

tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)

Summarize a set of tensors.
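A brief sketch (same tf import as above; the tensors are illustrative):

    logits = tf.ones([4, 10], name="logits")    # non-scalar -> histogram
    loss = tf.reduce_mean(logits, name="loss")  # scalar -> scalar summary
    summary_ops = tf.contrib.layers.summarize_tensors([logits, loss])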

tf.contrib.layers.summarize_tensor()

tf.contrib.layers.summarize_tensor(tensor, tag=None)

Summarize a tensor using a suitable summary type. This function adds a summary op for tensor. The type of summary depends on the shape of tensor: for scalars, a scalar_summary is created; for all other tensors, a histogram_summary is used.

Args:
  tensor: The tensor to summarize.
  tag: The tag to use; if None, the tensor's op's name is used.

Returns:
  The summary op created, or None for string tensors.
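A minimal sketch showing the tag argument (tensor name and tag are illustrative):

    # Explicit tag; without it the tensor's op name is used.
    acts = tf.ones([32, 64], name="fc1/act")
    tf.contrib.layers.summarize_tensor(acts, tag="fc1/activations")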

tf.contrib.layers.summarize_collection()

tf.contrib.layers.summarize_collection(collection, name_filter=None, summarizer=summarize_tensor)

Summarize a graph collection of tensors, possibly filtered by name. The layers module defines convenience functions summarize_variables, summarize_weights and summarize_biases, which set the collection argument of summarize_collection to VARIABLES, WEIGHTS and BIASES, respectively.
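A sketch summarizing every trainable variable whose op name matches a filter (the filter is assumed to be a regular expression; the pattern is illustrative):

    tf.contrib.layers.summarize_collection(
        tf.GraphKeys.TRAINABLE_VARIABLES, name_filter=".*weights.*")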

tf.contrib.layers.summarize_activations()

tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)

Summarize activations, using summarize_activation to summarize each one.
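A sketch, assuming activations are registered in the tf.GraphKeys.ACTIVATIONS collection (contrib layers can register their outputs via an outputs_collections argument; here one is added by hand):

    relu = tf.nn.relu(tf.ones([4, 8]), name="layer1/Relu")
    tf.add_to_collection(tf.GraphKeys.ACTIVATIONS, relu)
    tf.contrib.layers.summarize_activations()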

tf.contrib.layers.summarize_activation()

tf.contrib.layers.summarize_activation(op)

Summarize an activation. This applies the given activation and adds useful summaries specific to the activation.

Args:
  op: The tensor to summarize (assumed to be a layer activation).

Returns:
  The summary op created to summarize op.
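A minimal sketch; for a ReLU activation this typically adds a zero-fraction scalar summary alongside the usual histogram:

    act = tf.nn.relu(tf.ones([4, 8]), name="fc2/Relu")
    tf.contrib.layers.summarize_activation(act)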