tf.contrib.layers.flatten()

tf.contrib.layers.flatten(*args, **kwargs)

Flattens the input while maintaining the batch_size. Assumes that the first dimension represents the batch.

Args:
  inputs: a tensor of size [batch_size, ...].
  outputs_collections: collection to add the outputs.
  scope: Optional scope for name_scope.

Returns:
  a flattened tensor with shape [batch_size, k].

Raises:
  ValueError: if inputs.shape is wrong.
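
A minimal usage sketch, assuming a TensorFlow 1.x runtime where tf.contrib is available; the shapes are illustrative.

  import tensorflow as tf

  # A batch of 32 feature maps of size 7x7 with 64 channels.
  features = tf.placeholder(tf.float32, shape=[32, 7, 7, 64])
  # All dimensions except the batch dimension are collapsed: the result is [32, 3136].
  flat = tf.contrib.layers.flatten(features)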

tf.contrib.layers.convolution2d_transpose()

tf.contrib.layers.convolution2d_transpose(*args, **kwargs)

Adds a convolution2d_transpose with an optional batch normalization layer. The function creates a variable called weights, representing the kernel, that is convolved with the input. If batch_norm_params is None, a second variable called 'biases' is added to the result of the operation.

Args:
  inputs: a tensor of size [batch_size, height, width, channels].
  num_outputs: integer, the number of output filters.
  kernel_size: a list of length 2: [kernel_height, kernel_width]. Can be an int if both values are the same.
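
A minimal sketch of using the layer for 2x spatial upsampling, assuming TensorFlow 1.x; the shapes and hyperparameters are illustrative.

  import tensorflow as tf

  # A batch of 8 feature maps of size 16x16 with 128 channels.
  inputs = tf.placeholder(tf.float32, shape=[8, 16, 16, 128])
  # A transposed convolution with stride 2 and SAME padding doubles the
  # spatial size: the output is [8, 32, 32, 64].
  upsampled = tf.contrib.layers.convolution2d_transpose(
      inputs, num_outputs=64, kernel_size=[3, 3], stride=2, padding='SAME')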

tf.contrib.layers.fully_connected()

tf.contrib.layers.fully_connected(*args, **kwargs)

Adds a fully connected layer. fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
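
A minimal sketch of stacking two fully connected layers, assuming TensorFlow 1.x; the layer sizes are illustrative.

  import tensorflow as tf

  x = tf.placeholder(tf.float32, shape=[None, 784])
  # Hidden layer with 256 units; the default activation_fn is ReLU.
  hidden = tf.contrib.layers.fully_connected(x, num_outputs=256)
  # Output layer producing raw logits, so the activation is disabled.
  logits = tf.contrib.layers.fully_connected(hidden, num_outputs=10,
                                             activation_fn=None)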

tf.contrib.layers.convolution2d_in_plane()

tf.contrib.layers.convolution2d_in_plane(*args, **kwargs)

Performs the same in-plane convolution to each channel independently. This is useful for performing various simple channel-independent convolution operations such as image gradients:

  image = tf.constant(..., shape=(16, 240, 320, 3))
  vert_gradients = layers.conv2d_in_plane(image, kernel=[1, -1], kernel_size=[2, 1])
  horz_gradients = layers.conv2d_in_plane(image, kernel=[1, -1], kernel_size=[1, 2])

Args:
  inputs: a 4-D tensor with dimensions [batch_size, height, width, channels].
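
A minimal sketch of channel-independent in-plane convolutions, assuming TensorFlow 1.x; here the kernel values are left as learned variables and only kernel_size is specified, since passing fixed kernel weights is not covered by the signature shown above.

  import tensorflow as tf

  image = tf.placeholder(tf.float32, shape=[16, 240, 320, 3])
  # A learned 2x1 in-plane kernel applied to each of the 3 channels independently.
  vert = tf.contrib.layers.convolution2d_in_plane(image, kernel_size=[2, 1])
  # A learned 1x2 in-plane kernel, likewise applied per channel.
  horz = tf.contrib.layers.convolution2d_in_plane(image, kernel_size=[1, 2])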

tf.contrib.layers.apply_regularization()

tf.contrib.layers.apply_regularization(regularizer, weights_list=None)

Returns the summed penalty by applying regularizer to the weights_list. Adding a regularization penalty over the layer weights and embedding weights can help prevent overfitting the training data. Regularization over layer biases is less common/useful, but assuming proper data preprocessing/mean subtraction, it usually shouldn't hurt much either.

Args:
  regularizer: A function that takes a single Tensor argument and returns a scalar Tensor output.
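
A minimal sketch of summing an L2 penalty over a list of weight variables, assuming TensorFlow 1.x; the variable shapes and the regularization scale are illustrative.

  import tensorflow as tf

  weights = [tf.get_variable('w%d' % i, shape=[100, 100]) for i in range(3)]
  l2 = tf.contrib.layers.l2_regularizer(scale=1e-4)
  # A single scalar Tensor holding the summed penalty over all listed weights.
  penalty = tf.contrib.layers.apply_regularization(l2, weights_list=weights)
  # The penalty is then typically added to the task loss before optimization.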

tf.contrib.layers.avg_pool2d()

tf.contrib.layers.avg_pool2d(*args, **kwargs)

Adds a 2D average pooling op. It is assumed that the pooling is done per image but not in batch or channels.

Args:
  inputs: A Tensor of size [batch_size, height, width, channels].
  kernel_size: A list of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  stride: A list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
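
A minimal usage sketch, assuming TensorFlow 1.x; the shapes are illustrative.

  import tensorflow as tf

  inputs = tf.placeholder(tf.float32, shape=[4, 28, 28, 16])
  # 2x2 average pooling with stride 2 and the default VALID padding:
  # the output is [4, 14, 14, 16]; batch and channel dimensions are untouched.
  pooled = tf.contrib.layers.avg_pool2d(inputs, kernel_size=[2, 2], stride=2)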

tf.contrib.layers.batch_norm()

tf.contrib.layers.batch_norm(*args, **kwargs)

Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (Sergey Ioffe, Christian Szegedy). Can be used as a normalizer function for conv2d and fully_connected.

Note: When is_training is True the moving_mean and moving_variance need to be updated; by default the update_ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op.
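
A minimal sketch of the update_ops dependency mentioned in the note, assuming TensorFlow 1.x; the loss and optimizer are stand-ins for illustration.

  import tensorflow as tf

  x = tf.placeholder(tf.float32, shape=[None, 128])
  is_training = tf.placeholder(tf.bool)
  normalized = tf.contrib.layers.batch_norm(x, is_training=is_training)
  loss = tf.reduce_mean(tf.square(normalized))  # stand-in loss

  # The moving_mean/moving_variance updates live in UPDATE_OPS, so the
  # train op must run them as a dependency.
  update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
  with tf.control_dependencies(update_ops):
      train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)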

tf.contrib.layers.convolution2d()

tf.contrib.layers.convolution2d(*args, **kwargs)

Adds a 2D convolution followed by an optional batch_norm layer. convolution2d creates a variable called weights, representing the convolutional kernel, that is convolved with the inputs to produce a Tensor of activations. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the activations. Finally, if activation_fn is not None, it is applied to the activations as well.
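
A minimal sketch showing the plain layer and a batch-normalized variant, assuming TensorFlow 1.x; the shapes and filter counts are illustrative.

  import tensorflow as tf

  images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
  # 3x3 convolution with 64 output filters; biases and ReLU by default.
  conv = tf.contrib.layers.convolution2d(images, num_outputs=64, kernel_size=[3, 3])
  # With a normalizer_fn, batch_norm replaces the biases variable.
  conv_bn = tf.contrib.layers.convolution2d(
      images, num_outputs=64, kernel_size=[3, 3],
      normalizer_fn=tf.contrib.layers.batch_norm)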

tf.contrib.graph_editor.Transformer.__call__()

tf.contrib.graph_editor.Transformer.__call__(sgv, dst_graph, dst_scope, src_scope='', reuse_dst_scope=False)

Execute the transformation.

Args:
  sgv: the source subgraph-view.
  dst_graph: the destination graph.
  dst_scope: the destination scope.
  src_scope: the source scope, which specifies the path from which the relative path of the transformed nodes is computed. For instance, if src_scope is a/ and dst_scope is b/, then the node a/x/y will have a relative path of x/y and will be transformed into b/x/y.
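
A hedged sketch of invoking a default Transformer (which copies the selected subgraph), assuming TensorFlow 1.x and the graph_editor helper sgv_scope for selecting the ops under a scope; handling of the return value is omitted.

  import tensorflow as tf
  ge = tf.contrib.graph_editor

  graph = tf.Graph()
  with graph.as_default():
      with tf.name_scope('a'):
          x = tf.constant(1.0, name='x')
          y = tf.add(x, 2.0, name='y')

  dst_graph = tf.Graph()
  transformer = ge.Transformer()
  # Nodes a/x and a/y are copied into dst_graph under scope b/,
  # e.g. a/x becomes b/x.
  result = transformer(ge.sgv_scope('a', graph=graph), dst_graph,
                       dst_scope='b/', src_scope='a/')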

tf.contrib.graph_editor.transform_op_if_inside_handler()

tf.contrib.graph_editor.transform_op_if_inside_handler(info, op, keep_if_possible=True)

Transform an optional op only if it is inside the subgraph. This handler is typically used to handle original ops: it is fine to keep them if they are inside the subgraph, otherwise they are just ignored.

Args:
  info: Transform._Info instance.
  op: the optional op to transform (or ignore).
  keep_if_possible: re-attach to the original op if possible, that is, if the source graph and the destination graph are the same.
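
A hedged sketch of where such a handler is typically plugged in; the transform_original_op_handler attribute name is an assumption about the TF 1.x Transformer and may differ in other releases.

  import functools
  import tensorflow as tf
  ge = tf.contrib.graph_editor

  transformer = ge.Transformer()
  # Assumed attribute: handle "original op" references by keeping them only
  # when the referenced op lies inside the transformed subgraph.
  transformer.transform_original_op_handler = functools.partial(
      ge.transform_op_if_inside_handler, keep_if_possible=False)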