tf.contrib.layers.stack()

tf.contrib.layers.stack(inputs, layer, stack_args, **kwargs) Builds a stack of layers by applying layer repeatedly using stack_args. stack allows you to repeatedly apply the same operation with different arguments stack_args[i]. For each application of the layer, stack creates a new scope appended with an increasing number. For example:

y = stack(x, fully_connected, [32, 64, 128], scope='fc')
# It is equivalent to:
x = fully_connected(x, 32, scope='fc/fc_1')
x = fully_connected(x, 64, scope='fc/fc_2')
y = fully_connected(x, 128, scope='fc/fc_3')

tf.contrib.layers.separable_convolution2d()

tf.contrib.layers.separable_convolution2d(*args, **kwargs) Adds a depth-separable 2D convolution with optional batch_norm layer. This op first performs a depthwise convolution that acts separately on channels, creating a variable called depthwise_weights. If num_outputs is not None, it adds a pointwise convolution that mixes channels, creating a variable called pointwise_weights. Then, if batch_norm_params is None, it adds bias to the result, creating a variable called 'biases'; otherwise it adds a batch normalization layer.
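
A minimal usage sketch (not from the original docs), assuming a TF 1.x graph; the input shape, kernel size, and scope name are illustrative:

import tensorflow as tf

# Hypothetical batch of 8 RGB images of size 64x64.
images = tf.placeholder(tf.float32, [8, 64, 64, 3])

# Depthwise 3x3 convolution followed by a pointwise convolution that
# mixes the channels into 32 output channels.
net = tf.contrib.layers.separable_convolution2d(
    images,
    num_outputs=32,
    kernel_size=[3, 3],
    depth_multiplier=1,
    stride=1,
    padding='SAME',
    scope='sep_conv1')

# Passing num_outputs=None skips the pointwise step, leaving a purely
# depthwise convolution.
depthwise_only = tf.contrib.layers.separable_convolution2d(
    images, num_outputs=None, kernel_size=[3, 3], depth_multiplier=1)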

tf.contrib.layers.safe_embedding_lookup_sparse()

tf.contrib.layers.safe_embedding_lookup_sparse(embedding_weights, sparse_ids, sparse_weights=None, combiner=None, default_id=None, name=None, partition_strategy='div') Lookup embedding results, accounting for invalid IDs and empty features. The partitioned embeddings in embedding_weights must all be the same shape except for the first dimension. The first dimension is allowed to vary as the vocabulary size is not necessarily a multiple of P. Invalid IDs (< 0) are pruned from input IDs and weights. For an entry with no features, the embedding vector for default_id is returned, or the 0-vector if default_id is not supplied.
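
A small sketch of how the lookup behaves, assuming a toy single-shard embedding table; the vocabulary size, embedding dimension, and ids are illustrative:

import tensorflow as tf

# Toy vocabulary of 10 ids with 4-dimensional embeddings (one unpartitioned shard).
embedding_weights = [tf.get_variable('embeddings', shape=[10, 4])]

# Sparse batch of ids; -1 is an invalid id and the third row is empty.
sparse_ids = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([3, -1, 7], dtype=tf.int64),
    dense_shape=[3, 2])

# Invalid ids are pruned, and the empty third row falls back to default_id.
embedded = tf.contrib.layers.safe_embedding_lookup_sparse(
    embedding_weights, sparse_ids, combiner='mean', default_id=0)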

tf.contrib.layers.repeat()

tf.contrib.layers.repeat(inputs, repetitions, layer, *args, **kwargs) Applies the same layer with the same arguments repeatedly.

y = repeat(x, 3, conv2d, 64, [3, 3], scope='conv1')
# It is equivalent to:
x = conv2d(x, 64, [3, 3], scope='conv1/conv1_1')
x = conv2d(x, 64, [3, 3], scope='conv1/conv1_2')
y = conv2d(x, 64, [3, 3], scope='conv1/conv1_3')

If the scope argument is not given in kwargs, it is set to layer.__name__, or layer.func.__name__ (for functools.partial objects). If neither __name__ nor func.__name__ is available, the layers are called with a default scope name.

tf.contrib.layers.optimize_loss()

tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None) Given loss and parameters for optimizer, returns a training op. Various ways of passing optimizers include:
- string: name of the optimizer, like 'SGD' or 'Adam'; see OPTIMIZER_CLS_NAMES for the full list. E.g. optimize_loss(..., optimizer='Adam').
- function: takes a learning rate Tensor as argument and returns an Optimizer instance, e.g. optimize_loss(..., optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5)).
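
A minimal sketch of the string-name form (not from the original docs), assuming a TF 1.x graph; the toy model, learning rate, and clip value are illustrative:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])

predictions = tf.contrib.layers.fully_connected(x, 1, activation_fn=None)
loss = tf.losses.mean_squared_error(y, predictions)
global_step = tf.train.get_or_create_global_step()

# Optimizer given by name; clip_gradients and learning_rate_decay_fn are optional.
train_op = tf.contrib.layers.optimize_loss(
    loss,
    global_step=global_step,
    learning_rate=0.01,
    optimizer='Adam',
    clip_gradients=5.0)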

tf.contrib.layers.one_hot_encoding()

tf.contrib.layers.one_hot_encoding(*args, **kwargs) Transform numeric labels into onehot_labels using tf.one_hot.
Args:
  labels: [batch_size] target labels.
  num_classes: total number of classes.
  on_value: A scalar defining the on-value.
  off_value: A scalar defining the off-value.
  outputs_collections: collection to add the outputs.
  scope: Optional scope for name_scope.
Returns:
  one hot encoding of the labels.
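
A short sketch of the transformation, assuming a TF 1.x session; the label values are illustrative:

import tensorflow as tf

labels = tf.constant([0, 2, 1])  # [batch_size] integer class ids
one_hot = tf.contrib.layers.one_hot_encoding(labels, num_classes=3)

with tf.Session() as sess:
    print(sess.run(one_hot))
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]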

tf.contrib.layers.max_pool2d()

tf.contrib.layers.max_pool2d(*args, **kwargs) Adds a 2D Max Pooling op. It is assumed that the pooling is done per image but not in batch or channels.
Args:
  inputs: A Tensor of size [batch_size, height, width, channels].
  kernel_size: A list of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  stride: A list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
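
A minimal sketch (not from the original docs), assuming a TF 1.x graph; the feature-map shape is illustrative:

import tensorflow as tf

feature_maps = tf.placeholder(tf.float32, [None, 28, 28, 16])

# 2x2 max pooling with stride 2 halves the spatial dimensions: 28x28 -> 14x14.
pooled = tf.contrib.layers.max_pool2d(feature_maps, kernel_size=[2, 2], stride=2)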

tf.contrib.layers.layer_norm()

tf.contrib.layers.layer_norm(*args, **kwargs) Adds a Layer Normalization layer from https://arxiv.org/abs/1607.06450 ("Layer Normalization", Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton). Can be used as a normalizer function for conv2d and fully_connected.
Args:
  inputs: a tensor with 2 or more dimensions. The normalization occurs over all but the first dimension.
  center: If True, subtract beta. If False, beta is ignored.
  scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
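
A brief sketch of both usage styles (not from the original docs), assuming a TF 1.x graph; the shapes and layer sizes are illustrative:

import tensorflow as tf

# Hypothetical hidden activations.
hidden = tf.placeholder(tf.float32, [None, 128])

# Standalone use: normalizes over all but the first (batch) dimension.
normalized = tf.contrib.layers.layer_norm(hidden)

# Or as a normalizer_fn for another contrib layer.
net = tf.contrib.layers.fully_connected(
    hidden, 64, normalizer_fn=tf.contrib.layers.layer_norm)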

tf.contrib.layers.l2_regularizer()

tf.contrib.layers.l2_regularizer(scale, scope=None) Returns a function that can be used to apply L2 regularization to weights. Small values of L2 can help prevent overfitting the training data.
Args:
  scale: A scalar multiplier Tensor. 0.0 disables the regularizer.
  scope: An optional scope name.
Returns:
  A function with signature l2(weights) that applies L2 regularization.
Raises:
  ValueError: If scale is negative or if scale is not a float.
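
A minimal sketch of calling the returned function directly (not from the original docs); the variable shape and scale value are illustrative:

import tensorflow as tf

weights = tf.get_variable('w', shape=[10, 10])

# scale=0.0 would disable the regularizer; scale must be a non-negative float.
l2 = tf.contrib.layers.l2_regularizer(scale=0.1)
penalty = l2(weights)  # scalar Tensor: scale * tf.nn.l2_loss(weights)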

tf.contrib.layers.l1_regularizer()

tf.contrib.layers.l1_regularizer(scale, scope=None) Returns a function that can be used to apply L1 regularization to weights. L1 regularization encourages sparsity.
Args:
  scale: A scalar multiplier Tensor. 0.0 disables the regularizer.
  scope: An optional scope name.
Returns:
  A function with signature l1(weights) that applies L1 regularization.
Raises:
  ValueError: If scale is negative or if scale is not a float.
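
In practice both regularizers are more often attached to a layer's weights than called by hand. A sketch of that pattern (not from the original docs), assuming a TF 1.x graph; the layer sizes and scale values are illustrative:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 64])

# The penalties are collected in tf.GraphKeys.REGULARIZATION_LOSSES automatically.
net = tf.contrib.layers.fully_connected(
    inputs, 32,
    weights_regularizer=tf.contrib.layers.l1_regularizer(scale=0.001))
net = tf.contrib.layers.fully_connected(
    net, 10,
    weights_regularizer=tf.contrib.layers.l2_regularizer(scale=0.001))

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_regularization = tf.add_n(reg_losses)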