tf.nn.rnn_cell.DropoutWrapper.zero_state()

tf.nn.rnn_cell.DropoutWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.
  If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
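
A minimal sketch of how zero_state is typically called; the DropoutWrapper and BasicLSTMCell constructor arguments below are assumptions drawn from the surrounding entries, not from this one:

    import tensorflow as tf

    # Wrap a BasicLSTMCell in DropoutWrapper, then ask the wrapped cell
    # for its zero-filled initial state.
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)
    cell = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=0.5)

    # One zero-filled tensor per component of state_size, each of shape
    # [batch_size, s].
    initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)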

tf.nn.rnn_cell.BasicLSTMCell

class tf.nn.rnn_cell.BasicLSTMCell

Basic LSTM recurrent network cell.

The implementation is based on http://arxiv.org/abs/1409.2329. We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training. This cell does not support cell clipping or a projection layer, and does not use peephole connections: it is the basic baseline. For advanced models, please use the full LSTMCell that follows.
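
A minimal usage sketch; tf.nn.dynamic_rnn is assumed here and is not documented in this section:

    import tensorflow as tf

    # [batch, time, features] input; the LSTM has 128 units and the
    # default forget_bias of 1.0 described above.
    inputs = tf.placeholder(tf.float32, [None, 20, 50])
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128, forget_bias=1.0)
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)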

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.batch_shape()

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.batch_shape(name='batch_shape')

Shape of a single sample from a single event index as a 1-D Tensor.

The product of the dimensions of the batch_shape is the number of independent distributions of this kind the instance represents.

Args:
  name: name to give to the op.

Returns:
  batch_shape: Tensor.
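
A hedged example; the constructor arguments df, mu, and sigma are assumed from the underlying StudentT distribution and are not documented in this entry:

    import tensorflow as tf

    # A batch of two independent Student's t distributions.
    dist = tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma(
        df=[2.0, 3.0], mu=[0.0, 1.0], sigma=[1.0, 2.0])

    # batch_shape() is a 1-D Tensor; here it evaluates to [2], the number
    # of independent distributions the instance represents.
    shape = dist.batch_shape()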

tf.image.decode_png()

tf.image.decode_png(contents, channels=None, dtype=None, name=None)

Decode a PNG-encoded image to a uint8 or uint16 tensor.

The attr channels indicates the desired number of color channels for the decoded image. Accepted values are:
  0: Use the number of channels in the PNG-encoded image.
  1: output a grayscale image.
  3: output an RGB image.
  4: output an RGBA image.
If needed, the PNG-encoded image is transformed to match the requested number of color channels.

Args:
  contents: A Tensor of type string; the PNG-encoded image.
  channels: An optional int; the desired number of color channels, as described above.
  dtype: The data type of the decoded image (uint8 or uint16).
  name: A name for the operation (optional).
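
For example (tf.read_file and the file name below are illustrative assumptions):

    import tensorflow as tf

    # Read raw PNG bytes and decode them to a 3-channel (RGB) uint8 image.
    raw = tf.read_file('example.png')
    image = tf.image.decode_png(raw, channels=3)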

tf.contrib.learn.run_feeds()

tf.contrib.learn.run_feeds(*args, **kwargs) See run_feeds_iter(). Returns a list instead of an iterator.

tf.SparseTensor.from_value()

tf.SparseTensor.from_value(cls, sparse_tensor_value)
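
No description is given above; as a hedged sketch, from_value builds a SparseTensor from a SparseTensorValue:

    import tensorflow as tf

    # A SparseTensorValue holds indices, values, and shape as plain data;
    # from_value turns it into a graph-level SparseTensor.
    value = tf.SparseTensorValue([[0, 0], [1, 2]], [1.0, 2.0], [3, 4])
    st = tf.SparseTensor.from_value(value)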

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.prob()

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.prob(value, name='prob')

Probability density/mass function (depending on is_continuous).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
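
A hedged example; the constructor arguments alpha and beta are assumed from the class name, not from this entry:

    import tensorflow as tf

    # A batch of two Gamma distributions; per the class name, softplus is
    # applied to the parameters internally.
    dist = tf.contrib.distributions.GammaWithSoftplusAlphaBeta(
        alpha=[1.0, 2.0], beta=[0.5, 1.0])

    # prob() evaluates the density at `value`; the result has shape
    # sample_shape(x) + batch_shape, here [2].
    p = dist.prob([1.0, 3.0])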

tf.reduce_max()

tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1. If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
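
A short sketch of the reduction semantics described above:

    import tensorflow as tf

    x = tf.constant([[1., 4.], [3., 2.]])
    tf.reduce_max(x)                                          # 4.0
    tf.reduce_max(x, reduction_indices=[1])                   # [4., 3.]
    tf.reduce_max(x, reduction_indices=[1], keep_dims=True)   # [[4.], [3.]]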

tf.reduce_join()

tf.reduce_join(inputs, reduction_indices, keep_dims=None, separator=None, name=None)

Joins a string Tensor across the given dimensions.

Computes the string join across dimensions in the given string Tensor of shape [d_0, d_1, ..., d_n-1]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1. Passing an empty reduction_indices joins all strings in linear index order.
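
A short sketch of the join semantics described above:

    import tensorflow as tf

    a = tf.constant([["a", "b"], ["c", "d"]])
    tf.reduce_join(a, 0)                   # ["ac", "bd"]
    tf.reduce_join(a, 1)                   # ["ab", "cd"]
    tf.reduce_join(a, 1, separator=", ")   # ["a, b", "c, d"]
    tf.reduce_join(a, [0, 1])              # "acbd"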

tf.contrib.distributions.Bernoulli.param_shapes()

tf.contrib.distributions.Bernoulli.param_shapes(cls, sample_shape, name='DistributionParamShapes')

Shapes of parameters given the desired shape of a call to sample().

Subclasses should override static method _param_shapes.

Args:
  sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  name: name to prepend ops with.

Returns:
  dict of parameter name to Tensor shapes.
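
A hedged example; the keys of the returned dict (e.g. 'logits') depend on the distribution and are not documented in this entry:

    import tensorflow as tf

    # Ask Bernoulli what parameter shapes would yield samples of shape [32, 10].
    shapes = tf.contrib.distributions.Bernoulli.param_shapes([32, 10])
    # `shapes` is a dict mapping parameter names to shape Tensors.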