tf.contrib.rnn.GridLSTMCell

class tf.contrib.rnn.GridLSTMCell

Grid Long Short-Term Memory (LSTM) recurrent network cell. The default implementation is based on: Nal Kalchbrenner, Ivo Danihelka and Alex Graves, "Grid Long Short-Term Memory," Proc. ICLR 2016. http://arxiv.org/abs/1507.01526. When peephole connections are used, the implementation is based on: Tara N. Sainath and Bo Li, "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures for LVCSR Tasks," submitted to INTERSPEECH, 2016. The code uses optional peephole connections.

tf.contrib.distributions.InverseGamma

class tf.contrib.distributions.InverseGamma

The InverseGamma distribution with parameters alpha and beta. The parameters are the shape and inverse scale parameters alpha, beta.

The PDF of this distribution is:

pdf(x) = (beta^alpha / Gamma(alpha)) * x^(-alpha - 1) * e^(-beta / x), x > 0

and the CDF of this distribution is:

cdf(x) = GammaInc(alpha, beta / x) / Gamma(alpha), x > 0

where GammaInc is the upper incomplete Gamma function.

Examples:

dist = InverseGamma(alpha=3.0, beta=2.0)
dist2 = InverseGamma(alpha=[3.0, 4.0], beta=[2.0, 3.0])
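As a sanity check on the PDF formula above, the density can be written out in plain Python, independent of TensorFlow (the helper name inverse_gamma_pdf is ours, not part of the API), and numerically integrated to confirm it is normalized:

```python
import math

def inverse_gamma_pdf(x, alpha, beta):
    # pdf(x) = beta^alpha / Gamma(alpha) * x^(-alpha - 1) * exp(-beta / x)
    if x <= 0.0:
        return 0.0
    return (beta ** alpha / math.gamma(alpha)) * x ** (-alpha - 1.0) * math.exp(-beta / x)

alpha, beta = 3.0, 2.0

# Trapezoidal integration over (0, 200]; the total mass should be very close to 1.
step = 1e-3
xs = [i * step for i in range(1, 200001)]
ys = [inverse_gamma_pdf(x, alpha, beta) for x in xs]
mass = sum(0.5 * (ys[i] + ys[i + 1]) * step for i in range(len(ys) - 1))
```

For alpha=3.0, beta=2.0 the mode sits at beta / (alpha + 1) = 0.5 and almost all mass lies well inside the integration window, so `mass` comes out within rounding of 1.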

tf.QueueBase.shapes

tf.QueueBase.shapes The list of shapes for each component of a queue element.

tf.nn.rnn_cell.InputProjectionWrapper.__init__()

tf.nn.rnn_cell.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)

Create a cell with input projection.

Args:
  cell: an RNNCell; a projection of the inputs is added before it.
  num_proj: Python integer. The dimension to project to.
  input_size: Deprecated and unused.

Raises:
  TypeError: if cell is not an RNNCell.
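The wrapper's effect can be sketched without TensorFlow: before the wrapped cell runs, the input is multiplied by a projection matrix of shape [input_dim, num_proj], so the cell only ever sees num_proj features. A minimal pure-Python sketch, where matmul and toy_cell are our stand-ins and not part of the API:

```python
import math

def matmul(x, w):
    # Multiply a [batch, in_dim] list-of-lists by an [in_dim, num_proj] matrix.
    return [[sum(row[k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for row in x]

def toy_cell(inputs, state):
    # Stand-in for an RNNCell: elementwise additive update with tanh output.
    new_state = [[s + i for s, i in zip(srow, irow)]
                 for srow, irow in zip(state, inputs)]
    output = [[math.tanh(v) for v in row] for row in new_state]
    return output, new_state

# The cell expects num_proj = 3 features; raw inputs have 5, so project first.
w = [[0.1] * 3 for _ in range(5)]        # [input_dim=5, num_proj=3]
x = [[1.0, 2.0, 3.0, 4.0, 5.0]]          # batch of one
state = [[0.0, 0.0, 0.0]]

projected = matmul(x, w)                 # shape [1, 3]
output, new_state = toy_cell(projected, state)
```

In the real wrapper the projection weights are a trained variable; here they are fixed only to make the shape bookkeeping visible.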

tf.segment_prod()

tf.segment_prod(data, segment_ids, name=None)

Computes the product along segments of a tensor. Read the section on Segmentation for an explanation of segments. Computes a tensor such that \(output_i = \prod_j data_j\), where the product is over all j such that segment_ids[j] == i.

Args:
  data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  segment_ids: A Tensor. Must be one of the following types: int32, int64.
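The segment semantics can be illustrated with a few lines of plain Python (a sketch of the 1-D case only; the helper name segment_prod is reused here for clarity but this is not the TensorFlow implementation):

```python
def segment_prod(data, segment_ids):
    # output[i] = product of data[j] over all j with segment_ids[j] == i.
    # Assumes segment_ids are sorted, non-negative integers, as tf.segment_prod requires.
    num_segments = segment_ids[-1] + 1
    out = [1] * num_segments
    for value, seg in zip(data, segment_ids):
        out[seg] *= value
    return out

result = segment_prod([1, 2, 3, 4, 6], [0, 0, 1, 1, 2])
# segment 0 -> 1*2, segment 1 -> 3*4, segment 2 -> 6
```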

tf.contrib.distributions.Laplace.param_static_shapes()

tf.contrib.distributions.Laplace.param_static_shapes(cls, sample_shape)

param_shapes with static (i.e. TensorShape) shapes.

Args:
  sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:
  dict of parameter name to TensorShape.

Raises:
  ValueError: if sample_shape is a TensorShape and is not fully defined.
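For a distribution whose parameters are scalars per event, the returned mapping simply gives every parameter the desired sample shape. A hypothetical sketch of that idea in plain Python, using tuples in place of TensorShape and assuming two scalar parameters named "loc" and "scale":

```python
def param_static_shapes(param_names, sample_shape):
    # Each scalar parameter must broadcast to the desired shape of sample(),
    # so every parameter maps to the same static shape.
    return {name: tuple(sample_shape) for name in param_names}

shapes = param_static_shapes(["loc", "scale"], [2, 3])
```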

tf.sparse_reduce_sum_sparse()

tf.sparse_reduce_sum_sparse(sp_input, reduction_axes=None, keep_dims=False)

Computes the sum of elements across dimensions of a SparseTensor. This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a SparseTensor. Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.
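The sparse-in/sparse-out behavior, including the keep_dims distinction, can be sketched with a dict-based COO representation (a sketch only; the function name is reused for clarity, and entries absent from the dict are implicit zeros):

```python
def sparse_reduce_sum_sparse(entries, axis, keep_dims=False):
    # entries: dict mapping index tuples -> values (COO-style sparse tensor).
    out = {}
    for idx, value in entries.items():
        if keep_dims:
            # Reduced dimension is retained with length 1 (index pinned to 0).
            reduced = tuple(0 if d == axis else i for d, i in enumerate(idx))
        else:
            # Reduced dimension is dropped, lowering the rank by 1.
            reduced = tuple(i for d, i in enumerate(idx) if d != axis)
        out[reduced] = out.get(reduced, 0) + value
    return out

sp = {(0, 0): 1, (0, 2): 2, (1, 1): 5}           # a 2x3 sparse matrix
row_sums = sparse_reduce_sum_sparse(sp, axis=1)
row_sums_kd = sparse_reduce_sum_sparse(sp, axis=1, keep_dims=True)
```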

tf.contrib.distributions.NormalWithSoftplusSigma.name

tf.contrib.distributions.NormalWithSoftplusSigma.name Name prepended to all ops created by this Distribution.

tf.contrib.distributions.normal_congugates_known_sigma_predictive()

tf.contrib.distributions.normal_congugates_known_sigma_predictive(prior, sigma, s, n)

Posterior predictive Normal distribution with conjugate prior on the mean. This model assumes that n observations (with sum s) come from a Normal with unknown mean mu (described by the Normal prior) and known variance sigma^2. The "known sigma predictive" is the distribution of new observations, conditioned on the existing observations and our prior. Accepts a prior Normal distribution object, having parameters mu_0 and sigma_0.
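The underlying math is the standard conjugate result for a Normal likelihood with known variance: the posterior over mu is Normal, and the predictive adds back the observation noise. A plain-Python sketch of that computation (the helper name known_sigma_predictive is ours, not the API):

```python
def known_sigma_predictive(mu0, sigma0, sigma, s, n):
    # Posterior over mu given a Normal(mu0, sigma0^2) prior, known sigma,
    # and n observations with sum s (standard conjugate update).
    precision = 1.0 / sigma0 ** 2 + n / sigma ** 2
    post_mean = (mu0 / sigma0 ** 2 + s / sigma ** 2) / precision
    post_var = 1.0 / precision
    # Predictive distribution of a new observation: add the known noise variance.
    return post_mean, post_var + sigma ** 2

mean, var = known_sigma_predictive(mu0=0.0, sigma0=1.0, sigma=1.0, s=2.0, n=1)
```

With one observation equal to 2 and a standard Normal prior, the posterior mean splits the difference at 1.0, and the predictive variance is the posterior variance 0.5 plus the noise variance 1.0.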

tf.contrib.graph_editor.ph()

tf.contrib.graph_editor.ph(dtype, shape=None, scope=None)

Create a tf.placeholder for the Graph Editor. Note that the correct graph scope must be set by the calling function. The placeholder is named using the function placeholder_name (with no tensor argument).

Args:
  dtype: the tensor type.
  shape: the tensor shape (optional).
  scope: absolute scope within which to create the placeholder. None means that the scope of t is preserved. "" means the root scope.

Returns:
  A newly created tf.placeholder.