tf.contrib.distributions.Categorical.survival_function()

tf.contrib.distributions.Categorical.survival_function(value, name='survival_function') Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x] = 1 - P[X <= x] = 1 - cdf(x). Args: value: float or double Tensor. name: The name to give this op. Returns: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
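A minimal usage sketch, assuming the TF 1.x-era contrib API with Session-based evaluation; the logits and the query value below are illustrative only:

```python
import tensorflow as tf

# Three-class categorical; the logits are illustrative only.
dist = tf.contrib.distributions.Categorical(logits=[0.5, 1.0, 1.5])

# P[X > 1] = 1 - cdf(1), per the definition above.
sf = dist.survival_function(1.0)

with tf.Session() as sess:
    print(sess.run(sf))
```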

tf.contrib.distributions.Poisson.log_pmf()

tf.contrib.distributions.Poisson.log_pmf(value, name='log_pmf') Log probability mass function. Args: value: float or double Tensor. name: The name to give this op. Returns: log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. Raises: TypeError: if is_continuous.
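A hedged sketch of evaluating log_pmf on a Poisson; the rate argument name (lam) is an assumption that changed across contrib releases (later versions use rate):

```python
import tensorflow as tf

# The constructor argument name lam is an assumption for this release;
# later contrib versions call it rate.
dist = tf.contrib.distributions.Poisson(lam=3.0)

# log P[X = 2] for an illustrative query value.
lp = dist.log_pmf(2.0)

with tf.Session() as sess:
    print(sess.run(lp))
```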

tf.nn.rnn_cell.LSTMCell

class tf.nn.rnn_cell.LSTMCell Long short-term memory unit (LSTM) recurrent network cell. The default non-peephole implementation is based on: http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997. The peephole implementation is based on: https://research.google.com/pubs/archive/43905.pdf Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.
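A short sketch of wiring an LSTMCell into tf.nn.dynamic_rnn, assuming the TF 1.x-era tf.nn.rnn_cell API; the shapes and unit count are illustrative:

```python
import tensorflow as tf

# [batch, time, features] input; shapes are illustrative only.
inputs = tf.placeholder(tf.float32, [None, 20, 32])

# use_peepholes=True selects the peephole variant described above.
cell = tf.nn.rnn_cell.LSTMCell(num_units=128, use_peepholes=True)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```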

tf.contrib.distributions.Chi2.parameters

tf.contrib.distributions.Chi2.parameters Dictionary of parameters used by this Distribution.
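A quick sketch of reading the parameters dictionary back from a constructed Chi2; the degrees-of-freedom value is illustrative:

```python
import tensorflow as tf

dist = tf.contrib.distributions.Chi2(df=5.0)

# The dict of constructor arguments used to build this Distribution;
# it should echo the df passed above.
print(dist.parameters)
```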

tf.contrib.distributions.Binomial.log_pdf()

tf.contrib.distributions.Binomial.log_pdf(value, name='log_pdf') Log probability density function. Args: value: float or double Tensor. name: The name to give this op. Returns: log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. Raises: TypeError: if not is_continuous.
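A hedged sketch: since Binomial is discrete, the Raises note above means log_pdf is expected to fail with TypeError, and log_pmf is the method normally used; the constructor argument names (n, p) are assumptions that changed in later releases (total_count, probs):

```python
import tensorflow as tf

# n / p argument names are an assumption for this contrib release.
dist = tf.contrib.distributions.Binomial(n=10.0, p=0.3)

try:
    lp = dist.log_pdf(4.0)   # expected to raise: Binomial is not continuous
except TypeError:
    lp = dist.log_pmf(4.0)   # the discrete counterpart

with tf.Session() as sess:
    print(sess.run(lp))
```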

tf.contrib.bayesflow.stochastic_tensor.LaplaceWithSoftplusScaleTensor.dtype

tf.contrib.bayesflow.stochastic_tensor.LaplaceWithSoftplusScaleTensor.dtype

tf.contrib.graph_editor.replace_t_with_placeholder_handler()

tf.contrib.graph_editor.replace_t_with_placeholder_handler(info, t) Transform a tensor into a placeholder tensor. This handler is typically used to transform a subgraph input tensor into a placeholder. Args: info: Transform._Info instance. t: tensor whose input must be transformed into a placeholder. Returns: The tensor generated by the newly created placeholder.
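A sketch of where this handler typically lives: it is plugged into a graph_editor Transformer so that external inputs of a copied subgraph become placeholders; the transform_external_input_handler attribute name is an assumption about this contrib release:

```python
from tensorflow.contrib import graph_editor as ge

# Configure a Transformer to replace external inputs with placeholders.
# The attribute name below is an assumption; the handler itself is rarely
# called directly with (info, t).
transformer = ge.Transformer()
transformer.transform_external_input_handler = (
    ge.replace_t_with_placeholder_handler)
```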

tf.contrib.training.NextQueuedSequenceBatch.state()

tf.contrib.training.NextQueuedSequenceBatch.state(state_name) Returns batched state tensors. Args: state_name: string, matches a key provided in initial_states. Returns: A Tensor: a batched set of states, either initial states (if this is the first run of the given example), or a value as stored during a previous iteration via save_state control flow. Its type is the same as initial_states["state_name"].dtype. If we had at input: initial_states[state_name].get_shape() == [d1, d2, ...], then state(state_name).get_shape() == [batch_size, d1, d2, ...].
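A minimal sketch of pairing state() with save_state() when training over segmented sequences; the key "lstm_state" and the surrounding batch_sequences_with_states pipeline are assumptions, and batch is a NextQueuedSequenceBatch:

```python
def step(batch, new_state):
    # Read the persisted per-example state (initial values on an example's
    # first segment); shape is [batch_size, d1, d2, ...].
    prev_state = batch.state("lstm_state")
    # ... compute new_state for this segment from prev_state ...
    # Persist it so the next segment of each sequence sees it.
    save_op = batch.save_state("lstm_state", new_state)
    return prev_state, save_op
```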

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.cdf()

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.cdf(value, name='cdf') Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x] Args: value: float or double Tensor. name: The name to give this op. Returns: cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
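A hedged sketch of evaluating cdf on this distribution; the constructor argument names (mu, diag_stdev) are assumptions for this contrib release, and a multivariate cdf may not actually be implemented, in which case this call raises NotImplementedError:

```python
import tensorflow as tf

# Argument names are assumptions; diag_stdev is passed through a softplus
# internally by this class.
dist = tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev(
    mu=[0.0, 0.0], diag_stdev=[1.0, 1.0])

c = dist.cdf([0.5, 0.5])  # P[X <= x] at an illustrative point

with tf.Session() as sess:
    print(sess.run(c))
```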

tf.contrib.distributions.Multinomial.name

tf.contrib.distributions.Multinomial.name Name prepended to all ops created by this Distribution.