tf.sparse_reset_shape()

tf.sparse_reset_shape(sp_input, new_shape=None) Resets the shape of a SparseTensor with indices and values unchanged. If new_shape is None, returns a copy of sp_input with its shape reset to the tight bounding box of sp_input. If new_shape is provided, it must be greater than or equal to the shape of sp_input in every dimension. When this condition is met, the returned SparseTensor has its shape set to new_shape and its indices and values unchanged from those of sp_input.
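The "tight bounding box" can be sketched directly: it is the elementwise maximum index plus one along every dimension. A minimal pure-Python illustration (not the TF implementation; `tight_bounding_box` is a hypothetical helper name):

```python
# Sketch (not the TF implementation): the "tight bounding box" of a
# SparseTensor's indices is simply max(index) + 1 along every dimension.
def tight_bounding_box(indices):
    """indices: list of [row, col, ...] coordinates of nonzero entries."""
    ndims = len(indices[0])
    return [max(idx[d] for idx in indices) + 1 for d in range(ndims)]

# Nonzero entries at (0, 1) and (3, 2) inside a declared 10x10 shape
# reset to a tight 4x3 shape:
indices = [[0, 1], [3, 2]]
print(tight_bounding_box(indices))  # [4, 3]
```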

tf.contrib.distributions.WishartFull.entropy()

tf.contrib.distributions.WishartFull.entropy(name='entropy') Shannon entropy in nats.
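"In nats" just means the entropy is computed with the natural logarithm rather than log base 2. The Wishart entropy itself is a differential entropy, but the unit is easiest to illustrate with a toy discrete distribution (a hedged sketch; `entropy_nats` is an illustrative name, not the TF API):

```python
import math

# Illustration of the unit "nats": Shannon entropy with the natural log.
def entropy_nats(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# A fair coin: ln(2) nats, approximately 0.693 (equivalently, exactly 1 bit).
print(entropy_nats([0.5, 0.5]))
```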

tf.contrib.distributions.WishartFull.pdf()

tf.contrib.distributions.WishartFull.pdf(value, name='pdf') Probability density function. Args: value: float or double Tensor. name: The name to give this op. Returns: prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. Raises: TypeError: if not is_continuous.

tf.contrib.distributions.MultivariateNormalFull.log_prob()

tf.contrib.distributions.MultivariateNormalFull.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous). Additional documentation from _MultivariateNormalOperatorPD: x is a batch vector with compatible shape if x is a Tensor whose shape can be broadcast up to either: self.batch_shape + self.event_shape or [M1,...,Mm] + self.batch_shape + self.event_shape Args: value: float or double Tensor. name: The name to give this op. Returns: log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
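The density that log_prob evaluates can be sketched for a single 2-dimensional event with no batching. This is a hedged pure-Python illustration of the multivariate normal log-density formula, not the TF implementation; `mvn2_log_prob` and its parameter names are invented for this example:

```python
import math

# log N(x; mu, Sigma) = -0.5 * (k*log(2*pi) + log|Sigma| + (x-mu)' Sigma^-1 (x-mu))
# for k = 2, with the 2x2 determinant and inverse written out by hand.
def mvn2_log_prob(x, mu, sigma):
    (a, b), (c, d) = sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    quad = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return -0.5 * (2 * math.log(2 * math.pi) + math.log(det) + quad)

# 2-D standard normal at the origin: -log(2*pi), approximately -1.8379
print(mvn2_log_prob([0.0, 0.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```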

tf.contrib.distributions.MultivariateNormalFull.sigma

tf.contrib.distributions.MultivariateNormalFull.sigma Dense (batch) covariance matrix, if available.

tf.igamma()

tf.igamma(a, x, name=None) Compute the lower regularized incomplete Gamma function P(a, x). The lower regularized incomplete Gamma function is defined as: P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x) where gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt is the lower incomplete Gamma function. Note, above Q(a, x) (Igammac) is the upper regularized incomplete Gamma function. Args: a: A Tensor. Must be one of the following types: float32, float64. x: A Tensor. Must have the same type as a. name: A name for the operation (optional). Returns: A Tensor. Has the same type as a.
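The definition above can be checked numerically via the standard power series gamma(a, x) = x^a e^{-x} * sum_{n>=0} x^n / (a (a+1) ... (a+n)). A minimal sketch assuming this series (it only illustrates the definition, not TF's kernel):

```python
import math

# Lower regularized incomplete Gamma P(a, x) via the power series for
# gamma(a, x), then divided by Gamma(a). Term n is term n-1 times x/(a+n).
def igamma(a, x, terms=200):
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    for n in range(1, terms):
        term *= x / (a + n)
        total += term
    return total * math.exp(a * math.log(x) - x) / math.gamma(a)

# Sanity check against the closed form P(1, x) = 1 - exp(-x):
print(igamma(1.0, 2.0), 1 - math.exp(-2.0))
```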

tf.contrib.distributions.Exponential.sample_n()

tf.contrib.distributions.Exponential.sample_n(n, seed=None, name='sample_n') Generate n samples. Additional documentation from Gamma: See the documentation for tf.random_gamma for more details. Args: n: Scalar Tensor of type int32 or int64, the number of observations to sample. seed: Python integer seed for RNG. name: The name to give this op. Returns: samples: a Tensor with a prepended dimension (n,). Raises: TypeError: if n is not an integer type.
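The semantics (a prepended (n,) dimension of independent draws) can be sketched for the Exponential case via inverse-CDF sampling, Z = -ln(U) / rate for U ~ Uniform(0, 1]. This is an illustrative pure-Python sketch, not TF's RNG:

```python
import math
import random

# Inverse-CDF sampling for Exponential(rate); the result has a prepended
# (n,) dimension, matching sample_n's contract.
def sample_n(n, rate, seed=None):
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so the log is always finite.
    return [-math.log(1 - rng.random()) / rate for _ in range(n)]

samples = sample_n(10000, rate=2.0, seed=0)
print(len(samples))                 # 10000: the prepended (n,) dimension
print(sum(samples) / len(samples))  # close to the mean 1 / rate = 0.5
```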

tf.contrib.distributions.Chi2WithAbsDf.get_batch_shape()

tf.contrib.distributions.Chi2WithAbsDf.get_batch_shape() Shape of a single sample from a single event index as a TensorShape. Same meaning as batch_shape. May be only partially defined. Returns: batch_shape: TensorShape, possibly unknown.

tensorflow::Tensor::SummarizeValue()

string tensorflow::Tensor::SummarizeValue(int64 max_entries) const Render the first max_entries values in *this into a string.

KL[q || p]

If log_p(z) = Log[p(z)] for distribution p, this Op approximates the negative Kullback-Leibler divergence: elbo_ratio(log_p, q, n=100) = -1 * KL[q || p], where KL[q || p] = E[ Log[q(Z)] - Log[p(Z)] ]. Note that if p is a Distribution, then distributions.kl(q, p) may be defined and available as an exact result.
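The Monte Carlo approximation KL[q || p] = E[ Log[q(Z)] - Log[p(Z)] ], and the "exact result" the note refers to, can be sketched for two 1-D Gaussians in pure Python. Function names here are illustrative, not the TF API:

```python
import math
import random

def normal_log_pdf(z, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (z - mu) ** 2 / (2 * sigma ** 2)

# Monte Carlo estimate of KL[q || p] = E_q[ log q(Z) - log p(Z) ],
# averaging over n samples drawn from q.
def kl_monte_carlo(mu_q, sigma_q, mu_p, sigma_p, n=100000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu_q, sigma_q)
        total += normal_log_pdf(z, mu_q, sigma_q) - normal_log_pdf(z, mu_p, sigma_p)
    return total / n

# Closed-form KL between two 1-D Gaussians (the exact result).
def kl_exact(mu_q, sigma_q, mu_p, sigma_p):
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2) - 0.5)

print(kl_monte_carlo(0.0, 1.0, 1.0, 2.0))  # approximately 0.443
print(kl_exact(0.0, 1.0, 1.0, 2.0))        # ln 2 + 2/8 - 1/2 = 0.4431...
```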