tf.contrib.distributions.Chi2.is_reparameterized

tf.contrib.distributions.MultivariateNormalFull.log_survival_function()

tf.contrib.distributions.MultivariateNormalFull.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than Log[1 - cdf(x)] when x >> 1.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
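To see why a dedicated log survival function beats computing Log[1 - cdf(x)] in the far tail, here is a minimal standalone sketch (stdlib only, no TensorFlow) using a standard univariate normal, chosen because math.erfc gives an accurate upper tail; the same floating-point issue motivates the TF method:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def normal_log_sf(x):
    """log P[X > x], computed directly from the upper tail rather than 1 - cdf."""
    return math.log(0.5 * math.erfc(x / math.sqrt(2.0)))

x = 10.0
naive_tail = 1.0 - normal_cdf(x)  # cdf(10) rounds to exactly 1.0, so this is 0.0
log_sf = normal_log_sf(x)         # finite, roughly -53.2

print(naive_tail)  # 0.0 -- taking log of this would fail
print(log_sf)
```

The naive route loses the tail entirely once cdf(x) rounds to 1.0 in float64, while the direct computation keeps full relative accuracy.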

tf.contrib.distributions.Exponential.entropy()

tf.contrib.distributions.Exponential.entropy(name='entropy')

Shannon entropy in nats.

Additional documentation from Gamma:

This is defined to be

entropy = alpha - log(beta) + log(Gamma(alpha)) + (1 - alpha) * digamma(alpha)

where digamma(alpha) is the digamma function.
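Since the Exponential is a Gamma with alpha = 1, the formula above simplifies nicely: lgamma(1) = 0 and the digamma term vanishes, leaving entropy = 1 - log(rate). A standalone sketch (my helper names, not the TF API), with a Monte Carlo sanity check that the entropy equals E[-log pdf(X)]:

```python
import math
import random

def exponential_entropy(rate):
    """Gamma entropy formula specialized to alpha = 1 (the Exponential case).

    entropy = alpha - log(beta) + lgamma(alpha) + (1 - alpha) * digamma(alpha)
    With alpha = 1, lgamma(1) = 0 and the digamma term is multiplied by 0,
    so this reduces to 1 - log(rate).
    """
    alpha, beta = 1.0, rate
    return alpha - math.log(beta) + math.lgamma(alpha)  # digamma term drops out

# Monte Carlo check: entropy is the expectation of -log pdf(X).
# For the Exponential, -log pdf(x) = rate * x - log(rate).
rng = random.Random(0)
rate = 2.0
samples = [rng.expovariate(rate) for _ in range(200_000)]
mc_entropy = sum(rate * x - math.log(rate) for x in samples) / len(samples)

print(exponential_entropy(rate))  # 1 - log(2), about 0.3069
print(mc_entropy)                 # agrees to about two decimal places
```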

tf.contrib.distributions.WishartFull.log_survival_function()

tf.contrib.distributions.WishartFull.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than Log[1 - cdf(x)] when x >> 1.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

tf.contrib.distributions.WishartFull.survival_function()

tf.contrib.distributions.WishartFull.survival_function(value, name='survival_function')

Survival function. Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
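The relation sf(x) = 1 - cdf(x) = P[X > x] is easy to check empirically. A standalone sketch using an Exponential (where the survival function has the simple closed form exp(-rate * x)), compared against the fraction of samples exceeding x:

```python
import math
import random

def exponential_sf(x, rate):
    """Survival function P[X > x] = 1 - cdf(x) = exp(-rate * x) for an Exponential."""
    return math.exp(-rate * x)

rng = random.Random(42)
rate, x = 1.5, 1.0
samples = [rng.expovariate(rate) for _ in range(100_000)]
empirical_sf = sum(s > x for s in samples) / len(samples)

print(exponential_sf(x, rate))  # exp(-1.5), about 0.2231
print(empirical_sf)             # close to the analytic value
```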

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.cdf()

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.cdf(value, name='cdf')

Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

tf.contrib.bayesflow.variational_inference.elbo()

tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO')

Evidence Lower BOund. log p(x) >= ELBO.

Optimization objective for inference of hidden variables by variational inference. This function is meant to be used in conjunction with DistributionTensor. The user should build out the inference network, using DistributionTensors as latent variables, and the generative network. elbo at minimum needs p(x|Z) and ass
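The bound log p(x) >= ELBO can be illustrated without TensorFlow in a conjugate normal-normal model where every quantity is closed form. This is a sketch with my own helper names (not the tf.contrib API); the key identity is that with the exact posterior as the variational distribution, the ELBO integrand equals log p(x) pointwise, so the bound is tight:

```python
import math

def log_normal_pdf(x, mean, var):
    """Log density of a univariate N(mean, var)."""
    return -0.5 * math.log(2.0 * math.pi * var) - (x - mean) ** 2 / (2.0 * var)

# Model: z ~ N(0, 1), x | z ~ N(z, 1). Then the marginal is p(x) = N(0, 2)
# and the exact posterior is p(z | x) = N(x / 2, 1 / 2).
x = 1.5
log_evidence = log_normal_pdf(x, 0.0, 2.0)

def elbo_term(z, q_mean, q_var):
    """ELBO integrand: log p(x|z) + log p(z) - log q(z)."""
    return (log_normal_pdf(x, z, 1.0)            # likelihood
            + log_normal_pdf(z, 0.0, 1.0)        # prior
            - log_normal_pdf(z, q_mean, q_var))  # variational posterior q

# With q equal to the exact posterior, Bayes' rule gives
# p(x|z) p(z) / p(z|x) = p(x), so the integrand equals log p(x) at every z.
for z in (-1.0, 0.0, 0.75, 2.0):
    assert abs(elbo_term(z, x / 2.0, 0.5) - log_evidence) < 1e-12

print(log_evidence)
```

With a mismatched q, the expectation of the integrand falls strictly below log p(x), short by exactly KL(q || p(z|x)); minimizing that gap is what variational inference does.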

tf.contrib.bayesflow.stochastic_tensor.NormalTensor.dtype

tf.contrib.distributions.Uniform.pmf()

tf.contrib.distributions.Uniform.pmf(value, name='pmf')

Probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.assert_equal()

tf.assert_equal(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x == y holds element-wise.

Example of adding a dependency to an operation:

with tf.control_dependencies([tf.assert_equal(x, y)]):
  output = tf.reduce_sum(x)

Example of adding a dependency to the tensor being checked:

x = tf.with_dependencies([tf.assert_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] == y[i]. If both x and y are empty, this is trivially satisfied.
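The element-wise-with-broadcast semantics can be sketched in plain Python without TensorFlow. This is an illustrative stand-in (assert_equal_elementwise is my name, not a TF op) covering only the simple scalar-vs-list broadcast case, including the "both empty" rule from the description above:

```python
def assert_equal_elementwise(x, y, message=""):
    """Raise ValueError unless x == y holds element-wise.

    Mimics the documented semantics for the scalar-vs-list case: a scalar or
    singleton broadcasts against a list, and two empty inputs pass trivially.
    """
    xs = x if isinstance(x, list) else [x]
    ys = y if isinstance(y, list) else [y]
    if not xs and not ys:       # both empty: trivially satisfied
        return
    if len(xs) == 1:
        xs = xs * len(ys)       # broadcast singleton against the other side
    if len(ys) == 1:
        ys = ys * len(xs)
    if len(xs) != len(ys):
        raise ValueError(f"Incompatible shapes. {message}")
    for i, (a, b) in enumerate(zip(xs, ys)):
        if a != b:
            raise ValueError(f"x[{i}] == {a} != y[{i}] == {b}. {message}")

assert_equal_elementwise([1, 2, 3], [1, 2, 3])  # passes silently
assert_equal_elementwise(5, [5, 5, 5])          # scalar broadcasts
assert_equal_elementwise([], [])                # trivially satisfied
try:
    assert_equal_elementwise([1, 2], [1, 3])
except ValueError as e:
    print(e)  # reports the first mismatching pair
```

The real op generalizes this to full NumPy-style broadcasting over tensor shapes and raises at graph execution time rather than immediately.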