tf.contrib.distributions.NormalWithSoftplusSigma.log_prob()

tf.contrib.distributions.NormalWithSoftplusSigma.log_prob(value, name='log_prob')

Log probability density/mass function (depending on is_continuous).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
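NormalWithSoftplusSigma keeps the scale parameter positive by passing it through a softplus. A pure-Python sketch of the log density such a distribution evaluates (the function names here are illustrative, not TensorFlow API):

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + e^x), keeps the scale positive.
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def normal_log_prob(value, mu, raw_sigma):
    # Log density of N(mu, sigma^2) with sigma = softplus(raw_sigma),
    # mirroring the NormalWithSoftplusSigma parameterization.
    sigma = softplus(raw_sigma)
    z = (value - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)
```

With raw_sigma = log(e - 1), softplus yields exactly 1, so normal_log_prob(0.0, 0.0, ...) reduces to the standard-normal log density at zero, -0.5*log(2*pi).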

tf.contrib.distributions.MultivariateNormalDiag.log_survival_function()

tf.contrib.distributions.MultivariateNormalDiag.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
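The accuracy claim above is easy to demonstrate for a scalar standard normal: once cdf(x) rounds to 1.0 in float64, log(1 - cdf(x)) is useless, while working directly with the survival function stays accurate. A minimal sketch (helper names are illustrative, not TensorFlow API):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def naive_log_sf(x):
    # log(1 - cdf(x)): loses all precision once cdf(x) rounds to 1.0
    # in float64 (roughly x > 8), where it degenerates to log(0).
    return math.log(1.0 - normal_cdf(x))

def stable_log_sf(x):
    # Work directly with the survival function P[X > x] = 0.5 * erfc(x / sqrt(2)).
    return math.log(0.5 * math.erfc(x / math.sqrt(2.0)))
```

At x = 10, normal_cdf(10.0) is exactly 1.0 in float64, so naive_log_sf raises a domain error, whereas stable_log_sf returns about -53.2, the true log tail probability.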

tf.contrib.distributions.BernoulliWithSigmoidP.mode()

tf.contrib.distributions.BernoulliWithSigmoidP.mode(name='mode')

Mode.

Additional documentation from Bernoulli:

Returns 1 if p > 1 - p and 0 otherwise.
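Since BernoulliWithSigmoidP parameterizes p = sigmoid(logits), the condition p > 1 - p is equivalent to p > 0.5, i.e. logits > 0. A small sketch of that rule (illustrative helpers, not TensorFlow API):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bernoulli_mode(logits):
    # Mode is 1 when p > 1 - p, i.e. p > 0.5; with p = sigmoid(logits)
    # this reduces to checking whether the logits are positive.
    p = sigmoid(logits)
    return 1 if p > 1.0 - p else 0
```

Note the tie p == 0.5 falls into the "otherwise" branch and returns 0, matching the documented rule.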

tf.contrib.bayesflow.stochastic_tensor.MixtureTensor.mean()

tf.contrib.bayesflow.stochastic_tensor.MixtureTensor.mean(name='mean')

tf.contrib.distributions.TransformedDistribution.log_pmf()

tf.contrib.distributions.TransformedDistribution.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.contrib.distributions.Normal.get_batch_shape()

tf.contrib.distributions.Normal.get_batch_shape()

Shape of a single sample from a single event index as a TensorShape. Same meaning as batch_shape. May be only partially defined.

Returns:
  batch_shape: TensorShape, possibly unknown.
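A distribution's batch shape comes from broadcasting its parameter shapes together, e.g. Normal(mu, sigma) has batch shape broadcast(mu.shape, sigma.shape). A minimal sketch of that broadcast rule in plain Python (hypothetical helper, not part of TensorFlow):

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    # Broadcast two shapes right-to-left, NumPy-style: each pair of
    # dimensions must be equal or one of them must be 1.
    dims = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != 1 and y != 1 and x != y:
            raise ValueError("incompatible shapes %r and %r" % (a, b))
        dims.append(max(x, y))
    return tuple(reversed(dims))
```

For example, a Normal with mu of shape (3, 1) and sigma of shape (4,) would have batch shape (3, 4).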

tf.contrib.learn.DNNRegressor.linear_weights_

tf.contrib.learn.DNNRegressor.linear_weights_ Returns weights per feature of the linear part.

tf.contrib.distributions.RegisterKL

class tf.contrib.distributions.RegisterKL

Decorator to register a KL divergence implementation function.

Usage:

  @distributions.RegisterKL(distributions.Normal, distributions.Normal)
  def _kl_normal_mvn(norm_a, norm_b):
    # Return KL(norm_a || norm_b)

tf.contrib.distributions.ExponentialWithSoftplusLam.__init__()

tf.contrib.distributions.ExponentialWithSoftplusLam.__init__(lam, validate_args=False, allow_nan_stats=True, name='ExponentialWithSoftplusLam')

tf.contrib.distributions.Exponential.parameters

tf.contrib.distributions.Exponential.parameters Dictionary of parameters used by this Distribution.