tf.contrib.distributions.MultivariateNormalFull.get_event_shape()

tf.contrib.distributions.MultivariateNormalFull.get_event_shape() Shape of a single sample from a single batch as a TensorShape. Same meaning as event_shape. May be only partially defined.

Returns:

event_shape: TensorShape, possibly unknown.
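A minimal sketch of how the event shape reads out, assuming the contrib-era constructor MultivariateNormalFull(mu, sigma):

    import tensorflow as tf

    ds = tf.contrib.distributions

    # A single 2-dimensional normal: batch_shape is [], event_shape is [2].
    mvn = ds.MultivariateNormalFull(
        mu=tf.zeros([2]),
        sigma=tf.constant([[1.0, 0.0], [0.0, 1.0]]))

    print(mvn.get_event_shape())  # TensorShape([Dimension(2)])

Because mu and sigma have fully known static shapes here, the returned TensorShape is fully defined; built from a placeholder of unknown size, it could be only partially defined.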

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.std()

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.std(name='std') Standard deviation.

tf.contrib.distributions.QuantizedDistribution.entropy()

tf.contrib.distributions.QuantizedDistribution.entropy(name='entropy') Shannon entropy in nats.

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.log_prob()

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous).

Additional documentation from _MultivariateNormalOperatorPD:

x is a batch vector with compatible shape if x is a Tensor whose shape can be broadcast up to either:

self.batch_shape + self.event_shape

or

[M1,...,Mm] + self.batch_shape + self.event_shape

Args:

value: float or double Tensor.
name: The name to give this op.

Returns:

log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
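The broadcasting rule above is easiest to see with concrete shapes; a sketch, assuming the contrib-era constructor takes mu and diag_stdev:

    import tensorflow as tf

    ds = tf.contrib.distributions

    # batch_shape [3], event_shape [2].
    mvn = ds.MultivariateNormalDiagWithSoftplusStDev(
        mu=tf.zeros([3, 2]),
        diag_stdev=tf.ones([3, 2]))

    # value of shape [M1, ..., Mm] + batch_shape + event_shape, here [5, 3, 2];
    # the resulting log_prob then has shape [5, 3].
    x = tf.zeros([5, 3, 2])
    lp = mvn.log_prob(x)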

tf.contrib.distributions.StudentT.variance()

tf.contrib.distributions.StudentT.variance(name='variance') Variance.

Additional documentation from StudentT:

The variance for Student's T equals:

df / (df - 2), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1
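A sketch covering the three regimes in one batched distribution, assuming the contrib-era constructor arguments df, mu, sigma:

    import tensorflow as tf

    ds = tf.contrib.distributions

    # One component per regime: df > 2, 1 < df <= 2, df <= 1.
    t = ds.StudentT(df=[3.0, 1.5, 0.5],
                    mu=[0.0, 0.0, 0.0],
                    sigma=[1.0, 1.0, 1.0])

    var = t.variance()  # [3.0 (= df / (df - 2)), inf, nan]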

tf.contrib.bayesflow.stochastic_tensor.GammaTensor.value()

tf.contrib.bayesflow.stochastic_tensor.GammaTensor.value(name='value')
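No docstring survives for this entry; value() on a StochasticTensor returns the Tensor fixed at construction time according to the value type then in scope. A hedged sketch, assuming the contrib-era st.value_type / st.MeanValue API and Gamma parameters alpha and beta:

    import tensorflow as tf

    st = tf.contrib.bayesflow.stochastic_tensor

    # Under MeanValue, value() is the distribution's mean rather than a sample.
    with st.value_type(st.MeanValue()):
        g = st.GammaTensor(alpha=tf.constant(2.0), beta=tf.constant(3.0))

    v = g.value()  # Tensor holding alpha / beta = 2/3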

tf.QueueBase.names

tf.QueueBase.names The list of names for each component of a queue element.
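For example, a queue built with named components reports them back through this property (names=... is part of the FIFOQueue constructor):

    import tensorflow as tf

    # Elements are dictionaries keyed by component name.
    q = tf.FIFOQueue(capacity=10,
                     dtypes=[tf.float32, tf.int32],
                     names=['feature', 'label'])

    print(q.names)  # ['feature', 'label']

Queues built without names have no component names, and their dequeue ops yield plain tensors rather than dictionaries.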

tf.contrib.rnn.LSTMBlockCell

class tf.contrib.rnn.LSTMBlockCell Basic LSTM recurrent network cell. The implementation is based on: http://arxiv.org/abs/1409.2329. We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training. Unlike BasicLSTMCell, this is a monolithic op and should be much faster. The weight and bias matrices should be compatible as long as the variable scope matches.
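A usage sketch; the cell plugs into the standard RNN drivers exactly like BasicLSTMCell:

    import tensorflow as tf

    # Fused LSTM cell; forget_bias defaults to 1.0 as described above.
    cell = tf.contrib.rnn.LSTMBlockCell(num_units=128, forget_bias=1.0)

    inputs = tf.placeholder(tf.float32, [None, 20, 32])  # [batch, time, depth]
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)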

tf.contrib.bayesflow.stochastic_tensor.BernoulliWithSigmoidPTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.BernoulliWithSigmoidPTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)
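A construction sketch; **dist_args are forwarded to the underlying BernoulliWithSigmoidP distribution, whose p argument (an assumption here) is passed through a sigmoid:

    import tensorflow as tf

    st = tf.contrib.bayesflow.stochastic_tensor

    # p = 0 maps to sigmoid(0) = 0.5; with no value type in scope,
    # value() defaults to a sample from the distribution.
    b = st.BernoulliWithSigmoidPTensor(p=tf.zeros([3]))

    sample = b.value()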

tf.contrib.bayesflow.stochastic_tensor.InverseGammaWithSoftplusAlphaBetaTensor.value_type

tf.contrib.bayesflow.stochastic_tensor.InverseGammaWithSoftplusAlphaBetaTensor.value_type
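No docstring survives for this entry; value_type is the ValueType instance that was in scope when the tensor was constructed. A hedged sketch, assuming the contrib-era API and InverseGamma parameters alpha and beta (softplus-transformed by this class):

    import tensorflow as tf

    st = tf.contrib.bayesflow.stochastic_tensor

    with st.value_type(st.MeanValue()):
        ig = st.InverseGammaWithSoftplusAlphaBetaTensor(
            alpha=tf.constant(2.0), beta=tf.constant(3.0))

    print(ig.value_type)  # the MeanValue instance captured at construction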