tf.contrib.bayesflow.stochastic_tensor.BernoulliTensor.clone(name=None, **dist_args)

class tf.contrib.bayesflow.stochastic_tensor.BernoulliTensor

BernoulliTensor is a StochasticTensor backed by the distribution Bernoulli.
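The behavior wrapped by this class can be sketched in plain Python. This is a conceptual sketch, not the actual contrib API: the `Bernoulli` stand-in, the `seed` parameter, and the sampling details below are illustrative assumptions, kept only to show what "a StochasticTensor backed by Bernoulli" means (a tensor-like object whose value is a draw from an underlying distribution, with `clone` producing a new node over the same distribution class).

```python
import random

class Bernoulli:
    """Illustrative stand-in for a Bernoulli distribution (not the TF class)."""
    def __init__(self, p):
        self.p = p

    def sample(self, rng):
        return 1.0 if rng.random() < self.p else 0.0

class BernoulliTensorSketch:
    """Conceptual sketch of a BernoulliTensor-like object: its value is a
    sample drawn from the underlying Bernoulli distribution."""
    def __init__(self, p, seed=0):
        self._dist = Bernoulli(p)
        self._rng = random.Random(seed)
        self._value = None

    def value(self, name=None):
        # The drawn sample is fixed after the first call, so the same
        # stochastic node yields a consistent value downstream.
        if self._value is None:
            self._value = self._dist.sample(self._rng)
        return self._value

    def clone(self, name=None, **dist_args):
        # clone(name=None, **dist_args) returns a new stochastic node over
        # the same distribution class, optionally with new parameters.
        return BernoulliTensorSketch(dist_args.get("p", self._dist.p))
```

The one-sample caching in `value()` mirrors the idea that a single stochastic node represents one draw, not a fresh sample per read.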

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.value(name=None)

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.name

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.loss(sample_loss)

Returns the term to add to the surrogate loss. This method is called by surrogate_loss. The input sample_loss should have already had stop_gradient applied to it, because surrogate_loss usually provides a Monte Carlo sample term of the form differentiable_surrogate * sample_loss, where sample_loss is treated as constant with respect to the input for purposes of the gradient.

Args:
  sample_loss: Tensor, sample loss downstream of this StochasticTensor.
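The surrogate term described above is the score-function (REINFORCE) estimator: the differentiable factor is the sample's log-probability, and the gradient-stopped sample_loss multiplies it as a constant. A minimal numeric sketch, assuming a Bernoulli(p) node and an arbitrary downstream loss f (all names here are illustrative, not the contrib API):

```python
import random

def score_function_grad_estimate(p, f, n=200000, seed=0):
    """Estimate d/dp E[f(X)] for X ~ Bernoulli(p) with the score-function
    estimator E[f(X) * d/dp log p(X)], where f(X) plays the role of the
    gradient-stopped sample_loss (held constant w.r.t. p)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1.0 if rng.random() < p else 0.0
        # Score of a Bernoulli sample: d/dp log p(x) = x/p - (1-x)/(1-p)
        dlogp = x / p - (1.0 - x) / (1.0 - p)
        total += f(x) * dlogp  # differentiable_surrogate * sample_loss
    return total / n

p = 0.3
f = lambda x: 4.0 * x + 1.0   # arbitrary downstream sample loss
analytic = f(1.0) - f(0.0)    # d/dp E[f(X)] for Bernoulli is f(1) - f(0)
estimate = score_function_grad_estimate(p, f)
```

Because sample_loss enters only as a multiplicative constant, stop_gradient on it is what keeps the estimator unbiased: only the log-probability factor carries gradients back to the distribution parameters.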

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.input_dict

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.graph

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.dtype

class tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor

Base class for Tensor-like objects that emit stochastic values.
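The contract the base class establishes (value() yields the node's tensor, loss(sample_loss) yields its surrogate-loss term or None) can be sketched as a minimal abstract class. This is a conceptual sketch under assumed names, not the contrib implementation:

```python
import abc

class BaseStochasticTensorSketch(abc.ABC):
    """Conceptual sketch of the base contract: a Tensor-like object must
    report its sampled value and the term, if any, that it contributes
    to the surrogate loss."""

    @abc.abstractmethod
    def value(self, name=None):
        """Return the (sampled) value this tensor-like object stands for."""

    @abc.abstractmethod
    def loss(self, sample_loss):
        """Return the surrogate-loss term for this node, or None if the
        node needs no score-function term."""

class ConstantTensorSketch(BaseStochasticTensorSketch):
    """A degenerate deterministic subclass: fixed value, no loss term."""
    def __init__(self, v):
        self._v = v

    def value(self, name=None):
        return self._v

    def loss(self, sample_loss):
        # Deterministic nodes contribute nothing to the surrogate loss.
        return None
```

Subclasses such as the BernoulliTensor above fill in value() with a draw from their backing distribution and loss() with a score-function term.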