tf.contrib.bayesflow.stochastic_tensor.WishartCholeskyTensor.distribution

tf.contrib.learn.monitors.StopAtStep.step_begin()

tf.contrib.learn.monitors.StopAtStep.step_begin(step)
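The entry above lists only the signature. As a rough, hypothetical sketch of the contract (a monitor whose step_begin is called with the current step number before each training step, and which requests a stop once a target step is reached), assuming a simplified training loop and illustrative class names rather than the actual tf.contrib implementation:

```python
# Hypothetical, simplified stand-in for tf.contrib.learn.monitors.StopAtStep.
# step_begin(step) is called before each training step; the monitor signals
# the loop to stop once `step` reaches the configured last step.
class StopAtStepSketch:
    def __init__(self, last_step):
        self._last_step = last_step
        self.requested_stop = False

    def step_begin(self, step):
        # Request a stop when the current step reaches the target.
        if step >= self._last_step:
            self.requested_stop = True
        return []  # no extra tensors requested for this step

# Simplified training loop driving the monitor (illustrative only).
monitor = StopAtStepSketch(last_step=3)
step = 0
while not monitor.requested_stop:
    step += 1
    monitor.step_begin(step)
```

After the loop exits, `step` equals the configured last step, since the stop request takes effect at the beginning of that step.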

tf.contrib.bayesflow.stochastic_tensor.LaplaceTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.LaplaceTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)
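The `**dist_args` in the signature above are forwarded to the underlying distribution. As a hypothetical, TensorFlow-free sketch of what a Laplace-backed stochastic tensor wraps (a Laplace(loc, scale) distribution that can be sampled and scored), with illustrative function names not taken from the library:

```python
import math
import random

# Hypothetical sketch of the distribution a LaplaceTensor wraps: a
# Laplace(loc, scale) distribution supporting sampling and log_prob.
# dist_args such as loc/scale would be forwarded to the distribution.
def laplace_sample(loc, scale, rng):
    # Inverse-CDF sampling: x = loc - scale * sign(u) * ln(1 - 2|u|),
    # with u uniform on (-1/2, 1/2).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return loc - scale * sign * math.log(1.0 - 2.0 * abs(u))

def laplace_log_prob(x, loc, scale):
    # log p(x) = -log(2 * scale) - |x - loc| / scale
    return -math.log(2.0 * scale) - abs(x - loc) / scale

rng = random.Random(0)
xs = [laplace_sample(0.0, 1.0, rng) for _ in range(2000)]
mean = sum(xs) / len(xs)  # should be close to loc = 0
```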

tf.contrib.bayesflow.stochastic_tensor.BinomialTensor.loss()

tf.contrib.bayesflow.stochastic_tensor.BinomialTensor.loss(final_loss, name='Loss')
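The default `loss_fn` for these tensors is `score_function`, which builds a surrogate loss from the sample's log-probability and the downstream `final_loss`. A minimal sketch of that idea in plain Python, assuming a simplified form (no `stop_gradient`, illustrative function names, and a Binomial log-probability written out by hand):

```python
import math

# Hypothetical sketch of the score-function surrogate behind
# StochasticTensor.loss(final_loss): surrogate = log_prob(sample) * final_loss,
# where final_loss is treated as a constant during backprop.
def binomial_log_prob(k, n, p):
    # log P(K = k) for a Binomial(n, p) distribution, via log-gamma.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

def score_function_surrogate(log_prob, final_loss):
    # Differentiating this surrogate w.r.t. the distribution parameters
    # yields an unbiased estimate of the gradient of E[final_loss].
    return log_prob * final_loss

lp = binomial_log_prob(k=3, n=10, p=0.5)
surrogate = score_function_surrogate(lp, final_loss=2.0)
```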

tf.contrib.bayesflow.stochastic_tensor.MixtureTensor

class tf.contrib.bayesflow.stochastic_tensor.MixtureTensor

MixtureTensor is a StochasticTensor backed by the distribution Mixture.
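As a hypothetical illustration of what "backed by the distribution Mixture" means, sampling from a mixture picks a component according to the mixture weights and then samples from that component. The sketch below uses only the standard library and illustrative names, not the tf.contrib API:

```python
import random

# Hypothetical sketch of mixture sampling: choose a component index i with
# probability weights[i], then draw from that component's sampler.
def sample_mixture(weights, component_samplers, rng):
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return component_samplers[i]()

rng = random.Random(0)
weights = [0.3, 0.7]
components = [lambda: rng.gauss(-5.0, 1.0),   # component 0: N(-5, 1)
              lambda: rng.gauss(+5.0, 1.0)]   # component 1: N(+5, 1)
samples = [sample_mixture(weights, components, rng) for _ in range(1000)]

# With well-separated components, the fraction of positive samples
# approximates the weight of the second component (0.7).
frac_positive = sum(1 for s in samples if s > 0) / len(samples)
```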

tf.contrib.bayesflow.stochastic_tensor.WishartCholeskyTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.WishartCholeskyTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)

tf.contrib.learn.Estimator.model_dir

tf.contrib.learn.monitors.CaptureVariable.epoch_end()

tf.contrib.learn.monitors.CaptureVariable.epoch_end(epoch)

End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or epoch number does not match.
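The ValueError contract above implies simple begin/end bookkeeping. A minimal sketch of that contract in plain Python, with an illustrative class name rather than the actual monitor implementation:

```python
# Hypothetical sketch of the epoch_begin/epoch_end bookkeeping contract:
# epoch_end raises ValueError if no epoch has begun, or if the epoch number
# does not match the one passed to epoch_begin.
class EpochMonitorSketch:
    def __init__(self):
        self._current_epoch = None

    def epoch_begin(self, epoch):
        self._current_epoch = epoch

    def epoch_end(self, epoch):
        if self._current_epoch is None or self._current_epoch != epoch:
            raise ValueError(
                "epoch_end(%d) called without matching epoch_begin" % epoch)
        self._current_epoch = None

m = EpochMonitorSketch()
m.epoch_begin(1)
m.epoch_end(1)      # OK: matches the begun epoch
try:
    m.epoch_end(2)  # no epoch in progress -> ValueError
    mismatched = False
except ValueError:
    mismatched = True
```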

tf.contrib.learn.monitors.CheckpointSaver.set_estimator()

tf.contrib.learn.monitors.CheckpointSaver.set_estimator(estimator)

A setter called automatically by the target estimator. If the estimator is locked, this method does nothing.

Args:
  estimator: the estimator that this monitor monitors.

Raises:
  ValueError: if the estimator is None.
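The behavior described above (raise on None, silently do nothing when locked) can be sketched in a few lines. This is a hypothetical stand-in with illustrative names, not the tf.contrib monitor code:

```python
# Hypothetical sketch of the set_estimator contract: raise ValueError when
# passed None; when the monitor is locked, keep the current estimator and
# return without doing anything.
class MonitorSketch:
    def __init__(self, locked=False):
        self._locked = locked
        self._estimator = None

    def set_estimator(self, estimator):
        if estimator is None:
            raise ValueError("estimator must not be None")
        if self._locked:
            return  # locked monitors ignore the new estimator
        self._estimator = estimator

unlocked = MonitorSketch()
unlocked.set_estimator("estimator_a")

locked = MonitorSketch(locked=True)
locked.set_estimator("estimator_b")  # ignored: monitor is locked

try:
    unlocked.set_estimator(None)     # -> ValueError
    raised = False
except ValueError:
    raised = True
```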

tf.contrib.bayesflow.stochastic_tensor.ExponentialWithSoftplusLamTensor.graph