tf.reduce_prod()

tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None) Computes the product of elements across dimensions of a tensor. Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1. If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
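
A minimal sketch of the reduction behaviour described above, assuming a TF 1.x-style graph session (later releases rename reduction_indices to axis and keep_dims to keepdims):

import tensorflow as tf

x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])

all_prod = tf.reduce_prod(x)                         # all dims reduced -> scalar 720.0
col_prod = tf.reduce_prod(x, reduction_indices=[0])  # shape (3,): [4., 10., 18.]
row_prod = tf.reduce_prod(x, reduction_indices=[1],
                          keep_dims=True)            # shape (2, 1): [[6.], [120.]]

with tf.Session() as sess:
    print(sess.run([all_prod, col_prod, row_prod]))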

tf.contrib.distributions.InverseGamma.log_cdf()

tf.contrib.distributions.InverseGamma.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ] Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1. Args: value: float or double Tensor. name: The name to give this op. Returns: logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
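
A hedged sketch of calling log_cdf; the alpha/beta constructor arguments are assumed from the contrib distributions API of the same era, and the example values are illustrative only:

import tensorflow as tf

# Two independent InverseGamma distributions (batch_shape = (2,)).
dist = tf.contrib.distributions.InverseGamma(alpha=[3.0, 5.0], beta=[2.0, 2.0])

x = tf.constant([[0.5, 1.0],
                 [2.0, 4.0]])     # sample_shape (2,), broadcast over the batch

log_cdf = dist.log_cdf(x)         # shape sample_shape(x) + batch_shape = (2, 2)

with tf.Session() as sess:
    print(sess.run(log_cdf))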

tf.contrib.learn.monitors.SummarySaver.step_end()

tf.contrib.learn.monitors.SummarySaver.step_end(step, output) Overrides BaseMonitor.step_end. When overriding this method, you must call the super implementation. Args: step: int, the current value of the global step. output: dict mapping string values representing tensor names to the values that resulted from running these tensors. Values may be either scalars, for scalar tensors, or Numpy arrays, for non-scalar tensors. Returns: bool, the result of every_n_step_end, if that was called this step, or False otherwise.
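
A hedged illustration of the override contract: LoggingSummarySaver below is a hypothetical subclass, shown only to demonstrate that step_end(step, output) must call the super implementation and propagate its boolean result:

import tensorflow as tf

class LoggingSummarySaver(tf.contrib.learn.monitors.SummarySaver):

    def step_end(self, step, output):
        # Per the docstring, overrides must call the super implementation;
        # its boolean return value signals whether training should stop.
        request_stop = super(LoggingSummarySaver, self).step_end(step, output)
        if step % 100 == 0:
            print('step %d produced tensors: %s' % (step, sorted(output.keys())))
        return request_stop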

tf.contrib.learn.monitors.NanLoss.epoch_end()

tf.contrib.learn.monitors.NanLoss.epoch_end(epoch) End epoch. Args: epoch: int, the epoch number. Raises: ValueError: if we've not begun an epoch, or epoch number does not match.

tf.contrib.bayesflow.stochastic_tensor.GammaTensor.mean()

tf.contrib.bayesflow.stochastic_tensor.GammaTensor.mean(name='mean')

tf.contrib.distributions.Chi2.validate_args

tf.contrib.distributions.Chi2.validate_args Python boolean indicating whether possibly-expensive checks are enabled.
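
A small sketch of setting and reading the flag; the df argument and validate_args keyword are assumed from the contrib Chi2 constructor of the same era:

import tensorflow as tf

chi2 = tf.contrib.distributions.Chi2(df=4.0, validate_args=True)

print(chi2.validate_args)              # Python bool: True
log_prob = chi2.log_prob([1.0, 2.5])   # with validate_args=True, invalid inputs
                                       # trigger runtime assertions instead of NaNs

with tf.Session() as sess:
    print(sess.run(log_prob))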

tf.contrib.bayesflow.stochastic_tensor.ExponentialTensor.entropy()

tf.contrib.bayesflow.stochastic_tensor.ExponentialTensor.entropy(name='entropy')

tf.contrib.bayesflow.stochastic_tensor.UniformTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.UniformTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)

tf.contrib.learn.monitors.CheckpointSaver.__init__()

tf.contrib.learn.monitors.CheckpointSaver.__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None) Initialize CheckpointSaver monitor. Args: checkpoint_dir: str, base directory for the checkpoint files. save_secs: int, save every N secs. save_steps: int, save every N steps. saver: Saver object, used for saving. checkpoint_basename: str, base name for the checkpoint files. scaffold: Scaffold, use to get saver object. Raises: ValueError: if both save_steps and save_secs are None, or if both are provided.
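
A hedged construction example using only the documented arguments; the checkpoint directory is hypothetical, and the commented fit call assumes a contrib.learn estimator that accepts a monitors list:

import tensorflow as tf

saver_monitor = tf.contrib.learn.monitors.CheckpointSaver(
    checkpoint_dir='/tmp/my_model',        # hypothetical directory
    save_steps=500,                        # checkpoint every 500 global steps
    checkpoint_basename='model.ckpt')

# estimator.fit(input_fn=my_input_fn, steps=10000, monitors=[saver_monitor])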

tf.contrib.distributions.BetaWithSoftplusAB.batch_shape()

tf.contrib.distributions.BetaWithSoftplusAB.batch_shape(name='batch_shape') Shape of a single sample from a single event index as a 1-D Tensor. The product of the dimensions of the batch_shape is the number of independent distributions of this kind the instance represents. Args: name: name to give to the op Returns: batch_shape: Tensor.
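
A hedged sketch of reading batch_shape; the pre-softplus a/b constructor arguments are assumed from the contrib Beta family of the same era:

import tensorflow as tf

# Three (a, b) pairs -> three independent Beta distributions in one batch.
dist = tf.contrib.distributions.BetaWithSoftplusAB(
    a=[1.0, 2.0, 3.0],
    b=[2.0, 2.0, 2.0])

batch_shape = dist.batch_shape()   # 1-D Tensor

with tf.Session() as sess:
    print(sess.run(batch_shape))   # => [3]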