tf.contrib.distributions.MultivariateNormalDiag.log_prob()

tf.contrib.distributions.MultivariateNormalDiag.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous).

Additional documentation from _MultivariateNormalOperatorPD:

x is a batch vector with compatible shape if x is a Tensor whose shape can be broadcast up to either:

self.batch_shape + self.event_shape

or

[M1,...,Mm] + self.batch_shape + self.event_shape

Args:

value: float or double Tensor.
name: The name to give this op.

Returns:

log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
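For illustration, a minimal sketch of the broadcasting rule above; the constructor arguments mu and diag_stdev are an assumption about the TF 0.x contrib API and are not part of this entry:

```python
import tensorflow as tf

# Assumed TF 0.x constructor: mu and diag_stdev define a diagonal MVN
# with batch_shape [] and event_shape [2].
mvn = tf.contrib.distributions.MultivariateNormalDiag(
    mu=[0.0, 0.0],
    diag_stdev=[1.0, 1.0])

# x has shape [3, 2] = [M1] + batch_shape + event_shape, so it
# broadcasts against the distribution's shapes.
x = [[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]]
log_p = mvn.log_prob(x)  # Tensor of shape [3]

with tf.Session() as sess:
    print(sess.run(log_p))
```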

tf.contrib.bayesflow.stochastic_tensor.NormalWithSoftplusSigmaTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.NormalWithSoftplusSigmaTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)
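A sketch of constructing this stochastic tensor, under the assumption that mu and sigma are forwarded through **dist_args to the underlying NormalWithSoftplusSigma distribution and that the usual bayesflow value_type context applies:

```python
import tensorflow as tf

st = tf.contrib.bayesflow.stochastic_tensor

# Assumption: mu/sigma are **dist_args for NormalWithSoftplusSigma;
# sigma is passed through softplus, so a raw negative value is valid.
with st.value_type(st.MeanValue()):
    z = st.NormalWithSoftplusSigmaTensor(mu=0.0, sigma=-1.0)

# Downstream ops treat the stochastic tensor like an ordinary Tensor.
y = tf.identity(z)
```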

tf.contrib.learn.monitors.ExportMonitor.export_dir

tf.contrib.learn.monitors.ExportMonitor.export_dir

tf.contrib.graph_editor.reroute_b2a_inputs()

tf.contrib.graph_editor.reroute_b2a_inputs(sgv0, sgv1) Re-route all the inputs of sgv1 to sgv0 (see reroute_inputs).
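For illustration, a minimal sketch on a hypothetical graph: two parallel branches, where rerouting makes sgv0's ops consume the inputs that previously fed sgv1:

```python
import tensorflow as tf

ge = tf.contrib.graph_editor

# Hypothetical graph: two branches reading different placeholders.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
out0 = tf.square(a, name="out0")
out1 = tf.square(b, name="out1")

sgv0 = ge.sgv(out0.op)
sgv1 = ge.sgv(out1.op)

# Re-route the inputs of sgv1 to sgv0: out0's op now reads from `b`.
ge.reroute_b2a_inputs(sgv0, sgv1)
```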

tf.nn.rnn_cell.InputProjectionWrapper

class tf.nn.rnn_cell.InputProjectionWrapper Operator adding an input projection to the given cell. Note: in many cases it may be more efficient to not use this wrapper, but instead concatenate the whole sequence of your inputs in time, do the projection on this batch-concatenated sequence, then split it.
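A minimal usage sketch, assuming the TF 0.x num_proj argument (the dimension the inputs are projected to before reaching the wrapped cell):

```python
import tensorflow as tf

# Project 128-dim inputs down to 32 dims at every time step before
# feeding the wrapped GRU cell.
cell = tf.nn.rnn_cell.GRUCell(num_units=32)
cell = tf.nn.rnn_cell.InputProjectionWrapper(cell, num_proj=32)

inputs = tf.placeholder(tf.float32, [None, 10, 128])  # [batch, time, dim]
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```

Per the note above, projecting the whole [batch, time, dim] sequence once with a single matmul and then splitting it is often faster than this per-step projection.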

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.log_prob()

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous).

Args:

value: float or double Tensor.
name: The name to give this op.

Returns:

log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
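For illustration, a sketch assuming the TF 0.x contrib constructor arguments df, mu, and sigma (df is taken through tf.abs and sigma through tf.nn.softplus internally, per the class name):

```python
import tensorflow as tf

# Assumed constructor args: df, mu, sigma.
dist = tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma(
    df=3.0, mu=0.0, sigma=1.0)

# sample_shape(x) is [3], batch_shape is [], so log_p has shape [3].
log_p = dist.log_prob([0.0, 1.5, -2.0])

with tf.Session() as sess:
    print(sess.run(log_p))
```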

tf.contrib.learn.monitors.CheckpointSaver.step_end()

tf.contrib.learn.monitors.CheckpointSaver.step_end(step, output) Callback after training step finished. This callback provides access to the tensors/ops evaluated at this step, including the additional tensors for which evaluation was requested in step_begin. In addition, the callback has the opportunity to stop training by returning True. This is useful for early stopping, for example. Note that this method is not called if the call to Session.run() that followed the last call to step_begin() failed.

Args:

step: int, the current value of the global step.
output: dict mapping string values representing tensor names to the value resulted from running these tensors. Values may be either scalars, for scalar tensors, or Numpy array, for non-scalar tensors.

Returns:

bool. True if training should stop.
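To illustrate the step_begin/step_end contract described above, a hypothetical monitor (names are illustrative, not part of the library) that requests an extra tensor each step and returns True to stop training early:

```python
import tensorflow as tf

class StopOnLowLoss(tf.contrib.learn.monitors.BaseMonitor):
    """Hypothetical monitor: stops training once a loss tensor is low."""

    def __init__(self, loss_tensor_name, threshold):
        super(StopOnLowLoss, self).__init__()
        self._loss_tensor_name = loss_tensor_name
        self._threshold = threshold

    def step_begin(self, step):
        super(StopOnLowLoss, self).step_begin(step)
        # Request evaluation of an additional tensor at this step.
        return [self._loss_tensor_name]

    def step_end(self, step, output):
        # Call the super implementation for its bookkeeping.
        super(StopOnLowLoss, self).step_end(step, output)
        # Returning True requests early stopping.
        return output[self._loss_tensor_name] < self._threshold
```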

tf.contrib.learn.monitors.LoggingTrainable.epoch_end()

tf.contrib.learn.monitors.LoggingTrainable.epoch_end(epoch) End epoch.

Args:

epoch: int, the epoch number.

Raises:

ValueError: if we've not begun an epoch, or epoch number does not match.
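A minimal sketch of the epoch bookkeeping this implies: each epoch_end must pair with a prior epoch_begin carrying the same epoch number (default constructor arguments assumed):

```python
import tensorflow as tf

monitor = tf.contrib.learn.monitors.LoggingTrainable()
monitor.epoch_begin(0)
monitor.epoch_end(0)    # OK: pairs with epoch_begin(0)
# monitor.epoch_end(1)  # would raise ValueError: epoch number mismatch
```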

tf.contrib.learn.LinearRegressor.model_dir

tf.contrib.learn.LinearRegressor.model_dir

tf.contrib.learn.monitors.CaptureVariable.step_end()

tf.contrib.learn.monitors.CaptureVariable.step_end(step, output) Overrides BaseMonitor.step_end. When overriding this method, you must call the super implementation.

Args:

step: int, the current value of the global step.
output: dict mapping string values representing tensor names to the value resulted from running these tensors. Values may be either scalars, for scalar tensors, or Numpy array, for non-scalar tensors.

Returns:

bool, the result of every_n_step_end, if that was called this step, or False otherwise.
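For illustration, a sketch of using CaptureVariable with an estimator; the var_name value and the assumption that captured values land in the monitor's values dict keyed by step are assumptions, not guaranteed by this entry:

```python
import numpy as np
import tensorflow as tf

def input_fn():
    # Tiny synthetic regression problem.
    x = tf.constant(np.random.rand(100, 1), dtype=tf.float32)
    return {"x": x}, 2.0 * x

# Assumption: var_name must be the full tensor name of an existing
# variable; "linear/x/weight:0" is a guess at LinearRegressor's naming.
capture = tf.contrib.learn.monitors.CaptureVariable(
    var_name="linear/x/weight:0", every_n=10, first_n=1)

regressor = tf.contrib.learn.LinearRegressor(
    feature_columns=[tf.contrib.layers.real_valued_column("x")])
regressor.fit(input_fn=input_fn, steps=50, monitors=[capture])

print(capture.values)  # assumed: {step: captured variable value, ...}
```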