tf.contrib.learn.monitors.NanLoss.step_begin()

tf.contrib.learn.monitors.NanLoss.step_begin(step)

Overrides BaseMonitor.step_begin. When overriding this method, you must call the super implementation.

Args:
- step: int, the current value of the global step.

Returns:
A list, the result of every_n_step_begin, if that was called this step, or an empty list otherwise.

Raises:
- ValueError: if called more than once during a step.
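Because an override must chain to the super implementation, a subclass looks like the following minimal sketch. The LoggingNanLoss name is hypothetical; only the required super call and the returned tensor list come from the contract above.

```python
import tensorflow as tf

class LoggingNanLoss(tf.contrib.learn.monitors.NanLoss):
  """Hypothetical NanLoss subclass that logs each step it observes."""

  def step_begin(self, step):
    # The super implementation must be called, per the contract above;
    # it returns the list of tensors to fetch for this step.
    tensors = super(LoggingNanLoss, self).step_begin(step)
    print("Beginning step %d; requesting: %s" % (step, tensors))
    return tensors
```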

tf.contrib.distributions.Poisson.log_prob()

tf.contrib.distributions.Poisson.log_prob(value, name='log_prob')

Log probability density/mass function (depending on is_continuous).

Additional documentation from Poisson:

Note that the input value must be a non-negative floating point tensor with dtype dtype and whose shape can be broadcast with self.lam. x is only legal if it is non-negative and its components are equal to integer values.

Args:
- value: float or double Tensor.
- name: The name to give this op.

Returns:
- log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
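A short usage sketch; the rate lam=4.0 and the sample values are illustrative only.

```python
import tensorflow as tf

# Poisson parameterized by rate `lam`, the parameter named in the docs above.
poisson = tf.contrib.distributions.Poisson(lam=4.0)

# Inputs must be non-negative, integer-valued floats broadcastable with lam.
log_probs = poisson.log_prob([0., 1., 2., 3.])

with tf.Session() as sess:
  print(sess.run(log_probs))
```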

tf.contrib.learn.monitors.SummarySaver.set_estimator()

tf.contrib.learn.monitors.SummarySaver.set_estimator(estimator)

tf.contrib.distributions.Poisson.pmf()

tf.contrib.distributions.Poisson.pmf(value, name='pmf')

Probability mass function.

Args:
- value: float or double Tensor.
- name: The name to give this op.

Returns:
- pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
- TypeError: if is_continuous.
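Mathematically, pmf(x) equals exp(log_prob(x)) for a discrete distribution; a sketch mirroring the one above, with illustrative values:

```python
import tensorflow as tf

poisson = tf.contrib.distributions.Poisson(lam=4.0)
pmf = poisson.pmf([2., 4., 6.])  # P(X=2), P(X=4), P(X=6) for X ~ Poisson(4)

with tf.Session() as sess:
  print(sess.run(pmf))
```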

tf.contrib.distributions.QuantizedDistribution.event_shape()

tf.contrib.distributions.QuantizedDistribution.event_shape(name='event_shape')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:
- name: name to give to the op.

Returns:
- event_shape: Tensor.
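A sketch of reading event_shape from a quantized scalar Normal. The constructor arguments shown (distribution, lower_cutoff, upper_cutoff) and Normal's mu/sigma parameters are assumptions about this contrib version's API.

```python
import tensorflow as tf
dist = tf.contrib.distributions

# Assumed constructor signature for this contrib version.
qdist = dist.QuantizedDistribution(
    distribution=dist.Normal(mu=0.0, sigma=1.0),
    lower_cutoff=0.0,
    upper_cutoff=10.0)

event_shape = qdist.event_shape()  # 1-D int32 Tensor; [] for a scalar event

with tf.Session() as sess:
  print(sess.run(event_shape))
```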

tf.nn.rnn_cell.RNNCell.state_size

tf.nn.rnn_cell.RNNCell.state_size

Size(s) of state(s) used by this cell. It can be represented by an integer, a TensorShape, or a tuple of integers or TensorShapes.
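Concrete cells report state size differently: a GRU cell exposes a single integer, while an LSTM cell with tuple state exposes a pair of sizes. A small sketch (num_units=128 is arbitrary):

```python
import tensorflow as tf

gru = tf.nn.rnn_cell.GRUCell(num_units=128)
lstm = tf.nn.rnn_cell.BasicLSTMCell(num_units=128, state_is_tuple=True)

print(gru.state_size)   # 128
print(lstm.state_size)  # LSTMStateTuple(c=128, h=128)
```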

tf.contrib.learn.monitors.GraphDump.__init__()

tf.contrib.learn.monitors.GraphDump.__init__(ignore_ops=None)

Initializes GraphDump monitor.

Args:
- ignore_ops: list of string. Names of ops to ignore. If None, GraphDump.IGNORE_OPS is used.
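Construction is straightforward; the op type names below are illustrative:

```python
import tensorflow as tf

# Dump op outputs each step, skipping constants and identities
# (illustrative choices for ignore_ops).
dump_monitor = tf.contrib.learn.monitors.GraphDump(
    ignore_ops=["Const", "Identity"])
```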

tf.contrib.learn.monitors.GraphDump.epoch_end()

tf.contrib.learn.monitors.GraphDump.epoch_end(epoch)

End epoch.

Args:
- epoch: int, the epoch number.

Raises:
- ValueError: if we've not begun an epoch, or the epoch number does not match.

tf.acos()

tf.acos(x, name=None)

Computes acos of x element-wise.

Args:
- x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
- name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as x.
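A quick sketch; the input values are arbitrary points in [-1, 1]:

```python
import tensorflow as tf

x = tf.constant([1.0, 0.0, -1.0])
y = tf.acos(x)  # element-wise arccosine: [0.0, pi/2, pi]

with tf.Session() as sess:
  print(sess.run(y))
```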

tf.edit_distance()

tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')

Computes the Levenshtein distance between sequences.

This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by the length of truth by setting normalize to True. For example, given the following input:

```
# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
#   (0,0) = ["a"]
#   (1,0) = ["b"]
```
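A runnable sketch of this example. The truth contents ((0,0) = [], (0,1) = ["a"], (1,0) = ["b", "c"], (1,1) = ["a"]) are illustrative, chosen so the output exercises normalization, an empty truth, and missing hypothesis entries.

```python
import tensorflow as tf

# 'hypothesis' has shape [2, 1]: (0,0) = ["a"], (1,0) = ["b"].
hypothesis = tf.SparseTensor(
    [[0, 0, 0],
     [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))

# 'truth' has shape [2, 2]:
#   (0,0) = [], (0,1) = ["a"], (1,0) = ["b", "c"], (1,1) = ["a"].
truth = tf.SparseTensor(
    [[0, 1, 0],
     [1, 0, 0],
     [1, 0, 1],
     [1, 1, 0]],
    ["a", "b", "c", "a"],
    (2, 2, 2))

distances = tf.edit_distance(hypothesis, truth, normalize=True)

with tf.Session() as sess:
  print(sess.run(distances))
  # [[inf, 1.0],   (0,0): empty truth; (0,1): no hypothesis
  #  [0.5, 1.0]]   (1,0): one addition over truth length 2; (1,1): no hypothesis
```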