tf.nn.rnn_cell.EmbeddingWrapper.state_size

tf.nn.rnn_cell.EmbeddingWrapper.state_size

tf.ReaderBase.supports_serialize

tf.ReaderBase.supports_serialize Whether the Reader implementation can serialize its state.

tf.argmin()

tf.argmin(input, dimension, name=None) Returns the index with the smallest value across dimensions of a tensor.

Args:
  input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  dimension: A Tensor. Must be one of the following types: int32, int64. 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  name: A name for the operation (optional).

Returns:
  A Tensor of type int64.
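The reduction semantics above mirror NumPy's argmin, with NumPy's axis playing the role of dimension. A minimal sketch (NumPy used here so the example runs without a TensorFlow session; it is an analogy, not the tf.argmin call itself):

```python
import numpy as np

# Illustrative only: np.argmin mirrors the semantics of tf.argmin,
# with NumPy's `axis` in place of `dimension`.
x = np.array([[4.0, 1.0, 9.0],
              [2.0, 7.0, 0.5]])

# Reduce across dimension 0: index of the smallest value in each column.
print(np.argmin(x, axis=0))  # [1 0 1]

# Reduce across dimension 1: index of the smallest value in each row.
print(np.argmin(x, axis=1))  # [1 2]
```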

tf.contrib.distributions.Binomial

class tf.contrib.distributions.Binomial Binomial distribution. This distribution is parameterized by a vector p of probabilities and n, the total counts.
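The (n, p) parameterization described above can be sketched in plain Python, without the tf.contrib.distributions API; binomial_pmf is a hypothetical helper name, not part of the class:

```python
from math import comb

# Sketch of the Binomial(n, p) parameterization:
# pmf(k) = C(n, k) * p**k * (1 - p)**(n - k)
def binomial_pmf(n, p, k):
    """Probability of k successes in n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 5, 0.5
pmf = [binomial_pmf(n, p, k) for k in range(n + 1)]
print(pmf)       # symmetric for p = 0.5
print(sum(pmf))  # probabilities over k = 0..n sum to 1
```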

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.value()

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.value(name='value')

tf.contrib.learn.monitors.CaptureVariable.step_end()

tf.contrib.learn.monitors.CaptureVariable.step_end(step, output) Overrides BaseMonitor.step_end. When overriding this method, you must call the super implementation.

Args:
  step: int, the current value of the global step.
  output: dict mapping string values representing tensor names to the value resulted from running these tensors. Values may be either scalars, for scalar tensors, or NumPy arrays, for non-scalar tensors.

Returns:
  bool, the result of every_n_step_end, if that was called this step.
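The override contract above (subclasses must call the super implementation of step_end) can be sketched in plain Python; BaseMonitor and CaptureLike here are simplified stand-ins, not the tf.contrib.learn classes:

```python
# Hypothetical sketch of the step_end override contract, no TensorFlow
# required. The base class does its own bookkeeping, so overrides must
# call super().step_end(...) before adding their behavior.
class BaseMonitor:
    def step_end(self, step, output):
        # Base bookkeeping; returns whether training should stop.
        return False

class CaptureLike(BaseMonitor):
    def __init__(self):
        self.captured = {}

    def step_end(self, step, output):
        stop = super().step_end(step, output)  # required super call
        self.captured[step] = output           # capture this step's values
        return stop

m = CaptureLike()
m.step_end(1, {"loss:0": 0.5})
print(m.captured)  # {1: {'loss:0': 0.5}}
```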

tf.contrib.bayesflow.stochastic_tensor.StudentTTensor.value_type

tf.contrib.bayesflow.stochastic_tensor.StudentTTensor.value_type

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size

tf.contrib.learn.monitors.StepCounter.begin()

tf.contrib.learn.monitors.StepCounter.begin(max_steps=None) Called at the beginning of training. When called, the default graph is the one we are executing.

Args:
  max_steps: int, the maximum global step this training will run until.

Raises:
  ValueError: if we've already begun a run.

tf.assert_greater_equal()

tf.assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None) Assert the condition x >= y holds element-wise.

Example of adding a dependency to an operation:

  with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding a dependency to the tensor being checked:

  x = tf.with_dependencies([tf.assert_greater_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] >= y[i]. If both x and y are empty, this is trivially satisfied.
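The element-wise, broadcast condition can be sketched with NumPy (check_greater_equal is a hypothetical stand-in for illustration, not the TensorFlow op, which instead produces a graph assertion):

```python
import numpy as np

# Sketch of the condition tf.assert_greater_equal checks: broadcast x
# against y, then require x[i] >= y[i] for every resulting pair.
def check_greater_equal(x, y):
    x, y = np.asarray(x), np.asarray(y)
    if not np.all(x >= y):  # np.all over an empty array is True
        raise ValueError("Condition x >= y did not hold element-wise.")

check_greater_equal([[3, 4], [5, 6]], [1, 2])  # y broadcasts; passes
check_greater_equal([], [])                    # empty: trivially satisfied
```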