tf.ReaderBase.supports_serialize

tf.ReaderBase.supports_serialize Whether the Reader implementation can serialize its state.
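
A minimal usage sketch, assuming a concrete reader such as tf.TFRecordReader (the specific reader class is not part of this entry):

  import tensorflow as tf

  reader = tf.TFRecordReader()
  if reader.supports_serialize:
    # serialize_state() returns a string tensor encoding the reader's state,
    # which can later be passed to restore_state().
    state = reader.serialize_state()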

tf.contrib.distributions.Binomial

class tf.contrib.distributions.Binomial Binomial distribution. This distribution is parameterized by a vector p of probabilities and n, the total counts.
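
A minimal sketch of constructing the distribution, assuming the constructor argument names follow the p / n parameterization described above (names may differ between TF versions):

  import tensorflow as tf

  # Three independent binomial distributions sharing the same total count n.
  dist = tf.contrib.distributions.Binomial(n=5., p=[.1, .4, .9])

  # Probability of observing these counts under each distribution.
  probs = dist.prob([1., 2., 4.])

  with tf.Session() as sess:
    print(sess.run(probs))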

tf.contrib.learn.LinearRegressor.model_dir

tf.contrib.learn.LinearRegressor.model_dir

tf.contrib.learn.monitors.LoggingTrainable.epoch_end()

tf.contrib.learn.monitors.LoggingTrainable.epoch_end(epoch) End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or epoch number does not match.

tf.contrib.learn.monitors.CheckpointSaver.step_end()

tf.contrib.learn.monitors.CheckpointSaver.step_end(step, output) Callback after training step finished.

This callback provides access to the tensors/ops evaluated at this step, including the additional tensors for which evaluation was requested in step_begin. In addition, the callback has the opportunity to stop training by returning True. This is useful for early stopping, for example.

Note that this method is not called if the call to Session.run() that followed the last call to step_begin() failed.
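
A hedged sketch of how a custom monitor might use this hook for early stopping; the monitor class and loss-tensor names below are illustrative, not part of the CheckpointSaver API:

  import tensorflow as tf

  class StopOnLowLoss(tf.contrib.learn.monitors.BaseMonitor):
    """Illustrative monitor: stops training once the loss drops below a threshold."""

    def __init__(self, loss_tensor_name, threshold):
      super(StopOnLowLoss, self).__init__()
      self._loss_tensor_name = loss_tensor_name
      self._threshold = threshold

    def step_begin(self, step):
      # Request the loss tensor so its value appears in step_end's `output` dict.
      return [self._loss_tensor_name]

    def step_end(self, step, output):
      # Returning True requests that training stop (early stopping).
      return output[self._loss_tensor_name] < self._threshold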

tf.contrib.learn.monitors.ExportMonitor.export_dir

tf.contrib.learn.monitors.ExportMonitor.export_dir

tf.contrib.graph_editor.reroute_b2a_inputs()

tf.contrib.graph_editor.reroute_b2a_inputs(sgv0, sgv1) Re-route all the inputs of sgv1 to sgv0 (see reroute_inputs).
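
A minimal sketch, assuming the ge.sgv helper for building subgraph views; the ops and semantics comment below are illustrative:

  import tensorflow as tf
  ge = tf.contrib.graph_editor

  a = tf.constant(1., name="a")
  b = tf.constant(2., name="b")
  c = tf.add(a, b, name="c")      # c's inputs are a and b
  d = tf.constant(3., name="d")
  e = tf.add(d, d, name="e")      # e's inputs are d and d

  sgv0 = ge.sgv(e.op)
  sgv1 = ge.sgv(c.op)
  # Per the docstring above, the inputs of sgv1 (a and b) are re-routed to sgv0,
  # so e should now compute a + b.
  ge.reroute_b2a_inputs(sgv0, sgv1)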

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size

tf.contrib.learn.monitors.StepCounter.begin()

tf.contrib.learn.monitors.StepCounter.begin(max_steps=None) Called at the beginning of training.

When called, the default graph is the one we are executing.

Args:
  max_steps: int, the maximum global step this training will run until.

Raises:
  ValueError: if we've already begun a run.

tf.assert_greater_equal()

tf.assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None) Assert the condition x >= y holds element-wise.

Example of adding a dependency to an operation:

  with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
    output = tf.reduce_sum(x)

Example of adding dependency to the tensor being checked:

  x = tf.with_dependencies([tf.assert_greater_equal(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] >= y[i].
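
A self-contained sketch of the control-dependency pattern above; the tensor values are illustrative:

  import tensorflow as tf

  x = tf.constant([2., 3., 4.])
  y = tf.constant([1., 3., 4.])

  with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
    output = tf.reduce_sum(x)

  with tf.Session() as sess:
    # Prints 9.0; raises InvalidArgumentError instead if any x[i] < y[i].
    print(sess.run(output))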