tf.contrib.learn.monitors.GraphDump.epoch_end()

tf.contrib.learn.monitors.GraphDump.epoch_end(epoch)

End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or the epoch number does not match.

tf.contrib.learn.monitors.GraphDump.epoch_begin()

tf.contrib.learn.monitors.GraphDump.epoch_begin(epoch)

Begin epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've already begun an epoch, or epoch < 0.

tf.contrib.learn.monitors.GraphDump.end()

tf.contrib.learn.monitors.GraphDump.end(session=None)

Callback at the end of training/evaluation.

Args:
  session: A tf.Session object that can be used to run ops.

Raises:
  ValueError: if we've not begun a run.
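The epoch_begin/epoch_end/end contracts above describe a small state machine: an epoch must be open before it can be closed, epoch numbers must match, and end() requires that a run has begun. The following is a hypothetical, minimal pure-Python analog of that validation logic (not the real BaseMonitor code; class and attribute names are illustrative):

```python
class EpochTrackingMonitor:
    """Illustrative analog of the monitor epoch lifecycle described above."""

    def __init__(self):
        self._begun = False          # has begin() been called?
        self._current_epoch = None   # epoch number set by epoch_begin()

    def begin(self, max_steps=None):
        self._begun = True

    def end(self, session=None):
        if not self._begun:
            raise ValueError("end() called before begin()")
        self._begun = False

    def epoch_begin(self, epoch):
        if self._current_epoch is not None:
            raise ValueError("epoch_begin() called while an epoch is active")
        if epoch < 0:
            raise ValueError("epoch must be >= 0, got %d" % epoch)
        self._current_epoch = epoch

    def epoch_end(self, epoch):
        if self._current_epoch is None:
            raise ValueError("epoch_end() called before epoch_begin()")
        if epoch != self._current_epoch:
            raise ValueError("epoch mismatch: expected %d, got %d"
                             % (self._current_epoch, epoch))
        self._current_epoch = None
```

A well-behaved training loop calls these strictly in begin → epoch_begin(n) → epoch_end(n) → end order; any other ordering raises ValueError, matching the Raises clauses documented above.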

tf.contrib.learn.monitors.GraphDump.data

tf.contrib.learn.monitors.GraphDump.data

tf.contrib.learn.monitors.GraphDump.compare()

tf.contrib.learn.monitors.GraphDump.compare(other_dump, step, atol=1e-06)

Compares two GraphDump monitors and returns differences.

Args:
  other_dump: Another GraphDump monitor.
  step: int, the step to compare on.
  atol: float, absolute tolerance for comparing floating-point arrays.

Returns:
  A tuple:
    matched: list of keys that matched.
    non_matched: dict mapping each mismatched key to a tuple of the two
      mismatched values.

Raises:
  ValueError: if a key in data is missing from other_dump at step.
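The comparison contract above can be sketched in plain numpy. This is a hypothetical analog operating on two {tensor_name: value} dicts (the per-step data each dump holds), not the contrib implementation; the function name compare_dumps is illustrative:

```python
import numpy as np

def compare_dumps(data, other_data, atol=1e-06):
    """Return (matched, non_matched) per the compare() contract above."""
    matched = []
    non_matched = {}
    for key, value in data.items():
        if key not in other_data:
            # Mirrors the documented ValueError for a missing key.
            raise ValueError("key %r missing from other dump" % key)
        other_value = other_data[key]
        # Values within the absolute tolerance count as a match.
        if np.allclose(np.asarray(value), np.asarray(other_value), atol=atol):
            matched.append(key)
        else:
            non_matched[key] = (value, other_value)
    return matched, non_matched
```

For example, comparing {"w": [1.0, 2.0]} against a dump whose "w" differs by less than atol reports "w" as matched, while a genuinely different value lands in non_matched with both values preserved for inspection.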

tf.contrib.learn.monitors.GraphDump.begin()

tf.contrib.learn.monitors.GraphDump.begin(max_steps=None)

tf.contrib.learn.monitors.GraphDump

class tf.contrib.learn.monitors.GraphDump

Dumps almost all tensors in the graph at every step.

Note: this is very expensive; prefer PrintTensor in production.

tf.contrib.learn.monitors.get_default_monitors()

tf.contrib.learn.monitors.get_default_monitors(loss_op=None, summary_op=None, save_summary_steps=100, output_dir=None, summary_writer=None)

Returns a default set of typically-used monitors.

Args:
  loss_op: Tensor, the loss tensor. This will be printed using PrintTensor
    at the default interval.
  summary_op: See SummarySaver.
  save_summary_steps: See SummarySaver.
  output_dir: See SummarySaver.
  summary_writer: See SummarySaver.

Returns:
  A list of monitors.
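The selection logic implied above (a PrintTensor-style monitor when loss_op is given, a SummarySaver-style monitor when summary writing is configured) can be sketched as follows. The monitor classes here are illustrative stand-ins, not the contrib implementations:

```python
class PrintTensorMonitor:
    """Stand-in for PrintTensor: prints named tensors every N steps."""
    def __init__(self, tensor_names, every_n=100):
        self.tensor_names = tensor_names
        self.every_n = every_n

class SummarySaverMonitor:
    """Stand-in for SummarySaver: saves summaries every save_steps steps."""
    def __init__(self, summary_op, save_steps=100, output_dir=None,
                 summary_writer=None):
        self.summary_op = summary_op
        self.save_steps = save_steps
        self.output_dir = output_dir
        self.summary_writer = summary_writer

def get_default_monitors(loss_op=None, summary_op=None,
                         save_summary_steps=100, output_dir=None,
                         summary_writer=None):
    """Assemble the monitor list: each monitor is added only when its
    controlling argument is provided."""
    monitors = []
    if loss_op is not None:
        monitors.append(PrintTensorMonitor([loss_op]))
    if summary_op is not None:
        monitors.append(SummarySaverMonitor(summary_op,
                                            save_steps=save_summary_steps,
                                            output_dir=output_dir,
                                            summary_writer=summary_writer))
    return monitors
```

Calling it with no arguments yields an empty list; passing only loss_op yields a single loss-printing monitor, matching the documented behavior of printing the loss at the default interval.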

tf.contrib.learn.monitors.ExportMonitor.__init__()

tf.contrib.learn.monitors.ExportMonitor.__init__(*args, **kwargs)

Initializes ExportMonitor. (deprecated arguments)

SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
Instructions for updating: the signature of the input_fn accepted by export
is changing to be consistent with what's used by tf.Learn Estimator's
train/evaluate. input_fn (and in most cases, input_feature_key) will both
become required args.

Args:
  every_n_steps: Run monitor every N steps.
  export_dir: str, f

tf.contrib.learn.monitors.ExportMonitor.step_end()

tf.contrib.learn.monitors.ExportMonitor.step_end(step, output)

Overrides BaseMonitor.step_end. When overriding this method, you must call
the super implementation.

Args:
  step: int, the current value of the global step.
  output: dict mapping string values representing tensor names to the values
    resulting from running these tensors. Values may be either scalars, for
    scalar tensors, or Numpy arrays, for non-scalar tensors.

Returns:
  bool, the result of every_n_step_end, if that was called this step.
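The every-N dispatch that step_end describes (forward to every_n_step_end only on selected steps, returning its result or None) can be sketched as a hypothetical analog; the class below is illustrative, not the contrib EveryN base class:

```python
class EveryNMonitor:
    """Illustrative analog of the every-N step dispatch described above."""

    def __init__(self, every_n_steps=100):
        self._every_n_steps = every_n_steps

    def every_n_step_end(self, step, output):
        # Subclasses override this hook; returning True requests an early
        # stop of training.
        return False

    def step_end(self, step, output):
        # Only dispatch to the hook every N steps; otherwise report nothing.
        if step % self._every_n_steps == 0:
            return self.every_n_step_end(step, output)
        return None
```

Under this sketch, step_end returns the hook's bool on dispatch steps and None on all others, which is why callers must distinguish False (hook ran, no stop requested) from None (hook did not run).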