tf.contrib.learn.monitors.StepCounter.epoch_end()

tf.contrib.learn.monitors.StepCounter.epoch_end(epoch)

End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or epoch number does not match.

tf.contrib.learn.train()

tf.contrib.learn.train(graph, output_dir, train_op, loss_op, global_step_tensor=None, init_op=None, init_feed_dict=None, init_fn=None, log_every_steps=10, supervisor_is_chief=True, supervisor_master='', supervisor_save_model_secs=600, keep_checkpoint_max=5, supervisor_save_summaries_steps=100, feed_fn=None, steps=None, fail_on_nan_loss=True, monitors=None, max_steps=None)

Train a model.

Given graph, a directory to write outputs to (output_dir), and some ops, run a training loop. The given train_op performs one step of training on the model.
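
A minimal sketch of how this loop might be driven, assuming a pre-1.0 TensorFlow build where tf.contrib.learn and tf.contrib.framework are available; the toy linear-regression graph and the output directory are illustrative only:

  import numpy as np
  import tensorflow as tf
  from tensorflow.contrib import framework, learn

  graph = tf.Graph()
  with graph.as_default():
    global_step = framework.get_or_create_global_step()
    x = tf.constant(np.random.rand(100, 1), dtype=tf.float32)
    y = 3.0 * x + 1.0
    w = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(
        loss, global_step=global_step)

  # Runs the training loop, logging loss and writing checkpoints to output_dir.
  final_loss = learn.train(graph=graph,
                           output_dir='/tmp/linreg_example',  # illustrative path
                           train_op=train_op,
                           loss_op=loss,
                           steps=200,
                           log_every_steps=50)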

tf.contrib.bayesflow.stochastic_tensor.BinomialTensor.input_dict

tf.contrib.bayesflow.stochastic_tensor.BinomialTensor.input_dict

tf.contrib.distributions.LaplaceWithSoftplusScale.sample_n()

tf.contrib.distributions.LaplaceWithSoftplusScale.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
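
A small usage sketch, assuming a TensorFlow release where tf.contrib.distributions exists; the constructor argument names (loc, scale) are an assumption, not taken from this entry:

  import tensorflow as tf
  from tensorflow.contrib import distributions

  # scale is passed through softplus, so raw (even negative) values are accepted.
  dist = distributions.LaplaceWithSoftplusScale(loc=[0.0, 1.0], scale=[-1.0, 2.0])
  samples = dist.sample_n(n=5, seed=42)  # n is prepended: shape (5, 2)

  with tf.Session() as sess:
    print(sess.run(samples).shape)  # (5, 2)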

tf.contrib.graph_editor.matcher.input_ops()

tf.contrib.graph_editor.matcher.input_ops(*args) Add input matches.

tf.sub()

tf.sub(x, y, name=None)

Returns x - y element-wise.

NOTE: Sub supports broadcasting.

Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as x.
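
A quick sketch of the broadcasting behaviour, assuming a pre-1.0 TensorFlow build in which tf.sub has not yet been renamed to tf.subtract:

  import tensorflow as tf

  x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
  y = tf.constant([1.0, 1.0])    # broadcast across the rows of x
  diff = tf.sub(x, y)            # element-wise x - y

  with tf.Session() as sess:
    print(sess.run(diff))        # [[0. 1.] [2. 3.]]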

tf.contrib.graph_editor.SubGraphView.__enter__()

tf.contrib.graph_editor.SubGraphView.__enter__()

Allow a Python context to minimize the lifetime of a subgraph view.

A subgraph view is meant to be a lightweight and transient object. A short lifetime alleviates the "out-of-sync" issue mentioned earlier. For that reason, a SubGraphView instance can be used within a Python context. For example:

  from tensorflow.contrib import graph_editor as ge
  with ge.make_sgv(...) as sgv:
    print(sgv)

Returns:
  Itself.

tf.contrib.learn.monitors.StopAtStep.epoch_end()

tf.contrib.learn.monitors.StopAtStep.epoch_end(epoch)

End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or epoch number does not match.

tf.contrib.distributions.LaplaceWithSoftplusScale.dtype

tf.contrib.distributions.LaplaceWithSoftplusScale.dtype The DType of Tensors handled by this Distribution.

tf.contrib.framework.get_or_create_global_step()

tf.contrib.framework.get_or_create_global_step(graph=None)

Returns and creates (if necessary) the global step variable.

Args:
  graph: The graph in which to create the global step. If missing, use the default graph.

Returns:
  The tensor representing the global step variable.
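
A minimal sketch, assuming tf.contrib.framework is available; it illustrates the get-or-create behaviour, where a second call returns the existing variable instead of creating another:

  import tensorflow as tf
  from tensorflow.contrib import framework

  graph = tf.Graph()
  with graph.as_default():
    global_step = framework.get_or_create_global_step()
    # A second call (with or without passing the graph) returns the same variable.
    assert framework.get_or_create_global_step(graph) is global_step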