tf.contrib.distributions.Uniform.b

tf.contrib.learn.monitors.ExportMonitor.run_on_all_workers

tf.contrib.bayesflow.stochastic_tensor.TransformedDistributionTensor.graph

tf.contrib.graph_editor.reroute_a2b()

tf.contrib.graph_editor.reroute_a2b(sgv0, sgv1) Re-route the inputs and outputs of sgv0 to sgv1 (see _reroute).

tf.trace()

tf.trace(x, name=None)

Compute the trace of a tensor x. trace(x) returns the sum of the elements along the main diagonal.

For example:

# 'x' is [[1, 1],
#         [1, 1]]
tf.trace(x) ==> 2

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15

Args:
  x: 2-D tensor.
  name: A name for the operation (optional).

Returns:
  The trace of input tensor.
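The same computation can be checked with plain NumPy (used here only for illustration; tf.trace mirrors numpy.trace for 2-D inputs):

```python
import numpy as np

# Trace of a 2-D tensor: the sum of the entries on the main diagonal.
x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.trace(x))  # 1 + 5 + 9 = 15
```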

tf.contrib.learn.LinearRegressor.model_dir

tf.contrib.learn.monitors.LoggingTrainable.epoch_end()

tf.contrib.learn.monitors.LoggingTrainable.epoch_end(epoch)

End epoch.

Args:
  epoch: int, the epoch number.

Raises:
  ValueError: if we've not begun an epoch, or epoch number does not match.
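The epoch_begin/epoch_end contract above can be illustrated with a plain-Python sketch (a hypothetical EpochTracker class, not the actual tf.contrib.learn monitor internals):

```python
class EpochTracker:
    """Illustrative stand-in for a monitor's epoch bookkeeping."""

    def __init__(self):
        self._current_epoch = None

    def epoch_begin(self, epoch):
        self._current_epoch = epoch

    def epoch_end(self, epoch):
        # Mirrors the documented contract: raise ValueError if no epoch
        # has begun, or if the epoch number does not match.
        if self._current_epoch is None:
            raise ValueError("epoch_end called before epoch_begin")
        if epoch != self._current_epoch:
            raise ValueError("epoch number %d does not match %d"
                             % (epoch, self._current_epoch))
        self._current_epoch = None


tracker = EpochTracker()
tracker.epoch_begin(0)
tracker.epoch_end(0)  # OK; a mismatched epoch number would raise ValueError
```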

tf.contrib.learn.monitors.CheckpointSaver.step_end()

tf.contrib.learn.monitors.CheckpointSaver.step_end(step, output)

Callback after training step finished. This callback provides access to the tensors/ops evaluated at this step, including the additional tensors for which evaluation was requested in step_begin. In addition, the callback has the opportunity to stop training by returning True; this is useful for early stopping, for example.

Note that this method is not called if the call to Session.run() that followed the last call to step_begin() failed.
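The early-stopping mechanism described above (step_end returning True to halt training) can be sketched in plain Python. The step_end function, the 'loss' key in output, and the training loop below are all hypothetical illustrations, not the tf.contrib.learn internals:

```python
def step_end(step, output, loss_threshold=0.01):
    """Illustrative step_end: request early stop once loss falls below
    a threshold. `output` is assumed to be a dict of evaluated tensor
    values containing a 'loss' entry (an assumption for this sketch)."""
    return output.get("loss", float("inf")) < loss_threshold


# Hypothetical training loop honoring the callback's return value:
# returning True from step_end stops training early.
losses = [0.5, 0.1, 0.005, 0.001]
for step, loss in enumerate(losses):
    if step_end(step, {"loss": loss}):
        break
```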

tf.contrib.learn.monitors.ExportMonitor.export_dir

tf.contrib.graph_editor.reroute_b2a_inputs()

tf.contrib.graph_editor.reroute_b2a_inputs(sgv0, sgv1) Re-route all the inputs of sgv1 to sgv0 (see reroute_inputs).