tf.contrib.metrics.set_difference()

tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)

Compute set difference of elements in the last dimension of a and b.

All but the last dimension of a and b must match.

Args:
  a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
  b: Tensor or SparseTensor of the same type as a. Must be SparseTensor if a is SparseTensor. If sparse, indices must be sorted in row-major order.
  aminusb: Whether to subtract b from a, vs vice versa.
  validate_indices: Whether to validate the order and range of sparse indices in a and b.

Returns:
  A SparseTensor with the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the differences.
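A minimal usage sketch, assuming a TF 0.x/1.x build where tf.contrib.metrics is available and a graph-mode tf.Session; each row of the dense inputs is treated as a set, and the result comes back as a SparseTensor.

```python
import tensorflow as tf

# Each row is treated as a set; the result is a SparseTensor of per-row differences.
a = tf.constant([[1, 2, 3, 4],
                 [5, 6, 7, 8]], dtype=tf.int64)
b = tf.constant([[2, 4, 6, 8],
                 [1, 3, 5, 7]], dtype=tf.int64)

diff = tf.contrib.metrics.set_difference(a, b)                 # elements of a not in b
rev = tf.contrib.metrics.set_difference(a, b, aminusb=False)   # elements of b not in a

with tf.Session() as sess:
    print(sess.run(diff))  # row 0 -> {1, 3}, row 1 -> {6, 8}
    print(sess.run(rev))   # row 0 -> {6, 8}, row 1 -> {1, 3}
```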

tf.image.resize_nearest_neighbor()

tf.image.resize_nearest_neighbor(images, size, align_corners=None, name=None)

Resize images to size using nearest neighbor interpolation.

Args:
  images: A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64, half, float32, float64. 4-D with shape [batch, height, width, channels].
  size: A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images.
  align_corners: An optional bool. Defaults to False. If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as images. 4-D with shape [batch, new_height, new_width, channels].
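A short sketch of the call, assuming a TF 1.x graph-mode session and NHWC image layout; it upscales a 2x2 single-channel image to 4x4.

```python
import numpy as np
import tensorflow as tf

# 2x2 single-channel image with shape [batch, height, width, channels].
image = np.arange(4, dtype=np.float32).reshape(1, 2, 2, 1)
images = tf.constant(image)

# Nearest-neighbor upscale to 4x4; with the default align_corners=False each
# source pixel is simply repeated in a 2x2 block.
resized = tf.image.resize_nearest_neighbor(images, size=[4, 4])

with tf.Session() as sess:
    out = sess.run(resized)
    print(out.shape)        # (1, 4, 4, 1)
    print(out[0, :, :, 0])  # blocks of repeated source pixels
```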

tf.contrib.layers.summarize_tensor()

tf.contrib.layers.summarize_tensor(tensor, tag=None)

Summarize a tensor using a suitable summary type.

This function adds a summary op for tensor. The type of summary depends on the shape of tensor: for scalars a scalar_summary is created; for all other tensors a histogram_summary is used.

Args:
  tensor: The tensor to summarize.
  tag: The tag to use; if None, the tensor's op name is used.

Returns:
  The summary op created, or None for string tensors.
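A brief sketch, assuming a TF 0.x/1.x build with tf.contrib.layers available, showing how the summary type follows the tensor's shape.

```python
import tensorflow as tf

loss = tf.constant(0.5, name='loss')                               # scalar tensor
weights = tf.Variable(tf.random_normal([10, 10]), name='weights')  # rank-2 tensor

# A scalar tensor gets a scalar summary; anything else gets a histogram summary.
scalar_summary_op = tf.contrib.layers.summarize_tensor(loss)
histogram_summary_op = tf.contrib.layers.summarize_tensor(weights, tag='weights')
```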

tf.contrib.bayesflow.stochastic_tensor.InverseGammaWithSoftplusAlphaBetaTensor.value()

tf.contrib.bayesflow.stochastic_tensor.InverseGammaWithSoftplusAlphaBetaTensor.value(name='value')

tf.contrib.learn.monitors.CaptureVariable.step_begin()

tf.contrib.learn.monitors.CaptureVariable.step_begin(step)

Overrides BaseMonitor.step_begin.

When overriding this method, you must call the super implementation.

Args:
  step: int, the current value of the global step.

Returns:
  A list, the result of every_n_step_begin if that was called this step, or an empty list otherwise.

Raises:
  ValueError: if called more than once during a step.
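A sketch of the documented override contract, assuming a TF 0.x/1.x build with tf.contrib.learn.monitors; the subclass name and print statement are illustrative only.

```python
import tensorflow as tf

class LoggingCaptureVariable(tf.contrib.learn.monitors.CaptureVariable):
    def step_begin(self, step):
        # The super implementation must be called; it returns the list of
        # tensors requested for this step, or an empty list.
        requested = super(LoggingCaptureVariable, self).step_begin(step)
        if requested:
            print('step %d: requesting %d tensor(s)' % (step, len(requested)))
        return requested
```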

tf.contrib.distributions.WishartFull.get_batch_shape()

tf.contrib.distributions.WishartFull.get_batch_shape()

Shape of a single sample from a single event index as a TensorShape.

Same meaning as batch_shape. May be only partially defined.

Returns:
  batch_shape: TensorShape, possibly unknown.
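A small sketch, assuming a TF 0.x/1.x build with tf.contrib.distributions; batching two 3x3 scale matrices gives a batch shape of [2].

```python
import tensorflow as tf

# Two 3x3 identity scale matrices -> batch shape [2], event shape [3, 3].
scale = tf.eye(3, batch_shape=[2])
wishart = tf.contrib.distributions.WishartFull(df=5.0, scale=scale)

print(wishart.get_batch_shape())  # statically known: TensorShape([Dimension(2)])
```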

tf.delete_session_tensor()

tf.delete_session_tensor(handle, name=None)

Delete the tensor for the given tensor handle.

This is EXPERIMENTAL and subject to change.

The tensor is produced in a previous run() and stored in the state of the session.

Args:
  handle: The string representation of a persistent tensor handle.
  name: Optional name prefix for the return tensor.

Returns:
  A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation.
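A sketch of the handle lifecycle, assuming the TF 1.x session-handle API (tf.get_session_handle together with tf.delete_session_tensor) and a graph-mode tf.Session.

```python
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])
handle_op = tf.get_session_handle(a)   # persists the tensor in session state

with tf.Session() as sess:
    handle = sess.run(handle_op)       # TensorHandle; .handle is the string form

    # Build the deletion graph elements, then feed the handle string to run them.
    holder, deleter = tf.delete_session_tensor(handle.handle)
    sess.run(deleter, feed_dict={holder: handle.handle})
```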

tf.contrib.distributions.ExponentialWithSoftplusLam.mean()

tf.contrib.distributions.ExponentialWithSoftplusLam.mean(name='mean') Mean.
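A tiny sketch, assuming a TF 0.x/1.x build with tf.contrib.distributions and that the constructor takes the pre-softplus parameter as lam (an assumption based on the class name); the effective rate is softplus(lam), so the mean is 1 / softplus(lam).

```python
import tensorflow as tf

# `lam` as the constructor argument is assumed here; the rate used internally
# is softplus(lam).
dist = tf.contrib.distributions.ExponentialWithSoftplusLam(lam=[0.5, 2.0])
mean = dist.mean()

with tf.Session() as sess:
    print(sess.run(mean))  # elementwise 1 / softplus([0.5, 2.0])
```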

tf.python_io.TFRecordWriter.__enter__()

tf.python_io.TFRecordWriter.__enter__() Enter a with block.
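A minimal sketch showing why __enter__ matters: it lets the writer be used in a with block so the file is closed on exit. The output path and feature name are arbitrary.

```python
import tensorflow as tf

with tf.python_io.TFRecordWriter('example.tfrecords') as writer:
    for i in range(3):
        example = tf.train.Example(features=tf.train.Features(feature={
            'value': tf.train.Feature(int64_list=tf.train.Int64List(value=[i])),
        }))
        writer.write(example.SerializeToString())
```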

tf.contrib.learn.monitors.get_default_monitors()

tf.contrib.learn.monitors.get_default_monitors(loss_op=None, summary_op=None, save_summary_steps=100, output_dir=None, summary_writer=None)

Returns a default set of typically-used monitors.

Args:
  loss_op: Tensor, the loss tensor. This will be printed using PrintTensor at the default interval.
  summary_op: See SummarySaver.
  save_summary_steps: See SummarySaver.
  output_dir: See SummarySaver.
  summary_writer: See SummarySaver.

Returns:
  list of monitors.
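A sketch of a typical call, assuming a TF 0.x/1.x build with tf.contrib.learn; the loss tensor and output directory are placeholders for illustration.

```python
import tensorflow as tf

loss = tf.reduce_mean(tf.square(tf.random_normal([10])), name='loss')

default_monitors = tf.contrib.learn.monitors.get_default_monitors(
    loss_op=loss,
    save_summary_steps=100,
    output_dir='/tmp/model_dir')

# Typically passed to an Estimator's fit() call via its `monitors` argument.
```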