tf.contrib.graph_editor.SubGraphView.remove_unused_ops()

tf.contrib.graph_editor.SubGraphView.remove_unused_ops(control_inputs=True)

Remove unused ops.

Args:
  control_inputs: if True, control inputs are used to detect used ops.

Returns:
  A new subgraph view which only contains used operations.
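
A minimal usage sketch, assuming a TensorFlow release that still ships tf.contrib.graph_editor; the tiny graph and the ge.sgv helper call are illustrative assumptions rather than part of this entry:

  import tensorflow as tf
  from tensorflow.contrib import graph_editor as ge

  g = tf.Graph()
  with g.as_default():
      a = tf.constant(1.0, name="a")
      b = tf.constant(2.0, name="b")
      c = tf.add(a, b, name="c")
      d = tf.constant(3.0, name="d")   # feeds nothing
      e = tf.identity(c, name="e")     # consumes c outside the view below

  # View over a, b, c, d only; e stays outside, so c:0 is the view's output.
  sgv = ge.sgv([a.op, b.op, c.op, d.op])
  pruned = sgv.remove_unused_ops()     # drops d, which feeds no output of the view
  print([op.name for op in pruned.ops])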

tf.sparse_concat()

tf.sparse_concat(concat_dim, sp_inputs, name=None, expand_nonconcat_dim=False)

Concatenates a list of SparseTensor along the specified dimension.

Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number. If expand_nonconcat_dim is False, all inputs' shapes must match, except for the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are allowed to vary along all dimensions except the concat dimension.
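
A minimal sketch of concatenating two sparse matrices along their columns; it assumes the 0.x/1.x-era tf.Session and tf.sparse_tensor_to_dense APIs, and the input values are made up:

  import tensorflow as tf

  # Two 2x2 sparse tensors, dense views [[1, 0], [0, 2]] and [[0, 3], [4, 0]].
  sp_a = tf.SparseTensor([[0, 0], [1, 1]], [1, 2], [2, 2])
  sp_b = tf.SparseTensor([[0, 1], [1, 0]], [3, 4], [2, 2])

  # Concatenate along the column dimension, giving a 2x4 sparse tensor.
  sp_ab = tf.sparse_concat(1, [sp_a, sp_b])

  with tf.Session() as sess:
      print(sess.run(tf.sparse_tensor_to_dense(sp_ab)))  # [[1 0 0 3], [0 2 4 0]]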

tf.contrib.framework.deprecated()

tf.contrib.framework.deprecated(date, instructions)

Decorator for marking functions or methods deprecated. This decorator logs a deprecation warning whenever the decorated function is called. It has the following format:

  <function> (from <module>) is deprecated and will be removed after <date>.
  Instructions for updating:
  <instructions>

<function> will include the class name if it is a method. It also edits the docstring of the function: ' (deprecated)' is appended to the first line of the docstring and a deprecation notice is prepended to the rest of the docstring.
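
A minimal sketch of decorating a function; the date, instructions, and my_add function are made-up illustrations:

  import tensorflow as tf

  @tf.contrib.framework.deprecated("2017-01-01", "Use tf.add instead.")
  def my_add(a, b):
      """Adds two tensors."""
      return tf.add(a, b)

  # The first line of the docstring now ends with ' (deprecated)', and every call
  # logs a warning carrying the date and instructions given above.
  print(my_add.__doc__)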

tf.contrib.distributions.LaplaceWithSoftplusScale.pmf()

tf.contrib.distributions.LaplaceWithSoftplusScale.pmf(value, name='pmf')

Probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.
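
Since the Laplace distribution is continuous, is_continuous is True and pmf raises TypeError, so pdf is the method to call. A minimal sketch, assuming the loc/scale constructor arguments of LaplaceWithSoftplusScale in that release:

  import tensorflow as tf

  dist = tf.contrib.distributions.LaplaceWithSoftplusScale(loc=0.0, scale=1.0)
  try:
      dist.pmf(0.5)              # continuous distribution, so this raises
  except TypeError:
      print("pmf is undefined for a continuous distribution; use pdf instead")

  density = dist.pdf(0.5)        # the well-defined density at 0.5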

tf.sparse_maximum()

tf.sparse_maximum(sp_a, sp_b, name=None)

Returns the element-wise max of two SparseTensors. Assumes the two SparseTensors have the same shape, i.e., no broadcasting.

Example:

  sp_zero = ops.SparseTensor([[0]], [0], [7])
  sp_one = ops.SparseTensor([[1]], [1], [7])
  res = tf.sparse_maximum(sp_zero, sp_one).eval()
  # "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).

Args:
  sp_a: a SparseTensor operand whose dtype is real, and indices lexicographically ordered.
  sp_b: the other SparseTensor operand with the same requirements (and the same shape).

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.pdf()

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.pdf(value, name='pdf')

Probability density function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if not is_continuous.
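
A minimal sketch, assuming the alpha/beta constructor arguments suggested by the class name; the values are illustrative:

  import tensorflow as tf

  dist = tf.contrib.distributions.GammaWithSoftplusAlphaBeta(alpha=2.0, beta=1.0)
  density = dist.pdf(1.5)        # Tensor of shape sample_shape(x) + batch_shape

  with tf.Session() as sess:
      print(sess.run(density))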

tf.SparseTensor.from_value()

tf.SparseTensor.from_value(cls, sparse_tensor_value)
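
A minimal sketch; it assumes from_value builds a SparseTensor from a SparseTensorValue (the plain-data triple of indices, values, and shape produced when a SparseTensor is evaluated):

  import tensorflow as tf

  # SparseTensorValue holds plain Python/NumPy data: (indices, values, shape).
  value = tf.SparseTensorValue([[0, 0], [1, 2]], [1.0, 2.0], [3, 4])
  sp = tf.SparseTensor.from_value(value)   # a graph-level SparseTensor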

tf.contrib.distributions.Gamma.sample_n()

tf.contrib.distributions.Gamma.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Additional documentation from Gamma: See the documentation for tf.random_gamma for more details.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
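
A minimal sketch, assuming the alpha/beta constructor arguments of Gamma in that release; the sample count and seed are arbitrary:

  import tensorflow as tf

  gamma = tf.contrib.distributions.Gamma(alpha=3.0, beta=2.0)
  samples = gamma.sample_n(5, seed=42)   # shape (5,) prepended to the batch shape

  with tf.Session() as sess:
      print(sess.run(samples))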

tf.contrib.graph_editor.SubGraphView.remap()

tf.contrib.graph_editor.SubGraphView.remap(new_input_indices=None, new_output_indices=None)

Remap the inputs and outputs of the subgraph. Note that this is only modifying the view: the underlying tf.Graph is not affected.

Args:
  new_input_indices: an iterable of integers representing a mapping between the old inputs and the new ones. This mapping can be under-complete and must be without repetitions.
  new_output_indices: an iterable of integers representing a mapping between the old outputs and the new ones.
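
A minimal sketch, assuming tf.contrib.graph_editor is available; the graph, the ge.sgv helper, and the chosen index are illustrative assumptions:

  import tensorflow as tf
  from tensorflow.contrib import graph_editor as ge

  g = tf.Graph()
  with g.as_default():
      a = tf.constant(1.0, name="a")
      b = tf.constant(2.0, name="b")
      c = tf.add(a, b, name="c")
      d = tf.identity(c, name="d")   # consumes c outside the view below

  sgv = ge.sgv([a.op, b.op, c.op])
  # Keep only output index 0; the underlying tf.Graph is not modified.
  remapped = sgv.remap(new_output_indices=[0])
  print(remapped.outputs)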

tf.contrib.distributions.Binomial.get_event_shape()

tf.contrib.distributions.Binomial.get_event_shape()

Shape of a single sample from a single batch as a TensorShape. Same meaning as event_shape. May be only partially defined.

Returns:
  event_shape: TensorShape, possibly unknown.
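
A minimal sketch, assuming the n/p constructor arguments of Binomial in that release; since Binomial is a scalar distribution, the event shape is expected to be an empty TensorShape:

  import tensorflow as tf

  binom = tf.contrib.distributions.Binomial(n=10.0, p=0.3)
  print(binom.get_event_shape())   # TensorShape([]) for a scalar distribution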