tf.cumprod()

tf.cumprod(x, axis=0, exclusive=False, reverse=False, name=None)

Compute the cumulative product of the tensor x along axis. By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

```prettyprint
tf.cumprod([a, b, c])  ==>  [a, a * b, a * b * c]
```

By setting the exclusive kwarg to True, an exclusive cumprod is performed instead:

```prettyprint
tf.cumprod([a, b, c], exclusive=True)  ==>  [1, a, a * b]
```

By setting the reverse kwarg to True, the cumprod is performed in the opposite direction:

```prettyprint
tf.cumprod([a, b, c], reverse=True)  ==>  [a * b * c, b * c, c]
```
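A minimal sketch of the three modes on a concrete input, assuming the 1.x-era graph API (tf.Session) that this reference describes:

```prettyprint
import tensorflow as tf

x = tf.constant([2.0, 3.0, 4.0])

inclusive = tf.cumprod(x)                  # [2., 6., 24.]
exclusive = tf.cumprod(x, exclusive=True)  # [1., 2., 6.]
reverse = tf.cumprod(x, reverse=True)      # [24., 12., 4.]

with tf.Session() as sess:
    print(sess.run([inclusive, exclusive, reverse]))
```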

tf.cross()

tf.cross(a, b, name=None)

Compute the pairwise cross product. a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

Args:
  a: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. A tensor containing 3-element vectors.
  b: A Tensor. Must have the same type as a. Another tensor of the same type and shape as a.
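A minimal sketch pairing two batches of 3-element vectors, assuming the 1.x-era graph API; the expected values follow from the standard basis (e1 × e2 = e3, e2 × e3 = e1):

```prettyprint
import tensorflow as tf

a = tf.constant([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])  # two 3-element vectors
b = tf.constant([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

c = tf.cross(a, b)  # row i is the cross product of a[i] and b[i]

with tf.Session() as sess:
    print(sess.run(c))  # [[0. 0. 1.], [1. 0. 0.]]
```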

tf.cos()

tf.cos(x, name=None)

Computes cos of x element-wise.

Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).

Returns:
  A Tensor. Has the same type as x.
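A minimal sketch, again assuming the 1.x-era graph API:

```prettyprint
import numpy as np
import tensorflow as tf

x = tf.constant([0.0, np.pi / 2, np.pi])
y = tf.cos(x)

with tf.Session() as sess:
    print(sess.run(y))  # approximately [1., 0., -1.]
```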

tf.contrib.util.stripped_op_list_for_graph()

tf.contrib.util.stripped_op_list_for_graph(graph_def)

Collect the stripped OpDefs for ops used by a graph. This function computes the stripped_op_list field of MetaGraphDef and similar protos. The result can be communicated from the producer to the consumer, which can then use the C++ function RemoveNewDefaultAttrsFromGraphDef to improve forwards compatibility.

Args:
  graph_def: A GraphDef proto, as from graph.as_graph_def().

Returns:
  An OpList of ops used by the graph.

Raises:
  ValueError: If an unregistered op is used.
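A minimal sketch on a tiny graph, assuming tf.contrib is available (1.x-era); the printed op names are illustrative:

```prettyprint
import tensorflow as tf
from tensorflow.contrib.util import stripped_op_list_for_graph

g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0)
    b = tf.constant(2.0)
    c = tf.add(a, b)

op_list = stripped_op_list_for_graph(g.as_graph_def())
print([op.name for op in op_list.op])  # e.g. ['Add', 'Const']
```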

tf.contrib.util.ops_used_by_graph_def()

tf.contrib.util.ops_used_by_graph_def(graph_def)

Collect the list of ops used by a graph. Does not validate that the ops are all registered.

Args:
  graph_def: A GraphDef proto, as from graph.as_graph_def().

Returns:
  A list of strings, each naming an op used by the graph.
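A minimal sketch, assuming tf.contrib is available; the exact op names depend on the TensorFlow version:

```prettyprint
import tensorflow as tf
from tensorflow.contrib.util import ops_used_by_graph_def

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None])
    y = tf.reduce_sum(tf.square(x))

print(ops_used_by_graph_def(g.as_graph_def()))
# e.g. ['Const', 'Placeholder', 'Square', 'Sum']
```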

tf.contrib.util.make_tensor_proto()

tf.contrib.util.make_tensor_proto(values, dtype=None, shape=None)

Create a TensorProto.

Args:
  values: Values to put in the TensorProto.
  dtype: Optional tensor_pb2 DataType value.
  shape: List of integers representing the dimensions of the tensor.

Returns:
  A TensorProto. Depending on the type, it may contain data in the "tensor_content" attribute, which is not directly useful to Python programs. To access the values you should convert the proto back to a numpy ndarray with tensor_util.MakeNdarray(proto).
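A minimal sketch showing the proto fields mentioned above, assuming tf.contrib is available:

```prettyprint
import numpy as np
from tensorflow.contrib.util import make_tensor_proto

proto = make_tensor_proto(np.arange(6, dtype=np.float32).reshape(2, 3))
print(proto.dtype)                # DataType enum value (DT_FLOAT)
print(proto.tensor_shape)         # dim { size: 2 } dim { size: 3 }
print(len(proto.tensor_content))  # raw serialized bytes, not directly usable from Python
```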

tf.contrib.util.make_ndarray()

tf.contrib.util.make_ndarray(tensor)

Create a numpy ndarray from a tensor, with the same shape and data as the tensor.

Args:
  tensor: A TensorProto.

Returns:
  A numpy array with the tensor contents.

Raises:
  TypeError: if tensor has unsupported type.
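A minimal round-trip sketch together with make_tensor_proto, assuming tf.contrib is available:

```prettyprint
import numpy as np
from tensorflow.contrib.util import make_ndarray, make_tensor_proto

proto = make_tensor_proto(np.array([[1.0, 2.0], [3.0, 4.0]]))
arr = make_ndarray(proto)  # back to a numpy ndarray
print(arr, arr.dtype, arr.shape)
```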

tf.contrib.util.constant_value()

tf.contrib.util.constant_value(tensor)

Returns the constant value of the given tensor, if efficiently calculable. This function attempts to partially evaluate the given tensor, and returns its value as a numpy ndarray if this succeeds.

TODO(mrry): Consider whether this function should use a registration mechanism like gradients and ShapeFunctions, so that it is easily extensible.

NOTE: If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor.
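A minimal sketch contrasting a statically known value with one that depends on a feed, assuming the 1.x-era API:

```prettyprint
import tensorflow as tf
from tensorflow.contrib.util import constant_value

c = tf.constant([1.0, 2.0, 3.0])
p = tf.placeholder(tf.float32)

print(constant_value(c))  # array([1., 2., 3.], dtype=float32)
print(constant_value(p))  # None -- the value depends on what is fed at run time
```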

tf.contrib.training.weighted_resample()

tf.contrib.training.weighted_resample(inputs, weights, overall_rate, scope=None, mean_decay=0.999, warmup=10, seed=None)

Performs an approximate weighted resampling of inputs. This method chooses elements from inputs where each item's rate of selection is proportional to its value in weights, and the average rate of selection across all inputs (and many invocations!) is overall_rate.

Args:
  inputs: A list of tensors whose first dimension is batch_size.
  weights: A [batch_size]-shaped tensor with each batch member's weight.
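A hedged sketch of how the documented parameters might be wired up, assuming the 1.x-era contrib API; the exact structure of the return value follows the full docstring (truncated above) and is not unpacked here:

```prettyprint
import tensorflow as tf
from tensorflow.contrib.training import weighted_resample

# A batch of 4 examples with per-example importance weights.
features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
weights = tf.constant([0.1, 0.1, 0.1, 3.0])  # the last example dominates

# Keep roughly half of the incoming examples on average,
# favouring high-weight examples.
resampled = weighted_resample([features], weights, overall_rate=0.5)
```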

tf.contrib.training.stratified_sample_unknown_dist()

tf.contrib.training.stratified_sample_unknown_dist(tensors, labels, probs, batch_size, enqueue_many=False, queue_capacity=16, threads_per_queue=1, name=None)

Stochastically creates batches based on per-class probabilities.

NOTICE: This sampler can be significantly slower than stratified_sample due to each thread discarding all examples not in its assigned class. This uses a number of threads proportional to the number of classes. See stratified_sample for an implementation that discards fewer examples.
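A hedged sketch of a queue-based pipeline, assuming the 1.x-era contrib API and assuming the function returns a (data_batch, label_batch) pair as stratified_sample does; treat that unpacking as an assumption:

```prettyprint
import tensorflow as tf
from tensorflow.contrib.training import stratified_sample_unknown_dist

# A stream of single (feature, label) examples; labels are ints in [0, 3).
feature = tf.random_normal([4])
label = tf.random_uniform([], maxval=3, dtype=tf.int32)

# Request batches of 8 whose class frequencies approximate `probs`.
data_batch, label_batch = stratified_sample_unknown_dist(
    [feature], label, probs=[0.5, 0.25, 0.25], batch_size=8)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(label_batch))
    coord.request_stop()
    coord.join(threads)
```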