tensorflow::PartialTensorShapeUtils::PartialShapeListString()

string tensorflow::PartialTensorShapeUtils::PartialShapeListString(const gtl::ArraySlice<PartialTensorShape> &shapes)

tf.reduce_max()

tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1. If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
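
A minimal sketch of the reduction behavior described above; the input values and the 2x3 shape are illustrative assumptions, not part of the original documentation:

    import tensorflow as tf

    # Illustrative 2x3 input (assumed values).
    x = tf.constant([[1., 5., 3.],
                     [4., 2., 6.]])

    max_all  = tf.reduce_max(x)                          # all dims reduced -> scalar 6.0
    max_rows = tf.reduce_max(x, reduction_indices=[1])   # rank reduced by 1 -> [5., 6.]
    max_keep = tf.reduce_max(x, reduction_indices=[1],
                             keep_dims=True)             # dim retained -> shape (2, 1)

    with tf.Session() as sess:
        print(sess.run([max_all, max_rows, max_keep]))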

tf.sparse_tensor_to_dense()

tf.sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)

Converts a SparseTensor into a dense tensor.

This op is a convenience wrapper around sparse_to_dense for SparseTensors.

For example, if sp_input has shape [3, 5] and non-empty string values:

    [0, 1]: a
    [0, 3]: b
    [2, 0]: c

and default_value is x, then the output will be a dense [3, 5] string tensor with values:

    [[x a x b x]
     [x x x x x]
     [c x x x x]]

Indices must be without repeats. This is only tested if validate_indices is True.
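
A minimal sketch reproducing the example above in code; the SparseTensor construction is an assumption about how the input would be built:

    import tensorflow as tf

    # Sparse input matching the example: shape [3, 5], three string values.
    sp = tf.SparseTensor([[0, 1], [0, 3], [2, 0]], ['a', 'b', 'c'], [3, 5])
    dense = tf.sparse_tensor_to_dense(sp, default_value='x')

    with tf.Session() as sess:
        print(sess.run(dense))  # [[x a x b x] [x x x x x] [c x x x x]]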

tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape()

tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape(dtype, shape=None, scope=None)

Create a tf.placeholder for the Graph Editor.

Note that the correct graph scope must be set by the calling function. The placeholder is named using the function placeholder_name (with no tensor argument).

Args:
  dtype: the tensor type.
  shape: the tensor shape (optional).
  scope: absolute scope within which to create the placeholder. None means that the scope of t is preserved. "" means the root scope.
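
A minimal sketch of creating such a placeholder; the dtype, shape, and scope values are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.contrib import graph_editor as ge

    # Placeholder with an assumed float32 dtype and (None, 128) shape,
    # created under an assumed absolute scope "inputs".
    ph = ge.make_placeholder_from_dtype_and_shape(tf.float32,
                                                  shape=(None, 128),
                                                  scope="inputs")
    print(ph)  # named by placeholder_name() within the "inputs" scope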

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.sample_n()

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Additional documentation from Gamma: See the documentation for tf.random_gamma for more details.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
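
A minimal sketch of drawing samples; the alpha and beta parameters are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.contrib import distributions as ds

    # Batch of one Gamma distribution; softplus is applied to alpha and beta.
    dist = ds.GammaWithSoftplusAlphaBeta(alpha=[2.0], beta=[3.0])
    samples = dist.sample_n(5, seed=42)  # shape (5, 1): n prepended to batch shape

    with tf.Session() as sess:
        print(sess.run(samples))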

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)

Initialize the parameters for an LSTM cell.

Args:
  num_units: int, The number of units in the LSTM cell.
  use_peepholes: bool, set True to enable diagonal/peephole connections.
  initializer: (optional) The initializer to use for the weight and projection matrices.
  num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
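
A minimal sketch of constructing the cell and stepping it once; the unit count, batch size, and input dimension are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.contrib import rnn

    cell = rnn.CoupledInputForgetGateLSTMCell(num_units=64,
                                              use_peepholes=True,
                                              state_is_tuple=True)
    inputs = tf.placeholder(tf.float32, [32, 128])  # assumed batch 32, input dim 128
    state = cell.zero_state(32, tf.float32)
    output, new_state = cell(inputs, state)         # output shape (32, 64)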

tf.contrib.distributions.DirichletMultinomial.variance()

tf.contrib.distributions.DirichletMultinomial.variance(name='variance')

Variance.

Additional documentation from DirichletMultinomial:

The variance for each batch member is defined as the following:

    Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) * (n + alpha_0) / (1 + alpha_0)

where alpha_0 = sum_j alpha_j.

The covariance between elements in a batch is defined as:

    Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 * (n + alpha_0) / (1 + alpha_0)
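
A minimal sketch that evaluates the variance formula above; the n and alpha values are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.contrib import distributions as ds

    # 10 trials over 3 classes with assumed concentration alpha = [1, 2, 3],
    # so alpha_0 = 6.
    dist = ds.DirichletMultinomial(n=10., alpha=[1., 2., 3.])
    var = dist.variance()  # shape (3,): Var(X_j) per class

    with tf.Session() as sess:
        print(sess.run(var))
        # e.g. Var(X_0) = 10 * (1/6) * (1 - 1/6) * (10 + 6) / (1 + 6)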

tf.nn.rnn_cell.BasicLSTMCell.__call__()

tf.nn.rnn_cell.BasicLSTMCell.__call__(inputs, state, scope=None) Long short-term memory cell (LSTM).
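
A minimal sketch of a single call; the unit count, batch size, and input dimension are illustrative assumptions:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=64, state_is_tuple=True)
    inputs = tf.placeholder(tf.float32, [32, 100])  # assumed batch 32, input dim 100
    state = cell.zero_state(32, tf.float32)
    output, new_state = cell(inputs, state)         # output shape (32, 64)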

tf.contrib.distributions.BernoulliWithSigmoidP.prob()

tf.contrib.distributions.BernoulliWithSigmoidP.prob(value, name='prob')

Probability density/mass function (depending on is_continuous).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
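
A minimal sketch; the p parameter (a logit passed through a sigmoid) and the query values are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.contrib import distributions as ds

    # sigmoid(0.0) = 0.5, so both outcomes are equally likely.
    dist = ds.BernoulliWithSigmoidP(p=[0.0])
    probs = dist.prob([0., 1.])  # [0.5, 0.5]

    with tf.Session() as sess:
        print(sess.run(probs))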

tf.contrib.learn.monitors.SummarySaver.run_on_all_workers