tf.contrib.learn.BaseEstimator.evaluate()

tf.contrib.learn.BaseEstimator.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None)

See Evaluable.

Raises:
  ValueError: If at least one of x or y is provided, and at least one of input_fn or feed_fn is also provided. Or if metrics is neither None nor a dict.
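The mutually exclusive argument rules above can be sketched in plain Python. The function name `check_evaluate_args` is illustrative only, not part of the TensorFlow API; it simply mirrors the two documented ValueError conditions:

```python
def check_evaluate_args(x=None, y=None, input_fn=None, feed_fn=None,
                        metrics=None):
    """Mirror the ValueError conditions documented for evaluate()."""
    # (x, y) and (input_fn, feed_fn) are mutually exclusive input modes.
    if (x is not None or y is not None) and \
       (input_fn is not None or feed_fn is not None):
        raise ValueError(
            'Cannot provide both (x, y) and (input_fn, feed_fn).')
    # metrics must be None or a dict mapping names to metric functions.
    if metrics is not None and not isinstance(metrics, dict):
        raise ValueError('metrics must be None or a dict.')
```

Passing only one input mode and a dict (or None) for metrics passes the check; mixing modes raises.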

tf.contrib.learn.monitors.StepCounter.step_end()

tf.contrib.learn.monitors.StepCounter.step_end(step, output)

Overrides BaseMonitor.step_end. When overriding this method, you must call the super implementation.

Args:
  step: int, the current value of the global step.
  output: dict mapping string values representing tensor names to the value resulting from running these tensors. Values may be either scalars, for scalar tensors, or a Numpy array, for non-scalar tensors.

Returns:
  bool, the result of every_n_step_end, if that was called this step, or False otherwise.
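The every-N dispatch described above can be sketched in plain Python. The class below is illustrative only, not the actual BaseMonitor implementation; it shows step_end calling an every_n_step_end hook on qualifying steps and returning False otherwise:

```python
class EveryNSketch(object):
    """Sketch of step_end delegating to every_n_step_end every N steps."""

    def __init__(self, every_n_steps=100):
        self._every_n = every_n_steps
        self._last_triggered = None  # global step of the last hook call

    def every_n_step_end(self, step, output):
        # Subclasses (like StepCounter) would compute per-step stats here.
        # Returning True requests that training stop.
        return False

    def step_end(self, step, output):
        # Call the every-N hook only on qualifying steps; otherwise False.
        if self._last_triggered is None or \
                step - self._last_triggered >= self._every_n:
            self._last_triggered = step
            return self.every_n_step_end(step, output)
        return False
```

A StepCounter-style subclass would override every_n_step_end to measure steps per second between triggers.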

tf.string_split()

tf.string_split(source, delimiter=' ')

Split elements of source based on delimiter into a SparseTensor.

Let N be the size of source (typically N will be the batch size). Split each element of source based on delimiter and return a SparseTensor containing the split tokens. Empty tokens are ignored. If delimiter is an empty string, each element of source is split into individual single-character strings.

For example: N = 2, source[0] is 'hello world' and source[1] is 'a b c', then the output will be

  st.indices = [0, 0; 0, 1; 1, 0; 1, 1; 1, 2]
  st.shape = [2, 3]
  st.values = ['hello', 'world', 'a', 'b', 'c']
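The split semantics can be mimicked in plain Python to show the sparse layout. This is an illustrative re-implementation, not the TF op; it returns the (indices, values, dense shape) triple that the SparseTensor would carry:

```python
def string_split_sketch(source, delimiter=' '):
    """Return (indices, values, shape) mimicking tf.string_split."""
    indices, values = [], []
    max_tokens = 0
    for row, s in enumerate(source):
        if delimiter == '':
            tokens = list(s)  # split into single-character strings
        else:
            # Empty tokens are ignored, as in the TF op.
            tokens = [t for t in s.split(delimiter) if t]
        max_tokens = max(max_tokens, len(tokens))
        for col, tok in enumerate(tokens):
            indices.append([row, col])
            values.append(tok)
    return indices, values, [len(source), max_tokens]
```

For the example above, string_split_sketch(['hello world', 'a b c']) yields values ['hello', 'world', 'a', 'b', 'c'] and dense shape [2, 3].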

tf.contrib.learn.read_batch_features()

tf.contrib.learn.read_batch_features(file_pattern, batch_size, features, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, feature_queue_capacity=100, reader_num_threads=1, parser_num_threads=1, parse_fn=None, name=None)

Adds operations to read, queue, batch and parse Example protos. Given file pattern (or list of files), will setup a queue for file names, read Example proto using provided reader, use batch queue to create batches of examples of size batch_size, and parse examples according to the given features specification.
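The read / batch / parse pipeline can be illustrated with a plain-Python generator (no queues or threads; the name `read_batches` and its signature are illustrative only, not the TF API):

```python
import itertools

def read_batches(records, batch_size, parse_fn=None, num_epochs=1):
    """Yield lists of (optionally parsed) records of size batch_size."""
    def record_stream():
        # Replay the input for each epoch, parsing each record as it flows.
        for _ in range(num_epochs):
            for rec in records:
                yield parse_fn(rec) if parse_fn else rec
    stream = record_stream()
    while True:
        batch = list(itertools.islice(stream, batch_size))
        if not batch:
            return
        yield batch  # the final batch may be smaller than batch_size
```

The real op additionally shuffles when randomize_input is True and overlaps reading and parsing across reader_num_threads / parser_num_threads.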

tf.contrib.bayesflow.stochastic_tensor.InverseGammaTensor.distribution

tf.contrib.bayesflow.stochastic_tensor.InverseGammaTensor.distribution

tf.contrib.distributions.Dirichlet.pdf()

tf.contrib.distributions.Dirichlet.pdf(value, name='pdf')

Probability density function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if not is_continuous.
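As a sketch of what pdf() evaluates for a single point, the Dirichlet density can be computed with the standard library alone (illustrative re-implementation; assumes value lies on the probability simplex with strictly positive entries):

```python
import math

def dirichlet_pdf(value, alpha):
    """Dirichlet density: Gamma(sum a) / prod Gamma(a_i) * prod x_i**(a_i - 1)."""
    # Work in log space for numerical stability, then exponentiate.
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    log_kernel = sum((a - 1.0) * math.log(x) for a, x in zip(alpha, value))
    return math.exp(log_norm + log_kernel)
```

For alpha = [1, 1, 1] the distribution is uniform over the 3-simplex, so the density is the constant Gamma(3) = 2 at every interior point.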

tf.contrib.graph_editor.filter_ops()

tf.contrib.graph_editor.filter_ops(ops, positive_filter)

Get the ops passing the given filter.

Args:
  ops: an object convertible to a list of tf.Operation.
  positive_filter: a function deciding whether to keep an operation or not. If positive_filter is True, all the operations are returned.

Returns:
  A list of selected tf.Operation.

Raises:
  TypeError: if ops cannot be converted to a list of tf.Operation.
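The filtering behavior, including the positive_filter-is-True special case, can be sketched without TensorFlow by operating on any list (illustrative only; the real function also converts ops and raises TypeError on failure):

```python
def filter_ops_sketch(ops, positive_filter):
    """Keep items passing positive_filter; if it is literally True, keep all."""
    ops = list(ops)
    if positive_filter is True:
        return ops
    return [op for op in ops if positive_filter(op)]
```

Typical use passes a predicate such as lambda op: op.type == 'MatMul'; passing True skips filtering entirely.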

tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor

class tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor Base Class for Tensor-like objects that emit stochastic values.

tf.contrib.graph_editor.SubGraphView.connected_inputs

tf.contrib.graph_editor.SubGraphView.connected_inputs The connected input tensors of this subgraph view.

tf.lbeta()

tf.lbeta(x, name='lbeta')

Computes ln(|Beta(x)|), reducing along the last dimension.

Given one-dimensional z = [z_0, ..., z_{K-1}], we define

  Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)

And for n + 1 dimensional x with shape [N1, ..., Nn, K], we define

  lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|)

In other words, the last dimension is treated as the z vector.

Note that if z = [u, v], then

  Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt,

which defines the traditional bivariate beta function.
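Along the last dimension, the definition above reduces to a sum of log-gamma terms, which the standard library computes directly. This is an illustrative re-implementation for a single 1-D z with positive entries, not the TF op:

```python
import math

def lbeta_1d(z):
    """ln(Beta(z)) = sum_j lgamma(z_j) - lgamma(sum_j z_j)."""
    return sum(math.lgamma(zj) for zj in z) - math.lgamma(sum(z))
```

For z = [2, 2], Beta(z) = Gamma(2) * Gamma(2) / Gamma(4) = 1/6, so lbeta_1d returns ln(1/6).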