tf.contrib.distributions.Normal.log_pmf()

tf.contrib.distributions.Normal.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.
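Normal is a continuous distribution, so calling log_pmf on it raises the TypeError noted above; log_pdf is the continuous counterpart. A minimal sketch, assuming the TF 0.x / early 1.x tf.contrib.distributions API in which Normal takes mu and sigma:

    import tensorflow as tf

    ds = tf.contrib.distributions

    # Normal is continuous: log_pmf raises TypeError, log_pdf gives the log density.
    normal = ds.Normal(mu=0.0, sigma=1.0)
    log_density = normal.log_pdf(2.0)   # log density at x = 2.0

    try:
        normal.log_pmf(2.0)             # expected to raise TypeError (is_continuous)
    except TypeError as e:
        print("log_pmf raised:", e)

    with tf.Session() as sess:
        print(sess.run(log_density))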

tf.contrib.training.NextQueuedSequenceBatch.save_state()

tf.contrib.training.NextQueuedSequenceBatch.save_state(state_name, value, name=None)

Returns an op to save the current batch of state state_name.

Args:
  state_name: string, matches a key provided in initial_states.
  value: A Tensor. Its type must match that of initial_states[state_name].dtype. If we had at input:

    initial_states[state_name].get_shape() == [d1, d2, ...]

  then the shape of value must match:

    tf.shape(value) == [batch_size, d1, d2, ...]

  name: string (optional). The name scope for newly created ops.
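For orientation, a heavily hedged graph-construction sketch, assuming the batch comes from tf.contrib.training.batch_sequences_with_states; the key, the sequence name "x", the state name "lstm_state", and the toy shapes are hypothetical, the "RNN step" is a stand-in, and actually running the graph would additionally require starting queue runners:

    import tensorflow as tf

    num_unroll, batch_size, num_units = 5, 4, 8

    # Toy single-sequence inputs; in practice these come from a file reader.
    key = tf.constant("seq_000")
    sequences = {"x": tf.zeros([20, num_units])}            # total length 20
    context = {}                                            # no per-sequence context
    initial_states = {"lstm_state": tf.zeros([num_units])}  # no batch dimension here

    batch = tf.contrib.training.batch_sequences_with_states(
        input_key=key,
        input_sequences=sequences,
        input_context=context,
        input_length=20,
        initial_states=initial_states,
        num_unroll=num_unroll,
        batch_size=batch_size)

    # Read the state saved for these segments, compute a new state (a real model
    # would run an RNN here), and save it back so the next num_unroll segment of
    # each sequence resumes from it.
    prev_state = batch.state("lstm_state")                  # [batch_size, num_units]
    inputs = batch.sequences["x"]                           # [batch_size, num_unroll, num_units]
    new_state = prev_state + tf.reduce_mean(inputs, axis=1) # stand-in for an RNN step
    save_op = batch.save_state("lstm_state", new_state)

    with tf.control_dependencies([save_op]):
        loss = tf.reduce_sum(new_state)                     # training op would go here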

tf.FixedLengthRecordReader.read()

tf.FixedLengthRecordReader.read(queue, name=None)

Returns the next record (key, value pair) produced by a reader.

Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).

Args:
  queue: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
  name: A name for the operation (optional).

Returns:
  A tuple of Tensors (key, value).
  key: A string scalar Tensor.
  value: A string scalar Tensor.
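A minimal queue-based pipeline sketch, assuming TF 1.x-era input queues; the filenames and the 16-byte record size are hypothetical:

    import tensorflow as tf

    # Hypothetical fixed-length binary files: each record is 16 bytes.
    filename_queue = tf.train.string_input_producer(
        ["data_0.bin", "data_1.bin"], num_epochs=1)

    reader = tf.FixedLengthRecordReader(record_bytes=16)
    key, value = reader.read(filename_queue)       # both string scalar Tensors
    record = tf.decode_raw(value, tf.uint8)        # raw bytes -> uint8 vector

    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer()) # num_epochs uses a local variable
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        print(sess.run([key, record]))
        coord.request_stop()
        coord.join(threads)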

tf.contrib.distributions.TransformedDistribution.inverse

tf.contrib.distributions.TransformedDistribution.inverse

Inverse function of transform, y => x.

tf.contrib.learn.DNNRegressor.evaluate()

tf.contrib.learn.DNNRegressor.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None)

See Evaluable.

Raises:
  ValueError: If at least one of x or y is provided, and at least one of input_fn or feed_fn is provided. Also raised if metrics is neither None nor a dict.
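A minimal sketch, assuming the pre-tf.estimator tf.contrib.learn API; the feature name "x", the synthetic data, and the hidden-unit sizes are hypothetical. Supplying input_fn (and not x/y) avoids the ValueError above:

    import numpy as np
    import tensorflow as tf

    x_train = np.random.rand(100, 1).astype(np.float32)
    y_train = 2.0 * x_train[:, 0] + 1.0

    feature_columns = [tf.contrib.layers.real_valued_column("x", dimension=1)]
    regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_columns,
                                               hidden_units=[8, 8])

    def input_fn():
        return {"x": tf.constant(x_train)}, tf.constant(y_train)

    regressor.fit(input_fn=input_fn, steps=100)
    metrics = regressor.evaluate(input_fn=input_fn, steps=1)  # dict of metric values
    print(metrics)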

tf.ReaderBase.read()

tf.ReaderBase.read(queue, name=None)

Returns the next record (key, value pair) produced by a reader.

Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).

Args:
  queue: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
  name: A name for the operation (optional).

Returns:
  A tuple of Tensors (key, value).
  key: A string scalar Tensor.
  value: A string scalar Tensor.

tf.contrib.learn.infer()

tf.contrib.learn.infer(restore_checkpoint_path, output_dict, feed_dict=None)

Restore graph from restore_checkpoint_path and run output_dict tensors.

If restore_checkpoint_path is supplied, restore from checkpoint. Otherwise, init all variables.

Args:
  restore_checkpoint_path: A string containing the path to a checkpoint to restore.
  output_dict: A dict mapping string names to Tensor objects to run. Tensors must all be from the same graph.
  feed_dict: dict object mapping Tensor objects to input values to feed.
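A minimal sketch, assuming a checkpoint previously written by tf.train.Saver at the hypothetical path below; the variable and tensor names are illustrative only:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    w = tf.get_variable("w", shape=[3, 1])
    pred = tf.matmul(x, w, name="pred")

    # Restores variables from the checkpoint, then runs the requested tensors.
    outputs = tf.contrib.learn.infer(
        restore_checkpoint_path="/tmp/my_model/model.ckpt",  # hypothetical path
        output_dict={"pred": pred},
        feed_dict={x: [[1.0, 2.0, 3.0]]})

    print(outputs["pred"])   # values keyed by the names used in output_dict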

tf.contrib.distributions.WishartFull.log_pmf()

tf.contrib.distributions.WishartFull.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.TFRecordReader

class tf.TFRecordReader

A Reader that outputs the records from a TFRecords file. See ReaderBase for supported methods.
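A minimal sketch of the usual read-and-parse pattern, assuming a hypothetical TFRecords file whose Examples each contain a single float feature named "value":

    import tensorflow as tf

    filename_queue = tf.train.string_input_producer(["data.tfrecords"], num_epochs=1)

    reader = tf.TFRecordReader()
    key, serialized = reader.read(filename_queue)      # see ReaderBase.read above

    features = tf.parse_single_example(
        serialized,
        features={"value": tf.FixedLenFeature([], tf.float32)})

    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        print(sess.run([key, features["value"]]))
        coord.request_stop()
        coord.join(threads)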

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2D, batch x num_units.
  state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  scope: VariableScope for the created subgraph; defaults to "LSTMCell".

Returns:
  A tuple containing:
  - A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs when the previous state was state.
  - Tensor(s) representing the new state of the LSTM after reading inputs when the previous state was state. Same type and shape(s) as state.
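A minimal single-step sketch, assuming the TF 1.x-era tf.contrib.rnn API; the batch size, unit count, and zero inputs are arbitrary placeholders:

    import tensorflow as tf

    batch_size, num_units = 4, 8
    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(num_units,
                                                         state_is_tuple=True)

    inputs = tf.zeros([batch_size, num_units])        # one time step of input
    state = cell.zero_state(batch_size, tf.float32)   # (c_state, m_state) tuple

    # __call__ runs a single LSTM step and returns (output, new_state).
    output, new_state = cell(inputs, state)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(output).shape)                 # (4, 8): num_proj is not set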