tf.contrib.training.NextQueuedSequenceBatch.__init__()

tf.contrib.training.NextQueuedSequenceBatch.__init__(state_saver)

tf.contrib.training.NextQueuedSequenceBatch.total_length

tf.contrib.training.NextQueuedSequenceBatch.total_length

The lengths of the original (non-truncated) unrolled examples.

Returns:

  An integer vector of length batch_size, the total lengths.

tf.contrib.training.SequenceQueueingStateSaver

class tf.contrib.training.SequenceQueueingStateSaver

SequenceQueueingStateSaver provides access to stateful values from input. This class is meant to be used instead of, e.g., a Queue, for splitting variable-length sequence inputs into segments of fixed length and batching them into mini-batches. It maintains contexts and state for a sequence across the segments, and can be used in conjunction with a QueueRunner.
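The splitting behavior described above can be sketched in plain Python (this is a conceptual illustration only, not the SQSS API; split_sequence is a hypothetical helper):

```python
# Conceptual sketch (plain Python, not the TF API): how SQSS-style
# splitting turns one variable-length sequence into fixed-length
# segments of num_unroll steps, padding the tail as needed.

def split_sequence(sequence, num_unroll, pad_value=0):
    """Pad `sequence` to a multiple of `num_unroll`, then split it."""
    length = len(sequence)
    # Round the length up to the next multiple of num_unroll.
    padded_length = -(-length // num_unroll) * num_unroll
    padded = list(sequence) + [pad_value] * (padded_length - length)
    return [padded[i:i + num_unroll]
            for i in range(0, padded_length, num_unroll)]

segments = split_sequence([1, 2, 3, 4, 5], num_unroll=3)
print(segments)  # [[1, 2, 3], [4, 5, 0]]
```

Each segment then becomes one row of a mini-batch, while SQSS carries the per-sequence context and state across segments.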

tf.contrib.training.NextQueuedSequenceBatch.state()

tf.contrib.training.NextQueuedSequenceBatch.state(state_name)

Returns batched state tensors.

Args:

  state_name: string, matches a key provided in initial_states.

Returns:

  A Tensor: a batched set of states, either initial states (if this is the first run of the given example) or a value as stored during a previous iteration via the save_state control flow. Its type is the same as initial_states["state_name"].dtype. If we had at input:

    initial_states[state_name].get_shape() == [d1, d2, ...]

  then the returned Tensor has shape [batch_size, d1, d2, ...].

tf.contrib.training.NextQueuedSequenceBatch.sequences

tf.contrib.training.NextQueuedSequenceBatch.sequences

A dict mapping keys of input_sequences to split and rebatched data.

Returns:

  A dict mapping keys of input_sequences to tensors. If we had at input:

    sequences["name"].get_shape() == [None, d1, d2, ...]

  where None meant the sequence time was dynamic, then for this property:

    sequences["name"].get_shape() == [batch_size, num_unroll, d1, d2, ...]
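The shape transformation can be traced with plain Python lists (a stand-in illustration, not the TF API):

```python
# Shape illustration (plain Python lists, not the TF API): a dynamic-
# length input of shape [None, d1] is padded and split into segments of
# shape [num_unroll, d1]; batching then stacks segments into
# [batch_size, num_unroll, d1].

num_unroll, d1 = 3, 2
seq = [[t, t + 0.5] for t in range(5)]          # shape [5, 2]; time dim was dynamic

pad_rows = (-len(seq)) % num_unroll             # rows needed to fill the last segment
padded = seq + [[0.0] * d1] * pad_rows          # shape [6, 2]
segments = [padded[i:i + num_unroll]            # each segment: shape [3, 2]
            for i in range(0, len(padded), num_unroll)]

batch = segments[:2]                            # shape [batch_size=2, num_unroll=3, d1=2]
print(len(batch), len(batch[0]), len(batch[0][0]))  # 2 3 2
```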

tf.contrib.training.NextQueuedSequenceBatch.sequence_count

tf.contrib.training.NextQueuedSequenceBatch.sequence_count

An int32 vector, length batch_size: the sequence count of each entry. When an input is split up, the number of splits is equal to: padded_length / num_unroll. This is the sequence_count.

Returns:

  An int32 vector Tensor.
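The arithmetic behind this property can be checked directly (plain Python; sequence_count here is a hypothetical helper, not the TF property):

```python
# Arithmetic sketch (plain Python): sequence_count is the number of
# fixed-length splits per input, i.e. padded_length / num_unroll, where
# padded_length is input_length rounded up to a multiple of num_unroll.

def sequence_count(input_length, num_unroll):
    """Number of num_unroll-sized segments after padding input_length up."""
    padded_length = -(-input_length // num_unroll) * num_unroll  # ceil to multiple
    return padded_length // num_unroll

print(sequence_count(7, 3))  # 3  (padded_length == 9)
print(sequence_count(6, 3))  # 2  (no padding needed)
```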

tf.contrib.training.NextQueuedSequenceBatch.sequence

tf.contrib.training.NextQueuedSequenceBatch.sequence

An int32 vector, length batch_size: the sequence index of each entry. When an input is split up, the sequence indices 0, 1, ..., sequence_count - 1 are assigned to the splits in order.

Returns:

  An int32 vector Tensor.

tf.contrib.training.NextQueuedSequenceBatch.save_state()

tf.contrib.training.NextQueuedSequenceBatch.save_state(state_name, value, name=None)

Returns an op to save the current batch of state state_name.

Args:

  state_name: string, matches a key provided in initial_states.

  value: A Tensor. Its type must match that of initial_states[state_name].dtype. If we had at input:

    initial_states[state_name].get_shape() == [d1, d2, ...]

  then the shape of value must match:

    tf.shape(value) == [batch_size, d1, d2, ...]

  name: string (optional). The name scope for newly created ops.
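The state lifecycle that state() and save_state() implement can be sketched with plain Python dicts (a conceptual toy, not the TF API; ToyStateSaver and its method signatures are hypothetical):

```python
# Conceptual sketch (plain Python dicts, not the TF API): the first
# segment of a sequence sees the initial state; later segments see
# whatever the previous iteration saved under the same key.

class ToyStateSaver:
    def __init__(self, initial_states):
        self.initial_states = dict(initial_states)  # state_name -> initial value
        self.saved = {}                             # (key, state_name) -> value

    def state(self, key, state_name):
        """Saved state if present, else the initial state."""
        return self.saved.get((key, state_name), self.initial_states[state_name])

    def save_state(self, key, state_name, value):
        """Store state so the next segment of this sequence restores it."""
        self.saved[(key, state_name)] = value

saver = ToyStateSaver({"lstm_c": 0.0})
print(saver.state("ex0", "lstm_c"))   # 0.0  (initial state, first segment)
saver.save_state("ex0", "lstm_c", 1.5)
print(saver.state("ex0", "lstm_c"))   # 1.5  (restored on the next segment)
```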

tf.contrib.training.NextQueuedSequenceBatch.next_key

tf.contrib.training.NextQueuedSequenceBatch.next_key

The key names of the next (in iteration) truncated unrolled examples.

The format of the key is:

  "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)

if sequence + 1 < sequence_count; otherwise:

  "STOP:%s" % original_key

where original_key is the unique key read in by the prefetcher.

Returns:

  A string vector of length batch_size, the keys.
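The key scheme above can be exercised as ordinary Python string formatting (a sketch; next_key here is a hypothetical helper, not the TF property):

```python
# Key-format sketch (plain Python): the strings next_key produces,
# per the "%05d_of_%05d:%s" scheme described above.

def next_key(sequence, sequence_count, original_key):
    if sequence + 1 < sequence_count:
        return "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)
    return "STOP:%s" % original_key

print(next_key(0, 3, "ex0"))  # 00001_of_00003:ex0
print(next_key(2, 3, "ex0"))  # STOP:ex0
```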

tf.contrib.training.NextQueuedSequenceBatch.key

tf.contrib.training.NextQueuedSequenceBatch.key

The key names of the given truncated unrolled examples.

The format of the key is:

  "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)

where original_key is the unique key read in by the prefetcher.

Returns:

  A string vector of length batch_size, the keys.
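Note that key uses sequence where next_key uses sequence + 1; the difference is visible in a quick formatting check (a sketch; key_string is a hypothetical helper, not the TF property):

```python
# Key-format sketch (plain Python): the current segment's key uses the
# segment's own sequence index, per "%05d_of_%05d:%s" above.

def key_string(sequence, sequence_count, original_key):
    return "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)

print(key_string(1, 3, "ex0"))  # 00001_of_00003:ex0
```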