tf.contrib.training.NextQueuedSequenceBatch.key

The key names of the given truncated unrolled examples. The format of the key is:

    "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)

where original_key is the unique key read in by the prefetcher.

Returns:
  A string vector of length batch_size, the keys.
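The key format can be illustrated with plain Python string formatting; the segment numbers and `original_key` below are made-up values for illustration:

```python
# A key for segment 2 out of 7 of an example whose prefetcher key was
# "example_0042" (all values here are hypothetical).
sequence = 2
sequence_count = 7
original_key = "example_0042"

key = "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)
print(key)  # 00002_of_00007:example_0042
```

Because the segment indices are zero-padded to five digits, keys for the segments of one example sort lexicographically in segment order.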

tf.contrib.training.NextQueuedSequenceBatch.insertion_index

The insertion indices of the examples (when they were first added). These indices start at -2**63 and increase with every call to the prefetch op. Each whole example gets its own insertion index, which is used to prioritize the example so that its truncated segments appear in adjacent iterations, even if new examples are inserted by the prefetch op between iterations.

Returns:
  An int64 vector of length batch_size, the insertion indices.
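A minimal sketch of how such indices behave, assuming a simple in-process counter (the real op maintains this state inside the queueing machinery, not in Python):

```python
import itertools

# Hypothetical stand-in for the prefetch op's counter: insertion indices
# start at -2**63 (the smallest int64) and increase by one per example.
_counter = itertools.count(start=-2**63)

def next_insertion_index():
    return next(_counter)

first = next_insertion_index()
second = next_insertion_index()
print(first, second)  # -9223372036854775808 -9223372036854775807
```

Starting at the smallest int64 value means earlier examples always compare as smaller, so their remaining segments keep priority over newly prefetched examples.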

tf.contrib.training.NextQueuedSequenceBatch.context

A dict mapping keys of input_context to batched context.

Returns:
  A dict mapping keys of input_context to tensors. If at input:

      context["name"].get_shape() == [d1, d2, ...]

  then for this property:

      context["name"].get_shape() == [batch_size, d1, d2, ...]
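The shape relationship above amounts to prepending batch_size to each context tensor's per-example shape; a small sketch with plain Python lists:

```python
def batched_shape(batch_size, shape):
    # Context tensors keep their per-example shape [d1, d2, ...] and gain
    # a leading batch dimension: [batch_size, d1, d2, ...].
    return [batch_size] + list(shape)

print(batched_shape(32, [3, 4]))  # [32, 3, 4]
```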

tf.contrib.training.NextQueuedSequenceBatch.batch_size

The batch_size of the given batch. Usually this is the batch_size requested when initializing the SQSS, but if allow_small_batch=True it can become smaller when inputs are exhausted.

Returns:
  A scalar integer tensor, the batch_size.

tf.contrib.training.NextQueuedSequenceBatch

NextQueuedSequenceBatch stores deferred SequenceQueueingStateSaver data. This class is instantiated by SequenceQueueingStateSaver and is accessible via its next_batch property.

tf.contrib.training.bucket_by_sequence_length()

tf.contrib.training.bucket_by_sequence_length(input_length, tensors, batch_size, bucket_boundaries, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=None, shared_name=None, name=None)

Lazy bucketing of inputs according to their length.

This method calls tf.contrib.training.bucket under the hood, after first subdividing the bucket boundaries into separate buckets and identifying which bucket the given input_length belongs to. See the documentation of tf.contrib.training.bucket for details on the remaining arguments and the return value.
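The mapping from a length to a bucket index can be sketched with the stdlib bisect module. This is a plain-Python sketch of the idea, not the op itself, and the exact boundary inclusivity of the real op may differ:

```python
import bisect

def which_bucket(input_length, bucket_boundaries):
    # Boundaries [b0, b1, ...] subdivide lengths into buckets
    # [0, b0), [b0, b1), ..., [b_last, inf): bucket 0 holds lengths
    # below the first boundary, the final bucket everything at or
    # above the last boundary.
    return bisect.bisect_right(bucket_boundaries, input_length)

boundaries = [10, 20, 30]
print(which_bucket(5, boundaries))   # 0
print(which_bucket(10, boundaries))  # 1
print(which_bucket(35, boundaries))  # 3
```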

tf.contrib.training.bucket()

tf.contrib.training.bucket(tensors, which_bucket, batch_size, num_buckets, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=None, shared_name=None, name=None)

Lazy bucketing of input tensors according to which_bucket.

The argument tensors can be a list or a dictionary of tensors. The value returned by the function will be of the same type as tensors. The tensors entering this function are put into the bucket given by which_bucket. Each bucket accumulates its inputs separately and emits a batch once batch_size elements of that bucket are available.
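The per-bucket accumulation can be sketched conceptually in plain Python; this is a didactic generator, not the threaded queue-based TF op, and `which_bucket` here is an ordinary callable standing in for the tensor argument:

```python
from collections import defaultdict

def bucketed_batches(items, which_bucket, batch_size):
    # Each bucket collects its own items and yields a full batch as
    # soon as batch_size items of that bucket have accumulated.
    buckets = defaultdict(list)
    for item in items:
        b = which_bucket(item)
        buckets[b].append(item)
        if len(buckets[b]) == batch_size:
            yield b, buckets[b]
            buckets[b] = []

batches = list(bucketed_batches([1, 5, 2, 6, 3, 7],
                                which_bucket=lambda x: x // 4,
                                batch_size=2))
print(batches)  # [(0, [1, 2]), (1, [5, 6])]
```

Note that the trailing items 3 and 7 stay in their buckets; the real op's allow_smaller_final_batch argument controls whether such leftovers are emitted as a smaller batch.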

tf.contrib.training.batch_sequences_with_states()

tf.contrib.training.batch_sequences_with_states(input_key, input_sequences, input_context, input_length, initial_states, num_unroll, batch_size, num_threads=3, capacity=1000, allow_small_batch=True, pad=True, name=None)

Creates batches of segments of sequential input.

This method creates a SequenceQueueingStateSaver (SQSS) and adds it to the queue runners. It returns a NextQueuedSequenceBatch. It accepts one example at a time, identified by a unique input_key. input_sequences is a dict whose values are tensors with time as their first dimension; each sequence is split into segments of length num_unroll.
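The segmentation into truncated unrolls can be sketched in plain Python: the sequence is padded to a multiple of num_unroll and split into consecutive segments. This is a conceptual sketch of what pad=True implies, not the SQSS implementation:

```python
def split_into_unrolls(sequence, num_unroll, pad_value=0):
    # Pad the sequence so its length is a multiple of num_unroll,
    # then split it into consecutive segments of num_unroll steps.
    remainder = len(sequence) % num_unroll
    if remainder:
        sequence = sequence + [pad_value] * (num_unroll - remainder)
    return [sequence[i:i + num_unroll]
            for i in range(0, len(sequence), num_unroll)]

print(split_into_unrolls([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5, 0]]
```

Each segment then corresponds to one truncated unroll, and the keys on NextQueuedSequenceBatch identify which segment of which original example a batch row came from.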

tf.contrib.rnn.TimeFreqLSTMCell.__init__()

tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None)

Initialize the parameters for an LSTM cell.

Args:
  num_units: int, the number of units in the LSTM cell.
  use_peepholes: bool, set True to enable diagonal/peephole connections.
  cell_clip: (optional) float; if provided, the cell state is clipped by this value prior to the cell output activation.
  initializer: (optional) the initializer to use for the weight and projection matrices.

tf.contrib.rnn.TimeFreqLSTMCell.__call__()

tf.contrib.rnn.TimeFreqLSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2D, batch x num_units.
  state: state Tensor, 2D, batch x state_size.
  scope: VariableScope for the created subgraph; defaults to "TimeFreqLSTMCell".

Returns:
  A tuple containing:
  - A 2D, batch x output_dim, Tensor representing the output of the LSTM after reading inputs when the previous state was state. Here output_dim is num_units.
  - A 2D, batch x state_size, Tensor representing the new state of the LSTM after reading inputs.
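The arithmetic behind one such step can be sketched for a single scalar-unit cell in plain Python. This is a didactic sketch of the standard LSTM gate equations, not the TimeFreqLSTMCell implementation (which additionally convolves over frequency blocks); the weight layout `w`/`b` is a made-up structure for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w, b):
    # One step of a scalar LSTM (num_units == 1). For each gate k,
    # w[k] = (weight on input x, weight on previous output h) and
    # b[k] is the gate bias.
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + b["i"])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + b["f"])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + b["o"])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + b["g"])  # candidate
    c_new = f * c + i * g          # new cell state
    h_new = o * math.tanh(c_new)   # new output
    return h_new, c_new

# With all weights and biases zero, every gate is sigmoid(0) = 0.5 and
# the candidate is tanh(0) = 0, so the cell state is simply halved.
w0 = {k: (0.0, 0.0) for k in "ifog"}
b0 = {k: 0.0 for k in "ifog"}
h_new, c_new = lstm_step(x=1.0, h=0.0, c=2.0, w=w0, b=b0)
print(c_new)  # 1.0
```

In the real cell these scalars become batch x num_units tensors, matching the shapes documented above.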