tf.contrib.training.resample_at_rate()

tf.contrib.training.resample_at_rate(inputs, rates, scope=None, seed=None, back_prop=False)

Given inputs tensors, stochastically resamples each at a given rate. For example, if the inputs are [[a1, a2], [b1, b2]] and the rates tensor contains [3, 1], then the return value may look like [[a1, a2, a1, a1], [b1, b2, b1, b1]]. However, many other outputs are possible, since this is stochastic -- averaged over many repeated calls, each set of inputs should appear in the output rate times the number of invocations.
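The documented semantics can be sketched in plain NumPy (an illustration only, not the TF implementation, which operates on tensors inside the graph). One reading consistent with the example above: each batch element gets floor(rate) guaranteed copies plus one extra with probability equal to the fractional part, so the expected count equals the rate, and the same selection is applied to every input tensor.

```python
import numpy as np

def resample_at_rate(inputs, rates, rng=None):
    # Sketch of the documented behavior (assumption: floor + Bernoulli
    # fractional part gives expected count == rate). One repeat count is
    # drawn per batch element and the same selection is applied to every
    # input tensor, matching the docstring's example.
    rng = rng or np.random.default_rng()
    counts = [int(np.floor(r)) + int(rng.random() < (r - np.floor(r)))
              for r in rates]
    indices = [i for i, c in enumerate(counts) for _ in range(c)]
    return [[tensor[i] for i in indices] for tensor in inputs]
```

With integer rates the fractional part is zero, so the repeat counts are deterministic; fractional rates make the extra copy stochastic, which is where the "many other outputs are possible" caveat comes from.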

tf.contrib.training.NextQueuedSequenceBatch.__init__()

tf.contrib.training.NextQueuedSequenceBatch.__init__(state_saver)

tf.contrib.training.NextQueuedSequenceBatch.total_length

tf.contrib.training.NextQueuedSequenceBatch.total_length The lengths of the original (non-truncated) unrolled examples. Returns: An integer vector of length batch_size, the total lengths.

tf.contrib.training.NextQueuedSequenceBatch.state()

tf.contrib.training.NextQueuedSequenceBatch.state(state_name)

Returns batched state tensors.

Args:
  state_name: string, matches a key provided in initial_states.

Returns:
  A Tensor: a batched set of states, either initial states (if this is the first run of the given example), or a value as stored during a previous iteration via save_state control flow. Its type is the same as initial_states["state_name"].dtype. If we had at input: initial_states[state_name].get_shape() == [d1, d2, ...], then the returned state has shape: state(state_name).get_shape() == [batch_size, d1, d2, ...].

tf.contrib.training.NextQueuedSequenceBatch.sequence_count

tf.contrib.training.NextQueuedSequenceBatch.sequence_count An int32 vector, length batch_size: the sequence count of each entry. When an input is split up, the number of splits is equal to: padded_length / num_unroll. This is the sequence_count. Returns: An int32 vector Tensor.
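The relationship between padded_length, num_unroll, and sequence_count can be made concrete with a small hypothetical helper (the names are illustrative; the real computation happens inside the batching queue):

```python
import math

def sequence_count(total_length, num_unroll):
    # Hypothetical helper mirroring the docs: the input is padded up to
    # a multiple of num_unroll, and the number of splits is
    # padded_length / num_unroll.
    padded_length = num_unroll * math.ceil(total_length / num_unroll)
    return padded_length // num_unroll
```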

tf.contrib.training.NextQueuedSequenceBatch.sequences

tf.contrib.training.NextQueuedSequenceBatch.sequences A dict mapping keys of input_sequences to split and rebatched data. Returns: A dict mapping keys of input_sequences to tensors. If we had at input: sequences["name"].get_shape() == [None, d1, d2, ...] where None meant the sequence time was dynamic, then for this property: sequences["name"].get_shape() == [batch_size, num_unroll, d1, d2, ...].
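The shape transformation for a single example can be sketched in NumPy (a hypothetical helper, not the TF queue code): a dynamic-length sequence of shape [T, d1, ...] is padded to a multiple of num_unroll and split into sequence_count chunks of shape [num_unroll, d1, ...].

```python
import numpy as np

def split_sequence(seq, num_unroll):
    # Pad the time dimension up to a multiple of num_unroll with zeros,
    # then split into chunks of num_unroll steps each. Returns an array
    # of shape [sequence_count, num_unroll, d1, ...].
    t = seq.shape[0]
    pad = (-t) % num_unroll
    padded = np.concatenate(
        [seq, np.zeros((pad,) + seq.shape[1:], seq.dtype)])
    return padded.reshape(-1, num_unroll, *seq.shape[1:])
```

Batching sequence_count such chunks from different examples then yields the [batch_size, num_unroll, d1, ...] shape described above.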

tf.contrib.training.NextQueuedSequenceBatch.sequence

tf.contrib.training.NextQueuedSequenceBatch.sequence An int32 vector, length batch_size: the sequence index of each entry. When an input is split up, the sequence values 0, 1, ..., sequence_count - 1 are assigned to each split. Returns: An int32 vector Tensor.

tf.contrib.training.NextQueuedSequenceBatch.save_state()

tf.contrib.training.NextQueuedSequenceBatch.save_state(state_name, value, name=None)

Returns an op to save the current batch of state state_name.

Args:
  state_name: string, matches a key provided in initial_states.
  value: A Tensor. Its type must match that of initial_states[state_name].dtype. If we had at input: initial_states[state_name].get_shape() == [d1, d2, ...], then the shape of value must match: tf.shape(value) == [batch_size, d1, d2, ...].
  name: string (optional). The name scope for newly created ops.

Returns:
  A control flow op that stores the new state of each entry into the state saver.
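The state()/save_state() round trip can be illustrated with a toy in-memory stand-in (hypothetical, not the TF class, and without the batching or control-flow machinery): state() returns the value saved on the previous iteration, or the initial state on the first run.

```python
class MockStateSaver:
    # Toy stand-in for the state()/save_state() pattern. The real
    # NextQueuedSequenceBatch stores per-example state inside the
    # prefetching queue; here a plain dict plays that role.
    def __init__(self, initial_states):
        self._initial = dict(initial_states)
        self._saved = {}

    def state(self, state_name):
        # Previously saved value if present, else the initial state.
        return self._saved.get(state_name, self._initial[state_name])

    def save_state(self, state_name, value):
        self._saved[state_name] = value

saver = MockStateSaver({"lstm_c": [0.0, 0.0]})
first = saver.state("lstm_c")            # initial state on the first run
saver.save_state("lstm_c", [1.5, -0.5])  # carry state to the next split
second = saver.state("lstm_c")           # saved value afterwards
```

In the real API the save_state op must actually be run each iteration, otherwise the state saver never advances between splits.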

tf.contrib.training.NextQueuedSequenceBatch.next_key

tf.contrib.training.NextQueuedSequenceBatch.next_key The key names of the next (in iteration) truncated unrolled examples. The format of the key is: "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key) if sequence + 1 < sequence_count, otherwise: "STOP:%s" % original_key where original_key is the unique key read in by the prefetcher. Returns: A string vector of length batch_size, the keys.
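The key format quoted above is simple enough to reproduce directly (a hypothetical helper; the real property computes this per batch entry in the graph):

```python
def next_key(sequence, sequence_count, original_key):
    # Progress key while more splits of this example remain, otherwise
    # a STOP key, exactly as the format string in the docs describes.
    if sequence + 1 < sequence_count:
        return "%05d_of_%05d:%s" % (sequence + 1, sequence_count,
                                    original_key)
    return "STOP:%s" % original_key
```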

tf.contrib.training.NextQueuedSequenceBatch.length

tf.contrib.training.NextQueuedSequenceBatch.length The lengths of the given truncated unrolled examples. For initial iterations, for which sequence * num_unroll < length, this number is num_unroll. For the remainder, this number is between 0 and num_unroll. Returns: An integer vector of length batch_size, the lengths.
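The per-split length described above can be written out for a single example (a hypothetical helper; the real property is a batched tensor): full splits contribute num_unroll steps, and the final split contributes whatever remains.

```python
def split_length(sequence, num_unroll, total_length):
    # num_unroll for initial splits where a full window of data remains,
    # and the remainder (between 0 and num_unroll) for the last split of
    # a padded example.
    return max(0, min(num_unroll, total_length - sequence * num_unroll))
```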