tf.contrib.training.stratified_sample()

tf.contrib.training.stratified_sample(tensors, labels, target_probs, batch_size, init_probs=None, enqueue_many=False, queue_capacity=16, threads_per_queue=1, name=None)

Stochastically creates batches based on per-class probabilities. This method discards examples. Internally, it creates one queue to amortize the cost of disk reads, and one queue to hold the properly-proportioned batch. See stratified_sample_unknown_dist for a function that performs stratified sampling with one queue per class.
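
A minimal usage sketch. The single-example data and label tensors are hypothetical stand-ins for a real reader/decoder pipeline, and the class frequencies in init_probs are assumed:

    import tensorflow as tf

    # Hypothetical single-example pipeline; shapes and the label
    # distribution are illustrative only.
    data = tf.random_normal([28, 28])
    label = tf.random_uniform([], maxval=2, dtype=tf.int32)

    # Rebalance the (assumed 90/10) stream so each batch is ~50/50.
    [data_batch], label_batch = tf.contrib.training.stratified_sample(
        [data], label,
        target_probs=[0.5, 0.5],
        batch_size=32,
        init_probs=[0.9, 0.1],  # assumed input class frequencies
        queue_capacity=128)

    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        print(sess.run(label_batch))  # labels drawn roughly 50/50
        coord.request_stop()
        coord.join(threads)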

tf.contrib.training.SequenceQueueingStateSaver.__init__()

tf.contrib.training.SequenceQueueingStateSaver.__init__(batch_size, num_unroll, input_length, input_key, input_sequences, input_context, initial_states, capacity=None, allow_small_batch=False, name=None)

Creates the SequenceQueueingStateSaver.

Args:
  batch_size: int or int32 scalar Tensor, how large minibatches should be when accessing the state() method and the context, sequences, etc. properties.
  num_unroll: Python integer, how many time steps to unroll at a time. The input sequences of length k are then split into k / num_unroll many segments.
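
A construction sketch. The raw input tensors and their shapes are hypothetical stand-ins for a real reader/decoder pipeline, and the TF 1.x contrib API is assumed:

    import tensorflow as tf

    batch_size = 32
    num_unroll = 20
    lstm_size = 8

    # One example at a time: a unique-ish string key, the true sequence
    # length, sequences whose first dimension (padded_length) is a multiple
    # of num_unroll, and per-sequence context with no time dimension.
    key = tf.string_join(
        ["example_",
         tf.as_string(tf.random_uniform([], maxval=10**9, dtype=tf.int32))])
    length = tf.constant(37, dtype=tf.int32)
    sequences = {"inputs": tf.zeros([3 * num_unroll, 5]),  # padded_length 60
                 "labels": tf.zeros([3 * num_unroll], dtype=tf.int32)}
    context = {"location": tf.zeros([2])}

    cell = tf.contrib.rnn.BasicLSTMCell(lstm_size, state_is_tuple=False)
    initial_states = {"lstm_state": tf.zeros([cell.state_size])}  # no batch dim

    stateful_reader = tf.contrib.training.SequenceQueueingStateSaver(
        batch_size=batch_size,
        num_unroll=num_unroll,
        input_length=length,
        input_key=key,
        input_sequences=sequences,
        input_context=context,
        initial_states=initial_states,
        capacity=batch_size * 100)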

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op

The op used to prefetch new data into the state saver. Running it once enqueues one new input example into the state saver. The first time this gets called, it additionally creates the prefetch_op. Subsequent calls simply return the previously created prefetch_op. It should be run in a separate thread via, e.g., a QueueRunner.

Returns:
  An Operation that performs prefetching.
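
A sketch of driving the prefetch op from background threads, continuing the stateful_reader example above (the thread count is arbitrary):

    # The state saver exposes name and close(), so it can be handed to a
    # QueueRunner directly; each thread repeatedly enqueues one example.
    num_threads = 3
    queue_runner = tf.train.QueueRunner(
        stateful_reader, [stateful_reader.prefetch_op] * num_threads)
    tf.train.add_queue_runner(queue_runner)
    # The threads are launched later by tf.train.start_queue_runners(...).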

tf.contrib.training.SequenceQueueingStateSaver.num_unroll

tf.contrib.training.SequenceQueueingStateSaver.num_unroll

Python integer, the number of time steps to unroll at a time, as passed to the constructor.

tf.contrib.training.SequenceQueueingStateSaver.next_batch

tf.contrib.training.SequenceQueueingStateSaver.next_batch

The NextQueuedSequenceBatch providing access to batched output data. Also provides access to the state and save_state methods. The first time this gets called, it additionally prepares barrier reads and creates the NextQueuedSequenceBatch / next_batch objects. Subsequent calls simply return the previously created next_batch. In order to access data in next_batch without blocking, the prefetch_op must have been run at least batch_size times.
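
A consumption sketch, continuing the example above. This assumes TF 1.x, where the state-saving static RNN lives at tf.contrib.rnn.static_state_saving_rnn; tensor names and shapes are illustrative:

    batch = stateful_reader.next_batch
    inputs = batch.sequences["inputs"]      # [batch_size, num_unroll, 5]
    location = batch.context["location"]    # [batch_size, 2]

    # Split the segment into num_unroll single time steps and run an RNN
    # that reads and writes its state through the state saver under the
    # "lstm_state" key declared in initial_states.
    inputs_by_time = [tf.squeeze(t, axis=1)
                      for t in tf.split(inputs, num_unroll, axis=1)]
    outputs, _ = tf.contrib.rnn.static_state_saving_rnn(
        cell, inputs_by_time, state_saver=batch, state_name="lstm_state")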

tf.contrib.training.SequenceQueueingStateSaver.name

tf.contrib.training.SequenceQueueingStateSaver.name

The string name of this state saver, used to name the ops it creates.

tf.contrib.training.SequenceQueueingStateSaver.close()

tf.contrib.training.SequenceQueueingStateSaver.close(cancel_pending_enqueues=False, name=None)

Closes the barrier and the FIFOQueue. This operation signals that no more segments of new sequences will be enqueued. New segments of already inserted sequences may still be enqueued and dequeued if there is a sufficient number filling a batch, or allow_small_batch is true. Otherwise dequeue operations will fail immediately.

Args:
  cancel_pending_enqueues: (Optional.) A boolean, defaulting to False. If True, all pending enqueues to the underlying queues will be cancelled.
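
A shutdown sketch, continuing the example above:

    # Signal that no more new sequences will arrive; in-flight segments of
    # already inserted sequences can still be dequeued.
    close_op = stateful_reader.close()

    # For a hard shutdown, also drop anything still waiting to enqueue.
    abort_op = stateful_reader.close(cancel_pending_enqueues=True)
    # sess.run(close_op) once the input pipeline is exhausted.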

tf.contrib.training.SequenceQueueingStateSaver.batch_size

tf.contrib.training.SequenceQueueingStateSaver.batch_size

The batch size, as passed to the constructor.

tf.contrib.training.SequenceQueueingStateSaver.barrier

tf.contrib.training.SequenceQueueingStateSaver.barrier

The underlying Barrier that accumulates sequence segments until they can be dequeued as minibatches.

tf.contrib.training.SequenceQueueingStateSaver

class tf.contrib.training.SequenceQueueingStateSaver

SequenceQueueingStateSaver provides access to stateful values from input. This class is meant to be used instead of, e.g., a Queue, for splitting variable-length sequence inputs into segments of sequences with fixed length and batching them into mini-batches. It maintains contexts and state for a sequence across the segments. It can be used in conjunction with a QueueRunner (see the example below).

The SequenceQueueingStateSaver (SQSS) accepts one example at a time via the inputs input_length, input_key, input_sequences (a dict), input_context (a dict), and initial_states (a dict).
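
An end-to-end driver sketch tying the pieces above together: construction (see __init__), prefetching via a QueueRunner (see prefetch_op), consuming next_batch, and closing. Everything here continues the hypothetical tensors from the earlier sketches:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        # Launches the prefetch threads registered via add_queue_runner.
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        try:
            for _ in range(100):       # training/inference steps
                sess.run(outputs)      # each step consumes one batch of segments
        finally:
            coord.request_stop()
            sess.run(abort_op)         # from the close() sketch above
            coord.join(threads)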