tf.contrib.training.stratified_sample()

tf.contrib.training.stratified_sample(tensors, labels, target_probs, batch_size, init_probs=None, enqueue_many=False, queue_capacity=16, threads_per_queue=1, name=None) Stochastically creates batches based on per-class probabilities. This method discards examples. Internally, it creates one queue to amortize the cost of disk reads, and one queue to hold the properly-proportioned batch. See stratified_sample_unknown_dist for a function that performs stratified sampling with one queue per class.
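
A minimal usage sketch: get_single_example is a hypothetical helper standing in for a single-example input pipeline, and the class probabilities are illustrative.

```python
import tensorflow as tf

# Hypothetical helper yielding one (example, label) pair per call,
# e.g. built from tf.train.slice_input_producer. Not part of this API.
example, label = get_single_example()

# Rebalance a 3-class stream to a uniform class distribution.
target_probs = [1 / 3.0, 1 / 3.0, 1 / 3.0]
[example_batch], label_batch = tf.contrib.training.stratified_sample(
    [example], label, target_probs, batch_size=32,
    # Optional: the class distribution of the incoming stream, if known;
    # when omitted, it is estimated from the data.
    init_probs=[0.6, 0.3, 0.1])
```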

tf.contrib.training.SequenceQueueingStateSaver.__init__()

tf.contrib.training.SequenceQueueingStateSaver.__init__(batch_size, num_unroll, input_length, input_key, input_sequences, input_context, initial_states, capacity=None, allow_small_batch=False, name=None) Creates the SequenceQueueingStateSaver. Args: batch_size: int or int32 scalar Tensor, how large minibatches should be when accessing the state() method and context, sequences, etc., properties. num_unroll: Python integer, how many time steps to unroll at a time. The input sequences of length k are then split into k / num_unroll many segments.
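
A construction sketch, assuming a hypothetical my_parser/get_single_input pipeline that yields one variable-length example at a time:

```python
import tensorflow as tf

batch_size = 32
num_unroll = 20
lstm_size = 8

# Hypothetical input pipeline (not part of this API): `length` is the
# sequence length, `key` a unique string id, `sequences` a dict of
# per-time-step tensors, and `context` a dict of per-example tensors.
length, key, sequences, context = my_parser(get_single_input())

state_saver = tf.contrib.training.SequenceQueueingStateSaver(
    batch_size=batch_size,
    num_unroll=num_unroll,
    input_length=length,
    input_key=key,
    input_sequences=sequences,
    input_context=context,
    # One entry per recurrent state carried across segments; the saved
    # state for "lstm_state" starts as a zero vector for each sequence.
    initial_states={"lstm_state": tf.zeros([lstm_size], dtype=tf.float32)},
    capacity=batch_size * 100)
```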

tf.contrib.training.SequenceQueueingStateSaver.close()

tf.contrib.training.SequenceQueueingStateSaver.close(cancel_pending_enqueues=False, name=None) Closes the barrier and the FIFOQueue. This operation signals that no more segments of new sequences will be enqueued. New segments of already inserted sequences may still be enqueued and dequeued if there is a sufficient number filling a batch or allow_small_batch is true. Otherwise dequeue operations will fail immediately. Args: cancel_pending_enqueues: (Optional.) A boolean, defaulting to False. If True, all pending enqueues to the underlying queues will be cancelled, and completing already started sequences is not possible. name: Optional name for the op. Returns: The operation that closes the barrier and the FIFOQueue.
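
A shutdown sketch, reusing the state_saver constructed in the sketch above:

```python
# Signal that no new sequences will arrive; with cancel_pending_enqueues=True,
# partially enqueued sequences are dropped instead of being completed.
close_op = state_saver.close(cancel_pending_enqueues=True)

# Typically run once the input pipeline is exhausted, e.g. from the thread
# that owns the readers:
# sess.run(close_op)
```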

tf.contrib.training.SequenceQueueingStateSaver.name

tf.contrib.training.SequenceQueueingStateSaver.barrier

tf.contrib.training.SequenceQueueingStateSaver.next_batch

tf.contrib.training.SequenceQueueingStateSaver.next_batch The NextQueuedSequenceBatch providing access to batched output data. Also provides access to the state and save_state methods. The first time this gets called, it additionally prepares barrier reads and creates NextQueuedSequenceBatch / next_batch objects. Subsequent calls simply return the previously created next_batch. In order to access data in next_batch without blocking, the prefetch_op must have been run at least batch_size times (ideally in a separate thread, or launched via a QueueRunner). Returns: A NextQueuedSequenceBatch instance.
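
A consumption sketch continuing the example above. It assumes the hypothetical parser emitted an "inputs" key in sequences, and uses tf.contrib.rnn.static_state_saving_rnn to read and write the saved state:

```python
batch = state_saver.next_batch  # a NextQueuedSequenceBatch

# Batched segment data: shape [batch_size, num_unroll, ...].
inputs = batch.sequences["inputs"]  # assumes the parser emitted "inputs"
inputs_by_time = [
    tf.squeeze(t, [1]) for t in tf.split(inputs, num_unroll, axis=1)]

# The RNN reads "lstm_state" at the segment start and saves it at the end.
cell = tf.contrib.rnn.BasicRNNCell(lstm_size)
outputs, final_state = tf.contrib.rnn.static_state_saving_rnn(
    cell, inputs_by_time, state_saver=batch, state_name="lstm_state")
```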

tf.contrib.training.SequenceQueueingStateSaver.num_unroll

tf.contrib.training.SequenceQueueingStateSaver.batch_size

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op The op used to prefetch new data into the state saver. Running it once enqueues one new input example into the state saver. The first time this gets called, it additionally creates the prefetch_op. Subsequent calls simply return the previously created prefetch_op. It should be run in a separate thread via e.g. a QueueRunner. Returns: An Operation that performs prefetching.
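
A sketch of driving prefetch_op from background threads; passing the state saver itself to QueueRunner follows the pattern shown in the class documentation:

```python
# Each run of prefetch_op pulls one parsed example into the state saver.
tf.train.add_queue_runner(
    tf.train.QueueRunner(state_saver, [state_saver.prefetch_op]))

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # ... run ops that consume state_saver.next_batch ...
    coord.request_stop()
    coord.join(threads)
```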

tf.contrib.training.NextQueuedSequenceBatch.state()

tf.contrib.training.NextQueuedSequenceBatch.state(state_name) Returns batched state tensors. Args: state_name: string, matches a key provided in initial_states. Returns: A Tensor: a batched set of states, either initial states (if this is the first run of the given example), or a value as stored during a previous iteration via save_state control flow. Its type is the same as initial_states["state_name"].dtype. If we had at input: initial_states[state_name].get_shape() == [d1, d2, ...], then state(state_name).get_shape() == [batch_size, d1, d2, ...].
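
A sketch of reading and updating a saved state by name, continuing the example above:

```python
batch = state_saver.next_batch

# Batched carried-over state: shape [batch_size, lstm_size] here, since
# initial_states["lstm_state"] was a [lstm_size] vector.
lstm_state = batch.state("lstm_state")

# After computing the segment, persist the state for each sequence so the
# next segment resumes from it (static_state_saving_rnn does this itself).
save_op = batch.save_state("lstm_state", final_state)
```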