tf.contrib.learn.monitors.ValidationMonitor.__init__()

tf.contrib.learn.monitors.ValidationMonitor.__init__(x=None, y=None, input_fn=None, batch_size=None, eval_steps=None, every_n_steps=100, metrics=None, early_stopping_rounds=None, early_stopping_metric='loss', early_stopping_metric_minimize=True, name=None)

Initializes a ValidationMonitor.

Args:
  • x: See BaseEstimator.evaluate.
  • y: See BaseEstimator.evaluate.
  • input_fn: See BaseEstimator.evaluate.
  • batch_size: See BaseEstimator.evaluate.
  • eval_steps: See BaseEstimator.evaluate.
  • every_n_steps: Check for new checkpoints to evaluate every N steps. If a new checkpoint is found, it is evaluated. See EveryN.
  • metrics: See BaseEstimator.evaluate.
  • early_stopping_rounds: int. If the metric indicated by early_stopping_metric does not improve (i.e. decrease when early_stopping_metric_minimize is True, or increase when it is False) for this many steps, training is stopped.
  • early_stopping_metric: string, name of the metric to check for early stopping.
  • early_stopping_metric_minimize: bool. True if early_stopping_metric is expected to decrease (early stopping then occurs when the metric stops decreasing); False if it is expected to increase. Typically True for loss metrics such as mean squared error, and False for performance metrics such as accuracy.
  • name: See BaseEstimator.evaluate.
Raises:
  • ValueError: If both x and input_fn are provided.
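The stopping rule described by early_stopping_rounds and early_stopping_metric_minimize can be sketched in plain Python. This is a hedged, standalone illustration of the rule only, not the monitor's actual implementation; the helper name should_stop and the list-of-values interface are assumptions for the example:

```python
def should_stop(metric_history, rounds, minimize=True):
    """Return True if the best metric value seen so far is more than
    `rounds` evaluations in the past, i.e. the metric has stopped improving.

    metric_history: list of metric values, one per evaluation.
    rounds: illustrative stand-in for early_stopping_rounds (None disables).
    minimize: stand-in for early_stopping_metric_minimize.
    """
    if rounds is None or not metric_history:
        return False
    # Track the best value and the step at which it occurred.
    best, best_step = metric_history[0], 0
    for step, value in enumerate(metric_history[1:], start=1):
        improved = value < best if minimize else value > best
        if improved:
            best, best_step = value, step
    # Stop when no improvement for `rounds` consecutive evaluations.
    return len(metric_history) - 1 - best_step >= rounds
```

For example, with a loss history of [1.0, 0.9, 0.95, 0.96, 0.97] and rounds=3, the last improvement was 3 evaluations ago, so training would stop; with rounds=4 it would continue.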
doc_TensorFlow
2016-10-14 13:06:49