Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training set degrades, while the performance on the test set is optimal within a range of values of the regularization parameter.
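A minimal sketch of the idea, assuming a Ridge regressor on synthetic data (both choices are illustrative, not taken from the example script itself):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sweep the regularization strength and compare train vs. test scores.
for alpha in np.logspace(-3, 3, 7):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(f"alpha={alpha:g}  train R^2={model.score(X_train, y_train):.3f}"
          f"  test R^2={model.score(X_test, y_test):.3f}")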
This example demonstrates the problems of underfitting and overfitting, and how we can use linear regression with polynomial features to approximate nonlinear functions.
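A short sketch along these lines, assuming a noisy cosine target (a common stand-in; the example script may use different data and degrees):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = np.sort(rng.rand(30))[:, None]
y = np.cos(1.5 * np.pi * X.ravel()) + rng.randn(30) * 0.1

# Degree 1 underfits, a moderate degree fits well, a high degree overfits.
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=10)
    print(f"degree={degree:2d}  CV MSE={-scores.mean():.3f}")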
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality using cross-validation. ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis; computing one curve per cross-validation fold shows how the curve varies with the training data.
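A minimal per-fold sketch, assuming synthetic binary data and an SVC with probability estimates (illustrative choices, not necessarily those of the example):

from sklearn.datasets import make_classification
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SVC(kernel="linear", probability=True, random_state=0)

# One ROC curve (and AUC) per cross-validation fold.
for fold, (train, test) in enumerate(StratifiedKFold(n_splits=5).split(X, y)):
    probas = clf.fit(X[train], y[train]).predict_proba(X[test])
    fpr, tpr, _ = roc_curve(y[test], probas[:, 1])
    print(f"fold {fold}: AUC = {auc(fpr, tpr):.3f}")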
This example compares non-nested and nested cross-validation strategies on a classifier of the iris data set. Nested cross-validation (CV) is often used to train a model in which hyperparameters also need to be optimized.
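A compact sketch of the comparison, assuming an SVC and a small parameter grid (grid values here are placeholders):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}

inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=0)

clf = GridSearchCV(SVC(), param_grid, cv=inner_cv)
# Non-nested: the same data selects hyperparameters and scores the model,
# which yields an optimistically biased estimate.
non_nested = clf.fit(X, y).best_score_
# Nested: an outer CV loop scores the already-tuned model on held-out folds.
nested = cross_val_score(clf, X, y, cv=outer_cv).mean()
print(f"non-nested: {non_nested:.3f}  nested: {nested:.3f}")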
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality. ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis. This means that the top left corner of the plot is the "ideal" point.
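A minimal sketch, assuming synthetic data and a logistic regression scorer (the example itself may use a different dataset and estimator):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Continuous decision scores are needed to trace the curve over thresholds.
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).decision_function(X_te)
fpr, tpr, thresholds = roc_curve(y_te, scores)
print(f"AUC = {roc_auc_score(y_te, scores):.3f}")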
In this plot you can see the training scores and validation scores of an SVM for different values of the kernel parameter gamma. For very low values of gamma, you can see that both the training score and the validation score are low: this is called underfitting.
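A sketch of how such scores can be computed with validation_curve, assuming the digits dataset (the gamma range here is illustrative):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
gammas = np.logspace(-6, -1, 5)

# One column of scores per CV fold, one row per gamma value.
train_scores, valid_scores = validation_curve(
    SVC(), X, y, param_name="gamma", param_range=gammas, cv=5)
for g, tr, va in zip(gammas, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print(f"gamma={g:.0e}  train={tr:.3f}  validation={va:.3f}")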
On the left side the learning curve of a naive Bayes classifier is shown for the digits dataset. Note that the training score and the cross-validation score are both not very good at the end.
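A minimal sketch of computing that curve with learning_curve (the training-size grid is an assumption):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)

# Scores are recomputed at increasing training-set sizes.
sizes, train_scores, valid_scores = learning_curve(
    GaussianNB(), X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
for n, tr, va in zip(sizes, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print(f"n={n:4d}  train={tr:.3f}  cv={va:.3f}")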
This example shows how a classifier is optimized by cross-validation, which is done using the GridSearchCV object on a development set that comprises only half of the available labeled data.
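A compact sketch of that workflow, assuming an SVC on the digits dataset (grid values are placeholders):

from sklearn.datasets import load_digits
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
# Hold out half of the labeled data for the final evaluation.
X_dev, X_eval, y_dev, y_eval = train_test_split(X, y, test_size=0.5, random_state=0)

grid = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": [1e-3, 1e-4]}, cv=5)
grid.fit(X_dev, y_dev)
print("best parameters:", grid.best_params_)
print(classification_report(y_eval, grid.predict(X_eval)))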
Example of Precision-Recall metric to evaluate classifier output quality. In information retrieval, precision is a measure of result relevancy, while recall is a measure of how many truly relevant results are returned.
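A minimal sketch, assuming imbalanced synthetic data and a linear SVM scorer (illustrative choices, not necessarily those of the example):

from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score, precision_recall_curve
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Class imbalance is where precision-recall is most informative.
X, y = make_classification(n_samples=500, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LinearSVC().fit(X_tr, y_tr).decision_function(X_te)
precision, recall, thresholds = precision_recall_curve(y_te, scores)
print(f"average precision = {average_precision_score(y_te, scores):.3f}")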
Example of confusion matrix usage to evaluate the quality of the output of a classifier on the iris data set. The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier.
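A minimal sketch on iris, assuming a linear SVC as the classifier (the estimator choice is an assumption):

from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
y_pred = SVC(kernel="linear").fit(X_tr, y_tr).predict(X_te)

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_te, y_pred))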