Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training data decreases, while the performance on the test data is optimal within a range of values of the regularization parameter.
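A minimal sketch of the idea, not the gallery script itself (the dataset, estimator, and alpha range here are illustrative): sweep the regularization strength of a Ridge regressor and compare train vs. test R^2.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for alpha in np.logspace(-3, 3, 7):
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        # The train score falls steadily as alpha grows; the test score
        # peaks somewhere in between the extremes.
        print(f"alpha={alpha:g}  train R^2={model.score(X_train, y_train):.3f}  "
              f"test R^2={model.score(X_test, y_test):.3f}")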
This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions.
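A compact sketch of that setup (the degrees, noise level, and target function are illustrative): fit polynomials of increasing degree to a noisy cosine and score each fit by cross-validation.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.RandomState(0)
    X = np.sort(rng.rand(30))[:, np.newaxis]
    y = np.cos(1.5 * np.pi * X.ravel()) + rng.randn(30) * 0.1

    for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        scores = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=10)
        print(f"degree={degree:2d}  MSE={-scores.mean():.3f} (+/- {scores.std():.3f})")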
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality using cross-validation. ROC curves typically feature the true positive rate on the Y axis and the false positive rate on the X axis.
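A minimal sketch (synthetic data stands in for the gallery's dataset): compute a ROC curve and AUC separately for each cross-validation fold.

    from sklearn.datasets import make_classification
    from sklearn.metrics import auc, roc_curve
    from sklearn.model_selection import StratifiedKFold
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    cv = StratifiedKFold(n_splits=5)
    clf = SVC(kernel="linear", probability=True, random_state=0)

    for fold, (train, test) in enumerate(cv.split(X, y)):
        # Fit on the fold's training split, score probabilities on its test split.
        probas = clf.fit(X[train], y[train]).predict_proba(X[test])
        fpr, tpr, _ = roc_curve(y[test], probas[:, 1])
        print(f"fold {fold}: AUC = {auc(fpr, tpr):.3f}")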
This example compares non-nested and nested cross-validation strategies on a classifier of the iris data set. Nested cross-validation (CV) is often used to train a model in which hyperparameters also need to be optimized.
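A sketch of the comparison on iris, with an illustrative parameter grid: the non-nested score reuses the same data for tuning and scoring and is therefore optimistically biased, while the nested score wraps the same search in an outer CV loop.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}

    inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)
    outer_cv = KFold(n_splits=4, shuffle=True, random_state=0)

    clf = GridSearchCV(SVC(), param_grid, cv=inner_cv)
    non_nested = clf.fit(X, y).best_score_                    # biased upward
    nested = cross_val_score(clf, X, y, cv=outer_cv).mean()   # outer loop never sees tuning data
    print(f"non-nested: {non_nested:.3f}  nested: {nested:.3f}")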
In this plot you can see the training scores and validation scores of an SVM for different values of the kernel parameter gamma. For very low values of gamma, both the training score and the validation score are low: the model is underfitting.
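A minimal sketch of that curve using validation_curve on the digits dataset (the gamma range is illustrative), printed as a table instead of plotted:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import validation_curve
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    gammas = np.logspace(-6, -1, 5)
    train_scores, valid_scores = validation_curve(
        SVC(), X, y, param_name="gamma", param_range=gammas, cv=5)

    for g, tr, va in zip(gammas, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
        print(f"gamma={g:.0e}  train={tr:.3f}  validation={va:.3f}")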
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality. ROC curves typically feature the true positive rate on the Y axis and the false positive rate on the X axis.
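A minimal sketch on synthetic data: fit a classifier, obtain continuous decision scores on a held-out set, and derive the ROC points and AUC from them.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, roc_curve
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each threshold over these scores yields one (fpr, tpr) point on the curve.
    scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).decision_function(X_te)
    fpr, tpr, thresholds = roc_curve(y_te, scores)
    print(f"AUC = {roc_auc_score(y_te, scores):.3f}")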
On the left side the learning curve of a naive Bayes classifier is shown for the digits dataset. Note that the training score and the cross-validation score are both not very good at the end.
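A sketch of the naive Bayes half of that figure, reported as a table rather than a plot:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import learning_curve
    from sklearn.naive_bayes import GaussianNB

    X, y = load_digits(return_X_y=True)
    # Scores are computed at several training-set sizes, averaged over CV folds.
    sizes, train_scores, valid_scores = learning_curve(GaussianNB(), X, y, cv=5)

    for n, tr, va in zip(sizes, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
        print(f"n={n:4d}  train={tr:.3f}  cv={va:.3f}")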
This example shows how a classifier is optimized by cross-validation, which is done using the GridSearchCV object on a development set that comprises only half of the available labeled data.
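A condensed sketch of that workflow (the grid values are illustrative): tune an SVC by grid search on one half of the data, then report performance on the held-out half.

    from sklearn.datasets import load_digits
    from sklearn.metrics import classification_report
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_dev, X_eval, y_dev, y_eval = train_test_split(X, y, test_size=0.5, random_state=0)

    param_grid = [{"kernel": ["rbf"], "gamma": [1e-3, 1e-4], "C": [1, 10, 100]},
                  {"kernel": ["linear"], "C": [1, 10, 100]}]
    search = GridSearchCV(SVC(), param_grid, cv=5).fit(X_dev, y_dev)
    print("best parameters:", search.best_params_)
    # Final evaluation uses data the search never touched.
    print(classification_report(y_eval, search.predict(X_eval)))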
Example of Precision-Recall metric to evaluate classifier output quality. In information retrieval, precision is a measure of result relevancy, while recall is a measure of how many truly relevant results are returned.
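A minimal sketch on imbalanced synthetic data (the class weights are illustrative): compute the precision-recall curve and its average-precision summary.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score, precision_recall_curve
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).decision_function(X_te)
    # One (recall, precision) point per score threshold.
    precision, recall, _ = precision_recall_curve(y_te, scores)
    print(f"average precision = {average_precision_score(y_te, scores):.3f}")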
Compare randomized search and grid search for optimizing hyperparameters of a random forest. All parameters that influence the learning are searched simultaneously (except for the number of estimators, which poses a time / quality trade-off).
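A sketch of the comparison on digits (the parameter distributions, grid, and candidate budget are illustrative): randomized search samples a fixed number of candidates from distributions, while grid search enumerates an explicit grid.

    from scipy.stats import randint
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

    X, y = load_digits(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=20, random_state=0)

    # Randomized search: draw 20 candidates from the distributions below.
    dist = {"max_depth": [3, None], "max_features": randint(1, 11),
            "min_samples_split": randint(2, 11)}
    rand = RandomizedSearchCV(clf, dist, n_iter=20, random_state=0).fit(X, y)

    # Grid search: exhaustively try every combination in an explicit grid.
    grid = {"max_depth": [3, None], "max_features": [1, 3, 10],
            "min_samples_split": [2, 3, 10]}
    gs = GridSearchCV(clf, grid).fit(X, y)
    print(f"randomized best: {rand.best_score_:.3f}  grid best: {gs.best_score_:.3f}")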