RandomTreesEmbedding provides a way to map data to a very high-dimensional, sparse representation, which might be beneficial for classification.
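A minimal sketch of this transform, assuming synthetic make_circles data and illustrative parameter values:

```python
# Map data to a sparse, high-dimensional leaf-index encoding with
# RandomTreesEmbedding (data and parameters here are illustrative).
from sklearn.datasets import make_circles
from sklearn.ensemble import RandomTreesEmbedding

X, y = make_circles(factor=0.5, noise=0.05, random_state=0)
hasher = RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0)
X_sparse = hasher.fit_transform(X)  # sparse one-hot encoding of leaf membership
print(X_sparse.shape)  # many more columns than the 2 input features
```

The transformed data can then be fed to a linear classifier, which can benefit from the implicit non-linear partitioning of the input space.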
Out-of-bag (OOB) estimates can be a useful heuristic to estimate the "optimal" number of boosting iterations. OOB estimates are almost identical to cross-validation estimates, but they can be computed on the fly without the need for repeated model fitting.
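A minimal sketch of reading the OOB improvements from a fitted model, assuming synthetic data; subsampling (subsample < 1.0) is required for OOB estimates to be available:

```python
# Use cumulative OOB improvements as a heuristic for the best number of
# boosting iterations (synthetic data; settings are illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5, random_state=0)
clf.fit(X, y)

cum_oob = np.cumsum(clf.oob_improvement_)  # one entry per boosting iteration
best_n_iter = int(np.argmax(cum_oob)) + 1  # iteration where the OOB gain peaks
print(best_n_iter)
```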
The RandomForestClassifier is trained using bootstrap aggregation, where each new tree is fit from a bootstrap sample of the training observations.
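A minimal sketch of such a fit on synthetic data; with bootstrap=True (the default) each tree sees a bootstrap sample, and oob_score=True scores the forest on the observations each tree left out:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = RandomForestClassifier(
    n_estimators=100, bootstrap=True, oob_score=True, random_state=0
)
clf.fit(X, y)
print(clf.oob_score_)  # accuracy estimated on out-of-bag samples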
This example shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
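A minimal sketch of the importance computation, assuming a synthetic task with 3 informative features out of 10; the spread of importances across trees plays the role of the error bars:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

X, y = make_classification(
    n_samples=1000, n_features=10, n_informative=3, random_state=0
)
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)

importances = forest.feature_importances_
# Inter-tree variability: std of per-tree importances across the forest.
std = np.std([t.feature_importances_ for t in forest.estimators_], axis=0)
for i in np.argsort(importances)[::-1]:
    print(f"feature {i}: {importances[i]:.3f} +/- {std[i]:.3f}")
```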
This example is based on Figure 10.2 from Hastie et al. 2009 [1] and illustrates the difference in performance between the discrete SAMME [2] boosting algorithm and the real SAMME.R boosting algorithm.
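A minimal sketch of the comparison, assuming synthetic data and a scikit-learn version in which both variants are available (SAMME.R was deprecated and later removed in recent releases, and older releases spell the first argument base_estimator):

```python
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_gaussian_quantiles(
    n_samples=2000, n_features=10, n_classes=3, random_state=0
)
for algorithm in ("SAMME", "SAMME.R"):  # discrete vs. real boosting
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=2),
        n_estimators=100,
        algorithm=algorithm,
        random_state=0,
    )
    clf.fit(X, y)
    print(algorithm, clf.score(X, y))
```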
Partial dependence plots show the dependence between the target function and a set of target features, marginalizing over the values of all other features.
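A minimal sketch of computing and plotting partial dependence for a fitted model, assuming synthetic Friedman #1 data and the sklearn.inspection plotting API:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_friedman1(n_samples=500, random_state=0)
est = GradientBoostingRegressor(random_state=0).fit(X, y)

# One-way dependence on features 0 and 1, plus their two-way interaction.
PartialDependenceDisplay.from_estimator(est, X, features=[0, 1, (0, 1)])
plt.show()
```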
This example shows how quantile regression can be used to create prediction intervals.
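A minimal sketch of a 90% prediction interval using gradient boosting's quantile loss; the noisy sine data and quantile levels are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 10, 500)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=500)

# One model per quantile: lower bound, median, upper bound.
models = {
    alpha: GradientBoostingRegressor(
        loss="quantile", alpha=alpha, random_state=0
    ).fit(X, y)
    for alpha in (0.05, 0.5, 0.95)
}

X_test = np.linspace(0, 10, 100).reshape(-1, 1)
lower = models[0.05].predict(X_test)
upper = models[0.95].predict(X_test)  # [lower, upper] is the ~90% interval
```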
An example using IsolationForest for anomaly detection. The IsolationForest "isolates" observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.
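A minimal sketch of the fit/predict workflow on synthetic data; the inlier and outlier distributions are illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(200, 2)                     # inliers near the origin
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))

clf = IsolationForest(random_state=0).fit(X_train)
print(clf.predict(X_outliers))  # -1 flags anomalies, +1 flags inliers
```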
This example shows the use of forests of trees to evaluate the importance of the pixels in an image classification task (faces). The hotter the pixel, the more important it is.
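A minimal sketch of ranking pixel importances on the Olivetti faces (fetched and cached on first use); the forest settings are illustrative:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.ensemble import ExtraTreesClassifier

data = fetch_olivetti_faces()
forest = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
forest.fit(data.data, data.target)

# Reshape per-pixel importances back to image shape: hotter = more important.
importances = forest.feature_importances_.reshape(data.images[0].shape)
plt.imshow(importances, cmap=plt.cm.hot)
plt.show()
```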
An example to compare multi-output regression with random forest and the multioutput.MultiOutputRegressor meta-estimator.
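A minimal sketch of the comparison, assuming a synthetic two-target regression problem:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = np.sort(200 * rng.rand(400, 1) - 100, axis=0)
y = np.column_stack([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()])

# Random forests handle multiple outputs natively ...
native = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# ... while MultiOutputRegressor fits one independent forest per target.
wrapped = MultiOutputRegressor(
    RandomForestRegressor(n_estimators=100, random_state=0)
).fit(X, y)
print(native.predict(X[:2]))
print(wrapped.predict(X[:2]))
```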