RandomTreesEmbedding provides a way to map data to a very high-dimensional, sparse representation, which might be beneficial for classification.
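A minimal sketch of the embedding, assuming scikit-learn is installed; the make_circles toy data and parameter values are illustrative, not necessarily those of the example:

    from sklearn.datasets import make_circles
    from sklearn.ensemble import RandomTreesEmbedding
    from sklearn.naive_bayes import BernoulliNB

    # Toy two-class dataset
    X, y = make_circles(factor=0.5, noise=0.05, random_state=0)

    # Each sample is mapped to a one-hot encoding of the leaves it lands in
    hasher = RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0)
    X_sparse = hasher.fit_transform(X)  # sparse, high-dimensional matrix

    # A simple model that benefits from the sparse representation
    nb = BernoulliNB().fit(X_sparse, y)
    print(X_sparse.shape, nb.score(X_sparse, y))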
This example is based on Figure 10.2 from Hastie et al 2009 [1] and illustrates the difference in performance between the discrete SAMME [2] boosting algorithm and the real SAMME.R boosting algorithm.
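A rough sketch of the discrete-SAMME setup with decision stumps, assuming a recent scikit-learn; note that the estimator argument was named base_estimator before version 1.2 and that the algorithm argument is deprecated in the newest releases:

    from sklearn.datasets import make_hastie_10_2
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Hastie et al. (2009) setup: boosted decision stumps
    X, y = make_hastie_10_2(n_samples=4000, random_state=1)
    X_train, y_train = X[:2000], y[:2000]
    X_test, y_test = X[2000:], y[2000:]

    stump = DecisionTreeClassifier(max_depth=1)
    ada = AdaBoostClassifier(estimator=stump, n_estimators=200, algorithm="SAMME")
    ada.fit(X_train, y_train)
    print(ada.score(X_test, y_test))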
The RandomForestClassifier is trained using bootstrap aggregation, where each new tree is fit from a bootstrap sample of the training observations.
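A minimal sketch of bagging with an out-of-bag score; the synthetic dataset and sizes below are illustrative:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=25, random_state=0)

    # oob_score=True scores each sample only with the trees whose
    # bootstrap sample did not contain it
    clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
    clf.fit(X, y)
    print(f"OOB score: {clf.oob_score_:.3f}")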
This example shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
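A sketch of how such importances and their inter-tree variability can be computed; the artificial task here (3 informative features out of 10) is an assumption for illustration:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10,
                               n_informative=3, shuffle=False, random_state=0)
    forest = RandomForestClassifier(n_estimators=250, random_state=0).fit(X, y)

    importances = forest.feature_importances_
    # Variability across trees: std of the per-tree importances
    std = np.std([t.feature_importances_ for t in forest.estimators_], axis=0)
    for i in np.argsort(importances)[::-1]:
        print(f"feature {i}: {importances[i]:.3f} +/- {std[i]:.3f}")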
Out-of-bag (OOB) estimates can be a useful heuristic to estimate the "optimal" number of boosting iterations. OOB estimates are almost identical to cross-validation estimates, but they can be computed on the fly without repeated model fitting.
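A minimal sketch of the OOB heuristic for gradient boosting; setting subsample below 1.0 enables the oob_improvement_ attribute, and the dataset below is a stand-in:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, random_state=0)

    # subsample < 1.0 leaves out-of-bag samples at each boosting stage
    clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5,
                                     random_state=0)
    clf.fit(X, y)

    # Cumulative OOB improvement; its argmax is a heuristic stopping point
    cum_oob = np.cumsum(clf.oob_improvement_)
    print("suggested n_estimators:", int(np.argmax(cum_oob)) + 1)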
This example shows the use of forests of trees to evaluate the importance of the pixels in an image classification task (faces). The hotter the pixel, the more important it is.
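A sketch of per-pixel importances on a face dataset, assuming network access (fetch_olivetti_faces downloads the data on first use); ExtraTreesClassifier is used here as one reasonable choice of forest:

    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.ensemble import ExtraTreesClassifier

    faces = fetch_olivetti_faces()  # downloads on first call
    X, y = faces.data, faces.target

    forest = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    forest.fit(X, y)

    # One importance per pixel; reshape to image shape for a heat map
    img = forest.feature_importances_.reshape(faces.images[0].shape)
    print(img.shape)  # (64, 64)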
An example to compare multi-output regression with random forest and the multioutput.MultiOutputRegressor meta-estimator.
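A minimal sketch of the comparison; the noisy sine/cosine targets are illustrative. RandomForestRegressor handles multi-output targets natively, while MultiOutputRegressor fits one independent forest per output column:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.multioutput import MultiOutputRegressor

    rng = np.random.RandomState(0)
    X = np.sort(200 * rng.rand(600, 1) - 100, axis=0)
    y = np.column_stack([np.sin(X).ravel(), np.cos(X).ravel()])
    y += 0.1 * rng.randn(*y.shape)

    # Native multi-output support
    rf = RandomForestRegressor(max_depth=6, random_state=0).fit(X, y)

    # One forest per target column via the meta-estimator
    multi_rf = MultiOutputRegressor(
        RandomForestRegressor(max_depth=6, random_state=0)
    ).fit(X, y)
    print(rf.predict(X[:2]), multi_rf.predict(X[:2]))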
Partial dependence plots show the dependence between the target function and a set of target features, marginalizing over the values of all other features.
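A minimal sketch using PartialDependenceDisplay, which requires scikit-learn 1.0+ and matplotlib; the Friedman #1 dataset and feature choices are assumptions for illustration:

    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    X, y = make_friedman1(n_samples=500, random_state=0)
    est = GradientBoostingRegressor(random_state=0).fit(X, y)

    # One-way dependence on features 0 and 1, plus their two-way interaction
    PartialDependenceDisplay.from_estimator(est, X, features=[0, 1, (0, 1)])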
An example using IsolationForest for anomaly detection. The IsolationForest "isolates" observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.
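A minimal sketch of anomaly detection with IsolationForest; the inlier cluster and scattered outliers below are synthetic stand-ins:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.RandomState(42)
    X_train = 0.3 * rng.randn(100, 2)               # inlier cluster
    X_outliers = rng.uniform(-4, 4, size=(20, 2))   # scattered anomalies

    clf = IsolationForest(n_estimators=100, random_state=rng).fit(X_train)

    # predict() returns +1 for inliers, -1 for anomalies;
    # score_samples() exposes the underlying anomaly score
    print(clf.predict(X_outliers))
    print(clf.score_samples(X_outliers[:3]))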
This example shows how quantile regression can be used to create prediction intervals.
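A sketch of one common approach, fitting one gradient-boosted model per quantile with loss="quantile"; the dataset and the 90% interval choice are illustrative:

    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor

    X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)

    # The 5th and 95th percentile models bound a 90% prediction interval
    params = dict(loss="quantile", n_estimators=200, random_state=0)
    lower = GradientBoostingRegressor(alpha=0.05, **params).fit(X, y)
    upper = GradientBoostingRegressor(alpha=0.95, **params).fit(X, y)

    print(lower.predict(X[:3]))
    print(upper.predict(X[:3]))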