Reinforcement Learning Trees
- 2 October 2015
- journal article
- research article
- Published by Informa UK Limited in Journal of the American Statistical Association
- Vol. 110 (512), 1770-1784
- https://doi.org/10.1080/01621459.2015.1036994
Abstract
In this article, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction process. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with the largest marginal effect from the immediate split, the constructed tree uses the available samples more efficiently. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that toward terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss the rationale in general settings. Supplementary materials for this article are available online.
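The two main ideas from the abstract, choosing the split variable by an importance estimate rather than the best immediate cut, and muting weak variables before recursing into child nodes, can be sketched as follows. This is a toy illustration, not the authors' algorithm: the importance measure here is a simple absolute-correlation proxy standing in for the embedded-model importance that RLT estimates at each node, and the median cut point, `min_node`, and `mute_frac` are hypothetical simplifications.

```python
import random

def _importance(X, y, variables):
    """Absolute-correlation importance proxy (a hypothetical stand-in
    for the embedded-model importance RLT computes at each node)."""
    n = len(y)
    ybar = sum(y) / n
    imps = {}
    for j in variables:
        xj = [row[j] for row in X]
        xbar = sum(xj) / n
        cov = sum((xj[i] - xbar) * (y[i] - ybar) for i in range(n))
        vx = sum((v - xbar) ** 2 for v in xj)
        vy = sum((v - ybar) ** 2 for v in y)
        imps[j] = abs(cov) / ((vx * vy) ** 0.5 + 1e-12)
    return imps

def build_tree(X, y, variables, min_node=10, mute_frac=0.5):
    """Grow one tree: split on the highest-importance variable,
    then mute the weakest variables before recursing."""
    if len(y) < min_node or not variables:
        return {"leaf": True, "value": sum(y) / len(y)}
    imps = _importance(X, y, variables)
    # "Reinforcement" step: pick the most promising variable overall,
    # not the one with the best immediate cut.
    split_var = max(variables, key=lambda j: imps[j])
    # Muting step: children only ever consider the stronger variables.
    ranked = sorted(variables, key=lambda j: imps[j], reverse=True)
    kept = ranked[: max(1, round(len(ranked) * (1 - mute_frac)))]
    xs = sorted(row[split_var] for row in X)
    cut = xs[len(xs) // 2]  # median cut point (simplification)
    left = [i for i in range(len(y)) if X[i][split_var] <= cut]
    right = [i for i in range(len(y)) if X[i][split_var] > cut]
    if not left or not right:
        return {"leaf": True, "value": sum(y) / len(y)}
    return {
        "leaf": False, "var": split_var, "cut": cut,
        "left": build_tree([X[i] for i in left], [y[i] for i in left],
                           kept, min_node, mute_frac),
        "right": build_tree([X[i] for i in right], [y[i] for i in right],
                            kept, min_node, mute_frac),
    }

def predict(tree, x):
    while not tree["leaf"]:
        tree = tree["left"] if x[tree["var"]] <= tree["cut"] else tree["right"]
    return tree["value"]

# Toy high-dimensional setting: only variable 0 drives the response.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]
y = [row[0] + 0.1 * random.gauss(0, 1) for row in X]
tree = build_tree(X, y, list(range(10)))
print(tree["var"])  # root should pick the strong variable
```

Because every node ranks variables by estimated importance, the root splits on the single strong variable, and muting halves the candidate set at each level, so deep nodes with few samples never search over the ten noise dimensions.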
This publication has 21 references indexed in Scilit:
- BART: Bayesian additive regression trees, The Annals of Applied Statistics, 2010
- Random survival forests, The Annals of Applied Statistics, 2008
- Extremely randomized trees, Machine Learning, 2006
- Gene selection and classification of microarray data using random forest, BMC Bioinformatics, 2006
- Identifying SNPs predictive of phenotype using random forests, Genetic Epidemiology, 2004
- Greedy function approximation: A gradient boosting machine, The Annals of Statistics, 2001
- Random Forests, Machine Learning, 2001
- An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization, Machine Learning, 2000
- Shape Quantization and Recognition with Randomized Trees, Neural Computation, 1997
- Bagging predictors, Machine Learning, 1996