Learning Nonlinear Functions Using Regularized Greedy Forest
- 20 August 2013
- research article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Pattern Analysis and Machine Intelligence
- Vol. 36 (5), 942-954
- https://doi.org/10.1109/tpami.2013.159
Abstract
We consider the problem of learning a forest of nonlinear decision rules with general loss functions. The standard methods employ boosted decision trees such as AdaBoost for exponential loss and Friedman's gradient boosting for general loss. In contrast to these traditional boosting algorithms that treat a tree learner as a black box, the method we propose directly learns decision forests via fully-corrective regularized greedy search using the underlying forest structure. Our method achieves higher accuracy and smaller models than gradient boosting on many of the datasets we tested.
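To make the contrast in the abstract concrete, below is a minimal Python sketch of the fully-corrective regularized greedy idea, simplified to decision stumps and squared loss with an L2 penalty, in which case the fully-corrective step reduces to ridge regression over leaf indicators. The names `stump_features`, `corrective_fit`, and `rgf_sketch` are illustrative assumptions, not the authors' implementation; the actual RGF algorithm grows and restructures deeper trees, supports general loss functions, and uses more efficient incremental updates.

```python
# Hedged sketch of fully-corrective regularized greedy forest learning,
# restricted to decision stumps and squared loss. All names and choices
# here are assumptions for illustration, not the paper's algorithm.
import numpy as np

def stump_features(X, splits):
    """0/1 basis matrix: two leaf-indicator columns per (feature, threshold) stump."""
    cols = []
    for j, t in splits:
        cols.append((X[:, j] <= t).astype(float))  # left-leaf indicator
        cols.append((X[:, j] > t).astype(float))   # right-leaf indicator
    return np.column_stack(cols) if cols else np.zeros((X.shape[0], 0))

def corrective_fit(B, y, lam):
    """Fully-corrective step: jointly re-optimize ALL leaf weights.
    For squared loss with an L2 penalty this is exactly ridge regression."""
    k = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ y)

def rgf_sketch(X, y, n_stumps=10, lam=1.0):
    splits, w = [], np.zeros(0)
    for _ in range(n_stumps):
        best_split, best_loss = None, np.inf
        for j in range(X.shape[1]):            # greedy search over all
            for t in np.unique(X[:, j])[:-1]:  # candidate stumps
                B = stump_features(X, splits + [(j, t)])
                v = corrective_fit(B, y, lam)
                loss = np.sum((y - B @ v) ** 2) + lam * np.sum(v ** 2)
                if loss < best_loss:
                    best_split, best_loss = (j, t), loss
        splits.append(best_split)
        # Unlike stage-wise boosting, the weights of previously added
        # leaves are refit here rather than frozen.
        w = corrective_fit(stump_features(X, splits), y, lam)
    return splits, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))
    y = np.sin(3 * X[:, 0]) + 0.5 * (X[:, 1] > 0) + 0.1 * rng.normal(size=200)
    splits, w = rgf_sketch(X, y)
    pred = stump_features(X, splits) @ w
    print("train MSE:", np.mean((y - pred) ** 2))
```

The difference from stage-wise gradient boosting is visible in the last step of the loop: after each greedy addition, every leaf weight in the forest is re-optimized against the regularized objective, instead of fixing earlier trees and fitting only the newest one.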
This publication has 15 references indexed in Scilit:
- BART: Bayesian additive regression trees. The Annals of Applied Statistics, 2010
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints. SIAM Journal on Optimization, 2010
- Predictive learning via rule ensembles. The Annals of Applied Statistics, 2008
- SmcHD1, containing a structural-maintenance-of-chromosomes hinge domain, has a critical role in X inactivation. Nature Genetics, 2008
- Boosting with early stopping: Convergence and consistency. The Annals of Statistics, 2005
- The Boosting Approach to Machine Learning: An Overview. Springer Science and Business Media LLC, 2003
- Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 2001
- A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences, 1997
- Hybrid ℓ1/ℓ2 minimization with applications to tomography. Geophysics, 1997
- Bagging predictors. Machine Learning, 1996