Theoretical Views of Boosting
- 19 November 1999
- conference paper
- Published by Springer Science and Business Media LLC in Lecture Notes in Computer Science
Abstract
Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting including analyses of AdaBoost’s training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for multiclass classification problems. We also briefly mention some empirical work.
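The abstract surveys AdaBoost without stating the algorithm itself. As an illustrative sketch only (the function names, the decision-stump weak learner, and the toy data below are our own choices, not taken from the chapter), the core AdaBoost loop — train a weak learner on the current weights, compute its coefficient from its weighted error, then reweight the examples — can be written as:

```python
import math

def stump_predict(feature, threshold, polarity, x):
    """Decision stump: predict +1 or -1 from one feature vs. a threshold."""
    return polarity if x[feature] >= threshold else -polarity

def train_stump(X, y, w):
    """Exhaustively pick the stump with minimum weighted training error."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for p in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(f, t, p, xi) != yi)
                if err < best_err:
                    best_err, best = err, (f, t, p)
    return best, best_err

def adaboost(X, y, rounds=10):
    """Return a list of (alpha, stump) pairs forming the weighted ensemble."""
    n = len(X)
    w = [1.0 / n] * n                          # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        stump, err = train_stump(X, y, w)
        err = max(err, 1e-10)                  # guard against division by zero
        alpha = 0.5 * math.log((1.0 - err) / err)
        # Raise weights on misclassified points, lower them on correct ones.
        w = [wi * math.exp(-alpha * yi * stump_predict(*stump, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)                             # renormalize to a distribution
        w = [wi / z for wi in w]
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the stumps."""
    score = sum(a * stump_predict(*stump, x) for a, stump in ensemble)
    return 1 if score >= 0 else -1

# Toy usage on a linearly separable 1-D dataset (hypothetical example):
X = [[1.0], [2.0], [3.0], [4.0]]
y = [-1, -1, 1, 1]
ensemble = adaboost(X, y, rounds=5)
```

The reweighting step is what drives the training-error analysis surveyed in the chapter: each round's normalization factor bounds the ensemble's training error by a product that shrinks whenever the weak learner beats random guessing.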
References
- Improved boosting algorithms using confidence-rated predictions. Published by Association for Computing Machinery (ACM), 1998
- Arcing classifier (with discussion and a rejoinder by the author). The Annals of Statistics, 1998
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 1998
- A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences, 1997
- Boosting a Weak Learning Algorithm by Majority. Information and Computation, 1995
- The Nature of Statistical Learning Theory. Published by Springer Science and Business Media LLC, 1995
- Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM, 1994
- Boosting Performance in Neural Networks. International Journal of Pattern Recognition and Artificial Intelligence, 1993
- What Size Net Gives Valid Generalization? Neural Computation, 1989
- A theory of the learnable. Communications of the ACM, 1984