Pure Exploration in Multi-armed Bandits Problems
- 1 January 2009
- Conference paper (book chapter)
- Published by Springer Science and Business Media LLC in Lecture Notes in Computer Science
Abstract
No abstract available.
This publication has 8 references indexed in Scilit:
- Pure Exploration in Multi-armed Bandits Problems. Lecture Notes in Computer Science, 2009
- Bandit Based Monte-Carlo Planning. Lecture Notes in Computer Science, 2006
- The Budgeted Multi-armed Bandit Problem. Lecture Notes in Computer Science, 2004
- PAC Bounds for Multi-armed Bandit and Markov Decision Processes. Lecture Notes in Computer Science, 2002
- The Nonstochastic Multiarmed Bandit Problem. SIAM Journal on Computing, 2002
- Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 2002
- Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 1985
- Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 1952