Ensemble Algorithms in Reinforcement Learning
- 2 May 2008
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
- Vol. 38 (4), 930-936
- https://doi.org/10.1109/tsmcb.2008.920231
Abstract
This paper describes several ensemble methods that combine multiple reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of the different RL algorithms. We designed and implemented four ensemble methods that combine the following five RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. These intuitively designed ensemble methods, namely majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work in which ensemble methods were used in RL to represent and learn a single value function. We present experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are dynamic or partially observable in nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
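The combination rules named in the abstract can be illustrated with a minimal sketch. The following is an assumption-laden simplification, not the paper's implementation: each RL algorithm is reduced to an action-probability vector (e.g., a Boltzmann/softmax policy over its action values), and the ensemble rules then combine those vectors. Function names and the temperature parameter `tau` are illustrative choices, not taken from the paper.

```python
import numpy as np

def boltzmann(prefs, tau=1.0):
    # Softmax policy from one algorithm's action preferences (illustrative).
    z = np.exp(np.asarray(prefs, dtype=float) / tau)
    return z / z.sum()

def majority_voting(policies):
    # Each algorithm casts one vote for its greedy action;
    # the ensemble preference is the normalized vote count.
    votes = np.zeros(len(policies[0]))
    for p in policies:
        votes[np.argmax(p)] += 1
    return votes / votes.sum()

def boltzmann_multiplication(policies):
    # Multiply the algorithms' action probabilities elementwise, renormalize.
    prod = np.prod(np.stack(policies), axis=0)
    return prod / prod.sum()

def boltzmann_addition(policies):
    # Sum the algorithms' action probabilities, renormalize.
    s = np.sum(np.stack(policies), axis=0)
    return s / s.sum()
```

For example, two algorithms that both favor action 1 yield a BM ensemble that also favors action 1, with the product sharpening agreement and suppressing actions any single algorithm rates poorly; this sharpening is one intuition for why BM performed well in the paper's experiments.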