Reinforcement learning in continuous time: advantage updating
- 1 January 1994
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
A new algorithm for reinforcement learning, advantage updating, is described. Advantage updating is a direct learning technique; it does not require a model to be given or learned. It is incremental, requiring only a constant amount of calculation per time step, independent of the number of possible actions, possible outcomes from a given action, or number of states. Analysis and simulation indicate that advantage updating is applicable to reinforcement learning systems working in continuous time (or discrete time with small time steps) for which standard algorithms such as Q-learning are not applicable. Simulation results are presented indicating that for a simple linear quadratic regulator (LQR) problem, advantage updating learns more quickly than Q-learning by a factor of 100,000 when the time step is small. Even for large time steps, advantage updating is never slower than Q-learning, and advantage updating is more resistant to noise than is Q-learning. Convergence properties are discussed. It is proved that the learning rule for advantage updating converges to the optimal policy with probability one.
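The abstract's claim that Q-learning degrades at small time steps can be illustrated with a short sketch. To first order in the time step, Q(x, u) ≈ V(x) + Δt · A(x, u), so the spread of Q-values across actions shrinks linearly with Δt and becomes indistinguishable under noise, while the advantages themselves keep a Δt-independent scale. The numbers below are illustrative assumptions, not the paper's LQR experiment:

```python
import numpy as np

# Hedged illustration (not the paper's experiment): the spread of Q-values
# across actions shrinks linearly with the time step dt, while the stored
# advantages A(x, u) keep a dt-independent scale.
V = 10.0                          # illustrative state value
A = np.array([0.0, -1.0, -2.0])   # illustrative advantages (best action = 0)

for dt in (1.0, 0.1, 0.001):
    Q = V + dt * A                # first-order Q-values at time step dt
    q_spread = Q.max() - Q.min()
    a_spread = A.max() - A.min()
    print(f"dt={dt:6}: Q spread = {q_spread:.4f}, advantage spread = {a_spread:.1f}")
```

At dt = 0.001 the Q-value spread is 0.002 against a baseline value of 10, so a Q-learner must resolve its estimates to a precision proportional to Δt; storing advantages directly, as advantage updating does, avoids this scaling problem.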