Actor-critic reinforcement learning for tracking control in robotics
- 1 December 2016
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- p. 5819-5826
- https://doi.org/10.1109/cdc.2016.7799164
Abstract
In this article we provide experimental results and an evaluation of a compensation method that improves the tracking performance of a nominal feedback controller by means of reinforcement learning (RL). The compensator is based on the actor-critic scheme and adds a correction signal to the nominal control input with the goal of improving the tracking performance through on-line learning. The algorithm was evaluated on a 6-DOF industrial robot manipulator with the objective of accurately tracking different types of reference trajectories. An extensive experimental study has shown that the proposed RL-based compensation method significantly improves the performance of the nominal feedback controller.
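The scheme described in the abstract — a fixed nominal feedback controller plus an actor-critic compensator that learns an additive correction signal on-line — can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the 1-DOF double-integrator plant, the constant disturbance, the radial-basis features, and all gains and learning rates are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(e, de, centers=np.linspace(-1.0, 1.0, 5)):
    # Crude radial-basis features over the tracking error and its derivative
    # (illustrative choice; the paper's function approximator may differ).
    phi = np.exp(-((e - centers) ** 2 + (de - centers) ** 2) / 0.5)
    return phi / (phi.sum() + 1e-8)

n = 5
theta = np.zeros(n)   # actor weights: map features -> correction signal
w = np.zeros(n)       # critic weights: map features -> value estimate
alpha_a, alpha_c, gamma, sigma = 0.02, 0.1, 0.95, 0.05

dt, kp, kd = 0.01, 40.0, 8.0   # nominal PD gains (illustrative)
pos, vel = 0.0, 0.0
errs = []
for k in range(2000):
    t = k * dt
    ref, dref = np.sin(t), np.cos(t)          # reference trajectory
    e, de = ref - pos, dref - vel
    phi = features(e, de)

    u_nom = kp * e + kd * de                  # nominal feedback controller
    noise = sigma * rng.standard_normal()     # Gaussian exploration
    du = theta @ phi + noise                  # learned correction signal
    u = u_nom + du                            # compensated control input

    # Plant: double integrator with an unmodeled constant disturbance,
    # which the nominal controller alone cannot reject without bias.
    acc = u - 3.0
    vel += acc * dt
    pos += vel * dt

    e2, de2 = np.sin(t + dt) - pos, np.cos(t + dt) - vel
    r = -(e2 ** 2)                            # reward: negative squared error
    phi2 = features(e2, de2)
    delta = r + gamma * (w @ phi2) - w @ phi  # TD error
    w += alpha_c * delta * phi                # critic update
    # Actor update: Gaussian-policy gradient step scaled by the TD error.
    theta += alpha_a * delta * noise * phi / sigma ** 2
    errs.append(abs(e))

print(f"mean |e|, first 200 steps: {np.mean(errs[:200]):.4f}")
print(f"mean |e|, last 200 steps:  {np.mean(errs[-200:]):.4f}")
```

The structure mirrors the abstract's description: the nominal controller `u_nom` is left untouched, and the actor contributes only the additive term `du`, adapted on-line from the temporal-difference error.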
This publication has 6 references indexed in Scilit:
- Nonlinear Disturbance Compensation and Reference Tracking via Reinforcement Learning with Fuzzy Approximators. IFAC Proceedings Volumes, 2014
- Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 2013
- A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2012
- Efficient Model Learning Methods for Actor–Critic Control. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2011
- A survey of iterative learning control. IEEE Control Systems, 2006
- A survey of repetitive control. Published by Institute of Electrical and Electronics Engineers (IEEE), 2005