Actor–Critic-Based Optimal Tracking for Partially Unknown Nonlinear Discrete-Time Systems

Abstract
This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. A new discounted performance function based on this augmented system is then presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which computes the feedforward and feedback terms of the control input separately, minimizing the proposed discounted performance function yields both the feedback and feedforward parts of the control input simultaneously. This makes it possible to encode the input constraints into the optimization problem through a nonquadratic performance function. The DT tracking Bellman equation and the tracking Hamilton-Jacobi-Bellman (HJB) equation are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), an actor NN and a critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
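The abstract's main ingredients (augmented state from error and reference dynamics, a discounted cost with a nonquadratic input penalty that enforces a saturation bound, and simultaneous online tuning of actor and critic NNs via the tracking Bellman temporal-difference error) can be illustrated with a minimal sketch. All of the dynamics, feature maps, gains, and update rules below are hypothetical stand-ins, not the paper's actual networks or tuning laws; in particular, the actor step here is a simplified TD-driven surrogate for the paper's gradient-based actor update.

```python
import numpy as np

# Hypothetical scalar plant and reference used only for illustration.
rng = np.random.default_rng(0)

gamma = 0.8   # discount factor of the performance function
u_max = 1.0   # input constraint |u| <= u_max
Q = 1.0       # tracking-error weight

def ref_next(r):
    # Known reference-trajectory dynamics r_{k+1} = psi(r_k).
    return 0.99 * r

def plant(x, u):
    # Plant with drift dynamics assumed unknown to the learner.
    return 0.9 * np.sin(x) + 0.5 * u

def phi(z):
    # Critic features on the augmented state z = [tracking error, reference].
    e, r = z
    return np.array([e * e, e * r, r * r])

def sigma(z):
    # Actor features on the augmented state.
    return np.array(z)

Wc = np.zeros(3)        # critic NN weights (linear-in-features)
Wa = np.zeros(2)        # actor NN weights
ac, aa = 0.05, 0.02     # learning rates

def control(z):
    # tanh squashing keeps the policy inside the input constraint,
    # matching the role of the nonquadratic penalty in the cost.
    return u_max * np.tanh(Wa @ sigma(z))

def input_penalty(u):
    # Closed form of the nonquadratic penalty
    # 2 * integral_0^u  u_max * atanh(v/u_max) dv,
    # clipped slightly inside the bound for numerical safety.
    uc = np.clip(u / u_max, -0.999, 0.999)
    return 2 * u_max * u * np.arctanh(uc) + u_max**2 * np.log(1 - uc**2)

x, r = 1.0, 0.5
for k in range(3000):
    z = np.array([x - r, r])
    u = float(control(z) + 0.01 * rng.standard_normal())  # exploration noise
    x_next, r_next = plant(x, u), ref_next(r)
    z_next = np.array([x_next - r_next, r_next])

    # Stage cost: quadratic in tracking error plus nonquadratic input term.
    cost = Q * z[0] ** 2 + input_penalty(u)

    # Temporal-difference error of the tracking Bellman equation.
    delta = cost + gamma * (Wc @ phi(z_next)) - Wc @ phi(z)
    Wc += ac * delta * phi(z)   # critic semi-gradient step
    Wa -= aa * delta * sigma(z) # simplified TD-driven actor step

    x, r = x_next, r_next
```

Note that both weight vectors are updated inside the same loop, mirroring the simultaneous (rather than sequential) actor-critic tuning described in the abstract, and that no model of the drift `0.9 * np.sin(x)` is used in either update.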
Funding Information
  • National Science Foundation (ECCS-1405173, IIS-1208623)
  • U.S. Office of Naval Research, Arlington, VA, USA (N00014-13-1-0562)
  • Air Force Office of Scientific Research, Arlington, VA, USA, through the European Office of Aerospace Research and Development Project (13-3055)
  • National Natural Science Foundation of China (61120106011)
  • 111 Project, Ministry of Education, China (B08015)
