Abstract
This article considers online optimization with a finite prediction window of cost functions and additional switching costs on the decisions. We study the fundamental limits on the dynamic regret of any online algorithm, in both the with-prediction and the no-prediction settings. In addition, we propose two gradient-based online algorithms, receding horizon gradient descent (RHGD) and receding horizon accelerated gradient (RHAG), and provide upper bounds on their regret. RHAG's regret upper bound is close to the lower bound, indicating both the tightness of our lower bound and the near-optimality of RHAG. Finally, we conduct numerical experiments to complement the theoretical results.
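To make the setting concrete, below is a minimal illustrative sketch of the receding-horizon idea the abstract refers to: at each time the learner sees predictions of the next W cost functions, runs a few gradient sweeps over that window (with the switching cost coupling consecutive decisions), and commits only the first decision. The quadratic stage costs, the squared-norm switching cost, the step size, and the number of inner sweeps are placeholder assumptions for illustration, not the paper's exact RHGD/RHAG specification.

```python
import numpy as np

# Illustrative setup (assumed, not the paper's exact formulation): at time t the
# learner picks x_t after seeing predictions of f_t, ..., f_{t+W-1}; it pays
# f_t(x_t) plus a switching cost (beta/2) * ||x_t - x_{t-1}||^2.

np.random.seed(0)
T, d, W = 50, 3, 5           # horizon, decision dimension, prediction window
beta, eta = 1.0, 0.1         # switching-cost weight, gradient step size
thetas = np.random.randn(T, d)                       # targets of the stage costs
f = lambda t, x: 0.5 * np.sum((x - thetas[t]) ** 2)  # f_t(x) = 0.5*||x - theta_t||^2
grad_f = lambda t, x: x - thetas[t]

def receding_horizon_gd(T, d, W, sweeps=3):
    """Hypothetical receding-horizon gradient sketch: plan over the visible
    window with a few gradient sweeps, then commit only the first decision."""
    x_prev = np.zeros(d)
    decisions, total_cost = [], 0.0
    for t in range(T):
        window = list(range(t, min(t + W, T)))
        plan = [x_prev.copy() for _ in window]
        for _ in range(sweeps):
            for i, s in enumerate(window):
                prev = x_prev if i == 0 else plan[i - 1]
                nxt = beta * (plan[i] - plan[i + 1]) if i + 1 < len(plan) else 0.0
                g = grad_f(s, plan[i]) + beta * (plan[i] - prev) + nxt
                plan[i] = plan[i] - eta * g
        x_t = plan[0]
        total_cost += f(t, x_t) + 0.5 * beta * np.sum((x_t - x_prev) ** 2)
        decisions.append(x_t)
        x_prev = x_t
    return np.array(decisions), total_cost

xs, cost = receding_horizon_gd(T, d, W)
print(f"total cost with W={W}: {cost:.2f}")
```

Increasing the prediction window W in this sketch lets the planned trajectory anticipate upcoming cost changes, which is the mechanism through which prediction reduces dynamic regret in the setting the abstract describes.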
Funding Information
  • NSF CAREER (ECCS-1553407)
  • AFOSR YIP (FA9550-18-1-0150)
  • ONR YIP (N00014-19-1-2217)
