Harnessing the nonrepetitiveness in iterative learning control

Abstract
In iterative learning control (ILC), the disturbances, uncertainties, and desired trajectories are usually assumed to be iteration-independent, i.e., invariant with respect to the iteration number. In practice, this assumption may not hold, and accommodating iteration-dependent disturbances, uncertainties, and desired trajectories is essential for any successful application of ILC. Indeed, it is observed in practice that the baseline performance of ILC is limited mainly by such nonrepetitive factors. In this paper, two methods are proposed which show that the nonrepetitiveness in ILC can be harnessed, or made use of, to reduce the baseline errors. When the pattern of the nonrepetitiveness is known, the internal model principle (IMP) can be applied in the iteration domain. When the pattern of the nonrepetitiveness is unknown in advance, a disturbance observer in the iteration domain is proposed. It is noted that, to harness the nonrepetitiveness in ILC, the ILC updating law usually has to be high-order in the iteration direction. To facilitate the discussion, supervector notation is adopted in a fairly general setting. Simulation examples are provided to illustrate that nonrepetitiveness in ILC, if properly handled, can be harnessed to achieve performance previously not achievable.
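To make the baseline-error phenomenon concrete, the following sketch simulates a standard first-order ILC update law in the lifted (supervector) setting against an iteration-varying disturbance. This is not one of the paper's proposed methods; the plant, learning gain, and sinusoidal disturbance pattern are illustrative assumptions. The point it demonstrates is the one stated above: with a conventional update law, the tracking error drops quickly but then stalls at a nonzero baseline set by the nonrepetitive disturbance.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithms): first-order ILC on a
# lifted SISO plant with an iteration-varying disturbance d_k.
# The plant y(t+1) = 0.9*y(t) + u(t) gives Markov parameters 0.9^0, 0.9^1, ...
N = 50                                        # trial length
g = 0.9 ** np.arange(N)                       # impulse-response samples
G = np.array([[g[i - j] if i >= j else 0.0    # lifted plant: lower-triangular
               for j in range(N)]             # Toeplitz supervector map
              for i in range(N)])

yd = np.ones(N)                               # desired trajectory (supervector)
u = np.zeros(N)                               # initial input
L = 0.5 * np.linalg.inv(G)                    # model-inverse-type learning gain

errors = []
for k in range(30):                           # iteration (trial) index
    # Nonrepetitive part: disturbance varies sinusoidally with iteration k,
    # so it is NOT invariant with respect to the iteration number.
    d_k = 0.1 * np.sin(2 * np.pi * k / 10) * np.ones(N)
    y = G @ u + d_k                           # output of trial k
    e = yd - y                                # tracking-error supervector
    errors.append(np.linalg.norm(e))
    u = u + L @ e                             # first-order ILC: u_{k+1} = u_k + L e_k

# The error norm falls sharply over the first trials, then oscillates at a
# nonzero baseline driven by d_k - d_{k+1}: the nonrepetitiveness limits
# what the first-order update law can achieve.
```

A high-order update law that exploits the known period of d_k (the IMP route above), or an iteration-domain estimate of d_k (the disturbance-observer route), is what the paper uses to push this baseline down.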