Abstract
Trajectory Extension Learning is an incremental method for training an artificial neural network to approximate the inverse dynamics of a robot manipulator. Training data near a desired trajectory is obtained by slowly varying a parameter of the trajectory from a region in which the inverse dynamics is easily solved toward the desired behavior. The parameter can be average speed, path shape, feedback gain, or any other controllable variable. As learning proceeds, the approximate solution to the local inverse dynamics at each value of the parameter is used to guide learning at the next value. Convergence conditions are given for two variations on the algorithm. The method is demonstrated on a real 2-joint direct-drive robot arm and on a simulated 3-joint redundant arm, both using simulated equilibrium point control.
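To make the training scheme concrete, the following is a minimal sketch in Python/NumPy of the trajectory-extension loop described above; it is not the authors' implementation. The class `InverseDynamicsNet`, the callables `desired_traj` and `execute`, and the toy plant in `_demo` are illustrative assumptions: in the paper the trajectory parameter would be, for example, average speed, and execution would take place under simulated equilibrium point control on the real or simulated arm.

```python
import numpy as np


class InverseDynamicsNet:
    """Small two-layer network mapping a state vector to joint torques."""

    def __init__(self, n_in, n_out, n_hidden=32, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.lr = lr

    def predict(self, x):
        return self.W2 @ np.tanh(self.W1 @ x)

    def update(self, x, torque_target):
        # One gradient step on the squared torque error.
        h = np.tanh(self.W1 @ x)
        err = self.W2 @ h - torque_target
        self.W2 -= self.lr * np.outer(err, h)
        self.W1 -= self.lr * np.outer((self.W2.T @ err) * (1.0 - h**2), x)


def trajectory_extension_learning(net, desired_traj, execute, schedule):
    """Vary the trajectory parameter lam from an easy regime toward the
    desired behavior, refining the network at each value.

    The network trained at the previous value of lam supplies the approximate
    inverse dynamics used to execute the trajectory at the next value, so the
    collected data stays near the desired trajectory.
    """
    for lam in schedule:                      # e.g. np.linspace(0.0, 1.0, 25)
        traj = desired_traj(lam)              # e.g. gradually increase average speed
        for x, torque in execute(traj, net):  # run arm with net as feedforward
            net.update(x, torque)             # refine the local inverse-dynamics model
    return net


# Toy demonstration: the "arm" is replaced by an unknown function whose
# inverse dynamics (state -> torque) the network must learn.
def _demo():
    rng = np.random.default_rng(1)
    A = rng.normal(0.0, 1.0, (2, 6))          # hidden toy inverse dynamics

    def desired_traj(lam):
        # lam scales average speed from slow (easily solved) to fast (desired).
        t = np.linspace(0.0, 2.0 * np.pi, 40)
        q = np.sin((0.2 + 0.8 * lam) * t)
        return np.stack([q, q], axis=1)       # two joints following the same path

    def execute(traj, net):
        # Stand-in for running the arm under equilibrium-point-style control:
        # on a real arm the recorded torque would be the net's feedforward plus
        # the feedback correction; the toy plant reports it directly.
        data = []
        for q in traj:
            x = np.concatenate([q, 0.1 * rng.normal(size=4)])  # (q, qdot, qddot) stand-in
            data.append((x, A @ np.sin(x)))
        return data

    net = InverseDynamicsNet(n_in=6, n_out=2)
    trajectory_extension_learning(net, desired_traj, execute, np.linspace(0.0, 1.0, 25))
    return net


if __name__ == "__main__":
    _demo()
```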