Recoding arm position to learn visuomotor transformations

Abstract
There is strong experimental evidence that guiding the arm toward a visual target involves an initial vectorial transformation from direction in visual space to direction in motor space. Constraints on this transformation are imposed (i) by the neural codes for incoming information: the desired movement direction is thought to be signalled by populations of broadly tuned neurons, and arm position by populations of monotonically tuned neurons; and (ii) by the properties of outgoing information: the actual movement direction results from the collective action of broadly tuned neurons whose preferred directions rotate with the position of the arm. A neural network model is presented that computes the visuomotor mapping under these constraints. The appropriate operations are learned by the network in an unsupervised fashion, through repeated action-perception cycles, by recoding the arm-related proprioceptive information. The resulting solution has two interesting properties: (i) the required transformation is executed accurately over a large part of the reaching space, although only a few positions are actually learned; and (ii) the properties of single neurons and populations in the network closely resemble those of neurons and populations in parietal and motor cortical regions. The model thus suggests a realistic scenario for the computation of coordinate transformations and of the initial motor command for arm reaching movements.
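
To make the two coding constraints concrete, the following is a minimal numerical sketch, not the paper's network: it assumes a generic rectified-cosine population code for the desired direction and a standard population-vector readout whose preferred directions rotate with the arm angle. The population sizes, tuning shapes, and angles are all illustrative assumptions; the sketch only shows why the activity pattern must be shifted by the arm angle before the posture-rotated readout recovers the intended direction.

```python
# Illustrative sketch (assumed parameters, not the paper's model) of the
# coding constraints in the abstract: a broadly (cosine) tuned population
# for desired direction, and a motor readout whose preferred directions
# (PDs) rotate with arm position.
import numpy as np

N = 64
pref = np.linspace(0.0, 2 * np.pi, N, endpoint=False)  # intrinsic PDs

def cosine_pop(direction):
    """Broadly tuned population code for a direction (rectified cosine)."""
    return np.maximum(np.cos(pref - direction), 0.0)

def population_vector(activity, arm_angle):
    """Population-vector readout with posture-dependent extrinsic PDs
    (each neuron's PD rotated by the arm angle)."""
    x = activity @ np.cos(pref + arm_angle)
    y = activity @ np.sin(pref + arm_angle)
    return np.arctan2(y, x) % (2 * np.pi)

target, arm = np.deg2rad(60.0), np.deg2rad(25.0)

# Feeding the visual code straight into the rotated readout misses the
# target by the arm angle:
naive = population_vector(cosine_pop(target), arm)          # ~ target + arm

# The visuomotor transformation must shift the activity peak by -arm so
# that the rotated readout lands on the intended direction:
correct = population_vector(cosine_pop(target - arm), arm)  # ~ target

print(np.rad2deg(naive), np.rad2deg(correct))  # ≈ 85.0, ≈ 60.0
```

In this toy picture, the posture-dependent shift of the activity pattern is the operation the model is said to acquire through unsupervised action-perception cycles, with the recoded proprioceptive signal supplying the arm-position information that the shift requires.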