Reference frames for representing visual and tactile locations in parietal cortex

Abstract
The ventral intraparietal area (VIP) receives converging inputs from the visual, somatosensory, auditory and vestibular systems, which encode sensory information in diverse reference frames. A key issue is how VIP combines these inputs. We mapped the visual and tactile receptive fields of multimodal VIP neurons in macaque monkeys trained to gaze at three different stationary targets. Tactile receptive fields were found to be encoded in a single somatotopic, or head-centered, reference frame, whereas visual receptive fields were widely distributed between eye- and head-centered coordinates. These findings are inconsistent with a remapping of all sensory modalities into a common frame of reference. Instead, they support an alternative model of multisensory integration based on multidirectional sensory predictions (such as predicting the visual location of a stimulus from where it is felt on the skin, and vice versa). This approach can also account for related findings in other multimodal areas.
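As an illustrative sketch of the logic behind varying the fixation target (the notation $x_{\text{eye}}$, $x_{\text{head}}$, $g$ and $k$ is introduced here only for illustration and does not appear in the study): for a stimulus at eye-centered (retinal) horizontal position $x_{\text{eye}}$ while the eyes are directed at gaze angle $g$ relative to the head,

$$x_{\text{head}} = x_{\text{eye}} + g.$$

A receptive field anchored in eye-centered coordinates therefore shifts by the full change in gaze, $\Delta g$, when replotted in head-centered coordinates; a head-centered receptive field does not shift at all; and an intermediate reference frame shifts by some fraction $k\,\Delta g$ with $0 < k < 1$. Comparing receptive-field positions across the three fixation targets thus distinguishes these possibilities.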