Predictive learning as a network mechanism for extracting low-dimensional latent space representations
Open Access
- 3 March 2021
- journal article
- research article
- Published by Springer Science and Business Media LLC in Nature Communications
- Vol. 12 (1), 1-13
- https://doi.org/10.1038/s41467-021-21696-1
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
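The sketch below is a minimal, hypothetical illustration of the setup the abstract describes, not the authors' implementation. It assumes a 1-D circular latent variable, noisy cosine tuning-curve observations, a vanilla PyTorch RNN trained on next-step prediction with Adam, and an ordinary-least-squares read-out of the latent from the hidden states; the participation ratio at the end is a crude linear stand-in for the nonlinear intrinsic-dimensionality measures mentioned in the abstract, not a replacement for them.

```python
# Hypothetical sketch (not the authors' code): a recurrent network trained to
# predict the next observation, then probed for linear decodability of the
# latent variable and for the dimensionality of its hidden representation.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
T, obs_dim, hid_dim = 2000, 50, 64

# Latent variable: an angle on a ring following a slow random walk.
theta = np.cumsum(rng.normal(0.0, 0.1, size=T)) % (2 * np.pi)

# Observations: a noisy population of cosine tuning curves of the latent.
pref = np.linspace(0, 2 * np.pi, obs_dim, endpoint=False)
obs = np.cos(theta[:, None] - pref[None, :]) + 0.1 * rng.normal(size=(T, obs_dim))
obs = torch.tensor(obs, dtype=torch.float32)

# Predictive RNN: the hidden state h_t is trained so that readout(h_t) ≈ x_{t+1}.
rnn = nn.RNN(obs_dim, hid_dim, batch_first=True)
readout = nn.Linear(hid_dim, obs_dim)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

x = obs[None, :-1, :]        # inputs  x_1 ... x_{T-1}
target = obs[None, 1:, :]    # targets x_2 ... x_T  (next-step prediction)
for epoch in range(300):
    h, _ = rnn(x)
    loss = ((readout(h) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Linear decodability: regress (cos theta, sin theta) on the hidden states.
with torch.no_grad():
    H = rnn(x)[0].squeeze(0).numpy()                        # hidden states h_1..h_{T-1}
Z = np.column_stack([np.cos(theta[:-1]), np.sin(theta[:-1])])
Hb = np.column_stack([H, np.ones(len(H))])                  # add an intercept column
W, *_ = np.linalg.lstsq(Hb, Z, rcond=None)                  # ordinary least squares
r2 = 1 - ((Hb @ W - Z) ** 2).sum() / ((Z - Z.mean(0)) ** 2).sum()
print(f"prediction loss: {loss.item():.4f}   latent decoding R^2: {r2:.3f}")

# Dimensionality summary: participation ratio of the PCA spectrum of the
# hidden states (a linear proxy, unlike the nonlinear estimators in the paper).
eig = np.linalg.eigvalsh(np.cov(H.T))
pr = eig.sum() ** 2 / (eig ** 2).sum()
print(f"participation-ratio dimensionality of hidden states: {pr:.2f}")
```

In this toy setting, the only training signal is prediction of the upcoming observation; the latent angle is never supplied to the network, so any linear decodability of it from the hidden states arises as a by-product of the predictive objective, which is the kind of effect the abstract describes.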