Learning Overcomplete Representations
- 1 February 2000
- Research article
- Published by MIT Press in Neural Computation
- Vol. 12 (2), 337-365
- https://doi.org/10.1162/089976600300015826
Abstract
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
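The non-uniqueness the abstract describes is easy to see concretely: with more basis vectors than input dimensions, infinitely many coefficient vectors reconstruct the same input, and a sparseness-inducing prior is what selects among them. The sketch below (a minimal illustration, not the paper's learning algorithm) fixes a small overcomplete dictionary in the plane and compares the dense minimum-norm solution with a MAP estimate under a Laplacian prior, computed by iterative shrinkage-thresholding (ISTA). The dictionary, input, and penalty weight are arbitrary choices for the example.

```python
import numpy as np

# Overcomplete dictionary: 3 basis vectors in R^2 (more columns than rows),
# so any input has infinitely many exact representations.
A = np.array([[1.0, 0.0, np.cos(np.pi / 4)],
              [0.0, 1.0, np.sin(np.pi / 4)]])

x = np.array([1.0, 1.0])  # input signal, aligned with the third basis vector

def ista(A, x, lam=0.1, n_iter=500):
    """MAP inference of coefficients under a Laplacian (L1) prior:
    minimize 0.5*||x - A s||^2 + lam*||s||_1 by shrinkage-thresholding."""
    s = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = A.T @ (A @ s - x)          # gradient of the reconstruction term
        z = s - g / L                  # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return s

s_sparse = ista(A, x)                  # concentrates on the third vector
s_dense = np.linalg.pinv(A) @ x        # minimum-norm solution: all nonzero
```

Both `A @ s_sparse` and `A @ s_dense` approximate the same input, but the Laplacian prior drives two of the three coefficients to exactly zero, picking the single basis vector aligned with the signal. This is the inference half of the problem; the paper's contribution is adapting the basis vectors themselves so that such sparse representations fit the data distribution.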