Alopex: A Correlation-Based Learning Algorithm for Feedforward and Recurrent Neural Networks
- 1 May 1994
- Journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (3), pp. 469–490
- https://doi.org/10.1162/neco.1994.6.3.469
Abstract
We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm makes no assumptions about the transfer functions of individual neurons and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and uses a "temperature" parameter in a manner similar to that in simulated annealing. A heuristic "annealing schedule" is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and the advantages of appropriate error measures are illustrated using a variety of problems.