An algorithm for fast convergence in training neural networks
- 13 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
In this work, two modifications to the Levenberg-Marquardt (LM) algorithm for training feedforward neural networks are studied. One modification is made to the performance index, while the other concerns the calculation of gradient information. The modified algorithm converges faster than the standard LM method, is less computationally intensive, and requires less memory. The performance of the algorithm has been verified on several example problems.
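For context, the standard LM update that the paper modifies solves (JᵀJ + μI)Δw = Jᵀe for the weight step, where J is the Jacobian of the residuals, e the error vector, and μ a damping factor. The sketch below shows this baseline step on a toy linear least-squares problem; the paper's specific changes to the performance index and gradient computation are not reproduced here, and all names (`lm_step`, the toy data) are illustrative assumptions.

```python
import numpy as np

def lm_step(jacobian, errors, weights, mu):
    """One standard (unmodified) Levenberg-Marquardt update.

    Solves (J^T J + mu * I) dw = J^T e and moves the weights by -dw.
    This is the baseline method; the paper's modifications are not
    implemented here.
    """
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(errors, dtype=float)
    n = J.shape[1]
    dw = np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
    return weights - dw

# Toy example: minimize ||X w - y||^2. For this linear model the
# Jacobian of the residual X w - y with respect to w is simply X.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)
for _ in range(20):
    e = X @ w - y          # current residuals
    w = lm_step(X, e, w, mu=1e-3)
# w converges to the least-squares solution [1.0, 2.0]
```

With small μ the step approaches a Gauss-Newton step (fast near a minimum); with large μ it approaches scaled gradient descent (robust far from it), which is the interpolation property that makes LM attractive for network training.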