Parameter Incremental Learning Algorithm for Neural Networks
- 13 November 2006
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 17 (6), 1424-1438
- https://doi.org/10.1109/tnn.2006.880581
Abstract
In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is as computationally simple and as easy to use as the BP algorithm. It can therefore be applied, with better performance, in any situation where the standard online BP algorithm is applicable.
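The abstract describes the PIL update as the first-order approximate solution to an optimization problem whose performance index combines a preservation measure and an adaptation measure. The sketch below is only an illustration of that general idea, not the paper's exact derivation: it assumes the preservation term is a quadratic penalty on the parameter change with a positive diagonal weighting `p_diag` (a hypothetical choice), so the first-order solution becomes a per-parameter scaled gradient step; with a uniform weighting it reduces to the standard online BP (SGD) step.

```python
import numpy as np

# Hedged sketch of a PIL-style online step. Assumed performance index for
# one newly presented pattern:
#     J(dw) = 0.5 * dw^T P dw  +  e(w + dw)
# where the quadratic term penalizes deviation from the prior parameters
# ("preservation") and e is the loss on the new pattern ("adaptation").
# To first order, e(w + dw) ~ e(w) + grad_e(w)^T dw, and minimizing J gives
#     dw = -P^{-1} grad_e(w),
# i.e. a per-parameter scaled gradient step. P diagonal is an assumption
# made here for illustration; it is not the paper's exact construction.

def pil_style_step(w, grad, p_diag):
    """First-order minimizer of the combined index: dw = -grad / p_diag."""
    return w - grad / p_diag

def online_bp_step(w, grad, eta):
    """Standard online BP (SGD) step, the special case P = (1/eta) * I."""
    return w - eta * grad

# Toy demonstration on a quadratic per-pattern loss e(w) = 0.5 * ||w - t||^2,
# whose gradient is simply (w - t).
t = np.array([1.0, -2.0, 0.5])          # target parameters for the toy loss
w = np.zeros(3)
p_diag = np.array([2.0, 4.0, 10.0])     # stronger preservation on later weights
for _ in range(200):
    grad = w - t
    w = pil_style_step(w, grad, p_diag)

print(np.round(w, 3))                   # converges toward t
```

Larger entries of `p_diag` slow the corresponding parameter's movement, which is the "preserve the prior results" effect in miniature; the choice of weighting is exactly what a concrete PIL derivation would specify.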
This publication has 13 references indexed in Scilit:
- Implementing Online Natural Gradient Learning: Problems and Solutions. IEEE Transactions on Neural Networks, 2006
- Stochastic Learning. Lecture Notes in Computer Science, 2004
- A novel training scheme for multilayered perceptrons to realize proper generalization and incremental learning. IEEE Transactions on Neural Networks, 2003
- A clustering approach to incremental learning for feedforward neural networks. Published by Institute of Electrical and Electronics Engineers (IEEE), 2002
- Learn++: an incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 2001
- Natural Gradient Works Efficiently in Learning. Neural Computation, 1998
- Efficient BackProp. Published by Springer Science and Business Media LLC, 1998
- Advanced supervised learning in multi-layer perceptrons — From backpropagation to adaptive learning algorithms. Computer Standards & Interfaces, 1994
- First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method. Neural Computation, 1992
- A Resource-Allocating Network for Function Interpolation. Neural Computation, 1991