Adding learning to cellular genetic algorithms for training recurrent neural networks
- 1 March 1999
- journal article
- research article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 10 (2), 239-252
- https://doi.org/10.1109/72.750546
Abstract
This paper proposes a hybrid optimization algorithm that combines local search (individual learning) with cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each weight of an RNN is encoded as a floating-point number, and the concatenation of these numbers forms a chromosome. Reproduction takes place locally on a square grid, with each grid point representing a chromosome. Two approaches to combining cellular GAs and learning, the Lamarckian and Baldwinian mechanisms, are compared. Different hill-climbing algorithms are incorporated into the cellular GAs as learning methods: real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. The RTRL algorithm is successively simplified by freezing some of the weights. The delta rule, the simplest form of learning, is implemented by treating the RNNs as feedforward networks during learning. The hybrid algorithms are used to train RNNs to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism reduce the number of generations required to find an optimum network; however, only a few reduce the actual time taken. Embedding the delta rule in the cellular GAs is found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from it.
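The abstract describes the core mechanism: chromosomes of floating-point weights arranged on a square grid, local reproduction with the fittest neighbor, a hill-climbing (learning) step, and a choice between writing the learned weights back into the genotype (Lamarckian) or only crediting their fitness (Baldwinian). The following is a minimal toy sketch of that loop, not the paper's implementation: the fitness function, grid size, neighborhood, learning rate, and the `TARGET` vector standing in for an RNN training objective are all hypothetical.

```python
import random

GRID = 5          # 5x5 square grid of chromosomes (assumed size)
N_WEIGHTS = 8     # toy chromosome length
TARGET = [0.5] * N_WEIGHTS  # stand-in objective; the paper trains real RNNs

def fitness(chrom):
    # Toy fitness: negative squared error to the target (higher is better).
    return -sum((w - t) ** 2 for w, t in zip(chrom, TARGET))

def hill_climb(chrom, steps=3, lr=0.1):
    # Gradient-like local search, standing in for the delta rule / RTRL:
    # each step moves every weight toward lower error.
    c = chrom[:]
    for _ in range(steps):
        c = [w - lr * 2 * (w - t) for w, t in zip(c, TARGET)]
    return c

def neighbors(x, y):
    # Von Neumann neighborhood with wraparound (one common cellular-GA choice).
    return [((x + dx) % GRID, (y + dy) % GRID)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(pop, lamarckian=True):
    new_pop = {}
    for (x, y), chrom in pop.items():
        # Local reproduction: mate with the fittest neighbor on the grid.
        mate = max((pop[n] for n in neighbors(x, y)), key=fitness)
        # Uniform crossover of floating-point genes plus Gaussian mutation.
        child = [random.choice(pair) + random.gauss(0, 0.01)
                 for pair in zip(chrom, mate)]
        learned = hill_climb(child)
        if lamarckian:
            # Lamarckian: learned weights are written back into the genotype.
            child = learned
            child_fit = fitness(child)
        else:
            # Baldwinian: fitness reflects learning, genotype stays unchanged.
            child_fit = fitness(learned)
        # Replace the resident chromosome only if the child is at least as fit.
        new_pop[(x, y)] = child if child_fit >= fitness(chrom) else chrom
    return new_pop

random.seed(0)
pop = {(x, y): [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
       for x in range(GRID) for y in range(GRID)}
for _ in range(30):
    pop = step(pop, lamarckian=True)
best = max(pop.values(), key=fitness)
print(round(-fitness(best), 4))  # residual squared error of the best cell
```

Switching `lamarckian=False` illustrates the Baldwin effect variant the paper compares: learning then influences selection only through fitness, so the genotype must rediscover the learned improvements through crossover and mutation.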