Abstract
In the neural network/genetic algorithm community, rather limited success has been reported in training neural networks with genetic algorithms. Whitley et al. (1991) claim that, due to “the multiple representations problem”, genetic algorithms will not be able to train multilayer perceptrons effectively once the chromosomal representation of the weights exceeds 300 bits. In this paper, using a “real-life problem” known to be non-trivial and a comparison with “classic” neural network training methods, I try to show that the modest success of genetic algorithms in training perceptrons is caused not so much by the “multiple representations problem” as by the fact that available problem-specific knowledge is often ignored, making the problem unnecessarily hard for the genetic algorithm to solve. Particular success is obtained with a new fitness function, which takes into account that the search performed by a genetic algorithm is holistic, rather than local as is usually the case when perceptrons are trained by traditional methods.
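To make the setting concrete, the sketch below shows one common way of evolving a perceptron's weights with a genetic algorithm: each chromosome is a real-valued vector holding all weights and biases, and fitness evaluates the candidate network over the entire training set. The network size, the XOR task, the selection and mutation scheme, and the sum-of-squared-errors fitness are all illustrative assumptions; they are not the fitness function or problem used in the paper.

```python
# Minimal sketch (assumed setup, not the paper's method): a genetic algorithm
# evolving the weights of a 2-2-1 multilayer perceptron on the XOR task.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_WEIGHTS = 2 * 2 + 2 + 2 * 1 + 1  # hidden weights + hidden biases + output weights + output bias

def forward(w, x):
    """Evaluate the 2-2-1 network for one input x, given the weight chromosome w."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output unit

def fitness(w):
    """Holistic evaluation: negative sum of squared errors over the whole training set."""
    preds = np.array([forward(w, x) for x in X])
    return -np.sum((preds - y) ** 2)

POP, GENS, MUT_STD = 50, 300, 0.3
pop = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))

for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][: POP // 2]]   # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_WEIGHTS) < 0.5                 # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0.0, MUT_STD, N_WEIGHTS)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
print("predictions:", np.round([forward(best, x) for x in X], 2))
```

Note that the fitness here scores a whole candidate network at once, in line with the abstract's point that a genetic algorithm searches holistically rather than adjusting weights by local gradient steps.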
