Adaptive probabilities of crossover and mutation in genetic algorithms

Abstract
In this paper we describe an efficient approach for multimodal function optimization using Genetic Algorithms (GAs). We recommend the use of adaptive probabilities of crossover and mutation to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. In the Adaptive Genetic Algorithm (AGA), the probabilities of crossover and mutation, p_c and p_m, are varied depending on the fitness values of the solutions. High-fitness solutions are 'protected', while solutions with subaverage fitnesses are totally disrupted. By using adaptively varying p_c and p_m, we also provide a solution to the problem of deciding the optimal values of p_c and p_m, i.e., p_c and p_m need not be specified at all. The AGA is compared with previous approaches for adapting operator probabilities in genetic algorithms. The Schema Theorem is derived for the AGA, and the working of the AGA is analyzed. We compare the performance of the AGA with that of the Standard GA (SGA) in optimizing several nontrivial multimodal functions with varying degrees of complexity. For most functions, the AGA converges to the global optimum in far fewer generations than the SGA, and it gets stuck at a local optimum less often. Our experiments demonstrate that the relative performance of the AGA over the SGA improves as the epistasis and multimodality of the objective function increase. We believe that the AGA is a first step toward a class of self-organizing GAs capable of adapting themselves to locate the global optimum in a multimodal landscape.
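To make the adaptation idea concrete, the following is a minimal sketch of how fitness-dependent p_c and p_m could be computed: solutions at or above the average fitness receive probabilities that shrink toward zero as their fitness approaches the population maximum (they are 'protected'), while subaverage solutions receive the maximum values and are disrupted. The constants k1-k4, the function names, and the exact proportional form are illustrative assumptions, not necessarily the expressions derived in the paper.

```python
def adaptive_pc(f_better, f_max, f_avg, k1=1.0, k3=1.0):
    """Crossover probability for a pair whose better parent has fitness f_better (sketch)."""
    if f_better < f_avg or f_max == f_avg:
        # Subaverage pair, or a converged population: use the maximum crossover rate.
        return k3
    # Probability decreases linearly to 0 as the pair approaches the best fitness.
    return k1 * (f_max - f_better) / (f_max - f_avg)

def adaptive_pm(f, f_max, f_avg, k2=0.5, k4=0.5):
    """Mutation probability for a solution with fitness f (sketch)."""
    if f < f_avg or f_max == f_avg:
        # Subaverage solution, or a converged population: mutate at the maximum rate.
        return k4
    # The best solution receives p_m = 0 and is left untouched.
    return k2 * (f_max - f) / (f_max - f_avg)
```

Under this kind of rule, no fixed p_c or p_m needs to be supplied: the operator probabilities are recomputed every generation from the current maximum and average fitness of the population.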
