Bouncing towards the optimum: Improving the results of Monte Carlo optimization algorithms

Abstract
Simulated annealing and related Monte Carlo optimization algorithms apply concepts from statistical physics, in particular from the statistical mechanics of spin glasses, to find optimal configurations of combinatorial optimization problems. There are formal proofs showing that these algorithms converge asymptotically, i.e., possibly only after infinitely long simulation times, to a global optimum. Practical implementations, however, allow only for finite simulation times, and thus the annealing process is often trapped in energetically higher, suboptimal configurations. In this work we present an algorithm, which we call bouncing, that takes the final low-energy configuration of, e.g., a conventional monotonically cooled annealing run as input and subjects it to a schedule of repeated reheating and cooling. The maxima of a susceptibility and of a specific-heat-like quantity sampled during the initial monotonic cooling serve as lower and upper bounds for the starting temperature of this secondary heating and cooling. In addition to a serial implementation, we present a recipe for parallel computers and provide a number of results demonstrating the success of the bouncing method for a particularly prominent combinatorial optimization problem: the traveling salesman problem.
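
The following is a minimal Python sketch of the bouncing idea for the traveling salesman problem, not the implementation used in this work. The function names (bounce, metropolis_sweep, two_opt_delta), the 2-opt move set, the linear reheating and cooling ramps, and the default parameters are illustrative assumptions; in practice, T_low and T_high would be taken from the maxima of the susceptibility and the specific-heat-like quantity measured during the initial monotonic cooling.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a symmetric distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_delta(tour, dist, i, j):
    """Length change if the segment tour[i+1..j] is reversed (2-opt move)."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]

def metropolis_sweep(tour, dist, T, n_moves):
    """One sweep of Metropolis-accepted 2-opt moves at temperature T."""
    n = len(tour)
    for _ in range(n_moves):
        i, j = sorted(random.sample(range(n), 2))
        if j - i < 2 or (i == 0 and j == n - 1):
            continue  # degenerate move, skip
        delta = two_opt_delta(tour, dist, i, j)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
    return tour

def bounce(tour, dist, T_low, T_high, n_bounces=10, n_steps=50, moves_per_step=None):
    """Repeatedly reheat to T_high and cool back down to T_low, starting from a
    low-energy tour produced by a conventional annealing run.  The bounds T_low
    and T_high are assumed to come from the susceptibility and specific-heat
    maxima observed during the initial cooling."""
    if moves_per_step is None:
        moves_per_step = len(tour)
    best = list(tour)
    best_len = tour_length(best, dist)
    for _ in range(n_bounces):
        # linear reheating followed by linear cooling between the two bounds
        up = [T_low + (T_high - T_low) * k / n_steps for k in range(n_steps + 1)]
        for T in up + up[::-1]:
            tour = metropolis_sweep(tour, dist, max(T, 1e-12), moves_per_step)
        current_len = tour_length(tour, dist)
        if current_len < best_len:
            best, best_len = list(tour), current_len
    return best, best_len
```

Keeping the reheating capped at T_high rather than restarting from a fully disordered state is the essential design choice: the bounce perturbs the trapped configuration just enough to escape local minima while preserving most of the structure already found.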