No free lunch theorems for optimization
- 1 April 1997
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Evolutionary Computation
- Vol. 1 (1), 67-82
- https://doi.org/10.1109/4235.585893
Abstract
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
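The core NFL claim can be illustrated on a toy instance. The following sketch (not from the paper; the search space, objective values, and algorithm names are illustrative assumptions) enumerates every objective function from a 3-point domain to {0, 1} and runs two non-revisiting search algorithms, one fixed-order and one adaptive. Averaged over all functions, both produce exactly the same multiset of observed-value traces, so any performance measure defined on those traces averages out identically:

```python
from itertools import product
from collections import Counter

X = [0, 1, 2]  # toy search space (illustrative assumption)
Y = [0, 1]     # toy objective values

def run(algorithm, f):
    """Run a non-revisiting search on objective f; return the tuple of observed values."""
    seen = []  # history of (x, y) pairs
    for _ in X:
        x = algorithm(seen)
        seen.append((x, f[x]))
    return tuple(y for _, y in seen)

def fixed_order(seen):
    # queries 0, 1, 2 regardless of what it observes
    return len(seen)

def adaptive(seen):
    # queries 0 first, then chooses the order of 1 and 2 based on the first observation
    if not seen:
        return 0
    order = [0, 1, 2] if seen[0][1] == 1 else [0, 2, 1]
    return order[len(seen)]

# enumerate every objective function f: X -> Y (|Y|^|X| = 8 of them)
functions = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))]

for alg in (fixed_order, adaptive):
    traces = Counter(run(alg, f) for f in functions)
    # both algorithms yield each of the 8 possible traces exactly once
    print(alg.__name__, sorted(traces.items()))
```

The multisets of traces coincide for any non-revisiting algorithm, adaptive or not, which is the mechanism behind the theorems' "uniformity over all algorithms".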
This publication has 9 references indexed in Scilit:
- Elements of Information Theory. Wiley, 2001
- The Lack of A Priori Distinctions Between Learning Algorithms. Neural Computation, 1996
- What makes an optimization problem hard? Complexity, 1996
- Tabu Search—Part II. INFORMS Journal on Computing, 1990
- Tabu Search—Part I. INFORMS Journal on Computing, 1989
- Optimization by Simulated Annealing. Science, 1983
- Markov Random Fields and Their Applications. Contemporary Mathematics, 1980
- Introduction to Random Fields. Graduate Texts in Mathematics, 1976
- Branch-and-Bound Methods: A Survey. Operations Research, 1966