Abstract
A well-known approach to constrained optimization is via a sequence of unconstrained minimization calculations applied to a penalty function. This paper shows how it is possible to generalize Powell's penalty function to solve constrained problems with both equality and inequality constraints. The resulting methods are equivalent to Hestenes' method of multipliers, and to a generalization of it to inequality constraints suggested by Rockafellar. Local duality results (not all of which have appeared before) for these methods are reviewed, with particular emphasis on those of practical importance. It is shown that various strategies for varying the control parameters are possible, all of which can be viewed as Newton or Newton-like iterations applied to the dual problem. Practical strategies for guaranteeing convergence are also discussed. A wide selection of numerical evidence is reported, and the algorithms are compared both amongst themselves and with other penalty function methods. The new penalty function is well conditioned, without singularities, and it is not necessary for the control parameters to tend to infinity in order to force convergence. The rate of convergence is rapid and high accuracy is achieved in few unconstrained minimizations; furthermore, the computational effort for successive minimizations goes down rapidly. The methods are very easy to program efficiently, using an established quasi-Newton subroutine for unconstrained minimization.
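The structure the abstract describes — an outer multiplier update wrapped around an inner unconstrained minimization of a penalty function — can be sketched as follows. This is a minimal illustration of the Hestenes/Powell multiplier iteration on a toy equality-constrained problem of my own choosing (the problem, the penalty weight `sigma`, and the crude gradient-descent inner solver are all assumptions for illustration, not details from the paper, which uses an established quasi-Newton subroutine for the inner minimizations):

```python
# Toy problem (an assumed example): minimize f(x) = x1^2 + x2^2
# subject to h(x) = x1 + x2 - 1 = 0.  Exact solution: x = (0.5, 0.5),
# with optimal multiplier lambda = -1.

def solve(sigma=10.0, outer=15, inner=500, step=0.05):
    lam = 0.0            # current multiplier estimate
    x = [0.0, 0.0]
    for _ in range(outer):
        # Inner loop: unconstrained minimization of the penalty function
        #   L(x) = f(x) + lam*h(x) + (sigma/2)*h(x)^2
        # here by plain gradient descent for simplicity.
        for _ in range(inner):
            h = x[0] + x[1] - 1.0
            g = lam + sigma * h          # common term in each dL/dxi
            x = [x[0] - step * (2.0 * x[0] + g),
                 x[1] - step * (2.0 * x[1] + g)]
        # Outer loop: multiplier update lam <- lam + sigma*h(x).
        # Note sigma stays fixed; it need not tend to infinity.
        lam += sigma * (x[0] + x[1] - 1.0)
    return x, lam

x, lam = solve()
```

On this example the iterates converge to x ≈ (0.5, 0.5) and lam ≈ -1 in a handful of outer iterations with `sigma` held fixed, which is the well-conditioned behaviour (no control parameter driven to infinity) that the abstract claims.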