
Nonlinear Constraint Solver Algorithm

The pattern search algorithm uses the Augmented Lagrangian Pattern Search (ALPS) algorithm to solve nonlinear constraint problems. The optimization problem solved by the ALPS algorithm is

\[
\min_x f(x)
\]

such that

\[
\begin{aligned}
c_i(x) &\le 0, \quad i = 1, \ldots, m,\\
\mathrm{ceq}_i(x) &= 0, \quad i = m + 1, \ldots, mt,\\
A\,x &\le b,\\
Aeq\,x &= beq,\\
lb \le x &\le ub,
\end{aligned}
\]

where c(x) represents the nonlinear inequality constraints, ceq(x) represents the equality constraints, m is the number of nonlinear inequality constraints, and mt is the total number of nonlinear constraints.
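For concreteness, here is a minimal sketch of a problem in this form. The toy objective, constraint functions, and names (f, c, ceq, A, b, lb, ub) are illustrative assumptions, not part of the toolbox.

```python
import numpy as np

# Toy problem in the form above (illustrative, not from the toolbox).
def f(x):
    """Objective f(x) = x1^2 + x2^2."""
    return x[0]**2 + x[1]**2

def c(x):
    """Nonlinear inequality constraints, c_i(x) <= 0 (here m = 1)."""
    return np.array([1.0 - x[0] * x[1]])

def ceq(x):
    """Nonlinear equality constraints, ceq_i(x) = 0 (here mt - m = 1)."""
    return np.array([x[0] + x[1] - 2.0])

# Linear constraints and bounds, handled separately from c and ceq:
A, b = np.array([[1.0, 1.0]]), np.array([10.0])       # A x <= b
lb, ub = np.array([0.0, 0.0]), np.array([5.0, 5.0])   # lb <= x <= ub
```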

The ALPS algorithm attempts to solve a nonlinear optimization problem with nonlinear constraints, linear constraints, and bounds. In this approach, bounds and linear constraints are handled separately from nonlinear constraints. A subproblem is formulated by combining the objective function and nonlinear constraint function using the Lagrangian and the penalty parameters. A sequence of such optimization problems is approximately minimized using a pattern search algorithm such that the linear constraints and bounds are satisfied.

Each subproblem solution represents one iteration. The number of function evaluations per iteration is therefore much higher when using nonlinear constraints than otherwise.

A subproblem formulation is defined as

\[
\Theta(x, \lambda, s, \rho) = f(x) - \sum_{i=1}^{m} \lambda_i s_i \log\bigl(s_i - c_i(x)\bigr) + \sum_{i=m+1}^{mt} \lambda_i \,\mathrm{ceq}_i(x) + \frac{\rho}{2} \sum_{i=m+1}^{mt} \mathrm{ceq}_i(x)^2,
\]

where

  • The components λi of the vector λ are nonnegative and are known as Lagrange multiplier estimates.

  • The elements si of the vector s are nonnegative shifts.

  • ρ is the positive penalty parameter.
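The subproblem objective can be written out directly. The following sketch evaluates Θ for problem functions like the toy example above; subproblem_theta is a hypothetical helper, and returning +inf outside the barrier's domain is an assumption made for illustration, not the toolbox's implementation.

```python
import numpy as np

def subproblem_theta(x, lam, s, rho, f, c, ceq):
    """Evaluate Theta(x, lambda, s, rho) for given problem functions.

    lam : nonnegative Lagrange multiplier estimates (length mt)
    s   : nonnegative shifts (length m)
    rho : positive penalty parameter
    """
    cx, ceqx = c(x), ceq(x)
    m = len(cx)
    lam_ineq, lam_eq = lam[:m], lam[m:]

    # The shifted log barrier requires s_i - c_i(x) > 0; treat points
    # outside that region as having infinite subproblem value.
    gap = s - cx
    if np.any(gap <= 0):
        return np.inf

    barrier = -np.sum(lam_ineq * s * np.log(gap))
    equality = np.sum(lam_eq * ceqx) + 0.5 * rho * np.sum(ceqx**2)
    return f(x) + barrier + equality
```

Note that the shifted barrier term handles the nonlinear inequalities without explicit slack variables, while the equality constraints enter through the classical augmented Lagrangian terms.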

The algorithm begins by using an initial value for the penalty parameter (InitialPenalty).

The pattern search minimizes a sequence of subproblems, each of which is an approximation of the original problem. Each subproblem has a fixed value ofλ,s, andρ. When the subproblem is minimized to a required accuracy and satisfies feasibility conditions, the Lagrangian estimates are updated. Otherwise, the penalty parameter is increased by a penalty factor (PenaltyFactor). This results in a new subproblem formulation and minimization problem. These steps are repeated until the stopping criteria are met.
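A skeleton of this outer loop might look as follows. Here inner_minimize stands in for a pattern search over the linear constraints and bounds, the multiplier update is a standard first-order rule that is simpler than the actual ALPS updates, and the argument defaults are placeholders; only the roles of initial_penalty and penalty_factor correspond to the InitialPenalty and PenaltyFactor options named above.

```python
import numpy as np

def alps_outer_loop(x0, f, c, ceq, inner_minimize,
                    initial_penalty=10.0, penalty_factor=100.0,
                    tol=1e-6, max_outer=50):
    """Simplified outer augmented Lagrangian loop (illustration only).

    inner_minimize(theta, x) approximately minimizes the subproblem
    subject to the linear constraints and bounds, as a pattern search
    would; it is a placeholder, not a toolbox routine.
    """
    m = len(c(x0))
    lam = np.ones(m + len(ceq(x0)))  # Lagrange multiplier estimates
    s = np.ones(m)                   # nonnegative shifts
    rho = initial_penalty

    x = x0
    for _ in range(max_outer):       # simplified stopping criterion
        theta = lambda z: subproblem_theta(z, lam, s, rho, f, c, ceq)
        x = inner_minimize(theta, x)   # one subproblem solve = one iteration

        if np.all(c(x) <= tol) and np.all(np.abs(ceq(x)) <= tol):
            # Feasible to the required accuracy: update the Lagrangian
            # estimates (first-order update; ALPS's actual rules differ).
            lam[:m] *= s / np.maximum(s - c(x), np.finfo(float).tiny)
            lam[m:] += rho * ceq(x)
        else:
            rho *= penalty_factor      # increase the penalty parameter
    return x
```

Each pass through this loop is one iteration in the sense described above, which is why the function-evaluation count per iteration is much higher with nonlinear constraints.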


For a complete description of the algorithm, see the following references:

References

[1] Kolda, Tamara G., Robert Michael Lewis, and Virginia Torczon. “A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.” Technical Report SAND2006-5315, Sandia National Laboratories, August 2006.

[2] Conn, A. R., N. I. M. Gould, and Ph. L. Toint. “A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds.” SIAM Journal on Numerical Analysis, Volume 28, Number 2, pages 545–572, 1991.

[3] Conn, A. R., N. I. M. Gould, and Ph. L. Toint. “A Globally Convergent Augmented Lagrangian Barrier Algorithm for Optimization with General Inequality Constraints and Simple Bounds.” Mathematics of Computation, Volume 66, Number 217, pages 261–288, 1997.
