
Chapter 3.

Constrained Optimization - Global Methods

III.1 Introduction
Constrained optimization is a mathematical approach used to find the optimal values of variables within a defined set of constraints or limitations. In various real-world scenarios, decision-makers face situations where certain conditions must be satisfied, resources are limited, or specific requirements must be met. Constrained optimization provides a systematic framework for addressing these challenges and determining the best possible solution that adheres to the given constraints.

III.2 Problem statement


A constrained optimization problem is given in the following general form:

    min f(X),   X ∈ ℝⁿ
    subject to  h_i(X) = 0,   i = 1, ..., p        (3.1)
                g_j(X) ≤ 0,   j = 1, ..., q

where f(X) is the objective function, and h_i(X) and g_j(X) are the constraint functions defining the domain within which the minimization is to be carried out. In these problems, the minimization is inherently confined to the feasible domain defined by the constraints.

If only the equality constraints h_i are present (q = 0), (3.1) is called an equality-constrained optimization problem; if only the inequality constraints g_j are present (p = 0), it is called an inequality-constrained optimization problem.

III.3 The Lagrangian


The Lagrangian is used to convert a constrained optimization problem into an unconstrained one. The Lagrangian of problem (3.1) takes the following form:

    L(X, λ, μ) = f(X) + Σ_{i=1}^{p} λ_i h_i(X) + Σ_{j=1}^{q} μ_j g_j(X)        (3.2)

where the λ_i and μ_j are called the Lagrange multipliers.

L(X, λ, μ) is then considered as the "new" objective function.

Remark:
The Lagrangian can be expressed alternatively as:

    L(X, λ, μ) = f(X) − Σ_{i=1}^{p} λ_i h_i(X) − Σ_{j=1}^{q} μ_j g_j(X)

This sign convention has no effect on the stationary points; it only changes the sign, and hence the practical interpretation, of the Lagrange multipliers.
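As a minimal sketch of this idea, the Lagrangian of a small illustrative problem (chosen here for illustration, not taken from the text) can be built and its stationarity conditions solved symbolically with sympy:

```python
import sympy as sp

# Illustrative problem: minimize f = x^2 + y^2 subject to h = x + y - 2 = 0
x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2
h = x + y - 2

# Lagrangian L(X, lambda) = f(X) + lambda * h(X), as in (3.2)
L = f + lam * h

# Stationarity: all partial derivatives of L must vanish
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(eqs, (x, y, lam), dict=True)[0]
print(sol)  # solution: x = 1, y = 1, lam = -2
```

Note that differentiating L with respect to the multiplier recovers the constraint itself, so the stationarity system automatically enforces feasibility.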

III.4 Equality constrained optimization problem


An optimization problem involving a set of p equality constraints is formulated as follows:

    min f(X),   X ∈ ℝⁿ
    subject to  h(X) = 0        (3.3)

with

    h(X) = [h_1(X), h_2(X), ..., h_p(X)]ᵀ        (3.4)

Theorem

"If the functions f, h_1, h_2, ..., h_p defining the optimization problem with p equality constraints are differentiable in a neighborhood of the solution X*, and if the gradients of the constraints are linearly independent at X*, then there exists λ* = (λ_1*, λ_2*, ..., λ_p*) ∈ ℝᵖ such that (X*, λ*) is a stationary point of the Lagrangian."

Therefore, the optimality condition is:

    ∇L(X*, λ*) = 0        (3.5)
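When the constraints are nonlinear, the stationarity system ∇L = 0 can be solved numerically. A sketch using scipy's fsolve on an illustrative problem (min x² + y² subject to xy = 1; the problem and variable names are my own, not from the text):

```python
import numpy as np
from scipy.optimize import fsolve

# Lagrangian L = x^2 + y^2 + lam*(x*y - 1); solve grad L = 0.
def grad_L(v):
    x, y, lam = v
    return [2*x + lam*y,    # dL/dx
            2*y + lam*x,    # dL/dy
            x*y - 1.0]      # dL/dlam (recovers the constraint)

# Starting from a point with x = y > 0; the analytic stationary
# point on that branch is x = y = 1, lam = -2.
x, y, lam = fsolve(grad_L, x0=[0.5, 0.5, 0.0])
print(x, y, lam)
```

Any root found this way should be checked against the constraint and the stationarity residuals, since fsolve only returns one root depending on the starting point.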

III.5 Inequality constrained optimization problem


An optimization problem involving a set of q inequality constraints is formulated as follows:

    min f(X),   X ∈ ℝⁿ
    subject to  g(X) ≤ 0        (3.6)

with

    g(X) = [g_1(X), g_2(X), ..., g_q(X)]ᵀ        (3.7)

Theorem

"If f(X) and g(X) are differentiable functions from ℝⁿ to ℝ, and X* is a local minimum of f over the set {X : g(X) ≤ 0}, and if we further assume that ∇g(X*) ≠ 0, then there exists μ ≥ 0 such that:

    ∇f(X*) + μ ∇g(X*) = 0
    μ g(X*) = 0        (3.8)

Moreover, if f and g are convex, then these two conditions are sufficient to ensure that X* is a (global) minimum."

Inequality constraints are often handled using the method of Lagrange multipliers. The complementary slackness condition states that at the optimal solution, either the Lagrange multiplier is zero or the corresponding inequality constraint is active.
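The two complementary-slackness cases can be made concrete with a one-dimensional sketch (the problem min x² subject to x ≥ a, i.e. g(x) = a − x ≤ 0, is my own illustration):

```python
# Complementary slackness for min f(x) = x^2 subject to g(x) = a - x <= 0.
# Case a > 0: the unconstrained minimum x = 0 is infeasible, the constraint
#   is active at x* = a, and stationarity 2*x* - mu = 0 gives mu = 2*a > 0.
# Case a <= 0: the unconstrained minimum x* = 0 is feasible, so mu = 0.

def kkt_point(a):
    """Return (x_star, mu) for min x^2 subject to x >= a."""
    if a > 0:
        x_star = a            # constraint active: g(x*) = 0
        mu = 2 * x_star       # from f'(x*) + mu * g'(x*) = 2x* - mu = 0
    else:
        x_star = 0.0          # constraint inactive: g(x*) < 0
        mu = 0.0              # multiplier vanishes
    return x_star, mu

for a in (1.0, -1.0):
    x_star, mu = kkt_point(a)
    # In both cases mu * g(x*) = 0: either mu = 0 or g(x*) = 0.
    print(a, x_star, mu, mu * (a - x_star))
```

In either branch the product μ·g(x*) is exactly zero, which is the complementary slackness condition of (3.8).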

Example 1:

Find the minimum of the function f = 2x₁² + 3x₂² subject to the constraint 2x₁ + x₂ = 4.

Let us write the Lagrangian function: L(X, λ) = 2x₁² + 3x₂² + λ(2x₁ + x₂ − 4)

The optimality conditions imply:

    ∂L/∂x₁ = 4x₁ + 2λ = 0
    ∂L/∂x₂ = 6x₂ + λ = 0
    ∂L/∂λ = 2x₁ + x₂ − 4 = 0

We obtain a system of three linear equations in three unknowns, which can be solved analytically (for example by rewriting it as AX = B and computing X = A⁻¹B).
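The 3×3 system of Example 1 can be solved directly; a minimal numpy sketch (matrix layout mine):

```python
import numpy as np

# Stationarity system of Example 1 written as A @ [x1, x2, lam] = b:
#   4*x1         + 2*lam = 0
#          6*x2  +   lam = 0
#   2*x1 +  x2           = 4
A = np.array([[4.0, 0.0, 2.0],
              [0.0, 6.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 4.0])

x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)  # x1 = 12/7, x2 = 4/7, lam = -24/7
```

Substituting back confirms the constraint: 2·(12/7) + 4/7 = 28/7 = 4.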

Example 2:
Determine the dimensions of a lidless parallelepiped cardboard box that requires the least amount of cardboard while having a predetermined volume V.

Fig 3.1 Parallelepiped cardboard box

The problem can be stated as follows:

    Min A = xy + 2xz + 2yz
    subject to xyz = V.

First, we have to write the Lagrangian:

    L(x, y, z, λ) = xy + 2xz + 2yz + λ(xyz − V)

The optimality conditions give:

    ∂L/∂x = y + 2z + λyz = 0
    ∂L/∂y = x + 2z + λxz = 0
    ∂L/∂z = 2x + 2y + λxy = 0
    ∂L/∂λ = xyz − V = 0

Therefore, we have to solve this system of equations for (x, y, z, λ), and finally check that the solution satisfies the constraint xyz − V = 0.
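By symmetry of the first two equations, x = y, and working through the system gives x = y = (2V)^(1/3) and z = x/2. This can be checked numerically; a sketch using scipy's SLSQP solver, with V = 4 chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

V = 4.0  # illustrative target volume; analytic optimum is x = y = 2, z = 1

area = lambda v: v[0]*v[1] + 2*v[0]*v[2] + 2*v[1]*v[2]      # A = xy + 2xz + 2yz
volume_eq = {'type': 'eq', 'fun': lambda v: v[0]*v[1]*v[2] - V}

res = minimize(area, [1.0, 1.0, 1.0], method='SLSQP',
               constraints=[volume_eq], bounds=[(1e-6, None)] * 3)
x, y, z = res.x
print(x, y, z)  # should approach 2, 2, 1
```

The bounds keep the dimensions positive, which rules out the spurious sign-flipped stationary points of the unconstrained Lagrangian system.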

Example 3:

We want to minimize the function f(x, y) = x² + y² subject to the constraint g(x, y) = x + y − 1 = 0.

The optimization problem can be formulated as follows:

    Min f(x, y) = x² + y²
    Subject to g(x, y) = x + y − 1 = 0

The Lagrangian: L(x, y, λ) = x² + y² + λ(x + y − 1). Applying the optimality conditions:

    ∂L/∂x = 2x + λ = 0
    ∂L/∂y = 2y + λ = 0
    ∂L/∂λ = x + y − 1 = 0

Solving these equations simultaneously, we get: x = 1/2, y = 1/2, λ = −1.
Now, we check the second-order conditions. The Hessian of the Lagrangian with respect to (x, y) is:

    H = [ 2  0 ]
        [ 0  2 ]

Both eigenvalues of this matrix equal 2, so H is positive definite (in particular on the tangent space of the constraint). The critical point (1/2, 1/2) is therefore a minimum, and the minimum value of the function f is (1/2)² + (1/2)² = 1/2.
III.6 Projected gradient method
The projected gradient method is an optimization algorithm that combines the gradient method (seen in Chapter 2) with a projection onto the set of feasible points. The purpose of this projection step is to ensure that the updated point X_{k+1} satisfies the constraints imposed on the optimization problem, so that every iterate remains feasible.

III.6.1 Projection onto a Convex Set

Let Ω be a closed convex set in ℝⁿ. The projection of a point X ∈ ℝⁿ onto Ω, denoted by P_Ω(X), is defined as the unique solution of:

    min_{Y ∈ Ω} (1/2) ‖X − Y‖₂²        (3.9)
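For simple convex sets the projection (3.9) has a closed form. A sketch of two common cases (a box and a Euclidean ball; the function names are mine):

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {y : lo <= y <= hi}: clip componentwise."""
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    """Projection onto the Euclidean ball of given radius centered at 0."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

print(project_box(np.array([3.0, -0.5]), 0.0, 1.0))    # [1. 0.]
print(project_ball(np.array([3.0, 4.0]), radius=1.0))  # [0.6 0.8]
```

Points already inside the set are left unchanged, as required by the definition.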

III.6.2 Projected gradient algorithm


The algorithm of this method is based on that of the gradient method (see Chapter 2), with the addition of a projection onto the set Ω:

    Y_k = X_k − α_k ∇f(X_k)        (3.10)

to obtain the point

    X_{k+1} = P_Ω(Y_k)        (3.11)

The vector X_{k+1} can be obtained by minimizing the function:

    min_{Y ∈ Ω} (1/2) ‖X_k − α_k ∇f(X_k) − Y‖₂²        (3.12)

Algorithm

1. Initialization: Choose an initial point X₀ that satisfies the constraints.

2. Iterations: For each iteration k, do the following:

   a. Compute the gradient of the objective function at the current point X_k: ∇f(X_k).
   b. Update the point using the gradient: Y_k = X_k − α_k ∇f(X_k).
   c. Project Y_k onto the convex set Ω to ensure feasibility: X_{k+1} = P_Ω(Y_k).

The projection operation P_Ω(Y_k) involves finding the point in Ω that minimizes the distance to Y_k. In the case of a single equality constraint g(v) = 0, it can be approximated by the first-order (linearized) projection:

    Project(v) = v − [g(v) / (∇g(v)ᵀ ∇g(v))] ∇g(v)        (3.13)

3. Stopping criterion: Repeat step 2 until a certain stopping criterion is met, such as when the norm of the gradient is sufficiently small.
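The three steps above can be sketched as a short generic routine (the test problem, minimizing ‖x − c‖² over a box, is my own illustration):

```python
import numpy as np

def projected_gradient(grad_f, project, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Projected gradient: X_{k+1} = P(X_k - step * grad_f(X_k))."""
    x = project(np.asarray(x0, dtype=float))   # start from a feasible point
    for _ in range(max_iter):
        y = x - step * grad_f(x)               # gradient step (3.10)
        x_new = project(y)                     # projection step (3.11)
        if np.linalg.norm(x_new - x) < tol:    # stop when the iterates stall
            return x_new
        x = x_new
    return x

# Illustration: min ||x - c||^2 over the box [0, 1]^2, with c outside the box;
# the minimizer is simply the projection of c onto the box.
c = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - c)
proj = lambda x: np.clip(x, 0.0, 1.0)

x_star = projected_gradient(grad, proj, x0=[0.5, 0.5])
print(x_star)  # [1. 0.]
```

Here the stall-based stopping test is used instead of the gradient norm, since at a constrained minimizer the gradient itself need not vanish.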

Example:
Let's consider a simple optimization problem with a convex constraint. Suppose we want to minimize the function f(x) = x² subject to the constraint x ≥ 2.

The optimization problem can be formulated as follows:

    Min f(x) = x²
    Subject to x ≥ 2, i.e. g(x) = 2 − x ≤ 0

The projected gradient update is given by:

    Y_k = X_k − α_k ∇f(X_k)
    X_{k+1} = P_Ω(Y_k)

Now, let's apply the algorithm:

1. Objective function gradient: ∇f(x) = 2x

2. Projected gradient update: Y_k = X_k − α_k ∇f(X_k) = X_k − 2α_k X_k

3. Projection onto the set {x : x ≥ 2}: X_{k+1} = max(Y_k, 2)

Let's iterate through this process:

- Choose an initial point X₀ = 4 and step size α_k = 0.1.
- First iteration (k = 0): Y₁ = X₀ − 0.1 · 2X₀ = 0.8 · 4 = 3.2, X₁ = max(3.2, 2) = 3.2.
- Repeat this process until convergence: the iterates shrink geometrically until they are projected onto x = 2, which is the constrained minimizer.
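Running these iterations in code (a short sketch mirroring the updates above):

```python
# Projected gradient for min f(x) = x^2 subject to x >= 2.
x = 4.0          # initial feasible point X0
alpha = 0.1      # step size

for k in range(50):
    y = x - alpha * 2.0 * x   # gradient step: grad f(x) = 2x
    x = max(y, 2.0)           # projection onto {x : x >= 2}

print(x)  # 2.0 -- the constrained minimizer
```

The iterates are 4.0, 3.2, 2.56, 2.048, then the projection clamps them to 2.0, where they remain.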
