Chapter 3: Constrained Optimization
III.1 Introduction
Constrained optimization is a mathematical approach used to find the optimal
values of variables within a defined set of constraints or limitations. In various real-
world scenarios, decision-makers often face situations where certain conditions
must be satisfied, resources are limited, or specific requirements must be met.
Constrained optimization provides a systematic framework to address these
challenges and determine the best possible solution that adheres to the given
constraints.
The general constrained optimization problem can be written as:

$$\min_{X \in \mathbb{R}^n} f(X)$$
$$\text{subject to} \quad h_i(X) = 0, \quad i = 1, \dots, p \tag{3.1}$$
$$\phantom{\text{subject to}} \quad g_j(X) \le 0, \quad j = 1, \dots, q$$

To study such problems, one introduces the Lagrangian function:

$$L(X, \lambda, \mu) = f(X) + \sum_{i=1}^{p} \lambda_i h_i(X) + \sum_{j=1}^{q} \mu_j g_j(X) \tag{3.2}$$

where the $\lambda_i$ and $\mu_j$ are the Lagrange multipliers.
Remark:
The Lagrangian can alternatively be expressed as:
$$L(X, \lambda, \mu) = f(X) - \sum_{i=1}^{p} \lambda_i h_i(X) - \sum_{j=1}^{q} \mu_j g_j(X)$$
This sign convention has no effect on the solution, except when interpreting the practical significance of the Lagrange multipliers.
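Numerically, problems of the form (3.1) can be handed directly to an off-the-shelf solver. The following is a minimal sketch using scipy.optimize.minimize with the SLSQP method; the concrete objective and equality constraint are borrowed from Example 1 below, and the starting point is an arbitrary choice.

```python
# Minimal sketch: solving a problem of the form (3.1) numerically.
# Objective and constraint are those of Example 1 below; the starting
# point x0 is an arbitrary choice.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Objective: f(x1, x2) = 2*x1^2 + 3*x2^2
    return 2 * x[0]**2 + 3 * x[1]**2

def h(x):
    # Equality constraint h(X) = 0: here 2*x1 + x2 - 4 = 0
    return 2 * x[0] + x[1] - 4

res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "eq", "fun": h}])
print(res.x)  # approximately [12/7, 4/7] = [1.714..., 0.571...]
```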
Consider first the problem with only equality constraints:

$$\min_{X \in \mathbb{R}^n} f(X) \quad \text{subject to} \quad h(X) = 0 \tag{3.3}$$

with

$$h(X) = \begin{pmatrix} h_1(X) \\ h_2(X) \\ \vdots \\ h_p(X) \end{pmatrix} \tag{3.4}$$
Theorem
If $X^*$ is a local minimum of (3.3) and the gradients $\nabla h_i(X^*)$, $i = 1, \dots, p$, are linearly independent, then there exists $\lambda^* \in \mathbb{R}^p$ such that

$$\nabla_X L(X^*, \lambda^*) = \nabla f(X^*) + \sum_{i=1}^{p} \lambda_i^* \nabla h_i(X^*) = 0 \tag{3.5}$$
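As an illustration of the theorem, the stationarity system (3.5) can be solved symbolically. The small problem below (min $x^2 + y^2$ subject to $x + 2y = 5$) is chosen purely for illustration; it is not taken from the text.

```python
# Sketch: solving the stationarity conditions (3.5) symbolically with sympy.
# The problem min x^2 + y^2 s.t. x + 2y = 5 is an illustrative choice.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 + y**2      # objective
h = x + 2*y - 5      # equality constraint h(X) = 0
L = f + lam * h      # Lagrangian (3.2)

# grad L = 0 with respect to (x, y, lam); the lam-equation recovers h(X) = 0
stationarity = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(stationarity, [x, y, lam], dict=True))
# [{lam: -2, x: 1, y: 2}]
```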
Consider now the problem with only inequality constraints:

$$\min_{X \in \mathbb{R}^n} f(X) \quad \text{subject to} \quad g(X) \le 0 \tag{3.6}$$

with

$$g(X) = \begin{pmatrix} g_1(X) \\ g_2(X) \\ \vdots \\ g_q(X) \end{pmatrix} \tag{3.7}$$
Theorem
If $X^*$ is a local minimum of (3.6), then there exists $\mu^* \ge 0$ such that

$$\nabla f(X^*) + \mu^{*T} \nabla g(X^*) = 0$$
$$\mu^{*T} g(X^*) = 0 \tag{3.8}$$

Moreover, if f and g are convex, then these two equalities (together with $\mu^* \ge 0$ and $g(X^*) \le 0$) are sufficient to ensure that $X^*$ is a global minimum.
Inequality constraints are thus also handled using Lagrange multipliers. The second equality in (3.8) is the complementary slackness condition: at the optimal solution, either the Lagrange multiplier is zero or the corresponding inequality constraint is active.
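For instance (a one-dimensional illustration; the objective is chosen here for concreteness), consider $\min f(x) = x^2$. With the constraint $g(x) = 1 - x \le 0$ (i.e. $x \ge 1$), the unconstrained minimum $x = 0$ is infeasible, so the constraint is active at the optimum: $x^* = 1$, and $\nabla f(x^*) + \mu \nabla g(x^*) = 2x^* - \mu = 0$ gives $\mu = 2 > 0$. With the constraint $g(x) = -1 - x \le 0$ (i.e. $x \ge -1$) instead, the unconstrained minimum is feasible, the constraint is inactive, and $\mu = 0$. In both cases $\mu\, g(x^*) = 0$.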
Example 1:
Find the minimum of the function $f(x_1, x_2) = 2x_1^2 + 3x_2^2$ subject to the constraint $2x_1 + x_2 = 4$.
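A solution sketch, using the plus-sign convention of (3.2): the Lagrangian is
$$L(x_1, x_2, \lambda) = 2x_1^2 + 3x_2^2 + \lambda(2x_1 + x_2 - 4)$$
Setting its partial derivatives to zero,
$$\frac{\partial L}{\partial x_1} = 4x_1 + 2\lambda = 0, \qquad \frac{\partial L}{\partial x_2} = 6x_2 + \lambda = 0, \qquad 2x_1 + x_2 = 4,$$
gives $x_1 = -\lambda/2$ and $x_2 = -\lambda/6$; substituting into the constraint yields $-\tfrac{7}{6}\lambda = 4$, hence $\lambda = -\tfrac{24}{7}$, $x_1 = \tfrac{12}{7}$, $x_2 = \tfrac{4}{7}$, and the minimum value is $f = \tfrac{48}{7}$.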
Example 2:
Determine the dimensions of a lidless parallelepiped cardboard box that requires the
least amount of cardboard while having a predetermined volume V.
$$\min A = xy + 2xz + 2yz \quad \text{subject to} \quad xyz = V$$
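A solution sketch (taking $x$, $y$ as the base dimensions and $z$ as the height): with the Lagrangian $L = xy + 2xz + 2yz + \lambda(xyz - V)$, the stationarity conditions are
$$y + 2z + \lambda yz = 0, \qquad x + 2z + \lambda xz = 0, \qquad 2x + 2y + \lambda xy = 0.$$
Subtracting the second equation from the first gives $(y - x)(1 + \lambda z) = 0$; the case $\lambda z = -1$ forces $z = 0$ in the first equation, which is impossible, so $x = y$. The third equation then gives $\lambda = -4/x$, and the second reduces to $x = 2z$. The volume constraint $x \cdot x \cdot (x/2) = V$ finally yields
$$x = y = (2V)^{1/3}, \qquad z = \tfrac{1}{2}(2V)^{1/3},$$
with minimum area $A = 3x^2 = 3(2V)^{2/3}$.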
Example 3:
$$\min f(x, y) = x^2 + y^2 \quad \text{subject to} \quad g(x, y) = x + y - 1 = 0$$

The Lagrangian is $L(x, y, \lambda) = x^2 + y^2 + \lambda(x + y - 1)$, and the first-order conditions are:
$$\frac{\partial L}{\partial x} = 2x + \lambda = 0$$
$$\frac{\partial L}{\partial y} = 2y + \lambda = 0$$
$$\frac{\partial L}{\partial \lambda} = x + y - 1 = 0$$
Solving these equations simultaneously, we get $x = \tfrac{1}{2}$, $y = \tfrac{1}{2}$, $\lambda = -1$.
Now, we check the second-order conditions. The Hessian of the Lagrangian with respect to $(x, y, \lambda)$ (the bordered Hessian) is:
$$H = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 2 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$
A bordered Hessian always has a negative eigenvalue, so the relevant check is the Hessian with respect to $(x, y)$ alone, $\nabla_{xx}^2 L = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$, which is positive definite. The critical point $\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$ is therefore a minimum, and the minimum value of the function is
$$f\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \left(\tfrac{1}{2}\right)^2 + \left(\tfrac{1}{2}\right)^2 = \tfrac{1}{2}.$$
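As a quick numerical cross-check, in the same spirit as the scipy sketch above (the starting point is an arbitrary choice):

```python
# Numerical cross-check of Example 3 with scipy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2                            # objective
cons = [{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}]  # x + y - 1 = 0

res = minimize(f, x0=np.array([3.0, -2.0]), method="SLSQP", constraints=cons)
print(res.x, res.fun)  # approximately [0.5, 0.5] and 0.5
```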
III.6 Projected gradient method
The projected gradient method is an optimization algorithm that combines the gradient method (seen in Chapter 2) with a projection onto the set of feasible points. The purpose of the projection step is to ensure that the updated point $X_{k+1}$ satisfies the constraints imposed on the optimization problem, so that every iterate remains feasible.
The projection of a point $X$ onto the feasible set $\Omega$ is defined as:
$$P_\Omega(X) = \arg\min_{Y \in \Omega} \frac{1}{2}\,\|X - Y\|_2^2 \tag{3.9}$$
Each iteration first takes a gradient step,
$$Y_k = X_k - \alpha_k \nabla f(X_k) \tag{3.10}$$
and then projects the result back onto the feasible set:
$$X_{k+1} = \arg\min_{Y \in \Omega} \frac{1}{2}\,\|X_k - \alpha_k \nabla f(X_k) - Y\|_2^2 \tag{3.12}$$
Algorithm
1. Initialization: choose a starting point $X_0$ and a step size $\alpha_k$.
2. Iteration:
a. Compute the gradient of the objective function at the current point $X_k$: $\nabla f(X_k)$.
b. Update the point using the gradient: $Y_k = X_k - \alpha_k \nabla f(X_k)$.
c. Project the updated point onto the feasible set: $X_{k+1} = \text{Project}(Y_k)$, where for a single constraint $g$ the projection can be computed as
$$\text{Project}(v) = v - \frac{\nabla g(v)^T v}{\nabla g(v)^T \nabla g(v)}\,\nabla g(v) \tag{3.13}$$
3. Stopping criterion: repeat step 2 until a stopping criterion is met, such as the norm of the gradient being sufficiently small.
Example:
Let's consider a simple optimization problem with a convex constraint. Suppose we want to minimize the function $f(x) = x^2$ subject to the constraint $x \ge 2$:
$$\min f(x) = x^2 \quad \text{subject to} \quad g(x) = x - 2 \ge 0$$
The projected gradient update is given by:
$$Y_k = X_k - \alpha_k \nabla f(X_k)$$
$$X_{k+1} = P_\Omega(Y_k)$$
where here the projection onto the feasible set $\{x : x \ge 2\}$ is simply $P_\Omega(Y_k) = \max(Y_k, 2)$. A Python sketch of this iteration is given below.
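A minimal sketch of this iteration in Python; the step size, starting point, and stopping tolerance are illustrative choices:

```python
# Minimal sketch of the projected gradient method for
# min f(x) = x^2 subject to x >= 2.
# Step size, starting point, and tolerance are illustrative choices.

def grad_f(x):
    # Gradient of f(x) = x^2
    return 2.0 * x

def project(y):
    # Projection onto the feasible set {x : x >= 2}
    return max(y, 2.0)

x = 5.0       # starting point X_0 (assumed)
alpha = 0.1   # fixed step size (assumed)
for k in range(100):
    y = x - alpha * grad_f(x)  # gradient step (3.10)
    x_new = project(y)         # projection step
    if abs(x_new - x) < 1e-8:  # stopping criterion
        break
    x = x_new

print(x)  # converges to the constrained minimizer x* = 2
```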