
 Optimal and adaptive control

- Linear control systems
  + Linear quadratic regulator (LQR): continuous system, discrete system
  + Linear quadratic Gaussian (LQG)
- Non-linear optimal control
- Model predictive control

 Contents: Parameter optimization problems

References:
- "Applied Optimal Control: Optimization, Estimation and Control", A. E. Bryson and Y.-C. Ho, 1975.
- "Linear Optimal Control Systems", H. Kwakernaak and R. Sivan, 1972.
- "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control", S. L. Brunton and J. N. Kutz, 2019.

%%%%%%%%-----------------

1. Parameter optimization problems


1.1 Problems without constraints
- Decision vector: $u = [u_1, u_2, \dots, u_m]^T$
- Performance index: $L(u)$
- Problem: find $u^*$ s.t. $L(u^*) = \min_u L(u)$
- Stationary points: the necessary condition for optimality is
$$\frac{\partial L}{\partial u} = 0$$
 Exam.1
$L(u) = u^2 + 2u + 5$, $L(x) = x^2 + x + 1$
 Exam.2
$L(u_1, u_2) = (u_1 - 1)^2 + (u_2 - 2)^2 + 5$
 Exam.3: a positive definite matrix
$$L(u_1, u_2) = 2u_1^2 + 2u_1 u_2 + 4u_2^2 + 1 = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + 1$$
 Exam.4: not a positive definite matrix
$$L(u_1, u_2) = -u_1^2 + 2u_1 u_2 + 3u_2^2 + 1 = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + 1$$
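A quick numerical check of these examples (a minimal sketch; NumPy is an assumed dependency here): the stationary point of Exam.1, and the definiteness of the quadratic-form matrices in Exam.3 and Exam.4 via their eigenvalues.

```python
import numpy as np

# Exam.1: L(u) = u^2 + 2u + 5, dL/du = 2u + 2 = 0  ->  u* = -1
u_star = -1.0
print("Exam.1: u* =", u_star, ", L(u*) =", u_star**2 + 2*u_star + 5)  # 4.0

# Exam.3 / Exam.4: check definiteness via eigenvalues of the symmetric matrix
M3 = np.array([[2.0, 1.0], [1.0, 4.0]])
M4 = np.array([[-1.0, 1.0], [1.0, 3.0]])
print("Exam.3 eigenvalues:", np.linalg.eigvalsh(M3))  # both positive -> minimum
print("Exam.4 eigenvalues:", np.linalg.eigvalsh(M4))  # mixed signs -> saddle point
```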

-------------%%%%%%%%%%%%
1.2 Problems with constraints
- Decision vector: $u$
- State vector: $x = [x_1, x_2, \dots, x_n]^T$
- Constraint vector: $f = [f_1, f_2, \dots, f_n]^T$, $f(x, u) = 0$
- Performance index: $L(x, u)$
- Problem:
Find $u^*$ s.t. $L(x^*, u^*) = \min L(x, u)$ subject to the constraints $f(x, u) = 0$
 Exam.5
$L(x, u) = x^2 + u^2$, $x + u = 1$
 Lagrange Multiplier: $\lambda$
Define $H(x, u, \lambda)$ to "adjoin" the constraints to the index:
$$H(x, u, \lambda) = L(x, u) + \lambda^T f(x, u)$$
- Necessary conditions:
1) Constraints:
$$\frac{\partial H}{\partial \lambda} = f(x, u) = 0$$
2) Stationary points: the differential change in $H(x, u, \lambda)$,
$$dH = \frac{\partial H}{\partial x} dx + \frac{\partial H}{\partial u} du,$$
must vanish, so
$$\frac{\partial H}{\partial x} = 0, \quad \frac{\partial H}{\partial u} = 0$$
 Exam.1
$$L = \frac{1}{2}\left(\frac{x^2}{a^2} + \frac{u^2}{b^2}\right), \qquad f(x, u) = x + mu - c = 0$$

Procedure to solution:
1) Define the adjoined index
$$H(x, u, \lambda) = \frac{1}{2}\left(\frac{x^2}{a^2} + \frac{u^2}{b^2}\right) + \lambda(x + mu - c)$$
2) Necessary conditions:
a) $x + mu - c = 0$: the constraint condition
b) $0 = \dfrac{\partial H}{\partial x} = \dfrac{x}{a^2} + \lambda$: the stationary condition w.r.t. $x$
c) $0 = \dfrac{\partial H}{\partial u} = \dfrac{u}{b^2} + \lambda m$: the stationary condition w.r.t. $u$
3) Find the solution satisfying a), b), and c):
$$u^* = \frac{b^2 m c}{a^2 + m^2 b^2}, \qquad x^* = \frac{a^2 c}{a^2 + m^2 b^2}, \qquad \lambda^* = \frac{-c}{a^2 + m^2 b^2}$$
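The same solution can be recovered symbolically (a minimal sketch; SymPy is an assumed dependency, and `a, b, m, c` are the problem constants):

```python
import sympy as sp

x, u, lam = sp.symbols('x u lam')
a, b, m, c = sp.symbols('a b m c', positive=True)

# Adjoined index H = L + lam * f
L = sp.Rational(1, 2) * (x**2 / a**2 + u**2 / b**2)
f = x + m * u - c
H = L + lam * f

# Necessary conditions: dH/dx = 0, dH/du = 0, dH/dlam = f = 0
sol = sp.solve([sp.diff(H, x), sp.diff(H, u), sp.diff(H, lam)],
               [x, u, lam], dict=True)[0]
print(sp.simplify(sol[u]))    # b**2*c*m/(a**2 + b**2*m**2)
print(sp.simplify(sol[x]))    # a**2*c/(a**2 + b**2*m**2)
print(sp.simplify(sol[lam]))  # -c/(a**2 + b**2*m**2)
```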

1.3 (skip) Problems with equality constraints: sufficient conditions

1) Together with the necessary conditions
2) The second derivative of $L$ along the constraint surface must be positive definite:
$$\left.\frac{\partial^2 L}{\partial u^2}\right|_{f=0} = H_{uu} - H_{ux} f_x^{-1} f_u - f_u^T \left(f_x^T\right)^{-1} H_{xu} + f_u^T \left(f_x^T\right)^{-1} H_{xx} f_x^{-1} f_u$$

 Ex.4 (prob.4, page 12)

$$\min_u L(x, u) = \min_u \left(\frac{1}{2} x^T Q x + \frac{1}{2} u^T R u\right)$$
with the constraint
$$f(x, u) = x + Gu + c = 0$$
Sol:
1) Define $H$ as
$$H = L + \lambda^T f$$
2) Find the necessary conditions:
a) Constraint ($\frac{\partial H}{\partial \lambda} = 0$):
$$f(x, u) = x + Gu + c = 0 \;\Rightarrow\; x = -Gu - c$$
b) The stationary points:
$$\frac{\partial H}{\partial x} = \frac{\partial}{\partial x}\left(\frac{1}{2} x^T Q x + \frac{1}{2} u^T R u + \lambda^T (x + Gu + c)\right) = Qx + \lambda = 0 \;\Rightarrow\; \lambda = -Qx$$
$$\frac{\partial H}{\partial u} = \frac{\partial}{\partial u}\left(\frac{1}{2} x^T Q x + \frac{1}{2} u^T R u + \lambda^T (x + Gu + c)\right) = Ru + G^T \lambda = 0$$
$$\Rightarrow\; Ru = -G^T \lambda = G^T Q x = G^T Q(-Gu - c)$$
c) Find the optimal $u^*$:
$$u^* = -\left(R + G^T Q G\right)^{-1} G^T Q c$$
d) For the other quantities, substitute $u^*$ back into the equations above.
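A numerical sanity check (a minimal sketch; NumPy is an assumed dependency and the matrices `Q`, `R`, `G`, `c` below are illustrative values, not from the original problem):

```python
import numpy as np

# Illustrative data: 2-dimensional state, scalar input
Q = np.diag([2.0, 1.0])      # state weight, positive definite
R = np.array([[0.5]])        # input weight, positive definite
G = np.array([[1.0], [2.0]])
c = np.array([1.0, -1.0])

# Closed-form optimum: u* = -(R + G^T Q G)^{-1} G^T Q c
u_star = -np.linalg.solve(R + G.T @ Q @ G, G.T @ Q @ c)
x_star = -G @ u_star - c     # recover the state from the constraint

# Sufficiency (Sec. 1.3): here f_x = I, f_u = G, H_ux = 0, so the projected
# Hessian reduces to R + G^T Q G, which must be positive definite
print("u* =", u_star)
print("x* =", x_star)
print("eigenvalues of R + G^T Q G:", np.linalg.eigvalsh(R + G.T @ Q @ G))
```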

2. Calculus of Variations
2.1 Problem Concept
$$\min_u J = h(x(t_f)) + \int_{t_0}^{t_f} g(x(t), u(t), t)\, dt$$
Subject to
$$\dot{x} = f(x, u, t)$$
$x(t_0), x(t_f)$: given
- $J(x, u, t_0, t_f)$: a functional.

%%%%Kim's comment: Functional

Functional: a map from a function to a scalar value.
- Example
$$\|x(t)\|_2 = \left(\int_{t_0}^{t_f} x(t)^T x(t)\, dt\right)^{1/2}$$
-------------------------%%%%%%%%%
2.2 Some definitions and facts
 Maximum and minimum: functional
A functional $J(x(t))$ has a local minimum at $x^*$ if
$$J(x) \ge J(x^*)$$
for all admissible $x$ in $\|x - x^*\| \le \epsilon$.
 A minimum can occur at (i) a stationary point, or (ii) at a boundary.
 An increment of a functional:
$$\Delta J = J(x + \delta x) - J(x)$$
 A variation of the functional, $\delta J$, is the linear approximation of the increment:
$$\Delta J = \delta J(x, \delta x) + \text{higher-order terms in } \delta x$$
 Fundamental theorem of the calculus of variations:
If $x^*$ is an extremal function, then the variation of $J$ must vanish on $x^*$ for all admissible $\delta x$:
$$\delta J(x^*, \delta x) = 0$$

2.3 Euler Equation: without path constraints, scalar case

- The cost is
$$J(x(t)) = \int_{t_0}^{t_f} g(x, \dot{x}, t)\, dt, \qquad t_0, t_f \text{ given}$$
Find
$$\min_x J = \int_{t_0}^{t_f} g(x(t), \dot{x}(t), t)\, dt$$
- By the fundamental theorem of the calculus of variations, the necessary condition for an extremum is
$$\frac{\partial g(x, \dot{x}, t)}{\partial x} - \frac{d}{dt}\left(\frac{\partial g(x, \dot{x}, t)}{\partial \dot{x}}\right) = 0$$
%%%%Kim's comment

Why does the condition have the $\frac{d}{dt}(\cdot)$ term? If $g(x, y, t)$ where $x$ and $y$ are independent, then the Euler equation would be
$$\frac{\partial g}{\partial x} + \frac{\partial g}{\partial y} = 0.$$
In our case $g(x, \dot{x}, t)$, it would look like
$$\frac{\partial g}{\partial x} + \frac{\partial g}{\partial \dot{x}} = 0.$$
But $x$ and $\dot{x}$ are not independent; hence the $\frac{d}{dt}(\cdot)$ term appears.
%%%%%

 Ex. Find the curve that gives the shortest distance between two points in a plane, $(x_0, y_0)$ and $(x_f, y_f)$.
- Solution:
$$J = \int_{x_0}^{x_f} ds = \int_{x_0}^{x_f} \sqrt{(dx)^2 + (dy)^2} = \int_{x_0}^{x_f} \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx = \int_{x_0}^{x_f} \sqrt{1 + (\dot{y})^2}\, dx \equiv \int_{x_0}^{x_f} g(\dot{y})\, dx$$

Using Euler's condition,
$$\frac{\partial g(x, \dot{x}, t)}{\partial x} - \frac{d}{dt}\left(\frac{\partial g(x, \dot{x}, t)}{\partial \dot{x}}\right) = 0$$
Since $x$ is the independent variable here, substitute $t \to x$ and $x \to y$; because $g$ depends only on $\dot{y}$, $\frac{\partial g(\dot{y})}{\partial y} = 0$.
1) The first term:
$$\frac{\partial g(y, \dot{y}, x)}{\partial y} = \frac{\partial}{\partial y}\left(\sqrt{1 + (\dot{y})^2}\right) = 0$$
2) The second term:
$$\frac{d}{dx}\left(\frac{\partial g(\dot{y})}{\partial \dot{y}}\right) = \frac{d}{d\dot{y}}\left(\frac{\partial g}{\partial \dot{y}}\right)\frac{d\dot{y}}{dx} = \frac{d}{d\dot{y}}\left(\frac{\dot{y}}{(1 + \dot{y}^2)^{1/2}}\right)\ddot{y} = \frac{\ddot{y}}{(1 + \dot{y}^2)^{3/2}} = 0$$
 $\Rightarrow \ddot{y} = 0$
 $\Rightarrow y = c_1 x + c_2$

 If $x_0 = 0$, $y(x_0) = 0$ and $x_f = a$, $y(x_f) = b$, then $y = \left(\dfrac{b}{a}\right) x$.


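This extremal can be verified symbolically (a minimal sketch; SymPy is an assumed dependency):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Arc-length integrand g(y') = sqrt(1 + y'^2)
g = sp.sqrt(1 + sp.diff(y(x), x)**2)

# Euler equation for J = integral of g dx; reduces to y''(x) = 0
print(sp.simplify(euler_equations(g, y(x), x)[0]))

# Hence the extremal is a straight line
print(sp.dsolve(sp.Eq(sp.diff(y(x), x, 2), 0), y(x)))  # y(x) = C1 + C2*x
```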
2.4 Vector case:
$$J(X(t)) = \int_{t_0}^{t_f} g(X(t), \dot{X}(t), t)\, dt, \qquad t_0, t_f, X(t_0) \text{ given}$$
Then the Euler condition is
$$\frac{\partial g}{\partial X} - \frac{d}{dt}\frac{\partial g}{\partial \dot{X}} = 0$$
2.5 Boundary conditions
- There are several possible constraints at the boundaries: each of $t_0$, $t_f$, $x(t_0)$, $x(t_f)$ may be free or fixed.
- For each type of boundary condition there are corresponding necessary conditions for the extremum; these are known as "transversality conditions".

 Hamilton-Jacobi-Bellman condition: with path constraints

$$\min_u J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t)\, dt$$
Subject to
$$\dot{x} = a(x, u, t), \qquad x(t_0) \text{ given}$$

2.6 Optimal cost-to-go
Define
$$J^*(x(t), t) = \min_{\substack{u(\tau) \in U \\ t \le \tau \le t + \Delta t}} \left\{ \int_t^{t + \Delta t} g(x(\tau), u(\tau), \tau)\, d\tau + J^*(x(t + \Delta t), t + \Delta t) \right\}$$
2.7 Hamiltonian
$$H(x, u, J_x^*, t) = g(x, u, t) + J_x^*(x, t)\, a(x, u, t)$$
2.8 Hamilton-Jacobi-Bellman condition: necessary and sufficient condition for optimality
$$-J_t^*(x, t) = \min_{u \in U} H(x, u, J_x^*, t)$$
3. Example of HJB: Continuous LQR [1]
3.1 Problem
$$\dot{x} = Ax + Bu$$
$$J = \frac{1}{2} x(t_f)^T H x(t_f) + \frac{1}{2} \int_{t_0}^{t_f} \left\{ x^T Q x + u^T R u \right\} dt, \qquad t_f \text{ fixed}$$
3.2 Hamiltonian
$$H(x, u, J_x^*, t) = \frac{1}{2}\left(x^T Q x + u^T R u\right) + J_x^*(Ax + Bu)$$
3.3 For optimal control
$$\frac{\partial H}{\partial u} = u^T R + J_x^*(x, t) B = 0$$
$$\Rightarrow\; u^* = -R^{-1} B^T J_x^{*T}$$
 Since $\frac{\partial^2 H}{\partial u^2} = R > 0$, this is a global minimum.
3.4 Solve for $J_x^*$
Assume $J^* = \frac{1}{2} x^T P x$ with $P = P^T$.
Then
$$\frac{\partial J^*}{\partial x} = x^T P, \qquad \frac{\partial J^*}{\partial t} = \frac{1}{2} x^T \dot{P} x$$
The HJB is
$$-J_t^*(x, t) = \min_{u \in U} H(x, u, J_x^*, t)$$
Substituting these into the HJB:
$$-\frac{1}{2} x^T \dot{P} x = \frac{1}{2} x^T \left( Q + PA + A^T P - P B R^{-1} B^T P \right) x, \quad \forall x,$$
$$-\dot{P} = Q + PA + A^T P - P B R^{-1} B^T P,$$
 with terminal condition $P(t_f) = H$.
3.5 Riccati equation
$$-\dot{P} = Q + PA + A^T P - P B R^{-1} B^T P, \qquad P(t_f) = H$$

3.6 The optimal control

$$u^* = -R^{-1} B^T P x$$
- A linear state feedback!!
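The Riccati equation can be integrated backward in time from $P(t_f) = H$ (a minimal sketch; SciPy/NumPy are assumed dependencies, and the double-integrator matrices `A`, `B`, `Q`, `R`, `H` below are illustrative values, not from the lecture):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data: double integrator with scalar input
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
H = np.eye(2)            # terminal weight, P(tf) = H
t0, tf = 0.0, 5.0

def riccati_rhs(t, p):
    # Pdot = -(Q + P A + A^T P - P B R^{-1} B^T P)
    P = p.reshape(2, 2)
    Pdot = -(Q + P @ A + A.T @ P - P @ B @ np.linalg.solve(R, B.T) @ P)
    return Pdot.ravel()

# Integrate backward from tf to t0 (solve_ivp accepts a decreasing t_span)
sol = solve_ivp(riccati_rhs, [tf, t0], H.ravel())

# Feedback gain at t0 for the linear state feedback u*(t) = -R^{-1} B^T P(t) x(t)
P0 = sol.y[:, -1].reshape(2, 2)
print("P(t0) =\n", P0)
print("K(t0) =", np.linalg.solve(R, B.T @ P0))
```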

[1] MIT OpenCourseWare, "Principles of Optimal Control", Lecture 4.
