
Model Predictive Control

Lars Grüne
Mathematisches Institut, Universität Bayreuth

IDK Winter School, Thurnau, March 4-6, 2009

Contents
Part 1:
(1) Introduction: What is Model Predictive Control?
(2) Background: Infinite horizon optimal control
(3) The stability problem and its classical solutions:
    (3a) Equilibrium endpoint constraint
    (3b) Regional endpoint constraint and terminal cost
(4) Inverse optimality and suboptimality estimates

Part 2:
(5) Stability and suboptimality without stabilizing constraints
(6) Examples for the design of MPC schemes
(7) Varying control horizons (time permitting)
(8) Demonstration (T. Gebelein and J. Pannek)

Lars Grüne, Model Predictive Control, p. 2

(1) Introduction: What is Model Predictive Control (MPC)?

Setup
We consider nonlinear discrete time control systems
x(n + 1) = f(x(n), u(n)) with x(n) ∈ X, u(n) ∈ U

- we consider discrete time systems for simplicity of exposition
- continuous time systems can be treated in an analogous way or as discrete time sampled-data systems
- X and U depend on the model. These may be Euclidean spaces Rⁿ and Rᵐ or more general (e.g., infinite dimensional) spaces
- state and control constraints can be added explicitly or included implicitly by choosing X and U as suitable subsets of the respective spaces

Lars Grüne, Model Predictive Control, p. 4

Prototype Problem
Assume there exists an equilibrium x* ∈ X for u = 0, i.e., f(x*, 0) = x*

Task: stabilize the system x(n + 1) = f(x(n), u(n)) at x* via static state feedback,
i.e., find F : X → U such that x* is asymptotically stable for the feedback controlled system
x_F(n + 1) = f(x_F(n), F(x_F(n)))

Lars Grüne, Model Predictive Control, p. 5

Prototype Problem
Recall: Asymptotic stability means
Attraction: x_F(n) → x* as n → ∞ for all x_F(0) ∈ X
plus
Stability: solutions starting close to x* remain close to x*,
or, formally: for each ε > 0 there exists δ > 0 such that
‖x_F(n) − x*‖ ≤ ε for all x_F(0) with ‖x_F(0) − x*‖ ≤ δ and all n ∈ N₀

This prototype equilibrium stabilization problem is easily generalizable to tracking, set stabilization, …

In the sequel, we always assume that the problem is solvable, i.e., that a stabilizing feedback F : X → U exists

Lars Grüne, Model Predictive Control, p. 6

The basic idea of MPC


(1) At each time instant in N₀, for the current state x, use the model to predict solutions
    x(n + 1) = f(x(n), u(n)),  n = 0, …, N − 1,  x(0) = x

(2) Use these predictions in order to optimize
    J_N(x, u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n))
    over the control sequences u = (u(0), …, u(N − 1)) ∈ U^N, where ℓ(x, u) penalizes the distance from the equilibrium and the control effort, e.g., ℓ(x, u) = ‖x − x*‖² + ‖u‖²

(3) From the optimal control sequence u*(0), …, u*(N − 1), use the first element as feedback value, i.e., F(x) := u*(0)

Lars Grüne, Model Predictive Control, p. 7
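
To make the receding horizon loop above concrete, here is a minimal numerical sketch (an illustration of ours, not code from the course): scipy.optimize.minimize serves as a generic solver for step (2), and the dynamics f, the running cost ell and the horizon N = 10 are placeholder choices.

import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # illustrative dynamics: a discrete-time double integrator driven by u
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])

def ell(x, u):
    # running cost penalizing distance to the equilibrium x* = 0 and control effort
    return x @ x + u ** 2

def J_N(u_seq, x0):
    # finite horizon cost J_N(x, u): sum of ell along the predicted trajectory
    x, cost = np.array(x0, dtype=float), 0.0
    for u in u_seq:
        cost += ell(x, u)
        x = f(x, u)
    return cost

def mpc_feedback(x0, N=10):
    # step (2): optimize over control sequences; step (3): return the first element
    res = minimize(J_N, np.zeros(N), args=(x0,))
    return res.x[0]

# closed loop: apply F_N(x), advance the real system, repeat
x = np.array([1.0, 0.0])
for n in range(30):
    u = mpc_feedback(x)
    x = f(x, u)
print(x)  # the closed loop drives x toward the equilibrium x* = 0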

MPC from the control point of view


minimize J_N(x, u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n)),   x(0) = x

⟹ optimal control u*(0), …, u*(N − 1)

⟹ set F_N(x) := u*(0)

[figure: at each sampling instant the finite horizon problem is solved over the current, receding horizon of length N; only the first element u*(0) of the resulting optimal control sequence is applied before the horizon is shifted]

Lars Grüne, Model Predictive Control, p. 8

MPC from the trajectory point of view


[figure: state trajectory over time n = 0, 1, 2, …; from each closed-loop state x0, x1, x2, … a new open loop prediction over the next N steps is drawn]

black = predictions (open loop optimization)
red = MPC closed loop

Lars Grüne, Model Predictive Control, p. 9

Model predictive control

(aka receding horizon control)

- Idea first formulated in [A.I. Propoi, Use of linear programming methods for synthesizing sampled-data automatic systems, Automation and Remote Control 1963], often rediscovered
- used in industrial applications since the mid 1970s, mainly for constrained linear systems [Qin & Badgwell, 1997, 2001]
- more than 9000 industrial MPC applications in Germany counted in [Dittmar & Pfeifer, 2005]
- development of theory since 1980 (linear), 1990 (nonlinear)

Central questions:
- When does MPC stabilize the system?
- How good is the performance of the MPC feedback law?
- How long does the optimization horizon N need to be?
- and, of course, the development of good algorithms (not a topic of this course)

Lars Grüne, Model Predictive Control, p. 10

An example
[figure: MPC closed-loop trajectory in the (x1, x2)-plane]

x1(n + 1) = sin(ϑ + u)
x2(n + 1) = cos(ϑ + u)/2

with x̃ = (x1, 2 x2)^T and
ϑ = arccos(x̃2/‖x̃‖) for x1 ≥ 0,  ϑ = 2π − arccos(x̃2/‖x̃‖) for x1 < 0,

X = R², U = [0, u_max], x* = (0, 1/2)^T, x0 = (0, −1/2)^T

MPC with ℓ(x, u) = ‖x − x*‖² + |u|² and u_max = 0.2 yields asymptotic stability for N = 11, but not for N ≤ 10

Lars Grüne, Model Predictive Control, p. 11
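
A compact way to simulate this example is to work with the angle parametrization of the ellipse (a sketch of ours, not code from the slides; the angle formula, the target point and the initial value follow the reconstruction above and should be treated as assumptions):

import numpy as np
from scipy.optimize import minimize

x_star = np.array([0.0, 0.5])       # target equilibrium (reconstructed)
u_max = 0.2

def angle(x):
    # reconstructed angle of x relative to the ellipse x1^2 + (2 x2)^2 = 1
    y = np.array([x[0], 2.0 * x[1]])
    th = np.arccos(y[1] / np.linalg.norm(y))
    return th if x[0] >= 0 else 2.0 * np.pi - th

def f(x, u):
    th = angle(x) + u
    return np.array([np.sin(th), np.cos(th) / 2.0])

def ell(x, u):
    return np.sum((x - x_star) ** 2) + u ** 2

def J_N(u_seq, x0):
    x, cost = np.array(x0), 0.0
    for u in u_seq:
        cost += ell(x, u)
        x = f(x, u)
    return cost

def mpc_feedback(x0, N):
    bounds = [(0.0, u_max)] * N                       # control constraint U = [0, u_max]
    res = minimize(J_N, np.full(N, 0.1), args=(x0,), bounds=bounds)
    return res.x[0]

x = np.array([0.0, -0.5])                             # initial value (reconstructed)
for n in range(200):
    x = f(x, mpc_feedback(x, N=11))
print(np.linalg.norm(x - x_star))                     # the slides report stability for N = 11, not for N <= 10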

(2) Background: Infinite horizon optimal control

Stabilization via optimal control


For a continuous running cost ℓ : X × U → R₀⁺ with ℓ(x*, 0) = 0 and
min_{u∈U} ℓ(x, u) > 0 for x ≠ x*,
define the infinite horizon functional
J∞(x, u) := ∑_{n=0}^{∞} ℓ(x(n), u(n))
and the optimal value function
V∞(x) := inf_{u: N₀→U} J∞(x, u)

Lars Grüne, Model Predictive Control, p. 13

Stabilization via optimal control

V∞(x) = inf_{u: N₀→U} J∞(x, u) = inf_{u: N₀→U} ∑_{n=0}^{∞} ℓ(x(n), u(n))

Facts (for suitable ℓ):
- if the feedback stabilization problem is solvable, then the function V∞ is finite and continuous
- V∞ satisfies the Dynamic Programming Principle
  V∞(x) = min_{u∈U} { ℓ(x, u) + V∞(f(x, u)) }
- if we choose F∞(x) ∈ U as the minimizer, i.e.,
  F∞(x) = argmin_{u∈U} { ℓ(x, u) + V∞(f(x, u)) },
  then F∞ is the optimal feedback

Lars Grüne, Model Predictive Control, p. 14
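
For finite grids the dynamic programming principle can be evaluated directly; the following sketch (our illustration, with an assumed scalar example system, grids and running cost) approximates V∞ by value iteration and reads off the optimal feedback as the minimizer:

import numpy as np

X = np.linspace(-2.0, 2.0, 201)            # state grid (assumed for this sketch)
U = np.linspace(-1.0, 1.0, 21)             # control grid

def f(x, u):
    return 1.2 * x + u                      # illustrative scalar system, unstable without control

def ell(x, u):
    return x ** 2 + u ** 2                  # running cost with ell(0, 0) = 0

V = np.zeros_like(X)                        # initial guess for V_infinity
for _ in range(200):                        # value iteration: V <- min_u { ell + V(f(.,u)) }
    Q = np.array([[ell(x, u) + np.interp(f(x, u), X, V) for u in U] for x in X])
    V_new = Q.min(axis=1)
    done = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if done:
        break

def F(x):
    # optimal feedback: F(x) = argmin_u { ell(x, u) + V(f(x, u)) }
    return U[np.argmin([ell(x, u) + np.interp(f(x, u), X, V) for u in U])]

x = 1.5
for n in range(20):
    x = f(x, F(x))
print(x)   # driven toward the equilibrium x* = 0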

Asymptotic stability of the optimal feedback law


Furthermore, F∞ is asymptotically stabilizing: this follows from
V∞(f(x, F∞(x))) ≤ V∞(x) − ℓ(x, F∞(x)) < V∞(x) for x ≠ x*

[figure: surface plot of a Lyapunov function over the (x1, x2)-plane]

⟹ V∞ is a Lyapunov function

Approach for MPC: prove similar inequalities for F_N and
V_N(x(0)) := inf_{u: N₀→U} J_N(x(0), u) = inf_{u: N₀→U} ∑_{n=0}^{N−1} ℓ(x(n), u(n))
and use V_N as a Lyapunov function

Lars Grüne, Model Predictive Control, p. 15

(3) The Stability Problem

VN as a Lyapunov Function
Problem: prove that the MPC feedback law F_N is stabilizing

Approach: define the finite time optimal value function
V_N(x(0)) := inf_{u: N₀→U} J_N(x(0), u) = inf_{u: N₀→U} ∑_{n=0}^{N−1} ℓ(x(n), u(n))
and prove that V_N is a Lyapunov function, i.e., that
- V_N has suitable upper and lower bounds (automatically inherited from ℓ) and
- V_N(f(x, F_N(x))) ≤ V_N(x) − ℓ̃(x, F_N(x))
  for some ℓ̃ : X × U → R₀⁺ with ℓ̃(x, F_N(x)) > 0 for x ≠ x*

⟹ V_N(x_{F_N}(n)) → 0  ⟹ x_{F_N}(n) → x*, plus stability

(most commonly used approach in the literature)

Lars Grüne, Model Predictive Control, p. 17

Why is this difficult?

We want  V_N(f(x, F_N(x))) ≤ V_N(x) − ℓ̃(x, F_N(x))   (∗)
with ℓ̃(x, F_N(x)) > 0 for x ≠ x*

For N = ∞, the dynamic programming principle immediately implies (∗) with ℓ̃(x, F∞(x)) = ℓ(x, F∞(x)):
V∞(x) = ℓ(x, F∞(x)) + V∞(f(x, F∞(x)))

The dynamic programming principle for V_N reads
V_N(x) = min_{u∈U} { ℓ(x, u) + V_{N−1}(f(x, u)) } = ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x)))

Thus, (∗) follows with ℓ̃(x, u) = ℓ(x, u) + V_{N−1}(f(x, u)) − V_N(f(x, u))

Problem: ensure ℓ̃(x, F_N(x)) > 0 for x ≠ x*

Lars Grüne, Model Predictive Control, p. 18

Why is this difficult?

Task: give conditions under which
ℓ̃(x, F_N(x)) := ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))) > 0
holds for x ≠ x*.

For the basic (and most widely used) MPC formulation
V_N(x(0)) := inf_{u: N₀→U} J_N(x(0), u) = inf_{u: N₀→U} ∑_{n=0}^{N−1} ℓ(x(n), u(n))
this appeared to be out of reach until the mid 1990s
(note: V_{N−1} − V_N ≤ 0 by definition, typically with strict <)

⟹ additional stabilizing constraints were proposed

Lars Grüne, Model Predictive Control, p. 19

(3a) Classical solution of the stability problem: Equilibrium endpoint constraint

Equilibrium endpoint constraint I


Optimal control problem:
minimize J_N(x(0), u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n))

Recall: f(x*, 0) = x* and ℓ(x*, 0) = 0

⟹ add the equilibrium endpoint constraint x(N) = x*

[Keerthi/Gilbert 88, …]

Lars Grüne, Model Predictive Control, p. 21

Equilibrium endpoint constraint II


Then, each feasible trajectory for horizon N − 1 with control u(0), …, u(N − 2) can be prolonged with no cost by setting u(N − 1) := 0, i.e.,
ℓ(x(N − 1), u(N − 1)) = ℓ(x*, 0) = 0
and thus
J_{N−1}(x(0), u) = ∑_{n=0}^{N−2} ℓ(x(n), u(n)) = ∑_{n=0}^{N−1} ℓ(x(n), u(n)) = J_N(x(0), u).

Since this prolonged trajectory is again feasible, we get V_N(x) ≤ V_{N−1}(x)

Note: V_{N−1}(x) ≤ V_N(x) no longer holds under the constraint x(N) = x*

Lars Grüne, Model Predictive Control, p. 22

Equilibrium endpoint constraint III


From V_N(x) ≤ V_{N−1}(x) we get
ℓ̃(x, F_N(x)) = ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))) ≥ ℓ(x, F_N(x)) > 0
for all x ≠ x* by choice of ℓ.

⟹ V_N(f(x, F_N(x))) ≤ V_N(x) − ℓ(x, F_N(x))   (∗)

i.e., stability with Lyapunov function V_N (inequality (∗) with ℓ in place of ℓ̃)

Note: in general, x(N) = x* does not imply x_{F_N}(N) = x*

Lars Grüne, Model Predictive Control, p. 23

Equilibrium endpoint constraint: discussion

The additional condition x(N) = x* ensures asymptotic stability in a rigorously provable way, but
- online optimization may become harder
- a large feasible set {x(0) ∈ Rⁿ | x(N) = x* for some u ∈ Uᴺ} typically needs a large optimization horizon N
- the system needs to be controllable to x* in finite time
- not very often used in industrial practice

Lars Grüne, Model Predictive Control, p. 24

(3b) Classical solution of the stability problem: Regional endpoint constraint and terminal cost

Regional constraint and terminal cost I


Optimal control problem:
minimize J_N(x(0), u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n))

We want V_N to become a Lyapunov function
⟹ add a local Lyapunov function W : B_ε(x*) → R₀⁺ as terminal cost:
minimize J_N(x(0), u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n)) + W(x(N))
and use the terminal constraint ‖x(N) − x*‖ ≤ ε, W(x(N)) ≤ γ

[Chen & Allgöwer 98, Jadbabaie et al. 98, …]

Lars Grüne, Model Predictive Control, p. 26

Regional constraint and terminal cost II


minimize J_N(x(0), u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n)) + W(x(N))
plus terminal constraint ‖x(N) − x*‖ ≤ ε, W(x(N)) ≤ γ

We choose W, ε, γ such that
- cl {x ∈ B_ε(x*) | W(x) ≤ γ} ⊂ B_ε(x*)
- W(x) ≤ γ implies the existence of F_W(x) ∈ U with
  W(f(x, F_W(x))) ≤ W(x) − ℓ(x, F_W(x))

Lars Grüne, Model Predictive Control, p. 27
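
For linear dynamics with quadratic running cost, a standard way to obtain such a W (a sketch under these assumptions; the matrices are illustrative and this particular construction is not spelled out on the slides) is the LQR cost W(x) = xᵀPx with P from the discrete algebraic Riccati equation; the decrease condition then holds with the local LQR feedback F_W:

import numpy as np
from scipy.linalg import solve_discrete_are

# illustrative linear system x+ = A x + B u with running cost l(x,u) = x'Qx + u'Ru
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                    # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # LQR gain

def W(x):
    return x @ P @ x                                  # candidate terminal cost

def F_W(x):
    return -K @ x                                     # local feedback used for the prolongation

# check W(f(x, F_W(x))) <= W(x) - l(x, F_W(x)) at a sample point
x = np.array([0.2, -0.1])
u = F_W(x)
x_next = A @ x + B @ u
lhs = W(x_next)
rhs = W(x) - (x @ Q @ x + u @ R @ u)
print(lhs <= rhs + 1e-12)                             # True: the Riccati equation gives equality here

In practice W is additionally scaled and the level γ is chosen small enough that the state and control constraints are satisfied on the set {W ≤ γ}.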

Regional constraint and terminal cost II


Then, each feasible trajectory for horizon N − 1 with control u(0), …, u(N − 2) can be prolonged by setting u(N − 1) := F_W(x(N − 1)).

This yields ℓ(x(N − 1), u(N − 1)) ≤ W(x(N − 1)) − W(x(N)) and thus
J_{N−1}(x(0), u) = ∑_{n=0}^{N−2} ℓ(x(n), u(n)) + W(x(N − 1))
               ≥ ∑_{n=0}^{N−1} ℓ(x(n), u(n)) + W(x(N)) = J_N(x(0), u).

Since this prolonged trajectory is again feasible, we get V_N(x) ≤ V_{N−1}(x) and we obtain stability just as for the equilibrium constraint

Lars Grüne, Model Predictive Control, p. 28

Regional constraint and terminal cost: discussion

Compared to the equilibrium constraint, the regional constraint
- yields easier online optimization problems
- yields larger feasible sets
- does not need exact controllability to x*

But:
- a large feasible set still needs a large optimization horizon N
- additional analytical effort for computing W
- hardly ever used in industrial practice

In Part 2 we will see how stability can be proved without stabilizing terminal constraints

Lars Grüne, Model Predictive Control, p. 29

(4) Inverse optimality and suboptimality

Performance of F_N

Once stability can be guaranteed, we can investigate the performance of the MPC feedback law F_N

Performance of a feedback F : X → U is measured via the infinite horizon functional
J∞(x_F(0), F) := ∑_{n=0}^{∞} ℓ(x_F(n), F(x_F(n)))

Recall: F = F∞ is optimal: J∞(x_{F∞}(0), F∞) = V∞(x_{F∞}(0))

In the literature, two different concepts can be found:
- Inverse optimality: show that F_N is optimal for an altered running cost ℓ̃ ≠ ℓ
- Suboptimality: derive upper bounds for J∞(x_{F_N}(0), F_N)

Lars Grüne, Model Predictive Control, p. 31

Inverse optimality
Theorem: [Poubelle/Bitmead/Gevers 88, Magni/Sepulchre 97] F_N is optimal for the problem
minimize J̃∞(x(0), u) = ∑_{n=0}^{∞} ℓ̃(x(n), u(n))
with ℓ̃(x, u) := ℓ(x, u) + V_{N−1}(f(x, u)) − V_N(f(x, u))

Idea of proof: by the dynamic programming principle:
V_N(x) = inf_{u∈U} { ℓ(x, u) + V_{N−1}(f(x, u)) }
       = inf_{u∈U} { ℓ̃(x, u) + V_N(f(x, u)) }

Hence V_N satisfies the Bellman equation for the infinite horizon problem with running cost ℓ̃, implying
J̃∞(x_{F_N}(0), F_N) = V_N(x_{F_N}(0))

Lars Grüne, Model Predictive Control, p. 32

Inverse optimality
Inverse optimality
- shows that F_N is an infinite horizon optimal feedback law
- thus implies several good properties of F_N, like, e.g., some inherent robustness against perturbations

But
- the running cost ℓ̃(x, u) := ℓ(x, u) + V_{N−1}(f(x, u)) − V_N(f(x, u)) is unknown and difficult to compute
- knowing that F_N is optimal for J̃∞(x_{F_N}(0), F_N) does not give us a simple way to estimate J∞(x_{F_N}(0), F_N)

Lars Grüne, Model Predictive Control, p. 33

Suboptimality
Theorem [???]: For both stabilizing terminal constraints the estimate
J∞(x_{F_N}(0), F_N) ≤ V_N(x_{F_N}(0))
holds.

Sketch of proof: both constraints imply V_N ≤ V_{N−1}. Hence
ℓ(x_{F_N}(n), F_N(x_{F_N}(n))) = V_N(x_{F_N}(n)) − V_{N−1}(x_{F_N}(n + 1))
                              ≤ V_N(x_{F_N}(n)) − V_N(x_{F_N}(n + 1))

Summing over n = 0, …, k yields
∑_{n=0}^{k} ℓ(x_{F_N}(n), F_N(x_{F_N}(n))) ≤ V_N(x_{F_N}(0)) − V_N(x_{F_N}(k + 1)) ≤ V_N(x_{F_N}(0))

Now letting k → ∞ yields the assertion.

Lars Grüne, Model Predictive Control, p. 34

Suboptimality
Suboptimality gives us an easy to evaluate bound
J∞(x_{F_N}(0), F_N) ≤ V_N(x_{F_N}(0))
for the infinite horizon performance of F_N.

However, due to the terminal constraints, V_N(x) can be much larger than the optimal upper bound V∞(x).

In Part 2 we will see that MPC without stabilizing terminal constraints allows for suboptimality estimates in terms of V∞(x).

Lars Grüne, Model Predictive Control, p. 35

Summary of Part 1
- MPC is an online optimal control based method for computing stabilizing feedback laws
- MPC computes the feedback law by iteratively solving finite horizon optimal control problems using the current state x as initial value
- the feedback value F_N(x) is the first element of the resulting optimal control sequence
- suitable terminal constraints ensure stability with V_N as Lyapunov function
- F_N is infinite horizon optimal for a suitably altered running cost
- the infinite horizon functional along the F_N-controlled trajectory is bounded by V_N

Lars Grüne, Model Predictive Control, p. 36

Part 2: (5) Stability and suboptimality without stabilizing constraints

MPC without stabilizing terminal constraints

We return to the basic MPC formulation
minimize J_N(x(0), u) = ∑_{n=0}^{N−1} ℓ(x(n), u(n)),   x(0) = x
without any stabilizing terminal constraints

How can we prove stability for this setting?

Lars Grüne, Model Predictive Control, p. 38

MPC without stabilizing terminal constraints


Recall: we need to prove
V_N(f(x, F_N(x))) ≤ V_N(x) − ℓ̃(x, F_N(x))
for some ℓ̃ with ℓ̃(x, F_N(x)) > 0 for x ≠ x*

Since by dynamic programming we have
ℓ̃(x, F_N(x)) = ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))),
this is equivalent to proving
ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))) > 0 for x ≠ x*

Lars Grüne, Model Predictive Control, p. 39

MPC without stabilizing terminal constraints


Theorem: [Alamir/Bornard 95, Jadbabaie/Hauser 05, Grimm et al. 05] Under suitable conditions, MPC without terminal constraints stabilizes the system for sufficiently large optimization horizon N.

Idea of proof: use the convergence lim_{N→∞} V_N = V∞ to prove
ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))) > 0 for x ≠ x*

The crucial condition for sufficiently uniform convergence is
Exponential controllability through ℓ: for real numbers C > 0, σ ∈ (0, 1) and each x ∈ X there exists u(·) with
ℓ(x(n), u(n)) ≤ C σⁿ ℓ*(x(0))
with ℓ*(x) = min_u ℓ(x, u)

Lars Grüne, Model Predictive Control, p. 40
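
As a quick illustration of this condition (our example, not from the slides): if the state can be steered exponentially to x* = 0, say ‖x_u(n)‖ ≤ C₀σ₀ⁿ‖x(0)‖ with controls bounded by ‖u(n)‖ ≤ K‖x_u(n)‖, and ℓ(x, u) = ‖x‖² + λ‖u‖², then

\[
\ell(x_u(n),u(n)) = \|x_u(n)\|^2 + \lambda\|u(n)\|^2
\le (1+\lambda K^2)\,C_0^2\,(\sigma_0^2)^n\,\|x(0)\|^2
= C\,\sigma^n\,\ell^*(x(0)),
\]

with $C := (1+\lambda K^2)C_0^2$ and $\sigma := \sigma_0^2$, since $\ell^*(x) = \min_u \ell(x,u) = \|x\|^2$.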

MPC without stabilizing terminal constraints


Theorem: [Alamir/Bornard 95, Jadbabaie/Hauser 05, Grimm et al. 05] Under suitable conditions, MPC without terminal constraints stabilizes the system for sufficiently large optimization horizon N.

Question: how large is "sufficiently large" for N?
- the first two references are non-constructive in terms of N
- [Grimm et al.] leads to the following estimate: let
  γ := ∑_{n=0}^{∞} C σⁿ = C/(1 − σ)
  for C, σ from ℓ(x(n), u(n)) ≤ C σⁿ ℓ*(x(0)). Then N = O(γ²)
  (the constants in O can be computed, if desired)

Lars Grüne, Model Predictive Control, p. 41

MPC without stabilizing terminal constraints


A better estimate can be obtained if
ℓ(x, F_N(x)) + V_{N−1}(f(x, F_N(x))) − V_N(f(x, F_N(x))) > 0
is established by directly estimating |V_N − V_{N−1}| instead of using the detour
|V_N − V_{N−1}| ≤ |V_N − V∞| + |V_{N−1} − V∞|

This way, in [Grüne/Rantzer 08] the estimate N = O(γ log γ) is shown, again for
γ := ∑_{n=0}^{∞} C σⁿ = C/(1 − σ)
with C, σ from ℓ(x(n), u(n)) ≤ C σⁿ ℓ*(x(0))

Lars Grüne, Model Predictive Control, p. 42

MPC without stabilizing terminal constraints


All these estimates rely on the parameter
γ := ∑_{n=0}^{∞} C σⁿ = C/(1 − σ)
with C, σ from ℓ(x(n), u(n)) ≤ C σⁿ ℓ*(x(0))

This is because of the inequality V_N(x) ≤ V∞(x) ≤ γ ℓ*(x), since these estimates rely on bounds on the value functions

Main drawback of these approaches: we cannot distinguish between the influence of C and σ
(or other parameters in alternative controllability conditions)

Lars Grüne, Model Predictive Control, p. 43

Relaxed Lyapunov inequality


We want V_N(f(x, F_N(x))) ≤ V_N(x) − ℓ̃(x, F_N(x))

Ansatz: ℓ̃ = α ℓ for α ∈ (0, 1]

Theorem [Grüne/Rantzer 08]: If there exists α ∈ (0, 1] such that the relaxed Lyapunov inequality
V_N(f(x, F_N(x))) ≤ V_N(x) − α ℓ(x, F_N(x))
holds, then asymptotic stability follows (with V_N as Lyapunov function) and we get the suboptimality estimate
J∞(x, F_N) ≤ V∞(x)/α

⟹ we get stability and suboptimality at once

Lars Grüne, Model Predictive Control, p. 44
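
The relaxed Lyapunov inequality can also be checked a posteriori along a simulated closed loop: from recorded values V_N(x(n)) and ℓ(x(n), F_N(x(n))) one computes the largest α that satisfies the inequality at every step (a sketch of ours; the numbers below are made up and would come from a simulation in practice):

import numpy as np

def estimated_alpha(VN_values, ell_values):
    """Largest alpha with VN(x(n+1)) <= VN(x(n)) - alpha * ell(x(n), u(n))
    along the recorded closed loop (assumes ell_values > 0)."""
    VN = np.asarray(VN_values, dtype=float)
    ell = np.asarray(ell_values, dtype=float)
    return np.min((VN[:-1] - VN[1:]) / ell[: len(VN) - 1])

# made-up example data: a decaying VN sequence and the corresponding stage costs
VN_values = [10.0, 7.0, 4.9, 3.4, 2.4]
ell_values = [3.5, 2.4, 1.7, 1.2]
alpha = estimated_alpha(VN_values, ell_values)
print(alpha)   # if alpha > 0, the theorem above gives stability and J_infinity <= V_infinity / alpha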

Computing α

Goal: compute α in the relaxed Lyapunov inequality
V_N(f(x, F_N(x))) ≤ V_N(x) − α ℓ(x, F_N(x))

Related approach in the literature: estimate stability and suboptimality from a numerical approximation to V_N [Shamma/Xiong 97, Primbs/Nevistić 01]

Here: compute α analytically from the controllability property ℓ(x(n), u(n)) ≤ C σⁿ ℓ*(x(0)),
via  V_m(x) ≤ C ∑_{k=0}^{m−1} σᵏ ℓ*(x) =: B_m(ℓ*(x))
and using optimality conditions for (pieces of) trajectories

Lars Grüne, Model Predictive Control, p. 45

Computing α

The desired inequality V_N(f(x, F_N(x))) ≤ V_N(x) − α ℓ(x, F_N(x)) is satisfied for all x ∈ X iff
V_N(x*(1)) ≤ V_N(x*(0)) − α ℓ(x*(0), u*(0))
holds for all optimal trajectories x*(n), u*(n) for V_N.

From the controllability property we get:
V_N(x*(1)) ≤ B_N(ℓ*(x*(1)))
V_N(x*(1)) ≤ ℓ(x*(1), u*(1)) + B_{N−1}(ℓ*(x*(2)))
V_N(x*(1)) ≤ ℓ(x*(1), u*(1)) + ℓ(x*(2), u*(2)) + B_{N−2}(ℓ*(x*(3)))
…

Lars Grüne, Model Predictive Control, p. 46

Computing α

⟹ V_N(x*(1)) is bounded by sums over ℓ(x*(n), u*(n))

For sums of these values, in turn, we get bounds from the optimality principle and the controllability property:
∑_{n=0}^{N−1} ℓ(x*(n), u*(n)) = V_N(x*(0)) ≤ B_N(ℓ*(x*(0)))
∑_{n=1}^{N−1} ℓ(x*(n), u*(n)) = V_{N−1}(x*(1)) ≤ B_{N−1}(ℓ*(x*(1)))
∑_{n=2}^{N−1} ℓ(x*(n), u*(n)) = V_{N−2}(x*(2)) ≤ B_{N−2}(ℓ*(x*(2)))
…

Lars Grüne, Model Predictive Control, p. 47

Verifying the relaxed Lyapunov inequality


Let x (n), u (n) optimal trajectory for VN We want Dene Then: VN (x (1)) VN (x (0)) (x (0), u (0)) n := (x (n), u (n)),
N 1

()

:= VN (x (1))

()
n=0

n 0

The inequalities from the last slides translate to


N 1 N k1

n
n=k j n=0

C n k ,
N j1

k = 0, . . . , N 2

(1)

n=1

n + j+1
n=0

C n ,

j = 0, . . . , N 2

(2)

We call 0 , . . . , N 1 , 0 with (1), (2) admissible


Lars Grne, Model Predictive Control, p. 48 u
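Admissibility is straightforward to check numerically. A minimal sketch in Python (my own helper, not part of the lecture material) for the exponential controllability case with parameters C and σ:

```python
def admissible(lam, nu, C, sigma, tol=1e-9):
    """Check inequalities (1) and (2) for given lambda_0..lambda_{N-1} (list lam) and nu."""
    N = len(lam)
    gamma = lambda j: sum(C * sigma**n for n in range(j))   # gamma_j = sum_{n=0}^{j-1} C sigma^n
    nonneg = all(l >= -tol for l in lam) and nu >= -tol
    ok1 = all(sum(lam[k:]) <= gamma(N - k) * lam[k] + tol
              for k in range(N - 1))                        # inequalities (1)
    ok2 = all(nu <= sum(lam[1:j + 1]) + gamma(N - j) * lam[j + 1] + tol
              for j in range(N - 1))                        # inequalities (2)
    return nonneg and ok1 and ok2
```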

Stability and suboptimality condition

Theorem: [Grüne 09] Assume that all admissible λ_0, …, λ_{N−1}, ν ≥ 0 satisfy
    Σ_{n=0}^{N−1} λ_n − ν ≥ α λ_0    for some α > 0.
Then the MPC feedback F_N stabilizes all control systems which satisfy the controllability condition and we get
    J_∞(x, F_N) ≤ V_∞(x)/α.

If, conversely, there exist admissible λ_0, …, λ_{N−1}, ν ≥ 0 with
    Σ_{n=0}^{N−1} λ_n − ν ≤ α λ_0    for some α < 0,
then there exists a control system which satisfies the controllability condition but is not stabilized by F_N.

Verifying the condition by Linear Programming

In order to apply the theorem, we need to check
    Σ_{n=0}^{N−1} λ_n − ν ≥ α λ_0
for all admissible λ_0, …, λ_{N−1}, ν ≥ 0 and some α > 0.

Equivalently:
    minimize α = Σ_{n=0}^{N−1} λ_n − ν
    over all admissible λ_0, …, λ_{N−1}, ν ≥ 0 with λ_0 = 1

⇒ This is a (small!) linear program which is explicitly solvable
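The linear program is small enough to set up directly. A minimal sketch (my own code, not from the lecture), using scipy's linprog and written so that only the values γ_j = B_j(1) enter; for the exponential controllability condition these are γ_j = Σ_{n=0}^{j−1} C σ^n:

```python
import numpy as np
from scipy.optimize import linprog

def alpha_lp_from_gamma(gammas):
    """gammas[j-1] = gamma_j, j = 1, ..., N (partial sums of the bounding sequence)."""
    N = len(gammas)
    gamma = lambda j: gammas[j - 1]
    c = np.r_[np.ones(N), -1.0]                    # objective: sum_n lambda_n - nu
    A_ub, b_ub = [], []
    for k in range(N - 1):                         # (1): sum_{n=k}^{N-1} lambda_n <= gamma_{N-k} lambda_k
        row = np.zeros(N + 1)
        row[k:N] = 1.0
        row[k] -= gamma(N - k)
        A_ub.append(row); b_ub.append(0.0)
    for j in range(N - 1):                         # (2): nu <= sum_{n=1}^{j} lambda_n + gamma_{N-j} lambda_{j+1}
        row = np.zeros(N + 1)
        row[1:j + 1] = -1.0
        row[j + 1] -= gamma(N - j)
        row[-1] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    A_eq = np.zeros((1, N + 1)); A_eq[0, 0] = 1.0  # normalization lambda_0 = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (N + 1), method="highs")
    return res.fun                                 # optimal value = alpha

def alpha_lp(N, C, sigma):
    # exponential controllability: gamma_j = sum_{n=0}^{j-1} C sigma^n
    return alpha_lp_from_gamma([sum(C * sigma**n for n in range(j)) for j in range(1, N + 1)])

print(alpha_lp(11, C=6.0, sigma=0.0))   # positive, cf. the C = 6, sigma = 0 example below
```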

Computation of stability and optimality bounds

We thus obtain the explicit formula [Grüne/Pannek/Worthmann 09]

    α = 1 − ( (γ_N − 1) ∏_{i=2}^{N} (γ_i − 1) ) / ( ∏_{i=2}^{N} γ_i − ∏_{i=2}^{N} (γ_i − 1) )
    with γ_i = Σ_{k=0}^{i−1} C σ^k

depending on the optimization horizon N and the parameters C, σ in
    ℓ(x(n), u(n)) ≤ C σ^n ℓ*(x(0))

In particular, for given α_0 we can compute the minimal horizon N with α > α_0.

We illustrate this for α_0 = 0, i.e., for the minimal stabilizing horizon.
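A sketch (my own) of this closed form expression together with the resulting search for the minimal stabilizing horizon, i.e. the smallest N with α > α_0:

```python
from math import prod

def gamma(i, C, sigma):
    return sum(C * sigma**k for k in range(i))        # gamma_i = sum_{k=0}^{i-1} C sigma^k

def alpha_formula(N, C, sigma):
    g = [gamma(i, C, sigma) for i in range(2, N + 1)]
    return 1.0 - (g[-1] - 1) * prod(gi - 1 for gi in g) / (prod(g) - prod(gi - 1 for gi in g))

def minimal_horizon(C, sigma, alpha0=0.0, N_max=100):
    # smallest N with alpha > alpha0 (alpha0 = 0: minimal stabilizing horizon)
    for N in range(2, N_max + 1):
        if alpha_formula(N, C, sigma) > alpha0:
            return N
    return None

# reproduces the horizons shown on the next slide (all with sum_n C sigma^n = 6):
for C, sigma in [(6, 0), (12/5, 3/5), (3/2, 3/4), (6/5, 4/5)]:
    print((C, sigma), minimal_horizon(C, sigma))      # N = 11, 10, 7, 4
```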

Horizon depending on C and σ

Horizons N for different C, σ with Σ_{n=0}^{∞} C σ^n = 6:

[Figure: the four bounding sequences C σ^n]
    C = 6,    σ = 0    ⇒ N = 11
    C = 12/5, σ = 3/5  ⇒ N = 10
    C = 3/2,  σ = 3/4  ⇒ N = 7
    C = 6/5,  σ = 4/5  ⇒ N = 4

Stability chart for C and σ

[Figure: stability chart for C and σ; figure by Harald Voit]

Conclusion: for short optimization horizon N it is
    more important: small C (small overshoot)
    less important: small σ (fast decay)

Other types of controllability condition

The procedure is easily extended to the more general controllability condition: for a sequence (c_n)_{n∈ℕ_0} with c_n → 0 and every x ∈ X there exists u(·) with
    ℓ(x(n), u(n)) ≤ c_n ℓ*(x(0)),    n = 0, 1, 2, …
with ℓ*(x) = min_u ℓ(x, u) (as before)
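In the LP sketch given earlier only the values γ_j change under this more general condition: they become the partial sums of the sequence (c_n). A short usage sketch, with a made-up finite time sequence (not taken from the slides) and assuming alpha_lp_from_gamma from that sketch is in scope:

```python
# gamma_j = sum_{n=0}^{j-1} c_n for a general bounding sequence (c_n);
# the sequence below is a hypothetical finite time example with sum_n c_n = 6
c = [4.0, 2.0] + [0.0] * 8
gammas = [sum(c[:j]) for j in range(1, len(c) + 1)]
print(alpha_lp_from_gamma(gammas))   # assumes alpha_lp_from_gamma from the LP sketch above
```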

Horizons for finite time controllability

Horizons N for different finite time sequences c_n with Σ_n c_n = 6:

[Figure: the four finite time sequences c_n]   N = 11,  N = 10,  N = 7,  N = 6

⇒ for obtaining short horizons, smaller (and later) overshoot is more important than fast controllability
⇒ we can use this for the design of the running cost ℓ

(6) Examples for the design of MPC schemes

Design of good MPC running costs ℓ

We want small overshoot C in the estimate
    ℓ(x(n), u(n)) ≤ C σ^n ℓ*(x(0))
or, more generally, small values c_n in
    ℓ(x(n), u(n)) ≤ c_n ℓ*(x(0))

The trajectories x(n) are given, but we can use the running cost ℓ as design parameter

The car-and-mountains example reloaded

[Figure: car-and-mountains example]

MPC with ℓ(x, u) = ‖x − x*‖² + |u|² and u_max = 0.2:
asymptotic stability for N = 11 but not for N ≤ 10

Reason: the detour around the mountains causes a large overshoot C

Remedy: put a larger weight on x_2:
    ℓ(x, u) = (x_1 − x_1*)² + 5 (x_2 − x_2*)² + |u|²    ⇒ asymptotic stability for N = 2
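For illustration, the two running costs of this slide written out in code; a sketch with my own placeholder names (x_star for the equilibrium x*, scalar control u), the weights are the ones from the slide:

```python
import numpy as np

def ell_unweighted(x, u, x_star):
    # l(x, u) = ||x - x*||^2 + |u|^2
    return float(np.sum((x - x_star) ** 2) + u ** 2)

def ell_weighted(x, u, x_star):
    # larger weight on the x2 deviation penalizes the detour and reduces the overshoot C
    return float((x[0] - x_star[0]) ** 2 + 5.0 * (x[1] - x_star[1]) ** 2 + u ** 2)
```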

Example: pendulum on a cart

[Figure: pendulum on a cart, m = l = 1]

x_1 = angle,  x_2 = angular velocity,  x_3 = cart position,  x_4 = cart velocity,  u = cart acceleration

control system:
    ẋ_1 = x_2
    ẋ_2 = −g sin(x_1) − k x_2 − u cos(x_1)
    ẋ_3 = x_4
    ẋ_4 = u
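For the MPC scheme the continuous time model is used as a sampled data system x(n+1) = f(x(n), u(n)). A sketch of such a discretization (my own choices: an RK4 step, placeholder numerical values for g and k, and the sampling time T = 0.15 used on the next slide):

```python
import numpy as np

g, k = 9.81, 0.1      # placeholder values for gravity and the friction constant

def rhs(x, u):
    # continuous time right hand side of the pendulum-on-a-cart model above
    x1, x2, x3, x4 = x
    return np.array([x2,
                     -g * np.sin(x1) - k * x2 - u * np.cos(x1),
                     x4,
                     u])

def f(x, u, T=0.15):
    # one RK4 step over the sampling interval: x(n+1) = f(x(n), u(n))
    s1 = rhs(x, u)
    s2 = rhs(x + T / 2 * s1, u)
    s3 = rhs(x + T / 2 * s2, u)
    s4 = rhs(x + T * s3, u)
    return x + T / 6 * (s1 + 2 * s2 + 2 * s3 + s4)
```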

Example: Inverted Pendulum

Reducing the overshoot for the swingup of the pendulum on a cart:
    ẋ_1 = x_2,   ẋ_2 = −g sin(x_1) − k x_2 − u cos(x_1),   ẋ_3 = x_4,   ẋ_4 = u

Let ℓ*(x) = ℓ_1(x_1, x_2) + x_3² + x_4² with

    ℓ_1(x_1, x_2) = x_1² + x_2²                                                        ⇒ N = 15
    ℓ_1(x_1, x_2) = 4(1 − cos x_1) + x_2²                                              ⇒ N = 10
    ℓ_1(x_1, x_2) = (sin x_1, x_2) P (sin x_1, x_2)^T + 2((1 − cos x_1)(1 − cos x_2)²)²  ⇒ N = 4 (swingup only)

(sampling time T = 0.15)

[Figure: typical swingup trajectory, x_1 and x_2 components]

A PDE example

Our results are also applicable to infinite dimensional systems.

We illustrate this with the 1d controlled PDE
    y_t = −y_x + ν y_xx + μ y (y + 1)(1 − y) + u
with domain Ω = [0, 1], solution y = y(t, x), boundary conditions y(t, 0) = y(t, 1) = 0 and parameters ν = 0.1 and μ = 10

The uncontrolled PDE

[Figure: snapshots of the uncontrolled solution (u ≡ 0) for t = 0, 0.025, …, 1]

[Figure: all equilibrium solutions]

MPC for the PDE example

    y_t = −y_x + ν y_xx + μ y (y + 1)(1 − y) + u

Goal: stabilize the sampled data solution y(n, ·) at y* ≡ 0

Usual approach: quadratic L² cost
    ℓ(y(n, ·), u(n, ·)) = ‖y(n, ·)‖²_{L²} + λ ‖u(n, ·)‖²_{L²}

For y ≈ 0 the control u must compensate for y_x   ⇒   u ≈ y_x

controllability condition ℓ(y(n, ·), u(n, ·)) ≤ C σ^n ℓ*(y(0, ·))
    ⇒  ‖y(n, ·)‖²_{L²} + λ ‖u(n, ·)‖²_{L²} ≤ C σ^n ‖y(0, ·)‖²_{L²}
    ⇒  ‖y(n, ·)‖²_{L²} + λ ‖y_x(n, ·)‖²_{L²} ≤ C σ^n ‖y(0, ·)‖²_{L²}

for ‖y_x‖_{L²} ≫ ‖y‖_{L²} this can only hold if C ≫ 0

Conclusion: because of
    ‖y(n, ·)‖²_{L²} + λ ‖y_x(n, ·)‖²_{L²} ≤ C σ^n ‖y(0, ·)‖²_{L²}
the controllability condition may only hold for very large C

Remedy: use H¹ cost
    ℓ(y(n, ·), u(n, ·)) = ‖y(n, ·)‖²_{L²} + ‖y_x(n, ·)‖²_{L²} + λ ‖u(n, ·)‖²_{L²}
    (note: ‖y(n, ·)‖²_{L²} + ‖y_x(n, ·)‖²_{L²} = ‖y(n, ·)‖²_{H¹})

Then an analogous computation yields
    ‖y(n, ·)‖²_{L²} + (1 + λ) ‖y_x(n, ·)‖²_{L²} ≤ C σ^n ( ‖y(0, ·)‖²_{L²} + ‖y_x(0, ·)‖²_{L²} )
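To make the comparison concrete, a rough sketch of the two running costs after a standard finite difference space discretization (all discretization choices here, grid size, difference quotients and the simple quadrature, are my own and not from the lecture):

```python
import numpy as np

M = 51                                  # grid points on Omega = [0, 1]
xs = np.linspace(0.0, 1.0, M)
h = xs[1] - xs[0]
nu, mu, lam = 0.1, 10.0, 0.1

def pde_rhs(y, u):
    # y_t = -y_x + nu*y_xx + mu*y*(y+1)*(1-y) + u with y(t,0) = y(t,1) = 0
    yx = np.gradient(y, h)
    yxx = np.zeros_like(y)
    yxx[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2
    f = -yx + nu * yxx + mu * y * (y + 1.0) * (1.0 - y) + u
    f[0] = f[-1] = 0.0                  # keep the homogeneous Dirichlet boundary values
    return f

def ell_L2(y, u):
    # quadratic L2 cost: ||y||_{L2}^2 + lam * ||u||_{L2}^2 (simple Riemann sum quadrature)
    return h * np.sum(y**2) + lam * h * np.sum(u**2)

def ell_H1(y, u):
    # H1 cost: ||y||_{L2}^2 + ||y_x||_{L2}^2 + lam * ||u||_{L2}^2
    yx = np.gradient(y, h)
    return h * np.sum(y**2) + h * np.sum(yx**2) + lam * h * np.sum(u**2)
```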

MPC with L² vs. H¹ cost

[Figure: solution snapshots for t = 0, 0.025, …, 0.3 comparing N = 3 (L²), N = 11 (L²) and N = 3 (H¹)]

MPC with L² and H¹ cost, λ = 0.1, sampling time T = 0.025

Boundary Control

Now we change our PDE from distributed to (Dirichlet-) boundary control, i.e.
    y_t = −y_x + ν y_xx + μ y (y + 1)(1 − y)
with domain Ω = [0, 1], solution y = y(t, x), boundary conditions y(t, 0) = u_0(t), y(t, 1) = u_1(t) and parameters ν = 0.1 and μ = 10

⇒ with boundary control, stability can only be achieved via large gradients in the transient phase
⇒ L² should perform better than H¹

Boundary control, L² vs. H¹, N = 20

[Figure: solution snapshots for t = 0, 0.025, …, 0.85 comparing horizon N = 20 with L² and with H¹ cost]

Boundary control, λ = 0.001, sampling time T = 0.025

Boundary control, L², N = 10, 12, 20

[Figure: solution snapshots for t = 0, 0.025, …, 1.4 comparing L² cost with horizons N = 10, 12 and 20]

Boundary control, λ = 0.001, sampling time T = 0.025

(PDE computations: N. Altmüller, A. Grötsch, J. Pannek, S. Trenz, K. Worthmann)

(7) Varying control horizon

[Grüne/Pannek/Worthmann 09]

Packet loss

[Diagram: Plant and MPC Controller connected over a Network; the plant sends the state x(n), the controller returns the feedback value F_N(x(n))]

Idea: send several values of the optimal open loop control sequence (instead of just the first value)
Idea: use these values until the next values arrive

Schematic illustration of the idea

[Figure: step by step construction of the closed loop over n = 0, …, 6 with states x_0, x_1, …, x_6]
black = predictions (open loop optimization)
red = MPC closed loop

Rigorous formulation

Denote the successful transmission times by n_i, i = 1, 2, …

Define a buffer length M ∈ ℕ, M ≤ N − 1

At each transmission time n_i, the plant receives and buffers the feedback control sequence
    F_N(x_{n_i}, k) = u*(k),    k = 0, 1, 2, …, M − 1,
and implements
    F_N(x_{n_i}, 0), F_N(x_{n_i}, 1), …, F_N(x_{n_i}, m_i − 1)
on the control horizon m_i = n_{i+1} − n_i ≤ M, i.e., until the next sequence arrives

Note: m_i is unknown at time n_i
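A sketch of the resulting buffered closed loop (my own code with hypothetical helper names; solve_ocp stands for the finite horizon optimization and f for the sampled data system):

```python
import random

def mpc_with_packet_loss(f, solve_ocp, x0, N, M, steps, p_loss=0.3):
    # solve_ocp(x, N) is a placeholder returning the optimal open loop control sequence;
    # only its first M values are transmitted and buffered by the plant
    x = x0
    buffer, k = solve_ocp(x, N)[:M], 0          # transmission at time 0 assumed successful
    traj = [x]
    for n in range(1, steps + 1):
        x = f(x, buffer[k])                     # apply the k-th buffered value F_N(x_{n_i}, k)
        k += 1
        # the next packet arrives at random; it must arrive at the latest when the buffer
        # is exhausted, which models the assumption m_i <= M
        if random.random() > p_loss or k >= M:
            buffer, k = solve_ocp(x, N)[:M], 0
        traj.append(x)
    return traj
```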

Stability theorem

Theorem: If there exists α ∈ (0, 1] such that the relaxed Lyapunov inequality
    V_N(x(m, x_0, u*)) ≤ V_N(x_0) − α Σ_{k=0}^{m−1} ℓ(x(k, x_0, u*), u*(k))
holds for all m = 1, …, M, then asymptotic stability follows for the MPC closed loop with arbitrary transmission times n_i, i ∈ ℕ, satisfying m_i = n_{i+1} − n_i ≤ M.

Furthermore, V_N is a Lyapunov function at the transmission times n_i and we get the suboptimality estimate
    J_∞(x, F_N) ≤ V_∞(x)/α

Note: The stability for arbitrary but fixed m carries over to time varying m_i because V_N is a common Lyapunov function

Computation of (N, m)
We want = (N, m) satisfying
m1

VN (x(m, x0 , u )) VN (x)
k=0

(x(m, x0 , u ), u (m)),

for all m = 1, . . . , M .

Lars Grne, Model Predictive Control, p. 74 u

Computation of α(N, m)

We want α = α(N, m) satisfying

VN(x(m, x0, u*)) ≤ VN(x0) − α ∑_{k=0}^{m−1} ℓ(x(k, x0, u*), u*(k))

for all m = 1, . . . , M.

Again, for each m this can be computed via an explicitly solvable linear program, which yields

α(N, m) = 1 −
  [ ∏_{i=m+1}^{N} (γi − 1) · ∏_{i=N−m+1}^{N} (γi − 1) ]
  / [ ( ∏_{i=m+1}^{N} γi − ∏_{i=m+1}^{N} (γi − 1) ) · ( ∏_{i=N−m+1}^{N} γi − ∏_{i=N−m+1}^{N} (γi − 1) ) ]

with γi = ∑_{k=0}^{i−1} C σ^k

Lars Grüne, Model Predictive Control, p. 74
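For later reference, a small Python transcription of the explicit formula above; alpha and gamma are plain helper functions written here for illustration, not code from the lecture.

```python
from math import prod

def gamma(i, C, sigma):
    """gamma_i = sum_{k=0}^{i-1} C * sigma**k from C,sigma-exponential controllability."""
    return sum(C * sigma**k for k in range(i))

def alpha(N, m, C, sigma):
    """alpha(N, m) via the explicit formula stated above."""
    g = [gamma(i, C, sigma) for i in range(N + 1)]        # g[i] = gamma_i
    p1 = prod(g[i] - 1 for i in range(m + 1, N + 1))
    p2 = prod(g[i] - 1 for i in range(N - m + 1, N + 1))
    q1 = prod(g[i] for i in range(m + 1, N + 1))
    q2 = prod(g[i] for i in range(N - m + 1, N + 1))
    return 1 - (p1 * p2) / ((q1 - p1) * (q2 - p2))
```

For m = 1 this reduces to the classical formula for α(N) used earlier in the lecture.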

Example

[Plot: α(N, m) for C = 2, σ = 0.68, N = 8, m = 1, . . . , 7]

This symmetry and monotonicity are not a coincidence

Lars Grüne, Model Predictive Control, p. 75
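Reusing the alpha sketch from above, the bar chart can be reproduced (up to plotting) by tabulating the values for the constants given in the caption:

```python
# C = 2, sigma = 0.68, N = 8, m = 1, ..., 7 as in the plot above
for m in range(1, 8):
    print(m, round(alpha(8, m, C=2.0, sigma=0.68), 4))
```

The printed values should exhibit the symmetry α(8, m) = α(8, 8 − m) around m = 4 that is visible in the figure.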

Property of α(N, m)

Theorem: The values α(N, m) satisfy

α(N, m) = α(N, N − m),  m = 1, . . . , N − 1

and

α(N, m) ≤ α(N, m + 1),  m = 1, . . . , ⌊N/2⌋

Corollary: If N is such that all C, σ-exponentially controllable systems are stabilized with classical MPC (m = 1), then they are stabilized for arbitrary varying control horizons mi ∈ {1, . . . , N − 1}

Lars Grüne, Model Predictive Control, p. 76

Conservatism of worst case analysis

[Plot: α(N, m) for C = 2, σ = 0.68, N = 8, m = 1, . . . , 7, as on the previous slide]

The symmetry states that the worst case system for m behaves exactly as well as the worst case system for N − m.

However, in general these worst case systems do not coincide.

How conservative is this worst case approach?

Lars Grüne, Model Predictive Control, p. 77

Example: linearized inverted pendulum

ẋ = A x + B u  with

A = [ 0 1 0 0 ; g −k 0 0 ; 0 0 0 1 ; 0 0 0 0 ],  B = (0, 1, 0, 1)^T,  x0 = (0, 0, −2, 0)^T

sampling time T = 0.5,  ℓ(x, u) = 2‖x‖1 + 4‖u‖1,  N = 11

[Plot: x3 component of the trajectory (cart position) for different control horizons m = 1, 2, 4, 9, 10]

[Plots: α over m = 1, . . . , 10, after 1 MPC step and at time n = 20]

Symmetry is not present in this example

Lars Grüne, Model Predictive Control, p. 78
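The α values shown in the bar charts can be estimated from closed-loop simulation data by evaluating the relaxed Lyapunov inequality along the recorded trajectory. The following sketch assumes that the MPC solver exposes the optimal value function VN_of(x) and the running cost ell(x, u); both names are placeholders.

```python
def measured_alpha(VN_of, ell, xs, us, m):
    """Largest alpha for which
       VN(x(n+m)) <= VN(x(n)) - alpha * sum_{k=0}^{m-1} ell(x(n+k), u(n+k))
    holds along the recorded closed loop xs (states) and us (controls)."""
    ratios = []
    for n in range(0, len(us) - m + 1, m):
        decrease = VN_of(xs[n]) - VN_of(xs[n + m])
        cost = sum(ell(xs[n + k], us[n + k]) for k in range(m))
        if cost > 0:
            ratios.append(decrease / cost)
    return min(ratios)
```

Restricting the ratio to the first interval or to the interval around n = 20 would correspond to the two bar charts above.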

Monte Carlo simulation

Alternative to the worst case approach: probabilistic analysis. We generate random trajectories satisfying the LP-optimality conditions derived from the C, σ-exponential controllability condition and compute α by Monte Carlo simulation.

[Plot: Monte Carlo, 1000 samples; α over m = 1, . . . , 9]

These results are qualitatively similar to the numerical simulations

Lars Grüne, Model Predictive Control, p. 79
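One possible reading of this procedure, as a rough sketch under simplifying assumptions (the sampling distribution below is an arbitrary choice, not necessarily the one used for the plot, and N, m in the example call are illustrative): draw stage-cost sequences that satisfy the tail conditions implied by C, σ-exponential controllability, bound VN at the m-step state by the same controllability argument, and record the resulting α.

```python
import random

def gamma(i, C, sigma):                     # gamma_i as before
    return sum(C * sigma**k for k in range(i))

def sample_alpha(N, m, C, sigma, rng=random):
    # stage costs lambda_0..lambda_{N-1} of a hypothetical optimal trajectory,
    # sampled backwards so that sum_{n=k}^{N-1} lambda_n <= gamma_{N-k} * lambda_k
    lam = [0.0] * N
    lam[N - 1] = rng.random()
    for k in range(N - 2, -1, -1):
        tail = sum(lam[k + 1:])
        lam[k] = tail / (gamma(N - k, C, sigma) - 1.0) + rng.random()
    # largest value of nu = VN(x(m)) consistent with the controllability bound
    nu = min(sum(lam[m:m + j]) + gamma(N - j, C, sigma) * lam[m + j]
             for j in range(N - m))
    return (sum(lam) - nu) / sum(lam[:m])

# illustrative constants (C, sigma from the earlier example); 1000 samples
samples = [sample_alpha(N=8, m=3, C=2.0, sigma=0.68) for _ in range(1000)]
print(min(samples), sum(samples) / len(samples))
```

The distribution of the sampled α values can then be compared with the worst-case value α(N, m) from the explicit formula.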

Summary of Part 2

Stability of unconstrained MPC problems can be ensured using exponential controllability conditions

First proofs used convergence VN → V∞ in order to establish stability

Tighter and more useful estimates can be obtained by using optimality conditions for (pieces of) trajectories

The conditions lead to an explicitly solvable linear program

The knowledge obtained from this analysis can be used to design good MPC schemes by choosing suitable running costs

The analysis can be extended to variable control horizons, which is useful for networked control systems

Lars Grüne, Model Predictive Control, p. 80
