
Model Predictive Control

Alberto Bemporad

imt.lu/ab

Course page: http://cse.lab.imtlucca.it/~bemporad/mpc_course.html


Course structure
• Basic concepts of model predictive control (MPC) and linear MPC

• Linear time-varying and nonlinear MPC

• Quadratic programming (QP) and explicit MPC

• Hybrid MPC

• Stochastic MPC

• Learning-based MPC

• Numerical examples:

– MPC Toolbox for MATLAB (linear/explicit/parameter-varying MPC)

– Hybrid Toolbox for MATLAB (explicit MPC, hybrid systems)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 2/132


Course structure

• For additional background:

– Linear Systems:

http://cse.lab.imtlucca.it/~bemporad/intro_control_course.html

– Numerical Optimization:

http://cse.lab.imtlucca.it/~bemporad/optimization_course.html

– Machine Learning:

http://cse.lab.imtlucca.it/~bemporad/ml.html



Model Predictive Control: Basic Concepts
Model Predictive Control (MPC)

[Block diagram: a model-based optimizer (optimization algorithm + prediction model) computes the inputs u(t) of the process from the set-points r(t) and the output measurements y(t)]

Use a (simplified) dynamical model of the process to predict its likely future evolution and choose a good control action


Model Predictive Control
• MPC problem: find the best control sequence over a future horizon of N steps

      min       ∑_{k=0}^{N−1} ∥yk − r(t)∥²₂ + ρ ∥uk − ur(t)∥²₂
  u0,...,uN−1

  s.t.  xk+1 = f(xk, uk)       (prediction model)
        yk = g(xk)
        umin ≤ uk ≤ umax       (constraints)
        ymin ≤ yk ≤ ymax
        x0 = x(t)              (state feedback)

  a numerical optimization problem

[Plot: past/future at time t, with reference r(t), predicted outputs yk, and manipulated inputs uk over the horizon t, ..., t+N; at the next step the window shifts to t+1, ..., t+N+1]

1. estimate the current state x(t)
2. optimize with respect to {u0, ..., uN−1}
3. only apply the optimal u0 as input u(t)

Repeat at all time steps t


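The three-step receding-horizon loop above can be sketched in a few lines of Python. This is an illustrative toy, not the course toolboxes: a scalar plant x⁺ = a·x + b·u with horizon N = 1, so the unconstrained optimizer has a closed form; all numeric values are arbitrary.

```python
def mpc_step(x, a=1.2, b=1.0, r=0.1, p=1.0):
    # Horizon N = 1: J(u) = r*u**2 + p*(a*x + b*u)**2 (plus a term
    # constant in u). Setting dJ/du = 0 gives the optimal move.
    return -(p * a * b) / (r + p * b * b) * x

x = 5.0                        # step 1: current (estimated) state
trajectory = [x]
for t in range(20):
    u = mpc_step(x)            # step 2: optimize over the horizon
    x = 1.2 * x + 1.0 * u      # step 3: apply only u0; plant evolves
    trajectory.append(x)       # ... and repeat at the next time step

print(abs(trajectory[-1]) < 1e-6)  # True: the open-loop-unstable state is regulated to ~0
```

Note that the plant here matches the prediction model exactly; in practice the loop is what gives MPC robustness to model mismatch.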
Daily-life examples of MPC
• MPC is like playing chess !

• Online (event-based) re-planning used in GPS navigation

• You use MPC too when you drive !



MPC in industry
• The MPC concept dates back to the 1960s
  (Propoi, 1963) (Rafal, Stevens, AIChE Journal, 1968)

• MPC has been used in the process industries since the 1980s
  (Qin, Badgwell, 2003) (Bauer, Craig, 2008)

Today APC (advanced process control) = MPC


MPC in industry
(Qin, Badgwell, 2003)

• Industrial survey of MPC applications conducted in mid-1999 (estimates based on vendor survey):

  Area                Aspen Tech.   Honeywell Hi-Spec   Adersa       Invensys   SGS    Total
  Refining               1200            480              280           25       —      1985
  Petrochemicals          450             80               —            20       —       550
  Chemicals               100             20                3            21       —       144
  Pulp and paper           18             50               —             —       —        68
  Air & Gas                 —             10               —             —       —        10
  Utility                   —             10               —             4       —        14
  Mining/Metallurgy         8              6                7            16       —        37
  Food Processing           —              —               41            10       —        51
  Polymer                  17              —               —             —       —        17
  Furnaces                  —              —               42             3       —        45
  Aerospace/Defense         —              —               13             —       —        13
  Automotive                —              —                7             —       —         7
  Unclassified             40             40             1045            26      450     1601
  Total                  1833            696             1438           125      450     4542
  First App.          DMC:1985        PCT:1984         IDCOM:1973      1984     1985
                      IDCOM-M:1987    RMPCT:1991       HIECON:1986
                      OPC:1987
  Largest App.        603×283         225×85              —           31×12      —


MPC in industry
(Bauer, Craig, 2008)

• Economic assessment of Advanced Process Control (APC)

[Bar chart: participants of the APC survey by industry (worldwide): petrochemical, chemicals, petroleum refining, minerals processing, oil & gas, power & utilities, pulp & paper, industrial gases, coal products, water & wastewater, other; split into suppliers and users]

[Bar chart: industrial use of APC methods, rated standard / frequently / rarely / never / don't know: model predictive control, constraint control, split-range control, linear programming (LP), nonlinear control algorithms or models, dead-time compensation, statistical process control, neural-network-based control, expert-system-based control, fuzzy logic control, internal model control (IMC), adaptive/self-tuning control, direct synthesis (DS)]
MPC in industry
(Samad, IEEE CS Magazine, 2017)

• Impact of advanced control technologies in industry



MPC in industry
(Samad et al., 2020)

Table 2. The percentage of survey respondents indicating whether a control technology had demonstrated ("Current Impact") or was likely to demonstrate over the next five years ("Future Impact") high impact in practice.

Control Technology                    Current Impact (%High)   Future Impact (%High)


PID control 91% 78%
System Identification 65% 72%
Estimation and filtering 64% 63%
Model-predictive control 62% 85%
Process data analytics 51% 70%
Fault detection and identification 48% 78%
Decentralized and/or coordinated control 29% 54%
Robust control 26% 42%
Intelligent control 24% 59%
Discrete-event systems 24% 39%
Nonlinear control 21% 42%
Adaptive control 18% 44%
Repetitive control 12% 17%
Hybrid dynamical systems 11% 33%
Other advanced control technology 11% 25%
Game theory 5% 17%

"As can be observed, MPC is clearly considered more impactful, and likely to be more impactful, vis-à-vis other control technologies, especially those that can be considered the 'crown jewels' of control theory - robust control, adaptive control, and nonlinear control."


Typical use of MPC
[Control hierarchy diagram:]

• High-level optimizer (strategic planner): reference optimization
  – static optimizer of steady-state (static I/O models)
  – reference trajectory generator
  ↓ reference signal r(t)

• MPC (tactical planner, dynamic optimizer):
  – optimize transient response
  – ensure reference tracking
  – handle constraints
  – coordinate multiple inputs
  ↓ actuator set-points u(t), using measurements y(t)

• Low-level controllers (regulators):
  – fast sampling
  – single loops


MPC of automotive systems
(Bemporad, Bernardini, Borrelli, Cimini, Di Cairano, Esen, Giorgetti, Graf-Plessen, Hrovat, Kolmanovsky,
Levijoki, Livshiz, Long, Pattipati, Ripaccioli, Trimboli, Tseng, Verdejo, Yanakiev, ..., 2001-present)

• Powertrain: engine control, magnetic actuators, robotized gearbox, power management in HEVs, cabin heat control, electrical motors

• Vehicle dynamics: traction control, active steering, semiactive suspensions, autonomous driving

• Industrial partners: Ford Motor Company, Jaguar, DENSO Automotive, Fiat, General Motors

[Figure: quarter-car model showing suspension deflection and tire deflection]

Most automotive OEMs are looking into MPC solutions today


MPC for autonomous driving (R&D)
• Coordinate torque request and steering to achieve safe and comfortable autonomous driving in the presence of other vehicles and obstacles

• MPC combines path planning, path tracking, and obstacle avoidance

• Stochastic prediction models are used to account for uncertainty and the driver's behavior


MPC of gasoline turbocharged engines
• Control throttle, wastegate, intake & exhaust cams to make engine torque track set-points, with maximum efficiency and satisfying constraints

[Block diagram: MPC receives the desired torque and measurements, sends actuator commands to the engine, which delivers the achieved torque]

• A numerical optimization problem is solved in real time on the ECU

(Bemporad, Bernardini, Long, Verdejo, 2018)

[Plot: engine operating at low pressure (66 kPa)]
Supervisory MPC of powertrain with CVT

• Coordinate engine torque request and continuously variable transmission (CVT) ratio to improve fuel economy and drivability

• Real-time MPC is able to take into account coupled dynamics and constraints, optimizing performance also during transients

[Block diagram: MPC receives the desired axle torque and sends the engine torque request to Engine Control and the CVT ratio request to CVT Control; results on the US06 Double Hill driving cycle]

(Bemporad, Bernardini, Livshiz, Pattipati, 2018)


MPC in automotive production
ODYS real-time embedded optimization and MPC software is currently
running on 3+ million vehicles worldwide

• Multivariable system, 4 inputs, 4 outputs; QP solved in real time on the ECU
  (Bemporad, Bernardini, Long, Verdejo, 2018)

• Supervisory MPC for powertrain control also in production since 2018
  (Bemporad, Bernardini, Livshiz, Pattipati, 2018)

First known mass production of MPC in the automotive industry

http://www.odys.it/odys-and-gm-bring-online-mpc-to-production


Aerospace applications of MPC
• MPC capabilities explored in space applications

  – cooperating UAVs (Bemporad, Rocchi, 2011)
  – powered descent (Pascucci, Bennani, Bemporad, 2016)
  – planetary rover (Krenn et al., 2012)


MPC In Aeronautic Industry

http://www.pw.utc.com/Press/Story/20100527-0100/2010



MPC for Smart Electricity Grids
(Patrinos, Trimboli, Bemporad, 2011)
[Diagram: transmission grid connecting coal plants, natural gas, hydro-storage, photovoltaic, a wind farm, and demand nodes]

• Dispatch power in smart distribution grids, trade energy on energy markets

• Challenges: account for dynamics, network topology, physical constraints, and stochasticity (of renewable energy, demand, electricity prices)

FP7-ICT project "E-PRICE - Price-based Control of Electrical Power Systems" (2010-2013)
MPC of drinking water networks
(Sampathirao, Sopasakis, Bemporad, Patrinos, 2017)

Drinking water network of Barcelona: 63 tanks, 114 controlled flows, 17 mixing nodes

• ≈5% savings on energy costs w.r.t. current practice

• Demand and minimum pressure requirements met, smooth control actions

• Computation time: ≈20 s on NVIDIA Tesla 2075 CUDA (sample time = 1 hr)

FP7-ICT project "EFFINET - Efficient Integrated Real-time Monitoring and Control of Drinking Water Networks" (2012-2015)


MPC for dynamic hedging of financial options
(Bemporad, Bellucci, Gabbriellini, Quantitative Finance, 2014)

• Goal: find a dynamic hedging policy for a portfolio replicating a synthetic option, so as to minimize the risk that payoff ≠ portfolio wealth at the expiration date

• A simple linear stochastic model describes the dynamics of the portfolio wealth

• Stochastic MPC results:

[Plots: portfolio wealth vs. payoff at expiration, for a European call with payoff p(T) = max{w(T) − K, 0} and for a Napoleon cliquet with payoff p(T) = max{0, C + min_{i∈{1,...,Nfix}} (x(ti) − x(ti−1))/x(ti−1)}]
MPC research is driven by applications

• Process control → linear MPC (some nonlinear too) 1970-2000

• Automotive control → explicit, hybrid MPC 2001-2010

• Aerospace systems and UAVs → linear time-varying MPC >2005

• Information and Communication Technologies (ICT)


(wireless nets, cloud) → distributed/decentralized MPC >2005

• Energy, finance, automotive, water → stochastic MPC >2010

• Industrial production → embedded optimization solvers for MPC >2010

• Machine learning → data-driven MPC today



MPC design flow
[Design-flow diagram: experiments on the physical process feed physical modeling + parameter estimation and system identification; these produce a high-fidelity simulation model G(z) and a (simplified) control-oriented prediction model with performance index & constraints; the MPC design is validated in closed-loop simulation and deployed as real-time code, e.g.:]

  /* z=A*x0; */
  for (i=0;i<m;i++) {
    z[i]=A[i]*x[0];
    for (j=1;j<n;j++) {
      z[i]+=A[i+m*j]*x[j];
    }
  }
MPC toolboxes
• MPC Toolbox (The MathWorks, Inc.): (Bemporad, Ricker, Morari, 1998-today)

  – Part of The MathWorks' official toolbox distribution
  – All written in MATLAB code
  – Great for education and research

• Hybrid Toolbox: (Bemporad, 2003-today) 10,000+ downloads (1.5 downloads/day)

  – Free download: http://cse.lab.imtlucca.it/~bemporad/hybrid/toolbox
  – Great for research and education

• ODYS Embedded MPC Toolset: (ODYS, 2013-today) odys.it/embedded-mpc

  – Very flexible MPC design and seamless integration in production
  – Real-time MPC code and QP solver written in plain C
  – Support for nonlinear models and deep learning
  – Designed and adopted for industrial production


Benefits of MPC
• Long history (decades) of success of MPC in industry

• MPC is a universal control methodology:

– to coordinate multiple inputs/outputs, arbitrary models (linear, nonlinear, ...)


– to optimize performance under constraints
– intuitive to design, easy to calibrate and reconfigure = short development time

• MPC is a mature technology also in fast-sampling applications (e.g. automotive)

– modern ECUs can solve MPC problems in real-time


– advanced MPC software tools are available for design/calibration/deployment

Ready to learn how MPC works ?



Basics of Constrained Optimization

See more in “Numerical Optimization” course


http://cse.lab.imtlucca.it/~bemporad/optimization_course.html
Mathematical programming

  min_x  f(x)
  s.t.   g(x) ≤ 0

  x ∈ Rⁿ,  f: Rⁿ → R,  g: Rⁿ → Rᵐ

  x = [x1, x2, ..., xn]′,  f(x) = f(x1, x2, ..., xn),  g(x) = [g1(x1, x2, ..., xn), ..., gm(x1, x2, ..., xn)]′

In general, the problem is difficult to solve → use software tools


Optimization software
• Comparison on benchmark problems:
http://plato.la.asu.edu/bench.html

• Taxonomy of many solvers for different classes of optimization problems:


http://www.neos-guide.org

• NEOS server for remotely solving optimization problems:

http://www.neos-server.org

• Good open-source optimization software:

http://www.coin-or.org/

• GitHub, MATLAB Central, Google, ...


Convex sets
• A set S ⊆ Rn is convex if for all x1 , x2 ∈ S

λx1 + (1 − λ)x2 ∈ S, ∀λ ∈ [0, 1]

[Figure: a convex set S, where the segment between any two points x1, x2 stays inside S, and a nonconvex set, where it does not]


Convex functions
• A function f : S → R is a convex function if S is convex and

  f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2),  ∀x1, x2 ∈ S, λ ∈ [0, 1]

(Jensen's inequality)
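As a quick numerical illustration of the inequality (not a proof), one can sample the convex function f(x) = x² over a grid of points and convex-combination weights:

```python
# Check Jensen's inequality f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2)
# for the convex function f(x) = x**2 on a grid of points and lambdas.
f = lambda x: x ** 2

violations = 0
pts = [-3.0, -1.0, 0.0, 0.5, 2.0]
for x1 in pts:
    for x2 in pts:
        for i in range(11):
            lam = i / 10
            lhs = f(lam * x1 + (1 - lam) * x2)
            rhs = lam * f(x1) + (1 - lam) * f(x2)
            if lhs > rhs + 1e-12:          # small tolerance for rounding
                violations += 1
print(violations)  # 0: the inequality holds at every sampled point
```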


Convex optimization problem
The optimization problem

  min  f(x)
  s.t. x ∈ S

is a convex optimization problem if S is a convex set and f: S → R is a convex function

• Often S is defined by linear equality constraints Ax = b and convex inequality constraints g(x) ≤ 0, with g: Rⁿ → Rᵐ convex

• Every local solution is also a global one (we will see this later)

• Efficient solution algorithms exist

• Such problems occur often in engineering, economics, and science

Excellent textbook: "Convex Optimization" (Boyd, Vandenberghe, 2004)


Polyhedra
• Convex polyhedron = intersection of a finite set of half-spaces of Rⁿ

• Convex polytope = bounded convex polyhedron

• Hyperplane (H-)representation:

  P = {x ∈ Rⁿ : Ax ≤ b}

• Vertex (V-)representation:

  P = {x ∈ Rⁿ : x = ∑_{i=1}^{q} αᵢvᵢ + ∑_{j=1}^{p} βⱼrⱼ,  αᵢ, βⱼ ≥ 0,  ∑_{i=1}^{q} αᵢ = 1}

  vᵢ = vertex, rⱼ = extreme ray (when q = 0 the polyhedron is a cone)

• Convex hull = transformation from V- to H-representation

• Vertex enumeration = transformation from H- to V-representation
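Testing membership in an H-representation P = {x : Ax ≤ b} is just a set of inequality checks. A minimal sketch in Python for the unit box in R² (the matrices and names are illustrative):

```python
# H-representation of the unit box [-1,1]^2: Ax <= b with 4 half-spaces.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 1, 1, 1]

def in_polyhedron(x, A, b, tol=1e-9):
    # x is in P iff every half-space inequality a_i' x <= b_i holds
    return all(sum(ai * xi for ai, xi in zip(row, x)) <= bi + tol
               for row, bi in zip(A, b))

print(in_polyhedron([0.5, -0.5], A, b))  # True: inside the box
print(in_polyhedron([1.5, 0.0], A, b))   # False: violates x1 <= 1

# convexity: the midpoint of two feasible points is again feasible
assert in_polyhedron([0.0, 0.0], A, b) and in_polyhedron([1.0, 1.0], A, b)
assert in_polyhedron([0.5, 0.5], A, b)
```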


Linear programming
• Linear programming (LP) problem:

  min  c′x
  s.t. Ax ≤ b
       Ex = f,  x ∈ Rⁿ

  George Dantzig (1914-2005)

• LP in standard form:

  min  c′x
  s.t. Ax = b
       x ≥ 0,  x ∈ Rⁿ

• Conversion to standard form:

  1. introduce slack variables:

     ∑_{j=1}^{n} aᵢⱼxⱼ ≤ bᵢ  ⇒  ∑_{j=1}^{n} aᵢⱼxⱼ + sᵢ = bᵢ,  sᵢ ≥ 0

  2. split the positive and negative part of x:

     ∑_{j=1}^{n} aᵢⱼxⱼ + sᵢ = bᵢ,  xⱼ free, sᵢ ≥ 0  ⇒  ∑_{j=1}^{n} aᵢⱼ(xⱼ⁺ − xⱼ⁻) + sᵢ = bᵢ,  xⱼ⁺, xⱼ⁻, sᵢ ≥ 0


Linear programming (LP)
• Converting maximization to minimization:

  max_x c′x = −(min_x −c′x)    (more generally: max_x f(x) = −min_x{−f(x)})

• Equalities to double inequalities (not recommended):

  ∑_{j=1}^{n} aᵢⱼxⱼ = bᵢ  ⇒  ∑_{j=1}^{n} aᵢⱼxⱼ ≤ bᵢ  and  ∑_{j=1}^{n} aᵢⱼxⱼ ≥ bᵢ

• Change the direction of an inequality:

  ∑_{j=1}^{n} aᵢⱼxⱼ ≥ bᵢ  ⇒  ∑_{j=1}^{n} −aᵢⱼxⱼ ≤ −bᵢ

An LP can always be formulated using "min" and "≤"
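The standard-form conversion (slack variable + positive/negative split) can be checked on a single scalar inequality; a hypothetical toy, with 2x ≤ 6 and x free:

```python
# Convert the inequality 2*x <= 6 with x free into standard form:
# 2*(xp - xm) + s = 6, with xp, xm, s >= 0 and x = xp - xm.
def to_standard(x):
    # split the free variable into its positive and negative parts
    xp, xm = max(x, 0.0), max(-x, 0.0)
    s = 6 - 2 * x            # slack turns the inequality into an equality
    return xp, xm, s

for x in [-4.0, 0.0, 2.5]:   # all satisfy 2*x <= 6
    xp, xm, s = to_standard(x)
    assert xp >= 0 and xm >= 0 and s >= 0        # standard-form signs
    assert abs(2 * (xp - xm) + s - 6) < 1e-12    # equality holds
    assert abs((xp - xm) - x) < 1e-12            # x is recovered
print("standard-form conversion verified")
```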


Quadratic programming (QP)
• Quadratic programming (QP) problem:

  min  ½ x′Qx + c′x
  s.t. Ax ≤ b
       Ex = f,  x ∈ Rⁿ

• Convex optimization problem if Q ⪰ 0 (Q = positive semidefinite matrix)¹

• Without loss of generality, we can assume Q = Q′:

  ½ x′Qx = ½ x′((Q+Q′)/2 + (Q−Q′)/2)x = ½ x′((Q+Q′)/2)x + ¼ x′Qx − ¼ x′Q′x = ½ x′((Q+Q′)/2)x

  (the last two terms cancel, since x′Q′x = (x′Qx)′ = x′Qx is a scalar)

• Hard problem if Q ⋡ 0 (Q = indefinite matrix)

¹ A matrix P ∈ Rⁿˣⁿ is positive semidefinite (P ⪰ 0) if x′Px ≥ 0 for all x.
  It is positive definite (P ≻ 0) if in addition x′Px > 0 for all x ≠ 0.
  It is negative (semi)definite (P ≺ 0, P ⪯ 0) if −P is positive (semi)definite.
  It is indefinite otherwise.
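The symmetrization identity is easy to verify numerically: the quadratic form only "sees" the symmetric part of Q. A small sketch with an arbitrary nonsymmetric 2×2 matrix:

```python
# x'Qx = x'((Q+Q')/2)x, since x'((Q-Q')/2)x = 0 for every x.
def quad_form(Q, x):
    return sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))

Q = [[1.0, 4.0],
     [2.0, 3.0]]                  # nonsymmetric
Qs = [[1.0, 3.0],
      [3.0, 3.0]]                 # symmetric part (Q + Q')/2

for x in [[1.0, 0.0], [0.0, 1.0], [1.0, -2.0], [0.7, 0.3]]:
    assert abs(quad_form(Q, x) - quad_form(Qs, x)) < 1e-12
print("x'Qx matches x'((Q+Q')/2)x")
```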


Mixed-integer programming (MIP)
• Mixed-integer linear program (MILP):

  min  c′x
  s.t. Ax ≤ b,  x = [xc; xb],  xc ∈ R^nc, xb ∈ {0,1}^nb

• Mixed-integer quadratic program (MIQP):

  min  ½ x′Qx + c′x
  s.t. Ax ≤ b,  x = [xc; xb],  xc ∈ R^nc, xb ∈ {0,1}^nb

• Some variables are real, some are binary (0/1)

• MILP and MIQP are NP-hard problems, in general

• Many good solvers are available (CPLEX, Gurobi, GLPK, FICO Xpress, CBC, ...)
  For comparisons see http://plato.la.asu.edu/bench.html


Modeling languages for optimization problems

• AMPL (A Modeling Language for Mathematical Programming): the most used modeling language, supports several solvers

• GAMS (General Algebraic Modeling System): one of the first modeling languages

• GNU MathProg: a subset of AMPL associated with the free package GLPK (GNU Linear Programming Kit)

• YALMIP MATLAB-based modeling language

• CVX (CVXPY) Modeling language for convex problems in MATLAB (Python)


Modeling languages for optimization problems
• CasADi + IPOPT Nonlinear modeling + automatic differentiation, nonlinear programming solver (MATLAB, Python, C++)

• Optimization Toolbox's modeling language (part of MATLAB since R2017b)

• PYOMO Python-based modeling language

• GEKKO Python-based mixed-integer nonlinear modeling language

• PYTHON-MIP Python-based modeling language for mixed-integer linear programming

• PuLP A linear programming modeler for Python

• JuMP A modeling language for linear, quadratic, and nonlinear constrained optimization problems, embedded in Julia


Linear MPC
Linear MPC - Unconstrained case
• Linear prediction model:

  xk+1 = A xk + B uk      Notation: x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ
  yk = C xk               x0 = x(t), xk = x(t+k|t), uk = u(t+k|t)

• Relation between input and states:  xk = Aᵏx0 + ∑_{j=0}^{k−1} AʲB uk−1−j

• Performance index:

  J(z, x0) = xN′P xN + ∑_{k=0}^{N−1} (xk′Q xk + uk′R uk)

  R = R′ ≻ 0,  Q = Q′ ⪰ 0,  P = P′ ⪰ 0,  z = [u0; u1; ...; uN−1]

• Goal: find the sequence z* that minimizes J(z, x0), i.e., that steers the state x to the origin optimally


Computation of cost function

• Stack the predicted states and inputs:

  J(z, x0) = x0′Q x0 + [x1; ...; xN]′ Q̄ [x1; ...; xN] + [u0; ...; uN−1]′ R̄ [u0; ...; uN−1]

  Q̄ = diag(Q, ..., Q, P),  R̄ = diag(R, ..., R)

• Express the predicted states as a linear function of z and x0:

  [x1; x2; ...; xN] = S̄ z + T̄ x0

  S̄ = [B 0 ... 0;  AB B ... 0;  ... ;  A^{N−1}B A^{N−2}B ... B],   T̄ = [A; A²; ...; A^N]

• Substituting:

  J(z, x0) = (S̄z + T̄x0)′ Q̄ (S̄z + T̄x0) + z′R̄z + x0′Q x0
           = ½ z′ 2(R̄ + S̄′Q̄S̄) z + x0′ 2T̄′Q̄S̄ z + ½ x0′ 2(Q + T̄′Q̄T̄) x0
           = ½ z′H z + x0′F′ z + ½ x0′Y x0

  with H = 2(R̄ + S̄′Q̄S̄),  F′ = 2T̄′Q̄S̄,  Y = 2(Q + T̄′Q̄T̄)
Linear MPC - Unconstrained case
  J(z, x0) = ½ z′H z + x0′F′ z + ½ x0′Y x0,   z = [u0; u1; ...; uN−1]   (condensed form of MPC)

• The optimum is obtained by zeroing the gradient

  ∇z J(z, x0) = Hz + F x0 = 0

  and hence z* = [u0*; u1*; ...; uN−1*] = −H⁻¹F x0   ("batch" solution)

• Alternative #1: find z* via dynamic programming (Riccati iterations)

• Alternative #2: keep also x1, ..., xN as optimization variables together with the equality constraints xk+1 = A xk + B uk (non-condensed form, which is very sparse)
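The batch solution can be sanity-checked numerically. A sketch for a scalar plant with N = 2 (all numeric values are arbitrary): build H and F from the formulas above and verify that z* = −H⁻¹F x₀ beats every point of a brute-force grid.

```python
# Condensed unconstrained MPC for x+ = a*x + b*u, horizon N = 2.
a, b, q, r, p = 0.9, 0.5, 1.0, 0.1, 1.0

def cost(u0, u1, x0):
    # J = q*x0^2 + q*x1^2 + p*x2^2 + r*(u0^2 + u1^2), simulated directly
    x1 = a * x0 + b * u0
    x2 = a * x1 + b * u1
    return q * x0**2 + q * x1**2 + p * x2**2 + r * (u0**2 + u1**2)

# H = 2(Rbar + Sbar'Qbar Sbar), F = 2 Sbar'Qbar Tbar, written out for scalars
H = [[2 * (r + q * b**2 + p * (a * b)**2), 2 * p * a * b**2],
     [2 * p * a * b**2,                    2 * (r + p * b**2)]]
F = [2 * (q * a * b + p * a**3 * b), 2 * p * a**2 * b]

x0 = 3.0
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
z_star = [-(H[1][1] * F[0] - H[0][1] * F[1]) / det * x0,   # -H^{-1} F x0
          -(-H[1][0] * F[0] + H[0][0] * F[1]) / det * x0]

# brute force over a grid: no pair (u0, u1) should beat z*
J_star = cost(z_star[0], z_star[1], x0)
grid = [i / 10 - 5 for i in range(101)]
assert all(cost(u0, u1, x0) >= J_star - 1e-9 for u0 in grid for u1 in grid)
print("batch solution z* = -H^-1 F x0 confirmed by grid search")
```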


Unconstrained linear MPC algorithm
@ each sampling step t:

[Plot: past/future at time t, reference r(t), predicted outputs yk, manipulated inputs uk over t, ..., t+N]

• Minimize the quadratic function (no constraints)

  min_z f(z) = ½ z′H z + x′(t)F′ z,   z = [u0; u1; ...; uN−1]

• Solution: ∇f(z) = Hz + F x(t) = 0  ⇒  z* = −H⁻¹F x(t)

  u(t) = −[I 0 ... 0] H⁻¹F x(t) = K x(t)

unconstrained linear MPC = linear state feedback!


Constrained Linear MPC
• Linear prediction model:

  xk+1 = A xk + B uk       x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ
  yk = C xk

• Constraints to enforce:

  umin ≤ u(t) ≤ umax,   umin, umax ∈ Rᵐ
  ymin ≤ y(t) ≤ ymax,   ymin, ymax ∈ Rᵖ

• Constrained optimal control problem (quadratic performance index):

  min_z  xN′P xN + ∑_{k=0}^{N−1} (xk′Q xk + uk′R uk)
  s.t.   umin ≤ uk ≤ umax,  k = 0, ..., N−1
         ymin ≤ yk ≤ ymax,  k = 1, ..., N

  R = R′ ≻ 0,  Q = Q′ ⪰ 0,  P = P′ ⪰ 0,  z = [u0; u1; ...; uN−1]


Constrained Linear MPC
• Linear prediction model:  xk = Aᵏx0 + ∑_{i=0}^{k−1} AⁱB uk−1−i

• Optimization problem (condensed form):

  V(x0) = ½ x0′Y x0 + min_z  ½ z′H z + x0′F′ z      (quadratic objective)
                      s.t.   Gz ≤ W + S x0          (linear constraints)

  → a convex Quadratic Program (QP)

• z = [u0; u1; ...; uN−1] ∈ R^Nm is the optimization vector

• H = H′ ≻ 0, and H, F, Y, G, W, S depend on the weights Q, R, P, the upper and lower bounds umin, umax, ymin, ymax, and the model matrices A, B, C


Computation of constraint matrices
• Input constraints umin ≤ uk ≤ umax, k = 0, ..., N−1, i.e. uk ≤ umax and −uk ≤ −umin:

  [  I  0 ... 0 ]       [  umax ]
  [  0  I ... 0 ]       [  umax ]
  [  ⋮        ⋮ ]       [   ⋮   ]
  [  0 ... 0  I ] z  ≤  [  umax ]       z = [u0; u1; ...; uN−1]
  [ −I  0 ... 0 ]       [ −umin ]
  [  0 −I ... 0 ]       [ −umin ]
  [  ⋮        ⋮ ]       [   ⋮   ]
  [  0 ... 0 −I ]       [ −umin ]

• Output constraints yk = CAᵏx0 + ∑_{i=0}^{k−1} CAⁱB uk−1−i ≤ ymax, k = 1, ..., N:

  [ CB          0   ...  0  ]       [ ymax ]   [ CA   ]
  [ CAB         CB  ...  0  ]       [ ymax ]   [ CA²  ]
  [ ⋮                    ⋮  ] z  ≤  [  ⋮   ] − [  ⋮   ] x0
  [ CA^{N−1}B  ...  CAB  CB ]       [ ymax ]   [ CA^N ]
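The input-constraint stacking can be sketched and checked in a few lines of Python (an illustrative toy with m = 1 input and N = 3, so z = [u0, u1, u2]):

```python
# Stack umin <= u_k <= umax, k = 0..N-1, as G z <= W.
N, umin, umax = 3, -1.0, 1.0
G = [[1 if j == i else 0 for j in range(N)] for i in range(N)] + \
    [[-1 if j == i else 0 for j in range(N)] for i in range(N)]
W = [umax] * N + [-umin] * N

def satisfies(z):
    # G z <= W row by row
    return all(sum(g * zj for g, zj in zip(row, z)) <= w
               for row, w in zip(G, W))

assert satisfies([0.0, 1.0, -1.0])       # all inputs within bounds
assert not satisfies([0.0, 1.2, 0.0])    # u1 > umax violates an I-row
assert not satisfies([-1.5, 0.0, 0.0])   # u0 < umin violates a -I row
print("input-constraint matrices verified")
```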


Linear MPC algorithm
@ each sampling step t:

[Plot: past/future at time t, reference r(t), predicted outputs yk, manipulated inputs uk over t, ..., t+N]

• Measure (or estimate) the current state x(t)

• Get the solution z* = [u0*; u1*; ...; uN−1*] of the QP

  min_z  ½ z′H z + x′(t)F′ z
  s.t.   Gz ≤ W + S x(t)

  (feedback enters through x(t))

• Apply only u(t) = u0*, discarding the remaining optimal inputs u1*, ..., uN−1*


Double integrator example
• System:  x2(t) = x2(0) + ∑_{j=0}^{t−1} u(j),   y(t) = x1(0) + ∑_{j=0}^{t−1} x2(j)

• Sample time: Ts = 1 s

• State-space realization:

  x(t+1) = [1 1; 0 1] x(t) + [0; 1] u(t)
  y(t) = [1 0] x(t)

• Constraints: −1 ≤ u(t) ≤ 1

• Performance index:  min ∑_{k=0}^{1} (yk² + (1/10) uk²) + x2′ [1 0; 0 1] x2   (N = 2)

• QP matrices:

  cost: ½ z′Hz + x′(t)F′z + ½ x′(t)Y x(t)
  H = [4.2 2; 2 2.2],  F = [2 6; 0 2],  Y = [4 6; 6 12]

  constraints: Gz ≤ W + Sx(t)
  G = [1 0; 0 1; −1 0; 0 −1],  W = [1; 1; 1; 1],  S = [0 0; 0 0; 0 0; 0 0]
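The QP matrices of this example can be rebuilt from the condensed-form formulas with plain-Python matrix helpers, as a check (this reproduces H = 2(R̄ + S̄′Q̄S̄) and F = 2S̄′Q̄T̄ for the stated A, B, C, weights, and N = 2):

```python
def mat_t(M):
    return [list(r) for r in zip(*M)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 1], [0, 1]]; B = [[0], [1]]
Q = [[1, 0], [0, 0]]               # Q = C'C with C = [1 0]
P = [[1, 0], [0, 1]]; R = 0.1

AB = mat_mul(A, B); A2 = mat_mul(A, A)
# Sbar maps z = [u0; u1] to [x1; x2], Tbar maps x0 to [x1; x2]
Sbar = [[B[0][0], 0], [B[1][0], 0], [AB[0][0], B[0][0]], [AB[1][0], B[1][0]]]
Tbar = [A[0], A[1], A2[0], A2[1]]
Qbar = [[Q[0][0], Q[0][1], 0, 0], [Q[1][0], Q[1][1], 0, 0],
        [0, 0, P[0][0], P[0][1]], [0, 0, P[1][0], P[1][1]]]

StQ = mat_mul(mat_t(Sbar), Qbar)
StQS = mat_mul(StQ, Sbar)
StQT = mat_mul(StQ, Tbar)
H = [[2 * (StQS[i][j] + (R if i == j else 0.0)) for j in range(2)]
     for i in range(2)]
F = [[2 * StQT[i][j] for j in range(2)] for i in range(2)]

print([[round(h, 6) for h in row] for row in H])  # [[4.2, 2.0], [2.0, 2.2]]
print(F)                                          # [[2, 6], [0, 2]]
```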


Double integrator example
[Closed-loop simulation plots: input u(t) saturating at ±1, and states x1(t), x2(t)]

go to demo linear/doubleint.m (Hybrid Toolbox)
(see also mpcdoubleint.m in MPC Toolbox)


Double integrator example

• Add a constraint on the second state at prediction time t + 1:

  x2,k ≥ −1,  k = 1

• New QP matrices:

  H = [4.2 2; 2 2.2],  F = [2 6; 0 2],  Y = [4 6; 6 12]

  G = [−1 0; 1 0; −1 0; 0 1; 0 −1],  W = [1; 1; 1; 1; 1],  S = [0 1; 0 0; 0 0; 0 0; 0 0]

  (the first row encodes x2(t) + u0 ≥ −1, i.e., −u0 ≤ 1 + x2(t))


Double integrator example

[Closed-loop simulation plots: input u(t) and states x1(t), x2(t)]

• The state constraint x2(t) ≥ −1 is satisfied


Linear MPC - Tracking
• Objective: make the output y(t) track a reference signal r(t)

• Let us parameterize the problem using the input increments

  Δu(t) = u(t) − u(t−1)

• As u(t) = u(t−1) + Δu(t), we need to extend the system with a new state xu(t) = u(t−1):

  x(t+1) = A x(t) + B u(t−1) + B Δu(t)
  xu(t+1) = xu(t) + Δu(t)

  [x(t+1); xu(t+1)] = [A B; 0 I] [x(t); xu(t)] + [B; I] Δu(t)
  y(t) = [C 0] [x(t); xu(t)]

• Again a linear system, with states x(t), xu(t) and input Δu(t)
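The Δu-augmented model is equivalent to the original one when driven by the increments of the same input sequence, which a scalar simulation can confirm (a toy sketch, with arbitrary a, b):

```python
# Augmented model for tracking: state [x; xu] with xu(t) = u(t-1) and
# input du(t) = u(t) - u(t-1); scalar example with x+ = a*x + b*u.
a, b = 0.8, 0.5

def original(x, u):           # x(t+1) = a x(t) + b u(t)
    return a * x + b * u

def augmented(x, xu, du):     # [x;xu](t+1) = [a b; 0 1][x;xu] + [b; 1] du
    return a * x + b * (xu + du), xu + du

# simulate both with the same input sequence and check they match
u_seq = [1.0, -0.5, 0.3, 0.3, 0.0]
x1, (x2, xu), u_prev = 0.0, (0.0, 0.0), 0.0
for u in u_seq:
    x1 = original(x1, u)
    x2, xu = augmented(x2, xu, u - u_prev)   # du(t) = u(t) - u(t-1)
    u_prev = u
    assert abs(x1 - x2) < 1e-12 and abs(xu - u) < 1e-12
print("augmented Delta-u model matches the original dynamics")
```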


Linear MPC - Tracking
• Optimal control problem (quadratic performance index):

  min_z  ∑_{k=0}^{N−1} ∥Wʸ(yk+1 − r(t))∥²₂ + ∥W^Δu Δuk∥²₂
  s.t.   umin ≤ uk ≤ umax,     k = 0, ..., N−1
         ymin ≤ yk ≤ ymax,     k = 1, ..., N
         Δumin ≤ Δuk ≤ Δumax,  k = 0, ..., N−1

  where Δuk ≜ uk − uk−1, u−1 = u(t−1), and z = [Δu0; Δu1; ...; ΔuN−1] or z = [u0; u1; ...; uN−1]

  each weight W^(·) is a diagonal matrix, or the Cholesky factor of Q^(·) = (W^(·))′W^(·)

• This is again a convex Quadratic Program:

  min_z  ½ z′Hz + [x′(t) r′(t) u′(t−1)] F′z
  s.t.   Gz ≤ W + S [x(t); r(t); u(t−1)]

• Add the extra penalty ∥Wᵘ(uk − uref(t))∥²₂ to track input references

• Constraints may depend on r(t), such as emin ≤ yk − r(t) ≤ emax


Linear MPC tracking example
• System:  y(t) = 1/(s² + 0.4s + 1) u(t)   (or equivalently d²y/dt² + 0.4 dy/dt + y = u)

[Plot: open-loop step response]

• Sampling with period Ts = 0.5 s:

  x(t+1) = [1.597 −0.8187; 1 0] x(t) + [0.5; 0] u(t)
  y(t) = [0.2294 0.2145] x(t)

go to demo linear/example3.m (Hybrid Toolbox)


Linear MPC tracking example
• Performance index:

  min ∑_{k=0}^{9} (yk+1 − r(t))² + 0.04 Δuk²

[Simulink diagram: Step → Linear Constrained Controller → LTI System tf(1,[1 .4 1])]

• Closed-loop MPC results: [plots of u(t) and y(t)]


Linear MPC tracking example
• Impose the constraint 0.8 ≤ u(t) ≤ 1.2 (amplitude): [plots of u(t) and y(t)]

• Impose instead the constraint −0.2 ≤ Δu(t) ≤ 0.2 (slew rate): [plots of u(t) and y(t)]


Measured disturbances
• Measured disturbance v(t) = an input that is measured but not manipulated

  xk+1 = A xk + B uk + Bv v(t)
  yk = C xk + Dv v(t)

  xk = Aᵏx0 + ∑_{j=0}^{k−1} Aʲ(B uk−1−j + Bv v(t))

[Diagram: the process has manipulated variables u(t), measured disturbances v(t), outputs y(t), and state x(t)]

• Same performance index, same constraints. We still have a QP:

  min_z  ½ z′Hz + [x′(t) r′(t) u′(t−1) v′(t)] F′z
  s.t.   Gz ≤ W + S [x(t); r(t); u(t−1); v(t)]

• Note that MPC naturally provides feedforward action on v(t) and r(t)


Anticipative action (a.k.a. "preview")
  min_z  ∑_{k=0}^{N−1} ∥Wʸ(yk+1 − r(t+k))∥²₂ + ∥W^Δu Δuk∥²₂

• Reference not known in advance (causal):
  rk ≡ r(t), ∀k = 0, ..., N−1

• Future references (partially) known in advance (anticipative action):
  rk = r(t+k), ∀k = 0, ..., N−1

[Plots: output/reference and input, using r(t) vs. using r(t+k)]

go to demo mpcpreview.m (MPC Toolbox)

• The same idea also applies for preview of measured disturbances v(t+k)
Example: Cascaded MPC
• We can use preview also to coordinate multiple MPC controllers

• Example: cascaded MPC

[Diagram: MPC #1 tracks the desired speed r(t) of process #1 and computes the desired torque u1(t); its predicted sequence u1*(t), u1*(t+1), ..., u1*(t+N−1) is sent as the future reference sequence to MPC #2, which commands the throttle u2(t) of process #2]

• MPC #1 sends current and future references to MPC #2


Example: Decentralized MPC
• Example: decentralized MPC

[Diagram: MPC #1 controls process #1 (output y1(t), input u1(t), reference r1(t)) and MPC #2 controls process #2 (output y2(t), input u2(t), reference r2(t)); each controller sends its planned input sequence u(t), u(t+1), ..., u(t+N−1) to the other]

• Commands generated by MPC #1 are measured disturbances for MPC #2, and vice versa


Soft constraints
• Relax the output constraints to prevent QP infeasibility:

  min_z  ∑_{k=0}^{N−1} ∥Wʸ(yk+1 − r(t))∥²₂ + ∥W^Δu Δuk∥²₂ + ∥Wᵘ(uk − uref(t))∥²₂ + ρϵ ϵ²
  s.t.   xk+1 = A xk + B uk,                 k = 0, ..., N−1
         umin ≤ uk ≤ umax,                   k = 0, ..., N−1
         Δumin ≤ Δuk ≤ Δumax,                k = 0, ..., N−1
         ymin − ϵVmin ≤ yk ≤ ymax + ϵVmax,   k = 1, ..., N

  z = [Δu0; ...; ΔuN−1; ϵ]

• ϵ = "panic" variable, with weight ρϵ ≫ Wʸ, W^Δu

• Vmin, Vmax = vectors with entries > 0. The larger the i-th entry of V, the (relatively) softer the corresponding i-th constraint

• Infeasibility can be due to:
  – modeling errors, disturbances
  – a wrong MPC setup (e.g., the prediction horizon is too short)
Delays - Method #1
• Linear model with delays x(t + 1) = Ax(t) + Bu(t − τ )
y(t) = Cx(t)

u(t − 1) ... u(t − τ ) y(t)


u(t) x(t)
A, B C

• Map delays to poles in z = 0:

xk (t) ≜ u(t − k) ⇒ xk (t + 1) = xk−1 (t), k = 1, . . . , τ


  [ x     ]          [ A  B  0   0   . . .  0  ] [ x     ]       [ 0  ]
  [ xτ    ]          [ 0  0  Im  0   . . .  0  ] [ xτ    ]       [ 0  ]
  [ xτ −1 ] (t + 1) = [ 0  0  0   Im  . . .  0  ] [ xτ −1 ] (t) + [ 0  ] u(t)
  [ ⋮     ]          [ ⋮  ⋮  ⋮   ⋮    ⋱    ⋮  ] [ ⋮     ]       [ ⋮  ]
  [ x1    ]          [ 0  0  0   0   . . .  0  ] [ x1    ]       [ Im ]

• Apply MPC to the extended system

• Note: the prediction horizon N must be ≥ τ , otherwise no input u0 , . . . , uN −1


has an effect on the output!
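A minimal sketch of Method #1 for a scalar plant with τ = 2 (plain Python, values illustrative): simulating the delayed plant directly and simulating the augmented delay-free system produce identical state trajectories.

```python
# Absorb an input delay tau=2 into the state (poles in z = 0).
# Scalar plant x(t+1) = a*x(t) + b*u(t-2); augmented state [x, x2, x1]
# with x_k(t) = u(t-k): x2 feeds the plant, x1 shifts into x2, u enters x1.

a, b, tau = 0.8, 1.0, 2
u_seq = [1.0, -0.5, 0.3, 0.0, 0.2, 0.1]

# direct simulation with a delay buffer (u(t) = 0 for t < 0)
x = 0.0
xs_direct = []
for t in range(len(u_seq)):
    xs_direct.append(x)
    u_delayed = u_seq[t - tau] if t >= tau else 0.0
    x = a * x + b * u_delayed

# augmented delay-free simulation
x, x2, x1 = 0.0, 0.0, 0.0
xs_aug = []
for t in range(len(u_seq)):
    xs_aug.append(x)
    x, x2, x1 = a * x + b * x2, x1, u_seq[t]

print(xs_direct == xs_aug)  # True: both produce the same trajectory
```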

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 61/132


Delays - Method #2
• Linear model with delays:   x(t + 1) = Ax(t) + Bu(t − τ ),   y(t) = Cx(t)

• Delay-free model (x̄(t) ≜ x(t + τ )):   x̄(t + 1) = Ax̄(t) + Bu(t),   ȳ(t) = C x̄(t)
• Design MPC for delay-free model, u(t) = fMPC (x̄(t))
• Compute the predicted state

                              τ −1
  x̄(t) = x̂(t + τ ) = Aτ x(t) + Σ  Aj B u(t − 1 − j)    ← a function of past inputs!
                              j=0

• Compute the MPC control move u(t) = fMPC (x̂(t + τ ))

• For better closed-loop performance and improved robustness, x̂(t + τ ) can be


computed by a more complex model than (A, B, C)
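A minimal sketch of the predictor for a scalar plant (plain Python, values illustrative): x̂(t + τ) computed from the current state and the last τ inputs matches the simulated state τ steps ahead.

```python
# Method #2 sketch: predict x(t+tau) from x(t) and the last tau inputs,
# then a delay-free MPC can act on the prediction.

a, b, tau = 0.9, 0.5, 3
u = [0.4, -0.2, 1.0, 0.3, -0.5, 0.7, 0.0, 0.1]  # u(0), ..., u(7)

# simulate the delayed plant x(t+1) = a*x(t) + b*u(t - tau), u(k) = 0 for k < 0
x = [0.0]
for t in range(len(u)):
    u_del = u[t - tau] if t >= tau else 0.0
    x.append(a * x[t] + b * u_del)

# predictor: xhat(t+tau) = a^tau * x(t) + sum_j a^j * b * u(t-1-j)
t = 4
past_u = [u[t - 1 - j] for j in range(tau)]   # u(t-1), ..., u(t-tau)
xhat = a**tau * x[t] + sum(a**j * b * past_u[j] for j in range(tau))
print(abs(xhat - x[t + tau]) < 1e-12)  # True
```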

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 62/132


Good models for (MPC) control
• Computation complexity depends on chosen prediction model

• Good models for MPC must be

– Descriptive enough to capture the most significant dynamics of the system

– Simple enough for solving the optimization problem

“Things should be made as simple as possible, but not any simpler.”

Albert Einstein
(1879–1955)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 63/132


MPC theory
• After the industrial success of MPC, a lot of research done:

– linear MPC ⇒ linear prediction model


– nonlinear MPC ⇒ nonlinear prediction model
– robust MPC ⇒ uncertain (linear) prediction model
– stochastic MPC ⇒ stochastic prediction model
– distributed/decentralized MPC⇒ multiple MPCs cooperating together
– economic MPC ⇒ MPC based on arbitrary (economic) performance indices
– hybrid MPC ⇒ prediction model integrating logic and dynamics
– explicit MPC ⇒ offline (exact/approximate) computation of MPC
– solvers for MPC ⇒ online numerical algorithms for solving MPC problems
– data-driven MPC ⇒ machine learning methods tailored to MPC design

• Main theoretical issues: feasibility, stability, solution algorithms (Mayne, 2014)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 64/132


Feasibility
       N −1
min     Σ   ∥W y (yk+1 − r(t))∥2 + ∥W ∆u ∆uk ∥2
  z    k=0
subj. to  umin ≤ uk ≤ umax , k = 0, . . . , N − 1         (QP problem)
          ymin ≤ yk ≤ ymax , k = 1, . . . , N
          ∆umin ≤ ∆uk ≤ ∆umax , k = 0, . . . , N − 1

• Feasibility: Will the QP problem be feasible at all sampling instants t?

• Input constraints only: always feasible if u/∆u constraints are consistent

• Hard output constraints:


– When N < ∞ there is no guarantee that the QP problem will remain feasible at all
t, even in the nominal case
– N = ∞ is ok in the nominal case, but we have an infinite number of constraints!
– Maximum output admissible set theory: N < ∞ is enough (Gutman, Cwikel, 1987)
(Gilbert, Tan, 1991) (Chmielewski, Manousiouthakis, 1996) (Kerrigan, Maciejowski, 2000)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 65/132


Predicted and actual trajectories
• Consider predicted and actual trajectories

[Figure: four snapshots of predicted open-loop trajectories x*(t+k|t) vs. the actual closed-loop trajectory; at each step only the first optimal move is applied, so x*(t+1|t) = x(t+1), and a new prediction x*(t+k|t+1) is computed from there]

• Even assuming perfect model and no disturbances, predicted open-loop


trajectories and actual closed-loop trajectories can be different

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 66/132


Predicted and actual trajectories
• Special case: when the horizon is infinite, open-loop trajectories and closed-loop
trajectories coincide: x∗ (t + 2|t) = x∗ (t + 2|t + 1) = x(t + 2)

[Figure: optimal state x∗ (k) and optimal input u∗ (k) plotted over the horizon 0 ≤ k ≤ N]
• This follows by Bellman’s principle of optimality: 0 k N

“Given the optimal sequence u∗ (0), . . . , u∗ (N − 1) and the corresponding optimal
trajectory x∗ (0), . . . , x∗ (N ), the subsequence u∗ (k), . . . , u∗ (N − 1) is optimal for
the subproblem on the horizon [k, N ], starting from the optimal state x∗ (k).”
Richard Bellman
(1920–1984)
"An optimal policy has the property that, regardless of the decisions taken to enter a particular state,
the remaining decisions made for leaving that stage must constitute an optimal policy."
(Bellman, 1957)
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 67/132
Convergence and stability
                 N −1
min  x′N P xN +   Σ   x′k Qxk + u′k Ruk
  z              k=0
s.t.  umin ≤ uk ≤ umax , k = 0, . . . , N − 1             (QP problem)
      ymin ≤ Cxk ≤ ymax , k = 1, . . . , N

with Q = Q′ ⪰ 0, R = R′ ≻ 0, P = P ′ ⪰ 0

• Stability is a complex function of the model (A, B, C) and the MPC parameters
N, Q, R, P, umin , umax , ymin , ymax

• Stability constraints and weights on the terminal state xN can be imposed over
the prediction horizon to ensure stability of MPC

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 68/132


Basic convergence properties
(Keerthi, Gilbert, 1988) (Bemporad, Chisci, Mosca, 1994)

• Theorem: Let the MPC law be based on



                   N −1
V ∗ (x(t)) = min    Σ   x′k Qxk + u′k Ruk
                   k=0
             s.t.  xk+1 = Axk + Buk
                   umin ≤ uk ≤ umax
                   ymin ≤ Cxk ≤ ymax
                   xN = 0    ← “terminal constraint”

with R, Q ≻ 0, umin < 0 < umax , ymin < 0 < ymax .


If the optimization problem is feasible at time t = 0 then

lim x(t) = 0, lim u(t) = 0


t→∞ t→∞

and the constraints are satisfied at all time t ≥ 0, for all R, Q ≻ 0.

• Many more convergence and stability results exist (Mayne, 2014)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 69/132


Convergence proof
Proof: Main idea = use the value function V ∗ (x(t)) as a Lyapunov-like function

• Let zt = [ut0 . . . utN −1 ]′ be the optimal control sequence at time t

• By construction z̄t+1 = [ut1 . . . utN −1 0]′ is a feasible sequence at time t + 1

• The cost of z̄t+1 is V ∗ (x(t)) − x′ (t)Qx(t) − u′ (t)Ru(t) ≥ V ∗ (x(t + 1))

• V ∗ (x(t)) is monotonically decreasing and ≥ 0, so ∃ limt→∞ V ∗ (x(t)) ≜ V∞

• Hence
0 ≤ x′ (t)Qx(t) + u′ (t)Ru(t) ≤ V ∗ (x(t)) − V ∗ (x(t + 1)) → 0 for t → ∞

• Since R, Q ≻ 0, limt→∞ x(t) = 0, limt→∞ u(t) = 0

Reaching the global optimum exactly is not needed to prove convergence

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 70/132


More general convergence result - Output weights
(Keerthi, Gilbert, 1988) (Bemporad, Chisci, Mosca, 1994)

• Theorem: Consider the MPC control law based on optimizing



                   N −1
V ∗ (x(t)) = min    Σ   yk′ Qy yk + u′k Ruk
                   k=0
             s.t.  xk+1 = Axk + Buk
                   yk = Cxk
                   umin ≤ uk ≤ umax
                   ymin ≤ yk ≤ ymax
                   either N = ∞ or xN = 0

If the optimization problem is feasible at time t = 0 then for all R = R′ ≻ 0,
Qy = Q′y ≻ 0

  lim y(t) = 0,   lim u(t) = 0
  t→∞            t→∞

and the constraints are satisfied at all time t ≥ 0. Moreover, if (C, A) is a
detectable pair then limt→∞ x(t) = 0.

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 71/132


Convergence proof
Proof:

• The shifted optimal sequence z̄t+1 = [ut1 . . . utN −1 0]′ (or z̄t+1 = [ut1 ut2 . . .]′ in
case N = ∞) is a feasible sequence at time t + 1 and has cost

  V ∗ (x(t)) − y ′ (t)Qy y(t) − u′ (t)Ru(t) ≥ V ∗ (x(t + 1))


• Therefore, by convergence of V ∗ (x(t)), we have that

0 ≤ y ′ (t)Qy y(t) + u′ (t)Ru(t) ≤ V ∗ (x(t)) − V ∗ (x(t + 1)) → 0


for t → ∞
• Since R, Qy ≻ 0, also limt→∞ y(t) = 0, limt→∞ u(t) = 0
• For all k = 0, . . . , n − 1 we have that

                                                          k−1
  0 = lim y ′ (t+k)Qy y(t+k) = lim ∥LC(Ak x(t) +           Σ  Aj Bu(t+k −1−j))∥22
      t→∞                      t→∞                        j=0

where Qy = L′ L (Cholesky factorization) and L is nonsingular

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 72/132


Convergence proof (cont'd)

• As u(t) → 0, also LCAk x(t) → 0, and since L is nonsingular CAk x(t) → 0 too,
for all k = 0, . . . , n − 1

• Hence Θx(t) → 0, where Θ is the observability matrix of (C, A)

• If (C, A) is observable then Θ is nonsingular and hence limt→∞ x(t) = 0

• If (C, A) is only detectable, we can make a canonical observability


decomposition and show that the observable states converge to zero

• As also u(t) → 0 and the unobservable subsystem is asymptotically stable, the


unobservable states must converge to zero asymptotically too

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 73/132


Extension to reference tracking
• We want to track a constant reference r. Assume xr and ur exist such that

  xr = Axr + Bur ,   r = Cxr        (equilibrium state/input)

• Formulate the MPC problem (assume umin < ur < umax , ymin < r < ymax )

       N −1
  min   Σ   (yk − r)′ Qy (yk − r) + (uk − ur )′ R(uk − ur )
       k=0
  s.t.  xk+1 = Axk + Buk
        umin ≤ uk ≤ umax ,  ymin ≤ Cxk ≤ ymax
        xN = xr

• We can repeat the convergence proofs in the shifted coordinates

  xk+1 − xr = A(xk − xr ) + B(uk − ur ),   yk − r = C(xk − xr )

• Drawback: the approach only works well in nominal conditions, as the input
reference ur depends on A, B, C (inverse static model)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 73/132


Stability constraints

1. No stability constraint, infinite prediction horizon N → ∞


(Keerthi, Gilbert, 1988) (Rawlings, Muske, 1993) (Bemporad, Chisci, Mosca, 1994)

2. End-point constraint xN = 0
(Kwon, Pearson, 1977) (Keerthi, Gilbert, 1988) (Bemporad, Chisci, Mosca, 1994)

3. Relaxed terminal constraint xN ∈ Ω


(Scokaert, Rawlings, 1996)

4. Contraction constraint ∥xk+1 ∥ ≤ α∥x(t)∥, α < 1


(Polak, Yang, 1993) (Bemporad, 1998)

All the proofs in (1,2,3) use the value function V ∗ (x(t)) = minz J(z, x(t)) as a
Lyapunov function.

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 74/132


Control and constraint horizons
       N −1
min     Σ   ∥W y (yk − r(t))∥22 + ∥W ∆u ∆uk ∥22 + ρϵ ϵ2
  z    k=0

subj. to  umin ≤ uk ≤ umax , k = 0, . . . , N − 1
          ∆umin ≤ ∆uk ≤ ∆umax , k = 0, . . . , N − 1
          ∆uk = 0, k = Nu , . . . , N − 1
          ymin − ϵVmin ≤ yk ≤ ymax + ϵVmax , k = 1, . . . , Nc

• The input horizon Nu limits the number of free variables (typically Nu = 1 ÷ 5)

  – Reduced performance
  – Reduced computation time

[Figure: inputs uk are frozen after t + Nu ; output constraints ymax are enforced only up to t + Nc within the prediction horizon t + N]

• The constraint horizon Nc limits the number of constraints


– Higher chance of violating output constraints but reduced computation time

• Other variable-reduction methods exist (Bemporad, Cimini, 2020)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 75/132


MPC and Linear Quadratic Regulation (LQR)
• Special case: J(z, x0 ) = x′N P xN + Σk=0,...,N −1 (x′k Qxk + u′k Ruk ), Nu = N , with
matrix P solving the Algebraic Riccati Equation

  P = A′ P A − A′ P B(B ′ P B + R)−1 B ′ P A + Q
Jacopo Francesco Riccati
(1676–1754)

(unconstrained) MPC = LQR for any choice of the prediction horizon N

Proof: Easily follows from Bellman’s principle of optimality (dynamic


programming): x′N P xN = optimal “cost-to-go” from time N to ∞.

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 76/132


MPC and Constrained LQR
• Consider again the constrained MPC law based on minimizing

                 N −1
min  x′N P xN +   Σ   x′k Qxk + u′k Ruk
  z              k=0
s.t.  umin ≤ uk ≤ umax , k = 0, . . . , N − 1
      ymin ≤ yk ≤ ymax , k = 1, . . . , N
      uk = Kxk , k = Nu , . . . , N − 1

• Choose matrix P and terminal gain K by solving the LQR problem


K = −(R + B ′ P B)−1 B ′ P A
P = (A + BK)′ P (A + BK) + K ′ RK + Q

• In a polyhedral region around the origin, constrained MPC = constrained LQR


for any choice of the prediction and control horizons N , Nu
(Sznaier, Damborg, 1987) (Chmielewski, Manousiouthakis, 1996) (Scokaert, Rawlings, 1998)
(Bemporad, Morari, Dua Pistikopoulos, 2002)
• The larger the horizon N , the larger the region where MPC ≡ constrained LQR
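The terminal pair (P, K) above can be computed by simply iterating the Riccati recursion to convergence. A scalar sketch (plain Python, values illustrative; the course tools use MATLAB):

```python
# Iterate the Riccati recursion for a scalar system until P converges,
# then K = -(R + B'PB)^{-1} B'PA is the LQR / terminal gain used in MPC.

A, B, Q, R = 1.2, 1.0, 1.0, 0.1   # open-loop unstable (A > 1) but stabilizable

P = Q
for _ in range(200):
    P = A*P*A - A*P*B / (B*P*B + R) * B*P*A + Q

K = -(R + B*P*B)**-1 * B*P*A

# fixed-point check: P = (A+BK)' P (A+BK) + K' R K + Q
resid = abs((A + B*K)**2 * P + K*R*K + Q - P)
print(resid < 1e-9, abs(A + B*K) < 1)  # converged, closed loop stable
```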

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 77/132


MPC and Constrained LQR

• Some MPC formulations also include the terminal constraint xN ∈ X∞



  X∞ = { x : [ umin ; ymin ] ≤ [ K ; C ] (A + BK)k x ≤ [ umax ; ymax ] , ∀k ≥ 0 }

that is, the maximum output admissible set for the closed-loop system
(A + BK, B, C) and constraints umin ≤ Kx ≤ umax , ymin ≤ Cx ≤ ymax

• This makes MPC ≡ constrained LQR where the MPC problem is feasible

• Recursive feasibility is ensured by the terminal constraint for all x(0) such that
the MPC problem is feasible @t = 0

• The domain of feasibility may be reduced because of the additional constraint

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 78/132


Example: AFTI-F16 aircraft

Linearized model:

       [ −0.0151  −60.5651    0       −32.174 ]       [  −2.516  −13.136 ]
  ẋ =  [ −0.0001   −1.3411    0.9929    0     ] x  +  [  −0.1689  −0.2514 ] u
       [  0.00018  43.2541   −0.86939   0     ]       [ −17.251   −1.5766 ]
       [  0         0          1         0    ]       [   0        0      ]

  y = [ 0 1 0 0 ; 0 0 0 1 ] x                   (Kapasouris, Athans, Stein, 1988)

[Figure: AFTI-F16 aircraft with elevator and flaperon control surfaces]

• Inputs: elevator and flaperon (= flap + aileron) angles
• Outputs: attack and pitch angles
• Sampling time: Ts = 0.05 sec (+ ZOH)
• Constraints: ±25◦ on both angles
• Open-loop unstable, poles are −7.6636, −0.0075 ± 0.0556j, 5.4530
go to demo linear/afti16.m (Hybrid Toolbox)
see also afti16.m (MPC Toolbox)
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 79/132
Example: AFTI-F16 aircraft
• Prediction horizon N = 10, control horizon Nu = 2

• Input weights W ∆u = [ 0.1 0 ; 0 0.1 ], W u = 0

• Input constraints umax = −umin = [ 25◦ ; 25◦ ]

[Figure: closed-loop outputs y1 (t), y2 (t) and inputs u(t) over 4 s for two choices of the output weight W y ; increasing the relative weight on y1 speeds up its response]

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 80/132


Example: AFTI-F16 aircraft
• Add output constraints y1,max = −y1,min = 0.5◦

[Figure: closed-loop outputs y1 (t), y2 (t) and input u(t) with W y = [ 1 0 ; 0 1 ]; the constraint on y1 is enforced]

• Soft output constraint = convex penalty with dead zone

  min ρϵ ϵ2   s.t.  y1,min − ϵV1,min ≤ y1 ≤ y1,max + ϵV1,max

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 81/132


Example: AFTI-F16 aircraft
• Linear control (= unconstrained MPC) + input clipping

[Figure: closed-loop outputs y1 (t), y2 (t) and clipped input u(t) with W y = [ 1 0 ; 0 1 ]; the response diverges]

unstable!

Saturation needs to be considered in the control design!

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 82/132


Saturation
• Saturation is dangerous because it can break the control loop

[Block diagram: reference r(t) → linear controller → saturation → actual input ua (t) → process → x(t)]

• MPC takes saturation into account and handles it automatically (and optimally)

[Block diagram: reference r(t) → MPC controller → saturation → actual input ua (t) → process → x(t)]

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 83/132


Tuning guidelines
       N −1
min     Σ   ∥W y (yk+1 − r(t))∥22 + ∥W ∆u ∆uk ∥22 + ρϵ ϵ2
 ∆U    k=0

subj. to  ∆umin ≤ ∆uk ≤ ∆umax , k = 0, . . . , Nu − 1
          ∆uk = 0, k = Nu , . . . , N − 1
          umin ≤ uk ≤ umax , k = 0, . . . , Nu − 1
          ymin − ϵVmin ≤ yk ≤ ymax + ϵVmax , k = 1, . . . , Nc

• weights: the larger the ratio W y /W ∆u the more aggressive the controller
• input horizon: the larger Nu , the more “optimal” but more complex the controller
• prediction horizon: the smaller N , the more aggressive the controller
• constraints horizon: the smaller Nc , the simpler the controller
• limits: controller less aggressive if ∆umin , ∆umax are small
• penalty ρϵ : pick the smallest ρϵ that keeps soft constraints reasonably satisfied

Always try to set Nu as small as possible!

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 84/132


Tuning guidelines - Effect of sampling time
• Let Ts = sampling time used in MPC predictions

• For predicting Tp time units in the future we must set N = ⌈Tp /Ts ⌉

[Figure: sampled predictions over a 6 s window]

• Slew-rate constraints u̇min ≤ du/dt ≤ u̇max on actuators are related to Ts by

  u̇ ≜ du/dt ≈ (uk − uk−1 )/Ts   ⇒   ∆umin = Ts u̇min ≤ ∆uk ≤ Ts u̇max = ∆umax
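Both rules above are one-liners; a minimal sketch (plain Python, values illustrative):

```python
import math

# Choose N from the desired prediction time Tp, and convert an actuator
# slew-rate limit (units per second) into a per-step Delta-u bound.

Ts = 0.05            # prediction sample time [s]
Tp = 2.0             # want to look 2 s ahead
N = math.ceil(Tp / Ts)

udot_max = 40.0      # actuator can move at most 40 units/s
du_max = Ts * udot_max

print(N, du_max)  # 40 steps, 2.0 units per step
```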

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 85/132


Tuning guidelines - Effect of sampling time
• The MPC cost to minimize can be thought of in continuous time as

       ∫ Tp
  J =       ( ∥W y (y(τ ) − r)∥22 + ∥W u (u(τ ) − ur )∥22 + ∥W u̇ u̇(τ )∥22 ) dτ + ρϵ ϵ2
        0

           N −1
  ≈ Ts (    Σ   ∥W y (yk+1 − r)∥22 + ∥W u (uk − ur )∥22 + ∥(W u̇ Ts−1 ) ∆uk ∥22 + (ρϵ Ts−1 ) ϵ2 )
           k=0

with W ∆u = W u̇ Ts−1 and discrete-time slack penalty ρϵ Ts−1

• Hence, when changing the sampling time from T1 to T2 , one can keep the same
MPC cost function by leaving W y , W u unchanged and simply rescaling

  W2∆u = W u̇ /T2 = (T1 /T2 ) W1∆u ,      ρϵ2 = ρϵ /T2 = (T1 /T2 ) ρϵ1

• Note: the Ts used for controller execution can be different from the Ts used in prediction

• Small controller Ts ⇒ fast reaction to set-point changes / disturbances
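The rescaling rule is simple arithmetic; a minimal sketch (plain Python, weight values illustrative):

```python
# Rescale the Delta-u weight and soft-constraint penalty when the sampling
# time changes from T1 to T2, keeping the continuous-time cost unchanged.

def rescale(W_du_1, rho_eps_1, T1, T2):
    return (T1 / T2) * W_du_1, (T1 / T2) * rho_eps_1

W_du_2, rho_2 = rescale(W_du_1=0.1, rho_eps_1=1e5, T1=0.1, T2=0.05)
print(W_du_2, rho_2)  # halving Ts doubles both weights: 0.2, 2e5
```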

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 86/132


Scaling

• Humans think in infinite precision … computers do not!

• Numerical difficulties may arise if variables assume very small/large values

• Example:

  y1 ∈ [−1e−4, 1e−4] [V]   →   y1 ∈ [−0.1, 0.1] [mV]
  y2 ∈ [−1e4, 1e4] [Pa]    →   y2 ∈ [−10, 10] [kPa]

• Ideally all variables should be in [−1, 1]. For example, one can replace y with
y/ymax

• Scaling also possible after formulating the QP problem (=preconditioning)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 87/132


Pre-stabilization of open-loop unstable models
• Numerical issues may arise with open-loop unstable models
• When condensing the QP we substitute xk = Ak x0 + Σj=0,...,k−1 Aj Buk−1−j , which
may lead to a badly conditioned Hessian matrix H

       [ B        0        . . .  0 ]′                       [ B        0        . . .  0 ]
  H =  [ AB       B        . . .  0 ]  diag(Q, . . . , Q, P ) [ AB       B        . . .  0 ]  + diag(R, . . . , R)
       [ ⋮                   ⋱    ⋮ ]                        [ ⋮                   ⋱    ⋮ ]
       [ AN −1 B  AN −2 B  . . .  B ]                        [ AN −1 B  AN −2 B  . . .  B ]

• Pre-stabilizing the system with uk = Kxk + vk leads to xk+1 = AK xk + Bvk ,
AK ≜ A + BK , and therefore xk = AkK x0 + Σj=0,...,k−1 AjK Bvk−1−j
• Input constraints become mixed constraints umin ≤ Kxk + vk ≤ umax

• K can be chosen for example by pole-placement or LQR
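The conditioning issue is already visible on a scalar example (plain Python, gains illustrative): the powers Aᵏ entering the condensed matrices explode for an unstable model, while the pre-stabilized powers A_Kᵏ stay bounded.

```python
# Why condensing explodes for unstable models: with a = 2 the powers a^k that
# enter the condensed prediction matrix grow geometrically; pre-stabilizing
# with u_k = K x_k + v_k replaces a by a_K = a + b*K with |a_K| < 1.

a, b, N = 2.0, 1.0, 30
K = -1.5                       # e.g. from pole placement: a + b*K = 0.5
aK = a + b * K

powers_open = [a**k for k in range(N)]
powers_prestab = [aK**k for k in range(N)]

print(max(powers_open))        # ~5.4e8: badly scaled prediction matrix
print(max(powers_prestab))     # 1.0: well scaled
```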

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 88/132


Observer design for MPC
State observer for MPC

[Block diagram: the optimizer computes u(t) from the reference r(t) and the state estimate x̂(t); an observer reconstructs x̂(t) from the process input u(t) and the measured outputs y(t)]

• Full state x(t) of process may not be available, only outputs y(t)
• Even if x(t) is available, noise should be filtered out
• Prediction and process models may be quite different
• The state x(t) may not have any physical meaning
(e.g., in case of model reduction or subspace identification)

We need to use a state observer

• Example: Luenberger observer x̂(t + 1) = Ax̂(t) + Bu(t) + L(y(t) − C x̂(t))
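A minimal sketch of a Luenberger observer for a scalar plant (plain Python, gains illustrative): the estimation error contracts with the observer pole A − LC.

```python
# Luenberger observer: xhat(t+1) = A xhat + B u + L (y - C xhat).
# Scalar example with A = 0.9, C = 1; the error pole is A - L*C = 0.2.

A, B, C, L = 0.9, 1.0, 1.0, 0.7

x, xhat = 5.0, 0.0                # true initial state unknown to the observer
for t in range(30):
    u = 0.1                       # any input; it cancels in the error dynamics
    y = C * x
    xhat = A * xhat + B * u + L * (y - C * xhat)
    x = A * x + B * u

print(abs(x - xhat) < 1e-12)  # True: error ~ 5 * 0.2^30
```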

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 89/132


Extended model for observer design
[Block diagram: the plant model is driven by manipulated variables u(t), measured disturbances v(t), and unmeasured disturbances d(t); d(t) is the output of a disturbance model driven by white noise nd (t); the measured outputs ym (t) are the plant outputs zm (t) plus measurement noise m(t), generated by a noise model driven by white noise nm (t); yu (t) are unmeasured outputs]

unmeasured disturbance model:   xd (t + 1) = Āxd (t) + B̄nd (t),   d(t) = C̄xd (t) + D̄nd (t)

measurement noise model:        xm (t + 1) = Ãxm (t) + B̃nm (t),   m(t) = C̃xm (t) + D̃nm (t)

• Note: the measurement noise model is not used for optimization, as we want
zm to go to its reference, not ym

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 90/132


Kalman filter design
• Plant model
                                                                  Rudolf Emil Kalman
  x(t + 1) = Ax(t) + Bu u(t) + Bv v(t) + Bd d(t)                  (1930–2016)
  y(t) = Cx(t) + Dv v(t) + Dd d(t)

• Full model for designing the Kalman filter

  [ x(t+1)  ]   [ A  Bd C̄  0 ] [ x(t)  ]   [ Bu ]        [ Bv ]
  [ xd (t+1)] = [ 0  Ā     0 ] [ xd (t)] + [ 0  ] u(t) + [ 0  ] v(t) +
  [ xm (t+1)]   [ 0  0     Ã ] [ xm (t)]   [ 0  ]        [ 0  ]

                [ Bd D̄ ]          [ 0 ]           [ Bu ]
                [ B̄    ] nd (t) + [ 0 ] nm (t) + [ 0  ] nu (t)
                [ 0    ]          [ B̃ ]           [ 0  ]

  ym (t) = [ Cm  Ddm C̄  C̃ ] [ x(t); xd (t); xm (t) ] + Dvm v(t) + D̄m nd (t) + D̃m nm (t)

• nd (t) = source of modeling errors

• nm (t) = source of measurement noise
• nu (t) = white noise on input u (added to compute the Kalman gain)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 91/132


Observer implementation
• Measurement update
ŷm (t|t − 1) = Cm x̂(t|t − 1)

x̂(t|t) = x̂(t|t − 1) + M (ym (t) − ŷm (t|t − 1))

• Time update

x̂(t + 1|t) = Ax̂(t|t − 1) + Bu(t) + L(ym (t) − ŷm (t|t − 1))

• Note that if L = AM then x̂(t + 1|t) = Ax̂(t|t) + Bu(t)

• The separation principle holds (under certain assumptions)


(Muske, Meadows, Rawlings, 1994)
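The equivalence noted above (L = AM) is easy to verify numerically; a scalar sketch (plain Python, values illustrative):

```python
# With L = A*M, the two-step form "measurement update + time update" equals
# the one-step predictor xhat(t+1|t) = A xhat(t|t-1) + B u + L (y - C xhat(t|t-1)).

A, B, C, M = 0.8, 1.0, 1.0, 0.5
L = A * M

xpred, u, y = 2.0, 0.3, 1.7        # xhat(t|t-1), input, measurement

# measurement update, then time update
xfilt = xpred + M * (y - C * xpred)            # xhat(t|t)
xnext_a = A * xfilt + B * u

# equivalent one-step predictor form
xnext_b = A * xpred + B * u + L * (y - C * xpred)

print(abs(xnext_a - xnext_b) < 1e-12)  # True
```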

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 92/132


I/O feedthrough
• We have always assumed no feedthrough from u to the measured outputs:

  yk = Cxk + Duk + Dv vk + Dd dk ,   with D = 0

• This avoids a static loop between the state observer (x̂(t|t) depends on u(t) via
ŷm (t|t − 1)) and MPC (u(t) depends on x̂(t|t))

• Often D = 0 is not a limiting assumption as


– often actuator dynamics must be considered (u is the set-point to a low-level
controller of the actuators)
– most physical models described by ordinary differential equations are strictly
causal, and so is the discrete-time version of the model

• In case D ̸= 0, we can assume a delay in executing the commanded u,

  yk = Cm xk + Duk−1

and treat u(t − 1) as an extra state

[Figure: timing diagram showing yk depending on uk−1 over the sampling intervals]

• Not an issue for unmeasured outputs

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 93/132


Integral action in MPC
Output integrators

[Block diagram: same extended model as before, with output integrators driven by white noise ni (t) added between the plant outputs zm (t) and the measured outputs ym (t)]
• Introduce output integrators as additional disturbance models


• Under certain conditions, observer + controller provide zero offset in
steady-state

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 94/132


Output integrators and offset-free tracking
• Add constant unknown disturbances on measured outputs:

  xk+1 = Axk + Buk
  dk+1 = dk
  yk   = Cxk + dk

• Use the extended model to design a state observer (e.g., Kalman filter) that
estimates both the state x̂(t) and the disturbance d̂(t) from y(t)

• Why we get offset-free tracking in steady-state (intuitively):

  – the observer makes C x̂(t) + d̂(t) → y(t)     (estimation error)
  – the MPC controller makes C x̂(t) + d̂(t) → r(t)   (predicted tracking error)
  – the combination of the two makes y(t) → r(t)     (actual tracking error)

• In steady state, the term d̂(t) compensates for model mismatch

• See more on survey paper (Pannocchia, Gabiccini, Artoni, 2015)
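The intuition can be reproduced on a scalar example (plain Python; plant, gains, and the simplistic steady-state "MPC" are all illustrative): despite a deliberate gain mismatch, the estimated disturbance absorbs the model error and the output converges to the reference with no offset.

```python
# Offset-free tracking with an output-disturbance observer.
# The plant gain b_true differs from the model gain b_model; the estimated
# disturbance dhat compensates for the mismatch so that y -> r.

a, b_model, b_true = 0.5, 1.0, 1.3
r = 2.0
Lx, Ld = 0.5, 0.5                   # observer gains for [x; d]

x, xhat, dhat = 0.0, 0.0, 0.0
for t in range(200):
    # naive steady-state "MPC": input making the *predicted* output equal r,
    # i.e. b_model*u/(1-a) + dhat = r
    u = (1 - a) * (r - dhat) / b_model
    y = x                           # C = 1
    e = y - (xhat + dhat)           # innovation
    xhat = a * xhat + b_model * u + Lx * e
    dhat = dhat + Ld * e
    x = a * x + b_true * u          # true plant

print(abs(x - r) < 1e-6)  # True: no steady-state offset despite mismatch
```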

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 95/132


Output integrators and offset-free tracking
(Pannocchia, Rawlings, 2003)

• More general result: consider the disturbance model

  xk+1 = Axk + Buk + Bd dk
  dk+1 = dk                          special case: output integrators
  yk   = Cxk + Dd dk                 (Bd = 0 , Dd = I)

Theorem
Let the number nd of disturbances be equal to the number ny of measured outputs.
Then all pairs (Bd , Dd ) satisfying

  rank [ I − A   −Bd ]
       [ C        Dd ]  = nx + ny

guarantee zero offset in steady-state (y = r), provided that constraints are not
active in steady-state and the closed-loop system is asymptotically stable.

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 96/132


Disturbance model example
• Open-loop system:

  A = [ 1 −1 −2  1 ]        [ 1 ]
      [ 0 −2  3  2 ]    B = [ 0 ]    Cm = [ 1 1 0 0 ]
      [ 0  2 −1 −1 ]        [ 1 ]
      [ 0  2  0 −2 ]        [ 1 ]

• We cannot add an output integrator since the pair ([ A 0 ; 0 1 ] , [ Cm 1 ]) is not
observable

• Add an input integrator: Bd = B , Dd = 0,  rank [ I − A  −Bd ; Cm  Dd ] = 5 = nx + ny

  x(t + 1) = Ax(t) + B(u(t) + d(t))
  d(t + 1) = d(t)
  y(t)     = Cx(t) + 0 · d(t)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 97/132


Error feedback
(Kwakernaak, Sivan, 1972)

• Idea: add the integral of the tracking error as an additional state (original idea
developed for integral action in state-feedback control)

• Extended prediction model:

  x(t + 1) = Ax(t) + Bu u(t) + 0 · r(t)     ← r(t) is seen as a measured disturbance
  q(t + 1) = q(t) + Cx(t) − r(t)            ← integral action on the tracking error
  y(t)     = Cx(t)

• ∥W i q∥22 is penalized in the cost function, otherwise it is useless. W i is a new


tuning knob

• Intuitively, if the MPC closed-loop is asymptotically stable then q(t) converges


to a constant, and hence y(t) − r(t) converges to zero.

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 98/132


Reference / Command Governor
(Bemporad, Mosca, 1994) (Gilbert, Kolmanovsky, Tan, 1994) (Garone, Di Cairano, Kolmanovsky, 2016)
[Block diagram: a reference governor converts the desired reference r(t) into the actual reference w(t) applied to a pre-stabilized "primal" loop (local feedback + process), so that the controlled outputs y(t) ≈ r(t); the governor uses the estimated plant + controller states x(t) and keeps the constrained variables c(t) feasible]

• MPC manipulates set-points to a linear feedback loop to enforce constraints


• Separation of problems:
– Local feedback guarantees offset-free tracking y(t) − w(t) → 0 in steady-state
in the absence of constraints
– Actual reference w(t) generated by MPC to take care of constraints

• Advantages: small-signal properties preserved, fewer variables to optimize

w(t) = arg minw ∥w − r(t)∥22


s.t. ct+k ∈ C
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 99/132
Integral action and ∆u-formulation

• In control systems, integral action occurs if the controller has a


transfer-function from the output to the input of the form

  u(t) = [ B(z) / ((z − 1)A(z)) ] y(t),   B(1) ̸= 0

• One may think that the ∆u-formulation of MPC provides integral action ...

... is it true ?

• Example: we want to regulate to zero the output y(t) of the scalar system

  x(t + 1) = αx(t) + βu(t)
  y(t) = x(t)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 100/132


Integral action and ∆u-formulation
• Design an unconstrained MPC controller with horizon N = 1

  ∆u(t) = arg min∆u0  ∆u20 + ρy12
          s.t.  u0 = u(t − 1) + ∆u0
                y1 = x1 = αx(t) + β(∆u0 + u(t − 1))

• By substitution, we get

  ∆u(t) = arg min∆u0  ∆u20 + ρ(αx(t) + βu(t − 1) + β∆u0 )2
        = arg min∆u0  (1 + ρβ 2 )∆u20 + 2βρ(αx(t) + βu(t − 1))∆u0
        = − (βρα/(1 + ρβ 2 )) x(t) − (ρβ 2 /(1 + ρβ 2 )) u(t − 1)

• Since x(t) = y(t) and u(t) = u(t − 1) + ∆u(t) we get the linear controller

  u(t) = − [ (ρβα/(1 + ρβ 2 )) z / ( z − 1/(1 + ρβ 2 ) ) ] y(t)      No pole in z = 1

• Reason: MPC gives a feedback gain on both x(t) and u(t − 1), not just on x(t)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 101/132


Integral action and ∆u-formulation
• Numerical test (with MPC Toolbox)
alpha=.5;
beta=3;
sys=ss(alpha,beta,1,0);sys.ts=1;
rho=1;p=1;m=1;
weights=struct('OV',rho,'MVRate',1);
mpc1=mpc(sys,1,p,m,weights);
setoutdist(mpc1,'remove',1); % no output disturbance model
mpc1tf=tf(mpc1);
sum(mpc1tf.den{1})

ans = 0.9000

• Now add an output integrator as disturbance model


setoutdist(mpc1,'integrators'); % add output disturbance model
mpc1tf=tf(mpc1);
sum(mpc1tf.den{1})

ans = -6.4185e-16

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 102/132


MPC and internal model principle

• Assume we have an internal model of the reference signal r(t)

xr (t + 1) = Ar xr (t)
r(t) = Cr xr (t)

• Consider the augmented prediction model

  [ xk+1  ]   [ A  0  ] [ xk  ]   [ B ]
  [ xrk+1 ] = [ 0  Ar ] [ xrk ] + [ 0 ] uk

• Design a state observer (e.g., via pole placement) to estimate x(t), xr (t) from

  [ y(t) ]   [ C  0  ] [ x(t)  ]
  [ r(t) ] = [ 0  Cr ] [ xr (t) ]

• Design an MPC controller with output ek = yk − rk = [ C  −Cr ] [ xk ; xrk ]

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 103/132


MPC and internal model principle
• Penalize Σk=1,...,N e2k (+ small penalty on u2 or ∆u2 )

• Set control horizon Nu = N

• Example: r(t) = r0 sin(ωt + ϕ0 ), ω = 1 rad/s

  A = [ 1.6416 −0.7668 0.2416 ; 1 0 0 ; 0 0.5 0 ] ,   B = [ 0.125; 0; 0 ]
  C = [ 0.1349 0.0159 −0.1933 ]
  Ar = [ 1.7552 −1 ; 1 0 ] ,   Cr = [ 0.2448 0.2448 ]

  observer poles placed in {0.3, 0.4, 0.5, 0.6, 0.7}
  sample time Ts = 0.5 s, input saturation −2 ≤ u(t) ≤ 2

[Figure: closed-loop output tracking the sinusoidal reference, and the corresponding input signal respecting the ±2 saturation limits, over 50 s]

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 104/132


Model Predictive Control Toolbox
(Bemporad, Ricker, Morari, 1998-present)

• Several MPC design features available:


– explicit MPC

– time-varying/adaptive models, nonlinear models, weights, constraints

– stability/frequency analysis of closed-loop (inactive constraints)

– …
• Prediction models can be generated by the Identification Toolbox or
automatically linearized from Simulink

• Fast command-line MPC functions


(compiled EML-code)

• Graphical User Interface

• Simulink library (compiled EML-code)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 105/132


MPC Simulink Library

>> mpclib

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 106/132


MPC Simulink block

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 107/132


MPC Simulink block

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 108/132


MPC Simulink block

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 109/132


MPC Graphical User Interface

>> mpcDesigner (old version: >> mpctool)

See video on Mathworks’ web site (link)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 110/132


Example: MPC of a DC servomotor
[Figure: DC servomotor: voltage V drives the motor (inertia JM , friction βM , armature resistance R); a gear with ratio ρ and a flexible shaft with torsional rigidity kθ connect the motor angle θM to the load angle θL (inertia JL , friction βL ); T is the torsional torque]

Symbol   Value (MKS)   Meaning
LS       1.0           shaft length
dS       0.02          shaft diameter
JS       negligible    shaft inertia
JM       0.5           motor inertia
βM       0.1           motor viscous friction coefficient
R        20            resistance of armature
kT       10            motor constant
ρ        20            gear ratio
kθ       1280.2        torsional rigidity
JL       50 JM         nominal load inertia
βL       25            load viscous friction coefficient
Ts       0.1           sampling time

>> mpcmotor          see also linear/dcmotor.m (Hybrid Toolbox)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 111/132


DC servomotor model

 
       [ 0            1          0             0                   ]       [ 0          ]
  ẋ =  [ −kθ /JL     −βL /JL     kθ /(ρJL )    0                   ] x  +  [ 0          ] V
       [ 0            0          0             1                   ]       [ 0          ]
       [ kθ /(ρJM )   0         −kθ /(ρ2 JM ) −(βM + kT2 /R)/JM    ]       [ kT /(RJM ) ]

  x = [ θL ; θ̇L ; θM ; θ̇M ] ,    y = [ θL ; T ]

  θL = [ 1 0 0 0 ] x ,    T = [ kθ  0  −kθ /ρ  0 ] x

>> [plant, tau] = mpcmotormodel;


>> plant = setmpcsignals(plant,'MV',1,'MO',1,'UO',2);

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 112/132


DC servomotor constraints

• The input DC voltage V is bounded within the range

|V | ≤ 220 V

• Finite shear strength τadm = 50N/mm2 requires that the torsional torque T
satis昀椀es the constraint
|T | ≤ 78.5398 N m
• Sampling time of model/controller: Ts = 0.1s

>> MV = struct('Min',-220,'Max',220);
>> OV = struct('Min',{-Inf,-78.5398},'Max',{Inf,78.5398});
>> Ts = 0.1;

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 113/132


DC servomotor - MPC setup
       p−1
min     Σ   ∥W y (yk+1 − r(t))∥2 + ∥W ∆u ∆uk ∥2 + ρϵ ϵ2
 ∆U    k=0

subj. to  ∆umin ≤ ∆uk ≤ ∆umax , k = 0, . . . , m − 1
          ∆uk = 0, k = m, . . . , p − 1
          umin ≤ uk ≤ umax , k = 0, . . . , m − 1
          ymin − ϵVmin ≤ yk ≤ ymax + ϵVmax , k = 1, . . . , p

>> Weights = struct('MV',0,'MVRate',0.1,'OV',[0.1 0]);


>> p = 10;
>> m = 2;
>> mpcobj = mpc(plant,Ts,p,m,Weights,MV,OV);

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 114/132


DC servomotor - Closed-loop simulation


Closed-loop simulation using


the sim command

>> Tstop = 8; % seconds


>> Tf = round(Tstop/Ts); % simulation iterations
>> r = [pi*ones(Tf,1) zeros(Tf,1)];
>> [y1,t1,u1] = sim(mpcobj,Tf,r);

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 115/132


DC servomotor - Closed-loop simulation

closed-loop
simulation in
Simulink

>> mdl = 'mpc_motor';


>> open_system(mdl)
>> sim(mdl)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 116/132


DC servomotor - Closed-loop simulation
[Figure: closed-loop load angle θL (t), torsional torque T (t), and input voltage V (t) over 20 s]
• same MPC tuning parameters

• setpoint r(t) = π sin(0.4t) rad

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 117/132


Example: MPC of active suspensions
• Take simulation model sldemo_suspn in MATLAB

[sketch: vehicle body with front/rear suspensions at distances Lf, Lr from the c.g., bounce z, road height h, active forces uf, ur]

Ffront = 2Kf (Lf θ − (z + h)) + 2Cf (Lf θ̇ − ż) + uf

Ffront, Frear = upward force on body from front/rear suspension


Kf , Kr = front and rear suspension spring constant
Cf , Cr = front and rear suspension damping rate
Lf , Lr = horizontal distance from c.g. to front/rear suspension
θ, θ̇ = pitch (rotational) angle and its rate of change
z, ż = bounce (vertical) distance and its rate of change

• For control purposes we add external forces uf , ur as manipulated variables

• The system has 4 states: θ, θ̇, z , ż

• Measured disturbances: road height h, pitch moment from vehicle acceleration

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 118/132


Example: MPC of active suspensions

• Step #1: get a linear discrete-time model (4 inputs, 3 outputs, 4 states)

>> plant_mdl = 'suspension_model';


>> op = operspec(plant_mdl);
>> [op_point, op_report] = findop(plant_mdl,op);
>> sys = linearize(plant_mdl, op_point);
>> Ts = 0.025; % sample time (s)
>> plant = c2d(sys,Ts);

>> plant = setmpcsignals(plant,'MV',[1 2],'MD',[3 4],'MO',[1 2],'UO',3);

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 119/132


Example: MPC of active suspensions

• Step #2: design the MPC controller

>> dfmax = .1/.1; % [kN/s]
>> MV = struct('RateMin',{-Ts*dfmax,-Ts*dfmax},...
               'RateMax',{Ts*dfmax,Ts*dfmax});
>> OV = [];
>> Weights = struct('MV',[0 0],'MVRate',[.01 .01],...
                    'OV',[.01 0 10]);

>> p = 50; % Prediction horizon
>> m = 5;  % Control horizon
>> mpcobj = mpc(plant,Ts,p,m,Weights,MV,OV);

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 120/132


Example: MPC of active suspensions
[Simulink diagram: Vehicle Suspension Model (front/rear suspension forces, pitch dynamics 1/Iyy, bounce dynamics 1/Mb, road height and pitch moment inputs) in closed loop with the MPC Controller block]
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 121/132


Example: MPC of active suspensions
• Closed-loop MPC results

[plots: pitch rate θ̇ (rad/s) and bounce rate ż (cm/s), open-loop vs. closed-loop; MPC front force uf (N) and rear force ur (N); pitch moment My (Nm); road height h and relative displacement Z+h (cm)]
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 122/132


Frequency analysis of MPC (for small signals)
• Unconstrained MPC gain + linear observer = linear dynamical system

• Closed-loop MPC analysis can be performed using standard frequency-domain


tools (e.g., Bode plots for sensitivity analysis)
[block diagram: MPC controller and state estimate x̂(t) in closed loop with the process, with reference r(t), input u(t), and measured output ym(t)]

>> sys=ss(mpcobj) returns the LTI object of the MPC controller


>> sys=tf(mpcobj) (when constraints are inactive)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 123/132


Controller matching
• Given a desired linear controller u = Kd x, find a set of weights Q, R, P defining an MPC problem such that

    −[I 0 ... 0] H⁻¹F = Kd

i.e., the MPC law coincides with Kd when the constraints are inactive

• Recall that the QP matrices are H = 2(R̄ + S̄′Q̄S̄), F = 2S̄′Q̄T̄, where

    Q̄ = diag(Q, ..., Q, P),    R̄ = diag(R, ..., R)

    S̄ = [ B, 0, ..., 0;  AB, B, ..., 0;  ...;  A^{N−1}B, A^{N−2}B, ..., B ],    T̄ = [ A; A²; ...; A^N ]

• The above inverse optimality problem can be cast to a convex problem


(Di Cairano, Bemporad, 2010)

• Result extended to match any linear controller/observer by LQR/Kalman filter


(Zanon, Bemporad, 2022)
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 124/132
Controller matching - Example
(Di Cairano, Bemporad, 2010)

• Open-loop process: y(t) = 1.8y(t − 1) + 1.2y(t − 2) + u(t − 1)

• Constraints: −24 ≤ u(t) ≤ 24, y(t) ≥ −5

• Desired controller = PID with gains KI = 0.248, KP = 0.752, KD = 2.237:

    u(t) = −( KI I(t) + KP y(t) + (KD/Ts)(y(t) − y(t−1)) )
    I(t) = I(t−1) + Ts y(t)

  with controller state x(t) = [ y(t−1); y(t−2); I(t−1); u(t−1) ]

• Matching result (using inverse LQR):

    Q* = [  6.401   0.064  −0.001   0.020
            0.064   6.605   0.006   0.080
           −0.001   0.006   6.647  −0.020
            0.019   0.080  −0.020   6.378 ],    R* = 1,

    P* = [ 422.7  241.7  50.39  201.4
           241.7  151.0  32.13  120.4
           50.39  32.13  19.85  26.75
           201.4  120.4  26.75  106.6 ]

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 125/132


Controller matching - Example
[plots: output y and input u under the matched MPC, which respects the constraints, vs. what the PID would apply and the resulting diverging response y2, u2]

• Note: this is not trivially a saturation of a PID controller; in this case, saturating the PID output leads to closed-loop instability!
"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 126/132


Linear MPC based on LP
(Propoi, 1963) (Bemporad, Borrelli, Morari, 2003)

• Linear prediction model:   x_{k+1} = A x_k + B u_k,   y_k = C x_k,   with x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ

• Constraints to enforce:   u_min ≤ u(t) ≤ u_max,   y_min ≤ y(t) ≤ y_max

• Constrained optimal control problem (∞-norms, ∥v∥∞ ≜ max_{i=1,...,n} |v_i|):

    min_z  ∥P x_N∥∞ + Σ_{k=0}^{N−1} ( ∥Q x_k∥∞ + ∥R u_k∥∞ )
    s.t.   u_min ≤ u_k ≤ u_max,  k = 0, ..., N−1
           y_min ≤ y_k ≤ y_max,  k = 1, ..., N

  where z = [u_0; u_1; ...; u_{N−1}] and R, Q, P are full rank

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 127/132


Linear MPC based on LP

• Basic trick: introduce slack variables (Qⁱ = i-th row of Q, similarly Rⁱ, Pⁱ):

    ϵ_k^x ≥ ∥Q x_k∥∞   ⟺   ϵ_k^x ≥ Qⁱ x_k,  ϵ_k^x ≥ −Qⁱ x_k,   i = 1, ..., n,  k = 0, ..., N−1
    ϵ_k^u ≥ ∥R u_k∥∞   ⟺   ϵ_k^u ≥ Rⁱ u_k,  ϵ_k^u ≥ −Rⁱ u_k,   i = 1, ..., m,  k = 0, ..., N−1
    ϵ_N^x ≥ ∥P x_N∥∞   ⟺   ϵ_N^x ≥ Pⁱ x_N,  ϵ_N^x ≥ −Pⁱ x_N,   i = 1, ..., n

• Example:   min_x |x|   →   min_{x,ϵ} ϵ   s.t.  ϵ ≥ x,  ϵ ≥ −x

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 128/132


Linear MPC based on LP
• Linear prediction model:   x_k = A^k x_0 + Σ_{i=0}^{k−1} Aⁱ B u_{k−1−i}

• Optimization problem:

    V(x_0) = min_z [1 ... 1 0 ... 0] z      (linear objective)
    s.t.   G z ≤ W + S x_0                  (linear constraints)

  → a Linear Program (LP)

• optimization vector: z ≜ [ϵu0 . . . ϵuN −1 ϵx1 . . . ϵxN u′0 , . . . , u′N −1 ]′ ∈ RN (nu +2)

• G, W , S are obtained from weights Q, R, P , and model matrices A, B , C

• Q, R, P can be selected to guarantee closed-loop stability


(Bemporad, Borrelli, Morari, 2003)

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 129/132


Extension to arbitrary convex PWA functions
Result (Schechter, 1987)
Every convex piecewise affine function ℓ : Rⁿ → R can be represented as the max of affine functions, and vice versa

[plot: ℓ(x) drawn as the upper envelope of four lines, ℓ(x) = max{a′1 x + b1, ..., a′4 x + b4}]

Example: ℓ(x) = |x| = max{x, −x}

• Constrained optimal control problem:

    min_U  ℓ_N(x_N) + Σ_{k=0}^{N−1} ℓ_k(x_k, u_k)
    s.t.   g_k(x_k, u_k) ≤ 0,  k = 0, ..., N−1
           g_N(x_N) ≤ 0

  where ℓ_k, ℓ_N, g_k, g_N are arbitrary convex piecewise affine (PWA) functions

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 130/132


Convex PWA optimization problems and LP
• Minimization of a convex PWA function ℓ(x) = max{a′1 x + b1, ..., a′4 x + b4}:

    min_{ϵ,x} ϵ
    s.t.  ϵ ≥ a′1 x + b1
          ϵ ≥ a′2 x + b2
          ϵ ≥ a′3 x + b3
          ϵ ≥ a′4 x + b4

• By construction ϵ ≥ max{a′1 x + b1, a′2 x + b2, a′3 x + b3, a′4 x + b4}

• By contradiction it is easy to show that at the optimum we have that

    ϵ = max{a′1 x + b1, a′2 x + b2, a′3 x + b3, a′4 x + b4}

• Convex PWA constraints ℓ(x) ≤ 0 can be handled similarly by imposing a′i x + bi ≤ 0, ∀i = 1, 2, 3, 4

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 131/132


LP-based vs QP-based MPC
• QP-based and LP-based MPC share the same set of feasible inputs (Gz ≤ W + Sx)

• When constraints dominate over performance there is little difference


between them (e.g., during transients)

• Small-signal response, however, is usually less smooth with LP than with QP,
because in LP an optimal point is always on a vertex

[sketch: over the same feasible set {z : Gz ≤ W + Sx}, the LP optimizer is always on a vertex, while the QP optimizer may lie anywhere]

In the LP case the constraint set is {z : Gz ≤ W + Sx, Ḡz + Ēϵ ≤ W̄ + S̄x}, where ϵ are the additional slack variables introduced to represent convex PWA costs

"Model Predictive Control" - © 2023 A. Bemporad. All rights reserved. 132/132
