An Efficient Off-Line Formulation of Robust Model Predictive Control Using Linear Matrix Inequalities


Available online at www.sciencedirect.com
Automatica 39 (2003) 837-846
www.elsevier.com/locate/automatica

Brief Paper

An efficient off-line formulation of robust model predictive control using linear matrix inequalities

Zhaoyang Wan, Mayuresh V. Kothare

Chemical Process Modeling and Control Research Center, Department of Chemical Engineering, Lehigh University, Bethlehem, PA 18015, USA
Received 11 September 2000; received in revised form 25 October 2001; accepted 28 July 2002
Abstract
The practicality of model predictive control (MPC) is partially limited by its ability to solve optimization problems in real time. Moreover, the on-line computational demand for synthesizing a robust MPC algorithm will likely grow significantly with the problem size. In this paper, we use the concept of an asymptotically stable invariant ellipsoid to develop a robust constrained MPC algorithm which gives a sequence of explicit control laws corresponding to a sequence of asymptotically stable invariant ellipsoids constructed off-line one within another in state space. This off-line approach can address a broad class of model uncertainty descriptions with guaranteed robust stability of the closed-loop system and substantial reduction of the on-line MPC computation. The controller design is illustrated with two examples.
© 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Model predictive control; Linear matrix inequalities; Multivariable constrained systems; Asymptotic stability; Invariant ellipsoid; On-line computation; Robust stability
1. Introduction
Model predictive control (MPC) is an effective control algorithm for dealing with multivariable constrained control problems that are encountered in the chemical process industries. At each sampling time, MPC uses an explicit process model and information about input and output constraints to compute process inputs so as to optimize future plant behavior over the prediction horizon. Although more than one input move is computed, the controller implements only the first computed input (Morari & Lee, 1999) and repeats these calculations at the next sampling time. Since models are only an approximation of the real process, it is extremely important for MPC to be robust to model uncertainty (Bemporad & Morari, 1999).
For robust constrained state feedback MPC and a finite input horizon, a standard approach to synthesize a stabilizing MPC is to use the optimal input sequence at time k as a feasible input sequence at time k+1, and force the feasible cost at time k+1 to be less than the optimal cost at time k for each model in the uncertain set (Badgwell, 1997; Primbs & Nevistić, 2000). This approach generally leads to a quadratic program (QP) that is solved at each sampling time.

* This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Frank Allgöwer under the direction of Editor Sigurd Skogestad.
* Corresponding author. Tel.: +1-610-758-6654; fax: +1-610-758-5057.
E-mail addresses: zhw2@lehigh.edu (Z. Wan), mayuresh.kothare@lehigh.edu (M.V. Kothare).
For an infinite input and output horizon, a closed-loop feedback law must be adopted to facilitate a finite-dimensional formulation (Kothare, Balakrishnan, & Morari, 1996). At each sampling time, an optimal upper bound on the worst-case performance cost over the infinite horizon is obtained by forcing a quadratic function of the state to decrease at each prediction time by at least the amount of the worst-case performance cost at that prediction time. If the optimal feedback law computed at time k is applied at time k+1, the feasible upper bound at time k+1 must be less than the optimal upper bound at time k (Kothare et al., 1996). Since the optimal solution at time k+1 must be less than or equal to the feasible solution at time k+1, the closed-loop system is asymptotically stable. In this formulation, at each sampling time, a state feedback law is computed by semidefinite programming (SDP) involving linear matrix inequality (LMI) constraints (Kothare et al., 1996).
The requirement of optimality leads to high on-line MPC computation and limits the application of MPC to relatively slow dynamics and small-scale processes. Moreover, when MPC incorporates explicit model uncertainty, the resulting on-line computation will likely grow significantly (Braatz, Young, Doyle, & Morari, 1994) with the number of vertices of the uncertainty set, which itself grows exponentially with the number of independent uncertain process parameters (Kothare et al., 1996).
Realizing MPC's potentially high on-line computational demand, researchers have begun to study the possibility of fast computation of an optimal or suboptimal solution to the optimization problems associated with MPC. For nominal constrained MPC, Bemporad, Morari, Dua, and Pistikopoulos (2002) explicitly characterized the solution of the constrained QP problem of MPC as a piecewise linear and continuous state feedback law. Connections between nominal constrained MPC and anti-windup control schemes were investigated in Cherukuri and Nikolaou (1998), De Doná and Goodwin (2000) and Zheng (1999). VanAntwerp and Braatz (2000) developed the iterated ellipsoid algorithm, in which an ellipsoid is used to approximate the linear input constraint set in an off-line calculation and is rescaled during the on-line calculation. Zheng (1999) approximated the optimal input moves over a control horizon by the current input subject to constraints and the unconstrained solution of future inputs subject to saturation. For constrained systems with polytopic model uncertainty, the receding horizon dual-mode paradigm was introduced in Lee and Kouvaritakis (2000) to reduce computational complexity. For constrained nonlinear stable systems, Chen and Allgöwer (1998a) reduced the on-line computation of quasi-infinite horizon nonlinear MPC (Chen & Allgöwer, 1998b) by removing the nonlinear terminal constraints and reducing the control horizon.
In this paper, we develop an off-line formulation of robust constrained MPC applicable to a broad class of uncertainty descriptions that gives a sequence of explicit control laws corresponding to a sequence of asymptotically stable invariant ellipsoids constructed one inside another in state space. With this off-line approach, the computation of robust MPC is reduced significantly with minor loss in performance, thereby potentially facilitating the implementation of robust constrained MPC in fast processes and large-scale systems.
Notation: $\mathbb{R}$ is the set of real numbers. For a matrix $A$, $A^T$ denotes its transpose, $A^{-1}$ its inverse (if it exists), $\|A\|_2$ its induced 2-norm, and $\bar\sigma(A)$ its maximum singular value. For a set of scalars $\{a_i\}$ (or matrices $\{A_i\}$), $a_i$ (or $A_i$) denotes the $i$th scalar (or matrix). The matrix inequality $A > B$ ($A \ge B$) means that $A$ and $B$ are square symmetric and $A - B$ is positive (semi-)definite. $I$ denotes the identity matrix. For a vector $x$, $\|x\|_P$, $P > 0$, denotes its weighted vector 2-norm $\sqrt{x^T P x}$, and $x_i$ its $i$th component. $x(k)$ or $x(k|k)$ denotes the state measured at real time $k$; $x(k+i|k)$ ($i \ge 1$) is the state at prediction time $k+i$ predicted at real time $k$.
2. Background
2.1. Models for uncertain systems
Consider a linear time-varying (LTV) system

$x(k+1) = A(k)\,x(k) + B(k)\,u(k)$,
$y(k) = C\,x(k)$,                                               (1)
$[A(k)\ \ B(k)] \in \Omega$,

where $u(k) \in \mathbb{R}^{n_u}$ is the control input, $x(k) \in \mathbb{R}^{n_x}$ is the state of the plant and $y(k) \in \mathbb{R}^{n_y}$ is the plant output. For polytopic uncertainty, $\Omega$ is the polytope $\mathrm{Co}\{[A_1\ B_1], \ldots, [A_L\ B_L]\}$, where Co denotes the convex hull and $[A_j\ B_j]$ are the vertices of the convex hull. Any $[A\ B]$ within the convex set $\Omega$ is a linear combination of the vertices, $[A\ B] = \sum_{j=1}^{L} \lambda_j [A_j\ B_j]$ with $\sum_{j=1}^{L} \lambda_j = 1$, $0 \le \lambda_j \le 1$. For norm-bound uncertainty, the LTV system is expressed as an LTI system with uncertainties or perturbations appearing in a feedback loop:

$x(k+1) = A\,x(k) + B\,u(k) + B_p\,p(k)$,
$y(k) = C\,x(k)$,
$q(k) = C_q\,x(k) + D_{qu}\,u(k)$,                              (2)
$p(k) = (\Delta q)(k)$,

where the operator $\Delta = \mathrm{diag}(\Delta_1, \ldots, \Delta_r)$ with $\Delta_i : \mathbb{R}^{n_i} \to \mathbb{R}^{n_i}$, $i = 1, \ldots, r$. $\Delta$ can represent either a memoryless time-varying matrix with $\|\Delta_i(k)\|_2 = \bar\sigma(\Delta_i(k)) \le 1$, $k \ge 0$, or a convolution operator (e.g., a stable LTI dynamical system) with the operator norm induced by the truncated $\ell_2$-norm less than 1, i.e., $\sum_{j=0}^{k} p_i(j)^T p_i(j) \le \sum_{j=0}^{k} q_i(j)^T q_i(j)$, $i = 1, \ldots, r$, $k \ge 0$. When $\Delta$ is memoryless time varying, the uncertainty set is $\Omega = \{[A + B_p \Delta C_q\ \ \ B + B_p \Delta D_{qu}],\ \bar\sigma(\Delta_i(k)) \le 1\}$.
2.2. On-line robust constrained MPC using LMIs
Consider the following problem, which minimizes the worst-case infinite horizon quadratic objective function:

$\min_{u(k+i|k)=F(k)x(k+i|k)}\ \max_{[A(k+i)\ B(k+i)] \in \Omega,\ i \ge 0}\ J_\infty(k)$,          (3)

$J_\infty(k) = \sum_{i=0}^{\infty} \big[ x(k+i|k)^T Q_1 x(k+i|k) + u(k+i|k)^T R u(k+i|k) \big]$          (4)

with $Q_1 > 0$, $R > 0$, subject to (1) or (2) and

$|u_r(k+i|k)| \le u_{r,\max}$,  $i \ge 0$,  $r = 1, 2, \ldots, n_u$,          (5)

$|y_r(k+i|k)| \le y_{r,\max}$,  $i \ge 1$,  $r = 1, 2, \ldots, n_y$.          (6)
In (3), we assume that at each sampling time $k$ a state feedback law $u(k+i|k) = F(k)x(k+i|k)$ is used to minimize the worst-case value of $J_\infty(k)$. Following Kothare et al. (1996), we now derive an upper bound on $J_\infty(k)$. At sampling time $k$, define a quadratic function $J(x) = x^T P(k) x$, $P(k) > 0$. For any $[A(k+i)\ B(k+i)] \in \Omega$, $i \ge 0$, suppose $J(x)$ satisfies the following robust stability constraint:

$J(x(k+i+1|k)) - J(x(k+i|k)) \le -\big[ x(k+i|k)^T Q_1 x(k+i|k) + u(k+i|k)^T R u(k+i|k) \big]$.          (7)

Summing (7) from $i = 0$ to $\infty$ and requiring $x(\infty|k) = 0$ or $J(x(\infty|k)) = 0$, we get

$\max_{[A(k+i)\ B(k+i)] \in \Omega,\ i \ge 0} J_\infty(k) \le J(x(k|k)) \le \gamma$,          (8)

so (7) and (8) give an upper bound on $J_\infty(k)$. The condition $J(x(k|k)) \le \gamma$ in (8) can be expressed equivalently as the LMIs

$\begin{bmatrix} 1 & x(k|k)^T \\ x(k|k) & Q \end{bmatrix} \ge 0$,  $Q > 0$,          (9)

where $Q = \gamma P(k)^{-1}$. The robust stability constraint (7) for system (1) is satisfied if, for each vertex of $\Omega$ (Kothare et al., 1996),

$\begin{bmatrix} Q & Q A_j^T + Y^T B_j^T & Q Q_1^{1/2} & Y^T R^{1/2} \\ A_j Q + B_j Y & Q & 0 & 0 \\ Q_1^{1/2} Q & 0 & \gamma I & 0 \\ R^{1/2} Y & 0 & 0 & \gamma I \end{bmatrix} \ge 0$,  $j = 1, \ldots, L$,          (10)

where $Q = \gamma P(k)^{-1}$ and $F(k)$ is parameterized as $F(k) = Y Q^{-1}$.
The corresponding robust stability constraint for the system (2) can be found in Kothare et al. (1996) and is skipped here for brevity. The input constraints (5) are satisfied if there exists a symmetric matrix $X$ such that (Kothare et al., 1996)

$\begin{bmatrix} X & Y \\ Y^T & Q \end{bmatrix} \ge 0$  with  $X_{rr} \le u_{r,\max}^2$,  $r = 1, 2, \ldots, n_u$.          (11)

Similarly, the output constraints (6) for system (1) are satisfied if there exists a symmetric matrix $Z$ such that, for each vertex of $\Omega$,

$\begin{bmatrix} Z & C(A_j Q + B_j Y) \\ (A_j Q + B_j Y)^T C^T & Q \end{bmatrix} \ge 0$,  $j = 1, \ldots, L$,          (12)

with $Z_{rr} \le y_{r,\max}^2$, $r = 1, 2, \ldots, n_y$. The corresponding conditions for the satisfaction of the output constraints (6) for the system (2) can be found in Kothare et al. (1996) and are skipped here for brevity.
Theorem 1 (On-line robust constrained MPC, Kothare et al. (1996)). For the system (1), at sampling time $k$, the state feedback matrix $F(k)$ in the control law $u(k+i|k) = F(k)x(k+i|k)$, $i \ge 0$, which minimizes the upper bound $\gamma$ on the worst-case MPC objective function $J_\infty(k)$, is given by $F(k) = Y Q^{-1}$, where $Q > 0$ and $Y$ are obtained from the solution (if it exists) of the following linear objective minimization problem:

$\min_{\gamma, Q, X, Y, Z} \gamma$  subject to (9), (10), (11) and (12).

This MPC algorithm, if initially feasible, robustly asymptotically stabilizes the closed-loop system.

Note that the corresponding theorem for the state feedback control law synthesis for the uncertain system (2) can be found in Kothare et al. (1996).
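To make the on-line step of Theorem 1 concrete, the following is a minimal sketch of the linear objective minimization for the polytopic case. It is not the authors' implementation (the paper uses the MATLAB LMI Control Toolbox); the CVXPY/SCS modelling choice, the helper psd_sqrt and all variable names are assumptions, and the problem data (vertices, weights, bounds) are supplied by the caller.

```python
import numpy as np
import cvxpy as cp

def psd_sqrt(M):
    """Symmetric square root of a positive semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def online_robust_mpc_gain(x, A_verts, B_verts, C, Q1, R, u_max, y_max):
    """One solve of Theorem 1: minimize gamma subject to LMIs (9)-(12)
    for a polytopic model with vertices (A_j, B_j).  Returns F = Y Q^{-1}."""
    x = np.asarray(x, dtype=float)
    nx, nu = B_verts[0].shape
    ny = C.shape[0]
    gamma = cp.Variable()
    Q = cp.Variable((nx, nx), symmetric=True)
    Y = cp.Variable((nu, nx))
    X = cp.Variable((nu, nu), symmetric=True)
    Z = cp.Variable((ny, ny), symmetric=True)
    Q1h, Rh = psd_sqrt(Q1), psd_sqrt(R)
    cons = [Q >> 1e-8 * np.eye(nx),
            cp.bmat([[np.array([[1.0]]), x.reshape(1, -1)],
                     [x.reshape(-1, 1), Q]]) >> 0]                      # LMI (9)
    for Aj, Bj in zip(A_verts, B_verts):
        M = Aj @ Q + Bj @ Y
        cons.append(cp.bmat([
            [Q, M.T, Q @ Q1h, Y.T @ Rh],
            [M, Q, np.zeros((nx, nx)), np.zeros((nx, nu))],
            [Q1h @ Q, np.zeros((nx, nx)), gamma * np.eye(nx), np.zeros((nx, nu))],
            [Rh @ Y, np.zeros((nu, nx)), np.zeros((nu, nx)), gamma * np.eye(nu)],
        ]) >> 0)                                                        # LMI (10)
        cons.append(cp.bmat([[Z, C @ M],
                             [M.T @ C.T, Q]]) >> 0)                     # LMI (12)
    cons.append(cp.bmat([[X, Y], [Y.T, Q]]) >> 0)                       # LMI (11)
    cons += [X[r, r] <= u_max[r] ** 2 for r in range(nu)]
    cons += [Z[r, r] <= y_max[r] ** 2 for r in range(ny)]
    cp.Problem(cp.Minimize(gamma), cons).solve(solver=cp.SCS)
    return Y.value @ np.linalg.inv(Q.value), Q.value, gamma.value
```

In the on-line algorithm this solve is repeated at every sampling instant with the measured state; the off-line approach of Section 3 calls it only a small number of times ahead of operation.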
3. Off-line robust constrained MPC

In this section, we present an off-line approach based on the concept of the asymptotically stable invariant ellipsoid. Without loss of generality, we use the algorithm for polytopic uncertain systems described by (1) in Theorem 1 to illustrate the subsequent off-line formulation. Similar results can be obtained for the norm-bound uncertain system (2).
3.1. Asymptotically stable invariant ellipsoid

Definition 1. Given a discrete dynamical system $x(k+1) = f(x(k))$, a subset $E = \{x \in \mathbb{R}^{n_x} \mid x^T Q^{-1} x \le 1\}$ of the state space $\mathbb{R}^{n_x}$ is said to be an asymptotically stable invariant ellipsoid if it has the property that, whenever $x(k_1) \in E$, then $x(k) \in E$ for all times $k \ge k_1$ and $x(k) \to 0$ as $k \to \infty$.
Lemma 1. Consider a closed-loop system composed of the plant (1) and a state feedback controller $u(k) = Y Q^{-1} x(k)$, where $Y$ and $Q^{-1}$ are obtained by applying the robust constrained MPC algorithm defined in Theorem 1 to a system state $x_0$. Then, the subset $E = \{x \in \mathbb{R}^{n_x} \mid x^T Q^{-1} x \le 1\}$ of the state space $\mathbb{R}^{n_x}$ is an asymptotically stable invariant ellipsoid.
Proof. The only LMI in Theorem 1 that depends on the system state is (9), which is automatically satisfied for all states within the ellipsoid $E$. So the minimizer $\gamma, Q, X, Y, Z$ at a given state $x_0$ of the plant (1) is also feasible (not necessarily optimal) for any other state in $E$. Thus, we can apply the state feedback law $u = Y Q^{-1} x$ to any non-zero $x(k) \in E$ with $x(k) \ne x_0$ and still satisfy (10)-(12), thereby ensuring that in real time $x(k+i+1)^T Q^{-1} x(k+i+1) < x(k+i)^T Q^{-1} x(k+i) \le 1$, $i \ge 0$. Thus, $x(k+i) \in E$, $i \ge 0$, and $x(k+i) \to 0$ as $i \to \infty$, establishing that $E$ is an asymptotically stable invariant ellipsoid.
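The invariance argument lends itself to a simple numerical check: for a candidate pair (Q, F), the quadratic function $x^T Q^{-1} x$ must strictly decrease along the closed loop at every vertex of $\Omega$ (and hence, by convexity, over the whole polytope). The routine below is only a sketch of such a test with numpy, not part of the paper's algorithm; the same inequality, with $F$ taken as $F_{i+1}$ and $Q$ as $Q_i$, is what condition (13) of Algorithm 2 below requires.

```python
import numpy as np

def is_invariant_ellipsoid(Q, F, A_verts, B_verts, tol=1e-9):
    """Return True if Q^{-1} - (A_j + B_j F)^T Q^{-1} (A_j + B_j F) > 0 for
    every vertex j, so that V(x) = x^T Q^{-1} x strictly decreases and
    E = {x : x^T Q^{-1} x <= 1} is an asymptotically stable invariant
    ellipsoid for the closed loop x(k+1) = (A(k) + B(k) F) x(k)."""
    Qinv = np.linalg.inv(Q)
    for Aj, Bj in zip(A_verts, B_verts):
        Acl = Aj + Bj @ F
        gap = Qinv - Acl.T @ Qinv @ Acl      # must be positive definite
        if np.min(np.linalg.eigvalsh(gap)) <= tol:
            return False
    return True
```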
3.2. Off-line robust constrained MPC

For an input constrained system, when we apply Theorem 1 to a state far from the origin, the resulting asymptotically stable invariant ellipsoid has a more constrained feedback matrix. From Kothare et al. (1996), we know that it is not necessary to keep this feedback matrix constant while the state is converging to the origin, where there are fewer constraints on the choice of the feedback matrix. Within an asymptotically stable invariant ellipsoid $E = \{x \in \mathbb{R}^{n_x} \mid x^T Q^{-1} x \le 1\}$, we define the distance between the state $x$ and the origin as the weighted norm $\|x\|_{Q^{-1}} \triangleq \sqrt{x^T Q^{-1} x}$. By adding asymptotically stable invariant ellipsoids one inside another, we have more freedom to adopt varying feedback matrices based on the distance between the state and the origin. We show next that we can achieve this in the proposed off-line formulation.
Algorithm 1 (Off-line robust constrained MPC). Consider an uncertain system (1) subject to input and output constraints (5) and (6). Off-line, given an initial feasible state $x_1$, generate a sequence of minimizers $\gamma_i, Q_i, X_i, Y_i$ and $Z_i$ ($i = 1, \ldots, N$) as follows. Let $i := 1$.

1. Compute the minimizer $\gamma_i, Q_i, X_i, Y_i, Z_i$ at $x_i$ by using Theorem 1 with the additional constraint $Q_{i-1} > Q_i$ (ignored at $i = 1$), and store $Q_i^{-1}$, $F_i$ ($= Y_i Q_i^{-1}$), $X_i$, $Y_i$ in a look-up table.
2. If $i < N$, choose a state $x_{i+1}$ satisfying $\|x_{i+1}\|^2_{Q_i^{-1}} < 1$. Let $i := i + 1$ and go to step 1.

On-line, given an initial state $x(0)$ satisfying $\|x(0)\|^2_{Q_1^{-1}} \le 1$, let the state be $x(k)$ at time $k$. Perform a bisection search over $Q_i^{-1}$ in the look-up table to find the largest index $i$ (or, equivalently, the smallest ellipsoid $E_i = \{x \in \mathbb{R}^{n_x} \mid x^T Q_i^{-1} x \le 1\}$) such that $\|x(k)\|^2_{Q_i^{-1}} \le 1$. Apply the control law $u(k) = F_i x(k)$.
Remark 1. Step 1 in the off-line part of Algorithm 1 is always feasible for $i > 1$, assuming it is feasible for $i = 1$. This is because if the minimizer at $x$ is $\gamma, Q, X, Y, Z$, then at an arbitrary state $\bar{x}$ satisfying $\|\bar{x}\|_{Q^{-1}} < 1$ there exists $\alpha > 1$ such that $\|\alpha \bar{x}\|_{Q^{-1}} = 1$, and $\gamma/\alpha^2, Q/\alpha^2, X/\alpha^2, Y/\alpha^2, Z/\alpha^2$ is a feasible solution with the additional constraint $Q > Q/\alpha^2$ satisfied.
Remark 2. From the on-line robust MPC algorithm in Theorem 1, we know that the optimal robust MPC law and the corresponding asymptotically stable invariant ellipsoid depend on the state. Although the control law can be applied to all the states within the ellipsoid, it is not necessarily optimal. So our off-line formulation sacrifices optimality somewhat while significantly reducing the on-line computational burden.

Remark 3. In each off-line optimization in Algorithm 1, we can minimize the performance cost based on a set of states instead of a single state. This can help in averaging the effect of the individual state on the suboptimal MPC law. Furthermore, since the MPC law is available off-line, performance analysis can be carried out to study the closed-loop responses.
Theorem 2. Given a dynamical system (1) and an initial state $x(0)$ satisfying $\|x(0)\|^2_{Q_1^{-1}} \le 1$, the off-line robust constrained MPC Algorithm 1 robustly asymptotically stabilizes the closed-loop system.
Proof. For the off-line minimization at $x_i$, $i = 2, \ldots, N$, the additional constraint $Q_{i-1} > Q_i$ is equivalent to $Q_{i-1}^{-1} < Q_i^{-1}$. This implies that the constructed asymptotically stable invariant ellipsoid $E_i = \{x \in \mathbb{R}^{n_x} \mid x^T Q_i^{-1} x \le 1\}$ is inside $E_{i-1}$, i.e., $E_i \subset E_{i-1}$. So for a fixed $x$, $\|x\|^2_{Q_i^{-1}}$ is monotonic with respect to the index $i$. This ensures the uniqueness of the on-line bisection search in the look-up table for the largest $i$ satisfying $\|x\|^2_{Q_i^{-1}} \le 1$.

Given a dynamical system (1) and an initial state $x(0)$ satisfying $\|x(0)\|^2_{Q_1^{-1}} \le 1$, the closed-loop system becomes

$x(k+1) = \begin{cases} (A(k) + B(k) F_i)\, x(k) & \text{if } \|x(k)\|^2_{Q_i^{-1}} \le 1,\ \|x(k)\|^2_{Q_{i+1}^{-1}} > 1,\ i \ne N, \\ (A(k) + B(k) F_N)\, x(k) & \text{if } \|x(k)\|^2_{Q_N^{-1}} \le 1. \end{cases}$

When $x(k)$ satisfies $\|x(k)\|^2_{Q_i^{-1}} \le 1$ and $\|x(k)\|^2_{Q_{i+1}^{-1}} > 1$, $i = 1, \ldots, N-1$, the control law $u(k) = F_i x(k)$ corresponding to the ellipsoid $E_i$ is guaranteed to keep the state within $E_i$ (using Lemma 1) and converge it into the ellipsoid $E_{i+1}$, and so on. Lastly, the smallest ellipsoid $E_N$ is guaranteed to keep the state within $E_N$ and converge it to the origin.
Note that the sequence of state feedback matrices generated in Algorithm 1 is constant between two adjacent asymptotically stable invariant ellipsoids and discontinuous on the boundary of each asymptotically stable invariant ellipsoid. The following results are devoted to constructing a continuous feedback matrix over the state space.
Algorithm 2. Consider the look-up table generated by the off-line part of Algorithm 1. If for each $x_i$ ($i = 1, \ldots, N-1$)

$Q_i^{-1} - (A_j + B_j F_{i+1})^T Q_i^{-1} (A_j + B_j F_{i+1}) > 0$,  $j = 1, \ldots, L$,          (13)

is satisfied, then, on-line, given an initial state $x(0)$ satisfying $\|x(0)\|^2_{Q_1^{-1}} \le 1$ and the current state $x(k)$ at time $k$, perform a bisection search over $Q_i^{-1}$ in the look-up table to find the largest index $i$ (or, equivalently, the smallest ellipsoid $E_i$) such that $\|x(k)\|^2_{Q_i^{-1}} \le 1$. If $i \ne N$, solve $x(k)^T (\alpha_i Q_i^{-1} + (1 - \alpha_i) Q_{i+1}^{-1}) x(k) = 1$ for $\alpha_i$ and apply the control law $u(k) = (\alpha_i F_i + (1 - \alpha_i) F_{i+1}) x(k)$. If $i = N$, apply $u(k) = F_N x(k)$.
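The on-line part of Algorithm 2 differs from Algorithm 1 only in the interpolation between adjacent gains. A sketch (numpy, same table layout as the Algorithm 1 sketch; a plain linear scan replaces the bisection search for brevity):

```python
import numpy as np

def offline_mpc_control_smooth(x, Qinv_table, F_table):
    """Algorithm 2 on-line step: interpolate between F_i and F_{i+1}.
    Assumes condition (13) was verified off-line for every adjacent pair."""
    # largest i with x^T Q_i^{-1} x <= 1 (monotone in i, so the feasible set is a prefix)
    i = max(j for j, Qinv in enumerate(Qinv_table) if x @ Qinv @ x <= 1.0)
    if i == len(Qinv_table) - 1:              # inside the smallest ellipsoid E_N
        return F_table[i] @ x
    vi = x @ Qinv_table[i] @ x                # <= 1
    vnext = x @ Qinv_table[i + 1] @ x         # > 1
    alpha = (1.0 - vnext) / (vi - vnext)      # solves x^T(a Q_i^{-1} + (1-a) Q_{i+1}^{-1}) x = 1
    return (alpha * F_table[i] + (1.0 - alpha) * F_table[i + 1]) @ x
```

The closed-form expression for alpha is the same one used later in the proof of Corollary 1, so the interpolation weight always lies between 0 and 1.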
Remark 4. The LMI minimization problem in this paper is solved by using interior point methods, which generally use a sequence of strictly convex unconstrained minimization problems to solve a convex constrained minimization problem (Nesterov & Nemirovsky, 1994). For a strictly convex unconstrained minimization problem, not only the objective function but also all the minimizers are unique. Hence, it is reasonable to assume that the optimal solutions for the optimizations in the off-line part of Algorithm 1 are unique. Under this assumption we can always find a sequence of minimizers for Algorithm 2, because condition (13) becomes trivial if $x_{i+1}$ is chosen to be sufficiently close to $x_i$.
Theorem 3. Given a dynamical system (1) and an initial state $x(0)$ satisfying $\|x(0)\|^2_{Q_1^{-1}} \le 1$, the off-line robust constrained MPC Algorithm 2 robustly asymptotically stabilizes the closed-loop system.
Proof. The closed-loop system is given by

$x(k+1) = \begin{cases} (A(k) + B(k) F(\alpha_i(k)))\, x(k) & \text{if } \|x(k)\|^2_{Q_i^{-1}} \le 1,\ \|x(k)\|^2_{Q_{i+1}^{-1}} > 1,\ i \ne N, \\ (A(k) + B(k) F_N)\, x(k) & \text{if } \|x(k)\|^2_{Q_N^{-1}} \le 1, \end{cases}$          (14)

where $F(\alpha_i(k)) = \alpha_i(k) F_i + (1 - \alpha_i(k)) F_{i+1}$ with $\alpha_i(k)$ satisfying $x(k)^T (\alpha_i(k) Q_i^{-1} + (1 - \alpha_i(k)) Q_{i+1}^{-1}) x(k) = 1$, $0 \le \alpha_i(k) \le 1$. When $x(k)$ satisfies $\|x(k)\|^2_{Q_i^{-1}} \le 1$ and $\|x(k)\|^2_{Q_{i+1}^{-1}} > 1$, $i \ne N$, let $F(\alpha_i) = \alpha_i F_i + (1 - \alpha_i) F_{i+1}$, $Q(\alpha_i)^{-1} = \alpha_i Q_i^{-1} + (1 - \alpha_i) Q_{i+1}^{-1} > 0$, $X(\alpha_i) = \alpha_i X_i + (1 - \alpha_i) X_{i+1} > 0$ and $Z(\alpha_i) = \alpha_i Z_i + (1 - \alpha_i) Z_{i+1} > 0$, where $\alpha_i$ is obtained by solving $x(k)^T Q(\alpha_i)^{-1} x(k) = 1$, $0 \le \alpha_i \le 1$.

The satisfaction of (10) for $x_i$ and of (13) ensures that

$\begin{bmatrix} Q_i^{-1} & (A_j + B_j F(\alpha_i))^T \\ A_j + B_j F(\alpha_i) & Q_i \end{bmatrix} > 0$,  $j = 1, \ldots, L$,

and the satisfaction of (11) and (12) for both $x_i$ and $x_{i+1}$ ensures that there exist symmetric matrices $X(\alpha_i)$, $Z(\alpha_i)$ such that

$\begin{bmatrix} X(\alpha_i) & F(\alpha_i) \\ F(\alpha_i)^T & Q(\alpha_i)^{-1} \end{bmatrix} \ge 0$  with  $X(\alpha_i)_{rr} \le u_{r,\max}^2$,  $r = 1, 2, \ldots, n_u$,

$\begin{bmatrix} Z(\alpha_i) & C(A_j + B_j F(\alpha_i)) \\ (A_j + B_j F(\alpha_i))^T C^T & Q(\alpha_i)^{-1} \end{bmatrix} \ge 0$  with  $Z(\alpha_i)_{rr} \le y_{r,\max}^2$,  $r = 1, 2, \ldots, n_y$.

Thus, the control law $u(k) = F(\alpha_i(k)) x(k)$ between $E_i$ and $E_{i+1}$ is guaranteed to keep the state within $E_i$ and converge it into $E_{i+1}$, with the constraints satisfied. Lastly, the control law $u(k) = F_N x(k)$ within the smallest ellipsoid $E_N$ is guaranteed to keep the state within $E_N$ and converge it to the origin.
Corollary 1. The feedback matrix $F$ implemented in Algorithm 2 is a continuous function of the state $x$.

Proof. The feedback matrix implemented in Algorithm 2 is

$F(x) = \begin{cases} \alpha_i F_i + (1 - \alpha_i) F_{i+1} & \text{if } \|x\|^2_{Q_i^{-1}} \le 1,\ \|x\|^2_{Q_{i+1}^{-1}} > 1,\ i \ne N, \\ F_N & \text{if } \|x\|^2_{Q_N^{-1}} \le 1, \end{cases}$

where $\alpha_i$ is the solution of $x^T (\alpha_i Q_i^{-1} + (1 - \alpha_i) Q_{i+1}^{-1}) x = 1$, $i \ne N$.

Consider two ring regions $R_{i-1} = \{x \in \mathbb{R}^{n_x} \mid \|x\|^2_{Q_{i-1}^{-1}} \le 1,\ \|x\|^2_{Q_i^{-1}} > 1\}$ and $R_i = \{x \in \mathbb{R}^{n_x} \mid \|x\|^2_{Q_i^{-1}} \le 1,\ \|x\|^2_{Q_{i+1}^{-1}} > 1\}$. Firstly, within $R_i$, the solution of $x^T (\alpha_i Q_i^{-1} + (1 - \alpha_i) Q_{i+1}^{-1}) x = 1$ is $\alpha_i = (1 - x^T Q_{i+1}^{-1} x)/(x^T (Q_i^{-1} - Q_{i+1}^{-1}) x)$, which satisfies $0 \le \alpha_i \le 1$. Therefore, within $R_i$, $\alpha_i$ is a continuous function of $x$, and so is $F(\alpha_i)$. The same argument holds for the region $R_{i-1}$ with $\alpha_{i-1} = (1 - x^T Q_i^{-1} x)/(x^T (Q_{i-1}^{-1} - Q_i^{-1}) x)$, $0 \le \alpha_{i-1} \le 1$. Secondly, for $x \in R_{i-1}$, $\|x\|^2_{Q_i^{-1}} \to 1$ implies $\alpha_{i-1} \to 0$, and for $x \in R_i$, $\|x\|^2_{Q_i^{-1}} \to 1$ implies $\alpha_i \to 1$. Thus, on the boundary between $R_{i-1}$ and $R_i$,

$\lim_{\alpha_{i-1} \to 0} F(\alpha_{i-1}) = \lim_{\alpha_i \to 1} F(\alpha_i) = F_i$,

which establishes the continuity of $F$ on the boundary between $R_{i-1}$ and $R_i$. So it can be concluded that $F(x)$ is a continuous function of $x$.
Remark 5. Both Algorithms 1 and 2 are general approaches to construct a Lyapunov function for uncertain and constrained systems. The Lyapunov function is

$J(x) = \begin{cases} x^T Q_i^{-1} x & \text{if } \|x\|^2_{Q_i^{-1}} \le 1,\ \|x\|^2_{Q_{i+1}^{-1}} > 1,\ i \ne N, \\ x^T Q_N^{-1} x & \text{if } \|x\|^2_{Q_N^{-1}} \le 1. \end{cases}$

This Lyapunov function is not necessarily continuous on the boundary of each asymptotically stable invariant ellipsoid. It is enough to have $J(x)$ be monotonically decreasing within the smallest ellipsoid and within each ring region between two adjacent ellipsoids to stabilize the closed-loop system.
From both Algorithms 1 and 2, we can see that the choice of the state $x_{i+1}$ satisfying $\|x_{i+1}\|^2_{Q_i^{-1}} < 1$ is arbitrary. For ease of implementation, we provide the following suggestions. We can choose an arbitrary one-dimensional subspace $S = \{\alpha x_{\max} \mid 1 \ge \alpha \ge 0,\ \alpha \in \mathbb{R},\ x_{\max} \in \mathbb{R}^{n_x}\}$, where $x_{\max}$ is a state chosen to be as far from the origin as is feasible for the problem. We can then discretize this set and construct a set of discrete points $S_d = \{\alpha_i x_{\max} \mid 1 \ge \alpha_1 > \cdots > \alpha_N > 0,\ \alpha_i \in \mathbb{R},\ x_{\max} \in \mathbb{R}^{n_x}\}$. Since the asymptotically stable invariant ellipsoid constructed for each discrete point actually passes through that point, $\|\alpha_{i+1} x_{\max}\|^2_{Q_i^{-1}} < \|\alpha_i x_{\max}\|^2_{Q_i^{-1}} = 1$ is satisfied. In order to obtain a look-up table that can cover a very large portion of the state space with a limited number of discrete points, we suggest a discretization of the one-dimensional subspace using a logarithmic scale, as sketched below. As noted before, we can always find a feasible sequence of minimizers for Algorithm 2 by refining the discretization.
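A sketch of that suggestion in numpy: points $\alpha_i x_{\max}$ with $\alpha_i$ spaced logarithmically between 1 and a chosen lower bound. The number of points and the decade range are illustration choices, not values from the paper.

```python
import numpy as np

def log_discretized_states(x_max, n_points=15, decades=3.0):
    """States alpha_i * x_max, with alpha_i running from 1 down to 10**(-decades)
    on a logarithmic scale, to be fed one by one to the off-line part of Algorithm 1."""
    alphas = np.logspace(0.0, -decades, n_points)
    return [a * np.asarray(x_max, dtype=float) for a in alphas]
```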
3.3. Complexity analysis

For Algorithms 1 and 2, the on-line computation mainly comes from the bisection search in a look-up table. A sequence of $K$ stored $Q_i^{-1}$ ($K$ generally less than 20) requires $\log_2 K$ searches, and the matrix-vector multiplication in one search has quadratic growth $O(n_s^2)$ in the number of flops, with $n_s$ the number of state variables. So the total number of flops required to calculate an input move is $O(n_s^2 \log_2 K)$. On the other hand, the fastest interior point algorithms show $O(M N^3)$ growth in computation (Gahinet, Nemirovski, Laub, & Chilali, 1995), where $M$ is the total row size of the LMI system and $N$ is the total number of scalar decision variables. $M$ is proportional to $L$ and $N \approx n_s^2/2 + n_s n_c$, with $L$ the number of vertices of the uncertain model and $n_c$ the number of manipulated variables. So we can conclude that this off-line approach can substantially reduce the on-line computational burden in robust MPC.
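A back-of-the-envelope comparison using the dimensions of Example 2 below ($n_s = 2$ states, $n_c = 2$ manipulated variables, $L = 4$ vertices) and $K = 10$ stored ellipsoids. The row-size constant ($M$ taken as $10L$) and the dropped prefactors are assumptions, so the printed numbers only indicate orders of magnitude.

```python
import math

n_s, n_c, L, K = 2, 2, 4, 10
table_search = n_s**2 * math.ceil(math.log2(K))   # O(n_s^2 log2 K)  ->  16 flops
N = n_s**2 // 2 + n_s * n_c                       # scalar decision variables -> 6
M = 10 * L                                        # assumed LMI row size, proportional to L
interior_point = M * N**3                         # O(M N^3)  ->  8640 flops
print(table_search, interior_point)
```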
4. Examples
In this section, we present two examples that illustrate the implementation of the proposed off-line approach. For both examples, the LMI Control Toolbox (Gahinet et al., 1995) in the MATLAB environment was used to compute the solution of the LMI minimization problems.
4.1. Example 1
Consider the transfer function $P(s) = \kappa/(s(s+\alpha))$, where $\kappa = 0.787$ and $0.1 \le \alpha \le 10$. Using a sampling time of 0.1 and Euler's first-order approximation for the derivative, the transfer function can be described by the following linear time-varying system:

$x(k+1) = \begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 0.1 \\ 0 & 1 - 0.1\alpha(k) \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ 0.1\kappa \end{bmatrix} u(k) \triangleq A(k)\,x(k) + B(k)\,u(k)$,

$y(k) = [1\ \ 0]\, x(k) \triangleq C\,x(k)$.
Using the polytopic uncertainty description (1), we have $A(k) \in \Omega = \mathrm{Co}\{A_1, A_2\}$, where

$A_1 = \begin{bmatrix} 1 & 0.1 \\ 0 & 0.99 \end{bmatrix}$,  $A_2 = \begin{bmatrix} 1 & 0.1 \\ 0 & 0 \end{bmatrix}$.

The system can also be described with the structured feedback uncertainty (2), but we will use the above polytopic uncertainty description. The robust performance objective function is (3) subject to $|u(k+i|k)| \le 2$, $i \ge 0$. Here $J_\infty(k)$ is given by (4) with

$Q_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$,  $R = 0.00002$.
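For reference, the Example 1 data in numpy together with a small closed-loop simulation loop; the controller argument is meant to be one of the on-line routines sketched in Section 3, driven by a look-up table computed off-line (the table itself is not reproduced here). The value $0.1\kappa = 0.0787$ in B follows the discretization above, and the time-varying $\alpha(k)$ profile is left to the caller.

```python
import numpy as np

# Example 1 data: Euler-discretized P(s) = kappa/(s(s + alpha)), kappa = 0.787.
kappa = 0.787
B = np.array([[0.0], [0.1 * kappa]])
C = np.array([[1.0, 0.0]])
A_verts = [np.array([[1.0, 0.1], [0.0, 0.99]]),   # alpha = 0.1
           np.array([[1.0, 0.1], [0.0, 0.0]])]    # alpha = 10
B_verts = [B, B]
Q1 = np.diag([1.0, 0.0])                          # state weight in (4)
R = np.array([[0.00002]])                         # input weight in (4)
u_max = np.array([2.0])                           # |u| <= 2

def simulate(x0, controller, alphas):
    """Closed loop x(k+1) = A(k) x(k) + B u(k) with u(k) = controller(x(k)).
    The input constraint is guaranteed by the off-line design, so no clipping
    is applied here."""
    x, traj = np.asarray(x0, dtype=float), []
    for a in alphas:
        A = np.array([[1.0, 0.1], [0.0, 1.0 - 0.1 * a]])
        x = A @ x + B @ controller(x)
        traj.append(x.copy())
    return np.array(traj)
```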
We choose the $x_1$-axis as the one-dimensional subspace and discretize it into thirteen points, $x_1^{\mathrm{set}} = [1, 0.9, 0.75, 0.65, 0.52, 0.4, 0.28, 0.18, 0.1, 0.05, 0.02, 0.01, 0.001]$. Fig. 1 shows the ellipsoids defined by $Q_i^{-1}$ for all 13 discrete points, and Fig. 2 shows the off-line control law $F$ along the $x_1$-axis. Both plots are obtained using the off-line part of Algorithm 1 with the additional constraint (13). The two plots in Fig. 2 show two regions: one is the constrained region ($i = 1, \ldots, 12$), where the input constraints impose lesser and lesser limits on the feedback gains as $i$ increases, and the other is the unconstrained region ($i = 12, 13$), where the input constraints have no effect on the feedback gain.

Given an initially disturbed state $x(0) = [0.05\ \ 0]^T$, the first row of plots in Fig. 3 shows the closed-loop responses of the system corresponding to $\alpha(k) \equiv 9$ (for this system, nominal MPC with $\alpha(k) \equiv \alpha_{\mathrm{nom}} = 1$ is unstable (Kothare et al., 1996)), and the second row of plots shows the closed-loop responses of the system corresponding to $\alpha(k)$ changing from 0.1 to 10 at a rate of 0.5 per sampling period. The off-line Algorithm 2 gives nearly the same performance as the on-line robust constrained MPC algorithm (Kothare et al., 1996). On a Gateway PC with a Pentium III processor (1000 MHz, 256 KB cache RAM and 256 MB total memory) and using MATLAB code, the average time for the off-line MPC algorithm to compute a feedback gain is $2.0 \times 10^{-4}$ s, which is about 1000 times faster than the 0.2 s it takes for on-line MPC.
Fig. 1. The ellipsoids defined by $Q_i^{-1}$ for all 13 discrete points for Example 1.

Fig. 2. The off-line control law $F$ for Example 1; (+) shows the discretization points.
4.2. Example 2
Consider the following linearized model derived for a single, non-isothermal CSTR (Marlin, 1995):

$\dot{x} = A x + B u$,

where $x$ is a vector of the reactor concentration and temperature, and $u$ is a vector of the feed concentration and the coolant flow, both of which are constrained. $A$ and $B$ depend on the operating conditions as follows:

$A = \begin{bmatrix} -\dfrac{F}{V} - k_0 e^{-E/RT_s} & -\dfrac{E}{R T_s^2}\, k_0 e^{-E/RT_s} C_{As} \\[4pt] -\dfrac{\Delta H_{\mathrm{rxn}}}{\rho C_p}\, k_0 e^{-E/RT_s} & -\dfrac{F}{V} - \dfrac{UA}{V \rho C_p} - \dfrac{\Delta H_{\mathrm{rxn}}}{\rho C_p}\,\dfrac{E}{R T_s^2}\, k_0 e^{-E/RT_s} C_{As} \end{bmatrix}$,

$B = \begin{bmatrix} \dfrac{F}{V} & 0 \\[4pt] 0 & \dfrac{-2.098 \times 10^5 (T_s - 365)}{V \rho C_p} \end{bmatrix}$,

where $F = 1\ \mathrm{m^3/min}$, $V = 1\ \mathrm{m^3}$, $k_0 = 10^9$-$10^{10}\ \mathrm{min^{-1}}$, $E/R = 8330.1\ \mathrm{K}$, $-\Delta H_{\mathrm{rxn}} = 10^7$-$10^8\ \mathrm{cal/kmol}$, $\rho = 10^6\ \mathrm{g/m^3}$, $UA = 5.34 \times 10^6\ \mathrm{cal/(K\,min)}$, and $C_p = 1\ \mathrm{cal/(g\,K)}$.

Fig. 3. Closed-loop responses for Example 1: solid lines, on-line MPC algorithm in Theorem 1; dashed lines with (+), off-line MPC in Algorithm 2.

We will concentrate on this linearized model at the steady state $T_s = 394\ \mathrm{K}$ and $C_{As} = 0.265\ \mathrm{kmol/m^3}$ under the uncertain parameters $k_0$ and $\Delta H_{\mathrm{rxn}}$. The model is discretized using a sampling time of 0.15 min and given in terms of perturbation variables as follows:

$x(k+1) = \begin{bmatrix} C_A(k+1) \\ T(k+1) \end{bmatrix} = \begin{bmatrix} 0.85 - 0.0986\,\alpha(k) & -0.0014\,\alpha(k) \\ 0.9864\,\alpha(k)\beta(k) & 0.0487 + 0.01403\,\alpha(k)\beta(k) \end{bmatrix} x(k) + \begin{bmatrix} 0.15 & 0 \\ 0 & -0.912 \end{bmatrix} u(k)$,

$y(k) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x(k)$,

where $1 \le \alpha(k) = k_0/10^9 \le 10$ and $1 \le \beta(k) = -\Delta H_{\mathrm{rxn}}/10^7 \le 10$. Because the two uncertain parameters $\alpha$ and $\beta$ are independent of each other, in order to guarantee robust stability it is necessary to consider the polytopic uncertain model
$\Omega = \mathrm{Co}\left\{ \begin{bmatrix} 0.751 & -0.0014 \\ 0.986 & 0.063 \end{bmatrix}, \begin{bmatrix} 0.751 & -0.0014 \\ 9.864 & 0.189 \end{bmatrix}, \begin{bmatrix} -0.136 & -0.014 \\ 9.864 & 0.189 \end{bmatrix}, \begin{bmatrix} -0.136 & -0.014 \\ 98.644 & 1.451 \end{bmatrix} \right\}$
with its four vertices representing all the possible combinations of the two uncertain parameters.

Fig. 4. The ellipsoids defined by $Q_i^{-1}$ for all 10 discrete points for Example 2 with $R = 0.2I$.

The robust performance objective function is defined as (3) subject to $|u_1(k+i|k)| \le 0.5\ \mathrm{kmol/m^3}$ and $|u_2(k+i|k)| \le 1\ \mathrm{m^3/min}$, $i \ge 0$. Here $J_\infty(k)$ is given by (4) with $Q_1 = I$ and two control weights, $R = 0.2I$ and $R = 2I$.
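A sketch that rebuilds the four vertices of $\Omega$ for Example 2 by evaluating the parameterized $A(k)$ at the extreme combinations of $\alpha$ and $\beta$ (numpy; the coefficients are the rounded values quoted above, and the variable names are illustrative).

```python
import numpy as np

def A_of(alpha, beta):
    """Discretized CSTR A-matrix for alpha = k0/1e9 and beta = -dHrxn/1e7."""
    return np.array([[0.85 - 0.0986 * alpha, -0.0014 * alpha],
                     [0.9864 * alpha * beta, 0.0487 + 0.01403 * alpha * beta]])

B = np.array([[0.15, 0.0], [0.0, -0.912]])
A_verts = [A_of(a, b) for a in (1.0, 10.0) for b in (1.0, 10.0)]   # four vertices of Omega
B_verts = [B] * 4
u_max = np.array([0.5, 1.0])          # |u1| <= 0.5 kmol/m^3, |u2| <= 1 m^3/min
Q1 = np.eye(2)                        # state weight in (4)
R = 0.2 * np.eye(2)                   # input weight; R = 2*I is the second case studied
```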
We choose the $x_1$-axis as the one-dimensional subspace and discretize it into 10 points, $x_1^{\mathrm{set}} = [1.4 \times 10^{-1}, 9 \times 10^{-2}, 7 \times 10^{-2}, 6 \times 10^{-2}, 5 \times 10^{-2}, 3 \times 10^{-2}, 2 \times 10^{-2}, 1.7 \times 10^{-2}, 10^{-2}, 10^{-3}]$. Fig. 4 shows the ellipsoids defined by $Q_i^{-1}$ for all 10 discrete points, obtained from the off-line part of Algorithm 1 with the additional constraint (13) for $R = 0.2I$.
Fig. 5. Closed-loop responses for Example 2: solid lines, on-line MPC algorithm in Theorem 1; dashed lines with (+), off-line MPC in Algorithm 2.
Given an initially disturbed state $x(0) = [0.1\ \ 2]^T$, the two rows of plots in Fig. 5 show the closed-loop responses of the system corresponding to $\alpha(k) \equiv 1.1$ and $\beta(k) \equiv 1.1$ with two different control weights, $R = 0.2I$ and $R = 2I$, respectively. The off-line Algorithm 2 gives nearly the same performance as the on-line robust constrained MPC algorithm (Kothare et al., 1996). The computations in this off-line approach are 900 times faster than those in on-line MPC.
5. Conclusions
In this paper, we have developed an off-line robust constrained MPC algorithm with guaranteed robust stability of the closed-loop system for two classes of uncertainty descriptions. The advantage of this algorithm is that it provides off-line a set of stabilizing state feedback laws corresponding to a set of invariant ellipsoids one inside another in state space. Since no on-line optimization is involved except a simple bisection search, the on-line MPC computation is reduced by nearly three orders of magnitude in the simulation examples, with little or no loss of performance. This makes robust MPC a very attractive control methodology for application to large-scale systems and fast processes.
Acknowledgements
Financial support for this research from the American Chemical Society's Petroleum Research Fund (ACS-PRF) and the P. C. Rossin and Frank Hook Assistant Professorships (Lehigh University) is gratefully acknowledged.
References
Badgwell, T. A. (1997). Robust model predictive control of stable linear systems. International Journal of Control, 68(4), 797-818.
Bemporad, A., & Morari, M. (1999). Robust model predictive control: A survey. In A. Garulli, A. Tesi, A. Vicino, & G. Zappa (Eds.), Robustness in identification and control, Vol. 245 (pp. 207-226). Godalming: Springer-Verlag London Ltd.
Bemporad, A., Morari, M., Dua, V., & Pistikopoulos, E. N. (2002). The explicit linear quadratic regulator for constrained systems. Automatica, 38(1).
Braatz, R. D., Young, P., Doyle, J. C., & Morari, M. (1994). Computational complexity of μ calculations. IEEE Transactions on Automatic Control, 39(5), 1000-1002.
Chen, H., & Allgöwer, F. (1998a). A computationally attractive nonlinear predictive control scheme. Journal of Process Control, 8(5-6), 475-485.
Chen, H., & Allgöwer, F. (1998b). A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica, 34(10), 1205-1217.
Cherukuri, M. R., & Nikolaou, M. (1998). The equivalence between model predictive control and anti-windup control schemes. In AIChE annual meeting.
De Doná, J. A., & Goodwin, G. C. (2000). Elucidation of the state-space regions wherein model predictive control and anti-windup strategies achieve identical control policies. In Proceedings of the 2000 American Control Conference (pp. 1924-1928). Chicago, IL.
Gahinet, P., Nemirovski, A., Laub, A. J., & Chilali, M. (1995). LMI Control Toolbox: For use with MATLAB. Natick, MA: The MathWorks, Inc.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10), 1361-1379.
Lee, Y. I., & Kouvaritakis, B. (2000). A linear programming approach to constrained robust predictive control. IEEE Transactions on Automatic Control, 45(9), 1765-1770.
Marlin, T. E. (1995). Process control: Designing processes and control systems for dynamic performance. New York: McGraw-Hill.
Morari, M., & Lee, J. H. (1999). Model predictive control: Past, present and future. Computers & Chemical Engineering, 23(4-5), 667-682.
Nesterov, Yu., & Nemirovsky, A. (1994). Interior-point polynomial methods in convex programming, Vol. 13 of Studies in Applied Mathematics. Philadelphia, PA: SIAM.
Primbs, J. A., & Nevistić, V. (2000). A framework for robustness analysis of constrained finite receding horizon control. IEEE Transactions on Automatic Control, 45(10), 1828-1838.
VanAntwerp, J. G., & Braatz, R. D. (2000). Fast model predictive control of sheet and film processes. IEEE Transactions on Control Systems Technology, 8(3), 408-417.
Zheng, A. (1999). Reducing on-line computational demands in model predictive control by approximating QP constraints. Journal of Process Control, 9(4), 279-290.
Zhaoyang Wan received his B.S. and M.S. degrees in Chemical Engineering from Tsinghua University, P. R. China, in 1994 and 1997. From September 1997 to August 1998, he was a faculty advisor at Beijing University of Chemical Technology, P. R. China. Since September 1998, he has been a Ph.D. student at Lehigh University in the Department of Chemical Engineering. He received a Significant Contribution Award for his internship in the Process Dynamics and Control group of DuPont Engineering Technology. He has been Student Vice President of the Beta Pi Chapter of the Phi Beta Delta International Honor Society for international scholars since 2000. His main research interests are in robust model predictive control of constrained linear and nonlinear processes and control of distributed systems.

Mayuresh V. Kothare received the degree of Bachelor of Technology (B.Tech.) in Chemical Engineering from the Indian Institute of Technology, Bombay, in 1991. He was the recipient of the Institute Silver Medal for ranking first in Chemical Engineering at the Indian Institute of Technology, Bombay, the J. N. Tata Endowment award and the Sumant Mulgaonkar Memorial award for academic excellence in chemical engineering. He received his M.S. degree in Chemical Engineering from the California Institute of Technology in June 1995 and a Ph.D. degree in Chemical Engineering in June 1997 with a minor in Control and Dynamical Systems. From September 1994 to December 1996, he was a research assistant in the Automatic Control Laboratory of the Swiss Federal Institute of Technology in Zürich. From January to June 1997, he was a visiting scholar in the Department of Chemical Engineering at City College New York. From July 1997 to June 1998, he held a postdoctoral appointment in the Modeling and Advanced Control group of Mobil Technology Company, a subsidiary of Mobil Oil Corporation. In July 1998, he joined Lehigh University, where he is currently a P. C. Rossin and Frank Hook Assistant Professor of Chemical Engineering and Co-Director of the Chemical Process Modeling and Control Research Center. Dr. Kothare's interests are in the areas of control of chemical processes with input and output constraints, and modeling, dynamics and control of microchemical systems. He has made contributions in the analysis and synthesis of anti-windup control systems, model predictive control and the application of linear matrix inequalities to solving receding horizon control problems. He is the recipient of the year 2000 Ted Peterson Student Paper Award from the Computing & Systems Technology Division of the American Institute of Chemical Engineers and the CAREER award from the National Science Foundation in 2002.
