
Automatica 42 (2006) 523 – 533

www.elsevier.com/locate/automatica

Optimization over state feedback policies for robust control with constraints☆
Paul J. Goulart a,∗ , Eric C. Kerrigan b , Jan M. Maciejowski a
a Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
b Department of Aeronautics and Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2AZ, UK

Received 15 March 2005; received in revised form 8 June 2005; accepted 25 August 2005

Abstract
This paper is concerned with the optimal control of linear discrete-time systems subject to unknown but bounded state disturbances and
mixed polytopic constraints on the state and input. It is shown that the class of admissible affine state feedback control policies with knowledge
of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This implies
that a broad class of constrained finite horizon robust and optimal control problems, where the optimization is over affine state feedback
policies, can be solved in a computationally efficient fashion using convex optimization methods. This equivalence result is used to design a
robust receding horizon control (RHC) state feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints
are satisfied for all time and all allowable disturbance sequences. The cost to be minimized in the associated finite horizon optimal control
problem is quadratic in the disturbance-free state and input sequences. The value of the receding horizon control law can be calculated at each
sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic, or a tractable second-order cone
program (SOCP) if the disturbance set is given by a 2-norm bound.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Robust control; Constraint satisfaction; Robust optimization; Predictive control; Optimal control

☆ This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Martin Guay under the direction of Editor Frank Allgöwer.
∗ Corresponding author. Tel.: +44 1223 332711; fax: +44 1223 332662.
E-mail addresses: pgoulart@alum.mit.edu (P.J. Goulart), erickerrigan@ieee.org (E.C. Kerrigan), jmm@eng.cam.ac.uk (J.M. Maciejowski).
0005-1098/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.automatica.2005.08.023

1. Introduction

This paper is concerned with the control of constrained discrete-time linear systems that are subject to additive, but bounded disturbances on the state. The main aim is to provide results that allow for the efficient computation of an optimal and stabilizing state feedback control policy that ensures a given set of state and input constraints are satisfied for all time, despite the presence of the disturbances. This is a problem that has been studied for some time now in the optimal control literature (Bertsekas & Rhodes, 1973) and a number of different solutions are available, most of which draw on results from set invariance theory (Blanchini, 1999), ℓ1 optimal control (Dahleh & Diaz-Bobillo, 1995) or predictive control (Maciejowski, 2002; Mayne, Rawlings, Rao, & Scokaert, 2000).

It is generally accepted that if disturbances are to be accounted for in the formulation of a constrained optimal control problem, then the optimization has to be done over admissible state feedback policies, rather than open-loop input sequences, otherwise infeasibility and instability problems can occur (Mayne et al., 2000). However, optimization over arbitrary (nonlinear) feedback policies is particularly difficult if constraints have to be satisfied. Current proposals for achieving this using finite dimensional optimization, such as Scokaert and Mayne (1998), are computationally intractable, since the size of the optimization problem grows exponentially with the size of the problem data.

Hence, a popular approach in the predictive control literature is to compute one or more stabilizing linear state feedback control laws off-line and restrict the on-line computation to the selection of one of these control laws, followed by the computation of a finite sequence of admissible perturbations to the selected control law (Bemporad, 1998; Chisci, Rossiter, & Zappa, 2001; Lee & Kouvaritakis, 1999; Mayne, Seron, & Raković, 2005).
Though this approach considerably reduces the computational complexity, it is not always obvious how best to choose the linear control laws off-line so as to minimize conservativeness.

An obvious improvement to this approach of "pre-stabilization" is to try to simultaneously compute the linear feedback control law and perturbation sequence on-line at each sample instant. However, the problem with this approach is that the predicted input and state sequences are nonlinear functions of the sequence of state feedback gains. As a consequence, the set of feasible decision variables is non-convex, in general. Various proposals have been put forward for modifying this problem so that the set of feasible decision variables is convex (Jia, Krogh, & Stursberg, 2005; Kothare, Balakrishnan, & Morari, 1996; Smith, 2004), but generally this comes with an increase in the conservativeness of the solution. Hence, an interesting question is whether one can re-parameterize the optimal control problem, where the optimization is over the class of affine state feedback policies (linear feedback plus perturbation), and formulate an equivalent, but convex and tractable optimization problem. One of the contributions of this paper is to show that this is possible.

To show this, we exploit a recent result for solving a class of robust optimization problems with hard constraints, called adjustable robust counterpart (ARC) problems (Ben-Tal, Goryashko, Guslitzer, & Nemirovski, 2004; Guslitser, 2002), where the optimization variables correspond to decisions that can be made once actual values of the uncertainty become available. The authors of Ben-Tal et al. (2004) and Guslitser (2002) proposed that, instead of solving for an admissible nonlinear function of the uncertainty, one could aim to parameterize the solution as an affine function of the uncertainty. They proceeded to show that if the uncertainty set is polyhedral and the constraints in the optimization problem are affine, then an affine function of the uncertainty can be found by solving a single, computationally tractable linear program (LP). Equivalent parameterizations have also been proposed in the literature on predictive control with Gaussian state disturbances (van Hessem & Bosgra, 2002) and bounded state disturbances (Löfberg, 2003). This paper also presents a number of novel system-theoretic results relating to the use of this new parameterization.

In order for the results in this paper to be applicable to a large class of problems, the development of this paper starts with a general problem description, which is refined in each section. Section 2 discusses the class of systems that is to be considered throughout the paper and lists a number of standing assumptions. Section 3 describes the well-known affine state feedback parameterization and Section 4 describes the new affine disturbance feedback parameterization that was proposed in Ben-Tal et al. (2004), Gartska and Wets (1974), Guslitser (2002), Löfberg (2003) and van Hessem and Bosgra (2002). We demonstrate that in the case of the state feedback parameterization of Section 3, the set of decision variables is non-convex, in general. In contrast, with the disturbance feedback parameterization of Section 4, the set of admissible decision variables is convex, and an admissible policy can be found for a broad class of disturbances by solving a single and tractable convex optimization problem.

In Section 5 we state our main result and show that the state and disturbance feedback parameterizations of Sections 3 and 4 are equivalent. This has important system-theoretic consequences, which are explored in detail in Sections 6 and 7. Section 6 is concerned with results that guarantee robust constraint satisfaction for all time when the proposed control policy is used in the implementation of time-varying, receding horizon or time-optimal control laws. Section 7 formulates a suitable robust finite horizon optimal control problem and presents results that guarantee robust stability of the closed-loop system for the case when the solution to the optimal control problem is implemented in a receding horizon fashion. The paper concludes in Section 8 and suggests some topics for further research.

Notation and definitions. For matrices A and B, let A ≤ B denote element-wise inequality. A matrix, not necessarily square, is referred to as (strictly) lower triangular if the (i, j) entry is zero for all i < j (i ≤ j). A block partitioned matrix is referred to as (strictly) block lower triangular if the (i, j) block is zero when i < j (i ≤ j); note that a block lower triangular matrix is not necessarily lower triangular. Given a vector x, ‖x‖_A² := xᵀAx. Z_[k,l] represents the set of integers {k, k + 1, …, l}. A continuous function γ : R_≥0 → R_≥0 is a K-function if it is strictly increasing and γ(0) = 0; it is a K∞-function if, in addition, γ(s) → ∞ as s → ∞. A continuous function β : R_≥0 × R_≥0 → R_≥0 is a KL-function if for all k ≥ 0, the function β(·, k) is a K-function and for each s ≥ 0, β(s, ·) is decreasing with β(s, k) → 0 as k → ∞.

2. Standing assumptions

Consider the following discrete-time LTI system:

x⁺ = Ax + Bu + w,   (1)

where x ∈ R^n is the system state at the current time instant, x⁺ is the state at the next time instant, u ∈ R^m is the control input and w ∈ R^n is the disturbance. The current and future values of the disturbance are unknown and may change unpredictably from one time instant to the next, but are contained in a convex and compact (closed and bounded) set W, which contains the origin. The actual values of the state, input and disturbance at time instant k are denoted by x(k), u(k) and w(k), respectively; where it is clear from the context, x will be used to denote the current value of the state. It is assumed that (A, B) is stabilizable and that at each sample instant a measurement of the state is available. We also assume that a linear state feedback gain matrix K ∈ R^{m×n} is given, such that A + BK is strictly stable.

The system is subject to mixed constraints on the state and input:

Z := {(x, u) ∈ R^n × R^m | Cx + Du ≤ b},   (2)

where the matrices C ∈ R^{s×n}, D ∈ R^{s×m} and the vector b ∈ R^s. It is assumed that Z is bounded and contains the origin in its
interior. A primary design goal is to guarantee that the state and input of the closed-loop system remain in Z for all time and for all allowable disturbance sequences. Finally, a target/terminal constraint set X_f is given by

X_f := {x ∈ R^n | Yx ≤ z},   (3)

where the matrix Y ∈ R^{r×n} and the vector z ∈ R^r. It is assumed that X_f is bounded and contains the origin in its interior.

Remark 1. Many of the results in this paper remain valid if the assumption that Z and X_f are polytopes is relaxed to Z and X_f being convex; the current assumptions serve to simplify the presentation in Section 4.

In the sequel, predictions of the system's evolution over a finite control/planning horizon will be used to define a number of suitable control policies. Let the length N of this planning horizon be a positive integer and define stacked versions of the predicted input, state and disturbance vectors u ∈ R^{mN}, x ∈ R^{n(N+1)} and w ∈ R^{nN}, respectively, as

x := [x_0ᵀ … x_Nᵀ]ᵀ,
u := [u_0ᵀ … u_{N−1}ᵀ]ᵀ,
w := [w_0ᵀ … w_{N−1}ᵀ]ᵀ,

where x_0 = x denotes the current measured value of the state and x_{i+1} := A x_i + B u_i + w_i, i = 0, …, N − 1, denote the prediction of the state after i time instants. Finally, let the set W := W^N := W × ⋯ × W, so that w ∈ W.

3. State feedback parameterization

One natural approach to controlling the system in (1), while ensuring the satisfaction of the constraints, is to search over the set of time-varying affine state feedback control policies with knowledge of prior states:

u_i = Σ_{j=0}^{i} L_{i,j} x_j + g_i,   ∀i ∈ Z_[0,N−1],   (4)

where each L_{i,j} ∈ R^{m×n} and g_i ∈ R^m. For notational convenience, we also define the block lower triangular matrix L ∈ R^{mN×n(N+1)} and stacked vector g ∈ R^{mN} as

L := [L_{0,0}       0   ⋯             0   0]
     [   ⋮          ⋱                 ⋮   ⋮]
     [L_{N−1,0}     ⋯   L_{N−1,N−1}       0],    g := [g_0ᵀ … g_{N−1}ᵀ]ᵀ,   (5)

so that the input sequence can be written as u = Lx + g. For a given initial state x, we say that the pair (L, g) is admissible if the control policy (4) guarantees that for all allowable disturbance sequences of length N, the constraints (2) are satisfied over the horizon i = 0, …, N − 1 and that the state is in the target set (3) at the end of the horizon. More precisely, the set of admissible (L, g) is defined as

Π_N^sf(x) := {(L, g) | (L, g) satisfies (5), x = x_0,
                       x_{i+1} = A x_i + B u_i + w_i,
                       u_i = Σ_{j=0}^{i} L_{i,j} x_j + g_i,
                       (x_i, u_i) ∈ Z, x_N ∈ X_f,
                       ∀i ∈ Z_[0,N−1], ∀w ∈ W}.   (6)

The set of initial states x for which an admissible control policy of the form (4) exists is defined as

X_N^sf := {x ∈ R^n | Π_N^sf(x) ≠ ∅}.   (7)

It is critical to note that it may not be possible to select a single (L, g) such that it is admissible for all x ∈ X_N^sf. Indeed, it is easy to find examples where there exists a pair (x, x̃) ∈ X_N^sf × X_N^sf such that Π_N^sf(x) ∩ Π_N^sf(x̃) = ∅. For problems of non-trivial size, it is therefore necessary to calculate an admissible pair (L, g) on-line, given a measurement of the current state x. Once an admissible control policy is computed for the current state, there are many ways in which it can be applied to the system; time-varying, time-optimal and receding-horizon implementations are the most common, and are considered in detail in Sections 6 and 7.

Remark 2. Note that the state feedback policy (4) subsumes the well-known class of "pre-stabilizing" control policies (Bemporad, 1998; Chisci et al., 2001; Lee & Kouvaritakis, 1999; Mayne et al., 2005), in which the control policy takes the form u_i = K x_i + c_i, where K is computed off-line and on-line computation is limited to finding an admissible perturbation sequence {c_i}_{i=0}^{N−1}.

Finding an admissible pair (L, g), given the current state x, has been believed to be a very difficult problem due to the following property:

Proposition 3 (Non-convexity). For a given state x ∈ X_N^sf, the set of admissible affine state feedback control parameters Π_N^sf(x) is non-convex, in general.

The truth of this statement is easily verified by considering the following example:

Example 4. Consider the SISO system x⁺ = x + u + w with initial state x_0 = 0, input constraint |u| ≤ 3, bounded disturbances |w| ≤ 1 and a planning horizon of N = 3. Consider a control policy of the form (4) with g = 0 and L_{2,1} = 0, so that u_0 = 0 and

u_1 = L_{1,1} w_0,
u_2 = [L_{2,2}(1 + L_{1,1})] w_0 + L_{2,2} w_1.

Since the constraints on the components of w are independent, it is easy to show that the input constraints are satisfied for all w ∈ W if and only if

|L_{1,1}| ≤ 3,
|L_{2,2}(1 + L_{1,1})| + |L_{2,2}| ≤ 3.
It is straightforward to verify that the set of gains (L_{1,1}, L_{2,2}) that satisfy these constraints is non-convex; the pairs (−3, 1) and (−1, 3) are admissible, while the pair (−2, 2) is not.

It is surprising to note that, though the set Π_N^sf(x) may be non-convex, the set X_N^sf is always convex. We defer the proof of this until Section 5. Additionally, despite the fact that Π_N^sf(x) may be non-convex, we will show that one can still find an admissible (L, g) by solving a single tractable convex programming problem.

4. Disturbance feedback parameterization

An alternative to (4) is to parameterize the control policy as an affine function of the sequence of past disturbances, so that

u_i = Σ_{j=0}^{i−1} M_{i,j} w_j + v_i,   ∀i ∈ Z_[0,N−1],   (8)

where each M_{i,j} ∈ R^{m×n} and v_i ∈ R^m. It should be noted that, since full state feedback is assumed, the past disturbance sequence is easily calculated as the difference between the predicted and actual states at each step, i.e.

w_i = x_{i+1} − A x_i − B u_i,   ∀i ∈ Z_[0,N−1].   (9)

The above parameterization appears to have originally been suggested some time ago within the context of stochastic programs with recourse (Gartska & Wets, 1974). More recently, it has been revisited as a means for finding solutions to a class of robust optimization problems, called affinely adjustable robust counterpart (AARC) problems (Ben-Tal et al., 2004; Guslitser, 2002), and robust model predictive control problems (Löfberg, 2003; van Hessem & Bosgra, 2002).

For notational convenience, we define the vector v ∈ R^{mN} and the strictly block lower triangular matrix M ∈ R^{mN×nN} such that

M := [0            ⋯              ⋯  0]
     [M_{1,0}      0              ⋯  0]
     [   ⋮         ⋱              ⋱  ⋮]
     [M_{N−1,0}    ⋯  M_{N−1,N−2}    0],    v := [v_0ᵀ … v_{N−1}ᵀ]ᵀ,   (10)

so that the input sequence can be written as u = Mw + v. In a manner similar to (6), we define the set of admissible (M, v) as

Π_N^df(x) := {(M, v) | (M, v) satisfies (10), x = x_0,
                       x_{i+1} = A x_i + B u_i + w_i,
                       u_i = Σ_{j=0}^{i−1} M_{i,j} w_j + v_i,
                       (x_i, u_i) ∈ Z, x_N ∈ X_f,
                       ∀i ∈ Z_[0,N−1], ∀w ∈ W}   (11)

and the set of initial states x for which an admissible control policy of the form (8) exists as

X_N^df := {x ∈ R^n | Π_N^df(x) ≠ ∅}.   (12)

Before proceeding, we note that one can easily find matrices F ∈ R^{t×mN}, G ∈ R^{t×nN}, H ∈ R^{t×n} and a vector c ∈ R^t, where t := sN + r (for completeness, the matrices and vectors are given in the Appendix), such that the expression for Π_N^df(x) can be rewritten more compactly as

Π_N^df(x) = {(M, v) | (M, v) satisfies (10),
                      Fv + (FM + G)w ≤ c + Hx, ∀w ∈ W}.   (13)

4.1. Convexity of Π_N^df(x)

The main advantage of the disturbance feedback parameterization in (8) over the state feedback parameterization in (4) is formalized in the following statement.

Proposition 5 (Convexity). For a given state x ∈ X_N^df, the set of admissible affine disturbance feedback parameters Π_N^df(x) is convex and closed. Furthermore, the set of states X_N^df, for which at least one admissible affine disturbance feedback policy exists, is convex.

Proof. Consider the set

C_N := {(M, v, x) | (M, v) satisfies (10), Fv + (FM + G)w ≤ c + Hx, ∀w ∈ W}.

It immediately follows that

C_N = ∩_{w∈W} {(M, v, x) | (M, v) satisfies (10), Fv + (FM + G)w ≤ c + Hx}.

C_N is closed and convex, since it is the intersection of an arbitrary collection of closed and convex sets. The set X_N^df is convex since it is a projection of C_N onto a suitably defined subspace. Since the set Π_N^df(x) in (13) can similarly be written as an intersection of closed and convex sets, it is also closed and convex. □

The above result is of fundamental importance. If W is convex and compact, then it is conceptually possible to compute a pair (M, v) ∈ Π_N^df(x) in a computationally tractable way, given the current state x.

Remark 6. Note that the proof of Proposition 5 does not require W to be convex. However, the set W can, without loss of generality or increased conservativeness, be replaced by its convex hull (see Bertsekas, Nedic, & Ozdaglar, 2003, Example 7.1.2).

4.2. Computation of admissible policies

It is well-known that one can eliminate the universal quantifier in (13) to obtain the equivalent expression

Π_N^df(x) = {(M, v) | (M, v) satisfies (10),
                      Fv + max_{w∈W} (FM + G)w ≤ c + Hx},   (14)
where max_{w∈W} (FM + G)w denotes row-wise maximization. When W is convex, computing an admissible policy is easily done by formulating the dual of each maximization problem max_{w∈W} (FM + G)_i w, i = 1, …, t, introducing some slack variables and solving a single, suitably defined convex programming problem, where (FM + G)_i represents the ith row of (FM + G). We provide two examples for commonly encountered disturbance sets.

Example 7 (Polytopic disturbance sets (Ben-Tal et al., 2004; Guslitser, 2002)). Suppose that W is a polytope (closed and bounded polyhedron). In this case the disturbance set may be written as

W = {w ∈ R^{nN} | Sw ≤ h},

where S ∈ R^{a×nN} and h ∈ R^a. Note that this includes cases where the disturbance set is time-varying, and that both 1- and ∞-norm disturbance sets W can be characterized in this manner. In this case, each row of max_{w∈W} (FM + G)w is actually an LP, and exploitation of its dual leads to a constraint of the form

max_{w∈W} (FM + G)_i w = min_{z_i} hᵀz_i,
s.t. Sᵀz_i = (FM + G)_iᵀ,  z_i ≥ 0,

where the vector z_i ∈ R^a represents the dual variables associated with the ith row of the maximization in (14). By combining these into a matrix Z := [z_1 ⋯ z_t] ∈ R^{a×t}, one can rewrite Π_N^df(x) in terms of purely affine constraints:

Π_N^df(x) = {(M, v) | (M, v) satisfies (10), ∃Z s.t.
                      Fv + Zᵀh ≤ c + Hx,
                      FM + G = ZᵀS,  Z ≥ 0},   (15)

where all inequalities are element-wise. An admissible pair (M, v) can then be found by solving a single LP in a polynomial number of decision variables and constraints. Note that standard Kronecker product identities can be used to convert the matrix products in (15) to a vectorized form compatible with standard LP formulations.

Example 8 (Norm-bounded disturbance sets). Suppose that W represents the affine map of a set of norm-bounded signals:

W = {w | w = Ed + f, ‖d‖_p ≤ 1},

where E ∈ R^{nN×l} and f ∈ R^{nN}. From the well-known properties of the dual norm, it immediately follows that

max_{w∈W} aᵀw = ‖Eᵀa‖_q + aᵀf,

for any vector a ∈ R^{nN}, where 1/p + 1/q = 1. In particular, if W is the linear map of a 2-norm ball, then the disturbance set (with f = 0, p = q = 2) can be written as W = {Ed | ‖d‖_2 ≤ 1}. The row-wise maximization in (14) can then be simplified to

Π_N^df(x) = {(M, v) | (M, v) satisfies (10),
                      F_i v + ‖(FM + G)_i E‖_2 ≤ c_i + H_i x, ∀i ∈ Z_[1,t]},

where F_i, c_i, and H_i represent the ith rows of the respective matrices. An admissible pair (M, v) can then be found by solving a single second-order cone program (SOCP) in a polynomial number of variables and constraints (Lobo, Vandenberghe, Boyd, & Lebret, 1998). A similar process leads, in the case of 1- or ∞-norm bounded disturbances, to a tractable LP. Alternatively, these may be handled as special cases of Example 7.

5. Equivalence between state and disturbance feedback parameterizations

One important question is whether the disturbance feedback parameterization (8) is more or less conservative than the state feedback parameterization (4). We next show that they are actually equivalent.

Theorem 9 (Equivalence). The set of admissible states X_N^df = X_N^sf. Additionally, given any x ∈ X_N^sf, for any admissible (L, g) an admissible (M, v) can be found that yields the same state and input sequence for all allowable disturbance sequences, and vice-versa.

Proof. X_N^sf ⊆ X_N^df: By definition, for a given x ∈ X_N^sf, there exists a pair (L, g) that satisfies the constraints in (6). For a given disturbance sequence w ∈ W, the inputs and states of the system can be written as

u = Lx + g,   x = (I − BL)^{−1}(Bg + Ew + Ax),

where the matrices A, B, and E (for completeness, these are given in the Appendix) are defined so that one can write x = Ax + Bu + Ew. The matrix I − BL is always non-singular, since BL is strictly lower triangular. The control sequence can then be rewritten as u = L(I − BL)^{−1}(Bg + Ax) + L(I − BL)^{−1}Ew + g, and an admissible (M, v) constructed by choosing

M = L(I − BL)^{−1}E,   (16a)
v = L(I − BL)^{−1}(Bg + Ax) + g.   (16b)

It is easy to verify that this (M, v) satisfies (11), and gives exactly the same input sequence as the pair (L, g). Therefore, (M, v) ∈ Π_N^df(x), thus x ∈ X_N^sf ⇒ x ∈ X_N^df.

X_N^df ⊆ X_N^sf: By definition, for a given x ∈ X_N^df, there exists a pair (M, v) that satisfies the constraints in (11). For a given disturbance sequence w ∈ W, the inputs and states of the system can be written as

u = Mw + v,   x = B(Mw + v) + Ew + Ax.

Recall that since full state feedback is assumed, one can construct matrices E† and 𝓘 such that the disturbances can be recovered using w = E†x − 𝓘Ax + E†Bu. It is easy to verify that the matrices E† and 𝓘ᵀ are left inverses of E and A, respectively, so that E†E = I and 𝓘ᵀA = I. The input sequence can then be rewritten as u = (I − ME†B)^{−1}(ME†x − M𝓘Ax + v). The matrix I − ME†B is non-singular because the product ME†B = M(I ⊗ B) is strictly lower triangular. An admissible
(L, g) can then be constructed by choosing

L = (I − ME†B)^{−1}ME†,   (17a)
g = (I − ME†B)^{−1}(v − M𝓘Ax).   (17b)

It is easy to verify that this (L, g) satisfies (6), and gives exactly the same input sequence as the pair (M, v). Therefore, (L, g) ∈ Π_N^sf(x), thus x ∈ X_N^df ⇒ x ∈ X_N^sf. □

This leads to the following surprising result.

Corollary 10 (Convexity of X_N^sf). The set of states X_N^sf, for which an admissible affine state feedback policy of the form (4) exists, is a convex set.

6. Geometric and invariance properties

It is well-known that the set of states for which an admissible open loop input sequence exists (i.e. a feedback policy with L = 0 or M = 0) may collapse to the empty set if the horizon is sufficiently large (Scokaert & Mayne, 1998, Section F). Furthermore, for time-varying, receding-horizon or time-optimal control implementations, it may not be possible to guarantee constraint satisfaction for all time unless additional assumptions are made. In this section, we provide conditions under which these problems will not occur. We first introduce the following standard assumption (cf. Mayne et al., 2000).

Assumption 1 (Terminal constraint). The state feedback gain matrix K and terminal constraint X_f have been chosen such that:

• X_f is contained inside the set of states for which the constraints (2) are satisfied under the control u = Kx, i.e. X_f ⊆ {x | (x, Kx) ∈ Z} = {x | (C + DK)x ≤ b}.
• X_f is robust positively invariant for the closed-loop system x⁺ = (A + BK)x + w, i.e. (A + BK)x + w ∈ X_f, for all x ∈ X_f and all w ∈ W.

Under some additional, mild technical assumptions, one can compute a K and a polytopic X_f that satisfy Assumption 1 if W is a polytope, an ellipsoid or the affine map of a p-norm ball. The reader is referred to Blanchini (1999), Kolmanovsky and Gilbert (1998), Lee and Kouvaritakis (1999) and the references therein for details.

6.1. Monotonicity of X_N^sf and X_N^df

We are now in a position to give a sufficient condition under which one can guarantee that X_N^sf (equivalently, X_N^df) is non-empty and the size of X_N^sf is non-decreasing (with respect to set inclusion) with horizon length N:

Proposition 11 (Size of X_N^sf and X_N^df). If Assumption 1 holds, then the following set inclusions hold:

X_f ⊆ X_1^sf ⊆ ⋯ ⊆ X_{N−1}^sf ⊆ X_N^sf ⊆ X_{N+1}^sf ⊆ ⋯,
X_f ⊆ X_1^df ⊆ ⋯ ⊆ X_{N−1}^df ⊆ X_N^df ⊆ X_{N+1}^df ⊆ ⋯.

Proof. The proof of the first relation is by induction. Let x ∈ X_N^sf and (L, g) ∈ Π_N^sf(x). One can construct a pair (L̄, ḡ) ∈ Π_{N+1}^sf(x), where L̄ := [L 0; 0 K 0] and ḡ := [gᵀ 0]ᵀ, such that the final stage input will be u_N = K x_N. From the definition of Π_N^sf(x), it follows that x_N ∈ X_f. If Assumption 1 holds, then (x_N, u_N) ∈ Z and x_{N+1} = A x_N + B u_N + w_N ∈ X_f for all w_N ∈ W. It then follows from the definition of Π_{N+1}^sf(x) that (L̄, ḡ) ∈ Π_{N+1}^sf(x), hence x ∈ X_{N+1}^sf. The proof is completed by verifying, in a similar manner, that X_f ⊆ X_1^sf ⊆ X_2^sf. The second relation then follows from Theorem 9. □

6.2. Time-varying control laws

We first consider what happens if one were to implement an admissible affine disturbance feedback policy in a time-varying fashion. Given any (M, v) ∈ Π_N^df(x(0)), we consider the following time-varying affine disturbance feedback policy:

u(k) = { v_k + Σ_{j=0}^{k−1} M_{k,j} w(j)   if k ∈ Z_[0,N−1],
       { K x(k)                           if k ∈ Z_[N,∞).   (18)

Recall that the realized disturbance sequence w(·) can be recovered using relation (9). Theorem 9 implies that we could also have defined an equivalent, time-varying affine state feedback policy, but we choose to work with disturbance feedback policies due to the convenience of computation resulting from Proposition 5 and Examples 7 and 8. The next result follows immediately.

Proposition 12 (Time-varying control). Let Assumption 1 hold, the initial state x(0) ∈ X_N^df and (M, v) ∈ Π_N^df(x(0)). For all allowable infinite disturbance sequences, the state of system (1), in closed-loop with the feedback policy (18), enters X_f in N steps or less and is in X_f for all k ∈ Z_[N,∞). Furthermore, the constraints in (2) are satisfied for all time and for all allowable infinite disturbance sequences.

6.3. Receding horizon control laws

We next consider what happens when the disturbance feedback parameterization (8) is used to design a receding horizon control (RHC) law. In RHC, an admissible feedback policy is computed at each time instant, but only the first component of the policy is applied.

Consider the set-valued map Υ_N : X_N^sf → 2^{R^m} (2^{R^m} is the set of all subsets of R^m), which is defined by considering only the first portion of an admissible state feedback parameter (L, g), i.e.

Υ_N(x) := {u | ∃(L, g) ∈ Π_N^sf(x) s.t. u = L_{0,0}x + g_0}.   (19)

An admissible RHC law μ_N : X_N^sf → R^m is defined as any selection from the set-valued map Υ_N(·), i.e. μ_N(·) has to satisfy μ_N(x) ∈ Υ_N(x) for all x ∈ X_N^sf. The resulting closed-loop system is then given by

x⁺ = Ax + Bμ_N(x) + w.   (20)
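To make the time-varying and receding horizon implementations concrete, the following small simulation sketch applies policy (18) to the scalar system of Example 4. The pair (M, v) is illustrative only: it is the disturbance feedback form of the admissible Example 4 gains (L_{1,1}, L_{2,2}) = (−1, 3), not the result of an optimization over (11), and K = −1 is chosen so that A + BK = 0 is strictly stable:

```python
import random

# Scalar system of Example 4: x+ = x + u + w, |u| <= 3, |w| <= 1, N = 3.
A, B, N, K = 1.0, 1.0, 3, -1.0
M = [[0.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0],
     [0.0, 3.0, 0.0]]           # strictly lower triangular, as required by (10)
v = [0.0, 0.0, 0.0]

random.seed(0)
x = [0.0]                        # x(0) = 0
u_hist, w_hist = [], []
for k in range(6):
    if k < N:
        # Past disturbances are recovered from measured states via (9):
        # w(j) = x(j+1) - A*x(j) - B*u(j).
        w_rec = [x[j + 1] - A * x[j] - B * u_hist[j] for j in range(k)]
        u = v[k] + sum(M[k][j] * w_rec[j] for j in range(k))
    else:
        u = K * x[k]             # linear terminal feedback for k >= N, per (18)
    w = random.uniform(-1.0, 1.0)
    u_hist.append(u)
    w_hist.append(w)
    x.append(A * x[k] + B * u + w)

# Within the horizon, |u(k)| <= 3 holds for every disturbance realization,
# because (M, v) is admissible for Example 4. Beyond the horizon this is
# only guaranteed under the terminal-set conditions of Assumption 1, which
# this toy example does not enforce.
```

Note that the loop never reads the disturbance variable w directly when computing u; the policy uses only the recovered sequence from (9), as a real implementation with state measurements would.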
Note that the RHC law μ_N(·) is time-invariant and is, in general, a nonlinear function of the current state. Due to the non-convexity of Π_N^sf(x), computing an admissible (L, g) in (19) at each time instant is problematic. However, by a straightforward application of Theorem 9, it follows that

Υ_N(x) = {u | ∃(M, v) ∈ Π_N^df(x) s.t. u = v_0}.

Hence, computation of a μ_N(x) ∈ Υ_N(x) is possible using convex programming methods.

The following result is easily proven using standard methods in RHC (Mayne et al., 2000), by employing the state feedback parameterization (4).

Proposition 13 (RHC). If Assumption 1 holds, then the set X_N^sf is robust positively invariant for the closed-loop system (20), i.e. if x ∈ X_N^sf, then Ax + Bμ_N(x) + w ∈ X_N^sf for all w ∈ W. Furthermore, the constraints (2) are satisfied for all time and for all allowable disturbance sequences if and only if the initial state x(0) ∈ X_N^sf.

6.4. Minimum-time control laws

We conclude this section by deriving some results for robust minimum-time control laws. Given a maximum horizon length N_max and the set N := {1, …, N_max}, let

N*(x) := min{N ∈ N | Π_N^sf(x) ≠ ∅}

be the minimum horizon length for which an admissible affine state feedback policy of the form (4) exists. Consider also the set-valued map Υ : X → 2^{R^m}, defined as

Υ(x) := { Υ_{N*(x)}(x)   if x ∉ X_f,
        { Kx             if x ∈ X_f,

where Υ_{N*(x)}(x) is defined as in (19) with N = N*(x), and

X := X_f ∪ (∪_{N∈N} X_N^sf).

Let the time-invariant robust time-optimal control law τ : X → R^m be any selection from Υ(·), i.e. τ(x) ∈ Υ(x), for all x ∈ X. Note that τ(·) is defined everywhere on X and that the state of the closed-loop system x⁺ = Ax + Bτ(x) + w will enter X_f in less than N_max steps if this is possible, even if Assumption 1 does not hold. Proof of the following result is straightforward and closely parallels that of Propositions 11 and 13.

Proposition 14 (Minimum-time control). If Assumption 1 holds, then X = X_{N_max}^sf and X is robust positively invariant for the closed-loop system x⁺ = Ax + Bτ(x) + w, i.e. if x ∈ X, then x⁺ ∈ X for all w ∈ W. The state of the closed-loop system enters X_f in N_max steps or less and, once inside, remains inside for all time and all allowable disturbance sequences. Furthermore, constraints (2) are satisfied for all time and for all allowable disturbance sequences.

7. Uniqueness, continuity and stability of a class of receding horizon control laws

We next consider the important problem of how to synthesize an RHC law such that the closed-loop system is robustly stable. We choose to minimize the value of a cost function that is quadratic in the disturbance-free states and control inputs and demonstrate that this allows for the synthesis of a continuous RHC law, which guarantees that the closed-loop system is input-to-state stable (ISS). As in Section 6, we rely heavily on Theorem 9 in order to derive these results, moving freely between the two parameterizations and using whichever is most natural in each context.

7.1. Cost function

We define an optimal policy to be one that minimizes the value of a cost function that is quadratic in the disturbance-free state and input sequences. We thus define

V_N(x, L, g, w) := ½ Σ_{i=0}^{N−1} (‖x̃_i‖_Q² + ‖ũ_i‖_R²) + ½‖x̃_N‖_P²,   (21)

where x̃_0 = x, x̃_{i+1} = A x̃_i + B ũ_i + w_i and ũ_i = Σ_{j=0}^{i} L_{i,j} x̃_j + g_i for i = 0, …, N − 1, and P, Q and R are positive definite; and define an optimal policy pair as

(L*(x), g*(x)) := argmin_{(L,g)∈Π_N^sf(x)} V_N(x, L, g, 0).   (22)

The time-invariant RHC law μ_N : X_N^sf → R^m is defined by the first part of the optimal affine state feedback control policy, i.e.

μ_N(x) := L*_{0,0}(x)x + g*_0(x),   (23)

which is nonlinear, in general. The closed-loop system becomes

x⁺ = Ax + Bμ_N(x) + w.   (24)

We also define the value function V_N* : X_N^sf → R_≥0 to be

V_N*(x) := min_{(L,g)∈Π_N^sf(x)} V_N(x, L, g, 0).   (25)

7.2. Exploiting equivalence to compute the RHC law

For the equivalent affine disturbance feedback parameterization (8), we define a cost function J_N(·) analogous to the one defined in (21), i.e.

J_N(x, M, v, w) := ½ Σ_{i=0}^{N−1} (‖x̄_i‖_Q² + ‖ū_i‖_R²) + ½‖x̄_N‖_P²,

where x̄_0 = x, x̄_{i+1} = A x̄_i + B ū_i + w_i and ū_i = Σ_{j=0}^{i−1} M_{i,j} w_j + v_i for i = 0, …, N − 1. If we define
all allowable disturbance sequences if and only if the initial (M∗ (x), v∗ (x)) := argmin JN (x, M, v, 0), (26)
state x(0) ∈ X. (M,v)∈df
N (x)
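The cost J_N above can be evaluated by simulating the dynamics directly. The following numpy sketch (with an illustrative double-integrator system and dimensions chosen here, not taken from the paper) evaluates J_N(x, M, v, w) for an affine disturbance feedback policy and checks a fact used later in the paper: the nominal cost J_N(x, M, v, 0) does not depend on M, since with w = 0 the policy reduces to u_i = v_i.

```python
import numpy as np

def affine_df_cost(A, B, Q, R, P, x0, M, v, w):
    """Evaluate J_N(x, M, v, w) for the affine disturbance feedback policy
    u_i = sum_{j<i} M[i][j] w_j + v_i, by simulating x_{i+1} = A x_i + B u_i + w_i
    and accumulating the quadratic stage and terminal costs of (21)."""
    N = len(v)
    x = x0
    cost = 0.0
    for i in range(N):
        u = v[i] + sum(M[i][j] @ w[j] for j in range(i))
        cost += 0.5 * (x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u + w[i]
    return cost + 0.5 * (x @ P @ x)

# Illustrative data (not from the paper): a 2-state, 1-input double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.eye(1); P = np.eye(2)
N = 3
x0 = np.array([1.0, 0.0])
v = [np.zeros(1) for _ in range(N)]
M_zero = [[np.zeros((1, 2)) for _ in range(N)] for _ in range(N)]
M_other = [[np.ones((1, 2)) for _ in range(N)] for _ in range(N)]

# With w = 0 the value of M is irrelevant: J_N(x, M, v, 0) = J_N(x, 0, v, 0).
w0 = [np.zeros(2) for _ in range(N)]
assert np.isclose(affine_df_cost(A, B, Q, R, P, x0, M_zero, v, w0),
                  affine_df_cost(A, B, Q, R, P, x0, M_other, v, w0))
```

This invariance of the nominal cost with respect to M is exactly what allows the value function to be written as a minimization over v alone, as exploited in the continuity proof below.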
The proof of the following result then follows by a straightforward application of Theorem 9.

Proposition 15 (Computation of RHC law). The RHC law μ_N(·), defined in (23), is given by the first part of the optimal control sequence v*(·), i.e.

μ_N(x) = v*_0(x) = L*_{0,0}(x)x + g*_0(x), ∀x ∈ X^sf_N.

The minimum value of J_N(x, ·, ·, 0) taken over the set of admissible affine disturbance feedback parameters is equal to V*_N(x), defined in (25), i.e.

V*_N(x) = min_{(M,v) ∈ Π^df_N(x)} J_N(x, M, v, 0).

Remark 16. (L*(x), g*(x)) in (22) is found by letting (M, v) = (M*(x), v*(x)) in (17a). This is important because it allows one to efficiently compute the value of the RHC law u = μ_N(x) via the minimization of a convex function over a convex set. In particular, we note that if W is a polytope as in Example 7, then (26) can be written as a convex quadratic program (QP) in a tractable number of variables and constraints. If W is an ellipsoid or the affine map of a Euclidean ball as in Example 8, then the optimization problem in (26) can be converted to a tractable SOCP (Lobo et al., 1998).

7.3. Continuity of the RHC law and value function

Proposition 17 (Continuity of μ_N and V*_N). If W is a polytope, then the RHC law μ_N(·) in (23) is unique and Lipschitz continuous on X^sf_N. Furthermore, the value function V*_N(·) in (25) is strictly convex and Lipschitz continuous on X^sf_N.

Proof. Note that J_N(x, M, v, 0) = J_N(x, 0, v, 0) for all M. Hence, if we define the set

𝒱_N(x) := {v | ∃M s.t. (M, v) ∈ Π^df_N(x)},

then, from (26) and Proposition 15, respectively,

v*(x) = argmin_{v ∈ 𝒱_N(x)} J_N(x, 0, v, 0),   (27)
V*_N(x) = min_{v ∈ 𝒱_N(x)} J_N(x, 0, v, 0).   (28)

If W is a polytope, then 𝒱_N(x) is polyhedral since it is the projection of the polyhedron (15) onto a subspace. It is also easy to verify that (x, v) ↦ J_N(x, 0, v, 0) is a strictly convex quadratic function, and thus that (28) is a strictly convex QP. By applying the results in Bemporad, Morari, Dua, and Pistikopoulos (2002), it follows that v*(·) and hence μ_N(·) are continuous, piecewise affine functions on X^sf_N, and that V*_N(·) is a strictly convex, piecewise quadratic function on X^sf_N. Lipschitz continuity follows from the assumption that Z is compact, hence X^sf_N is also compact. □

Finally, we present the following result, which is useful in proving stability in the next section.

Lemma 18 (Value at the origin). If Assumption 1 holds, then V*_N(0) = 0 and μ_N(0) = 0.

Proof. Proposition 11 implies that the origin is in the interior of X^sf_N. If x ∈ X_f, then (L, g) ∈ Π^sf_N(x) if g = 0, L_{i,i} = K for i = 0, . . . , N − 1 and L_{i,j} = 0 for all i ≠ j. Hence, V*_N(0) ≤ V_N(0, L, 0, 0) = 0. Since V*_N(x) ≥ 0 for all x ∈ X^sf_N, V*_N(0) = 0, thus μ_N(0) = 0. □

7.4. Input-to-state stability (ISS) for RHC

Since the disturbance is non-zero, it is not possible to guarantee that the origin is asymptotically stable, as in conventional RHC without disturbances (Mayne et al., 2000). As an alternative, we use the notion of input-to-state stability (ISS) (Jiang & Wang, 2001; Khalil, 2002), which has proven to be effective in the study of RHC laws with input constraints only (Kim, Yoon, Jadbabaie, & De Persis, 2004) and in the analysis and synthesis of RHC laws with robust constraint satisfaction guarantees (Limón Marruedo, Álamo, & Camacho, 2002).

Consider a nonlinear, time-invariant, discrete-time system of the form

x⁺ = f(x, w),   (29)

where x ∈ R^n is the state and w ∈ R^l is a disturbance that takes on values in a compact set W ⊂ R^l containing the origin. It is assumed that the state is measured at each time instant, that f : R^n × R^l → R^n is continuous and that f(0, 0) = 0. Given the state x at time 0 and a disturbance sequence w(·), where w(k) ∈ W for all k ∈ Z_[0,∞), let the solution to (29) at time k be denoted by φ(k, x, w(·)). For systems of this type, a useful definition of stability is ISS.

Definition 19 (ISS). For system (29), the origin is input-to-state stable (ISS) with region of attraction X ⊆ R^n, which contains the origin in its interior, if there exist a KL-function β(·) and a K-function γ(·) such that for all initial states x ∈ X and disturbance sequences w(·), where w(k) ∈ W for all k ∈ Z_[0,∞), the solution of the system satisfies φ(k, x, w(·)) ∈ X and, for all k ∈ N,

‖φ(k, x, w(·))‖ ≤ β(‖x‖, k) + γ(sup{‖w(τ)‖ | τ ∈ Z_[0,k−1]}).

Note that ISS implies that the origin is an asymptotically stable point for the undisturbed system x⁺ = f(x, 0) with region of attraction X, and also that all state trajectories are bounded for all bounded disturbance sequences. Furthermore, every trajectory φ(k, x, w(·)) → 0 if w(k) → 0 as k → ∞.

In order to be self-contained, we recall the following useful result from Jiang and Wang (2001, Lemma 3.5).

Lemma 20 (ISS-Lyapunov function). For system (29), the origin is ISS with region of attraction X ⊆ R^n if the following
conditions are satisfied:

• X contains the origin in its interior and X is robust positively invariant for (29), i.e. f(x, w) ∈ X for all x ∈ X and all w ∈ W.
• There exist K∞-functions α_1(·), α_2(·) and α_3(·), a K-function σ(·), and a continuous function V : X → R_{≥0} such that, for all x ∈ X,

α_1(‖x‖) ≤ V(x) ≤ α_2(‖x‖),
V(f(x, w)) − V(x) ≤ −α_3(‖x‖) + σ(‖w‖).

Remark 21. A function V(·) that satisfies the conditions in Lemma 20 is called an ISS-Lyapunov function.

The above result leads immediately to the following.

Lemma 22 (Lipschitz Lyapunov function for undisturbed system). Let X ⊆ R^n contain the origin in its interior and be a robust positively invariant set for (29). Furthermore, let there exist K∞-functions α_1(·), α_2(·) and α_3(·) and a function V : X → R_{≥0} that is Lipschitz continuous on X such that, for all x ∈ X,

α_1(‖x‖) ≤ V(x) ≤ α_2(‖x‖),   (30a)
V(f(x, 0)) − V(x) ≤ −α_3(‖x‖).   (30b)

The function V(·) is an ISS-Lyapunov function and the origin is ISS for system (29) with region of attraction X if either of the following conditions is satisfied:

(i) f : X × W → R^n is Lipschitz continuous on X × W.
(ii) f(x, w) := g(x) + w, where g : X → R^n is continuous on X.

Proof. Let L_V be the Lipschitz constant of V(·). (i) Since V(f(x, w)) − V(f(x, 0)) ≤ L_V ‖f(x, w) − f(x, 0)‖ ≤ L_V L_f ‖w‖, where L_f is the Lipschitz constant of f(·), it follows that V(f(x, w)) − V(x) = V(f(x, 0)) − V(x) + V(f(x, w)) − V(f(x, 0)) ≤ −α_3(‖x‖) + L_V L_f ‖w‖. The proof is completed by letting σ(s) := L_V L_f s in Lemma 20. (ii) Note that V(f(x, w)) − V(f(x, 0)) ≤ L_V ‖w‖. The proof is completed as for (i), but by letting σ(s) := L_V s in Lemma 20. □

Finally, we add the following assumption, which allows V*_N(·) in (25) to be used as an ISS-Lyapunov function.

Assumption 2 (Terminal cost). The terminal cost F(x) := xᵀPx is a Lyapunov function in the terminal set X_f for the undisturbed closed-loop system x⁺ = (A + BK)x, in the sense that, for all x ∈ X_f,

F((A + BK)x) − F(x) ≤ −xᵀ(Q + KᵀRK)x.

Theorem 23 (ISS for RHC). Let W be a polytope and the RHC law μ_N(·) be defined as in (23). If Assumptions 1 and 2 hold, then the origin is ISS for the closed-loop system (24) with region of attraction X^sf_N. Furthermore, the input and state constraints (2) are satisfied for all time and for all allowable disturbance sequences if and only if the initial state x(0) ∈ X^sf_N.

Proof. For the system of interest, we of course let f(x, w) := Ax + Bμ_N(x) + w. Lemma 18 implies that f(0, 0) = 0 and Proposition 17 implies that f(·) is continuous on X^sf_N.

Combining Proposition 17 with Lemma 18, it follows that V*_N(·) is a continuous, positive definite function. Hence, there exist K∞-functions α_1(·) and α_2(·) such that (30a) holds with V(·) := V*_N(·) (Khalil, 2002, Lemma 4.3).

Using standard techniques (Mayne et al., 2000), it is easy to show that V(·) := V*_N(·) is a Lyapunov function for the undisturbed system x⁺ = Ax + Bμ_N(x). More precisely, the methods in Mayne et al. (2000) can be employed to show that (30b) holds with α_3(z) := (1/2)λ_min(Q)z².

It follows from Proposition 13 that X^sf_N is robust positively invariant for system (24). Proposition 11 implies that the origin is in the interior of X^sf_N. Finally, recall from Proposition 17 that μ_N(·) and V*_N(·) are Lipschitz continuous on X^sf_N. By combining all of the above, it follows from Lemma 22 that V*_N(·) is an ISS-Lyapunov function for system (24). □

Remark 24. Given the same assumptions as in Theorem 23, it can be shown (Mayne et al., 2000) that the origin is an exponentially stable equilibrium for the undisturbed system x⁺ = Ax + Bμ_N(x) with region of attraction X^sf_N.

8. Conclusions

We have shown that the state feedback parameterization of Section 3 is equivalent to the disturbance feedback parameterization of Section 4. This has the important consequence that, under suitable assumptions on the disturbance and cost function in a given finite horizon optimal control problem, an admissible and optimal state feedback control policy can be found by solving a tractable and convex optimization problem. This is a surprising result, since the set of admissible affine state feedback parameters is non-convex, in general.

In addition, if the optimal control problem involves the minimization of a quadratic cost and the solution is to be implemented in a receding horizon fashion, then one can choose the terminal cost and terminal constraint to guarantee that the closed-loop system is input-to-state stable and that the state and input constraints are satisfied for all time and for all disturbance sequences.

A number of open research issues remain. For example, we have shown that the proposed disturbance feedback parameterization is equivalent to affine state feedback with knowledge of prior states. It would be interesting to see if it is possible to derive an equivalent convex re-parameterization in the case where the control at each stage is an affine function of the current state only.
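A standard constructive way to satisfy a terminal cost condition of the type in Assumption 2 is to take K as the unconstrained LQR gain and P as the corresponding solution of the discrete-time Riccati equation, in which case the decrease condition holds with equality. The following numpy sketch (illustrative system data, not from the paper; a simple fixed-point Riccati iteration is used instead of a library solver) checks this choice numerically.

```python
import numpy as np

def lqr_terminal_pair(A, B, Q, R, iters=500):
    """Compute the unconstrained LQR cost matrix P and gain K by fixed-point
    iteration of the discrete-time Riccati recursion. For this (P, K) pair,
    (A+BK)' P (A+BK) - P = -(Q + K'RK), i.e. Assumption 2 holds with equality."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B
        K = -np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)
    return P, K

# Illustrative system (not from the paper): a double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P, K = lqr_terminal_pair(A, B, Q, R)
Ak = A + B @ K
# Check the decrease condition of Assumption 2 (equality for the LQR pair).
lhs = Ak.T @ P @ Ak - P
rhs = -(Q + K.T @ R @ K)
assert np.allclose(lhs, rhs, atol=1e-8)
```

Any other stabilizing K with P solving the associated Lyapunov equation would serve equally well; the LQR pair is simply a convenient choice that also makes F a tight local upper bound on the infinite-horizon cost.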
This paper only considered the regulation problem with state feedback. In order to be practically useful, the results need to be extended to handle the cases of output feedback, setpoint tracking and offset-free control.

It may be possible to extend the continuity and stability results in Proposition 17 and Theorem 23 to cover a broader class of disturbances, such as ellipsoidal or 2-norm bounded disturbances, or to cover more general convex constraints on the states and inputs. However, the arguments used for doing so are likely to differ substantially from those given here.

The result on computational tractability may be extended to exploit any additional structure in the optimal control problem for specific classes of disturbances and cost functions. Some results along these lines are already available for problems with ∞-norm bounded disturbances (Goulart & Kerrigan, 2005) and the minimization of the finite-horizon ℓ2 gain of a system (Kerrigan & Alamo, 2004).

Acknowledgements

Paul Goulart would like to thank the Gates Cambridge Trust for their support. Eric Kerrigan would like to thank the Royal Academy of Engineering for supporting this research.

Appendix

Define 𝒜 ∈ R^{n(N+1)×n} and ℰ ∈ R^{n(N+1)×nN} as

𝒜 := [I_n; A; A²; . . . ; A^N],

ℰ := [0 0 · · · 0; I_n 0 · · · 0; A I_n · · · 0; . . . ; A^{N−1} A^{N−2} · · · I_n],

where the semicolons separate block rows. The matrices ℬ ∈ R^{n(N+1)×mN}, 𝒞 ∈ R^{t×n(N+1)} and 𝒟 ∈ R^{t×mN} are defined as ℬ := ℰ(I_N ⊗ B), 𝒞 := [I_N ⊗ C 0; 0 Y], and 𝒟 := [I_N ⊗ D; 0]. It is easy to verify that (11) is equivalent to (13) with F := 𝒞ℬ + 𝒟, G := 𝒞ℰ, H := −𝒞𝒜, and c := [1_N ⊗ b; z].

References

Bemporad, A. (1998). Reducing conservativeness in predictive control of constrained systems with disturbances. In Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, USA, December, pp. 1384–1391.
Bemporad, A., Morari, M., Dua, V., & Pistikopoulos, E. N. (2002). The explicit linear quadratic regulator for constrained systems. Automatica, 38(1), 3–20.
Ben-Tal, A., Goryashko, A., Guslitzer, E., & Nemirovski, A. (2004). Adjustable robust solutions of uncertain linear programs. Mathematical Programming, 99(2), 351–376.
Bertsekas, D. P., Nedic, A., & Ozdaglar, A. E. (2003). Convex analysis and optimization. Athena Scientific.
Bertsekas, D. P., & Rhodes, I. B. (1973). Sufficiently informative functions and the minimax feedback control of uncertain dynamic systems. IEEE Transactions on Automatic Control, AC-18(2), 117–124.
Blanchini, F. (1999). Set invariance in control. Automatica, 35(11), 1747–1767.
Chisci, L., Rossiter, J. A., & Zappa, G. (2001). Systems with persistent state disturbances: Predictive control with restricted constraints. Automatica, 37(7), 1019–1028.
Dahleh, M. A., & Diaz-Bobillo, I. J. (1995). Control of uncertain systems. Englewood Cliffs: Prentice Hall.
Gartska, S. J., & Wets, R. J.-B. (1974). On decision rules in stochastic programming. Mathematical Programming, 7, 117–143.
Goulart, P. J., & Kerrigan, E. C. (2005). An efficient decomposition-based formulation for robust control with constraints. In Proceedings of the 16th IFAC World Congress on Automatic Control, Prague, Czech Republic.
Guslitser, E. (2002). Uncertainty-immunized solutions in linear programming. Master's thesis, Technion, Israel Institute of Technology.
Jia, D., Krogh, B. H., & Stursberg, O. (2005). An LMI approach to robust model predictive control. Journal of Optimization Theory and Applications, 127(2), 347–365.
Jiang, Z., & Wang, Y. (2001). Input-to-state stability for discrete-time nonlinear systems. Automatica, 37(6), 857–869.
Kerrigan, E. C., & Alamo, T. (2004). A convex parameterization for solving constrained min–max problems with a quadratic cost. In Proceedings of the 2004 American Control Conference, Boston, MA, USA.
Khalil, H. K. (2002). Nonlinear systems. USA: Prentice Hall.
Kim, J., Yoon, T., Jadbabaie, A., & De Persis, C. (2004). Input-to-state stabilizing MPC for naturally stable linear systems subject to input constraints. In Proceedings of the 43rd IEEE Conference on Decision and Control, Paradise Island, Bahamas, December 2004, pp. 5041–5046.
Kolmanovsky, I., & Gilbert, E. G. (1998). Theory and computations of disturbance invariant sets for discrete-time linear systems. Mathematical Problems in Engineering, 4(4), 317–363.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10), 1361–1379.
Lee, Y. I., & Kouvaritakis, B. (1999). Constrained receding horizon predictive control for systems with disturbances. International Journal of Control, 72(11), 1027–1032.
Limón Marruedo, D., Álamo, T., & Camacho, E. F. (2002). Input-to-state stable MPC for constrained discrete-time nonlinear systems with bounded additive disturbances. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, pp. 4619–4624.
Lobo, M. S., Vandenberghe, L., Boyd, S., & Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284(1–3), 193–228.
Löfberg, J. (2003). Approximations of closed-loop MPC. In Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, USA, pp. 1438–1442.
Maciejowski, J. M. (2002). Predictive control with constraints. UK: Prentice Hall.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789–814. Survey paper.
Mayne, D. Q., Seron, M. M., & Raković, S. V. (2005). Robust model predictive control of constrained linear systems with bounded disturbances. Automatica, 41(2), 219–224.
Scokaert, P. O. M., & Mayne, D. Q. (1998). Min–max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43(8), 1136–1142.
Smith, R. S. (2004). Robust model predictive control of constrained linear systems. In Proceedings of the 2004 American Control Conference, Boston, MA, USA.
van Hessem, D. H., & Bosgra, O. H. (2002). A conic reformulation of model predictive control including bounded and stochastic disturbances under state and input constraints. In Proceedings of the 41st IEEE Conference on Decision and Control, pp. 4643–4648.
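The stacked matrices defined in the Appendix can be assembled numerically and validated against a step-by-step simulation of the dynamics. The sketch below (numpy, with illustrative dimensions and random data not taken from the paper) builds 𝒜, ℰ and ℬ := ℰ(I_N ⊗ B) and checks the prediction identity x = 𝒜x_0 + ℬu + ℰw.

```python
import numpy as np

def stacked_prediction_matrices(A, B, N):
    """Build the block matrices curly-A, curly-B and curly-E of the Appendix,
    so that the stacked state sequence satisfies x = cA x0 + cB u + cE w."""
    n = A.shape[0]
    # curly-A: column of powers I, A, A^2, ..., A^N.
    cA = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
    # curly-E: block row i has A^{i-1-j} in block column j for j < i, zeros above.
    cE = np.zeros((n * (N + 1), n * N))
    for i in range(1, N + 1):
        for j in range(i):
            cE[i*n:(i+1)*n, j*n:(j+1)*n] = np.linalg.matrix_power(A, i - 1 - j)
    # curly-B := curly-E (I_N kron B).
    cB = cE @ np.kron(np.eye(N), B)
    return cA, cB, cE

# Illustrative data (not from the paper): n = 2 states, m = 1 input, N = 4.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 1))
N = 4
x0 = rng.standard_normal(2)
u = rng.standard_normal(N)        # stacked inputs (m = 1)
w = rng.standard_normal(2 * N)    # stacked disturbances

cA, cB, cE = stacked_prediction_matrices(A, B, N)
x_pred = cA @ x0 + cB @ u + cE @ w

# Step-by-step simulation of x_{i+1} = A x_i + B u_i + w_i for comparison.
x = [x0]
for i in range(N):
    x.append(A @ x[-1] + B @ np.atleast_1d(u[i]) + w[2*i:2*i+2])
assert np.allclose(x_pred, np.concatenate(x))
```

With these matrices in hand, forming F, G, H and c as in the Appendix reduces the robust constraint description to linear algebra on the problem data.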
P.J. Goulart et al. / Automatica 42 (2006) 523 – 533 533

Paul J. Goulart received the S.B. and S.M. degrees in Aeronautics and Astronautics from the Massachusetts Institute of Technology. From 1998 to 2002 he was an engineer and software developer for the Flight Operations Group of the Chandra X-Ray Observatory, and from 2002 to 2003 was a research engineer in the Autonomous Systems Group at the Charles Stark Draper Laboratory. He is currently a Gates Cambridge Scholar and Ph.D. candidate in Control Engineering at the University of Cambridge. His research interests include robust and predictive control and robust optimization.

Eric C. Kerrigan received the B.Sc.(Eng.) degree in Electrical Engineering from the University of Cape Town in 1996 and the Ph.D. degree from the University of Cambridge in 2001. During 1997 he worked for the Council for Scientific and Industrial Research, South Africa. From 2001 to 2005 he was a research fellow at Wolfson College and the Department of Engineering, University of Cambridge. He is currently a Royal Academy of Engineering Research Fellow and lecturer at Imperial College London. His research interests include predictive control, optimal control, robust control and optimization.

Jan M. Maciejowski is a Reader in Control Engineering in the Department of Engineering at the University of Cambridge, and currently Head of the Control Group. He is also a Fellow of Pembroke College, Cambridge. He graduated from Sussex University in 1971 with a B.Sc. degree in Automatic Control, and from Cambridge University in 1978 with a Ph.D. degree in Control Engineering. From 1971 to 1974 he was a systems engineer with Marconi Space and Defence Systems Ltd., working mostly on attitude control of spacecraft and high-altitude balloon platforms. He was President of the European Union Control Association for 2003–2005. In 2002 he was President of the Institute of Measurement and Control. He is a Fellow of the IEE and of the InstMC, and a Senior Member of the IEEE. He is also a "Distinguished Lecturer" of the IEEE Control Systems Society. His book "Multivariable Feedback Design" received the IFAC Control Engineering Textbook Prize in 1996. In 2001 he published "Predictive Control with Constraints", which has recently been translated into Japanese. His current research interests are in predictive control, its application to fault-tolerant control, in hybrid systems, and in system identification.