Runge–Kutta Model-Based Adaptive Predictive Control Mechanism for Non-Linear Processes


Article

Transactions of the Institute of Measurement and Control
35(2) 166–180
© The Author(s) 2012
Reprints and permissions:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0142331212438910
tim.sagepub.com

Runge–Kutta model-based adaptive predictive control mechanism for non-linear processes
Serdar Iplikci
Abstract
This paper proposes a novel non-linear model predictive control mechanism for non-linear systems. The idea behind the mechanism is that the so-called Runge–Kutta model of a continuous-time non-linear system can be regarded as an approximate discrete model and employed in a generalized predictive control loop for prediction and derivative calculation purposes. Additionally, the Runge–Kutta model of the system is used for state estimation in the extended Kalman filter framework and online parameter adaptation. The proposed method has been tested on two different non-linear systems. Simulation results have revealed the effectiveness of the proposed method for different cases.
Keywords
Extended Kalman filter, non-linear model predictive control, optimal control, parameter estimation
Introduction
Since the appearance of the first model predictive control
(MPC) technique (Richalet et al., 1978), many MPC methods
have been proposed (Cutler and Ramaker, 1980; Keyser and
Cauwenberghe, 1985; Soeterboek, 1992) in the literature and
they have been proven to be effective and robust tools for
controlling industrial processes especially for non-minimum
phase plants, open-loop unstable plants and plants with vari-
able dead-time and/or parameters (Camacho, 1993; Clarke
and Mohtadi, 1989). They have been applied not only to
industrial processes (Clarke, 1988; Richalet, 1993), but also to
a wide spectrum of areas ranging from chemistry to aerospace
(Qin and Badgwell, 2003) mainly because of their superior
features, namely the time-domain formulation, receding hori-
zon scheme and constraint-handling capability. Among oth-
ers, the most special and probably the most common one is
the so-called generalized predictive control (GPC) technique
(Clarke et al., 1987a,b; Clarke, 1994). Still, all MPC tech-
niques share the same idea: the model of the plant is used to
predict future behaviour of the plant in response to a candi-
date control vector, which is to be optimized by solving a
finite-horizon open-loop optimal control problem during each sam-
pling period, and then the first element of the optimized con-
trol vector is applied to the plant. In this respect, the model of
the plant plays a crucial role in the MPC methods. The first
MPC technique (Richalet et al., 1978) is based on an impulse-
response model, whereas another early method (Cutler and
Ramaker, 1980) is based on the step-response model of the
plant. Although the MPC techniques based on the impulse-
and step-response models exhibit adequate performance for
stable systems, they are inapplicable to unstable ones. In
order to overcome this problem, techniques based on the
CARIMA (Clarke et al., 1987a,b) and CARMA (Keyser and
Cauwenberghe, 1985) models have been proposed. Even if
these MPC techniques have been suggested for discrete-time
systems at the beginning, their continuous-time versions can
be found in the literature (Demircioglu and Gawthrop, 1991;
Gawthrop and Demircioglu, 1989). These GPC methods have
originally been developed for linear plants, which leads to an
optimization problem that can be solved analytically. This
study, on the other hand, concentrates on controlling non-
linear continuous-time processes. In this case, one solution is
to linearize the non-linear system dynamics around the oper-
ating points and then to apply a linear GPC technique based
on the linearized model. However, the linearization approach
may lose its efficiency in the following cases: (a) highly non-
linear processes operating in the vicinity of a fixed point, (b)
moderately non-linear processes operating in the whole oper-
ating region, (c) reference signals varying frequently in the
whole operating region. Another remedy is the so-called non-
linear model predictive control (NMPC) technique (Camacho
and Bordons, 2003, 2007; Henson, 1998), which employs a
non-linear model of the plant. The NMPC technique entails
solving a constrained non-linear optimization problem during
every sampling period. It is usually a very difficult and time
consuming task to find the global solution of such a problem.
Department of Electrical and Electronics Engineering, Pamukkale University, Turkey

Corresponding author:
Serdar Iplikci, Department of Electrical and Electronics Engineering, Pamukkale University, Denizli, Turkey.
Email: iplikci@pau.edu.tr

Fortunately, it is demonstrated in Scokaert et al. (1999) that instead of finding the global solution, feasible, sub-optimal
solutions to this problem can yield a stabilizing controller.
Moreover, as stressed in Maciejowski (2002), it is even not
necessary to find a local minimum and the only task during
each sampling period is to find a solution that provides a suf-
ficient decrement in the cost function.
NMPC relies on the model of the non-linear plant and, as
discussed in Henson (1998) and Camacho and Bordons (2003),
there are three types of non-linear plant models that are
employed in the NMPC loop. The first one is referred to as the
fundamental models, which are obtained by applying first prin-
ciples (e.g. mass, energy and momentum balances), resulting in
ordinary differential equations (ODEs) and in some cases addi-
tive algebraic equations. The second type of non-linear plant
models are obtained by input–output measurements and they
are therefore referred to as the empirical models. In the litera-
ture, there exist many types of empirical models employed in
the NMPC loop, such as polynomial ARMA models
(Hernandez and Arkun, 1993), Hammerstein models (Fruzzetti
et al., 1997), Volterra models (Doyle et al., 1995; Genceli and
Nikolaou, 1995; Gruber et al., 2010; Maner et al., 1996),
Wiener models (Cervantes et al., 2003; Norquay et al., 1998),
and also models using soft computing tools, namely artificial
neural networks (Lawrynczuk, 2007; Piche et al., 2000), fuzzy
logic (Cho et al., 1999; Roubos et al., 1999), and machine
learning tools like support vector machines (Iplikci, 2006,
2010; Xi et al., 2007). Finally, the hybrid non-linear models
combine the fundamental and empirical approaches.
This work has been concentrated on MPC of non-linear
systems based on the fundamental models. More specifically,
the focus is on the continuous-time systems because a great
deal of the natural processes and industrial plants are in the
form of continuous-time systems. However, most NMPC techniques are naturally developed for discrete-time systems, and it is therefore usually necessary to convert
the non-linear optimal control problem (NOCP) into a non-
linear programming (NLP) problem by either discretization
or approximation methods, and then solve it using an NLP
solver. For this purpose, several methods, which are referred
to as the direct methods, have been proposed in the literature,
for instance collocation on finite elements (Kawathekar and
Riggs, 2007), multiple shooting (Schafer et al., 2007) and their
combinations (Tamimi and Li, 2010). Alternatively, another
approach is to employ the discretized models of the
continuous-time systems by discretizing their ODEs in the
fundamental models obtained by first principles. In Sistu and
Bequette (1996), for example, explicit and implicit Euler
methods have been considered.
This paper, on the other hand, proposes a novel NMPC method for non-linear continuous-time systems, which is based on the classical fourth-order Runge–Kutta algorithm. In the method, a discretized model of the non-linear continuous-time system, which is referred to as the Runge–Kutta model of the system, is obtained by the Runge–Kutta algorithm and then employed in the GPC loop. As will be demonstrated in the next section, the Runge–Kutta model of the system is used not only for prediction and derivative calculation purposes in the GPC framework but also for state and parameter estimation purposes. In the literature, there exist many numerical integration methods for solving ODEs (Press et al., 2007). The motivation behind the use of the fourth-order Runge–Kutta algorithm among others is the fact that it has proved to be very accurate and stable and also works well in many applications. The novelty of the paper is the use of the classical Runge–Kutta numerical integration algorithm to discretize the non-linear continuous-time system to obtain its so-called Runge–Kutta model, whereby it is possible to implement MPC, state estimation and online parameter estimation, resulting in an adaptive non-linear predictive controller.
This paper is organized as follows: divided into five subsections, the next section introduces the proposed Runge–Kutta-based control scheme. The first subsection reviews the classical fourth-order Runge–Kutta algorithm. The proposed structure is given in the following subsection. In the third subsection, it is demonstrated how to obtain future predictions and Jacobian calculations using the Runge–Kutta model. The Runge–Kutta model-based state and online parameter estimation blocks, which are used in the proposed structure, are discussed in the last two subsections, respectively. In the section after that, the effectiveness of the proposed structure is demonstrated on two different non-linear multiple-input multiple-output (MIMO) systems. At the end of that section, total computation times for each system are given in order to provide a basis for the real-time applicability of the proposed method. The paper ends with the conclusions.
The proposed Runge–Kutta-based control structure

MIMO systems and the Runge–Kutta method

Consider an N-dimensional continuous-time MIMO system as illustrated in Figure 1(a). The state equations of the system are

Figure 1 (a) A continuous-time multiple-input multiple-output (MIMO) system and (b) its Runge–Kutta model.
\dot{x}_1 = f_1(x_1(t), \ldots, x_N(t), u_1(t), \ldots, u_R(t), \theta)
  \vdots
\dot{x}_N = f_N(x_1(t), \ldots, x_N(t), u_1(t), \ldots, u_R(t), \theta)    (1)
subject to state and input constraints of the form

x_1(t) \in X_1, \ldots, x_N(t) \in X_N, \quad \forall t \ge 0
u_1(t) \in U_1, \ldots, u_R(t) \in U_R, \quad \forall t \ge 0    (2)
where the X_i and U_i are box constraints for the states and inputs, respectively:

X_i = \{ x_i \in \mathbb{R} \mid x_{i,\min} \le x_i \le x_{i,\max} \}, \quad i = 1, \ldots, N
U_i = \{ u_i \in \mathbb{R} \mid u_{i,\min} \le u_i \le u_{i,\max} \}, \quad i = 1, \ldots, R    (3)
and the output equations are

y_1(t) = g_1(x_1(t), \ldots, x_N(t), u_1(t), \ldots, u_R(t))
  \vdots
y_Q(t) = g_Q(x_1(t), \ldots, x_N(t), u_1(t), \ldots, u_R(t))    (4)
where R is the number of inputs, N is the number of states, Q is the number of outputs and \theta is the parameter of the system. These equations can be written in a more compact form as

\dot{x} = f(x, u, \theta)
y = g(x, u)
x \in X, \quad u \in U    (5)
It is assumed that the terms f_i and g_i are known and continuously differentiable with respect to the input variables, the state variables and \theta, and that the state and input constraint sets X and U are compact.

Now, assume that the current state x_1(n), \ldots, x_N(n) of the system and the current inputs u_1(n), \ldots, u_R(n) to the system are given at the time index n, where n denotes the sampling instant at t = nT_s. It is well known that the state and output values of the system at the next sampling instant, i.e. x_i(n+1) and y_i(n+1), can be predicted by the fourth-order Runge–Kutta algorithm as follows. Initially, it is set \hat{x}_1(n) = x_1(n), \ldots, \hat{x}_N(n) = x_N(n). Then
\hat{x}_1(n+1) = \hat{x}_1(n) + \tfrac{1}{6} K_1X_1 + \tfrac{1}{3} K_2X_1 + \tfrac{1}{3} K_3X_1 + \tfrac{1}{6} K_4X_1
  \vdots
\hat{x}_N(n+1) = \hat{x}_N(n) + \tfrac{1}{6} K_1X_N + \tfrac{1}{3} K_2X_N + \tfrac{1}{3} K_3X_N + \tfrac{1}{6} K_4X_N    (6)
and

\hat{y}_1(n+1) = g_1(\hat{x}_1(n+1), \ldots, \hat{x}_N(n+1), u_1(n), \ldots, u_R(n))
  \vdots
\hat{y}_Q(n+1) = g_Q(\hat{x}_1(n+1), \ldots, \hat{x}_N(n+1), u_1(n), \ldots, u_R(n))    (7)
where, for i = 1, \ldots, N,

K_1X_i = T_s f_i(\hat{x}_1(n), \ldots, \hat{x}_N(n), u_1(n), \ldots, u_R(n), \theta)
K_2X_i = T_s f_i(\hat{x}_1(n) + 0.5\,K_1X_1, \ldots, \hat{x}_N(n) + 0.5\,K_1X_N, u_1(n), \ldots, u_R(n), \theta)
K_3X_i = T_s f_i(\hat{x}_1(n) + 0.5\,K_2X_1, \ldots, \hat{x}_N(n) + 0.5\,K_2X_N, u_1(n), \ldots, u_R(n), \theta)
K_4X_i = T_s f_i(\hat{x}_1(n) + K_3X_1, \ldots, \hat{x}_N(n) + K_3X_N, u_1(n), \ldots, u_R(n), \theta)    (8)
Equations (6) and (7) can be put into a more compact form as

\hat{x}(n+1) = \hat{f}(\hat{x}(n), u(n), \theta)
\hat{y}(n+1) = g(\hat{x}(n+1), u(n))    (9)
Hence, given the current state variables x_1(n), \ldots, x_N(n) and the inputs u_1(n), \ldots, u_R(n) of the system at the sampling instant t = nT_s, its output values at the next sampling instant (t + T_s = (n+1)T_s) can be predicted by using Equation (9). From the predictive control point of view, Equation (9) can be regarded as a Runge–Kutta model of the system, as seen in Figure 1(b). Hence, the discrete-time Runge–Kutta model of the continuous-time system can be employed in the GPC framework as described in the next subsection.
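The discrete Runge–Kutta model of Equations (6)–(9) amounts to one classical RK4 step per sampling period. A minimal sketch in Python (an illustration, not the paper's code; f is the state-derivative function of Equation (5)):

```python
import numpy as np

def rk4_step(f, x, u, theta, Ts):
    """One step of the classical fourth-order Runge-Kutta scheme, i.e. the
    discrete Runge-Kutta model x_hat(n+1) = f_hat(x_hat(n), u(n), theta)
    of Equation (9). f(x, u, theta) returns the state derivative."""
    K1 = Ts * f(x, u, theta)
    K2 = Ts * f(x + 0.5 * K1, u, theta)
    K3 = Ts * f(x + 0.5 * K2, u, theta)
    K4 = Ts * f(x + K3, u, theta)
    return x + (K1 + 2.0 * K2 + 2.0 * K3 + K4) / 6.0
```

For the scalar test system \dot{x} = \theta(u - x), one step with T_s = 0.1 reproduces the exact solution to within the scheme's fifth-order local error.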
The Runge–Kutta-based GPC structure

In the proposed method, the Runge–Kutta model of the plant (9) is employed in the GPC scheme, the structure of which is illustrated in Figure 2. The so-called Runge–Kutta-based GPC structure includes the Runge–Kutta model of the plant, the Runge–Kutta model-based extended Kalman filter (EKF) and parameter estimation blocks, and the cost function minimization (CFM) block. The role of the Runge–Kutta model of the plant is four-fold: (a) it produces the future predictions of the plant in response to the candidate control vector, as described in the next subsection; (b) it is utilized to obtain the gradient information required in the CFM block; (c) it is employed in the EKF block to estimate the states of the plant; and (d) it is used in the Runge–Kutta model-based parameter estimation block to make the proposed structure adaptive to variations of the system parameters.
The procedure of the proposed method is as follows: first, a candidate control vector is constructed as

u(n) = [u_1(n), \ldots, u_R(n)]^T    (10)

Then, K-step ahead future predictions of the plant outputs are obtained in response to the candidate control vector, as if it were applied to the plant repeatedly, exactly K times. Calculation of the future predictions is handled by the Runge–Kutta model of the plant. Meanwhile, the Runge–Kutta model receives not only the candidate control vector as the input but also the current state of the plant and the current values of the system parameters, which are provided by the Runge–Kutta model-based EKF and parameter estimation blocks, respectively.

Based on these predictions, the candidate control vector u(n) is updated in a manner such that the sum of the squared K-step ahead prediction errors is minimized with minimum deviation in the control actions, thereby resulting in an optimal control action u^*(n) = u(n) + \delta u(n) to be applied to the plant. In other words, it is attempted to minimize the objective function F given by (11)
F(u(n)) = \sum_{q=1}^{Q} \sum_{k=1}^{K} \left( y_q(n+k) - \hat{y}_q(n+k) \right)^2 + \sum_{r=1}^{R} \lambda_r \left( u_r(n) - u_r(n-1) \right)^2    (11)
where K is the prediction horizon and \lambda_r is the penalty term associated with the rth input. For an optimal control action u^*(n), an additive correction term \delta u(n) has to be found such that the objective function F(u(n) + \delta u(n)) is minimized, while satisfying the condition for a descent direction: F(u(n) + \delta u(n)) < F(u(n)). The CFM block tries to minimize this objective function with respect to \delta u(n) based on the second-order Taylor approximation as follows:
F(u(n) + \delta u(n)) \approx F(u(n)) + \left[ \frac{\partial F(u(n))}{\partial u(n)} \right]^T \delta u(n) + \frac{1}{2}\, \delta u(n)^T \left[ \frac{\partial^2 F(u(n))}{\partial u^2(n)} \right] \delta u(n)    (12)

where \partial F(u(n)) / \partial u(n) is the gradient vector and \partial^2 F(u(n)) / \partial u^2(n) is the Hessian matrix. Since a \delta u(n) that minimizes the objective function is sought, the derivative of the approximation of F with respect to \delta u(n) is taken and equated to zero as
\frac{\partial F(u(n) + \delta u(n))}{\partial\, \delta u(n)} \approx \frac{\partial F(u(n))}{\partial u(n)} + \frac{\partial^2 F(u(n))}{\partial u^2(n)}\, \delta u(n) = 0    (13)

then \delta u(n) is obtained as

\delta u(n) = - \left[ \frac{\partial^2 F(u(n))}{\partial u^2(n)} \right]^{-1} \frac{\partial F(u(n))}{\partial u(n)}    (14)
which corresponds to the Newton direction that provides quadratic convergence to the local minimum if the Hessian matrix is positive definite (for a descent direction) and the higher-order terms are negligibly small (Nocedal and Wright, 1999; Venkataraman, 2002). If the Hessian matrix is not positive definite, a judiciously chosen additive term \gamma I can be added to the Hessian to make this extended Hessian matrix positive definite. By judiciously, we mean \gamma should be chosen slightly larger in magnitude than the most negative eigenvalue of the Hessian matrix.
At this point, it is apparent that the calculation of the gradient and Hessian terms, i.e. the first- and second-order derivatives of the objective function with respect to u(n), is needed. However, in order to avoid calculating the time-consuming second-order derivatives, the well-known Jacobian approximation can be employed (Nocedal and Wright, 1999), which suggests that the (KQ + R) \times R Jacobian matrix J

Figure 2 Proposed Runge–Kutta-based control structure.
J = -\frac{\partial \hat{e}}{\partial u(n)} =
\begin{bmatrix}
\partial \hat{y}_1(n+1)/\partial u_1(n) & \cdots & \partial \hat{y}_1(n+1)/\partial u_R(n) \\
\vdots & & \vdots \\
\partial \hat{y}_1(n+K)/\partial u_1(n) & \cdots & \partial \hat{y}_1(n+K)/\partial u_R(n) \\
\vdots & & \vdots \\
\partial \hat{y}_Q(n+K)/\partial u_1(n) & \cdots & \partial \hat{y}_Q(n+K)/\partial u_R(n) \\
\sqrt{\lambda_1} & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \sqrt{\lambda_R}
\end{bmatrix}    (15)

can represent the gradient vector exactly and the Hessian matrix approximately as

\frac{\partial F(u(n))}{\partial u(n)} = -2 J^T \hat{e} \quad \text{and} \quad \frac{\partial^2 F(u(n))}{\partial u^2(n)} \approx 2 J^T J    (16)
respectively, where \hat{e} is the vector of prediction errors and input slews given by

\hat{e} =
\begin{bmatrix}
\hat{e}(n+1) \\ \vdots \\ \hat{e}(n+K) \\ \vdots \\ \hat{e}(n+KQ) \\
-\sqrt{\lambda_1}\, \Delta u_1(n) \\ \vdots \\ -\sqrt{\lambda_R}\, \Delta u_R(n)
\end{bmatrix}
=
\begin{bmatrix}
y_1(n+1) - \hat{y}_1(n+1) \\ \vdots \\ y_1(n+K) - \hat{y}_1(n+K) \\ \vdots \\ y_Q(n+K) - \hat{y}_Q(n+K) \\
-\sqrt{\lambda_1} \left( u_1(n) - u_1(n-1) \right) \\ \vdots \\ -\sqrt{\lambda_R} \left( u_R(n) - u_R(n-1) \right)
\end{bmatrix}    (17)
Thus, the correction term is computed as

\delta u(n) = (J^T J + \mu I)^{-1} J^T \hat{e}    (18)

which implies that only the first-order derivatives \partial \hat{y}_q(n+k) / \partial u_r(n) for q = 1, \ldots, Q and r = 1, \ldots, R are needed.

By using the Runge–Kutta model of the plant given in Equation (9), it is explained in the next subsection how to obtain the K-step ahead future predictions of the plant outputs and also the necessary derivatives for the Jacobian calculations.
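One CFM iteration of Equation (18) can be sketched as follows; for brevity, the Jacobian is formed here by forward differences on the stacked error vector of Equation (17) rather than by the paper's analytic Runge–Kutta derivatives, so this is an illustrative shortcut, not the proposed algorithm:

```python
import numpy as np

def cfm_step(e_fun, u, mu=1e-3, h=1e-6):
    """One cost-function-minimization step, Eq. (18):
    du = (J^T J + mu I)^{-1} J^T e, with J = -de/du obtained by forward
    differences. e_fun(u) returns the stacked error vector of Eq. (17)."""
    e = e_fun(u)
    J = np.zeros((e.size, u.size))
    for r in range(u.size):
        step = np.zeros_like(u)
        step[r] = h
        J[:, r] = -(e_fun(u + step) - e) / h   # J = -de/du
    # Levenberg-Marquardt-style regularized normal equations
    return np.linalg.solve(J.T @ J + mu * np.eye(u.size), J.T @ e)
```

On a toy scalar problem with error e(u) = 1 - u (so F(u) = (1 - u)^2), a single step from u = 0 lands essentially at the minimizer u = 1.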
Future predictions and Jacobian calculations

The K-step ahead future predictions of the plant outputs can be calculated by using the Runge–Kutta model of the plant if Equations (6) and (7) are employed in an iterative manner as

\hat{x}(n+k) = \hat{f}(\hat{x}(n+k-1), u(n), \theta)
\hat{y}(n+k) = g(\hat{x}(n+k), u(n)), \quad k = 1, \ldots, K    (19)

It should be noted that the candidate control vector u(n) is assumed to remain unchanged during the prediction interval [t + T_s, t + KT_s]. Hence, a series of future predictions is obtained for each output as

\hat{y}_q(n+1), \ldots, \hat{y}_q(n+K), \quad q = 1, \ldots, Q    (20)
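The iteration of Equation (19) can be sketched as follows (an illustration, not the paper's code; f_hat stands for any one-step discrete model such as the Runge–Kutta model, g is the output map, and the candidate control vector u is held constant over the horizon):

```python
import numpy as np

def predict_horizon(f_hat, g, x, u, theta, K):
    """K-step ahead output predictions of Eq. (19): the candidate control
    vector u is held fixed while the discrete model f_hat is iterated and
    the output map g is evaluated at every step."""
    y_pred = []
    for _ in range(K):
        x = f_hat(x, u, theta)    # x_hat(n+k) from x_hat(n+k-1)
        y_pred.append(g(x, u))    # y_hat(n+k)
    return x, np.array(y_pred)
```

With the trivial model f_hat(x, u) = x + u and output g(x, u) = x, holding u = 1 for K = 3 steps yields the prediction sequence 1, 2, 3, as expected.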
In order to obtain the necessary derivatives for the Jacobian calculation, Equations (6)–(8) are first rewritten as

\hat{x}_i(n+k) = \hat{x}_i(n+k-1) + \tfrac{1}{6} K_1X_i(n+k-1) + \tfrac{1}{3} K_2X_i(n+k-1) + \tfrac{1}{3} K_3X_i(n+k-1) + \tfrac{1}{6} K_4X_i(n+k-1)    (21)

for i = 1, \ldots, N, and

\hat{y}_q(n+k) = g_q(\hat{x}_1(n+k), \ldots, \hat{x}_N(n+k), u_1(n), \ldots, u_R(n))    (22)

for q = 1, \ldots, Q, where

K_1X_i(n+k-1) = T_s f_i(\hat{x}_1(n+k-1), \ldots, \hat{x}_N(n+k-1), u_1(n), \ldots, u_R(n), \theta)    (23)

K_2X_i(n+k-1) = T_s f_i(\hat{x}_1(n+k-1) + 0.5\,K_1X_1(n+k-1), \ldots, \hat{x}_N(n+k-1) + 0.5\,K_1X_N(n+k-1), u_1(n), \ldots, u_R(n), \theta)    (24)

K_3X_i(n+k-1) = T_s f_i(\hat{x}_1(n+k-1) + 0.5\,K_2X_1(n+k-1), \ldots, \hat{x}_N(n+k-1) + 0.5\,K_2X_N(n+k-1), u_1(n), \ldots, u_R(n), \theta)    (25)

K_4X_i(n+k-1) = T_s f_i(\hat{x}_1(n+k-1) + K_3X_1(n+k-1), \ldots, \hat{x}_N(n+k-1) + K_3X_N(n+k-1), u_1(n), \ldots, u_R(n), \theta)    (26)
Now, the problem of calculating the derivatives reduces to that of the terms \partial \hat{y}_q(n+k) / \partial u_r(n), which can be handled as follows:

\frac{\partial \hat{y}_q(n+k)}{\partial u_r(n)} = \frac{\partial g_q(\hat{x}_1(n+k), \ldots, \hat{x}_N(n+k), u_1(n), \ldots, u_R(n))}{\partial u_r(n)}
= \left[ \frac{\partial g_q}{\partial u_r} + \sum_{i=1}^{N} \frac{\partial g_q}{\partial x_i} \frac{\partial \hat{x}_i(n+k)}{\partial u_r(n)} \right]_{x = \hat{x}(n+k)}    (27)
Here

\frac{\partial \hat{x}_i(n+k)}{\partial u_r(n)} = \frac{\partial \hat{x}_i(n+k-1)}{\partial u_r(n)} + \tfrac{1}{6} \frac{\partial K_1X_i(n+k-1)}{\partial u_r(n)} + \tfrac{1}{3} \frac{\partial K_2X_i(n+k-1)}{\partial u_r(n)} + \tfrac{1}{3} \frac{\partial K_3X_i(n+k-1)}{\partial u_r(n)} + \tfrac{1}{6} \frac{\partial K_4X_i(n+k-1)}{\partial u_r(n)}    (28)
where

\frac{\partial K_1X_i(n+k-1)}{\partial u_r(n)} = T_s \left[ \frac{\partial f_i}{\partial u_r} + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \frac{\partial \hat{x}_j(n+k-1)}{\partial u_r(n)} \right]_{x = \hat{x}(n+k-1)}    (29)

and

\frac{\partial K_2X_i(n+k-1)}{\partial u_r(n)} = T_s \left[ \frac{\partial f_i}{\partial u_r} + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \left( \frac{\partial \hat{x}_j(n+k-1)}{\partial u_r(n)} + \frac{1}{2} \frac{\partial K_1X_j(n+k-1)}{\partial u_r(n)} \right) \right]_{x_j = \hat{x}_j(n+k-1) + K_1X_j(n+k-1)/2}    (30)
and

\frac{\partial K_3X_i(n+k-1)}{\partial u_r(n)} = T_s \left[ \frac{\partial f_i}{\partial u_r} + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \left( \frac{\partial \hat{x}_j(n+k-1)}{\partial u_r(n)} + \frac{1}{2} \frac{\partial K_2X_j(n+k-1)}{\partial u_r(n)} \right) \right]_{x_j = \hat{x}_j(n+k-1) + K_2X_j(n+k-1)/2}    (31)

and

\frac{\partial K_4X_i(n+k-1)}{\partial u_r(n)} = T_s \left[ \frac{\partial f_i}{\partial u_r} + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \left( \frac{\partial \hat{x}_j(n+k-1)}{\partial u_r(n)} + \frac{\partial K_3X_j(n+k-1)}{\partial u_r(n)} \right) \right]_{x_j = \hat{x}_j(n+k-1) + K_3X_j(n+k-1)}    (32)
Consequently, the necessary derivatives can be obtained by using the Runge–Kutta model of the plant.
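The stage-by-stage input-sensitivity propagation of Equations (28)–(32) can be sketched for a single Runge–Kutta step as follows (an illustration under the plain chain rule, with fx and fu denoting the Jacobians of f with respect to the states and the inputs; in the K4 stage the full-step perturbation x + K3 is used):

```python
import numpy as np

def rk4_input_sensitivity(f, fx, fu, x, u, theta, Ts, dxdu=None):
    """Propagate S = d x_hat / d u through one Runge-Kutta step following
    Eqs. (28)-(32). dxdu is the sensitivity of the current state, which is
    zero at k = 1 because x_hat(n) does not depend on u(n)."""
    N, R = x.size, u.size
    S = np.zeros((N, R)) if dxdu is None else dxdu
    K1 = Ts * f(x, u, theta)
    dK1 = Ts * (fu(x, u, theta) + fx(x, u, theta) @ S)
    x2 = x + 0.5 * K1
    K2 = Ts * f(x2, u, theta)
    dK2 = Ts * (fu(x2, u, theta) + fx(x2, u, theta) @ (S + 0.5 * dK1))
    x3 = x + 0.5 * K2
    K3 = Ts * f(x3, u, theta)
    dK3 = Ts * (fu(x3, u, theta) + fx(x3, u, theta) @ (S + 0.5 * dK2))
    x4 = x + K3
    K4 = Ts * f(x4, u, theta)
    dK4 = Ts * (fu(x4, u, theta) + fx(x4, u, theta) @ (S + dK3))
    x_next = x + (K1 + 2 * K2 + 2 * K3 + K4) / 6.0
    S_next = S + (dK1 + 2 * dK2 + 2 * dK3 + dK4) / 6.0
    return x_next, S_next
```

For the linear test system \dot{x} = -x + u the exact one-step sensitivity is 1 - e^{-T_s}, which the propagated value matches to fourth order.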
The Runge–Kutta model-based EKF

This subsection describes the estimation of the current state of the plant based on its Runge–Kutta model in combination with the well-known EKF approach (Grewal and Andrews, 2008; Welch and Bishop, 2006). Before discussing the Runge–Kutta model-based EKF, it is helpful to review the EKF. Consider a non-linear discrete-time system given by

x(n+1) = h(x(n), u(n)) + w(n)
y(n+1) = g(x(n), u(n)) + v(n)    (33)

where x is the N-dimensional state vector to be estimated, u \in \mathbb{R}^R is the input vector and y \in \mathbb{R}^Q is the output vector. Moreover, w is the vector of process noises with covariance matrix Q and v is the vector of measurement noises with covariance matrix R. The time-update equations are

\tilde{x}^-(n) = h(\tilde{x}(n-1), u(n-1))
P_n^- = A_n P_{n-1} A_n^T + Q    (34)
and the measurement-update equations are

K_n = P_n^- H_n^T \left( H_n P_n^- H_n^T + R \right)^{-1}
\tilde{x}(n) = \tilde{x}^-(n) + K_n \left( y(n) - g(\tilde{x}^-(n), u(n-1)) \right)
P_n = (I - K_n H_n) P_n^-    (35)

where \tilde{x}(n) is the estimated state vector and

A_n = \left. \frac{\partial h}{\partial x} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)} \quad \text{and} \quad H_n = \left. \frac{\partial g}{\partial x} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)}    (36)
This filter is called the discrete EKF because the underlying system and the filter are both in discrete-time form (Grewal and Andrews, 2008). In this study, however, the systems under investigation are continuous-time, as given by Equations (1) and (4), and hence their discrete approximations are required in order to apply the EKF algorithm. At this point, the Runge–Kutta model given by (9) can be utilized as the discrete model of the system. Accordingly, the matrices can be written as

A_n = \left. \frac{\partial \hat{f}}{\partial x} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)} \quad \text{and} \quad H_n = \left. \frac{\partial g}{\partial x} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)}    (37)

where

\left. \frac{\partial \hat{f}}{\partial x} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)} = \left[ \frac{\partial \hat{f}_i(\tilde{x}(n-1), u(n-1))}{\partial \tilde{x}_j(n-1)} \right] = \left[ \frac{\partial \tilde{x}_i(n)}{\partial \tilde{x}_j(n-1)} \right], \quad i, j = 1, \ldots, N    (38)
where

\frac{\partial \tilde{x}_i(n)}{\partial \tilde{x}_j(n-1)} = \delta_{i,j} + \tfrac{1}{6} \frac{\partial K_1X_i(n-1)}{\partial \tilde{x}_j(n-1)} + \tfrac{1}{3} \frac{\partial K_2X_i(n-1)}{\partial \tilde{x}_j(n-1)} + \tfrac{1}{3} \frac{\partial K_3X_i(n-1)}{\partial \tilde{x}_j(n-1)} + \tfrac{1}{6} \frac{\partial K_4X_i(n-1)}{\partial \tilde{x}_j(n-1)}    (39)
where

\frac{\partial K_1X_i(n-1)}{\partial \tilde{x}_j(n-1)} = T_s \frac{\partial f_i(\tilde{x}_1(n-1), \ldots, \tilde{x}_N(n-1), u_1(n-1), \ldots, u_R(n-1))}{\partial \tilde{x}_j(n-1)} = T_s \left. \frac{\partial f_i}{\partial x_j} \right|_{x = \tilde{x}(n-1),\; u = u(n-1)}    (40)
and

\frac{\partial K_2X_i(n-1)}{\partial \tilde{x}_j(n-1)} = T_s \sum_{p=1}^{N} \left. \frac{\partial f_i}{\partial x_p} \left( \frac{1}{2} \frac{\partial K_1X_p(n-1)}{\partial \tilde{x}_j(n-1)} + \delta_{p,j} \right) \right|_{x_p = \tilde{x}_p(n-1) + K_1X_p(n-1)/2,\; u = u(n-1)}    (41)
and

\frac{\partial K_3X_i(n-1)}{\partial \tilde{x}_j(n-1)} = T_s \sum_{p=1}^{N} \left. \frac{\partial f_i}{\partial x_p} \left( \frac{1}{2} \frac{\partial K_2X_p(n-1)}{\partial \tilde{x}_j(n-1)} + \delta_{p,j} \right) \right|_{x_p = \tilde{x}_p(n-1) + K_2X_p(n-1)/2,\; u = u(n-1)}    (42)
and

\frac{\partial K_4X_i(n-1)}{\partial \tilde{x}_j(n-1)} = T_s \sum_{p=1}^{N} \left. \frac{\partial f_i}{\partial x_p} \left( \frac{\partial K_3X_p(n-1)}{\partial \tilde{x}_j(n-1)} + \delta_{p,j} \right) \right|_{x_p = \tilde{x}_p(n-1) + K_3X_p(n-1),\; u = u(n-1)}    (43)
Thus, the current state of the plant can be estimated by its Runge–Kutta model employed in the EKF algorithm.
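A single predict/update cycle of the Runge–Kutta model-based EKF, Equations (34)–(35), can be sketched as follows; here A_n and H_n are formed by forward differences on the discrete model f_hat and the output map g, and H_n is evaluated at the predicted state, both simplifications relative to the paper's analytic expressions (39)–(43):

```python
import numpy as np

def rk_ekf_step(f_hat, g, x_est, P, u_prev, y_meas, Q, R, h=1e-6):
    """One EKF predict/update cycle, Eqs. (34)-(35), with the Jacobians
    A_n and H_n formed by forward differences (illustrative shortcut)."""
    N = x_est.size
    def jac(fun, x0):
        y0 = np.atleast_1d(fun(x0))
        J = np.zeros((y0.size, N))
        for j in range(N):
            dx = np.zeros(N)
            dx[j] = h
            J[:, j] = (np.atleast_1d(fun(x0 + dx)) - y0) / h
        return J
    # time update, Eq. (34)
    A = jac(lambda x: f_hat(x, u_prev), x_est)
    x_pred = f_hat(x_est, u_prev)
    P_pred = A @ P @ A.T + Q
    # measurement update, Eq. (35)
    H = jac(lambda x: g(x, u_prev), x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (y_meas - np.atleast_1d(g(x_pred, u_prev)))
    P_new = (np.eye(N) - K @ H) @ P_pred
    return x_new, P_new
```

On a scalar random-walk model with unit measurement noise, a single cycle moves the estimate roughly halfway toward the measurement and reduces the covariance, as expected.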
The Runge–Kutta model-based online parameter estimation

Assume that the previous state x(n) and current state x(n+1) of a non-linear system (1) are given directly (or estimated by the EKF) at time (n+1)T_s, and that the previous control input u(n) is known. If the Runge–Kutta model of the system is employed, it is possible to relate the current state of the system to its previous state, inputs and the parameter \theta by Equation (6). Thus, the parameter \theta of the system can be estimated as follows: first, x(n+1) is predicted by the Runge–Kutta model of the system and then the vector of prediction errors related to the parameter is formed as

e = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{bmatrix}
= \begin{bmatrix} x_1(n+1) - \hat{x}_1(n+1) \\ x_2(n+1) - \hat{x}_2(n+1) \\ \vdots \\ x_N(n+1) - \hat{x}_N(n+1) \end{bmatrix}    (44)
Next, the Jacobian matrix consisting of the partial derivatives of the errors with respect to the parameter is constructed as

J_\theta = \left[ \frac{\partial e_1}{\partial \theta}\;\; \frac{\partial e_2}{\partial \theta}\;\; \cdots\;\; \frac{\partial e_N}{\partial \theta} \right]^T    (45)

Finally, the parameter of the system is updated by

\theta_{n+1} = \theta_n - \frac{J_\theta^T e}{J_\theta^T J_\theta}    (46)

In this respect, the derivatives \partial \hat{x}_i(n+1) / \partial \theta for i = 1, \ldots, N are needed for calculation of the Jacobian, which can be obtained by using Equation (6) of the Runge–Kutta model of the system as described below.
\frac{\partial \hat{x}_i(n+1)}{\partial \theta} = \frac{\partial \hat{x}_i(n)}{\partial \theta} + \tfrac{1}{6} \frac{\partial K_1X_i(n)}{\partial \theta} + \tfrac{1}{3} \frac{\partial K_2X_i(n)}{\partial \theta} + \tfrac{1}{3} \frac{\partial K_3X_i(n)}{\partial \theta} + \tfrac{1}{6} \frac{\partial K_4X_i(n)}{\partial \theta}    (47)
where

\frac{\partial K_1X_i(n)}{\partial \theta} = T_s \frac{\partial f_i(\hat{x}_1(n), \ldots, \hat{x}_N(n), u_1(n), \ldots, u_R(n), \theta)}{\partial \theta} = T_s \left. \frac{\partial f_i}{\partial \theta} \right|_{x = \hat{x}(n),\; u = u(n)}    (48)
and

\frac{\partial K_2X_i(n)}{\partial \theta} = T_s \left[ \frac{\partial f_i}{\partial \theta} + \frac{1}{2} \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \frac{\partial K_1X_j(n)}{\partial \theta} \right]_{x_j = \hat{x}_j(n) + K_1X_j(n)/2}    (49)
and

\frac{\partial K_3X_i(n)}{\partial \theta} = T_s \left[ \frac{\partial f_i}{\partial \theta} + \frac{1}{2} \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \frac{\partial K_2X_j(n)}{\partial \theta} \right]_{x_j = \hat{x}_j(n) + K_2X_j(n)/2}    (50)
and

\frac{\partial K_4X_i(n)}{\partial \theta} = T_s \left[ \frac{\partial f_i}{\partial \theta} + \sum_{j=1}^{N} \frac{\partial f_i}{\partial x_j} \frac{\partial K_3X_j(n)}{\partial \theta} \right]_{x_j = \hat{x}_j(n) + K_3X_j(n)}    (51)
Thus, the necessary derivatives for parameter estimation can be obtained by using the Runge–Kutta model of the plant.
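For a scalar parameter, the update of Equations (44)–(46) can be sketched as follows; the derivative de/d\theta is formed here by a forward difference on the Runge–Kutta prediction instead of the analytic recursion of Equations (47)–(51), an illustrative shortcut:

```python
import numpy as np

def update_theta(x_next, x_pred_fun, theta, h=1e-6):
    """Gauss-Newton parameter update, Eqs. (44)-(46), for a scalar theta:
    theta <- theta - (J^T e) / (J^T J), where e = x(n+1) - x_hat(n+1) and
    J = de/dtheta is approximated by a forward difference on the
    Runge-Kutta prediction x_pred_fun(theta)."""
    e = x_next - x_pred_fun(theta)                         # Eq. (44)
    J = -(x_pred_fun(theta + h) - x_pred_fun(theta)) / h   # J = de/dtheta
    return theta - (J @ e) / (J @ J)                       # Eq. (46)
```

For a prediction that is linear in the parameter, a single update recovers the true parameter value exactly.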
The simulation results

In the simulations, it is assumed that both process and measurement noises are independent of each other and Gaussian with zero mean, and that the covariance matrices Q and R are diagonal matrices given by

Q = \sigma_w^2 I \quad \text{and} \quad R = \sigma_v^2 I    (52)

where I is the identity matrix and \sigma_w^2 and \sigma_v^2 are the variances of the process and measurement noises, respectively.
The three-tank system

The first system on which the proposed scheme has been tested is the three-tank liquid-level system sketched in Figure 3. The dynamics of the system can be expressed by a set of differential equations (Iplikci, 2010)

\dot{y}_1(t) = \frac{1}{A} \left( u_1(t) - Q_{13}(t) \right)
\dot{y}_2(t) = \frac{1}{A} \left( u_2(t) + Q_{32}(t) - Q_{20}(t) \right)
\dot{y}_3(t) = \frac{1}{A} \left( Q_{13}(t) - Q_{32}(t) \right)    (53)
where

Q_{13}(t) = az_{13} S_n\, \mathrm{sgn}(y_1(t) - y_3(t)) \sqrt{2g\, |y_1(t) - y_3(t)|}
Q_{20}(t) = az_{20} S_n \sqrt{2g\, y_2(t)}
Q_{32}(t) = az_{32} S_n\, \mathrm{sgn}(y_3(t) - y_2(t)) \sqrt{2g\, |y_3(t) - y_2(t)|}
For the system, u_i(t) is the supply flow rate of pump_i as the ith input and y_i(t) is the liquid level of tank_i as the ith output. Explanations and values for the parameters appearing in the system equations are given in Table 1.
In this work, the aim is to control the liquid levels of tank_1 and tank_2 by manipulating the flow rates of pump_1 and pump_2. In the simulations, the sampling period is selected as T_s = 1.0 s and the magnitudes of the control signals are allowed to vary between u_{1,min} = u_{2,min} = 0 m^3/s and u_{1,max} = u_{2,max} = 10^{-4} m^3/s. Furthermore, the standard deviations of the measurement and process noises are \sigma_v = 0.003 and \sigma_w = 0.0001, respectively. Moreover, because of the assumption that the states of the system are not available for measurement, they are estimated by the Runge–Kutta-based EKF block of the proposed structure.
Figure 4 shows the simulation results for staircase reference inputs. From the figure, it is seen that the proposed controller carries out the control task very successfully: it provides very small transient and steady-state tracking errors despite the existence of both measurement and process noises. Moreover, it effectively tolerates the interactions between the tanks, which can be observed during the periods when the level of one tank is changing while that of the other remains constant.
For better visualization of the performance of the proposed controller, the reference input of tank_2 is changed sinusoidally, while the reference input of tank_1 is kept constant at 0.2 m, as seen in Figure 5. As can be seen from the figure, even though the reference input for tank_1 is constant, the flow rate of pump_1 changes sinusoidally in order to compensate for the interaction between the tanks.
Finally, in order to show the effectiveness of the online parameter estimation capability of the proposed controller, the reference inputs are set to 0.25 and 0.2 m for tank_1 and tank_2, respectively, while the outflow parameter az_{13} is varied sinusoidally between 0.2 and 0.8. It is observed from Figure 6 that correct values of the parameter are estimated accurately within a short period and then maintained in the long run.
The Van de Vusse chemical reaction

The second process on which the proposed structure has been tested is the Van de Vusse chemical reaction. It is a non-isothermal process affected by thermal effects, and the resulting open-loop system shows strictly non-minimum-phase behaviour. In this process, the following series/parallel reactions take place:

A \xrightarrow{k_1} B \xrightarrow{k_2} C
2A \xrightarrow{k_3} D    (54)
Figure 3 The three-tank liquid-level control system.

The mass and energy balances describing the dynamics of the process are given by
Table 1 The system parameters

Parameter description                                    Value
az_{13}: outflow coefficient between tank_1 and tank_3   0.52
az_{32}: outflow coefficient between tank_3 and tank_2   0.55
az_{10}: outflow coefficient from tank_1 to reservoir    0.26
az_{20}: outflow coefficient from tank_2 to reservoir    0.28
az_{30}: outflow coefficient from tank_3 to reservoir    0.45
A: cross-section of the cylinders                        0.0154 m^2
S_n: section of connection pipe n                        5 x 10^{-5} m^2
g: gravitation coefficient                               9.81 m/s^2
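Using the values of Table 1, the right-hand side of Equation (53) can be coded as follows (an illustrative sketch; only the couplings Q13, Q32 and Q20 appear in Equation (53), so az10 and az30 are omitted):

```python
import numpy as np

# Parameter values taken from Table 1
A_CS, SN, G = 0.0154, 5e-5, 9.81      # cross-section, pipe section, gravity
AZ13, AZ32, AZ20 = 0.52, 0.55, 0.28   # outflow coefficients

def three_tank_rhs(y, u, az13=AZ13):
    """Right-hand side of the three-tank dynamics, Eq. (53); y holds the
    three liquid levels and u the two pump flow rates. az13 is exposed as
    an argument because it is the parameter estimated online."""
    y1, y2, y3 = y
    q13 = az13 * SN * np.sign(y1 - y3) * np.sqrt(2 * G * abs(y1 - y3))
    q32 = AZ32 * SN * np.sign(y3 - y2) * np.sqrt(2 * G * abs(y3 - y2))
    q20 = AZ20 * SN * np.sqrt(2 * G * y2)
    return np.array([(u[0] - q13) / A_CS,
                     (u[1] + q32 - q20) / A_CS,
                     (q13 - q32) / A_CS])
```

With all levels equal, the coupling flows vanish and each level rises at the rate u_i / A, which gives a quick sanity check of the signs in Equation (53).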
$$\dot{C}_A(t) = \frac{F}{V}\left(C_{A0} - C_A(t)\right) - k_{10}\,e^{-E_1/T}\,C_A - k_{30}\,e^{-E_3/T}\,C_A^2$$

$$\dot{C}_B(t) = -\frac{F}{V}\,C_B(t) + k_{10}\,e^{-E_1/T}\,C_A - k_{20}\,e^{-E_2/T}\,C_B$$

$$\dot{T}(t) = \frac{1}{\rho C_p}\left[k_{10}\,e^{-E_1/T}\,C_A\,(-\Delta H_1) + k_{20}\,e^{-E_2/T}\,C_B\,(-\Delta H_2) + k_{30}\,e^{-E_3/T}\,C_A^2\,(-\Delta H_3)\right] + \frac{F}{V}\left(T_0 - T(t)\right) + \frac{Q}{\rho C_p} \tag{55}$$
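Transcribed directly from the balances above, the right-hand side of equation (55) can be written as a state-derivative function. The sketch below uses the nominal parameter values from Table 2; the function and variable names are illustrative, and the negative signs on ΔH2 and ΔH3 follow the standard Van de Vusse benchmark:

```python
import numpy as np

# Nominal parameters (Table 2)
k10, k20, k30 = 1.287e12, 1.287e12, 9.043e9   # [1/h], [1/h], [l/(mol h)]
E1, E2, E3 = 9758.3, 9758.3, 8560.0           # [K]
dH1, dH2, dH3 = 4.2, -11.0, -41.85            # [kJ/mol]
CA0, T0 = 5.0, 403.15                          # [mol/l], [K]
rho, Cp = 0.9342, 3.01                         # [kg/l], [kJ/(kg K)]

def vdv_dynamics(x, u):
    """State derivatives of the Van de Vusse CSTR, x = [CA, CB, T], u = [F/V, Q]."""
    CA, CB, T = x
    FV, Q = u
    # Arrhenius reaction rates
    r1 = k10 * np.exp(-E1 / T) * CA
    r2 = k20 * np.exp(-E2 / T) * CB
    r3 = k30 * np.exp(-E3 / T) * CA**2
    # Mass and energy balances, equation (55)
    dCA = FV * (CA0 - CA) - r1 - r3
    dCB = -FV * CB + r1 - r2
    dT = (r1 * (-dH1) + r2 * (-dH2) + r3 * (-dH3)) / (rho * Cp) \
         + FV * (T0 - T) + Q / (rho * Cp)
    return np.array([dCA, dCB, dT])
```

Feeding this function to a fourth-order Runge-Kutta step with Ts = 0.01 h yields the approximate discrete model used for prediction.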
where C_A and C_B are the molar concentrations of A and B, respectively, T is the temperature of the reactor, F/V is the dilution rate, ρ is the density of the reacting mixture, C_p is the heat capacity, the ΔH_i terms are the heats of reaction, V is the volume of the reactor and Q is the rate of heat added or removed per unit volume (Niemiec and Kravaris, 2003). Nominal values of the system parameters are given in Table 2.
For this process, the aim is to control the molar concentration of B (C_B) and the temperature of the reactor (T) by manipulating the dilution rate (F/V) and the rate of heat added or removed per unit volume (Q).
In the simulations, the sampling period is selected as T_s = 0.01 h and the magnitudes of the control signals are allowed to vary within [0, 500] h^-1 and [-1000, 0] kJ/(l h), respectively. Moreover, the standard deviations of the measurement and process noises are σ_v = 0.01 and σ_w = 0.001, respectively. Since the states of the system are assumed to be unavailable for measurement, they are estimated by the Runge-Kutta-based EKF block of the proposed structure.
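The "Runge-Kutta model" referred to throughout is the classical fourth-order Runge-Kutta update applied to the continuous dynamics over one sampling period, used as an approximate discrete-time model. A generic sketch of that step, and of iterating it for K-step-ahead prediction, is given below; the function names are illustrative:

```python
def rk4_step(f, x, u, Ts):
    """One classical fourth-order Runge-Kutta step over the sampling period Ts."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * Ts * k1, u)
    k3 = f(x + 0.5 * Ts * k2, u)
    k4 = f(x + Ts * k3, u)
    return x + (Ts / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def predict_K_steps(f, x, controls, Ts):
    """K-step-ahead prediction: iterate the discrete model once per candidate control."""
    trajectory = []
    for u in controls:
        x = rk4_step(f, x, u, Ts)
        trajectory.append(x)
    return trajectory
```

For the chemical reactor, f would be the mass and energy balances of equation (55) with Ts = 0.01 h; the resulting predicted trajectory is what enters the GPC cost function.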
Figure 4 Inputs and outputs of the three-tank system for staircase reference inputs.
For staircase reference inputs, the simulation results can be seen in Figure 7. It is observed from the figure that the proposed controller provides very small tracking errors except for the short periods during which the reference input for the reactor temperature is changed abruptly. During these periods, C_B inevitably deviates from its reference value for a while and then settles back to it. This is attributed to the strong dependence of C_B on the reactor temperature.
Yet, it can be said that the proposed controller can effectively tolerate the interactions between the state variables of the process. As a matter of fact, as seen in Figure 8, the controller is able to keep the temperature very close to the constant reference input even though the reference input for C_B is changing sinusoidally.
As has been done for the three-tank case, one parameter (C_A0) of the chemical reaction is sinusoidally changed around its nominal value in order to test the adaptation capability of the proposed structure. It can be observed from Figure 9 that the actual parameter value is estimated very rapidly and accurately by the online parameter estimation block and that the controller can successfully tolerate the variations in the parameter of the process.

Figure 5 Inputs and outputs of the three-tank system for constant and sinusoidal reference inputs.

Figure 6 Inputs and outputs of the three-tank system for varying outflow parameter (az13) and its estimation.

Table 2 Nominal parameters of the chemical reaction

k10 = 1.287×10^12 [h^-1]    k20 = 1.287×10^12 [h^-1]    k30 = 9.043×10^9 [h^-1 l/mol]
E1 = 9758.3 [K]             E2 = 9758.3 [K]             E3 = 8560.0 [K]
ΔH1 = 4.2 [kJ/mol]          ΔH2 = -11 [kJ/mol]          ΔH3 = -41.85 [kJ/mol]
C_A0 = 5.0 [mol/l]          T_0 = 403.15 [K]            ρ = 0.9342 [kg/l]
C_p = 3.01 [kJ/kg K]        V = 10.0 [l]
Computation times
Another important issue concerning the proposed structure is its applicability to real-time systems. In order to provide a basis for the applicability of the proposed controller to real-time systems, its maximum response times have been calculated. For that purpose, the method has been implemented in MATLAB (version 7.4.0.287) on a PC (with an Intel(R) Core(TM)2 CPU T7200 running at 2.0 GHz and 1024 MB RAM) without optimizing the code, i.e. the formulations given for each operation, such as the K-step-ahead predictions and Jacobian calculations, have been strictly followed. Then, the simulations have been carried out for the staircase reference inputs discussed in the previous subsections. For each system under investigation, the computation times of the operations and the resulting total response times have been recorded during every sampling period. The records corresponding to the maximum total response times of the controller are tabulated in Table 3.
As can be seen from the table, the maximum total response times of the proposed controller for both systems are less than 6 ms, which is much less than the sampling periods of the investigated systems. Based on the results given in the table, it can be stated that the proposed controller can be used in real-time applications.

Figure 7 Inputs and outputs of the chemical reaction for staircase reference inputs.
Conclusions
This paper proposes a structure that introduces the use of the fourth-order Runge-Kutta numerical integration tool for constructing so-called Runge-Kutta models of non-linear processes and, consequently, their utilization in the GPC framework. In the proposed structure, the role of the Runge-Kutta model of a non-linear process is fourfold: (a) it accounts for future predictions of the process in response to the candidate control vector; (b) it is used to extract the gradient information required for the Jacobian calculation; (c) it is used in the EKF framework in order to estimate the unmeasurable states of the system; (d) it is used for online estimation of a time-varying parameter of the process.
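Role (b), extracting gradient information, can also be approximated without analytic derivatives by finite-differencing the Runge-Kutta predictions with respect to the candidate control vector. The paper derives these derivatives analytically; the finite-difference version below is only an illustrative sketch, with `predict` standing for any stacked K-step-ahead prediction function:

```python
import numpy as np

def prediction_jacobian(predict, u, eps=1e-6):
    """Finite-difference Jacobian of the stacked predictions y = predict(u)
    with respect to the control vector u (one forward difference per input)."""
    y0 = np.asarray(predict(u), dtype=float)
    J = np.zeros((y0.size, u.size))
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = eps
        J[:, i] = (np.asarray(predict(u + du), dtype=float) - y0) / eps
    return J
```

This costs one extra K-step simulation per control input, which is why an analytic Jacobian, as used in the proposed structure, is preferable when the model is available in closed form.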
The proposed structure has been tested on two different non-linear MIMO processes by simulations under various scenarios. The simulation results have revealed that the proposed structure is capable of (a) providing very small transient and steady-state tracking errors for constant reference inputs and system parameters; (b) adaptation to the changes in the reference inputs; (c) compensation of the interactions between the state variables of the processes; (d) online estimation of the time-varying system parameter and (e) tolerating the effects of the time-varying system parameter.

Figure 8 Inputs and outputs of the chemical reaction for constant and sinusoidal reference inputs.
As a result, the proposed structure, containing several
novelties, can be used not only for control but also for state
estimation and online parameter estimation purposes.
Funding
This research received no specific grant from any funding
agency in the public, commercial, or not-for-profit sectors.
Figure 9 Inputs and outputs of the chemical reaction for varying concentration parameter (C_A0) and its estimation.
Table 3 Computation times for the proposed controller

Operation                  Computation time [ms]
                           Three-tank    Chemical reaction
State estimation           0.49          0.39
K-step ahead prediction    0.68          0.33
Jacobian calculation       2.23          2.57
Parameter estimation       1.77          0.92
Miscellaneous              0.12          0.12
Total                      5.29          4.33
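The per-operation figures in Table 3 come from timing each block in every sampling period and keeping the worst case. A minimal, illustrative pattern for such measurements (the operation name and dummy workload are placeholders, not the paper's code):

```python
import time

def timed(op, *args):
    """Run op(*args), returning its result and the elapsed wall time in ms."""
    t0 = time.perf_counter()
    result = op(*args)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    return result, elapsed_ms

# Track the worst-case time of each operation over all sampling periods
worst = {}
for k in range(100):
    # Placeholder workload standing in for the K-step-ahead prediction
    _, dt = timed(sum, range(1000))
    worst['k_step_prediction'] = max(worst.get('k_step_prediction', 0.0), dt)
```

Reporting the maximum rather than the mean is the relevant choice here, since real-time feasibility is decided by the worst-case response time within a sampling period.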
References
Camacho EF (1993) Constrained generalized predictive control. IEEE Transactions on Automatic Control 38: 327-332.
Camacho EF and Bordons C (2003) Model Predictive Control. London: Springer-Verlag.
Camacho EF and Bordons C (2007) Nonlinear model predictive control: an introductory review. Lecture Notes in Control and Information Sciences 358: 1-16.
Cervantes AL, Agamennoni OE and Figueroa JL (2003) A nonlinear model predictive control system based on Wiener piecewise linear models. Journal of Process Control 13: 655-666.
Cho KH, Yeo YK, Kim JS and Koh ST (1999) Fuzzy model predictive control of nonlinear pH process. Korean Journal of Chemical Engineering 16: 208-214.
Clarke DW (1988) Application of generalized predictive control to industrial processes. IEEE Control Systems Magazine 122: 49-55.
Clarke DW (1994) Advances in Model-based Predictive Control. Oxford: Oxford University Press.
Clarke DW and Mohtadi C (1989) Properties of generalized predictive control. Automatica 25: 859-875.
Clarke DW, Mohtadi C and Tuffs PC (1987a) Generalized predictive control - part 1: the basic algorithm. Automatica 23: 137-148.
Clarke DW, Mohtadi C and Tuffs PC (1987b) Generalized predictive control - part 2: extensions and interpretations. Automatica 23: 149-163.
Cutler CR and Ramaker BL (1980) Dynamic matrix control: a computer control algorithm. In: Proceedings of the Joint Automatic Control Conference, San Francisco, CA.
Demircioglu H and Gawthrop PJ (1991) Continuous-time generalised predictive control (CGPC). Automatica 27: 55-74.
Doyle FJ, Ogunnaike BA and Pearson RK (1995) Nonlinear model-based control using second-order Volterra models. Automatica 31: 697-714.
Fruzzetti KP, Palazoglu A and McDonald KA (1997) Nonlinear model predictive control using Hammerstein models. Journal of Process Control 7: 31-41.
Gawthrop PJ and Demircioglu H (1989) Continuous-time generalized predictive control. In: Proceedings of the IFAC Symposium on Adaptive Systems in Control and Signal Processing, Glasgow, 123-128.
Genceli H and Nikolaou M (1995) Design of robust constrained model predictive controllers with Volterra series. AIChE Journal 41: 2098-2107.
Grewal MS and Andrews AP (2008) Kalman Filtering: Theory and Practice Using MATLAB. New York: John Wiley and Sons.
Gruber JK, Bordons C, Bars R and Haber R (2010) Nonlinear predictive control of smooth nonlinear systems based on Volterra models - application to a pilot plant. International Journal of Robust and Nonlinear Control 20: 1817-1835.
Henson MA (1998) Nonlinear model predictive control: current status and future directions. Computers and Chemical Engineering 23: 187-202.
Hernandez E and Arkun Y (1993) Control of nonlinear systems using polynomial ARMA models. AIChE Journal 39: 446-460.
Iplikci S (2006) Support vector machines-based generalized predictive control. International Journal of Robust and Nonlinear Control 16: 843-862.
Iplikci S (2010) A support vector machines based control application to the experimental three-tank system. ISA Transactions 49: 376-386.
Kawathekar RB and Riggs J (2007) Nonlinear model predictive control of a reactive distillation column. Control Engineering Practice 15: 231-239.
Keyser RMCD and Cauwenberghe ARV (1985) Extended prediction self-adaptive control. In: Proceedings of the 7th IFAC Symposium on Identification and System Parameter Estimation, York.
Lawrynczuk M (2007) A family of model predictive control algorithms with artificial neural networks. International Journal of Applied Mathematics and Computer Science 17: 217-232.
Maciejowski JM (2002) Predictive Control with Constraints. Essex: Pearson Education Limited.
Maner BR, Doyle FJ, Ogunnaike BA and Pearson RK (1996) Nonlinear model predictive control of a simulated multivariable polymerization reactor using second-order Volterra models. Automatica 32: 1285-1301.
Niemiec MP and Kravaris C (2003) Nonlinear model-state feedback control for nonminimum-phase processes. Automatica 39: 1295-1302.
Nocedal J and Wright SJ (1999) Numerical Optimization. New York: Springer.
Norquay SJ, Palazoglu A and Romagnoli JA (1998) Model predictive control based on Wiener models. Chemical Engineering Science 53: 75-84.
Piche S, Sayyar-Rodsari B, Johnson D and Gerules M (2000) Nonlinear model predictive control using neural networks. IEEE Control Systems Magazine 20: 53-62.
Press WH, Teukolsky SA, Vetterling WT and Flannery BP (2007) Numerical Recipes: The Art of Scientific Computing. New York: Cambridge University Press.
Qin SJ and Badgwell TA (2003) A survey of industrial model predictive control technology. Control Engineering Practice 11: 733-764.
Richalet J (1993) Industrial applications of model-based predictive control. Automatica 29: 1251-1274.
Richalet JA, Rault A, Testud JL and Papon J (1978) Model predictive heuristic control: applications to an industrial process. Automatica 14: 413-428.
Roubos JA, Mollov S, Babuska R and Verbruggen HB (1999) Fuzzy model-based predictive control using Takagi-Sugeno models. International Journal of Approximate Reasoning 22: 3-30.
Schafer A, Kuhl P, Diehl M, Schlöder J and Bock HG (2007) Fast reduced multiple shooting methods for nonlinear model predictive control. Chemical Engineering and Processing 46: 1200-1214.
Scokaert POM, Mayne DQ and Rawlings JB (1999) Suboptimal model predictive controllers (feasibility implies stability). IEEE Transactions on Automatic Control 44: 648-654.
Sistu PB and Bequette B (1996) Nonlinear model predictive control: closed-loop stability analysis. AIChE Journal 42: 3388-3402.
Soeterboek R (1992) Predictive Control: A Unified Approach. Englewood Cliffs, NJ: Prentice-Hall.
Tamimi J and Li P (2010) A combined approach to nonlinear model predictive control of fast systems. Journal of Process Control 20: 1092-1102.
Venkataraman P (2002) Applied Optimization with MATLAB Programming. New York: Wiley-Interscience.
Welch G and Bishop G (2006) An Introduction to the Kalman Filter. Technical Report TR 95-041, Department of Computer Science, University of North Carolina at Chapel Hill, NC.
Xi XC, Poo ANK and Chou S (2007) Support vector regression model predictive control on a HVAC plant. Control Engineering Practice 15: 897-908.