

ACTA UNIVERSITATIS OULUENSIS
C Technica 236

IMRE BENYÓ

CASCADE GENERALIZED PREDICTIVE CONTROL APPLICATIONS IN POWER PLANT CONTROL

FACULTY OF TECHNOLOGY,
DEPARTMENT OF PROCESS AND ENVIRONMENTAL ENGINEERING,
UNIVERSITY OF OULU
P.O. Box 7500, FI-90014 University of Oulu, Finland

Academic Dissertation to be presented with the assent of the Faculty of Technology, University of Oulu, for public discussion in Kuusamonsali (Auditorium YB 210), Linnanmaa, on April 28th, 2006, at 11 a.m.

OULUN YLIOPISTO, OULU 2006

Copyright 2006
Acta Univ. Oul. C 236, 2006

Supervised by
Professor Urpo Kortela
Doctor Jenő Kovács

Reviewed by
Professor Ruth Bars
Professor Kaddour Najim

SERIES EDITORS
SCIENTIAE RERUM NATURALIUM: Professor Mikko Siponen
HUMANIORA: Professor Harri Mantila
TECHNICA: Professor Juha Kostamovaara
MEDICA: Professor Olli Vuolteenaho
SCIENTIAE RERUM SOCIALIUM: Senior Assistant Timo Latomaa
SCRIPTA ACADEMICA: Communications Officer Elna Stjerna
OECONOMICA: Senior Lecturer Seppo Eriksson

EDITOR IN CHIEF: Professor Olli Vuolteenaho
EDITORIAL SECRETARY: Publication Editor Kirsti Nurkkala

ISBN 951-42-8031-8 (Paperback)
ISBN 951-42-8032-6 (PDF) http://herkules.oulu.fi/isbn9514280326/
ISSN 0355-3213 (Printed)
ISSN 1796-2226 (Online) http://herkules.oulu.fi/issn03553213/

Cover design: Raimo Ahonen

OULU UNIVERSITY PRESS
OULU 2006

Benyó, Imre, Cascade Generalized Predictive Control. Applications in power plant control
Faculty of Technology, University of Oulu, P.O. Box 4000, FI-90014 University of Oulu, Finland;
Department of Process and Environmental Engineering, University of Oulu, P.O. Box 4300, FI-90014 University of Oulu, Finland
Acta Univ. Oul. C 236, 2006
Oulu, Finland

Abstract
The Generalized Predictive Controller in transfer function representation is proposed for the cascade control task. The recommended cascade GPC (CGPC) applies one predictor and one cost function, which results in several advantageous features:
- The disturbance regulation of the inner and the outer loops can be totally decoupled;
- The inner disturbance regulation is well damped; the typical overshoot of the traditional cascade control structure is avoided;
- The robustness properties of the inner and the outer loops can be designed separately;
- The anti-windup properties of the CGPC are exactly as good as in the case of the simple SISO GPC. The typical problem of saturation in the inner loop, resulting in modeling error for the outer loop, is prevented.
The CGPC was applied as the oxygen controller of a pilot fluidized bed boiler. The investigation is based on simulation experiments and on experiments on a pilot scale boiler.
In another simulation experiment, the CGPC was applied as the temperature controller of a steam superheater stage. The results of the experiments illustrate well the power of the proposed cascade control algorithm.

Keywords: cascade control, GPC, oxygen control, robustness, superheater control

Acknowledgements
This work was carried out at the Systems Engineering Laboratory, at the Department of Process and Environmental Engineering, University of Oulu. It was financially supported by the Graduate School in Electronics, Telecommunication and Automation, by Tekniikan Edistämissäätiö, by the Oulu University Foundation and by the Neles 30 Säätiö.
I would like to express my sincere gratitude to my supervisors, Professor Urpo Kortela and Dr. Jenő Kovács, for the opportunity and support to perform this research. Special thanks to them for their constant readiness to engage in discussion.
I am very grateful to Dr. Albin Zsebik and Dr. György Lipovszki, who have supported my research at the Budapest University of Technology and Economics and encouraged my progress.
Professor Ruth Bars and Professor Kaddour Najim are appreciated for critically reviewing this thesis and thanked for their valuable comments on this work. I owe many thanks to Regina Casteleijn-Osorno for carefully revising the language.
Dr. Enso Ikonen is acknowledged for patiently introducing me to the world of predictive control. I warmly thank my office-mates Matias Paloranta and Zoltán Hámer, and all my colleagues at the Laboratory for the good and inspiring atmosphere. Matias is specially thanked for his friendship beyond the walls of the University.
I sincerely thank Liisa Myllykoski and Hannele Timonen for taking care of all my official business and guiding me through the Finnish administration. Liisa is also thanked for her kindness and help in our private life.
I wish to thank Betti and Jenő for the plenty of help and the lovely times we had with them. My wife Zsuzsa and I were lucky to find friends in Oulu. I would like to thank Pisti, Balázs, Réka and all the other fellows for the time we spent together. I specially thank the Salonpää family: Eija, Pekka, Justus, Julia and Petrus, for their love beyond one's depth.
At last, I warmly acknowledge my family. My wife accepted the challenge of moving to the North, to Oulu, supported my work with her care and brought happiness even in the dark and cold days of Oulu. I thank my parents for encouraging and enabling me to study. I thank my Mother for her support and her understanding at being left alone for years. I thank my Brothers, their families and also my Family-in-law for continuously prompting us to come back to Hungary.
Budapest, March 2006

Imre Benyó

List of symbols and abbreviations


Latin letters
A, B, C, D	polynomials of the ARIMAX model
A, B, C, D, E	matrices of the state space models
c	specific heat capacity
d	delay
G	process model in Laplace domain
G	the matrix of the step-response coefficients
J	cost function
Hp	prediction horizon
Hm	minimum horizon
Hc	control horizon
e	noise signal
m	mass
ṁ	mass flow
O2	oxygen content
Q	heat flow
t	time
T	temperature (in Section 5.3: time constant)
u	control signal
v	intermediate variable (the measured disturbance signal only in Section 3.1.9)
V	volume
w	reference signal
x	process state vector
y	process output signal
ŷ	estimated process output signal

Greek letters
heat transfer coefficient
weighting coefficient (density in Section 5.2)
disturbance signal
damping factor
difference operator
weighting matrix

Indices
after sh	after the superheater
after sp	after the spray
before sp	before the spray
CL	closed loop
e	error
fg	flue gas
forced	forced response
free	free response
h	enthalpy
i	inner loop (in cascade control structure)
in	inlet
o	outer loop (in cascade control structure)
OL	open loop
r	radiative
ref	reference signal
sp	spray
st	steam
t	tracking

Abbreviations
2GPC	cascade control loop including GPC in both the primary and the secondary controllers
ANFIS	Adaptive Neuro-Fuzzy Inference System
ARIMAX	Auto-Regressive Integrated Moving Average with eXogenous input
CARIMA	Controlled Auto-Regressive and Integrated Moving Average
CGPC	Cascade Generalized Predictive Control
DMC	Dynamic Matrix Control
FBB	Fluidized Bed Boiler
CFBC	Circulating Fluidized Bed Combustion
EPSAC	Extended Prediction Self Adaptive Control
EHAC	Extended Horizon Adaptive Control
FIR	Finite Impulse Response
GPC	Generalized Predictive Control
IMC	Internal Model Control
KT	Kappa Tau tuning method
MIMO	Multi-Input Multi-Output
MPC	Model Predictive Control
MPHC	Model Predictive Heuristic Control
MURHAC	MUltipredictor Receding Horizon Adaptive Control
MUSMAR	MUltiStep Multivariable Adaptive Regulator
LQ	Linear Quadratic control
LQG	Linear Quadratic Gaussian control
LS	Least Squares method
P	Proportional control
PFC	Predictive Functional Control
PI	Proportional-Integral control
PID	Proportional-Integral-Derivative control
SH-GPC	GPC derived for the superheater control task
SISO	Single-Input Single-Output
TIDEA	time-delay identification algorithm
UPC	Unified Predictive Control

Contents
Abstract
Acknowledgements
List of symbols and abbreviations
Contents
1 Introduction ................................................................................................................... 13
1.1 Predictive control....................................................................................................14
1.2 Cascade Control......................................................................................................17
1.3 The superheater control ..........................................................................................18
2 Aims of the research ...................................................................................................... 21
2.1 Outline of the Thesis...............................................................................................21
3 Cascade generalized predictive controller ..................................................................... 23
3.1 Generalized Predictive Controller ..........................................................................23
3.1.1 The Predictive Control Concept ......................................................................23
3.1.2 The process model ...........................................................................................25
3.1.3 Disturbance model...........................................................................................28
3.1.4 The free and forced response...........................................................................29
3.1.5 The cost function .............................................................................................31
3.1.6 The control algorithm ......................................................................................31
3.1.7 Closed loop relations of the GPC ....................................................................34
3.1.8 Tuning of the GPC algorithms.........................................................................37
3.1.9 Feedforward control in GPC............................................................................42
3.2 Robustness of the GPC algorithm...........................................................................47
3.2.1 The Modulus Margin .......................................................................................48
3.2.2 The stability boundary for the additive modelling error ..................................51
3.2.3 Effect of the noise model on the robustness ....................................................53
3.3 The cascade GPC algorithm ...................................................................................56
3.3.1 The cascade predictor ......................................................................................57
3.3.2 The control algorithm ......................................................................................60
3.3.3 Robustness properties of the CGPC ................................................................65
3.3.4 Comparison of CGPC with cascade GPC-s.....................................................69

3.4 Constraint control problem .....................................................................................76


3.4.1 Constrained Generalized Predictive Control ...................................................78
3.4.2 Constrained Cascade Generalized Predictive Control .....................................81
3.5 Cascade GPC based on state space model ..............................................................84
3.6 Summary.................................................................................................................87
4 Flue gas oxygen content control in fluidized bed boiler............................................... 88
4.1 Process description .................................................................................................89
4.1.1 The traditional oxygen controller ....................................................................90
4.1.2 Description of the proposed controller ............................................................90
4.2 Simulation results ...................................................................................................97
4.2.1 Simulator description.......................................................................................97
4.2.2 Perfect model matching ...................................................................................97
4.2.3 Control performance in presence of modelling error.......................................99
4.3 Experiments on a pilot plant .................................................................................101
4.3.1 Pilot plant description....................................................................................101
4.3.2 Measurement results ......................................................................................102
4.4 Summary...............................................................................................................107
5 The GPC in the Superheater control ............................................................................ 108
5.1 Introduction ..........................................................................................................108
5.2 Spray-Superheater simulator ................................................................................ 112
5.2.1 The water spray model................................................................................... 112
5.2.2 The superheater model................................................................................... 112
5.2.3 Validation of the models ................................................................................ 116
5.3 Tuning of the PI cascade controllers..................................................................... 118
5.3.1 Tuning of the secondary controller ................................................................ 119
5.3.2 Tuning of the primary controller....................................................................120
5.3.3 Feedforward mass flow disturbance compensation .......................................121
5.3.4 Anti-windup method in the cascade structure................................................122
5.4 Tuning of the CGPC .............................................................................................122
5.5 The extended cascade Generalized Predictive Controller.....................................123
5.5.1 Derivation of the controller ...........................................................................123
5.5.2 Tuning of the SH-GPC ..................................................................................129
5.6 Comparison of the controllers ..............................................................................130
5.6.1 Robustness properties of the controllers........................................................130
5.6.2 Evaluation of the controllers based on deterministic disturbances ................132
5.6.3 Evaluation of the controllers based on measured input data..........................136
5.7 Summary...............................................................................................................140
6 Conclusions ................................................................................................................. 142
References
Appendices

1 Introduction
For everyday people, the term control engineering sounds like a modern science, a product of the twentieth century. However, the problem of regulating processes has always been present, and this led people to look for solutions.
The earliest tangible work on control was motivated by practical concerns. There were devices whose operation and maintenance could be simplified by adding the capacity for automatic regulation. We know that self-regulating devices were constructed long ago in ancient times. Ktesibios of Alexandria invented a float regulator for a water clock circa 270 BC, and Philon of Byzantium used a float regulator to keep the level of oil in a lamp constant circa 250 BC (Lewis 1992).
Nowadays control engineering plays a fundamental role in modern technological
systems. The proper control of the processes is the basis for improving product quality,
reducing energy consumption, minimizing waste material, increasing safety levels and
reducing pollution.
Thermal and electrical power generation in power plants is a complex technology that is required to satisfy numerous technical, safety and economic constraints. The aim of the power plant automation system is to deal with these constraints and to realize safe and cost-saving operation. The role of the automation covers a wide range of different tasks: protecting people and the environment, securing the technical equipment, securing reliable and efficient power generation, and ensuring a high level of availability, longevity and fast changes in power level.
At the beginning of the twentieth century, power plant automation consisted of direct monitoring of the local measuring devices and a person directly acting on the valves and bolts. Signal transfer and remotely controlled drives appeared only in the late 1930s, and the rapid development of instrumentation started after the Second World War. The real breakthrough in the field was due to the appearance of semiconductor technology in the 1960s.
The importance of power plant automation has been continuously increasing, which can be explained by the following facts (Czinder 2000):
- The growing unit power results in an increased number of measurements and drives.
- The increased availability requirements imply the reduction of hazards.
- Environmental protection has become a crucial aspect. Emissions have to be reduced, thus several additional control tasks have appeared.
- Efficient power generation is a natural requirement on the energy market. To successfully improve the efficiency, tighter control is required.
- To follow the energy consumption, flexible units are required. Following power level changes is an important control task.
- Last but not least, automation relieves some of the mental workload of the human operators: due to the extent of the process and of the control system, the operators can handle only a very small portion of the monitored information.
As a result of the challenging requirements placed on power plant automation, and alongside the development of the instrumentation, the applied control algorithms have been greatly improved. Power plant automation is one of the key fields driving the development of new control algorithms, and one where new control strategies are tested. Predictive controllers are also widely tested in power plant automation.

1.1 Predictive control

The concept of predicting the process output was introduced for the first time by Smith (1957) in the well-known Smith predictor. The basic idea was to build a parallel model to compensate for the delay of the process. The controller is then designed for the non-delayed process. Due to the fact that the delay compensator includes the model of the process, significant robustness issues are associated with the structure. With the proposed delay compensation, both the tracking and regulation performance of a traditional control loop can be improved remarkably.
According to Qin & Badgwell (2003), the predictive control concept can be traced back to the Linear Quadratic Gaussian (LQG) controller proposed by Kalman et al. in the early 1960s (1960a, b). The LQG algorithm has a powerful stabilizing property due to the applied infinite prediction horizon. The LQG theory became a standard approach to solving control problems in a wide range of application areas. Goodwin et al. (2001) estimate that there may be thousands of applications of LQG, and that the number of patents related to the Kalman filter is about 400 per year. However, the applications are not widespread because the algorithm lacks the handling of constraints, process nonlinearities and model uncertainty. The most significant reason limiting the spread of the LQG theory is the culture of the industrial process community, namely that the technicians and control engineers either had no exposure to LQG concepts or regarded them as impractical.
The concept of predictive control was introduced simultaneously by Richalet (1976 and 1978) and Cutler & Ramaker (1979) in the late seventies.
The ideas characterizing the class of predictive controllers are basically the following:
- explicit use of a model to predict the process output at future time instants;
- calculation of the control signal by minimizing a cost function over a certain prediction horizon;
- receding horizon strategy, so only the first control signal is applied from the sequence calculated at a certain time instant; at the next sample the calculation of the control sequence is repeated.
Model based predictive controllers (MPC) have several advantages which make the algorithm attractive for control engineers. The most important benefits are (Soeterboek 1992, Camacho & Bordons 2004):
- The predictive concept is very intuitive; the tuning requires limited knowledge of control theory.
- A great variety of processes can be handled, from simple to complex dynamics: systems with long time delays, nonminimum phase or unstable behavior.
- The concept is not restricted to single-input, single-output (SISO) processes. The derivation and application of predictive controllers to multi-input, multi-output (MIMO) processes is straightforward.
- In contrast to LQ or pole-placement controllers, predictive controllers can also be derived for nonlinear processes. A nonlinear model of the process is then used explicitly to design the controller.
- Predictive controllers can handle constraints in a very natural and easy way. The constraints can be systematically included during the design. This feature in particular is believed to be one of the most attractive aspects of predictive controllers.
- Feedforward control can be introduced inherently to compensate for measurable disturbances.
- It is an especially advantageous algorithm when the future reference trajectory is known.
Predictive control also has some drawbacks. Since it belongs to the class of model-based controller design methods, an appropriate model of the process must be available.
The quality of the process model is a key aspect. Modeling error can essentially influence the performance of the controller. During the tuning of the controller, the robustness properties of the resulting control loop must be considered.
The model predictive controller results in a simple control law that requires little computation, but its derivation is more complex than that of a traditional PID controller or a pole-placement controller. If the applied model of the process is constant over the operating range, then the control law can be calculated beforehand. In the adaptive case, however, the calculations (of the optimal predictor) must be repeated at every sampling instant. Even though the available computation power is nowadays remarkable, this fact must be kept in mind with regard to real applications.
Model based predictive control is an open methodology: while maintaining the characteristics of the MPC class, there are different ways to design a predictive controller. As a result, numerous different predictive controllers have been proposed in the literature over the last decades (MPHC, DMC, PFC, GPC, etc.).
As already mentioned, the first description of an MPC application was presented by Richalet et al. (1976 and 1978). His algorithm is called Model Predictive Heuristic Control (MPHC) (later known as Model Algorithmic Control (MAC)). The algorithm is based on the impulse response model, and the reference trajectory is introduced as a first order system. The cost function is quadratic over a finite prediction horizon, and the input and output constraints are included in the formulation. The software applied in industry is called IDCOM. Due to the applied process model, the algorithm can be applied to stable processes only.
Dynamic Matrix Control (DMC) was developed at the end of the seventies by Cutler & Ramaker (1979) of the Shell Oil Co. The algorithm applies the step response model of the process, taking into account only a finite number of elements; hence it can be applied to stable processes only. An attractive characteristic of the method is that it can easily handle constraints.
MPC soon became popular, particularly in the chemical process industry. After the initial success, several other MPC algorithms appeared. Ydstie's Extended Horizon Adaptive Control (EHAC) (Ydstie 1984) applies a transfer function model of the process, and aims to minimize the discrepancy between the process output and the reference trajectory at a certain sample after the process delay. The algorithm has only one parameter to tune, which simplifies the tuning but provides little freedom of design.
Extended Prediction Self Adaptive Control (EPSAC) proposed by De Keyser & Van
Cauwenberghe (1984) similarly applies the transfer function model, but the control signal
is assumed to be constant in the future. Due to this assumption, the control signal can be
calculated analytically which makes the algorithm attractive.
One of the most popular model based predictive controllers is the Generalized
Predictive Controller (GPC) proposed by Clarke et al. (1987). The algorithm is going to
be presented and investigated in detail in the following chapter.
A few years later, Richalet (1992) also proposed Predictive Functional Control (PFC). The distinctive features of this algorithm are the application of basis functions (steps, ramps, parabolas, etc.) to structure the control signal, and the concept of coincidence points to evaluate the cost function along the horizon. The drawback of the controller (similarly to MAC) is that it can only be used for stable processes. The advantage is that it can be applied to nonlinear processes using nonlinear state space models.
The above mentioned MPC algorithms have limitations regarding independent tracking and regulation design. To expand the possibilities, several new algorithms were proposed, for example the Unified Predictive Controller (UPC, proposed by Soeterboek 1992) or the adaptation of the partial state model reference method in predictive controllers (Najim & M'Saad 1991, Landau et al. 1998).
Besides the already mentioned algorithms, there are numerous other predictive controllers: the Multistep Multivariable Adaptive Regulator (MUSMAR) (Greco et al. 1984), the Multipredictor Receding Horizon Adaptive Control (MURHAC) (Lemos & Mosca 1985), etc.
Due to the fact that model predictive control is a very effective tool and research in this field is very active, several books (Soeterboek 1992, Camacho & Bordons 2004, Maciejowski 2002) and thousands of papers have been published. The reported applications cover a wide range of control tasks from very fast processes to highly nonlinear slow processes. Several surveys of the realized applications can already be found in the literature (Clarke 1988, Garcia et al. 1989, Richalet 1993, Lee & Cooley 1997), and a survey extended with the available commercial MPC products was recently presented by Qin & Badgwell (2003).

1.2 Cascade Control


Cascade control is one of the most popular structures for process control (Maffezzoni et al. 1990), as it is a special architecture for dealing with disturbances. The core idea is to feed back an intermediate variable that lies between the disturbance injection point and the controlled process output. The classical cascade control structure is illustrated in Fig. 1. The control task is to control the output y(t) and to track the reference signal w(t), while the process is charged with different disturbances, e1 and e2. The cascade structure assumes that there is an intermediate measurable variable in the process, thus the process can be separated into two sub-processes. Compared to a traditional control loop, a second control loop is introduced including only the inner process. The goal of the inner controller (also called the secondary controller) is to attenuate the effect of the inner disturbance e1 before it significantly affects the process output. This can be realized if a high gain can be applied in the secondary controller, so that the secondary loop can quickly regulate the disturbances of the inner loop. Note that the outer controller (or so-called primary controller) provides the reference signal for the secondary controller.
The main benefits of the cascade structure can be exploited only in certain circumstances (Åström & Hägglund 1995):
- the inner process has significant nonlinearities that limit the loop performance;
- the outer process has significant delay, or limits the bandwidth in a basic control structure; the rule of thumb is that the average residence times should have a ratio of at least 5;
- essential disturbances act in the inner loop.
Fig. 1. Cascade control structure, where w(t) is the reference signal, u(t) is the control signal,
v(t) and vref(t) are the intermediate variable and its reference signal, ei are disturbances and
y(t) is the process output.

Even though cascade control is a traditional method that has been applied for decades, improving the performance of the structure is still a core issue. Some books provide fundamental tuning methods for conventional cascade control systems (Shinskey 1996, Åström & Hägglund 1995, Luyben 1990). Improvements in the tuning of the traditional PID controller in the cascade control scheme have been developed (e.g. Lee et al. 1998, Huang et al. 1998, Tan et al. 2000, Song et al. 2003), while new solutions have also been presented (e.g. Semino & Brambilla 1996, Lestage et al. 1999). Kaya (2001) proposed a cascade control scheme combined with a Smith predictor. Liu et al. (2005) proposed a new two-degree-of-freedom cascade structure that can decouple the tracking and regulation performance, and permits tuning the robustness of the inner and outer loops separately. The robustness of the cascade controller was also addressed in the paper of Maffezzoni et al. (1990). Their solution provided an independent design of the cascade control loops and intrinsic avoidance of the windup problem.
The number of cascade applications in the literature is enormous. In the following, only a few applications related to the predictive control concept are presented.
Maciejowski et al. (1991) presented the application of the DMC algorithm in the outer loop of the cascade control of a chlorine producing plant. The DMC gave the setpoint values for the multivariable compressor control. The good behaviour of the proposed control structure was demonstrated on a real-time simulation of the plant.
An interesting application of the MUSMAR algorithm for the control of a distributed collector solar field was given by Silva et al. (1997). The control problem consisted of keeping the temperature of the field outlet oil constant by acting on the circulating oil flow used for heat transfer. A MUSMAR controller was applied in the inner loop and a PID controller in the outer loop. Difficulties arose from the time-varying transport delay of the process. The obtained experience was generalized to a wide class of industrial processes.
Bordons & Camacho (1999) reported a successful application to sludge density control in a sugar factory. GPC controllers were applied in both the inner and outer loops. The paper showed the tuning of the controllers with regard to the robustness behaviour. The emphasis was put on how simple and powerful the predictive control algorithm was.
Shaoyuan et al. (2000) applied cascade GPC to a biaxial film production line. In the proposed solution the traditional control scheme was applied with GPC controllers in both the primary and the secondary loops.
Hedjar et al. (2000, 2003) investigated the application of MPC to the control of an induction motor. They proposed the application of a nonlinear predictive controller in both the inner and the outer loops.

1.3 The superheater control


One of the most difficult loops in a modern power plant is that controlling the superheated steam temperature. The efficiency of the power generating station depends strongly on maximizing this temperature, but within very tight limits. The allowed temperature is limited by the ability of the steel-alloy heat transfer tubing to retain its strength. Excessive temperatures, and especially temperature variations, cause stress and distortion which can significantly shorten the life-span of the superheater.
The superheater control is usually considered as a cascade control problem. Generally the steam temperature is controlled by water injection into the main steam flow before the superheater. The steam temperature is measured after the spray (before the superheater) and after the superheater. The spray process is fast, while the superheater has a significant time lag. The main disturbances of the superheated steam temperature arrive with the steam flow and can be detected by the first measurement. These facts confirm the suitability of cascade control.
The temperature control of the superheated steam has been widely considered in the technical literature, and many different control schemes have been proposed besides the classical PI based cascade control configuration.
Autotuning of the PID controller for superheated steam temperature was proposed by Madrigal-Espinosa & Garza-Barrientos (1995). The traditional PI cascade scheme is improved by autotuning the primary controller. The tuning is based on the modified Ziegler-Nichols rules (Hang et al. 1991). The auto-tuning is combined with a gain-scheduling module to overcome difficulties when parameter estimation is not possible because of unexcited inputs.
Buschini et al. (1994) presented a self-tuning temperature controller based on the robust cascade control scheme of Maffezzoni et al. (1990). The controllers of the robust cascade scheme were tuned by a self-tuning algorithm based on a time-delay identification algorithm called TIDEA, proposed by Ferretti et al. (1991). The proposed control configuration was compared to a self-tuned PI based cascade scheme, and the results were found promising.
An LQG self-tuning controller was presented by Forrest et al. (1993). The LQG algorithm was applied in both the inner and outer loops. The LQG controller in the inner loop is a fixed gain controller, while the primary controller applies the self-tuning version of the LQG. With the proposed controller, the steam temperature control is improved compared to the classical PI based control configuration.
The time varying behavior of the superheater process is addressed by the solution of Zhao et al. (2002). The primary controller is an IMC, while the secondary controller is a simple proportional controller. Both the primary and secondary controllers are built in a multi-model manner, and adapted according to the load level. The proposed control scheme is compared to a gain-scheduling PI based cascade control scheme. The paper draws attention to the relationship between the number of models and the smooth switching between the precalculated controllers, even though it does not deal with the actual switching method.
The multi-model approach was also applied by Guo et al. (2003). The superheated steam temperature control is proposed to be solved by a traditional GPC extended with measurable disturbance feedforward. The measured disturbance is the feed-water flow. The solution combines the fixed multi-model method and on-line adaptation, according to the deviation between the predicted output and the measured output. The better model (the on-line identified one or the fixed model chosen from the multi-model base) is applied in the GPC. Good control performance and robustness were reported, based on a simulation study.
Nakamura et al. (1995) presented an adaptive model based solution for steam temperature control. The authors proposed a simulator-model based control method extended with online identification and adaptation of the simulator in the controller. The method was found to be promising based on the presented simulation results.
The MUSMAR approach was also applied to superheated steam temperature control (Silva et al. 2000). Besides the traditional measurements, the steam mass flow and the flue gas flow signals are used in the controller. The authors presented the effectiveness of the algorithm with test results performed on a boiler with a maximum steam capacity of 150 t/h.
Moelbak (1999) presented an application based evaluation of different control strategies. Besides the classical PI control scheme, a fuzzy controller and GPC were tested. The GPC controller was applied in the outer loop, keeping the PI algorithm in the inner controller. The comparison showed the advantage of the advanced control methods; the best results were obtained with the GPC. The author also pointed out the significant performance improvement achieved by introducing new instrumentation, demonstrated by using radiation pyrometry.

2 Aims of the research


The initial problem of this work was how to improve the traditional superheater controllers without applying new measurements. The proposition was soon reformulated, as it was important to see how to improve the traditional cascade control structure in general.
The main aims were:
- to derive the GPC for the cascade control scheme based on a transfer function model;
- to investigate its tracking, regulation and robustness performance.
As a natural requirement, the implemented controller was intended to be tested at a real plant, or on a reasonable process simulator.
In the following, all the presented calculations are implemented in Matlab R12 and represent the author's own work; the simulations shown are prepared in Simulink. More details about the implementations can be found in Appendix 5.

2.1 Outline of the Thesis


A new Cascade Generalized Predictive Controller (CGPC) is introduced in Chapter 3. Besides the derivation of the controller, the basic tuning rules and its main properties are demonstrated. The proposed CGPC is investigated from the robustness and constraint control points of view. The investigation includes the comparison of the proposed algorithm with traditional cascade controllers. At the end of the chapter, the relationship of the proposed cascade controller to the state space model based GPC is shown.
In Chapter 4, the CGPC algorithm is applied to the oxygen control of a fluidized bed boiler. The problem of oxygen control has been investigated by Paloranta et al. (2003, 2004 and 2005). With his permission and support, the proposed CGPC algorithm was tested on the FBB simulator, and also on a pilot scale fluidized bed boiler. The chapter includes the derivation of the controller, a short overview of its robustness properties and finally the results of the simulations and of the measurements on the plant.
In Chapter 5, the CGPC algorithm is applied to superheater control. The process is a typical example of cascade control, which makes it a convenient test case for the proposed algorithm. The developed simulator of the superheater loop is presented at the beginning of the chapter. Based on simulations, the performance of three controllers is compared: a traditional PI controller with steam mass flow disturbance feedforward compensation; the proposed CGPC controller; and the CGPC extended with steam mass flow feedforward compensation. The controllers are compared based on the simulations, and the robustness properties of the control loops are also considered.
Finally, in Chapter 6, the conclusions are drawn. The appendices include the referenced but not crucial algorithms, i.e. the GPC algorithm based on the state space process model; the numerical optimization method applied in the GPC in case of active constraints, namely the feasible direction method; the Diophantine equation solver; the applied linear model parameters; and a short description of the Matlab implementation of the algorithms.

3 Cascade generalized predictive controller


3.1 Generalized Predictive Controller
The cascade generalized predictive controller is derived from the generalized predictive controller. First, the GPC and its main properties are presented, in preparation for the investigation of the properties of the CGPC algorithm.

3.1.1 The Predictive Control Concept


Predictive control is based on a process model applied for the prediction of the process behaviour. The basic idea of predictive control is illustrated in Fig. 2.
The predictive controller calculates a future control sequence that brings the predicted process output close to the desired output over the prediction horizon. In the receding horizon controller, the whole control sequence is not applied: only its first element is applied, and the optimisation procedure is repeated at the next sampling instant. All predictive controllers have four major features:
- the process and disturbance models applied for the prediction;
- the reference trajectory, in other words the desired process output; the predictive scheme has special importance when the reference trajectory is known ahead;
- the criterion or cost function, which is minimised to obtain the optimal control signal sequence;
- an optimization method to find the control signal sequence that minimises the cost function.
The controller may have several design parameters. The most important ones are the optimisation horizons, the control horizon and the weighting factors. In the cost function, the error between the reference signal and the predicted process output appears in the time range between the minimum horizon (Hm) and the prediction horizon (Hp); the control signal values are also included according to the control horizon. The optimisation horizon determines which future error values count in the cost function. The control horizon (Hc) determines how many changes in the control signal are allowed in the future.
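For illustration, the receding-horizon idea can be condensed into the following minimal Python sketch. It uses a hypothetical first-order plant and a simple step-response predictor of the kind detailed in the following sections, with the control horizon taken equal to the prediction horizon for brevity; none of the numbers come from the thesis.

```python
import numpy as np

# Hypothetical first-order plant y(t+1) = a*y(t) + b*u(t), used only for illustration
a, b = 0.9, 0.2
Hp, lam = 8, 0.1                                     # prediction horizon and control weight
g = b * (1 - a ** np.arange(1, Hp + 1)) / (1 - a)    # step-response coefficients g_1..g_Hp
# matrix mapping future control increments to predicted outputs over the horizon
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(Hp)] for i in range(Hp)])

y, u, w = 0.0, 0.0, 1.0                              # output, input, constant reference
for t in range(30):
    # free response: output evolution if the last control signal is kept constant
    y_free = a ** np.arange(1, Hp + 1) * y + g * u
    # minimise squared predicted errors plus weighted control increments
    du = np.linalg.solve(G.T @ G + lam * np.eye(Hp), G.T @ (w - y_free))
    u += du[0]                                       # receding horizon: apply only the first increment
    y = a * y + b * u                                # one simulation step of the plant
```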

[Figure 2: a) past and predicted controlled variables and the reference signal, with the instants k+1, k+Hm and k+Hp marked; b) past control signal and assumed future control signal up to k+Hc-1.]
Fig. 2. a) The past and predicted process output signals b) The control signal.

3.1.2 The process model

In predictive controllers, different process models can be applied. In the following, the most common ones are presented.
Impulse response or weighting function model. The model is based on the process response to an impulse input, as shown in Fig. 3. The output can be calculated from the previous inputs according to the following equation:

$$y(t) = \sum_{i=1}^{\infty} h_i u(t-i)$$

where $h_i$ is the sampled output for the unit impulse excitation. As a consequence, an unstable process cannot be represented in this form. Usually the sum is truncated and only N values are considered. The truncated impulse response model is:

$$y(t) = \sum_{i=1}^{N} h_i u(t-i) = H(q^{-1}) u(t)$$

where $q^{-1}$ is the backward shift operator. The difficulty with the application of this model is the large number of necessary parameters, as N is usually a large value. The prediction based on this model is straightforward:

$$\hat{y}(t+k \mid t) = \sum_{i=1}^{N} h_i u(t+k-i \mid t) = H(q^{-1}) u(t+k \mid t)$$

The advantage of the model is the simple identification.
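As a small numerical sketch of the prediction above (illustrative code, not part of the thesis implementation), the sum can be evaluated directly from the truncated impulse response:

```python
import numpy as np

def impulse_prediction(h, u_past, u_future):
    """y(t+k|t) = sum_{i=1..N} h_i * u(t+k-i) for a truncated impulse response.
    h: [h_1, ..., h_N]; u_past: [u(t-1), u(t-2), ...];
    u_future: assumed future inputs [u(t), u(t+1), ..., u(t+k-1)]."""
    u = np.concatenate((u_future[::-1], u_past))   # u(t+k-1), u(t+k-2), ..., u(t-1), ...
    n = min(len(h), len(u))
    return float(np.dot(h[:n], u[:n]))
```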

Fig. 3. The parameters of the impulse response models.

Step response model. This model is similar to the impulse response model, but the input signal is a unit step. As with the impulse response, the truncated response is commonly applied:

$$y(t) = y_0 + \sum_{i=1}^{N} g_i \Delta u(t-i) = y_0 + G(q^{-1})(1-q^{-1}) u(t)$$

where $g_i$ are the sampled process output values for the unit step input (as shown in Fig. 4), and $\Delta u(t) = u(t) - u(t-1)$. The initial output value can be taken to be 0 without loss of generality, so the predictor is:

$$\hat{y}(t+k \mid t) = \sum_{i=1}^{N} g_i \Delta u(t+k-i \mid t) = G(q^{-1}) \Delta u(t+k \mid t)$$

Since the impulse signal is the difference of two step signals with a sampling period delay, the connection between the parameters of the impulse response model and those of the step response model can be written as:

$$h_i = g_i - g_{i-1}.$$

As the connection between the model parameters is very strong, this model has the same drawbacks as the impulse response model: it is applicable only to stable processes and requires a large number of process parameters. The advantage is that the parameters can be identified by conducting a simple experiment on the process, and the predictor can be easily obtained.
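A short numerical check of the relation between the two coefficient sets and of the step-response predictor (the coefficient values below are made up for illustration):

```python
import numpy as np

g = np.array([0.2, 0.36, 0.49, 0.59, 0.67])      # step-response coefficients g_1..g_5 (made up)
h = np.diff(np.concatenate(([0.0], g)))          # h_i = g_i - g_{i-1}, with g_0 = 0

# one-step prediction y(t+1|t) = sum_i g_i * du(t+1-i) for a given increment history
du = np.array([0.5, 0.0, -0.1, 0.0, 0.0])        # [du(t), du(t-1), ...]
y_pred = float(np.dot(g, du))                    # uses the step-response form directly
```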

Fig. 4. The parameters of the step response model.

Transfer function model. The process output signal is given by:

$$y(t) = \frac{B(q^{-1})}{A(q^{-1})} u(t-1)$$

where

$$A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + \cdots + a_{na} q^{-na}$$

$$B(q^{-1}) = b_0 + b_1 q^{-1} + \cdots + b_{nb} q^{-nb}$$

The prediction can be expressed as:

$$\hat{y}(t+k \mid t) = \frac{B(q^{-1})}{A(q^{-1})} u(t+k-1 \mid t)$$

To separate the effects of the past and future inputs, a Diophantine equation must be solved:

$$\frac{B(q^{-1})}{A(q^{-1})} = E_k(q^{-1}) + q^{-k+1} \frac{F_k(q^{-1})}{A(q^{-1})}$$

Substituting this into the prediction gives:

$$\hat{y}(t+k \mid t) = E_k(q^{-1}) u(t+k-1 \mid t) + \frac{F_k(q^{-1})}{A(q^{-1})} u(t)$$

This model is already able to describe unstable processes (e.g. integrating processes), and another advantage is the limited number of required parameters. The disadvantage of the model is that a priori knowledge is required about the orders of the A and B polynomials.

State space model. The model for a SISO process is the following:

$$x(t+1) = A x(t) + B u(t)$$

$$y(t) = C x(t)$$

where x is the state vector, and A, B, C are the system, input and output matrices respectively. The prediction is given by:

$$\hat{y}(t+k \mid t) = C \hat{x}(t+k \mid t) = C \left[ A^{k} x(t) + \sum_{i=1}^{k} A^{i-1} B u(t+k-i \mid t) \right]$$

This predictor requires the state vector. If the state variables are not measured, then a state observer is required to implement a state space model based predictive controller. The state space model is the most convenient for multivariable processes.
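The k-step prediction above can be computed simply by propagating the state over the horizon; a minimal sketch for the SISO case (illustrative code with a made-up two-state model, not one of the thesis models):

```python
import numpy as np

def state_space_prediction(A, B, C, x, u_future):
    """y(t+k|t) = C[A^k x(t) + sum_{i=1..k} A^(i-1) B u(t+k-i|t)], computed by
    propagating the state.  A: (n,n), B: (n,), C: (n,), x: (n,);
    u_future: [u(t), u(t+1), ..., u(t+k-1)] (scalar inputs, SISO case)."""
    x_pred = np.asarray(x, dtype=float)
    for ui in u_future:                  # x(t+i+1) = A x(t+i) + B u(t+i)
        x_pred = A @ x_pred + B * ui
    return float(C @ x_pred)

# Example with a hypothetical 2-state model
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([0.0, 0.5]); C = np.array([1.0, 0.0])
y3 = state_space_prediction(A, B, C, x=np.zeros(2), u_future=[1.0, 1.0, 1.0])
```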

3.1.3 Disturbance model

The disturbance model has special importance in predictive controllers. The most general one is the Controlled Autoregressive and Integrated Moving Average (CARIMA) model, in which the difference between the measured output and the output calculated by the process model is given by:

$$n(t) = \frac{C(q^{-1})}{D(q^{-1})} e(t)$$

where the denominator $D(q^{-1})$ includes the integrator and is generally chosen as $D(q^{-1}) = (1-q^{-1}) A(q^{-1})$; $e(t)$ is white noise with zero mean, and the polynomial C is identified or chosen as a controller parameter.
To calculate the predicted error, the following Diophantine equation must be solved (a possible Diophantine equation solver is given in Appendix 2):

$$\frac{C(q^{-1})}{D(q^{-1})} = E_k(q^{-1}) + q^{-k} \frac{F_k(q^{-1})}{D(q^{-1})}$$

The prediction of the disturbance is:

$$\hat{n}(t+k \mid t) = \frac{C(q^{-1})}{D(q^{-1})} e(t+k \mid t) = E_k(q^{-1}) e(t+k \mid t) + \frac{F_k(q^{-1})}{D(q^{-1})} e(t)$$

Since the order of the $E_k$ polynomial is less than k, and the $e(t)$ signal is white noise with zero mean, the expected value of the first term on the right side is 0. Thus the prediction of the future disturbance is:

$$\hat{n}(t+k \mid t) = \frac{F_k(q^{-1})}{D(q^{-1})} e(t).$$
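The thesis refers to a Diophantine equation solver in Appendix 2; the following is an independent, minimal sketch of the usual polynomial long-division approach (illustrative only, not the Appendix 2 code), using the ascending-power coefficient convention of the equations above.

```python
import numpy as np

def diophantine(C, D, k):
    """Split C(q^-1)/D(q^-1) = E_k(q^-1) + q^-k F_k(q^-1)/D(q^-1) by k steps of
    polynomial long division.  C, D: coefficient arrays in ascending powers of q^-1
    (D[0] must be nonzero).  Returns the coefficients of E_k and F_k."""
    n = k + max(len(C), len(D))
    rem = np.zeros(n); rem[:len(C)] = C
    Dp = np.zeros(n); Dp[:len(D)] = D
    E = np.zeros(k)
    for i in range(k):
        E[i] = rem[i] / D[0]
        rem[i:] -= E[i] * Dp[:n - i]     # cancel the q^-i term of the remainder
    return E, rem[k:]                    # rem[k:] holds F_k (trailing zeros are harmless)

# Example: C = 1, D = Delta*A = (1 - q^-1)(1 + a1 q^-1) with a1 = -0.9, k = 1
E1, F1 = diophantine([1.0], np.convolve([1.0, -1.0], [1.0, -0.9]), 1)
# E1 = [1.],  F1 = [1.9, -0.9, 0.]  ->  F_1(q^-1) = 1.9 - 0.9 q^-1
```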

3.1.4 The free and forced response

In the GPC, a predictor is required to estimate the future outputs together with the disturbances. Combining the process model with the disturbance model, it is possible to derive the predictor that estimates the future values of the output signal based on the information available up to the actual time instant t.
Taking the transfer function model and the CARIMA model, the output of the process is given by:

$$y(t) = \frac{B(q^{-1})}{A(q^{-1})} u(t-1) + \frac{C(q^{-1})}{D(q^{-1})} e(t) \qquad (1)$$

where $D(q^{-1}) = (1-q^{-1}) A(q^{-1})$, and the delay of the process is included in the B polynomial through its zero leading coefficients.
The k-step ahead output prediction is given by:

$$\hat{y}(t+k \mid t) = \frac{B(q^{-1})}{A(q^{-1})} u(t+k-1) + \frac{C(q^{-1})}{D(q^{-1})} e(t+k \mid t) \qquad (2)$$

The disturbance term can be separated into available and future information as in the previous section:

$$\frac{C(q^{-1})}{D(q^{-1})} = E_k(q^{-1}) + q^{-k} \frac{F_k(q^{-1})}{D(q^{-1})}$$

Reorganizing the equation:

$$1 = \frac{E_k(q^{-1}) D(q^{-1})}{C(q^{-1})} + q^{-k} \frac{F_k(q^{-1})}{C(q^{-1})} \qquad (3)$$

Multiplying (2) by this expression gives:

$$\hat{y}(t+k \mid t) = \frac{E_k(q^{-1}) D(q^{-1}) B(q^{-1})}{C(q^{-1}) A(q^{-1})} u(t+k-1) + E_k(q^{-1}) e(t+k) + q^{-k} \frac{F_k(q^{-1})}{C(q^{-1})} \left[ \frac{B(q^{-1})}{A(q^{-1})} u(t+k-1) + \frac{C(q^{-1})}{D(q^{-1})} e(t+k) \right]$$

or equivalently:

$$\hat{y}(t+k \mid t) = \frac{E_k(q^{-1}) B(q^{-1}) (1-q^{-1})}{C(q^{-1})} u(t+k-1) + E_k(q^{-1}) e(t+k) + \frac{F_k(q^{-1})}{C(q^{-1})} \left[ \frac{B(q^{-1})}{A(q^{-1})} u(t-1) + \frac{C(q^{-1})}{D(q^{-1})} e(t) \right]$$

Considering that the expected value of the second term is equal to 0, and that the term within the brackets is equal to the actual process output, the prediction of the process output is given by:

$$\hat{y}(t+k \mid t) = \frac{E_k(q^{-1}) B(q^{-1})}{C(q^{-1})} \Delta u(t+k-1) + \frac{F_k(q^{-1})}{C(q^{-1})} y(t)$$

The effect of the control signal is included in the first term on the right side. To separate the effect of the past and future control actions, the following Diophantine equation must be solved:

$$\frac{E_k(q^{-1}) B(q^{-1})}{C(q^{-1})} = G_k(q^{-1}) + q^{-k} \frac{L_k(q^{-1})}{C(q^{-1})} \qquad (4)$$

The final form of the predictor is:

$$\hat{y}(t+k \mid t) = G_k(q^{-1}) \Delta u(t+k-1) + \frac{L_k(q^{-1})}{C(q^{-1})} \Delta u(t-1) + \frac{F_k(q^{-1})}{C(q^{-1})} y(t) \qquad (5)$$

In the predictive control literature, the first term on the right side is called the forced response ($y_{forced}$) and the rest is called the free response ($y_{free}$). The free response expresses the prediction of the process outputs based on the past inputs, assuming that the last control signal is kept constant. The free response also includes the already measured disturbances and their effect on the future outputs (expressed in the last term of the predictor). The forced response corresponds to the prediction triggered by the actual and future control signals.
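To make the roles of the $E_k$, $F_k$, $G_k$ and $L_k$ polynomials concrete, a short worked example (constructed here for illustration, not taken from the thesis) traces predictor (5) for a first-order plant with $A = 1 + a_1 q^{-1}$, $B = b_0$, $C = 1$ and $k = 1$:

$$D = \Delta A = 1 + (a_1 - 1) q^{-1} - a_1 q^{-2}, \qquad \frac{C}{D} = \underbrace{1}_{E_1} + q^{-1} \frac{F_1}{D}, \quad F_1 = (1 - a_1) + a_1 q^{-1},$$

$$\frac{E_1 B}{C} = b_0 = G_1, \qquad L_1 = 0,$$

so that

$$\hat{y}(t+1 \mid t) = b_0 \Delta u(t) + (1 - a_1) y(t) + a_1 y(t-1) = b_0 u(t) - a_1 y(t),$$

where the last equality uses the noise-free model $y(t) = -a_1 y(t-1) + b_0 u(t-1)$; the predictor thus reproduces the obvious one-step prediction, with $G_1 \Delta u(t)$ forming the forced response and the $L_1$ and $F_1$ terms the free response.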

3.1.5 The cost function

Different cost functions can be applied in predictive controllers. In the Generalized Predictive Controller the following form is usual:

$$J(\Delta u) = \sum_{j=H_m}^{H_p} \lambda_1(j) \left[ \hat{y}(t+j \mid t) - w(t+j) \right]^2 + \sum_{j=1}^{H_c} \lambda_2(j) \left[ \Delta u(t+j-1) \right]^2 \qquad (6)$$

where w is the reference signal.
The cost function includes the predicted errors and the control actions. The tuning parameters of the controller can be clearly seen in the cost function:
Hm, the minimum horizon, specifies the beginning of the horizon in the cost function, i.e. the point from which the output error is taken into account. Since the control action affects the process output only after the process delay, the minimum horizon is suggested to be equal to or higher than the process delay.
Hp, the prediction horizon, specifies the end of the horizon in the cost function, in other words the last output error that is taken into account.
Hc, the control horizon, is the number of consecutive changes in the control signal.
λ1, λ2, the weighting vectors, enable the weighting of the terms in the cost function also with respect to their appearance in time.

3.1.6 The control algorithm

The analytical minimization of the cost function is possible if no constraints on the control signal are assumed. The cost function is:

$$J(\Delta u) = \sum_{j=H_m}^{H_p} \lambda_1(j) \left[ \hat{y}(t+j \mid t) - w(t+j) \right]^2 + \sum_{j=1}^{H_c} \lambda_2(j) \left[ \Delta u(t+j-1) \right]^2$$

Introducing the free and forced response notation, and organizing the signals into vector-matrix form, the cost function becomes:

$$J(\Delta u) = \left( G \Delta u + y_{free} - w \right)^T \Lambda_1 \left( G \Delta u + y_{free} - w \right) + \Delta u^T \Lambda_2 \Delta u \qquad (7)$$

where:

$$\Delta u = \begin{bmatrix} \Delta u(t) \\ \Delta u(t+1) \\ \vdots \\ \Delta u(t+H_c-1) \end{bmatrix}; \quad w = \begin{bmatrix} w(t+H_m) \\ \vdots \\ w(t+H_p) \end{bmatrix};$$

$$y_{free} = \begin{bmatrix} y_{free}(t+H_m) \\ \vdots \\ y_{free}(t+H_p) \end{bmatrix}, \quad y_{free}(t+k) = \frac{L_k(q^{-1})}{C(q^{-1})} \Delta u(t-1) + \frac{F_k(q^{-1})}{C(q^{-1})} y(t);$$

$$\Lambda_1 = \mathrm{diag}\left[ \lambda_1(H_m), \lambda_1(H_m+1), \ldots, \lambda_1(H_p) \right], \quad \Lambda_2 = \mathrm{diag}\left[ \lambda_2(1), \lambda_2(2), \ldots, \lambda_2(H_c) \right],$$

and

$$G = \begin{bmatrix} g_{H_m} & 0 & \cdots & 0 \\ g_{H_m+1} & g_{H_m} & & \vdots \\ \vdots & & \ddots & 0 \\ g_{H_p} & g_{H_p-1} & \cdots & g_{H_p-H_c+1} \end{bmatrix}$$

where $G_k(q^{-1}) = g_0 + g_1 q^{-1} + \cdots + g_k q^{-k}$.
(Notice that these $g_i$ coefficients are the same as the parameters of the step response model.)

Equation (7) can be written in the following form:

$$J(\Delta u) = \frac{1}{2} \Delta u^T H \Delta u + b^T \Delta u + f_0 \qquad (8)$$

where

$$H = 2 \left( G^T \Lambda_1 G + \Lambda_2 \right), \quad b^T = 2 \left( y_{free} - w \right)^T \Lambda_1 G, \quad f_0 = \left( y_{free} - w \right)^T \Lambda_1 \left( y_{free} - w \right)$$

The minimum of the cost function J, assuming the absence of any constraints, can be found by setting the gradient of J equal to zero, which leads to the solution:

$$\Delta u = -H^{-1} b = \left( G^T \Lambda_1 G + \Lambda_2 \right)^{-1} G^T \Lambda_1 \left( w - y_{free} \right) \qquad (9)$$

As already mentioned, the GPC is a receding horizon controller, and thus only the first element of the calculated control signal sequence is applied to the process. The minimization of the cost function is repeated at the next sampling instant. The applied control signal is:

$$\Delta u(t) = K \left( w - y_{free} \right)$$

where K is the first row of the matrix $\left( G^T \Lambda_1 G + \Lambda_2 \right)^{-1} G^T \Lambda_1$.
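A minimal numerical sketch of this step (illustrative only; the thesis's own implementation is in Matlab, cf. Chapter 2) forms the first-row gain K from equation (9):

```python
import numpy as np

def gpc_gain(G, lam1, lam2):
    """First row K of (G' Lambda1 G + Lambda2)^-1 G' Lambda1, cf. equation (9).
    G: (Hp-Hm+1) x Hc matrix of step-response coefficients;
    lam1, lam2: diagonals of the weighting matrices Lambda1 and Lambda2."""
    L1, L2 = np.diag(lam1), np.diag(lam2)
    M = np.linalg.solve(G.T @ L1 @ G + L2, G.T @ L1)
    return M[0, :]                       # receding horizon: only the first row is needed

# The applied increment is then du(t) = K @ (w - y_free), with w and y_free stacked over the horizon.
```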
Assuming that the future reference trajectory is kept constant along the prediction horizon (or, equivalently, that it is unknown and therefore assumed to be constant), the control algorithm is the following:

$$\Delta u(t) = \sum_{i=H_m}^{H_p} k_i \, w(t) - \frac{K L}{C(q^{-1})} \Delta u(t-1) - \frac{K F}{C(q^{-1})} y(t) \qquad (10)$$

where the $k_i$ coefficients are the elements of the K vector, and

$$L = \begin{bmatrix} L_{H_m}(q^{-1}) \\ L_{H_m+1}(q^{-1}) \\ \vdots \\ L_{H_p}(q^{-1}) \end{bmatrix}, \quad F = \begin{bmatrix} F_{H_m}(q^{-1}) \\ F_{H_m+1}(q^{-1}) \\ \vdots \\ F_{H_p}(q^{-1}) \end{bmatrix}$$

The derived controller is the GPC. The most important features that distinguish it among the numerous predictive controllers are the following:
- the application of the CARIMA process model;
- the use of long-range prediction over a finite horizon;
- the weighting of the control increments;
- the application of the control horizon concept.

3.1.7 Closed loop relations of the GPC

The control algorithm derived in Eq. (10) can be written in the following form:

$$\Delta u(t) = \sum_{i=H_m}^{H_p} k_i \, w(t) - \sum_{i=H_m}^{H_p} k_i L_i \frac{\Delta u(t-1)}{C(q^{-1})} - \sum_{i=H_m}^{H_p} k_i F_i \frac{y(t)}{C(q^{-1})} \qquad (11)$$

The GPC controller can be easily transformed into the R-S-T structure by some simple manipulations:

$$\Delta u(t) \left[ C(q^{-1}) + q^{-1} \sum_{i=H_m}^{H_p} k_i L_i \right] = C(q^{-1}) \sum_{i=H_m}^{H_p} k_i \, w(t) - \sum_{i=H_m}^{H_p} k_i F_i \, y(t)$$

The R-S-T control law is:

$$\Delta u(t) S(q^{-1}) = w(t) T(q^{-1}) - y(t) R(q^{-1})$$

The R-S-T polynomials are:

$$S(q^{-1}) = \frac{C(q^{-1}) + q^{-1} \sum_{i=H_m}^{H_p} k_i L_i(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}; \quad R(q^{-1}) = \frac{\sum_{i=H_m}^{H_p} k_i F_i(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}; \quad T(q^{-1}) = C(q^{-1})$$
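The polynomial bookkeeping of this transformation can be sketched as follows (an illustrative helper with names chosen here, not taken from the thesis); polynomials are coefficient arrays in ascending powers of q^-1:

```python
import numpy as np

def rst_polynomials(K, F_list, L_list, C):
    """Form R, S and T of the GPC control law: S = (C + q^-1 sum k_i L_i)/sum k_i,
    R = (sum k_i F_i)/sum k_i, T = C."""
    def weighted_sum(polys):
        s = np.zeros(max(len(p) for p in polys))
        for ki, p in zip(K, polys):
            s[:len(p)] += ki * np.asarray(p, dtype=float)
        return s
    ksum = float(np.sum(K))
    R = weighted_sum(F_list) / ksum
    kl = weighted_sum(L_list)
    S = np.zeros(max(len(C), len(kl) + 1))
    S[:len(C)] += np.asarray(C, dtype=float)
    S[1:len(kl) + 1] += kl               # the q^-1 * sum k_i L_i term
    return R, S / ksum, np.asarray(C, dtype=float)
```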

In the following, the T polynomial is not distinguished and only the C polynomial notation is applied. The control loop in R-S-T form is shown in Fig. 5.

[Figure 5 block diagram: the controller blocks C(q⁻¹)/S(q⁻¹) (from w(t) to u(t)) and R(q⁻¹)/C(q⁻¹) (feedback from y(t)); the process q⁻¹B(q⁻¹)/A(q⁻¹); the noise e(t) entering through C(q⁻¹)/D(q⁻¹).]

Fig. 5. The R-S-T control structure of the GPC.

Notice that in the following, the R-S-T forms of the controllers are shown to facilitate the derivation of certain properties of the control loops. The actual simulation code is always implemented as the original algorithm based on equation (9).
Based on Fig. 5, after a couple of manipulations the output can be given by:

$$y(t) = \frac{\dfrac{C(q^{-1}) B(q^{-1}) q^{-1}}{S(q^{-1}) \tilde{A}(q^{-1})}}{1 + \dfrac{C(q^{-1}) R(q^{-1}) B(q^{-1}) q^{-1}}{S(q^{-1}) C(q^{-1}) \tilde{A}(q^{-1})}} \, w(t) + \frac{\dfrac{C(q^{-1})}{\tilde{A}(q^{-1})}}{1 + \dfrac{C(q^{-1}) R(q^{-1}) B(q^{-1}) q^{-1}}{S(q^{-1}) C(q^{-1}) \tilde{A}(q^{-1})}} \, e(t)$$

or equivalently:

$$y(t) = \frac{C(q^{-1}) B(q^{-1}) q^{-1}}{S(q^{-1}) \tilde{A}(q^{-1}) + R(q^{-1}) B(q^{-1}) q^{-1}} \, w(t) + \frac{C(q^{-1}) S(q^{-1})}{S(q^{-1}) \tilde{A}(q^{-1}) + R(q^{-1}) B(q^{-1}) q^{-1}} \, e(t) \qquad (12)$$

where $\tilde{A}(q^{-1}) = \Delta A(q^{-1})$.

The characteristic polynomial of the transfer function can be decomposed to obtain the C polynomial as a factor. The manipulations required to find the C polynomial in equation (12) are shown in the following.
From the first Diophantine equation (3) it follows that:

$$\sum_{i=H_m}^{H_p} k_i F_i(q^{-1}) = C(q^{-1}) \sum_{i=H_m}^{H_p} k_i q^{i} - \tilde{A}(q^{-1}) \sum_{i=H_m}^{H_p} k_i E_i(q^{-1}) q^{i}$$

From the second Diophantine equation (4) it follows that:

$$B(q^{-1}) \sum_{i=H_m}^{H_p} k_i E_i(q^{-1}) q^{i} = C(q^{-1}) \sum_{i=H_m}^{H_p} k_i G_i(q^{-1}) q^{i} + \sum_{i=H_m}^{H_p} k_i L_i(q^{-1})$$

Combining these two equations with the definition of the R polynomial:

$$B R q^{-1} = \frac{C \left( B \sum_{i=H_m}^{H_p} k_i q^{i-1} - \tilde{A} \sum_{i=H_m}^{H_p} k_i G_i q^{i-1} \right) - \tilde{A} q^{-1} \sum_{i=H_m}^{H_p} k_i L_i}{\sum_{i=H_m}^{H_p} k_i}$$

The characteristic polynomial is equal to:

$$S \tilde{A} + B R q^{-1} = \frac{\tilde{A} \left( C + q^{-1} \sum_{i=H_m}^{H_p} k_i L_i \right)}{\sum_{i=H_m}^{H_p} k_i} + \frac{C \left( B \sum_{i=H_m}^{H_p} k_i q^{i-1} - \tilde{A} \sum_{i=H_m}^{H_p} k_i G_i q^{i-1} \right) - \tilde{A} q^{-1} \sum_{i=H_m}^{H_p} k_i L_i}{\sum_{i=H_m}^{H_p} k_i}$$

The second term of the first fraction and the last term of the second fraction are the same but with opposite signs. Thus the characteristic polynomial is:

$$S \tilde{A} + B R q^{-1} = \frac{C}{\sum_{i=H_m}^{H_p} k_i} \left[ \tilde{A} + B \sum_{i=H_m}^{H_p} k_i q^{i-1} - \tilde{A} \sum_{i=H_m}^{H_p} k_i G_i q^{i-1} \right] = C P_c \qquad (13)$$

37
Recall the equation (12) with the elimination of the C polynomial:
y (t ) =

Bq 1
S
w (t ) + (t )
Pc
Pc

(14)

From this expression, one important role of the C polynomial can be clearly seen. The
closed loop transfer function between the output and the reference signal (describing the
tracking behaviour) does not include the C polynomial, thus the stability and the tracking
performance is not influenced by the C polynomial. The transfer function between output
and the disturbance includes the C polynomial in the S, thus the disturbance regulation
depends on the C polynomial. During the derivation of these conclusions, perfect model
matching was assumed. If it is not satisfied (the model applied in the controller and the
process are not identical) then the shown eliminations can not be performed, and the
tracking behaviour of the GPC will also be influenced by the noise model.
In the equation (13) and (14) it seems, that the roots of the Pc expression give the poles
of the closed control loop that must be examined to check the stability of the control loop.

3.1.8 Tuning of the GPC algorithms


The main tuning parameters (as horizons and weighting factors) were already mentioned
by the cost function in Section 3.1.5. In this chapter, some basic guidelines are given and
some simulation examples are shown to illustrate the effect of the main parameters.
The minimum horizon is of little amount parameter. Since the control action affects the
process output only after the delay, it is reasonable to choose the minimum horizon
higher or equal to the process delay. If the process delay is not known, then the delay can
be set to one and the minimum horizon to zero without the loss of stability. The choice of
the minimum horizon can be interesting in case of nonminimum phase processes.
The prediction horizon has a remarkable effect on the performance of the controller. In
general, the prediction horizon is proposed to set around the settling time of the process
but at least equal to the order of the process. If the plant has a nonminimum-phase
response (unstable zero in the process transfer function), the prediction horizon has to be
long enough that the cost function could include the samples near to the settling time.
The control horizon is an essential design parameter. The increasing control horizon
parameter results in a more excited control signal and thus a faster response. Over a
certain value no further increase can be obtained.
The control weighting parameters has the effect of reducing the control activity. In the
case of a stable plant, increasing the weights reduces the effect of the feedback, thus
stabilising the control loop. The drawback of increasing the control weights is to slow
down the control loop since small control actions are resulted. This parameter has special
importance if the process output measurement is burdened with serious measurement
noise.

38
The cost function (6) includes the vector of the future reference signal. As a
consequence it is possible to prescribe certain tracking behaviour. Some parameter
settings of the GPC have special importance.
Set the control horizon equal to one, the control signal weighting factor equal to zero,
and suppose a unit step in the reference signal without any disturbance on the process. If
the prediction horizon is long enough, the control signal is close to a step, and the control
performance is similar to the mean-level control. If the process requires even more
damping of the control action, the control signals weighting can be increased. In most of
the stable plant cases, the mean-level control is suitable.
The deadbeat response can also be implemented in GPC. By setting the control and
the prediction horizon to be higher than the order of the process, and the control
weighting factor equal to zero, the realized controller results in deadbeat control. In this
case the process output reaches the reference signal within the possible minimum
instants.
Example 1: The effects of the tuning parameters.

To show the effect of the main tuning parameters, a series of simulations are presented.
The process to be controlled is the following:
G (s) =

2e 3 s
100 s 2 + 15s + 1

thus the time constant of the process is T = 10 sec ; the damping factor is = 0.75 and
the resulting settling time (2 %) is Tsettling = 37 sec .
Hence, in the future, if it is not indicated otherwise, the sampling time is one second,
and the weighting factor of the predicted errors is one. The minimum horizon is set to be
equal to the process delay and the control signal weighting is zero.
Outputs

1
0.8
0.6
0.4

Reference signal
H =13
p
H =23
p
H =53

0.2
0
0

10

20

30

Fig. 6. The effect of the prediction horizon.

40
time, sec

50

60

70

80

39
Control signals

2.5

H =13
p
H =23
p
H =53

1.5

0.5

0
0

10

20

30

40
time, sec

50

60

70

80

Fig. 7. The effect of the prediction horizon.

In the first simulation the control horizon is equal to one; the prediction horizon is
changed to demonstrate the its effect. The different tracking performances and the
corresponding control signals are shown in Fig. 6 and Fig. 7 respectively.
The figures clearly show the effect of the prediction horizon. The shorter the horizon,
the faster the response and the control signal is more excited. As it is expected, the long
prediction horizon results in performance that is close to the mean level control. The
control signal is almost a step and the tracking behaviour tends to the step response with
unit gain as the prediction horizon increases.
The effect of the prediction horizon can be also followed on the closed loop poles of
the control loop. The prediction horizon is changed from 6 to 53 and all the other
parameters are kept constant. The resulted poles of the closed loop are presented in
Fig. 8. The poles are moving towards a slower region (exactly towards the process poles);
meanwhile the resulting damping factor does not change remarkably, as it could have
been already seen in the simulation results.

40
1

10T

7/10T

0.8

0.1

/5T

0.2
0.3
0.4
0.5
0.6
0.7
0.8

0.6

0.4

0.2

3/5T

/10T

0.9

-0.2

/10T
-0.4
3/5T

-0.6

/5T

-0.8

-1

10T
0

7/10T
0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

Fig. 8. The effect of the prediction horizon from 6 to 53 on the closed loop poles.

In the next simulation, the control horizon was changing; meanwhile the prediction
horizon was fixed to 23 steps. The results are given in Fig. 9 and Fig. 10.
Outputs

1
0.8
0.6
0.4

Reference signal
H =1
c
H =2
c
H =3

0.2
0
0

10

15

20
time, sec

25

30

35

40

Fig. 9. The effect of the control horizon.

The figures illustrate the effect of the control horizon well: the settling time of the control
loop decreased remarkably by increasing the control horizon. From the close to mean
level control (Hc=1), with the increasing control horizon value the dead-beat response
(Hc=3) is reached. Evidently, the higher (than 3) control horizon value can not result in
any further acceleration in the control loop. The corresponding control signals reflect the
great influence of the parameter. The higher control horizon value results in a more
excited control signal. In Fig. 10 the control signal axis is truncated, the peaks of the
control signals are approximately 1, 14 and 54 respectively.

41
Control signals

2
1.5
1
0.5
0
-0.5
-1

H =1
c
H =2
c
H =3

-1.5

-2
0

10

15

20
time, sec

25

30

35

40

Fig. 10. The effect of the control horizon. The control signal and time axis are truncated.

In the next set of simulations the prediction horizon is equal to 23 steps and the control
horizon is equal to 2 with the control weighting factor being changed from 0 to 100. (The
control weighting factors are assumed to be constant along the time in the prediction.)
The simulation results are shown in Fig. 11 and Fig. 12. For better orientation about the
weighting factor values, the elements of the GTG matrix are within the range 12 to 15 in
Eq. (9).
Outputs

1
0.8
0.6
0.4

Reference signal
Ro=0
Ro=1
Ro=10
Ro=100

0.2
0
0

10

20

30

40
time, sec

50

60

70

80

Fig. 11. The effect of the control weighting factor.

The Fig. 12 shows how the increasing weighting factor slows down the control loops by
the smoother control signal. As a consequence of the smaller control increments, the
overshoot also increased by the increased control weightings.

42
Control signals

2
1.5
1
0.5
0
-0.5
-1

Ro=0
Ro=1
Ro=10
Ro=100

-1.5
-2

10

20

30

40
time, sec

50

60

70

80

Fig. 12. The effect of the control weighting factor.

3.1.9 Feedforward control in GPC


In many cases, it is possible to apply the feedforward idea for disturbance rejection, if the
external disturbance can be measured. This situation is typical in processes where the
output is influenced by variations of the load regime and a variable describing the load
level is measured.
A good example of feed-forward control (Roffel & Betlem, 2004) can be found in the
distillation columns. The temperature of the column is influenced by the feed and the
reflux medium. The feed can be considered as a disturbance, and the temperature of the
column is controlled by the changing of the reflux. If the feed is changed then throughout
the feedforward controller the reflux flow is also changed.
According to Camacho & Bordons (2004), to implement the feedforward action in the
predictive controller, the process model must be extended with the external measured
disturbance:
y (t ) =

B ( q 1 )
A ( q 1 )

u ( t 1) +

M ( q 1 )
A ( q 1 )

v ( t 1) +

C ( q 1 )

D ( q 1 )

(t )

(15)

where v(t) is the measured disturbance at t instant, and M(q-1) is the polynomial
describing the effect of the disturbance.
To derive a k-step ahead predictor, similar steps are required that were performed in
Section 3.1.4. The k-step ahead prediction is:
y ( t + k t ) =

B ( q 1 )
A ( q 1 )

u ( t + k 1) +

M ( q 1 )
A ( q 1 )

v ( t + k 1) +

C ( q 1 )

D ( q 1 )

(t + k )

43
The first Diophantine equation is identical to Eq. (3).
C ( q 1 )

D ( q 1 )

= Ek ( q 1 ) + q k

Fk ( q 1 )
D ( q 1 )

After some evident manipulations:


y ( t + k t ) =

Ek ( q 1 ) B ( q 1 )
C ( q 1 )

u ( t + k 1) +

+ Ek ( q 1 ) ( t + k ) +

Fk ( q 1 )
C ( q 1 )

Ek ( q 1 ) M ( q 1 )
C ( q 1 )

v ( t + k 1) +

y (t )

The first term in the second row can be neglected for the same reason as earlier, since its
expected value is zero. To separate the past and future measured disturbance signals the
following Diophantine equation is solved:
Ek ( q 1 ) M ( q 1 )
C ( q 1 )

= N k ( q 1 ) + q k +1

Ok ( q 1 )
C ( q 1 )

(16)

After the substitution, the prediction is:


y ( t + k t ) =

Ek ( q 1 ) B ( q 1 )
C (q

Ok ( q

u ( t + k 1) + N k ( q 1 ) v ( t + k 1) +

) v t + F (q ) y t
+
()
()
C (q )
C (q )
1

To finally separate the effect of the past and future control signals the same Diophantine
equation is required as in Section 3.1.4:
Ek ( q 1 ) B ( q 1 )
C ( q 1 )

= Gk ( q 1 ) + q k

Lk ( q 1 )
C ( q 1 )

44
The final form of the predictor is:
y ( t + k t ) = Gk ( q 1 ) u ( t + k 1) + N k ( q 1 ) v ( t + k 1) +
+

Lk ( q 1 )
C ( q 1 )

u ( t 1) +

Ok ( q 1 )
C ( q 1 )

v (t ) +

Fk ( q 1 )
C ( q 1 )

y (t )

(17)

In the predictor two new terms appear comparing to the equation (5). The three terms in
the second row represent the free response, the process output if there is no change in the
control signal and in the measured disturbance. The free response is based on the past
values of the control signal, of the measured disturbance and of the process output. The
first term in the first row is the forced response, the effect of the future control signal
changes. The second term in the first row represents the effect of the future measured
disturbance. The changes in the measured disturbances are sometimes available ahead,
but it can be often assumed that there is no change in the future measured disturbance. If
the future disturbance is supposed to be constant, then the second term of the equation
disappears.
The control algorithm given in Eq. (9) can be kept:
u = H 1b = ( G T 1G + 2 ) G1 ( y free w )

but the free response is extended:

y free

LH ( q 1 )
OH ( q 1 )
m

LH +1 ( q 1 ) u ( t 1) OH +1 ( q 1 ) v ( t )
m
m
=
+
+

1
1

C (q )
C (q )

1
1
LH p ( q )
OH p ( q )
FH ( q 1 )
m

FH +1 ( q 1 ) y ( t )
+ m

C (q )

1
FH p ( q )

Applying the receding horizon idea, the control signal to be realized is:
u ( t ) = w ( t )

Hp

i = Hm

KL

u ( t 1)
C (q

KO

v ( t )

C (q

KF

y (t )

C ( q 1 )

45
where
OH ( q 1 )
m

OH +1 ( q 1 )
O= m
.

1
OH p ( q )

To get the final form of the controller, the same R-S-T polynomials are applied as they
were derived from the original case, and the new extra polynomial is:
Hp

Q (q

)=

kO
i

i = Hm

Hp

i = Hm

The final controller structure is shown in Fig. 13.

v(t)

( )
( )
1

Q q
C q 1

w(t)

( )
( )

C q 1
S q 1

( )
( )

M q 1
A q 1

u(t)

y(t)

( )
( )

q 1 B q 1
A q 1

( )
( )

C q 1
A q 1

( )
( )

(t )

Rq
C q 1

Fig. 13. The structure of the GPC controller extended with feedforward action.

46
The overall transfer function between the process output and the measured disturbance
can be expressed with the help of the previously given transfer functions:
y (t )

v (t )

Q ( q 1 ) y ( t )

C ( q 1 ) w ( t )

M ( q 1 ) A ( q 1 ) y ( t )
A ( q 1 ) C ( q 1 ) ( t )

q 1Q ( q 1 ) B ( q 1 ) + M ( q 1 ) S

S A ( q 1 ) + B ( q 1 ) R ( q 1 ) q 1

The closed loop transfer function between the measured disturbance and the output
clearly shows that the extension of the GPC reaches its aim: the transfer function has zero
transmission in the steady state.
Example 2: Feedforward disturbance compensation in GPC.

In this example the feedforward control implemented in GPC is shown. The process
model is the same as in the Example 1, but the process has a measured disturbance as
well. The measured disturbance model is:
M (s)
A(s)

( 10s + 1) e3s
100 s 2 + 15s + 1

In the simulations, the original GPC is compared to the GPC extended with the
feedforward. The controllers settings in both controllers were: Hp=23; Hm=3; Hc=2; the
control weighting factor is zero and the noise polynomial is C ( q 1 ) = 1 0.85q 1 . The
controller performances are shown in Fig. 14 and Fig. 15. In the simulation, there is a
reference signal step at t=5 sec, and a step kind (with amplitude 0.5) measured
disturbance at t=50 sec.
As it was expected, the control loop behaviours are the same in the tracking, since all
the tuning parameters are the same. The advantage of the extension appears at the
disturbance rejection. The figure of the control signals shows that the extended GPC acts
immediately after the disturbance, and it does not need to wait until its effect appears in
the output of the process. The process output is much smoother in the case, when the
GPC is extended with the feedforward action. The control signal is smaller and less effort
is required for the disturbance rejection.

47
Outputs
1.2
1
0.8
0.6
0.4
Reference signal
With feedforward
Without feedforward

0.2
0
0

10

20

30

40
time, sec

50

60

70

80

Fig. 14. The performances of the GPCs with and without feedforward disturbance
compensation.
Control signals

5
4
3
2
1
0
-1
-2
0

With feedforward
Without feedforward
10

20

30

40
time, sec

50

60

70

80

Fig. 15. The disturbance rejection of feedforward extended GPC loop.

3.2 Robustness of the GPC algorithm


In every model based controller design method, information about the dynamics of the
process is required. The accuracy and reliability of this available information may vary a
lot in practice. The behavior of the plant itself may change in time and that change is not
necessarily captured by the process model. Thus it is important to obtain a controller that
is less sensitive to these model uncertainties, or in other words the controller would need
to be robust enough.

48
To measure the robustness of the obtained controller, two indicators are here
presented: the modulus margin and the stability limit for the additive error. Both methods
are derived from the Nyquist stability criterion.
In the case of an open loop stable system, the control loop is stable if the Nyquist plot
of the open loop transfer function passes the critical point (-1,0) on the right. In case of an
open loop unstable process, according to the Nyquist criterion the control loop is stable if
the number of encirclement of the critical point counter clockwise is equal to the number
of the unstable poles of the open loop.
The minimal distance between the critical point and the Nyquist plot of the open loop
transfer function is related to the robust stability of the control loop.

3.2.1 The Modulus Margin


The modulus margin (M) is the radius of the circle centred in the critical point (-1,0)
and tangent to the Nyquist plot of the open loop transfer function of the control loop. The
modulus margin is illustrated in
Let us denote the transfer function of the process with p and that of the controller with
c. The open loop transfer function is pc. The transfer function between the output
disturbance and the controlled variable (also called as sensitivity function) is given by:
S (s) =

1
1 + p (s) c (s)

From the figure it follows:


1

M = 1 + p ( s ) c ( s ) min = S ( s ) max

In other words, the modulus margin is the inverse of the maximum modulus of the
sensitivity function. As a consequence the reduction of the modulus from the sensitivity
function will imply the increase of the modulus margin. The modulus margin of the
control loop is an important property: the low modulus margin implies a small tolerance
to parameter uncertainties in the critical frequency region; and consequently the larger
value of the modulus margin indicates more robust performance.

49

(-1,0)

pc

Fig. 16. The illustration of the modulus margin.

In the literature, the gain margin and phase margin measures are widely used. The
modulus margin gives boundaries for these measures. Based on trivial considerations, the
1
gain margin is higher than or equal to (1 M ) . The phase margin is higher than or
equal to arcsin ( M ) . With the application of the modulus margin it is possible to use
only one measure to obtain a lower limit for both the gain and phase margin.
Modulus margin

0.8

H =1
c
H =2
c
H =3

0.7

0.6
0.5
0.4
0.3
0.2
0.1
10

15

20

25

H 30
p

35

40

45

50

Fig. 17. The effect of the prediction horizon on the modulus margin in case of different
control horizons.

Note, that considering the R-S-T controller representation, the c controller is given by the
expression:
c (q

R ( q 1 )

) = S

(q )
1

50
Example 3: The effect of the control signal weighting on the modulus margin.

The modulus margins are calculated for the same process and controller settings as in the
Example 1. In Fig. 17 the modulus margins are illustrated as the function of the
prediction horizon, and the control horizon is the parameter of the functions.
The figure shows several important properties of the GPC algorithm. The control
horizon has ultimate effect on the modulus margin. The longer the control horizon, the
less the modulus margin is, thus the control loop is less robust. The prediction horizon
has a remarkable effect on the modulus margin when the control horizon is equal to one.
The longer the prediction horizon the more robust the control loop is. A longer control
horizon results in a smaller effect of the prediction horizon on the robustness. These
essential relationships must be kept in mind in the tuning of the controller.
In Fig. 18 the modulus margin is a function of the control weighting factor. The
prediction horizon is 20 and the control horizon is equal to 2.
The figure shows how the modulus margin is influenced by the control weighting
function. The larger factor resulted in a higher modulus margin, and thus a more robust
performance.
Modulus margin

0.7
0.65
0.6
0.55
0.5
0.45
0.4
0.35
0.3
-1
10

10

Ro

10

Fig. 18. The effect of the control weighting factor to the modulus margin.

10

51

3.2.2 The stability boundary for the additive modelling error


The model uncertainties can be described in many different ways. A simple way is to
derive the process with a family of linear time-invariant models. On the Nyquist plane it
is assumed that the plot of the members of the model family at a certain frequency lies
within a disk shaped range. Let us denote the nominal plant with p and the radius of the
disk la , which is also the function of the frequency. The family of plants described by the
disk is defined by:

= p: p ( i ) -p ( i ) < la ( )

(18)

This family of plants is also referred as an additive uncertainty description, and la is the
allowed additive uncertainty.
Considering an open loop stable plant, based on the Nyquist criteria the control loop is
stable if the additive uncertainty satisfies the following inequality:
1 + pc
c

> la

(19)

where c represents the controller in the control loop.


The graphical explanation of the inequality is given in Fig. 19. The derived additive
error formula can be considered as a robust stability limit, if it is satisfied and the process
satisfies inequality (18) the control loop is stable.

Im

Nyquist Diagram

(-1,0)

Re

Imaginary Axis

1+ ~
pc

la c

1 + pc

Fig. 19. The graphical derivation of the robust stability limit.

52
Example 4: GPC performance in the presence of modelling error.

The controllers of the previous example are tested in case of serious modelling error. The
controllers prediction horizons are 23 and 53; the control horizons are equal to 2, the
control weighting factors are chosen to be zero; the C noise polynomial is:
C ( q 1 ) = (1 0.9q 1 )(1 0.6q 1 ) .

The process is modelled as in the previous example:


G (s) =

2e 3 s
,
100 s + 15s + 1
2

but the real transfer function of the process to be controlled is:


G (s) =

2.6e 4 s
.
100 s 2 + 15s + 1

In Fig. 20 the stability limits for additive error (left side of Eq. (19)) are plotted, and the
modelling error is also plotted.
The modelling error exceeds the additive error boundary when the prediction horizon
is equal to 23 steps, thus the criterion of the stability is not satisfied, and the control loop
is unstable. In case of the long prediction horizon, the boundary is over the modelling
error, thus the control loop is stable. To test the result of the robust stability limit
calculation, the simulation of these control loops is also presented. In Fig. 21 the
simulation results are given and they are in accordance with the theoretical result. The
controller with prediction horizon equal to 53 is able to control the process even in the
presence of the modelling error. This result is in accordance with the result of Example 3:
the longer prediction horizon results more robust performance.

53
Stability error limit

10

-1

10

-2

10

H =23
p
H =53
p

Modeling error

-3

10 -3
10

-2

-1

10

10
frequency

10

10

Fig. 20. The robust stability limits at different prediction horizon and the modelling error.
Outputs

2
1.5
1
0.5
0
-0.5
-1
-1.5
-2

Reference signal
H =23
p
H =53
p

10

20

30

40

50
60
time, sec

70

80

90

100

Fig. 21. The simulation results in the presence of modelling error.

3.2.3 Effect of the noise model on the robustness


In Section 3.1.7 the transfer functions between the reference signal and the output, and
also the transfer function between the disturbance and the output were given. It was
shown, that in the transfer function between the reference signal and the output, it is
possible to eliminate the C polynomial that results in the transfer function given in
Eq. (14), (see on page 37).
Both in Fig. 5 and in the control algorithm given in equation (11) it can be seen, that
both the control increment and the process output are filtered with the C polynomial.
Thus with the proper choice of the C polynomial, several goals can be achieved: prevent
the too excessive control action resulted by the high frequency disturbances and attenuate

54
the prediction error caused by the model mismatch, which is particular important at high
frequencies.
Consequently the C polynomial can be considered as a design parameter to also tune
the robustness of the control loop. The slower filter results better robustness, but
unfortunately this is not always true (counterexamples are presented in Yoon & Clarke
1995). On the other hand, by applying a slower filter as C polynomial, the disturbance
rejection behaviour is also influenced.
The tuning of this C polynomial is a difficult question. Even though there are some
guidelines in (Yoon & Clarke 1994), it is suggested to tune it individually in every case.
Example 5: Modulus margin as a function of the C polynomial.

In the example the modulus margin is calculated in case of different C polynomials. The
process is the same as in Example 1. The controller parameters are the following:
prediction horizon is 23, control horizon
is 2, and control signal weighting is 0. The C
2
polynomial is: C ( q 1 ) = (1-k q 1 )
The calculated modulus margin values are plotted in Fig. 22. The effect of the C
polynomial can be properly followed, with the higher value of k parameter (slower filter)
resulting in a higher modulus margin, and thus a more robust controller. This effect is
limited, over a certain value of k, the modulus margin drops down.
Modulus margin

0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0

0.2

0.4

0.6

0.8

Fig. 22. The modulus margin at different C polynomials.

As it is known, the more robust performance is usually associated with slower


disturbance rejection. To trace this phenomenon, the control loop is simulated with three
different C polynomials. The simulation contains a step like reference signal change, and
a step kind output disturbance with 0.1 amplitude. Even though, in the derivation of the
GPC coloured noise disturbance was assumed, in the following the disturbances are
deterministic step kind disturbances. As a disturbance, the step signal was chosen to
facilitate the evaluation of the regulation performance of the controller.

55
The simulation results are given in Fig. 23. As it was expected, the C polynomial does
not affect the tracking behaviour, only the disturbance rejection is influenced. The more
robust control loop results slower regulation.
Outputs

1
0.8
0.6
0.4

Reference signal
C=(1-0.5q-1)2
C=(1-0.85q-1)2
C=(1-0.93q-1)2

0.2
0
0

20

40

time, sec

60

80

100

Fig. 23. The simulation results with different C polynomials.

To track the robust performance, the simulation is repeated with modelling error. The
process model and the real process is the same as in Example 4. In Fig. 24 the robust
stability limits are presented for control loops applying different C polynomials. In
Fig. 25 the simulation results are presented for the different polynomials. The conclusion
of the robust stability limit calculation and that of the simulation is clear. The modelling
error curve (|la|) exceed the stability limit of the control loop in case of the faster filter
and that means the control loop is unstable, which can be also observed in figure
including the simulation results. The stability limit of the control loop is over the
modelling error curve at every frequency, therefore the control loop is stable. It means
that it is possible to stabilize the control loop by the proper choice of the C polynomial.

56
Stability error limit

10

-1

10

-2

10

-3

10 -3
10

C=(1 -0.5q-1)2
C=(1 -0.93q-1)2
Modeling error
-2

10

frequency 10

-1

10

Fig. 24. The robust stability limits of the control loops, and the amplitude plot of the additive
modelling error.
Outputs

2
1.5
1
0.5
0
-0.5
-1
-1.5
-2
0

Reference signal
C=(1 -0.5q-1)2
C=(1 -0.93q-1)2
20

40

time, sec

60

80

100

Fig. 25. The simulation results with different C polynomials in the presence of modelling
error.

3.3 The cascade GPC algorithm


The basic idea of the cascade control structure was discussed in the introduction. In this
section a special cascade structure is introduced. In spite of the traditional two controllers
of the cascade structure, only one controller is proposed as shown in Fig. 26. This realizes
the same performance as in the traditional cascade structure. The control algorithm is
based on the GPC algorithm, but the original predictor is replaced by the special cascade
predictor. Also, the cost function is unchanged.

57
e2 (t )

e1 (t )

w(t )

u (t )

CGPC

Inner
Process

v(t )

Outer
Process

y (t )

Fig. 26. Cascade Generalized Predictive Controller.

3.3.1 The cascade predictor


The predictor covers the whole process. It predicts the process output based on the
measured process outputs y ( t ) , on the measured intermediate variable v ( t ) , and the
past control signals u ( t - 1) . The applied process models are ARIMAX models. The
model of the inner process:
v (t ) =

B1 ( q 1 )
A1 ( q 1 )

C1 ( q 1 )

u ( t 1) +

D1 ( q 1 )

1 ( t ) ;

where D1 ( q 1 ) = A1 ( q 1 )(1 q 1 )
The model of the outer process:
y (t ) =

B2 ( q 1 )
A2 ( q 1 )

v ( t 1) +

C2 ( q 1 )

D2 ( q 1 )

2 ( t ) ;

where D2 ( q 1 ) = A2 ( q 1 )(1 q 1 )
The k-step ahead predictor is derived throughout similar steps as in the case of the
original GPC algorithm. The k-step ahead prediction of the process output:
y ( t + k t ) =

B2 ( q 1 )
A2 ( q 1 )

v ( t + k 1) +

C 2 ( q 1 )

D2 ( q 1 )

2 (t + k )

The 1st Diophantine equation:


C 2 ( q 1 )
D2 ( q

= F2 k ( q

G2 k ( q 1 )

)+ D

(q )
1

qk

(20)

58
After substituting the equation:
y ( t + k t ) =

B2 ( q 1 ) D2 ( q 1 ) F2 k ( q 1 )
A2 ( q 1 ) C2 ( q 1 )

v ( t + k 1) +

G2 k ( q 1 ) B2 ( q 1 )
C2 ( q 1 )

v
t
2 ( t ) +

+
1
(
)
1
1
1
C2 ( q ) A2 ( q )
D2 ( q )

+ F2 k ( q 1 ) 2 ( t + k )

Rearranging the equation and considering that the expected value of the last term is equal
to zero:
y ( t + k t ) =

B2k ( q 1 )
C 2 ( q 1 )

v ( t + k 1) +

G2 k ( q 1 )
C2 ( q 1 )

y (t ) ,

where:
B2k ( q 1 ) =

B2 ( q 1 ) D2 ( q 1 ) F2 k ( q 1 )
A2 ( q 1 )

= B2 ( q 1 ) F2 k ( q 1 )(1 q 1 )

The 2nd Diophantine equation:


B2k ( q 1 )
C 2 ( q 1 )

= F2k ( q 1 ) + q k +1

G2k ( q 1 )

(21)

C2 ( q 1 )

Substituting to the prediction equation


y ( t + k t ) = F2k ( q 1 ) v ( t + k 1) +

G2k ( q 1 )
C2 ( q 1 )

v (t ) +

G2 k ( q 1 )
C 2 ( q 1 )

y (t )

Replacing v with the model of the inner process:


B1 ( q 1 )

C1 ( q 1 )
y ( t + k t ) = F2k ( q 1 )

u
t
k
t
k

2
+
+

1
(
)
(
)
1
D1 ( q 1 )
A1 ( q 1 )

G2k ( q 1 )
G2 k ( q 1 )
v
t
y (t )
+
+
(
)
C 2 ( q 1 )
C 2 ( q 1 )

59
rd

The 3 Diophantine equation:


F2k ( q 1 ) C1 ( q 1 )
D1 ( q 1 )

= F1k ( q 1 ) +

G1k ( q 1 )
D1 ( q 1 )

q k +1

(22)

After substituting the equation:


y ( t + k t ) =

B1 ( q 1 ) D1 ( q 1 ) F1k ( q 1 )
A1 ( q 1 ) C1 ( q 1 )

+ F1k ( q 1 ) 1 ( t + k 1) +

u (t + k 2) +

G2k ( q 1 )
C 2 ( q 1 )

G1k ( q 1 )
C1 ( q 1 )

v (t ) +

v (t ) +

G2 k ( q 1 )
C 2 ( q 1 )

y (t )

After additional rearrangements and considering zero mean value for the future
disturbance:
y ( t + k t ) =

B1 ( q 1 ) F1k ( q 1 )(1 q 1 )
C1 ( q 1 )

u (t + k 2) +

G1k ( q 1 ) G2k ( q 1 )
G2 k ( q 1 )
v (t ) +
y (t )
+
+
C 2 ( q 1 )
C1 ( q 1 ) C2 ( q 1 )

thus:
y ( t + k t ) =

B1k ( q 1 )
C1 ( q 1 )

u ( t + k 2 ) +

G1k ( q 1 ) G2k ( q 1 )
G2 k ( q 1 )

v
t
y (t )
+
+
+
()
C 2 ( q 1 )
C1 ( q 1 ) C2 ( q 1 )

where
B1k ( q 1 ) = B1 ( q 1 ) F1k ( q 1 )

The 4th Diophantine equation:


B1k ( q 1 )
C1 ( q 1 )

= F1k ( q 1 ) +

G1k ( q 1 )
C1 ( q 1 )

q k +1

(23)

60
Substituting it to get the final expression of the k step prediction:
y ( t + k t ) = F1k ( q 1 ) u ( t + k 2 ) +

G1k ( q 1 )
C1 ( q 1 )

u ( t 1) +

G1k ( q 1 ) G2k ( q 1 )
G2 k ( q 1 )

v
t
y (t )
+
+
+
(
)
C 2 ( q 1 )
C1 ( q 1 ) C2 ( q 1 )

The final expression of the cascade predictor contains one extra term of the v(k)
compared to the predictor of the original GPC controller.

3.3.2 The control algorithm


Based on the derived cascade predictor, the control algorithm is given in a similar way
than in the case of the original GPC. The only difference is that the formula of the free
response that is extended with the terms including the effect of the intermediate variable:

y free

G1, H ( q 1 )
G1, H ( q 1 )
m
m

u ( t 1)
v (t )
=
+
+

1
1
C
q
C
q

(
)
(
)
1
1
1
1
G1, H p ( q )
G1, H p ( q )
G2, H ( q 1 )
G2, H ( q 1 )
m
m

v (t )

y (t )
+
+

1
C2 ( q )
C 2 ( q 1 )

1
1

G2, H p ( q )
G2, H p ( q )

The cost function is exactly the same as in the case of the original GPC algorithm (see
Eq. 6), thus the analytical solution leads to:
u = H 1b = ( F T 1 F + 2 ) F 1 ( y free w )

where:
f0
f
1
F=

f H p

0
f0
f H p 1

f H p H c +1
0
0

61
and:
F1,k ( q 1 ) = f0 + f1 q 1 + + f k q k

The applied control signal according to the receding horizon concept is:
u ( t ) = K ( w y free )

where K is again the first row of the matrix ( G T 1G + 2 ) G T .


Assuming that the future reference trajectory keeps constant along the prediction
horizon, the control algorithm is given by:
1

u ( t ) =

Hp

Hp

k w (t ) k G
i

i = Hm

i = Hm

Hp

ki G2i
i = Hm

v (t )
C2

u ( t 1)

1i

C1

Hp

kG

i = Hm

2i

Hp

kG
i

i = Hm

1i

v (t )
C1

y (t )
C2

or equivalently:
Hp
Hp
Hp

u ( t ) C1 + q -1 ki G1i = C1 ki w ( t ) ki G1i v ( t )

i = Hm
i = Hm
i = Hm

Hp

ki G2i
i = Hm

p
C1
C
v ( t ) ki G2i 1 y ( t )
C2
C2
i = Hm

From this equation, the R-S-T kind polynomials can be seen:


Hp

S ( q 1 ) =

C1 + q 1 ki G1i
i = Hm

Hp

i = Hm

ki

Hp

R1 ( q -1 ) =

Hp

kG

i = Hm

1i

Hp

i = Hm

R2 ( q -1 ) =

Hp

k G

i = Hm

2i

Hp

i = Hm

R3 ( q -1 ) =

kG

i = Hm

2i

Hp

i = Hm

62
The structure of the control loop is given in Fig. 27.

1 (t )

2 (t )

( )
( )

( )
( )

C 2 q 1
A2 q 1

C1 q 1
A1 q 1

( )
( )

w(t)

( )
( )

u(t)

C1 q 1
S q 1

v(t)

q 1 B1 q 1
A1 q 1

( )
( )

q 1B2 q 1
A2 q 1

y(t)

( ) ( )
( ) ( )
R (q )
C (q )

R1 q 1 R2 q 1
+
C1 q 1 C2 q 1
1

Fig. 27. The control structure of the Cascade Generalized Predictive Controller.

In the derivation of the predictor, both the inner and the outer sub-processes are modelled
with ARIMAX models. As a consequence, an error free disturbance regulation is
expected from the controller. In Fig. 27 the integrator can be well observed in the inner
loop. To obtain error free regulation, an integrator should appear in the outer loop as well;
therefore the closed loop transfer function of the inner loop (without the outer feedback)
should contain an integrator.
The closed loop transfer function of the inner loop can be expressed as follow:

Y1'CL

B1q 1 C1
A1 S
q 1 B1C1
=
=
B q 1 C1 R1 R2 A1C2 S + q 1 B1 ( C2 R1 + C1 R2 )
1+ 1
+
A1 S C1 C2

Recalling that A1 = A1 , the first term of the denominator includes the 1q1 term.
From the third Diophantine equation (22):
Hp

kG

i = Hm

1i

Hp

Hp

i = Hm

i = Hm

= C1 ki F2i q i 1 A1 ki F1i q i 1

From the second Diophantine equation (21):


Hp

C2

i = Hm

ki F2i q i 1 = B2

Hp

i = Hm

ki F2i q i 1

Hp

k G

i = Hm

2i

63
Combining these expressions we get:
Hp

C2 R1 = C2

kG

i = Hm

1i

Hp

i = Hm

B2
= C1

i
Hp

Hp

ki F2i q i 1

i = Hm

C1

Hp

i = Hm

i = Hm

C2 A1

Hp

Hp

ki G2i

i = Hm

kF q

i = Hm

i 1

1i

Hp

i = Hm

Noticing that the second term is equal to the C1R2 expression:


Hp

C2 R1 + C1 R2 = C1B2

kF

i = Hm

Hp

i = Hm

C2 A1

i = Hm

i = Hm

kFq

i = Hm

1i

Hp

i = Hm

i 1

i 1

1i

Hp

i = Hm
Hp

C2 A1
ki

kF q

ki F2i q i 1
Hp

Hp

q i 1

Hp

i = Hm

= C1 B2

2i

This expression means that the closed loop transfer function of the inner loop contains an
integrative effect, and thus facilitates the error free regulation of the disturbances arising
in the outer loop as well.
Example 6: Comparison of a simple GPC and the CGPC.

This example illustrates the advantage of the cascade structure: a process is controlled by
an original GPC, and by a cascade GPC. (The CGPC is going to be compared to cascade
loop containing two GPCs later in Section 3.3.4.) The original GPC is identical to the
controller presented in Section 3.1.6.
The inner process is:
G1 ( s ) =

0.35e 3s
16 s 2 + 7 s + 1

64
The outer process is:
G2 ( s ) =

2e 20 s
1020s 2 + 50s + 1

These processes are going to be applied in the following to test and compare the cascade
structures. The ratio of the average residence times (corresponding to the 2 % error) is
about six, therefore it can be considered as a typical process where cascade control is
relevant.
The parameters of the controllers are: Hmin=23; Hp=83; Hc=1; the control weighting
factor is 0. In the CGPC, the C1 polynomial is equal to the denominator of the inner
process, the C2 is equal to the denominator of the outer process. The C polynomial of the
GPC algorithm was equal to the convolution of the C1 and C2 polynomials. In the
simulation, there was an inner (e1) and an outer disturbance (e2) at 200 and 600 sec
respectively, both are step kind with amplitude 0.5. The outer process outputs (primary
outputs) of the control loops are presented in Fig. 28. The figure clearly shows the
difference and the similarities of the control loops. The tracking behaviours and the outer
output disturbance (e2) regulations are identical in both cases, since the controller
parameters are identical. This implies that all the good properties of the GPC are kept in
the CGPC. The main difference is in the intermediate disturbance regulation. The CGPC
acts much earlier to regulate this disturbance, because its prediction is based on the
intermediate variable as well. This behaviour is specific to the cascade structures.
Both controllers could regulate the disturbances without error. Thus the CGPC
controller is also able to regulate perfectly. This result was expected based on the
derivation, since ARIMAX models were applied for both the inner and the outer
processes.
Outputs

1.8
1.6
1.4
1.2
1
0.8
0.6
0.4

Reference signal
GPC
CGPC

0.2
0
0

100

200

300

400
time, sec

500

600

700

800

Fig. 28. The tracking and regulation behaviour of the CGPC and the original GPC algorithm.

It is important to distinguish the CGPC controller and the GPC controller extended with
feedforward action. In the CGPC control loop, the disturbance is not measured, only its
effect on the intermediate variable. Meanwhile in the disturbance feedforward

65
compensation, the disturbance signal is measured. As a consequence in the CGPC all the
disturbances affecting the inner variable are regulated not only the measured disturbance
as it is in the feedforward compensation.

3.3.3 Robustness properties of the CGPC


The CGPC is derived as the traditional GPC. The only difference is the special prediction
model. Therefore all the robust properties of the GPC loop, shown in Section 3.2 are
valid. However, due to the special predictor, the robustness of the inner and outer loop
can be tuned separately as it is proved next.
To investigate the effects of these C polynomials, the open loop transfer functions of
the inner and outer loop are examined. The block diagram of the CGPC control loop is
redrawn to facilitate the derivation of the open loop transfer function of the inner loop
and it is shown in Fig. 29.
w(t )

( )
( )

u (t )

C1 q 1
S q 1

( )
( )

v(t )

q 1B1 q 1
A1 q 1

( )
( )

( )
( )
y (t ) q B (q )
A (q )

R1 q 1 R2 q 1
+
C1 q 1 C2 q 1

( )
( )

R3 q 1
C 2 q 1

Fig. 29. The redrawn block diagram of the CGPC controller.

Based on the figure, the open loop transfer function of the inner loop:
Y1OL

=
=

B1q 1 C1 R1 R2 B2 q 1 R3
+
+
=
A2 C2
A1 S C1 C2
B1q 1 A2 C2 R1 + A2 C1 R2 + B2 C1 R3 q 1
A1 A2 C2 S

From the first Diophantine equation (20):


Hp

i = Hm

ki G2i = C2

Hp

i = Hm

ki q i A2

Hp

kF

i = Hm

2i

qi

66
From the second Diophantine equation (21):
Hp

i = Hm

ki F2i q i =

C2
B2

Hp

i = Hm

ki F2i q i +

q
B2

Hp

k G

i = Hm

2i

Combining these two equations and the definition of the R3:


R3 =

1
Hp

i = Hm

R3 =

Hp
A2 C2
i
C2 ki q
B2
i = Hm
ki

1
Hp

i = Hm

Hp
A2 C2
i
C2 k i q
B2
i = H m

Hp

k Fq

i = Hm

2i

Hp

A2

k G

i = Hm

k F q q B

i = Hm

2i

Hp

A2
B2

2i

(24)

R2

The last term of the numerator in the open loop transfer function is:
Hp

B2 C1C2
q 1 B2 C1 R3 =

i = Hm

Hp

i = Hm

ki q i 1

Hp

ABCC
2 2 2 1
B2

k Fq

i = Hm

2i

Hp

i = Hm

q 1 A2 B2 C1
R2
B2

Substituting this expression to the numerator of the open loop transfer function:
q 1 B1 A2 C2 R1 + A2 C1 R2 + q 1 B2 C1 R3 =
Hp
Hp

i 1
k
q
ki F2i q i

i
i
H
i
H
=
=
= q 1 B1 C2 A2 R1 + B2 C1 Hm p
A2 C1 mH p


ki
ki

i = Hm
i = Hm

+ A C R A C R
2 1 2
2 1 2

The last two terms can be skipped in the numerator, thus the transfer function can be
simplified by the C2 polynomial. The resulting transfer function:

Y1OL

Hp
Hp

i 1
k
q
ki F2i q i

i
i=H
i=H
A2 C1 mH p
q 1 B1 A2 R1 + B2 C1 Hm p

ki
ki

i = Hm
i = Hm

=
A1 A2 S

67
The inner loops open loop transfer function does not include the C2 polynomial, thus its
robustness properties are not influenced by the C2 polynomial, only by the C1 polynomial.
To investigate the robustness properties of the outer loop, the open loop transfer
function of the outer loop is required:
Y2OL =

B1C1C2 q 1
B2 q 1 R3
A1C2 S + q 1 B1C2 R1 + q 1 B1C1 R2 A2 C2

Y2OL =

q 2 B1 B2 C1 R3
A1 A2 C2 S + q 1 A2 B1C2 R1 + q 1 A2 B1C1 R2

From the third Diophantine equation (22) it follows:


Hp

kF

1i

i = Hm

C1 p
1 p
ki F2i

ki G1i q i +1
A1 i = H m
A1 i = H m

From the fourth Diophantine equation (23):


H

p
B1 p
1 p
ki F1i = ki F1i +
ki G1i q i +1

C1 i = H m
C
i = Hm
1 i = Hm

Based on these equations:


H

p
p
B1C1 p
B
1 p
ki F2i 1 ki G1i q i +1 = ki F1i +
ki G1i q i +1

C
C1 A1 i = H m
C1 A1 i = H m
i = Hm
1 i = Hm

or equivalently:
Hp

k G =

i = Hm

1i

p
B1C1 p
B p
ki F2i q i 1 C1 ki F1i q i 1 1 ki G1i

A1 i = H m
A1 i = H m
i = Hm

With the necessary substitution:


H

S=

C1
Hp

i = Hm

ki

Hp
Hp

i 1
1
1 B1
ki F2i q i 1 q 1
1 + q ki F1i q q

A1 i = H m
i = Hm

B1 p
ki G1i
A1 i = H m
Hp

i = Hm

68
Observing the last term, in the expression the R1 polynomial can be found. The
denominator of the outer loops open-loop transfer function is:
den (Y2OL ) =

Hp
H

A1 A2 C1C2
q 1 B1 p
1
i 1
ki F2i q i 1
1 + q ki F1i q

Hp
A1 i = H m
i = Hm

ki
i = Hm

q A2 B1C2 R1 + q 1 A2 B1C2 R1 + q 1 A2 B1C1 R2 =


=

Hp
Hp
Hp

A2 C1
1
i2
i2

A
C
A
C
k
F
q
B
C
k
F
q
q
B
ki G2i

1
2
1
2
1
1
2
2
2
i
i
i
i
Hp
i = Hm
i = Hm
i = Hm

ki

i = Hm

Since the C1 polynomial appears as a factor in both the numerator and denominator, it is
possible to eliminate it from the open loop transfer function of the outer loop that
follows:
q 2 B1 B2 R3
Y2OL =

Hp

i = Hm

Hp
Hp
Hp

A2 A1C2 + A1C2 ki F1i q i 2 B1C2 ki F2i q i 2 q 1 B2 ki G2i


i = Hm
i = Hm
i = Hm

This result means, that C1 is eliminated from the open loop transfer function of the outer
loop, thus neither its stability nor robustness is influenced by this polynomial.
Remembering that in the case of the inner loop, the C2 polynomial was shown not to
affect the inner loop robustness, and now, the open loop transfer function of the outer
loop is shown to be independent from the C1 polynomial. Thus in the CGPC controller it
is possible to independently tune the robustness of the loops, i.e. the robustness against
the model uncertainties of the inner and the outer processes.
Example 7: The effects of the disturbance polynomial on the modulus margins.

In this example the modulus margins of the inner and outer loops of the CGPC loop are
calculated to demonstrate the independent tuning of the inner and outer loops. The
processes and the CGPC controller settings are the same as in the previous Example 6.
The C1 and C2 polynomials are changing, and the modulus margins are calculated for
both the inner and the outer loop. The results are shown in Fig. 30. The C1 curve
shows
2
the calculated modulus margin as2 a function of the k parameter; C1 = (1 kq 1 ) and C2 is
fixed to be equal to 2(1 0.85q 1 ) . The C2 curve shows the modulus margin values when
the C1 is (1 0.5q 1 ) .
The result is in accordance to the previous derivations: the noise model of the inner
process affects only the inner loop robustness; meanwhile the disturbance model of the
outer process affects only the robustness of the outer loops.

69
Outer loop

Inner loop

0.9
C
1
C
0.8

0.8

C
1
C
2

0.7
0.7

0.6
0.5

0.6

0.4
0.5

0.4
0

0.3
0.5
k

0.2
0

0.5
k

Fig. 30. The effect of the C polynomials to the modulus margins of the inner and of the outer
loop.

3.3.4 Comparison of CGPC with cascade GPC-s


In Example 6, the CGPC algorithm was compared to an original GPC algorithm to show
the advantage of the cascade structure. In this section, the regulation behaviour and the
robustness properties of the CGPC will be compared to those of a cascade structure
including two GPCs (2GPC). The 2GPC control scheme is identical to Fig. 1 where the
two controllers are GPCs. The control loop is presented in Fig. 31 with the corresponding
R-S-T polynomials.
In the tuning of the 2GPC loop, it is necessary to calculate the inner loops closed loop
transfer function, since the outer GPC needs the model of the process to control, that
includes the whole inner loop. First the inner loop is required to be tuned, and then to
calculate its closed loop transfer function by equation (12) and finally to tune the outer
GPC controller.

70

1 (t )

2 (t )

( )
( )

( )
( )

C1 q 1
D1 q 1

w(t)

( )
( )

C2 q 1
S 2 q 1

vref (t )

( )
( )

C1 q 1
S1 q 1

( )
( )

q 1 B1 q 1
A1 q 1

(
(
R (q
C (q

R1 q 1
C1 q 1
2

C 2 q 1
D2 q 1

v(t ) q 1B (q 1 )
2

( )

y(t)

A2 q 1

)
)
)
)

Fig. 31. The structure of the cascade control loop including 2 GPCs.

Example 8: Control behaviour of the CGPC and of the 2GPC loops.

In this example, the presented simulation contains a reference signal step (t=1 sec), a step
kind inner disturbance (e1) at t=200 sec and a step kind outer disturbance (e2) at
t=500 sec. Both disturbances have amplitude of 0.5. The controller parameters are in the
CGPC and in the outer GPC of the 2GPC structure: Hp=83; Hmin=23; Hc=1; the control
signal weighting factors are zero. The inner loop parameters are: Hp=13; Hmin=3; Hc=2
and the control signal weighting factor is zero. The C polynomials were chosen to result
the same modulus margins in the corresponding loops
(in the 2GPC loop the polynomials
2
2
are C1 = (1 0.81q 1 ) and C2 = (1 0.955q 1 ) ; in the CGPC loop they are
2
2
C1 = (1 0.8q 1 ) and C 2 = (1 0.96q 1 ) ).
The controlled variables and the reference signal are presented in Fig. 32.

71
Outputs

1.6
1.4
1.2
1
0.8
0.6
0.4

Reference signal
CGPC
2GPC

0.2
0
0

100

200

300

400
time, sec

500

600

700

800

Fig. 32. The tracking and regulation performance of the CGPC and the 2GPC control loops.

The tracking behaviour and the second disturbance regulation are close to each other,
with the 2GPC loop being slightly faster. The main difference can be observed at the
inner disturbance. This difference comes from the original cascade structure. To
understand it, the intermediate variables are presented in Fig. 33 with the intermediate
reference signal of the 2GPC loop.
Intermediate variables
2.5

V
2GPC
V
ref,2GPC
v

CGPC

1.5
1
0.5
0
-0.5
180

200

220

240
time, sec

260

280

300

Fig. 33. The intermediate variables of the CGPC and 2GPC control loops.

The figure clearly shows the disadvantage of the traditional cascade structure. The
intermediate variable of the 2GPC loop is regulated well, right after the disturbance.
However, after the delay of the outer process, the reference signal of the inner loop is
changed, and a new course is started, since the effect of the inner disturbance appears on
the process output therefore the outer controller also tries to regulate the disturbance. In
the CGPC this phenomena can not be observed, since there is only one controller. The
CGPC calculates the control signal considering the error between the predicted output
and the reference signal in the cost function, the inner variable is not regulated directly.

72
These facts explain the remarkably faster inner disturbance regulation, even with the
similar controller parameters.
Example 9: Comparison of robustness.

In the previous section, the robustness of the CGPC controller was shown. According to
that result, the inner and the outer loops can be tuned separately by the C polynomials. In
this example, the original cascade structure is examined to see, if the loops can be tuned
separately from the robustness point of view. The same figure (the C polynomials effect
on the modulus margins) is prepared for the 2GPC case as for the CGPC in Example
7
2
and presented in2 Fig. 34. The fixed polynomials were equal to C1 = (1 0.8q 1 ) and
C2 = (1 0.9q 1 ) .
0.65
0.6

Inner loop

0.65

C
1
C

0.6

C
1
C
2

0.55

0.55

0.5

0.5

0.45

0.45

0.4

0.4

0.35

0.35

0.3

0.3

0.25

0.25

0.2
0

Outer loop

0.5
k

0.2
0

a)

0.5
k

b)

Fig. 34. The modulus margins of the 2GPC control loops.

In Fig. 34 a), the inner loop modulus margin is shown, and a remarkable difference can
be observed against the same figure of the CGPC algorithm (Fig. 30): both C1 and C2
polynomials affect the modulus margin.
In Fig. 34 b) one can see, that the outer loop modulus margin behaves the same way as
in the CGPC case: only the C2 affects the robustness. This can be easily understood
remembering the closed loop properties of the GPC. The equation (14) showed that the C
polynomial does not appear in the transfer function between the reference signal and the
output in a simple GPC control loop. Considering that the inner loop is part of the outer
loop as a closed loop, therefore the C1 does not affect the outer loop behaviour.
The effect of the C2 polynomial to the inner loop robustness can be easily proved by
the closed loop and open loop transfer functions. The notation is based on Fig. 31.
The open loop transfer function of the inner loop is:
2 GPC
Y1OL
=

S1 S 2 A1 A2
S1 S2 A1 A2 + q 1 B1 A2 R1 S2 + q 2 B1 B2 C1 R2

73
The closed loop transfer function of the outer loop is:
GPC
=
Y22CL

B1 B2 C1C2
S1 S2 A1 A2 + q B1 A2 R1 S2 + q 2 B1 B2 C1 R2
1

It is important to notice, that both transfer functions have the same denominator. It is
already shown, that the C1 polynomial does not affect the closed loop transfer function of
the outer loop. It follows, that the C1 polynomial must be a factor of the denominator,
since it appears in the numerator.
Regarding the series of the inner loop and the outer process as the process to control
from the outer loop point of view, and considering the fact, that the noise polynomial
does not affect the closed loop transfer function as it was already shown, the denominator
must include the C2 polynomial as a factor as well.
Regarding the numerator of the open loop transfer function of the inner loop, the S1
polynomial does not include the C2 polynomial, the S2 polynomial includes the C2
polynomial, but not as a factor. The results show that it is not possible to eliminate the C2
polynomial from the open loop transfer function of the inner loop therefore it influences
the robustness properties of the inner loop.
In Section 3.2.3, it was already shown how the noise polynomial effects the
disturbance rejection. In Section 3.3.3 the robustness properties of the CGPC algorithm
were presented and it was found that the robustness of the inner and outer loop could be
totally decoupled. Both the regulation dynamics and the robustness properties are strictly
related to the open loop transfer function in general and as a consequence it can be
assumed that the regulation performances of the inner and the outer loop can be also
decoupled in case of the CGPC algorithm. For example the disturbance rejection of the
inner disturbance can not be influenced by the C2 outer noise polynomial, only by the C1
inner noise polynomial, and vice versa with the outer disturbance. To prove it, the transfer
function between the disturbances and the process output must be investigated.
The transfer function between the inner disturbance and the process output is given:
y ( q 1 )

1 ( q

q 1 B2 C2
A1 A2 C2 S + q A2 B1C2 R1 + q 1 A2 B1C1 R2 + q 2 B1 B2 C1 R3
1

74
In the first two terms of the denominator, the C2 is clearly a factor and the rest of the
denominator has to be further examined. Substituting R3 according to equation (24) into
the last term, it results:
Hp

q 2 B1 B2 C1 R3 = B1 B2 C1C2

kq

i2

i = Hm

Hp

i = Hm

Hp

A2 B1C1C2

k Fq

i = Hm

i2

2i

q 1 A2 B1C1 R2

Hp

i = Hm

Substituting it to the denominator, and neglecting the terms with R2, the denominator is
given by:
Hp

A1 A2 C2 S + q A2 B1C2 R1 + B1 B2 C1C2

kq

i = Hm

Hp

Hp

i2

i = Hm

ki

A2 B1C1C2

k Fq

i = Hm

i2

2i

Hp

i = Hm

The C2 polynomial appears in all terms, thus it is possible to eliminate this polynomial
from the transfer function. Therefore, the transfer function between the inner disturbance
and the process output (the regulation dynamic of the inner disturbance), is independent
of the outer disturbance polynomial.
The transfer function between the outer disturbance and the process output is:
y ( q 1 )

2 ( q
where Y2OL =

1
1 + Y2OL

q 2 B1 B2 C1 R3
A1 A2 C2 S + q 1 A2 B1C2 R1 + q 1 A2 B1C1 R2

In the previous section, the open loop transfer function of the outer loop (Y2OL) was
already examined, and the inner noise polynomial (C1) was found in the denominator as a
factor. Since the polynomial appears explicitly in the numerator of the open loop transfer
function, it is possible to eliminate it from the transfer function. Therefore the outer
disturbance regulation is independent from the inner noise polynomial.

75
Example 10: The effect of the C polynomials on the regulation.

To present the independency of the regulations,


in this example the C1 and the C2
2
polynomials were changed to C = (1 0.9q 1 ) , and the regulations of the inner and outer
disturbances are shown. All the other details are identical to the Example 6. The
regulations of the CGPC are presented in Fig. 35, the regulations of the 2GPC scheme are
shown in Fig. 36.
1.8

1.6

Outputs
Reference signal
Original C and C
1
2
Modified C
1
Modified C
2

1.4

1.2

0.8
200

300

400

time, sec

500

600

700

Fig. 35. The regulation of the CGPC structure with different noise polynomials.

The simulation results are in accordance with the expectations: in the CGPC case, the
change of the inner loop noise polynomial (C1) affects only the inner disturbance
rejection, meanwhile the change of the outer loop noise polynomial influences only the
outer loop regulation performance.
In the previous example, in the 2GPC structure, it was found that the inner loop noise
polynomial (C1) does not influence the outer loop robustness, but the outer loop noise
polynomial (C2) affects the inner loop robustness. This result implies that the change of
the C1 does not influence the regulation of the outer disturbance, but the change of the C2
polynomial modifies the regulation of the inner disturbance. The simulation results
shown in Fig. 36 exactly satisfy this assumption.

76
Outputs

Reference signal
Original C and C
1
2
Modified C
1
Modified C
2

1.5

0.5
200

300

400

time, sec

500

600

700

Fig. 36. The regulation of the 2GPC structure with different noise polynomials.

3.4 Constraint control problem


So far the control problem was formulated considering all the signals to be unlimited. In
the GPC controller, the cost function could have been minimized analytically because of
this assumption. Unfortunately this is not realistic, in practice the processes are usually
subject to different constraints, such as plant input constraints by actuator, plant output
constraints by operational limits or safety reasons.
The most common actuator saturation cases are the rate and level limits, for example
the valve opening speed, and its maximum-minimum flow. By operational and safety
considerations, it is often required to keep a certain variable within a band, or under
certain limits (for example the level of the tank, the maximum temperature of
constructions, the flue gas oxygen content because of the efficiency considerations).
Various anti-windup techniques have been developed to cope with these problems. Historically, such techniques have been applied since the 1960s (Glattfelder & Schaufelberger, 2003). In research, special interest has been paid to this topic since the 1980s, and it is still an important question according to numerous current publications.
As a natural requirement, the GPC must be equipped to handle constraints on the input or output signal. These constraints are treated similarly within the predictive controller, even though their nature is different. Since the output constraints are usually based on safety considerations, these must be taken care of in advance: it is not allowed to apply a control signal that would result in constraint violation.
An unhandled input constraint may result in an oscillatory or even unstable process output. This fact underlines the importance of the investigation of the constrained case.

Example 11: The effect of the input limits.

In this example, the same GPC loop is considered as in Example 1 with the following controller parameters: Hp=23; Hmin=3; Hc=2; the control signal weighting factor is zero, and the C polynomial is equal to the denominator of the process: $C(q^{-1}) = 1 - 1.85143q^{-1} + 0.86071q^{-2}$. The main difference between the simulations is that the control signal is subject to rate and level constraints. The control signal is in the [-1,+1] range and the rate limit is 0.4. The simulation includes the original GPC loop without the input constraints as well. The process outputs are presented in Fig. 37, the corresponding realized control signals are given in Fig. 38 with a truncated time axis.
[Figure: process outputs vs. time (s); curves: Reference signal, Constrained GPC loop, Unconstrained GPC loop.]

Fig. 37. The degradation of the control performance due to the input constraints.
[Figure: control signals vs. time (s); curves: Constrained GPC loop, Unconstrained GPC loop.]

Fig. 38. The control signals with and without control input constraints.

The figures reflect well the effect of the control input constraints: the control loop slows down and the overshoot is remarkable. The effect of the control input limitation appears in the prediction only after the process delay; until then, the two controllers compute the same control signal, although in the constrained case the constraints modify the applied signal remarkably.


3.4.1 Constrained Generalized Predictive Control


The control algorithm in Section 3.1.6 was derived by minimising the cost function, considering that no signal is subject to constraints. Considering only control signal constraints, in several cases it is acceptable either that the analytical solution is applied to the process (and the actuator may saturate), or that the control signal gets clipped according to the limitations. The difference between these solutions is in the prediction: if the analytical solution is applied, then the prediction is false, while in the clipping case the prediction is based on the realizable, clipped control signal. The clipping method is applicable if only the input signal is constrained.
Remembering that the cost function is quadratic, clipping the result of the analytical minimization gives the optimal solution if the control horizon is equal to one. If the control horizon is higher than one, simply clipping the control signal does not guarantee the optimal solution. To illustrate this, consider the cost function and constraints presented in Fig. 39 with the contour plot of the cost function. The control horizon is equal to 2; the axes are the actual and the next control signal increments. The analytical solution is the [-1,2] control sequence; the clipped solution is [-1,1]; and the result of the constrained optimization is the [-0.1,1] control sequence. Since the GPC is a receding horizon controller, the difference between the clipped applied control signal (-1) and the constrained optimised control signal (-0.1) is remarkable.
[Figure: contour plot of the cost function over the (du(k), du(k+1)) plane, with the constraint box and the analytical, clipped and constrained-optimal points marked.]
Fig. 39. The analytical and the constrained optimal control signal on the cost function contour
plot.

The minimization of the cost function subject to constraints is a traditional quadratic programming problem. From the several methods available in the literature, the Feasible Direction Method was chosen to be applied in the following. The method is given in detail in Appendix 3.
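The difference between clipping and constrained minimization can be reproduced on a small two-variable quadratic cost. The following sketch uses a generic bounded optimiser instead of the Feasible Direction Method, and the H and f values are assumed for illustration only (they do not reproduce the exact cost of Fig. 39):

```python
import numpy as np
from scipy.optimize import minimize

H = np.array([[2.0, 1.2],
              [1.2, 1.0]])          # Hessian of the quadratic cost (assumed values)
f = np.array([-0.4, -3.0])          # linear term (assumed values)

def cost(du):
    return 0.5 * du @ H @ du + f @ du

# unconstrained (analytical) minimiser: H du = -f
du_analytic = np.linalg.solve(H, -f)

# clipping the analytical result to the box |du| <= 1
du_clipped = np.clip(du_analytic, -1.0, 1.0)

# constrained optimisation over the same box (stand-in for the Feasible Direction Method)
res = minimize(cost, x0=np.zeros(2), bounds=[(-1.0, 1.0)] * 2)
du_constrained = res.x

print("analytical :", du_analytic, cost(du_analytic))
print("clipped    :", du_clipped, cost(du_clipped))
print("constrained:", du_constrained, cost(du_constrained))
# with Hc > 1 the clipped sequence has a higher cost than the constrained optimum,
# which is the point illustrated by Fig. 39
```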
The output signal constraints, and constraints on any predictable variable, can be handled properly in predictive control. When the analytical minimisation of the cost function results in a control signal sequence that causes constraint violation, numerical optimisation is required and the constraints are included in the optimisation. Naturally, these constraints are not the same simple boundaries as shown in Fig. 39, but they do not introduce difficulty for the applied constrained numerical optimisation method.
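Since the predicted outputs are affine in the future control increments, an output limit translates into a linear inequality constraint on the control sequence. A minimal sketch with an assumed dynamic matrix G, free response f and output limit y_max (all values are illustrative only, not taken from the thesis examples):

```python
import numpy as np
from scipy.optimize import minimize

# assumed prediction model: y_pred = f + G @ du (illustrative values)
G = np.array([[0.2, 0.0],
              [0.5, 0.2],
              [0.9, 0.5]])
f = np.array([9.6, 9.9, 10.2])        # free response
w = np.array([10.0, 10.0, 10.0])      # reference trajectory
y_max = 10.1                           # output limit
lam = 0.01                             # control weighting

def cost(du):
    e = w - (f + G @ du)
    return e @ e + lam * du @ du

cons = [{"type": "ineq", "fun": lambda du: y_max - (f + G @ du)}]   # y_pred <= y_max
res = minimize(cost, x0=np.zeros(2), constraints=cons, method="SLSQP")
print(res.x, f + G @ res.x)
```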
Example 12: Performance of the input constrained GPC.

In this example, the same control loop is examined as in Example 11, but the Feasible Direction Method is applied for the minimization. For comparison purposes, the performance of the GPC loop without the numerical optimization method is also given, as well as the performance of the control loop when there is no constrained minimization in the controller but the clipped previous control signals are applied in the prediction. The simulation contains only a reference signal change; it is of no importance whether the saturation is the result of a reference signal change or of some serious disturbance. The process outputs are presented in Fig. 40, the control signals are given in Fig. 41. The figures clearly show the advantage of the numerical optimization: in spite of the active constraints, the setpoint is followed with only a small overshoot. According to the figure of the outputs, the clipping method shows an improvement over the unhandled constraint case.
[Figure: process outputs vs. time (s); curves: Reference signal, GPC with constraint min., GPC without constraint min., GPC with clipping.]

Fig. 40. The process outputs of the GPC with and without numerical optimization.

[Figure: control signals vs. time (s); curves: GPC with constraint min., GPC without constraint min., GPC with clipping.]

Fig. 41. The applied control signals of the GPC with and without numerical optimization and
with applying the clipped control signal in the prediction. (In the case of the GPC without the
numerical minimization, the actuator clips the control signal according to its constraints.)

The control signal in the case of the numerical optimization is a very smooth signal. The control signal of the controller with the clipping method is poor and exhibits considerable oscillation; even the control signal of the unhandled constraints case looks better.
Example 13: Constraint on the output.

In this example, the process output is not allowed to exceed a certain value. The process is controlled with a GPC without considering this limitation, and with a GPC where the constraint is included. The process is:

$$G(s) = \frac{1}{100s^{2} + 10s + 1}$$

The sampling time is one second and the controller parameters are: Hp=41; Hmin=1; Hc=1; the control signal weighting factor is zero, and the C polynomial is equal to the denominator of the process. The constraint on the process output is equal to 10.1. The performances of the controllers are shown in Fig. 42 and the corresponding control signals are given in Fig. 43.

[Figure: process outputs vs. time (s); curves: Reference signal, GPC without constraint on the output, GPC with assumed constraint on the output.]

Fig. 42. The process outputs of the GPC with and without considering the output constraint.
[Figure: control signals vs. time (s); curves: GPC without constraint on the output, GPC with assumed constraint on the output.]

Fig. 43. The applied control signals of the GPC with and without considering the output
constraint.

According to the figure of the outputs, the GPC could properly satisfy the limitation on the output signal. It is important to observe that the control loop slowed down as a consequence of satisfying the constraint. The difference between the control signals is remarkable.

3.4.2 Constrained Cascade Generalized Predictive Control


The CGPC algorithm has to face the constraints as well. Since the application of the
cascade predictor does not require any additional considerations in the minimization of
the cost function, the handling of the constraint can be solved in exactly the same way as
in the traditional GPC.

However, regarding the traditional cascade schemes, there is an important question: how to manage the control input constraint in the two separate loops? In the traditional cascade control structure (as shown in Fig. 1), the control input saturation must be handled in the inner loop, but as was shown in the previous section, it has effects on the outer loop as well. As could be observed, the control input constraints always slow down the control loop response. Thus in the cascade structure, during the tuning of the outer loop, it must be kept in mind that in the presence of control input saturation the inner loop slows down, resulting in modelling error. To illustrate this effect, in the following example the CGPC and the traditional cascade structure are compared in the presence of control input saturation.
Example 14: Input constraints in the CGPC and 2GPC loops.

The processes and the controller settings are the same as in Example 8. In the CGPC and in the outer GPC: Hp=83; Hmin=23; Hc=1; the control signal weighting factors are zero. The inner loop parameters are: Hp=13; Hmin=3; Hc=2 and the control signal weighting factor is zero. The C polynomials are the same as in Example 8, resulting in the same modulus margins in the corresponding loops. The inner loop control signal is within the [-2,2] range; the rate of the control signal is within the [-0.2,0.2] range.
The outputs and the control signals are presented in Fig. 44 and in Fig. 45.
[Figure: process outputs vs. time (s); curves: Reference signal, CGPC, 2GPC.]

Fig. 44. The tracking performances of the constrained CGPC and 2GPC loops.

[Figure: control signals vs. time (s); curves: CGPC, 2GPC.]

Fig. 45. The control signals of the constrained CGPC and 2GPC loops during the tracking.

The 2GPC loop has a larger overshoot and its control signal shows oscillation. This is in accordance with the expectations: the inner loop of the 2GPC structure slows down because of the rate and level limits, and this means modelling error for the outer loop. The controller is tuned to be robust enough to cope with such a modelling error, but the oscillation already appears in the control signal.
In the example, the performance degradation of the 2GPC loop comes from the modelling error arising from the active constraints that slow down the inner loop. If the information about the saturation in the inner loop were delivered to the outer controller, the problem could be solved. In Åström & Hägglund (1995), an anti-windup method is proposed for cascade structures including PID type controllers (the block diagram was presented earlier, in Fig. 17). In the following example, the CGPC is compared to a control loop (2PI) including two PI controllers with this special anti-windup extension.
Example 15: Input constraints in the CGPC and in the 2PI loops.

The process to control and the level limit of the control signal are the same as in the previous example, but the rate of the control signal is within the [-0.02, +0.02] range. The CGPC controller is tuned as in the previous example; the PI controller parameters are tuned based on the Kappa-Tau method presented in Åström & Hägglund (1995). The primary controller gain is 0.3, the integration time is 50 sec and the b weighting factor (weighting the setpoint in the error of the proportional part) is 0.6; the secondary controller gain is 1, the integration time is 4 sec and the b factor is 1. The tracking time constants of the controllers were chosen as the square root of the integration time of the controllers, as proposed in Åström & Hägglund (1995). The process outputs are shown in Fig. 46 and the corresponding control signals are presented in Fig. 47. In the figures, the performance of the unconstrained 2PI cascade loop is also presented to facilitate the evaluation of the constrained 2PI loop.

[Figure: process outputs vs. time (s); curves: Reference signal, CGPC, 2PI, 2PI without constraints.]

Fig. 46. The tracking performances of the CGPC and 2PI loops in the presence of control
input saturation.
[Figure: control signals vs. time (s); curves: CGPC, 2PI, 2PI without constraints.]

Fig. 47. The control signals of the CGPC and 2PI loops in the presence of control input saturation.

The process outputs presented in the figure satisfy the expectation that the CGPC controller remains the fastest, while the performance of the 2PI loop also remains smooth. In the control signal figure, one can see that the control signal of the 2PI loop is smooth and does not show any oscillation that could result from the saturation. The constraint on the control signal slowed down the 2PI loop, unfortunately making it even slower than the rate limit alone would require.

3.5 Cascade GPC based on state space model


The state space models can also be used to formulate the predictive control problem. The beneficial property of the state space representation is that a multiple-input multiple-output (MIMO) process has the same form of description in state space as a single-input single-output (SISO) process; only the dimensions of the input and the output vectors differ.
Up to now, the proposed CGPC has been considered based on the transfer function model, but it can naturally be formulated based on a state space model as well. The overall process can be regarded as a single-input (control signal) two-output (intermediate variable and controlled variable) process.
In Fig. 1 the common form of the cascade problem was presented, which is called the serial cascade control structure. Another possible interpretation of the cascade control problem is the parallel cascade control structure that is presented in Fig. 48.
[Figure: parallel cascade control structure; the outer controller acts on the reference w(t) and the output y(t) of the whole process and sets v_ref(t); the inner controller produces u(t), which feeds both the inner process (output v(t)) and the whole process (output y(t)); disturbances ξ1(t) and ξ2(t) act on the inner and the whole process, respectively.]

Fig. 48. Parallel cascade control structure.

The two structures lead to different state space models. A simple example is chosen to illustrate the difference. For the sake of simplicity, let us assume that the inner process is a first order process and the outer process is a second order process.
The discrete time models of the two processes are:

$$\frac{B_1(q^{-1})}{A_1(q^{-1})} = \frac{b_{1,0}\,q^{-1}}{1 + a_{1,1}q^{-1}} \qquad \text{and} \qquad \frac{B_2(q^{-1})}{A_2(q^{-1})} = \frac{b_{2,0}\,q^{-1} + b_{2,1}\,q^{-2}}{1 + a_{2,1}q^{-1} + a_{2,2}q^{-2}}$$

Let us denote:

$$F_1(q^{-1}) = (1-q^{-1})A_1(q^{-1}); \quad F_2(q^{-1}) = (1-q^{-1})A_2(q^{-1}); \quad F_e(q^{-1}) = (1-q^{-1})A_1(q^{-1})A_2(q^{-1});$$
$$H_2(q^{-1}) = (1-q^{-1})B_2(q^{-1}) \quad \text{and} \quad C_e(q^{-1}) = C_1(q^{-1})\,C_2(q^{-1})$$
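These auxiliary polynomials are plain polynomial products, so they can be formed directly from the coefficient vectors. A minimal sketch with assumed example coefficients (the thesis does not give numerical values at this point):

```python
import numpy as np

# polynomials in q^-1 as coefficient arrays [1, c1, c2, ...] (ascending powers)
A1 = np.array([1.0, -0.8])            # assumed example: 1 - 0.8 q^-1
A2 = np.array([1.0, -1.5, 0.56])      # assumed example
B2 = np.array([0.0, 0.4, 0.2])        # b2,0 q^-1 + b2,1 q^-2
C1 = np.array([1.0, -0.9])
C2 = np.array([1.0, -0.9])
delta = np.array([1.0, -1.0])          # (1 - q^-1)

F1 = np.convolve(delta, A1)
F2 = np.convolve(delta, A2)
Fe = np.convolve(delta, np.convolve(A1, A2))
H2 = np.convolve(delta, B2)
Ce = np.convolve(C1, C2)
print(Fe)   # coefficients [1, fe1, fe2, fe3, fe4]
```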

The parallel cascade structure leads to the following state space model:
$$A = \begin{bmatrix} -f_{e,1} & 1 & 0 & 0 & 0 & 0 \\ -f_{e,2} & 0 & 1 & 0 & 0 & 0 \\ -f_{e,3} & 0 & 0 & 1 & 0 & 0 \\ -f_{e,4} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -f_{1,1} & 1 \\ 0 & 0 & 0 & 0 & -f_{1,2} & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ b_{1,0}b_{2,0} \\ b_{1,0}b_{2,1} \\ 0 \\ b_{1,0} \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix},$$

$$E = \begin{bmatrix} c_{e,1}-f_{e,1} & 0 \\ c_{e,2}-f_{e,2} & 0 \\ c_{e,3}-f_{e,3} & 0 \\ c_{e,4}-f_{e,4} & 0 \\ 0 & c_{1,1}-f_{1,1} \\ 0 & c_{1,2}-f_{1,2} \end{bmatrix}$$

where the E matrix is the noise transition matrix.


Considering the serial cascade structure, the following state space model can be derived:
$$A = \begin{bmatrix} -f_{2,1} & 1 & 0 & h_{2,1} & 0 \\ -f_{2,2} & 0 & 1 & h_{2,2} & 0 \\ -f_{2,3} & 0 & 0 & h_{2,3} & 0 \\ 0 & 0 & 0 & -f_{1,1} & 1 \\ 0 & 0 & 0 & -f_{1,2} & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ b_{1,0} \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}, \quad E = \begin{bmatrix} c_{2,1}-f_{2,1} & 0 \\ c_{2,2}-f_{2,2} & 0 \\ c_{2,3}-f_{2,3} & 0 \\ 0 & c_{1,1}-f_{1,1} \\ 0 & c_{1,2}-f_{1,2} \end{bmatrix}$$
The GPC algorithm for the state space model is presented in Appendix 1. Let us suppose that the state cannot be measured; therefore a state observer must be applied, in this case a fixed gain feedback observer. If the state space models of the serial and parallel structures are applied in the GPC, an essential difference can be observed in the inner disturbance regulation. Applying the same controller parameters as in Example 6, the same results are obtained as were presented in Fig. 28. Consequently, the GPC based on the state space model of the parallel cascade structure is identical to the basic SISO GPC, while the GPC based on the state space model of the serial cascade structure is identical to the proposed CGPC.
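For completeness, the state update of a fixed gain feedback observer, as used above, can be sketched as follows (A, B, C and the observer gain K are assumed to be given; the numerical values below are illustrative only):

```python
import numpy as np

def observer_step(x_hat, du, y_meas, A, B, C, K):
    """One step of a fixed-gain observer: predict with the model,
    then correct with the measured outputs (process output and intermediate variable)."""
    x_pred = A @ x_hat + B.flatten() * du          # model prediction
    y_pred = C @ x_pred                            # predicted outputs
    return x_pred + K @ (y_meas - y_pred)          # constant-gain correction

# illustrative dimensions: 5 states, 2 measured outputs, scalar control increment
A = np.eye(5); B = np.zeros((5, 1)); B[3, 0] = 0.5
C = np.zeros((2, 5)); C[0, 0] = 1.0; C[1, 3] = 1.0
K = 0.1 * C.T                                      # assumed constant observer gain
x_hat = np.zeros(5)
x_hat = observer_step(x_hat, du=0.2, y_meas=np.array([0.0, 0.1]), A=A, B=B, C=C, K=K)
```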


3.6 Summary
In this chapter, the derivation and investigation of the proposed CGPC algorithm were presented. The CGPC algorithm is based on the transfer function models of the processes, and requires a special predictor that predicts the process output based on the control signal, the process output and the intermediate variable.
The properties of the proposed CGPC algorithm were also presented in the chapter. The robustness analysis of the algorithm showed that the inner and the outer loop robustness can be tuned independently. As a related issue, the regulation of the inner and outer disturbances was also investigated, and these regulations can likewise be designed independently. The performance of the CGPC in the presence of constraints was also investigated, and a remarkable benefit over the traditional cascade scheme was found. The link between the proposed algorithm and the GPC based on the state space model was also clarified.
Besides the analytical derivations, the chapter includes several simulation examples to illustrate the performance improvement obtained by the CGPC over the traditional cascade structure.

4 Flue gas oxygen content control in fluidized bed boiler
Biomass fuels (like wood) and peat are important energy sources in the Finnish energy production (Tekes 2004). The intensive use of mixtures of such energy sources can be explained by a) the increasing demand for using domestic fuels (e.g. peat), b) the thermal utilisation of the high caloric-value paper-industry by-products (wood chips, sawdust, bark), which would otherwise be waste, and c) diverting municipal solid wastes from landfill.
Fluidised bed boilers (FBB) are widely used to combust such fuel mixtures, since low flue-gas emission levels can be achieved and various types of solid fuels can be fed to the fluidised bed either separately or mixed together. However, disturbances such as variations in the fuel-feed rate and the fuel quality may lead to performance degradation and to an increase in the emission levels. It has also been observed that increasing the ratio of municipal wastes in mixtures with biomass fuels increases the probability of disturbances.
One of the most economically and technically feasible solutions to compensate the disturbances and stabilise the combustion process is improving the control system. Controlling the flue-gas oxygen content is an effective way to decrease flue-gas emissions and optimise the performance of the boiler at the operation level. The oxygen content setpoint is optimised considering the thermal flue-gas losses, the CO losses and the flue gas emissions. The optimal setpoint also depends on the load level of the boiler (Leppäkoski & Mononen 2001).
The flue-gas oxygen content is usually controlled by conventional PI controllers. In large-scale power plants, a significant delay exists between the oxygen control signal and the oxygen measurement. When the process has long dead-times, classical controllers, like PI controllers, show poor performance (Hägglund 1996). Usually, the PI controller is tuned for slow control performance to ensure the stability of the system. When the oxygen content drops below a certain limit, a plant shutdown happens. To prevent the shutdown, the oxygen content reference signal is usually set higher than its optimal value, which results in a higher level of emission. With a delay compensating controller, the performance of the oxygen content control system can be improved and the oxygen content setpoint can be lowered to the optimum point, thus reducing the emission.

A possible solution for such cases is to use a predictor, which anticipates the future
behaviour of the process. The predictive controller has the ability to predict the future
changes of the process output, and calculate the control action based on this prediction.

4.1 Process description


The schematic diagram of a fluidized bed boiler is presented in Fig. 49. Part of the air flow (the primary air) enters the boiler at the bottom and keeps the fluidized bed, which contains the inert material and the fuel, in a floating state. The rest of the air flow (the secondary air) enters the boiler from the wall (usually in a couple of stages) to allow the burning of the volatile components.

Fig. 49. The schematic diagram of a fluidized bed boiler (Paloranta et al. 2004).

The block diagram of the oxygen content control system of a typical fluidised bed boiler is presented in Fig. 50. The controlled variable, the oxygen content of the flue gas, depends on the combustion conditions, e.g. the fuel-air ratio.
[Figure: oxygen control loop block diagram; the fuel setpoint drives the fuel feeding system, the air setpoint drives the air feeding systems, and the oxygen setpoint drives the oxygen controller acting on the 3rd level of the secondary air system; fuel and air enter the combustion process, whose O2 content is fed back through the oxygen measurement.]

Fig. 50. The oxygen control loop of the boiler.

The higher-level power controller determines the setpoints of the fuel flow and air flow (primary air flow and total secondary air flow) controllers. The setpoint of the 3rd level of the secondary-air controller is adjusted by the oxygen content controller. Thus the control loop of the oxygen controller includes the secondary air system and the combustion process. The combustion process is remarkably slower than the secondary air system, both in lag and in delay.

4.1.1 The traditional oxygen controller


The oxygen content controller is generally a conventional PI controller. The parameters of the original PI controller were optimised using the integral of time-weighted absolute error (ITAE) criterion. The optimisation (Paloranta et al. 2003) is based on a test simulation including a setpoint change and a reasonable fuel flow change disturbance. The resulting controller parameters are the following: Kp = 0.009, Ti = 30 s. The range of the control signal is wide, and with these controller parameters no saturation occurs. The controller includes the integrator with the classical anti-windup method, but in normal operation it does not influence the performance.
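A discrete PI controller with a classical (clamping) anti-windup arrangement of the kind mentioned above can be sketched as follows; this is a generic illustration, not the plant implementation, and the limits and gains are placeholders:

```python
def pi_antiwindup(error, integral, kp, ti, ts, u_min, u_max):
    """One sample of a PI controller with clamping anti-windup.

    Returns the saturated control value and the updated integral state."""
    u_unsat = kp * (error + integral / ti)
    u = min(u_max, max(u_min, u_unsat))
    # integrate only when not saturated, or when the error drives the
    # control signal back towards the allowed range (classical clamping)
    if u == u_unsat or (u == u_max and error < 0) or (u == u_min and error > 0):
        integral += error * ts
    return u, integral
```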

4.1.2 Description of the proposed controller


The proposed CGPC algorithm is tested for the oxygen control task. The process has a
relatively large delay due to the burning process. The ratio of the delay and of the
dominant time constant is close to one. This property underlines the application of a
predictive controller.

The applied CGPC algorithm (later called O2GPC) is extended with one disturbance feedforward compensation path: the available fuel flow signal is also used to calculate the control signal.
The predictive controller applies a linear process model. The burning process is a nonlinear process, but it was approximated with a linear model in the controller, without any gain scheduling or parameter adaptation.
The control loop is shown in Fig. 51. The reference signal (refO2), the total air flow (MAir), the actual fuel flow (MFuel) and the flue gas oxygen content measurement (O2) enter the controller.
[Figure: GPC control loop of the FBB; refO2(t) and MFuel(t) enter the O2GPC, whose output u(t) drives the air system GAir(s); MAir(t) together with the other air flows enters the combustion process P(s), which produces O2(t).]

Fig. 51. The GPC control loop of the FBB.

The designed discrete-time GPC is based on the discrete transfer function approach as in
Chapter 3, thus no state variable observation is required. The cost function applied in the
controller is the regular cost function of the GPC controller:
$$J = \sum_{i=H_m}^{H_p} \left( \mathrm{ref}_{O2}(k + d_1 + d_{22} + i) - \hat{O}_2(k + d_1 + d_{22} + i) \right)^2 + \lambda \sum_{j=0}^{H_c - 1} \left( \Delta u(k+j) \right)^2 \qquad (25)$$

where:
refO2(t) and Ô2(t) are the reference and the predicted value of the flue gas oxygen content, respectively;
Δu(t) is the increment of the control signal;
Hp, Hm, Hc are the prediction, the minimum and the control horizons;
λ is the weighting constant.
To derive the predictor, the model of the process is required. For the secondary air and
combustion process the model shown in Fig. 52 is applied.

[Figure: process model for the O2GPC; the secondary air system maps u(t) to MAir(t) through B1(q^-1)/A1(q^-1) with noise C1(q^-1)/A1(q^-1) ξ1(t); the oxygen model maps MFuel(t) through B21(q^-1)/A21(q^-1) and MAir(t) through B22(q^-1)/A22(q^-1) to O2(t), with noise C2(q^-1)/A22(q^-1) ξ2(t).]

Fig. 52. The process model for the O2GPC.

The model of the inner (secondary air system) process:

$$M_{Air}(t) = \frac{B_1(q^{-1})}{A_1(q^{-1})}\,u_{O2}(t-1) + \frac{C_1(q^{-1})}{D_1(q^{-1})}\,\xi_1(t) \qquad (26)$$

where $D_1(q^{-1}) = A_1(q^{-1})(1-q^{-1})$.

The model of the external (O2 model) process:

$$O_2(t) = \frac{B_{21}(q^{-1})}{A_{21}(q^{-1})}\,M_{Fuel}(t-1) + \frac{B_{22}(q^{-1})}{A_{22}(q^{-1})}\,M_{Air}(t-1) + \frac{C_2(q^{-1})}{D_2(q^{-1})}\,\xi_2(t) \qquad (27)$$

where $D_2(q^{-1}) = A_{22}(q^{-1})(1-q^{-1})$.


In equation (27), the first term expresses the effect of the fuel flow, the second term
expresses the effect of the third level secondary air flow on the flue gas oxygen content.
The k-step ahead prediction:

$$\hat{O}_2(t+k \mid t) = \frac{B_{21}(q^{-1})}{A_{21}(q^{-1})}\,M_{Fuel}(t+k-1) + \frac{B_{22}(q^{-1})}{A_{22}(q^{-1})}\,M_{Air}(t+k-1) + \frac{C_2(q^{-1})}{D_2(q^{-1})}\,\xi_2(t+k)$$

The 1st Diophantine equation:

$$\frac{C_2(q^{-1})}{D_2(q^{-1})} = F_{2,k}(q^{-1}) + \frac{G_{2,k}(q^{-1})}{D_2(q^{-1})}\,q^{-k}$$
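Each of the five Diophantine equations used in this derivation is the same operation: a polynomial long division of the left-hand numerator by the denominator, truncated after k terms. A minimal sketch of this split (the coefficients in the usage example are arbitrary, not the identified boiler polynomials):

```python
import numpy as np

def diophantine(c, d, k):
    """Split C(q^-1)/D(q^-1) = F_k(q^-1) + q^-k * G_k(q^-1)/D(q^-1).

    c, d are coefficient arrays in ascending powers of q^-1 (d[0] = 1);
    F_k has degree k-1 and G_k is the remainder."""
    c = np.asarray(c, dtype=float)
    d = np.asarray(d, dtype=float)
    n = max(len(c), len(d)) + k
    rem = np.zeros(n); rem[:len(c)] = c
    dd = np.zeros(n); dd[:len(d)] = d
    f = np.zeros(k)
    for i in range(k):
        f[i] = rem[i] / dd[0]
        rem[i:] -= f[i] * dd[:n - i]
    return f, np.trim_zeros(rem[k:], 'b')

# usage with toy coefficients: split C2/D2 for k = 3
C2 = [1.0, -0.9]
D2 = np.convolve([1.0, -0.8], [1.0, -1.0])     # D2 = A22 * (1 - q^-1), assumed A22
F_3, G_3 = diophantine(C2, D2, 3)
```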
Rearranging the equation with the substitutions, and considering that the expected value of the last term is equal to zero:

$$\hat{O}_2(t+k \mid t) = \frac{B_{21}(q^{-1}) F_{2,k}(q^{-1}) A_{22}(q^{-1})}{C_2(q^{-1}) A_{21}(q^{-1})}\,\Delta M_{Fuel}(t+k-1) + \frac{B_{22}(q^{-1}) F_{2,k}(q^{-1})}{C_2(q^{-1})}\,\Delta M_{Air}(t+k-1) + \frac{G_{2,k}(q^{-1})}{C_2(q^{-1})}\,O_2(t)$$

The 2nd Diophantine equation:

$$\frac{B_{22}(q^{-1}) F_{2,k}(q^{-1})}{C_2(q^{-1})} = \bar{F}_{2,k}(q^{-1}) + \frac{\bar{G}_{2,k}(q^{-1})}{C_2(q^{-1})}\,q^{-(k-1)}$$

Substituting into the prediction equation and applying the inner process model (the argument $q^{-1}$ of the polynomials is omitted here and in the following longer equations for brevity):

$$\hat{O}_2(t+k \mid t) = \frac{B_{21} F_{2,k} A_{22}}{C_2 A_{21}}\,\Delta M_{Fuel}(t+k-1) + \bar{F}_{2,k}\!\left[\frac{B_1}{A_1}\,\Delta u_{O2}(t+k-2) + \frac{C_1}{D_1}\,\xi_1(t+k-1)\right] + \frac{\bar{G}_{2,k}}{C_2}\,\Delta M_{Air}(t) + \frac{G_{2,k}}{C_2}\,O_2(t)$$

The 3rd Diophantine equation:

$$\frac{\bar{F}_{2,k}(q^{-1})\,C_1(q^{-1})}{D_1(q^{-1})} = F_{1,k}(q^{-1}) + \frac{G_{1,k}(q^{-1})}{D_1(q^{-1})}\,q^{-(k-1)}$$
After substituting this and considering the zero mean value of the future disturbance:

$$\hat{O}_2(t+k \mid t) = \frac{B_{21} F_{2,k} A_{22}}{C_2 A_{21}}\,\Delta M_{Fuel}(t+k) + \frac{B_1 F_{1,k}}{C_1}\,\Delta u_{O2}(t+k-2) + \frac{\bar{G}_{2,k}}{C_2}\,\Delta M_{Air}(t) + \frac{G_{1,k}}{C_1}\,\Delta M_{Air}(t) + \frac{G_{2,k}}{C_2}\,O_2(t)$$

Rearranging the equation:

$$\hat{O}_2(t+k \mid t) = \frac{B_{21} F_{2,k} A_{22}}{C_2 A_{21}}\,\Delta M_{Fuel}(t+k) + \frac{B_1 F_{1,k}}{C_1}\,\Delta u_{O2}(t+k-2) + \left(\frac{G_{1,k}}{C_1} + \frac{\bar{G}_{2,k}}{C_2}\right)\Delta M_{Air}(t) + \frac{G_{2,k}}{C_2}\,O_2(t)$$

The 4th Diophantine equation separates the forced and the free response parts of the control signal:

$$\frac{B_1(q^{-1})\,F_{1,k}(q^{-1})}{C_1(q^{-1})} = F_{uo,k}(q^{-1}) + \frac{G_{uo,k}(q^{-1})}{C_1(q^{-1})}\,q^{-(k-1)}$$

The 5th Diophantine equation separates the effects of the future and the past fuel flow values:

$$\frac{B_{21}(q^{-1})\,F_{2,k}(q^{-1})\,A_{22}(q^{-1})}{C_2(q^{-1})\,A_{21}(q^{-1})} = F_{Fuel,k}(q^{-1}) + \frac{G_{Fuel,k}(q^{-1})}{C_2(q^{-1})\,A_{21}(q^{-1})}\,q^{-k}$$
Substituting these gives the final expression of the optimal predictor:

$$\hat{O}_2(t+k \mid t) = F_{uo,k}\,\Delta u_{O2}(t+k-2) + F_{Fuel,k}\,\Delta M_{Fuel}(t+k) + \frac{G_{Fuel,k}}{C_1 A_{21}}\,\Delta M_{Fuel}(t) + \frac{G_{uo,k}}{C_1}\,\Delta u_{O2}(t-1) + \left(\frac{G_{1,k}}{C_1} + \frac{\bar{G}_{2,k}}{C_2}\right)\Delta M_{Air}(t) + \frac{G_{2,k}}{C_2}\,O_2(t)$$

The first term (with the polynomial $F_{uo,k}$) is the forced response ($\hat{O}_{2,forced}$). The term including $F_{Fuel,k}$ is the effect of the future fuel flow changes; since these values are not available, the future fuel flow changes are assumed to be equal to zero and this term can be neglected. The terms including the G polynomials express the effect of the past process inputs and disturbances ($\hat{O}_{2,free}$).
Recalling the minimization of the cost function in Eq. (25), as was shown in Section 3.1.6, the control signal is:

$$\Delta u_{O2}(t) = \mathbf{K}\left(\mathbf{w} - \hat{\mathbf{O}}_{2,free}\right),$$

where the $\mathbf{K}$ vector is the first row of the matrix $\left(\mathbf{F}_{uo}^{T}\mathbf{F}_{uo} + \lambda\mathbf{I}\right)^{-1}\mathbf{F}_{uo}^{T}$ and $\mathbf{F}_{uo}$ is the matrix of the $F_{uo,k}$ filters, similarly to the notation in Section 3.1.6.
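The gain vector K can be computed offline once the forced-response (dynamic) matrix is known. A minimal sketch with an assumed forced-response matrix built from step-response coefficients (the values are illustrative, not the identified boiler model):

```python
import numpy as np

# assumed step-response coefficients of the process over the prediction horizon
g = np.array([0.0, 0.1, 0.3, 0.6, 0.8, 0.9, 1.0])
Hm, Hp, Hc, lam = 2, 6, 1, 0.0008

# forced-response matrix: rows are predicted outputs for k = Hm..Hp,
# columns are the future control increments (Hc of them)
F = np.zeros((Hp - Hm + 1, Hc))
for r, k in enumerate(range(Hm, Hp + 1)):
    for c in range(Hc):
        if k - c >= 1:
            F[r, c] = g[k - c]

K_full = np.linalg.solve(F.T @ F + lam * np.eye(Hc), F.T)
K = K_full[0]                       # only the first row is applied (receding horizon)
du = K @ (np.full(Hp - Hm + 1, 1.0) - np.zeros(Hp - Hm + 1))   # K (w - free response)
```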

Assuming the future trajectory keeps constant along the prediction horizon, the control
law is given by:
$$\Delta u_{O2}(t) = \sum_{i=H_m}^{H_p} k_i\,w(t) - \frac{\sum_{i=H_m}^{H_p} k_i G_{uo,i}}{C_1}\,\Delta u_{O2}(t-1) - \frac{\sum_{i=H_m}^{H_p} k_i G_{Fuel,i}}{C_1 A_{21}}\,\Delta M_{Fuel}(t) - \left(\frac{\sum_{i=H_m}^{H_p} k_i G_{1,i}}{C_1} + \frac{\sum_{i=H_m}^{H_p} k_i \bar{G}_{2,i}}{C_2}\right)\Delta M_{Air}(t) - \frac{\sum_{i=H_m}^{H_p} k_i G_{2,i}}{C_2}\,O_2(t)$$
Introducing the following notations:
$$S(q^{-1}) = C_1(q^{-1}) + q^{-1}\,\frac{\sum_{i=H_m}^{H_p} k_i\,G_{uo,i}(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}; \qquad R_1(q^{-1}) = \frac{\sum_{i=H_m}^{H_p} k_i\,G_{1,i}(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}; \qquad R_2(q^{-1}) = \frac{\sum_{i=H_m}^{H_p} k_i\,\bar{G}_{2,i}(q^{-1})}{\sum_{i=H_m}^{H_p} k_i};$$

$$R_3(q^{-1}) = \frac{\sum_{i=H_m}^{H_p} k_i\,G_{2,i}(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}; \qquad R_4(q^{-1}) = \frac{\sum_{i=H_m}^{H_p} k_i\,G_{Fuel,i}(q^{-1})}{\sum_{i=H_m}^{H_p} k_i}$$

The resulting control loop is shown in Fig. 53.


[Figure: block diagram of the oxygen control loop; refO2(t) enters a summing junction together with the fuel feedforward path R4(q^-1)/(C1(q^-1)A21(q^-1)) driven by Mfuel(t); the controller block C1(q^-1)/S(q^-1) produces uO2(t), which drives the 3rd level secondary air system and then the combustion process to give O2(t); MAir(t) is fed back through R1(q^-1)/C1(q^-1) + R2(q^-1)/C2(q^-1) and O2(t) through R3(q^-1)/C2(q^-1).]

Fig. 53. The block diagram of the oxygen control loop.

The controller parameters are determined based on elementary considerations. The minimum horizon is chosen to be equal to the delay between the control signal and the oxygen measurement. This delay is equal to the sum of the secondary air system delay and the delay of the combustion process. The prediction horizon is chosen to be close to the dominant lag of the process, which is the time constant of the combustion process. The control horizon is chosen to be equal to one, to get a smooth, unexcited control signal. The control weighting factor is chosen to be equal to the $\mathbf{F}_{uo}^{T}\mathbf{F}_{uo}$ expression.
The applied parameters of the O2GPC controller are the following: Hp = 50, Hm = 20, Hc = 1, λ = 0.0008, and the sampling time is 1 second.

The linear models applied in the O2GPC are presented in Appendix 4. These models were identified by Paloranta et al. and are presented in Paloranta et al. (2003).
The C polynomials are chosen based on robustness considerations, to obtain modulus margins close to 0.5. The resulting C polynomials are: $C_1(q^{-1}) = 1 - 0.95q^{-1}$ and $C_2(q^{-1}) = 1 - 0.9512q^{-1}$.

4.2 Simulation results


The proposed controller is compared to the original PI controller through simulations and experiments on a pilot plant.

4.2.1 Simulator description


The simulator used in the simulations was developed by Paloranta et al. (2003). The simulator has already been applied to retune the oxygen controller and the controllers in the air supply system of a 185 MW BFBB power plant.
The simulator consists of the air supply systems, the fuel feeding systems and the combustion process, according to Fig. 49.
The fuel feeding system is described by linear models, since the linear approximation of the behaviour of the conveyors and screws is satisfactory. The air feeding system (primary air, secondary air and flue gas recirculation) is modelled with first order transfer functions and PI controllers. The combustion process is simulated with a Hammerstein-type grey-box model proposed by Leppäkoski et al. (2003). It contains a phenomenological, non-linear, steady-state model followed by an empirical linear dynamic model.
The simulator parameters are retuned based on measurement data from the pilot plant described in Section 4.3.1.

4.2.2 Perfect model matching


The simulations show the tracking and regulation behaviours of the controllers. The
linear model applied in the controller is obtained by linear offline identification around
the working point of 350 Nm3/min air flow and 24 kg/min fuel flow.

[Figure: block diagram of the simulation; the fuel flow acts both as a measured disturbance (MFuel(t), fed to the O2GPC) and as an unmeasured disturbance (MFuel_Act(t), acting only on the process); the O2GPC output u(t) drives GAir(s) and the combustion process P(s), producing O2(t).]

Fig. 54. The block diagram of the simulation.

The initial values are a 4.5 % oxygen content setpoint and 24 kg/min fuel flow. The simulation includes two setpoint changes (+0.6 % at t=200 sec and -0.6 % at t=600 sec), two measured disturbances (-4 kg/min at t=1000 sec and +4 kg/min at t=1500 sec) and two unmeasured disturbances (-4 kg/min at t=2000 sec and +4 kg/min at t=2500 sec). The interpretation of the measured and unmeasured disturbances is presented in Fig. 54.
The performances of the original PI and the proposed GPC controller are shown in Fig. 55, the corresponding control signals are given in Fig. 56. The performances satisfy the expectations. The GPC controller behaves similarly to the PI controller during the tracking, but the GPC does not overshoot as the PI does.
[Figure: oxygen content (%) vs. time (s); curves: Reference signal, PI, O2GPC.]

Fig. 55. The performance of the PI and the O2GPC controller without modelling error.

The regulation behaviours for the unmeasured disturbances also show a slight difference: the O2GPC is faster, thus its peak error is smaller. The regulations of the O2GPC at the +4 kg/min and -4 kg/min disturbances are different (especially in the control signal), and that fact reflects the nonlinear behaviour of the process.

[Figure: control signals, air flow (Nm3/min) vs. time (s); curves: PI, O2GPC.]

Fig. 56. The control signal of the PI and the O2GPC without modelling error.

The regulation of the measured disturbance can be interpreted only in the case of the O2GPC controller, since in the PI loop feedforward disturbance compensation is not implemented. Comparing the regulation of the measured and unmeasured disturbances, the feedforward compensation was found to be very effective. In the control signal figure it can be clearly seen that the controller acts before the effect of the disturbance appears on the controlled variable.

4.2.3 Control performance in presence of modelling error


In the previous section, the performances of the proposed O2GPC controller and the original PI controller were presented in the perfect model matching case. In this section, the controllers are tested in the presence of a serious modelling error.
Before the simulation results, the robustness properties of the control loops are presented. The air system model is reliable, so robustness against its error is not investigated. The more complex process is the combustion model (referred to also as the O2 model), so it is important to investigate how the PI and O2GPC control loops perform in case of a modelling error. In the linear approximation of the O2 model, both the fuel flow and the air flow appear. From the stability point of view only the O2 model has to be considered, as only this appears in the control loop. The effect of the fuel flow signal is compensated in a feedforward manner, so its modelling error influences the performance of the feedforward disturbance compensation, but it does not affect the stability of the process.
In Section 3.2, the modulus margin was already introduced. The modulus margins are calculated for the combustion process loops. The modulus margin of the O2GPC loop, with the tuning parameters given in Section 4.1.2, is 0.4677. The gain margin of the loop is 1.888 and the phase margin is 50.86°.
The same measures of the PI control loop are the following: the modulus margin is 0.63, the gain margin is 3.5 and the phase margin is 55°. According to these margins, the PI loop is more robust than the O2GPC loop.
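These margins can be computed directly from the open-loop frequency response; the modulus margin is the smallest distance of the Nyquist curve from the critical point. A minimal sketch with an assumed discrete-time open-loop transfer function (the actual loop polynomials are not reproduced here):

```python
import numpy as np

# assumed open-loop transfer function L(q^-1) = numL/denL (illustrative coefficients)
numL = np.array([0.0, 0.12, 0.08])
denL = np.array([1.0, -1.6, 0.65])

w = np.linspace(1e-4, np.pi, 20000)          # frequencies up to Nyquist (rad/sample)
z_inv = np.exp(-1j * w)
L = np.polyval(numL[::-1], z_inv) / np.polyval(denL[::-1], z_inv)

modulus_margin = np.min(np.abs(1.0 + L))     # min distance from the -1 point
# gain margin: 1/|L| at the phase crossover (angle(L) = -180 degrees)
phase = np.unwrap(np.angle(L))
idx = np.argmin(np.abs(phase + np.pi))
gain_margin = 1.0 / np.abs(L[idx])
print(modulus_margin, gain_margin)
```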

[Figure: log-log plot of magnitude vs. frequency; curves: stability error limits of the O2GPC and PI loops and the assumed modelling error.]

Fig. 57. The stability boundary of the PI and the GPC control loops and the modelling error.

In the following simulation, the same linear models are applied in the controller as in the previous section, but in the simulator the combustion process has a 30 % higher gain and also a 30 % longer delay than in the previous case. The gain error is assumed in both the fuel flow and the air flow path, i.e. it appears in the B21 and B22 polynomials. The stability boundaries of the controllers and the assumed modelling error are given in Fig. 57. According to the figure, both control loops are stable. The stability boundary of the O2GPC loop is close to the modelling error at high frequency, but it clearly stays above the modelling error.
[Figure: oxygen content (%) vs. time (s); curves: Reference signal, PI, O2GPC.]

Fig. 58. The performances of the PI and the GPC controllers in the presence of modelling
error.

[Figure: control signals, air flow (Nm3/min) vs. time (s); curves: PI, GPC.]

Fig. 59. The control signals in the presence of modelling error.

The performances of the PI and O2GPC loops in the presence of the assumed modelling error are presented in Fig. 58 and Fig. 59. The performance degradation is obvious, but as follows from Fig. 57, both loops are stable. The tracking of the O2GPC is still faster than that of the PI, even though a similarly large overshoot occurs. Both the measured and unmeasured disturbance regulations of the O2GPC loop become oscillatory, but are still faster than the regulation of the PI controller. The PI loop's performance degraded less than that of the O2GPC, which is reasonable according to the modulus margin values presented at the beginning of this section.

4.3 Experiments on a pilot plant


The same experiments of setpoint changes and of regulation of measured and unmeasured disturbances were performed on the pilot plant with both the PI and the proposed O2GPC controller, just as they were performed on the simulator. The parameters of the controllers were identical to those applied during the simulations.

4.3.1 Pilot plant description


The applied circulating fluidised bed has a 50 kW fuel capacity. The riser height is 8 m and the inner diameter is 167 mm. Temperature levels in the reactor can be maintained with electrical heaters, a cooling system and by feeding in combustion air in appropriate proportions. The amounts of primary and secondary air flows fed from three levels are adjusted and measured by thermal mass-flow meters. The reactor facilitates the study of the combustion behaviour of different fuels, deposit formation, formation of pollutants and ash properties under CFB conditions. The reactor is suitable for characterising the reactivity and the combustion behaviour of fuels and fuel mixtures. The boiler is well instrumented to support the testing of different control loops as well. The controllers are running on a separate personal computer, in a LabView environment.
The O2GPC algorithm was implemented on a separate notebook and connected to the plant controller through a serial connection. At each sampling time, the required measurements were loaded to the notebook; the control signal was then computed and sent back to the main controller computer. On the notebook, the connection management and the frame functions are performed in LabView and the pure calculations are called from Matlab.

4.3.2 Measurement results


The tracking behaviours of the controllers are shown with different setpoint changes. From a 4.5 % oxygen content setpoint, in the case of the O2GPC controller +0.6 % and -0.6 % oxygen content setpoint changes were applied, while in the case of the PI controller +1 % and -0.9 % changes were applied. (Unfortunately the test could not be repeated with the same setpoint changes for the different controllers, but the results can still be evaluated.) The setpoint signals and controlled variables are presented in Fig. 60 and in Fig. 62, the corresponding control signals are given in Fig. 61 and in Fig. 63, respectively.

[Figure: two panels of oxygen content (%) vs. time (s): O2GPC controller (top) and PI controller (bottom), each showing the setpoint and the measured O2 content.]

Fig. 60. Tracking performance of the controllers.

[Figure: control signals, air flow (Nm3/min) vs. time (s); curves: PI, O2GPC.]

Fig. 61. Control signals during the tracking.

The measured tracking performances of the controllers are similar to the simulation results. The O2GPC reaches the new setpoint faster than the PI controller. The overshoot of the PI controller cannot be observed because of the disturbances.
The control signal of the O2GPC controller basically behaves similarly to that during the simulation (in Fig. 56). Comparing it to the control signal of the PI controller, it can be observed that the control signal of the O2GPC is much more excited. This can be explained by the fact that the O2GPC responds faster to the disturbances.
[Figure: two panels of oxygen content (%) vs. time (s): O2GPC controller (top) and PI controller (bottom).]

Fig. 62. Tracking performance of the controllers.

[Figure: control signals, air flow (Nm3/min) vs. time (s); curves: PI, O2GPC.]

Fig. 63. Control signals during the tracking.

The measured disturbance test is performed only for the O2GPC controller, since the PI controller is not extended with feedforward compensation. Similarly to the simulation, the disturbance is caused by a fuel flow change. The test results are presented in Fig. 64 and in Fig. 65.
[Figure: measured disturbance test; three panels vs. time (s): oxygen content (%), fuel flow (kg/min), air flow (Nm3/min).]

Fig. 64. First measured fuel flow disturbance test.

[Figure: measured disturbance test; three panels vs. time (s): oxygen content (%), fuel flow (kg/min), air flow (Nm3/min).]

Fig. 65. Second measured fuel flow disturbance test.

The results of the feedforward compensation tests should be compared to the regulation of the O2GPC against the unmeasured disturbance, presented in Fig. 66 and in Fig. 67. The feedforward control worked well in the O2GPC in both measured disturbance cases. The maximum bias of the oxygen content was less than 0.3 %, while in the unmeasured cases it was above 0.5 %. As could be concluded from the simulation, the better regulation of the measured disturbance results from the feedforward path: the controller can act earlier than in the unmeasured disturbance case.
The regulations of the unmeasured disturbances are shown in Fig. 66 and in Fig. 67. The figures show the measured fuel signals and the third level secondary air flow, i.e. the control signal. During this experiment the O2GPC controller received a constant 24 kg/min fuel flow signal, which thus simulated an unmeasured disturbance.
The measurement results satisfy well the expectations based on the simulations. The O2GPC loop regulated the fuel flow change disturbance remarkably faster in the first test. The control signal of the O2GPC changed much earlier than that of the PI controller. In the second test the difference is not as obvious, but it can still be seen: the O2GPC is faster in regulation than the original PI controller.

[Figure: unmeasured disturbance regulation; three panels vs. time (s): oxygen content (%) with Reference signal, PI and GPC curves; fuel flow (kg/min); air flow (Nm3/min) for PI and GPC.]

Fig. 66. First unmeasured fuel flow disturbance test.

[Figure: unmeasured disturbance regulation; three panels vs. time (s): oxygen content (%) with Reference signal, PI and O2GPC curves; fuel flow (kg/min); air flow (Nm3/min) for PI and O2GPC.]

Fig. 67. Second unmeasured fuel flow disturbance test.


4.4 Summary
In this chapter, the application of the cascade generalized predictive controller to the oxygen control of a fluidized bed boiler was shown. The applied CGPC is extended with fuel flow disturbance feedforward compensation.
The proposed controller was compared to the original PI controller loop through simulations and a test run on a pilot plant boiler. The results meet the expectations; it is possible to improve the tracking and regulation performance of the control system with the application of the proposed O2GPC algorithm.

5 The GPC in the Superheater control


5.1 Introduction
In steam-water cycle power plants, the fresh steam generated in the boiler is further heated in the superheater system to achieve high pressure and high temperature steam (live steam) fed directly to the turbines. The principal objective of the steam temperature control is to maintain a constant live steam temperature. The superheater system, depending on the size of the boiler, may contain several (commonly three) superheaters. To obtain fine control of the live steam and to avoid overheating between superheaters, the temperatures at the outlets of the last two superheater stages are controlled.
The constant live steam temperature plays a key role in ensuring the appropriate cycle efficiency. Without control, the temperature of the live steam would vary according to the power level and as a result of the several disturbances in the boiler. The temperature deviation from its setpoint has several negative effects, which may cause the following problems:
- A higher than allowed steam temperature endangers both the heat surfaces of the boiler and the inlet of the turbine;
- A lower steam temperature decreases the cycle efficiency, since the heat drop on the turbine is decreased, and the lower steam temperature also moves the expansion towards the wet zone;
- A fast steam temperature change causes tension in the materials, especially in high pressure boilers.
The superheated steam outlet temperature depends fundamentally on three main variables: the steam mass flow, the inlet steam temperature and the heat transferred from the flue gas to the steam. The steam mass flow is defined by the power demand and thus cannot be applied as a control variable. The heat transfer and the inlet steam temperature can be set according to the steam temperature control task. These represent the two types of actuation applied in the steam temperature control.

The heat transfer to the superheater can be influenced in two different ways (Czinder, 2000):
- changing the flue gas volume flow, which can be implemented only with a flue gas bypass;
- changing the Boltzmann number, by flue gas recirculation or by moving the position of the fire.
The main disadvantage is that these solutions require a priori constructional consideration. Usually these methods are applied in combination with other types of steam temperature control methods.
The actuation on the inlet steam temperature is the most widespread method for steam temperature control. One possible implementation of the steam temperature control task uses direct contact heat exchangers. The saturated steam is cooled down to the wet zone, or the superheated steam is cooled in the heat exchanger with feedwater or with water from the drum. Similarly to the modification of the heat transfer, the disadvantage of this method results from the large heat storage capacity of the heat exchange surface, which makes the dynamical performance of the actuation too slow.
The most commonly applied solution for steam temperature control is the spray-water attemperator. It is simple and cheap, and its dynamical behaviour is the most advantageous from the control point of view among the different solutions. The spray-water attemperators are installed between the superheater stages. A typical set-up is presented in Fig. 68 with two stages.
presented in Fig. 68 with two stages.

[Figure: two superheater stages with a spray-water attemperator between them; marked quantities: steam flows m_st,in and m_st,after sp, spray flow m_sp with enthalpy h_sp, and temperatures T_st,in, T_st,before sp, T_st,after sp, T_st,after sh.]

Fig. 68. Typical steam temperature control with spray-water attemperator.

The spray-water attemperator pumps a fine spray of relatively cold water droplets into the steam flow. With the resulting mixing of hot steam and cold water, the coolant eventually evaporates, so that the final mixture comprises an increased volume of steam at a temperature lower than that prior to the water injection. Since the cooled steam flow is the input of the next superheater, the attemperator influences the steam temperature at the superheater outlet. Whenever possible, the off-take point for the attemperation water is downstream of the feedwater control valve.
The attemperator is an effective means of lowering the temperature of the steam. However, in thermodynamic terms it results in a reduction of the plant performance, since the steam temperature has to be raised to a higher value than is needed in order to provide the necessary operational range for the attemperator. The attemperator then cools down the steam while controlling the outlet temperature of the next superheater.

The steam temperature control is generally solved by the cascade control scheme. The typical control architecture is presented in Fig. 69. The controlled variable is the steam temperature after the superheater, the control variable is the sprayed water mass flow and the auxiliary (intermediate or secondary) variable is the steam temperature after the spray-water attemperator. The inner loop dynamic behaviour is especially suitable for the cascade scheme, since it is fast. The inner loop is designed to regulate the disturbances arriving from the previous section of the superheater surfaces. The primary controller only has to regulate the disturbances arising in the superheater after the attemperator.

[Figure: cascade loop; the primary PI controller acts on T_st,after sh against its setpoint and provides the setpoint of the secondary controller, which acts on T_st,after sp through the spray-water valve.]

Fig. 69. Typical cascade control scheme for steam temperature control.

The primary controller is generally a PI controller. The PID algorithm could also be used, but it is not applied in practice, because according to experience the derivative effect hardly influences the controller performance due to the large time constant of the superheater.
The secondary controller can be a P or a PI controller. The performance with a P secondary controller is usually satisfactory, and the advantage of this controller from the stability point of view is that there are no integrators in series in the cascade loop. The disadvantage of the P controller in the secondary loop is the remaining steady-state error. To improve the control performance, the application of a PI controller in the secondary loop is a widespread solution.
The incorporation of a feedforward disturbance compensation signal into the scheme may prove an advantage for the following reason: it often happens that during a load rise the boiler is strongly overfired for the purpose of keeping the steam pressure within a relatively narrow range. The result is a severe heating disturbance which affects the superheater, and the disturbance can be effectively counteracted by the mentioned disturbance compensation. To this effect, a signal proportional to the fuel flow is brought via a derivative unit to the input signal of the secondary controller (Klefenz 1986). The control scheme extended with the fuel flow disturbance compensation is presented in Fig. 70.

[Figure: the cascade scheme of Fig. 69 extended with a derivative (D) unit that feeds a fuel-flow-proportional signal (m_fuel) to the secondary controller input.]

Fig. 70. Cascade control scheme for steam temperature control extended with fuel flow
disturbance feedforward compensation.

The application of another auxiliary signal is also common. Fig. 71 shows the steam temperature control scheme with steam mass flow compensation. This path modifies the setpoint of the secondary loop according to the power load of the boiler. With the presented feedforward compensations the performance of the original cascade structure is improved, and the regulation and tracking behaviour can be speeded up.

[Figure: the cascade scheme of Fig. 69 extended with a steam mass flow (m_st) feedforward path modifying the secondary loop setpoint.]
Fig. 71. Cascade control scheme for steam temperature control with steam flow feedforward
compensation.

In this study the control task is the control of one superheater stage. The purpose of the investigation is to improve the performance of the cascade loop without considering the load characteristic of the plant. The proposed controller is compared to a simple cascade loop including a P and a PI controller and a mass flow feedforward disturbance compensation. The proposed controllers are a simple CGPC, as presented in Chapter 3, and a modified CGPC (referenced as SH-GPC), which is a CGPC extended with a steam mass flow feedforward.
In this chapter, the applied superheater process simulator is presented first, followed by the tuning of the PI controllers, then the tuning of the CGPC controller, and the derivation and tuning of the SH-GPC. Finally the controllers are compared based on different simulation series.

5.2 Spray-Superheater simulator


The proposed superheater control loops are tested on a simulator. The simulator is built to simulate the behaviour of the spray and the superheater. Both sub-models are white-box models and the basic physical phenomena are described with algebraic and differential equations. The simulators are intended to capture the main dynamic behaviour of the sub-processes that are interesting from the control point of view.

5.2.1 The water spray model


In the attemperator, the mixing of the sprayed water is assumed to be perfect. The time needed for the mixing, heating and evaporation of the sprayed water can be ignored compared to the time constant of the superheater. According to this assumption, the model of the spray consists of the mass and energy balances and of a dynamic part describing the heat storage of the pipe-work and the effect of the temperature measurement.
The mass balance of the attemperator, according to Fig. 68, is:

$$\dot{m}_{in} + \dot{m}_{sp} = \dot{m}_{st,after\,sp}$$

The energy balance equation of the attemperator:

$$\dot{m}_{in}\,h_{st,before\,sp} + \dot{m}_{sp}\,h_{sp} = \dot{m}_{st,after\,sp}\,h_{st,after\,sp} \qquad (28)$$

where the enthalpy is a function of the temperature and the pressure of the medium. The enthalpy functions are implemented according to the functions proposed by the International Association for the Properties of Water and Steam (1997).
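Solved for the spray-water mass flow, the two balances give the flow needed to reach a desired enthalpy after the spray. A minimal sketch with placeholder enthalpy values (a real implementation would use the IAPWS-IF97 property functions mentioned above; the numbers here are illustrative only, not plant data):

```python
def spray_flow(m_in, h_before, h_sp, h_target):
    """Spray-water mass flow needed so that the mixed enthalpy equals h_target.

    Derived from m_in + m_sp = m_out and m_in*h_before + m_sp*h_sp = m_out*h_target."""
    return m_in * (h_before - h_target) / (h_target - h_sp)

# illustrative values (kg/s and kJ/kg)
m_in, h_before, h_sp, h_target = 50.0, 3350.0, 650.0, 3300.0
print(spray_flow(m_in, h_before, h_sp, h_target))   # approx. 0.94 kg/s
```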

5.2.2 The superheater model


The aim of the model is to estimate the superheated steam temperature based on the available inputs: the steam inlet temperature, the steam mass flow, the steam pressure, the flue gas temperature, the flue gas volume flow and the flue gas components.
The superheater is considered as a distributed parameter system that can be properly described by partial differential equations. The simplest approximation is the linear concentrated parameter model; the most complex approach would be the application of the finite difference or the finite element method.

In the present simulator, the simplicity of the concentrated parameter model is combined with the idea of the finite element method. The model is built up from a series of elementary blocks that include the concentrated parameter phenomenological model. One block both represents a part of the surface and describes the heat exchange process of that surface. According to the real process, the blocks are connected in a counter-flow manner, as presented in Fig. 72.

[Figure: n elementary blocks connected in a counter-flow arrangement; the steam passes from T_st_in through blocks 1 to n to T_st_out, while the flue gas passes in the opposite direction from T_fg_in to T_fg_out.]

Fig. 72. The connected blocks of the superheater model.

The model includes several approximations:
- The superheating process contains two different sub-processes: the slow thermal process and the fast hydrodynamic process. The assumption is that these two sub-processes can be modelled separately. Here the focus is on the thermal process, thus the hydrodynamic process is assumed to be in a quasi-stationary phase and the inlet steam flow is equal to the outlet steam flow.
- The thickness of the pipe wall is small and the heat conductivity is large, thus the temperature gradient of the pipe along the radius is very small.
- The geometrical dimensions are constant along the pipe.
- The steam pressure is uniform along the superheater, but it may vary in time.
The elementary block model is presented in Fig. 73.
[Figure: elementary block of the superheater model; flue gas at T_fg_in and steam at T_st_in enter, heat flows Q_fg (flue gas to the wall at temperature T_p) and Q_st (wall to steam) are exchanged with coefficients α_fg and α_st, and the streams leave at T_fg_out and T_st_out.]

Fig. 73. The elementary block of the superheater model.

The model is based on the energy balance equations and the heat transfer equations.
The energy balance of the steam volume:

$$V_{st}\,\rho_{st}\,\frac{\partial h_{st}}{\partial T_{st}}\,\frac{dT_{st}}{dt} = \dot{m}_{st}\,(h_i - h_o) + Q_{st}$$

The energy balance of the metal mass:

$$M_p\, c_p\, \frac{dT_p}{dt} = Q_{fg} + Q_r - Q_{st}$$

The energy balance of the flue gas volume:

$$V_{fg}\,\rho_{fg}\,c_{fg}\,\frac{dT_{fg}}{dt} = \dot{V}_{fg}\,\rho_{fg}\,c_{fg}\,\big(T_{fg\_in} - T_{fg\_out}\big) - Q_{fg}$$

The energy transfer from the metal wall to the steam:

$$Q_{st} = \alpha_{st}\,(T_m - T_{st})$$

The energy transfer from the flue gas to the metal wall:

$$Q_{fg} = \alpha_{fg}\,(T_{fg} - T_m)$$

The radiative heat transfer from the flue gas to the metal wall:

$$Q_r = \alpha_r\,(T_r^4 - T_m^4)$$
The representative temperatures of the steam (Tst) and flue gas (Tfg) are calculated as the
mean values of the input and output temperatures.
The steam properties such as enthalpy, density and specific heat are calculated by
the corresponding steam property functions. The steam pressure is assumed to be spatially
constant in the model. The steam property functions take the temperature and the
pressure of the steam as arguments (e.g. $h_{in} = h(p_{st}, T_{st\_in})$).
The flue gas components and the volume flow are inputs of the model. The flue gas
composition is applied to calculate the density and the specific heat of the flue gas flow. The
actual flue gas density is calculated from the components based on the molar weights. The
specific heat of the flue gas is calculated considering the flue gas components and the
temperature. The necessary database, including the properties of the elementary gases, is
taken from Barin (1989).
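To make the structure of one elementary block concrete, a minimal Python sketch of the right-hand sides of the three energy balances is given below. The symbols follow the equations above; the parameter dictionary, the property-function interface and the way the representative temperatures are formed as simple means are assumptions of the sketch, not code from the thesis.

def block_derivatives(state, T_st_in, T_fg_in, T_r, m_st, V_fg_flow, p_st, par, props):
    # state = (T_st_out, T_p, T_fg_out) for one elementary block.
    T_st_out, T_p, T_fg_out = state
    T_st = 0.5 * (T_st_in + T_st_out)      # representative steam temperature
    T_fg = 0.5 * (T_fg_in + T_fg_out)      # representative flue gas temperature

    # Heat flows between flue gas, pipe wall and steam (see the equations above).
    Q_st = par["alpha_st"] * (T_p - T_st)
    Q_fg = par["alpha_fg"] * (T_fg - T_p)
    Q_r = par["alpha_r"] * (T_r**4 - T_p**4)

    # Energy balance of the steam volume.
    dT_st = (m_st * (props["h"](p_st, T_st_in) - props["h"](p_st, T_st_out)) + Q_st) / (
        par["V_st"] * props["rho_st"](p_st, T_st) * props["dh_dT"](p_st, T_st))

    # Energy balance of the metal mass.
    dT_p = (Q_fg + Q_r - Q_st) / (par["M_p"] * par["c_p"])

    # Energy balance of the flue gas volume.
    c_rho_fg = props["rho_fg"](T_fg) * props["c_fg"](T_fg)
    dT_fg = (V_fg_flow * c_rho_fg * (T_fg_in - T_fg_out) - Q_fg) / (par["V_fg"] * c_rho_fg)

    return dT_st, dT_p, dT_fg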
To approximate the distributed parameter process, as many blocks as possible should
be applied, taking into account that the computational time grows rapidly with the block
number. The aim of the model is to simulate the dynamic behaviour of the superheater. To
determine the necessary number of elementary blocks, a series of simulations was
performed with different block numbers. Naturally, the dimensions of the elementary
blocks (masses, volumes and surfaces) are calculated based on the number of the blocks.
The response of the steam temperature to a steam inlet temperature step, the response of the
steam temperature to a flue gas inlet temperature step and the response of the steam
temperature to a steam flow step are given in Fig. 74.

Fig. 74. The normalized superheater outlet temperature using a different number of the
applied superheater blocks in response to a) inlet steam temperature step; b) inlet flue gas
temperature step; c) steam mass flow step.

According to the step response figures, the application of 10 elementary blocks already
provides a good approximation. The difference between the responses of the 10- and 20-block
models is not significant, but the computation time increases remarkably. In the
following, the simulator contains 10 elementary blocks.
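A hypothetical sketch of how the ten blocks can be chained in the counter-flow arrangement of Fig. 72 is shown below; it uses the block_derivatives helper from the previous sketch and a simple explicit Euler step, both of which are illustrative simplifications.

def step_superheater(states, T_st_in, T_fg_in, T_r, m_st, V_fg, p_st, par, props, dt=1.0):
    # Counter-flow chaining: steam flows from block 0 to block n-1,
    # flue gas flows from block n-1 to block 0.
    n = len(states)
    new_states = []
    for i in range(n):
        st_in = T_st_in if i == 0 else states[i - 1][0]       # steam outlet of previous block
        fg_in = T_fg_in if i == n - 1 else states[i + 1][2]    # flue gas outlet of next block
        deriv = block_derivatives(states[i], st_in, fg_in, T_r, m_st, V_fg, p_st, par, props)
        new_states.append(tuple(x + dt * dx for x, dx in zip(states[i], deriv)))
    return new_states   # new_states[-1][0] is the superheated steam outlet temperature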

5.2.3 Validation of the models


The presented models are identified using measurement data from a bubbling fluidized bed
boiler. In the spray model, the time constant of the dynamic part and the spray water
temperature, and in the superheater model the heat transfer coefficients ($\alpha_{st}$, $\alpha_{fg}$,
$\alpha_r$), are to be identified.
The physical parameters (mass of the surface, volumes) applied in the model are from
a 185 MW Bubbling Fluidized Bed Boiler. The live steam parameters are the following:
live steam pressure 10.7 MPa; live steam temperature 533 °C; nominal steam mass flow
70 kg/s; and the nominal efficiency of the boiler is 90.3 %. The identification of the model
was performed on the measurement data from the boiler. The cost function of the
identification was the sum of the quadratic error between the estimated and measured steam
temperature after the superheater.
The resulting spray model validation is shown in Fig. 75. The applied process inputs
are presented in Fig. 76. The validation results are satisfactory: the spray model
gives a reasonable response, and it is possible to apply it in the simulator.

Fig. 75. The model output and the measured steam temperature after the spray.


Fig. 76. The process inputs during the validation: a) steam mass flow in kg/s; b) sprayed water
mass flow in kg/s; c) inlet steam temperature in °C.

The validation of the superheater model is presented in Fig. 77. The input variables
(steam mass flow, steam inlet temperature, flue gas inlet temperature and representative
temperature of the combustion chamber) were taken directly from the measurement data;
they are presented in Fig. 78. The flue gas components and the total flue gas volume flow
were calculated by an Adaptive Neuro-Fuzzy Inference System (ANFIS) that describes
the combustion process. The identified ANFIS model is presented by Hmer et al. (2003)
in detail.

Fig. 77. The model output and the measured steam temperature after the superheater.

According to the results presented in Fig. 77, and considering the simplicity of the model,
the built superheater simulator is satisfactory for the purpose.


Fig. 78. The process inputs during the validation of the superheater model: a) steam mass
flow, kg/s; b) flue gas volume flow, Nm³/s; c) steam inlet temperature, °C; d) flue gas inlet
temperature, °C; e) representative temperature of the combustion chamber, °C.

5.3 Tuning of the PI cascade controllers


For the superheater control task three controllers were tested. The two proposed
predictive controllers are compared to the traditional solution. The PI controller scheme is
identical to the most widespread cascade structure applied for steam temperature control.
The primary controller is a PI controller and the secondary controller is a P controller.
The controller structure is extended with steam mass flow feedforward compensation.
The calculated feedforward signal is added to the control signal of the primary PI controller,
as shown in the control scheme presented in Fig. 71. To achieve better control
performance in the whole operating range, the controller parameters are scheduled.
For tuning the PI controllers, the Kappa-Tau (KT) method was applied. The KT
method is described in Åström & Hägglund (1995). The method is based on a test batch
including representative transfer functions for the dynamics of typical industrial
processes and on dominant pole design. The method gives the normalized controller
parameters related to a delayed first order approximation of the process to be controlled.
The tuning parameter of the method is the maximum sensitivity (equal to the reciprocal
of the modulus margin).
The tuning of the controllers followed the natural order: first the secondary (inner)
P controller is tuned; then the primary (outer) PI controller is tuned; and
finally the feedforward compensator is designed.


5.3.1 Tuning of the secondary controller


In the secondary control loop, the controlled process is the spray water attemperator; the
controlled parameter is the steam temperature after the spray, and the control signal is the
sprayed water flow.
To facilitate the tuning, the process is approximated with the following first order
model:

$$G(s) = \frac{K\, e^{-Ls}}{Ts + 1}$$

The model parameters are the gain (K), the time constant (T) and the delay (L). To obtain
the first order approximation of the process, off-line identification was performed.
The simulator was applied to generate the necessary data, and the sprayed water mass
flow signal was excited. The identification of the first order approximation of the process
was carried out by the least squares method (as given by Ikonen & Najim, 2002), applying
a one second sampling time. Unfortunately, the delay (L) parameter cannot be directly
calculated by the least squares method, thus the parameter search was repeated
with different delay values, and the delay that resulted in the least error was accepted.
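A minimal sketch of this identification step is given below, assuming NumPy, a one second sampling time and a simple discrete ARX structure y(t) = -a y(t-1) + b u(t-1-d); the delay grid and the conversion back to continuous-time parameters are illustrative choices rather than the exact procedure of the thesis.

import numpy as np

def fit_first_order_with_delay(u, y, delays=range(0, 31), Ts=1.0):
    # Least squares fit of y(t) = -a*y(t-1) + b*u(t-1-d) for each candidate
    # delay d; the delay giving the smallest residual is accepted.
    best = None
    for d in delays:
        Y = y[1 + d:]
        Phi = np.column_stack([-y[d:-1], u[:len(u) - 1 - d]])
        theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        err = float(np.sum((Y - Phi @ theta) ** 2))
        if best is None or err < best[1]:
            best = (d, err, theta)
    d, _, (a, b) = best
    K = b / (1.0 + a)            # steady-state gain
    T = -Ts / np.log(-a)         # time constant from the discrete pole (assumes 0 < -a < 1)
    L = d * Ts                   # dead time in seconds
    return K, T, L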
The gain of the process may vary remarkably according to the ratio of the steam flow to the
sprayed water flow. The controller parameters are therefore scheduled according to the actual
steam flow and spray water flow values.

Fig. 79. The approximated gain of the spray.

The gain of the spray process is calculated by equation (28) at 30 different steam flow/spray
water flow points. The resulting gains are shown in Fig. 79 as a function of the
sprayed water flow, the parameter of the curves being the steam mass flow. At these points the
gain of the controller is calculated. Since the control signal is burdened with a rate limit, to
avoid adverse oscillations the gain is chosen to result in an open loop steady-state gain
of about 10. The resulting gains of the secondary P controller are given in Table 1, where
the first row is the sprayed water flow and the first column is the steam mass flow in kg/s.
During the operation, the controller applies the gain interpolated according to the actual
steam and sprayed water flow values. To avoid undesirably fast changes in the controller
parameter, the flows are filtered for the gain scheduling.
The negative gains of the controller can be explained by the negative gain of the
process. The physical explanation is that the more water is sprayed into the mass flow, the
lower the steam temperature after the attemperator.
Table 1. The gain of the secondary controller as a function of the steam mass and
sprayed water flows.

Inlet steam        Sprayed water mass flow (kg/s)
mass flow (kg/s)   0         1         2         3         4         5
35                 -0.4075   -0.4310   -0.4553   -0.4802   -0.5058   -0.5320
40                 -0.4655   -0.4891   -0.5132   -0.5379   -0.5632   -0.5890
45                 -0.5235   -0.5471   -0.5711   -0.5956   -0.6207   -0.6462
50                 -0.5816   -0.6050   -0.6290   -0.6534   -0.6783   -0.7036
55                 -0.6397   -0.6631   -0.6869   -0.7113   -0.7360   -0.7611
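The interpolation of Table 1 during operation can be sketched as below; the use of SciPy's RegularGridInterpolator and the assumption that the column grid runs from 0 to 5 kg/s of sprayed water are choices of this illustration.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

steam_flows = np.array([35.0, 40.0, 45.0, 50.0, 55.0])      # kg/s
spray_flows = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])      # kg/s (assumed grid)
gains = np.array([
    [-0.4075, -0.4310, -0.4553, -0.4802, -0.5058, -0.5320],
    [-0.4655, -0.4891, -0.5132, -0.5379, -0.5632, -0.5890],
    [-0.5235, -0.5471, -0.5711, -0.5956, -0.6207, -0.6462],
    [-0.5816, -0.6050, -0.6290, -0.6534, -0.6783, -0.7036],
    [-0.6397, -0.6631, -0.6869, -0.7113, -0.7360, -0.7611],
])
gain_table = RegularGridInterpolator((steam_flows, spray_flows), gains,
                                     bounds_error=False, fill_value=None)

def scheduled_gain(m_steam_filtered, m_spray_filtered):
    # Interpolated P-controller gain for the (filtered) flow values.
    return float(gain_table([[m_steam_filtered, m_spray_filtered]])[0])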

5.3.2 Tuning of the primary controller


The plant to be controlled by the primary controller includes both the secondary loop and
the second sub-process. Therefore, to tune the primary controller, the secondary controller
has to be switched on. Since the tuning method requires a first order approximation of
the process, the simulator is used again to generate data for off-line identification. During
these simulations, the setpoint of the secondary controller was excited.
The first order approximation of the controlled process is obtained in the same way as
in the case of the secondary controller, by the least squares method.
The dynamic behaviour of the superheater is affected by the actual steam flow values,
thus the approximation is repeated at 5 different steam flow values. The controller
parameters are calculated for the different working points. During the operation, the
controller parameters are interpolated between the pre-calculated parameters according to
the actual steam mass flow value. The parameters of the primary controller are given in
Table 2.
Table 2. The primary controller parameters.

Inlet steam mass flow (kg/s)   P      Ti
35                             3.00   65
40                             2.75   60
45                             2.50   55
50                             2.25   50
55                             2.00   45

5.3.3 Feedforward mass flow disturbance compensation


The steam mass flow feedforward compensation is built as shown in Fig. 71. The
feedforward compensation signal is added to the control signal of the primary controller.
The tuning of this path followed the traditional procedure: the effect of the steam
mass flow on the superheater outlet steam temperature is identified, and then the inverse
of the resulting transfer function is applied as the compensator. The compensator is
connected to a filter with a 10 second time constant, because the steam mass flow signal is
usually a strongly excited signal.
The regulation performance of the controller against the steam mass flow disturbance is
shown in Fig. 80 and the corresponding control signal is given in Fig. 81. The figures clearly
show how the feedforward compensation improves the quality of the regulation.
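A simplified sketch of such a feedforward path is shown below: a static compensation gain applied to the steam mass flow deviation, filtered with a 10 s first order filter. The static gain is a simplification of the inverse transfer function compensator described above, and the nominal flow value is an assumed parameter.

class SteamFlowFeedforward:
    # Filtered steam mass flow feedforward added to the primary controller output.
    # k_ff approximates the inverse-model compensation gain (a tuning parameter here).
    def __init__(self, k_ff, m_nominal, tau=10.0, Ts=1.0):
        self.k_ff = k_ff
        self.m_nominal = m_nominal
        self.alpha = Ts / (tau + Ts)        # discrete first order filter coefficient
        self.m_filt = m_nominal

    def update(self, m_steam):
        self.m_filt += self.alpha * (m_steam - self.m_filt)
        return self.k_ff * (self.m_filt - self.m_nominal)   # added to the control signal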

Fig. 80. The effect of the feedforward compensation in the regulation of mass flow
disturbance in the PI controller case.

Fig. 81. The control signals during the mass flow disturbance regulation.


5.3.4 Anti-windup method in the cascade structure


The control signal of the process is the sprayed water flow. As is usual in real
processes, the water flow is limited both in rate and in level. In the simulations the
rate limit is 0.1 kg/s², and the control signal is within the range [0, 5] kg/s. To avoid
unfavourable oscillations, an anti-windup method is applied in the primary controller.
In the primary controller, the windup problem is not evident: the control signal
saturation directly influences the secondary controller. The secondary loop slows down if
saturation happens, and this in turn affects the primary loop. The anti-windup method is
not simple, because the constraint is within the secondary loop; the information about
the active constraint should somehow be transferred to the primary loop.

Fig. 82. The anti-windup method applied in the primary controller.

Fig. 82 shows the anti-windup method for a cascade controller that is presented in
Åström & Hägglund (1995). In the case of control input saturation, the inner loop feeds the
intermediate variable back to the outer loop as a tracking signal. The integrated difference
between the received tracking signal and the controller output is added to the control
signal. According to Åström & Hägglund (1995), the time constant of the integration
(Tt,2) is the square root of the integration time of the controller.
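A sketch of the primary PI controller with this tracking-based anti-windup is given below. The tracking input is the intermediate variable returned by the inner loop, and Tt,2 = sqrt(Ti) follows the rule quoted above; the forward-Euler discretisation is an assumption of the sketch.

import math

class PrimaryPIWithTracking:
    # PI controller with tracking (back-calculation) anti-windup for a cascade loop,
    # in the spirit of Fig. 82.
    def __init__(self, Kp, Ti, Ts=1.0):
        self.Kp, self.Ti, self.Ts = Kp, Ti, Ts
        self.Tt2 = math.sqrt(Ti)     # tracking time constant
        self.i_state = 0.0

    def update(self, setpoint, y, tracking):
        e = setpoint - y
        v = self.Kp * e + self.i_state                 # secondary-loop setpoint (before limits)
        # Integral action plus back-calculation from the tracking signal.
        self.i_state += self.Ts * (self.Kp * e / self.Ti + (tracking - v) / self.Tt2)
        return v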

5.4 Tuning of the CGPC


The CGPC algorithm was presented in detail in Chapter 3. The controller requires the
transfer function model of the inner and the outer process. In this case the transfer
function between the sprayed water and the steam temperature after the spray, and the
transfer function between the steam temperature after the spray and the steam
temperature after the superheater are the models of the inner and the outer processes.
To determine these transfer functions, the same method was used as during the tuning
of the PI controllers in Section 5.3. The transfer functions were identified off-line by the
least squares method from the same data that was applied in the tuning of the PI
controllers.
The spray model is a first order model, thus identical to the model required for the tuning
of the secondary controller. The identification was not repeated; the same transfer
functions were applied in the CGPC as in the tuning of the secondary controller.
The superheater is modelled in the controller with a second order transfer function.
As was already mentioned in connection with the tuning of the primary PI controller, the
behaviour of the superheating process is influenced by the steam mass flow, thus the linear
process model in the controller is scheduled according to the steam mass flow. The process is
linearized at 5 working points and stored as discrete transfer functions. Between the
working points the model parameters are interpolated. The model coefficients are
presented in Appendix 4 and the sampling time is one second.
Applying the considerations presented in Section 3.1.8, the controller parameters are
tuned to obtain a reasonable control performance. The tuned parameters are the following: the
minimum horizon is equal to 2, the control horizon is equal to 1 (to avoid an overly excited
control signal and to maintain robustness); the prediction horizon is 52 (around the time
constant of the process) and the control signal weighting is zero (since the moderate
measurement noise does not require attenuation). The disturbance polynomials are chosen
based on robustness considerations, and they are the following: $C_1(q^{-1}) = 1 - 0.9q^{-1}$ and
$C_2(q^{-1}) = 1 - 0.93q^{-1}$.
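Collected in one place, the CGPC tuning reads as the following parameter set; the dictionary is only an illustrative way of passing the values to a controller implementation.

cgpc_tuning = {
    "H_m": 2,                # minimum horizon
    "H_c": 1,                # control horizon
    "H_p": 52,               # prediction horizon (about one process time constant)
    "lambda": 0.0,           # control signal weighting
    "C1": [1.0, -0.90],      # inner disturbance polynomial 1 - 0.90 q^-1
    "C2": [1.0, -0.93],      # outer disturbance polynomial 1 - 0.93 q^-1
}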

5.5 The extended cascade Generalized Predictive Controller


One of the most important disturbances of the superheater process is the steam mass flow
variation. The steam flow affects the steam temperature after the attemperator and it also has
an effect on the superheater process.
In Chapter 3, it was presented how the GPC can be equipped with a feedforward effect.
Since the controller is based on the same models and cost function as the original GPC, it
is reasonable to apply the feedforward compensation in the superheater controller, referred
to later as the SH-GPC.

5.5.1 Derivation of the controller


To derive the predictor, a linear model is required. The model applied in the controller is
presented in Fig. 83. Here $m_{in}$ is the steam mass flow entering the spray and $m_{sp}$ is the
sprayed water mass flow.


Fig. 83. The superheater process model applied in the GPC.

The model of the inner (spray) process:

$$T_i(t) = \frac{B_{11}(q^{-1})}{A_1(q^{-1})}\, m_{in}(t-1) + \frac{B_{12}(q^{-1})}{A_1(q^{-1})}\, m_{sp}(t-1) + \frac{C_1(q^{-1})}{D_1(q^{-1})}\, \xi_1(t),$$

where $D_1(q^{-1}) = A_1(q^{-1})(1-q^{-1})$.

The ARIMAX model of the external (superheater surface) process:

$$T_{Out}(t) = \frac{B_{21}(q^{-1})}{A_2(q^{-1})}\, m_i(t-1) + \frac{B_{22}(q^{-1})}{A_2(q^{-1})}\, T_i(t-1) + \frac{C_2(q^{-1})}{D_2(q^{-1})}\, \xi_2(t),$$

where $D_2(q^{-1}) = A_2(q^{-1})(1-q^{-1})$ and $m_i(t) = m_{in}(t) + m_{sp}(t)$.


The optimal predictor applied in the GPC to the superheater process is derived
through a series of Diophantine equations.
The k-step ahead prediction is:

$$T_{Out}(t+k\,|\,t) = \frac{B_{21}(q^{-1})}{A_2(q^{-1})}\, m_i(t+k-1) + \frac{B_{22}(q^{-1})}{A_2(q^{-1})}\, T_i(t+k-1) + \frac{C_2(q^{-1})}{D_2(q^{-1})}\, \xi_2(t+k)$$

The 1st Diophantine equation:

$$\frac{C_2(q^{-1})}{D_2(q^{-1})} = F_2(q^{-1}) + \frac{G_2(q^{-1})}{D_2(q^{-1})}\, q^{-k}$$
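Each Diophantine equation of this kind is an ordinary polynomial long division: the quotient gives the F polynomial and the remainder gives the G polynomial. A small NumPy sketch of the division, applicable to any of the splits used below, is given here as an illustration (it is not code from the thesis):

import numpy as np

def diophantine(C, D, k):
    # Solve C(q^-1)/D(q^-1) = F(q^-1) + q^-k * G(q^-1)/D(q^-1).
    # C and D are coefficient sequences in ascending powers of q^-1, with D[0] = 1.
    F = np.zeros(k)
    rem = np.zeros(max(len(C), len(D) + k))
    rem[:len(C)] = C
    for i in range(k):
        F[i] = rem[i] / D[0]
        rem[i:i + len(D)] -= F[i] * np.asarray(D)
    G = rem[k:]                  # remainder coefficients, already shifted by q^-k
    return F, np.trim_zeros(G, 'b')

For the first Diophantine equation above, for example, C would be C2 and D the polynomial D2 obtained by convolving A2 with (1, -1).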

Rearranging the equation with the substitutions and considering that the expected value of
the last term is equal to zero:

$$T_{Out}(t+k\,|\,t) = \frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})}\, \Delta m_i(t+k-1) + \frac{B_{22}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})}\, \Delta T_i(t+k-1) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\, T_{Out}(t)$$

The 2nd Diophantine equation:

$$\frac{B_{22}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})} = \bar{F}_2(q^{-1}) + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\, q^{-(k-1)}$$

Substituting into the prediction equation and applying the inner process model:

$$\begin{aligned}
T_{Out}(t+k\,|\,t) ={}& \frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})}\,\big(\Delta m_{in}(t+k-1) + \Delta m_{sp}(t+k-1)\big) \\
&+ \bar{F}_2(q^{-1})\left[\frac{B_{11}(q^{-1})}{A_1(q^{-1})}\,\Delta m_{in}(t+k-1) + \frac{B_{12}(q^{-1})}{A_1(q^{-1})}\,\Delta m_{sp}(t+k-1) + \frac{C_1(q^{-1})}{D_1(q^{-1})}\,\xi_1(t+k)\right] \\
&+ \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$

The 3rd Diophantine equation:

$$\frac{\bar{F}_2(q^{-1})\,C_1(q^{-1})}{D_1(q^{-1})} = F_1(q^{-1}) + \frac{G_1(q^{-1})}{D_1(q^{-1})}\, q^{-(k-1)}$$

After substituting this into the equation:

$$\begin{aligned}
T_{Out}(t+k\,|\,t) ={}& \frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})}\,\big(\Delta m_{in}(t+k-1)+\Delta m_{sp}(t+k-1)\big) \\
&+ \frac{B_{11}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})}\,\Delta m_{in}(t+k-1) + \frac{B_{12}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})}\,\Delta m_{sp}(t+k-1) \\
&+ \frac{G_1(q^{-1})}{C_1(q^{-1})}\,\Delta T_i(t) + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$
( )

Rearranging the equation:

$$\begin{aligned}
T_{Out}(t+k\,|\,t) ={}& \left[\frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})} + \frac{B_{11}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})}\right]\Delta m_{in}(t+k-1) \\
&+ \left[\frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})} + \frac{B_{12}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})}\right]\Delta m_{sp}(t+k-1) \\
&+ \left[\frac{G_1(q^{-1})}{C_1(q^{-1})} + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\right]\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$

The 4th and 5th Diophantine equations separate the forced and free response of the
control signal:

$$\frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})} = F_{sp,1}(q^{-1}) + \frac{G_{sp,1}(q^{-1})}{C_2(q^{-1})}\, q^{-k}$$

$$\frac{B_{12}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})} = F_{sp,2}(q^{-1}) + \frac{G_{sp,2}(q^{-1})}{C_1(q^{-1})}\, q^{-k}$$

The 6th and 7th Diophantine equations separate the effect of the future and past mass
flow values:

$$\frac{B_{21}(q^{-1})\,F_2(q^{-1})}{C_2(q^{-1})} = F_{in,1}(q^{-1}) + \frac{G_{in,1}(q^{-1})}{C_2(q^{-1})}\, q^{-(k-1)}$$

$$\frac{B_{11}(q^{-1})\,F_1(q^{-1})}{C_1(q^{-1})} = F_{in,2}(q^{-1}) + \frac{G_{in,2}(q^{-1})}{C_1(q^{-1})}\, q^{-(k-1)}$$

Substituting these into the prediction:

$$\begin{aligned}
T_{Out}(t+k\,|\,t) ={}& \big(F_{sp,1}(q^{-1}) + F_{sp,2}(q^{-1})\big)\,\Delta m_{sp}(t+k-1) + \frac{G_{sp,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{sp}(t-1) + \frac{G_{sp,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{sp}(t-1) \\
&+ \big(F_{in,1}(q^{-1}) + F_{in,2}(q^{-1})\big)\,\Delta m_{in}(t+k-1) + \frac{G_{in,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{in}(t) + \frac{G_{in,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{in}(t) \\
&+ \frac{G_1(q^{-1})}{C_1(q^{-1})}\,\Delta T_i(t) + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t),
\end{aligned}$$

where $F_{sp}(q^{-1}) = F_{sp,1}(q^{-1}) + F_{sp,2}(q^{-1})$.


Considering that the future steam mass flow signal is not available, it is assumed to be
constant, which makes the first term of the second row equal to zero. The final
expression of the optimal predictor for the superheater process is given in the following
form:

$$\begin{aligned}
T_{Out}(t+k\,|\,t) ={}& F_{sp}(q^{-1})\,\Delta m_{sp}(t+k-1) + \frac{G_{sp,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{sp}(t-1) + \frac{G_{sp,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{sp}(t-1) \\
&+ \frac{G_{in,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{in}(t) + \frac{G_{in,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{in}(t) \\
&+ \frac{G_1(q^{-1})}{C_1(q^{-1})}\,\Delta T_i(t) + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$

The free response part of the prediction:

$$\begin{aligned}
T_{Out,free}(t+k) ={}& \frac{G_{sp,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{sp}(t-1) + \frac{G_{sp,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{sp}(t-1) \\
&+ \frac{G_{in,1}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{in}(t) + \frac{G_{in,2}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{in}(t) \\
&+ \frac{G_1(q^{-1})}{C_1(q^{-1})}\,\Delta T_i(t) + \frac{\bar{G}_2(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t) + \frac{G_2(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$

The GPC controller for the superheater has the same cost function as the original GPC:

$$J(\Delta u) = \sum_{j=H_m}^{H_p}\big[T_{Out}(t+j\,|\,t) - w(t+j)\big]^2 + \lambda\sum_{j=1}^{H_c}\big[\Delta m_{sp}(t+j-1)\big]^2$$

where $w$ is the setpoint for the steam temperature and $\lambda$ is the weighting factor.
Since the cost function is exactly the same as in the case of the original GPC
algorithm, the analytical solution leads to:

$$\Delta \mathbf{m}_{sp} = \big(\mathbf{F}_{sp}^{T}\mathbf{F}_{sp} + \lambda \mathbf{I}\big)^{-1}\mathbf{F}_{sp}^{T}\big(\mathbf{w} - \mathbf{T}_{Out,free}\big)$$

where $\mathbf{F}_{sp}$ is the matrix built from the $F_{sp}$ polynomials.


The applied control signal according to the receding horizon concept is:

$$\Delta m_{sp}(t) = \mathbf{K}\,\big(\mathbf{w} - \mathbf{T}_{Out,free}\big)$$

where $\mathbf{K}$ is again the first row of the matrix $\big(\mathbf{F}_{sp}^{T}\mathbf{F}_{sp} + \lambda \mathbf{I}\big)^{-1}\mathbf{F}_{sp}^{T}$.
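A minimal sketch of this unconstrained solution is given below, assuming NumPy and that the dynamic matrix built from the Fsp polynomials is available; the receding-horizon controller applies only the first computed increment.

import numpy as np

def gpc_gain(F_sp, lam):
    # First row of (F^T F + lambda*I)^-1 F^T for the unconstrained GPC.
    n_u = F_sp.shape[1]
    return np.linalg.solve(F_sp.T @ F_sp + lam * np.eye(n_u), F_sp.T)[0]

def control_increment(K, w, T_out_free):
    # Receding-horizon move: delta m_sp(t) = K (w - T_out_free).
    return float(K @ (np.asarray(w) - np.asarray(T_out_free)))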
The derived control algorithm is:

$$\begin{aligned}
\Delta m_{sp}(t) ={}& \sum_{i=H_m}^{H_p} k_i\, w(t)
- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{sp,1,i}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{sp}(t-1)
- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{sp,2,i}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{sp}(t-1) \\
&- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{in,1,i}(q^{-1})}{C_2(q^{-1})}\,\Delta m_{in}(t)
- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{in,2,i}(q^{-1})}{C_1(q^{-1})}\,\Delta m_{in}(t) \\
&- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{1,i}(q^{-1})}{C_1(q^{-1})}\,\Delta T_i(t)
- \sum_{i=H_m}^{H_p} k_i\,\frac{\bar{G}_{2,i}(q^{-1})}{C_2(q^{-1})}\,\Delta T_i(t)
- \sum_{i=H_m}^{H_p} k_i\,\frac{G_{2,i}(q^{-1})}{C_2(q^{-1})}\,T_{Out}(t)
\end{aligned}$$

The polynomials used in the controller structure of Fig. 84 are defined as:

$$S(q^{-1}) = C_1(q^{-1})\,C_2(q^{-1}) + q^{-1}C_1(q^{-1})\sum_{i=H_m}^{H_p} k_i\, G_{sp,1,i}(q^{-1}) + q^{-1}C_2(q^{-1})\sum_{i=H_m}^{H_p} k_i\, G_{sp,2,i}(q^{-1})$$

$$R_1(q^{-1}) = \sum_{i=H_m}^{H_p} k_i\, G_{1,i}(q^{-1}),\qquad
R_2(q^{-1}) = \sum_{i=H_m}^{H_p} k_i\, \bar{G}_{2,i}(q^{-1}),\qquad
R_3(q^{-1}) = \sum_{i=H_m}^{H_p} k_i\, G_{2,i}(q^{-1}),$$

$$R_4(q^{-1}) = \sum_{i=H_m}^{H_p} k_i\, G_{in,2,i}(q^{-1}),\qquad
R_5(q^{-1}) = \sum_{i=H_m}^{H_p} k_i\, G_{in,1,i}(q^{-1}).$$
The resulting controller in polynomial form, together with the process, is presented in Fig. 84.


Fig. 84. The derived SH-GPC controller in polynomial form (The grey boxes represent the
process itself).

5.5.2 Tuning of the SH-GPC


The derived controller requires different models of the process than the CGPC or the PI
loop during the tuning. The linear models are obtained in a similar way as in the
previous cases.
The effect of the spray water on the superheater inlet steam temperature is modelled
by the same model as earlier. The effect of the steam mass flow on the superheater inlet
steam temperature is again calculated by equation (28).
The linear models of the superheater are calculated by off-line identification. The
linear models of the superheater (B21/A2 and B22/A2) are chosen to be second order
transfer functions. The same simulation data was applied as earlier and the two transfer
functions were identified together in one least squares calculation. The identified linear
models for the SH-GPC controller are given in Appendix 4. It is important to keep in
mind that the identification generally requires uncorrelated input signals, but in this case
this requirement cannot be satisfied. For identification purposes, the only variable that
can be excited is the sprayed water flow. If the sprayed water flow is changed, then both
the steam temperature after the spray and the steam mass flow entering the superheater
change, so these two inputs of the superheater are always correlated.
As in the case of the other two controllers, the SH-GPC is also scheduled according
to the steam mass flow. Thus the identification of the linear models was repeated at
different steam mass flow values and the models are stored as discrete filters. During the
operation, the model parameters are interpolated according to the steam mass flow.
The same parameters have to be tuned as in the CGPC. The tuned parameters are the
following: the minimum horizon is equal to 2 (the delay of the process), the control horizon
is equal to 1 (to avoid an overly excited control signal); the prediction horizon is 52 and the
control signal weighting is 0. The disturbance polynomials, $C_1(q^{-1}) = 1 - 0.9q^{-1}$ and
$C_2(q^{-1}) = 1 - 0.9q^{-1}$, are chosen to obtain robustness similar to that of the PI controller.

5.6 Comparison of the controllers


The comparison of different controllers is always a difficult task, since the performances
of the controllers depend essentially on the tuning. Usually, the faster the controller, the
less robust the control loop is. As a consequence, considering only the regulation and
tracking performances may lead to false conclusions. To facilitate the
evaluation of the performances of the controllers, first the modulus margins and the robust
stability limits for additive error are given.
The performances of the controllers are then compared based on three simulations
performed on the spray-superheater simulator. In the first simulation, one reference signal
step change and deterministic disturbances are applied to illustrate the tracking and
regulation behaviour.
In the second and third simulations the process inputs are taken from measurement
data. In the second simulation the tracking performance in the presence of disturbances is
presented, and in the final simulation the regulation performances are compared.

5.6.1 Robustness properties of the controllers


In this chapter, the same robustness measures are calculated as were used in the
robustness investigation of the CGPC controller in Section 3.3.3. The modulus margins
and the robust stability limits for the additive error are calculated for the PI, the
CGPC and the SH-GPC controllers.
A direct comparison of the controllers is not possible, because the controllers model
the process in different ways. The CGPC and the PI loop are based on SISO models of
the spray and the superheater. The modelling error can be defined only for these two
transfer functions. The calculated robust stability limit for additive error (see
Section 3.2.2) is presented in Fig. 85. For both controllers, the robustness
calculations model the process with the linear models applied in the CGPC
controller. In the figure a reference modelling error is also shown: a 10 % gain error and
reasonable delay errors are assumed in the inner and outer processes, respectively.
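As a reminder of how these measures are obtained, a short sketch is given below: the modulus margin is the minimum distance of the open loop Nyquist curve from the critical point (the reciprocal of the maximum sensitivity), and the robust stability limit for an additive plant error is |1 + L(jw)| / |C(jw)|. The sketch works on complex frequency response arrays evaluated on a common grid and is not code from the thesis.

import numpy as np

def robustness_measures(L, C):
    # L: open loop frequency response C(jw)*G(jw); C: controller frequency response.
    S = 1.0 / (1.0 + L)                        # sensitivity function
    modulus_margin = 1.0 / np.max(np.abs(S))   # min_w |1 + L(jw)|
    additive_limit = np.abs(1.0 + L) / np.abs(C)
    return modulus_margin, additive_limit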


Fig. 85. The robust stability limit for the additive modelling error of the CGPC and PI
controllers.

According to the presented stability limits, both the CGPC and the PI controller are well above
the assumed modelling error. For the first sub-process (the spray), the limit of the CGPC is
above the limit of the PI controller, thus the CGPC controller is more robust at almost
every frequency. The stability limits against the modelling error of the superheater model
show that the critical frequency range is different for the two controllers, but the CGPC is
slightly more robust than the PI at almost all frequencies.
In the SH-GPC case, the predictor is based on MISO models for the spray and for the
superheater as well. For the spray, the control loop contains only the transfer function
between the control signal and the steam temperature after the spray (B12/A1). The transfer
function between the steam mass flow and the steam temperature after the spray is not
included in the control loop, thus it does not influence the stability and robustness of the
control loop; it affects only the performance of the feedforward disturbance
compensation. The superheater model of the SH-GPC is also a MISO model, and both
paths (B21/A2 and B22/A2) are included in the control loop, therefore their modelling errors
influence the stability of the controller. The calculated robust stability limits are
presented in Fig. 86. The same reference modelling errors are presented as in the figure for
the CGPC and PI controllers.
It is important to note that the robustness of the inner loop (additive error robust
stability limit for the B12/A1 model) can be influenced by the C1 polynomial. The
robustness of the outer loops (i.e. additive error robust stability limit for the B21/A2 and
B22/A2 models) can be influenced by the C2 polynomial. Similarly to the CGPC, the C
polynomials affect only the robustness of the corresponding loop.
The robust stability limits of the SH-GPC are satisfactory, similarly to the CGPC and PI
controllers. The controller has enough margin at every frequency for all three
transfer functions.


Fig. 86. The robust stability limit for the additive modelling error of the SH-GPC controllers.

To numerically express the robustness of the controllers, the modulus margins (see
Section 3.2.1) were also calculated for the controllers. In Table 3 the modulus margins of
the CGPC and PI controllers are given. The modulus margins of the SH-GPC are
presented in Table 4.
Table 3. The modulus margins of the CGPC and of the PI loops.

Controller   B1/A1     B2/A2
CGPC         0.88529   0.90072
PI           0.83965   0.58845

Table 4. The modulus margins of the SH-GPC loops.

Controller   B12/A1    B21/A2    B22/A2
SH-GPC       0.93397   0.99483   0.91586

The modulus margin values express the strong robustness of the control loops. Among
the controllers, the PI controller is the least robust, and the most robust one is the SH-GPC
controller.

5.6.2 Evaluation of the controllers based on deterministic disturbances


This simulation is prepared to present the tracking performance in a disturbance-free case
and the regulation performances of the controllers under deterministic disturbances. The
applied disturbances include all inputs of the superheater process. The changes of the
inputs were the following:
- at t=100 s, a reference signal step from 491.7 °C to 492.7 °C;
- at t=500 s, the steam inlet temperature changes from 470 °C to 480 °C;
- at t=1000 s, the main steam flow changes from 45 kg/s to 48 kg/s;
- at t=1500 s, the flue gas flow changes from 73.7 m³/s to 81 m³/s;
- at t=2000 s, the representative temperature of the combustion chamber changes from
930 °C to 890 °C;
- at t=2500 s, the flue gas inlet temperature changes from 775 °C to 785 °C;
- at t=3000 s, the sprayed water temperature changes from 20 °C to 17 °C;
- at t=3500 s, the steam pressure changes from 10⁷ Pa to 1.1·10⁷ Pa;
- at t=4000 s, a unit step disturbance is applied to the steam outlet temperature.
The simulation results are presented in three sections to clearly show the performances.
The process outputs are presented in Fig. 87, Fig. 89 and Fig. 91; the corresponding
control signals are given in Fig. 88, Fig. 90 and Fig. 92.
The simulation results clearly express the difference between the
controllers. The tracking performances of the predictive controllers are very similar. The
PI controller is slower and slightly more damped than the predictive controllers. This can
be observed especially in the control signals.

Fig. 87. The process outputs with the different controllers on deterministic disturbances on
the first section.


Fig. 88. The control signal of the controllers on deterministic disturbance on the first section.

The inlet steam temperature change can be considered as an inner disturbance. The
regulation of the inlet steam temperature disturbance shows a remarkable difference
between the predictive controllers and the PI loop. The peak of the deviation is smaller in the
case of the PI controller, but it has a long settling time. In Section 3.3.4, the inner disturbance
regulation of a traditional cascade loop and of the proposed CGPC algorithm was already
investigated, and the significant improvement of the CGPC was explained there.
The SH-GPC and the CGPC act similarly; the improvement of the SH-GPC is not
significant.

Fig. 89. The process outputs with the different controllers on deterministic disturbances on
the second section.

The steam flow disturbance affects both the spray and the superheater processes. In the
spray, a larger steam mass flow results in a higher steam temperature; meanwhile in the
superheater, the higher the steam mass flow, the lower the outlet steam temperature. The
regulation of the steam mass flow disturbance also distinguishes the controllers. The PI
controller acts with a remarkable drop of the sprayed water flow, because of the
feedforward disturbance compensation. As a result, the steam temperature tends to move
in the opposite direction compared to the other controllers, and its maximum deviation is
about half of the deviation in the case of the predictive controllers. The final value is slowly
reached after a couple of unfavourable oscillations. The SH-GPC is faster than the
CGPC in this regulation, and the maximum deviation is decreased by about 35 %. This
improvement is the result of the modified model applied in the controller. It is interesting
to observe that the better regulation performance of the SH-GPC is obtained with less
control action.


Fig. 90. The control signal of the controllers on deterministic disturbance on the second
section.

The flue gas volume flow (at t=1500 s) and the flue gas temperature (at t=2500 s)
disturbances are very similar, and their regulations are also close to each other. The SH-GPC
regulation was the fastest, but the CGPC is not much worse.
The change of the combustion chamber representative temperature causes only a
marginal disturbance. The predictive controllers regulated this disturbance faster.
The sprayed water and pressure disturbances are not remarkable. Both of these
disturbances can be interpreted as inner load disturbances. The regulations of the
controllers are in accordance with this fact: the PI loop regulates with overshoot, while the
regulations of the SH-GPC and CGPC are more damped.

Fig. 91. The process outputs with the different controllers on deterministic disturbances on
the third section.

The regulation of the pure output disturbance by the PI loop is slower than that of the
predictive controllers. The control signal of the PI is well damped, while
the control signals of the GPCs show a larger overshoot. The fastest regulation
is performed by the SH-GPC.

Fig. 92. The control signal of the controllers on deterministic disturbance on the third section.

5.6.3 Evaluation of the controllers based on measured input data


In the following simulations, the spray-superheater process inputs are taken from
measurement data and only the sprayed water flow is calculated, based on the presented
algorithms. Besides the excited inputs, the process output is also burdened with a significant
output disturbance.


Fig. 93. The process inputs during the simulation: a) steam inlet temperature, °C; b) steam
mass flow, kg/s; c) flue gas inlet temperature, °C; d) flue gas volume flow, Nm³/s; e)
representative temperature of the combustion chamber, °C; f) additional output disturbance.

In the first example, the tracking performance of the controllers is tested in the presence of
different disturbances. The inputs of the process and the output disturbance added to the
process output are presented in Fig. 93. The process output is given in Fig. 94 and the
control signals in Fig. 95. The setpoint of the steam temperature includes two steps and
two ramps.

Fig. 94. The process outputs with the different controllers using measured input signals.


Fig. 95. The control signals of the controllers.

The control performances of the controllers satisfy the expectations based on the previous
example. All three controllers could follow the setpoint well, regardless of the type of the
setpoint change. The predictive controllers perform similarly, but the SH-GPC followed
the setpoint more tightly. The PI controller was slightly slower, and relatively large errors
could be experienced. The control signals of the controllers do not show a significant
difference. The predictive controllers have peaks at the step-like setpoint changes, but
otherwise the control signals run close to each other.
The performance of the controllers can also be compared by the calculated error
measures. Table 5 shows the average absolute error in the second column and the
square root of the average squared error in the third column. Both
error measures reflect the performance shown in the figure: the smallest error is
obtained by the SH-GPC controller. The CGPC controller performed worse than the SH-GPC,
but some improvement over the PI controller can still be observed, especially in
the quadratic measure.
Table 5. The error measures during the tracking.

Controller   (1/n) Σ|e(i)|   sqrt((1/n) Σ e(i)²)
SH-GPC       0.35203         0.6525
CGPC         0.52006         0.8015
PI           0.56432         0.98505
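The two error measures of Tables 5 and 6 are the mean absolute error and the root mean square error of the control error; for reference, a one-line NumPy illustration:

import numpy as np

def error_measures(y, w):
    # Mean absolute error and root mean square error of the tracking error.
    e = np.asarray(y) - np.asarray(w)
    return np.mean(np.abs(e)), np.sqrt(np.mean(e ** 2))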

In the previous example, the tracking performance of the controllers was tested in the presence
of different disturbances. In the next simulation, the setpoint of the steam temperature was
kept constant and the regulation performances of the controllers were compared. The inputs of
the process and the output disturbance added to the process output are presented in Fig. 96.
The process output is given in Fig. 97 and the control signals in Fig. 98.


Fig. 96. The process inputs during the simulation: a) steam inlet temperature, °C; b) steam
mass flow, kg/s; c) flue gas inlet temperature, °C; d) flue gas volume flow, Nm³/s; e)
representative temperature of the combustion chamber, °C; f) additional output disturbance.

The performances of the controllers are in accordance with the results from the previous
simulations. The best regulation is achieved by the proposed SH-GPC controller; the
performance of the CGPC controller is slightly worse. The PI controller has large errors,
but there is a time range ([500, 750] s) when the best regulation was performed by the PI
controller.

Fig. 97. The process outputs with the different controllers using measured input signals.


Fig. 98. The control signals of the controllers.

The control signal figure shows a significant difference between the predictive controllers
and the PI controller. The control signals of the predictive controllers run together, but the
control signal of the PI controller shows oscillations in certain ranges that are not yet
visible in the output. This excited control signal can be explained by two facts:
the inner controller and the steam mass flow disturbance feedforward compensation are
tuned to be relatively fast.
The error measures are presented in Table 6. According to the values presented in the
table, the errors of the CGPC are reduced by about 30 % by the SH-GPC, while the errors of
the CGPC are about 75 % of those of the PI loop, which is already remarkable.
Table 6. The error measures during the regulations.

Controller   (1/n) Σ|e(i)|   sqrt((1/n) Σ e(i)²)
SH-GPC       0.14779         0.19229
CGPC         0.24611         0.32296
PI           0.35377         0.41679

5.7 Summary
In this chapter, the proposed cascade generalized predictive controller was applied for the
steam temperature control of a superheater stage. The temperature control was performed
with three different controllers: a traditional PI based cascade controller with steam mass
flow disturbance feedforward compensation; a CGPC controller as it is derived in
Chapter 3 and finally a modified CGPC with steam mass flow compensation.

The controllers were tested on a white box simulator based on mass and heat
conservation principles. The simulator was identified and validated based on
measurement data from a 185 MW bubbling fluidized bed boiler.
After the tuning, the controllers were compared from the robustness point of view. The
simulation study of the controllers included the tracking and regulation performance
under deterministic disturbances, and also under realistic disturbances taken from
measurement data.
According to the presented simulation results, the cascade generalized predictive
controller works well. The quality of the superheater control can be improved by the
application of the proposed control algorithm. According to the robustness measures, the
improvement of the control performance was not at the expense of the robustness.

6 Conclusions
The aim of this study was to explore the possibilities of applying the GPC in a cascade
control scheme. One of the key features of the GPC controller, the predictor, is modified
to capture the cascade behaviour. Hence, to solve the cascade control task, there is no
need for two (primary and secondary) controllers; one GPC with the cascade
predictor can properly control the process. The cascade predictor is derived from the
SISO transfer functions of the inner and of the outer process. The prediction is based on
the control signal, the process output and the intermediate variable.
The performance and the properties of the proposed controller have been investigated
in detail. The proposed algorithm has a remarkable advantage in regulation over the
traditional cascade schemes. The application of the cascade predictor facilitates the
regulation of only the outer process output, while the intermediate variable remains
uncontrolled. The cost function of the CGPC includes only the error of the primary output
and thus the performance of the outer process output is optimized. However, the
intermediate variable is applied in the cascade predictor, and consequently the rejection
of the inner disturbances becomes smoother than in the traditional cascade structures.
The robustness analysis of the CGPC algorithm found that, in contrast to the traditional
cascade structure, it is possible to tune the inner and the outer loop robustness separately
by the noise polynomials. This is especially important if the uncertainties of the sub-processes
are different.
The robustness property is strictly related to the open loop transfer function, just as the
regulation performance is. Consequently, not only can the robustness properties of the loops
be tuned separately, but the regulation of the inner and outer loops can also be
designed independently. A different inner noise polynomial results in a different
regulation of the inner disturbance, but it does not influence the regulation of the outer
disturbance. The same holds vice versa: a different outer noise polynomial results in a
different regulation of the outer disturbance, but does not affect the regulation of the inner
disturbance.
In practice, the variables are often constrained. The controller has to be able to deal
with these constraints, and one of the main advantages of the GPC is the way it handles
constraints. The CGPC was also tested under different kinds of input and output
constraints. The great advantage of the CGPC is that there is only one cost function,
therefore any kind of constraint is properly considered. The simulation results
illustrated how remarkable performance improvements can be obtained by the CGPC in
the presence of input or output constraints.
The proposed algorithm was tested not only in simple simulation examples, but also in more
complex simulators and in a pilot scale fluidized bed boiler.
The oxygen control loop is an important tool to reduce the emissions of a boiler and to
improve its efficiency. The aim of the control loop is to regulate
the oxygen content of the flue gas by changing the burning air flow. The controlled
process covers a complex nonlinear phenomenon that may have a long time delay. The
proposed algorithm was applied in the oxygen control loop. The CGPC is extended with the
feedforward compensation of a measurable disturbance, i.e. the fuel mass flow. The
proposed CGPC algorithm uses linear models of the processes, linearized
around the middle working point.
The performance of the CGPC in the oxygen control loop was first tested on a simulator.
The uncertainty of the applied linear models was significant, therefore the
robustness of the designed control loop was an important issue. The simulation study
showed that the performance of the oxygen control can be improved by the proposed
CGPC even in the presence of serious modelling error.
After the promising simulation results, the CGPC was tested on a 50 kW pilot plant.
The tracking performances and the regulation of measured and unmeasured disturbances of
the proposed controller and of the retuned original PI controller were compared. The CGPC
satisfied the expectations, and the performance improvement was obvious.
The control of the superheated steam temperature is a traditional cascade control
problem widely considered in the literature. It was therefore natural to test the CGPC on the
superheater control.
Three controllers were tested: a traditional PI controller extended with steam mass
flow feedforward compensation, a CGPC controller and a CGPC controller extended with
steam mass flow feedforward compensation. The comparison was performed on a
spray-superheater simulator, validated on measurement data from a 185 MW fluidized bed
boiler.
The results show a remarkable difference between the traditional PI based control loop
and the proposed CGPCs. The predictive concept improved the performance both in
tracking and in regulation. The CGPCs were tuned to have at least the same
robustness as the PI controller.
The presented examples not only illustrated the power of the CGPC, but also
demonstrated how the GPC or CGPC can be designed for a specific problem. With the
given description, the robustness analysis of the designed controller can also be carried out.

References
Åström KJ & Hägglund T (1995) PID Controllers: Theory, Design and Tuning, 2nd ed.,
International Society for Measurement and Control, North Carolina.
Barin I (1989) Thermochemical data of pure substances, Part I and II, VCH, Weinheim.
Bordons C & Camacho EF (1999) Application of simple cascade GPC with robust behaviour to a
sugar refinery, Proceedings of European Control Conference, Karlsruhe, Germany.
Buschini A, Ferrarini L & Maffezzoni C (1994) Self-tuning cascade temperature control, IEEE WE7-5, 753-758.
Camacho EF & Bordons C (2004) Model Predictive Control, Springer-Verlag, London, 2nd ed.
Clarke DW, Mohtadi C & Tuffs PS (1987) Generalized Predictive Control Part I.-II., Automatica
23: 137-148.
Clarke DW (1988) Application of Generalized Predictive Control to Industrial Processes, IEEE
Control Systems Magazine 8: 49-55.
Cutler CR & Ramaker BC (1979) Dynamic Matrix Control- A Computer Control Algorithm,
AICHE national meeting, Houston, USA.
Czinder J (2000) Control of Power Plants, in Hungarian, Műegyetem Kiadó.
Ferretti G, Maffezzoni C & Scattolini R (1991) Recursive estimation of time delay in sampled
systems, Automatica 27: 653-661.
Forrest S, Johnson M & Grimble M (1993) LQG self-tuning control of Super-heated steam
temperature in Power Generation, 2nd IEEE Conference on Control Applications, Vancouver,
Canada.
Garcia CE, Prett DM & Morari M (1989) Model predictive control: Theory and practice - a survey,
Automatica 25: 335-348.
Glattfelder AH & Schaufelberger W (2003) Control Systems with Input and Output Constraints,
Springer-Verlag, London.
Goodwin GC, Graebe SF & Salgado ME (2001) Control system design, Prentice-Hall, New Jersey,
USA.
Greco C, Menga G, Mosca E & Zappa G (1984) Performance Improvement of Self Tuning
Controllers by Multistep Horizons: The MUSMAR approach, Automatica 20: 681-700.
Guo QG, Wang DF, Han P & Lin BH (2003) Multi-model GPC for steam temperature system of
circulating fluidized bed boiler, in Proceedings of the 2nd International Conference on Machine
Learning and Cybernetics, Xian, China.
Hägglund T (1996) An industrial dead-time compensating PI controller, Control Engineering
Practice 4: 749-756.

Hang CC, Åström KJ & Ho WK (1991) Refinement of the Ziegler-Nichols tuning formula, IEE
Proceedings - D 111-118.
Hedjar R, Toumi R, Boucher P & Dumur D (2000) Cascaded Nonlinear Predictive Control in
Induction Motor, in Proceedings of the 2000 IEEE International Conference on Control
Applications, Anchorage, USA.
Hedjar R, Toumi R, Boucher P & Dumur D (2003) Two cascaded Nonlinear Predictive Controls of
Induction Motors, in Proceedings of the 2003 IEEE International Conference on Control
Applications 458-463.
Hmer Z, Wetz V, Kovács J & Kortela U (2004) Neuro-Fuzzy model of flue-gas oxygen content,
Proceedings of the 23rd International Conference on Modelling, Identification and Control,
Grindelwald, Switzerland, 98-102.
Huang HP, Chien IL & YC Lee (1998) Simple method for tuning cascade controllers, Chemical
Engineering Communications 185: 89-121.
Ikonen E & Najim K (2002) Advanced Process Identification and Control, Marcel Dekker, New
York.
International Association for the Properties of Water and Steam (1997).
Kalman RE (1960a) Contributions to the theory of optimal control, Boletín de la Sociedad
Matemática Mexicana 5: 102-119.
Kalman RE (1960b) A new approach to linear filtering and prediction problems, Transactions of
ASME, Journal of Basic Engineering 87: 35-45.
Kaya I (2001) Improving performance of cascade control and a Smith predictor, ISA Transactions
40: 223-234.
De Keyser RMC & Van Cauwenberghe (1985) Extended Prediction Self Adaptive Control, in
IFAC symposium on Identification and System Parameter Estimation, York, UK, 1317-1322.
Klefenz G (1986) Automatic Control of Steam Power Plants, Bibliographisches Institut,
Mannheim/Wien/Zürich.
Landau ID, Lozano R & M'Saad M (1998) Adaptive Control, Springer-Verlag, London.
Leppäkoski K & Mononen J (2001) Optimization of fluidized bed boilers and flue gas emission
control, Automation 2001, Seminar proceedings, Finnish Society of Automation, Publication
Series No 24, 228-233.
Leppäkoski K, Mononen J & Kovács J (2003) Adaptable model of flue-gas oxygen content.
Proceedings of the 20th IASTED International Conference on Applied Simulation and Modeling,
Marbella, Spain, 431-436.
Lestage R, Pomerleau A & Desbiens A (1999) Improved constrained cascade control for parallel
processes, Control Engineering Practice 7: 969-974.
Lee JH & Cooley B (1997) Recent advances in model predictive control and other related areas. In
5th International Conference on Chemical Process Control.
Lee YH, Park SW & Lee MY (1998) PID controller tuning to obtain desired closed loop responses
for cascade control systems, Industrial Engineering Chemical Research 39: 1859-1865.
Lemos JM & Mosca E (1985) A Multipredictor-based LQ Self-tuning Controller, in IFAC
Symposium on Identification and System Parameter Estimation, York, UK, 137-141.
Lewis FL (1992) Applied Optimal Control and Estimation, Prentice Hall, Englewood Cliffs, N.J.
Liu T, Gu D & Zhang W (2005) Decoupling two-degree-of-freedom control strategy for cascade
control systems, Journal of Process Control 15: 159-167.
Luyben WL (1990) Process Modelling, Simulation, and Control for Process Control, McGraw Hill
Book Company, New York.
Maciejowski JM (2002) Predictive Control with Constraints, Prentice Hall, London.
Maciejowski JM, French I, Fletcher I & Urquhart T (1991) Cascade control of a process plant using
predictive and multivariable control, Proceedings of the 30th IEEE Conference on Decision and
Control.

Madrigal-Espinoza G & Garza-Barrientos JA (1995) Autotuning PID controller for superheated
steam temperature in power plants, in Proceedings of IFAC Control of Power Plants and Power
Systems, Cancun, Mexico.
Maffezzoni C, Schiavoni N & Ferretti G (1990) Robust Design of Cascade Control, IEEE Control
Systems Magazine, January, 21-25.
Moelbak T (1999) Advanced control of superheater steam temperature - an evaluation based on
practical applications, Control Engineering Practice 7: 1-10.
Nakamura H, Toyoda Y & Oda K (1995) An adaptive control system for steam temperature of
fossil-fuel-fired thermal power plant, in Proceedings of IFAC Control of Power Plants and
Power Systems Symposium, Cancun, Mexico.
Najim K & M'Saad M (1991) Adaptive control: theory and aspects, Journal of Process Control
1: 84-95.
Paloranta M, Leppäkoski K & Mononen J (2003) A simulator-based control design case for a full-scale
bubbling fluidized bed boiler. Proceedings of the Twelfth IASTED International
Conference on Applied Simulation and Modelling, Marbella, Spain, 30-35.
Paloranta M, Benyó I, Leppäkoski K & Kortela U (2004) Time Delay Compensating Control of
Flue-Gas Oxygen-Content in FBB. Proceedings of the Seventh IASTED International
Conference on Power and Energy Systems, Florida, USA, 419-424.
Paloranta M, Kovács J, Benyó I & Leppäkoski K (2005) A Simulator-Based Tuning of Smith-Predictor in a Pilot CFB Boiler. Proceedings of the Fifth IASTED International Conference on Modelling, Simulation and Optimization, Oranjestad, Aruba, 187-192.
Richalet J, Rault A, Testud JL & Papon J (1976) Algorithmic Control of Industrial Processes, In 4th
IFAC Symposium on Identification and System Parameter Estimation, Tbilisi, USSR.
Richalet J, Rault A, Testud JL & Papon J (1978) Model Predictive Heuristic Control: Application
to Industrial Processes, Automatica 14: 413-428.
Richalet J (1992) Pratique de la commande prédictive. Hermes.
Richalet J (1993) Industrial Applications of Model Based Predictive Control, Automatica 29: 1251-1274.
Roffel B & Betlem BH (2004) Advanced Practical Process Control, Springer-Verlag, Berlin.
Qin SJ & Badgwell TA (2003) A survey of industrial model predictive control technology, Control
Engineering Practice 11: 733-764.
Semino & Brambilla (1996) An efficient Structure for Parallel Cascade Control, Industrial &
Engineering Chemistry Research 35: 1845-1852.
Shinskey FG (1996) Process Control Systems, 4th ed. McGraw Hill Book Company, New York.
Shaoyuan L, Yugeng X, Zengqiang C & Zhuzhi Y (2000) Cascade GPC applied to biaxial Film Production Line, Proceedings of the American Control Conference, Chicago, USA.
Silva RN, Rato LM, Lemos JM & Coito F (1997) Cascade control of a distributed collector solar field, Journal of Process Control 7: 111-117.
Silva RN, Shirley PO, Lemos JM & Gonçalves AC (2000) Adaptive regulation of super-heated steam temperature: a case study in an industrial boiler, Control Engineering Practice 8: 1405-1415.
Smith OJM (1957) Closer control of loops with dead time, Chemical Engineering Progress 53: 217-219.
Soeterboek R (1992) Predictive Control: a unified approach, Prentice Hall, London.
Song SH, Cai WJ & Wang YG (2003) Auto-tuning of cascade control systems, ISA Transactions 42: 63-72.
Tan KK, Lee TH & Ferdous (2000) Simultaneous online automatic tuning of cascade control for open loop stable process, ISA Transactions 39: 233-242.
Tekes (2004) Growing power: Renewable solutions by bioenergy technology from Finland.
Markprint Oy, Lahti.
Ydstie BE (1984) Extended Horizon Adaptive Control, in Proceedings of 9th IFAC World Congress, Budapest, Hungary.
Yoon TW & Clarke DW (1994) Towards robust adaptive predictive control, Advances in Model-Based Predictive Control, Oxford University Press.
Yoon TW & Clarke DW (1995) Observer Design in Receding-Horizon Control, International
Journal of Control 2: 151-171.
Zhao WJ, Niu YU & Liu JZ (2002) Multi-model adaptive control for the superheated steam temperature, in Proceedings of the 1st International Conference on Machine Learning and Cybernetics, Beijing, China.

Appendix 1 The GPC controller based on state space model


The ARIMAX structure in transfer function representation is given by:

y(t) = \frac{B(q^{-1})}{A(q^{-1})} u(t-1) + \frac{C(q^{-1})}{D(q^{-1})} e(t)

The ARIMAX model represented in state-space form is given by:

x(k+1) = A x(k) + B u(k) + E e(k)
y(k)   = C x(k) + e(k)

where:

A = \begin{bmatrix}
-f_1     & 1 & 0 & \cdots & 0 \\
-f_2     & 0 & 1 & \cdots & 0 \\
\vdots   &   &   & \ddots &   \\
-f_{n-1} & 0 & 0 & \cdots & 1 \\
-f_n     & 0 & 0 & \cdots & 0
\end{bmatrix}, \quad
B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}, \quad
E = \begin{bmatrix} c_1 - f_1 \\ c_2 - f_2 \\ \vdots \\ c_{n-1} - f_{n-1} \\ c_n - f_n \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}

The i-step ahead prediction can be calculated from:

\hat{y}(k+i) = \sum_{j=1}^{i} C A^{i-j} B\, u(k+j-1) + C A^{i-1}(A - EC)\, x(k) + C A^{i-1} E\, y(k)

The future predictions organized in matrix form are given by:

\begin{bmatrix} \hat{y}(t+1) \\ \vdots \\ \hat{y}(t+H_p) \end{bmatrix} =
\begin{bmatrix} C(A-EC) \\ \vdots \\ C A^{H_p-1}(A-EC) \end{bmatrix} x(t) +
\begin{bmatrix} CB & & 0 \\ \vdots & \ddots & \\ C A^{H_p-1} B & \cdots & CB \end{bmatrix}
\begin{bmatrix} u(t) \\ \vdots \\ u(t+H_p-1) \end{bmatrix} +
\begin{bmatrix} CE \\ \vdots \\ C A^{H_p-1} E \end{bmatrix} y(t)

or equivalently:

\hat{y}(t+1) = K_{CAEC}\, x(t) + K_{CAB}\, u(t) + K_{CAE}\, y(t)

where \hat{y}(t+1) and u(t) denote the stacked vectors of predicted outputs and future control signals.

The free response can be expressed as:

y_{free}(t+1) = K_{CAEC}\, x(t) + K_{CAE}\, y(t)

The minimization of the same cost function already applied in Section 3.1,

J(u) = (\hat{y} - w)^T \Gamma_1 (\hat{y} - w) + u^T \Gamma_2 u,

leads to the following control sequence:

u = (K_{CAB}^T \Gamma_1 K_{CAB} + \Gamma_2)^{-1} K_{CAB}^T \Gamma_1 (w - y_{free})

According to the receding horizon concept and considering a constant reference signal, the control algorithm is given by:

u(t) = K_1 \left( w(t) - K_{CAE}\, y(t) - K_{CAEC}\, x(t) \right)

where K_1 is the first row of the following expression:

(K_{CAB}^T \Gamma_1 K_{CAB} + \Gamma_2)^{-1} K_{CAB}^T \Gamma_1
The control signal is calculated from the reference signal, the process output and the states of the process. If the process states are not measured, a state observer is required.
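
As an illustration, the following Matlab sketch shows how the prediction matrices K_CAEC, K_CAB, K_CAE and the receding-horizon gain K1 of the above control law could be assembled. The sketch is not part of the thesis software: the function name ssgpc_gain is hypothetical, the weighting matrices are assumed to be scalar multiples of the identity matrix, and the control vector spans the whole prediction horizon as in the equations above.

% Sketch (assumption, not the thesis implementation): prediction matrices
% and receding-horizon gain of the state-space GPC of Appendix 1.
% A, B, C, E are the ARIMAX state-space matrices, Hp the prediction
% horizon, gamma1 and gamma2 scalar output and control weights.
function K1 = ssgpc_gain(A,B,C,E,Hp,gamma1,gamma2)
n = size(A,1);
K_CAEC = zeros(Hp,n); K_CAE = zeros(Hp,1); K_CAB = zeros(Hp,Hp);
for i = 1:Hp
    K_CAEC(i,:) = C*A^(i-1)*(A - E*C);   % free-response term of x(t)
    K_CAE(i)    = C*A^(i-1)*E;           % free-response term of y(t)
    for j = 1:i
        K_CAB(i,j) = C*A^(i-j)*B;        % forced response (lower triangular)
    end
end
Gamma1 = gamma1*eye(Hp); Gamma2 = gamma2*eye(Hp);
K  = (K_CAB'*Gamma1*K_CAB + Gamma2) \ (K_CAB'*Gamma1);
K1 = K(1,:);   % u(t) = K1*(w - K_CAE*y(t) - K_CAEC*x(t))
end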

Appendix 2 Diophantine equation solver


The derivation of the predictor requires solving Diophantine equations. In this appendix, the solution of the following Diophantine equation is discussed:

\frac{X}{Y} = E_i + q^{-i} \frac{F_i}{Y}

where X and Y are polynomials of degree nx \geq 0 and ny \geq 0 with no common factors, and E_i and F_i are the polynomials which form the solution for a certain integer variable i \geq 0. In general, the Diophantine equation has many solutions. However, the objective is to calculate the first i coefficients of the impulse response of X/Y and hence deg(E_i) \leq i-1. Two cases must be considered separately: ny = 0 and ny > 0.

In the case of ny = 0 the Diophantine equation becomes:

\frac{X}{y_0} = E_i + q^{-i} \frac{F_i}{y_0}

The degrees of the polynomials E_i and F_i are given by min(i-1, nx) and nx-i, respectively. If i \geq nx+1 then deg(E_i) = nx and deg(F_i) < 0; hence E_i is given by X/y_0 and F_i = 0. If, on the other hand, i < nx+1, then the degrees of E_i and F_i are given by i-1 and nx-i, respectively. Hence E_i is given by the first i elements of the polynomial X/y_0 and F_i is given by the last nx+1-i elements of X.

In the case of ny > 0 the original Diophantine equation can be transformed into the following problem:

P = AR + BS

where the degrees of the known polynomials P, A and B are 2n-1, n and n, respectively. Based on the equations above, P = X, A = Y, B = q^{-i}, R = E_i and S = F_i. The degrees of the unknown polynomials R and S are n-1.
To solve the problem efficiently, the Sylvester matrix has to be built:

M = \begin{bmatrix}
1      &        &        & b_0    &        &        \\
a_1    & \ddots &        & b_1    & \ddots &        \\
\vdots &        & 1      & \vdots &        & b_0    \\
a_n    &        & a_1    & b_n    &        & b_1    \\
       & \ddots & \vdots &        & \ddots & \vdots \\
0      &        & a_n    & 0      &        & b_n
\end{bmatrix}

where the first n columns contain shifted copies of the coefficients of A and the last n columns contain shifted copies of the coefficients of B.
The coefficients of the unknown polynomials can then be obtained from:

x = M^{-1} p

where p is the coefficient vector of P and x = [r_0, r_1, \ldots, r_{n-1}, s_0, s_1, \ldots, s_{n-1}]^T.
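
The following Matlab sketch illustrates the Sylvester-matrix solution of P = AR + BS described above. It is not the thesis implementation; the function name is hypothetical, and the coefficient vectors are assumed to be given in ascending powers of q^{-1}, with a and b padded to length n+1 and p to length 2n (in the GPC case b is the padded coefficient vector of q^{-i}).

% Sketch (assumption): solving P = A*R + B*S for R and S of degree n-1
% via the Sylvester matrix, as outlined in Appendix 2.
function [r,s] = diophantine_sylvester(p,a,b)
n = length(a) - 1;             % deg(A) = deg(B) = n
M = zeros(2*n,2*n);
for k = 1:n                    % k-th column: A (resp. B) shifted by k-1
    M(k:k+n, k)   = a(:);
    M(k:k+n, n+k) = b(:);
end
x = M \ p(:);                  % p holds the 2*n coefficients of P
r = x(1:n);                    % coefficients of R (i.e. E_i)
s = x(n+1:2*n);                % coefficients of S (i.e. F_i)
end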

Appendix 3 Feasible direction method


The constrained predictive control problem requires optimising the cost function in the presence of inequality constraints. A powerful tool for solving this problem is the feasible direction method. The idea of the method is to improve an initial feasible solution until the optimal solution is reached. To improve the solution, a gradient-based direction is taken, and the step size is determined so that the new solution remains in the feasible range and has a smaller cost function value. The convergence of the method is given in (Camacho & Bordons, 2004). The inequality-constrained problem can be stated as:
Minimize:    J(u) = \frac{1}{2} u^T H u + b^T u + f_0
Subject to:  A u \leq a

where A is an m \times n matrix and a is an m-vector.


At a certain feasible point u_k, the constraints can be divided into active constraints A_1 u_k = a_1 and inactive constraints A_2 u_k < a_2. Let P be the n \times n projection matrix with the following properties: P = P^T and PP = P.
The algorithm can be summarized as follows:

1. If the active set is empty then let P = I; otherwise let P = I - A_1^T (A_1 A_1^T)^{-1} A_1.
2. Let the feasible direction be d_k = -P (H u_k + b).
3. If d_k \neq 0 go to step 4; else:
   3.1 If the active constraint set is empty then STOP; else:
       i)  let w = -(A_1 A_1^T)^{-1} A_1 (H u_k + b);
       ii) if w \geq 0 then STOP; else choose a negative component of w, say w_j, and remove the
           corresponding constraint from the active set, that is, remove row j from A_1. Go to step 1.
4. Let the step size be \alpha_k = \min(\hat{\alpha}, \alpha_{max}), where \hat{\alpha} = -\frac{d^T H u_k + b^T d}{d^T H d}, and \alpha_{max} can be found as
   the minimum value of \frac{c_j - a_j^T u_k}{a_j^T d} for all j such that a_j^T and c_j are the rows of the inactive
   constraint set and the respective bound, and such that a_j^T d > 0.
5. The new solution is u_{k+1} = u_k + \alpha_k d_k. Replace k by k+1 and go to step 1.
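
The following Matlab sketch implements the steps listed above for the quadratic program min ½ u^T H u + b^T u subject to A u ≤ a. It is only a sketch, not the thesis implementation; the function name, the tolerance tol and the iteration limit maxit are assumptions, and u0 must be a feasible starting point.

% Sketch (assumption): gradient-projection / feasible-direction method.
function u = feasible_direction_qp(H,b,A,a,u0,tol,maxit)
u = u0(:);
act = find(abs(A*u - a) <= tol);           % indices of active constraints
for it = 1:maxit
    A1 = A(act,:);
    if isempty(act), P = eye(length(u));
    else             P = eye(length(u)) - A1'*((A1*A1')\A1);
    end
    d = -P*(H*u + b);                      % feasible direction (step 2)
    if norm(d) <= tol                      % step 3
        if isempty(act), return; end
        w = -(A1*A1')\(A1*(H*u + b));
        if all(w >= -tol), return; end     % optimum reached
        [~,j] = min(w);                    % drop a constraint with w_j < 0
        act(j) = [];
        continue
    end
    alpha = -(d'*(H*u + b))/(d'*H*d);      % unconstrained step length (step 4)
    inact = setdiff(1:size(A,1), act);
    Ad = A(inact,:)*d;
    blocking = inact(Ad > tol);            % inactive constraints with a_j'*d > 0
    if ~isempty(blocking)
        amax  = min((a(blocking) - A(blocking,:)*u)./(A(blocking,:)*d));
        alpha = min(alpha, amax);
    end
    u = u + alpha*d;                       % step 5
    act = find(abs(A*u - a) <= tol);
end
end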

Appendix 4 The applied linear models of the processes


The processes of the flue gas oxygen content control application (Chapter 1):
The polynomials of the secondary air process:

B_1(q^{-1}) = 0.1813 q^{-3}
A_1(q^{-1}) = 1 - 0.8187 q^{-1}

The polynomials of the combustion process:

B_{21}(q^{-1}) = 0.007558 q^{-24}
B_{22}(q^{-1}) = 0.0004114 q^{-17}
A_{21}(q^{-1}) = 1 - 0.9843 q^{-1}
A_{22}(q^{-1}) = 1 - 0.9839 q^{-1}
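
For reference, these models can be written as discrete transfer functions, for example with the Control System Toolbox tf function; the sample time Ts below is only a placeholder, since the sampling interval is not restated in this appendix.

% Sketch: the flue-gas oxygen application models as discrete transfer functions.
Ts  = 1;                                                     % [s], placeholder assumption
G1  = tf([zeros(1,3)  0.1813],    [1 -0.8187], Ts, 'Variable','z^-1');   % secondary air
G21 = tf([zeros(1,24) 0.007558],  [1 -0.9843], Ts, 'Variable','z^-1');   % combustion, channel 1
G22 = tf([zeros(1,17) 0.0004114], [1 -0.9839], Ts, 'Variable','z^-1');   % combustion, channel 2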
The process parameters in the superheater control application in the case of the CGPC controller are the following.

The spray model:
The denominator is constant: A_1(q^{-1}) = 1 - 0.9837 q^{-1}
The numerator is: B_1(q^{-1}) = b_0 q^{-1}

where the b_0 coefficient is given as a function of the steam mass flow and the sprayed water flow:
Inlet steam    Sprayed water mass flow
mass flow      0
  35           -0.41913   -0.39621   -0.37510   -0.35565   -0.33767   -0.32102
  40           -0.36687   -0.34922   -0.33281   -0.31753   -0.30328   -0.28996
  45           -0.32621   -0.31219   -0.29905   -0.28675   -0.27517   -0.26429
  50           -0.29366   -0.28226   -0.27151   -0.26138   -0.25179   -0.24273
  55           -0.26701   -0.25756   -0.24862   -0.24011   -0.23205   -0.22439

The superheater model is given in the following form.
The denominator: A_2(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2}
The numerator: B_2(q^{-1}) = b_0 q^{-1} + b_1 q^{-2}

The a_i and b_i parameters are functions of the steam mass flow:

Inlet steam
mass flow      a1        a2        b0               b1
  35           -1.9739   0.97420   -5.9984·10^-4      8.9488·10^-4
  40           -1.9698   0.97014   -2.9686·10^-4      6.7340·10^-4
  45           -1.9641   0.96462    1.1327·10^-4      3.7353·10^-4
  50           -1.9599   0.96046    6.9266·10^-4     -1.0331·10^-4
  55           -1.9639   0.96446   37.349·10^-4     -31.987·10^-4
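
The thesis applies these parameters as operating-point-dependent (scheduled) values. The scheduling rule is not restated in this appendix; the following Matlab sketch therefore only assumes linear interpolation between the tabulated inlet steam mass flows, with the function name and argument layout being hypothetical.

% Sketch (assumption: linear interpolation between the tabulated operating points).
% msf is the vector of tabulated inlet steam mass flows [35 40 45 50 55] and
% a1t, a2t, b0t, b1t are the corresponding columns of the table above.
function [A2,B2] = superheater_params(msteam, msf, a1t, a2t, b0t, b1t)
a1 = interp1(msf, a1t, msteam);
a2 = interp1(msf, a2t, msteam);
b0 = interp1(msf, b0t, msteam);
b1 = interp1(msf, b1t, msteam);
A2 = [1 a1 a2];    % A2(q^-1) = 1 + a1*q^-1 + a2*q^-2
B2 = [0 b0 b1];    % B2(q^-1) = b0*q^-1 + b1*q^-2
end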

The process parameters in the superheater control application in the case of the SHGPC controller are the following (according to the notation of Fig. 83).

The denominator is constant: A_1(q^{-1}) = 1 - 0.9837 q^{-1}
The numerators are: B_{11}(q^{-1}) = b_{11,0} q^{-1} and B_{12}(q^{-1}) = b_{12,0} q^{-1}

where the b_{11,0} and b_{12,0} coefficients are given as functions of the steam mass flow and the sprayed water flow:
b11,0:

Inlet steam    Sprayed water mass flow
mass flow      0
  35           0   1.1320·10^-2   2.1436·10^-2   3.0484·10^-2   3.8591·10^-2   4.5860·10^-2
  40           0   8.7305·10^-3   1.6640·10^-2   2.3814·10^-2   3.0328·10^-2   3.6245·10^-2
  45           0   6.9375·10^-3   1.3292·10^-2   1.9117·10^-2   2.4460·10^-2   2.9366·10^-2
  50           0   5.6451·10^-3   1.0861·10^-2   1.5683·10^-2   2.0143·10^-2   2.4273·10^-2
  55           0   4.6829·10^-3   9.0404·10^-3   1.3097·10^-2   1.6876·10^-2   2.0400·10^-2

b12,0:

Inlet steam    Sprayed water mass flow
mass flow      0
  35           -0.41913   -0.39621   -0.37510   -0.35565   -0.33767   -0.32102
  40           -0.36687   -0.34922   -0.33281   -0.31753   -0.30328   -0.28996
  45           -0.32621   -0.31219   -0.29905   -0.28675   -0.27517   -0.26429
  50           -0.29366   -0.28226   -0.27151   -0.26138   -0.25179   -0.24273
  55           -0.26701   -0.25756   -0.24862   -0.24011   -0.23205   -0.22439

The superheater model is given in the following form.
The denominator: A_2(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2}
The numerators: B_{21}(q^{-1}) = b_{21,0} q^{-1} + b_{21,1} q^{-2} and B_{22}(q^{-1}) = b_{22,0} q^{-1} + b_{22,1} q^{-2}


The a_i and b_i parameters are functions of the steam mass flow:

Inlet steam
mass flow    a1        a2        b21,0            b21,1             b22,0           b22,1
  35         -1.6776   0.67996   -2.9928·10^-3    -3.4338·10^-3      5.3483·10^-2    -4.9565·10^-2
  40         -1.5444   0.54802   -2.5596·10^-3    -1.7252·10^-3      8.3132·10^-2    -7.6902·10^-2
  45         -1.4468   0.45134   -2.3497·10^-3    -1.2611·10^-3     10.643·10^-2     -9.7986·10^-2
  50         -1.4322   0.43748   -2.0108·10^-3    -2.8703·10^-3     10.761·10^-2     -9.8161·10^-2
  55         -1.4877   0.49287   -1.7592·10^-3    -5.6458·10^-3      8.8830·10^-2    -7.9816·10^-2

Appendix 5 Implementation of the algorithms


The Matlab-Simulink software provides a convenient environment for implementing dynamic simulations. All the simulations presented in this thesis were performed in the Simulink environment, and all the applied blocks and algorithms are the author's own implementations.
The applied algorithms can be divided into two main groups: the control algorithms and the accessory algorithms. Only the control algorithms appear in the simulations.
The GPC, CGPC, O2GPC and SHGPC controllers are represented with ready-to-use blocks. Fig. A.1 shows the block of the GPC, and Fig. A.2 shows how the controllers are built. The main calculations are implemented in S-functions, which facilitate the storage and use of the past data.
Similar blocks are built for the following cases:

– GPC with input constraints;
– GPC with input and output constraints;
– CGPC with input constraints;
– CGPC with input and output constraints;
– O2GPC with input constraints;
– SHGPC with input constraints.

The control algorithms always apply the original form of the GPC and not the equivalent R-S-T form. This is important because the signal constraints cannot be handled properly in the R-S-T form.
The most important accessory functions are the following:

– Diophantine equation solver, as given in Appendix 2;
– a simple algorithm to check whether signal saturation occurs on the prediction horizon (a possible sketch is given after this list);
– Feasible Direction Method optimisation function, as given in Appendix 3;
– modulus margin and robust stability limit calculations for the different controllers.

These functions can be called from the above-mentioned controllers.
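
As an example, a possible form of the saturation-check function is sketched below. It is an assumption for illustration only: the function name check_saturation and the exact clipping logic are not taken from the thesis. Bound = [du_min du_max u_min u_max] and uPrev denotes the previous control value, as in the S-function listing that follows.

% Sketch (assumption, not the thesis code) of the saturation check on the
% prediction horizon: clip the control increments and the resulting control
% levels, and report whether constrained optimisation is still required.
function [Bool,du] = check_saturation(du,uPrev,Bound)
Bool = 0;                                          % 1 if any clipping was necessary
uk = uPrev;
for k = 1:length(du)
    dk  = min(max(du(k),Bound(1)),Bound(2));       % increment (rate) limits
    uk1 = min(max(uk+dk,Bound(3)),Bound(4));       % control level limits
    if abs((uk1-uk)-du(k)) > 1e-12, Bool = 1; end  % saturation detected
    du(k) = uk1 - uk;                              % clipped increment
    uk = uk1;
end
end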


Fig. A.1. The mask of the GPC with input constraints block (inputs xRef(k) and y(k), output du(k)).


Fig. A.2. The block diagram of the GPC with input constraints algorithm: the inputs xRef(k) and y(k), the filtered signals y(k)/C(q-1) and du(k-1)/C(q-1), the sfunc_GPC S-function and a z-1/(1-z-1) block.


The applied S-function program is given in the following as an example of the implementation of the mentioned controllers.

function [sys,x0,str,ts] = sfunc_GPC(t,x,u,flag,B,A,C,d,PredSett,Int,name,Bound,dt)
% This M-file is designed to be used in a SIMULINK S-function block.
% Generalized Predictive Control with input constraints.
% Input vector:  u = [yref(k+d) y(k) du(k-1) u(k-1)]
% Block output:  du(k), the control signal increment
%                y(k) is the controlled variable

nh = PredSett(1);   % Minimum horizon
np = PredSett(2);   % Prediction horizon
nu = PredSett(3);   % Control horizon
Ro = PredSett(4);   % Control weighting factor

switch flag
case 0   % Initialization
  % Calculate the prediction matrices
  [AlfaM,BetaM,GammaM] = PredM(B,A,C,d,PredSett(1:3));
  nDu = size(BetaM,2);
  ny  = size(GammaM,2);
  sys = [0, ...          % number of continuous states
         nDu-1+ny-1, ... % number of discrete states
         1, ...          % number of outputs
         4, ...          % number of inputs
         0, ...          % reserved, must be zero
         0, ...          % direct feedthrough flag
         1];             % number of sample times
  ts  = [dt 0];
  str = [];
  % State variables: the former control increments and outputs
  x0 = zeros(nDu-1+ny-1,1);
  % Store the parameters and calculated prediction matrices
  assignin('base',strcat(name,'AlfaM'),AlfaM);
  assignin('base',strcat(name,'BetaM'),BetaM);
  assignin('base',strcat(name,'GammaM'),GammaM);
  assignin('base',strcat(name,'nDu'),nDu);
  assignin('base',strcat(name,'ny'),ny);

case 2   % Discrete state update
  nDu = evalin('base',strcat(name,'nDu'));
  ny  = evalin('base',strcat(name,'ny'));
  Du = u(3); y = u(2);
  % The permutation of the "signal" vectors
  uV = x(1:nDu-1);       % the former control increments
  yV = x(nDu:nDu+ny-2);  % the former output (controlled) signals
  if and(ny<2,nDu<2), sys = [];
  elseif ny<2,  sys = [Du uV(1:nDu-2)'];
  elseif nDu<2, sys = [y yV(1:ny-2)'];
  else          sys = [Du uV(1:nDu-2)' y yV(1:ny-2)'];
  end

case 3   % Output
  nDu = evalin('base',strcat(name,'nDu'));
  ny  = evalin('base',strcat(name,'ny'));
  AlfaM  = evalin('base',strcat(name,'AlfaM'));
  BetaM  = evalin('base',strcat(name,'BetaM'));
  GammaM = evalin('base',strcat(name,'GammaM'));
  Duk1 = u(3); yk = u(2);
  % The permutation of the "signal" vectors
  uV = x(1:nDu-1);       % the former control increments
  yV = x(nDu:nDu+ny-2);  % the former output signals
  Du = [Duk1;uV];
  y  = [yk;yV];
  % The free response
  yfree = BetaM*Du + GammaM*y;
  % The reference trajectory vector
  wV = ones((np-nh+1),1)*u(1);
  % The unconstrained control increment vector
  du = inv(AlfaM'*AlfaM+eye(nu)*Ro)*AlfaM'*(wV-yfree);
  uPrev = u(4);
  % Check if there is any saturation on the prediction horizon
  [Bool,du] = CheckFeasibility_15062005(du,uPrev,Bound);
  if Bool==1   % Bool=1 if numerical optimisation is required
    H = 2*(AlfaM'*AlfaM+Ro*eye(nu));
    b = -2*((wV-yfree)'*AlfaM)';
    temp = eye(nu);            % lower triangular matrix of ones:
    for k=1:nu                 % cumulates the increments to control levels
      for l=1:k
        temp(k,l) = 1;
      end
    end
    Ao = [eye(nu); -eye(nu); temp; -temp];
    Co = [ones(nu,1)*Bound(2); ones(nu,1)*(-Bound(1)); ...
          ones(nu,1)*Bound(4)-uPrev; -ones(nu,1)*Bound(3)+uPrev];
    % the clipped increment sequence du is used as the initial feasible point
    du = ActiveSet(H,b,Ao,Co,du,0,0);
  end
  sys = du(1);   % apply only the first increment (receding horizon)

otherwise        % unused flags
  sys = [];
end
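
For completeness, the following lines sketch how the block parameters could be set for a simulation. The numerical values are placeholders and not the tuning used in the experiments of the thesis; the role of d is assumed to be the process dead time handled by PredM, and Int is a block parameter not used in the listing above.

% Sketch: example parameterisation of the sfunc_GPC block (placeholder values).
B = [0 0.1813];  A = [1 -0.8187];  C = 1;   % process numerator/denominator and C design polynomial
d = 2;                                      % delay parameter passed to PredM (assumed dead time, samples)
PredSett = [1 10 3 0.1];                    % [nh np nu Ro]
Int   = 1;                                  % block parameter (not used in the listing above)
name  = 'gpc1';                             % prefix for the workspace variables stored by assignin
Bound = [-1 1 -10 10];                      % [du_min du_max u_min u_max], as used when building Co
dt    = 1;                                  % sample time [s]
% The block then calls: sfunc_GPC(t,x,u,flag,B,A,C,d,PredSett,Int,name,Bound,dt)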
