
Advances in Industrial Control

Springer
London
Berlin
Heidelberg
New York
Barcelona
Hong Kong
Milan
Paris
Santa Clara
Singapore
Tokyo
Other titles published in this Series:
Neuro-Control and its Applications
Sigeru Omatu, Marzuki Khalid and Rubiyah Yusof
Energy Efficient Train Control
P.G. Howlett and P.J. Pudney
Hierarchical Power Systems Control: Its Value in a Changing Industry
Marija D. Ilic and Shell Liu
System Identification and Robust Control
Steen Tøffner-Clausen
Genetic Algorithms for Control and Signal Processing
K.F. Man, K.S. Tang, S. Kwong and W.A. Halang
Advanced Control of Solar Plants
E.F. Camacho, M. Berenguel and F.R. Rubio
Control of Modern Integrated Power Systems
E. Mariani and S.S. Murthy
Advanced Load Dispatch for Power Systems: Principles, Practices and
Economies
E. Mariani and S.S. Murthy
Supervision and Control for Industrial Processes
Björn Sohlberg
Modelling and Simulation of Human Behaviour in System Control
Pietro Carlo Cacciabue
Modelling and Identification in Robotics
Krzysztof Kozlowski
Spacecraft Navigation and Guidance
Maxwell Noton
Robust Estimation and Failure Detection
Rami Mangoubi
Adaptive Internal Model Control
Aniruddha Datta
Price-Based Commitment Decisions in the Electricity Market
Eric Allen and Marija Ilic
Compressor Surge and Rotating Stall: Modeling and Control
Jan Tommy Gravdahl and Olav Egeland
Radiotherapy Treatment Planning: New System Approaches
Olivier Haas
Feedback Control Theory for Dynamic Traffic Assignment
Pushkin Kachroo and Kaan Özbay

Reza Katebi, Michael A. Johnson and Jacqueline Wilkie

Control and Instrumentation for Wastewater Treatment Plants

With 99 Figures

Springer
Reza Katebi
Michael A. Johnson
Jacqueline Wilkie
Industrial Control Centre, University of Strathclyde, Graham Hills Building,
50 George Street, Glasgow G1 1QE, UK

ISBN 1-85233-054-6 Springer-Verlag London Berlin Heidelberg

British Library Cataloguing in Publication Data


Katebi, Reza
Control and instrumentation of wastewater treatment plant.
- (Advances in industrial control)
1.Sewage disposal plants 2.Sewage disposal - Automatic
control 3.Sewage disposal plants - Data processing
I.Title II.Johnson, Michael A. (Michael Arthur), 1948-
III.Wilkie, Jacqueline
628.3
ISBN 1852330546

Library of Congress Cataloging-in-Publication Data


Katebi, Reza, 1954-
Control and instrumentation of wastewater treatment plant / Reza
Katebi, Michael A. Johnson, and Jacqueline Wilkie.
p. cm. -- (Advances in industrial control)
Includes bibliographical references and index.
ISBN 1-85233-054-6 (alk. paper)
1. Sewage disposal plants--Automation. I. Johnson, Michael A.,
1948- . II. Wilkie, Jacqueline. III. Title. IV. Series.
TD746.K38 1999 98-44143
628.3--dc21 CIP

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be
reproduced, stored or transmitted, in any form or by any means, with the prior permission in
writing of the publishers, or in the case of reprographic reproduction in accordance with the terms
of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside
those terms should be sent to the publishers.

© Springer-Verlag London Limited 1999


The use of registered names, trademarks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the relevant laws and
regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the
information contained in this book and cannot accept any legal responsibility or liability for any
errors or omissions that may be made.

Typesetting: Camera ready by authors


Printed and bound at the Athenæum Press Ltd., Gateshead, Tyne & Wear
69/3830-543210 Printed on acid-free paper
Advances in Industrial Control
Series Editors

Professor Michael J. Grimble, Professor of Industrial Systems and Director


Professor Michael A. Johnson, Professor of Control Systems and Deputy Director
Industrial Control Centre
Department of Electronic and Electrical Engineering
University of Strathclyde
Graham Hills Building
50 George Street
Glasgow G1 1QE
United Kingdom

Series Advisory Board

Professor Dr-Ing J. Ackermann


DLR Institut für Robotik und Systemdynamik
Postfach 1116
D82230 Weßling
Germany

Professor I.D. Landau


Laboratoire d'Automatique de Grenoble
ENSIEG, BP 46
38402 Saint Martin d'Heres
France

Dr D.C. McFarlane
Department of Engineering
University of Cambridge
Cambridge CB2 1QJ
United Kingdom

Professor B. Wittenmark
Department of Automatic Control
Lund Institute of Technology
PO Box 118
S-221 00 Lund
Sweden

Professor D.W. Clarke


Department of Engineering Science
University of Oxford
Parks Road
Oxford OX1 3PJ
United Kingdom

Professor Dr -Ing M. Thoma


Institut für Regelungstechnik
Technische Universität
Appelstrasse 11
D-30167 Hannover
Germany

Professor H. Kimura
Department of Mathematical Engineering and Information Physics
Faculty of Engineering
The University of Tokyo
7-3-1 Hongo
Bunkyo Ku
Tokyo 113
Japan

Professor A.J. Laub


College of Engineering - Dean’s Office
University of California
One Shields Avenue
Davis
California 95616-5294
United States of America

Professor J.B. Moore


Department of Systems Engineering
The Australian National University
Research School of Physical Sciences
GPO Box 4
Canberra
ACT 2601
Australia

Dr M.K. Masten
Texas Instruments
2309 Northcrest
Plano
TX 75075
United States of America
Professor Ton Backx
AspenTech Europe B.V.
De Waal 32
NL-5684 PH Best
The Netherlands

To my Mother, Father and Brothers (Seraj, Hassam and Farvardin)
Reza Katebi
To my Mother and Father, Joyce and Leonard Johnson
Michael A Johnson
To my husband Patrick, who doesn’t believe in dedications, and
my two-year old daughter, Róisín, who doesn’t believe in sleep.
Jacqueline Wilkie
SERIES EDITORS’ FOREWORD

The series Advances in Industrial Control aims to report and encourage


technology transfer in control engineering. The rapid development of control
technology impacts all areas of the control discipline. New theory, new
controllers, actuators, sensors, new industrial processes, computer methods,
new applications, new philosophies…, new challenges. Much of this
development work resides in industrial reports, feasibility study papers and
the reports of advanced collaborative projects. The series offers an
opportunity for researchers to present an extended exposition of such new
work in all aspects of industrial control for wider and rapid dissemination.
The environmental aspects of all of our society’s activities are extremely
important if the countryside, the sea and the wildernesses are to be fully enjoyed
by future generations. Urban waste in all its manifestations presents a
particularly difficult disposal problem, which must be tackled conscientiously
to prevent long lasting damage to the environment. Technological solutions
should be seen as part of the available options. In this monograph, the
authors M.R. Katebi, M.A. Johnson and J. Wilkie seek to introduce a
comprehensive technological framework to the particular measurement and
control problems of wastewater processing plants. Of course the disposal of
urban sewage is a long-standing process but past solutions have used
options (disposal at sea) which are no longer acceptable. Thus to meet new
effluent regulations it is necessary to develop a new technological paradigm
based on process control methods, and this is what the authors attempt to
provide.
The monograph has chapters which examine the full spread of
technological topics comprising the new paradigm. These topics range
from wastewater process plant modelling, sensors, control strategies and a
look at full process plant design. The monograph should appeal to a broad
spectrum of readers from those in the water industry through to the
interested control or instrumentation specialist.

M.J. Grimble and M.A. Johnson


Industrial Control Centre
Glasgow, Scotland, UK
FOREWORD

So many of our towns and cities now lie amid extensive conurbation
developments often containing several million people. It is quite surprising
therefore to realise that we have in the last fifty years become quite adept at
dealing with the waste products of our urban society. However, it is also
becoming increasingly clear that some of these waste disposal options are
no longer acceptable and some of the old methods will have to be modified
and enhanced or that new methods will have to be found.
The treatment and disposal of sewage is one of the oldest problems
known to man and is even mentioned in The Book Of Deuteronomy,
“thou shalt have a place also outwith the camp;
whither thou shalt go forth abroad;
.....thou shalt dig therewith, and shalt turn
back and cover that which cometh from thee.”
Over the last century there have been tremendous advances in the
treatment processes and theoretical understanding of wastewater treatment.
High quality effluents with the possible removal of nitrogen and phosphates
are now commonplace mainly due to the versatility of the activated sludge
process and the ever-increasing methods of tertiary treatment. The
capability to treat wastewater to this high standard coupled with society’s
expectation of a cleaner environment has instigated comprehensive and
demanding legislation.
The social and legislative demands on the wastewater industry can
only be consistently and economically achieved by the efficient operation of
wastewater plants and this can be greatly facilitated by the use of appropriate
process control technology. Process control has been used in wastewater
plants for over 20 years with varying degrees of success. The unique nature
of sewage treatment requires a multi-disciplinary approach to the design and
operation of the control regime. Thus there has to be an interlinking between
the civil engineer, scientist and process control specialist to ensure
appropriate control procedures are implemented.
In this monograph, the authors have brought together contributions
from the many sub-disciplines of process control to provide the wastewater
engineer with the necessary foundation material in process control. Thus
there are chapters on communications and computer process control as well
as the more traditional topics of process modelling, instrumentation and
control loop concepts. This development of a new technological framework
grew from a direct dialogue between a small team of engineers from the
wastewater industry and academics. It is hoped that the blending of
experience and disciplines that the West of Scotland Water and Industrial
Control Centre (University of Strathclyde) staff have tried to achieve will bring
to fruition a new era of insight and understanding of the role of process
control technology in wastewater treatment.

Gerry McCluskey
Operations Manager
West of Scotland Water
Glasgow, Scotland, U.K.
AUTHORS’ PREFACE

Increasingly stringent environmental and health regulations together


with a demand for cost-effective plants have made the improvement of the
computer-based infrastructure for Wastewater Treatment Plants (WWTPs)
an important priority. The introduction of advanced control technology in
WWTPs has been slow due to the lack of reliable instrumentation and the
harsh environment in which the computer and automation devices are
housed and operated. However, the potential for this situation to change is
now emerging due to advances and investment in communication, control
and sensor technology. There is a trend for minimally staffed integrated
plants leading to fully unmanned installations in the future. This means the
control system must supervise the process from the input wastewater right
through to sludge disposal and effluent dispersal. Thus, the long-term
objective of wastewater treatment process operation is to provide
autonomous, reliable and stable process control with highly efficient
throughput at minimum cost.
This monograph describes state-of-the-art advances in computer-
based plant-wide control. The material presented is intended to provide an
introductory textbook in plant control and instrumentation technology relevant
to or applied in WWTPs. The book is aimed at WWT plant operators,
process design and control engineers, works managers and those who are
involved in the design, installation, commissioning and operation of computer
control systems for WWTPs. The material of the book was originally
prepared for a training course for the West of Scotland Water and
subsequently revised upon receiving feedback and comments from field
engineers.
The monograph comprises eight chapters as described below:

Chapter 1: Process Modelling and Simulation Methods

An overview of the physical wastewater treatment process is


presented to provide a framework for model development. The main
techniques used for model development and their potential application are
then discussed. This is followed by an overview of the role and use of
state-of-the-art simulation tools.
Chapter 2: Process Control Structures

The basics of the feedback loop and how to improve on open loop
control are discussed. The main features of common control loops found in
wastewater processes are examined. Particular attention is given to On-Off
control, PID control, cascade control loops, ratio control and feed-forward
control. The structures of gain scheduling and the self-tuning control
architecture are also discussed.

Chapter 3: Supervisory Control and Data Acquisition Systems and


Virtual Instrumentation

State-of-the-art technology in plant automation and control is


introduced. The chapter starts with the historical background to computer
control and its evolution over the last two decades. The use of Distributed
Computer Systems in WWTPs is discussed. The concepts involved in virtual
instrumentation (VI) are introduced. The objective in virtual instrumentation
is to use a general-purpose computer to mimic real instruments with their
dedicated controls and displays. The great advantage of virtual
instrumentation (VI) is the real added versatility that comes with the software.
LabVIEW is given as an example of the application of VIs to wastewater
treatment plants.

Chapter 4: Quality Control For Dynamic Processes

The basic concepts of Statistical Process Control (SPC) are


introduced as a tool for data analysis and data management. Data types and
data characterisation are described. Procedures are presented to determine
the stability of the process under statistical process control.

Chapter 5: Sensors and Actuators

In this chapter the principles and operation of some of the main


sensors and actuators used in the wastewater treatment industry are
described. The chapter is divided into parts discussing the physical
measurement of level and flow, followed by the analytical measurement
using ion-selective electrodes (ISE), (pH, Chlorine, Nitrates), Dissolved
Oxygen (DO), and Suspended Solids. These analytical and physical
measurements involve a range of techniques including ultrasonic and optical
techniques. The main actuators mentioned are pumps and valves.

Chapter 6: Data Communications

This chapter describes the structure of different communication


networks, the Open Systems Interconnection (OSI) standard being used by
manufacturers, and the use of HART for monitoring and control of field devices.
The chapter closes with a discussion on the current issues in the Fieldbus
area relevant to the wastewater industry.

Chapter 7: Knowledge-Based Systems

This chapter is devoted to what are commonly called the emerging
technologies: neural networks, expert systems and fuzzy logic control. A
simple description of the structure of a neural network and of an expert
system is given. Conventional diagnostic tools are available which permit the
identification and analysis of faulty plant equipment or data. Expert systems
and neural net modelling can be used to provide a similar facility and this
chapter provides the introduction and use of these tools in a diagnostic
situation. The chapter closes with the application of Fuzzy Logic to control
systems.

Chapter 8: Wastewater Treatment Plants: An Exercise

In this chapter, a simplified version of the Holdenhurst Sewage


Treatment Works (Robinson, 1990) is presented and discussed. The layout
of the plant is used to design the control system including the sensors and
actuators. The activated sludge process is then modelled using LabVIEW
and the virtual instrumentation needed for efficient plant control is designed
and implemented. Examples of digital PID tuning and statistical process
control are given and the chapter closes with exercises for the reader to
attempt.

Reza Katebi, Mike Johnson and Jacqueline Wilkie
October 1998
ACKNOWLEDGEMENTS

The authors wish to express their thanks and gratitude to Gerry


McCluskey (West of Scotland Water) for his enthusiastic support and many
hours of useful discussion. The help and support from Dr Marc Bingley
(Severn Trent plc), Dr Jeremy Dudley (Water Research Council) on
modelling using STOAT and Mr Ken McNaught (National Instruments) for
permission to use LabVIEW are also greatly appreciated.
The authors wish to thank Mrs Shena Dinwoodie and Mr Andrew
Smith of the Industrial Control Centre for their excellent skills in typing the
monograph and drawing the figures.
The monograph makes reference to the following trademarks:
MATLAB® and SIMULINK® are registered trademarks of Mathworks Inc.
LabVIEW® is the trademark of National Instruments Plc.
MATRIXx® is a registered trademark of Integrated Systems Inc.
EASY5x® is a registered trademark of the Boeing Company.
CONTENTS

1 Process Modelling and Simulation Methods................... 1


1.1 Process Review ........................................................................... 1
1.1.1 Preliminary and Primary Treatment Processes.................. 4
1.1.2 Secondary Treatment Processes ....................................... 4
1.1.3 Tertiary Processes.............................................................. 8
1.2 Modelling Preliminary and Primary Processes ............................ 9
1.3 Modelling the Activated Sludge Process ..................................... 10
1.3.1 Introduction ......................................................................... 10
1.3.2 The Aeration Tank Process................................................ 11
1.3.3 Clarifier Tank Model ........................................................... 19
1.3.4 Interim Conclusions ............................................................ 20
1.4 Uses of the Model ........................................................................ 21
1.4.1 Sub-Unit Studies................................................................. 21
1.4.2 Process Train Studies ........................................................ 21
1.4.3 On-line Process Control ..................................................... 22
1.5 Modelling Principles ..................................................................... 22
1.5.1 Process Control and the Modelling Activity........................ 22
1.5.2 Modelling from Physical Principles..................................... 28
1.5.3 Black Box Modelling Methods ............................................ 33
1.5.4 Hierarchical System Modelling and Simulation .................. 40
1.6 Conclusions ................................................................................. 41
1.7 Further Reading ........................................................................... 43

2 Process Control Structures .............................................. 45


2.1 The Actuator - Plant and - Measurement Sequence ................... 45
2.1.1 A Tank Level Process......................................................... 45
2.1.2 The Measurement Device .................................................. 48
2.1.3 Summary: Component Transfer Functions ....................... 49
2.2 A Unified Actuator - Plant - Measurement Processes ................. 50
2.3 Process Disturbances.................................................................. 51
2.3.1 Supply and Load Disturbances .......................................... 52
2.3.2 Noise Disturbances ............................................................ 53
2.3.3 Summary Conclusions........................................................ 54
2.4 Open Loop Control....................................................................... 55
2.4.1 The Basic Principle............................................................. 55
2.4.2 The Problems with Open Loop Control .............................. 56
2.5 The Feedback Control Loop ........................................................ 57
2.5.1 A Simple Feedback Loop ................................................... 57
2.5.2 Some Definitions ................................................................ 59
2.5.3 The Feedback Loop Analysis ............................................. 59
2.5.4 Feedback Control Objectives: A Full List .......................... 60
2.6 On-Off Control.............................................................................. 61
2.6.1 Basic Principles .................................................................. 61
2.6.2 Performance Assessment in a Wastewater Application .... 65
2.7 Three Term Controllers................................................................ 67
2.7.1 PID Controller Technology ................................................. 67
2.7.2 Basic PID Control Properties.............................................. 70
2.7.3 Industrial PID Controller Features ...................................... 72
2.7.4 PID Controller Tuning ......................................................... 79
2.7.5 Process Reaction Curve Method........................................ 79
2.7.6 Sustained Oscillation PID Tuning Method.......................... 80
2.7.7 Autotune PID Control.......................................................... 84
2.7.8 PID Control Performance ................................................... 85
2.8 Cascade Control Loops ............................................................... 86
2.8.1 Cascade Control Example.................................................. 86
2.8.2 General Cascade Control Principles .................................. 86
2.8.3 Cascade Control Loop Tuning............................................ 87
2.9 Ratio Control ................................................................................ 88
2.10 Feedforward Control .................................................................. 90
2.10.1 The Feedforward/Feedback Control Structure................... 90
2.10.2 Example in the Waste Water Industry ................................ 91
2.11 Inferential Control....................................................................... 94
2.11.1 Inferential Control in the Wastewater Industry ................... 96
2.12 Advanced Control Features: Methods of Controller Adaptation 96
2.12.1 Gain Scheduling ................................................................. 97
2.12.2 On-line Self-Tuning Control................................................ 101
2.13 Conclusions ............................................................................... 102
2.14 Further Reading ......................................................................... 103

3 Supervisory Control and Data Acquisition Systems and


Virtual Instrumentation......................................................... 105
3.1 Introduction .................................................................................. 105
3.2 Economic Benefits ....................................................................... 109
3.3 A Classification For Supervisory Control Problems..................... 110
3.4 Technological Background .......................................................... 112
3.4.1 Centralised Architecture ..................................................... 112
3.4.2 The Distributed Architecture............................................... 114
3.4.3 Supervisory Control System For Wastewater Treatment Plants 118
3.5 Distributed Control System Technology ...................................... 119
3.5.1 Generic Functional Modules............................................... 121
3.5.2 Real-time Data Highway..................................................... 126
3.5.3 Host Computer Interfaces and PLC Gateways .................. 128
3.5.4 Power Distribution System ................................................. 130
3.6 Functionality of the DCS .............................................................. 130
3.6.1 Data Acquisition and Processing........................................ 130
3.6.2 Low Level Process Control................................................. 132
3.6.3 Sequencing......................................................................... 132
3.6.4 Alarm Management ............................................................ 133
3.6.5 Operator Real-time Displays .............................................. 133
3.6.6 Data Logging ...................................................................... 134
3.6.7 Plant Performance Assessment ......................................... 134
3.7 On Designing Supervisory Control .............................................. 135
3.8 Virtual Instrumentation (VI) and a Design Exercise..................... 137
3.8.1 Introduction ......................................................................... 137
3.8.2 Virtual Versus Real Instrumentation................................... 137
3.8.3 VI and Intelligent Instruments ............................................. 139
3.9 Conclusions ................................................................................. 140
3.10 Further Reading ......................................................................... 142

4 Quality Control For Dynamic Processes ......................... 145


4.1 Introduction .................................................................................. 145
4.1.1 Understanding the Process ................................................ 147
4.1.2 Flowcharting ....................................................................... 148
4.2 Data Collection and Presentation ................................................ 149
4.2.1 Data Presentation: Histograms, Charts and Graphs......... 150
4.3 Elementary Statistical Measures ................................................. 152
4.4 Process Variations ....................................................................... 155
4.5 Process Control ........................................................................... 156
4.5.1 Mean Chart ......................................................................... 157
4.5.2 Range Chart ....................................................................... 159
4.6 Assessment of Process Stability.................................................. 160
4.7 Process Capability Indices........................................................... 163
4.8 Example ....................................................................................... 164
4.9 Conclusions ................................................................................. 169
4.10 Further Reading ......................................................................... 171

5 Sensors and Actuators...................................................... 173


5.1 Physical Measurement: Level..................................................... 173
5.1.1 Ultrasonic Level Sensor ..................................................... 174
5.1.2 Capacitance Level Sensor ................................................. 174
5.2 Physical Measurement: Flow...................................................... 175
5.2.1 Weirs and Flumes............................................................... 176
5.3 Flumes .......................................................................... 177
5.3.1 Magnetic Flowmeters ......................................................... 178
5.3.2 Ultrasonic Flow Measurement............................................ 180
5.4 Analytical Measurement: Ion Selective Electrodes..................... 182
5.4.1 Ion Selective Electrodes ..................................................... 178
5.4.2 Example of an Ion Selective Electrode: pH Measurement 180
5.5 Analytical Measurement: Dissolved Oxygen (DO) ..................... 186
5.5.1 Amperometric DO Sensor .................................................. 186
5.5.2 Equilibrium DO Sensor ....................................................... 187
5.6 Analytical Measurement: Turbidity and Suspended Solids ........ 188
5.6.1 Light Absorption Techniques.............................................. 189
5.6.2 Scattered Light Technique ................................................. 190
5.7 ‘Self-Cleaning’ Sensors ............................................................... 191
5.8 Actuators: Pumps........................................................................ 191
5.8.1 Centrifugal Pumps .............................................................. 192
5.8.2 Positive Displacement Pumps............................................ 193
5.9 Conclusions ................................................................................. 194
5.10 Further Reading ......................................................................... 195

6 Data Communications ....................................................... 197


6.1 Introduction .................................................................................. 197
6.2 Dumb Terminals and Smart Sensors .......................................... 200
6.3 Digital Communication ................................................................. 201
6.3.1 Communication Medium..................................................... 201
6.3.2 Data Transfer...................................................................... 202
6.3.3 Serial Interface Standards: RS-232, RS-422 and RS-485 202
6.3.4 Protocols............................................................................. 204
6.4 The ISO 7-Layer Model................................................................ 206
6.5 Distributed Communication Systems........................................... 208
6.5.1 Network Topologies............................................................ 209
6.5.2 Local Area Networks (LANs) .............................................. 211
6.6 HART Communication System .................................................... 212
6.7 Fieldbus ....................................................................................... 215
6.7.1 Different Standards............................................................. 216
6.7.2 The Current Status ............................................................. 219
6.8 Examples of WWTP Communications......................................... 220
6.9 Conclusions ................................................................................. 222
6.10 Further Reading ......................................................................... 224

7 Knowledge-Based Systems .............................................. 227


7.1 Expert Systems in Process Control ............................................. 227
7.1.1 Expert System Components............................................... 228
7.1.2 Expert Systems For Condition Monitoring and Fault Detection 230
7.1.3 Expert Systems in the Wastewater Industry ...................... 231
7.2 Modelling of Complex Process Using Neural Nets...................... 233
7.2.1 The Neuron and the Neural Network ................................. 234
7.2.2 Training the Neural Net (NN).............................................. 236
7.2.3 Neural Network Application Development.......................... 237
7.2.4 Possibilities for Neural Networks in the Wastewater Industry 238
7.3 Fuzzy Logic Control ..................................................................... 239
7.3.1 The Fuzzy Logic Controller (FLC) ....................................... 240
7.3.2 An Example of Fuzzy Logic Control ................................... 245
7.3.3 Applications in Wastewater Treatment Plants.................... 247
7.4 Conclusions ................................................................................. 248
7.5 References................................................................................... 249

8 Wastewater Treatment Plants: An Exercise ................... 251


8.1 Introduction .................................................................................. 251
8.2 Control Systems........................................................................... 255
8.2.1 Flow Balancing and Control ............................................... 255
8.2.2 DO Control.......................................................................... 255
8.2.3 Return Activated Sludge (RAS).......................................... 257
8.3 Alarms .......................................................................................... 258
8.4 Data Display................................................................................. 258
8.5 Fault Monitoring ........................................................................... 258
8.6 DO Control Using LabVIEW ........................................................ 259
8.6.1 Model Description............................................................... 259
8.7 Further Reading ........................................................................... 262

Appendix A: Modelling and Control Demonstrations ..................... 263

Appendix B: Author Profiles............................................... 275

Subject Index......................................................................... 277



1 Process Modelling and Simulation Methods


Objectives
(1) To present an overview of the physical process of wastewater
treatment to give a framework for model development for
wastewater treatment processes.
(2) To understand the main techniques used for model development
and their potential application.
(3) To gain an overview of the role and use of state-of-the-art GUI
tools for modelling and simulation.

1.1 Process Review

Domestic or urban sewage is only one component of the problem of


wastewater treatment. Other such problems include wastewater from agricultural
activities, and all forms of industrial and manufacturing processes. These special
problems often present particular difficulties arising from the contaminants
(possibly toxic) in the wastewater. The treatment of domestic sewage has been
subject to engineering and scientific input for a very long time but has emerged as
a fully-fledged engineering discipline since about 1914 (Schroeder, 1985).

Influent urban sewage is characterised by three parameters: the biological
oxygen demand (BOD), the concentration of suspended solids (SS) and the
bacteriological quality. It is useful to give a definition of each of these:
(i) Biological Oxygen Demand (BOD). This is the amount of
oxygen uptake by bacteria of the organic content of the effluent for a set
of standard incubation period conditions. Usually the incubation occurs
over five days and at 20 °C; this gives rise to the term five day BOD and
sometimes the notation BOD_5^20. BOD_5^20 is the change in dissolved

oxygen (DO) content of a sample over a five day period when incubated
at 20 °C (a worked example is given after these definitions):

$$ BOD_5^{20} = DO_0 - DO_5 $$

where DO_0 and DO_5 are the initial and five day DO contents. BOD is
measured in {mg/litre}.
(ii) Suspended Solids (SS). The effluent contains material in
suspension and this is from a variety of sources. It is useful to divide the
suspended solids into inorganic and organic components. The inorganic
portion includes material like grit and silt. The organic component has a
much wider variety of sources but is likely to include bacteria, fats,
grease, human waste, and food waste. The SS content is determined by a
filter test and the SS content is measured in {mg/litre}.
(iii) Bacteriological Quality. Sometimes the quality of the effluent
is evaluated by a bacteriological assay for specific bacterial forms, for
example, faecal coliforms. Such an assay quantifies the most probable
number (MPN) of organisms present in a specific sample size, viz.
MPN/100ml. As an indication of the quality desired, drinking water
standards in 1914 were set at an MPN of 2.2/100ml, whilst by 1942 this
had decreased to an MPN of 1/100ml.
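As a worked example of the BOD definition in (i): if an undiluted sample
incubated at 20 °C has an initial dissolved oxygen content DO_0 = 8.5 mg/litre
and a five day content DO_5 = 3.5 mg/litre, then BOD_5^20 = 8.5 - 3.5 = 5
mg/litre. (The figures are illustrative only; in practice wastewater samples are
normally diluted before incubation and the result scaled accordingly.)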
The following table gives a useful indication of the before and after
wastewater treatment performance targets (Barnes et al, 1981).

Sample                     Total    Suspended    BOD   Org-N   NH4-N   NO3
                           Solids   Solids (SS)
Raw municipal wastewater    600        250        250    40      30     <5
Settled wastewater          500        100        180    30      20     <5
Secondary treated           500         30         20    15      25      5
wastewater
Nitrified effluent          500         10         10     5       5     30

Table 1.1 Typical wastewater sample concentrations {mg/litre}

In general municipal wastewater BOD and SS are between 100-400


{mg/litre}. The nitrogen in wastewater is organic nitrogen (Org-N) and ammonia
nitrogen (NH4-N). The third nitrogen compound is nitrate (NO3). The organic

and ammonia nitrogen content can be transformed to nitrate by a process known


as nitrification; indeed this is one of the processes used to reduce the
organic/ammonia nitrogen content of wastewater.

As can be seen in Table 1.1, the wastewater passes through a number of


treatment processes, and these are usually described using a sequential three-stage
process framework. The stages are called Primary, Secondary, and Tertiary as
shown in Fig.1.1. A brief description of each of the three stages follows.

Figure 1.1 Three stage wastewater treatment (inflow passes through the
primary process, the secondary treatment and the tertiary processes to the
outflow)



1.1.1 Preliminary and Primary Treatment Processes

In the primary stages, mechanical and hydraulic methods are used to


extract and remove the larger sized particulate content of the incoming sewage.
Processes to settle grit and some of the larger sized organic waste are used. In the
case of grit, long channels are used to settle the grit whilst leaving the organic
material suspended. Some of the organic waste may also be removed by
settlement prior to secondary treatment. One method is to use a flocculating agent
to cause particles to coalesce to a large size which can then be removed by gravity
separation. Primary processes are able to remove 30-40% of the incoming BOD
and 60-75% of the incoming suspended solids when efficiently operated (Wilson,
1981).

1.1.2 Secondary Treatment Processes

In the secondary processes biotechnology comes to the fore and the


processes to treat wastewater are really different ways of engineering a biological
reactor. There are two biological processes which can be utilised:
(i) Aerobic processes: These are processes in which the micro-
organisms use free dissolved oxygen (DO) present in the water phase. It
is useful to note that the saturation concentration of dissolved oxygen in
clean distilled water is about 9 mg/litre at 25 °C and that DO is an
important control variable in the aerobic processes. The products of the aerobic
process are generally considered benign: carbon dioxide (CO2), water
(H2O), nitrates, sulphates, phosphates and new cells.

(ii) Anaerobic processes: These are processes in which the micro-
organisms obtain the oxygen needed to sustain life from the bound oxygen
available in the salts (nitrate (NO3^-), sulphate (SO4^2-) and phosphate
(PO4^3-)) present in the mixture. The outcomes of a generalised
anaerobic process include carbon dioxide, methane (CH4), phosphine
(PH3), hydrogen sulphide (H2S) and new cells. The methane is a
valuable fuel and the anaerobic process is often optimised to maximise its
production.
A very useful summary was provided by Beck (1986) which shows the
biological wastewater treatments in perspective. This is given in Table 1.2.

AEROBIC                                                            ANAEROBIC
(1) Primary        (2) Secondary      (3) Tertiary Objective:      Primary Objective:
Objective:         Objective:         De-Nitrification             Carbonaceous Substrate
Carbonaceous       Nitrification                                   Degradation
Substrate
Degradation

Substrate          Ammonium-N         Nitrate-N (NO3)              Hydrolysis
Capture            (Nitrosomonas)
                   Nitrite-N          Nitrogen gas                 Volatile acid generation
                   (Nitrobacters)     metabolism
Water, carbon      Nitrate-N (NO3)    (4) Phosphorus Removal       Acid conversion
dioxide                               (by chemical precipitation)
                                                                   Methane, carbon dioxide,
                                                                   hydrogen sulphide

Table 1.2 Principal biological wastewater treatment processes.

The aerobic processes are of most interest and it is useful to briefly


describe the three sub-processes making up the aerobic group:
Carbonaceous substrate degradation: Heterotrophic organisms utilize organic
carbon for growth. In this way the organic content is broken down and absorbed
to create cell growth. The dissolved oxygen is also used in the process and hence
the DO reduces over time as it is consumed. The organisms involved are aerobic
heterotrophs.
Nitrification: After the reduction of organic carbon, a different group of aerobic
organisms are able to degrade the ammonia in the liquor, again utilising oxygen in
the process and reducing the effluent BOD further. Specialised bacteria
Nitrosomonas and Nitrobacters are used for this two-stage process. These are
known as aerobic autotroph organisms. Nitrification requires a mean cell
residence time of more than 10 days and a food to micro-organism ratio < 0.3.
Denitrification: This is the biological removal of the oxidised nitrogen. Aerobic
heterotrophic organisms effect this under anoxic conditions, meaning that the
organisms absorb molecular oxygen from the nitrate and nitrite already present.
Thus, these latter components are finally reduced to nitrogen gas.
The mechanical means of constructing efficient biological reactors
resolve into three main types: the activated sludge process, percolating filters and
oxidation ponds.
(a) The Activated Sludge Process: Aerated sewage is able to support
a floc of free floating sludge organisms which are able to use the organic
matter in the sewage as a food source. This feature is engineered into a
continuous recycle process comprising an aeration tank and a gravity
settler. Activated sludge is encouraged to grow in the aeration tank via
the enhancement and control of the dissolved oxygen content of the
mixed liquor. The mixed liquor is transferred by flow to a gravity settler or
clarifier where the large particulate matter falls to the bottom and clear
effluent is withdrawn. Sludge in the clarifier is bled off for disposal but
a proportion is fed back in a recycle path to maintain the biological
population growth in the aeration tank.
(b) Percolating filters: In this process, aerobic sewage is sprayed
onto media able to support a micro-organism population, for example
stones or plastic frameworks. The organisms are able to colonise the
supporting media extracting suspended solid material from the sprayed
sewage. A settling tank is used to separate out sludge and the clear
effluent.
(c) Oxidation ponds: This is a low cost but land intensive system
where typically rectangular ponds 1 to 1.5 m deep are used to
form an aerobic sludge solution above an anaerobic sludge
process. Different variants of the basic process are used for
different purposes, for example lightly loaded ponds to polish
effluent properties.

1.1.3 Tertiary Processes

The tertiary processes form a group of operations pursued at the end of the
wastewater treatment to achieve particular objectives. Wilson (1981) puts several
different operations in this category.
(i) Solids removal. These are designed to improve the effluent
quality and Wilson (1981) lists micro-strainers, grass plots, pebble bed
clarifiers, and sand filters for this purpose.
(ii) Disinfection. These are processes designed to reduce the
MPN/100ml assay values to within specified limits. Some of the physical
processes designed for solid removal can also help with disinfection;
where these fail then disinfecting agents like chlorine are used.
(iii) Nutrient removal. When discharging effluent to the natural
environment, it is essential to ensure that the effluent content does not
perturb the ecosystem. The nitrogen and phosphorus content of
discharge effluent can cause the excess growth of algae in river systems
and this has to be prevented. Phosphorus removal tends to be chemical,
and the biological processes of nitrification and denitrification as
described previously can be used to remove nitrogen.

1.2 Modelling Preliminary and Primary Processes


The primary processes for wastewater usually comprise screen filters and
long channel runs to remove large sized objects and grit sized material. This is
often followed by a primary settlement of the influent to remove some of the
inflowing BOD. This arrangement is shown in Fig. 1.3.

Figure 1.2 The primary process (influent stream entering a primary settler of
volume Vp and area Ap, with overflow and underflow streams)

Although the diurnal and seasonal flow variations cannot be controlled,


the primary treatment processes are often used to achieve some global treatment
plant control:

(i) Buffer storage may be used to smooth out the time variation in
the daily/hourly inflow.
(ii) Primary sedimentation can be used to achieve some uniformity
of the influent quality prior to transfer to the secondary
treatment processes.
(iii) Some of the influent BOD can be removed by primary
settlement.

Figure 1.3 Primary treatment processes (wastewater inflow passing through
pre-treatment screens and filters and long channel runs to a primary
settlement basin, with primary sludge formation)


The literature shows few dynamic models of the primary processes. In
the main, steady-state and empirical models are found. Henry et al have one such
model available. After reduction, the model is a steady-state split between influent
and primary sludge as represented by the block diagram of Fig. 1.2.
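As an illustration of such a steady-state split, the short sketch below (Python;
the function name and the removal fractions are illustrative assumptions drawn
from the performance ranges quoted in Section 1.1.1, not from any published
model) apportions the influent BOD and suspended solids between the settled
overflow and the primary sludge.

```python
def primary_settler_split(bod_in, ss_in, bod_removal=0.35, ss_removal=0.65):
    """Steady-state split of influent BOD and SS across a primary settler.

    bod_in, ss_in : influent concentrations {mg/litre}
    bod_removal   : fraction of BOD settled out (30-40% quoted in Section 1.1.1)
    ss_removal    : fraction of SS settled out (60-75% quoted in Section 1.1.1)
    Returns the overflow concentrations passed on to secondary treatment.
    """
    bod_overflow = (1.0 - bod_removal) * bod_in
    ss_overflow = (1.0 - ss_removal) * ss_in
    return bod_overflow, ss_overflow

# Raw municipal wastewater figures of Table 1.1 (250 mg/litre BOD and SS)
print(primary_settler_split(250.0, 250.0))   # -> approximately (162.5, 87.5)
```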

1.3 Modelling the Activated Sludge Process

1.3.1 Introduction

This section concentrates on the Activated Sludge Process as shown in
Fig. 1.4.

Figure 1.4 Activated Sludge Process (influent from the primary settler enters
the aeration tank; the mixed liquor flows to the settle tank, from which clear
effluent is taken off, sludge is removed and a sludge recycle returns to the
aeration tank)

The main components are described as follows:


(i) Aeration Tank. This is a biological reactor containing a mixed
liquor. This is a mixture of liquid and suspended solids. The organic
content of the mixed liquor, often called the mixed liquor suspended
solids (MLSS), will support a micro-organism population provided
sufficient dissolved oxygen is available. Thus the main SS reduction
mechanism results from the mixture of micro-organisms plus oxygen
plus organic waste. To ensure sufficient oxygen is available the
dissolved oxygen level is manipulated by mechanical aeration.
(ii) Settle Tank. The second unit is a gravity settlement tank or
clarifier. Here the sludge and clear effluent separates out. The off-take
of clear effluent is continuous. Also in operation is sludge removal from
the settlement tank. Sludge removal follows two routes, firstly sludge is
drawn off for disposal, and secondly a proportion of the sludge is
recycled back to the aeration tank. The function of this recycle is to seed
the incoming effluent from the primary process and maintain a viable
population of micro-organisms in the aeration tank.
The modelling development proceeds by considering each component
model. These can then be further developed or used separately or in a composite
secondary process.

1.3.2 The Aeration Tank Process

In this section a systematic development of the Aeration Tank model is


considered. The biological population is defined as follows:
Xv(t) = the viable (living) cell population concentration {mg/litre}

Xnv(t) = the non-viable (dead) cell population concentration

{mg/litre}
S(t) = the substrate or soluble BOD {mg/litre}
DO(t) = the dissolved oxygen concentration {mg/litre}
V = the volume of the aeration tank
Fin(t) = the inflow rate

Fout(t) = the out flow rate

The aeration tank construction and variables are shown in Fig. 1.5.

Figure 1.5 Aeration Tank Model (inflow Fin(t) and outflow Fout(t); the tank
contents are characterised by the dissolved oxygen DO(t), the viable biomass
Xv(t), the non-viable biomass Xnv(t) and the suspended solids M(t), with
influent concentrations Xv_in(t) and Xnv_in(t))

1.3.2.1 Biomass Growth Behaviour

The background in bio-engineering needed for the model of the process


can be found in a text like Bailey and Ollis (1986). The specialised application of
this knowledge to wastewater processes is found in many papers and the
explanation following has been gleaned from various sources.
The Michaelis-Menten Law: this comes from fundamental studies in the kinetics
of enzyme-catalysed reactions. This is the situation when an enzyme E and a
substrate S combine to form a complex ES. This complex then dissociates into a
product P and free enzyme, namely:

$$ S + E \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} ES, \qquad ES \xrightarrow{k_2} P + E $$

The kinetics of this reaction were described by a rate equation:

$$ \nu(t) = \frac{V_{max} S(t)}{K_s + S(t)} $$

This equation has come to be known as the Michaelis-Menten kinetics, with
$V_{max}$ the maximum or limiting velocity and $K_s$ the Michaelis constant (Bailey
and Ollis, 1986). The above work was progressed around 1900-1915.

The Monod Growth Law: In the 1940’s, Monod investigated the specific growth
rates for a single species as a function of substrate concentration, S(t). To explain
the observed behaviour, Monod invoked a growth law which has the same
functional form as the Michaelis-Menten kinetics:

$$ \mu(t) = \frac{\mu_{max} S(t)}{K_s + S(t)} $$

where $\mu_{max}$ = maximum growth rate, and $K_s$ is the value of the limiting
concentration at which $\mu = \mu_{max}/2$. It should be noted that the substrate media are
classified as synthetic and complex. A synthetic medium is one having a well-
defined chemical composition whilst a complex medium is a mix of ill-defined
components. Domestic sewage is considered to be a complex medium.
Wastewater Biomass Growth Law: Most of the literature since 1980 appears to
use a double Michaelis-Menten-Monod type of growth law. This comprises one
term involving the substrate and a second term incorporating the Dissolved
Oxygen component:
$$ \mu(t) = \mu_m \left[ \frac{S(t)}{K_s + S(t)} \right] \left[ \frac{DO(t)}{K_o + DO(t)} \right] $$

where $\mu_m$ = maximum specific growth rate {h-1}
      $K_s$ = saturation constant {mg/litre}
      $K_o$ = half-rate constant for oxygen {mg/litre}

and $\mu(t)$ is the bacterial growth rate {h-1}.

Note that:

$$ \mu(t) = \mu_m \left[ \frac{1}{1 + K_s/S(t)} \right] \left[ \frac{1}{1 + K_o/DO(t)} \right] $$
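As a quick numerical illustration of the double growth law, the sketch below
(Python; all parameter values are illustrative assumptions, not values taken from
the text) evaluates the growth rate for given substrate and dissolved oxygen
concentrations.

```python
def growth_rate(s, do, mu_max=0.25, k_s=60.0, k_o=0.5):
    """Double Michaelis-Menten-Monod law: mu = mu_max*[S/(Ks+S)]*[DO/(Ko+DO)].

    s      : substrate (soluble BOD) concentration {mg/litre}
    do     : dissolved oxygen concentration {mg/litre}
    mu_max : maximum specific growth rate {1/h}       (illustrative value)
    k_s    : saturation constant {mg/litre}           (illustrative value)
    k_o    : half-rate constant for oxygen {mg/litre} (illustrative value)
    """
    return mu_max * (s / (k_s + s)) * (do / (k_o + do))

# With plentiful substrate and oxygen the rate approaches mu_max:
print(growth_rate(300.0, 4.0))   # approximately 0.185 {1/h}
```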

1.3.2.2 Biomass Yield Relationships

The yield factor has the definition (Bailey and Ollis, 1986):

$$ Y_x = \frac{\text{mass of cells formed }(X)}{\text{mass of substrate consumed }(S)} $$

This can be written using concentrations as:

(Yield of Substrate Conversion) x (Substrate Uptake Rate)
    = (Growth Rate) x (Current Live Population)

$$ Y_x \, S_{used}(t) = \mu(t) X_v(t) $$

where $Y_x$ = yield from substrate to biomass conversion {dimensionless}
      $S_{used}(t)$ = rate of substrate used {mg/(litre h)}
      $\mu(t)$ = biomass growth rate {h-1}
      $X_v(t)$ = current biomass concentration {mg/litre}.

Thus the rate of conversion of substrate to biomass is given by

$$ S_{used}(t) = \frac{\mu(t) X_v(t)}{Y_x} $$
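As a quick numerical check of the last relationship (illustrative figures only):
with a growth rate $\mu(t)$ = 0.1 h-1, a viable biomass concentration $X_v(t)$ =
2000 mg/litre and a yield factor $Y_x$ = 0.6, the substrate is consumed at the rate
$S_{used}(t)$ = (0.1 x 2000)/0.6 = approximately 333 mg/(litre h).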

1.3.2.3 Aeration Tank Flow Balance

The tank content is assumed to remain constant so that:

$$ \frac{dV}{dt} = F_{in}(t) - F_{out}(t) = 0 $$

hence $F_{out}(t) = F_{in}(t) = F(t)$.

1.3.2.4 Material Balance for Substrate (BOD)

The rate of change equation is as follows:

{Rate of change of total volume of substrate} = {Inflowing substrate rate}
    - {Outflowing substrate rate} - {Substrate loss to growth of biomass}

Thus

$$ \frac{d}{dt}\{S(t)V\} = V\frac{dS(t)}{dt} = F(t)S_{in}(t) - F(t)S(t) - \frac{\mu(t)X_v(t)}{Y_x}V $$

Dividing through by V gives:

$$ \frac{dS}{dt} = \frac{F(t)}{V}\left(S_{in}(t) - S(t)\right) - \frac{\mu(t)X_v(t)}{Y_x} $$

1.3.2.5 Material Balance for Viable (Live) Biomass

The rate of change equation is given by:

{Rate of change of live biomass} = {Inflowing live biomass}
    - {Outflowing live biomass} + {Growth of live biomass} - {Death of live biomass}

Thus

$$ \frac{d}{dt}\{X_v(t)V\} = V\frac{dX_v(t)}{dt}
   = F\,X_v^{in}(t) - F\,X_v(t) + \mu(t)X_v(t)V - K_d X_v(t)V $$

where $K_d$ = death rate of viable biomass.

1.3.2.6 Material Balance for Non-Viable (Dead) Biomass

The rate of change equation is given by:

{Rate of change of mass of non-viable biomass} = {Inflow of non-viable biomass}
    - {Outflow of non-viable biomass} + {Death rate of viable biomass}

$$ \frac{d}{dt}\{V X_{nv}(t)\} = V\frac{dX_{nv}(t)}{dt}
   = F\,X_{nv}^{in}(t) - F\,X_{nv}(t) + K_d X_v(t)V $$

where $K_d$ = death rate of viable biomass.

1.3.2.7 Dissolved Oxygen Balance

The dissolved oxygen balance is often omitted from many of the


models in the literature. In fact dissolved oxygen is treated in two
different ways:
(i) Omission of DO Balance
Many of the model developments use the assumption that
adequate provision of dissolved oxygen is achieved by the aeration
control loop. What this implies is that an accurate, energy efficient
setpoint for dissolved oxygen has been determined and that the dynamics
of the DO loop are insignificant compared with those of the aggregated
biomass behaviour. Olsson (1992) has given some indication that the
dynamics of the biomass growth are of the order of days whilst the
dissolved oxygen transfer takes 15-30 minutes. In this time frame,
biomass cell growth cannot be controlled hour to hour.
Under this situation dissolved oxygen would not be a control and the
biomass growth law of Section 1.3.2.1 could be modified to:

$$ \mu(t) = \mu_m \mu_o \left[ \frac{S(t)}{K_s + S(t)} \right] $$

where the new coefficient $\mu_o$ is given by

$$ \mu_o = \frac{DO(setpoint)}{K_o + DO(setpoint)} $$

and DO(setpoint) = the dissolved oxygen setpoint.


The model described by Hamalainen et al (1975) adopts the above
approach, mainly because a commercial DO sensor was not available
at the time.
(ii) Including balance for DO
A material balance for DO follows as:

{Rate of change of DO(t)} = {Inflowing DO(t)} - {Outflowing DO(t)}
    - {DO(t) loss to biomass growth} + {DO(t) supplied by control}

$$ \frac{d}{dt}\{V\,DO(t)\} = V\frac{dDO(t)}{dt}
   = F(t)DO_{in}(t) - F(t)DO(t) - K_{DO}\left(\frac{\mu(t)}{Y}\right)X_v(t)V + DO_c(t)V $$

hence

$$ \frac{dDO(t)}{dt} = \frac{F(t)}{V}\left(DO_{in}(t) - DO(t)\right) - K_{DO}\frac{\mu(t)X_v(t)}{Y} + DO_c(t) $$

where $DO_{in}(t)$ = influent DO
      $K_{DO}$ = coefficient of the rate at which the substrate uses DO(t)
      $DO_c(t)$ = DO(t) supplied by the aeration system.

The model by Nejjari et al (1997) has an equation of the above form for
the DO balance.

1.3.2.8 Mixed Liquor Suspended Solids Concentration

The suspended solids in the mixed liquor comprise the viable and non-viable
biomass, hence
$$ M_T(t) = X_v(t) + X_{nv}(t) $$

1.3.2.9 Aeration Tank Model Summary

1. Substrate Balance

$$ \frac{dS}{dt} = \frac{F(t)}{V}\left(S_{in}(t) - S(t)\right) - \mu(t)\frac{X_v(t)}{Y_X} $$

2. Viable Biomass Balance

$$ \frac{dX_v(t)}{dt} = \frac{F(t)}{V}\left(X_v^{in}(t) - X_v(t)\right) + \mu(t)X_v(t) - K_d X_v(t) $$

3. Non-Viable Biomass Balance

$$ \frac{dX_{nv}(t)}{dt} = \frac{F(t)}{V}\left(X_{nv}^{in}(t) - X_{nv}(t)\right) + K_d X_v(t) $$

4. MLSS Concentration in Aeration Tank

$$ M_T(t) = X_v(t) + X_{nv}(t) $$

5. Biomass Growth Law

$$ \mu(t) = \mu_m \left[ \frac{S(t)}{K_s + S(t)} \right] \left[ \frac{DO(t)}{K_o + DO(t)} \right] $$

6. Dissolved Oxygen Balance


The inclusion of a dissolved oxygen balance depends on the assumptions
adopted for the model.
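The summary equations are straightforward to simulate. The sketch below is a
minimal Python example, assuming illustrative parameter values and initial
conditions (none are taken from the text) and using the fixed dissolved oxygen
setpoint approximation of Section 1.3.2.7 in place of a full DO balance; it
integrates the substrate and biomass balances with a simple forward Euler scheme.

```python
# Illustrative parameters and conditions (assumed values, not from the text)
V = 1000.0                            # aeration tank volume {m^3}
F = 100.0                             # through-flow {m^3/h}
mu_max, k_s, k_o = 0.25, 60.0, 0.5    # growth law constants
y_x = 0.6                             # yield factor
k_d = 0.005                           # viable biomass death rate {1/h}
do_setpoint = 2.0                     # dissolved oxygen setpoint {mg/litre}
mu_o = do_setpoint / (k_o + do_setpoint)      # fixed DO factor (Section 1.3.2.7)
s_in, xv_in, xnv_in = 200.0, 50.0, 10.0       # influent concentrations {mg/litre}

def derivatives(s, xv, xnv):
    """Right-hand sides of the aeration tank summary equations."""
    mu = mu_max * mu_o * s / (k_s + s)              # growth law, DO at setpoint
    ds = (F / V) * (s_in - s) - mu * xv / y_x       # substrate balance
    dxv = (F / V) * (xv_in - xv) + (mu - k_d) * xv  # viable biomass balance
    dxnv = (F / V) * (xnv_in - xnv) + k_d * xv      # non-viable biomass balance
    return ds, dxv, dxnv

# Forward Euler integration over ten days
s, xv, xnv = 150.0, 2000.0, 200.0     # initial concentrations {mg/litre}
dt, t_end = 0.05, 240.0               # time step and horizon {h}
for _ in range(int(t_end / dt)):
    ds, dxv, dxnv = derivatives(s, xv, xnv)
    s, xv, xnv = s + dt * ds, xv + dt * dxv, xnv + dt * dxnv

print(f"S = {s:.1f}, Xv = {xv:.1f}, MLSS = {xv + xnv:.1f} mg/litre")
```

Note that there is no sludge recycle in this single-tank sketch, so the steady-state
biomass concentration remains low; Section 1.3.3 adds the settle tank from which
a recycle stream would normally reseed the aeration tank.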

1.3.3 Settler Tank Model

The approach to a settler tank model is mixed in the literature.
Hamalainen et al (1975) give an interesting dynamic model which covers the
separation process in a lumped parameter description. Other authors do not
separate out the role of the final clarifier from the aeration tank model; however, it
is useful to have a generic model of the clarifier for use within the wastewater
modelling library. In this section, this is achieved by introducing a number of
assumptions which ensure a simple settle tank model:
(i) No biological activity takes place in the settle tank.
(ii) The settle tank is assumed to be in steady state so that the
dynamics of settling are ignored. More detailed models are
available, see Olsson et al (1985) for example. The model
scheme is shown in Fig. 1.6.

Figure 1.6 Settle Tank Model

1.3.3.1 Concentration Equations

At the settle tank inlet, the conditions are:


(a) Inflow rate = Fin(t) = F(t)

(b) Suspended solids concentration = MT(t)

At the settle tank effluent outlet, the conditions are:


(a) Effluent outflow = Fout(t)

(b) Effluent suspended solid concentration = ME(t), and

ME(t) = β1 MT(t),   0 < β1 < 1

At the sludge take off point, the conditions are:


(a) Sludge take-off rate = FSL(t)

(b) Suspended solid concentration = MSL(t)



MSL(t) = β2 MT(t),   β2 > 1

1.3.3.2 Balance Equations

Two balance equations apply:


(i) Flow balance:
Fin(t) = Fout(t) + FSL(t)

(ii) Mass balance:


Fin(t) MT(t) = Fout(t) ME(t) + FSL(t) MSL(t)

(iii) Substitution yields:


Fin(t) = β1 Fout(t) + β2 FSL(t)
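The two balance equations can be rearranged to give the effluent and sludge take-off flows explicitly. The short sketch below does this in Python; the concentration ratios β1 and β2 are illustrative assumptions, not measured values.

```python
# Sketch: steady-state settle tank split implied by the balance equations above.
# The concentration ratios beta1, beta2 are illustrative assumptions.

def settle_tank_split(F_in, M_T, beta1=0.02, beta2=8.0):
    """Return (F_out, F_SL, M_E, M_SL) from the flow and mass balances."""
    # Fin = Fout + FSL  and  Fin = beta1*Fout + beta2*FSL  solved for the flows:
    F_SL = F_in * (1.0 - beta1) / (beta2 - beta1)
    F_out = F_in - F_SL
    return F_out, F_SL, beta1 * M_T, beta2 * M_T

F_out, F_SL, M_E, M_SL = settle_tank_split(F_in=200.0, M_T=2500.0)
print(F_out, F_SL, M_E, M_SL)
```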

1.3.4 Summary Conclusions

The models devised for the components of the Activated Sludge System would form the basis of a library and be available for use in specific studies. The model is non-linear and dynamic. It is useful to consider the different ways in which the model could be used.

1.4 Uses of the Model


A suite of equations should be developed for each sub-unit in the global
process since this enables both separate unit investigations and process train
studies. In the latter it is necessary to connect sub-units together so that
interactions and global dynamic performance can be studied.

1.4.1 Sub-Unit Studies

With the model framework of a single sub-unit two aspects may be


pursued:
(i) Refinement of the model to yield a more detailed description.
(ii) Validation and parameter identification studies.
For example, the equations of Section 3.2 describe a single biomass population; however, the equations are generic and could easily be expanded to describe a composite biomass structure. The model has certain parameters, such as the yield factor Y and the biomass death rate Kd, which are uncertain. By using specific experiments these coefficients could be identified either off-line or on-line using recursive methods, as sketched below.
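A minimal sketch of such an on-line recursive method is given below. It uses a generic recursive least squares update; arranging the plant data into the linear regression form y_k = φ_kᵀθ (for example with θ containing 1/Y and Kd) is an assumption made here purely for illustration, and the synthetic data are not plant measurements.

```python
import numpy as np

# Minimal recursive least squares sketch for on-line estimation of uncertain
# model coefficients, assuming the measurements can be arranged in the linear
# regression form  y_k = phi_k . theta  (an illustrative assumption).

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS update with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).flatten()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Example with synthetic data: y = 1.6*phi1 + 0.05*phi2 + noise.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.uniform(0.5, 2.0, size=2)
    y = 1.6 * phi[0] + 0.05 * phi[1] + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # should approach [1.6, 0.05]
```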

1.4.2 Process Train Studies

The concept of creating a library of sub-units with validated physical coefficients enables complete process trains to be constructed and their performance examined. Whilst improvements in local loops are easily realised, it is only through simulating, studying and optimising the global performance that the second layer of performance improvements and benefits will be identified. The potential for optimising setpoint values and installing co-ordinating and predictive control occurs at the global process level.

1.4.3 On-Line Process Control

Many of the variables of the wastewater process are unmeasurable, yet


reasonably simple and robust models can be used to provide estimates of these variables. Thus, with some calibration and simplification, the models could be used online to assist operators in following process changes and also to provide input for more advanced control strategies. The employment of SCADA systems at wastewater treatment plants facilitates the use of models in this way.

1.5 Modelling Principles

The previous sections presented a process review and detailed many of


the physical mechanisms at work in wastewater treatment plant. This knowledge
was used in the derivation of a model based on physical principles. It is useful to
extract from this and other similar modelling and simulation experiences some of
the common features and principles.

1.5.1 Process Control and the Modelling Activity

The trend in process control, and the development of enabling simulation tools, has led to a global approach to process control design. The multi-layer hierarchy is an important concept in this approach. Hierarchical concepts emerge in many ways, of which the most common are:
(i) Technology: Modern plant is inevitably operated using a
Distributed Control System (DCS) or a Supervisory Control and Data
Acquisition (SCADA) system. These are essentially hierarchical in
structure having hardware and software of increasing power as the
hierarchy is ascended. The Fig. 5.1 (Popovic and Bhatkar, 1990)
illustrates this technological progression.
(ii) Information: Low level control samples at msec rates,
high level control requires global plant performance data. Thus
there is an information hierarchy where the quality of the data
increases, and the quantity of the data decreases as the hierarchy
is ascended from regulator/process unit level through to
company boardroom. Even at the low end of the information
hierarchy, a definite multi-layer of data transfer rates is
discernible as shown by Table 1.3.
Network Speed   Predictable Updates   Application
High            1-20 msec             Motion control, drive co-ordination
Medium          20-200 msec           Machine sequencing, alarms, supervisor
                                      parameters, limited data collection
Slow            200 msec-2 sec        Operation interfaces, data collection/archiving
Table 1.3 Data transfer and update times
(iii) Process Control and Operations: Much attention has been devoted to regulator design and implementation over past decades, but in the last decade or so the industrial emphasis has been on designing, operating and optimising the global plant operations. Fig. 5.2 shows the industrial process control
hierarchy which has motivated this trend. By carefully
specifying the desired measurable performance at the regulator,
supervisory and scheduling layers of the hierarchy it is possible
to optimise global operational performance.
It is in this pursuit for global optimisation that process modelling
becomes important. The new simulation tools like MATRIXx, EASY5x and

MATLAB/SIMULINK provide the enabling technology to facilitate such studies


(Pike and Johnson, 1994). These tools work on the basis of libraries of components, which can be selected as icons and assembled to form block diagram based simulation models. Sub-units can then be connected to form process trains and so on; thus the hierarchical nature of the plant and its information flows is replicated, enabling global optimisation to be investigated straightforwardly. The way in which this could be done is described in Section 5.4.

1.5.2 Modelling from Physical Principles

In Section 1 of this module a descriptive review of the global wastewater


process was presented. In Sections 2 and 3 a more detailed analysis was presented for the subprocesses of the primary and secondary treatment plant. This analysis looked at the physical mechanisms present and the structures where the mechanisms were active, listed the assumptions which enabled the model descriptions to be precise, and finally gave specific equations for the mechanisms and subprocesses.
Three categories of variables emerge during this process:
States: The state variables in a model describe the phenomena at work in a process. The transformations and changes that occur to state variables are usually given by a rate of change equation involving a material balance, a force balance, an energy balance or a work balance. The state variables describe the total past behaviour of a system and are used to initiate the future process behaviour. System states are usually denoted by the letter xi(t); i = 1,2,..,n.

In the model devised, the state variables included S(t), substrate


concentration, Xv(t), viable cell concentration and Xnv(t), non-viable cell

concentration. These were the information variables, which summarised past


behaviour and initiated future plant behaviour. The rate of change balance
equation had a straightforward structure:

{Rate of change of accumulation in state variable} = {Influx of state variable} - {Outflow of state variable}
                                                      + {Creation of state variable} - {Loss/destruction of state variable}
Inputs: In writing out the rate of change equations for the state variables, two
types of input contributions to the balance will be used. One type of input will be
manipulatable and will provide a means of driving or controlling the state vector
to some desired position. Such inputs are termed control inputs and are usually
denoted by the letter ui(t), i = 1,..m. Examples of control inputs of the

wastewater process are DO(t) concentration, and recycle flow FRC(t).

Other inputs to the process and forming contributions to the state


equations will not be controlled and these inputs are termed disturbances, formally
denoted using the letter di(t) ; i = 1,…,md. A typical example might be the sudden

variation in influent substrate associated with diurnal variations. This influx is not
controllable, and forms a serious disturbance input to the system.
Outputs: The window on the state variables of the process is via the output variables, usually denoted yi(t); i = 1,…,r. Output variables come in four categories: those which are measurable and those which are not, and those which are to be controlled and those which are not. Thus, the matrix of output variable types is shown in Table 1.4.

Output Variables       To be Controlled   Those not Controlled
Measurement Output     Ym,c               Ym,nc
Unmeasurable Output    Yum,c              Yum,nc

Table 1.4 : Output variable types

This table has implications for the feasibility of any given control objectives.

1.5.2.1 State-Space System Description

Collect together the state variables, the control and disturbance variables and the
output variables to form vectors:
State Vector:             x = (x1, x2, ..., xn)
Control Vector:           u = (u1, u2, ..., um)
Disturbance Input Vector: d = (d1, d2, ..., dnd)
Output Vector:            y = (y1, y2, ..., yr)
Parameter Vectors:        α = (α1, ..., αnp1)  and  β = (β1, ..., βnp2)

Nonlinear State Space System

ẋ = f(x, u, d, α; t)     State equation
y = h(x, u, d, β; t)     Output equation

f and h are nonlinear vector relationships.

1.5.2.2 Linear State Space System

ẋ = Ax + Bu        State equation
y = Cx + Du        Output equation

where A is an (n x n) matrix, B is an (n x m) matrix,
C is an (r x n) matrix and D is an (r x m) matrix.

Figure 1.7 Generic State Space Model Description

Both equations need an initial condition, x(to).

The state space system is a common unifying notation for many simple
system descriptions, where the model has been developed from physical
principles. This structure is shown in Fig. 1.7.
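To illustrate how such a state space description can be exercised numerically, the following sketch integrates a linear state space system by forward Euler steps. The matrices, the step input and the integration step are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch: simulate the linear state space system x' = Ax + Bu, y = Cx + Du
# by forward Euler integration. The matrices below are illustrative only.

def simulate_lss(A, B, C, D, x0, u_of_t, t_end=10.0, dt=0.01):
    x = np.array(x0, dtype=float)
    ts, ys = [], []
    t = 0.0
    while t <= t_end:
        u = np.atleast_1d(u_of_t(t))
        y = C @ x + D @ u
        ts.append(t); ys.append(y.copy())
        x = x + dt * (A @ x + B @ u)        # Euler step of the state equation
        t += dt
    return np.array(ts), np.array(ys)

# Example: a stable two-state system driven by a unit step input.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
t, y = simulate_lss(A, B, C, D, x0=[0.0, 0.0], u_of_t=lambda t: 1.0)
print(y[-1])     # approaches the steady state value 0.25 for these matrices
```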

Once a model has been derived, the parameters and data needed to be able to use the model have to be listed and identified. Finding model parameter values is often a difficult process and may involve using intelligent guesses, carefully controlled experimentation, or plant data generation and analysis.
The final step in the process is that of model validation. This comprises
two sub-steps:
(i) Simple test examples. These are tests constructed to see if the simulation responds as expected when compared with past experience or intuitive knowledge. The tests can also examine quantitative measures like steady-state values or qualitative measures like speed of response.
(ii) Plant data test. Historical data can be used to see if the
model/simulation repeats the measured behaviour from plant
tests. A carefully selected set of scenario tests is often useful at
this stage to investigate the range and robustness of the model’s
behaviour.
The full sequence of steps in the model building activity is shown in Fig.
1.8.

Process Description
  - The literature; the plant diagrams; the plant operators

Assumptions
  - Geometrical
  - Physical mechanisms
  - Chemical/biological
  - Parameter/variable changes

Derive the State Equations
  - Which state variables to use
  - Which inputs are needed
  - What additional equations

Outputs and Process Parameters
  - What process parameters are needed (α, β)
  - What output equations

Construct Simulation
  - What tools
  - What cost

Model Validation
  - Academic test examples
  - Plant data tests

Is the model satisfactory? If so, exit; if not, revisit the assumptions.
Figure 1.8 Steps in Model Building Process

1.5.3 Black Box Modelling Methods

The key principles in the black box methodology are five-fold:


(i) Consider features of process behaviour.
(ii) Select a black box mathematical model.
(iii) Collect appropriate data ~ experimental tests.
(iv) Use the data to fit the model.
(v) Test model on new data not used in fitting.
Thus in this section, a brief review of two black box models is pursued.

1.5.3.1 First and Second Order Process Models

In Fig. 1.9 the essential features of the link between the operations of differentiation and integration and the Laplace operator, s, are displayed. These links enable the 1st order and 2nd order transfer functions to be used.

Time domain - differentiation:        y(t) = d/dt{x(t)}
Frequency domain - multiply by s:     Y(s) = s X(s)

Example:  dx(t)/dt + 3x(t) = 4u(t)
          sX(s) + 3X(s) = 4U(s)
          X(s) = [ 4/(s + 3) ] U(s)

Time domain - integration:            y(t) = ∫ x(τ) dτ
Frequency domain - divide by s:       Y(s) = [ 1/s ] X(s)

Example:  z(t) = 4 ∫ u(τ) dτ, integrated from 0 to t
          Z(s) = 4 [ 1/s ] U(s)
Fig. 1.9 Time domain and frequency domain

1.5.3.2 First Order Process Models

The transfer function description is given by


Y(s) = [ K / (τs + 1) ] U(s)

where K = d.c. gain
      τ = time constant.

It is the time constant τ which is being referred to in colloquial expressions such as "process G has slow dynamics" or "process G1 has very fast dynamics". The size of τ indicates the time the output takes to arrive at 63.2% of its final value when excited by a step input. The full procedure for fitting a 1st order model is shown in Fig. 1.10.

If the process has a delay or dead-zone then a typical response is also


shown in Fig. 1.10. In this case the transfer function is modified to:
G(s) = [ K e^(-Ds) / (τs + 1) ]

where D is the delay time.



From the recorded step response (input step uo to u1, output yo to y1):
  gain           K = (y1 - yo)/(u1 - uo)
  time constant  τ read off where y(τ) = yo + 0.632(y1 - yo)
  delay          D read off the time axis as the time before the output starts to respond.
Fig.1.10 First order response
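The graphical procedure of Fig. 1.10 is easily automated. The sketch below applies it to a sampled step response; the start-of-response threshold and the synthetic test data are assumptions made purely for illustration.

```python
import numpy as np

# Sketch of the graphical first-order-plus-delay fit of Fig. 1.10, applied to a
# recorded step response. Thresholds and test data are illustrative assumptions.

def fit_first_order(t, y, u0, u1, start_fraction=0.02):
    """Return (K, tau, D) from a step response y(t) to a step u0 -> u1 at t=0."""
    y0, y1 = y[0], y[-1]
    K = (y1 - y0) / (u1 - u0)                       # d.c. gain
    moved = np.where(np.abs(y - y0) > start_fraction * abs(y1 - y0))[0]
    D = t[moved[0]] if len(moved) else 0.0          # apparent delay
    y63 = y0 + 0.632 * (y1 - y0)                    # 63.2% point
    i63 = np.argmax(y >= y63) if y1 > y0 else np.argmax(y <= y63)
    tau = t[i63] - D                                # time constant
    return K, tau, D

# Synthetic test: K = 2, tau = 5, D = 1, unit step input.
t = np.linspace(0, 40, 2001)
y = np.where(t < 1.0, 0.0, 2.0 * (1 - np.exp(-(t - 1.0) / 5.0)))
print(fit_first_order(t, y, u0=0.0, u1=1.0))        # roughly (2.0, 5.0, 1.0)
```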

1.5.3.3 Second Order Process Models

The second order transfer function model is given by:



Y(s) = [ K / ( s²/ωn² + 2ζs/ωn + 1 ) ] U(s)

where ωn = natural frequency of oscillation
      ζ = damping factor
      K = d.c. gain.
The terminology of second order systems has evolved from a mechanical
engineering origin involving springs and dashpots. The thought-experiment
involved is shown in Fig. 1.11.

Figure 1.11 Spring-Dashpot Experiment


33
The natural frequency is the frequency of the spring motion without the dashpot. Introduce a dashpot and in comes the damping factor, ζ. This damps out the oscillation. The full range of responses is shown in Fig. 1.12.

Figure 1.12 Second Order Process Step response

These are under-damped, critically damped and over-damped. From a process control viewpoint the under-damped step response shown in Fig. 1.13 is important. This response is used to define the time domain control performance specification; this can be considered in two parts.

Figure 1.13 Features of 2nd Order Step response

1.5.3.3.1 Transient Portion


The two features of interest here are speed of response and overshoot:

% Overshoot = [ (yPeak - ysteady) / ysteady ] x 100%

1.5.3.3.2 Speed of Response


Rise Time = Time from 10% to 90% of Final Value
1.5.3.3.3 Steady State Portion
The two features of importance are steady state offset and settle time:
Steady State Offset = ysteady - yRef

Settle Time, Ts(5%) = Time for y(t) to lie within 5% of ysteady.
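The performance figures just defined can be computed directly from a sampled step response. The sketch below does this in Python; the second order test data and the 5% band are illustrative assumptions consistent with the definitions above.

```python
import numpy as np

# Sketch: compute the step response figures defined above from sampled data.
# A second order under-damped response is generated here purely as test data.

def step_metrics(t, y, y_ref):
    y_steady = y[-1]
    overshoot = 100.0 * (y.max() - y_steady) / y_steady
    i10 = np.argmax(y >= 0.1 * y_steady)
    i90 = np.argmax(y >= 0.9 * y_steady)
    rise_time = t[i90] - t[i10]                       # 10% to 90% of final value
    offset = y_steady - y_ref
    outside = np.where(np.abs(y - y_steady) > 0.05 * abs(y_steady))[0]
    settle_time = t[outside[-1] + 1] if len(outside) else 0.0   # 5% settle time
    return overshoot, rise_time, offset, settle_time

# Test data: unit step response of a second order system with zeta = 0.3, wn = 1.
zeta, wn = 0.3, 1.0
t = np.linspace(0, 30, 3001)
wd = wn * np.sqrt(1 - zeta**2)
y = 1 - np.exp(-zeta * wn * t) * (np.cos(wd * t)
    + zeta / np.sqrt(1 - zeta**2) * np.sin(wd * t))
print(step_metrics(t, y, y_ref=1.0))
```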

1.5.4 Hierarchical System Modelling and Simulation

The ease of using graphical user interface (GUI) icon based modelling and simulation tools enables the rapid and effective construction of computer simulations. The ability to construct simulations which mimic the nested structure of industrial plant is of particular value. Fig. 1.14 shows how this hierarchical nesting could be exploited for a wastewater treatment plant.
The nesting runs from the overall waste-water treatment plant (inflow FIN, outflow FOUT, sludge take-off FSL), down through the treatment plant level (primary unit, secondary treatment, tertiary treatment, sludge treatment), to the secondary treatment sub-units (aeration basin and clarifier tank with recycle FRC), and finally to the state space model of each sub-unit.
Figure 1.14 Hierarchical Modelling and Simulation



1.6 Further Readings


1. Wilson, F., 1981, Design calculations in wastewater treatment, Spon
Publishers, London, U.K.
2. Schroeder, E.D., 1985, Basic equations and design of activated sludge
processes, Comprehensive Biotechnology, Ed. Moo-Young, Vol. 4, Chapter
47, (847-870).
3. Barnes D., P.J. Bliss, B.W. Gould, H.R. Vallentine, 1981, Water and
Wastewater Engineering Systems, Pitman Publishing, London, U.K.
4. Beck, M.B., 1986, Identification, estimation and control of biological wastewater treatment processes, Proc. IEE, Vol. 133, part D, (254-264).
5. Olsson G. and D. Chapman, 1985, Modelling the dynamics of clarifier
behaviour in activated sludge systems. Advances in Water Pollution Control,
Editor R.A.R. Drake, IAWPRC, Pergamon Press, (405-412).
6. Bailey, J.E., and D.F. Ollis, 1986, Biochemical engineering fundamentals,
McGraw-Hill International Editions, New York, ISBN 0-07-066601-6.
7. Olsson G., 1992, Control of wastewater treatment systems, ISA Transactions,
Vol. 31, No.1. 87-96.
8. Nejjari F., A. Benhammou, B. Dahhou and G. Roux, 1997, Nonlinear
multivariable control of biological wastewater treatment process, European
Control Conference, Brussels.
9. Hamalainen, R.P., A. Halme, and A. Gyllenberg, 1975, A control model for
activated sludge wastewater treatment process, IFAC World Triennial,
August 24-30, Boston Mass., USA.
10. Pike A. and M.A. Johnson, 1994, Simulation tools for the 90’s, Measurement
and control, Vol. 27, 185-194, July/Aug.

Glossary
Flocculation: the coalescing or agglomeration of smaller sized particles to form
larger particles.
Mean cell residence time : The average time a mass of cells remain in a
biological system before being withdrawn in a waste solids stream. Also known
as the solids retention time or the sludge age. (Barnes et al, 1981).
Food-to-micro-organism ratio (F/M) : The ratio of the food concentration to the
micro-organism concentration.

2 Process Control Structures

Objectives
1. Understand the basics of the feedback loop and how it improves on open
loop control.
2. To examine the main features of common control loops found in
wastewater processes. Particular attention to be given to on-off control,
PID control, cascade control loops and ratio control.
3. To examine the structures of gain scheduling control, and the self-tuning
control architecture.

2.1 The Actuator – Plant – Measurement Sequence

The general system transfer function description often found in control


textbooks is a simplification of real practical control system problems. To
illustrate how this simplification arises a simple tank process is used.

2.1.1 A Tank Level Process

It is assumed that it is necessary to keep the liquid level in a tank


constant. The motorised valve unit, the tank and the measurement device used are
shown in Fig. 2.1, and these are examined individually along with some useful
analysis.

2.1.1.1 The Actuator

In the example, the actuator is considered to be a motorised valve.


Note the following generic actuator features:

(i) The actuator uses a small voltage signal to control what is


possibly a large material flow, in this case the flow of liquid.

(ii) The actuator has its own inherent dynamics or speed of
response; however, these dynamics may often be quite fast.
(iii) The actuator is nonlinear since its operation can only range between closed and fully open. Despite this non-linear saturation characteristic, many analyses ignore it.

Key: Uc = control signal; FIN = inflow; FOUT = outflow; A = area of cross-section;
H = actual liquid height; Hm = measured liquid height.

Figure 2.1 A tank level process

To represent the Actuator transfer function use:


Fin(s) = GA(s) Uc(s)

Fin(s) = [ KA / ( s²/ωn² + 2ζs/ωn + 1 ) ] Uc(s)
This is a second order system to allow the actuator to exhibit damped
oscillatory behaviour if required.

2.1.1.2 The Plant

In this example the plant is the tank for which the following simple
analysis applies:
Input flow:   Fin(t)
Output flow:  Fout(t) ∝ H(t)
Giving:       Fout(t) = k1 H(t)

where H(t) = head of liquid in the tank

State Equation:
{Rate of Change of Liquid Volume} = {Inflow} - {Outflow}

d/dt { A H(t) } = Fin(t) - Fout(t)

A dH(t)/dt = Fin(t) - k1 H(t)

Going over to Laplace transforms:

A s H(s) = Fin(s) - k1 H(s)

hence   H(s) = [ 1/(As + k1) ] Fin(s)

giving  H(s) = Gp(s) Fin(s)

where the transfer function for the tank plant satisfies

Gp(s) = [ 1/(As + k1) ] = [ (1/k1) / ((A/k1)s + 1) ] = [ KT / (Ts + 1) ]

Thus the tank has a 1st order dynamical response,
with tank gain KT = (1/k1)
and tank time constant T = (A/k1).

The main points to note about this simple derivation are:


(i) The plant is characterised by a dynamic relationship.
(ii) It is not always straightforward to recognise what the input-
output variables might be. In this case the input-output pairing
is not Fin(t) – to - Fout(t) but Fin(t) – to- H(t).

2.1.2 The Measurement Device

In this example, a level sensor has been chosen to measure H(t), whose
measured version is Hm(t). But liquid head might have been measured by using a

turbine flowmeter to measure flow, Fout. The proportionality law would then be

used to recover a measure of the level height. Thus, in some cases there are
alternative measurement routes to accessing the states of the plant, for in this case
level H(t) is a plant state variable.
The main points to note about the measurement device are:
(i) In a practical system, the measurement signal is not necessarily
in the same physical form as the physical variable measured,
although it will not be labelled in this manner. For example,
liquid head, H(t) could be in metres whereas Hm(t) is a

representation of this in mV.


(ii) Measurement devices are interface devices and suffer from
process and measurement noise and measurement bias effects.
(iii) Measurement devices also have dynamics. These may be very
fast but they still exist.
To encapsulate these points in a transfer function description, use:
Hm(s) = Gm(s) H(s)

with Gm(s) = [ Km / (τm s + 1) ]

where Km and τm are the measurement device gain and time constant.

2.1.3 Summary: Component Transfer Functions

2.1.3.1 Actuator – Motorised Valve

Fin(s) = GA(s) Uc(s)

GA(s) = [ KA / ( s²/ωn² + 2ζs/ωn + 1 ) ]

2.1.3.2 Tank Plant

H(s) = Gp(s) Fin(s)

Gp(s) = [ Kp / (τp s + 1) ]

2.1.3.3 Measurement Device

Hm(s) = Gm(s) H(s)

Gm(s) = [ Km / (τm s + 1) ]

In Fig. 2.1, the conceptual block diagram is laid out from process
diagram to a single unified block.

2.2 A Unified Actuator – Plant – Measurement


Process
The example is now used to show how a generic process block
representation arises. In Fig. 2.2 the blocks are shown to be combined and the
transfer function simplification effected to produce a simpler process model. It is
important to realise when working with process models that:

(i) fast dynamic modes may have been removed and


(ii) process nonlinearities may have been ignored.

Stage 1: Conceptual blocks - actuator (motorised valve), plant (tank system) and measurement device (level sensor) in series between Uc and Hm.
Stage 2: Identify the individual models - GA(s), Gp(s) and Gm(s) as summarised above.
Stage 3: Combine the models and simplify - the product KA Kp Km / [ (s²/ωn² + 2ζs/ωn + 1)(τp s + 1)(τm s + 1) ] reduces to the simpler process model KE / (τE s + 1).
Figure 2.2 Process Model Reduction
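The combination step of Fig. 2.2 is a simple polynomial multiplication. The sketch below carries it out and reads off the overall d.c. gain KE = KA Kp Km; the numerical time constants, and the remark that the slow tank lag dominates, are illustrative assumptions rather than values from the text.

```python
import numpy as np

# Sketch of the model combination step: multiply the actuator, plant and
# measurement transfer functions (polynomials in s, highest power first) and
# read off the overall d.c. gain K_E = K_A * K_p * K_m. Values are illustrative.

KA, zeta, wn = 1.0, 0.7, 20.0          # actuator: KA / (s^2/wn^2 + 2*zeta*s/wn + 1)
Kp, tau_p = 2.0, 50.0                  # tank:     Kp / (tau_p s + 1)
Km, tau_m = 1.0, 0.5                   # sensor:   Km / (tau_m s + 1)

num = np.polymul(np.polymul([KA], [Kp]), [Km])
den = np.polymul(np.polymul([1/wn**2, 2*zeta/wn, 1], [tau_p, 1]), [tau_m, 1])

K_E = num[-1] / den[-1]                # d.c. gain: ratio of the constant terms
print("overall gain K_E =", K_E)       # equals KA*Kp*Km = 2.0
print("denominator coefficients:", den)
# In this example the slow tank lag (tau_p = 50) dominates the fast actuator and
# sensor modes, so the reduced model K_E/(tau_E s + 1) with tau_E close to tau_p
# is often an adequate approximation.
```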



2.3 Process Disturbances

The basic process block diagram is now augmented with the possible disturbances that can arise. Not all processes may have all the disturbances active simultaneously, but they do provide a comprehensive design framework. For this discussion see Fig. 2.3, where typical examples are given, and Fig. 2.4, where the formalised block diagram is found.

2.3.1 Supply and Load Disturbances

2.3.1.1 Supply Disturbances

Supply or input disturbances arise from variations in the material or


quality properties of the inflow to the process. They are effectively an
uncontrolled change to the control command signal, Uc.

Example: In the level system, the inflowing liquid might suffer upstream
pressure variations causing inflow variations. These are supply or input
disturbances.

2.3.1.2 Load Disturbances

Load disturbances or output disturbances arise from uncontrolled


demands placed on a process plant making it difficult to achieve its control
objectives.

Example: Seepage or uncontrolled leakage from the tank system is a load or


output disturbance.

Typical sources: loss of upstream pressure causing FIN variations (supply disturbance); a leak or load change on FOUT (load disturbance); disturbed liquid surfaces causing process noise; data transmission noise and poor calibration bias on the measured level Hm.
Figure 2.3 Disturbance sources in a tank process.

Signal key: di = input disturbance; do = output disturbance; dn = process noise acting within the process; n = measurement noise; nbias = measurement bias; Uc = control input; Y = process output; Ym = measurement output.
Figure 2.4 Process disturbance signal.



2.3.2 Noise Disturbances

The measurement device operates across physical interfaces and its signals are subject to different types of stochastic (noise) and deterministic disturbances. The process output is also subject to the effect of noise exciting the system dynamics. A full categorisation follows.

2.3.2.1 Process Noise Disturbances

Possibly the most difficult to understand, process noise is active within the process and affects the process states. The result is a coloured noise disturbance added to the process output variables.

2.3.2.2 Measurement Noise Disturbances

Measurement noise arises from the measurement device itself and is a


result of noise at the physical sensor interface or in the transmission process. It is
usually represented as white noise of zero mean and specified variance added to
the output measured variables.

2.3.2.3 Measurement Bias Disturbances

Measurement bias arises from calibration errors, parasitic signals, and


poor sensor design and construction. It is a very important disturbance, which if
not eradicated, can defeat the purpose of closed loop control.

2.3.3 Summary Conclusion

It is important to identify in any control design which disturbances are


operating and select the control design to provide adequate disturbance rejection
capabilities. The checklist to be used is:
Supply disturbances

Load disturbances

Process noise
Measurement noise
Measurement device bias

The objective of a control design is thus to maintain certain system outputs at selected reference values despite the presence of some or all of the above disturbances. Thus control has to provide a trade-off between reference tracking by the outputs and disturbance rejection. Two classes of control scheme are discussed next: open loop control and closed loop control.

2.4 Open Loop Control

2.4.1 The Basic Principle

Open loop control is essentially steady state control by a fixed input control setting. The set-up is shown in the open loop control scheme figure at the end of this section.

Let the control input u1 cause the output to take value y1.

Let the dynamics of the process be stable so that when uc changes from u1 to a

new value u2, the process changes according to its stable dynamics.

Then by definition, the steady state gain is calculated as

K = (y2 - y1)/(u2 - u1)

Suppose a new steady-state output of yR is desired; then the relation

K = (yR - y1)/(uR - u1)

yields   uR = u1 + (1/K)(yR - y1)
Open loop control then requires u1 to be reset to uR, whereupon the new steady

state output yR will be achieved after the process transients have decayed.

2.4.2 The Problems with Open Loop Control

The problems are:


(i) The dynamics of the control action are those of the process, and
the process may be slow.
(ii) The process has to be open loop stable, thus open loop unstable
processes cannot be controlled.
(iii) Every time a change of output value is required, the calculation for the new control setting has to be evaluated and implemented. The implementation is likely to be via a manual input device.
(iv) Supply or load disturbances are not automatically compensated for. In fact, a measurement device can detect the presence of a low frequency supply/load disturbance as an offset from the expected output. Recalculation of the control can then be
used to correct for this error. However, if the supply/load
disturbance then disappears, the recalculated control input will
give the incorrect output value. This will cause recalculation,
and re-adjustment of the input control value. The continuous
cycle of adjustment becomes rapidly self-defeating. Automated
recalculation is needed. This is feedback control.
Open loop control scheme

2.5 The Feedback Control Loop


It is not difficult to realise that the process of recalculating the control input value in open loop control, to try to keep a process output at a desired value despite the presence of disturbances, is a crude manual feedback loop. Thus
closed loop control simply makes this recalculation automatic, but there are also
other benefits in that the process dynamics can be altered, and unstable systems
can be stabilised. In this section two topics are discussed; firstly a simple loop
analysis is given to show the methodology at work and secondly the full set of
general performance objectives are listed and defined.

2.5.1 A Simple Feedback Loop

It is instructive to perform a simple feedback loop analysis to understand what feedback does. Figures 2.5 and 2.6 cover this aspect. First, some components of the feedback loop are defined.
Figure 2.5 The feedback loop

The loop output decomposes into a reference tracking term yr = [ GK/(1 + GK) ] r and a disturbance rejection term ydo = [ 1/(1 + GK) ] do, with y = yr + ydo and the design constraint GK/(1 + GK) + 1/(1 + GK) = 1.
Figure 2.6 The feedback loop decomposed.



2.5.2 Some Definitions

The reference signal is the desired output signal trajectory. Usually it is a


constant of a particular value, or a ramp signal, or sometimes a parabolic signal.
The objective is that the process output should follow the reference signal. The
letter r, or R usually denotes reference signals.
The Error Signal
As a measure of how close the output is following the reference signal,
the error signal is computed as:
Error = reference – output, all at time t
e(t) = r(t) – y(t)
The Controller Unit
The controller unit can be regarded as an automatic re-computational unit designed to re-calculate the control input automatically according to the size of the error signal. The controller unit can be analogue or digital. It can even be mechanical or pneumatic. In recent years the controller unit has typically been a software algorithm embedded in a larger process control system.

2.5.3 The Feedback Loop Analysis

The feedback loop is shown in Fig. 2.5. The closed loop analysis is found by starting at the output and writing down an algebraic relationship around the feedback loop.
Viz.   y = do + G( K(r - y) )
         = do + G( Kr - Ky )
         = do + GKr - GKy

Rearranging:

(1 + GK) y = do + GKr

thus   y = [ 1/(1 + GK) ] do + [ GK/(1 + GK) ] r

and    y = ydo + yr

This closed loop equation shows the key features of what feedback
control design tries to achieve:
(i) The closed loop system must be stable, so that all the responses
are well behaved.
(ii) The reference tracking term is

yr = [ GK/(1 + GK) ] r = [ 1/(1 + (1/GK)) ] r
This term tries to behave like [1] so that the output y follows the
reference signal as closely as possible.
(iii) The disturbance rejection term is

ydo = [ 1/(1 + GK) ] do = [ (1/GK)/(1 + (1/GK)) ] do

This term tries to behave like [0] so that the disturbance signal is not
present in the output signal to move it away from the desired reference value of r.

Remark: It is not difficult to see that if K is large then (1/GK) will behave like 0
and the above good reference tracking and good disturbance rejection will occur.
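The effect of increasing K can be checked numerically. The sketch below evaluates the closed loop final value formula derived above and confirms it with a simple simulation of the loop of Fig. 2.5; the first order process, the constant disturbance and the gains are illustrative assumptions.

```python
import numpy as np

# Sketch of the loop of Fig. 2.5: first order process G(s) = Kg/(tau s + 1),
# proportional controller K, constant output disturbance do. Values illustrative.

def closed_loop_final_value(K, Kg=2.0, r=1.0, do=0.5):
    """Steady-state output from y = do/(1+GK) + GK r/(1+GK) with G(0) = Kg."""
    GK = Kg * K
    return do / (1 + GK) + GK * r / (1 + GK)

def simulate(K, Kg=2.0, tau=10.0, r=1.0, do=0.5, t_end=100.0, dt=0.01):
    """Euler simulation of the same loop; x is the process state, y = x + do."""
    x, y_last = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        y_last = x + do                  # output disturbance added at the output
        u = K * (r - y_last)             # proportional feedback
        x += dt * (-x + Kg * u) / tau    # first order process dynamics
    return y_last

for K in (1.0, 5.0, 50.0):
    print(K, closed_loop_final_value(K), simulate(K))
# Larger K drives the output closer to r = 1 and shrinks the disturbance effect,
# exactly as the (1/GK) argument above suggests.
```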

2.5.4 Feedback Control Objectives : A Full List

Reference Tracking: Select a controller unit design so that the system output follows a given reference signal.
Regulation Control: If the reference signal is zero then the control problem is
called a regulator problem. This is because the controller simply has to have the
output follow zero whilst rejecting all disturbances.
Disturbance Rejection: This is the elimination of system effects from the process
output caused by the presence of supply/load (or input/output) disturbances in the
process.
Measurement Noise Rejection: This is the ability of the controller unit to prevent
measurement noise from passing round the feedback loop to affect the controller
signal, Uc. In this respect the controller acts as a noise filter.

Measurement Bias Elimination: The controller unit cannot eliminate the effect
of measurement bias. The presence of measurement bias has to be detected
independently and removed physically.

2.6 On-Off Control


The main attractiveness of on-off control is that it is relay based with no
apparent tuning difficulties and suffices for simple processes which require only a
crude level of control accuracy.

2.6.1 Basic Principles

Two forms of on-off control are considered : a simple relay controller,


and a threshold on-off controller.
(i) A simple relay on-off controller.
This is a feedback controller which is a simple switch. It uses
the difference between the desired output value or reference value, r(t)
and the actual output as a trigger for control action to begin or stop. If
this error, e(t) = r(t) - y(t) is negative then the output exceeds the desired
output value and the control input should be switched off. If the error e(t)
= r(t) - y(t) is positive then control action is switched on to drive the
actual output closer to the desired output. The algorithm is given as:
e(t) = r(t) - y(t)
if e(t) < 0 then control off
if e(t) ≥ 0 then control on
This is a crude way of automating open loop control, since "control on" applies a constant open loop control action until the error changes sign. This method is shown in Fig. 2.7.
Fig. 2.7 Simple On-Off control.

(ii) A threshold on-off controller


To reduce the cyclic behaviour of the simple relay on-off controller, exploit the open loop dynamics of the process and satisfy a low requirement for output control accuracy, a threshold controller is used. The mechanism is shown in Fig. 2.8.
In the threshold controller it is desired to keep the system output within a desired range of values, usually r(t) ± d. The strategy is as follows:
(i) If the actual output y(t) exceeds r(t) + d, then the control is switched off.
(ii) The control remains off until the actual output y(t) falls below r(t) - d, and then it is switched on.
(iii) If the actual output is below r(t) - d, then the control is switched on until the output exceeds r(t) + d. When y(t) > r(t) + d the control is switched off and remains off until y(t) < r(t) - d.
This is shown in Fig. 2.8. The algorithm may be given as:
e(t) = r(t) - y(t)
If e(t) > +d, then control ON until e(t) < -d, then control OFF
If e(t) < -d, then control OFF until e(t) > +d, then control ON
Although the objective is to reduce the cyclic problem which occurs with
simple on-off control, a tuning problem has now appeared since the threshold d
has to be selected for acceptable control accuracy with reduced actuator cycling.
Fig. 2.8 Threshold On-Off control.
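The accuracy/cycling trade-off in the choice of d can be illustrated numerically. The following sketch applies the threshold algorithm above to a first order process and counts actuator switches; the process parameters, reference and thresholds are illustrative assumptions.

```python
# Sketch of the threshold on-off algorithm above acting on a first order process.
# Process parameters, reference and threshold values are illustrative assumptions.

def threshold_on_off(r=5.0, d=0.5, Kg=10.0, tau=20.0, t_end=200.0, dt=0.01):
    y, u_on, switches = 0.0, True, 0
    for _ in range(int(t_end / dt)):
        e = r - y
        if u_on and e < -d:              # output above r + d: switch off
            u_on, switches = False, switches + 1
        elif (not u_on) and e > d:       # output below r - d: switch on
            u_on, switches = True, switches + 1
        u = 1.0 if u_on else 0.0
        y += dt * (-y + Kg * u) / tau    # first order process response
    return y, switches

for d in (0.2, 0.5, 1.0):
    print(d, threshold_on_off(d=d))      # smaller d: tighter control, more switching
```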



2.6.2 Performance Assessment

Dissolved oxygen control has been implemented using on-off control; plant data obtained by Stephenson (1985) shows the resulting behaviour. The cyclic nature of the measured variable will correspond to a similar on-off behaviour of the DO aerator actuator; consequently the potential for excess actuator action and the resulting fatigue is self-evident. A short list of advantages and disadvantages of ON-OFF control follows:
(i) Simple inexpensive relay control technology needed.
(ii) Relatively easy to commission and maintain
(iii) Smooth control is impossible to achieve. Cyclic variation in the controlled variable must be acceptable as control performance.
(iv) Excessive actuator wear is highly likely, but for simple processes with simple on-off actuation the approach can be effective.
(v) On-off control equates to simple automation of open loop control, thus the disturbance rejection properties are limited and the loop is dependent on the open loop process dynamics.

2.7 Three Term Controllers


The three-term controller or the PID is widespread in industrial and
process control. It has been estimated that in some industrial applications more
than 95% of the loops in a process plant will use PID control. For example in a
paper mill, there may be 2000 control loops so this equates to over 1,900 PID
loops and the remaining 100 loops being special application loops. Hence a key
problem in process control is to have a reliable, inexpensive method for tuning
PID loops. It has been this particular problem which has seen dramatic advances
over the last fifteen years with the advent of the AUTOTUNE technology. In this
section, a brief review of PID technology will be followed by some basic
properties of the controller. The industrial technology aspects are re-visited in the
final section where PID tuning is reviewed.

2.7.1 PID Controller Technology

The process control engineer is likely to meet PID controllers in three


different guises:
(i) Hard Wired PID Controller. The hard wired form is an analogue circuit realisation of the parallel PID controller. This form is not seen often today, but it illustrates why PID control was considered difficult: the design requires that the control coefficients Kp, Ki and Kd be translated into appropriate resistor and capacitor values and vice-versa.


(ii) Process Controller Unit. Commercial process controller units are available from a wide range of vendors. The important feature is that PID control is available and that the tuning can be obtained automatically via the AUTOTUNE facility. It is also useful to note the range of additional features available on the more advanced units, including gain scheduling and adaptation.
(iii) A SCADA PID Facility. Most SCADA systems will have an engineer's interface which allows access to the PID coefficients; a typical display shows the coefficients of the PID and of a derivative filter. Some SCADA systems have the applications software to allow online autotuning by the plant engineer; otherwise the problem of PID tuning is present once more.

2.7.2 Basic PID Control Properties

The textbook PID is a forward-path compensator with three parallel paths. The paths are decoupled and independent, so that the representation is quite straightforward:
Time Domain

uc(t) = Kp e(t) + KI ∫₀ᵗ e(τ) dτ + KD de(t)/dt
          (P)         (I)             (D)

Laplace Domain

Uc(s) = Kp E(s) + KI (1/s) E(s) + KD s E(s)
      = [ Kp + KI/s + KD s ] E(s)
      = GPID(s) E(s)

where the error signal is  e(t) = r(t) - y(t),   E(s) = R(s) - Y(s)

and the PID controller transfer function may be defined as

GPID(s) = [ Kp + KI/s + KD s ]
The main intuitive features of PID, which can be justified by formal analysis, are
as follows:

Proportional Term ~ P
(a) Increasing K p speeds up the system response.

(b) Increasing K p decreases any steady state offset if one exists.

(c) Increasing Kp too much may saturate actuators.

(d) The dynamical order of the closed loop system is the same as that of
the open loop system.

Integral Term ~ I
(a) Integral term will almost exclusively be used in conjunction with P
to give P I control.
(b) Integral control eliminates steady state offsets; this is a guaranteed
property.
(c) Measurement bias must not exist, otherwise it destroys the use of I control to remove offsets.
(d) PI control increases the dynamic order of the closed loop system
thereby introduces the potential for an unstable closed loop design.
Care needed when tuning.
(e) PI control can cause excessive overshoot in the system response.
Care needed when tuning.

Derivative Term ~ D
(a) The derivative term will always be used in a structure, which
includes P to give PD control at least.
(b) The derivative term can be used to reduce response peaks and to affect the equivalent damping of a system. Rate feedback in motor control is a special form of PD control.
(c) Derivative control has no effect on steady state errors.
(d) Pure derivative control will amplify high frequency noise in the measurement signal, hence it is usually implemented in a filtered form.
(e) Derivative control does not affect the dynamic order of the closed loop system.
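As a minimal illustration of the decoupled form and of the filtered derivative mentioned in item (d), the sketch below implements a discrete-time PID in Python and applies it to a simple first order process. The gains, filter constant and process parameters are illustrative assumptions, not recommended tunings.

```python
# Minimal discrete-time PID sketch in the decoupled form u = Kp*e + Ki*int(e)
# + Kd*de/dt, with a first order filter on the derivative term as noted above.
# Gains, filter constant and the test process are illustrative assumptions.

class PID:
    def __init__(self, Kp, Ki, Kd, dt, tau_f=0.1):
        self.Kp, self.Ki, self.Kd, self.dt, self.tau_f = Kp, Ki, Kd, dt, tau_f
        self.integral, self.d_filt, self.e_prev = 0.0, 0.0, 0.0

    def update(self, r, y):
        e = r - y
        self.integral += e * self.dt
        raw_d = (e - self.e_prev) / self.dt
        # first order filter on the derivative term
        self.d_filt += self.dt / (self.tau_f + self.dt) * (raw_d - self.d_filt)
        self.e_prev = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * self.d_filt

# Example: PI-dominant control of a first order process Kg/(tau s + 1).
dt, y = 0.01, 0.0
pid = PID(Kp=2.0, Ki=0.5, Kd=0.1, dt=dt)
for _ in range(int(50.0 / dt)):
    u = pid.update(r=1.0, y=y)
    y += dt * (-y + 2.0 * u) / 5.0       # Kg = 2, tau = 5
print(round(y, 3))                       # integral action removes the offset: y -> 1.0
```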

2.7.3 Industrial PID Controller Features

In this section a number of features commonly found with industrial


PID controller technology are reviewed.

2.7.3.1 PID Controller Coefficients

Industrial PID does not follow the standard textbook form but uses historically based conventions of which it is necessary to be aware. Firstly, the easy decoupled form of PID introduced above is re-parameterised to give the interactive PID coefficient structure. Recall the decoupled form as:

uc(t) = Kp e(t) + KI ∫₀ᵗ e(τ) dτ + KD de(t)/dt

then bring the proportional gain outside this expression as:

uc(t) = Kp [ e(t) + (KI/Kp) ∫₀ᵗ e(τ) dτ + (KD/Kp) de(t)/dt ]

Then introduce two time constants:

Ti = Kp/KI = integral term time constant
Td = KD/Kp = derivative term time constant

to give the industrial PID controller form as:

uc(t) = Kp [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt ]

or, in the Laplace transform variable s:

Uc(s) = Kp [ 1 + 1/(Ti s) + Td s ] E(s)

This is called the interactive form because changing Kp changes the

contribution of all three terms of the controller, whilst changing Ti, and Td can be

used to tune the I and D terms separately.


The important message is to ascertain which PID controller structure is being used in industrial hardware and software; the list of options available from well-known controls companies is quite extensive.

2.7.3.2 Implementation 1 : The Derivative Term

The property that pure derivative action amplifies high frequency noise has led to the use of various filter implementations and approximations for the D term; different industrial solutions to this problem exist.

2.7.3.3 Implementation 2 : The PID Controller Structure

Industrial products show many different varieties of PID controller structure. These use a mix of series and parallel forms, and different signals within these structures.
Toshiba, in their advanced process controllers, used the so-called two-degrees-of-freedom controller structure. This enables separate tuning and optimisation of reference tracking and disturbance rejection. Fortunately, Toshiba provide rule-based tuning procedures and AUTOTUNE facilities for this structure.

2.7.3.4 Implementation 3 : Algorithm Variations

PID controllers can suffer from two different implementational


problems, one is the class of kick effects and the other is excessive overshoot
caused by the integral action going on for too long.
The kick effects, typically proportional kick or derivative kick, appear as spikes on the controller output. Such a phenomenon can cause problems with the actuator circuitry, for example. These effects are removed by carefully structuring the way the P and D terms are implemented. For example, instead of D acting on e(t) = r(t) - y(t), it might only act on y(t). Indeed, some of the industrial structural variants are designed to solve these types of problems.
Integral windup, which causes excessive overshoots, is cured by the use of anti-integral windup circuits, and these are usually installed with the PID controller software.

2.7.3.5 Implementation 4 : PID Variations

The literature shows that many interesting variations of the standard


PID algorithm have been investigated. In the Toshiba process controller, PID of
error-squared is offered. This has the useful property of speedily reacting to
significant changes in the error between reference and output.

2.7.4 PID Controller Tuning

The seminal contributions to PID tuning over the last fifty years or
so were (i) the Ziegler-Nichols 1942 paper which gave two procedures: the
process reaction curve and the sustained oscillation methods and (ii) the Astrom
and Hagglund patent for the relay experiment based PID tuning procedure. The
latter paved the way for the automatic tuning culture of modern process controller
technology. Just three aspects of this extensive field are reviewed in this section.

2.7.5 Process Reaction Curve Method

The process reaction curve method due to Ziegler and Nichols (1942) involves an assumption that the system step response is of delayed first order shape. From this step response, delay and slope parameters, K and Z respectively, are calculated. These are used in empirical rules to yield the PID coefficients.

2.7.6 Sustained Oscillation PID Tuning Method

This method of sustained oscillation is the second method devised by Ziegler and Nichols (1942). It comprises an experimental procedure and a rule base for the calculation of the PID controller coefficients. A brief outline follows:
(i) The integral and derivative terms are removed from the controller, so
that the controller acts as P only.
(ii) The increase in gain from Kp = 1 to the point at which a sustained

oscillation is observed in the output variable is recorded as Ku.

(iii) A recording of the sustained oscillation in the output is required.


From this the period of the oscillation is measured. This period is
the ultimate period, Pu.

(iv) The data points Ku, and Pu are then used with the Ziegler-Nichols

rules to give the required PID controller coefficients.


The main problem with the sustained oscillation experiment is that
the proportional control is used to take the closed loop system to the verge of
instability. This is a procedure, which is both time-consuming and possibly
dangerous, nonetheless this remained one of the favoured tuning methods in
process control for a long time.

2.7.7 Autotune PID Control

Automatic tuning for PID control developed from (i) an industrial demand for an improvement over the Ziegler-Nichols method and (ii) the use of microprocessor technology in the 1980s to construct new process controller units.
Several different automatic tuning methods were pursued based on
pattern recognition and other ideas. However, the most elegant of the autotune methods was a re-invention of the method of sustained oscillation, using the simple on-off relay controller to set up the conditions of a stable limit cycle at the -180° phase shift point. With the stable oscillation point found, two actions followed:
(i) the amplitude, a, of the signal to the relay was measured, and the
height of the relay, M was used in a simple formula to give the
ultimate gain:
Ku = 4M/(πa)

(ii) A peak-to-peak analysis was performed to obtain the ultimate


period, Pu.

(iii) The rule-base associated with the Ziegler-Nichols can then be used
as before to determine the PID controller coefficients.
Of course, the real point about autotune is that all this is available at the press of the AUTOTUNE button, and no real knowledge of the theory is required. However, it is useful to know (i) which rule base is being used, since this gives an idea of the likely performance achievable, and (ii) whether the system is appropriate for control design by autotune, since there are some systems for which autotune or PID control is not appropriate. On the whole, this technology has been extremely successful and is well accepted by industry.
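The calculation hidden behind the AUTOTUNE button is compact. The sketch below performs it from a recorded limit cycle: Ku from Ku = 4M/(πa), Pu from the zero crossings, and then the commonly quoted Ziegler-Nichols PID settings (Kp = 0.6Ku, Ti = Pu/2, Td = Pu/8, stated here as the usual rule values since the text does not reproduce the rule table). The recorded limit cycle is synthetic test data, not plant data.

```python
import numpy as np

# Sketch of the relay autotune calculation: ultimate gain from Ku = 4M/(pi*a),
# ultimate period from the limit cycle zero crossings, then the commonly quoted
# Ziegler-Nichols PID rules. The limit cycle below is synthetic test data.

def autotune_from_limit_cycle(t, e, M):
    a = 0.5 * (e.max() - e.min())                  # limit cycle amplitude
    Ku = 4.0 * M / (np.pi * a)                     # describing function estimate
    crossings = t[np.where(np.diff(np.sign(e)) > 0)[0]]   # upward zero crossings
    Pu = float(np.mean(np.diff(crossings)))        # ultimate period
    Kp, Ti, Td = 0.6 * Ku, Pu / 2.0, Pu / 8.0      # assumed Ziegler-Nichols rules
    return Ku, Pu, Kp, Ti, Td

t = np.linspace(0, 60, 6001)
e = 0.8 * np.sin(2 * np.pi * t / 12.0)             # amplitude a = 0.8, period Pu = 12
print(autotune_from_limit_cycle(t, e, M=1.0))      # Ku ~ 1.59, Pu ~ 12
```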

2.7.8 PID Control Performance

Stephenson (1985) gave the traces for PID control of dissolved oxygen. This was compared with the on-off control performance discussed in Section 2.6.2. The improved performance is self-evident; the cycling behaviour has disappeared and, as time progresses, the airflow increases as more and more BOD is utilised.

2.8 Cascade Control Loops


Cascade or nested loops are common in situations where a secondary process is supplying a primary process in a sequential manner. If an intermediate measurement is available then this can be used to attenuate the effect of supply disturbances before they reach the primary process. This section opens with a simple example and then presents the basic theory for such nested loops.

2.8.1 Cascade Control Example

Olsson (1985) gave an example of cascade control used in a DO control loop, in which an inner airflow loop is nested within an outer DO loop. The airflow supply is subject to disturbances and the inner loop seeks to mitigate these before they affect the aeration basin. The outer loop for control of DO is subject to load disturbances caused by variations in inflowing substrate quality, by the quality of the recycled sludge and by external environmental conditions. The outer loop has a target of load disturbance rejection.

2.8.2 General Cascade Control Principles

Nested loops are quite a common feature of industrial process control.


The key principles are:
(i) The inner control loop moves swiftly to correct for inner loop
supply disturbances, thereby reducing the effect of these
disturbances on the outer process, G2.

(ii) The outer loop controller is concerned with correcting for load
demands on the outer process and ensuring that y2 remains at

the desired reference level, r.


(iii) The controller K1 can be used to attenuate measurement noise

associated with the inner variable, y1.

(iv) If K2 has integral action in the controller, then measurement bias

from the inner loop measuring device can also be rejected.

2.8.3 Cascade Control Loop Tuning

There are several aspects to this:


(i) Structure: the type of controllers to be used for K1, and K2.

The outer controller has to supply reference tracking


performance, hence integral action is invariably required; thus
K2 is usually PI.

(ii) The inner controller has to be fast, and reject the supply
disturbance, thus it is either a P or a PI controller. The I term is
not always necessary in the inner loop because steady state error
correction can be achieved in the outer loop. Common
structures are PI/P and PI/PI.
(iii) Tuning: tuning is always a two-step procedure. The outer loop is switched out, and the inner loop is tuned using the Ziegler-Nichols rules. The inner loop is then switched in, and the outer controller is tuned, again using the Ziegler-Nichols rules.
A recent variation on this procedure was published by Hang et al (1994); in their version of the cascade tuning procedure two autotune relay experiments are performed.

2.9 Ratio Control


Ratio control is a special control structure designed to keep two flows at a constant ratio to one another. In this structure:
(i) Stream A is uncontrolled but measurable
(ii) Stream B is both controlled and measurable.
The objective is that the stream B flow should satisfy:
FB = (1/Ref) FA     or     Ref = FA/FB

The solution is to measure both streams and calculate the current value for the
ratio as:
m = FA(measured) / FB(measured)

The measured value is used to create a ratio error, e:

e = Ref - m

and this is fed to a PID controller which adjusts the flow of Stream B accordingly.
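The calculation can be sketched very simply. In the example below an integral-type trim stands in for the PID controller; the gain, flows and desired ratio are illustrative assumptions, and the negative controller sign reflects the fact that raising FB lowers the measured ratio.

```python
# Sketch of the ratio control calculation described above. Stream A is measured
# but not controlled; the measured ratio m = FA/FB is compared with the desired
# ratio Ref and the error drives an integral-type trim of the stream B flow.
# Gains and flow values are illustrative assumptions.

def measured_ratio(FA, FB):
    return FA / FB

Ref = 3.0                       # desired ratio FA/FB
FA, FB = 120.0, 25.0            # FB starts too low (ratio = 4.8 instead of 3)
Ki, dt = 10.0, 1.0              # integral trim gain and sample time

for k in range(30):
    e = Ref - measured_ratio(FA, FB)     # ratio error as defined in the text
    FB += -Ki * e * dt                   # negative sign: raising FB lowers m
print(round(FB, 2), round(measured_ratio(FA, FB), 3))   # FB -> 40, ratio -> 3
```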

2.10 Feedforward Control

2.10.1 Advantages of feedforward control

Although feedback control is used widely, it has disadvantages with processes that are known to suffer from certain disturbances:
- the control does not provide a corrective action until after the disturbance has produced a change in the process output
- there is no compensation for known or measurable disturbances
Significant advantages can be gained by including a feedforward component. This effectively measures important load or disturbance variables and produces a corrective control signal before the process has been upset. However, to implement a feedforward controller, the following must be available:
- on-line measurements of the load disturbances
- a form of process model
Knowledge of the process model (steady state conditions or dynamic behaviour) is required to develop the feedforward control signal.

2.10.2 Feedforward/feedback control structure

Figure 3.? shows the structure of a feedforward/feedback control system; the


feedforward control signal is added to the control signal from the feedback
controller to achieve a combined control signal which alters the input to the
process. The feedforward signal can be determined by a steady-state calculation or
by producing a dynamic controller.
Figure 3.? Feedback control with feedforward signal

2.10.3 Example in the waste water industry

Figure 3.? shows the process control diagram for the activated sludge process.
The main objective is to reduce the BOD content to zero by the end of the time the
sludge spends in the aerator tanks. The recycle flow would then only contain the
liquid and biomass whose flow into the plant is controlled. The disturbances
acting on the process include the flow and concentration of substrate within the
flow.
The diagram shows the DO feedback control loop, which controls the aerators
from the DO profile at measured points in the activated sludge tank. The recycle
flow is measured using flow transmitter No 1 and passed to the feedback
controller FC1 whose output controls the valve on the recycle flow.
If the input flow could be measured (using flow transmitter 2 in the diagram) then this signal could be used by a feedforward controller (FC2) to provide a combined feedforward/feedback flow signal to the recycle flow valve, thereby giving improved rejection of disturbances in the influent flow. Moreover, if a measurement of substrate were also available then this could also be used to provide a feedforward signal (FC3) to the combined recycle flow controller.
Figure 3.? Feedforward-feedback control of activated sludge process


Process equipment often includes the possibility of a feedforward signal within the controller. For example, the Honeywell Series 7020 DO Analyser/Controller can be used in aeration control and provides full PID control plus the possibility of a feedforward process flow input to enable immediate corrective action for flowrate variations.
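The way a static feedforward term combines with a feedback trim can be sketched in a few lines. The example below uses steady-state gains only, with a purely static process model; all gains, the disturbance value and the PI trim settings are illustrative assumptions rather than values from the text.

```python
# Sketch of the combined feedforward/feedback idea described above: a static
# feedforward term computed from steady-state gains cancels the measured
# disturbance, while a feedback PI trim removes what is left. All gains and the
# simple static process y = Kp*u + Kd*d are illustrative assumptions.

Kp_proc, Kd_proc = 2.0, 1.5        # process and disturbance steady-state gains
r = 1.0                            # setpoint for the controlled output

def process(u, d):
    return Kp_proc * u + Kd_proc * d

def feedforward(d_measured):
    return -(Kd_proc / Kp_proc) * d_measured     # cancels the disturbance effect

integral, Kc, Ki, dt = 0.0, 0.2, 0.1, 1.0        # feedback PI trim

d, u_fb, y = 0.4, 0.0, 0.0
for k in range(200):
    u = u_fb + feedforward(d)                    # combined control signal
    y = process(u, d)
    e = r - y
    integral += e * dt
    u_fb = Kc * e + Ki * integral
print(round(y, 3))                               # y -> 1.0 despite the disturbance
```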

2.11 Inferential Control


Inferential control is used when the controlled output of the process cannot be
measured. However, if we can estimate or infer the output signal from other
measurements, then we can use both the estimate and the other measurements
within a feedback loop.
Consider the following example (Figure 3.?). The controlled output, y, is not measured, but another measurement, z, is available. The process transfer functions relating the input signal, u, to the outputs y and z are known, as are the transfer functions relating the unmeasured disturbance, d, to the outputs:
y = Gy u + Gdy d
z = Gz u + Gdz d
We would like to use the knowledge we have of the measured output, z, to estimate the unmeasured output, y. This can be done by estimating the disturbance, d, from the knowledge of the process (contained in the transfer functions Gy, Gz, Gdy, Gdz) and the measured input and output signals u and z:
d = (1/Gdz) z - (Gz/Gdz) u
Removing the dependence on the unmeasured disturbance from the equations for
y and z gives:
y = (Gy - (Gdy/Gdz) Gz) u + (Gdy/Gdz) z
This relates the controlled output to the measured values of u and z. This can be
used in the feedback control scheme shown in Figure 3.?. Obviously the success
of the scheme depends on the availability and knowledge of the process models.
[Block diagram: the controller input signal u drives Gy and Gz, and the unmeasured disturbance d drives Gdy and Gdz; their sums form the unmeasured output y (to be controlled) and the measured output z (which can be used to estimate/infer other outputs).]
Figure 3.? Process model



[Block diagram: the set point ySP is compared with the inferred output y and fed to the controller K, whose output u drives the process (with disturbance d and outputs y and z); an inferential calculation block forms the inferred y from u and the measured z using Gy, Gz, Gdy and Gdz.]
Figure 3.? Inferential control system
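
The inferential calculation can be prototyped directly in MATLAB. The four transfer functions below are illustrative assumptions rather than plant models from the text; the sketch simply checks that the output inferred from u and z reproduces the "true" unmeasured output.

% Assumed example models, for illustration only
Gy  = tf(2, [20 1]);    Gz  = tf(1, [5 1]);
Gdy = tf(1, [10 1]);    Gdz = tf(0.5, [5 1]);

Gzy = Gdy/Gdz;                % term multiplying the measured output z
Guy = minreal(Gy - Gzy*Gz);   % term multiplying the known input u

% Compare the inference with the true y for an assumed input/disturbance pair
t = (0:0.1:100)';  u = ones(size(t));  d = 0.5*ones(size(t));
y_true = lsim(Gy, u, t) + lsim(Gdy, d, t);
z_meas = lsim(Gz, u, t) + lsim(Gdz, d, t);
y_inf  = lsim(Guy, u, t) + lsim(Gzy, z_meas, t);
plot(t, y_true, t, y_inf, '--'), legend('true y', 'inferred y')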

2.11.1 Inferential Control in the Wastewater Industry

As noted above, inferential control is used when the controlled output of the process cannot be measured directly but can be estimated or inferred from other measurements, which are then used within a feedback loop. Examples of where an inferential measurement could be used in wastewater treatment are:
(i) De-nitrification: redox potential measurements can be used for control (Briggs et al, 1990).
(ii) Phosphate concentration: turbidity measurements of the final clarifier effluent provide an indicator of the suspended solids and can therefore be used for control of phosphate concentration and COD (Kayser, 1990).

2.12 Advanced Control Features: Methods of Controller Adaptation

Advanced control has an extraordinary wealth of techniques to offer industrial process control. However, the uptake of new ideas tends to be slow and rather conservative. Controller adaptation is one group of ideas which has made the transfer, in two different guises: gain scheduling and on-line adaptation. Many process controller units will offer either or both of these features.

Objective of Control Adaptation: Industrial plant often has to operate over a wide range of set-up, load and external environmental conditions. The dynamics of the system usually change to reflect these different operating conditions; hence, to obtain optimum performance, the controller should be retuned accordingly. There are two reasonably well accepted methods to automate this: gain scheduling and on-line self-tuning. The former is open loop adaptation whilst the latter is closed loop adaptation.

2.12.1 Gain Scheduling

The method of gain scheduling has several components:


(i) Parameterising the Controller
(a) A Gain Schedule
For example, use the controller in the form:

Ki(s) = Ki (τ1 s + 1) / ((τ2 s + 1)(τ3 s + 1))

Thus for the partition of operating conditions, the schedule will retain
the common controller dynamics but schedule the gain:
Operating Condition Index, i:    1     2     3     4
Ki:                              K1    K2    K3    K4

(b) A Controller Schedule


For example, use the controller in the form:

Ki(s) = Ki (τ1i s + 1) / ((τ2i s + 1)(τ3i s + 1))
In this case, both the gain and the controller dynamics are
scheduled. The result will be an enlarged table:
Operating Condition Index, i:    1     2     3     4
Ki:                              K1    K2    K3    K4
τ1i:                             τ11   τ12   τ13   τ14
τ2i:                             τ21   τ22   τ23   τ24
τ3i:                             τ31   τ32   τ33   τ34

(ii) A Trigger Mechanism


An integral part of the gain schedule is a trigger or changeover
mechanism, and there are two methods for this.
(a) Switch-over by measurable process variables
In this method, a single or groups of measurable variables are
used to partition the operating conditions and provide the
changeover logic. For example, influent flow might be used, or
fluid temperature might be used, viz.
Regime 1: 0 ≤ Fin < 5 m³/h          Regime 2: 5 m³/h ≤ Fin < 10 m³/h

Two optimum controllers K1(s) and K2(s) would be selected,

one for each regime and as soon as the flow changeover

occurred at 5m3/h, the appropriate controller would be switched


in. The changeover would be automatic.
(b) Switchover By Operating Condition or Scenario.
In this method, a qualitative operating condition or an operating
condition scenario would be used. For example, a process unit
might operate under the following general scenarios:
Start-Up Steady Excess Shut-Down
Operation Load
K1(s) K2(s) K3(s) K4(s)

Thus each operating condition would have a controller


associated with it. Switch-over could be manual or automatic.
(iii) Selecting the Controller
The wide availability and ease of use of AUTOTUNE facilities in process controllers has led to the widespread use of gain scheduling techniques. It should be noted that gain scheduling is an open loop adaptation method. Its effectiveness depends on the process operating dynamics being easily classified and categorised into useful sub-operating regimes. The method is shown in Fig. 2.9; a minimal code sketch of a flow-based gain schedule is given after the figure.

Figure 2.9
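
A minimal sketch of the flow-triggered schedule of method (a) is given below as a MATLAB function; the regime boundaries and gain values are illustrative assumptions, not recommended settings.

function K = scheduledGain(Fin)
% Return the scheduled controller gain for the measured influent flow Fin (m^3/h).
% The regime boundaries and gains are assumed values for illustration.
    if Fin < 5          % Regime 1: 0 <= Fin < 5 m^3/h
        K = 2.0;        % K1, tuned for the low-flow regime
    elseif Fin < 10     % Regime 2: 5 m^3/h <= Fin < 10 m^3/h
        K = 1.2;        % K2, tuned for the high-flow regime
    else
        K = 1.2;        % beyond the tabulated range, retain the high-flow gain
    end
end

In use, the controller gain would be refreshed once per scan, e.g. Kp = scheduledGain(flowMeasurement), before the PI calculation is executed.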

2.12.2 On-Line Self-Tuning Control

Self-tuning control methods improve on gain scheduling methods by


being closed loop adaptation. In this case, the routines try to track on-line the
dynamic changes of the process. This information is used to re-design the
controller parameters and up-date the control action accordingly. Fig. 2.10 shows
the self-tuning control architecture. The main components are:
(i) Recursive Identifier. The process is given a fixed structure
model. The identifier block implements a recursive
identification routine to identify the parameters of the model.
Thus process measurements are used to track the dynamics of
the process as they change over time.
(ii) Controller Design Block. The identified model parameters are
transferred to the controller redesign algorithm. Here they are
used with design specification data to produce new controller
parameters. The controller parameters are then used in the
control block.
(iii) Jacketing Software. The self-tuner is a more complicated methodology than gain scheduling. Some process units offer this type of algorithm as a standard feature. Effective jacketing software to manage the identification routine, the control design algorithm and the controller update is critical to the success of this type of algorithm; a small sketch of the recursive identification step is given after Figure 2.10.

Figure 2.10
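
As a sketch of the recursive identifier of item (i), the function below performs one recursive least-squares (RLS) update for an assumed first-order ARX model y(k) = -a y(k-1) + b u(k-1) + e(k). It is a generic textbook recursion, not the algorithm of any particular commercial self-tuner.

function [theta, P] = rlsUpdate(theta, P, yk, phi, lam)
% One recursive least-squares step with forgetting factor lam (e.g. 0.98).
% theta = [a; b] current parameter estimates, P = covariance matrix,
% phi = [-y(k-1); u(k-1)] regressor, yk = latest output sample.
    K     = (P*phi) / (lam + phi'*P*phi);   % update gain
    theta = theta + K*(yk - phi'*theta);    % correct the parameter estimates
    P     = (P - K*(phi'*P)) / lam;         % update the covariance matrix
end

The controller design block of item (ii) then maps the latest estimates to new controller parameters; for example, with the proportional law u(k) = Kp(r - y(k-1)) the closed-loop pole of the assumed model sits at -(a + b Kp), so choosing Kp = -(p_des + a)/b places it at a desired location p_des.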

2.13 Further Readings


Stephenson, J.P., 1985, Practices in activated sludge process control, In
Comprehensive Biotechnology, Ed. M. Moo-Young, Vol. 4, Chapter 4, 1131-
1144.
Ziegler, J.G. and N.B. Nichols, 1942, Optimum settings for automatic controllers,
Trans ASME, Vol. 42, 759-768.
Astrom, K.J., and T. Hagglund, 1985, US Patent No. 4549 123, Method and an
apparatus in tuning a PID regulator.
Olsson, G., 1992, Control of Wastewater treatment systems, ISA Transactions,
Vol. 31, No.1, 87-96.
Hang, C.C., A.P. Loh, and V.U. Vasnani, Relay feedback auto-tuning of cascade controllers, IEEE Trans. CST, Vol. 2, No. 1, 42-45.
Briggs, R. and K.T.V. Grattan, (1990), Instrumentation and control in the UK water industry: A review, Proc 5th IAWPRC, Kyoto, Japan.
Kayser, R., (1990), Process control and expert systems for advanced wastewater
treatment plants, Proc 5th IAWPRC Conf, Kyoto, Japan.

3 Modelling and Control Demonstration


Objectives
1. To gain experience with state-of-the-art modelling, simulation and control analysis software.
2. To run several demonstrations which illustrate the key concepts and properties presented in the modelling and control material.
3. To run a biomass model with simple control loops in place.

3.1 Introduction
The state-of-the-art software used in the exercises is the MATLAB
environment making use of CONTROL SYSTEM TOOLBOX for control analysis
and SIMULINK for simulation. The exercises are largely self-contained and predominantly use SIMULINK. Some MATLAB commands that might be useful are those relating to the plotting procedures:
plot(time, output) ~ one graph
plot(t, output1, t, output2) ~ two graphs
hold on / hold off ~ to add graphs to, or release, the same set of axes
ginput(N) ~ cross-hair; enables N points to be sampled; N should be numerical
zoom ~ command to home in on a particular part of a graph to read off specific values: use zoom, then ginput(N)


3.2 First Order Systems


Use DEMO1 from SIMULINK.

There are three first order systems.

(i) Identify the d.c. gain for each.


(ii) Identify the time constant.
(iii) Click on the scopes.
(iv) Run the simulation (SIMULATION/Start)
(v) See the relative speed of response
(vi) Use the plot command to obtain a graph: plot(t, y1, t, y2, t, y3).
(vii) Examine where each response reaches 63.2% of its final value and read off the time constants (a MATLAB sketch of this check follows).
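
These steps can also be reproduced at the MATLAB command line with the Control System Toolbox; the three systems below are assumed first order models, not necessarily those of DEMO1.

t  = 0:0.01:60;
G1 = tf(1, [1 1]);   G2 = tf(1, [5 1]);   G3 = tf(2, [10 1]);   % K/(tau*s + 1)
y1 = step(G1, t);    y2 = step(G2, t);    y3 = step(G3, t);
plot(t, y1, t, y2, t, y3), grid on
% Each response reaches 63.2% of its final value (the d.c. gain) after one
% time constant, e.g. y1 crosses 0.632*dcgain(G1) at about t = 1.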

3.3 Second Order Systems


Use DEMO2 from SIMULINK.
There are three second order systems.

(i) Use the standard form G2(s) = K ωn² / (s² + 2ζωn s + ωn²) to complete the following table for the three systems displayed.


System    K (d.c. gain)    ζ    ωn    Type of Response
1
2
3

(ii) Open up the scope


(iii) Run the simulation (Simulation/Start)
(iv) Do the results from the scope correspond with the predictions of your
tables?
(v) Use the plot command to obtain a graph, plot (t, y1, t, y2, t, y3).
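
The table entries can be checked numerically in MATLAB; the gain, damping ratio and natural frequency below are assumed values, not those of the DEMO2 systems.

K = 2;  zeta = 0.3;  wn = 1.5;              % assumed K, damping ratio and natural frequency
G = tf(K*wn^2, [1 2*zeta*wn wn^2]);         % G(s) = K*wn^2/(s^2 + 2*zeta*wn*s + wn^2)
damp(G)                                     % reports the poles with their zeta and wn
step(G), grid on                            % underdamped (oscillatory) response settling at K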

3.4 On-Off Control

3.4.1 Simple On-Off Control

Use the DEMO31 from SIMULINK.


(i) Run the simulation, use Simulation/Start.
(ii) Observe the highly oscillatory response (Step Input 2.5).
(iii) Double the step input size to 5 units to see the oscillations
disappear.

3.4.2 Threshold On-Off Control

Use the DEMO32 from SIMULINK.


(i) Run the simulation.
(ii) Observe a good response and then the introduction of cyclic
behaviour as the reference signal changes.

3.5 Three Term Control

3.5.1 Demonstration of Effect of Integral Action

Use the DEMO41 from SIMULINK.


(i) Run the simulation
(ii) Observe the elimination of the reference signal/output offset as
integral action is increased from zero to 1.0.

3.5.2 Sustained Oscillation Tuning

Use the DEMO42 from SIMULINK


(i) In the tuning experiment, an ultimate gain of 33.6 can be found.
(ii) An ultimate period of 4.316 seconds can also be measured.
(iii) Ziegler-Nichols Rules for PI Control
Kp = 0.45Ku = 15.124

Ti = Pu/1.2 = 3.9563

Therefore GPI(s) = Kp (1 + 1/(Ti s)) = 15.124 + 3.8228/s

Thus Kp = 15.124 and KI = 3.8228

(iv) The resulting closed-loop behaviour can be seen in the second simulation, where the rather poor performance of these settings can be readily observed.
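
The PI settings quoted above can be entered directly in MATLAB; the closed-loop check is indicated only as a comment because it assumes some plant model G is available.

Kp  = 15.124;   Ki = 3.8228;      % values from the tuning calculation above
Gpi = pid(Kp, Ki);                % Gpi(s) = Kp + Ki/s
% With a plant model G, the tuning could be assessed with:
%   T = feedback(Gpi*G, 1);  step(T)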

3.5.3 Elimination of Derivative Kick

Use the DEMO43 from SIMULINK.


(i) Run the simulations
(ii) The top arrangement has the derivative acting directly on error.
(iii) The bottom arrangement has the derivative acting on the feedback (measured output) signal only.
(iv) Note the improvement in the response of the second form.

3.6 Cascade Control Demonstration


Use the DEMO5 from SIMULINK
(i) Run simulation
(ii) There are four scopes:
Scope 1 ~ Reference Tracking Performance
Scope 2 ~ Load Disturbance Rejection
Scope3A ~ Supply Disturbance Rejection
Scope3B ~ Supply Disturbance Rejection in Outer Loop

3.7 Ratio Control Demonstration


Use the DEMO6 from SIMULINK

3.8 Aeration Basin Model and PID Control


Use the DEMO7 from SIMULINK; Run using Simulation/Start
Notes: This model is taken from the paper by Neijjari et al (1997) entitled
Nonlinear multivariable control of a biological wastewater treatment process. A
global lumped parameter approach is taken for the composite aeration and settler
process. Brief details are:

3.8.1 Suite of Equations

dX/dt  = μ(t) X(t) - D(t)(1 + r) X(t) + r D(t) Xr(t)
dS/dt  = -(μ(t)/Y) X(t) - D(t)(1 + r) S(t) + D(t) Sin
dC/dt  = -Ko (μ(t)/Y) X(t) - D(t)(1 + r) C(t) + KLa (Cs - C(t)) + D(t) Cin
dXr/dt = D(t)(1 + r) X(t) - D(t)(β + r) Xr(t)


where X(t) = biomass concentration
S(t) = substrate concentration
Xr(t) = recycled biomass concentration
C(t) = dissolved oxygen concentration
and D(t) is the dilution rate, while r and β represent the ratios of the recycled and waste flows to the influent flowrate. Sin and Cin correspond to the substrate and dissolved oxygen concentrations of the influent stream. The biomass growth is described by the growth rate, μ, and the yield of cell mass, Y; the constants Cs and KLa represent the maximum dissolved oxygen concentration and the oxygen mass transfer rate. Ko is a model constant. The biomass growth is assumed to follow a double Monod law in substrate and dissolved oxygen.

3.8.2 Kinetic Data Points

Y = 0.65              μmax = 0.15 h⁻¹
r = 0.6               Ks = 100 mg/l
β = 0.2               Ko = 0.5
Cs = 10 mg/l          KLa = 0.018
Kc = 2 mg/l

3.8.3 Initial Conditions

X(0) = 215 mg/l       C(0) = 6 mg/l
S(0) = 55 mg/l        Sin = 200 mg/l
Xr(0) = 400 mg/l      Cin = 0.5 mg/l

The model usefully demonstrates how difficult it is to control the process


using PID laws.
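
A minimal MATLAB simulation of this model is sketched below using the kinetic data and initial conditions listed above. The double Monod growth expression and the constant dilution rate D are assumptions made for the sketch; in DEMO7 the inputs would instead come from the SIMULINK diagram.

% Kinetic data and influent conditions (from the tables above)
Y = 0.65;  mumax = 0.15;  Ks = 100;  Kc = 2;  Ko = 0.5;
Cs = 10;  KLa = 0.018;  r = 0.6;  beta = 0.2;
Sin = 200;  Cin = 0.5;
D = 0.05;                                    % assumed constant dilution rate (1/h)

mu = @(S, C) mumax * S./(Ks + S) .* C./(Kc + C);   % double Monod growth law
f  = @(t, x) [ mu(x(2),x(3))*x(1) - D*(1+r)*x(1) + r*D*x(4); ...                      % dX/dt
              -mu(x(2),x(3))*x(1)/Y - D*(1+r)*x(2) + D*Sin; ...                       % dS/dt
              -Ko*mu(x(2),x(3))*x(1)/Y - D*(1+r)*x(3) + KLa*(Cs - x(3)) + D*Cin; ...  % dC/dt
               D*(1+r)*x(1) - D*(beta+r)*x(4) ];                                      % dXr/dt

x0 = [215; 55; 6; 400];                      % X(0), S(0), C(0), Xr(0) in mg/l
[t, x] = ode45(f, [0 100], x0);
plot(t, x), legend('X', 'S', 'C', 'Xr'), xlabel('time (h)')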

4 Supervisory Control and Data Acquisition


(SCADA) Systems

Objective
The objective of this Chapter is to introduce state-of-the-art technology
in plant automation and control. The Chapter starts with the historical background
to computer control and its evolution in the last two decades. Some specific
remarks are made regarding the use of Distributed Computer Systems (DCS) in
Wastewater Treatment Plants.

4.1 Introduction
The recent advances in information technology, increased market
competition, the tightening of environmental regulations, the demand for low
cost operation and energy efficiency have all influenced the need for new
control design philosophies for complex industrial systems. The main impacts of these changes on plant-wide control methodologies are summarised below:
New machinery and processing equipment is becoming progressively faster
and more complex.
Flexible and distributed plants are increasingly more popular in process
industries.
The demand for total plant optimisation with efficient and reliable unit
operation is increasing.
The integration of control and instrumentation equipment manufactured by
different vendors is a major issue in the control design for complex
systems.


The global co-ordination of management, operational control and


maintenance functions is now an essential part of large scale plant computer
control systems.
The design and provision of control systems with the capabilities to
integrate a large number of plant functions have been a major concern for
control practitioners since the Sixties. The early attempts to produce a working
integrated system were concentrated on the application of a central digital
computer to plant wide control. This was achieved by the replacement of
pneumatic and analogue equipment such as sensors and actuators with their
digital or high power electronic counterparts. These new computer-based
control systems were known as Direct Digital Control (DDC) systems. They
were initially developed in the power and steel industries, followed by the process and petrochemical industries (Williams, 1964). The early applications
of DDC systems were restricted to small and local plant units. However, this
changed in the mid-seventies with the emergence of communication networks,
which enabled computers to be linked together. This improved communication
capability revolutionised the application of DDC systems, leading to the
emergence of Distributed Control Systems (DCS) and the closely related
SCADA systems. It is perhaps useful to introduce some clarification of the
terminology.

Definition 1.1 Direct Digital Control (DDC): A centralised computer system


using digital algorithms to replace analogue plant controllers and provide signal
processing capabilities.

Remarks
The term was introduced in the 1960’s (Williams, 1964).
The poor integrity of the centralised architecture of DDC’s was identified
very early in the introduction of this technology.
Definition 1.2 Distributed Control System (DCS): An integrated computer
network of microprocessor controllers and communications devices used for
process control and supervision.

Remarks
This term arose in the 1970’s when computer technology and
communications permitted an integrated computer system to be developed.
The underlying system structure is usually hierarchical and extends to integrating the distributed control system with business control systems.

Definition 1.3 SCADA System : A Supervisory Control and Data Acquisition


system using a computer network to control and communicate with, possibly
remote, autonomous production systems.

Remarks
The difference between DCS and SCADA is becoming vanishingly small as the technological capabilities of DCS and SCADA systems approach each other.
SCADA is a term generic to certain industrial sectors, for example, the
offshore oil and gas industry.

The DCSs were initially used for data collection, limited data processing
and sequential control applications. PID controllers were soon incorporated into
these real time computer systems. With the development in control theory and
instrumentation technology, new features were added to such systems. The
hierarchical structure of such systems enabled plant supervision and optimisation
to be added to direct control. These systems are still under extensive development
and form an integral part of any modern manufacturing and processing plant. The
potential for advanced control features is slowly being realised and the new
methods will exploit the available computing facilities and advanced
communication buses. This is leading to a new generation of plant wide control
systems, which can provide efficient operation, integration, supervision,
optimisation, management, maintenance and control. The benefits of such systems
include lower installation costs, lower maintenance costs, better system reliability, flexible production configuration and easier expansion of the plant and the control system.

Computer automation has had a significant impact on various industries


over the last two decades. Some specific observations on implementation of these
systems for wastewater plants are summarised here (Gilman and Thompson,
1992).

1. A wastewater treatment plant accepts whatever flows down the sewers as


influent. Extreme loading conditions both in terms of quality and quantity
may occur. Volatile hydrocarbons may cause explosions; chemicals may
kill the useful bacteria; and severely high flows may flood the plant.
2. The reliability of the sensors utilised is questionable. The control strategies must be made properly fail-safe against sensor failure.
3. The environment within a wastewater treatment plant is aggressive and
corrosive. The environmental conditions must be considered when
installing computer equipment.
4. Redundancy and fail-safing are very important. The wastewater treatment process cannot be completely shut down for repairs.

4.2 Technological Background

The advances in new computer and information technology have had a


major impact on how large scale industrial processes are controlled. The main
objective of total plant control is to obtain economic benefits (increases in
efficiency, throughput, quality, etc.) by globally integrating the process control
functions, the supervisory control modules, information services and management
decision making functions. The hardware and software requirements of such a
system are determined and constrained by the plant size and geography.
Although new plants may be designed to be fully compatible with the new technology, there are still many old plants which require supervisory control design on an individual basis.
Most plant-wide control systems can be represented by either a centralised or a decentralised structure. These structures are discussed in the following sections:

4.2.1 Centralised Architecture

The key to this concept is a single computer unit at a location, which may
be remote from the process under control. This centralised facility executes all the
computational and operational tasks required by the process. As well as providing
the DDC of the plant, this single computer unit performs all the higher tasks such
as real-time data acquisition and processing, archiving, control and monitoring,
information analysis and management information. The centralised architecture is
shown in Figure 4.1.

Figure 4.1 Centralised System Architecture for Industrial Automation

The early approach to computer control was through the centralised architecture. However, the complexity and geographical scale of most large scale production processes, coupled with the development and falling cost of microprocessor technology and the associated advances in communications technology, led to a distributed architecture for supervisory control. Although some centralisation is required in any control system, this is usually restricted to the top level control functions and tasks. The main disadvantage of full centralisation is a lack of flexibility and an inherently low structural reliability. To ensure minimum downtime, at least two computers should operate in parallel with identical instructions and software. The advantage is that the user only has to deal with one type of operating system, and the communication modules are all built into the computer.

The main components of a centralised system are the interface, the


hardware, the software, the control and instrumentation units as shown in Figure
4.1.

4.2.2 The Distributed Architecture

The technological feature of the decentralised or distributed architecture


is that the control tasks are spread among a number of computing machines linked
by a communications highway. Each control task is a set of control functions.
The tasks are usually classified locally and executed on a local computer. The
interactions between the tasks are taken into account through higher level
modules, which initiate, monitor or control the execution of the lower level tasks.
Thus, the control tasks are primarily functionally distributed. The physical
constraints on the system such as geographical, safety issues, environmental
issues, security, energy utilisation and cost are also determining factors in
deciding how the control tasks should be distributed.

To develop an efficient data and information management system, each


computing machine should only be fed with the quantity and quality of information needed to execute a designated task. This has certain implications for the design of the communication network and the architecture of the control systems. The appropriate information packets should be generated and the
information flows and links should be established such that each local machine is
almost autonomous. This information packaging also introduces a certain
implementational flexibility if the global co-ordination is optimised and
engineered successfully. There are two distinct forms of distributed architecture:
horizontal and hierarchical.

4.2.2.1 Horizontally Distributed Architecture:

In this case, the process is partitioned into a number of sub-processes, and each sub-process is assigned a local computer unit. Each computer unit performs some of the tasks associated with the control of the complete process.
Global control is achieved by the use of a Local Area Network (LAN) which
transfers essential information through the horizontally distributed computer units
as shown in Figure 4.2.

[Diagram: local processors 1 to n, each connected to its own sensors (S) and actuators (A) on the process, and linked to one another by the LAN.]

Figure 4.2 Horizontally Distributed System Architecture

This type of distributed control system is often used for sequential systems
where the output of one sub-process is the input to the following sub-process as
shown in Figure 4.3. The information is only available at the local sub-process
level.

Figure 4.3 Sequential Process Plant

The major drawbacks of a horizontally distributed architecture are:


It is difficult to globally optimise the overall process due to a lack of
information.
The architecture often duplicates the hardware installed and the software
utilised.

4.2.2.2 Vertical or Hierarchical Distributed Architecture

The vertical or hierarchical distributed architecture is often seen as


superior to both the centralised and the horizontally distributed architecture. The
reasons for this superiority lie in three characteristics, which reinforce each other:

Industry is essentially a business, and the decision-taking mechanism is essentially hierarchical. The quality and quantity of the information needed to take the decisions in controlling a process are essentially hierarchical. The geographical spread of the water industry naturally leads to a functional decomposition which is hierarchical on a geographical basis. The traditional hierarchical architecture uses a top-down approach to decompose the system into a number of levels. The main control objectives are executed at the higher levels and the local control functions are executed at the lower levels. The information flow decreases from the bottom to the top of the hierarchy. The disadvantage of this approach is a lack of flexibility; as a result, any attempt to modify some functionality requires a major change in the whole system. Also, since there is no direct communication between the top and lower levels, sensor and actuator integration is difficult. The system cannot, therefore, respond quickly to unforeseen operating conditions, which often occur in wastewater treatment plants.

4.2.2.3 Heterarchical Architecture

The heterarchical architecture uses a parallel structure where all system


modules directly communicate among themselves without any intermediary
interfaces or high level supervision (Valavanis, et al, 1997). The advantage of this
architecture is its flexibility, but it suffers from high data traffic due to the lack of global supervision. Hence, it is difficult to achieve overall plant controllability.

4.2.2.4 Layered Control Architecture

The layered control architecture (Brooks, 1986) consists of controllers


working in parallel. Each controller performs a control function upon receiving
sensory information. One controller can subsume another controller and suppress
the lower layer control actions. Once the higher layer controller is no longer
triggered by a sensor, the lower level controller resumes control action. The
difficulty with this type of architecture is the synchronisation and timing between
control events and the lack of global controllability.

4.2.2.5 Mixed Architecture

In practice, a combination of the hierarchical, heterarchical and layered


architecture, known as mixed or hybrid architecture (Valavanis, 1987) is used. In
this architecture, the control system is divided into higher and lower levels. The
higher levels use the hierarchical structure to implement global control functions.
The lower levels uses layered and/or heterarchical structure to control the sub-
processes.

4.2.3 Supervisory Control System for Wastewater Treatment


Plants

Data acquisition, processing and distribution play an important role in the


control and optimisation of modern WWTP’s. In recent years the technology of
the SCADA and Distributed Computer Control Systems (DCS) has made its
appearance as the enabling control technology of the wastewater plant. The
industry is now keen to exploit the flexibility and power that this technology is
capable of. The implementation of a plant-wide control system is only possible if
a DCS is installed. A key issue in the successful implementation of the
wastewater computer control system is the choice of control and data
communication architectures. Traditionally, the control architecture implemented
on WWTPs is transferred from the process industry with minor modifications.
Because of the different nature of wastewater control problems, this has led to
disappointing results. It is rare that these systems are operated effectively over a
long period of time (Kananya, et al, 1990). The feedback control loops are often
switched to manual and the role of DCS is reduced only to data collection. The
barrier to the successful implementation of the control system is not the control
algorithms or control equipment, but rather the problem of designing control
systems that are integrated with the plant operation and have a high degree of local
autonomy, flexibility and reliability. The control and instrumentation of
wastewater plants will be further discussed in Chapters 7 and 9.

4.3 Distributed Control System Technology


The term Distributed Control System is most commonly applied to a plant-wide control system of distributed processors, operational peripherals (VDUs, printers, etc.) and minicomputers held together by a communication network operating in real time. A typical structure for a DCS is shown in Fig.
4.4.

Figure 4.4 A typical DCS configuration based on functional decomposition

The basic functions of a DCS can be classified into two groups:


Primary Control Functions: These are related to the direct control of
subprocesses at the local level and include feedback control, feedforward
control, inferential control, ratio control, cascade control, etc. The
instrumentation used to realise these control functions is called primary
instrumentation.
Secondary Control Functions: These are related to the higher levels of the
hierarchy and are the supervision, monitoring, management, maintenance and
optimisation tasks. Examples are status indicator, alarm, record, optimise, start-
up, and shutdown routines. The DCS plays a major role in executing the secondary control functions; the associated instrumentation is known as secondary instrumentation.

4.3.1 Generic Functional Modules

Each module of the architecture shown in Figure 4.4 is discussed below:

4.3.1.1 Input/Output Modules

Input/output modules provide the main interface between the DCS and
the process being controlled. They convert the information provided by the
process instruments into digital form. They also provide signal filtering, contact
de-bouncing, and in some instances they can also do alarming, signal
characterising and low-level logic. Four basic types of signals connect 110
modules:

Analogue Inputs, also called analogue INs or AIs.


Analogue outputs, also called analogue outs or AOs
Digital Inputs, also called digital Ins or DIs
Digital outputs, also called digital outs or DOs.
Analogue inputs are gradually varying signals (as opposed to two
positions), typically connected to sources such as 4-20 mA and 1-5 V DC
transmitters, thermocouples, and RTDs (resistance temperature detectors).
Analogue outputs are gradually varying signals, usually 4-20 mA, typically
connected to devices such as valves, dampers, and variable speed motors.
Digital inputs are typically connected to two-positioned devices such as
limit switches, relays and pulse contacts. Digital outputs are contact openings and
closing that operate controlled devices (such as valves, dampers and motors) in a
two-position manner.
I/O modules are typically designed for varying levels of input/output loading, for example:
1. A single board connected to a single field device, providing single-point integrity.
2. A single board connected to a single input device and a single output device
providing single-loop integrity
3. A single board connected to multiple (4, 8, 12, 16, 32) inputs.
4. A single board connected to multiple (4, 8, 16) outputs
5. A single board connected to multiple inputs and multiple outputs (for
example, eight in and four out).
I/O modules may have separate, individual circuits, or they may share components such as analogue-to-digital and digital-to-analogue converters and multiplexers. Typical features to look for in I/O modules are:
Isolated or non-isolated grounding on a per point or per board basis
Level of fusing protection on a per point, per circuit, or per board basis
Accuracy and linearity of the sampling frequency
Protection from electromotive force (emf) and transients
Immunity to radio frequency (rf) interference
Fail-safe positioning
Overload and surge protection
Impedance matching with field devices
Loop feedback sensing
Manual override of loop control
Mean time between failure (MTBF) and mean time to repair (MTTR) (field
values, not theoretical)
Criticality - that is, if the board fails, what else will be affected.
With these criteria in mind, one should be able to evaluate the level of
reliability of I/O modules when comparing various vendors' systems. This will
indicate when and where to apply redundancy at this level.

4.3.1.2 Local I/O Bus

The local I/O bus provides a bridge between the I/O and controller
modules and, by definition, is restricted in terms of geographical area and data
loading. It typically operates at a slower speed than the plant-wide data highway, although communication rates can range from 9,600 bits per second up to 250,000 or even 1 million bits per second.

I/O buses can connect varying numbers of I/O and controller modules. The
manner in which they provide communications can also vary, from polling or
scanning of the I/O by the controller modules to serial communications between
I/O and controller modules. They can also be arranged for serial or parallel
communications or a combination of both.
While I/O buses are seldom a bottleneck or a limitation, they become a
critical component if they fail. The loss of a single I/O bus can affect the control
of many end devices.

4.3.1.3 Controller Modules

Controller modules are the true brains of a DCS. Their primary function
is to use continuously updated information from I/O modules and then perform
the complex logic and analogue loop calculations needed to produce the controller
output signals that keep process variables at the desired values. It is at the
controller modules that many DCS functions, such as the following, are performed:

I/O Signal characterisation


Signal filtering
Alarming I/O modules
Ranging and engineering units
Control logic
Control interlocks
Sequencing
Batch control
Passing on of trending information
Passing on of report information
Controller modules are microcomputers and, as such, have similar
limitations. Although the various numbers associated with the various types of
controller modules can have a mesmerising effect, not all of these numbers are important in the evaluation of controller module performance. The important ones
are:
Available memory for configuration

Available idle time (based on a given scan rate)

I/O loading or criticality.

Number of available software addresses for input/output blocks

Number of available software addresses for control blocks.

In the sizing and selecting of a DCS, it is vitally important to ensure that


there is enough processing power not only to serve the active I/O and control functions but also to provide some spare capacity for future I/O expansion. This is
an important consideration, because adding this processing power after the
installation is expensive. This is due to the added cost of the extra modules and
other associated equipment, such as communication modules, power supplies and
cabinets. This added cost is often determined on a non-competitive basis and is,
therefore, higher than it would have been if purchased as part of the initial
contract.
The second penalty is inferior performance due to the extra loading put on
the original and the new controller modules, the communication modules and the
data highway. This extra loading is the result of controller modules doing link
communications instead of simple control. Link communications are those that
pass high volumes of information between control processors. Such
communications consume large amounts of memory and scan time in the
associated controller and communication modules and load the data highway. A
simple way to avoid this potentially reduced performance is to specify suitable
values of I/O loading, memory usage, and idle time for controller modules. For
example, for a given scan cycle (1/4, 1/2, or 1 s on average), one can specify the amount of spare memory and idle time to be available in the controller module after execution of the I/O and control functions. Spare memory and idle time should normally range from 20% to 60% depending on the application. Limiting the number of I/O and control functions executed in a controller module is a good idea for three reasons:

1. It ensures the availability of the microprocessor power needed to carry out the specified functions and thereby simplifies configuration engineering.
2. It allows for easier, more flexible future expansion and reduces the risk of link communications.
3. It reduces the criticality of any given controller module by limiting the number of I/O points and loops controlled, thus limiting the damage caused by failure of the module.

4.3.1.4 Communication Modules

Communications modules are also microcomputers, but they differ from


controller modules in function. Rather than execute control strategies,
communications modules manage the flow of information between the data
highway and controller modules, user interfaces and gateways to host computers and PLCs. Although there is always a physical limit to the amount of data that
communication modules can handle, they are not often a bottleneck.

If problems do occur, the communications rate and memory capacity


should be checked. Performance improves if one either increases the number of communication modules or decreases the number of devices served by a single module. Again, there should always be room for expansion. Communication modules are critical to the proper operation of a DCS; without them, the operator may be blind to the process.

4.3.2 Real-time Data Highway

Real-time data highways come in many variations. Topologies can be


linear, loop, or star, and they may or may not include traffic controllers. Since a
data highway is a microprocessor based module, it should be viewed as
considerably more than one or two cables strung out across the plant.
If controller modules are the brains of a DCS, then the data highway is its
backbone. It is an active component through which pass the system messages and
files transfers, all in real-time. It constantly updates the consoles, gateways and
other modules connected throughout the system countless times each second. It
is probably one of the most critical DCS modules, because it is common to all
other plant-wide components. If the data highway should fail, operators are cut
off from the process, link communications are lost, and process control is affected.
The data highway is the one DCS component that should usually be made
redundant. In this case, redundant does NOT mean one highway is active and one
is a hot standby; it means that both highways are active, permitting a bumpless
transfer between highways without the need for human intervention. If traffic directors are part of the data highway, they should also be made redundant.
The following are principal issues to be addressed in the evaluation of a
DCS data highway:
1. Synchronised versus non-synchronised
2. Deterministic versus non-deterministic.
3. Token passing versus report by exception
4. Variation in protocol types (all are proprietary)
5. Peer to Peer versus collision detection-based communications
6. Speed of data transmission
7. Maximum transmission distance
The evaluation of the security and reliability of a data highway is not
straightforward because many factors are involved. Most importantly, speed isn't
everything. Other essential factors are module highway access, message buffering
and prioritising, and efficiency. For example, highways based on collision
detection and report by exception can lose 70-80% of their rated capacity when
message loading increases due to alarm burst and process upset conditions.
Unfortunately, it is under such conditions that it is most important for the data
highway to perform efficiently. Generally, one should evaluate a data highway
design based on a worst-case scenario. Consideration should be given to:
1. The number of stations (I/O and control loops) that are connected to the highway
2. How much trending and reporting information is being transferred
3. The volume of link communications
4. The number of alarm points.

Once the required data highway capacity is known, the size, number and
configuration of highways (and traffic directors) can then be specified.

Repeaters or gateways are an integral part of real-time data highways.


When one data highway is fully loaded and more capacity is still needed,
additional highways can be used. Two common approaches are used to permit
communications between highways. The first is to link the highways together via
a high level or so-called super highway. Each real-time data highway is joined to
the super highway by means of gateway modules, which are usually redundant.
This would mean that connecting two redundant real-time data highways together
would require four gateway modules. The second approach is a straightforward
highway-to-highway connection via highway interface modules. In this second
approach, there is no super highway acting as a go-between.
Whichever approach is used, if one ends up with a requirement of multiple
highways, extra costs should be expected. If the requirement happens to be
unplanned, the extra costs could be substantial, considering the gateways, other
interface hardware, software, engineering and possibly re-engineering - all added
after the fact. Sizing a real-time data highway means looking as far as possible
into the future and planning for maximum loading.

4.3.3 Host Computer Interfaces and PLC Gateways

A requirement in many DCS applications is the transfer of information to


and from other types of computers. This can be required for a variety of reasons,
such as
1. Integration with management information systems (MIS) computers
2. Integration with optimising or modelling computers
3. Integration with production and maintenance computers or computer
networks already in place (or to come)
4. Integration with other process control computers (such as PLCs).
Whatever the situation, the distinctly different computer systems must be
able to communicate with one another. That is, the real-time computer systems
may have to talk to MS-DOS, PS/2 or UNIX-based computers. As there is no
universal agreement on operating systems, all DCS vendors have taken the
approach of a translator box or host gateway. Typically, this gateway is a passive
device in that it does not initiate communications but merely translates and transports information. Typically, it does this in a manner similar in concept to that of a post office box, as illustrated in Fig. 8.4.
This method is often explained in terms of a data transfer table and is
generally an efficient means of communication. It is faster and accommodates
more data than an approach that uses a direct question and answer on a point-to-
point basis. Gateways can also accommodate file transfers of large quantities of
data, such as trend or report files, although not all gateways have these abilities.
Since a host gateway module is normally a passive device that simply
translates, it needs to be told what information to translate and when to read and
write to the various system registers. In short, it requires a driver device with
driver software to take charge of the communications. This set-up is often a
master-slave relationship between the DCS and the host computer.
In communications with a PLC it is usually the DCS that is the master
handling the driver software. The reverse is normally true when a DCS
communicates with a host computer. It is essential to know if a vendor includes
the driver software with the interface or gateway. Proven, off-the-shelf driver
software is highly preferred to software that must be custom developed. In the
latter case, a user must be prepared to pay a high premium and, in addition, suffer
the frustration of on-the-job debugging. Custom software development is very
expensive in both the short and long terms.
While a host gateway module is passive in terms of communications, it is
an active computer device. It therefore has memory and scan time limitations to be aware of, in terms of:
Size of database
Speed of communications
Rate of database refresh, and
Types of data accessible (for example, trend files, report files, types of live data and so on).

4.3.4 Power Distribution System

This is the part of a DCS that is most often overlooked and, like the real-
time data highway, it is a system component common to all others. It is the DCS
component that takes raw electrical power, converts it, conditions it, and regulates it for the various other computer modules in the system. The typical power distribution system can be split into two parts - bulk
power and power regulation. With bulk power, the key issue is to make sure that
variations in the main AC source do not exceed the capabilities of bulk power
supplies. Battery backup is usually mentioned in the same breath as bulk power
supplies and may appear in various forms: uninterruptible power supply, separate
battery packs, or integral battery packs. Whichever approach is used, the batteries
should be able to take over instantaneously if power fails or dips. Loss of power
to the microprocessor modules could erase some sections of memory and require a
reboot of the system. Battery backup is sized to keep the system energised long
enough to meet essential needs. Typical backup times may range from two or
three minutes to two hours.
Power regulation is also vital to the operation of a DCS but is almost never
lacking in capacity. However, redundant power regulation is recommended for
most system modules and most applications.

4.3.5 Interfacing the PPS and ACS to DCS

The part of the DCS which allows the interfacing of external software is the Applications Manager/Module. This part of the architecture sits on the DCS network and can run user code, so that interfacing problems between separate hardware and the DCS can be avoided.
The object of this section is to investigate to what extent this feature is supported on some industrially used DCSs.
Situated between the DCS and the dedicated controllers are two "device gateways". Simply stated, the purpose of the gateways is to allow the DCS to send and receive information from the dedicated controllers so that (to the DCS and the plant operator) the remote data appears to be the same as the DCS data. The difference between process data read via the gateway and data hardwired to the DCS controller should be "transparent" at higher level functions, such as the historian, alarms, and human interface. The gateway provides a number of services for the DCS. These are:
Communications port management.

Message error checking and identification.

Format Conversion.

Ownership of Data - this solves the problem of data being globally available to the DCS.

Alarming - the gateway generates the alarm for the dedicated controller.

To summarise, the gateways service the communication links and map remote data points into the DCS so that they behave as if they were hardwired into the DCS. In this way, no special treatment is needed for these points at the higher level DCS control and engineering functions.

4.3.6 DCS Software

Like any computer system the DCS has to be correctly programmed to


perform its tasks in real time. The existing software for DCSs is often non-standard and proprietary. The basic modules are the operating system, system support software, application software and communication software. These are briefly discussed here:

Operating Systems Software is the executive software for the computer system.
Its main distinguishing feature is its real time capability. Typical examples are
RTDS, OS-9, REAL-IX or UNIX.
System Support Software is used to aid the development of the application
programs. These are called system utility programs since they include editors,
debuggers, compilers, linkers and so on.

Applications Software is specifically related to the task of configuring a DCS.


They usually contain a library ranging from simple operations, like reading an input, to more complicated modules, like a PID or lead/lag algorithm, and may extend to complex sequencing, optimisation routines or expert system facilities. These
programs are almost exclusively proprietary, for example GENSIS or
PARAGON 500.
Communication Software is used to establish the communication links between
different local computers and the local and highway buses.
It should be noted that the recent trend in the development of DCS
systems is toward an open system both in hardware and software. When this
happens, it will provide a significant improvement in the flexibility of the DCS
system and make the task of designing these systems much easier.

4.4 Functionality of the DCS

4.4.1 Data Acquisition and Processing

The DCS, like the programmable logic controller, is connected to primary


control elements such as temperature and pressure transmitters, flowmeters, gas
analysers, pH and conductivity sensors, weight scales, contact switches, valves
and motors, and so on. From these field devices it receives electrical signals, for
example 4-20 mA, 1-5 V DC, 24 V AC and 120 V AC. The DCS converts these
signals (digitises them). Once converted, they can be used by the computer to:
1. Control loops
2. Execute special programmed logic
3. Monitor inputs
4. Alarm the plant operations
5. Trend, log and report data, and
6. Perform many other functions.
Field signals are divided into two basic categories - analogue and discrete.
Analogue signals are continuously variable: they act like the dining room dimmer,
which changes the lighting intensity in a gradual manner. Discrete signals can
have only two values or positions and are called two-position or on-off or snap-
acting. They are often associated with contact devices, such as the light switch in
a home. There is no in-between with discrete devices - they are open or closed, true or false, on or off, etc. Because a DCS is computer-based and all its
information is in digital form, it can easily combine analogue control loops with
discrete logic (interlocks and sequences).
A DCS can involve as little as a few hundred inputs, outputs, control loops,
and logic interlocks, or tens of thousands of them. It can scan all the primary elements or sensors, characterise the input signals and alarm them, recalculate the loop parameters, execute logic and then send the results to motors and valves throughout the plant. It constantly re-evaluates the status of the plant and makes
thousands of incremental decisions in fractions of a second. It is capable of all
this and more for two main reasons:
1. A DCS is made up of many independent control modules that can operate
simultaneously and independently.

2. It has the ability to carry out rapid communications between these and other
modules by means of a communications link called a real-time data highway.

Input/Output Modules provide the main interface between the DCS and the
process being controlled. They are used to scan, digitise and process instrument input/output signals and to provide command inputs to the actuators. They also
provide filtering, contact de-bouncing, low level logic and local alarms. A single
input/output board may be connected to a single input/output device to provide
loop integrity. A single board may be connected to multiple inputs and multiple
outputs.
Typical features of these devices are:
Isolated or non-isolated grounding, a low level of fusing protection, high accuracy
and linearity of sampling frequency, protection from electromotive force, an
immunity to radio frequency, fail-safe positioning, overload and surge protection,
impedance matching, loop feedback sensing, manual override of loop control and
a high mean time between failure.

4.4.2 Low Level Process Control

Regulatory analogue loop control often involves simply maintaining a


process variable (such as temperature or pressure) equal to a set point. It is like
the cruise control maintaining its set speed. Of course, many different types of control loops (feedforward, lead-lag, cascade, etc.) are being executed in a DCS,
but simple, set point-maintaining loops often account for the bulk of them.

4.4.3 Sequencing

Discrete control very often consists of simple logic statements coupled with
field sensors to provide logic interlocks or process sequences. For example,
consider a tank to be filled with a liquid and then heated. To protect the product and/or equipment, one could use a logic interlock that says:
1. IF the level is below a minimum point,
2. THEN the heater coil cannot be turned on (or must shut off).
The process might also call for the liquid to be stirred with an agitator. The
previous logic interlock could be coupled with sequencing logic that says:
1. First, fill tank.

2. Second, turn on heater

3. Third, start agitator

4. Fourth, empty tank.

In the sequence, the second step cannot take place until the first is
completed. Likewise, the third step cannot start until the second step is completed
and so on. By adding the IF-THEN logic interlock, if the level should ever drop
below the minimum level, the heater would still trip off.
The computer based DCS can easily combine analogue control loops with
interlocks and sequences. The above example could also incorporate an analogue
control loop to maintain a constant temperature in the liquid.
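
The interlock and sequence above can also be expressed as a small state-machine function. The MATLAB sketch below is purely illustrative (tag names, limits and units are assumptions) rather than DCS configuration code, but it shows how the IF-THEN interlock remains active in every step that uses the heater.

function out = tankSequenceStep(stepNo, level, temp)
% One scan of the fill/heat/agitate/empty sequence with the level interlock.
% stepNo: 1 = fill, 2 = heat, 3 = agitate, 4 = empty; level in %, temp in degC.
    LEVEL_MIN = 20;  LEVEL_FULL = 90;  TEMP_SET = 60;          % assumed limits
    out = struct('fillValve', false, 'heater', false, ...
                 'agitator', false, 'drainValve', false, 'nextStep', stepNo);
    switch stepNo
        case 1                                  % first, fill tank
            out.fillValve = level < LEVEL_FULL;
            if level >= LEVEL_FULL, out.nextStep = 2; end
        case 2                                  % second, turn on heater
            out.heater = level >= LEVEL_MIN;    % IF level below minimum THEN heater off
            if temp >= TEMP_SET, out.nextStep = 3; end
        case 3                                  % third, start agitator
            out.heater   = level >= LEVEL_MIN;  % interlock still applies
            out.agitator = true;
            out.nextStep = 4;
        case 4                                  % fourth, empty tank
            out.drainValve = level > 0;
    end
end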

4.4.4 Alarm Management

Being computer-based, the DCS also offers intelligent alarm management.


It can force the operator to focus on the most important alarm, thus allowing him
or her to respond more appropriately to the situation. Some alarm functions
include the ability to:
1. Filter out nuisance alarms.
2. Recalculate alarm limits
3. Re-alarm lingering alarms
4. Prioritise alarms.

4.4.5 Operator real-time Displays

User interface modules allow the operator, the engineer or manager to


interact with the DCS.
These devices include display monitors, alphanumeric keyboard, operator
keyboard, trackball and mouse, printers and plotters. Recent information
technology developments have made operator-interfaces impressive; however, it
should be noted that such displays are expensive to produce. Further, the value of
the display depends on the accessibility and size of the process database, the
integrity of the information, the screen's build or image speed and the reliability
and security of the information.
Typical features of the operator display are as follows:
1. Graphical overview of the process, providing a real time interactive interface with the process. These usually also allow the operator to zoom in on the status of a particular plant component.
2. Group Display made up of control station mimics allowing hundreds of
traditional instruments to be organised in one clear display.
3. Alarm Summary showing recent alarm conditions in a prioritised and/or
process area categorised manner.
4. Trend Displays showing the history data associated with a chosen
measurement.
5. Help Displays are often available to assist operators and to aid operator
training.

4.4.6 Data Logging

This feature of DCS's allows the user to record selected process variables in
sample form at user established intervals and store the data in a historical
database. Along with each sample, the process status, date and time are also
saved. This historical archive of process conditions can be accessed, by the user,
and used for several purposes:
1. Trip event Analysis. In a process trip situation, the user can access the data
related to the trip event and use this to trace the circumstances leading up to
the trip and also the effects of the trip event on other processes.
2. Trend Logs. The operator can access the data to produce long term trend plots
of the chosen process variable.
3. Log and Report Generation. Archived data may be accessed to produce
scheduled or user demanded reports including trend logs, sequence of event
reports and maintenance logs. The report format is user definable.

4.4.7 Performance Assessment

This feature of DCS's appears to be an optional extra since it requires some


form of model of the process to be controlled so that the actual plant operating
status can be compared to a theoretical status. However, the performance of a
process may be reasonably assessed in comparison to its normal operation by way
of the trend logs produced from the historical database.

4.4.8 Plant Management and Supervisory Control

In addition to the process instruments (such as temperature transmitters,


flowerets, pH sensors, valves, and so forth), that are common to any process
control approach, there are six generic functional modules:
Input/output or I/O modules scan and digitise process instrument
input/output data. Some may perform elementary simple logic.

The local I/0 bus links I/0 modules to controller modules.

Controller modules read and update field data and performance control
calculations and logic to make process changes.

User interfaces include operator interfaces and engineering workstations.

The data highway is a plant-wide communications network.


115
Communication modules provide a link between the data highway and other
modules, typically controller modules and user interfaces.

Each DCS vendor has a priority approach, and it is possible, for example,
for the functions of control and I/0 to be combined in the same physical
competent. Nevertheless, it is still possible, even preferable, for a DCS to be
described by means of the generic functional modules.

4.4.9 Technological Implications and Potential

4.4.9.1 Economic Benefits

In the 1960’s, the pursuit of the economic benefits of global control


resulted in the first centralised direct digital control computers and this was
reinforced during the 1970’s when distributed control systems became
widespread. The increasing communication capabilities, data acquisition and
processing power of the current and future plant-wide control technology will
undoubtedly lead to improved responsiveness in the dynamic scheduling of large
complex manufacturing and process plants. The impact of this on the control
system will be the closer integration of the control, monitoring and operation of
the process units. This trend can be observed across a wide range of industries but
particularly in the large-scale industrial processes of the chemical, petroleum,
paper and steel sectors. These more recent advances have made supervisory
control easier and cheaper to implement and the key benefits are seen as:

Increased Plant Capacity: In the power industry, where power generation
units are of interest, increased plant capacity is equivalent to increased energy
production efficiency and better availability of the generation units to meet
external loading and demands.

Lower Operating Costs: This has several components and they are self-
explanatory: less fuel or feedstock required, improved energy utilisation or
production, lower maintenance costs, reduced labour costs, improved plant safety,
improved product quality and uniformity, reduced waste, and improved process
information and management.

4.5 A Classification for Supervisory Control Problems


The complexity and size of large-scale industrial plant, whether it be a
thermal power station or a chemical refinery, requires more than loop control or
even process unit control. Co-ordinated total plant-wide control is required for
optimised system operation. Supervisory control systems are designed to
provide the integration of the industrial subprocesses. There are perhaps two
aspects to the design of supervisory control systems:
(i) Responsiveness to external system conditions, for example, reacting to
the network loading which impacts the power generation production
requirements.
(ii) Responsiveness to internal system conditions, for example optimised
system set-up to take account of fault conditions or maintenance
schedules. This includes being able to reconfigure the production of
power given a particular plant availability scenario.
The supervisory control system design problem often takes the form of
on-line optimisation of the primary and secondary control functions.
These are classified as:
Scheduling : This is the long term planning problem of operating and directing the
plant operation within the constraints of maintenance requirements, faults and
machine failures. For example, a balance between power production and loss of
plant availability has to be achieved.

Optimal Plant Allocation : This involves the economic balancing of the available
throughput against external demand. A typical problem is the selection of set
points in a Combined Sewer Overflows (CSO) process to ensure optimal
exploitation of the plant capacity.

Operating Condition Optimisation : This is the problem of establishing the best


operating conditions giving maximum efficiency for least economic cost within
the primary and secondary system constraints. In combined cycle applications,
more flexible plant availability creates the problem of dynamically transferring
plant from one operating condition to another as demanded by the plant allocation
schedule.

Unit Optimisation : This is basically achieved by the local loop controller. The
feedback control design problem should be solved such that the unit control design
specification, formulated within the context of global system requirements, is
satisfied. There are numerous techniques in classical and modern control theory
to deal with this problem. However, methods dealing with constrained problems
are not so common and significant cost benefits will accrue from such techniques.

In a typical wastewater computer-based system, pumps, valves, gates, etc. must be
operated to divert wastewater or solids to in-line or off-line storage ahead of a
bottleneck, and to route them on to subsequent treatment and receiving waters. It is
highly undesirable to generate flooding, overflows and/or violations of standards
while the system still has unused storage capacity.

The time sequence of set points of all regulators is termed the control
strategy (set point optimisation) (Novotny and Capodaglio, 1992). A control
strategy can be determined either automatically, using mathematical models,
optimisation or expert systems, or manually, by trial and error.
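As a minimal, hedged sketch of what such a set point strategy might look like in code, the following rule-based allocation trades treatment, storage and overflow over an inflow forecast. It is a simple heuristic rather than a full mathematical optimisation, and all the flow figures and capacities are illustrative assumptions, not plant data.

```python
def setpoint_strategy(inflow_forecast, treat_capacity, store_capacity):
    """Greedy allocation: treat what the works can take, store the rest,
    and count anything beyond the storage capacity as an untreated overflow."""
    stored, strategy = 0.0, []
    for q_in in inflow_forecast:
        q_treat = min(q_in + stored, treat_capacity)   # pump-forward set point
        surplus = q_in + stored - q_treat
        stored = min(surplus, store_capacity)          # in-line/off-line storage
        overflow = surplus - stored                    # untreated CSO spill
        strategy.append({"treat": q_treat, "stored": stored, "overflow": overflow})
    return strategy

# Illustrative storm event against a treatment capacity of 100 flow units
plan = setpoint_strategy([80, 150, 180, 120, 60], treat_capacity=100, store_capacity=150)
```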

4.6 On Designing Supervisory Control Algorithms


An important step in designing supervisory control systems is the
functional specification. This is a description drawn up to answer the following
questions:
What are the tasks of the supervisor?
How are the tasks grouped into the primary and secondary control functions?
How will these tasks be accomplished?
What resources will be required to implement the supervisor?
Typically the functional specification might consider the following
issues:
Regulatory Control Structure : Input/output variables, equations for coupled
process variables, control equations, initialisation procedures, fail safe controller
strategy, failure mode analysis, operator interfaces and displays.
Monitoring Functions : Condition monitoring, fault monitoring, alarms, interlocks
and warning systems.
Performance Calculations : Performance indices, data processing, reporting
procedures
Operator Communications : Process data display, operator interface, alarm
interface and data representation.
Data base requirements : Data archival, data retrieval, data processing, data
management and data security.
Process models and optimisation : Operational objectives, process constraints,
process models and optimisation procedures.

Other issues such as implementation, software and hardware selection


and economic justification are considered the tasks of project engineers and
managers. Traditionally, supervisory control has been considered to be a set point
optimisation procedure. Although this is still an important function of supervisory
control, it seems prudent to recognise that supervisory systems are now a more
substantial technological development component.

A general procedure for the formulation of the optimisation problems


occurring in small scale supervisory control has been given by Seborg et al (1989)
based on earlier work by Edgar and Himmelblau, (1988). A more systematic
procedure for complex large-scale industrial systems is envisaged.
The design objectives that have been presented in the literature for
wastewater treatment processes can be summarised as follows:
Minimisation of untreated overflows for combined sewer overflows.
Stability of treatment processes and effluent quality. The Mixed Liquor
Suspended Solids (MLSS), the Dissolved Oxygen throughout the process and
the SS must be maintained at stable levels.

Minimisation of total pollution loads. This objective considers both the effluent
and the bypasses/overflows from the system, and is aimed at minimising the
total pollution load.
Avoidance or minimisation of "bottleneck" situations.
Economic constraints, including minimisation of overflows, reduction of peak
energy consumption at pumping stations and reduction of treatment cost
at the downstream treatment plant.

4.7 Further Reading


1. Novotny V and A G Capodaglio, 1992, Strategy of Stochastic Real-Time
Control of Wastewater Treatment Plants, ISA, Vol. 31, No. 1, pp 73-84.

2. Gilman H and F P Thompson, 1992, Programmable Logic Controllers find a


Home in Wastewater Treatment, ISA, Vol 31, No. 1, pp 125-130.

3. Warnock, I G, 1988, Programmable Controllers: Operation and Application,


Prentice Hall, U.K.

4. Morse A S and W M Wonham, 1971, Status of noninteracting control, IEEE


Trans. AC-16 No 6, pp 568-581.

5. Siljak, D D, 1984, Decentralised control of interconnected systems,


Encyclopaedia for Systems and Control, Pergamon Press, Oxford, U.K.

6. Bryant, R, 1986, Graph-based algorithms for Boolean function manipulation,


IEEE Trans Vol 35 No 8, pp 677-691

7. Ramadge, P J and W M Wonham, 1987, Modular feedback logic for discrete


event systems, SIAM J. Control and Optimisation, Vol 25, No 5, pp 1202-
1218.

8. Corea R, M T Tham and A J Morris, 1993, An application of qualitative


modelling in an intelligent process supervisory systems, IEEE conf. on
Control Applications, Vancouver, Canada.

9. Basu R.N. and L.L. Cogger, Integrated approach to cogeneration planning,
control and management, Proc. IFAC Symposium on Automation and
Instrumentation for Power Plants, 1986.

10. Bhandari, V.A., R. Paradis and A.C. Saxena Using performance indices for
better control, Source unknown, ca 1986.
11. Bransby M.L., Direct digital control in CEGB Power Stations, Chapter 9 in:
Bennett S. and D.A. Linkens (Eds.), Computer Control of Industrial Processes,
Peter Peregrinus Ltd., Stevenage, UK, ISBN 0-90604880X, pp. 155-169, 1982.

12. Tsai, T.H., J.W. Lane and C.S. Lin Modern control techniques for the
process industries Marcel Dekker Inc., New York ISBN 0-8247-7549-X,
1986.

13. Vahldieck R. and H. Krause Practical experience with progressive


automation concepts in power plants. Proc. IFAC Symposium on
Automation and Instrumentation for Power Plants, 1986

14. Bazaraa, M S, H D Sherali and C M Shetty, 1993, Nonlinear programming :


Theory and Algorithms, Wiley-Interscience Series in Discrete Math and
Optim., John Wiley & Sons.

5 Process Quality Control (SPC)

Objective
The objective of this module is to introduce the basic concepts of
Statistical Process Control (SPC) as a tool for data analysis and data management.

5.1 Introduction
In the manufacturing and service industries, the word quality is used to signify
the 'excellence' of a product or service. A process (system) is the dynamic
transformation of a set of inputs, such as materials, actions, methods and
operations, into some desired outputs, in the form of products, information and
services. Most engineering processes can be monitored and brought 'under control'
by measuring appropriate output process variables and manipulating appropriate
inputs; that is, by measuring the performance of the process and using feedback
for corrective action.
Statistical Process Control (SPC) is the use of statistical tools and analyses
to monitor, control, manage and improve the performance of the process.
It provides easy, reliable and proven techniques for evaluating trends and point
values and for determining variation in the process behaviour. A process that is 'in
control' is stable and predictable. It is also amenable to process improvement. An
'out of control' process is not; it tends to diverge from a target operating point.
'Tampering' with the process only increases the variation in the process and may
cause poor performance or instability.
SPC methods provide objective means of controlling quality in any
transformation process. SPC is a tool to reduce process variability, variations in

products, in ways of doing things, in material, in people's attitudes, in equipment
and its use, and in maintenance practices.

[Figure not reproduced: a form showing the suppliers, the inputs and their quality, the key stages of the process, the outputs and their quality, and the customer.]
Fig. 5.1 Describing the boundary of a process

[Figure not reproduced: standard flowcharting symbols, including process, alternate process, decision, data, predefined process, internal storage, document, end, preparation, manual input, manual operation, connector, stored data, or/summing, delay, merge and extract.]
Fig. 5.2 Flowcharting symbols



5.1.1 Understanding the Process

One of the first steps in understanding or improving a process is to gather
information about the key process variables and functions so that a flow chart
diagram may be constructed. A flow chart is a picture of the activities which take
place in the process. In analysing the process, it is important to define the main
inputs, outputs and disturbances and to describe their quality characteristics in
terms of performance indices. Fig. 5.1 shows a form which can be used to focus on
the boundary of the process.

5.1.2 Flowcharting

The flow chart is a very important step for examining and improving any
process. The symbols of Fig.5.2 are usually used to construct the flow chart. A
critical examination and analysis of the flow chart helps to identify possible
improvements to the process. A well-established questioning technique is used to
examine the flow chart. Examples of these questions are given below.

The purpose for which, the place at which, the sequence in which, the people by
whom, and the method by which the activities are undertaken are each examined,
with a view to eliminating, combining, rearranging or simplifying those activities.

5.2 Data Collection and Presentation

Data forms the basis for analysis, decision and action. The methods of
collecting data and the amount collected must take account of the need for
information and not the ease of collection. Process data arise from both discrete
items and continuous measurements. The former can only occur in discrete steps,
e.g. 1, 2, ... defectives in a sample of 10, valve ON or OFF, tank EMPTY or FULL,
etc. Data which arise from measurement usually occur on a continuous scale and
are called variable data, e.g. temperature, flow, pressure, weight, density, etc.

Data comes in two types of packages:

Variables Data - Quantitative data (temperature, blood pressure,
widgets/hour, etc.) as measured or observed. Variable data is further
categorised into continuous data, which can take any numeric value, and
discrete data, which is restricted to integers.
Attributes Data - Qualitative data ('abnormal', 'defective', etc.) or quantitative
data derived from qualitative data (number of defects per part, abnormals per
1000 patient days, number of unplanned readmissions). Variable data is
generally preferred in SPC because it provides more information. For example,
labelling a test result 'abnormal' does not convey any information about how
abnormal the result was. At times, however, attributes data provide the only
meaningful data.

The object of data collection is to analyse and extract, using statistical


methods, information on which control action can be taken. The data should be
obtained in a form which will simplify the subsequent analysis. The first basic
rule is to plan and construct the proformas for data collection. These should contain
not only the purpose of the observation and its characteristics, but also the date,
the observer, the sampling plan, the instruments used for measurement, the
methods and so on. SCADA software should contain a number of data sheet
templates to facilitate the design of these proformas.
In applying a systematic approach to process control there are two basic
rules:

i) Record all data.


The data is usually collected by the SCADA system and stored in a
database. The information recorded can be used to determine the
magnitude of process variations, and stability and trends of the
input/output variables.
ii) Use appropriate technique.
A wide range of simple problem-solving and data-handling techniques is
available on most SCADA systems. These are briefly explained in the
following sections.

1.1  1.0  0.7  0.4  0.5  0.9  0.7  0.4
0.1  0.8  0.3  0.4  0.5  0.5  0.2  0.8
0.6  0.7  0.1  1.2  0.6  0.7  0.3  0.5
0.0  0.6  0.4  0.9  0.2  1.0  0.7  0.6
0.9  0.8  0.6  0.4  1.1  0.7  0.8  0.3
0.6  1.0  0.8  0.7  0.5  1.0  0.3  0.7
0.9  0.4  0.9  0.5  0.8  0.5  0.6  0.2

Table 5.1 Concentration

conc.  0    0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0  1.1  1.2
freq.  1    2    3    4    6    7    7    8    6    5    4    2    1

Table 5.2 Frequency distribution

[Figure not reproduced: bar chart of frequency against concentration, with frequencies from 0 to 8 over the concentration range -0.2 to 1.4.]
Fig. 5.3 Bar chart of the concentration

5.2.1 Bar Charts and Histograms

5.2.1.1 Bar chart

The bar chart provides a pictorial presentation of the 'central tendency',
or average, and the 'dispersion', or spread, of the range of results. The bar chart
also shows the lowest and highest values of the measured variable.

The bar chart can be drawn horizontally and can use lines or dots rather
than bars. Microsoft Excel has tools for plotting different types of bar chart.
Table 5.1 shows the data measured for the concentration of a chemical product.
Table 5.2 is obtained by calculating the frequency of the concentration at values
0 to 1.2. The bar chart is the plot of frequency against concentration, as shown in
Fig. 5.3.
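As a small illustrative sketch, the frequency distribution of Table 5.2 can be reproduced from the raw data of Table 5.1; any spreadsheet or plotting package could then draw the bar chart of Fig. 5.3. The text-based bar output is simply for illustration.

```python
from collections import Counter

# Concentration data of Table 5.1, read row by row
concentration = [
    1.1, 1.0, 0.7, 0.4, 0.5, 0.9, 0.7, 0.4,
    0.1, 0.8, 0.3, 0.4, 0.5, 0.5, 0.2, 0.8,
    0.6, 0.7, 0.1, 1.2, 0.6, 0.7, 0.3, 0.5,
    0.0, 0.6, 0.4, 0.9, 0.2, 1.0, 0.7, 0.6,
    0.9, 0.8, 0.6, 0.4, 1.1, 0.7, 0.8, 0.3,
    0.6, 1.0, 0.8, 0.7, 0.5, 1.0, 0.3, 0.7,
    0.9, 0.4, 0.9, 0.5, 0.8, 0.5, 0.6, 0.2,
]

freq = Counter(round(c, 1) for c in concentration)   # frequency distribution (Table 5.2)
for value in sorted(freq):
    print(f"{value:4.1f}  {'*' * freq[value]}  ({freq[value]})")   # simple text bar chart
```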

5.2.1.2 Histograms

When the number of observations is large, the picture of the data can be
improved by studying the frequency with which the observations lie within a
limited number of intervals. It is often more useful to present the data in the
condensed form of a grouped frequency distribution.

[Figure not reproduced: an example pie chart with segments labelled x, y, z, a and b.]
Fig. 5.4 Pie chart

5.3 Graphs

Graphs can be drawn in many different ways. Some types of graph are
briefly described here.

Line graphs: the observations of one parameter are plotted against another
parameter and the consecutive points are joined by lines.
Pie charts: used to present proportions; usually limited to a small number of
variables. An example of a pie chart is shown in Fig. 5.4.

5.4 Data Description

Numerical descriptive measures are commonly used to convey a mental
image of pictures, objects and other phenomena. The two most common
numerical descriptive measures are measures of central tendency and measures
of variability. The first measure of central tendency is the mode.

[Figure not reproduced: a frequency distribution marking the mode, median and mean, with the relationship mean − mode = 3(mean − median).]
Fig. 5.5 The normal distribution

Mode: The mode of a set of measurements is defined to be the


measurement that occurs most often (with highest frequency).
Median: The median of a set of measurements is defined to be the middle
value when the measurements are arranged from lowest to highest.
Mean: The mean of a set of measurements is defined as the sum of the
measurements divided by the total number of measurements.
Fig. 5.5 shows the relationship between mean, median and mode.
Range (R): The range of a set of measurements is defined to be the
difference between the largest and the smallest measurements of the set.
Range is the simplest measure of scatter.

Variance: The variance of a set of measurements y1, y2, ..., yn with mean ȳ is
the sum of the squared deviations divided by n − 1:

s² = [ Σ_{i=1..n} (y_i − ȳ)² ] / (n − 1)

Standard Deviation (σ): The standard deviation of a set of
measurements is defined to be the positive square root of the variance. It is a
measure of the deviation of the values from the mean.
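The measures defined above can be computed directly; the following is a minimal sketch using Python's standard statistics module on a small illustrative sample (the numbers are invented for illustration, not taken from the text).

```python
import statistics as st

sample = [6.1, 5.9, 6.0, 6.2, 5.8, 6.0, 6.3, 6.0]   # illustrative measurements only

mean   = st.mean(sample)                  # sum of measurements / number of measurements
median = st.median(sample)                # middle value of the ordered data
mode   = st.mode(sample)                  # most frequently occurring value
rng    = max(sample) - min(sample)        # range R
var    = st.variance(sample)              # sum of squared deviations / (n - 1)
sigma  = st.stdev(sample)                 # standard deviation = sqrt(variance)
print(mean, median, mode, rng, var, sigma)
```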

5.5 Process Variations

Most variables can be represented or approximated by a normal
distribution. The spread of values on the normal distribution curve may be
measured in terms of the standard deviation σ. Fig. 5.6 shows the proportion of
the population expected to be found within ±σ, ±2σ and ±3σ of the mean: 68.3%,
95.4% and 99.7% of values respectively.

[Figure not reproduced: the normal curve with the ±σ, ±2σ and ±3σ bands marked.]
Fig. 5.6 Standard deviation

The mean values of samples will show less scatter than the individual results.
The relationship between the standard deviation σ, the sample size n and the
standard error of the means (SE) is SE = σ/√n.

5.6 Process Control


To control a process using variable data, it is necessary to keep a check on
the current state of the accuracy (central tendency) and the precision (spread) of
the distribution of the data. This may be achieved with the aid of control charts.

5.6.1 Mean Chart

When the process is stable, the individual results are expected to lie within
the range X̄ ± 3σ, where X̄ is the grand process mean. Fig. 5.7 shows the principle
of the mean control chart. If we are sampling from a stable process, most of the
sample means will lie within the range X̄ ± 3SE. We can use the mean chart as a
template to decide whether the means are varying by an expected or unexpected
amount, judged against the known degree of random variation. We can also use it
in a control sense to estimate whether the means have moved by an amount
sufficient to require a change to the process.
If the process is running satisfactorily, we expect 99.7% of the means of
successive samples to lie between the lines marked upper action and lower action.
The chance of a point falling outside either of these lines is approximately 1/1000.
The chance of a sample mean falling outside a warning line is approximately 1/40.
The presence of unusual patterns such as runs or trends, even when all sample
means and ranges are within zone 1, may also be evidence of changes in process
average or spread. This may be the first warning of unfavourable conditions, which
should be corrected even before points occur outside the warning or action lines.
Conversely, certain patterns or trends could be favourable and should be studied
for possible permanent improvement of the process.
[Figure not reproduced: the mean chart format, showing the distribution of sample means against the individual population, with the stable zone 1 about the process mean, warning limits at ±2σ/√n (zone 2, chance ca 1/40) and action limits at ±3σ/√n (zone 3, chance ca 1/1000).]
Fig. 5.7 The mean and range chart format

The formulae for setting the action and warning lines on mean charts are:

Upper action line at X̄ + 3σ/√n
Upper warning line at X̄ + 2σ/√n
Process mean at X̄
Lower warning line at X̄ − 2σ/√n
Lower action line at X̄ − 3σ/√n

The table in Appendix 5.A may be used to calculate the control limits for the mean
chart.

5.6.2 Range Chart

The control limits on the range chart are asymmetrical about the mean
range since the distribution of sample ranges is a positively skewed distribution.
The table in Appendix 5.B may be used to calculate the control limits for range
chart.

5.7 Assessment of Process Stability


1. Select a series of random samples of size n (4 < n < 12) to give a total number
of individual results between 50 and 100.
2. Measure the variable of interest for each individual item.
3. Calculate x̄, the sample mean, and R, the sample range, for each sample.
4. Calculate the process mean X̄ (the average of the sample means) and the mean
range R̄ (the average of the sample ranges).
5. Plot all the values of x̄ and R and examine the charts for any possible
miscalculations.
6. Calculate the values for the action and warning lines for the mean and range
charts using Appendix 5.A (for the mean) and Appendix 5.B (for the range).
7. Draw the mean and range charts.
8. Examine the charts – is the process in statistical control?

If the process is under control, there will be (see Fig. 5.8):

a. No mean or range values which lie outside the action limits.

b. No more than about 1 in 40 values between the warning and action limits.

c. No incidence of two consecutive mean or range values which lie outside the
same warning limit on either the mean or range charts.

d. No run or trend of five or more values which also infringes a warning or
action limit.

e. No run of more than six sample means which lie either all above or all below
the process mean.

f. No trend of more than six values of the sample means which are either rising
or falling.
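As a minimal sketch (not part of the original text), two of the criteria above can be checked automatically for a series of sample means; the limit values and data passed in would come from the charts drawn in steps 6 and 7, and the function names are illustrative.

```python
def outside_action_limits(means, lower_action, upper_action):
    """Criterion (a): indices of sample means falling outside the action limits."""
    return [i for i, m in enumerate(means) if m < lower_action or m > upper_action]

def long_runs(means, process_mean, max_run=6):
    """Criterion (e): indices where a run of more than max_run means lies
    entirely above or entirely below the process mean."""
    run, side, flagged = 0, 0, []
    for i, m in enumerate(means):
        s = 1 if m > process_mean else -1 if m < process_mean else 0
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run > max_run:
            flagged.append(i)
    return flagged
```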
[Figure not reproduced: six example chart patterns (a)–(f) corresponding to the criteria above.]
Fig. 5.8 Out-of-control processes

5.8 Process Capability Indices


It is important not only to measure outcomes from a process but relevant
process measures as well. 'Relevant' in this case refers to those measures that
indicate a process problem or predict outcome measures - either as causal
variables or risk factors. Thorough process knowledge, based on flowcharts and
cause-effect diagrams, would readily identify important process measures. It is
imperative, therefore, that this level of process knowledge is attained before
establishing either process or outcomes measures.

The primary use of statistical process control is to determine the cause of


variation in the indicator under study. SPC tools decompose variation into two
causes:

Special (or assignable) cause - Variation caused when some unusual or
external event occurs. When a special cause occurs, usually only one cause is at
fault; the cause is identified and the data point removed in order to calculate the
true control limits. No process improvement effort is possible until the special
cause is determined and removed. Attempting to improve a process
containing special cause variation only increases the instability and variation
in the process.
Common cause - the normal day-to-day variation in the indicator of interest.
When ONLY common cause variation is present no one cause is to blame for
process performance; any process improvement effort must consider all
sources of variation. A process with only common cause variation is also
stable and predictable, with the control limits serving as a measure of process
capability.

In formalising a relationship between the random component of the
variations within a process and the tolerances, we make use of the standard
deviation, σ, of the process. We may express the approximate width of the random
variations as equal to six standard deviations, 6σ, and recall that this covers 99.7%
of the actual values if the distribution is normal. The specification is only met if the
distance between the upper specification limit (USL) and the lower specification
limit (LSL) is greater than this width, the base of the bell. Thus, the process
capability index can be defined as:

Cp = (USL − LSL) / (6σ)

Clearly any value of Cp below 1 means that the width of the random
variations is already greater than the specified tolerance band, so the process is
incapable. For increasing values of Cp the process becomes increasingly capable.
This index is accurate only if the distribution is correctly centred about the mid-
specification.
Another index, Cpk, measures the distance between the process mean and
both the upper and lower specification limits and expresses this as a ratio of half
of the bell width:

Cpk = min[ (USL − X̄) / (3σ), (X̄ − LSL) / (3σ) ]

A Cpk of 1 or less means that the width of the distribution bell and its centring are
such that it infringes one of the tolerance limits and the process is not capable.
Increasing values of the Cpk index correspond to increasing capability.
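The two indices are straightforward to compute; the following short sketch uses an illustrative BOD specification of 5.0–7.0 mg/l and the process statistics from the example in the next section (the specification limits are an assumption made purely for illustration).

```python
def capability_indices(usl, lsl, mean, sigma):
    """Cp compares the tolerance band with the 6-sigma process width;
    Cpk also penalises off-centre running."""
    cp  = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
    return cp, cpk

# Assumed specification 5.0-7.0 mg/l; process mean 6.05 and sigma 0.24 as in Section 5.9
print(capability_indices(usl=7.0, lsl=5.0, mean=6.05, sigma=0.24))
```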

5.9 Example
Consider the control of BOD at a desired set point of 6 mg/l. The overall
variability of such a process may be determined by measuring a large sample, say
100 measurements, from the process as shown in Table 2. The histogram for the
data is shown in Fig. 5.9. The mean and range chart limits are calculated as
follows:

Process mean X̄ = 6.05
Mean range R̄ = 0.5

[Figure not reproduced: a distribution with the 2.5% warning and 0.1% action tails marked.]
Fig. 5.9 Distribution of sample ranges

Mean Chart
From Appendices 5.A and 5.B, for a sample size n = 4, d2 = 2.059; therefore
σ = 0.5/2.059 = 0.24 and:

Upper action line 6.05+(3*0.24/2) = 6.41


Upper warning line 6.05+(2*0.24/2) = 6.29
Lower warning line 6.05-(2*0.24/2) = 5.81
Lower action line 6.05-(3*0.24/2) = 5.69

Range Chart
Upper action line 2.75 × 0.5 = 1.37
Upper warning line 1.93 × 0.5 = 0.97
Lower warning line 0.29 × 0.5 = 0.15
Lower action line 0.1 × 0.5 = 0.05

The action and warning lines are shown schematically in Fig. 10: the mean chart
and its distribution (Fig. 10.a and Fig. 10.b) and the range chart and its distribution
(Fig. 10.c and Fig. 10.d). It is clear that this system is under statistical control.
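As a minimal sketch, the chart-limit calculations of this example can be reproduced from X̄ = 6.05, R̄ = 0.5 and n = 4, using the constants quoted above and in Appendices 5.A and 5.B.

```python
from math import sqrt

xbar, rbar, n = 6.05, 0.5, 4          # grand mean, mean range and sample size
d2 = 2.059                            # Hartley's constant for n = 4 (Appendix 5.A)
sigma = rbar / d2                     # estimated process standard deviation (~0.24)
se = sigma / sqrt(n)                  # standard error of the sample means

mean_chart = {
    "upper action":  xbar + 3 * se,   # ~6.41
    "upper warning": xbar + 2 * se,   # ~6.29
    "lower warning": xbar - 2 * se,   # ~5.81
    "lower action":  xbar - 3 * se,   # ~5.69
}
# Range chart limits as multiples of the mean range (constants as quoted in the example)
range_chart = {
    "upper action":  2.75 * rbar,
    "upper warning": 1.93 * rbar,
    "lower warning": 0.29 * rbar,
    "lower action":  0.10 * rbar,
}
print(mean_chart, range_chart)
```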
Table 2 The BOD data used in the example
Sample
No (i) (ii) (iii) (iv) M R M F
1 5.8 5.8 6.2 5.8 5.9 0.4 3 7 5.8 1
2 6 6 5.4 6.1 5.9 0.7 2 5.9 3
3 5.8 5.7 5.7 6.2 5.9 0.5 7 6 10
4 6.2 5.8 6.1 5.9 6.0 0.4 10 6.1 8
5 6.3 6.1 5.9 6.3 6.2 0.4 2 6.2 2
6 6.3 6 6.3 5.9 6.1 0.4 8 6.5 1
7 6 5.8 5.9 6.2 6.0 0.4 R F
8 5.7 5.9 6.2 6.2 6.0 0.5 0.1 1
9 7.4 6 6.2 6.3 6.5 1.4 1 1 0.2 2
10 5.8 5.9 6.3 6.2 6.1 0.5 0.3 1
11 6.1 6 6.2 6.1 6.1 0.2 2 0.4 7
12 6.2 5.8 6.1 5.9 6.0 0.4 0.5 7
13 6.1 5.8 6.9 5.7 6.1 1.2 1 0.6 2
14 5.8 6.4 5.7 6 6.0 0.7 0.7 2
15 6 5.8 6 6.3 6.0 0.5 0.8 1
16 5.9 5.7 6.3 6 6.0 0.6 2 1.2 1
17 6.2 6 6 5.9 6.0 0.3 1 1.4 1
18 6.3 5.9 5.9 6.1 6.1 0.4
19 6.1 6.2 6.1 6 6.1 0.2
20 6.2 5.7 6 6 6.0 0.5
21 5.8 6.2 6 6.4 6.1 0.6
22 6.1 5.9 6.4 6.2 6.2 0.5
23 5.7 6.2 6.2 6 6.0 0.5
24 6 6.1 6 6.1 6.1 0.1 1
25 6.2 5.6 5.9 5.4 5.8 0.8 1 1
25 25

[Figures not reproduced: Fig. 10.a The BOD mean chart; Fig. 10.b The BOD mean distribution; Fig. 10.c The BOD range chart; Fig. 10.d The BOD range distribution.]

5.10 Conclusion

Statistical process control (SPC) provides simple, yet powerful, tools for
managing a process while avoiding process tampering. A process 'in control' (i.e.
exhibiting no special cause variation) is ripe for breakthrough process
improvement. A process still burdened with special cause variation is still in the
problem-solving stage.

5.11 Further Readings


Numerous excellent SPC books are in print; examples are:
1. John F. Early (ed.), Quality Improvement Tools, Juran Institute, Wilton, CT.
2. Eugene L. Grant and R. S. Leavenworth, Statistical Quality Control, ASQC
Quality Press.
3. Douglas C. Montgomery, Introduction to Statistical Quality Control, ASQC
Quality Press.
4. John W. Moran, R. P. Talbot and R. M. Benson, A Guide to Graphical
Problem Solving Processes, ASQC Quality Press.
5. Edward R. Tufte, The Visual Display of Quantitative Information, Graphics
Press, Cheshire, CT.
6. Edward R. Tufte, Envisioning Information, Graphics Press, Cheshire, CT.
7. Steven M. Zimmerman and R. N. Zimmerman, SPC Using Lotus 1-2-3,
ASQC Quality Press.

Appendix 5.A Constants used in the design of control charts for mean

n    d_n (or d_2)   A1    2/3 A1   A2    2/3 A2   A3    2/3 A3
2 1.128 2.12 1.41 1.88 1.25 2.66 1.77
3 1.693 1.73 1.15 1.02 0.68 1.95 1.30
4 2.059 1.50 1.00 0.73 0.49 1.63 1.09
5 2.326 1.34 0.89 0.58 0.39 1.43 0.95
6 2.534 1.20 0.82 0.48 0.32 1.29 0.86
7 2.704 1.13 0.76 0.42 0.28 1.18 0.79
8 2.847 1.06 0.71 0.37 0.25 1.10 0.73
9 2.970 1.00 0.67 0.34 0.20 1.03 0.69
10 3.078 0.95 0.63 0.31 0.21 0.98 0.65
11 3.173 0.90 0.60 0.29 0.19 0.93 0.62
12 3.258 0.87 0.58 0.27 0.18 0.89 0.59

Formulae: σ = R̄ / d_n (or R̄ / d_2)

Mean charts:
Action lines at X̄ ± A1 σ, or X̄ ± A2 R̄, or X̄ ± A3 s
Warning lines at X̄ ± (2/3) A1 σ, or X̄ ± (2/3) A2 R̄, or X̄ ± (2/3) A3 s

5.12 Appendix 5.B Constants used in the design of control charts for range

Constants for use with the mean range (R̄), constants for use with the standard
deviation (σ), and constants for use in USA range charts:

(n)  D'0.999  D'0.001  D'0.975  D'0.025  D0.999  D0.001  D0.975  D0.025  D2  D4
2 0.00 4.12 0.04 2.81 0.00 4.65 0.04 3.17 0 3.
3 0.04 2.98 0.18 2.17 0.06 5.05 0.30 3.68 0 2.
4 0.10 2.57 0.29 1.93 0.20 5.30 0.59 3.98 0 2.
5 0.16 2.34 0.37 1.81 0.37 5.45 0.85 4.20 0 2.
6 0.21 2.21 0.42 1.72 0.54 5.60 1.06 4.36 0 2.
7 0.26 2.11 0.46 1.66 0.69 5.70 1.25 4.49 0.08 1.
8 0.29 2.04 0.50 1.62 0.83 5.80 1.41 4.61 0.14 1.
9 0.32 1.99 0.52 1.58 0.96 5.90 1.55 4.70 0.18 1.
10 0.35 1.93 0.54 1.56 1.08 5.95 1.67 4.79 0.22 1.
Action lines:
Upper = D'0.001 × R̄  (or D0.001 × σ)
Lower = D'0.999 × R̄  (or D0.999 × σ)

Warning lines:
Upper = D'0.025 × R̄  (or D0.025 × σ)
Lower = D'0.975 × R̄  (or D0.975 × σ)

Control limits (USA): Upper = D4 × R̄,  Lower = D2 × R̄



6 Sensors and Actuators

Objective
The objective of this section is to provide an overview, for the non-
instrument engineer, of the role, operation and problems associated with the main
sensors and actuators used in the control systems of the wastewater treatment
industry. To this end the section will cover:

the physical measurement of level and flow;

the analytical measurements of Dissolved Oxygen (DO), suspended solids and
pH/nitrates (these analytical and physical measurements involve a range of
techniques, including ultrasonic and optical techniques);
centrifugal and positive displacement pumps.

6.1 Physical Measurement: Level

The measurement of level may be required not only for determining the
height of fluid in a tank or channel but may be used within other measurement
schemes, for example, to determine the flow rate over a weir. The output can be
monitored or used to provide a control signal to operate valves or penstocks. The
most common techniques are to use either a sensor based on ultrasonic signals or a
capacitive device whose electrical capacitance, and hence voltage or current
output, depends on the level of material (liquid) between the two capacitive
plates.

6.1.1 Ultrasonic level sensor

Ultrasonic techniques depend on an ultrasonic pulse being transmitted
through a medium and the time taken for the return pulse, or echo, being
measured. By knowing the velocity and the time taken, a
measure of distance can be obtained. Figure 0.11 shows an application of
ultrasound techniques to level measurement. In one situation the meter is placed
above the tank and the time taken for the pulse to be reflected from the surface of
the fluid is measured. In the second example, the transducer is place below the
tank and the time taken for the echo to reach the surface of the liquid and return is
measured. There are also reflections from the other boundaries on the path of the
pulse, for example, the tank-fluid interface, or even from strata within the liquid.
This can be used for detecting the boundary between clear water and sediment.

[Figure not reproduced: ultrasonic transducers mounted above and below a tank.]
Figure 0.11 Ultrasonic level measurement
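As a minimal sketch of the pulse-echo calculation for a top-mounted transducer, the distance to the liquid surface is simply velocity × time / 2; the sound velocity and timing figures below are illustrative assumptions.

```python
def level_from_echo(transducer_height_m, echo_time_s, sound_velocity_m_s=343.0):
    """Top-mounted transducer: the pulse travels down to the surface and back,
    so the distance to the surface is v * t / 2."""
    distance_to_surface = sound_velocity_m_s * echo_time_s / 2.0
    return transducer_height_m - distance_to_surface   # liquid level above tank bottom

# Example: transducer 4 m above the tank floor, echo returns after 14.6 ms in air
print(level_from_echo(4.0, 0.0146))   # ~1.5 m of liquid
```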

6.1.2 Capacitance level sensor

A capacitor consists of two plates separated by a dielectric material.


Figure 0.12 shows a capacitive level sensor which is comprised of two conductive
plates on a backing strip which is placed on the edge of a tank containing fluid
(the dielectric material in this case). The capacitance depends on the distance
between the plates and the area of the plates, as well as the dielectric. However,
for a capacitive level sensor the distance and area are fixed and it is the change in
the dielectric that causes a change in capacitance. When the fluid level in the tank
changes, the amount of dielectric between the plates alters, which produces a
change in the capacitance. This is measured as a change in the potential difference
between the two plates.


[Figure not reproduced: a capacitive level sensor, with positive and negative plates, mounted at the edge of a tank of fluid.]
Figure 0.12 Capacitive level sensor

Another common capacitive level sensor is formed from two concentric
cylindrical plates, with the fluid (dielectric) rising and falling between the
cylindrical plates. Both of the above level sensing techniques (ultrasonic and
capacitive) provide an electrical output which can, with appropriate signal
conditioning, be sent to a local or central monitoring system.

6.2 Physical Measurement : Flow

The flow rate of a process is calculated in terms of either the volumetric
or the mass flow rate. The volumetric flow along a pipe can be written as

Q = A × v

where Q: volumetric flow rate (m³/s)
A: cross-sectional area of pipe (m²)
v: flow velocity (m/s)

If the volumetric flow rate and fluid density are known, the mass flow
rate can be calculated from:

W = Q × ρ

where W: mass flow rate (kg/s)
Q: volumetric flow rate (m³/s)
ρ: density (kg/m³)
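As a short illustration of these two relationships, the following sketch computes Q and W for a circular pipe; the pipe diameter, velocity and density values are illustrative assumptions.

```python
from math import pi

def volumetric_flow(diameter_m, velocity_m_s):
    area = pi * diameter_m**2 / 4.0        # cross-sectional area A (m2)
    return area * velocity_m_s             # Q = A * v  (m3/s)

def mass_flow(q_m3_s, density_kg_m3):
    return q_m3_s * density_kg_m3          # W = Q * rho  (kg/s)

q = volumetric_flow(diameter_m=0.3, velocity_m_s=1.2)   # ~0.085 m3/s
w = mass_flow(q, density_kg_m3=1000.0)                  # ~85 kg/s for water
```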

Many flowmeters used in industry use the relationship that the


differential pressure (d.p.) or head of liquid produced is proportional to the square
of flowing velocity, (for example, orifices, pitot tubes and weirs). Square root
extractors are available for linearising the flow signal and are often built into
many of the transmitters. However, the square relationship between flow and d.p.
reduces the accuracy of these flowmeters at low flow. They also do not give true
volumetric or mass flow, since the density affects both the velocity of the flow and
the head of liquid. Therefore, if the density varies considerably, compensation
should be applied.

Volumetric flow: Q = k1 √(Δp / ρ)

Mass flow: W = k2 √(Δp × ρ)

where k1, k2: factor dependent on the type of flowmeter (m²)

Δp: differential pressure (N/m²)
ρ: density of fluid (kg/m³)

In the water industry, weirs or flumes are often used to measure open
channel flow, whilst the nature of the liquid/solids in pipes may require non-
invasive flow rate measurements such as magnetic flowmeters or ultrasonic
techniques (Doppler meters, transit-time meters).

6.2.1 Weirs and flumes

A weir is a flat bulkhead which has a specially shaped notch (rectangular
or V-shaped) in its upper edge (Figure 0.13). The flow rate is then determined
from:

V-shaped notch: Q = FV CV √(2g h⁵)

Rectangular notch: Q = FR CR √(2g h³)

where FV, FR: flow coefficient for the (V-shaped or rectangular) notch
CV, CR: coefficient dependent on the geometrical shape of the notch
h: head over the weir crest (m)
g: gravitational constant (9.8 m/s²)
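The following is a minimal sketch of the notch relationships above. The combined discharge coefficients used here are illustrative assumptions only; in practice they are obtained from the weir geometry and calibration.

```python
from math import sqrt

def v_notch_flow(head_m, coeff=0.44, g=9.8):
    """Q = coeff * sqrt(2 * g * h^5), with coeff standing in for FV*CV (assumed value)."""
    return coeff * sqrt(2.0 * g * head_m**5)

def rectangular_notch_flow(head_m, coeff=1.8, g=9.8):
    """Q = coeff * sqrt(2 * g * h^3), with coeff standing in for FR*CR (assumed value)."""
    return coeff * sqrt(2.0 * g * head_m**3)

print(v_notch_flow(0.25))   # flow in m3/s for a 0.25 m head, illustrative coefficient
```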

The flow rate calculation requires the measurement of 'head', or level, which a
suitably placed level sensor can provide. Often, the level sensor is placed in a
'still well' adjacent to the flow channel. As the flow through the channel increases,
the level rises simultaneously in both the channel and the still well; hence the
level in the still well reflects the flow rate.

[Figure not reproduced: a V-notch weir in an open channel, with a capacitive level sensor in a still well beside the flow.]
Figure 0.13 V-notch weir for flow rate measurement

6.3 Flumes

A flume is a specifically designed construction placed in the flow path. It


comprises a converging section to restrict the flow, a throat section and a
diverging section (Figure 0.14). Discharge through a flume can occur for two
conditions of flow:
Free flow: This occurs when backwater effects do not restrict discharge from the
flume. The head in the approach channel can therefore be related to the flow rate
by an equation similar to those given above for weirs:

Flume: Q = FF CF √(2g h³)

With free flow a 'standing wave' may occur downstream of the flume.

Submerged flow: This occurs when the water level downstream of the flume is
sufficiently high to reduce the discharge. This results in a reduction in velocity, an
increase in flow depth and causes a backwater effect at the flume. Submerged
flow requires measurements both upstream and at the throat to determine the flow
rate.
[Figure not reproduced: a flume with converging section, throat and diverging section, and a level sensing device upstream.]
Figure 0.14 A flume

6.3.1 Magnetic Flowmeters

Flow of conductive liquids can be measured by a magnetic flowmeter


whose lack of obstructions makes it useful for slurries or sludge in waste water
treatment.

Advantages:
These meters do not create a pressure drop.
They are volumetric devices in that the velocity is measured directly, so
variations in density do not affect the accuracy.
They do not provide any obstruction to the flow.


They produce an electrical output which can be used by an electronic
control system.
Principle of operation

Magnetic flowmeters use the principle of Faraday's law of
electromagnetic induction: when a conductor moves through a magnetic field at
right angles to the field, an induced voltage is developed. In the case of the
magnetic flowmeter this voltage can be expressed as:

E = constant × D × B × v

where E: induced voltage (volts)
D: distance between the probes (m)
B: intensity of magnetic field (Tesla or Volt·s/m²)
v: velocity of motion of the conductor (in this case the conducting
liquid) (m/s)

Since the volumetric flow rate is given by Q = A × v and A = πD²/4, the
induced voltage is given by:

E = constant × (4B / (πD)) × Q

All the terms in brackets are held constant and therefore the induced voltage is
proportional to the flow rate.
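As a short sketch, the relation above can be inverted to infer the flow rate from the measured voltage; the meter constant, field strength and pipe diameter below are illustrative assumptions, not values from the text.

```python
from math import pi

def flow_from_voltage(e_volts, k=1.0, b_tesla=0.05, d_m=0.2):
    """E = k * (4*B / (pi*D)) * Q, so Q = E * pi * D / (4 * k * B)."""
    return e_volts * pi * d_m / (4.0 * k * b_tesla)

print(flow_from_voltage(2.0e-3))   # ~0.0063 m3/s for a 2 mV induced voltage (illustrative)
```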

A diagram of a pulsed DC flow meter is shown in Figure 0.15. The flow


meter consists of a straight length of non-magnetic pipe (called the metering
section). The conductor is the fluid flowing in the pipe. Magnets placed on either
side of the pipe produce the magnetic field. The direction of induced voltage will
then be perpendicular to both the motion of the flow and to the magnetic field
generated. Two electrodes project through the pipe lining to pick up the induced
voltage.

[Figure not reproduced: a pulsed DC magnetic flowmeter, showing the electrodes, the output voltage Vout and the pulsed DC input.]
Figure 0.15 Pulsed DC magnetic flowmeter

The pulsed DC magnetic flowmeter is often used in preference to an AC
or DC flowmeter since it reduces pickup noise and interference from other voltage
supplies or transmission lines, and the pulsed DC input reduces the build-up of
material on the electrodes.

6.3.2 Ultrasonic flow measurement

The basic principle of transmitting and receiving ultrasonic pulses and


timing the echo was used in level measurement. In this section we apply the same
techniques of using sonic pulses to the problem of measuring flow.

Transit-time meters (time-of-flight meters)


Figure 0.16 shows a typical flow measurement using ultrasonic pulses.

[Figure not reproduced: an ultrasonic transmitter/receiver pair mounted diagonally across the flow.]
Figure 0.16 Flow measurement using ultrasonic pulses

The time taken for the transmission of an ultrasonic pulse over a given
distance with and against the flow is recorded, giving times t1 and t2 respectively.
The velocity = distance/time relationship then yields:

t1 = L / [cos θ (c + v cos θ)]

t2 = L / [cos θ (c − v cos θ)]

where v: flow velocity, at angle θ to the ultrasonic beam
L: longitudinal distance along the pipe between transmitter and receiver
c: velocity of sound in the fluid

From which it can be deduced that v = [L / (2 cos²θ)] × (1/t1 − 1/t2).
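The transit-time relation above reduces to a simple calculation once the two times are measured; a minimal sketch follows, with illustrative geometry and timing values (c is about 1480 m/s in water, but note that c cancels out of the final expression).

```python
from math import cos, radians

def transit_time_velocity(t1_s, t2_s, l_m, angle_deg):
    """v = L / (2 * cos^2(theta)) * (1/t1 - 1/t2), with t1 measured with the flow."""
    theta = radians(angle_deg)
    return l_m / (2.0 * cos(theta)**2) * (1.0 / t1_s - 1.0 / t2_s)

# Example: 0.3 m longitudinal spacing, 45 degree beam; illustrative transit times
print(transit_time_velocity(t1_s=2.8600e-4, t2_s=2.8633e-4, l_m=0.3, angle_deg=45.0))
# ~1.2 m/s for these illustrative times
```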
Doppler meters:

Another form of ultrasonic flow measurement uses the Doppler effect.
The Doppler meter is based on the fact that if sound waves between a transmitter
and receiver are in relative motion, then the change in frequency at the receiver
depends on the relative velocities of the transmitter and receiver. This can be
applied as shown in Figure 0.17.
[Figure not reproduced: a Doppler meter with a 1–10 MHz pulse generator and transmitter, scattering centres in the flow, and a receiver with analysis of the frequency shift.]
Figure 0.17 Flow measurement using a Doppler meter

The fluid, into which the ultrasonic pulse is transmitted, causes scattering
of the sound. This scattered sound is received by a transducer and the resulting
frequency analysed. The shift in frequency, which may be of the order of 100 Hz,
can then be determined by further processing.

Both the magnetic flowmeter and the meters based on ultrasonic
techniques have the advantage that they are non-invasive and have less
need for servicing than other contact-flow devices.

6.4 Analytical Measurement: Ion selective electrodes

(pH, Nitrates)
The monitoring of certain chemicals such as pH or nitrate may be required to
satisfy various legal requirements, or to form the measurement within a closed
loop feedback control system. This section examines the use of ion selective
electrodes for determining the concentration of various ions in a liquid.

6.4.1 Ion selective electrodes

The concentration of hydrogen (pH) or nitrate ions in solutions can be


determined by using an ion-selective electrode (ISE). Figure 0.18 shows an ISE
immersed in a solution. A reaction will take place between the charged substance

in the solution and that on the measurement sensor surface. When a chemical
equilibrium is reached between the two substances, there is a corresponding
equilibrium potential difference between the sensor and the solution. This
potential difference is caused by the concentration or activity of mainly one
selected ion. However, the ion-selective electrode only forms one half of an
electrochemical cell and therefore must be used in conjunction with a reference
electrode immersed in the same solution. The purpose of the reference electrode is
to provide an electrode at constant potential (also constant temperature) against
which to measure the potential of the ISE.

[Figure not reproduced: measurement and reference electrodes immersed in the same solution and connected to a high input impedance meter or buffer amplifier.]
Figure 0.18 Measurement and reference electrodes

6.4.2 Example of an Ion Selective Electrode: pH measurement

Figure 0.19 shows a typical pH measurement electrode where the thin


pH sensitive membrane, which allows the transfer of ions, is a specially
formulated glass. This membrane forms the ‘liquid junction’ which permits the
transfer of ions from the solution to the KCl gel. This process achieves the
electrical continuity. A thin layer of hydrated silica forms on the wetted glass
which allows an equilibrium to be set up between the activities of H+ ions in this
layer and those in the test solution. Any increase in the acidity causes more H+
ions to be deposited in the layer, which raises its electrical potential relative to the
test solution.

[Figure not reproduced: a glass pH measurement electrode showing the screened connecting cable, potassium chloride gel, Ag–AgCl reference electrode and glass membrane.]
Figure 0.19 pH sensor (glass measurement electrode)

The common type of screened reference electrode is a silver-silver
chloride (Ag-AgCl) electrode in a strong solution or gel of potassium chloride
(KCl). Combination electrodes are now available where the reference electrode
surrounds the measurement electrode concentrically. An example of a solid state
electrode is shown in Figure 0.20. In these sensors the Ag/AgCl reference is
permanently charged with KCl to give a non-fluid junction.


[Figure not reproduced: a solid state combination electrode showing the epoxy body, Ag/AgCl reference, KCl-saturated wooden dowel and plug, epoxy barriers, temperature compensation element and glass pH measurement electrode.]
Figure 0.20 Solid state combination electrode with permanently charged reference
electrode
Remarks
Typical sensitivity of the glass electrode might be 30 mV (0.5 pH), and 5 mV
for reference electrodes.
The sensor signal must be conditioned since the electrode has a temperature-
dependent high electrical resistance (of the order of 100 to 300 MΩ). It must be
connected to a meter with a very high input resistance circuit or to a pre-amplifier
circuit. The sensor scaling varies with temperature and a zero shift also occurs,
so temperature compensation is usually applied.
A variety of 'sealed disposable' reference electrodes are offered as standard by
manufacturers.
Some manufacturers provide a level of self-diagnostics within their sensor
system. For example, a routine resistance check can be performed on the pH
glass electrode to ensure that no crack or break has occurred. Alternatively,
the reference sensor can contain two electrodes with a continuous comparison
to detect faulty performance.

6.5 Analytical Measurement: Dissolved Oxygen (DO)

Dissolved Oxygen is monitored during the activated sludge process in


order to control the aeration process within the tanks. It therefore requires a sensor
with an electrical output which can be used in the closed loop control system.

6.5.1 Amperometric DO sensor

The amperometric probe is a diffusion-type probe which relies on
constant and continuous oxygen transfer across a membrane in a single
direction. As oxygen diffuses through the membrane, an electrochemical reaction
takes place between the cathode and anode in the presence of the electrolyte
(Figure 0.21):

Anode: 2Pb + 2H2O = 2PbO + 4H+ + 4e−

Cathode: O2 + 4H+ + 4e− = 2H2O

The resulting current flow is proportional to the amount of oxygen diffusing
through the membrane.
[Figure not reproduced: a DO sensor with lead anode, inert cathode and replaceable membrane.]
Figure 0.21 DO sensor

Temperature compensation is required and is commonly achieved by use
of a PT100 resistance probe. A typical DO sensor may measure in the range
0-20 ppm or 0-250% saturation. However, the probe's measurement depends to
some extent on the flow available to replenish the consumed oxygen. In addition,
the corrosive by-products of the electrochemical reactions make periodic electrode
cleaning, replacement or recharging necessary. The membrane on the probe is
commonly made of Teflon and can be replaced as needed.

6.5.2 Equilibrium DO sensor

The equilibrium probe is based on maintaining an equal partial pressure
of oxygen inside and outside the probe. As oxygen is reduced at the cathode,
an equal amount of O2 is generated by the anode. This reaction continues until the
partial pressure of oxygen on both sides of the membrane is re-established, that is,
comes to an equilibrium.

The electrochemical reactions of the patented (Honeywell) equilibrium
probe are as follows (Figure 0.22):

At cathode: O2 + 4H+ + 4e− → 2H2O

At anode: 2H2O → O2 + 4H+ + 4e−
[Figure not reproduced: an equilibrium DO sensor with a permanent membrane and permanent internal electrolyte; oxygen is exchanged between the electrodes.]
Figure 0.22 Equilibrium DO sensor

6.6 Analytical Measurement: Turbidity and


Suspended solids

Turbidity (cloudiness) and suspended solids are often measured using
either the principle of light absorption or the principle of light scattering. In
wastewater treatment, the liquid is viewed as containing suspended solids until it
attains a certain density, at which point it is referred to as turbid. Both
measurements use the same techniques to determine an indication of the
suspended solids concentration or turbidity.

6.6.1 Light absorption techniques

An example of an absorption sensor is shown in Figure 0.23. The sensor
is immersed to the required depth in a tank. The light source (LED) is regulated
by the sensor system and emits light across an optical sensing gap to a photocell,
which senses the amount of light transmitted through the liquid. This therefore
produces an electrical signal related to the turbidity or the suspended solids
concentration of the liquid.

[Figure not reproduced: an immersion sensor with a light source and photocell either side of an optical sensing gap.]
Figure 0.23 Sensor using light absorption techniques

Microprocessor equipment can be linked to this sensor in order to
obtain a distribution of the turbidity/suspended solids over depth. The controller
can lower the sensor into a tank, taking readings at stages as it goes down; when
the sensor reaches the sludge blanket, the depth can be calculated and the sensor
raised again.

6.6.2 Scattered light technique

Figure 0.24 demonstrates the principle of using scattered light techniques


to measure a concentration of suspended solids. The light scattered depends on the
size of the particles; smaller particles scatter light at greater angles.
[Figure not reproduced: a dual-beam arrangement with a light source, lenses, the solution, and photocell arrays measuring the direct and scattered light.]
Figure 0.24 Suspended solids by light scattering

The light beam is focussed in the substance flow, and the dual-beam
method uses the measurement of both the direct and the 12° forward-scattered
light. The ratio between the scattered and direct light is calculated, which gives a
good correlation between particle concentration and the measured value. By using
a ratio calculation, disturbances affecting both forward and scattered light (such as
the colour of the solution, window coating and lamp ageing) are compensated.
Other models use a triple-beam system which measures the 90° scattered light
(smallest particles), the forward-scattered light (bigger particles) and the
transmitted beam.

Sensors may have ranges of 0-30, 0-100 or 0-300 mg/l, or 0-2, 0-20 or 0-200 ppm.

6.7 ‘Self-cleaning’ sensors

Many systems for wastewater treatment claim to be 'self-cleaning'.
Although they do perform a cleaning operation, it is generally true that they do not
remove the need for cleaning but only reduce the rate at which the sensor must be
cleaned. Some of the techniques used for self-cleaning are briefly commented on
below.

Ultrasonic systems are sometimes used to send sonic pulses to provide a
form of cleaning (for example, the ultrasonic cleaning systems used to clean
the electrodes in magnetic flowmeters).
In ISE systems, Fisher Rosemount can provide a process of encasing the
electrodes in a case containing four Teflon balls which are agitated by the
flow passing the electrodes and therefore continuously rub against and clean
the electrodes.
Systems containing microprocessors can provide time-monitored switching
on and off of jets of water for cleaning.
The Monitek CLAM system for self-cleaning provides a mechanical
manipulation of a piston attached to a wiping seal to sweep across the
surfaces of the light absorption sensor.

6.8 Actuators: Pumps

Within the wastewater industry the main actuators used in controlling
sewage flow around a wastewater site are valves, penstocks and pumps. Indeed,
many flowmeters are linked to valves/penstocks to provide, for example, storm
flow control. Pumps are used to transport solids, sludge and sewage to various
processing points in the treatment process. The two main categories of pump
discussed here are centrifugal pumps and positive displacement pumps.

6.8.1 Centrifugal pumps

The centrifugal pump (Figure 0.25) derives its name from the fact that the
fluid is driven outwards by the movement of the blades in the pump. The velocity
of the fluid is then reduced as it passes through a passage of gradually increasing
cross-sectional area; this reduction in velocity corresponds to an increase in
pressure, or 'head', between the inlet and outlet fluid pressures. If the discharge
side of the pump is closed, the pressure builds up to a maximum for the pump
and the pump effectively churns the fluid, creating heat as a by-product.
Centrifugal pumps are also, in general, not disadvantaged by flows containing
solid particles.
[Figure not reproduced: a centrifugal pump showing the inlet, outlet, shaft and impeller.]
Figure 0.25 Centrifugal pump

6.8.2 Positive displacement pumps

A positive displacement pump works by changing the volume occupied
by the fluid within the machine. In many cases, the change of volume is produced
by a reciprocating piston which moves to and fro in a cylinder fitted with a suitable
arrangement of valves. Figure 0.26 shows a typical reciprocating positive
displacement pump. When the piston is drawn outward, the pressure in the
cylinder falls and fluid is drawn in through the inlet valve; when the piston is
driven inward, the pressure rises, the inlet valve closes and the outlet valve opens
so that fluid is discharged into the delivery pipe. The valves open and close
automatically due to the pressure changes in the cylinder. In other designs, the
valves may be replaced by ports in the sides of the cylinder, these ports being
covered and uncovered by the movement of the piston.

[Figure not reproduced: a reciprocating positive displacement pump showing the inlet and outlet valves, the suction level and the suction and delivery heads.]
Figure 0.26 Reciprocating positive displacement pump

The disadvantages of this type of pump are that small clearances are required
between moving and stationary parts, making it unsuitable for fluids which may
contain solid particles. There is also a problem if the discharge line is blocked,
since the pressure of the trapped, incompressible fluid rises rapidly and can cause
the pump to stall or parts of the casing to burst.

6.9 Further Reading


Bentley, J.P., (1995), Principles of Measurement Systems, Longman Group Ltd,
Harlow, UK.
Bottom, A.B., (1991), Whither pH, Measurement and Control, Vol. 24, Oct.
Chettle, T., (1995), Level technology is getting smarter, C&I, July.
Fowles, G., (1993), Flow, Level and Pressure Measurement in the Water Industry,
Butterworth-Heinemann, Oxford.
Husu, M., (1995) Smart Positioners: Just how smart are they?, C&I, July.
Liptak, B.G. (ed) , (1995), Instrument Engineers handbook: Process Measurement
and Analysis, Butterworth-Heinemann, Oxford.
McKnight, J.A. and A. Clare, (1990), Using ultrasonics to measure process
variables, Measurement and Control, Vol. 23, Sept.
McMillan, G. K., (1991), Understand some basic truths of pH measurement,
Chem. Eng. Prog. , Oct., pp30-37.
Medlock, R.S. and R.A. Furness, (1990), Mass Flow measurement - a state of the
art review, Measurement and Control, Vol. 23, May.
Reeve, A., (1989), Can you select the right flowmeter, C&I, Feb.
Teasdale, L.M., (1994), BPMA (British Pump Manufacturer’s Association)
searches out the common ground, Supplement to Water Services, March.


7 Fieldbus and Data communications

Objectives
1. To introduce the development of communication systems from an
analogue to a digital environment.
2. To outline the communication media now available and used within
WWTP.
3. To describe the typical electrical standards (RS-232, RS-485, etc.).
4. To present the Open System Interconnection (OSI) reference model as a
way of structuring the functionalities of communication systems.
5. To introduce the HART system as a means of using combined
digital/analogue communication.
6. To discuss the current issues in the FIELDBUS area.

7.1 Introduction

Communication systems are used throughout industry, including WWTP, for
various purposes: to connect equipment in control rooms to either local or field
devices, to perform remote monitoring of processes, and to co-ordinate sequential
activities. Part of a process network is shown in Figure 7.1.

Figure 7.1 Works communication network (a head office linked by PSTN and radio to a supervisory computer and PLCs at several works; field sensors and actuators connected via fieldbus, 4-20 mA, HART and RS-485 links)

The diagram is typical of WWTP in that it shows a main head office monitoring
and controlling the activities of different-sized treatment plants.

The aims of the communication process are to take the process measurements and
transfer them to monitoring and control systems. Most physical variables provide a
continuously changing (analogue) indication of the process state (Figure 7.2).
These can be transmitted via an analogue medium (4-20 mA) or the measurement
can be sampled and a digital representation produced. The digital measurement can
be used directly in process computing and automation, and can be transmitted to
other areas digitally.

Figure 7.2 Analogue and digital signal representation (a continuous analogue output and its sampled values against time)

There has been a continuing evolution of the communication modes available for
use with field devices (Figure 7.3). The current trend is towards a wider use of
digital communication, which allows a greater degree of flexibility and
functionality.

Pneumatic instruments (3 to 15 psi)
Analogue electronic (4 to 20 mA)
Simultaneous analogue/digital communications (HART)
Full digital communications (Fieldbus)

Figure 7.3 Development of device communications

7.2 Dumb terminals and smart sensors

Point-to-point connection is the term for the connection between one specific
device and its controller (Figure 7.4). The variable signal transmission is on a 4-20
mA current loop consisting of a pair of twisted wires. It effectively provides
communication between a computer/PLC and a 'dumb' field device. There is
usually no two-way transmission.

Figure 7.4 Point-to-point communication (computer linked to a field device over RS-232 or a 4-20 mA loop)
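As a brief aside, converting a 4-20 mA loop current into an engineering value is a simple linear scaling. The Python sketch below assumes a hypothetical level transmitter spanning 0-5 m; the range and fault thresholds are illustrative only, not taken from any particular instrument.

```python
def current_to_engineering(i_ma, lo=0.0, hi=5.0):
    """Convert a 4-20 mA loop current to an engineering value.

    lo/hi are the values corresponding to 4 mA and 20 mA respectively
    (here an illustrative 0-5 m level range). Currents well below 4 mA
    or above 20 mA are commonly treated as a fault condition.
    """
    if i_ma < 3.8 or i_ma > 20.5:
        raise ValueError("loop current %.2f mA outside plausible range" % i_ma)
    return lo + (hi - lo) * (i_ma - 4.0) / 16.0

# Example: 12 mA is mid-range, i.e. 2.5 m for a 0-5 m transmitter
print(current_to_engineering(12.0))   # -> 2.5
```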


With the advances in smart transmitters which include microprocessors, the ability
to add extra functionality to the instruments became apparent. For example, these
features may include:
the ability to perform self-diagnostic tests
the ability to perform semi-automatic calibration
the inclusion of improved numerical calculations for non-linearity or
temperature compensation
the ability to combine two measurements to infer a third variable

However, this extra information must be communicated digitally to the


main plant control and monitoring system either by hand-held or portable
communicators or by some form of network. Using digital communications it is
possible for a single instrument to transmit more than one measurement and also
for the field instrument to be interrogated by the master computer on its
performance and calibration information.

7.3 Digital Communication

7.3.1 Communication Medium

There are various media used for communicating data. For WWTP, the
most feasible can be divided into the following classes: cable (twisted pair,
coaxial, fibre optic), telephone networks, and radio/microwave links.

Cable (twisted pair, two or four core):

This has a low transmission rate (less than 50,000 bits per second) and will allow
networks up to 2 km.

Cable (coaxial):

This medium has a high transmission rate (typically up to 10 MHz) and
can carry 200 times as much information as the twisted pair. It can transmit
several messages simultaneously and is often used as the signal medium for
transmission to base or receiving satellite stations.

Cable (fibre optic):

This medium can carry 5 times as much information as coaxial cable
and has the advantage that it is unaffected by electromagnetic interference.

Telephone networks:

For example, the PSTN (Public Switched Telephone Network) is used in some
WWTP to communicate between field sites and monitoring stations.

Radio/microwave links:

At present the licensed radio channels permit data transfer only at low data
rates (Imrie, 1994). However, they can be used to download non-critical data
outside peak times. Microwave links provide higher-speed transmission but require
line-of-sight transmission, and therefore longer distances require large microwave
antennae.

7.3.2 Data Transfer

Digital data can be communicated between devices by serial or parallel
communication, that is, either sequential or parallel data bit transfer. Parallel
communication can achieve a faster data transfer rate but over a much smaller
transmission distance, for example, 15 m. It is used, for example, for printer
interfaces. Serial communication is more widely used in communicating with
process devices since, although it may transmit more slowly, it can be used over
much larger distances.

Serial communication can also take one of the following three forms:

(i) Simplex : where the controller can send information to the field device
but there is no transfer of information from the field device back to the
controller
(ii) Half duplex: where both controller and field device can communicate
with each other, but not simultaneously.
(iii) Full duplex : Simultaneous communication is possible between
controller and field device.

7.3.3 Serial interface standards: RS-232, RS-422 and RS-485

To connect industrial devices to one another or to computers, a common


electrical standard must be employed. Many items of process equipment use the
serial interface standards RS-232, RS-422 and RS-485.

RS-232

This is the common interface used between PC serial ports and other devices.
Figure 7.5 shows the pin connections and their functions on an RS-232 connector.

Pin Abbreviation Signal/function


1 FG Frame/protective ground
2 TXD Transmitted data
3 RXD Received data
4 RTS Request to send
5 CTS Clear to send
6 DSR DCE ready
7 SG Signal ground/common return
8 DCD Received line detector
9
10
11
12 SDCD Secondary received line signal detector
13 SCTS Secondary clear to send
14 STD Secondary transmitted data
15 TC Transmit signal timing
16 SRD Secondary received data
17 RC Received signal timing
18 Local loop back
19 SRTS Secondary request to send
20 DTR Data terminal ready
21 SQ Remote loop back/signal quality detector
22 RI Ring indicator
23 Data signal rate selector
24 TC Transmit signal timing
25 Test mode
Figure 7.5 RS-232 Pin Connector

The minimum RS-232 connection between two data terminals is shown in
Figure 7.6; the signal grounds are connected and the transmit/receive pins of one
device are connected to the receive/transmit pins of the other device. The two
devices are often referred to as a DTE (data terminal equipment, such as a PC) and
a DCE (data communications equipment).

Figure 7.6 Minimum RS-232 connection (pins 2, 3 and 7 linked between the data terminal and the communications equipment)


Although RS-232 is often used for connection to the serial ports of PCs, it suffers
from noise interference due to the connection to ground of one of the two connecting
wires (termed an unbalanced configuration). Differential transmission, which uses
two lines each for the transmitting and receiving signals, gives greater noise
immunity.

Both the RS-422 and RS-485 serial transmission standards use differential
transmission. They can achieve higher transmission rates over longer distances and
can also support a higher number of receivers (Figure 7.7). The RS-485 standard
increases the number of drivers and defines the electrical characteristics necessary
to ensure adequate signal voltage under maximum load. Therefore, networks of
devices can be connected to one RS-485 serial port.

RS232 RS422 RS485


Max. number of drivers 1 1 32
Max. number of receivers 1 10 32
Maximum cable length (m) 15.2 1200 1200
Maximum data rate 20kb/s 10Mb/s 10Mb/s
Figure 7.7 Features of RS232, RS422, RS485
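As an illustration of how a supervisory PC might talk to such a serial device, the sketch below uses the third-party pyserial package. The port name, baud rate and the ASCII command string are assumptions made for the example and are not taken from any particular instrument or WWTP installation.

```python
# Sketch only: requires the third-party 'pyserial' package (import serial).
import serial

# Open a serial port; on an RS-485 multi-drop link the same settings apply,
# but an RS-485 converter/driver handles the differential signalling.
port = serial.Serial(
    port="COM1",          # assumed port name; e.g. "/dev/ttyUSB0" on Linux
    baudrate=9600,        # comfortably within the RS-232 limit of 20 kbit/s
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,          # seconds to wait for a reply
)

port.write(b"READ PV\r\n")   # hypothetical ASCII command to a field device
reply = port.readline()      # read one line of response (or b"" on timeout)
print(reply)
port.close()
```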

7.3.4 Protocols

Having defined the hardware connection between two devices, it is


necessary to specify how they communicate in software. The timing mechanism
between the transmitter and receiving devices can be either synchronous or
asynchronous.

In synchronous transmission, the transmitter and receiver are synchronised to a
common clock and each character frame being sent is recognised as 7 or 8 bits in
length.

In asynchronous transmission, each frame of data being sent is preceded by a start
bit and terminated by a stop bit. These indicate the start and end points of each
frame to the communicating device. However, the overhead of the start and stop
bits reduces the effective data rate; such links have traditionally been run at speeds
of around 1200 bits per second.

The protocol defines the software interface for communication between two
devices. It is a set of commands which are grouped to form a message that can be
recognised by the receiver. A typical protocol uses ASCII (American Standard
Code for Information Interchange) strings to provide commands between devices.
A simple protocol for data exchange is shown in Figure 7.8.

Device 1 sends ENQ: Device 1 polls Device 2.
Device 2 sends AKN: Device 2 acknowledges.
Device 1 sends SOH ... ETB: Device 1 sends a header (amount of data, odd or even parity, sumcheck, etc.).
Device 2 sends AKN, SOH ... ETB: Device 2 acknowledges and repeats the header.
Device 1 sends STX ... ETB: Device 1 sends the text block.
Device 2 sends AKN: Device 2 acknowledges.
Device 1 sends ETX: Device 1 sends end of text.
Device 1 sends EOT: Device 1 sends end of transmission.

Figure 7.8 Communication between two devices
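The exchange of Figure 7.8 can be mimicked in a few lines of code. The sketch below simulates the poll/acknowledge handshake using the standard ASCII control characters; the toy slave behaviour and the simple sumcheck are illustrative only and are not tied to any real instrument protocol.

```python
# ASCII control characters used in the simple protocol of Figure 7.8
ENQ, ACK, SOH, STX, ETB, ETX, EOT = (b"\x05", b"\x06", b"\x01",
                                     b"\x02", b"\x17", b"\x03", b"\x04")

def slave_respond(frame: bytes) -> bytes:
    """A toy slave: acknowledge polls, headers and text blocks."""
    if frame == ENQ:
        return ACK                      # device 2 acknowledges the poll
    if frame.startswith(SOH):
        return ACK + frame              # acknowledge and repeat the header
    if frame.startswith(STX):
        return ACK                      # acknowledge the text block
    return b""                          # no reply to ETX/EOT in this sketch

def master_send(payload: bytes):
    """Drive the sequence: poll, header, text, end of text, end of transmission."""
    sumcheck = bytes([sum(payload) % 256])
    header = SOH + bytes([len(payload)]) + sumcheck + ETB
    for frame in (ENQ, header, STX + payload + ETB, ETX, EOT):
        reply = slave_respond(frame)
        print(frame, "->", reply)

master_send(b"DO=2.1 mg/l")
```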

7.4 The ISO 7-Layer model

The OSI (Open Systems Interconnection) reference model was developed by the
International Standards Organisation (ISO). It defines a way of structuring the
specification and implementation of a communication protocol into 'layers', each
of which has a specific function (Figure 7.9). Layers 1 to 3 address the connections
used in the system, with Layer 1 specifying the hardware and Layer 2 specifying
hardware and software. Layers 4 to 6 handle interoperation standards for
applications. Each device, depending on its complexity, may or may not support
all the layers shown.

Layer 7: Application protocol
Layer 6: Presentation protocol
Layer 5: Session protocol
Layer 4: Transport protocol
Layer 3: Network protocol
Layer 2: Data link protocol
Layer 1: Physical protocol

Figure 7.9 OSI 7-layer model (peer protocols between the corresponding layers of two devices)



The description of the layers is shown in Figure 7.10.


Layer 7 (Application): Supports user services, e.g. shared file access. Process control functions: management functions, supervisory controls, sequence control, direct digital control.
Layer 6 (Presentation): Modifies transmitted encoded data for presentation. Process control functions: operator function services such as displays, printers and control panels.
Layer 5 (Session): Controls device-dependent aspects; interfaces to the local operating system. Process control functions: real-time operating system functions, device drivers, interrupt handling, application modules.
Layer 4 (Transport): Controls start to finish of a communications session. Process control functions: local and remote routing of messages, error checks on the entire session, priority levels.
Layer 3 (Network): Provides routing of messages. Process control functions: rerouting of messages, checking network node failures.
Layer 2 (Data link): Establishes data links, checks messages for errors, controls use of communication channels. Process control functions: provides error-free communications over the physical channel, controls access to channels and field device addresses.
Layer 1 (Physical): Defines interfacing to the communication medium; connects and disconnects physical links, e.g. hardware I/O ports. Process control functions: physical connection/disconnection, simple error detection.

Figure 7.10 Description of OSI layers

7.5 Distributed Communication Systems

As automation increased, the need for communication between computers,
controllers and other devices also increased. Originally every device was
self-contained and the communication provided was primarily a user interface for
controlling and updating the device operation. With the requirement to overlay
environmental control, energy management and supervisory control systems, the
introduction of an updated plant-wide communication system became necessary.

7.5.1 Network topologies

From the original point-to-point system, the star topology developed
(Figure 7.11), allowing multiple computers to communicate. The central node (or
Master) has a communications port with multiple drops. However, the
disadvantages are that the Master carries a severe software burden (if the Master
goes down, the network also goes down) and that each device requires separate
wiring to the Master. Star networks were in decline but, due to the increase in
data rate to 10 Mbit/s over twisted pair cable, this topology is still being used.
Multi-drop protocols were developed and standardised which could be used by
ring and bus topologies (Figures 7.12, 7.13). These topologies meant that it
became easier to add (or remove) devices from the network. The wiring also
became easier, since a single cable is passed round all the nodes to which the
devices connect. The disadvantages of these systems are (i) that the software must
decide which node is required to transmit, and (ii) that some protocols only allow
communication between the central control and a field device, so any information
which is to be passed device to device must first be sent to the central computer.
There arose the need for multinode networks without these problems. This led to
the development of the Local Area Network (LAN).
Figure 7.11 Star topology

Figure 7.12 Ring topology



Figure 7.13 Bus topology

7.5.2 Local Area Networks (LANs)

A LAN is a distributed communications network which covers a site-wide area at
high speed. The main characteristics are:

(i) there is no central master terminal,
(ii) devices can communicate with other devices, since each listens to every
transmission and accepts those addressed to it,
(iii) many data nodes may be incorporated,
(iv) the distance covered may be 1-5 km,
(v) the data transmission rate is high, e.g. 100 Mbit/s.

The LANs can still be formed in the star, ring and bus topologies. The
star LAN passes device-device commands through a central switching node. The
ring LAN passes messages round the ring in one direction only. Each node reads
the message, makes a copy if it is addressed to itself and passes the message. One
typical configuration of a ring network is the token ring. A token is passed round
the ring and any node may remove it, transmit a message and add the token to the
end. A receiving node will mark its safe acceptance, which will cause the
transmitting node to remove the message.
181
The bus LAN may be synchronous (token passing) or asynchronous (contention).
Token passing is similar to the ring token described above. Asynchronous access
implies that any node listens to the bus and, if it is not busy, transmits its message.
It can happen that two messages are transmitted simultaneously (or almost
simultaneously); therefore an access control mechanism called Carrier Sense
Multiple Access/Collision Detection (CSMA/CD) is used to resolve the contention.
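The CSMA/CD rule itself is simple: listen before transmitting, detect any collision, then back off for a random period before retrying. The sketch below is an illustrative pseudo-simulation of that logic (the channel_busy and collision_detected callables stand in for the real carrier-sense and collision-detect hardware signals); it is not a driver for any actual network interface.

```python
import random

def csma_cd_send(channel_busy, collision_detected, max_attempts=16):
    """Illustrative CSMA/CD logic with binary exponential back-off."""
    for attempt in range(1, max_attempts + 1):
        while channel_busy():           # carrier sense: wait for an idle bus
            pass
        if not collision_detected():    # transmit; no collision means success
            return attempt
        # collision: choose a random number of slot times to wait before retrying
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        print(f"collision on attempt {attempt}, backing off {slots} slot times")
    raise RuntimeError("excessive collisions, frame abandoned")

# Toy demonstration: the bus is always idle, and the first two attempts collide
outcomes = iter([True, True, False])
print("sent on attempt", csma_cd_send(lambda: False, lambda: next(outcomes)))
```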

7.6 HART communication system

Many processes still rely heavily on the traditional 4-20 mA signal. Without fully
redesigning a process, it was difficult to incorporate digital communications. The
HART system permitted the development of smart transmitters which could use
both the 4-20 mA and digital outputs (Howarth, 1994), (Orrison, 1995). HART
(Highway Addressable Remote Transducer) communication is achieved by using a
4-20 mA current loop with a digital signal superimposed to provide both an
analogue and a digital output. The HART system uses the American Bell 202
standard frequency shift keying (f.s.k.) signal to communicate (Bowden, 1996).
The digital '1' and '0' are represented by two frequencies, 1200 Hz and 2200 Hz
respectively. These frequencies are superimposed on the 4-20 mA d.c. signal
(Figure 7.14). Since the sine wave has no d.c. component, the digital data does not
affect most analogue instruments, which require only the d.c. mA level. (Low-pass
filtering may remove most of the digital communication signal if required.) The
data rate used for the HART transmission is 1200 baud (1200 bits per second). To
avoid interference with the communications signal, an upper limit of 25 Hz is
imposed on the analogue output signal.

Figure 7.14 A HART signal superimposed on a 4-20 mA process signal (1200 Hz and 2200 Hz f.s.k. bursts carrying the master command and slave response signals)

The bursts of digital signal correspond to the master/slave commands and
responses, since the HART protocol is a master-slave protocol. The device only
replies when it receives a command. The command itself can come from one of
two masters: a central controller or a hand-held terminal.

However, if the transducer for the measured process variable produces a digital
output, then the 4-20 mA analogue signal is no longer required. This allows the
possibility of implementing a multi-drop system; that is, multiple field devices can
be connected in parallel to a single twisted pair of wires and the Master
communicates with each device in turn.

The HART protocol takes the following structure.


Preamble STRT ADDR COM BCNT [STATUS] [DATA] CHK

STRT: Start character
ADDR: Address (source or destination)
COM: Command
BCNT: Byte count (of the status and data fields)
STATUS: Command, communication and device status (slave to host only)
DATA: Data (0 to 25 bytes)
CHK: Checksum
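To make the frame layout concrete, the sketch below assembles a HART-style frame in Python. The check byte is computed as an exclusive-OR (longitudinal parity) over the frame bytes; the preamble length, address and command values are arbitrary examples rather than real device parameters, and the refinements of the full HART specification (long addresses, response status bytes) are omitted.

```python
def build_frame(start: int, address: int, command: int, data: bytes) -> bytes:
    """Assemble a HART-style frame: STRT ADDR COM BCNT DATA CHK.

    The check byte is taken here as the exclusive-OR (longitudinal parity)
    of the preceding frame bytes; preamble bytes (0xFF) precede the frame.
    """
    body = bytes([start, address, command, len(data)]) + data
    chk = 0
    for b in body:
        chk ^= b
    preamble = b"\xff" * 5
    return preamble + body + bytes([chk])

# Example: a 'read primary variable' style request to polling address 0
frame = build_frame(start=0x02, address=0x00, command=1, data=b"")
print(frame.hex())
```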

The commands being sent can be subdivided into three groups


Universal commands (implemented in all field devices)
Examples: Read process variable (PV) and units.
Read transmitter range, units and damping time constant
Common-practice commands (commands common to most field devices but not
all)
Examples: Write transfer function (square root, linear etc.)
Re-range ( set span and zero)
Device-specific commands (functions unique to a particular device)
Examples: Read or write materials of construction
Read or write sensor type

With regard to the OSI model, HART implements layers 1, 2 and 7 (the physical,
data-link and application layers, respectively). The other layers (network,
transport, session and presentation) are not relevant to the type of local network
where HART is used.

7.7 FIELDBUS

There are current attempts to define a world-wide higher speed field


communications standard (Fieldbus). The bus structure would be used at the
lowest level of the OSI structure of functional devices and communications
networks. The performance being demanded is significantly greater than that
provided by, for example, HART, with increases in speed and support for multi-
drop and intrinsically safe operation being requirements. The primary goals of
Fieldbus users are therefore :

improved quality and reliability of control information


reduced installation, commissioning and maintenance costs.

The IEC committee set up to examine the introduction of an international


standard recognised that a good Fieldbus should support the following important
capabilities:

Precise time synchronisation


Levels of security and priority
Distributed databases
Bandwidth and distance
Cabling flexibility
Power through the bus and intrinsically safe devices
Automatic device identification
Cyclic scan scheduler
Fieldbus connectors.

7.7.1 Different standards

The problem is that, in the absence of a common standard, several manufacturers
have provided their own versions of Fieldbus systems with mutually incompatible
protocols. Industry in general is reluctant to change to Fieldbus systems unless it
knows that its choice of Fieldbus will be supported in the future. Many of the
existing Fieldbus providers have chosen to implement only three of the seven OSI
layers (Figure 7.15). However, they have added an eighth or 'user' layer to
represent the user interaction with the communication system.

Layer 8 (Manufacturer's application): application and vendor specific; not in the standard.
Layer 7 (Application): provides a standard (primitive) interface to the manufacturer's application.
Layer 2 (Link): arranges data transfer between adjacent and remote devices.
Layer 1 (Physical): converts data to a form suitable for transmission and receives data from the modem.

Figure 7.15 Fieldbus device

The main Fieldbus alternatives are summarised below:



STANDARDS BODIES:

ISO/OSI International Standardisation Organisation / Open systems


Interconnection.
ISA Instrument Society of America
IEC International Electrotechnical Commission

FIELDBUS ‘STANDARDS’:
FIP: Factory Instrumentation Protocol – French national standard.
PROFIBUS: Process Fieldbus – German national standard.
LonWorks: Communication system conceived by the American company Echelon.
CAN: Controller Area Network – protocol conceived by Bosch for use in
vehicles.
ISP: Interoperable Systems Project – established by Fisher-Rosemount,
Siemens and Yokogawa.
ISPF: Interoperable Systems Project Foundation.
P-NET: A fieldbus protocol developed in Denmark.
WorldFIP: Enhanced version of FIP incorporating the IEC physical layer.

IEC/ISA Fieldbus

The IEC together with the ISA set up a sub-committee to specify an


international standard for a digital fieldbus in 1985. The protocol is based on the
7-layer OSI standard.

FIP

This represents the French national standard and is used mainly in


France and Italy. WorldFIP (Desjardins, 1994) was launched to give the standard a
global representation. They state their support for the IEC/ISA standard.

PROFIBUS

This was developed as a Process Field Bus standard by a number of


German companies and technical institutions (Squirrell, 1994). It was given a DIN
standard of DIN 19 245 and has been adopted widely by other German companies.
ISP have largely based their protocol on PROFIBUS. ISP state that PROFIBUS
will be a subset of ISP, giving PROFIBUS an upgrade path to further application
areas.

ISP
The Interoperable Systems Project was set up by Fisher, Rosemount,
Yokogawa and Siemens. It was set up with the stated intention of accelerating the
establishment of an international standard. It has adopted the IEC/ISA physical
layer. The rest of the protocol rests heavily on PROFIBUS and incorporates
aspects of FIP and HART (Allen, 1994).

P-NET

P-NET was conceived by Process-Data of Denmark and the first products became
available in 1984. It was adopted as an in-house standard by some German and
Danish companies. The specification became an 'open' standard in 1989. It has a
wide range of applications, with no one specific application area.

CAN

The Controller Area Network was developed by Bosch as a digital


network protocol for sensors and actuators within cars. It has now migrated into
other applications.

LonWorks

In 1986 a founder of Apple Computers started the company Echelon in the USA
specifically to address the networking problem. LonWorks was conceived as a
solution to communication in all sectors, in contrast to most of the other Fieldbus
developments. The company therefore does not manufacture devices, but sells the
development kit to manufacturers for use with their own products.

7.7.2 The current status

Figure 7.16 shows the progress of Fieldbus standard development. In 1985, the
IEC/ISA committees proposed that there should be an international standard for
digital field communication. The International Fieldbus Consortium (IFC) was
founded in 1990 and was joined by many Fieldbus developers, including FIP and
Profibus. Trials of the new standard were organised, with a major trial at BP's
Research and Engineering facility at Sunbury-on-Thames. This proved to be a
success (Loose, 1994) but, as the trials neared their conclusion, Cenelec (the
European standards body), prompted by France and Germany, started an initiative
that would allow two standards within a European standard (although neither was
interoperable with the other). It has been alleged that this would stop the IEC
Fieldbus from gaining access to European markets. The IFC subsequently
dissolved and was replaced by two separate groupings: ISP, representing the
Profibus supporters, and WorldFIP, representing the supporters of an enhanced
version of FIP. Under protest from industrial users, the two (ISP and WorldFIP)
joined with others to form the Fieldbus Foundation (FF), whose agreed terms were
to create an FF standard that would converge towards the IEC Fieldbus standard.
However, in 1995, Cenelec proposed a standard (EN 50170) involving Profibus,
WorldFIP and P-NET. After some voting, EN 50170 was accepted and therefore
has the force of law within Europe (Wood, 1997). The UK, under the BSI
Fieldbus Committee, adopted the FF protocols as a UK draft and put this forward
for Cenelec approval. In November 1996, the IEC's proposals for layer 2, the data-link
layer, were rejected, mainly through a protest on a technicality by the German
representation. The final outcome may be two incompatible standards for the
data-link layer. The development is continuing.
Figure 7.16 Fieldbus development (the IEC/ISA initiative from 1985, the IFC (1990-94), its split into ISP and WorldFIP, and their convergence into the Fieldbus Foundation (FF))

Figure 7.17 The European Fieldbus standard EN 50170 (comprising Profibus, WorldFIP and P-NET)

7.8 Examples of WWTP Communications

The technology and the media used in communication systems are developing
rapidly. Information from remote plants is now more readily available. This has
provided an impetus, in many instances, to introduce greater monitoring and
improved management information systems in the wastewater industry.

Many WWT plants have established central monitoring of remote


outstations. Northumbrian Water invested in a project to establish a method for
remotely monitoring the many small sewage works under its control (Alexander,
1992). On a larger scale Southern Water's regional telemetry system monitors and
controls 1600 - 2200 operational sites (Water Services, 1994b).

The communications systems may be connected via telephone


networking (PSTN) systems, or cellular radio networks (Imrie, 1994). Thames
Water will incorporate radio telemetry into some parts of replacement systems
(Water Services, 1994b) and North West Water collect data from a coastal site
using fibre optic radio links (Water Services 1994a). (The process data may be
collected at specific periods during the day and downloaded to a central computer
on a daily basis).

With the facility of the computer network comes the added advantage of
the use of SCADA systems for process monitoring and facilities management.
Grampian Regional Council has used its wide area network of linked PCs to run a
sludge management information system (Water Services, 1996) to co-ordinate
transportation and disposal. Likewise, a network management scheme is
operated by Yorkshire Water (Newsome, 1991) which includes regional telemetry,
databases, analysis and strategic planning.

Many transmitters and instruments are HART compatible, which allows the
possibility of changing to a multi-drop mode for full digital fieldbus
communication (Water Services, 1996). Indeed, at the Sluvad water treatment
plant, the number of existing HART compatible instruments made it more cost
effective to implement a fully digital monitoring system than the conventional
analogue and digital system (Cargill, 1997).

7.9 Further Readings


1. Alexander, I., (1992), Remote monitoring of very small sewage works, Meas
+ Control, Vol. 25, March , pp 44-45.
2. Allen, C., (1994), The interoperable systems project (ISP), Meas + Control,
Vol. 27, Mar., pp 38-41.
3. Bowden, Romilly, (1996), HART Field Communications Protocol, Fisher
Rosemount.
4. Cargill, M., (1997), At the flexible HART of WWT control, Water Services,
Sept., pp16-17.
5. Desjardins, M., (1994), WorldFIP, Meas + Control, Vol. 27, Mar., pp 42-46.
6. Howarth, M., (1994), HART-Standard for a 4-20 mA digital communications,
Meas + Control, Vol. 27, Feb., pp 5-7.
7. Imrie, A., (1994), Communication options in the water industry, Meas +
Control, Vol. 27, Sept. , pp 221-224.
8. Loose, G., (1994), Fieldbus - the user's perspective, Meas + Control, Vol.
27, Mar., pp 47-51.
9. Orrison, G.C., (1995) , Taking full advantage of smart transmitter technology
now, Control Engineering, Jan. , pp59-61
10. Squirrell, B., (1994), Profibus, a working standard fieldbus, Meas + Control,
Vol. 27, Feb., pp 9-12.
11. Water Services, (1996), Jan., p 32.
12. Water Services, (1994a), Jun., pp 22-38.
13. Water Services, (1994b), Oct., pp 11-16.
14. Wilson, G., (1983), Using the IEEE-488 instrument bus, Electronic
Engineering, Mar., pp155-156.

8 Virtual Instrumentation (VI) and a Design Exercise

Objective
The objective of this chapter is to gain an understanding of the relatively new
technology of Virtual Instruments (VIs). The concepts of VIs are introduced and
the National Instruments software package LabVIEW is used to demonstrate the
basic features. A design exercise based on a wastewater treatment plant is used to
present design aspects of VIs and SCADA systems.

8.1 Introduction

The objective in virtual instrumentation is to use a general-purpose computer to
mimic real instruments with their dedicated controls and displays, but with the
added versatility that comes with software. Instead of buying a strip chart
recorder, an oscilloscope and a spectrum analyser, we can buy a high-performance
analogue-to-digital converter and use a computer running virtual instrumentation
software to simulate all of these instruments. Most SCADA and supervisory
control systems have the VI tools to build and develop user-friendly instruments.

8.2 Virtual Versus Real Instrumentation

Virtual instrumentation systems are most effective when they are used with
distributed computer systems. The major drawback in using a Personal Computer
(PC) for implementing virtual instruments is that the computer has only one
central microprocessor, and an application that uses multiple instruments can
easily overburden it. Dedicated instruments, by contrast, often contain several
processors, each dedicated to a specific processing task. In addition, these multiple
processors can operate in parallel, providing a great increase in overall
performance. But this increase in performance comes with the expensive price tag
accompanying many dedicated instruments.

The technology in plug-in boards is aimed at addressing these issues. Many
boards now contain their own processors and are available at a more reasonable
price. Digital signal processors are an example of special-purpose processors that
find their way onto plug-in boards. Many plug-in data acquisition boards also have
sophisticated direct memory access (DMA), timing, and triggering capabilities
that can span multiple boards, resulting in improved synchronisation and signal
coupling between boards. These developments have brought parallel processing
capabilities to personal computers, making them more sophisticated platforms for
instrumentation and data acquisition.

Figure 8-1 A virtual two-channel oscilloscope

VI offers the greatest benefit over real instruments in the areas of
price/performance, flexibility and customisation. For the price of a dedicated
high-performance instrument, you can assemble a personal computer-based
system with the fundamental hardware and software components to design virtual
instruments targeted at specific applications. The hardware may be plug-in
boards, stand-alone instruments, or a combination of both. In either case, the
software interface can be as complicated or as simple as necessary to serve the
application. We can simplify the operation of a complex stand-alone instrument
with a front panel that focuses on controlling only a subset of the full capabilities
of the instrument.

VI is becoming increasingly important in the instrument control world. VXI
(VME eXtensions for Instrumentation) is a standard that defines physical and
electrical parameters and software protocols for implementing instruments-on-a-card
test systems. A VXI instrument is a card that plugs into a chassis containing
several cards. Because they are plug-in cards, individual VXI instruments do not
have front panel user interfaces. Users cannot interact with a VXI instrument
simply by pressing a few buttons or reading displays on a front panel. VXI systems
must be controlled by a computer, or some other processor-based device.

VXI instruments are natural candidates for VI implementations. In the area of
user interaction, software front panels offer a visual means of controlling VXI
instruments. In addition, the combination of plug-in modules and the high-performance
timing and communications capabilities of the VXIbus makes the
configuration of a VXI test system much more complex than a GPIB test system.

8.3 VI and Intelligent Instruments

Virtual instrumentation is increasingly becoming an integral part of advanced
sensor technology, known as intelligent instrumentation. There are a number of
sub-systems in an intelligent instrumentation system.

Measurand, primary sensing element, amplification, analogue filtering, data conversion, compensation, digital signal/data processing, digital communication.

Figure 8-2 The components of an intelligent instrument

The above system is impossible to test with conventional instrumentation tools.
The traditional instrument in electronics is a stand-alone box that is dedicated to a
specific function. Its functions can be categorised into three main groups: data
acquisition, analysis and presentation. With the introduction of PC technology to
instrumentation, these functions need not reside in the same box, nor do the
functions of an instrument need to be fixed by the vendor. This new concept in
instrumentation is called virtual instrumentation.

The powerful I/O capabilities of the PC create interesting and highly effective
possibilities for instrumentation systems. One of the hardware components
available for virtual instruments is the plug-in data acquisition (DAQ) board.
These boards install directly into the expansion slots of the PC. The measurement
front-end components, including A/D, D/A and digital I/O, are all part of plug-in
DAQ boards. All of these can be controlled using standard PC features and
software.

8.4 Introduction to LabVIEW

LabVIEW is a program development application, much like various C or BASIC
software development tools. It is, however, different from those applications in
one important respect. LabVIEW uses a graphical language, G, to create programs
in block diagram form, while other programming tools use text-based languages.
LabVIEW includes libraries of functions and development tools designed
specifically for instrument control. It has application-specific libraries for data
acquisition, serial instrument control, data analysis, data presentation and data
storage.

LabVIEW programs are called Virtual Instruments (VIs) because their


appearance and operation imitate actual instruments. VIs contain an interactive
user interface, which is called the front panel, because it simulates the panel of a
physical instrument. The Front Panel can contain knobs, push buttons, graphs,
and other controls and indicators. The actual program is included in the Block
Diagram Window.

8.4.1 How to Start LabVIEW

Double-click on the LabVIEW icon. After a few moments, two blank, untitled
windows appear. The first window, with the gray background, is the Front Panel.
The one with the white background is the Block Diagram panel.

8.4.2 Running The Temperature Demo

Open the temperature System demo VI by following these steps:

a. Select file open.

b. Double-click on examples.llb.

c. Double-click on apps.

d. Double-click on tempsys.llb.

e. Double-click on Temperature System Demo.vi.

After a few moments, the Temperature System Demo VI front panel appears.
The front panel contains several numeric controls, Boolean switches, slide control,
knob controls, charts, graphs, and a thermometer indicator.

The Temperature System Demo VI simulates a temperature monitoring


application. The VI takes temperature readings and displays them in the
thermometer indicator and on the chart. The Update Period Slide controls how fast
the VI acquires the new temperature readings. LabVIEW also plots high and low
temperature limits on the chart, which you can change using the Temperature
Range knobs in the middle left border. If the current temperature reading is out of
the set range, LEDs light up next to the thermometer.

8.4.3 How to run VIs

Follow these steps to run the demo.

1. On the front panel toolbar, click on the run button to start the demo.

2. Click on the stop button to stop the demo.

3. Click on the continuous run button to continuously run the demo.

4. Turn the data analysis on and off by clicking on its button.

5. You can also stop the demo by clicking on the Acquisition switch.

6. You can also use the Pause/Continue button to start and stop the program.
This is usually used for debugging the program. When you use this button, the
Block Diagram Panel opens and you can see the block diagram of the
Temperature demo.

7. On the Block Diagram Panel, click on File and click on Close to close this
window.

8.4.4 How to Change Data

To change any of the operating data on the front panel, follow these steps:

1. Click on the Windows menu.

2. Click on the Show Tools Palette. The following Tools Window will
appear.

You can move the Tools Palette to a clear space by holding it with the mouse
and moving it around. The palette contains the Operating Tool, the Positioning
Tool, the Labeling Tool, the Wiring Tool, the Breakpoint Tool, the Probe Tool,
the Coloring Tool, the Color Copy Tool, the Scrolling Tool and the Pop-up Menu
Tool.

3. Click on the Label button. This is called the


Operating Tool.

4. Click on the value of High Limits on the Temperature Panel.

5. Use backspace key to clear the old value.



6. Enter the new value.

7. Now, try to change the following settings:

1. The upper limit of the temperature range to 87.0.

2. The upper limit of the Update Period slide (in system controls) to 2.0.

3. Try to change a few values of your own choice.

8. Close the Temperature Demo VI.

8.4.5 Creating a VI

VIs have three main parts: the Front panel, the Block Diagram panel, and the
icon/connector. The icon/connector will not be discussed in this tutorial.

Front Panel: You build the front panel of a VI with a combination of controls and
indicators. Controls are your means of supplying data to your VI. Indicators
display data that your VI generates. There are many types of controls and
indicators. You add the various controls and indicators to the front panel from the
sub-palettes of the Controls palette. Now, go to the file menu and open a new
VI. If the Controls palette is not visible,

Select Show Controls Palette from the Windows menu.

Numeric Controls and Indicators : The two most commonly used numeric objects
are the digital control and digital indicator.

1. Click on the Numeric icon in the Controls Palette.

2. Click on the Digital Control.

3. Move the mouse to the front panel (hand symbol) and click. The Digital
Controller appears on the front panel.

4. Use the Operating tool to change the value by clicking on the increment
buttons. Click on your Digital Control with right-hand mouse button, click on
Show, click on Label, and use the keyboard to type ‘No.1’. This automatically
goes in the box that appears. A small box appears on the top of the digital control.

5. As in the diagram below, put another digital control on the front panel

and call it No. 2.

6. Put another digital control on the front panel and call it No.1+No.2. Then,
click on the digital control with the right-hand mouse button and click on Change
to Indicator.

7. Put another digital control and call it No.1-No.2. Change to indicator.


Your front panel should look like this:

Note:

To edit text you must use the ‘A’ buttons from the Tools Palette.

To move a box you must use the pointer buttons from the Tools Palette.

8.4.6 Block Diagram

To open the block diagram, go to Window menu and click on Show diagram.
The following window will appear.

(The input and output terminals for the front panel objects appear on this diagram.)

The block diagram is composed of nodes, terminals and wires.

Nodes are program execution elements. Nodes are analogous to statements,
functions and subroutines in text-based programming languages. There are no
nodes in the above program yet; you will add nodes later.

Terminals are ports through which data passes between the block diagram and the
front panel, and between nodes of the block diagram. There are two types of
terminals: control/indicator terminals and node terminals. Control and indicator
terminals belong to the front panel controls and indicators. Control/indicator
terminals are automatically created or deleted when you create or delete a front
panel control or indicator.

To include the functions for add and subtract, go to the Windows menu (on the
Block Diagram Panel) and click on Show Functions Palette (if this is not already
active). This window appears.

Click on the Numeric button to invoke the following window:



Click on the add function and then click on the block diagram window at the
point where you want to place the add icon. Repeat for the subtract function. The
window should look like this:

You can use the Positioning tool (pointer) in the Tools Palette to move the icons
around on the block diagram window. Try to move the add and the subtract icons.
(You should first choose the Positioning tool in the Tools Palette by clicking on it.
Then click on the add icon; it starts blinking. Now move the icon by holding it
with the mouse and moving it around.)

8.4.7 Help

Click on the Help menu and then Show Help. Now, move the cursor to the add
icon and you will see the help window for the function. The function has two
inputs and one output. You have to wire all the inputs and outputs for the VI to
run.

8.4.8 Wiring

Wires are data paths between terminals. Data flows in only one direction,
from a source terminal to one or more destination terminals. To wire from one
terminal to another, use the Wiring Tool, which looks like a reel of wire in the
Tools Palette. Click with the wiring tool on the first terminal, move the tool to the
second terminal, and click on the second terminal. You can start wiring at either
terminal.

When the Wiring tool is over a terminal, the terminal area blinks, indicating
that clicking will connect the wire to that terminal. You need not hold down the
mouse button while moving the Wiring tool from one terminal to another. You can
bend a wire by clicking the mouse button to tack the wire down and moving the
mouse in a perpendicular direction.

If you want to delete any wire, double-click the wire using the mouse pointer

and then press the Delete Key.



Your block diagram window should now look like this:

8.4.9 Running the VI

You can now go to the front panel and run your VI. Check the results for the
following numbers:

No.1 10 No.2 3

No.1 3 No.2 -5

8.4.10 Boolean Controls and Indicators

You use Boolean controls and indicators for entering and displaying Boolean
(True-False) values. Boolean objects simulate switches, buttons, and LEDs. The
most common Boolean objects are the vertical switch and the round LED.

Exercise 1: Compare the two numbers in your previous VI and turn on an LED if
the numbers are equal. Run your VI and confirm the result.

Hints: Your panels should look like this:

8.5 Activated Sludge Reactor Example

This example is similar to the one studied using Simulink. The process is an
activated sludge process, which usually consists of a bioreactor (the aerator)
and a settler. The aerator is taken to be a well-stirred tank in which suspended
micro-organisms react with the organic material in the wastewater and with the
oxygen dissolved in the water to produce more cell mass, carbon dioxide and
water. The oxygen is injected into the aerator as compressed air and the suspended
micro-organisms are separated in the settler. A portion of the concentrated
biomass is recycled to the bioreactor and the remainder is wasted to maintain a
bounded micro-organism concentration level in the system.

8.5.1 Objective

The objective of this exercise is to examine:

the digital implementation of the PID controller (a minimal sketch is given after this list)

the effect of process and sensor noise on the control performance

the implementation of statistical process control (SPC)
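For reference, a minimal discrete (positional) PID algorithm of the kind that typically sits behind such a loop is sketched below in Python. The gains, sample time and actuator limits are placeholders to be tuned against the reactor VI; they are not values taken from the exercise.

```python
class DiscretePID:
    """Positional PID: u = Kp*e + Ki*integral(e) + Kd*de/dt, sampled at ts."""

    def __init__(self, kp, ki, kd, ts, u_min=0.0, u_max=100.0):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.u_min, self.u_max = u_min, u_max   # actuator limits (e.g. valve %)
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.ts
        derivative = (e - self.e_prev) / self.ts
        self.e_prev = e
        u = self.kp * e + self.ki * self.integral + self.kd * derivative
        # clamp to the actuator range and stop the integral winding up
        if u > self.u_max:
            u, self.integral = self.u_max, self.integral - e * self.ts
        elif u < self.u_min:
            u, self.integral = self.u_min, self.integral - e * self.ts
        return u

# Example: a DO loop updated every 10 s with placeholder gains
pid = DiscretePID(kp=2.0, ki=0.1, kd=0.0, ts=10.0)
print(pid.update(setpoint=2.0, measurement=1.4))
```

In the exercise, ts corresponds to the VI sample time, so doubling the sample time halves the number of controller updates per unit time and usually requires retuning.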

Run the reactor1 VI. Tune the PI controller for minimum overshoot in the DO
response.

1. Double the sample time and rerun the VI. Retune the controller for the same
performance. Repeat, doubling the sample time again.

2. Increase the sensor noise in steps of 1.0 and observe the mean chart (a
mean-chart sketch follows this list). Determine the value of noise for which the
process is out of statistical control.

3. Increase the process noise in steps of 0.1 and determine when the process is
'out of control'.
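A minimal mean (x-bar) chart calculation is sketched below to make 'out of statistical control' concrete: subgroup means that fall outside limits of roughly three standard errors around the long-run mean are flagged. The subgroup size and the dissolved-oxygen readings are invented for illustration, and the limits here use the overall standard deviation for simplicity, whereas textbook x-bar charts estimate sigma from within-subgroup ranges.

```python
import statistics

def mean_chart(samples, subgroup_size=5):
    """Return (centre line, LCL, UCL) and flag out-of-control subgroup means."""
    groups = [samples[i:i + subgroup_size]
              for i in range(0, len(samples) - subgroup_size + 1, subgroup_size)]
    means = [statistics.mean(g) for g in groups]
    centre = statistics.mean(samples)
    se = statistics.stdev(samples) / subgroup_size ** 0.5
    lcl, ucl = centre - 3 * se, centre + 3 * se
    flags = [m < lcl or m > ucl for m in means]      # True = out of control
    return centre, lcl, ucl, list(zip(means, flags))

# Illustrative DO readings (mg/l): the last subgroup drifts high
data = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 1.9, 2.1, 2.0,
        2.6, 2.7, 2.8, 2.6, 2.7]
print(mean_chart(data))
```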

8.6 Design Exercise

The plant for this design exercise is a simplified version of the Holdenhurst
Sewage Treatment Works (Robinson, 1990). The layout of the plant is
schematically shown in Fig. 8.3.

Basic details of the plant are as follows:

Dry weather flow: 55 Ml/d

Maximum flow to full treatment: 82 Ml/d

Effluent guarantee: 95%

Inlet works: 2 no. bar screens (25 mm spacing)

2 no. 7.0 m dia Dorr detritors

Primary sedimentation: 6 no. 24.4 m dia, 1412 m3 each

Aeration contact volume: 9478 m3 (plug flow)

Stabilisation volume: 4542 m3

Anoxic zones: 1406 m3

Porous disc diffusers: 10,995, tapered fit

Blowers: 4 no., 360 kW, 17,367 m3/h each

Final tanks: 10 no., 19.8 m dia, 860 m3 each

RAS pumps: 6 no., 44 kW, 35 Ml/d each

Storm tanks: 6 no., 1137 m3 each

Effluent standard: BOD 8 mg/l, SS 15 mg/l, Amm. N 0.6 mg/l

The following areas are to be monitored and controlled:

Storm flows

Inlet Works

Primary Sedimentation

Biological Treatment

Final settlement and sludge return

Effluent recirculation

System function and security

Sludge treatment

The works has its own automatic diesel-powered generators. Each station has
a battery-backed uninterruptible power supply with 2 hours' capacity. The
complete system has about 400 instruments and sensors. Analogue inputs are
monitored every 10 s and digital inputs are recorded every 6 s. The stations
communicate with the host computer every 20 s through a modem.

Exercise 1: Study the plant layout and suggest a decomposition strategy for the
computer control system.

Figure 8-3 Layout of Holdenhurst Sewage Treatment Works (Robinson, 1990)

Central Work Station

Workstation 1: process air pressure control; blower control and ancillaries; standby generator control.

Workstation 2: inlet penstock control; balanced flow control; storm tank control; sewage pumps control; sludge pump monitoring; inlet works monitoring; scraper motor control.

Workstation 3 (Area 1): desludging primary tanks; dissolved oxygen control; air flow control; settled sewage monitoring.

Workstation 4 (Area 2): desludging primary tanks; dissolved oxygen control; air flow control; settled sewage monitoring.

Workstation 5: final tank sludge blanket level control; scraper control; return activated sludge pumps and flow control; surplus activated sludge flow control.

Workstation 6: final tank sludge blanket level control; scraper control; return activated sludge pumps and flow control; final effluent monitoring station; final effluent recirculation.

Workstation 7: sludge well level monitoring; control of sludge pumping inhibition; activated-sludge treatment; plant fault monitoring; generator set monitoring.

Figure 8.4 The functional decomposition



Exercise 2: Suggest the types of sensors, instruments or actuators needed to
measure the signals in Figure 8.4.

8.7 Control Systems

The main control systems are described here.

8.7.1 Flow Balancing and Control

A feature of this plant is its use of in-sewer balancing. About 9 Ml of storage are
available in the 2.0 m dia Coastal Interceptor Sewer (CIS), and control is achieved
by means of an electrically actuated penstock that controls 60% of the works flow.
The remainder is uncontrolled, but fluctuations to the works are balanced by
smoothing of the CIS flow.

If a storm produces more than 82 Ml/d (the cut-off flow to the plant), the excess is
automatically diverted to the storm tanks. When these are full, discharge is to
either river or sea via sewer overflows at predetermined rates.

Pressure sensors in the sewer monitor the head (see Fig. 8.5) and, at a pre-set
level, the works flow is increased to maximum treatment plus the required storm
flow. In addition the rate of rise of head is measured, and when it exceeds a set
point above a certain head, the storm flow sequence is triggered.

Under dry-weather conditions, flow balancing is based upon a fill period


(daylight hours) and an empty period. The mean total flow is calculated daily by
the host computer and is automatically used as the next day's predicted flow.

If during the fill period the inflow is lower than the predicted flow, the desired
rate of change of head is calculated to give a full sewer at the end of the fill
period. If this desired rate deviates from the actual rate by more than a given
dead-band, the desired and the actual values are compared in a PID loop. The
output of this loop is fed to a secondary PID loop comparing the desired CIS flow
with the actual flow. This is used to update the penstock position.

The sewage wet well control is part of the flow-balancing scheme, and is
based on level measurement. Downstream from the storm overflow the treatment
flow is split equally into two treatment lanes.

The advantages of flow balancing lie mainly in reducing surges of ammoniacal
nitrogen (amm. N) in the works effluent.

Exercise 3: Identify the different components of the flow-balancing control system,
i.e. sensors, actuators, loops, set points, etc. How would you tune this type of
control system?

8.7.2 Aeration Control

Each of the two treatment streams has two stabilisation zones and four contact
zones for DO control. Each control zone has a DO electrode, an air flow meter and
a motorised valve. The target DO concentration is compared with the measured
DO. A cascade PID control loop generates a difference signal which is used to
calculate the required airflow. This is compared with the actual airflow, and the
difference signal is used to generate the movements of the corresponding control
valve. The control system is shown schematically in Fig. 8.5.

These valves operate independently and cause pressure variations in the air main.
The pressure is measured and compared with a set point, the difference signal
being used to control the vane angle of the blowers, and thus the blower output.
This control system is shown in Fig. 8.6.
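The cascade arrangement can be pictured as two nested loops: the outer DO loop produces an airflow set point and the inner loop positions the valve to achieve it. The Python skeleton below is illustrative only; the PI gains, limits and signal names are assumptions, not plant values.

```python
def pid_step(state, kp, ki, ts, setpoint, measurement):
    """One update of a simple PI controller; 'state' is the running integral."""
    e = setpoint - measurement
    state += e * ts
    return state, kp * e + ki * state

def aeration_cascade(do_sp, do_meas, airflow_meas, states, ts=10.0):
    """Outer DO loop sets the airflow demand; inner loop positions the valve."""
    i_do, i_air = states
    # Outer loop: DO error -> required airflow (illustrative gains)
    i_do, airflow_sp = pid_step(i_do, kp=50.0, ki=1.0, ts=ts,
                                setpoint=do_sp, measurement=do_meas)
    # Inner loop: airflow error -> valve position demand, limited to 0-100 %
    i_air, valve = pid_step(i_air, kp=0.5, ki=0.05, ts=ts,
                            setpoint=airflow_sp, measurement=airflow_meas)
    valve = max(0.0, min(100.0, valve))
    return valve, (i_do, i_air)

valve, states = aeration_cascade(do_sp=2.0, do_meas=1.5, airflow_meas=400.0,
                                 states=(0.0, 0.0))
print(valve)
```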

Exercise 4. Why do we need to use the non-linear functions f1() and f2() in the
aeration control system?

Figure 8.5 The CIS control system (sewer head and predicted flow, the non-linear functions f1( ) and f2( ), PID loops, inlet penstock control and storm tank discharge selection)

Figure 8.6 The aeration control system (blowers, inlet vanes, air-main pressure measurement and PI loops driving the aeration valves)



8.7.3 Primary Tank Desludging

The desludging sequence is time-based, with several possible variations. Each
tank has an interval time adjusted to suit its hydraulic characteristics. When the
bellmouth descends, water is pushed out of the bellmouth, followed by sludge. As
the sludge thins out, the 'mushroom' of sludge rises to strike the mechanical
plunger, which stops the desludging. The plunger signal is ignored for a pre-set
time at the start of desludging to avoid initial false signals. The bellmouth can stay
lowered for a pre-set percentage of the interval time, after which a time-out alarm
is activated. The desludging time is measured and, if it is very short, the next
operation is skipped. This is used mainly at night, when the solids load is much
less than during daylight hours. If the bellmouth is not lowered within 15 s of the
command signal, an alarm is raised. Since the plunger is sometimes fouled with
rags, failure to clear at the end of a desludging cycle will raise an alarm and
initiate desludging for a pre-set time on the next cycle.

8.7.4 Final Tank Desludging and Return Activated Sludge (RAS)

Desludging of the final tank is controlled by a motorised valve in the discharge
pipe. The sludge-blanket level in each final tank is monitored and controlled about
its own set point by opening or closing this valve. The valve cannot close beyond
a minimum set point, which is automatically adjusted according to the return
activated sludge (RAS) flow. A maximum rate of rise of blanket level is also
specified.

The RAS flow is controlled by a motorised valve on the pumping main. The
number of pumps operating is determined by the position of the valve. Should the
pump well level fall, the RAS flow is automatically decreased to avoid air-locking
of the centrifugal pumps. Also, if the blanket level in any of the final tanks exceeds
a set point, the RAS flow is incremented by a set amount and the final tank valve
is opened. This effectively overcomes any flow imbalance to the final tanks. At a
higher blanket level, a priority alarm is activated.

Surplus activated sludge is wasted directly from the RAS flow through small-bore
pipework and a motorised valve.

Figure 8.8 Final tank desludging and return activated sludge control (sludge-blanket level measurement, averaging, PID control of the desludging valve, and the RAS flow controller with its minimum-aperture limit)

8.8 Alarms

Alarms are activated upon instrument faults, high or low signal values,
undesirable trends or incorrect digital points, i.e. plant unavailability. Alarms have
low or high priorities. Alarm points are normally displayed on a colour VDU and
are colour coded.

Exercise 5: List 10 top priority alarms in a wastewater treatment process.



8.9 Data Display

Important analogue and digital signals should be recorded at regular intervals.
Information can be called up in tabular or graphical form. Data characteristics
such as the mean and standard deviation, together with mean and range charts,
should be designed and made available to the operators.

8.10 Fault Monitoring

An important component of the wastewater treatment plant computer system is
the implementation of a fault diagnosis system. Sensors and actuators are more
likely to fail in the harsh environment of wastewater treatment plants. In general,
the time between fault observation and correction must be as short as possible.
Minor instrument faults can have drastic and sometimes unforeseen consequences.
Failure of a flow meter is trivial at most sewage treatment works, but here it can
lead to loss of flow balancing. For these reasons, faults should be anticipated and
suitable fallback positions should be built into the software.

Exercise 6: List 3 common faults in the wastewater treatment process and explain
how they can be detected and isolated.

8.11 Further Readings

1. Beck, M.B., (1986), Identification, estimation and control of biological
wastewater treatment processes, IEE Proceedings, Vol. 133, Part D, No. 5, Sept.

2. Taner, A.H. and N.M. White, (1996), Virtual instrumentation: a solution to the
problem of design complexity in intelligent instruments, Measurement &
Control, Vol. 29.

3. Robinson, M.S., (1990), Operating experiences of instruments, control, and
automation at Holdenhurst sewage-treatment works, Bournemouth, J. IWEM, Dec.
218

9 Fault Diagnosis through Expert Systems and Neural Networks
Objectives
To provide a simple description of the structure of an expert system.
To describe the neural network structure.
To outline the stages required in a neural network application.

Diagnostic tools are available which permit identification and analysis of faulty
plant equipment or of the data. Two modern techniques that can be used are
expert systems and neural network modelling. Although these methods can
be used for a wide variety of purposes other than fault diagnosis, this chapter
introduces these tools and their use in a diagnostic situation.

9.1 Expert systems in process control


An expert system is a computer-based system which contains knowledge from a
specific area of human expertise. The technology developed from the 1970s,
when it was originally applied to simple off-line applications such as assisting
with medical diagnoses. Expert systems have now developed to the point where
they are used for both off-line systems (which operate on static data) and on-line
systems (which consider information generated in real time).
Conventional programming systems are suitable for
data acquisition
status monitoring and flagging alarms
mathematical processing and automating repetitive operations
However, conventional systems find the following more difficult:
interpreting the meaning of complex data, including filtering information and
focussing on areas of criticality


incorporating modifications and extensions easily


The expert system can use the knowledge gained from operators, engineers or
system designers to help in the above cases. Typically, an expert system can be
used in monitoring and diagnosis, optimisation, scheduling, supervisory control,
advanced control, and process modelling and simulation.

9.1.1 Expert system components

The expert system comprises three main components:


a knowledge base: used for storing the organised knowledge or expertise in the
form of rules or frames
an inference engine: the problem-solving mechanism
a database: used for storing facts, deductions and intermediate hypotheses
Knowledge Base: This holds the knowledge developed from operator/engineer
experience, maintenance handbooks or system designers/commissioning staff,
typically in the form of input/output relationships. The knowledge
itself may be quantitative or, for some more advanced expert systems,
qualitative. The knowledge may be expressed as
rules (such as IF...THEN... rules), or
frames, which are based on objects. Object-oriented development allows the
user to define objects and their properties and behaviour. Objects can inherit
properties and behaviour from objects within the same class or other classes.
Hence knowledge can be captured efficiently and development effort saved by
creating generic rules, procedures, formulae and relationships that apply across
entire classes of objects.
Rule-based expert systems are quicker and easier to develop, but object-oriented
development may be more suitable for more complex or larger projects.
Inference engine: This is the problem-solving mechanism. The rules can be event
driven (through forward chaining) to respond automatically when new
data arrives, or data-seeking (through backward chaining) to
invoke other rules or formulae automatically.
Database: This is the working storage area for the system (Fig. 9.1).

Figure 9.1 An Expert System Structure


The development of an expert system is made easier by the use of specific shells
which contain all the expert system structure required, such as automatic
inheritance procedures for objects.
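To make the rule-based structure concrete, the short Python sketch below shows a minimal forward-chaining rule engine: a database of facts, rules with an IF part and a THEN part, and an inference loop that fires rules until no new deductions are made. It is illustrative only; the rule names, fact names and thresholds are hypothetical and are not taken from any particular plant or expert system shell.

# Minimal forward-chaining rule engine (illustrative sketch only).
# Facts live in a simple database (a dict); each rule pairs a condition
# function (the IF part) with an action (the THEN part).

def make_rules():
    return [
        # Hypothetical diagnostic rules for illustration.
        ("flow meter frozen",
         lambda db: db.get("flow_change_1h", 1.0) == 0.0 and db.get("pump_running", False),
         lambda db: db.update({"alarm": "suspect flow meter fault"})),
        ("blanket level high",
         lambda db: db.get("blanket_level", 0.0) > db.get("blanket_setpoint", 2.0),
         lambda db: db.update({"advice": "increase RAS flow"})),
    ]

def forward_chain(database, rules):
    """Repeatedly fire any rule whose IF part is satisfied until no new
    deductions are added to the database."""
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, condition, action in rules:
            if name not in fired and condition(database):
                before = dict(database)
                action(database)          # add the rule's conclusion
                fired.add(name)
                changed = changed or (database != before)
    return database

if __name__ == "__main__":
    db = {"flow_change_1h": 0.0, "pump_running": True, "blanket_level": 2.5}
    print(forward_chain(db, make_rules()))

A backward-chaining engine would instead start from a hypothesis (e.g. "flow meter fault") and search the rules for evidence supporting it.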

9.1.2 Expert systems for condition monitoring and fault detection

An expert system can be associated with data-logging instruments to improve the
fault detection and analysis of a process plant. The initialisation of the project would
include considering:
the role of the system: to help an operator, assist an engineer, etc.?
where the 'knowledge' will be derived from
what is the appropriate representation for the knowledge? (for example,
if...then rules or more complex mathematical representations)
hardware considerations
user interface (text or graphics)
validation and testing, field trials
Once the user has decided on, say, a rule-based system for fault detection, the
following issues need to be considered:

which failure modes can be detected


what will the false alarm rate be
what is the chance a fault will not be diagnosed
what is the time taken to diagnose a fault
how complex is the system to be implemented
The data or knowledge of the faults or system conditions to be considered must
then be presented in a form suitable for use by an appropriate expert system
package or development tool.

9.2 Modelling of complex processes using neural nets


Neural computing differs from conventional computing in that, instead of having
to programme the solution, the networks learn solutions from supplied data. For
example, by supplying certain fault signatures for specific equipment, the network
can learn to recognise faults. Neural networks have therefore proved to be
useful for a variety of areas such as
Classification and fault diagnosis in engineering
Plant monitoring and control
Demand forecasting
Data analysis, the recognition of trends
Modelling of complex process plants
These tasks may be difficult to perform if using standard programming techniques.
In particular, neural networks can be used in situations where there is
poor quality or incomplete data
plenty of data available to train the NN
a need to integrate different types of input data
difficulty in specifying a model for a mathematical simulation
difficulty in specifying rules for a knowledge-based system

9.2.1 The neuron and the neural network

A neural network consists of a number of elementary units called neurons. A
neuron is a simple processor (Fig. 9.2) which takes one or more inputs and
produces an output. Each input to the neuron has an associated weight. The
neuron performs the following actions:
multiplies each of the inputs by its respective weight
sums the results from all the inputs
determines the output according to the result of this summation

Figure 9.2 The neuron
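As a concrete illustration of the three actions listed above, the short Python sketch below computes a single neuron output as a weighted sum followed by an activation. The particular weights, inputs and the choice of a sigmoid activation are illustrative assumptions, not values taken from the text.

import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of the inputs followed by a sigmoid activation,
    as described for the elementary neuron."""
    # 1) multiply each input by its weight, 2) sum the results
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 3) determine the output from the summation (here a sigmoid squashing)
    return 1.0 / (1.0 + math.exp(-activation))

# Example: a neuron with three inputs (hypothetical values)
print(neuron_output([0.5, 0.2, 0.9], [0.8, -0.4, 0.3]))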

The neurons can be connected together to form a neural network (NN) which can
be trained to perform tasks. The number of neurons in an NN can range up to
thousands. An NN is shown in Fig. 9.3. There are 4 inputs in the input layer, which
pass to three neurons in the hidden layer, which in turn pass to 2 neurons in the output
layer. There may be many hidden layers within an NN. Although all the
information passes from input to output in the figure, some networks
allow data to travel backwards or to flow between neurons in the same layer, even
to themselves.

Figure 9.3 Neural Network

9.2.2 Training the neural net (NN)

Training an NN refers to the process of passing a series of input data, whose
outputs are known, to the NN. The NN alters the weights associated with each neuron
until it can successfully match each set of input data to its output data set. This is
referred to as supervised training, since both input and output data sets are
available. Unsupervised training occurs when only the input sets are
available; the network must then learn the various patterns or classifications that are
hidden in the data sets.
Once the NN has been trained, the weights are fixed and the NN can be
used with new data presented to the network. Although the process of training
NNs can be slow, the speed of a trained NN is very fast, which makes them
attractive for certain operations.
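A minimal sketch of supervised training is given below: the weights of a single neuron are adjusted from known input/output pairs using a simple gradient (delta-rule) update. The training data, learning rate and number of passes are hypothetical, and the delta rule is only one of many possible training algorithms.

import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_neuron(samples, n_inputs, rate=0.5, epochs=2000):
    """Supervised training of one neuron: adjust the weights until the
    outputs match the known targets (delta-rule style update)."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
            err = target - out                      # output error
            grad = err * out * (1.0 - out)          # sigmoid derivative term
            weights = [w + rate * grad * x for w, x in zip(weights, inputs)]
            bias += rate * grad
    return weights, bias

# Hypothetical data set: learn the logical AND of two inputs
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data, n_inputs=2)
print(w, b)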

9.2.3 Neural Network Application Development

Figure 9.4 shows the application stages of an NN design, including the iterative
design/testing/optimisation phase.

Data collection

Data preparation

Design
Optimisation
Training and testing

Validation

Implementation

Maintenance

Figure 9.4 The neural network application development


The central stages are detailed below:
Data collection: Sources of data may be process measurements,
simulations or mathematical formulae, or indeed a combination of all of these.
Quality of data is important; supplying poor quality data effectively asks the NN
to learn how to deal with lack of information as well as how to solve the problem.
Data preparation: This includes removal of data that is not required, or whose
removal would simplify the task considerably; pre-processing of data into a form suitable
for the NN; compensating for missing data by providing estimated values; and data
partitioning: setting some of the data aside for verification once the training has
been achieved.
Design: Performance of the NN will depend on the network type, size, training
algorithm and choice of input features.
Training and testing: This comprises initialising the neuron weights (often
randomly to start with) and applying an appropriate number of training sets.
The network's output errors should decrease rapidly as it learns the classification
of the data.
Optimisation: Features that can be altered during the design and optimisation
process include: quantity of training data, input pre-processing, network
architecture, network size, training algorithm.
Diagnosis and treatment: Problems can be: slow learning/high training errors,
poor generalisation, low speed for new data.
Validation: Validation can be performed, for example, by running large amounts
of test data through the NN and checking performance against an analytical model.
The NN may fail if the new data set is statistically significantly different from the
training data set. This is a current problem in the application of NNs, and it limits their
application in safety-critical systems.

9.3 Further reading


1. Cordier, B., D. Gokana, C.T. Huynh and Elf Aquitaine, (1991), SCOOTER, A
Real time Expert System for Process Control: Application to Gas
Dehydration on Offshore Platforms, 6th SPE Petroleum Computer
Conference, Dallas, June 17-20.
2. DTI's Neural Network Technology Transfer Programme, (1993), Neural
Computing: Learning Solutions: Best Practice Guidelines for Developing
Neural Computing Solutions.
3. Hruschka, H. and W. Hegemann, (1981), Investigation on constant F/M ratio
by adaption of aeration tank volume, Water Sci. Technol., Vol. 9, (5/6), pp 646.
4. Hyde, T., (1996), Neural Computing: Towards a new technology, Control
Systems, April, pp 75-77.
5. Johnson, M., (1996), Neural networks in advanced instrument design, Meas.
& Control, Vol. 29, May, pp 101-105.
6. Moo-Young (ed), (1985), Comprehensive Biotechnology, Pergamon Press.
7. Olsson, G., (1983), Control Strategies for the Activated Sludge Process,
XXXX
8. Page, G.F., J.B. Gomm and D. Williams (eds), (1993), Application of Neural
Networks to Modelling and Control, Chapman and Hall, London.
9. Seborg, D.E., T.F. Edgar, D.A. Mellichamp, (1989), Process Dynamics and
Control, Wiley and Sons.
10. Thompson, L. and G. Mertz, (1993), Real-time expert system implementation
at Monsanto-Krummrich, Automatica, Vol. 29, No. 5, pp 1177-1183.
11. Tinham, B., (1995), Control with neural net models, C&I, August, pp 30, 32.
12. Yust, L.J., J.P. Stephenson and K.L. Murphy, (1981), Dynamic step feed
control for organic carbon removal in a suspended growth system, Water Sci.
Technol., Vol. 13, (11/12), pp 729-736.

10 Fuzzy Logic Control Design and Analysis


Objective
The basics of fuzzy logic are introduced in this chapter. The chapter
starts with a review of Boolean algebra and logic. The mathematical background to
fuzzy set theory is then discussed. The fuzzy logic rules are then defined and
explained. This is followed by the application of fuzzy logic to control systems.
Examples are given to demonstrate the practical applications of fuzzy logic
control methods.

Symbols
: belongs to
: does not belong to
: include or equal
: union
: exclusive OR
: intersection
: union
: for all
: not equal to
: separate the membership of an element from the element
: product
: summation
.:operator AND
: AND
: OR
x: Cartesian product
: there exist


Expert system: a computer system capable of emulating a human being with


some area of expertise. An expert system usually comprises a knowledge base and
an inference engine.

Knowledge base: codified rules and practices obtained from an expert on a


particular subject.

Inference engine: the portion of the program in an expert system that controls the
selection and application of rules and practices in the knowledge base.

Backward chaining: a control strategy that assumes a hypothesis and works


backward to find evidence to confirm or refute it; also called goal directed
searching.

Forward chaining: a control strategy that determines what hypothesis may be


proved from a set of data, also called data-directed searching.

Production rule: knowledge formulated as an “if...then” statement, in which “if”


is the antecedent and “then” is the consequence.

Fuzzy production rule: a rule employing fuzzy sets in its antecedents and
consequence terms.

Fuzzy set: a set in which objects may have only partial membership.

Linguistic variable: a fuzzy set defining some particular linguistic concept- for
example, a “low voltage”.

Fuzzy logic: an infinite-valued logic discipline that allows a proposition to have a


value other than true or false.

Policy: a series of fuzzy production rules that are grouped for evaluation.

Hedge: an adverb that modifies the behaviour of a linguistic variable such as


“not”, “very”, and “fairly”.

Premise: A statement or idea on which reasoning is based

Antecedent: The statement on which the consequence of a reasoning is based

Noise word: a word that improves the readability of rules but has no mathematical
value, such as “should”, “that”, and “this”.

Cartesian product: The Cartesian product A×B of A and B is:

A×B = {(a,b) : a ∈ A, b ∈ B}.
(a,b) is the ordered pair with first component a and second component b.

Binary Relation

A binary relation R on A×B, or from set A to set B, is a subset of A×B, i.e. (x,y) ∈ R.

Composite Relation

Let R1 be a relation from A to B and R2 be a relation from B to C. Then the
composite relation R1∘R2 is defined as follows:

R1∘R2 = {(x,z) : x ∈ A, z ∈ C, (∃y)(y ∈ B, (x,y) ∈ R1, (y,z) ∈ R2)}

10.1 Introduction

Fuzzy set theory is based on the concept that human thinking is seldom
mathematically precise. Fuzzy sets let a computer represent data and reason
the way humans do, by manipulating imprecise and often ambiguous notions. In a
classic set, membership is predicated on Boolean logic: an object is either a
member of a set or it is not. With fuzzy sets, an object may have partial membership
in a set. So whereas classic logic allows only for the possibilities of a proposition
being true or false, fuzzy logic allows infinitely many values; it takes into account the
possibility that a proposition can have any value between true and false.

Consider the range of voltages possible from a standard UK wall socket.
Classic logic permits definition of a set called "Voltage below 220". But with
fuzzy set logic, a set called "Low Voltage" may be defined as shown in Fig. 10.1.
When standard logic is applied, voltages below 220 volts are separated from all
other values. With fuzzy set theory, however, the set "Low Voltage" covers the
range from 180 to 230 volts. At 180 volts it is definitely true that the voltage is
low; at 230 volts it is definitely not true that the voltage is low. Between 180 and 230,
the degree to which the proposition "the voltage is low" is true declines from 1
(completely true) to 0 (completely false). In
other words, fuzzy sets replicate the kind of approximate reasoning that humans
routinely use.

Fig.10.1 The membership functions for (a) conventional logic, (b) fuzzy logic

It should be emphasised that the truth or falsehood of a fact is entirely
based on human perception. As our knowledge about the subject
pertaining to the fact grows, the fact may become less and less true. In
conventional knowledge-based systems, facts are added to the database and
stay there permanently unless they are explicitly deleted. In many practical systems,
however, the facts undergo a metamorphosis as the evidence changes. The problem with the
classic logic system is that time or dynamics cannot be incorporated into the
system, as they can be with differential equations. On the other hand, given the
information at a particular instant of time, it might not be possible to determine
exactly the truth or falsehood of a proposition, but it can be asserted that the
proposition is likely to be true or false with a certain degree of uncertainty. The
status of these propositions is more vague than just being either true or false.
Fuzzy logic associates a degree of certainty or truth with each fact, thus allowing
for partially true facts with different shades of meaning between true and false.

10.2 Boolean Logic

It is worthwhile to examine binary logic briefly before discussing
fuzzy logic, in order to see how the degree of truth can be quantified and the
logical connectives AND, OR, XOR (exclusive OR) and NOT interpreted by suitable
mathematical operations. For two propositions x and y, each of which can be either true or
false, the truth table for the different connectives may be written as (Table 10.1):

x    y    x'   x·y   x+y   x⊕y
T    T    F    T     T     F
T    F    F    F     T     T
F    T    T    F     T     T
F    F    T    F     F     F
Table 10.1 The truth table

We now introduce the degree of truth, or membership value, of each of the
variables x and y. The membership value μ(x) is defined to be equal to 1 if the
proposition x belongs to the crisp set X consisting of all propositions that are true.
If x is false, μ(x) is defined to be zero. Similarly, the membership value μ(y) is
defined to be 1 if y is true and 0 if y is false. In this case the membership function
has two discrete values, 0 and 1. We can now express the connectives in terms of
membership functions as follows:

μ(x)  μ(y)  x'   x·y   x+y   x⊕y
1     1     0    1     1     0
1     0     0    0     1     1
0     1     1    0     1     1
0     0     1    0     0     0
Table 10.2 The truth table in terms of membership values

Examining the truth table, we note that we may make the following
associations between the membership functions and the connectives:

x'   ↔  1 - μ(x)
x·y  ↔  min[μ(x), μ(y)]
x+y  ↔  max[μ(x), μ(y)]
x⊕y  ↔  max[min(μ(x), 1-μ(y)), min(1-μ(x), μ(y))]

Note that the exclusive OR is defined as:
x⊕y = (x·y' + x'·y) = max[min(μ(x), 1-μ(y)), min(1-μ(x), μ(y))]
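The min/max interpretation of the connectives above extends directly from the values {0,1} to membership values anywhere in [0,1]. The short Python sketch below is a minimal illustration of that idea; the function names are my own and the test values are arbitrary.

# Fuzzy interpretation of NOT, AND, OR and XOR acting on membership
# values in [0,1]; with values restricted to {0,1} these reproduce
# the Boolean truth tables of Tables 10.1 and 10.2.

def f_not(mx):
    return 1.0 - mx

def f_and(mx, my):
    return min(mx, my)

def f_or(mx, my):
    return max(mx, my)

def f_xor(mx, my):
    return max(min(mx, 1.0 - my), min(1.0 - mx, my))

# Boolean check (0/1 inputs) and a partially-true example
print(f_xor(1, 0), f_xor(1, 1))                      # 1 0
print(f_and(0.7, 0.4), f_or(0.7, 0.4), f_not(0.7))   # 0.4 0.7 0.3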

10.3 Fuzzy Sets

A fuzzy set is an extension of a crisp set. Crisp sets allow full
membership or no membership at all, whereas fuzzy sets allow partial membership.
In other words, an element may partially belong to a set. In a crisp set, the
membership or non-membership of an element x in a set A is described by a
characteristic function χ_A(x), where:

χ_A(x) = 1 if x ∈ A, and χ_A(x) = 0 if x ∉ A.

Definition 3.1: Let Ω be a collection of objects, for example the real numbers, Ω = Rⁿ,
and let Ω be called the universe of discourse. A fuzzy set X in Ω is characterised by a
Membership Function (MF):

μ_X : Ω → [0,1]

with μ_X(x) representing the Grade of Membership (GM) of x in the fuzzy set X.

Consider a domestic central heating system. Given a physical variable,
x(t), the room temperature, we are interested in its value if it belongs to some
precise interval called the universe of discourse, Ω = [10,35]. Otherwise the value lies
outside this interval and may be associated with the corresponding
extreme. In this universe of discourse some fuzzy subsets are defined:

COLD: [10,17], NORMAL: [15,25] and HOT: [22,35].

For each fuzzy subset a membership function is defined, being zero if x(t) ∉ Xᵢ:

COLD:    0 ≤ μ_COLD(x) ≤ 1 for x(t) ∈ [10,17];   μ_COLD(x) = 0 otherwise
NORMAL:  0 ≤ μ_NORMAL(x) ≤ 1 for x(t) ∈ [15,25]; μ_NORMAL(x) = 0 otherwise
HOT:     0 ≤ μ_HOT(x) ≤ 1 for x(t) ∈ [22,35];    μ_HOT(x) = 0 otherwise
COLD, NORMAL and HOT are called linguistic variables. We give a more
formal definition for these variables later.

10.4 Membership Function (MF)

Fuzzy set theory extends the crisp set concept by defining partial
memberships, which can take values ranging from 0 to 1 over the universe of
discourse X:

μ_A : X → [0,1]

where A is a fuzzy set defined on the universal set X of the specific problem.

10.4.1 Continuous Fuzzy Set

If X is continuous, then a fuzzy set A can be defined as:

A = ∫_X μ_A(x)/x

Note that in the above definition '/' does not refer to a division; it is
used as a notation to separate the membership of an element from the element
itself. As an example of a continuous fuzzy membership function, the linguistic
term POSITIVE may be defined to take the following membership function:

μ_POSITIVE(x) = 1 if x > 4;  (x-1)/3 if 1 ≤ x ≤ 4;  0 otherwise
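As an illustration, the piecewise definition of μ_POSITIVE above can be written directly as a small Python function; the sampling points in the example call are arbitrary.

def mu_positive(x):
    """Membership function of the linguistic term POSITIVE:
    1 for x > 4, a ramp (x-1)/3 on [1,4], and 0 otherwise."""
    if x > 4:
        return 1.0
    if 1 <= x <= 4:
        return (x - 1) / 3.0
    return 0.0

print([round(mu_positive(x), 2) for x in (0, 1, 2.5, 4, 6)])
# [0.0, 0.0, 0.5, 1.0, 1.0]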

Fig 10.2 Examples of monotonic MF (POSITIVE) and triangular MF (ZERO)

10.5 Discrete Fuzzy Set

In practice one often deals with discrete sets. A finite discrete fuzzy subset
contains n elements and is represented in standard fuzzy notation as the union of fuzzy
singletons μᵢ/xᵢ, where μᵢ is the membership value corresponding to xᵢ. Using the
+ sign as the notation for union, a discrete fuzzy set F may be written as:

F = μ₁/x₁ + μ₂/x₂ + ... + μₙ/xₙ = Σᵢ₌₁ⁿ μᵢ/xᵢ

For example, the discrete fuzzy set TALL may be represented as:
TALL = 0/150cm + 0.2/155cm + 0.4/160cm + 0.6/165cm + 0.8/170cm + 1/175cm
Similarly, A = {0.2/e₁, 0.6/e₂} has membership 0.2 for element e₁ and 0.6 for element e₂ in the fuzzy set A.

10.5.1 Definition of Membership Function

Fig. 10.4 An example of bell shape MF (NORMAL).



The following parameters are defined for each membership function, as
shown in Fig. 10.4.

Support is the interval for which μ_A(x) > 0. The support for COLD is
[10,17], for NORMAL it is [15,25] and for HOT it is [22,35].

Peak value is the point p such that μ_A(p) = 1.

Bandwidth is the interval for which μ_A(x) ≥ 0.5.

α-cut: The α-cut of a fuzzy set A is defined as the crisp set of all the
elements of the universe X which have memberships in A greater than or equal to
α, where

A_α = {x ∈ X | μ_A(x) ≥ α}.

For example, if the fuzzy set A is described by its membership function

A = {0.2/2 + 0.4/3 + 0.6/4 + 0.8/5 + 1/6} and α = 0.3, then the α-cut of A is:

A_α = {0.4/3 + 0.6/4 + 0.8/5 + 1/6}

Note that '+' is the OR operator and '/' shows the association of the
grade of membership with its element.
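A discrete fuzzy set and its α-cut are easy to compute; the sketch below holds the μᵢ/xᵢ pairs of the example above in a Python dict (the dict representation is my own choice, not the book's notation).

def alpha_cut(fuzzy_set, alpha):
    """Return the elements of a discrete fuzzy set (element -> membership)
    whose membership is greater than or equal to alpha."""
    return {x: mu for x, mu in fuzzy_set.items() if mu >= alpha}

A = {2: 0.2, 3: 0.4, 4: 0.6, 5: 0.8, 6: 1.0}
print(alpha_cut(A, 0.3))   # {3: 0.4, 4: 0.6, 5: 0.8, 6: 1.0}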

Fig. 10.5 shows four different types of continuous membership
function, namely (a) monotonic, (b) triangular, (c) trapezoidal and (d) bell-shaped.


Fig.10.5 Different membership functions

Another popular choice is the Gaussian membership function:

μ_A(x) = exp[-(x - c_h)²/(2σ_h²)]

where c_h is the centre and σ_h² is the variance. Gaussian fuzzy sets do not have a
compact support as they always have a positive response, but they can be modified
(using the α-cut) so that this property is incorporated into the fuzzy sets.

Fig 3.4 Examples of discrete membership functions

Examples of triangular and trapezoidal fuzzy sets are shown in Fig. 3.4

10.6 Singleton

If the bandwidth of a membership function is narrow, it gives a clear
significance to the linguistic variable. If the membership function is flat, almost
any variable value belongs strongly to the fuzzy subset. In some cases, for
instance if we have a measurement with no noise, the support of a fuzzy subset may
be reduced to a single value s. In this case the bandwidth is zero and the crossover
and the peak value are the same. This fuzzy subset is called a singleton.


Fig. 10.6 Fuzzy set singleton

Exercise 1: Assuming the MFs for the fuzzy subsets COLD, NORMAL and HOT are
symmetric triangles, identify the support, peak and bandwidth for each subset.

10.7 Simple Operation with Fuzzy Sets

In fuzzy algebra we can define, with a particular interpretation, some
basic operations similar to the basic Boolean algebra operations: equality,
inclusion, complementation, intersection and union. In particular, these
operations are valid for fuzzy sets with a common universe of discourse. Classical
properties such as commutativity, associativity, idempotency, distributivity,
absorption, identity and De Morgan's laws are also applicable. Some of these
properties are defined below.

10.7.1 Empty fuzzy set, A=0

A fuzzy set is said to be empty if and only if its membership function is
zero for all x ∈ X.

10.7.2 Equality

Two fuzzy sets A and B are equal, A = B, if and only if

μ_A(x) = μ_B(x)  ∀x ∈ X.

10.7.3 Inclusion

A fuzzy set A is a subset of fuzzy set B, A ⊆ B, if and only if

μ_A(x) ≤ μ_B(x)  ∀x ∈ X.

Fig 10.7 Fuzzy inclusion

10.7.4 Complementation

The complement of a fuzzy set A is denoted by A' and defined by

μ_A'(x) = 1 - μ_A(x)  ∀x ∈ X.

Fig. 10.8 Fuzzy NOT

Linguistically, the complement can be represented by the operator NOT.

10.7.5 Intersection

The intersection of two fuzzy sets A and B is a third fuzzy set C,
C = A ∩ B, with the membership function defined by:

μ_C(x) = min[μ_A(x), μ_B(x)]  ∀x ∈ X.

This is usually abbreviated as:

μ_C(x) = μ_A(x) ∧ μ_B(x)  ∀x ∈ X.

Fig. 10.9 Fuzzy intersection

The intersection of fuzzy sets can be linguistically interpreted as the


operator AND.

10.7.6 Union

The union of two fuzzy sets A and B is a third fuzzy set C, C = A ∪ B,
with the membership function defined by:

μ_C(x) = max[μ_A(x), μ_B(x)]  ∀x ∈ X.

This is usually abbreviated as:

μ_C(x) = μ_A(x) ∨ μ_B(x)  ∀x ∈ X.

Fig. 10.10 Fuzzy union

The union of fuzzy sets can be linguistically interpreted as the operator


OR.

10.7.7 Other Relations

Many of the basic ordinary set identities hold for fuzzy sets when the
complementation, union and intersection operators are defined as above, for
example,

Associative Laws:

A ∪ (B ∪ C) = (A ∪ B) ∪ C
A ∩ (B ∩ C) = (A ∩ B) ∩ C

Distributive Laws:

A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

De Morgan's Laws:

(A ∪ B)' = A' ∩ B'
(A ∩ B)' = A' ∪ B'
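The min/max/complement definitions above can be tried out on small discrete fuzzy sets; the Python sketch below also checks one of De Morgan's laws numerically. The sets A and B are hypothetical.

# Discrete fuzzy sets held as dicts over a common universe of discourse.
U = [1, 2, 3, 4]
A = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.4}
B = {1: 0.5, 2: 0.3, 3: 0.8, 4: 0.9}

def f_union(A, B):
    return {x: max(A[x], B[x]) for x in U}

def f_intersection(A, B):
    return {x: min(A[x], B[x]) for x in U}

def f_complement(A):
    return {x: 1.0 - A[x] for x in U}

# De Morgan: (A union B)' should equal A' intersection B'
lhs = f_complement(f_union(A, B))
rhs = f_intersection(f_complement(A), f_complement(B))
print(lhs == rhs)   # True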

10.8 t-norm and t-conorm

The definitions in Section 10.7 show only one possible choice of the
operators for intersection, union and complement. Based on different
interpretations, which range from intuitive argumentation to empirical or
axiomatic justifications, other operators have been suggested. The t-norm for
intersection and the t-conorm for union are examples of these interpretations.

A t-norm, denoted by *, is a two-place function from [0,1]×[0,1] to [0,1],
which includes the fuzzy intersection, algebraic product, bounded product and drastic
product, defined as:

fuzzy intersection: x*y = min{x,y}
algebraic product:  x*y = xy
bounded product:    x*y = max{0, x+y-1}
drastic product:    x*y = x if y = 1; y if x = 1; 0 if x,y < 1

where x,y ∈ [0,1].

A t-conorm, denoted by '+', is a two-place function from [0,1]×[0,1] to [0,1],
which includes the fuzzy union, algebraic sum, bounded sum and drastic sum, defined
as:

fuzzy union:   x + y = max{x, y}
algebraic sum: x + y = x + y - xy
bounded sum:   x + y = min{1, x+y}
drastic sum:   x + y = x if y = 0; y if x = 0; 1 if x,y > 0

where x,y ∈ [0,1].
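For reference, the four t-norms and t-conorms listed above can be written as one-line Python functions; the evaluation point in the example is arbitrary.

# t-norms (intersection operators)
t_norms = {
    "min":       lambda x, y: min(x, y),
    "algebraic": lambda x, y: x * y,
    "bounded":   lambda x, y: max(0.0, x + y - 1.0),
    "drastic":   lambda x, y: x if y == 1 else (y if x == 1 else 0.0),
}

# t-conorms (union operators)
t_conorms = {
    "max":       lambda x, y: max(x, y),
    "algebraic": lambda x, y: x + y - x * y,
    "bounded":   lambda x, y: min(1.0, x + y),
    "drastic":   lambda x, y: x if y == 0 else (y if x == 0 else 1.0),
}

x, y = 0.6, 0.7
print({name: round(f(x, y), 2) for name, f in t_norms.items()})
print({name: round(f(x, y), 2) for name, f in t_conorms.items()})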

Exercise 2: Assuming the MFs in Exercise 1, find the MF for the following
operations:

NORMAL ∩ HOT
COLD ∪ NORMAL
(NORMAL ∩ HOT)'
COLD ∪ (NORMAL ∩ HOT)'

Exercise 3: Consider two qualitative statements, 'big' and 'medium', with the
following membership functions:

big = {0, 0.3, 0.7, 1.0}

medium = {0.2, 0.7, 1.0, 0.8}

Find big ∩ medium, big ∪ medium, NOT(big) and NOT(medium).

10.9 The Extension Principle

The extension principle is a tool for translating crisp mathematical
concepts to fuzzy sets. Let X and Y be two universes of discourse and let f be a
mapping from X to Y, i.e. y = f(x). For a fuzzy set A in X, the extension principle
defines a fuzzy set B in Y by

μ_B(y) = sup_{x ∈ f⁻¹(y)} μ_A(x)

that is, μ_B(y) is the supremum of μ_A(x) over all x ∈ X such that f(x) = y, where y ∈ Y
and we assume f⁻¹(y) exists; if f⁻¹(y) does not exist for some y, we define μ_B(y) = 0.

10.10 Linguistic Variables

A linguistic variable is defined as follows:

Definition: Linguistic Variable

A linguistic variable is characterised by a quintuple (x, T(x), U, G, M) in
which x is the name of the variable; T(x) is the term set (vocabulary) of x, that is, the
set of names of linguistic values of x, with each value being a fuzzy set defined on
U; G is a syntactic rule for generating the names of values of x; and M is a
semantic rule associating each value with its meaning.

This definition may give the reader the feeling that a linguistic variable is a
complex concept, but in fact it should not. The aim of introducing the concept
of a linguistic variable is to present a formal way of saying that a variable may take words in
natural language as its values. For example, if we can say 'the speed is fast', then
the variable speed should be understood as a linguistic variable, but this does not
mean that the variable speed cannot take real values. In this spirit, we can give
the following definition of a linguistic variable.

If a variable can take words in natural language (for example, small,
fast, and so on) as its values, this variable is defined as a linguistic variable. These
words are usually labels of fuzzy sets. A linguistic variable can take either words
or numbers as its values.

For example, the linguistic variable speed can take 'slow', 'medium'
and 'fast' as its values. It can also take any real number in the interval [0, V_max]
as its value. The linguistic variable is an important concept that gives us a formal
way to quantify linguistic descriptions of variables.

Since in linguistic descriptions we often use hedges such as 'very' and
'more or less' to modify other terms, we need formal definitions of what these
hedges mean. Although in everyday use the hedge 'very' does not have a well-defined
meaning, in essence it acts as an intensifier. In this sense, we can define
these hedges as follows:

Let X be a fuzzy set in Ω (for example X = small); then 'very X' is defined
as a fuzzy set in Ω with the membership function

μ_very X(x) = [μ_X(x)]²

and 'more or less X' is a fuzzy set in Ω with membership function

μ_more or less X(x) = [μ_X(x)]^(1/2)

Fig. !! Membership functions for A, 'very A' and 'more or less A'
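The hedge definitions above are simple element-wise powers of the membership function, as the following sketch shows for a hypothetical discrete fuzzy set 'small'.

import math

small = {1: 1.0, 2: 0.8, 3: 0.5, 4: 0.2, 5: 0.0}

def very(fuzzy_set):
    """'very X': square the membership values (an intensifier)."""
    return {x: mu ** 2 for x, mu in fuzzy_set.items()}

def more_or_less(fuzzy_set):
    """'more or less X': take the square root of the membership values."""
    return {x: round(math.sqrt(mu), 2) for x, mu in fuzzy_set.items()}

print(very(small))           # membership at 3 drops from 0.5 to 0.25
print(more_or_less(small))   # membership at 3 rises to about 0.71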

10.11 Inference Rules

In fuzzy logic and approximate reasoning there are two important fuzzy
inference rules, namely Generalised Modus Ponens (GMP) and Generalised
Modus Tollens (GMT). These inference rules are generalisations of the
classical logic modus (meaning mode) ponens (from the Latin ponere, meaning to
affirm) and modus tollens (from the Latin tollere, meaning to deny). For fuzzy
logic these are defined as follows:

Definition: Generalised Modus Ponens (GMP)

GMP is defined as the following procedure:

premise 1: x is A'
premise 2: if x is A, then y is B
consequence: y is B'

where A’ , A, B and B’ are fuzzy sets, and x and y are linguistic variables.

Table !! shows the intuitive criteria relating premise 1 and the
consequence in GMP. We note that if the causal relation between 'x is A' and 'y is
B' is not strong in premise 2, the satisfaction of criterion 2-2 and criterion 3-2 is
allowed. Criterion 4-2 is interpreted as 'if x is A then y is B, else y is not B'.
Although this relation is not valid in formal logic, we often make such an
interpretation in everyday reasoning.

x is A’ (premise 1) y is B’(consequence)
criterion 1 x is A y is B
criterion 2-1 x is very A y is very B
criterion 2-2 x is very A y is B
criterion 3-1 x is more or less A y is more or less B
criterion 3-2 x is more or less A y is B
criterion 4-1 x is not A y is unknown
criterion 4-2 x is not A y is not B
Table !! Intuitive criteria relating premise 1 and the consequence for
given premise 2 in GMP

Generalised Modus Tollens (GMT)


GMT is defined as the following inference procedure:

premise 1: y is B'
premise 2: if x is A, then y is B
consequence: x is A'

where A’ , A, B and B’ are fuzzy sets, and x and y are linguistic variables.

y is B’ (premise 1) x is A’(consequence)
criterion 5 y is not B x is not A
criterion 6 y is not very B x is not very A
criterion 7 x is not more or less B x is not more or less A
criterion 8-1 y is B x is unknown
criterion 8-2 y is B x is A
Table !! Intuitive criteria relating premise 1 and the consequence for given
premise 2 in GMT

10.12 Cartesian Product

Consider two fuzzy sets defined on two different universes of discourse,
A = COLD and B = {central heating is on for about 10 minutes}; then we may
define the Cartesian product as follows.

The Cartesian product of two fuzzy sets A and B in the universes of
discourse U and V is a fuzzy set, denoted by A×B, in the Cartesian product space
U×V, with its membership function defined by:

μ_A×B(u,v) = Operator[μ_A(u), μ_B(v)]

where Operator is some operation defined a priori.

10.13 Fuzzy Relations and Their Compositions

A fuzzy relation in U and V is a fuzzy set, denoted by R, with a
membership function denoted by μ_R(u,v). If both A and B are propositions, then
A×B denotes a proposition that is a composition of A and B. The common
composition rules are the logical AND, OR and logical implication.

Logical AND

The propositions are related by means of an 'AND'. For example, if

X is A AND Y is B, then μ_A·B(u,v) = min[μ_A(u), μ_B(v)]

Logical OR

The propositions are related by means of an 'OR'. For example, if

X is A OR Y is B, then μ_A+B(u,v) = max[μ_A(u), μ_B(v)]

Exclusive OR

The propositions are related by means of XOR. For example, if

X is A XOR Y is B, then μ_A⊕B(u,v) = max[min[μ_A(u), 1-μ_B(v)], min[1-μ_A(u), μ_B(v)]]

Apart from these, there are other compositions of fuzzy relations, such as the sup-star
composition described in the next section.

10.14 Sup_Star Composition

Let R and S be fuzzy relations in U×V and V×W, respectively. The sup-star
composition of R and S is a fuzzy relation denoted by R∘S and is defined as:

μ_R∘S(u,w) = sup_{v ∈ V} [μ_R(u,v) * μ_S(v,w)]

where u ∈ U, w ∈ W, and * can be any operator in the class of t-norms defined
previously. Clearly, R∘S is a fuzzy set in U×W. It is possible that S is just a fuzzy
set in V; in this case μ_S(v,w) becomes μ_S(v), μ_R∘S(u,w) becomes μ_R∘S(u), and
the rest remains the same.

The most commonly used sup-star compositions are the sup-min and sup-product
compositions, which replace the * by min and the algebraic product,
respectively, as explained in the next section.

10.15 Fuzzy Implication

A fuzzy implication has the general form

if Premise then Conclusion

An arbitrary number of expressions of the form

xᵢ is lᵢⱼ

is combined by the operators AND or OR in the premise.

Example of a premise:

x is NEAR and v is FAST

The terms NEAR and FAST are defined as fuzzy sets on the linguistic
variables for the position and speed of a car. The conclusion is an expression of the
form x_o = l_oj. Here l_oj is the j-th fuzzy set of the linguistic variable describing
the output value, changed by the implication. The equals sign is an assignment
of the fuzzy set l_oj to the output variable x_o.

Let A and B be fuzzy sets in U and V, respectively. A fuzzy implication
A→B can be understood as a fuzzy if-then rule:

if x is A then y is B

where x and y are linguistic variables. There are six common interpretations of the
if-then rule, based on intuitive criteria or on generalisations of classical logic:

Fuzzy conjunction:               μ_A→B(u,v) = μ_A(u) * μ_B(v)
Fuzzy disjunction:               μ_A→B(u,v) = μ_A(u) + μ_B(v)
Material implication:            μ_A→B(u,v) = [1 - μ_A(u)] + μ_B(v)
Propositional calculus:          μ_A→B(u,v) = [1 - μ_A(u)] + [μ_A(u) * μ_B(v)]
Generalisation of modus ponens:  μ_A→B(u,v) = sup{c ∈ [0,1] : μ_A(u) * c ≤ μ_B(v)}
Generalisation of modus tollens: μ_A→B(u,v) = inf{c ∈ [0,1] : μ_B(v) + c ≥ μ_A(u)}

where * is a t-norm and + a t-conorm.

Example

Let the compositional rule of inference, used to draw an inference via a fuzzy
composition, be:

I = E ∘ R

where E is the fuzzy set of actual inputs to the relational system,
consisting of a single relation R, and I is the inference.

The membership function for I is defined by

μ_I(i) = max_e {min[μ_E(e), μ_R(e,i)]}

The maximisation here is with respect to e for each i hypothesised. This
is known as the max-min convolution.

If E is crisp, that is, E has a single known value e₁, it can be represented by
a fuzzy singleton with membership

μ_E(e) = 1 for e = e₁ and μ_E(e) = 0 for all other e; then

μ_I(i) = μ_R(e₁, i)

In this case, when the fuzzy relation R is a fuzzy implication of the type
if e is E then i is I,
μ_R(e,i) = min[μ_E(e), μ_I(i)]

In practice, the relational system may be a collection of rules {Rⱼ, j = 1,...,n}.
The rules may be interpreted as:

if....then....  j=1
else
if....then....  j=2
else
if....then....  j=3
else etc.

In this case the 'else' connective is interpreted as the 'or' connective, and

μ_I(i) = max_j [μ_Rj(e,i)|_(e=e₁)] = max_j min[μ_Ej(e₁), μ_Ij(i)]

This is known as the max-min rule of inference.

Example:

Suppose two fuzzy rules have been formulated for a system, namely:

1. If the error (e) is zero and the error change (de) is small positive (SM), then
the control input is small negative (SN).

2. If the error (e) is small negative (SN) and the error change (de) is zero (ZE),
then the control input is large positive (LP).

The rules are interpreted as a functional diagram as shown in Fig. !!.
Consider a process having an error e = -1 and an error change de = 1.5. The points of
intersection between the value -1 and the graphs in the first column have the
membership functions 0.6 and 0.6. Likewise, the second column shows that de
has memberships 0.7 and 0.2. The control input for each of the two rules is the intersection
of the paired values obtained from the graphs, i.e. min(0.6, 0.7) and min(0.6, 0.2), which
reduces to 0.6 and 0.2, respectively. Now, for the pair (e, de), two sets of control input
exist. To determine the value of the action to be taken from these contributions, we
choose the maximum value (other methods will be discussed later). In our example
the maximum value is 0.6, which corresponds to a control input of approximately -2 units.

Example

if x is small then y is medium

where
small = 1/1 + 1/2 + 0.9/3 + 0.6/4 + 0.3/5 + 0.1/6
medium = 0.1/2 + 0.3/3 + 0.7/4 + 1/5 + 1/6 + 0.7/7 + 0.5/8 + 0.2/9

The relation R may be defined by the following relational matrix, obtained by
using the min operator at each discrete sample point:

        0  .1  .3  .7  1.  1.  .7  .5  .2
        0  .1  .3  .7  1.  1.  .7  .5  .2
        0  .1  .3  .7  .9  .9  .7  .5  .2
        0  .1  .3  .6  .6  .6  .6  .5  .2
R =     0  .1  .3  .3  .3  .3  .3  .3  .2
        0  .1  .1  .1  .1  .1  .1  .1  .1
        0   0   0   0   0   0   0   0   0
        0   0   0   0   0   0   0   0   0
        0   0   0   0   0   0   0   0   0

The relation matrix may be viewed as shown in Fig. 10.11.

Example

Consider

if x is small, then y is medium
x is very small

We have R as in the previous example, and 'very small' is defined as:

verysmall = 1/1 + 0.9/2 + 0.6/3 + 0.3/4 + 0.1/5

Using the max-min composition with the relational matrix R above,

B' = verysmall ∘ R

with verysmall represented by the vector [1 0.9 0.6 0.3 0.1 0 0 0 0], so that

μ_B'(y) = max_x min[μ_verysmall(x), μ_R(x,y)]

For the first element, for example, max(min(1,0), min(0.9,0), ..., min(0.1,0), ..., min(0,0)) = 0.
Evaluating every column gives

B' = [0 0.1 0.3 0.7 1 1 0.7 0.5 0.2]
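The construction of R and the max-min composition above are easy to reproduce; the Python sketch below recomputes both and should print the same inferred set B'. The list representation of the fuzzy sets is my own choice.

small     = [1.0, 1.0, 0.9, 0.6, 0.3, 0.1, 0.0, 0.0, 0.0]   # x = 1..9
medium    = [0.0, 0.1, 0.3, 0.7, 1.0, 1.0, 0.7, 0.5, 0.2]   # y = 1..9
verysmall = [1.0, 0.9, 0.6, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0]   # as given above

# Relation R(x, y) = min(small(x), medium(y))
R = [[min(sx, my) for my in medium] for sx in small]

# Max-min composition: B'(y) = max over x of min(verysmall(x), R(x, y))
B_prime = [max(min(verysmall[i], R[i][j]) for i in range(len(small)))
           for j in range(len(medium))]

print([round(b, 2) for b in B_prime])
# [0.0, 0.1, 0.3, 0.7, 1.0, 1.0, 0.7, 0.5, 0.2]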

10.16 Fuzzy Logic Controller (FLC)

In the design of a fuzzy controller, one must identify the main control
parameters and determine a term set which is at the right level of granularity for
describing the values of each linguistic variable. For example, a term set
including linguistic values such as {Small, Medium, Very Large} may not be
satisfactory in some domains, which instead require the use of a term set such as
{Very Small, Small, Medium, Large, Very Large}.

Different types of fuzzy membership function have been used in fuzzy
logic control; however, four types are most common. The first type is a
monotonic membership function such as a straight line. The other types use
triangular, trapezoidal and bell-shaped functions.

The selection of the types of fuzzy variable affects the type of reasoning
to be performed by the rules using these variables. After the values of the main
control parameters are determined, a knowledge base is developed using the above
control variables and the values that they may take. If the knowledge base is a
rule base, more than one rule may fire, requiring the selection of a conflict
resolution method of decision making, as will be described later.

Different methods for developing a fuzzy logic controller have been
proposed. We discuss one common architecture, known as the direct FLC. The
general structure of the FLC is shown in Fig. !!.

Measurements from the sensors are usually fed to an A/D converter and
then filtered or processed to obtain trends, compensations, integrals and so on. The
discrete input variables may be signals such as the error, error rate, functions of errors,
process state variables, process performance indices and/or references. This
information, together with the operator commands and actions, must first be converted into
suitable fuzzy data for processing by the Inference Engine (IE). The output of the
FLC can be either linguistic (logic), asking for actions such as switching binary actuators
on/off, or numeric, for process regulation or tracking. The output to the
operator should be both graphical and numeric, to represent the process dynamics,
and linguistic or symbolic, to give the operator insight into the controller's
reasoning and decision-making process. Unlike the conventional controller, the
availability of controller and process data in the FLC provides a framework for
process and controller supervision such as autotuning, learning, adaptation and
fault monitoring.

The main components of an FLC are a Fuzzifier (FF), to convert from
numeric to linguistic form; a Knowledge Base (KB) system; a Data Base (DB); an Inference
Engine (IE); Defuzzification (DF), to convert from linguistic to numeric form; and a
Supervisor (SR), to deal with tuning, selectable options and the operator interface.
Each of these modules is discussed in the following sections.

10.16.1 Fuzzifier

Since fuzzy theory generally defines operators only for fuzzy sets, a
fuzzy set has to be assigned to each distinct measured value (crisp value). This
process is called fuzzification. The role of the fuzzifier is therefore to map (convert)
data from the sensors and the supervisor into fuzzy sets, using the information
available in the data base. The supervisor may provide the numerical values of
signals such as error and error rate (PD-action), or error and integral of error (PI-action), or
any other available information.

There are two possible choices for the above mapping:

1) Singleton Fuzzifier

Consider a crisp point x ∈ U, where U is the universe of discourse for x.
Define a fuzzy singleton with support m, that is,

μ_A(x) = 1 for x = m and μ_A(x) = 0 for all other x ∈ U with x ≠ m.

The singleton fuzzifier is the most widely used fuzzification process.
Note that μ_A(x) in this case has the form of a normalised impulse.

2) Nonsingleton Fuzzifier

Define μ_A(x) = 1 for x = m, with μ_A(x) decreasing from 1 as x moves away from m; for
example, a triangular or bell-type membership function.

The nonsingleton fuzzifier is useful if the signals are corrupted by noise. In
this case a triangular membership function may be defined whose vertex corresponds to the mean
value of the data set of the sensor measurement and whose base is a function of the
standard deviation (for example, twice the standard deviation). Fuzzification then
amounts to finding the intersection of the label's membership
function and the distribution for the sensor data, as shown in Fig. !!.

10.17 Knowledge Base (KB) System

A rule, also called a fuzzy implication, has the general form

if premise then conclusion

An arbitrary number of expressions is combined in the premise using
the operators AND or OR. The simplest rule has the following classical structure:

R1: if fact1 and fact2 then fact3

where (fact1 and fact2) is the premise, fact1 and fact2 are the antecedents and fact3 is
the consequence.

There are two main tasks in designing the control knowledge base.

1) A set of linguistic variables must be selected which describe the values
of the main control parameters of the process. The input and output parameters
must be linguistically defined at this stage using proper term sets. The selected
level of granularity of a term set for an input or output variable plays an important
role in the smoothness of the controller output signal.

2) A control knowledge base must be developed which uses the above
linguistic description of the main parameters. There are four methods for
developing the control rules.

10.17.1 Expert Knowledge

This is the most widely used method of designing an FLC knowledge base. In
modelling the expert control knowledge, fuzzy control rules of the form

If error is small and change-in-error is small then force is small

have been used. A rule may also be developed whereby its conclusion is a
function of the input parameters. For example, the following implication may be
written:

if x is A and y is B then z = f(x,y)

where z is a function of the values that x and y may take. In a variation of this
method, the control action of the operator may be modelled directly.

The format of fuzzy implications clearly makes them suitable as a
descriptive language for expressing an expert's thinking, which is essentially fuzzy in
nature. A good example is the operating manual for a cement kiln, as shown in
Table !!. These protocols are just like fuzzy control rules.

Case Condition Action to be taken


1 BZ low When BZ is drastically low:
OX low a)reduce kiln speed
BE low b)reduce fuel
When BCE is slightly low
c) increase ID speed
d) increase fuel rate
2 BCE low a) reduce kiln speed
ox low b)reduce fuel rate
BE OK c) reduce ID fan speed
3 BCE low a) reduce kiln speed
OX low b) reduce fuel rate
BE high c) reduce ID fan speed
BE=back end temperature, BZ=burning zone temperature, ox=% of
oxygen gas in kiln exit gas

In many cases where an operator plays an important role in process
control, it is very useful to capture his know-how on control through interviews and to
express it in terms of fuzzy implications. It is also possible for a control engineer
to list a number of protocols based on his knowledge about the process to be
controlled and his general control engineering knowledge.

Example

Let us assume a process has the second order step response shown in Fig. !!.

The fuzzy control rules may be formulated using the operator's knowledge
combined with control theory knowledge. These rules may, for example, take
the form shown in Table !!. The first rule corresponds to the starting point, where
the error is big and the rate of change of error is zero; the control action should
then be big, i.e.

if e is PB and de/dt is ZO then u is PB

The second rule corresponds to the point t₁, and so on. Rule 13 is used to
force the error to zero when the error is small.

Rule   e    de/dt   u    Point

1      PB   ZO      PB   t0
2      ZO   PB      NB   t1
3      NB   ZO      NB   t2
4      ZO   NB      PB   t3
5      PM   ZO      PM   t4
6      ZO   PM      NM   t5
7      NM   ZO      NM   t6
8      ZO   NM      PM   t7
9      PS   ZO      PS   t8
10     ZO   PS      NS   t9
11     NS   ZO      NS   t10
12     ZO   NS      PS   t11
13     ZO   ZO      ZO   set point
Table !! The 13 rules for controlling the 2nd order system
The disadvantage of this design method is that it is often difficult for the
operator to describe his actions linguistically. Moreover, it is often difficult to
find the control action if the process dynamics are unknown. The method cannot,
therefore, be generalised as a design procedure. It is possible to improve this
method by directly modelling the operator's control action using input/output
data. The procedure for building a fuzzy model is similar to that for building process
models. There are two methods for designing a fuzzy controller based on a fuzzy
model. The first is a heuristic method in which we set a fuzzy control rule to
compensate for an undesirable system behaviour by considering the control objective;
that is, there is one control rule corresponding to each system behaviour. The
number of control rules is generally smaller than the number of system behaviours, since
there are some desirable system behaviours which do not need to be compensated.
The second method is to determine the structure and the parameters of the control
rules so that the system with the controller satisfies the control objective, for
example shows a desirable response, minimises a performance index, etc. In
either case we have to find a fuzzy model of the process.

10.17.2 Modelling a Process

In this method an approximate fuzzy model of the system is built using
implications which describe the possible states of the system. A fuzzy
controller is then designed to control the fuzzy model. This approach is similar to
the traditional approach taken in control engineering; hence, model parameter
and structure identification are needed. A typical rule has the form:

Rᵢ: if x₁ is A₁ⁱ and x₂ is A₂ⁱ and ... and xₘ is Aₘⁱ then y = p₀ + p₁x₁ + ... + pₘxₘ

for i = 1,...,n, where n is the number of rules and the consequence is a linear function
of the m input variables. The inverse of the process model may then be used to
control the process.

10.17.3 Self-organisation

The main idea in this method is the development of rules which can be
adjusted over time to improve the controller's performance.

For n physical variables, each with m attached linguistic values, the full set
of rules will contain mⁿ rules. It is therefore clear that the complexity of the
controller rapidly increases as the number of rules increases. A rule base must
have the following properties:

Completeness

For any input pattern an output must be generated.

Consistency

The same input must give the same conclusion.

Interactivity

Any of the possible outputs must be the result of at least one input pattern.

There is still the question of how to determine the membership functions
for the rules. If the rules are provided by a human expert, then the membership
functions should also be specified by the expert, because these functions are an integral
part of the expert knowledge. For example, if an expert says 'if the error is large
then the control is large', he or she should state what 'large' means by specifying
the fuzzy membership function for the linguistic value large. If the rules are
determined from numerical data, then the first task is to determine the functional
form of the membership functions, such as triangular, Gaussian, trapezoidal, etc.
After the selection of the functional form, the problem is how to identify their
parameters from the measured data. This is discussed in more detail later in the
notes.

It is common that more than one control rule may fire at a time. The
methodology used to decide what control action should be taken as the
result of the firing of several rules is referred to as the process of conflict resolution.
Consider the following example:

R1: if X is A1 and Y is B1 then Z is C1.

R2: if X is A2 and Y is B2 then Z is C2.

Now, if we have x₀ and y₀ as the sensor readings for the fuzzy variables X and
Y, then their grades of membership for rule 1 are μ_A1(x₀) and μ_B1(y₀),
respectively. Similarly, for rule 2 we have μ_A2(x₀) and μ_B2(y₀) as the grades of
membership. The strengths of these rules may be calculated as:

α₁ = μ_A1(x₀) ∧ μ_B1(y₀) = min{μ_A1(x₀), μ_B1(y₀)}

α₂ = μ_A2(x₀) ∧ μ_B2(y₀) = min{μ_A2(x₀), μ_B2(y₀)}

The control output of each rule is calculated by applying the matching
strength of its preconditions to its conclusion:

μ_D1(ω) = α₁ ∧ μ_C1(ω)
μ_D2(ω) = α₂ ∧ μ_C2(ω)

where ω ranges over the values of the support of the rule conclusion. This
implies that, as a result of reading the sensor values x₀ and y₀, rule 1 is recommending
a control action with μ_D1(ω) as its membership function and rule 2 is
recommending a control action with μ_D2(ω) as its membership function. One
possible conflict-resolution process then produces:

μ_C(ω) = μ_D1(ω) ∨ μ_D2(ω) = [α₁ ∧ μ_C1(ω)] ∨ [α₂ ∧ μ_C2(ω)]
       = max{min[α₁, μ_C1(ω)], min[α₂, μ_C2(ω)]}

The pointwise membership function μ_C(ω) should then be translated
(defuzzified) to a single value, as discussed later.

10.17.4 Generality of the Rules

The question arises whether the general if-then fuzzy rule of the type

Rᵢ: if x₁ is A₁ⁱ and ... and xₙ is Aₙⁱ then y is Bⁱ

is general enough to include other types of linguistic information. In this
section it is shown that the if-then rules include many other types of fuzzy rule as
special cases.

Incomplete if-part rule

The general if-then rule includes the following 'incomplete if-part rule':

Rᵢ: if x₁ is A₁ⁱ and ... and xₘ is Aₘⁱ then y is Bⁱ

where m < n.

Clearly this incomplete if-part rule is equivalent to

Rᵢ: if x₁ is A₁ⁱ and ... and xₘ is Aₘⁱ and xₘ₊₁ is I and ... and xₙ is I then y is Bⁱ

where I is a fuzzy set in R with MF μ_I(x) = 1. This rule is in the general form.

Or rule

The general rule includes the following 'or rule' as a special case:

Rᵢ: if x₁ is A₁ⁱ and ... and xₘ is Aₘⁱ, or xₘ₊₁ is Aₘ₊₁ⁱ and ... and xₙ is Aₙⁱ, then y is Bⁱ

Using the definition of the logical operator OR, this rule may be decomposed into the
following two rules:

Rᵢ: if x₁ is A₁ⁱ and ... and xₘ is Aₘⁱ then y is Bⁱ

Rᵢ: if xₘ₊₁ is Aₘ₊₁ⁱ and ... and xₙ is Aₙⁱ then y is Bⁱ

These two rules are special cases of the general rule.

Membership rule

The general rule includes the fuzzy statement

y is Bⁱ

as a special case. Clearly the fuzzy statement is equivalent to:

Rᵢ: if x₁ is I and ... and xₙ is I then y is Bⁱ

which is in the form of the general rule.

Gradual rule

The statement 'the smaller x is, the bigger y is' may be
represented using the format of the general rule by defining the following MFs for
x and y:

μ_X(x) = 1/(1 + exp(5(x+2)))     μ_Y(y) = 1/(1 + exp(-5(y-2)))

Then the preceding statement may be written as:

if x is X then y is Y

Unless rule

Rᵢ: y is Bⁱ unless x₁ is A₁ⁱ and ... and xₙ is Aₙⁱ

This rule is equivalent to

Rᵢ: if not {x₁ is A₁ⁱ and ... and xₙ is Aₙⁱ} then y is Bⁱ

Using De Morgan's law, this can be written as:

Rᵢ: if x₁ is not A₁ⁱ or ... or xₙ is not Aₙⁱ then y is Bⁱ

Considering 'not A₁ⁱ' as a single fuzzy set, the rule is in the general if-then form.

Conventional rules

The conventional production rule can be represented in the general if-then form by
defining membership functions which can take only the values 1 or 0.

10.17.5 Defuzzification

Defuzzification is a process by which an inferred fuzzy control action is
translated into a nonfuzzy control action. Several different strategies exist. We
discuss some of the commonly used techniques here.

10.17.5.1 Tsukamoto's Defuzzification Method

If monotonic membership functions are used, then a crisp control action
can be calculated by:

u = Σᵢ₌₁ⁿ wᵢxᵢ / Σᵢ₌₁ⁿ wᵢ

where n is the number of rules with firing strength wᵢ > 0 and xᵢ is the amount of
control action recommended by rule i.

10.17.5.2 The Centre Of Area (COA) Method

Assuming that a control action with a pointwise membership function μ_C
has been produced, the COA method calculates the centre of gravity of the
distribution for the control action. Assuming a discrete universe of discourse, we
have:

u = Σᵢ₌₁^q xᵢ μ_C(xᵢ) / Σᵢ₌₁^q μ_C(xᵢ)

where q is the number of quantisation levels of the output, xᵢ is the amount of
control output at quantisation level i, and μ_C(xᵢ) represents its membership
value in C.

10.17.5.3 The Mean Of Maxima (MOM) Method

The Mean of Maxima method (MOM) generates a crisp control action by
averaging the support values at which the membership function reaches its maximum.
For a discrete universe, this is calculated by:

u = Σⱼ₌₁ˡ xⱼ / l

where l is the number of quantised x values which reach their maximum
membership.

10.17.5.4 Rules with Functions of their Inputs

As mentioned earlier, fuzzy control rules may be written as a function of
their inputs, for example:

if xᵢ is Aᵢ and yᵢ is Bᵢ then zᵢ = fᵢ(xᵢ, yᵢ)

Assuming that αᵢ is the firing strength of rule i, then

u = Σᵢ₌₁ᵐ αᵢ fᵢ(xᵢ, yᵢ) / Σᵢ₌₁ᵐ αᵢ

where m is the number of firing rules.

Fig. !! The membership functions A1, B1, C1 (top) and A2, B2, C2 (bottom)

Fig. 9.1 Defuzzification of the combined conclusion of the rules described in Example 9.1

Example 9.1

Assume that we have the following two rules:

R1: if x is A1 and y is B1 then z is C1

R2: if x is A2 and y is B2 then z is C2

Suppose x₀ and y₀ are the sensor readings for the fuzzy variables x and y,
with the following (triangular) membership functions, each zero outside the intervals shown:

μ_A1(x) = (x-2)/3 for 2 ≤ x ≤ 5;   (8-x)/3 for 5 < x ≤ 8
μ_B1(y) = (y-5)/3 for 5 ≤ y ≤ 8;   (11-y)/3 for 8 < y ≤ 11
μ_C1(z) = (z-1)/3 for 1 ≤ z ≤ 4;   (7-z)/3 for 4 < z ≤ 7
μ_A2(x) = (x-3)/3 for 3 ≤ x ≤ 6;   (9-x)/3 for 6 < x ≤ 9
μ_B2(y) = (y-4)/3 for 4 ≤ y ≤ 7;   (10-y)/3 for 7 < y ≤ 10
μ_C2(z) = (z-3)/3 for 3 ≤ z ≤ 6;   (9-z)/3 for 6 < z ≤ 9

Let the sensor readings be the crisp values x₀ = 4 and y₀ = 8. For these
values we have:

R1:  μ_A1(x₀) = 2/3,  μ_B1(y₀) = 1
R2:  μ_A2(x₀) = 1/3,  μ_B2(y₀) = 2/3

Using the min operator, the strength of each rule may be calculated:

$$\alpha_1=\min\{\mu_{A_1}(x_0),\,\mu_{B_1}(y_0)\}=\min\{2/3,\,1\}=2/3$$

$$\alpha_2=\min\{\mu_{A_2}(x_0),\,\mu_{B_2}(y_0)\}=\min\{1/3,\,2/3\}=1/3$$

Applying α1 to the conclusion of Rule 1 results in the shaded trapezoid shown in Fig. 9.2-a:

$$CC_1(z)=\min\{\alpha_1,\;\mu_{C_1}(z)\}$$

[Fig. 9.2-a]

Applying α2 to the conclusion of Rule 2 results in the shaded trapezoid shown in Fig. 9.2-b:

$$CC_2(z)=\min\{\alpha_2,\;\mu_{C_2}(z)\}$$

[Fig. 9.2-b]

By superimposing the resulting memberships and using the max operator, the membership function of the combined conclusion of these rules is found. Now, using the COA method, the defuzzified value of the conclusion is:

$$
u=\frac{2\cdot\frac{1}{3}+3\cdot\frac{2}{3}+4\cdot\frac{2}{3}+5\cdot\frac{2}{3}+6\cdot\frac{1}{3}+7\cdot\frac{1}{3}+8\cdot\frac{1}{3}}{\frac{1}{3}+\frac{2}{3}+\frac{2}{3}+\frac{2}{3}+\frac{1}{3}+\frac{1}{3}+\frac{1}{3}}=4.7
$$

Using the MOM defuzzification strategy, three quantised values reach the maximum membership in the combined membership function, namely 3, 4 and 5, each with membership value 2/3. Thus:

$$ u=\frac{3+4+5}{3}=4.0 $$
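The whole of Example 9.1 can be checked numerically. The sketch below (our own code, not the book's, using the integer quantisation levels z = 2, ..., 8 from the COA calculation above) reproduces the firing strengths, the combined conclusion, and both defuzzified values:

```python
# Numerical check of Example 9.1: min inference, max aggregation over the
# quantisation levels z = 2..8, then COA and MOM defuzzification.
def tri(v, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if a <= v <= b:
        return (v - a) / (b - a)
    if b < v <= c:
        return (c - v) / (c - b)
    return 0.0

x0, y0 = 4, 8
alpha1 = min(tri(x0, 2, 5, 8), tri(y0, 5, 8, 11))   # min{2/3, 1}   = 2/3
alpha2 = min(tri(x0, 3, 6, 9), tri(y0, 4, 7, 10))   # min{1/3, 2/3} = 1/3

z_levels = range(2, 9)
combined = [max(min(alpha1, tri(z, 1, 4, 7)),        # clipped C1
                min(alpha2, tri(z, 3, 6, 9)))        # clipped C2
            for z in z_levels]

coa = sum(z * m for z, m in zip(z_levels, combined)) / sum(combined)
m_max = max(combined)
maxima = [z for z, m in zip(z_levels, combined) if abs(m - m_max) < 1e-9]
mom = sum(maxima) / len(maxima)
print(round(coa, 1), mom)  # 4.7 4.0
```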

Subject Index

Activated Sludge Process, ix, xiii, xiv, 6, 11, 21, 213, 223, 235
Actuator, vi, ix, 38, 39, 41, 42
Aeration, ix, xi, xiv, 11, 12, 15, 18, 19, 86, 215, 220
Biological Treatment, 1, 216
BOD, 1, 2, 3, 4, 6, 9, 12, 15, 65, 71, 138, 140, 215
Charts
  mean, 143
Control
  cascade, v, 38, 66, 67, 98, 111, 220
  Feedforward, x, 70, 71, 72, 98
  Inferential, 98
  On-Off, 54
  PID, v, x, xi, 38, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 69, 72, 80, 86, 87, 90, 109, 214, 220
  Ratio, v, 6, 38, 68, 69, 86, 98, 138, 166, 169, 235
Data Management, v, vi, xi, xii, xiii, xiv, 22, 23, 87, 88, 90, 97, 104, 108, 110, 113, 118, 126, 130, 173, 178, 184, 190, 206, 212, 225, 230, 233, 268
Design, v, vii, xiii, xiv, 37, 78, 199, 215, 233, 236
Desludging, xiv, 222
Display, xiv, 113, 142, 225
Dissolved Oxygen, vi, xii, 2, 4, 6, 12, 16, 17, 18, 25, 53, 66, 71, 72, 145, 161, 162, 163, 164, 214, 220
Effluent, 20, 215, 216
Expert Systems, xiv, 227, 228, 229, 235, 238
Fieldbus, 193, 198
Flow, xii, xiv, 15, 20, 147, 148, 149, 151, 155, 156, 172, 215, 219
Flow Balancing, 116, 219, 220, 225
Fuzzy Logic, xiv, xv, 236, 238, 240, 241, 243, 244, 245, 249, 250, 251, 252, 259, 260, 261, 266
HART Communication, vii, xiii, 173, 188, 189, 190, 191, 194, 197, 198
LabVIEW, vii, viii, xiii, 199, 203, 205
MATLAB, viii, 24, 80
Modelling, v, vii, viii, 1, 12, 19, 22, 24, 35, 80, 106, 120, 227, 228, 235, 270, 273
Monitoring, vii, xiv, 118, 225
  Fault, xiv, 225, 227
Oxygen, vi, xii, 1, 14, 16, 19, 118, 145, 161
PID Control, v, x, xi, 38, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 69, 72, 80, 86, 87, 90, 109, 214, 220
Pumps, vi, xiii, 117, 145, 169, 170, 171, 215, 223
Returned Activated Sludge, xiv, 215, 223, 224
Sedimentation, 215, 216
Sensors, vi, 165
  Analytical, xii, 156, 161, 164
  DO, vi, xii, 2, 4, 6, 12, 16, 17, 18, 25, 53, 66, 71, 72, 145, 161, 162, 163, 164, 214, 220
  Flow, vi, xii, 6, 9, 12, 25, 39, 40, 41, 68, 69, 71, 72, 76, 86, 96, 103, 125, 126, 145, 147, 148, 149, 150, 151, 152, 153, 154, 155, 158, 162, 166, 167, 215, 219, 220, 223, 224, 225, 231
  flumes, xii, 148, 149
  Level, vi, xii, 11, 22, 23, 39, 41, 43, 51, 67, 93, 95, 96, 98, 100, 105, 108, 109, 111, 112, 119, 137, 145, 146, 149, 150, 154, 161, 188, 191, 214, 219, 220, 223, 266, 270, 279
  self-cleaning, xii, 166
  Weirs, xii, 149
Simulation, v, 1, 22, 24, 27, 28, 35, 80, 81, 82, 83, 85, 228, 231
SIMULINK, v, 24, 80, 81, 82, 83, 85, 86
Sludge, ix, xiii, xiv, 6, 11, 20, 21, 213, 217, 223, 235
Software, xi, 78, 109, 199
Statistical Process Control, vi, 123
Statistical Properties
  Mean, 6, 44, 100, 104, 105, 111, 131, 132, 133, 134, 135, 136, 138, 139, 143, 144, 214, 219, 225, 255, 256, 269
  Median, 130, 131
  Mode, 118, 130, 131, 178, 197, 257
  Variance, 44, 131, 248
Suspended Solids, xii, 20, 164, 166
Temperature, xiii, 162, 204, 205, 206, 207
Treatment
  Primary, 4, 9, 10, 12, 24, 65, 98, 101, 110, 116, 117, 137, 191
  Secondary, ix, 3, 4, 5, 99, 178
  Tertiary, ix, 4, 5, 7
Valves, 41
Virtual Instrumentation, vii, 199, 201, 202
VME Bus, 201
