CONTENTS
1 WHAT IS ADAPTIVE CONTROL? 1
1.1 Introduction 1
1.2 Linear Feedback 3
1.3 Effects of Process Variations 9
1.4 Adaptive Schemes 19
1.5 The Adaptive Control Problem 24
1.6 Applications 27
1.7 Conclusions 33
Problems 34
References 38
2 REAL-TIME PARAMETER ESTIMATION 41
2.1 Introduction 41
2.2 Least Squares and Regression Models 42
2.3 Estimating Parameters in Dynamical Systems 56
2.4 Experimental Conditions 63
2.5 Simulation of Recursive Estimation 71
2.6 Prior Information 78
2.7 Conclusions 82
Problems 82
References 87
3 DETERMINISTIC SELF-TUNING REGULATORS 90
3.1 Introduction 90
3.2 Pole Placement Design 92
3.3 Indirect Self-tuning Regulators 102
3.4 Continuous-Time Self-tuners 109
3.5 Direct Self-tuning Regulators 112
3.6 Disturbances with Known Characteristics 121
3.7 Conclusions 128
Problems 129
References 135
4 STOCHASTIC AND PREDICTIVE SELF-TUNING REGULATORS 137
4.1 Introduction 137
4.2 Design of Minimum-Variance and Moving-Average Controllers 137
4.3 Stochastic Self-tuning Regulators 146
4.4 Unification of Direct Self-tuning Regulators 156
4.5 Linear Quadratic STR 164
4.6 Adaptive Predictive Control 168
4.7 Conclusions 178
Problems 179
References 181
5 MODEL-REFERENCE ADAPTIVE SYSTEMS 185
5.1 Introduction 185
5.2 The MIT Rule 186
5.3 Determination of the Adaptation Gain 194
5.4 Lyapunov Theory 199
5.5 Design of MRAS Using Lyapunov Theory 206
5.6 Bounded-Input, Bounded-Output Stability 215
5.7 Applications to Adaptive Control 230
5.8 Output Feedback 235
5.9 Relations between MRAS and STR 243
5.10 Nonlinear Systems 245
5.11 Conclusions 255
Problems 256
References 260
6 PROPERTIES OF ADAPTIVE SYSTEMS 263
6.1 Introduction 263
6.2 Nonlinear Dynamics 265
6.3 Adaptation of a Feedforward Gain 274
6.4 Analysis of Indirect Discrete-Time Self-tuners 280
6.5 Stability of Direct Discrete-Time Algorithms 293
6.6 Averaging 299
6.7 Application of Averaging Techniques 306
6.8 Averaging in Stochastic Systems 319
6.9 Robust Adaptive Controllers 327
6.10 Conclusions 338
Problems 338
References 343
7 STOCHASTIC ADAPTIVE CONTROL 348
7.1 Introduction 348
7.2 Multistep Decision Problems 350
7.3 The Stochastic Adaptive Problem 352
7.4 Dual Control 354
7.5 Suboptimal Strategies 362
7.6 Examples 365
7.7 Conclusions 370
Problems 371
References 372
8 AUTO-TUNING 375
8.1 Introduction 375
8.2 PID Control 376
8.3 Auto-tuning Techniques 377
8.4 Transient Response Methods 378
8.5 Methods Based on Relay Feedback 380
8.6 Relay Oscillations 385
8.7 Conclusions 388
Problems 388
References 389
9 GAIN SCHEDULING 390
9.1 Introduction 390
9.2 The Principle 391
9.3 Design of Gain-Scheduling Controllers 392
9.4 Nonlinear Transformations 398
9.5 Applications of Gain Scheduling 402
9.6 Conclusions 416
Problems 416
References 417
10 ROBUST AND SELF-OSCILLATING SYSTEMS 419
10.1 Why Not Adaptive Control? 419
10.2 Robust High-Gain Feedback Control 419
10.3 Self-oscillating Adaptive Systems 426
10.4 Variable-Structure Systems 436
10.5 Conclusions 442
Problems 442
References 445
11 PRACTICAL ISSUES AND IMPLEMENTATION 448
11.1 Introduction 448
11.2 Controller Implementation 449
11.3 Controller Design 458
11.4 Solving the Diophantine Equation 462
11.5 Estimator Implementation 465
11.6 Square Root Algorithms 480
11.7 Interaction of Estimation and Control 487
11.8 Prototype Algorithms 490
11.9 Operational Issues 493
11.10 Conclusions 494
Problems 496
References 497
12 COMMERCIAL PRODUCTS AND APPLICATIONS 499
12.1 Introduction 499
12.2 Status of Applications 500
12.3 Industrial Adaptive Controllers 503
12.4 Some Industrial Adaptive Controllers 506
12.5 Process Control 517
12.6 Automobile Control 527
12.7 Ship Steering 529
12.8 Ultrafiltration 534
12.9 Conclusions 544
References 542
13 PERSPECTIVES ON ADAPTIVE CONTROL 545
13.1 Introduction 545
13.2 Adaptive Signal Processing 546
13.3 Extremum Control 550
13.4 Expert Control Systems 554
13.5 Learning Systems 557
13.6 Future Trends 559
13.7 Conclusions 561
References 562
INDEX 565

CHAPTER 1
WHAT IS
ADAPTIVE CONTROL?
1.1 INTRODUCTION
In everyday language, “to adapt” means to change a behavior to conform to
new circumstances. Intuitively, an adaptive controller is thus a controller that
can modify its behavior in response to changes in the dynamics of the process
and the character of the disturbances. Since ordinary feedback also attempts
to reduce the effects of disturbances and plant uncertainty, the question of the
difference between feedback control and adaptive control immediately arises.
Over the years there have been many attempts to define adaptive control
formally. At an early symposium in 1961 a long discussion ended with the
following suggestion: “An adaptive system is any physical system that has
been designed with an adaptive viewpoint.” A renewed attempt was made by
an IEEE committee in 1973. It proposed a new vocabulary based on notions like
self-organizing control (SOC) system, parameter-adaptive SOC, performance-
adaptive SOC, and learning control system. However, these efforts were not
widely accepted. A meaningful definition of adaptive control, one that would make
it possible to examine the hardware and software of a controller and decide whether
or not it is adaptive, is still lacking. There does, however, appear to be a consensus
that a constant-gain feedback system is not an adaptive system.
In this book we take the pragmatic attitude that an adaptive controller
is a controller with adjustable parameters and a mechanism for adjusting
the parameters. The controller becomes nonlinear because of the parameter
adjustment mechanism. It has, however, a very special structure. Since general
nonlinear systems are difficult to deal with, it makes sense to consider special
classes of nonlinear systems. An adaptive control system can be thought of
as having two loops. One loop is a normal feedback with the process and the
controller. The other loop is the parameter adjustment loop. A block diagram
of an adaptive system is shown in Fig. 1.1. The parameter adjustment loop is
often slower than the normal feedback loop.

Figure 1.1 Block diagram of an adaptive system. (Blocks and signals: Setpoint, Controller, Control signal, Output, with the Controller parameters set by a Parameter adjustment block.)
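The two-loop structure can be made concrete with a small simulation. The following is a minimal illustrative sketch, not code from this book: the inner loop computes the control signal from an adjustable feedforward gain, while the slower outer loop adjusts that gain with the MIT rule (treated in Chapters 5 and 6). All numerical values are assumptions chosen for illustration.

```python
# Two-loop adaptive system: adaptation of a feedforward gain.
# MIT rule (see Sections 5.2 and 6.3): d(theta)/dt = -gamma * ym * e,
# where e = y - ym is the model-following error.

k = 2.0        # unknown process gain (used only to simulate the process)
k0 = 1.0       # reference-model gain
gamma = 0.5    # adaptation gain (chooses the speed of the outer loop)
dt = 0.01      # integration step

theta = 0.0    # adjustable controller parameter
for step in range(20000):
    uc = 1.0 if (step * dt) % 2.0 < 1.0 else -1.0  # square-wave command
    u = theta * uc                 # inner loop: control signal
    y = k * u                      # process output (static-gain process)
    ym = k0 * uc                   # reference-model output
    e = y - ym                     # model-following error
    theta += -gamma * ym * e * dt  # outer loop: parameter adjustment

print(round(theta, 2))  # prints 0.5, i.e. theta has converged to k0 / k
```

The sketch shows why the parameter adjustment loop is typically the slower one: theta drifts toward its correct value k0/k over many periods of the command signal, while the control signal responds instantaneously within each step.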
A control engineer should know about adaptive systems because they have
useful properties, which can be profitably used to design control systems with
improved performance and functionality.
A Brief History
In the early 1950s there was extensive research on adaptive control in connec-
tion with the design of autopilots for high-performance aircraft (see Fig. 1.2).
Such aircraft operate over a wide range of speeds and altitudes. It was found
that ordinary constant-gain, linear feedback control could work well in one
operating condition but not over the whole flight regime. A more sophisticated
controller that could work well over a wide range of operating conditions was
therefore needed. After a significant development effort it was found that gain
scheduling was a suitable technique for flight control systems. The interest in
adaptive control diminished partly because the adaptive control problem was
too hard to deal with using the techniques that were available at the time.
In the 1960s there was much research in control theory that contributed
to the development of adaptive control. State space and stability theory were
introduced. There were also important results in stochastic control theory. Dynamic
programming, introduced by Bellman, increased the understanding of
adaptive processes. Fundamental contributions were also made by Tsypkin,
who showed that many schemes for learning and adaptive control could be
described in a common framework. There were also major developments in
system identification. A renaissance of adaptive control occurred in the 1970s,
when different estimation schemes were combined with various design meth-
ods. Many applications were reported, but theoretical results were very limited.
In the late 1970s and early 1980s, proofs for stability of adaptive systems
appeared, albeit under very restrictive assumptions. The efforts to merge ideas