System Identification: A Survey: Link To Publication
Document Version:
Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
SYSTEM IDENTIFICATION, a survey

K.J. Åström, P. Eykhoff
Purpose of Identification
experiments that can be performed. In order to get realistic models it is often necessary to carry out the experiments during normal operation. This means that if the system is perturbed, the perturbations must be small so that the production is hardly disturbed. It might be necessary to have several regulators in operation during the experiment in order to keep the process fluctuations within acceptable limits. This may have an important influence on the estimation results.

When carrying out identification experiments of this type there are many questions which arise naturally:

o How should the experiment be planned? Should a sequential design be used, i.e. plan an experiment using the available a priori information, perform that experiment, plan a new experiment based on the results obtained, etc. When should the experimentation stop?

o What kind of analysis should be applied to the results of the experiment in order to arrive at control strategies with desired properties? What confidence can be given to the results?

o What type of perturbation signal should be used to get as good results as possible within the limits given by the experimental conditions?

o If a digital computer is used what is a suitable choice of the sampling interval?

In spite of the large amount of work that has been carried out in the area of system identification we have at present practically no general answers to the problems raised above. In practice most of these general problems are therefore answered in an ad hoc manner, leaving the analysis to more specified problems. In a recent paper Jacob and Zadeh (1969) discuss some of the questions in connection with the problem of identifying a finite state machine; c.f. also Angel and Bekey (1968). Some aspects of the choice of sampling intervals are given in Fantauzzi (1968), Åström (1969) and Sano and Terao (1969).

Since the general problems discussed above are very difficult to formalize one may wonder if there will ever be rational answers to them. Nevertheless it is worthwhile to recognize the fact that the final purpose of identification is often the design of a control system, since this simple observation may resolve many of the ambiguities of an identification problem. A typical example is the discussion whether the accuracy of an identification should be judged on the basis of deviations in the model parameters or in the time-response. If the ultimate purpose is to design control systems then it seems logical that the accuracy of an identification should be judged on the basis of the performance of the control system designed from the results of the identification.

Formulation of Identification Problems

The following formulation of the identification problem given by Zadeh (1962) is still relevant:

"Identification is the determination, on the basis of input and output, of a system within a specified class of systems, to which the system under test is equivalent."

Using Zadeh's formulation it is necessary to specify a class of systems, S = {S}, a class of input signals, U, and the meaning of "equivalent". In the following we will call "the system under test" simply the process(1) and the elements of S will be called models. Equivalence is often defined in terms of a criterion or a loss function which is a functional of the process output y and the model output ym, i.e.

    V = V(y,ym)    (2.1)

Two models m1 and m2 are then said to be equivalent if the value of the loss function is the same for both models, i.e. V(y,ym1) = V(y,ym2).

There is a large freedom in the problem formulation which is reflected in the literature on identification problems. The selection of the class of models S, the class of input signals U, and the criterion is largely influenced by the a priori knowledge of the process as well as by the purpose of the identification.

When equivalence is defined by means of a loss function the identification problem is simply an optimization problem: find a model S0 in S such that the loss function is as small as possible. In such a case it is natural to ask several questions:

o Is the minimum achieved?

o Is there a unique solution?

o Is the uniqueness of the solution influenced by the choice of input signals?

o If the solution is not unique, what is the character of the models which give the same value of the loss function and how should S be restricted in order to ensure uniqueness?

Answers to some of these problems have been given for a simple class of linear systems arising in biomedical applications by Bellman and Åström (1969). The class of models S has been called identifiable if the optimization problem has a unique solution. Examples of identifiable and non-identifiable classes are also given.

The formulation of an identification problem as an optimization problem also makes it clear that there are connections between identification theory and approximation theory. Many examples of these are found in the literature e.g. Lampard (1955), Kitamori (1960), Barker and Hawley (1966), Roberts (1966 and 1967) and others, where covariance functions are identified as coefficients in orthogonal series expansions. Recent examples are Schwartze (1969), Gorecki and Turowicz (1969).

(1) Compare Section 7.
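As an illustrative sketch (my own construction, not from the paper), the definitions around (2.1) can be written out for the common quadratic choice of loss function; all names below are hypothetical.

```python
# Sketch of equivalence in the sense of (2.1): two models are equivalent
# when they give the same value of the loss V(y, ym).
# Quadratic loss is one possible choice; all names here are illustrative.

def loss(y, ym):
    """Quadratic loss V(y, ym): sum of squared output deviations."""
    return sum((a - b) ** 2 for a, b in zip(y, ym))

def equivalent(y, ym1, ym2, tol=1e-12):
    """Models m1 and m2 are equivalent if V(y, ym1) = V(y, ym2)."""
    return abs(loss(y, ym1) - loss(y, ym2)) <= tol

# Two different model outputs can be equivalent without being identical:
y   = [1.0, 2.0, 3.0]
ym1 = [1.1, 2.0, 3.0]          # deviates at k = 0
ym2 = [1.0, 2.0, 2.9]          # deviates at k = 2, same loss value
```

This also makes concrete why the minimizer of V need not be unique, which is exactly the identifiability question raised above.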
Another type of identification problem is obtained by imbedding in a probabilistic framework. If S is defined as a parametric class, S = {Sβ}, where β is a parameter, the identification problem then reduces to a parameter estimation problem. Such a formulation makes it possible to exploit the tools of estimation and decision theory. In particular it is possible to use special estimation methods e.g. the maximum likelihood method, Bayes' method, or the min-max method. It is possible to assign accuracies to the parameter estimates and to test various hypotheses.

Also in many probabilistic situations it turns out that the estimation problem can be reduced to an optimization problem. In such a case the loss function (2.1) is, however, given by the probabilistic assumptions. Conversely, to a given loss function it is often possible to find a probabilistic interpretation. There are several good books on estimation theory available, e.g. Deutsch (1965) and Nahi (1969). A summary of the important concepts and their application to process identification is given by Eykhoff (1967). An exposé of the elements of estimation theory is also given in Appendix A.

Also in the probabilistic case it is possible to define a concept of identifiability using the framework of estimation theory. In Åström and Bohlin (1965) a system is called identifiable if the estimate is consistent. A necessary condition is that the information matrix is positive definite. This concept of identifiability is pursued further in Balakrishnan (1969), Staley and Yue (1969).

Relations between Identification and Control; the Separation Hypothesis

Whenever the design of a control system around a partially known process is approached via identification it is an a priori assumption that the design can be divided into two steps: identification and control. In analogy with the theory of stochastic control we refer to this assumption as the separation hypothesis. The approach is very natural, in particular if we consider the multitude of techniques which have been developed for the design of systems with known process dynamics and known environments. However, it is seldom true that optimum solutions are obtained if a process is identified and the results of the identification are used in a design procedure developed under the assumption that the process and its environment are known precisely. It can be necessary to modify the control strategy to take into account the fact that the identification is not precise. Conceptually it is known how these problems should be handled. In the extreme case when identification and control are done simultaneously for a system with time-varying parameters the dual control concept of Fel'dbaum (1960, 1961) can be applied. This approach will, however, lead to exorbitant computational problems even for simple cases. C.f. also Mendes (1970).

It can also be argued that the problem of controlling a process with unknown parameters can be approached without making reference to identification at all. As a typical example we mention on-line tuning of PID regulators. In any case it seems to be a worthwhile problem to investigate rigorously under what conditions the separation hypothesis is valid. Initial attempts in this direction have been made by Schwartz and Steiglitz (1968), Åström and Wittenmark (1969).

Apart from the obvious fact that it is desirable to choose a class of models S for which there is a control theory available, there are also many other interesting questions in the area of identification and control e.g.

o Is it possible to obtain rational choices of model structures and criteria for the identification if we know that the results of identification will be used to design control strategies?

o What "accuracy" is required of the solution of an identification problem if the separation hypothesis should be valid at least with a specified error?

Partial answers to these questions are given by Åström and Wittenmark (1969) for a restricted class of problems.

Accuracy of Identification

The problem of assigning accuracy to the result of an identification is an important problem and also a problem which always seems to give rise to discussions; e.g. Qvarnström (1964). The reason is that it is possible to define accuracy in many different ways and that an identification which is accurate in one sense may be very inaccurate in another sense. For example in the special case of linear systems it is possible to define accuracy in terms of deviations in the transfer function, in the weighting function (impulse response) or in the parameters of a parametric model. Since the Fourier transform is an unbounded operator, small errors in the weighting function can very well give rise to large errors in the transfer function and vice versa. A discussion of this is given by Unbehauen and Schlegel (1967) and by Strobel (1967). It is also possible to construct examples where there are large variations in a parametric model in spite of the fact that the corresponding impulse response does not change much. See e.g. Strobel (1967).

Many controversies can be resolved if we take the ultimate goal of the identification into account. This approach has been taken by Stěpán (1967) who considers the variation of the amplitude margin with the system dynamics. The following example illustrates the point.

Example. Consider the process ST given by

    dx/dt = u(t-T)    (2.2)
The transfer function is

    HT(s) = (1/s) e^(-sT)    (2.3)

and the unit step response is

    hT(t) = 0        0 <= t < T
    hT(t) = t - T    t >= T    (2.4)

Assume that the process ST is identified as S0. Is it possible to give a sensible meaning to the accuracy of the identification? It is immediately clear that the differences

    max_t |hT(t) - h0(t)|    (2.5)

    max_ω |HT(jω) - H0(jω)|    (2.6)

can be made arbitrarily small if T is chosen small enough. On this basis it thus seems reasonable to say that S0 is an accurate representation of ST if T is small. On the other hand the difference

    |log HT(jω) - log H0(jω)| = |ωT|    (2.7)

i.e. the difference in phase shift, can be made arbitrarily large, no matter how small we choose T.

Finally assume that it is desired to control the system (2.2) with the initial condition

    x(0) = 1    (2.8)

in such a way that the criterion

    V = ∫₀^∞ {α² x²(t) + u²(t)} dt    (2.9)

is minimal. Suppose that an identification has resulted in the model S0 while the process is actually ST. How large a deviation of the loss function is obtained? For S0 the control strategy which minimizes (2.9) is given by

    u(t) = -α x(t)    (2.10)

The minimal value of the loss is then given by

    min V = α

If α = 1 it can be shown that a very slight increase of the loss function is obtained if say T = 0.001. However, if α = 2000 (> π/2T) the criterion (2.9) will be infinite for the strategy (2.10) because the system is unstable. We thus find that the same model error is either negligible or disastrous depending on the properties of the loss function.

3. Classification of identification methods

The different identification schemes that are available can be classified according to the basic elements of the problem, i.e. the class of systems S, the input signals U and the criterion. Apart from this it might also be of interest to classify them with respect to implementation and data processing requirements. For example: in many cases it might be sufficient to do all computations off line, while other problems might require that the results are obtained on line, i.e. at the same time the measurements are done. Classifications have been done extensively in Eykhoff (1967), Balakrishnan and Peterka (1969).

The Class of Models S

The models can be characterized in many different ways: by nonparametric representations such as impulse response, transfer function, covariance functions, spectral densities, Volterra series; and by parametric models such as state models

    dx/dt = f(x,u,β)
    y = g(x,u,β)    (3.1)

where x is the state vector, u the input, y the output and β a parameter (vector). It is known that the parametric models can give results with large errors if the order of the model does not agree with the order of the process. An illustration of this is given in an example of Section 5. A more detailed discussion of parametric model structure is given in Section 4. The nonparametric representations have the advantage that it is not necessary to specify the order of the process explicitly. These representations are, however, intrinsically infinite dimensional which means that it is frequently possible to obtain a model such that its output agrees exactly with the process output. A typical example taken from Gerdin is given below.

Example. Suppose that the class of models is taken as the class of linear time-invariant systems with a given transfer function. A reasonable estimate of the transfer function is then given by

    Ĥ(s) = [∫₀^T y(t) e^(-st) dt] / [∫₀^T u(t) e^(-st) dt]

where u is the input to the process and y is the output. To "eliminate disturbances" we might instead first compute the input covariance function

    Ru(τ) = ∫₀^(T-|τ|) u(t) u(t+τ) dt
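The stability threshold in the example above can be checked numerically. The sketch below (my own construction, not from the paper) simulates the closed loop dx/dt = -α x(t-T) by forward Euler integration; step sizes and function names are my own choices.

```python
# Numerical sketch (not from the paper) of the example (2.2)-(2.10):
# the process dx/dt = u(t - T) under the feedback u(t) = -alpha * x(t)
# is stable when alpha*T < pi/2 and unstable when alpha*T > pi/2.
# Forward-Euler simulation; all names and step sizes are illustrative.

def simulate(alpha, T=0.001, dt=1e-4, steps=60000):
    """Simulate x'(t) = -alpha * x(t - T) with x = 1 for t <= 0.
    Returns the final |x|, capped at 1e6 if the solution blows up."""
    delay = int(round(T / dt))          # delay expressed in steps
    x = [1.0] * (delay + 1)             # history buffer, x = 1 for t <= 0
    for _ in range(steps):
        x_new = x[-1] - dt * alpha * x[-1 - delay]
        if abs(x_new) > 1e6:            # unstable: stop early
            return 1e6
        x.append(x_new)
        x.pop(0)                        # keep only the needed history
    return abs(x[-1])

stable_x = simulate(alpha=1.0)      # alpha*T = 0.001 << pi/2: decays
unstable_x = simulate(alpha=2000.0) # alpha*T = 2.0 > pi/2: blows up
```

The same model error T = 0.001 is harmless for α = 1 and fatal for α = 2000, which is the point of the example.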
and the corresponding cross-covariance function Ruy(τ), and then estimate the transfer function by

    Ĥ(s) = [∫_(-T)^T Ruy(τ) e^(-sτ) dτ] / [∫_(-T)^T Ru(τ) e^(-sτ) dτ]

It is easy to show that Ĥ = H. The reason is simply that the chosen transfer function will make the model output exactly equal to the process output, at least if the process is initially at rest.

Interesting aspects of parametric versus nonparametric models are found in the literature on time series analysis. See for example Mann and Wald (1943), Whittle (1963), Grenander and Rosenblatt (1957), Jenkins and Watts (1963). Needless to say the models must of course finally be judged with respect to the ultimate aim.

The Class of Input Signals

A condition requiring that the matrix of input covariances is positive definite is sufficient to get consistent estimates for least squares and maximum likelihood in the special case of white measurement errors. One might therefore perhaps dare to conjecture that a condition of this nature will be required in general.

Apart from persistent excitation many applications will require that the output is kept within specified limits during the experiment. The problem of designing input signals, energy- and time-constrained, which are optimal e.g. in the sense that they minimize the variances of the estimates, has been discussed by Levadi (1966), Aoki and Staley (1969). The same problem is also discussed in Rault et al. (1969). It is closely related to the problem of optimal signal design in communication theory; see e.g. Middleton (1960).

The danger of identifying systems under closed loop control also deserves to be emphasized. Consider the classical example of Fig. 3.1.

[Fig. 3.1: block diagram of a process HP in closed loop with a regulator HR; a disturbance n enters at the process output y.]

[Figure residue, partially recoverable: error definitions for the model y' + ay³ = u, e.g. the output error e = y - w with w' + aw³ = u, and the equation error e = y' + ay³ - u.]
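The persistent-excitation condition mentioned above can be illustrated numerically. The following sketch (mine, not from the paper) builds the matrix of sample input covariances {Ru(i-j)} and checks positive definiteness; a constant input fails the test while a random binary input passes it. All names are illustrative.

```python
# Sketch (not from the paper): build the matrix A = {a_ij = Ru(i-j)} of
# input covariances and check whether it is positive definite.
import numpy as np

def input_cov_matrix(u, n):
    """Sample covariances Ru(i) = 1/(N-i) * sum u(k) u(k+i), arranged as
    the symmetric n x n matrix A with a_ij = Ru(i-j)."""
    N = len(u)
    r = [np.dot(u[: N - i], u[i:]) / (N - i) for i in range(n)]
    return np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])

def persistently_exciting(u, n, tol=1e-8):
    """True if the input covariance matrix of order n is positive definite."""
    return bool(np.linalg.eigvalsh(input_cov_matrix(u, n)).min() > tol)

rng = np.random.default_rng(0)
prbs = np.where(rng.random(1000) > 0.5, 1.0, -1.0)   # random binary input
const = np.ones(1000)                                # constant input

rich = persistently_exciting(prbs, n=4)     # random input: True
poor = persistently_exciting(const, n=4)    # constant input: False
```

The constant input gives an all-ones covariance matrix of rank one, so only one parameter direction is excited; this is the numerical face of the consistency condition.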
Representation of Linear Systems

Linear time-invariant systems can be represented in many different ways: by input-output descriptions such as impulse response or transfer function H, or by the state model S(A,B,C,D) defined by

    dx/dt = Ax + Bu
    y = Cx + Du    (4.1)

In suitable coordinates the matrix A can be brought to diagonal form,

    dx/dt = | λ1  0  ...  0 |     | β11 β12 ... β1p |
            |  0 λ2  ...  0 | x + | β21 β22 ... β2p | u    (4.4)
            |      ...      |     |       ...       |
            |  0  0  ... λn |     | βn1 βn2 ... βnp |

The input-output relation can be written as

    Y(s) = [b11 s^(n-1) + b21 s^(n-2) + ... + bn1] / [s^n + a1 s^(n-1) + ... + an] U1(s) + ...
         + [b1p s^(n-1) + b2p s^(n-2) + ... + bnp] / [s^n + a1 s^(n-1) + ... + an] Up(s)    (4.9)

where Y and Ui denote the Laplace transforms of y and ui. A canonical representation of a process of the n-th order with p inputs and one output can thus be written as

    d^n y/dt^n + a1 d^(n-1)y/dt^(n-1) + ... + an y =
        [b'01 d^n u1/dt^n + ... + b'n1 u1] + ... + [b'0p d^n up/dt^n + ... + b'np up]    (4.10)

An analogous form for systems with several outputs is

    A0 d^n y/dt^n + A1 d^(n-1)y/dt^(n-1) + ... + An y =
        [B01 d^n u1/dt^n + ... + Bn1 u1] + ... + [B0p d^n up/dt^n + ... + Bnp up]    (4.11)

This form was introduced by Koepcke (1963). It has been used among others by Wong et al. (1968) and Rowe (1968). The determination of the order of the process (4.11), which in general is different from n, as well as the reduction of (4.11) to state form has been done by Tuel (1966). Canonical forms for linear multivariable systems have also been studied by Luenberger (1967). The simplification of large linear dynamic processes has been treated by several authors; the reader may consult Davison (1968) for an approach in the time domain. Analogous results hold for discrete time systems.

When the matrix A has multiple eigenvalues and is not cyclic it is not clear what a "minimal parameter representation" means. The matrix A can of course always be transformed to Jordan canonical form. Since the eigenvalues of A are not distinct the matrix A can strictly speaking not be brought to the diagonal form (4.4).

For discrete time systems the corresponding state model is

    x(k+1) = Φx(k) + Γu(k) + v(k)
    y(k) = θx(k) + Du(k) + e(k)    (4.12)

where k takes integer values. The state vector x, the input u and the output y have dimensions n, p and r; {v(k)} and {e(k)} are sequences of independent equally-distributed random vectors with zero mean values and covariances R1 and R2. Since the covariance matrices are symmetric the model (4.12) contains

    n² + np + nr + pr + n(n+1)/2 + r(r+1)/2 = n((3n+1)/2 + p + r) + r(p + (r+1)/2)    (4.13)

parameters. Two models of the type (4.12) are said to be equivalent if: (i) their input-output relations are the same when e = 0 and v = 0, and (ii) the stochastic properties of the outputs are the same when u = 0. The parameters of Φ, Γ and θ can be reduced by the techniques applied previously. It still remains to reduce the parameters representing the disturbances. This is accomplished e.g. by the Kalman filtering theorem. It follows from this that the output process of (4.12) can be represented as

    x̂(k+1) = Φx̂(k) + Γu(k) + Kε(k)
    y(k) = θx̂(k) + Du(k) + ε(k)    (4.14)

where x̂(k) denotes the conditional mean of x(k) given y(k-1), y(k-2), ..., and {ε(k)} is a sequence of independent equally distributed random variables with zero mean values and covariance R.

The single output version of the model (4.14) was used in Åström (1965). Kailath (1968) calls (4.14) an innovations representation of the process. A detailed discussion is given in Åström (1970). The model (4.14) is also used by Mehra (1969). Notice that if the model (4.14) is known the steady state filtering and estimation problems are very easy to solve. Since K is the filter gain it is not necessary to solve any Riccati equation. Also notice that the state of the model (4.14) has a physical interpretation as the conditional mean of the state of (4.12).

If Φ is chosen to be in diagonal form and if conditions such as (4.6) are introduced on Γ and θ, the model (4.14) is a canonical representation which contains

    N = n(p + 2r) + r(p + (r+1)/2)    (4.15)

parameters. For systems with one output, where the additional condition is θ' = [1 0 ... 0], the equation (4.14) reduces to

    A(q) y(k) = B1(q) u1(k) + ... + Bp(q) up(k) + C(q) ε(k)    (4.16)

where q denotes the forward shift operator,

    q y(k) = y(k+1)    (4.17)

and the polynomials are

    A(q) = q^n + a1 q^(n-1) + ... + an
    Bi(q) = b0i q^n + b1i q^(n-1) + ... + bni

The following canonical form

    y(k) = [B1(q)/A(q)] u1(k) + ... + [Bp(q)/A(q)] up(k) + [C(q)/A(q)] ε(k)    (4.20)

has been used by Bohlin (1968) as an alternative to (4.16).

The choice of model structure can greatly influence the amount of work required to solve a particular problem. We illustrate this by an example. Where {e(k)} and {v(k)} are discrete-time white noise with covariances R1 and R2, the likelihood function for the estimation problem can be written as

    -log L = (1/2) Σ_(k=1)^N [v'(k) R1⁻¹ v(k) + e'(k) R2⁻¹ e(k)] + ...    (5.1)
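The parameter count (4.13) for the model (4.12) can be checked with a short computation; this sketch and its names are mine, not from the paper.

```python
# Sketch (not from the paper): number of parameters in the model (4.12),
# x(k+1) = Phi x(k) + Gamma u(k) + v(k), y(k) = Theta x(k) + D u(k) + e(k),
# with state dimension n, input dimension p and output dimension r.
# Phi: n*n, Gamma: n*p, Theta: r*n, D: r*p, plus the symmetric
# covariances R1 (n x n) and R2 (r x r).

def n_parameters(n, p, r):
    full_matrices = n * n + n * p + n * r + p * r
    covariances = n * (n + 1) // 2 + r * (r + 1) // 2
    return full_matrices + covariances
```

For instance n = 2, p = r = 1 gives 13 parameters, which makes plain why reduced canonical forms such as (4.14)-(4.16) matter for identification.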
To find the minimum of the loss function V we introduce

    y = [y(n+1), y(n+2), ..., y(n+N)]'

and the matrix

    Φ = | -y(n)     -y(n-1)   ... -y(1)   u(n)     u(n-1)   ... u(1) |
        | -y(n+1)   -y(n)     ... -y(2)   u(n+1)   u(n)     ... u(2) |
        |                        ...                                 |
        | -y(N+n-1) -y(N+n-2) ... -y(N)   u(N+n-1) u(N+n-2) ... u(N) |    (5.5)

The equation defining the error (5.3) then becomes

    e = y - Φβ    (5.6)

and the minimizing estimate is given by the normal equations

    β̂ = [Φ'Φ]⁻¹ Φ'y    (5.7)

where

    Φ'y = [ -Σ y(k)y(k-1), -Σ y(k)y(k-2), ..., -Σ y(k)y(k-n),
             Σ y(k)u(k-1),  Σ y(k)u(k-2), ...,  Σ y(k)u(k-n) ]'    (5.8)

with all sums taken over k = n+1, ..., N+n.
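A compact numerical sketch (mine, not from the paper) of (5.5)-(5.7): build Φ and y from input-output data and solve the least squares problem; on noise-free data from a first order system the true parameters are recovered exactly.

```python
# Sketch (not from the paper) of the least squares fit (5.5)-(5.7):
# model y(k) + a1 y(k-1) + ... + an y(k-n) = b1 u(k-1) + ... + bn u(k-n),
# parameter vector beta = (a1..an, b1..bn), regressor rows
# [-y(k-1) ... -y(k-n), u(k-1) ... u(k-n)].
import numpy as np

def least_squares_fit(y, u, n):
    """Return the least squares estimate beta_hat minimizing |y - Phi beta|^2."""
    N = len(y) - n
    Phi = np.array([np.concatenate((-y[k - n:k][::-1], u[k - n:k][::-1]))
                    for k in range(n, n + N)])
    target = y[n:n + N]
    beta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return beta

# Noise-free first order test system: y(k) - 0.8 y(k-1) = 0.5 u(k-1)
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1]

a1, b1 = least_squares_fit(y, u, n=1)   # expect a1 = -0.8, b1 = 0.5
```

Using `lstsq` rather than forming [Φ'Φ]⁻¹ explicitly is numerically preferable but solves the same normal equations (5.7).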
Notice that the technique of this section can immediately be applied to the identification of nonlinear processes which are linear-in-the-parameters, e.g.

    y(k) + a y²(k-1) = b1 u(k-1) + b2 u²(k-1)    (5.10)

A Probabilistic Interpretation

Consider the least squares identification problem which has just been discussed. Assume that it is of interest to assign accuracies to the parameter estimates as well as to find methods to determine the order of the system if it is not known. Such questions can be answered by imbedding the problem in a probabilistic framework by making suitable assumptions on the residuals. We have e.g.:

Theorem. Assume that the input-output data is generated by (5.4) where the residuals e(k) are independent, equally distributed with zero mean. Assume that the moments of e(k) of fourth order exist and are finite. Let all the roots of

    z^n + a1 z^(n-1) + ... + an = 0

have magnitudes less than one. Assume that the limits

    lim_(N→∞) (1/N) Σ_(k=1)^N u(k)   and   lim_(N→∞) (1/N) Σ_(k=1)^N u(k) u(k+i) = Ru(i)

exist and let the matrix A defined by

    A = {a_ij = Ru(i-j)}    i,j = 1,2,...,n    (5.11)

be positive definite. The least squares estimate β̂ then converges to the true parameters in mean square as N → ∞.

The special case of this theorem when b_i = 0 for all i, which corresponds to the identification of the parameters in an autoregression, was proven by Mann and Wald (1943). The extension to the case with b_i ≠ 0 is given in Åström (1968).

It is simple to find an expression for the accuracy of β̂ in this case:

    cov[β̂] = σ² [Φ'Φ]⁻¹

where σ² is the variance of e(t). Estimates of the variances of the parameter estimates are obtained from the diagonal elements of this matrix.

If it is also assumed that the residuals are gaussian we find that the least squares estimate can be interpreted as the maximum likelihood estimate, i.e. we obtain the loss function (5.2) in a natural way. It has been shown that the estimate β̂ is asymptotically normal with mean β and covariance σ²[Φ'Φ]⁻¹. Notice that this does not follow from the general properties of the maximum likelihood estimate since they are derived under the assumption of independent experiments.

In practice the order of the system is seldom known. It can also be demonstrated that serious errors can be obtained if a model of the wrong order is used. It is therefore important to have some methods available to determine the order of the model, i.e. we consider S as the class of linear models with arbitrary order. To determine the order of the system we can fit least squares models of different orders and analyse the reduction of the loss function. To test if the loss function is significantly reduced when the number of parameters is increased from n1 to n2 we can use the following test quantity

    t = [(V1 - V2)/V2] · [(N - n2)/(n2 - n1)]    (5.12)

which is asymptotically χ² if the model residuals are gaussian. The idea to view the test of order as a decision problem has been discussed by Anderson (1962). It is also a standard tool in regression analysis.

Notice that the least squares method also includes parametric time series analysis in the sense of fitting an autoregression. This has been discussed by Wold (1938) and Whittle (1963). Recent applications to EEG analysis have been given by Gersch (1969).

Using the probabilistic framework we can also give another interpretation of the least squares methods in terms of the general definition of an identification problem given in Section 3. First observe that in the generalized error defined by (5.3) another ym can be used:

    e(k) = A*(q⁻¹) y(k) - B*(q⁻¹) u(k) = y(k) - ym(k)    (5.13)

Consequently

    ym(k) = ŷ(k|k-1) = [1 - A*(q⁻¹)] y(k) + B*(q⁻¹) u(k)
          = -a1 y(k-1) - ... - an y(k-n) + b1 u(k-1) + ... + bn u(k-n)    (5.14)

Notice that ym(k) = ŷ(k|k-1) has a physical interpretation as the best linear mean squares predictor of y(k) based on y(k-1), y(k-2), ... for the system (5.4). The generalized error (5.3) can thus be interpreted as the difference between the actual output at time k and its prediction using the model (5.14). The least squares procedure can thus be interpreted as the problem of finding the parameters for the (prediction) model (5.14) in such a way
that the criterion

    V = Σ_(k=1)^N [y(k) - ym(k)]²    (5.15)

is as small as possible. Compare with the block diagram of Fig. 5.2. This interpretation is useful because it can be extended to much more general cases. The interpretation can also be used in situations where there are no inputs, e.g. in time series analysis.

[Fig. 5.2: block diagram of the predictor interpretation: the process output y(k) is compared with the output of the model (predictor), giving the error ε(k).]

Introduce the sample covariances

    Ryu(i) = 1/(N-i) Σ_(k=1)^(N-i) y(k) u(k+i)
    Ruy(i) = 1/(N-i) Σ_(k=1)^(N-i) u(k) y(k+i)    (5.16)
    Ru(i)  = 1/(N-i) Σ_(k=1)^(N-i) u(k) u(k+i)
    Ry(i)  = 1/(N-i) Σ_(k=1)^(N-i) y(k) y(k+i)

Comparing with the least squares identification of process dynamics we find that the elements of the matrices Φ'Φ and Φ'y of the least squares procedure are essentially correlations or cross-correlations. Neglecting terms in the beginning and end of the series we find

    (1/N) Φ'y ≈ [ -Ry(1), -Ry(2), ..., -Ry(n), Ruy(1), Ruy(2), ..., Ruy(n) ]'    (5.18)

    (1/N) Φ'Φ ≈
    | Ry(0)    Ry(1)    ... Ry(n-1) | -Ryu(0)   -Ruy(1)   ... -Ruy(n-1) |
    | Ry(1)    Ry(0)    ... Ry(n-2) | -Ryu(1)   -Ryu(0)   ... -Ruy(n-2) |
    |               ...             |                ...                |
    | Ry(n-1)  Ry(n-2)  ... Ry(0)   | -Ryu(n-1) -Ryu(n-2) ... -Ryu(0)   |
    |                               |  Ru(0)     Ru(1)    ...  Ru(n-1)  |
    | (symmetric)                   |  Ru(1)     Ru(0)    ...  Ru(n-2)  |
    |                               |                ...                |
    |                               |  Ru(n-1)   Ru(n-2)  ...  Ru(0)    |    (5.17)
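The order test (5.12) discussed earlier can be tried numerically. The following sketch (my own construction, not from the paper) fits autoregressions of orders n1 = 1 and n2 = 2 by least squares and computes the test quantity; for data generated by a second order process the loss reduction is highly significant. All names are illustrative.

```python
# Sketch (not from the paper) of the order test (5.12): fit least squares
# autoregressions of increasing order and compare the loss reduction.
import numpy as np

def ar_fit_loss(y, n):
    """Least squares AR(n) fit; returns the minimal loss V = sum e(k)^2."""
    Phi = np.column_stack([y[n - i - 1:len(y) - i - 1] for i in range(n)])
    target = y[n:]
    beta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    e = target - Phi @ beta
    return float(e @ e)

def order_test(V1, V2, N, n1, n2):
    """Test quantity (5.12); large values mean the extra parameters help."""
    return (V1 - V2) / V2 * (N - n2) / (n2 - n1)

rng = np.random.default_rng(2)
e = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):                  # second order process
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + e[k]

V1 = ar_fit_loss(y, 1)
V2 = ar_fit_loss(y, 2)
t = order_test(V1, V2, N=500, n1=1, n2=2)   # large: order 2 is needed
```

Repeating the computation for orders 2 versus 3 on the same data would give a small t, signalling that the order search can stop.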
Correlated Residuals

Many of the nice properties of the least squares method depend critically upon the assumption that the residuals {e(k)} are uncorrelated. It is easy to find real-life examples where this assumption does not hold.

Example. Consider a noise-free first order system

    x(k+1) + a x(k) = b u(k)

Assume that x is observed with independent measurement errors (additive noise), i.e.

    y(k) = x(k) + n(k)

Then

    y(k+1) + a y(k) = b u(k) + n(k+1) + a n(k)

We thus get a system similar to (5.4) but with correlated residuals.

When the residuals are correlated the least squares estimate will be biased, the bias being β̂ - b, where β̂ is the estimate and b is the true value of the parameter. The reason for this bias can be indicated as follows; c.f. Fig. 5.3.

[Fig. 5.3: block diagram of the process with input u(k), additive measurement noise n(k) and measured output y(k).]

Necessary conditions for a minimum of the loss function are

    ∂V/∂b ≈ Σ_(k=1)^N e(k) u(k-1) = 0    (5.20)

    ∂V/∂a ≈ Σ_(k=1)^N e(k) y(k-1) = 0    (5.21)

In (5.20) the additive noise n is only present in e and not in u; consequently it does not affect the expectation of b̂. In (5.21) the additive noise is present both in e and y; this leads to a term

    Σ n²(k-1)

which does cause a bias of â. To see how this works out consider an example.

Example. Assume that the process is actually described by the model

    y(k+1) + 0.5 y(k) = 1.0 u(k) + n(k+1) + 0.1 n(k)

where {n(k)} is a sequence of independent normal (0,1) random variables, but that the system is identified using the least squares method under the assumption that the residuals are uncorrelated. Below we give a typical result obtained from 500 pairs of inputs and outputs:

    process parameter        estimate
    b = 1.0                  b̂ = 1.018 ± 0.062

We are apparently in a very bad situation; not only is the estimate â wrong but we have also a great deal of confidence in the wrong result. (The true value 0.500 deviates from the estimate â by about 5σ.)

[Table: least squares estimates â_i ± σ(â_i) and b̂_i ± σ(b̂_i) for models of increasing order n, together with the loss V and the test quantity t of (5.12); e.g. V = 469.64, t = 50.94 for the lower order model and V = 416.56, t = 0.99 for the higher order model.]

The identified fifth order model is

    A*(q⁻¹) y(k) = B*(q⁻¹) u(k) + λ e(k)

where

    A*(q⁻¹) = 1 + 1.19q⁻¹ - 0.81q⁻² + 0.52q⁻³ - 0.35q⁻⁴ + 0.12q⁻⁵
    B*(q⁻¹) = 1.08q⁻¹ - 0.75q⁻² + 0.48q⁻³ - 0.25q⁻⁴ + 0.12q⁻⁵

or

    y(k) = [1.0/(q + 0.5)] u(k) + λ [C*(q⁻¹)/A*(q⁻¹)] e(k)

We can thus conclude that if we choose S not as the class of linear first order systems but as the class of linear systems of arbitrary order it is at least possible to overcome the difficulty of correlated residuals in the specific example. This idea was mentioned briefly in Åström (1967); it has, however, not been pursued in general.

ad b) Generalized least squares. Another way to overcome the difficulty with correlated residuals is to use the method of generalized least squares. See e.g. Clarke (1967). The basic idea is as follows. Let the process be

    A*(q⁻¹) y(k) = B*(q⁻¹) u(k) + v(k)    (5.22)

where A* and B* are polynomials and {v(k)} a sequence of correlated residuals, v(k) = C*(q⁻¹) e(k) with {e(k)} white, or, after filtering,

    A*(q⁻¹) ỹ(k) = B*(q⁻¹) ũ(k) + e(k)    (5.25)

where

    ỹ(k) = [1/C*(q⁻¹)] y(k)    (5.26)

    ũ(k) = [1/C*(q⁻¹)] u(k)    (5.27)

The generalized error is thus

    e(k) = [1/C*(q⁻¹)] [A*(q⁻¹) y(k) - B*(q⁻¹) u(k)]    (5.28)

Compare with the block diagram of Fig. 5.4. This shows how the generalized error can be obtained from the process inputs and outputs and the model parameters α and β in the generalized least squares method.

[Fig. 5.4: block diagram showing how the generalized error e(k) is formed from the process input u(k) and output y(k) by the model filters.]

The correlation of the residuals and the pulse transfer function G are seldom known in practice. Clarke (1967) has proposed an iterative procedure to determine G which has been tested on simulated data as well as on practical measurements (distillation column identification). At step j of the iteration the filter is chosen as D(q⁻¹) = Â*_(j-1)(q⁻¹), thus making a least squares fit of the model

    e(k) = Â*_j(q⁻¹) [Â*_(j-1)(q⁻¹) y(k)] - B̂*_j(q⁻¹) [Â*_(j-1)(q⁻¹) u(k)]    (5.33)
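The bias mechanism of the first example above can be reproduced numerically. The sketch below (mine, not from the paper) simulates x(k+1) + 0.5 x(k) = u(k) observed as y = x + n and fits the first order model by least squares: b̂ stays near its true value while â is clearly biased, just as (5.20)-(5.21) suggest. All names are illustrative.

```python
# Sketch (not from the paper): least squares bias from correlated residuals.
# True system: x(k+1) + 0.5 x(k) = u(k); measured output y(k) = x(k) + n(k).
# Fitting y(k+1) = -a y(k) + b u(k) by least squares biases a (the noise
# enters the regressor y(k)) but not b (u is noise free), c.f. (5.20)-(5.21).
import numpy as np

rng = np.random.default_rng(3)
N = 5000
u = rng.standard_normal(N)
n = rng.standard_normal(N)          # measurement noise

x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = -0.5 * x[k] + u[k]
y = x + n

Phi = np.column_stack((-y[:-1], u[:-1]))    # regressors [-y(k), u(k)]
beta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = beta                 # true values: a = 0.5, b = 1.0
bias = 0.5 - a_hat                  # clearly nonzero
```

Increasing N does not remove the bias, only the scatter around the biased value, which is why the fix has to come from the model class (higher order models or generalized least squares), not from more data.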
Also notice that the optimization of L with respect to θ and λ can be performed separately in the following way: first determine θ̂ such that the loss function

V(θ)   (5.37)

is as small as possible. Notice that (5.34) can also be written as (5.46); the parameter estimate is obtained from (5.46). [The bodies of equations (5.37), (5.40), (5.45) and (5.46) are not recoverable from the scan.]

…and spectral densities, one might expect that the estimates can be expressed as nonlinear functions of the sample covariances. Estimates of this nature, which are asymptotically equivalent to the maximum likelihood estimates for parametric time series analysis, have been given by Zetterberg (1968).
Multivariable Systems

The essential difficulty in the identification of multivariable systems is to find a suitable representation of the system. Once a particular representation is chosen it is a fairly straightforward procedure to construct identification methods analogous to those given for the case of systems with one input and one output. We refer to Section 4 for a discussion of structures for multivariable systems. Also c.f. Graupe, Swanick and Cassir (1968). For the structure (4.14) the likelihood function is given by

-log L(θ,R) = (N/2) log det R + (1/2) Σ(k=1..N) ε'(k) R^-1 ε(k) + (nN/2) log 2π   (5.47)

Even for multivariable systems the maximization of L(θ,R) can be performed separately with respect to θ and R. It was shown by Eaton (1967) that the maximum of L(θ,R) is obtained by finding θ which minimizes

V(θ) = det [ Σ(k=1..N) ε(k) ε'(k) ]   (5.48)

Let the class of models be all systems described by the state equation

dx/dt = f(x, u, β, t)

y_m = g(x, u, β, t)   (6.1)

where the parameter vector β belongs to a given set. Let the criterion be given by the loss function

V(β) = ∫ [y(t) - y_m(t, β)]^2 dt   (6.2)

where y is the process output and y_m the model output.

Estimation for a parametric model

For special classes of nonlinear systems, where the linear dynamics and the nonlinearity-without-memory can be separated, the identification technique by means of an adjustable model is sometimes feasible; c.f. Butler and Bohn (1966), Narendra and Gallman (1966).
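Eaton's determinant criterion (5.48) above is just the determinant of the unnormalized sample covariance of the residual vectors, which is one line of code. The sketch below is our own illustration; the array layout (N x p residuals) is an assumption.

```python
import numpy as np

def det_criterion(eps):
    """V(theta) = det( sum_k eps(k) eps(k)' ), cf. (5.48).
    eps: N x p array whose rows are the residual vectors eps(k)."""
    return float(np.linalg.det(eps.T @ eps))
```

Minimizing this scalar over the model parameters (by any numerical optimizer) replaces a separate optimization over the covariance matrix R.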
The least squares estimate is then given by (5.7). It can be shown by simple algebraic manipulations that the least squares estimate satisfies the recursive equation

θ̂(N+1) = θ̂(N) + Γ(N)[y(N+1) - φ'(N+1) θ̂(N)]   (7.4)

where θ̂(N) denotes the least squares estimate based on N pairs of input/output data and

Γ(N) = P(N) φ(N+1) [1 + φ'(N+1) P(N) φ(N+1)]^-1   (7.5)

P(N+1) = P(N) - P(N) φ(N+1) [1 + φ'(N+1) P(N) φ(N+1)]^-1 φ'(N+1) P(N)
       = P(N) - Γ(N) φ'(N+1) P(N)
       = [I - Γ(N) φ'(N+1)] P(N)   (7.6)

P(N0) = [Φ'(N0) Φ(N0)]^-1   (7.7)

and N0 is a number such that Φ'(N0)Φ(N0) is positive definite.

The recursive equation (7.4) has a strong intuitive appeal. The next estimate θ̂(N+1) is formed by adding a correction to the previous estimate. The correction is proportional to y(N+1) - φ'(N+1)θ̂(N). The term φ'θ̂ would be the value of y at time N+1 if the model were perfect and there were no disturbances. The correction term is thus proportional to the difference between the measured value of y(N+1) and the prediction of y(N+1) based on the previous model parameters. The components of the vector Γ(N) are weighting factors which tell how the corrections and the previous estimate should be weighted.

Recursive versions of the generalized least squares procedure have been derived by Young (1969). Recursive versions of an instrumental variable method have been derived by Peterka and Šmuk (1969). An approximative on-line version of the maximum likelihood method has been proposed by Panuska (1968). A discussion of on-line methods is also given in Leathrum (1969).

Contraction Mappings

A technique of constructing recursive algorithms has been suggested by Oza and Jury (1968, 1969). We will explain the technique in connection with the least squares problem. Instead of solving the equation

Φ'(N) Φ(N) θ(N) = Φ'(N) y   (7.8)

for each N and showing that θ(N) satisfies a recursive equation, Oza and Jury introduce the mapping

[equation (7.9); not recoverable from the scan]

where γ is a scalar. It is then shown that the sequence

[equation (7.10); not recoverable from the scan]

under suitable conditions converges to the true parameters as N → ∞. When applied to the ordinary least squares problem the algorithm (7.9) is not efficient, in contrast with the recursive least squares method. To obtain an efficient algorithm it is necessary to make γ a matrix.
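The recursions (7.4)-(7.6) translate directly into code. The sketch below is our own illustration, not the survey's: instead of the exact start-up (7.7) it uses the common approximation of initializing P as a large multiple of the identity.

```python
import numpy as np

def rls(Phi, y, p0=1e6):
    """Recursive least squares, equations (7.4)-(7.6).
    Phi: N x p array whose rows are the regression vectors phi(k)'.
    y:   N observations."""
    N, p = Phi.shape
    theta = np.zeros(p)
    P = p0 * np.eye(p)                                 # stand-in for (7.7)
    for k in range(N):
        phi = Phi[k]
        gamma = P @ phi / (1.0 + phi @ P @ phi)        # gain, (7.5)
        theta = theta + gamma * (y[k] - phi @ theta)   # update, (7.4)
        P = P - np.outer(gamma, phi @ P)               # covariance, (7.6)
    return theta
```

Each step costs O(p^2) operations and needs no matrix inversion, which is what makes the recursive form attractive on-line.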
With the choice

[matrix gain (7.11); not recoverable from the scan]

the algorithm becomes equivalent to the least squares method.

The method of Oza and Jury can be applied to more general cases than the least squares. It was actually proven for the case when there are errors in the measurements of both inputs and outputs, provided the covariance functions of the measurement errors are known. The assumption of known covariances of the measurement errors severely limits the practical applicability of the method.

Stochastic Approximations

The formula for the recursive least squares estimate is

θ̂(k+1) = θ̂(k) + Γ(k)[y(k+1) - φ'(k+1) θ̂(k)]   (7.12)

where Γ was chosen by the specific formula (7.5). It can be shown that there are many other choices of Γ for which the estimate θ̂ will converge to the true parameter value. Using the theory of stochastic approximations it can be shown that the choice

Γ(k) = (1/k) A φ(k+1)   (7.13)

will ensure convergence if A is positive definite. See e.g. Albert and Gardner (1967). A particularly simple choice is e.g. A = I. The algorithms obtained by such choices of Γ will in general give estimates with variances that are larger than the variance of the least-squares estimate. The algorithms are, however, of interest because they make it possible to reduce computations, at the price of a larger variance. Using stochastic approximations it is also possible to obtain recursive algorithms in cases where the exact on-line estimate is either very complicated or very difficult to derive. There are excellent surveys available on stochastic approximations. See e.g. Albert and Gardner (1967) and Tsypkin (1966). Recent applications are given by Sakrison (1967), Saridis and Stein (1968a, 1968b), Holmes (1968), Elliott and Sworder (1969), Neal and Bekey (1969).

Real-time Identification

The recursive version of the least squares method is closely related to the Kalman filtering theory. Kalman considers a dynamical system

[state and measurement equations (7.14); not recoverable from the scan]

where {e(k), k = 1,2,…} and {v(k), k = 1,2,…} are sequences of independent equally distributed random vectors with zero mean values and covariance matrices R1 and R2 respectively. Kalman has proven the following theorem.

Theorem (Kalman). Let the initial condition of (7.14) be a normal random variable (m, R0). The best estimate of x(k) (in the sense of least squares) given the observed outputs y(1), y(2), …, y(k) is given by the recursive equations

x̂(k) = Φ x̂(k-1) + Γ(k)[y(k) - C Φ x̂(k-1)]

x̂(0) = m   (7.15)

where

Γ(k) = S(k) C' [C S(k) C' + R2]^-1

S(k) = Φ P(k-1) Φ' + R1

P(k) = S(k) - Γ(k) C S(k)

S(0) = R0   (7.16)

The matrix S(k) has a physical interpretation as the covariance matrix of the a priori estimate of x(k) given y(1), …, y(k-1), and the matrix P(k) as the covariance of the posterior estimate of x(k) given y(1), …, y(k).

Now consider the least squares identification of the system

y(k) + a1 y(k-1) + … + an y(k-n) = b1 u(k-1) + … + bn u(k-n) + e(k)   (7.17)

where {e(k)} is a sequence of normal (0, λ) random variables. Introduce the coefficients of the model as state variables

x1(k) = a1, …, xn(k) = an, xn+1(k) = b1, xn+2(k) = b2, …, x2n(k) = bn   (7.18)

and define the following vector

C(k) = [-y(k-1), …, -y(k-n), u(k-1), …, u(k-n)]

Then

x(k+1) = x(k)   (7.20)

The equation (7.17) can now be written as

y(k) = C(k) x(k) + e(k)   (7.21)
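The theorem applies directly to the model (7.17)-(7.21), where Φ = I, R1 = 0 and the measurement matrix is the row vector C(k). The Python sketch below is our own illustration; the function name, the zero initial estimate and the tuning values are assumptions, not from the survey.

```python
import numpy as np

def kalman_param_est(y, u, n, lam=1.0, r0=1e6):
    """Estimate (a1..an, b1..bn) of model (7.17) with the Kalman
    recursions (7.15)-(7.16), using x(k+1) = x(k) (7.20) and
    y(k) = C(k) x(k) + e(k) (7.21); here Phi = I and R1 = 0."""
    x = np.zeros(2 * n)            # x(0) = m = 0
    P = r0 * np.eye(2 * n)         # S(0) = R0
    for k in range(n, len(y)):
        C = np.r_[-y[k-n:k][::-1], u[k-n:k][::-1]]   # C(k) as defined above
        S = P                                        # S(k) = Phi P Phi' + R1
        gamma = S @ C / (C @ S @ C + lam)            # gain, (7.16)
        x = x + gamma * (y[k] - C @ x)               # estimate, (7.15)
        P = S - np.outer(gamma, C @ S)               # posterior covariance
    return x
```

With R1 = 0 this reproduces the recursive least squares estimate; a nonzero R1 would let the filter track slowly drifting parameters.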
and the least squares identification problem can be stated as a Kalman filtering problem with Φ = I, R1 = 0, R2 = λ². The recursive equations of the least squares estimate can thus be obtained directly from Kalman's theorem. This has an interesting consequence, because it turns out that if the parameters ai are not constants but gauss-markov processes, i.e.

ai(k+1) = α ai(k) + vi(k)   (7.22)

the same approach still applies. More generally, a parameter estimation problem can be converted to a state estimation problem by introducing the parameters as state variables satisfying

db/dt = 0   (7.23)

for continuous time systems and

b(k+1) = b(k)   (7.24)

for discrete time systems. The (state) estimation problem obtained in this way will however in general be a nonlinear problem, since the parameters frequently occur in terms like b·x(t) in the original parameter problem. Only in special cases can an optimal identification scheme be found (Farison, 1967); in other cases only suboptimal nonlinear estimation schemes are known. In the general continuous time case we are thus faced with a filtering problem for the model

[model equations (7.25), (7.26); not recoverable from the scan]

Approximations

From the functional equation for the conditional distribution it is possible to derive equations for the maximum likelihood estimate (the mode), the minimum variance estimate (the conditional
mean) etc. It turns out that these equations are also very unattractive for numerical computations. If we want to compute the conditional mean we find that the differential equation for the mean will contain not only covariances but also higher moments. It is therefore of great interest to find approximative methods to solve the nonlinear filtering problem. Approximative schemes have been suggested by Bass and Schwartz (1966), Nishimura et al. (1969), Sunahara (1969), Kushner (1967b) and Jazwinski (1966, 1969). Kushner's article contains a good survey of many of the difficulties associated with the approximative techniques. The following type of approximation has been suggested by several authors. The estimate x̂ of (7.25) is given by

dx̂ = f(x̂, t) dt + Γ(t)[dy - g(x̂, t) dt]   (7.27)

The estimate x̂ is sometimes referred to as the extended Kalman filter. The gain matrix Γ of (7.27) can be chosen in many different ways, e.g. by evaluating the optimal gain for the linearized problem or simply by choosing

Γ(t) = (1/t) g'(x̂, t)   (7.28)

in analogy with stochastic approximations. See e.g. Cox (1964). The essential difficulty with all the approximative techniques is to establish convergence. In practice it is often found that the algorithms converge very well if good initial conditions are available, i.e. if there are good initial estimates of the parameters, and if suitable computational "tricks" are used. A computational comparison of several nonlinear filters is given by Schwartz and Stear (1968). A technique allowing second order nonlinearity in system measurements is discussed in Neal (1968).

The application of (7.27) to system identification was first proposed by Kopp and Orford (1963). It has recently been applied to several industrial identification problems, e.g. nuclear reactors in Habegger and Bailey (1969), a stirred tank reactor in Wells (1969) and head box dynamics in Sastry and Vetter (1969).

8. Some concluding remarks.

In the previous sections applicable identification techniques as well as (yet) unsolved problems have been mentioned. Particularly from the point of view of the practising engineer (being university professors we are really sticking our necks out now) there are many important questions that remain to be answered. For example: how should the sampling interval be chosen? What is a reasonable model structure? How should an identification experiment be planned? How can it be ensured that the a priori assumptions required to use a particular method are not violated? If we leave the area of general problems and study specific methods the situation is better.

Linear Time-invariant Systems. There are good techniques available for identifying linear stationary systems as well as linear environments (stationary stochastic processes). The relations between different techniques are reasonably well understood and the choice of method can be made largely on the basis of the final purpose of the identification. There are, however, unresolved problems also in this area, for example convergence proofs for the generalized least squares method.

Multivariable Systems. The essential difficulty with multivariable systems is to choose a suitable canonical form. A few such forms are available but the structural problems are not yet neatly resolved. For example, there are few good techniques to incorporate a priori knowledge of the nature that there is no coupling between two variables or that there is a strong coupling between two other variables. If a multivariable system is identified as a conglomerate of single-input single-output systems, how should the different single-input single-output systems be combined into one model? How do we decide if a particular mode is common to several loops, taking uncertainties into account? Once a particular structure is chosen the solution of the multivariable identification problem is straightforward.

Nonlinear Systems. The techniques currently used simply convert the identification problem to an approximation problem by postulating a structure. The few non-parametric techniques available are computationally extremely time consuming.

On-line and Real-time Identification. This is a fiddler's paradise. Much work remains to be done to prove convergence as well as to devise approximation techniques.

Empirical Knowledge Available. The extensive applications of identification methods which are now available provide a source of empirical information which might be worth a closer analysis. One of the most striking facts is that most methods yield very simple models even for complex systems. It seldom happens that models of a single-input single-output system are of an order higher than 5. This fact, which is extremely encouraging from the point of view of complexity of the regulator, is not at all well understood. Most methods seem to work extremely well on simulated data, but not always that well on actual industrial data. This indicates that some methods might be very sensitive to the a priori assumptions. It therefore seems highly desirable to develop tests which ensure that the a priori assumptions are not contradicted by the experimental data. It also means that it is highly desirable to have techniques which are flexible with respect to a priori assumptions. A typical example is the assumption that the measurement errors or the residuals have a known covariance function. It is of course highly unnatural from a practical point of view to assume that we have an unknown model but that the residuals of this unknown model have known statistics.
Comparison of Different Techniques

In spite of the large literature on identification there are few papers which compare different techniques. The exceptions are Van den Boom and Melis (1969), Cheruy and Menendez (1969) and Gustavsson (1969). Of course it is more fun to dream up new methods than to work with somebody else's scheme. Nevertheless, for a person engaged in applications it would be highly desirable to have comparisons available. It would also be nice to have a selection of data sets to which several known techniques have been applied, which could be used to evaluate new methods.

Where is the Field Moving?

It is our hope and expectation that the field is moving towards more unification and that there will be more comparisons of different techniques. The textbooks which up to now have been lacking will definitely contribute to that; forthcoming are: Eykhoff (1970), Sage and Melsa (1970). The area of identification will certainly also in the future be influenced by the vigorous development of other fields of control systems theory. One may guess that the presently active work on multivariable systems will result in a deeper understanding of such systems and consequently also of the structural problems. The recent results in stability theory might influence the real-time identification algorithms. Pattern recognition and related theories will also contribute to the field of identification; e.g. Tsypkin (1968). Also after the IFAC, Prague, 1970 symposium a lot of work remains to be done.

References

Ackerson, G.A. and K.S. Fu (1969). On state estimation in switching environments. Proc. JACC, 179-183.

Albert, A.E. and L.A. Gardner, jr. (1967). Stochastic approximation and nonlinear regression. Cambridge (Mass.), M.I.T. Press, 204 pp.

Alper, P. (1965). A consideration of the discrete Volterra series. IEEE Trans. autom. control, AC-10, 322-327.

Anderson, T.W. (1962). The choice of the degree of a polynomial regression as a multiple decision problem. Ann. math. stat., 33, 255-265.

Angel, E.S. and G.A. Bekey (1968). Adaptive finite-state models of manual control systems. IEEE Trans. man-machine systems, MMS-9, 15-20.

Aoki, M. and R.M. Staley (1969). On input signal synthesis in parameter identification. Proc. fourth IFAC congress, Warsaw, paper 26.2; to appear in Automatica.

Aoki, M. and R.M. Staley (1969a). Some computational considerations in input signal synthesis problems. In: Zadeh, Neustadt, Balakrishnan (Edit.), Computing methods in optimization problems, N.Y., Academic Press.

Aoki, M. and P.C. Yue (1969). On certain convergence questions in system identification. SIAM Journal on control; to appear.

Åström, K.J. (1964). Control problems in papermaking. Proc. IBM Scientific computing symposium on control theory and applications, Yorktown Heights, N.Y.

Åström, K.J. (1967). Computer control of a paper machine - an application of linear stochastic control theory. IBM Journal of Research and Development, 11, 389-405.

Åström, K.J. (1967a). On the achievable accuracy in identification problems. Proc. IFAC symp. Identification in autom. control systems, Prague, paper 1.8.

Åström, K.J. (1968). Lectures on the identification problem. The least squares method. Report, Division of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Åström, K.J., T. Bohlin and S. Wensmark (1965). Automatic construction of linear stochastic dynamic models for industrial processes with random disturbances using operating records. Report, IBM Nordic Laboratory, Stockholm.

Åström, K.J. and B. Wittenmark (1969). Problems of identification and control. Report, Electronic Sciences Laboratory, University of Southern California, Los Angeles, Calif.

Athans, M., R.P. Wishner and A. Bertolini (1968). Suboptimal state estimation for continuous-time nonlinear systems from discrete noisy measurements. IEEE Trans. autom. control, AC-13, 504-514.
Balakrishnan, A.V. (196?). Techniques of system identification. Lecture notes, Istituto Elettrotecnico, University of Rome.

Balakrishnan, A.V. and V. Peterka (1969). Identification in automatic control systems. Proc. fourth IFAC congress, Warsaw, survey paper.

Bandyopadhyay, S.K.K. and S. Dasgupta (1969). Identification of the parameters of a tidal channel from simulations and other approaches. Proc. fourth IFAC congress, Warsaw, paper 12.6.

Barker, H.A. and D.A. Hawley (1966). Orthogonalising and optimization. Proc. third IFAC congress, London, paper 36A.

Bass, R. and L. Schwartz (1966). Extensions to multichannel nonlinear filtering. Hughes report SSD 604220R.

Bekey, G.A. (1969). Identification of dynamic systems by digital computers. Joint automatic control conf. (Computer control workshop), 7-18.

Bekey, G.A. and R.B. McGee (1964). Gradient methods for the optimization of dynamic system parameters by hybrid computation. In: Balakrishnan, Neustadt (Ed.), Computing methods in optimization problems, N.Y., Academic Press.

Bélanger, P.R. (1968). Comments on "A learning method for system identification" (correspondence). IEEE Trans. autom. control, AC-13, 207-208.

Bellman, R. and K.J. Åström (1969). On structural identifiability. Technical Report No. 69-11, Electronic Sciences Laboratory, University of Southern California, Los Angeles, Calif.

Bellman, R.E. and R.E. Kalaba (1965). Quasilinearization and nonlinear boundary-value problems. N.Y., Am. Elsevier, 206 pp.

Bender, E. (1962). Dynamik von Kreuzstromwärmeaustauschern (dynamics of cross-flow heat exchangers). VDI/VDE Aussprachetag, Frankfurt, Germany.

Berkovec, J.W. (1969). A method for the validation of Kalman filter models. Proc. JACC, 488-493.

Bohlin, T. (1968). Real-time estimation of time-variable process characteristics. Report, IBM Nordic Lab., Lidingö, Sweden.

Bohlin, T. (1969). Information pattern for linear discrete-time models with stochastic coefficients. Report, IBM Nordic Lab.; to appear in IEEE Trans. autom. control.

Bohlin, T. (1970). Information pattern for linear discrete-time models with stochastic coefficients. IEEE Trans. autom. control; to appear.

Boom, c.f. Van den Boom.

Brockett, R.W. and M.D. Mesarovic (1965). The reproducibility of multivariable control systems. JMAA, 11, 548-563.

Brown, R.F. (1969). Review and comparison of drift correction schemes for periodic cross correlation. Electronics letters, 5, 179-181.

Bruni, C., A. Isidori and A. Ruberti (1969). Order and factorization of the impulse response matrix. Proc. fourth IFAC congress, Warsaw, paper 19.4.

Buchta, H. (1969). Method of estimation of random errors in the determination of correlation functions of random infralow frequency signals in control systems. Proc. fourth IFAC congress, Warsaw, paper 33.4.

Bucy, R.S. (1965). Nonlinear filtering theory. IEEE Trans. autom. control, AC-10, 198.

Bucy, R.S. (1968). Canonical forms for multivariable systems (short paper). IEEE Trans. autom. control, AC-13, 567-569.

Buell, J.D., H.H. Kagiwada and R.E. Kalaba (1967). A proposed computational method for estimation of orbital elements, drag coefficients and potential fields parameters from satellite measurements. Annales de Géophysique, 23, 35-39.

Buell, J.D. and R.E. Kalaba (1969). Quasilinearization and the fitting of nonlinear models of drug metabolism to experimental kinetic data. Mathematical Biosciences.

Burgett, A.L. ( ). Human operator characteristics based on an analog computer oriented parameter identification technique.

Butler, R.E. and E.V. Bohn (1966). An automatic identification technique for a class of nonlinear systems (short paper). IEEE Trans. autom. control, AC-11, 292-296.

Cheruy, J. and A. Menendez (1969). A comparison of three dynamic identification methods. Preprints JACC, 982-983.

Chidambara, M.R. (1967). On the inverses of certain matrices (correspondence). IEEE Trans. autom. control, AC-12, 214-215.

Clarke, D.W. (1967). Generalized-least-squares estimation of the parameters of a dynamic model. IFAC symp. Identification in autom. control systems, Prague, paper 3.17.

Cox, H. (1964). On the estimation of state variables and parameters for noisy dynamic systems. IEEE Trans. autom. control, AC-9, 5-12.

Crum, L.A. and S.H. Wu ( ). Convergence of the quantizing learning method for system identification (correspondence). IEEE Trans. autom. control, AC-13, 297-298.

Cuenod, M. and A. Sage (1967). Comparison of some methods used for process identification. IFAC symp. Identification in autom. control systems, Prague, survey paper; also in Automatica, 4, 235-269 (1968).
Davison, E.J. (1968). A new method for simplifying large linear dynamic systems (correspondence). IEEE Trans. autom. control, AC-13, 214-215.

Deutsch, R. (1965). Estimation theory. Englewood Cliffs, N.J., Prentice Hall, 269 pp.

Diskind, T. (1969). Parameter evaluation by complex domain analysis. Preprint JACC, 770-781.

Douce, J.L. and T.M.W. Weedon (1969). Amplitude distortion of multi-level pseudo-random sequences. Inter-University Inst. of Engng Control, Univ. of Warwick, Coventry, Engl., techn. report.

Eaton, J. (1967). Identification for control purposes. Paper, IEEE Winter meeting, N.Y.

Elliott, D.F. and D.D. Sworder (1969). Applications of a simplified multidimensional stochastic approximation algorithm. Proc. JACC, 148-154.

Eykhoff, P. (1963). Some fundamental aspects of process-parameter estimation. IEEE Trans. autom. control, AC-8, 347-357.

Eykhoff, P. (1970). System parameter and state estimation. London, Wiley; to appear.

Fantauzzi, G. (1968). The relation between the sampling time and stochastic error for the impulsive response of linear time-independent systems (short paper). IEEE Trans. autom. control, AC-13, 426-428.

Farison, J.B. (1967). Parameter identification for a class of linear discrete systems (correspondence). IEEE Trans. autom. control, AC-12, 109.

Fisher, E. (1965). Identification of linear systems. Paper JACC, 473-475.

Fisher, J. (1966). Conditional probability density functions and optimal nonlinear estimation. Ph.D. thesis, UCLA, Los Angeles.

Galtieri, C.A. ( ). The problem of identification of discrete time processes. Report, Electronics Research Lab., University of California, Berkeley.

Galtieri, C.A. (1966). The problem of regulation in the computer control of industrial processes, part I: theory. IBM Research report RJ 382.

Genin, Y. (1968). A note on linear minimum variance estimation problems (correspondence). IEEE Trans. autom. control, AC-13, 103.

Gersch, W. (1969). Spectral analysis of e.e.g.'s by autoregressive decomposition of time series. Report, Center of Applied Stochastics, School of Aeronautics, Purdue University; to appear in Mathematical Biosciences.

Giras, T.C. and B.A. … (1969). Power plant parameter identification. Preprint JACC, 984-985.

Gitt, W. (1969). Systematic errors in the determination of frequency response by means of impulse-testing (in German). Industrie-Anzeiger, nr. 52, Ausgabe "Antrieb-Getriebe-Steuerung".

Gorecki, H. and A. Turowicz (1969). The approximation method of identification. Proc. fourth IFAC congress, Warsaw, paper 5.5.

Graupe, D., B.H. Swanick and G.R. Cassir (1968). Reduction and identification of multivariable processes using regression analysis (short paper). IEEE Trans. autom. control, AC-13, 564-567.

Habegger, L.J. and R.E. Bailey (1969). Minimum variance estimation of parameters and states in nuclear power systems. Proc. fourth IFAC congress, Warsaw, paper 12.2.

Hayashi, N. (1969). On a method of correlation analysis for multivariate systems. Proc. fourth IFAC congress, Warsaw, paper 33.1.

Ho, B.L. and R.E. Kalman (1966). Effective construction of linear state-variable models from input/output functions. Regelungstechnik, 14, 545-548.

Hsia, T.C. and D.A. Landgrebe (1967). On a method for estimating power spectra. IEEE Trans. instrum. and measurem., IM-16, 255-257.

Hsia, T.C. and V. Vimolvanich (1969). An on-line technique for system identification (short paper). IEEE Trans. autom. control, AC-14, 92-96.

Jacob and L.A. Zadeh (1969). A computer-oriented identification technique. Preprint JACC, 979-981.
Jazwinski, A.H. (1966). Filtering for nonlinear dynamical systems. IEEE Trans. autom. control, AC-11, 765.

Jazwinski, A.H. (1969). Adaptive filtering. Automatica, 5, 475-485.

Jenkins, G. and D. Watts (1968). Spectral analysis and its applications. Holden Day, San Francisco, 525 pp.

Johnson, G.W. (1969). A deterministic theory of estimation and control. Proc. JACC, 155-160.

Koivo, A.J. (1969). On the identification of time lag in dynamical systems. Preprint JACC, 182-186.

Kopp, R.E. and R.J. Orford (1963). Linear regression applied to system identification for adaptive control systems. AIAA Journal, 2300-2306.

Koszelnik, J. Malkiewicz and St. Trybula (1969). A method to determine the transfer functions of power systems. Proc. fourth IFAC congress, Warsaw, paper 33.2.

Kushner, H.J. (1967a). Nonlinear filtering: the exact dynamical equations satisfied by the conditional mode. IEEE Trans. autom. control, AC-12, 262-267.

Kushner, H.J. (1967b). Approximations to optimal nonlinear filters. Preprints JACC, Philadelphia, Pa., 613-623.

Kushner, H.J. (1969). On the convergence of Lion's identification method with random inputs. Report, Division of Applied Mathem., Brown University, R.I.

Mann, H.B. and A. Wald (1943). On the statistical treatment of linear stochastic difference equations. Econometrica, 11, nr. 3 + 4.

Marquardt, D.W. (1963). An algorithm for least-squares estimation of nonlinear parameters. J. Soc. ind. appl. math., 11, 431-441.

Maslov (1963). Application of the theory of statistical decisions to the estimation of object parameters. Automn. remote control, 24, 1214-1226.
!'fehra 1 R.K. ( 1969a), On the identification of Nis!lJ.ml!ra, . t:f. 1 _I:{~_.__ .fuii.i and Y, Suzuki ( 1969).
variances and adaptive Kalman filtering. Proc. On-line estimation and its application to an
JACC, 494-505. adaptive control system. Proc. fourth IFAC
congress, Warsaw, paper 26,3.
Mehra, R.K. (1969b). Identification of stochas-
tic linear dynamic systems. Proc. 1969 IEEE Oza, __ K_!_(;_._ ~n4. . ~-~L_Ju~Li1~§ê.}. S.ystem identifi-
symposium on adaptive processes, Pennsylvania cation and the principle of contraction mapping.
State University. SIAM J. control,_§, 244-257.
~eissinger 1H.F. and G.A. Bekey (1966). An ana- Oza, ~!..ç_ ._ and E.I. Juri (1962.), Adaptive algo-
lysis of continuous parameter identification rithms for identification problem. Proc. fourth
method, Simulation,_§, 95-102, IFAC congress, Warsaw, paper 26.4.
!:fenahem1 M, . ( 1969). Process dynamic identifica- Panuskaa. . .. ~.. 0~§.82. On the maximum likelihoed
tion by a multistep method. Proc. fourth IFAC estimation of rational pulse transfer function
congress, Warsaw, paper 5.3. parameters. (correspondence), IEEE Trans.
autom, control, _!Ç: I3, 304,
~endes 1M! (1970), Dual adaptive control of
time-varying systems. (in German). Regelungs- _y_.__ _(l969). An adaptive recursive least·
Panl;!sk~ _
technik; to appear. squares identification algorithm. Proc. 1969
Middleton, D. (1960). An introduction to statistical communication theory. N.Y., McGraw-Hill, 1140 pp.

Mowery, V.O. (1965). Least squares recursive differential-correction estimation in nonlinear problems. IEEE Trans. autom. control, AC-10, no. 4, 399-407.

Nagumo, J. and A. Noda (1967). A learning method for system identification. IEEE Trans. autom. control, AC-12, 282-287.

Nahi, N.E. (1969). Estimation theory and applications. N.Y., Wiley, 280 pp.

Nahi, N.E. (1969a). Optimal inputs for parameter estimation in dynamic systems with white observation noise. Proc. JACC, 506-513.

Narendra, K.S. and P.G. Gallman (1966). An iterative method for the identification of nonlinear systems using a Hammerstein model. (short paper). IEEE Trans. autom. control, AC-11, 546-550.

Nash, R.A. and F.B. Tuteur (1968). The effect of uncertainties in the noise covariance matrices on the maximum likelihood estimate of a vector. (short paper). IEEE Trans. autom. control, AC-13, 86-88.

Neal, S.R. (1968). Nonlinear estimation techniques. (short paper). IEEE Trans. autom. control, AC-13, 705-708.

Neal, C.B. and G.A. Bekey (1969). Estimation of the parameters of sampled data systems by means of stochastic approximation. Preprints 1969 Joint autom. control conf. (sess. VI-E), 977-978. C.f. also: Neal, C.B. (1969). Estimation of the parameters of sampled data systems by stochastic approximation. Univ. of South Calif., Electronic Sciences Lab., techn. report.

Nikiforuk, P.N. and M.M. Gupta (1969). A bibliography of the properties, generation and control systems applications of shift-register sequences. Int. J. Control, 9, 217-.

... IEEE symposium on adaptive processes, Pennsylvania State University.

Parks, P.C. (1966). Liapunov redesign of model reference adaptive control systems. IEEE Trans. autom. control, AC-11, 362-367.

Pazdera, J.S. and H.J. Pottinger (1969). Linear system identification via Liapunov design techniques. Preprints JACC, 795-801.

Pearson, J.D. (1967). Deterministic aspects of the identification of linear discrete dynamic systems. (short paper). IEEE Trans. autom. control, AC-12, 764-766.

Peterka, V. and K. Šmuk (1969). On-line estimation of dynamic model parameters from input-output data. Proc. fourth IFAC congress, Warsaw, paper 26.1.

Phillipson, G.A. and S.K. Mitter (1969). State identification of a class of linear distributed systems. Proc. fourth IFAC congress, Warsaw, paper 12.3.

Popov, V.M. (1961). Absolute stability of nonlinear systems of automatic control. Automation and remote control, 22, 857-875.

Potts, T.F., G.N. Ornstein and A.B. Clymer (1961). The automatic determination of human and other system parameters. Proc. West. Joint Computer Conf., Los Angeles, 19, 645-660.

Qvarnström, B. (1963). An uncertainty relation for linear mathematical models. Proc. second IFAC congress, Basle, 634.

Rajbman, N.S., S.A. Anisimov and F.A. Hovsepian (1969). Some problems of control plants identification. Proc. fourth IFAC congress, Warsaw, paper 5.1.

Rake, H. (1968). The determination of frequency responses from stochastic signals by means of an electronic analog computer. (in German). Regelungstechnik, 16, 258-260.
Rastrigin, L.A. and V.S. Trahtenberg (1969). Multidimensional extrapolation for optimal control and design. Proc. fourth IFAC congress, Warsaw, paper 12.1.

Rault, A., R. Pouliquen and J. Richalet (1969). Sensibilizing input and identification. Proc. fourth IFAC congress, Warsaw, paper 5.2.

Reid, K.N. (1969a). Lateral dynamics of an idealized moving web. Preprint JACC, 955-963.

Reid, K.N. (1969b). Lateral dynamics of a real moving web. Preprint JACC, 964-976.

Roberts, P.D. (1966). The application of orthogonal functions to self-optimizing control systems containing a digital compensator. Proc. third IFAC congress, London, paper 36C.

Roberts, P.D. (1967). Orthogonal transformations applied to control system identification and optimization. IFAC symp. Identification in autom. control systems, Prague, paper 5.12.

Rogers, A.E. and K. Steiglitz (1967). Maximum likelihood estimation of rational transfer function parameters. (short paper). IEEE Trans. autom. control, AC-12, 594-597.

Rose, R.E. and G.M. Lance (1969). An approximate steepest descent method for parameter identification. Proc. JACC, 483-487.

Rowe, I.H. (1968). A statistical model for the identification of multivariable stochastic systems. Proc. IFAC symp. on multivariable systems, Düsseldorf.

Rowe, I.H. (1969). A bootstrap method for the statistical estimation of model parameters. Preprints JACC, 986-987.

Roy, R.J. and J. Sherman (1967). A learning technique for Volterra series representation. (short paper). IEEE Trans. autom. control, AC-12, 761-764.

Sage, A.P. and G.W. Husa (1969). Adaptive filtering with unknown prior statistics. Proc. JACC, 760-.

Sage, A.P. and J.L. Melsa (1970). System identification. N.Y., Academic Press; to appear.

Sain, M.K. and J.L. Massey (1969). Invertibility of linear, time-invariant dynamical systems. IEEE Trans. autom. control, AC-14, 141-149.

Sano, A. and M. Terao (1969). Measurement optimization in batch process optimal control. Proc. fourth IFAC congress, Warsaw, paper 12.4.

Saridis, G.N. and R. Ricker (1969). The on-line identification of chemical reaction coefficients from noisy measurements. Report, Purdue Lab. for Appl. Industr. Control, Lafayette, Ind.

Saridis, G.N. and G. Stein (1968a). Stochastic approximation algorithms for linear discrete-time system identification. IEEE Trans. autom. control, AC-13, 515-523.

Saridis, G.N. and G. Stein (1968b). A new algorithm for linear system identification. (correspondence). IEEE Trans. autom. control, AC-13, 592-594.

Sastry and W.J. Vetter (1969). The application of identification methods to an industrial process. Preprints JACC, 787-794.

Schulz, E.R. (1968). Estimation of pulse transfer function parameters by quasilinearization. (short paper). IEEE Trans. autom. control, AC-13, 424-426.

Schwartz, L. and E.B. Stear (1968). A computational comparison of several nonlinear filters. (short paper). IEEE Trans. autom. control, AC-13, 83-86.

Schwartz, S.C. and K. Steiglitz (1968). The identification and control of unknown linear discrete systems. Report 24, Information Sciences and Systems Laboratory, Dept. of EE, Princeton University.

Schwarze, G. (1969). Determination of models for identification with quality characteristics in the time domain. Proc. fourth IFAC congress, Warsaw, paper 19.3.

Segerstahl, B. (1969). A theory of parameter tracking applied to slowly varying nonlinear systems. Proc. fourth IFAC congress, Warsaw, paper 5.4.

Sekiguchi, T. (1969). Observability of linear dynamic measuring systems and some applications. Fourth IFAC congress, Warsaw, paper 12.5.

Shackcloth, B. and R.L. Butchart (1965). Synthesis of model reference adaptive systems by Liapunov's second method. IFAC symp. theory of self-adaptive control systems, Teddington, 145-152.

Shiryaev, A.N. (1966). On stochastic equations in the theory of conditional Markov processes. Theory of probability and its applications, 11, 179.

Silverman, L.M. (1969). Inversion of multivariable linear systems. IEEE Trans. autom. control, AC-14, 270-276.

Smith, F.W. (1968). System Laplace-transform estimation from sampled data. IEEE Trans. autom. control, AC-13, 37-44.

Staley, R.M. and P.C. Yue (1969). On system parameter identifiability. Information Sciences; to appear.

Stassen, H.G. (1969). The polarity coincidence correlation technique - a useful tool in the analysis of human operator dynamics. IEEE Trans. on man-machine systems, MMS-10, 34-39.

Steiglitz, K. and L.E. McBride (1965). A technique for the identification of linear systems. (short paper). IEEE Trans. autom. control, AC-10, 461-464.

Stepán, J. (1967). The identification of controlled systems and the limit of stability. IFAC symp. Identification in autom. control systems, Prague, paper 1.11.

Stratonovich, R.L. (1960). Conditional Markov processes. Theory probab. appl., 5, 156-178.

Strobel, H. (1967). On the limits imposed by random noise and measurement errors upon system identification in the time domain. IFAC symp. Identification in autom. control systems, Prague, paper 4.4.

Strobel, H. (1967, 1968). The approximation problem of the experimental system analysis. (in German). 161 references. Zeitschr. Messen, Steuern, Regeln, 10, 460-464; 11, 29-34; 73-77.

Strobel, H. (1968). System analysis using deterministic test signals. (in German). Berlin, VEB Verlag Technik.

Sunahara, Y. (1969). An approximate method of state estimation for nonlinear dynamical systems. Proc. JACC, 161-172.

van den Boom, A.J.W. and J.H.A.M. Melis (1969). A comparison of some process parameter estimating schemes. Proc. fourth IFAC congress, Warsaw, paper 26.5.

Wegrzyn, S., G. Denis and J. Delisle (1969). The identification of processes by minimization of the distance between sets of signals. Proc. fourth IFAC congress, Warsaw, paper 19.1.

Welfonder, E. (1969). Stochastic signals; statistical averaging under real conditions. (in German). Fortschritt Berichte, VDI-Z, 8, nr. 12, 165 pp.

Wells, C.H. (1969). Application of the extended Kalman estimator to the nonlinear well stirred reactor problem. Proc. JACC, 473-482.

Westlake, J.R. (1968). A handbook of numerical matrix inversion and solution of linear equations. N.Y., Wiley.

Whitaker, H.P. (1958). Design of a model-reference adaptive control system for aircraft. Report R-164, Instrumentation Lab., MIT, Boston.

Whittle, P. (1963). Prediction and regulation. Princeton N.J., Van Nostrand.

Wieslander, J. (1969). Real time identification. Part I and II. Reports, division of automatic control, Lund Institute of Technology, Lund, Sweden.

Wilde, D.J. (1964). Optimum seeking methods. Englewood Cliffs, N.J., Prentice-Hall, 202 pp.

Wilfert, H.H. (1969). Signal- and frequency response analysis on strongly-disturbed systems. (in German). VEB Verlag Technik, Berlin, 195 pp.

Wilkie, D.F. and W.R. Perkins (1969). Design of model following systems using the companion transformation. Proc. fourth IFAC congress, Warsaw, paper 19.2.

Woo, K.T. (1970). Identification of noisy systems. Forthcoming Ph.D. thesis, School of Engng and Applied Sciences, University of California, Los Angeles.

Young, P.C. (1969). An instrumental variable method for real-time identification of a noisy process. Proc. fourth IFAC congress, Warsaw, paper 26.6.
Zadeh, L.A. (1962). From circuit theory to system theory. Proc. IRE, 50, 856-865.
Zames, G. (1966). On the input-output stability of time-varying nonlinear feedback systems. IEEE Trans. autom. control, AC-11, 228-238.
Appendix A - A resumé of parameter estimation.
As a tutorial resumé of the statistical methods the following example of utmost simplicity may suffice. Consider the situation of fig. A.1.

Using the least-squares method the estimate is chosen in such a way that the loss function, defined as

    V(θ) = Σ_k [y(k) - θu(k)]² = (y - θu)'(y - θu),

is a minimum. In fig. A.2 the differences between the observations y and the "predictions" θu are indicated. The minimization can be pursued analytically; a necessary condition for the minimum is

    d/dθ Σ_k [y(k) - θu(k)]² = 0    for θ = θ̂,

i.e.

    Σ_k u(k)[y(k) - θ̂u(k)] = 0,    (A.2)

which gives

    θ̂ = Σ_k u(k)y(k) / Σ_k u(k)u(k).    (A.3)

θ̂ is the optimal estimate under the conditions given. Note from (A.2) that the terms y(k) - θ̂u(k) are weighted with respect to u(k); quite naturally the larger the input signal, the more importance is assigned to the deviation between observation y(k) and "prediction" θ̂u(k)! Equation (A.3) refers to the correlation methods. For the extension to the more-parameter case and to the generalised least-squares method c.f. section 5.

Using the Bayes' method for the same case, one needs, as before, p_n, but also the a priori probability density function p_b of b. Note that previously b was an unknown constant, and that now b is a random variable. From Bayes' rule

    p(b|y) = p(y,b)/p(y) = p(y|b)·p(b)/p(y).

This can be interpreted as: the probability density function of the parameter b, given the result of the measurement on y. This can be rewritten as:

    p(y,b) = p(y - ub, b)·|∂(y - ub)/∂y| = p_n(y - ub)·p_b(b).

[Fig. A.1: the system y(k) = b·u(k) + n(k). Fig. A.2: observations y(k) and predictions θ̂u(k); the fitted line makes the angle arctan θ̂ with the u-axis.]
Here p_b represents the a priori knowledge, available before the measurement of u(k) and y(k). These measurements provide us with a "cut" through the probability-density function, from which the a posteriori probability function for b follows. This new probability function now may be used as the a priori knowledge for the following measurement. In this way the development of p_b with increasing number of observations can be followed; c.f. fig. A.6. Note that p_n, the additive noise being stationary, does not change.

The reader is invited to consider special cases like u(k) = 0 and u(k) → ∞. Again the method can be generalized to more parameters; its visualisation has severe limitations, however. Note that in this case the knowledge on b is given in terms of p_b, a function. In practice the reduction from a function to a single value (estimate) can be done by using a cost or loss function, providing a minimum cost or minimum loss estimate.

The problem of input noise. The simple example of fig. A.1 also serves very well to illustrate the so-called "problem of input noise". Consider the system illustrated by the block diagram in fig. A.7, where neither the input nor the output can be observed exactly.

It is well known in statistics (see e.g. Lindley, 1965) that, unless specific assumptions are made concerning the variations in u, n and v, it is not possible to estimate the dynamics of the process. Consider e.g.

    y(k) = b·u(k) + n(k)
    ū(k) = u(k) + v(k)

Assume that v(k) and n(k) are independent stochastic variables with zero mean values. If v(k) = 0 we find that the estimate of b is given by (A.3).
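The estimate (A.3) and the effect of input noise are easy to reproduce numerically. The sketch below is an illustrative assumption, not taken from the paper (signal lengths, variances, gain and seed are all invented): it applies (A.3) once with the true input u(k) and once with the measured input ū(k) = u(k) + v(k). In the latter case the estimate is pulled toward zero by roughly the factor var(u)/(var(u) + var(v)).

```python
import random

def theta_hat(u, y):
    # Least-squares estimate (A.3): sum of u(k)y(k) over sum of u(k)u(k)
    return sum(uk * yk for uk, yk in zip(u, y)) / sum(uk * uk for uk in u)

random.seed(1)
b = 1.5                                                  # true gain (illustrative)
N = 20000
u = [random.gauss(0.0, 1.0) for _ in range(N)]           # true input, variance 1
n = [0.1 * random.gauss(0.0, 1.0) for _ in range(N)]     # output noise n(k)
v = [random.gauss(0.0, 1.0) for _ in range(N)]           # input noise v(k), variance 1

y = [b * uk + nk for uk, nk in zip(u, n)]
u_meas = [uk + vk for uk, vk in zip(u, v)]               # u-bar(k) = u(k) + v(k)

b_clean = theta_hat(u, y)       # v(k) = 0 case: close to the true b
b_noisy = theta_hat(u_meas, y)  # attenuated by about var(u)/(var(u)+var(v)) = 1/2
```

With unit input variance and unit input-noise variance the attenuation factor is about 1/2, so b_noisy comes out near b/2 while b_clean stays near b; this is the sense in which the dynamics cannot be recovered without assumptions on u, n and v.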
[Figs. A.5 and A.6: sketches of the joint density p(y,b) and of the development of p_b.]
[Fig. A.7: block diagram with disturbances n(k) on the output and v(k) on the input; the observed signals are ū(k) and y(k). Fig. A.8.]
[Fig. B.1: the input and output sequences used in the simulation, k = 1, ..., 100.]
where {e(k)} is a sequence of independent normal (0,1) random numbers generated by a pseudo-random generator. The following values of λ have been used: 0, 0.1, 0.5, 1.0 and 5.0.

In Table B.1 we show the results obtained when a model having the structure (B.1) is fitted to the generated input-output data using the least squares procedure. The estimates are calculated recursively for models of increasing order to illustrate the very typical situation in practice when the order of the model is not known. In Table B.1 we have shown the least squares parameter estimates and their estimated accuracy, the loss function and a conditioning number of the matrix Φ'Φ. The conditioning number

    κ = 2n · max_ij {(A)_ij} · max_ij {(A⁻¹)_ij}

is chosen rather arbitrarily.

We will now analyse the result of Table B.1. Let us first consider the case of no disturbances, λ = 0. In this case we find that it is only possible to compute models of order 1 and 2. When we try to compute the third order model, we find that the matrix Φ'Φ is singular, as would be expected. The conditioning number is 1.3 × 10⁶. We also find that the estimated standard deviations of the second order model are zero.

To handle the numerical problems for a model of third order in a case like this, we must use numerical methods which do not require the inversion of Φ'Φ, e.g. the reduction of Φ to triangular form using the QR algorithm. This has been pursued by Peterka and Šmuk (1969). They have shown that this calculation can also be done recursively in the order of the system.

Proceeding to the case of λ = 0.1, i.e. the standard deviation of the disturbances is one tenth of the magnitude of the input signals, we find that the matrix Φ'Φ is still badly conditioned when a third order model is computed. Analysing the details we find, however, that the Gauss-Jordan method gives a reasonably accurate inverse of Φ'Φ. Pre- and postmultiplying the matrix with its computed inverse, we find that the largest off-diagonal element is 0.011 and the largest deviation of diagonal elements from 1.000 is 0.0045. We also find that the estimates â₃ and b̂₃ do not differ significantly from zero.

We will also discuss some other ways to find the order of the system. We can e.g. consider the variances of the parameters. We find e.g. from Table B.1 that the coefficients â₃ and b̂₃ do not differ significantly from zero in any case. In Table B.2 we also summarize the values of the loss function as well as the values of the F-test variable when testing the reduction of the loss function for a model of order n₁ compared to a model of order n₂, as was discussed before. We have at the 10% level χ² = 2.32. We thus find that by applying the χ²-test in this case we get as a result that the system is of second order for all samples. The actual parameter values of Table B.1 as well as the estimated accuracies give an indication of the accuracy that can be obtained in a case like this.

It should, however, be emphasized that when the same procedure is applied to practical data the results are very seldom as clearcut as in this simulated example. See e.g. Gustavsson (1969).
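The experiment can be mimicked with a short script. The sketch below is an illustrative assumption, not the authors' program: it takes the second-order system of the "true parameter" row of Table B.1, written here with explicit signs as y(k) = 1.5·y(k-1) - 0.7·y(k-2) + 1.0·u(k-1) + 0.5·u(k-2) + λ·e(k) (so the estimates appear as 1.5, -0.7, 1.0, 0.5 rather than in the sign convention of the table), generates data for λ = 0.1 with an invented binary test input and N = 1000 samples, and solves the normal equations Φ'Φθ = Φ'y for the second-order model.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

random.seed(3)
lam, N = 0.1, 1000
u = [random.choice([-1.0, 1.0]) for _ in range(N)]   # binary test input (assumption)
y = [0.0, 0.0]
for k in range(2, N):
    y.append(1.5 * y[k - 1] - 0.7 * y[k - 2] + u[k - 1] + 0.5 * u[k - 2]
             + lam * random.gauss(0.0, 1.0))

# Regressors phi(k) = [y(k-1), y(k-2), u(k-1), u(k-2)] for the 2nd-order model
phi = [[y[k - 1], y[k - 2], u[k - 1], u[k - 2]] for k in range(2, N)]
rhs = y[2:]
m = 4
PtP = [[sum(r[i] * r[j] for r in phi) for j in range(m)] for i in range(m)]
Pty = [sum(r[i] * yk for r, yk in zip(phi, rhs)) for i in range(m)]
theta = solve(PtP, Pty)   # close to [1.5, -0.7, 1.0, 0.5]
```

Raising λ widens the spread of the estimates, which is the pattern visible down the rows of Table B.1.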
Table B.1. Least squares estimates (± estimated standard deviation), estimated λ, loss function V and conditioning number κ for models of increasing order. Entries with ? are illegible in the source; empty cells are not present at that model order.

| λ    | order | a1        | a2        | a3        | b1        | b2        | b3         | λ est | V        | κ       |
|------|-------|-----------|-----------|-----------|-----------|-----------|------------|-------|----------|---------|
| true |       | 1.50      | 0.70      | 0.0       | 1.0       | 0.5       | 0.0        |       |          |         |
| 0.0  | 1     | 0.88±0.03 |           |           | 1.23±0.18 |           |            | 1.71  | 265.863  | 59      |
| 0.0  | 2     | 1.50±0.00 | 0.70±0.00 |           | 1.00±0.00 | 0.50±0.00 |            | 0.00  | 0.000    | 203     |
| 0.0  | 3     | –         | –         | –         | –         | –         | –          | –     | –        | 1.3×10⁶ |
| 0.1  | 1     | 0.88±0.03 |           |           | 1.19±0.18 |           |            | 1.66  | 248.447  | 60      |
| 0.1  | 2     | 1.50±0.01 | 0.69±0.01 |           | 0.99±0.01 | 0.49±0.02 |            | 0.11  | 0.987    | 205     |
| 0.1  | 3     | 1.52±0.11 | 0.7?±0.1? | 0.02±0.08 | 0.98±0.0? | 0.4?±0.0? | -0.02±0.08 | 0.11  | 0.983    | 35974   |
| 0.5  | 1     | 0.88±0.03 |           |           | 1.10±0.17 |           |            | 1.59  | 227.848  | 63      |
| 0.5  | 2     | 1.48±0.04 | 0.67±0.03 |           | 0.96±0.06 | 0.48±0.07 |            | 0.53  | 24.558   | 206     |
| 0.5  | 3     | 1.54±0.11 | 0.76±0.16 | 0.04±0.08 | 0.96±0.06 | 0.42±0.12 | -0.04±0.09 | 0.53  | 24.451   | 1518    |
| 1.0  | 1     | 0.88±0.03 |           |           | 1.00±0.20 |           |            | 1.84  | 308.131  | 75      |
| 1.0  | 2     | 1.47±0.06 | 0.66±0.06 |           | 0.93±0.12 | 0.46±0.14 |            | 1.06  | 99.863   | 212     |
| 1.0  | 3     | 1.55±0.11 | 0.82±0.17 | 0.09±0.09 | 0.93±0.13 | 0.37±0.16 | -0.02±0.15 | 1.07  | 98.698   | 476     |
| 5.0  | 1     | 0.86±0.05 |           |           | 0.20±0.83 |           |            | 7.55  | 5131.905 | 520     |
| 5.0  | 2     | 1.48±0.07 | 0.74±0.08 |           | 0.98±0.61 | 0.41±0.61 |            | 5.29  | 2462.220 | 1174    |
| 5.0  | 3     | 1.55±0.11 | 0.87±0.18 | 0.09±0.09 | 0.97±0.64 | 0.24±0.66 | 0.13±0.64  | 5.32  | 2440.245 | 2031    |
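The conditioning number quoted in Tables B.1 and B.2 is, per the text, κ = 2n·max{(A)_ij}·max{(A⁻¹)_ij} with A = Φ'Φ for a model of order n. A minimal sketch of that quantity; the 2×2 example matrix and the use of absolute values are my assumptions:

```python
def kappa(A, A_inv, n):
    """Conditioning number from the text: 2n times the largest element of A
    times the largest element of its inverse (absolute values assumed)."""
    big = max(abs(x) for row in A for x in row)
    big_inv = max(abs(x) for row in A_inv for x in row)
    return 2 * n * big * big_inv

# First-order model (n = 1): Phi'Phi is 2x2. Example matrix and its inverse:
A = [[4.0, 2.0],
     [2.0, 3.0]]                    # det = 8
A_inv = [[3.0 / 8, -2.0 / 8],
         [-2.0 / 8, 4.0 / 8]]

k = kappa(A, A_inv, 1)              # 2 * 1 * 4.0 * 0.5 = 4.0
```

A large κ, as in the third-order rows of Table B.1, signals that Φ'Φ is close to singular and its computed inverse unreliable.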
Table B.2. Loss function V, conditioning number κ, and F-test variables F(n₁, n₂) comparing a model of order n₁ (column) with one of order n₂ (row).

case 1, λ = 0

| n | V       | κ   |
|---|---------|-----|
| 0 | 2666.23 | –   |
| 1 | 265.86  | 59  |
| 2 | 0.00    | 203 |

| n₂ \ n₁ | 1     | 2 |
|---------|-------|---|
| 0       | 442.4 | ∞ |
| 1       |       | ∞ |

case 2, λ = 0.1

| n | V       | κ     |
|---|---------|-------|
| 0 | 2644.15 | –     |
| 1 | 248.447 | 60    |
| 2 | 0.987   | 205   |
| 3 | 0.983   | 35974 |

| n₂ \ n₁ | 1   | 2     | 3     |
|---------|-----|-------|-------|
| 0       | 472 | 57000 | 42100 |
| 1       |     | 12000 | 5900  |
| 2       |     |       | 0.191 |

case 3, λ = 0.5

| n | V       | κ    |
|---|---------|------|
| 0 | 2701.35 | –    |
| 1 | 227.848 | 63   |
| 2 | 24.558  | 206  |
| 3 | 24.451  | 1518 |
| 4 | 24.006  | 2982 |

| n₂ \ n₁ | 1   | 2    | 3    | 4    |
|---------|-----|------|------|------|
| 0       | 532 | 2616 | 1715 | 1283 |
| 1       |     | 397  | 195  | 130  |
| 2       |     |      | 0.2  | 0.1  |
| 3       |     |      |      | 0.8  |

case 4, λ = 1.0

| n | V       | κ    |
|---|---------|------|
| 0 | 3166.78 | –    |
| 1 | 308.131 | 212  |
| 2 | 99.863  | 476  |
| 3 | 98.698  | 873  |
| 4 | 96.813  | 1351 |
| 5 | 94.800  | 1647 |

| n₂ \ n₁ | 1   | 2   | 3    | 4    | 5    |
|---------|-----|-----|------|------|------|
| 0       | 455 | 734 | 487  | 365  | 292  |
| 1       |     | 100 | 50   | 33   | 25   |
| 2       |     |     | 0.55 | 0.72 | 0.80 |
| 3       |     |     |      | 0.88 | 0.92 |
| 4       |     |     |      |      | 0.96 |

case 5, λ = 5.0

| n | V        | κ    |
|---|----------|------|
| 0 | 21467.64 | –    |
| 1 | 5131.905 | 520  |
| 2 | 2462.220 | 1174 |
| 3 | 2440.245 | 2031 |
| 4 | 2375.624 | 2910 |
| 5 | 2290.730 | 3847 |

| n₂ \ n₁ | 1   | 2   | 3    | 4    | 5   |
|---------|-----|-----|------|------|-----|
| 0       | 156 | 185 | 122  | 92   | 75  |
| 1       |     | 52  | 26   | 18   | 14  |
| 2       |     |     | 0.21 | 0.94 | 1.2 |
| 3       |     |     |      | 1.2  | 1.5 |
| 4       |     |     |      |      | 1.6 |
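The F-test variable of Table B.2 can be reproduced with the standard loss-reduction ratio t = ((V₁ - V₂)/V₂)·((N - p₂)/(p₂ - p₁)), where pᵢ = 2nᵢ is the number of estimated parameters and N the number of samples. This formula is an assumption (the paper's own definition lies outside this excerpt), but with N = 100 it reproduces several table entries, e.g. the 0.55 entry (order 2 vs. 3, λ = 1.0) and the 472 entry (order 0 vs. 1, λ = 0.1):

```python
def f_test(v1, v2, n_obs, p1, p2):
    # Loss-reduction test quantity: relative decrease of the loss function,
    # scaled by the remaining degrees of freedom per extra parameter.
    return ((v1 - v2) / v2) * ((n_obs - p2) / (p2 - p1))

# lambda = 1.0 block of Table B.2: order 2 (4 parameters) vs order 3 (6)
t32 = f_test(99.863, 98.698, 100, 4, 6)    # about 0.55: not significant
# order 1 (2 parameters) vs order 2 (4)
t21 = f_test(308.131, 99.863, 100, 2, 4)   # about 100: highly significant
```

Values below the 10% level 2.32 quoted in the text mean the extra parameters are not warranted; here the step from first to second order is clearly significant while the step to third order is not, in agreement with the conclusion that the system is of second order.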