Part VIII
Recursive identification methods
Features
Recursive methods:
- require less memory, and less computation for each update, than the whole batch algorithm;
- are easier to apply in real time;
- when properly modified, can deal with time-varying systems;
- if the model is used to tune a controller, we obtain adaptive control.
Disadvantage
Motivating example
Consider estimating a constant parameter $\theta$ from noisy measurements, with regressor $\varphi(k) = 1$:
\[
y(k) = \theta + e(k)
\]
After calculation, the least-squares solution of this system is:
\[
\hat{\theta}(k) = \frac{1}{k}\,\big[\,y(1) + \ldots + y(k)\,\big]
\]
Intuition: the estimate is the average of all measurements, filtering out the noise.
The same estimate can be computed recursively:
\[
\hat{\theta}(k) = \hat{\theta}(k-1) + \frac{1}{k}\,\big[\,y(k) - \hat{\theta}(k-1)\,\big]
\]
Recursive formula: the new estimate $\hat{\theta}(k)$ is computed based on the previous estimate $\hat{\theta}(k-1)$ and the new data $y(k)$.
$\big[\,y(k) - \hat{\theta}(k-1)\,\big]$ is a prediction error $\varepsilon(k)$, since $\hat{\theta}(k-1) = \hat{y}(k)$, a one-step-ahead prediction of the output.
The update applies a correction proportional to $\varepsilon(k)$, weighted by $\frac{1}{k}$:
- when the error $\varepsilon(k)$ is large (or small), a large (or small) adjustment is made;
- the weight $\frac{1}{k}$ decreases with time $k$, so adjustments get smaller as $\hat{\theta}$ becomes better.
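As a quick illustration, here is a minimal Python sketch of this averaging update (the true value 2.0 and the noise level 0.5 are arbitrary illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0   # unknown constant to be estimated (illustrative value)
theta_hat = 0.0    # initial estimate theta_hat(0)

for k in range(1, 501):
    y = theta_true + 0.5 * rng.standard_normal()  # noisy measurement y(k)
    eps = y - theta_hat                           # prediction error eps(k)
    theta_hat += eps / k                          # correction weighted by 1/k

print(theta_hat)  # close to theta_true; adjustments shrink as k grows
```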
Recursive least squares

Consider now the general linear regression $y(k) = \varphi^\top(k)\,\theta + e(k)$. The batch least-squares estimate after $k$ samples is:
\[
\hat{\theta}(k) = \Big[ \sum_{j=1}^{k} \varphi(j)\,\varphi^\top(j) \Big]^{-1} \sum_{j=1}^{k} \varphi(j)\,y(j)
\]
Denote $P(k) = \sum_{j=1}^{k} \varphi(j)\,\varphi^\top(j)$. Then:
\[
\begin{aligned}
\hat{\theta}(k) &= P^{-1}(k) \sum_{j=1}^{k} \varphi(j)\,y(j) \\
&= P^{-1}(k) \Big[ \sum_{j=1}^{k-1} \varphi(j)\,y(j) + \varphi(k)\,y(k) \Big] \\
&= P^{-1}(k) \big[ P(k-1)\,\hat{\theta}(k-1) + \varphi(k)\,y(k) \big] \\
&= P^{-1}(k) \big[ (P(k) - \varphi(k)\,\varphi^\top(k))\,\hat{\theta}(k-1) + \varphi(k)\,y(k) \big] \\
&= \hat{\theta}(k-1) + P^{-1}(k) \big[ \varphi(k)\,y(k) - \varphi(k)\,\varphi^\top(k)\,\hat{\theta}(k-1) \big] \\
&= \hat{\theta}(k-1) + P^{-1}(k)\,\varphi(k) \big[ y(k) - \varphi^\top(k)\,\hat{\theta}(k-1) \big]
\end{aligned}
\]
where the third step uses $\sum_{j=1}^{k-1} \varphi(j)\,y(j) = P(k-1)\,\hat{\theta}(k-1)$ and the fourth uses $P(k-1) = P(k) - \varphi(k)\,\varphi^\top(k)$.
Least-squares: Discussion
\[
\hat{\theta}(k) = \hat{\theta}(k-1) + P^{-1}(k)\,\varphi(k) \big[ y(k) - \varphi^\top(k)\,\hat{\theta}(k-1) \big]
= \hat{\theta}(k-1) + W(k)\,\varepsilon(k)
\]
Recursive formula: the new estimate $\hat{\theta}(k)$ is computed based on the previous estimate $\hat{\theta}(k-1)$ and the new data $y(k)$.
$y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$ is a prediction error $\varepsilon(k)$, since $\varphi^\top(k)\,\hat{\theta}(k-1) = \hat{y}_{k-1}(k)$ is the one-step-ahead prediction using the previous parameter vector.
$W(k) = P^{-1}(k)\,\varphi(k)$ is a weighting vector: the elements of $P$ grow large for large $k$, therefore $W(k)$ decreases.
It remains to update $P^{-1}(k)$ without explicitly inverting a matrix. Since $P(k) = P(k-1) + \varphi(k)\,\varphi^\top(k)$ is a rank-one modification, the Sherman-Morrison formula gives:
\[
P^{-1}(k) = P^{-1}(k-1) - \frac{P^{-1}(k-1)\,\varphi(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,\varphi(k)}
\]
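This rank-one update is easy to verify numerically; a small sketch (the dimension $n = 3$ and the randomly generated $P(k-1)$ and $\varphi(k)$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
P_prev = A @ A.T + n * np.eye(n)      # some invertible P(k-1)
phi = rng.standard_normal((n, 1))     # new regressor phi(k)

Pinv = np.linalg.inv(P_prev)
# Sherman-Morrison: inverse of P(k) = P(k-1) + phi phi^T via a rank-one update
Pinv_new = Pinv - (Pinv @ phi @ phi.T @ Pinv) / (1.0 + phi.T @ Pinv @ phi)

# agrees with inverting the updated matrix directly
print(np.allclose(Pinv_new, np.linalg.inv(P_prev + phi @ phi.T)))  # True
```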
Overall algorithm
Initialization
The RLS algorithm requires an initial parameter vector $\hat{\theta}(0)$ and an initial inverse $P^{-1}(0)$.
Typical choices without prior knowledge:
- $\hat{\theta}(0) = [0, \ldots, 0]^\top$, a vector of $n$ zeros;
- $P^{-1}(0) = \frac{1}{\delta} I$, with $\delta$ a small number such as $10^{-3}$. An equivalent initial value of $P$ would be $P(0) = \delta I$.
Intuition: a small $P$ corresponds to a low confidence in the initial parameters. This leads to a large $P^{-1}(0)$, so initially the weights $W(k)$ are large and large updates are applied to $\hat{\theta}$.
If a prior value $\bar{\theta}$ is available, $\hat{\theta}(0)$ can be initialized to it, and $\delta$ correspondingly increased. This means better confidence in the initial estimate, and smaller initial updates.
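Putting the update equations and this initialization together, a minimal RLS sketch in Python (the function name rls and the default $\delta = 10^{-3}$ are my own choices):

```python
import numpy as np

def rls(Phi, y, delta=1e-3, theta0=None):
    """Recursive least squares.

    Phi: N x n array of regressors phi(k); y: length-N array of outputs.
    Starts from theta_hat(0) = theta0 (zeros by default) and
    P^{-1}(0) = (1/delta) * I; returns the N x n history of estimates.
    """
    N, n = Phi.shape
    theta = np.zeros(n) if theta0 is None else np.asarray(theta0, dtype=float)
    Pinv = np.eye(n) / delta          # large P^{-1}(0): low initial confidence
    history = np.empty((N, n))
    for k in range(N):
        phi = Phi[k]
        eps = y[k] - phi @ theta      # prediction error eps(k)
        Pinv -= np.outer(Pinv @ phi, phi @ Pinv) / (1.0 + phi @ Pinv @ phi)
        theta = theta + (Pinv @ phi) * eps   # W(k) = P^{-1}(k) phi(k)
        history[k] = theta
    return history
```

For the constant-estimation example, rls(np.ones((N, 1)), y) reproduces the running average, up to the (small) effect of $\delta$.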
Back to the motivating example: with $\varphi(k) = 1$, the inverse update becomes scalar:
\[
P^{-1}(k) = P^{-1}(k-1) - \frac{\big(P^{-1}(k-1)\big)^2}{1 + P^{-1}(k-1)} = \frac{P^{-1}(k-1)}{1 + P^{-1}(k-1)}
\]
Starting from a very large $P^{-1}(0)$ (small $\delta$), this gives:
\[
P^{-1}(1) = 1, \quad P^{-1}(2) = \frac{1}{2}, \quad \ldots, \quad P^{-1}(k) = \frac{1}{k}
\]
Also, $\varepsilon(k) = y(k) - \hat{\theta}(k-1)$ and $W(k) = P^{-1}(k) = \frac{1}{k}$, leading to:
\[
\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k) = \hat{\theta}(k-1) + \frac{1}{k}\,\big[ y(k) - \hat{\theta}(k-1) \big]
\]
so the formula for the scalar case was recovered.
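The limit is easy to check numerically; a tiny sketch where 1e6 stands in for a "very large" $P^{-1}(0)$:

```python
# scalar inverse update with phi(k) = 1: P^{-1}(k) approaches 1/k
pinv = 1e6                                # "very large" P^{-1}(0)
for k in range(1, 6):
    pinv = pinv - pinv**2 / (1.0 + pinv)  # equals pinv / (1 + pinv)
    print(k, pinv, 1.0 / k)               # last two columns agree closely
```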
The ARX model
In explicit form:
\[
y(k) + a_1\,y(k-1) + a_2\,y(k-2) + \ldots + a_{na}\,y(k-na) = b_1\,u(k-1) + b_2\,u(k-2) + \ldots + b_{nb}\,u(k-nb) + e(k)
\]
where the model parameters are $a_1, a_2, \ldots, a_{na}$ and $b_1, b_2, \ldots, b_{nb}$.
Isolating $y(k)$ puts the model in linear regression form:
\[
\begin{aligned}
y(k) &= -a_1\,y(k-1) - a_2\,y(k-2) - \ldots - a_{na}\,y(k-na) \\
&\quad + b_1\,u(k-1) + b_2\,u(k-2) + \ldots + b_{nb}\,u(k-nb) + e(k) \\
&= \begin{bmatrix} -y(k-1) & \cdots & -y(k-na) & u(k-1) & \cdots & u(k-nb) \end{bmatrix}
   \begin{bmatrix} a_1 & \cdots & a_{na} & b_1 & \cdots & b_{nb} \end{bmatrix}^\top + e(k) \\
&=: \varphi^\top(k)\,\theta + e(k)
\end{aligned}
\]
- Regressor vector: $\varphi \in \mathbb{R}^{na+nb}$, previous output and input values.
- Parameter vector: $\theta \in \mathbb{R}^{na+nb}$, polynomial coefficients.
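For concreteness, a small Python helper that builds this regressor (the function name and indexing conventions are my own; y and u are 1-D arrays indexed from time 0):

```python
import numpy as np

def arx_regressor(y, u, k, na, nb):
    """phi(k) = [-y(k-1), ..., -y(k-na), u(k-1), ..., u(k-nb)]^T.

    Assumes k >= max(na, nb), so that all required past samples exist.
    """
    past_y = [-y[k - i] for i in range(1, na + 1)]  # autoregressive part
    past_u = [u[k - i] for i in range(1, nb + 1)]   # input part
    return np.array(past_y + past_u)
```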
Recursive ARX

With this representation, recursive ARX is just an instantiation of the RLS template:

initialize $\hat{\theta}(0)$, $P^{-1}(0)$
loop at every step $k = 1, 2, \ldots$
    measure $u(k)$, $y(k)$
    form regressor vector $\varphi(k) = [-y(k-1), \ldots, -y(k-na),\; u(k-1), \ldots, u(k-nb)]^\top$
    find prediction error $\varepsilon(k) = y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$
    update inverse $P^{-1}(k) = P^{-1}(k-1) - \dfrac{P^{-1}(k-1)\,\varphi(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,\varphi(k)}$
    compute weights $W(k) = P^{-1}(k)\,\varphi(k)$
    update parameters $\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$
end loop

Remark: outputs more than $na$ steps ago, and inputs more than $nb$ steps ago, can be forgotten. (A code sketch of this loop follows below.)
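A self-contained Python sketch of this loop (illustrative names; it starts once enough past samples are available, which matches the remark above):

```python
import numpy as np

def recursive_arx(u, y, na, nb, delta=1e-3):
    """Recursive ARX: RLS with phi(k) = [-y(k-1..k-na), u(k-1..k-nb)].

    u, y: 1-D arrays of equal length. Returns the history of
    theta_hat = [a_1, ..., a_na, b_1, ..., b_nb].
    """
    n = na + nb
    theta = np.zeros(n)
    Pinv = np.eye(n) / delta
    history = []
    for k in range(max(na, nb), len(y)):   # need enough past samples
        phi = np.concatenate([-y[k - np.arange(1, na + 1)],
                              u[k - np.arange(1, nb + 1)]])
        eps = y[k] - phi @ theta           # prediction error
        Pinv -= np.outer(Pinv @ phi, phi @ Pinv) / (1.0 + phi @ Pinv @ phi)
        theta = theta + (Pinv @ phi) * eps
        history.append(theta.copy())
    return np.array(history)
```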
Guarantees idea
System
To illustrate recursive ARX, we take a known system:
\[
y(k) + a\,y(k-1) = b\,u(k-1) + e(k), \qquad a = 0.9,\; b = 1
\]
(Söderström & Stoica)
Recursive ARX

The example is run with MATLAB's recursive ARX routine (rarx). Its arguments, in order:
- the identification data;
- an array containing the orders of A and B and the delay nk;
- 'ff', 1 selects the algorithm variant presented in this lecture (forgetting factor equal to 1, i.e., standard RLS);
- th0 is the initial parameter value;
- Pinv0 is the initial inverse $P^{-1}(0)$.
Results
$na = nb = nk = 1$; $P^{-1}(0) = \frac{1}{\delta} I$, left plot: $\delta = 10^{-3}$, right plot: $\delta = 100$.
Conclusions:
- convergence to the true parameter values...
- ...slower when $\delta$ is larger (good confidence in the initial solution).
Consider now a second system, where the noise does not enter in ARX form:
\[
y(k) = \frac{b\,q^{-1}}{1 + f\,q^{-1}}\,u(k) + e(k), \qquad f = 0.9,\; b = 1
\]
(Söderström & Stoica)
Results
$na = nb = nk = 1$, so we attempt the ARX model:
\[
y(k) + f\,y(k-1) = b\,u(k-1) + e(k)
\]
Also, $P^{-1}(0) = \frac{1}{\delta} I$, $\delta = 10^{-3}$.
Conclusion: convergence to the true values is not attained! This is as expected, due to colored noise: multiplying the true system by $1 + f\,q^{-1}$ gives the equation error $e(k) + f\,e(k-1)$, which is correlated with the regressor.
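This failure can be reproduced in simulation; a sketch under my own assumptions (unit-variance white input, noise standard deviation 0.5), using a batch least-squares fit for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
N, f, b = 10000, 0.9, 1.0
u = rng.standard_normal(N)

# output-error system: x(k) = -f x(k-1) + b u(k-1), y(k) = x(k) + e(k)
x = np.zeros(N)
for k in range(1, N):
    x[k] = -f * x[k - 1] + b * u[k - 1]
y = x + 0.5 * rng.standard_normal(N)

# least-squares ARX fit with regressor [-y(k-1), u(k-1)]
Phi = np.column_stack([-y[:-1], u[:-1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(theta)  # the estimate of f remains biased away from 0.9, even as N grows
```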
Recursive instrumental variables
Recall the batch instrumental-variable (IV) estimate, with instrument vector $Z(k)$:
\[
\hat{\theta}(N) = \Big[ \frac{1}{N} \sum_{k=1}^{N} Z(k)\,\varphi^\top(k) \Big]^{-1} \frac{1}{N} \sum_{k=1}^{N} Z(k)\,y(k) \qquad (8.1)
\]
Recursive formulation

\[
\hat{\theta} = P^{-1} \sum_{j=1}^{k} Z(j)\,y(j) \qquad (8.3)
\]
where $P = \sum_{j=1}^{k} Z(j)\,\varphi^\top(j)$.
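In batch form this is a single linear solve; a minimal sketch (the function name and the row-wise stacking of instruments and regressors are my conventions):

```python
import numpy as np

def iv_estimate(Z, Phi, y):
    """Batch IV estimate: solve [sum_k Z(k) phi^T(k)] theta = sum_k Z(k) y(k).

    Z, Phi: N x n instrument and regressor matrices; y: length-N outputs.
    """
    return np.linalg.solve(Z.T @ Phi, Z.T @ y)
```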
Making the dependence on $k$ explicit, the same derivation as for recursive least squares applies:
\[
\begin{aligned}
\hat{\theta}(k) &= P^{-1}(k) \sum_{j=1}^{k} Z(j)\,y(j) \\
&= P^{-1}(k) \Big[ \sum_{j=1}^{k-1} Z(j)\,y(j) + Z(k)\,y(k) \Big] \\
&= P^{-1}(k) \big[ P(k-1)\,\hat{\theta}(k-1) + Z(k)\,y(k) \big] \\
&= P^{-1}(k) \big[ (P(k) - Z(k)\,\varphi^\top(k))\,\hat{\theta}(k-1) + Z(k)\,y(k) \big] \\
&= \hat{\theta}(k-1) + P^{-1}(k)\,Z(k) \big[ y(k) - \varphi^\top(k)\,\hat{\theta}(k-1) \big]
\end{aligned}
\]
Final formula:
\[
\hat{\theta}(k) = \hat{\theta}(k-1) + P^{-1}(k)\,Z(k) \big[ y(k) - \varphi^\top(k)\,\hat{\theta}(k-1) \big]
= \hat{\theta}(k-1) + W(k)\,\varepsilon(k)
\]
with the Sherman-Morrison update of the inverse:
\[
P^{-1}(k) = P^{-1}(k-1) - \frac{P^{-1}(k-1)\,Z(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,Z(k)}
\]
Recursive IV

initialize $\hat{\theta}(0)$, $P^{-1}(0)$
loop at every step $k = 1, 2, \ldots$
    measure $u(k)$, $y(k)$; form regressor vector $\varphi(k)$ and instrument vector $Z(k)$
    find prediction error $\varepsilon(k) = y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$
    update inverse $P^{-1}(k) = P^{-1}(k-1) - \dfrac{P^{-1}(k-1)\,Z(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,Z(k)}$
    compute weights $W(k) = P^{-1}(k)\,Z(k)$
    update parameters $\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$
end loop
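Finally, a Python sketch of this loop (illustrative names; the instruments $Z(k)$ are supplied by the caller, e.g., built from a simulated noise-free model output):

```python
import numpy as np

def recursive_iv(Phi, Z, y, delta=1e-3, theta0=None):
    """Recursive IV: like RLS, but with the instrument Z(k) replacing phi(k)
    in the gain and in the (now non-symmetric) rank-one inverse update.

    Phi, Z: N x n regressor and instrument matrices; y: length-N outputs.
    """
    N, n = Phi.shape
    theta = np.zeros(n) if theta0 is None else np.asarray(theta0, dtype=float)
    Pinv = np.eye(n) / delta
    for k in range(N):
        phi, z = Phi[k], Z[k]
        eps = y[k] - phi @ theta              # prediction error eps(k)
        Pinv -= np.outer(Pinv @ z, phi @ Pinv) / (1.0 + phi @ Pinv @ z)
        theta = theta + (Pinv @ z) * eps      # W(k) = P^{-1}(k) Z(k)
    return theta
```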