Part 8 - Recursive Identification Methods


System Identification

Control Engineering B.Sc., 3rd year


Technical University of Cluj-Napoca
Romania

Lecturer: Lucian Busoniu


Part VIII
Recursive identification methods


Recursive identification: Idea

Recursive methods can be seen as working online, while the system is evolving.
At step $k$, compute a new parameter estimate $\hat{\theta}(k)$, based on the previous estimate $\hat{\theta}(k-1)$ and the newly available data $u(k), y(k)$.
Remark: To contrast them with recursive identification, the previous methods, which used the whole data set, will be called batch identification.


Features
Recursive methods:
Require less memory, and less computation for each update,
than the whole batch algorithm.
Easier to apply in real-time.
When properly modified, can deal with time-varying systems.
If the model is used to tune a controller, we obtain adaptive control.


Disadvantage

Recursive methods are usually approximations of batch techniques $\Rightarrow$ guarantees are more difficult to obtain.


Table of contents

Motivating example: Estimating a scalar

Recursive least-squares and ARX

Recursive instrumental variables



Recall: Estimating a scalar


Recall scalar estimation. Model:
$$y(k) = b + e(k) = 1 \cdot b + e(k) = \varphi(k)\,\theta + e(k)$$
where $\varphi(k) = 1$ for all $k$, and $\theta = b$.
For the data points up to and including $k$:
$$y(1) = \varphi(1)\,\theta = 1 \cdot b, \quad \ldots, \quad y(k) = \varphi(k)\,\theta = 1 \cdot b$$
After calculation, the solution of this system is:
$$\hat{\theta}(k) = \frac{1}{k}\,[y(1) + \ldots + y(k)]$$
Intuition: the estimate is the average of all measurements, filtering out the noise.


Estimating a scalar: Recursive formulation

Rewrite the formula:
$$\begin{aligned}
\hat{\theta}(k) &= \frac{1}{k}\,[y(1) + \ldots + y(k)] \\
&= \frac{1}{k}\Big[(k-1)\,\frac{1}{k-1}\,\big(y(1) + \ldots + y(k-1)\big) + y(k)\Big] \\
&= \frac{1}{k}\big[(k-1)\,\hat{\theta}(k-1) + y(k)\big] \\
&= \frac{1}{k}\big[k\,\hat{\theta}(k-1) + y(k) - \hat{\theta}(k-1)\big] \\
&= \hat{\theta}(k-1) + \frac{1}{k}\big[y(k) - \hat{\theta}(k-1)\big]
\end{aligned}$$


Estimating a scalar: Discussion

$$\hat{\theta}(k) = \hat{\theta}(k-1) + \frac{1}{k}\big[y(k) - \hat{\theta}(k-1)\big]$$
Recursive formula: the new estimate $\hat{\theta}(k)$ is computed based on the previous estimate $\hat{\theta}(k-1)$ and the new data $y(k)$.
$y(k) - \hat{\theta}(k-1)$ is a prediction error $\varepsilon(k)$, since $\hat{\theta}(k-1) = \hat{b} = \hat{y}(k)$, a one-step-ahead prediction of the output.
The update applies a correction proportional to $\varepsilon(k)$, weighted by $\frac{1}{k}$:
When the error $\varepsilon(k)$ is large (or small), a large (or small) adjustment is made.
The weight $\frac{1}{k}$ decreases with time $k$, so adjustments get smaller as $\hat{\theta}$ becomes better.
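To make the recursion concrete, here is a minimal MATLAB sketch; the true value of $b$, the noise level, and the number of samples are assumptions chosen for illustration, not taken from the lecture.

% Minimal sketch: recursive estimation of a scalar b from noisy measurements.
b = 2; N = 100;                    % assumed true value and number of samples
y = b + 0.5*randn(N, 1);           % measurements y(k) = b + e(k)
theta = 0;                         % initial estimate theta_hat(0)
for k = 1:N
    epsk  = y(k) - theta;          % prediction error eps(k)
    theta = theta + (1/k)*epsk;    % recursive update with weight 1/k
end
disp([theta, mean(y)])             % the recursive estimate equals the batch average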


General recursive least-squares


Recall: Least-squares regression


Model:
$$y(k) = \varphi^\top(k)\,\theta + e(k)$$
The dataset up to $k$ gives a linear system of equations:
$$y(1) = \varphi^\top(1)\,\theta, \quad y(2) = \varphi^\top(2)\,\theta, \quad \ldots, \quad y(k) = \varphi^\top(k)\,\theta$$
After some linear algebra and calculus, the least-squares solution is:
$$\hat{\theta}(k) = \Big[\sum_{j=1}^{k} \varphi(j)\,\varphi^\top(j)\Big]^{-1} \sum_{j=1}^{k} \varphi(j)\,y(j)$$


Least-squares: Recursive formula


With the notation $P(k) = \sum_{j=1}^{k} \varphi(j)\,\varphi^\top(j)$:
$$\begin{aligned}
\hat{\theta}(k) &= P^{-1}(k) \sum_{j=1}^{k} \varphi(j)\,y(j) \\
&= P^{-1}(k) \Big[\sum_{j=1}^{k-1} \varphi(j)\,y(j) + \varphi(k)\,y(k)\Big] \\
&= P^{-1}(k) \big[P(k-1)\,\hat{\theta}(k-1) + \varphi(k)\,y(k)\big] \\
&= P^{-1}(k) \big[(P(k) - \varphi(k)\,\varphi^\top(k))\,\hat{\theta}(k-1) + \varphi(k)\,y(k)\big] \\
&= \hat{\theta}(k-1) + P^{-1}(k) \big[-\varphi(k)\,\varphi^\top(k)\,\hat{\theta}(k-1) + \varphi(k)\,y(k)\big] \\
&= \hat{\theta}(k-1) + P^{-1}(k)\,\varphi(k) \big[y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)\big]
\end{aligned}$$


Least-squares: Discussion

$$\hat{\theta}(k) = \hat{\theta}(k-1) + P^{-1}(k)\,\varphi(k)\big[y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)\big] = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$$
Recursive formula: the new estimate $\hat{\theta}(k)$ is computed based on the previous estimate $\hat{\theta}(k-1)$ and the new data $y(k)$.
$y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$ is a prediction error $\varepsilon(k)$, since $\varphi^\top(k)\,\hat{\theta}(k-1) = \hat{y}_{k-1}(k)$ is the one-step-ahead prediction using the previous parameter vector.
$W(k) = P^{-1}(k)\,\varphi(k)$ is a weighting vector: the elements of $P$ grow large for large $k$, therefore $W(k)$ decreases.


Recursive matrix inversion

The previous formula requires the matrix inverse $P^{-1}(k)$, which is computationally costly.
For $P$, an easy recursion exists: $P(k) = P(k-1) + \varphi(k)\,\varphi^\top(k)$, but this does not help; the matrix must still be inverted.
The Sherman-Morrison formula gives a recursion directly for the inverse $P^{-1}$:
$$P^{-1}(k) = P^{-1}(k-1) - \frac{P^{-1}(k-1)\,\varphi(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,\varphi(k)}$$
Exercise: Prove the Sherman-Morrison formula! (linear algebra)
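Besides the proof, the formula can also be checked numerically; the following MATLAB sketch does this for an arbitrary positive-definite matrix and regressor (both randomly generated, purely for illustration).

% Numerical check of the Sherman-Morrison update on random data (illustrative only).
n = 4;
A = randn(n); Pprev = A*A' + eye(n);    % some positive-definite P(k-1)
phi = randn(n, 1);                      % regressor phi(k)
Pinv = inv(Pprev);                      % P^{-1}(k-1)
Pinv_new = Pinv - (Pinv*(phi*phi')*Pinv) / (1 + phi'*Pinv*phi);
% Should be (numerically) zero, since P(k) = P(k-1) + phi*phi':
disp(norm(Pinv_new - inv(Pprev + phi*phi')))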


Overall algorithm

Recursive least-squares (RLS)

initialize $\hat{\theta}(0)$, $P^{-1}(0)$
loop at every step $k = 1, 2, \ldots$
    measure $y(k)$, form regressor vector $\varphi(k)$
    find prediction error $\varepsilon(k) = y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$
    update inverse $P^{-1}(k) = P^{-1}(k-1) - \dfrac{P^{-1}(k-1)\,\varphi(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,\varphi(k)}$
    compute weights $W(k) = P^{-1}(k)\,\varphi(k)$
    update parameters $\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$
end loop
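As a complement to the pseudocode, a minimal MATLAB transcription of the loop might look as follows; the function name and the convention that the data are passed as a matrix of regressors and a vector of outputs are assumptions made for illustration.

function theta = rls_estimate(Phi, y, theta0, Pinv0)
% Minimal RLS sketch. Phi is N x n (row k is phi(k)'), y is N x 1;
% theta0 and Pinv0 are the initial estimate and the initial inverse P^{-1}(0).
theta = theta0;
Pinv  = Pinv0;
for k = 1:length(y)
    phi   = Phi(k, :)';                                           % regressor phi(k)
    epsk  = y(k) - phi'*theta;                                     % prediction error
    Pinv  = Pinv - (Pinv*(phi*phi')*Pinv) / (1 + phi'*Pinv*phi);   % Sherman-Morrison
    W     = Pinv*phi;                                              % weighting vector
    theta = theta + W*epsk;                                        % parameter update
end
end

A typical call would use theta0 = zeros(n, 1) and Pinv0 = (1/delta)*eye(n) with a small delta, matching the initialization discussed next.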


Initialization
The RLS algorithm requires the initial parameters $\hat{\theta}(0)$ and the initial inverse $P^{-1}(0)$.
Typical choices without prior knowledge:
$\hat{\theta}(0) = [0, \ldots, 0]^\top$, a vector of $n$ zeros.
$P^{-1}(0) = \frac{1}{\delta} I$, with $\delta$ a small number such as $10^{-3}$. An equivalent initial value of $P$ would be $P(0) = \delta I$.
Intuition: a small $P$ corresponds to a low confidence in the initial parameters. This leads to a large $P^{-1}(0)$, so initially the weights $W(k)$ are large and large updates are applied to $\hat{\theta}$.
If a prior parameter value is available, $\hat{\theta}(0)$ can be initialized to it, and $\delta$ correspondingly increased. This means better confidence in the initial estimate, and smaller initial updates.


Back to the scalar case


$y(k) = b + e(k) = \varphi(k)\,\theta + e(k)$, where $\varphi(k) = 1$, $\theta = b$.
Take $\hat{\theta}(0) = 0$ and $P^{-1}(0) \to \infty$. We have with Sherman-Morrison:
$$P^{-1}(k) = P^{-1}(k-1) - \frac{\big(P^{-1}(k-1)\big)^2}{1 + P^{-1}(k-1)} = \frac{P^{-1}(k-1)}{1 + P^{-1}(k-1)}$$
$$P^{-1}(1) = 1, \quad P^{-1}(2) = \frac{1}{2}, \quad \ldots, \quad P^{-1}(k) = \frac{1}{k}$$
Also, $\varepsilon(k) = y(k) - \hat{\theta}(k-1)$ and $W(k) = P^{-1}(k) = \frac{1}{k}$, leading to:
$$\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k) = \hat{\theta}(k-1) + \frac{1}{k}\big[y(k) - \hat{\theta}(k-1)\big]$$
so the formula for the scalar case is recovered.


Recursive ARX


Recall ARX model


$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) + e(k)$$
$$(1 + a_1 q^{-1} + \cdots + a_{na} q^{-na})\,y(k) = (b_1 q^{-1} + \cdots + b_{nb} q^{-nb})\,u(k) + e(k)$$
In explicit form:
$$y(k) + a_1 y(k-1) + a_2 y(k-2) + \ldots + a_{na} y(k-na) = b_1 u(k-1) + b_2 u(k-2) + \ldots + b_{nb} u(k-nb) + e(k)$$
where the model parameters are: $a_1, a_2, \ldots, a_{na}$ and $b_1, b_2, \ldots, b_{nb}$.


Linear regression representation

$$\begin{aligned}
y(k) &= -a_1 y(k-1) - a_2 y(k-2) - \ldots - a_{na} y(k-na) \\
&\quad + b_1 u(k-1) + b_2 u(k-2) + \ldots + b_{nb} u(k-nb) + e(k) \\
&= \big[-y(k-1)\ \cdots\ -y(k-na)\ \ u(k-1)\ \cdots\ u(k-nb)\big]\,
   \big[a_1\ \cdots\ a_{na}\ \ b_1\ \cdots\ b_{nb}\big]^\top + e(k) \\
&=: \varphi^\top(k)\,\theta + e(k)
\end{aligned}$$
Regressor vector: $\varphi(k) \in \mathbb{R}^{na+nb}$, previous output and input values.
Parameter vector: $\theta \in \mathbb{R}^{na+nb}$, polynomial coefficients.


Recursive ARX
With this representation, recursive ARX is just an instantiation of the RLS template.

initialize $\hat{\theta}(0)$, $P^{-1}(0)$
loop at every step $k = 1, 2, \ldots$
    measure $u(k)$, $y(k)$
    form regressor vector $\varphi(k) = [-y(k-1), \cdots, -y(k-na), u(k-1), \cdots, u(k-nb)]^\top$
    find prediction error $\varepsilon(k) = y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$
    update inverse $P^{-1}(k) = P^{-1}(k-1) - \dfrac{P^{-1}(k-1)\,\varphi(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,\varphi(k)}$
    compute weights $W(k) = P^{-1}(k)\,\varphi(k)$
    update parameters $\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$
end loop

Remark: Outputs more than $na$ steps ago, and inputs more than $nb$ steps ago, can be forgotten.
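A direct MATLAB transcription of this loop for a first-order model (na = nb = nk = 1) is sketched below; the data vectors u and y and the initialization constant delta are assumed to be given.

% Recursive ARX sketch for the model y(k) + a*y(k-1) = b*u(k-1) + e(k).
% u and y are assumed to be column vectors of measured input/output data.
n     = 2;                         % number of parameters: [a; b]
theta = zeros(n, 1);               % theta_hat(0)
delta = 1e-3;
Pinv  = (1/delta)*eye(n);          % P^{-1}(0) = (1/delta) I
for k = 2:length(y)                % start at k = 2 so y(k-1), u(k-1) exist
    phi   = [-y(k-1); u(k-1)];     % ARX regressor
    epsk  = y(k) - phi'*theta;     % prediction error
    Pinv  = Pinv - (Pinv*(phi*phi')*Pinv) / (1 + phi'*Pinv*phi);
    theta = theta + Pinv*phi*epsk; % theta(1) estimates a, theta(2) estimates b
end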


Guarantees idea

With $\hat{\theta}(0) = 0$, $P^{-1}(0) = \frac{1}{\delta} I$ and $\delta \to 0$, recursive ARX at step $k$ is equivalent to running batch ARX on the dataset $u(1), y(1), \ldots, u(k), y(k)$
$\Rightarrow$ the same guarantees as batch ARX:

Theorem
Under appropriate assumptions (including the existence of true parameters $\theta_0$), ARX identification is consistent: the estimated parameters $\hat{\theta}$ tend to the true parameters $\theta_0$ as $k \to \infty$.

The condition on $P$ is just to ensure the inverses are the same as in the offline case.


Matlab example


System
To illustrate recursive ARX, we take a known system (Söderström & Stoica):
$$y(k) + a\,y(k-1) = b\,u(k-1) + e(k), \qquad a = -0.9,\ b = 1$$

system = idpoly([1 -0.9], [0 1]);

Identification data obtained in simulation:

sim(system, u, noise);
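For completeness, one way to generate such a dataset is sketched below, simulating the difference equation directly; the input signal, the noise level, and the number of samples are assumptions made for illustration.

% Illustrative data generation for y(k) - 0.9 y(k-1) = u(k-1) + e(k).
N = 1000;
u = idinput(N, 'prbs');            % PRBS input signal
e = 0.1*randn(N, 1);               % white noise (assumed variance)
y = zeros(N, 1);
for k = 2:N
    y(k) = 0.9*y(k-1) + u(k-1) + e(k);
end
id = iddata(y, u, 1);              % dataset object passed to rarx below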


Recursive ARX

model = rarx(id, [na, nb, nk], 'ff', 1, th0, Pinv0);

Arguments:
1. Identification data.
2. Array containing the orders of A and B and the delay nk.
3. 'ff', 1 selects the algorithm variant presented in this lecture.
4. th0 is the initial parameter value.
5. Pinv0 is the initial inverse $P^{-1}(0)$.
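A concrete call for the first-order example above could then be (the initialization values follow the typical choices discussed earlier):

% Concrete rarx call for the first-order example (na = nb = nk = 1).
na = 1; nb = 1; nk = 1;
th0   = zeros(na+nb, 1);           % initial parameters
delta = 1e-3;
Pinv0 = (1/delta)*eye(na+nb);      % initial inverse P^{-1}(0)
model = rarx(id, [na, nb, nk], 'ff', 1, th0, Pinv0);
% The resulting estimates can be compared with the true values a = -0.9, b = 1.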


Results
na = nb = nk = 1
$P^{-1}(0) = \frac{1}{\delta} I$; left: $\delta = 10^{-3}$, right: $\delta = 100$.

Conclusions:
Convergence to the true parameter values...
...slower when $\delta$ is larger (good confidence in the initial solution).


System outside model class


Take now an output-error system (so it is not ARX with white noise), again from Söderström & Stoica:
$$y(k) = \frac{b\,q^{-1}}{1 + f\,q^{-1}}\,u(k) + e(k), \qquad f = -0.9,\ b = 1$$

system = idpoly([], [0 1], [], [], [1 -.9]);

Identification data: obtained in simulation, as before.


Results
na = nb = nk = 1, so we attempt the ARX model:
$$y(k) + f\,y(k-1) = b\,u(k-1) + e(k)$$
Also, $P^{-1}(0) = \frac{1}{\delta} I$, $\delta = 10^{-3}$.

Conclusion:
Convergence to the true values is not attained! As expected, due to the colored noise.


Other recursive algorithms

Other recursive identification methods are available in Matlab, e.g.:
ARMAX: rarmax
output-error: roe
generic prediction error method: rpem


Recursive instrumental variables


Recall: Instrumental variable method


IV methods find the parameter vector using:
$$\hat{\theta} = \Big[\frac{1}{N}\sum_{k=1}^{N} Z(k)\,\varphi^\top(k)\Big]^{-1} \Big[\frac{1}{N}\sum_{k=1}^{N} Z(k)\,y(k)\Big] \qquad (8.1)$$
which is the solution to the system of equations:
$$\frac{1}{N}\sum_{k=1}^{N} Z(k)\,\big[\varphi^\top(k)\,\theta - y(k)\big] = 0 \qquad (8.2)$$
The regressor vector:
$$\varphi(k) = [-y(k-1), \cdots, -y(k-na), u(k-1), \cdots, u(k-nb)]^\top$$
Motivation: the instrument vector $Z(k)$ is uncorrelated with the noise $\Rightarrow$ IV can handle colored noise.
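To connect (8.1) to code, a direct MATLAB transcription could look like this; it assumes Phi and Z are N x n matrices whose k-th rows hold $\varphi^\top(k)$ and $Z^\top(k)$, and y is the N x 1 output vector.

% Batch IV estimate, eq. (8.1).
N = size(Phi, 1);
R = (Z'*Phi)/N;                    % (1/N) sum_k Z(k) phi(k)'
r = (Z'*y)/N;                      % (1/N) sum_k Z(k) y(k)
theta_hat = R \ r;                 % solves R*theta = r without forming inv(R)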


Recursive formulation

Rewrite the equation for the recursive case:
$$\hat{\theta}(k) = P^{-1}(k) \sum_{j=1}^{k} Z(j)\,y(j) \qquad (8.3)$$
where $P(k) = \sum_{j=1}^{k} Z(j)\,\varphi^\top(j)$.
Note that averaging by the number of data points has been removed; it was needed in batch IV since summations over many data points could grow very large, leading to numerical problems.
Recursive IV will add terms one by one, and will therefore be numerically stable.


Recursive formulation (2)


With this definition of P, the recursive update is found similarly to
RLS:

k
X
b ) = P 1 (k)
(k
Z (j)y (j)
j=1

k1
X
= P 1 (k)
Z (j)y (j) + Z (k )y (k )
j=1

i
b 1) + Z (k)y(k)
(k) P(k 1)(k
h
i
b 1) + Z (k)y(k)
= P 1 (k) [P(k) Z (k)> (k )](k
h
i
b 1) + P 1 (k ) Z (k )> (k)(k
b 1) + Z (k )y (k )
= (k
h
i
b 1) + P 1 (k )Z (k ) y (k ) > (k )(k
b 1)
= (k
=P


Recursive formulation (3)

Final formula:
$$\hat{\theta}(k) = \hat{\theta}(k-1) + P^{-1}(k)\,Z(k)\big[y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)\big] = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$$
with the Sherman-Morrison update of the inverse:
$$P^{-1}(k) = P^{-1}(k-1) - \frac{P^{-1}(k-1)\,Z(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,Z(k)}$$


Recursive IV: Overall algorithm

Recursive IV

initialize $\hat{\theta}(0)$, $P^{-1}(0)$
loop at every step $k = 1, 2, \ldots$
    measure $y(k)$, form regressor vector $\varphi(k)$ and instrument vector $Z(k)$
    find prediction error $\varepsilon(k) = y(k) - \varphi^\top(k)\,\hat{\theta}(k-1)$
    update inverse $P^{-1}(k) = P^{-1}(k-1) - \dfrac{P^{-1}(k-1)\,Z(k)\,\varphi^\top(k)\,P^{-1}(k-1)}{1 + \varphi^\top(k)\,P^{-1}(k-1)\,Z(k)}$
    compute weights $W(k) = P^{-1}(k)\,Z(k)$
    update parameters $\hat{\theta}(k) = \hat{\theta}(k-1) + W(k)\,\varepsilon(k)$
end loop
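As a final illustration, a MATLAB sketch of the recursive IV loop for a first-order model is given below. The instruments are built from delayed inputs, which is one common choice but an assumption here; the algorithm only requires $Z(k)$ to be uncorrelated with the noise. The data vectors u and y are assumed given.

% Recursive IV sketch for na = nb = 1, with delayed inputs as instruments.
n     = 2;
theta = zeros(n, 1);               % theta_hat(0)
Pinv  = 1e3*eye(n);                % P^{-1}(0) = (1/delta) I with delta = 1e-3
for k = 3:length(y)                % start at k = 3 so u(k-2) exists
    phi   = [-y(k-1); u(k-1)];     % ARX-type regressor
    Zk    = [ u(k-2); u(k-1)];     % instruments: delayed inputs (assumed choice)
    epsk  = y(k) - phi'*theta;     % prediction error
    Pinv  = Pinv - (Pinv*(Zk*phi')*Pinv) / (1 + phi'*Pinv*Zk);
    theta = theta + Pinv*Zk*epsk;  % parameter update
end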
