
2.160 System Identification, Estimation, and Learning

Lecture Notes No. 11
March 15, 2006

6.5 Time-Series Data Compression

[Figure: Finite Impulse Response model — the input u(t) passes through an FIR filter
with impulse-response coefficients b1, b2, b3, … to produce the output y(t).]

Consider an FIR model:

y(t) = b1 u(t−1) + b2 u(t−2) + ··· + bm u(t−m)

The transfer functions of (11) are then G(q) = B(q), H(q) = 1.

One-Step-Ahead Predictor

ŷ(t|θ) = H⁻¹(q)G(q)u(t) + [1 − H⁻¹(q)] y(t)
       = G(q)u(t)

ŷ(t|θ) = φᵀ(t)θ : linear regression

Given input data {u(t), 1 ≤ t ≤ N}, the least squares estimate of the parameter vector
is obtained as

θ̂_N^LS = arg min_θ (1/N) Σ_{t=1}^{N} (1/2)(y(t) − ŷ(t|θ))² = (R(N))⁻¹ f(N)     (40)

where

R(N) = (1/N) Σ_{t=1}^{N} φ(t)φᵀ(t) = (1/N) ΦΦᵀ   and   f(N) = (1/N) Σ_{t=1}^{N} φ(t)y(t)     (41)
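As a concrete illustration, the batch estimate (40)–(41) can be computed directly from input/output data. The sketch below (Python/NumPy) uses an illustrative fourth-order FIR system with made-up coefficients and noise level, none of which come from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative true FIR coefficients b1..bm (not from the notes)
b_true = np.array([0.5, 0.3, -0.2, 0.1])
m, N = len(b_true), 500

u = rng.standard_normal(N)           # white, persistently exciting input
e = 0.05 * rng.standard_normal(N)    # measurement noise (here white)

# y(t) = b1 u(t-1) + ... + bm u(t-m) + e(t)
y = np.zeros(N)
for t in range(m, N):
    y[t] = b_true @ u[t - m:t][::-1] + e[t]

# Regressor phi(t) = [u(t-1), ..., u(t-m)]^T, stacked as rows for t = m..N-1
Phi = np.array([u[t - m:t][::-1] for t in range(m, N)])
Y = y[m:]
Nr = len(Y)

# Normal equations (40)-(41): theta = R(N)^{-1} f(N)
R = Phi.T @ Phi / Nr
f = Phi.T @ Y / Nr
theta_hat = np.linalg.solve(R, f)
print(theta_hat)   # close to b_true
```

With a white input, R(N) is well conditioned and the estimate converges to the true coefficients as N grows, consistent with the consistency claim below.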

Pros and Cons of FIR Modeling

Pros
The LSE gives a consistent estimate, lim_{N→∞} θ̂_N^LS = θ₀, as long as the input
sequence {u(t)} is uncorrelated with the noise e(t), which may be colored.

Cons
Two failure scenarios:
• The impulse response has a slowly decaying mode.
• The sampling rate is high.
In either case the number of parameters, m, becomes too large to estimate.
The persistent excitation condition, rank Φ = full rank, can hardly be satisfied.
Check the eigenvalues of ΦΦᵀ (or the singular values of Φ):

λ₁ ≥ λ₂ ≥ ⋯ ≥ λm

It is likely that

λ₁ ≥ λ₂ ≥ ⋯ ≥ λn ≥ λn+1 ≅ 0 = ⋯ = λm

Often m becomes more than 50, and it is difficult to obtain an input series having
50 non-zero singular values.
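This rank deficiency is easy to see numerically. In the sketch below (illustrative sizes and frequencies, not from the notes), the input contains only three sinusoidal components, so it is persistently exciting of order 6; for m = 50 the regressor matrix Φ then has only about 6 significant singular values:

```python
import numpy as np

N, m = 400, 50
t = np.arange(N)

# Input with only 3 sinusoidal components (illustrative frequencies):
# persistently exciting of order 6 only
u = np.sin(0.3 * t) + np.sin(0.7 * t) + np.sin(1.1 * t)

# Rows are regressors [u(t-1), ..., u(t-m)]
Phi = np.array([u[k - m:k][::-1] for k in range(m, N)])
s = np.linalg.svd(Phi, compute_uv=False)

# Only ~6 singular values are significant; lambda_7, ..., lambda_50 are ~ 0
print(np.sum(s > 1e-8 * s[0]))   # -> 6
```

Each real sinusoid contributes two dimensions to the span of the delayed inputs, which is why exactly six singular values survive.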

Time-series data compression is an effective method for coping with this difficulty.
Before the above least squares estimation problem is formulated, the data are
processed so that the information contained in the series of regressors is
represented in a compact, low-dimensional form.

[Figure: Coordinate transformation (data compression) — filters L_k(q) map the
regressor φ ∈ R^m to a lower-dimensional space R^n, n < m, discarding the directions
with λ_i ≅ 0.]

[Figure: Filter-bank model — u(t) passes through filters L1(q), …, Ln(q); the filter
outputs, weighted by b1, …, bn, are summed to produce y(t).]
6.6 Continuous-Time Laguerre Series Expansion

Let us begin with continuous-time Laguerre expansion.

[Theorem 6.6]
Assume a transfer function G(s) is
• Strictly proper: lim_{s→∞} G(s) = 0, i.e. G(∞) = 0     (45)
• Analytic in Re(s) > 0 (no pole in the right half plane)
• Continuous in Re(s) ≥ 0

Then there exists a sequence {g_k} such that

G(s) = Σ_{k=1}^{∞} g_k (√(2a)/(s+a)) ((s−a)/(s+a))^(k−1)     (46)

where a > 0.

Proof:
Consider the transformation given by

z = (s+a)/(s−a)     (47)

Then sz − az = s + a, i.e. s(z−1) = a(z+1), so

s = a(z+1)/(z−1)   (bilinear transformation)     (48)

On the imaginary axis, s = jω, we have |jω+a| = |jω−a|, so

|z| = |s+a| / |s−a| = 1     (42)

∠z = ∠(s+a) − ∠(s−a)

The imaginary axis is therefore mapped to the unit circle. For Re(s) > 0,
|s+a| > |s−a|, hence

|z| = |s+a| / |s−a| > 1     (43)

so the right half plane is mapped to the outside of the unit circle.

Since G(s) is analytic in Re(s) > 0, consider

G(z) = G(a(z+1)/(z−1))     (44)

which is analytic outside the unit circle.

Therefore, there exists a Laurent expansion for G(z):

G(z) = Σ_{k=1}^{∞} g_k z^(−k),   |z| > 1     (49)

Considering that G(∞) = 0 and lim_{s→∞} (s+a)/(s−a) = 1, we have G(z) = 0 at z = 1.
The expansion can therefore be rewritten (with relabeled coefficients g_k) as

G(z) = (1/√(2a)) (1 − z⁻¹) Σ_{k=1}^{∞} g_k z^(−(k−1))     (50)

Substituting (47) into (50):

G((s+a)/(s−a)) = G(s) = (1/√(2a)) (1 − (s−a)/(s+a)) Σ_{k=1}^{∞} g_k ((s+a)/(s−a))^(−(k−1))

Since 1 − (s−a)/(s+a) = 2a/(s+a),

∴ G(s) = Σ_{k=1}^{∞} g_k (√(2a)/(s+a)) ((s−a)/(s+a))^(k−1)

which is (46). The Laguerre basis functions are

L_k(s) = (√(2a)/(s+a)) ((s−a)/(s+a))^(k−1)     (51)

The first factor √(2a)/(s+a) is a low-pass filter; the factor (s−a)/(s+a) is an
all-pass filter, since |(jω−a)/(jω+a)| = 1 for all ω. The L_k(s) are the Laplace
transforms of the Laguerre functions.
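The all-pass property is easy to verify numerically. A quick sketch (a = 2 is an arbitrary illustrative value): on the imaginary axis the factor (s−a)/(s+a) has unit magnitude, so every L_k(jω) shares the magnitude of the first-order low-pass factor and the filters differ only in phase:

```python
import numpy as np

a = 2.0                          # illustrative Laguerre pole parameter
w = np.linspace(-50.0, 50.0, 1001)
s = 1j * w

allpass = (s - a) / (s + a)      # unit magnitude for s = jw
lowpass = np.sqrt(2 * a) / (s + a)

# |(jw - a)/(jw + a)| = 1 on the whole imaginary axis
print(np.max(np.abs(np.abs(allpass) - 1.0)))   # ~ 0 (round-off)

# L_3(jw) = lowpass * allpass^2 has the same magnitude as the low-pass factor
L3 = lowpass * allpass**2
print(np.max(np.abs(np.abs(L3) - np.abs(lowpass))))   # ~ 0 (round-off)
```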

The Main Point

The above Laguerre series expansion can be used for data compression if the
parameter a, called the Laguerre pole, is chosen such that the slow poles, i.e. the
dominant poles, of the original system are close to the Laguerre pole. Let p_i be a
slow real pole of the original transfer function G(s). If the Laguerre pole a is
chosen such that −a ≅ p_i, then the truncated Laguerre expansion

G_n(s) = Σ_{k=1}^{n} g_k (√(2a)/(s+a)) ((s−a)/(s+a))^(k−1) → G(s)     (52)

converges to the original G(s) quickly.

- Recall -
For a continuous-time system, a pole close to the imaginary axis decays slowly,
while a pole far from the imaginary axis decays quickly. Likewise, in discrete time,
a pole close to the origin of the z-plane decays quickly, while one near the unit
circle decays slowly.

[Figure: pole speed — in the s-plane, poles far from the imaginary axis are fast and
poles near it are slow; in the z-plane, poles near the origin are fast and poles near
the unit circle are slow.]

Choose a such that p₁ ≅ −a, where p₁ is a slow, stable pole. Under the bilinear
transform, this slow pole in the s-plane is mapped to a fast pole in the z-plane,
as shown below. Represented in the z-plane, the transfer function can be truncated;
just a few terms approximate the impulse response, since it converges quickly.

[Figure: a slow s-plane pole p₁ ≅ −a is mapped by z = (s+a)/(s−a) to a fast z-plane
pole z ≅ 0; the coefficients {g_k} then converge quickly and the expansion can be
truncated after a few terms.]

When the original transfer function has many poles, the Laguerre pole is placed near
the dominant pole so that most of the slow poles may be approximated by the
Laguerre pole.

[Figure: with the Laguerre pole −a placed near the dominant pole, the bilinear
transform confines all the poles within a circle of radius r < 1 in the transformed
plane.]

With the bilinear transform, these slow poles are mapped to poles near the origin
of the z-plane. They are fast, so the transfer function can be approximated by a
low-order model.
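The pole-mapping argument can be checked with a few numbers. A minimal sketch (the pole locations are illustrative): under z = (s+a)/(s−a) with a = 1, a stable pole near −a lands near the z-plane origin, and every stable pole lands strictly inside the unit circle:

```python
def to_z(s, a):
    """Bilinear map z = (s + a) / (s - a) from Section 6.6."""
    return (s + a) / (s - a)

a = 1.0
for p in (-0.9, -1.1, -0.1, -5.0):
    print(p, abs(to_z(p, a)))

# The poles p = -0.9 and p = -1.1 (close to -a) map to |z| ~ 0.05:
# fast z-plane poles. All stable poles map inside the unit circle.
```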

6.7 Discrete-Time Laguerre Series Expansion

[Theorem 6.7]
Assume that a z-transform G(z) is
• Strictly proper: G(∞) = 0
• Analytic in |z| > 1 (no pole outside the unit circle)
• Continuous in |z| ≥ 1

Then there exists a sequence {g_k} such that

G(z) = Σ_{k=1}^{∞} g_k (K/(z−a)) ((1−az)/(z−a))^(k−1)     (53)

where −1 < a < 1 and

K = √((1−a²)T)     (54)
Proof:
Consider the bilinear transformation:

w = (z−a)/(1−az)     (55)

Then w − azw = z − a, i.e. w + a = z(aw + 1), therefore

z = (w+a)/(aw+1)     (56)

[Figure: the map w = (z−a)/(1−az) takes the z-plane region |z| > 1 to the w-plane
region |w| > 1.]

G(w) = G((w+a)/(aw+1)) is analytic in |w| > 1 and is proper, G(∞) = 0. Since

lim_{z→∞} w = −1/a,   we have G(−1/a) = 0     (57)

Therefore

G(w) = (T/K)(a + w⁻¹) Σ_{k=1}^{∞} g_k w^(−(k−1))     (58)

G(z) = G((z−a)/(1−az)) = (T/K)(a + (1−az)/(z−a)) Σ_{k=1}^{∞} g_k ((z−a)/(1−az))^(−(k−1))     (59)

Since a + (1−az)/(z−a) = (1−a²)/(z−a) and (1−a²)T/K = K,

∴ G(z) = Σ_{k=1}^{∞} g_k (K/(z−a)) ((1−az)/(z−a))^(k−1)     (53)

Q.E.D.

Now we can write

y(t) = G(q)u(t)
     = Σ_{k=1}^{n} g_k (K/(q−a)) ((1−aq)/(q−a))^(k−1) u(t) = Σ_{k=1}^{n} g_k L_k(q)u(t)

where L_k(q), k = 1, …, n is a series of filters. Once the original input data are
filtered with L_k(q), k = 1, …, n:

x_k(t) = L_k(q)u(t),   k = 1, …, n     (60)

the output y(t) is represented as a moving average of the transformed inputs x_k,
that is, an FIR model.

[Figure: discrete Laguerre filter bank — u(t) passes through L1(q), …, Ln(q) to give
x1, …, xn; the outputs, weighted by g1, …, gn, are summed to produce y(t).]

Furthermore, x_k(t) can be computed recursively.

x₁(t) = L₁(q)u(t) = (K/(q−a)) u(t) = (Kq⁻¹/(1−aq⁻¹)) u(t)

x₁(t) − ax₁(t−1) = Ku(t−1)

x₁(t) = ax₁(t−1) + Ku(t−1)

x₂(t) = ((1−aq)/(q−a)) x₁(t) = ((q⁻¹−a)/(1−aq⁻¹)) x₁(t)

x₂(t) − ax₂(t−1) = x₁(t−1) − ax₁(t)

x₂(t) = ax₂(t−1) + x₁(t−1) − ax₁(t)

In general,

x_k(t) = ax_k(t−1) + x_{k−1}(t−1) − ax_{k−1}(t)     (61)

These are the recursive filters.
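The recursion (61) translates directly into code. A minimal sketch (a = 0.8 and T = 1 are illustrative choices, not from the notes): the filter bank is run sample by sample, and the first filter's impulse response K, Ka, Ka², … provides a simple sanity check:

```python
import numpy as np

def laguerre_bank(u, a, K, n):
    """Discrete Laguerre filters via the recursion (61):
       x1(t) = a*x1(t-1) + K*u(t-1)
       xk(t) = a*xk(t-1) + x_{k-1}(t-1) - a*x_{k-1}(t)
    """
    x = np.zeros((n, len(u)))
    for t in range(1, len(u)):
        x[0, t] = a * x[0, t - 1] + K * u[t - 1]
        for k in range(1, n):
            x[k, t] = a * x[k, t - 1] + x[k - 1, t - 1] - a * x[k - 1, t]
    return x

a, T = 0.8, 1.0
K = np.sqrt((1 - a**2) * T)      # eq. (54)

# Impulse input: x1 should be 0, K, K*a, K*a^2, ...
u = np.zeros(20)
u[0] = 1.0
x = laguerre_bank(u, a, K, n=3)
print(np.allclose(x[0, 1:], K * a ** np.arange(19)))   # True
```

The rows x_k then serve as the compressed regressors for the FIR least squares fit of Section 6.5.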
