Signal Modeling

The idea of signal modeling is to represent a signal by a (small) set of model parameters. Signal modeling is used for signal compression, prediction, reconstruction, and understanding.

Two generic model classes will be considered:
- ARMA, AR, and MA models,
- low-rank models.

AR, MA, and ARMA equations

General ARMA equation:

$$\sum_{k=0}^{N} a_k\, y(n-k) = \sum_{k=0}^{M} b_k\, x(n-k) \qquad \text{(ARMA)}.$$

Particular cases:

$$y(n) = \sum_{k=0}^{M} b_k\, x(n-k) \qquad \text{(MA)},$$

$$\sum_{k=0}^{N} a_k\, y(n-k) = x(n) \qquad \text{(AR)}.$$

Taking Z-transforms of both sides of the ARMA equation,

$$\sum_{k=0}^{N} a_k\, \mathcal{Z}\{y(n-k)\} = \sum_{k=0}^{M} b_k\, \mathcal{Z}\{x(n-k)\},$$
and using the time-shift property, we obtain

$$Y(z) \sum_{k=0}^{N} a_k z^{-k} = X(z) \sum_{k=0}^{M} b_k z^{-k}.$$

Therefore, the transfer function of the causal LTI system (filter) is

$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} = \frac{B_M(z)}{A_N(z)}.$$
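As a quick numerical illustration (not part of the original slides), the response $H(e^{j\omega}) = B_M(e^{j\omega})/A_N(e^{j\omega})$ can be evaluated on a frequency grid with scipy; the ARMA coefficient values below are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import freqz

# Hypothetical ARMA(2, 2) coefficients with the usual normalization a_0 = b_0 = 1.
b = [1.0, 0.5, 0.2]    # numerator (MA) coefficients b_0..b_M
a = [1.0, -1.2, 0.8]   # denominator (AR) coefficients a_0..a_N

# freqz evaluates H(e^{jw}) = B(e^{jw})/A(e^{jw}) on a grid of w in [0, pi).
w, H = freqz(b, a, worN=512)
print(np.abs(H[:4]))   # magnitude response at the first few grid frequencies
```
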
Pole-zero Model

We model a signal y(n) as the response of an LTI filter to an input x(n). The goal is to find the filter coefficients and the input x(n) that make the modeled signal $\hat{y}(n)$ as close as possible to y(n). The most general model is the so-called pole-zero model

$$H(z) = \frac{B(z)}{A(z)}.$$

All-pole Modeling: Yule-Walker Equations

All-pole model:

$$H(z) = \frac{1}{A(z)}.$$

Consider the real AR equation

$$y(n) + a_1 y(n-1) + \cdots + a_N y(n-N) = x(n)$$

(with $a_0 = 1$) and assume that $E\{x(n)x(n-k)\} = \sigma^2 \delta(k)$. Since the AR model implies the MA($\infty$) representation

$$y(n) = x(n) + h_1 x(n-1) + h_2 x(n-2) + \cdots,$$

where $h_k$ are the impulse response coefficients of $1/A(z)$, we get

$$E\{y(n)x(n)\} = E\{[x(n) + h_1 x(n-1) + h_2 x(n-2) + \cdots]\, x(n)\} = \sigma^2.$$

Similarly,

$$E\{y(n-k)x(n)\} = E\{[x(n-k) + h_1 x(n-k-1) + \cdots]\, x(n)\} = 0 \quad \text{for } k > 0.$$

Represent the AR equation in vector form:

$$[y(n),\, y(n-1),\, \ldots,\, y(n-N)] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = x(n).$$

Multiplying by y(n) and taking expectations,

$$E\{y(n)x(n)\} = E\left\{ y(n)\, [y(n),\, y(n-1),\, \ldots,\, y(n-N)] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} \right\} = [r_0,\, r_1,\, \ldots,\, r_N] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \sigma^2.$$

Similarly, for k > 0,

$$E\{y(n-k)x(n)\} = E\left\{ y(n-k)\, [y(n),\, \ldots,\, y(n-N)] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} \right\} = [r_k,\, r_{k-1},\, \ldots,\, r_{k-N}] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = 0.$$

Stacking the equations for $k = 0, 1, \ldots, N$ gives

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N \\ r_1 & r_0 & \cdots & r_{N-1} \\ \vdots & \vdots & \ddots & \vdots \\ r_N & r_{N-1} & \cdots & r_0 \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} \sigma^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

If we omit the first equation, we get

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_{N-1} \\ r_1 & r_0 & \cdots & r_{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ r_{N-1} & r_{N-2} & \cdots & r_0 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = -\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix},$$

or, in matrix notation,

$$Ra = -r \qquad \text{(Yule-Walker equations)}.$$

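A minimal numerical sketch of solving the Yule-Walker equations (function and variable names are my own; the autocorrelations are assumed estimated from data with the biased estimator):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_ar(y, N):
    """Fit an AR(N) model by solving R a = -r, with biased autocorrelation
    estimates r_k = (1/L) sum_n y(n) y(n-k)."""
    y = np.asarray(y, dtype=float)
    L = len(y)
    r = np.array([y[k:] @ y[:L - k] for k in range(N + 1)]) / L
    # solve_toeplitz takes the first column of the symmetric Toeplitz matrix R.
    a = solve_toeplitz(r[:N], -r[1:N + 1])   # a_1, ..., a_N
    sigma2 = r[0] + r[1:N + 1] @ a           # the first (omitted) equation gives sigma^2
    return a, sigma2
```
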
Levinson Recursion

The Toeplitz structure of R can be exploited to solve the Yule-Walker equations order-recursively. For this purpose, let us introduce slightly different notation:

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N \\ r_1 & r_0 & \cdots & r_{N-1} \\ \vdots & \vdots & \ddots & \vdots \\ r_N & r_{N-1} & \cdots & r_0 \end{bmatrix} \begin{bmatrix} 1 \\ a_{N,1} \\ \vdots \\ a_{N,N} \end{bmatrix} = \begin{bmatrix} \sigma_N^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$

which, in matrix notation, is

$$R_N a_N = \sigma_N^2 e_1, \qquad \text{where } e_1 = [1, 0, 0, \ldots, 0]^T.$$

For N = 1, we have

$$r_0 + r_1 a_{1,1} = \sigma_1^2, \qquad r_1 + r_0 a_{1,1} = 0,$$

and thus

$$a_{1,1} = -\frac{r_1}{r_0}, \qquad \sigma_1^2 = r_0 \left[ 1 - \left( \frac{r_1}{r_0} \right)^2 \right].$$

Levinson Recursion (cont.)

Goal: given $a_N$, find the solution to the (N+1)st-order equations $R_{N+1} a_{N+1} = \sigma_{N+1}^2 e_1$.

Append a zero to $a_N$ and multiply the resulting vector by $R_{N+1}$:

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N & r_{N+1} \\ r_1 & r_0 & \cdots & r_{N-1} & r_N \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ r_N & r_{N-1} & \cdots & r_0 & r_1 \\ r_{N+1} & r_N & \cdots & r_1 & r_0 \end{bmatrix} \begin{bmatrix} 1 \\ a_{N,1} \\ \vdots \\ a_{N,N} \\ 0 \end{bmatrix} = \begin{bmatrix} \sigma_N^2 \\ 0 \\ \vdots \\ 0 \\ \gamma_N \end{bmatrix},$$

where $\gamma_N = r_{N+1} + \sum_{k=1}^{N} a_{N,k}\, r_{N+1-k}$. Use the symmetric Toeplitz property of $R_{N+1}$ to rewrite this as

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_N & r_{N+1} \\ r_1 & r_0 & \cdots & r_{N-1} & r_N \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ r_N & r_{N-1} & \cdots & r_0 & r_1 \\ r_{N+1} & r_N & \cdots & r_1 & r_0 \end{bmatrix} \begin{bmatrix} 0 \\ a_{N,N} \\ \vdots \\ a_{N,1} \\ 1 \end{bmatrix} = \begin{bmatrix} \gamma_N \\ 0 \\ \vdots \\ 0 \\ \sigma_N^2 \end{bmatrix}.$$

Now, take a weighted sum of the above two equations:

$$R_{N+1} \left\{ \begin{bmatrix} 1 \\ a_{N,1} \\ \vdots \\ a_{N,N} \\ 0 \end{bmatrix} + \kappa_{N+1} \begin{bmatrix} 0 \\ a_{N,N} \\ \vdots \\ a_{N,1} \\ 1 \end{bmatrix} \right\} = \begin{bmatrix} \sigma_N^2 \\ 0 \\ \vdots \\ 0 \\ \gamma_N \end{bmatrix} + \kappa_{N+1} \begin{bmatrix} \gamma_N \\ 0 \\ \vdots \\ 0 \\ \sigma_N^2 \end{bmatrix}.$$

Now, pick

$$\kappa_{N+1} = -\frac{\gamma_N}{\sigma_N^2},$$

which reduces the above equation to $R_{N+1} a_{N+1} = \sigma_{N+1}^2 e_1$, where

$$a_{N+1} = \begin{bmatrix} 1 \\ a_{N,1} \\ \vdots \\ a_{N,N} \\ 0 \end{bmatrix} + \kappa_{N+1} \begin{bmatrix} 0 \\ a_{N,N} \\ \vdots \\ a_{N,1} \\ 1 \end{bmatrix} \qquad \text{and} \qquad \sigma_{N+1}^2 = \sigma_N^2 + \kappa_{N+1} \gamma_N = \sigma_N^2 \left[ 1 - \kappa_{N+1}^2 \right].$$

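The recursion above translates directly into code. A sketch (my own function name; r is assumed to hold autocorrelations r_0..r_N):

```python
import numpy as np

def levinson(r, N):
    """Levinson-Durbin recursion: solve R_N a_N = sigma_N^2 e_1 for the
    symmetric Toeplitz system built from autocorrelations r[0..N]."""
    r = np.asarray(r, dtype=float)
    a = np.array([1.0, -r[1] / r[0]])             # order-1 solution [1, a_{1,1}]
    sigma2 = r[0] * (1.0 - (r[1] / r[0]) ** 2)    # sigma_1^2
    for n in range(1, N):
        # gamma_n = r_{n+1} + sum_{k=1}^{n} a_{n,k} r_{n+1-k}
        gamma = r[n + 1] + a[1:] @ r[n:0:-1]
        kappa = -gamma / sigma2                   # reflection coefficient kappa_{n+1}
        # a_{n+1} = [a_n, 0] + kappa_{n+1} * [0, a_{n,n}, ..., a_{n,1}, 1]
        a = np.concatenate((a, [0.0])) + kappa * np.concatenate(([0.0], a[:0:-1], [1.0]))
        sigma2 *= 1.0 - kappa ** 2                # sigma_{n+1}^2 = sigma_n^2 (1 - kappa^2)
    return a, sigma2
```
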
All-pole Modeling: Prony's Method

The Yule-Walker equations do not show an explicit way of finding the AR model coefficients from the data. Consider the AR equation

$$y(n) = -\sum_{k=1}^{N} a_k\, y(n-k) + x(n),$$

written for $n = N, N+1, \ldots, L-1$, where $\{y(n)\}_{n=0}^{L-1}$ are the measured data (L > N). In matrix form:

$$\begin{bmatrix} y(N) \\ y(N+1) \\ \vdots \\ y(L-1) \end{bmatrix} = -\begin{bmatrix} y(N-1) & y(N-2) & \cdots & y(0) \\ y(N) & y(N-1) & \cdots & y(1) \\ \vdots & \vdots & \ddots & \vdots \\ y(L-2) & y(L-3) & \cdots & y(L-N-1) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} + \begin{bmatrix} x(N) \\ x(N+1) \\ \vdots \\ x(L-1) \end{bmatrix}.$$

In matrix notation, this is the overdetermined system

$$y = -Ya + x.$$

To find a solution, use least squares (LS), i.e. minimize

$$\|x\|^2 = (y + Ya)^H (y + Ya).$$

The solution is given by the normal equations

$$Y^H Y a = -Y^H y.$$

Solving the normal equations, we obtain $\hat{a}$. Formally, the solution can be written as

$$\hat{a} = -(Y^H Y)^{-1} Y^H y.$$

Relationship between the Yule-Walker and normal (Prony) equations:

$$Ra = -r, \qquad Y^H Y a = -Y^H y,$$

i.e.

$$R \leftrightarrow Y^H Y, \qquad r \leftrightarrow Y^H y.$$

In practice,

$$\hat{R} = Y^H Y, \qquad \hat{r} = Y^H y$$

represent sample estimates of the exact covariance matrix R and covariance vector r, respectively!

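A sketch of the covariance (Prony) method in code (my own naming; np.linalg.lstsq solves the normal equations implicitly and more stably than forming the inverse):

```python
import numpy as np

def prony_ar(y, N):
    """Least-squares AR(N) fit: minimize ||y_vec + Y a||^2, i.e. solve the
    normal equations Y^H Y a = -Y^H y for the lagged-data matrix Y."""
    y = np.asarray(y, dtype=float)
    L = len(y)
    # Column k holds y(N-1-k), ..., y(L-2-k); row m is [y(N+m-1), ..., y(m)].
    Y = np.column_stack([y[N - 1 - k : L - 1 - k] for k in range(N)])
    a, *_ = np.linalg.lstsq(Y, -y[N:L], rcond=None)
    return a
```
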
Linear Prediction and All-pole Models

Consider the problem of predicting the future value y(n) of the process using a linear predictor based on the previous values $y(n-1), \ldots, y(n-N)$:

$$\hat{y}(n) = \sum_{i=1}^{N} w_i\, y(n-i) = w^T y,$$

where w is the predictor weight vector and y is the signal vector,

$$w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix}, \qquad y = \begin{bmatrix} y(n-1) \\ y(n-2) \\ \vdots \\ y(n-N) \end{bmatrix}.$$

Minimize the mean square error (MSE):

$$\sigma^2 = E\{[y(n) - \hat{y}(n)]^2\} = E\{[y(n) - w^T y]^2\} = E\{y(n)^2 - 2 w^T y\, y(n) + w^T y y^T w\} = E\{y(n)^2\} - 2 w^T r + w^T R w.$$

Taking the gradient, we obtain

$$\frac{\partial \sigma^2}{\partial w} = -2r + 2Rw = 0 \implies Rw = r,$$

i.e. the Yule-Walker equations again (with $w = -a$)!

Remarks:
- The order-recursive Levinson-Durbin algorithm can be used to compute solutions to the Yule-Walker (normal) equations (Toeplitz systems).
- The covariance (Prony) method can be modified to minimize the forward-backward prediction errors (improved performance).
- AR (all-pole) models are very good for modeling narrowband (peaky) signals.
- All-pole modeling is somewhat simpler than pole-zero modeling.

All-zero Modeling

All-zero model:

$$H(z) = B(z).$$

Consider the real MA equation

$$y(n) = \sum_{i=0}^{M} b_i\, x(n-i).$$

How to find the MA coefficients?

One idea: find the MA coefficients through the coefficients of an auxiliary higher-order AR model. We know that a finite MA model can be approximated by an infinite AR model:

$$B_M(z) = \sum_{k=0}^{M} b_k z^{-k} = \frac{1}{A_\infty(z)}.$$

Since AR($\infty$) is an idealization, let us take an auxiliary finite AR(N) model with large $N \gg M$ to find an approximation to the above equation:

$$B_M(z) \approx \frac{1}{\sum_{k=0}^{N} a_{k,\mathrm{aux}} z^{-k}}.$$

Clearly, the reverse relation also holds:

$$A_{N,\mathrm{aux}}(z) \approx \frac{1}{B_M(z)}.$$

When the auxiliary AR coefficients are obtained, the last step is to find the MA coefficients of the original MA model. This can be done by

$$\min_b \int_{-\pi}^{\pi} \left| A_{N,\mathrm{aux}}(e^{j\omega})\, B_M(e^{j\omega}) - 1 \right|^2 d\omega.$$

Durbin's Method

Step 1: Given the MA(M) signal y(n), fit an auxiliary high-order AR(N) model with $N \gg M$ using the Yule-Walker or normal equations.

Step 2: Using the AR coefficients obtained in the previous step, find the coefficients of the MA(M) model for the signal y(n).

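A compact sketch of both steps (my own function names; the rule of thumb N = 4M for the auxiliary order is an assumption, not from the slides):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def _ar_fit(v, order):
    """Autocorrelation-method AR fit: solve R a = -r for the given order."""
    v = np.asarray(v, dtype=float)
    r = np.array([v[k:] @ v[:len(v) - k] for k in range(order + 1)]) / len(v)
    return solve_toeplitz(r[:order], -r[1:order + 1])

def durbin_ma(y, M, N=None):
    """Durbin's method: Step 1 fits a high-order AR(N) model to y; Step 2
    fits an AR(M) model to the AR coefficient sequence itself, exploiting
    A_aux(z) ~ 1/B_M(z).  Returns estimates of b_1..b_M (with b_0 = 1)."""
    N = 4 * M if N is None else N                   # assumed rule of thumb
    a_aux = np.concatenate(([1.0], _ar_fit(y, N)))  # [1, a_1, ..., a_N]
    return _ar_fit(a_aux, M)  # treat the AR coefficients as an "AR(M) signal"
```
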
Pole-zero Modeling: Modified Yule-Walker Method

Pole-zero model (for $b_0 = 1$ and $a_0 = 1$):

$$H(z) = \frac{B(z)}{A(z)}.$$

Let us consider the real ARMA equation

$$y(n) + \sum_{i=1}^{N} a_i\, y(n-i) = x(n) + \sum_{i=1}^{M} b_i\, x(n-i)$$

and assume that

$$E\{x(n)x(n-k)\} = \sigma^2 \delta(k).$$

Write the ARMA(N, M) model as an MA($\infty$) equation:

$$y(n) = x(n) + h_1 x(n-1) + h_2 x(n-2) + \cdots$$

Similar to the all-pole modeling case, we obtain

$$E\{y(n)x(n)\} = \sigma^2, \qquad E\{y(n-k)x(n)\} = 0 \ \text{ for } k > 0.$$

The ARMA equation can be rewritten in vector form:

$$[y(n),\, y(n-1),\, \ldots,\, y(n-N)] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = [x(n),\, x(n-1),\, \ldots,\, x(n-M)] \begin{bmatrix} 1 \\ b_1 \\ \vdots \\ b_M \end{bmatrix}.$$

Multiply both sides of the last equation by y(n-k) and take $E\{\cdot\}$:

k = 0:

$$[r_0,\, r_1,\, \ldots,\, r_N] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = [\sigma^2,\, \sigma^2 h_1,\, \ldots,\, \sigma^2 h_M] \begin{bmatrix} 1 \\ b_1 \\ \vdots \\ b_M \end{bmatrix}.$$

k = 1:

$$[r_1,\, r_0,\, \ldots,\, r_{N-1}] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = [0,\, \sigma^2,\, \sigma^2 h_1,\, \ldots,\, \sigma^2 h_{M-1}] \begin{bmatrix} 1 \\ b_1 \\ \vdots \\ b_M \end{bmatrix},$$

and so on, until k = M.

For $k \ge M + 1$:

$$[r_{-k},\, r_{-k+1},\, \ldots,\, r_{-k+N}] \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = [0,\, 0,\, \ldots,\, 0] \begin{bmatrix} 1 \\ b_1 \\ \vdots \\ b_M \end{bmatrix} = 0.$$

Therefore, we obtain the modified Yule-Walker equations:

$$\begin{bmatrix} r_{-(M+1)} & r_{-M} & \cdots & r_{-(M+1)+N} \\ r_{-(M+2)} & r_{-(M+1)} & \cdots & r_{-(M+2)+N} \\ \vdots & \vdots & \ddots & \vdots \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = 0.$$

To solve for $a_1, \ldots, a_N$, we need N equations:

$$\begin{bmatrix} r_{M+1} & r_M & \cdots & r_{M-N+1} \\ r_{M+2} & r_{M+1} & \cdots & r_{M-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ r_{M+N} & r_{M+N-1} & \cdots & r_M \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = 0,$$

where we use $r_{-k} = r_k$. The matrix is $N \times (N+1)$. This is equivalent to

$$\begin{bmatrix} r_M & r_{M-1} & \cdots & r_{M-N+1} \\ r_{M+1} & r_M & \cdots & r_{M-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ r_{M+N-1} & r_{M+N-2} & \cdots & r_M \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = -\begin{bmatrix} r_{M+1} \\ r_{M+2} \\ \vdots \\ r_{M+N} \end{bmatrix},$$

with a square $N \times N$ matrix. In matrix notation:

$$Ra = -r \qquad \text{(modified Yule-Walker equations)}.$$

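A sketch of solving the modified Yule-Walker equations numerically (my own naming; r is assumed to hold autocorrelation estimates r_0..r_{M+N}):

```python
import numpy as np
from scipy.linalg import toeplitz

def modified_yule_walker(r, N, M):
    """Estimate the AR part of an ARMA(N, M) model by solving the square
    N x N modified Yule-Walker system, where the matrix entry (i, j) is
    r_{M+i-j} and r_{-k} = r_k."""
    r = np.asarray(r, dtype=float)
    col = r[M : M + N]                    # first column: r_M, ..., r_{M+N-1}
    row = r[np.abs(M - np.arange(N))]     # first row: r_M, r_{M-1}, ..., r_{M-N+1}
    rhs = r[M + 1 : M + N + 1]            # r_{M+1}, ..., r_{M+N}
    return np.linalg.solve(toeplitz(col, row), -rhs)
```
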
Pole-zero Modeling (cont.)

Once the AR coefficients are determined, it remains to obtain the MA part of the considered ARMA model. Write the ARMA power spectrum as

$$P_y(z) = \left. \sigma^2\, \frac{B(z)B(1/z)}{A(z)A(1/z)} \right|_{z = e^{j\omega}} = \sigma^2 \left| \frac{B(e^{j\omega})}{A(e^{j\omega})} \right|^2.$$

Hence, filtering the ARMA process y(n) with the LTI filter having transfer function A(z) gives the MA part of the process, with spectrum

$$P(z) = \sigma^2 B(z)B(1/z).$$

Then, the MA parameters of the ARMA process y(n) can be estimated from this (filtered) MA process using all-zero modeling techniques (for example, Durbin's method).

Digression: Rational Spectra

$$P(e^{j\omega}) = \sigma^2 \left| \frac{B(e^{j\omega})}{A(e^{j\omega})} \right|^2.$$

Recall: we consider real-valued signals here; $a_1, \ldots, a_N, b_1, \ldots, b_M$ are real coefficients.

Any continuous power spectral density (PSD) can be approximated arbitrarily closely by a rational PSD. Consider passing zero-mean white noise x(n) of variance $\sigma^2$ through the filter H.

Digression: Rational Spectra (cont.)

A rational spectrum can be associated with a signal obtained by filtering white noise of power $\sigma^2$ through a rational filter with $H(e^{j\omega}) = B(e^{j\omega})/A(e^{j\omega})$.

ARMA model, ARMA(M, N):

$$P(e^{j\omega}) = \sigma^2 \left| \frac{B(e^{j\omega})}{A(e^{j\omega})} \right|^2.$$

AR model, AR(N):

$$P(e^{j\omega}) = \sigma^2 \left| \frac{1}{A(e^{j\omega})} \right|^2.$$

MA model, MA(M):

$$P(e^{j\omega}) = \sigma^2 \left| B(e^{j\omega}) \right|^2.$$

Remarks:
- AR models are better for peaky PSDs,
- MA models are better for PSDs with valleys,
- ARMA is used for PSDs with both peaks and valleys.

Spectral Factorization

$$H(e^{j\omega}) = \frac{B(e^{j\omega})}{A(e^{j\omega})}, \qquad P(e^{j\omega}) = \sigma^2 \left| \frac{B(e^{j\omega})}{A(e^{j\omega})} \right|^2 = \sigma^2\, \frac{B(e^{j\omega}) B(e^{-j\omega})}{A(e^{j\omega}) A(e^{-j\omega})},$$

where

$$A(e^{j\omega}) = 1 + a_1 e^{-j\omega} + \ldots + a_N e^{-jN\omega}$$

and $a_1, \ldots, a_N, b_1, \ldots, b_M$ are real coefficients.

Remark: if $a_1, \ldots, a_N, b_1, \ldots, b_M$ are complex,

$$P(z) = \sigma^2\, \frac{B(z) B^*(1/z^*)}{A(z) A^*(1/z^*)}.$$

Consider the real case:

$$P(z) = \sigma^2\, \frac{B(z)B(1/z)}{A(z)A(1/z)}.$$

Remarks:
- If $\alpha$ is a zero of P(z), so is $1/\alpha$.
- If $\alpha$ is a pole of P(z), so is $1/\alpha$.
- Since $a_1, \ldots, a_N, b_1, \ldots, b_M$ are real, the poles and zeros of P(z) occur in complex-conjugate pairs.

Further remarks:
- If the poles of $1/A(z)$ are inside the unit circle, $H(z) = B(z)/A(z)$ is BIBO stable.
- If the zeros of $B(z)$ are inside the unit circle, $H(z) = B(z)/A(z)$ is minimum phase.

We choose H(z) so that both its zeros and poles are inside the unit circle.

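This choice is easy to check numerically. A sketch (my own helper names; np.roots finds polynomial roots from coefficient vectors in the convention [1, c_1, ..., c_K]):

```python
import numpy as np

def stable_and_min_phase(b, a):
    """True if the roots of A(z) (poles of H) and of B(z) (zeros of H)
    lie inside the unit circle, i.e. H = B/A is BIBO stable and
    minimum phase."""
    return (np.all(np.abs(np.roots(a)) < 1.0) and
            np.all(np.abs(np.roots(b)) < 1.0))

def reflect_zeros(b):
    """Spectral-factorization step: replace any root alpha of B(z) with
    |alpha| > 1 by 1/alpha; |B(e^{jw})|^2 keeps its shape up to a gain."""
    z = np.roots(b)
    z = np.where(np.abs(z) > 1.0, 1.0 / z, z)
    return np.real(np.poly(z))   # rebuilt real coefficients (gain not preserved)
```
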
Low-rank Models

A low-rank model for the data vector x:

$$x = As + n,$$

where A is the model basis matrix, s is the vector of model basis parameters, and n is noise.

s is unknown; A is sometimes completely known, sometimes known up to an unknown parameter vector $\theta$, and sometimes completely unknown.

Low-rank Models (cont.)

Case 1: A completely known. Then, conventional linear LS:

$$\min_s \|x - \hat{x}\|^2 = \min_s \|x - As\|^2 \implies \hat{s} = (A^H A)^{-1} A^H x.$$

This approach can be generalized to the multiple-snapshot case:

$$X = AS + N, \qquad X = [x_1, x_2, \ldots, x_K], \quad S = [s_1, s_2, \ldots, s_K], \quad N = [n_1, n_2, \ldots, n_K],$$

$$\min_S \|X - \hat{X}\|^2 = \min_S \|X - AS\|^2 \implies \hat{S} = (A^H A)^{-1} A^H X.$$

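A small runnable sketch of Case 1 (all shapes and values are arbitrary assumptions; lstsq computes the LS solution without explicitly forming the inverse):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))                    # known model basis (assumed 8 x 3)
S_true = rng.standard_normal((3, 5))               # K = 5 snapshots
X = A @ S_true + 0.01 * rng.standard_normal((8, 5))

# S_hat = (A^H A)^{-1} A^H X, computed via a least-squares solve.
S_hat, *_ = np.linalg.lstsq(A, X, rcond=None)
print(np.allclose(S_hat, S_true, atol=0.05))       # close to the true parameters
```
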
Low-rank Models (cont.)

Case 2: A known up to an unknown parameter vector $\theta$. Nonlinear LS:

$$\min_{s,\theta} \|x - \hat{x}\|^2 = \min_{s,\theta} \|x - A(\theta)s\|^2.$$

For fixed $\theta$: $\hat{s} = [A^H(\theta)A(\theta)]^{-1} A^H(\theta)\, x$. Substituting this back into the LS criterion:

$$\min_\theta \left\| x - A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta)\, x \right\|^2 = \min_\theta \left\| \left( I - A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta) \right) x \right\|^2 = \min_\theta \left\| P_A^{\perp}(\theta)\, x \right\|^2 \iff \max_\theta\; x^H P_A(\theta)\, x.$$

Generalization to the multiple-snapshot case:

$$\min_{S,\theta} \|X - \hat{X}\|^2 = \min_{S,\theta} \|X - A(\theta)S\|^2 \implies \hat{S} = [A^H(\theta)A(\theta)]^{-1}A^H(\theta)\, X.$$

Substituting this back into the LS criterion:

$$\min_\theta \left\| X - A(\theta)[A^H(\theta)A(\theta)]^{-1}A^H(\theta)X \right\|^2 = \min_\theta \left\| P_A^{\perp}(\theta)\, X \right\|^2 = \min_\theta \operatorname{tr}\{P_A^{\perp}(\theta)\, XX^H P_A^{\perp}(\theta)\} = \min_\theta \operatorname{tr}\{P_A^{\perp}(\theta)^2\, XX^H\} = \min_\theta \operatorname{tr}\{P_A^{\perp}(\theta)\, XX^H\} \iff \max_\theta \operatorname{tr}\{P_A(\theta)\, XX^H\}.$$

Note that

$$XX^H = [x_1, x_2, \ldots, x_K] \begin{bmatrix} x_1^H \\ x_2^H \\ \vdots \\ x_K^H \end{bmatrix} = \sum_{k=1}^{K} x_k x_k^H = K \hat{R},$$

where

$$\hat{R} = \frac{1}{K} \sum_{k=1}^{K} x_k x_k^H \qquad \text{(sample covariance matrix)!}$$

Therefore, the nonlinear LS objective function can be rewritten as

$$\min_\theta \operatorname{tr}\{P_A^{\perp}(\theta)\, \hat{R}\} \iff \max_\theta \operatorname{tr}\{P_A(\theta)\, \hat{R}\}.$$

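A sketch of the concentrated criterion in code, for an assumed one-parameter model $A(\theta) = [1, e^{j\theta}, \ldots, e^{j(N-1)\theta}]^T$ (a single complex sinusoid in noise; the model and the grid search are illustrative assumptions, not from the slides):

```python
import numpy as np

def nls_theta(X, n_grid=512):
    """Maximize tr{P_A(theta) R_hat} over theta by grid search, where
    A(theta) is the single column a(theta) = exp(j*theta*[0..N-1])."""
    N, K = X.shape
    R_hat = (X @ X.conj().T) / K                 # sample covariance matrix
    thetas = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    crit = np.empty(n_grid)
    for i, th in enumerate(thetas):
        a = np.exp(1j * th * np.arange(N))       # A(theta): one basis vector
        # For a single column, tr{P_A R} = a^H R a / (a^H a).
        crit[i] = np.real(a.conj() @ R_hat @ a) / N
    return thetas[np.argmax(crit)]
```
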
Case 3: A completely unknown. In this case, a nice result exists, enabling low-rank modeling.

Theorem (Eckart and Young, 1936): Given an arbitrary $N \times K$ ($N > K$) matrix X with the SVD

$$X = U \Sigma V^H,$$

the best LS approximation of this matrix by a low-rank matrix $X_0$ ($L = \operatorname{rank}\{X_0\} \le K$) is given by

$$\hat{X}_0 = U \Sigma_0 V^H,$$

where the matrix $\Sigma_0$ is built from $\Sigma$ by replacing the lowest $K - L$ singular values by zeroes.

Low-rank Modeling of Data Matrix

1. Compute the SVD of the given data matrix X.
2. Specify the model order L.
3. From the computed SVD, obtain the low-rank representation $\hat{X}_0$ using the Eckart-Young decomposition.

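A minimal sketch of these three steps (function and variable names are my own):

```python
import numpy as np

def best_rank_L(X, L):
    """Eckart-Young: the best LS rank-L approximation of X keeps the L
    largest singular values and replaces the remaining ones by zeros."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s[L:] = 0.0
    return (U * s) @ Vh               # U diag(s) V^H

# Usage with an arbitrary data matrix:
X = np.arange(20.0).reshape(5, 4)
X0 = best_rank_L(X, 2)
print(np.linalg.matrix_rank(X0))      # -> 2
```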