03 Frequency Estimation
Spectrum Estimation
Suppose x(n) is an ergodic WSS random process with unknown power spectral density $P_x(e^{j\omega})$.

The Problem: Given N successive observations of x(n), $\{x(0), x(1), \dots, x(N-1)\}$, what can we say about $P_x(e^{j\omega})$?
Several families of estimation methods exist. They include:

(i) Classical (non-parametric) methods;

(ii) Parametric (model-based) methods;

(iii) Extensions of the above methods such as the Minimum Variance (MV) method (Hayes Section 8.3) and the Maximum Entropy Method (MEM) (Hayes Section 8.4).
In the parametric approach, the problem reduces to one of estimating the model parameters from the measurement data.
(a) The process x(n) is modelled as the output of a rational filter

$$H(e^{j\omega}) = \frac{\sum_{k=0}^{q} b_k e^{-jk\omega}}{1 + \sum_{k=1}^{p} a_k e^{-jk\omega}} \qquad (1)$$

driven by white noise w(n) of variance $\sigma_w^2$:

w(n) → [ H(e^{jω}) ] → x(n)

so that

$$P_x(e^{j\omega}) = \left| H(e^{j\omega}) \right|^2 \sigma_w^2 = \frac{\left| \sum_{k=0}^{q} b_k e^{-jk\omega} \right|^2}{\left| 1 + \sum_{k=1}^{p} a_k e^{-jk\omega} \right|^2}\, \sigma_w^2 \qquad (2)$$
The model parameters are $\{a_k, k = 1, \dots, p\}$ and $\{b_k, k = 0, \dots, q\}$. Three cases arise:

(i) AR model: $b_k = 0$ for $k = 1, \dots, q$, so only $b_0$ and $\{a_k, k = 1, \dots, p\}$ need be estimated;

(ii) MA model: $a_k = 0$ for $k = 1, \dots, p$;

(iii) ARMA model: $\{a_k, k = 1, \dots, p\}$ and $\{b_k, k = 0, \dots, q\}$ are in general non-zero. The estimation method is based on the modified Yule-Walker equation (see Hayes Section 8.5.3).
In addition, there is the problem of determining the IIR filter orders p and q (see Hayes p. 447).
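Equation (2) is straightforward to evaluate numerically. A minimal sketch, assuming illustrative coefficients and noise variance (not taken from the notes):

```python
import numpy as np

# Evaluate the rational power spectrum (2) on a dense frequency grid.
# Illustrative coefficients: b holds b_0..b_q, a holds [1, a_1, ..., a_p].
b = np.array([1.0, 0.5])
a = np.array([1.0, -0.9])
sigma_w2 = 1.0

w = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
kmax = max(len(a), len(b))
Z = np.exp(-1j * np.outer(w, np.arange(kmax)))   # columns e^{-jkw}

B = Z[:, : len(b)] @ b        # numerator:   sum_k b_k e^{-jkw}
A = Z[:, : len(a)] @ a        # denominator: 1 + sum_k a_k e^{-jkw}
Px = (np.abs(B) ** 2 / np.abs(A) ** 2) * sigma_w2
```

At $\omega = 0$ this gives $|b_0 + b_1|^2 / |1 + a_1|^2 \cdot \sigma_w^2$, which serves as a quick sanity check of the implementation.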
(b) Frequency Estimation
Suppose x(n) is a p-component harmonic process in additive white noise:

$$x(n) = \sum_{i=1}^{p} A_i e^{jn\omega_i} + w(n) \qquad (3)$$

where it is assumed that:

(i) the amplitudes are of the form

$$A_i = |A_i| e^{j\phi_i} \qquad (4)$$

(ii) $\phi_i$ are random, uncorrelated with each other and with w(n), and each uniformly distributed over $[-\pi, \pi]$;

(iii) $\omega_i$ are distinct.

Then

$$P_x(e^{j\omega}) = 2\pi \sum_{i=1}^{p} P_i\, \delta(\omega - \omega_i) + \sigma_w^2 \qquad (5)$$

where

$$P_i = |A_i|^2 \qquad (6)$$

The problem is to estimate $\{P_i, i = 1, \dots, p\}$, $\{\omega_i, i = 1, \dots, p\}$ and $\sigma_w^2$ from measurements of x(n).
Consider again the harmonic process

$$x(n) = \sum_{i=1}^{p} A_i e^{jn\omega_i} + w(n) \qquad (7)$$

where it is assumed that:

(i) the amplitudes are of the form

$$A_i = |A_i| e^{j\phi_i} \qquad (8)$$

(ii) $\omega_i$ are distinct;

(iii) $\phi_i$ are random, uncorrelated with each other and with w(n), and each uniformly distributed over $[-\pi, \pi]$;

(iv) w(n) is white with zero mean and variance $\sigma_w^2$.

Then

$$r_x(m) = \sum_{i=1}^{p} P_i e^{jm\omega_i} + \sigma_w^2\, \delta(m) \qquad (9)$$

where

$$P_i = |A_i|^2 \qquad (10)$$

and

$$E\{x(n)\} = 0 \qquad (11)$$

Thus, x(n) is WSS, and

$$P_x(e^{j\omega}) = 2\pi \sum_{i=1}^{p} P_i\, \delta(\omega - \omega_i) + \sigma_w^2 \qquad (12)$$
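Property (9) can be checked by Monte-Carlo averaging over the random phases. A minimal sketch, assuming illustrative frequencies, amplitudes and noise level:

```python
import numpy as np

# Monte-Carlo check of r_x(m) = sum_i P_i e^{jm w_i} + sigma_w^2 delta(m)  (9).
rng = np.random.default_rng(0)
w_i = np.array([0.6, 1.9])          # distinct frequencies (illustrative)
A_mag = np.array([1.0, 2.0])        # |A_i| (illustrative)
sigma_w2 = 0.5
m = np.arange(8)                    # lags 0..7
T = 20000                           # number of independent realisations

phi = rng.uniform(-np.pi, np.pi, size=(T, 2))    # random phases, iid
A = A_mag * np.exp(1j * phi)                     # A_i = |A_i| e^{j phi_i}
wn = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((T, 8))
                              + 1j * rng.standard_normal((T, 8)))
x = A @ np.exp(1j * np.outer(w_i, m)) + wn       # x(n), n = 0..7, per trial
r_hat = (x * np.conj(x[:, :1])).mean(axis=0)     # estimate of E[x(m) x*(0)]

r_theory = np.exp(1j * np.outer(m, w_i)) @ A_mag ** 2 + sigma_w2 * (m == 0)
```

With 20000 trials the sample average agrees with (9) to within a few hundredths.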
To summarise, for the harmonic process

$$x(n) = \sum_{i=1}^{p} A_i e^{jn\omega_i} + w(n)$$

the power spectral density

$$P_x(e^{j\omega}) = 2\pi \sum_{i=1}^{p} P_i\, \delta(\omega - \omega_i) + \sigma_w^2, \qquad P_i = |A_i|^2$$

consists of p spectral lines on a flat noise floor.

[Figure: $P_x(e^{j\omega})$ for p = 2 over $0 \le \omega < 2\pi$: impulses of area $2\pi P_1$ at $\omega_1$ and $2\pi P_2$ at $\omega_2$ on a flat floor of height $\sigma_w^2$.]
Form the $M \times M$ autocorrelation matrix

$$[R_x]_{mn} = r_x(m - n), \qquad m, n = 1, \dots, M \qquad (13)$$

where $M > p$. From (9),

$$R_x = R_s + R_w \qquad (14)$$

where

$$R_w = \sigma_w^2 I \qquad (15)$$

and

$$R_s = \sum_{i=1}^{p} P_i\, e_i e_i^H \qquad (16)$$

with

$$e_i = \begin{bmatrix} 1 & e^{j\omega_i} & e^{j2\omega_i} & \cdots & e^{j(M-1)\omega_i} \end{bmatrix}^T \qquad (17)$$

In matrix form,

$$R_s = E P E^H \qquad (18)$$

where

$$E = \begin{bmatrix} e_1 & e_2 & \cdots & e_p \end{bmatrix} \qquad (19)$$

and

$$P = \operatorname{diag}\{P_1, \dots, P_p\} \qquad (20)$$
Let $\lambda_i^s$ and $v_i^s$, $i = 1, \dots, M$, denote the eigenvalues and eigenvectors of $R_s$:

$$R_s v_i^s = \lambda_i^s v_i^s, \qquad i = 1, \dots, M \qquad (21)$$

Then:

(i) ordering the eigenvalues,

$$\lambda_1^s \ge \lambda_2^s \ge \cdots \ge \lambda_M^s \qquad (22)$$

(ii) since $\operatorname{rank}(R_s) = p$,

$$\lambda_1^s \ge \cdots \ge \lambda_p^s > 0 \qquad (23)$$

and

$$\lambda_{p+1}^s = \cdots = \lambda_M^s = 0 \qquad (24)$$

With reference to (21), it should be noted that, in general, $\lambda_i^s \ne P_i$ and $v_i^s \ne e_i$. Moreover, as can be readily verified, $e_i$, $i = 1, \dots, p$, are not orthogonal. See (40).
Next consider $R_x$. From (14), (15) and (18),

$$R_x = E P E^H + \sigma_w^2 I \qquad (25)$$

and its eigenvalues and eigenvectors satisfy:

(i)

$$\lambda_i = \lambda_i^s + \sigma_w^2, \qquad i = 1, \dots, M \qquad (26)$$

(ii)

$$v_i = v_i^s, \qquad i = 1, \dots, M \qquad (27)$$

(iii) $\lambda_i = \lambda_i^s + \sigma_w^2$, $i = 1, \dots, p$, are called signal eigenvalues, and their corresponding eigenvectors, $v_1, \dots, v_p$, signal eigenvectors; by (24), the remaining eigenvalues $\lambda_i = \sigma_w^2$, $i = p+1, \dots, M$, are called noise eigenvalues, and $v_{p+1}, \dots, v_M$ noise eigenvectors.
Signal subspace property:

$$\operatorname{span}\{e_1, \dots, e_p\} = \operatorname{span}\{v_1, \dots, v_p\} \qquad (28)$$

Proof: From

$$R_x v_i = \sigma_w^2 v_i, \qquad i = p+1, \dots, M \qquad (29)$$

we have

$$(R_x - \sigma_w^2 I)\, v_i = 0, \qquad i = p+1, \dots, M \qquad (30)$$

or, from (25),

$$E P E^H v_i = 0, \qquad i = p+1, \dots, M \qquad (31)$$

But E and P have full rank. Therefore, it follows from (31) that

$$E^H v_i = 0, \qquad i = p+1, \dots, M \qquad (32)$$

or, more explicitly,

$$e_j^H v_i = 0, \qquad j = 1, \dots, p, \quad i = p+1, \dots, M \qquad (33)$$

Therefore, $e_1, \dots, e_p$ are orthogonal to the noise subspace spanned by $v_{p+1}, \dots, v_M$. Since both $\operatorname{span}\{e_1, \dots, e_p\}$ and $\operatorname{span}\{v_1, \dots, v_p\}$ are p-dimensional subspaces orthogonal to that noise subspace, they coincide. $\square$
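The eigenstructure (24)-(27) and the orthogonality (32) are easy to verify numerically for an exact $R_x = E P E^H + \sigma_w^2 I$. A sketch with illustrative frequencies and powers:

```python
import numpy as np

# Build R_x = E P E^H + sigma_w^2 I exactly and verify:
#   - the M-p smallest eigenvalues equal sigma_w^2        (24),(26)
#   - the noise eigenvectors satisfy E^H v_i = 0          (32)
M, p = 8, 2
w_i = np.array([0.7, 2.1])               # illustrative distinct frequencies
P_i = np.array([1.0, 3.0])               # illustrative powers
sigma_w2 = 0.5

k = np.arange(M)
E = np.exp(1j * np.outer(k, w_i))        # columns e_i, as in (17)
Rx = (E * P_i) @ E.conj().T + sigma_w2 * np.eye(M)    # (25)

lam, V = np.linalg.eigh(Rx)              # eigenvalues in ascending order
noise_lam = lam[: M - p]                 # M-p smallest eigenvalues
Vn = V[:, : M - p]                       # noise eigenvectors
orth = E.conj().T @ Vn                   # should vanish, by (32)
```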
More generally, define the vector

$$e(\omega) = \begin{bmatrix} 1 & e^{j\omega} & e^{j2\omega} & \cdots & e^{j(M-1)\omega} \end{bmatrix}^T \qquad (34)$$

so that $e(\omega_i) = e_i$. Then (33) states that

$$e(\omega)^H v_i = 0 \qquad (35)$$

(35) holds for $\omega = \omega_1, \dots, \omega_p$ and $v_i = v_{p+1}, \dots, v_M$. But there could exist an $\omega_o \notin \{\omega_1, \dots, \omega_p\}$ such that, for a certain $v_i$, $e(\omega_o)^H v_i = 0$. Hence one should search for $\omega$'s that satisfy (35) for all noise eigenvectors.
Now, $e(\omega)^H v_i$ is in general complex. It is more convenient to search for $\omega$'s that minimise the following real function (whose minima equal 0):

$$f(\omega) = \sum_{i=p+1}^{M} \left| e(\omega)^H v_i \right|^2 = \sum_{i=p+1}^{M} e(\omega)^H v_i v_i^H e(\omega) = e(\omega)^H \left( \sum_{i=p+1}^{M} v_i v_i^H \right) e(\omega) \qquad (36)$$
Remark: There does not exist an $\omega$, other than $\omega_1, \dots, \omega_p$, such that $f(\omega) = 0$.

Proof: Suppose there exists an $\omega_o$ such that $f(\omega_o) = 0$, $\omega_o \notin \{\omega_1, \dots, \omega_p\}$. Then $e(\omega_o)$ is orthogonal to every noise eigenvector, and so lies in $\operatorname{span}\{v_1, \dots, v_p\} = \operatorname{span}\{e(\omega_1), \dots, e(\omega_p)\}$. But $e(\omega_o), e(\omega_1), \dots, e(\omega_p)$ are $p+1$ Vandermonde vectors with distinct generators and length $M \ge p+1$, and are therefore linearly independent; hence $e(\omega_o)$ cannot lie in $\operatorname{span}\{e(\omega_1), \dots, e(\omega_p)\}$, a contradiction. $\square$
To average out (i.e. reduce) errors, and suppress false peaks (see (36)), Schmidt (1979) proposed the following MUSIC pseudo-spectrum (or eigenspectrum), whose peaks are located at $\omega_i$, $i = 1, \dots, p$:

$$P_{MU}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \left| e(\omega)^H v_i \right|^2} \qquad (37)$$

Equivalently, defining the matrix of noise eigenvectors

$$V_n = \begin{bmatrix} v_{p+1} & \cdots & v_M \end{bmatrix}, \qquad M > p + 1 \qquad (38)$$

we may write

$$P_{MU}(e^{j\omega}) = \frac{1}{e(\omega)^H V_n V_n^H e(\omega)} \qquad (39)$$
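A sketch of (38)-(39) applied to an exact autocorrelation matrix; the peak locations recovered from the pseudo-spectrum should match the (illustrative) true frequencies:

```python
import numpy as np

# MUSIC pseudo-spectrum (39) for a two-component process, using exact R_x.
M, p = 10, 2
w_true = np.array([0.8, 2.3])            # illustrative frequencies
k = np.arange(M)
E = np.exp(1j * np.outer(k, w_true))
Rx = (E * np.array([1.0, 1.0])) @ E.conj().T + 1.0 * np.eye(M)

lam, V = np.linalg.eigh(Rx)
Vn = V[:, : M - p]                       # noise eigenvectors  (38)

w = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
e = np.exp(1j * np.outer(k, w))          # columns e(w), as in (34)
Pmu = 1.0 / np.sum(np.abs(Vn.conj().T @ e) ** 2, axis=0)   # (39)

# frequency estimates: the p largest local maxima of P_MU
is_peak = (Pmu > np.roll(Pmu, 1)) & (Pmu > np.roll(Pmu, -1))
w_hat = np.sort(w[is_peak][np.argsort(Pmu[is_peak])[-p:]])
```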
Once the frequencies have been estimated, the powers $P_i$, $i = 1, \dots, p$, can be found by solving the linear system

$$\begin{bmatrix}
|V_1(e^{j\omega_1})|^2 & |V_1(e^{j\omega_2})|^2 & \cdots & |V_1(e^{j\omega_p})|^2 \\
|V_2(e^{j\omega_1})|^2 & |V_2(e^{j\omega_2})|^2 & \cdots & |V_2(e^{j\omega_p})|^2 \\
\vdots & \vdots & & \vdots \\
|V_p(e^{j\omega_1})|^2 & |V_p(e^{j\omega_2})|^2 & \cdots & |V_p(e^{j\omega_p})|^2
\end{bmatrix}
\begin{bmatrix} P_1 \\ P_2 \\ \vdots \\ P_p \end{bmatrix}
=
\begin{bmatrix} \lambda_1 - \sigma_w^2 \\ \lambda_2 - \sigma_w^2 \\ \vdots \\ \lambda_p - \sigma_w^2 \end{bmatrix} \qquad (40)$$

where

$$V_i(e^{j\omega}) = \sum_{k=0}^{M-1} [v_i]_{k+1}\, e^{-jk\omega} \qquad (41)$$
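The system (40)-(41) can be verified numerically: for an exact $R_x$, solving it recovers the (illustrative) powers exactly:

```python
import numpy as np

# Solve (40) for the powers P_i, using the exact R_x of an illustrative
# two-component process.
M, p = 8, 2
w_true = np.array([0.7, 2.1])
P_true = np.array([1.0, 3.0])
sigma_w2 = 0.5

k = np.arange(M)
E = np.exp(1j * np.outer(k, w_true))
Rx = (E * P_true) @ E.conj().T + sigma_w2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)              # ascending order
lam_sig = lam[::-1][:p]                  # p largest (signal) eigenvalues
v_sig = V[:, ::-1][:, :p]                # corresponding signal eigenvectors

# |V_i(e^{j w_j})|^2 with V_i(e^{j w}) = e(w)^H v_i  (41)
G = np.abs(E.conj().T @ v_sig) ** 2      # G[j, i] = |V_i(e^{j w_j})|^2
P_hat = np.linalg.solve(G.T, lam_sig - sigma_w2)   # rows of (40) indexed by i
```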
Proof: Observe that the signal eigenvalues and eigenvectors satisfy

$$R_x v_i = \lambda_i v_i, \qquad i = 1, \dots, p \qquad (42)$$

Suppose the eigenvectors are normalised so that $v_i^H v_i = 1$. Pre-multiplying both sides of (42) by $v_i^H$ gives

$$v_i^H R_x v_i = \lambda_i v_i^H v_i = \lambda_i, \qquad i = 1, \dots, p \qquad (43)$$

But

$$v_i^H R_x v_i = v_i^H (R_s + \sigma_w^2 I)\, v_i = v_i^H R_s v_i + \sigma_w^2 \qquad (44)$$

Therefore

$$v_i^H R_s v_i = \lambda_i - \sigma_w^2, \qquad i = 1, \dots, p \qquad (45)$$

or, from (18),

$$v_i^H E P E^H v_i = \lambda_i - \sigma_w^2, \qquad i = 1, \dots, p \qquad (46)$$

Now

$$E^H v_i = \begin{bmatrix}
1 & e^{-j\omega_1} & e^{-j2\omega_1} & \cdots & e^{-j(M-1)\omega_1} \\
1 & e^{-j\omega_2} & e^{-j2\omega_2} & \cdots & e^{-j(M-1)\omega_2} \\
\vdots & & & & \vdots \\
1 & e^{-j\omega_p} & e^{-j2\omega_p} & \cdots & e^{-j(M-1)\omega_p}
\end{bmatrix}
\begin{bmatrix} [v_i]_1 \\ [v_i]_2 \\ \vdots \\ [v_i]_M \end{bmatrix}
=
\begin{bmatrix} V_i(e^{j\omega_1}) \\ V_i(e^{j\omega_2}) \\ \vdots \\ V_i(e^{j\omega_p}) \end{bmatrix} \qquad (47)$$

Substituting (47) into (46) gives

$$\sum_{\ell=1}^{p} P_\ell \left| V_i(e^{j\omega_\ell}) \right|^2 = \lambda_i - \sigma_w^2, \qquad i = 1, \dots, p \qquad (48)$$

which is precisely the linear system (40). $\square$
Summary of the MUSIC algorithm:

1. From the data, form the estimated $M \times M$ autocorrelation matrix

$$\hat{R}_x = \frac{1}{K} \sum_{k=0}^{K-1} x(k)\, x(k)^H \qquad (49)$$

where $x(k) = [x(k), x(k+1), \dots, x(k+M-1)]^T$.

2. Compute the eigenvalues and eigenvectors of $\hat{R}_x$, ordered so that

$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M \qquad (50)$$

3. Estimate the number of components p (for example, from the spread of the eigenvalues; see the MDL criterion in (55)).

4. Estimate the noise variance by averaging the $M - p$ smallest eigenvalues $\lambda_{p+1}, \dots, \lambda_M$:

$$\hat{\sigma}_w^2 = \frac{1}{M - p} \sum_{i=p+1}^{M} \lambda_i \qquad (51)$$

5. Form the noise-eigenvector matrix

$$V_n = \begin{bmatrix} v_{p+1} & \cdots & v_M \end{bmatrix} \qquad (52)$$

and compute the pseudo-spectrum

$$P_{MU}(e^{j\omega}) = \frac{1}{e(\omega)^H V_n V_n^H e(\omega)} \qquad (53)$$

where $e(\omega) = [1, e^{j\omega}, \dots, e^{j(M-1)\omega}]^T$. The frequency estimates $\hat{\omega}_i$, $i = 1, \dots, p$, are the locations of the p largest peaks of $P_{MU}(e^{j\omega})$.

6. Compute

$$V_i(e^{j\omega}) = \sum_{k=0}^{M-1} [v_i]_{k+1}\, e^{-jk\omega}, \qquad i = 1, \dots, p \qquad (54)$$

and solve the linear system (40), with $\omega_i$ and $\sigma_w^2$ replaced by their estimates, for the powers $P_i$, $i = 1, \dots, p$.
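The steps above can be sketched end-to-end on synthetic data (all signal parameters below are illustrative):

```python
import numpy as np

# Steps 1-5 of the MUSIC summary on synthetic data.
rng = np.random.default_rng(1)
M, p, N = 12, 2, 2000
w_true = np.array([0.9, 1.7])
sigma_w2 = 0.5

n = np.arange(N)
phi = rng.uniform(-np.pi, np.pi, p)
x = np.exp(1j * (np.outer(n, w_true) + phi)).sum(axis=1)     # |A_i| = 1
x += np.sqrt(sigma_w2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Step 1: snapshot estimate (49), with x(k) = [x(k), ..., x(k+M-1)]^T
K = N - M + 1
X = np.lib.stride_tricks.sliding_window_view(x, M)           # row k = x(k)^T
Rx_hat = (X.T @ X.conj()) / K

# Steps 2-4: eigendecomposition, noise variance (51), noise subspace (52)
lam, V = np.linalg.eigh(Rx_hat)                              # ascending order
sigma_hat = lam[: M - p].mean()
Vn = V[:, : M - p]

# Step 5: pseudo-spectrum (53) and its p largest peaks
w = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
e = np.exp(1j * np.outer(np.arange(M), w))
Pmu = 1.0 / np.sum(np.abs(Vn.conj().T @ e) ** 2, axis=0)
is_peak = (Pmu > np.roll(Pmu, 1)) & (Pmu > np.roll(Pmu, -1))
w_hat = np.sort(w[is_peak][np.argsort(Pmu[is_peak])[-p:]])
```

With N = 2000 samples and this signal-to-noise ratio, the recovered peak locations land very close to the true frequencies, and the averaged noise eigenvalues approximate $\sigma_w^2$.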
Remarks:

In practice the number of components p is unknown and must itself be estimated from the eigenvalues. One such order-selection rule is the minimum description length (MDL) criterion:

$$\text{MDL}(p) = -N(M - p) \ln \varphi(p) + \tfrac{1}{2}\, p\,(2M - p) \ln N \qquad (55)$$

where

$$\varphi(p) = \frac{\left( \prod_{i=p+1}^{M} \lambda_i \right)^{1/(M-p)}}{\dfrac{1}{M-p} \sum_{i=p+1}^{M} \lambda_i} \qquad (56)$$

is the ratio of the geometric mean to the arithmetic mean of the $M - p$ smallest eigenvalues, and the order estimate is

$$\hat{p} = \arg\min_{p}\, \text{MDL}(p) \qquad (57)$$

Note that $\varphi(p) \le 1$, with equality if and only if $\lambda_{p+1} = \cdots = \lambda_M$, i.e. when the candidate noise eigenvalues are all equal.
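A sketch of the MDL criterion (55)-(57) applied to eigenvalues estimated from synthetic data (two components; all parameters illustrative). Here N in (55) is taken as the number of snapshots K:

```python
import numpy as np

# MDL order selection (55)-(57) on sample eigenvalues.
rng = np.random.default_rng(2)
M, p_true, N = 10, 2, 1000
w_true = np.array([0.9, 1.7])
n = np.arange(N)
phi = rng.uniform(-np.pi, np.pi, p_true)
x = np.exp(1j * (np.outer(n, w_true) + phi)).sum(axis=1)
x += np.sqrt(0.25) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

K = N - M + 1
X = np.lib.stride_tricks.sliding_window_view(x, M)
lam = np.linalg.eigvalsh((X.T @ X.conj()) / K)[::-1]     # descending order

def mdl(p):
    noise = lam[p:]                                      # M-p smallest eigenvalues
    r = len(noise)
    phi_p = np.exp(np.mean(np.log(noise))) / np.mean(noise)   # (56)
    return -K * r * np.log(phi_p) + 0.5 * p * (2 * M - p) * np.log(K)

p_hat = min(range(M), key=mdl)                           # (57)
```

The criterion trades the flatness of the candidate noise eigenvalues against a penalty that grows with p; here it recovers the true order.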
Note that (37) may be rewritten in terms of polynomials in z. From (37),

$$P_{MU}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \left| e(\omega)^H v_i \right|^2} \qquad (58)$$

Define

$$V_i(e^{j\omega}) = \sum_{k=0}^{M-1} [v_i]_{k+1}\, e^{-jk\omega} \qquad (59)$$

so that

$$V_i(e^{j\omega}) = e(\omega)^H v_i \qquad (60)$$

Therefore

$$P_{MU}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \left| V_i(e^{j\omega}) \right|^2} = \left. \frac{1}{\sum_{i=p+1}^{M} V_i(z)\, V_i^*(1/z^*)} \right|_{z = e^{j\omega}} \qquad (61)$$

This motivates forming the polynomial

$$D(z) = \sum_{i=p+1}^{M} V_i(z)\, V_i^*(1/z^*) \qquad (62)$$

D(z) has the property that its zeros appear in conjugate reciprocal pairs, i.e. if $z_o$ is a zero, then $1/z_o^*$ is also a zero.
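A root-finding sketch based on (62): the coefficients of $z^{M-1} D(z)$ are the summed autocorrelations of the noise-eigenvector coefficient sequences, and the frequency estimates are the angles of the zeros nearest the unit circle (a "root MUSIC" style variant; all signal parameters illustrative):

```python
import numpy as np

# Zeros of D(z) in (62) for an exact two-component R_x; the p zeros nearest
# the unit circle (counting conjugate reciprocal pairs once) lie at w_1, w_2.
M, p = 8, 2
w_true = np.array([0.7, 2.1])
k = np.arange(M)
E = np.exp(1j * np.outer(k, w_true))
Rx = (E * np.array([1.0, 3.0])) @ E.conj().T + 0.5 * np.eye(M)

lam, V = np.linalg.eigh(Rx)
Vn = V[:, : M - p]                          # noise eigenvectors

# Coefficients of z^{M-1} D(z): each noise vector a contributes the
# autocorrelation of its coefficient sequence (V_i(z) V_i^*(1/z^*)).
c = np.zeros(2 * M - 1, dtype=complex)
for i in range(M - p):
    a = Vn[:, i]
    c += np.convolve(a, a[::-1].conj())
roots = np.roots(c)

# Sort by distance of |z| from 1; accept angles one per cluster.
cand = roots[np.argsort(np.abs(np.abs(roots) - 1.0))]
w_sel = []
for z in cand:
    ang = np.angle(z) % (2.0 * np.pi)
    if all(min(abs(ang - a), 2.0 * np.pi - abs(ang - a)) > 0.05 for a in w_sel):
        w_sel.append(ang)
    if len(w_sel) == p:
        break
w_hat = np.sort(np.array(w_sel))
```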
The eigenvector (EV) method weights each term of (37) by the inverse of the corresponding eigenvalue:

$$P_{EV}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \dfrac{1}{\lambda_i} \left| e(\omega)^H v_i \right|^2} \qquad (63)$$

The minimum norm (MN) method uses a single vector a from the noise subspace:

$$P_{MN}(e^{j\omega}) = \frac{1}{\left| e(\omega)^H a \right|^2} \qquad (64)$$

where

$$a = \frac{(V_n V_n^H)\, u_1}{u_1^H (V_n V_n^H)\, u_1} \qquad (65)$$

and

$$u_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T \qquad (66)$$
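A sketch of (63)-(66) on an exact $R_x$; both pseudo-spectra peak at the (illustrative) true frequencies:

```python
import numpy as np

# EV (63) and minimum-norm (64)-(66) pseudo-spectra for a two-component
# process, computed from the exact R_x.
M, p = 10, 2
w_true = np.array([0.8, 2.3])
k = np.arange(M)
E = np.exp(1j * np.outer(k, w_true))
Rx = (E * np.array([1.0, 1.0])) @ E.conj().T + 1.0 * np.eye(M)

lam, V = np.linalg.eigh(Rx)                      # ascending order
lam_n, Vn = lam[: M - p], V[:, : M - p]          # noise eigenpairs

w = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
e = np.exp(1j * np.outer(k, w))                  # columns e(w)
proj = Vn.conj().T @ e                           # V_n^H e(w)

Pev = 1.0 / np.sum(np.abs(proj) ** 2 / lam_n[:, None], axis=0)   # (63)

u1 = np.zeros(M); u1[0] = 1.0                    # (66)
g = Vn @ (Vn.conj().T @ u1)                      # V_n V_n^H u_1
a = g / (u1 @ g)                                 # (65)
Pmn = 1.0 / np.abs(e.conj().T @ a) ** 2          # (64)

def peaks(P):
    m = (P > np.roll(P, 1)) & (P > np.roll(P, -1))
    return np.sort(w[m][np.argsort(P[m])[-p:]])
```

Since a lies in the noise subspace, $e(\omega_i)^H a = 0$ exactly, so $P_{MN}$ has sharp peaks at the true frequencies.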
Example: a 4-component harmonic process

$$x(n) = \sum_{i=1}^{4} A_i e^{jn\omega_i} + w(n), \qquad |A_i| = 1, \quad i = 1, \dots, 4$$

[Figure: accompanying plots comparing the estimators on this example; not reproduced here.]
With signal subspace methods, we aim to approximate the signal autocorrelation matrix $R_s$ and use it in place of the data autocorrelation matrix $R_x$ in spectral estimators such as the non-parametric MV and MEM methods. Writing

$$R_x = \sum_{i=1}^{M} \lambda_i v_i v_i^H = \sum_{i=1}^{p} \lambda_i v_i v_i^H + \sum_{i=p+1}^{M} \lambda_i v_i v_i^H \qquad (67)$$

the rank-p (principal components) approximation is

$$\hat{R}_s = \sum_{i=1}^{p} \lambda_i v_i v_i^H \qquad (68)$$
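(67)-(68) in code, with an error check against the true $R_s$ (values illustrative). Note that $\hat{R}_s$ differs slightly from $R_s$, because the signal eigenvalues $\lambda_i$ retain the $\sigma_w^2$ contribution:

```python
import numpy as np

# Rank-p principal-components approximation (68) of R_x.
M, p = 8, 2
k = np.arange(M)
E = np.exp(1j * np.outer(k, np.array([0.7, 2.1])))
Rs = (E * np.array([1.0, 3.0])) @ E.conj().T      # true R_s
sigma_w2 = 0.5
Rx = Rs + sigma_w2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)
lam_s = lam[::-1][:p]                             # p largest eigenvalues
Vs = V[:, ::-1][:, :p]                            # signal eigenvectors
Rs_hat = (Vs * lam_s) @ Vs.conj().T               # (68)

rel_err = np.linalg.norm(Rs_hat - Rs) / np.linalg.norm(Rs)
```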
Consider again the harmonic process

$$x(n) = \sum_{i=1}^{p} A_i e^{jn\omega_i} + w(n) \qquad (69)$$

where

$$A_i = |A_i| e^{j\phi_i} \qquad (70)$$

and it is assumed that: (i) $|A_i|$ and $\omega_i$ are unknown but not random; (ii) $\omega_i$ are distinct; (iii) $\phi_i \sim U[-\pi, \pi]$ are uncorrelated with each other and with w(n); and (iv) w(n) has mean zero and variance $\sigma_w^2$.

As can be shown [see (10)],

$$r_x(m) = \sum_{i=1}^{p} P_i e^{jm\omega_i} + \sigma_w^2\, \delta(m) \qquad (71)$$

where

$$P_i = |A_i|^2 \qquad (72)$$

Now, suppose that the frequencies occur in conjugate pairs with equal powers:

$$p = 2q, \qquad \omega_{p-i+1} = 2\pi - \omega_i, \qquad P_{p-i+1} = P_i, \qquad i = 1, \dots, q \qquad (73)$$

Then

$$r_x(m) = \sum_{i=1}^{q} P_i \left\{ e^{jm\omega_i} + e^{-jm\omega_i} \right\} + \sigma_w^2\, \delta(m) = \sum_{i=1}^{q} 2 P_i \cos(m\omega_i) + \sigma_w^2\, \delta(m) \qquad (74)$$

Recall $[R_x]_{mn} = r_x(m - n)$, $m, n = 1, \dots, M$.
Now consider the real harmonic process

$$y(n) = \sum_{i=1}^{q} 2 |A_i| \cos(n\omega_i + \phi_i) + w(n) \qquad (75)$$

whose autocorrelation is

$$r_y(m) = \sum_{i=1}^{q} 2 P_i \cos(m\omega_i) + \sigma_w^2\, \delta(m) \qquad (76)$$

Now, comparing (76) with (74), it can be seen that, provided x(n) meets the conditions (73),

$$R_x = R_y \qquad (77)$$

Remark: Under (73), the phases of $e^{jn\omega_i}$ and $e^{jn(2\pi - \omega_i)}$ are allowed to be random and not correlated with each other. In contrast, writing y(n) as

$$y(n) = \sum_{i=1}^{q} |A_i| e^{j\phi_i} e^{jn\omega_i} + \sum_{i=1}^{q} |A_i| e^{-j\phi_i} e^{jn(2\pi - \omega_i)} + w(n)$$

shows that in the real-sinusoid model the phases of the components at $\omega_i$ and $2\pi - \omega_i$ are $\phi_i$ and $-\phi_i$, and are therefore fully correlated.
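The identity (74) = (76), and hence (77), can be checked directly (frequencies and powers illustrative):

```python
import numpy as np

# Check r_x(m) = r_y(m) under the pairing conditions (73).
q, sigma_w2 = 2, 0.5
w1 = np.array([0.7, 2.1])                            # w_i, i = 1..q
P1 = np.array([1.0, 2.0])                            # P_i, i = 1..q
w_all = np.concatenate([w1, 2.0 * np.pi - w1[::-1]]) # w_{p-i+1} = 2 pi - w_i
P_all = np.concatenate([P1, P1[::-1]])               # P_{p-i+1} = P_i

m = np.arange(-7, 8)
rx = np.exp(1j * np.outer(m, w_all)) @ P_all + sigma_w2 * (m == 0)   # (71)
ry = 2.0 * (np.cos(np.outer(m, w1)) @ P1) + sigma_w2 * (m == 0)      # (76)
```

The complex-exponential autocorrelation (71) collapses to the real cosine form (76), so the $M \times M$ matrices built from either sequence coincide.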