
Frequency Estimation

1 Spectrum Estimation
Suppose x(n) is an ergodic WSS random process with unknown power
spectral density P_x(e^{jω})

The Problem:
Given N successive observations of x(n), {x(0), x(1), …, x(N-1)}, what can
we say about P_x(e^{jω})?

There are 2 main classes of methods to estimate P_x(e^{jω}): non-parametric
and parametric

1.1 Non-Parametric Methods


These methods make no assumptions about how x(n) was generated

They include:

(i)   Periodogram methods such as the Welch-Bartlett methods (see DSP304
      and Hayes Sections 8.2.1-8.2.5)

(ii)  Correlogram methods such as the Blackman-Tukey method (see DSP304
      and Hayes Section 8.2.6)

(iii) Extensions of the above methods such as the Minimum Variance (MV)
      method (Hayes Section 8.3) and the Maximum Entropy Method (MEM)
      (Hayes Section 8.4)

Welch-Bartlett and Blackman-Tukey are based on the discrete Fourier
transform (DFT), while MV and MEM modify and/or extend the DFT

Y H Leung (2006, 2007, 2009, 2014, 2016)


Curtin University, Australia

1.2 Parametric Methods


Parametric methods assume x(n) was generated in accordance with some
parameterised model
Also known as model-based methods

Two main signal models are autoregressive-moving average (ARMA) and
harmonic

Problem reduces to one of estimating the model parameters from the
measurement data


(a) ARMA Spectrum Estimation

Suppose x(n) is generated by passing zero-mean white noise w(n) with
variance σ_w² through the IIR filter

    H(e^{jω}) = ( Σ_{k=0}^{q} b_k e^{-jkω} ) / ( 1 + Σ_{k=1}^{p} a_k e^{-jkω} )    (1)

    w(n) → H(e^{jω}) → x(n)

x(n) is called an ARMA(p, q) process. Its power spectral density is given by

    P_x(e^{jω}) = |H(e^{jω})|² σ_w² = ( |Σ_{k=0}^{q} b_k e^{-jkω}|² / |1 + Σ_{k=1}^{p} a_k e^{-jkω}|² ) σ_w²    (2)

Problem is to estimate {a_k, k = 1, …, p}, {b_k, k = 0, …, q}, and σ_w² from the
observations x(0), x(1), …, x(N-1)

There are 3 sub-problems:

(i)   AR(p) spectrum estimation, where it is assumed b_k = 0, k = 1, …, q, and
      b_0 ≠ 0. Estimation method is based on the Yule-Walker equations (see
      Hayes Section 8.5.1)

(ii)  MA(q) spectrum estimation, where it is assumed a_k = 0, k = 1, …, p.
      Estimation method is Durbin's method (see Hayes Section 8.5.2)

(iii) ARMA(p, q) spectrum estimation, where {a_k, k = 1, …, p} and {b_k, k =
      0, …, q} are in general non-zero. Estimation method is based on the
      modified Yule-Walker equations (see Hayes Section 8.5.3)


In addition, there is the problem of determining the IIR filter orders p and q
(see Hayes p. 447)
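Equation (2) can be evaluated directly on a frequency grid. The sketch below is my own illustration, not from the notes; the coefficients a, b and the noise variance are arbitrary example values.

```python
import numpy as np

# Sketch: evaluate the ARMA PSD of equation (2) on a frequency grid.
a = np.array([-0.5])        # a_1, ..., a_p  (assumed example values)
b = np.array([1.0, 0.8])    # b_0, ..., b_q  (assumed example values)
sigma_w2 = 1.0              # white-noise variance sigma_w^2

w = np.linspace(0, 2 * np.pi, 512, endpoint=False)

# Numerator B(e^{jw}) = sum_{k=0}^{q} b_k e^{-jkw}
B = sum(bk * np.exp(-1j * k * w) for k, bk in enumerate(b))
# Denominator A(e^{jw}) = 1 + sum_{k=1}^{p} a_k e^{-jkw}
A = 1 + sum(ak * np.exp(-1j * (k + 1) * w) for k, ak in enumerate(a))

Px = (np.abs(B) ** 2 / np.abs(A) ** 2) * sigma_w2   # equation (2)
```

The same grid evaluation is what one would plot to compare an estimated model against a periodogram.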

(b) Frequency Estimation

Suppose x(n) is a p-component harmonic process in additive white noise

    x(n) = Σ_{i=1}^{p} A_i e^{jnω_i} + w(n)    (3)

where amplitudes A_i are complex

    A_i = |A_i| e^{jφ_i}    (4)

and it is assumed that:

(i)   |A_i| and ω_i are unknown but not random

(ii)  φ_i are random, uncorrelated with each other and with w(n), and each
      uniformly distributed over [-π, π]

(iii) ω_i are distinct

(iv)  variance of w(n) is σ_w²

It can be shown x(n) is WSS with power spectral density

    P_x(e^{jω}) = 2π Σ_{i=1}^{p} P_i δ(ω - ω_i) + σ_w²    (5)

where

    P_i = |A_i|²    (6)

Problem is to estimate {P_i, i = 1, …, p}, {ω_i, i = 1, …, p}, and σ_w² from the
observations x(0), x(1), …, x(N-1)

In the sequel, we shall focus on the above estimation problem
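A realisation of the process (3) is easy to synthesise, which is useful for testing the estimators that follow. The sketch below is illustrative only; the amplitudes, frequencies, sample count and noise level are arbitrary example values.

```python
import numpy as np

# Sketch: generate N samples of the p-component harmonic process of (3),
# with random phases phi_i ~ U[-pi, pi] as in assumption (ii).
rng = np.random.default_rng(0)
N, sigma_w2 = 64, 0.5
amps = np.array([1.0, 1.0])                    # |A_i|  (example values)
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])   # omega_i, distinct

phi = rng.uniform(-np.pi, np.pi, size=len(amps))
n = np.arange(N)
signal = sum(A * np.exp(1j * (n * wi + ph))
             for A, wi, ph in zip(amps, freqs, phi))
# circular complex white noise with variance sigma_w^2
noise = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = signal + noise
```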


2 Spectrum Estimation of p-component Harmonic Processes

Recall definition of a p-component harmonic process in additive white noise

    x(n) = Σ_{i=1}^{p} A_i e^{jnω_i} + w(n)    (7)

where amplitudes A_i are complex

    A_i = |A_i| e^{jφ_i}    (8)

and it is assumed that:

(i)   |A_i| and ω_i are unknown but not random

(ii)  ω_i are distinct

(iii) φ_i ~ U[-π, π] are uncorrelated with each other and with w(n)

(iv)  w(n) has mean zero and variance σ_w²

It can be shown (exercise) mean and autocorrelation function of x(n) are
given by

    m_x(n) = 0    (9)

and

    r_x(m) = Σ_{i=1}^{p} P_i e^{jmω_i} + σ_w² δ(m)    (10)

where

    P_i = |A_i|²    (11)

Thus, x(n) is WSS    (12 refers to the PSD below)

Also, through the Wiener-Khinchine Theorem, it can be shown x(n) has power
spectral density

    P_x(e^{jω}) = 2π Σ_{i=1}^{p} P_i δ(ω - ω_i) + σ_w²    (12)


    x(n) = Σ_{i=1}^{p} A_i e^{jnω_i} + w(n)

    P_x(e^{jω}) = 2π Σ_{i=1}^{p} P_i δ(ω - ω_i) + σ_w²,    P_i = |A_i|²

[Figure: plot of P_x(e^{jω}) over ω ∈ [0, 2π], showing impulses of area 2πP_1
at ω_1 and 2πP_2 at ω_2 standing on a noise floor of height σ_w²]

Theoretical power spectral density of a 2-component harmonic process in white noise


2.1 Eigendecomposition of the Autocorrelation Matrix

Define the M × M autocorrelation matrix R_x

    [R_x]_{mn} = r_x(m - n),    m, n = 1, …, M    (13)

where

    M > p    (14)

It follows from (10) that R_x can be decomposed as follows (exercise)

    R_x = R_s + R_n    (15)

where R_s is the signal autocorrelation matrix

    R_s = Σ_{i=1}^{p} P_i e_i e_i^H    (16)

    e_i = [1  e^{jω_i}  e^{j2ω_i}  …  e^{j(M-1)ω_i}]^T    (17)

and R_n is the noise autocorrelation matrix

    R_n = σ_w² I    (18)

e_i, i = 1, …, p, are the signal vectors, each corresponding to the signal
frequency ω_i

Alternatively, defining the M × p and p × p matrices

    E = [e_1  …  e_p]    (19)

and

    P = diag{P_1, …, P_p}    (20)

R_s can be expressed more compactly as follows

    R_s = E P E^H    (21)
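The structure (16)-(21) is easy to check numerically. A minimal sketch (my own illustration; the frequencies and powers are arbitrary example values) builds R_s = E P E^H and confirms it is Hermitian, positive semi-definite, and of rank p:

```python
import numpy as np

M = 8                                            # M > p = 2
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # omega_i, distinct (examples)
P = np.diag([1.0, 2.0])                          # P = diag{P_1, P_2}, P_i > 0

# Signal vectors e_i of (17), stacked as the columns of E per (19)
E = np.exp(1j * np.outer(np.arange(M), freqs))
Rs = E @ P @ E.conj().T                          # equation (21)

rank = np.linalg.matrix_rank(Rs)                 # should equal p = 2
```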

(i)  Since ω_i are assumed to be distinct, it can be shown E has full column
     rank p (exercise)

(ii) Clearly, since P_i > 0, P has full rank p

Therefore, the M × M matrix R_s has rank p < M, and is positive semi-definite

Denote eigenvalues and eigenvectors of R_s by λ_i^s and v_i^s, i = 1, …, M

Suppose eigenvalues are ordered such that

    λ_1^s ≥ λ_2^s ≥ … ≥ λ_M^s    (22)

Since R_s has only rank p < M, we have

    λ_1^s ≥ … ≥ λ_p^s > 0    (23)

and

    λ_{p+1}^s = … = λ_M^s = 0    (24)

With reference to (21), it should be noted that, in general, λ_i^s ≠ P_i ¹ and
v_i^s ≠ e_i. Moreover, as can be readily verified, e_i, i = 1, …, p, are not
orthogonal

¹ See (40)


Recall now (15), (21) and (18). We can write R_x as follows

    R_x = R_s + R_n = E P E^H + σ_w² I    (25)

Denote eigenvalues and eigenvectors of R_x by λ_i and v_i, i = 1, …, M

From our previous study on autocorrelation matrices, we see that

(i)   λ_i = λ_i^s + σ_w²    (26)

(ii)  v_i = v_i^s    (27)

(iii) Since R_x is Hermitian, (26) implies space spanned by v_1, …, v_p is
      orthogonal to space spanned by v_{p+1}, …, v_M

λ_i = λ_i^s + σ_w², i = 1, …, p, are called signal eigenvalues, and their
corresponding eigenvectors, v_1, …, v_p, signal eigenvectors

v_1, …, v_p span a p-dimensional subspace called the signal plus noise
subspace. By convention, this space is often simply referred to as the signal
subspace

Likewise, λ_i = σ_w², i = p + 1, …, M, are called noise eigenvalues, and
v_{p+1}, …, v_M, noise eigenvectors. The noise eigenvectors span an
(M - p)-dimensional subspace called the noise subspace
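This eigenvalue split can be verified numerically. A minimal sketch (illustrative values, not from the notes) builds R_x = E P E^H + σ_w² I and checks that the M - p smallest eigenvalues equal σ_w² exactly, while the p signal eigenvalues exceed it:

```python
import numpy as np

M, p, sigma_w2 = 8, 2, 0.5                       # example sizes and noise power
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # omega_i, distinct (examples)
E = np.exp(1j * np.outer(np.arange(M), freqs))   # signal vectors, as in (17)
P = np.diag([1.0, 2.0])                          # P = diag{P_1, P_2}
Rx = E @ P @ E.conj().T + sigma_w2 * np.eye(M)   # equation (25)

lam = np.linalg.eigvalsh(Rx)[::-1]               # eigenvalues, descending
signal_eigs, noise_eigs = lam[:p], lam[p:]       # split per (26)
```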

We next show the important result

    span{e_1, …, e_p} = span{v_1, …, v_p}    (28)

    span{e_1, …, e_p} = span{v_1, …, v_p}

Proof:

From

    R_x v_i = σ_w² v_i,    i = p + 1, …, M    (29)

we have

    (R_x - σ_w² I) v_i = 0,    i = p + 1, …, M    (30)

or from (25)

    E P E^H v_i = 0,    i = p + 1, …, M    (31)

But E and P have full rank. Therefore, it follows from (31) that

    E^H v_i = 0,    i = p + 1, …, M    (32)

or more explicitly

    e_j^H v_i = 0,    j = 1, …, p,    i = p + 1, …, M    (33)

Therefore, e_1, …, e_p span a space that is orthogonal to the space spanned
by the noise eigenvectors v_{p+1}, …, v_M. But the signal eigenvectors
v_1, …, v_p also span a space that is orthogonal to the space spanned by
v_{p+1}, …, v_M. Hence, we must have

    span{e_1, …, e_p} = span{v_1, …, v_p}

Above result suggests frequencies of complex exponentials can be found by
searching for signal vectors that are orthogonal to the noise subspace

In other words, defining the test signal vector

    e(ω) = [1  e^{jω}  e^{j2ω}  …  e^{j(M-1)ω}]^T    (34)

signal frequencies can be found by searching for ω's that satisfy

    e(ω)^H v_i = 0,    i = p + 1, …, M    (35)

(35) holds for ω = ω_1, …, ω_p, and v_i = v_{p+1}, …, v_M. But there could
exist an ω_o ∉ {ω_1, …, ω_p} such that, for a certain v_i, e(ω_o)^H v_i = 0.
Therefore, one should search for ω's that satisfy (35) for all noise eigenvectors

Now, e(ω)^H v_i is in general complex. More convenient to search for ω's that
minimise following real function (whose minima equal 0)

    f(ω) = Σ_{i=p+1}^{M} |e(ω)^H v_i|² = Σ_{i=p+1}^{M} e(ω)^H v_i v_i^H e(ω)
         = e(ω)^H ( Σ_{i=p+1}^{M} v_i v_i^H ) e(ω)    (36)
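The behaviour of f(ω) can be demonstrated directly. In the sketch below (my own illustration; frequencies, powers and M are arbitrary example values), f(ω) evaluates to zero, up to rounding, at the true frequencies and stays well away from zero elsewhere:

```python
import numpy as np

M, p, sigma_w2 = 8, 2, 0.5
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # true omega_i (examples)
E = np.exp(1j * np.outer(np.arange(M), freqs))
Rx = E @ np.diag([1.0, 2.0]) @ E.conj().T + sigma_w2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)          # eigenvalues in ascending order
Vn = V[:, :M - p]                    # noise eigenvectors (M - p smallest)

def f(w):
    e = np.exp(1j * np.arange(M) * w)               # test vector e(w) of (34)
    return np.linalg.norm(Vn.conj().T @ e) ** 2     # f(w) of equation (36)

vals_true = [f(w) for w in freqs]    # ~0 at the true frequencies
val_other = f(0.5 * np.pi)           # clearly nonzero elsewhere
```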

Remark
There does not exist an ω, other than ω_1, …, ω_p, such that f(ω) = 0

Proof
Suppose there exists an ω_o such that

    f(ω_o) = 0,    ω_o ∉ {ω_1, …, ω_p}

i.e., e(ω_o) is orthogonal to the noise subspace.

Now, since ω_o ∉ {ω_1, …, ω_p}, e(ω_o) must be linearly independent of
e(ω_1), …, e(ω_p), and {e(ω_1), …, e(ω_p), e(ω_o)} will span a
(p + 1)-dimensional space. But the noise subspace has dimension M - p.
Therefore, {e(ω_1), …, e(ω_p), e(ω_o)} can only span a p-dimensional space,
which contradicts the deduction that {e(ω_1), …, e(ω_p), e(ω_o)} is
(p + 1)-dimensional

2.2 MUltiple SIgnal Classification (MUSIC) Method

It follows from (36) that, to find the signal frequencies, all one needs is just
one noise eigenvector, i.e. M = p + 1

However, in practice, R_x is estimated from measurements. Noise eigenvectors
will thus contain errors

To average out (i.e. reduce) errors, and suppress false peaks (see (36)),
Schmidt (1979) proposed following MUSIC pseudo-spectrum (or eigenspectrum)
whose peaks are located at ω_i, i = 1, …, p

    P_MU(e^{jω}) = 1 / ( Σ_{i=p+1}^{M} |e(ω)^H v_i|² ) = 1 / ( e(ω)^H V_n V_n^H e(ω) )    (37)

where

    V_n = [v_{p+1}  …  v_M]    (38)

and

    M > p + 1    (39)

Trade-off is an increase in computational load: one is required to
eigendecompose a larger R_x matrix

P_i, i = 1, …, p, can be found by solving

    [ |V_1(e^{jω_1})|²  |V_1(e^{jω_2})|²  …  |V_1(e^{jω_p})|² ] [P_1]   [λ_1 - σ_w²]
    [ |V_2(e^{jω_1})|²  |V_2(e^{jω_2})|²  …  |V_2(e^{jω_p})|² ] [P_2] = [λ_2 - σ_w²]    (40)
    [        ⋮                  ⋮                   ⋮          ] [ ⋮ ]   [     ⋮    ]
    [ |V_p(e^{jω_1})|²  |V_p(e^{jω_2})|²  …  |V_p(e^{jω_p})|² ] [P_p]   [λ_p - σ_w²]

where V_i(e^{jω_ℓ}) is the DFT of the ith normalised signal eigenvector
evaluated at ω = ω_ℓ

    V_i(e^{jω}) = Σ_{k=0}^{M-1} [v_i]_{k+1} e^{-jkω}    (41)

Proof
Observe signal eigenvalues/eigenvectors satisfy

    R_x v_i = λ_i v_i,    i = 1, …, p    (42)

Suppose eigenvectors are normalised such that v_i^H v_i = 1. Pre-multiplying
both sides of (42) by v_i^H gives

    v_i^H R_x v_i = λ_i v_i^H v_i = λ_i,    i = 1, …, p    (43)

But

    v_i^H R_x v_i = v_i^H (R_s + σ_w² I) v_i = v_i^H R_s v_i + σ_w²    (44)

Therefore

    v_i^H R_s v_i = λ_i - σ_w²,    i = 1, …, p    (45)

and from (21)

    v_i^H (E P E^H) v_i = (v_i^H E) P (E^H v_i) = λ_i - σ_w²,    i = 1, …, p    (46)

Now, from (17) and (19)

    E^H v_i = [ 1  e^{-jω_1}  e^{-j2ω_1}  …  e^{-j(M-1)ω_1} ] [ [v_i]_1 ]   [ V_i(e^{jω_1}) ]
              [ 1  e^{-jω_2}  e^{-j2ω_2}  …  e^{-j(M-1)ω_2} ] [ [v_i]_2 ] = [ V_i(e^{jω_2}) ]    (47)
              [ ⋮      ⋮           ⋮                ⋮       ] [    ⋮    ]   [       ⋮       ]
              [ 1  e^{-jω_p}  e^{-j2ω_p}  …  e^{-j(M-1)ω_p} ] [ [v_i]_M ]   [ V_i(e^{jω_p}) ]

Therefore, after substituting (47) into (46), we get

    Σ_{ℓ=1}^{p} P_ℓ |V_i(e^{jω_ℓ})|² = λ_i - σ_w²,    i = 1, …, p    (48)

which leads to (40)

The MUSIC Algorithm

1. Estimate R_x from the measurements, e.g. by

       R̂_x = (1/K) Σ_{k=0}^{K-1} x(k) x(k)^H    (49)

   where each x(k) contains M successive observations of x(n)²

2. Determine eigenvalues and eigenvectors of R̂_x. Order eigenvalues as
   follows

       λ_1 ≥ λ_2 ≥ … ≥ λ_M    (50)

   Determine p, the number of complex exponentials, from λ_1, …, λ_M

3. σ̂_w² given by average of M - p smallest eigenvalues, i.e.

       σ̂_w² = (1/(M - p)) Σ_{i=p+1}^{M} λ_i    (51)

4. Form V_n from eigenvectors corresponding to M - p smallest eigenvalues

       V_n = [v_{p+1}  …  v_M]    (52)

   Plot MUSIC pseudo-spectrum

       P_MU(e^{jω}) = 1 / ( e(ω)^H V_n V_n^H e(ω) )    (53)

   where e(ω) = [1  e^{jω}  …  e^{j(M-1)ω}]^T. ω̂_i, i = 1, …, p, given by
   frequencies where P_MU(e^{jω}) has its p largest peaks

5. Normalise, if necessary, signal eigenvectors found in Step 2

6. From estimated frequencies ω̂_i, i = 1, …, p, found in Step 4 and
   normalised signal eigenvectors, compute the DFTs

       V_i(e^{jω}) = Σ_{k=0}^{M-1} [v_i]_{k+1} e^{-jkω},    i = 1, …, p    (54)

   P_i, i = 1, …, p, found by solving following set of linear equations

       [ |V_1(e^{jω̂_1})|²  …  |V_1(e^{jω̂_p})|² ] [P_1]   [λ_1 - σ̂_w²]
       [        ⋮                    ⋮          ] [ ⋮ ] = [     ⋮    ]
       [ |V_p(e^{jω̂_1})|²  …  |V_p(e^{jω̂_p})|² ] [P_p]   [λ_p - σ̂_w²]

² Alternatively, R̂_x can be constructed from the correlogram of the measured data
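The algorithm above can be sketched in a few lines of NumPy. This is my own illustration on synthetic data (arbitrary example frequencies, sizes and noise level); it implements Steps 1-4, with the power estimates of Steps 5-6 omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, p, sigma_w2 = 256, 8, 2, 0.1
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # true frequencies (examples)
n = np.arange(N)
x = sum(np.exp(1j * (n * w0 + rng.uniform(-np.pi, np.pi))) for w0 in freqs)
x = x + np.sqrt(sigma_w2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Step 1: estimate R_x from overlapping length-M snapshots, as in (49)
X = np.array([x[k:k + M] for k in range(N - M + 1)])
Rx = X.T @ X.conj() / X.shape[0]

# Step 2: eigendecomposition, eigenvalues ordered as in (50)
lam, V = np.linalg.eigh(Rx)
lam, V = lam[::-1], V[:, ::-1]

# Step 3: noise-variance estimate, equation (51)
sigma_hat2 = lam[p:].mean()

# Step 4: MUSIC pseudo-spectrum (53) on a dense grid; take the p largest peaks
Vn = V[:, p:]
w_grid = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
A = np.exp(1j * np.outer(np.arange(M), w_grid))          # columns are e(w)
Pmu = 1.0 / np.sum(np.abs(Vn.conj().T @ A) ** 2, axis=0)

is_peak = (Pmu > np.roll(Pmu, 1)) & (Pmu > np.roll(Pmu, -1))
w_hat = np.sort(w_grid[is_peak][np.argsort(Pmu[is_peak])[-p:]])
```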


Remarks
(i)   If M = p + 1, i.e. there is only one noise eigenvalue/eigenvector, the
      MUSIC algorithm is known as the Pisarenko method
      Clearly, Pisarenko method not practical since it is highly sensitive to
      noise. Notwithstanding, it is important in that it provides a key stepping
      stone to the subsequent development of more practical
      eigendecomposition-based methods

(ii)  If w(n) is not white such that R_n ≠ aI, where a is some positive real
      constant, then one must perform a pre-whitening procedure before
      applying MUSIC
      Pre-whitening is beyond scope of this course (but see Hayes Problem 8.24)

(iii) In Step 2, p can be determined by looking for a distinct change in
      eigenvalues as one scans λ_1, …, λ_M starting from the smallest (noise)
      eigenvalue λ_M. However, this procedure works only if signal powers are
      much larger than the noise power
      In situations where there is no clear break between signal and noise
      eigenvalues, following statistical procedure may be useful:
      Find p that minimises either the Akaike information criterion (AIC) or the
      minimum description length (MDL) given below

          AIC(p) = -N(M - p) ln φ(p) + p(2M - p)    (55)

          MDL(p) = -N(M - p) ln φ(p) + (1/2) p(2M - p) ln N    (56)

      where

          φ(p) = ( Π_{i=p+1}^{M} λ_i )^{1/(M-p)} / ( (1/(M - p)) Σ_{i=p+1}^{M} λ_i )    (57)

      and N is number of observed samples


(iv)  In (53), if V_n is ideal, then e(ω)^H V_n V_n^H e(ω) = 0 when ω = ω_i,
      i = 1, …, p
      However, if V_n is estimated from measurements, then it is almost certain
      that e(ω)^H V_n V_n^H e(ω) ≠ 0 at ω = ω_i. The best one can do then, to
      estimate ω_i, is to look for the p smallest local minima of
      e(ω)^H V_n V_n^H e(ω)
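The AIC/MDL order selection of (55)-(57) can be sketched as follows. The eigenvalues here are an idealised hand-picked example (two signal eigenvalues above a flat noise floor), purely to show the mechanics:

```python
import numpy as np

M, N, sigma_w2 = 8, 256, 0.5
lam = np.array([10.0, 5.0] + [sigma_w2] * (M - 2))   # ordered eigenvalues, p = 2

def criterion(pp, penalty):
    tail = lam[pp:]
    # phi(p) of (57): geometric mean over arithmetic mean of the M-p smallest
    phi = np.exp(np.mean(np.log(tail))) / np.mean(tail)
    return -N * (M - pp) * np.log(phi) + penalty(pp)

# AIC penalty p(2M - p); MDL penalty (1/2) p(2M - p) ln N, as in (55)-(56)
aic = [criterion(pp, lambda q: q * (2 * M - q)) for pp in range(M)]
mdl = [criterion(pp, lambda q: 0.5 * q * (2 * M - q) * np.log(N)) for pp in range(M)]
p_aic = int(np.argmin(aic))
p_mdl = int(np.argmin(mdl))
```

With a clean eigenvalue break, both criteria pick out the correct order p = 2.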


2.3 Root MUSIC

MUSIC, as summarised on p. 13, requires a search for peaks in P_MU(e^{jω}):
one may have problems with identifying peaks

Recall (37) and (41), repeated below

    P_MU(e^{jω}) = 1 / ( Σ_{i=p+1}^{M} |e(ω)^H v_i|² )    (58)

    V_i(e^{jω}) = Σ_{k=0}^{M-1} [v_i]_{k+1} e^{-jkω} = e(ω)^H v_i    (59)

Therefore

    P_MU(e^{jω}) = 1 / ( Σ_{i=p+1}^{M} |V_i(e^{jω})|² )    (60)

In z-transform notation, (60) yields

    P_MU(z) = 1 / ( Σ_{i=p+1}^{M} V_i(z) V_i*(1/z*) )    (61)

Peaks of P_MU(e^{jω}) correspond, therefore, to zeros on the unit circle of
the following polynomial

    D(z) = Σ_{i=p+1}^{M} V_i(z) V_i*(1/z*)    (62)

With measurement data, ω̂_i, i = 1, …, p, equals angles of the p roots of
D(z) that are closest to and inside the unit circle³

³ D(z) has the property that its zeros appear in conjugate reciprocal pairs, i.e. if z_o is a zero, then
1/z_o* is also a zero
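A root-MUSIC sketch follows (my own illustration; frequencies, powers and M are arbitrary example values). Since Σ_i V_i(z)V_i*(1/z*) evaluated on the unit circle equals e(ω)^H V_n V_n^H e(ω), the coefficients of z^{M-1} D(z) are the diagonal sums of Q = V_n V_n^H. The exact R_x is used here, so the signal roots sit on the unit circle; the loop keeps the roots closest to the circle, one per conjugate-reciprocal pair:

```python
import numpy as np

M, p, sigma_w2 = 8, 2, 0.5
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # true frequencies (examples)
E = np.exp(1j * np.outer(np.arange(M), freqs))
Rx = E @ np.diag([1.0, 2.0]) @ E.conj().T + sigma_w2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)          # ascending eigenvalues
Vn = V[:, :M - p]                    # noise eigenvectors
Q = Vn @ Vn.conj().T

# Coefficients of z^{M-1} D(z): diagonal sums of Q, from the z^{M-1}
# coefficient down to the z^{-(M-1)} one (highest power first for np.roots)
coeffs = np.array([np.trace(Q, offset=m) for m in range(M - 1, -M, -1)])
roots = np.roots(coeffs)

# Keep the roots closest to the unit circle, one per conjugate-reciprocal pair
order = np.argsort(np.abs(np.abs(roots) - 1.0))
w_hat = []
for r in roots[order]:
    w0 = np.angle(r) % (2 * np.pi)
    if all(min(abs(w0 - w1), 2 * np.pi - abs(w0 - w1)) > 0.01 for w1 in w_hat):
        w_hat.append(w0)
    if len(w_hat) == p:
        break
w_hat = np.sort(w_hat)
```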


2.4 Other Eigenvector Methods

See Hayes Section 8.6.4

Eigenvector (EV) method appears to produce fewer spurious peaks than MUSIC

    P_EV(e^{jω}) = 1 / ( Σ_{i=p+1}^{M} (1/λ_i) |e(ω)^H v_i|² )    (63)

Minimum norm method is computationally more efficient

    P_MN(e^{jω}) = 1 / |e(ω)^H a|²    (64)

where a is a vector constrained to lie in the noise subspace, has minimum
norm, and has 1 as its first element

    a = ( V_n V_n^H u_1 ) / ( u_1^H V_n V_n^H u_1 )    (65)

    u_1 = [1  0  …  0]^T    (66)
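Both estimators of (63)-(65) can be sketched with the same exact R_x used earlier (my own illustration; all numerical values are arbitrary examples). Both pseudo-spectra blow up at the true frequencies and stay bounded elsewhere:

```python
import numpy as np

M, p, sigma_w2 = 8, 2, 0.5
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # true frequencies (examples)
E = np.exp(1j * np.outer(np.arange(M), freqs))
Rx = E @ np.diag([1.0, 2.0]) @ E.conj().T + sigma_w2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)                  # ascending eigenvalues
Vn, lam_n = V[:, :M - p], lam[:M - p]        # noise eigenpairs

# Minimum-norm vector a of (65): in the noise subspace, first element 1
u1 = np.zeros(M); u1[0] = 1.0
Pn = Vn @ Vn.conj().T                        # projector onto the noise subspace
a = (Pn @ u1) / (u1 @ Pn @ u1)

def P_ev(w):                                 # equation (63)
    e = np.exp(1j * np.arange(M) * w)
    return 1.0 / np.sum(np.abs(Vn.conj().T @ e) ** 2 / lam_n)

def P_mn(w):                                 # equation (64)
    e = np.exp(1j * np.arange(M) * w)
    return 1.0 / np.abs(e.conj() @ a) ** 2
```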


Example: Comparison of Pisarenko, MUSIC, EV and minimum norm
(Hayes p. 469)

x(n) consists of 4 complex exponentials in white noise

    x(n) = Σ_{i=1}^{4} A_i e^{jnω_i} + w(n)

    |A_i| = 1,    i = 1, …, 4

    ω_i = 0.2π, 0.3π, 0.8π, 1.2π

    σ_w² = 0.5

Simulation creates 10 realisations of x(n) with N = 64 samples per realisation

R_x is 5 × 5 for Pisarenko, and 64 × 64 for the other 3 methods

Hayes remarked that

(i)  for some plots, peaks not well-defined;
(ii) some plots show only 2 or 3 peaks


2.5 Signal Subspace Methods

See Hayes Section 8.7

Pisarenko, MUSIC, EV and minimum norm are noise subspace methods:
they search for signal vectors that are orthogonal to the noise subspace

With signal subspace methods, we aim to approximate the signal
autocorrelation matrix R_s and use it in place of the data autocorrelation
matrix R_x in spectral estimators such as the non-parametric MV and MEM
methods

Idea is that the approximate signal autocorrelation matrix, denoted by R̂_s,
will contain less noise than R_x and so should give better spectral estimates

R̂_s can be found as follows. After an eigendecomposition of R_x, we have

    R_x = Σ_{i=1}^{M} λ_i v_i v_i^H = Σ_{i=1}^{p} λ_i v_i v_i^H + Σ_{i=p+1}^{M} λ_i v_i v_i^H    (67)

where λ_i and v_i, i = 1, …, M, are eigenvalues and eigenvectors of R_x.
R_s is approximated with the first summation term

    R̂_s = Σ_{i=1}^{p} λ_i v_i v_i^H    (68)

i.e. R̂_s contains only signal eigenvalues and eigenvectors of R_x
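The truncation (68) amounts to keeping the p principal eigencomponents of an estimated R_x. A minimal sketch on synthetic data (my own illustration; all numerical values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, p, sigma_w2 = 512, 8, 2, 0.5
freqs = np.array([0.3 * np.pi, 0.8 * np.pi])     # true frequencies (examples)
n = np.arange(N)
x = sum(np.exp(1j * (n * w0 + rng.uniform(-np.pi, np.pi))) for w0 in freqs)
x = x + np.sqrt(sigma_w2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Estimated R_x from overlapping length-M snapshots
X = np.array([x[k:k + M] for k in range(N - M + 1)])
Rx_hat = X.T @ X.conj() / X.shape[0]

lam, V = np.linalg.eigh(Rx_hat)
lam, V = lam[::-1], V[:, ::-1]                       # descending order
Rs_hat = (V[:, :p] * lam[:p]) @ V[:, :p].conj().T    # equation (68), rank p
```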


2.6 MUSIC and Real Sinusoids

Recall (7)

    x(n) = Σ_{i=1}^{p} A_i e^{jnω_i} + w(n)    (69)

where amplitudes A_i are complex

    A_i = |A_i| e^{jφ_i}    (70)

and it is assumed that: (i) |A_i| and ω_i are unknown but not random; (ii) ω_i
are distinct; (iii) φ_i ~ U[-π, π] are uncorrelated with each other and with
w(n); and (iv) w(n) has mean zero and variance σ_w²

As can be shown [see (10)]

    r_x(m) = Σ_{i=1}^{p} P_i e^{jmω_i} + σ_w² δ(m)    (71)

where

    P_i = |A_i|²    (72)

Therefore, R_x is, in general, complex and Hermitian⁴

Now, suppose

    p = 2q,    ω_{p-i+1} = 2π - ω_i    and    P_{p-i+1} = P_i    for i = 1, …, q    (73)

(71) then simplifies to

    r_x(m) = Σ_{i=1}^{q} P_i { e^{jmω_i} + e^{jm(2π - ω_i)} } + σ_w² δ(m)
           = Σ_{i=1}^{q} 2P_i cos(mω_i) + σ_w² δ(m)    (74)

In other words, under the conditions (73), R_x is real and symmetric

⁴ Recall [R_x]_{mn} = r_x(m - n), m, n = 1, …, M


Consider next the p-component harmonic process whose complex exponentials
are real sinusoids

    y(n) = Σ_{i=1}^{q} 2A_i cos(nω_i + φ_i) + w(n)    (75)

where p = 2q and, similar to before, it is assumed that: (i) A_i ∈ ℝ⁺ and ω_i
are unknown but not random; (ii) ω_i are distinct; (iii) φ_i ~ U[-π, π] are
uncorrelated with each other and with w(n); and (iv) w(n) has mean zero and
variance σ_w² and is real

It can be shown that (exercise)

    r_y(m) = Σ_{i=1}^{q} 2P_i cos(mω_i) + σ_w² δ(m)    (76)

where P_i = A_i². Thus R_y is real and symmetric

Now, comparing (76) with (74), it can be seen that, provided x(n) meets the
conditions (73),

    R_x = R_y    (77)

(77) implies frequencies of real sinusoids in y(n) can be found by applying,
with no modifications, the MUSIC algorithm to R_y

Remarks
(i)  (76) holds irrespective of whether w(n) is complex or purely real

(ii) Under (73), phases of e^{jnω_i} and e^{jn(2π-ω_i)} are allowed to be
     random and not correlated with each other. In contrast, writing y(n) as
     follows

         y(n) = Σ_{i=1}^{q} 2A_i cos(nω_i + φ_i) + w(n)
              = Σ_{i=1}^{q} A_i { e^{j(nω_i + φ_i)} + e^{-j(nω_i + φ_i)} } + w(n)
              = Σ_{i=1}^{q} A_i { e^{j(nω_i + φ_i)} + e^{j[n(2π - ω_i) - φ_i]} } + w(n)


we see phases of e^{jnω_i} and e^{jn(2π-ω_i)} are locked to each other

Interestingly, in spite of this significant fundamental difference, R_x = R_y

(iii) There is no need to plot MUSIC pseudospectrum of y(n) outside ω ∈ [0, π]
      since plot in this frequency range is mirrored across to ω ∈ [π, 2π]
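The real-sinusoid case can be sketched end to end: build R_y directly from (76), apply MUSIC unchanged, and search only over ω ∈ [0, π]. This is my own illustration; the frequency, power, M and noise variance are arbitrary example values.

```python
import numpy as np

M, sigma_w2 = 8, 0.25
w1, P1 = 0.4 * np.pi, 1.0            # one real sinusoid, so p = 2q = 2
m = np.arange(M)
r = 2 * P1 * np.cos(m * w1) + sigma_w2 * (m == 0)    # r_y(m) from (76)
Ry = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])

lam, V = np.linalg.eigh(Ry)          # R_y is real and symmetric
Vn = V[:, :M - 2]                    # noise subspace (p = 2)

w_grid = np.linspace(0, np.pi, 2048)         # [0, pi] suffices, per remark (iii)
A = np.exp(1j * np.outer(np.arange(M), w_grid))
Pmu = 1.0 / np.sum(np.abs(Vn.conj().T @ A) ** 2, axis=0)
w_hat = w_grid[np.argmax(Pmu)]       # estimated sinusoid frequency
```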
