Adv DSP Hayes Unit II Complete

You might also like

Download as pdf or txt
Download as pdf or txt
You are on page 1of 148

IB,'Chapter 4~ we saw 'dlat several different signal modeling problems require 'finding the solution to a set of linear equations

of 'the form

...

~ ,
i
I

'IL....... 11: ~ pii.>.' ...l,"(: .,', .. hod C ;I..,. wnere 'R" IS a "iT": ;'J:' Ioephtz matrix. I the raoe approxnnanon metn 'I ;" ror ex-amp1 the oenomtn e, inator coefficients ap (k) which are represented by the vector 9p are found by solving ,R set of Toeplitz equations. of 'idle. form (5.'1) where Kx is a non-symmetric 'Ioep eplitz matrix COl1mining the. signa] values x (q )-~X I(q +, 1) .x (q + P ~' 1) in the first column and the signal values x (q ),' x(q ,_, I)!, "" , ~x(q ~ .p ,+ 1)-in the :first row, In addition, the vector b contains the, signal values ,x(.q +, I), x(q + 2)~ ~.'~,', (q + p) and is therefore tightly constrained by x the. values. in the matrix ,RX'~ , similar set of linear equations is also found. in the modified A Ylllle,--'Wruit:erequations used in modeling an A'RMA, process. 'We saw' that Toeplitz equations also arise in ,aU-po,le modeling of deterministic signals using either PrODY"S, method 'NI.' .,L_..,,__..j, d ,· ..... 1i u ~I: 1)."..' '" " or *',b' autocorrelation mernoa anc tn an-pore m odeung 0.f stoci tasnc proces;s,es using th:e 11l.,He :, Yale- Walker method" Unlike: the, Pade approximation, however, in these, cases ,RX'is a Hermitian Toeplitz matrix, 'Of autocorr-elation values r (0)" rx (I), ~'. "» (p - 1)-.,In addition,
'I
~. _' ., _ _i

j'

'.'

,~,

J[

r.

'I'

smce b ~ _','rx(l)~ '..' 'i(P) - " the vector on tile right side of ,Eqr (5~1) IS ,ag,amtightly constrained by 'me values in the Toeplitz matrix Rx ,. In Shanks' method for finding- the ,~~,C:C ~ • ., t..,~ numerator coetticients, 'we agam f d a. set 0"f 'R ermman 'rr tmc ,:: roep 1-'· equations. I .. case, itz m rms however, unlike the previous examples, the vector b is. not constrained by the values in the matrix 'Rx',. In Chapter 7~ Toeplitz equations wiU again be; encountered, when we consider the design of f.1R, Wiener fillers. .. As in Shanks' method, '1:][ 'will be a Hermitian 'Ibeplitz matrix 'but the vector b 'win be, in genenll, independent of the values in Rx ~ , Due to 'the importance of solving Toepli tz equations in a variety of different problems, in this chapter, 'we look at efficient algorithms for solving these equations. In. the process of deriving these algorithms, we will also discover a number of interesting properties of tbe solutions to these equatio:~s. and will gain some insight into how other arppr:oacbes to signal 'iii d " +1-.. mo d e I·' may iIJ". d evetopec, we negm, ln. S' " " 5" 2'" Wit"h Ute der mg be Secnon a.z, " .'L envanon 0.f' me'Le' .•. " vinson+~
r ,.

']T

'-'

""

~,

,,-

"

,~

',(]'1

"

"

'D' b ,FUfOlD

,. " recursion, Th-:--, IS


I ._i
a,

recursion may
,.

'iIl-....-.,

I~

d use" to so 1 the ve h

'n-.....

Cluny

..... 1111 1 _""""." ..... '. ,dUJ:~,PO normal1 equations e

and. the autocorrelation normal equations, The Levinson-Durbin

recursion win ,MS'O' lead us

.'

----

---

216,

THE LEVlNSO,N RECURSION'

to several interesting results including the lattice filter structure, the Schur -Cohn stability test for digital filters, the Cholesky decomposition. of a Toeplitz matrix, and a procedure for recursively computing the inverse of a Toeplit z' matrix", In Section. .5 .,3, we develop the Levinson -recursion 'for solving a general set of Hermitian 'Ioeplitz equations in which the vector b is unconstrained. The Levinson recursion, may be used in Shanks' method, and. it may be used to solve the general FIR Wiener filtering problem developed in, Chapter 7" U· ""'-'~lll ., S ., AI: "'- Le .' ~ ,. ,. 'I!·' tty r-mat y, m - ecnon 5·,.'"f. we' deri errve .t. sp'111.·. evmson recursion Thlus recursion is sug htl more UJI.e "~ efficient than the Levinson- Durbin recursion and introduces the idea. of singular predictor polynomials and, line spectral pairs that are of interest in speech processing applications ~
I '

I~ ,5.2 TlH- ':.E


J

l~

l~,E'VI"'- 'N,····5····-,0··-·: ,'-D····U",',R·'·,B-,'Nj·j · N,··'


r _ •_ ' _':__
1_-,', I," ,.",..
j '.

'.

.'

J',

,'.

In..~'.

,D',rC'U-',-"U-'-"""",-I'Oi' iN-"
'. " .'. ',"'

'1'..... JUI

linear prediction problem, Lev ins OD.4'flerred, to 'the algorithm. as a "matbematicall; ~trivial procedure" l[ 1,6] Nevertheless, this recursion has led to ,R number of important discoveries , including' the Iattice filter structure, which has found widespread application in speech processing, spectrum estimation." and digital filter implementations, Later, in ] 961~Durbin improved the Levinson. recursion for the special case in. which the right-haad side of the Toeplitz equations, is aunit vector [7],. In this section we develop' this algorithm, known as
+

'~9'4-Na Levmson presenteed a reeursi ith :f:'~ -7, '" /:.Le ~ a recursive a.1-gontnm J.... soivmg a 'o;enera ·1 set 0.f r J!" unear ..._._~. ~. . .. t:I' symmetric Toeplitz equations Rx8, = b,.JAppearing mu an expository paper o.m. the Wiener Z .
,&_ ~.,

me Levtnson-, D'-··,·.--.'b.,:·· .. recursion. "If" """A,,::.l~;+~.·,,; we wiu explore ""', ..... oftheeronerties ofth e ..urn tn m aaomon, ,-' -""UI ',',':.-1,-"--,, some Oilme propell.nes 0" recursion show'-:' ". ~'f, leads v a- 1UI~. ,~.wr~filter srmeture for dizital filte ;l"1i':I('1'0" and prorve ..~. 'h'10W-'c' n 'I~ " v' ,v .'1:,... nice .~ u.......tl~ "-'" ' ..._ . y. .
.'L,...
r,",-,·~'-,c '.

,-','.":c.'~.

'.

-'\.,I ..

,d

,IIL!Iir· ~

.~

t;

OilU

~~~._i;tl,

. ,ilJ .,]',JLU,o-,,~.1

'th,

-:

9it·

that the all-pole model derived from the autocorrelation

method. is stable.

S.2~ ,De'velopmenf of' the :Rec'u,rs';on J


All-pole modeling using Prony's metbod or the autocorrelatioa solve the normal equations which, for' a. p'th~oroermodel, are method requires that we

rx' (I:)

+ :r::, ,ap (l).~;t,{k


p i=',~

'_ /") := 0-

k =: I,2'1' ~ '.. ' , P

'5' '2")

\<. ~,.. :.

dI 'I· w h.ere. th mouemg: error :i,e

IS .
a

.'
'€,P

:= Tx(O)

,+ L t1.p (l)rx (1)


P
[,;;;::d,

(5~3)

Combining Eqs. (5,.2) and (5~3) into matrix, form, we have


r.J[ ('0'), . . rx(l)
,'X','"

r.*('1), (.,':, - x'


rx(O)
T'Jt

r (':'2'.)"

:r*(.'·2·.··),, iI',,JI;:'" .. r'~('" 'l~, .1:' .'. :, Tx{O}..

'*(J .r1l"P' .r:*(p'. - l) x',·


r'*(p'. x
,

1.

II
I

(1)' ',

.
'

,-

Z)

"

a.P'·(1) .'.' ' .'. (2) ap',·,'

[, il I

rx(p)
'€p~

f:X(p' -

1)
".,

which is a,set of' p + 1 linear equations in the p' + 1. unknowns


Equivalently, Eq. (5.,4) may be written as,
.--------~

,r;c(p' -- 2)

,,~,',

.rx(Oj
"

'.'

I
-

'. tl,:p(p)

.. ..

- 'E,p

.
I

! ;
!

1. 00
,.

(5..4)

II
I

..
I
1-

up (:I), ,flp(2.),

,.~,.'~ (p) and ill'

:Rp,8p

:::::: £pUl f

..... -

..

(5..5)

'THE'tfVlNSON~DURBJN

RECURSION

217'

where Rp is a. (p +. 1) x (p +. 1.) Hermitian Toeplitz matrix and a 1 = II.,. 0.,. ~ "' " ,O]T is a uni t vector 'with '1 in the first position. Inthe special case of real data, the RJ; is a symmetric Toeplitz matrix. r The Levinson-Durbin recursion for solving Bq ..(5 .. ) is amalgorithm that. js recursive in. 5 the model order, In other words; the coefficients of the U·+ Jjst-order all-pole model, I!}+l, are found from tbe coefficients of 'the j -pole model, ~j. We begin, therefore, by showing how the 'Soiutmon to the. jth.=o:rder normal equations may be used to derive the solution 'to the (j + Ijst-order equations ..Let (lj(i') be the solution 'to the. jth.-or-der normal equations
[I :1
I

rx(O)

r;(1)

rx(l)
rx(2.)
~ ~
,

r,(0) .:'.
.rx(l)
.•.
~

,.*(2) x .

r;(I)
rx(O)
r

I!'

,II;

1"'*(} X-'

.*( r: .J. _ 1)
r

r*'U') .:t . '

1
{li(l) aj(2)
i ~

Ej .'

2)

.,.x.J .•) r7;(} : ( ...


'"

._ 1)~

~
"

.. ..

.
[I

.. ..
,

0 0
•. ,.
;.

(5 .. ) 6
,

Yx(i

~,

2)

rz(O)

aj'(j)

,.

hi h .. , ., ,'. ; W rcn, In matnx notation 18


-.

(5~7)

Given. 3j~. 'we want 'to derive the solution to the (j

+, 1)st -order
.... ;

normal equations,

Rj+13j+1

_.·'"E}+,fu.] ' .
I

The. procedure lor doing this is. as follows. Suppose: that we append a zero tO the vector ~J and. mnltiply the resulting vector by Rj+.ll .. The- result is rz(O) Tx(l) r, (2)
•.

r;(I)

rx (0)
r~(l)
,.

,.*(2) x r;(l)
T:x (0) ' .' 1,1
M

+ •. +

.,....r;U ",,:1jiJ: (. .~.X I., J+ .:


-

r;(j') 1,)
'.'

r;(j 2)'" -:
:r
;;(

r;U)

+, 11..)
'1)".
....'

1
I IQjrI)

:'
I

fj

-I.

~.

U,·
'._'

0
0
·0·

_-

aj(2)
[ ,

.,

..

~
+ .• ~

.. .

I:

II
I

"

.rx(})
I.

Tx(}

+ 1)

rx(j ~ 1) .rx(i - 2)
"xlj,)
rx(j-l)

r, (0)
rx(l)

r;·(l.)
7~.(O)

aj (") 1.'

0
Yj

il
_.

(5 .. ) 9

~,,+

where the parameter )/) is


,

YJ ~ ..~];(j

.+ i) + L
i=l

j'

aj

(i).~x (j,

+ i -:l)

(.5:'0.10)

1.:,

Note that if Yj == 0." then the right side of Eq. (5;.,9)is a scaled unit vector and. Bj+] =; {l , Uj ( 1.) ~ tl,j (j) 0 iT is the solution to the (} + I )st -order normal equations (5.·8)., In general, however, Yi #- O,and. II., aj (1.) ~.,,.~ aj (j) ,I O]l is not the solution. to Eq. (5 J~).. The key step in the derivation of the Levinson-Durbin recursion is to note that the Hermitian Toeplitz property of Ri-_;"]allows us to rewrite Eq, (5~9) in 'the equivalent form
r. .•.

+ ."

l'

.rx(O)
1';(1) *(2) r~" ..' '.
'Ii
,.

".(1) .l' '

rx(2)
r~.(l)

'Yj

Tx(O) . *(U) r,)! ".,


.. ..
~

'0

fx(O)
.,.

II

..

.
..

o
,

~ ; (5~11)

r;U)
I~;(j

+ 1)

r;(] '- 1) r;(j - 2)


r;(i)

r;(i ,_1)

Tx(l)
,.;.,' "*(1)" .r.x .. '

'x,(.0)

O· '

_._-_ .._--

THE

a.,:l~ :rJ'Cr"" IDSI-' u:;- ¥'IUJ """/""ii~.J ~,~.....vJ\,: O'" ;".


J

I'

~J"

N", .,.'

Taking the complex conjugate of Eq, (5~l t) and combining the resulting equation with. Eq~(5.9), it follows. that, for any (complex) constant rj+1;1

1
R:],+-1:
,

-0

lIj{l) aj(2)
~ ~

.+ rj+~.
.,
'I

a_j(j

_ajlj)
"

Ej

Il
I

0
...
.,

r. 0
i
1

;j;

I:

+ rr+1

0
..
..

(5... 12)
,

:.

at(j)
II

,a!(l). J
~
. \1

0
·Y·· J
e-

0
E'* j
[I ,I

.0
9)+1

·· t, Smce we want to' fi d tne vector me ...

ve ,t"'t' O· 1(' n '0'i"'~ that


. ''V. , ,.~ ,

·m

1if .I!.

'W' '..

"e'set
,'!-:t

hich.wh 11· 1'·...... ~ Id .... 1 ,. wmcn, wnen munrpned...:il b R 1" yseios a scaiead nmt by Ij+

men. Eq ..(5..12) becomes

where
'i

1 ,a.(:1.). J -.
aj(2)
•. ..

~j+l

.
,

(5~14)
,a'! (I)'",. 'J
.-

aj(j)

ik L.~. i .... . to '-lie (-,.+ l' ~,lI wmc h is tne so.I'unoa 'to th :IJ " "ill)'st ~Order normar equations, ',,'
u

oJ

I
p'' 'urth

urn ermore,
(5~ 15)

is the (]

modeling error.. If we define aj (0) :~ I and aj IU + 1) .:;: 0 then i Eq .. (5~14), referred to- as me Levinson order-update equation, may be. expressed as
0'

+ .1 jst-order

!il

" ~",.

;=0'1

1"

'!II'!>'"

}··+l'
'.

'I

'

~ . ..~ .. . . , '~ AU that IS required 'to comp ete ':.e reeursion is to define the conditions necessary 'itoirritialize the. recursion These. conditions are given by 'the. solution for the model of order j == O~

,"

~..

th

. '- -'~ .

. -.

e-

ao.{O) = I
Eo

.7:t (0)

In summary, the: steps of the Levinson-Durbin recursion are as fOUOWSL The recursion is first initialized with the zeroth-order solution, Eq ...(5 17);. Then, for i =. 0". 1., ..~..; p - I, the (j + 1)st-order model is found from the j th-order model in three steps ...The first step is. to use Eqs, (5.10) and .(5 ~13) 'ID determine the value of fj +] ,. which. is referred to as.
+

,f iLlJ;l·N··· VI, . UftD~I', IHil:':E: ,I..£: To II T,eo:" ..'N-. :~D-"r.t,I),DIrN·


r, '". r, .

Dcnll'_n'S'J'Q_ ," ~CII..r·UIt··,1~ _ .

N·-,

2i

'1- 9'- .--

l ...

'Table 5',,1 rite ,levinsan·.. urbin ,Recurs'ion D


1~ Initialize 'the recursion,
(a)

a~(O) = 1

r: 1

(b}

;Eo

= r)f(O}

For j
(a)
(b)

(e)1 (d)
(e)

==o.j I, ~.,.~p'...... l ¥j;;;;;;' '-x (j + 1) + 'E':=l Qi (i)rx (j '-- + 1) i rj+1 = ~'fjl€j ~ S' \ 3 For i ~ )",2" ..' ~~i
,aj+l

--

(,i')

=-", aj

{l)

aj+:l(j
~)+l

+ ill) ~ rj+1 _'" Ej [1 - Itj+. 12]'


E

-.:

+ rj+laj(j.
,

,~ i

,+ I)

--

~~,

3.

b(O)

~"jE;

the (] ,+ 1.)S-1 :r.eflecti'on (;oe",'ci'e'flt~ The next step of the recursion is. to use the Levinson order-update equation to compute the coefficients Uj..1.'], (i) from, aj (i')., The 'final step of the recursion is. to update the error, €j'+] , using Eq, {S.15);,This error 'may aliso be 'written in two eqnivalen t forms that will be useful in later discussions. The first is
'(j-VI

== E"j[l

irj+lI2] ~' r~(O) ,0[1 .....d2] ,lr


i;.l

i+l

(S-·' 18'-':)'.' III· ',,:

and the second, which follow'S from EqR (5,.3), is


EJ+l

== T_((O) +) -,.: ~j+l'(i)rx(i')


i=l
I.,

j+l

,.. ere Th e comp_ 1 r,ecurSlon


[I:'_'Q'
~~

LS 1" ed . !rn 'fa.; "C 5' 'ISt[.. i T bl

11 l

d I:' ·. '. F· 5 an.. a Mi ATLAB, PflO grnm is ,glvemt In " 'J_g. ,'., 'II;
.. ,J'to,10: '·11,.. 1'11:' E-· ··'IJ~.,"':;,::O·:Jift",,"' , 1,·'·~. ,., :·.·~aiI
I,.

:2

'm' '

'plle'-' il ..

',' '.

5·-'2· 1 ...

S-. .'.r ..'vin ::'0:/' ~b

Or

""-e' AU: . ¥J(U.".r: ,..,1,;. ,~,~, t·A~(J,... 'O"-.~OI


,c1;l~{.1 ",.

1!'lIr

1""'"'•.

Let us use the Levinson-Durbin re CUTS ion,to solve the autocorrelation normal equations and find a third-order ali pole model for a signal having au toe orrel ati 00, values
=

r.r (0····) 1 =
I:

!,

r:%:~", illI ).

::=:;

0 '. 5: "

. '(2")"~, Y:x:_·"

I.I~,_,:;;

O' 5

0: '. rx ('3)'' ......;;. -25 ".".~

The normal equations for the 'third -order all-pole model are

.: os

O~5 05
~. .

a3(1)
:1

[I

0..5

.][ 05

0,.5

03,(2) a3(3}

Ii
I

,_

O~25

05 ~ 05 '.'
.. -

'

2The com'V[ention used in the Levinson-Durbin reeursioa varies ,a bit from. one author 'Eoanother, Some authors. fer example, use: ,-rj+t in Eq. (5.12)" wildie others use rJ~+'~,.This results, of course, In different siign conventions fOF ~
rj+]~

220·

·T'1.JC ~ lnl[~ •

,C"i·..MN·'.. '·r0:".. ',,-,~,.r S "N"


J

'I
-'.J

Rlic:1I"""l ~'nllt"lO"N _'.L.l~,IU,"-~:rl .'.


J'.

f unc

a.on
0

r~r (;. s )
p.~lenrgrth
a.~l~
(:r}

-1;.
j'

·e-psi l.on~r (1) for j::::2 :p+l.;'

·g:~trma~-r (2 ~ ''It'flipud(a) j}

a= [a: 0] + rgi·cumna * ·1 .' 'e:ps.L .on=epiSilo:rr][' :*. (1 ~. ,abs (gamm.a) · ..2) ,; .' endl

lepsil,on.; {O :.c:onj -( lipud (a} ) ']; f

Figure, 5.1; A.MA:.nl' ..'B programfor s,alving .a set of Toeplit; equations using the Levinson- Durbin ReTil:.,• .'h • .i!iL iP ~ fil' J,n; CUTSlo.n.. .i hism-fu« ~' p,rov,i de' 'SQ·1Mpp·t.ngjrom. tne~ ,(lutocorreu:t.tif;O'nsequence rJ. (k) to .t"h' 'e': ,terc'(),ejl,:cu?nt'S ' .:", ap(k), hence th.e- name rtoa+
o 0

1" First-order mod.el:

and
ali
J.

==,

[ r1] :_.[ 1 ]
~,1
,r

,I -~ . - 1/2 ,:

2., Second-order model:

~.... - £] = pC 1I,;:;'"l,. ,_
. ,.

'.~

[[1
r ,

~.,"" ':2

2: r,''}],
I~ ,", ,I
i~ ,_, ~

..

........

~,

and
'

-~~·/2

I!

o
,-1/2

,I

'-,1/3
I!

Ii
:1

-1/3

3., Third-order model:

Y2 ~ Tx(3)
.'

,+ u2,(I)rr(2) +. a2(2)rx(1) = -=12'


.

T3

~~1

,..;".", -= Y2!'€2~-

"

,.

if]

:=

I r3t:.':. ~"rI' r~," I - '.]' ,I

:=

2'~ 4 32

:-:--,.

~1/'3

I.

~1/3!,

.! ,+;
:
~
.,.J

~1/3 8 Ii -!/3
ill

1.

-. 318 -3/8

1./8

Finally, with
:: b(O)]
"

....;;.;: ./'€1 ~

8 ",42

1.r;;:;

THE lEVtNsct-J~DURB,IN ,llECURS10N'

221,

H"'J[(z)1 '.,.l' ~

-I .3...,-1 J -2 - 8'"' _, iZ

~/8

[--3 + 'g,G

.~

Having found the third-order all-pole model for x(n),; let us determine what the next value in the autocorrelation sequence would be if H3.(Z) were the correct model for x(n)." i,e., if x(.n) is the inverse z-transform of H3(Z)., In this case, '[be model will remain unchanged, for k: > 3~i.e., lr.t =: 0 and, therefore, Yk = 0 for k >, 3~ThUS,. it follows from Eq, (5.10) by setting j =, 3 that

which may be solved for rx(4) as follows


rx(4)

:3
=

LaJ.{i)rx(4, _, i)
r.:-l

= 7/32

Successi ve val ues of rx (k) may be determined in a similar fashi on, For example, if k: ~ 4;
'then

rx(k) '_;,_' ,~,

EU3(i)1'x(k
"3:

.;J

-1')

~:=l

'which are the YUle-Walker equations for an AR(3) proeessv: 'It 'Follows from our- discussions in Section 3.6.2 of Chapter ]. that if white noi se wi til. a variance of is filtered with a first-order all-pole 'filter

u;

H(z)==.

1-

1. ~-.

ClZ-1

then the output will be M A'R( 1.) process with an autocorrelation Tx(k) =
_~Ull

sequence of the form

.2 Q'w

hJ

~k~
r

I -a2

In the following example, the Levinson-Durbin recursion is. used. to find the sequence of reflection coeffiC'~,ents rk and model errors [f,k that ate associated 'with this precess.

Exa!mpl~e ,5.2.,2, The R'eftection Coeffi~ie'nts'for an ,AR{l)


Let x(n) be an ,AR(I) process with an autocorrelation

PrOC'f/S:S

rx.

=:

[0';' ",.2 pT ,. ,[ t, (!. a. ,,0, ,. ., ~ ,£r ] J 1- a-

.,

To find the refi:ection. coefficients associated with this vector of antocorrelations use the Levinson-Durbin recursion as follows. Initializing the recursion ''With
,9:0,

we may

==

€o

=== r~;,(O) =: . '~!V

0'2

1 ,_ 0;2

222

JHE LEViNSON RECtfRSION

it follows that the first reflection coefficient is

r .. 1
.

_.

_,

r ,(:[O,)I'j( .' .

x,(I)

._

-(I'

....

·and th a.t
,'I -

I"

,I ..

Thus" for the first-order model


a'~ -

[ -a ]
1

For the second-order model

and
,.
'!l'

...::::2. ___:::. [.;; 1 _.

._

_.;'T"-'1 'v. tv

so the coefficients of the model are

3.z;=
,I

1
'-'(1'-

I.

Continuing, we may show 'by induction that rj =: 0 and €-j' suppose that at the }tlb. step of the recursion rj =; 0 and

== ,0'; for all i

>. I. Specifically,

,OJ ~ [1~, ~a., 0, ..... ~ O]T ,

Since. Yj ==

'x' (J· +.1) + ~ (~) (. _ z..- 1) La/I'Tx} +.


~-1

I r :=:

( -).Ill' ''xl" +.111)

,-

(.' (lTx·.J' ")

we see from the form of the autocorrelation sequence 'mat Yi ~ 0..Therefore, ~j+ 1. = {) and. 3j+l is formed by appendinga zero to ,aj and the result fonow'S. ill. summary, for an AR{l)
process,

'.rl p _ [....~
1 -.~~

I'V'

'0-

"j]

l\

u.~ ...~"!II


".:

'I]~ r

We now look at the computational complexity of the Levinson-Durbin recursion and compare it to Gaussian elimination for solving the pth·=order- antocorrelatien normal equations. 'Using Gaussian elimination to' solve a set of P linear equations in. p unknowns, approximately ~ pl3 multiplications and divisions are required, With the Levinson-Durbin recursion, on the other hand, at the jtb step of the recursion 2j + 2 multiplications, I division, ..and 2] .+ 1 additions are necessary, Since there are P steps in the recursion, the total

THE lEV1NSON-,DJJR:BJN' ,RECURS1.0N· number of maltiplicariens and divisions isj

,223·

"'(2' ., .) + 2' ,L.-J "..'.1' + 3') =: P 2 ,,·"PI


;~-~

..

];;;;;0·

and the number of additions is

L{2i +. m) _.
].=0

p·=:ru

p2

Therefore, the number of'multiplications anddivisions is.proportional '£0 p2 for the LevinsonDurbin recursion compared with. p3 for Gaussian elimination, Another advantage of 'the Levinson-Durbin recursion over Gaussian elimfnation is that it requires less memory for data storage. Specifically, whereas Gaussian elimination requires ..p'l memory locations" the Levinson-Durbin recursion requires only :2(p + ]) locations: p + 1 for the autocorrelation sequence l"7i (0), .. r~ (p)., and p' for the model parameters Qp ( I), [Q p (p), and one 'for
m ." r. •••• ,

the le·.··ITQI' .~'. HI. ., r 'fi::.p~

In spite of the increased efficiency of the Levinson-Durbin re eursi om.over Gaussian eliminanon, it should be pointed out that solving the normal equations May only be a small fraction of ~e 'total computational cost in the modeling process. For example, note that for a signal . of length N'II computing the autocorrelation values rx(O)~1 ..~"rx:(p) requires ~ approximately N t» I) multiplications and additions, Therefore, if N ».p" then the cost associated 'with. finding the autocorrelation, sequence will dominate tbe computational requirements of the modeling' aJ.gorithm..

'+

One of the by-products of the Levinson-Durbin recursion is. the lattice structure for digital! filters ,. 'Lattice 'filters are used routinel y in. .goo'wl ;filter i~mp~em~ntat-ia:n~s a result of a
number of interesting and important properties that they possess, These properties include a modular structure, low' sensitivity 'to parameter quantization effects, and a simple method to ensure filter stability .. In this section, we show how the Levinson. order-update equation may be used to derive the lattice 'filter structure for FIR, digital filters. In Chapter 6" other Ii ~ ·'[1 .•-_ l 'le tatnce anc the poe-zero d '111.. '1 Iattice fil ters structures 'Wl' be dleve I ope dl-·I~Inc1...1!'; uulng the all -poie t" "" lattice fi~.terS1' it will be shown how these filters may be used far' signet modeling ... and _'· fil .' gln.s Wl.·tbI, the L ~ del d .. Th ,1..... • 01 tne li.... .. I,ederivation fth Ja,ttJce..: tel" be' - . ~'I .evinson order-update equation given ill. Eq, (5~.·r .. First, however, it will be convenient 'to define, the ~p;ro?fJl..¥ectac'J denoted 6) by af, which. is the vector that is formed by reversing the order of the [elements in '~j and 'taking the complex conjugate, .
+
l d·'

i.

I
[I

1. ~

It* ("J"),. '.'.1


1 ••••••

uj(l)
ap(2)
~
'i'

a~(J'~-. ". 1) 'J ". a.!(:'J'~ 2). J '. .


=
'

. ~

R -. a· 'j _.'

aj(l) 1

~'H'f
'''l
. I.

lEVi" I...·· ".


j':'_.O'

N'
If.

'. 'SON"
r.' ....

~. .'
J.

. ". "

,", RECUR'
".'

.' . [-,
'.

-.

I;.'

"

S· ~ON'"
. .I

..

: . I' '.

.•

..

~.

._".

~.

-" ...

-~i:;""-

..

or,
.'
;

R('.) ,aj',:ill

:=

.') ,Uj*('. ._ i . .'.J

(5,.20)

for ,~:,= O~1~'.~'. ~,j ~ Using the reciprocal vector in the. Levinson order-update equation, Eq~(5,,16) becomes (5,.2~)

Willi Aj (z) the z -transform of OJ -(f) and (z') the ,Z -transform' of the reciprocal .. -C' d' .' if!.' ( ... ) aj/R'(' ;: ~,It.,'£,0-}··'1··" -u fr _ ·14.ri 1':;,.,-'2"'0"';;',')' ,~·L· . A" ',j'Zi-"II W,S re 1'ater to A····''j ':4,11)(.bv , .,). ': ·'· OWS rom cq..(C' mat ("
·1' .

A,:

sequence

,
"

(5".22)

Rewriting Eq, (5.21) in terms of '~i(Z) and Af(z)

gives, (5' 2''3")'


'_ :,.1... '~' . .r '.:

. ( ,_, . 'j .). .. A' J+1£,' ...) ~ A ,( I.', +'- I", Z--1 A,' R ( .) }+1 '.j \,~
whi. c..' "lS w, '-'h
1 ' ••

1'. -, update v'1J,:- w,~" ' '-' fo~I" A-' i "';C .)..", Ann' arion :" ~ (..~ '. '.' r., ,~O'.' riv '<lIn' ;., ..., U ,1lJ!,U!r de . ate "1~P!~ '.'R a R - .(. Th'" nex., .s~p lS .~"'-".·n,·,. e. w ...rde 'r·_'I--p,AII ..... .....eouation for ,(lj+]}"('~)I nd A-''1+IZ )..~Bezinnina e e , ,t '.' c.'& with Eq, (5,.2m )';' note dlat if we take the complex conjugate of both sides of 'the equation.
-de: aD. on .'.:r-,~,,~!!!.!.,.i
,.. .. -.

II.

'!Ii" ....

,'

'0"

,"",.I,

~I,

.1,."

'.H

IQIT'I(jii 'T"-Finl 'iW.U,1U!!. .!.

11.q;,C'P ; 'W-· ith I, 'Iir',[rl,""'Ii,.r "'_'. .

"_.

J'. -

,~

; +': 1 w:e" ha 'v'·e·.


. ,', "" . -.

* ,.( aj+li (.. '~,l . + 1) = ~jl . -:- ''.

'=

11 I. + I;' + r:f: ajl('0) r= ---', ./+l


''a]

(5.24)

Incorporating the definition ofthe reversed vector, Eq, (5~20)~into Eq, (5~24) gives the
desired update equation for

,af~'1(i).t

.. ,R _f~)1,~,R(':
0j--:_]_ \:~ .. """""'":' j ,u

~..l.- 1) ,,'
'"

+ "r':!oIo!, uJ.-J 1+] "(~,).


I '

'.

(5.25)

Expressed in. the. z-domain, Eq. (5,.25) gives the. desired, update equation for A:f+ l (z),

A~- 1...... = J+ '(7')" ~

~1 A.' '~"(" )1 ,;, " ~,l4

,_

+, I'~1 . .1+

n.,)

A, ~i,_ ')

'l(.Z~

(5.26)

In summary, we have. a pair of coupled difference equations;

(5..27) fo up ..anng ,(1) . n ann o'j ...n.).,~, ong WItH a. patr 0·. coup. e equanons t;., nodatin the system :-1:,:-,,"'·/'''1"1...'1 :.~ fcouoled. -,I' I+~""" lor iIDE.pung .' e sv ..'t: '.', or mdatinee .'·(··:')I··I·'~i.,.R',(, . '.' a .... a functions A,i (z} and A,f (z), which 'may be written in. matrix form as
I

.~

(52···8···.... J
~:~ ·,L,· ': •.

Both of these representati ODS describe to the two-port network. shown. in Fig . .5.2.,This two-port represents the basic module used 'to implement au FIR lattice filter, Cascading P such lattice filter modules with reflection coefficients r 1 r:2 I' fOnDS the, ptb,=order Iattice P filter shown in Fig. 5 ~3 Note that the system function between the input a' (,t) and the output ,. a_p(n) is A p (z), whereas the system func~on relating tile input to the output (n) is (z) ..
'I' 1. ~ .' " l'
'j
.L

a;

A:

':,i:lgure 5 .>- A two-port networ~ .UiatUig Clj,n .' anti F~I.. r '1L ',' 2 all: (1'1).1 to elj +~(n) and at+. l, (,n).. This ne.rn;,{),rk is a sln,gle stag« of an F lR' lauice fi'lte t: .
i ,',.
iI ()'

,..:
r'IO~'

" ..

,~

,'"'

'

1---";,--

'''!Ii.' ~-":-,,---I

Op-l(n)

,..~ (n')'' u..p ,


,1",'" ,

,~(n)
,.Jr.

r 'p

.',
IFiglun

...... ..
'

{I,:(n)

5.. A. pth-order ,FIR lattice filter built up as ,a cascade of 3

p two-port networks.

''13 5 ...... ,.1, "

.. P~pe",'r"~;'~e'
IVi"""

,"Ie ," "~

his section, we wil ill I n t, 18 secti

bli ',' ~ ~ estab ts h some important - propernes

sequence ,;;.L' is generatedd b th 'L" that by the ,I evmson~mportant property of the solution to the autocorrelation normaleq nations. Specificall y, we .. win show that the roots of A,p (z) 'will be inside the unit circle if and only if the reflection coefflcients are bounded by one in magnitude Of, equivalently, if and only if RIp' is positive definite, We will also show' 'IDaltthe all-pole model obtained, from, the autocorrelation, method is guaranteed.to be stable, FinaJ1Y1' we will establish 'tbe asaocorrelation matching property, 'which states that if we set b(O) ~' in the autocorrelation method and if h(n) is the inverse z-transform of
I,

'L, t~ renecnon coerffic ~e 11' cient 'D b'. recursion ,M'd' we WTh, estabhshh an. " "II' in.. lIi'· ',UTIn

f' 0':

,H(.z) =: '

b(O)

A-p(z)

.'

then the autocorrelation of h (n) is equal to the autocorrelation of x (n ) '. The first property of the reflection coefficients is the ioUowiling+
Property .1' The reflection coefficients that. are generated, by the. Levinson-Durbin recursion to solve the autocorrelation normal equations ,3!1'ebounded by one in mag.... itud_e~ • - _, 't nIU ,Ii'·, [I'J, ~«: 'l .'
ii,
• 1

Thi s property follows immediately from, the factthat since

Ej

is,the minimum, squared error,

e, := L [,e{n)12
Di.JI

n:=O

2:26

THE'1EVINSON rRECURSJON
fEj ~

where e(n) is the j th-order mOGielin,g error, then

(t

Therefore, since

it. follows that if.Ej .> 0 and

€i~']

> O~then

Irjl <: 1..

It is important to keep in :mind that the vaUdity of Property

• ~ relies on the uonnegativity


t

of f) ..Implicit in this nonnegativny is the assumption that the antocorrelation values .rx(k) in R.p are computed according to Eq, (4-.,1,21).. .It is not necessarily 'h1Je1- or example, that f IEj ~ o and Irj:1 ,< 1 if an arbitrary sequence of numbers r, (0) r:x(l)~ ~. ~" 'i(p _, I) are used in the matrix .Rp. To illustrate this point, consider what happens if we set r, (0) ~, l and r, (1) 2 in the normal equations and solve for the first-order model, _ this In case,

'which bas a magnitude '~hatis greater than one+ This example is, n.ot m. conflict with, Property 1 however, since for any valid autocorrelation sequence, Ir, (0) l ~ ~x (l ) I.,~ we 'will r soon discover (see Property 7' on P'~ 253),~the unit magnitude constraint on the reflection coefficients F j and the nonnegati vity of the sequence rf.j are tied to the posi tive definiteness
'l

-~ ,,. -,~. R" ot lb·';e matnx:p"

The next property establishes a. relatio nship between the locations of the roots of the
polynomial A p' (z) and the magnitudes of the reflection coefficients

rj ~

Prop·eT.'Y 2'. Ifap'(k.) is a. set of model parameters and


ref ection coefficients, then

rJ is, the

corresponding set of

A( . 'p.l
To

' ")1 ~
m

'11,I
I.

+ ~ap
~',

(k:')' ".Z

-k

k=l

cir-cle) if and only if Irj II < I for all i., Furth ennore if roots of Ap(z) must lie either inside- or on the unit circle.
'l'

~'ll~ i. I j,il.. fA () ··in be" t, ~ WI;eb a minimum pnase potynomiai j' ('alli: 0:f tne roots 0':, 1,.pZ.: Wl1.1'.-,tnsiid. tne nmt < .'e....

Irj I <

1 for all

i then

the

There are many different: ways '10 establish this property :[m 4'1' 20~ 24]~ One W3:Y:~aLS, demonstrated below" is to use' the encirclementprinciple (principle oflhe argument) from complex anal ysis [3]~

,..

.Enc,ircie,ment Principle .. Given a rational function of


.

~Po. .

P (z;) ~" B'(,1: ) "


., .-_ A(z)
.Z,

:t

let. C be a simple closed CUfV'e in. the

plane as, shown in Pig. .5~4+. '[he path C ,As

is traversed in a counterclockwise direction, a closed curve is generated in. the ,P (r) plane that encircles the origin t{Nl - Np') times. in a counterclockwise direction 'where Nz. is. the number of zeros inside C and. Np is. the: number oipoies inside C'.4

THE

' lEVi''N'S-'ON"
r r ". r
j" '. • : • 'J

",'

.' .
• ,..:' •

1'.'·

00', 'llS1N
I, r ~
I ~J ,.' '. (

REO' .
• • •

,,",.,

URSIO'" "N
r. • • _ • ," ._

'._' ,

:227'

Jirn('-)I '. .t..

Im.P(;d

X .!

o
x

Figure 5·A A simple closed curve .in the z-plase encircling [a simple' zero and the corresponding contour in the: P (z) plane.

The: [encirclement principle ma:y be interpreted geometricaljy as follows, Since C' is a simple closed curve, beginning and [ending at zo~then .P' (z) ·will. also be a closed carve, beginning
di an.d enomg
the' at. '1'. •. same point, P ·:ZO,I,. 'If' .f.L ere ," ( )' =: u....
~ IS·

~ ··tbi·, ie th a. zero contamecd 'WJ._:,)j·D.··th contour ": len.

....

the angle f) from the zero to a point on the contour will Increase by 21r as the contour is traversed. In the P'(z) plane, therefore, the curve that is generated. will encircle the origin in a counterclockwise direction, Similarly~ if there is a 'pole inside the contour, then. the. angle 'will decrease by 21t" as C' is traversed, which produces a. curve in the P (z) plane that ~ .. ~ . me ·00fl>o· d~ ,~ rill, ..... N 11 .• id t., me s, encirc I'es the ODgIn m .,,1,. c 1 ...Wise ."erecnon. l."1·l.:th· N, z zeros an d . r p poles msme C'" ... ...... change in angle is 21r (N:z. _. N p) and the encirclement principle follows, Althou.gh not stated in the encirclement principle, it should. be clear that if there is .8. zero of P (z) on the contour · ne . (z) 11""_ +1'1' bL h I,. ... , th til C· '~:'Ien me curve In 'ith P'..:z:: psaae wut pass ti:dvugi·· the ongm, With the encirclement principle" 'we may usc induction. to establish Property ~., Specifically, fora first-order model,
i.'

(z) A t,Z: = "II + r-- 1,Z _.] 11.'

b' if if ~ r pnase 1,' an.d: on'1 Y l..~ ].;!II < 11 Le us now assume I .11.~ Let .,L ( ~ ,. ~ d snow 'Illtl:at A' :1+1 :2:'1 Wi[., I: nummum ph ase 1:: an. ony 1:. .. 'iL.. () Ln"be ·P.· ~ d I' .f f that AIL, ) ~8 rmmmnm pih ase an.' ...,,11.. /.
I

.. an d ].1 follows til .'. :[..::. 'W1:lti be_' mtmm~ .. i in nat A (z) 2 :
~'li~

-,

1..To accomplish this, we will use the z-domain representation of the Levinson order-update equation given in Eq ..(5~23),'which is repeated below for convenience
<

~. I! rJ+

(z) .( A i+I'.z[ .= A' j::Z· )

. , . I' '}+lZ:

-IA' j,..[:Z.i ) . :(

, R~'

Dividing both sides ofthis update. equation by Aj(z) 'we have

P(z, '.

",..: .)._ Aj+,] (z)


.AJ{z.)

,,

= 1+ rj+1z '.=A'~' ' (~'. ~),'


"'i

..

'

-l

Af(z.)

,z·

(5.30)

'JTUf,"' 1'-J

I~C'!o tU;:" JIk

fe0N··· ." .~ S· r" ... · I R' I~C':.~.lR.· .... I'-If "( ~ ~ .Vli
\

""'' ,1'

''\'];.1

phase, A j (z) has i zeros inside the unit circle and j poles at Z = 0..Now let us assume that id we t '..'!Ii · id •. .'0." A j+] '.,Z. ha l. zeros outsic e .. , unit crrcte an d' "J '. + :I _', I zeros lRSi.·'e",.: S'".mce A' () '. 'S ,It '.1+1:() z: W11:1 have i + m poles at z ,= 0" it follows that the number of times that P (z) encircles the origin in a counter clockwise direction is,

. h 1 . J.,,:;......~ I, 11 ....t.. h' b 1',e Wl+l'ilinow SJ.OW tha t I "_ 18 a erose d .contour tb traverses me umt circte tnen tne numoer '. I".' if .Iat of times that ,P'(z) encircles the origin in a clockwise direction is. ,eq~ to the number of zeros of Ai+il (z) that are outside 'the unit circle, Since. A:i(z) 'is. assumed to be minimum
C" ~

Of,

1 times in a clockwise direction.. Now, note that since the contour C is the unit [circle, then z e.l~' and

'_
I

rj+ z:
1

_ -II

A.f'Cz') .
,A1ez) :c

" '.. " .

I:

=.

rj+l

II

A;

(e.j.f)))
j(Jr)'

AJ(e

,_ /

):c

= Irj+~, ~

Therefore, ]it 'follows from Eq, (5~30) that P (z) traces out a curve which is a cacle of radius [F j 4- I ~that is centered at z = 1 (see Fig, 5.,5),. Thus, if I: I' 11 ,< 1.; then. P (z) j+

does not encircle or pass through 'the, origin, Conversely, if "ri+ll > 'I then P(z) eneirh .. ~111 .. ·'d ,d... .•• C c Ies tl: e ongm anid A'j-..Ll l )"'''.v I; not: have at -0,f i ,zero's mg·]):'l '~jll,e: unit ClfC Ie~ ::.onse.... ,Z.· will ItS. ,e quently, if A j (z) is minimum. phase, then Aj +1(z) wiU 'be minimum phase if and only if
~rj+l~-c 1~

There is an intimate connection between the positive definiteness of the 'Ibeplitz matrix, Rp'~and the minimum phase property of the solution to the normal equations ~In particular, the following property states 'that .Ap (z) will be minimum phase if and oldy if Rp is positive definite (see also Section $,..2.,7),.

1,_

Prt'p·erty' J. Ifsp is the solu~joij to the Toeplitz normal equatioas Rp,3p := Ep'Ul,,' then Ap('z) will be minimum phase if and. only if .Rp is positive definite, Rp > O~

bnP(z)

...
I.

II

ReP(z)

II
,I

'ligJure 5.,5 The' cu,roe' P(zJ, which is a circle t;fradjus ~rj+ll thas is centered ca Z :=: L

TLiI'~

,':~.~~JC'i""\Ji;._I~OO' r AD. n,ID ,'nt:; ILI:'¥':I'"~''' .,"

IN.. [l\iI::C:' " ~\t:'


r
U"

.:iD~/ ..· a ..ft.il.l.

11k.1 'J.""III

To establish this- It'"" - .t"'O;O' -:J t- let III be a ~root of the r--' ,,..'f '"'..oJ nrooertv - . . - nolvnomial , A: (.z) 'In general '~"Cl' will be -- - -',p . ....,.,. _., , complex and. we may factor Ap (Z) ss follows
Ap(z)

= 1 + ~ap(k)z'-"
~ k,-l

=:: (]._

az:- [)(1,+ b],Z'-- l

(l} ,+ bzi---'2 + ~.~ bp,-lZ-':P--) +

'We 'win show' that if Rp > '0, then


I
up,(I)
Gp(2)
_i
" ...

lui
I

< 1" Writing the vector a:p in factored form as l b 1. b'l

o
1
b1
'.
e.

_':
!
,r

'[: 1 ,], 1_((:=8

.[

-~

3- 1 .. (5-_....: 'j[1').
I

, i: ]1 .. 1.1 10:, ows ..&.tnat

Mu]tiplying

Eq. (5,.32) on. the left by ,B.H we have

BHR B ,[- '~(l',r , .....:p, -:

'J-

= E'l!'

[~'

0,

I ']~

Since B bas fillU rank, if Rpris, positive- definite, then B HRpB will be- positive definite (see
11U" .. ',1',· P, •. 4'.(11.), '.: TIt· erefore
'.1,1

,1I.!\r.II~

..

'which implies 'thwt


~ 12. > ~s;o_
'['Slll I' ~2

S·, _Ulce

B, . , -HR"· p'B:,

S
'1,.11;.--

nJ

_l_

* So

Thus, Eq, (5,.33) implies that ~,a <: ] and AT?' (z) is minimum phase. Conversely, if ,Ap(z) is I not minimum, phase, then A.~(z) will have a root a with ~a'~· I. Therefore, fromEq,. (5.34) >, ~ it follows that Is:~ >, ~ II and ,R" is not positive definite. II So _
,

Property I states that. the reflection coefficients that are generated in the course of solving the autocorrelatioa normal equations are bounded by one in. magnitude Property 2 stares that a. polynomial A (! (z) will have all ofits roots inside the, unit circle if its reflection

coefficients

are less than one in magnitude, ICO'mbm111g:Propertie-s "I and 2. leads to' the

foll owing property,

230

THE .LEVINSON' ,RECURS1ON

Property 4,r.The autocorrelation method produces a stable all-pole model

In the next property, we consider what happens to the roots of the polynomial ,Ap(Z) when. II' I: == I with the remaining terms in the reflection coefficient sequence being less p than one in magn.iimde., Properly 5"-Let aP' (k) 'be a set of fil tel coefficients and I' the corresponding set of j reflection coefficients, If irj'I < 1 for j ~: 1,., p ~' I and !I F pi = 1~'then the polynomial
.e .' ~

Ap{z)

== I +

L ap(k)z.-k
.P

k=1

has all of its, roots on the unit circle,


This property may be established easily as follows. Let rj be a sequence of reflection coefficients with ~ri <: 1 for j < p and Irp r -, I Since ~rj' 1. for all }" it follows from I I< Property 2 that all of the zeros of Ap (e) are on or inside the unit circle. If we denote the zeros of ,Ap(z) by Zk, then
+

II

Ap(Z) ==

fl(~ cZkZ~l) _,
p

k=l

By equating the coefficients of z -'.11' on both sides of the equation, 'we nave
Qp(p)

==

k:=1

n
p

2:1

(5~35)

However, since

Gp (p)

=' r,p, ]if I,r'p ,I

== W,' then

nl:zkl = m
k;='~

(5',.36)

Therefore, if there is a zero of Ap(z) that has at magnitude that is less 'than one" then there must be at least one' zero with. a magnitude greater than one, otherwise the product 'of the roots would not equal one, This, however, contradicts the requirement that all of the roots
must lie on or inside the unit circle and Property ,5 is established,

.1

..

This property may be, used to develop a <constrained lattice 'HItler for detecting sinusoids in noise (see computer exercise C6~1 Chapter 6)~,It also forms the. basis. for an effiin cient algorithm in spectrum estimation. fOI finding the Pisarenko harmonic decomposition (Chapter 8) of a stationary random process [I l.]. The next property relates the autocorrelation. sequence of x(u) to the autocorrelation sequence of h (n J~ unit sample response of the a.U-,polefilter that is used to model x (n ) . the Specifically." as we demonstrate below, if b~O),the numerator coefficient in H (z), is selected

._

231
so that .x(n) satisfies the. energy matching: constraint, r;l'(O) = .rh(O)~ then b{O) -

.JE; and.

the autocorrelation

sequences are equal for ~ [I < p ..This is the so-called ,~tocorre lation k:

matching prop'e rty.

Property 6--A,u.taclJrr€·laiion matt-,hing prt)p,erty~


energy matching constraint, then b(O) .;; x (a) and h (.n) are equal. for' lik:! ':5; p ..

-JEP' and

If b{O) is chosen. to satisfy the

the autocorrelation

sequences of

."'"

In order to establish this property, we: will fo-1~('Jw'the approach given in. [18]+ We begin 'by noting that since h (n) is the unit sample response of the all-pole model for x (n) with

" ..-

where. the coefficients ap (k) are solutions to. the ~o.rmaI 'equations (5 ..37) then h (n.) satisfies the difference equation
.J

h ,.n(".. ).. +._.~ ap'"(k)" h, _ (: Z:: . ...n.


k=l

.P

,-,

A,.,

1 )· ..;,.,;;;. ..

..

b'(0' I).,O .n. ). ~I('


,'
l

(5 .. .8) 3

MUl~tiplying both sides. of Eq, (5,.38) by h*(n -I) and. summing over n yields
00

Lh(n).h*(n

.,.=0

-I)

+ L,up(k)
k.=1

00

00

Lh(.n ·-k)h*(n -l) ~ b(O)'EfJ(n).h'*(n


n =0 n -,0

-l)

= b(O).h*(-l)
With
.;.,;.
..

rhO) ==.

L .h.(n)h*(n. -_ I) -~ '-h( ~l.)


00

.~:d)

we may rewrite Eq, (5 ~ in terms of 3:9)


P'

Fh (I)

as follows
k)

Th(l)

+ .Lap.(k)rh(lk:..:::d

== b(O)h*(-.l)

Since h (n) is causal" then h(n) = 0 for n -c 0 and. .h(O) = b(O). Therefore, for I > 0" H.

follows that

r» (./) .+

.L ap(k)rh (I - k) = Ib(O)
.P

12(; (I)

:;

l > '0

k=]

__

'=""" __ ..._----.....

..~

232

THElEVlN~SON RECURSION . .-

US~Dgthe: conjugate symmetry of Th(l) 'we: may 'write Eq, (5.40) in matrix 'fonn as
"1[

TA (0)

rZ,(l)
rh(O)
(J' ,

~(2) Th,: .....


~h(l) rh(O)
, .
~

[!

'[,I
i!

rh(M) rh(2)
,.
+

'.

rh,(l)
• .' • -

rk*( s» '*'( 'h"'P,

,r;(p)

-,

:
1

- 'I) - 2)

1.
op(l)

1.

I ,. I

0
.

ap(2)
'i!

== lb{O)12
:i
:1
I

().
~ ~ ~

(5 ..41)

..
L

'

~
-,

..

rh('p)

rh(p'

1)

7h(p
• r·, ••

2)

Th(O)

'. ap(p)

,.

0.

or
': , - ::•. .' . ~ 1lll.1 R'.hap '~lb'··(:'O·)·t2 "
I •

(5,.42)

Therefore

lip

is the solution to each of the following sets [of 'Ioeplitz [equations, R 11: a ~,p
.R;z·,3p

=
==

~bl2.u ] :"0 '


1 1

€pU.1

(S 4:13')
·l!>[~(;' •. :"

N OW' S uppose malt b (0) is chosen to sati s~y the energy matching constraint,
t

rx(O) =

:E' Jx(n)1
00

L ~1·h(n)l2.= rlt(O)
OC"

(5 44)·
,.;;, rI! .....

By induction we may then show' that rx (k) :== r» (k) for k > 0 as. follows, Using '(he Levinson-Durbin recursion to solve Eq, (5,.43) we have
a'~
... 1 ".

... (m, ' .- -.


r ._,

Tx(l)
7,
r

, -- ~~-rh(l) (,0)':' r'A.I(.O)


,
n:
..
8

Since rx(O} = rh(O)~ it follows that rA:(I) == Th (I), Now, assume that rx.(k) ~ k = 1~2,. ~... ~} ..Using Eqs, (5.10) and (5.13) we have
r.l;(i

Th

(k) for

+ I)

:~ ·-rj+],Ej ~

2: t~l(i)rx(j
j ~='l

+ I ,-_ i) +1i) rh (}

'h(}

+ 1) ==:
r» (i)

l'
-rj-.,.l:fj' ): .i~1 aj

(i)rJ~(j

(.5..45)

Therefore, since
T

Fx

(.i) :==

for i

:=:

1~2, ..,.

.,

i. then

r ~ (j

from Eq (5 ..41) note that'

+ :1) ==

+ 1),. Finally,

Ib(O)1 = rs (0)

+ L (lp(k)rh{k)
P
1(-,1.

~='~ ..

Ib(O)r . =: r:;t.(O) +
2

L::ap.(k)r;r; (k)
P'

== f'p

(5 46)
a,

Therefore, b(O)

= ....(E;' and 'the autocorrelation


"r"."
J" '

k='l

. matching pr-operty is established.

I!:_[ ,;;;:'1",

24

,.,

The,' S···:;~p'-'U-'.':p·-·· [and


,["[.,,,I~r,J

Sitep··:-DlAown·_':'r ~[R'·:e··:c·.u·rsio-···ns·:
-"V'." , .., .. ,

The Levinson- Durbin recursion solves the autocorrelation normal equations and produces an all-pole model. for a signa]. from. the autocorrelation sequence, ,);'(k). In addition to the model parameters, the recursion generates a. set of reflection coefficients, F 1~ I' ... ,. rr» 2.,
t

THE [tEVlNSON-DURBIN RECURSION " "

233

along

with. the final. modeling


, _.. ,.

error,
rx'(p)
". .

'~p.'

Thus, the recursion may he viewed. as a mapping


.',
:

{ rx(O)"

r~~(l)~
H

"

"

,I } LEV' "

('2')" a ,' .. ' aP" (['1") :P·'. ;", •. , .. "~po I(p')' , ,U' ("0;')''' ... ' ..
:!!.' a"
'j

'.' ~

,~

r1",1:2"

,... ~ "

rp, '£'p

from an autocorrelation sequence to a set of model parameters and 'to a set of reflection coefficients. There. are many instances in which it wou1ld be convenient to be able to derive the reflection coefficients, 'rj,~ from the 'filter coefficients ap(k)., and vice 'versa. In this section, we show how one set of parameters 'may be, derived from the, other in a recursive fashion, The recursions for performing these transformations, are called the step-up and step-down recursions, The Levinson order-update equation given in Eq. (5..16) is ,3 recursion for deriving the filter coefficients ap{k) from the reflection coefficients, ri'" Specifically, since (5~47)
(i) may be easily fouad from OJ (i) and 'rj+l ~ The recursion is. Initialized by setting ao(O) .~ 1 and, after the coefficients ap(k) have boon determined, the reQj'+l

thenthe 'filter coefficients

cursion is completed. by setting b(O) '= .J?;~ Since Eq, (54.47) defines 'how the model param- . eters for a i®h,-oFder 'filter may be updated (stepped-up) to a (j ,+ 1jst-order filter given r i+ 1·, Eq, (5,.47) is 'referred to as the step-up recursion ..The recursion, represented pictorially as
Ii r2 ' S \rl;.:·" ~ ~ ., , r lH €.p. }'t,ep·~.u.p {' '~;.ap,r
I

(.

2 .b 1), apt')!, ~.."'",0p,(p)t'<';::.i'(0)' )


"J ••

· ~ ,. ~ ,-lg o. is summariz· ed lnJ..,a; ile 5 ~:~ d a M' ~< ,. ~ 'b- :.,'2 001i. ::ATLAB· program is, given m 'F"· 5'6

-,;:.

1~ . Initialize
2<>. .

me recursion;

Go,tO) =: 1

FOr J =: 0, t '.....p ,~ 1 ,
(a) For ,; = 1., 2~~, ..~~J ~
aj+~(i) ~ aj(i) (b)
,(1)+1

+. rj+1,aj(j

-i

+ 1)

(j

+ 1) == fj'·H

3..

b(O)

=~

function ., a~l:r
:5!l,_,;:r,

,a=gt'o,a ( gamma,.

-1f1i,::I,,.........,,'~ J' ~ ) • g1[~;~;u:U.!i::aL:--,~j~w~,,, p~l.ennrth,t:l!llt.'!fama:na" ,. ~ ...... . ~:. for J'I' =·2 ...T"+1. .. j ,9.= [a.i' 01 + gamma. (j
If .

[1"

j,

II'

-- -

"

[ ..~

'-1)'~' '[[0; oorrj

tflipu,d

(a) )

1~

_._ ...':11 tt\!I..1U

ASure ,5~6.A MATLAB pmg-m.m for the' step-up recursion to find' .theJilter cOf!fficientS' a_, (1)from. ,the
,re 'CtitJ.n,coefficients..
-

2'34,:
1.

Given the reflection coefficient sequence

r2

2'

[r! r,2]'T
t

first d ,~jl ~ be fonnd fr F-L' ., th e nrst- an', second-or d er au-pose mo d e IS may , ::' 'Loon , trom tne step-up recursion as -OIL follows ..From Eq, (5r14), the first-order model is
31

=[

al~l) ]

= [ ~ ] + r, [ ~] = [
1

;1 ]
0, 1

Again using Eq. (5;~ we have for 14)

,32;.,

1
.82 ~

II a2(1)

al(l)
:[

:. ,a2(2)

;to; a>' (' ''1) :. I '

il
~

rl
0

Ii

,+ r2 r* 1
1

~
I

rl ,+ r;'r2 r2
I'2" the general form. for the. second-

Therefore, in terms of the reflection coefficients

r] and

order .aU-pole model is.

Given a. second-order model, a,2, note that. if a third reflection coefficient, to 'the sequence, then 83, maybe easily found from 8:2 as. follows 1

r3, is

appended

'1

I
, + r:3:

'a2{1)

~2(2) , 1

Thus, it is always possible. to increase the order of 'the. filter without having' to solve again for the Iower -order filters.,
The step-up, recursion is a. mapping, £. : .rp ~ ,ap~ from a reflection coefficient sequence to a filter. coefficient sequence 8p,m In most cases, this mappingmay be. inverted" p.- ~ ..I ,',':. ......:1' eenru:ne t,n,jut,th"~.'em.: 'il ' iod L,.. , : ,a ~ 'r" , p'" anU. th" - 'fl" - +~;, Coeffi 'clenllS tie,t'·, " "', d. ~ ............. ,ell. par,aJne t" .. I .r e:re ~ec,LJilo:n .':...' ~el':S~, lip.

r,p

'c"

The mapping that accomplishes this is called the step-down or .backw:ardLevinson. recursion, The procedure for finding the refi,oction. coefficients. is based on the fact that since

rj't"'lD .....

::=:

aj':j::'

("')

(5 ..48)

then. the reflection. coefficients may he computed by runniag 'the Levin son- Durbin recursion b ,'1.~.. _..J~ Specmes all:.y,,~gJJnllrng- WI',:I &p we set,p' ;-R b .... ith T' ( ) ." .. 1 ,l.n. _ackwards. S' {l.p':P·.I,., Tb ent. we recumveiy fi d

each [of 'the lower-order models,

8j,.

for j

:::=

p '_ ii, P ~ 2, .. 1,1 and set


'I' ~

rj ~

Qj(j)

as

illustrated, below
~
-'

3p
ap'~l
Il;p-2

a.P (p ,:"

2 "
.

')" '

,_

ap---l

(1)

,Qp-l

(2)

~'.'~

ap-l

(p -,2)

.. ..

32
"f

,~,

_'

il2(1)
~

31

TOI see how the ith-urder'mode.1 maybe derived from the (j + Ijst-order model, let us as~,um:.e that the, coefficients aj-t-l (i). are known, The Levinson order-update equation, Eq, (5.16)~, expresses aJ+l(f) in terms o:f ,Qj (i) .and aj(j ~'i ,+ 1) as follows
aj+~(i)

= aj(L) + rj+1a;(j

~'i

+ 1)

(5.49)

which provides us with one equation in the two unknowns" OJ (l) and aj (j - i ,+ 1)~However, if 'we again use the Levinson order-update equation to express ~j'+l (j -- i +, I) in, 'terms of ,aj(j -. i + 1) and aj(i') w~ have

aJ+l(j - i ,+,1),

= aj'(j

-i

+ 1) +,rj+i,a;:(i)

which, after taking complex conJugates\!'

gives us a second linear equation in the same two unknowns. Writing Eqs. (5.49) and. (5.. 0) 5
in matrix fonn we have
, "1+

a '"

I' ,(;) ,,10,

']

[:[1 ,

I'

'* '. [ ,aj-L~,(1., - ,l.' + ,[ :_ 1)

,i

*' rj+l

',r' "J'~+l ]: ,[
"

I[

a,:",J'" '(I' ) "f"

"~'
'

11:.

aj(j'-i+l)

If I'rj+'~ :¥:- I, then, the matrix in Bq, (5,,5:]) ,is ~vierti,ble ,and we may solve uniquely for ~, 5 ,Qj (i) ~ The solution is
,

(S.52.)
which is the step-down

recursion. This recursion may also be written iin vector form as


,
1

follows
~

aj{l)
I

,Qj+l(l)
:~

(2) aj'''',J
,I,

1
1

Qj+l(2)
..
"

• ~

1. - Irj+1f;2
:__

II
-

a·,(j) '" '1':,

OJ+l(j)

-,'rj

I
:

+],

a* (j) 1+1:,,' * (" 1) ai+l[,j


>
'--="'

• ..

'I

(5.53)

* ',", "j+l ('I)


,r

,,.;II (J, and'I, tb recursion continues b y ") ~ .. Onee ,.'J.. sequence ~J(~) nas been IOnn'UL, we set I'j . me I: 1:...;;.,,,,, t _D f I te -..A, 11 ial Th d ,. 'fin d'" th next Iower-order P01JJ,y.nOml, I .,1, : ' e step-down recursion, representer dn nng ~ne ,pictoru"ally
111

= a.f'

_!

· .....",--

236'···
'.
"

'RlO'""'U· .fi,It'~.' 'THE' ~YI\J.~ .1l":.......N> : [,··,L;II\...·.···,l-'i;v.. UI··..., - , ~'~N"

as
'{ap,(l))
, .

ap(2), .'~.'; ap.(p).~b(O)}

.'

,. "Step-dow,~

=---+

{r'l; r.2~' ~ r p" Ep'}' .. '.,

,,

"

is summarized in, Table 5'..3~, ,.'L ..:I 't.. ..." 11 enve t.:L own recursion IS. An omer and pemaps more elegant way to deri me step- down recersion i to use, lve ror ae : "., .' E q (5' 28') to serve f tb J·m-or d·er porynomrai1A.F, Z )' In terms or A'.j+] () Th sor11utton ,IS zx ;" ' , ( ·4 , ,Z"., me easily seen to be
I
1

+,

'm

'~

...

,Aj (z)
[ [I _ _

1
'J'

Aj:,.z.)

R(

II ,_ ,. ~1{'111
I!

z-·Jl

"J[

Ir}+l!!'
" .I

l2)~" I[

Ii
I

Z~l

'r*' ,'-: J+l


",-J,

~,ri+I,Z

' .~l
1

(5~54)

Therefore "
_,_.£

l' .AJ,n(:Z), , .',

= 1 -- I~.rj+ll";:. "

-[r

A}~-b'lI

("z) ,':

r,-,+" lA'J'''+'~'(;Z')''
.. ,' ",1. " '

_-

.]
'

I,

",

(5..55)

~""'!li1li'l'" .I:."""",w

which is the z-domain formulation of Eq, (5,..52).. ,A :MATLAB, pr-ogram for the step-down ion " iven In. Fig',...5~7··' , ~, 1 ..." ,. Sl,'·. . 1"";So gI
I

r.

iIiI"1,.;.:L 1

Jlilwe'_ 5.3' '" 1.. 2.

''PI..,

5 .s.: ,1.De ,:,'Jep-UUWD'

R' ". ecu.rslon


oil'

Ser I', == t:lp{p)

For i ~p - 1, P ....., , ~..~.~ 2 1


(.a)For i

= I, 2, ..
,

.~

j
trJ+Jaj+l (j _, i

aj{i}

= '1 _. I~' /[~llj-LI'ti) _,


, '1-+1
,Q - [(1
}"'):'

+' .1)]1

(b)" Set .' ,.


3.
t;p.

r.· =: .J " " . ,J (e) H J~ril 1.,., Quit. =

= b'1.(O)

~Le S.··.'1Il"4n .. ," . ~" '~r' ,D:'~-r,; R··"'n:P1I'.iI'ppi.n,III "" ~.,.


~IL-'~

rJ.Ji~

•. "

'fun.ct i,on garmrta~a·tog",g_ f


%
L~~tA.

;!:I,_-=-

( •. 'Ii •
.•
J

po= 1 en.g·th (a).


a:a{.2 ::1'.)

la

{l}.:=

ga:nrma. p-l } =0.. "p'~l! ; { f,o,1': j~-·l: -1::2;.' a= fa ('.l:,j.-l} ,"" gamma (j ).*":flipud (con.j {a (1 ::j -,1)· )1,) ' ). "1 - .~~1\2. t ~gn'l!'l'!l'l:'n~ l J'i)· '~...... 'II ~, . ~.JIJL~ J ~'1 /.
.JI~J!~ ~", I
'!'

.., I

'g,armna (j~1) =a'(j-l}

;'

F'i:gtJre 5,.7 A lMATLAB, programfo» the. step-down recursion to.find ,the .r:eftection ·coefficients from. o
set ,of jilteT ·coefficients aI" (k)"

baimlple 5,.2A The, Step-Dow'lI Relcunw'n


Suppose th,at 'we 'would lik,e to implement the third-order
H{z:) =' 1

:FIR filter

,+ O~5Z~1,_ O~li-:2-

O~5l,-3

using: a lattice filter structure. Using tbe step-down recursion, 'the reflection coefficients may be found from 'the 'vector

0 as, ~ [1'",,-, 5
'J, ••

[~

S·, 0" ,.' "... .1" -: 0:'...'::]T


J'

and. then, implemented using;' the structur-e shown, in Fig. ,5~3~ The step-down recursion begins by sening r3- = ,ei) (3) := '~O.5~Then, the coefficients of 'the second-order polynomial 82 = [1., a2 (1)1 .a2, (2) IT are found. from, Q:3,(i) usieg Bq, (5.53) with i 2 as follows

[ G2(2) _ '='1 ~

Q2(1)

'J

f'[. a3(1)]' ri 1 a:J(2) ,: !.,

[I ,a:J(2) ] }:I 3 i ,a](l) _[i'

From the given values for 83 and ri+l we find,

:[ a2(1) ] _ .,1 ,', 02(2) ~] '~O~25


and
:r2
:=

l' [,I' 1,

-,O~J :J!

O~5], + 0 5 [.
1 ".':

-o.i ]i '1 ~ [
O~,5_'

J _, ,

0.,6 ]" 0.2,'

,(11(2) := O.,2~ Having foundaj, Eq, (5~52) may again 'be used to'fmd ,81 as follows

01(1)

= 1 1 1'2[02(1)- I'2a2;(l)] = 0.5


.
. _, I

"I!'

Thus"

r1 = al (1) = 0.,.5and the reflection

coefficient sequence is _O.S]T

r = [O+S',~ Oa2,

Therefore, a lattice filter baying the given system function is [as,shown in Fig, 5 ~g,.,

"

,~-,!,

An interesting and useful application of the step-down recursion is. the Schur-Cohn stability test for digital filters [21] ~This test ES 'based on. Property 2 (,p_ 226)", which states that the roots, of a polynomial will lie inside the unit circle if and only if the magnitudes of the re','8"!ec-uon. COO:. ,·Clenl~are, }.... ,.' ..ujL. ...,- on.e~Th"," ..,er'.·'11 v, ,gl,ven a. cau~.... Ii·' ". ·i.,.,::A, -m t aIlu' hi" ···,ffi···· . ~., +-"'" '. .:e"Ss, utLu ..... ' , efere ,.:,.. ,.". "",1: ,near SJlll.U.. ':'''I'i:rOi~' e ..;lL
l'

,(;U.,...

filter with a rational system function,


ZJ~~~ B(e.) H'(' '\ AC!)
,,,,1 -

"

the filter may be" tested for stability as. fo lows, First, the step-down recursion is, applied tO the ceeffielents of the denominator polynomial A. (z) to generate a reflectioe coefficient
I

. JH"'"C ~11:\.I1A. ,,~_ ,IICii'"'"'lIN"·' [1\,' .." .' , ','IL. 'I" Itl " .' DEC"·,
U"

ft~J"l' -'", .

nC'f.QN' . ,

sequence, F j The, :61te:rwill then be stable if and only if all of the reflection coefficients are AI '. , u "",1-0,,,,_ ess thai e f' 11}owing examp 1 ,. · strate s tb proceeure. I,OJJi !eurn ne 1 '".an one In magnitude.
r.

.-

, ~..... ''"

'Th'

'III

EXGlmpl. ,5.2.,5 The Schur-Cohn: Stab'ilit] Test


Let us use the Schur-Cohn stability test to check: the stability, of the filter

l'
H(z) = 2 +,42',-1
-=

3z--2

+ Z-,3

First note that. 'the leading coefficient in. the denominator polynomial is not equal to. ODe~ Therefore, we 'begin by rewnting H (z) as. follows. ' H(IZ' )" =

..

+ 2:z:~1 ~.

us

t.~5Z-2 + 0.5,,-3
polynomial A.z(z) ]

0.,5 we then solve for the seeoad-otder a2- ("l.).z ~], + a2 (2)z -2 using Bq, (5,,5'3) as follows W1tb tl3(3
1)

r3 =

,+

[ az(2) ,

:' a2(],)

.],

1~

~, I

ri' l'

i(l

a,3.(I) ] ~

,a3(2):

rJ[
i.

a,3,(2)] };'
1. [.

~ a:;(l)

= ~{[ -~.5 ] _ 0.5 [ _~_; ]}

=[
Therefore, since

!:t;3 ]
~:r21>, 1 and the filter is. unstable.
l

:r2; == a2,(2) ~
,ill!

_-'lO/3~ then

-25 . e 5•. '~.':'. Th'',', Inverse

1'.D ....L..' .'·8("UrSiOft . ,II! M::VlIISOIJ'···.Umin',R

We have seen how to derive the reflection coefficients, and the model parameters, from, an autocorrelation sequence, .. We have also derived recursions, that map' a sequence of reflection coefficients into a sequence of model parameters and vice versa. If. is also possible: to recnrsi vely compute the autocorrelation sequence, from, the reflection coefficients {rl~' r2~~ rp} and €p' or from the coefficients of the ptb,-order. model ap(k) and b'(O)~ The recursion for doing tbis is, referred to as the inverse Levinson recursion. To see how' the recursion works.",assume that we are given 'the reflection coefficient sequence: rj for } = 1~~~~, .p and the ,p'ili,-orcdererror ,tE,l' '. Since
I•• "

Ep'

=: r x,(.o) IT(l ,,_ 2) Irj'1


~'=1

P'

(5~.56)

then the first term. in

the autocorrelation sequence is


Y,;; (O)

==

Ep ~p-,~---=------.-=-

(5.57)
,.

0'(1 _,Ird2)
,

23····9·
'. I, '. " ",I.

Tabl1e SA
ill ,

[l1Ie' Inverse ,levinson'~Durhjn lecur5;0f1[

Im,tiaJJize:the recursion
(a)

r, (0) := Ep.J
,ao{O)

F.or'i = 0'1 I, .,,.~'"p _, ) (a) For ,i = 1, 2~


-e ~

(b)

= ill

'n:.~-]
."~

(1 -

Ir,

r~)
- i'

[~j+l
l'b)·. .. 'L
'

(.i) ~

. ,aj +1 ( 1.-1)''. T:x'{}

+ == r "It-I
.

tIj

(i)

,+ r;+l,aj:(j
,"..II.]

+, 1) + 1 -l)

(c)

:3-,

+ 1)1 ~,,~

Done

E:'

aj+],(i)r~(j

'This" along with the zeroth-order 'model


I,

(5.58)

nnualizes tne recursion. Now' suppose that the first j terms of the autocorrelation sequence, are known, along 'Wl~;Hll,. b e' 'th-order 'fil'" :.~·'~III",,"tiJ,.,'j'.":'" '.,."" a- (l~')'ure WI· 'Ill", o t 'rA't"' ""J""F>a,ftj-'-"cI"len' n term in "',Jiill.!l ...!,'·:,:"iii .. n~:'·-I'-'.W'u.·":.··,I":'.S'L·OW b ow to 'find the next IL.,:':· .... :,,1':.1: the sequence, .rx(j + I), First we use 'the step-up' recursion to find. the coefficients a,j+l (i) from i+1 and ar(i') Then, setting P =- j + 1 and k ~ j ,+ 1 in Eq, (j~2) 'we have the following expression for rx(} + 1.):
l

• ., ....,11.:

",:1..

:1',)

I~.,

_1'.,

','.:,"

4,

(,5:.,59)

..L.~;L...U!.-i.:3.J.[.)

·, 1, ........ ' 11 ~ Smce .,:iI.., hie,sum, on '_'L. ng b.ton 1 lDVOI ves xnown antoeorre 1"""" vames, *,1..': expression may me ., :y ,. anon uU.s, be used to determine r~(j I), 'wbich.completes the recarsion, The inverse Levinson-Durbin I"A ..... ~,~~~ '0' n which ma'y be' .' renresented 1'\1' 1_"" ',.': llli' ;, II :'1_:' ~.-' ....--,r .. -.0 ~~ pi , '.!. ~:' " ctorial "=i ~1 Ij..y.IL.J',_

+,

'1

'I

,,g,C"

. ~ lS 8ummanze d

i Ta';,; . ,',., ~ m - b'1e54


The l"vers'e Levins'(},lJ-D,u,ro'in Re,carsion,

Example, ,5.2.6

Giv,en th~ sequence of reflection coefficients, rl~ r,2 = r 3 =. and a. model error ?'f = '€3, = 2( ~}3, let us. find the autocorrelation seque~oe rx := [~x.' (0) ,',rs (1).,.rx (2)" Tje (3) 17 Initializin.g: the recursion with '
,rx:' ".-,' ~

(0)

'n'~-(1 E3
, t£~l

r:~) := ".
£

2"

240

'TH' JJ:' .~'''·1Jl··:;: ·SON··"C··· ,. f\.lL\..rJUP\: ~'R.J1I., \I:' .. anN· ,OCl"""l .rns· '.~ ~ .
'J

"L

.. S.

'we begin by finding the first-order model, which is given by


8111'-'
.L

-'! rl. -':!


~'

[: 1 J' [I]' .' o.s


II
Ii

Updating the model parameters using the step-up recursion we have .1


,a2

il
I

=,
;1

~lCI)

+ r2

0
.

ul(I) . ': ==

1/2
0

+-

'I
I

I.

2,

1/2
1.
,
'

III

3/4

1/2

Applying the step-up recursion to


II

82

we have 1

a3

='1

a2'.('1.s
1 )

I (11(2)

o " a2(2) + r3 : a.., ("})" . ...


II
I.

~i

-'r
.

1 .. -

Finally, for .r.t (3) we have .

3/4 1/2 0

!,

.+~
2

1'

0-

1
1

1/2
3/4· I

7/8 1/2
~ 1 = 1/8

.rx(3) .= ~aj.(1)rl;(2) - u.3(2)rx(1) ._ ,tl.j(3)rx(O) = 1/4

+, 7/8

and the autocorrelation

sequence is
·rX,. -

[2; ~ t,

-Ij4,~, 1/8]1

autocorrelation sequenc-e from the reflection coefficients and, 'f:p If, instead of the reflection coefficients, we were given the filler coefficients up (k)., then the autocorrelation sequence m,81Y be determined by using the step-down recursion to find the sequence of reflection still coefficients and then using the inverse Levinson recursion .. MATLAB, programs for me in, verse l .evieson recursion to 'find r x (k) from either the reflection coefflcients I' or tb,e filter j coefficients ap (.k) are given in Fig, 5 ~ 9;0 . u1 f" S'" ,; 5' 2 ,4 2" 5' . ~ . su.mmm:y~,co,mlmng th" [e res~' ts [0,[ Sections a.z ..' an,d 5:.. ,. ·we.see 4<11.., 'Uiere IS.an that ... ,1.: In b .. ~ ... ~- .. ....'1! " f' ... . , ... (k':') l.IJI.!i;;. .....'1 equrvatence betw .. ."t.._. sets .0' parame ters: the autoeorreianon sequence .r); '..' .1, 4I-L...... ,:aJL i.-' between tnree . :e : ],-w*," " ~""'1" .. pole model.up (k) and b(O), and Olereflection coefficients rj along 'with ~p'.' This equivalence is illusttated in. Pig, 5.10., which shows how' one set of parameters may be derived from another,
cO

The inverse Levinson recursion described above, provides a procedure for 'finding the

.5":·..2',', 6' ._.

I'll." ..
.e

.r~Lu· K. ','," .r· l!I,iecu. ~J1.


. •
1!

'Fs:i"'n··
,'V1
~

The
.

Levinson-Durbin recursron provides ,M e,":~aent SOlution 'to the aatoccrretanon Donna] c . . .... 1.... d I' ,'., ,. ~ .. equations ..10 a. p:u.l"",orde:rmoe ei, thi S recursion requires on ~L-': oroer 0" P 2 antr mene UP'"' r'iOt me .-..-...11... _ f ,.' lth
'i .~ • •

ffi'

11...... ...:;

t,'

eranons compared to an order of p3 operanons for Gaussian elimination, Therefore" with • 1 j.'iI... ~ • f' of.t. ~ jCt.._ • the T ..,..,. ...., a single 'processor '~lujat 1[... ~ one Unlt 0[' time tor an arithmetic.' operanon t ~ Levinson~-S' Durbin recursion would require 0.0 the order of p2 units of time "tosolve, the norma] eqnaof ~ ,L._'~ d ...,..,...,11..." . .. 'L.':: nons. "IJ':tb .tne mereasmg use o f'' VLS' I tecnnoiogy an: parallel1 processmg arcaitectures nlJ·. ""L_. ~ .•I! ..• ,.

'THElEVJNSON-:DURB.IN· .RECURSION ~
..
"

241

. "7'1...')': .... . 1. ,'~f·..R-Perse .'

~·""uDO.n:-

J'_,...:..........

V". urum R·".' ,... '. ~ .. _I.~'. 'IJ'CftTS,WlIS


r

function

:r'::"i~rtor(g~a.j'

epsilon)

r= l·ength

(g·amma.,) ;

·aa~gamma (,1-. t ) :r.;..· 1 -"gamma{l)];' [

for j =.2: P.;


aa= [a.a; 0] +gamma (j). ,end.i
lIr.

[conj, (flipud.{aa.}}

;,'.1.] ;

r~(r -fliplr(r)*aaj;
.if ,nargin ~~ 2f
r .... r'*epsilon/prod{l-.abs:
·end; .~ f . uncr aon r·:=:at·or. (a" b) : ' ..

{g'arnma) ~":2:} ;

%
p~l·ength{a.) =1 i
gamma;;;.at.ag· {,at} .;

r.,.".g·to-r (.g:~).:
I~

.f

if
',r' -. -

na,rg in =--r .2.r r*s·qr· t· '''1;...\ Ip.r··'~""'d'll'1 , -', \,...Ii.,LJ".:.


~r'"1 ~ .,

~'-I1br;~ 1"f'·:!:Io'!'i'i'i"li'll::!!l1 ~"Cl·\,:=.I~\"""-Ai~r.;N.,'",

I-

''I,

..... ~,~

"":J"

,~'

Figure· 5•. MATI.A8 programs to find the' aut-OC'O.rrelll.tion. sequ.enc,e .r'x (k.) ,from either ,th.e refiectio« 9 co,ejfici-ents rJ and ,rhe modeling error f.PJ· or the.}ilter coejJk,ie,nts ap(:k) and b~O)+

Tx (0)

'~x(l), , .. ~·t rx(p) .,

Levin son-Durbin

Inverse

....

Schur

...
Step-D
~

Levinson
Durbin

'0' I" ...

,:w·· '·n"
II

.'
b'll'O";), 't .\_ ..,
I,
·f

,(: '1: ap.li."· ') 5' l' P.. . Igure~. ,.,:! O'
_ .....1. ~ . moaet,
T:L...

U"

2,)" fJ (" .:",

•.••

a ..'. ", '-'p (p.)

.... Il ~ ',~po.f,e:

I ',.

.1: ne

• ill ·equivuj;ence
'j' .~"",~"

1.. L .l ;!.. ·vetween tne amoco'rre+anon, sequence, tne

Q'

,tutu

..._ ...l' +l."~ .' ·,·IJ~·" ,.:;


;l;J"~

.f<p ~J,~c~... CO~J!""",.i:en,,,,,~ on,

.. .. nal .. ~ '. ~ . k ,. b .. lid[ In sign processing app].icanons, an mterestmg quesnon to ass IS how mucn tnne wour..· ' be necessary to solve the normal equations if we were to use p processors in parallel, Although one would. hope that the computation time would be proportional to prather. than p2,. this is not. the case, In fact, it is not difficult to see: that. with P processors the
I

r.-r

242

'THELEViNSON RECU:R;S/ON

..

Levinson-Durbin recursion would solve the normal equations in at time that is proportional to p ~,og:2 instead of p ..The reason for this is the requirement to compute the inner prod= P net
ill

f)

== r

r,

(j

+ 1) +L,aj (i) Tx (j + f i=]

i) == [r r (j

+ 1) ~ r x (j)

-Uj(l)
L ,•••

'X' (1)]

..

: ('5; 6',"0.'oj', , '.' .• '


I

which is the primary computational bottleneck. in. the Levinson-Durbin recursion. Although the j multiplications .IIIay be performed in one unit of time using j processors, the j additions require lOR~j- units of time, Therefore, with p processors, finding the solution to the p·th~order normal equations requires on the 'Order of P' log2 p units 'Of
time,

In this section, we derive another algorithm for solving: the normal equations known as the Schur recursion. TIlls recursion, originally presented in 1917 as a procedure for testing a. pol ynomial to see if it is analytic and. bounded in the unit disk [9","22l~avoids the inner product in the calculation of n and is therefore better suited to parallel, processing,

Unlike the Levinson -Durbin recursion,


!1

finds the filter coefficients ap (k) in .addition to the reflection coefficients ri the Schur recursion only sol ves for the reflection coefficients.
'IIhich

Thus, the Schur recursion is. a mapping from an.autocorrelation sequence r, (k) to a reflection

coefficient sequence

fill coe £fi'..... ' (1..') .,.,.1.;. S' h recursion is shgnt y . ~.. 1· LtJ nter I: cients, Qp ::.111;:.[" lillie, 'C'W ·L··Le· D' b iii': .• moree ffici cient .L ~f.antne· rvmson-our 'b· recursion mterms 0f'th nn .tnenumoero f mu 1".tiJ)I,iuCal1.ons" even in a single process/or implementation of the algorithm. The development of the Schur recursion begins 'with the autocorrelation normal equa, c. ,·th -...l d trons 'tor a ju -order mo,.:er]I[
;jf:i,.. ",. tne computanon
a. ~ +

· ·d· B)~ avoanng

h -0f·' the

r?t. (k)

+. ·L. (l)r .z (k .aj


j
,~=:]

1) = 0

k =. 1,.:, ~.'...,,J 2

.,

If we set

aj'

(0) ~ 1 then the jth-order normal equations

'may be: written as


(5·,.63)

L
which
we

Clj

(l)r). (k - 1)

=0

,= 1~ 2~~'.~~' j

~'=O

will refer to as the orthogo.nality condition . _ Note thatthe left side. of Eq. (5.,63) is
...

!'I

r, (k).~,

the convolution of the finite Jength sequence aj (.I::) with the autocorrelation sequence t» (k) .. Therefore, let. us define gj (k) to be. 'the. sequence that is formed by convolving aj (k) with

.i
g)(k) ~ L,,(lj(l).rx(k
1=0

-·/) ~' lij(k) ~ Tx(k)

'I

(5,.. 4) 6

Th us, as snown In F'· 5' 'II gj"(··k·).",.1.,.me output Oi i'" e jtn-oroer.....pre,d~ '. error ...lter.· jZ ) .' h . f'the f h-erd , fi'1 A ( I ig... IS acnon : tb 11 •. (l,..) d"..!... ~L 1 ....1~ d'· ., w h en' thei te Input IS tne autocorrelation sequence Tx·,A..,: ana me ortnogonaaty cendmon .states
Po

·I'.~ : .! ,',

1
I

IHiI::'~

,! 0 ANS"".,," ,. ~U:'1f'~.'

aN'· .
J

r.J ':

D' 'Ut -.' .m·~ ill lJir.'

iIlR'S

R·C'"rl ~\DS"ON'., .L.I\...U'A' :.1:..


J •

243

:'

. ·O·J

IQ!'.~(,

k).··

p'

A j"-' (z)

"

;110.'

1;"

...

A~{z)·._.
J"

"th·.r.!I!:t. J(!.IJ..

g:,:.1.1Jt ~)·il1' .... (·1•.


,I ,~,~

e'q1l1IQ']
. - _,

.~l

'Ii'......

Ll:'

zero

.&....... 7,'" _',' J.U£

k: .-_. ,1, -=-.


I"

I ••

1 l.

't

.~'
~

(5,.165)

In addition, since
8j (a.)
::=

L ill' (l)r
.

J[

(1) = €j

1=0

where the last equality follow'S from Eq. (5 ..3)~.hen g),(O) is equal to the jth~ordermodeling t
(prediction) error, Now consider 'what happens when aj(k) is replaced with at'(k:) and .gf(k) is the con-

volution of a.R (1:')' with r ·(le)1 :"1 :.':_". '.-.'


·.·c·.· '-'.' ".' .. " '.J;'

...- ... ,

gj (k)~'

-R;·· ._....

,La), .R··t··· "k (.)rxf '~


.:i' _____rr.

.J

~·,.:_I

1)" =.aj (k).... :*T~(k) ,'. R_,-.- .. : , -. ,'--".

(5.61)

.;~

Thus, ,gf(k)

then

is

'me response
j

of the filter At(::) 'to the input Tx(k)., Since aj~(k) := a;'(j' ~ k),

gJ«k):=

EajC} -l)rx(,k
l--O

-1)

= Laj(l)rx(k'
l~

-j

+ I_)

(5~68)

Using the conjugate symmetry of the autocorrelation


j

sequence, Bq. [(568) becomes ...

~f(k) == L.aj<[)r;:([j
1=0.

- k] -·1)

= gj(i ~ k)

(5,.. 9) 6

..
Therefore, it 'follows from Eqs, (5,.65) and (.5,.·66) that
i6}:

.a··~,(." =: O· : J~)I ':'. ,!It-.. -

k '_ ~

10: 1. ""t

!'

'.

..

'j

J~,_ I'
,~':

(5,.70) (5,.71)

and gjR '( ~)'"_, .C .,J' =»)


These [constraints on gjl? (k) are illustrated
1!l..1('
Co

Mml. :Fig·., .11. 5


1.-

The next step is 'to use the Levinson-Durbin recursion to show' how the sequences gj (k) -~ --. be id . t 'd' t fon ~, . .( :l'r)' . ('t~)· [atlu.g}R. 0.;)' mayn up ae:oormg}+l" (k)'",'an. gi+1i(k' '). [S··_- __. ... .d R >,. ~ilnoegJ+1'ft,': = UJ+.'li ("k')' :+: rx,~I~
using the Levinson order-update equation for (lj+l(k)
," [(" " ~ SJ+l,.' k" )''.'. -,

we have
.e
1

:=:
],
f[

(,tr') ,,:. 'Q)i:. «tg'J., Fl..:[, + rJ+l b} III..~ _. 1)';'


.o-

.' " . 1~)['_'. ('I.-)'·I~' .[, '.' (.:~'~) .", : il}+.I. :( ;1\,. .... *- rx,!!\.-.· -,:a}.!\..

+ r:. ..R (:'f;~,_ 1)[]i ,.,' 'x (..:l_) }+,I,aj"'1!i;, '.' *


,I(,

(.5.72)
1(., .',

, .-~ --,' .,'J. ashio __ .'""". -."-. ' . ,fi. . _ ,R n, a. 51mbaf " , "as, ,.o:tffi,~ s.llnoe. gj+ 1,(1":),1 I ,l

,,,rU,,. updateI~ ,.,.J'~·!I.I'Gtt.JiI ... eqruation .


,

.Q.j ~~ ]

,R

.(·l"')··... -*, r;r ('..1~) app _-'lmca.' on 0., ,t'L., ' '"Ii ,'"'- ,. : .',',.' , __.' d'...er-' '__ , ". ~'··ati ,', 'f' ~.He.Il....iCVlnson 011· . ·.Itl.
,R' (. J,.

',1' gt~VP-'IS
. I~.)

. _R' 1,,) gj+ i (. :«':,'

~ =::::::

.,R.' (., .: l~") (1)+ l'K.,'"Jr') , r X' '('K:,i ~,

['

i .11j

,K,. ,_,

,R gj.,K.('1~~

l) I
"I

+:'r~ gp,K. j+l .".(J.'~)'

I).+ Fj + 1. aj ,"'-.ii]- . '.::Il:: rx {1,""):. .' 1..1 .. ," "K


'$; . ("

(5 ..7"3')
•••• J "".'·1

"

.'

:'

',.'

Equations (5,.72) and (5.73) are recursive update equations for gi(k) and 8)!l-'(k;) and are

identical in form to the update [equations given in Eq. (5 +27) for aj (k) and a}~(k) . The 0111 y dLifierenoe between 'the two recursions is in. the initial conditioe Specifically, for aj(k) 'we L__ 1~) R (J",,) fl .(: , ,,': ' .'., f 8} .-' '\ k' ~~", u.ave ao I('~,' ,- ,tlQ11t ... - O,.i\..,_..'):, w.:"herea;s, 1E.or.,. ',(;': "we. ,"", has '. go (k")I'~ 8.0'1\..,I~)I Tx; .(,:~~:), '., ave .' '.', ,- -.R .( -_ ',' ..J!t ., T .... J!.41l'i..,[tug
=.. ,
c

'=

the z -transform of Eqs, (5,.72) and. (5,.13) and putting them in matrix form gives

',rlF.:! '~,+'~
J
,iI.

(5~74)

..

A. lattice filter interpretation of these update equations is shown in.. ig, 5., 12~ F Recall that. Out goal is to derive II recursion that win take a sequence of autocorrelation values and generate the corresponding sequence of refi,ootion coefficients. Given. the first }"reflection coefficients, the lattice. filter shown in Fig, 5,.12 allows one to determine the sequences gj (k) and (k) from the autocorrelation sequence, Jl.U that remains is to derive

gt
,.(.

a method, to find the reflection coefficient, ~j+:i' from


,as: follosows~ s·'~;"" 8j+l Jr. }' .,- O' ev·, nattin - Eq' ' ~ '(':5" '.... ,1nce. , +) :. ·:aI· '.'" ]n.g, .10
r" ;' '
i '. = ' .•"
I ,.[~, ..

gl?'(k) .. This may be. done '7'2_-')',~or~ -_.) ~ l' w'e'.hc" ...ay,eor: ,:f~, _, '.,' ',
gjl(k)
'. .. ,

and
.

+
I

"

'o'..!! T,

P':L~_ I

(} '.

+, I) =

g'. ')'~

" ','

(J',"

.+ 1) ,+ rJ:.L.l o'~' (j',") == 0


, ~,0IJ'
,Q' o:}'~ ....,

r-,' 1 -. -"_. J'+


.

,('1~,+ 1')"
1 ' ,

,0] \", "

,~,!J.lJ'~)·'

and the recursion is complete, In. summary; the Schur 'recursion begins by initializing go (.k) and g:'{k) fOJ k: =: 1:.12.1 ~ '.. " p with .rx (k) , go(.t)- -.- g~(k) ~ ,~x I(k)

The first reflection coefficient :is then. computed from, the ratio r,]

~-

gC (0)

go(l) =. ,_ ~xrl) .. ,~~ ''-

r, (0)

THE lEVlNSON-.DURB1N RECURSJON'

245·

go(k)
,_ '-iJII'

..
....

gz(k)

.gp~l'(k)
........ ...........j

1--..... - ... ;_ ................'.' --.-, - '.'.

...
"".

.........

..

=-'

....

_,.............

!iii,,.. ---;",:

._---1
___,;I

aR(k). ,.0,0--·

gl,{k)

..R. (·1~.), .g p_]_i'[, ...... __

g~'(k).
P'

.. "

(b) A.pth-order lattice filter for generating ,gp (t) and

g:' (k) from r (k).,


I.

Figure' 5.12 The.lattice filter used' in the Schur recursion to generate the sequences gp (k) and
from the autocorrelation sequence.

g:

(1)

used to' generate 'the,sequences g 1(k) and _g,r (k) '.The second -iIHI.' ffi' ,'. men .-...41 fr '. reneeuon coer ."cientts ..,,'L.. computec trom tho ratio tne

Given

r 1" a lattice filter is then

j. .

r,

2::::::::::

=-

g ]" (2)
oR
..

,~.1 ":"

'"I )',
=. ..

_.

which in tum is used in 'the lattice- filter to ·@'e.neratemesN1uencesg:.""!!(:k)·.andg.,l?:(.··k')·.and ' .", ,~,. ., . - - '. _. -- "=' - ~. --"'ll' . . '. ."',", .. . , :. 2. . '. so on. In updating' the sequences 81'+'], k) and g1?+'1 (k) ", note' that since gj+], (k) ::::=: 0- for ( .',., 'diE' [1..') -' t: '=, ~. v ues ,n.ecu '~, 'J..,. .... ." ..... ~O :!.)'!:; ;1: k - ',1 2'·· J'~ -I- '1 ,an,_.gj+f\Jll,' - 0-' ,10'r ,iN,. ,~ 10- I' ,., .,~"J.,... no. t a]' I P " 'al"'" .
- .. = 'l'-'j
+ '•• "

~!

updated. ~peci,:ficaUy;aU that is required is to evaluate ,gj+ I(k) for k > .J + for k >: i ..he complete, recursion is. given in 'Iable 5,.,5" T
+

~and gj~+],(k)

Additional insight into the, operation of the Schur recursion may be gained if it is formulated using vector notation. Let gj and ;g; be the vectors that contain the seqnences; 8j (k) and gt{k), respectively, and consider the 2 x (p + 1) matrix that bas gj' in 'the 'first
Thb~I:,I<: 5' ,f:: _.J 'llle
:~:"IIi~

~,' , In.

-8., ~-' '. . .. I' .. ,U.'·.'' '~~,~;(i!Iii.···:,·.·n,' ~-''. ,u· " ~' ..... ', ~I'W.
h' , .11i'!.·
.','AIII'."

1"

Set g,o'(k) =' 8,: (Ie) =~ (k) fur k: :; rx

0, 1~.

'f

,,~

p;

2~ For j =, 0, 1.~ . ,.. , '~ .P ~ :~,


(a) (b)
(c)

Set fj+]
,OJ.,IJ,·,

= ~ 8; (j ,+ I JIg! (j)
+ 2!1 ,.. ,.
"- p
tl, 0"'.,11(. ('_')... = ,OJ'(;i\-').• -, ,_

'For,k = j
.0 '..J!..!

For,k; = i

+ 1!1 '.. ' ,.,,'p

+ rJ-y;-'
.

I.

·t_'~

~:) g"'J' '~' ~ I '". ' 'R'(li. '.

R i 1.) R: ('J. 8)+1 \A' = gj',It' -

't- ).

3..

Ep

=' ,8: (p)

(L) + ,W,i~lgj,1\. .
']""1:$

246

'THEtEVlNSON RECURSION

....

_. ... R '... l... .... nd row rowan, dl ,gi m tae secon '. row,
'.

'J

gT.:

]~;

[,~,,~ (:0)1 oj . ).

g~'(l)':-'. J ".

[ '. (gf)T

.[ = : g!(O)

g!(1)

...

:f~].
(5.76)

From Eqs, (5~65),(5~,66)~ (5'~70)"we see that 'this matrix has the form'6. and

gJ] " _ -'II [


(gj)
' .. RT

[.i·
:1

EJ

.'.

We, are now' going to perform a sequence of operations, 00 'this matrix, that correspond to those expressed in the z -domain by Eq ~(5',.14),. These operations 'will comprise one iteration of the Schur recursion, The first operation is to shift the second row of the matrix to the;ri,ght by one with, gl": (~ I) entering as the first element of the second lOW' (this corresponds to a 1 delay of ,gf (k) by one or a, multiplication of (z) '~y ,~'~). Note that after this shift", the ratio of the 'two terms in column number (j + 2) is, equal to ,_rj+ 1. Therefore, evaluating the reflection coefficient we may then form. the matrix

at

..
I

"-"1 ~ [' 8)+1 _:

m r'~, ')+1

rj+l ],1 1:

(5~77)

and multiply the shifted matrix, by ~J+l to generate 'tile updated sequences Rj+l and gf+l

as follow'S
'1" [

. ,g'J..
'"R

..L'~

1,

(gJ+l)

'. T

]1[1 r*
j _

1.

r·'+l ~I J,
:

,-

,', J+l

'[[

'E~ )

gj(p)

.
&

Ii. gte-I)

O' 8)+1 (j

R(.·) gj"JI

~.'~ ,gJ~(p~, - 1) .' ..... .',


••• gg_~..+i ~p'P })_.
, j+1\.'·
",i

+ 2)

]i
*
m ....

This completes OD.,e step of the recursion, We may" however, simplify these operations slightly by noting that the first column of the matrix in Eq, (5.76) never enters into the ... "Jl .e' f .: : ,. 1:':::" . C·IiU<C' ulanons. 'The' i~ ",[.rerore, we may suppress +L [&V aluati on, 0', thesc entries b tmnahzing .j' 'e tne .... I 'ry 1[ 11 db first element in me fi.rst COlumn to. zero an:I~Ivly .1!.i,nUgtD,g at zero mto th Ie secon d row eacrh m " • '.A. d . ~ R' th th di -" _..1:00 ... time that it is shifted to the right, Note thar smce €'j ::=: g'i (j).,.i':en me erroris notdiscarded ~.. l!... t.,,; ~ 1:''': .. "1' AI ...."'" "'IL.· d .. n~ . WI!!JJl bus snnpuncanon .._n summary, 'i,th.e steps cescnber.....d· ab above II [ea.·to th'e f'0., owing matnx fonnulanon of the, Schur recursion, Beginning with
'1'

,1[1 ~

..

a-

tho

'.,;L

1)".

".'

..

!~

which is referred to as the, generator matrix, a new matrix, second. row of Go to the right 'by OD,e
,_ f"!_'
"IIJ()
, .'

IGo,. is, formed. by shifting the


i'!IWI

..

[' 0 ' 0 :.

,r. 1(1)'1 ,;t".

rx·...x ,(0) r '1(' .'1)'"

T_;t (2)' ,

.,,. .. ~..
Po

r, t»..- I')' , x····"

r,,' 1(. P',,',.-)] ,..'I,

]1

' :

Setting

r] equal

to the negative of the. ratio of the two terms in the second column of

.....
CO't
L

6We could also 'Use Eq. (5. 71) to set

gj (j)

===

f.J

in this matrix, hut. this would make the next step of 'oomputbl:g:

the reflection coeMd.ent

Ij+].

using Eq, (5.,75) a little more obscere,

we then, form the matrix

and, evaluate G1 as, follows

0 , ,[ ,= £:loG- '0 =, ' 0 gf(l) ~1'

[0

The recursion then repeats these three steps where, in general,


I. Shift the second row' ofGJ to the right by one, l~ Compute
,,..,.,.

at the jtb step we

'r} +1

as the negative of the ratio of the two terms in the (j + 2)nd cohnnn,

,3., Multiply

~j

by 8:1+,1 to form Gj+l-

The following example, illustrates the procedure,


EXGlmple

5.2.7 The $chru'RAc,un,ion


sequence r x
*-,L,"",

Given the autocorrelation 1'.' S ',:,tep 1: 11]," be" y.tie~""gm .

:=:

recursion to find. 'the reftection coefficients Go

:r 1" :r2t and r 3


Tx(3) .], 'x (3) ,

[2" ,-1 ~ -1/4,!, 1/8]1


ro

T~

let us use the Schur


I

-![ .
,

~ oy ~ .,. lizing me S h ur recursion b nun ali '.' ',:'C,I


i

th generator matnx':o. G' t-o '. '; ie

L Tx(O)

0.

"x(1)

,.xCL)

TA2) ~x{2)

= [I

II 2

0 ,-1

~1

-1/4
~'1/4,

1/8 .. ]
lIS

2. Ste'PI ,1: Forming 'the shifted matrix


,

Gn= [~
it follows that

r'l :::0.5 and

e[
1

,i-

i
!-

O~5

0]'.'15

]111

This. step only requires one division, 3,. Step ,3::: orming the product SliGo 'we find F
~,,""'i :G'l

.-

== elGin

,[

0..5

1.

0..,5]i ~[O -]2. 1 0

,-l/4
,==]

= [~
~'"

3~2

=!~:.
3
"',

i:

'~1/41

]/8, '

]i
:1

~3~16 ] .
'

Since we mow that the second element inthe first row of 'G1will be equal to'zeroafter multiplying Go by [91, this step' of the recursion oldy requires, five 'multipl.i.cations.
4. Step .':;From the shifted matrix

G~

=[
"

°0:
I"

o?

' "3 ,/2 ·9?';,'0, 'I;)


')' ,,

· tms ,,',"I.:,. nave a "' ... A gam, m 4:t..! S tep 'we '1.. ...., one di VISlO:O,.,
+

,- -,", ~'.. Ste "5~'''F> ,,,:'. r th .'11'.. ormmg ne


2 2 I=

....., .oduct B pro uc 02 G'-" ;]_ -e he ',,"we ave.

G = 8 G [O~5 °i J [~ ~ ~)~4
L"or-~,:L...,~ i±tl~L·',~ li'l ,:.,- UJlb steo
'W' .' -"I

-9/8

o ]1 _, [.
I ,-,

0 ,0

9,/8

-9

9-

1,IQ6].
0

e have . three multinli'l1l. ,[IL~ - ·,-t·· IIJ. . ," ,Ll,Y


"

Ii ," .' "

I!;

C~·hO' .'. ~u~lJI} ..:

ns I.
.
~I

16. Step 6: Finally, forming the shifted matrix Gz ~

G2 = [.~
we find that T 3 :== 0..5,

~~

~:f~6]

1 =991/'1 6 , ] 8
,

:::= ['

I~I,

00'.

'.

.0

27/32

.0

. "" Th ere f ore" th e error 1S


... [ " '.. " _, .~ '. '_",. 'J. '.
'I~

E'3 :_

27/32

In this last step, we. have one division,


p 1··, icauons
ail:" ,," .'."V]S10ns,. I

and one multiplication.

Counting 'the number of arithmetic operations 'we 'find that this recursion required 9 multid 3 di ., .

recursion, the Schur recursion is well suited to parallel implementation. In particular, Kung and!Hu [t3] developed. the pipelined structure shown m, Fig, 5 'I3 for implementing the Schur recursion. This stmcmre consists of a cascade of p
U:nIIDt.ehe Levinson-Durbin t
r

modular cells, each containing an upper pro cessi ng unit that computes the sequence values
gj'
c,

(k) and a lower processing unit that computes the values (k)~ The upper processing unit in cell :1 differs from the others in that it is a divider ce lllhr computing the reflection aeffi ,:>, r.,,,, -L: ed arcmtec co :. cients r Th-' ,:~".,~,~""...;;-'ii"["iIl..·;, pipe "li"""' nee ,'" "h-"t' ..I~1e,S as follow 'e cperauoa -', ,01 .~~]!S,,"~ I .lO-J!.OWS~
"h1i1T!' ':, ,~",

gIl

L The upper processing units are initialized with 'the autocorrelation values r 1); r 1; (p-) and 'the lower units with. rx (0) ,~, .'; r x i» -, 1) as shown in. Fig. 5 'i_3a.: This step .'"" ~ 'ad".:s thll e, cens WI·t·h +1-., V'.-.1i'I'" 0-f" t·1I:-. " mamx '....0 .. "1·1 '"' , . of·i G""' 1o ,~' UJ}e vames ne
j;' (
r ..
-i'

'_II

2~ The divider cell computes the first reflection coefficient 'by dividing the contents, in the upper unit of cell I with. the contents, of 'the lower unit,

This reflection coefficient is 'then propagated simultaneously to each. of the other

cells.
,3"" The upper units of each cell are updated as follows
..
"

and ~ ~ lower units~,~ undated as he " . .. . IW,J . . ....


~I:. •

. R' gl" (k)'


" rl'

'I

"(k'.' _':t..
.. -

l_.

1):'

..+,r';+;:' ('k)·'. ]_";(,,,


,,'

.' ,

k-1Ii

lL~,~, ••

"p ,.

THE l.E\I1NSON-.DUR8~N RECURSION


r~(l) Tx(O)
~

~.

,I
'~

ill
, Iii

.Z-l

.,

1;-1

Z-.~ .

i
-

~ _.

r z·(lli)
.. ~-. ,
!

....
,

rx(2)

...
it
I

..

......
~
[

'7.-1

~~

rX(p
II
:1

- 1)1
~~
:

....
:
"

r X. (p).' .'
"

~i ~
:
I
, I:

"
~

J
[

, J{(0) r " .:,


~i.

i
[ [

~
i

,"
,

,
,

__ ~r
!

:
:

i
:,

r;t~(lli)
,

;rx(p

2)

,
I

I'

rx(p

1)

~JI'

.• J
:

:I

(a) I:lJJrtiiaillizatio]lof the processing ualts with me awtoCD:rre·]aurnI. sequence rK,Oe).·. The. dlvlder cell computes the first refl·ecdon coefficient. '\li'lbidh is then. simultaneously propa.ga'led to each of llie other units,

r~

.,

~ _~

r* 2.
B(l) g1 .'
I
I

::
:, :

.,. -i ....
,

_,


!:
,
[

Z:-1.
I

,-1
~ ~
I

l~'

.....

7-.f

u ,I
1

g:~(2)
:

__

g' . , 1(3)
:
J~

...,
'

81" (p)
.ii, .

..~

.~.

Ji~.
i

,..
I

I~
,
[

~~
I

gR(l) 1.
~~

.
-~.

g'f' (2)
..
'~

'."...

!
i

',81 'P.'
,J

gf(p
"

--

.
[

1)

. ,R'C )
~~

(0): Swte ofthe Schur


t'

pipeline after the firsr spdate,

fi ~ . h" ' C ~ F-sure 5.. 1 3" A pipe 1'·' d structuretor tmp tementmg tne S~hur recursum. '., ." ., mea
This operation corresponds to' the updating of Go by multiplying bye] to form. the
..

new ..... senerator matrix G1~ ,4" The contents of the upper units in each cell are then shifted to the left by one cell. 'Note,that the last cell in the top :row is. left vacated,

S., The next reflection coefficient is 'then computed in the divider cell. by dividing the contents in the. upper unit of cell I wi.. the contents of the lower usit. This reflection th
coefficient is then propagated simultaneously to each of the other cells, cAt this point, the pipeline structure is in the state shown run Fig, 5..13b.,
16,. TIle upper units of each cell are again updated using

r
2.50
1

THE lEVlNSON RECURSION


•.• an d .;1;" 'tower units using rne m

....

g!+I: (k.) = g).~'(k ,_ 1) +, r;+lgj (k)


Steps 4, 5 and 6 are repeated until all of the reflection coefficients have been computed.

We conclude this discussion of the. Schur recursion with an,evaluatioa of the computatinned requirements of the algorithm. Note that .at the j th stage of the recursion, one divisi on is required ~'O:" determine I'-. and p'; ~). -1' multiplication ~ al'0': "D,'I'g' with P ~J': -]. additions are 'JTll'~il ,uJ.. _ : necessary' to evaluate the terms in the sequence gj+ 1Ck) (step 2b in Table ,5,,5).. ill, addition, (p _ }) multiplications and {p _, j) additions are necessary in order to evaluate the terms in the sequence gj~~ (k) (step, 2~ in Table 5 ~5)<o illi p steps in. the recursion, the total number 1. W of multiplications and divisions, is
' '-",,"""~' lli.,_ •. _ .•

i~.1

,"

".r,":'

,_C!t;1.,

1.

_.• ,._~

1" .•..•••

,"'

,".'

[_']-

P'-' 1

p +2,L(p'- j) _ pl = ,p2 +,p i-O


along with p,2 additions, Thus ~in comparing the, Schur recursion to the Levinson-Durbin recursion (p, 223), 'we see that although the SaJD.e number of additions are required, with the Schur recursion, there, are p fewer multiplications.

-,

We~,have seen that the Levinson- Durbin recursion is, ,an efficient algorithm for solving the autocorrelation normal equations, It may also 'be used, to perform a Cholesky decomposition of the Hermitian Toeplitz autocorrelation matrix Rp ..The Cholesky (LD'U) decomposition, of a Hermitian matrix C· is ,3, factorization of the form

.C = L'D'L'H _~'"""",,'.'
'J

(5 .. 1) 8

where L is a. lower triangular matrix with ones along the diagonal and, ,0 is a diagonal matrix+ If the diagonal terms of the matrix D are nonnegative (C is. positive, semidefinne), then D may be split. into. a product oftwo matrices by' taking square, roots
Dl =

.DIP'DI/2

(5 .. 2) 8

and the Cholesky decomposition of C may be expressed as the productof'an upper triangular and a lower triangular matrix as follows

C' = (LDI/2) (Dl/2LH)

(Sr83)
-

With a Cholesky decomposition of the autocorrelation matrix we will easily be able to establish the equivalence between the positive definiteness ofRp, the positivity oftbe error sequence f:j', and the unit magnitude constraint on the, reflection coefficients ~j. In addition, we will 'be able to derive ,3:. closed-form expression for the inverse of the autocorrelation ,. 11 . a1 ~'hme mvertmg aIoep Ii'·tz matnx. ~ ~ ~ matrix as we':' as a. recnrsrve :. gontt '1'1 lor ,. To den ve the Cbolesky decomposition, of Rp, consider the (,p' +, 1) x (p + 1) upper . . tnangu Iar matrix
I

af(])

a~(2) ;·1 ~., a;(I) 1


"

.. .. '.

Ap

'_,

- ..

0 0
..

0
~ ~

ap':::."~
'(P"

. '*

a.;(p)
'I: ')
.::,1

a'* ('p" ,_ 2)", P', ,'."


'

..

..

($.,.84)

'THElEVtNSON-DUR8tN

RECURSION'

,2;5 '1

Durbin recursion is applied to the autocorrelation, sequence rx (0), .," . ; r:x (p). 'Note that the jdl column of A p contains the filter coefficients." af_ 1~padded with zeros. Since · "aR R') j
~

This matrix is formed from the vectors

,geh 3,1, .' ~~ ,

,ap that are,produced 'when the Levinson-

~··'Uf'•

f:::J

'I)

where ~J := [0, O~ ~'.'",', l]r is a umt vector of length j

+ 1 'with a one in 'the. 'final position,


., 0

then
E'O
~
,_,

0
tl
.*-

0
.

, .. .

Rp,Ap

*
.
~

*
..

0
E:2
~ ~ ~ .. ~

0
.

0
"

(5J36)

~ ~

..

f·.· . :.... Ul-· 0 ','.U:': (an Iower-.triangu lar matrix with the- y'" ... nrediction errors .. ej' along' the diaaoual '... r··~,t~, ~ ed ~ m·· ..... 1, ,. ....,1 li'T'b astensk 'is use ·:.to me cate +1!... mose dements ilia~tare, 'In generai nonzero ') AlthII ougn 'we h ::' "lave not yet established Eq. (5"85)~ may be inferred. from dIe Hermitian 'Ibeplitz property of it. 'Rp,. For example, since JR;J == Rp 'then

which'
." . "'".

r.:..Jr·, 1'·'~ a

**
:, ",1,~

*
:!
I",

..
.

.' ..
rEp
. .... , ..

I,.,J·;.

1:

]_

,I

'.:1::

,11_

",'1,.,

I,

,'-

••

I_~

Multiplying on the I,eft,by .J and using the fact fuat J2 = I and .Jay = (afY*, Bq. (5.,8.5:)follows from Bq. (5·~81)after tiling complex conjugates, Since the product of two lower triangular matrices is a. lower triangular matrix, if 'we multiply RpAp on the left by the lower triangular matrix, A: then we obtain another lower .. H ... t.." • ... ~ 1 triangular matrix, A:p Rp A... Note that since ... terms along: the" di ':',r: ' 01e f~gon~:1 0fi-'llA are, equariI to one, the diagonal of A: RpAp will! be the same as 'that of RpAp~ and ,A:'RpAp 'will also
,t

1).".

."

have 'the form


I:

e:'o

*
,.
"

0
€l

o
'.
oj

o
..

i'

'.

..

:1

,.

'. .

0
~ ~
'

(5.,88)

Ep

Even more importantly" however, is the observation that since A,:RpA.p is Hermitian, 'then the matrix. on. the right side of Eq, (5~8·8)must also 'be Hermitian, Therefore, the terms below the diagonal are zero and

where D· ,~:. .p

1'·"-" .
,~1.

a diag'O'i[i"1I","' matrix
'. _ .. .' _

l.lLlr(l.1·

__ .

1.

--J

Dp = diagtEo~ t:l,!,

~~ ~"

Ep.} .

Since A.: is a lower triangular matrix with. ones, along the diagonal, then ,A,: is nonsingular with det(Ap) :== 1. .and the. inverse of .A,: .is also a lower triangular matrix, Denoting the inverse of A: by Lp,. if we multiply Bq. (5'.8'9) on the left by Lp and on the right by L:
'r

r -.
2,52
we have RP' =' L' LD 'pDp fJ
I .

THE LEVINSON RECURSION'

,£-J:

(5.,90)

which is the desired factorization. ~ ,. ,. be' b (5 ~.,':' d trom ~,. ':.,1.,0'.:' An mterestmg an'd important property 'urn! may :', esta blisnec fro -.:;::('.., °9") IS' that. the determinant of the autocorrelation matrix, Rp is equal to the product of the 'modeling
,j;,'~, ... ,
L

errors""
,"

,J

detRp :=

1==0

O'€'t
p

(5',.91)

To see this, note that if 'we take the d.eterminant of both sides of Bq, ($.. 9) and use: the fact 8 that det(A,p) ;;__: then 1,

detDp = del (.A:'Rp,Ap)

= 'd"e."tAH). r -~p' = detR;


and Eq. (5~9]) follows since
'.
·1

,'el

(d- tR" )' ',[,,-,,'1"; if A" ) 'p,' ("de


(5,.'92)

detD;

=::

1:..;..,[1

n
P'

EIr;

..

Using' Eq, (5.'9 I) we may also show' that the positivity of the autocorrelation matrix Rp is tied to 'the positivity of 'the error' sequence f:'k-In particular, note 'that Rp win 'be positive definite if and only if

(5,.93)
Therefore, since

...

t n,j de,.D

-TI~
J -, "cit
t~1

~·-u

j'__

it follows that Rp win be positive definit-e if and only if Ek > 0 for k := 0, I, ,.~. P'" Furthermore, Rp win, be singular, dlet(Rp} == O~if Ek := 0 for some k, Ibis relationship'
!,

between the positive definiteness of Rp and the positivity of the [error sequence €k may also be established using Sylvester's law of inertia [21]~ which states, that if ,AI." a nonsingular is matrix, then RpAp has the same number of positive eigenvalues ,asRp.~· same number the

A:

of negative eigenvalues, and the same number of zero eigenvalues, Since, A:: RpAp, Dp' with Ap being nonsingular, then ~p and Dp have, the same number of positive" negative, and zero eigenvalues, Since the eigenvalues of ,0 p are equal to '€,k it follows. that ~p ,> 0 if and only i" €k > 0 for all k,. In summary, we have established the following fundamental property of Hermitian Toeplitz matrices,

i.

Prop'erty 7~For any (p

+ 1) x
1.." ~ '•• ' , P

{p' +,1) Hermitian Toeplitz matrix Rp;, the following

are equivalent:

i,
,2,.

Rp:~O
'Ej

> 0,' j I,

':=

,.

Irjl <

i ==

l~,.,~.",p

Furthermore, fhis [equivalence remains valid if the strict inequalities are replaced with less than or equal to inequalities.

In,the following example we use this property to show how' a Teeplitz matrix may be 'tested ~i;." , . '.,' to see w aemer or not It '.. posmve e definite IS " :.....: u,e,:·nne, .
.. ., h"

Example! 5'~2,..,8 ,Testing,for Positive Dejj'niteness'


Consider the 3 x 3, symmetric Toeplnz matrix,
I

1 (~
(I'

R ==:1
Let US 'find. the, ",-alues of
0:

, fJ

1.
€t

/3
(I'

~. sequence" 'r == [,,1~IO!.~,fJ' "]T ~ Using the Levinson -Durbin recursion we, may express the values, i
+ .' • ..

is positive definite. The 'first row of R, may 'be considered to. be the first three terms of an autocorrelation. and

fJ for which. R

of the reflection coefficients,

r it and! r2~' 'terms of a and (J,., Since in


r'l ,= -,,.(l)/r(O)
=: -a.

"it.'~n Ul~i,'

. a''l r2 ,~ -- VI - ~"" ~1, ,E], '""","

=~' [(l2

Now, in order for 'R 'to be positive definite it is necessary that ~,r.d< '1 and Irzl < 1~ which
implies that.
(i) (Ii)
i

lod
fl2 --

-c 1
.-

/3!

'1-a2

<1

From the second! condition, it follows 'that


'II -' , <, -,

.2 ,a:- ~a

:1 ,_ a'~

,P

,< 1 "

f•
I

254,

THE lEVlNSON' .RECURSlON'

,
I
.'

II

-2

? .....
,

'Figure 5'1.114 The valuesfort» and.p for·w.hich theautocorrelation matrixassociated with th~'autocorrelation se,quence r = [1." 0; ~..8] T' is positive definite.

hi W]C, h
'i

smce
r.

lla~
~ ~

«: 1.necomes .' b

• ,;

The left -hand inequality gi ves


.and the right-hand inequality is. equivalent to
1-==

'1 .'j

~>2a.2_.

Therefore, the i]!Uowable values for (r and. 13 lie wimin the. shaded. r-egion shown in Fig ..5 ~I4·...

5.2,.B 'TheAulocorre'atio.n' EDen,siOn Proble.m


,
~

In this section, we consider the autocorrelation extension problem; which addresses the

following question: Given the first (p


k:
=:

.+, 1.) valses

of an autocorrelation sequence, .rr (k) for

0.,. I, . ~~, P', how may we extend (extrapolate) this partial autocorrelation sequence for k > .p in such a. way that me extrapolated sequence is. a valid autocorrelation sequence? In Section 3 .3~5 of Chapter 3- we saw that. in order for a finite sequence of numbers 'to ;b..".,. """,;,,'1'. d .. ... ,1 . }' ,. !I'.t.,. 1 +.;f' d f' 'h: ll7C a vano partial autoeorre anon sequence, 'nit au tOCOITe i anon matrix termed :rom tnts secu en ce m mst b e non n:e·· g.: at ·l~ e d e·fi'·n·111:t·-e:. R-....•.. ,_ '0'· Th :e' .l~ v . .1P... .> r·e. fore anv e'Y~'D;1i:! ifl!'Ij'l: m us t. y oreserve this nonnegati ve definite property, i.. Rp+ 'I. '2: O~and. R p+2 2: O~and. so on. in. Section e~, 3.,6.1 of Chapter 3 we discovered. one possible way to extrapolate a partial autocorrelation sequence ..Specifically, given. .rx(k) for k = O~1~.,,. p, if rx(k) is extrapolated fore .> p di ., aOCO,f1 to th'e recursion mg ..
+ ~~, .. \r.I'. _, .. .. . "'.". .. _~ ~ _
~'L

!!!.

~,~,;-

.,a;~,

'1 _~,' VJ;,I!

~.

,~~,.l

.,..

+~

r;,..(Ie)

= _.E ap.(l)r
.?
.{=],

r (,k

-l)

(5..'95)

where the coefficients

ap (l) are the sol UUO!ll ito the Yu~.e-·Wa1ker (no-anal) equations
Rpap =. Eplll
(5 ..96) '

'THELEViNSON·,DURBJN RECURSION
,
'

2,55

then. the resulting sequence win be a vali d autocorrelation sequence. This follows from the
..

fact that this ex trapolanon generates rhe autocorrelation sequence, of the ,AR( p ) process 'that .'I,S consistent witti th e given autocorre 'I anon. vaiues, i.e., r x',A;' c • • 'h I: .. '. ... 1 " ( :~r) I J'~ f ., lor '.,K- I ~, p, T'h : I~US, a, nmte seq uence of numbers is extendible if and only if Rp > 0 and, if extendible, one possible extension is given by Eq. (5~95).,We have not yet addressed, however, the question of 'whether OF not other extensions. are possible. Equivalently, given, a partial autocorrelation sequence (x (k) for k ....... 1,"~. ~, p ~what values of rx I(P + '1) will produce a valid partial 0" au toe orrel ati O[ID. sequence with Rp+ 1 > 01 The answers to these questions may be deduced. from Property 7 (p, 253)~ which states that fuf R" is nonnegative definite, then 'Rp'+l 'will be nonnegative definite if and only if Ir p+ i I <: 1. Therefore, by expressing r x (p + 1) in terms of the reflection coeff eient rp+ 1 we may place at bound 00 the: allowable values for T:X(p + l). The desired, relationship follows from Eqs, (,5,.10) and (5",13)~vvh~ch.gives, for
r;tl(p

,+ 1)~

,Tx{p

+, I)

~ -rp+]f€.'p ~,

L:tlp(k).rx(p
p

-k

+ I)

(5: 9":,7')"
. ~I,." I

1=-1

With r P1" t a. complex number 'that is bounded by one in. magni tude, I, :rp+:l ~ < 1~Eq. (5,.97) implies that ,P:x(p +,1) is constrained to lie within or on ,(1, circle of radius €r that iscentered ,at 'C' =: _' ap,(k)rx(p -.k +, I)!' as shown in Fig" 5~liS. In the real case, the range of allowable values is

'Lf,
p

fEp _

:E,Qp(k)rx'(p
k=1

-~ k

+ I)

,~ r;..(p

+ 1) ,< Ep ,-

L ~p(k)rx(p,p

+ 1)

(5~98)

k~l

There are t\VO special cases of autocorrelation sequence extension that are worth. some discussion. The first occurs when we set rp+'l ~ o..1:nthis case", the extrapolation is performed according 'to Eq ~(5.95) wi th

r, (p' + 1) ,::=

L
p

t1p

(l)r_Jf (p

+ 1 '_~ ,I)

[=1

Furthermore, if the extrapolation of rs: (k) is continued in this manner for all.k >- P'" then the-resulting autocorrelation sequence will correspond to an ,A'R(p) process that has a power

:1

FigJure 5 15 The o1lowable. values for the autocorrelation coefficient rx (p- + I) given the partial ausocorrelation sequence r. k)' for k =' 0 1 l. P' are shown to lie within the circle o-F radius- ,E', .x.. "". :I -. .
i>
• _ .•• 1. .• _

IL

-,'

..

..

C,

"!II!

(Ill

!II'

.• _

"'_;'

. '_ -'

[.

'f)

centered at C~

256

THE'i.EVjNSON RECURSION

spectrum equal to
'X']',

P" (:ejW),

,_'

=-,

Ib(O)12 -~~--~-~""7""
:'1" '~,ap,1t "

+~

.p

('~)It J

,2
,," .

9 , (5 9'":_:J )...
, "I.

-,)·'.I;w

'The second case occurs when I 1'+1:1' = 1~ When the magnitude of p..bl is equal to one, (p + 1) lies on the circle shown in Fig, 5'L]S~the (p[ + ljst-order model error is zero,

'x

1:=],

Ep+.r _
" ,~"

[fp [

? - [F
: _'

p,~],~

,"l,t;.~

' 11]'1l _I ~

(5"IOO)

the matrix Rp~n is singular;" and the predictor polynomial Ap+1(z) has all of its roots on the unit circle (see; Property .5,~p~230).. As Vie 'win see in Section 8,~6of Chapter 8~this case win be important for frequency estimation and for modeling signals as a sum of [complex "al exp'0" ' :Ii'fI[e'n' ,+..,.0: I Y.
'-"1"'"

:!.=!Il.:_."

i1

<_

:~,.

Examp1le 5.,2.9' Autoc,Q:"elatio[n Erte1J8,ion


Given the partial autocorrelation sequence rx (0) I and rx (1) '~' 0.,5, let us find 'the set of allowable values for rx(2)~ assuming that r- (2) is real. For the 'first-order model 'we 'have

rl ~ - ,'ret) ' .' ~, ,r.x (0)


and ~al(I)
we have ,rx(2) =, -rzE],
,_ IQ[

,_ ~O

,_

,.,., '..

"5

== f\ = -O~5"Thus,

with a. first-order modeling error of

e'l

TX

(0)

[1 _, ~rl ~, O,.7? 12]


(l,)rx(l) == -OL7Sr2 +,0.25
<:;: .rx (2) <. m ..

Therefore" with

:1

r2 J

-<S 1 it follows

that ,-,0.5

In. 'the special case of rz = O~.rx(2) ~ 0.25 and, in the extreme cases of autocorrelation values are r, (2) := -0. S and rx (2) ~ 1~

r2 =

±l" the

In this section, we will derive a recursive algorithm for inverting a Toeplitz matrix'R, and show how Urisrecursion may be used to derive 'the Levinson recursion for solving a general
set of Toeplitz equations. Rpx:=, 'b
'n 1: ·"e

...

b ~ oy "...:L. • th"] decc . ~ derrve d oegm b snowing 'h ,I,:OW .., e decomposrtioa . ,. express 'the inverse of a 'Ioeplitz matrix :in terms of the Levinson-Durbin recursion wiU then be applied to this

i S ".. 5 2"7 In Section ': _:".:: may b us ed ,.e ....'~O vectors 3j and. the errors Ej'. 'The expression for Rp 1 to derive the

Toeplitz matrix invers ion re:cUTS ion.. ,


Let :Rp be a nonsingular Hermitian 'Ioeplitz matrix. Using

Eq. (5~89), taking the inverse of both sides" we have

'me decomposition.

given in

(5,.101)
:Multiplywg both sides of this equation by A.p on the left, and by A:: on the right gives the

THE lEVINSON-DURB.IN RECURSION

257

.. .Il..Ii"-' , ex P·1j"'P-:it"C"';·O~· desired .......... 'D" uc


JL",",'''~.III.,

!O~· 1[' .... f} 1 '!I .II

·R.~

di .. ,. +..... ..t &' f 'R~' . tnee D· is a:', ragon ill '.'» _ '.matrix, D] is easiily computed. " ,.erefore, fino t g t'be .Inverse oi·..• ,:p' '· din ':e S· simply involves applying the Levinson-Durbin recursion to 'the sequence rx (0), ~... , rJ;(p), ., th . _"'p ~ J: h ~ ..... In ~ (5 ,-.,~ - f.OnFJi.ID:ng matnx A ann perrormmg the mamx product ~.. Eq- tq.. 1'.".. 102),~ ae
ilp I 'Th- +

[iCl~a~' Ii, ~ .

'm'p' lie 5: '2"I. 1'0':, Inverting .'


1.
r

'.'11

,I'.'·tt- ,..[~ ,- -""'f.J~'

n tAr

T oiI!Iii1ll;#.I9' 1,..1:,· .1'" [LA.. :lila"-H.4~ J! 'lr~"""'~'4

Let us find the inverse of the 3 x 3 autocorrelation

matrix

R2=

[.[2 0 I! :1 i' 0. 21 '. il 1.


[

o "

02

We begin by using the Levinson-Durbin recursion to find "the.vectors for j .:= 0 I, 2..For.i := O~we have ao ~ '1 and Eo = .r:x (0) .= 2. With
~I

3j

a,' d the errors

Ej

rl
':'6" fir', - l"St"' ,_," ~ O_..:iIL ..."... U.le '1.~.I!:.,

:=

·~.rx{l)/rx(O)

== 0

m'"" 0' d[el :.i'is .,'.'

with
. )

"'111" ~~

error
~L~' '!c

e: 1 =: e: cO

1['1 ;

r2,]_ = 2"'" 1.

for the second-order model"


and

[~~

..

Thus, 1
32::=
1','

0.

·-1./2
and
,E2 ':z:::. tEl

[1 - 'r'i]
0

= 3/2

Finally, using Eq..(5 .. 02) 'we have for the inverse [of R:2., 1
.-

B-2 E

:=

A2;DIl,'EAf =

1 0 '-.1/2 ,. 0 1. 0.

1/2
0
00 I 0, 01

0
1../2
0

0 0

ri

Or

1 1 .0 -'1/2-

2/3

00 0 10 ·-1/2 0 I 1.
,

'1/2
0

l/~~
..0
-'

. 0

-1/3
-0 2/3, .. -·1/3
'

0'
I,

~.. 2/3

0 Ill.

-1/3

2/3

~:

...... 2'5""8'
... .:.., '
",

Note that the inverse ofa Toeplltz matrix is not, in general, 'Ioeplitz, However, as discussed in Section 2..3,.7 (p, 3.8)and ..as iUustrarediin tills example, the-inverse of a. symmetric 'Ioeplitz matrix is centrosymmetric [17] ..

ln the next example, Bq...(5..102) is used. to find the inverse autocorrelation matrix of a.

first-order autoregressive process ..


·. -52'''' !I Examp II:el'''''li '11

Let Rp be the autocorrelationmatrix


equation

ofan AR( I) process that is. generated by the difference

x(,u) = ax(n .~ 1) + w{n)


.. , -,
~
'.1

where la'~ ,< 1 and w(n) is zero mean whjte noise with a. variance lation sequence of x (n) is

0';,. Thus, the autocorre-

:·1 ':
'..I

.i .,

~:

and the autocorrelation matrix. is


[i

l
,tl(

.p

R-,-,

-2 ,-""'i"

1
a

,et. ]
,~, !to 'i'

.(l:P=·]

I:

1 ~ .op·
J -

UI'W

".

a2

,000P'--2
'
"

aP

.
'

a·p~l

!I.

'Ii

I,

From Example 5~2.. (p, 221) we know that the. jth-order model for x(n) is 2
Sj

.==.

. 11 [ 1\ 'tI

~IO!.~,·.,. .. .. ,. 'tI
'0' .'

' .. ':

'0":] T

and that the sequence of modeling errors is

and
Et. := ~
(!U:JI .

,~

',.

J:" ,~

> I'
.

Therefore,

1 '~a
A .p
~.

0
_.(l

Ii'

0 0
..

ii'

!fo

'.

0 0

0
.,

1
'•. .'

0
~

0 0 0
..

--

..
J

..

..

0 II 0 1-

0 0

0
0

1 '-a
0 l

.and
'D: ~l 'P'
-, -.

d-mg [E.o ,. € Jl [.-1 -1


+--,

~ ,....

e. ,

.. -'l ,.~l ];, Ep~ D. 'j' if P' .' -

.(J

_!_dt. rr I .'("1- .... '.':; 1~ 1 2·)·1 : ..., 2 . la.g Cl


.1

~ .. ~.,

I.I ·1

'I
~

'i I III~ ~ TU,C" I'

i:ti~"',-'~ON'" ",00' ,!l!:Iij!)l'N' "'-" ft.DJi .


a, ~J~: : .. ', . , :.,~

RJ.,1!

DCO'" II!

I.0S'iO··'ih...";" rV[f\'! ,.V-I .. " 'I 'II"

25-:-·9" .
_'"
,

....

(5~102)Itfollows 'that From Eq~


1 " •

,I

I 0

,-(l

o~
~,a
1. '. .

R~l
"'p

]
-'-

0
~ ~

1 0
~ ~

. . ..
'

0
0 0
,Ii,

0
0 0
.,

I,
1

1 - Q!:2

o
. ..
,

0:

1
0
'.
'

'.

~ +.

0
il
:i II
1

2 a: 1£1

..

..

,I

0 0-

0
0

,. ., ,.

~a.

[,-

1.

_.

O· -0
1.
'-€X

.'

. ,.

. .' .. .
,.

. .. .,
..
'

00
0 0
.. .

0
0
,.,
a-

'

..

10
.. ..
~

0
0

t.
0 0 0
~ ~ . ~

.. ~ ..
.. .' ..

0
0

1
_"a:
..

0
.

,.

'. ~ '.
~ ~ •.

0
'~ '

.,

0
-.

'.'

.
",

•.

t
-(rt.

0
I

-1 - ,cr
1
1

~a.'

0 .0
','

1
0 . ..
~

.. ., .. ..

~,

'!I,

0 0
0
,

"

"

0
,

1.

0
Q~ . ..
1

.. ,.

~a ., 0
'F
'

0
1

.. .. ~
.. '.
L

0
0 0
..' ...

0 0
,

[
I

-(l'

0 -0 '1
,.,

(12

.,

..

1.19

..
:

,.

..
.,

..

. ..
~

0
6
]l

0 0
. (I'
e-

1 ~a 01
~
,

:1

0 0

0
0
I

'Ii

.. .. ~ ., ..
'

..

],-at'

== -,2.
0'10

1.

_,a: 0
~

1 + «2
'~Q!'

0
0 0
,

.. ..

..
+

..
'

. .
.. .' ~

0 0
.

;-

.. •.
L

." ., =(l'

0 .0

'. .' ~

1 + a 1: -a'

Thus." we
.• ~

see

that Rp' 1 is centrosymmetric

and tri-diagonal: .

Equation (5~t02) provides. a. closed form expression for the. inverse of 3. Hermitian 'T"! . . om:·"iI'rP7' ,·f' .... . ices " ," , ~ D L roepnrz rnattrix m """.Ii.l.Uo 0: rule mauri..... A''/)' an d'I,'''..'" 11. 'H,"owever; sr ce ~:[L.,.. mamces may l........ '" . ". . .' ~ ~n' , .... tnese ~ L •. :1' ! ., I .. th Le ~ Du....L .' recursion, we m.ay ,1.101,[,_; thi recursion mto ~ s: lid ms ., UUJJ.lt np recursive y nsmg tne 'VlBSODrbm.. the evaluation of Eq. (5 ~1.02) and derive a method for recursively computing the inverse· of Rp ~In particular, given
« . ':fc~"" ",
c.
'0 .(
('rr ~ ., . :"

II

'-l RJ,
,-

A:' 1"... ,., H ::::::::::' 'J' D",~IA'


,'J'
t. "

.J

(5~l03)

wemay:find
'R,-l
,j

+1 ~A:'J+1 n-lA·1+1. _' ~J+l "


H

(S..l04) ,

/
,

'2:60:'"
,

.
,

THE'LNNSON

RfiCU:RSJON

i,

II

f aj+l (j
j ,I

,+ 1')

II

~
u
j

,_"_~_.

__

oiiiio_,;o,;",_,

-0
and

~. ~

! o:
I

~
I

(5,.105) a;+l (I) "'~l_,_,:';.... 1.


a;. ....

=_,...JI-."!""'''!''''!!'~=!!!!!'~

l = D.1+1 '-_,
I"!""!'''!'''I'~_'r.J_l

n-,l_
J
1 ••••••• __

o
.'
'-...011_[~.-;..,.;;;....;;;._____

II

(.5,.,106)
;

'".'

~ 1.

E,·...L~I

,-1

,1

. ' .. l:ng I,ncorpora b'· .' Eq


,1~,:S"

',,".,';,

(5: 1(0"5)' ,an", ("5' 106· )' In,,: 0' ,:I,., (5' ..104'':'1) we. ,Ih,ave, .' d .,.' lnt rEq .,' lL', .. " ".
l
I" ,I .

.
I
"!""!'''!111'' __ '_

,AJ{i
.
....-;,..;... ._

J",l_._[.-;;_J

!0 ~
~

, (a'~l!.'-1 )1' H III JI,


.

:
I!
~

1.

i5:
\.:.

[101'

1:07" )"
,.,_,:' r

litti ".c: ,-1 '. Sp, tmng uus expression ror X- :'j+llDto a sum as
,j,,'L..,'.

1.0:

~'

nOWS
I
__

,AjDJ':-lA,fi ! 0 =~_,_- ":i_, __ ,__ ,;,,_,_..!...=~~,; Or [10


:t

.
II

II
I,

+"
I

_£'

'::- I ,

1 J

aj+]_I,aj+l,"'~ : ~j+l ~_-.~_..J.l __ ,_,...


~;II:,-';:""';:'iii. ..... __ ,

-R:

(-'R

)H

I
I

..
f. Ii
I"

-R

':

'Ii;},+,

I(

g,

),u a,'~....L1
._.

Ii

' ."
r

iII.

,iT,

~1

(5.108)

r we see that Xj , appears in the upper-left corner of tile 'first tel1.n [~d that: the second term.

.' 'may be wntten as am outer pro d-met


E _,
I ,.

: ; ~~... '--'-'--- - - _,.. .. ~ . ,_,_._,


,'}" ''') -,l'
I

-'R (,-'R ')H ,a,,,+' 1·'3",+]'


{ifH)H

ii
I

-R: a;+,] J
__

a 1)+1
, , ,

-R",

.-t-;. ...... _""!!"""!!!"'!!!!'''!!!!'

j+l

1
-

(~'R r 1,3j+I )H

~'

II I

]
(5,.109)

1 R; I: -8" R (01" 'I'H .' ' I+'] "j+ll '€j+l


fro'
1

TN O'II.!e ',"til :&,~e:r:',.U!lOll.": gh. ... IS, notmng diffi""i:: cult aJoout' tbe s: ]1' '. de' "l1V3Jtl(lE! m a mdU!femab.ctu.'1 POUlt '.... 114.,. 1 :11.......... • "",;Ht.. , •• ,• .' w ,e: ,_~,.ll AI'+ili.. were . •... ro "~!OW~Dl,g" • of 'View!from a noeadonal point o:E' view it. is a bit complex, Since were is not much to be galaed in going through fum derivation" amer than a feeling: of 'acco;mpili.sbmemt~ the reader may wish. to jump' to the main, resuks 'which are contained in Eqs, (5.,I 10) and (5~ru,14)_Altho:ugb. d\e d~scmsjQn f;oUowm,g Bq, (5:.1 j,4} uses. these, results to derive ilie general Levinson. recursion, tb~srecursion is re-derived in the 'f.oUQwing section" without using these reseits,
iI- .• .,.,
,i,

· THElEVlNSON~URBIN RECURSlON

26,11

Therefore" we have R-i


j
,~

!i

R",I",+:1,1!
.J

~1
~

;;;;;;;;:;

'-,~~>e''''~'~'~'J,.,--~,-:

~ 0'
I,'

I
,

~,

"

0'

:0

.1

.:

fJ+l

,a,:" " '(a: 1)', ,;+:E' J+, "


ill

R:

H'
"

(5~110)

'\..! h ~ d' ., ed '.c' +. ~ .. iali ., W,.luC" IS tne " ,eswr.···'up.d ate equation .lor,R-1 '"Th' ~erecursIon IS lW'U,', ' 2Je. d'b settmg ~ 'j "
':Jl,.,
I 1 I I'Y 'j'

'D,~l &~I

'

=: '11 1~

E'o - ,rx (0) and 810 == I ~Then, from, :Rjli ~ £j, and aJ" the Levinson- Durbin recursion is used '[0 determine ay+ 1 and 'Ej+ 1 which" in. tum" are; then used in, Eq, (5~110) to perform the update
, nle If.:::..C' d" of' R'_,],~ Th e' 'f' ouowmg exampse: illustra ~~ ....s, ' proceoure, , :J me lIii - '. 1.\1[,1 stn we
'0'

Example 5.2., Ii 2 matrix,

RecW'Sive Bvaluation' of an.llll'erse' Autoco"elatioll Matrix'


the inverse of the 3,x3 autocorrelation

In this example we show how to. recursivelycompute


1

'I 12
m

1/2

Rz -

1/2

~/2
1

1/2 1/2
Jo

Initializing the recursion we have


'!)iIIIO"O","

-v. ,-,,-,

,I,

I'

,.

.,
,!,

.,

Then, using the Levinson-Durbin recursion to calculate the fi.r-st-orde'r model

~ r 1 '_ ~"r
and

X\,'

ll)"'{' ,':

11" I',X"

(:'O,·,),~, ]/':'2',·
,I, _~ ,:,

I -1/2
Computing the first -order error
El
W~ e

2] = E,n '['1 - r,~, == 3/4


0.

fi' nd the '..'nverse of' ',',.' !lUI, '1· '. '1l.~' ,':
1

R" 'I 't"0 be' i.' .. I'"

'~1 R

,-l

_, _

RQI'],

o
10

0
I

_' 1' "'. ......... + ,.El-a 11io.("a' 1')''"


1
D

003
I-

4 , +~,

-1/2'
1

,-1/2

]'_

4/3

-2/]

-2/3

4/3

(5~'11.1)

'262

TiLliE,' JI n

IL£,;

IJIDtlN'SQN""',",

v ,I" '.

-.',

'I

I:I'\.L \,.,IU,J\>;.J

D~,II'D,c'IO":'-'N""
_

Using the Levinson-Durbin

'we next generate the second-order model. With

r2, ~:
£2.
""'.',"_'

'Yt =: .rx(2)

,+

U']

(l)rx(l) = 1/4,

= E'],II '~' riJ = 2/3

-'Y1/E,], = ·-1/3

d ror we fino, fi la~~


: I

,'-'1':.",

'

Ii
82 ==:
;

1
_]-1!,1,,_',-2,',

o
-,1./2I, i
I

I Therefore tbe inverse of R2 is,


J)~l
..L~

-1./3 ,-1/3

,_
,_

4/3

,-2/3 0
-4/3

,~213

a.

000
-

':

3/2
,-··1/2

-1/2 -1/2
. ::'
,

I'

Ii

I!

3/2
-1/2

,-1/2

(5~112)
i:
!j

-1/2
~i;i!:lJ.:

3/2

"W'. ", .,~,I,~ '1~_ .;···:1l..1 ' ri)' 'f' " 'h:]

tl e des '~'r-~I resultIlL•. h [~,l~:l;.U"


I' .,1 '.

,A slight variation. of the recursion given in Eq, (5;.,110)may be derived, by exploiting the Toeplitz structure of 'Rp~Specifi-cally" since
, ".p'

,--: J--R-', J-1--- R* -, p


I .-',

Ii W.. ere
.. .' : .. , ':

"

J .1Sth,Ieex,c, lange matnx, taki,ng tn- ,f; mve.rse 0. ,- 0"th SLe-s- 0,f ,thi1.8equation, '. h ' -feb' - .. . ide - ' - ,-"1....
, '" : ;.' ., . ". ' . ", ' i.,
J. • ':

I"

:1

I ,+

.'

, I~.

"

:-

['

+1'1 I'

._.

'-'

..

,~I·

..

. . '.

'I

. ~ I'

I'

'."

'II

..

',"

'I,

-' ','- I'. ~e, u;slng - dJ'


",'
I. [" ': r

fact that

J-1 ~' J, we find

Applying this trenstormation

to the expression for Ri~':[ given in. Eq, (5~1.10) we have


)'.'f; (R:,j
-'-'-'~~~,~'-

- .~1 .
»:
-

_0

i ,0, '-j I ...


I :.
-:-'~-~-

Or

!:o!

'I

J +'
I -

1
"

€j+1.

J (aJ+l) (a,,:+ 11[) J ~ ,


'

--'R

'*.

-.R

. T :;'

THE.tfVlNS'ON~URBIN' .RECURSION'

·26·····.. 3··' ... -. .

Simplifying the expression on the right by multiplying by the exchange matrices, we find

(5.1.14)

Wl [IlIC!u IS

1I....~L_

•.

th 'de ~ 00" . . rieCUfSlon .. .. .• .1 ~:e .Iern

..

In 'the next section, we den ve the, 'Levinson recursion which 'may be used 'to solve a general set of Toeplitz equations of the fOnTI,

1 [b:'(""O-"'):' b'·,'''· .,- .-,·.o-,;~ -:.' ',.' .', ,."".,. ··:'"iJli1 derive (I), "..~, b(p), ]1.I ..', an arbitrary vector, Here, we w· r aenve an .IS equivalent fOnTI, 'Of the recursion that is based on the recursion given in. Eq, (5~110). The .. derivation proceeds as follow's...Suppose that we are, given the, solution to the Toeplitz [equations R.f~j· = bJ [and that we want to derive the solution to

.' -' -. where"', bp .-

I[

"

..

'-:---'.

"'j

1,1,

.'~

where

Applying Eq ~(5.,] 10)

"[0

Bq, (5... 116) we have


:' -

. ·R--,·-I: 0
. .."J~ 'I,
I .

_i

.
"

~-~~-~--i-~~-bj+w

Or 10'
I

.'

R"· +. - 1 a'~+l J.
.;

"

,IL.-,

1<:1'+ .

:.:,a·':4--1 .J

".Ill;.

:D

')"

~'I

bj+1

..
or
[aj'+l

;'

. + E."+:. 'I
,i

an "'j+].
.

_Ib'

(5~I18)

J" '.'

(-j"

'J

'11-.'

119")
iI", ;',
,,'

Equations (5~1] 8) and {5~.·~ along 'with "the Levinson-Durbin 19) the updates [ofaj constitute the Levinson recurslou .. ,

~~----~~--------~~~--~~----------~I

recursion for performing'

.~ ..

".

..' 264' .,

mE~ .

J rl::'1L/I'N" .. ~:. Q' LL""~'.·r.'

'5' 'N::' Riil::rU' .''b't'"r~0'" ,"_ r.~'--I.,:.r.'~'~J'

We have seen how"the Toeplitz structure of the all-pole Donna} equations IDS exploited in the Levinson-Durbin recursion to efficientl y solve a set of equations of the form
Rp3p -; 'Ep"U.l (5.12.0)

Toeplitz matrix. and '1Il! is a unit vector, There ,are many applications" however, that require solving a more general set of 'Toeplitz equations (5J21) where the vector on 'llie right-hand side is arbitrary, For example, as we saw in Chapter 4; Shanks ~method requires solving a set of Iinear equations of this form, In addition, we will 'S1eein Chapter 7 that the, design of am.:FIR digital Wiener filter requires finding the sol ution to the Wiener-Hopf equation

where Rp is a (p' + 1) x (p

+ I) Hermitian

Rpap

:.;;;;.;. r:dz

where Rp is a 'Ioeplitz matrix and I":dx is a vector of cross-correlations that is, in general, unrelated to the elements fun. Rp. Therefore, in this sre~ction6Y'ederive me original recursion developed 'by 'N~Levinson in 1947 for solving a general set ofHermitian Toeplitz equations of the. form. given in Eq, (5 ~121) where b is an arbitrary vector, This. recursion, known as ." 11th '" ,. the L evmson recursion, snnu I taneousiy soives ae f." IIOWing two sets 0"'f li ;IO! nnear equatrons h."
rx(O)
,,

r;(l)

r'*(2) ' x r;(l)

,. ~

r;(j) *( " r; ':} - 1) .r;'(j

1
I.

,E)
-:--

-'x(l)
fx(2)

r. . 'x· (0)
,rx(l)
.•

.
'

'~x(O)
'.

,_ 2)

,Qj(l) ,uj(2)
• .,

I;
I. ': ~
j

0
(5,.122) .. ~
..

-,0

rx(i) and

.r-x(i
r;(1)

.-

ID)

.rx(J

2)

. .. .
,

rx{"O)

I I
[

aj(j)

0
,

r.*(2)'. Jf.
~

r;(j)

III

b(O)
~

'x. (0)
r (111) ill"
]I"; '.

r;(j" - 1)

. ,I
r:

b(l) II x,j(2)
.,
,. , .

r;(J
..
+

·_,2)

b(2)
'.

''..
,r.

,.

II

".

fxj'-'''''
J '"

(. ,.

'2")"
'.

rx(O)

.• b(j)

Ii)
II

tor i == .0, I, .. ~~p~ Since Eq...(5..122) corresponds to "the, ail-pole normal equations, we will find that, embedded 'within the Levinson recursion, is the Levinson ..Durbin recursion, . In Section 5~2,.9we used the Toeplitz matrix inverse recursion to derive one form. of the Levinson recursion .. Here, we w~11rederive the recursion. without using this 'matrix inverse recursion, The derivation, in fact" will closely resemble that used for the Levinson-Durbin recursion IDn Section. 5.2~ The Levinson recursion "begins by finding the solution to Bqs .. (5..122) and (5~123) for

:==

0+The reselt is clearly seen. to. 'be


Eo

=~ rx(O)

and

xo(O) := b{O)/rx'(O)

Now let us assume that 'we knowthe solution to the jth-order equations and derive the solution to the (j +'l)st~orlier equations, Given the solution 8j to Eq, (5 .. 22) the. Levinson1

Durbin update [equation (5~14) gives us the solution

8j+l
Ii

to tbe (j

+ Ijst-order

equations

.rx(O)
t:x(l)

r ',x*{l)'"

'il

'f!

r*(' ,t\ ,x,j-)

r;:J ~

"( •

+ 1.,"
.. ,.,

.)

'._

:1

fx(O)
+

r*'(.' ,~)':

.xl

:~

.' •

..
+

_.
,1

[
[
' [ [

o
.,

_!

.'

(5~l24)

,T Ji!o:("]) 'X·' Tx{j

+ 1)

1"x(O)

o o

Exploiting the Hermitian Toeplirz property of :R.j+1" if we take the. cos plex conjugate of Eq. (5..124) and reverse 'the order of the rows and columns of Rj~:1~' Eq, (5,.,1-24) may 'be wri tten in, the, equivalent form
.r);(o.)-

r;(1) rx,(O)
,. ,. ..

.' .'
+

r;(j)

~U r~._ + 1)
" ]f;

II

aj+l (j
..

1)

,J
II

r~fI)
.. .,
I

r;(j - 1)
... ..

r*(})
"

"

aj+'],(j)
~ ~
"

[
[

.' .,

,_
II
I
I

-0
.. ~
.,

r(j)' ,x

.rr(j - 1.) .. .' ..


I.) '~x,(j)

rxl(O)

-xo
T'x(O).

(5',.125)

i,

.a·+1 (1) * :' J


I:, ,

0
,

rx(j

rx(l)

il

1.

~,,*

j+l ~ ~

Since it is assumed that the solution ~jlO Eq, (,5,.,123)is known, suppose that 'we,append a zero to Xj and then multiply this extended vector by 'Rj'+'~ as we did in ilie·l_g:vinson,-DuFbin , recursion. This, Ieads 'to the following set of linear equations .
.r:r;(O)
rx,(l) b(O)

rx(O)
.. .

·'... r* ,(:)~ 1)' -, x"

",t*[(', j,,':')"

bel)
.. ,.
'

.' .
'

.
I

.. .. ..
"b(j)

(5 ..126)

rx{j)

,. ('J,~_, 1): ,.. ' ..


;;t . ,

r;'(l)
r ,,(1)',
~, ,

Xj

,:' U·)'

r~,(j
where

+ I.)
by

1"';:(0.)

Of

aj is. given

..

Since 5j is not, in general, equal to b (} I}; the extended vee-tor 01 is not the solution to, the U' +, 1.) st-order equations, Consider what happens, however, if we form the sum

,+

[xJ:~

Xj(O)
xj(l)

, !I
!

b(O)

i:

b{l)
"

. ..
..

"'"

.'

, ~ 1:2'7) (5' " ,I,.' '.:

Xj(j)

aj+t(l.)

,iIIIlilh.LiL.

.&QUI

Iqj+~
1

"} b(
'.

,0

+ 1)" ~" ,,-}


L1 "
,

rE'j,-, 1

.-

(5~128)

then and the vector

Xj(O)
II'
1

,x~ [(-1)--_· j._-

* a·+1 ('") J
1 }.:

(S.129)
_.,,* a,' -:tIl" (-I)' J"T'" ~ :. .
I

L .. is the desired solution, Note that since


'

1
[€j

is 'real, the complex conjugate in Eq.. (5~128) ed rnus, .l.....Le' evmson recursion, as summanzed m '1: bl e S'- 6 is d fi di b , ~ .. ~ ........ ,..11 ,. may b e negjected, Th'" Wt;, tat o.o, ,. oenne li~[y Eqs. (5~128) and (,5,. 29) along with die Levinson-Durbin recursion, -" ]
111

Example! 5·.,3,.1 The' Le,:ms;Oll R,ec,W"Sion


Let ususe the Levmson recursion to solve thefollcwing set of Toeplitz equations,

4- 2
2

I 2

x (0)

x(l)

==
~

The, solution is as follows,


~ .. 1 Initia Jizat •. .. izanono f-' ..'L..... recursion. cne
EO

= ,.(0) =4

1:0(0) .= .b(O}/7'(O) := 9/4

1 For}~' ~O _",
-,

'::-

Yo

:rl = -Yo/Eo
a , 1.,_.

== ,7-(1) = 2
=: .~ t/2

Therefore we have, for the first-order model,

] -ji
1-

-'l·'

1/':2-:[ ; ':
l

Continuing 'we have,


lOt

= ("41 -. 11'11
=

] "'"

4(1 -

1/4) = 3
9/2)/3 = 1/2

&0 ~ .xo(O)r(l) ~ 2('9/4)

~= 9/2-

.' qI

[b(l) -&0 ]/€]

= (6 -

r:

TH:ElBIIN50N

RECURS10,N

'2""'6"',7'.'
", J ',' "" '.

1..,

fuitiali7i~

e recursion

(ay··
F

ao(O) = 1

(b) (c)

,xo(O) '~ b(O)/7x(O)

2~

For j
(a)
\V

:=:

O I, ~~~, p ,_ 1.
2 ,

)lj

=, 'xCi +,1)
Ii" " r J'T...

+ L{ ], j',(i')r~U' ,_ i ,+ I) a
!,

llj.;):

.r

Y:j·,·/;f" " , -j.'

(c)

FOii

i = 1.,. 2, ~. '. J

,a!j"l (i)
(d)

= ,Gj(i) + rj+ia;<j'

- i '+ 1)1

.q-j+t{j +,1) =: fj+I


E:j,+] -Ej [1

. '. (Ier {f) (g)

lrj+~ ~2,]
_ i ,+, ill)

8)

;=

L::mQ Xj (i)rx'tj

. qj: [ :~ [b(j

,+ 1) '_ ,5j l/-Ej+l


(l) =, Xj (i)

(h)

For.i = O~,1, . ,..'" j


Xj+l

+ lJi+laj ~1r(j ,_,i 1- I}


E

.(1)

Xj.,=,,1 (j

+ 1) = qj+J.

....4'

which leads to the first-order solution


..

01(1)

9/4
0

+, 1/2 I
I

-J/2
I

,I

I;

tl

3. For

i = .~ find that we
Y.I, ~ r(2)

,-

1./2

Therefore" I' 2

= 0 and the second-order 'model is simply


o

+.a] (l)r(l) ~ I + 2(-1/2) ~O-

'I' au, .inon, n ddi


+

IE"'2, __:_

d ,El ,atl:~

$1

=[

X,'

,,]

(O)t

,Xl

(1),' , ]

.r

(2)
,r(])

,. ' ' ~2 + 2(1./2) =: 3

q2 = [b(2) -151 ]/f:2

= (12 2
i, !

3)/3

=3
o,
.

Thus, the desired solution, is


Xl

(0)
(1)

2 =1

"'"'..... -'
4", ~

Xi

1/2 ; +·3 : -,1./2

268····
• : '_.1

TLrE,I' Ji

,!'I:\ nN' [U;; -rr, rt'"ON" .:,;}:."_

r ••

, R'L/""U" :.Ifji~JV1'-" ill ',£:\.r .. n·t"",~

Ge'' ,"j'1£ . , .. .~.,,;e ,:'.. ·neM.l· '. ·vm.~on .R' ··eca.rsw'n


T'1:..

.:

fun.ction
%

x=,g·le.v{r·", b)·

r:=:r' -(::) ;.
p~!,ength(b) ; a='l; x=b (.1) Ir (l} i'

,epsilo.n~:l'·'1} ;

.for

j=2 :;PJ g.=.r (2 :~ ) ,'*flipud (.OJ.) ; j

gamma~-g'/-epsilon;
a= [a.;' O]! + garEl.'ma* [0'; conj Irfli:pud raJ ).- ;. ]
ep$i.lon~eps;ilon*
[

fl -. abs.. gawm,a) "":;0 {


_l [~~ .....

d""".l,!10- :!Il:- ......t-» • j. l\or.-~-['" 'i,IIi!iJ


1Ii::.
I.

'll .•. I

'*·f··l. ~;n1,',."...;}~,~ ~ ,u .I:_'} ~

q- (b (j ) ~del'ta) lepsiloR.; x= lxr 0] -t. q*' Icorri "flipud. {,a} } .1 ;.

end
Figura 5·.16 A MATLAB programfor the: general Levinson recursion; .soiylng a: set of ToepUtz equations qf theform
:~p~p

=. busing

We conclude this section. by looking: at the computational complexity of the Levinson recursion. From Table 5.6 we see that at jtb step of the recursion there: are two divisions, 4(j + 1) multipiications, and .4j + 3 additions, Since there are pi steps in the recursion, 'the. total number of multiplications. and divisions that are required is
p-l

2:(4}
j~

+ 6) = 2p2. +4p

and the. number of additions. is


p-]

,L(4}
j~

+ 3) == 2p2 +. p

Therefore, the number of multiplications and divisions is proportional to p 2 for the. Levinson recursion compared with p3 for 'Gaussian elimination, and twice that required lor the Levinson-Durbin recursion ..A '~TLAB program {tor the general Levinson recursion is. prod g V..; ed " l'~ F •. 5' 16 JiUo.. ·n .
",.4i "lil'" ".

The Levinson-Durbin recursion provides en efficient algorithm for solving the ptb-order autocorrelation normal equations given in Eq .. (5,.5) using pZ .+ 2p multiplications and ~ ~biJI,·r.. '. b · 111 ,addi ,. • htrons I t ts POSSl: ,lIte"nowever, to red nee '0 ie Dum 'b 0-f IDiLlIUP:'1" .. if..iI... ner i '11·' tcanons r y apprr-xtmatesy
r

I.

a factor of 2 while maintaining approximately the same number of' additions using what IS '1.~~ tb S 1· ......... · • .. :~,: Tb k ey 1\...~:. t...!: d ~. known as . e ,"P' l~ Levinson R «ecurston [5']... 11~e·' to acmevmg tms rec uction rs 'to repIace the predictor polynomials Sj with symmetric vectors, Sj., and use these instead of ,3j in a "Levinson- like" recursion, Due to the symmetry of ,Sj" only half of the coefficients need to
I.

,I

·_ . ~r'""UJ Li::'V'~ ;.'.< "Llli.JUAI:,'.JI".,' THE-'SNM"',I',~nN·S·ON.' : ," R:crTfD'S'JON" .'.,


..
1 .. r ",'

269'·'··
" ".

be, computed. Vihat needs to. be done to make this. procedure work, is to develop a. recursion
for 'the symmetric vectors along with ,at procedure to find :ap from fhe symmetric vectors, The split Levinson algorithm was originally developed. by Delsarte and Genin for' the ,?ase of real data [5]., Later they extended this work by developing a split Schur recursion along with, at split Iattice filter structure [6] ~Although split algorithms have also been developed. for the complex case [1~2~ l2]~these algorithms are .a bit more complicated .. Ther-efore, in this section we will restrict our attention to the case of real data and follow the development origiaally presented in. [5] ,.In the 'following chapter, we derive. the split lattice 'filter structure .. , From our development of the Levinson-Durbin recursion, in Section ,5~2~ we have seen 1; that the solution to the pth-oroer normal equations is built UPI recursively from tbe sequence of reflection coefficients I' for j = 1, ...,'.; ,P' and the prediction error filters .A j (z) as follows j
Ai(z)

= Aj~l(Z) + rJt-'[~f~l(z)
l)' IS ~'iIl... '. UU,
•. rectprocail
=

;:

Ao(z)

==

($..130)

h W·. ,:ere,'j·:.zl).. AR(

==

( z~'iA"'j"-":

1 ~ l' If I potynomsat: 8 .11 we rep.ace

ith ~ Z WI.I Z: ~1 'In


t

Eq, (5~ 130) and multiply both sides by z satisfies a similar recursion giv-en by
~ .. R AR(. '). _, z: -IA'. j~'11 j ··Z,'

w'e find, as we did in. Section 5.2.1 that ,A (z)


(.... ).

Co. )'" .. Z.'

+ Fj A"..-l j
1

:Z:

·t

..

.A 0"<:'"
A, .El

· (z)

-,~.

-.'

i I

Let us .now define a second. set of polynomials, Sj(z:) and Sj·(z),. that. are formed! by setting the reflection coefficient in Eq..(5:~130) equal to rj := 1. [and.rj =: -1,. respectively,

S '-i1"
u,

(z) ='!j=l'Z - A'"


, .:),

(. ).. :
I ,I

+ Z.-:1 A·R ( ,). ·'l-l.z,'


.~
~

(5..132)

Z 1 (.,'

A: 'J" ~. (z) =, ''', -.iI......

.. ~

,~lA·I R '11,7"z) (1 ... J-~ """',


:.,.
1

(5~133)

These polynomials, arereferred to as the.sing_ularpredictorpol'ynomlals, .. By adding together 9 Eqs, (5".13.0) and (5~131) wier may relate ~(z) to Aj(z) and. A!'(z) as follows,
I

+ (1·;

'I

rj ). I ,).-, A','i '( 7)' + r: j'Z' S··' A' .llf.(). :.Cj(Z. ,= .


r-» '.

(5,.134)

Similarly, if we subtract Eq ..(5 .. 3],) from Eq. (5 ~130) 1 sj'(z) :is related to .~j (z) and Af(z) by
p . "*(" ('1 ~ i j)~j"'z)

WI~ 'find

that the singular polynomial

() =,A' fl."

-:' All.' 'Z ). j(


I

If we denote the coefficients ofthe SIngular predictor polynomials Sj(Z) and S;(z) by sj{i)
and

sj*' (,i), respectively,


S:jl[Z ..' (z)
=: '~. ~Sj.!>·z ('.) ,-i .
.1:=0
J

and
II)';:

[~(.).'.' .J :z:,

E' :=...'.

*(.,.).. s, 'l'IZ J ...

-!

i;:;:;O

:8R.e.ca]l

A j (z") ,an~t" s a result, a


given in. Eq. (5,,22)]~

'that we are assuming that the coefficients '(l.j" (Ie) are real, Therefore, Ai (z) is rOO!1ljru.g,a:te. s.ymmetric~ Ai (z)
the reciprocal po.],ynom~ru has the form given here [compare this 'to the definition of

Af

:=

(:z)

9It is imeresting to note that the singular predictor po:~ynamiai[s are identical to interest ~nspeech processing [4,,23]"

me, line JpectliOJ

pain ·that are of

'U~ ~ICl.nN·· DC'C' U\DC""QN' 1.I~~ ~"["'J ,":-::, n;~l.....f\~! . .' ,.'.'
'SON, .' , - .. "
.J

,. id let s ,_ [,, tIO)' an, ie Sj ,_ $j\,"',

'-U~);']IT ~ .... '!' Sj . i,..

, nd an,

vectors, then it follows. from Eqs, (5.,134) and (5.135) that


]

,,* ,_ [5i*('0') '.'.


Sj
=

'f

~ ,•• , ~

. *'(j~')llrbe Sj ., '.II .e
aj(j') aj(j _, 1)

the smguiar "pre, cor -' .. di etc e '\" 11,.


I",

'. II aj(j - 1) Ii
:1

" .

(5.,].36)
I

laJ(I)

I'
.,

aJ(j)

and'
,

'

','

1 IOj(l)

aj(j)

,I

I.

l1:j(j .- 1)
..
..

. ..
,

(5':13'7"')1
" . ,".II!! " .' ,. ~' •

tlj(j

- I)

OJ(l)

aj(j)

Therefore, 51 is a symmetric vector that is formed 'by taking tile symmetric (even) part of the vector ,a.), and sj' is an antisymmetric vector corresponding to. the antisymmetric (odd) part of 8j., 10 'Thus" Sj (z) is a symmetric 'polynomial
Sj(z) ~ ,z-J

Sj'(z'-l)

and

S; (z) is. an, antlsymmetric

polynomial,
S;(z) ~ -z~j S!(z'-l)

and as a result, both have linear phase ..]1

We now want to show that ~J and Isj satisfy a set of Toeplitz equations. Clearly from
T

Eq~ 1.36) it follows that (S.,

··",·, -'R--;.. {1,+rl·)R.. ,-:]8J,. ~,+R,i.'_ R l ;sJ :. ~i'B:J


I,

-''. ...,._" . Rjill} _' ~J,,[1 0'


,,!,.,

'I' .. '...

'~

i1i.']T' u,

and

' R..aR' <·']'i


then

'_ ~,,[['O';' '-'re} ,',

"~,, t

1',,~ 0"1

l:]T
":,

"j::.... '1'38'-) \ ..
""",

IOIt is,for this reason that the ,:al,gorimm we: are, dievel,opmn,g is called a "split" Levinson recarsion, i.e., ,it spJJts H1e
..:iI~ '_'I ~ • predrctor po'[ :ynonu;;u 8j Into Its symmetrtc ..._ ...:1' antJsymmet1'l,(;• parts. ana

l~'We,may also see this from the defiru.tiot1ls of Sj{z) and

S_;(z)

g~\!'enin, ,Eqs_ (5,,'m.32) and, (,5,.133),,,

TH'E SfUT lEWNSON ,RECURSION:· where we have defined


T' _, ,
'J ~

27'1

1 _L'r''J .' ~

Ej

(5~139)'"

Note that 1) may be evaluated from r:t, (i') and the coefficients of the singular predictor vector
Sj

as follows

7:) ~

i L.rx(i)sj(i)
i=O'

(5~140)

Similarly for sj we find that

:s7 satisfies the Toe,pruitt equa.tions


R i'~j = r'*"['1 0"" ." ~,*, 's ,~"
j! ~ ~ .• ,

[0',

';

,_

,-

1"J

-1'
:

where

(5.,142)
Given that sj' and sj satisfy a set of 'Ioeplitz equations, we may now' derive a recursion for '(he singular predictor polynomials 5j(z), sa ular to. what we did foe Aj{z) and. ,A;'(z) in the Levinson-Durbin recursion. To do. this, we must first show' bow to express Ai(z) in, terms of 8:J (l.) and Sj,+l (z). 'Beginning willi Eq, (5~ 134) we have Ai (z)
Replacing j by (j

= (1 + r

j) ~J (z)

_ Af(z)

(5,.143)

+ 1) in Eq,

(5.,.132) and solving for Af'(z) gives

At (z) = Z:~i+l(z) ,_ zAj'(z)


Substituting Eq..(5,.,l44) into Eq..(S'L] 43) 'we,'lind
A}(z) =; (I. + Tj) Si (z) - l~~+I(Z.) Th erefo re solv ing. for .. ·.:"'!Ii.r',l~:.'ii.."
~r... '.,···."l~·.:J&·,

(5~144)

+ ZA:i(z)
(5~'145)

A" '.j (." 7')' .I:~

. .,

which is the desired relation" Using, Eq, (5~14-5)we may now derive a recursion for Sf (z). '''[ii: b ,. '.h the Levinson- D bi recursion t ,. .",e aegm wrtn ;1;'1.. Le '. ;.ur '10
, '( . /: . /1. A}_] :,Z' A},z ')"== AJ~l ('Z ') + I' -1·R (")
~

,and nom from Bq'L (5.1.32) tbat ,Af~l{Z) may be expressed in terms of Aj-l(Z)
follows,

and ,~(z) as

Af-Th (z)
Substituting this [expression for

A:~l

:= z:Sp(Z) ,_ ,zAj -1(zJ

(z) into Eq. (5..146) 'we,have

Al(::) = Aj-Th (z)

+ rp;-l [ZSj(::)
(z)

- ZAJ-l (::)]

:= (l- rj)Aj-1

+ rjSj(z:)

272

!D,~'CU' '~JrV'f-'~. J' ~ .I'r .. THE' IU:-"-J1N', ~1',er'kN' , IlC-I ~' :_':,'R~C"~iIi..,'
j",

Now, as we saw in Eq, (5~145), ,Aj(z) maybe written in terms of Sr(z) and multiplying both sides of Eq, (5'..147) by (I - z) (1 _ z)Aj (z)

SJ-t-l (z.)'., Therefore,

= (1 _' rj)
"

(1. ~ z)Aj

-,], (z)

,+ rj(1 " " 'S, _ 1("") " ',J ~, ...

z) Sj' (z)

and 'Using Bq, (5..14.5) 'to eliminate (I ~ z)Aj (z) and (I - z)Aj~] (z.) 'we have after conibining

together commca terms,


"+l(Z) S,j' ',,' where we have defined
15j
-

,_ (1, "

+"7 -1," , )S"·(7)


I...

''j'

.... '

,-,1\.'7, , Dj, .....

'-1

(~L148)

= (1 - rj)(l
.

+ rI~'li)

(,

• .)

.1 . I,

}' .AI

"-t ;9",)':.: ",

In the time-domain, Eq, (5.148) becomes


i

(5,.150)
f thl smgmar me ~ 1!

L~ L C ed - . xpressecd' In terms .' wruc h IS rererre : 'to as tb tnree-term recurrence. t2 E' ae 'h 'it.,,~; il.._ .' predrctor vectors, tms three-term recurrence IS
,_,.",_...l~
II-

Oil

SJ+l(O)
SJ+l (1)

,I
J

!ljr(O)

0
t

.
'

.8](1.)

,-

..
r.

+
I:

,$j(O) . .,
•.

.$;.-,] (0)

- 11j'
1)

~
I

..

(5'+ 1.51.)

Sj+l(j)
Sj+l (J

Sj(j)
i

+ 1)

il
:1

$j(j
Sj

( . '~i-'[ J - 1)
0

( ~\ J.I

Equation (5,.148) Oft equivalently, Eq, (,5,,151), gives us ,amethod for computing the singular
predictor polynomial SJ+li (zJ given the previous two polynomials, ~},(z) and S,j--l (z), and the parameter ,8j ,. What is needed to complete 'the recursion is. a method for computing dj'~ Since the, reflection coefficieats F j are not known, we cannot use Eq. (5 .. 49) directly, l However, using: Eq..(5.139) we see that

"9
rj_:~ ]

+ :r~j

fj

1+ ,fi-l
Ej-l

However,

since

f.j :'~

,E) ~-1 (1 =
-' J

rJ')" then
',1\

T'

'_

(1 ~' rj)(l
.._,
Jl

Tj~l 'W"'. -';"I~C'~I ".ILU.,U"


'fr"

+ r,'}-"
~

+ :ri-l) "
Therefo
.L' ..... llll

= (1 _. r,·):tl
,J \
c:r<P~'
........ "

, , . '.

. + r:-_·n:) J
I '

.l.

.om Eq (" " 14'19':'1")' is equal'"" 0" 5 ....'. """"I"


",I ,'. ',,,'

,t'

.r .oj.'

.• '3 v), "',

m 1y', be ev'" nated using",' . , al _',"""'U ' , .(5.,1.53)


I

I'
I

, ,

~-

THE spur lEVINSON' RECURSfON·

27'3

along with Eq ~(5.140)+ Equations (5 ~140), (5 ~150)." land (5 * 1.53)constitute the split Levinson recursion for generating the singular predictorpolynomials ..The initial conditions :required

to begi n the recursion are


Soc.,:) . -'

: .I..... ~ ,] + r_. . + AoR'"" ']!IIII _1 [ .Ao{z) " ,: (z) ,'I .,0 ..., ·-1 R·

'

SI{.Z) := .A·o(z)·+ Z . Ao (z)

== I + z ~1

Now that we have a recursion for Sj(l.),~; all. that is left is to show how to derive the. prediction error filter .A.p(z) from the singular predictor polynomials, Prom Eq, (5~145) ith wn - .J == p we h nave
+

(1 ,_.z).Ap(Z)

==

(1 + rp)Sp{z) ._, Z.sp.+n (Z)

(5~155)

Multiplying 'both. sides by


Ap(z)

_',Z-l

and solving for Ap(.z) gives

:= z-.l Ap{z)

+. Sp+[ (z) -

Z-l

(1

+ :r.p)Sp(z)
(5.157)

which, in the coefficient domain becomes

With tbe initial condition ap (0)

that is missing is an.expression for -(1 + p)~ However, this may be computed from · ows.., S-" b ave and sp as .f01-1 Setting Z ~ '1·' 'iH",., ('S 55') 'we .:,.~ tn ~"
I •• ~

== ] 'we thus

r
'.W.. dI

have a rec ursi on. fo.r the coefficients ap (j) ..All


Sp+ 1
:

'::

~ . Th ererore, since

tben

(5~159)

1 TiL, .tlecomplete a form. that most efficient

Sp:I it Le v~,. ..., '"-,,...1 .-:. ~ if' ~ tnson R' tcecurston IS snmmanzecd'· IT": b" ,~' ..1'" ,IS presented, h .1D ra "le 'S'7 .1l ~ ..',' nowever, does. not take full advantage of the symmetries in. the. polynomials Sj (z) ..In its
m

form, for example, the evaluation of the coefficients


{j-n/2:~.
~ .2

1:j

would be accomplished

as follows

'"" [--(;)+ r
': '''"·X

(J"' -- 1_ sJ(,). "")] ,.. r


< .. -

,.

',.

j odd
.•

=-0

(j-])!2 ,

.~

"\' [.rx· ('") Tx ,} ."')] ..l" + (' - I'


I

l ') _Sp~...l.' , .

+ .rx ('·/'2') (·'/2') }.-'-_.Sj}..1

.;

J even

!.=O

274

"THE,lEViNSON' RECUR:S:tON

1.,

Initialize the recursion


(a)
,SQ
T~Ji :=

2 and

Sl

= [];,].]1'

(b)

= rx{O)
,=' ='

2.,

For j

= 1;, 2.~,,.. , . ~ p
'r}

«a)
fb)
{c)

'L1

~O

,''x (l)s} (i)

oJ

fj

/rI=' 1

5';'+1

(0) ::::8J+1 (j

,+ I) = 1
+ Sjft I) ~ 5js,,-1. (k - 1)

-( ) d
3. 4.

For k

= I, 2, '.~'.~i
(k) = SJ'(.k)
;=;

SiT]

Set (1 ,+, rp)

Er~~ (l)/ E,f~ Sw,H sp,(l)


,

For J
(a)

::::=:

l~2,~
r

••

P'

a,p ()')

=:

ap (j ~ I) +: Sp+l (j) - (1 + I'p).s::p (j - I)

5"

Done

In, addition, due to the symmetry of sj" only brut' of the coefficients need to be evaluated" thereby reducing: the number of multiplications and additions by approximately a factor of
two, "tth .. uded f t, 'I' -L . . ~ ra ' o.. ~ .' a1 A tnoug h.not mc 1 ;al.I ~i as apartottne spurLevmsonrecursronm To bI e 5 7 it ssatso possible 'to compute the reflection coefficient recursively from, the coefficients 8} Specificall y:, solving Eq. (5.152) for rj we find that
+

__

-,,"

l'
e

lJ, ..'
:m ~'

,_

+ :r~j-~l
0" :".~

.J
,

(5 160'):
: .~ ,I •. ••

i.., :'

.... ., ~ Th rrunat.1 con'dit ,e .' · ~ nnon c tms recursionis 'r'.0 lor ,j;.1...::: Example 5,.. . 1 4 S.... Le visso ",
, '"
I _

'''J''ILlh '~, ~ J!; 't!~;. .' '.,lit""~' ...

iI't, "-iU"I~I3i' ·.··Iill "

R'bC' .~

""""T.A in' iI tl)liIU'.iI'

Let us use the split Levinson, recursion to find the predictor polynomial corresponding to
the autocorrelation sequence:

" 'r ,= [',4''',1 " -,2"]'T. l'


,:!i"' t :1· :

In order to, make the discussion easierto follow, we will not employ the symmetry of Sj in .,.:l our CruCU 1'·' ijatJ.on.s~
T ... ! ' ".:ii1: •. .' I• Iainauzation

0"f the :1,

~ recmslOn+

~o :;;:;;: r(O)
~ -,2".'
""1) "

'~ 4[

s1 ~ [1 1 ]111' ,',
I
1
,

'THEsen tEVlNSON ,RECURSION'"

2,75

,2~For j

=:

ill,

1
T

~5: -

,.
"

1,

!,

()

, {}

I
II!
,
i

1 1

- 5/4
I
!I

2
0

=]/2
]

o
'p , .. ",-,

3'

'0""" ,_,' :-. 2··, W", ~ J': ~ ,

,,'.:1

"'" 1:..: <!); HQ

v '_ te

:I
~2

[4" m,

.1]1

'~1/2
I

= 9''./:2"
.,'

Again. using: 'Eq.{5"l51). to find ~1W'!! have


1
s3 I:
I'

0
,-,

ID/2

1
0

+,
-

1./2 1

91~O

=,2l5
~'21:':5::1
.' '"

'0
-

4~ Pinally, for'.i ~ 3 we have


r
i
!

'fJ =

[4, 1~ t, ~2]

-2/5
,c,

-2/5
1

= 6/5

Again. using Eq .. 5.l5l) 'to find (


II
,I

1.
..=.=..

r
I

S4.

we have
0
:1

Ii

S4\

Ii,
,I

2/5
~~- ,:,'
I ,"
,"

I
-

2/5'
,'1

+,

-_2/5 -·2/5
1.

,_ 4/15
I:

1.

1/3
'~,:":'.'

I,

-1/2
1.

2/3 1/3
:,'
, ,

'I
0

il

.0

Finall y, we compute the predictor polynomial. with, the help of Eqs (5~1.57) and (5~1,59),.,

27'6:

'THE[EVINSON RECURSION

Pirst, using Eq ..(5~159) we have

Then, from Eq..(5~157) 'we have

1
a3(1)

o
1

'---;1

..

!
I

1
s,,(I)

i
I-

o
(1..+ rJ)
1.
93(1) S3(2.)

,
I

1
[.

.' a3(2)
.[

s4(2) s4(3)

:1

t13(3)

Iacoroorati h am ., ....,1':: ." ncorporatmg tne va ues f.01' tlbe smgu Iar predictor vectors .gm.ves :1
I

0
1.
Q3(1)

, :1

:
I

a3(l)
a·.(2) :3 '. _

.+.

,_

4/3
-J ~'.

[I

a..(3)
. j. .' -

uJ,(2)
-

J
a3. (i)
;

0
1.

Solving this. equation recursively for the coefficients


1

we find

~·1/3 ·-1/3
2/'3
.,:-

5•. S'UMMA': '. R'y" 5


r ',' • ,. " " .' ,: .'_:: ".::~'~ .r ..
r

III this chapter, we have looked at several different algorithms for recursively solving a set of Toeplitz equations ..Each of these algorithms exploits the symmetries and redundancies of 'the Toeplitz matrix, This chapter began 'wid! a derivation of the Levinson- Durbin recursion

for sol ving a set of Hermitian Toeplitz equations in. which the right-band side is a unit vector,
Rp8p = €.p,Ul (5;.]61) Compared with ,0 (p3) operations that are required to solve a set of p linear equations using Gaussian elimination, the Levinson ... Durbin recursion requires 0 (p2) operations, U s-· ing the Levinson-Durbin recursion we were able to establish some important properties of the solution up to these equaiioas. It was shown" for example, that 3.f)1 generates a mini ..· mum phase polynomial if and only if the Toeplitz matrix. ~p. is positive definite and, in

T.'";Ii,," ',' SU1Y.lIIY~'.;tRY, "


.l
J

27'7
.'

.'

- ~.' case, me, reflection coefficients r j that are generated by the Levinson- DUrbin recursion
re bounded by one; in magnitude, I fj :1 -c 'I~ As a result, we 'w'ere able to establish that all-pole models that are 'formed using Prony's method and the autocorrelation method _ ~'t"- guaranteed to be stable, 'We 'were also able to demonstrate that. if 'Rp >, 0" then r x (k) . - l' ~. O~1, '.. ' ~,',,P represents a valid partial aetocorrelation sequence, and may be ex,-, Icdled, for k > P" This extrapolation 'may be performed by setting lrkll -< 1 for k ,> ,P' -, 1;0 finding the corresponding autocorrelation sequence using the inverse Levinson-Durbin recursion. In addition to the Levinson-Durbin recursion, 'we derived a number of other "Levinson';J!iIl.-, ~,~ • C 'j ,.rh 'iI!.,;, • .,t... .'L.. 'I'.... ., 'T'\.,,~,_'L.. .' ...' •. . e· ~recursions. ror exampie, WI, u t~ VIew that tne Levinson- ~w, u'ill. recursion 1[8a mapping ~ i,e from a. set of autocorrelarions .rx (k) to Hi Stetof filter coefficients ,(J:pt (k) and a set of reflection coefficients ri,t we saw that it 'W,(lS, possible to derive recursions to. find 'the,filter coefficients from the, reflection coefficients (step-up recursion), the reflection coefficients from the filter coefficients (step-down recursion), and the autocorrelation sequence from the reflection coefficients and the modeling error 'f;p (inverse Levinson-Durbin recursion), Two additional recursions that were developed for solving equations of the form given in Eq. (5.161) are the split Levinson recursion and the Schur recursion. The split Levinson, recursion reduces the computational requirements of the Levinson-Durbin recursion 'by approximately a. factor of two 'by introducing the set of singular 'vectors Sj,. The Schur recursion, on the other hand, achieves some modest computational savings by generating only the reflection coefficients from the autocorrelation sequence. The main advantage of the Schur recursion over the Levinson-Durbin and split Levinson recursions, however; is that it is suitable for parallel processing in ,8; multiprocessor environment, Finally, 'we derived the general Levinson recursion for solving a general set of Toeplitz equations of 'me, form
JL.
T

Rp,8':'p = b .. ..
"".

."

(5~162)

where b is an arbitrary vector, As 'w]ith the Levinson-Durbin recursion, the general 'Levinson. recursion requires 0 (p2) operations, In addition to the recursions developed in this chapter, there are other fast algorithms that may be of interest, With respect to the problem-of signal modeling, one particular algorithm ,. .. the ,f,' ,. leori hm tor ~ of Interest IS 4"t... iast covarlanc:e' ,agorlt·~,·:" Ii: so.'1vmg ",'L·' lilliecovariance 'norma '1'equations given in Eq, (4" 1. 27)., As 'with. the Levinson. and 'the Levinson-Durbin recursions, the fast covruian,oe algorithm requires on the order of ,p2 operations. Although, the approach that is .Ai .. j ... ~ ~'l ~ 'II <Eo'\-. T usee to denve me f; covariance ,;;,; ~:t.._..1;8. snnuar m styie b) Uta,t useed .:It!'or 'IU.Ie '. .' rast algontnm j[ Levinson
+'iIi..,
;li-IL

-=

.,

. recursion
., reg lures

'. id .. 1I~..l '~th' c: "",,ill .'+'1:.., ts consu era b111 more, mvorveo .. For example, ne 'last covariance atgonthrn " .~ rt"IIod • ~ .... Ad·· '. th r:J'~.J~ .' th '. ' _AI·' a tnne-upcate recursion :In. dU,' ,tHOU to '1' e 0raer-upaate recurSlon:at is f'ouna tn . It
1,j]_Y
T.

the 'Levinson recursion, This time-update recursion relates the solution to the covariance normal equations over 'the interval [L ~,U] to the solution. that is, obtained over the intervals :[L, +, 1 U]: and [L, U ,~ l]j., The details of this, recursion may be found in 118" 19]" Another recursion of interest is 'the. Trench. algorithm, which generalizes the Levinson recursion by .... 1:iii ,. '··1Ik !Q:1 t..'...... alJtovr"lng tb 'T: ~I~ matnx to ne non-symmetnc ~ iI1'lO '"leIj.. TheIrencns, aigonmm thererore , ae toepntz (.:.• LJ ,i ,:' T '. cmay 00 used. to sol ve 'We, Pade equations and the modified 'Yiullie=·Wrukerequations. In its most general form.the Trench algorithm gives both 'the inverse matrix along with the solution xp ('S' 1'62') -many, h uld be ........ ..t ... th Levmson-bke . recursions. l.:ifl".·...."" • to E q, 1(,~"Ji ,'I .. ,F+' .·,;l,~l it sLouk be n.OiOO.t.. In au.laon to I.. ' e '''I.e ., that, ~ dditi there. are other fast aigorithms for solving linear equations mat halve other types of structured matrices such as Hankel and.Vandermond matrices [8~,15],.
'j

0;'

i'T

t ...

a.

2· 7,8···.,

THE lEV,iNSON RECURSION

Refie~nces

~~~~~
V,=~1 I L JEi.rulLa... UJL1"

~~

~~

___

d 11....," .e ..:m:::: ·;F.~'I wtl:alce fil . tters lor ru,gh.ill, speec,h processing," Proc. 19871nt Coef. on A,cow'!., Speech; Sig, Proc., pp., 2 ill-24~ April 1987. 2. y, Bistritz, H,. Lev-Ari, and T., Kailath, "Immittance-type three-term Schur and Levinson recursions for quasi -Toeplitz and complex, Hermitian :matrices ,""SIAM Journal on Matrix Analysis awl' licati -3 4n A ppncanons. VO.!l.,. 12:~ D.O., '~ PiP. ,I" 7 - 52"0" J'] Y ill 9'9'L. .':! I" IT .w .•..' - ill ,3,. R., 'V~, hurchill and, J. W. Brown, 'Complex Vari'ahles and,l%pp.licati'ons, McGraw- Hill :New York, C 1990. 4\1, l., R., Deller" J. G. Proakis, and J. H'. L, Hansen, Discrete-time Processing of Speech Signals,
•. 11" .....~, 'lr~UCe
1]1 1

..l 1 'y B· '. , ,. Le Ari- ~ ,anU T,. ,~ "., :]stnt:z" 'R .V-"

'~cOmpJJeXl~Y

S'~ ,P;Delsarte and 'V. v. Genin, "The split Levinson algorithm," iEEE Trans'. Aeoust._, Speech; Sig. Proc., vol, ASSP.- 34." no, 3., pp, 470-478" June 19816+ ~ 'po 'D' 1 ,~ _.-I . f ".~ '1 ",1 '. ....... .-:Ii' .• u~ .r; Dersarte ano 'Y' "V' G' " "0' -n tne sp 1'·.. .,. '... , senm, "..• &-L, rtnng 0', C1assscar aigontl 1..,........... TIll' •. 1.~.ln JJlnear P'l:'DUIlICt!on ",ilk H meory, tt. IEEE 'Tram. ACO'USl.,~ Speech" Sig~Proc., vol, ,ASSP-35" no ...5." 'PP', f~5--653~ 1V:lay 1981. 7. J. Durbin, "The fitting ,of 'time series models," R'ev. lntemas. Statist . Inst. ~ vol, .23, pp,. 233~244,'" 1960.. 8~1 Gohberg, ·T.Kailath, and. I, Kolrracht, "Efficient solution of linear 8,ys~:ems. of equations with recursive structure,' Lin. A,lg~Appl. ",vol, 80" no, ,81~ 1986. ,.. 1. Gohb erg " ed, ~I. Schur ,UetMlls in Operator Theory and Signa! Processing; vol. I 8" B irkhauser,

h.1ocM:Hh~ml:r New 'Yo,rk" 1. '9-93.

Boston, '~986+ 1",~ G'~ H,. Golub and C:., f~, Loan, Matrix: Computations, J600s Hopkins University Press, Balti'Van more, 1'98-9'. H, M. H., Hayes and ~1.. A. Clements, "An efficient algorithm for computing Pisarenko's harmonic decomposition using Levinson's recursion," IEEE Trans. A,co:usL Speech; Sig. Proc.; vol, ".;\,SSPJ

34."no, 3., pp..4S5-49t, June 1986 ... 12. H,. Krishna and S. D., M,or-ge:La"':'The Levinson recurrence and fast algorithms for solving Teeplitz systems of linear equations," .iEEE Trans. AICOU3t." Speech; Sig. Proc. vol, ASSP~ 35,~no. ,6'1' pp..839-848~ June 1987:. '. AI Y H 'li', ,,. . hi 1-. 1 A ~.'I 1..,:: 13:..·.1S- 'Y" K ,'. :,ung and Y, 1_:,,, Hn, 'fi II ,gUIJ{ concnrrent mgon ·'th ana :pwpe:,: nee arcmtecture .c 00,1" _.llm doi Ii d tor vjJjng Toeplitz systems," .lEEE'Trans'. A.cousE., Speech; Sig. Proc ... vol, ASSP~ 3-1", pp, f~6-76",Feb. 1983. " t.4~ S" Lang and J., ,M'cCIeUan, "'A, simple, proof of stabiH~ for all-pole linear prediction, models," Proc. IEEE" vo1..67" pp, 860-861~ M,a.y 197"9. ~ th . '. oi '" . . 1~' H .•: T ~ A-" IrullU! T,.,' K,al"]at "'T- ". .3~;-' Lev-An mngu' lar f at tactortzsnon of strucmrec d H' "erm'~,tlan matrices, '~",. S·t. 'Ul_,Cnur' ,Methods: in Operator Theory and Signal Processing; 1., Gohberg, Ed.'1! ol, ill 8~Boston, Birkhauser, v
f .I~ ,. ,

'19,0 s::: , ··CIUJ.

1'10, N,. Levinson, "The Wiener rms error criterion in filter design and prediction," J. ,Math., Phys,. ~ vol.
, C' ,L.J, . 2'6' ill ",",'8: pp.,,·.lli-=:~~'· . '194''''' i.
'I' , ...•. ,.

17," J. Makhoul, '~~OTI! eigenvalues. of symmetric Toeplitz matrices," lEEE Trans. AcO'ust.,~Spe,ec:k, the Sig. Proc. ~vel, ASSP- 29',~, 4, pp, 868=812" Aug, 1'9-81., no, 1-8. J. :H:.,Mca.e~nan!, "Parametric Signa], Modeling," Chapter ~ in Advanced Topics in Signal Process.ing. Prentice- Hall, 'NJ~ 1988,.
j,

19~ M:. MOIrf, B,.. W. Dickenson, T:, Kai~~th~and A.. 'C,. G. Vieira, "Efficient solution ,of covariance equations for linear prediction," IEEE Trans~ ,Ac.O'ust" Speech, Sig. Proc., vol, ASSP- 25~ pp. 4296.433, Oct, 1917.. :20i' 'L, Pakula and S,. Kay, "Simple proofs of the 'minimum phase property of the prediction error filter," .IEEE Trans. AcO'us,t. ",Speech" Sig~ Proc., vol, ,AS-SP:=31, n,OI. 2t. .PP,. 501, April 'n 983.. ,21.., ,R. ,A.,Roberts and C,. T. Mu~]js~,DigitalSi'gnai Processing, Addison Wesley~.Reading, :MA+ 1981;, 22~ 1. SCb.UI!, "On power series 'which Me bounded in tlbe interior of the unit circle,' J., Reine Angew+ Math., 'Vol. 14-1, pp, 205=:232" m911, vol, 148" pp,. 1.22-125, 1.91~8., (Translation. of Parts ]I and II
may be found, in L Gohberg, pp., 31-59 and 61-88).

r~N'-"'" B.,J IN',~nlO"':',',-0, U':' -CTl"" 'Vl.


r'

,IA·.

"L

",'

"J

In this chapter we consider the, problem of estimating tbe pow',er spectral density 'Of a 'wide-sense stationary random, process, As discussed in Chapter 3; the, power spectrum is 'the Fourier transform of 'the. autocorrelation sequence, Therefore, estimating the power spectrum is, equivalent to estimating 'the, autocorrelation, 'For an, autocorrelation ergodic process, recall that
. ~i· 00

N .

],.N
.. }:

2"N+l

x(n+k)x'"(n),'

A~' n~-l.

I
.
:

= Tx{lr)

(8~I)

'Thus; if .x(n) 'is known for all n; estimating the power spectrum is straightforward, in theory since all that must be done, is to' determine the autocorrelation sequence r x (k) using Eq, (8.,.1).,and then compute its 'Fourier transform. However, there, are two difflculnes with this approach that make, spectrum estimation both an interesting and a challenging problem, 'First, the amount of data. that one 'bas to work with is, never unlimited and, in many cases, it may be very small. Such 3, Iimi tation may be an inherent characteristic of the data collection. . , "Ie . ~ tne . .,.:1 ~ · ~d "!lI.rlh,...... . 'process as is th case", €. ex amp. 'Ie, 'in. th aDilJ.YS1S of' seisrmc c am -!L an ear tnquake.' In tor trom "1: which the, signal persists for 'Only :8. short period of rime ..It jis. also possible, however, that a limited data set is imposed by the requirement. that the spectral eharacteristics of the. process, remain constant over the duration of the data record. In speech, lor example, a stationarity requirement 'will restrict the lell,gtb of time OV,f;[, w'hicb the signal may be assumed to be ., . ~ d' /I .. r..1 .h e apprexnnatety':J statronary to :a s: lew miI'l~ ';~.lil,1 iseconos or tess, TIl e second.Ii diffi CUJl~.IS-· that tl' •. !r .. AI:~", " &. ., ., ith .. ~,,..,t;. ... '. , nata IS etten corruptea.Ai b noise O[ contammateod WI: r "an, mrerrenng sig nal '. Th'I- ms, spectrum by ,. :. estimation Is a problem that involves estimating P.t (eiw) from a finite number of noisy measurements of x (n_) '. In some applications" however, estimating the, power spectrum, may c: '.,~~ ..... by ,:"8;Vin,g-pnor knowte dige about 'III... ,. I.·'· 'L.. ,. be tacihtated ..J b havi nOW tlh process is generated.. It may b ,e ne known, for example, 'that .r (a) is an autoregressive process or that. it consists of one OJ more sinusosds in noise, This, type of information may then, allow one to pararnetricall y estimate '" d the power spectrum or." per haps to extrapo Ia ~t... data or i antocorre 1'· ate me nata its anon ]U oruer to • II;;L -'=' '. ~ l,n- ith Improve me pe~, rormance 0f-a spectrum, estimation atgoru m. Sp eetrum estimatio-n is at problem 'that is impo rtant in a variety of different fields and .. applications. It was. shown in Chapter 7, lor example, that the frequency response of a
t 111 '. 111
,

-_._._._",-_.

---_

...

392

.SPECTRUM ESTlJoM.TION'

noncausal Wiener smoothing filter' is

H ,.'.

'e~}-,W)': (:,

,I

P,d,', (e_.JW)' . " .: ",_ Pd(eiw) .Pv(ej.~)


",
-.

where P,d(rejlr» is the power spectrum of.d (n:), the desired output 'Of the Wiener fiU~~:and PI' (eJU)) is the power spectrum of the noise." u(n)., Therefore, before a Wiener smoothing filter can be designed! and implemented, the power spectrum of both d (n) and v(n) must be determined, Since these power spectral densities are; not generally known a. priori, one Is 'faced with the problem of [estimating them from measurements .. Another application in , which spectrum estimation playsan importantrole is signal detection [3JLid tracking ..Suppose, for example, that a sonar array is placed on tbe ocean floor to listen for the narrow-band acoustic signals that are generated by the rotating machinery or propellers of a. ship Once a narrow-band signal is detected, the problem of interest is to estimate its center frequency
e-

in order to determine the ships direction or v[elocity~ Smce these narrow-band signals are typically recorded in a very' noisy environment, signal detection and frequency estimation
are nontrivial problems that. require robust, high-resolution spectrum estimation techniques, Other applications of' spectrum. estimation include harmonic analysis [and prediction, time. series. extrapolation and interpolation, spectral smoothing, bandwidth compression" beamforming and direction finding [25,J4],. The· approaches :for spectrum estimation may be generally categorized into one of two

." lasses, The first incledes the [classical or nonparametric methods that begin 'by estimating c the. autocorrelation s.equence rx (k) from a. given set of data, The powem-spectrum . is then
estimated by Fourier transforming the estimated autocorrelation sequence ..The 'second class includes. the nonclassical or' parametric ..approaches -, ~which are based on using a model for the process in order to estimate the power spectrum ..For example, if it is known 'that x (n) is. a pith-order autoregressive process, then measured values of x(n) may be used to' estimate the parameters of the all-pole model, ,ap (k).~and these estimated. model parameters, li p (k) l' may then, in turn, 'be used to estimate the power spectrum as follows:

. .k.=U ,."[r y"e


I.

~a L.J

'P'-'

[(k)I[e-,-jk~:
!

b ~ t:.1S c h apter wit tl e Cl',aSS11J:ca nonparametnc.. . spectrum estimation we'h ~ ith h .' '1 'Or .. ~ ~'egmn hi niques .. These methods .. which are described in Section 8.2, include. the periodogram, ~ the modified periodogram, Bartlett' s method, Welch~s method, ,an:d the Blackman-Tukey method, The minimum variance method is 'then considered iD. Section 8.3" This technique involves the. design of .8. narrow-band! filter 'bank to' generate a ~let of narrow-band random processes, The power spectrum at the center frequency of each bandpass filter is then ,.' . ~. iii.. .• ~L bd ~ .. ]'. '. fi estimated by measunng the power In. rue .IDlarrow·~·~,.. rocess and dividing by the Iter p bandwidth+ Next, in Section. 8.4, the maximum entropy method (MEM) is. presented." 'and it is shown that MEM is equivalent. to spectrum estimation using an all-pole model. Then." in Section 8... welook at.power' spectrum. estimation. techniques dtmt are based. OD a. p1ara~·~, metric model for the data, These models include moving average (MA)1' autoregressive (AR), and autoregressive moving average r(ARJv1[A);, Next, in Section 8 6, we consider frequency estimation algorithms 10:[' harmonic processes that consist of a sum of siausoids or complex exponentials fun. noise, These methods, sometimes referred to as noise sub=
a, a-

Nl:NPAR.M1ElR"C.MEfHODS

393
l -,'
"
"

space methods, include the Pisarenko harmonic. decomposition, Jv.IDSIC" the eigenvector method, and the minimum norm algorithm, Finally." in Section :!t1'~we look at principal components frequency estimadon. This approach, which.also assumes that the process. is harmoni 'tonus." a I k ~," 'h' .. anno.me" c. .ow- ran~. approximation to tl: e antocorre 1· anon matrix, w: i -c:h 18 tb'en -. i -I iocorporatled into a spectrum estimation algorithm such as the minimum variance method
L

hi-

'

or
"
'J

l\ffiM'
' •.

-' 1

"

,""J

.,1"

._' ..

,8,.2 NONPARAMETR'IC METHODS


In this section, we consider nonparametric techniques of spectrum estimation. These methods we based on the idea of estimati '1-"7 the autocorrelation sequence of---= a ... random process '-"""-'f.IIr from. a set of measured data, and. ' .e~..·takin:-, the Fouri,er translorm to obtain an estimate of ·............_._fi· the power, spet;~2mr We :egm. with the petiodogram; a nonparametnc method II: rst mtroduced by Schuster in 1:898.in his. study of periodicities in sunspot numbe-rs ![47.,49]"As we ,{ will see, aJ.tbough the periodogram is easy to compute", it is limited ~. its ability to produce an. accurate, estimate of the power spectrum, particularly for short data records, We 'win then examine a number of modifications to the periodogram that have been. proposedto imp,rove its statistical properties ..These include the modified periodogram, Bartlett's method" Welcb.;g method, and the Blackman-Tukey method, '
. • .. .... __ ok . ......

r:,......

__ ......

......

,'

Iii

,1

u,

....-~~~-

••

~.,"

'J

'111--'

..

a:-'iar

............... r.......

_.:.

.......................

~ ...................

__..a..:T·1

I .......• ..

'fo

,/

~e

r_

,~.--,.J 82'

---'L Peljlr d Tne.',· ',rlP"Gg'IV,m'

the

power spectrum of a wide-sense stationary random process is. the Fourier transform of autocorrelation seq UteDC-e,. -

:..JI

_.:

Therefore, ~_ec·oorm03es.rima~n iS1 iuome S:c::I:~~ au~orrel.aJtirQn estim,alion .·-,,'obl~m~ For an autocorrelation. ergodic process and an unlimited amount of data, the autocorrelauon seq uence may~,in. theory, be. determined using: the time-average
rx (J~)' =I\J·_;
,N
l!Jj~

~~ '. 4'·00

2N

".

;- ..,

+, 1 n..~~.N
.~ ~

'"" x .( -,+ ·-x,n-'* (..' :)',n - k-')"


j[.~ ~"
r ~ ,:' ._

(:8..2)

. - IS fj" ite iaterval 11 N' . 1 b:len ",'I.. L. tne H owever, W.·f x (n,)"'1 on y measuredd ,over .a ", rule mterva ~say n =,0:., autocorrelation sequence 'must be. estimated using, for example, Eq~(8 L2) with a finite SU~,
"'10

rx(k) = N
.

1 !'i-]

Lx(n + k)x*(n)
n=O

(8..3)

In order to ensure that the values o,f.x(n) that fall outside theintervat [O~N ~ 1] are ex-cluded from the sum, Eq, (8~3) 'will be rewritten as foUow-s::

fAk)

= ~ N~l:\(n..l.k)x*(n)
, 11:=0

k -:0,1, ... , N -1

(8.4)

.( P p,e.'r'.e.}'flJ ....,_
'" T 1 ). _

k=~'N+I

.Although defined, in terms of 'me estimated autocorrelation sequence Px (k).~it will be more convenient to express IDe periodogram directly in terms of 'the,process x{n)., This may be done as follows ... et x N' (a) be tile finite length signal of length N that is equal to x (n) over L 'the interval ,. 0~N - 'ill]~ and is zero otherwise, [
...... .'i' . .J'~'.,.: ~

. .'. ".. :I..r· ,(")

{'" x(n) "

'ii' "

0
r

.....

... ,',

-0tElerwise

..

,6· (s":1)'· ..•.....


',

Thus, x,N(n.) is tile product of x(.n.) with a rectangular window w\R·(n)"


xN(n)
]'0':.'

= W',R(n)x(n)
1',.... .
C'lQq'-1 OJiV',

(8~7)
be 'W'zrittenas follow·:s· .".- .',,:' '. :Lt..... ,~l!.L,. · JJ..:, ,.': ......
'!'. •

terms of .. :.cl..l!JWw.,'· ..

Y' ~jV".,

('n"):,. '~I''Le"estimated :. w, .-,. ·u. u .....

'

autocorre tion . ':.c v!ioiv1.J..'J.il...Y. " '


'J

U' 'e'"Ui I,.".e' '. 'ma' ' if1i' I' '. ~ V .

- rx(k) = N

100

,:1
XN(n

+k)xN(n)

- NXN(k)

*x~(-k)

(8.8)

G__aking !!:heFourier

ttansform:~oo(lSingthe convolution theorem, the periodogram becomes

where X N (ei4)) is the discrete-time Fourier transform of t'he N -point data, sequence X.ffl(n.)."
XN(e}l,'}

=} : XN(;)e::-j....,,~>(n)e-jn""·
00

N-l

)
.

n~ .. oo

,n=O

Thus, the periodogram is propo,rtfonrul_ to the squared magnitude may be easily computed using a D'jtil".ras, follows: ..
. (.)"OFT xN<I,n, ~ X (it.) =-+. N ·IIX····· IL 1'12 1 :~." (",,)"' .: N'·~·· N
1

of the DTFT of X.N (a), and


~

1/

4(8.10)

j}

=:,'

p..... P1!.'r' (' ej21l'lj N) . '.

& .'11.. '. _All ,,~., A "I.: 'IAl'LAB. program for 'the penooogram is grven In 'F'§ 'S 111 M ·'1g-..',."J[;'

The ,et=:, ""-om.., :n...ft~


. • l '.

.'~8.".•

'Inll"

fUnc t..i on Px = per..iod.ogr


Q;..

am tx:,. nl ,-.. :02)

iD·

ii:;;

Alt::);,

if 'liar-gin == 1
end;Px ~ .alb,s {fft (x ~nl':: n2} ,~' 1024,) ) . A2/ (n2,-'nl+:1) ;j

n.1 ~ 1,;

n2 = 1.en9'th (x)

P.x{l}

::zPx.(2) ;

end;'
1

:,

I'

Example 8.2. [I! PeriodOgTllm of 'Mite Noise is white noise witha 'Variance of [(7;"-; then Tx(k) =: o;[8(.k) ;and. the power spectrum is. a, constant"
]f.x(n)

, e.', P,:;-'(" J'"ClJ.!)'' := .......,2 WI:If


'. " 8 ta ~ 11.".'Ii ~ '. Sh O\VIl in F"19:., ,~•2' IS ,asampse reanzanon
The autocorrelation
0-f I

~ ~ :11 ....:: ~ '~ urut vanance w.uJ..'tte noise 0.f' ,J..engttit,'


I

,t."

N'

=: ':

32
"II]

..

sequence thst is [estimated using Eq. (8.. ) is shown.in Fig. :82b~Note thmt 3 ..

1

.
1

,I

J.

.. .. ••

:'''''''',

'.

• • • ....

.....

• • • .. .. .. • • • • • • ...

1'............

.. .. .. . . . . .....

i Q"r ~~~~~~~-T~~'~~~~~~~~~~~~~~~

-'I:
,
"_

[:

)
"

,I

-1" , -.2 0.

. .. .. .. .. . . .. .. .. .

'.,

..

· 1·· .... ····

~~.",.....__Jl....-..-=<

__

....L..........~~_....L......~~_--"-----~

__

"""""--~_~""""""'"

10·

1.5

20

""'!I~
,~

so

(a)

,I

....

(b)

..-..
''0 ~.

,n,j, \.II

'~.,

~.

..:::,_ ::~.-' .
',"

~. ..

~l'~"

..

'._:,. ", •

---:.

'_,

.-:

'"

m
~

-1
_:.::_

~ 1:1 ,.._'

,I']'

.
•• • • • • .. • • • • • : ".-

'I

.,

"

,'

,
L

I'

'..

.
• • ••

.. -_.

••

• ••

'.'

.. .. .. • • • • • .. .. • • • • ..

-2:

:g-,3'
-4
.

: ~

:: ~

:.

:-

~: :.. .. .. .. .. .. .

.:
O~'1

-:.. .
,01.2

.::

<.. . ~
..'
0.7
::
.,

~
.
.

.
1

..
~ _

-s~ _ o

.: .,

.:...

:-

___._,_,~_..L.._~

......._ __

·0.3

F{'EqueIlcy' (ul1l~m,of pQ,

0.4

_.__..__~o=--IL._~ o.s O.. S

........._----II._,.~.......____~-=--'

· ·

0:,,8

0',,,'9

(c)
Fig'ura 8·,~:2,~J A, sample re,alization oj unit variance w'hite noise [oflen.gth N .~ ,(

32 (b} The estimated autocorrelation sequence. (c) Tbeperiodogram .alo.ng 'with


the true
pO'W1e'Y

spectrum; .Px(eJ~)1 ~, l~,",',hi"eli s indicated by the dotted line .. i

... ' 396·


1 ,._).:

rL '-' S~-Ui'

~'iII\., 'J1fl1 ,~;;;

~liI:C"n.&jl·'~'na'· ,J ~tYlI!Jo1,: ' ,.' I

N'

...,1Ith W10Ug.\ll,

which is. the Fourier transform of ;;t (k), is shown, in Fig, 8.r.2c along with the true power spectrum. Note that although 'me periodogram is approximately equal to Px(ej~}" on the: average, we see that there is a considerable amount of variation in P per (eJ~) as co varies.
~, i;l

h.... ,':: ts nero lor {,~ > 32"~, 18.nonzero ~or a i. otner varues 0', A,~,'Thl Ie penocogram, ,," Ik ~ • ; .'it ~ s: aI'I' +;L .... '1. f T.. .'........ ..:1. r, (t')•~ '.' .

."~'iIi",. A s we will, ',' ..see, sue h vari ". vananons are not uncommon 'WI,ili

iod me pence ogram. ..


+L."",,·

As we now show, the periodogram 'has ,an interesting interpretation in terms of filter banks ..Let h,i{n) 'be an F1R filter of length N that is de-fined as follows:
'-'f

:1

l"'nUJ~
,I

o
' ,.. "'"

(:8~11)
','.,'

otherwise

. The freq rrequency response


,. ".~

0:f

.11..':: nus- f Ite 18. n ter '.

N -1 •. h() (} i.e; 6!» '.' '=::. ','['H.e H

L'

'-J'f1;.W'

n~u

',.' .

,~".

=e

'-J"(ti!:Ji~I:,'J.,1t.(N-[)f2' " ~""~J" "

'L.

",

sin '.'N" I(W - r.;.:~'L)'/2"'-'] - [' "'~~ " "'~ .' , N' ;5in~(.m- wi)/2]1
OJ!.I I,
'1i.U\ , .... ,

(:8.,12)

'

which, as illustrated in Fig. 8.3" is ,(1 bandpass filter with, a center frequency W'f~ and, a bandwidth that is approximately equal to 6w == 2Jr I N ~If a WSS random process x(n) is filtered with ,h,r, (n), then. the output process. is
y,(n)

= x(n)

* h;(n) =

k=..n.:-N+l

i:

x(k)hi(n - k)

= _!_
N

,k~ll=N..LT

i:

x(k)ej(Ji-l;r...

Furthermore, if the bandwidth of the filler is small enough so 'that the power spectrum of x(n) may be assumed to be approximately constant over- the passband. of the filter, 'then the

'

--'III

I.

.....- r· ' .. .g1..uJre' 8:. 3'


'I

i'

Ti[. .. ~,

.I. "u;

' ': 'n" ~+o''''d' ~''IIi.L t:-:", ' '" . , .. ~. , •. iIi,f~lQfJUpass,' F .... 11Nlg, I:U-fi. ,f!. 'UJ :t.:1i,e,J:'Ir.:-q,uency response .' i?) ,;,1:.,_. b -, ...i ,... , ft,· 'l'*,", . ,... d ,en t,!:" ~fi":l'+ ,6r use/' ,. ,1t.e,~~.Jo,er

bank interpretation of "

llii~periodogram.

I'OA. D JL ,i.A NOUl ,r.~It.MIV1.£:1L'n":I!V" M'~DS :,..: :1' rt'~~ ",,.' . fl'V,'
lj;. . .~

~'l"

.','

. . 39.'7'
.,

,("8.:' ~,J~; 1: ..f:)" ",

Thus, if 'Vileare able to estimate the power in Yi. (n ),',then the power spectrum at frequency WJ mav be estimated as follows: . ...

P '.x'···
A.

(:·e.iWi )' '~".'. N' Eh, ", ....,_

'f' ..

'lJ'·

,.:[".'-'111

(n- )

f,2 ill' ID

(8..15)

One simple yet very crude way to estimate 'the power is to use a one-point sample average, {' ~, ,-l't ..., ("N" E"': ',1Yin .(. ") ':, ,~= ;.Yfl " From Eq, (8.,13) we see- 'IDat this is equivalent to
lYi-(N """: 11) m.,l

12
2
jk"'J

l)f· = ~2 12::x(k)e!,k~

1 _,-=-~, . 1:

IN

(8.16)

r The'. efore
'.

,r;...l'

(8 ..m7) which is [equivalent to the. periodogram .. Thus; the periodogram may be viewed ;8.8 the estimate of the power. spectrum that is formed using a filter bank of bandpass filters as iliustrated in Fig. :g~4" with (eiW:i) being derived from a one-point sample average of tbe power in the filtered process Yi (n) ~Of coarse, 'the periodogram has such a 'til ter 'bank, "built into if' so that it is not necessary to implement the filter bank. However, in. Section '8.3

r.:

!:
,........-..--. __ Ii!"IIL.;!;j.jn.·~1 rw

W' ~ tn') .' If"'.}'

y] (n)

1..1 ., I12. J;J,d

.z] {n}
~.

_,.,

,JOL..
,

P. . l(eJ,(i!pI) x \.. .'


:;

":>'. .... (' N ~,."

m'):' -'

:I"---~~," ;!

I,

eJrliW2

W_R

(n)

)L.(.ti)
:r-! -....,........---~,."

............. --_......

....-...

P;: (eJ'~) ~-' l.2 (N ~, 1);

.. .'
.. '
Pi,glure' 8,A A. filler bank ~:nte.rpretati'()nof the pe,riadogram
zc (.n) 1---....---.......
I

,.

..-. , P z.e', jfJ!)I,)' (

Zi..(N' - ''II) . ,!I!.:

""'

3:9"'8'
." ••.•

we wiU [consider a generalization of this filter bank idea 'dial all ow s the filters to be data dependent and to have a. frequency response that varies, with the center frequency Wi ~

8.2,.,,2 Performa'nc'e of the PeriodOgram


In '[be previous section, it was shown 'that the periedogram is proportional tothe squared
'magnitude of the D'TFl' of the finite length sequence x N (n) ~Therefore, from a computationa! point of view, tbe periodogram is simple to' evaluate, In this section, we look at 'the performance of the periodogram, Ideally, as, the length of the data record increases, "the periodogram should converge to the: power spectrum orthe process, Pi.(e.jl1J)~ However, we must 'be careful when dis/cussing the convergence of the periodogram to P, (e·jW) ~ Sp-ecifically, since P p'lt!r(e.iw,) isa function o:fthe random variables x(O); '..r.. ~x(N), itis necessary to consider convergence in. a statistical. sense, Therefore, in thls section, we 'will look at '·00 ., h ".'L. mean-square ,coDvergen,oe 0 f tne penoeogram [[33 ,Aln'] 1.,'e'.'1' WIn b" interestedd m wnemer .. h ":",,4V,'"~ 'we "1:iII oe ~ or not
I

, ,." l;rj.;f.--"". ......-.. PI! ------r\...Ai.i'l

lim E I"~ I:
I'

['p
.

pt!r' ,[ 0,;..,

(',oiw).] -

rx:'«e' j.~)',]-12 )': :.':~'


: :
I

""='

[0"·[ ._'

1.1

In order

for the periodogram to be. mean-square

convergent, it is necessary that it be asymp(.8~ 18)

toticall y unbiased

and have a variance that goes to, zero as the data record length N' goes to infinity,
.~~.. N~~ ,\,t -. "-{I p'":. per-·f j.f1J')'..}' VaI":\:
'C'
r"

=-."

-.

(8 ..19)

In. other words, P'pef' (eJ'W) mnst be a consistent estimate: of the power spectrum (see Sec,. 328) : -irst, we IConsi ~ rtne bi 0fth penw.ogrlaffir. ide ""\.. mas . tne '·.... .d bon.'., ' ~"r, F·' '.

.....

n...-":od·log~a"m B=as' .' ~~o compute the'"':l~~. ·'b· Ir-~Ir" : :I!!:'~_ ... 11" ~ ~'I. ,_;.,'.
T

'1,':;[:'

"

expected value of rx(k): k: := 0., I, . .....N ,_. 1 is.

From. Eq. (8~4) It follows that the [expected value of rJ;.(k) for

f",I[the '1".1 UWi,.li~,.·I~·:' we: bezi y fi'ndi~n·g· e peri ...... sram A1j ...... th[" [.... l.'.".'"gln bv ,,'II,~.: ,,: '",
,
'.

/
fi(f.t(k)}
=

Af L
n~

i N,-'J:,-k
E{x{n +k)x*(n)}

N,-l~k
n=U

=N

L rAk) =

- ;r;:(k)

and, for 'Ie > N, the expected valee is zero, Using the conjugate symmetry of.r:c (k) we have

wnere

L,.

N '-Ikl
W:B [(k)-.. ··

,.

Ikl ~

(~L2l)

Ikl >.N

2Since. the: Bsrrlen window is applied.


is. ]TlJ contrast
'(.0

(0

the. autocorrelation seqaeaee,

wJj·(k) is,referred

eo as

a.lag w'ind':ow~ This

a data wi·r.idow that is, applied '[:o.x(n).

·,D\o\\,O' M',\~O' -, ::":. l NON,)rM,~ .... 40·~lfTR'·'··\p:C:· ·'L·:,C::Jln 'D"I!~'


_~l ..• ,

l_

.•.

3'·· 99.·········

Using Eq, (8.20) it follows that the expected value of the periodogram is E· .{.,p'": p-er" (~j'.tl:J.). }:. ._ E,' {•. H . e- ... '," ..
, '. .

"'"

_"

'"

.:

,.k '-~V+l'

··:·],. ~(:i.-). r "'" E .I'fr.e


.... "

jkw }I" -. -'"


1

!i
. '.,

k~-"N+l

:~: r,,,: '.,'k'( ;")'.)1:1'~1k{r) ··.·: ,e E'·-·{' .." E


::.. . ..

'

~"L
,r..' '".

,00,

Yx(k)wB,(k)e-ilm
of the productr; (k)
WB

, :8""' (.",,'. , +,

2:-2~1)'1
- .',

i=-cc,

Since E' {.F r« (eiw)- 1. is. the Fouriertransform nl IOn. rneorem ..... '" convoru It:' ... .~L,·'_ ". ,., we.h ave .

(k:)" using the frequency

,r

where WB(ej~~)')is the Fourier transform of the Bartlett window, W}J(k)1'


_, W.B:i!- j{jJ). _, _.1 C

.-[ sm. (N.w/:2.·). J1. ~ .... ."


~". _

N _. sin(w/2)

(8.. 4) 2

Thus, the expected value ·of the periodogram is the convolution of' the power spectrum P; (ej~) with. the Fourier transform of a Bartlett wi'ndow' and", therefore, the. periodogram is. a. biased estimate, However, since 'WN (ejal) converges to an impulse as N goes to mfimty~ 1I!-1... iod ' .,. me :peno ogram IS asymptoucaufly un b <la'Se:"d
'_ .

'. ~ HiJl .. E
~ ,.'=-+00
1 -

~ .}= P.x . I
p.pe.l'
."

(e .) :
'

. J~

(e ..)

1W'

(8.. 5) 2

To illustrate the effect of the lag window, 'IV B (k)~ on the expected value of the periodogram, consider a. random process consisting of a random phase sinusoid in white. noise x{n)

==

A sin(nw

+ t/J) + v(n)

'where ,tP· is a random variable that is uniformly dsstributed over the interval [-Jr.! If ]" and 'v{n) is white noise with a variance u;* The: powle:r spectrum of x(n) is I . ~ 2[.1.i! -""!t'2 ' """A ( ( )"' Px··f· ..-= 0. if! +. 2.r~-·" :.i.4.ow ~ UJo:) +..~.U' OW" + COo)]1' (.J~.(:r})
1

Therefore, it follows fro-m. Eq~(8 .. 3) that 2

the expected value of the periodogram is.

(8 26)'1
• !to _ •• ~'.'

The power spectrum Px,(ejfd~}) and the expected value of the. periodogram are shown in 'Fig. 8F5 for .N· .= 164~ 'There Me two effects that should be noted :in this example,..First, is the spectral smoothing that is produced by WB(ej(J)~ 'which leads to a spreading of the power in the sinusoid over a band of frequencies that has at bandwidth of approximately 41.r N . / The second effect is the power leakage through the. sidelobes of the window, which creates -, .111 . k ± !r.· ::-" ;·s ~n' . Sectton I':.~",~ ~·',.][t .. 8"2' 3 ., secon d ary spectral.. peaxs at .L~~:wenCH;;S ~ Wo ,~., 2:.'~v'·k .. A we WI:I. see m S «(lk; is possible for these sidelobes to mask Iow-level narrowband components ..
~J,"M1I"

. . __...

..

- _..

- _. _.

snat« fll'1 IC:S:!1~J


To L\.,.. ,'.l\li,.ilVUl .J:'"
.

.,;!,;.To"O,: .-N' , ~ "J'Vb"1Jl ,!, " .,

(r~jw.'t P.,:1;:.\11;;: J

~--~~------~~~-------=----------~~--------~------~~m.
{a)

[.

(b)
'Fi:gure [8,,5 (a) The 'power spectrum of a single sinusoidin 'value of the periodogram:

white' noise and (b)' the expected

EXGlmple

8.2.,2 Periodogram ofa S.inusoid' ,in Noise

Let x (n) be a wide-sense stationary process consisting of a random phase sinusoid in unit ~ ~ variance W',hit [enoise '!1

'With ,A ~ 5., 'Wo ,= O,4.iT,~and lv :::::=: 164, fifty different realizatiens of 'this process were ,. ...l St." '.~" III dh generate d an." U~[epenocd ogram 0 f eaet was computed, Shown rn JP.l.g,. 8'",00 IS an ovenay I each s L;...,. of the 50,periodO,grams" Note that although each periodogram has a peak at approximately u) = 0~41f~there is a considerable amount of variation from one periodogram to the next In. Fig. 8..6b 'file. average of all 5,0 periodograms is shown .. This average is .approximately , equal to the expected value given in Eq ..(8,~26)," yincreasing the number of data values B to N =:: 256~ we obtain the periodograms shown in Fig, 8 ~6c and d. Notre that with, the additional data" the power in '[be sinusoid. is spread. out over a, much narrower band of freq uencJ.~is,,~ ; ,.'. ,J" ,.
i
r

40[1

~
~I"

....
••••••••

-.s
1l.1l 0.2. t!It:1

.
O.:~

nequan..'y fusrlls

QJ..4'

C.$

cil pi)

.g.J.!i;

0..7"

;&

G.9

"Ii

Ii}~

(I;,g

'1

{a)

(bJ
.

ao

,~,

::s

..

._.. ....

--40

...

..,

.. ' . .

.
CIe

~~~--~~--~~--~-~~~~~~--~
0.'

o, 1·

G..<! ;

e.. 3

0..4

F'I':iIq!J~'

tl.'s ,~. (u~,~ iMJ.

;jj~"

IJ;Sl

'1

tL]

,ID.2,

!fila,

~~tl,U"lo~,iiiI~

0 .,$

i;I~

iIll7

Ci'.

0...9

'j

(c:)

(d)

= N' = 256 data

Fi'gure ,8.. 1 The periodo gram of« sin uso id in white noise" (.aJ Overiay plot of 50pe'riodograms using' 6 N 64 data values and (b) the 'periodogram av€'.ra,ge. I(c) Overlay plot of 50 periodograms using

values and (d) the ,PI€ riodog ram average.

In addition to 'biasing the periodogram, the smoothing that is introduced by the Bartlett window also limits the ability of the periodogram to resolve closely-spaced narrowband () C .. f ~ id components," In x::n v.onss~d & examp Ie, a ramdom process constsnng 0: "two smUSOI'.S ier:"ror in 'white noise
I,~

x (n)

==

A.I

siu(n'O)l .' 411) -.+ A2 sin{ 11-W2

.+, ¢J2) + v (n) . + Uo(w·+ ]1

where t/J.l and l/J2 are uncorrelated uniformly distributed random variables and where v (n) is whlte noise with ,3; variance of "The power spectrum of x (n) is
P~(elf!)')
c.

1 = 2 + 2"JlA] 2[_uo(w ,_ [WI)) +'Uo.(W


(Tv

(r:.,

+,001).1:

]: +, i1f A2i uo·(w 1. 2:[

[Wi)

lY2)_!j

and the expected value of the :periodo.~am is E ~PPtr(ejl»)}. .~

J._Px(ei"') 2.:rr
It!

* WB(ei"')
u,·
D I. ,e'

:=: IT 2;

+' -4 A. '['1 [W··· 1 ~ 2: '..

(.

,j. (w--wd')' "

"

+ 'W"·Bit! j. (W+a)l ).).] (.'


-'
:.,

':.

I c,.

! -() +4.A22 [ Ws{eJ{O'-OO2',)

+ WB(e) "(

(J)

+'W2)i~ )]

(8.~27)

•• __

._-

_ __....__.&.. ..............

__

•••

402,
"t..,~: ..';J!.. '., WW'C:I.l 1S

SPEaRUM E5nMAflON

hi s: own
r-

W B' (eiw) increases as the. data record length decreases, for a given data, record length" N" thereis a. nmit on h ow crosery 'two S,UTIl.llSOl.:S or two narrowuaD·,d ,processes may i.. I'ocatec ~ '11'~ ., id L . .d , =. tie i before they can no longer be resolved. One way to define this, resolution lnnit is to set ~w equal to the width. of "themain lobe of the spectral window:" WB (,ejOO)" at. its "half-power" or 6 dB point, For ·the Bartlett window, .8,,(0 = O.,,s,9(21t/ ), which 'means that the resolution N of' the eriod 0: ram '1'~
1],0 ~
." '.

ill. P" -I,g.. 8:'7:~ , ~" lor:


~

A1

64' S""" '+d-'·';IL' = A :2 dllU N' =- :,,' ·~,m,ce {Jl:le"\iVl' U!Ilo.f tl-.. mm.n . ,', ue
_A
'FL..·

' "e Oi ,.0b'"

p" ".

'.

' ."

g.'."
,

"

u.J. ~ it'

Ie< .:3

Res [ P pel' (ej",)

= 0.89

z:

(~L28)

It is im portant to note, however, that Eq~(8~28) is nothing more than, a rule ,of thumb' that should only be used as a guideline in determining the amount 'Of data that. is necessary for a,given resolution, There: is nothing sacred, for example, 'with the proportionality constant Or'89(2rr)~ On the other hand, what is important is "titre fact that 'the resolution is inversely proportional to the amount of dam, .N,. Nevertheless, although the definition of /ltJ) given in Eq, (8~2:8)is somewhat arbitrary ~ one generally 'finds that it is, difticuIt to,resolve detai Is

iu the spectrum that are much fmer than this,


.px(ej(O)
:I,
;0

, 1
,

--.4, 2',n,:,....
".

]-

--=-+--_~_~

_ __"I~_
mil

...........

........... ----~~-~~--~~-~-----_..,

(a)
EJl j3'per~ej.~J} .
n,

I, 'I
,I

,!
1

I, I'

I
I

I I

r
(b)

,r::

-=tr

(l)

Figure 8\.,7 fa) The'power spectrum oftwo si."'nus.o,idsin. white noise and (b) the expected value of the periodogram:

InA.nA NO'" N··· '_~." \r.M.~"

.t .~,iL'TD·.M:C

r~ll' .. ' ,lint'lL] f J! ,'. :'

".~,CTUO·.05' ..
I • ",'

403

'

-s
O.~ 'M ~

'F~.~o{nI~t.:If~

as

!1.:i"

'_'1'i}L '.-------' --=--__ __


o!]

O:m

0.;2

'11,$

Iflj""etqU'!:lOO'f (tl rsts 01 p1]

I!LI!i

'----"--

6.5

__

ns

-'---__.__I. {ll] n:9.

___'______J

e.s

(b)

:2iJ ......

:......

~O'--~--~~--~.
0, 0ll0_1 (1-:2
1);:1; [1)..1:.

~.~
~>.;;;. ,~.~rit5

~Ei ~ 1iH1

L__

~~

o. j

__

~~

.Q.:g

'Il.!a

'il

I([C)
FigUfiS
l'~·."

(d)
lil'~·'" ' .. ~. "
I ' .• •. 1.[,,_ '.J " .' '.1'1, ,r ·IP:;LJl~.

B 8 The periodogram 0' ~,t " sinusoids in 'white M';'C-O w~'*'h~'I - 0 4n- and "'1 ffi.l'O" 0 45:n- (a) ·.U.V·... Overlay plot of 50 periodo gramr. using N' = 40 data values and (b) the. ensemble Q1lerag.e·~(c) Overlay pilot of 50 periodograms using N :=: 64 data values and [d} the ensemble average .
l-:!I!"_'
,r ","

Jr"'l.·[

· ..

!J.'.il'.

" .. "

III

I'

,"

:.

~,~

jj...

,"l.·~.ii

.'_'.

. x,Q:mple 8 .• •. Periodogram' Resolution E 23


Let x (.n)- he a random process consisting of two equal amplitude sinusoids in unit variance \v hi te noise
x (n) =. A. sin (nw'l

+ ,if;],)- + A sin.(nUJ2 + ,th..) + v (n)

where

O..4}l" ~. ~ O.45.jf.!. and A := 5~Using Eq, (8.,28).~ with fiw o..o.S1r we see that in. order to resolve the two narrowband componen ts, a darn. record length on the order of N == 36 is required. Using 1\1 -= 40; an overlay of .sO periodograms is shown
'Wl ~

in Fig, :ttSa., It is clear from, this figure that it is not always possible to resolve the t\VO sinusoidal components when ,AI == 40 ..The average of the 50 peri odograms, shown in 8.8b., illustrates the overlap of the two Bartlett windows, However, as shown in Figures 8,.S.C and d, if N == 64 'then the two sinusoids are clearly resolved ..

Variion;ce of Ii'h,e·Per'i:odlog:ram. '\Vte 'have seen that the periodogram is :an asymptoticall y unbiased estimate of the power spectrum .. In order fOT it to be a. consistent estimate, it is necessary that the variance go to' zero as N ~, 00,. Unfortunately, it is difficult to eval'. ~ ':11.. ~ .".l"" d Mate+'II!.., variance 0-f t:be penord 0gram. f anar bi tr.ary process,r (n,j si tne tor j since tne vanance oepeacs .

p:._:z.

SPECTRUM ESTlMAnON'

on the fourth-order moments of the process. However:" as we show next, the variance may be evaluated :lin'the special case of w hi te 'Gaussian noise. Let x (n) be a Gaussian white noise process with variance IU sing: Eq, (8,,9) the
peri od ogram may be expressed as fOUOYlS:,

\hat{P}_{per}(e^{j\omega}) = \frac{1}{N}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1} x(k)\,x^*(l)\,e^{-j(k-l)\omega}

so that the second-order moment of the periodogram is

E\left\{\hat{P}_{per}(e^{j\omega_1})\,\hat{P}_{per}(e^{j\omega_2})\right\}
  = \frac{1}{N^2}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}
    E\{x(k)x^*(l)x(m)x^*(n)\}\,e^{-j(k-l)\omega_1}\,e^{-j(m-n)\omega_2}     (8.30)

which depends on the fourth-order moments of x(n). Since x(n) is Gaussian, we may use the moment factoring theorem to simplify these moments [17,44]. For complex Gaussian random variables, the moment factoring theorem is³

E\{x(k)x^*(l)x(m)x^*(n)\}
  = E\{x(k)x^*(l)\}\,E\{x(m)x^*(n)\} + E\{x(k)x^*(n)\}\,E\{x(m)x^*(l)\}     (8.31)

³Note that this is different from the moment factoring theorem for real Gaussian random variables, which contains three terms instead of two.

Substituting Eq. (8.31) into Eq. (8.30), the second-order moment of the periodogram becomes a sum of two terms. The first term contains products of E\{x(k)x^*(l)\} with E\{x(m)x^*(n)\}. For white noise, these terms are equal to \sigma_x^4 when k = l and m = n, and they are equal to zero otherwise. Thus, the first term simplifies to

\frac{1}{N^2}\sum_{k=0}^{N-1}\sum_{m=0}^{N-1}\sigma_x^4 = \sigma_x^4     (8.32)

The second term, on the other hand, contains products of E\{x(k)x^*(n)\} with E\{x(m)x^*(l)\}. Again, for white noise, these terms are equal to \sigma_x^4 when k = n and l = m, and they are equal to zero otherwise. Therefore, the second term becomes

\frac{\sigma_x^4}{N^2}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1} e^{-j(k-l)\omega_1}\,e^{j(k-l)\omega_2}
  = \sigma_x^4\left[\frac{\sin N(\omega_1-\omega_2)/2}{N\sin(\omega_1-\omega_2)/2}\right]^2     (8.33)

Combining Eq. (8.32) and Eq. (8.33), it follows that

E\left\{\hat{P}_{per}(e^{j\omega_1})\,\hat{P}_{per}(e^{j\omega_2})\right\}
  = \sigma_x^4\left\{1 + \left[\frac{\sin N(\omega_1-\omega_2)/2}{N\sin(\omega_1-\omega_2)/2}\right]^2\right\}     (8.34)

Since E\{\hat{P}_{per}(e^{j\omega})\} = \sigma_x^2 for white noise, the covariance of the periodogram is

Cov\left\{\hat{P}_{per}(e^{j\omega_1}),\hat{P}_{per}(e^{j\omega_2})\right\}
  = E\left\{\hat{P}_{per}(e^{j\omega_1})\hat{P}_{per}(e^{j\omega_2})\right\}
    - E\left\{\hat{P}_{per}(e^{j\omega_1})\right\}E\left\{\hat{P}_{per}(e^{j\omega_2})\right\}
  = \sigma_x^4\left[\frac{\sin N(\omega_1-\omega_2)/2}{N\sin(\omega_1-\omega_2)/2}\right]^2     (8.35)

Finally, setting \omega_1 = \omega_2 = \omega, we have, for the variance,

Var\left\{\hat{P}_{per}(e^{j\omega})\right\} = \sigma_x^4     (8.36)

Thus, the variance does not go to zero as N \to \infty, and the periodogram is not a consistent estimate of the power spectrum. In fact, since P_x(e^{j\omega}) = \sigma_x^2 for white noise, the variance of the periodogram of white Gaussian noise is proportional to the square of the power spectrum,

Var\left\{\hat{P}_{per}(e^{j\omega})\right\} = P_x^2(e^{j\omega})
Example 8.2.4 Periodogram of White Noise

Let x(n) be white Gaussian noise with P_x(e^{j\omega}) = 1. From Eq. (8.23) it follows that the expected value of the periodogram is equal to one,

E\left\{\hat{P}_{per}(e^{j\omega})\right\} = 1

and from Eq. (8.36) it follows that the variance is also equal to one,

Var\left\{\hat{P}_{per}(e^{j\omega})\right\} = 1

Thus, although the periodogram is unbiased, the variance is equal to a constant that is independent of the data record length N. Shown in Fig. 8.9a, c, and e are overlay plots of 50 periodograms of white noise that were generated using data records of length N = 64, 128, and 256, respectively, and shown in Fig. 8.9b, d, and f are the corresponding periodogram averages. What we observe is that, although the average value of the periodogram is approximately equal to \sigma_x^2 = 1, the variance does not decrease as the amount of data increases.
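The behavior described in this example is easy to check numerically. The following MATLAB fragment is a minimal Monte-Carlo sketch (the probe frequency 0.4\pi, the record lengths, and the number of realizations are arbitrary choices, not values taken from the text): it estimates the mean and variance of the periodogram of unit variance white Gaussian noise at a single frequency and shows that neither changes appreciably with N.

% Periodogram of unit-variance white Gaussian noise at omega = 0.4*pi,
% estimated from 500 independent realizations for several record lengths N.
nreal = 500;
for N = [64 128 256]
    P = zeros(nreal,1);
    for i = 1:nreal
        x    = randn(N,1);                          % unit variance white noise
        X    = sum(x .* exp(-1j*0.4*pi*(0:N-1)'));  % DTFT of x at omega = 0.4*pi
        P(i) = abs(X)^2 / N;                        % periodogram value at omega
    end
    fprintf('N = %3d:  mean = %.2f   var = %.2f\n', N, mean(P), var(P));
end
% Both the mean and the variance stay close to one for every N.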

Figure 8.9 The periodogram of unit variance white Gaussian noise. (a) Overlay plot of 50 periodograms with N = 64 data values and (b) the periodogram average. (c) Overlay plot of 50 periodograms with N = 128 data values and (d) the periodogram average. (e) Overlay plot of 50 periodograms with N = 256 data values and (f) the periodogram average.

The analysis given above for the variance of the periodogram assumes that x(n) is white Gaussian noise. Although the statistical analysis of a nonwhite Gaussian process is much more difficult, we may derive an approximate expression for the variance as follows. Recall that a random process x(n) with power spectrum P_x(e^{j\omega}) may be generated by filtering unit variance white noise v(n) with a linear shift-invariant filter h(n) that has a frequency response H(e^{j\omega}) with

\big|H(e^{j\omega})\big|^2 = P_x(e^{j\omega})

If, as defined in Eq. (8.7), x_N(n) and v_N(n) are the sequences of length N that are formed by windowing x(n) and v(n), respectively, then the periodograms of these processes are

\hat{P}^{(x)}_{per}(e^{j\omega}) = \frac{1}{N}\big|X_N(e^{j\omega})\big|^2     (8.39)

\hat{P}^{(v)}_{per}(e^{j\omega}) = \frac{1}{N}\big|V_N(e^{j\omega})\big|^2     (8.40)

Although x_N(n) is not equal to the convolution of v_N(n) with h(n), if N is large compared to the length of h(n) so that the transient effects are small, then X_N(e^{j\omega}) \approx H(e^{j\omega})V_N(e^{j\omega}). Since

\big|X_N(e^{j\omega})\big|^2 \approx \big|H(e^{j\omega})\big|^2\,\big|V_N(e^{j\omega})\big|^2
  = P_x(e^{j\omega})\,\big|V_N(e^{j\omega})\big|^2     (8.41)

substituting Eqs. (8.39) and (8.40) into Eq. (8.41), we have

\hat{P}^{(x)}_{per}(e^{j\omega}) \approx P_x(e^{j\omega})\,\hat{P}^{(v)}_{per}(e^{j\omega})     (8.42)

Therefore,

Var\left\{\hat{P}^{(x)}_{per}(e^{j\omega})\right\} \approx P_x^2(e^{j\omega})\,Var\left\{\hat{P}^{(v)}_{per}(e^{j\omega})\right\}

and, since the variance of the periodogram of v(n) is approximately equal to one, we have

Var\left\{\hat{P}^{(x)}_{per}(e^{j\omega})\right\} \approx P_x^2(e^{j\omega})

Thus, assuming that N is large, the variance of the periodogram of a Gaussian random process is proportional to the square of its power spectrum. The second-order moment and covariance of the periodogram may be similarly generalized for nonwhite Gaussian noise. For the second-order moment, Eq. (8.34) becomes

E\left\{\hat{P}_{per}(e^{j\omega_1})\hat{P}_{per}(e^{j\omega_2})\right\}
  \approx P_x(e^{j\omega_1})P_x(e^{j\omega_2})
    \left\{1 + \left[\frac{\sin N(\omega_1-\omega_2)/2}{N\sin(\omega_1-\omega_2)/2}\right]^2\right\}     (8.43)

and the expression for the covariance, Eq. (8.35), becomes

Cov\left\{\hat{P}_{per}(e^{j\omega_1}),\hat{P}_{per}(e^{j\omega_2})\right\}
  \approx P_x(e^{j\omega_1})P_x(e^{j\omega_2})
    \left[\frac{\sin N(\omega_1-\omega_2)/2}{N\sin(\omega_1-\omega_2)/2}\right]^2     (8.44)

Note that, for large N, the term in brackets is approximately equal to zero provided |\omega_1 - \omega_2| \gg 2\pi/N, which implies that there is little correlation between the periodogram values at two frequencies separated by more than 2\pi/N. The properties of the periodogram are summarized in Table 8.1.
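The same conclusion can be checked numerically for a nonwhite process. The sketch below is an illustration only (the AR(1) model, the probe frequency, and the record length are arbitrary choices, not taken from the text): it filters white noise through H(z) = 1/(1 - 0.9z^{-1}) and compares the spread of the periodogram values at one frequency with the true power spectrum there.

% Approximate check of Var{Pper} ~ Px^2 for a nonwhite Gaussian process.
% x(n) is generated by filtering unit-variance white noise through
% H(z) = 1/(1 - 0.9 z^-1), so Px(e^jw) = 1/|1 - 0.9 e^{-jw}|^2.
N = 512;  nreal = 500;  w0 = 0.3*pi;
Px = 1 / abs(1 - 0.9*exp(-1j*w0))^2;           % true power spectrum at w0
P  = zeros(nreal,1);
for i = 1:nreal
    v    = randn(N,1);
    x    = filter(1, [1 -0.9], v);             % nonwhite Gaussian process
    X    = sum(x .* exp(-1j*w0*(0:N-1)'));
    P(i) = abs(X)^2 / N;                       % periodogram at w0
end
fprintf('Px = %.2f   mean(Pper) = %.2f   std(Pper) = %.2f\n', Px, mean(P), std(P));
% The standard deviation of the periodogram is on the order of Px itself.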

Table 8.1 Properties of the Periodogram

\hat{P}_{per}(e^{j\omega}) = \frac{1}{N}\left|\sum_{n=0}^{N-1}x(n)\,e^{-jn\omega}\right|^2

Bias          E\{\hat{P}_{per}(e^{j\omega})\} = \frac{1}{2\pi}\,P_x(e^{j\omega}) * W_B(e^{j\omega})
Resolution    \Delta\omega = 0.89\,\frac{2\pi}{N}
Variance      Var\{\hat{P}_{per}(e^{j\omega})\} \approx P_x^2(e^{j\omega})
,8'.2.,3' lhe, ,ModJIi:ed PeriodOg[nJ,m[


In Secti~n 8.2.1 ~e sawt.hI:' [at.~e p~riQd.o~gramis p'~portional. to' the squared, magnitude of

~e

Fourier transform of the windowed signal xN(n) ::;;;:; x(n)tOR(n)~,

P pel"(eJ - ):=: N ~ N,(e} --) i X

'"~

.. ,w'

I .,._'.' .' i21', ~ .L.

== N ' ,_' ':._ (,n.) 'WR (n.).e x


". '_.,.'.

1.

f:.
,

-,nile!)

.J

...
2

,
/
ty

(8..45)

n,-·-OQ

+L. d '" 1: ~ OUU.er,'.ata Win,dOWS.'t't'l 1I,.:g: th- ere b e any b yvOU1:Ul' :eae fi C' e.xamp'JJe" In rep laci tt, ror acing ~:L. rectangular rue window with a triangular (Bartlett) window? To answer this. question, let us ex ami ne, the:
111

Instead of applying a rectangular window to x (a), ,EQ,(8,.45) suggests the possibsli ..

of using

e~!:,"of the data. window' on. the bias of the periodogram, ~i[lg Eq, (8.45), the expected value of the periodogram is E {P",erk1
. ! '" ....
. 'A,

:(t)

,.

)1 =
'I
."

NE
~I

1.

r._

([!!}:
I·...
,

00 ";'

"._.

x(n)WR(n.)e
DO

_'.

..

.:

'.

-'}1;tl!)

," .

" .i!iI,-,

]:[
:
,I I

I)
I

[--.

.........' VII.'

.. '.

:x(m)WR(m)e
00

.'.'."

..

_':

" ,mal II: I


,.-

]*jl l
II

,1l\t'iI:,~

.'

=N

,~

"_

EL {

'_-.'.

00

.'.

E.x(n)x
.'

00

"-~

(m)WR(m)WE(n)e -

[-

. -- . ---

"f. '\ .'.-- .'. -'=J\,'n:~m1W I

.',

,m:~-·CC1 ,i!1~'-'OO

]
'I
i[

-'

(8,,46)
11~:~,~ n l.'II.U

·t'h:"I·:,',~:'''''''': :.[,'.' 0.'f-', v.a.n.a .'.,e.g;,:A- ,_ i-I'hI,' ,', ,i;,. , e CU.a.o,ge,


',':'.!

',' It -

,.'., '. 46"-' ,m ,~,E.q,.,,,. ['("'8-:',[ .j

:)'

... '

1l--~:..""-.-,,. --;-'" 'lJI'VCQm,e.s

/. (

= N.)
,

I ,

.....00 ....

:r",(k)

.,'

[i,' .....00 - --' ~" .'. '. } :. WR(n)WII(n


I'

- k) ••e

. ,'"

]:!

-~:

;1<<9

k~-'CJO;
00

n,- co

1. - . "k' '-' 'L:"" - ,.·--.(k)w,e B. I(k)e-.l@ N-~


,)I; " " ".,--"..."......
l~~-~

(8,.47)

where

w_B(k) = \frac{1}{N}\,w_R(k) * w_R(-k) = \frac{1}{N}\sum_{n=-\infty}^{\infty} w_R(n)\,w_R(n-k)

is a Bartlett window.⁴ Using the frequency convolution theorem, it follows that the expected value of the periodogram is

E\left\{\hat{P}_{per}(e^{j\omega})\right\}
  = \frac{1}{2\pi N}\,P_x(e^{j\omega}) * \big|W_R(e^{j\omega})\big|^2     (8.48)

where

W_R(e^{j\omega}) = \frac{\sin(N\omega/2)}{\sin(\omega/2)}\,e^{-j(N-1)\omega/2}

is the Fourier transform of the rectangular data window, w_R(n). Therefore, the amount of smoothing in the periodogram is determined by the window that is applied to the data. Although a rectangular window has a narrow main lobe compared to other windows and, therefore, produces the least amount of spectral smoothing, it has relatively large sidelobes that may lead to masking of weak narrowband components. Consider, for example, a random process consisting of two sinusoids in white noise,

x(n) = 0.1\sin(n\omega_1 + \phi_1) + \sin(n\omega_2 + \phi_2) + v(n)

With \omega_1 = 0.2\pi, \omega_2 = 0.3\pi, and N = 128, the expected value of the periodogram is shown in Fig. 8.10a. What we observe is that the sinusoid at frequency \omega_1 is almost completely masked by the sidelobes of the window at frequency \omega_2. However, if the rectangular window is replaced with a Hamming window, then the sinusoid at frequency \omega_1 is clearly visible, as illustrated in Fig. 8.10b. This is due to the smaller sidelobes of a Hamming window, which are down about 30 dB compared to the sidelobes of a rectangular window.
Figure 8.10 Spectral analysis of two sinusoids in white noise with sinusoidal frequencies of \omega_1 = 0.2\pi and \omega_2 = 0.3\pi and a data record length of N = 128 points. (a) The expected value of the periodogram. (b) The expected value of the modified periodogram using a Hamming data window.
⁴Note the equivalence of Eq. (8.47) and Eq. (8.23).

On the other hand, this reduction in the sidelobe amplitude comes at the expense of an increase in the width of the main lobe which, in turn, affects the resolution. The periodogram of a process that is windowed with a general window w(n) is called a modified periodogram and is given by

\hat{P}_M(e^{j\omega}) = \frac{1}{NU}\left|\sum_{n=-\infty}^{\infty} x(n)\,w(n)\,e^{-jn\omega}\right|^2     (8.49)

where N is the length of the window and

U = \frac{1}{N}\sum_{n=0}^{N-1}|w(n)|^2     (8.50)

is a constant that, as we will see, is defined so that the modified periodogram will be asymptotically unbiased. A MATLAB program for the modified periodogram is given in Fig. 8.11.
Let us now evaluate the performance of the modified periodogram. It follows from the derivation of Eq. (8.48) that the expected value of the modified periodogram is

E\left\{\hat{P}_M(e^{j\omega})\right\}
  = \frac{1}{2\pi NU}\,P_x(e^{j\omega}) * \big|W(e^{j\omega})\big|^2     (8.51)

where W(e^{j\omega}) is the Fourier transform of the data window, w(n). Note that, by Parseval's theorem,

U = \frac{1}{N}\sum_{n=0}^{N-1}|w(n)|^2
  = \frac{1}{2\pi N}\int_{-\pi}^{\pi}\big|W(e^{j\omega})\big|^2\,d\omega     (8.52)
function Px = mper(x,win,n1,n2)
% The modified periodogram of x(n1:n2) using a data window selected by win:
%   win = 1: rectangular, 2: Hamming, 3: Hanning, 4: Bartlett, 5: Blackman.
% Calls the periodogram routine given earlier in the chapter.
x = x(:);
if nargin == 2
    n1 = 1;  n2 = length(x);
end
N = n2 - n1 + 1;
w = ones(N,1);
if     (win == 2)  w = hamming(N);
elseif (win == 3)  w = hanning(N);
elseif (win == 4)  w = bartlett(N);
elseif (win == 5)  w = blackman(N);
end
xw = x(n1:n2).*w/norm(w);     % dividing by norm(w) gives the 1/(N*U) scaling of Eq. (8.49)
Px = N*periodogram(xw);

Figure 8.11 A MATLAB program for computing the modified periodogram of x(n).
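As a usage sketch (not part of the text's listing; it assumes mper.m above and its dependencies are on the path, and the test signal is the two-sinusoid example with arbitrary phases):

% Compare the ordinary and modified periodograms of the two-sinusoid example.
N   = 128;
n   = (0:N-1)';
phi = 2*pi*rand(2,1);
x   = 0.1*sin(0.2*pi*n + phi(1)) + sin(0.3*pi*n + phi(2)) + randn(N,1);
Px_rect = mper(x, 1);      % win = 1: rectangular window (ordinary periodogram)
Px_hamm = mper(x, 2);      % win = 2: Hamming data window (modified periodogram)
plot(10*log10([Px_rect Px_hamm]));
% With the Hamming window the weak sinusoid near 0.2*pi is much less likely
% to be masked by the sidelobes of the strong one.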

From Eq. (8.52) it follows that

\frac{1}{2\pi NU}\int_{-\pi}^{\pi}\big|W(e^{j\omega})\big|^2\,d\omega = 1

and, with an appropriate window, |W(e^{j\omega})|^2/U will converge to an impulse of unit area as N \to \infty, so the modified periodogram will be asymptotically unbiased. (Note that if w(n) is a rectangular window, then U = 1 and the modified periodogram reduces to the periodogram.) Since the modified periodogram is simply the periodogram of a windowed data sequence, its variance is approximately the same as that of the periodogram,

Var\left\{\hat{P}_M(e^{j\omega})\right\} \approx P_x^2(e^{j\omega})     (8.53)

Therefore, the modified periodogram is not a consistent estimate of the power spectrum, and the data window offers no benefit in terms of reducing the variance. What the window does provide, however, is a trade-off between spectral resolution (main lobe width) and spectral masking (sidelobe amplitude). For example, with the resolution of the modified periodogram defined to be the 3 dB bandwidth of the data window,⁵

Res\big[\hat{P}_M(e^{j\omega})\big] = (\Delta\omega)_{3\,dB}     (8.54)

we see from Table 8.2 that the reduction in sidelobe amplitude that comes from using a Hamming data window instead of a rectangular window is paid for with a reduction in the spectral resolution of about 50%. The properties of the modified periodogram are summarized in Table 8.3. An extensive list of windows and window characteristics may be found in [14].

Table 8.2 Properties of Some Commonly Used Data Windows. Each Window Is Assumed to Be of Length N.

Window         Sidelobe Level (dB)    3 dB Bandwidth (\Delta\omega)_{3dB}
Rectangular         -13                0.89 (2\pi/N)
Bartlett            -27                1.28 (2\pi/N)
Hanning             -32                1.44 (2\pi/N)
Hamming             -43                1.30 (2\pi/N)
Blackman            -58                1.68 (2\pi/N)

⁵Note that this is consistent with the definition of resolution given in Eq. (8.28) for the periodogram. Specifically, although the periodogram resolution is defined to be the 6 dB bandwidth of W_B(e^{j\omega}), since W_B(e^{j\omega}) = |W_R(e^{j\omega})|^2/N, this is equivalent to the 3 dB bandwidth of W_R(e^{j\omega}).

Table 8.3 Properties of the Modified Periodogram

\hat{P}_M(e^{j\omega}) = \frac{1}{NU}\left|\sum_{n=-\infty}^{\infty} x(n)\,w(n)\,e^{-jn\omega}\right|^2 ,
\qquad U = \frac{1}{N}\sum_{n=0}^{N-1}|w(n)|^2

Bias          E\{\hat{P}_M(e^{j\omega})\} = \frac{1}{2\pi NU}\,P_x(e^{j\omega}) * |W(e^{j\omega})|^2
Resolution    Window dependent
Variance      Var\{\hat{P}_M(e^{j\omega})\} \approx P_x^2(e^{j\omega})

8.2.4 Bartlett's Method: Periodogram Averaging
In this section, we look at Bartlett's method of periodogram averaging, which, unlike either the periodogram or the modified periodogram, produces a consistent estimate of the power spectrum [5]. The motivation for this method comes from the observation that the expected value of the periodogram converges to P_x(e^{j\omega}) as the data record length N goes to infinity,

\lim_{N\to\infty} E\left\{\hat{P}_{per}(e^{j\omega})\right\} = P_x(e^{j\omega})     (8.55)

Therefore, if we can find a consistent estimate of the mean, E\{\hat{P}_{per}(e^{j\omega})\}, then this estimate will be a consistent estimate of P_x(e^{j\omega}). In our discussion of the sample mean in Section 3.2.8, we saw how averaging a set of uncorrelated measurements of a random variable x yields a consistent estimate of the mean, E\{x\}. This suggests that we consider estimating the power spectrum of a random process by periodogram averaging. Thus, let x_i(n) for i = 1, 2, \ldots, K be K uncorrelated realizations of a random process x(n) over the interval 0 \le n \le L-1. With

\hat{P}^{(i)}_{per}(e^{j\omega}) = \frac{1}{L}\left|\sum_{n=0}^{L-1} x_i(n)\,e^{-jn\omega}\right|^2     (8.56)

the periodogram of x_i(n), the average of these periodograms is

\hat{P}_x(e^{j\omega}) = \frac{1}{K}\sum_{i=1}^{K}\hat{P}^{(i)}_{per}(e^{j\omega})     (8.57)

Evaluating the expected value of \hat{P}_x(e^{j\omega}), we have

E\left\{\hat{P}_x(e^{j\omega})\right\} = E\left\{\hat{P}^{(i)}_{per}(e^{j\omega})\right\}
  = \frac{1}{2\pi}\,P_x(e^{j\omega}) * W_B(e^{j\omega})     (8.58)

where W_B(e^{j\omega}) is the Fourier transform of a Bartlett window, w_B(k), that extends from -L to L. Therefore, as with the periodogram, \hat{P}_x(e^{j\omega}) is asymptotically unbiased. In addition, with our assumption that the data records are uncorrelated, it follows that the variance of \hat{P}_x(e^{j\omega}) is

Var\left\{\hat{P}_x(e^{j\omega})\right\} = \frac{1}{K}\,Var\left\{\hat{P}^{(i)}_{per}(e^{j\omega})\right\}
  \approx \frac{1}{K}\,P_x^2(e^{j\omega})     (8.59)

which goes to zero as K goes to infinity. Therefore, \hat{P}_x(e^{j\omega}) is a consistent estimate of the power spectrum provided that both K and L are allowed to go to infinity. However, the difficulty with this approach is that uncorrelated realizations of a process are generally not available. Instead, one typically has only a single realization of length N. Therefore, Bartlett proposed that x(n) be partitioned into K nonoverlapping sequences of length L, where N = KL, as illustrated in Fig. 8.12. The Bartlett estimate is then computed as in Eq. (8.56) and Eq. (8.57) with

x_i(n) = x(n + iL) ;\qquad n = 0, 1, \ldots, L-1 ,\quad i = 0, 1, \ldots, K-1

Thus, the Bartlett estimate is

\hat{P}_B(e^{j\omega}) = \frac{1}{N}\sum_{i=0}^{K-1}\left|\sum_{n=0}^{L-1} x(n+iL)\,e^{-jn\omega}\right|^2     (8.60)

A MATLAB program to compute the Bartlett estimate is given in Fig. 8.13.


Based on our analysis of the periodogram and the modified periodogram, we may easily evaluate the performance of Bartlett's method as follows. First, as in Eq. (8.58), the expected value of Bartlett's estimate is

E\left\{\hat{P}_B(e^{j\omega})\right\} = \frac{1}{2\pi}\,P_x(e^{j\omega}) * W_B(e^{j\omega})     (8.61)

where W_B(e^{j\omega}) is now the Fourier transform of a Bartlett window that extends from -L to L.

Figure 8.12 Partitioning x(n) into K nonoverlapping subsequences of length L.
function Px = bart(x,nsect)
% Bartlett's method: average the periodograms of nsect nonoverlapping
% sections of x.  Calls the periodogram routine given earlier in the chapter.
L  = floor(length(x)/nsect);
Px = 0;
n1 = 1;
for i = 1:nsect
    Px = Px + periodogram(x(n1:n1+L-1))/nsect;
    n1 = n1 + L;
end

Figure 8.13 A MATLAB program for estimating the power spectrum using Bartlett's method of averaging periodograms. Note that this m-file calls periodogram.m.
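As a usage sketch (not from the text; the amplitudes and frequencies loosely mimic the two-sinusoids-in-noise examples of this section, and the helper routines above are assumed to be on the path):

% Bartlett estimates with an increasing number of sections K.
N   = 512;
n   = (0:N-1)';
phi = 2*pi*rand(2,1);
x   = sqrt(10)*sin(0.2*pi*n + phi(1)) + sqrt(10)*sin(0.25*pi*n + phi(2)) + randn(N,1);
Px1 = bart(x, 1);        % K = 1: ordinary periodogram
Px4 = bart(x, 4);        % K = 4 sections of length L = 128
Px8 = bart(x, 8);        % K = 8 sections of length L = 64
plot(10*log10([Px1 Px4 Px8]));
% The variance of the estimate drops as K grows, while the spectral peaks broaden.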

Therefore, \hat{P}_B(e^{j\omega}) is asymptotically unbiased. Second, since the periodograms used in \hat{P}_B(e^{j\omega}) are computed using sequences of length L, the resolution is

Res\big[\hat{P}_B(e^{j\omega})\big] = 0.89\,\frac{2\pi}{L} = 0.89\,K\,\frac{2\pi}{N}     (8.62)

which is K times larger (worse) than that of the periodogram. Finally, since the sequences x_i(n) are generally correlated with one another, unless x(n) is white noise, the variance reduction will not be as large as that given in Eq. (8.59). However, the variance will be inversely proportional to K and, assuming that the data sequences are approximately uncorrelated, for large N the variance is approximately

Var\left\{\hat{P}_B(e^{j\omega})\right\}
  \approx \frac{1}{K}\,Var\left\{\hat{P}^{(i)}_{per}(e^{j\omega})\right\}
  \approx \frac{1}{K}\,P_x^2(e^{j\omega})     (8.63)

Thus, if both K and L are allowed to go to infinity as N \to \infty, then \hat{P}_B(e^{j\omega}) will be a consistent estimate of the power spectrum. In addition, for a given value of N, Bartlett's method allows one to trade a reduction in spectral resolution for a reduction in variance by simply changing the values of K and L. Table 8.4 summarizes the properties of Bartlett's method.

Table 8.4 Properties of Bartlett's Method

\hat{P}_B(e^{j\omega}) = \frac{1}{N}\sum_{i=0}^{K-1}\left|\sum_{n=0}^{L-1} x(n+iL)\,e^{-jn\omega}\right|^2

Bias          E\{\hat{P}_B(e^{j\omega})\} = \frac{1}{2\pi}\,P_x(e^{j\omega}) * W_B(e^{j\omega})
Resolution    \Delta\omega = 0.89\,K\,\frac{2\pi}{N}
Variance      Var\{\hat{P}_B(e^{j\omega})\} \approx \frac{1}{K}\,P_x^2(e^{j\omega})

Example 8.2.5 Bartlett's Method

In Example 8.2.4, we considered using the periodogram to estimate the power spectrum of white noise. What we observed was that the variance of the estimate did not decrease as we increased the length N of the data sequence. Shown in Fig. 8.14a are the periodograms (K = 1) of 50 different unit variance white noise sequences of length N = 512, and in Fig. 8.14b is the ensemble average. The Bartlett estimates for these sequences with K = 4 sections of length L = 128 are shown in Fig. 8.14c, with the average given in Fig. 8.14d. Similarly, the Bartlett estimates of these 50 processes using K = 16 sections of length L = 32 are shown in Fig. 8.14e and the average in Fig. 8.14f. What we observe in these examples is that the variance of the estimate decreases in proportion to the number of sections K.

As another example, let x(n) be a process consisting of two sinusoids in unit variance white noise,

x(n) = A\sin(n\omega_1 + \phi_1) + A\sin(n\omega_2 + \phi_2) + v(n)

where \omega_1 = 0.2\pi, \omega_2 = 0.25\pi, and A = \sqrt{10}. With N = 512, an overlay plot of 50 periodograms is shown in Fig. 8.15a and the ensemble average of these periodograms is given in Fig. 8.15b. Using Bartlett's method with K = 4 and K = 8 sections, the overlay plots and ensemble averages are shown in Fig. 8.15c-f. Note that although the variance of the estimate decreases with K, there is a corresponding decrease in the resolution, as evidenced by the broadening of the spectral peaks.

8.2.5 Welch's Method

In 1967, Welch proposed two modifications to Bartlett's method [59]. The first is to allow the sequences x_i(n) to overlap, and the second is to allow a data window w(n) to be applied to each sequence, thereby producing a set of modified periodograms that are to be averaged. Assuming that successive sequences are offset by D points and that each sequence is L points long, the ith sequence is given by

x_i(n) = x(n + iD) ;\qquad n = 0, 1, \ldots, L-1

Thus, the amount of overlap between x_i(n) and x_{i+1}(n) is L - D points, and if K sequences cover the entire N data points, then

N = L + D(K - 1)

For example, with no overlap (D = L) we have K = N/L sections of length L, as in Bartlett's method. On the other hand, if the sequences are allowed to overlap by 50% (D = L/2), then we may form

K = 2\,\frac{N}{L} - 1

sections of length L, thus maintaining the same resolution (section length) as Bartlett's method while doubling the number of modified periodograms that are averaged, thereby reducing the variance.
Figure 8.14 Spectrum estimation of unit variance white Gaussian noise. (a) Overlay plot of 50 periodograms with N = 512 and (b) the ensemble average. (c) Overlay plot of 50 Bartlett estimates with K = 4 and L = 128 and (d) the ensemble average. (e) Overlay plot of 50 Bartlett estimates with K = 16 and L = 32 and (f) the ensemble average.
However, with a 50% overlap, we could also form sequences of length 2L, thus increasing the resolution while maintaining the same variance as Bartlett's method. Therefore, by allowing the sequences to overlap, it is possible to increase the number and/or the length of the sequences that are averaged, thereby trading a reduction in variance for a reduction in resolution.
Figure 8.15 Spectrum estimation of two sinusoids in white noise using N = 512 data values. (a) Overlay plot of 50 periodograms and (b) the ensemble average. (c) Overlay plot of 50 Bartlett estimates with K = 4 and L = 128 and (d) the ensemble average. (e) Overlay plot of 50 Bartlett estimates with K = 8 and L = 64 and (f) the ensemble average.
A MATLAB program for estimating the power spectrum using Welch's method is given in Fig. 8.16. Let us now evaluate the performance of Welch's method.
function Px = welch(x,L,over,win)
% Welch's method: average modified periodograms of overlapping sections of x.
%   L    - section length
%   over - fractional overlap between sections (e.g., 0.5 for 50%)
%   win  - data window selector passed to mper (1: rectangular, ..., 5: Blackman)
% Calls mper.m (Fig. 8.11).
if (over >= 1) | (over < 0)
    error('Overlap is invalid');
end
n1 = 1;
n0 = (1 - over)*L;
nsect = 1 + floor((length(x) - L)/n0);
Px = 0;
for i = 1:nsect
    Px = Px + mper(x,win,n1,n1+L-1)/nsect;
    n1 = n1 + n0;
end

Figure 8.16 A MATLAB program for estimating the power spectrum using Welch's method of averaging modified periodograms. Note that this m-file calls mper.m.
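A short usage sketch (assumptions: welch.m above, mper.m from Fig. 8.11, and the toolbox window functions are available; the test signal is an arbitrary stand-in):

% Welch estimate with N = 512, L = 128, 50% overlap, and a Hamming window.
N    = 512;  L = 128;  over = 0.5;
K    = 1 + floor((N - L)/((1 - over)*L));   % number of sections, K = 7 here
x    = randn(N,1);                          % any test signal of length N
Pw   = welch(x, L, over, 2);                % win = 2: Hamming data window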

Note that the estimate produced with Welch's method may be written explicitly in terms of x(n) as

\hat{P}_W(e^{j\omega}) = \frac{1}{KLU}\sum_{i=0}^{K-1}
  \left|\sum_{n=0}^{L-1} w(n)\,x(n+iD)\,e^{-jn\omega}\right|^2     (8.64)

or, more succinctly, in terms of the modified periodograms, as

\hat{P}_W(e^{j\omega}) = \frac{1}{K}\sum_{i=0}^{K-1}\hat{P}^{(i)}_M(e^{j\omega})     (8.65)

Therefore, the expected value of Welch's estimate is

E\left\{\hat{P}_W(e^{j\omega})\right\} = E\left\{\hat{P}^{(i)}_M(e^{j\omega})\right\}
  = \frac{1}{2\pi LU}\,P_x(e^{j\omega}) * \big|W(e^{j\omega})\big|^2     (8.66)

where W(e^{j\omega}) is the Fourier transform of the L-point data window, w(n), used in Eq. (8.64) to form the modified periodograms. Thus, as with each of the previous periodogram-based methods, Welch's method is an asymptotically unbiased estimate of the power spectrum. The resolution, however, depends on the data window. As with the modified periodogram, the resolution is defined to be the 3 dB bandwidth of the data window (see Table 8.2). The variance, on the other hand, is more difficult to compute since, with overlapping sequences, the modified periodograms cannot be assumed to be uncorrelated. Nevertheless, it has been shown that, with a Bartlett window and a 50% overlap, the variance is approximately [59]

Var\left\{\hat{P}_W(e^{j\omega})\right\} \approx \frac{9}{8}\,\frac{1}{K}\,P_x^2(e^{j\omega})     (8.67)

Comparing Eqs. (8.67) and (8.63), we see that, for a given number of sections K, the variance of the estimate using Welch's method is larger than that for Bartlett's method by a factor of 9/8. However, for a fixed amount of data N and a given resolution (sequence length L), with a 50% overlap twice as many sections may be averaged in Welch's method. Expressing

Table 8.5 Properties of Welch's Method

\hat{P}_W(e^{j\omega}) = \frac{1}{KLU}\sum_{i=0}^{K-1}
  \left|\sum_{n=0}^{L-1} x(n+iD)\,w(n)\,e^{-jn\omega}\right|^2

Bias          E\{\hat{P}_W(e^{j\omega})\} = \frac{1}{2\pi LU}\,P_x(e^{j\omega}) * |W(e^{j\omega})|^2
Resolution    Window dependent
Variance      Var\{\hat{P}_W(e^{j\omega})\} \approx \frac{9}{16}\,\frac{L}{N}\,P_x^2(e^{j\omega})
              (assuming a 50% overlap and a Bartlett data window)

the variance in terms of L and N, we have (assuming a 50% overlap)

Var\left\{\hat{P}_W(e^{j\omega})\right\} \approx \frac{9}{16}\,\frac{L}{N}\,P_x^2(e^{j\omega})     (8.68)

Since N/L is the number of sections that are used in Bartlett's method, it follows from Eq. (8.63) that

Var\left\{\hat{P}_W(e^{j\omega})\right\} \approx \frac{9}{16}\,Var\left\{\hat{P}_B(e^{j\omega})\right\}
..".
+ ~

bi fd pOSSl. e 'to average '~ more sequences s: a, given araount 0','" ata b:'y mcreasmg tor the amount of overlap, the eomputational requirements increase in proportion with K ~In
L ~i" w, n Alth'~.ou.g~
r

118
~

~,.',. 'II '. ~t ad:ulDOD!!I SIDc'ean mcrease m tne amount 0 f' ovenap increases +l.., corre '1anon b etween th. me ae ' th . ,. ~ ~ ~ ed N' ,I e sequences Xi (n),:"_,ere,are·:di ,. ,. hin,g:returns wb"en. lncreaS;ln_g, K' l:Or' it JililX,,·', ..Th',el1eli,ore nrmms ' , the amount ,.ofo~erlap,'is t~picaH~ilier 50% or 75%. Th,e properties. of We.~,ch'~s ethod m
+ ~ ~ ,..}...
1 • "

are summarized m Table 8~5~


Example 8.2.6 Welch's Method

Consider the process defined in Example 8.2.5 consisting of two sinusoids in unit variance white noise. Using Welch's method with N = 512, a section length of L = 128, a 50% overlap (7 sections), and a Hamming window, an overlay plot of the spectrum estimates for 50 different realizations of the process is shown in Fig. 8.17a, and the ensemble average is shown in Fig. 8.17b. Comparing these estimates with those shown in Fig. 8.15e and f of Example 8.2.5, we see that, since the number of sections used in the two examples is about the same (7 versus 8), the variance of the two estimates is approximately the same. In addition, although the width of the main lobe of the Hamming window used in Welch's method is 1.46 times the width of the rectangular window used in Bartlett's method, the resolution is about the same. The reason for this is that the 50% overlap that

Figure 8.17 (a) Overlay plot of 50 Welch estimates of the two sinusoids in white noise of Example 8.2.5 and (b) the ensemble average.

is used in Welch's method allows the section length to be twice the length of that used in Bartlett's method. So what do we gain with Welch's method? We gain a reduction in the amount of spectral leakage that takes place through the sidelobes of the data window.

8.2.6 Blackman-Tukey Method: Periodogram Smoothing

The methods of Bartlett and Welch are designed to reduce the variance of the periodogram by averaging periodograms and modified periodograms, respectively. Another method for decreasing the statistical variability of the periodogram is periodogram smoothing, often referred to as the Blackman-Tukey method after the pioneering work of Blackman and Tukey in spectrum analysis [6]. To see how periodogram smoothing may reduce the variance of the periodogram, recall that the periodogram is computed by taking the Fourier transform of a consistent estimate of the autocorrelation sequence. However, for any finite data record of length N, the variance of \hat{r}_x(k) will be large for values of k that are close to N. For example, note that the estimate of r_x(k) at lag k = N-1 is

\hat{r}_x(N-1) = \frac{1}{N}\,x(N-1)\,x^*(0)

Since there is little averaging that goes into the formation of the estimates of r_x(k) for |k| \approx N, no matter how large N becomes, these estimates will always be unreliable. Consequently, the only way to reduce the variance of the periodogram is to reduce the variance of these estimates or to reduce the contribution that they make to the periodogram. In the methods of Bartlett and Welch, the variance of the periodogram is decreased by reducing the variance of the autocorrelation estimate through averaging. In the Blackman-Tukey method, the variance of the periodogram is reduced by applying a window to \hat{r}_x(k) in order to decrease the contribution of the unreliable estimates to the periodogram. Specifically, the Blackman-Tukey spectrum estimate is

\hat{P}_{BT}(e^{j\omega}) = \sum_{k=-M}^{M}\hat{r}_x(k)\,w(k)\,e^{-jk\omega}     (8.69)

where w(k) is a lag window that is applied to the autocorrelation estimate. For example, if w(k) is a rectangular window extending from -M to M with M smaller than N-1, then the estimates of r_x(k) having the largest variance are set to zero and, consequently, the power spectrum estimate will have a smaller variance. What is traded for this reduction in variance, however, is a reduction in resolution, since a smaller number of autocorrelation estimates is used to form the estimate of the power spectrum. Using the frequency convolution theorem, the Blackman-Tukey spectrum may be written in the frequency domain as

\hat{P}_{BT}(e^{j\omega}) = \frac{1}{2\pi}\,\hat{P}_{per}(e^{j\omega}) * W(e^{j\omega})     (8.70)

Therefore, the Blackman-Tukey estimate smoothes the periodogram by convolving it with the Fourier transform of the autocorrelation window, W(e^{j\omega}). Although there is considerable flexibility in the choice of the window, w(k) should be conjugate symmetric so that W(e^{j\omega}) is real-valued, and the window should have a nonnegative Fourier transform, W(e^{j\omega}) \ge 0, so that \hat{P}_{BT}(e^{j\omega}) is guaranteed to be nonnegative. A MATLAB program for generating the Blackman-Tukey spectrum estimate is given in Fig. 8.18.

To analyze the performance of the Blackman-Tukey method, we will evaluate the bias and the variance (the resolution is window dependent). The bias may be computed by taking the expected value of Eq. (8.70),

E\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  = \frac{1}{2\pi}\,E\left\{\hat{P}_{per}(e^{j\omega})\right\} * W(e^{j\omega})     (8.71)
function Px = per_smooth(x,win,M,n1,n2)
% Blackman-Tukey spectrum estimate: the autocorrelation of x(n1:n2) is
% estimated up to lag M, windowed, and Fourier transformed.  The lag window
% is selected by win (1: rectangular, 2: Hamming, 3: Hanning, 4: Bartlett,
% 5: Blackman).  Calls the covar routine used elsewhere in the text.
x = x(:);
if nargin == 3
    n1 = 1;  n2 = length(x);
end
R = covar(x(n1:n2),M);
r = [fliplr(R(1,2:M)), R(1,1), R(1,2:M)];
M = 2*M - 1;
w = ones(M,1);
if     (win == 2)  w = hamming(M);
elseif (win == 3)  w = hanning(M);
elseif (win == 4)  w = bartlett(M);
elseif (win == 5)  w = blackman(M);
end
r  = r'.*w;
Px = abs(fft(r,1024));
Px(1) = Px(2);

Figure 8.18 A MATLAB program for estimating the power spectrum using the Blackman-Tukey method of periodogram smoothing.
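A short usage sketch (assumptions: per_smooth.m above and the covar routine it calls are on the path; the AR(1) test process and the value of M are arbitrary choices):

% Blackman-Tukey estimate of an AR(1) process with a Bartlett lag window.
N   = 256;  M = 51;                      % lag window covers |k| below M, with M much smaller than N
x   = filter(1, [1 -0.5], randn(N,1));   % an arbitrary test process
Pbt = per_smooth(x, 4, M);               % win = 4: Bartlett lag window
plot(10*log10(Pbt(1:512)));              % first half of the 1024-point estimate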

Substituting Eq. (8.23) for the expected value of the periodogram into Eq. (8.71), we have

E\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  = \frac{1}{4\pi^2}\,P_x(e^{j\omega}) * W_B(e^{j\omega}) * W(e^{j\omega})     (8.72)

or, equivalently, in the time domain,

E\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  = \sum_{k=-M}^{M} r_x(k)\,w_B(k)\,w(k)\,e^{-jk\omega}     (8.73)
If we let w_{BT}(k) = w_B(k)\,w(k) be the combined window that is applied to the autocorrelation sequence r_x(k), then, using the frequency convolution theorem, we have

E\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  = \frac{1}{2\pi}\,P_x(e^{j\omega}) * W_{BT}(e^{j\omega})     (8.74)

If we assume that M \ll N so that w_B(k)\,w(k) \approx w(k) for |k| \le M, then we have

E\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx \frac{1}{2\pi}\,P_x(e^{j\omega}) * W(e^{j\omega})     (8.75)

where W(e^{j\omega}) is the Fourier transform of the lag window w(k).

Evaluating the variance of the Blackman-Tukey spectrum estimate is a bit more work. Since

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  = E\left\{\hat{P}_{BT}^2(e^{j\omega})\right\} - E^2\left\{\hat{P}_{BT}(e^{j\omega})\right\}     (8.76)

we begin by finding the mean-square value E\{\hat{P}_{BT}^2(e^{j\omega})\}. From Eq. (8.70), the mean-square value is

E\left\{\hat{P}_{BT}^2(e^{j\omega})\right\}
  = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}
    E\left\{\hat{P}_{per}(e^{ju})\,\hat{P}_{per}(e^{jv})\right\}
    W(e^{j(\omega-u)})\,W(e^{j(\omega-v)})\,du\,dv

Using the approximation given in Eq. (8.43) for E\{\hat{P}_{per}(e^{ju})\hat{P}_{per}(e^{jv})\} leads to an expression for the mean-square value that contains two terms. The first term is

\left[\frac{1}{2\pi}\int_{-\pi}^{\pi} P_x(e^{ju})\,W(e^{j(\omega-u)})\,du\right]^2
  = E^2\left\{\hat{P}_{BT}(e^{j\omega})\right\}     (8.77)

which is cancelled by the second term in Eq. (8.76). Therefore, the variance is

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}
    P_x(e^{ju})\,P_x(e^{jv})
    \left[\frac{\sin N(u-v)/2}{N\sin(u-v)/2}\right]^2
    W(e^{j(\omega-u)})\,W(e^{j(\omega-v)})\,du\,dv     (8.78)
Since

W_B(e^{j\omega}) = \frac{1}{N}\left[\frac{\sin(N\omega/2)}{\sin(\omega/2)}\right]^2

is the discrete-time Fourier transform of a Bartlett window, W_B(k) approaches a constant as N \to \infty and W_B(e^{j\omega}) converges to an impulse. Therefore, if N is large, then the term in brackets in Eq. (8.78) may be approximated by an impulse of area 2\pi/N,

\left[\frac{\sin N(u-v)/2}{N\sin(u-v)/2}\right]^2 \approx \frac{2\pi}{N}\,u_0(u-v)

Thus, for large N, the variance of the Blackman-Tukey estimate is approximately

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx \frac{1}{2\pi N}\int_{-\pi}^{\pi} P_x^2(e^{ju})\,W^2(e^{j(\omega-u)})\,du

If M is large enough so that we may assume that P_x(e^{ju}) is approximately constant across the main lobe of W(e^{j(\omega-u)}), then P_x^2(e^{j\omega}) may be pulled out of the integral,

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx P_x^2(e^{j\omega})\,\frac{1}{2\pi N}\int_{-\pi}^{\pi} W^2(e^{j(\omega-u)})\,du

Finally, using Parseval's theorem, we have

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx P_x^2(e^{j\omega})\,\frac{1}{N}\sum_{k=-M}^{M} w^2(k)     (8.79)

provided N \gg M \gg 1. Thus, from Eqs. (8.75) and (8.79) we again see the trade-off between bias and variance. For a small bias, M should be large in order to minimize the width of the main lobe of W(e^{j\omega}), whereas M should be small in order to minimize the sum in Eq. (8.79). Generally, it is recommended that M have a maximum value of M = N/5 [2,6]. The properties of the Blackman-Tukey method are summarized in Table 8.6.

Table 8.6 Properties of the Blackman-Tukey Method

\hat{P}_{BT}(e^{j\omega}) = \sum_{k=-M}^{M}\hat{r}_x(k)\,w(k)\,e^{-jk\omega}

Bias          E\{\hat{P}_{BT}(e^{j\omega})\} \approx \frac{1}{2\pi}\,P_x(e^{j\omega}) * W(e^{j\omega})
Resolution    Window dependent
Variance      Var\{\hat{P}_{BT}(e^{j\omega})\} \approx P_x^2(e^{j\omega})\,\frac{1}{N}\sum_{k=-M}^{M} w^2(k)

8.2.7 Performance Comparisons

An important issue in the selection of a spectrum estimation technique is the performance of the estimator. In the previous sections, we have seen that, in comparing one nonparametric method to another, there is a trade-off between resolution and variance. This section summarizes the performance of each nonparametric technique in terms of two criteria. The first is the variability of the estimate,

\mathcal{V} = \frac{Var\{\hat{P}_x(e^{j\omega})\}}{E^2\{\hat{P}_x(e^{j\omega})\}}

which is a normalized variance. The second measure is an overall figure of merit that is defined as the product of the variability and the resolution,

\mathcal{M} = \mathcal{V}\,\Delta\omega

This figure of merit should be as small as possible. However, as we will soon discover, this figure of merit is approximately the same for all of the nonparametric methods that we have considered.
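As a rough numerical illustration of how these two criteria combine (a sketch based on the formulas that are summarized in Table 8.7 below; the particular values of N, L, and M are arbitrary choices):

% Figures of merit for N = 512 using the Table 8.7 expressions.
N = 512;  L = 64;  M = round(N/5);
Kb = N/L;                   % Bartlett: nonoverlapping sections
Kw = 2*N/L - 1;             % Welch: 50% overlap
M_per   = 1            * 0.89*2*pi/N;      % periodogram
M_bart  = (1/Kb)       * 0.89*Kb*2*pi/N;   % Bartlett
M_welch = (9/(8*Kw))   * 1.28*2*pi/L;      % Welch (50% overlap, Bartlett window)
M_bt    = (2*M/(3*N))  * 0.64*2*pi/M;      % Blackman-Tukey (Bartlett lag window)
fprintf('periodogram %.4f  Bartlett %.4f  Welch %.4f  Blackman-Tukey %.4f\n', ...
        M_per, M_bart, M_welch, M_bt);
% All four figures of merit come out within a small factor of 2*pi/N,
% and every one of them shrinks in proportion to 1/N.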

Periodogram. In Section 8.2.2 it was shown that the periodogram is asymptotically unbiased and that, for large N, the variance is approximately equal to P_x^2(e^{j\omega}). Asymptotically, therefore, the variability of the periodogram is equal to one,

\mathcal{V}_{per} = \frac{P_x^2(e^{j\omega})}{P_x^2(e^{j\omega})} = 1

Thus, since the resolution of the periodogram is

\Delta\omega = 0.89\,\frac{2\pi}{N}

the overall figure of merit is

\mathcal{M}_{per} = 0.89\,\frac{2\pi}{N}

which is inversely proportional to the data record length, N.

Bartlett's Method. With Bartlett's method, the periodogram averaging reduces the variance by a factor of K, as in Eq. (8.63), and the variability is

\mathcal{V}_B = \frac{1}{K}

Since the resolution is \Delta\omega = 0.89\,K\,(2\pi/N), the figure of merit is

\mathcal{M}_B = 0.89\,\frac{2\pi}{N}

which is the same as the figure of merit for the periodogram.

Welch's Method. The statistical properties of Welch's method depend on the amount of overlap that is used and on the type of data window. With a 50% overlap and a Bartlett window, the variability for large N is approximately

\mathcal{V}_W \approx \frac{9}{8}\,\frac{1}{K} \approx \frac{9}{16}\,\frac{L}{N}

Since the 3 dB bandwidth of a Bartlett window of length L is 1.28(2\pi/L), the resolution is

\Delta\omega = 1.28\,\frac{2\pi}{L}

and the figure of merit becomes

\mathcal{M}_W = 0.72\,\frac{2\pi}{N}

Blackman-Tukey Method. Since the variance and resolution of the Blackman-Tukey method depend on the window that is used, suppose that w(k) is a Bartlett window of length 2M that extends from k = -M to k = M. Assuming that N \gg M \gg 1, the variance of the Blackman-Tukey estimate is approximately

Var\left\{\hat{P}_{BT}(e^{j\omega})\right\}
  \approx P_x^2(e^{j\omega})\,\frac{1}{N}\sum_{k=-M}^{M} w^2(k)
  \approx P_x^2(e^{j\omega})\,\frac{2M}{3N}

(here we have used the series given in Table 2.3 on p. 16 to evaluate the sum). Therefore, the variability is

\mathcal{V}_{BT} = \frac{2M}{3N}

Since the 3 dB bandwidth of a Bartlett window of length 2M is equal to 1.28(2\pi/2M), the resolution is

\Delta\omega = 0.64\,\frac{2\pi}{M}

and the figure of merit becomes

\mathcal{M}_{BT} = 0.43\,\frac{2\pi}{N}

which is slightly smaller than the figure of merit for Welch's method.

Summary. Table 8.7 provides a summary of the performance measures presented above for the periodogram-based spectrum estimation techniques discussed in this section. What is apparent from this table is that each technique has a figure of merit that is approximately the same, and that these figures of merit are inversely proportional to the length of the data sequence, N. Therefore, although each method differs in its resolution and variance, the overall performance is fundamentally limited by the amount of data that is available. In the following sections, we will look at some entirely different approaches to spectrum estimation in the hope of being able to find a high-resolution spectrum estimate with a small variance that works well on short data records.
Table 8.7 Performance Measures for the Nonparametric Methods of Spectrum Estimation

Method            Variability \mathcal{V}    Resolution \Delta\omega    Figure of Merit \mathcal{M}
Periodogram              1                   0.89 (2\pi/N)              0.89 (2\pi/N)
Bartlett                1/K                  0.89 K (2\pi/N)            0.89 (2\pi/N)
Welch†                 9/(8K)                1.28 (2\pi/L)              0.72 (2\pi/N)
Blackman-Tukey        2M/(3N)                0.64 (2\pi/M)              0.43 (2\pi/N)

†50% overlap and a Bartlett window.

8.3 MINIMUM VARIANCE SPECTRUM ESTIMATION

Up to this point, we have been considering nonparametric techniques for estimating the power spectrum of a random process. Relying on the DTFT of an estimated autocorrelation sequence, the performance of these methods is limited by the length of the data record. In this section, we develop the minimum variance (MV) method of spectrum estimation, which is an adaptation of the maximum likelihood method (MLM) developed by Capon for the analysis of two-dimensional power spectral densities [8]. In the MV method, the power spectrum is estimated by filtering a process with a bank of narrowband bandpass filters. The motivation for this approach may be seen by looking, once again, at the effect of filtering a WSS random process with a narrowband bandpass filter. Therefore, let x(n) be a zero mean wide-sense stationary random process with a power spectrum P_x(e^{j\omega}), and let g_i(n) be an ideal bandpass filter with a bandwidth \Delta and center frequency \omega_i,

\big|G_i(e^{j\omega})\big| =
  \begin{cases} 1 ,& |\omega-\omega_i| \le \Delta/2 \\ 0 ,& \text{otherwise} \end{cases}

If x(n) is filtered with g_i(n), then the power spectrum of the output process, y_i(n), is

P_i(e^{j\omega}) = P_x(e^{j\omega})\,\big|G_i(e^{j\omega})\big|^2     (8.80)

If \Delta is small enough so that P_x(e^{j\omega}) is approximately constant over the passband of the filter, then the power in y_i(n) is approximately

E\left\{|y_i(n)|^2\right\}
  = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_x(e^{j\omega})\,\big|G_i(e^{j\omega})\big|^2\,d\omega
  \approx \frac{\Delta}{2\pi}\,P_x(e^{j\omega_i})     (8.81)