Optimum Signal Processing, Second Edition
Solutions Manual
Sophocles J. Orfanidis
Department of Electrical Engineering, Rutgers University

Chapter 1

Problem 1.1:
The experiment is described by the following transition probability diagram: with probability 2/3 the fair die is selected (x = 0), and with probability 1/3 the biased die that always shows six is selected (x = 1); the selected die is tossed twice and y counts the number of sixes.

a. Given that the fair die has been selected (x = 0), the corresponding conditional probabilities of getting y = 0, 1, 2 sixes are:
p(y=0|x=0) = (5/6)^2 = 25/36,  p(y=1|x=0) = 2(1/6)(5/6) = 10/36,  p(y=2|x=0) = (1/6)^2 = 1/36
Similarly, the conditional probabilities of getting y sixes given that the biased die was selected (x = 1) are:
p(y=0|x=1) = 0,  p(y=1|x=1) = 0,  p(y=2|x=1) = 1

b. The probability p(y) of getting y sixes regardless of which die was selected can be computed using Bayes' rule:
p(y) = Σ_x p(y,x) = Σ_x p(y|x)p(x) = p(y|x=0)p(x=0) + p(y|x=1)p(x=1)
where the a priori probabilities of x are p(x=0) = 2/3 and p(x=1) = 1/3. Thus,
p(y=0) = (25/36)(2/3) = 50/108
p(y=1) = (10/36)(2/3) = 20/108
p(y=2) = (1/36)(2/3) + (1)(1/3) = 38/108

c. The mean number of sixes regardless of die is computed by:
E[y] = Σ_y y p(y) = 0·(50/108) + 1·(20/108) + 2·(38/108) = 96/108 = 0.89

d. Apply Bayes' rule p(x|y) = p(y|x)p(x)/p(y). Given that we observed either 0 or 1 six, it is certain that we must have started with the fair die; given that we observed 2 sixes, there is only a small chance that we started with the fair die. Indeed,
p(x=0|y=0) = p(y=0|x=0)p(x=0)/p(y=0) = (50/108)/(50/108) = 1
p(x=0|y=1) = p(y=1|x=0)p(x=0)/p(y=1) = (20/108)/(20/108) = 1
p(x=0|y=2) = p(y=2|x=0)p(x=0)/p(y=2) = (2/108)/(38/108) = 1/19

But a^T E[y y^T] a = E[(a^T y)(y^T a)] = E[e^2]. Thus, the determining equation for a becomes E[y y^T] a = E[e^2] u.

Problem 1.22:
The effective performance index, with the constraint built in, is
J = a^T R a + λ(1 − u^T a) = min
Setting the gradient with respect to a to zero gives ∂J/∂a = 2Ra − λu = 0, or a = λR^{-1}u/2. The Lagrange multiplier is fixed by requiring the constraint u^T a = 1.

Problem 1.23:
Taking determinants of both sides of Eq. (1.7.16) and using the fact that L has unit determinant (being unit lower triangular), it follows that det R = (det R̄)E_M, where R̄ is the autocorrelation matrix of one order lower; applying this repeatedly gives det R = E_0 E_1 ··· E_M.
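The probabilities of Problem 1.1 can be double-checked with exact rational arithmetic; the following is a quick sketch using Python's fractions module (the variable names are mine, not the text's):

```python
from fractions import Fraction as F

# Conditional probabilities of y sixes in two tosses: fair die (x=0), biased die (x=1)
p_y_given_fair = [F(25, 36), F(10, 36), F(1, 36)]   # (5/6)^2, 2*(1/6)(5/6), (1/6)^2
p_y_given_biased = [F(0), F(0), F(1)]               # the biased die always shows six
p_fair, p_biased = F(2, 3), F(1, 3)                 # a priori probabilities

# Total probability p(y) and mean number of sixes
p_y = [p_y_given_fair[y]*p_fair + p_y_given_biased[y]*p_biased for y in range(3)]
E_y = sum(y*p for y, p in enumerate(p_y))

# Posterior probability that the fair die was used, given y sixes (Bayes' rule)
post_fair = [p_y_given_fair[y]*p_fair / p_y[y] for y in range(3)]

print(p_y)        # 50/108, 20/108, 38/108 in reduced form
print(E_y)        # 96/108 = 8/9
print(post_fair)  # 1, 1, 1/19
```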
Problem 1.24:
Using (1.7.13) into (1.7.19), we find
R^{-1} = L^T D^{-1} L
where L is unit lower triangular and D = diag{E_0, E_1, ..., E_M}.

Problem 1.25:
a. Since a and b are independent and have zero mean, we have E[ab] = 0. Therefore, for x_n = a + bn,
E[x_n^2] = E[(a + bn)^2] = E[a^2] + n^2 E[b^2] = σ_a^2 + n^2 σ_b^2
b. x_n is not stationary, because E[x_n^2] depends on the absolute time n; similarly, it cannot be ergodic because ergodicity requires stationarity.
c. Being the linear combination of two gaussians, x_n is itself gaussian. It has zero mean and variance σ_n^2 = σ_a^2 + n^2 σ_b^2. Thus, its density is
p(x_n) = (2πσ_n^2)^{-1/2} exp(−x_n^2/2σ_n^2)
d. Since x_n − x_m = b(n − m), it follows that if x_m is given, then the randomness in x_n will arise only from b; that is, p(x_n|x_m)dx_n = p_b(b)db. Using dx_n = (n − m)db and replacing b = (x_n − x_m)/(n − m), we find
p(x_n|x_m) = p_b(b)/|n − m| = (2πσ_b^2)^{-1/2} |n − m|^{-1} exp[−(x_n − x_m)^2/(2σ_b^2(n − m)^2)]

Problem 1.26:
a. Working in the time domain, we find for 0 ≤ ...

... ln λ ≤ λ − 1, which is valid for all λ ≥ 0 (with equality attained at λ = 1). Noting that f(R) = tr(ln R + R^{-1}R̂), it follows that
f(R) − f(R̂) = tr(ln R − ln R̂ + R^{-1}R̂ − I) = tr(B − I − ln B) ≥ 0
where we set B = R^{-1}R̂. Note that B is nonnegative definite, being the product of two such matrices, so the inequality λ − 1 − ln λ ≥ 0 applies to each of its eigenvalues. Also, we made use of the fact that tr(ln R̂) − tr(ln R) = tr[ln(R^{-1}R̂)], which follows easily from the first part of the next problem.

Problem 1.35:
The first relationship follows from
ln(det R) = ln(Π_i λ_i) = Σ_i ln λ_i = tr(ln R)
where λ_i are the eigenvalues of R. The third follows by differentiating both sides of R R^{-1} = I to get
R d(R^{-1}) + (dR)R^{-1} = 0  ⇒  d(R^{-1}) = −R^{-1}(dR)R^{-1}
The second identity follows by making use of the eigendecomposition of R, namely R = EΛE^{-1}, where E is the matrix of eigenvectors and Λ the diagonal matrix of eigenvalues. Taking differentials, we have
dR = dE ΛE^{-1} + E dΛ E^{-1} − EΛE^{-1} dE E^{-1}
Multiplying by R^{-1} = EΛ^{-1}E^{-1}, we find
R^{-1}dR = E(Λ^{-1}dΛ)E^{-1} + EΛ^{-1}(E^{-1}dE)ΛE^{-1}
− dE E^{-1}
Taking traces and using the fact that the trace is invariant under similarity transformations, it follows that
tr(R^{-1}dR) = tr(Λ^{-1}dΛ) + tr(E^{-1}dE) − tr(dE E^{-1}) = tr(Λ^{-1}dΛ) = Σ_i dλ_i/λ_i = d(Σ_i ln λ_i) = d tr(ln R)
where the last equality follows from ln R = E(ln Λ)E^{-1}.

Chapter 2

Problem 2.1:
In all cases, we find the transfer function H(z) and use the relationship
S_yy(z) = H(z)H(z^{-1})S_xx(z) = H(z)H(z^{-1})
where we set S_xx(z) = σ_x^2 = 1.

1. The transfer function is H(z) = 1 − z^{-1}. Thus,
S_yy(z) = (1 − z^{-1})(1 − z) = 2 − (z + z^{-1})
Picking out the coefficients of z^{-k}, we find
R_yy(0) = 2,  R_yy(±1) = −1,  R_yy(±k) = 0, for k ≥ 2
The last result is of course related to the fact that the filter has memory of 1 sampling instant, and therefore it can only introduce sequential correlations of lag at most 1.

2. Here, H(z) = 1 − 2z^{-1} + z^{-2}. Since the filter is of order 2, we expect to have nontrivial correlations only up to lag 2; indeed,
S_yy(z) = (1 − 2z^{-1} + z^{-2})(1 − 2z + z^2) = 6 − 4(z + z^{-1}) + (z^2 + z^{-2})
Inverting this z-transform, we find the nonzero autocorrelation lags:
R_yy(0) = 6,  R_yy(±1) = −4,  R_yy(±2) = 1

3. Because the filter H(z) = 1/(1 − 0.5z^{-1}) is recursive, we must use the contour inversion formula
R_yy(k) = ∮ S_yy(z) z^k dz/(2πjz)
Since R_yy(k) = R_yy(−k), we only need to compute the above integral for k ≥ 0. In this case, the integrand has only one pole enclosed by the unit circle, namely z = 0.5. Therefore, we find
R_yy(k) = (0.5)^k/(1 − 0.25) = (4/3)(0.5)^k, for k ≥ 0

4. In this case we have
H(z) = 1/(1 − 0.25z^{-2}) = 1/[(1 − 0.5z^{-1})(1 + 0.5z^{-1})]
The power spectral density is
S_yy(z) = 1/[(1 − 0.25z^{-2})(1 − 0.25z^2)]
For lags k ≥ 0, the integrand of the contour inversion formula has two poles inside the unit circle, namely z = ±0.5. Evaluating the residues at these poles, we find
R_yy(k) = Res_{z=0.5} + Res_{z=−0.5} = (16/15)(0.5)^{k+1}[1 + (−1)^k]
that is, R_yy(k) = (16/15)(0.5)^k for even k ≥ 0, and R_yy(k) = 0 for odd k.

Problem 2.2:
This property is clearly seen for cross-periodograms:
Ŝ_yx(z) = Y(z)X(z^{-1}) = H(z)X(z)X(z^{-1}) = H(z)Ŝ_xx(z)
where we used the filtering equation Y(z) = H(z)X(z).
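The lag-domain results in parts 1 and 2 of Problem 2.1 follow from R_yy(k) = Σ_n h_n h_{n+k} for a unit-variance white input; here is a minimal check (the helper function is mine, not the text's):

```python
def fir_autocorr(h, k):
    # R_yy(k) = sum_n h[n] h[n+k], for unit-variance white noise input
    return sum(h[n] * h[n + k] for n in range(len(h) - k))

h1 = [1, -1]        # H(z) = 1 - z^-1       (part 1)
h2 = [1, -2, 1]     # H(z) = 1 - 2z^-1 + z^-2 (part 2)
print([fir_autocorr(h1, k) for k in range(3)])  # [2, -1, 0]
print([fir_autocorr(h2, k) for k in range(4)])  # [6, -4, 1, 0]
```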
For the statistical quantities, the proof is best carried out in the time domain, working with autocorrelations and using stationarity:
R_yx(k) = E[y_{n+k}x_n] = E[(Σ_m h_m x_{n+k−m})x_n] = Σ_m h_m E[x_{n+k−m}x_n] = Σ_m h_m R_xx(k − m)
Taking z-transforms of both sides gives the desired result, S_yx(z) = H(z)S_xx(z). Part (b) follows from part (a) and the symmetry property S_xy(z) = S_yx(z^{-1}); indeed,
S_xy(z) = S_yx(z^{-1}) = H(z^{-1})S_xx(z^{-1}) = S_xx(z)H(z^{-1})

Problem 2.3:
Write e_n in vector form, as follows:
e_n = Σ_{m=0}^{M} a_m y_{n−m} = [a_0, a_1, ..., a_M][y_n; y_{n−1}; ...; y_{n−M}] = a^T y(n)
Its mean square becomes
E[e_n^2] = E[(a^T y(n))(y(n)^T a)] = a^T E[y(n)y(n)^T] a = a^T R_y a
where R_y = E[y(n)y(n)^T]. Its matrix elements are expressed in terms of the autocorrelation lags as follows:
(R_y)_{ij} = E[y_{n−i}y_{n−j}] = R_yy((n−i) − (n−j)) = R_yy(j − i) = R_yy(i − j)
The second expression for E[e_n^2] is obtained by noting that it is expressible as the zero-lag autocorrelation of e_n:
E[e_n^2] = R_ee(0) = ∮ S_ee(z) dz/(2πjz) = ∫_{−π}^{π} |A(ω)|^2 S_yy(ω) dω/2π
where we used Eq. (1.9.5) and S_ee(ω) = |A(ω)|^2 S_yy(ω), which follows from the fact that e_n is the output of the linear filter A(z) when the input is y_n.

Problem 2.4:
Because B(z) = 1/A(z) is the signal model of y_n, we must have
S_yy(z) = σ_ε^2 |B(ω)|^2 = σ_ε^2/[A(z)A(z^{-1})]
Using the results of Problem 2.3, applied to the filter A'(z), we find
E' = a'^T R_y a' = ∫_{−π}^{π} |A'(ω)|^2 S_yy(ω) dω/2π = σ_ε^2 ∫_{−π}^{π} |A'(ω)/A(ω)|^2 dω/2π
Applying Problem 2.3 to A(z) itself, we get σ_ε^2 = a^T R_y a. We finally find
E'/σ_ε^2 = (a'^T R_y a')/(a^T R_y a)
Part (b) is obtained by interchanging the roles of A'(z) and A(z).

Problem 2.5:
Using stationarity, we obtain
R_yy(k)* = (E[y_{n+k}y_n*])* = E[y_n y_{n+k}*] = R_yy(−k)
In the notation of Problem 2.3, we may write e_n = a^T y(n). Its mean square value is
E[e_n* e_n] = a^† E[y(n)* y(n)^T] a = a^† R_y a
The matrix elements of R_y are expressed in terms of the autocorrelation lags:
(R_y)_{ij} = E[y_{n−i}* y_{n−j}] = R_yy(i − j)
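The two expressions for E[e_n^2] in Problem 2.3 can be compared numerically for a small example; the sketch below uses the part-1 autocorrelation of Problem 2.1 and an arbitrary choice of filter coefficients a (both choices are mine):

```python
import numpy as np

# Problem 2.3: a^T R_y a should equal (1/2π) ∫ |A(ω)|² S_yy(ω) dω
a = np.array([1.0, -1.0])                    # A(z) = 1 - z^-1 (sample choice)
Rlags = [2.0, -1.0, 0.0]                     # R_yy of Problem 2.1, part 1
Ry = np.array([[Rlags[abs(i - j)] for j in range(2)] for i in range(2)])
quad = a @ Ry @ a                            # quadratic-form expression

N = 4096
w = 2 * np.pi * np.arange(N) / N
A = a[0] + a[1] * np.exp(-1j * w)            # frequency response of A(z)
S = 2 - 2 * np.cos(w)                        # S_yy(ω) = 2 - (e^{jω} + e^{-jω})
integral = np.mean(np.abs(A) ** 2 * S)       # Riemann sum ≈ (1/2π)∫ ... dω

print(quad, integral)                        # both equal 6
```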
Problem 2.6:
Inserting y_n = A e^{jωn + jφ} into the definition of Problem 2.5, we find
R_yy(k) = E[y_{n+k}y_n*] = E[A e^{jω(n+k)+jφ} A* e^{−jωn−jφ}] = |A|^2 e^{jωk}
Next, let y_n be the sum
y_n = A_1 e^{j(ω_1 n + φ_1)} + A_2 e^{j(ω_2 n + φ_2)}
Then, E[y_{n+k}y_n*] contains the four terms
|A_1|^2 e^{jω_1 k} + |A_2|^2 e^{jω_2 k} + A_1 A_2* e^{jω_1(n+k)} e^{−jω_2 n} E[e^{jφ_1} e^{−jφ_2}] + A_1* A_2 e^{jω_2(n+k)} e^{−jω_1 n} E[e^{jφ_2} e^{−jφ_1}]
Because φ_1 and φ_2 are independent, we have
E[e^{jφ_1} e^{−jφ_2}] = E[e^{jφ_1}] E[e^{−jφ_2}] = 0
where we used the fact that for uniformly distributed φ, the expectation value E[e^{jφ}] is zero. Indeed,
E[e^{jφ}] = ∫ e^{jφ} p(φ) dφ = ∫_{−π}^{π} e^{jφ} dφ/2π = 0
It follows that the cross terms in the above autocorrelation are zero. Thus,
R_yy(k) = E[y_{n+k}y_n*] = |A_1|^2 e^{jω_1 k} + |A_2|^2 e^{jω_2 k}

Problem 2.7:
We just showed that for mutually independent and uniformly distributed phase angles, E[e^{jφ_i} e^{−jφ_j}] = 0 if φ_i ≠ φ_j. But if φ_i = φ_j, then E[e^{jφ_i} e^{−jφ_i}] = E[1] = 1. To summarize, we have
E[e^{jφ_i} e^{−jφ_j}] = δ_{ij}
Using this result and E[v_n* e^{jφ_i}] = 0, which follows from the independence of v_n and φ_i, we get for y_n = Σ_i A_i e^{j(ω_i n + φ_i)} + v_n:
R_yy(k) = E[y_{n+k}y_n*] = σ_v^2 δ(k) + Σ_i |A_i|^2 e^{jω_i k}
Using e_n = Σ_{m=0}^{L} a_m y_{n−m} and A(ω) = Σ_{m=0}^{L} a_m e^{−jωm}, we find
E[|e_n|^2] = Σ_{l,m} a_l* R_yy(l − m) a_m = σ_v^2 Σ_{m=0}^{L} |a_m|^2 + Σ_i |A_i|^2 |A(ω_i)|^2 = σ_v^2 a^†a + Σ_i |A_i|^2 |A(ω_i)|^2
This last result forms the basis of Pisarenko's method of harmonic retrieval, as discussed in Section 6.2. Part (d) follows the same steps with the replacement of σ_v^2 δ(l−m) by Q(l−m), and recognizing that Σ_{l,m} a_l* Q(l−m) a_m = a^† Q a.

Problem 2.9:
Consider a more general problem of the form
y_n = −R^2 y_{n−2} + (1 − R^2) x_n
with transfer function
H(z) = (1 − R^2)/(1 + R^2 z^{-2})
The power spectral density is
S_yy(z) = σ_x^2 H(z)H(z^{-1}) = (1 − R^2)^2/[(1 + R^2 z^{-2})(1 + R^2 z^2)]
where σ_x^2 = 1. Setting z = e^{jω},
S_yy(ω) = (1 − R^2)^2/(1 + 2R^2 cos(2ω) + R^4)
The autocorrelation function is obtained from the contour integral
R_yy(k) = ∮ S_yy(z) z^k dz/(2πjz)
The poles inside the unit circle are at z = ±jR. Evaluating the residues, we find (for k ≥ 0)
R_yy(k) = [(1 − R^2)/(1 + R^2)] R^k cos(πk/2)
which is nonzero only for even k. Since σ_y^2 = R_yy(0), we find for the noise reduction ratio
NRR = σ_y^2/σ_x^2 = (1 − R^2)/(1 + R^2)
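The noise-reduction ratio just derived for Problem 2.9 can be verified by summing the squared impulse response of H(z); a numerical sketch (R = 0.9 is an arbitrary sample value):

```python
R = 0.9
g = 1 - R**2
# impulse response of H(z) = (1-R²)/(1 + R² z⁻²): h[2m] = (1-R²)(-R²)^m, odd taps zero
h = [g * (-R**2) ** m for m in range(200)]

nrr = sum(x * x for x in h)          # σ_y²/σ_x² for unit-variance white input
closed = (1 - R**2) / (1 + R**2)     # closed-form noise reduction ratio
print(nrr, closed)
```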
Since the filter has unity gain at the frequencies ω = ±π/2, H(±π/2) = 1, it follows that any linear combination of cos(πn/2) and sin(πn/2) will go through the filter completely unchanged (in the steady state). For values of R close to unity, the steady state is reached more slowly, but the noise reduction ratio is smaller. This is the basic tradeoff between speed of response and effective noise reduction.

Problem 2.11:
In correlation canceler notation, we have x = [y_n] and y = [y_{n−1}]. Then,
R_xy = E[y_n y_{n−1}] = R(1),  R_yy = E[y_{n−1}^2] = R(0),  H = R_xy R_yy^{-1} = R(1)/R(0)
The estimate x̂ = Hy becomes
ŷ_{n/n−1} = [R(1)/R(0)] y_{n−1} = −a_1 y_{n−1}
where we set a_1 = −R(1)/R(0). The minimized estimation error is computed by
E[e^2] = R_xx − R_xy R_yy^{-1} R_yx = R(0) − R(1)^2/R(0) = R(0) + a_1 R(1)

Problem 2.12:
The sample autocorrelation of y = [1, 1, 1, 1] is
R̂(k) = Σ_n y_{n+k} y_n = 4 − k, for k = 0, 1, 2, 3
The resulting first-order predictor and prediction error are
a_1 = −R̂(1)/R̂(0) = −3/4,  E_1 = R̂(0) + a_1 R̂(1) = 4 − 9/4 = 7/4 = 1.75
The estimate is
ŷ_4 = −a_1 y_3 = 0.75 y_3 = 0.75
The gapped function is defined by
g(k) = Σ_m a_m R̂(k − m) = R̂(k) + a_1 R̂(k − 1)
It has a gap of length one:
g(1) = R̂(1) + a_1 R̂(0) = 3 − (3/4)·4 = 0
Next, compute g(0) and g(2):
g(0) = R̂(0) + a_1 R̂(1) = 4 − 9/4 = 7/4,  g(2) = R̂(2) + a_1 R̂(1) = 2 − 9/4 = −1/4
Next, define the second-order gapped function
g'(k) = g(k) − γ_2 g(2 − k),  where γ_2 = g(2)/g(0) = −1/7
Using the Levinson recursion, we construct the second-order predictor by
[1; a_1'; a_2'] = [1; −3/4; 0] − γ_2 [0; −3/4; 1] = [1; −6/7; 1/7]
The 2nd-order prediction error is
E_2 = (1 − γ_2^2)E_1 = (1 − 1/49)(7/4) = 12/7 = 1.714
As expected, it is smaller than the error of the 1st-order predictor. The estimate is given by
ŷ_4 = (6/7)y_3 − (1/7)y_2 = 6/7 − 1/7 = 5/7 = 0.714
Even though the 2nd-order predictor is better than the 1st-order one in the mean-square sense, the actual "predicted" value of the 5th sample is "worse" than that predicted by the 1st-order predictor (assuming, of course, that the most "obvious" value should be y_4 = 1). Finally, the zeros of the 2nd-order predictor are found by solving the quadratic equation
z^2 − (6/7)z + 1/7 = 0
It has roots z = 0.227 and z = 0.631.

Problem 2.13:
We have y_n = −(−1)^n for n = 0, 1, 2, 3.
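The numbers of Problem 2.12 can be reproduced exactly with rational arithmetic; a sketch (the intermediate variable names are mine):

```python
from fractions import Fraction as F

R = [F(4), F(3), F(2), F(1)]              # sample autocorrelation of y = [1,1,1,1]
a1 = -R[1] / R[0]                         # first-order predictor coefficient, -3/4
E1 = R[0] + a1 * R[1]                     # first-order error, 7/4

g0 = R[0] + a1 * R[1]                     # gapped function g(0)
g2 = R[2] + a1 * R[1]                     # gapped function g(2)
gamma2 = g2 / g0                          # -1/7

# Levinson update: a^(2) = [1, a1, 0] - gamma2*[0, a1, 1]
a = [F(1), a1 - gamma2 * a1, -gamma2]
E2 = (1 - gamma2**2) * E1
yhat4 = -(a[1] + a[2])                    # prediction of y4 from y3 = y2 = 1

print(a)      # [1, -6/7, 1/7]
print(E2)     # 12/7
print(yhat4)  # 5/7
```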
The sample autocorrelation is
R̂(k) = Σ_n y_{n+k} y_n = (−1)^k (4 − k), for k = 0, 1, 2, 3
Then,
a_1 = −R̂(1)/R̂(0) = 3/4,  E_1 = R̂(0) + a_1 R̂(1) = 7/4,  ŷ_4 = −a_1 y_3 = −0.75
Similarly, we find for the autocorrelation of y = [1, 2, 3, 4]:
{R̂(0), R̂(1), R̂(2), R̂(3)} = {30, 20, 11, 4}
and a_1 = −R̂(1)/R̂(0) = −2/3.

Problem 2.14:
The equation at the upper adder is
e_n = y_n + a_1 y_{n−1}
Thus, we find
E(z)/Y(z) = 1 + a_1 z^{-1} = A(z)
Similarly, at the lower adder we have r_n = y_{n−1} + a_1 y_n, which results in the transfer function
R(z) = (z^{-1} + a_1)Y(z) = A^R(z)Y(z)
where A^R(z) = z^{-1}A(z^{-1}) is the reverse polynomial.

Problem 2.15:
In the z-domain, the equation at the second upper adder is e(z) = e'(z) − γ_p z^{-1} r'(z), or
e(z) = [A'(z) − γ_p z^{-1} A'^R(z)]Y(z) = A(z)Y(z)
where we used the Levinson recursion A(z) = A'(z) − γ_p z^{-1} A'^R(z).

Problem 2.16:
Let p_1 and p_2 be the two zeros of the prediction-error filter, so that
A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = (1 − p_1 z^{-1})(1 − p_2 z^{-1})
Because of the Levinson recursion, we have
p_1 + p_2 = −a_1 = γ_1(1 − γ_2),  p_1 p_2 = a_2 = −γ_2
The required integral is
I = ∮ dz/[2πjz A(z)A(z^{-1})]
Evaluating the residues at the poles z = p_1 and z = p_2 inside the unit circle, and using the identity
(1 − p_1^2)(1 − p_2^2) = (1 + p_1 p_2)^2 − (p_1 + p_2)^2
and expressing the p's in terms of the γ's, we find finally
I = (1 + p_1 p_2)/{(1 − p_1 p_2)[(1 + p_1 p_2)^2 − (p_1 + p_2)^2]} = 1/[(1 − γ_1^2)(1 − γ_2^2)]

Chapter 3

Problem 3.1:
Starting with the right-hand side, we have
A(z)A(z^{-1}) = (Σ_i a_i z^{-i})(Σ_n a_n z^n) = Σ_{i,n} a_i a_n z^{-(i−n)} = Σ_{k,n} a_{n+k} a_n z^{-k}
where we defined k = i − n and changed summation variables from the pair (i, n) to the pair (k, n). We must now determine the proper range of summation over k and n, doing the n summation first and the k summation last. Since both i and n are in the range [0, M], it follows that k will range over −M ≤ k ≤ M, and for each such k, n ranges over max(0, −k) ≤ n ≤ min(M, M − k).

Summing over n, we find
Σ_n |e_n|^2 = (1 − |γ_1|^2) Σ_n |f_n|^2
where we used the fact that f_∞ = 0.

Problem 3.3:
Using the following standard z-transform
Σ_{k=−∞}^{∞} a^{|k|} z^{-k} = (1 − a^2)/[(1 − az^{-1})(1 − az)]
we find
S_yy(z) = Σ_k (0.5)^{|k|} z^{-k} = 0.75/[(1 − 0.5z^{-1})(1 − 0.5z)]
Thus, the model is
B(z) = 1/(1 − 0.5z^{-1}), and σ_ε^2 = 1 − (0.5)^2 = 0.75
The difference equation is y_n = 0.5y_{n−1} + ε_n.

Problem 3.4:
As in the previous problem, we find for the autocorrelation R(k) = (0.5)^{|k|} + (−0.5)^{|k|}:
S_yy(z) = 0.75/[(1 − 0.5z^{-1})(1 − 0.5z)] + 0.75/[(1 + 0.5z^{-1})(1 + 0.5z)] = 1.875/[(1 − 0.25z^{-2})(1 − 0.25z^2)]
Therefore, σ_ε^2 = 1.875 and
B(z) = 1/(1 − 0.25z^{-2})
with difference equation y_n = 0.25y_{n−2} + ε_n.

Problem 3.5:
See the solution of Problem 2.9.

Problem 3.6:
Factor the numerator in the form
2.18 − 0.6(z + z^{-1}) = σ^2(1 − fz^{-1})(1 − fz) = σ^2(1 + f^2) − σ^2 f(z + z^{-1})
This requires 2.18 = σ^2(1 + f^2) and 0.6 = σ^2 f. Eliminating σ^2, we find
(1 + f^2)/f = 2.18/0.6  ⇒  f = 0.3
where we chose the solution for f of magnitude less than one. Then, we have σ^2 = 0.6/f = 2. Thus,
2.18 − 0.6(z + z^{-1}) = 2(1 − 0.3z^{-1})(1 − 0.3z)
The denominator is factored in a similar fashion to give
1.25 − 0.5(z + z^{-1}) = (1 − 0.5z^{-1})(1 − 0.5z)
Therefore, S_yy(z) is factored as follows:
S_yy(z) = 2(1 − 0.3z^{-1})(1 − 0.3z)/[(1 − 0.5z^{-1})(1 − 0.5z)]
We identify σ_ε^2 = 2 and
B(z) = (1 − 0.3z^{-1})/(1 − 0.5z^{-1})
and the difference equation y_n = 0.5y_{n−1} + ε_n − 0.3ε_{n−1}.

Problem 3.7:
Using y_n = cx_n + v_n and the fact that x_n and v_n are uncorrelated, we find
S_yy(z) = c^2 S_xx(z) + S_vv(z) = c^2 Q/[(1 − az^{-1})(1 − az)] + R = [c^2 Q + R(1 − az^{-1})(1 − az)]/[(1 − az^{-1})(1 − az)]
The required signal model B(z) is found by factoring the numerator in the form (with |f| < 1):
c^2 Q + R(1 − az^{-1})(1 − az) = σ_ε^2(1 − fz^{-1})(1 − fz)
which gives rise to the identities
σ_ε^2(1 + f^2) = c^2 Q + R(1 + a^2),  σ_ε^2 f = Ra
Solving the second, σ_ε^2 = Ra/f, and inserting in the first, we find the quadratic equation for f:
Ra(1 + f^2) = f[c^2 Q + R(1 + a^2)]
This equation remains invariant under the substitution f → 1/f. Therefore, if one solution has magnitude less than one, the other will have magnitude greater than one. Substituting the expression
f = Ra/(R + c^2 P)
into the quadratic equation for f, we find after some algebra that P must satisfy the algebraic Riccati equation
P = Q + Ra^2 P/(R + c^2 P)
If P is positive, then f has magnitude less than one. Indeed, since by assumption |a| < 1,
|f| = |a| R/(R + c^2 P) ≤ |a| < 1
To show that one solution for P is positive and the other negative, we work with the variable x = c^2 P/R. Then, the quadratic Riccati equation reads
x^2 − αx − β = 0
where β = c^2 Q/R and α = β − (1 − a^2). The two solutions are
x_± = [α ± sqrt(α^2 + 4β)]/2
Because β > 0, it follows that regardless of the value of α, x_+ will be positive and x_− negative.
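The factorization conditions of Problem 3.7 can be checked numerically: solve the quadratic for f, form σ_ε^2 and P, and confirm the Riccati equation. A sketch with arbitrary sample parameter values (not from the text):

```python
import numpy as np

a, c, Q, R = 0.8, 1.0, 0.5, 1.0           # sample values (assumed)
# quadratic for f:  R a (1 + f²) = f [c² Q + R(1 + a²)]
coef = c * c * Q + R * (1 + a * a)
roots = np.roots([R * a, -coef, R * a])    # roots come in a pair f, 1/f
f = float(np.real(roots[np.abs(roots).argmin()]))  # take the root inside |f| < 1

sigma_e2 = R * a / f                       # innovations variance
P = (sigma_e2 - R) / (c * c)               # from sigma_e2 = R + c² P
riccati_rhs = a * a * R * P / (R + c * c * P) + Q
print(f, P, riccati_rhs)                   # P should equal riccati_rhs
```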
Finally, inserting the expression of f in terms of P into σ_ε^2 = Ra/f, we find
σ_ε^2 = Ra/f = R + c^2 P

Problem 3.8:
For n ...

... The prediction filter is then
Ĥ(z) = 0.1/(1 − 0.4z^{-1}),  or,  x̂_{n+1/n} = 0.4 x̂_{n/n−1} + 0.1 y_n
where we denoted x̂_{n+1/n} = x̂_{n+1}(n). We note that the prediction filter Ĥ(z) is related to the estimation filter H(z) by Ĥ(z) = 0.5H(z). Since both filters have the same input, that is, y_n, it follows that
x̂_{n+1/n} = 0.5 x̂_{n/n}
We find for the prediction error
E[e_{n+1/n}^2] = 1.25
It is slightly worse than the estimation error E[e_{n/n}^2] = 1 because the estimation filter uses one more observation than the prediction filter, and thus makes a better estimate.
Next, we derive the above results using a Kalman filter formulation. The state and measurement equations are
x_{n+1} = a x_n + w_n,  and  y_n = c x_n + v_n
with parameters
a = 0.5,  c = 1,  Q = 1,  R = 5
The corresponding algebraic Riccati equation is
P = Q + Ra^2 P/(R + c^2 P) = 1 + 1.25P/(5 + P)
with positive solution P = 1.25. The Kalman gains are
G = P/(R + c^2 P) = 1.25/6.25 = 0.2,  K = aG = 0.1
We also find
f = a − cK = 0.4,  σ_ε^2 = R + c^2 P = 6.25
Therefore, the prediction and filtering equations will be
x̂_{n+1/n} = f x̂_{n/n−1} + K y_n = 0.4 x̂_{n/n−1} + 0.1 y_n
x̂_{n/n} = (1 − cG) x̂_{n/n−1} + G y_n = 0.8 x̂_{n/n−1} + 0.2 y_n
Finally, note E[e_{n+1/n}^2] = P = 1.25 and E[e_{n/n}^2] = PR/(R + c^2 P) = 1.

Problem 4.7:
Solving the Riccati equation
P = Q + Ra^2 P/(R + c^2 P)
with Q = 0.5, R = 1, a = 1, and c = 1, we find P = 1. Thus,
G = P/(R + c^2 P) = 0.5,  K = aG = 0.5,  f = a − cK = 0.5,  σ_ε^2 = R + c^2 P = 2
The prediction and filtering equations become
x̂_{n+1/n} = 0.5 x̂_{n/n−1} + 0.5 y_n
x̂_{n/n} = 0.5 x̂_{n/n−1} + 0.5 y_n
Note that x̂_{n+1/n} = a x̂_{n/n} = x̂_{n/n}. Also, because the signal model is marginally stable (rather than strictly stable), the signal x_n is not truly stationary. Therefore, it is not entirely correct to apply the above Kalman filter methods, which were derived by assuming strict stationarity. However, as shown in Example 4.9.1, the time-varying Kalman filter converges asymptotically to the above stationary one.
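The positive solution P = 1 of Problem 4.7 is also the limit of the fixed-point iteration P ← a^2 RP/(R + c^2 P) + Q, mirroring the asymptotic convergence of the time-varying filter; a sketch:

```python
a, c, Q, R = 1.0, 1.0, 0.5, 1.0     # parameters of Problem 4.7

P = 0.0                              # start from zero initial error covariance
for _ in range(200):                 # iterate the Riccati map to its fixed point
    P = a * a * R * P / (R + c * c * P) + Q

G = P / (R + c * c * P)              # Kalman gains and closure factor
K = a * G
f = a - c * K
print(P, G, K, f)                    # converges to P = 1, G = K = f = 0.5
```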
Problem 4.8:
First, we derive a difference equation for e_{n+1/n} = x_{n+1} − x̂_{n+1/n}:
e_{n+1/n} = x_{n+1} − x̂_{n+1/n} = (a x_n + w_n) − (f x̂_{n/n−1} + K y_n) = f e_{n/n−1} + u_n,  u_n = w_n − K v_n
where we used y_n = c x_n + v_n and a = f + cK. We may think of e_{n+1/n} as the output of the filter
M(z) = 1/(1 − fz^{-1})
driven by the white noise input u_n = w_n − K v_n. Because w_n and v_n are uncorrelated, the variance of u_n is
σ_u^2 = E[u_n^2] = E[w_n^2] + K^2 E[v_n^2] = Q + K^2 R
The power spectral density of e_{n+1/n} will be
S_ee(z) = σ_u^2 M(z)M(z^{-1}) = σ_u^2/[(1 − fz^{-1})(1 − fz)]
Integrating S_ee(z) over the unit circle, we find the variance of e_{n+1/n}:
P = E[e_{n+1/n}^2] = ∮ S_ee(z) dz/(2πjz) = (Q + K^2 R)/(1 − f^2)
Setting the derivative of P with respect to the Kalman gain K to zero (with f = a − cK) gives
KR(1 − f^2) = cf(Q + K^2 R) = cfP(1 − f^2)  ⇒  KR = cfP
We can use this result to express f and K in terms of P:
KR = (a − cK)cP  ⇒  K = acP/(R + c^2 P),  f = a − cK = aR/(R + c^2 P)
Inserting the above expressions for f and K into P = (Q + K^2 R)/(1 − f^2), we find the algebraic Riccati equation
P = Q + Ra^2 P/(R + c^2 P)

Problem 4.9:
In the previous problem we derived the difference equation e_{n+1/n} = f e_{n/n−1} + u_n, where u_n = w_n − K v_n. Because u_n is white, it follows from the causality of this difference equation that E[e_{n/n−1} u_n] = 0. Using this and stationarity, we find
P = E[e_{n+1/n}^2] = f^2 E[e_{n/n−1}^2] + E[u_n^2] = f^2 P + (Q + K^2 R)
which may be solved as P = (Q + K^2 R)/(1 − f^2). Next, relate the prediction error to the filtering error. We have
e_{n+1/n} = x_{n+1} − x̂_{n+1/n} = (a x_n + w_n) − a x̂_{n/n} = a e_{n/n} + w_n
w_n is uncorrelated with e_{n/n}, because it is uncorrelated with x_n, y_n, and all the past x's and y's. It follows that
E[e_{n+1/n}^2] = a^2 E[e_{n/n}^2] + E[w_n^2]  ⇒  P = a^2 E[e_{n/n}^2] + Q
Solving for E[e_{n/n}^2] and using the Riccati equation, we find
E[e_{n/n}^2] = (P − Q)/a^2 = PR/(R + c^2 P)

Problem 4.10:
Using the measurement equation y_n = c x_n + v_n, we find for the innovations
ε_n = y_n − ŷ_{n/n−1} = (c x_n + v_n) − c x̂_{n/n−1} = c e_{n/n−1} + v_n
Now, v_n is uncorrelated with x_n and all past x's. Therefore, it is uncorrelated with y_{n−1}, y_{n−2}, ..., and hence with x̂_{n/n−1} and e_{n/n−1}. Thus,
σ_ε^2 = E[ε_n^2] = c^2 E[e_{n/n−1}^2] + E[v_n^2] = c^2 P + R

Problem 4.11:
First, note that the nth diagonal entry of this performance index incorporates the constraints required for the nth estimate:
J_nn = E[e_n^2] + (ΛH^T)_nn + (HΛ^T)_nn = E[e_n^2] + 2Σ_m Λ_nm H_nm
Using R_ee = R_xx − HR_yx − R_xy H^T + HR_yy H^T, derived in Section 1.4, we find
J = tr[R_xx − HR_yx − R_xy H^T + HR_yy H^T + ΛH^T + HΛ^T]
The first-order variation with respect to H is
δJ = tr[(HR_yy − R_xy + Λ)δH^T] + tr[δH(R_yy H^T − R_yx + Λ^T)]
The unconstrained minimization of J requires δJ = 0 for all δH. Thus, we obtain the conditions
HR_yy − R_xy = −Λ = strictly upper triangular
which is the same as Eq. (4.8.3) subject to Eq. (4.8.2).

Chapter 5

Problem 5.1:
Let x_n = y_{n+D}, so that X(z) = z^D Y(z). Then, the cross power spectral density will be
S_xy(z) = z^D S_yy(z) = σ_ε^2 z^D B(z)B(z^{-1}),  where S_yy(z) = σ_ε^2 B(z)B(z^{-1})
The Wiener filter for estimating x_n from y_n is then
H(z) = (1/(σ_ε^2 B(z)))[S_xy(z)/B(z^{-1})]_+ = (1/B(z))[z^D B(z)]_+
The causal instruction is removed as follows: writing B(z) = 1 + b_1 z^{-1} + b_2 z^{-2} + ···, we have
[z^D B(z)]_+ = b_D + b_{D+1} z^{-1} + b_{D+2} z^{-2} + ···
Therefore, the prediction filter becomes
H(z) = (b_D + b_{D+1} z^{-1} + b_{D+2} z^{-2} + ···)/B(z)
Note that its input is y_n and its output is the estimate ŷ_{n+D/n} of y_{n+D}.
The first few terms in the power series expansion of Example 5.1.1 are
B(z) = 1/(1 − 0.9z^{-1} + 0.2z^{-2}) = 1 + 0.9z^{-1} + 0.61z^{-2} + 0.369z^{-3} + ···
Therefore, the D = 2 predictor will be
H(z) = z^2[1 − (1 − 0.9z^{-1} + 0.2z^{-2})(1 + 0.9z^{-1})] = 0.61 − 0.18z^{-1}
with I/O equation ŷ_{n+2/n} = 0.61y_n − 0.18y_{n−1}. For D = 3, we find
H(z) = z^3[1 − (1 − 0.9z^{-1} + 0.2z^{-2})(1 + 0.9z^{-1} + 0.61z^{-2})] = 0.369 − 0.122z^{-1}
with I/O equation ŷ_{n+3/n} = 0.369y_n − 0.122y_{n−1}. For Example 5.1.2, the same procedure gives the 2-step predictor
H(z) = 0.39/(1 − 0.25z^{-1})
with difference equation ŷ_{n+2/n} = 0.25 ŷ_{n+1/n−1} + 0.39y_n. The 3-step predictor is
H(z) = 0.312/(1 − 0.25z^{-1})
with difference equation ŷ_{n+3/n} = 0.25 ŷ_{n+2/n−1} + 0.312y_n.

Problem 5.2:
A(z) = 1/B(z) determines the projection onto the infinite past. Working in the z-domain,
Ŷ(z) = Y(z) − E(z) = [1 − A(z)]Y(z) = −[a_1 z^{-1} + a_2 z^{-2} + ··· + a_p z^{-p}]Y(z)
or, in the time domain,
ŷ_n = −[a_1 y_{n−1} + a_2 y_{n−2} + ··· + a_p y_{n−p}]
The prediction coefficients must satisfy Eqs. (5.2.5), which are identical to Eqs.
(5.3.7) that determine the projection onto the past p samples.

Problem 5.3:
Using e_n = Σ_{m=0}^{M} a_m y_{n−m}, we find
E = E[e_n^2] = a^T R a
The constraint a_0 = 1 can be written in vector form as a^T u = 1, where u is the unit vector u = [1, 0, ..., 0]^T. The extended performance index incorporating this constraint with a Lagrange multiplier will be
J = a^T R a + λ(1 − a^T u)
Its first-order variation with respect to a is
δJ = 2δa^T Ra − λδa^T u = δa^T(2Ra − λu)
The minimization condition δJ = 0 requires
Ra = (λ/2)u
The Lagrange multiplier is fixed by imposing the constraint. Multiplying by a^T, we find
E = a^T R a = (λ/2)a^T u = λ/2
Thus, we obtain Eq. (5.3.7), Ra = Eu.

Problem 5.4:
Noting that the A^R are the reverse polynomials, we find that the inverse z-transform of Eq. (5.3.17) is recognized as the upside-down (i.e., reversed) version of Eq. (5.3.15).

Problem 5.5:
For arbitrary γ, we have the matrix identity ...

Problem 5.6:
Sending the reflection coefficients {γ_1, γ_2, γ_3, γ_4} = {0.5, −0.5, 0.5, −0.5} through the routine frwlev gives all the prediction-error filters up to order four, with their coefficients arranged in reverse order into the matrix L:
L = [  1      0      0       0     0
      −0.5    1      0       0     0
       0.5   −0.75   1       0     0
      −0.5    0.875 −1       1     0
       0.5   −1      1.3125 −1.25  1 ]
The 4th-order polynomial extracted from the last row of L is
A_4(z) = 1 − 1.25z^{-1} + 1.3125z^{-2} − z^{-3} + 0.5z^{-4}
Sending the coefficients of A_4(z) and E_4 = 40.5 through the routine rlev gives the autocorrelation lags:
{R(0), R(1), R(2), R(3), R(4)} = {128, 64, −16, −8, 11}

Problem 5.7:
Initialize by E_0 = R(0) = 256. Then, γ_1 = R(1)/R(0) = 0.5, the 1st-order prediction filter is
a^(1) = [1; −γ_1] = [1; −0.5]
and E_1 = (1 − γ_1^2)E_0 = 192. Next, enlarge the 0th-order normal equations to the next size by padding a zero:
[256 128 −32; 128 256 128; −32 128 256][1; −0.5; 0] = [E_1; 0; Δ_1],  with Δ_1 = −32 − 0.5·128 = −96
Therefore,
γ_2 = Δ_1/E_1 = −96/192 = −0.5
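The order recursion of Problem 5.7 can be carried through order four with a compact Levinson–Durbin routine; the sketch below reproduces the γ's, E_4, and A_4(z) quoted in Problems 5.6–5.7 (lev, frwlev, and rlev themselves are the book's routines and are not shown here):

```python
def levinson(Rlags, p):
    # Levinson-Durbin recursion: returns the order-p prediction-error filter,
    # the final prediction error, and the reflection coefficients.
    a, E, gammas = [1.0], Rlags[0], []
    for k in range(1, p + 1):
        delta = sum(a[i] * Rlags[k - i] for i in range(k))
        gamma = delta / E
        a = a + [0.0]                                 # pad with a zero
        a = [a[i] - gamma * a[k - i] for i in range(k + 1)]
        E *= 1 - gamma**2
        gammas.append(gamma)
    return a, E, gammas

a4, E4, g = levinson([256, 128, -32, -16, 22], 4)
print(g)    # [0.5, -0.5, 0.5, -0.5]
print(E4)   # 81.0
print(a4)   # [1.0, -1.25, 1.3125, -1.0, 0.5]
```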
and E_2 = (1 − γ_2^2)E_1 = 144, and the 2nd-order prediction filter will be
a^(2) = [1; −0.5; 0] − γ_2[0; −0.5; 1] = [1; −0.75; 0.5]
Enlarging to the next size, we get
[256 128 −32 −16; 128 256 128 −32; −32 128 256 128; −16 −32 128 256][1; −0.75; 0.5; 0] = [E_2; 0; 0; Δ_2]
with Δ_2 = −16 + (−0.75)(−32) + (0.5)(128) = 72. Therefore,
γ_3 = Δ_2/E_2 = 72/144 = 0.5,  E_3 = (1 − γ_3^2)E_2 = 108
and the 3rd-order prediction filter will be
a^(3) = [1; −0.75; 0.5; 0] − γ_3[0; 0.5; −0.75; 1] = [1; −1; 0.875; −0.5]
Enlarging to order 4 by padding a zero, we get
[256 128 −32 −16 22; ···][1; −1; 0.875; −0.5; 0] = [E_3; 0; 0; 0; Δ_3]
with Δ_3 = 22 + (−1)(−16) + (0.875)(−32) + (−0.5)(128) = −54. Thus,
γ_4 = Δ_3/E_3 = −54/108 = −0.5,  E_4 = (1 − γ_4^2)E_3 = 81
The 4th-order filter will be
a^(4) = [1; −1; 0.875; −0.5; 0] − γ_4[0; −0.5; 0.875; −1; 1] = [1; −1.25; 1.3125; −1; 0.5]

Problem 5.8:
Sending the coefficients of A_4(z) and E_4 = 0.81 through the routine rlev gives the same matrix L as that of Problem 5.6, and the following autocorrelation lags:
{R(0), R(1), R(2), R(3), R(4)} = {2.56, 1.28, −0.32, −0.16, 0.22}

Problem 5.10:
The sample autocorrelation of the first sequence, y = [1, 1, 1, 1, 1], is
R̂(k) = Σ_n y_{n+k} y_n = 5 − k, for k = 0, 1, 2, 3, 4
or, {R̂(0), R̂(1), R̂(2), R̂(3), R̂(4)} = {5, 4, 3, 2, 1}. Sending these through the routine lev gives all the prediction filters up to order four, arranged into the matrix L:
L = [  1       0       0      0       0
      −0.8     1       0      0       0
       0.1111 −0.8889  1      0       0
       0.125   0      −0.875  1       0
       0.1429  0       0     −0.8571  1 ]
and the vector of prediction errors
{E_0, E_1, E_2, E_3, E_4} = {5, 1.8, 1.7778, 1.75, 1.7143}
The I/O equation of the 4th-order predictor will be
ŷ_n = 0.8571y_{n−1} − 0.1429y_{n−4}  ⇒  ŷ_5 = 0.8571 − 0.1429 = 0.7142
The second sequence, y = [1, 2, 3, 4, 5], has autocorrelation lags
{R̂(0), R̂(1), R̂(2), R̂(3), R̂(4)} = {55, 40, 26, 14, 5}
Sending these through lev gives
L = [  1       0       0       0       0
      −0.7273  1       0       0       0
       0.1193 −0.8140  1       0       0
       0.0937  0.0430 −0.8029  1       0
       0.0543  0.0501  0.0454 −0.7978  1 ]
with 4th-order prediction
ŷ_n = 0.7978y_{n−1} − 0.0454y_{n−2} − 0.0501y_{n−3}
− 0.0543y_{n−4}.

Problem 5.11:
For the first sequence, we have γ_1 = R(1)/R(0) = 0.5 and E_1 = (1 − γ_1^2)R(0) = 0.75. The maximum entropy extension keeps all the higher-order prediction-error filters equal to the first-order filter, that is, A(z) = 1 + a_1 z^{-1} = 1 − 0.5z^{-1}. The maximum entropy spectral density will be, then,
S(z) = E_1/[A(z)A(z^{-1})] = 0.75/[(1 − 0.5z^{-1})(1 − 0.5z)]
The inverse z-transform of this leads to R(k) = (0.5)^{|k|}. For the second sequence, we determine the first two reflection coefficients {γ_1, γ_2} = {0, 0.25}, the corresponding 2nd-order prediction filter A_2(z) = 1 − 0.25z^{-2}, and prediction error E_2 = 3.75. Therefore, the maximum entropy spectrum will be
S(z) = E_2/[A_2(z)A_2(z^{-1})] = 3.75/[(1 − 0.25z^{-2})(1 − 0.25z^2)]
Taking the inverse z-transform of this spectral density leads to
R(k) = 2(0.5)^{|k|} + 2(−0.5)^{|k|}

Problem 5.12:
We have from (5.3.24):
R(p+1) = −[a_{p+1,1}R(p) + ··· + a_{p+1,p}R(1) + a_{p+1,p+1}R(0)]
From the Levinson recursion, we have a_{p+1,p+1} = −γ_{p+1} and a_{p+1,i} = a_{p,i} − γ_{p+1}a_{p,p+1−i}, for i = 1, ..., p. Therefore,
R(p+1) = −Σ_{i=1}^{p} a_{p,i}R(p+1−i) + γ_{p+1}[R(0) + Σ_{i=1}^{p} a_{p,i}R(i)]
But we recognize
R(0) + Σ_{i=1}^{p} a_{p,i}R(i) = E_p
(the order-p gapped function at lag zero). Thus,
R(p+1) = −Σ_{i=1}^{p} a_{p,i}R(p+1−i) + γ_{p+1}E_p

Problem 5.13:
Initialize the split Levinson recursion by
f_0 = [2],  f_1 = [1; 1],  τ_0 = R(0) = 256,  γ_0 = 0
Next, compute τ_1 = [R(0), R(1)]f_1 = 256 + 128 = 384, and α_1 = τ_1/τ_0 = 384/256 = 1.5, and then γ_1 = −1 + α_1/(1 − γ_0) = −1 + 1.5 = 0.5. Then, update the symmetric polynomial:
f_2 = [f_1; 0] + [0; f_1] − α_1[0; f_0; 0] = [1; 2; 1] − 1.5[0; 2; 0] = [1; −1; 1]
Next, compute
τ_2 = [R(0), R(1), R(2)]f_2 = 256 − 128 − 32 = 96,  α_2 = τ_2/τ_1 = 96/384 = 0.25
and find γ_2 = −1 + α_2/(1 − γ_1) = −1 + 0.25/0.5 = −0.5. And then update
f_3 = [f_2; 0] + [0; f_2] − α_2[0; f_1; 0] = [1; 0; 0; 1] − 0.25[0; 1; 1; 0] = [1; −0.25; −0.25; 1]
Next, compute
τ_3 = [R(0), R(1), R(2), R(3)]f_3 = [256, 128, −32, −16][1; −0.25; −0.25; 1] = 216,  α_3 = τ_3/τ_2 = 216/96 = 2.25
and find γ_3 = −1 + α_3/(1 − γ_2) = −1 + 2.25/1.5 = 0.5. And then construct
f_4 = [f_3; 0] + [0; f_3] − α_3[0; f_2; 0] = [1; 0.75; −0.5; 0.75; 1] − 2.25[0; 1; −1; 1; 0] = [1; −1.5; 1.75; −1.5; 1]
Finally,
τ_4 = [R(0), R(1), R(2), R(3), R(4)]f_4 = [256, 128, −32, −16, 22][1; −1.5; 1.75; −1.5; 1] = 54,  α_4 = τ_4/τ_3 = 54/216 = 0.25
and find γ_4 = −1 + α_4/(1 − γ_3) = −1 + 0.25/0.5 = −0.5.

Problem 5.14:
The prediction-error filters of Problems 5.6–5.8 are all the same.
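The split-Levinson steps of Problem 5.13 can be verified with a short routine; this is a sketch in which the initialization f_0 = [2], τ_0 = R(0) follows the convention reconstructed above (it should be checked against the text's own conventions):

```python
R = [256, 128, -32, -16, 22]

f_prev, f = [2.0], [1.0, 1.0]       # f0(z) = 2, f1(z) = 1 + z^-1 (assumed init)
tau_prev = R[0]                      # tau0 = R(0) (assumed init)
gamma, gammas, taus = 0.0, [], []

for p in range(1, 5):
    tau = sum(R[i] * f[i] for i in range(p + 1))
    alpha = tau / tau_prev
    gamma = -1 + alpha / (1 - gamma)
    gammas.append(gamma)
    taus.append(tau)
    # f_{p+1}(z) = (1 + z^-1) f_p(z) - alpha_p z^-1 f_{p-1}(z)
    nxt = [0.0] * (p + 2)
    for i, cf in enumerate(f):
        nxt[i] += cf
        nxt[i + 1] += cf
    for i, cf in enumerate(f_prev):
        nxt[i + 1] -= alpha * cf
    f_prev, f, tau_prev = f, nxt, tau

print(gammas)  # [0.5, -0.5, 0.5, -0.5]
print(taus)    # [384.0, 96.0, 216.0, 54.0]
```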
The fourth-order polynomial is
A_4(z) = 1 − 1.25z^{-1} + 1.3125z^{-2} − z^{-3} + 0.5z^{-4}
The corresponding reflection coefficients are
{γ_1, γ_2, γ_3, γ_4} = {0.5, −0.5, 0.5, −0.5}
The analysis lattice filter and the synthesis lattice filter are then realized directly in terms of these four reflection coefficients (figures omitted).

Problem 5.15:
Sending the coefficients of the first polynomial through the routine bkwlev gives the reflection coefficients
{γ_1, γ_2, γ_3, γ_4} = {2, 0.3, 0.4, 0.5}
Because one of the reflection coefficients has magnitude greater than one, the polynomial will not be minimum-phase. The reflection coefficients of the second polynomial are
{γ_1, γ_2, γ_3, γ_4} = {0.2, 0.3, 0.4, 0.5}
Since all of them have magnitude less than one, the polynomial will be minimum-phase.

Problem 5.16:
For a gaussian density, we have
−ln p(y) = (M/2)ln(2π) + (1/2)ln(det R) + (1/2)y^T R^{-1} y
The entropy S is the expectation value S = E[−ln p]. Noting that y^T R^{-1} y = tr(yy^T R^{-1}), we have
E[y^T R^{-1} y] = tr(E[yy^T]R^{-1}) = tr(RR^{-1}) = tr(I) = M
It follows that
S = (1/2)ln(det R) + c
where c = (M/2)ln(2π) + M/2. Using the LU decomposition of R, that is, LRL^T = D, we have (det L)^2 det R = det D = Π_i E_i. But det L = 1 because L is unit lower triangular. Therefore,
det R = Π_i E_i
The errors E_i, i = p+1, ..., M are given in terms of the reflection coefficients γ_{p+1}, ..., γ_M by
E_i = (1 − γ_i^2)E_{i−1} ≤ E_{i−1} ≤ E_p,  i = p+1, ..., M
with the maximum value attained when all the new γ's are zero, that is, γ_i = 0 for i = p+1, ..., M. It is evident, then, that S_M (with S_p fixed) will be maximized by the maximum entropy (autoregressive) extension.

Problem 5.17:
a. By definition, the columns of L^T are the reversed prediction-error filter vectors, L^T = [b_0, b_1, ..., b_M]. It follows that
R^{-1} = L^T D^{-1} L = [b_0, ..., b_M] diag{E_0^{-1}, ..., E_M^{-1}} [b_0, ..., b_M]^T = Σ_{p=0}^{M} b_p b_p^T/E_p
b. With s(z) = [1, z^{-1}, ..., z^{-M}]^T, the prediction-error polynomial can be written as
A_p(z) = a_{p0} + a_{p1}z^{-1} + ··· + a_{pp}z^{-p} = s(z)^T a_p
and, similarly, the reverse polynomial is B_p(z) = s(z)^T b_p.
c. Using the results of part (b), we have for k(w) = R^{-1}s(w):
K(z, w) = s(z)^T k(w) = s(z)^T R^{-1} s(w) = s(z)^T R^{-1} R R^{-1} s(w) = k(z)^T R k(w)
d. First, note Js(z) = z^{-M}s(z^{-1}), where J is the reversing matrix. Using the fact that R and R^{-1} are invariant under reversal, that is, JR^{-1}J = R^{-1}, we obtain
K(z^{-1}, w^{-1}) = s(z^{-1})^T R^{-1} s(w^{-1}) = z^M w^M s(z)^T JR^{-1}J s(w) = z^M w^M K(z, w)
e.
From part (a) it follows
K(z, w) = s(z)^T R^{-1} s(w) = Σ_{p=0}^{M} B_p(z)B_p(w)/E_p
where B_p(z) = s(z)^T b_p. Using part (d) and the fact that the polynomial B_p(z) is the reverse of A_p(z), that is, B_p(z) = z^{-p}A_p(z^{-1}), we find
K(z^{-1}, w^{-1}) = Σ_{p=0}^{M} z^p w^p A_p(z)A_p(w)/E_p  ⇒  K(z, w) = Σ_{p=0}^{M} z^{p−M} w^{p−M} A_p(z)A_p(w)/E_p

By definition, we have R_ij = R(i − j). Thus,
R_ij = R(i − j) = ∫_{−π}^{π} S(ω)e^{jω(i−j)} dω/2π  ⇒  R = ∫_{−π}^{π} S(ω)s*(ω)s(ω)^T dω/2π
where s(ω) ≡ s(z)|_{z=e^{jω}}. Using the above result and the definition k(z) = R^{-1}s(z), we find
k(z) = R^{-1}Rk(z) = ∫_{−π}^{π} S(ω)R^{-1}s*(ω)s(ω)^T k(z) dω/2π = ∫_{−π}^{π} S(ω)k*(ω)K(ω, z) dω/2π
Using the above result and the definition of K(z, w), we find the reproducing-kernel property
K(z, w) = s(z)^T R^{-1} s(w) = ∫_{−π}^{π} S(ω)K(z, ω)K*(w, ω) dω/2π

Problem 5.19:
(a) First, note that s_{p+1}(z) admits the decompositions
s_{p+1}(z) = [s_p(z); z^{-(p+1)}] = [1; z^{-1}s_p(z)]
Using the order-updating property (5.9.11) (see also (1.7.28)),
R_{p+1}^{-1} = [R_p^{-1}, 0; 0^T, 0] + (1/E_{p+1}) b_{p+1} b_{p+1}^T
it follows that k_{p+1}(w) = R_{p+1}^{-1}s_{p+1}(w) can be written as
k_{p+1}(w) = [k_p(w); 0] + (1/E_{p+1}) b_{p+1} B_{p+1}(w)
The second decomposition follows from the order-updating formula (5.9.16), or (1.7.35),
R_{p+1}^{-1} = [0, 0^T; 0, R_p^{-1}] + (1/E_{p+1}) a_{p+1} a_{p+1}^T
Then, we have
k_{p+1}(w) = [0; w^{-1}k_p(w)] + (1/E_{p+1}) a_{p+1} A_{p+1}(w)
The above equation can also be obtained by reversing the first one. To see this, note first that the reverse of k_p(w) is J_p k_p(w) = R_p^{-1}J_p s_p(w) = w^{-p}k_p(w^{-1}), where we used J_p R_p^{-1} J_p = R_p^{-1}. Then, reversing the first decomposition and using a_{p+1} = J_{p+1}b_{p+1} and A_{p+1}(w) = w^{-(p+1)}B_{p+1}(w^{-1}), we obtain the second.
(b) These recursions can be proved using the results of Problem 5.17(e), or they can be proved directly as follows:
K_{p+1}(z, w) = s_{p+1}(z)^T k_{p+1}(w) = [s_p(z)^T, z^{-(p+1)}]{[k_p(w); 0] + (1/E_{p+1})b_{p+1}B_{p+1}(w)}
⇒ K_{p+1}(z, w) = K_p(z, w) + (1/E_{p+1})B_{p+1}(z)B_{p+1}(w)
Similarly, using the second decomposition,
K_{p+1}(z, w) = [1, z^{-1}s_p(z)^T]{[0; w^{-1}k_p(w)] + (1/E_{p+1})a_{p+1}A_{p+1}(w)}
⇒ K_{p+1}(z, w) = z^{-1}w^{-1}K_p(z, w) + (1/E_{p+1})A_{p+1}(z)A_{p+1}(w)
(c) Multiplying the first of the above recursions by z^{-1}w^{-1} and subtracting to cancel the z^{-1}w^{-1}K_p(z, w) term, we obtain
(1 − z^{-1}w^{-1})K_{p+1}(z, w) = (1/E_{p+1})[A_{p+1}(z)A_{p+1}(w) − z^{-1}w^{-1}B_{p+1}(z)B_{p+1}(w)]
Similarly, if we subtract them to cancel the K_{p+1}(z, w) term and solve for K_p(z, w), we obtain the result of part (e).
(d) Because A_p(z) has real coefficients, we have A_p(z*) = A_p(z)*.
Then, applying part (c) with w = z*, we obtain
K_{p+1}(z, z*) = s_{p+1}(z)^† R_{p+1}^{-1} s_{p+1}(z) = [|A_{p+1}(z)|^2 − |z|^{-2}|B_{p+1}(z)|^2]/[(1 − |z|^{-2})E_{p+1}]
If z = z_i is a zero of A_{p+1}(z), that is, A_{p+1}(z_i) = 0, then we have
s(z_i)^† R^{-1} s(z_i) = |z_i|^{-2}|B_{p+1}(z_i)|^2/[(|z_i|^{-2} − 1)E_{p+1}]
Because R^{-1} is positive definite, the left-hand side will be positive. Thus, all the factors in the above equation are positive, and we obtain |z_i|^{-2} − 1 > 0, or |z_i|^2 < 1. Similar methods can be used to reach the same conclusion by working with the expression in part (e).

Problem 5.20:
Initialize the Schur recursion by
g_0(k) = g_0^R(k) = R(k) = {256, 128, −32, −16, 22}
Compute γ_1 = g_0(1)/g_0^R(0) = 128/256 = 0.5. Then, using the Schur recursions
g_{p+1}(k) = g_p(k) − γ_{p+1} g_p^R(k − 1),  g_{p+1}^R(k) = g_p^R(k − 1) − γ_{p+1} g_p(k)
construct the order-1 gapped functions for 1 ≤ k ≤ 4:
g_1 = {·, 0, −96, 0, 30},  g_1^R = {·, 192, 144, −24, −27}
Continuing in the same fashion,
γ_2 = g_1(2)/g_1^R(1) = −96/192 = −0.5,  γ_3 = g_2(3)/g_2^R(2) = 72/144 = 0.5,  γ_4 = g_3(4)/g_3^R(3) = −54/108 = −0.5
with E_p = g_p^R(p): E_1 = 192, E_2 = 144, E_3 = 108, E_4 = 81.

... |S_p(z)| < 1, = 1, > 1 according as |z| > 1, = 1, < 1. Because A_p(z) is a minimum-phase polynomial, it can be factored in the form
A_p(z) = Π_i (1 − a_i z^{-1})  ⇒  A_p^R(z) = Π_i (−a_i* + z^{-1})
with |a_i| < 1. Thus, |S_p(z)| is written as a product of factors as in part (a), and each factor has the required properties.

Problem 5.24:
First, find γ_3 = −S_3(∞) = −0.125. Then, apply the backward Schur recursion to find
S_2(z) = z[S_3(z) + γ_3]/[1 + γ_3 S_3(z)] = (0.1111 − 0.8889z^{-1} + z^{-2})/(1 − 0.8889z^{-1} + 0.1111z^{-2})
This gives γ_2 = −S_2(∞) = −0.1111, and
S_1(z) = z[S_2(z) + γ_2]/[1 + γ_2 S_2(z)] = (−0.8 + z^{-1})/(1 − 0.8z^{-1})
Thus, γ_1 = −S_1(∞) = 0.8, and S_0(z) = z[S_1(z) + γ_1]/[1 + γ_1 S_1(z)] = 1.

Problem 5.25:
The filtering equations for v_1(n) and v_2(n) are, in the z-domain,
V_1(z) = H_1(z)V(z)  and  V_2(z) = H_2(z)V(z)
where
H_1(z) = 1/(1 − a_1 z^{-1})  and  H_2(z) = 1/(1 − a_2 z^{-1})
Since s(n) is uncorrelated with v_2(n), it follows that
S_xy(z) = S_{v_1 v_2}(z) = H_1(z)H_2(z^{-1})S_vv(z) = H_1(z)H_2(z^{-1})
where we used S_vv(z) = σ_v^2 = 1. Similarly,
S_yy(z) = S_{v_2 v_2}(z) = H_2(z)H_2(z^{-1})
The cross-correlation is computed for k ≥ 0 by
R_xy(k) = ∮ S_xy(z) z^k dz/(2πjz)
and similarly for R_yy(k). The infinite-order Wiener filter is obtained by
H(z) = (1/(σ_ε^2 B(z)))[S_xy(z)/B(z^{-1})]_+,  with B(z) = H_2(z) and σ_ε^2 = σ_v^2 = 1
We find
[S_xy(z)/B(z^{-1})]_+ = [H_1(z)H_2(z^{-1})/H_2(z^{-1})]_+ = [H_1(z)]_+ = H_1(z)
which is already causal. Therefore,
H(z) = H_1(z)/H_2(z) = (1 − a_2 z^{-1})/(1 − a_1 z^{-1}) = 1 + (a_1 − a_2)z^{-1}/(1 − a_1 z^{-1})
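The Schur recursion of Problem 5.20 can be run to completion with a few lines; in the sketch below the array index equals the lag, and entries below the current order are simply unused:

```python
def schur(R, p):
    # Schur recursion: forward gapped function g and backward gapped function gr,
    # with reflection coefficients gamma_m = g(m)/gr(m-1), read off directly
    # from the autocorrelation lags (no predictor polynomials needed).
    N = len(R)
    g, gr = list(R), list(R)
    gammas = []
    for m in range(1, p + 1):
        gamma = g[m] / gr[m - 1]
        gammas.append(gamma)
        g_new, gr_new = [0.0] * N, [0.0] * N
        for k in range(1, N):
            g_new[k] = g[k] - gamma * gr[k - 1]
            gr_new[k] = gr[k - 1] - gamma * g[k]
        g, gr = g_new, gr_new
    return gammas, gr

gammas, gr = schur([256, 128, -32, -16, 22], 4)
print(gammas)   # [0.5, -0.5, 0.5, -0.5]
print(gr[4])    # 81.0, the final prediction error E_4
```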
having causal impulse response
h(n) = δ(n) + (a_1 − a_2)a_1^{n−1}u(n−1),  where u(n) = unit step
Note that the infinite-order Wiener filter causes exact cancellation of the v_1(n) component of x(n). Indeed, working with z-transforms, we find for the estimate of x(n):
X̂(z) = H(z)V_2(z) = [H_1(z)/H_2(z)]H_2(z)V(z) = H_1(z)V(z) = V_1(z)
that is, x̂(n) = v_1(n), and the estimation error becomes
e(n) = x(n) − x̂(n) = s(n) + v_1(n) − v_1(n) = s(n)
With the choice of parameters M = 4, a_1 = −0.5, a_2 = 0.8, we find for the first M + 1 = 5 values of the infinite-order impulse response
h = [1, −1.3, 0.65, −0.325, 0.1625]^T
Using the routine firw, we find for the 4th-order Wiener filter
h = [1, −1.3, 0.65, −0.325, 0.116]^T
The two h differ only in their last entry. The corresponding lattice weights are also produced by firw:
g = [0.257, 0.929, 0.464, 0.232, 0.116]^T

Problem 5.26:
The matrix inverse is easily verified by direct multiplication. For example, for M = 2, the autocorrelation matrix is
R = [1, a_1, a_1^2; a_1, 1, a_1; a_1^2, a_1, 1]
and we can verify explicitly
(1/(1 − a_1^2))[1, a_1, a_1^2; a_1, 1, a_1; a_1^2, a_1, 1][1, −a_1, 0; −a_1, 1 + a_1^2, −a_1; 0, −a_1, 1] = I
The optimum Mth-order Wiener filter is given by h = R^{-1}r, where the cross-correlation vector r has entries
r_k = R_xy(k) = a_2^k/(1 − a_1 a_2),  for k = 0, 1, ..., M
Using the expression for R^{-1}, we find for the first and last entries of h:
h_0 = r_0 − a_1 r_1 = 1,  h_M = ...
Note how h_M differs from the infinite-order case. For all the other values of k in the range 1 ≤ k ≤ M − 1, ... The index ranges 0 ≤ l ≤ M and −k ≤ l ≤ M − k combine into one, max(0, −k) ≤ l ≤ min(M, M − k).
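The five impulse-response values quoted in Problem 5.25 follow directly from h(n) = δ(n) + (a_1 − a_2)a_1^{n−1}u(n−1); a quick check:

```python
a1, a2 = -0.5, 0.8   # parameters of Problem 5.25
# causal impulse response of H(z) = (1 - a2 z^-1)/(1 - a1 z^-1)
h = [1.0] + [(a1 - a2) * a1 ** (n - 1) for n in range(1, 5)]
print(h)  # [1.0, -1.3, 0.65, -0.325, 0.1625]
```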
