8.5 Computing Selected Eigenvalues and Eigenvectors

8.5.1 Discussions on the Importance of the Largest and Smallest Eigenvalues

We have seen before that in several applications, all we need is a few of the largest or smallest eigenvalues and the corresponding eigenvectors. For example, recall that in the buckling problem it is the smallest eigenvalue that is the most important one. In vibration analysis of structures, a common engineering practice is to compute the first few smallest eigenvalues (frequencies) and the corresponding eigenvectors (modes), because it has been seen in practice that the larger eigenvalues and eigenvectors contribute very little to the total response of the system. Similar remarks hold in the case of control problems modeled by systems of second-order differential equations arising in the finite-element-generated reduced-order models of large flexible space structures (Inman 1989). In statistical applications, such as principal component analysis, only the first few largest eigenvalues are computed. There are other applications where only the dominant and subdominant eigenvalues and the corresponding eigenvectors play an important role; see Luenberger (1979).

8.5.2 The Role of Dominant Eigenvalues and Eigenvectors in Dynamic Systems

Let's discuss briefly the role of dominant eigenvalues and eigenvectors in the context of dynamic systems. Consider the homogeneous discrete-time system

    x_{k+1} = A x_k,    k = 0, 1, 2, ...

Let λ_1 be the dominant eigenvalue of A; that is, |λ_1| > |λ_2| ≥ ... ≥ |λ_n|, where λ_1, λ_2, ..., λ_n are the eigenvalues of A. Suppose that A has a set of independent eigenvectors v_1, ..., v_n. Then the state x_k at any time k > 0 is given by

    x_k = α_1 λ_1^k v_1 + α_2 λ_2^k v_2 + ... + α_n λ_n^k v_n,

where x_0 = α_1 v_1 + ... + α_n v_n. Because |λ_1|^k ≫ |λ_i|^k for large values of k, i = 2, 3, ..., n, it follows that

    |α_1 λ_1^k| ≫ |α_i λ_i^k|,    i = 2, ..., n,

provided that α_1 ≠ 0. This means that for large values of k, the state vector x_k will approach the direction of the vector v_1 corresponding to the dominant eigenvalue λ_1. Furthermore, the rate at which the state vector approaches this direction is determined by the ratio of the second dominant eigenvalue to the first: |λ_2/λ_1|. (A proof is presented later.)

In the case α_1 = 0, the second dominant eigenvalue λ_2 and the corresponding eigenvector assume the role of the first dominant eigenpair. Similar conclusions hold for the continuous-time system ẋ(t) = A x(t). For details, see Luenberger (1979).

In summary, the long-term behavior of a homogeneous dynamical system can essentially be predicted from just the first and second dominant eigenvalues of the system matrix and the corresponding eigenvectors.
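To see this alignment numerically, here is a minimal sketch in Python with NumPy (our illustration, not part of the text; the 2 x 2 matrix is an arbitrary example with eigenvalues 2 and 1):

    import numpy as np

    # Symmetric test matrix with eigenvalues 2 and 1;
    # lambda_1 = 2 is dominant, with eigenvector (1, 1)/sqrt(2).
    A = np.array([[1.5, 0.5],
                  [0.5, 1.5]])
    x = np.array([1.0, 0.0])        # initial state x_0 (so alpha_1 != 0)

    for k in range(25):             # iterate x_{k+1} = A x_k
        x = A @ x
        x = x / np.linalg.norm(x)   # rescale: only the direction matters

    print(x)                        # approx. [0.7071, 0.7071]

The error in the direction shrinks like |λ_2/λ_1|^k = (1/2)^k, in agreement with the discussion above.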
The second dominant eigenpair is particularly important in the case α_1 = 0. In this case, the long-term behavior of the system is determined by this pair.

An Example: The Population Study

Let's take the case of a population system to illustrate this. It is well known (Luenberger 1979, 170) that such a system can be modeled by

    p_{k+1} = A p_k,    k = 0, 1, 2, ...,

where p_k is the population vector. If the dominant eigenvalue λ_1 of the matrix A is less than 1 (that is, |λ_1| < 1), then it follows from

    p_k = α_1 λ_1^k v_1 + α_2 λ_2^k v_2 + ... + α_n λ_n^k v_n

that the population decreases to zero as k becomes large. Similarly, if |λ_1| > 1, then there is long-term growth in the population. In the latter case the original population approaches a final distribution that is defined by the eigenvector of the dominant eigenvalue. Moreover, it is the second dominant eigenvalue of A that determines how fast the original population distribution approaches the final distribution. Finally, if the dominant eigenvalue is 1, then over the long term there is neither growth nor decay in the population.

8.5.3 The Power Method, the Inverse Iteration, and the Rayleigh Quotient Iteration

In this section we briefly describe two well-known classical methods for finding the dominant eigenvalues and the corresponding eigenvectors of a matrix. The methods are particularly suitable for sparse matrices, because they rely on matrix-vector multiplications only.

The Power Method

The power method is frequently used to find the dominant eigenvalue and the corresponding eigenvector of a matrix. It is so named because it is based on the implicit construction of the powers of A.

Let the eigenvalues λ_1, λ_2, ..., λ_n of A be such that

    |λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|;

that is, λ_1 is the dominant eigenvalue of A. Let v_1 be the corresponding eigenvector. Let max(g) denote the element of maximum modulus of the vector g. Assume that A is diagonalizable.

ALGORITHM 8.5.1 Power Method

Step 1: Choose x_0.
Step 2: For k = 1, 2, 3, ... do until convergence:
    x̂_k = A x_{k-1}
    x_k = x̂_k / max(x̂_k)

THEOREM 8.5.1 max(x̂_k) → λ_1, and {x_k} → cv_1, a multiple of v_1, as k → ∞.

Proof. From the construction above, we have

    x_k = A^k x_0 / max(A^k x_0).

Since A is diagonalizable, the eigenvectors v_1 through v_n associated with λ_1 through λ_n can be chosen to be linearly independent. We can then write x_0 = α_1 v_1 + α_2 v_2 + ... + α_n v_n, with α_1 ≠ 0. So,

    A^k x_0 = α_1 λ_1^k v_1 + α_2 λ_2^k v_2 + ... + α_n λ_n^k v_n
            = λ_1^k [ α_1 v_1 + α_2 (λ_2/λ_1)^k v_2 + ... + α_n (λ_n/λ_1)^k v_n ].

Because λ_1 is the dominant eigenvalue,

    (λ_i/λ_1)^k → 0 as k → ∞,    i = 2, 3, ..., n.

Thus x_k = A^k x_0 / max(A^k x_0) → cv_1, where c is a constant, and {max(x̂_k)} → λ_1. ∎

Remarks: We have derived the power method under two constraints:

1. α_1 ≠ 0.
2. λ_1 is the only dominant eigenvalue.

The first constraint (α_1 ≠ 0) is not really a serious practical constraint, because after a few iterations, round-off errors will almost always introduce a component along v_1.

As far as the second constraint is concerned, we note that the method still converges when the matrix A has more than one dominant eigenvalue. For example, let λ_1 = λ_2 = ... = λ_r, with |λ_r| > |λ_{r+1}| ≥ ... ≥ |λ_n|, and let there be r independent eigenvectors v_1, ..., v_r associated with λ_1, ..., λ_r. Then we have

    A^k x_0 = λ_1^k [ α_1 v_1 + ... + α_r v_r + Σ_{i=r+1}^n α_i (λ_i/λ_1)^k v_i ]
            ≈ λ_1^k (α_1 v_1 + ... + α_r v_r)

(because (λ_i/λ_1)^k is small for large values of k, i = r + 1, ..., n). This shows that in this case the power method converges to some vector in the subspace spanned by v_1, ..., v_r.
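Algorithm 8.5.1 translates directly into a few lines of Python/NumPy. The following sketch is ours (the function names, tolerance, and iteration cap are illustrative choices, not from the text); it is reused in later sketches in this section:

    import numpy as np

    def max_entry(v):
        # the element of maximum modulus of v, kept with its sign;
        # this plays the role of max(v) in Algorithm 8.5.1
        return v[np.argmax(np.abs(v))]

    def power_method(A, x0, tol=1e-6, maxit=100):
        """Algorithm 8.5.1: the power method with max-element scaling."""
        x, lam = x0.astype(float), 0.0
        for _ in range(maxit):
            xhat = A @ x                  # xhat_k = A x_{k-1}
            lam_new = max_entry(xhat)     # max(xhat_k) -> lambda_1
            x = xhat / lam_new            # x_k = xhat_k / max(xhat_k)
            if abs(lam_new - lam) < tol * abs(lam_new):
                break
            lam = lam_new
        return lam_new, x

Applied to the matrix of Example 8.5.1 below with x_0 = (1, 1, 1)^T, power_method returns approximately 9.6235 and a multiple of the dominant eigenvector.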
Example 8.5.1

    A = ( 1 2 3
          2 3 4
          3 4 5 ),    x_0 = (1, 1, 1)^T.

The eigenvalues of A are 0, −0.6235, and 9.6235. The normalized eigenvector corresponding to the largest eigenvalue 9.6235 is (0.3851, 0.5595, 0.7339)^T. After three iterations of Algorithm 8.5.1 we have

    x̂_3 = (5.0526, 7.3421, 9.6316)^T,    max(x̂_3) = 9.6316,
    x_3 = x̂_3 / max(x̂_3) = (0.5246, 0.7623, 1.0000)^T.

Thus {max(x̂_k)} is converging toward the largest eigenvalue, 9.6235, and {x_k} is converging toward the direction of the eigenvector associated with this eigenvalue. (Note that the normalized dominant eigenvector (0.3851, 0.5595, 0.7339)^T is a scalar multiple of x_3.)

MATCOM Note: Algorithm 8.5.1 has been implemented in the MATCOM program POWER.

Convergence of the Power Method

The rate of convergence of the power method is determined by the ratio |λ_2/λ_1|. This is easily seen from the following. From the proof of the power method (Algorithm 8.5.1) we have

    ‖A^k x_0 / λ_1^k − α_1 v_1‖ = ‖ Σ_{i=2}^n α_i (λ_i/λ_1)^k v_i ‖
                                ≤ |λ_2/λ_1|^k ( |α_2| ‖v_2‖ + ... + |α_n| ‖v_n‖ )

(because |λ_i/λ_1| ≤ |λ_2/λ_1| for i = 3, ..., n). This shows that the rate at which x_k approaches the direction of v_1 depends upon how fast |λ_2/λ_1|^k goes to zero. The error at each step decreases roughly by the factor |λ_2/λ_1|: if λ_2 is close to λ_1 in magnitude, the convergence will be very slow; if this ratio is small, the convergence will be fast.

The Power Method with Shift

In some cases, convergence can be significantly improved by using a suitable shift. If σ is a shift such that λ_1 − σ is the dominant eigenvalue of A − σI, and if the power method is applied to the shifted matrix A − σI, then the rate of convergence will be determined by the ratio

    |λ_2 − σ| / |λ_1 − σ|

instead of |λ_2/λ_1|. (Note that by shifting the matrix A by σ, the eigenvalues are shifted by σ, but the eigenvectors remain unaltered.) By choosing σ appropriately, in some cases this ratio can be made significantly smaller than |λ_2/λ_1|, thus yielding faster convergence. An optimal choice (Wilkinson 1965, 572) of σ, assuming that the eigenvalues are real, is σ = (λ_2 + λ_n)/2. This simple choice of σ sometimes indeed yields very fast convergence, but there are many common examples where the convergence can still be slow with this choice. Consider a 20 × 20 matrix A with the eigenvalues 20, 19, ..., 2, 1. The choice σ = (19 + 1)/2 = 10 yields the ratio

    |λ_2 − σ| / |λ_1 − σ| = 9/10,

still close to 1. Therefore, the rate of convergence will still be slow. Furthermore, the above choice of σ is not useful in practice, because the eigenvalues are not known a priori.

The Inverse Power Method / Inverse Iteration

The following iterative method, known as the inverse iteration, is an effective method for computing an eigenvector when a reasonably good approximation to an eigenvalue is known.

ALGORITHM 8.5.2 Inverse Iteration

Let σ be an approximation to a real eigenvalue λ_1 such that |λ_1 − σ| ≪ |λ_i − σ| (i ≠ 1); that is, σ is much closer to λ_1 than to the other eigenvalues.

Step 1: Choose x_0.
Step 2: For k = 1, 2, 3, ... do
    Solve (A − σI) x̂_k = x_{k−1} for x̂_k, using Gaussian elimination with partial pivoting.
    x_k = x̂_k / max(x̂_k)
    Stop if ‖(A − σI) x_k‖ ≤ c u ‖A‖,

where c is a constant of order unity and u is the unit round-off (machine precision).
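A Python sketch of Algorithm 8.5.2 follows (ours, for illustration). Since the shift is fixed, A − σI is factored once by Gaussian elimination with partial pivoting and the factors are reused at every step; the tolerance 1e-8 below is an illustrative stand-in for the cu‖A‖ threshold of the algorithm, and max_entry is the helper from the power-method sketch above:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_iteration(A, sigma, x0, maxit=20, tol=1e-8):
        """Algorithm 8.5.2: inverse iteration with a fixed shift sigma."""
        n = A.shape[0]
        lu, piv = lu_factor(A - sigma * np.eye(n))   # LU with partial pivoting, once
        x = x0.astype(float)
        for _ in range(maxit):
            xhat = lu_solve((lu, piv), x)            # solve (A - sigma I) xhat = x
            x = xhat / max_entry(xhat)               # scale by the max element
            if np.linalg.norm((A - sigma * np.eye(n)) @ x, np.inf) \
                    <= tol * np.linalg.norm(A, np.inf):
                break
        return x

Note that lu_factor performs exactly the partial-pivoting elimination the algorithm calls for, and each iteration then costs only two triangular solves.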
Remark: Note that the use of the above stopping criterion will make the pair (σ, x_k) an exact eigenpair of a nearby matrix.

THEOREM 8.5.2 The sequence {x_k} converges to the direction of the eigenvector corresponding to λ_1.

Remark: Note that inverse iteration is simply the power method applied to (A − σI)^{−1}, which is why it is also known as the inverse power method.

Proof. The eigenvalues of (A − σI)^{−1} are (λ_1 − σ)^{−1}, (λ_2 − σ)^{−1}, ..., (λ_n − σ)^{−1}, and the eigenvectors v_1, ..., v_n are the same as those of A. Thus, as in the case of the power method, we can write

    x_k = c (λ_1 − σ)^{−k} [ α_1 v_1 + Σ_{i=2}^n α_i ( (λ_1 − σ)/(λ_i − σ) )^k v_i ],

where c is a scaling constant. Because λ_1 is closer to σ than any other eigenvalue, the first term on the right-hand side is the dominating one, and therefore {x_k} converges to the direction of v_1, which is the direction we are trying to compute. ∎

An Illustration

Let us illustrate these ideas with k = 1. Suppose that x_0 = α_1 v_1 + α_2 v_2 + ... + α_n v_n. Then

    x̂_1 = (A − σI)^{−1} x_0
        = α_1 (λ_1 − σ)^{−1} v_1 + α_2 (λ_2 − σ)^{−1} v_2 + ... + α_n (λ_n − σ)^{−1} v_n.

Because λ_1 is closer to σ than any other eigenvalue, the coefficient of the first term in the expansion, 1/(λ_1 − σ), is the dominant (largest) one. Thus, x̂_1 is roughly a multiple of v_1, which is what we desire.

Numerical Stability of the Inverse Iteration

At first sight the inverse iteration seems to be a dangerous procedure, because if σ is near λ_1, the matrix (A − σI) is obviously ill-conditioned. Consequently, this ill-conditioning might be expected to contaminate the computed approximations of the eigenvector. Fortunately, in practice the ill-conditioning of the matrix (A − σI) is exactly what we want. The error at each iteration grows toward the direction of the eigenvector, and it is the direction of the eigenvector in which we are interested. Wilkinson has remarked that in practice x̂_k is remarkably close to the solution of

    (A − σI + E) x̂_k = x_{k−1},

where E is small. For details see Wilkinson (1965, 620–621). The iterated vectors do indeed converge eventually to the eigenvectors of A + E.

Example 8.5.2

For the matrix A of Example 8.5.1, with a shift σ close to the dominant eigenvalue 9.6235, inverse iteration produced the following iterates (scaling each x̂_k here by its 2-norm):

    x_1 = (0.3714, 0.5571, 0.7428)^T,
    x̂_2 = (0.6190, 0.8975, 1.1710)^T,    x_2 = x̂_2 / ‖x̂_2‖_2 = (0.3860, 0.5597, 0.7334)^T,
    x̂_3 = (0.6176, 0.8974, 1.1772)^T,    x_3 = x̂_3 / ‖x̂_3‖_2 = (0.3850, 0.5595, 0.7340)^T,
    x̂_4 = (0.6176, 0.8974, 1.1772)^T,    x_4 = x̂_4 / ‖x̂_4‖_2 = (0.3850, 0.5595, 0.7340)^T,
    x̂_5 = (0.6177, 0.8974, 1.1772)^T,    x_5 = x̂_5 / ‖x̂_5‖_2 = (0.3851, 0.5595, 0.7339)^T,

so {x_k} converges to the normalized dominant eigenvector.

Remark: In the preceding example we have used the 2-norm ‖x̂_k‖_2 as the scaling for the vector x_k to emphasize that the scaling is immaterial, because we are working toward the direction of the eigenvector.

MATCOM Note: Algorithm 8.5.2 has been implemented in the MATCOM program INVITR.

Choosing the Initial Vector x_0

To choose the initial vector x_0, we can run a few iterations of the power method and then switch to the inverse iteration, with the last vector generated by the power method as the initial vector x_0 in the inverse iteration. Wilkinson (1965, 627) has stated that if x_0 is assumed to be such that

    L e = P x_0,

where P is the permutation matrix satisfying P(A − σI) = LU and e is the vector of unit elements, then two iterations are usually adequate, provided that σ is a good approximation to the eigenvalue.

Note: If x_0 is chosen as described, then the computation of x̂_1 involves only the solution of the triangular system

    U x̂_1 = e.
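The following short sketch (ours) carries out this special first step on the matrix and shift of Example 8.5.3 below. One caveat on conventions: SciPy's lu returns factors with A − σI = PLU, so the P of the text corresponds to the transpose of SciPy's P:

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[1.0, 2, 3],
                  [2.0, 3, 4],
                  [3.0, 4, 5]])
    sigma = 9.1
    P, L, U = lu(A - sigma * np.eye(3))   # SciPy convention: A - sigma*I = P L U
    e = np.ones(3)

    x0 = P @ (L @ e)                # Wilkinson's choice L e = P^T x0, rearranged
    xhat1 = solve_triangular(U, e)  # first step reduces to one triangular solve,
                                    # since (A - sigma*I) xhat1 = P L U xhat1 = P L e = x0
    x1 = xhat1 / xhat1[np.argmax(np.abs(xhat1))]
    print(x1)                       # approx. (0.4083, 0.6637, 1.0000); cf. Example 8.5.3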
Example 8.5.3

For the matrix

    A = ( 1 2 3
          2 3 4
          3 4 5 ),

the eigenvalues are 0, −0.6235, and 9.6235. Choose σ = 9.1. Then

    A − σI = ( −8.1   2     3
                2    −6.1   4
                3     4    −4.1 ),

and Gaussian elimination with partial pivoting gives P(A − σI) = LU with P = I,

    L = (  1       0       0
          −0.2469  1       0
          −0.3704 −0.8456  1 ),    U = ( −8.1  2       3
                                          0   −5.6062  4.7407
                                          0    0       1.0200 ).

k = 1: Solving U x̂_1 = e = (1, 1, 1)^T gives

    x̂_1 = (0.4003, 0.6507, 0.9804)^T,    x_1 = x̂_1 / max(x̂_1) = (0.4083, 0.6637, 1.0000)^T.

k = 2:

    x̂_2 = (A − σI)^{−1} x_1 = (0.9367, 1.3540, 1.7624)^T,
    x_2 = x̂_2 / max(x̂_2) = (0.5315, 0.7683, 1.0000)^T,

which is about 1.36 times the normalized eigenvector. The normalized eigenvector, correct to four digits, is

    (0.3851, 0.5595, 0.7339)^T.

The Rayleigh Quotient

THEOREM 8.5.3 Let A be a symmetric matrix and let x ∈ R^n be a reasonably good approximation to an eigenvector. Then the quotient

    R_q = x^T A x / x^T x

is a good approximation to the eigenvalue λ for which x is the corresponding eigenvector.

Proof. Because A is symmetric, by Theorem 8.2.8 there exists a set of orthonormal eigenvectors v_1, v_2, ..., v_n. Therefore, we can write

    x = c_1 v_1 + c_2 v_2 + ... + c_n v_n.

Then, since A v_i = λ_i v_i, and noting that v_i^T v_i = 1 and v_i^T v_j = 0 for i ≠ j, we have

    R_q = x^T A x / x^T x = (c_1^2 λ_1 + c_2^2 λ_2 + ... + c_n^2 λ_n) / (c_1^2 + c_2^2 + ... + c_n^2).

Because of our assumption that x is a good approximation to the eigenvector (say v_1), c_1 is much larger than the other c_i, i = 2, ..., n. Thus the quotient above is dominated by the term c_1^2 λ_1 / c_1^2, which means that R_q is close to λ_1. ∎

In Theorem 8.5.3 we have defined the quotient R_q for an approximate eigenvector; the quotient is defined for any nonzero vector x, and it is the least squares solution of the problem of minimizing ‖Ax − λx‖_2 in the unknown λ. Note that the normal equation for this least squares problem is given by

    (x^T x) λ = x^T A x.

DEFINITION 8.5.1 The quotient

    R_q = x^T A x / x^T x

is called the Rayleigh quotient of the vector x.

Example 8.5.4

Let

    A = ( 1 2
          2 3 ),    x = (1, −0.5)^T.

Then the Rayleigh quotient

    R_q = x^T A x / x^T x = −0.25 / 1.25 = −0.2

is a good approximation to the eigenvalue −0.2361 of A.

Note: It can be shown that for a symmetric matrix A, the minimum and the maximum of the Rayleigh quotient R_q over all nonzero vectors x are the smallest and the largest eigenvalue of A, respectively.

Rayleigh Quotient Iteration

The idea of approximating an eigenvalue by a Rayleigh quotient can be combined with inverse iteration to compute successive approximations of an eigenvalue and the corresponding eigenvector in an iterative fashion, known as Rayleigh quotient iteration and defined as follows. Let N be the maximum number of iterations to be performed, and let x_0 be an initial approximation to an eigenvector.

ALGORITHM 8.5.3 Rayleigh Quotient Iteration

For k = 0, 1, 2, ..., do
Step 1: Compute σ_k = x_k^T A x_k / x_k^T x_k.
Step 2: Solve for x̂_{k+1}:  (A − σ_k I) x̂_{k+1} = x_k.
Step 3: Compute x_{k+1} = x̂_{k+1} / max(x̂_{k+1}).
Step 4: Stop if the pair (σ_k, x_{k+1}) is an acceptable eigenvalue–eigenvector pair, or if k > N.
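In code, the iteration is a direct transcription of the four steps. This Python sketch (ours; the stopping tolerance is an illustrative choice) assumes A symmetric and reuses max_entry from the power-method sketch:

    import numpy as np

    def rayleigh_quotient_iteration(A, x0, N=20, tol=1e-10):
        """Algorithm 8.5.3: Rayleigh quotient iteration for symmetric A."""
        n = A.shape[0]
        x = x0.astype(float)
        for k in range(N):
            sigma = (x @ A @ x) / (x @ x)       # Step 1: Rayleigh quotient
            try:
                xhat = np.linalg.solve(A - sigma * np.eye(n), x)   # Step 2
            except np.linalg.LinAlgError:
                break                           # shifted matrix numerically singular:
                                                # sigma is an eigenvalue to working precision
            x = xhat / max_entry(xhat)          # Step 3
            if np.linalg.norm(A @ x - sigma * x, np.inf) \
                    < tol * np.linalg.norm(A, np.inf):
                break                           # Step 4: acceptable eigenpair
        return sigma, x

With x_0 taken from three steps of the power method on the matrix of Example 8.5.1, sigma reaches 9.6235 after two iterations, in agreement with Example 8.5.5 below.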
Flop count and convergence of the Rayleigh quotient iteration. In Step 2 of Algorithm 8.5.3, we need to solve a new system for each value of k. This makes the method quite expensive compared to the cost of inverse iteration—unless, of course, the symmetric matrix A is tridiagonal, in which case (as we have seen in Chapter 6) the cost is only O(n) flops per iteration. But the rate of convergence of the Rayleigh quotient iteration is quite fast: it is cubic; that is, the number of accurate figures triples per iteration. For details, see Parlett (1980b).

Choice of x_0. As for choosing an initial vector x_0, perhaps the best thing to do is to use the direct power method itself a few times and then use the last approximation.

Remark: Rayleigh quotient iteration can also be defined in the nonsymmetric case, where one finds both left and right eigenvectors at each step. We omit the discussion of the nonsymmetric case here and refer the reader to Wilkinson (1965, 636); see also Parlett (1974).

Example 8.5.5

For the matrix A of Example 8.5.1, let us take

    x_0 = (0.5246, 0.7622, 1.000)^T,

which is obtained after 3 iterations of the power method. Then we have the following.

k = 0:

    σ_0 = x_0^T A x_0 / x_0^T x_0 = 9.6235,
    x̂_1 = (A − σ_0 I)^{−1} x_0, a multiple of (1.0000, 1.4529, 1.9059)^T,
    x_1 = x̂_1 / max(x̂_1) = (0.5247, 0.7623, 1.0000)^T.

k = 1:

    σ_1 = x_1^T A x_1 / x_1^T x_1 = 9.6235.

The normalized eigenvector associated with 9.6235 is

    (0.3851, 0.5595, 0.7339)^T.

Note that x_1 is a scalar multiple of this eigenvector, correct to four digits. Thus, two iterations were sufficient.

MATCOM Note: Algorithm 8.5.3 has been implemented in the MATCOM program RAYQOT.

8.5.4 Computing the Subdominant Eigenvalues and Eigenvectors: Deflation

Once the dominant eigenvalue λ_1 and the corresponding eigenvector v_1 have been computed, the next dominant eigenvalue λ_2 can be computed by using deflation. The basic idea behind deflation is to replace the original matrix by another matrix of lesser dimension, using a computed eigenpair, such that the deflated matrix has the same eigenvalues as the original one except the one that is used to deflate it.

Hotelling Deflation

The Hotelling deflation is a process that replaces the original matrix A = A_1 by a matrix A_2 of the same order such that all the eigenvalues of A_2 are the same as those of A_1 except the one used to construct A_2 from A_1.

Remarks: Although the Hotelling deflation is commonly recommended in the engineering literature as a practical process for computing a selected number of eigenvalues and eigenvectors, and is well suited to large and sparse matrices, the method, both in the symmetric and nonsymmetric cases, is numerically unstable (Wilkinson 1965, 585).

Householder Deflation

We shall now construct a deflation using a similarity transformation on A with Householder matrices. Of course, the technique will work with any similarity transformation; however, we will use Householder matrices for reasons of numerical stability. The method is based upon the following result.

THEOREM 8.5.4 Let (λ_1, v_1) be an eigenpair of A, and let H be a Householder matrix such that Hv_1 is a multiple of e_1, say Hv_1 = k e_1. Then

    HAH = ( λ_1  b^T
             0   A_2 ),

where A_2 is (n − 1) × (n − 1), and the eigenvalues of A_2 are the same as those of A except for λ_1. In particular, if |λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|, then the dominant eigenvalue of A_2 is λ_2, the second dominant (subdominant) eigenvalue of A.

Proof. From A v_1 = λ_1 v_1, we have

    HAH(Hv_1) = λ_1 Hv_1    (since H^2 = I);

that is,

    HAH(k e_1) = λ_1 (k e_1)    (since Hv_1 = k e_1),

or HAH e_1 = λ_1 e_1. (This means that the first column of HAH is λ_1 times the first column of the identity matrix.) Therefore, HAH must have the form

    HAH = ( λ_1  b^T
             0   A_2 ).

Because det(HAH − λI) = (λ_1 − λ) det(A_2 − λI), it follows that the eigenvalues of A_2 are the same as those of A minus λ_1. Moreover, if |λ_1| > |λ_2| > |λ_3| ≥ ... ≥ |λ_n|, the dominant eigenvalue of A_2 is λ_2, which is the second dominant eigenvalue of A. ∎
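Theorem 8.5.4 presupposes a Householder matrix H with Hv_1 a multiple of e_1. A standard construction in Python (our sketch) is:

    import numpy as np

    def householder(v):
        """Householder matrix H = I - 2 u u^T / (u^T u) such that
        H v = -sign(v_1) ||v||_2 e_1, i.e., H v is a multiple of e_1."""
        u = v.astype(float).copy()
        u[0] += np.copysign(np.linalg.norm(v), v[0])   # u = v + sign(v_1) ||v|| e_1
        return np.eye(len(v)) - 2.0 * np.outer(u, u) / (u @ u)

For the eigenvector v_1 of Example 8.5.6 below, householder(v1) reproduces the matrix H shown there, and H @ v1 is a multiple of e_1, as the theorem requires.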
ALGORITHM 8.5.4 Householder Deflation to Compute the Subdominant Eigenvalue

Step 1: Compute the dominant eigenvalue λ_1 and the corresponding eigenvector v_1, using the power method and the inverse power method.
Step 2: Find a Householder matrix H such that Hv_1 is a multiple of e_1.
Step 3: Compute HAH.
Step 4: Discard the first row and the first column of HAH, and find the dominant eigenvalue of the (n − 1) × (n − 1) matrix A_2 thus obtained.

Example 8.5.6

    A = ( 0.2190 0.6793 0.5194
          0.0470 0.9347 0.8310
          0.6789 0.3835 0.0346 ).

The eigenvalues of A are 0.0018, −0.3083, and 1.4947.

Step 1:

    λ_1 = 1.4947,    v_1 = (0.5552, 0.7039, 0.4430)^T.

Step 2:

    H = ( −0.5552 −0.7039 −0.4430
          −0.7039  0.6814 −0.2005
          −0.4430 −0.2005  0.8738 ),    Hv_1 = (−1, 0, 0)^T = −e_1.

Step 3:

    HAH = ( 1.4947 −0.3223 −0.3331
            0       0.1987  0.2672
            0      −0.3736 −0.5052 ).

Step 4:

    A_2 = (  0.1987  0.2672
            −0.3736 −0.5052 ).

The dominant eigenvalue of A_2 is −0.3083, which is the subdominant eigenvalue of A.

Computing the Subdominant Eigenvector

Once the subdominant eigenvalue λ_2 has been computed using the above procedure, the corresponding eigenvector v_2 can be found from the inverse iteration.

Computing the Other Largest Eigenvalues and Eigenvectors

Once the pair (λ_2, v_2) is computed, the matrix A_2 can be deflated using this pair to compute the pair (λ_3, v_3). From the pair (λ_3, v_3), we then compute the pair (λ_4, v_4), and so on. Thus, by repeated applications of the process, we can compute successively all the n eigenvalues and eigenvectors of A.

Remark: If more than a few eigenvalues and eigenvectors are needed, the QR iteration method, to be described a little later, should be used, because in that case the QR iteration will be more cost-effective, and it is a stable method.
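Putting Algorithm 8.5.4 together in Python (our sketch, reusing power_method and householder from the earlier sketches) on the matrix of Example 8.5.6:

    import numpy as np

    A = np.array([[0.2190, 0.6793, 0.5194],
                  [0.0470, 0.9347, 0.8310],
                  [0.6789, 0.3835, 0.0346]])

    lam1, v1 = power_method(A, np.ones(3))   # Step 1: dominant eigenpair
    H = householder(v1)                      # Step 2: H v1 = multiple of e1
    B = H @ A @ H                            # Step 3: similarity transformation
    A2 = B[1:, 1:]                           # Step 4: discard first row and column
    lam2, _ = power_method(A2, np.ones(2))   # dominant eigenvalue of A2

    print(lam1, lam2)                        # approx. 1.4947 and -0.3083

The subdominant eigenvector of A can then be recovered by inverse iteration on A with the shift lam2, as described above.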
Computing the Smallest Eigenvalues

It is easy to see that the power method applied to A^{−1} gives us the smallest eigenvalue in magnitude (the least dominant one) of A. Let A be nonsingular, and let the eigenvalues of A be ordered such that

    |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_{n−1}| > |λ_n| > 0.

Then the eigenvalues of A^{−1}, which are the reciprocals of the eigenvalues of A, are arranged as

    |1/λ_n| > |1/λ_{n−1}| ≥ ... ≥ |1/λ_1|;

that is, 1/λ_n is the dominant eigenvalue of A^{−1}. This fact suggests that the reciprocal of the smallest eigenvalue can be computed by applying the power method to A^{−1}. That is:

ALGORITHM 8.5.5 Computing the Smallest Eigenvalue in Magnitude

Step 1: Apply the power method to A^{−1}.
Step 2: Take the reciprocal of the eigenvalue obtained in Step 1.

Note: Because the power method is implemented by matrix-vector multiplications only, the inverse of A does not have to be computed explicitly. This statement is true because computing A^{−1}x, where x is a vector, is equivalent to solving the linear system Ay = x (see the sketch at the end of this section).

To compute the next least dominant eigenvalue (the second smallest eigenvalue in magnitude), we compute the smallest eigenvalue and the corresponding eigenvector, and then apply deflation; the inverse power method applied to the (n − 1) × (n − 1) deflated matrix then yields the next smallest eigenvalue of A, as shown earlier.

Example 8.5.7 applies Algorithm 8.5.5 to a small test matrix: the power method applied to A^{−1}, followed by taking the reciprocal, gave the smallest eigenvalue of A in magnitude, 0.1051.

Summary of the Process for Computing Selected Eigenvalues and Eigenvectors

To compute a selected number of the largest or smallest eigenvalues and the corresponding eigenvectors, a combination of the methods of this section can be used in the following sequence:

1. Use the power method to obtain a reasonable approximation to the dominant eigenvalue (largest in magnitude) and the corresponding eigenvector.
2. Use the inverse iteration, with the approximate eigenvalue (kept fixed) as the shift and the last power-method vector as the starting vector, to refine the eigenvector.
3. Apply deflation and repeat the process on the deflated matrix.
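Here is the sketch referred to above for Algorithm 8.5.5 (ours): the power method is run on A^{−1} by factoring the nonsingular A once and solving Ay = x at each step, so A^{−1} is never formed. It reuses max_entry from the power-method sketch:

    from scipy.linalg import lu_factor, lu_solve

    def smallest_eigenvalue(A, x0, tol=1e-8, maxit=100):
        """Algorithm 8.5.5: smallest eigenvalue of A in magnitude,
        via the power method on A^{-1} without forming the inverse."""
        lu, piv = lu_factor(A)            # factor A once, reuse every step
        x, mu = x0.astype(float), 0.0
        for _ in range(maxit):
            y = lu_solve((lu, piv), x)    # y = A^{-1} x, by solving A y = x
            mu_new = max_entry(y)         # -> dominant eigenvalue 1/lambda_n of A^{-1}
            x = y / mu_new
            if abs(mu_new - mu) < tol * abs(mu_new):
                break
            mu = mu_new
        return 1.0 / mu_new               # Step 2: reciprocal = lambda_n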
