ELEMENTS OF NUMERICAL ANALYSIS
Dr. Faiz Ahmad
Muhammad Afzal Rana

CONTENTS

CHAPTER 1  ERRORS IN COMPUTATION
1.1  Introduction
1.2  Errors and their sources
1.3  Approximate numbers and significant figures
1.4  Rounding of numbers and round-off errors
1.5  Truncation error
1.6  Absolute, relative, and percentage errors
1.7  The error of a sum
1.8  The error of a difference
1.9  The error of a product
1.10 The error of a quotient
1.11 The error of a power
1.12 The error of a root

CHAPTER 2  LINEAR OPERATORS

CHAPTER 3  EIGENVALUES AND EIGENVECTORS OF MATRICES

CHAPTER 4  INTERPOLATION WITH UNEQUALLY SPACED DATA
Introduction
Lagrange's formula
The error of the interpolation polynomial
Divided differences
Divided-difference table
Newton's interpolation polynomial

CHAPTER 5  INTERPOLATION WITH EQUALLY SPACED DATA
The difference table
Newton's forward and backward difference formulae
The Lozenge diagram
Gauss formulae
Stirling's interpolation formula
Bessel's interpolation formula
Everett's interpolation formula

CHAPTER 6  THE SOLUTION OF NONLINEAR EQUATIONS
Introduction
Bisection method
The method of false position or regula falsi method
Newton-Raphson method
Fixed point iteration
Order of convergence

CHAPTER 7  DIFFERENCE EQUATIONS

CHAPTER 8  SOLUTION OF SYSTEMS OF LINEAR EQUATIONS (DIRECT METHODS)
8.1  Introduction
8.2  Gauss's elimination method
8.3  LU decomposition
8.3.1  Doolittle's method
8.3.2  Crout's method
8.3.3  Cholesky's method

CHAPTER 9  SOLUTION OF SYSTEMS OF LINEAR EQUATIONS (INDIRECT METHODS)
9.1  Iterative methods
9.1.1  Jacobi iterative method
9.1.2  Gauss-Seidel iterative method
9.1.3  Successive over-relaxation
Residuals
Convergence of iterative methods
Ill-conditioned systems

CHAPTER 10  NUMERICAL DIFFERENTIATION
Differentiation formulae based on equally spaced data

CHAPTER 11  NUMERICAL INTEGRATION
Trapezoidal rule with error term
Simpson's rule with error term
Composite numerical integration
Closed quadrature formulae
Method of undetermined coefficients
Boundary value problems

APPENDIX A  Theorems
APPENDIX B


CHAPTER 1
ERRORS IN COMPUTATION

1.1 INTRODUCTION

Calculations with numbers, whether performed by hand or by machine, are subject to numerical errors. This form of error is unavoidable in numerical computation and should not be confused with errors arising from programming mistakes or from the misuse of algorithms. For example, a calculation may come out exactly, with an exact answer such as 8, but if we divide 1 by 7 the exact answer is a decimal fraction with an infinite number of figures. We have to be content with an approximation such as 0.143, which is in error by about 0.00014. We incurred this error because, in practice, only a finite number of significant figures can be carried: all computing devices, from pencil-and-paper to the largest computer, have this restriction. Typically a calculation is limited to eight or ten significant figures, although computers can handle a few more.
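As a small illustration (added here, not part of the original text), the restriction to finitely many significant figures can be seen on any computer; the short Python snippet below keeps three figures of 1/7 and prints the resulting error.

```python
# A tiny illustration (not from the book) of keeping only a few
# significant figures: 1/7 rounded to three figures, and the error.
exact = 1 / 7                         # 0.142857142857...
approx = round(exact, 3)              # keep three figures: 0.143
print(approx, abs(exact - approx))    # error of about 0.00014
```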
Whenever a calculation is performed there are many possible sources of error: inaccurate data and inaccurate formulae both affect the accuracy of the solution. It is therefore evident that the errors in a result are due to one or both of two sources: errors of the problem and its data, and errors of calculation. Errors of the first type cannot be removed by the computer, but those of the second type can largely be controlled. Thus, when an approximate number is used in place of an exact value in a computation, we accept the approximation and work to an accuracy consistent with the data; to spend more labour in search of a better approximation than the data warrant is pointless. The purpose of this chapter is to identify the sources of error, to study their effects on our computations, to describe methods relating to the estimation of errors, and to give methods for estimating the accuracy of the results obtained.

1.2 ERRORS AND THEIR SOURCES

The errors involved in a numerical computation may be divided into the following five groups.

1. Errors involved in the statement of the problem. Mathematical formulations of physical problems are seldom exact descriptions of the phenomena they model. The discrepancy between the model and the physical system is a source of error which can be termed the error of the problem. (Mathematical models and their relation to the real world are investigated in applied mathematics, physics, engineering and so on.) It also happens that it is difficult or impossible to solve a given problem when it is formulated precisely; if that is the case, it is replaced by an approximate problem yielding almost the same results. This is the source of an error regarded as the error of the method.

2. Errors occur due to the presence of infinite processes in mathematical analysis. Many numerical methods express the desired result as the limit of an infinite sequence or the sum of an infinite series. Since, generally speaking, an infinite process cannot be completed in a finite number of steps, we are forced to stop at some term of the sequence and consider it to be an approximation to the required solution. Naturally, such a termination of the process gives rise to an error known as truncation error.

3. Errors associated with the system of numeration. In numerical computation there are fundamental limitations imposed by the nature of the computing medium at our disposal. Computation is subject to the number of figures that can be carried, and the rounding which this forces introduces errors at every step.

4. Errors of operation. In carrying out the arithmetical operations which transform the original data into the final results, errors of operation are inherent.

5. Errors due to numerical parameters (in formulae) whose values can only be determined approximately. Such, for instance, are all physical constants and mathematical constants such as pi and e. Although such constants are defined precisely in mathematical terms, their representation requires an infinite sequence of digits, for example

    pi = 3.1415926...,    e = 2.71828182... .

To use these in numerical computation we must approximate their values to a finite number of digits, for instance 3.142 (rounded to 3 decimal places or 4 significant figures) and 2.71828 (rounded to 5 decimal places or 6 significant figures). The consequence is a data error which may be called the initial error.

In a particular problem, quite naturally, some of these errors are absent or have a negligible effect. But, generally, a complete analysis must reckon with all types of errors.

1.3 APPROXIMATE NUMBERS AND SIGNIFICANT FIGURES

In approximate computation it is convenient to distinguish between numbers which are exact and those which are approximate values.

DEFINITION 1.1 (Approximate Numbers): An approximate value of a number, or an approximate number, is defined as a number which is used in place of an exact number and differs only slightly from the exact number for which it stands.

Numbers such as 1/3, 200, etc. are exact numbers because there is no approximation or uncertainty associated with them. Numbers such as e, pi, the square root of 2, etc. are also exact numbers, but they cannot be expressed exactly by a finite number of digits. When written with a finite number of digits they must be written as 2.7183, 1.4142, and so on, which are therefore only approximations: they are approximate numbers.
DEFINITION 1.2 (Significant Figures): A significant figure in the decimal representation of a number is any one of the digits 1, 2, ..., 9, or any zero that does not serve merely to fix the position of the decimal point or to fill the places of unknown or discarded digits. Zeros which are used merely to fix the decimal point, and so to indicate the place values of the other digits, are not significant.

In the number 0.0006070 the first four zeros are not significant figures, since they serve only to fix the position of the decimal point; the remaining digits, including the final zero, are significant, so the number has four significant figures. If the last digit of 0.0006070 is not significant, the number must be written as 0.000607. Thus 0.0006070 and 0.000607 are different approximate numbers: the first has four significant figures and the second only three.

For whole numbers, the zeros on the right can serve either to denote significant figures or merely to fix the place values of the other digits. This can lead to misunderstanding when such numbers are written in the ordinary way. Consider, for example, the number 675,000. It is not clear how many significant figures there are, although we can say that we have at least three. The ambiguity can be removed by using powers-of-ten notation (scientific notation): we write 6.75x10^5 if the number has three significant figures, 6.750x10^5 if it has four significant figures, or 6.7500x10^5 if it has five significant figures. More generally, this notation is convenient for numbers with a large number of non-significant zeros, such as 2.30x10^8, and the like.

1.4 ROUNDING OF NUMBERS AND ROUND-OFF ERRORS

A number which contains more digits than we are able, or need, to retain must be rounded off, and the resulting error is called the round-off error. If, for example, we attempt to divide 13 by 7, we obtain

    13/7 = 1.8571428571...,

a decimal fraction with an infinite number of digits which never terminates. In order to use this number on some calculating device we must be content with a rounded value; in other words, the number cannot be represented to its full accuracy.

ROUNDING-OFF RULE. To round off, or simply round, a number to n significant figures or digits, drop all digits to the right of the nth place, or replace them by zeros if the zeros are needed as place holders. In applying this rounding-off rule, note the following:

(a) if the first of the discarded digits is less than 5, leave the last retained digit unchanged;
(b) if the first discarded digit exceeds 5, add 1 to the last retained digit;
(c) if the first discarded digit is exactly 5 and there are non-zero digits among those discarded after it, add 1 to the last retained digit;
(d) if, however, the first discarded digit is exactly 5 and all the other discarded digits are zeros, the last retained digit is left unchanged or is increased by 1 according as it is even or odd (the even-digit rule).

EXAMPLE 1.1: Round off the number pi = 3.14159265... to six, five, four, and three significant figures.

SOLUTION: Rounding the number to the given numbers of significant figures we obtain respectively the approximate numbers 3.14159, 3.1416, 3.142, 3.14.

EXAMPLE 1.2: Round the following numbers to four significant figures: 7.4727, 76346, 15.235 and 15.245.

SOLUTION: We obtain respectively the approximate numbers 7.473, 76350, 15.24, 15.24.

Let us now introduce the concept of the round-off error.

DEFINITION 1.3 (Round-off Error): The round-off error of an approximate number is the difference between the number and its rounded value.

When only the rounded value is known, the round-off error cannot be determined exactly; in fact the true value may be anything within half a unit of the first discarded place. If 13/7 is represented by 1.857, for instance, the representation may be off by as much as plus or minus 0.0005, i.e. by half a unit in the position after the last significant figure which is retained.
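The even-digit rule in case (d) is the familiar round-half-to-even rule. The following Python sketch (an illustration added here, not taken from the book) applies the rounding-off rule using the decimal module, whose ROUND_HALF_EVEN mode implements cases (a)-(d) exactly, and reproduces Examples 1.1 and 1.2.

```python
# A minimal sketch of the rounding-off rule, using Python's decimal module
# so that the even-digit rule is applied exactly, free of binary
# floating-point artefacts.
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(value: str, n: int) -> Decimal:
    """Round the decimal number given as a string to n significant figures."""
    d = Decimal(value)
    if d == 0:
        return d
    exponent = d.adjusted() - (n - 1)   # exponent of the last retained digit
    return d.quantize(Decimal(1).scaleb(exponent), rounding=ROUND_HALF_EVEN)

# The numbers of Examples 1.1 and 1.2, rounded to four significant figures.
# Note that 76346 is printed by Decimal as 7.635E+4, i.e. 76350.
for x in ["3.14159265", "7.4727", "76346", "15.235", "15.245"]:
    x4 = round_sig(x, 4)
    print(x, "->", x4, "  round-off error:", Decimal(x) - x4)
```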
EXAMPLE 1.3: Find the round-off errors introduced in Example 1.2.

SOLUTION: The round-off error introduced in each case of that example is as follows:

    7.4727 - 7.473  = -0.0003
    76346  - 76350  = -4
    15.235 - 15.24  = -0.005
    15.245 - 15.24  =  0.005

1.5 TRUNCATION ERROR

Many numerical methods express the desired result as the limit of an infinite sequence or the sum of an infinite series. In practice only a finite number of terms can be computed; the consequence of neglecting the remaining terms is the error known as the truncation error. A truncation error, therefore, arises when an approximate formula is used instead of an exact one. Consider the convergent infinite series

    S = 1/2 + 1/4 + 1/8 + ... ,

whose sum is S = 1. If the sum is approximated by the partial sum S_n of the first n terms, then S_n = 1 - 2^(-n) and the truncation error is S - S_n = 2^(-n). Table 1.1 shows the error for several values of n.

    Table 1.1  Truncation Error

    n     S_n        Error S - S_n
    3     0.875      0.125
    5     0.96875    0.03125
    10    0.99902    0.00098
    15    0.99997    0.00003

The table shows that the truncation error can be made as small as we please by taking a sufficient number of terms, although for a slowly converging series a very large number of terms may be needed, and the accompanying rounding errors then become important.

1.6 ABSOLUTE, RELATIVE, AND PERCENTAGE ERRORS

DEFINITION 1.4 (Absolute Error): If x* is an approximation to a number x, then the absolute error E of x* is

    E = |x - x*|.                                                (1.1)

DEFINITION 1.5: An approximation x* to x is said to be correct to n decimal places or positions (or to agree with x to n decimal places) if n is the largest non-negative integer for which

    |x - x*| < 0.5x10^(-n).                                      (1.2)

For x = pi = 3.14159... and x* = 3.142, the absolute error is

    |x - x*| = 0.4x10^(-3) < 0.5x10^(-3),

so x* is correct to 3 decimal places.

DEFINITION 1.6 (Limiting Absolute Error): The limiting absolute error of an approximate number is any number not less than the absolute error of that number.

EXAMPLE 1.4: Determine the limiting absolute error of the number 3.14 which is used instead of the number pi.

SOLUTION: It follows from the inequality 3.14 < pi < 3.15 that |pi - 3.14| < 0.01, so 0.01 may be taken as the limiting absolute error.


CHAPTER 2
LINEAR OPERATORS

As an example of manipulating the difference operators, consider

    sin x - sin(x - h) = sin(x - h + h) - sin(x - h) = 2 sin(h/2) cos(x - h/2),

i.e. the backward difference of sin x is 2 sin(h/2) cos(x - h/2).

EXAMPLE 2.3: Find mu^2 f(x).

SOLUTION:
    mu^2 f(x) = mu( mu f(x) )
              = mu( (1/2)[f(x + h/2) + f(x - h/2)] )
              = (1/2)[ mu f(x + h/2) + mu f(x - h/2) ]
              = (1/2)[ (1/2)(f(x + h) + f(x)) + (1/2)(f(x) + f(x - h)) ]
              = (1/4)[ f(x + h) + 2 f(x) + f(x - h) ].

2.3 OPERATORS DEFINED BY SERIES

Any of these operators can be expressed in terms of another operator. Expanding f(x + h) in a Taylor series,

    f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + ... .

In terms of the derivative operator D we can therefore write

    E f(x) = f(x + h) = (1 + hD + (hD)^2/2! + (hD)^3/3! + ...) f(x),     (2.6)

or, briefly,

    E = e^(hD),                                                          (2.7)

so that E is defined by means of the power series in (2.7). Similarly one can define, for any operator P,

    e^P = 1 + P + P^2/2! + P^3/3! + ...,
    log(1 + P) = P - P^2/2 + P^3/3 - ...,
    (1 + P)^(1/2) = 1 + P/2 - P^2/8 + ...,

etc. An operator defined by means of an infinite series is meaningful only if the series obtained when it operates on any function f in F is convergent. For example, let Q be defined by

    Q = 1 + D + D^2/2 + D^3/3 + ... .

If F is the space of polynomials, the above definition is meaningful. However, if e^x belongs to F then Q e^x does not have any meaning, since

    Q e^x = e^x (1 + 1 + 1/2 + 1/3 + ...),

and the series on the right side does not converge.

Each of the operators E, Delta, nabla, delta and mu can be expressed in terms of any of the others; relations (2.8)-(2.12) of this chapter collect such expressions. In particular, from

    mu = (1/2)(E^(1/2) + E^(-1/2)),                                      (2.13)

we get mu^2 = (1/4)(E + 2 + E^(-1)); and since delta = E^(1/2) - E^(-1/2) gives delta^2 = E - 2 + E^(-1), we have delta^2 + 4 = E + 2 + E^(-1) = 4 mu^2. This gives the useful relation

    mu^2 = 1 + delta^2/4.                                                (2.14)

EXAMPLE 2.4: Prove that delta^2 = Delta^2 (1 + Delta)^(-1).

SOLUTION: Since delta = E^(1/2) - E^(-1/2), we have

    delta^2 = E + E^(-1) - 2.

But E = Delta + 1, hence

    delta^2 = Delta + 1 + (Delta + 1)^(-1) - 2
            = [(Delta + 1)^2 + 1 - 2(Delta + 1)](Delta + 1)^(-1)
            = Delta^2 (1 + Delta)^(-1).

EXAMPLE 2.5: Show that mu = (1 + Delta/2)(1 + Delta)^(-1/2).

SOLUTION: Since

    mu = (1/2)(E^(1/2) + E^(-1/2)) = (1/2)(E + 1) E^(-1/2),

and E = 1 + Delta, therefore

    mu = (1/2)(2 + Delta)(1 + Delta)^(-1/2) = (1 + Delta/2)(1 + Delta)^(-1/2).
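The operator identities above are easy to check numerically. The short Python sketch below (an added illustration, not from the book) verifies the result of Example 2.3 and relation (2.14) for f(x) = sin x with the step h = 0.1 chosen here for the test.

```python
# A small numerical check of the averaging operator mu and the central
# difference operator delta, applied to f(x) = sin(x) with step h.
import math

h = 0.1

def f(x):
    return math.sin(x)

def mu(g):            # mu g(x) = (g(x + h/2) + g(x - h/2)) / 2
    return lambda x: 0.5 * (g(x + h / 2) + g(x - h / 2))

def delta(g):         # delta g(x) = g(x + h/2) - g(x - h/2)
    return lambda x: g(x + h / 2) - g(x - h / 2)

x = 0.7

# Example 2.3: mu^2 f(x) = (f(x+h) + 2 f(x) + f(x-h)) / 4
print(mu(mu(f))(x), 0.25 * (f(x + h) + 2 * f(x) + f(x - h)))

# Relation (2.14): mu^2 = 1 + delta^2 / 4, applied to f at x
print(mu(mu(f))(x), f(x) + 0.25 * delta(delta(f))(x))
```

In each case the two printed values agree apart from rounding at machine precision.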
2.4 DIFFERENCE OPERATORS AND THE DERIVATIVE OPERATOR

From the relation E = e^(hD) we have

    delta = E^(1/2) - E^(-1/2) = e^(hD/2) - e^(-hD/2) = 2 sinh(hD/2),

and

    mu = (1/2)(E^(1/2) + E^(-1/2)) = (1/2)(e^(hD/2) + e^(-hD/2)) = cosh(hD/2).

Later on, in Chapter 10, we shall derive the following formulae:

    hD    = Delta - Delta^2/2 + Delta^3/3 - Delta^4/4 + ...,            (2.15)
    hD    = nabla + nabla^2/2 + nabla^3/3 + nabla^4/4 + ...,            (2.16)
    h^2 D^2 = Delta^2 - Delta^3 + (11/12) Delta^4 - ...,                (2.17)
    h^2 D^2 = nabla^2 + nabla^3 + (11/12) nabla^4 + ... .               (2.18)

EXERCISE 2

1. Evaluate Delta(x^2), nabla(cos x), delta(x).


CHAPTER 3
EIGENVALUES AND EIGENVECTORS OF MATRICES

3.1 INTRODUCTION

Eigenvalues play a very important role in obtaining a deep understanding of many physical systems. For example, the stability of an aircraft in flight and the modes of vibration of a bridge are determined by eigenvalues. They are also important in the study of the growth of certain types of population. Suppose X denotes the number of females of a certain age in a given population. If the population is in a state of age stability, the vector X satisfies an equation of the form

    AX = lambda X,

where A is a square matrix and lambda is a positive real number which determines whether the population increases or not. If lambda < 1 the population decreases, if lambda > 1 it increases, and if lambda = 1 it remains stationary.

The equation AX = lambda X can be written as

    (A - lambda I) X = 0.

If X = (x_1, x_2, ..., x_n)^T, then we have a system of n equations in the n unknowns x_1, x_2, ..., x_n:

    (a_11 - lambda) x_1 + a_12 x_2 + ... + a_1n x_n = 0,
    a_21 x_1 + (a_22 - lambda) x_2 + ... + a_2n x_n = 0,
    .................................................
    a_n1 x_1 + a_n2 x_2 + ... + (a_nn - lambda) x_n = 0.

It is well known that a non-trivial solution exists if and only if the determinant of the system vanishes, i.e.

    | a_11 - lambda    a_12          ...   a_1n          |
    | a_21             a_22 - lambda ...   a_2n          |  = 0.        (3.2)
    | ...                                                 |
    | a_n1             a_n2          ...   a_nn - lambda |

Expanding the determinant gives a polynomial equation of degree n in lambda, the characteristic equation of A, whose roots are the eigenvalues of A. Observe that putting lambda = 0 in the determinant gives det A; hence the product of the eigenvalues, lambda_1 lambda_2 ... lambda_n, coincides with det A.

EXAMPLE 3.1: Find the eigenvalues and corresponding eigenvectors of the matrix

    A = | 2  3 |
        | 4  3 |.

SOLUTION:

    |A - lambda I| = (2 - lambda)(3 - lambda) - 12 = lambda^2 - 5 lambda - 6.

Thus lambda^2 - 5 lambda - 6 = 0 is the characteristic equation of A and lambda = 6, -1 are the two eigenvalues. To find the eigenvectors we have to solve

    (2 - lambda) x_1 + 3 x_2 = 0,
    4 x_1 + (3 - lambda) x_2 = 0.

For lambda = 6 these give x_2 = (4/3) x_1, so (3, 4)^T is an eigenvector corresponding to lambda = 6; for lambda = -1 they give x_1 + x_2 = 0, so (1, -1)^T is an eigenvector corresponding to lambda = -1.

EXAMPLE 3.2: Find the eigenvalues and corresponding eigenvectors of the matrix

    A = | -2   6  -24 |
        |  0  -3   10 |
        |  1  -4   13 |.

SOLUTION: The characteristic equation is lambda^3 - 8 lambda^2 + 5 lambda + 14 = 0, whose roots are -1, 2 and 7. Hence A has eigenvalues -1, 2, 7. To find the eigenvectors we proceed as follows. Let X = (x_1, x_2, x_3)^T. For lambda = -1 we have to solve the system of equations

    -2 x_1 + 6 x_2 - 24 x_3 = -x_1,
            -3 x_2 + 10 x_3 = -x_2,
     x_1 - 4 x_2 + 13 x_3   = -x_3,

i.e.

    -x_1 + 6 x_2 - 24 x_3 = 0,
          -2 x_2 + 10 x_3 = 0,
     x_1 - 4 x_2 + 14 x_3 = 0.

Since the equations are not linearly independent, we can choose x_3 arbitrarily. Let x_3 = 1; then x_2 = 5 and x_1 = 6. Thus (6, 5, 1)^T is an eigenvector corresponding to the eigenvalue lambda_1 = -1. Similarly we find that (-3, 2, 1)^T and (-2, 1, 1)^T respectively are eigenvectors corresponding to the eigenvalues lambda_2 = 2 and lambda_3 = 7.
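The hand calculations of Examples 3.1 and 3.2 are easily checked by machine. The following Python/NumPy snippet (an added illustration, not part of the book) confirms the eigenvalues and the eigenvector found for lambda = -1.

```python
# A quick numerical check of Examples 3.1 and 3.2.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 3.0]])              # matrix of Example 3.1
print(np.linalg.eigvals(A))             # approximately [ 6., -1.]

B = np.array([[-2.0,  6.0, -24.0],
              [ 0.0, -3.0,  10.0],
              [ 1.0, -4.0,  13.0]])     # matrix of Example 3.2
w, V = np.linalg.eig(B)
print(np.sort(w))                       # approximately [-1., 2., 7.]

# The eigenvector for -1 should be proportional to (6, 5, 1):
i = int(np.argmin(np.abs(w - (-1.0))))
print(V[:, i] / V[2, i])                # approximately [6., 5., 1.]
```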
We now give some simple results about the eigenvalues of a matrix. Suppose lambda_1, lambda_2, ..., lambda_n are the (not necessarily distinct) eigenvalues of a square matrix A.

THEOREM 3.1: lambda_1 + lambda_2 + ... + lambda_n = a_11 + a_22 + ... + a_nn, the trace of A.

DEFINITION 3.1: Two column vectors X_1 and X_2 are said to be orthogonal if X_1^H X_2 = 0, where X_1^H denotes the conjugate transpose of X_1; for example, if X_1 = (i, 2 - i, 3)^T then X_1^H = (-i, 2 + i, 3).

DEFINITION 3.2: A matrix is said to be hermitian if A = A^H.

EXAMPLE 3.3: If

    A = | a       b + ic |
        | b - ic  d      |

with a, b, c, d real, then A^H = A; thus A is a hermitian matrix. In particular every real symmetric matrix is hermitian.

THEOREM 3.3: A hermitian matrix has only real eigenvalues.

PROOF: Let A be hermitian and let lambda be an eigenvalue and X the corresponding eigenvector, so that AX = lambda X. Premultiplying by X^H gives X^H A X = lambda X^H X. Taking the conjugate transpose of the left-hand side, (X^H A X)^H = X^H A^H X = X^H A X, so X^H A X is real; X^H X is real and positive. Hence lambda is real.

THEOREM 3.4: Eigenvectors corresponding to distinct eigenvalues of a hermitian matrix are orthogonal.

PROOF: Let A X_1 = lambda_1 X_1 and A X_2 = lambda_2 X_2, where lambda_1 and lambda_2 are distinct and A is a hermitian matrix. Proceeding as above we have

    X_2^H A X_1 = lambda_1 X_2^H X_1,

and, taking the conjugate transpose of X_1^H A X_2 = lambda_2 X_1^H X_2,

    X_2^H A X_1 = lambda_2 X_2^H X_1,

where we used the fact that lambda_2 is real. The above equations imply

    (lambda_1 - lambda_2) X_2^H X_1 = 0.

Since lambda_1 is not equal to lambda_2, we have X_2^H X_1 = 0, which proves the theorem.

An important corollary of the above theorem is as follows.

COROLLARY: A symmetric real matrix has only real eigenvalues, and eigenvectors corresponding to distinct eigenvalues are orthogonal.

It is easily shown [8] (page 84) that if X_1, X_2, ..., X_n are eigenvectors of A corresponding to distinct eigenvalues then the set {X_1, ..., X_n} is linearly independent. Hence every n x 1 column vector can be written as a linear combination of these vectors. For a hermitian matrix it can be shown that an orthonormal set of n linearly independent eigenvectors can be found even when A has a repeated eigenvalue. For a general matrix this need not be the case.

EXAMPLE 3.5: The matrix

    A = | 1  1  0 |
        | 0  1  1 |
        | 0  0  1 |

has the eigenvalue unity repeated three times. However, there is only one linearly independent eigenvector, namely (1, 0, 0)^T.

3.3 GERSHGORIN'S THEOREM

We now consider some estimates for the absolute values of the eigenvalues of a square matrix A = (a_ij). Let lambda be any eigenvalue of A and X = (x_1, ..., x_n)^T a corresponding eigenvector. Then

    lambda x_i = sum over j of a_ij x_j,   i = 1, 2, ..., n.            (3.5)

Scale X so that its component of largest absolute value, x_r say, equals 1, so that |x_j| <= 1 for all j. Taking i = r in (3.5) gives

    |lambda| <= sum over j of |a_rj| <= max over i of ( sum over j of |a_ij| ).   (3.6)

Also the transposed matrix has the same set of eigenvalues, hence

    |lambda| <= max over j of ( sum over i of |a_ij| ).                 (3.7)

From (3.5) we also have

    (lambda - a_rr) x_r = sum over j (j not equal to r) of a_rj x_j,

and therefore |lambda - a_rr| <= sum over j (j not r) of |a_rj|. Since r is not known in advance, we replace this result by a weaker one: lambda lies in one of the discs with centre a_ii and radius

    r_i = sum over j (j not i) of |a_ij|.                               (3.8)

Finally, applying the same argument to the transposed matrix, lambda lies in one of the discs with centre a_ii and radius

    c_i = sum over j (j not i) of |a_ji|.                               (3.9)

THEOREM 3.5: If lambda is an eigenvalue of A = (a_ij) then it satisfies the estimates (3.6)-(3.9).

EXAMPLE 3.6: Consider

    A = | -2   6  -24 |
        |  0  -3   10 |
        |  1  -4   13 |.

The row sums of absolute values are 2 + 6 + 24 = 32, 0 + 3 + 10 = 13 and 1 + 4 + 13 = 18, so by (3.6) |lambda| <= 32. The row discs (3.8) are

    |lambda + 2| <= 30,   |lambda + 3| <= 10,   |lambda - 13| <= 5,

and the column discs (3.9) are

    |lambda + 2| <= 1,    |lambda + 3| <= 10,   |lambda - 13| <= 34.

The eigenvalues of A are -1, 2 and 7 (Example 3.2), and each of them does indeed lie in one of the row discs and in one of the column discs.

EXAMPLE 3.7: For another 3x3 matrix the discs obtained from (3.8) and (3.9) include |lambda - 1| <= 4, |lambda - 3| <= 4, |lambda - 1| <= 7, |lambda - 3| <= 5 and |lambda - 8| <= 4. Hence all eigenvalues lie in the union of the discs |lambda - 1| <= 7 and |lambda - 8| <= 4. The matrix in question has eigenvalues 3, 4 and 5, which do lie in this union.
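The estimates of Theorem 3.5 are easy to tabulate. The Python sketch below (an added check, not from the book) computes the row and column discs of the matrix of Example 3.6 and verifies that each eigenvalue lies in both families of discs.

```python
# A short check of the Gershgorin estimates of Example 3.6.
import numpy as np

A = np.array([[-2.0,  6.0, -24.0],
              [ 0.0, -3.0,  10.0],
              [ 1.0, -4.0,  13.0]])

centres = np.diag(A)
row_radii = np.sum(np.abs(A), axis=1) - np.abs(centres)
col_radii = np.sum(np.abs(A), axis=0) - np.abs(centres)
print(list(zip(centres, row_radii)))   # row discs:    (-2, 30), (-3, 10), (13, 5)
print(list(zip(centres, col_radii)))   # column discs: (-2, 1),  (-3, 10), (13, 34)

# Every eigenvalue must lie in at least one row disc and one column disc.
for lam in np.linalg.eigvals(A):       # eigenvalues -1, 2, 7
    in_row = np.any(np.abs(lam - centres) <= row_radii)
    in_col = np.any(np.abs(lam - centres) <= col_radii)
    print(lam, in_row, in_col)
```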
3.4 THE POWER METHOD

The power method is an iterative technique to find the dominant eigenvalue of a given matrix A. Assume that the matrix has eigenvalues which are so arranged that |lambda_1| > |lambda_2| >= ... >= |lambda_n| and that u_1, u_2, ..., u_n are n linearly independent eigenvectors. If u is any arbitrary n x 1 vector then it can be expressed as a linear combination of the eigenvectors and we write

    u = a_1 u_1 + a_2 u_2 + ... + a_n u_n.

Then

    Au = a_1 A u_1 + a_2 A u_2 + ... + a_n A u_n
       = a_1 lambda_1 u_1 + a_2 lambda_2 u_2 + ... + a_n lambda_n u_n.

It is clear that

    A^k u = a_1 lambda_1^k u_1 + ... + a_n lambda_n^k u_n
          = lambda_1^k [ a_1 u_1 + sum over i >= 2 of a_i (lambda_i/lambda_1)^k u_i ].   (3.14)

Since |lambda_i/lambda_1| < 1 for i >= 2, each term of the sum tends to zero as k is taken large, so that

    A^k u is approximately a_1 lambda_1^k u_1 for large k.              (3.15)

From (3.15) we have, for large k,

    A (A^k u) is approximately lambda_1 (A^k u).

Therefore A^k u is an approximate eigenvector corresponding to the eigenvalue lambda_1. The procedure suggested by the above analysis for finding the dominant (in absolute value) eigenvalue of a matrix is called the power method. We proceed as follows:

(a) Choose a suitable arbitrary column vector u. Usually u is rescaled so as to make its element of largest absolute value equal to 1, particularly if one is working with an automatic computer.
(b) Compute v = Au and divide v by its element of largest absolute value; the result is the next vector u.
(c) Repeat step (b). At every step the divisor is an approximation to the dominant eigenvalue lambda_1, and the normalised vector is an approximation to the corresponding eigenvector; the process is stopped when these have settled to the required accuracy.

EXAMPLE 3.8: Apply the power method to the matrix

    A = |  2  -1   0 |
        | -1   2  -1 |
        |  0  -1   2 |.

SOLUTION: Starting from u_0 = (-1, 1, -1)^T we obtain successively

    v_1 = A u_0 = (-3, 4, -3)^T                  = 4 u_1,
    v_2 = A u_1 = (-2.5, 3.5, -2.5)^T            = 3.5 u_2,
    v_3 = A u_2 = (-2.4286, 3.4286, -2.4286)^T   = 3.4286 u_3,

and, continuing in this way, the later steps give

    v = (-2.41428, 3.41428, -2.41428)^T    = 3.41428 u,
    v = (-2.414224, 3.414224, -2.414224)^T = 3.414224 u.

Thus the dominant eigenvalue is approximately 3.41422 (the exact value is 2 + sqrt(2) = 3.41421...) and the corresponding eigenvector is approximately (-0.70711, 1, -0.70711)^T.

It is often required to find the smallest eigenvalue instead. Let lambda be an eigenvalue of a matrix A and let X be the corresponding eigenvector, so that AX = lambda X. Subtracting alpha X from both sides of this equation we get

    AX - alpha X = lambda X - alpha X,
    (A - alpha I) X = (lambda - alpha) X.

Thus lambda - alpha is an eigenvalue of the matrix A - alpha I and X is the associated eigenvector. In other words, subtracting alpha from each element of the leading diagonal of a matrix has the effect of subtracting alpha from each eigenvalue but leaves the eigenvectors unchanged. Therefore we can, by a proper choice of alpha, make either the largest or the smallest eigenvalue of A the dominant one in absolute value for A - alpha I.

Suppose that alpha is taken equal to or greater than the dominant eigenvalue of A. Then, if the dominant eigenvalue of A - alpha I is found and alpha is added back to it, the result is the numerically smallest eigenvalue of A, and the corresponding eigenvector is the same as the eigenvector of A - alpha I for its dominant eigenvalue.

EXAMPLE 3.9: Find the smallest eigenvalue of the matrix A of Example 3.8.

SOLUTION: By Gershgorin's theorem the eigenvalues of A satisfy |lambda - 2| <= 2, i.e. they lie in the interval [0, 4]. Take alpha = 4 and apply the power method to B = A - 4I. The successive multipliers are found to be

    -3.4286,  -3.41667,  -3.41464,  -3.4142868,  -3.414226,

and the normalised iterates approach (0.707108, 1, 0.707108)^T. Thus an approximate eigenvalue of B is -3.414226. The corresponding eigenvalue of A is -3.414226 + 4 = 0.585774; the exact value is 2 - sqrt(2) = 0.58578644. The approximate eigenvector corresponding to this eigenvalue is (0.707108, 1, 0.707108)^T.

The question arises whether the smallest eigenvalue can be found directly, without first finding the largest one. If A is non-singular and AX = lambda X, then

    A^(-1) X = (1/lambda) X,

so the eigenvalues of A^(-1) are the reciprocals of the eigenvalues of A and the eigenvectors are the same. The smallest (in absolute value) eigenvalue of A therefore corresponds to the dominant eigenvalue of A^(-1), and it can be found by applying the power method to A^(-1); the convergence is often much faster.

EXAMPLE 3.10: Let

    A = | 5  2 |
        | 2  8 |,

so that

    A^(-1) = (1/36) |  8  -2 |  =  |  0.2222  -0.0556 |
                    | -2   5 |     | -0.0556   0.1389 |.

By the power method we can find the largest eigenvalue of A^(-1), which is 0.25; the corresponding eigenvector is (1, -0.5)^T. The smallest eigenvalue of A is therefore 1/0.25 = 4, and the corresponding eigenvector is (1, -0.5)^T. The eigenvalues of A calculated analytically are 4 and 9. Thus both results agree.
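A compact implementation of steps (a)-(c) is given below as an illustration; it is a sketch rather than the book's program, and the starting vectors used here are chosen for convenience. It reproduces the results of Examples 3.8 and 3.9.

```python
# A minimal power-method sketch with the normalisation of step (b),
# applied to A = [[2,-1,0],[-1,2,-1],[0,-1,2]] as in Examples 3.8 and 3.9.
import numpy as np

def power_method(A, u, steps=30):
    """Return (dominant eigenvalue, eigenvector scaled so its largest-|.| entry is 1)."""
    lam = 0.0
    for _ in range(steps):
        v = A @ u
        lam = v[np.argmax(np.abs(v))]   # element of largest absolute value
        u = v / lam                     # normalise as in step (b)
    return lam, u

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

lam1, x1 = power_method(A, np.array([1.0, -1.0, 1.0]))
print(lam1, x1)        # about 3.41421 (= 2 + sqrt 2) and (-0.70711, 1, -0.70711)

# Example 3.9: shift by alpha = 4 to pick out the smallest eigenvalue of A.
mu, y = power_method(A - 4.0 * np.eye(3), np.array([1.0, 1.0, 1.0]))
print(mu + 4.0, y)     # about 0.58579 (= 2 - sqrt 2) and (0.70711, 1, 0.70711)
```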
RAYLEIGH QUOTIENT: If a matrix A is hermitian, its eigenvectors u_1, u_2, ..., u_n are orthogonal, and we suppose that we have also made them orthonormal. Let u be an arbitrary column vector such that u^H u_1 is not zero. Now

    u = a_1 u_1 + a_2 u_2 + ... + a_n u_n,

and

    A^k u = a_1 lambda_1^k u_1 + a_2 lambda_2^k u_2 + ... + a_n lambda_n^k u_n
          = lambda_1^k [ a_1 u_1 + sum over i >= 2 of a_i (lambda_i/lambda_1)^k u_i ].

Using the orthonormality of the u_i,

    (A^k u)^H (A^(k+1) u) = sum of |a_i|^2 lambda_i^(2k+1),
    (A^k u)^H (A^k u)     = sum of |a_i|^2 lambda_i^(2k),

so that the Rayleigh quotient

    (A^k u)^H (A^(k+1) u) / (A^k u)^H (A^k u)

tends to lambda_1 with an error of the order of (lambda_2/lambda_1)^(2k), i.e. roughly twice as many correct figures as the plain power method gives after the same number of steps. Applied to the iterates of Example 3.9 it gives lambda = -3.414213473, which is much closer to the exact value -3.4142135... than the value -3.414226 obtained by the power method.

3.5 DEFLATION

Suppose we have found an eigenvalue lambda_1 and the corresponding eigenvector X_1 of an n x n matrix A. Deflation means finding a matrix of order (n-1) x (n-1) whose eigenvalues coincide with the rest of the eigenvalues of A. Suppose X_1 = (x_1, x_2, ..., x_n)^T; assume that x_1 is not zero and multiply X_1 by 1/x_1 to get an eigenvector

    X_1' = (1, x_2/x_1, ..., x_n/x_1)^T

whose top element is unity. If the i-th row of A is denoted by R_i, we see that

    A X_1' = (R_1 X_1', R_2 X_1', ..., R_n X_1')^T = lambda_1 X_1'.

Comparing top elements, and remembering that the top element of X_1' is unity,

    R_1 X_1' = lambda_1.

Now we form the matrix B defined as

    B = A - X_1' R_1.

Note that X_1' R_1 is an n x n matrix whose first row is R_1; hence B has only zeros in its first row. We show that, with the exception of lambda_1, the matrices A and B have all eigenvalues in common.

THEOREM 3.6: If A has eigenvalues lambda_1, lambda_2, ..., lambda_n, then B has eigenvalues 0, lambda_2, ..., lambda_n.

PROOF: First,

    B X_1' = A X_1' - X_1' (R_1 X_1') = lambda_1 X_1' - lambda_1 X_1' = 0,

so 0 is an eigenvalue of B with eigenvector X_1'. Next let X_i be an eigenvector of A corresponding to lambda_i, i = 2, ..., n, with lambda_i not zero, and put

    Y_i = X_i - (R_1 X_i / lambda_i) X_1'.

Then

    B Y_i = B X_i - (R_1 X_i / lambda_i) B X_1'
          = lambda_i X_i - (R_1 X_i) X_1' - 0
          = lambda_i Y_i,

so lambda_i is an eigenvalue of B. Finally, if some lambda_i is zero, then 0 is again an eigenvalue of B, since the first row of B consists of zeros and B is therefore singular. This completes the proof.

Now define an (n-1) x (n-1) matrix A_1 obtained by deleting the first row and the first column of B, so that

    B = | 0      0 ... 0 |
        | b_21           |
        | ...     A_1    |
        | b_n1           |.

If Y = (y_1, y_2, ..., y_n)^T is an eigenvector of B corresponding to a non-zero eigenvalue lambda, then, since the first row of B is zero, the first component of BY is zero, i.e. lambda y_1 = 0, so y_1 = 0. Writing Y = (0, Z)^T, the remaining components give A_1 Z = lambda Z. Hence the eigenvalues lambda_2, ..., lambda_n of A are precisely the eigenvalues of the deflated matrix A_1, and the corresponding eigenvectors of B are obtained from those of A_1 by adjoining a zero top element.

EXAMPLE 3.11: Let

    A = |  1   2   2 |
        |  2   3  -2 |
        | -5   3   8 |.

A has an eigenvalue 4 with (2, 2, 1)^T as an eigenvector. Here lambda_1 = 4, X_1' = (1, 1, 1/2)^T and R_1 = (1, 2, 2). Now

    X_1' R_1 = |  1    2   2 |
               |  1    2   2 |
               | 1/2   1   1 |,

and B = A - X_1' R_1 comes out as

    B = |  0     0   0 |
        |  1     1  -4 |
        | -5.5   2   7 |.

The matrix A_1 obtained by deleting the first row and the first column of B is found to be

    A_1 = | 1  -4 |
          | 2   7 |.

Now

    |A_1 - lambda I| = (1 - lambda)(7 - lambda) + 8 = lambda^2 - 8 lambda + 15 = (lambda - 3)(lambda - 5).

Thus A_1 has eigenvalues 3 and 5, which together with lambda_1 = 4 are the eigenvalues of A.

We can recover the eigenvectors of A from those of the deflated matrix A_1 as follows. Let Z be an eigenvector of A_1 corresponding to an eigenvalue lambda_i (i >= 2), and let Y = (0, Z)^T denote the vector obtained by adjoining a zero top element, so that BY = lambda_i Y. We look for the eigenvector of A in the form

    X = Y + c X_1'.

Since A Y = B Y + X_1' (R_1 Y) = lambda_i Y + (R_1 Y) X_1' and A X_1' = lambda_1 X_1', we have

    A X = lambda_i Y + (R_1 Y) X_1' + c lambda_1 X_1',

and this equals lambda_i X = lambda_i Y + c lambda_i X_1' provided

    c = R_1 Y / (lambda_i - lambda_1).

EXAMPLE 3.12: Let A, lambda_1 = 4 and X_1' = (1, 1, 1/2)^T be as in Example 3.11, where the deflated matrix was found to be

    A_1 = | 1  -4 |
          | 2   7 |.

A_1 has an eigenvalue 5 with an eigenvector (1, -1)^T and an eigenvalue 3 with an eigenvector (2, -1)^T. Let us take lambda_2 = 5. Then Y_2 = (0, 1, -1)^T and R_1 Y_2 = (1)(0) + (2)(1) + (2)(-1) = 0, so c = 0 and

    X_2 = Y_2 = (0, 1, -1)^T

is an eigenvector of A corresponding to lambda_2 = 5. Similarly, for lambda_3 = 3 we have Y_3 = (0, 2, -1)^T, R_1 Y_3 = 2, c = 2/(3 - 4) = -2, and

    X_3 = Y_3 - 2 X_1' = (-2, 0, -2)^T,

so that (1, 0, 1)^T is an eigenvector of A corresponding to lambda_3 = 3.

In the above we assumed that the top element of X_1 is different from zero. If it is zero, we instead make the second (or some other non-zero) element of X_1 equal to unity and carry out the corresponding modifications.

In practice we compute the dominant eigenvalue lambda_1 and the corresponding eigenvector of A by the power method, obtain the matrix A_1 by deflation, apply the power method to A_1 to find the next dominant eigenvalue, and so on.
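The deflation step is only a few lines of code. The following Python sketch (an added illustration, not the book's program) carries out the computation of Examples 3.11 and 3.12.

```python
# A sketch of the deflation step of Section 3.5, applied to Example 3.11.
import numpy as np

A = np.array([[ 1.0, 2.0,  2.0],
              [ 2.0, 3.0, -2.0],
              [-5.0, 3.0,  8.0]])
lam1 = 4.0
X1 = np.array([2.0, 2.0, 1.0])

X1p = X1 / X1[0]                  # eigenvector scaled so its top element is 1
R1 = A[0, :]                      # first row of A
B = A - np.outer(X1p, R1)         # B = A - X1' R1 ; its first row is zero
A1 = B[1:, 1:]                    # deflated (n-1) x (n-1) matrix
print(A1)                         # [[1, -4], [2, 7]]
print(np.linalg.eigvals(A1))      # remaining eigenvalues of A: 3 and 5 (in some order)

# Recover an eigenvector of A for lam2 = 5 from the eigenvector (1, -1) of A1.
lam2 = 5.0
Y2 = np.array([0.0, 1.0, -1.0])
X2 = Y2 + (R1 @ Y2) / (lam2 - lam1) * X1p
print(X2, A @ X2 / lam2)          # X2 = (0, 1, -1); A X2 = 5 X2
```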
In the case of a 3x3 matrix we can find the dominant eigenvalue and the corresponding eigenvector of the matrix, then the smallest eigenvalue and the corresponding eigenvector; the third eigenvalue follows immediately, since the sum of the eigenvalues is equal to the trace of the given matrix.

The power method fails if the dominant eigenvalues are a pair of complex conjugate numbers. Also the convergence is poor if the ratio of the two dominant eigenvalues is close to unity. In such a case Jacobi's method may be used. For details see Froberg [4].

EXERCISE 3

1. Find the eigenvalues and the corresponding eigenvectors of the following matrices analytically.
2. Find the dominant eigenvalue and the corresponding eigenvector for each of the following matrices. Give your answers correct to three decimal places.
...
5. Using the estimate of the eigenvector obtained in Exercise 4, deflate the matrix A and hence calculate the remaining eigenvalues and associated eigenvectors. Compare your results with those obtained in Exercise 3.
6. Using the estimate of the eigenvector obtained in Exercise 4, find the smallest eigenvalue and associated eigenvector of A.
7. A certain 3x3 matrix has lambda_1 = -1 as an eigenvalue and X_1 = (1, 3/2, 1/2)^T as the corresponding eigenvector. Deflate the matrix and hence calculate the remaining eigenvalues and associated eigenvectors.


CHAPTER 4
INTERPOLATION WITH UNEQUALLY SPACED DATA

4.1 INTRODUCTION

Suppose we are given the values of a function f(x) at certain points x_0, x_1, ..., x_n and we want to estimate f(a), where a is any point in the interval [x_0, x_n]. We assume that the function can be approximated by a polynomial. This is called polynomial interpolation. There are a number of interpolation formulae, most of which have certain advantages over the others in certain situations, but no one of which is preferable to all others in all respects. In the general case, when the abscissas are not equally spaced, the use of divided differences is convenient.

4.2 LAGRANGE'S FORMULA

We assume that the polynomial

    P(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n

takes the prescribed values y_0, y_1, ..., y_n at the points x_0, x_1, ..., x_n. This gives the system of equations

    a_0 + a_1 x_0 + a_2 x_0^2 + ... + a_n x_0^n = y_0,
    a_0 + a_1 x_1 + a_2 x_1^2 + ... + a_n x_1^n = y_1,
    .................................................
    a_0 + a_1 x_n + a_2 x_n^2 + ... + a_n x_n^n = y_n,                  (4.1)

for the n+1 unknowns a_0, a_1, ..., a_n. The determinant of this system is the Vandermonde determinant

    | 1  x_0  x_0^2 ... x_0^n |
    | 1  x_1  x_1^2 ... x_1^n |
    | ...                      |
    | 1  x_n  x_n^2 ... x_n^n |.

Since x_0, x_1, ..., x_n are all distinct, this determinant is not zero and the system (4.1) has a unique solution. However, one or more of the coefficients a_i may turn out to be zero. Thus we have shown that a unique polynomial of degree at most n exists such that the curve represented by it passes through the given n+1 points. Let us define

    l_i(x) = [(x - x_0)(x - x_1)...(x - x_(i-1))(x - x_(i+1))...(x - x_n)]
             / [(x_i - x_0)(x_i - x_1)...(x_i - x_(i-1))(x_i - x_(i+1))...(x_i - x_n)],

so that l_i(x_i) = 1 and l_i(x_j) = 0 for j not equal to i, and put

    P(x) = y_0 l_0(x) + y_1 l_1(x) + ... + y_n l_n(x).                  (4.3)

At x = x_i all terms except the i-th add up to zero, so P(x_0) = y_0, P(x_1) = y_1, ..., P(x_n) = y_n. Thus P(x) is the required polynomial. Equation (4.3) is called Lagrange's interpolation formula. Note that, although we proved the existence of such a polynomial by a different method, we seldom use the determinant method to actually evaluate the constants a_0, a_1, ..., a_n. Later we shall discuss other methods for finding the interpolation polynomial; since such a polynomial is unique, it does not matter how we find it. The result is always the same.

EXAMPLE 4.1: Let the values of y = f(x) be as given in the following table. Find a polynomial which interpolates the given data.

    x |  -1   1   2    3
    y |  -8  -2   4   28

SOLUTION: By Lagrange's formula (4.3),

    P(x) = (-8) (x - 1)(x - 2)(x - 3) / [(-1 - 1)(-1 - 2)(-1 - 3)] + ... ,

the remaining terms being formed in the same way from the other three points.
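Formula (4.3) translates directly into code. The sketch below (an added illustration, not part of the book) evaluates the Lagrange polynomial for the data of Example 4.1.

```python
# A short sketch of Lagrange's formula (4.3), applied to Example 4.1.
xs = [-1.0, 1.0, 2.0, 3.0]
ys = [-8.0, -2.0, 4.0, 28.0]

def lagrange(x, xs, ys):
    """Evaluate the interpolating polynomial P(x) of formula (4.3)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# P reproduces the tabulated values exactly and can be used between them:
print([lagrange(x, xs, ys) for x in xs])   # [-8.0, -2.0, 4.0, 28.0]
print(lagrange(1.5, xs, ys))               # an interpolated value, P(1.5) = -0.5
```

Completing the algebra of Example 4.1 by hand gives P(x) = 2x^3 - 3x^2 + x - 2, which reproduces the four tabulated values; the code above confirms this and also supplies interpolated values such as P(1.5) = -0.5.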
