SCHAUM'S OUTLINE OF THEORY AND PROBLEMS OF MATRICES

by FRANK AYRES, JR., Ph.D.
Formerly Professor and Head, Department of Mathematics, Dickinson College

Including 340 solved problems, completely solved in detail.

SCHAUM'S OUTLINE SERIES
SCHAUM PUBLISHING CO., NEW YORK
McGRAW-HILL BOOK COMPANY: New York, St. Louis, San Francisco, Toronto, Sydney

Copyright 1962 by McGraw-Hill, Inc. All rights reserved. Printed in the United States of America. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Preface

Elementary matrix algebra has now become an integral part of the mathematical background necessary for such diverse fields as electrical engineering and education, chemistry and sociology, as well as for statistics and pure mathematics. This book, in presenting the more essential material, is designed primarily to serve as a useful supplement to current texts and as a handy reference book for those working in the several fields which require some knowledge of matrix theory. Moreover, the statements of theory and principle are sufficiently complete that the book could be used as a text by itself.

The material has been divided into twenty-six chapters, since the logical arrangement is thereby not disturbed while the usefulness as a reference book is increased. This also permits a separation of the treatment of real matrices, with which the majority of readers will be concerned, from that of matrices with complex elements. Each chapter contains a statement of pertinent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are followed by a carefully selected set of solved problems and a considerable number of supplementary exercises.

The beginning student in matrix algebra soon finds that the solutions of numerical exercises are disarmingly simple. Difficulties are likely to arise from the constant round of definition, theorem, proof. The trouble here is essentially a matter of lack of mathematical maturity, and normally to be expected, since usually the student's previous work in mathematics has been concerned with the solution of numerical problems while precise statements of principles and proofs of theorems have in large part been deferred for later courses. The aim of the present book is to enable the reader, if he persists through the introductory paragraphs and solved problems in any chapter, to develop a reasonable degree of self-assurance about the material.

The solved problems, in addition to giving more variety to the examples illustrating the theorems, contain most of the proofs of any considerable length together with representative shorter proofs. The supplementary problems call both for the solution of numerical exercises and for proofs. Some of the latter require only proper modifications of proofs given earlier; more important, however, are the many theorems whose proofs require but a few lines. Some are of the type frequently misnamed "obvious" while others will be found to call for considerable ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of such theorems that elementary matrix algebra becomes a natural first course for those seeking to attain a degree of mathematical maturity.
While the large number of these problems in any chapter makes it impractical to solve all of them before moving to the next, special attention is directed to the supplementary problems of the first two chapters. A mastery of these will do much to give the reader confidence to stand on his own feet thereafter.

The author wishes to take this opportunity to express his gratitude to the staff of the Schaum Publishing Company for their splendid cooperation.

FRANK AYRES, JR.
Carlisle, Pa.
October, 1962

CONTENTS

Chapter 1   MATRICES ................................................. 1
            Matrices. Equal matrices. Sums of matrices. Products of matrices. Products by partitioning.

Chapter 2   SOME TYPES OF MATRICES ................................... 10
            Triangular matrices. Scalar matrices. Diagonal matrices. The identity matrix. Inverse of a matrix. Transpose of a matrix. Symmetric matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian matrices. Skew-Hermitian matrices. Direct sums.

Chapter 3   DETERMINANT OF A SQUARE MATRIX ........................... 20
            Determinants of orders 2 and 3. Properties of determinants. Minors and cofactors. Algebraic complements.

Chapter 4   EVALUATION OF DETERMINANTS ............................... 32
            Expansion along a row or column. The Laplace expansion. Expansion along the first row and column. Determinant of a product. Derivative of a determinant.

Chapter 5   EQUIVALENCE .............................................. 39
            Rank of a matrix. Non-singular and singular matrices. Elementary transformations. Inverse of an elementary transformation. Equivalent matrices. Row canonical form. Normal form. Elementary matrices. Canonical sets under equivalence. Rank of a product.

Chapter 6   THE ADJOINT OF A SQUARE MATRIX ........................... 49
            The adjoint. The adjoint of a product. Minor of an adjoint.

Chapter 7   THE INVERSE OF A MATRIX .................................. 55
            Inverse of a diagonal matrix. Inverse from the adjoint. Inverse from elementary matrices. Inverse by partitioning. Inverse of symmetric matrices. Right and left inverses of m x n matrices.

Chapter 8   FIELDS ................................................... 64
            General fields. Sub-fields. Matrices over a field. Number fields.

Chapter 9   LINEAR DEPENDENCE OF VECTORS AND FORMS ................... 67
            Vectors. Linear dependence of vectors, linear forms, polynomials, and matrices.

Chapter 10  LINEAR EQUATIONS
            System of non-homogeneous equations. Solution using matrices. Cramer's rule. Systems of homogeneous equations.

Chapter 11  VECTOR SPACES ............................................ 85
            Vector spaces. Sub-spaces. Basis and dimension. Sum space. Intersection space. Null space of a matrix. Sylvester's laws of nullity. Bases and coordinates.

Chapter 12  LINEAR TRANSFORMATIONS
            Singular and non-singular transformations. Change of basis. Invariant space. Permutation matrix.

Chapter 13  VECTORS OVER THE REAL FIELD .............................. 100
            Inner product. Length. Schwarz inequality. Triangle inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Orthogonal matrices. Orthogonal transformations. Vector product.

Chapter 14  VECTORS OVER THE COMPLEX FIELD ........................... 110
            Complex numbers. Inner product. Length. Schwarz inequality. Triangle inequality. Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt orthogonalization process. The Gramian. Unitary matrices. Unitary transformations.

Chapter 15  CONGRUENCE
            Congruent matrices. Congruent symmetric matrices. Canonical forms of real symmetric, skew-symmetric, Hermitian, and skew-Hermitian matrices under congruence.

Chapter 16  BILINEAR FORMS ........................................... 125
            Matrix form. Transformations. Canonical forms. Cogredient transformations. Contragredient transformations. Factorable forms.
Chapter 17  QUADRATIC FORMS .......................................... 131
            Matrix form. Transformations. Canonical forms. Lagrange reduction. Sylvester's law of inertia. Definite and semi-definite forms. Principal minors. Regular form. Kronecker's reduction. Factorable forms.

Chapter 18  HERMITIAN FORMS
            Matrix form. Transformations. Canonical forms. Definite and semi-definite forms.

Chapter 19  THE CHARACTERISTIC EQUATION OF A MATRIX
            Characteristic equation and roots. Invariant vectors and spaces.

Chapter 20  SIMILARITY ............................................... 156
            Similar matrices. Reduction to triangular form. Diagonable matrices.

Chapter 21  SIMILARITY TO A DIAGONAL MATRIX .......................... 163
            Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic forms. Hermitian matrices. Unitary similarity. Normal matrices. Spectral decomposition. Field of values.

Chapter 22  POLYNOMIALS OVER A FIELD
            Sum, product, quotient of polynomials. Remainder theorem. Greatest common divisor. Least common multiple. Relatively prime polynomials. Unique factorization.

Chapter 23  LAMBDA MATRICES .......................................... 179
            The lambda-matrix or matrix polynomial. Sums, products, and quotients. Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.

Chapter 24  SMITH NORMAL FORM ........................................ 188
            Smith normal form. Invariant factors. Elementary divisors.

Chapter 25  THE MINIMUM POLYNOMIAL OF A MATRIX ....................... 196
            Similarity invariants. Minimum polynomial. Derogatory and non-derogatory matrices. Companion matrix.

Chapter 26  CANONICAL FORMS UNDER SIMILARITY ......................... 208
            Rational canonical form. A second canonical form. Hypercompanion matrix. Jacobson canonical form. Classical canonical form. A reduction to rational canonical form.

INDEX ................................................................ 215

INDEX OF SYMBOLS ..................................................... 219

Chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

    (a)  [ 2   3   7 ]        (b)  [ 1   3   1 ]
         [ 1  -1   5 ]             [ 2   1   4 ]
                                   [ 4   7   6 ]

and subject to certain rules of operations given below, is called a matrix. The matrix (a) could be considered as the coefficient matrix of the system of homogeneous linear equations

    2x + 3y + 7z = 0
     x -  y + 5z = 0

or as the augmented matrix of the system of non-homogeneous linear equations

    2x + 3y = 7
     x -  y = 5

Later, we shall see how the matrix may be used to obtain solutions of these systems. The matrix (b) could be given a similar interpretation, or we might consider its rows as simply the coordinates of the points (1,3,1), (2,1,4), and (4,7,6) in ordinary space. The matrix will be used later to settle such questions as whether or not the three points lie in the same plane with the origin or on the same line through the origin.

In the matrix

    (1.1)        [ a11  a12  ...  a1n ]
           A  =  [ a21  a22  ...  a2n ]
                 [ ...  ...  ...  ... ]
                 [ am1  am2  ...  amn ]

the numbers or functions aij are called its elements. In the double subscript notation, the first subscript indicates the row and the second subscript indicates the column in which the element stands. Thus, all elements in the second row have 2 as first subscript and all the elements in the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of order "m by n" or m x n.

(In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes used. We shall use the bracket notation throughout.)

At times the matrix (1.1) will be called "the m x n matrix [aij]" or "the m x n matrix A = [aij]". When the order has been established, we shall write simply "the matrix A".

SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an n-square matrix.

In a square matrix, the elements a11, a22, ..., ann are called its diagonal elements.
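These conventions are easy to experiment with on a computer. The short sketch below uses Python with numpy (a tool chosen here for illustration; it is not part of the text) to build the matrices (a) and (b) above and to read off their orders, a typical element, and the diagonal elements.

```python
import numpy as np

# Matrix (a): a 2x3 matrix, read here as the coefficient matrix of the
# homogeneous system  2x + 3y + 7z = 0,  x - y + 5z = 0.
A = np.array([[2,  3, 7],
              [1, -1, 5]])

# Matrix (b): a 3-square matrix whose rows are taken as points in ordinary space.
B = np.array([[1, 3, 1],
              [2, 1, 4],
              [4, 7, 6]])

print(A.shape)      # (2, 3): A is of order "2 by 3", i.e. 2 x 3
print(A[0, 2])      # 7: the element a13 (numpy numbers rows and columns from 0)
print(B.shape)      # (3, 3): B is 3-square
print(np.diag(B))   # [1 1 6]: the diagonal elements b11, b22, b33
```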
‘The sum of the diagonal elements of a square matrix 4 is called the trace of A 2 MATRICES. [oHaP.1 EQUAL MATRICES. Two matrices 4 = [aj] and B= [ij] are said to be equal (A=B) if and only if they have the same order and each element of one is equal to the corresponding element of the other, that is, if and only if 1g = Bag AQyeeeemi f= Lee) ‘Thus, two matrices ar equal if and only if one is a duplicate of the other ZERO MATRIX, A matrix, every element of which is zero, is called a zero matrix. When A is a zero matrix and there can be no confusion as to its order, we shall write 4 = 0 instead of the mxn atray of zero elements. SUMS OF MATRICES. If 4 = [a;;] and B ~ [b;;] are two mxn matrices, their sum (difference), 4 +B, is defined as the mxn matrix C= (e;;], where each element of C is the sum (difference) of the coresponding elements of A and B. Thus, 4B = [ays + bij] 125 230 Example 1, It A = and 8 then ‘ F 1 J i 2 ‘| isn 249 el iE 5 ‘| Aan = : orca is2 acs} La ao. ap PR BB HOP Ga ‘| O-(-1) 1-2 4-8 ana a ‘Two matrices of the same order are said to be conformable for addition or subtraction. Two matrices of different orders cannot be added or subtracted. For example, the matrices (a) and (b) above are non-conformable for addition and subtraction, and ‘The sum of k matrices 4 is a matrix of the same order as A and each of its elements is k times the corresponding element of A. We define: If F is any scalar (we call k a sealar to dis tinguish it from [] which 1s a 1x1 matrix) then by £4 = Ak is meant the matrix obtained from A by multiplying each of its elements by k. Example 2. If 4 2) then mien, = [ >]. Ataea [i aE hE q : {i “] = = ae 2 al’ al*b al |e o. and cya [2° al . [ “5 ) 512) 86) 10-18 In particular, by ~A, called the negative of 4, is meant the matrix obtained from A by mul- tiplying euch of its elements by -1 or by simply changing the sign of all of its elements. For every A, we have 4 +(—A) = 0, where 0 indicates the zero matrix of the same order as A. Assuming that the mateices 4,8,C are conformable for addition, we state: (a) A+B= Bea (commutative law) () 44 Bs) = sRy4C (associative law) (c) k(A+B) = kA + RB = (A+B)k, ka scalar (d) There exists a matrix D such that 4+D = B. ‘These laws are a result of the laws of elementary algebra governing the addition of numbers and polynomials. ‘They show, moreover, 1. Conformable matrices obey the same laws of addition as the elements of these matrices. CHAP. 1] MATRICES. 3 MULTIPLICATION. By the product 4B in that order of the 1xm matrix A= (as; a2 @o -.. ayy) and bas bas the mat matrix B= || 4s meant the 131 mattix C = [ass bya # robs * ++ aaq bes) as ae bot Phat 8, (as ea --s tae) -| 2 | = [eicbes bare bea tot aebes) = abn] bes, Note that the operation is row by column; each element of the row is multiplied into the cor- responding element of the column and then the products are summed. ’ branole a.) (2 34] EJ + Baaeneae) ~ 1 2, = ote Le] ce-con 2 By the product AB in that order of the mxxp matrix A= [aj] and the pxn matrix B= (6 {is meant the mn matrix C = (cj;] where PVD. ccems F912. cag = Gigbyj + aesbaj 4+" + aipbos = Eanes. ¢ ‘Think of A as consisting of m rows and B as consisting of m columns. In forming C = AB exch row of 4 4s multiplied once and only once into each column of 8. The element cg; of C is then ‘the product of the ith row of A and the jth column of B. Example 4. 
ma 2) rg an Osa F Oreb2s @asbio + aay bye AB = las, a2 = Jacsbar tavcbos aasbre + azcboe box boo orbia tanzbex a1 bre + ean boo, ‘The product 4B is defined or A is conformable to B for multiplication only when the number of columns of 4 is equal to the number of rows of B. If A is conformable to B for multiplication (AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be defined). See Problems 3-4 Assuming that 4, B,C are conformable for the indicated sums and products, we have (©) A(R +0) = ARE AC (first distributive Law) (f) ABC = AC+BC (second distributive law) (@) ABC) = (ABYC (associative law) However, (A) AB + BA, generally, (i) AB = 0 does not necessarily imply A =0 or B = (i) AB ~ AC does not necessarily imply B = C. See Problems 3-8, 4 MATRICES (CHAP. 1 PRODUCTS BY PARTITIONING, Let A= [a;;] be of order mxp and B= [b;;] be of order pxn. In forming the product 4B, the matrix A is in effect partitioned into m matrices of order 1xp and B into n matrices of order px1, Other partitions may be used. For example, let A and B be parti- tioned into matrices of indicated orders by drawing in the dotted lines as prem) | (Paxne) eae eo nee ‘ms xpe) | (maxpe) | (mxps) a= [pees yet A B = |opoxm) | oxn) {imgxpx) | (mp xpo) | (mexPa), <4 =a (poxn) | (Paxne), Bia | Bio Bea | By or B= | Boy | Boo Bas | Bao In any such partitioning, it is necessary that the columns of A and the rows of B be partitioned in exactly the same way; however my, mz,na,nz may be any non-negative (including 0) integers such that m,* m= m and ny+ny =n. Then ArsBea + AroBor + AsoBos AaiBaz + ArzBoo + Ais Bso Ca Cre AB = eile a AorBia + doaBor + AzoBos A21Ba2 + Apo Bao + Ao Bi Cor Coo 210 aa Example 5. Compute 48, given 4 =|3 2 0| and B=|2 1 104 23 Partitioning so that 4 - [4s Ae © [Aas doo] © AesBis ¢AseBos Aya Byo + AroBor we have AB AorBs3 +AaeBox AasBio + Aas Eee} ]-Be so Bag: 8 fo aft }ecne en nal * He] - (bs d-bo9 GG . en | fhigd+paq [o]+(2] {94 2)(2) B42? See also Problem 9. 2 5 2 Let 4,B,C,... be n-square matrices. Let 4 be partitioned into matrices of the indicated axes (Pax Ass Goxpd expe | eo see pes (ps P1) | (Psx Po) 1 (Ps Ps). Ass, and Iet B,C, ... be partitioned in exactly the same mannet. Then sums, differences, and products may be formed using the matrices Ais, Aya, «i Bass Buoy 1 Crus Cron CHAP. 1] MATRICES SOLVED PROBLEMS 2 143 26(-4) -141 042 4-1 3]. fet See ae J ; 54-2) 148 a+ ao 24 -1-1 0-9 26 ~2 -2 o-5 2-0 1-3] = | 3-5 2-2 “542 1-9 244, 0-3-2 4 43. Po, 1-3-p 2-2-9] foa-p 9 00 we a+B—D = |aeicr 4-5-5] =| 4-7 -1e] = fo of, -2-p=0 and p=—2, 4 12 3 -2 b |] 2. A=|3 4] and B=] 1-5], tind D=|r s| such that A+B-D = 0. 5 6. Seder 6+ 3—u, ot on 00, 2 0 men D=| 4 -1|2 448 88 eeu 3. (@) (a5 6] LI = (4ay+5@+ee-n) »fi7) 1 a 24) 2) 2067 8 10 17 @) | a} [456] = | 3) 36) 306) 12:15 18 1 @) -16) 16), 4-5 6. 4-6 9 6 wy (12a) }o-7 10 7 5B 8-11 ~8 [aca + 2¢0) + 3(5) 1(-6) + 2(—7) + 3¢8)1(9) + 210) #811) 146) + 27) + 3-8) [is 4 -4 -4] A + + w P24] - forse seen) _ pr re AE] ~ Lassa sec] ~ |» 3-4 ofedias Ake 2, 2 ott ance a =[6 2], oem 104 » [2-2 gp e117 pear @ alo rallo 12)-|2 14] one 1 ogh oi} Ls -re, ‘The reader will show that 4° 1@) +2412) 1-H ¥26)+10)] _ fs 8 4() 400) +22) 4(-4) +05) +2} ~ Ls -12 4 8-3} -1 a) fu ao 4.4 =|2 14]]o 12)-| 3-18 sah o4 8-43 a8, AA? and APA? 6 MATRICES. [cHAP. 1 5. Show that: A F @ 2 aadysten) = 3, einbey +3 avers. 
aanbandeng- Babyy teas) * 2elbog taj) = Past Aiohs) + Cintas taints) = Seary + Jeane (era + ain + a19) + (sa + 000 + @20) (213 +402) + (2 + 420) + (O39 * 30) Rb ‘This is simply the statement that in summing all of the elements of a sratrix, one may sum first the elements of each row or the elements of each column © Jeacd = Zein arey * bactag * Pant es? = aintbraces + braeost baacaj) + ialboxeas + beats + bootsj) + isha Haiabodery + Ciabie + tibasdens + Gtabie + aizbradeos (Eertines +6 aumbeadeaj + (E ainbyade inberders + (E aiebys) = Ed eaters 6, Prove: If A = [ag] Is of order mm and if B = [b;,] and C~ {e,4] are of order nxp, then A(B + C) AB+ AC. “ ‘The elements of the ith row of A ate ais, ajo, -:+ iq and the elements of the jth column of B+C are Bajtenss bast 603 ‘vonj- Then the element standing In the ith row and jth column of A(8 + C) 1s efbaj ea) ¥ bes eap) Fon tugs tog = E aay tea; E ondas+,E,euens. the sum ot the elements standing inthe th row and jth coluan of 40 and AC, 1. Prove: it A= [ai] is of on then A(BC) = (AB)C. 2 ‘The elements ofthe throw of atewis,ais,...94y andthe elenents ofthe th column of BC are E. bineyy. t mxn, it B= (4s5] 18 of order np, and if C = [cy] 18 of order peg, 2 Ejbentaj o> tunengi hence the element standing inthe ith row and th column of 4 (BHC) is 7 t zo a2 nay tie B bonny to + aig Zany = Foch taney : (2 embmrens 5 2 ambaney + CZ, aiebeadeng + + (2 emebaphens ‘This is the element standing in the ith row and jth column of (48)C; hence, A(BC) = (AB)C. 8. Assuming A,B, C,D conformable, show in two ways that (A +B)(C+D) = AC + AD + BC +BD. Using (e) and then (f), (A +B)(C+D) = (A+B)C +(A+B)D = AC + BC +40 + BD. Using (7) and then (e), (A+8)(C+D) = A(C+D)+B(C+D) = AC +AD +BC + BD AC +BC +4 + BD. CHAP. 1] MATRICES 7 hoo Hoo] fo] fodfiog i oo] faa] fang] 9.c@ fo 10, 2\\5 6 9 = or olfo 1 of+|2|(a 121 = for ole 2 a] - Joa 4 oor fe 2"} Joo allo oa] |p oor) issal [sar 12, 1 0 3 |? 0 0 i hh 213 4516 1fi2) fiapas) fi gy 2 2 alas Git eis) babsad b Wh. wl? pais e7lal _ |e 1 Af 4] 1 Ale 6 a fa ate lo 45/67 8\9 12 1} ]4 5] ft 2 1] Je 7 8] fa 2 ho 0 jo site sia] fot als al lox bo sf lox le 0 pte 5 411 UMe7) (es4) Oy Elbe see] 35 1 9M13 4 TU0 13: 16) L 19, 4 7 10 13 16 19 _ |fpr 93]f35 37 soqfaty] _ fat a3 a5 a7 99 41 20 z2{[24 26 28l|s0|| ~ }20 22 24 26 28 30 13 saflis 13 aallia|} [is 13 13 13 13 13 fe Me s alti] L?7 65 #1 eyeter ery snes 10. Let |e = assy. asoye be three linear forms in ys and nat { : Xo = Geaya + Oo2yo Yinear transformation of the coordinates (y.,y2) into new coordinates (=, =2). The result of applying the transformation to the given forms is the set of forms B= (arabia t @yoboaden + (Brrbay + droboo) 0 % at aooberder + (derbs2 + teaboa) % + eaabosd22 + (aidis + Aaoboo) 22 Using matrix notation, we have the three forms |x2] = ass a22 fees (] and the transformation [e]-[E SE] merem ron te tana EES ae) ‘Thus, when a set of m linear forms in n variables with matrix d is subjected to a linear trans- formation of the variables with matrix B, there results set of m linear forms with matrix C = AB. ation Is the set of three forms 8 MATRICES (CHAP. 1 SUPPLEMENTARY PROBLEMS 12-9 fs i civen 4={5 0 af. 8 2 5), and s-1 4 2-4 @j ) Compute: -24 = |-10 0-4], 0-8 = 0 -2 2-2 (©) Verify: 4+(B—C) = (4¥B)-6 (d) Find the matelx D such that A+D =. Verity that D = B—A = ~(A~R), 23] um 6-1 12, Given 4 = 4 6], compute 48 =0 and BA =|~22 12 —2]. Hence, AByR4 23 <1 6-1 generally. 1-3 J 1 4a0 2 1-1-3] 13. Given A= [2 1-3), B= [2 114), and C=|3 2 ~1 1), show that 4B = AC. 
Thus, AB =AC 4-3-1 a -21 2 2-5-1 9 does not necessarily imply B = C. 1 14. civen 4 =|2 3 {]- show nat Bye = Be, 15, Using the matrices of Problem 11, show that A(B+C) = AB +4C and (A+8)C = AC +BC. 16. Explain why, in general, (A+R) = 4°4 248 + B? and A? — + AA ByA+B), 2-3-5] “13 9 2-2-4 aawen 4 =|-1 4 5), B=! 1-3-5], and c=|-1 3 al. 1-3-4] Pipiates 1-2-3 (@) show that AB = BA =0, AC» 4, CA (®) use the results of (a) to show that ACB=CBA, 4 i = (A-BY(A+B), (A+ BY © A? + BP 18. Given 4 -[ I]. whee = 1, derive oma for he postive inte powers ot & Ans. AT 21, A, 04 Dw that the product of any two or more matrices of the set {1 ° ede wep 2 OL fE 9) Fe sso at oummaattiorrmnemtionctoens fs LLL EEL I.E 9G) [: 4}: [ 4 is a matrix of the set. 20, Given the matrices 4 of order mxn, B of order nxp, and C of order rxq, under what conditions on p. 4, ‘and r would the mairices be conformable for finding the products and what is the order of each: (a) ABC’ (ACB, (e) ABHCN? Ans. (a) p A according as m= 4p, 4p+1, 4p+2, 4p+3, where J = [3 | mxq (b)r=m =a; mx (er =n, p=9: meg cHAP. 1] MATRICES. 9 21. Compute 4B, given: a ww 9 0 coat 22, Prove: (a) trace (4+B) = trace A + trace B, (b) trace (kA) = k trace A = tye ty ya = ent Oey p 1-2 i)" itty, ae le saath. sot [2] ‘fe By tye 899 ME YT 2) ~ [2 1-3)? nay tte ~224 ~ 62 24.164 Lay] and & = [645] ato of onder mn and i C= [egg] fe of order np, show that (L4B)C = AC + BC 2. Let A (oy) and = [ogy], where (f= 1,2,.c0omi7 = As2eve spi k= 1,2).0sm). Denote by Bi; the sum of Bs Bo Ma ‘he elements of the jth row of that is, let j= 2 bjy. Show that the element in the ith row of A> Pe 4s the sum of the elements lying in the ith row of AB. Use this procedure to check the products formed in Problems 12 end 13, 26, A elation (such as parallelism, congruency) between mathematical ontities possessing the following popertios: (4) Determinative Either « is in the relation to 6 or a is not in the relation to b. Gi) Reflexive «is in the relation to a, for all a (it) Symmetric Ifa is in tho relation to b then & is in the relation to a. (iv) Transitive Ifa 4s in the relation to & and bis in the relation to ¢ then a is in the relation ta e. 4s called an equivalence relation. Show that the parallelism of lines, similarity of trlangles, and equality of matrices are equivalence relations. Show that perpendicularity of lines is not an equlvalence relation 21. Show that conformability for addition of matrices 1s an equivalence relation while conformability for multi- plication is not, 28. Prove: If A,B,C are matrices such that AC = C4 and BC =CB, then (AB 4 BA)C = C(AB 4 BA), Chapter 2 Some Types of Matrices ‘THE IDENTITY MATRIX, A square matrix A whose elements aj; = 0 for i> is called upper triangu- lar; a square matrix A whose elements a,;~ 0 for i 3] is 4'-]2 5). note that the element o,, in the ith row 458 ete i and jth column of A stands in the jth row and ith column of A If A’and Mare transposes respectively df 4 and B, and if k is a scalar, we have immediately @ y= 4 land (b) (kay = ka In Problems 10 and 11, we prove: | HL. The transpose of the sum of two matrices is the sum of their transposes, ive (Asby| = ao 2 SOME TYPES OF MATRICES [CHAP. 2 and MIL. The transpose of the product of two matrices is the product in reverse onder of thelr transposes, i.e., By = BA See Problems 10-12 SYMMETRIC MATRICES. A square matrix A such that A’=4 is called symmetric, Thus, a square matrix A = [qj] is symmetric provided aj; = aj, for all values of i and j, For example. 
ies 2 4-5] is symmetric and so also is E for any scalar k, 3-3 6 A In Problem 13, we prove TV. If A is an n-square matrix, then A+ 4 is symmetric A square matrix A such that 4°= ~A is called skew-symmetric. Thus, a square matrix 4 is, skew-symmetric provided aj; for all values of i and j. Clearly, the diagonal elements are 0-23) 2 0 4| 4s skew-symmetric and so also is KA for any scalar k. “3-40, zeroes. For example, 4 With only minor changes in Problem 13, we can prove V. If A is any n-square matrix, then 4 ~ 4° is skew-symmeteic. From Theorems IV and V follows VL. Every square matrix A can be written as the sum of a symmetric matrix B= $(4+d°) and a skew-symmetric matrix C= }(4-4) See Problems 14-15. THE CONJUGATE OF A MATRIX. Let « and 6 be real numbers and let i= =i; then, z= a+bi is called a complex number. ‘The complex numbers a+ bi and @~ bi are called conjugates, each being the conjugate of the other. If z = a+bi, its conjugate is denoted by 2 = a+ i If 2,=a4bi and z,==,-a~bi, then %~%,-a-bi=a+bi, that is, the conjugate of the conjugate of a complex number = is = itself, If z= a+6i and 2=c+di, then (1) a4 zp = (ate) + (bed and PFE = (atey~ (beds = (a-biy + (e-di) = He that is, the conjugate of the sum of two complex numbers is the sum of their conjugates. (ii) 2y-y = (aerbd) + (adsbeyi and Fz = (ac~bd)~(adsbeyi = (a—biXe=di) = F-% that Is, the conjugate of the product of two complex numbers Is the product of their conjugates When 4 is a matrix having complex numbers as elements, the matrix obtained from A by re~ placing each element by its conjugate is called the conjugate of 4 and is denoted by A (A conjugate), teat Example?. When 4 = 3 if Zand Bare the conjugates ofthe matrices 4 and B and if 1s any scalar, we have readily © AoA and () GA - BA ‘Using (4) and (if) above, we may prove cHaP. 2) SOME TYPES OF MATRICES 13 VIL. The conjugate of the sum of two matrices is the sum of theit conjugates, i.¢. (AvB) = A+B. ‘VIM. The conjugate of the product of two matrices is the product, in the same order, of ‘their conjugates, i.e., (1B) = The transpose of J is denoted by A’(A conjugate transpose), It is sometimes written as A* We have IX, The transpose of the conjugate of A is equal to the conjugate of the transpose of Ate, (Ay = (A). Example 3. From Example 2 ye ATF 8 wnite are JAF are bo HERMITIAN MATRICES. A square matrix A=[a;j] such that =A is calied Hemnitian, Thus, A Js Hermitian provided oj; » aj; for all values of i and j. Clearly, the diagonel elements of an Hermitian matrix are real numbers. roast al Example, The matrix 4= {142 3 | 4s Hemitian. Is kA Hermitian if k is eny real number? any complex number? A square matrix A = (a;;] such that 7=-A Is called skew-Hermitian. Thus, 4 is skew- Hermitian provided a;; = dy for all values of i and j. Clearly, the diagonal elements of a skew-Hermitian matrix are either zeroes ot pure imaginaries, Example 5. The matrix A = é] 1s skew-Hermitian. ts £4 skew-Hemitian if k 1s any real ‘number? any complex number? any pure imaginary? ‘By making minor changes in Problem 13, we may prove X. If A is an n-square matrix then A+ is Hermitian and A~ is skew-Hermitian, From Theorem X follons Al. Every square matrix A with complex elements can be written as the sum of an Hermitian matrix B =4(A+4) anda skew-Hermitlan matrix C= 5(4- 4) DIRECT SUM. Let 4;, 4p,..., 4s be square matrices of respective orders my, mp, ...ms. ‘The general- ization of the diagonal matrix is called the direct sum of the 4; 14, SOME TYPES OF MATRICES (CHAP. 
2 a-t Examples, Let 4 03 1-2 20000 0 012000 034000 ‘The diroet sum of As. Ap dy 18 ding(Ay Aadad = |5 9 209 00020 3 0041-2 Problem 9(b), Chapter, illustrates MIL If A= dlag(A,, dp,.,43) and B= diag(Bs, Boy...,B,), where Ay and By have the same order for (i= 1, 2,...,8), then AB = diag (A,B, AgBa «... AB), SOLVED PROBLEMS 4, 0 0 | bs bre ban Midis didi Gaby Op bay Boao ban bor Aoabaa oben 1. Since ° a aa , the product AB of o 0 Ce Senbar Ganbae ++ Oanban| fan m-square diagonal matrix 4 = diag(as. dso, dgq) and any mxn matsix B is obtained by multi- plying the first row of B by ay, the second row of B by a>, and $0 on dc. ‘a tfc d] _ facebd adsbe] _ [e d][a 0) wis uors oom [FYE] Geos sie] - Eb 2-2-4 3. Show that |-1 3 4] is idempotent 1-2-3 2-2 -d[ 2-2-4 2-2-4 #& = l-1 a afl-aa af = Jaa af = 4 1-2 -a][ 1-2 -3 1-2-3, 4. Show that if AB <4 and BA~=B, then A and B are idempotent ABA = (AB)A = A-A =A and ABA = A(BA)= AB =A; then 4? «A and A 4s idempotent, Use BAB to show that B is idempotent. cap. 2} SOME TYPES OF MATRICES 15 2-1-3 rasffaag ooo oo ofii a & 5 2 6/5 2 6J-|3 5 9] and 4 =| 3 3 9]]5 2 6)-0 2-1-3] [-2 1-3) [+1 1-2 1-1-3] [-2 -1 -3 6. If 4 fs nilpotent of index 2, show that A(+4)"= 4 for n any positive integer. Then AU say = A(lendy = Atnd® 2 A 7. Let A,B,C be square matrices such that AB = C. Thus, B=C and CA=1. Then (CA)B = C(AB) 80 that B= 4” 4s the unique inverse of A. (What is B~*?) 8 Prove: (ABy* = RA By definition (ABY"(AB) = (AB)(ABY" = I. Now eva and ABB*AY) = ABR AY = AA 2 By Problem 7, (4B)" is unique; hence, (4B)* - B*-4* 9. Prove: A matrix A is involutory if and only if (J—A)(I+ A) « 0. Suppose (J-A)({+4) = 1-4 = 0; then 4? = [ and A is invotutory. Suppose 4 is involutory; then # =I and (I-A)(I+A) = 1A? = I= 10. Prove: (A+By' = A's B Let 4 =[aj5] and # = [bij]. We need only check that the element in the ‘th row and jth column of A,B. and (4+BY are respectively ag. Bj, and 04 + 85. ML. Prove: (ABY = BA’ Let A = [oj] be of ontor man, 8 = (4,j] be of order mp; then C= AR cig] 18 of order map. The element standing in the ith row and jth column of AB is cj; = ¥ ajy-byj and this is also the element stand- {ng in the jth row and ith column of (4BY. ‘The elements of the jth row of B’are b,j, boj.....bnj and the elements of the ith column of d’ate af, to. 9y- ‘Then the element in the jth row and th column of Bis Boye = Seah = 6 Thus, (ABY = Ba 12. Prove: (ABCY = CBA. Write ABC = (4B)C. ‘Then, by Problem11, (ABCY laBycl = c(aBy = CB. 16 SOME TYPES OF MATRICES [cHAP. 2 13. Show that if A=(aj;] is n-square, then B First Proof, byl= 44a is symmetric, ‘The element inthe /h row and jth column of 4 is a4; and the corresponding element of A's aj; hence big = 045+ 0:4. Tho element in the jth row and th column of 4s ay; andthe corresponding element of A's gyi hence. By = aj, +a;;. Thus, by; « bss and B is symmetric Second Proof, By Problem 10, (44 = A'¢ (4 = ded = A 4A and (4444) Is symm 14. Prove: If A and B are n-square symmetric matrices then AB is symmetric if and only if 4 and B commute, Suppose A and B commute so that AB 1A, ‘Then (ABY= BA"= BA = AB and AB is symmetric. Suppose AB is symmetric so that (48Y tices 4 and B commute. B. Now (ABY = BA’ = BA; hence, AB = BA and the ma- 15. Prove: If the m-square matrix 4 is symmetric (skew-symmetric) and if P is of ordet mxn then B = PAP ts symmetric (skew-symmetric), IfA is symmetric then (seo Problem 12) B= (PAPY = PA‘P'Y = PA'P » PAP and R is symmetric If Ie skew-symmetric then B’ = (PAPY = -PAP and B is skew-aymmettic. 16. 
Prove: If 4 and B are n-square matrices then A and B commute if and only if 441 and B~kI ‘commute for every scalar k. Suppose 4 and B commute; then AB - B4 and ASRB =H) = AB ~ RAB) + PT = RA RAED = (B= RICA -Kt) Thus, 4 and B—&1 commute Suppose 41 and B-KI commute; then AHEM) = AB WAR) + ET BAW WASB)+ ET = (B—RIA—Kh AB A. and A and B commute. CHAP. 2] SOME TYPES OF MATRICES 17 SUPPLEMENTARY PROBLEMS 17, Show that the product of two upper (Lower) trungular mattices is upper (lower) triangular. Derive a rule for forming the product BA of anmxn matrix Band A = diagiess, azo, Hint. See Problem 1 nn 19, Show that the scalar matrix with diagonal element & can be written as kl and that kA where the order of fis the row order of 4. TA = diagth, ky k) A, 20. IFA is n-square, show that 4°47 49.4 where p and 4 ate positive intesers. 2-3-5 “18 8 21, (@) Show that A= J-1 4 5] and 9 =| 1-2 -5| are idempotent. 1-3 4, -1 3 5) () Using 4 and A, show that the converse of Problem 4 does not hold. 22. If is idempotent, show that B =A te sdempetent and that 4B = BA = 0, 23. (a) If A ota i Pi-1-if? fou oP ro zi. srow that [0 1 of = [-r-1-if = | o 00 4 oo a1) a 25. show that A= |-s 2 9] is pertodic, of period 2. 2 0-3. 44-51 =o. oe 1 2 1-3-4 26. snow that |-1 3 4 is nilpotent. 13 124) 2-1 = 21. show that @) 4 =| 3 2 of ang B=] 3 2 9) commute, 1-1-1 1-1 - 11d 2/3 0-1/3] 4 =| 231] and 8 =|-3/5 2/5 1/5] commute, 124 a8 -1/5 1/15] tet = [2 1} anttcommite anc + = AP + BP, 80. Prove: The only matrices which commute With every n-square matrix are the n-square sealar matrices. 31. (a) Find all matrices whick commute with diag(t, 2,2) (8) Pind all matrices which commute with diaa(azs, aoe, Ans. (a) diag(a,6,c) where a,b,¢ are arbit an 18 32, 33 38, 25. 36. a7. 38, 39 41, 2. 43. 4. 45, 46, a. 48. |. Prove: (ABCY* = CABTA, Hint, Waite ABC SOME TYPES OF MATRICES [oHaP. 2 rad Show that (@) | 2 5 1] is the inverse of -2 4 5 1000] 2100 8 the inverse of |) ag of Bees 2314 se [JE a] + [b GJeeastmeimeneor [ftp am [52 tn] Show that the inverse of a diagonal matrix A, all of whose diagonal elements are different from zero, 1s & diagonal matrix whose diagonal elements are the inverses of those of A and in the same order. ‘Thus, the anverse of fy 18 fy 01 sag show that A= ]¢—-3 4[ and 8 = |-1 01] are invotutory, p34 -4 -4-3, Prove: (a) (4°)’= A, (b) AY = kA’, (©) ay (A)? for p a positive integer. (aByc 1 tA Prove: (a) ty#= A, (b) Gedy? = Lact, (ey Py = ah? for p a positive integer. Show that every real symmetsie matrix is Hermitian, Prove: (a) y= A, (6) @¥B)= A+B, (0) GA)= RA, @) ABY= TB. 1 ise 243i Show: @) 4 = ]i-¢ 2 =i | is Hermitian, pai dO i are asi i) B= |-1sr 281 | is skew-Hormitian, 2-31-10 (0) iB is Hermitia (@) Ais Hermitian and B is skew-Hermitian, IfA is n-squate, show that (@) 44” and 4’A are symmetric, (6) A+’, AA’, and 4°A are Hermitian, Prove: If is Hermitian and 4 is any conformable matrix then (AY 114 1s Hermitian, Prove: Every Hermitian matrix A can be written as B+ iC where B is real and symmetric and C is skew-symmetric al and Prove: (a) Every skew-Hermitian matrix A can be written as A= B+iC where B is real and skew-symmetric fand C is real and symmetric. (b) A’A s real if and only if B and C anti-commute Prove: If 4 and B commute s0 also do 47 and 8°, A’ and A’, and 4° and F Show that for m and m positive integers, A” and B” commute if A and & commute. CHAP. 2] SOME TYPES OF MATRICES 19 De el aoa oft fit ma see) aay fat ax 2 49. show (@) =~" fora) -fo x a’ oA ee oo A oo pl 50. 
Prove: If 4 is symmetric or skew-symmetric then 44°= 44 and # are symmetric. BL. Prove: If A is symmetric so also is ad” +h4° 4. ¢gh where a,b. g Integer. @ scalars and p is a positive ‘52, Prove: Every square matrix 4 can be weitton as A - B+C where # is Memitian and C is skew-Hornitian, 33, Prove: If is real and skew-symmetsic or if 4 is complex and skew-Hemmitian then 4/4 are ternitian 54, Show that the theorem of Problem 52 can be stated: Every square matrix 4 can be written as 4=B8+iC where B and C are Hermitian 55. Prove: If 4 and B are such that AB =A and BA =B then (a) BA’ idempotent, (c) A « B= 1 if A has an inverse. A and A°8’= 8°, (6) A’and B’ate 56. If 4 is involutory. show that (+4) and $(/A) are \dempotent and 4iI+4)- 540A) = 0. 81. If the n-saunre matrix A has an inverse 4, show: (a a (by ay* ©) APF int, (2) From the transpose of AA" = 1, obtain (AY as the inverse of 4’ 458, Find all matrices which commute with (a) dlag(1.1.2.3). (8) diag(t. 1.2.2) Ans. (a) diag(A.b,c). (b) diag(4.B) where 4 and & are 2-squate matrices with arbitrary elements and 5. ¢ are scalars, 89.1 Ay do. .u.dy te Scalar matrices of respective orders ms,mo,....m3. find all matrices which commute with dingy Aa... As) Ans. diag(B,.Ba.....Bs) where Ry. Ba, ....Ry are of respective orders m3.m. ....ms with arbitrary elements. 60. If AB =0. where 4 and B are non-zero n-square matrices, then 4 and & are called divisors of zero. Show ices 4 and B of Problem 21 are divisors of zero, Ag) and B = ding(B,,Bo.....B3) where A; and By 1 of the same order. (i -#), show that CO) AGB = AaB CAYE By, Ag Bans Ag t Ba) (8) AB = ding (As Bs.AoBa. ode Bo) () tiace AB = trace AyBy + trace Aafia +... + trace A,B, 62, Prove: If and # are n-squate skew-synmetric matrices then 4 is symmettic if and only if A and # commute. 63, Prove: If 4 is m-square and B Atal 5, then 4 and B commute, | Let A and B be n-square matrices and let nr, ss.s2 be Scala nAts.8, Co=ryA+s2B commute if and only if A and & commute, such that rsp 4 rs. Prove that C= 165. Show that the n-squate matrix A will not have an inverse when (a) A has a row (column) of zero elements or (8) 4 has two identical rows (colunns)or (c) 4 has a row (column) which i the sum of two other rows (columns). 66. If A and B are n-square matrices and A has an inverse, show that AsBAMA-By = A~BYA*A +BY Chapter 3 Determinant of a Square Matrix PERMUTATIONS. Consider the 3 aa 193 192 213 281 912 32 and eight of the 4 6 permutations of the integers 1,2. taken together 24 permutations of the integers 1,2,3,4 taken together 1234 2194 9124 4123 a 1324 2314 3214 4213 If in a given permutation a larger integer precedes # smaller one, we say that there fs an inversion. If in a given permutation the number of inversions is even (odd), the permutation is called even (odd). For example, in (3.1) the permutation 123 is even since there is no inver~ sion, the permutation 132 is odd since in it 3 precedes 2, the permutation 312 is even since in it 3 precedes 1 and 8 precedes 2. In (3.2) tho permutation 4213 is even since in it 4 precedes 2, 4 precedes 1, 4 precedes 3, and 2 precedes 1 ‘THE DETERMINANT OF A SQUARE MATRIX. 
Consider the n-square matrix aay i tos ze Bog oom, and a product G4) 3} Oj, ajo On, of n of its elements, selected so that one and only one element comes from any row and one and only one element comes from any column, In (3.4), as a matter of convenience, the factors have been arranged so that the sequence of first subscripts is the natural order 1, 2,....m; the Sequence jy ja.» y Of Second subscripts 1s then some one of the n! permutations of the inte- gers 1,2.....m. (Facility will be gained if the reader will parallel the work of this section be- jainning with a product arranged so that the sequence of second subscripts is in natural order.) For a given permutation j,, jz... jy of the second subscripts, define ©) 5... 4, = according as the permutation is even or odd and form the signed product Hor-1 as © fubaoes dn Oth Che OMe ‘By the determinant of 4, denoted by |], is meant the sum of all the different signed prod- ucts of the form (3.5). called terms of |4|, which can be formed from the elements of 4; thus, 66) Vl =F eddbsnndy ah Sah eri where the summation extends over pn! permutations j.jy...jq of the integers 1, 2,...m ‘The determinant of a square matrix of order n is called a determinant of order n. 20 CHAP. 3] DETERM! NANT OF A SQUARE MATRIX 21 DETERMINANTS OF ORDER TWO AND THREE. From (3.6) we have for n=2 and n=3 GB ue = &19 @yxdon + £2 MaoGoy 11092 — Gy0Goq (3.8) 4g; Boo Mog 129 1200 + Gop Mx Gzaan + Grn 22% dog “Jag. daa a Jay Oe " © [2B] ~ 20-cm = ors o © fies] : = 133-51) = 4 2, Adding to the elements of the first column the corresponding elements of the other columns, “1 4 1 1 1 1 1 1 1 1-e 1 1 1 1 1 1-4 a 1 igeseateen 1 ot rad Tee one-star |r 1 Ce ieasenl “4 O11 1-4 by Theorem I 3. Adding the second column to the third, removing the common factor from this third column, and using Theorem Vit 1a bee 1a asbee red 1 beta] = |1d orb+e] = avbecyli a1 o Le ass Le atbee ted 4, Adding to the third row the first and second rows, then removing the common factor 2; subtracting the second row from the third; subtracting the third row from the first: subtracting the first row from the second; finally carrying the third row over the other rows 26 DETERMINANT OF A SQUARE MATRIX (CHAP. 3 a+b, agtbe aot bl ath, aptby — agtly a:b; apt be a+b bytes bobes Baten] = 2| Biter Baten —bBateg | = 2] ater Baten boted ext, cote est acl Ja,+5,4¢, ast botee aat bat eo ty a) PD Bal 4 as oy a]brter bores bated ales eo co 2] bs be by mae J ae oy ese ea con 5. Without expanding, show that [4 = | a a, 1) = -(a,~ 42)(a_~ ap) (a9 ~ a). ay ‘subtract the second row from the frst; then Ga aa 0 later 1 | Jal a tf = (qnaa| 8 ae 1| by Theorem m ey aay 1 and aja, is a factor of [A], Similarly, a—aq and eq~a, are factors. Now |4] is of order three in the Ietters; hence, a lal «ha; —a,¥ao~aaXan—as) ‘The product of the diagonal elements. dan, 1s a term of || and trom (1), the term is —Adjoa. Thus ect and [dl =~ (a; aia aghay~01)- Note that /4] vantshes if and only If two of the 1. dp, dg are ‘qual 6. Prove: If A is skew-symmetric and of odd order 2p~ i, then |] = 0. __ Since 4 is skew-symmetric. a'=—4; then [al = [al = cuy'?*lal = =ldl. But. by Theorem a, lat = ial; nenee. /4!=—lal and |) - 0. 7. Prove: If 4 is Hermitian, then | 4| is a real number. since 4 is Hermitian. 2 = 4’, and [4) =|] = |4| by Theorem m. But if Ul = Seg deheagentng, = 00M be ee ee Now [A] = {4] requires 6 = 0; hence, |4] is a real number, 12 8. For the matrix A = |2 3 12 aa seue |e ifen aes en fPe aa ears? fe an = cor? ff te orf] ae ental? 
te carafe d ta = vrei d woo = ene|t cua. 3 DETERMINANT OF A SQUARE MATRIX 2 Note that the sigs given to the minors of the elements in forming the cofactors follow the pattem ‘where each sign occupies the same position in the display as the element, whose cofactor is required, oc- cupies in A, Write the display of signs for a S-equare matrix. 9. Prove: The value ofthe determinant |4| of an n-squate matrix A is the sum of the products obtained by multiplying each element of a row (column) of A by its cofactor. We shal rove ts for «tow. The tems of.) having as A factor ae @ te 6 Mth May NOW Gi. Jy © Sb devedy Since Im a permutation 1. ..--jn the 146 Mn natural order, ‘Then (a) may be written as 2) 831 Bafa nS thy Oni where the summation extends over the "= (n—1)! permutations of the integers 2.9.....m, and hence, as 25 Aq. Man © ay] "7 “8 ™) = ays Mel feo Apa Sn Consider the matrix B obtained from 4 by moving its sth eoluma over the first s—1 columns. By Theorem Mi. |B! = (-1)" "Id|, Moreover. the element standing in the fitst rom and first column of B is a5 and the minor of ays in B is precisely the minor |Mjs| of a;5 in 4. By the argument leading to (c), the terms of 4:5 [Myo! ate all the terms of || having as as a factor and, thus, all the terms of (—1)°"*l4| having 5 8s, ‘factor. Then the terms of aso{(-1)* ‘lifys|I are all the terms of || having c.5 as @ factor. ‘Thus (3.15) 4l ake Wag + exch! latslh + aysh-1)""4| + anki) Mant Sith + sree + + aint since (-1)"* = 1)". We nave (3.9) with ¢ =1. We shall call (3.18) the expansion of [A alone its first ‘The expansion of |A| along its rth row (that 1s, (2.8) for /=r) is obtained by repeating the above argu: iments. Let 8 be the matrix obtained from A hy moving its rth row over the first r—t rows and thom its sth co! lumn over the first s—1 columas. Then B| coe la] ola ‘The element standing in the first row and the rst column of B is a,, and the minor of oy, in B is precisely the minor of ars in A. Thus, the terms of ay gha1y | Myalt are alt the terms of || having o-sas a factor. ‘Then Ml + Beater Wal = 2 ents and we have (3.9) for 28 DETERMINANT OF A SQUARE MATRIX (car. 3 10, When ajj is the cofactor of aj; in the n-square matrix 4 = [a;;], show that tag thee sen hy fay 59 0 00,5-3 Be 9,543 0 Bon @ Kyatys + Rattas + By ego Bn Ojeda ‘This relation follows from (3.10) by replacing a,j with ke, ap; with kp, .dqj with ky. Th making these replacements none of the cofactors Mj oj... hj appearing is affected since none contains an element from the jth column of 4 By Theorem VI, the determinant in (3) {8 0 when ky= ary, (r= 1.2....9 and s 4). By Theorems VII fand VIL, the determinant in (2) 1s |4| when ky = a7j+ hays. (P= 1,2). and s4)), rite the equality similar to (2) obtained from (3.9) when the elements of the ith row of 4 are replaced Dy a igs oly 1 02 345 28 25 38 41. 
Evaluate: (a) |A| = ]3 04 @ |Al =] 12.3 we [Al = ]42 38 65 2-51 -25-4 36 47 83 148 2 3-4 (b) (Al = |-215 @ |Al = [5-6 3 324 42-3 (a) Expanding along the second column (see Theorem X) 1 02 304 2-81 lal = (@) Subtracting twice the second column from the third (see ‘Theorem IX) (@) Subtracting three times the second row from the first and adding twice the second row to the third 04 5 3-40) 4-20) $-9@)— Jo-2-4 4 lal = Jars rer v2a} + -[2-4| asa] [-2eaa seam -e2@] do 0 2 =~ (4436) = 92 (@) Subtracting the firet column ftom the second and then proceeding as in (¢) Dat 2 1-4 amy 1 ~4e4ay lal = |s-e a) = serra) = |sqaeiy -11 aeaeap sana] [a 2-3 aaa) 2-242) o 1 0 = faa a} = -|% hl Sea 8-2 it CHAP. 3} DETERMINANT OF A SQUARE MATRIX 29 (e) Pactoring 14 from the first column, then using ‘Theorem IX to reduce the elements in the remaining colunns 28 25 38; 2 25 28 2 25~12(2) 38-202) la 42 28 63! 14] 38 65 14] 38~12¢3) 65-2043) 56 47 83, 447 83 4 47-124) 83 -2044)| 2 1-2] 0 10) = uals 2s} - a4af-1 29 = -14-1-59 = 10 4-13 B11 12, Show that p and q, given by (3.13) and (3.14), are either both even or both odd. Since each row (column) index 1s found in either p or 4 but never in both, P 4g = (F2+Hom) + LEBER) = Bedaett) = meet) Now pg 1s even (either n or n+1 is even); hence, p and q are either both even ot both odd. ‘Thus, (1)? = (1) and onty one need be computed 123 678 13. For the matrix A = [a,J = }11 12 13 16 17 18 19 20 21 22 23 24 25 the algebraic complement of | 45'3| is prsezea) 93.0.5) 28 8 cayeertLastsl| = -|16 18 20] (see Problem 12) am sod te alsa component ot [42%8lie [42 = =],7 8 SUPPLEMENTARY PROBLEMS 14. Show that the permutation 12594 of the integers 1,2,3.4.5 is even, 24135 is odd, 41532 1s even, 53142 18 odd, and 52914 1s even 45. List the complete set of permutations of 1.2.3.4, taken together; show that half are even and half are odd, 16. Let the elements of the diagonal of a S-square matrix 4 be o.5,c,d,e. Show. using (3.6). that when A is diagonal, upper triangular, of lower triangular then |4| = abede Eg] show that 42 84 2 di 252 Ad's fA ut tat the determinant of 11. Given BA each product is 4 48. Rvaluste, as in Problem 1, 2-11 22-2 0 22 @ | 3 24) = 27 @& |r2 s)-4 «ey |-2 04] -1 03 234i -3 40] 30 DETERMINANT OF A SQUARE MATRIX [cHAP. 3 19, @) Evaluate [4] - ]23 9) 45n 33 () Denote by |8 | the determinant obtained from | 4 | by multiplying the elements of Its second column by 5. Evaluate |8 | to verify Theorem If (©) Denote by |C| the determinant obtained trom |.4| by interchanging its fist and third columns, Evaluate 1c} to verity Theotem ¥. Psa (é) snow tnat [Al » fs e284 ana vetetng Tver vam eral Mlies ner (ott tom [|e deterisane [0] «23. 3] ny sasecing tee tines te elements ofthe ft coe column from the corresponding elements of the third column. Evaluate || to verity Theorem IX. (1) In |4| subtract twice the first row from the second and four times the first row from the third. Evaluate the resulting determinant, (g) in | multiply the first column by three and from i subtract the thitd column. Evaluate to show that || has been tripled. Compare with (c). Do not confuse (e) and (g). 20. If A Is an n-square matrix and k is a sealar, use (3.6) to show that |e| = 4"|4 | a1. Prove: (a) it |4] =. then |4| =e =| 2 ()) IEA Is skew-Hemitian, then | 4 | is either real or is a pure imaginaty number, 22. (a) Count the number of interchanges of adjacent rows (columns) necessary to obtain B from 4 in Theorem V ‘and thus prove the theorem (®) Same, for Theorem VI 23, Prove Theorem VIE. 
Hint: Interchange the Identical rows and use Theorem V, 24, Prove: If any two rows (columns) of « square matrix 4 are proportional. then || = 0, 25. Use Theorems VII, HI, and VI to prove Theorem IX. 26, Rvaluate the determinants of Problem 18 as in Problem 11 b00 oe py tten eneck that [4] = [2 8] ¢ £], hus. tt 4 = dhagits on, where 00gh 21. Use (8.6) to evaluate Ay, do are 2-square matrices, |4| = |4y}+| Aol ays -2/3 ~2/9 28. Show that the cofactor of each element of | 2/3 1/2 ~2/2| is that element 2/3 =2/8 1/4, M4 a3 a] 29. Show that the cofactor of an element of any cow of | { 0 1 Js the corresponding element of the seme 443 ‘numbered column. 30, Prove: (a) If is symmetric then aijs= ay when t 4) (6) 164 is n-squate and skew-symmetsic then agg = (-1Y" "ayy when ¢/ HAP. 5] DETERMINANT OF A SQUARE MATRIX 31 BA. For the matrix 4 of Problem 8; (a show that [4] <1 ay tty (0 orm ve mates ¢ = [tsp ay Map| and show tat AC Sha Mee Ase (©) explain why the testi) is An as soon as (0 known be oP a 32, mutiny he cotamns of |4| = |8? co 42] respectively by 2.6.0; remove the common factor trom each of 22 ab be a8 ee tre rows toanow tat [1] + [ob en be te be ab eat bed) Ja? oat 161 ood] for oe ot ou evaaaing show : = 9840 = 40 aXb eK —dKe wit vag anon tag [22 8 1 208] | ED eae one sy ox aye Petal Iowa ona oatat Lott lott 24 show hat tensa determinant [4] = [E292 gig], ym yy iio} dl iat aa ad 38. Prove: |” fe | > Ham aak 09) ay —0nKag~ a4) (ae ay yea ~ al nay, mast by nat bo ay ay a] 36. Without expanding, show that |nb:¢c1 bates mbgt col = (ntiyin?—n4+1)] bx by by acta, negtas nes+as| ee eal 0 xa xb 31. Wishout expanding, show that the equation | x+a 0 x-c| =0 nas 0 as a root, xth xte 0 = Fema by 2 a a ws ase Chapter 4 Evaluation of Determinants PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapter3. In Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element 1 of =1 if the given determinant contains no such element, (4) to replace an element of a given determinant with 0. For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX, Chapter3, the given determinant |4| by another |B| = |b,;| having the property that all elements, except one, in some row (column) are zero. If by, 18 this non-zero element and pq is its cofactor, lal B) = Bog yy = AI %yq = minor of by, ‘Then the minor of by, is treated in similar fashion and the process is continued until a determi- nant of order two or three is obtained. 3 2 34) — |3-3(3) 2-3-2) 3-31) 4—-3(2) 6 80-2 -cnen|% S| = 20 See Problems 1-3, For determinants having elements of the type in Example 2 below, the following variation may be used: divide the first row by one of its non-zero elements and proceed to obtain zero elements in a row or column, Example 2 0.921 0.185 0.476 0.614 1 0.201 0.517 0.667 10.201 0.517 0.867 0.782 0.187 0.527 0.138] _ 4.49, [0-782 0.157 0.527 0.128] _ 4 4.,[0 0 0.129 ~0.988 0.872 0.484 0.67 0-799 0.872 0.484 0.637 0.709 0 0.308 0.198 0.217 0.312 0.555 0.841 0.438 0.912 0.555 0.841 0.448 0 0.492 0.680 0.240 0 0.128 -0.384 0 0320 4 ste o2es 0.492 0.157 0.240) = 0.921(-0.384N(0.108) = ~0.087 32 cuap. 4 EVALUATION OF DETERMINANTS 33 ‘THE LAPLACE EXPANSION, The expansion of a determinant || of order n along a row (column) is ‘@ special case of the Laplace expansion. Instead of selecting one row of ||, let m rows num- bered i, i,...,éq , when atranged in order of magnitude, be selected. 
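Before taking up the systematic evaluation methods of the next chapter, the expansion of |A| along a row, equation (3.9), and Theorem III (|A'| = |A|) can be spot-checked numerically. The sketch below uses Python with numpy on a 3-square matrix chosen here purely for illustration; it is not one of the text's examples, and the small cofactor routine simply mirrors the definition given in this chapter.

```python
import numpy as np

A = np.array([[1,  0, 2],
              [3,  4, 1],
              [2, -5, 1]], dtype=float)   # an illustrative 3-square matrix

def cofactor(M, i, j):
    """alpha_ij: (-1)^(i+j) times the minor of the element in row i, column j."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

# Expansion along the first row: |A| = a11*alpha11 + a12*alpha12 + a13*alpha13
row_expansion = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))

print(round(row_expansion, 9) == round(np.linalg.det(A), 9))        # True
print(round(np.linalg.det(A.T), 9) == round(np.linalg.det(A), 9))   # True: |A'| = |A|
```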
From these m rows n(n =1)...(0~ m1) LQ om can be formed by making all possible selections of m columns from the n columns, Using these minors and their algebraic complements, we have the Laplace expansion » minors an lal = gen 4 fines doe) 2 | where s = ijtigtuotiy + yt jz to" +i and the summation extends overthe p selections of the column indices taken m at a time Exanple 3. 29-24 212 evaluate [4 using minors of the frst two rows te Lal} 272 124 asta minoes of the tt “2405 From (4.1, eens sal + et?) 43. Lagl separ [abs] 4 eat] az. Ladt + entre] ats] (age) + attest] asst. [43g me Lee [2 s}f2 3 3 -2l'lo s al'ls o Le SHasl- Hal Fade acabls sl - Le shl2 3 az = 1S) = KB) + OXI) + HIKED = AKG) + ANI8) = ~a6 Seo Problems 4-6 DETERMINANT OF A PRODUCT. 1f 4 and B ate n-squate matrices, then (42) 4B] = |Al- See Problem 7 EXPANSION ALONG THE FIRST ROW AND COLUMN. If 4 = [ais] is n-square, then (4.3) |4 crnE where css is the cofactor of a,, and a;is the algebraic complement of the minor \& od) ot A DERIVATIVE OF A DETERMINANT. Let the n-square matrix A = [;;] have as elements differen- tiable functions of a variable x. ‘Then 34 EVALUATION OF DETERMINANTS (CHAP. 4 1. the derivative, [4], of [4 with respect tox isthe sum of n determinants obtained by replacing in all possible ways the elements of one row (column) of |4| hy their derivatives with respect to 2 jo xed 8 ow oL 0 ott 8 x41 8 5+ ae im — or! See Problem 8 SOLVED PROBLEMS 2o-24 2 3 -2 4 23-24 14-310 1-212) 4-2(3) -3~2(-2) 10-218) 3-212 - = = 286 (See Example Mlaaae 2 2 3 4 3234 aa Ly jH-24005 24 ° 5 2405 ‘There ate, of course. many other ways of obtaining an element +1 or ~1; for example, subtract the first column from the second, the fourth column from the second, the first row ftom the second, etc. ro-1 2 10-141 2-211) 100 9 2 [23 2-2] _ [23 242 -2-210) 234-6 2421] > [ra 22 1-20) 2aa-3 34 5-3 21 543 -3-20) 318s 4-6 3-28) 4-214) ~8-2(-2) “5-4 0 5 4-3 ‘ 4 3 - fa 4-3 8-9 a) 8-314) -9~9(-3) 1-4 0 “5-4 . sual 7? 0 tei 1428 1-i 0 2-3 1-2 24a 0 3, Evaluate [4] Multiply the second row by 1+£ and the third row hy 142; then Ore tea oa ata: o itt Lea aatiyarayla| = (tea 4] = ]2 0 5-4 200 5-4 0 8-14 25-55 5-44 0 14478-1042 1 4471-10424 bi eae 6 + 1B [Sa asta and [41 CHAP. 4] EVALUATION OF DETERMINANTS 35 4. Derive the Laplace expansion of [4| = lal of order», using minors of order m ~2 interchanges of edjacent roms the row munberd i ca be bowet into the second Tow, ... bY iy 72 11a4 a -4 5 6 nate 1-2 3-2-2 f41e8 2-113 2 Oletae @ fra 2a af = us 1-4-3 -2 -5 2427 3-2 2 2-2 10. 1f 4 is n-square, show that || is real and non-negative 11, Evaluate the determinant of Problem 9(a) using minors ftom the first two rows; also using minors from the frst two columns wow [near [ed Use |4B| = |4|:|B| to show that (aj+a3)(b5+85) = (ayb—aybe) + (agby + 045)". ay tat 4 =f 28 2264) yg. | bitis botiba nap ieg ata abytiby by iby Use [AB| = [4|-[B] to express (a;+as+aq+a,)(b:+ be+be+ba) as a sum of four squares. 13. Evaluate using minors from the first three rows, Ans, 120 38 EVALUATION OF DETERMINANTS (CHAP. 4 Ihaiad Dorit 14, Evaluate {1 1 0 0 0] using minors trom the first two columns. Ans. 2 core 122.1 1B. If Ay. Ap...dg are square matrices, use the Laplace expansion to prove [dine 4s. Aee ne ADd| = |All = Lgl by bo by ba 16. Expand using minors of te first tmo rows and show that ba ba by be . | = o by bof [a be bs dsl [be bal by bal [be bal o 4 17, Use the Laplace expansion to show thatthe n-sqare determinant |° 4, wnere ois k-square, is zero when b> in 18. 
[Al = osstir + eyotae + agatha + mata expand exch of the cofactors de ts, dz4 along ite fest cole ain to show lat ants - 2, 2 atsassaa} as 3 wher ait ts the algebraic complement of te minor 19. If 14; denotes the cofactor of aj; In the n-square matrix 4 = [a;;], show that the bordered determinant es 20. For each of the determinants ||, find the derivative. (a) “* (by fx? oxen (© ax+5| ee) O Bx—2 x°+1 tL o x Ans. (a) 28+ 9x7— Bx, (b) 1 = Gx + 21x74 128" ~ 15x", (0) 6x9 — 5x" — 28x74 974 208-2 21. Prove: If A and B are real n-square matrices with 4 non-singular and it H = 4448 is Hermitian, then aP = lah. |rectay| Chapter 5 Equivalence THE RANK OF A MATRIX. A non-zero matrix 4 1s said to have rank r if at least one of its r-square minors is different from zero while every (r41)-square minor, if any, Is zero. A zero matrix is said to have rank 0. Example 1. ‘The rank of A a isre2 since See Problem 1 ‘An n-square matrix A is called non-singular if its rank r=n, that is, if |4] 40. Otherwise, 4 is called singular, ‘The matrix of Example 1 is singular, From |A8| = | 4|-|B| follows I. The product of two or more non-singular n-square matrices is non-singular; the prod- uct of two or more n-square matrices is singular if at least one of the matrices is singular. ELEMENTARY TRANSFORMATIONS. The following operations, called elementary transformations, on a matrix do not change either its order or its rank: (1) The interchange of the ith and jth rows, denoted by His: ‘The interchange of the ith and jth columns, denoted by Ki; (2) The multiplication of every element of the éth row by a non-zero scalar k, denoted by H(k); ‘The multiplication of every element of the ith column by a non-zero scalar k, denoted by Ki) (3) The addition to the elements of the ith row of F, a scalar, times the cortesponding elements of the jth row, denoted by 1,;(h) ; ‘The addition to the elements of the ith column of &, a sealar, times the corresponding ele- ments of the jth column, denoted by K;;(k) ‘The transformations H are called elementary row transformations; the transformations K are called elementary column transformations. The elementary transformations, being precisely those performed on the rows (columns) of determinant, need no elaboration. It is clear that an elementary transformation cannot alter the order of a matrix. In Problem2, it is shown that an elementary transformation does not alter its rank. THE INVERSE OF AN ELEMENTARY TRANSFORMATION. ‘The inverse of an elementary transforma. tion is an operation which undoes the effect of the elementary transformation; that is, after A hhas been subjected to one of the elementary transformations and then the resulting matrix has been subjected to the inverse of that elementary transformation, the final result is the matrix 4. 39 40 EQUIVALENCE [cHAP. 5 Example 2. Let 4 = 2 5 8 ‘The effect of the elementary row 12 nsformation Hpx(~2) is to produce B= |2 1 0) 789. ‘The effect of the elementary row transformation Hay(+ 2) on B is to produce A again. ‘Thus, Hoy(~2) and Hp,(+2) are inverse elementary row transformations. ‘The inverse elementary transformations are: ay Hy @) Hedy = Mga/ky (8) Hjthy = Hib Kihy = We have Tl. The inverse of an elementary transformation is an elementary transformation of the same type. EQUIVALENT MATRICES. Two matrices 4 and are called equivalent, 4~B, from the other by a sequence of elementary transformations, Equivalent matrices have the same order and the same rank. if one can be obtained Example 3. 
EQUIVALENT MATRICES. Two matrices A and B are called equivalent, A ~ B, if one can be obtained from the other by a sequence of elementary transformations. Equivalent matrices have the same order and the same rank.

Example 3. Applying in turn the elementary transformations H_21(−2), H_31(1), H_32(−1),

A = [1 2 −1 4; 2 4 3 5; −1 −2 6 −7] ~ [1 2 −1 4; 0 0 5 −3; −1 −2 6 −7] ~ [1 2 −1 4; 0 0 5 −3; 0 0 5 −3] ~ [1 2 −1 4; 0 0 5 −3; 0 0 0 0] = B

Since all 3-square minors of B are zero while, for example, |1 −1; 0 5| ≠ 0, the rank of B is 2; hence, the rank of A is 2. This procedure of obtaining from A an equivalent matrix B from which the rank is evident by inspection is to be compared with that of computing the various minors of A.
See Problem 3.

ROW EQUIVALENCE. If a matrix A is reduced to B by the use of elementary row transformations alone, B is said to be row equivalent to A and conversely. The matrices A and B of Example 3 are row equivalent.

Any non-zero matrix A of rank r is row equivalent to a canonical matrix C in which
(a) one or more elements of each of the first r rows are non-zero while all other rows have only zero elements;
(b) in the ith row (i = 1, 2, ..., r) the first non-zero element is 1; let the column in which this element stands be numbered j_i;
(c) j_1 < j_2 < ··· < j_r;
(d) the only non-zero element in the column numbered j_i (i = 1, 2, ..., r) is the element 1 of the ith row.

To reduce A to C, suppose j_1 is the number of the first non-zero column of A.
(i_1) If a_{1 j_1} ≠ 0, use H_1(1/a_{1 j_1}) to reduce it to 1, when necessary.
(i_2) If a_{1 j_1} = 0 but a_{p j_1} ≠ 0, use H_{1p} and proceed as in (i_1).
(ii) Use row transformations of type (3) with appropriate multiples of the first row to obtain zeroes elsewhere in the j_1st column.

If non-zero elements of the resulting matrix B occur only in the first row, B = C. Otherwise, suppose j_2 is the number of the first column in which this does not occur. If b_{2 j_2} ≠ 0, use H_2(1/b_{2 j_2}) as in (i_1); if b_{2 j_2} = 0 but b_{q j_2} ≠ 0, use H_{2q} and proceed as in (i_2). Then, as in (ii), clear the j_2nd column of all other non-zero elements.

If non-zero elements of the resulting matrix occur only in the first two rows, we have C. Otherwise, the procedure is repeated until C is reached.

Example 4. The sequence of row transformations H_21(−2), H_31(1); H_2(1/5); H_12(1), H_32(−5) applied to A of Example 3 yields

[1 2 −1 4; 2 4 3 5; −1 −2 6 −7] ~ [1 2 −1 4; 0 0 5 −3; 0 0 5 −3] ~ [1 2 −1 4; 0 0 1 −3/5; 0 0 5 −3] ~ [1 2 0 17/5; 0 0 1 −3/5; 0 0 0 0] = C

having the properties (a)-(d).
See Problem 4.

THE NORMAL FORM OF A MATRIX. By means of elementary transformations any matrix A of rank r > 0 can be reduced to one of the forms

(5.1)    I_r,    [I_r  0],    [I_r; 0],    [I_r 0; 0 0]

called its normal form. A zero matrix is its own normal form.

Since both row and column transformations may be used here, the element 1 of the first row obtained in the section above can be moved into the first column. Then both the first row and first column can be cleared of other non-zero elements. Similarly, the element 1 of the second row can be brought into the second column, and so on.

For example, the sequence H_21(−2), H_31(1), K_21(−2), K_31(1), K_41(−4), K_23, K_2(1/5), H_32(−1), K_42(3) applied to A of Example 3 yields [I_2 0; 0 0], the normal form.
See Problem 5.

ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transformation is applied to the identity matrix I_n is called an elementary row (column) matrix. Here, an elementary matrix will be denoted by the symbol introduced to denote the elementary transformation which produces the matrix.

Every elementary matrix is non-singular. (Why?)
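The answer to the question just raised is that the determinant of an elementary matrix is −1, k ≠ 0, or 1 according to its type. A brief sketch (Python with NumPy; the order 3 and the scalar k = 5 are arbitrary choices) exhibits the three types of elementary row matrices and their determinants.

```python
import numpy as np

I = np.eye(3)
k = 5.0

H_23 = I[[0, 2, 1], :]               # interchange rows 2 and 3
H_2k = I.copy(); H_2k[1, 1] = k      # multiply row 2 by k
H_23k = I.copy(); H_23k[1, 2] = k    # add k times row 3 to row 2

for name, H in [("H_23", H_23), ("H_2(k)", H_2k), ("H_23(k)", H_23k)]:
    # determinants are -1, k and 1 respectively (up to rounding), never zero,
    # so every elementary matrix is non-singular
    print(name, np.linalg.det(H))
```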
The effect of applying an elementary transformation to an m×n matrix A can be produced by multiplying A by an elementary matrix:

To effect a given elementary row transformation on A of order m×n, apply the transformation to I_m to form the corresponding elementary matrix H and multiply A on the left by H.

To effect a given elementary column transformation on A, apply the transformation to I_n to form the corresponding elementary matrix K and multiply A on the right by K.

Example 6. When A = [1 2 3; 4 5 6; 7 8 9],

H_13·A = [0 0 1; 0 1 0; 1 0 0]·[1 2 3; 4 5 6; 7 8 9] = [7 8 9; 4 5 6; 1 2 3]

interchanges the first and third rows of A;

A·K_13(2) = [1 2 3; 4 5 6; 7 8 9]·[1 0 0; 0 1 0; 2 0 1] = [7 2 3; 16 5 6; 25 8 9]

adds to the first column of A two times the third column.

LET A AND B BE EQUIVALENT MATRICES. Let the elementary row and column matrices corresponding to the elementary row and column transformations which reduce A to B be designated as H_1, H_2, ..., H_s; K_1, K_2, ..., K_t, where H_1 is the first row transformation, H_2 is the second, ...; K_1 is the first column transformation, K_2 is the second, .... Then

(5.2)    H_s ··· H_2·H_1·A·K_1·K_2 ··· K_t = PAQ = B

where

(5.3)    P = H_s ··· H_2·H_1    and    Q = K_1·K_2 ··· K_t

We have
III. Two matrices A and B are equivalent if and only if there exist non-singular matrices P and Q defined in (5.3) such that PAQ = B.

Since any matrix is equivalent to its normal form, we have
IV. If A is an n-square non-singular matrix, there exist non-singular matrices P and Q as defined in (5.3) such that PAQ = I_n.
See Problem 6.

INVERSE OF A PRODUCT OF ELEMENTARY MATRICES. Let P = H_s ··· H_2·H_1 and Q = K_1·K_2 ··· K_t as in (5.3). Since each H and K has an inverse and since the inverse of a product is the product in reverse order of the inverses of the factors,

(5.4)    P⁻¹ = H_1⁻¹·H_2⁻¹ ··· H_s⁻¹    and    Q⁻¹ = K_t⁻¹ ··· K_2⁻¹·K_1⁻¹

Let A be an n-square non-singular matrix and let P and Q defined above be such that PAQ = I_n. Then

(5.5)    A = P⁻¹(PAQ)Q⁻¹ = P⁻¹·I_n·Q⁻¹ = P⁻¹Q⁻¹

We have proved
V. Every non-singular matrix can be expressed as a product of elementary matrices.
See Problem 7.

From this follow
VI. If A is non-singular, the rank of AB (also of BA) is that of B.
VII. If P and Q are non-singular, the rank of PAQ is that of A.

CANONICAL SETS UNDER EQUIVALENCE. In Problem 8, we prove
VIII. Two m×n matrices A and B are equivalent if and only if they have the same rank.

A set of m×n matrices is called a canonical set under equivalence if every m×n matrix is equivalent to one and only one matrix of the set. Such a canonical set is given by (5.1) as r ranges over the values 1, 2, ..., m or 1, 2, ..., n, whichever is the smaller.
See Problem 9.

RANK OF A PRODUCT. Let A be an m×p matrix of rank r. By Theorem III there exist non-singular matrices P and Q such that

PAQ = N = [I_r 0; 0 0]

Then A = P⁻¹NQ⁻¹. Let B be a p×n matrix and consider the rank of

(5.6)    AB = P⁻¹NQ⁻¹B

By Theorem VI, the rank of AB is that of NQ⁻¹B. Now the rows of NQ⁻¹B consist of the first r rows of Q⁻¹B and m − r rows of zeroes. Hence, the rank of AB cannot exceed r, the rank of A. Similarly, the rank of AB cannot exceed that of B. We have proved
IX. The rank of the product of two matrices cannot exceed the rank of either factor.

Suppose AB = 0; then from (5.6), NQ⁻¹B = 0. This requires that the first r rows of Q⁻¹B be zeroes while the remaining rows may be arbitrary. Thus, the rank of Q⁻¹B and, hence, the rank of B cannot exceed p − r. We have proved
X. If the m×p matrix A is of rank r and if the p×n matrix B is such that AB = 0, the rank of B cannot exceed p − r.
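The factorization PAQ = N of (5.2)-(5.3) can be carried out mechanically. The following sketch (Python with NumPy; the function name normal_form, the pivot-selection rule, and the floating-point tolerance are arbitrary choices of this illustration, not the book's procedure) accumulates the elementary row and column matrices while reducing the matrix of Example 3 to its normal form.

```python
import numpy as np

def normal_form(A):
    """Reduce A to its normal form N, returning (N, P, Q) with P @ A @ Q = N."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    N, P, Q = A.copy(), np.eye(m), np.eye(n)
    r = 0                                   # number of pivots found so far
    for _ in range(min(m, n)):
        rows, cols = np.nonzero(np.abs(N[r:, r:]) > 1e-12)
        if rows.size == 0:
            break
        i, j = rows[0] + r, cols[0] + r
        # H_{r+1,i} and K_{r+1,j}: bring the chosen non-zero element to position (r, r)
        N[[r, i], :] = N[[i, r], :]
        P[[r, i], :] = P[[i, r], :]
        N[:, [r, j]] = N[:, [j, r]]
        Q[:, [r, j]] = Q[:, [j, r]]
        # H_r(1/pivot): make the pivot 1
        piv = N[r, r]
        N[r, :] /= piv
        P[r, :] /= piv
        # H_{i,r}(-k): clear the rest of column r by row transformations
        for i2 in range(m):
            if i2 != r and N[i2, r] != 0:
                k = N[i2, r]
                N[i2, :] -= k * N[r, :]
                P[i2, :] -= k * P[r, :]
        # K_{j,r}(-k): clear the rest of row r by column transformations
        for j2 in range(n):
            if j2 != r and N[r, j2] != 0:
                k = N[r, j2]
                N[:, j2] -= k * N[:, r]
                Q[:, j2] -= k * Q[:, r]
        r += 1
    return N, P, Q

A = np.array([[1, 2, -1, 4], [2, 4, 3, 5], [-1, -2, 6, -7]])
N, P, Q = normal_form(A)
print(N)                              # [I_2 0; 0 0], so the rank is 2
print(np.allclose(P @ A @ Q, N))      # True, in accordance with Theorem III
```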
SOLVED PROBLEMS

1. (a) The rank of the first matrix is 2, since a 2-square minor is different from zero and there are no minors of higher order.
(b) The rank of the second matrix is 2, since |A| = 0 while a 2-square minor is different from zero.
(c) The rank of the third matrix is 1, since |A| = 0 and each of the nine 2-square minors is 0, but not every element is 0.

2. Show that the elementary transformations do not alter the rank of a matrix.
We shall consider only row transformations here and leave consideration of the column transformations as an exercise. Let the rank of the m×n matrix A be r so that every (r+1)-square minor of A, if any, is zero. Let B be the matrix obtained from A by a row transformation. Denote by |R| any (r+1)-square minor of A and by |S| the (r+1)-square minor of B having the same position as |R|.

Let the row transformation be H_ij. Its effect on |R| is either (i) to leave it unchanged, (ii) to interchange two of its rows, or (iii) to interchange one of its rows with a row not of |R|. In the case (i), |S| = |R| = 0; in the case (ii), |S| = −|R| = 0; in the case (iii), |S| is, except possibly for sign, another (r+1)-square minor of A and, hence, is 0.

Let the row transformation be H_i(k). Its effect on |R| is either (i) to leave it unchanged or (ii) to multiply one of its rows by k. Then, respectively, |S| = |R| = 0 or |S| = k|R| = 0.

Let the row transformation be H_ij(k). Its effect on |R| is either (i) to leave it unchanged, (ii) to increase one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of |R|. In the cases (i) and (ii), |S| = |R| = 0; in the case (iii), |S| = |R| ± k·(another (r+1)-square minor of A) = 0 ± k·0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot lower the rank for, if it did, the inverse transformation would have to raise it. Hence, an elementary row transformation does not alter the rank of a matrix.

3. For each of the matrices A obtain an equivalent matrix B and from it, by inspection, determine the rank of A.

(a) A = [1 2 3; 2 1 3; 3 2 1] ~ [1 2 3; 0 −3 −3; 0 −4 −8] ~ [1 2 3; 0 1 1; 0 1 2] ~ [1 2 3; 0 1 1; 0 0 1] = B. The transformations used were H_21(−2), H_31(−3); H_2(−1/3), H_3(−1/4); H_32(−1). The rank is 3.

(b) The given 4-square matrix is reduced by row transformations of the same kinds to an equivalent matrix having exactly one row of zeroes; the rank is 3.

(c) The given matrix, whose elements are complex numbers, is reduced in the same way to an equivalent matrix having one row of zeroes; the rank is 2.

Note. The equivalent matrices B obtained here are not unique. In particular, since in (a) and (b) only row transformations were used, the reader may obtain others by using only column transformations. When the elements are rational numbers, there generally is no gain in mixing row and column transformations.

4. Obtain the canonical matrix C row equivalent to each of the given matrices A. The reductions follow the procedure of Example 4: the first non-zero element of each non-zero row is made 1 and its column is then cleared of all other non-zero elements.

5. Reduce each of the following to normal form.
(a) The given 3×4 matrix of rank 3 is carried by the row transformations H_21(−2), H_31(2) and suitable column transformations of types (1), (2) and (3) into [I_3 0].
(b) The given matrix of rank 2 is reduced similarly, using both row and column transformations, to [I_2 0; 0 0].
6. Reduce A = [1 2 3 −2; 2 −2 1 3; 3 0 4 1] to normal form N and compute the matrices P_1 and Q_1 such that P_1·A·Q_1 = N.

Border A with I_3 on the right and with I_4 below, so that each row transformation is performed on a row of seven elements and each column transformation on a column of seven elements. The row transformations H_21(−2), H_31(−3), H_32(−1) carry the rows of A into [1 2 3 −2], [0 −6 −5 7], [0 0 0 0] and the attached I_3 into [1 0 0; −2 1 0; −1 −1 1]; suitable transformations of types (2) and (3) on the first two rows and on the columns then yield

N = [1 0 0 0; 0 1 0 0; 0 0 0 0] = [I_2 0; 0 0]

with P_1 standing in place of I_3 and Q_1 in place of I_4, so that P_1·A·Q_1 = N.

7. Express the given non-singular matrix A as a product of elementary matrices.
Elementary transformations H_1, H_2; K_1, K_2 reduce A to I; that is [see (5.2)], H_2·H_1·A·K_1·K_2 = I. Then, from (5.5), A = H_1⁻¹·H_2⁻¹·K_2⁻¹·K_1⁻¹, a product of elementary matrices.

8. Prove: Two m×n matrices A and B are equivalent if and only if they have the same rank.
If A and B have the same rank, both are equivalent to the same matrix (5.1) and are therefore equivalent to each other. Conversely, if A and B are equivalent, there exist non-singular matrices P and Q such that B = PAQ. By Theorem VII, A and B have the same rank.

9. A canonical set for non-zero matrices of order 3 is

I_3,    [1 0 0; 0 1 0; 0 0 0],    [1 0 0; 0 0 0; 0 0 0]

10. If from a square matrix A of order n and rank r_A a submatrix B consisting of s rows (columns) of A is selected, the rank r_B of B is equal to or greater than r_A + s − n.
The normal form of A has n − r_A rows whose elements are zeroes and the normal form of B has s − r_B rows whose elements are zeroes. Clearly s − r_B ≤ n − r_A, from which r_B ≥ r_A + s − n follows, as required.
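A quick numerical spot-check of the inequality of Problem 10 can be made as follows (Python with NumPy; the 4-square matrix used below is a hypothetical sample, not one taken from the text).

```python
import numpy as np
from itertools import combinations

# For every submatrix B formed from s rows of an n-square A of rank r_A,
# Problem 10 asserts rank(B) >= r_A + s - n.
A = np.array([[1, 2, -1, 4],
              [2, 4, 3, 5],
              [-1, -2, 6, -7],
              [0, 0, 5, -3]])
n = A.shape[0]
rA = np.linalg.matrix_rank(A)
for s in range(1, n + 1):
    for rows in combinations(range(n), s):
        rB = np.linalg.matrix_rank(A[list(rows), :])
        assert rB >= rA + s - n
print("Inequality verified for every row-selection; r_A =", rA)
```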
SUPPLEMENTARY PROBLEMS

11. Find the rank of each of the given matrices.   Ans. (a) 2, (c) 4

12. Show by considering minors that A, Ā, A′, and Ā′ have the same rank.

13. Show that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.

14. Find the canonical matrix row equivalent to each of the given matrices.

15. Write the normal form of each of the matrices of Problem 14.

16. Let A be the given 3-square matrix.
(a) From I_3 form the elementary row matrices H corresponding to the listed row transformations and check that each product HA effects the corresponding row transformation of A.
(b) From I_3 form the elementary column matrices K corresponding to the listed column transformations and show that each product AK effects the corresponding column transformation.
(c) Write the inverses of the elementary matrices of (a) and check that for each, H·H⁻¹ = I.
(d) Write the inverses of the elementary matrices of (b) and check that for each, K·K⁻¹ = I.
(e) Compute B, the product of the elementary matrices of (a), and C, the product in reverse order of their inverses.
(f) Show that BC = CB = I.

17. (a) Show that K_ij(k) = H_ji(k) and K_i(k) = H_i(k).
(b) Show that if R is a product of elementary column matrices, then R′ is the product in reverse order of the corresponding elementary row matrices.

18. Prove: (a) AB and BA are non-singular if A and B are non-singular n-square matrices; (b) AB and BA are singular if at least one of the n-square matrices A and B is singular.

19. If P and Q are non-singular, show that A, PA, AQ, and PAQ have the same rank. Hint: express P and Q as products of elementary matrices.

20. Reduce the given matrix B to normal form N and compute the matrices P_2 and Q_2 such that P_2·B·Q_2 = N.

21. (a) Show that the number of matrices in a canonical set of n-square matrices under equivalence is n + 1.
(b) Show that the number of matrices in a canonical set of m×n matrices under equivalence is the smaller of m + 1 and n + 1.

22. Given the 3×4 matrix A of rank 2, find a 4-square matrix B ≠ 0 such that AB = 0.
Hint: Follow the proof of Theorem X and take Q⁻¹B = [0 0 0 0; 0 0 0 0; a b c d; e f g h], where a, b, ..., h are arbitrary.

23. The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. Find P and Q such that B = PAQ.

24. If the m×n matrices A and B are of rank r_1 and r_2 respectively, show that the rank of A + B cannot exceed r_1 + r_2.

25. Let A be an arbitrary n-square matrix and B be an n-square elementary matrix. By considering each of the six different types of matrix B, show that |AB| = |A|·|B|.

26. Let A and B be n-square matrices. (a) If at least one is singular, show that |AB| = |A|·|B|. (b) If both are non-singular, use (5.5) and Problem 25 to show that |AB| = |A|·|B|.

27. Show that equivalence of matrices is an equivalence relation.

28. Prove: The row equivalent canonical form of a non-singular matrix A is I and conversely.

29. Prove: Not every matrix A can be reduced to normal form by row transformations alone. Hint: Exhibit a matrix which cannot be so reduced.

30. Show how to effect on any matrix A the transformation H_ij by using a succession of row transformations of types (2) and (3).

31. Prove: If A is an m×n matrix (m ≤ n) of rank m, then AA′ is a non-singular symmetric matrix. State the theorem when the rank of A is less than m.

Chapter 6

The Adjoint of a Square Matrix

THE ADJOINT. Let A = [a_ij] be an n-square matrix and α_ij be the cofactor of a_ij; then, by definition,

(6.1)    adjoint A = adj A = [α_11 α_21 ... α_n1; α_12 α_22 ... α_n2; ...; α_1n α_2n ... α_nn]

Note carefully that the cofactors of the elements of the ith row (column) of A are the elements of the ith column (row) of adj A.

Example 1. For the matrix A = [1 2 3; 2 3 2; 3 3 4]:
α_11 = 6, α_12 = −2, α_13 = −3, α_21 = 1, α_22 = −5, α_23 = 3, α_31 = −5, α_32 = 4, α_33 = −1, and

adj A = [6 1 −5; −2 −5 4; −3 3 −1]

See Problems 1-2.

Using Theorems X and XI of Chapter 3, we find

(6.2)    A·(adj A) = [|A| 0 ... 0; 0 |A| ... 0; ...; 0 0 ... |A|] = |A|·I_n = (adj A)·A

Example 2. For the matrix A of Example 1, |A| = −7 and

A·(adj A) = [1 2 3; 2 3 2; 3 3 4]·[6 1 −5; −2 −5 4; −3 3 −1] = [−7 0 0; 0 −7 0; 0 0 −7] = −7·I_3 = |A|·I_3

By taking determinants in (6.2), we have

(6.3)    |A|·|adj A| = |A|^n = |adj A|·|A|

There follow
I. If A is n-square and non-singular, then
(6.4)    |adj A| = |A|^(n−1)
II. If A is n-square and singular, then
A·(adj A) = (adj A)·A = 0

If the set of vectors (9.2) is linearly independent, so also is every subset of them.

A LINEAR FORM over F in n variables x_1, x_2, ..., x_n is a polynomial of the type

(9.6)    Σ a_i x_i = a_1 x_1 + a_2 x_2 + ··· + a_n x_n

Consider a system of m linear forms in n variables

(9.7)    f_i = a_i1 x_1 + a_i2 x_2 + ··· + a_in x_n    (i = 1, 2, ..., m)

and the associated matrix A = [a_ij]. If there exist elements k_1, k_2, ..., k_m, not all zero, in F such that

k_1 f_1 + k_2 f_2 + ··· + k_m f_m = 0

the forms (9.7) are said to be linearly dependent; otherwise the forms are said to be linearly independent. Thus, the linear dependence or independence of the forms of (9.7) is equivalent to the linear dependence or independence of the row vectors of A.

Example 5. The forms f_1 = 2x_1 − x_2 + 3x_3, f_2 = x_1 + 2x_2 + 4x_3, f_3 = 4x_1 − 7x_2 + x_3 are linearly dependent since A = [2 −1 3; 1 2 4; 4 −7 1] is of rank 2. Here, 3f_1 − 2f_2 − f_3 = 0.

The system (9.7) is necessarily dependent if m > n. Why?
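The rank criterion of Example 5 can be verified directly. The sketch below (Python with NumPy; the coefficients are those of Example 5 as read here) computes the rank of the coefficient matrix and checks the relation 3f_1 − 2f_2 − f_3 = 0.

```python
import numpy as np

A = np.array([[2, -1, 3],     # f1 = 2x1 -  x2 + 3x3
              [1,  2, 4],     # f2 =  x1 + 2x2 + 4x3
              [4, -7, 1]])    # f3 = 4x1 - 7x2 +  x3

print(np.linalg.matrix_rank(A))          # 2 < 3, so the forms are dependent
k = np.array([3, -2, -1])                # coefficients of the relation
print(np.allclose(k @ A, np.zeros(3)))   # True: 3f1 - 2f2 - f3 = 0
```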
SOLVED PROBLEMS

1. Prove: If among the m vectors X_1, X_2, ..., X_m there is a subset, say X_1, X_2, ..., X_r, r < m, which is linearly dependent, so also are the m vectors.
Since, by hypothesis, k_1·X_1 + k_2·X_2 + ··· + k_r·X_r = 0 with not all of the k's equal to zero, then

k_1·X_1 + k_2·X_2 + ··· + k_r·X_r + 0·X_{r+1} + ··· + 0·X_m = 0

with not all of the coefficients equal to zero, and the entire set of vectors is linearly dependent.

2. Prove: If the rank of the matrix associated with a set of m n-vectors is r < m, then r of the vectors, say X_1, X_2, ..., X_r, are linearly independent while each of the remaining m − r vectors is a linear combination of them.
Suppose first that m ≤ n. The r rows of A containing a non-vanishing r-square minor are linearly independent, since a dependence among them would make every r-square minor standing in those rows vanish; and, every (r+1)-square minor being zero, each of the remaining rows is a linear combination of these r. If m > n, consider the matrix obtained when to each of the given m vectors m − n additional zero components are added. This matrix is [A 0]. Clearly, neither the linear dependence or independence of the vectors nor the rank of A has been changed. Thus, in either case, the vectors X_{r+1}, ..., X_m are linear combinations of the linearly independent vectors X_1, X_2, ..., X_r, as was to be proved.

3. Show, using a matrix, that each triple of vectors

(a) X_1 = [1, 2, −3, 4], X_2 = [3, −1, 2, 1], X_3 = [1, −5, 8, −7]
(b) X_1 = [2, 3, 4, −1], X_2 = [2, 3, 1, −2], X_3 = [4, 6, 5, −3]

is linearly dependent. In each case determine a maximum subset of linearly independent vectors and express the others as linear combinations of these.
In (a) the matrix of the vectors is of rank 2; X_1 and X_2 are linearly independent, and 2X_1 − X_2 + X_3 = 0, so that X_3 = −2X_1 + X_2. In (b) the matrix is again of rank 2, X_1 and X_2 are linearly independent, and X_3 = X_1 + X_2.

4. Let P_1(1,1,1), P_2(1,2,3), P_3(3,1,2), and P(2,3,4) be points in ordinary space. The points P_1, P_2, and the origin of coordinates determine a plane π of equation

(1)    |x y z 1; 1 1 1 1; 1 2 3 1; 0 0 0 1| = 0,    that is,    x − 2y + z = 0

Substituting the coordinates of P into the left member of (1), we have

|2 3 4 1; 1 1 1 1; 1 2 3 1; 0 0 0 1| = |2 3 4; 1 1 1; 1 2 3| = 0

Thus, P lies in π. The significant fact here is that [P_1, P_2, P]′ = [1 1 1; 1 2 3; 2 3 4] is of rank 2.

We have verified: Any three points of ordinary space lie in a plane through the origin provided the matrix of their coordinates is of rank 2. Show that P_3 does not lie in π.

SUPPLEMENTARY PROBLEMS

5. Prove: If m vectors X_1, X_2, ..., X_m are linearly independent while the set obtained by adjoining another vector X_{m+1} is linearly dependent, then X_{m+1} can be expressed as a linear combination of X_1, X_2, ..., X_m.

6. Show that the representation of X_{m+1} in Problem 5 is unique.
Hint: Suppose X_{m+1} = Σ k_i·X_i = Σ a_i·X_i and consider Σ (k_i − a_i)·X_i.

12. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix (9.5) of the vectors be of rank r < m.
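As a closing numerical illustration of Problems 3 and 12 (Python with NumPy; purely illustrative, using the vectors of Problem 3(a) as given above), the rank of the matrix of a dependent triple is computed and the dependence is recovered by least squares.

```python
import numpy as np

Xa = np.array([[1, 2, -3, 4],
               [3, -1, 2, 1],
               [1, -5, 8, -7]])

print(np.linalg.matrix_rank(Xa))                    # 2 < 3, so the triple is dependent
print(np.allclose(2 * Xa[0] - Xa[1] + Xa[2], 0))    # True: 2*X1 - X2 + X3 = 0

# Recover the coefficients expressing X3 in terms of X1 and X2.
coef, *_ = np.linalg.lstsq(Xa[:2].T, Xa[2], rcond=None)
print(coef)                                         # approximately [-2, 1]: X3 = -2*X1 + X2
```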
