Friedberg - Linear Algebra (4th Ed)

LINEAR ALGEBRA
Stephen H. Friedberg
Arnold J. Insel
Lawrence E. Spence

List of Symbols

A_ij             the ij-th entry of the matrix A                         page 9
A^(-1)           the inverse of the matrix A                             page 100
A†               the pseudoinverse of the matrix A                       page 414
A*               the adjoint of the matrix A                             page 331
Ã_ij             the matrix A with row i and column j deleted            page 210
A^t              the transpose of the matrix A                           page 17
(A|B)            the matrix A augmented by the matrix B                  page 161
B_1 ⊕ ··· ⊕ B_k  the direct sum of matrices B_1 through B_k              page 320
B(V)             the set of bilinear forms on V                          page 422
β*               the dual basis of β                                     page 120
β_x              the T-cyclic basis generated by x                       page 526
C                the field of complex numbers                            page 7
C_i              the ith Gerschgorin disk                                page 296
cond(A)          the condition number of the matrix A                    page 469
C^n(R)           set of functions f on R with f^(n) continuous           page 21
C^∞              set of functions with derivatives of every order        page 130
C(R)             the vector space of continuous functions on R           page 18
C([0,1])         the vector space of continuous functions on [0,1]       page 331
C_x              the T-cyclic subspace generated by x                    page 525
D                the derivative operator on C^∞                          page 131
det(A)           the determinant of the matrix A                         page 232
δ_ij             the Kronecker delta                                     page 89
dim(V)           the dimension of V                                      page 47
e^A              the limit of I + A + A²/2! + ··· + A^m/m! as m → ∞      page 312
e_i              the ith standard vector of F^n                          page 43
E_λ              the eigenspace of T corresponding to λ                  page 264
F                a field                                                 page 6
f(A)             the polynomial f(x) evaluated at the matrix A           page 565
F^n              the set of n-tuples with entries in a field F           page 8
f(T)             the polynomial f(x) evaluated at the operator T         page 565
F(S,F)           the set of functions from S to a field F                page 9
H                space of continuous complex functions on [0,2π]         page 332
I_n or I         the n × n identity matrix                               page 89
I_V or I         the identity operator on V                              page 67
K_λ              generalized eigenspace of T corresponding to λ          page 485
K_φ              {x : (φ(T))^p(x) = 0 for some positive integer p}       page 525
L_A              left-multiplication transformation by matrix A          page 92
lim A_m          the limit of a sequence of matrices                     page 284
L(V)             the space of linear transformations from V to V         page 82
L(V,W)           the space of linear transformations from V to W         page 82
M_m×n(F)         the set of m × n matrices with entries in F             page 9
ν(A)             the column sum of the matrix A                          page 295
ν_j(A)           the jth column sum of the matrix A                      page 295
N(T)             the null space of T                                     page 67
nullity(T)       the dimension of the null space of T                    page 69
O                the zero matrix                                         page 8
per(M)           the permanent of the 2 × 2 matrix M                     page 448
P(F)             the space of polynomials with coefficients in F         page 10
P_n(F)           the polynomials in P(F) of degree at most n             page 18
φ_β              standard representation with respect to basis β         page 104
R                the field of real numbers                               page 7
rank(A)          the rank of the matrix A                                page 152
rank(T)          the rank of the linear transformation T                 page 69
ρ(A)             the row sum of the matrix A                             page 295
ρ_i(A)           the ith row sum of the matrix A                         page 295
R(T)             the range of the linear transformation T                page 67

CONTINUED ON REAR ENDPAPERS

Contents

Preface  ix

1 Vector Spaces  1
  1.1 Introduction  1
  1.2 Vector Spaces  6
  1.3 Subspaces  16
  1.4 Linear Combinations and Systems of Linear Equations  24
  1.5 Linear Dependence and Linear Independence  35
  1.6 Bases and Dimension  42
  1.7* Maximal Linearly Independent Subsets  58
  Index of Definitions  62

2 Linear Transformations and Matrices  64
  2.1 Linear Transformations, Null Spaces, and Ranges  64
  2.2 The Matrix Representation of a Linear Transformation  79
  2.3 Composition of Linear Transformations and Matrix Multiplication  86
  2.4 Invertibility and Isomorphisms  99
  2.5 The Change of Coordinate Matrix  110
110 2.6" Dual Spaces...........000-- 119 2.7* Homogencous Linear Differential Equations with Constant Coefficients 127 Index of Definitions 145 3 Elementary Matrix Operations and Systems of Linear Equations 147 3.1 Elementary Matrix Operations and Elementary Matrices... 147 “Sections denoted by an asterisk are optional. Table of Contents 3.2 The Rank of a Matrix and Matrix Inverses .. 2... 4 152 3.3 Systems of Linear Equations Theoretical Aspects... . . « 168 3.4 Systems of Lincar Equations--Computatioual Aspects... . 182 Index of Definitions... 6... ee ee ee 198 Determinants 199 4.1 Determinants of Order 2 199 4.2 Determinants of Order n 209 4.3 Propertics of Determinants 222 4.4 Summary- Important Facts about Determinants... .... 232 4.5* A Characterization of the Determinant Index of Definitions Diagonalization 245 5.1 Eigenvalues and Eigenvectors ..... . . 245 5.2 Diagonalizability . . 261 5.3* Matrix Limits and Markov Chains . . . . 283 5.4 Invariant Subspaces and the Cayley-Hamilton Theorem ... 313 Index of Definitions .. 2.2.2... 2. eee eee 328 Inner Product Spaces 329 6.1 Inner Products and Norms 2... ee 329 6.2 The Gram-Schmidt Orthogonalization Process and Orthogonal Complements 341 6.3 The Adjoint of a Lincar Operator 357 6.4 Normal and Self-Adjoint Operators 369 6.5 Unitary and Orthogonal Operators and Their Matrices... 379 6.6 Orthogonal Projections and the Spectral Theorem ..... . 398 6.7* The Singular Value Decomposition and the Pseudoinverse . . 405 6.8* Bilincar and Quadratic Forms ...... . 422 6.9" Einstein’s Special Theory of Relativity. . . 451 6.10" Conditioning and the Rayleigh Quotient... ......-.. 464 6.11* The Geometry of Orthogonal Operators . . Index of Definitions Table of Contents vii 7 Canonical Forms 482 7.1 The Jordan Canonical Form 1 482 7.2 The Jordan Canonical Form II... .. . 497 7.3 The Minimal Polynomial .. 0... 0000002 516 7.4* The Rational Canonical Form... 0... ee 524 Index of Definitions 2... 2 ee ee 548 Appendices 549 A Sots... ee ee ee eee 549 B Functions . . 551 C Fields ..... 552 D ~~ Complex Numbers... . - - 555, E Polynomials... 2.0.2.0... 561 Answers to Selected Exercises 571 Index 589 Preface The language and concepts of matrix theory and, more generally, of linear algebra have come into widespread usage in the social and natural sciences, computer science, and statistics. In addition, linear algebra continues to be of great importance in modern treatments of geometry and analysis, The primary purpose of this fourth edition of Linear Algebra is to present a careful treatment of the principal topics of linear algebra and to illustrate the power of the subject through a variety of applications. Our major thrust emphasizes the symbiotic relationship between linear transformations and matrices. However, where appropriate, theorems are stated in the more gen- eral infinite-dimensional case. For example, this theory is applied to finding solutions to a homogencous linear differential equation and the best approx- imation by a trigonometric polynomial to a continuous function. Although the only formal prerequisite for this book is a one-year course in calculus. it requires the mathematical sophistication of typical junior and senior mathematics majors. This book is especially suited for a second course in linear algebra that emphasizes abstract vector spaces, although it can be used in a first course with a strong theoretical emphasis. 
The book is organized to permit a number of different courses (ranging from three to eight semester hours in length) to be taught from it. The core material (vector spaces, linear transformations and matrices, systems of linear equations, determinants, diagonalization, and inner product spaces) is found in Chapters 1 through 5 and Sections 6.1 through 6.5. Chapters 6 and 7, on inner product spaces and canonical forms, are completely independent and may be studied in either order. In addition, throughout the book are applications to such areas as differential equations, economics, geometry, and physics. These applications are not central to the mathematical development, however, and may be excluded at the discretion of the instructor.

We have attempted to make it possible for many of the important topics of linear algebra to be covered in a one-semester course. This goal has led us to develop the major topics with fewer preliminaries than in a traditional approach. (Our treatment of the Jordan canonical form, for instance, does not require any theory of polynomials.) The resulting economy permits us to cover the core material of the book (omitting many of the optional sections and a detailed discussion of determinants) in a one-semester four-hour course for students who have had some prior exposure to linear algebra.

Chapter 1 of the book presents the basic theory of vector spaces: subspaces, linear combinations, linear dependence and independence, bases, and dimension. The chapter concludes with an optional section in which we prove that every infinite-dimensional vector space has a basis.

Linear transformations and their relationship to matrices are the subject of Chapter 2. We discuss the null space and range of a linear transformation, matrix representations of a linear transformation, isomorphisms, and change of coordinates. Optional sections on dual spaces and homogeneous linear differential equations end the chapter.

The application of vector space theory and linear transformations to systems of linear equations is found in Chapter 3. We have chosen to defer this important subject so that it can be presented as a consequence of the preceding material. This approach allows the familiar topic of linear systems to illuminate the abstract theory and permits us to avoid messy matrix computations in the presentation of Chapters 1 and 2. There are occasional examples in these chapters, however, where we solve systems of linear equations. (Of course, these examples are not a part of the theoretical development.) The necessary background is contained in Section 1.4.

Determinants, the subject of Chapter 4, are of much less importance than they once were. In a short course (less than one year), we prefer to treat determinants lightly so that more time may be devoted to the material in Chapters 5 through 7. Consequently we have presented two alternatives in Chapter 4: a complete development of the theory (Sections 4.1 through 4.3) and a summary of important facts that are needed for the remaining chapters (Section 4.4). Optional Section 4.5 presents an axiomatic development of the determinant.

Chapter 5 discusses eigenvalues, eigenvectors, and diagonalization. One of the most important applications of this material occurs in computing matrix limits. We have therefore included an optional section on matrix limits and Markov chains in this chapter even though the most general statement of some of the results requires a knowledge of the Jordan canonical form.
Section 5.4 contains material on invariant subspaces and the Cayley-Hamilton theorem.

Inner product spaces are the subject of Chapter 6. The basic mathematical theory (inner products; the Gram-Schmidt process; orthogonal complements; the adjoint of an operator; normal, self-adjoint, orthogonal, and unitary operators; orthogonal projections; and the spectral theorem) is contained in Sections 6.1 through 6.6. Sections 6.7 through 6.11 contain diverse applications of the rich inner product space structure.

Canonical forms are treated in Chapter 7. Sections 7.1 and 7.2 develop the Jordan canonical form, Section 7.3 presents the minimal polynomial, and Section 7.4 discusses the rational canonical form.

There are five appendices. The first four, which discuss sets, functions, fields, and complex numbers, respectively, are intended to review basic ideas used throughout the book. Appendix E on polynomials is used primarily in Chapters 5 and 7, especially in Section 7.4. We prefer to cite particular results from the appendices as needed rather than to discuss the appendices independently.

The following diagram illustrates the dependencies among the various chapters.

Chapter 1
    |
Chapter 2
    |
Chapter 3
    |
Sections 4.1-4.3 or Section 4.4
    |
Sections 5.1 and 5.2
    |              |
Chapter 6     Section 5.4
                   |
              Chapter 7

One final word is required about our notation. Sections and subsections labeled with an asterisk (*) are optional and may be omitted as the instructor sees fit. An exercise accompanied by the dagger symbol (†) is not optional, however; we use this symbol to identify an exercise that is cited in some later section that is not optional.

DIFFERENCES BETWEEN THE THIRD AND FOURTH EDITIONS

The principal content change of this fourth edition is the inclusion of a new section (Section 6.7) discussing the singular value decomposition and the pseudoinverse of a matrix or a linear transformation between finite-dimensional inner product spaces. Our approach is to treat this material as a generalization of our characterization of normal and self-adjoint operators.

The organization of the text is essentially the same as in the third edition. Nevertheless, this edition contains many significant local changes that improve the book. Section 5.1 (Eigenvalues and Eigenvectors) has been streamlined, and some material previously in Section 5.1 has been moved to Section 2.5 (The Change of Coordinate Matrix). Further improvements include revised proofs of some theorems, additional examples, new exercises, and literally hundreds of minor editorial changes.

We are especially indebted to Jane M. Day (San Jose State University) for her extensive and detailed comments on the fourth edition manuscript. Additional comments were provided by the following reviewers of the fourth edition manuscript: Thomas Banchoff (Brown University), Christopher Heil (Georgia Institute of Technology), and Thomas Shemanske (Dartmouth College).

To find the latest information about this book, consult our web site on the World Wide Web. We encourage comments, which can be sent to us by e-mail or ordinary post. Our web site and e-mail addresses are listed below.

web site: http://www.math.ilstu.edu/linalg
e-mail: linalg@math.ilstu.edu

Stephen H. Friedberg
Arnold J. Insel
Lawrence E. Spence

Vector Spaces

1.1 Introduction
1.2 Vector Spaces
1.3 Subspaces
1.4 Linear Combinations and Systems of Linear Equations
1.5 Linear Dependence and Linear Independence
1.6 Bases and Dimension
1.7* Maximal Linearly Independent Subsets
1.1 INTRODUCTION

Many familiar physical notions, such as forces, velocities,¹ and accelerations, involve both a magnitude (the amount of the force, velocity, or acceleration) and a direction. Any such entity involving both magnitude and direction is called a "vector." A vector is represented by an arrow whose length denotes the magnitude of the vector and whose direction represents the direction of the vector. In most physical situations involving vectors, only the magnitude and direction of the vector are significant; consequently, we regard vectors with the same magnitude and direction as being equal irrespective of their positions. In this section the geometry of vectors is discussed. This geometry is derived from physical experiments that test the manner in which two vectors interact.

Familiar situations suggest that when two like physical quantities act simultaneously at a point, the magnitude of their effect need not equal the sum of the magnitudes of the original quantities. For example, a swimmer swimming upstream at the rate of 2 miles per hour against a current of 1 mile per hour does not progress at the rate of 3 miles per hour. For in this instance the motions of the swimmer and current oppose each other, and the rate of progress of the swimmer is only 1 mile per hour upstream. If, however, the swimmer is moving downstream (with the current), then his or her rate of progress is 3 miles per hour downstream.

¹The word velocity is being used here in its scientific sense as an entity having both magnitude and direction. The magnitude of a velocity (without regard for the direction of motion) is called its speed.

Experiments show that if two like quantities act together, their effect is predictable. In this case, the vectors used to represent these quantities can be combined to form a resultant vector that represents the combined effects of the original quantities. This resultant vector is called the sum of the original vectors, and the rule for their combination is called the parallelogram law. (See Figure 1.1.)

Figure 1.1

Parallelogram Law for Vector Addition. The sum of two vectors x and y that act at the same point P is the vector beginning at P that is represented by the diagonal of the parallelogram having x and y as adjacent sides.

Since opposite sides of a parallelogram are parallel and of equal length, the endpoint Q of the arrow representing x + y can also be obtained by allowing x to act at P and then allowing y to act at the endpoint of x. Similarly, the endpoint of the vector x + y can be obtained by first permitting y to act at P and then allowing x to act at the endpoint of y. Thus two vectors x and y that both act at the point P may be added "tail-to-head"; that is, either x or y may be applied at P and a vector having the same magnitude and direction as the other may be applied to the endpoint of the first. If this is done, the endpoint of the second vector is the endpoint of x + y.

The addition of vectors can be described algebraically with the use of analytic geometry. In the plane containing x and y, introduce a coordinate system with P at the origin. Let (a1, a2) denote the endpoint of x and (b1, b2) denote the endpoint of y. Then as Figure 1.2(a) shows, the endpoint Q of x + y is (a1 + b1, a2 + b2). Henceforth, when a reference is made to the coordinates of the endpoint of a vector, the vector should be assumed to emanate from the origin. Moreover, since a vector beginning at the origin is completely determined by its endpoint, we sometimes refer to the point x rather than the endpoint of the vector x if x is a vector emanating from the origin.
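As a concrete instance of this coordinate rule (the endpoints below are chosen only for illustration and do not come from the text): if x ends at (2, 1) and y ends at (3, 4), both emanating from the origin, then

```latex
% Coordinate rule for vector addition with illustrative endpoints:
% x ends at (2, 1), y ends at (3, 4), so x + y ends at
\[
(a_1 + b_1,\; a_2 + b_2) = (2 + 3,\; 1 + 4) = (5,\; 5).
\]
```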
Besides the operation of vector addition, there is another natural operation that can be performed on vectors: the length of a vector may be magnified or contracted. This operation, called scalar multiplication, consists of multiplying the vector by a real number. If the vector x is represented by an arrow, then for any nonzero real number t, the vector tx is represented by an arrow in the same direction if t > 0 and in the opposite direction if t < 0. The length of the arrow tx is |t| times the length of the arrow x. Two nonzero vectors x and y are called parallel if y = tx for some nonzero real number t. (Thus nonzero vectors having the same or opposite directions are parallel.)

To describe scalar multiplication algebraically, again introduce a coordinate system into a plane containing the vector x so that x emanates from the origin. If the endpoint of x has coordinates (a1, a2), then the coordinates of the endpoint of tx are easily seen to be (ta1, ta2). (See Figure 1.2(b).)

Figure 1.2

The algebraic descriptions of vector addition and scalar multiplication for vectors in a plane yield the following properties:

1. For all vectors x and y, x + y = y + x.
2. For all vectors x, y, and z, (x + y) + z = x + (y + z).
3. There exists a vector denoted 0 such that x + 0 = x for each vector x.
4. For each vector x, there is a vector y such that x + y = 0.
5. For each vector x, 1x = x.
6. For each pair of real numbers a and b and each vector x, (ab)x = a(bx).
7. For each real number a and each pair of vectors x and y, a(x + y) = ax + ay.
8. For each pair of real numbers a and b and each vector x, (a + b)x = ax + bx.

Arguments similar to the preceding ones show that these eight properties, as well as the geometric interpretations of vector addition and scalar multiplication, are true also for vectors acting in space rather than in a plane. These results can be used to write equations of lines and planes in space.

Consider first the equation of a line in space that passes through two distinct points A and B. Let O denote the origin of a coordinate system in space, and let u and v denote the vectors that begin at O and end at A and B, respectively. If w denotes the vector beginning at A and ending at B, then "tail-to-head" addition shows that u + w = v, and hence w = v - u, where -u denotes the vector (-1)u. (See Figure 1.3, in which the quadrilateral OABC is a parallelogram.) Since a scalar multiple of w is parallel to w but possibly of a different length than w, any point on the line joining A and B may be obtained as the endpoint of a vector that begins at A and has the form tw for some real number t. Conversely, the endpoint of every vector of the form tw that begins at A lies on the line joining A and B. Thus an equation of the line through A and B is

x = u + tw = u + t(v - u),

where t is a real number and x denotes an arbitrary point on the line. Notice also that the endpoint C of the vector v - u in Figure 1.3 has coordinates equal to the difference of the coordinates of B and A.

Figure 1.3
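One quick check of this equation (added here for clarity; it is not part of the original text): the parameter values t = 0 and t = 1 recover the two given points.

```latex
% Checking x = u + t(v - u) at the endpoints of the parameter range:
\[
t = 0:\quad x = u + 0(v - u) = u,
\]
\[
t = 1:\quad x = u + 1(v - u) = v,
\]
% so the line passes through A (the endpoint of u) and B (the endpoint of v).
```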
Example 1

Let A and B be points having coordinates (-2, 0, 1) and (4, 5, 3), respectively. The endpoint C of the vector emanating from the origin and having the same direction as the vector beginning at A and terminating at B has coordinates

(4, 5, 3) - (-2, 0, 1) = (6, 5, 2).

Hence the equation of the line through A and B is

x = (-2, 0, 1) + t(6, 5, 2). ♦

Now let A, B, and C denote any three noncollinear points in space. These points determine a unique plane, and its equation can be found by use of our previous observations about vectors. Let u and v denote vectors beginning at A and ending at B and C, respectively. Observe that any point in the plane containing A, B, and C is the endpoint S of a vector x beginning at A and having the form su + tv for some real numbers s and t. The endpoint of su is the point of intersection of the line through A and B with the line through S parallel to the line through A and C. (See Figure 1.4.) A similar procedure locates the endpoint of tv. Moreover, for any real numbers s and t, the vector su + tv lies in the plane containing A, B, and C. It follows that an equation of the plane containing A, B, and C is

x = A + su + tv,

where s and t are arbitrary real numbers and x denotes an arbitrary point in the plane.

Figure 1.4

Example 2

Let A, B, and C be the points having coordinates (1, 0, 2), (-3, -2, 4), and (1, 8, -5), respectively. The endpoint of the vector emanating from the origin and having the same length and direction as the vector beginning at A and terminating at B is (-3, -2, 4) - (1, 0, 2) = (-4, -2, 2). Similarly, the endpoint of a vector emanating from the origin and having the same length and direction as the vector beginning at A and terminating at C is (1, 8, -5) - (1, 0, 2) = (0, 8, -7). Hence the equation of the plane containing the three given points is

x = (1, 0, 2) + s(-4, -2, 2) + t(0, 8, -7). ♦

Any mathematical structure possessing the eight properties on page 3 is called a vector space. In the next section we formally define a vector space and consider many examples of vector spaces other than the ones mentioned above.

EXERCISES

1. Determine whether the vectors emanating from the origin and terminating at the following pairs of points are parallel.
   (a) (3, 1, 2) and (6, 4, 2)
   (b) (-3, 1, 7) and (9, -3, -21)
   (c) (5, -6, 7) and (-5, 6, -7)
   (d) (2, 0, 5) and (5, 0, -2)

2. Find the equations of the lines through the following pairs of points in space.
   (a) (3, -2, 4) and (-5, 7, 1)
   (b) (2, 4, 0) and (-3, -6, 0)
   (c) (3, 7, 2) and (3, 7, -8)
   (d) (-2, -1, 5) and (3, 9, 7)

3. Find the equations of the planes containing the following points in space.
   (a) (2, -5, -1), (0, 4, 6), and (-3, 7, 1)
   (b) (3, -6, 7), (-2, 0, -4), and (5, -9, -2)
   (c) (-8, 2, 0), (1, 3, 0), and (6, -5, 0)
   (d) (1, 1, 1), (5, 5, 5), and (-6, 4, 2)

4. What are the coordinates of the vector 0 in the Euclidean plane that satisfies property 3 on page 3? Justify your answer.

5. Prove that if the vector x emanates from the origin of the Euclidean plane and terminates at the point with coordinates (a1, a2), then the vector tx that emanates from the origin terminates at the point with coordinates (ta1, ta2).

6. Show that the midpoint of the line segment joining the points (a, b) and (c, d) is ((a + c)/2, (b + d)/2).

7. Prove that the diagonals of a parallelogram bisect each other.

1.2 VECTOR SPACES

In Section 1.1, we saw that with the natural definitions of vector addition and scalar multiplication, the vectors in a plane satisfy the eight properties listed on page 3.
Many other familiar algebraic systems also permit definitions of addition and scalar multiplication that satisfy the same eight properties. In this section, we introduce some of these systems, but first we formally define this type of algebraic structure.

Definitions. A vector space (or linear space) V over a field² F consists of a set on which two operations (called addition and scalar multiplication, respectively) are defined so that for each pair of elements x, y in V there is a unique element x + y in V, and for each element a in F and each element x in V there is a unique element ax in V, such that the following conditions hold.

(VS 1) For all x, y in V, x + y = y + x (commutativity of addition).

(VS 2) For all x, y, z in V, (x + y) + z = x + (y + z) (associativity of addition).

(VS 3) There exists an element in V denoted by 0 such that x + 0 = x for each x in V.

(VS 4) For each element x in V there exists an element y in V such that x + y = 0.

(VS 5) For each element x in V, 1x = x.

(VS 6) For each pair of elements a, b in F and each element x in V, (ab)x = a(bx).

(VS 7) For each element a in F and each pair of elements x, y in V, a(x + y) = ax + ay.

(VS 8) For each pair of elements a, b in F and each element x in V, (a + b)x = ax + bx.

²Fields are discussed in Appendix C.

The elements x + y and ax are called the sum of x and y and the product of a and x, respectively.

The elements of the field F are called scalars and the elements of the vector space V are called vectors. The reader should not confuse this use of the word "vector" with the physical entity discussed in Section 1.1: the word "vector" is now being used to describe any element of a vector space.

A vector space is frequently discussed in the text without explicitly mentioning its field of scalars. The reader is cautioned to remember, however, that every vector space is regarded as a vector space over a given field, which is denoted by F. Occasionally we restrict our attention to the fields of real and complex numbers, which are denoted R and C, respectively.

Observe that (VS 2) permits us to define the addition of any finite number of vectors unambiguously (without the use of parentheses).

In the remainder of this section we introduce several important examples of vector spaces that are studied throughout this text. Observe that in describing a vector space, it is necessary to specify not only the vectors but also the operations of addition and scalar multiplication.

An object of the form (a1, a2, ..., an), where the entries a1, a2, ..., an are elements of a field F, is called an n-tuple with entries from F. The elements a1, a2, ..., an are called the entries or components of the n-tuple. Two n-tuples (a1, a2, ..., an) and (b1, b2, ..., bn) with entries from a field F are called equal if ai = bi for i = 1, 2, ..., n.

Example 1

The set of all n-tuples with entries from a field F is denoted by F^n. This set is a vector space over F with the operations of coordinatewise addition and scalar multiplication; that is, if u = (a1, a2, ..., an) ∈ F^n, v = (b1, b2, ..., bn) ∈ F^n, and c ∈ F, then

u + v = (a1 + b1, a2 + b2, ..., an + bn)  and  cu = (ca1, ca2, ..., can).

Thus R³ is a vector space over R. In this vector space,

(3, -2, 0) + (-1, 1, 4) = (2, -1, 4)  and  -5(1, -2, 0) = (-5, 10, 0).

Similarly, C² is a vector space over C. In this vector space,

(1 + i, 2) + (2 - 3i, 4i) = (3 - 2i, 2 + 4i)  and  i(1 + i, 2) = (-1 + i, 2i).

Vectors in F^n may be written as column vectors

a1
a2
⋮
an

rather than as row vectors (a1, a2, ..., an). Since a 1-tuple whose only entry is from F can be regarded as an element of F, we usually write F rather than F¹ for the vector space of 1-tuples with entry from F. ♦
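To tie Example 1 back to the definition, here is a sketch (added for illustration) of how (VS 1) is verified for F^n: it reduces to the commutativity of addition in the field F, and the remaining axioms follow from the corresponding field properties in the same coordinatewise manner.

```latex
% (VS 1) for F^n: commutativity in each coordinate is inherited from F.
\[
u + v = (a_1 + b_1, \ldots, a_n + b_n)
      = (b_1 + a_1, \ldots, b_n + a_n)
      = v + u.
\]
```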
An m × n matrix with entries from a field F is a rectangular array of the form

a11  a12  ···  a1n
a21  a22  ···  a2n
 ⋮    ⋮         ⋮
am1  am2  ···  amn

where each entry aij (1 ≤ i ≤ m, 1 ≤ j ≤ n) is an element of F.

Exercises 5 and 6 show why the definitions of matrix addition and scalar multiplication (as defined in Example 2) are the appropriate ones.

5. Richard Gard ("Effects of Beaver on Trout in Sagehen Creek, California," J. Wildlife Management, 25, 221-242) reports the following number of trout having crossed beaver dams in Sagehen Creek.

                     Upstream Crossings
                  Fall    Spring    Summer
   Brook trout      8        3         1
   Rainbow trout    3        0         0
   Brown trout      3        0         0
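For readers without Example 2 at hand (it is not reproduced in this excerpt), the following is a minimal sketch of the matrix operations the exercise refers to, assuming the usual entrywise definitions of addition and scalar multiplication; the particular matrices are arbitrary illustrations.

```latex
% Entrywise matrix addition and scalar multiplication on 2 x 2 matrices.
% Requires the amsmath package for the pmatrix environment.
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
+
\begin{pmatrix} 0 & 5 \\ -1 & 2 \end{pmatrix}
=
\begin{pmatrix} 1 & 7 \\ 2 & 6 \end{pmatrix},
\qquad
2 \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
=
\begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}.
\]
```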
