Chapter 6: Real Vector Spaces

Exercises (Section 6.1, continued)

Let V be the set of all positive real numbers; define the operations by u ⊕ v = uv and c ⊙ u = u^c. Is V a vector space?

Let V be the set of all real numbers; define the operations by u ⊕ v = 2u - v and c ⊙ u = cu. Is V a vector space?

(a) If V is a vector space that has a nonzero vector, how many vectors are in V?
(b) Describe all vector spaces having a finite number of vectors.

Let V be the set consisting of a single element 0, with 0 ⊕ 0 = 0 and c ⊙ 0 = 0. Show that V is a vector space.

Theoretical Exercises

In Exercises T.1 through T.4, establish the indicated result for a real vector space V.
T.1. Show that a vector space has only one zero vector.
T.2. Show that -(-u) = u.
T.3. Show that if u + v = u + w, then v = w.
T.4. Show that a vector u in a vector space has only one negative -u.
Show that if u ≠ 0 and au = bu, then a = b.

MATLAB Exercises

The concepts discussed in this section are not easily implemented in MATLAB routines. The items in Definition 1 must hold for all vectors. Just because we demonstrate in MATLAB that a property of Definition 1 holds for a few vectors, it does not follow that it holds for all such vectors. You must guard against such faulty reasoning. However, if, for a particular choice of vectors, we show that a property fails in MATLAB, then we have established that the property does not hold in all possible cases; hence the property is considered to be false. In this way we might be able to show that a set is not a vector space.

ML.1. Let V be the set of all 2 × 2 matrices with operations ⊕ and ⊙ given by the following MATLAB commands: … Is V a vector space? (Hint: Enter some 2 × 2 matrices and experiment with the MATLAB commands to understand their behavior before checking the conditions in Definition 1.)

ML.2. Following Example 8, we discuss the vector space Pₙ of polynomials of degree n or less. Operations on polynomials can be performed in linear algebra software by associating with each polynomial p(t) of Pₙ a row matrix of size n + 1 consisting of the coefficients of p(t), using the association
p(t) = aₙtⁿ + aₙ₋₁tⁿ⁻¹ + ··· + a₁t + a₀  →  [aₙ aₙ₋₁ ··· a₁ a₀].
If any term is missing, a zero is used for its coefficient. Then the addition of polynomials corresponds to matrix addition, and multiplication of a polynomial by a scalar corresponds to scalar multiplication of matrices. Ask MATLAB to perform the following operations on polynomials, using the matrix association described above. Let n = 3, p(t) = …, and q(t) = t³ + 3t + 5.
(a) p(t) + q(t)   (b) 5p(t)   (c) 3p(t) - 4q(t)

6.2 SUBSPACES

In this section we begin to analyze the structure of a vector space. First, it is convenient to have a name for a subset of a given vector space V that is itself a vector space with respect to the same operations as those in V. Thus we have the following.

DEFINITION  Let V be a vector space and W a nonempty subset of V. If W is a vector space with respect to the operations in V, then W is called a subspace of V.

EXAMPLE 1  Every vector space has at least two subspaces, itself and the subspace consisting only of the zero vector [recall that the single element 0 forms a vector space (see Exercise 19 in Section 6.1)]. The subspace {0} is called the zero subspace.

EXAMPLE 2  Let W be the subset of R³ consisting of all vectors of the form (a, b, 0), where a and b are any real numbers, with the usual operations of vector addition and scalar multiplication. To check whether W is a subspace of R³, we first see whether properties (α) and (β) of Definition 1 hold. Thus let u = (a₁, b₁, 0) and v = (a₂, b₂, 0) be vectors in W. Then
u + v = (a₁, b₁, 0) + (a₂, b₂, 0) = (a₁ + a₂, b₁ + b₂, 0)
is in W, since the third component is zero. Also, if c is a scalar, then cu = c(a₁, b₁, 0) = (ca₁, cb₁, 0) is in W.
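In the spirit of the MATLAB Exercises above, the closure properties can be spot-checked numerically for particular vectors; such an experiment can only disprove, never prove, that a set is a subspace. The following minimal MATLAB sketch (not part of the text; the sample vectors are arbitrary) checks closure for the subset W of Example 2:

    % Spot check of the closure properties for
    % W = { (a, b, 0) : a, b real } as a subset of R^3.
    % A check like this can only DISPROVE closure; it never proves it.
    u = [3 -1 0];          % a sample vector in W (third entry zero)
    v = [2  5 0];          % another sample vector in W
    c = -4;                % a sample scalar

    s = u + v;             % candidate for closure under addition
    m = c * u;             % candidate for closure under scalar multiplication

    % A vector lies in W exactly when its third component is zero.
    in_W_sum    = (s(3) == 0)
    in_W_scalar = (m(3) == 0)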
We can easily verify that properties (a) through (h) of Definition 1 also hold. Hence W is a subspace of R³.

Before listing other subspaces, we pause to develop a labor-saving result. We just noted that to verify that a nonempty subset W of a vector space V is a subspace, we must check that (α), (β), and (a) through (h) of Definition 1 hold. However, the following theorem says that it is enough to merely check that (α) and (β) hold. Property (α) is called the closure property for ⊕, and (β) is called the closure property for ⊙.

THEOREM 6.2  Let V be a vector space with operations ⊕ and ⊙ and let W be a nonempty subset of V. Then W is a subspace of V if and only if the following conditions hold:
(α) If u and v are any vectors in W, then u ⊕ v is in W.
(β) If c is any real number and u is any vector in W, then c ⊙ u is in W.

Proof  Exercise T.1.

Remark  Observe that the subspace consisting only of the zero vector (see Example 1) is a nonempty subspace.

EXAMPLE 3  Consider the set W consisting of all 2 × 3 matrices of the form
[a  b  0]
[0  c  d],
where a, b, c, and d are arbitrary real numbers. Then W is a subset of the vector space M₂₃ defined in Example 4 of Section 6.1. Show that W is a subspace of M₂₃. Note that a 2 × 3 matrix is in W provided its (1, 3) and (2, 1) entries are zero.

Solution  Consider
u = [a₁  b₁  0 ]        v = [a₂  b₂  0 ]
    [0   c₁  d₁]  and       [0   c₂  d₂]
in W. Then
u + v = [a₁ + a₂   b₁ + b₂   0      ]
        [0         c₁ + c₂   d₁ + d₂]
is in W, so that (α) of Theorem 6.2 is satisfied. Also, if k is a scalar, then
ku = [ka₁  kb₁  0  ]
     [0    kc₁  kd₁]
is in W, so that (β) of Theorem 6.2 is satisfied. Hence W is a subspace of M₂₃.

We can also show that a nonempty subset W of a vector space V is a subspace of V if and only if au + bv is in W for any vectors u and v in W and any scalars a and b (Exercise T.2).

EXAMPLE 4  Let W be the subset of R³ consisting of all vectors of the form (a, b, 1), where a and b are any real numbers. To check whether properties (α) and (β) of Theorem 6.2 hold, we let u = (a₁, b₁, 1) and v = (a₂, b₂, 1) be vectors in W. Then
u + v = (a₁, b₁, 1) + (a₂, b₂, 1) = (a₁ + a₂, b₁ + b₂, 2),
which is not in W, since the third component is 2 and not 1. Since (α) of Theorem 6.2 does not hold, W is not a subspace of R³.

EXAMPLE 5  In Section 6.1 we let Pₙ denote the vector space consisting of all polynomials of degree ≤ n together with the zero polynomial, and we let P denote the vector space of all polynomials. It is easy to verify that P₂ is a subspace of P₃ and, in general, that Pₙ is a subspace of Pₙ₊₁ (Exercise 7). Also, Pₙ is a subspace of P (Exercise 8).

EXAMPLE 6  Let V be the set of all polynomials of degree exactly 2; V is a subset of P₂, but it is not a subspace of P₂, since the sum of two polynomials of degree 2 can be a polynomial of degree 1 (the t² terms may cancel), and such a sum is not in V.

EXAMPLE 7  (Calculus Required) Let C[a, b] denote the set of all real-valued continuous functions that are defined on the interval [a, b]. If f and g are in C[a, b], then f + g is in C[a, b], since the sum of two continuous functions is continuous. Similarly, if c is a scalar, then cf is in C[a, b]. Hence C[a, b] is a subspace of the vector space of all real-valued functions that are defined on [a, b], which was introduced in Example 5 of Section 6.1. If the functions are defined for all real numbers, the vector space is denoted by C(-∞, ∞).

We now come to a very important example of a subspace.

EXAMPLE 8  Consider the homogeneous system Ax = 0, where A is an m × n matrix. A solution consists of a vector x in Rⁿ. Let W be the subset of Rⁿ consisting of all solutions to the homogeneous system. Since A0 = 0, we conclude that W is not empty. To check that W is a subspace of Rⁿ, we verify properties (α) and (β) of Theorem 6.2. Thus let x and y be solutions. Then
Ax = 0  and  Ay = 0.
Now
A(x + y) = Ax + Ay = 0 + 0 = 0,
so x + y is a solution. Also, if c is a scalar, then
A(cx) = c(Ax) = c0 = 0,
so cx is also a solution. Hence W is a subspace of Rⁿ, called the solution space of the homogeneous system, or the null space of A. It should be noted that the set of all solutions to the linear system Ax = b, where A is m × n, is not a subspace of Rⁿ if b ≠ 0 (Exercise T.3).
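The closure argument of Example 8 can likewise be illustrated numerically. A minimal MATLAB sketch (an addition to the text; the matrix A below is illustrative only) builds two solutions of Ax = 0 from a basis of the null space and confirms that their sum and a scalar multiple are again solutions:

    % Illustration of Example 8: solutions of Ax = 0 are closed under
    % addition and scalar multiplication. The matrix A is just an example.
    A = [1 2 -1;
         2 4 -2];                        % 2 x 3, so nontrivial solutions exist

    N = null(A, 'r');                    % columns form a (rational) basis of the null space
    x = N(:, 1);                         % one solution of Ax = 0
    y = 3 * N * ones(size(N, 2), 1);     % another solution (a combination of basis vectors)

    % Both residuals should be zero vectors.
    A * (x + y)
    A * (-7 * x)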
EXAMPLE 9  A simple way of constructing subspaces in a vector space is as follows. Let v₁ and v₂ be fixed vectors in a vector space V and let W be the set of all linear combinations (see Section 1.3) of v₁ and v₂, that is, W consists of all vectors in V of the form a₁v₁ + a₂v₂, where a₁ and a₂ are any real numbers. To show that W is a subspace of V, we verify properties (α) and (β) of Theorem 6.2. Thus let
w₁ = a₁v₁ + a₂v₂  and  w₂ = b₁v₁ + b₂v₂
be vectors in W. Then
w₁ + w₂ = (a₁v₁ + a₂v₂) + (b₁v₁ + b₂v₂) = (a₁ + b₁)v₁ + (a₂ + b₂)v₂,
which is in W. Also, if c is a scalar, then
cw₁ = (ca₁)v₁ + (ca₂)v₂
is in W. Hence W is a subspace of V.

The construction carried out in Example 9 for two vectors can easily be performed for more than two vectors. We now give a formal definition.

DEFINITION  Let v₁, v₂, ..., vₖ be vectors in a vector space V. A vector v in V is called a linear combination of v₁, v₂, ..., vₖ if
v = c₁v₁ + c₂v₂ + ··· + cₖvₖ
for some real numbers c₁, c₂, ..., cₖ. (See also Section 1.3.)

In Figure 6.1 we show the vector v in R² or R³ as a linear combination of the vectors v₁ and v₂.

EXAMPLE 10  In R³, let
v₁ = (1, 2, 1),  v₂ = (1, 0, 2),  and  v₃ = (1, 1, 0).
The vector
v = (2, 1, 5)
is a linear combination of v₁, v₂, and v₃ if we can find real numbers c₁, c₂, and c₃ so that
c₁v₁ + c₂v₂ + c₃v₃ = v.
Substituting for v, v₁, v₂, and v₃, we have
c₁(1, 2, 1) + c₂(1, 0, 2) + c₃(1, 1, 0) = (2, 1, 5).
Combining terms on the left and equating corresponding entries leads to the linear system (verify)
c₁ +  c₂ + c₃ = 2
2c₁      + c₃ = 1
c₁ + 2c₂      = 5.
Solving this linear system by the methods of Chapter 1 gives c₁ = 1, c₂ = 2, and c₃ = -1, which means that v is a linear combination of v₁, v₂, and v₃. Thus (verify)
v = v₁ + 2v₂ - v₃.
Figure 6.2 shows v as a linear combination of v₁, v₂, and v₃.

DEFINITION  If S = {v₁, v₂, ..., vₖ} is a set of vectors in a vector space V, then the set of all vectors in V that are linear combinations of the vectors in S is denoted by span S or span {v₁, v₂, ..., vₖ}.

In Figure 6.3 we show a portion of span {v₁, v₂}, where v₁ and v₂ are noncollinear vectors in R³; span {v₁, v₂} is a plane that passes through the origin and contains the vectors v₁ and v₂.

EXAMPLE 11  Consider the set S of 2 × 3 matrices given by
S = { [1 0 0]   [0 1 0]   [0 0 0]   [0 0 0] }
    { [0 0 0] , [0 0 0] , [0 1 0] , [0 0 1] }.
Then span S is the set in M₂₃ consisting of all vectors of the form
a [1 0 0] + b [0 1 0] + c [0 0 0] + d [0 0 0]
  [0 0 0]     [0 0 0]     [0 1 0]     [0 0 1],
where a, b, c, and d are real numbers. That is, span S is the subset of M₂₃ consisting of all matrices of the form
[a  b  0]
[0  c  d],
where a, b, c, and d are real numbers.

THEOREM 6.3  Let S = {v₁, v₂, ..., vₖ} be a set of vectors in a vector space V. Then span S is a subspace of V.

Proof  See Exercise T.4.

EXAMPLE 12  In P₂, let
v₁ = 2t² + t + 2,  v₂ = t² - 2t,  v₃ = 5t² - 5t + 2,  v₄ = -t² - 3t - 2.
Determine whether the vector
u = t² + t + 2
belongs to span {v₁, v₂, v₃, v₄}.

Solution  If we can find scalars c₁, c₂, c₃, and c₄ so that
c₁v₁ + c₂v₂ + c₃v₃ + c₄v₄ = u,
then u belongs to span {v₁, v₂, v₃, v₄}. Substituting for u, v₁, v₂, v₃, and v₄, we have
c₁(2t² + t + 2) + c₂(t² - 2t) + c₃(5t² - 5t + 2) + c₄(-t² - 3t - 2) = t² + t + 2
or
(2c₁ + c₂ + 5c₃ - c₄)t² + (c₁ - 2c₂ - 5c₃ - 3c₄)t + (2c₁ + 2c₃ - 2c₄) = t² + t + 2.
Now two polynomials agree for all values of t only if the coefficients of respective powers of t agree.
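By machine, deciding whether a vector belongs to the span of given vectors is a consistency question for a linear system: place the given vectors in the columns of a matrix, augment with the target vector, and row-reduce. A minimal MATLAB sketch (standard rref only; an addition to the text) using the data of Example 10:

    % Is v = (2, 1, 5) a linear combination of v1, v2, v3 (Example 10)?
    v1 = [1; 2; 1];  v2 = [1; 0; 2];  v3 = [1; 1; 0];
    v  = [2; 1; 5];

    R = rref([v1 v2 v3 v])   % row-reduce the augmented matrix [v1 v2 v3 | v]
    % Here R = [eye(3) [1; 2; -1]], so the system is consistent and
    % v = 1*v1 + 2*v2 + (-1)*v3, matching the result found above.
    % An inconsistent reduced form (a row of the type [0 0 0 | 1]) would
    % mean that v does NOT belong to span {v1, v2, v3}.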
Thus we get the linear system 2e + c+ 5e3- 6 1 — 2c — Se — 34 2c + 2c; — 2cg = 2. To investigate whether or not this system of linear equations is consistent, We form the augmented matrix and transform it to reduced row echelon form, obtaining (verify) A — 1 0 i) Of 3) 0 0 0 0 which indicates thatthe system is inconsite that is, it has no solution. — Hence u does not belong to span [¥1. 2. ¥3-¥4 In general, to determine if a specific vector v belongs to span S, we investigate the consistency of an appropriate linear system. SS —__By Which of the following subsets of the vector sp, subspaces of Mich of te following subsets of RY are R27 The set of all vectors of the form {a) (a,b, 2) 0) (a, b,c), where ¢ = +b. (6) (a,b,c), where e > 0 Which of the following subsets of R? are subspaces of > R°? The set of all vectors of the form fa) (a. b,c), where & 0. : wa (by (abe). where a fo) (a,b ch where b= 2a + 4, Which of the following subsets of R are subspaces of RA) The set ofall vectors of the form (a) (a,b. ed), where a — b= 2 t @) (a.diccd)wherec = a+ 2band d = a— 30.5 (©) (a. dec.d), where a = Oand b= —d. / J a/’Which of the following subsets of R* are subspaces of R*? The set of all vectors of the form (a) (a, b.c.d), wherea = = 0. ()) (a, b,c. d), where a = 1,b=0,anda+d=1. (©) (a,b, c,d), where a > Oand b < 0. 5, Which of the following subsets of P, are subspaces? ~The set of all polynomials of the form + ay + ap, where do = 0. yt + do, where do = 2. (©) gat? + ait + ao, where ay + ay Which of the following subsets of P, are subspaces? ‘The set of all polynomials of the form (a) ast? + 0,1 + ao, where a; = Oand ay = 0 (b) ast? +11 +p, where ay = 2a. (©) yt? +ayt +05, where a2 + a4 + a9 = 2. 7. (af Show that P, is a subspace of P3. (by Show that P, is a subspace of P,+1. 9? Show that P, is subspace of P. mm ” y Let u = (1,2, ~3) and v = (—2, 3, 0) be two vectors in how that P is a subspace of the vector space defined in Example 5 of Section 6.1 Rand Jet W be the subset of R? consisting of all vectors of the form au + by, where @ and b are any real numbers. Give an argument to show that W is a subspact of R? Letu = (2,0,3,~4) and v = (4,2, -5, 1) be two vectors in R* and let W be the subset of R* consisting of all vectors of the form au + bv, where « and b are any real numbers, Give an argument to show that W is a subspace of R*. defined in Example 4 of Section 6.1 are Subspace, 4, "Ne set of all matrices of the form abc = (i ‘ owner b= ate be 9) i : ofwneree 0 io (i : p]rtera= —Peand ste Aa. Which of the following subsets of the vector spacey defined in Example 4 of Section 6.1 are subspace, set of all matrices of the form he ' (a) la 4 jjomeaaret “la bof. “© (‘ , ftereate= oan bd jas “44, Which ofthe following subsets of the vector spc ate subspaces? (@) The set of all nxn symmetric matrices. (b) The set of all n x m nonsingular matrices. |) The set ofall x m diagonal mates JS. Which of the following subsets of the vector space are subspaces? nO (a) The set of all n x n singular matrices. (b) The set of all n x n upper triangular mafices. . (©) The set of all n x m matrices whose detrita (Caleulus Required) Which of the following ibe are subspaces of the vector space C(—09, 00) define! Example 7? (a) All nonnegative functions. we (b) All constant functions. (©) All funetions f such that f(0) = 0. (4) All functions f such that f(0) = 5 (€) All differentiable functions. YW (Calculus Required) Which ofthe followins the vector space C (00, 00) defined in Examp! subspaces? subse! 
ye 78 %, (a) ‘All integrable functions. (b) All bounded functions. (©) All functions that are integrable on (a. (4) All functions that are bounded on [4,6]: af Consider the differential equation eo ww jo ts a tval salwed function f satistying the se Let be the set of all solutions to the given nat equation define @® and © as in Example Sin ae Shaw thal V8 @ subspace of the vector ws defined on (~90, 20) rot the following subsets of RE are ee —— a | nee er — o , & ~ . Dasrnine which of the following subsets of AY are sabes w ‘ thats subset W of a yewtur space V is 4 Tol amt only ithe following condition Andy are any vectors in Wy 2! 8 ave any scalars, then au + BY Pee et a sation to AX MOE subspace of RT if = @ Me b, where 4. eM Og YO) be a set of vectors im a vector 7 v Sec.6.2 Subspacer 251 wo) 72M. In each part, determine whether the given vector ¥ belongs to span {¥). ¥2. vs}. where (1.0.0.1), ¥:=(1.=1.0,0), and % », WH 14.2.9. U6) ¥= 0.2.0.1) SO VSCLLAS. O@ ¥=.1.1.0) 22 Which of the following vectors are linear combinations of 23: In each part, determine whether the given vector p(t) belongs to span (p(t), p2(1). py(t)]. where 7 Pde Lp) =P +1 (a) pu) 38 -3e pir 1+) (d) pr) = 2 ©) peered space V. and let W be a subspace of V containing S. ‘Show that W contains span S LO If A isa nonsingular matn. what is the null space of A? Justify your answer. 17. Ler be a fixed vector in 2 vector space V. Show that the set W cosisting of all scalar multiples cy of xo is a subspace of ¥ TB. Let A be an m xn mama Is the set W of all vectors x in R such that Ax = a subspace of R*? Justify your answer 252 Chapter 6 Real Vector Spaces gt ane (0) anal A TO. Show that the only subspaces of A are (0) itself. Phage vn Ft is im Wy ane wy asin Wy TAO, Let Wy and Ws be sol AW) 4 Wy be the set otal ves wy 4 wewwhere Wy ‘ Show that Wy + We ava subypaace of T TAL, Let and Ws Be satbyptees of vector pate VE well Maris Exercises MLE Lett he ante W fee hye of of vets jaf the form (2.4, )) where @ and > ane any real numbers Is Wa subypace of 1 Use the following AL ATE AB commands ty help you determine the al = fix(10 + randn); 12 = fin 10 + randn): DI = fix 10 « randn): 2 = fixi10 + randn); al bl] w= [2 a2 b2) ML Let \ be Py and Jet W be the subset of V of vectors the form ax* ~ bx +S, where a and b are arbitrary ‘eal numbers With each such polynomial in W we associate 2 vector (a..6. 5) in R*. Construct commands like those in Exercise ML.1 to show that W isnot s subspace of V Before solving the following MATLAB exercises, you should hhave read Section 12.7 ML. Use MATLAB to determine if vector visa linear combination of the members of set $. a) S=tvy.¥,¥5) 11,0.0.1),(0.1. 1.0). 11,1) O11). d 0) S=lv4.¥5,¥5) I Use MaTLan to combination of th * in terms of the members ut @ S=tvi.v, yy) 0.2.0.0, 2.14.4) MLA, er mine if Vis 4 linear Ne Members of set $1 it iS, express, 7 1.8, 4) V5 = 101 Let. WPS be ayy cis t a Suppose that V wt every vector in V can be uniquely y whos wy in We and wy is in W, write Vee Wy Wy and say than ‘of the subspaces WV) and WV), tite 9 i Mathis thea ln ‘2, Shaw thatthe se OF AE PIAS in th yy Oe by Lec Os i subsp H (b) Se 1A Ay Ad oP pa, Hooft off g il veh MIL.S, Use MATL An to determine iF is. hing combination of the members of set §- {fy ¥ in terms of the members of §: (a) SS (Vi. Va. V4. Va) 1 °° 7 2 fr 2 yt af fy | = I 2}. oO}. 1 of f= of | a H ldo 0 =! I -2 1 0) S= (rt), pr, pun} 2? ree 2, VE pi) =4r ares. ML.6. In each part, determine whether ¥ belongs toss where Vis Va. Va) = LOD. 1.0, DO. 
2D (a) v= (2, 3,23), bh) v=Q (©) ¥=(0.1,2,3) ML.7. In cach part, determine whether p(s) belongs! 5, where Y= (pt), pated, Pun) Sen bre he eee (PO =P pe ) pws 42 () pus 241 5 LINEAR INDEP' 6 Solution roo aa | Sov 64 Linear Independence 253. ENDENCE Thus tar we have letined a mathematical system called a real vector space nat notat some of tts properties, We further observe that the only real vector save having @ finite mumber of vectors in itis the vector space whose only A is 0, tor ity # O is tn a vector space V, then by Exercise Tin Section SA e eve where cand e ate distinet real numbers, anid so Vas wntinitely tmany vectors iti However, in this section and the following ane we show that mest vector ytees V tudied here have a set composed of «finite sumer aevectars that completely describe V. I'shouk be noted that, im general there wee thart ane such set describing 1 We now turn to a formulation of these ideas. The veetors Vyovs ooo Ye tn a vector space VF are said to span V if ev ery vector in Vis a near combination of vy. ¥2. 7M Moreover. if = (yto Ya cos Yale them ave also say that the set S spans V7, orthat (¥). ¥ Spans V, of that Vis spanned by S. or im the language of Sec span S “vq span the vector space The procedure to check if the Vectors V1. ¥2+ V isas follows. Step 1. Choose an arbitrary vestor vin Vs” Step 2, Determine if visa linear combination of the B10 vectors. If itis: then the given vectors span is not, they do not span V+ the consistency of a linear system, but this ume Again we investigate a right side that represents an arbitrary vector im 8 YECIOr space V Let V’ be the vector space RS and let yy = (2D, 2 (1,02), and vy = (1.1.0). Do vj. ¥z-and vy span V2 Step 1. Lety = (a, b.0) be any vector in R', where a, b, and c are arbitrary real numbers ‘Step 2. We must find out whether there are constants ¢1, ¢2,and cy such that ey) ben How = ‘This leads to the linear system (verify) at ates 21 +eved tia oe A solution iy (verify) ace we have obtained a sotution for every choieg of Pandy we conclude Mee eee span A" This is equivatent vo sayin’ that span (¥-¥ va) = RY . Lo fe 1) fo 0} 0 of Lt of-fo a] spans the subspace of Ms: consisting of all symmetric marca. Solution Sep I, An arbitrary and care any real numbers We must find constants ¢ ft e+ which leads to a linear system whose solution is (verify) Ge eo Since we have found 2 solution far every choice of @. 5. and cw thst S spans the given subspace. (Pitt), P2G)}, where py Let V be the vector space PL. Let § and px) =F = 2. Does S span Py Solution Step 1. Let pie) = ar? + dt + be any polynomial in AL where are any real numbers, Step 2. We must find out whether there are constants ¢) and cy sash PUD + exp) pu) = or c=aPezensg artbr Thus ler Fe” + Ret +(e) + 2g) S ar + be dec. Since two polynomials agree far all values of : only if the coefticieats of > SPectve Powers of f agree, we obtain the linear system Using elementary row operations on the augmented matrix of this lines tem, we obtain (verify) > il os Linear Independence 255 The vectors e) = i = (1, Oyande. » OYand e Vin Section 4.1, if a = (u,u3) is an ; C 2) is any vector in R2, then u = As be noted in Section 4, every vector u in R? aie it a Riad on Catan of the vectors e, = 1 (1,0, 0), ey a a mbinatio ; 0,0), = 01,0), and e; = k= G00, This, « nd ex span Sia, he aca (A + €2 = (0,1,00...,0)....,€) = 0,0,... an R", since a vector U = (1, 02.6... Uy) in R” can be written as a (0, 1) span R?, for as was observed WS He) + Ze) + + Uy. . 
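For vectors in Rⁿ, the spanning test just described can be automated: v₁, v₂, ..., vₖ span Rⁿ exactly when the matrix having these vectors as its columns has rank n, for then the linear system of Step 2 is consistent for every choice of right side. A minimal MATLAB sketch (the vectors are illustrative, not those of the examples):

    % Do the columns of V span R^3?  They do exactly when rank(V) equals 3,
    % i.e. when V*c = b is consistent for every b in R^3.
    V = [1 1 2;
         2 0 1;
         1 3 1];

    if rank(V) == size(V, 1)
        disp('the columns span R^n');
    else
        disp('the columns do not span R^n');
    end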
1The set S = {17 ee ++, 1] spans P,, since every polynomial in P, is of y at" + ay! $e pg it + ay, which is a linear combination of the elements in-S. . Consider the homogeneous linear system Ax = 0, where 11 1 1 0 2 2-21-55 ae) i 1 3]y / 44-1 9 From Example 8 in Section 6.2, the set of all solutions to Ax = 0 forms a subspace of R‘. To determine a spanning set for the solution space of this homogeneous system, we find that the reduced row echelon form of the aug- mented matrix is (verify) 7 where r and s are any real numbers. In matrix form we have that any member of the solution space is given by = YY and Hence the vectors 0 ant 0 tt DSH Cyaper Heal Vector New eye te 0 a CCI on m= SQ, a is linearly depende ether py), PDs PMD n Nt oj tninten tte naan PS 2AM, The th ita yn (eH) ‘ 2c) +3e1 = 0 “ (3 ey ey + 200 = 0 ‘ 21 + 2cy =0, vie has infinitely many solutions (verify). A particular solution is, eye hers hao Pi) + pat) = pO = Hence 8 is lineurly depen etccon, ctor space and v; is the zero veg nG Ivy. vo.0.2.%4 are & vectors in any vec \ %, Equation (1) holds by letting ¢; = | and cy) = 0 for j # i. Thus < 4 is linearly dependent. Hence every set of vectors containing —— rector is linearly dependeat. . Let $; und $2 be finite subsets of a vector space and let Sy be a subse of $3. Then (a) if) is linearly dependent, so is So; and (b) if Sis linary independent, so is $; (Exercise T.2) We consider next the meaning of linear independence in R? and R?. Sup- Pose that v; and v2 are linearly dependent in R?. Then there exist scalars ¢, tnd ¢, not both zero, such that civ) + eave Mey #0, then Wey #0, then ‘Thus one of the vectors is a scalar multiy aon fhe va multiple of the other. Conversely, suppos® Iv; — cv, =0, s re heat coefficients of v; and v2 are not both Zer0, it follows that vi amd only if 7 ne Thus vj and v3 are linearly dependent in R? ey Vectors is a multipl ink are linearly dependent if gay nib of the other. Hence two vectors through the origin (Figure eal. iF they both ie an te same Sine past Suppose now : can weite that vi. va, and v3 are linearly dependent in 3, Then ¥ CIV + c2¥2 + e345 =0, 'ot all zero, say cy 0. Then ==(23)n-(2)w where cy, 2, and Cy are ng Figure 6.4 ‘pe Figure 6.5 > Proof Sec. 6.3 Linear Independence 259 (a) Linearly dependent vectors in R? sin (b) Lineurly independent vectors in R? which means that v2 is in the subspace W spanned by v; and v " Now W is either a plane through the origi : wae te othe ign hn a a dent), or the origin (when vj = ¥2 = v3 = 0). Since a line through the ongin always lies in a plane through the origin, we conclude that vy, ¥», and val lie in the same plane through the origin. Conversely, suppose that vy, v9, and v3 all lie in the same plane through the origin. Then either all three vectors are the zero vector, or al three vectors lie on the same line through the origin, or all three vectors lie in a plane through the origin spanned by two vectors, say v; and v3. Thus, in all these cases, v2 is a linear combination of vy and vs: ¥2 = av) + a3¥3. ayy — 1v2 +43¥3 = 0, which means that v1, vz, and v3 are linearly dependent. Hence three vectors in R° are linearly dependent if and only if they all lie in the same plane passing through the origin [Figure 6.5(a)]. 0 ea! vw om oe fed (a) Linearly dependent veetors in R? (b) Linearly independent vectors in R?. i V. We can More generally, let u and v be nonzero vectors in a vector space V.. 
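A spanning set for the solution space of a homogeneous system can also be produced by machine: MATLAB's null command with the 'r' option returns the vectors obtained from the free variables of the reduced row echelon form, in the same way as the hand computation above. A minimal sketch (the matrix is illustrative, not the matrix A of this example):

    % Spanning set (in fact a basis) for the solution space of Ax = 0.
    % The matrix A below is illustrative only.
    A = [1 1 1 1;
         1 2 1 3;
         2 3 2 4];

    N = null(A, 'r')       % columns span the solution space (rational form)
    A * N                  % every column of this product is the zero vector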
show (Exercise "T13) that u and v are linearly dependent if and only if there see scalar k such that v = ku. Equivalently, w and v are linearly independent if and only if neither vector is a multiple ofthe other. This approach will not work with sets having three or more vectors. Instead, we use the result given by the following theorem. | .¥q ina vector space V are linearly dependent The Ee ee > 2, isa linear combination of the if and only if one of the vectors Vj» preceding vectors Viy Vay +++» Vi-I" If) isa linear combination of Vi, ¥2n---" Ye!" yj seim teat FN then eqn teava tee tepid +O FOV too + Ov 1 ev th Wer Real Vector Spaces EXAMPLE 13 Remarks at least one coefficient, ~1, is nonzero, we conclude that y,_ 4. linearly dependent. : \ Conversely, suppose that v1, ¥2,-...¥_ are linearly de Cy, Not all zero, such that Penden, > there exist scalars ¢, cry, + env? + + enV, = 0. Now let,’ be the largest subscript for which c; #0. 1f j > 1, then ed el Co il Ij = 1. then e.y, = 0, which implies that vy = 0, a contradiction 4 , hypothesis that none of the vectors are the zero vector. Thus one of the yen ® ¥) isalinear combination ofthe preceding vectors ¥1.¥2.....¥.). fy, v2. v3. and vy are as in Example 9, then we find (verify) that vy, +¥2 + Ov; — v4 = 0, 80 V1, V2. Va, and vy are linearly dependent. We then have vy=vit v2. 1 1, We observe that Theorem 6.4 does not say that every vector v is a lin combination of the preceding vectors. Thus, in Example 9, we also hing the equation v; + 2v2 + V3 + Ov, = 0. We cannot solve, in this equatce for v4 as a linear combination of v;, v2, and v3, since its coefficient i zero. 2. We can also prove that if § = (v1, v2, ..., Vk) isa set of vectors ina vectr space V, then $ is linearly dependent if and only if one of the vectorsis Sis a linear combination of all the other vectors in S (see Exercise 73) For instance, in Example 13, W=—V2—Ovs+v4 and v2 =—}yy 3V3 - Ovs. 3. Observe that if v,, v2, ..., vg are linearly independent vectors in a vectt space, then they must be distinct and none can be the zero vector. The following result will be used in Section 6.4 as well as in several othe! places. Suppose that S = (vj, v2,..., Vn} spans a vector space V and ¥; is! linear combination of the preceding vectors in S. Then the set SL SAV V2y ee Vyety Vjgty vey Vals consisting of $ with v, deleted, also spans V. To show this result, observe t* if vis any vector in V, then, since S spans V, we can find scalars a1, a2, «-::% such that GIVE + ava +++ Haj 1V jg HayVy + ayy iV yg Hoos tb anVe Now if Vy = Biv) + bavy $e + by aV jy then VS Qyvi tava +o ay av ja1 +.aj(b,Yy + byvy + oe + bp-iV-!) Hays V 41 bo tb agVy FIV + CaVg Het paAVjnn H CyHV jg tone + eaVin which means that span 5; = V. See. 6.3. Linea Independence 261 Consider the set of vectors § = (vj, v2. vs. Va) in RY, where 1 1 0 z ! few] gfe and weedy 0 0 A 4 and let W = span S. Since vy = ¥) + ¥. we cont W = span 4M ic ja a 7 onclude Si geno the 8. Let ach DY 1.2) r ‘ i ean =f]. v-[-]. 2 2.0.2.9 H a 2 2 belong to the solution space of Ax = 0. Is (x1, ¥2.%) sic ofthe following sets of vectors span R?? linearly independent? ip i. -1-2)-(0. 1 DI So Let a py (1.2. -1-6.3,0.4 1 2), 2,-5.4) 1 F ‘ oa. 2D. .L0)- _|2 0 6 e' melo) = ]-1 w=|) 1 1 9, 44 ((1.0.0). (0. 1,0), 0. 0,0, 1, D} | wc ofthe following vectors span R*? (0,0. 1). (0. 1,0,0), (1.161, Ds ls $9) 0.21.0). 1.1, -1-0). 0.0.0.0. 10. Which of te following sets of vectors in Bare linearly 16 (6.4, 2.4). (2.0.0, Dy (3.2.1 De ependent? 
For thos that are, expresso Vectra 3 6,6.-3.2), (0.4, -2,-D. linear combination of the rest pelong tothe mul space of A. I 1. %2 83 Hinearly v 11,0). independest? @ (1.1.0.0), (1,2, -1, D, 0,0, 1, Dv (a) (,2,-D. 3.2, 5). ae (6) (04,2, 1, 2.6,-8)- (1-2-3) HE ics on fotowing sts of polynomials span mt (6) (01.1.0), (0.2.3) (1.2.3. 8:6.9)) @iP+iP+n141) y Oy (@ (0.2.3.1, D.(.0 DL teehee WP 42,27 ‘nl 11. Consider the vector space R*. Follow the directions: of Dwarg let 2 +e+4d oa Exersise 10. a ea fa), 12, Do (1,0, 0,2) (6,846 (0.3.2 ») Dot polynomials + 2+ Io? a1t2,042 Ge) (23 De(-2 4-6.) S | $f -5 a () (he 1), (2,351 2.3, 1.2.0, (2.2.1. Dh ase of vectors spa 7 Outer ining the solution space of @ 14. 2,-1.3.(6.5.-5. D2 oy) | 0 1 0 42. Consider’ the vector space Py, Follow the directions of A 23 1 Exercise 10 1 3 1 124 4 of vectors spanning the null space of yrou2-l Piao . = FR 13, Consider the vector pace 2S | Exercise 10 Mz. Fallow the directions of 1) ft oy fo 37 po oy zy fo afe[r 2f-[4 6 Voi) ft oy fo a TL iflo 2f-fo 2 14. Let V be the vector space of all real-valued continuous functions, Follow the directions of Exercise 10. (a) feos, sins, e'), Theoretical Exercises TL. Show that the vectors e1,€3, +€, in R" are linearly independent. T2, Let 5) and Sy be finite subsets of a vector space and let Si be a subset of $2, Show that: (2) IVS1 is linearly dependent, so is 5. (0) If Ss is linearly independent, so is 5} T3. LES = t¥1.¥5.... v4) beaset of vectors in a vector Space. Show that $ is linearly dependent if and only if one of the vectors in S is a linear combination of all the other vectors in S. T4. Suppose that $ = (v3, v2, vs) is a linearly independent ‘Set of vectors in a vector space V. Show that 7 = (wi. W2, 5) is also linearly independent, where MHS VHD + Vi w2 = va +05, and Wy = ¥5 TS. Suppose that § = (vs, vs, v3) is a linearly independent Set of vectors in a vector space V. Is T = (w1. 2, Ws), where w, w= 1= Vi + V2, =v +5, 2 + Ws. linearly dependent or linearly independent? Justify your answer, 6. Suppose that $= (v1, v2, v5} isa linearly dependent Set of vectors in a vector space V. Ig T= (1, w2, ws), where w, =Vw = inten tt? + ¥3: linearly dependent or 1 independent? Justify your answer. T7. Let vs, vacand v, be vectors in a Yector space such that {v1.2} is linearly independent. Show te +¥2, inearly lat if-v3 does not belong to span {v,, v2) (ete then (v1, ¥2.¥3} is linearly independent vy T8. Let A be an m x n'matrix form. Show that the nonzero rows of Ai in reduced row echelon iewed as MATLAB Exercises IL-1. Determine if $ is linearly independent or linearly dependent, wll TELE a (b) (1, e", sin). © (nel). (A) (cos? 1, sin? f, cos 2r), 15, For what values of ¢ are the vectors (_ 1, ~ 2.1.2) and (1,1, c) in RE 0, linearly depen! deny 16. For what values of 2 are the vectors, a 21 +37 + 2in P; linearly dependents oo vectors in R" form a linearly independen tof vectors. T9. Let $ = (uy, u:, “git Pea set of vectorsing space, and let T = (V1, Vay... va), where ae i=1,2, tliat in S. Show that iad DIM + Dave tebe, is a linear combination of the vectors in § T.10. Suppose that (¥;, v2... ¥,) isa li Set of vectors in R". Show that if A nonsingular matrix, then (Ay,, Ay. linearly independent, TAL. Let 5, and 5; be finite subsets of a Vector space V an Fat Bes mibect of Sy. IFS, in tinearty dente: show by examples that 5; may be either linearly dependent or linearly independent, nearly independen isannxn Ava is TZ Let S; and 5 be finite Subsets of a vector space anil bea subset of 53. 
If 5 is linearly independent, show by examples that S; may be either linearly dependent or linearly independent. TA3. Let w and v be nonzero vectors in a vector space V. Show.that (u, v) is linearly dependent if and only i there is a scalar k such that v = ku. Equivalently, - {u, v) is linearly independent if and only if one of vectors is not a multiple of the other. TA4. (Uses material from Section 5.2) Letw and v® linearly independent vectors in R?. Show that ie and w x v form a basis for R?. (Hint: Form Eau (1) and take the dot product with u x ¥.) / eee 1 2 [-? 2 1 wy ©s={lr 2 0 fl oO} J-1 0 1 1 iy L-1 af. Woe of AX ML.2. Find a Spanning set of the solution sP** a Sec. 6.4 Busin und Dimension 263 2 0 7 1 2 yori 2 se[2 -1 $0 7 0 2-2 -2 ed ig AND DIMENSION in this section we continue our study of the structure of a vector space V by Jetermining a smallest set of vectors in V that completely describes V The vectors Vi. V2,.....¥4 ina vector space V are suid to form.g basts for V if ta) vi v2... Va span V'andH) vj, va, ..., Ve are linearly independent. If v1, Vo... Ve form a.basis for a vector space V, then they must be distinct and nonzero, so we write them as a set (Vi, ¥2. «++. ¥el- The vectors €; = (1,0) and e) = (0, 1) form a basis for R?, the vectors e, 2, and es form a basis for R? and, in general, the vectors €), €2, ....€» form a basis for R". Each of these sets of vectors is called the natural basis or standard basis for R?, R®, and R”, respectively. s SCs Show that the set S = {¥1,V2,V3, V4}, where vy, = (1,0, 1,0), v2 = Sue (0. 1, -1, 2), ¥3 = (0, 2, 2, 1), and vs = (1,0, 0, 1) is a basis for R*. To show that S is linearly independent, we form the equation cry + €2¥2 + .63¥3 + Ca¥ = 0 and solve for c1, C2, ¢3, and cs. Substituting for vj, V2, V3. and vs, we obtain the linear system (verify) ca +c4=0 ate a - e420 2+ ctes = 0 (verify), showing that S which has as its only solution cy = ¢2 = ¢3 = is linearly independent. To show that $ spans now seek constants ky, kz, ks. RS, we let v = (a,b, c,d) be any vector in Rt. We and ky such that Kivi + Rave + kava + kava = Ve i » find a solutioh (verify) for kya, ka, Substituting for v1, Vas V3» ¥4, and v, we 7 and kato Me resulting linear system for any a, b,c, und d. Hence S spans e. and is a basis for R*. eee nner re 264 Chapter Real Vector Spaces EXAMPLE 3 ALOR [ee hte bet Qbis Solution Todo this, we must show that § spans v and is lineatly indepen, shat it spans V, ave take any vector nV. that is, a polyno, Constants ay, 43. and ay such that i ASI8 FOF the very, and must find ae bre = ae ED EAE WF ae 57 a? Vay FMA a dy for all values of Conky Hf the cre we get the linear system Since two polynomials 4 gpective powers of Fare’, a a ay + 2a = ay — ay + ay = Solving. we have ath—« thea a= Hence S spans V. To illustrate this result, suppose that we are given the vector 2r Here a = 2, b = 6, and c = 13. Substituting in the foregoing cxpr- 4), 43, and as, we find that a= Hence +I -$0-1)+ F2r-2 2? +61 +13 = 2 To show that S is linearly independent, we form a\(P +1) +an(t — 1) + a3( Then ay?” + (ay + 2a3)t + (ay — ay + 2a) = 0. Again, this can hold for all values of ¢ only if a : a; + 2ay = 0 a) ~ ay + 2a The only solution to this homogeneous system is a) = 42 = implies that S is linearly independent. Thus $ is a basis for Ps . The set of vectors (1,01, .f, 1) forms a basis for the °° ’ called the natural, or standard basis, for P. It has alread n Example 5 of Section 6.3 that this set of vectors spans Py. 
LN" dence is left us an exercise (Exercise 7.15), EZIEZY Find basis for the subspace V of Py, consisting of all ete" av + bt +c, where © =a ~b. solution Proof : oasis and Dimen Every vector in V is of the form rension 265 a tbr bah which can be written as a? +1) 4b 1), so the vectors P+ Land 11 pan V. Moreove xependent because neither one i er, these vectors 4 independent because neither one is a multiple of the oth jectors are linearly could also be reached (with more work) by writing the ther This conclusion «juation a +) +a) <0 or 2 Pay + tay + (ay =a) =0. Since this equation is to hold for a r x jor all values of ¢, we must have a, = 0 o a A vector space V is called finite-dimensional if there i V that is a basis for V. If there is no such finite subset of v, then Yield infinite-dimensional. : Almost all the vector spaces considered in this book are finite-dimen- sional. However, we point out that there are many infinite-dimensional vector spaces that are extremely important in mathematics and physics; their study lies beyond the scope of this book. The vector space P, consisting of all poly: nomials, and the vector space C(—00, 00) defined in Example 7 of Section 6.2 are not finite-dimensional. We now establish some results about finite-dimensional vector spaces that will tell about the number of vectors in a basis, compare two different bases, and give properties of bases. First, we observe that if (v1, v2 we¥e) is. basis for a vector space V, then {CV}, V2s..-»¥4) is also a basis when c # 0 (Exercise T.9). Thus a basis for a nonzero vector space is never unique «va is a bass for a vector space V, then every vecor in V IS = (M1. Var can be written in one and only one way as a linear combination of the vectors in. First, every vector vin V can be written as a linear ‘combination of the vectors in S because S spans V. Now let vy tcava to Fenn wo and vadyyy tye toe + dave @ Subtracting (2) from (1), we obtain dy)¥ne Oe (a di ter et +len OQlsi sn. so press ¥ as a linear . int, it follows that ¢ s ines ende! Since $ is nearly independent it ay wen co = dy, 1 m. We take the \= + “im Sas m x 1 matrices and form Equation (3) above. This equation i a homogeneous system in the n unknowns 1, c2, ... Cq3 the colunss m xn coefficient matrix A are v1, V2, ..., ¥q. We now transform A102" B in reduced row echelon form, having r nonzero rows, | space V and c #0, then (c¥), V2. basis for V T.l0. Let $= [v). vs. vs} be a basis for vector space V. Then show that T = (Wy, Wa, Ws), where Wt v2 tM Vat Vs. and WEY is also a basis for V Matias Exercises In order 1 use MATLAB in this section, you should have read Section 12.7. In the following exercises we relate the theory developed in the section to computational procedures in MATLAB that aid in analyzing the situation. To determine ifa set § = (¥,.¥2,...,¥4) is basis for a vector space V. the definition requires that we show span S= V and inert independent. However ifwe now tha in V = then Theorem 69 implies tha we need oly so ta cir nS = Vor Ss incur independent lependence, in this special case, is easily anal sing MatLab’ tre command. Const the fomesencn note AX = ssid wih he ner repel terendece questo Then stat rrefiA) = [‘] In Exercises ML.J through ML.6, ‘applied, do so; ctherwise, the conventional manner if this special case can be determine if § is a basis for V in ey be a set of nonzero Vectors in a Vecigg that every vector in V can be written jp? 
‘one way as a Tinear combination of tye the veg Show that S is a basis for V, cin “12, Suppose that WM) is a basis for R". Show that if A isan, nonsingular matrix, then " {AV AV20 0 AY) is also a basis for R". (Hint: See Exercise Section 6.3.) f ‘TAB. Suppose’that {VL Vay Val is a linearly independent set of vectors in F x, be a singular matrix. Prove or disprove ita {AV), AV2,-.., Aa} is linearly independent. T.14. Show that the vector space P of all polynonias finite-dimensional. (Hint: Suppose that (pile), pat). isa finite basis for P. Let d) Establish a contradiction. ] + Pelt} degree (1) ‘TAS. Show that the set of vectors (F171... linearly independent. MLA. S = {(1, 2,1), (2,1, 1), (2.2, Dia Y= ML2. $= (212,12 — 3 + 1,28 - a tdliel® ML3. 5 = ((1, 1,0,0), 2, 1,1, -1. Q04! (12.1, 2)} in V = RA MLA. S {(1, 2, 1,0), (2, 1,3. Ds 2? span S, (01,2,1,0), 2.1.3, D222" span S. ML.6. V = the subspace of R° of all veeto™ (a,b,c), where b = 2a — cand $= (1,-1.1, Db MLS. v 4m Exercises ML7 through MLS, use MATIN command to determine a subset of Stat 0" S. See Example 5. ' a! MLI. 5 = ((1,1,0,0),(-2.-2,0:0 00" 2.1,2.0,0,1, 1, Die ‘ ‘What is dim span 5? Does span. 5 See. Momogencons Systems 275 A [ {0 i}: basis for V. allowing 1 in Exumple 9, uve aft MANE AN’: ref command ia eutend Sta havi for Vin ; f “H Fvem ives ML 10 through ML D2 af asa 5? res span S = Mis? MEI. 8 = (1 1.0,0), 1,0, 1.00), = RE Bay haar art he tae ly MLL S = re veh Sha san 57 Does span ; : MLAR 8 = (0.4.0.2, Dh ag mreorem 6 818 that any Finca V =the subspace of A consisting of all vectors of Crestor space V can be estended 10.0 the form (a, bees ede where e =a, b= 2d be oo OGENEOUS SYSTEMS Homogeneous systems play a central role in linear algebra. This will be seen in Chapter 8, where the foundations of the subject are all integrated to solve one of the major problems occurring in a wide variety of applications. In this section we deal with several problems involving homogeneous systems that will be fundamental in Chapter 8. Here we are able to focus our attention on these problems without being distracted by the additional material in Chap- ter 8. Consider the homogeneous system Ax=0, where A is an m x n matrix. As we have already observed in Example 8 of Section 6.2, the set of all solutions to this homogeneous system is a subspace of R". An extremely important problem, which will occur repeatedly in Chap- ter 8, is that of finding a basis for this solution space. To find such a basis. we use the method of Gauss—Jordan reduction presented in Section 1.5. Thus we transform the augmented matrix [A : 0] of the system to a matrix [B : 0] in reduced row echelon form, where B has r nonzero rows, | 1 6 v0 t | 1 o 2 0 8 Ole 2 0) o 0 0 1 2 00 0 0 o 0 0 0 0. 2. ‘This agrees with the result obtainel in Then rank A = 3 and nullity the solution of Example 1 of Section 6.5, where we found that the dinero of the solution space of Oi 1 ‘The following example will be used 10 illustrate geometrically wet the ideas discussed above. Let 3 2 50 8 7 1.8 Transforming A to reduced row echelon form we obtain (verily) toot o rit o 0 0 so we conclude the following: rank A= 2. «# dimension of row space of A= 2. s0 the now spake of E84 dimensional subspace of that is, a plane passing through nytt From the reduced row echelon form matty that A ty Been tanel solution 10 the homogenevnts system x = 1 EMT xe] -r Sec.6.6° ‘The The Rank of a Matrix and Applications 289 where r is an arbitrary constant (verify), 90 the eee : . 
90 the solution space akan fi 2 the null space of A, is a line passing ieee the eal ; the dimension of the null space of A, or the Mh inl Thus Theorem 6.12 has been verified eee Of course, we alrea mm of pases eal ie that the dimension of the column space of s also obtain this result by findi : two vet forthe col peo AT ‘he column apace of Ass 4 so siona ace i 1 ma ‘ a two-dimen ional subspace of A, that is, a plane passing through the wigin. ‘These results are illustrated in Figure 6.7. ee OE s figure 6.7” Null space of 4 oa and Singularity ‘The rank of a square matrix can be used to determine whether the mstrix is singular or nonsingular, as the following theorem shows. PREM Ann x n matrix A is nonsingular if and only if rank A = Proof Suppose that A is nonsingular. ‘Then A is row equivalent to /, (Theorem 1.12, Section 1.6), and so rank A =n. Conversely, let rank A = n. Suppose that A is row equivalent 10 8 matrix B in reduced row echelon form. Then rank B = n, so the rows of A must be linearly independent. Hence B has no zero rows and since i is in reduced aeoncheion form, it must be J,- Thus A is row equivalent to /n and so A is nonsingular (Theorem 1.12, Section 1.6). ‘An immediate consequence of Theorem 6.13 is the following corollary. which is a eriterion for the rank of ann xn matrix to De "- R STOTT A is ann x n matrix, then rank A =n ifand only ifdet(A) # 0. Proof Exercise T.1. LS! ‘Another result easily obtained from Theorem 6.13 is contained in the fol- lowing corollary. Ty oy PEW Ler Abe ann xn matrix. The linear system Ax = bhas a unigue solution for every n x 1 matrix b if and only if rank A= n- Proof Exercise T.2. . 290 Chapter Real Vector Spaces oN The following corollary ives #! od Of testing Vectors in A are linearly dependent or lineurly independ, Whethey Va) he a ser ofa vectors IR" and le Ee Let 8 = (vi 2s Dy ve nova teohumns) are the vectors nS. Then 8 is tnearty ing Mn, leven, and only if det A) #0. Hor n = 4c the method of Corollary 6.4 for testing linear de hot ac efficient as the direct method of Section 6.3, calling forthe sa a homogencous system. Proof bxereise 1.4. ‘ 0 of n linear equations inn unkniyn The homogencous system Ax eee aes nontrivial solution if and only if rank A 3 a wi § se ML2 for column spaces. To compute te 2 mairax A in MATLAB, use the commend ramk —_—__ 6.7 COORDINATES AND CHANGE OF BASIS -—_ Coordinates If V is an n-dimensional vector space. we know that V bas = ‘vectors in it: so far we have not paid much attention to the oni" in S. However, in the discussion of this section we speak of = ae ® Salvi. ¥q) for V: thus S) = [¥2.¥)...-.¥e) 82 SS basis for V) 7 If S = (vy. v3, .... Va) is an ordered basis for the e~" space V, then every vector v in V can be uniquely expressed =" veays where 4 the coordinate vector of v with respect to the ordered bas ~ of [¥], ate called the coordinates of v with respect to S. .o Observe thatthe coordinate vector [¥], depends on the O55" Vectors in S are listed. a change in the order of this listing” ad Suu Solution a Solution Remark See. 6.7 Coordinates and Change of Basis 295 coordinates of ¥ with respect to S. All buses considered in this section are assumed to be ordered bases. Let S = (v1, ¥2, Vi. va} be a basis for R', where vy = (1,1,0,0), v2 = (2,0,1,0), O12=1), v4 = (0,1, =1,0). ue v= (1,2,-6,2), ee compute [¥],. ye te To find [v],, we need to compute constants ¢}, c2, ¢}, and cy such that CIV Fev + ew¥s + cava =v, whi s just a linear combination problem. 
‘The previous equation leads to the linear system whose augmented matrix is (verify) 1 2 0 0; elses ered a Ww Oo 1 2-1 0 O-1 0: or, equivalently, Mow y wiv). ‘Transforming the matrix in (1) to reduced row echelon form, we obtain the solution (verify) cl so the coordinate vector of v with respect to the basis S is 3 = Ps=| => 1 Let S = (€1, €2, €3} be the natural basis for R® and let v=(2,-1,3), Compute [¥],. Since $ is the natural basis, v= 2e) — ley +3es, so. In Example 2 the coordinate vector [¥],, of v with respect to 5 agrees with v because S is the natural basis for R? 6 296 Chapter Real Vector Spaces EU Solution Lot V be 2), the vector space of all polynomials of degree < | {v), Vo). and To [W), Wa] be bases for Py, where “Ah i yen ome weet wer Letv = pds Sr 2 (a) Compute [¥]y (b) Compute [vy], ' (a) Since § is the standard or natural basis for Py, we have SH 2=SHH(-W Ng VY’ « oe ' Hence (b) To compute [¥]}. we must write Vv as a Tinear combination of Wy andy, Thus, St-2=al+ I) +ercr-), or St 2= (er eat + (C1 ~ 2). Equating coetticients of like powers of r, we obtain the linear system whose solution is (verify) c= F and ec Hence Wr In some important ways the coordinate vectors of elements in a ve space behave algebraically in ways that are similar to the way the vee themselves behave. For example, it is not difficult to show (see Exercise 7" that if 5 is a basis for an n-dimensional vector space V, v and w are vec V, and & is a scalar, then [v+w], =[v], + [v], | and [ev], =4[v]5- ‘That is, the coordinate vector of a sum of two vectors is the sumo! dinate vectors, and the coordinate vector of « scalar multiple of a vec" scalar multiple of the coordinate vector. Moreover, the results in Ea (2) and (3) can be generalized to show that [havi + kava +> katy a [vi] he [vay eet hells is, the coordinate vector of a linear combination of vectors is combination of the individual coordinate vectors. the on is is ‘Tha Jing wid Change ot Hats 2977 Hen we The choice of an ontered basis and the can anment of a coodinete for every vin V enables i the vector space We ilvstrate this notion by using Example 1 Choose a tived paint O in the plane A! and diraw any Hw arrows Wy and Ww) from 0 that depict the haste vectors Fane Lin the ontered basis Se (1 HE for Py (vee Figure 6:8) The direetions of wand we determine Wo lines, which we call the ey and eymnee respectively Phe positive direction on the ty axis is inthe direction of wy the negative direction fon the) axis ts along W) Similarly, the positive direction on thee, wee ve in the direction of W5. the negative direction nthe ween along ow) Phe Jengths of Wy and w) determine the scales on the ey and ©) ates. reaper tirehy Tv isa vector in Py, we ean write ¥ a mark off a sepment of length ey] on th Wely. a8 v= ew hes) We nenw 1) axis Cin the positive direetion of cy ts positive and in the negative direction fe, is negatives and draw a fine Through the endpoint of this segment parallel tow) Similarly, mark off + segment of length [o>] on the c-axis (in the positive direction if ¢) is positive and in the negative direction if ¢; is negative) and draw a line through the endpoint of this segment parallel to w;, We draw a directed line segment from 0 t0 the point of intersection of these two lines represents ¥ This directed line segment Figure 6.8 > Suppose now that $= (vy) Va} and P= [wy Wa. 
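Computing a coordinate vector is itself a linear combination problem, so it can be carried out by row reduction: place the vectors of the ordered basis S in the columns of a matrix, augment with v, and apply rref. A minimal MATLAB sketch (the basis and the vector v are illustrative, not those of Example 1):

    % Coordinate vector [v]_S with respect to an ordered basis S of R^3.
    v1 = [1; 1; 0];  v2 = [0; 1; 1];  v3 = [1; 0; 1];
    v  = [3; 1; 2];

    R = rref([v1 v2 v3 v]);
    coordv = R(:, end)        % the last column holds the coordinates c1, c2, c3

    % Check: rebuilding v from its coordinates reproduces v.
    [v1 v2 v3] * coordv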
0 wy) are Bases for the n-dimensional veetor space V- We shall examine the relanoaship be tween the coordinate vectory [¥], and [v], of the vector ¥ in F with respect Wo the bases § and T, respectively Wy is any veetor in V then Vm bewe bo beam aw 2B Chagtor s_ Real Vestoe Spaces a w& 7 yun) —— [¥], cin ‘Mutipty on left by Paar (ane Figure 6.9 4 Then fermi tegws to Fea s [erm], fom, +-~ + Fowl mein, tealvale te tole. Let the coordinate vector of w, with respect 10 5 be denoted by ayy ayy 7 nj Then ay az ain an mn) Fey a | lan am |_| az a ||, Wea pte} prota ote : mal Uae aan} Lan ane = ano, or = Ps-r [V+ 6 where ay) aig odin m+ a Pore |! [los], (wels (mel is called the transition matrix from the T-basis to the S-basis. Equation () ‘says that the coordinate vector of v with respect to the basis S is the transition matrix Ps. times the coordinate vector of v with respect to the basis 7 Figure 6.9 illustrates Equation (5). ‘We may now summarize the procedure just developed for computing transition matrix from the T-basis to the S-basis. ‘The procedure for computing the transition matrix Ps.r from the basis Va) for Vis T = (mw, + Wn} for V to the basis § = (v1, v2, .- follows. ‘Step 1. Compute the coordinate vector of w : {o the basis 5. This means that we first have to express combination of the vectors in S: | AMF V2 bo Hag Ve = Ww), f= 12, We now solve for a1). a2,,.... dq) by Gauss—Jordan reduction, transfor, ing the augmented matrix of this linear system to reduced row ect form, ‘Step 2. The transition matrix Psy from the T-basis to the 5-bssis ® formed by choosing [w,]., as the jth column of Psat. i See. 6.7 Coordinates and Change of Basis. 299 Let V be R'a =v : a and let 8 = {vy. v9. va} and T= (wy, w>. wy} be bases for R', 1 1 velo weft welt 1 0 I 6 4 5 wad w)=]-1 wa 15 3 1 2 (a) Compute the transition matri€ Psy from the T-basis to the S-basis, “| Haak '-ba (b) Verify Equation (5) for v = | —9 . 5 and (a) To compute Ps. r, we find ay, a3, ay such that ayy + a2v2 + avy = W) In this case we are led to a linear system of three equations in three un- knowns, whose augmented matrix is, [uo we wim). That is, the augmented matrix is aaa so i 10) Similarly, to find by, ba, bs and cy, c2, ¢3 such'that ivy + brv2 + bays = crv +02¥2 + .0a¥3 = Way we are led to two linear systems, each of three equations in three un- knowns, whose augmented matrices are en or, specifically, 2 1 1 2 1 1 s 02 1 and 02S 1 0 1 1 0 132 Since the coefficient matrix of all three linear systems ts (Movoy we can transform the three augmented matrices to reduced row eche form simultaneously by transforming the partitioned matrix [vi vz ovaiwe wi] to reduced row echelon form. Thus we transform rou 4.5 201 -1i5 ool ai2 _— eee ee eee ceeeeteme eee THEOREM to reduced row echelon form, obtaining (verify) 08 OF 2) Ol. OF: OO, 1: which implies that the transition matrix from the T-basis tothe 5.5, 2 2 1 Prorat -t 2{. ret dd i iy () It then to express v in terms of the T-basis, we use Equation (4). From y, associated linear system we find that (verify) 4 6 4 5 v=|-9] =1]3]+2]-1]-2]5] = 1 +2w, -2w, -BeH-8 so : i= [ 3} Then by Equation (5) we find that [v], is 2 2 1771 4 Pser[v], = : = F e = a If we compute [v], directly by setting up and solving the associated lineer system, we find that (verify) Et v=|-9}=4/0]-5 +1 5 1 0 4 m-[-4). 1 Hence (v], = Ps-r [vy], ' We next show that the transition matrix Psy from the T-basis to the S basis is nonsingular and that Ps! 
, is the transition matrix from the S-biss the T'-basis, | Ay, — Sv2 + Is, Let S = (vy,¥9, dimensional vector basis to the S-basis, ‘matrix from the S “Val and T = Wa} be bases for th t | space V. Let Ps.-r be the transition matrix from the T " Then Ps.-r is nonsingular and Ps! is the transi” basis to the T-basis, \ ia Proof Solution See.6.7 Coordinates and Change of Basis 301 We proceed by showing that the null space of Ps. contains only the zero vector. Suppose that Ps. [¥], = Oxe for some v in V. From Equation (5) we have Por (ve = [5 =e If-v = byyy + byvy +--+ + ByVq, then by 0 by =[¥], =" = by 0. so ‘ v= Ov, + 0v2 +--+ 0¥, = Oy. Hence [¥] . Thus the homogeneous system Ps._rx = 0 has only the trivial solution; it then follows from Theorem 1.13 that Ps.-r is nonsingular. Multiplying both sides of Equation (5) on the left by Ps_'7, we have Wh, = Petr lds ‘That is, P51, is then the transition matrix from the S-basis to the T-basis; the Jth column of Ps, is [vj],- . In Exercises T.5 through 1.7 we ask you to show that if S and 7 are bases for the vector space R", then Pser = M5'Mr, where Ms is the n x.n matrix whose jth column is vj and Mr is then xn ma- trix whose jth column is w,. This formula implies that Ps,-r is nonsingular and itis helpful in solving some of the exercises in this section, Let Sand T be the bases for R? defined in Example 4. Compute the transition matrix Qr.-s from the S-basis to the T-basis directly and show that Qr_s = Pedr Qr.-s is the matrix whose columns are the solution vectors to the linear sys- tem oblained from the vector equations a,Wi + a2W2 +435 = V1 byw) + bow + baws = Vo cyWy + C2W2 + C3W3 = V3. As in Example 4, we can solve these linear systems simultaneously by trans- forming the partitioned matrix [wi wo ws ivi vi ys] to reduced row echelon form. That is, we transform Bae SHO iil Ona ‘s_ , 302 Chaper6 Real Veco Spaces to reduced term exbelom fenin, ehaasninn, (100187) 10 Ohi $e F “| ” 0 | yt bind; Hh OO 0205034) V2 Gres bo) | 10 2 Muluplying U7. yo me Sines (veHihy) that Dy. Veep oy conclude that Or.» PRS ater tsi sand tm write tse bw weet l wmtsh (a) Compute the transition matsix. P. Sronn the T basis to the (by Verify Equation (5) for v = 51 + 3. (c) Compute the transition matrix. Q «> Srorn the Stasis to the T “0 ag show that O75 = Pay 4) To compute Ps..7, we need wo wolve the vector expualine . ayy) + avs = yyy + bas = we simultaneously by transforming the resulting partitioned madtis |= fb siti] - to reduced row echelon form. The result is (verify) o (by Iv = 51 + J. then expressing v in terms of the T-basis. © he vaSr+1 =~ 143041) 4, - [5] ‘To verily this, vet up and solve the arsexctated linear system fe!" the vecwor equation veawy paw, ¥ 4g y See. 6.7 Coordinates and Change of Basis 303 ‘Then I= Perl, = ‘ L 1 Computing [v], directly from the asse Med linear system arising from the vector equation v= byw) + baw, we find that (v ity) v Hence which is Equation (5). (©) The transition matrix Qs from the S-basis to the T-basis is obtained (verify) by transforming the partitioned matrix [-t tiois] to reduced row echelon form, obtaining (verify) ‘bases considered in th d to be he 1ese exercises are assumed to be 5 cong bases: In Exercises I through 6, compute the ale vector of v with respect to the given basis S for V cee ttl (3) i cvnms<([b IEE 2 4) +4 Lo vy | re [i e-E J | PSE ret ee lhy 4P 2 +3. 
'° ol ul Ie EE of ob i In Exercises 7 through 12, compute the vecto coordinate vector [¥} is given with respect B Visk'.S 304 Chapter 6 Real Vector Spaces or v ifthe to the basis S for rrr (EEE (0.1, -)..0.0, (eb DI 1,2). 1} and T = (1,1), (2,3)] be bases 1,5) and w = (5,4). (a) Find the coordinate vectors of v and w with respect to the basis T (b) What is the transition matrix Ps. from the T-to the S-basis? (c) Find the coordinate vectors of v and w with respect to S using Psy (G) Find the coordinate vectors of v and w with respect 1 S directly, (€) Find the transition matrix Qs from the S- to the T-basis (8) Find the coordinate vectors of v and w with respect to T using Qr.-s. Compare the answers with those BANE WS and be bases for R?. Let 1 v=|3 Follow the directions for Exercise 13, Is La St 1-2, sett 4 3st) be bases for Ps Let y= fy? *! wT —149. Follow the directions a ae 16 LS =I AEF NE $I 4 {1+ 1.2.02 + 1] be bases for P,, Also ep yo -P 441+ Sand w= 27 —6, Folio directions for Exercise 13 P10 IE 34 (0 18 3 be bases for Mzo. Let of] ely Follow the directions for Exercise 13, -1 -1) fi 0] fo o 1f'fo 1f'fo and Lip fio r={fo oli be bases for Mz». Let 0 0 29 “0 j=} Follow the directions for Exercise 13. Je Lats = (U1. =1.0, D} and T = (3,0). (4 bases for R?. If vis in R? and wh=[2 ine 17. Let and 18, Let determine [¥], 20. Let {t+ ln 2)and T= (0 for P\. If vis in P, and 1 w=[4]: determine [¥], BaeTet § = 1 1,2.1), (0,1, Do (-2.2. Dam T = {(-1,1,0). (0, 1,0). (0, 1, 1D) be bas isin R' and 2 Wl.= (}: determine [v], +E Let LU be tr fl ‘6 ive Nr Sag |e ad T= fs Wis Wa Be hans fo Te | gf | “ Tee ee 7 Lod. =U LO, = 0.0.1), we sition matrix from 7t0 Sis pera of [Ei 9 eerie T. rove) and T= {w 1. Wa) be bases for P; Sec. 6.7 Coordinates and Change of Basis 305 Ihe transition matrix from $540 7s 23 12 dejermine ae vy. ¥o} and 7 o where eos y= 2, wy = O04), © AF the transition matrix from §'t0 7 is fi i [w 1, wo) be bases for R?, determine 26. Let = (vi, v2} and T= (wy: Wa) be bases for P;, where wat marti I the transition matrix from T to S is E 3) determine S. Ve} be a basis for the +) | ndimensional vector space V. and let v and w be two vexors in V. Show that v= w if and only if oP, 1 Seti sa bas fr an nimensiona vector | SRV anwar econ in Veandkvasca [= bem] Ph +b, - o],=*b, 1 Let S be a basis for an n-dimensional vector ‘space V. + ‘Show that if (ww, .W,) is a linearly ‘dependent set of vectors in V, then Ue )ya foe) oss) act npende set of veces u bel M1. ¥as-+.5 Va) be a basis for an mensional vector space V. Show that (DJ feals--- Peds tinal M coordinates ofa vector with respect toa basis is ‘nation problem. Hence, once the Mog ae linear system is constructed, we can use ! ives, Teduce or rref to find its solution. The Reena yi desired coordinates. (The discussion in etay je B helpful as an aid for constructing the Sten) _ ~ ° 22 11 11 Finding the transition matrix Ps.-r from the T-basis to the ‘S-basisis also a linear combination problem. Ps.-r is the ‘matrix whose columns are the coordinates of the vectors in T with respect to the S-basis. Following the ideas developed in Example 4, we can find matrix Ps.-r using routine reduce or rref. The idea is to construct a matrix A whose columns correspond to the vectors in § (see Section 12.7) and a @ wy a 6.8 ORTHON minimum. A subspace W of R" need not contain any of MT ied vectors, but in this section we want to show that it has a bass properties. 
6.8 ORTHONORMAL BASES IN R^n

From our work with the natural bases for R^2, R^3, and, in general, R^n, we know that when these bases are present the computations are kept to a minimum. A subspace W of R^n need not contain any of the natural basis vectors, but in this section we want to show that it has a basis with the same kinds of properties. That is, we want to show that W contains a basis S such that every vector in S is of unit length and every two vectors in S are orthogonal. The method used to obtain such a basis is the Gram-Schmidt process, presented below.

DEFINITION
A set S = {u1, u2, ..., uk} in R^n is called orthogonal if every two distinct vectors in S are orthogonal, that is, if u_i · u_j = 0 for i ≠ j. An orthonormal set of vectors is an orthogonal set of unit vectors. That is, S = {u1, u2, ..., uk} is orthonormal if u_i · u_j = 0 for i ≠ j, and u_i · u_i = 1 for i = 1, 2, ..., k.

EXAMPLE 1
If x1 = (1, 0, 2), x2 = (-2, 0, 1), and x3 = (0, 1, 0), then {x1, x2, x3} is an orthogonal set in R^3 (verify). The vectors

    u1 = (1/√5, 0, 2/√5) and u2 = (-2/√5, 0, 1/√5)

are unit vectors in the directions of x1 and x2, respectively. Since x3 is also of unit length, it follows that {u1, u2, x3} is an orthonormal set. Also, span {x1, x2, x3} is the same as span {u1, u2, x3}.

EXAMPLE 2
The natural basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is an orthonormal set in R^3. More generally, the natural basis in R^n is an orthonormal set.

An important result about orthogonal sets is the following theorem.

THEOREM
Let S = {u1, u2, ..., uk} be an orthogonal set of nonzero vectors in R^n. Then S is linearly independent.

In the construction that follows, v1 = u1 and we look for a vector v2 = c1 v1 + c2 u2 in the subspace spanned by {u1, u2} that is orthogonal to v1; setting v2 · v1 = 0 gives Equation (4),

    c1 (v1 · v1) + c2 (u2 · v1) = 0.

Since v1 ≠ 0 (why?), v1 · v1 ≠ 0, and solving for c1 and c2 in (4), we have

    c1 = -c2 (u2 · v1)/(v1 · v1).

We may assign an arbitrary nonzero value to c2. Thus, letting c2 = 1, we obtain

    c1 = -(u2 · v1)/(v1 · v1).

Hence

    v2 = c1 v1 + c2 u2 = u2 - ((u2 · v1)/(v1 · v1)) v1.

Notice that at this point we have an orthogonal subset {v1, v2} of W (see Figure 6.10).

Next, we look for a vector v3 in the subspace W1 of W spanned by {u1, u2, u3} that is orthogonal to both v1 and v2. Of course, W1 is also the subspace spanned by {v1, v2, u3} (why?). Thus

    v3 = d1 v1 + d2 v2 + d3 u3.

We now try to find d1 and d2 so that

    v3 · v1 = 0 and v3 · v2 = 0.

Now

    0 = v3 · v1 = (d1 v1 + d2 v2 + d3 u3) · v1 = d1 (v1 · v1) + d3 (u3 · v1),    (5)
    0 = v3 · v2 = (d1 v1 + d2 v2 + d3 u3) · v2 = d2 (v2 · v2) + d3 (u3 · v2).    (6)

In obtaining the right sides of (5) and (6), we have used the fact that v1 · v2 = 0. Observe that v2 ≠ 0 (why?). Solving (5) and (6) for d1 and d2, respectively, we obtain

    d1 = -d3 (u3 · v1)/(v1 · v1) and d2 = -d3 (u3 · v2)/(v2 · v2).

We may assign an arbitrary nonzero value to d3. Thus, letting d3 = 1, we have

    d1 = -(u3 · v1)/(v1 · v1) and d2 = -(u3 · v2)/(v2 · v2).

Hence

    v3 = u3 - ((u3 · v1)/(v1 · v1)) v1 - ((u3 · v2)/(v2 · v2)) v2.

Notice that at this point we have an orthogonal subset {v1, v2, v3} of W (see Figure 6.11).

Next, we look for a vector v4 in the subspace W2 of W spanned by {u1, u2, u3, u4}, and thus by {v1, v2, v3, u4}, that is orthogonal to v1, v2, and v3. We obtain

    v4 = u4 - ((u4 · v1)/(v1 · v1)) v1 - ((u4 · v2)/(v2 · v2)) v2 - ((u4 · v3)/(v3 · v3)) v3.

We continue in this manner until we have an orthogonal set T* = {v1, v2, ..., vm} of m vectors. It then follows that T* is a basis for W. If we now normalize the v_i, that is, let

    w_i = v_i / ||v_i||    (1 ≤ i ≤ m),

then T = {w1, w2, ..., wm} is an orthonormal basis for W.
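Before stating the process formally, it may help to see the definition of orthonormality checked numerically. The sketch below is not from the text; it collects the orthonormal set of Example 1 as the columns of a matrix Q, in which case orthonormality of the set is equivalent to Q'*Q being the identity matrix.

    % Sketch: checking the orthonormal set {u1, u2, x3} of Example 1.
    Q = [ 1/sqrt(5)  -2/sqrt(5)  0;
          0           0          1;
          2/sqrt(5)   1/sqrt(5)  0 ];    % columns are u1, u2, x3

    disp(Q' * Q)                  % should be the 3 x 3 identity matrix
    disp(norm(Q' * Q - eye(3)))   % approximately 0

The (i, j) entry of Q'*Q is exactly the dot product of the ith and jth columns, so this one product encodes all of the conditions in the definition.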
We now summarize the Gram-Schmidt process.

The Gram-Schmidt process for computing an orthonormal basis T = {w1, w2, ..., wm} for a nonzero subspace W of R^n with basis S = {u1, u2, ..., um} is as follows.

Step 1. Let v1 = u1.

Step 2. Compute the vectors v2, v3, ..., vm successively, one at a time, by the formula

    v_i = u_i - ((u_i · v1)/(v1 · v1)) v1 - ((u_i · v2)/(v2 · v2)) v2 - ... - ((u_i · v_{i-1})/(v_{i-1} · v_{i-1})) v_{i-1}.

The set of vectors T* = {v1, v2, ..., vm} is an orthogonal set of nonzero vectors.

Step 3. Let

    w_i = (1/||v_i||) v_i    (1 ≤ i ≤ m).

Then T = {w1, w2, ..., wm} is an orthonormal basis for W.

Remark
It is not difficult to show that if u and v are vectors in R^n such that u · v = 0, then u · (cv) = 0 for any scalar c (Exercise T.7). This result can often be used to simplify hand computations in the Gram-Schmidt process: as soon as a vector v_i is computed in Step 2, multiply it by a proper scalar to clear any fractions that may be present. We shall use this approach in our computational work with the Gram-Schmidt process.

EXAMPLE 4
Consider the basis S = {u1, u2, u3} for R^3, where

    u1 = (1, 1, 1), u2 = (-1, 0, -1), and u3 = (-1, 2, 3).

Use the Gram-Schmidt process to transform S to an orthonormal basis for R^3.

Solution
Step 1. Let v1 = u1 = (1, 1, 1).
Step 2. We now compute v2 and v3:

    v2 = u2 - ((u2 · v1)/(v1 · v1)) v1 = (-1, 0, -1) - (-2/3)(1, 1, 1) = (-1/3, 2/3, -1/3).

Multiplying v2 by 3 to clear fractions, we obtain (-1, 2, -1), which we now use as v2. Then

    v3 = u3 - ((u3 · v1)/(v1 · v1)) v1 - ((u3 · v2)/(v2 · v2)) v2
       = (-1, 2, 3) - (4/3)(1, 1, 1) - (2/6)(-1, 2, -1) = (-2, 0, 2).

Thus

    T* = {(1, 1, 1), (-1, 2, -1), (-2, 0, 2)}

is an orthogonal basis for R^3.
Step 3. Let

    w1 = (1/||v1||) v1 = (1/√3)(1, 1, 1),
    w2 = (1/||v2||) v2 = (1/√6)(-1, 2, -1),
    w3 = (1/||v3||) v3 = (1/√8)(-2, 0, 2) = (-1/√2, 0, 1/√2).

Then

    T = {w1, w2, w3} = {(1/√3, 1/√3, 1/√3), (-1/√6, 2/√6, -1/√6), (-1/√2, 0, 1/√2)}

is an orthonormal basis for R^3.

EXAMPLE 5
Let W be the subspace of R^4 with basis S = {u1, u2, u3}, where

    u1 = (1, -2, 0, 1), u2 = (-1, 0, 0, -1), and u3 = (1, 1, 0, 0).

Use the Gram-Schmidt process to transform S to an orthonormal basis for W.

Solution
Step 1. Let v1 = u1 = (1, -2, 0, 1).
Step 2. We now compute v2 and v3:

    v2 = u2 - ((u2 · v1)/(v1 · v1)) v1 = (-1, 0, 0, -1) - (-2/6)(1, -2, 0, 1) = (-2/3, -2/3, 0, -2/3).

Multiplying v2 by 3 to clear fractions, we obtain (-2, -2, 0, -2), which we now use as v2. Then

    v3 = u3 - ((u3 · v1)/(v1 · v1)) v1 - ((u3 · v2)/(v2 · v2)) v2
       = (1, 1, 0, 0) - (-1/6)(1, -2, 0, 1) - (-4/12)(-2, -2, 0, -2) = (1/2, 0, 0, -1/2).

Multiplying v3 by 2 to clear fractions, we obtain (1, 0, 0, -1), which we now use as v3. Thus

    T* = {(1, -2, 0, 1), (-2, -2, 0, -2), (1, 0, 0, -1)}

is an orthogonal basis for W.
Step 3. Let

    w1 = (1/||v1||) v1 = (1/√6)(1, -2, 0, 1),
    w2 = (1/||v2||) v2 = (1/√12)(-2, -2, 0, -2) = (-1/√3, -1/√3, 0, -1/√3),
    w3 = (1/||v3||) v3 = (1/√2)(1, 0, 0, -1).

Then T = {w1, w2, w3} is an orthonormal basis for W.

Remarks
1. In solving Example 5, as soon as a vector was computed we multiplied it by an appropriate scalar to eliminate any fractions that may be present. This optional step results in simpler computations when working by hand. If this approach is taken, the resulting basis, while orthonormal, may differ from the orthonormal basis obtained by not clearing fractions. Most computer implementations of the Gram-Schmidt process, including those developed with MATLAB, do not clear fractions (a short sketch after these remarks illustrates such an implementation).
2. We make one final observation with regard to the Gram-Schmidt process. In our proof of Theorem 6.18 we first obtained an orthogonal basis T* and then normalized all the vectors in T* to obtain the orthonormal basis T. Of course, an alternative course of action is to normalize each vector as soon as we produce it.
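The following sketch (not the text's gschmidt routine) implements the process of Steps 1 through 3 in MATLAB, normalizing each vector as soon as it is produced (the alternative mentioned in Remark 2) and, as Remark 1 notes for machine computation, without clearing fractions. The columns of U are u1, u2, u3 of Example 4.

    % Sketch: Gram-Schmidt on the columns of U, normalizing as we go.
    U = [ 1  -1  -1;
          1   0   2;
          1  -1   3 ];               % columns are u1, u2, u3 of Example 4

    [n, m] = size(U);
    W = zeros(n, m);                 % columns will hold the orthonormal basis
    for i = 1:m
        v = U(:, i);
        for j = 1:i-1
            v = v - (U(:, i)' * W(:, j)) * W(:, j);   % remove the component along w_j
        end
        W(:, i) = v / norm(v);       % normalize v_i
    end

    disp(W' * W)                     % approximately the 3 x 3 identity matrix
    disp(W)                          % compare with the basis T found in Example 4

Because u1, u2, u3 here happen to produce the same directions with or without clearing fractions, the columns of W should agree with the hand computation of Example 4 up to roundoff.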
1. Which of the following are orthogonal sets of vectors?
(a) {(1, -1, 2), (0, 2, -1), (-1, 1, 1)}
(b) {(1, 2, -1, 1), (0, -1, -2, 0), (1, 0, 0, -1)}
(c) {(0, 1, 0, -1), (1, 0, 1, 1), (-1, 1, -1, 2)}
2. Which of the following are orthonormal sets of vectors?
(a) …
(b) …
(c) {(0, 2, 2, 1), (1, 1, -2, 2), (0, -2, 1, 2)}

In Exercises 3 and 4, let V = R^3.
3. Let u = (1, 1, -2) and v = (a, -1, 2). For what values of a are u and v orthogonal?
4. Let u = (1/√2, 0, 1/√2) and v = (a, 1/√2, -b). For what values of a and b is {u, v} an orthonormal set?
5. Use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^3 with basis {(1, -1, 0), (2, 0, 1)}.
6. Use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^3 with basis {(1, 0, 2), (-1, 1, 0)}.
7. Use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^4 with basis {(1, 1, 0, 1), (2, 0, 0, -1), (0, 0, 1, 0)}.
8. Use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^4 with basis {(1, 1, -1, 0), (0, 2, 0, 1), (-1, 0, 0, 1)}.
9. Use the Gram-Schmidt process to transform the basis {(1, 2), (3, 4)} for R^2 into (a) an orthogonal basis; (b) an orthonormal basis.
10. Use the Gram-Schmidt process to transform the basis {(1, 1, 1), (0, 1, 1), (1, 2, 3)} for R^3 into an orthonormal basis for R^3.
11. Use Theorem 6.17 to write the vector (2, 3, 1) as a linear combination of the vectors in the orthonormal basis obtained in Exercise 10.
12. Use the Gram-Schmidt process to construct an orthonormal basis for the subspace W of R^3 spanned by the given vectors.
13. Use the Gram-Schmidt process to construct an orthonormal basis for the subspace W of R^4 spanned by the given vectors.
14. Find an orthonormal basis for the subspace of R^3 consisting of all vectors of the form (a, a + b, b).
15. Find an orthonormal basis for the subspace of R^4 consisting of all vectors of the form (a, a + b, c, b + c).
16. Find an orthonormal basis for the subspace of R^3 consisting of all vectors (a, b, c) such that a + b + c = 0.
17. Find an orthonormal basis for the subspace of R^4 consisting of all vectors (a, b, c, d) such that a - b - 2c + d = 0.
18. Find an orthonormal basis for the solution space of the given homogeneous system.
19. Find an orthonormal basis for the solution space of the given homogeneous system.
20. Consider the given orthonormal basis S for R^2. Using Theorem 6.17, write the vector (2, 3) as a linear combination of the vectors in S.
21. Consider the orthonormal basis S = {(1/√5, 0, 2/√5), (-2/√5, 0, 1/√5), (0, 1, 0)} for R^3. Using Theorem 6.17, write the vector (2, -3, 1) as a linear combination of the vectors in S.

Theoretical Exercises
T.1. Verify that the natural basis for R^n is an orthonormal set in R^n.
T.3. Show that an orthonormal set of n vectors in R^n is a basis for R^n.
T.4. (a) Prove Theorem 6.17. (b) Prove Corollary 6.7.
T.5. Let u, v1, v2, ..., vk be vectors in R^n. Show that if u is orthogonal to v1, v2, ..., vk, then u is orthogonal to every vector in span {v1, v2, ..., vk}.
T.6. Let u be a fixed vector in R^n. Show that the set of all vectors in R^n that are orthogonal to u is a subspace of R^n.
T.7. Let u and v be vectors in R^n. Show that if u · v = 0, then u · (cv) = 0 for any scalar c.
T.8. Suppose that {v1, v2, ..., vn} is an orthonormal set in R^n. Let the matrix A be given by A = [v1 v2 ··· vn]. Show that A is nonsingular and compute its inverse. Give three different examples of such a matrix in R^2 or R^3.
T.9. Suppose that {v1, v2, ..., vn} is an orthogonal set in R^n. Let A be the matrix whose jth column is vj, j = 1, 2, ..., n. Prove or disprove: A is nonsingular.
T.10. Let S = {u1, u2, ..., uk} be an orthonormal basis for a subspace W of R^n, where n > k. Discuss how to construct an orthonormal basis for R^n that includes S.
T.11. Let {u1, ..., uk, uk+1, ..., un} be an orthonormal basis for R^n, S = span {u1, ..., uk}, and T = span {uk+1, ..., un}. For any x in S and any y in T, show that x · y = 0.

MATLAB Exercises
The Gram-Schmidt process takes a basis S for a subspace W of R^n and produces an orthonormal basis T for W. The algorithm used to produce the orthonormal basis T that is given in this section is implemented in MATLAB in routine gschmidt. Type help gschmidt for directions.
ML.1. Use gschmidt to produce an orthonormal basis for R^3 from the given basis S.
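Theorem 6.17 is not restated in this excerpt, but the way it is used in Exercises 11, 20, and 21 indicates the standard fact that the coefficients of a vector with respect to an orthonormal basis are dot products: v = (v · u1)u1 + ... + (v · un)un. Assuming that reading, here is a quick MATLAB check for an Exercise 21-style computation, using the orthonormal set of Example 1; the numbers are for illustration only.

    % Sketch: expanding v in an orthonormal basis via dot products.
    u1 = [ 1/sqrt(5);  0;  2/sqrt(5)];
    u2 = [-2/sqrt(5);  0;  1/sqrt(5)];
    u3 = [ 0;          1;  0        ];
    v  = [ 2;         -3;  1        ];

    c = [v'*u1; v'*u2; v'*u3];                    % candidate coefficients (dot products)
    disp(c)
    disp(norm(c(1)*u1 + c(2)*u2 + c(3)*u3 - v))   % approximately 0

The same coefficients can be obtained from rref([u1 u2 u3 v]) or from [u1 u2 u3] \ v; the dot-product form is what makes orthonormal bases so convenient.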
