
132 CHAPTER 4. VECTOR SPACES

Exercise 4.4.6 (Ex. 28, P. 219) Let S = {(6, 2, 1), (−1, 3, 2)}. Determine whether S is linearly independent or dependent.
Solution: Let

c(6, 2, 1) + d(−1, 3, 2) = (0, 0, 0).

If this equation has only the trivial solution, then S is linearly independent.


This equation gives the following system of linear equations:

6c − d = 0
2c + 3d = 0
c + 2d = 0

The augmented matrix for this system is

$$\begin{bmatrix} 6 & -1 & 0 \\ 2 & 3 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad \text{its Gauss-Jordan form:} \quad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
So, c = 0, d = 0. The system has only trivial (i.e. zero) solution. We
conclude that S is linearly independent.
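The row reduction above is easy to double-check by machine. The sketch below (not part of the textbook) uses SymPy to row-reduce the coefficient matrix of Exercise 4.4.6:

```python
from sympy import Matrix

# Coefficient matrix of the system 6c - d = 0, 2c + 3d = 0, c + 2d = 0
A = Matrix([[6, -1],
            [2, 3],
            [1, 2]])

rref, pivots = A.rref()   # reduced row echelon form and pivot columns
print(rref)               # first two rows form the 2x2 identity
print(A.nullspace())      # []: only the trivial solution, so S is independent
```

An empty nullspace means c = d = 0 is the only solution, matching the conclusion above.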

Exercise 4.4.7 (Ex. 30, P. 219) Let

$$S = \left\{ \left( \tfrac{3}{4}, \tfrac{5}{2}, \tfrac{3}{2} \right),\ \left( 3, 4, \tfrac{7}{2} \right),\ \left( -\tfrac{3}{2}, 6, 2 \right) \right\}.$$

Determine whether S is linearly independent or dependent.
Solution: Let

$$a\left( \tfrac{3}{4}, \tfrac{5}{2}, \tfrac{3}{2} \right) + b\left( 3, 4, \tfrac{7}{2} \right) + c\left( -\tfrac{3}{2}, 6, 2 \right) = (0, 0, 0).$$
If this equation has only the trivial solution, then S is linearly independent. This equation gives the following system of linear equations:
$$\tfrac{3}{4}a + 3b - \tfrac{3}{2}c = 0$$
$$\tfrac{5}{2}a + 4b + 6c = 0$$
$$\tfrac{3}{2}a + \tfrac{7}{2}b + 2c = 0$$

The augmented matrix for this system is

$$\begin{bmatrix} \tfrac{3}{4} & 3 & -\tfrac{3}{2} & 0 \\ \tfrac{5}{2} & 4 & 6 & 0 \\ \tfrac{3}{2} & \tfrac{7}{2} & 2 & 0 \end{bmatrix}, \quad \text{its Gauss-Jordan form:} \quad \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$

So, a = 0, b = 0, c = 0. The system has only trivial (i.e. zero) solution.


We conclude that S is linearly independent.
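The fractions in Exercise 4.4.7 are handled exactly with SymPy's rationals; a quick check (our own addition, not from the textbook):

```python
from sympy import Matrix, Rational as R

# Columns are the three vectors of Exercise 4.4.7, with exact fractions
A = Matrix([[R(3, 4), 3, R(-3, 2)],
            [R(5, 2), 4, 6],
            [R(3, 2), R(7, 2), 2]])

# A nonzero determinant means full rank, so A x = 0 has only x = 0.
print(A.det())    # -15/8, nonzero
print(A.rank())   # 3: the three vectors are linearly independent
```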

Exercise 4.4.8 (Ex. 32. P. 219) Let

S = {(1, 0, 0), (0, 4, 0), (0, 0, −6), (1, 5, −3)}.

Determine whether S is linearly independent or dependent.


Solution: Let

c1 (1, 0, 0) + c2 (0, 4, 0) + c3 (0, 0, −6) + c4 (1, 5, −3) = (0, 0, 0).

If this equation has only the trivial solution, then S is linearly independent. This equation gives the following system of linear equations:

c1 + c4 = 0
4c2 + 5c4 = 0
−6c3 − 3c4 = 0

The augmented matrix for this system is


   
$$\begin{bmatrix} 1 & 0 & 0 & 1 & 0 \\ 0 & 4 & 0 & 5 & 0 \\ 0 & 0 & -6 & -3 & 0 \end{bmatrix}, \quad \text{its Gauss-Jordan form:} \quad \begin{bmatrix} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1.25 & 0 \\ 0 & 0 & 1 & 0.5 & 0 \end{bmatrix}.$$

Correspondingly:

c1 + c4 = 0, c2 + 1.25c4 = 0, c3 + .5c4 = 0.

With c4 = t as parameter, we have

c1 = −t, c2 = −1.25t, c3 = −0.5t, c4 = t.

The equation above has nontrivial (i.e. nonzero) solutions. So, S is


linearly dependent.
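Dependence can also be read off from the nullspace of the coefficient matrix. A sketch of this check (ours, not the textbook's):

```python
from sympy import Matrix, Rational as R

# Columns are the four vectors of Exercise 4.4.8
A = Matrix([[1, 0, 0, 1],
            [0, 4, 0, 5],
            [0, 0, -6, -3]])

ns = A.nullspace()        # one basis vector for the solution space
print(ns[0].T)            # proportional to (-1, -5/4, -1/2, 1), i.e. t = 1
print(A * ns[0])          # zero vector: a nontrivial dependence relation
```

The nullspace vector reproduces the parametric solution c1 = −t, c2 = −1.25t, c3 = −0.5t, c4 = t.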

Theorem 4.4.9 Let V be a vector space and S = {v1 , v2 , . . . vk }, k ≥


2 a set of elements (vectors) in V. Then S is linearly dependent if and
only if one of the vectors vj can be written as a linear combination of
the other vectors in S.

Proof. (⇒) : Assume S is linearly dependent. So, the equation

c1 v1 + c2 v2 + · · · + ck vk = 0

has a nonzero solution. This means that at least one of the ci is nonzero. Let cr be the last nonzero coefficient, so cr ≠ 0. Then

c1 v1 + c2 v2 + · · · + cr vr = 0

and

$$v_r = -\frac{c_1}{c_r}v_1 - \frac{c_2}{c_r}v_2 - \cdots - \frac{c_{r-1}}{c_r}v_{r-1}.$$

So, vr is a linear combination of the other vectors and this implication is proved.
(⇐) : To prove the other implication, we assume that vr is a linear combination of the other vectors. So

vr = (c1 v1 + c2 v2 + · · · + cr−1 vr−1 ) + (cr+1 vr+1 + · · · + ck vk ) .

So,

(c1 v1 + c2 v2 + · · · + cr−1 vr−1 ) − vr + (cr+1 vr+1 + · · · + ck vk ) = 0.

The left hand side is a nontrivial (i.e. nonzero) linear combination,


because vr has coefficient −1. Therefore, S is linearly dependent. This
completes the proof.

4.5 Basis and Dimension


Homework: [Textbook, §4.5 Ex. 1, 3, 7, 11, 15, 19, 21, 23, 25, 28, 35,
37, 39, 41,45, 47, 49, 53, 59, 63, 65, 71, 73, 75, 77, page 231].

The main points of the section are

1. To define basis of a vector space.

2. To define dimension of a vector space.

These are, probably, the two most fundamental concepts regarding vector
spaces.

Definition 4.5.1 Let V be a vector space and S = {v1 , v2 , . . . vk } be


a set of elements (vectors)in V. We say that S is a basis of V if

1. S spans V and

2. S is linearly independent.

Remark. Here are some comments about finite and infinite bases of a vector space V :

1. We avoided discussing infinite spanning sets S and when an infinite S is linearly independent, and we will continue to avoid doing so. ((1) An infinite set S is said to span V if each element v ∈ V is a linear combination of finitely many elements in S. (2) An infinite set S is said to be linearly independent if every finite subset of S is linearly independent.)

2. We say that a vector space V is finite dimensional, if V has


a basis consisting of finitely many elements. Otherwise, we say
that V is infinite dimensional.

3. The vector space P of all polynomials (with real coefficients) has


infinite dimension.

Example 4.5.2 (example 1, p 221) The most standard example of a basis is the standard basis of Rn .

1. Consider the vector space R2 . Write

e1 = (1, 0), e2 = (0, 1).

Then, e1 , e2 form a basis of R2 .



2. Consider the vector space R3 . Write

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).

Then, e1 , e2 , e3 form a basis of R3 .

Proof. First, for any vector v = (x1 , x2 , x3 ) ∈ R3 , we have

v = x1 e1 + x2 e2 + x3 e3 .

So, R3 is spanned by e1 , e2 , e3 .

Now, we prove that e1 , e2 , e3 are linearly independent. So, sup-


pose

c1 e1 + c2 e2 + c3 e3 = 0 OR (c1 , c2 , c3 ) = (0, 0, 0).

So, c1 = c2 = c3 = 0. Therefore, e1 , e2 , e3 are linearly indepen-


dent. Hence e1 , e2 , e3 forms a basis of R3 . The proof is complete.

3. More generally, consider vector space Rn . Write

e1 = (1, 0, . . . , 0), e2 = (0, 1, . . . , 0), . . . , en = (0, 0, . . . , 1).

Then, e1 , e2 , . . . , en form a basis of Rn . The proof will be


similar to the above proof. This basis is called the standard
basis of Rn .

Example 4.5.3 Consider

v1 = (1, 1, 1), v2 = (1, −1, 1), v3 = (1, 1, −1) in R3 .

Then v1 , v2 , v3 form a basis for R3 .



Proof. First, we prove that v1 , v2 , v3 are linearly independent. Let

c1 v1 +c2 v2 +c3 v3 = 0. OR c1 (1, 1, 1)+c2 (1, −1, 1)+c3 (1, 1, −1) = (0, 0, 0).

We have to prove c1 = c2 = c3 = 0. The equations give the following


system of linear equations:

c1 +c2 +c3 = 0
c1 −c2 +c3 = 0
c1 +c2 −c3 = 0

The augmented matrix is

$$\begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & -1 & 1 & 0 \\ 1 & 1 & -1 & 0 \end{bmatrix}, \quad \text{its Gauss-Jordan form:} \quad \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

So, c1 = c2 = c3 = 0 and this establishes that v1 , v2 , v3 are linearly independent.
Now to show that v1 , v2 , v3 span R3 , let v = (x1 , x2 , x3 ) be a vector in R3 . We have to show that we can find c1 , c2 , c3 such that

(x1 , x2 , x3 ) = c1 v1 + c2 v2 + c3 v3

OR

(x1 , x2 , x3 ) = c1 (1, 1, 1) + c2 (1, −1, 1) + c3 (1, 1, −1).

This gives the system of linear equations:

$$\begin{bmatrix} c_1 + c_2 + c_3 \\ c_1 - c_2 + c_3 \\ c_1 + c_2 - c_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \quad \text{OR} \quad \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

The coefficient matrix

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix} \quad \text{has inverse} \quad A^{-1} = \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0 \\ 0.5 & 0 & -0.5 \end{bmatrix}.$$

So, the above system has the solution:

$$\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = A^{-1}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0 \\ 0.5 & 0 & -0.5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$
So, each vector (x1 , x2 , x3 ) is in the span of v1 , v2 , v3 . So, they form a
basis of R3 . The proof is complete.
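A quick check of the inverse, plus the coordinates of a sample vector (v = (2, 4, 6) is our own choice, not from the text):

```python
from sympy import Matrix, eye

# Columns of A are v1 = (1,1,1), v2 = (1,-1,1), v3 = (1,1,-1)
A = Matrix([[1, 1, 1],
            [1, -1, 1],
            [1, 1, -1]])

Ainv = A.inv()
print(Ainv)               # matches the inverse displayed in the text

# Coordinates of the sample vector v = (2, 4, 6) in this basis
v = Matrix([2, 4, 6])
c = Ainv * v
print(c.T)                # (5, -1, -2): so v = 5*v1 - v2 - 2*v3
```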

Reading assignment: Read [Textbook, Examples 1-5, p. 221-224].

Theorem 4.5.4 Let V be a vector space and S = {v1 , v2 , . . . , vn } be


a basis of V. Then every vector v in V can be written in one and only
one way as a linear combination of vectors in S. (In other words, v can
be written as a unique linear combination of vectors in S.)

Proof. Since S spans V, we can write v as a linear combination

v = c1 v1 + c2 v2 + · · · + cn vn

for scalars c1 , c2 , . . . , cn . To prove uniqueness, also let

v = d1 v1 + d2 v2 + · · · + dn vn

for some other scalars d1 , d2 , . . . , dn . Subtracting, we have

(c1 − d1 )v1 + (c2 − d2 )v2 + · · · + (cn − dn )vn = 0.

Since v1 , v2 , . . . , vn are linearly independent, we have

c1 − d1 = 0, c2 − d2 = 0, . . . , cn − dn = 0

OR
c1 = d1 , c2 = d2 , . . . , cn = dn .
This completes the proof.

Theorem 4.5.5 Let V be a vector space and S = {v1 , v2 , . . . , vn } be


a basis of V. Then every set of vectors in V containing more than n
vectors in V is linearly dependent.

Proof. Suppose S1 = {u1 , u2 , . . . , um } is a set of m vectors in V, with m > n. We are required to prove that the zero vector 0 is a nontrivial (i.e. nonzero) linear combination of elements in S1 . Since S is a basis,
we have
u1 = c11 v1 +c12 v2 + · · · +c1n vn
u2 = c21 v1 +c22 v2 + · · · +c2n vn
··· ··· ··· ··· ···
um = cm1 v1 +cm2 v2 + · · · +cmn vn
Consider the system of linear equations

$$c_{11}x_1 + c_{21}x_2 + \cdots + c_{m1}x_m = 0$$
$$c_{12}x_1 + c_{22}x_2 + \cdots + c_{m2}x_m = 0$$
$$\cdots$$
$$c_{1n}x_1 + c_{2n}x_2 + \cdots + c_{mn}x_m = 0$$

which is

$$\begin{bmatrix} c_{11} & c_{21} & \cdots & c_{m1} \\ c_{12} & c_{22} & \cdots & c_{m2} \\ \cdots & \cdots & \cdots & \cdots \\ c_{1n} & c_{2n} & \cdots & c_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \cdots \\ x_m \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \cdots \\ 0 \end{bmatrix}$$

Since m > n, this homogeneous system of linear equations has fewer equations than variables. So, the system has a nonzero
solution (see [Textbook, theorem 1.1, p 25]). It follows that

x1 u1 + x2 u2 + · · · + xm um = 0.

We justify it as follows: First,

$$\begin{bmatrix} u_1 & u_2 & \ldots & u_m \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix} \begin{bmatrix} c_{11} & c_{21} & \cdots & c_{m1} \\ c_{12} & c_{22} & \cdots & c_{m2} \\ \cdots & \cdots & \cdots & \cdots \\ c_{1n} & c_{2n} & \cdots & c_{mn} \end{bmatrix}$$

and then

$$x_1u_1 + x_2u_2 + \cdots + x_mu_m = \begin{bmatrix} u_1 & u_2 & \ldots & u_m \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \cdots \\ x_m \end{bmatrix}$$

which is

$$= \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix} \begin{bmatrix} c_{11} & c_{21} & \cdots & c_{m1} \\ c_{12} & c_{22} & \cdots & c_{m2} \\ \cdots & \cdots & \cdots & \cdots \\ c_{1n} & c_{2n} & \cdots & c_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \cdots \\ x_m \end{bmatrix}$$

which is

$$= \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ \cdots \\ 0 \end{bmatrix} = 0.$$

Alternately, at your level the proof will be written more explicitly as follows:

$$x_1u_1 + x_2u_2 + \cdots + x_mu_m = \sum_{j=1}^{m} x_j u_j = \sum_{j=1}^{m} x_j \left( \sum_{i=1}^{n} c_{ji} v_i \right) = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} c_{ji} x_j \right) v_i = \sum_{i=1}^{n} 0 \cdot v_i = 0.$$

The proof is complete.
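Theorem 4.5.5 in action: any four vectors in R3 must be dependent. The four example vectors below are our own choice for illustration:

```python
from sympy import Matrix

# Four vectors in R^3 (chosen arbitrarily), placed as columns
A = Matrix([[1, 0, 2, 1],
            [0, 1, 1, 3],
            [0, 0, 1, -1]])

# More columns than rows forces a nonzero solution of A x = 0,
# i.e. a nontrivial dependence relation among the four vectors.
ns = A.nullspace()
print(len(ns))            # 1: one free variable
print(A * ns[0])          # the zero vector
```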

Theorem 4.5.6 Suppose V is a vector space and V has a basis with


n vectors. Then, every basis has n vectors.

Proof. Let

S = {v1 , v2 , . . . , vn } and S1 = {u1 , u2 , . . . , um }

be two bases of V. Since S is a basis and S1 is linearly independent, by


theorem 4.5.5, we have m ≤ n. Similarly, n ≤ m. So, m = n. The proof
is complete.

Definition 4.5.7 If a vector space V has a basis consisting of n vectors,


then we say that dimension of V is n. We also write dim(V ) = n. If
V = {0} is the zero vector space, then the dimension of V is defined
as zero.
(We say that the dimension of V is equal to the ‘cardinality’ of
any basis of V. The word ‘cardinality’ is used to mean ‘the number of
elements’ in a set.)

Theorem 4.5.8 Suppose V is a vector space of dimension n.

1. Suppose S = {v1 , v2 , . . . , vn } is a set of n linearly independent


vectors. Then S is basis of V.

2. Suppose S = {v1 , v2 , . . . , vn } is a set of n vectors. If S spans V,


then S is basis of V.

Remark. Theorem 4.5.8 means that, if the dimension of V matches the number of elements (i.e. the 'cardinality') of S, then to check whether S is a basis of V, you have to check only one of the two required properties: (1) independence or (2) spanning.

Example 4.5.9 Here are some standard examples:

1. We have dim(R) = 1. This is because {1} forms a basis for R.



2. We have dim(R2 ) = 2. This is because the standard basis

e1 = (1, 0), e2 = (0, 1)

consists of two elements.

3. We have dim(R3 ) = 3. This is because the standard basis

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)

consists of three elements.

4. More generally, dim(Rn ) = n. This is because the standard basis

e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, 0, . . . , 1)

consists of n elements.

5. The dimension of the vector space Mm,n of all m × n matrices is


mn. Notationally, dim(Mm,n ) = mn. To see this, let eij be the
m × n matrix whose (i, j)th −entry is 1 and all the rest of the
entries are zero. Then,

S = {eij : i = 1, 2, . . . , m; j = 1, 2, . . . , n}

forms a basis of Mm,n and S has mn elements.

6. Also recall, if a vector space V does not have a finite basis, we


say V is infinite dimensional.

(a) The vector space P of all polynomials (with real coefficients)


has infinite dimension.
(b) The vector space C(R) of all continuous real valued functions
on real line R has infinite dimension.

Exercise 4.5.10 (Ex. 4 (changed), p. 230) Write down the standard basis of the vector space M3,2 of all 3 × 2 matrices with real entries.
Solution: Let eij be the 3 × 2 matrix whose (i, j)th entry is 1 and all other entries are zero. Then,

{e11 , e12 , e21 , e22 , e31 , e32 }

forms a basis of M3,2 . More explicitly,

$$e_{11} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad e_{12} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad e_{21} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix}$$

and

$$e_{22} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad e_{31} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad e_{32} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}.$$

It is easy to verify that these vectors span M3,2 and are linearly independent. So, they form a basis.
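The coordinates of a matrix with respect to this basis are simply its entries. A small sketch of that fact (the sample matrix A is our own choice):

```python
from sympy import Matrix, zeros

def e(i, j, m=3, n=2):
    # Standard basis matrix of M_{m,n}: 1 in position (i, j), zeros elsewhere
    E = zeros(m, n)
    E[i - 1, j - 1] = 1
    return E

# An arbitrary 3x2 matrix; its entries are exactly its coordinates
# with respect to the basis {e11, e12, e21, e22, e31, e32}.
A = Matrix([[1, 2], [3, 4], [5, 6]])
combo = sum((A[i, j] * e(i + 1, j + 1) for i in range(3) for j in range(2)),
            zeros(3, 2))
print(combo == A)         # True
```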

Exercise 4.5.11 (Ex. 8, p. 230) Explain why the set S = {(−1, 2), (1, −2), (2, 4)} is not a basis of R2 .
Solution: Note

(−1, 2) + (1, −2) + 0(2, 4) = (0, 0).

So, these three vectors are not linearly independent. So, S is not a
basis of R2 .
Alternate argument: We have dim (R2 ) = 2 and S has 3 elements.
So, by theorem 4.5.6 above S cannot be a basis.

Exercise 4.5.12 (Ex. 16, p. 230) Explain why the set

S = {(2, 1, −2), (−2, −1, 2), (4, 2, −4)}

is not a basis of R3 .
Solution: Note

(4, 2, −4) = (2, 1, −2) − (−2, −1, 2)

OR
(2, 1, −2) − (−2, −1, 2) − (4, 2, −4) = (0, 0, 0).

So, these three vectors are linearly dependent. So, S is not a basis of
R3 .

Exercise 4.5.13 (Ex. 24, p. 230) Explain why the set

$$S = \{6x - 3,\; 3x^2,\; 1 - 2x - x^2\}$$

is not a basis of P2 .
Solution: Note

$$1 - 2x - x^2 = -\frac{1}{3}(6x - 3) - \frac{1}{3}(3x^2)$$

OR

$$(1 - 2x - x^2) + \frac{1}{3}(6x - 3) + \frac{1}{3}(3x^2) = 0.$$

So, these three vectors are linearly dependent. So, S is not a basis of P2 .

Exercise 4.5.14 (Ex. 36, p. 231) Determine whether

S = {(1, 2), (1, −1)}

is a basis of R2 .
Solution: We will show that S is linearly independent. Let

a(1, 2) + b(1, −1) = (0, 0).

Then
a + b = 0, and 2a − b = 0.
Solving, we get a = 0, b = 0. So, these two vectors are linearly indepen-
dent. We have dim (R2 ) = 2. Therefore, by theorem 4.5.8, S is a basis
of R2 .

Exercise 4.5.15 (Ex. 40, p. 231) Determine whether

S = {(0, 0, 0), (1, 5, 6), (6, 2, 1)}

is a basis of R3 .
Solution: We have

1 · (0, 0, 0) + 0 · (1, 5, 6) + 0 · (6, 2, 1) = (0, 0, 0).

So, S is linearly dependent and hence is not a basis of R3 .

Remark. In fact, any subset S of a vector space V that contains 0 is


linearly dependent.

Exercise 4.5.16 (Ex. 46, p. 231) Determine whether

$$S = \{4t - t^2,\; 5 + t^3,\; 3t + 5,\; 2t^3 - 3t^2\}$$

is a basis of P3 .
Solution: Note the standard basis

$$\{1,\; t,\; t^2,\; t^3\}$$

of P3 has four elements. So, dim (P3 ) = 4. Because of theorem 4.5.8, we only need to check whether S is linearly independent. So, let

$$c_1(4t - t^2) + c_2(5 + t^3) + c_3(3t + 5) + c_4(2t^3 - 3t^2) = 0$$

for some scalars c1 , c2 , c3 , c4 . If we simplify, we get

$$(5c_2 + 5c_3) + (4c_1 + 3c_3)t + (-c_1 - 3c_4)t^2 + (c_2 + 2c_4)t^3 = 0.$$

Recall, a polynomial is zero if and only if all the coefficients are zero.
So, we have
5c2 +5c3 =0
4c1 +3c3 =0
−c1 −3c4 = 0
c2 +2c4 = 0
The augmented matrix is

$$\begin{bmatrix} 0 & 5 & 5 & 0 & 0 \\ 4 & 0 & 3 & 0 & 0 \\ -1 & 0 & 0 & -3 & 0 \\ 0 & 1 & 0 & 2 & 0 \end{bmatrix}, \quad \text{its Gauss-Jordan form:} \quad \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}.$$

Therefore, c1 = c2 = c3 = c4 = 0. Hence S is linearly independent. So,


by theorem 4.5.8, S is a basis of P3 .
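The same conclusion follows by encoding each polynomial as its coefficient vector with respect to {1, t, t², t³} and rank-testing. A sketch (ours, not the textbook's):

```python
from sympy import Matrix

# Coefficient vectors of 4t - t^2, 5 + t^3, 3t + 5, 2t^3 - 3t^2
# with respect to {1, t, t^2, t^3}, placed as columns
A = Matrix([[0, 5, 5, 0],
            [4, 0, 3, 0],
            [-1, 0, 0, -3],
            [0, 1, 0, 2]])

print(A.det())    # 30, nonzero
print(A.rank())   # 4 = dim(P3), so S is linearly independent, hence a basis
```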

Exercise 4.5.17 (Ex. 60. p.231) Determine the dimension of P4 .


Solution: Recall, P4 is the vector space of all polynomials of degree ≤ 4. We claim that

$$S = \{1,\; t,\; t^2,\; t^3,\; t^4\}$$

is a basis of P4 . Clearly, any polynomial in P4 is a linear combination


of elements in S. So, S spans P4 . Now, we prove that S is linearly

independent. So, let

$$c_0 \cdot 1 + c_1 t + c_2 t^2 + c_3 t^3 + c_4 t^4 = 0.$$

Since a nonzero polynomial of degree at most 4 can have at most four roots, while the left hand side vanishes for every t, it follows that c0 = c1 = c2 = c3 = c4 = 0. So, S is a basis of P4 and dim(P4 ) = 5.

Exercise 4.5.18 (Ex. 62. p.231) Determine the dimension of M32 .


Solution: In exercise 4.5.10, we established that

S = {e11 , e12 , e21 , e22 , e31 , e32 }

is a basis of M3,2 . So, dim(M32 ) = 6.

Exercise 4.5.19 (Ex. 72. p.231) Let

W = {(t, s, t) : s, t ∈ R} .

Give a geometric description of W, find a basis of W and determine the


dimension of W.
Solution: First note that W is closed under addition and scalar multiplication. So, W is a subspace of R3 . Notice, there are two parameters s, t in the description of W, and the first and third coordinates of every element are equal. So, W can be described by the equation x = z. Therefore, W represents the plane x = z in R3 .
We guess that

u = (1, 0, 1), v = (0, 1, 0)

will form a basis of W. To see that they are linearly independent, let

au + bv = (0, 0, 0), OR (a, b, a) = (0, 0, 0).

So, a = 0, b = 0 and hence they are linearly independent. To see that


they span W, we have

(t, s, t) = tu + sv.

So, {u, v} form a basis of W and dim(W ) = 2.
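Both properties of the guessed basis can be verified symbolically. A sketch (ours) using SymPy symbols for the parameters t, s:

```python
from sympy import Matrix, symbols

t, s = symbols('t s')

# The guessed basis vectors u = (1, 0, 1) and v = (0, 1, 0), as columns
B = Matrix([[1, 0],
            [0, 1],
            [1, 0]])

# Rank 2 gives independence; B*(t, s) reproduces the general element of W.
print(B.rank())                # 2, so dim(W) = 2
print(B * Matrix([t, s]))      # the column (t, s, t)
```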

Exercise 4.5.20 (Ex. 74. p.232) Let

W = {(5t, −3t, t, t) : t ∈ R} .

Find a basis of W and determine the dimension of W.

Solution: First note that W is closed under addition and scalar multiplication. So, W is a subspace of R4 . Notice, there is only one parameter t in the description of W. So, we expect that dim(W ) = 1. We guess that

S = {(5, −3, 1, 1)}

is a basis of W. This is easy to check. So, dim(W ) = 1.
