
4 Vector Spaces

4.3
LINEARLY INDEPENDENT
SETS; BASES

© 2012 Pearson Education, Inc.


LINEARLY INDEPENDENT SETS; BASES

▪ An indexed set of vectors {v1, …, vp} in V is said to be linearly independent if the vector equation

c1v1 + c2v2 + … + cpvp = 0 ----(1)

has only the trivial solution, c1 = 0, …, cp = 0.

▪ The set {v1, …, vp} is said to be linearly dependent if (1) has a nontrivial solution, i.e., if there are some weights c1, …, cp, not all zero, such that (1) holds.

▪ In such a case, (1) is called a linear dependence relation among v1, …, vp.
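This test can be carried out numerically: a quick sketch, assuming numpy is available, using the fact that (1) has only the trivial solution exactly when the matrix with the vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

def is_linearly_independent(vectors, tol=1e-10):
    # Independent iff equation (1) has only the trivial solution,
    # i.e. the column matrix has full column rank.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

v1 = np.array([1., 2., 3.])
v2 = np.array([0., 1., 2.])
v3 = v1 + v2                      # deliberately dependent on v1, v2
print(is_linearly_independent([v1, v2]))      # True
print(is_linearly_independent([v1, v2, v3]))  # False
```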
LINEARLY INDEPENDENT SETS; BASES
▪ Theorem 4: An indexed set {v1, …, vp} of two or more vectors, with v1 ≠ 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors, v1, …, vj−1.

▪ Definition: Let H be a subspace of a vector space V. An indexed set of vectors B = {b1, …, bp} in V is a basis for H if
(i) B is a linearly independent set, and
(ii) the subspace spanned by B coincides with H; that is, H = Span{b1, …, bp}.
LINEARLY INDEPENDENT SETS; BASES

▪ The definition of a basis applies to the case when H = V, because any vector space is a subspace of itself.

▪ Thus a basis of V is a linearly independent set that spans V.

▪ When H ≠ V, condition (ii) includes the requirement that each of the vectors b1, …, bp must belong to H, because Span{b1, …, bp} contains b1, …, bp.



Matrices
The matrix derived from the constant terms of the system is
the constant matrix of the system.

x – 4y + 3z = 5
System: – x – 3y – z = –3
2x – 4z = 6

Augmented Matrix:

[  1  −4   3  ⋮   5 ]
[ −1  −3  −1  ⋮  −3 ]
[  2   0  −4  ⋮   6 ]

Matrices

Coefficient Matrix:

[  1  −4   3 ]
[ −1  −3  −1 ]
[  2   0  −4 ]

Constant Matrix:

[  5 ]
[ −3 ]
[  6 ]
Matrices
Note the use of 0 for the missing coefficient of the
y-variable in the third equation, and also note the fourth
column (of constant terms) in the augmented matrix.

The optional dotted line in the augmented matrix helps to separate the coefficients of the linear system from the constant terms.

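In code, the augmented matrix is simply the coefficient matrix with the constant column appended. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# Coefficient matrix and constant matrix for the system
#    x - 4y + 3z =  5
#   -x - 3y -  z = -3
#   2x      - 4z =  6
A = np.array([[ 1., -4.,  3.],
              [-1., -3., -1.],
              [ 2.,  0., -4.]])
b = np.array([[ 5.],
              [-3.],
              [ 6.]])

augmented = np.hstack([A, b])   # append the constant column on the right
print(augmented.shape)          # (3, 4)
```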
Example 2 – Writing an Augmented Matrix

Write the augmented matrix for the system of linear


equations.
x + 3y = 9
–y + 4z = –2
x – 5z = 0

What is the dimension of the augmented matrix?

Example 2 – Solution
Begin by writing the linear system and aligning the
variables.
x + 3y = 9
– y + 4z = –2
x – 5z = 0

Next, use the coefficients and constant terms as the matrix entries. Include zeros for the coefficients of the missing variables.

[ 1   3   0  ⋮   9 ]
[ 0  −1   4  ⋮  −2 ]
[ 1   0  −5  ⋮   0 ]
Example 2 – Solution cont’d

The augmented matrix has three rows and four columns, so it is a 3 × 4 matrix.

The notation Rn is used to designate each row in the matrix. For instance, Row 1 is represented by R1.
Elementary Row Operations

Elementary Row Operations
In matrix terminology, these three operations correspond to
elementary row operations.

An elementary row operation on an augmented matrix of a given system of linear equations produces a new augmented matrix corresponding to a new (but equivalent) system of linear equations.

Two matrices are row-equivalent when one can be obtained from the other by a sequence of elementary row operations.

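The three elementary row operations can be sketched as small functions, assuming numpy is available (rows are 0-indexed in the code, unlike the R1, R2, … notation):

```python
import numpy as np

def swap_rows(M, i, j):
    # Interchange two rows.
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale_row(M, i, c):
    # Multiply a row by a nonzero constant c.
    M = M.copy()
    M[i] = c * M[i]
    return M

def add_multiple(M, src, dest, c):
    # Add c times one row to another row.
    M = M.copy()
    M[dest] = M[dest] + c * M[src]
    return M

M = np.array([[0., 1.], [2., 4.]])
M = swap_rows(M, 0, 1)         # [[2, 4], [0, 1]]
M = scale_row(M, 0, 0.5)       # [[1, 2], [0, 1]]
M = add_multiple(M, 1, 0, -2)  # [[1, 0], [0, 1]]
print(M)
```

Each function returns a new row-equivalent matrix, so a sequence of calls traces exactly the row-reduction steps written beside the matrices in the examples.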
Example 3 – Elementary Row Operations

a. Interchange the first and second rows of the original matrix.

Original Matrix New Row-Equivalent Matrix

b. Multiply the first row of the original matrix by

Original Matrix New Row-Equivalent Matrix

Example 3 – Elementary Row Operations cont’d

c. Add –2 times the first row of the original matrix to the third row.

Original Matrix New Row-Equivalent Matrix

Note that the elementary row operation is written beside the row that is changed.
Gaussian Elimination with Back-Substitution

The next example demonstrates the matrix version of Gaussian elimination.

The basic difference between the two methods is that with matrices we do not need to keep writing the variables.
Example 4 – Comparing Linear Systems and Matrix Operations

Linear System Associated Augmented Matrix

x – 2y + 3z = 9
–x + 3y + z = – 2
2x – 5y + 5z = 17

Add the first equation to the second equation.    Add the first row to the second row: R1 + R2.

x – 2y + 3z = 9
     y + 4z = 7
2x – 5y + 5z = 17
Example 4 – Comparing Linear Systems and Matrix Operations
cont’d

Add –2 times the first equation to the third equation.    Add –2 times the first row to the third row: –2R1 + R3.

x – 2y + 3z = 9
     y + 4z = 7
    –y –  z = –1

Add the second equation to the third equation.    Add the second row to the third row: R2 + R3.

x – 2y + 3z = 9
     y + 4z = 7
         3z = 6
Example 4 – Comparing Linear Systems and Matrix Operations
cont’d

Multiply the third equation by 1/3.    Multiply the third row by 1/3: (1/3)R3.

x – 2y + 3z = 9
     y + 4z = 7
          z = 2

At this point, you can use back-substitution to find that the solution is

x = 1, y = –1, and z = 2.

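The sequence of row operations in Example 4 can be replayed numerically; a sketch, assuming numpy is available:

```python
import numpy as np

# Augmented matrix of  x - 2y + 3z = 9,  -x + 3y + z = -2,  2x - 5y + 5z = 17
M = np.array([[ 1., -2.,  3.,  9.],
              [-1.,  3.,  1., -2.],
              [ 2., -5.,  5., 17.]])

M[1] += M[0]        # R1 + R2
M[2] += -2 * M[0]   # -2*R1 + R3
M[2] += M[1]        # R2 + R3
M[2] *= 1/3         # (1/3)*R3 -> row-echelon form

# Back-substitution from the bottom row up.
z = M[2, 3]
y = M[1, 3] - M[1, 2] * z
x = M[0, 3] - M[0, 1] * y - M[0, 2] * z
print(x, y, z)  # 1.0 -1.0 2.0
```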
Gaussian Elimination with Back-Substitution

The last matrix in Example 4 is in row-echelon form.

The term echelon refers to the stair-step pattern formed by the nonzero elements of the matrix.
Gaussian Elimination with Back-Substitution

To be in this form, a matrix must have the following properties.

1. Any rows consisting entirely of zeros occur at the bottom of the matrix.
2. For each row that does not consist entirely of zeros, the first nonzero entry is 1 (called a leading 1).
3. For two successive nonzero rows, the leading 1 in the higher row is farther to the left than the leading 1 in the lower row.

A matrix in row-echelon form is in reduced row-echelon form when every column that has a leading 1 has zeros in every position above and below its leading 1.

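The row-echelon conditions can be checked programmatically; a sketch, assuming numpy is available:

```python
import numpy as np

def is_row_echelon(M):
    """Check: zero rows at the bottom, each nonzero row starts with a
    leading 1, and leading 1s move strictly right going down."""
    last_lead = -1
    seen_zero_row = False
    for row in M:
        nz = np.flatnonzero(row)
        if nz.size == 0:
            seen_zero_row = True
            continue
        if seen_zero_row:               # nonzero row below a zero row
            return False
        lead = nz[0]
        if row[lead] != 1 or lead <= last_lead:
            return False
        last_lead = lead
    return True

good = np.array([[1., 2., 3.],
                 [0., 1., 4.],
                 [0., 0., 1.]])
bad = np.array([[0., 0., 0.],
                [0., 1., 2.]])          # zero row not at the bottom
print(is_row_echelon(good))  # True
print(is_row_echelon(bad))   # False
```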
Example 5 – Row-Echelon Form
Determine whether each matrix is in row-echelon form. If it
is, determine whether the matrix is in reduced row-echelon
form.

a. b.

c. d.

Example 5 – Row-Echelon Form cont’d

e. f.

Solution:
The matrices in (a), (c), (d), and (f) are in row-echelon form.

The matrices in (d) and (f) are in reduced row-echelon form because every column that has a leading 1 has zeros in every position above and below its leading 1.
Example 5 – Solution cont’d

The matrix in (b) is not in row-echelon form because the row of all zeros does not occur at the bottom of the matrix.

The matrix in (e) is not in row-echelon form because the first nonzero entry in Row 2 is not a leading 1.
Gaussian Elimination with Back-Substitution

Gaussian elimination with back-substitution works well for solving systems of linear equations by hand or with a computer.

For this algorithm, the order in which the elementary row operations are performed is important.

We should operate from left to right by columns, using elementary row operations to obtain zeros in all entries directly below the leading 1's.

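The column-by-column procedure above can be sketched as a function, assuming numpy is available; partial pivoting is added for numerical stability, which the hand method does not need:

```python
import numpy as np

def gaussian_solve(A, b):
    """Gaussian elimination with back-substitution, left to right by
    columns. Assumes A is square with a unique solution."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]              # make the leading entry 1
        for r in range(col + 1, n):
            M[r] -= M[r, col] * M[col]     # zeros below the leading 1
    # Back-substitution from the last row up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = M[i, n] - M[i, i + 1:n] @ x[i + 1:]
    return x

A = np.array([[1., -2., 3.], [-1., 3., 1.], [2., -5., 5.]])
b = np.array([9., -2., 17.])
print(gaussian_solve(A, b))  # [ 1. -1.  2.]
```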
Example 6 – Gaussian Elimination with Back-Substitution

Solve the system of equations.


y + z – 2w = –3
x + 2y – z =2
2x + 4y + z – 3w = –2
x – 4y – 7z – w = –19

Solution:

Write the augmented matrix.

[ 0   1   1  −2  ⋮   −3 ]
[ 1   2  −1   0  ⋮    2 ]
[ 2   4   1  −3  ⋮   −2 ]
[ 1  −4  −7  −1  ⋮  −19 ]

Example 6 – Solution cont’d

Interchange R1 and R2
so first column has
leading 1 in upper left
corner.

Perform operations
on R3 and R4 so first
column has zeros below
its leading 1.

Example 6 – Solution cont’d

Perform operations on
R4 so second column
has zeros below its
leading 1.

Perform operations
on R3 and R4 so third
and fourth columns
have leading 1’s.

Example 6 – Solution cont’d

The matrix is now in row-echelon form, and the corresponding system is

x + 2y –  z      =  2
     y +  z – 2w = –3
          z –  w = –2
               w =  3

Using back-substitution, you can determine that the solution is

x = –1, y = 2, z = 1, w = 3.

Check this in the original system of equations.


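The check suggested above can be done numerically; a quick sketch, assuming numpy is available, solving the 4×4 system directly:

```python
import numpy as np

# The system from Example 6.
A = np.array([[0.,  1.,  1., -2.],
              [1.,  2., -1.,  0.],
              [2.,  4.,  1., -3.],
              [1., -4., -7., -1.]])
b = np.array([-3., 2., -2., -19.])
print(np.linalg.solve(A, b))  # [-1.  2.  1.  3.]
```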
Gaussian Elimination with Back-Substitution

The following steps summarize the procedure used in


Example 6.

Gauss–Jordan Elimination
With Gaussian elimination, elementary row operations are
applied to a matrix to obtain a (row-equivalent) row-echelon
form of the matrix.

A second method of elimination, called Gauss-Jordan elimination after Carl Friedrich Gauss (1777–1855) and Wilhelm Jordan (1842–1899), continues the reduction process until a reduced row-echelon form is obtained.

This procedure is demonstrated in Example 8.
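Continuing the reduction to reduced row-echelon form can be sketched in code, assuming numpy is available; the only change from Gaussian elimination is that each pivot clears the entries above it as well as below:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Gauss-Jordan elimination: row-reduce all the way to
    reduced row-echelon form."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue                      # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]
        M[r] /= M[r, c]                   # leading 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]    # zeros above AND below
        r += 1
    return M

aug = np.array([[ 1., -2., 3.,  9.],
                [-1.,  3., 1., -2.],
                [ 2., -5., 5., 17.]])
print(rref(aug))  # last column gives x = 1, y = -1, z = 2
```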

Example 8 – Gauss–Jordan Elimination
Use Gauss-Jordan elimination to solve the system.

x – 2y + 3z = 9
– x + 3y + z = – 2
2x – 5y + 5z = 17

Solution:
In Example 4, Gaussian elimination was used to obtain the row-echelon form

[ 1  −2   3   9 ]
[ 0   1   4   7 ]
[ 0   0   1   2 ]
Example 8 – Solution cont’d

Now, rather than using back-substitution, apply additional elementary row operations until you obtain a matrix in reduced row-echelon form.

To do this, you must produce zeros above each of the leading 1's, as follows.

Perform operations on R1 so
second column has a zero
above its leading 1.
Example 8 – Solution cont’d

Perform operations on R1 and
R2 so third column has zeros
above its leading 1.

The matrix is now in reduced row-echelon form.

Converting back to a system of linear equations, you have

x =  1
y = –1
z =  2
Example 8 – Solution cont’d

Now you can simply read the solution,

x = 1, y = –1, z = 2

which can be written as the ordered triple (1, –1, 2).

You can check this result using the reduced row-echelon form feature of a graphing utility, as shown in Figure 7.23.
▪ Ex 9: Testing for linear independence
Determine whether the following set of vectors in R^3 is L.I. or L.D.

S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (−2, 0, 1)}

Sol:
c1v1 + c2v2 + c3v3 = 0  ⇒   c1        − 2c3 = 0
                            2c1 +  c2       = 0
                            3c1 + 2c2 +  c3 = 0

[ 1  0  −2  0 ]             [ 1  0  0  0 ]
[ 2  1   0  0 ]  ─G.-J. E.→ [ 0  1  0  0 ]
[ 3  2   1  0 ]             [ 0  0  1  0 ]

⇒ c1 = c2 = c3 = 0 (only the trivial solution)
(or det(A) = −1 ≠ 0, so there is only the trivial solution)
⇒ S is (or v1, v2, v3 are) linearly independent
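The determinant shortcut in Ex 9 is easy to check numerically; a sketch assuming numpy is available:

```python
import numpy as np

# Columns are v1, v2, v3 from Ex 9.
A = np.column_stack([(1., 2., 3.), (0., 1., 2.), (-2., 0., 1.)])
d = np.linalg.det(A)
print(int(round(d)))  # -1: nonzero, so only the trivial solution exists
```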
◼ Ex 10: Testing for linear independence
Determine whether the following set of vectors in P2 is L.I. or L.D.

S = {v1, v2, v3} = {1 + x − 2x^2, 2 + 5x − x^2, x + x^2}

Sol:
c1v1 + c2v2 + c3v3 = 0
i.e., c1(1 + x − 2x^2) + c2(2 + 5x − x^2) + c3(x + x^2) = 0 + 0x + 0x^2

⇒   c1 + 2c2       = 0
    c1 + 5c2 +  c3 = 0
  −2c1 −  c2 +  c3 = 0

[  1   2  0  0 ]           [ 1  2   0   0 ]
[  1   5  1  0 ]  ─G. E.→  [ 0  1  1/3  0 ]
[ −2  −1  1  0 ]           [ 0  0   0   0 ]

This system has infinitely many solutions
(i.e., this system has nontrivial solutions, e.g., c1 = 2, c2 = −1, c3 = 3)
⇒ S is (or v1, v2, v3 are) linearly dependent
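Ex 10 can be checked numerically by representing each polynomial in P2 by its coefficient vector (constant, x, x^2); a sketch assuming numpy is available:

```python
import numpy as np

P = np.column_stack([(1., 1., -2.),   # 1 + x - 2x^2
                     (2., 5., -1.),   # 2 + 5x - x^2
                     (0., 1.,  1.)])  # x + x^2
print(np.linalg.matrix_rank(P))  # 2 < 3, so the set is linearly dependent

# The dependence relation found above: 2*v1 - v2 + 3*v3 = 0
print(np.allclose(2*P[:, 0] - P[:, 1] + 3*P[:, 2], 0))  # True
```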
◼ Ex 11: Testing for linear independence
Determine whether the following set of vectors in the 2×2 matrix space is L.I. or L.D.

S = {v1, v2, v3} = { [ 2  1 ]   [ 3  0 ]   [ 1  0 ]
                     [ 0  1 ] , [ 2  1 ] , [ 2  0 ] }

Sol:
c1v1 + c2v2 + c3v3 = 0

c1 [ 2  1 ] + c2 [ 3  0 ] + c3 [ 1  0 ] = [ 0  0 ]
   [ 0  1 ]      [ 2  1 ]      [ 2  0 ]   [ 0  0 ]

⇒ 2c1 + 3c2 +  c3 = 0
   c1             = 0
        2c2 + 2c3 = 0
   c1 +  c2       = 0

[ 2  3  1  0 ]             [ 1  0  0  0 ]
[ 1  0  0  0 ]  ─G.-J. E.→ [ 0  1  0  0 ]
[ 0  2  2  0 ]             [ 0  0  1  0 ]
[ 1  1  0  0 ]             [ 0  0  0  0 ]

⇒ c1 = c2 = c3 = 0 (this system has only the trivial solution)

⇒ S is linearly independent
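Ex 11 can be checked the same way by flattening each 2×2 matrix to a vector in R^4; a sketch assuming numpy is available:

```python
import numpy as np

v1 = np.array([[2., 1.], [0., 1.]])
v2 = np.array([[3., 0.], [2., 1.]])
v3 = np.array([[1., 0.], [2., 0.]])

# Columns of A are the flattened matrices; full column rank means L.I.
A = np.column_stack([v1.ravel(), v2.ravel(), v3.ravel()])
print(np.linalg.matrix_rank(A))  # 3, so S is linearly independent
```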
STANDARD BASIS
▪ Let e1, …, en be the columns of the n × n identity matrix In.
▪ That is,

e1 = [ 1 ]    e2 = [ 0 ]          en = [ 0 ]
     [ 0 ]         [ 1 ]     …         [ ⋮ ]
     [ ⋮ ]         [ ⋮ ]               [ 0 ]
     [ 0 ]         [ 0 ]               [ 1 ]

▪ The set {e1, …, en} is called the standard basis for R^n. See the following figure.
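In code, the standard basis vectors are just the columns of the identity matrix; a quick sketch assuming numpy is available:

```python
import numpy as np

n = 4
I = np.eye(n)          # columns of I are e1, ..., en
e1 = I[:, 0]
print(e1)              # [1. 0. 0. 0.]

# Any x in R^n is the combination x1*e1 + ... + xn*en:
x = np.array([3., -1., 2., 5.])
print(np.allclose(I @ x, x))  # True
```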



THE SPANNING SET THEOREM
▪ Theorem 5: Let S = {v1, …, vp} be a set in V, and let H = Span{v1, …, vp}.
a. If one of the vectors in S—say, vk—is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
b. If H ≠ {0}, some subset of S is a basis for H.
▪ Proof:
a. By rearranging the list of vectors in S, if necessary, we may suppose that vp is a linear combination of v1, …, vp−1—say,
THE SPANNING SET THEOREM
vp = a1v1 + … + ap−1vp−1 ----(2)

▪ Given any x in H, we may write

x = c1v1 + … + cp−1vp−1 + cpvp ----(3)

for suitable scalars c1, …, cp.

▪ Substituting the expression for vp from (2) into (3), it is easy to see that x is a linear combination of v1, …, vp−1.

▪ Thus {v1, …, vp−1} spans H, because x was an arbitrary element of H.



THE SPANNING SET THEOREM
b. If the original spanning set S is linearly
independent, then it is already a basis for H.
▪ Otherwise, one of the vectors in S depends on
the others and can be deleted, by part (a).
▪ So long as there are two or more vectors in
the spanning set, we can repeat this process
until the spanning set is linearly independent
and hence is a basis for H.
▪ If the spanning set is eventually reduced to one vector, that vector will be nonzero (and hence linearly independent) because H ≠ {0}.
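The deletion process in part (b) can be sketched as an algorithm, assuming numpy is available: repeatedly remove a vector that is a combination of the others until the remaining set is linearly independent.

```python
import numpy as np

def prune_to_basis(vectors, tol=1e-10):
    """Shrink a spanning set to a basis of its span, as in the proof
    of Theorem 5(b)."""
    vectors = [np.asarray(v, dtype=float) for v in vectors]
    while vectors:
        A = np.column_stack(vectors)
        rA = np.linalg.matrix_rank(A, tol=tol)
        if rA == len(vectors):
            return vectors              # linearly independent: a basis
        # Delete one vector whose removal does not shrink the span.
        for k in range(len(vectors)):
            rest = vectors[:k] + vectors[k + 1:]
            B = np.column_stack(rest) if rest else np.zeros((len(A), 0))
            if np.linalg.matrix_rank(B, tol=tol) == rA:
                vectors.pop(k)
                break
    return vectors

v1 = np.array([0., 2., -1.])
v2 = np.array([2., 2., 0.])
v3 = 5*v1 + 3*v2                        # dependent on v1, v2
basis = prune_to_basis([v1, v2, v3])
print(len(basis))  # 2
```

Which two vectors survive depends on the deletion order; any linearly independent pair with the same span is an equally valid basis.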
THE SPANNING SET THEOREM

▪ Example 1: Let

v1 = [  0 ]    v2 = [ 2 ]    v3 = [  6 ]
     [  2 ]         [ 2 ]         [ 16 ]
     [ −1 ]         [ 0 ]         [ −5 ]

and H = Span{v1, v2, v3}. Note that v3 = 5v1 + 3v2, and show that Span{v1, v2, v3} = Span{v1, v2}. Then find a basis for the subspace H.

▪ Solution: Every vector in Span{v1, v2} belongs to H because

c1v1 + c2v2 = c1v1 + c2v2 + 0v3
THE SPANNING SET THEOREM
▪ Now let x be any vector in H—say, x = c1v1 + c2v2 + c3v3.
▪ Since v3 = 5v1 + 3v2, we may substitute

x = c1v1 + c2v2 + c3(5v1 + 3v2)
  = (c1 + 5c3)v1 + (c2 + 3c3)v2

▪ Thus x is in Span{v1, v2}, so every vector in H already belongs to Span{v1, v2}.
▪ We conclude that H and Span{v1, v2} are actually the same set of vectors.
▪ It follows that {v1, v2} is a basis of H since {v1, v2} is linearly independent.
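Example 1 is easy to confirm numerically; a quick sketch assuming numpy is available:

```python
import numpy as np

v1 = np.array([0., 2., -1.])
v2 = np.array([2., 2., 0.])
v3 = np.array([6., 16., -5.])

print(np.allclose(v3, 5*v1 + 3*v2))  # True: v3 = 5v1 + 3v2

# {v1, v2} is linearly independent, hence a basis for H:
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2
```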
BASIS FOR COL B
▪ Example 2: Find a basis for Col B, where

B = [ b1  b2  ⋯  b5 ] = [ 1  4  0   2  0 ]
                        [ 0  0  1  −1  0 ]
                        [ 0  0  0   0  1 ]
                        [ 0  0  0   0  0 ]
▪ Solution: Each nonpivot column of B is a linear combination of the pivot columns.
▪ In fact, b2 = 4b1 and b4 = 2b1 − b3.
▪ By the Spanning Set Theorem, we may discard b2 and b4, and {b1, b3, b5} will still span Col B.
BASIS FOR COL B
▪ Let

S = {b1, b3, b5} = { [ 1 ]   [ 0 ]   [ 0 ]
                     [ 0 ]   [ 1 ]   [ 0 ]
                     [ 0 ] , [ 0 ] , [ 1 ]
                     [ 0 ]   [ 0 ]   [ 0 ] }

▪ Since b1 ≠ 0 and no vector in S is a linear combination of the vectors that precede it, S is linearly independent (Theorem 4).

▪ Thus S is a basis for Col B.
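The pivot columns can also be located numerically: a column of B is a pivot column exactly when it increases the rank of the columns before it. A sketch assuming numpy is available:

```python
import numpy as np

def pivot_columns(A, tol=1e-10):
    """Indices j such that column j is not a linear combination of
    the columns to its left."""
    pivots = []
    rank_so_far = 0
    for j in range(A.shape[1]):
        r = np.linalg.matrix_rank(A[:, :j + 1], tol=tol)
        if r > rank_so_far:
            pivots.append(j)
            rank_so_far = r
    return pivots

B = np.array([[1., 4., 0.,  2., 0.],
              [0., 0., 1., -1., 0.],
              [0., 0., 0.,  0., 1.],
              [0., 0., 0.,  0., 0.]])
print(pivot_columns(B))  # [0, 2, 4] -> {b1, b3, b5} is a basis for Col B
```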


BASES FOR NUL A AND COL A
▪ Theorem 6: The pivot columns of a matrix A form a
basis for Col A.

▪ Proof: Let B be the reduced echelon form of A.
▪ The set of pivot columns of B is linearly independent, for no vector in the set is a linear combination of the vectors that precede it.
▪ Since A is row equivalent to B, the pivot columns of
A are linearly independent as well, because any linear
dependence relation among the columns of A
corresponds to a linear dependence relation among
the columns of B.
BASES FOR NUL A AND COL A
▪ For this reason, every nonpivot column of A is a linear combination of the pivot columns of A.
▪ Thus the nonpivot columns of A may be discarded from the spanning set for Col A, by the Spanning Set Theorem.
▪ This leaves the pivot columns of A as a basis for Col A.

▪ Warning: The pivot columns of a matrix A are evident when A has been reduced only to echelon form.
▪ But be careful to use the pivot columns of A itself for the basis of Col A.
BASES FOR NUL A AND COL A
▪ Row operations can change the column space of a
matrix.
▪ The columns of an echelon form B of A are often not
in the column space of A.
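A tiny numerical illustration of this warning, assuming numpy is available: one row operation can move a column out of the original column space.

```python
import numpy as np

A = np.array([[1., 2.],
              [1., 2.]])   # Col A = span{(1, 1)}
B = A.copy()
B[1] *= 2                  # elementary row operation: scale row 2

# After the operation, column 1 of B is (1, 2), which is NOT a
# multiple of (1, 1), so it lies outside Col A:
print(np.linalg.matrix_rank(np.column_stack([A[:, 0], B[:, 0]])))  # 2
```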

▪ Two Views of a Basis


▪ When the Spanning Set Theorem is used, the deletion
of vectors from a spanning set must stop when the set
becomes linearly independent.
▪ If an additional vector is deleted, it will not be a
linear combination of the remaining vectors, and
hence the smaller set will no longer span V.
TWO VIEWS OF A BASIS

▪ Thus a basis is a spanning set that is as small as possible.

▪ A basis is also a linearly independent set that is as large as possible.

▪ If S is a basis for V, and if S is enlarged by one vector—say, w—from V, then the new set cannot be linearly independent, because S spans V, and w is therefore a linear combination of the elements in S.

