Math2014 Chapter 3

Linear Algebra

Row Reduction and Echelon Forms
• A rectangular matrix is in echelon form (or row
echelon form) if it has the following three
properties:
1. All nonzero rows are above any rows of all
zeros.
2. Each leading entry of a row is in a column to
the right of the leading entry of the row
above it.
3. All entries in a column below a leading entry
are zeros.


• If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form (or reduced row echelon form):
4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.
• An echelon matrix (respectively, reduced echelon matrix) is one that is in echelon form (respectively, reduced echelon form).
• Any nonzero matrix may be row reduced (i.e.,
transformed by elementary row operations) into more
than one matrix in echelon form, using different
sequences of row operations. However, the reduced
echelon form one obtains from a matrix is unique.
Theorem 1: Uniqueness of the Reduced Echelon Form
Each matrix is row equivalent to one and only one
reduced echelon matrix.


• If a matrix A is row equivalent to an echelon matrix U, we call U an echelon form (or row echelon form) of A; if U is in reduced echelon form, we call U the reduced echelon form of A.

• A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. A pivot column is a column of A that contains a pivot position.
• Example: Row reduce the matrix A below to echelon form, and locate the pivot columns of A.

$$A = \begin{bmatrix} 0 & -3 & -6 & 4 & 9 \\ -1 & -2 & -1 & 3 & 1 \\ -2 & -3 & 0 & 3 & -1 \\ 1 & 4 & 5 & -9 & -7 \end{bmatrix}$$

• Solution: The top of the leftmost nonzero column is the first pivot position. A nonzero entry, or pivot, must be placed in this position.

• Now, interchange rows 1 and 4, so that the pivot, 1, sits at the top of the first pivot column:

$$\begin{bmatrix} 1 & 4 & 5 & -9 & -7 \\ -1 & -2 & -1 & 3 & 1 \\ -2 & -3 & 0 & 3 & -1 \\ 0 & -3 & -6 & 4 & 9 \end{bmatrix}$$

• Create zeros below the pivot, 1, by adding multiples of the first row to the rows below, and obtain the next matrix.
• Choose 2 in the second row as the next pivot; it lies in the next pivot column, column 2:

$$\begin{bmatrix} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 5 & 10 & -15 & -15 \\ 0 & -3 & -6 & 4 & 9 \end{bmatrix}$$

• Add $-5/2$ times row 2 to row 3, and add $3/2$ times row 2 to row 4.

1 4 5 9 7 
0 2 4 6 6 
 
0 0 0 0 0
0 0 0 5 0 

• There is no way a leading entry can be created in
column 3. But, if we interchange rows 3 and 4, we can
produce a leading entry in column 4.

18
$$\begin{bmatrix} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 0 & 0 & -5 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

• The matrix is in echelon form and thus reveals that columns 1, 2, and 4 of A are pivot columns.

• The pivot positions hold the entries 1, 2, and −5, which lie in pivot columns 1, 2, and 4:

$$\begin{bmatrix} \mathbf{1} & 4 & 5 & -9 & -7 \\ 0 & \mathbf{2} & 4 & -6 & -6 \\ 0 & 0 & 0 & \mathbf{-5} & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

• The pivots in the example are 1, 2, and −5.
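The pivot columns can also be checked mechanically; here is a minimal sketch with SymPy (assuming SymPy is available; rref returns the reduced echelon form together with the pivot column indices):

```python
# Minimal check of the example above with SymPy.
import sympy as sp

A = sp.Matrix([[ 0, -3, -6,  4,  9],
               [-1, -2, -1,  3,  1],
               [-2, -3,  0,  3, -1],
               [ 1,  4,  5, -9, -7]])

R, pivot_cols = A.rref()   # reduced echelon form and pivot column indices
print(pivot_cols)          # (0, 1, 3) -> columns 1, 2, and 4 of A
```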
• In-Class Exercise: Apply elementary row operations to transform the following matrix first into echelon form and then into reduced echelon form.

$$\begin{bmatrix} 0 & 3 & -6 & 6 & 4 & -5 \\ 3 & -7 & 8 & -5 & 8 & 9 \\ 3 & -9 & 12 & -9 & 6 & 15 \end{bmatrix}$$

Solution:
STEP 1: Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
STEP 2: Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position. Here, interchange rows 1 and 3. (Rows 1 and 2 could have also been interchanged instead.)

$$\begin{bmatrix} 0 & 3 & -6 & 6 & 4 & -5 \\ 3 & -7 & 8 & -5 & 8 & 9 \\ 3 & -9 & 12 & -9 & 6 & 15 \end{bmatrix} \sim \begin{bmatrix} 3 & -9 & 12 & -9 & 6 & 15 \\ 3 & -7 & 8 & -5 & 8 & 9 \\ 0 & 3 & -6 & 6 & 4 & -5 \end{bmatrix}$$
STEP 3: Use row replacement operations to create zeros in all positions below the pivot.

We could have divided the top row by the pivot, 3, but with two 3s in column 1, it is just as easy to add −1 times row 1 to row 2.

$$\begin{bmatrix} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 3 & -6 & 6 & 4 & -5 \end{bmatrix}$$

STEP 4: Cover the row containing the pivot position, and cover all rows, if any, above it. Apply steps 1–3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.

• With row 1 covered, step 1 shows that column 2 is the next pivot column; for step 2, select as a pivot the "top" entry in that column, the 2 in row 2.

$$\begin{bmatrix} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 3 & -6 & 6 & 4 & -5 \end{bmatrix}$$

• For step 3, we could insert an optional step of dividing the "top" row of the submatrix by the pivot, 2. Instead, we add $-3/2$ times the "top" row to the row below.
• This produces the following matrix.

$$\begin{bmatrix} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{bmatrix}$$

• When we cover the row containing the second pivot position for step 4, we are left with a new submatrix that has only one row.

• Steps 1–3 require no work for this submatrix, and we have reached an echelon form of the full matrix. We perform one more step to obtain the reduced echelon form.
• STEP 5: Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not 1, make it 1 by a scaling operation.
• The rightmost pivot is in row 3. Create zeros above it, adding suitable multiples of row 3 to rows 2 and 1.

$$\begin{bmatrix} 3 & -9 & 12 & -9 & 0 & -9 \\ 0 & 2 & -4 & 4 & 0 & -14 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{bmatrix} \qquad \begin{matrix}\text{row 1} + (-6)\cdot\text{row 3}\\ \text{row 2} + (-2)\cdot\text{row 3}\end{matrix}$$

• The next pivot is in row 2. Scale this row, dividing by the pivot.

$$\begin{bmatrix} 3 & -9 & 12 & -9 & 0 & -9 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{bmatrix} \qquad \text{row 2 scaled by } \tfrac{1}{2}$$

• Create a zero in column 2 by adding 9 times row 2 to row 1.

$$\begin{bmatrix} 3 & 0 & -6 & 9 & 0 & -72 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{bmatrix} \qquad \text{row 1} + 9\cdot\text{row 2}$$

• Finally, scale row 1, dividing by the pivot, 3.
$$\begin{bmatrix} 1 & 0 & -2 & 3 & 0 & -24 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{bmatrix} \qquad \text{row 1 scaled by } \tfrac{1}{3}$$

• This is the reduced echelon form of the original matrix.

• The combination of steps 1–4 is called the forward phase of the row reduction algorithm. Step 5, which produces the unique reduced echelon form, is called the backward phase.
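The whole forward-and-backward computation can be reproduced in one call; a small sketch with SymPy (SymPy assumed):

```python
# Reduced echelon form of the in-class exercise, computed with SymPy.
import sympy as sp

M = sp.Matrix([[0,  3, -6,  6, 4, -5],
               [3, -7,  8, -5, 8,  9],
               [3, -9, 12, -9, 6, 15]])

R, pivots = M.rref()
print(R)
# Matrix([[1, 0, -2, 3, 0, -24], [0, 1, -2, 2, 0, -7], [0, 0, 0, 0, 1, 4]])
```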

• The row reduction algorithm leads to an explicit description of the solution set of a linear system when the algorithm is applied to the augmented matrix of the system.
• Suppose that the augmented matrix of a linear system has been changed into the equivalent reduced echelon form.

$$\begin{bmatrix} 1 & 0 & -5 & 1 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
• There are 3 variables because the augmented matrix has four columns. The associated system of equations is

$$\begin{aligned} x_1 - 5x_3 &= 1 \\ x_2 + x_3 &= 4 \\ 0 &= 0 \end{aligned} \qquad (1)$$

• The variables x1 and x2 corresponding to pivot columns in the matrix are called basic variables. The other variable, x3, is called a free variable.

• Whenever a system is consistent, as in (1), the solution set can be described explicitly by solving the reduced system of equations for the basic variables in terms of the free variables.

• This operation is possible because the reduced echelon form places each basic variable in one and only one equation.

• In (1), solve the first and second equations for x1 and x2. (Ignore the third equation; it offers no restriction on the variables.)
$$\begin{aligned} x_1 &= 1 + 5x_3 \\ x_2 &= 4 - x_3 \\ x_3 &\ \text{is free} \end{aligned} \qquad (2)$$

• The statement "x3 is free" means that you are free to choose any value for x3. Once that is done, the formulas in (2) determine the values for x1 and x2. For instance, when $x_3 = 0$, the solution is (1, 4, 0); when $x_3 = 1$, the solution is (6, 3, 1).

• Each different choice of x3 determines a (different) solution of the system, and every solution of the system is determined by a choice of x3.
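A tiny sketch of this parametric description in code (plain Python; the helper function is illustrative, not part of the text):

```python
# Parametric description (2): every choice of the free variable x3 yields a solution.
def solution(x3):
    return (1 + 5 * x3, 4 - x3, x3)   # (x1, x2, x3)

print(solution(0))   # (1, 4, 0)
print(solution(1))   # (6, 3, 1)
```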

• The description in (2) is a parametric description of the solution set in which the free variables act as parameters.

• Solving a system amounts to finding a parametric description of the solution set or determining that the solution set is empty.

• Whenever a system is consistent and has free variables, the solution set has many parametric descriptions.
• For instance, in system (1), add 5 times equation 2 to equation 1 and obtain the following equivalent system.

$$\begin{aligned} x_1 + 5x_2 &= 21 \\ x_2 + x_3 &= 4 \end{aligned}$$

• We could treat x2 as a parameter and solve for x1 and x3 in terms of x2, and we would have an accurate description of the solution set.
• When a system is inconsistent, the solution set is empty, even when the system has free variables. In this case, the solution set has no parametric representation.

Theorem 2: Existence and Uniqueness Theorem
A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column; that is, if and only if an echelon form of the augmented matrix has no row of the form [0 … 0 b] with b nonzero.
• If a linear system is consistent, then the solution set contains either (i) a unique solution, when there are no free variables, or (ii) infinitely many solutions, when there is at least one free variable.
Using Row Reduction to Solve a Linear System (Gauss-Jordan Method)
1. Write the augmented matrix of the system.
2. Use the row reduction algorithm to obtain an equivalent augmented matrix in echelon form. Decide whether the system is consistent. If there is no solution, stop; otherwise, go to the next step.
3. Continue row reduction to obtain the reduced echelon form.
4. Write the system of equations corresponding to the matrix obtained in step 3.
5. Rewrite each nonzero equation from step 4 so that its one basic variable is expressed in terms of any free variables appearing in the equation. (A code sketch of steps 1–3 follows this list.)
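A minimal sketch of steps 1–3 together with the consistency test of Theorem 2, using SymPy (SymPy assumed; the augmented matrix is system (1) from above):

```python
# Gauss-Jordan on an augmented matrix, plus the Theorem 2 consistency test.
import sympy as sp

aug = sp.Matrix([[1, 0, -5, 1],
                 [0, 1,  1, 4],
                 [0, 0,  0, 0]])

R, pivots = aug.rref()
# Consistent exactly when the rightmost column is NOT a pivot column.
print((aug.cols - 1) not in pivots)   # True: the system is consistent
```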

Example 2: Use the Gauss-Jordan method to solve the following systems.

a) b)

c) d)

Example 3: Determine whether the system is consistent.

a) b)

Example 4:

a)

b)

Vectors in $\mathbb{R}^2$
• A matrix with only one column is called a column vector, or simply a vector.
• An example of a vector with two entries is
$$\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix},$$
where w1 and w2 are any real numbers.
• The set of all vectors with 2 entries is denoted by $\mathbb{R}^2$ (read "r-two").
• The $\mathbb{R}$ stands for the real numbers that appear as entries in the vector, and the exponent 2 indicates that each vector contains 2 entries.
• Two vectors in $\mathbb{R}^2$ are equal if and only if their corresponding entries are equal.
• Given two vectors u and v in $\mathbb{R}^2$, their sum is the vector $\mathbf{u} + \mathbf{v}$ obtained by adding corresponding entries of u and v.
• Given a vector u and a real number c, the scalar multiple of u by c is the vector $c\mathbf{u}$ obtained by multiplying each entry in u by c.

• Example 1: Given $\mathbf{u} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 2 \\ -5 \end{bmatrix}$, find $4\mathbf{u}$, $(-3)\mathbf{v}$, and $4\mathbf{u} + (-3)\mathbf{v}$.
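A sketch of Example 1 with NumPy (NumPy assumed; the answers follow from the entrywise rules above):

```python
# Vector addition and scalar multiplication in R^2.
import numpy as np

u = np.array([1, -2])
v = np.array([2, -5])

print(4 * u)            # [ 4 -8]
print(-3 * v)           # [-6 15]
print(4 * u - 3 * v)    # [-2  7]
```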
GEOMETRIC DESCRIPTIONS OF $\mathbb{R}^2$
• Consider a rectangular coordinate system in the plane. Because each point in the plane is determined by an ordered pair of numbers, we can identify a geometric point (a, b) with the column vector $\begin{bmatrix} a \\ b \end{bmatrix}$.
• So we may regard $\mathbb{R}^2$ as the set of all points in the plane.

PARALLELOGRAM RULE FOR ADDITION
• If u and v in $\mathbb{R}^2$ are represented as points in the plane, then $\mathbf{u} + \mathbf{v}$ corresponds to the fourth vertex of the parallelogram whose other vertices are u, 0, and v.
VECTORS IN $\mathbb{R}^3$ AND $\mathbb{R}^n$
• Vectors in $\mathbb{R}^3$ are $3 \times 1$ column matrices with three entries.
• They are represented geometrically by points in a three-dimensional coordinate space, with arrows from the origin.
• If n is a positive integer, $\mathbb{R}^n$ (read "r-n") denotes the collection of all lists (or ordered n-tuples) of n real numbers, usually written as $n \times 1$ column matrices, such as
$$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}.$$

ALGEBRAIC PROPERTIES OF $\mathbb{R}^n$
• The vector whose entries are all zero is called the zero vector and is denoted by 0.
• For all u, v, w in $\mathbb{R}^n$ and all scalars c and d:
(i) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(ii) $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$
(iii) $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$
(iv) $\mathbf{u} + (-\mathbf{u}) = -\mathbf{u} + \mathbf{u} = \mathbf{0}$, where $-\mathbf{u}$ denotes $(-1)\mathbf{u}$
(v) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
(vi) $(c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}$
(vii) $c(d\mathbf{u}) = (cd)\mathbf{u}$
(viii) $1\mathbf{u} = \mathbf{u}$
Example 2:
1) Calculate the linear combination of the above four vectors.
2) With the new weights, calculate the linear combination again.

• Example 3: Let $\mathbf{a}_1 = \begin{bmatrix} 1 \\ -2 \\ -5 \end{bmatrix}$, $\mathbf{a}_2 = \begin{bmatrix} 2 \\ 5 \\ 6 \end{bmatrix}$, and $\mathbf{b} = \begin{bmatrix} 7 \\ 4 \\ -3 \end{bmatrix}$.

• Determine whether b can be generated (or written) as a linear combination of a1 and a2. That is, determine whether weights x1 and x2 exist such that
$$x_1\mathbf{a}_1 + x_2\mathbf{a}_2 = \mathbf{b} \qquad (1)$$
• If vector equation (1) has a solution, find it.

• Now, observe that the original vectors a1, a2, and b are the columns of the augmented matrix that we row reduce:

$$\begin{bmatrix} 1 & 2 & 7 \\ -2 & 5 & 4 \\ -5 & 6 & -3 \end{bmatrix}$$

• Write this matrix in a way that identifies its columns: $[\,\mathbf{a}_1 \ \ \mathbf{a}_2 \ \ \mathbf{b}\,]$.
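A quick way to settle Example 3 mechanically; a sketch with SymPy (SymPy assumed):

```python
# Row reduce [a1 a2 b]; b is a combination of a1, a2 iff the system is consistent.
import sympy as sp

a1 = sp.Matrix([1, -2, -5])
a2 = sp.Matrix([2, 5, 6])
b  = sp.Matrix([7, 4, -3])

aug = a1.row_join(a2).row_join(b)
R, pivots = aug.rref()
print(R)   # last column reads off x1 = 3, x2 = 2, so b = 3*a1 + 2*a2
```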
• A vector equation
$$x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}$$
has the same solution set as the linear system whose augmented matrix is
$$[\,\mathbf{a}_1 \ \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n \ \ \mathbf{b}\,]. \qquad (*)$$

• In particular, b can be generated by a linear combination of a1, …, an if and only if there exists a solution to the linear system corresponding to the matrix (*).

• Definition: If v1, …, vp are in $\mathbb{R}^n$, then the set of all linear combinations of v1, …, vp is denoted by Span {v1, …, vp} and is called the subset of $\mathbb{R}^n$ spanned (or generated) by v1, …, vp. That is, Span {v1, …, vp} is the collection of all vectors that can be written in the form
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p$$
with c1, …, cp scalars.
• Let v be a nonzero vector in $\mathbb{R}^3$. Then Span {v} is the set of all scalar multiples of v, which is the set of points on the line in $\mathbb{R}^3$ through v and 0.

A GEOMETRIC DESCRIPTION OF SPAN {U, V}
• If u and v are nonzero vectors in $\mathbb{R}^3$, with v not a multiple of u, then Span {u, v} is the plane in $\mathbb{R}^3$ that contains u, v, and 0.
• In particular, Span {u, v} contains the line in $\mathbb{R}^3$ through u and 0 and the line through v and 0.
LINEAR INDEPENDENCE
• Definition: An indexed set of vectors {v1, …, vp} in $\mathbb{R}^n$ is said to be linearly independent if the vector equation
$$x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_p\mathbf{v}_p = \mathbf{0}$$
has only the trivial solution. The set {v1, …, vp} is said to be linearly dependent if there exist weights c1, …, cp, not all zero, such that
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p = \mathbf{0} \qquad (1)$$

LINEAR INDEPENDENCE
• Equation (1) is called a linear dependence relation among v1, …, vp when the weights are not all zero.
• An indexed set is linearly dependent if and only if it is not linearly independent.
 1 4 2
Example 4: Let v1   2  , v 2   5 , and v3   1 .
   
 3  6   0 

a. Determine if the set {v1, v2, v3} is linearly


independent.
b. If possible, find a linear dependence relation
among v1, v2, and v3.

59
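A rank-based sketch of part (a) with NumPy (NumPy assumed): the columns are dependent exactly when the rank is less than the number of vectors.

```python
# Example 4(a): independence test via the rank of [v1 v2 v3].
import numpy as np

A = np.column_stack(([1, 2, 3], [4, 5, 6], [2, 1, 0]))
print(np.linalg.matrix_rank(A))        # 2, less than 3 columns
print(np.linalg.matrix_rank(A) < 3)    # True -> linearly dependent
# One dependence relation: 2*v1 - v2 + v3 = 0.
```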

• Suppose that we begin with a matrix $A = [\,\mathbf{a}_1 \ \cdots \ \mathbf{a}_n\,]$ instead of a set of vectors.

• The matrix equation $A\mathbf{x} = \mathbf{0}$ can be written as
$$x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{0}.$$

• Each linear dependence relation among the columns of A corresponds to a nontrivial solution of $A\mathbf{x} = \mathbf{0}$.

• Thus, the columns of matrix A are linearly independent if and only if the equation $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
SETS OF ONE OR TWO VECTORS
• A set containing only one vector, say v, is linearly independent if and only if v is not the zero vector.
• This is because the vector equation $x_1\mathbf{v} = \mathbf{0}$ has only the trivial solution when $\mathbf{v} \neq \mathbf{0}$.
• The zero vector is linearly dependent because $x_1\mathbf{0} = \mathbf{0}$ has many nontrivial solutions.

SETS OF ONE OR TWO VECTORS
• A set of two vectors {v1, v2} is linearly dependent if at least one of the vectors is a multiple of the other.
• The set is linearly independent if and only if neither of the vectors is a multiple of the other.
SETS OF TWO OR MORE VECTORS
• Theorem* (Characterization of Linearly Dependent Sets): An indexed set $S = \{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others.
• In fact, if S is linearly dependent and $\mathbf{v}_1 \neq \mathbf{0}$, then some $\mathbf{v}_j$ (with $j > 1$) is a linear combination of the preceding vectors, $\mathbf{v}_1, \ldots, \mathbf{v}_{j-1}$.

• Proof:
• If some vj in S equals a linear combination of the other vectors, then vj can be subtracted from both sides of the equation, producing a linear dependence relation with a nonzero weight (−1) on vj.
• For instance, if $\mathbf{v}_1 = c_2\mathbf{v}_2 + c_3\mathbf{v}_3$, then
$$\mathbf{0} = (-1)\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + 0\mathbf{v}_4 + \cdots + 0\mathbf{v}_p.$$
• Thus S is linearly dependent.
• Conversely, suppose S is linearly dependent. If v1 is zero, then it is a (trivial) linear combination of the other vectors in S.
• Otherwise, $\mathbf{v}_1 \neq \mathbf{0}$, and there exist weights c1, …, cp, not all zero, such that
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p = \mathbf{0}.$$
• Let j be the largest subscript for which $c_j \neq 0$.
• If $j = 1$, then $c_1\mathbf{v}_1 = \mathbf{0}$, which is impossible because $\mathbf{v}_1 \neq \mathbf{0}$.

SETS OF TWO OR MORE VECTORS
• So $j > 1$, and
$$c_1\mathbf{v}_1 + \cdots + c_j\mathbf{v}_j + 0\mathbf{v}_{j+1} + \cdots + 0\mathbf{v}_p = \mathbf{0}$$
$$c_j\mathbf{v}_j = -c_1\mathbf{v}_1 - \cdots - c_{j-1}\mathbf{v}_{j-1}$$
$$\mathbf{v}_j = \left(-\frac{c_1}{c_j}\right)\mathbf{v}_1 + \cdots + \left(-\frac{c_{j-1}}{c_j}\right)\mathbf{v}_{j-1}.$$
• Theorem* does not say that every vector in a linearly dependent set is a linear combination of the preceding vectors.
• A vector in a linearly dependent set may fail to be a linear combination of the other vectors.

• Example 5: Let $\mathbf{u} = \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 1 \\ 6 \\ 0 \end{bmatrix}$. Describe the set spanned by u and v, and explain why a vector w is in Span {u, v} if and only if {u, v, w} is linearly dependent.

• Solution: The vectors u and v are linearly independent because neither vector is a multiple of the other, and so they span a plane in $\mathbb{R}^3$.
• Span {u, v} is the x1x2-plane (with $x_3 = 0$).
• If w is a linear combination of u and v, then {u, v, w} is linearly dependent, by Theorem*.
• Conversely, suppose that {u, v, w} is linearly dependent.
• By Theorem*, some vector in {u, v, w} is a linear combination of the preceding vectors (since $\mathbf{u} \neq \mathbf{0}$).
• That vector must be w, since v is not a multiple of u.
• So w is in Span {u, v}.

• Example 5 generalizes to any set {u, v, w} in $\mathbb{R}^3$ with u and v linearly independent.
• The set {u, v, w} will be linearly dependent if and only if w is in the plane spanned by u and v.

• Theorem **: If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, …, vp} in $\mathbb{R}^n$ is linearly dependent if $p > n$.
• Proof: Let $A = [\,\mathbf{v}_1 \ \cdots \ \mathbf{v}_p\,]$.
• Then A is $n \times p$, and the equation $A\mathbf{x} = \mathbf{0}$ corresponds to a system of n equations in p unknowns.
• If $p > n$, there are more variables than equations, so there must be a free variable.
SETS OF TWO OR MORE VECTORS
• Hence $A\mathbf{x} = \mathbf{0}$ has a nontrivial solution, and the columns of A are linearly dependent.

• Theorem ** says nothing about the case in which the number of vectors in the set does not exceed the number of entries in each vector.

• Theorem ***: If a set $S = \{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ in $\mathbb{R}^n$ contains the zero vector, then the set is linearly dependent.

• Proof: By renumbering the vectors, we may suppose $\mathbf{v}_1 = \mathbf{0}$.

• Then the equation $1\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_p = \mathbf{0}$ shows that S is linearly dependent.
• If A is an $m \times n$ matrix, that is, a matrix with m rows and n columns, then the scalar entry in the ith row and jth column of A is denoted by $a_{ij}$ and is called the (i, j)-entry of A.
• Each column of A is a list of m real numbers, which identifies a vector in $\mathbb{R}^m$.

• The columns are denoted by a1, …, an, and the matrix A is written as
$$A = [\,\mathbf{a}_1 \ \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n\,].$$
• The number $a_{ij}$ is the ith entry (from the top) of the jth column vector aj.
• The diagonal entries in an $m \times n$ matrix $A = [a_{ij}]$ are $a_{11}, a_{22}, a_{33}, \ldots$, and they form the main diagonal of A.
• A diagonal matrix is a square $n \times n$ matrix whose nondiagonal entries are zero.
• An example is the $n \times n$ identity matrix, $I_n$.
• An $m \times n$ matrix whose entries are all zero is a zero matrix and is written as 0.

• Two matrices are equal if they have the same size (i.e., the same number of rows and the same number of columns) and if their corresponding columns are equal, which amounts to saying that their corresponding entries are equal.

• If A and B are $m \times n$ matrices, then the sum $A + B$ is the $m \times n$ matrix whose columns are the sums of the corresponding columns in A and B.

• Since vector addition of the columns is done entrywise, each entry in $A + B$ is the sum of the corresponding entries in A and B.

• The sum $A + B$ is defined only when A and B are the same size.

• Example 1: Let
$$A = \begin{bmatrix} 4 & 0 & 5 \\ -1 & 3 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 1 & 1 \\ 3 & 5 & 7 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & -3 \\ 0 & 1 \end{bmatrix}.$$
Find $A + B$ and $A + C$.

• If r is a scalar and A is a matrix, then the scalar multiple rA is the matrix whose columns are r times the corresponding columns in A.
Theorem: Let A, B, and C be matrices of the same size, and let r and s be scalars.
a. $A + B = B + A$
b. $(A + B) + C = A + (B + C)$
c. $A + 0 = A$
d. $r(A + B) = rA + rB$
e. $(r + s)A = rA + sA$
f. $r(sA) = (rs)A$

• Each equality in the theorem is verified by showing that the matrix on the left side has the same size as the matrix on the right and that corresponding columns are equal.

• Row-column rule for computing AB
• If a product AB is defined, then the entry in row i and column j of AB is the sum of the products of corresponding entries from row i of A and column j of B.
• If $(AB)_{ij}$ denotes the (i, j)-entry in AB, and if A is an $m \times n$ matrix, then
$$(AB)_{ij} = a_{i1}b_{1j} + \cdots + a_{in}b_{nj}.$$

• Example 2: Compute AB, where
$$A = \begin{bmatrix} 2 & 3 \\ 1 & -5 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 4 & 3 & 9 \\ 1 & -2 & 3 \end{bmatrix}.$$
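The row-column rule translates directly into two nested loops; a plain-Python sketch (the matrices repeat Example 2, with the signs as reconstructed above):

```python
# Row-column rule: (AB)[i][j] = sum over k of A[i][k] * B[k][j].
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "AB is defined only when A has as many columns as B has rows"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[2, 3], [1, -5]]
B = [[4, 3, 9], [1, -2, 3]]
print(matmul(A, B))   # [[11, 0, 27], [-1, 13, -6]]
```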
• Theorem 2: Let A be an $m \times n$ matrix, and let B and C have sizes for which the indicated sums and products are defined.
a. $A(BC) = (AB)C$ (associative law of multiplication)
b. $A(B + C) = AB + AC$ (left distributive law)
c. $(B + C)A = BA + CA$ (right distributive law)
d. $r(AB) = (rA)B = A(rB)$ for any scalar r
e. $I_mA = A = AI_n$ (identity for matrix multiplication)

• If $AB = BA$, we say that A and B commute with one another.
• Warnings:
1. In general, $AB \neq BA$.
2. The cancellation laws do not hold for matrix multiplication. That is, if $AB = AC$, it is not true in general that $B = C$.
3. If a product AB is the zero matrix, you cannot conclude in general that either $A = 0$ or $B = 0$.
• If A is an $n \times n$ matrix and k is a positive integer, then $A^k$ denotes the product of k copies of A:
$$A^k = \underbrace{A \cdots A}_{k}$$

• If A is nonzero and x is in $\mathbb{R}^n$, then $A^k\mathbf{x}$ is the result of left-multiplying x by A repeatedly k times.

• If $k = 0$, then $A^0\mathbf{x}$ should be x itself.

• Thus $A^0$ is interpreted as the identity matrix.
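A brief NumPy sketch of matrix powers (NumPy assumed; the matrix is an arbitrary 2 × 2 example):

```python
# Matrix powers: A^k is the product of k copies of A, and A^0 is the identity.
import numpy as np

A = np.array([[2, 3], [1, -5]])
print(np.linalg.matrix_power(A, 3))                     # same as A @ A @ A
print(np.array_equal(np.linalg.matrix_power(A, 0),
                     np.eye(2, dtype=int)))             # True: A^0 = I
```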

Example 3: Calculate the transpose of D.
Theorem 3: Let A and B denote matrices whose sizes are appropriate for the following sums and products.
a. $(A^T)^T = A$
b. $(A + B)^T = A^T + B^T$
c. For any scalar r, $(rA)^T = rA^T$
d. $(AB)^T = B^TA^T$
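A quick numeric spot check of part (d) with NumPy (NumPy assumed; random integer matrices):

```python
# Spot check of (AB)^T = B^T A^T.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))

print(np.array_equal((A @ B).T, B.T @ A.T))   # True
```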

Example 4: Calculate the indicated quantities for the given matrices.
Example 5: Calculate |A|.

Example 6: Calculate |A| by

a)

b)
Example 7: Calculate |T|.
Example 8: Calculate |A| by row operations.
Example 9: Calculate the inverse of A by using Gauss-Jordan elimination to transform [A | I] into [I | A⁻¹]. Discuss: when is A not invertible?

a)

b)

c)
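Example 9's specific matrices are not reproduced here; as a stand-in, here is a hedged sketch of the [A | I] → [I | A⁻¹] procedure with SymPy on an arbitrary invertible 2 × 2 matrix (SymPy assumed):

```python
# Inverse via Gauss-Jordan: rref([A | I]) = [I | A^-1] exactly when A is invertible.
import sympy as sp

A = sp.Matrix([[2, 1], [5, 3]])          # illustrative matrix, not the slide's
aug = A.row_join(sp.eye(2))
R, pivots = aug.rref()

invertible = pivots == (0, 1)            # left block reduced to the identity
A_inv = R[:, 2:]
print(invertible, A_inv)                  # True, Matrix([[3, -1], [-5, 2]])
print(A * A_inv == sp.eye(2))             # True
```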

Example 10:

Example 11:

• We have added two vectors and multiplied a vector by a scalar.

• Question: Is it possible to multiply two vectors so that their product is a useful quantity?

• There are two standard products: the inner product (dot product) and the cross product.
Definition: THE DOT PRODUCT
• If a = [a1, a2, …, an] and b = [b1, b2, …, bn], then the dot product of a and b is the number a · b (or ⟨a, b⟩) given by:
$$\mathbf{a} \cdot \mathbf{b} = a_1b_1 + a_2b_2 + \cdots + a_nb_n$$

• The result is not a vector.
• It is a real number, that is, a scalar.
• For this reason, the dot product is sometimes called the scalar product (or inner product).

Example 1: Calculate the inner product of the given vectors.

Example 2: Calculate the norm of W and X.
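The vectors in Examples 1–2 were given on the slides; a NumPy sketch of both operations on illustrative vectors (NumPy assumed):

```python
# Dot product and norm (length) of vectors.
import numpy as np

a = np.array([2, 2, -1])
b = np.array([5, -3, 2])

print(np.dot(a, b))         # 2*5 + 2*(-3) + (-1)*2 = 2
print(np.linalg.norm(a))    # sqrt(4 + 4 + 1) = 3.0
```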
PROPERTIES OF THE DOT PRODUCT
• If a, b, and c are vectors and c is a scalar, then:
1. $\mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|^2$
2. $\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}$
3. $\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}$
4. $(c\mathbf{a}) \cdot \mathbf{b} = c(\mathbf{a} \cdot \mathbf{b}) = \mathbf{a} \cdot (c\mathbf{b})$
5. $\mathbf{0} \cdot \mathbf{a} = 0$

GEOMETRIC INTERPRETATION
• The dot product a · b can be given a geometric interpretation in terms of the angle θ between a and b.
• If θ is the angle between the vectors a and b, then
$$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta$$
Proof
• If we apply the Law of Cosines to triangle OAB, where O is the origin and A, B are the tips of a and b, we get:
$$|AB|^2 = |OA|^2 + |OB|^2 - 2|OA||OB|\cos\theta$$
• where $|OA| = \|\mathbf{a}\|$, $|OB| = \|\mathbf{b}\|$, and $|AB| = \|\mathbf{a} - \mathbf{b}\|$.
• Then we have
$$\|\mathbf{a} - \mathbf{b}\|^2 = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 - 2\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta$$
$$\|\mathbf{a}\|^2 - 2\,\mathbf{a} \cdot \mathbf{b} + \|\mathbf{b}\|^2 = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 - 2\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta$$
$$-2\,\mathbf{a} \cdot \mathbf{b} = -2\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta$$
$$\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta$$

Example 3: If the vectors a and b have lengths 4 and 6, and the angle between them is π/3, find a · b.

Example 4: Find the angle between the vectors a = [2, 2, −1] and b = [5, −3, 2].
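A sketch of Example 4 with NumPy (NumPy assumed), inverting cos θ = a · b / (‖a‖ ‖b‖):

```python
# Angle between two vectors from the dot-product formula.
import numpy as np

a = np.array([2, 2, -1])
b = np.array([5, -3, 2])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_theta)))   # about 84 degrees
```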
ORTHOGONAL VECTORS
• Two nonzero vectors a and b are called perpendicular or orthogonal if the angle between them is θ = π/2; equivalently, a · b = 0.

Example 5: Show that [2, 2, −1] is perpendicular to [5, −4, 2].

• The dot product a · b is:
  • positive, if a and b point in the same general direction
  • zero, if they are perpendicular
  • negative, if they point in generally opposite directions
THE CROSS PRODUCT
• The cross product a × b of two vectors a and b, unlike the dot product, is a vector.
• For this reason, it is also called the vector product.
• Note that a × b is defined only when a and b are three-dimensional (3-D) vectors.
• Definition: If a = ⟨a1, a2, a3⟩ and b = ⟨b1, b2, b3⟩, then
$$\mathbf{a} \times \mathbf{b} = \langle a_2b_3 - a_3b_2,\ a_3b_1 - a_1b_3,\ a_1b_2 - a_2b_1 \rangle$$
Example 6: If a = ⟨1, 3, 4⟩ and b = ⟨2, 7, −5⟩, calculate a × b.

• Solution:
$$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & 3 & 4 \\ 2 & 7 & -5 \end{vmatrix} = \begin{vmatrix} 3 & 4 \\ 7 & -5 \end{vmatrix}\mathbf{i} - \begin{vmatrix} 1 & 4 \\ 2 & -5 \end{vmatrix}\mathbf{j} + \begin{vmatrix} 1 & 3 \\ 2 & 7 \end{vmatrix}\mathbf{k}$$
$$= (-15 - 28)\,\mathbf{i} - (-5 - 8)\,\mathbf{j} + (7 - 6)\,\mathbf{k} = -43\mathbf{i} + 13\mathbf{j} + \mathbf{k}$$
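The same computation with NumPy's built-in cross product (NumPy assumed):

```python
# Verification of Example 6.
import numpy as np

a = np.array([1, 3, 4])
b = np.array([2, 7, -5])
print(np.cross(a, b))   # [-43  13   1]
```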
Example 7: Show that a × a = 0 for any vector a in V3.
• If a = ⟨a1, a2, a3⟩, then
$$\mathbf{a} \times \mathbf{a} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ a_1 & a_2 & a_3 \end{vmatrix} = (a_2a_3 - a_3a_2)\,\mathbf{i} - (a_1a_3 - a_3a_1)\,\mathbf{j} + (a_1a_2 - a_2a_1)\,\mathbf{k} = 0\mathbf{i} + 0\mathbf{j} + 0\mathbf{k} = \mathbf{0}$$

• One of the most important properties of the cross product is given by the following theorem.

Example 8 (Theorem): Prove that the vector a × b is orthogonal to both a and b.
Proof
• In order to show that a × b is orthogonal to a, we compute their dot product as follows:
$$(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{a} = \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}a_1 - \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix}a_2 + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}a_3$$
$$= a_1(a_2b_3 - a_3b_2) - a_2(a_1b_3 - a_3b_1) + a_3(a_1b_2 - a_2b_1)$$
$$= a_1a_2b_3 - a_1b_2a_3 - a_1a_2b_3 + b_1a_2a_3 + a_1b_2a_3 - b_1a_2a_3 = 0$$

• A similar computation shows that $(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{b} = 0$.
• Therefore, a × b is orthogonal to both a and b.

• Let a and b be represented by directed line segments with the same initial point. Then a × b points in a direction perpendicular to the plane through a and b.
• We know the direction of the vector a × b.
• The remaining thing we need to complete its geometric description is its length ‖a × b‖. This is given by the following theorem.
• If θ is the angle between a and b (so 0 ≤ θ ≤ π), then
$$\|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta$$

Proof
• From the definition of the cross product and the length of a vector, we have:
$$\|\mathbf{a} \times \mathbf{b}\|^2 = (a_2b_3 - a_3b_2)^2 + (a_3b_1 - a_1b_3)^2 + (a_1b_2 - a_2b_1)^2$$
$$= a_2^2b_3^2 - 2a_2a_3b_2b_3 + a_3^2b_2^2 + a_3^2b_1^2 - 2a_1a_3b_1b_3 + a_1^2b_3^2 + a_1^2b_2^2 - 2a_1a_2b_1b_2 + a_2^2b_1^2$$
$$= (a_1^2 + a_2^2 + a_3^2)(b_1^2 + b_2^2 + b_3^2) - (a_1b_1 + a_2b_2 + a_3b_3)^2$$
$$= \|\mathbf{a}\|^2\|\mathbf{b}\|^2 - (\mathbf{a} \cdot \mathbf{b})^2 = \|\mathbf{a}\|^2\|\mathbf{b}\|^2 - \|\mathbf{a}\|^2\|\mathbf{b}\|^2\cos^2\theta = \|\mathbf{a}\|^2\|\mathbf{b}\|^2\sin^2\theta$$
• Taking square roots gives the result, since $\sin\theta \geq 0$ for $0 \leq \theta \leq \pi$.
• Two nonzero vectors a and b are parallel if and only if
$$\mathbf{a} \times \mathbf{b} = \mathbf{0}$$

Example 9: Find a vector perpendicular to the plane that passes through the points
P(1, 4, 6), Q(−2, 5, −1), R(1, −1, 1)
• Solution:
$$\vec{PQ} = (-2 - 1)\,\mathbf{i} + (5 - 4)\,\mathbf{j} + (-1 - 6)\,\mathbf{k} = -3\mathbf{i} + \mathbf{j} - 7\mathbf{k}$$
$$\vec{PR} = (1 - 1)\,\mathbf{i} + (-1 - 4)\,\mathbf{j} + (1 - 6)\,\mathbf{k} = -5\mathbf{j} - 5\mathbf{k}$$
$$\vec{PQ} \times \vec{PR} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ -3 & 1 & -7 \\ 0 & -5 & -5 \end{vmatrix} = (-5 - 35)\,\mathbf{i} - (15 - 0)\,\mathbf{j} + (15 - 0)\,\mathbf{k} = -40\mathbf{i} - 15\mathbf{j} + 15\mathbf{k}$$

• Therefore, the vector ⟨−40, −15, 15⟩ is perpendicular to the given plane.
• Any nonzero scalar multiple of this vector, such as ⟨−8, −3, 3⟩, is also perpendicular to the plane.

Example 10: Find the area of the triangle with vertices
P(1, 4, 6), Q(−2, 5, −1), R(1, −1, 1)
• In Example 9, we computed that $\vec{PQ} \times \vec{PR} = \langle -40, -15, 15 \rangle$.
$$\|\vec{PQ} \times \vec{PR}\| = \sqrt{(-40)^2 + (-15)^2 + 15^2} = \sqrt{2050} = 5\sqrt{82}$$
• This is the area of the parallelogram determined by $\vec{PQ}$ and $\vec{PR}$, so the area of the triangle PQR is half of it: $\tfrac{5}{2}\sqrt{82}$.
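The normal vector and the triangle area in a few NumPy lines (NumPy assumed):

```python
# Examples 9-10: normal to the plane PQR and the triangle's area.
import numpy as np

P, Q, R = np.array([1, 4, 6]), np.array([-2, 5, -1]), np.array([1, -1, 1])

n = np.cross(Q - P, R - P)
print(n)                        # [-40 -15  15]
print(np.linalg.norm(n) / 2)    # (5/2)*sqrt(82), about 22.64
```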

• If a, b, and c are vectors and c is a scalar, then
1. a × b = −b × a
2. (ca) × b = c(a × b) = a × (cb)
3. a × (b + c) = a × b + a × c
4. (a + b) × c = a × c + b × c
5. a · (b × c) = (a × b) · c
6. a × (b × c) = (a · c)b − (a · b)c
• These properties can be proved by writing the vectors in terms of their components and using the definition of the cross product.
Example: Proof of property 5: a · (b × c) = (a × b) · c
• Let a = ⟨a1, a2, a3⟩, b = ⟨b1, b2, b3⟩, and c = ⟨c1, c2, c3⟩. Then
$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = a_1(b_2c_3 - b_3c_2) + a_2(b_3c_1 - b_1c_3) + a_3(b_1c_2 - b_2c_1)$$
$$= a_1b_2c_3 - a_1b_3c_2 + a_2b_3c_1 - a_2b_1c_3 + a_3b_1c_2 - a_3b_2c_1$$
$$= (a_2b_3 - a_3b_2)c_1 + (a_3b_1 - a_1b_3)c_2 + (a_1b_2 - a_2b_1)c_3 = (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}$$

• Definition: An eigenvector of an $n \times n$ matrix A is a nonzero vector x such that $A\mathbf{x} = \lambda\mathbf{x}$ for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of $A\mathbf{x} = \lambda\mathbf{x}$; such an x is called an eigenvector corresponding to λ.
• λ is an eigenvalue of an $n \times n$ matrix A if and only if the equation
$$(A - \lambda I)\mathbf{x} = \mathbf{0}$$
has a nontrivial solution.
Example 1

• Example 2: Find the eigenvalues of the matrix $A = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}$ and find the corresponding eigenvectors.

• Example 3: Find the eigenvalues of the given matrix and find the corresponding eigenvectors.
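A sketch of Example 2 with NumPy (NumPy assumed); for this matrix the eigenvalues work out to 7 and −4:

```python
# Eigenvalues and eigenvectors of A from Example 2.
import numpy as np

A = np.array([[1, 6], [5, 2]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)    # 7 and -4 (order may vary)
print(eigvecs)    # columns are corresponding unit eigenvectors
```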
• Example 4: Find the eigenvalues of the given matrices and the corresponding eigenvectors.

• Note: a 3 × 3 matrix has at most 3 linearly independent eigenvectors.

• Theorem: The eigenvalues of a triangular matrix are the entries on its main diagonal.

$$A - \lambda I = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} - \begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} = \begin{bmatrix} a_{11} - \lambda & a_{12} & a_{13} \\ 0 & a_{22} - \lambda & a_{23} \\ 0 & 0 & a_{33} - \lambda \end{bmatrix}$$
DIAGONALIZATION
• Example 5: Let $A = \begin{bmatrix} 7 & 2 \\ -4 & 1 \end{bmatrix}$. Find a formula for $A^k$, given that $A = PDP^{-1}$, where
$$P = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix}$$
• Solution: The standard formula for the inverse of a $2 \times 2$ matrix yields
$$P^{-1} = \begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix}$$

• Then, by associativity of matrix multiplication,
$$A^2 = (PDP^{-1})(PDP^{-1}) = PD\underbrace{(P^{-1}P)}_{I}DP^{-1} = PDDP^{-1} = PD^2P^{-1} = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} 5^2 & 0 \\ 0 & 3^2 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix}$$
• Again,
$$A^3 = (PDP^{-1})A^2 = (PDP^{-1})PD^2P^{-1} = PDD^2P^{-1} = PD^3P^{-1}$$
• In general, for $k \geq 1$,
$$A^k = PD^kP^{-1} = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} 5^k & 0 \\ 0 & 3^k \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} 2 \cdot 5^k - 3^k & 5^k - 3^k \\ 2 \cdot 3^k - 2 \cdot 5^k & 2 \cdot 3^k - 5^k \end{bmatrix}$$
• A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is, if $A = PDP^{-1}$ for some invertible matrix P and some diagonal matrix D.
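A numeric spot check of the $A^k$ formula (NumPy assumed):

```python
# Verify the A^k formula from Example 5 for one value of k.
import numpy as np

A = np.array([[7, 2], [-4, 1]])
k = 4

direct  = np.linalg.matrix_power(A, k)
formula = np.array([[2 * 5**k - 3**k,         5**k - 3**k],
                    [2 * 3**k - 2 * 5**k, 2 * 3**k - 5**k]])
print(np.array_equal(direct, formula))   # True
```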

• Theorem: An $n \times n$ matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, $A = PDP^{-1}$, with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P.
• Example 6: Diagonalize the following matrix, if possible.
$$A = \begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix}$$
That is, find an invertible matrix P and a diagonal matrix D such that $A = PDP^{-1}$.

• Solution: Step 1. Find the eigenvalues of A.
$$0 = \det(A - \lambda I) = -\lambda^3 - 3\lambda^2 + 4 = -(\lambda - 1)(\lambda + 2)^2$$
• The eigenvalues are λ = 1 and λ = −2 (double).
• Step 2. Find three linearly independent eigenvectors of A.
$$\lambda = 1:\ \mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} \qquad \lambda = -2:\ \mathbf{v}_2 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix},\ \ \mathbf{v}_3 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
• You can check that {v1, v2, v3} is a linearly independent set.
• Step 3. Construct P from the vectors in step 2.
$$P = [\,\mathbf{v}_1\ \ \mathbf{v}_2\ \ \mathbf{v}_3\,] = \begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$$
• Step 4. Construct D from the corresponding eigenvalues.
$$D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix}$$

• To avoid computing $P^{-1}$, simply verify that $AP = PD$.
• Compute
$$AP = \begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 2 \\ -1 & -2 & 0 \\ 1 & 0 & -2 \end{bmatrix}$$
$$PD = \begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 2 \\ -1 & -2 & 0 \\ 1 & 0 & -2 \end{bmatrix}$$
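The whole of Example 6 can be cross-checked in SymPy (SymPy assumed; diagonalize returns P and D with A = P D P⁻¹):

```python
# Diagonalization check for Example 6.
import sympy as sp

A = sp.Matrix([[1, 3, 3], [-3, -5, -3], [3, 3, 1]])
P, D = A.diagonalize()
print(D)                       # diagonal entries -2, -2, 1 (up to ordering)
print(P * D * P.inv() == A)    # True
```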
