Linear Algebra


By Assoc. Prof. Mai Duc Thanh

Course Syllabus
 Lecturer: Assoc. Prof. Mai Duc Thanh
 Office: Room A2.610
 Phone: 091 323 899
 E-mail: mdthanh@hcmiu.edu.vn
 Contents
1. Systems of Linear Equations and Matrices

2. Determinants

3. Vector Spaces

4. Inner Product Spaces

5. Linear Transformations

6. Eigenvalues and Eigenvectors

 Grading
1. HW&CW Assignments: 20%

2. Midterm Test: 20%

3. Final Exam: 60%

 Textbook:
R. Hill, Elementary Linear Algebra and Applications, 3rd ed.
B. Kolman and David R. Hill, Elementary Linear Algebra with Applications, 9th edition

Assignments
 HW assignments will be given during the semester. These will typically
be simple questions from your textbook.
 CW assignments (quizzes) will be given in class. They are not announced
in advance.
 Late HW submissions lose 10% of the total points for each day they are
late and will not be accepted after one week.
 Extra marks E from 0-20 are given to those who solve exercises on the
board. Rule:
 Average of HW/Quiz + E = Score of HW/CWA
Chapter 1: Systems of Linear Equations and Matrices
1.1 Introduction to Systems of Linear Equations
1.2 Gaussian Elimination and Gauss-Jordan
Elimination
1.3 Operations with Matrices
1.4 Properties of Matrix Operations
1.5 The Inverse of a Matrix
1.6 Elementary Matrices
Introduction to Systems of Linear Equations
 The word linear comes from the word line.
 An equation for a line in the xy-plane has the form

      a1x + a2y = b

where a1, a2 and b are constants, with a1, a2 not both zero.
 In general, a linear equation in the n variables x1, x2, …, xn is one
that can be put in the form

      a1x1 + a2x2 + … + anxn = b

 We often deal with several linear equations at the same time:

      a11x1 + a12x2 + … + a1nxn = b1
      a21x1 + a22x2 + … + a2nxn = b2
        ...
      am1x1 + am2x2 + … + amnxn = bm

 A system with at least one solution is called consistent. Otherwise,
it is called inconsistent.

Matrix of Coefficients and Augmented Matrix
 We define the matrix of coefficients of the above linear system of
equations:

          [ a11  a12  …  a1n ]
      A = [ a21  a22  …  a2n ]
          [  …    …        … ]
          [ am1  am2  …  amn ]

 Each number of A is called an entry or element.
 The augmented matrix of the linear system of equations is obtained by
appending the column of right-hand sides to A:

          [ a11  a12  …  a1n | b1 ]
          [ a21  a22  …  a2n | b2 ]
          [  …    …        … | …  ]
          [ am1  am2  …  amn | bm ]
 A matrix of m rows and n columns is a rectangular array of numbers

          [ a11  a12  …  a1n ]
      A = [ a21  a22  …  a2n ]
          [  …    …        … ]
          [ am1  am2  …  amn ]

 aij, 1 ≤ i ≤ m, 1 ≤ j ≤ n, are the entries (or elements) of A; aij is
the entry in the i-th row and j-th column, or simply the (i, j)-entry
of A.

Pivots
 A pivot of a matrix is the first nonzero entry in a row (in
left-to-right order).
 Example 1:

          [ 0  1  0  2 ]
      A = [ 0  0  0  0 ]
          [ 3  6  4  0 ]
          [ 0  0  2  5 ]

The pivots are 1 (row 1), 3 (row 3) and 2 (row 4); row 2 has no pivot.

Matrices in row echelon form
 Definition 1. A matrix is in row echelon form if
a) all rows that contain only zeros are grouped at the bottom of the
matrix;
b) for each row that does not contain only zeros, the pivot appears
strictly to the right of the pivot of each row above it.
 Definition 2. A row-echelon form matrix is in row-reduced echelon form
if all pivots are 1's, and all other entries in any column containing a
pivot are zeros.
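The two definitions above translate directly into a small checker. Here is a minimal Python sketch (pure standard library; the function names are illustrative, not from the textbook):

```python
def pivot_col(row):
    """Index of the first nonzero entry of a row, or None for a zero row."""
    for j, v in enumerate(row):
        if v != 0:
            return j
    return None

def is_row_echelon(M):
    """Check conditions (a) and (b) of Definition 1."""
    seen_zero_row = False
    prev = -1  # pivot column of the previous nonzero row
    for row in M:
        p = pivot_col(row)
        if p is None:
            seen_zero_row = True            # zero rows must stay at the bottom
        else:
            if seen_zero_row or p <= prev:  # nonzero row below a zero row, or
                return False                # pivot not strictly to the right
            prev = p
    return True

assert is_row_echelon([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 2]])
assert not is_row_echelon([[3, 2, 0], [0, 0, 0], [0, 2, 1]])
```

The second assertion fails condition (a): a zero row sits between two nonzero rows.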
 Intuitively, a matrix is in row echelon form if it has the appearance
of a staircase pattern like

      [ * * * * * * * ]
      [ 0 * * * * * * ]
      [ 0 0 * * * * * ]
      [ 0 0 0 0 0 * * ]
      [ 0 0 0 0 0 0 * ]
      [ 0 0 0 0 0 0 0 ]

where the leading star in each nonzero row is a pivot (shown in blue on
the original slide) and the remaining stars may or may not be zeros.

Example 2
 Matrices in row echelon form:

      a) [ 5  1  2  0  2  0 ]        b) [ 0  0  0 ]
         [ 0  0  3  9  0  2 ]           [ 0  0  0 ]
         [ 0  0  0  1  1  2 ]
         [ 0  0  0  0  0  0 ]

      c) [ 0  0  5  2  1 ]           d) [ 2  1  2  5 ]
         [ 0  0  0  0  3 ]              [ 0  4  0  3 ]
                                        [ 0  0  2  3 ]

 Matrix in row-reduced echelon form:

      [ 1  0  0  1 ]
      [ 0  1  0  1 ]
      [ 0  0  1  2 ]

Example 3
 The following matrices are not in row echelon form:

      a) [ 2  1  2  5 ]              b) [ 0  0  0 ]
         [ 1  4  1  3 ]                 [ 0  0  2 ]
         [ 0  0  2  3 ]

      c) [ 3  2  0 ]                 d) [ 5  1  2  3  2  1 ]
         [ 0  0  0 ]                    [ 0  0  3  9  1  2 ]
         [ 0  2  1 ]                    [ 0  0  0  1  1  2 ]
                                        [ 0  0  0  2  0  1 ]
Example 4
 Suppose that the augmented matrix for a linear system has been reduced
to the given matrix in row echelon form and the variables are also
given. Solve the system.

      a) [ 2   1  -3 |   5 ]    Variables: x, y, z
         [ 0  -3   2 |  17 ]
         [ 0   0   5 | -10 ]

      b) [ 2   3  -1   5 | -2 ]    Variables: x, y, z, w
         [ 0   3   2  -1 | -2 ]
         [ 0   0   2   8 |  4 ]

Solution.
(a) The first step is to find the associated
linear system
The system is in triangular form and we

solve it by
backsubstitution
5z = -10, z = -2
-3y + 2(-2) = 17

-3y = 21, y=-7

2x + (-7)-3(-2)=5

2x=6, x=3

235
3 2 17
5 10
xyz
yz
z





2135
0 3 2 17
0 0 5 10
A




1.15
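The back-substitution just performed can be sketched as a short routine. This is a minimal illustration (names are my own, not the textbook's); the `fractions` module keeps the arithmetic exact:

```python
from fractions import Fraction

def back_substitute(U, b):
    """Solve an upper-triangular square system Ux = b by back-substitution."""
    n = len(b)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal entry
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = Fraction(b[i] - s, U[i][i])
    return x

U = [[2, 1, -3], [0, -3, 2], [0, 0, 5]]   # system (a) above
b = [5, 17, -10]
print(back_substitute(U, b))  # [Fraction(3, 1), Fraction(-7, 1), Fraction(-2, 1)]
```

The result matches the hand computation: x = 3, y = -7, z = -2.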

Solution
(b) First step: the associated system is

      2x + 3y -  z + 5w = -2
           3y + 2z -  w = -2
                2z + 8w =  4

 The variables x, y, and z corresponding to the pivots of the augmented
matrix are called leading variables (or dependent variables).
 The remaining variables are called free variables (or independent
variables).
 Second step: move all free variables to the right-hand side of the
equations:

      2x + 3y - z = -2 - 5w
          3y + 2z = -2 +  w
               2z =  4 - 8w
 We let w = t; the system is in triangular form similar to part (a):

      2x + 3y - z = -2 - 5t
          3y + 2z = -2 +  t
               2z =  4 - 8t

 It can be solved by back-substitution in the usual manner:

      2z = 4 - 8t, so z = 2 - 4t
      3y = -2 + t - 2(2 - 4t) = -6 + 9t, so y = -2 + 3t
      2x = -2 - 5t - 3(-2 + 3t) + (2 - 4t) = 6 - 18t, so x = 3 - 9t

 Solution is

      (x, y, z, w) = (3 - 9t, -2 + 3t, 2 - 4t, t), t any real number
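Reading the one-parameter family of solutions of part (b) as (x, y, z, w) = (3 - 9t, -2 + 3t, 2 - 4t, t), a short Python loop confirms that every value of the parameter satisfies all three equations of the system:

```python
def solution(t):
    """Family of solutions of part (b); w = t is the free variable."""
    return (3 - 9*t, -2 + 3*t, 2 - 4*t, t)

# Check several members of the family against the three equations.
for t in range(-3, 4):
    x, y, z, w = solution(t)
    assert 2*x + 3*y - z + 5*w == -2
    assert 3*y + 2*z - w == -2
    assert 2*z + 8*w == 4
```

Since the identities hold for every t checked (and, expanding symbolically, for all real t), each choice of t gives a distinct solution — the system has infinitely many.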
Example 5
 Suppose that the augmented matrix for a linear system has been reduced
to the given matrix in row echelon form and the variables are also
given. Solve the system.

      [ 3  -2   1   3   1 | 14 ]
  A = [ 0   0   2   5  -3 |  2 ]    Variables: x1, x2, x3, x4, x5
      [ 0   0   0   0   3 |  6 ]
      [ 0   0   0   0   0 |  0 ]

Solution
 The associated system is

      3x1 - 2x2 +  x3 + 3x4 +  x5 = 14
                  2x3 + 5x4 - 3x5 =  2
                              3x5 =  6
                                0 =  0

 The leading variables are x1, x3 and x5. So, x2 and x4 are free
variables.
 We discard the last equation and move all the free variables to the
right-hand side:

      3x1 + x3 + x5 = 14 + 2x2 - 3x4
           2x3 - 3x5 = 2 - 5x4
                 3x5 = 6
 We let x2 = s, x4 = t:

      3x1 + x3 + x5 = 14 + 2s - 3t
           2x3 - 3x5 = 2 - 5t
                 3x5 = 6

 Remark: if there are more free variables, we could have used
subscripted independent variables such as t1, t2, t3, …, tn.
 We now solve by backward substitution:

      3x5 = 6, so x5 = 2
      2x3 = 2 - 5t + 3(2) = 8 - 5t, so x3 = 4 - 5t/2
      3x1 = 14 + 2s - 3t - (4 - 5t/2) - 2 = 8 + 2s - t/2,
      so x1 = 8/3 + 2s/3 - t/6

 Solution is

      (x1, x2, x3, x4, x5) = (8/3 + 2s/3 - t/6, s, 4 - 5t/2, t, 2)

1.2 Gaussian Elimination and Gauss-Jordan Elimination
 Elementary row operations:
  (1) Interchange two rows:                    Ii,j : Ri ↔ Rj
  (2) Multiply a row by a nonzero constant:    Mi(k) : Ri → kRi (k ≠ 0)
  (3) Add a multiple of a row to another row:  Ai,j(k) : Rj → Rj + kRi
 Row equivalent: two matrices are said to be row equivalent if one can
be obtained from the other by a finite sequence of the above elementary
row operations.
 Every matrix can be transformed to a matrix in row echelon form using
(a finite number of) elementary row operations.
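The three elementary row operations can be sketched as small Python functions on a matrix stored as a list of rows (0-based indices; the names are illustrative, not from the textbook):

```python
def interchange(M, i, j):
    """I_{i,j}: swap rows i and j."""
    M = [row[:] for row in M]          # work on a copy
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, k):
    """M_i(k): multiply row i by a nonzero constant k."""
    assert k != 0
    M = [row[:] for row in M]
    M[i] = [k * v for v in M[i]]
    return M

def add_multiple(M, i, j, k):
    """A_{i,j}(k): add k times row i to row j."""
    M = [row[:] for row in M]
    M[j] = [a + k * b for a, b in zip(M[j], M[i])]
    return M

# A1,3(-2) applied to the third matrix of Ex 1 below:
M = [[1, 2, -4, 3], [0, 3, -2, -1], [2, 1, 5, -2]]
print(add_multiple(M, 0, 2, -2))
# [[1, 2, -4, 3], [0, 3, -2, -1], [0, -3, 13, -8]]
```

Each function returns a new matrix, so a sequence of operations can be chained without destroying the original.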
Ex 1: Elementary row operations

      [ 0  1  3  4 ]   I1,2      [ 1  2  0  3 ]
      [ 1  2  0  3 ]  ------->   [ 0  1  3  4 ]
      [ 2  3  4  1 ]             [ 2  3  4  1 ]

      [ 2 -4  6 -2 ]   M1(1/2)   [ 1 -2  3 -1 ]
      [ 1  3 -3  0 ]  ------->   [ 1  3 -3  0 ]
      [ 5 -2  1  2 ]             [ 5 -2  1  2 ]

      [ 1  2 -4  3 ]   A1,3(-2)  [ 1  2 -4  3 ]
      [ 0  3 -2 -1 ]  ------->   [ 0  3 -2 -1 ]
      [ 2  1  5 -2 ]             [ 0 -3 13 -8 ]
 Gaussian elimination:
The procedure for reducing a matrix to a row echelon form by performing
the three elementary row operations.
 Gauss-Jordan elimination:
The procedure for reducing a matrix to its row-reduced echelon form by
performing the three elementary row operations.
 Notes:
(1) Every matrix has a unique row-reduced echelon form.
(2) A row echelon form of a given matrix is not unique. (Different
sequences of elementary row operations can produce different row
echelon forms.)
Ex 2: Solve a system by the Gauss elimination method

      3x1 + 2x2 - 4x3 =  2
       x1 - 2x2 + 3x3 = -1
      2x1 - 5x2 + 3x3 =  2

Solution

      [ 3   2  -4 |  2 ]   I1,2     [ 1  -2   3 | -1 ]
  A = [ 1  -2   3 | -1 ]  ------->  [ 3   2  -4 |  2 ]
      [ 2  -5   3 |  2 ]            [ 2  -5   3 |  2 ]

   A1,2(-3), A1,3(-2)  [ 1  -2    3 | -1 ]   I2,3    [ 1  -2    3 | -1 ]
  ------------------>  [ 0   8  -13 |  5 ]  ------>  [ 0  -1   -3 |  4 ]
                       [ 0  -1   -3 |  4 ]           [ 0   8  -13 |  5 ]

   A2,3(8)   [ 1  -2    3 | -1 ]
  -------->  [ 0  -1   -3 |  4 ]
             [ 0   0  -37 | 37 ]

Back-substitution gives x3 = -1, x2 = -1, x1 = 0: the system has a
unique solution.
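The whole procedure of Ex 2 — eliminate downwards, then back-substitute — can be sketched as one routine. Assuming the system reads as reconstructed above, exact rational arithmetic (`fractions`) reproduces the unique solution; the function name is illustrative:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve Ax = b for a square invertible A: Gaussian elimination
    (first nonzero pivot, row interchanges) + back-substitution."""
    n = len(b)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]                       # interchange
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]  # eliminate below
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):                            # back-substitute
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[3, 2, -4], [1, -2, 3], [2, -5, 3]]
b = [2, -1, 2]
print(gauss_solve(A, b))  # [Fraction(0, 1), Fraction(-1, 1), Fraction(-1, 1)]
```

The output (0, -1, -1) agrees with the hand elimination above.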
 Example 3
Find solutions of the following system of linear equations:

       x1 -  2x2 -  x3 -  x4 = -4
      3x1 +   x2 +  x3 - 2x4 = 11
       x1 + 12x2 + 7x3 +  x4 = 31

 Solution

      [ 1  -2  -1  -1 | -4 ]   A1,2(-3), A1,3(-1)   [ 1  -2  -1  -1 | -4 ]
  A = [ 3   1   1  -2 | 11 ]  ------------------->  [ 0   7   4   1 | 23 ]
      [ 1  12   7   1 | 31 ]                        [ 0  14   8   2 | 35 ]

   A2,3(-2)   [ 1  -2  -1  -1 |  -4 ]
  -------->   [ 0   7   4   1 |  23 ]
              [ 0   0   0   0 | -11 ]

The last equation reads 0 = -11, which cannot be solved. So the system
has no solutions.
 Example 4
Using the Gauss elimination method, solve the following system:

      2x1 +  x2 + 5x3 = 1
       x1 - 3x2 + 6x3 = 2
      3x1 + 5x2 + 4x3 = 0

 Solution

      [ 2   1  5 | 1 ]   A1,2(-1/2), A1,3(-3/2)   [ 2    1     5 |    1 ]
  A = [ 1  -3  6 | 2 ]  ----------------------->  [ 0 -7/2   7/2 |  3/2 ]
      [ 3   5  4 | 0 ]                            [ 0  7/2  -7/2 | -3/2 ]

   A2,3(1)   [ 2    1    5 |   1 ]
  ------->   [ 0 -7/2  7/2 | 3/2 ]
             [ 0    0    0 |   0 ]

Let x3 = t. Then x2 = t - 3/7, and x1 = (1 - x2 - 5x3)/2 = 5/7 - 3t.
The system admits an infinite number of solutions.
Ex 5: Solve a system by the Gauss-Jordan elimination method

       x + 2y + 3z =  9
       x + 3y      =  4
      2x + 5y + 5z = 17

Solution:

      [ 1  2  3 |  9 ]   A1,2(-1), A1,3(-2)   [ 1  2   3 |  9 ]
  A = [ 1  3  0 |  4 ]  ------------------->  [ 0  1  -3 | -5 ]
      [ 2  5  5 | 17 ]                        [ 0  1  -1 | -1 ]

   A2,3(-1), M3(1/2)   [ 1  2   3 |  9 ]
  ------------------>  [ 0  1  -3 | -5 ]   (row echelon form)
                       [ 0  0   1 |  2 ]

   A3,2(3), A3,1(-3), A2,1(-2)   [ 1  0  0 | 1 ]
  ---------------------------->  [ 0  1  0 | 1 ]   (row-reduced echelon form)
                                 [ 0  0  1 | 2 ]

Hence x = 1, y = 1, z = 2.
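Gauss-Jordan elimination differs from Gaussian elimination only in that each pivot is scaled to 1 and then used to clear its entire column. A minimal sketch with exact arithmetic (illustrative names), checked against Ex 5 as reconstructed above:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix to row-reduced echelon form (Gauss-Jordan)."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                              # no pivot in this column
        M[r], M[piv] = M[piv], M[r]               # interchange
        M[r] = [v / M[r][c] for v in M[r]]        # make the pivot 1
        for i in range(rows):
            if i != r and M[i][c] != 0:           # clear the rest of the column
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

aug = [[1, 2, 3, 9], [1, 3, 0, 4], [2, 5, 5, 17]]
assert rref(aug) == [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 2]]
```

The final column of the reduced augmented matrix reads off the solution x = 1, y = 1, z = 2 directly, with no back-substitution step.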
Ex 6: Solve a system by the Gauss-Jordan elimination method (infinitely
many solutions)

      2x1 + 4x2 - 2x3 = 0
      3x1 + 5x2       = 1

Sol: for the augmented matrix,

  [ 2  4  -2  0 ]  M1(1/2)  [ 1  2  -1  0 ]  A1,2(-3)  [ 1   2  -1  0 ]
  [ 3  5   0  1 ] --------> [ 3  5   0  1 ] ---------> [ 0  -1   3  1 ]

   M2(-1)   [ 1  2  -1   0 ]   (row echelon form)
  ------->  [ 0  1  -3  -1 ]

   A2,1(-2)  [ 1  0   5   2 ]   (row-reduced echelon form)
  --------> [ 0  1  -3  -1 ]

 Then the system of linear equations becomes

      x1 + 5x3 =  2
      x2 - 3x3 = -1

 It can be further reduced to

      x1 = 2 - 5x3
      x2 = -1 + 3x3

 Let x3 = t; then

      x1 = 2 - 5t,  x2 = -1 + 3t,  x3 = t,   t ∈ R

 So, this system has infinitely many solutions.
Homogeneous systems of linear equations:
 A system of linear equations is said to be homogeneous if all the
constant terms are zero:

      a11x1 + a12x2 + a13x3 + … + a1nxn = 0
      a21x1 + a22x2 + a23x3 + … + a2nxn = 0
      a31x1 + a32x2 + a33x3 + … + a3nxn = 0
        ...
      am1x1 + am2x2 + am3x3 + … + amnxn = 0
 Trivial (obvious) solution: x1 = x2 = x3 = … = xn = 0
 Nontrivial solution: other solutions
Theorem 1.1
(1) Every homogeneous system of linear equations is consistent.
Furthermore, for a homogeneous system, exactly one of the following is
true:
 (a) The system has only the trivial solution.
 (b) The system has infinitely many nontrivial solutions in addition to
the trivial solution. (In other words, if a system has any nontrivial
solution, this system must have infinitely many nontrivial solutions.)
(2) If the homogeneous system has fewer equations than variables, then
it must have an infinite number of solutions.
Exercises
Solve the following systems by the Gauss elimination method:

      1)   x1 + 2x2 + 2x3 = 0        2)  3x1 + 5x2 + 2x3 = 2
           x1 + 2x2 + 3x3 = 2             x1 + 2x2 + 3x3 = 1
          3x1 + 2x2 + 4x3 = 2            3x1 + 2x2 + 4x3 = 1
Exercises
Solve the following systems by the Gauss-Jordan elimination method:

      3)  2x1 + 4x2 + 5x3 +  x4 = 1
          2x1 + 3x2 + 4x3 +  x4 = 3
          3x1 + 2x2 + 2x3 +  x4 = 1

      4)  3x1 + 2x2 + 3x3 +  x4 = 1
          2x1 + 4x2 + 2x3 + 2x4 = 2
          5x1 + 2x2 + 3x3 + 5x4 = 1
          3x1 + 2x2 + 4x3 +  x4 = 1
Quiz
1) Solve the following system by the Gauss elimination method:

      3x1 + 4x2 + 6x3 + 4x4 = 2
      2x1 + 5x2 + 3x3 + 5x4 = 2
      3x1 + 3x2 + 5x3 + 8x4 = 3
      2x1 + 4x2 + 3x3 + 6x4 = 3

2) Solve the following system by Gauss-Jordan elimination:

      2x1 + 3x2 + 5x3 = 2
      4x1 + 3x2 + 4x3 = 1
      5x1 + 4x2 +  x3 = 3

1.3 Operations with Matrices
 Matrix:

               [ a11  a12  a13  …  a1n ]
               [ a21  a22  a23  …  a2n ]
  A = [aij] =  [ a31  a32  a33  …  a3n ]   ∈ Mm×n
               [  …    …    …        … ]
               [ am1  am2  am3  …  amn ]

 (i, j)-th entry (or element): aij
 number of rows: m
 number of columns: n
 size: m×n
 Square matrix: m = n
 A matrix consisting of a single column is called a column vector. A
matrix consisting of a single row is called a row vector.

      [ 1 ]
  u = [ 2 ]        v = [ 1  3  4 ]
      [ 3 ]

 Let A be a square, n × n, matrix. Then its (main) diagonal consists of
the entries a11, a22, …, ann:

      [ a11  a12  …  a1n ]
  A = [ a21  a22  …  a2n ]
      [  …    …        … ]
      [ an1  an2  …  ann ]
Special Square Matrices: Triangular Matrices
 Upper triangular matrix: all entries below the main diagonal are zeros.
 Lower triangular matrix: all entries above the main diagonal are zeros.
 Diagonal matrix: all entries above and below the main diagonal are
zeros.
 Ex:

  [ a11  a12  a13 ]   [ a11   0    0  ]   [ a11   0    0  ]
  [  0   a22  a23 ]   [ a21  a22   0  ]   [  0   a22   0  ]
  [  0    0   a33 ]   [ a31  a32  a33 ]   [  0    0   a33 ]
  upper triangular    lower triangular        diagonal

For [ ] and [ ] , A a B b
ij m n ij m n 
Equal matrices: two matrices are equal if they
have the same size
(m × n) and entries corresponding to the same
position are equal

ABabimjn
if and only if
for 1 , 1 ij ij
Ex: Equality of matrices






cd
ab
AB
34
12

If , then 1, 2, 3, and 4
A B a b c d 
1.38
 Matrix addition:
If A = [aij]m×n and B = [bij]m×n, then

      A + B = [aij]m×n + [bij]m×n = [aij + bij]m×n = [cij]m×n = C

 Ex 2: Matrix addition

      [ -1  2 ]   [  1  3 ]   [ -1+1  2+3 ]   [  0  5 ]
      [  0  1 ] + [ -1  2 ] = [  0-1  1+2 ] = [ -1  3 ]

      [  1 ]   [ -1 ]   [ 0 ]
      [  3 ] + [ -3 ] = [ 0 ]
      [ -2 ]   [  2 ]   [ 0 ]
 Matrix subtraction:

      A - B = A + (-1)B

 Scalar multiplication:
If A = [aij]m×n and c is a constant scalar, then cA = [caij]m×n.
 Ex 3: Scalar multiplication and matrix subtraction

      [  1  2   4 ]        [  2  0  0 ]
  A = [ -3  0  -1 ]    B = [  1 -4  3 ]
      [  2  1   2 ]        [ -1  3  2 ]

Find (a) 3A, (b) -B, (c) 3A - B.
Sol:

         [ 3(1)   3(2)  3(4)  ]   [  3  6  12 ]
(a) 3A = [ 3(-3)  3(0)  3(-1) ] = [ -9  0  -3 ]
         [ 3(2)   3(1)  3(2)  ]   [  6  3   6 ]

         [ -2  0   0 ]
(b) -B = [ -1  4  -3 ]
         [  1 -3  -2 ]

             [  3  6  12 ]   [  2  0  0 ]   [   1  6  12 ]
(c) 3A - B = [ -9  0  -3 ] - [  1 -4  3 ] = [ -10  4  -6 ]
             [  6  3   6 ]   [ -1  3  2 ]   [   7  0   4 ]
 Matrix multiplication:
If A = [aij]m×n and B = [bij]n×p, then AB = [aij]m×n [bij]n×p = [cij]m×p = C,
where

      cij = Σ(k=1..n) aik bkj = ai1 b1j + ai2 b2j + … + ain bnj

and the size of C = AB is m × p.
※ The entry cij is obtained by calculating the sum of the
entry-by-entry products between the i-th row of A and the j-th column
of B.
※ The number of columns of A must equal the number of rows of B; when
they are equal, A and B are multipliable.
Ex 4: Find AB for

      [ -1  3 ]
  A = [  4 -2 ]  (3×2)      B = [ -3  2 ]  (2×2)
      [  5  0 ]                 [ -4  1 ]

Sol:

       [ (-1)(-3)+(3)(-4)   (-1)(2)+(3)(1) ]   [  -9   1 ]
  AB = [ (4)(-3)+(-2)(-4)   (4)(2)+(-2)(1) ] = [  -4   6 ]   (3×2)
       [ (5)(-3)+(0)(-4)    (5)(2)+(0)(1)  ]   [ -15  10 ]

 Note: (1) here BA is not multipliable (2×2 times 3×2 is undefined).
(2) Even when BA is multipliable, it can happen that AB ≠ BA.
 Matrix form of a system of linear equations in n variables:

      a11x1 + a12x2 + … + a1nxn = b1
      a21x1 + a22x2 + … + a2nxn = b2
        ...
      am1x1 + am2x2 + … + amnxn = bm

The m linear equations become the single matrix equation Ax = b:

      [ a11  a12  …  a1n ] [ x1 ]   [ b1 ]
      [ a21  a22  …  a2n ] [ x2 ] = [ b2 ]
      [  …    …        … ] [ …  ]   [ …  ]
      [ am1  am2  …  amn ] [ xn ]   [ bm ]
         A (m×n)          x (n×1)   b (m×1)
 Partitioned matrices:
A matrix can be partitioned into submatrices, e.g.

      [ a11  a12 | a13  a14 ]
  A = [ a21  a22 | a23  a24 ]  =  [ A11  A12 ]
      [ --------+---------- ]     [ A21  A22 ]
      [ a31  a32 | a33  a34 ]

or into its rows, or into its columns:

      [ r1 ]
  A = [ r2 ] ,      A = [ c1  c2  c3  c4 ]
      [ r3 ]

where each ri is a row vector and each cj is a column vector.
※ Partitioned matrices can be used to simplify equations or to obtain
new interpretations of equations (see the next slide).

 Write A = [c1  c2  …  cn] in terms of its columns and
x = (x1, x2, …, xn)^T. Then

      [ a11x1 + a12x2 + … + a1nxn ]
 Ax = [ a21x1 + a22x2 + … + a2nxn ]
      [              …            ]
      [ am1x1 + am2x2 + … + amnxn ]

         [ a11 ]      [ a12 ]           [ a1n ]
    = x1 [ a21 ] + x2 [ a22 ] + … + xn [ a2n ]
         [  …  ]      [  …  ]           [  …  ]
         [ am1 ]      [ am2 ]           [ amn ]

    = x1c1 + x2c2 + … + xncn

 So Ax can be viewed as the linear combination of the column vectors of
A with coefficients x1, x2, …, xn. You can derive the same result if
you perform the matrix multiplication for A expressed in column vectors
and x directly.
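The column interpretation of Ax can be verified numerically. In this sketch the 3×2 matrix and the vector are made-up illustrative data, not from the slides:

```python
A = [[1, 2], [3, 4], [5, 6]]
x = [10, 100]

# Direct product Ax, row by row
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]

# Same thing as x1*c1 + x2*c2, where c_j is the j-th column of A
cols = [[A[i][j] for i in range(3)] for j in range(2)]
combo = [x[0] * cols[0][i] + x[1] * cols[1][i] for i in range(3)]

assert Ax == combo == [210, 430, 650]
```

Both computations produce the same vector, as the identity Ax = x1c1 + … + xncn promises.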
 Diagonal matrix: a square matrix in which nonzero elements are found
only on the principal diagonal:

                              [ d1  0  …  0  ]
  A = diag(d1, d2, …, dn) =   [ 0   d2 …  0  ]   ∈ Mn×n
                              [ …   …      … ]
                              [ 0   0  …  dn ]

※ diag(d1, d2, …, dn) is the usual notation for a diagonal matrix.
 Trace: if A = [aij]n×n, then Tr(A) = a11 + a22 + … + ann.

1.4 Properties of Matrix Operations
 Three basic matrix operators:
(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication
 Zero matrix 0m×n: the m×n matrix whose entries are all zeros.
 Identity matrix of order n, In: the n×n matrix with ones on the main
diagonal and zeros elsewhere.
 Properties of matrix addition and scalar multiplication:
If A, B, C ∈ Mm×n and c, d are scalars, then
(1) A+B = B+A              (commutative property of matrix addition)
(2) A+(B+C) = (A+B)+C      (associative property of matrix addition)
(3) (cd)A = c(dA)          (associative property of scalar multiplication)
(4) 1A = A                 (multiplicative identity property; 1 is the
                            multiplicative identity for all matrices)
(5) c(A+B) = cA + cB       (distributive property of scalar
                            multiplication over matrix addition)
(6) (c+d)A = cA + dA       (distributive property of scalar
                            multiplication over real-number addition)
 Notes: all the above properties are very similar to the counterpart
properties for real numbers.

 Properties of zero matrices:
If A ∈ Mm×n and c is a scalar, then
(1) A + 0m×n = A
(2) A + (-A) = 0m×n
(3) cA = 0m×n  ⟹  c = 0 or A = 0m×n
 Notes: all the above properties are very similar to the counterpart
properties for the real number 0.
※ So, 0m×n is also called the additive identity for the set of all m×n
matrices.
※ Thus, -A is called the additive inverse of A.

 Properties of matrix multiplication:
(1) A(BC) = (AB)C          (associative property of matrix multiplication)
(2) A(B+C) = AB + AC       (left distributive property of matrix
                            multiplication over matrix addition)
(3) (A+B)C = AC + BC       (right distributive property of matrix
                            multiplication over matrix addition)
(4) c(AB) = (cA)B = A(cB)  (associative property of scalar and matrix
                            multiplication)
 Properties of the identity matrix:
If A ∈ Mm×n, then (1) AIn = A
                  (2) ImA = A
※ For real numbers, properties (2) and (3) are the same since the order
of multiplication of real numbers is irrelevant.
※ Real-number multiplication satisfies all the above properties and, in
addition, a commutative property, i.e., cd = dc.
※ The role of the real number 1 is similar to that of the identity
matrix. However, 1 is unique among the real numbers, while there are
many identity matrices of different sizes.
Ex 3: Matrix multiplication is associative
Calculate (AB)C and A(BC) for

                                          [ -1  0 ]
  A = [ 1  -2 ]    B = [ 1   0  2 ]   C = [  3  1 ]
      [ 2  -1 ]        [ 3  -2  1 ]       [  2  4 ]

Sol:

          [ -5  4  0 ] [ -1  0 ]   [ 17   4 ]
  (AB)C = [ -1  2  3 ] [  3  1 ] = [ 13  14 ]
                       [  2  4 ]

          [ 1  -2 ] [  3  8 ]   [ 17   4 ]
  A(BC) = [ 2  -1 ] [ -7  2 ] = [ 13  14 ]
 Equipped with the properties of matrix multiplication, we can prove:
if a homogeneous system has any nontrivial solution, this system must
have infinitely many nontrivial solutions.
 Suppose there is a nonzero solution x of the homogeneous system, i.e.
Ax = 0 with x ≠ 0. Then for any real number t, it is straightforward to
show that tx is another solution:

      A(tx) = t(Ax) = t0 = 0

Finally, since t can be any real number, it can be concluded that there
are infinitely many solutions for this homogeneous system.
 Definition of A^k: repeated multiplication of a square matrix:

      A^1 = A,  A^2 = AA,  …,  A^k = AA…A   (k matrices of A)

 Properties for A^k:
(1) A^j A^k = A^(j+k)
(2) (A^j)^k = A^(jk)
where j and k are nonnegative integers and A^0 is assumed to be I.
 For diagonal matrices:

      D = diag(d1, d2, …, dn)  ⟹  D^k = diag(d1^k, d2^k, …, dn^k)
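The rule D^k = diag(d1^k, …, dn^k) is easy to confirm by repeated multiplication. A minimal sketch (illustrative names and data):

```python
def diag(entries):
    """Build a diagonal matrix from its diagonal entries."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matpow(A, k):
    """A^k by repeated multiplication; A^0 is the identity."""
    n = len(A)
    R = diag([1] * n)                      # identity matrix
    for _ in range(k):
        R = [[sum(R[i][t] * A[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
    return R

D = diag([2, 3, 5])
assert matpow(D, 3) == diag([8, 27, 125])   # D^3 = diag(2^3, 3^3, 5^3)
```

Powers of a diagonal matrix never mix entries, which is why each diagonal entry is simply raised to the k-th power.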
 Transpose of a matrix:

         [ a11  a12  …  a1n ]                 [ a11  a21  …  am1 ]
  If A = [ a21  a22  …  a2n ] ∈ Mm×n,   A^T = [ a12  a22  …  am2 ] ∈ Mn×m
         [  …    …        … ]                 [  …    …        … ]
         [ am1  am2  …  amn ]                 [ a1n  a2n  …  amn ]

※ The transpose operation moves the entry aij (originally at position
(i, j)) to position (j, i).
※ Note that after performing the transpose operation, A^T has size n×m.



Ex 8: Find the transpose of the following matrices:

                           [ 1  2  3 ]            [ 0   1 ]
  (a) A = [ 2 ]    (b) A = [ 4  5  6 ]    (c) A = [ 2   4 ]
          [ 8 ]            [ 7  8  9 ]            [ 1  -1 ]

Sol:
  (a) A^T = [ 2  8 ]

            [ 1  4  7 ]
  (b) A^T = [ 2  5  8 ]
            [ 3  6  9 ]

  (c) A^T = [ 0  2   1 ]
            [ 1  4  -1 ]

(1) (AT )T A


(2) (AB)T AT BT
(3) (cA)T c(AT )
(4) (AB)T BT AT
Properties of transposes:
※ Properties (2) and (4) can be generalized to
the sum or
product of multiple matrices. For example,
(A+B+C)T =
AT+BT+CT and (ABC)T = CTBTAT
1.58
Ex 9: Verify that (AB)^T and B^T A^T are equal for

      [  2   1  -2 ]        [ 3   1 ]
  A = [ -1   0   3 ]    B = [ 2  -1 ]
      [  0  -2   1 ]        [ 3   0 ]

Sol:

           [  2   1 ]^T
  (AB)^T = [  6  -1 ]    = [ 2   6  -1 ]
           [ -1   2 ]      [ 1  -1   2 ]

            [ 3  2  3 ] [  2  -1   0 ]   [ 2   6  -1 ]
  B^T A^T = [ 1 -1  0 ] [  1   0  -2 ] = [ 1  -1   2 ]
                        [ -2   3   1 ]
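Property (4), (AB)^T = B^T A^T, can be re-checked mechanically with the matrices of Ex 9 (helper names are illustrative):

```python
def transpose(M):
    """Swap rows and columns: entry (i, j) moves to (j, i)."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def matmul(A, B):
    # zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1, -2], [-1, 0, 3], [0, -2, 1]]
B = [[3, 1], [2, -1], [3, 0]]

lhs = transpose(matmul(A, B))
rhs = matmul(transpose(B), transpose(A))
assert lhs == rhs == [[2, 6, -1], [1, -1, 2]]
```

Note the reversal of the order of the factors on the right-hand side; `matmul(transpose(A), transpose(B))` would not even have compatible sizes here.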
 Symmetric matrix: a square matrix A is symmetric if A = A^T.
 Skew-symmetric matrix: a square matrix A is skew-symmetric if
A^T = -A.
 Ex: If

      [ 1  2  3 ]
  A = [ a  4  5 ]
      [ b  c  6 ]

is symmetric, find a, b, c.
 Sol:

        [ 1  a  b ]
  A^T = [ 2  4  c ]
        [ 3  5  6 ]

A = A^T gives a = 2, b = 3, c = 5.



 Ex: If

      [ 0  1  2 ]
  A = [ a  0  3 ]
      [ b  c  0 ]

is skew-symmetric, find a, b, c.
 Sol:

        [ 0  a  b ]
  A^T = [ 1  0  c ]
        [ 2  3  0 ]

A^T = -A gives a = -1, b = -2, c = -3.
 Note: AA^T must be symmetric.
 Pf: (AA^T)^T = (A^T)^T A^T = AA^T, so AA^T is symmetric.
※ Here the matrix A can be of any size, i.e., it is not necessary for A
to be a square matrix.
※ In fact, AA^T must be a square matrix.
 Before finishing this section, two properties will be discussed which
hold for real numbers but not for matrices: the first is the
commutative property of multiplication and the second is the
cancellation law.
 Real numbers: ab = ba (commutative property of real-number
multiplication).
 Matrices: in general AB ≠ BA. If A is m×n and B is n×p (so that AB is
defined), there are three possibilities for BA:
(1) If p ≠ m, then AB is defined but BA is undefined.
(2) If p = m but m ≠ n, then AB ∈ Mm×m and BA ∈ Mn×n (the sizes are not
the same).
(3) If p = m = n, then AB ∈ Mn×n and BA ∈ Mn×n (the sizes are the same,
but the resulting matrices may not be equal).





Ex 4: Show that AB and BA are not equal for the matrices

  A = [ 1   3 ]    and    B = [ 2  -1 ]
      [ 2  -1 ]               [ 0   2 ]

Sol:

  AB = [ 1   3 ] [ 2  -1 ] = [ 2   5 ]
       [ 2  -1 ] [ 0   2 ]   [ 4  -4 ]

  BA = [ 2  -1 ] [ 1   3 ] = [ 0   7 ]
       [ 0   2 ] [ 2  -1 ]   [ 4  -2 ]

  AB ≠ BA   (noncommutativity of matrix multiplication)
Notes:
(1) A+B = B+A (the commutative law holds for matrix addition).
(2) AB ≠ BA in general (matrix multiplication does not satisfy the
commutative law, so the order of matrix multiplication is very
important).
※ This property is different from the corresponding property for the
multiplication of real numbers, for which the order of multiplication
makes no difference.
 Real numbers:

      ac = bc and c ≠ 0  ⟹  a = b   (cancellation law for real numbers)

 Matrices: suppose AC = BC and C ≠ 0 (C is not a zero matrix). Then
(1) if C is invertible, A = B;
(2) if C is not invertible, it is possible that A ≠ B (the cancellation
law is not necessarily valid).












Ex 5: An example in which cancellation is not valid
Show that AC = BC for

  A = [ 1  3 ],   B = [ 2  4 ],   C = [  1  -2 ]
      [ 0  1 ]        [ 2  3 ]        [ -1   2 ]

Sol:

  AC = [ 1  3 ] [  1  -2 ] = [ -2  4 ]
       [ 0  1 ] [ -1   2 ]   [ -1  2 ]

  BC = [ 2  4 ] [  1  -2 ] = [ -2  4 ]
       [ 2  3 ] [ -1   2 ]   [ -1  2 ]

So, although AC = BC, A ≠ B.

 Exercises
1) Find 2A, A+B, AB, AB^T, A^T B if possible, or state "undefined" and
explain why, for

  A = [ 3  4  1 ],    B = [ 2  1  0 ]
      [ 2  0  1 ]         [ 3  2  4 ]

2) Find AB, BA, B^T A^T if possible, or state "undefined" and explain
why, for

      [ 1  3  0 ]         [ 2  1 ]
  A = [ 2  4  1 ],    B = [ 2  3 ]
      [ 3  2  5 ]         [ 1  2 ]

1.5 The Inverse of a Matrix
 Inverse matrix: consider A ∈ Mn×n. If there exists a matrix B ∈ Mn×n
such that AB = BA = In, then
(1) A is invertible (or nonsingular);
(2) B is the inverse of A.
 Note: a square matrix that does not have an inverse is called
noninvertible (or singular).
Theorem 1.2: The inverse of a matrix is unique
If B and C are both inverses of the matrix A, then B = C.
Pf:
      AB = I
      C(AB) = CI
      (CA)B = C     (associative property of matrix multiplication and
                     the property of the identity matrix)
      IB = C
      B = C
Consequently, the inverse of a matrix is unique.
 Notes:
(1) The inverse of A is denoted by A^(-1).
(2) AA^(-1) = A^(-1)A = I

Find the inverse of a matrix by the Gauss-Jordan elimination:

      [ A | I ]  --(Gauss-Jordan elimination)-->  [ I | A^(-1) ]

Ex 2: Find the inverse of the matrix

  A = [  1   4 ]
      [ -1  -3 ]

Sol: we look for a matrix X with AX = I:

  [  1   4 ] [ x11  x12 ] = [ 1  0 ]
  [ -1  -3 ] [ x21  x22 ]   [ 0  1 ]

By equating corresponding entries, we get two systems of linear
equations:

  (1)   x11 + 4x21 = 1        (2)   x12 + 4x22 = 0
       -x11 - 3x21 = 0             -x12 - 3x22 = 1
These two systems of linear equations have the same coefficient matrix,
which is exactly the matrix A, so we perform the Gauss-Jordan
elimination on A with the same row operations:

  (1)  [  1   4 | 1 ]   A1,2(1), A2,1(-4)   [ 1  0 | -3 ]
       [ -1  -3 | 0 ]  ------------------>  [ 0  1 |  1 ]
       so x11 = -3, x21 = 1

  (2)  [  1   4 | 0 ]   A1,2(1), A2,1(-4)   [ 1  0 | -4 ]
       [ -1  -3 | 1 ]  ------------------>  [ 0  1 |  1 ]
       so x12 = -4, x22 = 1

Thus

  X = A^(-1) = [ -3  -4 ]
               [  1   1 ]
 Note: instead of solving the two systems separately, you can solve
them simultaneously by appending the identity matrix to the right of
the coefficient matrix:

  [ A | I ] = [  1   4 | 1  0 ]   A1,2(1), A2,1(-4)   [ 1  0 | -3  -4 ]
              [ -1  -3 | 0  1 ]  ------------------>  [ 0  1 |  1   1 ]
                                                      = [ I | A^(-1) ]

(the third column is the solution for x11, x21; the fourth column is
the solution for x12, x22).
※ If A cannot be row-reduced to I, then A is singular.






 Ex 3: Find the inverse of the matrix

      [ 1  -1   0 ]
  A = [ 1   0  -1 ]
      [ 6  -2  -3 ]

Sol:

            [ 1  -1   0 | 1  0  0 ]   A1,2(-1)   [ 1  -1   0 |  1  0  0 ]
  [ A|I ] = [ 1   0  -1 | 0  1  0 ]  -------->   [ 0   1  -1 | -1  1  0 ]
            [ 6  -2  -3 | 0  0  1 ]              [ 6  -2  -3 |  0  0  1 ]

   A1,3(-6)  [ 1  -1   0 |  1  0  0 ]   A2,3(-4)  [ 1  -1   0 |  1   0  0 ]
  -------->  [ 0   1  -1 | -1  1  0 ]  -------->  [ 0   1  -1 | -1   1  0 ]
             [ 0   4  -3 | -6  0  1 ]             [ 0   0   1 | -2  -4  1 ]

   A3,2(1)   [ 1  -1   0 |  1   0  0 ]   A2,1(1)  [ 1  0  0 | -2  -3  1 ]
  -------->  [ 0   1   0 | -3  -3  1 ]  -------->  [ 0  1  0 | -3  -3  1 ]
             [ 0   0   1 | -2  -4  1 ]             [ 0  0  1 | -2  -4  1 ]
                                                   = [ I | A^(-1) ]

So the matrix A is invertible, and its inverse is

           [ -2  -3  1 ]
  A^(-1) = [ -3  -3  1 ]
           [ -2  -4  1 ]

 Check it by yourselves: AA^(-1) = A^(-1)A = I.
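The [A | I] → [I | A^(-1)] procedure is the same Gauss-Jordan loop applied to a wider matrix. A minimal sketch with exact arithmetic (illustrative names), checked against Ex 3 as reconstructed above:

```python
from fractions import Fraction

def inverse(A):
    """Invert A by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # append the identity to the right of A
    M = [[Fraction(v) for v in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            raise ValueError("matrix is singular")   # cannot reach I
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]            # pivot -> 1
        for r in range(n):
            if r != c and M[r][c] != 0:               # clear the column
                M[r] = [a - M[r][c] * p for a, p in zip(M[r], M[c])]
    return [row[n:] for row in M]                     # right half is A^(-1)

A = [[1, -1, 0], [1, 0, -1], [6, -2, -3]]
assert inverse(A) == [[-2, -3, 1], [-3, -3, 1], [-2, -4, 1]]
```

The `ValueError` branch mirrors the note in Ex 2: if A cannot be row-reduced to I, it is singular.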
Theorem 1.3: Properties of inverse matrices
If A is an invertible matrix, k is a positive integer, and c is a
scalar, then
(1) A^(-1) is invertible and (A^(-1))^(-1) = A
(2) A^k is invertible and (A^k)^(-1) = (A^(-1))^k
(3) cA is invertible, given c ≠ 0, and (cA)^(-1) = (1/c)A^(-1)
(4) A^T is invertible and (A^T)^(-1) = (A^(-1))^T
Pf: in each case, check that the proposed matrix is an inverse:
(1) A^(-1)A = I; (2) (A^(-1))^k A^k = I; (3) ((1/c)A^(-1))(cA) = I;
(4) (A^(-1))^T A^T = (A A^(-1))^T = I.
← Here "T" is not a power; it denotes the transpose operation.
Theorem 1.4: The inverse of a product
If A and B are invertible matrices of order n, then AB is invertible
and (AB)^(-1) = B^(-1)A^(-1).
Pf:

  (AB)(B^(-1)A^(-1)) = A(BB^(-1))A^(-1) = AIA^(-1) = AA^(-1) = I

(associative property of matrix multiplication). Thus, if AB is
invertible, its inverse is B^(-1)A^(-1).
 Note:
(1) It can be generalized to the product of multiple matrices:

  (A1 A2 A3 … An)^(-1) = An^(-1) … A3^(-1) A2^(-1) A1^(-1)

(2) It is similar to the result for the transpose of a product of
multiple matrices:

  (A1 A2 A3 … An)^T = An^T … A3^T A2^T A1^T
Theorem 1.5: Cancellation properties for matrix multiplication
If C is an invertible matrix, then the following properties hold:
(1) If AC = BC, then A = B (right cancellation property)
(2) If CA = CB, then A = B (left cancellation property)
Pf (of (1)):
      AC = BC
      (AC)C^(-1) = (BC)C^(-1)   (C is invertible, so C^(-1) exists)
      A(CC^(-1)) = B(CC^(-1))   (associative property of matrix
                                 multiplication)
      AI = BI
      A = B
 Note: if C is not invertible, then cancellation is not valid.
Theorem 1.6: Systems of equations with a unique solution
If A is an invertible matrix, then the system of linear equations
Ax = b has a unique solution given by x = A^(-1)b.
Pf:
      Ax = b
      A^(-1)Ax = A^(-1)b   (A is nonsingular)
      Ix = A^(-1)b
      x = A^(-1)b
If x1 and x2 were two solutions of the equation Ax = b, then
Ax1 = b = Ax2, and the left cancellation property gives x1 = x2. So
this solution is unique.
Ex 8:
Use an inverse matrix to solve each system:

 (a) 2x + 3y + z = -1    (b) 2x + 3y + z = 4    (c) 2x + 3y + z = 0
     3x + 3y + z =  1        3x + 3y + z = 8        3x + 3y + z = 0
     2x + 4y + z = -2        2x + 4y + z = 5        2x + 4y + z = 0

Sol: all three systems share the coefficient matrix

      [ 2  3  1 ]
  A = [ 3  3  1 ]
      [ 2  4  1 ]

Gauss-Jordan elimination on [ A | I ] gives

           [ -1   1   0 ]
  A^(-1) = [ -1   0   1 ]
           [  6  -2  -3 ]
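Once A^(-1) is known, each of the three systems is solved by a single matrix-vector product. A quick check using the inverse computed above (as reconstructed; the helper name is illustrative):

```python
# A^(-1) for the shared coefficient matrix of Ex 8
A_inv = [[-1, 1, 0], [-1, 0, 1], [6, -2, -3]]

def apply(M, v):
    """Matrix-vector product Mv."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# One inverse, three right-hand sides
assert apply(A_inv, [-1, 1, -2]) == [2, -1, -2]   # (a)
assert apply(A_inv, [4, 8, 5]) == [4, 1, -7]      # (b)
assert apply(A_inv, [0, 0, 0]) == [0, 0, 0]       # (c) trivial solution
```

This is exactly the convenience noted on the slide: the expensive step (finding A^(-1)) is done once, and each new right-hand side costs only one multiplication.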
(a)  x = A^(-1)b = [ -1   1   0 ] [ -1 ]   [  2 ]
                   [ -1   0   1 ] [  1 ] = [ -1 ]
                   [  6  -2  -3 ] [ -2 ]   [ -2 ]

(b)  x = A^(-1)b = [ -1   1   0 ] [ 4 ]   [  4 ]
                   [ -1   0   1 ] [ 8 ] = [  1 ]
                   [  6  -2  -3 ] [ 5 ]   [ -7 ]

(c)  x = A^(-1)b = [ -1   1   0 ] [ 0 ]   [ 0 ]
                   [ -1   0   1 ] [ 0 ] = [ 0 ]
                   [  6  -2  -3 ] [ 0 ]   [ 0 ]

※ This technique is very convenient when you face the problem of
solving several systems with the same coefficient matrix: once you have
A^(-1), you simply need to perform a matrix multiplication to solve for
the unknown variables.
※ If you solve only one system, the computational effort is less for
Gaussian elimination plus back-substitution, or for Gauss-Jordan
elimination.

Exercises

            [ 3  4  2 ]
1) Let A =  [ 1  5  3 ] .   Find A^(-1) and (A^T)^(-1).
            [ 1  2  4 ]

            [ 1  3  8 ]
2) Let A =  [ 2  4  3 ] .   Find A^(-1) and (A^(-1))^T.
            [ 3  6  5 ]

1.6 Elementary Matrices
 Elementary matrix: an n×n matrix is called an elementary matrix if it
can be obtained from the identity matrix I by a single elementary row
operation.
 Three types of elementary matrices:
(1) E = Ii,j(I)              (interchange two rows)
(2) E = Mi(k)(I), k ≠ 0      (multiply a row by a nonzero constant)
(3) E = Ai,j(k)(I)           (add a multiple of a row to another row)
 Note:
1. Perform only a single elementary row operation on the identity
matrix.
2. Since the identity matrix is a square matrix, elementary matrices
must be square matrices.
Ex 1: Elementary matrices and nonelementary matrices
(a) [ 1 0 0       Yes  (M_2^(3)(I))
      0 3 0
      0 0 1 ]
(b) [ 1 0 0       No  (not square)
      0 1 0 ]
(c) [ 1 0 0       No  (row multiplication must be
      0 1 0            with a nonzero constant)
      0 0 0 ]
(d) [ 1 0 0       Yes  (I_{2,3}(I))
      0 0 1
      0 1 0 ]
(e) [ 1 0         Yes  (A_{1,2}^(2)(I))
      2 1 ]
(f) [ 1 0  0      No  (uses two elementary row
      0 2  0           operations, M_2^(2) and M_3^(-1))
      0 0 -1 ]
1.83
Theorem 1.7: Representing elementary row
operations
Let E be the elementary matrix obtained by
performing an elementary row operation on I_m.
If that same elementary row operation is
performed on an m×n matrix A, then the resulting
matrix equals the product EA
(Note that the elementary matrix must be
multiplied on the left side of the matrix A)
Note: Elementary matrices are useful because they
enable you to use matrix multiplication to
perform elementary row operations
1.84
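Theorem 1.7 is easy to illustrate numerically. In this sketch (an arbitrary 3×4 matrix chosen for illustration, not from the slides), left-multiplying A by the elementary matrix for a row interchange produces exactly the row-swapped A:

```python
import numpy as np

# E: elementary matrix obtained by interchanging rows 1 and 2 of I3
E = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

# An arbitrary 3x4 matrix, chosen only for illustration
A = np.array([[1.0,  2.0,  3.0,  4.0],
              [5.0,  6.0,  7.0,  8.0],
              [9.0, 10.0, 11.0, 12.0]])

# Performing the same row operation directly on A...
A_swapped = A[[1, 0, 2], :]

# ...gives the same matrix as the left multiplication EA
same = np.array_equal(E @ A, A_swapped)  # True
```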
Ex 3: Using elementary matrices
Find a sequence of elementary matrices that can
be used to rewrite the matrix A in its row-echelon
form
A = [ 0  1  3  5
      1 -3  0  2
      2 -6  2  0 ]
Sol:
E1 = I_{1,2}(I3) = [ 0 1 0
                     1 0 0
                     0 0 1 ]
E2 = A_{1,3}^(-2)(I3) = [  1 0 0
                           0 1 0
                          -2 0 1 ]
E3 = M_3^(1/2)(I3) = [ 1 0  0
                       0 1  0
                       0 0 1/2 ]
1.85
A1 = E1 A = I_{1,2}(A) = [ 1 -3 0 2
                           0  1 3 5
                           2 -6 2 0 ]
A2 = E2 A1 = A_{1,3}^(-2)(A1) = [ 1 -3 0  2
                                  0  1 3  5
                                  0  0 2 -4 ]
A3 = E3 A2 = M_3^(1/2)(A2) = [ 1 -3 0  2
                               0  1 3  5
                               0  0 1 -2 ] = B
or B = E3(E2(E1 A)) = M_3^(1/2)(A_{1,3}^(-2)(I_{1,2}(A))),
the row-echelon form of A
※ This example demonstrates that we can obtain the
same effect as performing elementary row operations
by multiplying the corresponding elementary
matrices on the left side of the target matrix A
1.86
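The chain B = E3 E2 E1 A from Ex 3 can be verified numerically (a NumPy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[0.0,  1.0, 3.0, 5.0],
              [1.0, -3.0, 0.0, 2.0],
              [2.0, -6.0, 2.0, 0.0]])

E1 = np.array([[0.0, 1.0, 0.0],   # I_{1,2}: interchange rows 1 and 2
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
E2 = np.array([[ 1.0, 0.0, 0.0],  # A_{1,3}^(-2): add -2 times row 1 to row 3
               [ 0.0, 1.0, 0.0],
               [-2.0, 0.0, 1.0]])
E3 = np.array([[1.0, 0.0, 0.0],   # M_3^(1/2): multiply row 3 by 1/2
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.5]])

B = E3 @ E2 @ E1 @ A  # row-echelon form [[1,-3,0,2],[0,1,3,5],[0,0,1,-2]]
```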
Exercise: Using elementary matrices
Find a sequence of elementary matrices that can
be used to rewrite the matrix A in its row-echelon
form
A = [ 2 3 4 1
      3 2 5 2
      4 2 1 4 ]
1.87
Row-equivalent:
Matrix B is row-equivalent to A if there exists a
finite number of elementary matrices such that
B = E_k E_{k-1} ... E2 E1 A
※ This row-equivalence property follows from
performing a finite sequence of elementary row
operations
Theorem 1.8: Elementary matrices are invertible
If E is an elementary matrix, then E^(-1) exists
and is still an elementary matrix (of the same
type)
1.88
 Ex: Elementary matrix -> Inverse matrix
E1 = I_{1,2}(I) = [ 0 1 0
                    1 0 0
                    0 0 1 ]
E1^(-1) = I_{1,2}(I): the inverse still corresponds
to I_{1,2}(I), since a row interchange undoes
itself
E2 = A_{1,3}^(2)(I) = [ 1 0 0
                        0 1 0
                        2 0 1 ]
E2^(-1) = A_{1,3}^(-2)(I) = [  1 0 0
                               0 1 0
                              -2 0 1 ]
The corresponding elementary row operation for
E2^(-1) is still to add a multiple of row 1 to
row 3, but the multiplying scalar changes from
2 to -2
E3 = M_3^(3)(I) = [ 1 0 0
                    0 1 0
                    0 0 3 ]
E3^(-1) = M_3^(1/3)(I) = [ 1 0  0
                           0 1  0
                           0 0 1/3 ]
The corresponding elementary row operation for
E3^(-1) is still to multiply row 3 by a nonzero
constant, but the constant changes from 3 to 1/3
Formula for the inverse of an elementary matrix:
I_{i,j}(I)^(-1) = I_{i,j}(I),
M_i^(k)(I)^(-1) = M_i^(1/k)(I),
A_{i,j}^(k)(I)^(-1) = A_{i,j}^(-k)(I)
1.89
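A quick numerical check of these three inverse formulas (a NumPy sketch, using the matrices from the example above):

```python
import numpy as np

# The three elementary matrices from the example above...
E1 = np.array([[0.0, 1.0, 0.0],  # I_{1,2}(I): swap rows 1 and 2
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
E2 = np.array([[1.0, 0.0, 0.0],  # A_{1,3}^(2)(I): add 2 times row 1 to row 3
               [0.0, 1.0, 0.0],
               [2.0, 0.0, 1.0]])
E3 = np.array([[1.0, 0.0, 0.0],  # M_3^(3)(I): multiply row 3 by 3
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 3.0]])

# ...and their claimed inverses (same type, scalar adjusted)
E1_inv = E1                                       # a swap undoes itself
E2_inv = np.array([[ 1.0, 0.0, 0.0],              # scalar 2 -> -2
                   [ 0.0, 1.0, 0.0],
                   [-2.0, 0.0, 1.0]])
E3_inv = np.array([[1.0, 0.0, 0.0],               # constant 3 -> 1/3
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0 / 3.0]])

I = np.eye(3)
ok = (np.allclose(E1 @ E1_inv, I)
      and np.allclose(E2 @ E2_inv, I)
      and np.allclose(E3 @ E3_inv, I))  # True
```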
Theorem 1.9: A property of invertible matrices
A square matrix A is invertible if and only if it
can be written as the product of elementary
matrices (we can further obtain that an invertible
matrix is row-equivalent to the identity matrix)
Pf:
(<=) Assume that A is the product of elementary
matrices
1. Every elementary matrix is invertible (Thm. 1.8)
2. The product of invertible matrices is
invertible (Thm. 1.4)
Thus A is invertible
(=>) If A is invertible, Ax = 0 has only one
solution (Thm. 1.6), and this solution must be the
trivial solution, so A is row-equivalent to the
identity matrix:
[A | 0] --G.-J. E.-> [I | 0], i.e. E_k ... E3 E2 E1 A = I
=> A = E1^(-1) E2^(-1) E3^(-1) ... E_k^(-1) I
     = E1^(-1) E2^(-1) E3^(-1) ... E_k^(-1)
Thus A can be written as the product of elementary
matrices
1.90
Ex 4:
Find a sequence of elementary matrices whose
product is
A = [ 3 8
      1 2 ]
Sol:
[ 3 8   --I_{1,2}-->  [ 1 2   --A_{1,2}^(-3)-->  [ 1 2
  1 2 ]                 3 8 ]                      0 2 ]
--M_2^(1/2)-->  [ 1 2   --A_{2,1}^(-2)-->  [ 1 0   = I
                  0 1 ]                      0 1 ]
Therefore
A_{2,1}^(-2)(I) M_2^(1/2)(I) A_{1,2}^(-3)(I) I_{1,2}(I) A = I
1.91
=> A = I_{1,2}(I)^(-1) A_{1,2}^(-3)(I)^(-1) M_2^(1/2)(I)^(-1) A_{2,1}^(-2)(I)^(-1)
     = I_{1,2}(I) A_{1,2}^(3)(I) M_2^(2)(I) A_{2,1}^(2)(I)
     = [ 0 1 ] [ 1 0 ] [ 1 0 ] [ 1 2 ]
       [ 1 0 ] [ 3 1 ] [ 0 2 ] [ 0 1 ]
1.92
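The factorization found in Ex 4 can be confirmed by multiplying the four factors back together (a NumPy sketch, not part of the original slides):

```python
import numpy as np

# The four elementary factors from Ex 4:
# A = I_{1,2}(I) A_{1,2}^(3)(I) M_2^(2)(I) A_{2,1}^(2)(I)
F1 = np.array([[0.0, 1.0],   # I_{1,2}(I): swap rows 1 and 2
               [1.0, 0.0]])
F2 = np.array([[1.0, 0.0],   # A_{1,2}^(3)(I): add 3 times row 1 to row 2
               [3.0, 1.0]])
F3 = np.array([[1.0, 0.0],   # M_2^(2)(I): multiply row 2 by 2
               [0.0, 2.0]])
F4 = np.array([[1.0, 2.0],   # A_{2,1}^(2)(I): add 2 times row 2 to row 1
               [0.0, 1.0]])

A = F1 @ F2 @ F3 @ F4  # recovers [[3, 8], [1, 2]]
```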
Theorem 1.10: Equivalent conditions
If A is an n×n matrix, then the following
statements are equivalent
(1) A is invertible
(2) Ax = b has a unique solution for every n×1
column vector b
(3) Ax = 0 has only the trivial solution
(4) A is row-equivalent to I_n
(5) A can be written as the product of
elementary matrices
1.93
LU-factorization (or LU-decomposition):
If the n×n matrix A can be written as the product
of a lower triangular matrix L and an upper
triangular matrix U, then A = LU is an
LU-factorization of A
3×3 lower triangular matrix (all entries above
the principal diagonal are zero):
[ a11  0   0
  a21 a22  0
  a31 a32 a33 ]
3×3 upper triangular matrix (all entries below
the principal diagonal are zero):
[ a11 a12 a13
   0  a22 a23
   0   0  a33 ]
1.94
Theorem 1.11 (LU-factorization):
If the square matrix A can be reduced to an upper
triangular matrix U using only elementary row
operations of the type A_{i,j}^(k) (adding a
multiple of one row to a row below it), then
E_k ... E2 E1 A = U
=> A = E1^(-1) E2^(-1) ... E_k^(-1) U
=> A = LU, where L = E1^(-1) E2^(-1) ... E_k^(-1)
is lower triangular
1.95
Ex 5: LU-factorization
(a) A = [ 1 -2
          1  0 ]
(b) A = [ 1  -3  0
          0   1  3
          2 -10  2 ]
Sol: (a)
[ 1 -2   --A_{1,2}^(-1)-->  [ 1 -2   = U
  1  0 ]                      0  2 ]
so A_{1,2}^(-1)(I) A = U
=> A = (A_{1,2}^(-1)(I))^(-1) U = LU,
where L = A_{1,2}^(1)(I) = [ 1 0
                             1 1 ]
1.96
(b)
[ 1  -3  0   --A_{1,3}^(-2)-->  [ 1 -3 0   --A_{2,3}^(4)-->  [ 1 -3  0   = U
  0   1  3                        0  1 3                       0  1  3
  2 -10  2 ]                      0 -4 2 ]                     0  0 14 ]
so A_{2,3}^(4)(I) A_{1,3}^(-2)(I) A = U
=> A = (A_{1,3}^(-2)(I))^(-1) (A_{2,3}^(4)(I))^(-1) U = LU,
where L = A_{1,3}^(2)(I) A_{2,3}^(-4)(I)
        = [ 1 0 0 ] [ 1  0 0 ]   [ 1  0 0 ]
          [ 0 1 0 ] [ 0  1 0 ] = [ 0  1 0 ]
          [ 2 0 1 ] [ 0 -4 1 ]   [ 2 -4 1 ]
1.97
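The factorization in Ex 5(b) can be checked by multiplying L and U back together (a NumPy sketch, not part of the original slides):

```python
import numpy as np

# L and U found in Ex 5(b)
L = np.array([[1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0],
              [2.0, -4.0, 1.0]])
U = np.array([[1.0, -3.0,  0.0],
              [0.0,  1.0,  3.0],
              [0.0,  0.0, 14.0]])

# The original matrix from Ex 5(b)
A = np.array([[1.0,  -3.0, 0.0],
              [0.0,   1.0, 3.0],
              [2.0, -10.0, 2.0]])

factorization_ok = np.allclose(L @ U, A)  # True
```

Note that general-purpose routines such as `scipy.linalg.lu` usually return a factorization with row pivoting (PA = LU), so their L and U need not match the hand-computed factors above.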
Solving Ax = b with an LU-factorization of A
(an important application of the LU-factorization)
If A = LU, then Ax = b becomes LUx = b
Let y = Ux; then Ly = b
Two steps to solve Ax = b:
(1) Write y = Ux and solve Ly = b for y
(2) Solve Ux = y for x
application of the LU-factorization)
1.98
Sol:
A LU











0 0 14
013
130
241
010
100
2 10 2
013
130
2 10 2 20
31
35
123
23
12




xxx
xx
xx

(1) Let , and solve y


x y b U L








20
51
241
010
100
123
yyy
20 2( 5) 4( 1) 14
20 2 4
51
312
12






yyy
yy
Ex 7: Solving a linear system using LU-
factorization
(solved by the forward substitution)
1.99

(2) Solve the


following system Ux
y









14
51
0 0 14
013
130
123
xxx
5 3 5 3(2) 1
1 3 1 (3)( 1) 2
1
12
23
3




xx
xx
x

Thus, the solution is


So such scenario, the LU-
factorization
is performed once, and can
be used
many times.
※ You can find that the
computational effort for LU
factorization is almost the
same as
that for the Gaussian
elimination,
so if you need to solve one
system
of linear equations, just use
the
Gaussian elimination plus
the back




1
12
x
(solved by the back substitution)
※ Similar to the method using A –1 to
solve the systems, the LUfactorization is useful when you
need to solve many systems with
the same coefficient matrix. In
substitution or the Gauss-Jordan
elimination directly.
1.100
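The two steps of Ex 7 (forward substitution for Ly = b, then back substitution for Ux = y) can be sketched directly in code. This is a minimal NumPy sketch of the hand computation above, not a general pivoted LU solver:

```python
import numpy as np

# LU factors of the coefficient matrix from Ex 7 (= Ex 5(b))
L = np.array([[1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0],
              [2.0, -4.0, 1.0]])
U = np.array([[1.0, -3.0,  0.0],
              [0.0,  1.0,  3.0],
              [0.0,  0.0, 14.0]])
b = np.array([-5.0, -1.0, -20.0])
n = 3

# Step (1): forward substitution for L y = b (the diagonal of L is 1)
y = np.zeros(n)
for i in range(n):
    y[i] = b[i] - L[i, :i] @ y[:i]

# Step (2): back substitution for U x = y
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

# x recovers the solution [1, 2, -1] found in Ex 7
```

Once L and U are stored, only these two O(n^2) substitution sweeps are needed per right-hand side, which is why the factorization pays off when many systems share the same coefficient matrix.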
Exercise
Let A = [ 1 1 2
          2 1 3
          3 5 3 ]
Write A as the product of elementary matrices
Write A^(-1) as the product of elementary matrices
1.101
 Homework Chapter 1
Textbook: B. Kolman and David R. Hill, Elementary
Linear Algebra with Applications, 9th edition
 Section 1.1: 3, 5, 8, 9, 12
 Section 1.2: 6, 8
 Section 1.3: 11, 14, 15
 Section 1.4: 7, 8, 30
 Section 2.1: 1, 4
 Section 2.2: 2, 5
 Section 2.3: 9, 10
 Section 2.5: 1, 2, 5, 6
 Deadline: 4 weeks
1.102