Linear Algebra
Lecture Notes
Textbook
Students are strongly advised to acquire a copy of the Textbook:

D. C. Lay. Linear Algebra and its Applications. Pearson, 2006. ISBN 0-521-28713-4.

Other editions can be used as well; the book is easily available on Amazon (this includes some very cheap used copies) and in Blackwell's bookshop on campus.

These lecture notes should be treated only as an indication of the content of the course; the textbook contains detailed explanations, worked-out examples, etc.
About Homework
The Undergraduate Student Handbook¹ says:

As a general rule for each hour in class you should spend two hours on independent study. This will include reading lecture notes and textbooks, working on exercises and preparing for coursework and examinations.

In respect of this course, MATH10212 Linear Algebra B, this means that students are expected to spend 8 (eight!) hours a week in private study of Linear Algebra.

Normally, homework assignments will consist of some odd-numbered exercises from the sections of the Textbook covered in the lectures up to Wednesday in a particular week. The Textbook contains answers to most odd-numbered exercises (but the numbering of exercises might change from one edition to another).

Homework N (already posted on the webpage and BB9 space of the course) should be returned to Supervision Classes teachers early in Week N, for marking and discussion at supervision classes on Tuesday and Wednesday.
¹ http://www.maths.manchester.ac.uk/study/undergraduate/information-for-current-students/undergraduatestudenthandbook/teachingandlearning/timetabledclasses/.
Communication
The Course Webpage is

http://www.maths.manchester.ac.uk/~avb/math10212-Linear-Algebra-B.html

The Course Webpage is updated almost daily, sometimes several times a day. Refresh it (and the files it is linked to) in your browser, otherwise you may miss the changes.

Email: Feel free to write to me with questions, etc., at the address

alexandre.borovik@manchester.ac.uk

but only from your university e-mail account. Emails from Gmail, Hotmail, etc., automatically go to spam.
• Over the course, we shall develop increasingly compact notation for the operations of Linear Algebra. In particular, we shall discover that

p1 g1 + p2 g2 + p3 g3

can be very conveniently written as

\[
\begin{pmatrix} p_1 & p_2 & p_3 \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix}
\]

and then abbreviated

\[
\begin{pmatrix} p_1 & p_2 & p_3 \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix} = P^T G.
\]

• Physicists use even shorter notation and, instead of

\[
p_1 g_1 + p_2 g_2 + p_3 g_3 = \sum_{i=1}^{3} p_i g_i,
\]

write

\[
p_1 g_1 + p_2 g_2 + p_3 g_3 = p_i g_i,
\]

omitting the summation sign entirely. This particular trick was invented by Albert Einstein, of all people. I do not use "physics" tricks in my lectures, but am prepared to give a few additional lectures to physics students.
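The three forms above all compute the same number. A quick numerical illustration in Python/NumPy (the vectors are illustrative; np.einsum implements the Einstein convention directly):

```python
import numpy as np

# Einstein convention: a repeated index is summed over, so p_i g_i
# means the sum over i of p_i * g_i.
p = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])

explicit = sum(p[i] * g[i] for i in range(3))  # p1 g1 + p2 g2 + p3 g3
einstein = np.einsum("i,i->", p, g)            # summation sign omitted
row_times_column = p @ g                       # the P^T G form

print(explicit, einstein, row_times_column)    # all three equal 32.0
```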
Checking solutions

Given a solution of a system of linear equations, how do we check that it is correct?
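A sketch of the check in Python/NumPy, using the system from Lecture 5 below (substitute the claimed solution and compare Ax with b):

```python
import numpy as np

# Substitute the claimed solution back into the system: x solves
# Ax = b exactly when the residual Ax - b is the zero vector.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, -1.0]])
b = np.array([2.0, 3.0, 2.0])
x = np.array([1.0, 1.5, 0.5])    # claimed solution

residual = A @ x - b
print(np.allclose(residual, 0))  # True: the solution is correct
```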
MATH10212 • Linear Algebra B • Lecture 2 • Row reduction and echelon forms • Last change: 23 Apr 2018 6
Denote

a1 = (0, 1, 1)^T,  a2 = (1, 1, 1)^T,  a3 = (1, 1, −1)^T

and

b = (2, 3, 2)^T;

then the vector equation can be written as

x1 a1 + x2 a2 + x3 a3 = b.

Notice that to solve this equation is the same as to

express b as a linear combination of a1, a2, a3, and find all such expressions.

Therefore

solving a linear system is the same as finding an expression of the right-hand side of the system as a linear combination of the columns of its matrix of coefficients.

A vector equation

x1 a1 + x2 a2 + · · · + xn an = b

has the same solution set as the linear system whose augmented matrix is

[a1 a2 · · · an | b].

The set of all linear combinations of v1, . . . , vp is denoted Span{v1, . . . , vp} and is called the subset of R^n spanned (or generated) by v1, . . . , vp, or the span of the vectors v1, . . . , vp.

That is, Span{v1, . . . , vp} is the collection of all vectors which can be written in the form

c1 v1 + c2 v2 + · · · + cp vp

with c1, . . . , cp scalars.

We say that vectors v1, . . . , vp span R^n if

Span{v1, . . . , vp} = R^n.

We can reformulate the definition of span as follows: if a1, . . . , ap ∈ R^n, then

Span{a1, . . . , ap} = {b ∈ R^n such that [a1 · · · ap | b] is the augmented matrix of a consistent system of linear equations},

or, in simpler words,

Span{a1, . . . , ap} = {b ∈ R^n such that the system of equations x1 a1 + · · · + xp ap = b has a solution}.
MATH10212 • Linear Algebra B • Lecture 5 • The matrix equation Ax = b • Last change: 23 Apr 2018 11
The system

x2 + x3 = 2
x1 + x2 + x3 = 3
x1 + x2 − x3 = 2

can be written as the vector equation x1 a1 + x2 a2 + x3 a3 = b.

Existence of solutions. The equation Ax = b has a solution if and only if b is a linear combination of the columns of A.
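The existence criterion can be tested numerically: b is a linear combination of the columns of A exactly when appending b to A does not increase the rank. A sketch, using the matrix of the system above:

```python
import numpy as np

# Ax = b is consistent iff rank [A | b] = rank A,
# i.e. iff b already lies in the span of the columns of A.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, -1.0]])
b = np.array([2.0, 3.0, 2.0])

augmented = np.column_stack([A, b])
consistent = np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)
print(consistent)  # True
```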
Let T : R^n −→ R^m be a linear transformation and let A be the standard matrix for T. Then:

T(x) = T(x1 e1 + x2 e2 + · · · + xn en) = T(x1 e1) + · · · + T(xn en) = x1 T(e1) + · · · + xn T(en)
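The computation above says that the standard matrix A has columns T(e1), . . . , T(en). A small sketch (the transformation chosen, rotation by 90°, is illustrative):

```python
import numpy as np

# The standard matrix of T has columns T(e1), ..., T(en),
# because T(x) = x1 T(e1) + ... + xn T(en).
def T(x):
    # illustrative linear transformation: rotation of R^2 by 90 degrees
    return np.array([-x[1], x[0]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])   # standard matrix of T

x = np.array([3.0, 4.0])
print(np.allclose(A @ x, T(x)))       # True: multiplying by A computes T
```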
A + B = (a1 + b1  a2 + b2  · · ·  an + bn)

\[
A + B = \begin{pmatrix}
a_{11}+b_{11} & \cdots & a_{1j}+b_{1j} & \cdots & a_{1n}+b_{1n} \\
\vdots & & \vdots & & \vdots \\
a_{i1}+b_{i1} & \cdots & a_{ij}+b_{ij} & \cdots & a_{in}+b_{in} \\
\vdots & & \vdots & & \vdots \\
a_{m1}+b_{m1} & \cdots & a_{mj}+b_{mj} & \cdots & a_{mn}+b_{mn}
\end{pmatrix}
\]
MATH10212 • Linear Algebra B • Lecture 9 • Matrix operations: Addition • Last change: 23 Apr 2018 18
Scalar multiple. If c is a scalar then we define

cA = (ca1  ca2  · · ·  can)

\[
cA = \begin{pmatrix}
ca_{11} & \cdots & ca_{1j} & \cdots & ca_{1n} \\
\vdots & & \vdots & & \vdots \\
ca_{i1} & \cdots & ca_{ij} & \cdots & ca_{in} \\
\vdots & & \vdots & & \vdots \\
ca_{m1} & \cdots & ca_{mj} & \cdots & ca_{mn}
\end{pmatrix}
\]

Theorem 2.1.1: Properties of matrix addition. Let A, B, and C be matrices of the same size and r and s be scalars.

1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. A + 0 = A
4. r(A + B) = rA + rB
5. (r + s)A = rA + sA
6. r(sA) = (rs)A.
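These identities are easy to spot-check numerically (random matrices, for illustration only; a check is of course not a proof):

```python
import numpy as np

# Spot-check of Theorem 2.1.1 on randomly chosen 3x3 matrices.
rng = np.random.default_rng(0)
A, B, C = rng.integers(-5, 5, (3, 3, 3)).astype(float)
r, s = 2.0, -3.0

assert np.allclose(A + B, B + A)                # 1. commutativity
assert np.allclose((A + B) + C, A + (B + C))    # 2. associativity
assert np.allclose(r * (A + B), r * A + r * B)  # 4
assert np.allclose((r + s) * A, r * A + s * A)  # 5
assert np.allclose(r * (s * A), (r * s) * A)    # 6
print("all properties hold")
```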
MATH10212 • Linear Algebra B • Lecture 10 • Matrix multiplication • Last change: 23 Apr 2018 19
Their composition

(S ◦ T)(x) = S(T(x))

is a linear transformation

S ◦ T : R^n −→ R^p.

From the previous lecture, we know that linear transformations are given by matrices. What is the matrix of S ◦ T?

Multiplication of matrices. To answer the above question, we need to compute A(Bx) in matrix form. Write x as

x = (x1, . . . , xn)^T

and observe that

Bx = x1 b1 + · · · + xn bn.

Hence

A(Bx) = A(x1 b1 + · · · + xn bn)
      = A(x1 b1) + · · · + A(xn bn)
      = x1 A(b1) + · · · + xn A(bn)
      = (Ab1  Ab2  · · ·  Abn) x.

Therefore multiplication by the matrix

C = (Ab1  Ab2  · · ·  Abn)

transforms x into A(Bx). Hence C is the matrix of the linear transformation S ◦ T.

If B has columns b1, . . . , bn, then the product AB is the p × n matrix whose columns are Ab1, . . . , Abn:

AB = A (b1 · · · bn) = (Ab1 · · · Abn).

Columns of AB. Each column Abj of AB is a linear combination of the columns of A with weights taken from the jth column of B:

Abj = (a1 · · · am) (b1j, . . . , bmj)^T = b1j a1 + · · · + bmj am.

Mnemonic rules

[m × n matrix] · [n × p matrix] = [m × p matrix]

columnj(AB) = A · columnj(B)

rowi(AB) = rowi(A) · B

Theorem 2.1.2: Properties of matrix multiplication. Let A be an m × n matrix and let B and C be matrices for which the indicated sums and products are defined. Then the following identities are true:
1. A(BC) = (AB)C
2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
5. Im A = A = AIn
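Associativity and the mnemonic rules above can be spot-checked on random matrices of compatible sizes (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

assert np.allclose(A @ (B @ C), (A @ B) @ C)    # A(BC) = (AB)C
assert np.allclose((A @ B)[:, 1], A @ B[:, 1])  # column_j(AB) = A column_j(B)
assert np.allclose((A @ B)[0, :], A[0, :] @ B)  # row_i(AB) = row_i(A) B
print("identities verified")
```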
A^k = A · · · A (k times)

If A ≠ 0 then we set

A^0 = I
1. (AT )T = A
2. (A + B)T = AT + B T
4. (AB)T = B T AT
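Note the reversed order in rule 4; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

assert np.allclose(A.T.T, A)              # (A^T)^T = A
assert np.allclose((A + A).T, A.T + A.T)  # (A + B)^T = A^T + B^T
assert np.allclose((A @ B).T, B.T @ A.T)  # (AB)^T = B^T A^T, order reversed
print("transpose rules verified")
```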
MATH10212 • Linear Algebra B • Lecture 11 • The inverse of a matrix • Last change: 23 Apr 2018 21
(AB)^{−1} = B^{−1} A^{−1}

\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]

If ad − bc ≠ 0 then A is invertible and

\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
\]

(c) If A is an invertible matrix, then so is A^T, and

(A^T)^{−1} = (A^{−1})^T
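A check of the 2 × 2 inverse formula and of (AB)^{−1} = B^{−1}A^{−1} (the matrices are illustrative):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])            # ad - bc = -2, so A is invertible
A_inv = (1.0 / (a * d - b * c)) * np.array([[d, -b], [-c, a]])
assert np.allclose(A @ A_inv, np.eye(2))  # the formula really inverts A

B = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # order reversed
print("inverse identities verified")
```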
MATH10212 • Linear Algebra B • Lecture 12 • Characterizations of invertible matrices • Last change: 23 Apr 2018 22
The rank of A is the dimension of the column space Col A of A:

rank A = dim Col A

The Rank Theorem 2.9.14: If a matrix A has n columns, then

rank A + dim Nul A = n.

The Basis Theorem 2.9.15: Let H be a p-dimensional subspace of R^n.

• Any linearly independent set of exactly p elements in H is automatically a basis for H.

For example, this means that the vectors

(1, 0, 0)^T,  (2, 3, 0)^T,  (4, 5, 6)^T

form a basis of R^3: there are three of them and they are linearly independent (the latter should be obvious to students at this stage).

The Invertible Matrix Theorem (continued). Let A be an n × n matrix. Then the following statements are each equivalent to the statement that A is an invertible matrix.

(m) The columns of A form a basis of R^n.

(n) Col A = R^n.

(q) Nul A = {0}.

(r) dim Nul A = 0.
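The Rank Theorem can be illustrated numerically: for a square matrix, the rank is the number of nonzero singular values and dim Nul A is the number of zero ones, so the two add up to n (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 0.0]])   # rank 2, n = 3 columns

s = np.linalg.svd(A, compute_uv=False)  # singular values
rank = int(np.sum(s > 1e-10))
nullity = int(np.sum(s <= 1e-10))       # valid here because A is square
print(rank, nullity, rank + nullity)    # 2 1 3
```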
MATH10212 • Linear Algebra B • Lecture 15 • Introduction to determinants • Last change: 23 Apr 2018 27
\[
\det \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= a_{11} \cdot \det \begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix}
- a_{12} \cdot \det \begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix}
+ a_{13} \cdot \det \begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}
\]

\[
\det A = a_{11} \det A_{11} - a_{12} \det A_{12} + \cdots + (-1)^{1+n} a_{1n} \det A_{1n}
= \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det A_{1j}
\]
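The expansion across the first row translates directly into a recursive program (exponential time, so for illustration only; the test matrix is borrowed from a later example):

```python
import numpy as np

# Cofactor expansion across the first row:
# det A = sum over j of (-1)^(1+j) * a_{1j} * det A_{1j},
# where A_{1j} is A with row 1 and column j deleted.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # the minor A_{1j}
        total += (-1) ** j * A[0][j] * det(minor)  # (-1)**j equals (-1)^(1+j) for 1-based j
    return total

A = [[2.0, 0.0, 7.0],
     [0.0, 1.0, 0.0],
     [1.0, 0.0, 4.0]]
print(det(A), np.linalg.det(np.array(A)))  # both equal 1 up to rounding
```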
Cij = (−1)^{i+j} det Aij

Then

det A = a11 C11 + a12 C12 + · · · + a1n C1n

is the cofactor expansion across the first row of A.

Theorem 3.1.1: Cofactor expansion. For any row i, the result of the cofactor expansion across the row i is the same:

det A = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin

All entries in the first row of A, with the possible exception of a11, are zeroes:

a12 = · · · = a15 = 0;

therefore in the formula

det A = a11 det A11 − a12 det A12 + · · · + a15 det A15

all summands, with the possible exception of the first one, a11 det A11, are zeroes, and therefore

det A = a11 det A11
\[
= a_{11} \det \begin{pmatrix}
a_{22} & 0 & 0 & 0 \\
a_{32} & a_{33} & 0 & 0 \\
a_{42} & a_{43} & a_{44} & 0 \\
a_{52} & a_{53} & a_{54} & a_{55}
\end{pmatrix}.
\]

The chess board pattern of signs in cofactor expansions.

det I_n = 1.
Corollary. The determinant of a diagonal matrix is the product of its diagonal elements:

\[
\det \begin{pmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & & d_n
\end{pmatrix} = d_1 d_2 \cdots d_n.
\]

Corollary. The determinant of the zero matrix equals 0:

det 0 = 0.
MATH10212 • Linear Algebra B • Lectures 15–16 • Properties of determinants • Last change: 23 Apr 2018 30
Example.

\[
\det \begin{pmatrix} 1 & 0 & 3 \\ 1 & 3 & 1 \\ 1 & 1 & -1 \end{pmatrix}
\stackrel{C_3 - 3C_1}{=} \det \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & -2 \\ 1 & 1 & -4 \end{pmatrix}
\]
\[
\stackrel{2 \text{ out of } C_3}{=} 2 \cdot \det \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & -1 \\ 1 & 1 & -2 \end{pmatrix}
\stackrel{-1 \text{ out of } C_3}{=} -2 \cdot \det \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & 1 \\ 1 & 1 & 2 \end{pmatrix}
\]
\[
\stackrel{\text{expand}}{=} -2 \cdot 1 \cdot (-1)^{1+1} \cdot \det \begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix}
\stackrel{R_1 - 3R_2}{=} -2 \cdot \det \begin{pmatrix} 0 & -5 \\ 1 & 2 \end{pmatrix}
\]
\[
\stackrel{R_1 \leftrightarrow R_2}{=} -2 \cdot (-1) \cdot \det \begin{pmatrix} 1 & 2 \\ 0 & -5 \end{pmatrix}
= 2 \cdot (-5) = -10.
\]
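The chain of operations can be double-checked against a direct computation:

```python
import numpy as np

A = np.array([[1.0, 0.0, 3.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, -1.0]])
print(round(np.linalg.det(A)))  # -10, as obtained by hand above
```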
\[
\Delta = (-1) \cdot (-1)^{3+1} \cdot \det \begin{pmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ 1 & 1 & 2 \end{pmatrix}
= -\det \begin{pmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ 1 & 1 & 2 \end{pmatrix}
\stackrel{R_3 + R_2}{=} -\det \begin{pmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ 0 & 1 & 3 \end{pmatrix}
\]

After expanding the determinant across

Recall that this theorem was already known to us in the case of 2 × 2 matrices A.

Example. The matrix

\[
A = \begin{pmatrix} 2 & 0 & 7 \\ 0 & 1 & 0 \\ 1 & 0 & 4 \end{pmatrix}
\]

has the determinant

\[
\det A = 2 \cdot \det \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} + 7 \cdot \det \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
= 2 \cdot 4 + 7 \cdot (-1) = 8 - 7 = 1.
\]
Example. Take

\[
A = \begin{pmatrix} 1 & 0 \\ 3 & 2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 5 \\ 0 & 1 \end{pmatrix}.
\]

Proof. Set B = A^{−1}; then

AB = I
2. u + v = v + u.

3. (u + v) + w = u + (v + w).

4. There is a zero vector 0 such that u + 0 = u.

5. For each u there exists −u such that u + (−u) = 0.

6. The scalar multiple cu is in V.

7. c(u + v) = cu + cv.

8. (c + d)u = cu + du.

9. c(du) = (cd)u.

10. 1u = u.

Examples. Of course, the most important example of vector spaces is provided by the standard spaces R^n of column vectors with n components, with the usual operations of component-wise addition and multiplication by scalars. Another example is the set Pn of polynomials of degree at most n in the variable x.

An expression

y = c1 v1 + · · · + cp vp

is called a linear combination of v1, v2, . . . , vp with weights c1, c2, . . . , cp.

If v1, . . . , vp are in V, then the set of all linear combinations of v1, . . . , vp is denoted by Span{v1, . . . , vp} and is called the subset of V spanned (or generated) by v1, . . . , vp.

That is, Span{v1, . . . , vp} is the collection of all vectors which can be written in the form

c1 v1 + c2 v2 + · · · + cp vp

with c1, . . . , cp scalars.

Span{v1, . . . , vp} is a vector subspace of V, that is, it is closed with respect to addition of vectors and multiplying vectors by scalars.

V is a subspace of itself.

{0} is a subspace, called the zero subspace.
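The example Pn becomes concrete if a polynomial is identified with its vector of coefficients in R^{n+1}; the vector-space operations on Pn are then ordinary component-wise operations (a sketch, with illustrative polynomials):

```python
import numpy as np

# A polynomial of degree at most 2 as its coefficient vector in R^3:
p = np.array([2.0, 3.0, -1.0])   # 2 + 3x - x^2
q = np.array([1.0, 0.0, 4.0])    # 1 + 4x^2

s = p + q        # (2 + 3x - x^2) + (1 + 4x^2) = 3 + 3x + 3x^2
t = 2.0 * p      # 2(2 + 3x - x^2) = 4 + 6x - 2x^2
print(s, t)
```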
(c1 u1 + · · · + cp up) · ui = 0 · ui = 0

Theorem 6.2.4. If

S = {u1, . . . , up}

Suppose v1, . . . , vn form an orthogonal basis of R^n. After opening the brackets we have

v · ui = x1 u1 · ui + · · · + xn un · ui = xi ui · ui = xi.

An orthonormal basis in R^n is an orthogonal basis

u1, . . . , un

made of unit vectors,

‖ui‖ = 1 for all i = 1, 2, . . . , n.

Theorem: Eigenvectors of symmetric matrices. Let A be a symmetric n × n matrix with n distinct eigenvalues λ1, . . . , λn. Then R^n has an orthonormal basis made of eigenvectors of A.
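For a symmetric matrix this orthonormal eigenbasis can be computed with np.linalg.eigh (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric

eigenvalues, Q = np.linalg.eigh(A)   # columns of Q: orthonormal eigenvectors
assert np.allclose(Q.T @ Q, np.eye(2))               # orthonormal basis
assert np.allclose(A @ Q, Q @ np.diag(eigenvalues))  # eigenvector equations
print(eigenvalues)  # [1. 3.]
```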
MATH10212 • Linear Algebra B • Lecture 26 • Orthogonal matrices • Last change: 23 Apr 2018 44
Inner product space. A vector space with an inner product is called an inner product space.

Example: School Geometry. The ordinary Euclidean plane of school geometry is an inner product space:

• Vectors: directed segments starting at the origin O
• Addition: parallelogram rule
• Product-by-scalar: stretching the vector by a factor of c
• Dot product: u · v = ‖u‖‖v‖ cos θ, where θ is the angle between the vectors u and v.

Length. The length of the vector u is defined as

‖u‖ = √(u · u).

Notice that

‖u‖² = u · u,

and

‖u‖ = 0 ⇔ u = 0.

Orthogonality. We say that vectors u and v are orthogonal if

u · v = 0.

The Pythagoras Theorem. Vectors u and v are orthogonal if and only if

‖u + v‖² = ‖u‖² + ‖v‖².
\[
|u_1 v_1 + \cdots + u_n v_n| \le \sqrt{u_1^2 + \cdots + u_n^2} \cdot \sqrt{v_1^2 + \cdots + v_n^2},
\]

or, which is its equivalent form,

\[
\int_0^1 f(x)g(x)\,dx \le \sqrt{\int_0^1 f(x)^2\,dx} \cdot \sqrt{\int_0^1 g(x)^2\,dx},
\]

or, equivalently,

\[
\left( \int_0^1 f(x)g(x)\,dx \right)^2 \le \int_0^1 f(x)^2\,dx \cdot \int_0^1 g(x)^2\,dx.
\]
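The first form of the inequality is easy to test numerically on random vectors (a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    u = rng.standard_normal(5)
    v = rng.standard_normal(5)
    # |u . v| <= ||u|| ||v||, with a small tolerance for rounding
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
print("Cauchy-Schwarz holds in all trials")
```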
\[
\begin{aligned}
v_1 &= x_1 \\
v_2 &= x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1 \\
v_3 &= x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2 \\
&\;\;\vdots \\
v_p &= x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_p \cdot v_2}{v_2 \cdot v_2} v_2 - \cdots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1}} v_{p-1}
\end{aligned}
\]
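The Gram–Schmidt process above translates directly into code; a sketch that orthogonalizes the columns of a matrix (the input vectors are illustrative):

```python
import numpy as np

def gram_schmidt(X):
    """Apply the Gram-Schmidt process to the columns x1, ..., xp of X."""
    vs = []
    for x in X.T:
        v = x.astype(float)
        for u in vs:
            v = v - (x @ u) / (u @ u) * u   # subtract projection onto earlier v's
        vs.append(v)
    return np.column_stack(vs)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = gram_schmidt(X)
# V^T V is diagonal: the resulting vectors are pairwise orthogonal.
print(np.allclose(V.T @ V, np.diag(np.diag(V.T @ V))))  # True
```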
Main Theorem about inner product spaces. Every finite-dimensional inner product space has an orthonormal basis and is isomorphic to one of the spaces R^n with the standard dot product

\[
u \cdot v = u^T v = \begin{pmatrix} u_1 & \cdots & u_n \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}.
\]
MATH10212 • Linear Algebra B • Lectures 28–30 • Revision • Last change: 23 Apr 2018 49
Linear dependence

If a1, . . . , ap are vectors in a vector space V, an expression

c1 a1 + · · · + cp ap,   ci ∈ R,

is called a linear combination of the vectors a1, . . . , ap, and the coefficients ci are called weights in this linear combination. If

c1 a1 + · · · + cp ap = 0

we say that the weights c1, . . . , cp define a linear dependency of the vectors a1, . . . , ap. We always have a trivial (or zero) dependency, in which all ci = 0. We say that the vectors a1, . . . , ap are linearly dependent if they admit a non-zero (or non-trivial) linear dependency, that is, a dependency in which at least one weight ci ≠ 0.

Let us arrange the weights c1, . . . , cp in a column c = (c1, . . . , cp)^T and form a matrix A using the vectors a1, . . . , ap as columns; then

\[
c_1 a_1 + \cdots + c_p a_p = \begin{pmatrix} a_1 & \cdots & a_p \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_p \end{pmatrix} = Ac,
\]

and linear dependencies between the vectors a1, . . . , ap are nothing else but solutions of the homogeneous system of linear equations

Ax = 0.

Therefore the vectors a1, . . . , ap are linearly dependent if and only if the system

Ax = 0

has a non-zero solution.
Rank of a matrix

We took it without proof that if U is a vector subspace of R^n, then the maximal linearly independent subsets of U all contain the same number of vectors. This number is called the dimension of U and is denoted dim U.

A maximal system of linearly independent vectors in U is called a basis of U. Each basis of U contains dim U vectors.
An important theorem:

dim Nul A + dim Col A = n.

dim Col A is called the rank of A and is denoted rank A:

rank A = dim Col A.

Therefore

dim Nul A + rank A = n.

Theorem. If rank A = p then among the columns of A there are p linearly independent columns.

Theorem. Since elementary row transformations of A do not change linear dependencies between its columns, elementary row operations do not change rank A.
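The last theorem can be illustrated by applying a few elementary row operations and comparing ranks:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

B = A.copy()
B[2] = B[2] - B[0]       # replacement: R3 -> R3 - R1
B[[0, 1]] = B[[1, 0]]    # interchange: R1 <-> R2
B[0] = 5.0 * B[0]        # scaling a row by a nonzero constant
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))  # True
```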