
Linear Algebra Review

Linear Algebra Primer

Slides by Juan Carlos Niebles*, Ranjay Krishna* and Maxime Voisin
*Stanford Vision and Learning Lab
27-Sep-2018

Another, very in-depth linear algebra review from CS229 is available here:
http://cs229.stanford.edu/section/cs229-linalg.pdf
And a video discussion of linear algebra from EE263 is here (lectures 3 and 4):
https://see.stanford.edu/Course/EE263

Stanford University
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Outline
Vectors and matrices are just collections of ordered numbers that represent something: location in space, speed, pixel brightness, etc. We'll define some common uses and standard operations on them.
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Vector
• A column vector v ∈ ℝ^(n×1), where
  v = [v₁; v₂; …; vₙ]
• A row vector vᵀ ∈ ℝ^(1×n), where
  vᵀ = [v₁ v₂ … vₙ]
• ᵀ denotes the transpose operation
Vector
• We'll default to column vectors in this class
• You'll want to keep track of the orientation of your vectors when programming in python
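To make the orientation issue concrete, here is a minimal numpy sketch; the variable names and values are illustrative:

```python
import numpy as np

# A plain 1-D array has no row/column orientation: shape (3,)
v = np.array([1, 2, 3])

# An explicit column vector is a 3x1 2-D array
col = np.array([[1], [2], [3]])

# Transposing the column vector gives a 1x3 row vector
row = col.T

print(v.shape, col.shape, row.shape)  # (3,) (3, 1) (1, 3)
```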
Some vectors have a geometric interpretation, others don't…
• Some vectors have a geometric interpretation:
  – Points are just vectors from the origin.
  – We can make calculations like "distance" between 2 vectors
• Other vectors don't have a geometric interpretation:
  – Vectors can represent any kind of data (pixels, gradients at an image keypoint, etc)
  – Such vectors don't have a geometric interpretation
  – We can still make calculations like "distance" between 2 vectors
Matrix
• A matrix A ∈ ℝ^(m×n) is an array of numbers with size m by n, i.e. m rows and n columns.
• If m = n, we say that A is square.
Images
• Python represents an image as a matrix of pixel brightnesses
• Note that the upper left corner is [row, column] = (0, 0)
• Python indices start at 0
Images can be represented as a matrix of pixels.
Images can also be represented as a vector of pixels.
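A minimal numpy sketch of both representations; the pixel values here are made up:

```python
import numpy as np

# A tiny 2x3 grayscale "image": one brightness per pixel, [row, column] indexing
img = np.array([[0, 128, 255],
                [64, 192, 32]], dtype=np.uint8)

print(img.shape)   # (2, 3): 2 rows, 3 columns
print(img[0, 0])   # 0, the upper-left pixel

# The same image flattened into a vector of pixels
flat = img.reshape(-1)
print(flat.shape)  # (6,)
```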
Color Images
• Grayscale images have 1 number per pixel, and are stored as an m×n matrix.
• Color images have 3 numbers per pixel (red, green, and blue brightnesses, RGB) and are stored as an m×n×3 matrix.
Basic Matrix Operations
• We will discuss:
  – Addition
  – Scaling
  – Dot product
  – Multiplication
  – Transpose
  – Inverse / pseudoinverse
  – Determinant / trace
Matrix Operations
• Addition
  – We can only add a matrix to another matrix of matching dimensions, or add a scalar to every entry. Good to know for Python assignments!
• Scaling
  – Multiplying a matrix by a scalar multiplies every entry.
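A quick numpy illustration of addition and scaling; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Addition requires matching dimensions
C = A + B    # [[6, 8], [10, 12]]

# Adding a scalar adds it to every entry
D = A + 10   # [[11, 12], [13, 14]]

# Scaling multiplies every entry by the scalar
E = 2 * A    # [[2, 4], [6, 8]]
```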
Vector Norms
• Examples of vector norms:
  – ℓ₂ norm: ‖x‖₂ = sqrt(x₁² + … + xₙ²)
  – ℓ₁ norm: ‖x‖₁ = |x₁| + … + |xₙ|
  – ℓ∞ norm: ‖x‖∞ = maxᵢ |xᵢ|
Vector Norms
• More formally, a norm is any function f : ℝⁿ → ℝ that satisfies the following 4 properties:
  • Non-negativity: For all x ∈ ℝⁿ, f(x) ≥ 0
  • Definiteness: f(x) = 0 if and only if x = [0, 0, …, 0]
  • Homogeneity: For all x ∈ ℝⁿ and t ∈ ℝ, f(tx) = |t|·f(x)
  • Triangle inequality: For all x, y ∈ ℝⁿ, f(x + y) ≤ f(x) + f(y)
Vector Operation: inner product
• Inner product (dot product) of two vectors
  – Multiply corresponding entries of two vectors and add up the result: x·y = x₁y₁ + … + xₙyₙ
  – x·y is also |x||y|cos(the angle between x and y)
Vector Operation: inner product
• Inner product (dot product) of two vectors
  – If B is a unit vector:
    • Then A·B = |A||B|cos(Θ) = |A| × 1 × cos(Θ) = |A|cos(Θ)
    • A·B gives the length of A which lies in the direction of B
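A small numpy sketch of the dot product and its projection interpretation; the vectors are chosen for illustration:

```python
import numpy as np

A = np.array([3.0, 4.0])
B = np.array([1.0, 0.0])  # a unit vector along the x axis

dot = A @ B               # 3*1 + 4*0 = 3.0

# Same value via |A||B|cos(theta)
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))
via_angle = np.linalg.norm(A) * np.linalg.norm(B) * cos_theta

# Since B is a unit vector, A.B is the length of A along B's direction
print(dot)  # 3.0
```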
Matrix Operations
• The product of two matrices
(Figure: the sizes must match: an m×n matrix times an n×p matrix gives an m×p matrix.)
Matrix Operations
• Multiplication
• The product AB of an m×n matrix A and an n×p matrix B is an m×p matrix with entries (AB)ᵢⱼ = Σₖ AᵢₖBₖⱼ
• Each entry in the result is: (that row of A) dot product with (that column of B)
• Many uses, which will be covered later
Matrix Operations
• Multiplication example:
  – Each entry of the matrix product is made by taking the dot product of the corresponding row in the left matrix with the corresponding column in the right one.
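A minimal numpy example of the row-dot-column rule; the matrices are chosen for illustration:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])  # 2x2
B = np.array([[5, 6],
              [7, 8]])  # 2x2

C = A @ B
# Entry (0, 0) is (row 0 of A) dot (column 0 of B): 1*5 + 2*7 = 19
print(C)  # [[19 22]
          #  [43 50]]
```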
Matrix Operations
• Powers
  – By convention, we can refer to the matrix product AA as A², and AAA as A³, etc.
  – Important: only square matrices can be multiplied that way! (Make sure you understand why.)
Matrix Operations
• Transpose a matrix: flip the matrix, so row 1 becomes column 1: (Aᵀ)ᵢⱼ = Aⱼᵢ
• A useful identity: (ABC)ᵀ = CᵀBᵀAᵀ
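The transpose identity can be checked numerically; a sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

# (ABC)^T equals C^T B^T A^T: transposing reverses the factor order
lhs = (A @ B @ C).T
rhs = C.T @ B.T @ A.T
print(np.allclose(lhs, rhs))  # True
```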
Matrix Operations
• Determinant
  – det(A) returns a scalar
  – Represents the area (or volume) of the parallelogram described by the vectors in the rows of the matrix
  – For A = [a b; c d], det(A) = ad − bc
  – Properties: det(AB) = det(A)·det(B), det(Aᵀ) = det(A)
Matrix Operations
• Trace
  – tr(A) is the sum of the diagonal elements of A
  – Invariant to a lot of transformations, so it's used sometimes in proofs. (Rarely in this class though.)
  – Properties: tr(AB) = tr(BA), tr(A + B) = tr(A) + tr(B)
Matrix Operations
• Vector norms (we've talked about them earlier)
• Matrix norms: norms can also be defined for matrices, such as the Frobenius norm:
  ‖A‖_F = sqrt(Σᵢⱼ Aᵢⱼ²)
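A quick numpy check of the determinant, trace, and Frobenius norm on an arbitrary 2×2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Determinant of [[a, b], [c, d]] is ad - bc
det = np.linalg.det(A)           # 1*4 - 2*3 = -2 (up to floating point)

# Trace is the sum of the diagonal entries
tr = np.trace(A)                 # 1 + 4 = 5.0

# Frobenius norm: sqrt of the sum of squared entries
fro = np.linalg.norm(A, 'fro')   # sqrt(1 + 4 + 9 + 16)
```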
Special Matrices
• Identity matrix I
  – Square matrix, 1's along diagonal, 0's elsewhere
  – I · [a matrix A] = [that matrix A]
  – [a matrix A] · I = [that matrix A]
• Diagonal matrix
  – Square matrix with numbers along diagonal, 0's elsewhere
  – [diagonal matrix A] · [another matrix B] scales the rows of matrix B
Special Matrices
• Symmetric matrix: Aᵀ = A
• Skew-symmetric matrix: Aᵀ = −A, e.g.
  [  0   2  −5 ]
  [ −2   0   7 ]
  [  5  −7   0 ]
Outline
Matrix multiplication can be used to transform vectors. A matrix used in this way is called a transformation matrix.
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Transformation: scaling
• Matrices can be used to transform vectors in useful ways, through multiplication: Ax = x′
• Simplest transformation is scaling:
  [sx 0 ] [x]   [sx·x]
  [0  sy] [y] = [sy·y]
(Verify to yourself that the matrix multiplication works out this way)
Transformation: scaling
(Figure: the initial vector, multiplied by the scaling matrix, gives the scaled vector.)
Transformation: rotation
(Figure: the initial vector, rotated by θ = 30°, gives the rotated vector.)
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
(Figure: the same point shown in frame "0" with axes x0, y0, and in a rotated frame "1" with axes x1, y1.)
It is the same data point, but it is represented in a new, rotated frame.
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Remember what a vector is:
  [component in direction of the frame's x axis, component in direction of y axis]
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) in frame "0" to (x′, y′) in frame "1"?
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) to (x′, y′)?
• Answer: we use dot products!!
  o x′ (the new x coordinate) is the length of the original vector which lies in the direction of the new x axis
  o y′ (the new y coordinate) is the length of the original vector which lies in the direction of the new y axis
• So:
  o x′ is [original vector] dot [the new x axis]
  o y′ is [original vector] dot [the new y axis]
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) to (x′, y′)?
  o x′ (the new x coordinate) is [original vector] dot [the new x axis]
  o y′ (the new y coordinate) is [original vector] dot [the new y axis]
• So:
  o x′ = [x, y] dot [the new x axis]
  o y′ = [x, y] dot [the new y axis]
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) to (x′, y′)?
  o x′ = [x, y] dot [the new x axis]
  o y′ = [x, y] dot [the new y axis]
• So:
  o [x′, y′] = [ [the new x axis] dot [x, y],
                 [the new y axis] dot [x, y] ]
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) to (x′, y′)?
  o [x′, y′] = [ [the new x axis] dot [x, y],
                 [the new y axis] dot [x, y] ]
• So:
  o [x′, y′] = [ [the new x axis]
                 [the new y axis] ] x → it is a matrix-vector multiplication!!
  o The matrix whose rows are the new axes is the 2x2 Rotation Matrix
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Goal: How do we go from (x, y) to (x′, y′)?
→ Answer: it is a matrix-vector multiplication!!
  o [x′, y′] = [2x2 Rotation Matrix] x
  o P′ = R P
Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
• Answer: using a matrix-vector multiplication!! [x′, y′] = R x
(Figure: the point (x, y) in frame "0" and its coordinates (x′, y′) in the rotated frame "1".)
Now, here’s how we do a rotation!!
• Until now, we’ve seen how to convert a data point represented in frame “0” to a

Linear Algebra Review


new, rotated coordinate frame “1”. We rotated the frame to the left.

y0 y0
= Rx
y1 y1

27-Sep-2018
x1 x1
y
x'

x x0 y' x0
43
Stanford University
Now, here’s how we do a rotation!!
• Until now, we’ve seen how to convert a data point represented in frame “0” to a

Linear Algebra Review


new, rotated coordinate frame “1”. We rotated the frame to the left.
• Insight: rotating the frame to the left == rotating a data point to the right
– Suppose we express a point in the new coordinate system “1”
– If we plot the result in the original coordinate system, we have rotated the point right

y0 y0
= Rx y0
= Rx
y1 y1 y1

27-Sep-2018
x1 x1 x1
y
x'
y’
x x0 y' x0
x’ x0
44
Stanford University
Now, here’s how we do a rotation!!
• Until now, we’ve seen how to convert a data point represented in frame “0” to a

Linear Algebra Review


new, rotated coordinate frame “1”.
• Insight: rotating the frame to the left == rotating a data point to the right

y0 Rotation by an Rotate the


y0
angle Θ vector by an
angle Θ
y

27-Sep-2018
y’
x x0
= Rx x’ x0
• Thus, rotation matrices can be used to rotate vectors. We’ll usually think of 45
Stanford
them University
in that sense-- as operators to rotate vectors
2D Rotation Matrix Formula: what is R?
Counter-clockwise rotation by an angle θ:

x′ = cos θ · x − sin θ · y
y′ = cos θ · y + sin θ · x

[x′]   [cos θ  −sin θ] [x]
[y′] = [sin θ   cos θ] [y]

P′ = R P
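The formula above can be sketched in numpy; the helper function name and test point are just for illustration:

```python
import numpy as np

def rotation_matrix(theta):
    """Counter-clockwise rotation by theta radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Rotate the point (1, 0) by 90 degrees counter-clockwise
R = rotation_matrix(np.pi / 2)
p = np.array([1.0, 0.0])
p_rot = R @ p
print(p_rot)  # approximately [0, 1]
```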
Transformation Matrices
• Multiple transformation matrices can be used to transform a point:
  p′ = R₂ R₁ S p
• The effect of this is to apply their transformations one after the other, from right to left.
• In the example above, the result is (R₂ (R₁ (S p)))
• The result is exactly the same if we multiply the matrices first, to form a single transformation matrix:
  p′ = (R₂ R₁ S) p
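The right-to-left composition can be checked numerically; a sketch with an arbitrary scaling and two rotations:

```python
import numpy as np

theta = np.pi / 4
R1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
R2 = R1.copy()
S = np.diag([2.0, 3.0])  # a scaling matrix
p = np.array([1.0, 1.0])

# Applying the transforms one after the other, right to left...
step_by_step = R2 @ (R1 @ (S @ p))

# ...equals multiplying the matrices first into a single transform
combined = (R2 @ R1 @ S) @ p
print(np.allclose(step_by_step, combined))  # True
```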
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Scaling, Rotation
  – Homogeneous coordinates
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Homogeneous system
• In general, a matrix multiplication lets us linearly combine components of a vector
  – This is sufficient for scale, rotate, and skew transformations.
  – But notice, we can't add a constant!
Homogeneous system
– The (somewhat hacky) solution? Stick a "1" at the end of every vector:
– Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out, above)
– This is called "homogeneous coordinates"
Homogeneous system
– In homogeneous coordinates, the multiplication works out so the rightmost column of the matrix is a vector that gets added.
– Generally, a homogeneous transformation matrix will have a bottom row of [0 0 1], so that the result has a "1" at the bottom too.
Homogeneous system
• One more thing we might want: to divide the result by something
  – For example, we may want to divide by a coordinate, to make things scale down as they get farther away in a camera image
  – Matrix multiplication can't actually divide
  – So, by convention, in homogeneous coordinates, we'll divide the result by its last coordinate after doing a matrix multiplication
2D Translation using Homogeneous Coordinates
(Figure: point P translated by t to P′.)
2D Translation using Homogeneous Coordinates

P = (x, y) → (x, y, 1)
t = (tx, ty) → (tx, ty, 1)

P′ → [x + tx]   [1 0 tx] [x]
     [y + ty] = [0 1 ty] [y]
     [  1   ]   [0 0 1 ] [1]
2D Translation using Homogeneous Coordinates

P = (x, y) → (x, y, 1)
t = (tx, ty) → (tx, ty, 1)

P′ → [x + tx]   [1 0 tx] [x]   [I t]
     [y + ty] = [0 1 ty] [y] = [0 1] · P = T · P
     [  1   ]   [0 0 1 ] [1]
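A minimal numpy sketch of translation in homogeneous coordinates; the helper name and values are illustrative:

```python
import numpy as np

def translation(tx, ty):
    """Homogeneous 2D translation matrix T = [[I, t], [0, 1]]."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# Stick a 1 at the end of the point (2, 3), then translate by (5, -1)
P = np.array([2.0, 3.0, 1.0])
T = translation(5.0, -1.0)
P_new = T @ P
print(P_new)  # [7. 2. 1.]
```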
Scaling
(Figure: point P scaled to P′.)

Scaling Equation

P = (x, y) → P′ = (sx·x, sy·y)

P = (x, y) → (x, y, 1)
P′ = (sx·x, sy·y) → (sx·x, sy·y, 1)
Scaling Equation

P = (x, y) → P′ = (sx·x, sy·y)

P = (x, y) → (x, y, 1)
P′ = (sx·x, sy·y) → (sx·x, sy·y, 1)

P′ → [sx·x]   [sx 0  0] [x]   [S′ 0]
     [sy·y] = [0  sy 0] [y] = [0  1] · P = S · P
     [ 1  ]   [0  0  1] [1]
Scaling & Translating

P′ = S · P
P′′ = T · P′
P′′ = T · P′ = T · (S · P) = T · S · P
Scaling & Translating

P′′ = T · S · P
    = [1 0 tx]   [sx 0  0] [x]
      [0 1 ty] · [0  sy 0] [y]
      [0 0 1 ]   [0  0  1] [1]

    = [sx 0  tx] [x]   [sx·x + tx]   [S t] [x]
      [0  sy ty] [y] = [sy·y + ty] = [0 1] [y]
      [0  0  1 ] [1]   [    1    ]         [1]
Translating & Scaling != Scaling & Translating

P′′′ = T · S · P = [1 0 tx] [sx 0  0] [x]   [sx 0  tx] [x]   [sx·x + tx]
                   [0 1 ty] [0  sy 0] [y] = [0  sy ty] [y] = [sy·y + ty]
                   [0 0 1 ] [0  0  1] [1]   [0  0  1 ] [1]   [    1    ]

P′′′ = S · T · P = [sx 0  0] [1 0 tx] [x]   [sx 0  sx·tx] [x]   [sx·x + sx·tx]
                   [0  sy 0] [0 1 ty] [y] = [0  sy sy·ty] [y] = [sy·y + sy·ty]
                   [0  0  1] [0 0 1 ] [1]   [0  0    1  ] [1]   [      1     ]
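The non-commutativity is easy to check numerically; a sketch with arbitrary scale and translation values:

```python
import numpy as np

S = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])  # scale by (2, 3)
T = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 7.0],
              [0.0, 0.0, 1.0]])  # translate by (5, 7)
P = np.array([1.0, 1.0, 1.0])

scale_then_translate = T @ S @ P  # [2*1 + 5, 3*1 + 7, 1] = [7, 10, 1]
translate_then_scale = S @ T @ P  # [2*(1+5), 3*(1+7), 1] = [12, 24, 1]
print(scale_then_translate, translate_then_scale)
```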
Rotation
(Figure: point P rotated to P′.)
Rotation Equations

Counter-clockwise rotation by an angle θ:

x′ = cos θ · x − sin θ · y
y′ = cos θ · y + sin θ · x

[x′]   [cos θ  −sin θ] [x]
[y′] = [sin θ   cos θ] [y]

P′ = R P
Rotation Matrix Properties

[x′]   [cos θ  −sin θ] [x]      x′ = cos θ · x − sin θ · y
[y′] = [sin θ   cos θ] [y]      y′ = cos θ · y + sin θ · x

A 2D rotation matrix is 2x2.

Note: R belongs to the category of normal matrices and satisfies many interesting properties:

R · Rᵀ = Rᵀ · R = I
det(R) = 1
Rotation Matrix Properties
• Transpose of a rotation matrix produces a rotation in the opposite direction:
  R · Rᵀ = Rᵀ · R = I
  det(R) = 1
• The rows of a rotation matrix are always mutually perpendicular (a.k.a. orthogonal) unit vectors
  – (and so are its columns)
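These properties can be verified numerically; a sketch for an arbitrary angle:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# R is orthogonal: its transpose is its inverse
orthogonal = np.allclose(R @ R.T, np.eye(2))
unit_det = np.isclose(np.linalg.det(R), 1.0)

# R.T rotates in the opposite direction: it undoes R
p = np.array([2.0, 1.0])
undone = np.allclose(R.T @ (R @ p), p)
print(orthogonal, unit_det, undone)  # True True True
```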
Rotation Equation

P = (x, y) → P′ = (cos θ·x − sin θ·y, cos θ·y + sin θ·x)

In homogeneous coordinates:
(x, y, 1) → (cos θ·x − sin θ·y, cos θ·y + sin θ·x, 1)
Rotation Equation

P = (x, y) → P′ = (cos θ·x − sin θ·y, cos θ·y + sin θ·x)

P = (x, y) → (x, y, 1)
P′ = (cos θ·x − sin θ·y, cos θ·y + sin θ·x) → (cos θ·x − sin θ·y, cos θ·y + sin θ·x, 1)

P′ → [cos θ·x − sin θ·y]   [cos θ  −sin θ  0] [x]   [R′ 0]
     [cos θ·y + sin θ·x] = [sin θ   cos θ  0] [y] = [0  1] · P = R · P
     [        1        ]   [  0       0    1] [1]
Scaling + Rotation + Translation

P′ = (T R S) P

P′ = T · R · S · P
   = [1 0 tx]   [cos θ  −sin θ  0]   [sx 0  0] [x]
     [0 1 ty] · [sin θ   cos θ  0] · [0  sy 0] [y]
     [0 0 1 ]   [  0       0    1]   [0  0  1] [1]

   = [cos θ  −sin θ  tx]   [sx 0  0] [x]
     [sin θ   cos θ  ty] · [0  sy 0] [y]
     [  0       0    1 ]   [0  0  1] [1]

   = [R t] [S 0] [x]   [R·S  t] [x]
     [0 1] [0 1] [y] = [ 0   1] [y]
                 [1]            [1]

This is the form of the general-purpose transformation matrix.
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
  The inverse of a transformation matrix reverses its effect.
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Inverse
• Given a matrix A, its inverse A⁻¹ is a matrix such that AA⁻¹ = A⁻¹A = I
• E.g. [2 0; 0 3]⁻¹ = [1/2 0; 0 1/3]
• Inverse does not always exist. If A⁻¹ exists, A is invertible or non-singular. Otherwise, it's singular.
• Useful identities, for matrices that are invertible:
  – (A⁻¹)⁻¹ = A
  – (AB)⁻¹ = B⁻¹A⁻¹
  – (A⁻¹)ᵀ = (Aᵀ)⁻¹
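A quick numpy illustration of the inverse, and of a singular matrix that has none (the example matrices are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True

# A singular matrix has no inverse: np.linalg.inv raises LinAlgError
singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])  # det = 0
try:
    np.linalg.inv(singular)
except np.linalg.LinAlgError:
    print("singular, not invertible")
```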
Matrix Operations
• Pseudoinverse
  – Fortunately, there are workarounds to solve AX = B when A is not invertible. And python can do them!
  – For square, invertible A, directly ask python to solve for X in AX = B by typing np.linalg.solve(A, B)
  – For singular or non-square A, np.linalg.lstsq(A, B) solves the system in the least-squares sense (this is where the pseudoinverse comes in)
  – Python will return the value of X which solves the equation
    • If there is no exact solution, lstsq will return the closest one
    • If there are many solutions, it will return the smallest one
Matrix Operations
• Python example:

>> import numpy as np
>> # A and B chosen here so the solution matches the output below
>> A = np.array([[2., 4.], [6., 8.]])
>> B = np.array([0., 2.])
>> x = np.linalg.solve(A, B)
>> print(x)
[ 1.  -0.5]
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
  The rank of a transformation matrix tells you how many dimensions it transforms a vector to.
• Eigenvalues and Eigenvectors
• Matrix Calculus
Linear independence
• Suppose we have a set of vectors v1, …, vn
• If we can express v1 as a linear combination of the other vectors v2…vn, then v1 is linearly dependent on the other vectors.
  – The direction v1 can be expressed as a combination of the directions v2…vn. (E.g. v1 = .7 v2 − .7 v4)
• If no vector is linearly dependent on the rest of the set, the set is linearly independent.
  – Common case: a set of vectors v1, …, vn is always linearly independent if each vector is perpendicular to every other vector (and non-zero)
Linear independence
(Figure: a linearly independent set of vectors vs. a set that is not linearly independent.)
Matrix rank
• Column/row rank
  – col-rank(A): the maximum number of linearly independent columns of A
  – row-rank(A): the maximum number of linearly independent rows of A
  – Column rank always equals row rank
• Matrix rank: rank(A) = col-rank(A) = row-rank(A)
Matrix rank
• For transformation matrices, the rank tells you the dimensions of the output
• E.g. if rank of A is 1, then the transformation p′ = Ap maps points onto a line.
• Here's a matrix with rank 1:
  [1 2]
  [2 4]
  All points get mapped to the line y = 2x
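A numpy sketch of this rank-1 example; the test points are arbitrary:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is 2x the first

print(np.linalg.matrix_rank(A))  # 1

# A rank-1 transform collapses the plane onto a line: outputs satisfy y = 2x
points = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, -2.0]]).T  # one point per column
out = A @ points
print(np.allclose(out[1], 2 * out[0]))  # True
```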
A matrix has "full rank" <-> it is invertible
• If an m x m matrix is rank m, we say it's "full rank"
  – Maps an m x 1 vector uniquely to another m x 1 vector
  – An inverse matrix can be found
• If rank < m, we say it's "singular"
  – At least one dimension is getting collapsed. No way to look at the result and tell what the input was
  – Inverse does not exist
• Inverse also doesn't exist for non-square matrices
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Eigenvector and Eigenvalue
• An eigenvector x of a linear transformation A is a non-zero vector that, when A is applied to it, does not change direction.
• Applying A to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue:
  Ax = λx,  x ≠ 0
Eigenvector and Eigenvalue
• We want to find all the eigenvalues of A: Ax = λx, x ≠ 0
• Which can be written as: Ax = λIx
• Therefore: (A − λI)x = 0
Eigenvector and Eigenvalue
• We can solve for eigenvalues by solving: (A − λI)x = 0
• Since we are looking for non-zero x, we can instead solve the above equation as: det(A − λI) = 0
Properties
• The trace of A is equal to the sum of its eigenvalues: tr(A) = Σᵢ λᵢ
• The determinant of A is equal to the product of its eigenvalues: det(A) = Πᵢ λᵢ
• The rank of A is equal to the number of non-zero eigenvalues of A.
• The eigenvalues of a diagonal matrix D = diag(d1, …, dn) are just the diagonal entries d1, …, dn
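These properties can be checked with np.linalg.eig; a sketch on a simple example matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Trace equals the sum of eigenvalues; determinant equals their product
print(np.isclose(np.trace(A), eigvals.sum()))        # True
print(np.isclose(np.linalg.det(A), eigvals.prod()))  # True

# Each eigenpair satisfies A v = lambda v
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))            # True
```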
Spectral theory
• We call an eigenvalue λ and an associated eigenvector an eigenpair.
• The space of vectors x where (A − λI)x = 0 is often called the eigenspace of A associated with the eigenvalue λ.
• The set of all eigenvalues of A is called its spectrum: σ(A) = {λ : det(A − λI) = 0}
Spectral theory
• The magnitude of the largest eigenvalue (in magnitude) is called the spectral radius: ρ(A) = max_{λ ∈ C} |λ|
  – Where C is the set of all eigenvalues of A
Spectral theory
• The spectral radius is bounded by the infinity norm of the matrix: ρ(A) ≤ ‖A‖∞
• Proof: Turn to a partner and prove this!
Spectral theory
• The spectral radius is bounded by the infinity norm of the matrix: ρ(A) ≤ ‖A‖∞
• Proof: Let λ and v be an eigenpair of A. Then
  |λ|·‖v‖∞ = ‖λv‖∞ = ‖Av‖∞ ≤ ‖A‖∞·‖v‖∞
  and dividing by ‖v‖∞ ≠ 0 gives |λ| ≤ ‖A‖∞.
Diagonalization
• An n × n matrix A is diagonalizable if it has n linearly independent eigenvectors.
• Most square matrices (in a sense that can be made mathematically rigorous) are diagonalizable:
  – Normal matrices are diagonalizable
  – Matrices with n distinct eigenvalues are diagonalizable
• Lemma: Eigenvectors associated with distinct eigenvalues are linearly independent.
Diagonalization
• Eigenvalue equation: AV = VD
  – Where D is a diagonal matrix of the eigenvalues and the columns of V are the eigenvectors
Diagonalization
• Eigenvalue equation: AV = VD
• Assuming all λi's are unique: A = VDV⁻¹
• Remember that the inverse of an orthogonal matrix is just its transpose, so when the eigenvectors are orthogonal we get A = VDVᵀ
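A numpy sketch of diagonalization for a symmetric example, where np.linalg.eigh returns orthonormal eigenvectors:

```python
import numpy as np

# A symmetric matrix, so its eigenvectors are orthonormal
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

D_vals, V = np.linalg.eigh(A)
D = np.diag(D_vals)

# A = V D V^T, and V^{-1} = V^T since V is orthogonal
print(np.allclose(A, V @ D @ V.T))      # True
print(np.allclose(V.T @ V, np.eye(2)))  # True
```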
Symmetric matrices
• Properties:
  – For a symmetric matrix A, all the eigenvalues are real.
  – The eigenvectors of A are orthonormal.
Symmetric matrices
• Therefore: A = VDVᵀ
  – where V is orthonormal, so V⁻¹ = Vᵀ
• So, what can you say about the vector x that satisfies the following optimization?
  max xᵀAx subject to ‖x‖₂ = 1
Symmetric matrices
• Therefore: A = VDVᵀ, where V is orthonormal
• So, what can you say about the vector x that satisfies the following optimization?
  max xᵀAx subject to ‖x‖₂ = 1
  – It is the same as finding the eigenvector that corresponds to the largest eigenvalue of A.
Some applications of Eigenvalues
• PageRank
• Schrödinger's equation
• PCA
• We are going to use it to compress images in future classes
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Matrix Calculus – The Gradient
• Let a function f : ℝ^(m×n) → ℝ take as input a matrix A of size m × n and return a real value.
• Then the gradient of f is the m × n matrix of partial derivatives ∇_A f(A).
Matrix Calculus – The Gradient
• Every entry in the matrix is: (∇_A f(A))ᵢⱼ = ∂f(A)/∂Aᵢⱼ
• The size of ∇_A f(A) is always the same as the size of A. So if A is just a vector x:
  ∇_x f(x) = [∂f(x)/∂x₁, ∂f(x)/∂x₂, …, ∂f(x)/∂xₙ]ᵀ
Exercise
(Worked example: computing the gradient of a given function, shown on the slide.)
Matrix Calculus – The Gradient
• Properties:
  – ∇_x (f(x) + g(x)) = ∇_x f(x) + ∇_x g(x)
  – For t ∈ ℝ, ∇_x (t·f(x)) = t·∇_x f(x)
Matrix Calculus – The Hessian
• The Hessian matrix with respect to x, written ∇²_x f(x) or simply as H, is the matrix of second partial derivatives.
• The Hessian of a function of an n-dimensional vector is an n × n matrix.
Matrix Calculus – The Hessian
• Each entry can be written as: Hᵢⱼ = ∂²f(x)/∂xᵢ∂xⱼ
• Exercise: Why is the Hessian always symmetric?
Matrix Calculus – The Hessian
• Each entry can be written as: Hᵢⱼ = ∂²f(x)/∂xᵢ∂xⱼ
• The Hessian is always symmetric, because ∂²f(x)/∂xᵢ∂xⱼ = ∂²f(x)/∂xⱼ∂xᵢ
• This is known as Schwarz's theorem: the order of partial derivatives doesn't matter as long as the second derivatives exist and are continuous.
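Schwarz's theorem can be illustrated numerically with finite differences; both the function and the step size below are arbitrary choices for the sketch:

```python
import numpy as np

def f(x):
    # An arbitrary smooth function of a 2-D vector (for illustration only)
    return x[0] ** 2 * x[1] + np.sin(x[0] * x[1])

def second_partial(f, x, i, j, h=1e-5):
    """Numerically approximate d/dx_i (d/dx_j f) at x with central differences."""
    def df_j(y):
        e = np.zeros_like(y)
        e[j] = h
        return (f(y + e) - f(y - e)) / (2 * h)
    e = np.zeros_like(x)
    e[i] = h
    return (df_j(x + e) - df_j(x - e)) / (2 * h)

x0 = np.array([0.5, -0.3])
dxy = second_partial(f, x0, 0, 1)
dyx = second_partial(f, x0, 1, 0)
print(np.isclose(dxy, dyx, atol=1e-4))  # True: the order doesn't matter
```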
Matrix Calculus – The Hessian
• Note that the Hessian is not the gradient of the whole gradient of a vector (this is not defined). It is actually the gradient of every entry of the gradient of the vector.
Matrix Calculus – The Hessian
• E.g., the first column is the gradient of ∂f(x)/∂x₁
Exercise
(Worked example, shown on the slide.)
Exercise
Divide the summation into 3 parts depending on whether:
• i == k, or
• j == k
What we have learned
• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
