Matrix Algebra
Peter J. Hammond
My email is p.j.hammond@warwick.ac.uk
or hammond@stanford.edu
A link to these lecture slides can be found at
https://web.stanford.edu/~hammond/pjhLects.html
Vectors
Vectors and Inner Products
Addition, Subtraction, and Scalar Multiplication
Linear versus Affine Functions
Norms and Unit Vectors
Orthogonality
The Canonical Basis
Linear Independence and Dimension
Matrices
Matrices and Their Transposes
Matrix Multiplication: Definition
x = (1/2)(b1 + b2),   y = (1/2)(b1 − b2)
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 3 of 71
Using Matrix Notation, I
Matrix notation allows the two equations
1x + 1y = b1
1x − 1y = b2
to be expressed as
( 1   1 ) ( x )   ( b1 )
( 1  −1 ) ( y ) = ( b2 )

or as Az = b, where

A = ( 1   1 ),   z = ( x ),   and   b = ( b1 ).
    ( 1  −1 )        ( y )              ( b2 )
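The algebra above can be checked numerically. The following is a minimal sketch (not part of the slides; the helper name `solve_2x2` is chosen here) that solves an invertible 2 × 2 system by Cramer's rule; with the particular A above it reproduces x = (1/2)(b1 + b2) and y = (1/2)(b1 − b2).

```python
# Sketch: solve the 2x2 system A z = b by Cramer's rule, where
# A = [[a, b], [c, d]] with det A = ad - bc assumed non-zero.

def solve_2x2(a, b, c, d, b1, b2):
    """Solve [[a, b], [c, d]] @ (x, y) = (b1, b2) via Cramer's rule."""
    det = a * d - b * c
    assert det != 0, "matrix must be invertible"
    x = (d * b1 - b * b2) / det
    y = (a * b2 - c * b1) / det
    return x, y

# The system x + y = b1, x - y = b2 with b1 = 5, b2 = 1:
x, y = solve_2x2(1, 1, 1, -1, 5, 1)
print(x, y)  # x = (b1 + b2)/2 = 3.0, y = (b1 - b2)/2 = 2.0
```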
ax + by = u = 1u + 0v
cx + dy = v = 0u + 1v
ax + by = u and cx + dy = v
x^⊤ = (x1, x2, . . . , xm).
Exercise
Consider the quadratic form f(w) := w^⊤ w
as a function f : R^n → R of the column n-vector w.
Explain why f(w) ≥ 0 for all w ∈ R^n,
with equality if and only if w = 0,
where 0 denotes the zero vector of R^n.
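A quick numerical check of this exercise (an illustrative sketch; the function name `quad_form` is chosen here): w^⊤ w is the sum of squared components, so it is non-negative and vanishes only at the zero vector.

```python
# Sketch: f(w) = w'w is the sum of squared components, hence >= 0,
# with equality exactly when every component is zero.

def quad_form(w):
    """Return w'w for a vector w given as a list of reals."""
    return sum(wi * wi for wi in w)

print(quad_form([3.0, -4.0]))       # 25.0: never negative
print(quad_form([0.0, 0.0, 0.0]))   # 0.0: only at the zero vector
```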
si = xi + yi and di = xi − yi
for i = 1, 2, . . . , n.
The scalar product λx of any scalar λ ∈ R
and vector x = (x_i)_{i=1}^n ∈ R^n is constructed by multiplying
each component of the vector x by the scalar λ — i.e.,
λx = (λx_i)_{i=1}^n
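These componentwise rules can be sketched directly (illustrative helper names, not from the slides), with vectors represented as Python lists of equal length n:

```python
# Sketch: componentwise vector addition, subtraction,
# and scalar multiplication for vectors given as lists.

def vec_add(x, y):
    return [xi + yi for xi, yi in zip(x, y)]

def vec_sub(x, y):
    return [xi - yi for xi, yi in zip(x, y)]

def scal_mul(lam, x):
    return [lam * xi for xi in x]

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(vec_add(x, y))     # [5.0, 7.0, 9.0]    (s_i = x_i + y_i)
print(vec_sub(x, y))     # [-3.0, -3.0, -3.0] (d_i = x_i - y_i)
print(scal_mul(2.0, x))  # [2.0, 4.0, 6.0]    (lambda times x_i)
```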
Definition
A vector (or linear) space V over an algebraic field F
is a combination ⟨V, F, +, ·⟩ of:
▶ a set V of vectors;
▶ the field F of scalars;
▶ the binary operation V × V ∋ (u, v) ↦ u + v ∈ V
of vector addition;
▶ the binary operation F × V ∋ (α, u) ↦ αu ∈ V
of multiplication by a scalar.
These are required to satisfy
all of the following eight vector space axioms.
Exercise
Prove that 0v = 0 for all v ∈ V .
Hint: Which three axioms justify the following chain of equalities
0v = [1 + (−1)]v = 1v + (−1)v = v − v = 0 ?
Exercise
Given an arbitrary algebraic field F, let F^n denote
the space of all lists ⟨a_i⟩_{i=1}^n of n elements a_i ∈ F
— i.e., the n-fold Cartesian product of F with itself.
1. Show how to construct the respective binary operations
F^n × F^n ∋ (x, y) ↦ x + y ∈ F^n
F × F^n ∋ (λ, x) ↦ λx ∈ F^n
Definition
A linear combination of vectors is the weighted sum ∑_{h=1}^k λ_h x^h,
where x^h ∈ V and λ_h ∈ F for h = 1, 2, . . . , k.
Exercise
By induction on k, show that the vector space axioms
imply that any linear combination of vectors in V
must also belong to V .
Definition
A function V ∋ u ↦ f(u) ∈ F is linear provided that
f(λu + μv) = λf(u) + μf(v) for all u, v ∈ V and all λ, μ ∈ F.
Exercise
Prove that the function V ∋ u ↦ f(u) ∈ F is linear
if and only if both:
1. for every vector v ∈ V and scalar λ ∈ F
one has f (λv) = λf (v);
2. for every pair of vectors u, v ∈ V
one has f (u + v) = f (u) + f (v).
Exercise
In case V = R^n and F = R, show that
any linear function is homogeneous of degree 1,
meaning that f(λv) = λf(v) for all λ ∈ R and all v ∈ R^n.
In particular, putting λ = 0 gives f(0) = 0.
What is the corresponding property in case V = Q^n and F = Q?
Affine Functions
Definition
A function g : V → F is said to be affine
if there is a scalar additive constant α ∈ F
and a linear function f : V → F such that g (v) ≡ α + f (v).
Exercise
Under what conditions is an affine function g : R → R linear
when its domain R is regarded as a vector space?
Theorem
Except in the trivial case when H has only one member,
Cauchy's functional equation F(∑_{h∈H} y_h) ≡ ∑_{h∈H} f_h(y_h)
is satisfied for functions F, f_h : Q → R if and only if:
1. there exists a single function φ : Q → R such that
Proof.
Suppose f_h(y_h) ≡ α_h + ρy_h for all h ∈ H, and F(Y) ≡ A + ρY.
Then Cauchy's functional equation F(∑_{h∈H} y_h) ≡ ∑_{h∈H} f_h(y_h)
is obviously satisfied provided that A = ∑_{h∈H} α_h.
Proof.
To prove part 1, consider any i ∈ H and all q ∈ Q.
Note that Cauchy's equation F(∑_h y_h) ≡ ∑_h f_h(y_h)
implies that F(q) = f_i(q) + ∑_{h≠i} f_h(0)
and also F(0) = f_i(0) + ∑_{h≠i} f_h(0).
Now define the function φ(q) := F(q) − F(0) on the domain Q.
Subtracting the second equation from the first gives φ(q) = f_i(q) − f_i(0).
Since H has at least two members, choose j ∈ H with j ≠ i;
evaluating Cauchy's equation at y_i = q, y_j = q′, and y_h = 0 otherwise yields
φ(q + q′) = F(q + q′) − F(0)
          = f_i(q) − f_i(0) + f_j(q′) − f_j(0)
          = φ(q) + φ(q′)
The unit ball B ⊂ R^n is the solid set (like a cricket ball or golf ball)
B := { x ∈ R^n : ∑_{i=1}^n x_i² ≤ 1 }
Example
A prominent orthonormal set is the canonical basis of R^n,
defined as the set of n different n-vectors e^i (i = 1, 2, . . . , n)
whose respective components (e^i_j)_{j=1}^n
satisfy e^i_j = δ_{ij} for all j ∈ {1, 2, . . . , n}.
Exercise
Show that each n-vector x = (x_i)_{i=1}^n is a linear combination
x = (x_i)_{i=1}^n = ∑_{i=1}^n x_i e^i
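A small sketch of this exercise (helper names chosen here): build the canonical basis from the Kronecker delta and check that ∑_i x_i e^i reproduces x.

```python
# Sketch: the canonical basis e^i of R^n has components e^i_j = delta_ij;
# any x is recovered as the linear combination sum_i x_i * e^i.

def canonical_basis(n):
    """Return the basis vectors e^1, ..., e^n as lists."""
    return [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]

def reconstruct(x):
    """Form sum_i x_i e^i, which should reproduce x."""
    n = len(x)
    total = [0.0] * n
    for xi, ei in zip(x, canonical_basis(n)):
        total = [t + xi * e for t, e in zip(total, ei)]
    return total

x = [2.0, -1.0, 5.0]
print(reconstruct(x))  # [2.0, -1.0, 5.0], i.e. x itself
```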
Example
Consider the case when each vector x ∈ Rn is a quantity vector,
whose components are (xi )ni=1 ,
where xi indicates the net quantity of commodity i.
Then the ith unit vector e^i of the canonical basis of R^n
represents a commodity bundle
that consists of one unit of commodity i,
but nothing of every other commodity.
In case the row vector p^⊤ ∈ R^n
is a price vector for the same list of n commodities,
the value p^⊤ e^i of the ith unit vector e^i must equal p_i,
the price (of one unit) of the ith commodity.
Theorem
The function R^n ∋ x ↦ f(x) ∈ R is linear
if and only if there exists y ∈ R^n such that f(x) = y^⊤ x.
Proof.
Sufficiency is easy to check.
Conversely, note that x equals the linear combination ∑_{i=1}^n x_i e^i
of the n canonical basis vectors.
Hence, linearity of f implies that
f(x) = f(∑_{i=1}^n x_i e^i) = ∑_{i=1}^n x_i f(e^i) = ∑_{i=1}^n f(e^i) x_i = y^⊤ x
where y is the vector whose components are y_i = f(e^i).
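The proof suggests an algorithm: evaluate f at each canonical basis vector to recover the representing vector y. A sketch (illustrative names; the sample functional is hypothetical):

```python
# Sketch: recover the representing vector y of a linear functional f on R^n
# by evaluating f at each canonical basis vector: y_i = f(e^i).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def representing_vector(f, n):
    """Return y with y_i = f(e^i) for the canonical basis of R^n."""
    return [f([1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]

# A hypothetical linear functional f(x) = 2*x_1 - x_2 + 3*x_3:
f = lambda x: 2 * x[0] - x[1] + 3 * x[2]
y = representing_vector(f, 3)
print(y)  # [2.0, -1.0, 3.0]
print(dot(y, [1.0, 1.0, 1.0]) == f([1.0, 1.0, 1.0]))  # True
```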
Definition
The vector-valued function R^n ∋ x ↦ F(x) ∈ R^m is a linear transformation
just in case each of its m component functions R^n ∋ x ↦ F_i(x) ∈ R
(i = 1, 2, . . . , m) is linear.
Theorem
The function R^n ∋ x ↦ F(x) ∈ R^m is a linear transformation if and only if,
for each i = 1, 2, . . . , m, there exists y_i ∈ R^n such that F_i(x) = y_i^⊤ x.
Proof.
Sufficiency is obvious.
Conversely, because x equals the linear combination ∑_{i=1}^n x_i e^i
of the n canonical basis vectors {e^i}_{i=1}^n,
and because each component function R^n ∋ x ↦ F_i(x) is linear,
one has
F_i(x) = F_i(∑_{j=1}^n x_j e^j) = ∑_{j=1}^n x_j F_i(e^j) = y_i^⊤ x
where y_i is the vector whose components are F_i(e^j) for j = 1, 2, . . . , n.
Definition
A matrix representation
of the linear transformation R^n ∋ x ↦ F(x) ∈ R^m
relative to the canonical bases of R^n and R^m
is the m × n array whose n columns
are the m-vector images F(e^j) = (F_i(e^j))_{i=1}^m ∈ R^m
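This definition is constructive: stacking the images F(e^j) as columns yields the matrix. A sketch (the helper name and the sample map are chosen here for illustration):

```python
# Sketch: build the m x n matrix of a linear map F: R^n -> R^m
# whose j-th column is the image F(e^j) of the j-th canonical basis vector.

def matrix_of(F, n):
    """Return the matrix as a list of rows, given F mapping lists to lists."""
    cols = [F([1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# A hypothetical linear map F(x) = (x1 + x2, 2*x1) from R^2 to R^2:
F = lambda x: [x[0] + x[1], 2 * x[0]]
print(matrix_of(F, 2))  # [[1.0, 1.0], [2.0, 0.0]]
```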
Definition
A linear combination of the finite set {x^1, x^2, . . . , x^k} of vectors
is the scalar weighted sum ∑_{h=1}^k λ_h x^h,
where λ_h ∈ R for h = 1, 2, . . . , k.
Definition
The finite set {x^1, x^2, . . . , x^k} of vectors is linearly independent
just in case the only solution of the equation ∑_{h=1}^k λ_h x^h = 0
is the trivial solution λ_1 = λ_2 = · · · = λ_k = 0.
Alternatively, if the equation has a non-trivial solution,
then the set of vectors is linearly dependent.
Theorem
The finite set {x^1, x^2, . . . , x^k} of k n-vectors is linearly dependent
if and only if at least one of the vectors, say x^1 after reordering,
can be expressed as a linear combination of the others
— i.e., there exist scalars α_h (h = 2, 3, . . . , k)
such that x^1 = ∑_{h=2}^k α_h x^h.
Proof.
If x^1 = ∑_{h=2}^k α_h x^h, then (−1)x^1 + ∑_{h=2}^k α_h x^h = 0,
which is a non-trivial solution because the coefficient of x^1 is −1 ≠ 0.
Conversely, suppose ∑_{h=1}^k λ_h x^h = 0 has a non-trivial solution.
After reordering, we can suppose that λ_1 ≠ 0.
Then x^1 = ∑_{h=2}^k α_h x^h,
where α_h = −λ_h/λ_1 for h = 2, 3, . . . , k.
Theorem
If the finite set S = {x^1, x^2, . . . , x^k} of k non-zero n-vectors
is pairwise orthogonal, then it is linearly independent.
Proof.
Let s denote the linear combination ∑_{h=1}^k α_h x^h.
Then for each j = 1, . . . , k one has s · x^j = ∑_{h=1}^k α_h x^h · x^j.
In case S is pairwise orthogonal, one has x^h · x^j = 0 for all h ≠ j,
and so s · x^j = α_j x^j · x^j.
So when s = 0, it follows that α_j x^j · x^j = 0 for all j = 1, . . . , k.
Then, because we assumed that x^j ≠ 0,
we have α_j = 0 for all j = 1, . . . , k.
This proves that S is linearly independent.
Exercise
Show that the canonical basis of R^n is linearly independent.
Example
The previous exercise shows that the dimension of R^n is at least n.
Later results will imply that any set of k > n vectors in R^n
is linearly dependent.
This implies that the dimension of R^n is exactly n.
The element in row i and column j of the matrix product C = AB
is the inner product a_i^⊤ · b_j = c_{ij}
of the ith row of A and the jth column of B.
Exercise
Let X be any m × n matrix, and z any column n-vector.
1. Show that the matrix product z^⊤ X^⊤ X z is well-defined,
and that its value is a scalar.
2. By putting w = Xz in the previous exercise regarding the sign
of the quadratic form w^⊤ w, what can you conclude
about the value of the scalar z^⊤ X^⊤ X z?
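The conclusion of part 2 can be checked numerically (an illustrative sketch; helper names and the sample data matrix are chosen here): z^⊤ X^⊤ X z equals the squared Euclidean norm of w = Xz, hence it is a scalar and never negative.

```python
# Sketch: z'X'Xz equals the squared norm of w = Xz,
# so it is a scalar and never negative (positive semi-definite).

def mat_vec(X, z):
    """Multiply an m x n matrix (list of rows) by a column n-vector."""
    return [sum(xij * zj for xij, zj in zip(row, z)) for row in X]

def quad(X, z):
    """Return z'X'Xz = (Xz)'(Xz)."""
    w = mat_vec(X, z)
    return sum(wi * wi for wi in w)

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # a hypothetical 3 x 2 data matrix
print(quad(X, [1.0, -1.0]))  # non-negative for every z
print(quad(X, [0.0, 0.0]))   # 0.0 at z = 0
```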
Exercise for Econometricians I
Exercise
An econometrician has access to data series (such as time series)
involving the real values
▶ y_t (t = 1, 2, . . . , T) of one endogenous variable;
▶ x_{ti} (t = 1, 2, . . . , T and i = 1, 2, . . . , k)
of k different exogenous variables
— sometimes called explanatory variables or regressors.
The data are to be fitted to the linear regression model
y_t = ∑_{i=1}^k b_i x_{ti} + e_t
Exercise
For matrix multiplication, explain why there are two different
versions of the distributive law — namely
A(B + C) = AB + AC and (B + C)A = BA + CA.
Exercise
Let A, B, C denote three general matrices.
Give examples showing that:
1. The matrix AB might be defined, even if BA is not.
2. One can have AB = 0 even though A 6= 0 and B 6= 0.
3. If AB = AC and A 6= 0, it does not follow that B = C.
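Concrete instances of these three claims can be sketched as follows (the matrices and the helper `mat_mul` are chosen here for illustration, not taken from the slides):

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows; fail if shapes mismatch."""
    assert all(len(row) == len(B) for row in A), "inner dimensions must agree"
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# 1. AB defined (2x3 times 3x4) but BA is not (3x4 times 2x3).
A = [[1, 0, 2], [0, 1, 1]]
B = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
print(mat_mul(A, B))  # works; mat_mul(B, A) would fail the shape check

# 2. AB = 0 even though A != 0 and B != 0:
A2 = [[1, 0], [0, 0]]
B2 = [[0, 0], [0, 1]]
print(mat_mul(A2, B2))  # [[0, 0], [0, 0]]

# 3. AB = AC with A != 0 does not imply B = C:
C2 = [[0, 0], [5, 7]]
print(mat_mul(A2, B2) == mat_mul(A2, C2), B2 == C2)  # True False
```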