Mathematics For Economists: Linear Models and Matrix Algebra

Mathematics for Economists

Chapters 4-5
Linear Models and Matrix Algebra

Johann Carl Friedrich Gauss (1777–1855)
The Nine Chapters on the Mathematical Art (1000–200 BC)
Objectives of math for economists
• To understand mathematical economics problems by stating the unknown, the data, and the conditions
• To plan solutions to these problems by finding a connection between the data and the unknown
• To carry out your plans for solving mathematical economics problems
• To examine the solutions to mathematical economics problems for general insights into current and future problems
• Remember: Math econ is like love – a simple idea, but it can get complicated.
4. Linear Algebra
• Some history:
• The beginnings of matrices and determinants go back to the second century BC, although traces can be seen as far back as the fourth century BC. The ideas did not reach mainstream mathematics until the late 16th century.
• The Babylonians, around 300 BC, studied problems that led to simultaneous linear equations.
• The Chinese, between 200 BC and 100 BC, came much closer to matrices than the Babylonians. Indeed, the text Nine Chapters on the Mathematical Art, written during the Han Dynasty, gives the first known example of matrix methods.
• In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century, and larger ones by Leibniz and, in Japan, by Seki about 100 years later.
4. What is a Matrix?
• A matrix is a set of elements, organized into rows (running across) and columns (running down):

$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$

• a and d are the diagonal elements.
• b and c are the off-diagonal elements.
• Matrices are like plain numbers in many ways: they can be added, subtracted, and, in some cases, multiplied and inverted (divided).
4. Matrix: Details
• Examples:

$A = \begin{bmatrix} 1 & b \\ 1 & -d \end{bmatrix}; \quad b = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}$

• Dimensions of a matrix: number of rows by number of columns. The matrix A is a 2x2 matrix; b is a 1x3 matrix.
• A matrix with only one column or only one row is called a vector.
• If a matrix has an equal number of rows and columns, it is called a square matrix. Matrix A, above, is a square matrix.
• Usual notation: Upper case letters => matrices; lower case letters => vectors.
4.1 Basic Operations
• Addition, Subtraction, Multiplication

$\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a+e & b+f \\ c+g & d+h \end{bmatrix}$  Just add elements

$\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a-e & b-f \\ c-g & d-h \end{bmatrix}$  Just subtract elements

$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{bmatrix}$  Multiply each row by each column

$k \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} ka & kb \\ kc & kd \end{bmatrix}$  Multiply each element by the scalar
4.1 Matrix multiplication: Details
• Multiplication of matrices requires a conformability condition.
• The conformability condition for multiplication is that the column dimension of the lead matrix A must equal the row dimension of the lag matrix B.
• What are the dimensions of the vector, matrix, and result?

$aB = \begin{bmatrix} a_{11} & a_{12} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix} = c = \begin{bmatrix} c_{11} & c_{12} & c_{13} \end{bmatrix}$
$\quad = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} & a_{11}b_{13}+a_{12}b_{23} \end{bmatrix}$

• Dimensions: a(1x2), B(2x3) => c(1x3)
4.1 Basic Matrix Operations: Examples

Matrix addition:
$\begin{bmatrix} 2 & 1 \\ 7 & 9 \end{bmatrix} + \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 5 & 2 \\ 7 & 11 \end{bmatrix}$,  $A_{2x2} + B_{2x2} = C_{2x2}$

Matrix subtraction:
$\begin{bmatrix} 2 & 1 \\ 7 & 9 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 5 & 6 \end{bmatrix}$

Matrix multiplication:
$\begin{bmatrix} 2 & 1 \\ 7 & 9 \end{bmatrix} \times \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 25 & 27 \end{bmatrix}$,  $A_{2x2} \times B_{2x2} = C_{2x2}$

Scalar multiplication:
$\frac{1}{8} \begin{bmatrix} 2 & 4 \\ 6 & 1 \end{bmatrix} = \begin{bmatrix} 1/4 & 1/2 \\ 3/4 & 1/8 \end{bmatrix}$
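These operations map directly onto numpy; a minimal sketch (not from the slides) reproducing the examples above:

import numpy as np

A = np.array([[2, 1], [7, 9]])
B = np.array([[1, 0], [2, 3]])

A + np.array([[3, 1], [0, 2]])        # [[5, 2], [7, 11]]
A - B                                 # [[1, 1], [5, 6]]
A @ B                                 # [[4, 3], [25, 27]] (matrix product)
(1/8) * np.array([[2, 4], [6, 1]])    # [[0.25, 0.5], [0.75, 0.125]]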
4.1 Laws of Matrix Addition & Multiplication
• Commutative law of matrix addition: A + B = B + A

$A + B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}$

$B + A = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} + \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} b_{11}+a_{11} & b_{12}+a_{12} \\ b_{21}+a_{21} & b_{22}+a_{22} \end{bmatrix}$

• Matrix multiplication is distributive across addition:
A(B + C) = AB + AC (assuming conformability applies).
4.1 Matrix Multiplication
• Matrix multiplication is generally not commutative. That is, AB ≠ BA in general, even if BA is conformable (the dot products pair different rows and columns of A and B).

$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & -1 \\ 6 & 7 \end{bmatrix}$

$AB = \begin{bmatrix} 1(0)+2(6) & 1(-1)+2(7) \\ 3(0)+4(6) & 3(-1)+4(7) \end{bmatrix} = \begin{bmatrix} 12 & 13 \\ 24 & 25 \end{bmatrix}$

$BA = \begin{bmatrix} 0(1)+(-1)(3) & 0(2)+(-1)(4) \\ 6(1)+7(3) & 6(2)+7(4) \end{bmatrix} = \begin{bmatrix} -3 & -4 \\ 27 & 40 \end{bmatrix}$
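A quick numpy check of this example:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [6, 7]])

A @ B    # [[12, 13], [24, 25]]
B @ A    # [[-3, -4], [27, 40]]  -> AB != BA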
4.1 Matrix multiplication
• Exceptions to the non-commutative law: AB = BA holds, for example, when
B = a scalar,
B = the identity matrix I, or
B = the inverse of A, i.e., A^-1.

• Theorem: AB = AC does not imply B = C (there is no cancellation law).
Proof (by counterexample):

$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}; \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}; \quad C = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$

$AB = AC = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, yet B ≠ C.

Note: If AB = AC for all matrices A, then B = C.
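Verifying the counterexample numerically:

import numpy as np

A = np.array([[1, 1], [1, 1]])
B = np.eye(2)
C = np.array([[0, 1], [1, 0]])

np.array_equal(A @ B, A @ C)   # True: AB = AC
np.array_equal(B, C)           # False: yet B != C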
4.1 Transpose Matrix
• The transpose of a matrix A is another matrix AT (also written A′) created by either of the following equivalent actions:
- write the rows (columns) of A as the columns (rows) of AT
- reflect A by its main diagonal to obtain AT
• Formally, the (i,j) element of AT is the (j,i) element of A:
[AT]ij = [A]ji
• If A is an m × n matrix => AT is an n × m matrix.
• (A′)′ = A
• Conformability changes unless the matrix is square.

Example: $A = \begin{bmatrix} 3 & 8 & -9 \\ 1 & 0 & 4 \end{bmatrix} \Rightarrow A^T = \begin{bmatrix} 3 & 1 \\ 8 & 0 \\ -9 & 4 \end{bmatrix}$
4.1 Inverse of a Matrix
• Identity matrix: AI = A

$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Notation: Ij is a jxj identity matrix.

• Given A (mxn), the matrix B (nxm) is a right-inverse for A iff AB = Im.
• Given A (mxn), the matrix C (nxm) is a left-inverse for A iff CA = In.
4.1 Inverse of a Matrix
• Theorem: If A (mxm) has both a right-inverse B and a left-inverse C, then C = B.
Proof:
AB = Im and CA = Im. Thus,
C(AB) = C Im = C and C(AB) = (CA)B = Im B = B
=> C = B.
Note:
- This matrix is unique.
- If A has both a right and a left inverse, it is called invertible.
• Inversion is tricky:
(ABC)^-1 = C^-1 B^-1 A^-1
• More on this topic later.
4.1 Vector multiplication: Geometric interpretation
• Think of a vector as a directed line segment in N dimensions (it has "length" and "direction").
• Scalar multiplication "scales" the vector, i.e., it changes the length:
2u = [6 4], where u = [3 2]
-1u = [-3 -2]
• This is a source of linear dependence.

[Figure: u = [3 2], 2u = [6 4], and -1u = [-3 -2] plotted in the (x1, x2) plane]
4.1 Vector Addition: Geometric interpretation
• v′ = [2 3]
• u′ = [3 2]
• w′ = v′ + u′ = [5 5]
• Note that two vectors plus the concepts of addition and multiplication can create a two-dimensional space.

[Figure: u, v, and w = v + u plotted in the (x1, x2) plane]

• A vector space is a mathematical structure formed by a collection of vectors, which may be added together and multiplied by scalars. (It is closed under multiplication and addition.)
4.1 Vector Space
• Given a field R and a set V of objects, on which "vector addition" (VxV → V), denoted by "+", and "scalar multiplication" (RxV → V), denoted by ".", are defined.
If the following axioms are true for all objects u, v, and w in V and all scalars c and k in R, then V is called a vector space and the objects in V are called vectors.
1. u + v is in V (closed under addition).
2. u + v = v + u (vector addition is commutative).
3. θ is in V, such that u + θ = u (null element).
4. u + (v + w) = (u + v) + w (associative law of vector addition).
5. For each v, there is a -v, such that v + (-v) = θ.
6. c.u is in V (closed under scalar multiplication).
7. c.(k.u) = (c.k).u (scalar multiplication is associative).
4.1 Vector Space
8. c.(v + u) = (c.v) + (c.u)
9. (c + k).u = (c.u) + (k.u)
10. 1.u = u (unit element).
11. 0.u = θ (zero element).

• We can write S = {V, R, +, .} to denote an abstract vector space.
• This is a general definition. If the field R represents the real numbers, then we define a real vector space.

• Definition: Linear Combination
Given vectors u1, ..., uk, the vector w = c1u1 + ... + ckuk is called a linear combination of the vectors u1, ..., uk.
4.1 Vector Space
• Definition: Subspace
Given the vector space V and a set W of vectors, such that W is in V. Then W is a subspace iff:
u, v in W => u + v is in W, and
c.u is in W for every c in R.
4.1 System of equations: Matrices and Vectors
Assume an economic model as a system of linear equations in which
aij are parameters, where i = 1..n rows, j = 1..m columns, and n = m;
xi are endogenous variables;
di are exogenous variables and constants.

$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = d_1$
$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = d_2$
$\cdots$
$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = d_n$
4.1 System of equations: Matrices and Vectors
• The general matrix form of a system of linear equations is
Ax = d, where
A = matrix of parameters
x = column vector of endogenous variables
d = column vector of exogenous variables and constants

• Solve for x*:

$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{bmatrix}$

$Ax = d$
$x^* = A^{-1}d$
4.1 Solution of a General-equation System
• Assume the 2x2 model
2x + y = 12
4x + 2y = 24
• Why is this really one equation with two unknowns? Because 4x + 2y = 24 is just 2(2x + y) = 2(12).
Find x*, y*:
2x + y = 12
y = 12 - 2x
4x + 2(12 - 2x) = 24
4x + 24 - 4x = 24
0 = 0 ? Indeterminate!
Conclusion: not all simultaneous equation models have solutions (not all matrices have inverses).
4.1 Linear dependence
• A set of vectors is linearly dependent if any one of them can be expressed as a linear combination of the remaining vectors; otherwise, it is linearly independent.

• Formal definition: Linear independence
The set {u1, ..., uk} is called a linearly independent set of vectors iff
c1u1 + ... + ckuk = θ => c1 = c2 = ... = ck = 0.

• Notes:
- Dependence prevents solving a system of equations: there are more unknowns than independent equations.
- The number of linearly independent rows or columns in a matrix is the rank of the matrix (rank(A)).
4.1 Linear dependence
• Examples:

$v_1' = \begin{bmatrix} 5 & 12 \end{bmatrix}; \quad v_2' = \begin{bmatrix} 10 & 24 \end{bmatrix} \Rightarrow 2v_1' - v_2' = 0'$

$v_1 = \begin{bmatrix} 2 \\ 7 \end{bmatrix}; \quad v_2 = \begin{bmatrix} 1 \\ 8 \end{bmatrix}; \quad v_3 = \begin{bmatrix} 4 \\ 5 \end{bmatrix}$

$3v_1 - 2v_2 = \begin{bmatrix} 6 \\ 21 \end{bmatrix} - \begin{bmatrix} 2 \\ 16 \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \end{bmatrix} = v_3 \Rightarrow 3v_1 - 2v_2 - v_3 = 0$
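A short numpy sketch of the rank test mentioned above, applied to these examples:

import numpy as np

V = np.array([[5, 12],
              [10, 24]])                        # rows are v1', v2'
np.linalg.matrix_rank(V)                        # 1 < 2 -> linearly dependent

W = np.column_stack(([2, 7], [1, 8], [4, 5]))   # columns are v1, v2, v3
np.linalg.matrix_rank(W)                        # 2 < 3 -> v3 = 3*v1 - 2*v2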
4.2 Application 1: One Commodity Market Model (2x2 matrix)
Economic Model:
1) Qd = a - bP (a, b > 0)
2) Qs = -c + dP (c, d > 0)
3) Qd = Qs

• Find P* and Q*:

$P^* = \frac{a+c}{b+d}; \quad Q^* = \frac{ad-bc}{b+d}$

Scalar algebra form (endogenous vars :: constants):
4) 1Q + bP = a
5) 1Q - dP = -c
4.2 One Commodity Market Model (2x2 matrix)
Matrix algebra:

$\begin{bmatrix} 1 & b \\ 1 & -d \end{bmatrix} \begin{bmatrix} Q \\ P \end{bmatrix} = \begin{bmatrix} a \\ -c \end{bmatrix}$
$Ax = d$

$\begin{bmatrix} Q^* \\ P^* \end{bmatrix} = \begin{bmatrix} 1 & b \\ 1 & -d \end{bmatrix}^{-1} \begin{bmatrix} a \\ -c \end{bmatrix}$
$x^* = A^{-1}d$
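A minimal numerical sketch of this solution; the parameter values are illustrative assumptions, not from the slides:

import numpy as np

a, b, c, d = 10.0, 2.0, 4.0, 1.5     # assumed demand/supply parameters
A = np.array([[1.0, b],
              [1.0, -d]])
rhs = np.array([a, -c])

Q_star, P_star = np.linalg.solve(A, rhs)   # Q* = 2.0, P* = 4.0
# Matches the closed forms: P* = (a+c)/(b+d), Q* = (a*d - b*c)/(b+d)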
4.2 Application II: Three Equation National Income Model (3x3 matrix)
• Model
Y = C + I0 + G0
C = a + b(Y - T) (a > 0, 0 < b < 1)
T = d + tY (d > 0, 0 < t < 1)

• Endogenous variables?
• Exogenous variables?
• Constants?
• Parameters?
• Why restrictions on the parameters?
4.2 Three Equation National Income Model
• Endogenous: Y, C, T: Income (GNP), Consumption, and Taxes.
• Exogenous: I0 and G0: autonomous Investment & Government spending.
• Parameters:
a & d: autonomous consumption and taxes.
t: marginal propensity to tax gross income, 0 < t < 1.
b: marginal propensity to consume private goods and services from gross income, 0 < b < 1.

• Solution:

$Y^* = \frac{a - bd + I_0 + G_0}{1 - b + bt}$
4.2 Three Equation National Income Model
• Given the model, rearrange with endogenous variables on the left and parameters, exogenous variables, and constants on the right:
Y = C + I0 + G0   =>   1Y - 1C + 0T = I0 + G0
C = a + b(Y - T)  =>  -bY + 1C + bT = a
T = d + tY        =>  -tY + 0C + 1T = d

• Find Y*, C*, T*:

$\begin{bmatrix} 1 & -1 & 0 \\ -b & 1 & b \\ -t & 0 & 1 \end{bmatrix} \begin{bmatrix} Y \\ C \\ T \end{bmatrix} = \begin{bmatrix} I_0 + G_0 \\ a \\ d \end{bmatrix}$

$Ax = d$
$x^* = A^{-1}d$
4.2 Three Equation National Income Model

$\begin{bmatrix} 1 & -1 & 0 \\ -b & 1 & b \\ -t & 0 & 1 \end{bmatrix} \begin{bmatrix} Y \\ C \\ T \end{bmatrix} = \begin{bmatrix} I_0 + G_0 \\ a \\ d \end{bmatrix}$
$Ax = d$

$\begin{bmatrix} Y^* \\ C^* \\ T^* \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ -b & 1 & b \\ -t & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} I_0 + G_0 \\ a \\ d \end{bmatrix}$
$x^* = A^{-1}d$
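A similar sketch for the 3x3 model, again with assumed parameter values satisfying the stated restrictions:

import numpy as np

a, b, d, t = 50.0, 0.8, 10.0, 0.25   # assumed values; 0 < b < 1, 0 < t < 1
I0, G0 = 30.0, 40.0

A = np.array([[1.0, -1.0, 0.0],
              [-b,   1.0,  b ],
              [-t,   0.0,  1.0]])
rhs = np.array([I0 + G0, a, d])

Y, C, T = np.linalg.solve(A, rhs)    # Y* = 280, C* = 210, T* = 80
# Matches the closed form: Y* = (a - b*d + I0 + G0) / (1 - b + b*t)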
4.3 Notes on Vector Operations
• An [m x 1] column vector u and a [1 x n] row vector v yield a product matrix uv of dimension [m x n].

$u_{2x1} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}; \quad v_{1x3} = \begin{bmatrix} 1 & 4 & 5 \end{bmatrix}$

$uv_{2x3} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} \begin{bmatrix} 1 & 4 & 5 \end{bmatrix} = \begin{bmatrix} 3 & 12 & 15 \\ 2 & 8 & 10 \end{bmatrix}$
4.3 Vector multiplication: Dot (inner) and cross product

$y = c_1z_1 + c_2z_2 + c_3z_3 + c_4z_4 = \sum_{i=1}^{4} c_i z_i$

$y = \begin{bmatrix} c_1 & c_2 & c_3 & c_4 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ z_4 \end{bmatrix} = c'z$

• The dot product produces a scalar: c'z is (1x4)(4x1) = 1x1, and c'z = z'c.
4.3 Vectors: Dot Product
• Think of the dot product as a matrix multiplication:

$u^T v = \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} d \\ e \\ f \end{bmatrix} = ad + be + cf$

• The magnitude (length) is the square root of the dot product of a vector with itself:

$\|u\| = (u^T u)^{1/2} = \sqrt{aa + bb + cc}$

• The dot product is also related to the angle between the two vectors, but it does not give the angle directly:

$u \cdot v = \|u\| \|v\| \cos(\theta)$

Note: Since cos(90°) = 0, the dot product of two orthogonal vectors is zero.
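A small numpy illustration of these identities (not from the slides):

import numpy as np

u = np.array([3.0, 2.0])
v = np.array([2.0, 3.0])

u @ v                                  # dot product: 12.0
np.sqrt(u @ u)                         # magnitude ||u||, same as np.linalg.norm(u)
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
np.degrees(np.arccos(cos_theta))       # recover the angle between u and v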
4.3 Vectors: Magnitude and Phase (direction)

$v = (x_1, x_2, \ldots, x_n)^T$

$\|v\| = \sqrt{\sum_{i=1}^{n} x_i^2}$  (magnitude or "2-norm")

If $\|v\| = 1$, v is a unit vector (unit vector => pure direction).

Alternate representations:
Polar coords: (||v||, θ)
Complex numbers: ||v||e^(jθ)

[Figure: vector v in the (x, y) plane with magnitude ||v|| and phase angle θ]
4.3 Vectors: Cross Product
• The cross product of vectors A and B is a vector C which is perpendicular to A and B.
• The magnitude of C is proportional to the sine of the angle between A and B:

$\|a \times b\| = \|a\| \|b\| \sin(\theta)$

• The direction of C follows the right-hand rule; this is why we call it a "right-handed coordinate system".
4.3 Vectors: Cross Product: Right hand rule

[Figure: the right-hand rule for the direction of a x b]
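A numpy sketch of the cross product's properties:

import numpy as np

a = np.array([1.0, 0.0, 0.0])      # unit vector along x
b = np.array([0.0, 1.0, 0.0])      # unit vector along y

c = np.cross(a, b)                 # [0., 0., 1.]: along z, per the right-hand rule
c @ a, c @ b                       # (0.0, 0.0): perpendicular to both
np.linalg.norm(c)                  # 1.0 = |a||b|sin(90 deg)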
4.3 Vectors: Norm
• Given a vector space V, the function g: V → R is called a norm if and only if:
1) g(x) ≥ 0, for all x ε V
2) g(x) = 0 iff x = θ (the null vector)
3) g(αx) = |α|g(x) for all α ε R, x ε V
4) g(x+y) ≤ g(x) + g(y) ("triangle inequality") for all x, y ε V

• The norm is a generalization of the notion of size or length of a vector.

• An infinite number of functions can be shown to qualify as norms. For vectors in Rn, we have the following examples:
g(x) = maxi |xi|, g(x) = ∑i |xi|, g(x) = [∑i (xi)^4]^(1/4)
• Given a norm on a vector space, we can define a measure of "how far apart" two vectors are using the concept of a metric.
4.3 Vectors: Metric
• Given a vector space V, the function d: VxV → R is called a metric if and only if:
1) d(x,y) ≥ 0, for all x, y ε V
2) d(x,y) = 0 iff x = y
3) d(x,y) = d(y,x) for all x, y ε V
4) d(x,y) ≤ d(x,z) + d(z,y) ("triangle inequality") for all x, y, z ε V

• Given a norm g(.), we can define a metric by the equation:
d(x,y) = g(x - y).

• The metric induced by the Euclidean norm, d(x,y) = [(x-y)'(x-y)]^(1/2), is called the Euclidean distance metric.
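A short sketch of common norms and the induced metric in numpy:

import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 1.0])

np.linalg.norm(x, np.inf)    # max |x_i| = 4.0 (sup norm)
np.linalg.norm(x, 1)         # sum |x_i| = 7.0
np.linalg.norm(x, 2)         # Euclidean norm = 5.0

np.linalg.norm(x - y)        # induced metric: d(x, y) = g(x - y)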


4.3 Orthonormal Basis
• Basis: a space is totally defined by a set of vectors; any point is a linear combination of the basis.
• Ortho-Normal: orthogonal + normal
• Orthogonal: dot product is zero
• Normal: magnitude is one
• Example: X, Y, Z (but the basis vectors don't have to be these!)

$x = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T, \quad y = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^T, \quad z = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T$
$x \cdot y = 0, \quad x \cdot z = 0, \quad y \cdot z = 0$

• X, Y, Z is an orthonormal basis. We can describe any 3D point as a linear combination of these vectors.
4.3 Orthonormal Basis
• How do we express any point as a combination of a new basis U, V, N, given X, Y, Z?

$\begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} \begin{bmatrix} u_1 & v_1 & n_1 \\ u_2 & v_2 & n_2 \\ u_3 & v_3 & n_3 \end{bmatrix} = \begin{bmatrix} a \cdot u & b \cdot u & c \cdot u \\ a \cdot v & b \cdot v & c \cdot v \\ a \cdot n & b \cdot n & c \cdot n \end{bmatrix}$

(not an actual formula, just a way of thinking about it)

• To change a point from one coordinate system to another, compute the dot product of each coordinate row with each of the basis vectors.
4.5 Identity and Null Matrices
• Identity matrix: a square, diagonal matrix with 1s along the diagonal. Similar to scalar "1".

$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, etc.

• Null matrix: one in which all elements are zero. Similar to scalar "0".

$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

• Both are diagonal matrices.
• Both are symmetric and idempotent matrices:
A = A^T and
A = A^2 = A^3 = ...
4.6 Inverse matrix
• AA^-1 = I and A^-1A = I.
• A matrix must be square to have a unique inverse.
• If an inverse exists for a square matrix, it is unique.
• (A')^-1 = (A^-1)'
• Use: Ax = d => A^-1Ax = A^-1d => Ix = A^-1d => x = A^-1d.
• The solution depends on A^-1 existing: linear independence; determinant test!
4.6 Inverse of a Matrix

$\begin{bmatrix} a & b & c & | & 1 & 0 & 0 \\ d & e & f & | & 0 & 1 & 0 \\ g & h & i & | & 0 & 0 & 1 \end{bmatrix}$

1. Append the identity matrix to A.
2. Subtract multiples of the other rows from each row to reduce its diagonal element to 1.
3. Transform the identity matrix as you go.
4. When the original matrix is the identity, the identity has become the inverse!
4.6 Determination of the Inverse (Gauss-Jordan Elimination)
• AX = I, where A, X, and I are all (nxn) square matrices and X = A^-1.

1) Augmented matrix: [A I] -> [U H] (Gauss elimination; U: upper triangular)
2) Transform the augmented matrix by further row operations: [U H] -> [I K] (Gauss-Jordan elimination)

Then IX = K, so X = A^-1 = K.
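A minimal Gauss-Jordan inversion sketch in Python, assuming A is invertible (partial pivoting added for numerical stability):

import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix A by row-reducing the augmented matrix [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # step 1: append the identity
    for i in range(n):
        p = i + np.argmax(np.abs(aug[i:, i]))  # partial pivoting
        aug[[i, p]] = aug[[p, i]]              # swap pivot row into place
        aug[i] = aug[i] / aug[i, i]            # scale pivot row: diagonal -> 1
        for r in range(n):
            if r != i:
                aug[r] = aug[r] - aug[r, i] * aug[i]   # clear column i elsewhere
    return aug[:, n:]                          # right block is now A^-1

A = np.array([[2.0, 1.0], [7.0, 9.0]])
np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2))   # True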
4.6 Determinant of a Matrix
• The determinant is a number associated with any square matrix.
• If A is an nxn matrix, the determinant is written |A| or det(A).
• Determinants are used to characterize invertible matrices: a matrix is invertible (non-singular) if and only if it has a non-zero determinant.
• That is, if |A| ≠ 0 => A is invertible.
• Determinants are used to describe the solution to a system of linear equations with Cramer's rule.
• They can be found using factorials, pivots, and cofactors! More on this later.
• Lots of interpretations.
4.6 Determinant of a Matrix
• Used for inversion. Example: inverse of a 2x2 matrix:

$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}; \quad |A| = \det(A) = ad - bc$

$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

The matrix $\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$ is called the adjugate of A (or adj(A)):
A^-1 = adj(A)/|A|
4.6 Determinant of a Matrix (3x3)

$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - afh - bdi - ceg$

Sarrus' Rule: copy the first two columns to the right of the matrix; sum the products along the three left-to-right diagonals, then subtract the products along the three right-to-left diagonals.

Note: the determinant of an nxn matrix has n! terms.
4.6 Determinants: Laplace formula
• The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.
• The Laplace formula (or expansion) expresses the determinant |A| as a sum of n determinants of (n-1) x (n-1) sub-matrices of A. There are 2n such expansions, one for each row and each column of A.
• Define the i,j minor Mij (usually written as |Mij|) of A as the determinant of the (n-1) x (n-1) matrix that results from deleting the i-th row and the j-th column of A.

Pierre-Simon Laplace (1749–1827).
4.6 Determinants: Laplace formula
• Define Cij, the (i,j) cofactor of A, as:

$C_{i,j} = (-1)^{i+j} |M_{i,j}|$

• The cofactor matrix of A, denoted by C, is defined as the nxn matrix whose (i,j) entry is the (i,j) cofactor of A. The transpose of C is called the adjugate or adjoint of A (adj(A)).

• Theorem (Determinant as a Laplace expansion)
Suppose A = [aij] is an nxn matrix and i, j ε {1, 2, ..., n}. Then the determinant

$|A| = a_{i1}C_{i1} + a_{i2}C_{i2} + \ldots + a_{in}C_{in}$ (expansion along row i)
$\quad\; = a_{1j}C_{1j} + a_{2j}C_{2j} + \ldots + a_{nj}C_{nj}$ (expansion along column j)
4.6 Determinants: Laplace formula
• Example:

$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -1 & 0 \\ 2 & 4 & 6 \end{bmatrix}$

$|A| = 1 \times C_{11} + 2 \times C_{12} + 3 \times C_{13} = 1 \times (-6) + 2 \times 0 + 3 \times 2 = 0$ (expansion along row 1)
$|A| = 2 \times C_{12} + (-1) \times C_{22} + 4 \times C_{32} = 2 \times 0 + (-1)(1 \times 6 - 3 \times 2) + 4 \times 0 = 0$ (expansion along column 2)

• |A| is zero => the matrix is singular. (Check!)
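Checking the example with numpy:

import numpy as np

A = np.array([[1, 2, 3],
              [0, -1, 0],
              [2, 4, 6]])

np.linalg.det(A)    # 0.0 (up to floating point): singular, since row 3 = 2 x row 1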
4.6 Determinants: Properties
• Interchanging the rows and columns (transposing) does not affect |A|. (Corollary: |A| = |A'|.)
• |kA| = k^n |A|, where k is a scalar and A is nxn.
• |I| = 1, where I is the identity matrix.
• |AB| = |A||B|.
• |A^-1| = 1/|A|.
4.6 Matrix inversion: Note
• It is not possible to divide one matrix by another. That is, we cannot write A/B. For two matrices A and B, the quotient can be written as AB^-1 or B^-1A.
• In general, in matrix algebra, AB^-1 ≠ B^-1A.
• Thus, writing A/B does not clearly identify whether it represents AB^-1 or B^-1A.
• We say B^-1 post-multiplies A (for AB^-1) and B^-1 pre-multiplies A (for B^-1A).
• Matrix division is matrix inversion.
4.7 Application IV: Finite Markov Chains
• Markov processes are used to measure movements over time.
• Employees at time 0 are distributed over two plants A & B:

$x_0' = \begin{bmatrix} A_0 & B_0 \end{bmatrix} = \begin{bmatrix} 100 & 100 \end{bmatrix}$

• The employees stay at or move between the plants with known probabilities:

$M = \begin{bmatrix} P_{AA} & P_{AB} \\ P_{BA} & P_{BB} \end{bmatrix} = \begin{bmatrix} .7 & .3 \\ .4 & .6 \end{bmatrix}$

• At the end of one year, how many employees will be at each plant?

$\begin{bmatrix} A_1 & B_1 \end{bmatrix} = x_0'M = \begin{bmatrix} A_0 & B_0 \end{bmatrix} \begin{bmatrix} P_{AA} & P_{AB} \\ P_{BA} & P_{BB} \end{bmatrix} = \begin{bmatrix} A_0P_{AA} + B_0P_{BA} & A_0P_{AB} + B_0P_{BB} \end{bmatrix}$

$= \begin{bmatrix} 100 & 100 \end{bmatrix} \begin{bmatrix} .7 & .3 \\ .4 & .6 \end{bmatrix} = \begin{bmatrix} .7 \times 100 + .4 \times 100 & .3 \times 100 + .6 \times 100 \end{bmatrix} = \begin{bmatrix} 110 & 90 \end{bmatrix}$
4.7 Application IV: Finite Markov Chains
• Associative law of multiplication.
• At the end of two years, how many employees will be at each plant?

$\begin{bmatrix} A_1 & B_1 \end{bmatrix} = x_0'M = \begin{bmatrix} 110 & 90 \end{bmatrix}$

$\begin{bmatrix} A_2 & B_2 \end{bmatrix} = x_0'M^2 = (x_0'M)M = \begin{bmatrix} 110 & 90 \end{bmatrix} \begin{bmatrix} .7 & .3 \\ .4 & .6 \end{bmatrix} = \begin{bmatrix} .7 \times 110 + .4 \times 90 & .3 \times 110 + .6 \times 90 \end{bmatrix} = \begin{bmatrix} 113 & 87 \end{bmatrix}$
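The same chain in numpy (a minimal sketch):

import numpy as np

x0 = np.array([100.0, 100.0])            # employees at plants A and B at time 0
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])               # row i: probabilities of staying/moving

x0 @ M                                   # after one year: [110., 90.]
x0 @ np.linalg.matrix_power(M, 2)        # after two years: [113., 87.]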
Ch. 4 Linear Models & Matrix Algebra: Summary
• Matrix algebra can be used:
a. to express the system of equations in compact notation: Ax = d;
b. to find out whether a solution to a system of equations exists; and
c. to obtain the solution if it exists. We need to invert the matrix A to find the solution for x*:

$x^* = A^{-1}d, \quad A^{-1} = \frac{\text{adj } A}{\det A}, \quad x^* = \frac{\text{adj } A}{\det A}\, d$
Ch. 4 Notation and Definitions: Summary
• A (upper case letters) = matrix
• b (lower case letters) = vector
• nxm = n rows, m columns
• rank(A) = number of linearly independent vectors of A
• trace(A) = tr(A) = sum of diagonal elements of A
• Null matrix = all elements equal to zero.
• Diagonal matrix = all non-zero elements are on the diagonal.
• I = identity matrix (diagonal elements: 1, off-diagonal: 0)
• |A| = det(A) = determinant of A
• A^-1 = inverse of A
• A' = A^T = transpose of A
• A = A^T: symmetric matrix
• A' = A^-1: orthogonal matrix
• |Mij| = minor of A
You know too much linear algebra when...
• You look at the long row of milk cartons at Whole Foods (soy, skim, .5% low-fat, 1% low-fat, 2% low-fat, and whole) and think: "Why so many? Aren't soy, skim, and whole a basis?"

You might also like