
Macroeconomics I

Introduction: Some basic tools

Carlos Lizama
March 7, 2022

1
Introduction

• Teaching team:
• Instructor: Carlos Lizama. Contact: clizama@bcentral.cl
• Teaching assistant: Javier Moreno. Contact: jmoreno@bcentral.cl

• Lectures and TA sessions:
• Lectures: Monday and Friday, 8:30 am.
• TA sessions: Wednesday, 8:30 am.
• The instructor reserves the right to swap lectures and TA sessions without prior notice.

2
Assessment

• 2 exams (30% each).

• 4 problem sets (40% in total).

• Theoretical and computational.
• Suggested programming languages: Matlab, Python, Julia.
• You may work in groups, but each student must hand in their own version.
• Results must be replicable.

3
About this course

This course is an introduction to modern macroeconomic models.


We focus on formal economic models, analytical methods, and the study of equilibrium dynamics.

Dynamic Stochastic General Equilibrium Models (DSGE).


Dynamic: we analyze how the economy behaves over time.
Stochastic: shocks as driving forces that determine the evolution of the economy.
General Equilibrium: prices are endogenous and clear markets.

The goal is to build intuition and to learn key macro tools and concepts.

Core topics in macro: growth, business cycles, labor market dynamics, inflation and monetary policy.

4
References

• There is no mandatory textbook.

• Some recommended references:


• Ljungqvist, L. and T. Sargent, 2018. Recursive Macroeconomic Theory, MIT Press.
• Stokey, N., R. Lucas, and E. Prescott, 1989. Recursive Methods in Economic Dynamics,
Harvard University Press.

5
Markov Chains

6
Stochastic Processes

• We can classify stochastic processes according to the domain of the state and of time:
• Discrete time / Discrete space. E.g. Markov Chain.
• Discrete time / Continuous space. E.g. Autoregressive process.
• Continuous time / Discrete space. E.g. Continuous time Markov Chains.
• Continuous time / Continuous space. E.g. Brownian Motion.

• In economics, we use all of these kinds of processes.

• In this course, we will focus on Discrete time Markov Chains.

7
Markov Processes

Definition: Markov Property


A stochastic process $\{x_t\}$ is said to have the Markov property if for all $k \geq 1$ and all $t$

$$\mathbb{P}(x_{t+1} \mid x_t, x_{t-1}, x_{t-2}, \ldots, x_{t-k}) = \mathbb{P}(x_{t+1} \mid x_t)$$

Intuitively, we only need to know the current “state” to know how the process behaves.

“Finding the state is an art”.

8
Markov property: examples

Example
Your age. (Although it is not a stochastic process!)

Example
Autoregressive process of order 1, AR(1):

$$x_t = \alpha x_{t-1} + \varepsilon_t$$

9
Is this Markov?

Example
Autoregressive process of order 2, AR(2).

$$x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \varepsilon_t$$

Hint: Define $y_t = [x_t; x_{t-1}]$, then

$$y_t = A y_{t-1} + B \varepsilon_t$$

where $A = \begin{pmatrix} \alpha_1 & \alpha_2 \\ 1 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

$y_t$ is a Markov process.
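A minimal sketch in Python (one of the suggested languages) of this state augmentation; the coefficient values are placeholders chosen only for illustration:

```python
import numpy as np

# Hypothetical AR(2) coefficients, chosen only for illustration.
alpha1, alpha2 = 0.5, 0.3

# Companion form: y_t = A y_{t-1} + B * eps_t, with y_t = (x_t, x_{t-1}).
A = np.array([[alpha1, alpha2],
              [1.0,    0.0]])
B = np.array([1.0, 0.0])

rng = np.random.default_rng(0)
T = 100
y = np.zeros((T, 2))                 # y[t] = (x_t, x_{t-1})
for t in range(1, T):
    y[t] = A @ y[t - 1] + B * rng.standard_normal()

x = y[:, 0]                          # the original AR(2) series
```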

10
Is this Markov?

Example
Consider an AR(1).

$$x_t = \alpha_1 x_{t-1} + \varepsilon_t$$

Let $m_t = \max\{x_0, x_1, \ldots, x_t\}$. Is $m_t$ a Markov process?

$m_t$ alone is not: the distribution of $m_{t+1}$ depends on the current value $x_t$, not only on $m_t$.

Define $M_t = [x_t; m_{t-1}]$.

$M_t$ is a Markov process.

11
Markov Chains

• Markov Chains are one of the most useful stochastic processes.

• a simple and flexible model.
• they help build intuition.
• many elegant theoretical results.

• We will consider processes with


• finite state space S.
• fixed probabilities of moving from any given state to any other (time invariant).

12
Time-invariant Markov Chain

Definition (Time-invariant Markov Chain)

A time-invariant Markov Chain, or simply a Markov Chain, is characterized by three elements:
1. A discrete state space $S$. Without loss of generality, we can consider a collection of $n$ vectors $e_i \in \mathbb{R}^n$, where $e_i$ is an $n \times 1$ vector with 1 in the $i$th row and zeros elsewhere. We will often refer to state $e_i$ as state $i$.
2. An $n \times n$ matrix $P$, called a stochastic matrix, Markov matrix, or transition matrix, that records the probabilities of moving from one state to another: $P_{ij} = \mathbb{P}(x_t = e_j \mid x_{t-1} = e_i)$, with $\sum_j P_{ij} = 1 \; \forall i$.
3. An $n \times 1$ vector $\pi_0$ that records the probability of being in state $e_i$ at time 0 (initial density): $\pi_{0i} = \mathbb{P}(x_0 = e_i)$, with $\sum_i \pi_{0i} = 1$.
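As a sketch in Python, the three objects can be stored as plain NumPy arrays; the particular numbers below are illustrative placeholders, and the assertions mirror the restrictions on $P$ and $\pi_0$:

```python
import numpy as np

# Example 2-state chain (values are illustrative placeholders).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])      # P[i, j] = prob. of moving from state i to state j
pi0 = np.array([1.0, 0.0])      # start in state 0 with certainty

# Each row of P and the vector pi0 must be probability distributions.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)
assert np.all(pi0 >= 0) and np.isclose(pi0.sum(), 1.0)
```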

13
Examples

Example
A worker finds a job with probability $\alpha$ and loses his job with probability $\beta$.
States $S = \{u, e\}$ (coded $\{0, 1\}$), where $u$ = unemployed, $e$ = employed.

$$P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$$

Example
Hamilton (2005) estimates the stochastic matrix

$$P = \begin{pmatrix} 0.971 & 0.029 & 0 \\ 0.145 & 0.778 & 0.077 \\ 0 & 0.508 & 0.492 \end{pmatrix}$$

where the first state represents “normal growth”, the second represents “a mild recession”,
and the third represents a “severe recession”.

14
Forecasting the state

The transition matrix $P$ defines the probability of moving from one state to another in one period. Note that the probability of moving from one value of the state to another in two periods is determined by $P^2$. In fact,

$$\mathbb{P}(x_{t+2} = e_j \mid x_t = e_i) = \sum_s \mathbb{P}(x_{t+2} = e_j \mid x_{t+1} = e_s) \cdot \mathbb{P}(x_{t+1} = e_s \mid x_t = e_i) = \sum_s P_{is} P_{sj} = (P^2)_{ij}$$

In general, $\mathbb{P}(x_{t+k} = e_j \mid x_t = e_i) = (P^k)_{ij}$.
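In Python this is a single matrix power; a sketch using the Hamilton (2005) matrix from the previous slide:

```python
import numpy as np

# Hamilton (2005) transition matrix from the previous slide.
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Two-period transitions: P(x_{t+2} = e_j | x_t = e_i) = (P^2)_{ij}.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1])   # prob. of a mild recession two periods after normal growth
```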

15
Forecasting the state

Given $\pi_0$:

$$\pi_1' = \pi_0' P$$
$$\pi_2' = \pi_1' P = \pi_0' P^2$$
$$\vdots$$
$$\pi_k' = \pi_0' P^k$$
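A sketch of the same forecast for distributions, with an illustrative initial distribution concentrated on "normal growth":

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
pi0 = np.array([1.0, 0.0, 0.0])   # illustrative: start in "normal growth"

# pi_k' = pi_0' P^k: a row vector multiplied from the right by P^k.
pi10 = pi0 @ np.linalg.matrix_power(P, 10)
print(pi10)
```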

16
Forecasting functions of the state

• We might be interested in a function of the state, instead of the state itself.


E.g., we may know the transitions between unemployment and employment, but care about average income.

• Let $y$ be an $n \times 1$ vector.
$y' x_t$ defines a random variable.
Think of $y_i$ as the value of $y$ in state $i$.
• $y_t = y' x_t$ is the realization of this random variable over time.

• We can calculate the unconditional expectation of $y_t$:

$$\mathbb{E}[y_t] = \pi_t' y = \pi_0' P^t y$$
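A minimal sketch using the worker example, where the rates $\alpha$, $\beta$, the initial distribution, and the income vector $y$ are all hypothetical values chosen for illustration:

```python
import numpy as np

# Worker example, with hypothetical job-finding and job-losing rates.
alpha, beta = 0.3, 0.05
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])
pi0 = np.array([0.5, 0.5])        # illustrative initial distribution
y = np.array([0.0, 1.0])          # hypothetical income in states (u, e)

t = 12
E_yt = pi0 @ np.linalg.matrix_power(P, t) @ y   # E[y_t] = pi_0' P^t y
print(E_yt)
```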

17
Forecasting functions of the state

• ... or we can calculate the conditional expectation of $y_{t+s}$, conditional on $x_t = e_i$.

Conditioning on $x_t = e_i$ means $\pi_t = e_i$, so $\pi_{t+s}' = e_i' P^s$. Thus

$$\mathbb{E}[y_{t+s} \mid x_t = e_i] = e_i' P^s y = (P^s y)_i$$

• In the example of Hamilton (2005), how would you compute the (un)conditional
probability of being in a “recession”?
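One hedged way to answer: let $y$ be an indicator equal to 1 in the two recession states, so that $(P^s y)_i$ is the conditional probability of being in a recession $s$ periods ahead. A sketch:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
y = np.array([0.0, 1.0, 1.0])     # indicator of a mild or severe recession

s = 4
cond = np.linalg.matrix_power(P, s) @ y   # (P^s y)_i = P(recession at t+s | state i at t)

pi0 = np.array([1.0, 0.0, 0.0])           # illustrative initial distribution
uncond = pi0 @ np.linalg.matrix_power(P, s) @ y
print(cond, uncond)
```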

18
Stationary Distribution

Definition (Stationary Distribution)


A probability distribution $\pi$ is called a stationary (or invariant) distribution if

$$\pi' = \pi' P$$

Transposing this equation: $\pi = P' \pi$.

A stationary distribution is an eigenvector associated with the unit eigenvalue of $P'$.

Existence? Uniqueness?

19
Types of Dynamics and Convergence

Definition (Ergodic Set)


A set $E \subseteq \{e_i\}_i$ is called an ergodic set if $\mathbb{P}(x_t \in E \mid x_{t-1} \in E) = 1$ and no proper subset of $E$ has this property.

Intuitively, once the system enters the set $E$, it cycles through the states in $E$ and never escapes.

Definition (Transient State)


A state i is called transient if there is a positive probability of leaving and never returning.

20
Other properties of Markov Chains

Definition
Two states $x$ and $y$ are said to communicate if there exist positive integers $j$ and $k$ such that

$$P^j(x, y) > 0 \quad \text{and} \quad P^k(y, x) > 0$$

This means that state $x$ can be reached eventually from state $y$ and state $y$ can be reached eventually from state $x$.

Definition (Irreducible)
The stochastic matrix $P$ is irreducible if all states communicate.

21
Other properties of Markov Chains

Definition (periodicity)
The period of state x is the greatest common divisor of the set of integers
D(x) = {j ≥ 1; P j (x, x) > 0} A stochastic matrix is called aperiodic if the period of every
state is 1, and periodic otherwise.

Convince yourself that this Markov Chain has period 2.

22
Computation of Stationary Distribution

Theorem
Every stochastic matrix P has at least one stationary distribution.

In general, the stationary distribution is not unique.

In fact, consider $\pi_1$ and $\pi_2$ such that $\pi_1' = \pi_1' P$ and $\pi_2' = \pi_2' P$. It is easy to show that $\pi_3 = \lambda \pi_1 + (1 - \lambda) \pi_2$, for $\lambda \in [0, 1]$, is also a stationary distribution. (Prove it!)
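A one-line verification of the claim, for $\lambda \in [0, 1]$:

$$\pi_3' P = \lambda \pi_1' P + (1 - \lambda) \pi_2' P = \lambda \pi_1' + (1 - \lambda) \pi_2' = \pi_3'$$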

23
Computation of Stationary Distribution

How can we compute the stationary distribution? Under what conditions is it unique?

We can solve for the eigenvector associated with the eigenvalue 1, i.e. solve $\pi' = \pi' P$.

We need to impose $\sum_i \pi_i = 1$.
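A sketch in Python: take the eigenvector of $P'$ for the eigenvalue closest to 1 and normalize it so its entries sum to one, here again with Hamilton's matrix:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# pi = P' pi: take the eigenvector of P' for the eigenvalue closest to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()                 # impose sum_i pi_i = 1
print(pi)
```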

24
Asymptotic stationarity

For an arbitrary initial distribution $\pi_0$, does the unconditional distribution $\pi_t$ approach a stationary distribution?

$$\lim_{t \to \infty} \pi_t = \pi_\infty$$

Does this limit depend on $\pi_0$?

If the limit π∞ is independent of the initial distribution, we say that the process is
asymptotically stationary with a unique stationary distribution.

The solution is called the stationary or invariant distribution of P.
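A quick numerical check, as a sketch: iterate $\pi_{t+1}' = \pi_t' P$ from two different (illustrative) initial distributions and compare the limits:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

pi_a = np.array([1.0, 0.0, 0.0])   # two illustrative initial distributions
pi_b = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    pi_a = pi_a @ P                # pi_{t+1}' = pi_t' P
    pi_b = pi_b @ P

print(np.allclose(pi_a, pi_b))     # True here: the limit does not depend on pi_0
```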

25
Useful results and theorems

Theorem
If $P$ is both aperiodic and irreducible, then
1. $P$ has exactly one stationary distribution $\pi_\infty$.
2. The process is asymptotically stationary.
A stochastic matrix satisfying these conditions is sometimes called uniformly ergodic.

Theorem
Let $P$ be a stochastic matrix with $P_{ij} > 0 \; \forall i, j$. Then $P$ has a unique stationary distribution and the process is asymptotically stationary.

26
Useful results and theorems

Theorem
Let $P$ be a stochastic matrix for which $(P^n)_{ij} > 0 \; \forall i, j$ for some value of $n \geq 1$. Then $P$ has a unique stationary distribution, and the process is asymptotically stationary.

Theorem
Consider the state space {ei }i and the stochastic matrix P , then
1. The state space can be partitioned into M ≥ 1 ergodic sets and one transient set.
2. Each row of P ∞ is an invariant distribution of P and every invariant distribution is a convex
combination of rows of P ∞ .

27
Useful results and theorems

Theorem
Under irreducibility, for all $x \in S$

$$\frac{1}{m} \sum_{t=1}^{m} \mathbb{1}\{x_t = x\} \to \pi_\infty(x) \quad \text{as } m \to \infty$$

This gives us another way to interpret the stationary distribution: the fraction of time
the chain spends at state x.

This is a special case of a law of large numbers result for Markov chains.
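A sketch of this result: simulate a long path of the Hamilton (2005) chain and compare the fraction of time spent in each state with the stationary distribution:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

rng = np.random.default_rng(0)
T = 200_000
state = 0                              # starting state (irrelevant in the limit)
visits = np.zeros(3)
for _ in range(T):
    state = rng.choice(3, p=P[state])  # draw the next state from row `state` of P
    visits[state] += 1

print(visits / T)                      # approximates the stationary distribution
```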

28
Examples of dynamics / convergence

Example (asymptotically stationary)


Consider

$$P = \begin{pmatrix} 3/4 & 1/4 \\ 1/4 & 3/4 \end{pmatrix}$$

We have only one ergodic set.

$$\lim_{n \to \infty} P^n = P^\infty = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$$
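This limit is easy to check numerically; a minimal sketch:

```python
import numpy as np

P = np.array([[0.75, 0.25],
              [0.25, 0.75]])

# Every row of P^n approaches the stationary distribution (1/2, 1/2).
print(np.linalg.matrix_power(P, 50))
```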

29
Examples of dynamics / convergence

Example (transient states)

Consider

$$P = \begin{pmatrix} 1-\gamma & \gamma/2 & \gamma/2 \\ 0 & 1/2 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix}$$

where $0 < \gamma < 1$.

The first state is transient. The ergodic set is formed by the second and third states.

$$\lim_{n \to \infty} P^n = P^\infty = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 0 & 1/2 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix}$$

Note that each row is an invariant distribution.

30
Examples of dynamics / convergence

Example (cyclically moving subsets)

Consider $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

We have only one ergodic set.

$\lim_{n \to \infty} P^n$ does not converge:

$$P^{2n} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad P^{2n+1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

but $\frac{1}{m} \sum_{t=1}^{m} \mathbb{1}\{x_t = x\} \to \pi_\infty(x)$, with $\pi_\infty = (1/2, 1/2)$.

31
Examples of dynamics / convergence

Example (Multiple ergodic sets)

• Consider

$$P = \begin{pmatrix} 3/4 & 1/4 & 0 & 0 \\ 1/4 & 3/4 & 0 & 0 \\ 0 & 0 & 3/4 & 1/4 \\ 0 & 0 & 1/4 & 3/4 \end{pmatrix}$$

We have two ergodic sets.

$$\lim_{n \to \infty} P^n = P^\infty = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \end{pmatrix}$$

Convex combinations of the rows are also invariant distributions!

32
Examples of dynamics / convergence

Example (transient states and multiple ergodic sets)

Combine transient states and multiple ergodic sets.

33
