1 Intro Markov Chains
Carlos Lizama
March 7, 2022
Introduction
• Teaching team:
• Instructor: Carlos Lizama. Contact: clizama@bcentral.cl
• TA: Javier Moreno. Contact: jmoreno@bcentral.cl
• Lectures and recitations:
• Lectures: Monday and Friday, 8:30 am.
• Recitations: Wednesday, 8:30 am.
• The instructor reserves the right to swap lecture and recitation slots without prior notice.
Evaluations
About this course
The goal is to build intuition and to learn key macro tools and concepts.
Core topics in macro: growth, business cycles, labor market dynamics, inflation and
monetary policy.
References

Markov Chains
Stochastic Processes
• We can classify stochastic processes according to the domain of the state and of time:
• Discrete time / Discrete space. E.g. Markov Chain.
• Discrete time / Continuous space. E.g. Autoregressive process.
• Continuous time / Discrete space. E.g. Continuous time Markov Chains.
• Continuous time / Continuous space. E.g. Brownian Motion.
Markov Processes
Intuitively, we only need to know the current “state” to know how the process behaves: P(xt+1 | xt , xt−1 , . . .) = P(xt+1 | xt ).
Markov property: examples
Example
You age. (Although it is not a stochastic process!!)
Example
Autoregressive process of order 1, AR(1).
xt = αxt−1 + εt
Is this Markov?
Example
Autoregressive process of order 2, AR(2).
xt = α1 xt−1 + α2 xt−2 + εt
Define yt = (xt , xt−1 )′ . Then
yt = Ayt−1 + Bεt
where A = (α1 α2 ; 1 0) and B = (1 0)′ .
yt is a Markov process.
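The companion-form trick above can be checked numerically. A minimal sketch, with illustrative coefficients α1 = 0.5, α2 = 0.3: the two-lag AR(2) recursion and the one-lag stacked recursion generate the same path, so the pair yt = (xt , xt−1 )′ carries all the memory.

```python
import numpy as np

# Companion form of an AR(2): stacking y_t = (x_t, x_{t-1})' turns the
# two-lag recursion into a one-lag (hence Markov) recursion y_t = A y_{t-1} + B eps_t.
alpha1, alpha2 = 0.5, 0.3          # illustrative coefficients
A = np.array([[alpha1, alpha2],
              [1.0,    0.0]])
B = np.array([1.0, 0.0])

rng = np.random.default_rng(0)
T = 200
x = np.zeros(T)                    # AR(2) path
y = np.zeros((T, 2))               # stacked path y_t = (x_t, x_{t-1})
for t in range(2, T):
    eps = rng.standard_normal()
    x[t] = alpha1 * x[t - 1] + alpha2 * x[t - 2] + eps
    y[t] = A @ y[t - 1] + B * eps

# Both recursions produce the same x_t
assert np.allclose(y[2:, 0], x[2:])
```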
Is this Markov?
Example
Consider an AR(1).
xt = α1 xt−1 + εt
Mt is a Markov process; mt is not.
Markov Chains
Time-invariant Markov Chain
Examples
Example
A worker finds a job with probability α and loses their job with probability β.
States S ∈ {u, e} = {0, 1}, where u = unemployed and e = employed. P = (1 − α α; β 1 − β)
Example
Hamilton (2005) estimates the stochastic matrix
P = (0.971 0.029 0; 0.145 0.778 0.077; 0 0.508 0.492)
where the first state represents “normal growth”, the second represents “a mild recession”,
and the third represents a “severe recession”.
Forecasting the state
The transition matrix P defines the probability of moving from one state to another in one
period. Note that the probability of moving from one value of the state to another in two
periods is determined by P^2 . In fact,
P(xt+2 = ej | xt = ei ) = Σs P(xt+2 = ej | xt+1 = es ) · P(xt+1 = es | xt = ei ) = Σs Pis Psj = (P^2 )ij
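This two-period formula can be verified numerically with Hamilton's estimated matrix from the earlier example. A sketch: compute P² directly and check one entry against the sum over intermediate states.

```python
import numpy as np

# Hamilton (2005): normal growth, mild recession, severe recession
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

P2 = P @ P                         # two-period transition probabilities

# Check the sum formula for one entry: from normal growth (i=0) to
# mild recession (j=1) in two periods, summing over intermediate states s.
i, j = 0, 1
by_hand = sum(P[i, s] * P[s, j] for s in range(3))
assert np.isclose(P2[i, j], by_hand)
print(P2[i, j])                    # P(x_{t+2} = mild recession | x_t = normal growth)
```

Note that P² is itself a stochastic matrix: each row still sums to one.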
Forecasting the state
Given an initial distribution π0 ,
π1′ = π0′ P
πk′ = π0′ P^k
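A sketch of this forecast, iterating the distribution forward with the same Hamilton matrix, starting from normal growth with certainty:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

pi0 = np.array([1.0, 0.0, 0.0])    # start in normal growth with probability 1

# pi_k' = pi_0' P^k: the k-step-ahead distribution over states
pik = pi0 @ np.linalg.matrix_power(P, 8)
assert np.isclose(pik.sum(), 1.0)  # still a probability distribution
print(pik)
```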
Forecasting functions of the state
• Let y be an n × 1 vector. Then y ′ xt defines a random variable:
think of yi as the value taken in state i.
• yt = y ′ xt is the realization of this random variable over time.
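A sketch of this bookkeeping: with xt the indicator vector of the current state, y ′ xt picks out the value attached to that state, and the conditional expectation of tomorrow's value given today's state i is just (P y)i. The transition matrix and state values below are illustrative.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])         # illustrative two-state transition matrix
y = np.array([2.0, -1.0])          # illustrative value of y in each state

# x_t is an indicator (unit) vector: state 0 here
x = np.array([1.0, 0.0])
assert y @ x == y[0]               # y'x_t returns the value in the current state

# Conditional expectation of tomorrow's value given today's state i is (P y)_i
Ey_next = P @ y
print(Ey_next)                     # one entry per current state
```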
Forecasting functions of the state
• In the example of Hamilton (2005), how would you compute the (un)conditional
probability of being in a “recession”?
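One way to answer, as a sketch: the conditional probability reads directly off a row of P (or of P^k for k periods ahead), while the unconditional long-run probability can be approximated by raising P to a high power, since its rows converge (this anticipates the stationary-distribution results below).

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Conditional: probability of some recession (mild or severe) next period,
# given normal growth today -- the off-diagonal mass of row 0.
cond = P[0, 1] + P[0, 2]
assert np.isclose(cond, 0.029)

# Unconditional long-run probability: the rows of P^k converge as k grows,
# so any row of a high power approximates the long-run distribution.
Pk = np.linalg.matrix_power(P, 500)
uncond = Pk[0, 1] + Pk[0, 2]
print(uncond)
```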
Stationary Distribution
π ′ = π ′ P
Existence? Uniqueness?
Types of Dynamics and Convergence
Intuitively, once the system enters an ergodic set E, it cycles through the states in E and never
escapes.
Other properties of Markov Chains
Definition
Two states x and y are said to communicate if there exist positive integers j and k such that
P^j (x, y) > 0 and P^k (y, x) > 0.
This means that state x can be reached eventually from state y and state y can be reached
eventually from state x.
Definition (Irreducible)
The stochastic matrix P is irreducible if all states communicate.
Other properties of Markov Chains
Definition (periodicity)
The period of state x is the greatest common divisor of the set of integers
D(x) = {j ≥ 1 : P^j (x, x) > 0}. A stochastic matrix is called aperiodic if the period of every
state is 1, and periodic otherwise.
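A small sketch computing the period of a state from this definition. The two-state chain that flips deterministically, P = (0 1; 1 0), only revisits a state at even dates, so its period is 2; a matrix with all entries positive is aperiodic.

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # deterministic flip between the two states

# D(x) = {j >= 1 : P^j(x, x) > 0}; the period is the gcd of this set.
# max_j truncates the (infinite) set, which is enough for small examples.
def period(P, x, max_j=20):
    returns = [j for j in range(1, max_j + 1)
               if np.linalg.matrix_power(P, j)[x, x] > 0]
    return reduce(gcd, returns)

assert period(P, 0) == 2                                    # periodic
assert period(np.array([[0.5, 0.5], [0.5, 0.5]]), 0) == 1   # aperiodic
```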
Computation of Stationary Distribution
Theorem
Every stochastic matrix P has at least one stationary distribution.
In fact, consider π1 and π2 such that π1′ = π1′ P and π2′ = π2′ P . It is easy to show that
π3 = λπ1 + (1 − λ)π2 , for any λ ∈ [0, 1], is also a stationary distribution. (Prove it!)
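A sketch of one standard computation: π ′ = π ′ P says π is a left eigenvector of P with eigenvalue 1, so we can take the corresponding eigenvector of P ′ and normalize it to sum to one. The worker example serves as a test case, with illustrative α = 0.3, β = 0.1; its stationary unemployment share is β/(α + β).

```python
import numpy as np

alpha, beta = 0.3, 0.1             # illustrative job-finding / job-loss rates
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# pi' = pi' P  <=>  P' pi = pi: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                 # normalize to a probability distribution

assert np.allclose(pi @ P, pi)                    # stationarity: pi' = pi' P
assert np.isclose(pi[0], beta / (alpha + beta))   # unemployment share
```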
Computation of Stationary Distribution
How can we compute the stationary distribution? Under what conditions is this
unique?
Asymptotic stationarity
limt→∞ πt = π∞
If the limit π∞ is independent of the initial distribution, we say that the process is
asymptotically stationary with a unique stationary distribution.
Useful results and theorems
Theorem
If P is both aperiodic and irreducible, then
1. P has exactly one stationary distribution π∞ .
2. The process is asymptotically stationary.
A stochastic matrix satisfying these conditions is sometimes called uniformly ergodic.
Theorem
Let P be a stochastic matrix with Pij > 0 ∀i, j. Then P has a unique stationary distribution
and the process is asymptotically stationary.
Useful results and theorems
Theorem
Let P be a stochastic matrix for which (P^n )ij > 0 ∀i, j for some n ≥ 1. Then P has a
unique stationary distribution, and the process is asymptotically stationary.
Theorem
Consider the state space {ei }i and the stochastic matrix P , then
1. The state space can be partitioned into M ≥ 1 ergodic sets and one transient set.
2. Each row of P^∞ is an invariant distribution of P , and every invariant distribution is a convex
combination of rows of P^∞ .
Useful results and theorems
Theorem
Under irreducibility, for all x ∈ S,
(1/m) Σt=1..m 1{Xt = x} → π∞ (x) as m → ∞
This gives us another way to interpret the stationary distribution: the fraction of time
the chain spends at state x.
This is a special case of a law of large numbers result for Markov chains.
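This law-of-large-numbers reading can be checked by simulation. A sketch with the worker chain and illustrative parameters α = 0.3, β = 0.1: the fraction of periods spent unemployed converges to π∞(u) = β/(α + β) = 0.25.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.3, 0.1             # illustrative job-finding / job-loss rates
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Simulate the chain and track the fraction of time spent in state 0 (unemployed)
m = 200_000
state = 0
visits0 = 0
for _ in range(m):
    visits0 += (state == 0)
    state = rng.choice(2, p=P[state])

frac = visits0 / m
# The time average should be close to pi_inf(0) = beta / (alpha + beta) = 0.25
assert abs(frac - beta / (alpha + beta)) < 0.02
print(frac)
```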
Examples of dynamics / convergence
but (1/m) Σt=1..m 1{Xt = x} → π∞ (x), with π∞ = (1/2 1/2)