Algorithmic Methods For Markov Chains
[Figure: two three-state transition diagrams — a three-state irreducible MC, and a three-state absorbing MC in which state 3 is absorbing and {1, 2} are transient]
Lecture 1: Algo. Methods for discrete time MC
Introduction
Let t denote the set of transient states and a the set of absorbing states. For an absorbing Markov chain the transition probability matrix P can be written in block form as

        | Ptt  Pta |
    P = |          | ,
        |  0    I  |

where I is the identity matrix.
Let m_ij, i, j ∈ t, denote the expected number of visits to j before absorption, given that the chain starts in i at time 0. The matrix M = (m_ij) is given by

    M = (I − Ptt)^(−1) = I + Ptt + (Ptt)^2 + ⋯
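As a quick numerical sketch, the fundamental matrix M can be computed by matrix inversion and checked against a truncation of the series above. The transient block Ptt here is made up for illustration; it is not the chain from the figure.

```python
import numpy as np

# Hypothetical transient block Ptt of an absorbing chain with two
# transient states (row sums < 1, since some mass leaks to absorption).
Ptt = np.array([[0.2, 0.5],
                [0.3, 0.0]])

# Fundamental matrix M = (I - Ptt)^(-1):
# m_ij = expected number of visits to j before absorption, starting from i.
M = np.linalg.inv(np.eye(2) - Ptt)

# Sanity check against the truncated series I + Ptt + Ptt^2 + ...
S = sum(np.linalg.matrix_power(Ptt, k) for k in range(200))
assert np.allclose(M, S)
```

The series converges because the spectral radius of Ptt is below 1 for any absorbing chain, which is exactly why the inverse exists.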
Equilibrium distribution of MC
The equilibrium (steady-state) probability is defined as

    p_j = lim_{n→∞} P(X_n = j | X_0 = i),   i, j = 0, …, N
The equilibrium distribution 𝑝 = 𝑝0 , 𝑝1 , … , 𝑝𝑁 of the
(irreducible) MC is the unique solution to
𝑝 = 𝑝𝐏, 𝑝𝑒 = 1, where 𝑒 is a column vector of ones
For small Markov chains p can be computed directly as

    p = e^t (I − P + e e^t)^(−1),

where e is a column vector of ones and e^t is the transpose of e. Note that e e^t is a matrix of ones.
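A minimal sketch of this direct computation, using a made-up irreducible transition matrix (any small irreducible P with rows summing to 1 would do):

```python
import numpy as np

# Hypothetical irreducible transition matrix P (each row sums to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.2, 0.4, 0.4]])

n = P.shape[0]
e = np.ones((n, 1))                 # column vector of ones

# p = e^t (I - P + e e^t)^(-1); e @ e.T is the matrix of ones.
p = (e.T @ np.linalg.inv(np.eye(n) - P + e @ e.T)).ravel()

# p is the unique solution of p = pP with p e = 1.
assert np.allclose(p @ P, p)
assert np.isclose(p.sum(), 1.0)
```

The trick works because p(I − P) = 0 and p e e^t = e^t, so p(I − P + e e^t) = e^t, and the added rank-one term makes the matrix invertible for an irreducible chain.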
Exact Methods
1. Gaussian elimination algorithm (linear algebra)
2. Grassmann, Taksar and Heyman (GTH) algorithm
[Diagram: the Gaussian elimination pipeline — forward Gaussian iterations, then backward iterations, then normalization — shown alongside the folding pipeline: folding, then unfolding, then normalization]
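The GTH algorithm listed above follows the same reduce/back-substitute/normalize pattern. Below is a compact sketch of a textbook GTH formulation (not code from the lecture); the 3-state chain used to exercise it is made up for illustration.

```python
import numpy as np

def gth(P):
    """Stationary distribution of an irreducible chain via the
    Grassmann-Taksar-Heyman state-reduction algorithm.
    GTH uses no subtractions, which makes it numerically stable."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    # Reduction ("folding"): censor out states n-1, ..., 1 one by one.
    for k in range(n - 1, 0, -1):
        s = A[k, :k].sum()            # mass leaving state k to lower states
        A[k, :k] /= s                 # exit distribution from state k
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])  # fold k into the rest
        A[:k, k] /= s                 # pre-scale column for back-substitution
    # Back-substitution ("unfolding"), then normalization.
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[:k] @ A[:k, k]
    return pi / pi.sum()

# Made-up irreducible chain to exercise the routine.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
pi = gth(P)
assert np.allclose(pi @ P, pi) and np.isclose(pi.sum(), 1.0)
```

Each reduction step replaces the chain by its censoring on the remaining states, so the balance equation at the eliminated state can later be unwound in the back-substitution pass.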
    v_i = p_{0i} + Σ_{j=1}^{N} v_j p_{ji}
Iterative bounds (2)
Let Q denote the N-by-N matrix with entries q_ij = p_ji for i, j = 1, …, N. The mean-visits equation then rewrites as the fixed-point equation

    v^(n+1) = r + Q v^(n),

which is contractive since ρ(Q) < 1. Here the v^(n) are N-column vectors with elements v_i^(n), and r_i = p_{0i}, i = 1, …, N.
Once v is determined, p_i = v_i / Σ_{j=0}^{N} v_j.
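A minimal sketch of the iteration, assuming the usual convention v_0 = 1 (the reference state 0 is visited once per cycle); the transition matrix is made up for illustration:

```python
import numpy as np

# Hypothetical irreducible chain on states 0..N (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

N = P.shape[0] - 1
Q = P[1:, 1:].T            # q_ij = p_ji for i, j = 1..N
r = P[0, 1:].copy()        # r_i = p_0i

# Fixed-point iteration v^(n+1) = r + Q v^(n); converges since rho(Q) < 1.
v = np.zeros(N)
for _ in range(500):
    v = r + Q @ v

# Prepend v_0 = 1 (assumed convention: one visit to reference state 0
# per cycle), then normalize to get the equilibrium distribution.
v_full = np.concatenate(([1.0], v))
p = v_full / v_full.sum()
assert np.allclose(p @ P, p) and np.isclose(p.sum(), 1.0)
```

Because the iteration is a contraction, each sweep shrinks the error by a factor of roughly ρ(Q), so a fixed iteration count suffices in practice (or one can stop when successive iterates agree to tolerance).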