
IE3354

Stochastic OR
Week 6 – DTMC: Classification of States and Long-Run Analysis

Classification of States
Long Run Behavior of DTMCs

Özgür TOY, Önder BULUT


Industrial Engineering, Yaşar University

Classification of States
⚫ It is important to understand the nature of
states in a Markov Chain in order to
characterize the long run behavior.

⚫ Does lim_{n→∞} P(X_n = j) exist?

⚫ If it exists, what is lim_{n→∞} P(X_n = j)?

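The two questions above can be probed numerically. A minimal sketch (assuming NumPy is available; the matrix is the 4-state example labeled Ex6 later in these slides): if the limit exists and is independent of the starting state, every row of P^n approaches the same vector as n grows.

```python
import numpy as np

# Ex6 transition matrix from later slides (irreducible, aperiodic)
P6 = np.array([[0.4, 0.5,  0.0,  0.1],
               [0.7, 0.3,  0.0,  0.0],
               [0.0, 0.0,  0.5,  0.5],
               [0.0, 0.15, 0.50, 0.35]])

Pn = np.linalg.matrix_power(P6, 1000)   # 1000-step transition probabilities
print(Pn.round(4))                      # all rows are (nearly) identical
```

Each row of P^n is the distribution of X_n given a different starting state; when the rows coincide, the starting state no longer matters.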
Classification of States
⚫ Different structures affecting the long-run behavior are possible.
⚫ Some examples:
          0     1     2     3                 0     1     2     3
      0   0.4   0.5   0     0.1           0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0         P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5           2   0     0     0.5   0.5
      3   0     0.15  0.50  0.35          3   0     0     0.65  0.35

          0     1     2     3                 0     1     2     3
      0   0.4   0.6   0     0             0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0         P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5           2   0     0     0.5   0.5
      3   0     0     0     1             3   0     0     0.65  0.35
Classification of States
⚫ We will first define the state classification terminology
⚫ accessible states, communicating states, equivalence classes, recurrent and transient states, the period of a state, ergodic chains
and then characterize the long-run behavior of DTMCs.

Accessible States
⚫ i → j: State j is accessible from state i iff
⚫ p_ij^(n) > 0 for some n ≥ 0
⚫ starting from i, it is possible that the process enters j at some point in time.

Ex1:                      Ex2:
          0   1                     0   1
      0   1   0                 0   0   1
  P =                       P =
      1   0   1                 1   1   0

Ex3:                      Ex4:
          0    1                    0     1     2     3
      0   0.5  0.5              0   0.4   0.6   0     0
  P =                       P = 1   0.7   0.3   0     0
      1   0    1                2   0     0     0.5   0.5
                                3   0     0     0.65  0.35
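Accessibility is a reachability question on the directed graph whose edges are the positive-probability transitions. A small sketch (the function name is my own, not from the slides) using breadth-first search:

```python
from collections import deque

def accessible(P, i, j):
    """True iff state j is accessible from state i (p_ij^(n) > 0 for some n >= 0)."""
    seen, queue = {i}, deque([i])
    while queue:
        s = queue.popleft()
        if s == j:                       # n = 0 is allowed: i is accessible from i
            return True
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:  # follow positive-probability transitions
                seen.add(t)
                queue.append(t)
    return False

# Ex3 above: state 1 is accessible from 0, but not vice versa
P3 = [[0.5, 0.5], [0.0, 1.0]]
print(accessible(P3, 0, 1), accessible(P3, 1, 0))   # True False
```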
Communication
⚫ i ↔ j: If state j is accessible from state i and state i is accessible from state j, then states i and j are said to communicate.

⚫ Properties of Communicating States:


⚫ (reflexivity) i ↔ i, since p_ii^(0) = 1

⚫ (symmetry) 𝑖 ↔ 𝑗 ⇒ 𝑗 ↔ 𝑖

⚫ (transitivity) 𝑖 ↔ 𝑗 and 𝑗 ↔ 𝑘 ⇒ 𝑖 ↔ 𝑘
Equivalence Classes
⚫ The states in an equivalence class are those that communicate with each
other.
Ex1:                      Ex2:
          0   1                     0   1
      0   1   0                 0   0   1
  P =                       P =
      1   0   1                 1   1   0

Ex3:                      Ex4:
          0    1                    0     1     2     3
      0   0.5  0.5              0   0.4   0.6   0     0
  P =                       P = 1   0.7   0.3   0     0
      1   0    1                2   0     0     0.5   0.5
                                3   0     0     0.65  0.35

Ex5:
          0     1     2     3
      0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5
      3   0     0     0.65  0.35

Ex1: C1={0}, C2={1}
Ex2: C1={0, 1}
Ex3: C1={0}, C2={1}
Ex4: C1={0, 1}, C2={2, 3}
Ex5: C1={0, 1}, C2={2, 3}
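The equivalence classes can be computed mechanically: find the set of states reachable from each state, then group together states that reach each other. A sketch (helper names are my own):

```python
def communicating_classes(P):
    """Partition the states of transition matrix P into equivalence classes."""
    n = len(P)

    def reachable(i):
        """All states accessible from i (depth-first search)."""
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for t in range(n):
                if P[s][t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    R = [reachable(i) for i in range(n)]
    out, done = [], set()
    for i in range(n):
        if i not in done:
            # i and j communicate iff each is reachable from the other
            C = {j for j in range(n) if j in R[i] and i in R[j]}
            out.append(sorted(C))
            done |= C
    return out

# Ex4 above: expected classes {0, 1} and {2, 3}
P4 = [[0.4, 0.6, 0, 0], [0.7, 0.3, 0, 0],
      [0, 0, 0.5, 0.5], [0, 0, 0.65, 0.35]]
print(communicating_classes(P4))   # [[0, 1], [2, 3]]
```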
Irreducibility
⚫ If all states communicate with each other, then there is a
single class and the Markov chain is said to be
irreducible.
Ex6:
          0     1     2     3
      0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5
      3   0     0.15  0.50  0.35

Ex6: C1={0, 1, 2, 3}

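Irreducibility can be checked by verifying that every state is reachable from every other. A sketch (assuming NumPy; Ex6 is the matrix above) that iteratively extends a reachability matrix:

```python
import numpy as np

def is_irreducible(P):
    """True iff every state is accessible from every other state (single class)."""
    n = len(P)
    A = (np.asarray(P) > 0).astype(int)     # adjacency: positive-probability transitions
    R = np.eye(n, dtype=int)                # reachability via paths of length 0
    for _ in range(n):                      # paths of length <= n suffice
        R = ((R + R @ A) > 0).astype(int)   # extend reachability by one more step
    return bool((R > 0).all())

# Ex6 from the slide: a single communicating class
P6 = [[0.4, 0.5, 0, 0.1], [0.7, 0.3, 0, 0],
      [0, 0, 0.5, 0.5], [0, 0.15, 0.50, 0.35]]
print(is_irreducible(P6))   # True
```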
Recurrent States
⚫ A state is recurrent if, upon leaving this state, it is certain that the
process returns to this state at some point in time.

⚫ If state 𝑖 is recurrent, wherever the process goes upon exiting 𝑖, it is


certain that eventually it will return to state 𝑖 again.

Ex6:
          0     1     2     3
      0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5
      3   0     0.15  0.50  0.35

Ex6: C1={0, 1, 2, 3} is recurrent

Recurrent States
⚫ f_ii^(n): the probability that, starting from i, the first return to i occurs at the n-th transition (the distribution of the recurrence time).

⚫ f_ii^(0) = 0 and f_ii^(1) = p_ii

⚫ f_ii: the probability that the process eventually returns to state i:

  f_ii = Σ_{n=1}^∞ f_ii^(n)

⚫ State i is recurrent ⟺ f_ii = 1.

Recurrent States
⚫ f_ii: the probability that the process eventually returns to state i:

  f_ii = Σ_{n=1}^∞ f_ii^(n)

⚫ State i is recurrent ⟺ f_ii = 1.

⚫ Examples:
          0    1                  0    1    2
      0   0.4  0.6            0   0.3  0.6  0.1
  P =                     P = 1   0.7  0.3  0
      1   0.7  0.3            2   0    0    1
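The return probability f_ii can be estimated by simulation. A sketch for the 3-state example above: state 2 is absorbing, and a first-step analysis gives f_00 = 0.3 + 0.6·1 + 0.1·0 = 0.9 < 1 (from state 1 the chain returns to 0 with probability 1), so state 0 is transient. The names below are my own.

```python
import random

# 3-state example from the slide; state 2 is absorbing
P_ex = [[0.3, 0.6, 0.1],
        [0.7, 0.3, 0.0],
        [0.0, 0.0, 1.0]]

def returns_to_0(rng, max_steps=10_000):
    """Simulate one path out of state 0; True iff it ever returns to 0."""
    s = rng.choices(range(3), weights=P_ex[0])[0]   # first transition out of 0
    for _ in range(max_steps):
        if s == 0:
            return True
        if s == 2:                                  # absorbed: can never return
            return False
        s = rng.choices(range(3), weights=P_ex[s])[0]
    return False

rng = random.Random(42)
est = sum(returns_to_0(rng) for _ in range(20_000)) / 20_000
print(est)   # close to f_00 = 0.9
```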
Transient States
⚫ State i is transient if, upon leaving this state, the process may never return to this state again: f_ii < 1.

⚫ A state is transient ⟺ it is not recurrent.

⚫ State 𝑖 is transient ⟺ there is a state 𝑗 (𝑗 ≠ 𝑖) such that


𝑖 → 𝑗 but 𝑗 ↛ 𝑖.

⚫ If state 𝑖 is transient, there is a positive probability that


the process will never return to 𝑖.

Transient States
⚫ Example

Ex5:
          0     1     2     3
      0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5
      3   0     0     0.65  0.35

Ex5: C1={0, 1} is transient, C2={2, 3} is recurrent

Properties
⚫ Recurrence and transience are class properties.

⚫ In a finite state Markov chain, not all states can be


transient.

⚫ All states in an irreducible finite state Markov chain are


recurrent.

⚫ If state j is transient, then the limiting probability that the process will be in state j is zero, regardless of the initial state:

  lim_{n→∞} p_ij^(n) = 0 for all i
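This can be seen numerically for Ex5, where states 0 and 1 are transient: the corresponding columns of P^n vanish as n grows. A sketch assuming NumPy:

```python
import numpy as np

# Ex5: states 0 and 1 are transient, states 2 and 3 recurrent
P5 = np.array([[0.4, 0.5, 0.0,  0.1],
               [0.7, 0.3, 0.0,  0.0],
               [0.0, 0.0, 0.5,  0.5],
               [0.0, 0.0, 0.65, 0.35]])

P5n = np.linalg.matrix_power(P5, 500)
print(P5n.round(6))   # columns 0 and 1 are numerically zero
```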
Periodicity
⚫ d(i): the period of state i is the greatest common divisor of all integers n ≥ 1 for which p_ii^(n) > 0.

⚫ This means that we may visit state 𝑖 only at multiples of


the period 𝑑 𝑖 .

⚫ Periodicity is a class property.

⚫ If 𝑑 𝑖 > 1, then limiting (steady-state) probability does


not exist for state 𝑖.

⚫ State i can only be observed at multiples of d(i), not at any arbitrary time.
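The period can be computed directly from the definition, as the gcd of the step counts n at which p_ii^(n) > 0 in the powers of P. A sketch (the function name and the cutoff nmax are my own choices):

```python
from math import gcd
import numpy as np

def period(P, i, nmax=50):
    """gcd of all n in 1..nmax with p_ii^(n) > 0."""
    P = np.asarray(P, dtype=float)
    d, Pn = 0, np.eye(len(P))
    for n in range(1, nmax + 1):
        Pn = Pn @ P                  # Pn is now the n-step transition matrix
        if Pn[i, i] > 1e-12:
            d = gcd(d, n)            # gcd(0, n) = n, so the first hit sets d
    return d

P_alt = [[0.0, 1.0], [1.0, 0.0]]     # deterministic alternation
print(period(P_alt, 0))              # 2

P6 = [[0.4, 0.5, 0, 0.1], [0.7, 0.3, 0, 0],
      [0, 0, 0.5, 0.5], [0, 0.15, 0.50, 0.35]]
print(period(P6, 0))                 # 1 (aperiodic, since p_00 > 0)
```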
Periodicity - Example
Ex7:
          0     1     2     3     4
      0   0     0.5   0     0.5   0
      1   1     0     0     0     0
  P = 2   0     0     0     0.8   0.2
      3   0     0     0.65  0.35  0
      4   0     0     0     0.4   0.6

C1={0, 1}, transient, period 2 (d(0) = d(1) = 2) because p_00^(1) = 0, p_00^(2) = 0.5, p_00^(3) = 0, p_00^(4) = 0.5^2, …

C2={2, 3, 4} recurrent, aperiodic → ergodic

Ergodicity
⚫ If 𝑑 𝑖 = 1, state 𝑖 is called aperiodic.

⚫ Recurrent states that are aperiodic are called ergodic


states.
⚫ Ergodicity is a class property.

⚫ A Markov chain is said to be ergodic if all its states are


ergodic states.

An Ergodic DTMC

Ex6:
          0     1     2     3
      0   0.4   0.5   0     0.1
  P = 1   0.7   0.3   0     0
      2   0     0     0.5   0.5
      3   0     0.15  0.50  0.35

Ex6: C1={0, 1, 2, 3}

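For an ergodic chain such as Ex6, the limiting distribution π exists and solves the balance equations πP = π together with Σ_j π_j = 1. A sketch (assuming NumPy) that solves this linear system directly:

```python
import numpy as np

P6 = np.array([[0.4, 0.5,  0.0,  0.1],
               [0.7, 0.3,  0.0,  0.0],
               [0.0, 0.0,  0.5,  0.5],
               [0.0, 0.15, 0.50, 0.35]])

n = len(P6)
# Stack the balance equations pi (P - I) = 0 with the normalization sum(pi) = 1
A = np.vstack([P6.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solve of the stacked system
print(pi.round(4))   # limiting distribution: every row of P^n converges to this
```

Because the chain is ergodic, π is unique and strictly positive, and it equals the limit lim_{n→∞} P(X_n = j) from the opening slide.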
