
Chapter 2

Classification of States
Accessible State
State j is said to be accessible from state i if P_ij^(n) > 0 for some n ≥ 0.

• Note that this implies that state j is accessible from state i if and only if, starting in i, it is possible that the process will ever enter state j.

• This is true since, if j is not accessible from i, then P{ever enter j | start in i} = 0.
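Accessibility is a pure reachability question on the directed graph whose edges are the strictly positive entries of P, so it can be checked with a breadth-first search. A minimal Python sketch (the matrix P below is a made-up example, not one from these notes):

```python
from collections import deque

def accessible(P, i, j):
    """True if state j is accessible from state i,
    i.e. P_ij^(n) > 0 for some n >= 0 (n = 0 covers j == i)."""
    seen, queue = {i}, deque([i])
    while queue:
        s = queue.popleft()
        if s == j:
            return True
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Hypothetical 3-state chain in which state 2 cannot be left:
P = [[0.50, 0.50, 0.00],
     [0.25, 0.25, 0.50],
     [0.00, 0.00, 1.00]]
```

Here state 2 is accessible from state 0 (via state 1), but state 0 is not accessible from state 2, matching the remark above: P{ever enter 0 | start in 2} = 0.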

Communicating States
Two states i and j that are accessible to each other are said to communicate, and we write i ↔ j.
Note that any state communicates with itself since, by definition,

P_ii^(0) = P{X_0 = i | X_0 = i} = 1

The relation of communication satisfies the following three properties:

1. State i communicates with state i, all i ≥ 0.

2. If state i communicates with state j , then state j communicates with state i.

3. If state i communicates with state j , and state j communicates with state k, then state i communicates with state
k.

Two states that communicate are said to be in the same class. The concept of communication divides the state space
up into a number of separate classes.
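The partition into classes can be computed directly from this definition: run a reachability search from every state and group i with j whenever each can reach the other. A sketch in Python (the example matrix Q is hypothetical):

```python
def reachable_set(P, i):
    """All states j with P_ij^(n) > 0 for some n >= 0."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def communicating_classes(P):
    """Group states into communicating classes: i and j share a class
    iff j is reachable from i and i is reachable from j."""
    n = len(P)
    reach = [reachable_set(P, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i not in assigned:
            cls = sorted(j for j in reach[i] if i in reach[j])
            classes.append(cls)
            assigned.update(cls)
    return classes

# Hypothetical example: states 0 and 1 communicate; state 2 can reach
# them but cannot be reached back, so it forms its own class.
Q = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4]]
```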

Irreducible Chain
The Markov chain is said to be irreducible if there is only one class, that is, if all states communicate with each other.

Absorbing State
A state i is called absorbing if it is impossible to leave this state. Therefore, state i is absorbing if and only if

P_ii = 1 and P_ij = 0 for all j ≠ i

If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.
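Checking the condition P_ii = 1 row by row identifies all absorbing states. A small sketch (the matrix is a made-up illustration):

```python
def absorbing_states(P, tol=1e-12):
    """States i with P_ii = 1; since rows sum to 1, this forces
    P_ij = 0 for every j != i."""
    return [i for i, row in enumerate(P) if abs(row[i] - 1.0) < tol]

# Hypothetical chain: only state 2 is absorbing.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.8, 0.0],
     [0.0, 0.0, 1.0]]
```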

Dr. L.S. Nawarathna, Department of Statistics & Computer Science, UoP

Example 8. Consider the Markov chain consisting of the three states 0, 1, 2 and having transition probability matrix.
Is this Markov chain irreducible?  
P = ⎡ 1/2  1/2   0  ⎤
    ⎢ 1/2  1/4  1/4 ⎥
    ⎣  0   1/3  2/3 ⎦

Example 9. Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having transition probability matrix
 
P = ⎡ 1/2  1/2   0    0  ⎤
    ⎢ 1/2  1/2   0    0  ⎥
    ⎢ 1/4  1/4  1/4  1/4 ⎥
    ⎣  0    0    0    1  ⎦

Identify the classes of this Markov chain.

Recurrent and Transient States


For any state i we let f_i denote the probability that, starting in state i, the process will ever reenter state i. State i is said to be recurrent if f_i = 1 and transient if f_i < 1.

• Suppose that the process starts in state i and i is recurrent. Hence, with probability 1, the process will eventually reenter state i.

• However, by the definition of a Markov chain, the process starts over again when it reenters state i and, therefore, state i will eventually be visited again. Continual repetition of this argument leads to the conclusion that if state i is recurrent then, starting in state i, the process will reenter state i again and again; in fact, infinitely often.

• On the other hand, suppose that state i is transient. Then each time the process enters state i there is a positive probability, namely 1 − f_i, that it will never again enter that state. Therefore, starting in state i, the probability that the process will be in state i for exactly n time periods equals f_i^(n−1) (1 − f_i), n ≥ 1.

• In other words, if state i is transient then, starting in state i, the number of time periods that the process will be in state i has a geometric distribution with finite mean 1/(1 − f_i).
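For a finite chain these definitions reduce to a graph test: state i is recurrent exactly when every state reachable from i can reach i back (its class is closed), and transient otherwise. A Python sketch of that test (the example chain is hypothetical):

```python
def reach(P, i):
    """All states reachable from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def classify(P):
    """Label each state of a FINITE chain: i is recurrent iff every
    state j reachable from i has i reachable from j in return."""
    r = [reach(P, i) for i in range(len(P))]
    return ["recurrent" if all(i in r[j] for j in r[i]) else "transient"
            for i in range(len(P))]

# Hypothetical chain: 0 and 1 swap forever (recurrent); from 2 the
# process may leave for {0, 1} and never come back (transient).
P = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5]]
```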

Corollary. If state i is recurrent, and state i communicates with state j, then state j is recurrent.

Example 10. Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix
 
P = ⎡  0    0   1/2  1/2 ⎤
    ⎢  1    0    0    0  ⎥
    ⎢  0    1    0    0  ⎥
    ⎣  0    1    0    0  ⎦

Determine which states are transient and which are recurrent.

Example 11. Consider the Markov chain having states 0, 1, 2, 3, 4 and transition probability matrix

P = ⎡ 1/2  1/2   0    0    0  ⎤
    ⎢ 1/2  1/2   0    0    0  ⎥
    ⎢  0    0   1/2  1/2   0  ⎥
    ⎢  0    0   1/2  1/2   0  ⎥
    ⎣ 1/4  1/4   0    0   1/2 ⎦
Determine the recurrent states.

Example 12. Specify the classes of the following Markov chains, and determine whether they are transient or recurrent.

 
(1) P = ⎡  0   1/2  1/2 ⎤
        ⎢ 1/2   0   1/2 ⎥
        ⎣ 1/2  1/2   0  ⎦

(2) P = ⎡  0    0    0    1  ⎤
        ⎢  0    0    0    1  ⎥
        ⎢ 1/2  1/2   0    0  ⎥
        ⎣  0    0    1    0  ⎦

(3) P = ⎡ 1/2   0   1/2   0    0  ⎤
        ⎢ 1/4  1/2  1/4   0    0  ⎥
        ⎢ 1/2   0   1/2   0    0  ⎥
        ⎢  0    0    0   1/2  1/2 ⎥
        ⎣  0    0    0   1/2  1/2 ⎦

(4) P = ⎡ 1/4  3/4   0    0    0  ⎤
        ⎢ 1/2  1/2   0    0    0  ⎥
        ⎢  0    0    1    0    0  ⎥
        ⎢  0    0   1/3  2/3   0  ⎥
        ⎣  1    0    0    0    0  ⎦

Periodicity
Consider the Markov chain shown in the following figure.

• There is a periodic pattern in this chain.

• Starting from state 0, we only return to 0 at times n = 3, 6, ...


• In other words, P_00^(n) = 0 if n is not divisible by 3. Such a state is called a periodic state with period d(0) = 3.

The period of a state i is the largest integer d satisfying the following property: P_ii^(n) = 0 whenever n is not divisible by d. The period of i is denoted by d(i). If P_ii^(n) = 0 for all n > 0, then we let d(i) = ∞.

• If d(i) > 1, we say that state i is periodic.

• If d(i) = 1, we say that state i is aperiodic.

We can show that all states in the same communicating class have the same period.

• A class is said to be periodic if its states are periodic.

• Similarly, a class is said to be aperiodic if its states are aperiodic. Finally, a Markov chain is said to be aperiodic if all of its states are aperiodic.

If i↔j, then d(i) = d(j).
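The period can be estimated numerically as the gcd of the return times observed among the first few powers of P. A sketch, using a three-state cycle like the chain described above (returns to state 0 only at times 3, 6, ...):

```python
from math import gcd

def matmul(A, B):
    """Plain matrix product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_n=50):
    """gcd of all n in 1..max_n with P_ii^(n) > 0; scanning powers up
    to a modest max_n is enough for small chains. Returns 0 if no
    return to i was observed at all."""
    d, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = matmul(Pn, P)
    return d

# Deterministic cycle 0 -> 1 -> 2 -> 0:
C = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
```

For C, returns to state 0 happen only at n = 3, 6, ..., so d(0) = 3; adding any self-transition would drive the gcd down to 1.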

Consider a finite irreducible Markov chain X_n:

• If there is a self-transition in the chain (p_ii > 0 for some i), then the chain is aperiodic.

• Suppose that you can go from state i to state i in l steps, i.e., P_ii^(l) > 0. Also suppose that P_ii^(m) > 0. If gcd(l, m) = 1, then state i is aperiodic.

• The chain is aperiodic if and only if there exists a positive integer n such that all elements of the matrix P^n are strictly positive, i.e., P_ij^(n) > 0 for all i, j ∈ S.
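The last criterion can be checked by scanning successive powers of P; for a k-state chain a classical bound (Wielandt's theorem) says that if any power is strictly positive, some power with n ≤ (k−1)² + 1 already is, so the scan can stop there. A sketch:

```python
def all_positive_power_exists(P):
    """True iff some power P^n has every entry strictly positive.
    For a finite irreducible chain this holds iff it is aperiodic.
    Scanning n up to (k-1)^2 + 1 suffices (Wielandt's bound)."""
    k = len(P)
    Pn = P
    for _ in range((k - 1) ** 2 + 1):
        if all(x > 0 for row in Pn for x in row):
            return True
        # Next power: Pn <- Pn * P
        Pn = [[sum(Pn[i][l] * P[l][j] for l in range(k)) for j in range(k)]
              for i in range(k)]
    return False
```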


Example 13. Consider the following Markov chain.

1. Is Class 1={state 1, state 2} aperiodic?

2. Is Class 2= {state 3, state 4} aperiodic?

3. Is Class 4={state 6, state 7, state 8} aperiodic?

Positive Recurrent State


If state i is recurrent, then it is said to be positive recurrent if, starting in i, the expected time until the process returns to state i is finite.

• It can be shown that in a finite-state Markov chain, all recurrent states are positive recurrent.
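For a finite irreducible chain the expected return time to state i equals 1/π_i, where π is the stationary distribution. The sketch below estimates π by simply iterating the distribution forward (a power-method shortcut that additionally assumes the chain is aperiodic; the two-state matrix is a hypothetical example):

```python
def mean_return_times(P, iters=10000):
    """Estimate E[time to return to i] = 1 / pi_i for a finite,
    irreducible, aperiodic chain by iterating pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return [1.0 / p for p in pi]

# Hypothetical 2-state chain with stationary distribution (1/3, 2/3):
P = [[0.50, 0.50],
     [0.25, 0.75]]
```

Both expected return times come out finite, in line with the remark above that all recurrent states of a finite chain are positive recurrent.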

Ergodic State
Positive recurrent, aperiodic states are called ergodic.

Exercise. Consider a Markov chain on Ω = {1, 2, 3, 4, 5, 6} specified by the transition probability matrix

P = ⎡ 1/4  3/4   0    0    0    0  ⎤
    ⎢ 1/2  1/2   0    0    0    0  ⎥
    ⎢  0    0    1    0    0    0  ⎥
    ⎢  0    0   1/3  2/3   0    0  ⎥
    ⎢  1    0    0    0    0    0  ⎥
    ⎣  0    0    0    0    0    1  ⎦

1. What are the (communicating) classes of this Markov chain?

2. Is the Markov chain irreducible?

3. Which states are transient and which are recurrent? Justify your answer.
4. Let X_0 be the initial state with distribution π_0 = (0, 1/4, 3/4, 0, 0, 0)^T, corresponding to the probabilities of being in states 1, 2, 3, 4, 5, 6 respectively. Let X_0, X_1, X_2, . . . be the Markov chain constructed using P above. What is E(X_1)?

5. What is Var(X1 )?
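Parts 4 and 5 can be checked numerically: the distribution of X_1 is π_1 = π_0 P, after which the mean and variance follow from the usual formulas. A sketch in Python, with P transcribed from the exercise (states labelled 1 through 6):

```python
# Transition matrix as transcribed from the exercise (rows are states 1..6).
P = [[1/4, 3/4,   0,   0, 0, 0],
     [1/2, 1/2,   0,   0, 0, 0],
     [0,     0,   1,   0, 0, 0],
     [0,     0, 1/3, 2/3, 0, 0],
     [1,     0,   0,   0, 0, 0],
     [0,     0,   0,   0, 0, 1]]
pi0 = [0, 1/4, 3/4, 0, 0, 0]      # given initial distribution
states = [1, 2, 3, 4, 5, 6]       # state labels X_n takes values in

# Distribution of X1: pi1 = pi0 P.
pi1 = [sum(pi0[i] * P[i][j] for i in range(6)) for j in range(6)]

# E(X1) and Var(X1) = E(X1^2) - E(X1)^2 over the state labels.
EX1 = sum(s * p for s, p in zip(states, pi1))
VarX1 = sum(s * s * p for s, p in zip(states, pi1)) - EX1 ** 2
```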

