2 - Classification of States

Classification of States
Accessible State
State j is said to be accessible from state i if P_ij^(n) > 0 for some n ≥ 0.
Note that this implies that state j is accessible from state i if and only if, starting in i, it is possible that the process will ever enter state j. This is true since, if j is not accessible from i, then P{ever enter j | start in i} = 0.
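To make the definition concrete, accessibility can be checked by a graph search over the positive-probability transitions, which is equivalent to checking whether some entry P_ij^(n) is positive for n up to the number of states. A minimal sketch, with a hypothetical transition matrix for illustration:

```python
# Sketch: check whether state j is accessible from state i, i.e. whether
# P_ij^(n) > 0 for some n >= 0.  In a chain with |S| states, if j is reachable
# at all it is reachable in at most |S| - 1 steps, so a finite search suffices.

def is_accessible(P, i, j):
    n = len(P)
    reachable = {i}          # n = 0 term: state i is trivially accessible from i
    frontier = [i]
    while frontier:
        s = frontier.pop()
        for t in range(n):
            if P[s][t] > 0 and t not in reachable:
                reachable.add(t)
                frontier.append(t)
    return j in reachable

# Hypothetical example chain: 0 -> 1 -> 2, where 2 is absorbing.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
print(is_accessible(P, 0, 2))  # True: 0 -> 1 -> 2 has positive probability
print(is_accessible(P, 2, 0))  # False: starting in 2 we can never reach 0
```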
Communicating States
Two states i and j that are accessible to each other are said to communicate, and we write i ↔ j.
Note that any state communicates with itself since, by definition, P_ii^(0) = P{X_0 = i | X_0 = i} = 1.
The relation of communication satisfies the following three properties:
1. State i communicates with state i, for all i ≥ 0.
2. If state i communicates with state j, then state j communicates with state i.
3. If state i communicates with state j, and state j communicates with state k, then state i communicates with state k.
Two states that communicate are said to be in the same class. The concept of communication divides the state space up into a number of separate classes.
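The class decomposition can be computed directly from this definition: group together states that are mutually accessible. A minimal sketch, using a hypothetical example matrix:

```python
# Sketch: partition the state space into communicating classes.  Two states
# are in the same class exactly when each is accessible from the other.

def reachable_from(P, i):
    """Set of states j with P_ij^(n) > 0 for some n >= 0."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def communicating_classes(P):
    n = len(P)
    reach = [reachable_from(P, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        # j communicates with i iff each is reachable from the other.
        cls = sorted(j for j in range(n) if j in reach[i] and i in reach[j])
        classes.append(cls)
        assigned |= set(cls)
    return classes

# Hypothetical example: 0 and 1 communicate; 2 is absorbing, its own class.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.25, 0.25],
     [0.0, 0.0, 1.0]]
print(communicating_classes(P))  # [[0, 1], [2]]
```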
Irreducible Markov Chain
The Markov chain is said to be irreducible if there is only one class, that is, if all states communicate with each other.
Absorbing State
A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if P_ii = 1.
If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.
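Both conditions are easy to test in code: a state is absorbing iff its diagonal entry is 1, and the chain is absorbing iff every state can reach some absorbing state along positive-probability transitions. A sketch, on a hypothetical gambler's-ruin-style matrix:

```python
# Sketch: detect absorbing states (P_ii = 1) and test whether the chain is an
# absorbing Markov chain (every state can reach an absorbing state).

def absorbing_states(P):
    return [i for i in range(len(P)) if P[i][i] == 1]

def is_absorbing_chain(P):
    n = len(P)
    targets = set(absorbing_states(P))
    if not targets:
        return False
    for i in range(n):
        # Forward search from i along positive-probability transitions.
        seen, stack = {i}, [i]
        found = i in targets
        while stack and not found:
            s = stack.pop()
            for t in range(n):
                if P[s][t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
                    if t in targets:
                        found = True
        if not found:
            return False
    return True

# Hypothetical example: 0 and 3 are absorbing; 1 and 2 can reach them.
P = [[1, 0, 0, 0],
     [0.5, 0, 0.5, 0],
     [0, 0.5, 0, 0.5],
     [0, 0, 0, 1]]
print(absorbing_states(P))    # [0, 3]
print(is_absorbing_chain(P))  # True
```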
Dr. L.S. Nawarathna, Department of Statistics & Computer Science, UoP
Example 8. Consider the Markov chain consisting of the three states 0, 1, 2 and having transition probability matrix

    ⎡ 1/2  1/2   0  ⎤
P = ⎢ 1/2  1/4  1/4 ⎥
    ⎣  0   1/3  2/3 ⎦

Is this Markov chain irreducible?
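Irreducibility of Example 8's matrix can be verified numerically: the chain is irreducible iff every entry of I + P + P^2 + ... + P^(n-1) is strictly positive, since that sum is positive in entry (i, j) exactly when j is reachable from i in fewer than n steps. A sketch:

```python
# Sketch: irreducibility test via I + P + ... + P^(n-1), applied to the
# Example 8 matrix as transcribed above.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_irreducible(P):
    n = len(P)
    # S accumulates I + P + P^2 + ... + P^(n-1).
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    Pk = [row[:] for row in P]           # Pk holds the current power of P
    for _ in range(n - 1):
        for i in range(n):
            for j in range(n):
                S[i][j] += Pk[i][j]
        Pk = mat_mul(Pk, P)
    return all(S[i][j] > 0 for i in range(n) for j in range(n))

P = [[1/2, 1/2, 0],
     [1/2, 1/4, 1/4],
     [0, 1/3, 2/3]]
print(is_irreducible(P))  # True: all three states communicate
```

It is also easy to see by hand: 0 → 1 → 2 and 2 → 1 → 0 all have positive probability, so every pair of states communicates.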
Example 9. Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having transition probability matrix

    ⎡ 1/2  1/2   0    0  ⎤
P = ⎢ 1/2  1/2   0    0  ⎥
    ⎢ 1/4  1/4  1/4  1/4 ⎥
    ⎣  0    0    0    1  ⎦
Recurrent and Transient States
For any state i, let f_i denote the probability that, starting in state i, the process will ever reenter state i. State i is said to be recurrent if f_i = 1 and transient if f_i < 1.
Suppose that the process starts in state i and that i is recurrent. Then, with probability 1, the process will eventually reenter state i. However, by the definition of a Markov chain, it follows that the process will be starting over again when it reenters state i and, therefore, state i will eventually be visited again. Continual repetition of this argument leads to the conclusion that if state i is recurrent then, starting in state i, the process will reenter state i again and again and again; in fact, infinitely often.
On the other hand, suppose that state i is transient. Then, each time the process enters state i, there is a positive probability, namely 1 − f_i, that it will never again enter that state. Therefore, starting in state i, the probability that the process will be in state i for exactly n time periods equals f_i^(n−1) (1 − f_i), n ≥ 1.
In other words, if state i is transient then, starting in state i, the number of time periods that the process will be in state i has a geometric distribution with finite mean 1/(1 − f_i).
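This mean can be cross-checked numerically: the expected total number of visits to i (counting the start) also equals the series Σ_{n≥0} P_ii^(n). A sketch on a hypothetical two-state chain where state 0 stays put with probability 1/2 and otherwise jumps to an absorbing state, so f_0 = 1/2:

```python
# Sketch: for a transient state i, mean visits = 1 / (1 - f_i), which equals
# the (truncated) series sum_{n>=0} P_ii^(n).  Exact rational arithmetic is
# used to keep the comparison clean.
from fractions import Fraction

# Hypothetical chain: state 1 is absorbing; from state 0, return is only
# possible via the self-loop, so f_0 = 1/2.
P = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(0), Fraction(1)]]
f0 = Fraction(1, 2)

def visits_to_0(P, N):
    """Truncated series sum_{n=0}^{N} P_00^(n)."""
    total = Fraction(0)
    Pk = [[Fraction(int(i == j)) for j in range(2)] for i in range(2)]  # P^0 = I
    for _ in range(N + 1):
        total += Pk[0][0]
        Pk = [[sum(Pk[i][k] * P[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
    return total

approx = visits_to_0(P, 60)
exact = 1 / (1 - f0)                 # geometric mean number of visits
print(float(approx), float(exact))  # both close to 2.0
```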
Corollary. If state i is recurrent, and state i communicates with state j, then state j is recurrent.
Example 10. Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix

    ⎡  0    0   1/2  1/2 ⎤
P = ⎢  1    0    0    0  ⎥
    ⎢  0    1    0    0  ⎥
    ⎣  0    1    0    0  ⎦
Example 12. Specify the classes of the following Markov chains, and determine whether they are transient or recurrent.

        ⎡  0   1/2  1/2 ⎤
(1) P = ⎢ 1/2   0   1/2 ⎥
        ⎣ 1/2  1/2   0  ⎦

        ⎡  0    0    0    1  ⎤
(2) P = ⎢  0    0    0    1  ⎥
        ⎢ 1/2  1/2   0    0  ⎥
        ⎣  0    0    1    0  ⎦

        ⎡ 1/2   0   1/2   0    0  ⎤
        ⎢ 1/4  1/2  1/4   0    0  ⎥
(3) P = ⎢ 1/2   0   1/2   0    0  ⎥
        ⎢  0    0    0   1/2  1/2 ⎥
        ⎣  0    0    0   1/2  1/2 ⎦

        ⎡ 1/4  3/4   0    0    0  ⎤
        ⎢ 1/2  1/2   0    0    0  ⎥
(4) P = ⎢  0    0    1    0    0  ⎥
        ⎢  0    0   1/3  2/3   0  ⎥
        ⎣  1    0    0    0    0  ⎦
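For finite chains, questions like Example 12 can be answered mechanically using the fact that a communicating class is recurrent if and only if it is closed (no positive-probability transition leaves it); otherwise it is transient. A sketch, applied here to matrix (4) as transcribed above:

```python
# Sketch: classify each communicating class of a finite chain as recurrent
# (closed) or transient (some transition leaves the class).

def reachable_from(P, i):
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def classify(P):
    n = len(P)
    reach = [reachable_from(P, i) for i in range(n)]
    out, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = sorted(j for j in range(n) if j in reach[i] and i in reach[j])
        assigned |= set(cls)
        # Closed class: every positive transition from the class stays inside.
        closed = all(P[s][t] == 0 for s in cls for t in range(n) if t not in cls)
        out.append((cls, "recurrent" if closed else "transient"))
    return out

P4 = [[1/4, 3/4, 0, 0, 0],
      [1/2, 1/2, 0, 0, 0],
      [0, 0, 1, 0, 0],
      [0, 0, 1/3, 2/3, 0],
      [1, 0, 0, 0, 0]]
for cls, kind in classify(P4):
    print(cls, kind)
# Classes: [0, 1] recurrent, [2] recurrent, [3] transient, [4] transient
```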
Periodicity
Consider the Markov chain shown in the following figure. The period of a state i is d(i) = gcd{ n ≥ 1 : P_ii^(n) > 0 }, and state i is said to be aperiodic if d(i) = 1.
If there is a self-transition in the chain (p_ii > 0 for some i), then the chain is aperiodic.
Suppose that you can go from state i back to state i in l steps, i.e., P_ii^(l) > 0, and also that P_ii^(m) > 0. If gcd(l, m) = 1, then state i is aperiodic.
An irreducible chain is aperiodic if and only if there exists a positive integer n such that all elements of the matrix P^n are strictly positive, i.e., P_ij^(n) > 0 for all i, j ∈ S.
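The period can be computed directly from its definition by taking the gcd of the return times observed in successive powers of P. A sketch, on two small hypothetical chains (truncating the search at a finite horizon, which suffices for these examples):

```python
# Sketch: compute d(i) = gcd{ n >= 1 : P_ii^(n) > 0 }, truncated at a horizon.
from math import gcd

def period(P, i, horizon=50):
    n = len(P)
    d = 0
    Pk = [row[:] for row in P]            # Pk holds P^m
    for m in range(1, horizon + 1):
        if Pk[i][i] > 0:
            d = gcd(d, m)                 # gcd(0, m) = m on the first return
        Pk = [[sum(Pk[a][k] * P[k][b] for k in range(n)) for b in range(n)]
              for a in range(n)]
    return d

# Deterministic flip between 0 and 1: returns only at even times, period 2.
flip = [[0.0, 1.0],
        [1.0, 0.0]]
print(period(flip, 0))  # 2

# Adding a self-loop at state 0 (p_00 > 0) makes the state aperiodic.
lazy = [[0.5, 0.5],
        [1.0, 0.0]]
print(period(lazy, 0))  # 1
```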
It can be shown that in a finite-state Markov chain, all recurrent states are positive recurrent. (A recurrent state i is positive recurrent if, starting in i, the expected time until the process reenters i is finite.)
Ergodic State
Positive recurrent, aperiodic states are called ergodic.
Exercise. Consider a Markov chain on Ω = {1, 2, 3, 4, 5, 6} specified by the transition probability matrix

    ⎡ 1/4  3/4   0    0    0    0 ⎤
    ⎢ 1/2  1/2   0    0    0    0 ⎥
P = ⎢  0    0    1    0    0    0 ⎥
    ⎢  0    0   1/3  2/3   0    0 ⎥
    ⎢  1    0    0    0    0    0 ⎥
    ⎣  0    0    0    0    0    1 ⎦
3. Which states are transient and which are recurrent? Justify your answer.
4. Let X0 be the initial state with distribution π0 = (0, 1/4, 3/4, 0, 0, 0)^T corresponding to the probability of being in states 1, 2, 3, 4, 5, 6 respectively. Let X0, X1, X2, . . . be the Markov chain constructed using P above. What is E(X1)?
5. What is Var(X1 )?
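Parts 4 and 5 can be checked numerically, assuming P and π0 as transcribed above: the distribution of X1 is π1 = π0^T P, and the moments follow by summing over the state labels 1..6. A sketch in exact arithmetic:

```python
# Sketch for parts 4 and 5, assuming P and pi0 as transcribed above.
# States are labelled 1..6; pi0 puts mass 1/4 on state 2 and 3/4 on state 3.
from fractions import Fraction as F

P = [[F(1, 4), F(3, 4), 0, 0, 0, 0],
     [F(1, 2), F(1, 2), 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [0, 0, F(1, 3), F(2, 3), 0, 0],
     [1, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 1]]

pi0 = [0, F(1, 4), F(3, 4), 0, 0, 0]

# Distribution of X1: pi1 = pi0^T P, i.e. pi1[j] = sum_i pi0[i] * P[i][j].
pi1 = [sum(pi0[i] * P[i][j] for i in range(6)) for j in range(6)]
# pi1 = (1/8, 1/8, 3/4, 0, 0, 0)

states = [1, 2, 3, 4, 5, 6]
E = sum(s * p for s, p in zip(states, pi1))        # E(X1)
E2 = sum(s * s * p for s, p in zip(states, pi1))   # E(X1^2)
Var = E2 - E * E                                   # Var(X1)

print(E)    # 21/8
print(Var)  # 31/64
```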