11.2.7 Solved Problems
Problem 1

Consider the Markov chain with three states, $S = \{1, 2, 3\}$, that has the following transition matrix:

$$P = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\[4pt] \frac{1}{3} & 0 & \frac{2}{3} \\[4pt] 0 & \frac{1}{2} & \frac{1}{2} \end{bmatrix}.$$

a. Draw the state transition diagram for this chain.


b. If we know $P(X_1 = 1) = P(X_1 = 2) = \frac{1}{4}$, find

$$P(X_1 = 3, X_2 = 2, X_3 = 1).$$

Solution
a. The state transition diagram is shown in Figure 11.6.

Figure 11.6 - A state transition diagram.

b. First, we obtain

$$P(X_1 = 3) = 1 - P(X_1 = 1) - P(X_1 = 2) = 1 - \frac{1}{4} - \frac{1}{4} = \frac{1}{2}.$$

We can now write

$$P(X_1 = 3, X_2 = 2, X_3 = 1) = P(X_1 = 3) \cdot p_{32} \cdot p_{21} = \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{12}.$$
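The computation in part (b) can be checked numerically. Below is a minimal Python sketch using exact fractions; the matrix and initial distribution are those given in the problem, and `path_probability` is a hypothetical helper name introduced here for illustration.

```python
from fractions import Fraction as F

# Transition matrix from Problem 1; entry [i][j] holds p_ij
# (states 1, 2, 3 map to indices 0, 1, 2).
P = [
    [F(1, 2), F(1, 4), F(1, 4)],
    [F(1, 3), F(0),    F(2, 3)],
    [F(0),    F(1, 2), F(1, 2)],
]

# Initial distribution: P(X1=1) = P(X1=2) = 1/4, so P(X1=3) = 1/2.
init = [F(1, 4), F(1, 4), F(1, 2)]

def path_probability(init, P, states):
    """P(X1=s1, X2=s2, ...) by the Markov property: the initial
    probability times the one-step probabilities along the path."""
    prob = init[states[0] - 1]
    for a, b in zip(states, states[1:]):
        prob *= P[a - 1][b - 1]
    return prob

print(path_probability(init, P, [3, 2, 1]))  # 1/12
```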

Problem 2

Consider the Markov chain in Figure 11.17. There are two recurrent classes, $R_1 = \{1, 2\}$ and $R_2 = \{5, 6, 7\}$. Assuming $X_0 = 3$, find the probability that the chain gets absorbed in $R_1$.

Figure 11.17 - A state transition diagram.

Solution
Here, we can replace each recurrent class with one absorbing state. The resulting state diagram is shown in Figure 11.18.

Figure 11.18 - The state transition diagram in which we have replaced each recurrent class with one absorbing state.

Now we can apply our standard methodology to find the probability of absorption in state $R_1$. In particular, define

$$a_i = P(\text{absorption in } R_1 \mid X_0 = i), \quad \text{for all } i \in S.$$

By the above definition, we have $a_{R_1} = 1$ and $a_{R_2} = 0$. To find the unknown values of the $a_i$'s, we can use the following equations:

$$a_i = \sum_{k} a_k p_{ik}, \quad \text{for } i \in S.$$

We obtain

$$a_3 = \frac{1}{2} a_{R_1} + \frac{1}{2} a_4 = \frac{1}{2} + \frac{1}{2} a_4,$$

$$a_4 = \frac{1}{4} a_{R_1} + \frac{1}{4} a_3 + \frac{1}{2} a_{R_2} = \frac{1}{4} + \frac{1}{4} a_3.$$

Solving the above equations, we obtain

$$a_3 = \frac{5}{7}, \quad a_4 = \frac{3}{7}.$$

Therefore, if $X_0 = 3$, the chain will end up in class $R_1$ with probability $a_3 = \frac{5}{7}$.
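As a sanity check, the two absorption equations can be solved mechanically with exact arithmetic. The coefficients below are read off the equations in the solution; this is a sketch, not part of the original text.

```python
from fractions import Fraction as F

# From the solution, with a_{R1} = 1 and a_{R2} = 0:
#   a3 = 1/2 + (1/2)*a4
#   a4 = 1/4 + (1/4)*a3
# Substituting the second equation into the first leaves one unknown:
#   a3 * (1 - (1/2)*(1/4)) = 1/2 + (1/2)*(1/4)
a3 = (F(1, 2) + F(1, 2) * F(1, 4)) / (1 - F(1, 2) * F(1, 4))
a4 = F(1, 4) + F(1, 4) * a3
print(a3, a4)  # 5/7 3/7
```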

Problem 3

Consider the Markov chain of Problem 2. Again assume $X_0 = 3$. We would like to find the expected time (number of steps) until the chain gets absorbed in $R_1$ or $R_2$. More specifically, let $T$ be the absorption time, i.e., the first time the chain visits a state in $R_1$ or $R_2$. We would like to find $E[T \mid X_0 = 3]$.

Solution
Here we follow our standard procedure for finding mean hitting times. Consider
Figure 11.18. Let T be the first time the chain visits R1 or R2 . For all i ∈ S ,
define

$$t_i = E[T \mid X_0 = i].$$

By the above definition, we have $t_{R_1} = t_{R_2} = 0$. To find $t_3$ and $t_4$, we can use the following equations:

$$t_i = 1 + \sum_{k} t_k p_{ik}, \quad \text{for } i = 3, 4.$$

Specifically, we obtain

$$t_3 = 1 + \frac{1}{2} t_{R_1} + \frac{1}{2} t_4 = 1 + \frac{1}{2} t_4,$$

$$t_4 = 1 + \frac{1}{4} t_{R_1} + \frac{1}{4} t_3 + \frac{1}{2} t_{R_2} = 1 + \frac{1}{4} t_3.$$

Solving the above equations, we obtain

$$t_3 = \frac{12}{7}, \quad t_4 = \frac{10}{7}.$$

Therefore, if $X_0 = 3$, it will take on average $\frac{12}{7}$ steps until the chain gets absorbed in $R_1$ or $R_2$.
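The answer can also be estimated by Monte Carlo simulation of the reduced chain of Figure 11.18. This is a sketch; the transition probabilities are the ones used in the equations above, and the estimate is only approximate.

```python
import random

random.seed(1)

# Reduced chain of Figure 11.18: from state 3, go to R1 or to 4 with
# probability 1/2 each; from state 4, go to R1 w.p. 1/4, back to 3
# w.p. 1/4, and to R2 w.p. 1/2. R1 and R2 are absorbing.
def absorption_time(start=3):
    state, steps = start, 0
    while state in (3, 4):
        u = random.random()
        if state == 3:
            state = "R1" if u < 0.5 else 4
        else:
            state = "R1" if u < 0.25 else (3 if u < 0.5 else "R2")
        steps += 1
    return steps

trials = 100_000
est = sum(absorption_time() for _ in range(trials)) / trials
print(est)  # should be close to 12/7 ≈ 1.714
```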

Problem 4

Consider the Markov chain shown in Figure 11.19. Assume $X_0 = 1$, and let $R$ be the first time that the chain returns to state 1, i.e.,

$$R = \min\{n \geq 1 : X_n = 1\}.$$

Find $E[R \mid X_0 = 1]$.

Figure 11.19 - A state transition diagram.

Solution
In this question, we are asked to find the mean return time to state 1. Let $r_1$ be the mean return time to state 1, i.e., $r_1 = E[R \mid X_0 = 1]$. Then

$$r_1 = 1 + \sum_{k} t_k p_{1k},$$

where $t_k$ is the expected time until the chain hits state 1 given $X_0 = k$. Specifically,

$$t_1 = 0,$$

$$t_k = 1 + \sum_{j} t_j p_{kj}, \quad \text{for } k \neq 1.$$

So, let's first find the $t_k$'s. We obtain

$$t_2 = 1 + \frac{1}{3} t_1 + \frac{2}{3} t_3 = 1 + \frac{2}{3} t_3,$$

$$t_3 = 1 + \frac{1}{2} t_3 + \frac{1}{2} t_1 = 1 + \frac{1}{2} t_3.$$

Solving the above equations, we obtain

$$t_3 = 2, \quad t_2 = \frac{7}{3}.$$

Now, we can write

$$r_1 = 1 + \frac{1}{4} t_1 + \frac{1}{2} t_2 + \frac{1}{4} t_3 = 1 + \frac{1}{4} \cdot 0 + \frac{1}{2} \cdot \frac{7}{3} + \frac{1}{4} \cdot 2 = \frac{8}{3}.$$
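The same arithmetic can be carried out with exact fractions. This is a sketch; the transition probabilities are the ones appearing in the equations above.

```python
from fractions import Fraction as F

# Expected hitting times of state 1, with t1 = 0:
#   t3 = 1 + (1/2)*t3 + (1/2)*t1  =>  t3 = 2
#   t2 = 1 + (1/3)*t1 + (2/3)*t3  =>  t2 = 7/3
t1 = F(0)
t3 = (1 + F(1, 2) * t1) / (1 - F(1, 2))
t2 = 1 + F(1, 3) * t1 + F(2, 3) * t3

# One step from state 1, then the hitting time from wherever we land:
r1 = 1 + F(1, 4) * t1 + F(1, 2) * t2 + F(1, 4) * t3
print(r1)  # 8/3
```

As a consistency note, for an irreducible positive recurrent chain the mean return time to a state is the reciprocal of its stationary probability, so this value corresponds to $\pi_1 = \frac{3}{8}$ for this chain.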

Problem 5

Consider the Markov chain shown in Figure 11.20.


Figure 11.20 - A state transition diagram.

a. Is this chain irreducible?


b. Is this chain aperiodic?
c. Find the stationary distribution for this chain.
d. Is the stationary distribution a limiting distribution for the chain?

Solution
a. The chain is irreducible since we can go from any state to any other state in a finite number of steps.
b. The chain is aperiodic since there is a self-transition, i.e., p11 > 0.
c. To find the stationary distribution, we need to solve

$$\pi_1 = \frac{1}{2}\pi_1 + \frac{1}{3}\pi_2 + \frac{1}{2}\pi_3,$$

$$\pi_2 = \frac{1}{4}\pi_1 + \frac{1}{2}\pi_3,$$

$$\pi_3 = \frac{1}{4}\pi_1 + \frac{2}{3}\pi_2,$$

$$\pi_1 + \pi_2 + \pi_3 = 1.$$

We find

$$\pi_1 = \frac{16}{35} \approx 0.457, \quad \pi_2 = \frac{9}{35} \approx 0.257, \quad \pi_3 = \frac{2}{7} \approx 0.286.$$

d. The above stationary distribution is a limiting distribution for the chain because the chain is finite, irreducible, and aperiodic.
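The stationary distribution in part (c) can be checked by power iteration. This is a sketch: the transition matrix below is reconstructed from the balance equations in part (c) (each row sums to one), and convergence follows from the chain being finite, irreducible, and aperiodic.

```python
# Transition matrix consistent with the balance equations in part (c):
# row i gives the transition probabilities out of state i+1.
P = [
    [1 / 2, 1 / 4, 1 / 4],
    [1 / 3, 0.0,   2 / 3],
    [1 / 2, 1 / 2, 0.0],
]

# Power iteration: repeatedly push a distribution through P; for a
# finite irreducible aperiodic chain this converges to the stationary
# distribution regardless of the starting point.
pi = [1.0, 0.0, 0.0]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(x, 3) for x in pi])  # [0.457, 0.257, 0.286]
```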
Problem 6
Consider the Markov chain shown in Figure 11.21. Assume that $\frac{1}{2} < p < 1$. Does this chain have a limiting distribution? For all $i, j \in \{0, 1, 2, \cdots\}$, find

$$\lim_{n \to \infty} P(X_n = j \mid X_0 = i).$$

Figure 11.21 - A state transition diagram.

Solution
This chain is irreducible since all states communicate with each other. It is also aperiodic since it includes a self-transition, $p_{00} > 0$. Let's write the equations for a stationary distribution. For state 0, we can write

$$\pi_0 = (1 - p)\pi_0 + (1 - p)\pi_1,$$

which results in

$$\pi_1 = \frac{p}{1 - p}\pi_0.$$

For state 1, we can write

$$\pi_1 = p\pi_0 + (1 - p)\pi_2 = (1 - p)\pi_1 + (1 - p)\pi_2,$$

which results in

$$\pi_2 = \frac{p}{1 - p}\pi_1.$$

Similarly, for any $j \in \{1, 2, \cdots\}$, we obtain

$$\pi_j = \alpha \pi_{j-1},$$

where $\alpha = \frac{p}{1 - p}$. Note that since $\frac{1}{2} < p < 1$, we conclude that $\alpha > 1$. We obtain

$$\pi_j = \alpha^j \pi_0, \quad \text{for } j = 1, 2, \cdots.$$

Finally, we must have

$$1 = \sum_{j=0}^{\infty} \pi_j = \sum_{j=0}^{\infty} \alpha^j \pi_0 = \infty \cdot \pi_0, \quad \text{(since } \alpha > 1\text{)}.$$

Therefore, the above equation cannot be satisfied if $\pi_0 > 0$. If $\pi_0 = 0$, then all $\pi_j$'s must be zero, so they cannot sum to 1. We conclude that there is no stationary distribution. This means that either all states are transient, or all states are null recurrent. In either case, we have

$$\lim_{n \to \infty} P(X_n = j \mid X_0 = i) = 0, \quad \text{for all } i, j.$$

We will see how to figure out if the states are transient or null recurrent in the End
of Chapter Problems (see Problem 15 in Section 11.5).
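The conclusion can be illustrated by simulation. This is a sketch under an assumed reading of Figure 11.21: from state 0 the chain stays with probability $1-p$ or moves to 1 with probability $p$; from any $j \geq 1$ it moves up with probability $p$ and down with probability $1-p$. With $p > \frac{1}{2}$ the chain drifts to infinity, so the fraction of runs found at any fixed state shrinks as $n$ grows.

```python
import random

random.seed(0)
p = 0.7          # any p in (1/2, 1)
trials = 2000

def run(steps):
    """Simulate the assumed chain for a fixed number of steps, from 0."""
    x = 0
    for _ in range(steps):
        if random.random() < p:
            x += 1           # move up with probability p
        elif x > 0:
            x -= 1           # move down with probability 1-p
        # at state 0, the failed up-move is the self-loop of prob 1-p
    return x

# Fraction of runs found at state 0 at time n: it decays toward 0,
# consistent with lim P(X_n = 0 | X_0 = 0) = 0.
for n in (10, 50, 200):
    print(n, sum(run(n) == 0 for _ in range(trials)) / trials)
```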
