
Markov Chains, Examples

June 20, 201


Example 6.4

On a given day, a machine is either working or broken. If it is working on a given day, then it will break down by tomorrow with probability b. If it is broken, then it will be repaired by tomorrow with probability r. Assume r and b are independent of the machine's status on previous days.

[State diagram: states 1 (working) and 2 (broken); 1 → 2 with probability b, 2 → 1 with probability r, self-loops with probabilities 1 - b and 1 - r]

We model the machine by a Markov chain, because if we know the status of the machine on any given day, then the distribution of its status at future times is fully computable and is independent of the past.

The state space is S = {1, 2}: X_n = 1 if on the nth day the machine is working; X_n = 2 otherwise.

The transition probability matrix is given by

P = [ 1 - b    b   ]
    [   r    1 - r ]
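As a quick sanity check, here is a minimal numpy sketch that builds this P and simulates a few days of the chain; the values b = 0.1 and r = 0.8 are the illustrative ones used later in this example.

    import numpy as np

    b, r = 0.1, 0.8  # illustrative values, reused later in this example

    # Row i of P is the distribution of tomorrow's state given today's
    # state i (index 0 = working, index 1 = broken).
    P = np.array([[1 - b, b],
                  [r, 1 - r]])

    rng = np.random.default_rng(0)
    state, history = 0, [0]          # start with a working machine
    for _ in range(10):
        state = rng.choice(2, p=P[state])
        history.append(state)
    print(history)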
n-step Computation
Assume the machine is working on the nth day. What is the probability that the status of the machine on days n + 1, n + 2, n + 3 is 1, 2, 1?
Solution.
We want to find P(X_{n+1} = 1, X_{n+2} = 2, X_{n+3} = 1 | X_n = 1).

P(X_{n+1} = 1, X_{n+2} = 2, X_{n+3} = 1 | X_n = 1)
  = P(X_{n+3} = 1 | X_{n+2} = 2, X_{n+1} = 1, X_n = 1) P(X_{n+1} = 1, X_{n+2} = 2 | X_n = 1)
  = r P(X_{n+2} = 2 | X_{n+1} = 1, X_n = 1) P(X_{n+1} = 1 | X_n = 1)
  = r b P(X_{n+1} = 1 | X_n = 1) = r b (1 - b)
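A minimal numerical check of this path probability, multiplying the one-step transition probabilities along the path (b = 0.1 and r = 0.8 are again only illustrative):

    import numpy as np

    b, r = 0.1, 0.8  # illustrative values
    P = np.array([[1 - b, b],
                  [r, 1 - r]])

    # Path 1 -> 1 -> 2 -> 1 (0-indexed states): multiply the one-step
    # transition probabilities along the path.
    path_prob = P[0, 0] * P[0, 1] * P[1, 0]
    print(path_prob, r * b * (1 - b))  # both print 0.072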
Example 6.4 continued, Markov chains with short memory

Let's change the last example as follows. If the machine is broken for 3 days, then it is replaced. Now, if the machine is broken today, its status tomorrow depends on the previous days, so the two-state chain above no longer suffices. However, as the following picture shows, by adding new states we can make a new Markov chain that models the machine's status.
[State diagram: state 1 = working; states 2, 3, 4 = broken for 1, 2, 3 consecutive days; 1 → 2 with probability b, 2 → 1 and 3 → 1 with probability r, 2 → 3 and 3 → 4 with probability 1 - r, 4 → 1 with probability 1 (replacement)]

P = [ 1 - b    b      0      0   ]
    [   r      0    1 - r    0   ]
    [   r      0      0    1 - r ]
    [   1      0      0      0   ]


Example 6.4 continued
Let b = .1, r = .8. What is the probability that this machine will be replaced on the 10th day, given that it was working on the first day?
[State diagram as above with b = .1, r = .8: self-loop .9 at state 1; 1 → 2 with probability .1; 2 → 1 and 3 → 1 with probability .8; 2 → 3 and 3 → 4 with probability .2; 4 → 1 with probability 1]

P = [ .9   .1    0    0 ]
    [ .8    0   .2    0 ]
    [ .8    0    0   .2 ]
    [  1    0    0    0 ]

P(the machine is replaced on the 10th day | X_1 = 1) = P^10(1, 4),
P^3 =
[ .889   .089   .018   .004 ]
[ .896   .088   .016   .000 ]
[ .892   .092   .016   .000 ]
[ .890   .090   .020   .000 ]

P^4 =
[ .8897   .0889   .0178   .0036 ]
[ .8896   .0896   .0176   .0032 ]
[ .8892   .0892   .0184   .0032 ]
[ .8890   .0890   .0180   .0040 ]

P^10 =
[ .8896797169   .0889679721   .0177935938   .0035587172 ]
[ .8896796992   .0889679680   .0177936000   .0035587328 ]
[ .8896797148   .0889679612   .0177935928   .0035587312 ]
[ .8896797210   .0889679690   .0177935860   .0035587240 ]
P^10(1, 4) ≈ .0036. We can interpret this as saying that about 0.36% of all machines need to be replaced on the 10th day.
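These powers are easy to reproduce; a minimal numpy sketch (the matrix entries above are rounded):

    import numpy as np

    P = np.array([[.9, .1, 0, 0],
                  [.8, 0, .2, 0],
                  [.8, 0, 0, .2],
                  [1, 0, 0, 0]])

    P10 = np.linalg.matrix_power(P, 10)
    print(P10[0, 3])  # P^10(1, 4) ~ 0.00356, about 0.36% of machines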
Initial distribution

If 20% of machines are bought used and are already in state 2, and 5% are in state 3 (and the rest are in state 1), what is P(X_4 = 4)?

P(X_4 = 4) = P(X_4 = 4 | X_0 = 1) P(X_0 = 1)
  + P(X_4 = 4 | X_0 = 2) P(X_0 = 2) + P(X_4 = 4 | X_0 = 3) P(X_0 = 3)
  = .0036(.75) + .0032(.2) + .0032(.05)

The vector π_0 defined by

π_0 = (P(X_0 = 1), P(X_0 = 2), P(X_0 = 3), P(X_0 = 4)) = (.75, .2, .05, 0)

is called the initial distribution of the MC (X_n)_{n ≥ 0}.
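The same number can be read off from the vector-matrix product π_0 P^4; a sketch continuing the numpy setup above:

    import numpy as np

    P = np.array([[.9, .1, 0, 0],
                  [.8, 0, .2, 0],
                  [.8, 0, 0, .2],
                  [1, 0, 0, 0]])
    pi0 = np.array([.75, .2, .05, 0])

    pi4 = pi0 @ np.linalg.matrix_power(P, 4)  # distribution of X_4
    print(pi4[3])  # P(X_4 = 4) ~ .0036(.75) + .0032(.2) + .0032(.05) ~ .0035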
Initial distribution 2

If π_0 = (.75, .2, .05, 0), find P(X_1 = j) for j = 1, ..., 4.

P(X_1 = 1) = Σ_{i=1}^{4} P(X_0 = i) P(X_1 = 1 | X_0 = i) = Σ_{i=1}^{4} π_0(i) p_{i,1}
           = (π_0(1), ..., π_0(4)) · (p_{1,1}, p_{2,1}, p_{3,1}, p_{4,1})

Similarly,
P(X_1 = 2) = π_0 · (p_{1,2}, p_{2,2}, p_{3,2}, p_{4,2}),
P(X_1 = 3) = π_0 · (p_{1,3}, p_{2,3}, p_{3,3}, p_{4,3}),
P(X_1 = 4) = π_0 · (p_{1,4}, p_{2,4}, p_{3,4}, p_{4,4}).

P(X_1 = j) = product of the vector π_0 and the j-th column of P.
Initial distribution

Define π_1 to be the distribution of X_1; then

π_1 = (P(X_1 = 1), P(X_1 = 2), P(X_1 = 3), P(X_1 = 4))
    = (π_0 · col_1(P), π_0 · col_2(P), π_0 · col_3(P), π_0 · col_4(P))
    = π_0 P

then:

π_1 = π_0 P

Similarly, if we define π_n to be the distribution of X_n, then

π_{n+1} = π_n P = (π_{n-1} P) P = ... = π_0 P^{n+1}
Example

Assume 60% of machines are in state 1, 20% in state 2, 15% in state 3, and 5% in state 4. What is the distribution of machines over the states after 3 days?

Solution.
We are looking for π_3, where π_0 = (.6, .2, .15, .05).

π_3 = π_0 P^3 = (.6, .2, .15, .05) [ .889   .089   .018   .004 ]
                                   [ .896   .088   .016   .000 ]
                                   [ .892   .092   .016   .000 ]
                                   [ .890   .090   .020   .000 ]

    = (.8909, .0893, .0174, .0024)
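A sketch checking this by evolving the distribution one day at a time, π_{n+1} = π_n P:

    import numpy as np

    P = np.array([[.9, .1, 0, 0],
                  [.8, 0, .2, 0],
                  [.8, 0, 0, .2],
                  [1, 0, 0, 0]])

    pi = np.array([.6, .2, .15, .05])  # pi_0
    for _ in range(3):                 # pi_{n+1} = pi_n P, three times
        pi = pi @ P
    print(pi)  # ~ (.8909, .0893, .0174, .0024)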


Chapman-Kolmogorov equation

Let m < n; recall that

P(X_{m+n} = j, X_m = l | X_0 = i)
  = P(X_{m+n} = j | X_m = l, X_0 = i) P(X_m = l | X_0 = i)

Then

P(X_{n+m} = j | X_0 = i) = Σ_{l ∈ S} P(X_{n+m} = j, X_m = l | X_0 = i)
  = Σ_{l ∈ S} P(X_{n+m} = j | X_m = l, X_0 = i) P(X_m = l | X_0 = i)
  = Σ_{l ∈ S} P(X_{n+m} = j | X_m = l) P(X_m = l | X_0 = i)
  = Σ_{l ∈ S} P(X_m = l | X_0 = i) P(X_n = j | X_0 = l) = Σ_{l ∈ S} p^m_{i,l} p^n_{l,j},

where p^n_{i,j} = P(X_n = j | X_0 = i) = P(X_{m+n} = j | X_m = i) is the probability of going from state i to state j in n steps.
Going from state i ∈ S to state j ∈ S in n + m steps, S = {1, ..., K}

[Diagram: every path from state i at time 0 passes through some state l ∈ {1, ..., K} at time n before arriving at state j at time n + m; the first leg has probability p^n_{i,l} and the second leg p^m_{l,j}]

P(X_{n+m} = j | X_0 = i) = P^{n+m}(i, j) = (P^n P^m)(i, j)
  = Σ_{l=1}^{K} P^n(i, l) P^m(l, j)

P(X_n = l, X_{n+m} = j | X_0 = i) = p^n_{i,l} p^m_{l,j}
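A quick numerical check of the Chapman-Kolmogorov identity P^{n+m} = P^n P^m, using the 4-state machine chain from above as an illustration:

    import numpy as np

    P = np.array([[.9, .1, 0, 0],
                  [.8, 0, .2, 0],
                  [.8, 0, 0, .2],
                  [1, 0, 0, 0]])

    n, m = 3, 4
    lhs = np.linalg.matrix_power(P, n + m)
    rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
    print(np.allclose(lhs, rhs))  # True: P^(n+m) = P^n P^m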
Large time behavior


π_n = π_0 P^n suggests a dynamical perspective of the Markov chain, in which the distribution of X_n evolves with time.

In 2 dimensions, π_0 and the π_n's are points on the line x + y = 1. We choose and fix a point (x_0, y_0) to be π_0, and study its transformation under P.

We know that π_n = (x_0, y_0) P^n is again on the same line for n ≥ 1 (prove this).

[Figure: the line x + y = 1 with the successive points π_1 = π_0 P, π_2 = π_0 P^2, ... marked on it]
Examples

Let P = [ 3/4   1/4 ]
        [ 1/6   5/6 ].  Let π_0 = (2/3, 1/3). Then π_1 = π_0 P = (5/9, 4/9).

To investigate the limit of π_n, we have to find out whether π_0 P^n has a limit or not.

We can check by computer that

P^n → [ .4   .6 ]
      [ .4   .6 ]

Let Π = lim_{n→∞} P^n; then the rows of Π are identical. This means that wherever the chain starts, for large n, going to any state j ∈ S has the same probability. Or we can say that the chain forgets the starting point when n is large. (This is not true for all chains.)

Finally, for every π_0 we have

lim_{n→∞} π_n = lim_{n→∞} π_0 P^n = (.4, .6)

We call this limit π, i.e. π = (.4, .6).
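The "check by computer" takes a few lines of numpy:

    import numpy as np

    P = np.array([[3/4, 1/4],
                  [1/6, 5/6]])

    for n in (1, 5, 20, 100):
        print(n, np.linalg.matrix_power(P, n))
    # Both rows approach (.4, .6), so pi_0 P^n -> (.4, .6) for every
    # initial distribution pi_0.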



Consider the telephone system with states 0, 1, 2, and

P = [ 3/4   1/4    0   ]
    [ 1/8   2/3   5/24 ]
    [  0    1/6   5/6  ]

We want to know, in the long run, what is the probability that the phone is in state 0.

We start with some initial distribution π_0, and would like to find

lim_{n→∞} π_n = lim_{n→∞} π_0 P^n

We can check that

P^n → [ .182   .364   .455 ]
      [ .182   .364   .455 ]
      [ .182   .364   .455 ]

so π = (.182, .364, .455). Therefore, in the long run, regardless of our initial distribution, P(X_n = 0) ≈ .182.
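π can also be found without taking powers, by solving the linear system π P = π together with the normalization Σ_i π(i) = 1; a minimal sketch:

    import numpy as np

    P = np.array([[3/4, 1/4, 0],
                  [1/8, 2/3, 5/24],
                  [0, 1/6, 5/6]])

    # pi P = pi  <=>  (P^T - I) pi^T = 0. The system is rank-deficient,
    # so replace one equation with the normalization sum(pi) = 1.
    A = P.T - np.eye(3)
    A[-1, :] = 1.0
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print(pi)  # ~ (.1818, .3636, .4545) = (2/11, 4/11, 5/11)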
Large time behavior and Equilibrium

Assume π_n converges to some limit π ∈ R^m. Compute π_1 if π_0 = π:

π_1 = π P = ( lim_{n→∞} π_0 P^n ) P = lim_{n→∞} π_0 P^{n+1} = π

i.e., if the limit exists, then starting with π all the later distributions are π:

π_{n+1} = π,  n ≥ 1

Any distribution π such that π P = π is called the invariant distribution for P.

π is also called the stationary distribution, the steady-state distribution, or the equilibrium.

π is a left eigenvector of P with eigenvalue 1.
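Since π is a left eigenvector of P with eigenvalue 1, a general eigensolver finds it too; a sketch (numpy returns right eigenvectors, so we pass the transpose):

    import numpy as np

    P = np.array([[3/4, 1/4, 0],
                  [1/8, 2/3, 5/24],
                  [0, 1/6, 5/6]])

    vals, vecs = np.linalg.eig(P.T)     # left eigenvectors of P
    k = np.argmin(np.abs(vals - 1))     # eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    pi /= pi.sum()                      # normalize to a probability vector
    print(pi)  # ~ (.1818, .3636, .4545)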


Example of calculating P^n

Let P = [ 1 - p    p   ]
        [   q    1 - q ], where 0 < p, q < 1. Then P diagonalizes as

P = [ 1    p ] [ 1      0      ] [ q/(p+q)    p/(p+q) ]
    [ 1   -q ] [ 0  1 - p - q  ] [ 1/(p+q)   -1/(p+q) ]

P^n = [ 1    p ] [ 1        0        ] [ q/(p+q)    p/(p+q) ]
      [ 1   -q ] [ 0  (1 - p - q)^n  ] [ 1/(p+q)   -1/(p+q) ]

    = [ (q + p(1-p-q)^n)/(p+q)   (p - p(1-p-q)^n)/(p+q) ]
      [ (q - q(1-p-q)^n)/(p+q)   (p + q(1-p-q)^n)/(p+q) ]

    → [ q/(p+q)   p/(p+q) ]
      [ q/(p+q)   p/(p+q) ]

What is π?
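A numeric sanity check of this closed form; p = 1/4, q = 1/6 are illustrative (they reproduce the 2-state example above, whose limiting rows were (.4, .6)):

    import numpy as np

    p, q = 1/4, 1/6  # illustrative values; this is the earlier 2-state P
    P = np.array([[1 - p, p],
                  [q, 1 - q]])
    n = 7

    lam = (1 - p - q) ** n  # n-th power of the second eigenvalue
    closed = np.array([[q + p * lam, p - p * lam],
                       [q - q * lam, p + q * lam]]) / (p + q)

    print(np.allclose(np.linalg.matrix_power(P, n), closed))  # True
    # The limiting rows are (q, p)/(p+q), here (.4, .6).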
3 Important Examples

We will give 3 examples such that P^n doesn't have a limit.

These 3 examples comprise all the situations in which P^n doesn't have a unique limit.
SRW with Reflecting Boundary on {0, ..., 4} (Periodic)

[State diagram: states 0, 1, 2, 3, 4 in a row; 0 → 1 and 4 → 3 with probability 1; each interior state moves to either neighbor with probability .5]

P = [ 0    1    0    0    0 ]
    [ .5   0   .5    0    0 ]
    [ 0   .5    0   .5    0 ]
    [ 0    0   .5    0   .5 ]
    [ 0    0    0    1    0 ]

P^{2n} has nonzero entries only where i + j is even, and for n large

P^{2n} ≈ [ .25   0    .5    0   .25 ]
         [  0   .5    0    .5    0  ]
         [ .25   0    .5    0   .25 ]
         [  0   .5    0    .5    0  ]
         [ .25   0    .5    0   .25 ]

P^{2n+1} ≈ [  0   .5    0   .5    0  ]
           [ .25   0   .5    0   .25 ]
           [  0   .5    0   .5    0  ]
           [ .25   0   .5    0   .25 ]
           [  0   .5    0   .5    0  ]

We say P has period 2. This means that at every state i, if P^n(i, i) > 0, then n is even.
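The period-2 oscillation is easy to see numerically; a sketch:

    import numpy as np

    P = np.array([[0, 1, 0, 0, 0],
                  [.5, 0, .5, 0, 0],
                  [0, .5, 0, .5, 0],
                  [0, 0, .5, 0, .5],
                  [0, 0, 0, 1, 0]])

    even = np.linalg.matrix_power(P, 100)
    odd = np.linalg.matrix_power(P, 101)
    print(even[0])  # ~ (.25, 0, .5, 0, .25)
    print(odd[0])   # ~ (0, .5, 0, .5, 0); P^n keeps oscillating, no limit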
SRW with Absorbing Boundary on {0, ..., 4} (Transient)

[State diagram: states 0, 1, 2, 3, 4; states 0 and 4 are absorbing (self-loops with probability 1); each interior state moves to either neighbor with probability .5]

P = [ 1    0    0    0    0 ]
    [ .5   0   .5    0    0 ]
    [ 0   .5    0   .5    0 ]
    [ 0    0   .5    0   .5 ]
    [ 0    0    0    0    1 ]

P^n → [ 1     0   0   0    0  ]
      [ .75   0   0   0   .25 ]
      [ .5    0   0   0   .5  ]
      [ .25   0   0   0   .75 ]
      [ 0     0   0   0    1  ]

States 1, 2, 3 are transient: p^n_{i,1}, p^n_{i,2}, p^n_{i,3} → 0 for all i, i.e., after a large amount of time has passed, the chain stops visiting states 1, 2, 3.

The rows of the limit are not identical, i.e., the chain does not forget the initial state.
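A small simulation estimating the absorption probabilities from state 1, to compare with the second row of the limit matrix:

    import numpy as np

    P = np.array([[1, 0, 0, 0, 0],
                  [.5, 0, .5, 0, 0],
                  [0, .5, 0, .5, 0],
                  [0, 0, .5, 0, .5],
                  [0, 0, 0, 0, 1]])

    rng = np.random.default_rng(0)
    trials, hits = 100_000, 0
    for _ in range(trials):
        s = 1                        # start in state 1
        while s not in (0, 4):       # run until absorbed at a boundary
            s = rng.choice(5, p=P[s])
        hits += (s == 4)
    print(hits / trials)  # ~ .25, the absorption probability at state 4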
Reducible chains

[State diagram: states 1, 2 form one closed group and states 3, 4, 5 form another; there are no transitions between the two groups]

Let X_n be a Markov chain on the state space S = {1, 2, 3, 4, 5},

P = [ 1/2   1/2    0     0     0   ]
    [ 1/6   5/6    0     0     0   ]
    [  0     0    3/4   1/4    0   ]
    [  0     0    1/8   2/3   5/24 ]
    [  0     0     0    1/6   5/6  ]

P^n → [ .25   .75    0      0      0   ]
      [ .25   .75    0      0      0   ]
      [  0     0   .182   .364   .455 ]
      [  0     0   .182   .364   .455 ]
      [  0     0   .182   .364   .455 ]


The chain reduces to 2 smaller, non-interacting chains: a chain with state space S_1 = {1, 2} and a chain with state space S_2 = {3, 4, 5}.
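A numeric check that the two blocks settle to different limiting rows, so the overall limit depends on the starting block:

    import numpy as np

    P = np.array([[1/2, 1/2, 0, 0, 0],
                  [1/6, 5/6, 0, 0, 0],
                  [0, 0, 3/4, 1/4, 0],
                  [0, 0, 1/8, 2/3, 5/24],
                  [0, 0, 0, 1/6, 5/6]])

    Pn = np.linalg.matrix_power(P, 500)
    print(Pn.round(3))
    # Rows 1-2 tend to (.25, .75, 0, 0, 0); rows 3-5 tend to
    # (0, 0, .182, .364, .455).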