
ISYE6189 - Deterministic Optimization and Stochastic Processes
Topic 7 - Week 7
Discrete-Time Markov Chains
Learning Outcome
• LO3: Apply the concepts of discrete- and continuous-time Markov chains, transition matrices, and state classification
SUBTOPICS:
- INTRODUCTION
- CLASSIFICATION OF STATES
- LIMITING PROBABILITIES
INTRODUCTION
Stochastic Processes
A stochastic process is a collection of random variables $\{X(t), t \in T\}$.
Typically, T is continuous (time) and we have $\{X(t), t \ge 0\}$.
Or, T is discrete and we are observing $\{X_n, n = 0, 1, 2, \ldots\}$
at discrete time points n that may or may not be evenly spaced.
Refer to X(t) as the state of the process at time t.
The state space of the stochastic process is the set of all possible
values of X(t): this set may be discrete or continuous as well.
Markov Chains
In this chapter, consider discrete-state, discrete-time:
A Markov chain is a stochastic process $\{X_n, n = 0, 1, 2, \ldots\}$
where each $X_n$ belongs to the same subset of $\{0, 1, 2, \ldots\}$, and
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$$
for all states $i_0, i_1, \ldots, i_{n-1}$ and all $n \ge 0$.
Say $P_{ij} = P(X_{n+1} = j \mid X_n = i)$.
Then $P_{ij} \ge 0$ for all i, j, and for any i, $\sum_{j=0}^{\infty} P_{ij} = 1$.
Let $P = [P_{ij}]$ be the matrix of one-step transition probabilities.
(Forecasting the Weather) Suppose that the chance of rain tomorrow
depends on previous weather conditions only through whether or not it is
raining today and not on past weather conditions. Suppose also that if it
rains today, then it will rain tomorrow with probability α; and if it does not
rain today, then it will rain tomorrow with probability β. If we say that the
process is in state 0 when it rains and state 1 when it does not rain, then
the preceding is a two-state Markov chain whose transition probabilities
are given by
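In matrix form, with state 0 (rain) and state 1 (no rain), the description above gives

$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$$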
The two-state Markov chain can be used to model a wide variety of
systems that alternate between ON and OFF states. After each unit of
time in the OFF state, the system turns ON with probability p. After each
unit of time in the ON state, the system turns OFF with probability q.
Using 0 and 1 to denote the OFF and ON states, what is the Markov
chain for the system?
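Following the same pattern, with state 0 (OFF) and state 1 (ON), the description yields

$$P = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$$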
A packet voice communications system transmits digitized speech only
during “talkspurts” when the speaker is talking. In every 10-ms interval
(referred to as a timeslot) the system decides whether the speaker is
talking or silent. When the speaker is talking, a speech packet is
generated; otherwise no packet is generated. If the speaker is silent in a
slot, then the speaker is talking in the next slot with probability p = 1/140.

If the speaker is talking in a slot, the speaker is silent in the next slot with
probability q = 1/100. If states 0 and 1 represent silent and talking, sketch
the Markov chain for this packet voice system.
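With state 0 (silent) and state 1 (talking), the chain described above has transition matrix

$$P = \begin{pmatrix} 139/140 & 1/140 \\ 1/100 & 99/100 \end{pmatrix}$$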
(A Communications System) Consider a communications system that
transmits the digits 0 and 1. Each digit transmitted must pass through
several stages, at each of which there is a probability p that the digit
entered will be unchanged when it leaves. Letting Xn denote the digit
entering the nth stage, then
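Since each stage leaves the digit unchanged with probability p and flips it with probability 1 − p, the description gives

$$P = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}$$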
On any given day Gary is either cheerful (C), so-so (S), or glum (G). If he is cheerful today, then he will be C, S, or G tomorrow with respective probabilities 0.5, 0.4, 0.1. If he is feeling so-so today, then he will be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today, then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5.
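Collecting these rows (states ordered C, S, G) gives the transition matrix

$$P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}$$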
(Transforming a Process into a Markov Chain) Suppose that whether or
not it rains today depends on previous weather conditions through the
last two days. Specifically, suppose that if it has rained for the past two
days, then it will rain tomorrow with probability 0.7; if it rained today
but not yesterday, then it will rain tomorrow with probability 0.5; if it
rained yesterday but not today, then it will rain tomorrow with
probability 0.4; if it has not rained in the past two days, then it will rain
tomorrow with probability 0.2.

If we let the state at time n depend only on whether or not it is raining at time n, then the preceding model is not a Markov chain (why not?).
However, we can transform this model into a Markov chain by saying
that the state at any time is determined by the weather conditions
during both that day and the previous day. In other words, we can say
that the process is in
state 0 if it rained both today and yesterday,
state 1 if it rained today but not yesterday,
state 2 if it rained yesterday but not today,
state 3 if it did not rain either yesterday or today.

The preceding would then represent a four-state Markov chain having the transition probability matrix given below.
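Reading the rain probabilities off the description above (for instance, from state 0 the chain moves to state 0 with probability 0.7 if it rains tomorrow, and otherwise to state 2, since tomorrow's state records today as "yesterday"), the matrix is

$$P = \begin{pmatrix} 0.7 & 0 & 0.3 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ 0 & 0.4 & 0 & 0.6 \\ 0 & 0.2 & 0 & 0.8 \end{pmatrix}$$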
n-step Transition Probabilities
Given the chain is in state i at a given time, what is the
probability it will be in state j after n transitions? Find it by
conditioning on the initial transition(s).
$$P_{ij}^{n} = P(X_{m+n} = j \mid X_m = i)$$
$$= \sum_{k=0}^{\infty} P(X_{m+n} = j \mid X_m = i, X_{m+1} = k)\, P(X_{m+1} = k \mid X_m = i)$$
$$= \sum_{k=0}^{\infty} P(X_{m+n} = j \mid X_{m+1} = k)\, P(X_{m+1} = k \mid X_m = i)$$
$$= \sum_{k=0}^{\infty} P_{kj}^{\,n-1} P_{ik}$$
Chapman-Kolmogorov Equations
In general, can find the n-step transition probabilities by
conditioning on the state at any intermediate stage:
$$P_{ij}^{n+m} = P(X_{n+m} = j \mid X_0 = i)$$
$$= \sum_{k=0}^{\infty} P(X_{n+m} = j \mid X_n = k, X_0 = i)\, P(X_n = k \mid X_0 = i)$$
$$= \sum_{k=0}^{\infty} P_{kj}^{m} P_{ik}^{n}$$
Let $P^{(n)}$ be the matrix of n-step transition probabilities:
$$P^{(n+m)} = P^{(n)} P^{(m)}$$
So, by induction, $P^{(n)} = P^n$.
Consider Example 4.1 in which the weather is considered as a two-state
Markov chain. If α = 0.7 and β = 0.4, then calculate the probability that it
will rain four days from today given that it is raining today.

Solution: The one-step transition probability matrix is given by

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$
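A quick numerical check (a minimal numpy sketch; by the Chapman-Kolmogorov equations the four-step matrix is simply the fourth matrix power):

```python
import numpy as np

# One-step matrix: state 0 = rain, state 1 = no rain (alpha = 0.7, beta = 0.4).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P4 = np.linalg.matrix_power(P, 4)   # four-step transition matrix
print(P4[0, 0])                     # 0.5749
```

So the probability that it rains four days from today, given rain today, is $P_{00}^4 = 0.5749$.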
Given that it rained on Monday and Tuesday, what is the probability that
it will rain on Thursday?

Solution: The two-step transition matrix of the four-state chain is computed in the sketch below.
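Using the four-state matrix reconstructed above, rain on Monday and Tuesday puts the chain in state 0 on Tuesday, and rain on Thursday means the state two steps later is 0 or 1:

```python
import numpy as np

# Four-state chain: 0 = rain today & yesterday, 1 = rain today only,
# 2 = rain yesterday only, 3 = no rain either day.
P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])

P2 = P @ P                          # two-step transition matrix
print(P2[0, 0] + P2[0, 1])          # 0.49 + 0.12 = 0.61
```

So the desired probability is 0.61.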
CLASSIFICATION OF STATES
Classification of States
State j is accessible from state i if $P_{ij}^n > 0$ for some $n \ge 0$.
If j is accessible from i and i is accessible from j, we say that states i and j communicate ($i \leftrightarrow j$).
Communication is a class property:
(i) State i communicates with itself, for all $i \ge 0$
(ii) If i communicates with j, then j communicates with i
(iii) If $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.
Therefore, communication divides the state space into mutually exclusive classes.
If all the states communicate, the Markov chain is irreducible.
Recurrence vs. Transience
Let fi be the probability that, starting in state i, the process will ever
reenter state i. If fi = 1, the state is recurrent, otherwise it is transient.
If state i is recurrent then, starting from state i, the process will reenter state i infinitely often (with probability 1).
If state i is transient then, starting in state i, the number of periods in which the process is in state i has a geometric distribution with parameter $1 - f_i$.
Equivalently, state i is recurrent if $\sum_{n=1}^{\infty} P_{ii}^n = \infty$ and transient if $\sum_{n=1}^{\infty} P_{ii}^n < \infty$; this sum is the expected number of returns to state i.

Recurrence (transience) is a class property: if i is recurrent (transient) and $i \leftrightarrow j$, then j is recurrent (transient).
A special case of a recurrent state: if $P_{ii} = 1$, then i is absorbing.
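The series criterion is easy to see numerically. A minimal sketch with an assumed toy chain in which state 1 is absorbing (hence recurrent) and state 0 is transient:

```python
import numpy as np

# Toy chain: from state 0, stay with prob. 0.5, else be absorbed in state 1.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

# Partial sums of P_ii^n for n = 1..N.
N = 50
Pn, s = np.eye(2), np.zeros(2)
for _ in range(N):
    Pn = Pn @ P
    s += np.diag(Pn)

print(s[0])  # ~1: the series converges (f0 = 0.5), so state 0 is transient
print(s[1])  # = N: the series diverges, so state 1 is recurrent
```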
Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having
transition probability matrix
In the following Markov chain, we draw the branches corresponding to transition probabilities $P_{ij} > 0$ without labeling the actual transition probabilities. For this chain, identify the communicating classes.

This chain has three communicating classes. First, we note that states 0, 1,
and 2 communicate and form a class C1 = {0, 1, 2}. Second, we observe that
C2 = {4, 5, 6} is a communicating class since every pair of states in C2
communicates. State 3 communicates only with itself, so C3 = {3}.
In the following Markov chain, the branches indicate transition probabilities $P_{ij} > 0$. Identify each communicating class and indicate whether it is transient or recurrent.

By inspection of the Markov chain, the communicating classes are C1 = {0, 1, 2}, C2 = {3}, and C3 = {4, 5}. Classes C1 and C3 are recurrent while C2 is transient.
Let the Markov chain consisting of the states 0, 1, 2, 3 have the
transition probability matrix. Determine which states are transient and
which are recurrent.

Solution: It is a simple matter to check that all states communicate and, hence, since this is a finite chain, all states must be recurrent.
Consider the Markov chain having states 0, 1, 2, 3, 4 and determine which states are recurrent.

Solution: This chain consists of the three classes {0, 1}, {2, 3}, and {4}. The first two classes are recurrent and the third is transient; a programmatic check follows below.
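The class structure can also be found programmatically via reachability. In the sketch below, the five-state transition matrix is not reproduced on the slide, so the one used is taken from Ross's version of this example and should be read as an assumption:

```python
import numpy as np

# Assumed transition matrix (Ross's example; states 0..4).
P = np.array([[0.50, 0.50, 0.0, 0.0, 0.0],
              [0.50, 0.50, 0.0, 0.0, 0.0],
              [0.00, 0.00, 0.5, 0.5, 0.0],
              [0.00, 0.00, 0.5, 0.5, 0.0],
              [0.25, 0.25, 0.0, 0.0, 0.5]])

n = len(P)
# reach[i][j] = 1 iff j is accessible from i in zero or more steps.
reach = ((np.eye(n) + P) > 0).astype(int)
for _ in range(n):                     # repeated squaring covers all path lengths
    reach = (reach @ reach > 0).astype(int)

comm = reach * reach.T                 # i <-> j iff accessible both ways
classes = {tuple(np.flatnonzero(row)) for row in comm}
for c in sorted(classes):
    # In a finite chain, a class is recurrent iff no probability leaves it.
    closed = np.isclose(P[np.ix_(c, c)].sum(), len(c))
    print(c, "recurrent" if closed else "transient")
# -> (0, 1) recurrent, (2, 3) recurrent, (4,) transient
```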
LIMITING PROBABILITIES
Limiting Probabilities

Theorem: For an irreducible ergodic Markov chain,

$$\pi_j = \lim_{n \to \infty} P_{ij}^n$$

exists for all j and is independent of i. Furthermore, $\pi_j$ is the unique nonnegative solution of

$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0, \qquad \sum_{j=0}^{\infty} \pi_j = 1.$$

The probability $\pi_j$ also equals the long-run proportion of time that the process is in state j.
If the chain is irreducible and positive recurrent but periodic, the same system of equations can be solved for these long-run proportions.
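For example, for the two-state weather chain the system reads

$$\pi_0 = \alpha \pi_0 + \beta \pi_1, \qquad \pi_0 + \pi_1 = 1,$$

so $\pi_0 = \beta/(1 - \alpha + \beta)$. With $\alpha = 0.7$ and $\beta = 0.4$ this gives $\pi_0 = 0.4/0.7 \approx 0.571$, toward which the four-step probability 0.5749 computed earlier is already settling.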
Consider the earlier example in which the mood of an individual is modeled as a three-state Markov chain with the transition probability matrix given there.

In the long run, what proportion of time is the process in each of the three states?
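A minimal numerical sketch, using the mood matrix written out earlier: replace one balance equation with the normalization $\sum_j \pi_j = 1$ and solve the resulting linear system.

```python
import numpy as np

# Mood chain from the earlier example: states ordered C, S, G.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Stationary equations pi = pi P, i.e. (P^T - I) pi = 0, are linearly
# dependent, so replace the last one with the normalization sum(pi) = 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)  # [21/62, 23/62, 18/62] ~ [0.3387, 0.3710, 0.2903]
```

So in the long run Gary is cheerful about 33.9% of the time, so-so about 37.1%, and glum about 29.0%.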
Example
(A Model of Class Mobility) A problem of interest to sociologists is to determine
the proportion of society that has an upper- or lower-class occupation. One
possible mathematical model would be to assume that transitions between social
classes of the successive generations in a family can be regarded as transitions of a
Markov chain. That is, we assume that the occupation of a child depends only on
his or her parent's occupation. Let us suppose that such a model is appropriate
and that the transition probability matrix is given by
That is, for instance, we suppose that the child of a middle-class worker will
attain an upper-, middle-, or lower-class occupation with respective probabilities
0.05, 0.70, 0.25.
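Only the middle row of the matrix survives in the text above; the upper and lower rows used below are taken from Ross's version of this example and should be read as an assumption. The same stationary-equation recipe then gives the long-run class proportions:

```python
import numpy as np

# Rows/columns ordered upper, middle, lower. The middle row (0.05, 0.70,
# 0.25) is quoted in the text; the other two rows are assumed from Ross.
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

A = P.T - np.eye(3)
A[-1, :] = 1.0                  # normalization: pi sums to 1
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)  # ~ [0.07, 0.62, 0.31]
```

Under this assumed matrix, roughly 7% of society ends up upper class, 62% middle class, and 31% lower class in the long run.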
Example
A digital mobile phone transmits one packet in every 20-ms time slot over a wireless
connection. With probability p = 0.1, a packet is received in error, independent of
any other packet. To avoid wasting transmitter power when the link quality is poor,
the transmitter enters a timeout state whenever five consecutive packets are
received in error. During a timeout, the mobile terminal performs an independent
Bernoulli trial with success probability q = 0.01 in every slot.

When a success occurs, the mobile terminal starts transmitting in the next slot as
though no packets had been in error.

Construct a Markov chain for this system. What are the limiting state
probabilities?
Solution
For the Markov chain, we use the 20-ms slot as the unit of time. For the
state of the system, we can use the number of consecutive packets in error. The
state corresponding to five consecutive corrupted packets is also the timeout
state. The Markov chain is
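Sketched below as a minimal numpy implementation: states 0 through 5 count consecutive packet errors, state 5 doubles as the timeout state, and the limiting probabilities solve the same stationary system as before.

```python
import numpy as np

p, q = 0.1, 0.01                # error and timeout-exit probabilities
P = np.zeros((6, 6))
for i in range(5):
    P[i, 0] = 1 - p             # packet received correctly: count resets
    P[i, i + 1] = p             # another error: count grows (5 = timeout)
P[5, 0] = q                     # Bernoulli trial succeeds: leave timeout
P[5, 5] = 1 - q                 # otherwise remain in timeout

A = P.T - np.eye(6)
A[-1, :] = 1.0                  # normalization: pi sums to 1
b = np.zeros(6); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # pi_0 ~ 0.899; pi_j = p**j * pi_0 for j <= 4; pi_5 = p**5 * pi_0 / q
```

The balance equations give $\pi_j = p^j \pi_0$ for $1 \le j \le 4$ and $\pi_5 = p^5 \pi_0 / q$, so the transmitter spends only about 0.09% of slots in timeout.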
Thank You
