241-460 Introduction to Queueing Networks: Engineering Approach
Chapter 7: Markov Chains
Assoc. Prof. Thossaporn Kamolphiwong
Centre for Network Research (CNR)
Department of Computer Engineering, Faculty of Engineering
Prince of Songkla University, Thailand
Email: kthossaporn@coe.psu.ac.th
Outline
Markov Chains Part I
Definition
Types of Markov processes
Markov chain graph
Discrete-time Markov process
Continuous-time Markov process
Markov Processes
A Markov process is a stochastic process in which the probability of the next state, given the current state and the entire past, depends only on the current state.
Markov Processes
Discrete-time Markov process: state changes occur at integer times 0, 1, 2, ...
Continuous-time Markov process: state changes may occur at any instant in time.
Markov Property
P[X(t_n) = x_n | X(t_{n-1}) = x_{n-1}, X(t_{n-2}) = x_{n-2}, ..., X(t_0) = x_0]
= P[X(t_n) = x_n | X(t_{n-1}) = x_{n-1}]
This property defines both discrete-time (DTMC) and continuous-time (CTMC) Markov chains. In density form:
$f_{X(t_n)}(x_n \mid X(t_{n-1}) = x_{n-1}, \ldots, X(t_0) = x_0) = f_{X(t_n)}(x_n \mid X(t_{n-1}) = x_{n-1})$
Markov Chain Graph
X_{t=0} → X_{t=1} → X_{t=2} → ... → X_{t=n-1} → X_{t=n}
P[X(t_n) = x_n | X(t_{n-1}) = x_{n-1}, X(t_{n-2}) = x_{n-2}, ..., X(t_0) = x_0]
= P[X(t_n) = x_n | X(t_{n-1}) = x_{n-1}]
Markov chain property: given the present, the future is independent of the past.
(Continued)
X_n = i means the process is said to be in state i at time n. The state may change at each time step, from step n-1 to step n, giving the sequence
X_0 → X_1 → ... → X_{n-1} → X_n
Joint PMF for a Markov Chain
The joint PMF of X(t) at arbitrary time instants is given by the product of the PMF of the initial time instant and the probabilities of the subsequent state transitions, e.g.
P[X(t_2) = x_2, X(t_1) = x_1, X(t_0) = x_0]
= P[X(t_2) = x_2 | X(t_1) = x_1, X(t_0) = x_0] P[X(t_1) = x_1, X(t_0) = x_0]
= P[X(t_2) = x_2 | X(t_1) = x_1] P[X(t_1) = x_1, X(t_0) = x_0]
= P[X(t_2) = x_2 | X(t_1) = x_1] P[X(t_1) = x_1 | X(t_0) = x_0] P[X(t_0) = x_0]
Joint PMF
P[X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_0) = x_0]
= P[X(t_n) = x_n | X(t_{n-1}) = x_{n-1}] P[X(t_{n-1}) = x_{n-1} | X(t_{n-2}) = x_{n-2}]
× ... × P[X(t_1) = x_1 | X(t_0) = x_0] P[X(t_0) = x_0]
Discrete-Time Markov Chain
Definition: Discrete-Time Markov Chain
A discrete-time Markov chain {X_n | n = 0, 1, ...} is a discrete-time, discrete-value random sequence such that, given X_0, ..., X_{n-1}, the next random variable X_n depends only on X_{n-1} through the transition probability
P[X_n = j | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = i_0] = P[X_n = j | X_{n-1} = i] = p_ij
Discrete-Time Markov Chain
P[X_n = j | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = i_0] = P[X_n = j | X_{n-1} = i] = p_ij
X_0, X_1, ..., X_n : states at discrete times t_0, t_1, ..., t_n
X_n = j : the state of the system at time step n is j
p_j(n) = P[X_n = j] : PMF of the random variable X_n
p_ij = P[X_n = j | X_{n-1} = i] : the probability that the process makes a transition from state i at step n-1 to state j at step n in one single step, i.e., the one-step transition probability
Markov Chain Graph
P[X_n = j | X_{n-1} = i] = p_ij
[Graph: a node for state i at time n-1 and a node for state j at time n, joined by edges labeled with the transition probabilities p (i to j) and q (j to i), each state carrying a self-loop 1-p or 1-q.]
Example
A computer disk drive can be in one of three possible states: 0 (IDLE), 1 (READ), or 2 (WRITE). One unit of time is required to read or write a sector on the disk. What is the Markov chain?
Solution
For this system, the three-state Markov chain has states 0, 1, and 2, with self-loop probabilities p_00, p_11, p_22 and transition probabilities p_01, p_02, p_10, p_12, p_20, p_21 between every pair of states.
Transition Probabilities
p_ij : transition probability function of a DTMC
Homogeneous DTMC: the state transition probabilities are fixed and do not change with time.
One-step transition probability: p_ij = p_ij(1) = P[X_n = j | X_{n-1} = i]
The 0-step transition probabilities are defined as
$p_{ij}(0) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$
e.g., p_00(0) = 1.
Transition Probabilities
Theorem: The transition probabilities p_ij of a Markov chain satisfy
$p_{ij} \ge 0, \qquad \sum_{j=0}^{K} p_{ij} = 1$
For example, for state i = 1 of the three-state chain: p_10 + p_11 + p_12 = 1.
Representing Probabilities in a Matrix
The matrix P is called the transition matrix, Markov matrix, or probability transition matrix:
$P = \begin{pmatrix} p_{00} & p_{01} & p_{02} \\ p_{10} & p_{11} & p_{12} \\ p_{20} & p_{21} & p_{22} \end{pmatrix}$
Properties: $0 \le p_{ij} \le 1$ and $\sum_{j=0}^{K} p_{ij} = 1$ (each row sums to 1).
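These row-stochastic properties are easy to check numerically. Below is a minimal Python/NumPy sketch; the numeric entries are illustrative placeholders, not values from the slides.

```python
import numpy as np

# Hypothetical 3-state transition matrix (entries are illustrative only).
P = np.array([
    [0.5, 0.3, 0.2],   # p00 p01 p02
    [0.4, 0.4, 0.2],   # p10 p11 p12
    [0.3, 0.3, 0.4],   # p20 p21 p22
])

# Every entry lies in [0, 1] and every row sums to 1.
assert np.all((P >= 0) & (P <= 1))
assert np.allclose(P.sum(axis=1), 1.0)
```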
Example
The two-state Markov chain can be used to model a wide variety of systems that alternate between ON and OFF states. After each unit of time in the OFF state, the system turns ON with probability p. After each unit of time in the ON state, the system turns OFF with probability q. Using 0 and 1 to denote the OFF and ON states, what is the Markov chain for the system?
Solution
The Markov chain for this system has states 0 (OFF) and 1 (ON), with transition probability p from 0 to 1, q from 1 to 0, and self-loops 1-p and 1-q:
$P = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$
The transition probabilities are p_00 = 1-p, p_01 = p, p_10 = q, and p_11 = 1-q.
Example
A Markov model for packet speech assumes that if the n-th packet contains silence, then the probability of silence in the next packet is 1-α and the probability of speech activity is α. Similarly, if the n-th packet contains speech activity, then the probability of speech activity in the next packet is 1-β and the probability of silence is β. States 0 and 1 denote the silence packet and the speech packet.
Solution
The chain has transition probability α from state 0 to state 1 and β from state 1 to state 0, with self-loops 1-α and 1-β. The transition probability matrix is
$P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$
Example
A packet voice communications system transmits digitized speech only during talk-spurts, when the speaker is talking. In every 10-ms interval (referred to as a timeslot) the system decides whether the speaker is talking or silent. When the speaker is talking, a speech packet is generated; otherwise no packet is generated. If the speaker is silent in a slot, then the speaker is talking in the next slot with probability p = 1/140. If the speaker is talking in a slot, the speaker is silent in the next slot with probability q = 1/100. If states 0 and 1 represent silent and talking, sketch the Markov chain for this packet voice system.
Solution
For this system, the two-state Markov chain has transition probability 1/140 from state 0 to state 1 and 1/100 from state 1 to state 0, with self-loops 139/140 and 99/100:
p_00 = 139/140, p_01 = 1/140
p_10 = 1/100, p_11 = 99/100
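A chain like this is also easy to simulate step by step. The sketch below is an illustration, not part of the original slides: it draws each next state from the current row of P and estimates the long-run fraction of talking slots, which should approach the stationary probability of state 1 derived later in this chapter (5/12 ≈ 0.4167).

```python
import numpy as np

# Simulate the packet voice chain: state 0 = silent, state 1 = talking.
P = np.array([[139/140, 1/140],
              [1/100, 99/100]])

rng = np.random.default_rng(seed=1)
state, talk_slots, n_slots = 0, 0, 200_000
for _ in range(n_slots):
    state = rng.choice(2, p=P[state])  # next state from the current row
    talk_slots += state

# Empirical fraction of talking slots; compare with 5/12 ~ 0.4167.
print(talk_slots / n_slots)
```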
Example
Consider a two-state DTMC for a cascade of binary communication channels, where the signal values 0 and 1 form the state values. At each stage, a transmitted 0 (T_0) is received as 0 (R_0) with probability 1-a and as 1 (R_1) with probability a; a transmitted 1 (T_1) is received as 1 with probability 1-b and as 0 with probability b.
Solution
From the (n-1)-th stage to the n-th stage: X_{n-1} = 0 stays at X_n = 0 with probability 1-a and moves to X_n = 1 with probability a; X_{n-1} = 1 stays at X_n = 1 with probability 1-b and moves to X_n = 0 with probability b.
$P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \qquad 0 \le a, b \le 1$
Discrete-Time Markov Chain Dynamics
The dynamics describe the state over a short time interval starting from a given initial state. A Markov chain is a random process, so we cannot say exactly what sequence of states will follow the initial state. There are many applications in which it is necessary to predict the future state X_{n+m} given the current state X_m.
n-Step Transition Probabilities
Definition: n-step transition probabilities
For a finite Markov chain, the n-step transition probabilities are given by the matrix P(n), whose (i, j)-th element is
p_ij(n) = P[X_{n+m} = j | X_m = i]
Let P(n) denote the matrix of n-step transition probabilities.
Chapman-Kolmogorov Equation
Theorem: Chapman-Kolmogorov equations
For a finite Markov chain, the n-step transition probabilities satisfy
$p_{ij}(n+m) = \sum_{k=0}^{K} p_{ik}(m)\, p_{kj}(n)$
or, in matrix form, P(n+m) = P(m)P(n).
[Diagram: the chain moves from state i at time 0 to an intermediate state k at time m with probability p_ik(m), and then from k to state j at time m+n with probability p_kj(n); summing over all intermediate states k gives]
$p_{ij}(n+m) = \sum_{k=0}^{K} p_{ik}(m)\, p_{kj}(n)$
Theorem:
For a finite Markov chain with transition matrix P, the n-step transition matrix is
P(n) = P^n
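This theorem maps directly onto matrix exponentiation in code. Below is a minimal NumPy sketch, assuming the two-state ON/OFF chain from the earlier example with illustrative values p = 0.2 and q = 0.3; it also spot-checks the Chapman-Kolmogorov factorization P(5) = P(2)P(3).

```python
import numpy as np

p, q = 0.2, 0.3                     # illustrative ON/OFF probabilities
P = np.array([[1 - p, p],
              [q, 1 - q]])

# n-step transition matrix P(n) = P^n.
P5 = np.linalg.matrix_power(P, 5)

# Chapman-Kolmogorov check: P(2 + 3) = P(2) P(3).
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P5, P2 @ P3)
print(P5)
```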
Example
Find the n-step transition probabilities
p_ij(n) = P[X_{n+m} = j | X_m = i]
for the three-state chain with transition matrix
$P = \begin{pmatrix} 1 & 0 & 0 \\ p & q & 0 \\ 0 & p & q \end{pmatrix}, \qquad p + q = 1$
where state 1 is absorbing, state 2 moves to state 1 with probability p, state 3 moves to state 2 with probability p, and states 2 and 3 stay put with probability q. Use the Chapman-Kolmogorov equations
$p_{ij}(n+m) = \sum_{k=0}^{K} p_{ik}(m)\, p_{kj}(n)$
(Continued)
$P(2) = P(1)P(1) = \begin{pmatrix} 1 & 0 & 0 \\ p & q & 0 \\ 0 & p & q \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ p & q & 0 \\ 0 & p & q \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ p + pq & q^2 & 0 \\ p^2 & 2pq & q^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 - q^2 & q^2 & 0 \\ 1 - 2pq - q^2 & 2pq & q^2 \end{pmatrix}$
Solution
$P(3) = P(1)P(2) = \begin{pmatrix} 1 & 0 & 0 \\ p & q & 0 \\ 0 & p & q \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 1 - q^2 & q^2 & 0 \\ 1 - 2pq - q^2 & 2pq & q^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 - q^3 & q^3 & 0 \\ 1 - 3pq^2 - q^3 & 3pq^2 & q^3 \end{pmatrix}$
(Continued)
The n-step transition probability matrix is
$P(n) = \begin{pmatrix} 1 & 0 & 0 \\ 1 - q^n & q^n & 0 \\ 1 - npq^{n-1} - q^n & npq^{n-1} & q^n \end{pmatrix}$
(Continued)
One-step transition probabilities: state 1 has a self-loop with probability 1, states 2 and 3 have self-loops with probability q, and the transitions 3 → 2 and 2 → 1 each have probability p.
n-step transition probabilities: state 3 stays at 3 with probability q^n, reaches state 2 with probability npq^{n-1}, and reaches state 1 with probability 1 - npq^{n-1} - q^n; state 2 stays with probability q^n and reaches state 1 with probability 1 - q^n.
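The closed form can be checked against brute-force matrix powers. A small sketch, assuming an illustrative value q = 0.6 (and p = 1 - q):

```python
import numpy as np

q = 0.6                       # illustrative; any 0 < q < 1 works
p = 1 - q
P = np.array([[1, 0, 0],      # rows/columns ordered as states 1, 2, 3
              [p, q, 0],
              [0, p, q]])

def P_closed(n):
    """Closed-form n-step transition matrix from the slides."""
    return np.array([
        [1, 0, 0],
        [1 - q**n, q**n, 0],
        [1 - n*p*q**(n-1) - q**n, n*p*q**(n-1), q**n],
    ])

for n in range(1, 10):
    assert np.allclose(np.linalg.matrix_power(P, n), P_closed(n))
print("closed form matches P^n")
```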
State Probability Vector
Definition: State Probability Vector
A vector p = [p_0 p_1 ... p_K] is a state probability vector if $\sum_{j=0}^{K} p_j = 1$ and each element p_j is nonnegative.
State Probabilities
Theorem
The state probabilities p_j(n) at time n can be found either by one iteration with the n-step transition probabilities:
$p_j(n) = \sum_{i=0}^{K} p_i(0)\, p_{ij}(n)$
or by n iterations with the one-step transition probabilities:
$p_j(n) = \sum_{i=0}^{K} p_i(n-1)\, p_{ij}$
(Continued)
$p_j(n) = \sum_{i=0}^{K} p_i(0)\, p_{ij}(n)$
or, in matrix form,
$p(n) = p(0)P(n)$
Example
Find the state probabilities, given the initial PMF p(0) = (0 0 1) and the transition probabilities
$P(n) = \begin{pmatrix} 1 & 0 & 0 \\ 1 - q^n & q^n & 0 \\ 1 - npq^{n-1} - q^n & npq^{n-1} & q^n \end{pmatrix}$
Solution
$p(n) = p(0)P(n) = (0\ 0\ 1)\,P(n) = \begin{pmatrix} 1 - npq^{n-1} - q^n & npq^{n-1} & q^n \end{pmatrix}$
Solution
Note that if q < 1 then, as n → ∞,
$P(n) \to \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}$
so from every starting state the chain eventually ends up in state 1.
(Continued)
The state PMF
p(n) = [p_1(n) p_2(n) p_3(n)] = (0 0 1)P(n)
$\to (0\ 0\ 1) \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} = (1\ 0\ 0)$
Example
Given P below, let α = 1/10 and β = 1/5. Find P(n) for n = 2, 4, 8 and 16. If p(0) = (0 1), find p(n).
$P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$
Solution
$P(2) = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}^2 = \begin{pmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{pmatrix}$
$P(4) = P(2)^2 = \begin{pmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{pmatrix}^2 = \begin{pmatrix} 0.7467 & 0.2533 \\ 0.5066 & 0.4934 \end{pmatrix}$
$P(8) = \begin{pmatrix} 0.6859 & 0.3141 \\ 0.6282 & 0.3718 \end{pmatrix}$
(Continued)
$P(16) = \begin{pmatrix} 0.6678 & 0.3322 \\ 0.6644 & 0.3356 \end{pmatrix}$
As n → ∞,
$P(n) \to \begin{pmatrix} 2/3 & 1/3 \\ 2/3 & 1/3 \end{pmatrix}$
(Continued)
$p(n) = p(0)P(n) = (0\ 1)\begin{pmatrix} 2/3 & 1/3 \\ 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 2/3 & 1/3 \end{pmatrix}$
as n → ∞.
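The numbers in this example are quick to reproduce. A minimal sketch using the values given above:

```python
import numpy as np

alpha, beta = 0.1, 0.2            # values from the example
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Reproduce P(n) for n = 2, 4, 8, 16.
for n in (2, 4, 8, 16):
    print(n, np.linalg.matrix_power(P, n).round(4))

# State PMF p(n) = p(0) P(n) with p(0) = (0, 1); for large n it
# approaches the limiting row (2/3, 1/3).
p0 = np.array([0.0, 1.0])
print(p0 @ np.linalg.matrix_power(P, 100))
```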
Steady State Probabilities
Steady-state probabilities are used to describe the long-run behavior of a Markov chain. We say that the system reaches equilibrium or steady state when, as n → ∞, p_j(n) → π_j and p_i(n-1) → π_i. Then
$p_j(n) = \sum_{i=0}^{K} p_i(n-1)\, p_{ij}$
becomes
$\pi_j = \sum_{i} \pi_i\, p_{ij}$
Steady State Probabilities
The vector π = [π_1 π_2 ... π_s] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain. Hence, these probabilities are independent of the initial probability distribution defined over the states. The π_j are also called stationary probabilities.
Example
For the packet voice communications system, calculate the stationary probabilities [π_0 π_1]. Recall that p_00 = 139/140, p_01 = 1/140, p_10 = 1/100, and p_11 = 99/100.
Solution
Using $\pi_j = \sum_{i=0}^{K} \pi_i p_{ij}$:
π_0 = (139/140)π_0 + (1/100)π_1
π_1 = (1/140)π_0 + (99/100)π_1
Either equation gives (1/140)π_0 = (1/100)π_1, i.e. π_1 = (100/140)π_0. With π_0 + π_1 = 1:
π_0 (1 + 100/140) = 1
π_0 = 140/240 = 7/12, π_1 = 100/240 = 5/12
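The same stationary vector can be obtained numerically by solving π = πP together with the normalization Σπ_i = 1. A small sketch; the least-squares formulation is just one convenient way to handle the redundant equation:

```python
import numpy as np

P = np.array([[139/140, 1/140],
              [1/100, 99/100]])

# Solve (P^T - I) pi = 0 together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # ~ [0.5833, 0.4167] = [7/12, 5/12]
```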
Example
A digital mobile phone transmits one packet in every 20-ms time slot over a wireless connection. With probability p = 0.1, a packet is received in error, independent of any other packet. To avoid wasting transmitter power when the link quality is poor, the transmitter enters a timeout state whenever five consecutive packets are received in error. During a timeout, the mobile terminal performs an independent Bernoulli trial with success probability q = 0.01 in every slot. When a success occurs, the mobile terminal starts transmitting in the next slot as though no packets had been in error. Construct a Markov chain for this system. What are the steady state probabilities?
Solution
The chain has states 0, 1, ..., 5. For i = 0, ..., 4, state i means that the last i packets were received in error; state 5 is the timeout state. From each state i ≤ 4, a packet error (probability p) moves the chain to state i+1, and a correct packet (probability 1-p) returns it to state 0. From state 5, the chain returns to state 0 with probability q and stays in the timeout with probability 1-q.
Applying $\pi_j = \sum_i \pi_i p_{ij}$ with j = 0:
π_0 = (1-p)π_0 + (1-p)π_1 + (1-p)π_2 + (1-p)π_3 + (1-p)π_4 + qπ_5
pπ_0 = (1-p)π_1 + (1-p)π_2 + (1-p)π_3 + (1-p)π_4 + qπ_5
(Continued)
The remaining balance equations give
π_1 = pπ_0
π_2 = pπ_1 = p^2 π_0
π_3 = pπ_2 = p^3 π_0
π_4 = pπ_3 = p^4 π_0
qπ_5 = pπ_4, so π_5 = π_4 p/q = π_0 p^5/q
(Continued)
Normalization: π_0 + π_1 + π_2 + π_3 + π_4 + π_5 = 1
π_0 + pπ_0 + p^2 π_0 + p^3 π_0 + p^4 π_0 + π_0 p^5/q = 1
π_0 (1 + p + p^2 + p^3 + p^4 + p^5/q) = 1
Since 1 + p + p^2 + p^3 + p^4 = (1-p^5)/(1-p),
π_0 = q(1-p) / (q - qp^5 + p^5 - p^6)
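A quick numerical cross-check of this closed form: the sketch below builds the 6-state transition matrix and solves for π as before, using p = 0.1 and q = 0.01 from the example.

```python
import numpy as np

p, q = 0.1, 0.01
P = np.zeros((6, 6))
for i in range(5):          # states 0-4 count consecutive errors
    P[i, 0] = 1 - p         # correct packet resets to state 0
    P[i, i + 1] = p         # another error (state 4 -> timeout state 5)
P[5, 0] = q                 # Bernoulli success ends the timeout
P[5, 5] = 1 - q             # otherwise remain in timeout

A = np.vstack([P.T - np.eye(6), np.ones(6)])
b = np.zeros(7); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare with the closed form pi_0 = q(1-p) / (q - q p^5 + p^5 - p^6).
print(pi[0], q*(1 - p) / (q - q*p**5 + p**5 - p**6))
```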
State Classification of DTMC
States are classified as transient or recurrent. Recurrent states are further divided into nonnull and null, and nonnull recurrent states into periodic and aperiodic.
State Classification
Accessibility
State j is accessible from state i, written i → j, if p_ij(n) > 0 for some n ≥ 0. When j is not accessible from i, we write i ↛ j. In the Markov chain graph, i → j if there is a path from i to j.
Communicating States
States i and j communicate, written i ↔ j, if i → j and j → i.
Communicating Class
A communicating class is a nonempty subset of states C such that if i ∈ C, then j ∈ C if and only if i ↔ j.
Communication Class Property
Communication is a class property:
Every state i communicates with itself (i ↔ i).
If i communicates with j, then j communicates with i.
If i ↔ j and j ↔ k, then i ↔ k.
Example
In the following Markov chain, we draw the branches corresponding to transition probabilities without labeling the actual transition probabilities. For this chain, identify the communicating classes.
[Chain diagram with states 0 through 6.]
Solution
There are three communicating classes:
C_1 = {0, 1, 2}
C_2 = {4, 5, 6}
C_3 = {3}, which communicates only with itself.
Transient States
A state i is said to be transient (nonrecurrent) if and only if there is a positive probability that the process will not return to this state. Equivalently, a state i is transient if there exists a state j such that i → j but j ↛ i.
Recurrent States
A state i is recurrent if and only if, starting from i, the process eventually returns to state i with probability one. If no state j exists such that i → j but j ↛ i, then state i is recurrent.
Example
In the following Markov chain, all drawn transition probabilities satisfy p_ij > 0. Identify each communicating class and indicate whether it is transient or recurrent.
[Chain diagram with states 0 through 5.]
Solution
This chain has three communicating classes:
C_1 = {0, 1, 2}, C_2 = {3}, C_3 = {4, 5}
Classes C_1 and C_3 are recurrent, while C_2 is transient.
Irreducible Markov Chain
A Markov chain is irreducible if there is only one communicating class.
[Diagrams: a six-state chain that splits into more than one communicating class (a non-irreducible MC), and a three-state chain whose states all communicate (an irreducible MC).]
Types of DTMC
A Markov chain is called:
Transient if all its states are transient.
Recurrent nonnull if all its states are recurrent nonnull.
Recurrent null if all its states are recurrent null.
Periodic (aperiodic) if all its states are periodic (aperiodic).
If a Markov chain is irreducible, recurrent nonnull, and aperiodic, it is called ergodic.
Continuous-Time Markov Chains
Markov chains have a discrete state space. For continuous-time Markov chains (CTMCs), the time variable associated with the system evolution is continuous.
CTMCs are used to model complex coordination systems.
We are interested in the long-run behavior of a CTMC, i.e., in calculating its steady-state probabilities.
The Markov Property for Continuous-Time Processes
For a continuous-time stochastic process {X(t) : t ≥ 0} with state space S, we say it has the Markov property if
P(X(t + Δ) = j | X(t) = i, X(t_{n-1}) = i_{n-1}, ..., X(t_1) = i_1)
= P(X(t + Δ) = j | X(t) = i)
where 0 ≤ t_1 < t_2 < ... < t_{n-1} < t and Δ > 0.
Continuous-Time Markov Chains
Definition
A continuous-time stochastic process {X(t) : t ≥ 0} is called a continuous-time Markov chain if it has the Markov property.
Time Homogeneity
We say that a continuous-time Markov chain is time homogeneous if for any Δ > 0, any t, and any states i, j ∈ S,
P(X(t + Δ) = j | X(t) = i) = P(X(Δ) = j | X(0) = i)
Transition Rates
q_ij is the rate of going from state i to state j. The rates q_{i,0}, q_{i,1}, ..., q_{i,n} correspond to the set of possible states the process may jump to when it leaves state i.
[Diagram: state i with outgoing rate arrows q_ij to state j and q_ik to state k.]
Transition Rates
When the process enters state i, the amount of time it spends in state i is exponentially distributed with rate
$v_i = \sum_{j \ne i} q_{ij} = q_{i0} + q_{i1} + \ldots + q_{in}$
(Continued)
When the process jumps from state i, it jumps to state j with probability
$q_{i,j} / (q_{i,j_1} + \ldots + q_{i,j_n}) = q_{i,j} / v_i$
(Continued)
There is an important difference between the q_ij in a continuous-time Markov chain and the p_ij in a discrete-time Markov chain: the q_ij are rates, not probabilities, and as such, while they must be nonnegative, they are not bounded by 1.
The Transition Probability Function
Just as the rates q_ij in a continuous-time Markov chain are the counterpart of the transition probabilities p_ij in a discrete-time Markov chain, there is a counterpart to the n-step transition probabilities p_ij(n) of a discrete-time Markov chain. The transition probability function, p_ij(t), for a time-homogeneous, continuous-time Markov chain is defined as
p_ij(t) = P(X(t) = j | X(0) = i)
(Continued)
And when it leaves state i, it will go to state j with probability
$p_{ij} = q_{i,j} / (q_{i,j_1} + \ldots + q_{i,j_n}) = q_{i,j} / v_i$
Steady State Probabilities
As t → ∞, the state probabilities do not depend on the initial condition. This is typical of systems that reach equilibrium or steady state. For such systems:
$\lim_{t \to \infty} p_{ij}(t) = \lim_{t \to \infty} \pi_j(t) = \pi_j$
$\lim_{t \to \infty} \frac{d\pi_j(t)}{dt} = 0$
Steady State Definition
Definition:
A vector π = [π_0, π_1, ..., π_n]' with π_i ≥ 0 for all i and Σ_i π_i = 1 is said to be a stationary distribution if
π = πP(t)
for all t ≥ 0.
Global Balance Equations
After the system reaches steady state, we get
$\pi_j v_j = \sum_{i \ne j} \pi_i q_{ij}$
or equivalently
$\pi_j \left( \sum_{i \ne j} q_{ji} \right) = \sum_{i \ne j} \pi_i q_{ij}, \qquad \sum_i \pi_i = 1$
At equilibrium, the rate of probability flow out of state j is equal to the rate of flow into state j.
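In matrix terms the balance equations read πQ = 0 with Σπ_i = 1, where Q is the generator matrix with Q[i][j] = q_ij for i ≠ j and Q[i][i] = -v_i. A minimal solver sketch; the rates below are illustrative placeholders:

```python
import numpy as np

# Generator matrix: off-diagonal entries are rates q_ij, each diagonal
# entry is -v_i so that rows sum to zero. Rates here are placeholders.
Q = np.array([
    [-3.0, 2.0, 1.0],
    [1.0, -4.0, 3.0],
    [2.0, 2.0, -4.0],
])

# Solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.zeros(4); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)
```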
Example
Signal strength is in one of three states: (0) off, (1) low, or (2) high. While off, transitions to low occur after an exponential time with expected value 3 minutes. While in the low state, transitions to off or high are equally likely, and transitions out of the low state occur at rate 0.5 per minute. When the system is in the high state, it makes a transition to the low state with probability 2/3 or to the off state with probability 1/3. The time spent in the high state is an exponential (1/2) random variable. Model this signal strength using a continuous-time Markov chain.
Solution
Off state (0): v_0 = 1/3 (expected holding time 3 minutes), so q_01 = 1/3 and q_02 = 0.
Low state (1): v_1 = 1/2, and q_10/v_1 = q_12/v_1 = 1/2, so q_10 = q_12 = 1/4.
High state (2): v_2 = 1/2, with q_21/v_2 = 2/3 and q_20/v_2 = 1/3, so q_21 = 1/3 and q_20 = 1/6.
The chain therefore has rates 1/3 (0 → 1), 1/4 (1 → 0), 1/4 (1 → 2), 1/3 (2 → 1), and 1/6 (2 → 0).
(Continued)
Find the stationary probabilities using $\pi_j v_j = \sum_{i \ne j} \pi_i q_{ij}$:
(1/3)p_0 = (1/4)p_1 + (1/6)p_2
(1/2)p_1 = (1/3)p_0 + (1/3)p_2
(1/2)p_2 = (1/4)p_1
Three equations and three unknowns give p_1 = p_0 and p_2 = p_1/2.
(Continued)
p_0 + p_1 + p_2 = 1
p_0 + p_0 + p_0/2 = 1
p_0 = 2/5, p_1 = 2/5, p_2 = 1/5
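The same answer falls out of the generator-matrix solver sketched earlier, using the rates from this example:

```python
import numpy as np

# Generator for the signal strength chain (states: off, low, high);
# off-diagonals are the q_ij above, diagonals are -v_i.
Q = np.array([
    [-1/3, 1/3, 0.0],    # off:  leaves at rate 1/3, always to low
    [1/4, -1/2, 1/4],    # low:  rate 1/2, split equally to off and high
    [1/6, 1/3, -1/2],    # high: rate 1/2, 2/3 to low, 1/3 to off
])

A = np.vstack([Q.T, np.ones(3)])
b = np.zeros(4); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # ~ [0.4, 0.4, 0.2] = [2/5, 2/5, 1/5]
```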