Continuous Time Markov Chains
Alejandro Ribeiro
Dept. of Electrical and Systems Engineering
University of Pennsylvania
aribeiro@seas.upenn.edu
http://www.seas.upenn.edu/users/~aribeiro/
[Figure: pdf and cdf of an exponential random variable. Figure: Poisson process sample path with interarrival times T1, ..., T10, arrival times S1, ..., S10, and reference points t = 0, t = 5/λ, t = 10/λ]
var[T] = E[T²] − E[T]² = 2/λ² − 1/λ² = 1/λ²
I Parameter λ controls mean and variance of exponential RV
I Can only be true for all t and s if log g(t) = ct for some constant c
I Which in turn, can only be true if g(t) = e^{ct} for some constant c
Theorem
A continuous random variable T is memoryless if and only if it is
exponentially distributed. That is
P[T > s + t | T > t] = P[T > s]
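As a quick sanity check, the memoryless property can be verified empirically on simulated exponential samples (a minimal sketch; the rate λ = 2 and the offsets s, t are illustrative choices, not from the slides):

```python
import math
import random

random.seed(0)
lam = 2.0                                  # illustrative rate
n = 200_000
samples = [random.expovariate(lam) for _ in range(n)]

s, t = 0.5, 0.3
p_s = sum(x > s for x in samples) / n      # estimate of P[T > s]
tail_t = [x for x in samples if x > t]     # condition on T > t
p_cond = sum(x > s + t for x in tail_t) / len(tail_t)

# Both estimates should be near exp(-lam * s)
print(p_s, p_cond, math.exp(-lam * s))
```

The conditional tail probability matches the unconditional one up to sampling noise, as the theorem predicts.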
Ex.1: # text messages (SMS) typed since beginning of class
Ex.2: # economic crises since 1900
Ex.3: # customers at Wawa since 8am this morning
[Figure: staircase sample path of a counting process N(t) with jumps at arrival times S1, ..., S6]
Ex.1: Likely true for SMS, except for “have to send” messages
Ex.2: Most likely not true for economic crises (business cycle)
Ex.3: Likely true for Wawa, except for unforeseen events (storms)
Ex.1: Likely true if lecture is good and you keep interest in the class
Ex.2: Maybe true if you do not believe we are becoming better at
preventing economic crises
Ex.3: Most likely not true because of, e.g., rush hours and slow days
P[N(t) = n] = e^{−λt} (λt)^n / n!
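This pmf can be checked by simulation: sum i.i.d. exponential interarrival times and count how many events land in [0, t] (a sketch; λ, t, and the trial count are arbitrary choices):

```python
import math
import random

random.seed(1)
lam, t, trials = 1.5, 2.0, 100_000

def count_events(lam, t):
    """Count exponential(lam) interarrivals with cumulative sum <= t."""
    total, n = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            return n
        n += 1

counts = [count_events(lam, t) for _ in range(trials)]
for n in range(5):
    empirical = counts.count(n) / trials
    pmf = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(n, round(empirical, 4), round(pmf, 4))
```

The empirical frequencies of N(t) = n track the Poisson pmf with parameter λt.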
I An equivalent definition is the following
(i) The process has stationary and independent increments
(ii) Prob. of event in infinitesimal time ⇒ P [N(h) = 1] = λh + o(h)
(iii) At most one event in infinitesimal time ⇒ P [N(h) > 1] = o(h)
I This is a more intuitive definition (even though difficult to believe now)
[Figure: interval [0, T] divided into subintervals of length h; event times T1, ..., T5 marked along the timeline]
I Note first that since increments are stationary as per (i), it holds
P[Am = k] = P[N(mh) − N((m − 1)h) = k] = P[N(h) = k]
I In particular, using (ii) and (iii)
P[Am = 1] = λh,    P[Am = 0] = 1 − λh
I Am is Bernoulli with parameter λh
Ex.2: SMS generated by all students in the class. Once you send an SMS
you are likely to stay silent for a while. But in a large population this
has a minimal effect on the probability of someone generating an SMS
P[N(T) = k | Amax ≤ 1] = e^{−λT} (λT)^k / k!
Theorem
Interarrival times Ti of a Poisson process are independent identically
distributed exponential random variables with parameter λ, i.e.,
P[Ti ≤ t] = 1 − e^{−λt}
Proof.
I Let Si be absolute time of i-th event. Condition on Si
P[Ti+1 > t] = ∫ P[Ti+1 > t | Si = s] P[Si = s] ds
Obs: Restrictions in Def. 1 are mild, yet they impose a lot of structure as
implied by Defs. 2 & 3
Definition
Consider a CTMC with transition probs. P and rates νi . The (discrete
time) MC with transition probs. P is the CTMC’s embedded MC
I Transition probabilities P describe a discrete time MC
I States do not transition into themselves (Pii = 0, P’s diagonal null)
I Can use underlying discrete time MCs to understand CTMCs
I And can recover Pij as ⇒ Pij = qij/νi = qij (Σ_{k=1,k≠i}^∞ qik)^{−1}
I For a birth-death process, νi = λi + µi for i ≥ 1, with ν0 = λ0 and P01 = 1
I Rate of transition from i to i + 1 is
     qi,i+1 = νi Pi,i+1 = (λi + µi) × λi/(λi + µi) = λi
I Likewise, rate of transition from i to i − 1 is
     qi,i−1 = νi Pi,i−1 = (λi + µi) × µi/(λi + µi) = µi
I For i = 0 ⇒ q01 = ν0 P01 = λ0
[Figure: birth-death chain on states 0, ..., i − 1, i, i + 1, ... with birth rates λ0, ..., λi−1, λi, λi+1 and death rates µ1, ..., µi, µi+1; below, the special case with constant rates λ and µ]
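These rates translate directly into a simulation recipe: hold an exponential time with rate νi, then jump according to the embedded MC. A minimal sketch for constant rates λ and µ (the values are illustrative):

```python
import random

random.seed(2)

def simulate_birth_death(lam, mu, t_end, x0=0):
    """Simulate a birth-death CTMC: exponential holding time with rate
    nu_i (= lam + mu for i >= 1, lam for i = 0), then an embedded-MC jump."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_end:
        rate = lam + (mu if x > 0 else 0.0)      # nu_i
        t += random.expovariate(rate)            # holding time
        # embedded MC: up with prob lam/rate, down otherwise
        x += 1 if random.random() < lam / rate else -1
        path.append((t, x))
    return path

path = simulate_birth_death(lam=1.0, mu=2.0, t_end=50.0)
print(len(path), path[-1])
```

Every jump changes the state by ±1 and the state never goes negative, matching the chain's structure.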
Limit probabilities
I For that, need to study change in Pij (t) when time changes slightly
I Separate in two subproblems
⇒ Transition probabilities for small time h, Pij (h)
⇒ Transition probabilities in t + h as function of those in t and h
Theorem
The transition probability functions Pii(t) and Pij(t) satisfy the following
limits for time t going to 0
     lim_{t→0} (1 − Pii(t))/t = νi,    lim_{t→0} Pij(t)/t = qij
Proof.
I Consider a small time h
I Since 1 − Pii(h) is the probability of transitioning out of state i
     1 − Pii(h) = P[Ti ≤ h] + o(h) = νi h + o(h)
I For Pij(t) notice that since two or more transitions have o(h) prob.
     Pij(h) = P[X(h) = j | X(0) = i] = Pij P[Ti ≤ h] + o(h) = qij h + o(h)
Theorem
For all times s and t the transition probability functions Pij (t + s) are
obtained from Pik (t) and Pkj (s) as
Pij(t + s) = Σ_{k=0}^∞ Pik(t) Pkj(s)
Proof.
Pij(t + s)
  = P[X(t + s) = j | X(0) = i]                                Definition of Pij(t + s)
  = Σ_{k=0}^∞ P[X(t + s) = j | X(t) = k, X(0) = i] P[X(t) = k | X(0) = i]
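For a two-state chain the transition functions have a well-known closed form, which makes the Chapman-Kolmogorov equations easy to check numerically (a sketch; the rates q01 = 3, q10 = 1 are illustrative choices):

```python
import math

def P(t, q01=3.0, q10=1.0):
    """2x2 transition matrix of the two-state CTMC in closed form."""
    nu = q01 + q10
    p00 = q10 / nu + (q01 / nu) * math.exp(-nu * t)
    p11 = q01 / nu + (q10 / nu) * math.exp(-nu * t)
    return [[p00, 1 - p00], [1 - p11, p11]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, s = 0.7, 1.3
lhs, rhs = P(t + s), matmul(P(t), P(s))    # P(t+s) vs P(t)P(s)
print(lhs)
print(rhs)
```

The two matrices agree entry by entry, as the theorem requires.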
Theorem (Kolmogorov's forward equations)
The transition probability functions Pij(t) of a CTMC satisfy the system
of differential equations (for all pairs i, j)
     ∂Pij(t)/∂t = Σ_{k=0,k≠j}^∞ qkj Pik(t) − νj Pij(t)
Theorem (Kolmogorov's backward equations)
The transition probability functions Pij(t) of a CTMC satisfy the system
of differential equations (for all pairs i, j)
     ∂Pij(t)/∂t = Σ_{k=0,k≠i}^∞ qik Pkj(t) − νi Pij(t)
I As time goes to infinity, P00 (t) and P11 (t) converge exponentially
I Convergence rate depends on magnitude of q10 + q01
I Define the matrix R with elements rij = qij for i ≠ j, and rii = −νi
I Matrix form of Kolmogorov's forward equations ⇒ P′(t) = P(t)R
I Matrix form of Kolmogorov's backward equations ⇒ P′(t) = RP(t)
⇒ More similar than apparent
⇒ But not equivalent because matrix product not commutative
Definition
The matrix exponential e^{At} of matrix At is defined as the series
     e^{At} = Σ_{n=0}^∞ (At)^n / n! = I + At + (At)²/2 + (At)³/(2 × 3) + ...
I Putting A on the right side of the product shows that ⇒ ∂e^{At}/∂t = e^{At} A
Stoch. Systems Analysis Continuous time Markov chains 65
Solution of Kolmogorov’s equations
I Candidate solution P(t) = e^{Rt} solves the backward equations:
     ∂P(t)/∂t = ∂e^{Rt}/∂t = R e^{Rt} = R P(t)
I It also solves the forward equations:
     ∂P(t)/∂t = ∂e^{Rt}/∂t = e^{Rt} R = P(t)R
I Also notice that P(0) = I, as it should (Pii(0) = 1 and Pij(0) = 0 for j ≠ i)
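A truncated power series is enough to evaluate P(t) = e^{Rt} for a small chain and to check the backward equation numerically (a sketch only; the two-state rates are illustrative and a truncated series is not a robust general-purpose way to compute matrix exponentials):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(R, t, terms=40):
    """e^{Rt} via the truncated series I + Rt + (Rt)^2/2! + ..."""
    n = len(R)
    Rt = [[R[i][j] * t for j in range(n)] for i in range(n)]
    acc = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in acc]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, Rt)]
        acc = [[acc[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return acc

R = [[-3.0, 3.0], [1.0, -1.0]]     # rows sum to 0: r_ii = -nu_i, r_ij = q_ij
t, h = 0.5, 1e-6
Pt, Pth = expm(R, t), expm(R, t + h)
dP = [[(Pth[i][j] - Pt[i][j]) / h for j in range(2)] for i in range(2)]
RP = matmul(R, Pt)                 # backward equation predicts dP ~ R P(t)
print(dP)
print(RP)
```

The finite-difference derivative of e^{Rt} matches R P(t), and expm(R, 0) returns the identity, as required by P(0) = I.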
Limit probabilities
Theorem
Consider irreducible positive recurrent CTMC with transition rates νi and
qij . Then, limt→∞ Pij (t) exists and is independent of the initial state i,
i.e.,
Pj = lim_{t→∞} Pij(t) exists for all i
I As with discrete time MCs, the difficult part is to prove that Pj = lim_{t→∞} Pij(t) exists
I Algebraic relations obtained from Kolmogorov’s forward equation
∂Pij(t)/∂t = Σ_{k=0,k≠j}^∞ qkj Pik(t) − νj Pij(t)
lim_{t→∞} ∂Pij(t)/∂t = 0,    lim_{t→∞} Pij(t) = Pj
I Simplest CTMC with two states, 0 and 1
I Transition rates are q01 (from 0 to 1) and q10 (from 1 to 0)
[Figure: two-state chain with arrow q01 from 0 to 1 and arrow q10 from 1 to 0]
I From transition rates find rates of transition out of each state ⇒ ν0 = q01, ν1 = q10
ν0 P0 = q10 P1,    ν1 P1 = q01 P0,    P0 + P1 = 1
q01 P0 = q10 P1,   q10 P1 = q01 P0
I Solution yields ⇒ P0 = q10/(q10 + q01),    P1 = q01/(q10 + q01)
I Larger rate q10 of entering 0 ⇒ larger prob. P0 of being at 0
I Larger rate q01 of entering 1 ⇒ larger prob. P1 of being at 1
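These limit probabilities also match the long-run fraction of time the chain spends in each state, which a direct simulation confirms (rates are illustrative; with q01 = 3 and q10 = 1 the prediction is P0 = 1/4):

```python
import random

random.seed(3)
q01, q10 = 3.0, 1.0
t_end, t, state, time_in_0 = 5000.0, 0.0, 0, 0.0

while t < t_end:
    # exponential holding time with the current state's exit rate
    hold = random.expovariate(q01 if state == 0 else q10)
    hold = min(hold, t_end - t)            # clip the final holding time
    if state == 0:
        time_in_0 += hold
    t += hold
    state = 1 - state                      # two states: always switch

frac = time_in_0 / t_end
print(frac)                                # near q10/(q10 + q01) = 0.25
```

Over a long horizon the empirical occupancy of state 0 settles at q10/(q10 + q01).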
Pi = lim_{t→∞} (1/t) ∫_0^t I{X(τ) = i} dτ   a.s.
I Consider function f(i) associated with state i. Can write f(X(t)) as
     f(X(t)) = Σ_{i=1}^∞ f(i) I{X(t) = i}
I Consider the time average of the function of the state f(X(t))
     lim_{t→∞} (1/t) ∫_0^t f(X(τ)) dτ = lim_{t→∞} (1/t) ∫_0^t Σ_{i=1}^∞ f(i) I{X(τ) = i} dτ = Σ_{i=1}^∞ f(i) Pi
I Reconsider limit distribution equations ⇒ νj Pj = Σ_{k=0,k≠j}^∞ qkj Pk
I Pj = fraction of time spent in state j
I νj = rate of transition out of state j given CTMC is in state j
⇒ νj Pj = rate of transition out of state j (unconditional)
I qkj = rate of transition from k to j given CTMC is in state k
⇒ qkj Pk = rate of transition from k to j (unconditional)
⇒ Σ_{k=0,k≠j}^∞ qkj Pk = rate of transition into j, from all states
[Figure: birth-death chain on states 0, ..., i − 1, i, i + 1, ... with birth rates λi and death rates µi]
I Equation for P0 ⇒ λ0 P0 = µ1 P1
I Equation for P1 ⇒ (λ1 + µ1)P1 = λ0 P0 + µ2 P2
I Summing eqs. for P1 and P0 ⇒ λ1 P1 = µ2 P2
I To find P0 use Σ_{i=0}^∞ Pi = 1 ⇒ 1 = P0 + P0 Σ_{i=0}^∞ (λi λi−1 ··· λ0)/(µi+1 µi ··· µ1)
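For constant rates λ < µ (the M/M/1-style special case) the products reduce to powers of ρ = λ/µ and the normalization gives P0 = 1 − ρ, Pi = (1 − ρ)ρ^i. A short numeric check (λ = 1, µ = 2 are illustrative):

```python
lam, mu = 1.0, 2.0                 # constant birth/death rates, lam < mu
rho = lam / mu

# Truncate the geometric normalization sum 1 = P0 * sum_i rho^i
N = 200
P0 = 1.0 / sum(rho ** i for i in range(N))
P = [P0 * rho ** i for i in range(N)]

print(P0, sum(P))                  # P0 close to 1 - rho = 0.5; sum close to 1
```

The truncated sum recovers P0 = 1 − ρ essentially exactly, and the Pi form a valid (geometric) distribution.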