Solutions To Homework Three
1. Suppose the numbers of families that check into a hotel on successive days are independent Poisson random variables with mean λ. Also suppose that the number of days that a family stays in the hotel is a geometric random variable with parameter p, 0 < p < 1. (Thus, a family that spent the previous night in the hotel will, independently of how long it has already spent in the hotel, check out the next day with probability p.) Also suppose that all families act independently of each other. Under these conditions it is easy to see that if Xn denotes the number of families that are checked in the hotel at the beginning of day n, then {Xn, n ≥ 0} is a Markov chain. Find (a) the transition probabilities Pi,j of this chain; (b) E[Xn | X0 = i]; (c) the stationary probabilities.
Solution:
(a) To find Pi,j , suppose there are i families checked into the hotel at the beginning
of a day. Because each of these i families will stay for another day with
probability q = 1 − p it follows that Ri , the number of these families that
remain another day, is a binomial (i, q) random variable. So, letting N be the
number of new families that check in that day, we see that
Pi,j = P (Ri + N = j)
(b) Using the preceding representation Ri + N for the next state from state i, we
see that
E[Xn | Xn−1 = i] = E[Ri + N] = iq + λ

Consequently,

E[Xn | Xn−1] = qXn−1 + λ

So, taking expectations of both sides,

E[Xn] = qE[Xn−1] + λ

and iterating this recursion (using 1 + q + · · · + q^(n−1) = (1 − q^n)/(1 − q) = (1 − q^n)/p) gives

E[Xn | X0 = i] = λ(1 − q^n)/p + iq^n
(c) The stationary probability distribution is the unique distribution on the initial state
that results in the next state having the same distribution. Now, suppose that
the initial state X0 has a Poisson distribution with mean α. That is, assume
that the number of families initially in the hotel is Poisson with mean α. Let R
denote the number of these families that remain in the hotel at the beginning
of the next day. Since each family independently remains with probability q,
it follows by Poisson thinning that R is a Poisson random variable with mean αq.
In addition, the number of new families that check in is independent of R.
Hence, since the sum of independent Poisson random variables is also Poisson
distributed, it follows that R + N , the number of guests at the beginning of
the next day, is Poisson with mean λ + αq. Consequently, if we choose α so
that α = λ + αq, then the distribution of X1 would be the same as that of X0 .
But this means that when the initial distribution of X0 is Poisson with mean
α = λ/p, then so is the distribution of X1 , implying that this is the stationary
distribution. That is, the stationary probabilities are
πi = e^(−λ/p) (λ/p)^i / i!,  i ≥ 0
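As a numerical sanity check (a minimal sketch, not part of the original solution; the parameter values λ = 2, p = 0.5 and the truncation level n_max are arbitrary choices), one can verify that the Poisson(λ/p) distribution is preserved by one day's transition of binomial thinning plus Poisson arrivals:

```python
import math

def poisson_pmf(mean, n_max):
    return [math.exp(-mean) * mean**k / math.factorial(k) for k in range(n_max + 1)]

def one_day(dist, p, lam, n_max):
    """One transition: each family stays with prob q = 1 - p,
    then a Poisson(lam) number of new families check in."""
    q = 1 - p
    thinned = [0.0] * (n_max + 1)
    for i, pi in enumerate(dist):
        for r in range(i + 1):               # binomial(i, q) survivors
            thinned[r] += pi * math.comb(i, r) * q**r * p**(i - r)
    arrivals = poisson_pmf(lam, n_max)
    new = [0.0] * (n_max + 1)
    for r, pr in enumerate(thinned):
        for n, pn in enumerate(arrivals):
            if r + n <= n_max:
                new[r + n] += pr * pn
    return new

lam, p, n_max = 2.0, 0.5, 60
candidate = poisson_pmf(lam / p, n_max)       # Poisson(lambda / p)
after = one_day(candidate, p, lam, n_max)
err = max(abs(a - b) for a, b in zip(candidate, after))
```

With these parameters err is at the level of floating-point truncation error, consistent with α = λ/p solving α = λ + αq.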
2. In a good weather year the number of storms is Poisson distributed with mean 1; in a bad year it is Poisson distributed with mean 3. Suppose that any year's weather condition depends on past years only through the previous year's condition. Suppose that a good year is equally likely to be followed by either a good or a bad year, and that a bad year is twice as likely to be followed by a bad year as by a good year. Suppose that last year, call it year 0, was a good year.
(a) Find the expected total number of storms in the next two years (that is, in
years 1 and 2).
(b) Find the probability there are no storms in year 3.
(c) Find the long-run average number of storms per year.
Solution:
(a) Letting 0 stand for a good year and 1 for a bad year, the successive states
follow a Markov chain with transition probability matrix

P = [ 1/2  1/2 ]
    [ 1/3  2/3 ]

Since year 0 was good, year 1 is good or bad with probability 1/2 each, so the
expected number of storms in year 1 is (1/2)(1) + (1/2)(3) = 2. The distribution
of year 2 is (5/12, 7/12) (the first row of P²), so the expected number of storms
in year 2 is (5/12)(1) + (7/12)(3) = 13/6. The expected total is 2 + 13/6 = 25/6.

(b) The distribution of year 3 is (29/72, 43/72) (the first row of P³), so

P(no storms in year 3) = (29/72)e^(−1) + (43/72)e^(−3)

(c) The stationary probabilities satisfy πG = (1/2)πG + (1/3)πB and πG + πB = 1,
giving πG = 2/5, πB = 3/5. The long-run average number of storms per year is
therefore (2/5)(1) + (3/5)(3) = 11/5.
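The arithmetic for parts (a)–(c) can be checked with exact rational computation (a quick verification sketch; the values 25/6, (29e^(−1) + 43e^(−3))/72, and 11/5 follow from the matrix above):

```python
import math
from fractions import Fraction as F

P = [[F(1, 2), F(1, 2)],        # row 0: good year
     [F(1, 3), F(2, 3)]]        # row 1: bad year
means = [F(1), F(3)]            # E[storms | year type]

def step(v):
    return [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]

d1 = step([F(1), F(0)])         # year 1 distribution (year 0 was good)
d2 = step(d1)
d3 = step(d2)

total_12 = sum(d1[i] * means[i] for i in range(2)) + sum(d2[i] * means[i] for i in range(2))
no_storms_y3 = float(d3[0]) * math.exp(-1) + float(d3[1]) * math.exp(-3)
pi = [F(2, 5), F(3, 5)]         # solves pi = pi P with pi_G + pi_B = 1
long_run = sum(pi[i] * means[i] for i in range(2))
```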
3. A Markov chain {Xn, n ≥ 0} with states 0, 1, 2, has the transition probability matrix

P = [ 1/2  1/3  1/6 ]
    [ 0    1/3  2/3 ]
    [ 1/2  0    1/2 ]

If P(X0 = 0) = P(X0 = 1) = 1/4 (so P(X0 = 2) = 1/2), find E[X3].

Solution: Conditioning on the initial state,

E[X3] = P(X3 = 1) + 2P(X3 = 2)
      = (1/4)P^3_{01} + (1/4)P^3_{11} + (1/2)P^3_{21} + 2[(1/4)P^3_{02} + (1/4)P^3_{12} + (1/2)P^3_{22}]
      = 53/54
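A quick check of E[X3] = 53/54 with exact arithmetic (a verification sketch; the initial distribution P(X0 = 0) = P(X0 = 1) = 1/4, P(X0 = 2) = 1/2 is the one implied by the coefficients in the displayed computation):

```python
from fractions import Fraction as F

P = [[F(1, 2), F(1, 3), F(1, 6)],
     [F(0),    F(1, 3), F(2, 3)],
     [F(1, 2), F(0),    F(1, 2)]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P3 = matmul(matmul(P, P), P)                  # three-step transition matrix
init = [F(1, 4), F(1, 4), F(1, 2)]
dist3 = [sum(init[i] * P3[i][j] for i in range(3)) for j in range(3)]
ex3 = sum(j * dist3[j] for j in range(3))     # E[X3]
```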
(Recurrence of the two-dimensional symmetric random walk, which at each step moves one unit left, right, up, or down, each with probability 1/4.)

Since this Markov chain is irreducible, it follows that all states will be recurrent if state 0 = (0, 0) is recurrent. It is impossible to return to the origin after an odd number of jumps, i.e., P^{2n−1}_{00} = 0, n = 1, 2, . . ., so consider P^{2n}_{00}. After 2n steps, the chain will be back in its original location if, for some i, 0 ≤ i ≤ n, the 2n steps consist of i steps to the left, i to the right, n − i up, and n − i down. Since each step is one of these four types with probability 1/4, the desired probability is a multinomial probability. That is,

P^{2n}_{00} = Σ_{i=0}^{n} (2n)!/(i! i! (n − i)! (n − i)!) (1/4)^{2n}
           = Σ_{i=0}^{n} [(2n)!/(n! n!)] [n!/(i!(n − i)!)] [n!/(i!(n − i)!)] (1/4)^{2n}
           = (1/4)^{2n} C(2n, n) Σ_{i=0}^{n} C(n, i)^2
           = (1/4)^{2n} C(2n, n)^2

where the last equality uses the combinatorial identity

C(2n, n) = Σ_{i=0}^{n} C(n, i)^2

Now by Stirling's approximation n! ∼ (n/e)^n √(2πn),

C(2n, n) = (2n)!/(n! n!) ∼ (2n)^{2n+1/2} e^{−2n} √(2π) / (2π n^{2n+1} e^{−2n}) = 4^n/√(πn)

Hence

P^{2n}_{00} ∼ (1/16^n)(4^n/√(πn))(4^n/√(πn)) = 1/(πn)

which shows that Σ_{n≥1} P^{2n}_{00} = ∞ by the divergence of the harmonic series, and thus all states are recurrent.
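The closed form and the asymptotic rate can be checked numerically (a verification sketch; the cutoffs n = 10 and n = 200 are arbitrary):

```python
import math

def p00_closed(n):
    """P(back at origin after 2n steps) = (C(2n, n) / 4^n)^2."""
    return (math.comb(2 * n, n) / 4**n) ** 2

def p00_multinomial(n):
    """Direct multinomial sum over i left/right pairs and n - i up/down pairs."""
    return sum(math.factorial(2 * n)
               / (math.factorial(i)**2 * math.factorial(n - i)**2)
               for i in range(n + 1)) / 4**(2 * n)

agree = abs(p00_closed(10) - p00_multinomial(10))
ratio = p00_closed(200) * math.pi * 200       # should approach 1 as n grows
```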
Solution:
We can show that the probabilities πj = 1/(M + 1), j = 0, 1, . . . , M satisfy the stationarity equations. Since the transition matrix is doubly stochastic, each column also sums to one, so

πj = Σ_{i=0}^{M} πi Pij = (1/(M + 1)) Σ_{i=0}^{M} Pij = 1/(M + 1)

and

Σ_{j=0}^{M} πj = 1

so the uniform distribution is stationary.
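A minimal check on a concrete doubly stochastic matrix (a sketch; M = 2 and this particular matrix are arbitrary example choices):

```python
from fractions import Fraction as F

# A doubly stochastic transition matrix on states {0, 1, 2}: rows AND columns sum to 1
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(1, 4), F(1, 4), F(1, 2)]]
M = 2
pi = [F(1, M + 1)] * (M + 1)
pi_next = [sum(pi[i] * P[i][j] for i in range(M + 1)) for j in range(M + 1)]
```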
(An umbrella problem: a man walks between two locations each day; he owns r umbrellas in total, takes one with him only when it is raining and one is available at his current location, and it rains at the time of each trip with probability p, independently of the past.)

(a) Define a Markov chain with r + 1 states which will help us determine the
proportion of time that our man gets wet. (Note: he gets wet if it is raining
and all umbrellas are at his other location.)
(b) Find the limiting probabilities of the Markov chain.
(c) What fraction of time does our man get wet?
(d) When r = 3, what value of p maximizes the fraction of time he gets wet?
Solution:
(a) Let the state Xn be the number of umbrellas he has at his present location,
which varies in the range {0, 1, . . . , r}. Since Xn+1 depends only on Xn and
the weather condition when he departs the current location, Xn is a Markov
chain. If he has no umbrella at his present location, all r are at the other
location, so the next state is r regardless of the weather; otherwise he carries
one across exactly when it rains. The transition probabilities are therefore

P_{0,r} = 1
P_{i,r−i} = 1 − p,  P_{i,r−i+1} = p,  i = 1, 2, . . . , r
(b) Since the chain has finitely many states and they all communicate, it is
irreducible, and we need to find the limiting probabilities πi such that

π0 = (1 − p)πr
πi = (1 − p)π_{r−i} + pπ_{r−i+1},  i = 1, 2, . . . , r − 1
πr = π0 + pπ1
Σ_{i=0}^{r} πi = 1
From

π0 = (1 − p)πr
πr = π0 + pπ1

we have

πr = π1 = π0/(1 − p)

Letting i = 1, so with

π1 = (1 − p)π_{r−1} + pπr

we can derive

πr = π1 = π_{r−1} = π0/(1 − p)

and letting i = r − 1, so with

π_{r−1} = (1 − p)π1 + pπ2

we can derive

πr = π1 = π_{r−1} = π2 = π0/(1 − p)

Following this recursion we can show

π1 = π2 = · · · = πr = π0/(1 − p)
and with the normalization equation Σ_{i=0}^{r} πi = 1, i.e., π0(1 + r/(1 − p)) = 1, we have

πi = (1 − p)/(r + 1 − p),  if i = 0
πi = 1/(r + 1 − p),        if i = 1, 2, . . . , r
(c) He gets wet on a trip when it rains and there is no umbrella at his present
location, so the long-run fraction of time he gets wet is

pπ0 = p(1 − p)/(r + 1 − p)
(d) From

d²/dp² [p(1 − p)/(4 − p)] = 24/(p − 4)³ < 0

we know that this probability is concave in p, so the first-order condition
identifies the maximizer. Thus letting

d/dp [p(1 − p)/(4 − p)] = (p² − 8p + 4)/(p − 4)² = 0

we have p* = 4 − 2√3 ≈ 0.536.
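The limiting probabilities and the maximizer p* = 4 − 2√3 can be confirmed numerically (a sketch; the matrix encodes the transition probabilities P_{0,r} = 1, P_{i,r−i} = 1 − p, P_{i,r−i+1} = p, and r = 3, p = 0.4 are arbitrary test values):

```python
import math

def umbrella_matrix(r, p):
    P = [[0.0] * (r + 1) for _ in range(r + 1)]
    P[0][r] = 1.0                         # no umbrella here: all r are at the other place
    for i in range(1, r + 1):
        P[i][r - i] = 1 - p               # dry trip: umbrellas stay put
        P[i][r - i + 1] = p               # rainy trip: carry one across
    return P

def stationary(P, iters=5000):
    """Limiting distribution by repeated application of the transition matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

r, p = 3, 0.4
pi = stationary(umbrella_matrix(r, p))
pred0, predi = (1 - p) / (r + 1 - p), 1 / (r + 1 - p)

wet = lambda q: q * (1 - q) / (4 - q)     # fraction of time wet when r = 3
best = max((wet(k / 10000), k / 10000) for k in range(10001))[1]
```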
(a) Alternatively, let the state Xn be the number of umbrellas he has at home at
the beginning of each day, which varies in the range {0, 1, . . . , r}. Since Xn+1
depends only on Xn and the weather conditions at the beginning and the end of
the day, which are independent of Xn, Xn is a Markov chain. The transition
probabilities are

P_{0,0} = 1 − p,  P_{0,1} = p
P_{i,i} = p² + (1 − p)² = 1 − 2p(1 − p),  P_{i,i+1} = p(1 − p),  P_{i,i−1} = p(1 − p),  i = 1, 2, . . . , r − 1
P_{r,r−1} = p(1 − p),  P_{r,r} = 1 − p(1 − p)

The first line holds because with no umbrella at home the morning weather is
irrelevant: the state becomes 1 exactly when it rains in the evening and he
brings an umbrella home from the office. For i = 1, 2, . . . , r − 1, P_{i,i}
corresponds to both trips rainy or both dry, P_{i,i+1} to a dry morning and a
rainy evening, and P_{i,i−1} to a rainy morning and a dry evening. The third
line holds because the transition r → r − 1 happens only when it rains in the
morning and does not rain in the evening; in all other cases the state is
unchanged (with all r umbrellas at home and a dry morning, there is no umbrella
at the office to bring back).
(b) The limiting probabilities πi satisfy the balance equations. From the
balance equation at state 0,

π0 = (1 − p)π0 + p(1 − p)π1

we have π0 = (1 − p)π1. The balance equations at the interior states give

π_{i+1} − πi = πi − π_{i−1}

while from

πr = p(1 − p)π_{r−1} + (1 − p(1 − p))πr

we have πr = π_{r−1}. Since the successive differences are equal and the last
one is zero,

πr = π_{r−1} = · · · = π2 = π1

Interestingly, this is the same as in the case where we treat the state variable
as the number of umbrellas at the present location. Since these limiting
probabilities are the same as those obtained above, parts (c) and (d) are
identical.
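One can check that this second formulation yields the same limiting distribution (a sketch; r = 4, p = 0.3 are arbitrary test values, and the matrix takes the middle case for i = 1, . . . , r − 1):

```python
def home_matrix(r, p):
    P = [[0.0] * (r + 1) for _ in range(r + 1)]
    P[0][0], P[0][1] = 1 - p, p
    for i in range(1, r):
        P[i][i] = p * p + (1 - p) * (1 - p)   # both trips rainy or both dry
        P[i][i + 1] = p * (1 - p)             # dry morning, rainy evening
        P[i][i - 1] = p * (1 - p)             # rainy morning, dry evening
    P[r][r - 1], P[r][r] = p * (1 - p), 1 - p * (1 - p)
    return P

def stationary(P, iters=5000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

r, p = 4, 0.3
pi = stationary(home_matrix(r, p))
pred0, predi = (1 - p) / (r + 1 - p), 1 / (r + 1 - p)
```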
7. Let {Xn, n ≥ 0} denote an ergodic Markov chain with limiting probabilities πi. Define the process {Yn, n ≥ 1} by Yn = (Xn−1, Xn). That is, Yn keeps track of the last two states of the original chain. Is {Yn, n ≥ 1} a Markov chain? If so, determine its transition probabilities and find lim_{n→∞} P{Yn = (i, j)}.
Solution:
{Yn, n ≥ 1} is a Markov chain with states (i, j) and transition probabilities

P_{(i,j),(k,l)} = 0,     if j ≠ k
P_{(i,j),(k,l)} = P_{jl}, if j = k

and

lim_{n→∞} P{Yn = (i, j)} = lim_{n→∞} P{Xn = i, Xn+1 = j} = lim_{n→∞} P{Xn = i} Pij = πi Pij
8. Consider the Ehrenfest urn model in which M molecules are distributed between two urns, and at each time point one of the molecules is chosen at random and is then removed from its urn and placed in the other one. Let Xn denote the number of molecules in urn 1 after the nth switch and let µn = E[Xn]. (a) Derive a recursion for µn+1 in terms of µn. (b) Show that µn = M/2 + ((M − 2)/M)^n (E[X0] − M/2).
Solution:
(a) Given Xn, the chosen molecule is in urn 2 with probability (M − Xn)/M, in
which case Xn+1 = Xn + 1, and in urn 1 with probability Xn/M, in which case
Xn+1 = Xn − 1. Hence,

E[Xn+1 | Xn] = Xn + (M − Xn)/M − Xn/M = Xn + 1 − 2Xn/M

and taking expectations, µn+1 = 1 + (1 − 2/M)µn.
(b) We prove this by induction. When n = 0,

M/2 + ((M − 2)/M)^0 (E[X0] − M/2) = M/2 + E[X0] − M/2 = E[X0] = µ0

Assuming the formula holds for n, part (a) gives

µn+1 = 1 + (1 − 2/M)µn
     = 1 + (1 − 2/M)[M/2 + ((M − 2)/M)^n (E[X0] − M/2)]
     = 1 + M/2 − 1 + ((M − 2)/M)^{n+1} (E[X0] − M/2)
     = M/2 + ((M − 2)/M)^{n+1} (E[X0] − M/2)

completing the induction.
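A quick check that iterating the recursion µn+1 = 1 + (1 − 2/M)µn reproduces the closed form of part (b) (a sketch; M = 10 and E[X0] = 3 are arbitrary test values):

```python
M, mu0 = 10, 3.0

mu, iterates = mu0, [mu0]
for _ in range(40):
    mu = 1 + (1 - 2 / M) * mu       # the recursion from part (a)
    iterates.append(mu)

def closed_form(n):
    return M / 2 + ((M - 2) / M) ** n * (mu0 - M / 2)

max_err = max(abs(iterates[n] - closed_form(n)) for n in range(41))
```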
For the general chain on states 0, 1, . . . , M that moves from i up to i + 1 with probability αi and down to i − 1 with probability 1 − αi (so α0 = 1 and αM = 0), the limiting probabilities satisfy

π0 = (1 − α1)π1
πi = α_{i−1}π_{i−1} + (1 − α_{i+1})π_{i+1},  i = 1, 2, . . . , M − 1
πM = α_{M−1}π_{M−1}
Σ_{i=0}^{M} πi = 1

Rewriting πi = α_{i−1}π_{i−1} + (1 − α_{i+1})π_{i+1} as

π_{i+1} = πi/(1 − α_{i+1}) − α_{i−1}π_{i−1}/(1 − α_{i+1})

we can recursively derive

πi = (Π_{j=0}^{i−1} αj / Π_{j=1}^{i} (1 − αj)) π0,  i = 1, 2, . . . , M

Indeed, π0 = (1 − α1)π1 gives π1 = α0π0/(1 − α1), and letting i = 1, the equation

π1 = α0π0 + (1 − α2)π2

can be rewritten as

α1π1 = (1 − α2)π2,  or  π2 = (α0α1/((1 − α1)(1 − α2))) π0

and letting i = 2, 3, . . . , M − 1 in turn extends this to

πi = (Π_{j=0}^{i−1} αj / Π_{j=1}^{i} (1 − αj)) π0,  i = 1, 2, . . . , M
Now with Σ_{i=0}^{M} πi = 1 we have

π0 = [1 + Σ_{i=1}^{M} (Π_{j=0}^{i−1} αj / Π_{j=1}^{i} (1 − αj))]^{−1}

πi = (Π_{j=0}^{i−1} αj / Π_{j=1}^{i} (1 − αj)) π0,  i = 1, 2, . . . , M

In particular, for the Ehrenfest chain, where αi = (M − i)/M, the products telescope to C(M, i), so π0 = 2^{−M} and we have

πi = C(M, i) (1/2)^M,  i = 0, 1, . . . , M
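The product formula can be checked against the binomial answer for the Ehrenfest case αi = (M − i)/M (a sketch; M = 8 is arbitrary):

```python
import math

M = 8
alpha = [(M - i) / M for i in range(M + 1)]   # alpha[0] = 1, alpha[M] = 0

weights = [1.0]                                # unnormalized pi_i / pi_0
for i in range(1, M + 1):
    num = 1.0
    for j in range(i):                         # prod of alpha_0 .. alpha_{i-1}
        num *= alpha[j]
    den = 1.0
    for j in range(1, i + 1):                  # prod of (1 - alpha_1) .. (1 - alpha_i)
        den *= 1 - alpha[j]
    weights.append(num / den)

Z = sum(weights)
pi = [w / Z for w in weights]
binomial = [math.comb(M, i) / 2**M for i in range(M + 1)]
max_err = max(abs(a - b) for a, b in zip(pi, binomial))
```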
9. For the Markov chain with states 1, 2, 3, 4 whose transition probability matrix P is as specified below, find f_{i3} and s_{i3} for i = 1, 2, 3.

P = [ 0.4  0.2  0.1  0.3 ]
    [ 0.1  0.5  0.2  0.2 ]
    [ 0.3  0.4  0.2  0.1 ]
    [ 0    0    0    1   ]
Solution:
States 1, 2, 3 are transient and state 4 is absorbing. Restricting P to the
transient states,

P_T = [ 0.4  0.2  0.1 ]
      [ 0.1  0.5  0.2 ]
      [ 0.3  0.4  0.2 ]

and S = (I − P_T)^{−1} gives the expected numbers of visits:

s13 = 18/29,  s23 = 26/29,  s33 = 56/29

Then, using f_{i3} = s_{i3}/s33 for i ≠ 3 and f33 = (s33 − 1)/s33,

f13 = 9/28,  f23 = 13/28,  f33 = 27/56
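The values of s_{i3} and f_{i3} follow from the fundamental matrix S = (I − P_T)^{−1}, computed here exactly (a verification sketch using Gauss–Jordan elimination over rationals):

```python
from fractions import Fraction as F

Q = [[F(4, 10), F(2, 10), F(1, 10)],
     [F(1, 10), F(5, 10), F(2, 10)],
     [F(3, 10), F(4, 10), F(2, 10)]]
n = 3

# Gauss-Jordan: invert I - Q via the augmented matrix [I - Q | I]
A = [[F(int(i == j)) - Q[i][j] for j in range(n)]
     + [F(int(j == i)) for j in range(n)] for i in range(n)]
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    d = A[col][col]
    A[col] = [x / d for x in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]

S = [row[n:] for row in A]                 # S = (I - Q)^{-1}
s13, s23, s33 = S[0][2], S[1][2], S[2][2]
f13, f23 = s13 / s33, s23 / s33            # first-passage probabilities to state 3
f33 = (s33 - 1) / s33
```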
10. For a branching process, calculate π0 when
(a) P0 = 1/4, P1 = 3/4.
(b) P0 = 1/4, P1 = 1/2, P2 = 1/4.
(c) P0 = 1/6, P1 = 1/2, P2 = 1/3.
Solution:
(a) Since µ = 3/4 < 1, π0 = 1.
(b) Since µ = 1/2 + 2/4 = 1, π0 = 1.
(c) µ = 1/2 + 2/3 = 7/6 > 1, so solving the equation

π0 = Σ_{i=0}^{∞} π0^i Pi = 1/6 + (1/2)π0 + (1/3)π0²

i.e., 2π0² − 3π0 + 1 = 0, whose roots are 1 and 1/2, and taking the smallest nonnegative solution gives π0 = 1/2.
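The extinction probabilities can also be obtained by iterating the generating function from 0, which converges to the smallest nonnegative fixed point (a sketch; the tolerance is arbitrary, and convergence in the critical case (b) is slow, so the check there is loose):

```python
def extinction_prob(offspring_pmf, tol=1e-10, max_iter=10**6):
    """Iterate pi <- sum_k P_k pi^k starting from 0."""
    pi = 0.0
    for _ in range(max_iter):
        new = sum(p * pi**k for k, p in enumerate(offspring_pmf))
        if abs(new - pi) < tol:
            return new
        pi = new
    return pi

pa = extinction_prob([1/4, 3/4])            # mu = 3/4 < 1
pb = extinction_prob([1/4, 1/2, 1/4])       # mu = 1 (critical case)
pc = extinction_prob([1/6, 1/2, 1/3])       # mu = 7/6 > 1
```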
11. Consider a Markov chain in steady state. Say that a k-length run of zeroes ends at time m if

X_{m−k−1} ≠ 0,  X_{m−k} = X_{m−k+1} = · · · = X_{m−1} = 0,  Xm ≠ 0

Express the probability of this event in terms of π0 and P_{0,0}, where π0 is the limiting probability of state 0.
Solution:
Letting P be the desired probability, we obtain, upon conditioning on X_{m−k−1},

P = Σ_{i≠0} P(X_{m−k−1} = i) P(X_{m−k} = X_{m−k+1} = · · · = X_{m−1} = 0, Xm ≠ 0 | X_{m−k−1} = i)
  = Σ_{i≠0} πi P_{i0} (P00)^{k−1} (1 − P00)
  = (P00)^{k−1} (1 − P00) Σ_{i≠0} πi P_{i0}
  = (P00)^{k−1} (1 − P00) (Σ_i πi P_{i0} − π0 P00)
  = (P00)^{k−1} (1 − P00) (π0 − π0 P00)
  = π0 (1 − P00)² (P00)^{k−1}

using the stationarity relation Σ_i πi P_{i0} = π0.
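The resulting expression (which simplifies to π0(1 − P00)²(P00)^{k−1}, since Σ_i πi P_{i0} = π0) can be checked on a small chain (a sketch; the 3-state matrix and k = 4 are arbitrary test choices):

```python
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.4, 0.1, 0.5]]

pi = [1 / 3] * 3
for _ in range(3000):                      # power iteration to stationarity
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

k = 4
direct = (sum(pi[i] * P[i][0] for i in (1, 2))      # sum over i != 0
          * P[0][0] ** (k - 1) * (1 - P[0][0]))
formula = pi[0] * (1 - P[0][0]) ** 2 * P[0][0] ** (k - 1)
```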
Solution:
Let f^k_{ij} = P{Xk = j, Xm ≠ j, m = 1, · · · , k − 1 | X0 = i} denote the probability that the chain, starting in i, first visits j at time k. Conditioning on the time of the first visit to j,

P^n_{ij} = Σ_{k=1}^{n} f^k_{ij} P^{n−k}_{jj} = Σ_{k=0}^{n} f^k_{ij} P^{n−k}_{jj}

where the second equality holds because f^0_{ij} = 0 when i ≠ j.
(c) Use (b) and Stirling's approximation to show that for n large, E[Nn] is proportional to √n.

Solution:
(a) Notice that this symmetric random walk has period 2, i.e.,

P^{2n−1}_{00} = 0,  P^{2n}_{00} = C(2n, n)(1/2)^{2n},  n = 1, 2, 3, . . .

With Stirling's approximation we can show P^{2n}_{00} ∼ 1/√(πn), so lim_{n→∞} P^{2n}_{00} = 0, while from the hint we know lim_{n→∞} P^{2n}_{jj} = 2/M_{jj}, so M00 = ∞.
(b) Let In = 1 if Xn = 0 and In = 0 otherwise. Then

E[N2n] = E[Σ_{k=1}^{2n} Ik | X0 = 0] = Σ_{k=1}^{2n} P^k_{00} = Σ_{k=1}^{n} P^{2k}_{00} = Σ_{k=1}^{n} C(2k, k)(1/2)^{2k}

Now we will prove the formula E[N2n] = (2n + 1)C(2n, n)(1/2)^{2n} − 1 by induction. When n = 1, P^2_{00} = 1/2, while

(2n + 1)C(2n, n)(1/2)^{2n} − 1 = 3 · C(2, 1) · (1/4) − 1 = 1/2
For an arbitrary n ≥ 1, suppose the formula holds. Then

E[N2(n+1)] = E[N2n+2] = E[N2n] + C(2n + 2, n + 1)(1/2)^{2n+2}
= (2n + 1)C(2n, n)(1/2)^{2n} + C(2n + 2, n + 1)(1/2)^{2n+2} − 1
= [4(n + 1)²(2n + 1)(2n)!/(4(n + 1)² n! n!)](1/2)^{2n} + C(2n + 2, n + 1)(1/2)^{2n+2} − 1
= [2(n + 1)(2n + 2)(2n + 1)(2n)!/(4(n + 1)!(n + 1)!)](1/2)^{2n} + C(2n + 2, n + 1)(1/2)^{2n+2} − 1
= (2n + 2)[(2n + 2)!/((n + 1)!(n + 1)!)](1/2)^{2n+2} + C(2n + 2, n + 1)(1/2)^{2n+2} − 1
= (2n + 3)C(2n + 2, n + 1)(1/2)^{2n+2} − 1
(c) By Stirling's approximation, C(2n, n)(1/2)^{2n} ∼ 1/√(πn), so

E[N2n] = (2n + 1)C(2n, n)(1/2)^{2n} − 1 ∼ (2n + 1)/√(πn) ∼ √(2/π) · √(2n)

while

E[N2n+1] = E[N2n] ∼ √(2/π) · √(2n) ∼ √(2/π) · √(2n + 1)

so in either case

E[Nn] ∼ √(2n/π)
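Both the induction identity and the asymptotic rate can be checked numerically (a sketch using exact rationals to avoid floating-point overflow; the cutoffs are arbitrary):

```python
import math
from fractions import Fraction as F

def en_sum(n):
    """E[N_2n] as the direct sum of return probabilities."""
    return sum(F(math.comb(2 * k, k), 4**k) for k in range(1, n + 1))

def en_closed(n):
    return (2 * n + 1) * F(math.comb(2 * n, n), 4**n) - 1

identity_ok = all(en_sum(n) == en_closed(n) for n in range(1, 30))

n = 4000
ratio = float(en_closed(n)) / math.sqrt(2 * (2 * n) / math.pi)  # should approach 1
```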
14. For the gambler's ruin model of Section 4.5.1, let Mi denote the mean number of games that must be played until the gambler either goes broke or reaches a fortune of N, given that he starts with i, i = 0, 1, . . . , N. (a) Show that

M0 = MN = 0;  Mi = 1 + pMi+1 + qMi−1,  i = 1, . . . , N − 1

(b) Solve these equations for Mi.
Solution:
(a) M0 = MN = 0 is obvious. For i = 1, . . . , N − 1, conditioning on the outcome
of the initial play X (the fortune moves to i + 1 with probability p and to
i − 1 with probability q) gives

Mi = 1 + pMi+1 + qMi−1

(b) Rewriting the recursion as p(Mi+1 − Mi) = q(Mi − Mi−1) − 1 and iterating the
differences (with M1 − M0 = M1), we obtain

Mi = M1 Σ_{j=0}^{i−1} (q/p)^j − (1/p) Σ_{j=2}^{i} Σ_{k=0}^{j−2} (q/p)^k

   = M1 (1 − (q/p)^i)/(1 − q/p) + [Σ_{j=2}^{i} (q/p)^{j−1} − (i − 1)]/(p − q),  if p ≠ q
   = iM1 − i(i − 1),  if p = q

   = M1 (1 − (q/p)^i)/(1 − q/p) − i/(p − q) + (1 − (q/p)^i)/(p(1 − q/p)²),  if p ≠ q
   = iM1 − i(i − 1),  if p = q
and, using MN = 0 to solve for M1 and substituting back,

Mi = i(N − i),  if p = 1/2
Mi = i/(q − p) − (N/(q − p)) · (1 − (q/p)^i)/(1 − (q/p)^N),  if p ≠ 1/2
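The closed-form answer can be checked against a direct linear-system solution of the recursion (a sketch; N = 10 and the two p values are arbitrary test choices):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def mean_duration(N, p):
    """Solve M_i = 1 + p M_{i+1} + q M_{i-1} with M_0 = M_N = 0."""
    q = 1 - p
    n = N - 1                           # unknowns M_1 .. M_{N-1}
    A = [[0.0] * n for _ in range(n)]
    for r in range(n):
        A[r][r] = 1.0
        if r > 0:
            A[r][r - 1] = -q
        if r < n - 1:
            A[r][r + 1] = -p
    M = solve(A, [1.0] * n)
    return [0.0] + M + [0.0]

def closed(i, N, p):
    if p == 0.5:
        return i * (N - i)
    q = 1 - p
    r = q / p
    return i / (q - p) - (N / (q - p)) * (1 - r**i) / (1 - r**N)

N = 10
err = max(abs(mean_duration(N, p)[i] - closed(i, N, p))
          for p in (0.5, 0.6) for i in range(N + 1))
```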
15. A store that stocks a certain commodity uses the following (s, S) ordering policy: if its supply at the beginning of a time period is x, then it orders

0,      if x ≥ s
S − x,  if x < s
The order is immediately filled. The daily demands are independent and equal j
with probability αj . All demands that cannot be immediately met are lost. Let Xn
denote the inventory level at the end of the nth time period. Argue that {Xn , n ≥ 1}
is a Markov chain and compute its transition probabilities.
Solution:
Let Dn denote the demand of the nth time period. Since all demands that cannot
be immediately met are lost, we do not have backorders and the inventory level
will always be nonnegative. Define x+ = max{x, 0} so we have the inter-temporal
relationship: (
(Xn − Dn+1 )+ , if Xn ≥ s,
Xn+1 =
(S − Dn+1 )+ , if Xn < s.
Since the daily demands are independent, Xn depends only on X0, D1, D2, . . . , Dn
and is independent of Dn+1; hence Xn+1 depends only on Xn and Dn+1, so
{Xn, n ≥ 1} is a Markov chain. To simplify the derivation of the transition
probabilities we assume that X0 ≤ S and, without much loss of generality, that
Dn is integer-valued with range [0, D], where D ≥ S and may be ∞ (this
generalizes to the case where Dn is real-valued with a finite or countably
infinite number of possible realizations). So the range of Xn is {0, 1, . . . , S}.
Now we set out to find the transition probabilities of this Markov chain. Since
Dn are i.i.d. and the ordering policy is time-invariant, this Markov chain is time-
homogeneous so let Pij be the transition probability.
When 0 ≤ i < s, the store orders up to S units, so

Pij = Σ_{k≥S} αk,  if j = 0
Pij = α_{S−j},     if 0 < j ≤ S
When s ≤ i ≤ S, no order is placed, so

Pij = Σ_{k≥i} αk,  if j = 0
Pij = α_{i−j},     if 0 < j ≤ i
Pij = 0,           if j > i

which completely specifies the transition probability matrix.
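A sketch that assembles the transition matrix for a small instance (the truncated geometric demand distribution and the values s = 2, S = 5 are arbitrary; the demand support is taken large enough that the j = 0 tail term is exact):

```python
def transition_matrix(s, S, alpha):
    """alpha[j] = P(daily demand = j); alpha must sum to 1."""
    P = [[0.0] * (S + 1) for _ in range(S + 1)]
    for x in range(S + 1):
        stock = S if x < s else x              # inventory after any reorder
        for j in range(S + 1):
            if j == 0:
                P[x][0] = sum(alpha[stock:])   # demand >= stock: sold out
            elif j <= stock:
                P[x][j] = alpha[stock - j]     # demand exactly stock - j
    return P

# example: truncated geometric demand, normalized to sum to 1
raw = [0.5**k for k in range(12)]
alpha = [w / sum(raw) for w in raw]
P = transition_matrix(2, 5, alpha)
row_sums = [sum(row) for row in P]
```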