MATH3603 Probability Theory Assignment 4

Yip Kam Kai Wilson (3035155933)

1. (a)
P_{i,i+1} = \frac{(m-i)^2}{m^2}, \qquad P_{i,i-1} = \frac{i^2}{m^2}, \qquad P_{i,i} = \frac{2i(m-i)}{m^2}.
(b) Since, in the limit, the set of m balls in urn 1 is equally likely to be any subset of m balls, it is
intuitively clear that

\pi_i = \frac{\binom{m}{i}\binom{m}{m-i}}{\binom{2m}{m}} = \frac{\binom{m}{i}^2}{\binom{2m}{m}}.
(c) We must verify that, with the \pi_i given in (b), \pi_i P_{i,i+1} = \pi_{i+1} P_{i+1,i}. Since
\binom{m}{m-i} = \binom{m}{i}, this reduces (after taking square roots of both sides) to verifying that

\binom{m}{i}(m-i) = \binom{m}{i+1}(i+1),

which is immediate.
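
A quick numerical check of (a)-(c), not part of the original solution: the short Python sketch below builds the
transition matrix for a small m and confirms that \pi_i = \binom{m}{i}^2 / \binom{2m}{m} is stationary and satisfies
detailed balance. The variable names are illustrative.

from math import comb
import numpy as np

m = 5
P = np.zeros((m + 1, m + 1))
for i in range(m + 1):
    P[i, i] = 2 * i * (m - i) / m**2        # P_{i,i}
    if i < m:
        P[i, i + 1] = (m - i)**2 / m**2     # P_{i,i+1}
    if i > 0:
        P[i, i - 1] = i**2 / m**2           # P_{i,i-1}

pi = np.array([comb(m, i)**2 for i in range(m + 1)], dtype=float) / comb(2 * m, m)

print(np.allclose(pi @ P, pi))              # stationarity: pi P = pi
print(all(np.isclose(pi[i] * P[i, i + 1], pi[i + 1] * P[i + 1, i]) for i in range(m)))  # detailed balance from (c)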

2.

3. Let the state be the ordering, so there are n! states. The transition probabilities are
P_{(i_1,\dots,i_n),\,(i_j,i_1,\dots,i_{j-1},i_{j+1},\dots,i_n)} = \frac{1}{n}.
It is now easy to check that this Markov chain is doubly stochastic and so, in the limit, all n! possible
states are equally likely.
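
As a sanity check, not in the original solution, the doubly stochastic claim can be verified by brute force for a
small n: build the n! by n! transition matrix of the "move a randomly chosen element to the front" dynamics above
and check that both the row sums and the column sums equal 1. The names below are illustrative.

from itertools import permutations
import numpy as np

n = 4
states = list(permutations(range(n)))
idx = {s: k for k, s in enumerate(states)}

P = np.zeros((len(states), len(states)))
for s in states:
    for j in range(n):                       # move the element in position j to the front
        t = (s[j],) + s[:j] + s[j + 1:]
        P[idx[s], idx[t]] += 1 / n

print(np.allclose(P.sum(axis=1), 1.0))       # stochastic: every row sums to 1
print(np.allclose(P.sum(axis=0), 1.0))       # doubly stochastic: every column sums to 1, so the
                                             # uniform distribution on the n! orderings is stationary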

4. (a)
(b)
(c) If \pi_i = a > 0 for all i, then, for any n, the proportion of time the chain is in any of the states 1, \dots, n
is na. But this is impossible when n > 1/a. Thus, \pi_i = 0 for all i.

5. (a) Let time 0 be Monday; then our Markov chain (X_n)_{n \ge 0} with transition matrix P starts
from state 2. Let A be the event that she wins all her games on Tuesday, that is, at time 1. Since
X_0 = 2 is given, conditioning on X_1 yields

P[A] = P[\text{she wins all games on Tuesday}]
     = P[A \mid X_1 = 1]\, P[X_1 = 1] + P[A \mid X_1 = 2]\, P[X_1 = 2]
     = p\, P_{2,1} + p^2\, P_{2,2} = \frac{2}{5}\, p + \frac{3}{5}\, p^2.
(b) Wednesday is time 2, so we need the distribution of X_2. In general

P[X_n = k] = \sum_j P[X_0 = j]\, (P^n)_{j,k}.

In our case X_0 = 2 and n = 2, so

P[X_2 = 1] = (P^2)_{2,1}, \qquad P[X_2 = 2] = (P^2)_{2,2}.

The square of P is

P^2 = \begin{pmatrix} 0.2 & 0.8 \\ 0.4 & 0.6 \end{pmatrix}^2 = \begin{pmatrix} 0.36 & 0.64 \\ 0.32 & 0.68 \end{pmatrix}.
(c) The Markov chain of this problem is irreducible and has a finite state space, so we can use the
ergodic theorem. By the ergodic theorem, in the long run, the proportion of days when Capa
plays 1 game is \pi_1, where \pi is the invariant distribution of the Markov chain. Similarly, in the
long run, the proportion of days she plays 2 games is \pi_2.
Consider first those days when she plays 1 game. Within these days the proportion of time
she wins all games (1 game) converges to p. This is the law of large numbers. (The number
of experiments goes to infinity, and each experiment, independently of the other experiments,
yields a success with probability p. Then the relative frequency of successes goes to p.)
The other case is similar. That is, in the long run, the proportion of time she wins all games
within those days when she plays two games is p^2. So finally we get

\frac{1}{N}\, \#\{\text{days before } N \text{ when she wins all games}\} \to \pi_1 p + \pi_2 p^2 \quad \text{as } N \to \infty.

It remains to compute the invariant distribution. The equation \pi P = \pi gives
\pi_1 = 0.2\,\pi_1 + 0.4\,\pi_2, that is, \pi_2 = 2\pi_1. Together with
\pi_1 + \pi_2 = 1 this yields \pi_1 = 1/3 and \pi_2 = 2/3. So the long run proportion of days she wins all games
is

\frac{1}{3}\, p + \frac{2}{3}\, p^2.
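
The computations in (a)-(c) are easy to reproduce numerically. The sketch below is my own addition, not part of
the assignment; it takes an illustrative win probability p, recomputes P^2 and the invariant distribution, and
checks the long-run proportion \pi_1 p + \pi_2 p^2 by simulation.

import numpy as np

rng = np.random.default_rng(0)
p = 0.7                                          # illustrative win probability
P = np.array([[0.2, 0.8],
              [0.4, 0.6]])                       # state i = she plays i games that day

print(np.linalg.matrix_power(P, 2))              # part (b): row 2 gives the distribution of X_2

# part (c): invariant distribution from pi P = pi together with pi_1 + pi_2 = 1
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi, pi[0] * p + pi[1] * p**2)              # should give (1/3, 2/3) and p/3 + 2 p^2 / 3

# simulate many days starting from state 2 (Monday) and count the days she wins all games
N, state, wins = 200_000, 1, 0                   # index 0 = state 1, index 1 = state 2
for _ in range(N):
    games = state + 1
    wins += rng.random(games).max() < p          # all of that day's games are won with prob p**games
    state = rng.choice(2, p=P[state])
print(wins / N)                                  # close to pi_1 p + pi_2 p^2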
6. Conditioning on whether machine 1 is still working at time t, we have

1 - e^{-\lambda_1 t} + e^{-\lambda_1 t}\, \frac{\lambda_1}{\lambda_1 + \lambda_2}.
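
A short Monte Carlo check of this answer, my own addition with arbitrary illustrative values of \lambda_1,
\lambda_2 and t: machine 1 starts now with an Exp(\lambda_1) lifetime, machine 2 is put in use at time t with an
Exp(\lambda_2) lifetime, and we estimate the probability that machine 1 is the first to fail.

import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, t = 1.0, 0.5, 2.0                    # illustrative rates and delay
n = 200_000

x1 = rng.exponential(1 / lam1, n)                # failure time of machine 1
x2 = t + rng.exponential(1 / lam2, n)            # machine 2 only starts at time t
estimate = np.mean(x1 < x2)

exact = 1 - np.exp(-lam1 * t) + np.exp(-lam1 * t) * lam1 / (lam1 + lam2)
print(estimate, exact)                           # the two numbers should agree closely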
7.
E[\text{time}] = E[\text{time waiting at 1}] + \frac{1}{\mu_1} + E[\text{time waiting at 2}] + \frac{1}{\mu_2}.

Now

E[\text{time waiting at 1}] = \frac{1}{\mu_1}, \qquad E[\text{time waiting at 2}] = \frac{1}{\mu_2}\cdot\frac{\mu_1}{\mu_1 + \mu_2}.

The last equation follows by conditioning on whether or not the customer waits for server 2. Therefore,

E[\text{time}] = \frac{2}{\mu_1} + \frac{1}{\mu_2}\left(1 + \frac{\mu_1}{\mu_1 + \mu_2}\right).
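
These formulas fit the standard setup in which the arriving customer finds one other customer in service at
server 1 and server 2 idle; the problem statement is not reproduced here, so that reading is an assumption of mine.
Under it, the answer 2/\mu_1 + (1/\mu_2)(1 + \mu_1/(\mu_1 + \mu_2)) can be checked by simulation with illustrative
rates.

import numpy as np

rng = np.random.default_rng(2)
mu1, mu2 = 1.0, 2.0                              # illustrative service rates
n = 200_000

wait1 = rng.exponential(1 / mu1, n)              # residual service of the customer ahead of us at server 1
serve1 = rng.exponential(1 / mu1, n)             # our own service at server 1
other2 = rng.exponential(1 / mu2, n)             # that customer's service at server 2, starting when ours at 1 starts
serve2 = rng.exponential(1 / mu2, n)             # our own service at server 2

wait2 = np.maximum(other2 - serve1, 0.0)         # we wait at server 2 only if the other customer is still there
total = wait1 + serve1 + wait2 + serve2

exact = 2 / mu1 + (1 / mu2) * (1 + mu1 / (mu1 + mu2))
print(total.mean(), exact)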
8. Condition on which animal died to obtain
E[\text{additional life}] = E[\text{additional life} \mid \text{dog died}]\, \frac{\lambda_d}{\lambda_c + \lambda_d}
 + E[\text{additional life} \mid \text{cat died}]\, \frac{\lambda_c}{\lambda_c + \lambda_d}
 = \frac{1}{\lambda_c}\cdot\frac{\lambda_d}{\lambda_c + \lambda_d} + \frac{1}{\lambda_d}\cdot\frac{\lambda_c}{\lambda_c + \lambda_d}.
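
Equivalently, the additional life of the surviving animal is |D - C| for independent exponential lifetimes D and C.
A short simulation, my addition with illustrative rates \lambda_d and \lambda_c, reproduces the formula above.

import numpy as np

rng = np.random.default_rng(3)
lam_d, lam_c = 1.0, 0.25                         # illustrative rates for the dog's and cat's lifetimes
n = 200_000

d = rng.exponential(1 / lam_d, n)                # dog's remaining lifetime
c = rng.exponential(1 / lam_c, n)                # cat's remaining lifetime
additional = np.abs(d - c)                       # life of the survivor after the first death

exact = (1 / lam_c) * lam_d / (lam_c + lam_d) + (1 / lam_d) * lam_c / (lam_c + lam_d)
print(additional.mean(), exact)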
9. (a) By the lack of memory property, no matter when Y fails, the remaining life of X is exponential
with rate \lambda.
(b) E[\min(X, Y) \mid X > Y + c] = E[\min(X, Y) \mid X > Y,\, X - Y > c] = E[\min(X, Y) \mid X > Y],
where the final equality follows from (a).
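
A quick simulation check of (b), my own addition, with X ~ Exp(\lambda), Y ~ Exp(\mu) independent and an arbitrary c:
the two conditional expectations of \min(X, Y) agree, both being equal to E[\min(X, Y) \mid X > Y] = 1/(\lambda + \mu).

import numpy as np

rng = np.random.default_rng(4)
lam, mu, c = 1.0, 2.0, 0.7                       # illustrative rates and shift
n = 1_000_000

x = rng.exponential(1 / lam, n)
y = rng.exponential(1 / mu, n)
m = np.minimum(x, y)

print(m[x > y + c].mean())                       # E[min(X, Y) | X > Y + c]
print(m[x > y].mean())                           # E[min(X, Y) | X > Y]; both are close to 1/(lam + mu)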
