University of Texas at Austin
Course: M362M - Introduction to Stochastic Processes I
Instructor: Mihai Sîrbu, Semester: Fall 2010

Solutions for the Final Exam, December 13th, 2010
Problem 1.1. (30 points) This is a multiple choice question. Please circle the correct
answer, no explanation needed.
(1) Let X be an $\mathbb{N}_0$-valued random variable, and let $P(s)$ be its generating function.
Then the generating function of $4X$ is given by
(a) $(P(s))^4$,
(b) $P(P(P(P(s))))$,
(c) $P(s^4)$,
(d) $4P(s)$,
(e) none of the above.
Solution: (c): $P_{4X}(s) = E[s^{4X}] = E[(s^4)^X] = P_X(s^4)$.
(2) Let X and Y be two $\mathbb{N}_0$-valued random variables, let $P_X(s)$ and $P_Y(s)$ be their
generating functions, and let $Z = X - Y$, $V = X + Y$, $W = XY$. Then
(a) $P_X(s) = P_Z(s)P_Y(s)$,
(b) $P_X(s)P_Y(s) = P_Z(s)$,
(c) $P_W(s) = P_X(P_Y(s))$,
(d) $P_Z(s)P_V(s) = P_X(s)P_Y(s)$,
(e) none of the above.
Solution: (e): Recall the definition of the generating function. Item (a) would be
true IFF $Z \geq 0$ and $Z$, $Y$ are INDEPENDENT. However, there is no such assumption here!
(3) Let $(X_n)_{n \geq 0}$ be a simple random walk with probabilities p and $q = 1 - p$, and denote by
$\tau_k = \min\{n \geq 0 : X_n = k\}$ the first hitting time of k. Then $E[\tau_1 + \tau_{-1}] < \infty$ if
(a) $p < 1/2$
(b) $p > 1/2$
(c) $q < 1/2$
(d) $q > 1/2$
(e) $p = q = 1/2$
(f) never
Solution: (f): In all cases, either $E[\tau_1]$ or $E[\tau_{-1}]$ is infinite, so their sum is infinite.
(4) Using the notation from the item above, if $p = q = 1/2$ (symmetric random walk)
and $a, b > 0$, then
(a) $P[\tau_a < \tau_{-b}] = a/(a+b)$
(b) $P[\tau_a < \tau_{-b}] = b/(a+b)$
(c) $P[\tau_a < \tau_{-b}] = a/b$
(d) $P[\tau_a < \tau_{-b}] = b/a$
(e) none of the above
Solution: (b): A quick way to check this is Wald II:
\[
X_0 = 0 = a\,P[\tau_a < \tau_{-b}] - b\left(1 - P[\tau_a < \tau_{-b}]\right),
\]
which gives $P[\tau_a < \tau_{-b}] = b/(a+b)$. (A small simulation check of this value appears after item (6) below.)
(5) Suppose there exists $n \in \mathbb{N}$ such that $P^n = I$, where I is the identity matrix and P
is the transition matrix of a finite-state-space Markov chain. Then
(a) $P = I$,
(b) all states belong to the same class,
(c) all states are recurrent,
(d) none of the above.
Solution: (c): The example of a chain with 2 states in which we go for sure from one
state to the other shows that (a) is NOT true. The example $P = I$ shows that (b) is NOT
true, since then every state forms a class of its own. On the other hand, (c) is correct,
since $P^n = I$ means that in exactly n steps we return, for sure, to the state we started
from.
(6) Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a Markov chain with finite state space which has at least one transient
state. Then
(a) the chain must have other transient states as well,
(b) the chain is irreducible,
(c) the chain must have an absorbing state,
(d) none of the above.
Solution: (d): Item (a) is not true, since we may have a chain with two states
in which we go from state one to state two for sure and are absorbed there.
Item (b) is not true either: the chain cannot be irreducible, since an irreducible chain with a
finite state space has all states recurrent. In addition,
the chain does not need to have an absorbing state: it may only have an absorbing
recurrent class with more than one state.
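As a quick sanity check of item (4) above (illustrative only, not part of the exam), here is a short Monte Carlo sketch in Python; the function name, the choice a = 3, b = 2, and the sample size are ours.

    import random

    def hits_a_before_minus_b(a, b):
        """Run a simple symmetric random walk from 0 until it hits +a or -b."""
        x = 0
        while -b < x < a:
            x += 1 if random.random() < 0.5 else -1
        return x == a

    a, b, trials = 3, 2, 100_000
    freq = sum(hits_a_before_minus_b(a, b) for _ in range(trials)) / trials
    print(freq, "vs. theoretical", b / (a + b))  # both should be close to 0.4

The empirical frequency should agree with $b/(a+b)$ up to Monte Carlo error.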
Problem 1.2. (15 points) Let $(X_n)_{n \geq 0}$ be a simple symmetric random walk and denote by
$(M_n)_{n \geq 0}$ its running MINIMUM, which means $M_n = \min(X_0, X_1, \dots, X_n)$. Compute
$P[X_{10} = 2, \ M_{10} \leq -2]$.
Solution: We just have to use the Reflection Principle, which applies to the running minimum
as well as to the running maximum, because the random walk is symmetric. More
precisely, we have
\[
P[X_{10} = 2, \ M_{10} \leq -2] = P[X_{10} = -6, \ M_{10} \leq -2] = P[X_{10} = -6] = \binom{10}{2}\left(\frac{1}{2}\right)^{10} = \frac{45}{2^{10}}.
\]
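A small Monte Carlo sketch in Python can be used to double-check this value ($45/1024 \approx 0.044$); the helper name and sample size are ours and purely illustrative.

    import random

    def endpoint_and_minimum(n=10):
        """One path of a simple symmetric random walk; return (X_n, running minimum)."""
        x, m = 0, 0
        for _ in range(n):
            x += 1 if random.random() < 0.5 else -1
            m = min(m, x)
        return x, m

    trials = 200_000
    count = 0
    for _ in range(trials):
        x, m = endpoint_and_minimum()
        if x == 2 and m <= -2:
            count += 1
    print(count / trials, "vs.", 45 / 2**10)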
Problem 1.3. (10 points) Consider a branching process where the original size of the
population is $Z_0 = 5$ and the offspring distribution is
\[
\begin{pmatrix} 0 & 1 & 2 & 3 \\ \tfrac{1}{4} & \tfrac{1}{4} & \tfrac{1}{4} & \tfrac{1}{4} \end{pmatrix}.
\]
Compute the probability of extinction.
Solution: The extinction equation is
\[
\tfrac{1}{4}s^3 + \tfrac{1}{4}s^2 + \tfrac{1}{4}s + \tfrac{1}{4} = s.
\]
This can be rewritten as
\[
s^3 + s^2 - 3s + 1 = 0.
\]
We know that $s = 1$ is a solution, so we factor out $s - 1$ from the polynomial above, obtaining
\[
(s - 1)(s^2 + 2s - 1) = 0.
\]
The solutions of
\[
s^2 + 2s - 1 = 0
\]
are
\[
s = \frac{-2 \pm \sqrt{8}}{2}.
\]
Therefore, the smallest solution of the extinction equation in the interval $[0, 1]$ is
\[
s = \frac{-2 + \sqrt{8}}{2} = \sqrt{2} - 1 \approx 41\%.
\]
This is the extinction probability IF $Z_0 = 1$. Since $Z_0 = 5$, the extinction probability is the
probability of 5 independent chains getting extinct, which is $\left(\sqrt{2} - 1\right)^5$.
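As a numerical cross-check (illustrative only, not part of the exam), one can iterate the offspring generating function in Python; the names below are ours.

    # Offspring generating function P(s) = (1 + s + s^2 + s^3)/4.
    P = lambda s: (1 + s + s**2 + s**3) / 4

    s = 0.0
    for _ in range(200):
        s = P(s)            # s_{k+1} = P(s_k) converges to the smallest fixed point in [0, 1]

    print(s, 2**0.5 - 1)            # both approximately 0.4142
    print(s**5, (2**0.5 - 1)**5)    # extinction probability for Z_0 = 5, approximately 0.0122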
Problem 1.4. (10 points) A soccer team plays every sunny day. The distribution of the
goals scored on a sunny day is
\[
\begin{pmatrix} 0 & 1 & 2 & 3 & 4 & 5 \\ \tfrac{1}{10} & \tfrac{1}{5} & \tfrac{1}{5} & \tfrac{1}{5} & \tfrac{1}{5} & \tfrac{1}{10} \end{pmatrix}.
\]
The probability of rain on any given day is $p = 5\%$. Denoting by X the total number of
goals scored starting today until the first rainy day (the team does not play at all if it rains
today), what is the generating function of X? What is the expected number of goals scored
before the first rainy day?
Solution: The number of goals scored until the first rainy day is actually a random sum
\[
X = \sum_{i=1}^{N} \xi_i,
\]
where N is the number of sunny days, so it is a GEOMETRIC random variable (number of
FAILURES before the first success) with probability of success $p = 0.05$. This means that
\[
P_N(s) = \frac{0.05}{1 - 0.95\,s}.
\]
The random variables $\xi_i$ are the goals scored during the (sunny) day i, so
\[
P_{\xi}(s) = \tfrac{1}{10}s^5 + \tfrac{1}{5}s^4 + \tfrac{1}{5}s^3 + \tfrac{1}{5}s^2 + \tfrac{1}{5}s + \tfrac{1}{10}.
\]
Now,
\[
P_X(s) = P_N(P_{\xi}(s)) = \frac{0.05}{1 - 0.95\left(\tfrac{1}{10}s^5 + \tfrac{1}{5}s^4 + \tfrac{1}{5}s^3 + \tfrac{1}{5}s^2 + \tfrac{1}{5}s + \tfrac{1}{10}\right)}.
\]
In addition, the expectation of X is
\[
E[X] = E[N]\,E[\xi] = \frac{0.95}{0.05} \cdot 2.5 = 47.5.
\]
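For an informal check (illustrative only), one can evaluate $P_X'(1)$ numerically in Python and compare with $E[N]\,E[\xi] = 19 \cdot 2.5 = 47.5$; the variable names and the step size are ours.

    # Goals-per-sunny-day distribution and the two generating functions.
    probs = {0: 0.1, 1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.1}
    P_xi = lambda s: sum(p * s**k for k, p in probs.items())
    P_N  = lambda s: 0.05 / (1 - 0.95 * s)     # geometric (number of failures), p = 0.05
    P_X  = lambda s: P_N(P_xi(s))

    h = 1e-6
    print((P_X(1) - P_X(1 - h)) / h)           # numerical derivative at s = 1, about 47.5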
Problem 1.5. (10 points) Consider a very simple Markov chain with only two states
(regimes), named 1 and 2, and transition matrix
\[
P = \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{4} & \tfrac{3}{4} \end{pmatrix}.
\]
The initial distribution of the chain is given by $P(X_0 = 1) = \tfrac{3}{4}$ and $P(X_0 = 2) = \tfrac{1}{4}$. We let
the chain run from time $n = 0$ to $n = 2$ and observe that
\[
X_2 = 2.
\]
Is it more likely that the chain started in state $X_0 = 1$ or in $X_0 = 2$? Explain your answer.
Solution: In this problem we actually have to compare the conditional probabilities of the
initial states, given the observation $X_2 = 2$. More precisely, we need to compare
\[
P[X_0 = 1 \mid X_2 = 2] \quad \text{and} \quad P[X_0 = 2 \mid X_2 = 2].
\]
We can use Bayes' rule, for example, to compute
\[
P[X_0 = 1 \mid X_2 = 2] = \frac{P[X_0 = 1]\, P[X_2 = 2 \mid X_0 = 1]}{P[X_0 = 1]\, P[X_2 = 2 \mid X_0 = 1] + P[X_0 = 2]\, P[X_2 = 2 \mid X_0 = 2]}.
\]
We now compute the matrix product
\[
P \cdot P = \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{4} & \tfrac{3}{4} \end{pmatrix} \cdot \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{4} & \tfrac{3}{4} \end{pmatrix} = \begin{pmatrix} x & \tfrac{5}{8} \\ y & \tfrac{11}{16} \end{pmatrix}
\]
(the entries x and y in the first column are not needed). We can use this to conclude that
\[
P[X_0 = 1 \mid X_2 = 2] = \frac{\tfrac{3}{4} \cdot \tfrac{5}{8}}{\tfrac{3}{4} \cdot \tfrac{5}{8} + \tfrac{1}{4} \cdot \tfrac{11}{16}} = \frac{30}{30 + 11} = \frac{30}{41}.
\]
This means that
\[
P[X_0 = 2 \mid X_2 = 2] = \frac{11}{41},
\]
so it is more likely that the chain started in the state $X_0 = 1$.
Problem 1.6. (10 points) Two players, A and B, play the following game: every time a point
is played, A wins the point with probability 1/4 and B wins the point with probability 3/4.
Since A has an advantage, the game ends when A is 4 points ahead or B is 2 points ahead.
Write vector-matrix equations (without solving the equations, just CIRCLE the appropriate
entry) for
(1) the probability that A wins,
(2) the expected number of points played before the game ends.
Solution:
(1) We can model the game as a Markov chain whose state is the number of points
by which player A is ahead. In this case the Markov chain is actually just a random
walk starting at $i = 0$ and absorbed at the boundaries 4 and $-2$. If we write the
state space as $S = \{-2, -1, 0, 1, 2, 3, 4\}$, then the transition matrix is
\[
P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
3/4 & 0 & 1/4 & 0 & 0 & 0 & 0 \\
0 & 3/4 & 0 & 1/4 & 0 & 0 & 0 \\
0 & 0 & 3/4 & 0 & 1/4 & 0 & 0 \\
0 & 0 & 0 & 3/4 & 0 & 1/4 & 0 \\
0 & 0 & 0 & 0 & 3/4 & 0 & 1/4 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
The states $-2$ and 4 are absorbing (so recurrent); all other states are transient. Please
recall this is just a random walk with absorption. The canonical decomposition is
therefore
\[
S = \{-2, 4\} \cup \{-1, 0, 1, 2, 3\}.
\]
Corresponding to this rearrangement of the states, the transition matrix becomes
\[
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
3/4 & 0 & 0 & 1/4 & 0 & 0 & 0 \\
0 & 0 & 3/4 & 0 & 1/4 & 0 & 0 \\
0 & 0 & 0 & 3/4 & 0 & 1/4 & 0 \\
0 & 0 & 0 & 0 & 3/4 & 0 & 1/4 \\
0 & 1/4 & 0 & 0 & 0 & 3/4 & 0
\end{pmatrix}.
\]
(2) Taking into account the canonical form, we have
\[
Q = \begin{pmatrix}
0 & 1/4 & 0 & 0 & 0 \\
3/4 & 0 & 1/4 & 0 & 0 \\
0 & 3/4 & 0 & 1/4 & 0 \\
0 & 0 & 3/4 & 0 & 1/4 \\
0 & 0 & 0 & 3/4 & 0
\end{pmatrix},
\quad
R = \begin{pmatrix}
3/4 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1/4
\end{pmatrix},
\quad
I - Q = \begin{pmatrix}
1 & -1/4 & 0 & 0 & 0 \\
-3/4 & 1 & -1/4 & 0 & 0 \\
0 & -3/4 & 1 & -1/4 & 0 \\
0 & 0 & -3/4 & 1 & -1/4 \\
0 & 0 & 0 & -3/4 & 1
\end{pmatrix}.
\]
The equation we need to solve in order to find the probability $u_{0,4}$ that player A
wins can be written in the matrix form
\[
\begin{pmatrix}
u_{-1,-2} & u_{-1,4} \\
u_{0,-2} & u_{0,4} \\
u_{1,-2} & u_{1,4} \\
u_{2,-2} & u_{2,4} \\
u_{3,-2} & u_{3,4}
\end{pmatrix}
=
\begin{pmatrix}
1 & -1/4 & 0 & 0 & 0 \\
-3/4 & 1 & -1/4 & 0 & 0 \\
0 & -3/4 & 1 & -1/4 & 0 \\
0 & 0 & -3/4 & 1 & -1/4 \\
0 & 0 & 0 & -3/4 & 1
\end{pmatrix}^{-1}
\cdot
\begin{pmatrix}
3/4 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1/4
\end{pmatrix}.
\]
The entry $u_{0,4}$ should be circled.
(3) For part (2) of the problem, we need to compute the expected reward when 1 is paid every time we are in a transient
state. Therefore, we need to compute
\[
\begin{pmatrix} v_{-1} \\ v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}
=
\begin{pmatrix}
1 & -1/4 & 0 & 0 & 0 \\
-3/4 & 1 & -1/4 & 0 & 0 \\
0 & -3/4 & 1 & -1/4 & 0 \\
0 & 0 & -3/4 & 1 & -1/4 \\
0 & 0 & 0 & -3/4 & 1
\end{pmatrix}^{-1}
\cdot
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}.
\]
The entry we are looking for is $v_0$ (since we start at 0).
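Although the exam only asks for the equations, the circled quantities are easy to evaluate numerically; here is an illustrative NumPy sketch (the variable names and the printed values are our own check, not part of the exam).

    import numpy as np

    Q = np.array([[0,    0.25, 0,    0,    0   ],
                  [0.75, 0,    0.25, 0,    0   ],
                  [0,    0.75, 0,    0.25, 0   ],
                  [0,    0,    0.75, 0,    0.25],
                  [0,    0,    0,    0.75, 0   ]])
    R = np.array([[0.75, 0   ],
                  [0,    0   ],
                  [0,    0   ],
                  [0,    0   ],
                  [0,    0.25]])

    U = np.linalg.solve(np.eye(5) - Q, R)            # rows = states -1, 0, 1, 2, 3; columns = absorption in -2, 4
    v = np.linalg.solve(np.eye(5) - Q, np.ones(5))   # expected number of points before the game ends
    print(U[1, 1])    # u_{0,4} = P[A wins], about 1/91 = 0.011
    print(v[1])       # v_0, about 3.87 points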
Problem 1.7. (10 points) Play the following game: a fair die is thrown repeatedly until the
first time the face 6 shows up (when the game stops), and each time the die is thrown you
win as many dollars as the face of the die shows (including the 6 dollars corresponding to the
last throw). Denote by R the expected reward that you get playing this game. Condition
on the first throw of the die to write an equation for R and compute R.
Solution: Denote by $X_n$, $n = 1, 2, \dots$, the throws of the die and set $\tau = \min\{n \geq 1 : X_n = 6\}$.
We need to compute
\[
R = E\left[\sum_{n=1}^{\tau} X_n\right].
\]
As indicated, we condition on the first throw, $X_1$. We have
\[
E\left[\sum_{n=1}^{\tau} X_n \,\Big|\, X_1 = 6\right] = 6,
\]
and
\[
E\left[\sum_{n=1}^{\tau} X_n \,\Big|\, X_1 = i\right] = i + E\left[\sum_{n=2}^{\tau} X_n \,\Big|\, X_1 = i\right] = i + R, \quad i \neq 6.
\]
Now
\[
R = \sum_{i=1}^{5} \frac{1}{6}(i + R) + \frac{1}{6}\cdot 6,
\]
so
\[
R = \frac{21}{6} + \frac{5}{6}R,
\]
and therefore
\[
R = 21.
\]
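A short simulation in Python (illustrative only; the function name and sample size are ours) confirms the value:

    import random

    def play_once():
        """Throw a fair die until a 6 appears; return the total of all faces thrown."""
        total = 0
        while True:
            face = random.randint(1, 6)
            total += face
            if face == 6:
                return total

    trials = 100_000
    print(sum(play_once() for _ in range(trials)) / trials)   # should be close to 21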
Problem 1.8. (10 points) Each element in a sequence of binary data is either 1 with probability
p or 0 with probability $1 - p$. A maximal subsequence of consecutive values having
identical outcomes is called a run. For instance, if the outcome sequence is 1, 1, 0, 1, 1, 1, 0,
then the first run is of length 2, the second run is of length 1 and the third run is of length 3.
(1) Find the expected length of the first run.
(2) Find the expected length of the second run.
Solution: Denote by L the length of the first run, and by $L'$ the length of the second run.
(1) We condition on the first digit in the sequence:
\[
E[L] = p\,E[L \mid \text{first digit is } 1] + (1 - p)\,E[L \mid \text{first digit is } 0].
\]
Now we compute $E[L \mid \text{first digit is } 1]$. Conditioned on the first digit being 1, L is a
geometric random variable (in the "number of trials" sense) with probability of success $q = 1 - p$, so the conditional
expectation is $1/q$. Conditioned on the first digit being 0, L is a geometric random
variable with probability of success p, so the conditional expectation is $1/p$. Now we
have
\[
E[L] = p\,\frac{1}{q} + q\,\frac{1}{p}.
\]
(2) We use exactly the same idea as in item (1). Conditioned on the first digit being 1, $L'$ is
a geometric random variable with probability of success p (the second run is going to
be a sequence of zeros, so we are asking for the expected number of zeros in a row once
we ALREADY HAVE a zero), so the conditional expectation is $1/p$. Conditioned on the
first digit being 0, $L'$ is a geometric random variable with probability of success q, so
the conditional expectation is $1/q$. Therefore
\[
E[L'] = p\,\frac{1}{p} + q\,\frac{1}{q} = 2.
\]
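A quick simulation sketch in Python (illustrative only; the names and the choice p = 0.3 are ours) reproduces both expectations:

    import random

    def first_two_runs(p):
        """Lengths of the first two runs in an i.i.d. 0/1 sequence with P(digit = 1) = p."""
        first = random.random() < p          # value of the first run (True = 1, False = 0)
        L = 1
        while (random.random() < p) == first:
            L += 1
        # the digit that ended the first run starts the second run
        second = not first
        L2 = 1
        while (random.random() < p) == second:
            L2 += 1
        return L, L2

    p, trials = 0.3, 100_000
    samples = [first_two_runs(p) for _ in range(trials)]
    print(sum(a for a, _ in samples) / trials, p/(1 - p) + (1 - p)/p)  # E[L]
    print(sum(b for _, b in samples) / trials, 2.0)                    # E[L'] = 2 for every p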
Formulas:
Some useful series:
\[
\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}, \ |x| < 1, \qquad \sum_{k=0}^{\infty} \frac{x^k}{k!} = e^x, \ x \in (-\infty, \infty).
\]
The probability mass function of a Poisson random variable N with parameter (intensity) $\lambda$ is
\[
P[N = k] = \frac{e^{-\lambda}\lambda^k}{k!}, \quad k = 0, 1, 2, \dots
\]
For an asymmetric random walk with probabilities p and q, we have
\[
P[X_n = l] =
\begin{cases}
\binom{n}{\frac{n+l}{2}} p^{\frac{n+l}{2}} q^{\frac{n-l}{2}}, & l \in \{-n, -n+2, \dots, n-2, n\}, \\
0, & \text{otherwise}.
\end{cases}
\]
Distribution of a geometric r.v.:
(1) N (the number of FAILURES before the first success) with success probability
p has probability mass function
\[
P[N = k] = q^k p, \quad k = 0, 1, 2, \dots
\]
(2) If we want the number of TRIALS needed for the first success, namely $N' = N + 1$,
then
\[
P[N' = k] = q^{k-1} p, \quad k = 1, 2, \dots
\]
(3) If G is a geometric random variable with parameter p, which means the number
of FAILURES until the first success, where success occurs with probability p,
then
\[
P_G(s) = \frac{p}{1 - qs}.
\]