Solutions to Exercises
06/09-2018
Thomas Mikaelsen
Advanced Probability Theory 1 & 2- Ernst Hansen
Contents
1 Week 1
1.1 Exercise 1.1
1.2 Exercise 1.2
1.3 Exercise 1.3
1.4 Exercise 1.4
1.5 Exercise 1.5
1.6 Exercise 1.6
2 Week 2
2.1 Exercise 2.2
2.2 Exercise 2.4
2.3 Exercise 2.5
2.4 Exercise 2.6
2.5 Exercise 2.7
2.6 Exercise 2.9
2.7 Exercise 2.11
3 Week 3
3.1 Exercise 2.12
3.2 Exercise 2.13
3.3 Exercise 2.14
3.4 Exercise 2.15
3.5 Exercise 2.16
3.6 Exercise 2.17
3.7 Exercise 2.18
4 Week 4
4.1 Exercise 1.21 [3]
4.2 Exercise 1.23
4.3 Exercise 1.24
4.4 Exercise 1.26
4.5 Exercise 1.29
4.6 Exam January 2014
4.6.1 Question 1.1
4.6.2 Question 1.2
4.6.3 Question 1.3
4.6.4 Question 1.4
4.7 Exam January 2013
5 Week 5
5.1 Exercise 2.1 [3]
5.2 Exercise 2.4
5.3 Exercise 2.10
5.4 Example 7.11 [2]
5.5 Exercise 2.14
5.6 Exam Stok2 November '16
5.6.1 Question 1
5.6.2 Question 2
5.6.3 Question 1.3
5.6.4 Question 1.4
5.6.5 Question 1.5
6 Week 6
6.1 Exercise 3.2 [3]
6.2 Exercise 3.3 [3]
6.3 Exercise 3.4
6.4 Exercise 3.7
6.5 Exam January 2014 Question 2
6.5.1 Question 2.1
6.5.2 Question 2.2
6.6 Re-Exam January 2015 Question 3
6.6.1 Question 3.1
6.6.2 Question 3.2
6.6.3 Question 3.3
7 Week 7
7.1 Exercise 3.8 [3]
7.2 Exercise 3.9
7.3 Exercise 3.10
7.4 Exercise 3.11
7.5 Exercise 3.12
7.6 Exercise 3.13
7.7 Exam 2015, Problem 3 - Bernstein's Theorem
7.7.1 Question 3.1
7.7.2 Question 3.2
7.7.3 Question 3.3
7.7.4 Question 3.4
We take it as known that the standard exponential distribution has density $f_Y(y) = e^{-y}$ for $y \ge 0$ and hence has distribution function
\[ F(y) = P(Y \le y) = \int_0^y e^{-t} \, dt = 1 - e^{-y}, \qquad y \ge 0. \]
The integrand $e^{-t}$ is strictly positive, so $F$ is strictly increasing by monotonicity of the integral, and also obviously continuous. Thus $F$ is injective and has a unique quantile function, namely its inverse $F^{-1}$. We find $F^{-1}$ by solving
\[ F(F^{-1}(u)) = u \iff 1 - e^{-F^{-1}(u)} = u \iff F^{-1}(u) = -\log(1 - u). \]
It now follows by [3] that $q(U) = -\log(1 - U) \sim \exp(1)$, which is what we wanted.
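As a numerical sanity check (not part of the formal solution), the inverse-transform step above can be simulated; the function names below are ours, purely illustrative:

```python
import math
import random

def exp_quantile(u):
    """Quantile function F^{-1}(u) = -log(1 - u) of the standard exponential."""
    return -math.log(1.0 - u)

def sample_exponential(n, rng):
    """Inverse-transform sampling: feed Unif[0,1) draws through F^{-1}."""
    return [exp_quantile(rng.random()) for _ in range(n)]

rng = random.Random(0)
xs = sample_exponential(100_000, rng)
mean = sum(xs) / len(xs)          # the exp(1) distribution has mean 1
below_one = sum(x <= 1.0 for x in xs) / len(xs)  # F(1) = 1 - e^{-1}
```

With a large sample, `mean` is close to $1$ and `below_one` close to $1 - e^{-1} \approx 0.632$, as the derivation predicts.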
\[ F(y) = P(Y \le y) = \frac{1}{\pi} \int_{-\infty}^{y} \frac{1}{1 + t^2} \, dt = \frac{1}{\pi}\arctan(y) + \frac{1}{2}. \]
$\arctan$ is strictly increasing and continuous, so $F$ has a unique quantile function. Similarly to Exercise 1.1 we solve
\[ F(F^{-1}(u)) = u \iff \frac{1}{\pi}\arctan(F^{-1}(u)) + \frac{1}{2} = u \iff F^{-1}(u) = \tan\Bigl(\frac{\pi(2u - 1)}{2}\Bigr). \]
By [3], $q(U) = \tan\bigl(\frac{\pi(2U - 1)}{2}\bigr) \sim$ Cauchy, which is what we wanted.
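The same sanity check works here; since the Cauchy distribution has no mean, we check quantiles instead (a sketch, names illustrative):

```python
import math
import random

def cauchy_quantile(u):
    """F^{-1}(u) = tan(pi(2u - 1)/2) for the standard Cauchy distribution."""
    return math.tan(math.pi * (2.0 * u - 1.0) / 2.0)

rng = random.Random(1)
xs = [cauchy_quantile(rng.random()) for _ in range(100_000)]

# The median is 0, and F(1) = arctan(1)/pi + 1/2 = 3/4.
below_zero = sum(x < 0.0 for x in xs) / len(xs)
below_one = sum(x < 1.0 for x in xs) / len(xs)
```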
Since $U \perp\!\!\!\perp Y$, it follows from their joint distribution that
\[ P\Bigl(U \le \frac{f(Y)}{cg(Y)}\Bigr) = \frac{1}{c}. \tag{2} \]
Notice that condition (1) is required to ensure that $f$ and $g$ are not the same, as if $c = 1$ we either have that
\[ f(x) = g(x) \quad \text{for all } x \in \mathbb{R}, \]
which is trivial, or we have
\[ f(x) < g(x) \quad \text{for all } x \in \mathbb{R} \;\Rightarrow\; \int_{\mathbb{R}} f = 1 < \int_{\mathbb{R}} g = 1, \]
which is a contradiction. We can always calculate the probability in question by using
\begin{align*}
P\Bigl(U \le \frac{f(Y)}{cg(Y)}\Bigr) &= \int \mathbb{1}_{\{U \le \frac{f(Y)}{cg(Y)}\}} \, d(P \otimes P) \\
[\text{Def.}] \rightarrow \quad &= \int \mathbb{1}_{(-\infty, \frac{f(Y)}{cg(Y)}]}(U) \, d(P \otimes P) \\
[\text{A.C.V., Thm. 10.8}] \rightarrow \quad &= \int \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, d(U, Y)(P \otimes P) \\
[Y \perp\!\!\!\perp U] \rightarrow \quad &= \int \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, d\bigl(U(P) \otimes Y(P)\bigr) \\
[\text{Tonelli}] \rightarrow \quad &= \int\!\!\int \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, dU(P)(u) \, dY(P)(y) \\
[\text{Def. of integral w.r.t. } U(P)] \rightarrow \quad &= \int\!\!\int \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(U) \, dP \, dY(P)(y) \\
[\text{Def. of } F] \rightarrow \quad &= \int P\Bigl(U \le \frac{f(y)}{cg(y)}\Bigr) \, dY(P)(y) \\
[U \sim \mathrm{Unif}[0,1]] \rightarrow \quad &= \int \frac{f(y)}{cg(y)} \, dY(P)(y) \\
[Y \text{ has density } g \text{ w.r.t. } \lambda] \rightarrow \quad &= \int \frac{f(y)}{cg(y)} g(y) \, \lambda(dy) \\
&= \frac{1}{c} \int f(y) \, \lambda(dy) \\
[f \text{ is a density}] \rightarrow \quad &= \frac{1}{c},
\end{align*}
We cannot simply factor the expression into a product using independence, since $Y$ appears in both sets. We get
\begin{align*}
P\Bigl(Y \le x,\, U \le \frac{f(Y)}{cg(Y)}\Bigr) &= \int \mathbb{1}_{(-\infty,x] \times (-\infty, \frac{f(Y)}{cg(Y)}]}(Y, U) \, d(P \otimes P) \\
&= \int \mathbb{1}_{(-\infty,x]}(Y)\, \mathbb{1}_{(-\infty, \frac{f(Y)}{cg(Y)}]}(U) \, d(P \otimes P) \\
[\text{A.C.V.}] \rightarrow \quad &= \int \mathbb{1}_{(-\infty,x]}(y)\, \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, d(Y, U)(P \otimes P) \\
[Y \perp\!\!\!\perp U] \rightarrow \quad &= \int \mathbb{1}_{(-\infty,x]}(y)\, \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, d\bigl(Y(P) \otimes U(P)\bigr) \\
[\text{Tonelli}] \rightarrow \quad &= \int\!\!\int \mathbb{1}_{(-\infty,x]}(y)\, \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, dU(P)(u) \, dY(P)(y) \\
&= \int \mathbb{1}_{(-\infty,x]}(y) \int \mathbb{1}_{(-\infty, \frac{f(y)}{cg(y)}]}(u) \, dU(P)(u) \, dY(P)(y) \\
&= \int \mathbb{1}_{(-\infty,x]}(y)\, P\Bigl(U \le \frac{f(y)}{cg(y)}\Bigr) \, dY(P)(y) \\
[U \sim \mathrm{Unif}[0,1]] \rightarrow \quad &= \int \mathbb{1}_{(-\infty,x]}(y) \frac{f(y)}{cg(y)} \, dY(P)(y) \\
[Y \text{ has density } g] \rightarrow \quad &= \int \mathbb{1}_{(-\infty,x]}(y) \frac{f(y)}{cg(y)} g(y) \, \lambda(dy) \\
&= \frac{1}{c} \int \mathbb{1}_{(-\infty,x]}(y) f(y) \, \lambda(dy) = \frac{F(x)}{c},
\end{align*}
which is what we wanted.
with the convention that $\tau = \infty$ if $U_n > \frac{f(Y_n)}{cg(Y_n)}$ for all $n$. We wish to show that $P(\tau = \infty) = 0$. For convenience let $A_n = \bigl\{U_n > \frac{f(Y_n)}{cg(Y_n)}\bigr\}$. We wish to use the continuity properties of the measure, but to do that we need to construct a clever sequence, so let
\[ B_n = \bigcap_{k=1}^{n} A_k. \]
We then have
\[ B_1 \supseteq B_2 \supseteq \dots \quad\text{and}\quad \bigcap_{n=1}^{\infty} A_n = \bigcap_{n=1}^{\infty} B_n, \]
which gives us
\begin{align*}
P(\tau = \infty) &= P\Bigl(\bigcap_{n=1}^{\infty} A_n\Bigr) = P\Bigl(\bigcap_{n=1}^{\infty} B_n\Bigr) \\
&= \lim_{n\to\infty} P(B_n) = \lim_{n\to\infty} P\Bigl(\bigcap_{k=1}^{n} A_k\Bigr) \\
[U \perp\!\!\!\perp Y] \rightarrow \quad &= \lim_{n\to\infty} \prod_{k=1}^{n} \Bigl(1 - \frac{1}{c}\Bigr) = \lim_{n\to\infty} \Bigl(1 - \frac{1}{c}\Bigr)^n = 0,
\end{align*}
as we wanted.
where $(\ast)$ uses the fact that it is a (geometric) power series with radius of convergence $1$, and since $c > 1$ we have $|1 - \tfrac{1}{c}| < 1$, so we are within that radius and thus we may switch summation and differentiation, which is what we wanted.
\begin{align*}
P(\tau = n, X \le x) &= P(\tau = n, Y_n \le x) \\
&= P\Bigl(\bigcap_{k=1}^{n-1} A_k \cap A_n^C \cap (Y_n \le x)\Bigr) \\
[\text{independence of the pairs } (U_k, Y_k)] \rightarrow \quad &= P\Bigl(\bigcap_{k=1}^{n-1} A_k\Bigr)\, P\bigl(A_n^C \cap (Y_n \le x)\bigr) \\
[\text{Exc. 1.3(b)-(c)}] \rightarrow \quad &= \Bigl(1 - \frac{1}{c}\Bigr)^{n-1} \frac{F(x)}{c},
\end{align*}
which is what we wanted.
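The whole acceptance-rejection scheme can be checked numerically. The target and the constant below are our illustrative choices, not from the exercise: we take $f$ to be the Beta(2,2) density $6x(1-x)$ on $[0,1]$, $g$ the Unif(0,1) density, and $c = 1.5 = \max f$, so the acceptance probability per round is $1/c = 2/3$ and $E\tau = c$:

```python
import random

def rejection_sample(f, g_sample, g_density, c, rng):
    """Draw one sample with density f by rejection from proposal g; also
    return tau, the number of proposals used (geometric with mean c)."""
    tau = 0
    while True:
        tau += 1
        y = g_sample(rng)
        u = rng.random()
        if u <= f(y) / (c * g_density(y)):
            return y, tau

# Hypothetical target: Beta(2,2), dominated by c*g with g = Unif(0,1), c = 1.5.
f = lambda x: 6.0 * x * (1.0 - x)
rng = random.Random(2)
draws = [rejection_sample(f, lambda r: r.random(), lambda x: 1.0, 1.5, rng)
         for _ in range(50_000)]
xs = [d[0] for d in draws]
taus = [d[1] for d in draws]
mean_x = sum(xs) / len(xs)        # Beta(2,2) has mean 1/2
mean_tau = sum(taus) / len(taus)  # E(tau) = c = 1.5
```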
1.5(b). We now wish to minimize the expression on the right-hand side of (4) over $b$. Since $e^x$ is strictly increasing in $x$, this is done globally by minimizing the exponent $b^2/2 - ab$; the first-order condition easily gives $b = a$, yielding the bound
\[ P(X \ge a) \le e^{-a^2/2}. \]
\begin{align*}
P(X = c) &= F(c) - P(X < c) \\
[\text{Ch. 17 in MT}] \rightarrow \quad &= F(c) - \lim_{x \to c^-} F(x) \\
[F \text{ always right-continuous}] \rightarrow \quad &= \lim_{x \to c^+} F(x) - \lim_{x \to c^-} F(x) \\
[\text{By construction of } c] \rightarrow \quad &= \lim_{x \to c^+} 1 - \lim_{x \to c^-} 0 = 1.
\end{align*}
Let $\omega \in A_n \cap B_n$ and let $\varepsilon > 0$. We then have that $X_n(\omega) \to 0$ surely (not almost surely), so we can choose $N \in \mathbb{N}$ such that $|X_n(\omega) - 0| = |X_n(\omega)| \le \varepsilon$ for all $n \ge N$. We then have
which is what we wanted. This result clearly holds for decreasing sequences as well, since the only thing needed is that monotone sequences with convergent subsequences converge.
\[ \sup_{k \ge n} |X_k - X| \xrightarrow{P} 0 \quad\iff\quad X_n \xrightarrow{a.s.} X. \]
This shows that $(Y_n \to 0) = (X_n \to X)$, so if $Y_n \xrightarrow{a.s.} 0$ then $P(Y_n \to 0) = 1$ and it follows that $P(X_n \to X) = 1$, and conversely; that is, $Y_n \xrightarrow{a.s.} 0 \iff X_n \xrightarrow{a.s.} X$, which is what we wanted.
\begin{align*}
[U_i \sim \mathrm{Unif}[0,1]] \rightarrow \quad &= \prod_{k=1}^{n} (1 - \varepsilon) \\
[1 - \varepsilon \in [0,1)] \rightarrow \quad &= (1 - \varepsilon)^n \to 0,
\end{align*}
and since this holds for all $\omega$ except in a null set, we have that this is true for all $\varepsilon$.
\begin{align*}
\lim_{n\to\infty} \bigl\|\, |S_n| \,\bigr\|_1 = \lim_{n\to\infty} \|S_n\|_1 &= \lim_{n\to\infty} \Bigl\| \sum_{k=1}^{n} X_k \Bigr\|_1 \\
&\le \sum_{n=1}^{\infty} \|X_n\|_1 = \sum_{n=1}^{\infty} E|X_n| = \sum_{n=1}^{\infty} EX_n < \infty.
\end{align*}
Define $\varphi : [0, \infty) \to \mathbb{R}$ by
\[ \varphi(x) = \frac{x}{1 + x}, \qquad x \ge 0. \]
Ad (a): $\varphi$ is clearly continuous, since we don't divide by $0$; also $\varphi(x) \in [0, 1)$, $\varphi(0) = 0/1 = 0$, $\varphi'(x) = \frac{1}{(1+x)^2} > 0$ and $\varphi''(x) = \frac{-2}{(1+x)^3} < 0$. All of this is obvious.
Notice that according to the Fundamental Theorem of Calculus we have $\varphi(x) = \int_0^x \varphi'(t)\,dt$, so we get
\begin{align*}
\varphi(x + y) &= \int_0^{x+y} \varphi'(t) \, dt \\
&= \int_0^{x} \varphi'(t) \, dt + \int_x^{x+y} \varphi'(t) \, dt \\
&= \varphi(x) + \int_x^{x+y} \varphi'(t) \, dt \\
&= \varphi(x) + \int_0^{y} \varphi'(t + x) \, dt \\
&\le \varphi(x) + \int_0^{y} \varphi'(t) \, dt \\
&= \varphi(x) + \varphi(y) - \varphi(0) = \varphi(x) + \varphi(y),
\end{align*}
where the inequality uses that $\varphi'$ is decreasing (because $\varphi'' < 0$) together with monotonicity of the integral.
Ad (c): We wish to show that if we identify random variables that are almost surely equal, then
\[ d(X, Y) = E\varphi(|X - Y|) \]
defines a metric on the space of real random variables defined on $(\Omega, \mathcal{F}, P)$. There are four things to show.

where the first inequality follows from the triangle inequality for $|\cdot|$ together with $E$ and $\varphi$ both being monotone, and the second inequality follows from $E$ and $\varphi$ being monotone.
\[ P(|X - Y| \ge \varepsilon) \le \frac{d(X, Y)}{\varphi(\varepsilon)} \]
Let $\varepsilon, \eta > 0$ and assume that there is a sequence $X_n$ of random variables such that $X_n \to X$ with respect to the metric $d$. Since $X_n$ converges we can choose $N \in \mathbb{N}$ such that $d(X_n, X) \le \eta \cdot \varphi(\varepsilon)$ for all $n \ge N$. It now follows that
\[ P(|X_n - X| \ge \varepsilon) \le \frac{d(X_n, X)}{\varphi(\varepsilon)} \le \frac{\eta \cdot \varphi(\varepsilon)}{\varphi(\varepsilon)} = \eta \qquad \text{for all } n \ge N, \]
which is the definition of $X_n \xrightarrow{P} X$.
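The metric $d$ is easy to experiment with numerically. A small sketch (our own illustrative setup): for a fixed sample of $X$ and the deterministic perturbations $X_n = X + 1/n$, the distance $d(X_n, X)$ equals $\varphi(1/n)$ exactly and shrinks toward $0$:

```python
import random

def phi(x):
    """phi(x) = x / (1 + x), the bounded concave transform from the exercise."""
    return x / (1.0 + x)

def d(xs, ys):
    """d(X, Y) = E phi(|X - Y|), estimated from paired samples."""
    return sum(phi(abs(x - y)) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(3)
xs = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
# X_n = X + 1/n converges to X (even surely), so d(X_n, X) = phi(1/n) -> 0.
dists = [d([x + 1.0 / n for x in xs], xs) for n in (1, 10, 100)]
```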
Consider $d(X_n, X) = E\varphi(|X_n - X|)$. Since $\varphi \le 1$ always, we have
\[ d(X_n, X) \le \varphi(\varepsilon) + P(|X_n - X| \ge \varepsilon), \]
and thus
\[ \limsup_{n\to\infty} d(X_n, X) \le \varphi(\varepsilon) + \limsup_{n\to\infty} P(|X_n - X| \ge \varepsilon) = \varphi(\varepsilon). \]
Since $\varphi$ is continuous and strictly increasing we have $\varphi(\varepsilon) \to 0$ for $\varepsilon \to 0$, and thus $d(X_n, X) \to 0$.

Alternatively, and somewhat easier, instead of using $\limsup$ we can let $\varepsilon > 0$ and set $\eta = \varepsilon - \varphi(\varepsilon)$. Since $X_n \xrightarrow{P} X$ we can choose $N \in \mathbb{N}$ such that $P(|X_n - X| \ge \varepsilon) \le \eta$ for all $n \ge N$; that is, $d(X_n, X) \le \varphi(\varepsilon) + \eta = \varepsilon$ for all $n \ge N$, and thus $X_n \xrightarrow{d} X$, which is what we wanted.
We wish to show that $X_n \xrightarrow{a.s.} X$. We want to use Theorem 2.25, so let $\varepsilon > 0$ and notice that since $x \mapsto x^p$ for $p > 0$ is an increasing function and we apply it to non-negative terms, we get
\begin{align*}
\sum_{n=1}^{\infty} P(|X_n - X| \ge \varepsilon) &= \sum_{n=1}^{\infty} P(|X_n - X|^p \ge \varepsilon^p) \\
[\text{Markov}] \rightarrow \quad &\le \sum_{n=1}^{\infty} \frac{E|X_n - X|^p}{\varepsilon^p} = \frac{1}{\varepsilon^p} \sum_{n=1}^{\infty} E|X_n - X|^p < \infty,
\end{align*}
and now Theorem 2.25 implies that $X_n \xrightarrow{a.s.} X$, which is what we wanted.
And by definition
\[ (X_n \le c \text{ eventually}) = \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} (X_k \le c), \]
because from $X_N$ onward, all members of the sequence are dominated by $c$. Furthermore, either $c$ is the largest, in which case the entire sequence is dominated by $c$, or one of the first $N$ elements of the sequence is the largest, call it $X_j(\omega)$, in which case $\sup_n X_n(\omega) = X_j(\omega)$. Either way, the supremum is finite. Thus we get
\[ P(\sup_n X_n < \infty) = 1, \]
\[ P(|X_n| \le Y) = 1 \qquad \forall n \in \mathbb{N}. \]
Proof. This follows from $\mathbb{1}_{(-\infty, x]}$ being continuous on $(-\infty, x)$ and $(x, \infty)$: if $a \in (-\infty, x)$ and $a_n \to a$, then $\mathbb{1}_{(-\infty, x]}(a_n) \to \mathbb{1}_{(-\infty, x]}(a)$, and similarly if $a \in (x, \infty)$.

Also notice that $\mathbb{1}_{(Y(\omega) \le x)} = \mathbb{1}_{(-\infty, x]}(Y(\omega))$. Now let $\omega \in (X \ne x) \cap (X_n \to X)$. By the Lemma we have $\mathbb{1}_{(X_n(\omega) \le x)} \to \mathbb{1}_{(X(\omega) \le x)}$, and therefore $(X \ne x) \cap (X_n \to X) \subseteq (\mathbb{1}_{(X_n \le x)} \to \mathbb{1}_{(X \le x)})$. Since $(X \ne x)$ and $(X_n \to X)$ both have full probability, we have, by monotonicity of measures, $P(\mathbb{1}_{(X_n \le x)} \to \mathbb{1}_{(X \le x)}) = 1$, i.e. $\mathbb{1}_{(X_n \le x)} \xrightarrow{a.s.} \mathbb{1}_{(X \le x)}$.
\[ X_n \xrightarrow{P} X \;\Rightarrow\; F_n(x) \to F(x). \]
Since probability measures are finite, they are in particular $\sigma$-finite, so convergence in $L^2$ implies convergence in $L^1$. Thus $EX$ and $EX^2$ exist and are finite. We therefore have $EX = \lim_{n\to\infty} \xi_n = \xi$ and $\lim_{n\to\infty} \sigma_n^2 = \lim_{n\to\infty} V X_n = \lim_{n\to\infty} \bigl(EX_n^2 - (EX_n)^2\bigr) = EX^2 - (EX)^2 = VX = \sigma^2$ by Theorem 2.20. Now consider
\begin{align*}
F_n(x) &= \frac{1}{\sqrt{2\pi\sigma_n^2}} \int_{-\infty}^{x} e^{-\frac{(t - \xi_n)^2}{2\sigma_n^2}} \, dt \\
[\text{Substitution}] \rightarrow \quad &= \int_{-\infty}^{\frac{x - \xi_n}{\sigma_n}} \frac{1}{\sqrt{2\pi}} e^{-s^2/2} \, ds = \Phi\Bigl(\frac{x - \xi_n}{\sigma_n}\Bigr).
\end{align*}
Since $\Phi$ is continuous, we have that $F_n(x) = \Phi\bigl(\frac{x - \xi_n}{\sigma_n}\bigr) \to \Phi\bigl(\frac{x - \xi}{\sigma}\bigr)$ for $\sigma \ne 0$, that is, the limit of the distribution functions is itself Gaussian. We now just need to establish that this Gaussian is actually the distribution function $F$ of $X$. But this follows from Exercise 2.15(b), since $X_n \xrightarrow{L^2} X$ implies $X_n \xrightarrow{P} X$, and so by Exercise 2.15(b) $F_n(x) \to F(x)$ for all $x \in \mathbb{R}$ with $P(X = x) = 0$. So by uniqueness of limits we have that $F_n(x) \to F(x) = \Phi\bigl(\frac{x - \xi}{\sigma}\bigr)$. Finally, if $\sigma = 0$, then $VX = \sigma^2 = 0$ and so $X$ has a degenerate distribution, which is what we wanted.
\[ c_n X_n \xrightarrow{a.s.} 0. \]
\[ P\Bigl(\bigcap_{k \in \mathbb{N}} (|X_n| > k)\Bigr) = P(|X_n| = \infty) = 0, \]
and because $(|X_n| > 1) \supseteq (|X_n| > 2) \supseteq \dots$ it follows from downward continuity of measures that
\[ \lim_{k\to\infty} P(|X_n| > k) = P\Bigl(\bigcap_{k \in \mathbb{N}} (|X_n| > k)\Bigr) = 0, \]
so for each $n$ we may choose $K_n$ with $P(|X_n| > K_n) \le 2^{-n}$. Let $\varepsilon > 0$, define $c_n = \frac{1}{2^n K_n}$, and consider
\begin{align*}
\sum_{n=1}^{\infty} P(|c_n X_n - 0| > \varepsilon) &= \sum_{n=1}^{\infty} P\Bigl(|X_n| > \frac{\varepsilon}{c_n}\Bigr) = \sum_{n=1}^{\infty} P(|X_n| > \varepsilon 2^n K_n) \\
&= \sum_{n=1}^{N} P(|X_n| > \varepsilon 2^n K_n) + \sum_{n=N+1}^{\infty} P(|X_n| > \varepsilon 2^n K_n) \\
&= S + \sum_{n=N+1}^{\infty} P(|X_n| > \varepsilon 2^n K_n) \\
&\le S + \sum_{n=N+1}^{\infty} P(|X_n| > K_n) \le S + \sum_{n=N+1}^{\infty} 2^{-n} < \infty,
\end{align*}
where $N$ is chosen so large that $\varepsilon 2^n \ge 1$ for all $n > N$. It now follows from the corollary to Borel-Cantelli that $c_n X_n \xrightarrow{a.s.} 0$, which is what we wanted.
and that $X_n \xrightarrow{P} X$. By Lemma 2.27 we have $|X_n|^p \xrightarrow{P} |X|^p$, and so by Theorem 2.26 there exists a subsequence such that $|X_{n_k}|^p \xrightarrow{a.s.} |X|^p$, and so the limit exists. We thus get
\begin{align*}
E|X|^p &= E \liminf_{k\to\infty} |X_{n_k}|^p \\
[\text{Fatou}] \rightarrow \quad &\le \liminf_{k\to\infty} E|X_{n_k}|^p \le \sup_{k} E|X_{n_k}|^p \le \sup_{n} E|X_n|^p,
\end{align*}
where the final inequality follows from the fact that we take the supremum over a larger set of numbers, which can only make the supremum larger. Thus $E|X|^p < \infty$, which is what we wanted.
2.18(b). We wish to show that for any $r \in [1, p)$ and any $\varepsilon > 0$ it holds that
\[ E\bigl(|X_n - X|^r \mathbb{1}_{(|X_n - X| > \varepsilon)}\bigr) \le 2^{r+1} \Bigl(\sup_n E|X_n|^p\Bigr)^{r/p} P(|X_n - X| > \varepsilon)^{(p-r)/p}. \]
We have, by Hölder's inequality with exponents $p/r$ and $p/(p-r)$,
\[ E\bigl(|X_n - X|^r \mathbb{1}_{(|X_n - X| > \varepsilon)}\bigr) \le \bigl(E|X_n - X|^p\bigr)^{r/p} P(|X_n - X| > \varepsilon)^{(p-r)/p}. \]
We now prove the Claim: $\bigl(E|X_n - X|^p\bigr)^{r/p} \le 2^{r+1} \bigl(\sup_n E|X_n|^p\bigr)^{r/p}$, after which we are done. We have by the triangle inequality that
\[ |X_n - X|^p \le (|X_n| + |X|)^p \le (2 \max\{|X_n|, |X|\})^p = 2^p \max\{|X_n|^p, |X|^p\}. \]
We thereby get
\begin{align*}
E|X_n - X|^p &\le 2^p E \max\{|X_n|^p, |X|^p\} \\
&\le 2^p E(|X_n|^p + |X|^p) \\
&\le 2^{p+1} \sup_n E|X_n|^p,
\end{align*}
where the final inequality follows from $E|X_n|^p \le \sup_n E|X_n|^p$, obviously, as well as $E|X|^p \le \sup_n E|X_n|^p$ by Exercise 2.18(a). Raising to the power $r/p$ gives the Claim, since $2^{(p+1)r/p} \le 2^{r+1}$, and so we are done.
2.18(c). We wish to show that $X_n \xrightarrow{L^r} X$ for any $r \in [1, p)$. Let $\varepsilon > 0$; then
\begin{align*}
E|X_n - X|^r &= E\bigl(|X_n - X|^r \mathbb{1}_{(|X_n - X| \le \varepsilon)}\bigr) + E\bigl(|X_n - X|^r \mathbb{1}_{(|X_n - X| > \varepsilon)}\bigr) \\
&\le \varepsilon^r + E\bigl(|X_n - X|^r \mathbb{1}_{(|X_n - X| > \varepsilon)}\bigr) \\
[\text{Exercise 2.18(b)}] \rightarrow \quad &\le \varepsilon^r + 2^{r+1} \Bigl(\sup_n E|X_n|^p\Bigr)^{r/p} P(|X_n - X| > \varepsilon)^{(p-r)/p} \\
&\to \varepsilon^r + 0 = \varepsilon^r,
\end{align*}
because the second term is a constant $2^{r+1} (\sup_n E|X_n|^p)^{r/p}$ times $P(|X_n - X| > \varepsilon)^{(p-r)/p}$, which goes to $0$ because $X_n \xrightarrow{P} X$ by assumption. Since $\varepsilon > 0$ was arbitrary, $E|X_n - X|^r \to 0$, which is the definition of $X_n \xrightarrow{L^r} X$, and what we wanted.
4 Week 4
because we are taking the union over fewer and fewer sets. Thus it doesn't matter from which point we start intersecting, i.e. for any $N \in \mathbb{N}$ we have
\[ \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} (X_k \in B) = \bigcap_{n=N}^{\infty} \bigcup_{k=n}^{\infty} (X_k \in B). \tag{1} \]
and we have
\[ \sum_{n=1}^{\infty} P(X_n = n) = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} < \infty \]
by assumption, so $\frac{X_n}{n} \xrightarrow{a.s.} 0$, i.e. $P(\frac{X_n}{n} \to 0) = 1$. Consider
\[ \Bigl(\frac{X_n}{n} \to 0\Bigr) = \bigcap_{m=1}^{\infty} \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} \Bigl(\frac{X_k}{k} < \frac{1}{m}\Bigr) \subseteq \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} \Bigl(\frac{X_k}{k} < 1\Bigr) = \Bigl(\frac{X_k}{k} < 1 \text{ eventually}\Bigr), \]
and thus by monotonicity of measures $1 = P\bigl(\frac{X_k}{k} < 1 \text{ eventually}\bigr) = P(X_k < k \text{ eventually})$. Taking complements we get $P(X_k \ge k \text{ i.o.}) = 0$, and since we have independence, the second Borel-Cantelli lemma implies
\[ \infty > \sum_{n=1}^{\infty} P(X_n \ge n) \stackrel{\text{[iid]}}{=} \sum_{n=1}^{\infty} P(X_1 \ge n). \tag{1} \]
According to the corollary to Lemma 1.8 of [2], (1) holds if and only if $E|X_1| < \infty$. Finally we need to establish that $EX_1 = c$. Since we have independence and have just established $E|X_1| < \infty$, it follows from the Strong Law of Large Numbers that $\frac{1}{n} \sum_{k=1}^{n} X_k \xrightarrow{a.s.} EX_1$, and since we have assumed that $\frac{1}{n} \sum_{k=1}^{n} X_k \xrightarrow{a.s.} c$ as well, it follows by uniqueness of almost sure limits that $EX_1 = c$, which is what we wanted.
It now follows from Borel-Cantelli that $X_n \xrightarrow{a.s.} 0$, which is what we wanted.
Since the random variables are assumed to be independent, it follows from Khintchine-Kolmogorov that $S_n$ converges almost surely and in $L^2$ to some variable $S = \sum_{n=1}^{\infty} X_n$, which is what we wanted.
and we have
\[ ES_n^2 = E \sum_{k=1}^{n} X_k^2 = \sum_{k=1}^{n} EX_k^2 \to 8 = ES^2. \]
Notice that $\frac{1}{\log(n+1)} \to 0$ for $n \to \infty$, so it follows from Lemma 5.1 in [2] that $\frac{1}{n} \sum_{k=1}^{n} \frac{1}{\log(k+1)} \to 0$ for $n \to \infty$. Thus $\frac{1}{n} S_n \xrightarrow{L^1} 0$, and it now follows from Lemma 2.21 in [2] that $\frac{1}{n} S_n \xrightarrow{P} 0$, which is what we wanted.
\[ V\Bigl(\frac{1}{n} \sum_{k=1}^{n} X_k\Bigr) \stackrel{\text{[indep.]}}{=} \frac{1}{n^2} \sum_{k=1}^{n} V X_k \to 0 \quad \text{for } n \to \infty, \]
where the limit is shown in Problem 2.3. We now use Chebyshev's Inequality to obtain
\begin{align*}
P\Bigl(\Bigl|\frac{1}{n} \sum_{k=1}^{n} X_k - E\frac{1}{n} \sum_{k=1}^{n} X_k\Bigr| > \varepsilon\Bigr) &= P\Bigl(\Bigl|\frac{1}{n} \sum_{k=1}^{n} X_k - \frac{1}{n} \sum_{k=1}^{n} EX_k\Bigr| > \varepsilon\Bigr) \\
&= P\Bigl(\Bigl|\frac{1}{n} \sum_{k=1}^{n} X_k - 0\Bigr| > \varepsilon\Bigr) = P\Bigl(\Bigl|\frac{1}{n} \sum_{k=1}^{n} X_k\Bigr| > \varepsilon\Bigr) \\
[\text{Chebyshev}] \rightarrow \quad &\le \frac{V\bigl(\frac{1}{n} \sum_{k=1}^{n} X_k\bigr)}{\varepsilon^2} \to 0,
\end{align*}
so $P(|\frac{1}{n} S_n| > \varepsilon) \to 0$, which is equivalent to $\frac{1}{n} S_n \xrightarrow{P} 0$, which is what we wanted.
and then use (1) to show that $\frac{1}{n} \sum_{k=1}^{n} X_k$ does not converge almost surely to $0$. Consider
\begin{align*}
\frac{|X_{n+1}|}{n+1} &= \Bigl|\frac{1}{n+1} \sum_{k=1}^{n+1} X_k - \frac{1}{n+1} \sum_{k=1}^{n} X_k\Bigr| \\
[\text{Triangle inequality}] \rightarrow \quad &\le \Bigl|\frac{1}{n+1} \sum_{k=1}^{n+1} X_k\Bigr| + \Bigl|\frac{1}{n+1} \sum_{k=1}^{n} X_k\Bigr| \\
&\le \Bigl|\frac{1}{n+1} \sum_{k=1}^{n+1} X_k\Bigr| + \Bigl|\frac{1}{n} \sum_{k=1}^{n} X_k\Bigr|; \tag{2}
\end{align*}
re-arranging, we get the desired inequality. We have shown in Problem 2.5 that $P(|X_n| \ge n+1 \text{ i.o.}) = 1$, so let $\omega \in (|X_n| \ge n+1 \text{ i.o.})$ and assume for contradiction that $\frac{1}{n} \sum_{k=1}^{n} X_k(\omega) \to 0$. Then there exists $N \in \mathbb{N}$ such that for all $n \ge N$ we have $\bigl|\frac{1}{n} \sum_{k=1}^{n} X_k(\omega)\bigr| < \frac{1}{2}$. By (2) there exists an $\tilde{N} \ge N$ such that $|X_{\tilde{N}}(\omega)| \ge \tilde{N}$. Since $\tilde{N} \ge N$ we have
\begin{align*}
1 \le \frac{|X_{\tilde{N}}(\omega)|}{\tilde{N}} &\le \Bigl|\frac{1}{\tilde{N}+1} \sum_{k=1}^{\tilde{N}+1} X_k(\omega)\Bigr| + \Bigl|\frac{1}{\tilde{N}} \sum_{k=1}^{\tilde{N}} X_k(\omega)\Bigr| \\
&< \frac{1}{2} + \frac{1}{2} = 1,
\end{align*}
\[ m\bigl(T^{-1}([0, \alpha))\bigr) = m([0, \alpha)) \]
On the branch where $T(x) = 2x$:
\[ T(x) = 2x < \alpha \iff x < \frac{\alpha}{2} \;\Rightarrow\; T^{-1}([0, \alpha)) \cap \bigl[0, \tfrac12\bigr) = \Bigl[0, \frac{\alpha}{2}\Bigr). \tag{2} \]
On the branch where $T(x) = 2x - 1$:
\[ T(x) = 2x - 1 < \alpha \iff x < \frac{\alpha}{2} + \frac{1}{2} \;\Rightarrow\; T^{-1}([0, \alpha)) \cap \bigl[\tfrac12, 1\bigr) = \Bigl[\frac{1}{2}, \frac{\alpha}{2} + \frac{1}{2}\Bigr). \tag{3} \]
\[ T^{-1}([0, \alpha)) = \Bigl[0, \frac{\alpha}{2}\Bigr) \cup \Bigl[\frac{1}{2}, \frac{\alpha}{2} + \frac{1}{2}\Bigr) \]
\begin{align*}
\Rightarrow\; m\bigl(T^{-1}([0, \alpha))\bigr) &= m\Bigl(\Bigl[0, \frac{\alpha}{2}\Bigr) \cup \Bigl[\frac{1}{2}, \frac{\alpha}{2} + \frac{1}{2}\Bigr)\Bigr) \\
&= m\Bigl(\Bigl[0, \frac{\alpha}{2}\Bigr)\Bigr) + m\Bigl(\Bigl[\frac{1}{2}, \frac{\alpha}{2} + \frac{1}{2}\Bigr)\Bigr) \\
&= \frac{\alpha}{2} + \frac{\alpha}{2} = \alpha = m([0, \alpha)),
\end{align*}
thus $T$ is measure-preserving, which is what we wanted.
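Measure preservation of the doubling map is easy to see empirically: for uniform points $x$, the fraction with $T(x) < \alpha$ estimates $m(T^{-1}([0,\alpha)))$, which should equal $\alpha$. A minimal sketch (our own setup):

```python
import random

def T(x):
    """The doubling map T(x) = 2x mod 1 on [0, 1)."""
    return (2.0 * x) % 1.0

rng = random.Random(4)
alpha = 0.3
xs = [rng.random() for _ in range(200_000)]
# Empirical m(T^{-1}([0, alpha))): fraction of uniform x with T(x) < alpha.
frac = sum(T(x) < alpha for x in xs) / len(xs)
```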
\begin{align*}
S_\lambda(x) &= x + \lambda - [x + \lambda] \\
&= x + (\lambda - [\lambda]) - \bigl([x + \lambda] - [\lambda]\bigr) = S_{\lambda - [\lambda]}(x).
\end{align*}
And $\lambda - [\lambda] \in [0, 1)$, so it is enough to consider $\lambda \in [0, 1)$. Plotting $S(x)$ we notice that
\[ S(x) = \begin{cases} x + \lambda, & x \in [0, 1 - \lambda) \\ x + \lambda - 1, & x \in [1 - \lambda, 1) \end{cases} \]
If $\alpha < \lambda$, the first branch contributes nothing, since $x + \lambda < \alpha$ would require $x < \alpha - \lambda < 0$, which never happens because $x \in [0, 1)$; if $\alpha \ge \lambda$ both branches contribute, and thus
\[ S^{-1}([0, \alpha)) = \begin{cases} [1 - \lambda,\, 1 - \lambda + \alpha), & \alpha < \lambda \\ [0,\, \alpha - \lambda) \cup [1 - \lambda,\, 1), & \alpha \ge \lambda. \end{cases} \]
Consider the probability space $([0, 1), \mathcal{B}_{[0,1)}, P)$ where $P = m$. Define $T : [0, 1) \to [0, 1)$ by $T(x) = x + \lambda - [x + \lambda]$. $T$ is then measure-preserving by Exercise 2.1. We wish to show that if $\lambda \in \mathbb{Q}$, i.e. $\lambda = \frac{n}{m}$, then $T$ is not ergodic.

Claim: $T^k(x) = x + \frac{kn}{m} - l$ for some $l \in \mathbb{Z}$, where $n, m \in \mathbb{N}$. This follows by induction.
\[ F_\alpha = \bigcup_{i=0}^{m-1} T^{-i}([0, \alpha)) \]
\begin{align*}
T^{-1}(F_\alpha) &= T^{-1}\Bigl(\bigcup_{i=0}^{m-1} T^{-i}([0, \alpha))\Bigr) = \bigcup_{i=0}^{m-1} T^{-1}\bigl(T^{-i}([0, \alpha))\bigr) \\
&= \bigcup_{i=1}^{m} T^{-i}([0, \alpha)) = \bigcup_{i=1}^{m-1} T^{-i}([0, \alpha)) \cup T^{-m}([0, \alpha)) \\
&= T^{0}([0, \alpha)) \cup \bigcup_{i=1}^{m-1} T^{-i}([0, \alpha)) = \bigcup_{i=0}^{m-1} T^{-i}([0, \alpha)) = F_\alpha,
\end{align*}
where we used that $T^{-m}([0, \alpha)) = T^{0}([0, \alpha)) = [0, \alpha)$, since $T^m = \mathrm{id}$ by the Claim,
whereby $F_\alpha \in \mathcal{I}_T$. Now choose $\alpha$ such that $0 < \alpha < \frac{1}{m}$. We then have
\[ 0 < \alpha = m([0, \alpha)) = m\bigl(T^{0}([0, \alpha))\bigr) \le m\Bigl(\bigcup_{i=0}^{m-1} T^{-i}([0, \alpha))\Bigr) = m(F_\alpha). \]
Furthermore we have
\begin{align*}
m(F_\alpha) = m\Bigl(\bigcup_{i=0}^{m-1} T^{-i}([0, \alpha))\Bigr) &\le \sum_{i=0}^{m-1} m\bigl(T^{-i}([0, \alpha))\bigr) \\
[T \text{ is measure-preserving}] \rightarrow \quad &= \sum_{i=0}^{m-1} m([0, \alpha)) = m \cdot \alpha < 1,
\end{align*}
where the final inequality follows from $\alpha < \frac{1}{m}$ by construction. Thus $F_\alpha \in \mathcal{I}_T$ and $m(F_\alpha) \in (0, 1)$, so $T$ is not ergodic, which is what we wanted.
Since $\sigma(\{[0, \alpha)\}) = \mathcal{B}_{[0,1)}$ and the generator is $\cap$-stable, it follows from Lemma 2.2.6 that it is enough to show that
\[ \lim_{n\to\infty} m\bigl([0, \alpha) \cap T^{-n}([0, \beta))\bigr) = m([0, \alpha))\, m([0, \beta)) = \alpha\beta \]
for $\alpha, \beta \in [0, 1)$. It is a good idea to plot $T^n$ for a couple of values of $n$. We also notice that $T^n(x) = 2^n x - [2^n x]$ by induction; we get this idea by looking at the plot, because the number of line segments doubles each time, each segment having the same slope. We thus get that
\[ T^n(x) = \begin{cases} 2^n x, & x \in [0, \frac{1}{2^n}) \\ 2^n x - 1, & x \in [\frac{1}{2^n}, \frac{2}{2^n}) \\ \quad\vdots \\ 2^n x - (2^n - 1), & x \in [\frac{2^n - 1}{2^n}, 1). \end{cases} \]
Preparing to study $T^{-n}([0, \alpha))$ we notice that
\begin{align*}
2^n x < \alpha &\iff x < \frac{\alpha}{2^n} \\
2^n x - 1 < \alpha &\iff x < \frac{\alpha}{2^n} + \frac{1}{2^n} \\
&\;\;\vdots \\
2^n x - (2^n - 1) < \alpha &\iff x < \frac{\alpha}{2^n} + \frac{2^n - 1}{2^n},
\end{align*}
so that
\[ T^{-n}([0, \alpha)) = \Bigl[0, \frac{\alpha}{2^n}\Bigr) \cup \Bigl[\frac{1}{2^n}, \frac{1 + \alpha}{2^n}\Bigr) \cup \dots \cup \Bigl[\frac{2^n - 1}{2^n}, \frac{2^n - 1 + \alpha}{2^n}\Bigr) = \bigcup_{i=0}^{2^n - 1} \Bigl[\frac{i}{2^n}, \frac{i + \alpha}{2^n}\Bigr). \]
Thus
\[ T^{-n}([0, \alpha)) \cap [0, \beta) = \bigcup_{i=0}^{p-1} \Bigl[\frac{i}{2^n}, \frac{i + \alpha}{2^n}\Bigr) \cup \Bigl[\frac{p}{2^n}, \min\Bigl\{\frac{p + \alpha}{2^n}, \beta\Bigr\}\Bigr), \]
where $p \in \mathbb{N}$ is the largest number satisfying $\beta \in \bigl[\frac{p}{2^n}, \frac{p+1}{2^n}\bigr)$. Observe that
\[ \beta \in \Bigl[\frac{p}{2^n}, \frac{p+1}{2^n}\Bigr) \iff p \le \beta 2^n < p + 1, \]
i.e. $p = [2^n \beta]$. Notice that
\[ \bigcup_{i=0}^{[2^n \beta] - 1} \Bigl[\frac{i}{2^n}, \frac{i + \alpha}{2^n}\Bigr) \subseteq T^{-n}([0, \alpha)) \cap [0, \beta) \subseteq \bigcup_{i=0}^{[2^n \beta]} \Bigl[\frac{i}{2^n}, \frac{i + \alpha}{2^n}\Bigr), \]
where we use the property $[x] \le x \le [x] + 1$ to get the inclusions. Similarly
\[ m\Bigl(\bigcup_{i=0}^{[2^n \beta]} \Bigl[\frac{i}{2^n}, \frac{i + \alpha}{2^n}\Bigr)\Bigr) = \sum_{i=0}^{[2^n \beta]} \frac{\alpha}{2^n} = \bigl([2^n \beta] + 1\bigr) \frac{\alpha}{2^n} \le (2^n \beta + 1) \frac{\alpha}{2^n} = \alpha\beta + \frac{\alpha}{2^n} \to \alpha\beta. \]
Thus $\lim_{n\to\infty} m(T^{-n}([0, \alpha)) \cap [0, \beta)) = \alpha\beta$, which is what we wanted.
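The mixing limit just derived can also be observed by simulation: for moderate $n$, the empirical measure of $[0, \beta) \cap T^{-n}([0, \alpha))$ under uniform sampling should already be close to $\alpha\beta$. A sketch with our own choice of parameters:

```python
import random

def Tn(x, n):
    """n-fold doubling map, T^n(x) = 2^n x mod 1."""
    return (x * 2 ** n) % 1.0

rng = random.Random(5)
alpha, beta, n = 0.4, 0.6, 12
xs = [rng.random() for _ in range(200_000)]
# Estimate m([0, beta) ∩ T^{-n}([0, alpha))); mixing predicts alpha * beta.
frac = sum((x < beta) and (Tn(x, n) < alpha) for x in xs) / len(xs)
```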
$Y_k = \varphi(X_k, X_{k+1}, \dots)$, where $S$ is the shift operator, then $(Y_n)$ preserves the properties (1)-(4) of $(X_n)$. Notice that iid sequences are in particular measure-preserving, mixing, ergodic and stationary. So if we have an example where $(X_n)$ is an iid sequence and $Y_k = X_k X_{k+1}$, then we can write $Y_k = \varphi(S^k((X_n)_{n \ge 1}))$, where $\varphi : \mathbb{R}^{\infty} \to \mathbb{R}$ is such that $\varphi\bigl((X_n)_{n \ge 1}\bigr) = X_1 X_2$. It now follows from Example 7.11 that $(Y_k)$ satisfies (1)-(4).
Assume that $EX_n^2 < \infty$ and that $(X_n)$ is stationary. We have already assumed second moments, so to show weak stationarity, we need to show the final two conditions. By Lemma 2.3.9 [3] it follows that for all $k, n \ge 1$ we have $(X_1, \dots, X_n) \sim (X_{1+k}, \dots, X_{n+k})$, and for $k \le n$ we in particular have $(X_k, \dots, X_n) \sim (X_1, \dots, X_{n-(k-1)})$. Since the joint distributions determine the marginal distributions, we can pick out any subset of these as long as they sit in the same positions, e.g. $(X_k, X_n) \sim (X_1, X_{n-(k-1)})$. Now define

which exists because we have assumed second moments. Since $(X_k, X_n) \sim (X_1, X_{n-(k-1)})$, it follows that
5.6.1 Question 1
We wish to show that Zn has finite second moment, and compute EZn and V Zn .
5.6.2 Question 2
We wish to show that $\mathrm{Cov}(Z_n, Z_m) = 0$ for $n \ne m$, establish that $(Z_n)_{n \in \mathbb{Z}}$ is weakly stationary, and find the autocovariance function.
We want a $\gamma(n)$ equal to $\mathrm{Cov}(Z_n, Z_k)$, and we have just shown that
\[ \mathrm{Cov}(Z_n, Z_k) = \begin{cases} 0, & n \ne k \\ V Z_1 = 3, & n = k \end{cases} \]
because $\mathrm{Cov}(Z_n, Z_n) = V Z_n = V Z_1 = 3$. So define
\[ \gamma(n) = \begin{cases} 0, & n \ne 0 \\ 3, & n = 0; \end{cases} \]
we then have $\gamma(|n - k|) = \mathrm{Cov}(Z_n, Z_k)$, which according to Exercise 2.14 shows that $(Z_n)$ is weakly stationary, which is what we wanted.
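A quick simulation confirms these values. We assume here (consistent with the moment computations in this question, though the exam text itself is not reproduced) that $Z_n = X_n X_{n-1}^2$ with $X_n$ iid $N(0,1)$, so $EZ_n = 0$, $VZ_n = EX_n^2 \cdot EX_{n-1}^4 = 1 \cdot 3 = 3$, and $\mathrm{Cov}(Z_n, Z_m) = 0$ for $n \ne m$:

```python
import random

rng = random.Random(6)
N = 200_000
X = [rng.gauss(0.0, 1.0) for _ in range(N + 1)]
# Assumed form of the process: Z_n = X_n * X_{n-1}^2 (hypothetical, see lead-in).
Z = [X[n] * X[n - 1] ** 2 for n in range(1, N + 1)]

var = sum(z * z for z in Z) / len(Z)  # E Z_n = 0, so this estimates V Z_1 = 3
lag1 = sum(Z[i] * Z[i + 1] for i in range(len(Z) - 1)) / (len(Z) - 1)
```

Note that `lag1` estimating $0$ only shows vanishing correlation at lag 1; as the next computation in the notes shows, $(Z_n)$ is still not independent.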
For $k \ge 2$ we get
\begin{align*}
\mathrm{Cov}(|Z_n|, |Z_{n+k}|) &= E|Z_n Z_{n+k}| - E|Z_n| E|Z_{n+k}| \\
&= E|X_n X_{n-1}^2 X_{n+k} X_{n+k-1}^2| - E|X_n X_{n-1}^2|\, E|X_{n+k} X_{n+k-1}^2| \\
[\text{Indep.}] \rightarrow \quad &= E|X_n|\, EX_{n-1}^2\, E|X_{n+k}|\, EX_{n+k-1}^2 - E|X_n|\, EX_{n-1}^2\, E|X_{n+k}|\, EX_{n+k-1}^2 = 0.
\end{align*}
For $k = 1$ we get
\begin{align*}
\mathrm{Cov}(|Z_n|, |Z_{n+1}|) &= E|Z_n Z_{n+1}| - E|Z_n| E|Z_{n+1}| \\
&= E|X_n X_{n-1}^2 X_{n+1} X_n^2| - E|X_n X_{n-1}^2|\, E|X_{n+1} X_n^2| \\
[\text{Indep.}] \rightarrow \quad &= E|X_n^3|\, EX_{n-1}^2\, E|X_{n+1}| - E|X_n|\, EX_{n-1}^2\, E|X_{n+1}|\, EX_n^2 \\
&= \sqrt{\frac{8}{\pi}} \sqrt{\frac{2}{\pi}} - \sqrt{\frac{2}{\pi}} \sqrt{\frac{2}{\pi}} = \frac{4}{\pi} - \frac{2}{\pi} \ne 0.
\end{align*}
It now follows that $(Z_n)$ is not independent: if it were, then $(|Z_n|)$ would be independent as well, because $|\cdot|$ is a measurable map, and if $(|Z_n|)$ were independent it would follow that $\mathrm{Cov}(|Z_n|, |Z_{n+1}|) = 0$, which we have just shown not to be the case.
Since the $(W_n)$ most likely are not independent, we cannot use the standard Strong Law of Large Numbers, but must instead use Example 7.11 in order to apply Theorem 7.12. Define $\varphi : \mathbb{R}^{\infty} \to \mathbb{R}$ so that it picks out $2 Z_n + Z_{n-1}$ expressed in terms of the $X_n$'s, because that is how $W_n$ is defined. We then notice that
\[ W_k = \varphi\bigl(S^k((X_n)_{n \ge 1})\bigr). \]
Since the $X_n$'s are iid it follows from Corollary 2.3.14 [3] that the $X_n$'s are stationary and ergodic, and it then follows from Example 7.11 [2] that the $W_k$'s are stationary and ergodic. So in order to apply the Ergodic SLLN, we just need to show that $E|W_1| < \infty$. We compute
\[ E|W_1| = E|2 Z_1 + Z_0| \le 2 E|Z_1| + E|Z_0| < \infty, \]
and so it follows from Theorem 7.12 that
\[ \frac{1}{n} \sum_{k=1}^{n} W_k \xrightarrow{a.s.} EW_1 = 2 EZ_1 + EZ_0 = 0, \]
"⇐": Assume that $\lim_{n\to\infty} \mu_n(\{k\}) = \mu(\{k\})$ for all $k \ge 0$ and define
\[ g_n(x) = \begin{cases} \mu_n(\{x\}), & x \in \mathbb{N}_0 \\ 0, & \text{otherwise} \end{cases} \qquad g(x) = \begin{cases} \mu(\{x\}), & x \in \mathbb{N}_0 \\ 0, & \text{otherwise.} \end{cases} \]
\begin{align*}
\mu_n(B) &= \mu_n(B \cap \mathbb{N}_0) \\
[B \cap \mathbb{N}_0 \subseteq \mathbb{N}_0] \rightarrow \quad &= \mu_n\Bigl(\bigcup_{x \in B \cap \mathbb{N}_0} \{x\}\Bigr) \\
[\text{Countable \& disjoint}] \rightarrow \quad &= \sum_{x \in B \cap \mathbb{N}_0} \mu_n(\{x\}) \\
[\text{Integration w.r.t. } \tau] \rightarrow \quad &= \int_{B \cap \mathbb{N}_0} \mu_n(\{x\}) \, \tau(dx) \\
[\text{By construction of } g_n] \rightarrow \quad &= \int_{B \cap \mathbb{N}_0} g_n \, d\tau \\
[g_n = 0 \text{ outside } \mathbb{N}_0] \rightarrow \quad &= \int_{B} g_n \, d\tau
\end{align*}
wk
“⇒”: Assume that µn −→ µ and notice that for any k ∈ N0 we have [k − 21 , k + 21 ] ⊆
(k − 1, k + 1). So by Lemma 3.1.3 [3] there exists a bounded and uniformly con-
tinuou function f ∈ Cbu (R) such that ✶[k− 1 ,k+ 1 ] (x) ≤ f (x) ≤ ✶(k−1,k+1) (x) for all
2 2
x ∈ R. Notice that f (n) = 0 for n 6= k since n 6∈ [k − 21 , k + 21 ] andR that f (n) = 1
for n = k since k ∈ (k − 1, k + 1). It now holds that µn ({k}) = f dµn because
since µn is concentrated on N0 it holds that
Z X∞
f dµn = f (i)µn ({i}) = 0 + 0 + ... + f (k)µn ({k}) + 0 + ...
i=1
= 1 · µn ({k}) = µn ({k})
R
where k is the unique number such that f (k) 6= 0. Similarly for µn ({k}) = f dµ.
wk R R
We have assumed that µn −→ µ which by definition means that f dµn → f dµ
for all f ∈ Cb (R) and since f ∈ Cbu (R) ⊆ Cb (R) we have that
Z Z
µn ({k}) = f dµn → f dµ = µ({k}),
so all that is left to show is that $\frac{\Gamma(n + \frac12)}{\sqrt{n}\,\Gamma(n)} \to 1$. By re-arranging Legendre's Duplication Formula (Theorem 8.19 [2]) and using it on the numerator we get
\begin{align*}
\frac{\Gamma(n + \frac12)}{\sqrt{n}\,\Gamma(n)} &= \frac{\Gamma(2n)\sqrt{\pi}}{2^{2n-1}\Gamma(n)} \cdot \frac{1}{\sqrt{n}\,\Gamma(n)} = \frac{\Gamma(2n)\sqrt{\pi}}{2^{2n-1}\sqrt{n}\,\Gamma(n)^2} \\
&= \frac{\Gamma(2n)}{\sqrt{2\pi}\,(2n)^{2n - 1/2} e^{-2n}} \cdot \frac{\sqrt{\pi}\,\sqrt{2\pi}\,(2n)^{2n - 1/2} e^{-2n}}{2^{2n-1}\sqrt{n}\,\Gamma(n)^2}. \tag{2}
\end{align*}
By Stirling's formula,
\[ \frac{\Gamma(2n)}{\sqrt{2\pi}\,(2n)^{2n - 1/2} e^{-2n}} \to 1 \quad\text{and}\quad \frac{\Gamma(n)}{\sqrt{2\pi}\, n^{n - 1/2} e^{-n}} \to 1 \quad\text{for } n \to \infty, \tag{3} \]
and substituting the approximations from (3) into the second factor of (2), the algebraic remainder cancels exactly, so that factor also converges to $1$. Thus by Scheffé $\mu_n \xrightarrow{wk} \mu$, where $\mu$ has density $\varphi(x)$, which is what we wanted.
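The limit $\Gamma(n + \frac12)/(\sqrt{n}\,\Gamma(n)) \to 1$ is easy to verify numerically; we use `math.lgamma` to avoid the overflow that `math.gamma` would hit for large arguments (a sketch, our own helper name):

```python
import math

def ratio(n):
    """Gamma(n + 1/2) / (sqrt(n) * Gamma(n)), computed via log-gammas."""
    return math.exp(math.lgamma(n + 0.5) - math.lgamma(n) - 0.5 * math.log(n))

# The ratio increases toward 1 (roughly like 1 - 1/(8n)).
vals = [ratio(n) for n in (2, 10, 100, 1000)]
```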
by a similar argument to the one given in Exercise 3.2. For $k \in \mathbb{N}_0$ and $n > k$ we have
\begin{align*}
\pi_n(k) = \binom{n}{k} p_n^k (1 - p_n)^{n-k} &= \frac{\binom{n}{k}}{n^k} (np_n)^k \Bigl(1 - \frac{np_n}{n}\Bigr)^n (1 - p_n)^{-k} \\
&= \frac{n!}{k!\,(n-k)!\, n^k} (np_n)^k \Bigl(1 - \frac{np_n}{n}\Bigr)^n (1 - p_n)^{-k} \\
&= \frac{n(n-1) \cdots (n-k+1)}{n \cdots n} (np_n)^k \Bigl(1 - \frac{np_n}{n}\Bigr)^n \frac{(1 - p_n)^{-k}}{k!} \\
&= \frac{n}{n} \cdot \frac{n-1}{n} \cdots \frac{n-k+1}{n} (np_n)^k \Bigl(1 - \frac{np_n}{n}\Bigr)^n \frac{(1 - p_n)^{-k}}{k!} \\
&\to 1 \cdot 1 \cdots 1 \cdot \lambda^k e^{-\lambda} \frac{1}{k!} = \mu(\{k\}), \qquad \mu \sim Po(\lambda),
\end{align*}
where we use that $np_n \to \lambda < \infty$ implies $p_n \to 0$. Thus we have that the density $\pi_n$ of $\mu_n$ converges to the density of the Poisson distribution. It then follows from Exercise 3.2 that $\mu_n$ converges weakly to the Poisson distribution.
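The pointwise convergence $\pi_n(k) \to \lambda^k e^{-\lambda}/k!$ is visible already for moderate $n$; a small sketch comparing the two pmfs (helper names are ours), with $p_n = \lambda/n$:

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poisson_pmf(lam, k):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 2.0

def max_err(n):
    """Largest pointwise gap between Bin(n, lam/n) and Po(lam) on k = 0..9."""
    return max(abs(binom_pmf(n, lam / n, k) - poisson_pmf(lam, k))
               for k in range(10))

errs = [max_err(n) for n in (10, 100, 1000)]
```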
"$\lambda = 0$": If $k = 0$, then
\[ \mu_n(\{k\}) = \binom{n}{0} p_n^0 (1 - p_n)^n = (1 - p_n)^n = \Bigl(1 - \frac{np_n}{n}\Bigr)^n \to e^{0} = 1, \]
and if $k \ne 0$ then
\[ \mu_n(\{k\}) = \binom{n}{k} p_n^k (1 - p_n)^{n-k} \to \frac{\lambda^k e^{-\lambda}}{k!} = 0, \]
where the limit follows from the previous argument, and the limit is $0$ because we have assumed that $\lambda = 0$. But that is exactly $\delta_0(\{k\})$, which is what we wanted.
We wish to show that $\mu_n$ converges weakly if and only if $\xi_n$ and $\sigma_n$ both converge. In the affirmative case, we also want to determine which distribution $\mu_n$ converges weakly to.
where $\varphi(x)$ is clearly the density corresponding to $N(\xi, \sigma^2)$. It now follows from Scheffé's Lemma that $\mu_n \xrightarrow{wk} \mu$, where $\mu \sim N(\xi, \sigma^2)$, which is what we wanted.
where
\[ \delta_\xi(A) = \begin{cases} 1, & \xi \in A \\ 0, & \xi \notin A, \end{cases} \]
\[ \lim_{M\to\infty} \sup_{n \ge 1} \mu_n([-M, M]^C) = 0. \]
Assume for contradiction that $(\sigma_n^2)$ is unbounded; then there exists a subsequence $(\sigma_{n_k}^2)$ such that $\sigma_{n_k}^2 \to \infty$ for $k \to \infty$. By Uniform Tightness there exists an $M_0 \in \mathbb{N}$ such that for all $M \ge M_0$ we have
\[ \sup_{n \ge 1} \mu_n([-M, M]^C) \le \frac{1}{2}, \]
and so we have a contradiction. Thus $(\sigma_n^2)$ is not unbounded and is therefore bounded. We now prove a small lemma.

Proof. If $(\sigma_{n_k}^2)$ and $(\sigma_{n_m}^2)$ both converge, to some $\sigma_1^2$ and $\sigma_2^2$ respectively, it follows from the first part of this exercise that $\mu_{n_k} \xrightarrow{wk} N(0, \sigma_1^2)$ and that $\mu_{n_m} \xrightarrow{wk} N(0, \sigma_2^2)$, since $\xi_n = 0$ by assumption and so is assumed to converge as well. But we have assumed that $\mu_n \xrightarrow{wk} \mu$, so $N(0, \sigma_1^2) = \mu = N(0, \sigma_2^2)$ and thus $\sigma_1^2 = \sigma_2^2$, which is what we wanted.
For the second part, assume that $\xi_n \to \xi$ for some $\xi$ and let $X_n \sim N(\xi_n, \sigma_n^2)$. Since $\mu_n \xrightarrow{wk} \mu$ for some $\mu$ by assumption, it follows from Lemma 3.1.2 that there exists an $X \sim \mu$ such that $X_n \xrightarrow{D} X$. $\xi_n \to \xi$ by assumption, and deterministic convergence implies convergence in probability, so in particular $\xi_n \xrightarrow{P} \xi$, and it follows by the generalized Slutsky lemma (Theorem 3.3.3) that $X_n - \xi_n \xrightarrow{D} X - \xi$. But since $X_n \sim N(\xi_n, \sigma_n^2)$ it follows that $X_n - \xi_n \sim N(0, \sigma_n^2)$, which means that $E(X_n - \xi_n) = \xi_n' = 0$. So now we have a new sequence $(X_n - \xi_n)$ with $\xi_n' = 0$ and $\sigma_n'^2 = \sigma_n^2$, and so it follows from the previous part of this exercise that $(\sigma_n^2)$ is convergent, which is what we wanted.
For the third part, assume that |ξn | ≤ k for all n ∈ N. Since (ξn ) is bounded,
For the fourth and final part we just need to show that $(\xi_n)$ is bounded. Since $\mu_n \xrightarrow{wk} \mu$, it follows by Uniform Tightness (Lemma 3.1.6 [3]) that there exists $M_0 \in \mathbb{N}$ such that
\[ \mu_n([-M_0, M_0]^C) < \frac{1}{2} \qquad \forall n \in \mathbb{N}. \]
Assume for contradiction that $(\xi_n)$ is unbounded. Then there exists $n_0 \in \mathbb{N}$ such that $\xi_{n_0} > M_0$. We then have that
\begin{align*}
\frac{1}{2} &> \mu_{n_0}([-M_0, M_0]^C) \\
&= \mu_{n_0}\bigl((-\infty, -M_0) \cup (M_0, \infty)\bigr) \\
&\ge \mu_{n_0}((M_0, \infty)) \\
[\xi_{n_0} > M_0] \rightarrow \quad &\ge \mu_{n_0}((\xi_{n_0}, \infty)) = \frac{1}{2},
\end{align*}
where $\mu_{n_0}((\xi_{n_0}, \infty)) = \frac{1}{2}$ because $\mu_{n_0} \sim N(\xi_{n_0}, \sigma_{n_0}^2)$ is symmetric around its mean $\xi_{n_0}$. This is a contradiction, and thus $(\xi_n)$ is bounded, and so it follows from part 3 of the argument that $(\xi_n)$ and $(\sigma_n)$ both converge, which is what we wanted.
define $\tilde{\varphi} : \mathbb{R}^3 \to \mathbb{R}$ given by $\tilde{\varphi}(x, y, z) = y(x + z)$; then $\varphi((X_n)_{n \ge 1}) = \tilde{\varphi}(\hat{X}_1, \hat{X}_2, \hat{X}_3)$, where $\hat{X}_i : \mathbb{R}^{\infty} \to \mathbb{R}$ is the $i$th projection mapping, such that $\hat{X}_i(x_1, x_2, \dots) = x_i$. $\tilde{\varphi}$ is measurable because it is continuous. Furthermore, $\mathcal{B}_{\infty}$ is defined to be the smallest $\sigma$-algebra such that $\hat{X}_n$ is $\mathcal{B}_{\infty}$-$\mathcal{B}$ measurable for every $n$, and thus $\hat{X}_1, \hat{X}_2, \hat{X}_3$ are in particular $\mathcal{B}_{\infty}$-$\mathcal{B}$ measurable. So $\varphi$ is the composition of measurable functions and is therefore itself measurable. It follows from Example 7.11 that $(Y_n)$ is stationary and ergodic, which is what we wanted.
Consider
where the inequality follows from the fact that X1 ∼ N (0, 1) and thus has first
moment.
    Vn = Un / (1 + Un+1),   for n ∈ N,

so Vn = φ(Un, Un+1, ...), where φ : R∞ → R is given by φ(X1, X2, ...) = X1/(1 + X2), which is B∞–B measurable by the same argument as in the previous exam question. It now follows by Example 7.11 that (Vn) is stationary and ergodic, which is what we wanted.
6.6. RE-EXAM JANUARY 2015 QUESTION 3
Notice that |Vi| ≤ Ui almost surely for all i ∈ N, because 1 ≤ 1 + Ui+1 almost surely for all i. Thus Vi is bounded and has moments of all orders, and therefore in particular has first moment, i.e. E|V1| < ∞. It then follows from Khintchine's pointwise ergodic theorem (Theorem 7.12 [2]) that

    (1/n) Σ_{i=1}^n Vi →a.s. E V1 = E[U1/(1 + U2)]
      = E U1 · E[1/(1 + U2)]                       [independence]
      = (1/2) ∫ 1_(0,1)(u) · 1/(1 + u) λ(du)       [density 1_(0,1) w.r.t. Lebesgue; E U1 = 1/2]
      = (1/2) ∫_0^1 1/(1 + u) du                   [continuous on (0, 1)]
      = (1/2) [log(1 + u)]_0^1 = log(2)/2 = α,
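The limit α = log(2)/2 ≈ 0.3466 can be checked by simulation. A minimal sketch (sample size, seed and tolerance are arbitrary choices, not part of the exercise):

```python
import math, random

random.seed(1)
n = 200_000
U = [random.random() for _ in range(n + 1)]        # U_i iid uniform(0, 1)
V = [U[i] / (1 + U[i + 1]) for i in range(n)]      # V_i = U_i / (1 + U_{i+1})

avg = sum(V) / n
alpha = math.log(2) / 2                            # claimed limit, about 0.3466
assert abs(avg - alpha) < 0.005
```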
    Un/2 ≤ Vn ≤ Un,
so if log Un has first moment, then log Vn also has first moment. Consider
    E|log Un| = ∫ 1_(0,1)(u) |log(u)| λ(du)
      = ∫_0^1 |log(u)| du
      = ∫_0^1 −log(u) du                [u ∈ (0, 1) ⇒ log(u) < 0]
      = −lim_{M→0} [u log(u) − u]_M^1
      = −(−1 − 0) = 1.
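The value E|log U1| = 1 can likewise be verified by simulation, using that |log u| = −log u on (0, 1). A minimal sketch (sample size and seed are arbitrary choices):

```python
import math, random

random.seed(7)
n = 200_000
# Monte Carlo estimate of E|log U| for U ~ uniform(0, 1);
# 1 - random.random() lies in (0, 1], so the logarithm is always defined
est = sum(-math.log(1 - random.random()) for _ in range(n)) / n
assert abs(est - 1.0) < 0.02
```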
Since E log V1 ≤ E log U1 = −1 (call this fact (i)), it follows that Σ_{k=1}^n log Vk →a.s. −∞. To see this, suppose that were not the case. Then either (1) Σ_{k=1}^n log Vk →a.s. c < ∞ or (2) Σ_{k=1}^n log Vk →a.s. ∞. If (1) is true, then by Theorem 7.12 it follows that

    (1/n) Σ_{k=1}^n log Vk →a.s. 0 = E log V1 > −1 = E log U1,

which is a contradiction with (i), and (2) is likewise a contradiction with (i). Thus Σ_{k=1}^n log Vk →a.s. −∞. But Σ_{k=1}^n log Vk = log(Π_{k=1}^n Vk), and therefore log(Π_{k=1}^n Vk) →a.s. −∞, and so, taking the exponential, we get that

    Π_{k=1}^n Vk = exp(Σ_{k=1}^n log Vk) →a.s. 0 = β,
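The conclusion Π Vk →a.s. 0 is easy to see numerically: the running sum of log Vk drifts downward at rate E log V1 ≤ −1 per step, so the product collapses almost immediately. A minimal sketch (seed, path length and thresholds are arbitrary choices):

```python
import math, random

random.seed(3)
n = 1_000
U = [1 - random.random() for _ in range(n + 1)]    # U_i in (0, 1], iid uniform
logV = [math.log(U[i] / (1 + U[i + 1])) for i in range(n)]

# the partial sums of log V_k head to -infinity ...
assert sum(logV) < -100

# ... so the product itself collapses to (numerical) zero
prod = 1.0
for i in range(n):
    prod *= U[i] / (1 + U[i + 1])
assert prod < 1e-40
```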
Fn (xn ) ≥ Fn (x − ǫ)
Fn (xn ) ≤ Fn (x + ǫ),
CHAPTER 7. WEEK 7
Thus lim inf_{n→∞} Fn(xn) = lim sup_{n→∞} Fn(xn) = lim_{n→∞} Fn(xn) = F(x), which is what we wanted.
µn is already assumed to be a measure, so we just need to show that µn({k/n | k ≥ 1}) = 1 to establish that it is a probability measure. Consider

    µn({k/n | k ≥ 1}) = µn(∪_{k=1}^∞ {k/n})
      = Σ_{k=1}^∞ µn({k/n})                [disjoint]
      = Σ_{k=1}^∞ (1/n)(1 − 1/n)^{k−1}
      = Σ_{k=0}^∞ (1/n)(1 − 1/n)^k
      = (1/n) · n = 1,                     [geometric series]

so µn is a probability measure. To show that µn →wk exp(1), let Fn and F be the CDFs of µn and µ, respectively. Notice that if x > 0, then

    k/n ≤ x ⇔ k ≤ nx ⇔ k ≤ ⌊nx⌋   (since k is an integer),

so

    Fn(x) = µn((−∞, x]) = Σ_{k: k/n ≤ x} (1/n)(1 − 1/n)^{k−1} = Σ_{k=1}^{⌊nx⌋} (1/n)(1 − 1/n)^{k−1}
      = (1/n) · (1 − (1 − 1/n)^{⌊nx⌋}) / (1 − (1 − 1/n))   [finite geometric series]
      = 1 − (1 − 1/n)^{⌊nx⌋} = 1 − e^{⌊nx⌋ ln(1−1/n)}.
7.3. EXERCISE 3.10
Now we have

    ln(1 − 1/n) · n = ln(1 − 1/n) / (1/n),   which is of the form "0/0" as n → ∞,

so by l'Hôpital's rule (differentiating numerator and denominator with respect to n),

    ln(1 − 1/n) / (1/n) → −1/(1 − 1/n) → −1,

and thus ln(1 − 1/n) · nx → −x. Furthermore, ln(1 − 1/n)(nx − 1) = ln(1 − 1/n)nx − ln(1 − 1/n) → −x. So since nx − 1 ≤ ⌊nx⌋ ≤ nx by definition, it follows that ln(1 − 1/n)⌊nx⌋ → −x. So Fn(x) = 1 − e^{⌊nx⌋ ln(1−1/n)} → 1 − e^{−x} = F(x) by continuity of the exponential function. If x ≤ 0, then Fn(x) = 0 = F(x). Thus we have shown that Fn(x) → F(x) for all x ∈ R, where F is the CDF of exp(1), and so by Theorem 3.2.3 it follows that µn →wk exp(1), which is what we wanted.
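Both claims — that the point masses of µn sum to 1 and that Fn converges pointwise to the exp(1) CDF — can be checked numerically. A small sketch (the grid of test points, truncation length and tolerances are arbitrary choices):

```python
import math

def F_n(x, n):
    # CDF of mu_n: 1 - (1 - 1/n)^{floor(nx)} for x > 0, else 0
    return 1 - (1 - 1 / n) ** math.floor(n * x) if x > 0 else 0.0

def F(x):
    # CDF of the exponential distribution with rate 1
    return 1 - math.exp(-x) if x > 0 else 0.0

# the point masses (1/n)(1 - 1/n)^{k-1}, k >= 1, sum to 1 (truncated tail is negligible)
n = 50
mass = sum((1 / n) * (1 - 1 / n) ** (k - 1) for k in range(1, 100_000))
assert abs(mass - 1.0) < 1e-9

# pointwise convergence F_n(x) -> F(x) for large n
for x in (0.5, 1.0, 3.0):
    assert abs(F_n(x, 10**6) - F(x)) < 1e-4
```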
Let X ∼ bin(n, p) = µ, such that f(x) = C(n, x) p^x (1 − p)^{n−x} for x ∈ {0, 1, ..., n} is the density with respect to the counting measure τ, where C(n, x) denotes the binomial coefficient. We wish to find the characteristic function of X. Using the definition we get

    φ(θ) = ∫ e^{iθx} dµ(x)
      = ∫_{{0,1,...,n}} e^{iθx} C(n, x) p^x (1 − p)^{n−x} dτ(x)   [density w.r.t. τ]
      = Σ_{x=0}^n e^{iθx} C(n, x) p^x (1 − p)^{n−x}
      = Σ_{x=0}^n C(n, x) (p e^{iθ})^x (1 − p)^{n−x}
      = (1 − p + p e^{iθ})^n.                                     [binomial theorem]
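The closed form (1 − p + p e^{iθ})^n can be checked against the defining sum numerically. A small sketch (the parameter values n = 10, p = 0.3 and the test angles are arbitrary choices):

```python
import cmath, math

def binom_cf(theta, n, p):
    # closed form from the binomial theorem
    return (1 - p + p * cmath.exp(1j * theta)) ** n

def binom_cf_sum(theta, n, p):
    # the defining sum over the point masses of bin(n, p)
    return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) * cmath.exp(1j * theta * x)
               for x in range(n + 1))

for theta in (0.0, 0.7, -2.3):
    assert abs(binom_cf(theta, 10, 0.3) - binom_cf_sum(theta, 10, 0.3)) < 1e-12
```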
Similarly, for X ∼ pois(λ), with density e^{−λ} λ^x / x! with respect to the counting measure τ on N0, we get

    φ(θ) = ∫_{N0} e^{iθx} (e^{−λ} λ^x / x!) dτ(x)
      = Σ_{x=0}^∞ e^{iθx} e^{−λ} λ^x / x!
      = e^{−λ} Σ_{x=0}^∞ (λ e^{iθ})^x / x!
      = e^{−λ} e^{λ e^{iθ}} = e^{λ(e^{iθ} − 1)}.   [Taylor expansion of exp]
where we get the last equality by noticing that the integrand is the density of a normal distribution with µ = 0 and σ² = (θ² + 1)^{−1}, which therefore integrates to 1. Since XY ∼ ZW we have φ_{XY}(θ) = φ_{ZW}(θ), and by definition φ_{−ZW}(θ) = φ_{ZW}(−θ), and since φ_{ZW} is even by (1) we have φ_{ZW}(−θ) = φ_{ZW}(θ). Now to determine the distribution, we notice that XY ⊥⊥ −ZW because we
EX² < ∞ and EY² < ∞ and that the characteristic function of X, called φ, satisfies φ(θ) ∈ (0, ∞) for every θ ∈ R, which implies that 1/φ(θ) and log(φ(θ)) are legitimate
where the inequality follows from the fact that X and Y both have first moment, since they have second moment. To show that I(θ) = 0, notice that X + Y ⊥⊥ X − Y implies X − Y ⊥⊥ e^{iθ(X+Y)}; call this (1). We get

    I(θ) = ∫ (X − Y) e^{iθ(X+Y)} dP = E[(X − Y) e^{iθ(X+Y)}]
Notice that X ⊥⊥ Y implies X e^{iθX} ⊥⊥ e^{iθY} and vice versa; call this (2). So we get

    I(θ) = ∫ (X − Y) e^{iθ(X+Y)} dP = ∫ X e^{iθ(X+Y)} dP − ∫ Y e^{iθ(X+Y)} dP
      = ∫ X e^{iθX} e^{iθY} dP − ∫ Y e^{iθY} e^{iθX} dP
      = ∫ X e^{iθX} dP ∫ e^{iθY} dP − ∫ Y e^{iθY} dP ∫ e^{iθX} dP     [(2)]
      = (1/i) φ′(θ) ∫ e^{iθY} dP − (1/i) ψ′(θ) ∫ e^{iθX} dP           [Lemma 3.4.8]
      = (1/i) φ′(θ) ψ(θ) − (1/i) ψ′(θ) φ(θ)
      = (1/i) (φ′(θ) ψ(θ) − φ(θ) ψ′(θ)),
which is what we wanted.
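As a sanity check of the final formula, take X, Y iid N(0, 1): then φ = ψ, so the closed form (1/i)(φ′ψ − φψ′) vanishes identically, and a Monte Carlo estimate of I(θ) = E[(X − Y)e^{iθ(X+Y)}] should sit near 0. A minimal sketch (sample size, θ and seed are arbitrary choices):

```python
import cmath, random

random.seed(11)
n = 400_000
theta = 0.9

# Monte Carlo estimate of I(theta) = E[(X - Y) e^{i theta (X + Y)}], X, Y iid N(0, 1)
acc = 0.0
for _ in range(n):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    acc += (x - y) * cmath.exp(1j * theta * (x + y))
est = acc / n

# with phi = psi the closed form (1/i)(phi' psi - phi psi') is identically 0
assert abs(est) < 0.02
```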
7.7. EXAM 2015, PROBLEM 3 - BERNSTEIN'S THEOREM
We need to show (1) that J is well-defined, (2) that J(θ) = 2φ(θ)², and (3) that J(θ) = −2φ″(θ)φ(θ) + 2φ′(θ)².
Ad 1: Consider

    ∫ |(X − Y)² e^{iθ(X+Y)}| dP = ∫ (X − Y)² dP ≤ EX² + EY² = 2 < ∞,

so J is well-defined.
Ad 2: Consider

    J(θ) = ∫ (X − Y)² e^{iθ(X+Y)} dP = ∫ (X − Y)² dP ∫ e^{iθ(X+Y)} dP   [X − Y ⊥⊥ X + Y]
      = ∫ (X − Y)² dP ∫ e^{iθX} dP ∫ e^{iθY} dP                         [X ⊥⊥ Y]
      = ∫ (X − Y)² dP · φ(θ) φ(θ)
      = E[(X − Y)²] φ(θ)² = 2φ(θ)².
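The identity in (2) can be sanity-checked with X, Y iid N(0, 1), where φ(θ) = e^{−θ²/2}; the alternative closed form claimed in (3), 2φ′(θ)² − 2φ″(θ)φ(θ), agrees exactly for this φ. A Monte Carlo sketch (sample size, θ and seed are arbitrary choices):

```python
import cmath, math, random

random.seed(5)
n = 400_000
theta = 0.7

# Monte Carlo estimate of J(theta) = E[(X - Y)^2 e^{i theta (X + Y)}], X, Y iid N(0, 1)
acc = 0.0
for _ in range(n):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    acc += (x - y) ** 2 * cmath.exp(1j * theta * (x + y))
est = acc / n

phi = math.exp(-theta**2 / 2)       # characteristic function of N(0, 1)
dphi = -theta * phi                 # phi'
d2phi = (theta**2 - 1) * phi        # phi''

# the two closed forms agree exactly, and the simulation matches them
assert abs(2 * phi**2 - (2 * dphi**2 - 2 * d2phi * phi)) < 1e-12
assert abs(est - 2 * phi**2) < 0.05
```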
Ad 3: Consider

    J(θ) = ∫ (X − Y)² e^{iθ(X+Y)} dP
      = ∫ X² e^{iθ(X+Y)} dP + ∫ Y² e^{iθ(X+Y)} dP − 2 ∫ XY e^{iθ(X+Y)} dP
      = ∫ X² e^{iθX} e^{iθY} dP + ∫ Y² e^{iθY} e^{iθX} dP − 2 ∫ X e^{iθX} Y e^{iθY} dP
      = ∫ X² e^{iθX} dP ∫ e^{iθY} dP + ∫ Y² e^{iθY} dP ∫ e^{iθX} dP − 2 ∫ X e^{iθX} dP ∫ Y e^{iθY} dP   [⊥⊥]
      = (1/i²) φ″(θ) ψ(θ) + (1/i²) ψ″(θ) φ(θ) − (2/i²) φ′(θ) ψ′(θ)   [Lemma 3.4.8]
      = 2φ′(θ)² − 2φ″(θ) φ(θ),   [φ(θ) = ψ(θ)]
Combining (2) and (3) gives 2φ(θ)² = 2φ′(θ)² − 2φ″(θ)φ(θ), i.e. (φ″(θ)φ(θ) − φ′(θ)²)/φ(θ)² = (d²/dθ²) log(φ(θ)) = −1. Thus log(φ(θ)) is a second order polynomial whose second derivative is identically −1, so we can write its Taylor expansion as

    log(φ(θ)) = log(φ(0)) + (d/dθ) log(φ(θ))|_{θ=0} · θ + (d²/dθ²) log(φ(θ))|_{θ=0} · θ²/2
      = (φ′(0)/φ(0)) θ − θ²/2   [φ(0) = 1 + Q3.4]
      = φ′(0) θ − θ²/2.
[3] Sokol, Alexander & Rønn-Nielsen, Anders (2016). Advanced Probability, 4th edition, Department of Mathematical Sciences, University of Copenhagen.