
PROBABILITY AND STOCHASTIC PROCESS
A. K. MAJEE

Proof of the strong law of large numbers. Without loss of generality we may assume that $\mu = 0$ (otherwise, if $\mu \neq 0$, set $\tilde{X}_i = X_i - \mu$ and work with $\tilde{X}_i$). Set $Y_n = \frac{S_n}{n}$. Observe that, thanks to independence,
\[
E[Y_n] = 0, \qquad E[Y_n^2] = \frac{1}{n^2} \sum_{1 \le j,k \le n} E[X_j X_k] = \frac{1}{n^2} \sum_{j=1}^{n} E[X_j^2] = \frac{\sigma^2}{n}.
\]
Thus $\lim_{n\to\infty} E[Y_n^2] = 0$, and hence along a subsequence $Y_n$ converges to $0$ almost surely. But we need to show that the original sequence converges to $0$ with probability $1$. To do so, we proceed as follows. Since $E[Y_{n^2}^2] = \frac{\sigma^2}{n^2}$, we see that $\sum_{n=1}^{\infty} E[Y_{n^2}^2] = \sum_{n=1}^{\infty} \frac{\sigma^2}{n^2} < +\infty$, and hence by Lemma 1.15, ii), $Y_{n^2}$ converges to $0$ almost surely. Thus,
\[
\lim_{n\to\infty} Y_{n^2} = 0 \quad \text{with probability } 1. \tag{1.2}
\]

Let $n \in \mathbb{N}$. Then there exists $m(n) \in \mathbb{N}$ such that $(m(n))^2 \le n < (m(n)+1)^2$. Now
\[
Y_n - \frac{(m(n))^2}{n} Y_{(m(n))^2} = \frac{1}{n} \sum_{i=1}^{n} X_i - \frac{(m(n))^2}{n}\,\frac{1}{(m(n))^2} \sum_{i=1}^{(m(n))^2} X_i = \frac{1}{n} \sum_{i=(m(n))^2+1}^{n} X_i
\]
\[
\implies E\Big[\Big(Y_n - \frac{(m(n))^2}{n} Y_{(m(n))^2}\Big)^2\Big] = \frac{1}{n^2} \sum_{i=(m(n))^2+1}^{n} E[X_i^2] = \frac{n - (m(n))^2}{n^2}\,\sigma^2 \le \frac{2m(n)+1}{n^2}\,\sigma^2 \quad (\because\ n < (m(n)+1)^2)
\]
\[
\le \frac{2\sqrt{n}+1}{n^2}\,\sigma^2 \le \frac{3\sigma^2}{n^{3/2}} \quad (\because\ m(n) \le \sqrt{n})
\]
\[
\implies \sum_{n=1}^{\infty} E\Big[\Big(Y_n - \frac{(m(n))^2}{n} Y_{(m(n))^2}\Big)^2\Big] \le \sum_{n=1}^{\infty} \frac{3\sigma^2}{n^{3/2}} < +\infty.
\]

Thus, again by Lemma 1.15, ii), we conclude that
\[
\lim_{n\to\infty} \Big(Y_n - \frac{(m(n))^2}{n} Y_{(m(n))^2}\Big) = 0 \quad \text{with probability } 1. \tag{1.3}
\]
Observe that $\lim_{n\to\infty} \frac{(m(n))^2}{n} = 1$. Thus, in view of (1.2) and (1.3), we conclude that
\[
\lim_{n\to\infty} Y_n = 0 \quad \text{with probability } 1.
\]
This completes the proof.
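The almost-sure convergence of the running averages $Y_n = S_n/n$ is easy to observe numerically. Below is a minimal sketch in Python (NumPy assumed available), using Uniform$(0,1)$ samples so that $\mu = 1/2$; the sample size and seed are illustrative choices, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5                                      # mean of a Uniform(0, 1) variable
X = rng.random(100_000)                       # i.i.d. Uniform(0, 1) draws X_1, ..., X_n
Y = np.cumsum(X) / np.arange(1, X.size + 1)   # running averages Y_n = S_n / n

# The strong law says Y_n -> mu almost surely, so the tail of the
# running-average path should hug mu.
print(abs(Y[99] - mu), abs(Y[-1] - mu))
```

For a fixed seed the deviation at $n = 10^5$ is typically of order $10^{-3}$, consistent with the $\sigma/\sqrt{n}$ scale that appears in the proof.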


Theorem 1.16 (Kolmogorov's strong law of large numbers). Let $\{X_n\}$ be a sequence of i.i.d. random variables and $\mu \in \mathbb{R}$. Then
\[
\lim_{n\to\infty} \frac{S_n}{n} = \mu \ \text{ a.s.} \qquad \text{if and only if} \qquad E[X_n] = \mu.
\]
In this case, the convergence also holds in $L^1$.
Example 1.16 (Monte Carlo approximation). Let $f$ be a measurable function on $[0,1]$ such that $\int_0^1 |f(x)|\,dx < \infty$, and let $\alpha = \int_0^1 f(x)\,dx$. In general we cannot obtain a closed-form expression for $\alpha$ and need to estimate it. Let $\{U_j\}$ be a sequence of independent random variables, uniformly distributed on $[0,1]$. Then by Theorem 1.16,
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} f(U_j) = E[f(U_j)] = \int_0^1 f(x)\,dx
\]
a.s. and in $L^1$. Thus, to get an approximation of $\int_0^1 f(x)\,dx$, we need only simulate the uniform random variables $U_j$ (by using a random number generator).
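As a concrete illustration, here is a minimal Monte Carlo sketch in Python (NumPy assumed available). The integrand $f(x) = x^2$, whose integral over $[0,1]$ is $1/3$, stands in for a general integrable $f$; the sample size and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    return x ** 2          # illustrative integrand; its integral over [0, 1] is 1/3

n = 200_000
U = rng.random(n)          # independent Uniform(0, 1) samples U_1, ..., U_n
estimate = f(U).mean()     # (1/n) * sum_j f(U_j) -> integral of f, a.s.
print(estimate)
```

The error of the estimate decays like $1/\sqrt{n}$, so each additional decimal digit of accuracy costs roughly a hundredfold more samples.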
Theorem 1.17 (Central limit theorem). Let $\{X_n\}$ be a sequence of i.i.d. random variables with finite mean $\mu$ and variance $\sigma^2$ with $0 < \sigma^2 < +\infty$. Let $Y_n = \frac{S_n - n\mu}{\sigma\sqrt{n}}$. Then $Y_n$ converges in distribution to $Y$, where $\mathcal{L}(Y) = N(0,1)$.

Proof. Without loss of generality, we assume that $\mu = 0$. Let $\Phi$ and $\Phi_{Y_n}$ be the characteristic functions of $X_j$ and $Y_n$ respectively. Since $\{X_j\}$ are i.i.d., we have
\[
\Phi_{Y_n}(u) = E[e^{iuY_n}] = E\big[e^{iu\frac{S_n}{\sigma\sqrt{n}}}\big] = E\Big[\prod_{i=1}^{n} e^{iu\frac{X_i}{\sigma\sqrt{n}}}\Big] = \prod_{i=1}^{n} E\big[e^{iu\frac{X_i}{\sigma\sqrt{n}}}\big] = \Big(\Phi\big(\tfrac{u}{\sigma\sqrt{n}}\big)\Big)^{n}.
\]
Since $E[|X_j|^2] < +\infty$, the function $\Phi$ has two continuous derivatives. In particular,
\[
\Phi'(u) = i\,E[X_j e^{iuX_j}], \quad \Phi''(u) = -E[X_j^2 e^{iuX_j}] \implies \Phi'(0) = 0, \ \Phi''(0) = -\sigma^2.
\]
Expanding $\Phi$ in a Taylor expansion about $u = 0$, we have
\[
\Phi(u) = 1 - \frac{\sigma^2 u^2}{2} + h(u)\,u^2, \quad \text{where } h(u) \to 0 \text{ as } u \to 0.
\]
Thus, we get
\[
\Phi_{Y_n}(u) = e^{\,n \log \Phi\left(\frac{u}{\sigma\sqrt{n}}\right)} = e^{\,n \log\left(1 - \frac{u^2}{2n} + \frac{u^2}{n\sigma^2}\,h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}
\implies \lim_{n\to\infty} \Phi_{Y_n}(u) = e^{-\frac{u^2}{2}} = \Phi_Y(u) \quad \text{(by L'H\^opital's rule)}.
\]
Hence by L\'evy's continuity theorem, we conclude that $Y_n$ converges in distribution to $Y$ with $\mathcal{L}(Y) = N(0,1)$. $\square$
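The limit of characteristic functions computed in the proof can be checked numerically for a concrete distribution. For Rademacher variables ($P(X_j = \pm 1) = \frac{1}{2}$, so $\sigma = 1$) the characteristic function is $\Phi(u) = \cos u$, and the proof gives $\Phi_{Y_n}(u) = (\cos(u/\sqrt{n}))^n \to e^{-u^2/2}$. A short sketch (NumPy assumed; the grid and the value of $n$ are illustrative):

```python
import numpy as np

# For Rademacher X_j, Phi(u) = cos(u) and sigma = 1, so the proof gives
# Phi_{Y_n}(u) = cos(u / sqrt(n)) ** n, which should approach exp(-u^2 / 2).
u = np.linspace(-3.0, 3.0, 13)
n = 10_000
phi_Yn = np.cos(u / np.sqrt(n)) ** n
phi_Y = np.exp(-u ** 2 / 2)            # characteristic function of N(0, 1)

print(np.max(np.abs(phi_Yn - phi_Y)))  # uniform error on the grid; small for large n
```

Note that this only checks pointwise convergence of characteristic functions, which is exactly what L\'evy's continuity theorem converts into convergence in distribution.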
Sn
Remark 1.3. If $\sigma^2 = 0$, then $X_j = \mu$ a.s. for all $j$, and hence $\frac{S_n}{n} = \mu$ a.s.

One can weaken slightly the hypotheses of Theorem 1.17. Indeed, we have the following central limit theorem.

Theorem 1.18. Let $\{X_n\}$ be independent but not necessarily identically distributed. Let $E[X_n] = 0$ for all $n$, and let $\sigma_n^2 = \mathrm{Var}(X_n)$. Assume that
\[
\sup_n E[|X_n|^{2+\varepsilon}] < +\infty \ \text{ for some } \varepsilon > 0, \qquad \sum_{n=1}^{\infty} \sigma_n^2 = \infty.
\]
Then $\frac{S_n}{\sqrt{\sum_{i=1}^{n} \sigma_i^2}}$ converges in distribution to $Y$ with $\mathcal{L}(Y) = N(0,1)$.

Example 1.17. Let $\{X_n\}$ be a sequence of i.i.d. random variables such that $P(X_n = 1) = \frac{1}{2}$ and $P(X_n = 0) = \frac{1}{2}$. Then $\mu = \frac{1}{2}$ and $\sigma^2 = \frac{1}{4}$. Hence by the central limit theorem, $Y_n = \frac{2S_n - n}{\sqrt{n}}$ converges in distribution to $Y$ with $\mathcal{L}(Y) = N(0,1)$.
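This example is easy to check by simulation. Since $S_n \sim B(n, \frac{1}{2})$, we may sample $S_n$ directly rather than summing coin flips; a minimal sketch (NumPy assumed; the sample sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10_000, 5_000

# S_n is Binomial(n, 1/2), so sample it directly instead of summing coin flips.
S = rng.binomial(n, 0.5, size=trials)
Y = (2 * S - n) / np.sqrt(n)       # the normalised sums Y_n of Example 1.17

print(Y.mean(), Y.var())           # should be near 0 and 1
print((Y <= 1.96).mean())          # should be near Phi(1.96), about 0.975
```

The empirical mean, variance, and tail frequency of the $Y_n$ samples line up with those of $N(0,1)$, as the central limit theorem predicts.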
Example 1.18. Let $X \sim B(n,p)$ with $p < \frac{1}{2}$ (so that $1 - 2p > 0$). For any given $0 \le \alpha \le 1$, we want to find $n$ such that $P(X > \frac{n}{2}) \le 1 - \alpha$. We can think of $X$ as a sum of $n$ i.i.d. random variables $X_i$ with $X_i \sim B(1,p)$. Hence by the central limit theorem, for large $n$,
\[
P\big(X - np \le x\sqrt{np(1-p)}\big) \approx \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{u^2}{2}}\,du.
\]
Choose $x$ such that $np + x\sqrt{np(1-p)} = \frac{n}{2}$. This implies that $x = \frac{\sqrt{n}}{2}\,\frac{1-2p}{\sqrt{p(1-p)}}$. Thus,
\[
P\Big(X > \frac{n}{2}\Big) = 1 - P\Big(X \le \frac{n}{2}\Big) = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{\sqrt{n}}{2}\frac{1-2p}{\sqrt{p(1-p)}}} e^{-\frac{u^2}{2}}\,du.
\]
Therefore, we need to choose $n$ such that
\[
\alpha \le \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{\sqrt{n}}{2}\frac{1-2p}{\sqrt{p(1-p)}}} e^{-\frac{u^2}{2}}\,du
\implies \sqrt{n} \ge \frac{2\sqrt{p(1-p)}}{1-2p}\,\Phi^{-1}(\alpha)
\implies n \ge \frac{4p(1-p)}{(1-2p)^2}\,\big(\Phi^{-1}(\alpha)\big)^2.
\]
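The final bound can be packaged as a small helper. The sketch below uses Python's standard-library `statistics.NormalDist` for $\Phi^{-1}$; the function name `min_sample_size` and the sample values are illustrative, not part of the example.

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(p, alpha):
    """Smallest n with n >= 4p(1-p)/(1-2p)^2 * (Phi^{-1}(alpha))^2, for p < 1/2."""
    z = NormalDist().inv_cdf(alpha)   # Phi^{-1}(alpha)
    return ceil(4 * p * (1 - p) / (1 - 2 * p) ** 2 * z ** 2)

# For instance, with p = 0.4 and alpha = 0.95 (so P(X > n/2) <= 0.05):
print(min_sample_size(0.4, 0.95))
```

As $p$ approaches $\frac{1}{2}$ the factor $(1-2p)^{-2}$ blows up, so the required $n$ grows rapidly, which matches the intuition that a nearly fair coin rarely stays below half heads.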
Example 1.19. Let $\{X_i\}$ be a sequence of i.i.d. random variables, each $\exp(1)$ distributed. Let $\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$. How large should $n$ be such that
\[
P(0.9 \le \bar{X} \le 1.1) \ge 0.95?
\]
Since the $X_i$'s are $\exp(1)$ distributed, $\mu = E[X_i] = 1$ and $\sigma^2 = \mathrm{Var}(X_i) = 1$. Let $Y = \sum_{i=1}^{n} X_i$. Then by the central limit theorem, $\frac{Y - n}{\sqrt{n}}$ is approximately $N(0,1)$. Now
\[
P(0.9 \le \bar{X} \le 1.1) = P\big((0.9)n \le Y \le (1.1)n\big) = P\Big(\frac{(0.9)n - n}{\sqrt{n}} \le \frac{Y - n}{\sqrt{n}} \le \frac{(1.1)n - n}{\sqrt{n}}\Big)
\]
\[
= P\Big(-(0.1)\sqrt{n} \le \frac{Y - n}{\sqrt{n}} \le (0.1)\sqrt{n}\Big) = \Phi\big((0.1)\sqrt{n}\big) - \Phi\big(-(0.1)\sqrt{n}\big) = 2\Phi\big((0.1)\sqrt{n}\big) - 1 \quad (\because\ \Phi(-x) = 1 - \Phi(x)).
\]
Hence we need to find $n$ such that
\[
2\Phi\big((0.1)\sqrt{n}\big) - 1 \ge 0.95 \implies \Phi\big((0.1)\sqrt{n}\big) \ge 0.975 \implies (0.1)\sqrt{n} \ge \Phi^{-1}(0.975) = 1.96
\]
\[
\implies \sqrt{n} \ge 19.6 \implies n \ge 384.16 \implies n \ge 385 \quad (\because\ n \in \mathbb{N}).
\]
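The arithmetic of this example can be reproduced with the standard-library `statistics.NormalDist`; a quick illustrative check:

```python
from math import ceil
from statistics import NormalDist

# Need 2 * Phi(0.1 * sqrt(n)) - 1 >= 0.95, i.e. 0.1 * sqrt(n) >= Phi^{-1}(0.975).
z = NormalDist().inv_cdf(0.975)   # Phi^{-1}(0.975), approximately 1.96
n = ceil((z / 0.1) ** 2)          # smallest integer n meeting the bound
print(round(z, 4), n)
```

Using the exact quantile $\Phi^{-1}(0.975) \approx 1.95996$ instead of the rounded value $1.96$ gives the same answer, $n = 385$.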
