Lect 4 A
Since the binomial coefficient $\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$ grows quite rapidly with $n$, direct evaluation of the binomial p.m.f. becomes impractical for large $n$, and approximations are needed.

[Table 4.1: tabulated values of $\mathrm{erf}(x)$.]

PILLAI
Since $x_1 < 0$, from Fig. 4.1(b) the above probability is given by
$$P\{2475 \le X \le 2525\} = \mathrm{erf}(x_2) - \mathrm{erf}(x_1) = \mathrm{erf}(x_2) + \mathrm{erf}(|x_1|) = 2\,\mathrm{erf}\!\left(\frac{5}{\sqrt{50}}\right) \simeq 0.516,$$
where we have used Table 4.1 $(\mathrm{erf}(0.7) \simeq 0.258)$.
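As a numerical cross-check, the probability above can be reproduced with the standard library. Note that Table 4.1 defines $\mathrm{erf}(x)$ as the area of the standard normal density from $0$ to $x$, which differs from the library convention; also, the setup of this example (a fair coin tossed $n = 5000$ times, so $np = 2500$ and $npq = 1250$) is inferred from the numbers $2475$, $2525$, and $5/\sqrt{50}$, since the problem statement precedes this excerpt. A minimal sketch:

```python
import math

# Table 4.1 uses erf(x) = (1/sqrt(2*pi)) * integral_0^x e^{-y^2/2} dy,
# which differs from math.erf; the two are related by y = t*sqrt(2):
def erf_table(x):
    """erf in the Table 4.1 convention (area of N(0,1) between 0 and x)."""
    return 0.5 * math.erf(x / math.sqrt(2))

# Assumed setup: fair coin, n = 5000 tosses, so np = 2500, npq = 1250 and
# x2 = (2525 - 2500)/sqrt(1250) = 5/sqrt(50), x1 = -x2.
x2 = 25 / math.sqrt(1250)
prob = erf_table(x2) + erf_table(abs(-x2))   # erf(x2) + erf(|x1|)

print(round(erf_table(0.7), 3))   # the Table 4.1 entry, ~0.258
print(round(prob, 3))             # ~0.52, in line with 2 erf(0.7) = 0.516
```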
[Fig. 4.1: the standard normal density $\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$ with the interval $(x_1, x_2)$ shaded: (a) $x_1 \ge 0,\ x_2 > 0$; (b) $x_1 < 0,\ x_2 > 0$.]
[Fig. 4.2: an interval of duration $\Delta$ within $(0, T)$.]
Hence we may assume $p = \Delta/T$. Note that $p \to 0$ as $T \to \infty$. However, in this case $np = n\Delta/T$ is a constant, and the normal approximation is invalid here.
Suppose the interval $\Delta$ in Fig. 4.2 is of interest to us. A call inside that interval is a “success” (H), whereas one outside is a “failure” (T). This is equivalent to the coin-tossing situation, and hence the probability $P_n(k)$ of obtaining $k$ calls (in any order) in an interval of duration $\Delta$ is given by the binomial p.m.f. Thus
$$P_n(k) = \frac{n!}{(n-k)!\,k!}\,p^k (1-p)^{n-k}. \tag{4-6}$$
Thus
$$\lim_{\substack{n\to\infty,\ p\to 0 \\ np = \lambda}} P_n(k) = e^{-\lambda}\,\frac{\lambda^k}{k!}, \tag{4-8}$$
since the finite products $\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)$ as well as $\left(1-\frac{\lambda}{n}\right)^{-k}$ tend to unity as $n \to \infty$, and
$$\lim_{n\to\infty}\left(1-\frac{\lambda}{n}\right)^{n} = e^{-\lambda}.$$
The right side of (4-8) represents the Poisson p.m.f., and the Poisson approximation to the binomial r.v. is valid in situations where the binomial r.v. parameters $n$ and $p$ diverge to two extremes ($n \to \infty$, $p \to 0$) such that their product $np$ is a constant.
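The limit (4-8) is easy to see numerically. A quick sketch (the particular $n$ and $p$ below are hypothetical, chosen so that $np = 3$):

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability P_n(k) as in (4-6)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """The Poisson limit on the right side of (4-8)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# n large, p small, with np = lambda held fixed (hypothetical numbers):
n, p = 10_000, 0.0003   # lambda = np = 3
for k in range(5):
    print(k, round(binom_pmf(n, k, p), 6), round(poisson_pmf(n * p, k), 6))
```

The two columns agree to roughly four decimal places, even though the normal approximation would be useless here since $np$ stays fixed as $n$ grows.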
Example 4.2: Winning a Lottery: Suppose two million
lottery tickets are issued with 100 winning tickets among
them. (a) If a person purchases 100 tickets, what is the
probability of winning? (b) How many tickets should one
buy to be 95% confident of having a winning ticket?
Solution: The probability of buying a winning ticket is
$$p = \frac{\text{No. of winning tickets}}{\text{Total no. of tickets}} = \frac{100}{2\times 10^{6}} = 5\times 10^{-5}.$$
(b) We need $P(X \ge 1) = 1 - e^{-np} \ge 0.95$, which implies $np \ge \ln 20 \simeq 3$, i.e., $n \ge 3/p = 60{,}000$.
$$P(X \ge 5) = 1 - \sum_{k=0}^{4} e^{-2}\,\frac{2^k}{k!} = 1 - e^{-2}\left(1 + 2 + 2 + \frac{4}{3} + \frac{2}{3}\right) \simeq 0.052.$$
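Both numbers above can be verified directly. A minimal sketch (the exact ticket count comes out to $59{,}915$, which rounds to the notes' $60{,}000$ via $\ln 20 \simeq 3$):

```python
import math

# Example 4.2(b): smallest n with P(X >= 1) = 1 - e^{-np} >= 0.95,
# where p = 5e-5; this gives np >= ln 20, i.e. n >= ln(20)/p.
p = 5e-5
n_needed = math.ceil(math.log(20) / p)
print(n_needed)   # 59,915, rounded in the notes to 3/p = 60,000

# The Poisson tail with lambda = 2:
lam = 2
tail = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(5))
print(round(tail, 4))   # ~0.0527, quoted as 0.052 in the notes
```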
$$P\{x_1 < X(\xi) \le x_2 \mid B\} = \int_{x_1}^{x_2} f_X(x \mid B)\,dx. \tag{4-17}$$
Example 4.4: Refer to Example 3.2. Toss a coin and let $X(T) = 0$, $X(H) = 1$. Suppose $B = \{H\}$. Determine $F_X(x \mid B)$.
Solution: From Example 3.2, $F_X(x)$ has the form shown in Fig. 4.3(a). We need $F_X(x \mid B)$ for all $x$.
For $x < 0$, $\{X(\xi) \le x\} = \phi$, so that $\{X(\xi) \le x\} \cap B = \phi$ and $F_X(x \mid B) = 0$.
[Fig. 4.3: (a) $F_X(x)$, a staircase with a step of height $q$ at $x = 0$ rising to $1$ at $x = 1$; (b) $F_X(x \mid B)$, a single step to $1$ at $x = 1$.]
For $0 \le x < 1$, $\{X(\xi) \le x\} = \{T\}$, so that $\{X(\xi) \le x\} \cap B = \{T\} \cap \{H\} = \phi$ and $F_X(x \mid B) = 0$.
For $x \ge 1$, $\{X(\xi) \le x\} = \Omega$, and $\{X(\xi) \le x\} \cap B = B$, so that
$$F_X(x \mid B) = \frac{P(B)}{P(B)} = 1$$
(see Fig. 4.3(b)).
Example 4.5: Given $F_X(x)$, suppose $B = \{X(\xi) \le a\}$. Find $f_X(x \mid B)$.
Solution: We will first determine $F_X(x \mid B)$. From (4-11) and $B$ as given above, we have
$$F_X(x \mid B) = \frac{P\{(X \le x) \cap (X \le a)\}}{P\{X \le a\}}. \tag{4-18}$$
For $x < a$, $(X \le x) \cap (X \le a) = (X \le x)$, so that
$$F_X(x \mid B) = \frac{P\{X \le x\}}{P\{X \le a\}} = \frac{F_X(x)}{F_X(a)}. \tag{4-19}$$
For $x \ge a$, $(X \le x) \cap (X \le a) = (X \le a)$, so that $F_X(x \mid B) = 1$.
Thus
$$F_X(x \mid B) = \begin{cases} \dfrac{F_X(x)}{F_X(a)}, & x < a, \\[2mm] 1, & x \ge a, \end{cases} \tag{4-20}$$
and hence
$$f_X(x \mid B) = \frac{d}{dx} F_X(x \mid B) = \begin{cases} \dfrac{f_X(x)}{F_X(a)}, & x < a, \\[2mm] 0, & \text{otherwise.} \end{cases} \tag{4-21}$$
[Fig. 4.4: (a) $F_X(x \mid B)$ reaching $1$ at $x = a$, shown with $F_X(x)$; (b) $f_X(x \mid B)$, the density $f_X(x)$ scaled by $1/F_X(a)$ and cut off at $x = a$.]
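A small numeric check of (4-20)-(4-21): conditioning on $B = \{X \le a\}$ rescales the density by $1/F_X(a)$ below $a$ and truncates it above, so the conditional density must still integrate to one. The unit exponential used for $F_X$ below is an assumed choice for illustration, not from the notes:

```python
import math

# Hypothetical choice of F_X: the unit exponential, F_X(x) = 1 - e^{-x}.
a = 1.5
F = lambda x: 1 - math.exp(-x)
f = lambda x: math.exp(-x)

# (4-20)/(4-21): rescale by 1/F_X(a) below a, truncate above a.
F_cond = lambda x: F(x) / F(a) if x < a else 1.0
f_cond = lambda x: f(x) / F(a) if x < a else 0.0

# The conditional density must integrate to one over (0, a)
# (midpoint rule):
steps = 100_000
total = sum(f_cond((i + 0.5) * a / steps) * a / steps for i in range(steps))
print(round(total, 6))   # ~1.0
```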
Example 4.6: Let $B$ represent the event $\{a < X(\xi) \le b\}$ with $b > a$. For a given $F_X(x)$, determine $F_X(x \mid B)$ and $f_X(x \mid B)$.
Solution:
$$F_X(x \mid B) = P\{X(\xi) \le x \mid B\} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{P\{a < X(\xi) \le b\}} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{F_X(b) - F_X(a)}. \tag{4-22}$$
For $x < a$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \phi$, and hence
$$F_X(x \mid B) = 0. \tag{4-23}$$
For $a \le x < b$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \{a < X(\xi) \le x\}$, and hence
$$F_X(x \mid B) = \frac{P\{a < X(\xi) \le x\}}{F_X(b) - F_X(a)} = \frac{F_X(x) - F_X(a)}{F_X(b) - F_X(a)}. \tag{4-24}$$
For $x \ge b$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \{a < X(\xi) \le b\}$, so that
$$F_X(x \mid B) = 1. \tag{4-25}$$
Using (4-23)-(4-25), we get (see Fig. 4.5)
$$f_X(x \mid B) = \begin{cases} \dfrac{f_X(x)}{F_X(b) - F_X(a)}, & a < x \le b, \\[2mm] 0, & \text{otherwise.} \end{cases} \tag{4-26}$$
[Fig. 4.5: $f_X(x \mid B)$, the density $f_X(x)$ rescaled by $1/(F_X(b) - F_X(a))$ on $(a, b]$, shown with $f_X(x)$.]
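The two forms (4-24) and (4-26) can be checked against each other numerically: integrating the conditional density from $a$ to $x$ should reproduce the conditional distribution function. The unit-exponential $F_X$ and the endpoints $a$, $b$ below are assumed for illustration:

```python
import math

# Hypothetical F_X: unit exponential; condition on B = {a < X <= b}.
a, b = 0.5, 2.0
F = lambda x: 1 - math.exp(-x)
f = lambda x: math.exp(-x)
mass = F(b) - F(a)

# (4-26): conditional density f_X(x)/(F_X(b) - F_X(a)) on (a, b].
f_cond = lambda x: f(x) / mass if a < x <= b else 0.0

# (4-24) recovered as a numeric integral of (4-26) from a to x:
def F_cond(x, steps=200_000):
    x = min(max(x, a), b)
    h = (x - a) / steps
    return sum(f_cond(a + (i + 0.5) * h) * h for i in range(steps))

x = 1.0
lhs = F_cond(x)                       # integral of the conditional density
rhs = (F(x) - F(a)) / mass            # closed form (4-24)
print(round(lhs, 6), round(rhs, 6))   # the two agree
```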
We can use the conditional p.d.f. together with Bayes' theorem to update our a priori knowledge about the probability of events in the presence of new observations. Ideally, any new information should be used to update our knowledge. As we see in the next example, the conditional p.d.f. together with Bayes' theorem allows systematic updating. For any two events $A$ and $B$, Bayes' theorem gives
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}. \tag{4-27}$$
Let $B = \{x_1 < X(\xi) \le x_2\}$, so that (4-27) becomes (see (4-13) and (4-17))
$$P\{A \mid (x_1 < X(\xi) \le x_2)\} = \frac{P\{(x_1 < X(\xi) \le x_2) \mid A\}\,P(A)}{P\{x_1 < X(\xi) \le x_2\}} = \frac{F_X(x_2 \mid A) - F_X(x_1 \mid A)}{F_X(x_2) - F_X(x_1)}\,P(A) = \frac{\int_{x_1}^{x_2} f_X(x \mid A)\,dx}{\int_{x_1}^{x_2} f_X(x)\,dx}\,P(A). \tag{4-28}$$
Further, let $x_1 = x$, $x_2 = x + \varepsilon$, $\varepsilon > 0$, so that in the limit as $\varepsilon \to 0$,
$$\lim_{\varepsilon \to 0} P\{A \mid (x < X(\xi) \le x + \varepsilon)\} = P\{A \mid X(\xi) = x\} = \frac{f_X(x \mid A)}{f_X(x)}\,P(A), \tag{4-29}$$
or
$$f_{X \mid A}(x \mid A) = \frac{P(A \mid X = x)\,f_X(x)}{P(A)}. \tag{4-30}$$
$$P(A \mid P = p) = p^k q^{n-k}, \tag{4-34}$$
[Fig. 4.6: the uniform a priori p.d.f. $f_P(p) = 1$, $0 < p < 1$.]
and using (4-32) we get
$$P(A) = \int_0^1 P(A \mid P = p)\,f_P(p)\,dp = \int_0^1 p^k (1-p)^{n-k}\,dp = \frac{(n-k)!\,k!}{(n+1)!}. \tag{4-35}$$
The a posteriori p.d.f. $f_{P \mid A}(p \mid A)$ represents the updated information given the event $A$, and from (4-30)
$$f_{P \mid A}(p \mid A) = \frac{P(A \mid P = p)\,f_P(p)}{P(A)} = \frac{(n+1)!}{(n-k)!\,k!}\,p^k q^{n-k} = \beta(n, k), \qquad 0 < p < 1. \tag{4-36}$$
[Fig. 4.7: the a posteriori p.d.f. $f_{P \mid A}(p \mid A)$ on $(0, 1)$, peaking near $p = k/n$.]
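The update (4-34)-(4-36) is easy to verify numerically: the posterior is a beta density, it integrates to one, and its mean $(k+1)/(n+2)$ lies close to the observed fraction $k/n$. The observation counts $n$, $k$ below are hypothetical:

```python
import math

# Uniform prior f_P(p) = 1 on (0,1) updated by A = "k heads in n tosses",
# per (4-34)-(4-36). Hypothetical observation: 7 heads in 10 tosses.
def posterior(p, n, k):
    """A posteriori p.d.f. f_{P|A}(p|A) = (n+1)!/((n-k)! k!) p^k q^{n-k}."""
    c = math.factorial(n + 1) / (math.factorial(n - k) * math.factorial(k))
    return c * p**k * (1 - p)**(n - k)

n, k = 10, 7

# P(A) from (4-35):
PA = math.factorial(n - k) * math.factorial(k) / math.factorial(n + 1)
print(PA)   # (n-k)! k! / (n+1)!

# Midpoint-rule checks: total mass and posterior mean.
steps = 100_000
total = sum(posterior((i + 0.5) / steps, n, k) / steps for i in range(steps))
mean = sum((i + 0.5) / steps * posterior((i + 0.5) / steps, n, k) / steps
           for i in range(steps))
print(round(total, 4), round(mean, 4))   # ~1.0 and ~(k+1)/(n+2) = 0.6667
```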
Stirling's Formula: What is it?
Since $\int_{k-1}^{k} \log x\,dx < \log k < \int_{k}^{k+1} \log x\,dx$, summing over $k = 1, 2, \ldots, n$ we get
$$\int_0^n \log x\,dx < \log n! < \int_1^{n+1} \log x\,dx.$$
$$a_n - a_{n+1} = (2n+1)\left[\frac{1}{2n+1} + \frac{1}{3(2n+1)^3} + \frac{1}{5(2n+1)^5} + \cdots\right] - 1 = \frac{1}{3(2n+1)^2} + \frac{1}{5(2n+1)^4} + \frac{1}{7(2n+1)^6} + \cdots > 0. \tag{4-42}$$
Thus $\{a_n\}$ is a monotone decreasing sequence; let $c$ represent its limit, i.e.,
$$\lim_{n\to\infty} a_n = c. \tag{4-43}$$
From the zeros of $\sin x$ and its infinite-product representation, one obtains Wallis's formula
$$\frac{\pi}{2} = \lim_{n\to\infty}\left[\frac{2\cdot 4\cdot 6\cdots 2n}{1\cdot 3\cdot 5\cdots (2n-1)}\right]^2 \frac{1}{2n+1}.$$
Thus as $n \to \infty$, this gives
$$\frac{2\cdot 4\cdot 6\cdots 2n}{1\cdot 3\cdot 5\cdots (2n-1)}\,\frac{1}{\sqrt{2n+1}} = \frac{2^{2n}(n!)^2}{(2n)!\,\sqrt{2n+1}} \to \sqrt{\frac{\pi}{2}}.$$
Combining this with (4-43) gives $e^c = \sqrt{2\pi}$, so that as $n \to \infty$,
$$n! \sim \sqrt{2\pi}\; n^{n+\frac{1}{2}}\, e^{-n}.$$
More precisely,
$$\sqrt{2\pi}\; n^{n+\frac{1}{2}} e^{-n}\, e^{1/(12n+1)} < n! < \sqrt{2\pi}\; n^{n+\frac{1}{2}} e^{-n}\, e^{1/(12n)}, \tag{4-48}$$
and applying these bounds to the factorials in $\binom{n}{k}$ yields the correction constants
$$c_1 = e^{\frac{1}{12n+1} - \frac{1}{12k} - \frac{1}{12(n-k)}}, \qquad c_2 = e^{\frac{1}{12n} - \frac{1}{12k+1} - \frac{1}{12(n-k)+1}}.$$
Notice that the constants $c_1$ and $c_2$ are quite close to each other.
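The bracketing form of Stirling's formula, with the correction factors $e^{1/(12n+1)}$ and $e^{1/(12n)}$, can be checked directly even for small $n$:

```python
import math

# Stirling's approximation sqrt(2*pi) * n^{n+1/2} * e^{-n}:
def stirling(n):
    return math.sqrt(2 * math.pi) * n**(n + 0.5) * math.exp(-n)

# The correction factors bracket n! tightly:
for n in (5, 10, 20):
    lo = stirling(n) * math.exp(1 / (12 * n + 1))
    hi = stirling(n) * math.exp(1 / (12 * n))
    exact = math.factorial(n)
    print(n, lo < exact < hi, round(exact / stirling(n), 5))
```

Even at $n = 5$ the bounds hold with the ratio $n!/(\sqrt{2\pi}\,n^{n+1/2}e^{-n})$ already within about $1.7\%$ of unity.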