
4. Binomial Random Variable Approximations,
Conditional Probability Density Functions
and Stirling's Formula
Let X represent a Binomial r.v as in (3-42). Then from (2-30)

$$P(k_1 \le X \le k_2) = \sum_{k=k_1}^{k_2} P_n(k) = \sum_{k=k_1}^{k_2} \binom{n}{k} p^k q^{n-k}. \tag{4-1}$$

Since the binomial coefficient $\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$ grows quite rapidly with n, it is difficult to compute (4-1) for large n. In this context, two approximations are extremely useful.

4.1 The Normal Approximation (DeMoivre-Laplace Theorem)

Suppose $n \to \infty$ with p held fixed. Then for k in the $\sqrt{npq}$ neighborhood of np, we can approximate

$$\binom{n}{k} p^k q^{n-k} \simeq \frac{1}{\sqrt{2\pi npq}}\, e^{-(k - np)^2 / 2npq}. \tag{4-2}$$
Thus if $k_1$ and $k_2$ in (4-1) are within or around the neighborhood of the interval $\big(np - \sqrt{npq},\; np + \sqrt{npq}\,\big)$, we can approximate the summation in (4-1) by an integration. In that case (4-1) reduces to

$$P(k_1 \le X \le k_2) \simeq \int_{k_1}^{k_2} \frac{1}{\sqrt{2\pi npq}}\, e^{-(x - np)^2 / 2npq}\, dx = \int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy, \tag{4-3}$$

where

$$x_1 = \frac{k_1 - np}{\sqrt{npq}}, \qquad x_2 = \frac{k_2 - np}{\sqrt{npq}}.$$

We can express (4-3) in terms of the normalized integral

$$\operatorname{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-y^2/2}\, dy = -\operatorname{erf}(-x) \tag{4-4}$$

that has been tabulated extensively (see Table 4.1).
For example, if $x_1$ and $x_2$ are both positive, we obtain

$$P(k_1 \le X \le k_2) \simeq \operatorname{erf}(x_2) - \operatorname{erf}(x_1). \tag{4-5}$$

Example 4.1: A fair coin is tossed 5,000 times. Find the probability that the number of heads is between 2,475 and 2,525.

Solution: We need $P(2{,}475 \le X \le 2{,}525)$. Here n is large so that we can use the normal approximation. In this case $p = \frac{1}{2}$, so that $np = 2{,}500$ and $\sqrt{npq} \simeq 35$. Since $np - \sqrt{npq} \simeq 2{,}465$ and $np + \sqrt{npq} \simeq 2{,}535$, the approximation is valid for $k_1 = 2{,}475$ and $k_2 = 2{,}525$. Thus

$$P(k_1 \le X \le k_2) \simeq \int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy.$$

Here

$$x_1 = \frac{k_1 - np}{\sqrt{npq}} = -\frac{5}{7}, \qquad x_2 = \frac{k_2 - np}{\sqrt{npq}} = \frac{5}{7}.$$
$$\operatorname{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-y^2/2}\, dy = G(x) - \frac{1}{2}$$
x erf(x) x erf(x) x erf(x) x erf(x)

0.05 0.01994 0.80 0.28814 1.55 0.43943 2.30 0.48928


0.10 0.03983 0.85 0.30234 1.60 0.44520 2.35 0.49061
0.15 0.05962 0.90 0.31594 1.65 0.45053 2.40 0.49180
0.20 0.07926 0.95 0.32894 1.70 0.45543 2.45 0.49286
0.25 0.09871 1.00 0.34134 1.75 0.45994 2.50 0.49379

0.30 0.11791 1.05 0.35314 1.80 0.46407 2.55 0.49461


0.35 0.13683 1.10 0.36433 1.85 0.46784 2.60 0.49534
0.40 0.15542 1.15 0.37493 1.90 0.47128 2.65 0.49597
0.45 0.17364 1.20 0.38493 1.95 0.47441 2.70 0.49653
0.50 0.19146 1.25 0.39435 2.00 0.47726 2.75 0.49702

0.55 0.20884 1.30 0.40320 2.05 0.47982 2.80 0.49744


0.60 0.22575 1.35 0.41149 2.10 0.48214 2.85 0.49781
0.65 0.24215 1.40 0.41924 2.15 0.48422 2.90 0.49813
0.70 0.25804 1.45 0.42647 2.20 0.48610 2.95 0.49841
0.75 0.27337 1.50 0.43319 2.25 0.48778 3.00 0.49865

Table 4.1
Since $x_1 < 0$, from Fig. 4.1(b), the above probability is given by

$$P(2{,}475 \le X \le 2{,}525) = \operatorname{erf}(x_2) - \operatorname{erf}(x_1) = \operatorname{erf}(x_2) + \operatorname{erf}(|x_1|) = 2\operatorname{erf}\!\left(\tfrac{5}{7}\right) \simeq 0.516,$$

where we have used Table 4.1 $\big(\operatorname{erf}(0.7) \simeq 0.258\big)$.

[Fig. 4.1: the standard Gaussian density $\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$ with the area between $x_1$ and $x_2$ shaded: (a) $x_1 > 0,\ x_2 > 0$; (b) $x_1 < 0,\ x_2 > 0$.]
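These numbers are easy to verify. The sketch below (Python, standard library only) evaluates the exact binomial sum (4-1) through log-factorials and the approximation (4-5); note that `math.erf` is the standard error function, related to the erf of (4-4) by $\operatorname{erf}(x) = \tfrac{1}{2}\operatorname{erf}_{\mathrm{std}}(x/\sqrt{2})$.

```python
from math import erf, exp, lgamma, log, sqrt

# Example 4.1: fair coin, n = 5,000 tosses.
n, p, k1, k2 = 5000, 0.5, 2475, 2525
q = 1 - p

# Exact binomial sum (4-1), via log-factorials to avoid overflow/underflow.
def log_pmf(k):
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(q))

exact = sum(exp(log_pmf(k)) for k in range(k1, k2 + 1))

# Normal approximation (4-5), with the exact sqrt(npq) = 35.36
# rather than the rounded value 35 used above.
s = sqrt(n * p * q)
x1, x2 = (k1 - n * p) / s, (k2 - n * p) / s
approx = 0.5 * erf(x2 / sqrt(2)) - 0.5 * erf(x1 / sqrt(2))

print(exact, approx)   # ~0.53 and ~0.52; the coarser table lookup gave 0.516
```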

4.2 The Poisson Approximation

As we have mentioned earlier, for large n, the Gaussian approximation of a binomial r.v is valid only if p is fixed, i.e., only if $np \gg 1$ and $npq \gg 1$. What if np is small, or if it does not increase with n?
Obviously that is the case if, for example, $p \to 0$ as $n \to \infty$, such that $np = \lambda$ is a fixed number.

Many random phenomena in nature in fact follow this pattern. The total number of calls on a telephone line, claims in an insurance company, etc., tend to follow this type of behavior. Consider random arrivals such as telephone calls over a line. Let n represent the total number of calls in the interval $(0, T)$. From our experience, as $T \to \infty$ we have $n \to \infty$, so that we may assume $n \simeq \mu T$. Consider a small interval of duration $\Delta$ as in Fig. 4.2. If there is only a single call coming in, the probability p of that single call occurring in that interval must depend on its relative size with respect to T.

[Fig. 4.2: arrivals $1, 2, \ldots, n$ in the interval $(0, T)$, with a small subinterval of duration $\Delta$.]

Hence we may assume $p \simeq \Delta/T$. Note that $p \to 0$ as $T \to \infty$. However in this case $np = \mu T \cdot \frac{\Delta}{T} = \mu\Delta = \lambda$ is a constant, and the normal approximation is invalid here.

Suppose the interval $\Delta$ in Fig. 4.2 is of interest to us. A call inside that interval is a "success" (H), whereas one outside is a "failure" (T). This is equivalent to the coin tossing situation, and hence the probability $P_n(k)$ of obtaining k calls (in any order) in an interval of duration $\Delta$ is given by the binomial p.m.f. Thus

$$P_n(k) = \frac{n!}{(n-k)!\,k!}\, p^k (1-p)^{n-k}, \tag{4-6}$$

and here as $n \to \infty$, $p \to 0$ such that $np = \lambda$. It is easy to obtain an excellent approximation to (4-6) in that situation. To see this, rewrite (4-6) as
$$P_n(k) = \frac{n(n-1)\cdots(n-k+1)}{n^k}\, \frac{(np)^k}{k!} \left(1 - \frac{np}{n}\right)^{n-k}
= \left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right) \frac{\lambda^k}{k!}\, \frac{(1 - \lambda/n)^n}{(1 - \lambda/n)^k}. \tag{4-7}$$

Thus

$$\lim_{n \to \infty,\; p \to 0,\; np = \lambda} P_n(k) = e^{-\lambda}\, \frac{\lambda^k}{k!}, \tag{4-8}$$

since the finite products $\left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right)$ as well as $\left(1 - \frac{\lambda}{n}\right)^k$ tend to unity as $n \to \infty$, and

$$\lim_{n \to \infty} \left(1 - \frac{\lambda}{n}\right)^n = e^{-\lambda}.$$

The right side of (4-8) represents the Poisson p.m.f and the Poisson approximation to the binomial r.v is valid in situations where the binomial r.v parameters n and p diverge to two extremes $(n \to \infty,\ p \to 0)$ such that their product np is a constant.
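The convergence in (4-8) can be seen numerically; a minimal sketch that holds $np = \lambda$ fixed while n grows:

```python
from math import comb, exp, factorial

lam, k = 2.0, 3                      # np = lambda held fixed
poisson = exp(-lam) * lam**k / factorial(k)

for n in (10, 100, 1000, 10_000):
    p = lam / n
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"n = {n:6d}: P_n({k}) = {binomial:.6f}  (Poisson limit {poisson:.6f})")
```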
Example 4.2: Winning a Lottery: Suppose two million lottery tickets are issued with 100 winning tickets among them. (a) If a person purchases 100 tickets, what is the probability of winning? (b) How many tickets should one buy to be 95% confident of having a winning ticket?

Solution: The probability of buying a winning ticket is

$$p = \frac{\text{No. of winning tickets}}{\text{Total no. of tickets}} = \frac{100}{2 \times 10^6} = 5 \times 10^{-5}.$$

Here $n = 100$, and the number of winning tickets X in the n purchased tickets has an approximate Poisson distribution with parameter $\lambda = np = 100 \times 5 \times 10^{-5} = 0.005$. Thus

$$P(X = k) = e^{-\lambda}\, \frac{\lambda^k}{k!},$$

and (a) the probability of winning is $P(X \ge 1) = 1 - P(X = 0) = 1 - e^{-\lambda} \approx 0.005$.
(b) In this case we need $P(X \ge 1) \ge 0.95$, and

$$P(X \ge 1) = 1 - e^{-\lambda} \ge 0.95 \quad \text{implies} \quad \lambda \ge \ln 20 \simeq 3.$$

But $\lambda = np = n \times 5 \times 10^{-5} \ge 3$ gives $n \ge 60{,}000$. Thus one needs to buy about 60,000 tickets to be 95% confident of having a winning ticket!
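Both parts reduce to one-line computations, as the following sketch shows:

```python
from math import exp, log

p = 100 / 2_000_000        # probability that a single ticket wins

# (a) 100 tickets: lambda = np = 0.005
lam = 100 * p
print(1 - exp(-lam))       # P(X >= 1) = 1 - e^{-lambda} ~ 0.005

# (b) need 1 - e^{-np} >= 0.95, i.e. n >= ln(20)/p
print(log(20) / p)         # ~ 59,915, i.e. about 60,000 tickets
```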
Example 4.3: A space craft has 100,000 components $(n \to \infty)$. The probability of any one component being defective is $2 \times 10^{-5}$ $(p \to 0)$. The mission will be in danger if five or more components become defective. Find the probability of such an event.

Solution: Here n is large and p is small, and hence the Poisson approximation is valid, with $np = \lambda = 100{,}000 \times 2 \times 10^{-5} = 2$. The desired probability is given by

$$P(X \ge 5) = 1 - P(X \le 4) = 1 - \sum_{k=0}^{4} e^{-\lambda}\, \frac{\lambda^k}{k!} = 1 - e^{-2}\left(1 + 2 + 2 + \frac{4}{3} + \frac{2}{3}\right) \approx 0.052.$$

Conditional Probability Density Function

For any two events A and B, we have defined the conditional probability of A given B as

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) \ne 0. \tag{4-9}$$

Noting that the probability distribution function $F_X(x)$ is given by

$$F_X(x) = P\{X(\xi) \le x\}, \tag{4-10}$$

we may define the conditional distribution of the r.v X given the event B as

$$F_X(x \mid B) = P\{X(\xi) \le x \mid B\} = \frac{P\{(X(\xi) \le x) \cap B\}}{P(B)}. \tag{4-11}$$
Thus the definition of the conditional distribution depends on conditional probability, and since it obeys all probability axioms, it follows that the conditional distribution has the same properties as any distribution function. In particular

$$F_X(+\infty \mid B) = \frac{P\{(X(\xi) \le +\infty) \cap B\}}{P(B)} = \frac{P(B)}{P(B)} = 1,$$
$$F_X(-\infty \mid B) = \frac{P\{(X(\xi) \le -\infty) \cap B\}}{P(B)} = \frac{P(\emptyset)}{P(B)} = 0. \tag{4-12}$$

Further,

$$P(x_1 < X(\xi) \le x_2 \mid B) = \frac{P\{(x_1 < X(\xi) \le x_2) \cap B\}}{P(B)} = F_X(x_2 \mid B) - F_X(x_1 \mid B), \tag{4-13}$$

since for $x_2 \ge x_1$,

$$\{X(\xi) \le x_2\} = \{X(\xi) \le x_1\} \cup \{x_1 < X(\xi) \le x_2\}. \tag{4-14}$$

The conditional density function is the derivative of the conditional distribution function. Thus

$$f_X(x \mid B) = \frac{dF_X(x \mid B)}{dx}, \tag{4-15}$$

and proceeding as in (3-26) we obtain

$$F_X(x \mid B) = \int_{-\infty}^{x} f_X(u \mid B)\, du. \tag{4-16}$$

Using (4-16), we can also rewrite (4-13) as

$$P(x_1 < X(\xi) \le x_2 \mid B) = \int_{x_1}^{x_2} f_X(x \mid B)\, dx. \tag{4-17}$$
Example 4.4: Refer to Example 3.2. Toss a coin and $X(T) = 0$, $X(H) = 1$. Suppose $B = \{H\}$. Determine $F_X(x \mid B)$.

Solution: From Example 3.2, $F_X(x)$ has the form shown in Fig. 4.3(a). We need $F_X(x \mid B)$ for all x.

For $x < 0$, $\{X(\xi) \le x\} = \emptyset$, so that $\{(X(\xi) \le x) \cap B\} = \emptyset$ and $F_X(x \mid B) = 0$.

For $0 \le x < 1$, $\{X(\xi) \le x\} = \{T\}$, so that $\{(X(\xi) \le x) \cap B\} = \{T\} \cap \{H\} = \emptyset$ and $F_X(x \mid B) = 0$.

For $x \ge 1$, $\{X(\xi) \le x\} = \Omega$, and $\{(X(\xi) \le x) \cap B\} = \Omega \cap B = \{B\}$ and $F_X(x \mid B) = \frac{P(B)}{P(B)} = 1$ (see Fig. 4.3(b)).

[Fig. 4.3: (a) $F_X(x)$, a staircase rising to q at $x = 0$ and to 1 at $x = 1$; (b) $F_X(x \mid B)$, a single step from 0 to 1 at $x = 1$.]
Example 4.5: Given $F_X(x)$, suppose $B = \{X(\xi) \le a\}$. Find $f_X(x \mid B)$.

Solution: We will first determine $F_X(x \mid B)$. From (4-11) and B as given above, we have

$$F_X(x \mid B) = \frac{P\{(X \le x) \cap (X \le a)\}}{P\{X \le a\}}. \tag{4-18}$$

For $x < a$, $(X \le x) \cap (X \le a) = (X \le x)$, so that

$$F_X(x \mid B) = \frac{P\{X \le x\}}{P\{X \le a\}} = \frac{F_X(x)}{F_X(a)}. \tag{4-19}$$

For $x \ge a$, $(X \le x) \cap (X \le a) = (X \le a)$, so that $F_X(x \mid B) = 1$. Thus

$$F_X(x \mid B) = \begin{cases} \dfrac{F_X(x)}{F_X(a)}, & x < a, \\[2mm] 1, & x \ge a, \end{cases} \tag{4-20}$$

and hence

$$f_X(x \mid B) = \frac{d}{dx} F_X(x \mid B) = \begin{cases} \dfrac{f_X(x)}{F_X(a)}, & x < a, \\[2mm] 0, & \text{otherwise}. \end{cases} \tag{4-21}$$

[Fig. 4.4: (a) $F_X(x \mid B)$ compared with $F_X(x)$; the conditional distribution reaches 1 at $x = a$. (b) $f_X(x \mid B)$ compared with $f_X(x)$; the conditional density is scaled up by $1/F_X(a)$ and vanishes beyond $x = a$.]

Example 4.6: Let B represent the event $\{a < X(\xi) \le b\}$ with $b > a$. For a given $F_X(x)$, determine $F_X(x \mid B)$ and $f_X(x \mid B)$.

Solution:

$$F_X(x \mid B) = P\{X(\xi) \le x \mid B\} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{P\{a < X(\xi) \le b\}} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{F_X(b) - F_X(a)}. \tag{4-22}$$

For $x < a$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \emptyset$, and hence

$$F_X(x \mid B) = 0. \tag{4-23}$$

For $a \le x < b$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \{a < X(\xi) \le x\}$, and hence

$$F_X(x \mid B) = \frac{P\{a < X(\xi) \le x\}}{F_X(b) - F_X(a)} = \frac{F_X(x) - F_X(a)}{F_X(b) - F_X(a)}. \tag{4-24}$$

For $x \ge b$, we have $\{X(\xi) \le x\} \cap \{a < X(\xi) \le b\} = \{a < X(\xi) \le b\}$, so that

$$F_X(x \mid B) = 1. \tag{4-25}$$

Using (4-23)-(4-25), we get (see Fig. 4.5)

$$f_X(x \mid B) = \begin{cases} \dfrac{f_X(x)}{F_X(b) - F_X(a)}, & a < x \le b, \\[2mm] 0, & \text{otherwise}. \end{cases} \tag{4-26}$$

[Fig. 4.5: $f_X(x \mid B)$ compared with $f_X(x)$; the conditional density is confined to $(a, b]$ and rescaled.]
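In words, conditioning on $\{a < X(\xi) \le b\}$ simply rescales the portion of $f_X(x)$ inside the interval so that it integrates to one. A minimal sketch of (4-26), with a unit exponential chosen as an assumed example density:

```python
import math

# Assumed example density: unit exponential.
f = lambda x: math.exp(-x) if x >= 0 else 0.0          # f_X(x)
F = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0    # F_X(x)

def f_cond(x, a, b):
    """f_X(x | a < X <= b) from (4-26)."""
    return f(x) / (F(b) - F(a)) if a < x <= b else 0.0

# The conditional density integrates to one over (a, b]:
a, b, N = 0.5, 2.0, 100_000
dx = (b - a) / N
print(sum(f_cond(a + (i + 0.5) * dx, a, b) * dx for i in range(N)))  # ~ 1.0
```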
We can use the conditional p.d.f together with Bayes' theorem to update our a-priori knowledge about the probability of events in the presence of new observations. Ideally, any new information should be used to update our knowledge. As we see in the next example, the conditional p.d.f together with Bayes' theorem allow systematic updating. For any two events A and B, Bayes' theorem gives

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}. \tag{4-27}$$

Let $B = \{x_1 < X(\xi) \le x_2\}$ so that (4-27) becomes (see (4-13) and (4-17))

$$P\big(A \mid x_1 < X(\xi) \le x_2\big) = \frac{P\{(x_1 < X(\xi) \le x_2) \mid A\}\, P(A)}{P\{x_1 < X(\xi) \le x_2\}} = \frac{F_X(x_2 \mid A) - F_X(x_1 \mid A)}{F_X(x_2) - F_X(x_1)}\, P(A) = \frac{\int_{x_1}^{x_2} f_X(x \mid A)\, dx}{\int_{x_1}^{x_2} f_X(x)\, dx}\, P(A). \tag{4-28}$$
Further, let $x_1 = x$, $x_2 = x + \varepsilon$, $\varepsilon > 0$, so that in the limit as $\varepsilon \to 0$,

$$\lim_{\varepsilon \to 0} P\big(A \mid x < X(\xi) \le x + \varepsilon\big) = P\big(A \mid X(\xi) = x\big) = \frac{f_X(x \mid A)}{f_X(x)}\, P(A), \tag{4-29}$$

or

$$f_{X|A}(x \mid A) = \frac{P(A \mid X = x)\, f_X(x)}{P(A)}. \tag{4-30}$$

From (4-30), we also get

$$P(A) \underbrace{\int_{-\infty}^{\infty} f_X(x \mid A)\, dx}_{1} = \int_{-\infty}^{\infty} P(A \mid X = x)\, f_X(x)\, dx, \tag{4-31}$$

or

$$P(A) = \int_{-\infty}^{\infty} P(A \mid X = x)\, f_X(x)\, dx, \tag{4-32}$$

and using this in (4-30), we get the desired result

$$f_{X|A}(x \mid A) = \frac{P(A \mid X = x)\, f_X(x)}{\int_{-\infty}^{\infty} P(A \mid X = x)\, f_X(x)\, dx}. \tag{4-33}$$
To illustrate the usefulness of this formulation, let us reexamine the coin tossing problem.

Example 4.7: Let $p = P(H)$ represent the probability of obtaining a head in a toss. For a given coin, a-priori p can possess any value in the interval (0, 1). In the absence of any additional information, we may assume the a-priori p.d.f $f_P(p)$ to be a uniform distribution in that interval (see Fig. 4.6). Now suppose we actually perform an experiment of tossing the coin n times, and k heads are observed. This is new information. How can we update $f_P(p)$?

[Fig. 4.6: the a-priori p.d.f $f_P(p)$, uniform on (0, 1).]

Solution: Let A = "k heads in n specific tosses". Since these tosses result in a specific sequence,

$$P(A \mid P = p) = p^k q^{n-k}, \tag{4-34}$$
and using (4-32) we get

$$P(A) = \int_0^1 P(A \mid P = p)\, f_P(p)\, dp = \int_0^1 p^k (1-p)^{n-k}\, dp = \frac{(n-k)!\, k!}{(n+1)!}. \tag{4-35}$$

The a-posteriori p.d.f $f_{P|A}(p \mid A)$ represents the updated information given the event A, and from (4-30)

$$f_{P|A}(p \mid A) = \frac{P(A \mid P = p)\, f_P(p)}{P(A)} = \frac{(n+1)!}{(n-k)!\, k!}\, p^k q^{n-k}, \quad 0 < p < 1, \tag{4-36}$$

which is a $\beta(n, k)$ density (see Fig. 4.7).

[Fig. 4.7: the a-posteriori p.d.f $f_{P|A}(p \mid A)$ on (0, 1).]

Notice that the a-posteriori p.d.f of p in (4-36) is not a uniform distribution, but a beta distribution. We can use this a-posteriori p.d.f to make further predictions. For example, in the light of the above experiment, what can we say about the probability of a head occurring in the next (n+1)th toss?
Let B = "head occurring in the (n+1)th toss, given that k heads have occurred in n previous tosses". Clearly $P(B \mid P = p) = p$, and from (4-32)

$$P(B) = \int_0^1 P(B \mid P = p)\, f_P(p \mid A)\, dp. \tag{4-37}$$

Notice that unlike (4-32), we have used the a-posteriori p.d.f in (4-37) to reflect our knowledge about the experiment already performed. Using (4-36) in (4-37), we get

$$P(B) = \int_0^1 p \cdot \frac{(n+1)!}{(n-k)!\, k!}\, p^k q^{n-k}\, dp = \frac{k+1}{n+2}. \tag{4-38}$$

Thus, if n = 10 and k = 6, then

$$P(B) = \frac{7}{12} \simeq 0.58,$$

which is more realistic compared with p = 0.5.
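The closed form in (4-38) comes from the beta integral $\int_0^1 p^{k+1}(1-p)^{n-k}\, dp = \frac{(k+1)!\,(n-k)!}{(n+2)!}$. A small sketch (with a hypothetical helper `predictive_head_prob`) evaluating (4-37) this way:

```python
from fractions import Fraction
from math import factorial

def predictive_head_prob(n, k):
    """P(head on toss n+1 | k heads in n tosses), uniform prior on p.

    Evaluates (4-37) with the posterior (4-36): the normalizing constant
    (n+1)!/((n-k)! k!) times the beta integral
    int_0^1 p^(k+1) (1-p)^(n-k) dp = (k+1)! (n-k)! / (n+2)!.
    """
    num = factorial(n + 1) * factorial(k + 1) * factorial(n - k)
    den = factorial(n - k) * factorial(k) * factorial(n + 2)
    return Fraction(num, den)          # simplifies to (k+1)/(n+2)

print(predictive_head_prob(10, 6))     # 7/12 ~ 0.583
```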
To summarize, if the probability of an event X is unknown, one should make a noncommittal judgement about its a-priori probability density function $f_X(x)$. Usually the uniform distribution is a reasonable assumption in the absence of any other information. Then experimental results (A) are obtained, and our knowledge about X must be updated to reflect this new information. Bayes' rule helps to obtain the a-posteriori p.d.f of X given A. From that point on, this a-posteriori p.d.f $f_{X|A}(x \mid A)$ should be used to make further predictions and calculations.
Stirling's Formula: What is it?

Stirling's formula gives an accurate approximation for n! as follows:

$$n! \sim \sqrt{2\pi}\; n^{n + \frac{1}{2}}\, e^{-n} \tag{4-39}$$

in the sense that the ratio of the two sides in (4-39) tends to one; i.e., their relative error is small, or the percentage error decreases steadily as n increases. The approximation is remarkably accurate even for small n. Thus 1! = 1 is approximated as $\sqrt{2\pi}/e \approx 0.9221$, and 3! = 6 is approximated as 5.836.

Prior to Stirling's work, DeMoivre had established the same formula in (4-39) in connection with binomial distributions in probability theory. However DeMoivre did not establish the constant term $\sqrt{2\pi}$ in (4-39); that was done by James Stirling ($\sim$1730).
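A quick numerical check of (4-39):

```python
from math import exp, factorial, pi, sqrt

for n in (1, 3, 10, 50):
    stirling = sqrt(2 * pi) * n ** (n + 0.5) * exp(-n)
    print(f"n = {n:2d}: n! = {factorial(n)}, ratio = {factorial(n) / stirling:.6f}")
# The ratio is ~1.084 already at n = 1 and decreases steadily toward 1.
```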
How to prove it?

We start with a simple observation: The function $\log x$ is a monotone increasing function, and hence we have

$$\int_{k-1}^{k} \log x\, dx < \log k < \int_{k}^{k+1} \log x\, dx.$$

[Figure: the increasing curve $\log x$, with $\log k$ sandwiched between the areas under the curve on $[k-1, k]$ and $[k, k+1]$.]

Summing over $k = 1, 2, \ldots, n$, we get

$$\int_0^n \log x\, dx < \log n! < \int_1^{n+1} \log x\, dx,$$

or, since $\int \log x\, dx = x \log x - x$,

$$n \log n - n < \log n! < (n+1)\log(n+1) - n. \tag{4-40}$$
The double inequality in (4-40) clearly suggests that $\log n!$ is close to the arithmetic mean of the two extreme numbers there. However the actual arithmetic mean is complicated and it involves several terms. Since $(n + \frac{1}{2})\log n - n$ is quite close to the above arithmetic mean, we consider the difference¹

$$a_n = \log n! - (n + \tfrac{1}{2})\log n + n. \tag{4-41}$$

This gives

$$a_n - a_{n+1} = \log n! - (n + \tfrac{1}{2})\log n + n - \log(n+1)! + (n + \tfrac{3}{2})\log(n+1) - (n+1)$$
$$= (n + \tfrac{1}{2})\log\frac{n+1}{n} - 1 = (n + \tfrac{1}{2})\log\frac{1 + \frac{1}{2n+1}}{1 - \frac{1}{2n+1}} - 1.$$

¹According to W. Feller this clever idea to use the approximate mean $(n + \frac{1}{2})\log n - n$ is due to H. E. Robbins, and it leads to an elementary proof.
Hence²

$$a_n - a_{n+1} = 2(n + \tfrac{1}{2})\left[\frac{1}{2n+1} + \frac{1}{3(2n+1)^3} + \frac{1}{5(2n+1)^5} + \cdots\right] - 1$$
$$= \frac{1}{3(2n+1)^2} + \frac{1}{5(2n+1)^4} + \frac{1}{7(2n+1)^6} + \cdots > 0. \tag{4-42}$$

Thus $\{a_n\}$ is a monotone decreasing sequence; let c represent its limit, i.e.,

$$\lim_{n \to \infty} a_n = c. \tag{4-43}$$

From (4-41), as $n \to \infty$ this is equivalent to

$$n! \sim e^c\, n^{n + \frac{1}{2}}\, e^{-n}. \tag{4-44}$$

To find the constant term c in (4-44), we can make use of a formula due to Wallis ($\sim$1655).

²By Taylor series expansion, $\log(1+x) = x - \tfrac{1}{2}x^2 + \tfrac{1}{3}x^3 - \tfrac{1}{4}x^4 + \cdots$ and $\log\frac{1}{1-x} = x + \tfrac{1}{2}x^2 + \tfrac{1}{3}x^3 + \tfrac{1}{4}x^4 + \cdots$, so that $\log\frac{1+x}{1-x} = 2\big(x + \tfrac{1}{3}x^3 + \tfrac{1}{5}x^5 + \cdots\big)$.
The well-known function $\frac{\sin x}{x}$ goes to zero at $x = \pm n\pi$; moreover these are the only zeros of this function. Also $\frac{\sin x}{x}$ has no finite poles (all poles are at infinity). As a result we can write

$$\frac{\sin x}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\cdots\left(1 - \frac{x^2}{n^2\pi^2}\right)\cdots$$

[Figure: the curve $\frac{\sin x}{x}$, equal to one at $x = 0$, with zeros at $x = \pm n\pi$.]

[for a proof of this formula, see chapter 4 of Dienes, The Taylor Series], or

$$\sin x = x\left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\cdots\left(1 - \frac{x^2}{n^2\pi^2}\right)\cdots,$$

which for $x = \pi/2$ gives the Wallis' formula

$$1 = \frac{\pi}{2}\left(1 - \frac{1}{2^2}\right)\left(1 - \frac{1}{4^2}\right)\cdots\left(1 - \frac{1}{(2n)^2}\right)\cdots = \frac{\pi}{2}\left(\frac{1 \cdot 3}{2^2}\right)\left(\frac{3 \cdot 5}{4^2}\right)\left(\frac{5 \cdot 7}{6^2}\right)\cdots\left(\frac{(2n-1)(2n+1)}{(2n)^2}\right)\cdots$$
or

$$\frac{\pi}{2} = \left(\frac{2^2}{1 \cdot 3}\right)\left(\frac{4^2}{3 \cdot 5}\right)\left(\frac{6^2}{5 \cdot 7}\right)\cdots\left(\frac{(2n)^2}{(2n-1)(2n+1)}\right)\cdots = \lim_{n \to \infty}\left(\frac{2 \cdot 4 \cdot 6 \cdots 2n}{1 \cdot 3 \cdot 5 \cdots (2n-1)}\right)^{\!2}\frac{1}{2n+1}.$$
Thus as $n \to \infty$, this gives

$$\sqrt{\pi} = \lim_{n \to \infty} \frac{2 \cdot 4 \cdot 6 \cdots 2n}{1 \cdot 3 \cdot 5 \cdots (2n-1)}\, \frac{1}{\sqrt{n + \frac{1}{2}}} = \lim_{n \to \infty} \frac{2^{2n}\,(n!)^2}{(2n)!\, \sqrt{n + \frac{1}{2}}},$$

or

$$\log\sqrt{\pi} = \lim_{n \to \infty}\left\{2n\log 2 + 2\log n! - \log(2n)! - \tfrac{1}{2}\log(n + \tfrac{1}{2})\right\}. \tag{4-45}$$

But from (4-41) and (4-43),

$$\lim_{n \to \infty} \log n! = c + \lim_{n \to \infty}\left\{(n + \tfrac{1}{2})\log n - n\right\}, \tag{4-46}$$

and hence letting $n \to \infty$ in (4-45) and making use of (4-46) we get

$$\log\sqrt{\pi} = c - \tfrac{1}{2}\log 2 - \lim_{n \to \infty}\tfrac{1}{2}\log\left(1 + \tfrac{1}{2n}\right) = c - \tfrac{1}{2}\log 2,$$

which gives

$$e^c = \sqrt{2\pi}. \tag{4-47}$$

With (4-47) in (4-44) we obtain (4-39), and this proves Stirling's formula.
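The constant $e^c = \sqrt{2\pi}$ thus rests entirely on the Wallis product, whose slow $O(1/n)$ convergence is visible numerically; a minimal sketch:

```python
from math import pi

# Partial Wallis product: prod_{m=1}^n (2m)^2 / ((2m-1)(2m+1)) -> pi/2.
prod = 1.0
for m in range(1, 100_001):
    prod *= (2 * m) ** 2 / ((2 * m - 1) * (2 * m + 1))
print(prod, pi / 2)   # only ~6 digits of pi/2 even after 10^5 factors
```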
Upper and Lower Bounds

It is possible to obtain reasonably good upper and lower bounds for n! by elementary reasoning as well. To see this, note that from (4-42) we get

$$a_n - a_{n+1} < \frac{1}{3}\left[\frac{1}{(2n+1)^2} + \frac{1}{(2n+1)^4} + \cdots\right] = \frac{1}{3}\cdot\frac{1}{(2n+1)^2 - 1} = \frac{1}{12n(n+1)} = \frac{1}{12n} - \frac{1}{12(n+1)},$$

so that $\{a_n - \frac{1}{12n}\}$ is a monotonically increasing sequence whose limit is also c. Hence for any finite n

$$a_n - \frac{1}{12n} < c, \quad \text{i.e.,} \quad a_n < c + \frac{1}{12n},$$

and together with (4-41) and (4-47) this gives

$$n! < \sqrt{2\pi}\; n^{n + \frac{1}{2}}\, e^{-n}\, e^{1/12n}. \tag{4-48}$$

Similarly from (4-42) we also have

$$a_n - a_{n+1} > \frac{1}{3(2n+1)^2} > \frac{1}{12n+1} - \frac{1}{12(n+1)+1} > 0,$$

so that $\{a_n - \frac{1}{12n+1}\}$ is a monotone decreasing sequence whose limit also equals c. Hence

$$a_n - \frac{1}{12n+1} > c, \quad \text{i.e.,} \quad a_n > c + \frac{1}{12n+1},$$

or

$$n! > \sqrt{2\pi}\; n^{n + \frac{1}{2}}\, e^{-n}\, e^{1/(12n+1)}. \tag{4-49}$$

Together, (4-48)-(4-49) give

$$\sqrt{2\pi}\; n^{n + \frac{1}{2}}\, e^{-n}\, e^{1/(12n+1)} < n! < \sqrt{2\pi}\; n^{n + \frac{1}{2}}\, e^{-n}\, e^{1/12n}. \tag{4-50}$$
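The sandwich (4-50) is tight enough to check directly; a short sketch:

```python
from math import exp, factorial, pi, sqrt

for n in (1, 5, 20):
    base = sqrt(2 * pi) * n ** (n + 0.5) * exp(-n)
    lower = base * exp(1 / (12 * n + 1))
    upper = base * exp(1 / (12 * n))
    assert lower < factorial(n) < upper
    print(f"n = {n:2d}: {lower:.6g} < {factorial(n)} < {upper:.6g}")
```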

Stirling's formula also follows from the asymptotic expansion for the Gamma function given by

$$\Gamma(x) = \sqrt{2\pi}\; e^{-x}\, x^{x - \frac{1}{2}}\left[1 + \frac{1}{12x} + \frac{1}{288x^2} - \frac{139}{51840x^3} + O\!\left(\frac{1}{x^4}\right)\right]. \tag{4-51}$$

Together with $\Gamma(x+1) = x\,\Gamma(x)$, the above expansion can be used to compute numerical values for real x. For a derivation of (4-51), one may look into Chapter 2 of the classic text by Whittaker and Watson (Modern Analysis).

We can use Stirling's formula to obtain yet another approximation to the binomial probability mass function. Since

$$\binom{n}{k} p^k q^{n-k} = \frac{n!}{(n-k)!\, k!}\, p^k q^{n-k}, \tag{4-52}$$

using (4-50) on the right side of (4-52) we obtain

$$\binom{n}{k} p^k q^{n-k} < c_1 \sqrt{\frac{n}{2\pi (n-k)k}}\left(\frac{np}{k}\right)^{k}\left(\frac{nq}{n-k}\right)^{n-k}$$

and

$$\binom{n}{k} p^k q^{n-k} > c_2 \sqrt{\frac{n}{2\pi (n-k)k}}\left(\frac{np}{k}\right)^{k}\left(\frac{nq}{n-k}\right)^{n-k},$$

where

$$c_1 = e^{\frac{1}{12n} - \frac{1}{12(n-k)+1} - \frac{1}{12k+1}} \qquad \text{and} \qquad c_2 = e^{\frac{1}{12n+1} - \frac{1}{12(n-k)} - \frac{1}{12k}}.$$

Notice that the constants $c_1$ and $c_2$ are quite close to each other.
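For instance, with the assumed values $n = 1000$, $k = 520$, $p = \tfrac{1}{2}$, a sketch of (4-52) with the bounds above:

```python
from math import comb, exp, pi, sqrt

n, k, p = 1000, 520, 0.5     # assumed example values
q = 1 - p

main = (sqrt(n / (2 * pi * (n - k) * k))
        * (n * p / k) ** k * (n * q / (n - k)) ** (n - k))
c1 = exp(1 / (12 * n) - 1 / (12 * (n - k) + 1) - 1 / (12 * k + 1))
c2 = exp(1 / (12 * n + 1) - 1 / (12 * (n - k)) - 1 / (12 * k))

exact = comb(n, k) * p ** k * q ** (n - k)
print(c2 * main, "<", exact, "<", c1 * main)   # bounds agree to several digits
```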
