
Applied Statistics and Probability for Engineers, 6th edition

CHAPTER 7

Section 7-2

7-1. The proportion of arrivals for chest pain is 8 among 103 total arrivals. The proportion = 8/103.

7-2. The proportion is 10/80 =1/8.

7-3. P(2.560 ≤ X̄ ≤ 2.570) = P( (2.560 − 2.565)/(0.008/√9) ≤ Z ≤ (2.570 − 2.565)/(0.008/√9) )
     = P(−1.875 ≤ Z ≤ 1.875) = P(Z ≤ 1.875) − P(Z ≤ −1.875)
     = 0.9696 − 0.0304 = 0.9392

7-4. Xᵢ ~ N(100, 10²), n = 25
     μ_X̄ = 100,  σ_X̄ = σ/√n = 10/√25 = 2
     P[(100 − 1.7(2)) ≤ X̄ ≤ (100 + 1.5(2))] = P(96.6 ≤ X̄ ≤ 103)
     = P( (96.6 − 100)/2 ≤ Z ≤ (103 − 100)/2 ) = P(−1.7 ≤ Z ≤ 1.5)
     = P(Z ≤ 1.5) − P(Z ≤ −1.7) = 0.9332 − 0.0446 = 0.8886

 25
7-5.  X  520 kN / m2 ;  X    10.206
n 6
P( X  525)  P 
/ n
X 
 525 520
10.206

 P( Z  0.4899)  1  P( Z  0.4899)

 1  0.6879  0.3121

7-6. n = 6:  σ_X̄ = σ/√n = 3.5/√6 = 1.4289 psi
     n = 50: σ_X̄ = σ/√n = 3.5/√50 = 0.4950 psi
     The standard error of X̄ is reduced by 1.4289 − 0.4950 = 0.9339 psi.

7-7. Assuming a normal distribution,
     μ_X̄ = 17237;  σ_X̄ = σ/√n = 345/√5 = 154.289
     P(17230 ≤ X̄ ≤ 17305) = P( (17230 − 17237)/154.289 ≤ Z ≤ (17305 − 17237)/154.289 )
     = P(−0.045 ≤ Z ≤ 0.4407) = P(Z ≤ 0.44) − P(Z ≤ −0.045)
     = 0.6700 − 0.4820 = 0.1880
 50
7-8. X    22.361psi = standard error of X
n 5


7-9. σ² = 36
     σ_X̄ = σ/√n
     n = (σ/σ_X̄)² = (6/1.2)² = 25
1
7-10. Let Y = X̄ with n = 27.
      μ_X = (a + b)/2 = (0 + 1)/2 = 1/2
      σ_X² = (b − a)²/12 = 1/12
      μ_Y = μ_X̄ = 1/2,  σ_Y² = σ_X̄² = σ_X²/n = (1/12)/27 = 1/324,  so σ_X̄ = 1/18
      Y = X̄ ~ N(1/2, 1/324), approximately, using the central limit theorem.
7-11. n = 36
      μ_X = (a + b)/2 = (4 + 1)/2 = 5/2
      σ_X² = [(b − a + 1)² − 1]/12 = [(4 − 1 + 1)² − 1]/12 = 15/12 = 5/4
      μ_X̄ = 5/2,  σ_X̄ = σ_X/√n = √(5/4)/√36 = √(5/4)/6
      z = (X̄ − μ)/(σ/√n)
      Using the central limit theorem:
      P(2.3 < X̄ < 2.7) = P( (2.3 − 2.5)/(√(5/4)/6) < Z < (2.7 − 2.5)/(√(5/4)/6) )
      = P(−1.0733 < Z < 1.0733) = P(Z < 1.0733) − P(Z < −1.0733) = 0.7169

7-12. μ_X̄ = μ_X = 8.2 minutes, n = 36
      σ_X = 6 minutes,  σ_X̄ = σ_X/√n = 6/√36 = 1
      Using the central limit theorem, X̄ is approximately normally distributed.
      a) P(X̄ < 10) = P(Z < (10 − 8.2)/1) = P(Z < 1.8) = 0.9641
      b) P(5 < X̄ < 10) = P( (5 − 8.2)/1 < Z < (10 − 8.2)/1 ) = P(−3.2 < Z < 1.8)
         = P(Z < 1.8) − P(Z < −3.2) = 0.9634
      c) P(X̄ < 6) = P(Z < (6 − 8.2)/1) = P(Z < −2.2) = 0.0139
7-13. n₁ = 16, n₂ = 9, μ₁ = 75, μ₂ = 70, σ₁ = 8, σ₂ = 12
      X̄₁ − X̄₂ ~ N(μ_X̄₁ − μ_X̄₂, σ²_X̄₁ + σ²_X̄₂) ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂)
      ~ N(75 − 70, 8²/16 + 12²/9) ~ N(5, 20)
      a) P(X̄₁ − X̄₂ > 5) = P(Z > (5 − 5)/√20) = P(Z > 0) = 0.5
      b) P(2.5 ≤ X̄₁ − X̄₂ ≤ 6) = P( (2.5 − 5)/√20 ≤ Z ≤ (6 − 5)/√20 )
         = P(Z ≤ 0.2236) − P(Z ≤ −0.5590) = 0.5885 − 0.2881 = 0.3004

7-14. If μ_B = μ_A, then X̄_B − X̄_A is approximately normal with mean 0 and variance
      σ_B²/25 + σ_A²/25 = 20.48.
      Then, P(X̄_B − X̄_A ≥ 3.5) = P(Z ≥ (3.5 − 0)/√20.48) = P(Z ≥ 0.773) = 0.2196
      The probability that X̄_B exceeds X̄_A by 3.5 or more is not that unusual when μ_B and μ_A are
      equal. Therefore, there is not strong evidence that μ_B is greater than μ_A.

7-15. Assume approximate normal distributions.
      (X̄_high − X̄_low) ~ N(60 − 55, 4²/16 + 4²/16) ~ N(5, 2)
      P(X̄_high − X̄_low ≥ 2) = P(Z ≥ (2 − 5)/√2) = 1 − P(Z ≤ −2.12) = 1 − 0.0170 = 0.983

Section 7-3

7-16.


a) SE Mean = σ̂_X̄ = s/√n = 1.815/√20 = 0.4058
   Variance = s² = 1.815² = 3.294
b) Estimate of mean of population = sample mean = 50.184

7-17. a) SE Mean = σ̂_X̄ = s/√n;  Variance = s²
b) Estimate of population mean = sample mean = 104.492

7-18. a) ( ̂ ) [ ] [ ]
Therefore, ̂
(̂ ) [ ] [ ]
Therefore ̂

b) ( ̂ ) [ ] [ ]

(̂ ) ( ) [ ] [ ]

7-19. ( ̂) ∑ ̅

( ̂)

7-20. E(X̄₁) = E( (Σᵢ₌₁²ⁿ Xᵢ)/2n ) = (1/2n) Σᵢ₌₁²ⁿ E(Xᵢ) = (1/2n)(2nμ) = μ
      E(X̄₂) = E( (Σᵢ₌₁ⁿ Xᵢ)/n ) = (1/n) Σᵢ₌₁ⁿ E(Xᵢ) = (1/n)(nμ) = μ
      X̄₁ and X̄₂ are unbiased estimators of μ.

      The variances are V(X̄₁) = σ²/2n and V(X̄₂) = σ²/n; compare the MSE (variance in this case):
      MSE(Θ̂₁)/MSE(Θ̂₂) = (σ²/2n)/(σ²/n) = n/2n = 1/2
      Because both estimators are unbiased, one concludes that X̄₁ is the "better" estimator with the
      smaller variance.
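A brief simulation (added for illustration; the values μ = 5, σ = 2, and n = 10 are arbitrary choices, not from the exercise) confirms that both estimators are centered at μ and that the mean of 2n observations has roughly half the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 50_000

xbar1 = rng.normal(mu, sigma, size=(reps, 2 * n)).mean(axis=1)  # mean of 2n observations
xbar2 = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)      # mean of n observations

print(xbar1.mean(), xbar2.mean())   # both near mu = 5 (unbiased)
print(xbar1.var(), xbar2.var())     # near sigma^2/(2n) = 0.2 and sigma^2/n = 0.4
```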

7-21. E(Θ̂₁) = (1/7)[E(X₁) + E(X₂) + ⋯ + E(X₇)] = (1/7)(7E(X)) = (1/7)(7μ) = μ
      E(Θ̂₂) = (1/2)[2E(X₁) − E(X₆) + E(X₄)] = (1/2)[2μ − μ + μ] = μ

      a) Both Θ̂₁ and Θ̂₂ are unbiased estimates of μ because the expected values of these statistics are
      equivalent to the true mean, μ.
      b) V(Θ̂₁) = V[(X₁ + X₂ + ⋯ + X₇)/7] = (1/7²)[V(X₁) + V(X₂) + ⋯ + V(X₇)] = (1/49)(7σ²)
         so V(Θ̂₁) = σ²/7
         V(Θ̂₂) = V[(2X₁ − X₆ + X₄)/2] = (1/2²)[V(2X₁) + V(X₆) + V(X₄)]
               = (1/4)[4V(X₁) + V(X₆) + V(X₄)] = (1/4)(4σ² + σ² + σ²) = (1/4)(6σ²)
         so V(Θ̂₂) = 3σ²/2
      Because both estimators are unbiased, the variances can be compared to select the better
      estimator. Because the variance of Θ̂₁ is smaller than that of Θ̂₂, Θ̂₁ is the better estimator.

7-22. Because both Θ̂₁ and Θ̂₂ are unbiased, the variances of the estimators can be compared to select the
      better estimator. Because the variance of Θ̂₂ is smaller than that of Θ̂₁, Θ̂₂ is the better estimator.

      Relative Efficiency = MSE(Θ̂₁)/MSE(Θ̂₂) = V(Θ̂₁)/V(Θ̂₂) = 12/5 = 2.4
7-23. E(Θ̂₁) = θ,  E(Θ̂₂) = θ/2
      Bias = E(Θ̂₂) − θ = θ/2 − θ = −θ/2
      V(Θ̂₁) = 9,  V(Θ̂₂) = 5
      For unbiasedness, use Θ̂₁ because it is the only unbiased estimator.
      As for minimum variance and efficiency we have
      Relative Efficiency = [V(Θ̂₁) + Bias₁²] / [V(Θ̂₂) + Bias₂²], where the bias for Θ̂₁ is 0.
      Thus,
      Relative Efficiency = (9 + 0)/(5 + θ²/4) = 36/(20 + θ²)
      If the relative efficiency is less than or equal to 1, Θ̂₁ is the better estimator.
      Use Θ̂₁ when 36/(20 + θ²) ≤ 1
         36 ≤ 20 + θ²
         16 ≤ θ²
         θ ≤ −4 or θ ≥ 4
      If −4 < θ < 4, then use Θ̂₂.
      For unbiasedness, use Θ̂₁. For efficiency, use Θ̂₁ when θ ≤ −4 or θ ≥ 4 and use Θ̂₂ when
      −4 < θ < 4.

7-24. E(Θ̂₁) = θ, no bias,   V(Θ̂₁) = 15 = MSE(Θ̂₁)
      E(Θ̂₂) = θ, no bias,   V(Θ̂₂) = 8 = MSE(Θ̂₂)
      E(Θ̂₃) ≠ θ, biased,    MSE(Θ̂₃) = 7 [note that this includes (bias)²]
      To compare the three estimators, calculate the relative efficiencies:
      MSE(Θ̂₁)/MSE(Θ̂₂) = 15/8 = 1.875; because rel. eff. > 1, use Θ̂₂ as the estimator for θ
      MSE(Θ̂₁)/MSE(Θ̂₃) = 15/7 = 2.14; because rel. eff. > 1, use Θ̂₃ as the estimator for θ
      MSE(Θ̂₂)/MSE(Θ̂₃) = 8/7 = 1.14; because rel. eff. > 1, use Θ̂₃ as the estimator for θ
      Conclusion: Θ̂₃ is the most efficient estimator, but it is biased. Θ̂₂ is the best "unbiased"
      estimator.

7-25. n₁ = 8, n₂ = 14, n₃ = 6
      Show that S² is unbiased:
      E(S²) = E[ (8S₁² + 14S₂² + 6S₃²)/28 ]
            = (1/28)[ E(8S₁²) + E(14S₂²) + E(6S₃²) ]
            = (1/28)(8σ₁² + 14σ₂² + 6σ₃²) = (1/28)(28σ²) = σ²
      Therefore, S² is an unbiased estimator of σ².

7-26. Show that Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n is a biased estimator of σ².
      a)
      E[ Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n ] = (1/n) E[ Σᵢ₌₁ⁿ Xᵢ² − nX̄² ] = (1/n)[ Σᵢ₌₁ⁿ E(Xᵢ²) − nE(X̄²) ]
         = (1/n)[ Σᵢ₌₁ⁿ (μ² + σ²) − n(μ² + σ²/n) ]
         = (1/n)(nμ² + nσ² − nμ² − σ²) = (1/n)((n − 1)σ²) = σ² − σ²/n
      Therefore, Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n is a biased estimator of σ².
      b) Bias = E[ Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n ] − σ² = σ² − σ²/n − σ² = −σ²/n
c) Bias decreases as n increases.
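The bias result can also be seen in a small simulation (an added sketch; the normal population with σ = 3 and the sample size n = 5 are arbitrary choices, not from the exercise).

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 3.0, 5, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
biased = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / n  # divide by n, not n - 1

# Average of the biased estimator versus the theoretical value ((n - 1)/n) * sigma^2
print(biased.mean())
print((n - 1) / n * sigma**2)       # 7.2 rather than sigma^2 = 9
```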

7-27. a) Show that X̄² is a biased estimator of μ². Using E(X̄²) = V(X̄) + [E(X̄)]²:
      E(X̄²) = (1/n²) E[ (Σᵢ₌₁ⁿ Xᵢ)² ] = (1/n²){ V(Σᵢ₌₁ⁿ Xᵢ) + [E(Σᵢ₌₁ⁿ Xᵢ)]² }
            = (1/n²)(nσ² + (nμ)²) = (1/n²)(nσ² + n²μ²) = σ²/n + μ²
      Therefore, X̄² is a biased estimator of μ².
      b) Bias = E(X̄²) − μ² = σ²/n + μ² − μ² = σ²/n
c) Bias decreases as n increases.

7-28. a) The average of the 26 observations provided can be used as an estimator of the mean pull force
because we know it is unbiased. This value is 336.36 N.
b) The median of the sample can be used as an estimate of the point that divides the population
into a “weak” and “strong” half. This estimate is 334.55 N.
c) Our estimate of the population variance is the sample variance, 54.16 N². Similarly, our
estimate of the population standard deviation is the sample standard deviation, 7.36 N.
d) The estimated standard error of the mean pull force is 7.36/√26 = 1.44 N. This value is the
standard deviation, not of the pull force, but of the mean pull force of the sample.
e) No connector in the sample has a pull force measurement under 324 N.

7-29.
Descriptive Statistics
Variable N N* Mean SE Mean StDev Minimum Q1 Median Q3 Maximum
Oxide Thickness 24 0 423.33 1.87 9.15 407.00 416.00 424.00 431.00 437.00

a) The mean oxide thickness, as estimated by Minitab from the sample, is 423.33 Angstroms.
b) The standard deviation for the population can be estimated by the sample standard deviation, or 9.15
Angstroms.
c) The standard error of the mean is 1.87 Angstroms.
d) Our estimate for the median is 424 Angstroms.


e) Seven of the measurements exceed 430 Angstroms, so our estimate of the proportion requested is 7/24
= 0.2917
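As a small added check (not part of the original solution), the SE Mean column of the output above follows directly from the reported StDev and N; only those two summary values are used here.

```python
from math import sqrt

n, stdev = 24, 9.15
print(stdev / sqrt(n))   # approximately 1.87 Angstroms, matching SE Mean
```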

7-30. a) E(p̂) = E(X/n) = (1/n)E(X) = (1/n)(np) = p
      b) The variance of p̂ is p(1 − p)/n, so its standard error must be √(p(1 − p)/n). To estimate this
      parameter we substitute our estimate of p into it.

7-31. a) E(X̄₁ − X̄₂) = E(X̄₁) − E(X̄₂) = μ₁ − μ₂
      b) s.e. = √V(X̄₁ − X̄₂) = √[V(X̄₁) + V(X̄₂) − 2COV(X̄₁, X̄₂)] = √(σ₁²/n₁ + σ₂²/n₂)
      This standard error can be estimated by using the estimates for the standard deviations
      of populations 1 and 2.
      c) E(Sₚ²) = E[ ((n₁ − 1)S₁² + (n₂ − 1)S₂²) / (n₁ + n₂ − 2) ]
               = (1/(n₁ + n₂ − 2)) [ (n₁ − 1)E(S₁²) + (n₂ − 1)E(S₂²) ]
               = (1/(n₁ + n₂ − 2)) [ (n₁ − 1)σ₁² + (n₂ − 1)σ₂² ]
               = ((n₁ + n₂ − 2)/(n₁ + n₂ − 2)) σ² = σ²   (with σ₁² = σ₂² = σ²)

7-32. a) E(μ̂) = E(αX̄₁ + (1 − α)X̄₂) = αE(X̄₁) + (1 − α)E(X̄₂) = αμ + (1 − α)μ = μ
      b) s.e.(μ̂) = √V(αX̄₁ + (1 − α)X̄₂) = √[α²V(X̄₁) + (1 − α)²V(X̄₂)]
                = √[α²σ₁²/n₁ + (1 − α)²σ₂²/n₂] = √[α²σ₁²/n₁ + (1 − α)²aσ₁²/n₂]
                = σ₁ √[ (α²n₂ + (1 − α)²an₁) / (n₁n₂) ]
      c) The value of alpha that minimizes the standard error is α = an₁/(n₂ + an₁)
      d) With a = 4 and n₁ = (1/2)n₂, the value of α to choose is 2/3. The arbitrary value of α = 0.3 is too
      small and results in a larger standard error. With α = 2/3, the standard error is
         s.e.(μ̂) = σ₁ √[ ((2/3)²2n₁ + (1/3)²4n₁) / (2n₁²) ] = 0.816 σ₁/√n₁
      If α = 0.3 the standard error is
         s.e.(μ̂) = σ₁ √[ ((0.3)²2n₁ + (0.7)²4n₁) / (2n₁²) ] = 1.0344 σ₁/√n₁

7-33. a) E(X₁/n₁ − X₂/n₂) = (1/n₁)E(X₁) − (1/n₂)E(X₂) = (1/n₁)(n₁p₁) − (1/n₂)(n₂p₂) = p₁ − p₂ = E(p̂₁ − p̂₂)
      b) √[ p₁(1 − p₁)/n₁ + p₂(1 − p₂)/n₂ ]
      c) An estimate of the standard error could be obtained substituting X₁/n₁ for p₁ and X₂/n₂ for p₂ in the
      equation shown in (b).
      d) Our estimate of the difference in proportions is 0.02
      e) The estimated standard error is 0.0386

Section 7-4

7-34. f(x) = p(1 − p)^(x−1)
      L(p) = ∏ᵢ₌₁ⁿ p(1 − p)^(xᵢ−1) = pⁿ (1 − p)^(Σxᵢ − n)
      ln L(p) = n ln p + (Σᵢ₌₁ⁿ xᵢ − n) ln(1 − p)
      ∂ln L(p)/∂p = n/p − (Σᵢ₌₁ⁿ xᵢ − n)/(1 − p) = 0
      0 = [ (1 − p)n − p(Σᵢ₌₁ⁿ xᵢ − n) ] / [p(1 − p)] = [ n − np − pΣᵢ₌₁ⁿ xᵢ + pn ] / [p(1 − p)]
      0 = n − pΣᵢ₌₁ⁿ xᵢ
      p̂ = n / Σᵢ₌₁ⁿ xᵢ
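For reference, a numerical maximization of the same log likelihood reproduces the closed-form estimate n/Σxᵢ (an added sketch with simulated geometric data; the true p = 0.3, the sample size, and the seed are arbitrary choices).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = rng.geometric(p=0.3, size=200)          # simulated geometric observations

def neg_log_lik(p):
    # negative log likelihood of the geometric pmf p(1 - p)^(x - 1)
    return -(len(x) * np.log(p) + (x.sum() - len(x)) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, len(x) / x.sum())              # the two estimates should agree closely
```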

7-35. f(x) = e^(−λ) λˣ / x!
      L(λ) = ∏ᵢ₌₁ⁿ e^(−λ) λ^(xᵢ) / xᵢ! = e^(−nλ) λ^(Σxᵢ) / ∏ᵢ₌₁ⁿ xᵢ!
      ln L(λ) = −nλ + (Σᵢ₌₁ⁿ xᵢ) ln λ − Σᵢ₌₁ⁿ ln xᵢ!
      d ln L(λ)/dλ = −n + (1/λ) Σᵢ₌₁ⁿ xᵢ = 0
      Σᵢ₌₁ⁿ xᵢ = nλ
      λ̂ = Σᵢ₌₁ⁿ xᵢ / n

7-36. f(x) = (2θ + 1)x^(2θ)
      L(θ) = ∏ᵢ₌₁ⁿ (2θ + 1)xᵢ^(2θ) = (2θ + 1)ⁿ (x₁x₂⋯xₙ)^(2θ)
      ln L(θ) = n ln(2θ + 1) + 2θ(ln x₁ + ln x₂ + ⋯ + ln xₙ) = n ln(2θ + 1) + 2θ Σᵢ₌₁ⁿ ln xᵢ
      ∂ln L(θ)/∂θ = 2n/(2θ + 1) + 2 Σᵢ₌₁ⁿ ln xᵢ = 0
      2n/(2θ + 1) = −2 Σᵢ₌₁ⁿ ln xᵢ
      θ̂ = (1/2)[ −n/Σᵢ₌₁ⁿ ln xᵢ − 1 ] = −n/(2 Σᵢ₌₁ⁿ ln xᵢ) − 1/2

7-37. f(x) = λe^(−λ(x−θ)) for x ≥ θ
      L(λ, θ) = ∏ᵢ₌₁ⁿ λe^(−λ(xᵢ−θ)) = λⁿ e^(−λ Σᵢ₌₁ⁿ (xᵢ − θ)) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ + λnθ)
      ln L(λ, θ) = n ln λ − λ Σᵢ₌₁ⁿ xᵢ + λnθ
      d ln L(λ, θ)/dλ = n/λ − Σᵢ₌₁ⁿ xᵢ + nθ = 0
      n/λ = Σᵢ₌₁ⁿ xᵢ − nθ
      λ̂ = n / (Σᵢ₌₁ⁿ xᵢ − nθ)
      λ̂ = 1/(x̄ − θ)

The other parameter θ cannot be estimated by setting the derivative of the log likelihood with
respect to θ to zero because the log likelihood is a linear function of θ. The range of the likelihood
is important.

The joint density function, and therefore the likelihood, is zero for θ > Min(X₁, X₂, …, Xₙ). The term
in the log likelihood, +λnθ, is maximized for θ as large as possible within the range of nonzero
likelihood. Therefore, the log likelihood is maximized when θ is estimated by Min(X₁, X₂, …, Xₙ), so
that θ̂ = x_min.

b) Example: Consider traffic flow and let the time that has elapsed between one car passing a fixed
point and the instant that the next car begins to pass that point be considered time headway. This
headway can be modeled by the shifted exponential distribution.
Example in Reliability: Consider a process where failures are of interest. Suppose that a unit is
put into operation at x = 0, but no failures will occur until  time units of operation. Failures will
occur only after the time .

7-38. L(θ) = ∏ᵢ₌₁ⁿ (xᵢ e^(−xᵢ/θ))/θ
      ln L(θ) = Σᵢ₌₁ⁿ ln(xᵢ) − Σᵢ₌₁ⁿ xᵢ/θ − n ln θ
      ∂ln L(θ)/∂θ = (1/θ²) Σᵢ₌₁ⁿ xᵢ − n/θ
      Setting the last equation equal to zero and solving for theta yields
      θ̂ = Σᵢ₌₁ⁿ xᵢ / n

a0 1 n
7-39. E( X )    X i  X , therefore: aˆ  2 X
2 n i 1

The expected value of this estimate is the true parameter, so it is unbiased. This estimate is
reasonable in one sense because it is unbiased. However, there are obvious problems. Consider the
sample x1=1, x2 = 2 and x3 =10. Now x = 4.37 and aˆ  2 x  8.667 . This is an unreasonable
estimate of a, because clearly a  10.

7-40. a) ∫₋₁¹ c(1 + θx) dx = c(x + θx²/2) |₋₁¹ = 2c, so setting 2c = 1
      means the constant c should equal 0.5
      b) E(X) = θ/3, so setting θ/3 = (1/n) Σᵢ₌₁ⁿ Xᵢ gives θ̂ = 3 · (1/n) Σᵢ₌₁ⁿ Xᵢ
      c) E(θ̂) = E(3X̄) = 3E(X̄) = 3(θ/3) = θ
      d) L(θ) = ∏ᵢ₌₁ⁿ (1/2)(1 + θXᵢ),   ln L(θ) = n ln(1/2) + Σᵢ₌₁ⁿ ln(1 + θXᵢ)
         ∂ln L(θ)/∂θ = Σᵢ₌₁ⁿ Xᵢ/(1 + θXᵢ)
      By inspection, the value of θ that maximizes the likelihood is max(Xᵢ)

7-41. a) E(X²) = 2θ = (1/n) Σᵢ₌₁ⁿ Xᵢ², so θ̂ = (1/2n) Σᵢ₌₁ⁿ Xᵢ²
      b) L(θ) = ∏ᵢ₌₁ⁿ (xᵢ/θ) e^(−xᵢ²/(2θ))
         ln L(θ) = Σᵢ₌₁ⁿ ln(xᵢ) − Σᵢ₌₁ⁿ xᵢ²/(2θ) − n ln θ
         ∂ln L(θ)/∂θ = (1/(2θ²)) Σᵢ₌₁ⁿ xᵢ² − n/θ
         Setting the last equation equal to zero, the maximum likelihood estimate is
         θ̂ = (1/2n) Σᵢ₌₁ⁿ Xᵢ²
         and this is the same result obtained in part (a)
      c) ∫₀ᵃ f(x) dx = 0.5 = 1 − e^(−a²/(2θ))
         a = √(−2θ ln(0.5)) = √(2θ ln 2)
         We can estimate the median (a) by substituting our estimate for θ into the equation for a.

7-42. a) â cannot be unbiased since it will always be less than a.


b) bias = na/(n + 1) − a(n + 1)/(n + 1) = −a/(n + 1) ≠ 0.
c) 2X̄
d) P(Y ≤ y) = P(X₁ ≤ y, …, Xₙ ≤ y) = [P(X₁ ≤ y)]ⁿ = (y/a)ⁿ. Thus, f(y) is as given. Thus,
   bias = E(Y) − a = na/(n + 1) − a = −a/(n + 1).


e) For any n >1, n(n+2) > 3n so the variance of â2 is less than that of â1 . It is in this sense that the second
estimator is better than the first.

7-43. a) L(β, δ) = ∏ᵢ₌₁ⁿ (β/δ)(xᵢ/δ)^(β−1) e^(−(xᵢ/δ)^β)
         ln L(β, δ) = n ln(β/δ) + (β − 1) Σᵢ₌₁ⁿ ln(xᵢ/δ) − Σᵢ₌₁ⁿ (xᵢ/δ)^β
      b) ∂ln L(β, δ)/∂β = n/β + Σᵢ₌₁ⁿ ln(xᵢ/δ) − Σᵢ₌₁ⁿ (xᵢ/δ)^β ln(xᵢ/δ)
         ∂ln L(β, δ)/∂δ = −nβ/δ + (β/δ^(β+1)) Σᵢ₌₁ⁿ xᵢ^β
      Upon setting ∂ln L(β, δ)/∂δ equal to zero, we obtain
         δ = [ Σᵢ₌₁ⁿ xᵢ^β / n ]^(1/β)
      Upon setting ∂ln L(β, δ)/∂β equal to zero and substituting for δ, we obtain
         n/β + Σᵢ₌₁ⁿ ln xᵢ − n ln δ = (n/Σᵢ₌₁ⁿ xᵢ^β) Σᵢ₌₁ⁿ xᵢ^β (ln xᵢ − ln δ)
      and
         1/β = [ Σᵢ₌₁ⁿ xᵢ^β ln xᵢ / Σᵢ₌₁ⁿ xᵢ^β ] − (1/n) Σᵢ₌₁ⁿ ln xᵢ
c) Numerical iteration is required.
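One way the numerical iteration in part (c) could be carried out is sketched below (an added illustration, not the textbook's procedure); it solves the profile equation above for β by root finding and then computes δ from its closed form. The simulated data, true parameter values, and search bracket are arbitrary choices.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
x = rng.weibull(a=2.0, size=100) * 1.5      # simulated data: shape 2, scale 1.5
logx = np.log(x)

def profile_eq(beta):
    # 1/beta = sum(x^beta * ln x)/sum(x^beta) - mean(ln x), written as a root problem
    xb = x ** beta
    return (xb * logx).sum() / xb.sum() - logx.mean() - 1.0 / beta

beta_hat = brentq(profile_eq, 0.1, 20.0)                 # solve for beta numerically
delta_hat = np.mean(x ** beta_hat) ** (1.0 / beta_hat)   # then delta from its closed form
print(beta_hat, delta_hat)                               # should be near 2 and 1.5
```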

7-44. a) Using the results from the example, the estimate of the mean is 423.33 and the estimate of
the variance is 83.7225.

b) [Surface plot of the log likelihood (log L) versus the mean (approximately 410 to 435) and the
standard deviation (approximately 7 to 15).]

The function has an approximate ridge and its curvature is not too pronounced. The maximum value for
standard deviation is at 9.15, although it is difficult to see on the graph.


c) When n is increased to 40, the graph looks the same although the curvature is more pronounced. As n
increases, it is easier to locate the maximum value of the standard deviation on the graph.

7-45. From the example, the posterior distribution for μ is normal with mean
      [ (σ²/n)μ₀ + σ₀² x̄ ] / (σ₀² + σ²/n)
      and variance
      σ₀²(σ²/n) / (σ₀² + σ²/n).
      The Bayes estimator for μ goes to the MLE as n increases. This follows because σ²/n goes to 0, so the
      posterior mean approaches σ₀²x̄/σ₀² (the σ₀²'s cancel). Thus, in the limit μ̂ = x̄.
( x )2
1  1
7-46. a) Because f (x | )  e 2 2
and f ( )  for a    b , the joint
2  ba
distribution is
( x )2
1 
f ( x,  )  e 2 2
for - < x <  and a    b .
(b  a) 2 
( x ) b 2
1 1 
Then, f ( x)  
b  a a 2 
e 2 2
d

and this integral is recognized as a normal probability. Therefore,

f ( x) 
1
b x    a x 
ba
where (x) is the standard normal cumulative distribution function. Then
(x)2

f ( x,  )
2
e 2
f (  | x)  
f ( x) 2  b x    a x 
b) The Bayes estimator is
( x )2

b
e  d 2 2

~  
a 2   b x    a x 
.

Let v = (x - ). Then, dv = - d and

x xa    xb  xa


v2 v2
x a  
( x  v)e 2 dv
2 2
ve 2 dv
~  
x b 2   b x    a x   b x    a x  xb 2   b x    a x 
 

v2
Let w = . Then, dw  [ 2v2 ]dv  [ v2 ]dv and
2 2 2 
( xa )2
  ( xa ) 
( x b ) 2 2

2 2
e dw w
  e 2 2  e 2 2 
~  x   2  b x    a x 
 x
2   b x    a x 
 
( x b ) 2
2 2

n 1  ( n 1) 
e  x   ne
for x = 0, 1, 2, and f ( )   n  1 
0

7-47. a) f ( x)  for  > 0.


x! 
 0  (n  1)

7-14
Applied Statistics and Probability for Engineers, 6th edition

Then,
n  1n1 n xe ( n1)

0

f ( x,  )  .
0n1(n  1) x!
This last density is recognized to be a gamma density as a function of . Therefore, the posterior
n 1
distribution of  is a gamma distribution with parameters n + x + 1 and 1 + .
0
b) The mean of the posterior distribution can be obtained from the results for the gamma
distribution to be
n  x 1  n  x 1 
 
1
n 1
0
 0  
 n  0  1 

7-48. a) From the example, the Bayes estimate is μ̃ = [ (16/25)(4) + 1(4.85) ] / (16/25 + 1) = 4.518
      b) μ̂ = x̄ = 4.85. The Bayes estimate appears to underestimate the mean.
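The weighted-average form of this Bayes estimate can be computed directly (an added sketch using the numbers from 7-48: σ²/n = 16/25, prior mean 4, prior variance 1, and sample mean 4.85).

```python
# Posterior mean of a normal mean with a normal prior: a weighted average of the
# prior mean and the sample mean, with weights sigma^2/n and the prior variance.
sigma2_over_n = 16 / 25
prior_mean, prior_var = 4.0, 1.0
xbar = 4.85

bayes_est = (sigma2_over_n * prior_mean + prior_var * xbar) / (sigma2_over_n + prior_var)
print(round(bayes_est, 3))   # 4.518
```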

7-49. a) From the example, μ̃ = [ (0.0045)(2.28) + (0.018)(2.29) ] / (0.0045 + 0.018) = 2.288
      b) μ̂ = x̄ = 2.29. The Bayes estimate is very close to the MLE of the mean.

7-50. a) f(x | λ) = λe^(−λx), x ≥ 0, and f(λ) = 0.008e^(−0.008λ). Then,
         f(x₁, x₂, λ) = λ²e^(−λ(x₁+x₂)) · 0.008e^(−0.008λ) = 0.008 λ² e^(−λ(x₁+x₂+0.008)).
      As a function of λ, this is recognized as a gamma density with parameters 3 and x₁ + x₂ + 0.008.
      Therefore, the posterior mean for λ is
         λ̃ = 3/(x₁ + x₂ + 0.008) = 3/(2x̄ + 0.008) = 0.00133.
      b) Using the Bayes estimate for λ, P(X < 1000) = ∫₀¹⁰⁰⁰ 0.00133e^(−0.00133x) dx = 0.736
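A quick numerical check of part (b) (added here; not part of the original solution) integrates the exponential density at the Bayes estimate of λ.

```python
import numpy as np
from scipy.integrate import quad

lam = 0.00133                                        # Bayes estimate from part (a)
prob, _ = quad(lambda x: lam * np.exp(-lam * x), 0, 1000)
print(prob, 1 - np.exp(-lam * 1000))                 # both approximately 0.736
```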

Supplemental Exercises

7-51. f(x₁, x₂, …, xₙ) = ∏ᵢ₌₁ⁿ λe^(−λxᵢ) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ)   for x₁ > 0, x₂ > 0, …, xₙ > 0
7-52. f(x₁, x₂, x₃, x₄, x₅) = ( 1/(σ√(2π)) )⁵ exp( −Σᵢ₌₁⁵ (xᵢ − μ)²/(2σ²) )

7-53. f(x₁, x₂, x₃, x₄, x₅) = 1 for 0 ≤ x₁ ≤ 1, 0 ≤ x₂ ≤ 1, 0 ≤ x₃ ≤ 1, 0 ≤ x₄ ≤ 1, 0 ≤ x₅ ≤ 1


7-54. X̄₁ − X̄₂ ~ N(100 − 105, 2.0²/25 + 2.5²/30)
      ~ N(−5, 0.3683)

7-55. X ~ N(50, 289)
      P(47 ≤ X̄ ≤ 53) = P( (47 − 50)/(17/√25) ≤ Z ≤ (53 − 50)/(17/√25) ) ≈ P(−0.9 ≤ Z ≤ 0.9)
      = P(Z ≤ 0.9) − P(Z ≤ −0.9) = 0.8159 − 0.1841 = 0.6319
      Yes, because the central limit theorem states that with large samples (n ≥ 30) the sample mean is
      approximately normally distributed.

7-56. Assume X̄ is approximately normally distributed.
      P(X̄ > 34) = 1 − P(X̄ ≤ 34) = 1 − P(Z ≤ (34 − 38)/(0.7/√9))
                = 1 − P(Z ≤ −17.14) = 1 − 0 ≈ 1

7-57. z = (X̄ − μ)/(s/√n) = (54 − 50)/(√3/√17) = 9.5219
      P(Z > 9.5219) ≈ 0. The results are very unusual.

7-58. P(X̄ ≤ 37) = P(Z ≤ −6) ≈ 0

7-59. Binomial with p equal to the proportion of defective chips and n = 200.

7-60. E[aX̄₁ + (1 − a)X̄₂] = aμ + (1 − a)μ = μ

      V(X̄) = V[aX̄₁ + (1 − a)X̄₂] = a²V(X̄₁) + (1 − a)²V(X̄₂) = a²(σ²/n₁) + (1 − 2a + a²)(σ²/n₂)
            = (σ²/(n₁n₂)) (n₂a² + n₁ − 2n₁a + n₁a²)
      ∂V(X̄)/∂a = (σ²/(n₁n₂)) (2n₂a − 2n₁ + 2n₁a) = 0
      0 = 2n₂a − 2n₁ + 2n₁a
      2a(n₂ + n₁) = 2n₁
      a(n₂ + n₁) = n₁
      a = n₁/(n₂ + n₁)

7-61. L(θ) = ∏ᵢ₌₁ⁿ (1/(2θ³)) xᵢ² e^(−xᵢ/θ) = (1/(2θ³))ⁿ (∏ᵢ₌₁ⁿ xᵢ²) e^(−Σᵢ₌₁ⁿ xᵢ/θ)
      ln L(θ) = n ln(1/(2θ³)) + 2 Σᵢ₌₁ⁿ ln xᵢ − Σᵢ₌₁ⁿ xᵢ/θ
      ∂ln L(θ)/∂θ = −3n/θ + Σᵢ₌₁ⁿ xᵢ/θ²
      Making the last equation equal to zero and solving for θ, we obtain
      θ̂ = Σᵢ₌₁ⁿ xᵢ / (3n)
      as the maximum likelihood estimate.

7-62. L(θ) = ∏ᵢ₌₁ⁿ θ xᵢ^(θ−1)
      ln L(θ) = n ln θ + (θ − 1) Σᵢ₌₁ⁿ ln(xᵢ)
      ∂ln L(θ)/∂θ = n/θ + Σᵢ₌₁ⁿ ln(xᵢ)
      Making the last equation equal to zero and solving for theta, we obtain the maximum likelihood estimate
      θ̂ = −n / Σᵢ₌₁ⁿ ln(xᵢ)

7-63. L(θ) = ∏ᵢ₌₁ⁿ (1/θ) xᵢ^((1/θ)−1) = (1/θⁿ) ∏ᵢ₌₁ⁿ xᵢ^((1/θ)−1)
      ln L(θ) = −n ln θ + (1/θ − 1) Σᵢ₌₁ⁿ ln(xᵢ)
      ∂ln L(θ)/∂θ = −n/θ − (1/θ²) Σᵢ₌₁ⁿ ln(xᵢ)
      Upon setting the last equation equal to zero and solving for the parameter of interest, we obtain the
      maximum likelihood estimate
      θ̂ = −(1/n) Σᵢ₌₁ⁿ ln(xᵢ)
      E(θ̂) = E[ −(1/n) Σᵢ₌₁ⁿ ln(xᵢ) ] = −(1/n) Σᵢ₌₁ⁿ E[ln(xᵢ)] = −(1/n)(−nθ) = θ
      because
      E(ln(Xᵢ)) = ∫₀¹ (ln x)(1/θ) x^((1/θ)−1) dx;  let u = ln x and dv = (1/θ)x^((1/θ)−1) dx;
      then E(ln(X)) = [x^(1/θ) ln x]₀¹ − ∫₀¹ x^((1/θ)−1) dx = −θ
      Therefore, θ̂ is an unbiased estimator of θ.

7-64. a) Let ̅ . Then ̅ ̅ ̅ . Therefore and

Therefore, ̅ is a


b) ̅

7-65. x̄ = (23.5 + 15.6 + 17.4 + ⋯ + 28.7)/10 = 21.9
      Demand for all 5000 houses is θ = 5000μ
      θ̂ = 5000x̄ = 5000(21.9) = 109,500

The proportion estimate is ̂

Mind-Expanding Exercises

7-66. P(X₁ = 0, X₂ = 0) = M(M − 1)/(N(N − 1))
      P(X₁ = 0, X₂ = 1) = M(N − M)/(N(N − 1))
      P(X₁ = 1, X₂ = 0) = (N − M)M/(N(N − 1))
      P(X₁ = 1, X₂ = 1) = (N − M)(N − M − 1)/(N(N − 1))
      P(X₁ = 0) = M/N
      P(X₁ = 1) = (N − M)/N
      P(X₂ = 0) = P(X₂ = 0 | X₁ = 0)P(X₁ = 0) + P(X₂ = 0 | X₁ = 1)P(X₁ = 1)
                = [(M − 1)/(N − 1)](M/N) + [M/(N − 1)][(N − M)/N] = M/N
      P(X₂ = 1) = P(X₂ = 1 | X₁ = 0)P(X₁ = 0) + P(X₂ = 1 | X₁ = 1)P(X₁ = 1)
                = [(N − M)/(N − 1)](M/N) + [(N − M − 1)/(N − 1)][(N − M)/N] = (N − M)/N
      Because P(X₂ = 0 | X₁ = 0) = (M − 1)/(N − 1) is not equal to P(X₂ = 0) = M/N, X₁ and X₂ are not
      independent.

7-67. a) cₙ = Γ[(n − 1)/2] / { Γ(n/2) √(2/(n − 1)) }
      b) When n = 15, cₙ = 1.0180. When n = 20, cₙ = 1.0132. Therefore S is a reasonably good estimator for the
      standard deviation even when relatively small sample sizes are used.
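The two constants in part (b) can be reproduced with the gamma function (an added check, not the textbook's code).

```python
from math import sqrt
from scipy.special import gamma

def c_n(n):
    # c_n = Gamma((n - 1)/2) / ( Gamma(n/2) * sqrt(2/(n - 1)) )
    return gamma((n - 1) / 2) / (gamma(n / 2) * sqrt(2 / (n - 1)))

print(round(c_n(15), 4), round(c_n(20), 4))   # 1.0180 and 1.0132
```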

7-68. a) The likelihood is


√ √

The log likelihood function is


∑[ √ ]

∑[ ] √

∑[ ] (√ )

Take the derivative of each and set it to zero

to obtain
̂

To find the maximum likelihood estimator of , substitute the estimate for and take the
derivative with respect to

∑[ ̂ ̂ ]

Set the derivative to zero and solve



̂

b)

̂ ∑ ∑

∑[ ] ∑[ ]

Therefore, the estimator is biased. The bias is independent of n.

c) An unbiased estimator of  2 is given by 2ˆ 2

7-69. P(|X̄ − μ| ≥ cσ/√n) ≤ 1/c² from Chebyshev's inequality. Then, P(|X̄ − μ| < cσ/√n) ≥ 1 − 1/c².
      Given an ε, n and c can be chosen sufficiently large that the last probability is near 1 and cσ/√n is
      equal to ε.


7-70. a) P(X₍ₙ₎ ≤ t) = P(Xᵢ ≤ t for i = 1, …, n) = [F(t)]ⁿ
         P(X₍₁₎ > t) = P(Xᵢ > t for i = 1, …, n) = [1 − F(t)]ⁿ
         Then, P(X₍₁₎ ≤ t) = 1 − [1 − F(t)]ⁿ
      b) f_X₍₁₎(t) = (∂/∂t) F_X₍₁₎(t) = n[1 − F(t)]^(n−1) f(t)
         f_X₍ₙ₎(t) = (∂/∂t) F_X₍ₙ₎(t) = n[F(t)]^(n−1) f(t)
      c) P(X₍₁₎ = 0) = F_X₍₁₎(0) = 1 − [1 − F(0)]ⁿ = 1 − pⁿ because F(0) = 1 − p.
         P(X₍ₙ₎ = 1) = 1 − F_X₍ₙ₎(0) = 1 − [F(0)]ⁿ = 1 − (1 − p)ⁿ
      d) P(X ≤ t) = F(t) = Φ((t − μ)/σ). From a previous exercise,
         f_X₍₁₎(t) = n[1 − Φ((t − μ)/σ)]^(n−1) (1/(√(2π)σ)) e^(−(t−μ)²/(2σ²))
         f_X₍ₙ₎(t) = n[Φ((t − μ)/σ)]^(n−1) (1/(√(2π)σ)) e^(−(t−μ)²/(2σ²))
      e) P(X ≤ t) = 1 − e^(−λt)
         From a previous exercise,
         F_X₍₁₎(t) = 1 − e^(−nλt),  f_X₍₁₎(t) = nλe^(−nλt)
         F_X₍ₙ₎(t) = [1 − e^(−λt)]ⁿ,  f_X₍ₙ₎(t) = n[1 − e^(−λt)]^(n−1) λe^(−λt)

7-71. P(F(X₍ₙ₎) ≤ t) = P(X₍ₙ₎ ≤ F⁻¹(t)) = tⁿ for 0 ≤ t ≤ 1 from a previous exercise.
      If Y = F(X₍ₙ₎), then f_Y(y) = ny^(n−1), 0 ≤ y ≤ 1.
      Then, E(Y) = ∫₀¹ nyⁿ dy = n/(n + 1)
      P(F(X₍₁₎) ≤ t) = P(X₍₁₎ ≤ F⁻¹(t)) = 1 − (1 − t)ⁿ, 0 ≤ t ≤ 1, from a previous exercise.
      If Y = F(X₍₁₎), then f_Y(y) = n(1 − y)^(n−1), 0 ≤ y ≤ 1.
      Then, E(Y) = ∫₀¹ y n(1 − y)^(n−1) dy = 1/(n + 1), where integration by parts is used. Therefore,
      E[F(X₍ₙ₎)] = n/(n + 1) and E[F(X₍₁₎)] = 1/(n + 1)
n 1
7-72. E (V )  k  [ E ( X i21 )  E ( X i2 )  2 E ( X i X i 1 )]
i 1

7-20
Applied Statistics and Probability for Engineers, 6th edition

n1
 k  ( 2   2   2   2  2 2 )  k (n  1)2 2
i 1

Therefore, k 1
2 ( n 1)

7-73. a) The traditional estimate of the standard deviation, S, is 3.64. The mean of the sample is 13.43, so the
values of |Xᵢ − X̄| corresponding to the given observations are 3.43, 1.43, 5.43, 0.57, 4.57, 1.57 and 3.57.
The median of these new quantities is 3.43, so the new estimate of the standard deviation is 5.08, and this
value is slightly larger than the value obtained from the traditional estimator.

b) Making the first observation in the original sample equal to 50 produces the following results. The
traditional estimator, S, is equal to 14.01. The new estimator slightly changed to 7.62.
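A sketch of how this comparison could be reproduced is given below (added for illustration). The estimator's exact form, the median of |Xᵢ − X̄| divided by 0.6745, is inferred from the 3.43 to 5.08 step reported in part (a), and the data vector here is only a placeholder, not the textbook sample.

```python
import numpy as np

def robust_sd(x):
    # assumed estimator: median absolute deviation from the sample mean, divided by 0.6745
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - x.mean())) / 0.6745

sample = [10.0, 12.0, 8.0, 14.0, 18.0, 11.9, 20.1]   # hypothetical data, for illustration only
print(np.std(sample, ddof=1), robust_sd(sample))     # traditional S versus robust estimate
```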

7-74. a) Tr = X₁ +
             [X₁ + (X₂ − X₁)] +
             [X₁ + (X₂ − X₁) + (X₃ − X₂)] +
             ⋯ +
             [X₁ + (X₂ − X₁) + (X₃ − X₂) + ⋯ + (Xᵣ − Xᵣ₋₁)] +
             (n − r)[X₁ + (X₂ − X₁) + (X₃ − X₂) + ⋯ + (Xᵣ − Xᵣ₋₁)]

      Because X₁ is the minimum lifetime of n items, E(X₁) = 1/(nλ).
      Then, X₂ − X₁ is the minimum lifetime of (n − 1) items from the memoryless property of the
      exponential and E(X₂ − X₁) = 1/((n − 1)λ).
      Similarly, E(Xₖ − Xₖ₋₁) = 1/((n − k + 1)λ). Then,
      E(Tr) = n/(nλ) + (n − 1)/((n − 1)λ) + ⋯ + (n − r + 1)/((n − r + 1)λ) = r/λ  and  E(Tr/r) = 1/λ = μ

      b) V(Tr/r) = 1/(λ²r) is related to the variance of the Erlang distribution
         V(X) = r/λ².
      They are related by the value (1/r²). The censored variance is (1/r²) times the uncensored variance.
