
Faculty of Economics UGM

Statistics for Economics 2


Hengki Purwoto

Answer Key to Selected Chapter Exercises


Ramachandran’s book 3rd ed.

Chapter 2
Basic Concepts from Probability Theory

2.2.16
Here A and B are mutually exclusive, so P(A ∩ B) = 0.
(a) P(A ∪ B) = P(A) + P(B) = .17 + .46 = .63
(b) P(A^c) = 1 − .17 = .83
(c) P(A^c ∪ B^c) = P((A ∩ B)^c) = 1 − P(A ∩ B) = 1
(d) P((A ∩ B)^c) = 1 − P(A ∩ B) = 1
(e) P(A^c ∩ B^c) = P((A ∪ B)^c) = 1 − P(A ∪ B) = 1 − .63 = .37

2.2.19
(i) P(B) = (250 + 150 + 150)/1000 = .55
(ii) P(A^c ∩ B) = (150 + 150)/1000 = .3
(iii) P(A ∪ B^c) = (150 + 250 + 50 + 250)/1000 = .7

2.3.2
₁₀P₄ = 10!/(10 − 4)! = 10 · 9 · 8 · 7 = 5040

2.3.5
C(25, 20) = 25!/((25 − 20)! 20!) = 25!/(5! 20!) = (25 · 24 · 23 · 22 · 21)/5! = 53130

2.4.12
(a) P(Daily exercise) = 111/400
(b) P(Daily exercise | Male) = P(Daily exercise ∩ Male) / P(Male) = (50/400)/(227/400) = 50/227

2.5.3
F(x) = 0   for x < −5
     = .2  for −5 ≤ x < 0
     = .3  for 0 ≤ x < 3
     = .7  for 3 ≤ x < 6
     = 1   for x ≥ 6

2.5.4

x      −1    0     2     5    6
p(x)   0.1   0.05  0.25  0.4  0.2

(a) P(X = 2) = 0.25
(b) P(X > 0) = 0.25 + 0.4 + 0.2 = 0.65

2.5.5

x      −1   3    9
p(x)   0.2  0.6  0.2

2.5.6
(a) ∫₀⁴ cx dx = c [x²/2]₀⁴ = c(4²/2 − 0) = 8c = 1, so c = 1/8.
(b) For 0 ≤ x < 4,
F(x) = (1/8) ∫₀ˣ s ds = (1/8)[s²/2]₀ˣ = x²/16.
Hence
F(x) = 0      for x < 0
     = x²/16  for 0 ≤ x < 4
     = 1      for x ≥ 4
(c) P(1 < X < 3) = F(3) − F(1) = 3²/16 − 1²/16 = (9 − 1)/16 = 1/2
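The value of c and the probability in (c) can be spot-checked numerically with a midpoint Riemann sum (a stdlib-only sketch; the helper name `integrate` is mine, not from the text):

```python
def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

c = 1 / 8
f = lambda x: c * x          # the density on (0, 4)

total = integrate(f, 0, 4)   # should be 1, confirming c = 1/8
prob = integrate(f, 1, 3)    # P(1 < X < 3), should be 1/2

print(round(total, 6), round(prob, 6))  # 1.0 0.5
```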

2.5.7
(a) c ∫₀³ x² dx = c [x³/3]₀³ = c(3³/3) = 9c = 1, so c = 1/9.
(b) P(2 ≤ X ≤ 3) = (1/9) ∫₂³ x² dx = (1/27)[x³]₂³ = (27 − 8)/27 = 19/27 ≈ .7037
(c) For 0 ≤ x < 3,
F(x) = (1/9) ∫₀ˣ s² ds = (1/27)[s³]₀ˣ = x³/27.
Hence
F(x) = 0      for x < 0
     = x³/27  for 0 ≤ x < 3
     = 1      for x ≥ 3

2.5.10
(a) The conditions F(3) = 1 and F(0) = 0 give a(3) + b = 1 and a(0) + b = 0, so a = 1/3 and b = 0.
(b) f(x) = dF(x)/dx = (d/dx)(x/3) = 1/3 for 0 ≤ x ≤ 3, and f(x) = 0 otherwise.
(c) P(1 ≤ X ≤ 5) = F(5) − F(1) = 1 − 1/3 = 2/3

Chapter 3
Additional Topics in Probability

3.2.6
Let X = the number of complete passes; then n = 16 and p = 0.62.
(a)
P(X = 12) = C(16, 12)(0.62)^12 (0.38)^(16−12) = 1820(0.62)^12 (0.38)^4 = 0.122
(b)
P(X > 8) = P(X = 9) + P(X = 10) + P(X = 11) + P(X = 12) + P(X = 13) + P(X = 14) + P(X = 15) + P(X = 16)
= 0.177 + 0.202 + 0.180 + 0.122 + 0.061 + 0.021 + 0.005 + 0.000
= 0.768
(c)
There is a 76.8% chance that he will complete more than half of his passes.
(d)
E(X) = np = 16(0.62) = 9.92
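The binomial computations can be reproduced with `math.comb` (an optional check; the helper `binom_pmf` is mine):

```python
import math

n, p = 16, 0.62

def binom_pmf(k):
    """P(X = k) for X ~ Binomial(16, 0.62)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p12 = binom_pmf(12)                                  # part (a)
p_over_8 = sum(binom_pmf(k) for k in range(9, 17))   # part (b)
mean = n * p                                         # part (d)

print(round(p12, 3))        # 0.122
print(round(p_over_8, 3))   # about 0.770
print(round(mean, 2))       # 9.92
```

Note that summing the unrounded terms gives P(X > 8) ≈ 0.770; the 0.768 above is the sum of the termwise-rounded table values.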

3.2.10
P(X ≥ 1) = 1 − P(X = 0) = 1 − e^(−1/2)(1/2)^0 / 0! = 1 − 0.607 = 0.393.
The probability of at least one error on a certain page of the book is 0.393.
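A one-line check of the Poisson calculation, with λ = 1/2 errors per page as in the exercise:

```python
import math

lam = 0.5  # mean number of errors per page

# P(X >= 1) = 1 - P(X = 0) = 1 - e^(-lam) * lam^0 / 0!
p_at_least_one = 1 - math.exp(-lam) * lam**0 / math.factorial(0)

print(round(p_at_least_one, 3))  # 0.393
```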

3.2.15
The probability density function is given by
f(x) = 1/100 for 0 < x < 100, and f(x) = 0 otherwise.
(a) P(60 ≤ X ≤ 80) = ∫₆₀⁸⁰ (1/100) dx = 0.2.
(b) P(X > 90) = ∫₉₀¹⁰⁰ (1/100) dx = 0.1.
(c) There is a 20% chance that the efficiency is between 60 and 80 units, and a 10% chance that the efficiency is greater than 90 units.

3.2.23
P(1.9 ≤ X ≤ 2.02) = P((1.9 − 1.96)/0.04 ≤ Z ≤ (2.02 − 1.96)/0.04) = P(−1.5 ≤ Z ≤ 1.5) = 0.866
P(X < 1.9 or X > 2.02) = 1 − P(1.9 ≤ X ≤ 2.02) = 0.134
13.4% of the balls manufactured by the company are defective.
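The normal probabilities can be reproduced from the error function in the `math` module (the helper `phi` is mine):

```python
import math

def phi(z):
    """Standard normal CDF, expressed through the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 1.96, 0.04
z_lo = (1.90 - mu) / sigma   # -1.5
z_hi = (2.02 - mu) / sigma   #  1.5
p_good = phi(z_hi) - phi(z_lo)
p_defective = 1 - p_good

print(round(p_good, 3), round(p_defective, 3))  # 0.866 0.134
```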

3.3.1
(a) The joint probability function is
P(X = x, Y = y) = C(8, x) C(6, y) C(10, 4 − x − y) / C(24, 4),
where 0 ≤ x ≤ 4, 0 ≤ y ≤ 4, and 0 ≤ x + y ≤ 4.
(b)
P(X = 3, Y = 0) = C(8, 3) C(6, 0) C(10, 1) / C(24, 4) = 0.053.
(c)
P(X ≤ 2, Y = 1) = Σ_{x=0}^{2} P(X = x, Y = 1) = Σ_{x=0}^{2} C(8, x) C(6, 1) C(10, 3 − x) / C(24, 4) = 0.429.
(d)
            y
x      0      1      2      3      4      Sum
0      0.020  0.068  0.064  0.019  0.001  0.172
1      0.090  0.203  0.113  0.015         0.421
2      0.119  0.158  0.040                0.317
3      0.053  0.032                       0.085
4      0.007                              0.007
Sum    0.289  0.461  0.217  0.034  0.001  1.00
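This joint distribution is multivariate hypergeometric, so the entries are easy to verify with `math.comb` (the helper `pmf` is mine):

```python
from math import comb

def pmf(x, y):
    """P(X = x, Y = y): x of the 8, y of the 6, and 4 - x - y of the 10, drawn from 24."""
    z = 4 - x - y
    if min(x, y, z) < 0:
        return 0.0
    return comb(8, x) * comb(6, y) * comb(10, z) / comb(24, 4)

p_b = pmf(3, 0)                          # part (b)
p_c = sum(pmf(x, 1) for x in range(3))   # part (c): x = 0, 1, 2
total = sum(pmf(x, y) for x in range(5) for y in range(5))  # table should sum to 1

print(round(p_b, 3), round(p_c, 3), round(total, 3))  # 0.053 0.429 1.0
```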

3.3.12
(a)
f_X(x) = ∫₀² f(x, y) dy = ∫₀² (x³y³/16) dy = (x³/16)(y⁴/4)|₀² = x³/4, 0 < x < 2.
f_Y(y) = ∫₀² f(x, y) dx = ∫₀² (x³y³/16) dx = (y³/16)(x⁴/4)|₀² = y³/4, 0 < y < 2.
(b)
Given 0 < x < 2, we have the conditional density
f(y | x) = f(x, y)/f_X(x) = (x³y³/16)/(x³/4) = y³/4, 0 < y < 2.
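The marginal in (a) can be spot-checked numerically with a stdlib-only midpoint rule (the names `integrate`, `joint`, and the test point `x0` are mine):

```python
def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

joint = lambda x, y: x**3 * y**3 / 16

x0 = 1.3  # an arbitrary point in (0, 2)
marginal_at_x0 = integrate(lambda y: joint(x0, y), 0, 2)  # should equal x0**3 / 4
total = integrate(lambda x: x**3 / 4, 0, 2)               # the marginal integrates to 1

print(round(marginal_at_x0, 6), round(total, 6))
```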

3.3.15
(a) E(XY) = Σ_{x,y} xy·f(x, y) = Σ_{x=1}^{3} Σ_{y=1}^{3} xy·f(x, y) = 35/12
(b) E(X) = Σ_x x·f_X(x) = 5/3 and E(Y) = Σ_y y·f_Y(y) = 11/6.
Then Cov(X, Y) = E(XY) − E(X)E(Y) = 35/12 − (5/3)(11/6) = −5/36.
(c) Var(X) = Σ_x [x − E(X)]²·f_X(x) = Σ_{x=1}^{3} (x − 5/3)²·f_X(x) = 5/9, and
Var(Y) = Σ_y [y − E(Y)]²·f_Y(y) = Σ_{y=1}^{3} (y − 11/6)²·f_Y(y) = 23/36.
Then ρ_XY = Cov(X, Y)/√(Var(X)Var(Y)) = (−5/36)/√((5/9)(23/36)) = −0.233.
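The covariance and correlation arithmetic can be checked exactly with the `fractions` module:

```python
from fractions import Fraction as Fr
import math

# Moments taken from the worked solution above
e_xy = Fr(35, 12)
e_x, e_y = Fr(5, 3), Fr(11, 6)
var_x, var_y = Fr(5, 9), Fr(23, 36)

cov = e_xy - e_x * e_y                              # exact: -5/36
rho = float(cov) / math.sqrt(float(var_x * var_y))  # correlation coefficient

print(cov, round(rho, 3))  # -5/36 -0.233
```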

3.5.4
Since X follows a Poisson distribution with λ = 120, we have μ = σ² = 120. From Chebyshev's theorem,
P(μ − Kσ < X < μ + Kσ) ≥ 1 − 1/K².
Equating μ − Kσ to 100 and μ + Kσ to 140, with μ = 120 and σ = √120 ≈ 11, we obtain K ≈ 1.82. Hence
P(100 < X < 140) ≥ 1 − 1/(1.82)² = 0.698.
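With the unrounded σ = √120 the bound is exactly 0.70; the 0.698 comes from rounding σ to 11 first. A quick arithmetic check:

```python
import math

mu = var = 120.0          # for a Poisson distribution, mean = variance
sigma = math.sqrt(var)    # about 10.954
k = (140 - mu) / sigma    # half-width of (100, 140) in standard deviations
bound = 1 - 1 / k**2      # Chebyshev lower bound

print(round(k, 3), round(bound, 3))  # 1.826 0.7
```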

3.5.7
Let X₁, ..., Xₙ denote the tosses of the coin, with value 1 if a head occurs and 0 otherwise. Then X₁, ..., Xₙ are independent variables following a Bernoulli distribution with p = 1/2, so E(Xᵢ) = 1/2 and Var(Xᵢ) = 1/4. For any ε > 0, the law of large numbers gives
P(|Sₙ/n − 1/2| < ε) → 1 as n → ∞,
i.e. Sₙ/n will be near 1/2 for large n.
If the coin is not fair, the fraction of heads, Sₙ/n, will still be near the true probability of getting a head for large n.
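The law-of-large-numbers statement is easy to see in a small simulation (the seed and sample size are arbitrary choices of mine, used only for reproducibility):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
fraction = heads / n  # S_n / n

print(fraction)  # close to 1/2; the standard error here is about 0.0016
```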

3.5.13
Let Xᵢ denote the success of the ith customer. Then each Xᵢ follows a Bernoulli distribution with probability 0.03, so E(Xᵢ) = 0.03 and Var(Xᵢ) = (0.03)(0.97) = 0.0291. Let S₂₅₀₀ = Σ_{i=1}^{2500} Xᵢ.
From the CLT, (S₂₅₀₀ − 2500 × 0.03)/√(0.0291 × 2500) approximately follows N(0, 1). Hence, we have
P(S₂₅₀₀ > 80) = P((S₂₅₀₀ − 2500 × 0.03)/√(0.0291 × 2500) > (80 − 2500 × 0.03)/√(0.0291 × 2500)) ≈ P(Z > 0.586) = 0.279

Chapter 4
Sampling Distributions

4.2.1. We have that Y ~ χ²₁₅.
(a) From a chi-square table, P(Y ≤ 6.26) ≈ 0.025. Then y₀ ≈ 6.26.
(b) Choosing upper and lower tail areas of 0.025 each: since P(Y ≤ 27.5) ≈ 0.975 and P(Y ≤ 6.26) ≈ 0.025, we have P(a ≤ Y ≤ b) ≈ 0.95 with b = χ²_{0.975,15} ≈ 27.5 and a = χ²_{0.025,15} ≈ 6.26.
(c) P(Y ≥ 22.307) = 1 − P(Y < 22.307) ≈ 0.10
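Python's standard library has no chi-square CDF, but the tabled values can be reproduced from the regularized lower incomplete gamma function. A stdlib-only sketch (the function names `gammainc_lower` and `chi2_cdf` are mine):

```python
import math

def gammainc_lower(s, x):
    """Regularized lower incomplete gamma P(s, x), via its power series."""
    term = 1.0 / s
    total = term
    k = 0
    while term > 1e-14 * total:
        k += 1
        term *= x / (s + k)   # next series term: x^k / (s (s+1) ... (s+k))
        total += term
    return total * math.exp(-x + s * math.log(x) - math.lgamma(s))

def chi2_cdf(x, df):
    """CDF of the chi-square distribution with df degrees of freedom."""
    return gammainc_lower(df / 2, x / 2)

print(round(chi2_cdf(6.26, 15), 3))        # about 0.025  (part a)
print(round(chi2_cdf(27.5, 15), 3))        # about 0.975  (part b)
print(round(1 - chi2_cdf(22.307, 15), 3))  # about 0.10   (part c)
```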

4.2.7. Since the random sample comes from a normal distribution, (n − 1)S²/σ² ~ χ²_{n−1}.
Setting the upper and lower tail areas equal to 0.05 (though this is not the only possible choice) and using a chi-square table with n − 1 = 14 degrees of freedom, we have
(n − 1)b/σ² = χ²_{0.95,14} = 23.68 and (n − 1)a/σ² = χ²_{0.05,14} = 6.57.
Then, with σ = 1.41, b ≈ 3.36 and a ≈ 0.93.

4.2.9. Since T ~ t₈:
(a) P(T ≤ 2.896) = 0.99
(b) P(T ≥ 1.860) = 0.05
(c) Since the t-distribution is symmetric, we find a such that P(T ≥ a) = α/2 = 0.01. Then a ≈ 1.344.
4.2.11. According to the given information, μ = 11.4, n = 20, ȳ = 11.5, and s = 2, so
t = (ȳ − μ)/(s/√n) = (11.5 − 11.4)/(2/√20) ≈ 0.224.
The degrees of freedom are n − 1 = 19, so the critical value is 1.729 at the α = 0.05 level. Since 0.224 < 1.729, the data tend to agree with the psychologist's claim.

4.2.19. If X ~ F(9, 12):
(a) P(X ≤ 3.87) ≈ 0.9838
(b) P(X ≤ 0.196) ≈ 0.01006
(c) Since P(X > F_{0.975}(9, 12)) = 0.025, a table gives b = F_{0.975}(9, 12) ≈ 3.4358. For the lower bound,
0.025 = P(X ≤ a) = P(1/X ≥ 1/a), where 1/X ~ F(12, 9).
Then 1/a = F_{0.975}(12, 9) ≈ 3.8682, so a ≈ 1/3.8682 ≈ 0.258518. Thus a ≈ 0.2585 and b ≈ 3.4358.
