MACT-317 Practice Problems 14: Assigned Problems From The Sixth Edition
Chapter 9
Practice Problems 14
In the solutions below, the first problem number refers to the sixth edition and the number in parentheses refers to the seventh edition.
Solutions
9.62 (9.70)
$Y \sim \text{Poisson}(\lambda)$
Because we seek an estimator for only one parameter, $\lambda$, we must equate the first population and sample moments.
The first population moment is $\mu_1' = E(Y) = \lambda$.
The corresponding first sample moment is $m_1' = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$.
Equating the corresponding population and sample moments we obtain $\mu_1' = m_1'$, i.e. $\lambda = \bar{Y}$.
Therefore the method-of-moments estimator for $\lambda$ is $\hat{\lambda} = \bar{Y}$.
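As a quick numeric sanity check, the estimator $\hat{\lambda} = \bar{Y}$ is simply the sample mean. The sketch below uses made-up counts, not data from the text:

```python
# Method-of-moments estimate of a Poisson mean: the sample mean Y-bar.
# The sample is hypothetical, for illustration only.
def mom_poisson(sample):
    return sum(sample) / len(sample)

y = [2, 0, 3, 1, 4]          # hypothetical Poisson counts
lam_hat = mom_poisson(y)     # (2+0+3+1+4)/5 = 2.0
print(lam_hat)
```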
9.63 (9.71)
$Y \sim N(0, \sigma^2)$
Although we seek an estimator for only one parameter, $\sigma^2$ (since $\mu$ is known and equal to zero), we must equate the second population and sample moments, because the first population moment is not a function of $\sigma^2$: $\mu_1' = E(Y) = 0$.
The second population moment is $\mu_2' = E(Y^2) = V(Y) + [E(Y)]^2 = \sigma^2 + 0^2 = \sigma^2$.
The corresponding second sample moment is $m_2' = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$.
Equating them: $\mu_2' = m_2'$, i.e. $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$.
Therefore the method-of-moments estimator for $\sigma^2$ is $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$.
9.64 (9.72)
$Y \sim N(\mu, \sigma^2)$
Because we seek estimators for two parameters, $\mu$ and $\sigma^2$, we must equate two pairs of population and sample moments:
$\mu_1' = E(Y) = \mu$ and $\mu_2' = E(Y^2) = V(Y) + [E(Y)]^2 = \sigma^2 + \mu^2$.
Equating the first and second sample and population moments, and solving for $\hat{\mu}$ and $\hat{\sigma}^2$, we get:
$\mu_1' = m_1' = \frac{1}{n}\sum_{i=1}^{n} Y_i$, thus $\hat{\mu} = \bar{Y}$.
$\mu_2' = m_2'$: $\sigma^2 + \mu^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$, hence $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2 - \hat{\mu}^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2 - \bar{Y}^2$.
Equivalently,
$\hat{\sigma}^2 = \frac{1}{n}\left(\sum_{i=1}^{n} Y_i^2 - n\bar{Y}^2\right) = \frac{1}{n}\sum_{i=1}^{n} (Y_i - \bar{Y})^2$.
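The identity $\frac{1}{n}\sum Y_i^2 - \bar{Y}^2 = \frac{1}{n}\sum (Y_i - \bar{Y})^2$ used above can be checked numerically; the sample values below are hypothetical:

```python
# Method-of-moments estimates for N(mu, sigma^2), computed in the
# uncentered form, then compared against the centered form.
def mom_normal(sample):
    n = len(sample)
    ybar = sum(sample) / n
    sigma2_hat = sum(y * y for y in sample) / n - ybar * ybar
    return ybar, sigma2_hat

y = [1.0, 3.0, 5.0, 7.0]               # hypothetical observations
mu_hat, s2_hat = mom_normal(y)
centered = sum((v - mu_hat) ** 2 for v in y) / len(y)
print(mu_hat, s2_hat, abs(s2_hat - centered) < 1e-12)
```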
9.65 (9.73)
We have one hypergeometric observation, $Y$, with parameters $N$, $n$ and $r$.
Thus, $\mu_1' = E(Y) = \frac{nr}{N}$ and $m_1' = Y$.
Equating the first population and sample moments and solving for $\hat{N}$, we obtain:
$\frac{nr}{\hat{N}} = Y \implies \hat{N} = \frac{nr}{Y}$
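A one-line numeric illustration of solving $nr/N = Y$ for $N$; the values of $n$, $r$ and $y$ below are hypothetical:

```python
# Method-of-moments estimate of N from a single hypergeometric
# observation y, with n and r known. Inputs are made up.
def mom_hypergeometric_N(n, r, y):
    return n * r / y

print(mom_hypergeometric_N(10, 8, 4))   # 10*8/4 = 20.0
```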
9.67 (9.75)
$Y \sim \text{Beta}(\theta, \theta)$
Although we seek an estimator for only one parameter, $\theta$, we must equate the second population and sample moments, because the first population moment is not a function of $\theta$:
$\mu_1' = E(Y) = \frac{\theta}{\theta + \theta} = \frac{1}{2}$
$\mu_2' = E(Y^2) = V(Y) + [E(Y)]^2 = \frac{\theta \cdot \theta}{(\theta + \theta)^2 (\theta + \theta + 1)} + \left(\frac{1}{2}\right)^2 = \frac{1}{4(2\theta + 1)} + \frac{1}{4} = \frac{2\theta + 2}{4(2\theta + 1)} = \frac{\theta + 1}{2(2\theta + 1)}$
$m_2' = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$
Equating the second population and sample moments and solving for $\hat{\theta}$, we obtain:
$\frac{\hat{\theta} + 1}{2(2\hat{\theta} + 1)} = \frac{1}{n}\sum_{i=1}^{n} Y_i^2 \implies n(\hat{\theta} + 1) = 2(2\hat{\theta} + 1)\sum_{i=1}^{n} Y_i^2$
$\hat{\theta} = \frac{n - 2\sum_{i=1}^{n} Y_i^2}{4\sum_{i=1}^{n} Y_i^2 - n}$
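A self-consistency check: plugging $\hat{\theta}$ back into $\mu_2' = \frac{\theta+1}{2(2\theta+1)}$ should reproduce the sample second moment $m_2'$ exactly. The data below are hypothetical:

```python
# Method-of-moments estimator for Beta(theta, theta), checked by
# substituting theta_hat back into the moment equation.
def mom_beta_symmetric(sample):
    n = len(sample)
    s = sum(y * y for y in sample)       # sum of Y_i^2
    return (n - 2 * s) / (4 * s - n)     # theta_hat

y = [0.4, 0.5, 0.6, 0.55]                # hypothetical beta observations
theta_hat = mom_beta_symmetric(y)
m2 = sum(v * v for v in y) / len(y)
mu2 = (theta_hat + 1) / (2 * (2 * theta_hat + 1))
print(abs(mu2 - m2) < 1e-12)
```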
9.69 (9.77)
$Y \sim U(0, 3\theta)$
Because we seek an estimator for only one parameter, $\theta$, we must equate the first population and sample moments.
$\mu_1' = E(Y) = \frac{0 + 3\theta}{2} = \frac{3\theta}{2}$ and $m_1' = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$
Equating the first population and sample moments and solving for $\hat{\theta}$, we obtain:
$\frac{3\hat{\theta}}{2} = \bar{Y} \implies \hat{\theta} = \frac{2}{3}\bar{Y}$
9.70 (9.78)
$f(y) = \frac{\theta\, y^{\theta - 1}}{3^{\theta}}, \quad 0 \le y \le 3$
$E(Y) = \int_0^3 y \cdot \frac{\theta\, y^{\theta - 1}}{3^{\theta}}\, dy = \frac{\theta}{3^{\theta}} \cdot \frac{y^{\theta + 1}}{\theta + 1}\Big|_0^3 = \frac{3\theta}{\theta + 1}$
Because we seek an estimator for only one parameter, $\theta$, we should equate the first population and sample moments.
$\mu_1' = E(Y) = \frac{3\theta}{\theta + 1}$ and $m_1' = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$
Equating the first population and sample moments and solving for $\hat{\theta}$, we obtain:
$\frac{3\hat{\theta}}{\hat{\theta} + 1} = \bar{Y} \implies (\hat{\theta} + 1)\bar{Y} = 3\hat{\theta} \implies \hat{\theta} = \frac{\bar{Y}}{3 - \bar{Y}}$
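A numeric check that $\hat{\theta} = \bar{Y}/(3 - \bar{Y})$ indeed solves the moment equation $3\theta/(\theta + 1) = \bar{Y}$; the sample mean below is hypothetical:

```python
# Solve the moment equation for theta and verify the solution by
# substitution. The value of ybar is made up for illustration.
def mom_theta(ybar):
    return ybar / (3.0 - ybar)

ybar = 2.4                               # hypothetical sample mean
theta_hat = mom_theta(ybar)              # approximately 4.0
check = abs(3 * theta_hat / (theta_hat + 1) - ybar) < 1e-9
print(check)
```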
9.72 (9.80)
a) $Y \sim \text{Poisson}(\lambda)$
$f(y_i) = \frac{e^{-\lambda} \lambda^{y_i}}{y_i!}, \quad y_i = 0, 1, 2, \ldots; \quad i = 1, 2, \ldots, n$
$L(\lambda) = L(y_1, y_2, \ldots, y_n \mid \lambda) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{e^{-\lambda} \lambda^{y_i}}{y_i!} = \frac{e^{-n\lambda}\, \lambda^{\sum_{i=1}^{n} y_i}}{\prod_{i=1}^{n} y_i!}$
$\ln L(\lambda) = \sum_{i=1}^{n} y_i \ln(\lambda) - n\lambda - \sum_{i=1}^{n} \ln(y_i!)$
The MLE of $\lambda$ is the value that makes $\ln L$ a maximum. Taking the derivative of $\ln L$ with respect to $\lambda$, we obtain:
$\frac{d}{d\lambda} \ln L(\lambda) = \frac{\sum_{i=1}^{n} y_i}{\lambda} - n$
Setting this derivative equal to zero and solving for $\hat{\lambda}$ [the MLE of $\lambda$], we get:
$\frac{\sum_{i=1}^{n} y_i}{\hat{\lambda}} - n = 0 \implies \hat{\lambda} = \frac{\sum_{i=1}^{n} y_i}{n} = \bar{y}$
b) $Y \sim \text{Poisson}(\lambda)$, so $E(Y) = \lambda$ and $V(Y) = \lambda$.
$E(\hat{\lambda}) = E(\bar{y}) = E\!\left(\frac{\sum_{i=1}^{n} y_i}{n}\right) = \frac{\sum_{i=1}^{n} E(y_i)}{n} = \frac{n\lambda}{n} = \lambda$
$V(\hat{\lambda}) = V(\bar{y}) = V\!\left(\frac{\sum_{i=1}^{n} y_i}{n}\right) = \frac{\sum_{i=1}^{n} V(y_i)}{n^2} = \frac{n\lambda}{n^2} = \frac{\lambda}{n}$ (the $y_i$'s are independent).
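The claim that $\ln L$ peaks at $\lambda = \bar{y}$ can be probed numerically by comparing the log-likelihood at $\bar{y}$ against nearby values (the $\ln(y_i!)$ term is dropped since it does not involve $\lambda$). Counts are hypothetical:

```python
# Poisson log-likelihood (up to a constant) evaluated at the MLE and
# at two nearby points, to confirm ybar is the maximizer. Data made up.
import math

def loglik(lam, ys):
    return sum(ys) * math.log(lam) - len(ys) * lam

ys = [3, 1, 4, 2]                        # hypothetical counts
ybar = sum(ys) / len(ys)                 # 2.5
better = all(loglik(ybar, ys) > loglik(ybar + d, ys) for d in (-0.5, 0.5))
print(ybar, better)
```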
9.74 (9.82)
b) $f(y_i) = \frac{1}{\theta}\, r y_i^{\,r-1} e^{-y_i^r/\theta}, \quad y_i \ge 0, \; \theta > 0; \quad i = 1, 2, \ldots, n$
$L(\theta) = L(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{1}{\theta}\, r y_i^{\,r-1} e^{-y_i^r/\theta} = \frac{r^n}{\theta^n} \left(\prod_{i=1}^{n} y_i\right)^{r-1} e^{-\sum_{i=1}^{n} y_i^r/\theta}$
$\ln L(\theta) = n\ln(r) - n\ln(\theta) + (r - 1)\sum_{i=1}^{n} \ln(y_i) - \frac{\sum_{i=1}^{n} y_i^r}{\theta}$
The MLE of $\theta$ is the value that maximizes $\ln L$. Taking the derivative of $\ln L$ with respect to $\theta$, we obtain:
$\frac{d}{d\theta} \ln L(\theta) = -\frac{n}{\theta} + \frac{\sum_{i=1}^{n} y_i^r}{\theta^2}$
Setting this derivative equal to zero and solving for $\hat{\theta}$ [the MLE of $\theta$], we get:
$-\frac{n}{\hat{\theta}} + \frac{\sum_{i=1}^{n} y_i^r}{\hat{\theta}^2} = 0 \implies -n\hat{\theta} + \sum_{i=1}^{n} y_i^r = 0 \implies \hat{\theta} = \frac{\sum_{i=1}^{n} y_i^r}{n}$
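A numeric sketch confirming that $\hat{\theta} = \frac{1}{n}\sum y_i^r$ maximizes this log-likelihood; the sample and the value of $r$ are hypothetical:

```python
# Log-likelihood of f(y) = (r/theta) * y^(r-1) * exp(-y^r/theta),
# compared at theta_hat and at two nearby values. Data made up.
import math

def loglik(theta, ys, r):
    n = len(ys)
    return (n * math.log(r / theta)
            + (r - 1) * sum(math.log(y) for y in ys)
            - sum(y ** r for y in ys) / theta)

ys, r = [0.5, 1.2, 0.9, 1.5], 2          # hypothetical data
theta_hat = sum(y ** r for y in ys) / len(ys)
better = all(loglik(theta_hat, ys, r) > loglik(theta_hat * f, ys, r)
             for f in (0.8, 1.2))
print(better)
```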
9.75 (9.83)
a) $f(y_i) = \frac{1}{2\theta + 1}, \quad 0 \le y_i \le 2\theta + 1; \quad i = 1, 2, \ldots, n$
$L(\theta) = L(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{1}{2\theta + 1} = \left(\frac{1}{2\theta + 1}\right)^n$
Since $L(\theta) = \left(\frac{1}{2\theta + 1}\right)^n$ is a monotonically decreasing function of $\theta$, $L(\theta)$ increases as $\theta$ decreases. Thus, $L(\theta)$ is maximized at the smallest possible value of $\theta$, subject to the constraint $0 \le y_i \le 2\theta + 1$, that is $\theta \ge \frac{y_i - 1}{2}$, $i = 1, 2, \ldots, n$. The smallest value of $\theta$ that satisfies this constraint is determined by the maximum observation in the set $y_1, y_2, \ldots, y_n$ [i.e. at $\theta = \frac{y_{(n)} - 1}{2}$]. That is, $\hat{\theta} = \frac{y_{(n)} - 1}{2}$ is the MLE for $\theta$.
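The boundary argument above can be illustrated directly: the likelihood is zero once any observation violates $y_i \le 2\theta + 1$, and decreasing in $\theta$ otherwise. The sample is hypothetical:

```python
# Likelihood for the uniform density on [0, 2*theta+1]; it vanishes
# when the support constraint fails. Data made up for illustration.
def likelihood(theta, ys):
    if any(y > 2 * theta + 1 for y in ys):
        return 0.0
    return (2 * theta + 1) ** (-len(ys))

ys = [0.3, 2.0, 1.1]                     # hypothetical observations
theta_hat = (max(ys) - 1) / 2            # (2.0 - 1)/2 = 0.5
print(theta_hat,
      likelihood(theta_hat, ys) > likelihood(theta_hat + 0.1, ys),
      likelihood(0.4, ys) == 0.0)
```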
9.77 (9.85)
a) $f(y_i, \theta) = \frac{y_i^{\,\alpha - 1} e^{-y_i/\theta}}{\Gamma(\alpha)\, \theta^{\alpha}}, \quad y_i \ge 0; \quad i = 1, 2, \ldots, n$
$L(\theta) = L(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{y_i^{\,\alpha - 1} e^{-y_i/\theta}}{\Gamma(\alpha)\, \theta^{\alpha}} = \frac{\left(\prod_{i=1}^{n} y_i\right)^{\alpha - 1} e^{-\sum_{i=1}^{n} y_i/\theta}}{[\Gamma(\alpha)]^n\, \theta^{n\alpha}}$
$\ln L(\theta) = (\alpha - 1)\sum_{i=1}^{n} \ln(y_i) - n\ln\Gamma(\alpha) - n\alpha \ln(\theta) - \frac{\sum_{i=1}^{n} y_i}{\theta}$
$\frac{d}{d\theta} \ln L(\theta) = -\frac{n\alpha}{\theta} + \frac{\sum_{i=1}^{n} y_i}{\theta^2}$
Setting this derivative equal to zero and solving for $\hat{\theta}$ [the MLE of $\theta$], we get:
$-\frac{n\alpha}{\hat{\theta}} + \frac{\sum_{i=1}^{n} y_i}{\hat{\theta}^2} = 0 \implies \hat{\theta} = \frac{\sum_{i=1}^{n} y_i}{n\alpha} = \frac{\bar{y}}{\alpha}$
b) $Y \sim \text{Gamma}(\alpha, \theta)$, hence $E(Y) = \alpha\theta$, $V(Y) = \alpha\theta^2$.
$E(\hat{\theta}) = E\!\left(\frac{\bar{Y}}{\alpha}\right) = \frac{1}{\alpha} E(\bar{Y}) = \frac{1}{n\alpha}\sum_{i=1}^{n} E(Y_i) = \frac{n\alpha\theta}{n\alpha} = \theta$
$V(\hat{\theta}) = V\!\left(\frac{\bar{Y}}{\alpha}\right) = \frac{1}{\alpha^2} V(\bar{Y}) = \frac{1}{n^2\alpha^2}\sum_{i=1}^{n} V(Y_i) = \frac{n\alpha\theta^2}{n^2\alpha^2} = \frac{\theta^2}{n\alpha}$ ($Y_1, Y_2, \ldots, Y_n$ are independent).
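As with the earlier MLEs, $\hat{\theta} = \bar{y}/\alpha$ can be checked by evaluating the gamma log-likelihood at $\hat{\theta}$ and at nearby values ($\alpha$ treated as known); the data and $\alpha$ below are hypothetical:

```python
# Gamma log-likelihood in theta (alpha known), compared at theta_hat
# and at two nearby points. Data are made up for illustration.
import math

def loglik(theta, ys, alpha):
    n = len(ys)
    return ((alpha - 1) * sum(math.log(y) for y in ys)
            - sum(ys) / theta
            - n * alpha * math.log(theta)
            - n * math.lgamma(alpha))

ys, alpha = [1.0, 2.5, 0.7, 1.8], 2.0    # hypothetical data
theta_hat = (sum(ys) / len(ys)) / alpha  # ybar/alpha
better = all(loglik(theta_hat, ys, alpha) > loglik(theta_hat * f, ys, alpha)
             for f in (0.8, 1.2))
print(better)
```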
9.83 (9.91)
$Y \sim \text{Uniform}(0, 2\theta)$
$f(y_i) = \frac{1}{2\theta}, \quad 0 \le y_i \le 2\theta; \quad i = 1, 2, \ldots, n$
$L(\theta) = L(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} \frac{1}{2\theta} = \left(\frac{1}{2\theta}\right)^n$
Since $L(\theta) = \left(\frac{1}{2\theta}\right)^n$ is a monotonically decreasing function of $\theta$, $L(\theta)$ increases as $\theta$ decreases. Thus, $L(\theta)$ is maximized at the smallest possible value of $\theta$, subject to the constraint $0 \le y_i \le 2\theta$, that is $\theta \ge \frac{y_i}{2}$, $i = 1, 2, \ldots, n$. The smallest value of $\theta$ that satisfies this constraint is determined by the maximum observation in the set $y_1, y_2, \ldots, y_n$ [i.e. at $\theta = \frac{y_{(n)}}{2}$]. That is, $\hat{\theta} = \frac{y_{(n)}}{2}$ is the MLE for $\theta$.
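The same boundary argument in numbers: $(2\theta)^{-n}$ is decreasing in $\theta$ and the likelihood vanishes once $\theta < \max(y_i)/2$. The sample is hypothetical:

```python
# Likelihood for the uniform density on [0, 2*theta]; zero when the
# support constraint fails. Data made up for illustration.
def likelihood(theta, ys):
    if any(y > 2 * theta for y in ys):
        return 0.0
    return (2 * theta) ** (-len(ys))

ys = [0.8, 3.0, 1.4]                     # hypothetical observations
theta_hat = max(ys) / 2                  # 1.5
print(theta_hat,
      likelihood(theta_hat, ys) > likelihood(theta_hat + 0.1, ys),
      likelihood(1.4, ys) == 0.0)
```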
9.84 (9.92)
$f(y_i) = \frac{3 y_i^2}{\theta^3}, \quad 0 \le y_i \le \theta; \quad i = 1, 2, \ldots, n$
$L(\theta) = L(y_1, y_2, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{3 y_i^2}{\theta^3} = \frac{3^n \prod_{i=1}^{n} y_i^2}{\theta^{3n}}$
$L(\theta)$ is a decreasing function of $\theta$, so it is maximized at the smallest $\theta$ satisfying the constraint $y_i \le \theta$ for all $i$; hence $\hat{\theta} = y_{(n)}$, the maximum observation, is the MLE for $\theta$.
9.85 (9.93)
a) $f(y_i) = \frac{2\alpha^2}{y_i^3}, \quad y_i \ge \alpha; \quad i = 1, 2, \ldots, n$
$L(\alpha) = L(y_1, y_2, \ldots, y_n \mid \alpha) = \prod_{i=1}^{n} f(y_i) = \prod_{i=1}^{n} \frac{2\alpha^2}{y_i^3} = \frac{2^n \alpha^{2n}}{\prod_{i=1}^{n} y_i^3}$
$L(\alpha)$ is an increasing function of $\alpha$, so it is maximized at the largest $\alpha$ satisfying the constraint $\alpha \le y_i$ for all $i$; hence $\hat{\alpha} = y_{(1)}$, the minimum observation, is the MLE for $\alpha$.