STAT 3202: Practice 01

Autumn 2018, OSU

Exercise 1
Consider independent random variables X1, X2, and X3 with
• E[X1] = 1, Var[X1] = 4
• E[X2] = 2, SD[X2] = 3
• E[X3] = 3, SD[X3] = 5
(a) Calculate E[5X1 + 2].
Solution:

E[5X1 + 2] = 5 · E[X1 ] + 2 = 7

(b) Calculate E[4X1 + 2X2 − 6X3 ].


Solution:

E[4X1 + 2X2 − 6X3 ] = 4 · E[X1 ] + 2 · E[X2 ] − 6 · E[X3 ]


= 4 · (1) + 2 · (2) − 6 · (3) = −10

(c) Calculate Var[5X1 + 2].


Solution:

Var[5X1 + 2] = 5² · Var[X1] = 25 · (4) = 100

(d) Calculate Var[4X1 + 2X2 − 6X3 ].


Solution:

Var[4X1 + 2X2 − 6X3] = 4² · Var[X1] + 2² · Var[X2] + (−6)² · Var[X3]

= 4² · (4) + 2² · (3²) + (−6)² · (5²) = 1000
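These linearity rules are easy to check numerically. A quick sketch in R, using only the moments given in the problem:

```r
# Moments from the problem statement
mu <- c(1, 2, 3)       # E[X1], E[X2], E[X3]
v  <- c(4, 3^2, 5^2)   # Var[X1], Var[X2], Var[X3]
a  <- c(4, 2, -6)      # coefficients in 4X1 + 2X2 - 6X3

sum(a * mu)    # E[4X1 + 2X2 - 6X3] = -10
sum(a^2 * v)   # Var[4X1 + 2X2 - 6X3] = 1000 (requires independence)
```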

Exercise 2
Consider random variables H and Q with
• E[H] = 3, Var[H] = 16
• SD[Q] = 4, E[Q²/5] = 3.2

(a) Calculate E[5H² − 10].
Solution:

Var[H] = E[H²] − (E[H])²

16 = E[H²] − 3², so E[H²] = 25

E[5H² − 10] = 5 · E[H²] − 10 = 5 · 25 − 10 = 115

(b) Calculate E[Q].


Solution:

E[Q²/5] = 3.2, so E[Q²] = 5 · 3.2 = 16

Var[Q] = (SD[Q])² = 4² = 16

Var[Q] = E[Q²] − (E[Q])²

16 = 16 − (E[Q])²

E[Q] = 0

Exercise 3
Consider a random variable S with probability density function

f(s) = (2s + 10)/9000,    40 ≤ s ≤ 100.
(a) Calculate E[S].
Solution:

E[S] = ∫₄₀¹⁰⁰ s · (2s + 10)/9000 ds = 74

(b) Calculate SD[S].


Solution:

E[S²] = ∫₄₀¹⁰⁰ s² · (2s + 10)/9000 ds = 5760

Var[S] = E[S²] − (E[S])² = 5760 − 74² = 284

SD[S] = √Var[S] = √284 ≈ 16.8523
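Both integrals can be verified numerically with R's integrate (the density and limits are those given above):

```r
# Density from the problem: f(s) = (2s + 10)/9000 on [40, 100]
f <- function(s) (2 * s + 10) / 9000

ES  <- integrate(function(s) s   * f(s), 40, 100)$value  # E[S]   = 74
ES2 <- integrate(function(s) s^2 * f(s), 40, 100)$value  # E[S^2] = 5760
sqrt(ES2 - ES^2)                                         # SD[S], about 16.8523
```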

Exercise 4
Consider independent random variables X and Y with
• X ∼ N(µX = 2, σ²X = 9)
• Y ∼ N(µY = 5, σ²Y = 4)
(a) Calculate P [X > 5].
Solution:
 
P[X > 5] = P[(X − µX)/σX > (5 − 2)/3] = P[Z > 1] = 1 − P[Z < 1] = 1 − 0.8413 = 0.1587
pnorm(q = 5, mean = 2, sd = sqrt(9), lower.tail = FALSE)

## [1] 0.1586553
1 - pnorm(1)

## [1] 0.1586553
(b) Calculate P [X + 2Y > 5].
Solution:

E[X + 2Y ] = 1 · E[X] + 2 · E[Y ] = 12

Var[X + 2Y] = 1² · Var[X] + 2² · Var[Y] = 25

X + 2Y ∼ N(µ_{X+2Y} = 12, σ²_{X+2Y} = 25)

P[X + 2Y > 5] = P[(X + 2Y − µ_{X+2Y})/σ_{X+2Y} > (5 − 12)/5] = P[Z > −1.4] = 1 − P[Z < −1.4] = 1 − 0.0808 = 0.9192

pnorm(q = 5, mean = 12, sd = sqrt(25), lower.tail = FALSE)

## [1] 0.9192433
1 - pnorm(-1.4)

## [1] 0.9192433

Exercise 5
Consider random variables Y1 , Y2 , and Y3 with
• E[Y1 ] = 1, E[Y2 ] = −2, E[Y3 ] = 3
• Var[Y1 ] = 4, Var[Y2 ] = 6, Var[Y3 ] = 8
• Cov[Y1 , Y2 ] = 1, Cov[Y1 , Y3 ] = −1, Cov[Y2 , Y3 ] = 0

(a) Calculate Var[3Y1 − 2Y2 ].
Solution:

Var[3Y1 − 2Y2] = 3² · Var[Y1] + (−2)² · Var[Y2] + 2(3)(−2) · Cov[Y1, Y2] = 48

(b) Calculate Var[3Y1 − 4Y2 + 2Y3 ].


Solution:

Var[3Y1 − 4Y2 + 2Y3] = (3)² · Var[Y1] + (−4)² · Var[Y2] + (2)² · Var[Y3]

+ 2(3)(−4) · Cov[Y1, Y2] + 2(3)(2) · Cov[Y1, Y3] + 2(−4)(2) · Cov[Y2, Y3]
= 128
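Both parts are instances of the quadratic form Var[aᵀY] = aᵀΣa, where Σ is the covariance matrix assembled from the given variances and covariances. A quick check in R:

```r
# Covariance matrix of (Y1, Y2, Y3) from the given values
Sigma <- matrix(c( 4, 1, -1,
                   1, 6,  0,
                  -1, 0,  8), nrow = 3, byrow = TRUE)

drop(t(c(3, -2, 0)) %*% Sigma %*% c(3, -2, 0))  # Var[3Y1 - 2Y2] = 48
drop(t(c(3, -4, 2)) %*% Sigma %*% c(3, -4, 2))  # Var[3Y1 - 4Y2 + 2Y3] = 128
```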

Exercise 6
Consider using ξ̂ to estimate ξ.
(a) If Bias[ξ̂] = 5 and Var[ξ̂] = 4, calculate MSE[ξ̂].

Solution:

MSE[ξ̂] = (Bias[ξ̂])² + Var[ξ̂] = 5² + 4 = 29

(b) If ξ̂ is unbiased, ξ = 6, and MSE[ξ̂] = 30, calculate E[ξ̂²].

Solution:
First we use the definition of MSE to recover the variance of ξ̂.

MSE[ξ̂] = (Bias[ξ̂])² + Var[ξ̂]

30 = 0² + Var[ξ̂]

Var[ξ̂] = 30

Then using the fact that the estimator is unbiased, we have

E[ξ̂²] = Var[ξ̂] + (E[ξ̂])² = 30 + 6² = 66

Exercise 7
Using the identity

(θ̂ − θ) = (θ̂ − E[θ̂]) + (E[θ̂] − θ) = (θ̂ − E[θ̂]) + Bias[θ̂]

show that

MSE[θ̂] = E[(θ̂ − θ)²] = Var[θ̂] + (Bias[θ̂])²

Solution:
We compute:
MSE[θ̂] = E[(θ̂ − θ)²]

= E[((θ̂ − E[θ̂]) + (E[θ̂] − θ))²]    (using the given identity)

= E[(θ̂ − E[θ̂])² + 2(θ̂ − E[θ̂])(E[θ̂] − θ) + (E[θ̂] − θ)²]    (algebra)

= E[(θ̂ − E[θ̂])²] + 2(E[θ̂] − θ) · E[θ̂ − E[θ̂]] + (E[θ̂] − θ)²    (linearity of expectation)

where (E[θ̂] − θ) in the last line is a constant.

Note that:
• E[(θ̂ − E[θ̂])²] = Var[θ̂], by the definition of variance
• E[θ̂ − E[θ̂]] = E[θ̂] − E[θ̂] = 0
• (E[θ̂] − θ)² = (Bias[θ̂])², by the definition of bias

Substituting these into the last expression above, we have

MSE[θ̂] = Var[θ̂] + (Bias[θ̂])²

Exercise 8
Let X1 , X2 , . . . , Xn denote a random sample from a population with mean µ and variance σ 2 .
Consider three estimators of µ:

µ̂1 = (X1 + X2 + X3)/3,    µ̂2 = X1/4 + (X2 + · · · + Xn−1)/(2(n − 2)) + Xn/4,    µ̂3 = X̄

Calculate the mean squared error for each estimator. (It will be useful to first calculate their biases and variances.)
Solution:
We first calculate the expected value of each estimator.
 
E[µ̂1] = E[(X1 + X2 + X3)/3] = (1/3)(µ + µ + µ) = µ

E[µ̂2] = E[X1/4 + (X2 + · · · + Xn−1)/(2(n − 2)) + Xn/4] = µ/4 + (n − 2)µ/(2(n − 2)) + µ/4 = µ/4 + µ/2 + µ/4 = µ

E[µ̂3] = E[X̄] = µ

Thus we see that each estimator is unbiased.
Next we calculate the variance of each estimator.

Var[µ̂1] = Var[(X1 + X2 + X3)/3] = (1/9)(σ² + σ² + σ²) = σ²/3

Var[µ̂2] = Var[X1/4 + (X2 + · · · + Xn−1)/(2(n − 2)) + Xn/4] = σ²/16 + (n − 2)σ²/(4(n − 2)²) + σ²/16 = nσ²/(8(n − 2))

Var[µ̂3] = Var[X̄] = σ²/n
Since each estimator is unbiased, and for any estimator

MSE[θ̂] = E[(θ̂ − θ)²] = (Bias[θ̂])² + Var[θ̂]

we see that the resulting mean squared errors are simply the variances.

MSE[µ̂1] = σ²/3

MSE[µ̂2] = nσ²/(8(n − 2))

MSE[µ̂3] = σ²/n
n
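The ranking of the three estimators can be checked by simulation. A sketch in R, assuming (arbitrarily, since the result holds for any population) a normal population with µ = 5, σ = 2, and n = 10:

```r
set.seed(42)
n <- 10; mu <- 5; sigma <- 2

# Each column holds one draw of (mu-hat 1, mu-hat 2, mu-hat 3)
sim <- replicate(1e5, {
  x <- rnorm(n, mu, sigma)
  c(mean(x[1:3]),                                        # mu-hat 1
    x[1]/4 + sum(x[2:(n - 1)])/(2*(n - 2)) + x[n]/4,     # mu-hat 2
    mean(x))                                             # mu-hat 3
})

rowMeans((sim - mu)^2)
# approximately sigma^2/3 = 1.333, n*sigma^2/(8(n-2)) = 0.625, sigma^2/n = 0.4
```

With 100,000 replications the simulated MSEs agree closely with the formulas.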

Exercise 9
Let X1 , X2 , . . . , Xn denote a random sample from a distribution with density

f(x) = (1/θ) e^(−x/θ),    x > 0, θ > 0
Consider five estimators of θ:

θ̂1 = X1,    θ̂2 = (X1 + X2)/2,    θ̂3 = (X1 + 2X2)/3,    θ̂4 = X̄,    θ̂5 = 5

Calculate the mean squared error for each estimator. (It will be useful to first calculate their biases and variances.)
Solution:
First, notice that this is an exponential distribution, thus we have

E[Xi] = θ,    Var[Xi] = θ²

Following the work from the previous exercise, we will find

MSE[θ̂1] = θ²

MSE[θ̂2] = θ²/2

MSE[θ̂3] = 5θ²/9

MSE[θ̂4] = θ²/n

MSE[θ̂5] = (5 − θ)²

Note that the first four estimators are unbiased, and thus their mean squared error is simply their variance.
The fifth estimator has no variance, and thus its mean squared error is its bias squared.
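These five mean squared errors can be verified by simulation as well. A sketch in R, with the arbitrary choices θ = 2 and n = 8:

```r
set.seed(1)
theta <- 2; n <- 8

# Each column holds one draw of (theta-hat 1, ..., theta-hat 5)
est <- replicate(1e5, {
  x <- rexp(n, rate = 1/theta)   # exponential with mean theta
  c(x[1], (x[1] + x[2])/2, (x[1] + 2*x[2])/3, mean(x), 5)
})

rowMeans((est - theta)^2)
# approximately theta^2 = 4, theta^2/2 = 2, 5*theta^2/9 = 2.22,
#               theta^2/n = 0.5, (5 - theta)^2 = 9
```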

Exercise 10
Suppose that E[θ̂1] = E[θ̂2] = θ, Var[θ̂1] = σ₁², Var[θ̂2] = σ₂², and Cov[θ̂1, θ̂2] = σ₁₂. Consider the unbiased estimator

θ̂3 = aθ̂1 + (1 − a)θ̂2 .

If θ̂1 and θ̂2 are independent, what value should be chosen for the constant a in order to minimize the variance
and thus mean squared error of θ̂3 as an estimator of θ?
Solution:
We first verify that θ̂3 is unbiased.
E[θ̂3] = E[aθ̂1 + (1 − a)θ̂2] = a · E[θ̂1] + (1 − a) · E[θ̂2] = aθ + (1 − a)θ = θ

If θ̂1 and θ̂2 are independent, then


Var[θ̂3] = Var[aθ̂1 + (1 − a)θ̂2] = a² · Var[θ̂1] + (1 − a)² · Var[θ̂2] = a²σ₁² + (1 − a)²σ₂²

Thus we want to find a to minimize the function

f(a) = a²σ₁² + (1 − a)²σ₂²

Taking the derivative, we have

f′(a) = 2aσ₁² − 2(1 − a)σ₂²

Setting this equal to 0 and solving for a gives

a = σ₂² / (σ₁² + σ₂²)

Finally, we check that this gives a minimum by evaluating the second derivative.

f″(a) = 2σ₁² + 2σ₂²

Since this is always positive,

a = σ₂² / (σ₁² + σ₂²)

gives a minimum.
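The closed-form minimizer can be double-checked with a numerical optimizer. A sketch in R, with the arbitrary values σ₁² = 9 and σ₂² = 4:

```r
s1sq <- 9; s2sq <- 4   # arbitrary example variances

# Variance of the combined estimator as a function of the weight a
f <- function(a) a^2 * s1sq + (1 - a)^2 * s2sq

opt <- optimize(f, interval = c(0, 1))
opt$minimum           # numerical minimizer, about 0.3077
s2sq / (s1sq + s2sq)  # closed-form answer: 4/13, about 0.3077
```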

Exercise 11
Let Y have a binomial distribution with parameters n and p. Consider two estimators for p:

p̂1 = Y/n

and

p̂2 = (Y + 1)/(n + 2)
For what values of p does p̂2 achieve a lower mean square error than p̂1 ?
Solution:
Recall that for a binomial random variable we have

E [Y ] = np
Var [Y ] = np(1 − p)

We first calculate the bias, variance, and mean squared error of p̂1 .

Bias [p̂1 ] = E [p̂1 ] − p = p − p = 0


Var[p̂1] = (1/n²) · Var[Y] = np(1 − p)/n² = p(1 − p)/n

MSE[p̂1] = p(1 − p)/n
Next we calculate the bias, variance, and mean squared error of p̂2.
 
E[p̂2] = E[(Y + 1)/(n + 2)] = (np + 1)/(n + 2)

Bias[p̂2] = E[p̂2] − p = (np + 1)/(n + 2) − p

Var[p̂2] = Var[(Y + 1)/(n + 2)] = (1/(n + 2)²) · Var[Y] = np(1 − p)/(n + 2)²

MSE[p̂2] = (Bias[p̂2])² + Var[p̂2] = ((np + 1)/(n + 2) − p)² + np(1 − p)/(n + 2)²

Finally, we solve for the values of p where MSE[p̂2] < MSE[p̂1]. We have

MSE [p̂2 ] < MSE [p̂1 ]

((np + 1)/(n + 2) − p)² + np(1 − p)/(n + 2)² < p(1 − p)/n

Now we decide to be lazy and use WolframAlpha.


Thus, MSE [p̂2 ] < MSE [p̂1 ] when

(1/2)(1 − √((n + 1)/(2n + 1))) < p < (1/2)(1 + √((n + 1)/(2n + 1)))

Note that this interval is symmetric about 0.5, which makes sense because p̂2 is biased towards 0.5.
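The crossover interval can be verified by coding both MSE formulas directly. A sketch in R, with the arbitrary choice n = 10, for which the interval is roughly (0.138, 0.862):

```r
n <- 10

mse1 <- function(p) p * (1 - p) / n
mse2 <- function(p) ((n*p + 1)/(n + 2) - p)^2 + n * p * (1 - p) / (n + 2)^2

lo <- (1 - sqrt((n + 1)/(2*n + 1))) / 2   # lower endpoint, about 0.138
hi <- (1 + sqrt((n + 1)/(2*n + 1))) / 2   # upper endpoint, about 0.862

mse2(0.5)  < mse1(0.5)    # TRUE:  p inside the interval favors p-hat 2
mse2(0.01) < mse1(0.01)   # FALSE: p outside the interval favors p-hat 1
mse2(lo) - mse1(lo)       # essentially 0: the MSEs cross at the endpoints
```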

Exercise 12
Let X1 , X2 , . . . , Xn denote a random sample from a population with mean µ and variance σ 2 .
Create an unbiased estimator for µ2 . Hint: Start with X̄ 2 .
Solution:
First note that from previous results we know that

E[X̄] = µ,    Var[X̄] = σ²/n
Also, recall that by rearranging the variance formula in terms of the first and second moments, for any random variable we have

E[Y²] = Var[Y] + (E[Y])²

Using these, we find that

E[X̄²] = Var[X̄] + (E[X̄])² = σ²/n + µ²

This expression has the µ² that we’re after, but it also has an extra term that we would like to eliminate.
Recall that

E[S²] = σ²

Let’s now consider the estimator

ξ̂ = X̄² − S²/n
Its expected value is

E[ξ̂] = E[X̄² − S²/n] = E[X̄²] − (1/n) · E[S²] = σ²/n + µ² − σ²/n = µ²

Thus ξ̂ = X̄² − S²/n is an unbiased estimator for µ².

Exercise 13
Let X1, X2, X3, . . . , Xn be iid random variables from U(θ, θ + 2). (That is, a uniform distribution with a minimum of θ and a maximum of θ + 2.)
Consider the estimator

θ̂ = (1/n) Σᵢ₌₁ⁿ Xi = X̄

(a) Calculate the bias of θ̂ when estimating θ.


Solution:

E[θ̂] = E[X̄] = E[X] = (θ + (θ + 2))/2 = θ + 1

Bias[θ̂] = E[θ̂] − θ = (θ + 1) − θ = 1

(b) Calculate the variance of θ̂.


Solution:

Var[θ̂] = Var[X̄] = Var[X]/n = [(θ + 2) − θ]²/(12n) = 1/(3n)

(c) Calculate the mean squared error of θ̂ when estimating θ.


Solution:

MSE[θ̂] = (Bias[θ̂])² + Var[θ̂] = 1² + 1/(3n) = (3n + 1)/(3n)
