Stock Watson 3u Exercise Solutions Chapter 4 Instructors
Introduction to Econometrics (3rd Updated Edition)
by James H. Stock and Mark W. Watson

Solutions to End-of-Chapter Exercises: Chapter 4*

(This version August 17, 2014)
*Limited distribution: For Instructors Only. Answers to all odd-numbered questions are provided to students on the textbook website. If you find errors in the solutions, please pass them along to us at mwatson@princeton.edu.
4.1. (a) The predicted average test score is
(c) Using the formula for β̂ 0 in Equation (4.8), we know the sample average of the
(d) Use the formula for the standard error of the regression (SER) in Equation (4.19) to get the sum of squared residuals: SSR = 12961. Because R² = 1 − SSR/TSS, the total sum of squares is

TSS = SSR / (1 − R²) = 12961 / (1 − 0.08) = 14088.

With n = 100 observations, s²Y = TSS/(n − 1) = 14088/99 ≈ 142.3, so sY = √(s²Y) = 11.9.
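The arithmetic in (d) can be checked numerically; a quick sketch, taking SSR = 12961 and R² = 0.08 from the solution and n = 100 from the exercise:

```python
import math

# Values from the solution: sum of squared residuals and R-squared.
ssr = 12961.0
r_squared = 0.08

# Since R^2 = 1 - SSR/TSS, solve for the total sum of squares.
tss = ssr / (1.0 - r_squared)   # about 14088

# The sample variance of Y uses n - 1 degrees of freedom.
n = 100
s_y_squared = tss / (n - 1)     # about 142.3
s_y = math.sqrt(s_y_squared)    # about 11.9

print(round(tss), round(s_y, 1))  # 14088 11.9
```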
4.2. (a) Substituting Height = 70, 65, and 74 inches into the equation, the predicted weights are 176.39, 156.69, and 192.15 pounds.
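These predictions can be reproduced directly; the sketch below assumes the estimated equation from the exercise, Weight-hat = −99.41 + 3.94 × Height (coefficients as reported in the textbook):

```python
# Estimated regression from the exercise (coefficients taken as given).
b0, b1 = -99.41, 3.94

# Predicted weights (pounds) at 70, 65, and 74 inches.
predictions = [round(b0 + b1 * h, 2) for h in (70, 65, 74)]
print(predictions)  # [176.39, 156.69, 192.15]
```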
(c) We have the following relations: 1 in = 2.54 cm and 1 lb = 0.4536 kg. Suppose the regression equation in the centimeter-kilogram space is

Weight-hat = γˆ0 + γˆ1 Height.
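The unit conversion follows from the two relations above: since Weight_kg = 0.4536 × Weight_lb and Height_cm = 2.54 × Height_in, the new coefficients are γˆ0 = 0.4536 βˆ0 and γˆ1 = 0.4536 βˆ1 / 2.54. A sketch of the conversion, using hypothetical illustrative coefficients rather than the exercise's estimates:

```python
# Convert an inch-pound regression (b0, b1) into centimeter-kilogram units.
# Weight_kg = 0.4536 * Weight_lb and Height_cm = 2.54 * Height_in, so
#   Weight_kg = 0.4536*b0 + (0.4536*b1/2.54) * Height_cm.
LB_TO_KG, IN_TO_CM = 0.4536, 2.54

def convert(b0, b1):
    return LB_TO_KG * b0, LB_TO_KG * b1 / IN_TO_CM

# Hypothetical coefficients, for illustration only (not from the exercise).
b0, b1 = -100.0, 4.0
g0, g1 = convert(b0, b1)

# Sanity check: both parameterizations give the same predicted weight.
height_in = 70.0
lb_pred_in_kg = LB_TO_KG * (b0 + b1 * height_in)
kg_pred = g0 + g1 * (IN_TO_CM * height_in)
print(abs(lb_pred_in_kg - kg_pred) < 1e-9)  # True
```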
4.3. (a) The coefficient 9.6 shows the marginal effect of Age on AWE; that is, AWE is expected to increase by $9.60 for each additional year of age. The intercept of the regression line is 696.7; it determines the overall level of the line.
(b) SER is in the same units as the dependent variable (Y, or AWE in this example). Thus SER is measured in dollars per week.
(e) No. The oldest worker in the sample is 65 years old. 99 years is far outside the
range of the sample data.
(f) No. The distribution of earnings is positively skewed and has kurtosis larger than that of the normal distribution.
(g) βˆ0 = Ȳ − βˆ1X̄, so that Ȳ = βˆ0 + βˆ1X̄. Thus the sample mean of AWE is 696.7 + 9.6 × (the sample mean of Age).
4.4. (a) ( R − R f ) = β ( Rm − R f ) + u ,
4.5. (a) ui represents factors other than time that influence the student's performance on the exam, including amount of time studying, aptitude for the material, and so forth. Some students will have studied more than average, others less; some students will have higher than average aptitude for the subject, others lower; and so forth.
(c) (2) is satisfied if this year’s class is typical of other classes, that is, students in
this year’s class can be viewed as random draws from the population of students
that enroll in the class. (3) is satisfied because 0 ≤ Yi ≤ 100 and Xi can take on
only two values (90 and 120).
(d) (i) 49 + 0.24 × 90 = 70.6; 49 + 0.24 ×120 = 77.8; 49 + 0.24 ×150 = 85.0
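The three predictions in (d)(i) are straightforward to verify; a quick sketch using the estimated line Score-hat = 49 + 0.24 × Time:

```python
# Predicted scores from the estimated line Score-hat = 49 + 0.24 * Time.
def predict(minutes):
    return 49 + 0.24 * minutes

scores = [round(predict(t), 1) for t in (90, 120, 150)]
print(scores)  # [70.6, 77.8, 85.0]
```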
4.7. The expectation of βˆ0 is obtained by taking expectations of both sides of Equation (4.8):

E(βˆ0) = E(Ȳ − βˆ1X̄)
       = E[(β0 + β1X̄ + (1/n)∑ᵢ₌₁ⁿ ui) − βˆ1X̄]
       = β0 + E[(β1 − βˆ1)X̄] + (1/n)∑ᵢ₌₁ⁿ E(ui)
       = β0,

where the third equality has used the facts that E(ui) = 0 and E[(β1 − βˆ1)X̄] = E[E((β1 − βˆ1)|X̄)X̄] = 0 because E[(β1 − βˆ1)|X̄] = 0 (see text equation (4.31)).
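The unbiasedness result can also be illustrated by simulation; a minimal Monte Carlo sketch with hypothetical population values (β0 = 2, β1 = 3, a uniform regressor, and standard normal errors, chosen only for illustration):

```python
import random

# Monte Carlo sketch of E(b0_hat) = beta0 under the least squares assumptions.
random.seed(0)
beta0, beta1, n, reps = 2.0, 3.0, 50, 2000

def ols(x, y):
    # Textbook OLS formulas: slope from equation (4.7), intercept from (4.8).
    xbar, ybar = sum(x) / len(x), sum(y) / len(y)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    return ybar - b1 * xbar, b1  # (b0_hat, b1_hat)

estimates = []
for _ in range(reps):
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [beta0 + beta1 * xi + random.gauss(0, 1) for xi in x]
    estimates.append(ols(x, y)[0])

# The average of b0_hat across samples should be close to beta0 = 2.0.
print(round(sum(estimates) / reps, 1))
```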
4.8. The only change is that the mean of βˆ0 is now β0 + 2. An easy way to see this is to rewrite the regression as Yi = (β0 + 2) + β1Xi + (ui − 2). The new regression error is (ui − 2), which has mean zero, and the new intercept is (β0 + 2). All of the assumptions of Key Concept 4.3 hold for this regression model.
4.9. (a) With βˆ1 = 0, βˆ0 = Ȳ, and Ŷi = βˆ0 = Ȳ for all i. Thus ESS = 0 and R² = 0.

(b) If R² = 0, then ESS = 0, so that Ŷi = Ȳ for all i. But Ŷi = βˆ0 + βˆ1Xi, so that Ŷi = Ȳ for all i implies that βˆ1 = 0, or that Xi is constant for all i. If Xi is constant for all i, then ∑ᵢ₌₁ⁿ(Xi − X̄)² = 0 and βˆ1 is undefined (see equation (4.7)).
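Part (a) is easy to confirm numerically; a sketch with hypothetical illustrative data:

```python
# Illustration of part (a): if b1_hat = 0, every fitted value equals ybar,
# so ESS = sum((yhat_i - ybar)^2) = 0 and R^2 = ESS/TSS = 0.
# Hypothetical data, for illustration only.
y = [3.0, 7.0, 5.0, 9.0, 6.0]

ybar = sum(y) / len(y)
b0_hat, b1_hat = ybar, 0.0        # regression with slope forced to zero
y_hat = [b0_hat for _ in y]       # every prediction is ybar

ess = sum((yh - ybar) ** 2 for yh in y_hat)
tss = sum((yi - ybar) ** 2 for yi in y)
r_squared = ess / tss
print(ess, r_squared)  # 0.0 0.0
```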
4.10. (a) E(ui|X = 0) = 0 and E(ui|X = 1) = 0. (Xi, ui) are i.i.d. so that (Xi, Yi) are i.i.d.
(because Yi is a function of Xi and ui). Xi is bounded and so has finite fourth
moment; the fourth moment is non-zero because Pr(Xi = 0) and Pr(Xi = 1) are both
non-zero. Following calculations like those in Exercise 2.13, ui also has a nonzero finite fourth moment.
where the first equality follows because E[(Xi − µX)ui] = 0, and the second
equality follows from the law of iterated expectations.
E[((Xi − µX)ui)² | Xi = 0] = 0.2² × 1, and E[((Xi − µX)ui)² | Xi = 1] = (1 − 0.2)² × 4.
4.11. (a) The least squares objective function is ∑ᵢ₌₁ⁿ(Yi − b1Xi)². Differentiating with respect to b1 yields

∂/∂b1 ∑ᵢ₌₁ⁿ(Yi − b1Xi)² = −2∑ᵢ₌₁ⁿ Xi(Yi − b1Xi).

Setting this to zero and solving for the least squares estimator yields

βˆ1 = ∑ᵢ₌₁ⁿ XiYi / ∑ᵢ₌₁ⁿ Xi².

(b) Following the same steps in (a) yields

βˆ1 = ∑ᵢ₌₁ⁿ Xi(Yi − 4) / ∑ᵢ₌₁ⁿ Xi².
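The through-the-origin estimator in (a) can be checked by confirming that it minimizes the objective function; a sketch with hypothetical illustrative data:

```python
# Check the through-the-origin estimator b1_hat = sum(x*y)/sum(x^2) by
# comparing the objective function at b1_hat against nearby slopes.
# Hypothetical data, for illustration only.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.0]

b1_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)

def sse(b1):
    # Least squares objective: sum of squared residuals with no intercept.
    return sum((yi - b1 * xi) ** 2 for xi, yi in zip(x, y))

# b1_hat should (weakly) beat any perturbed slope.
print(all(sse(b1_hat) <= sse(b1_hat + d) for d in (-0.1, -0.01, 0.01, 0.1)))  # True
```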
This implies

R² = ESS / ∑ᵢ₌₁ⁿ(Yi − Ȳ)² = [∑ᵢ₌₁ⁿ(Xi − X̄)(Yi − Ȳ)]² / [∑ᵢ₌₁ⁿ(Xi − X̄)² ∑ᵢ₌₁ⁿ(Yi − Ȳ)²]

   = [ (1/(n−1))∑ᵢ₌₁ⁿ(Xi − X̄)(Yi − Ȳ) ]² / [ ((1/(n−1))∑ᵢ₌₁ⁿ(Xi − X̄)²) ((1/(n−1))∑ᵢ₌₁ⁿ(Yi − Ȳ)²) ]

   = [sXY / (sX sY)]² = r²XY.
(c) Because rXY = sXY / (sX sY), rXY (sY / sX) = sXY / sX², and

sXY / sX² = [ (1/(n−1))∑ᵢ₌₁ⁿ(Xi − X̄)(Yi − Ȳ) ] / [ (1/(n−1))∑ᵢ₌₁ⁿ(Xi − X̄)² ] = ∑ᵢ₌₁ⁿ(Xi − X̄)(Yi − Ȳ) / ∑ᵢ₌₁ⁿ(Xi − X̄)² = βˆ1.
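Both identities, R² = r²XY and βˆ1 = rXY (sY/sX), can be verified numerically; a sketch with hypothetical illustrative data:

```python
import math

# Numerical check that R^2 = r_XY^2 and b1_hat = r_XY * sY/sX.
# Hypothetical data, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.7, 4.1, 5.4]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Sample covariance, standard deviations, and correlation.
s_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
s_x = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
s_y = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))
r_xy = s_xy / (s_x * s_y)

# OLS slope and intercept, then R^2 from the regression.
b1 = s_xy / s_x ** 2
b0 = ybar - b1 * xbar
ess = sum((b0 + b1 * xi - ybar) ** 2 for xi in x)
tss = sum((yi - ybar) ** 2 for yi in y)

print(abs(ess / tss - r_xy ** 2) < 1e-12,
      abs(b1 - r_xy * s_y / s_x) < 1e-12)  # True True
```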
4.13. The answer follows the derivations in Appendix 4.3 in "Large-Sample Normal Distribution of the OLS Estimator." In particular, the expression for νi is now νi = (Xi − µX)κui, so that var(νi) = κ²var[(Xi − µX)ui], and the factor κ² carries through the rest of the calculations.
4.14. Because βˆ0 = Ȳ − βˆ1X̄, Ȳ = βˆ0 + βˆ1X̄. The sample regression line is y = βˆ0 + βˆ1x, which therefore passes through the point of sample means (X̄, Ȳ).
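This property is easy to confirm numerically; a sketch with hypothetical illustrative data:

```python
# Check that the OLS line passes through the point of sample means (xbar, ybar).
# Hypothetical data, for illustration only.
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 4.0, 5.0, 9.0]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# OLS slope (equation (4.7)) and intercept (equation (4.8)).
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

# Evaluating the fitted line at xbar recovers ybar (up to rounding error).
print(abs((b0 + b1 * xbar) - ybar) < 1e-12)  # True
```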