
Theory Exercises Week 2

1. Consider the standard linear regression model with n observations and k explanatory
variables

y = X\beta + \varepsilon, \qquad \varepsilon \mid X \sim N(0, \sigma^2 I_n)

(NB. Writing \varepsilon \mid X indicates that the distribution of \varepsilon conditional on X is specified.)
Define the OLS estimator b = (X'X)^{-1} X'y.

(a) Show that the expectation equals E[b \mid X] = \beta. What do we call this property? Can
you also derive E[b]? Which is more informative: E[b \mid X] or E[b]?
(b) Show that the variance-covariance matrix equals Var(b \mid X) = \sigma^2 (X'X)^{-1}.
(c) Show that X'X = \sum_{i=1}^{n} x_i x_i', where x_i' is row i of X.
(d) What happens to the variance-covariance matrix when the sample size increases?
What does this tell you about the asymptotic properties of the estimator?
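
A quick numerical sanity check of parts (a) and (b) is sketched below; it is not part of the exercise. It simulates the model in Python for one fixed design X, with values of \beta and \sigma chosen purely for illustration, and compares the Monte Carlo mean and covariance of b against \beta and \sigma^2 (X'X)^{-1}.

    # Monte Carlo check of E[b|X] = beta and Var(b|X) = sigma^2 (X'X)^{-1}.
    # All numbers below are arbitrary assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 3
    sigma = 1.5
    beta = np.array([1.0, -2.0, 0.5])        # assumed true coefficients

    X = rng.normal(size=(n, k))              # one fixed design: we condition on X
    XtX_inv = np.linalg.inv(X.T @ X)
    H = XtX_inv @ X.T                        # maps y to b = (X'X)^{-1} X'y

    reps = 20_000
    b_draws = np.empty((reps, k))
    for r in range(reps):
        eps = rng.normal(scale=sigma, size=n)   # eps | X ~ N(0, sigma^2 I_n)
        b_draws[r] = H @ (X @ beta + eps)       # OLS estimate for this draw

    print(b_draws.mean(axis=0))              # should be close to beta (part a)
    print(np.cov(b_draws.T))                 # should match sigma^2 (X'X)^{-1} (part b)
    print(sigma**2 * XtX_inv)

With 20,000 replications the simulated mean and covariance should agree with the theoretical values to a few decimals.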

2. (Continuation of exercise 1). Consider the standard linear regression model with n obser-
vations and k explanatory variables

y = X\beta + \varepsilon, \qquad \varepsilon \mid X \sim N(0, \sigma^2 I_n)
Define the OLS estimator b = (X'X)^{-1} X'y.

(a) Derive the distribution of the OLS estimator b (conditional on X).


(b) Derive the distribution of the OLS residuals e (conditional on X).
(c) Show that e and b are independent (conditional on X).
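
For orientation, one standard route through parts (a)-(c) is sketched below in LaTeX. The residual-maker matrix M is a name introduced here for convenience; it is not defined in the exercise text.

    % sketch, not a full solution
    \begin{align*}
    b &= \beta + (X'X)^{-1}X'\varepsilon
      &&\Rightarrow\; b \mid X \sim N\bigl(\beta,\ \sigma^2 (X'X)^{-1}\bigr) \\
    e &= y - Xb = M\varepsilon, \quad M = I_n - X(X'X)^{-1}X'
      &&\Rightarrow\; e \mid X \sim N\bigl(0,\ \sigma^2 M\bigr) \\
    \mathrm{Cov}(e, b \mid X) &= \sigma^2\, M X (X'X)^{-1} = 0
      &&\text{because } MX = 0
    \end{align*}

Since e and b are jointly normal given X, zero covariance implies independence, which is part (c).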

3. Using the model above, we wish to predict the value of y_p for a particular value of the
k \times 1 vector x_p of explanatory variables.

(a) What is the distribution of y_p?


(b) Show that x_p' b is an unbiased estimator of the expectation of y_p.
(c) Determine the distribution of x_p' b.
(d) Let \hat\beta be another linear unbiased estimator of \beta. By the Gauss-Markov theorem we
know that the OLS estimator b is better in the sense that

Var[\hat\beta] - Var[b] is positive semidefinite.

Show that the variance of the alternative predictor x_p' \hat\beta based on \hat\beta is at least as big
as that of the OLS-based predictor x_p' b.

(e) Consider the estimation of a'\beta, where a is a given k \times 1 vector of constants. Compare
the quality of the estimators a'\hat\beta and a'b.
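
A possible starting point for parts (d) and (e), stated here only as a hint: the scalar variance difference can be written as a quadratic form in the matrix that Gauss-Markov declares positive semidefinite.

    % hint, not a full solution
    \[
    \mathrm{Var}(x_p'\hat\beta \mid X) - \mathrm{Var}(x_p' b \mid X)
      = x_p' \bigl( \mathrm{Var}[\hat\beta] - \mathrm{Var}[b] \bigr) x_p \ \ge\ 0
    \]

The same argument with a in place of x_p applies to part (e).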

4. Consider the classical linear regression model, using k_1 regressors in X_1 and k_2 regressors in
X_2, and two associated OLS regressions: one with both X_1 and X_2 and the other with only X_1.

y = X_1 \beta_1 + X_2 \beta_2 + \varepsilon
y = X_1 b_1 + X_2 b_2 + e
y = X_1 b_R + e_R

(a) Derive the bias of b_R when the true \beta_2 \neq 0.


(b) Show that b_R = b_1 + (X_1'X_1)^{-1} X_1'X_2\, b_2.
(c) Suppose you have a model with only a constant and one explanatory variable (X_1:
n \times 2) and then you add another explanatory variable (X_2: n \times 1) that is positively
correlated with the original explanatory variable. As a result, the estimated coefficient
of the original explanatory variable will change. Can you use the relation in part (b)
to tell in which direction it will change?
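
The identity in part (b) holds exactly in any sample, which makes it easy to verify numerically. The Python sketch below, in which all numbers are arbitrary choices for illustration and not part of the exercise, builds a positively correlated second regressor, runs the long and short regressions, and compares b_R with b_1 + (X_1'X_1)^{-1} X_1'X_2 b_2.

    # Numerical check of the identity in part (b); all values are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    x2 = 0.8 * x + rng.normal(scale=0.6, size=n)   # positively correlated with x
    X1 = np.column_stack([np.ones(n), x])          # constant + original regressor
    X2 = x2[:, None]
    y = 1.0 + 2.0 * x + 1.5 * x2 + rng.normal(size=n)

    def ols(X, y):
        return np.linalg.solve(X.T @ X, X.T @ y)   # computes (X'X)^{-1} X'y

    b = ols(np.column_stack([X1, X2]), y)          # long regression: b1 and b2
    b1, b2 = b[:2], b[2:]
    bR = ols(X1, y)                                # short regression

    print(bR)                                      # the two printed vectors
    print(b1 + np.linalg.solve(X1.T @ X1, X1.T @ X2) @ b2)   # should coincide

In this simulated example both the auxiliary coefficient of X_2 on X_1 and b_2 are positive, so the slope in the short regression comes out larger than in the long one; this is the direction of change part (c) asks about.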
