Midyear Examination
Econometrics Honours
Economics 467
Thursday December 6, 2007, 14.00–17.00
Instructions:
As for the midterm exam, do not be upset if this exam seems too long for you. Do
not waste time on questions for which you do not see how to obtain the answer.
Rather, answer as much as you can. Everything you do will be taken into account.
However, it should be worth your while to take the time to read over the entire
exam before plunging in.
1. Consider the linear regression model

y = Xβ + u,    E(uu⊤) = Ω,

where the number of observations, n, is equal to 3m. The first three rows of the
matrix X are
⎡ 1  4 ⎤
⎢ 1  8 ⎥,
⎣ 1 15 ⎦
and every subsequent group of three rows is identical to this first group. The
covariance matrix Ω is diagonal, with typical diagonal element equal to ω²x²t2,
where ω > 0, and xt2 is the t-th element of the second column of X.
What is the variance of β̂2 , the OLS estimate of β2 ? What is the probability limit, as
n → ∞, of the ratio of the conventional estimate of this variance, which incorrectly
assumes homoskedasticity, to a heteroskedasticity-consistent estimate?
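As an illustrative numerical check, not part of the required answer, the sketch below computes the exact variance of β̂2 from the sandwich formula and compares it with the probability limit of the conventional homoskedasticity-based estimate. The values ω = 1 and m = 2000 are arbitrary normalizations: the ratio depends on neither.

```python
import numpy as np

# Build X by repeating the first three rows m times; Omega = diag(omega^2 * x_t2^2).
omega, m = 1.0, 2000
block = np.array([[1.0, 4.0], [1.0, 8.0], [1.0, 15.0]])
X = np.tile(block, (m, 1))                      # n = 3m rows
n = X.shape[0]

XtX_inv = np.linalg.inv(X.T @ X)
Omega_diag = omega**2 * X[:, 1]**2              # diagonal of Omega

# True variance of the OLS estimator: (X'X)^{-1} X' Omega X (X'X)^{-1}
sandwich = XtX_inv @ (X.T * Omega_diag) @ X @ XtX_inv
var_beta2 = sandwich[1, 1]

# The conventional estimate replaces Omega by sigma^2 I; its probability limit
# uses the average diagonal element of Omega in place of sigma^2.
sigma2_bar = Omega_diag.mean()
conventional = sigma2_bar * XtX_inv[1, 1]

print(var_beta2, conventional, conventional / var_beta2)
```

Because the rows of X repeat exactly, the computed ratio is already the probability limit; no simulation is needed.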
2. Consider the linear regression model

ct = α + βyt + γ0 ct−1 + γ1 yt−1 + εt ,    (1)
where ct and yt are the logarithms of consumption and income, respectively. Show
that this model contains as a special case the following linear model with AR(1)
disturbances:
ct = δ0 + δ1 yt + ut , with ut = ρut−1 + εt , (2)
where εt is IID. Write down the relation between the parameters δ0 , δ1 , and ρ of this
model and the parameters α, β, γ0 , and γ1 of (1). How many and what restrictions
are imposed on the latter set of parameters by the model (2)?
Formulate a GNR, based on estimates under the null hypothesis, that allows you
to use a t test to test the restriction imposed on the model (1) by the model (2).
3. Show that the covariance of the random variable E(X1 | X2 ) and the random
variable X1 − E(X1 | X2 ) is zero.
Economics 467 Page 3 of seven pages
Show that the variance of the random variable X1 − E(X1 | X2 ) cannot be greater
than the variance of X1 , and that the two variances are equal if X1 and X2 are
independent.
Let a random variable X1 be distributed as N(0, 1). Now suppose that a second
random variable, X2 , is constructed as the product of X1 and an independent
random variable Z, which equals 1 with probability 1/2 and −1 with probability 1/2.
What is the (marginal) distribution of X2 ? What is the covariance between X1 and
X2 ? What is the distribution of X1 conditional on X2 ?
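A quick simulation, purely illustrative and not required by the question, makes the answers plausible: X2 = Z·X1 is again N(0, 1) by symmetry, its covariance with X1 is zero, and yet the two variables are not independent, since |X1| = |X2| always.

```python
import numpy as np

# Simulate X1 ~ N(0,1) and Z = +1/-1 with equal probability, independent of X1.
rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.standard_normal(n)
z = rng.choice([-1.0, 1.0], size=n)
x2 = z * x1

print(x2.mean(), x2.var())          # close to 0 and 1: X2 is N(0, 1)
print(np.mean(x1 * x2))             # close to 0: zero covariance
print(np.all(np.abs(x1) == np.abs(x2)))  # True: given X2, X1 is +-X2
```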
4. Show that, if the m-vector z ∼ N(µ, I), the expectation of the noncentral
chi-squared variable z⊤z is m + µ⊤µ.
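A Monte Carlo check of this expectation, with an arbitrary choice of m = 4 and mean vector for the sketch:

```python
import numpy as np

# Draw many z ~ N(mu, I) and compare the sample mean of z'z with m + mu'mu.
rng = np.random.default_rng(1)
m = 4
mu = np.array([1.0, -2.0, 0.5, 3.0])
z = mu + rng.standard_normal((500_000, m))      # each row is one draw of z
mc_mean = (z**2).sum(axis=1).mean()
print(mc_mean, m + mu @ mu)                     # both close to 18.25
```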
If F is a strictly increasing CDF defined on an interval [a, b] of the real line, where
either or both of a and b may be infinite, then the inverse function F −1 is a well-
defined mapping from [0, 1] on to [a, b]. Show that, if the random variable X
is a drawing from the U(0, 1) distribution, then F −1 (X) is a drawing from the
distribution of which F is the CDF.
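The inverse-CDF construction described above can be sketched for a concrete case, the exponential distribution, whose CDF F(x) = 1 − e^{−x} on [0, ∞) has the closed-form inverse F⁻¹(u) = −log(1 − u):

```python
import numpy as np

# Transform U(0,1) draws through the inverse exponential CDF.
rng = np.random.default_rng(2)
u = rng.uniform(size=1_000_000)       # U(0,1) draws
x = -np.log(1.0 - u)                  # F^{-1}(U): exponential(1) draws
print(x.mean(), x.var())              # both close to 1, as for exponential(1)
```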
Suppose that the random variable z follows the N(0, 1) density. If z is a test statistic
used in a two-tailed test, the corresponding P value is p(z) ≡ 2(1 − Φ(|z|)). Show
that Fp (·), the CDF of p(z), is the CDF of the uniform distribution on [0, 1]. In
other words, show that Fp (x) = x for all x ∈ [0, 1].
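A simulation sketch of this uniformity: the two-tailed P values of N(0, 1) draws should have an empirical CDF close to the 45-degree line.

```python
import numpy as np
from math import erf, sqrt

# Phi via the error function; p(z) = 2(1 - Phi(|z|)).
Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
rng = np.random.default_rng(3)
z = rng.standard_normal(200_000)
p = 2.0 * (1.0 - Phi(np.abs(z)))
for q in (0.1, 0.5, 0.9):
    print(q, (p <= q).mean())         # empirical CDF close to q at each point
```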
Consider the regression model

y = α + βx1 + (1/β)x2 + u.
Describe clearly and carefully how to compute the one-step efficient estimator of the
parameters α and β. Your procedure may not make use of any nonlinear estimation
procedure.
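One admissible procedure can be sketched as follows: an auxiliary OLS regression of y on a constant, x1, and x2 is linear in its three coefficients and gives a root-n consistent estimate of β (the coefficient on x1); a single Gauss-Newton regression (GNR) step from those starting values is then one-step efficient. The simulated data, parameter values, and seed below are assumptions of the sketch, not exam data.

```python
import numpy as np

# Simulate from y = alpha + beta*x1 + (1/beta)*x2 + u with alpha=1, beta=2.
rng = np.random.default_rng(4)
n, alpha, beta = 500, 1.0, 2.0
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = alpha + beta * x1 + x2 / beta + rng.standard_normal(n)

# Step (i): unrestricted auxiliary OLS, linear in three coefficients.
Z = np.column_stack([np.ones(n), x1, x2])
a0, b1, _ = np.linalg.lstsq(Z, y, rcond=None)[0]
alpha0, beta0 = a0, b1                     # consistent starting values

# Step (ii): one GNR step. Regressand = residuals at the starting values;
# regressors = derivatives of the regression function wrt (alpha, beta).
resid = y - alpha0 - beta0 * x1 - x2 / beta0
R = np.column_stack([np.ones(n), x1 - x2 / beta0**2])
step = np.linalg.lstsq(R, resid, rcond=None)[0]
alpha1, beta1 = alpha0 + step[0], beta0 + step[1]
print(alpha1, beta1)                       # close to (1.0, 2.0)
```

No nonlinear optimization is used at any point: both steps are ordinary least squares.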
In the regression model
y = α + βx + u,
the disturbances are serially correlated:
ut = ρut−1 + vt ,
where the vt are white noise. In order to estimate the parameters α, β, and ρ of
this model, the dependent variable y was first regressed on the constant, x, and the
first lags y−1 and x−1 of these two variables, as follows:
y = γ0 + γ1 x + γ2 y−1 + γ3 x−1 + u.
Next one-step efficient estimation was undertaken by use of a GNR. How were the
artificial variables, regressand and regressors, constructed? In the following set of
results, r denotes the regressand, c the constant, Rb the regressor that corresponds
to the parameter β, and Rr the one that corresponds to ρ.
ols r c Rb Rr

Ordinary Least Squares:

Variable    Parameter Estimate    Standard Error    T Statistic
c                  5.5472681          4.1977472       1.3214869
Rb                -0.0423663          0.1262615      -0.3355445
Rr                -0.0013788          0.1371420      -0.0100539

Number of observations = 49
Sum of squared residuals = 18971.4482373
Explained sum of squares = 1067.9629941
Estimate of residual variance (with d.f. correction) = 412.4227878
R squared (uncentred) = 0.0532931
Give the numerical values for the one-step efficient estimates of α, β, and ρ. Is the
hypothesis that ρ = 0 compatible with the data? Why or why not?
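The construction of the artificial variables can be sketched numerically. The nonlinear model is y_t = α(1−ρ) + βx_t + ρy_{t−1} − ρβx_{t−1} + v_t; starting values come from the auxiliary OLS regression (ρ́ = γ̂2, β́ = γ̂1, ά = γ̂0/(1−γ̂2)), and the GNR regressors are the derivatives of the regression function. The data-generating values, sample size, and seed below are assumptions of the sketch, not the exam's data set.

```python
import numpy as np

def gnr_variables(y, x, alpha0, beta0, rho0):
    """Regressand and regressors of the GNR at (alpha0, beta0, rho0)."""
    yt, yl = y[1:], y[:-1]          # current values and first lags
    xt, xl = x[1:], x[:-1]
    # Residuals of the transformed model at the starting values
    r = yt - alpha0 - beta0 * xt - rho0 * (yl - alpha0 - beta0 * xl)
    Ra = (1.0 - rho0) * np.ones_like(r)        # derivative wrt alpha ("c")
    Rb = xt - rho0 * xl                        # derivative wrt beta
    Rr = yl - alpha0 - beta0 * xl              # derivative wrt rho (lagged residual)
    return r, np.column_stack([Ra, Rb, Rr])

# Simulate y_t = alpha + beta*x_t + u_t with AR(1) disturbances.
rng = np.random.default_rng(5)
n, alpha, beta, rho = 2000, 1.0, 0.5, 0.6
x = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.standard_normal()
y = alpha + beta * x + u

# Starting values from OLS of y on (1, x, y_{-1}, x_{-1}).
Z = np.column_stack([np.ones(n - 1), x[1:], y[:-1], x[:-1]])
g = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
a0, b0, r0 = g[0] / (1 - g[2]), g[1], g[2]

# One-step efficient estimates = starting values + GNR coefficients.
r, R = gnr_variables(y, x, a0, b0, r0)
onestep = np.array([a0, b0, r0]) + np.linalg.lstsq(R, r, rcond=None)[0]
print(onestep)    # close to (1.0, 0.5, 0.6)
```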
5. What are the defining properties of an orthogonal projection matrix? Can such
a matrix be non-square? Why or why not?
If the linear regression model y = Xβ + u is estimated by ordinary least squares,
the vector PX y of fitted values and the vector MX y of residuals are orthogonal.
Here PX and MX are the orthogonal projections on to the linear span of the
columns of X and the orthogonal complement of that span, respectively. Then
by Pythagoras’ Theorem, the total sum of squares is equal to the sum of squared
residuals plus the explained sum of squares:

‖y‖² = ‖MX y‖² + ‖PX y‖².    (3)
Consider the unbiased estimator β̂W defined as the solution of the estimating
equations

W⊤(y − Xβ) = 0.
Express the fitted values, that is, the elements of the vector X β̂W , in the form
P y, and the residuals, that is, the elements of the vector y − X β̂W , in the form
M y. Show that P and M are a pair of complementary projections. Is the sum of
these squared residuals plus the sum of these squared fitted values equal to the
total sum of squares? Why or why not?
Given the result about P and M being complementary projections, it follows that
y = P y + M y. Multiplying on the left by y⊤ gives

y⊤y = y⊤P y + y⊤M y.
Explain clearly why this equation is not equivalent in general to the equation (3).
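A small numerical illustration of the point, with arbitrary matrices chosen for the sketch: P = X(W⊤X)⁻¹W⊤ and M = I − P are complementary (idempotent, with PM = 0) but oblique rather than orthogonal, and Pythagoras’ identity (3) fails.

```python
import numpy as np

# Arbitrary illustrative X, W, y; here W spans a different space from X,
# so P is an oblique projection.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
W = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 4.0], [1.0, 8.0]])
y = np.array([1.0, 2.0, 0.0, 3.0])

P = X @ np.linalg.inv(W.T @ X) @ W.T
M = np.eye(4) - P

# Complementary projections: idempotent and annihilating each other.
print(np.allclose(P @ P, P), np.allclose(M @ M, M), np.allclose(P @ M, np.zeros((4, 4))))
print(np.allclose(P, P.T))                  # False: P is not symmetric
lhs = y @ y                                  # total sum of squares
rhs = (P @ y) @ (P @ y) + (M @ y) @ (M @ y)  # "fitted" + "residual" sums of squares
print(lhs, rhs)                              # 14.0 versus roughly 14.93
```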
6. In the context of the model

yt = α + ρyt−1 + βxt + γxt−1 + ut ,    (4)

it is desired to test the hypothesis that ρ = 1 and β + γ = 0. Express the model that
corresponds to this null hypothesis in terms of the first differences ∆yt ≡ yt − yt−1
and ∆xt ≡ xt − xt−1 .
Rewrite the model (4) using the first differences and the lags of yt and xt , with no
explicit mention of the current values yt and xt .
Here are some of the results obtained by running the regression (4) as written.
The next printout gives the results of running the regression in the form that uses
first differences and lags without the current values of the variables.
As you see, the identities of the regressors have been blanked out, except for the
constant. Identify which line of the printout corresponds to which variable, and
explain how you arrive at your answer.
Compute the value of an F statistic that allows you to test the null hypothesis. If
you wanted to test the component hypotheses, that ρ = 1 and β + γ = 0 separately,
compute the values of two test statistics that could be used for this purpose. What
would be the asymptotic limiting distributions of these statistics if the corresponding
null hypotheses were true?
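With r restrictions, restricted sum of squared residuals RSSR, and unrestricted sum USSR from a regression with k parameters and n observations, the F statistic has the familiar form F = ((RSSR − USSR)/r)/(USSR/(n − k)). A minimal sketch, with placeholder numbers rather than the blanked-out exam output:

```python
# F statistic from restricted and unrestricted sums of squared residuals.
def f_statistic(rssr, ussr, r, n, k):
    return ((rssr - ussr) / r) / (ussr / (n - k))

# Placeholder values for illustration only (r = 2 restrictions here).
fval = f_statistic(rssr=250.0, ussr=200.0, r=2, n=50, k=4)
print(fval)    # 5.75
```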
7. Consider the linear regression model

y = αι + βx + γz + u.
Construct a confidence interval for the parameter β with confidence level 95%, using
the fact that the 0.025-quantile of the standard normal distribution is −1.96.
Describe how to obtain a bootstrap distribution that would allow you to construct
an equal-tailed bootstrap confidence interval for β. If you find that the
0.025-quantile of this distribution is −2.284 while the 0.975-quantile is 2.029,
construct the bootstrap confidence interval at confidence level 95%.
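For a bootstrap distribution of t statistics (β* − β̂)/s*, the equal-tailed 95% interval is [β̂ − s·q₀.₉₇₅, β̂ − s·q₀.₀₂₅], with the upper bootstrap quantile setting the lower limit and vice versa. The point estimate and standard error below are placeholder values for illustration, not the exam's estimates; the quantiles are those quoted above.

```python
# Equal-tailed bootstrap-t confidence interval from bootstrap quantiles.
def equal_tailed_ci(beta_hat, se, q_lo, q_hi):
    return beta_hat - se * q_hi, beta_hat - se * q_lo

# Placeholder beta_hat and se; quantiles -2.284 and 2.029 from the question.
lo, hi = equal_tailed_ci(beta_hat=1.0, se=0.5, q_lo=-2.284, q_hi=2.029)
print(lo, hi)
```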
y = X1 β1 + X2 β2 + u,