December 2007

Midyear Examination

Econometrics Honours
Economics 467
Thursday December 6, 2007, 14.00–17.00

Examiner: R. Davidson Associate Examiner: V. Zinde-Walsh

Instructions:

• Calculators are allowed
• NO notes or texts allowed
• Answer in exam book(s)
• Language dictionaries are allowed
• You may keep the exam
• This exam comprises 7 pages, including the cover page

Economics 467 Page 1 of seven pages



As with the midterm exam, do not be upset if this exam seems too long. Do not
waste time on questions for which you do not see how to obtain the answer;
rather, answer as much as you can. Everything you do will be taken into account.
Even so, it is worth your while to take the time to read over the entire exam
before plunging in.

1. Using a multivariate first-order Taylor expansion, show that, if γ = g(θ), the
asymptotic covariance matrix of the l-vector n^{1/2}(γ̂ − γ0) is given by the l × l
matrix G0 V∞(θ̂) G0>. Here θ is a k-vector with k ≥ l, G0 is an l × k matrix with
typical element ∂gi(θ)/∂θj, evaluated at θ0, and V∞(θ̂) is the k × k asymptotic
covariance matrix of n^{1/2}(θ̂ − θ0).
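As an illustrative (non-exam) check of this delta-method result, one can simulate the scalar case g(θ) = θ^2 with θ̂ the sample mean of N(µ, 1) data, in which case G0 V∞(θ̂) G0> reduces to (2µ)^2; the numbers below are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative check of the delta method (not part of the exam): with
# g(theta) = theta^2 and theta_hat the sample mean of N(mu, 1) data, the
# asymptotic variance of n^{1/2}(g(theta_hat) - g(mu)) is (2*mu)^2.
rng = np.random.default_rng(5)
mu, n, reps = 3.0, 500, 5_000
theta_hat = rng.standard_normal((reps, n)).mean(axis=1) + mu
stat = np.sqrt(n) * (theta_hat**2 - mu**2)

print(round(stat.var(), 1), (2.0 * mu)**2)  # simulated variance vs. (2*mu)^2 = 36
```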
Consider the linear regression model

y = Xβ + u, E(uu> ) = Ω,

where the number of observations, n, is equal to 3m. The first three rows of the
matrix X are

    [ 1   4 ]
    [ 1   8 ]
    [ 1  15 ]

and every subsequent group of three rows is identical to this first group. The
covariance matrix Ω is diagonal, with typical diagonal element equal to ω^2 x_{t2}^2,
where ω > 0, and x_{t2} is the t-th element of the second column of X.
What is the variance of β̂2 , the OLS estimate of β2 ? What is the probability limit, as
n → ∞, of the ratio of the conventional estimate of this variance, which incorrectly
assumes homoskedasticity, to a heteroskedasticity-consistent estimate?
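A sketch of this limit can be computed directly, under the assumption that the conventional estimator converges to (plim s^2)/Σ(x_{t2} − x̄2)^2 while a heteroskedasticity-consistent estimator converges to the sandwich form; both plims scale as 1/m, so the ratio can be evaluated on a single block of three rows:

```python
import numpy as np

# Hedged sketch: plim of conventional/HC variance estimates for the OLS slope
# under Omega = diag(omega^2 * x_t2^2), with the 3-row block of X repeating.
x2 = np.array([4.0, 8.0, 15.0])   # second column of one block of X
d = x2 - x2.mean()                # within-block deviations from the mean
omega2 = 1.0                      # omega^2 scales out of the ratio

# Sandwich (HC-consistent) limit per block:
# Var(beta2_hat) = sum(d^2 * sigma_t^2) / (sum d^2)^2,  sigma_t^2 = omega^2 x_t2^2
hc_plim = (d**2 * omega2 * x2**2).sum() / (d**2).sum()**2

# Conventional limit: plim s^2 / sum(d^2), where plim s^2 = mean of sigma_t^2
conv_plim = omega2 * (x2**2).mean() / (d**2).sum()

ratio = conv_plim / hc_plim
print(round(ratio, 4))
```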
2. Consider the linear regression model

ct = α + βct−1 + γ0 yt + γ1 yt−1 + εt , (1)

where ct and yt are the logarithms of consumption and income, respectively. Show
that this model contains as a special case the following linear model with AR(1)
disturbances:
ct = δ0 + δ1 yt + ut , with ut = ρut−1 + εt , (2)

where εt is IID. Write down the relation between the parameters δ0 , δ1 , and ρ of this
model and the parameters α, β, γ0 , and γ1 of (1). How many and what restrictions
are imposed on the latter set of parameters by the model (2)?
Formulate a GNR, based on estimates under the null hypothesis, that allows you
to use a t test to test the restriction imposed on the model (1) by the model (2).
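The parameter mapping can be checked arithmetically. Substituting u_t = c_t − δ0 − δ1 y_t into u_t = ρu_{t−1} + ε_t gives c_t = δ0(1 − ρ) + ρc_{t−1} + δ1 y_t − ρδ1 y_{t−1} + ε_t; the values of δ0, δ1, and ρ below are arbitrary illustrative choices:

```python
# Numeric check of the mapping between (2) and (1):
# alpha = delta0*(1 - rho), beta = rho, gamma0 = delta1, gamma1 = -rho*delta1.
delta0, delta1, rho = 2.0, 0.7, 0.5   # arbitrary illustrative values
alpha = delta0 * (1.0 - rho)
beta = rho
gamma0 = delta1
gamma1 = -rho * delta1

# The single restriction imposed on (1) by (2): gamma1 + beta*gamma0 = 0
print(gamma1 + beta * gamma0)
```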
3. Show that the covariance of the random variable E(X1 | X2 ) and the random
variable X1 − E(X1 | X2 ) is zero.

Show that the variance of the random variable X1 − E(X1 | X2 ) cannot be greater
than the variance of X1 , and that the two variances are equal if X1 and X2 are
independent.
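An illustrative simulation of these two facts in the jointly normal case, where E(X1 | X2) = ρX2 (this specific conditional-expectation form is an assumption of the sketch, not part of the question):

```python
import numpy as np

# Simulation sketch: with E(X1|X2) = rho*X2 in the jointly normal case, the
# fitted part and the "residual" X1 - E(X1|X2) are uncorrelated, and
# Var(X1 - E(X1|X2)) = 1 - rho^2 <= Var(X1) = 1.
rng = np.random.default_rng(6)
rho, n = 0.6, 200_000
x2 = rng.standard_normal(n)
x1 = rho * x2 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

fitted = rho * x2        # E(X1 | X2) in this normal case
resid = x1 - fitted

print(round(np.cov(fitted, resid)[0, 1], 2))   # covariance ~ 0
print(round(resid.var(), 2), 1.0 - rho**2)     # ~ 0.64 on both counts
```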
Let a random variable X1 be distributed as N(0, 1). Now suppose that a second
random variable, X2 , is constructed as the product of X1 and an independent ran-
dom variable Z, which equals 1 with probability 1/2 and −1 with probability 1/2.
What is the (marginal) distribution of X2 ? What is the covariance between X1 and
X2 ? What is the distribution of X1 conditional on X2 ?
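A simulation sketch of this construction (illustrative only): X2 inherits the N(0, 1) distribution by symmetry, and the covariance is E(Z)E(X1^2) = 0 even though X1 and X2 are clearly dependent, since |X2| = |X1|.

```python
import numpy as np

# Illustrative simulation: X2 = Z * X1 with Z = +/-1, each with probability 1/2
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.standard_normal(n)
z = rng.choice([-1.0, 1.0], size=n)
x2 = z * x1

# Marginal of X2 is N(0,1); Cov(X1, X2) = E[Z]E[X1^2] = 0
print(round(x2.mean(), 2), round(x2.var(), 2), round(np.cov(x1, x2)[0, 1], 2))
```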
4. Show that, if the m-vector z ∼ N(µ, I), the expectation of the noncentral
chi-squared variable z>z is m + µ>µ.
If F is a strictly increasing CDF defined on an interval [a, b] of the real line, where
either or both of a and b may be infinite, then the inverse function F −1 is a well-
defined mapping from [0, 1] on to [a, b]. Show that, if the random variable X
is a drawing from the U(0, 1) distribution, then F −1 (X) is a drawing from the
distribution of which F is the CDF.
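The inverse-transform result can be illustrated with the exponential distribution, F(t) = 1 − e^{−t} on [0, ∞), whose inverse is F^{-1}(u) = −log(1 − u); this particular F is an illustrative choice, not part of the question:

```python
import numpy as np

# Sketch of inverse-transform sampling: if X ~ U(0,1), then F^{-1}(X) has CDF F.
# Illustrated with the exponential(1) distribution, F(t) = 1 - exp(-t).
rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)
draws = -np.log(1.0 - u)   # exponential(1) draws via F^{-1}

# Mean and variance of exponential(1) are both 1
print(round(draws.mean(), 1), round(draws.var(), 1))
```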
Suppose that the random variable z follows the N(0, 1) density. If z is a test statistic
used in a two-tailed test, the corresponding P value is p(z) ≡ 2(1 − Φ(|z|)). Show
that Fp (·), the CDF of p(z), is the CDF of the uniform distribution on [0, 1]. In
other words, show that

Fp (x) = x for all x ∈ [0, 1].
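A simulation sketch of this uniformity, computing Φ via the error function (the check of F_p(x) = x at a few values of x is illustrative, not a proof):

```python
import numpy as np
from math import erf, sqrt

# The two-tailed P value p(z) = 2*(1 - Phi(|z|)) of a N(0,1) statistic is U(0,1);
# we check the CDF identity F_p(x) = x by simulation.
def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)
p = 2.0 * (1.0 - np.vectorize(Phi)(np.abs(z)))

for x in (0.05, 0.5, 0.95):
    print(x, round((p <= x).mean(), 2))   # empirical F_p(x) should be close to x
```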

5. Consider the following nonlinear model:

y = α + βx1 + (1/β)x2 + u.

Describe clearly and carefully how to compute the one-step efficient estimator of the
parameters α and β. Your procedure may not make use of any nonlinear estimation
procedure.
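One possible one-step procedure, sketched on simulated data (the choice of starting value β0, taken as the coefficient on x1 in an unrestricted OLS regression, is an assumption of this sketch, not the required answer):

```python
import numpy as np

# Hedged sketch of a one-step efficient estimator for y = a + b*x1 + (1/b)*x2 + u:
# a consistent start from unrestricted OLS, then one Gauss-Newton regression.
# No nonlinear optimizer is used.
rng = np.random.default_rng(7)
n, alpha, beta = 400, 1.0, 2.0
x1_v = rng.standard_normal(n)
x2_v = rng.standard_normal(n)
y = alpha + beta * x1_v + (1.0 / beta) * x2_v + rng.standard_normal(n)

# Unrestricted OLS of y on (1, x1, x2); b1 is consistent for beta
Z = np.column_stack([np.ones(n), x1_v, x2_v])
a0, b1, b2 = np.linalg.lstsq(Z, y, rcond=None)[0]
beta0 = b1

# GNR: regress current residuals on the derivatives of the regression function
# with respect to alpha (a constant) and beta (x1 - x2/beta^2)
resid = y - a0 - beta0 * x1_v - (1.0 / beta0) * x2_v
R = np.column_stack([np.ones(n), x1_v - x2_v / beta0**2])
step = np.linalg.lstsq(R, resid, rcond=None)[0]

alpha1, beta1 = a0 + step[0], beta0 + step[1]
print(round(alpha1, 2), round(beta1, 2))
```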
In the regression model
y = α + βx + u,
the disturbances are serially correlated:

ut = ρut−1 + vt ,

where the vt are white noise. In order to estimate the parameters α, β, and ρ of
this model, the dependent variable y was first regressed on the constant, x, and the
first lags y−1 and x−1 of these two variables, as follows:

y = γ0 + γ1 x + γ2 y−1 + γ3 x−1 + u.

Here are the results obtained:



ols y c x ylag xlag


Ordinary Least Squares:
Variable Parameter Estimate Standard Error T Statistic
constant 22.9012027 12.6297963 1.8132678
x 0.6391170 0.1458630 4.3816252
ylag 0.5019006 0.1381386 3.6333122
xlag -0.2207555 0.1689844 -1.3063662
Number of observations = 49
Sum of squared residuals = 18824.3994634
Explained sum of squares = 357583.1003909
Estimate of residual variance (with d.f. correction) = 418.3199881
R squared (uncentred) = 0.9499893

Next, one-step efficient estimation was undertaken by means of a GNR. How were the
artificial variables, regressand and regressors, constructed? In the following set of
results, r denotes the regressand, c the constant, Rb the regressor that corresponds
to the parameter β, and Rr the one that corresponds to ρ.
ols r c Rb Rr
Ordinary Least Squares:
Variable Parameter Estimate Standard Error T Statistic
c 5.5472681 4.1977472 1.3214869
Rb -0.0423663 0.1262615 -0.3355445
Rr -0.0013788 0.1371420 -0.0100539
Number of observations = 49
Sum of squared residuals = 18971.4482373
Explained sum of squares = 1067.9629941
Estimate of residual variance (with d.f. correction) = 412.4227878
R squared (uncentred) = 0.0532931

Give the numerical values for the one-step efficient estimates of α, β, and ρ. Is the
hypothesis that ρ = 0 compatible with the data? Why or why not?
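One hedged reading of the printouts (an assumed mapping, not the official solution): writing the model as y_t = α(1 − ρ) + ρy_{t−1} + βx_t − ρβx_{t−1} + v_t, starting values can be read off the first regression as ρ0 = γ̂2, β0 = γ̂1, α0 = γ̂0/(1 − γ̂2), and the one-step estimates add the GNR coefficients, assuming the printed constant coefficient corresponds directly to the update for α:

```python
# Hedged arithmetic sketch using the printed estimates (assumed mapping):
# starting values from the unrestricted regression
gamma0, gamma1, gamma2 = 22.9012027, 0.6391170, 0.5019006
alpha0 = gamma0 / (1.0 - gamma2)
beta0, rho0 = gamma1, gamma2

# GNR coefficients from the second printout (c, Rb, Rr)
b_c, b_Rb, b_Rr = 5.5472681, -0.0423663, -0.0013788

# One-step estimates: starting values plus GNR coefficients
alpha1 = alpha0 + b_c
beta1 = beta0 + b_Rb
rho1 = rho0 + b_Rr
print(round(alpha1, 3), round(beta1, 4), round(rho1, 4))
```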
6. What are the defining properties of an orthogonal projection matrix? Can such
a matrix be non-square? Why or why not?
If the linear regression model y = Xβ + u is estimated by ordinary least squares,
the vector PX y of fitted values and the vector MX y of residuals are orthogonal.
Here PX and MX are the orthogonal projections on to the linear span of the
columns of X and the orthogonal complement of that span, respectively. Then
by Pythagoras’ Theorem, the total sum of squares is equal to the sum of squared
residuals plus the explained sum of squares:

‖y‖^2 = ‖PX y‖^2 + ‖MX y‖^2. (3)

Consider the unbiased estimator β̂W defined as the solution of the estimating equa-
tions
W >(y − Xβ) = 0.

Express the fitted values, that is, the elements of the vector X β̂W , in the form
P y, and the residuals, that is, the elements of the vector y − X β̂W , in the form
M y. Show that P and M are a pair of complementary projections. Is the sum of
these squared residuals plus the sum of the squared fitted values equal to the total
sum of squares? Why or why not?
Given the result about P and M being complementary projections, it follows that
y = P y + M y. Multiplying on the left by y> gives

y>P y + y>M y = y>y = ‖y‖^2.

Explain clearly why this equation is not equivalent in general to the equation (3).
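A numerical sketch of why the two equations differ: for an oblique projection P = X(W>X)^{-1}W> with M = I − P, the identity y>Py + y>My = ‖y‖^2 always holds, but the Pythagorean decomposition fails because P is not symmetric (the random X, W, and y below are illustrative):

```python
import numpy as np

# Oblique projection sketch: P = X (W'X)^{-1} W' and M = I - P satisfy
# P + M = I and P^2 = P, but P' != P, so y'Py + y'My = ||y||^2 holds
# while ||Py||^2 + ||My||^2 differs from ||y||^2 in general.
rng = np.random.default_rng(3)
n, k = 20, 3
X = rng.standard_normal((n, k))
W = rng.standard_normal((n, k))   # plays the role of the W in W'(y - Xb) = 0
y = rng.standard_normal(n)

P = X @ np.linalg.inv(W.T @ X) @ W.T
M = np.eye(n) - P

print(np.allclose(P @ P, P))                         # idempotent
print(np.allclose(y @ P @ y + y @ M @ y, y @ y))     # identity always holds
print(np.isclose((P @ y) @ (P @ y) + (M @ y) @ (M @ y), y @ y))  # Pythagoras fails
```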
7. In the context of the model

yt = α + ρyt−1 + βxt + γxt−1 + ut , (4)

it is desired to test the hypothesis that ρ = 1 and β +γ = 0. Express the model that
corresponds to this null hypothesis in terms of the first differences ∆yt ≡ yt − yt−1
and ∆xt ≡ xt − xt−1 .
Rewrite the model (4) using the first differences and the lags of yt and xt , with no
explicit mention of the current values yt and xt .
Here are some of the results obtained by running the regression (4) as written.

Ordinary Least Squares:


Variable Parameter estimate Standard error T statistic
constant 4.939564 1.923975 2.567374
lag(1,y) 0.980570 0.029678 33.040660
x -1.996857 0.682008 -2.927910
lag(1,x) 1.576122 0.721012 2.185986
Number of observations = 49 Number of estimated parameters = 4
Mean of dependent variable = 117.042589
Sum of squared residuals = 905.556545
Explained sum of squares = 866655.900282
Estimate of residual variance
(with d.f. correction) = 20.123479
Mean of squared residuals = 18.480746
Standard error of regression = 4.485920
R squared (uncentred) = 0.998956 (centred) = 0.995387

The next printout gives the results of running the regression in the form that uses
first differences and lags without the current values of the variables.

Variable Parameter estimate Standard error T statistic


constant 4.939564 1.923975 2.567374
delta(x) ??????? -1.996857 0.682008 -2.927910
lag(1,x) ??????? -0.420735 0.546203 -0.770291

lag(1,y) ??????? -0.019430 0.029678 -0.654692


Number of observations = 49 Number of estimated parameters = 4
Mean of dependent variable = 4.485405
Sum of squared residuals = 905.556545
Explained sum of squares = 1159.954488
Estimate of residual variance
(with d.f. correction) = 20.123479
Mean of squared residuals = 18.480746
Standard error of regression = 4.485920
R squared (uncentred) = 0.561582 (centred) = 0.161279

As you see, the identities of the regressors have been blanked out, except for the
constant. Identify which line of the printout corresponds to which variable, and
explain how you arrive at your answer.
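One way to check such an identification arithmetically: rewriting (4) as Δy_t = α + (ρ − 1)y_{t−1} + βΔx_t + (β + γ)x_{t−1} + u_t, the printed estimates from the first regression pin down what each coefficient in the reparametrized regression must be:

```python
# Check using the printed estimates from the first regression of (4):
# in Delta(y)_t = alpha + (rho-1)*y_{t-1} + beta*Delta(x)_t + (beta+gamma)*x_{t-1} + u_t,
# the three combinations below identify the blanked-out lines.
rho_hat, beta_hat, gamma_hat = 0.980570, -1.996857, 1.576122

print(round(beta_hat, 6))               # coefficient on delta(x)
print(round(beta_hat + gamma_hat, 6))   # coefficient on lag(1,x)
print(round(rho_hat - 1.0, 6))          # coefficient on lag(1,y)
```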
Compute the value of an F statistic that allows you to test the null hypothesis. If
you wanted to test the component hypotheses, that ρ = 1 and β + γ = 0 separately,
compute the values of two test statistics that could be used for this purpose. What
would be the asymptotic limiting distributions of these statistics if the corresponding
null hypotheses were true?
8. The linear regression model

y = αι + βx + γz + u

was run, with the following results:

Ordinary Least Squares:


Variable Parameter estimate Standard error T statistic
constant -4.734201 1.441893 -3.283323
x 4.839402 0.718131 6.738881
z 4.790881 0.428888 11.170459
Number of observations = 50 Number of estimated parameters = 3
Mean of dependent variable = 4.952398
Sum of squared residuals = 382.168703
Explained sum of squares = 3015.989600
Estimate of residual variance
(with d.f. correction) = 8.131249
Mean of squared residuals = 7.643374
Standard error of regression = 2.851535
R squared (uncentred) = 0.887537 (centred) = 0.824035

Construct a confidence interval for the parameter β with confidence level 95%, using
the fact that the 0.025-quantile of the standard normal distribution is −1.96.
Describe how to obtain a bootstrap distribution that would allow you to construct
an equal-tailed bootstrap confidence interval for β. If you find that the 0.025-
quantile of this distribution is -2.284 while the 0.975-quantile is 2.029, construct the
bootstrap confidence interval at confidence level 95%.
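A sketch of both interval computations from the printed numbers, assuming the stated quantiles are those of a bootstrap t statistic (the usual equal-tailed bootstrap-t construction):

```python
# Asymptotic and equal-tailed bootstrap-t intervals for beta, using the printed
# estimate and standard error and the stated quantiles.
beta_hat, se = 4.839402, 0.718131

# Asymptotic 95% interval: beta_hat -/+ 1.96*se
asy = (beta_hat - 1.96 * se, beta_hat + 1.96 * se)

# Bootstrap-t interval: [beta_hat - q(0.975)*se, beta_hat - q(0.025)*se],
# with the bootstrap quantiles q(0.025) = -2.284 and q(0.975) = 2.029
boot = (beta_hat - 2.029 * se, beta_hat - (-2.284) * se)

print(tuple(round(v, 3) for v in asy))
print(tuple(round(v, 3) for v in boot))
```

Note the bootstrap interval is not symmetric about β̂, because the bootstrap quantiles themselves are asymmetric.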

9. Consider the following linear regression:

y = X1 β1 + X2 β2 + u,

where y is n × 1, X1 is n × k1 , and X2 is n × k2 . Let β̂1 and β̂2 be the OLS
parameter estimates from running this regression.
Now consider the following regressions, all to be estimated by OLS:
(a) y = X2 β2 + u;
(b) P1 y = X2 β2 + u;
(c) P1 y = P1 X2 β2 + u;
(d) PX y = X1 β1 + X2 β2 + u;
(e) PX y = X2 β2 + u;
(f) M1 y = X2 β2 + u;
(g) M1 y = M1 X2 β2 + u;
(h) M1 y = X1 β1 + M1 X2 β2 + u;
(i) M1 y = M1 X1 β1 + M1 X2 β2 + u;
(j) PX y = M1 X2 β2 + u.
Here P1 projects orthogonally on to the span of X1 , and M1 = I − P1 . For which of
the above regressions are the estimates of β2 the same as for the original regression?
Why? For which are the residuals the same? Why?
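An illustrative check on simulated data (not the exam answer) that regression (g) reproduces both the original estimate of β2 and the original residuals, as the FWL theorem predicts:

```python
import numpy as np

# FWL illustration: regressing M1*y on M1*X2 reproduces the original OLS
# estimate of beta2 and the original residuals.
rng = np.random.default_rng(4)
n, k1, k2 = 30, 2, 2
X1 = rng.standard_normal((n, k1))
X2 = rng.standard_normal((n, k2))
y = rng.standard_normal(n)

X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
resid_full = y - X @ beta_full

P1 = X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
M1 = np.eye(n) - P1

# Regression (g): M1*y on M1*X2
b2_fwl = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]
resid_fwl = M1 @ y - M1 @ X2 @ b2_fwl

print(np.allclose(b2_fwl, beta_full[k1:]))   # same estimate of beta2
print(np.allclose(resid_fwl, resid_full))    # same residuals
```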
