Ecomt Solns1
due 2006-02-20
Feldkircher, Forstner, Ghoddusi, Grafenhofer, Pichler, Reiss, Yan, Zeugner
Exercise 1.1
1. $q_t^d = \alpha_0 + \alpha_1 p_t + \alpha_2 y_t + u_t^1$

together with the supply relation $q_t^s = \beta_0 + \beta_1 p_{t-1} + u_t^2$ and the equilibrium condition $q_t^d = q_t^s$.

To get the reduced form, solve the system of equations for the endogenous variables:

4. $p_t = \frac{1}{\alpha_1}\left[(\beta_0 - \alpha_0) - \alpha_2 y_t + \beta_1 p_{t-1} + (u_t^2 - u_t^1)\right]$

To arrive at the final form, each equation may contain only its own lags or exogenous variables on the right-hand side. So (4) is already in final form, and for (3), the reduced form for the quantity, $q_t = \beta_0 + \beta_1 p_{t-1} + u_t^2$: rewrite (1) to get

$$p_t = \frac{1}{\alpha_1}\left[q_t^d - \alpha_0 - \alpha_2 y_t - u_t^1\right].$$

Lagging this by one period gives

$$p_{t-1} = \frac{1}{\alpha_1}\left[q_{t-1}^d - \alpha_0 - \alpha_2 y_{t-1} - u_{t-1}^1\right],$$

and substituting into the supply equation yields the final form

$$q_t^s = q_t^d = \beta_0 + u_t^2 + \beta_1\left[\frac{1}{\alpha_1}\left(q_{t-1}^d - \alpha_0 - \alpha_2 y_{t-1} - u_{t-1}^1\right)\right].$$
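As a numerical check of the substitution, here is a minimal simulation sketch (an added illustration, not part of the original solutions; the parameter values, the random-number seed, and the reconstructed supply equation are assumptions):

```python
# Minimal sketch: simulate the structural system and check that the final form
# reproduces the equilibrium quantity. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a0, a1, a2, b0, b1 = 10.0, -1.5, 0.8, 2.0, 0.5   # alpha0, alpha1, alpha2, beta0, beta1
T = 200
y_ex = rng.normal(size=T)                         # exogenous income y_t
u1, u2 = rng.normal(size=T), rng.normal(size=T)   # demand and supply shocks

p, q = np.zeros(T), np.zeros(T)
for t in range(1, T):
    # reduced form (4) for the price, demand equation (1) for the quantity
    p[t] = ((b0 - a0) - a2 * y_ex[t] + b1 * p[t - 1] + (u2[t] - u1[t])) / a1
    q[t] = a0 + a1 * p[t] + a2 * y_ex[t] + u1[t]

# final form: q_t expressed through its own lag and exogenous variables only
t = np.arange(2, T)
q_final = b0 + u2[t] + b1 / a1 * (q[t - 1] - a0 - a2 * y_ex[t - 1] - u1[t - 1])
print(np.allclose(q_final, q[t]))                 # True
```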
Exercise 1.2
1. $E(AX + a) = A\,E(X) + a$

2. $\mathrm{Cov}(X, Y) = E\left[(X - EX)(Y - EY)'\right] = E(XY') - EX(EY)'$

3. $\mathrm{Var}(AX + a) = \mathrm{Var}(AX) + 0 = A\left[\mathrm{Var}(X)\right]A'$

4.

5. follows from 3.
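A Monte Carlo sketch of claims 1 and 3 (an added illustration; the distribution, matrices, and tolerances are arbitrary assumptions, and the identities hold only up to sampling error here):

```python
# Monte Carlo check of E(AX + a) = A E(X) + a and Var(AX + a) = A Var(X) A'.
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
mu, Sigma = [1.0, -2.0], [[2.0, 0.5], [0.5, 1.0]]
X = rng.multivariate_normal(mu, Sigma, size=T)    # rows are draws of X'
A = np.array([[1.0, 2.0], [0.0, 3.0]])
a = np.array([0.5, -1.0])

Z = X @ A.T + a                                   # draws of AX + a
print(np.allclose(Z.mean(0), A @ np.array(mu) + a, atol=0.05))        # claim 1
print(np.allclose(np.cov(Z.T), A @ np.array(Sigma) @ A.T, atol=0.2))  # claim 3
```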
Exercise 1.3
Definitions:

1. $\overline{X} = \frac{1}{T}\mathbf{1}'X$ and $\overline{Y} = \frac{1}{T}\mathbf{1}'Y$

2. $\widehat{\mathrm{Var}}(X) = \frac{1}{T}(X - \mathbf{1}\overline{X})'(X - \mathbf{1}\overline{X})$

3. $\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{T}(X - \mathbf{1}\overline{X})'(Y - \mathbf{1}\overline{Y})$
1. $\overline{XA + \mathbf{1}a} = \overline{X}A + a$

Proof:

$$\overline{XA + \mathbf{1}a} = \frac{1}{T}\mathbf{1}'(XA + \mathbf{1}a) = \frac{1}{T}\mathbf{1}'XA + \frac{1}{T}\underbrace{\mathbf{1}'\mathbf{1}}_{=T}\,a = \overline{X}A + a$$
2. $\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{T}X'Y - \overline{X}'\,\overline{Y}$

Proof:

$$\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{T}(X - \mathbf{1}\overline{X})'(Y - \mathbf{1}\overline{Y})$$
$$= \frac{1}{T}\left[X'Y - X'\mathbf{1}\overline{Y} - \overline{X}'\mathbf{1}'Y + \overline{X}'\mathbf{1}'\mathbf{1}\overline{Y}\right]$$
$$= \frac{1}{T}X'Y - \underbrace{\frac{1}{T}X'\mathbf{1}}_{\overline{X}'}\overline{Y} - \overline{X}'\underbrace{\frac{1}{T}\mathbf{1}'Y}_{\overline{Y}} + \frac{1}{T}T\,\overline{X}'\,\overline{Y}$$
$$= \frac{1}{T}X'Y - 2\overline{X}'\,\overline{Y} + \overline{X}'\,\overline{Y}$$
$$= \frac{1}{T}X'Y - \overline{X}'\,\overline{Y}$$
3. $\widehat{\mathrm{Cov}}(XA + \mathbf{1}a, YB + \mathbf{1}b) = A'\,\widehat{\mathrm{Cov}}(X, Y)\,B$

Proof:

$$\widehat{\mathrm{Cov}}(XA + \mathbf{1}a, YB + \mathbf{1}b) = \frac{1}{T}\left[\left(XA + \mathbf{1}a - (\mathbf{1}\overline{X}A + \mathbf{1}a)\right)'\left(YB + \mathbf{1}b - (\mathbf{1}\overline{Y}B + \mathbf{1}b)\right)\right]$$
$$= \frac{1}{T}\left[(XA - \mathbf{1}\overline{X}A)'(YB - \mathbf{1}\overline{Y}B)\right]$$
$$= A'\,\frac{1}{T}\left[(X - \mathbf{1}\overline{X})'(Y - \mathbf{1}\overline{Y})\right]B$$
$$= A'\,\widehat{\mathrm{Cov}}(X, Y)\,B$$
4. $\widehat{\mathrm{Var}}(XA + \mathbf{1}a) = A'\,\widehat{\mathrm{Var}}(X)\,A$

Proof:

$$\widehat{\mathrm{Var}}(XA + \mathbf{1}a) = \frac{1}{T}\left[XA + \mathbf{1}a - \mathbf{1}(\overline{XA + \mathbf{1}a})\right]'\left[XA + \mathbf{1}a - \mathbf{1}(\overline{XA + \mathbf{1}a})\right]$$
$$= \frac{1}{T}\left[XA + \mathbf{1}a - \mathbf{1}(\overline{X}A + a)\right]'\left[XA + \mathbf{1}a - \mathbf{1}(\overline{X}A + a)\right]$$
$$= A'\,\frac{1}{T}(X - \mathbf{1}\overline{X})'(X - \mathbf{1}\overline{X})\,A = A'\,\widehat{\mathrm{Var}}(X)\,A,$$

where the second step uses claim 1.
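Since the four claims are exact algebraic identities, they can be verified on arbitrary data; a short numpy sketch (added illustration, with invented data):

```python
# Check claims 3 and 4: Cov^(XA+1a, YB+1b) = A' Cov^(X,Y) B and
# Var^(XA+1a) = A' Var^(X) A, for arbitrary data and transformations.
import numpy as np

rng = np.random.default_rng(2)
T, k, m = 500, 3, 2
X, Y = rng.normal(size=(T, k)), rng.normal(size=(T, m))
A, B = rng.normal(size=(k, k)), rng.normal(size=(m, m))
a, b = rng.normal(size=k), rng.normal(size=m)     # row vectors, broadcast as 1a, 1b

def cov_hat(U, V):
    """Sample covariance (1/T)(U - 1 Ubar)'(V - 1 Vbar)."""
    return (U - U.mean(0)).T @ (V - V.mean(0)) / len(U)

print(np.allclose(cov_hat(X @ A + a, Y @ B + b), A.T @ cov_hat(X, Y) @ B))  # claim 3
print(np.allclose(cov_hat(X @ A + a, X @ A + a), A.T @ cov_hat(X, X) @ A))  # claim 4
```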
Exercise 1.4
Let $X = U\Sigma V'$ be a singular value decomposition of $X$. The normal equations become

$$X'X\beta = X'y$$
$$V\Sigma'\Sigma V'\beta = V\Sigma'U'y \qquad \text{(premultiply by } V')$$
$$\Sigma'\Sigma\,(V'\beta) = \Sigma'U'y$$

Componentwise, with $\lambda_i = \sigma_i^2$ the nonzero eigenvalues of $X'X$,

$$\begin{pmatrix} \lambda_1 (V'\beta)_1 \\ \vdots \\ \lambda_r (V'\beta)_r \\ 0 \\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} \sigma_1 (U'y)_1 \\ \vdots \\ \sigma_r (U'y)_r \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
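A small sketch (added; data invented) solving the normal equations component by component through the SVD, exactly as in the display above:

```python
# Solve the normal equations via X = U diag(s) V': lambda_i (V'beta)_i = sigma_i (U'y)_i.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))                       # full column rank, so r = k = 4
y = rng.normal(size=50)

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V'
beta_svd = Vt.T @ ((U.T @ y) / s)                  # (V'beta)_i = (U'y)_i / sigma_i
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(beta_svd, beta_ols))             # True
```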
Exercise 1.5
Show that $X(X'X)^{-1}X'$ is the orthogonal projector onto the column space spanned by $X$, and show that $I - X(X'X)^{-1}X'$ is the projector onto the orthogonal complement of the column space spanned by $X$.

Assumption: $X'X$ is invertible. Define $P_1 := X(X'X)^{-1}X'$ and $P_2 := I - P_1$. Then

$$P_1 P_1 = X(X'X)^{-1}\underbrace{(X'X)(X'X)^{-1}}_{I}X' = X(X'X)^{-1}X' = P_1,$$

so $P_1$ is idempotent.

Proof:

$$P_1' = \left(X(X'X)^{-1}X'\right)' = X\left((X'X)^{-1}\right)'X' = X(X'X)^{-1}X' = P_1.$$

So $P_1$ is symmetric. Secondly, we show that $P_1$ projects onto the space of vectors $Xb$ by showing that the remainder term $a - P_1 a$ is orthogonal to the space of vectors $Xb$:

$$(a - P_1 a)'Xb = a'Xb - a'X\underbrace{(X'X)^{-1}X'X}_{I}b \qquad (4)$$
$$= a'Xb - a'Xb = 0 \qquad (5)$$
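A numerical sketch of both projector properties (added illustration; data invented):

```python
# P = X(X'X)^{-1}X' is symmetric, idempotent, fixes col(X); M = I - P projects
# onto the orthogonal complement, so X'(Ma) = 0 for any a.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 3))
P = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(20) - P

b, a = rng.normal(size=3), rng.normal(size=20)
print(np.allclose(P, P.T), np.allclose(P @ P, P))  # symmetric, idempotent
print(np.allclose(P @ (X @ b), X @ b))             # P reproduces Xb
print(np.allclose(X.T @ (M @ a), 0))               # a - Pa is orthogonal to col(X)
```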
Exercise 1.6
Part (i)
Suppose, for contradiction, that $\beta^+ = (X'X)^+X'y$ does not solve the normal equations:

$$X'X\beta^+ \neq X'y \quad \text{where } \beta^+ = (X'X)^+X'y.$$

This implies:

$$(X'X)(X'X)^+X'y \neq X'y$$

Writing $X = U\Sigma V'$ and $X'X = O\Lambda O'$ (with $O = V$ and $\Lambda = \Sigma'\Sigma$):

$$O\Lambda O'\,O\Lambda^+O'\,O\Sigma'U'y \neq O\Sigma'U'y$$
$$O\,I_r\,\Sigma'U'y \neq O\Sigma'U'y$$
$$X'y \neq X'y,$$

which is a contradiction.
Part (ii)
Suppose $X'X\beta = 0$; we show that $\beta$ is orthogonal to $\beta^+$:

$$X'X\beta = O\Lambda O'\beta = 0$$
$$O'O\Lambda O'\beta = \Lambda O'\beta = O'0 = 0$$
$$\Lambda^+O'\beta = \Lambda^+\Lambda^+\Lambda O'\beta = \Lambda^+\Lambda^+\,0 = 0$$
$$O\Lambda^+O'\beta = (X'X)^+\beta = 0$$
$$\beta'\beta^+ = \beta'(X'X)^+X'y = \left((X'X)^+\beta\right)'X'y = 0'X'y = 0$$
Part (iii)

Denote by $I_r$ the "pseudo-identity matrix", i.e. the matrix with the first $r$ entries on the diagonal equal to one and the remaining entries equal to zero:

$$I_r = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots \\ 0 & 1 & \cdots & 0 & 0 & \cdots \\ \vdots & \vdots & \ddots & \vdots & \vdots & \\ 0 & 0 & \cdots & 1 & 0 & \cdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \ddots \end{pmatrix}$$
For any solution $\beta$ of the normal equations, $\beta^+ = (X'X)^+X'y = O\Lambda^+\Lambda O'\beta = O\,I_r\,(O'\beta)$. So $\|\beta^+\| \leq \|\beta\|$. The latter conclusion follows from the fact that $I_r(O'\beta)$ is a vector with only the first $r$ entries equal to those of $O'\beta$, while the entries from $r+1$ to $k$ are zero.
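A sketch of all three parts on a deliberately rank-deficient design (added illustration; data invented):

```python
# beta+ = (X'X)^+ X'y solves the normal equations (i), is orthogonal to the
# null space of X'X (ii), and has minimal norm among all solutions (iii).
import numpy as np

rng = np.random.default_rng(5)
B = rng.normal(size=(30, 2))
X = np.hstack([B, B @ rng.normal(size=(2, 2))])    # 30 x 4 with rank 2
y = rng.normal(size=30)

beta_plus = np.linalg.pinv(X.T @ X) @ X.T @ y
print(np.allclose(X.T @ X @ beta_plus, X.T @ y))   # (i) solves the normal equations

_, _, Vt = np.linalg.svd(X)
v = Vt[-1]                                         # a null-space direction of X
beta_other = beta_plus + v                         # another solution
print(np.allclose(X.T @ X @ beta_other, X.T @ y))
print(abs(np.dot(beta_plus, v)) < 1e-10)           # (ii) beta+ orthogonal to null(X'X)
print(np.linalg.norm(beta_plus) <= np.linalg.norm(beta_other))  # (iii) minimal norm
```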
Exercise 1.7
(1) Show that $R^2$ as defined in class for inhomogeneous regression (including the constant term) is equal to

$$R^2 = r_{y\hat{y}}^2 = \frac{s_{y\hat{y}}^2}{s_{yy}\,s_{\hat{y}\hat{y}}}$$

Definitions:

1. $s_{y\hat{y}} = \frac{1}{T}\sum_{i=1}^T (y_i - \overline{y})(\hat{y}_i - \overline{\hat{y}})$

2. $s_{yy} = \frac{1}{T}\sum_{i=1}^T (y_i - \overline{y})^2$

3. $R^2 = \frac{s_{\hat{y}\hat{y}}}{s_{yy}}$

Using $\overline{\hat{Y}} = \overline{Y}$ (shown below),

$$s_{Y\hat{Y}} = \frac{1}{T}\left(Y'\hat{Y} - T\,\overline{Y}\,\overline{\hat{Y}}\right) = \frac{1}{T}\left(Y'\hat{Y} - T\,\overline{Y}^2\right)$$
$$s_{\hat{Y}\hat{Y}} = \frac{1}{T}\left(\hat{Y}'\hat{Y} - T\,\overline{\hat{Y}}^2\right) = \frac{1}{T}\left(\hat{Y}'\hat{Y} - T\,\overline{Y}^2\right)$$
To show: $\langle Y, \hat{Y}\rangle = \langle \hat{Y}, \hat{Y}\rangle$.

Proof:

$$\langle \hat{Y}, \hat{Y}\rangle = \langle Y - \hat{u}, \hat{Y}\rangle = \langle Y, \hat{Y}\rangle,$$

since the residuals $\hat{u}$ are orthogonal to $\hat{Y}$. Moreover, because $\mathbf{1}'\hat{u} = 0$ in a regression containing the constant,

$$\overline{\hat{Y}} = \frac{1}{T}\mathbf{1}'\hat{Y} = \frac{1}{T}\mathbf{1}'Y = \left[\tfrac{1}{T}, \tfrac{1}{T}, \ldots, \tfrac{1}{T}\right]\left[Y_1, Y_2, \ldots, Y_T\right]' = \frac{1}{T}\sum_{i=1}^T Y_i = \overline{Y}$$
Calculating $s_{Y\hat{Y}}$ for our specific $Y$ (here $\hat{Y}_i = \overline{Y}$ for all $i$) gives us:

$$s_{Y\hat{Y}} = \frac{1}{T}\sum_{i=1}^T (Y_i - \overline{Y})(\hat{Y}_i - \overline{\hat{Y}}) = \frac{1}{T}\sum_{i=1}^T (Y_i - \overline{Y})(\overline{Y} - \overline{Y}) = \frac{1}{T}\sum_{i=1}^T (Y_i - \overline{Y}) \times 0 = 0$$
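A numerical sketch of claim (1) (added; data invented): in a regression with intercept, the $R^2$ defined as $s_{\hat{y}\hat{y}}/s_{yy}$ equals the squared sample correlation of $y$ and $\hat{y}$, and the fitted values have the same mean as $y$.

```python
# R^2 = s_yhyh / s_yy equals the squared sample correlation of y and yhat.
import numpy as np

rng = np.random.default_rng(6)
T = 100
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])  # includes the constant
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=T)

yhat = X @ np.linalg.solve(X.T @ X, X.T @ y)
r2_def = ((yhat - y.mean())**2).sum() / ((y - y.mean())**2).sum()
r2_corr = np.corrcoef(y, yhat)[0, 1]**2
print(np.allclose(r2_def, r2_corr))                # True
print(np.isclose(yhat.mean(), y.mean()))           # Ybar_hat = Ybar
```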
Exercise 1.8
We have to show

$$(y - X\beta)'(y - X\beta) = (y - X\hat{\beta})'(y - X\hat{\beta}) + (\beta - \hat{\beta})'X'X(\beta - \hat{\beta})$$

$$(y - X\beta)'(y - X\beta) = \left[(y - X\hat{\beta}) - (X\beta - X\hat{\beta})\right]'\left[(y - X\hat{\beta}) - (X\beta - X\hat{\beta})\right]$$
$$= \left[(y - X\hat{\beta})' - (X(\beta - \hat{\beta}))'\right]\left[(y - X\hat{\beta}) - (X(\beta - \hat{\beta}))\right]$$
$$= (y - X\hat{\beta})'(y - X\hat{\beta}) - \underbrace{(y - X\hat{\beta})'}_{(y - \hat{y})'}(X(\beta - \hat{\beta})) - (X(\beta - \hat{\beta}))'\underbrace{(y - X\hat{\beta})}_{y - \hat{y}} + (X(\beta - \hat{\beta}))'(X(\beta - \hat{\beta}))$$
$$= \left[(y - X\hat{\beta})'(y - X\hat{\beta}) + ((\beta - \hat{\beta})'X')(X(\beta - \hat{\beta}))\right] - \hat{u}'(X\beta - \hat{y}) - (X\beta - \hat{y})'\hat{u}$$
$$= \left[(y - X\hat{\beta})'(y - X\hat{\beta}) + ((\beta - \hat{\beta})'X')(X(\beta - \hat{\beta}))\right] - \underbrace{\hat{u}'X\beta}_{0} + \underbrace{\hat{u}'\hat{y}}_{0} - \underbrace{(X\beta)'\hat{u}}_{\beta'X'\hat{u} = 0} + \underbrace{\hat{y}'\hat{u}}_{0}$$
$$= (y - X\hat{\beta})'(y - X\hat{\beta}) + ((\beta - \hat{\beta})'X')(X(\beta - \hat{\beta})),$$

where all cross terms vanish because the normal equations give $X'\hat{u} = 0$.
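The identity can also be checked numerically for an arbitrary $\beta$ (added sketch; data invented):

```python
# (y - Xb)'(y - Xb) = (y - Xbhat)'(y - Xbhat) + (b - bhat)'X'X(b - bhat)
import numpy as np

rng = np.random.default_rng(7)
X, y = rng.normal(size=(40, 3)), rng.normal(size=40)
bhat = np.linalg.solve(X.T @ X, X.T @ y)           # OLS
b = rng.normal(size=3)                             # an arbitrary coefficient vector

lhs = (y - X @ b) @ (y - X @ b)
rhs = (y - X @ bhat) @ (y - X @ bhat) + (b - bhat) @ (X.T @ X) @ (b - bhat)
print(np.allclose(lhs, rhs))                       # True
```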
Exercise 1.9
Show the second claim of item (iii) of the Frisch-Waugh theorem as discussed.
Frisch-Waugh Theorem: We partition our regressor matrix $X$ into $X = [X_1, X_2]$ with $X_1 \in \mathbb{R}^{T \times k_1}$, $X_2 \in \mathbb{R}^{T \times k_2}$, and assume that $\mathrm{rk}(X) = k_1 + k_2$. Then consider the residuals of

1. $y = X_1\beta_1 + X_2\beta_2 + u$

2. $\hat{\beta}_2 = (\widetilde{X}_2'\widetilde{X}_2)^{-1}(\widetilde{X}_2'\widetilde{y})$

with $P_1 = X_1(X_1'X_1)^{-1}X_1'$ and $M_1 = I - P_1$. Here we first regress $y$ on $X_1$ and denote the residuals of this regression as $\widetilde{y} = M_1y$. In a second step we then regress $X_2$ on $X_1$ and again compute the residuals, denoted $\widetilde{X}_2 = M_1X_2$. In a third step we use the formerly computed residuals and run the regression as stated in (2). In the lecture it was shown that the residuals of (2) are the same as the ones of (1). We are now asked to show that when running

3. $\hat{\beta}_2 = (\widetilde{X}_2'\widetilde{X}_2)^{-1}(\widetilde{X}_2'y)$

the residuals of (3) are not equal to those of (1) = (2). In (3) we use the original $y$-variable instead of $\widetilde{y}$.
Proof:
Write the normal equations in partitioned form; premultiplying the first block by $X_1(X_1'X_1)^{-1}$ gives

$$X_1\beta_1 + P_1X_2\beta_2 = P_1y \qquad (7)$$
$$X_1\beta_1 = -P_1X_2\beta_2 + P_1y \qquad (8)$$

Now look at the second block of the normal equations,

$$X_2'X_1\beta_1 + X_2'X_2\beta_2 = X_2'y, \qquad (9)$$

and plug in the expression (8) for $X_1\beta_1$; solving gives exactly the estimator in (2). The residuals $\hat{u} = \widetilde{y} - \widetilde{X}_2\hat{\beta}_2$ of (2) do not equal $u^{(*)} = y - \widetilde{X}_2\hat{\beta}_2$, since they differ by $u^{(*)} - \hat{u} = y - \widetilde{y} = P_1y$. Only in the case $P_1y = 0$, i.e. $\widetilde{y} = y$, would the two coincide.
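A numerical sketch of the claim (added; data invented): regression (3) on the raw $y$ returns the same coefficient as (2), since $\widetilde{X}_2'y = \widetilde{X}_2'\widetilde{y}$, but its residuals differ from those of (1) by $P_1y$.

```python
# Frisch-Waugh: (2) reproduces beta2_hat and the residuals of the full regression
# (1); (3) gives the same coefficient but different residuals.
import numpy as np

rng = np.random.default_rng(8)
T, k1, k2 = 60, 2, 2
X1, X2 = rng.normal(size=(T, k1)), rng.normal(size=(T, k2))
y = rng.normal(size=T)

X = np.hstack([X1, X2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
u_full = y - X @ beta                                   # residuals of (1)

M1 = np.eye(T) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
y_t, X2_t = M1 @ y, M1 @ X2
b2 = np.linalg.solve(X2_t.T @ X2_t, X2_t.T @ y_t)       # regression (2)
print(np.allclose(b2, beta[k1:]))                       # same beta2_hat
print(np.allclose(y_t - X2_t @ b2, u_full))             # same residuals as (1)

b2_star = np.linalg.solve(X2_t.T @ X2_t, X2_t.T @ y)    # regression (3), raw y
print(np.allclose(b2_star, b2))                         # coefficients agree...
print(np.allclose(y - X2_t @ b2_star, u_full))          # ...residuals do not (False)
```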
Exercise 1.10
$$CC' = (LX^+)(LX^+)' + (C - LX^+)(C - LX^+)',$$

since, expanding the right-hand side,

$$(LX^+)(LX^+)' + CC' + (LX^+)(LX^+)' - C(LX^+)' - (LX^+)C'$$
$$= CC' + 2L(X'X)^{-1}X'\left[L(X'X)^{-1}X'\right]' - CX(X'X)^{-1}L' - L(X'X)^{-1}X'C'$$
$$= CC' + 2L(X'X)^{-1}X'X(X'X)^{-1}L' - CX(X'X)^{-1}L' - L(X'X)^{-1}X'C'$$
$$= CC' + 2L(X'X)^{-1}L' - L(X'X)^{-1}L' - L(X'X)^{-1}L'$$
$$= CC'$$

Here we used:

• as $X$ has full rank, we know that $X^+ = (X'X)^{-1}X'$,

• $\left[(X'X)^{-1}\right]' = \left[(X'X)'\right]^{-1} = (X'X)^{-1}$,

• $CX = L$ and therefore $X'C' = L'$.
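The decomposition can be verified numerically for any $C$ with $CX = L$ (added sketch; the construction of such a $C$ via the annihilator $I - XX^+$ is an assumption made for illustration):

```python
# CC' = (LX+)(LX+)' + (C - LX+)(C - LX+)' whenever CX = L.
import numpy as np

rng = np.random.default_rng(9)
T, k = 25, 3
X = rng.normal(size=(T, k))                        # full column rank a.s.
L = rng.normal(size=(2, k))
Xp = np.linalg.solve(X.T @ X, X.T)                 # X+ = (X'X)^{-1} X'

C = L @ Xp + rng.normal(size=(2, T)) @ (np.eye(T) - X @ Xp)  # any C with CX = L
print(np.allclose(C @ X, L))
lhs = C @ C.T
rhs = (L @ Xp) @ (L @ Xp).T + (C - L @ Xp) @ (C - L @ Xp).T
print(np.allclose(lhs, rhs))                       # True
```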
Exercise 1.11
(i) If $\tilde{\beta}$ is unbiased with VCV $\Sigma_{\tilde{\beta}\tilde{\beta}}$, then it holds that $MSE(\tilde{\beta}) = \mathrm{tr}(\Sigma_{\tilde{\beta}\tilde{\beta}})$, where tr denotes the trace of a matrix.

$$MSE(\tilde{\beta}) = E\left[(\tilde{\beta} - \beta)'(\tilde{\beta} - \beta)\right] = E\left[\sum_{i=1}^k (\tilde{\beta}_i - \beta_i)^2\right]$$

Now since $\tilde{\beta}$ is unbiased, $E(\tilde{\beta}) = \beta$, and thus $\Sigma_{\tilde{\beta}\tilde{\beta}} = E[(\tilde{\beta} - \beta)(\tilde{\beta} - \beta)']$. Further,

$$\mathrm{tr}(\Sigma_{\tilde{\beta}\tilde{\beta}}) = \sum_{i=1}^k E\left[(\tilde{\beta}_i - \beta_i)^2\right] = E\left[\sum_{i=1}^k (\tilde{\beta}_i - \beta_i)^2\right] = MSE(\tilde{\beta}). \qquad QED$$
(ii) Let $\tilde{\beta}_1$ and $\tilde{\beta}_2$ be two unbiased estimators with covariance matrices $\Sigma_{\tilde{\beta}_1\tilde{\beta}_1}$ and $\Sigma_{\tilde{\beta}_2\tilde{\beta}_2}$. Show that if $\Sigma_{\tilde{\beta}_1\tilde{\beta}_1} \leq \Sigma_{\tilde{\beta}_2\tilde{\beta}_2}$, then $MSE(\tilde{\beta}_1) \leq MSE(\tilde{\beta}_2)$.

Define $\Delta := \Sigma_{\tilde{\beta}_2\tilde{\beta}_2} - \Sigma_{\tilde{\beta}_1\tilde{\beta}_1}$. Since $\Sigma_{\tilde{\beta}_1\tilde{\beta}_1} \leq \Sigma_{\tilde{\beta}_2\tilde{\beta}_2}$, $\Delta$ must be non-negative definite.

$$\mathrm{tr}(\Sigma_{\tilde{\beta}_2\tilde{\beta}_2} - \Sigma_{\tilde{\beta}_1\tilde{\beta}_1}) = \mathrm{tr}(\Delta) = \sum_{i=1}^k e_i'\Delta e_i \geq 0,$$

where $e_i \in \mathbb{R}^k$ is the vector with 1 in the $i$-th row and all zeros else. Because $\Delta$ is non-negative definite, all the elements in this sum must be non-negative, and therefore also the total sum. Thus $MSE(\tilde{\beta}_1) \leq MSE(\tilde{\beta}_2)$. QED
Since the OLS $\hat{\beta}$ has the "smallest" VCV matrix among all linear unbiased estimators of $\beta$, this implies that it also has smaller or equal MSE within this class of estimators.

(iii) Minimize $MSE(\tilde{\beta})$ over all linear unbiased estimators $\tilde{\beta}$. From the lecture we know that $\Sigma_{\tilde{\beta}\tilde{\beta}} = \sigma^2DD'$ for all unbiased estimators $\tilde{\beta} = Dy$ of $\beta$ (so $DX = I$). From (i):

$$MSE(\tilde{\beta}) = \mathrm{tr}(\Sigma_{\tilde{\beta}\tilde{\beta}}) = \mathrm{tr}(\sigma^2DD')$$

hence

$$\mathrm{tr}(\sigma^2DD') = \mathrm{tr}\left[\sigma^2(X^+)(X^+)' + \sigma^2(D - X^+)(D - X^+)'\right]$$
$$= \mathrm{tr}\left[\sigma^2(X^+)(X^+)'\right] + \mathrm{tr}\left[\sigma^2(D - X^+)(D - X^+)'\right] \geq \mathrm{tr}\left[\sigma^2(X^+)(X^+)'\right],$$

with equality iff $D = X^+$, i.e. for the OLS estimator.
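A closing sketch (added; data invented): for any linear unbiased estimator $\tilde{\beta} = Dy$ with $DX = I$, the trace decomposition above holds exactly, and the OLS choice $D = X^+$ attains the minimum.

```python
# tr(DD') = tr(X+ X+') + tr((D - X+)(D - X+)') for any D with DX = I,
# so MSE = sigma^2 tr(DD') is minimized at D = X+ (OLS).
import numpy as np

rng = np.random.default_rng(10)
T, k = 30, 3
X = rng.normal(size=(T, k))
Xp = np.linalg.solve(X.T @ X, X.T)                      # X+ = (X'X)^{-1} X'

D = Xp + rng.normal(size=(k, T)) @ (np.eye(T) - X @ Xp) # any D with DX = I
print(np.allclose(D @ X, np.eye(k)))
lhs = np.trace(D @ D.T)
rhs = np.trace(Xp @ Xp.T) + np.trace((D - Xp) @ (D - Xp).T)
print(np.allclose(lhs, rhs), lhs >= np.trace(Xp @ Xp.T))  # True True
```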