Law of Iterated Expectations: $E(E(X_1 \mid X_2)) = E(X_1)$; and for any function $h$, $E(X_1 h(X_2) \mid X_2) = h(X_2)\, E(X_1 \mid X_2)$.
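A minimal simulation check of the first identity, assuming a hypothetical design in which $E(X_1 \mid X_2) = 2X_2$:

```python
# Simulation sketch of the Law of Iterated Expectations (illustrative design;
# variable names are hypothetical). E(E(X1|X2)) should match E(X1).
import numpy as np

rng = np.random.default_rng(0)
x2 = rng.normal(size=1_000_000)            # conditioning variable
x1 = 2.0 * x2 + rng.normal(size=x2.size)   # X1 depends on X2, so E(X1|X2) = 2*X2
inner = 2.0 * x2                           # closed-form E(X1|X2) for this design
print(inner.mean(), x1.mean())             # both are approximately E(X1) = 0
```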
Arithmetic Operations on Matrices: $a'b = b'a = \sum_{i=1}^n a_i b_i$; $C = AB \iff C_{ij} = \sum_{k=1}^m A_{ik} B_{kj}$; $AI = IA = A$; $A(B + C) = AB + AC$; $(AB)' = B'A'$; $A'A = (A'A)'$ and $AA' = (AA')'$ (both symmetric); $AA^{-1} = A^{-1}A = I$.
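These identities can be spot-checked numerically; a sketch with random matrices (all names hypothetical, equalities hold up to floating-point error):

```python
# Quick numerical checks of the matrix identities listed above.
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
a, b = rng.normal(size=3), rng.normal(size=3)

assert np.isclose(a @ b, b @ a)                       # a'b = b'a
assert np.allclose(A @ (B + C), A @ B + A @ C)        # distributivity
assert np.allclose((A @ B).T, B.T @ A.T)              # (AB)' = B'A'
assert np.allclose(A.T @ A, (A.T @ A).T)              # A'A is symmetric
assert np.allclose(A @ np.linalg.inv(A), np.eye(3))   # A A^{-1} = I
```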
Regression Models and Matrix Notation: $y_i = x_i'\beta + u_i$, or in matrix form $y = X\beta + u$, where $X = (x_1, x_2, \ldots, x_k)$.
Estimating the Simple Linear Regression Model:
$\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix} = \begin{pmatrix} n & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{pmatrix}$,
$\hat\beta_2 = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{\widehat{\mathrm{Cov}}(x, y)}{\widehat{\mathrm{Var}}(x)}$, $\hat\beta_1 = \bar y - \hat\beta_2 \bar x$.
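A sketch of these estimators on simulated data (true coefficients chosen arbitrarily), cross-checked against numpy's built-in polynomial fit:

```python
# Simple-regression formulas: slope = sample Cov(x, y) / sample Var(x),
# intercept = ybar - slope * xbar (illustrative simulated data).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(size=200)   # true beta1 = 1.0, beta2 = 0.5

beta2 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
beta1 = y.mean() - beta2 * x.mean()
# Cross-check: np.polyfit with degree 1 returns [slope, intercept].
print(beta1, beta2, np.polyfit(x, y, 1))
```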
Estimating the Multiple Linear Regression Model: $\mathrm{SSR}(\beta) = \sum_{i=1}^n (y_i - x_i'\beta)^2$; the first-order conditions give $X'(y - X\hat\beta) = 0$, thus $\hat\beta = (X'X)^{-1} X' y$.
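A minimal sketch of $\hat\beta = (X'X)^{-1}X'y$ on simulated data, solving the normal equations rather than forming an explicit inverse:

```python
# Multiple regression via the normal equations, checked against numpy's
# least-squares solver (illustrative data; true coefficients arbitrary).
import numpy as np

rng = np.random.default_rng(3)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # include a constant
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # numerically preferable to inv(X'X) @ X'y
print(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])
```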

The Geometry of OLS Estimation: $X'(y - X\hat\beta) = 0$, i.e. $x_i'(y - X\hat\beta) = 0$ for each regressor; $P_X = X(X'X)^{-1}X'$, $M_X = I - P_X$; $P_X P_X = P_X$ and $P_X' = P_X$ (idempotent and symmetric), $P_X M_X = 0$, and $\|y\|^2 = \|P_X y\|^2 + \|M_X y\|^2$.
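The projection identities can be verified directly; a sketch assuming random data:

```python
# Numerical check of the projection identities: P_X idempotent and symmetric,
# P_X M_X = 0, and the Pythagorean decomposition of ||y||^2.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

P = X @ np.linalg.solve(X.T @ X, X.T)   # P_X = X (X'X)^{-1} X'
M = np.eye(50) - P                      # M_X = I - P_X
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
assert np.allclose(P @ M, 0)
assert np.isclose(y @ y, (P @ y) @ (P @ y) + (M @ y) @ (M @ y))
```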
Linear Transformations of Regressors: for $Z = XA$ with any nonsingular $k \times k$ matrix $A$, $P_{XA} = P_X$, since $X = XAA^{-1} = (XA)A^{-1}$.
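A quick numerical check of this invariance, with a random (almost surely nonsingular) $A$:

```python
# P_{XA} = P_X: the projection depends only on the span of the regressors.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 3))
A = rng.normal(size=(3, 3))             # random 3x3 matrix, nonsingular a.s.
Z = X @ A
PX = X @ np.linalg.solve(X.T @ X, X.T)
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
assert np.allclose(PX, PZ)
```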
The Frisch-Waugh-Lovell Theorem: in $y = X_1\beta_1 + X_2\beta_2 + u$, $P_X P_{X_1} = P_{X_1} P_X = P_{X_1}$, $\hat\beta_2 = (X_2' M_{X_1} X_2)^{-1} X_2' M_{X_1} y$, and $\hat\beta_1 = (X_1' X_1)^{-1} X_1' (y - X_2 \hat\beta_2)$.
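A sketch of the FWL equivalence on simulated data: the partialled-out estimate of $\beta_2$ matches the coefficient from the full regression:

```python
# FWL: the coefficient on x2 from the full regression equals the coefficient
# from the regression on M_{X1}-projected data (illustrative design).
import numpy as np

rng = np.random.default_rng(6)
n = 300
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
x2 = rng.normal(size=n)
y = X1 @ np.array([1.0, 2.0]) + 0.7 * x2 + rng.normal(size=n)

M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
beta2_fwl = (x2 @ M1 @ y) / (x2 @ M1 @ x2)
X = np.column_stack([X1, x2])
beta_full = np.linalg.solve(X.T @ X, X.T @ y)
print(beta2_fwl, beta_full[-1])   # identical up to rounding
```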
Goodness of Fit of a Regression: $\mathrm{TSS} = \|y\|^2 = \|P_X y\|^2 + \|M_X y\|^2 = \mathrm{ESS} + \mathrm{SSR}$, $R^2 = \frac{\mathrm{ESS}}{\mathrm{TSS}} = 1 - \frac{\mathrm{SSR}}{\mathrm{TSS}}$; the centered version is $R_c^2 = 1 - \frac{\sum_{i=1}^n \hat u_i^2}{\sum_{i=1}^n (y_i - \bar y)^2}$, and $R_{\mathrm{adj}}^2 = 1 - \frac{(n-1)\, y' M_X y}{(n-k)\, y' M_\iota y}$, where $\iota$ is a vector of ones.
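A sketch computing $R_c^2$ and the adjusted version from the residuals of a simulated fit, using the equivalent form $R_{\mathrm{adj}}^2 = 1 - \frac{n-1}{n-k}(1 - R_c^2)$:

```python
# Centered and adjusted R^2 from a fitted regression (illustrative data).
import numpy as np

rng = np.random.default_rng(7)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

beta = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta
R2_c = 1 - (u_hat @ u_hat) / ((y - y.mean()) ** 2).sum()
R2_adj = 1 - (n - 1) / (n - k) * (1 - R2_c)   # same as the M_iota form above
print(R2_c, R2_adj)
```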
Are OLS Parameter Estimators Unbiased: $y = X\beta + u$, $u \sim \mathrm{IID}(0, \sigma^2 I)$; $E(\hat\beta) = \beta + E\big((X'X)^{-1} X' u\big)$, so under the assumption $E(u \mid X) = 0$, $\hat\beta$ is unbiased.

Are OLS Parameter Estimators Consistent: by the LLN, $\mathrm{plim}_{n\to\infty} \frac{1}{n} \sum_{i=1}^n x_i = E(x)$, which implies $\mathrm{plim}_{n\to\infty} \frac{1}{n} X'X = E(x'x)$, where $(x'x)_{ij} = E(x_i x_j)$. The OLS estimator is consistent because $\mathrm{plim}_{n\to\infty} \frac{1}{n} X'X$ is finite and nonsingular while $\mathrm{plim}_{n\to\infty} \frac{1}{n} X'u = 0$.
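A Monte Carlo sketch of both properties, with arbitrary true coefficients:

```python
# With E(u|X) = 0 the OLS estimates average out to the true beta (unbiasedness)
# and tighten around it as n grows (consistency). Illustrative design.
import numpy as np

rng = np.random.default_rng(8)
beta_true = np.array([1.0, 0.5])

def ols_draws(n, reps=2000):
    draws = np.empty((reps, 2))
    for r in range(reps):
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = X @ beta_true + rng.normal(size=n)
        draws[r] = np.linalg.solve(X.T @ X, X.T @ y)
    return draws

for n in (25, 400):
    d = ols_draws(n)
    print(n, d.mean(axis=0), d.std(axis=0))   # mean near beta_true; std shrinks with n
```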
Precision of the Least Squares Estimates: $\mathrm{Var}(\hat\beta) = \sigma^2 (X'X)^{-1}$; for $y = x_1\beta_1 + X_2\beta_2 + u$, by the FWL theorem $\hat\beta_1 = \frac{x_1' M_{X_2} y}{x_1' M_{X_2} x_1}$ and $\mathrm{Var}(\hat\beta_1) = \sigma^2 (x_1' M_{X_2} x_1)^{-1}$.

Linear Functions of Parameter Estimates: for $\gamma = w'\beta$ and $\hat\gamma = w'\hat\beta$, $\mathrm{Var}(\hat\gamma) = \mathrm{Var}(w'\hat\beta) = w'\, \mathrm{Var}(\hat\beta)\, w = \sigma^2 w' (X'X)^{-1} w$.
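A sketch estimating $\mathrm{Var}(\hat\beta)$ by $s^2 (X'X)^{-1}$ and the standard error of a hypothetical linear combination $w'\hat\beta$:

```python
# Estimated covariance matrix s^2 (X'X)^{-1} and the SE of w'beta_hat
# (illustrative data; w is a hypothetical contrast vector).
import numpy as np

rng = np.random.default_rng(9)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

beta = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta
s2 = (u_hat @ u_hat) / (n - k)
V = s2 * np.linalg.inv(X.T @ X)        # estimated Var(beta_hat)
w = np.array([0.0, 1.0, -1.0])         # e.g. gamma = beta_2 - beta_3
print(np.sqrt(np.diag(V)))             # standard errors of the coefficients
print(np.sqrt(w @ V @ w))              # standard error of w'beta_hat
```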


Efficiency of the OLS Estimator: the OLS estimator $\hat\beta$ is more efficient than any other linear unbiased estimator $\tilde\beta$, in the sense that $\mathrm{Var}(\tilde\beta) - \mathrm{Var}(\hat\beta)$ is a positive semidefinite matrix, if it is assumed that $E(u \mid X) = 0$ and $E(uu' \mid X) = \sigma^2 I$ (Gauss-Markov theorem).

Estimating the Variance of the Error Terms: $\hat\sigma^2 = \frac{1}{n-k} \sum_{i=1}^n \hat u_i^2$, because $E(u' M_X u) = E(\mathrm{tr}(u' M_X u)) = E(\mathrm{tr}(M_X u u')) = \sigma^2\, \mathrm{tr}(M_X) = (n-k)\sigma^2$.

The Variance of Forecast Errors: $\mathrm{Var}(y_{n+1} - x_{n+1}'\hat\beta) = \sigma^2 + \mathrm{Var}(x_{n+1}'\hat\beta) = \sigma^2 \big(1 + x_{n+1}' (X'X)^{-1} x_{n+1}\big)$.
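A sketch of the forecast-error variance at a hypothetical new point $x_{n+1}$, with $\sigma^2$ replaced by its estimate $s^2$:

```python
# Forecast-error variance: s^2 * (1 + x_new'(X'X)^{-1} x_new), using a fitted
# model on simulated data (x_new is a hypothetical new regressor point).
import numpy as np

rng = np.random.default_rng(10)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
s2 = (resid @ resid) / (n - 2)
x_new = np.array([1.0, 2.0])
var_f = s2 * (1 + x_new @ np.linalg.solve(X.T @ X, x_new))
print(x_new @ beta, np.sqrt(var_f))   # point forecast and its standard error
```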
Overspecification: the true model is $y = X\beta + u$ and we estimate $y = X\beta + Z\gamma + v$; by the FWL theorem, $\hat\beta = (X' M_Z X)^{-1} X' M_Z y = \beta + (X' M_Z X)^{-1} X' M_Z u$, which is unbiased, but $\mathrm{Var}(\hat\beta) - \mathrm{Var}(\tilde\beta)$ is positive semidefinite by the Gauss-Markov theorem, where $\tilde\beta$ is the estimator using the true model.

Underspecification: the true model is $y = X\beta + Z\gamma + u$ and we estimate $y = X\beta + v$; since $E(\hat\beta) = E\big((X'X)^{-1} X' (X\beta + Z\gamma + u)\big) = \beta + (X'X)^{-1} X' Z \gamma$, $\hat\beta$ is unbiased if and only if $X'Z = 0$ or $\gamma = 0$.
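A Monte Carlo sketch of the underspecification bias, with $Z$ deliberately correlated with $X$ (all parameter values arbitrary):

```python
# Omitted-variable bias: with z omitted and X'Z != 0, the estimated slope on x
# is biased away from its true value of 0.5 (illustrative design).
import numpy as np

rng = np.random.default_rng(11)
reps, n, gamma = 2000, 200, 0.8
est = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    z = 0.6 * x + rng.normal(size=n)      # z correlated with x
    y = 1.0 + 0.5 * x + gamma * z + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])  # underspecified: z omitted
    est[r] = np.linalg.solve(X.T @ X, X.T @ y)[1]
print(est.mean())   # roughly 0.5 + 0.8 * 0.6 = 0.98, not the true 0.5
```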



Tests of a Single Restriction: $y = X\beta + u$, $u \sim N(0, \sigma^2 I)$; since the model can be rewritten as $y = X_1\beta_1 + x_2\beta_2 + u$, by the FWL theorem $\hat\beta_2 = \frac{x_2' M_1 y}{x_2' M_1 x_2} \sim N\big(\beta_2, \frac{\sigma^2}{x_2' M_1 x_2}\big)$; denoting $s^2 = y' M_X y / (n-k)$, the test statistic is $t = \frac{x_2' M_1 y}{s\, (x_2' M_1 x_2)^{1/2}} \sim \mathrm{Student}(n-k)$.

Tests of Several Restrictions: $H_1: y = X_1\beta_1 + X_2\beta_2 + u$ (unrestricted model), $H_0: y = X_1\beta_1 + u$ (restricted model); the appropriate test statistic is $F = \frac{(\mathrm{SSR}_r - \mathrm{SSR}_{ur})/r}{\mathrm{SSR}_{ur}/(n-k)} \sim F(r, n-k)$, where $\mathrm{SSR}_r - \mathrm{SSR}_{ur} = y' M_{X_1} X_2 (X_2' M_{X_1} X_2)^{-1} X_2' M_{X_1} y = \|P_{M_{X_1} X_2}\, y\|^2$ and $\mathrm{SSR}_{ur} = \|M_X y\|^2 = \|M_{X_1} y - P_{M_{X_1} X_2}\, y\|^2 = y' M_{X_1} y - y' M_{X_1} X_2 (X_2' M_{X_1} X_2)^{-1} X_2' M_{X_1} y$.

An Example of the F Test: $y = \beta_1 \iota + X_2 \beta_2 + u$, $u \sim N(0, \sigma^2 I)$; $F = \frac{y' M_\iota X_2 (X_2' M_\iota X_2)^{-1} X_2' M_\iota y / (k-1)}{y' M_X y / (n-k)} = \frac{n-k}{k-1} \cdot \frac{R_c^2}{1 - R_c^2}$.

Testing the Equality of Two Parameter Vectors (Chow test): with $y \equiv (y_1', y_2')'$ and $X \equiv (X_1', X_2')'$, run the regression $\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \beta_1 + \begin{pmatrix} 0 \\ X_2 \end{pmatrix} \gamma + u$, $u \sim N(0, \sigma^2 I)$; since $\gamma$ is defined as $\beta_2 - \beta_1$, the restriction $\beta_2 = \beta_1$ is equivalent to $\gamma = 0$. If $\mathrm{SSR}_1$ and $\mathrm{SSR}_2$ denote the sums of squared residuals from the two subsample regressions, and $\mathrm{SSR}_r$ denotes the sum of squared residuals from regressing $y$ on $X$, the F statistic becomes $F = \frac{(\mathrm{SSR}_r - \mathrm{SSR}_1 - \mathrm{SSR}_2)/k}{(\mathrm{SSR}_1 + \mathrm{SSR}_2)/(n - 2k)}$.
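A sketch of the Chow version of the F statistic on simulated data with a genuine break in the slope (design and seed arbitrary):

```python
# Chow test sketch: F = ((SSR_r - SSR_1 - SSR_2)/k) / ((SSR_1 + SSR_2)/(n - 2k)),
# with each SSR computed from a direct OLS fit.
import numpy as np

def ssr(X, y):
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    return u @ u

rng = np.random.default_rng(12)
n, k = 200, 2
X1 = np.column_stack([np.ones(n // 2), rng.normal(size=n // 2)])
X2 = np.column_stack([np.ones(n // 2), rng.normal(size=n // 2)])
y1 = X1 @ np.array([1.0, 0.5]) + rng.normal(size=n // 2)
y2 = X2 @ np.array([1.0, 0.9]) + rng.normal(size=n // 2)   # slope differs across samples

ssr_r = ssr(np.vstack([X1, X2]), np.concatenate([y1, y2]))  # pooled (restricted) fit
ssr_1, ssr_2 = ssr(X1, y1), ssr(X2, y2)                     # separate (unrestricted) fits
F = ((ssr_r - ssr_1 - ssr_2) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))
print(F)   # compare with the F(k, n - 2k) critical value
```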
