
MOOC Econometrics
Lecture 2.3 on Multiple Regression: Estimation
Christiaan Heij

OLS criterion

Model: y = Xβ + ε

Dimensions: y (n × 1), X (n × k): observed data
            β (k × 1), ε (n × 1): unobserved

Objective:
→ Estimate β by a (k × 1) vector b so that Xb is ‘close’ to y.

Lecture 2.3, Slide 2 of 10, Erasmus School of Economics
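As an aside (not part of the slides), the model and its dimensions can be sketched numerically; the sample size, number of regressors, and coefficient values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 3  # illustrative sample size and number of regressors

X = rng.normal(size=(n, k))        # observed regressors, (n x k)
beta = np.array([1.0, -2.0, 0.5])  # unobserved coefficients, (k x 1)
eps = rng.normal(size=n)           # unobserved errors, (n x 1)
y = X @ beta + eps                 # observed outcomes, (n x 1)
```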

OLS criterion

We assume that the (n × k) matrix X has rank(X) = k.

Test
Prove that #(parameters) = k ≤ n = #(observations).

Answer: X is an (n × k) matrix, hence k = rank(X) ≤ n.

Wish: a small vector of residuals e = y − Xb = (e₁, e₂, …, eₙ)′.

Least squares criterion (‘ordinary least squares’, OLS):
→ minimize S(b) = e′e = Σᵢ₌₁ⁿ eᵢ².

Lecture 2.3, Slide 3 of 10, Erasmus School of Economics

OLS estimation

S(b) = e′e = (y − Xb)′(y − Xb)
     = y′y − y′Xb − b′X′y + b′X′Xb
     = y′y − 2b′X′y + b′X′Xb

Test
We used y′Xb = b′X′y. Prove this result.

Answer: y′Xb is (1 × 1), so y′Xb = (y′Xb)′ = b′X′y.

Facts of matrix derivatives (see Building Blocks):
∂(b′a)/∂b = a
∂(b′Ab)/∂b = (A + A′)b

Lecture 2.3, Slide 4 of 10, Erasmus School of Economics
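The expansion of S(b) on the last slide can be checked numerically. A minimal sketch on simulated data, with an arbitrary candidate vector b (all names and values below are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3  # illustrative dimensions
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

def S(b):
    """Least squares criterion S(b) = e'e with e = y - Xb."""
    e = y - X @ b
    return e @ e

b = np.array([0.5, 1.5, 0.0])  # arbitrary candidate coefficient vector

# Expanded form from the slide: y'y - 2 b'X'y + b'X'Xb
S_expanded = y @ y - 2 * b @ (X.T @ y) + b @ (X.T @ X) @ b
```

The two expressions agree for any b, which is exactly what the slide's algebra shows.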
OLS estimation

First order conditions for S(b) = y′y − 2b′X′y + b′X′Xb:

∂S/∂b = −2X′y + (X′X + X′X)b = −2X′y + 2X′Xb = 0.

So: X′Xb = X′y.

Test
Prove that rank(X) = k implies that X′X is invertible.

Answer: X′X is a (k × k) matrix, and
X′Xa = 0 ⇒ a′X′Xa = (Xa)′Xa = 0 ⇒ Xa = 0 ⇒ a = 0.
The last step follows from rank(X) = k.

So: b = (X′X)⁻¹X′y.

Lecture 2.3, Slide 5 of 10, Erasmus School of Economics

Geometric aspects

y is (n × 1), X is (n × k).

Define H = X(X′X)⁻¹X′
       M = I − H = I − X(X′X)⁻¹X′

Test
Show that M′ = M, M² = M, MX = 0, MH = 0.

Answer: Direct calculations.
Use that (X′X)⁻¹ is symmetric and (X′X)⁻¹X′X = I.

Fitted values: ŷ = Xb = X(X′X)⁻¹X′y = Hy.
Residuals: e = y − Xb = y − Hy = My.
e and ŷ are orthogonal: e′ŷ = (My)′Hy = y′M′Hy = y′MHy = 0.

Lecture 2.3, Slide 6 of 10, Erasmus School of Economics
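The closed-form estimator and the properties of H and M asked for in the Test above can be verified numerically. A sketch on simulated data (the normal equations X′Xb = X′y are solved with `np.linalg.solve` rather than by forming the inverse explicitly, which is the usual numerical practice; all data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 3  # illustrative dimensions
X = rng.normal(size=(n, k))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

# OLS estimator: solve the normal equations X'Xb = X'y for b
b = np.linalg.solve(X.T @ X, X.T @ y)

# Projection ("hat") matrix H and residual maker M = I - H
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H

y_hat = H @ y   # fitted values: y_hat = Xb = Hy
e = M @ y       # residuals:     e = y - Xb = My
```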

Relation between y, X, b, and e

Test
Check that the (n × 1) vector of residuals e satisfies k linear restrictions,
so that e has (n − k) ‘degrees of freedom’.

Answer: rank(X) = k, and X′e = X′(y − Xb) = X′y − X′Xb = 0.

The ‘X-plane’ is the k-dimensional subspace spanned by the columns of X,
that is, the set of vectors Xa with a an arbitrary (k × 1) vector.

Lecture 2.3, Slide 7 of 10, Erasmus School of Economics

Estimation of error variance σ²

σ² = E(εᵢ²)

Estimate the unknown ε = y − Xβ by the residuals e = y − Xb.

Sample variance of residuals: σ̂² = 1/(n − 1) Σᵢ₌₁ⁿ (eᵢ − ē)².

OLS estimator: s² = 1/(n − k) e′e = 1/(n − k) Σᵢ₌₁ⁿ eᵢ².

Unbiased under standard assumptions (see next lecture).

Lecture 2.3, Slide 8 of 10, Erasmus School of Economics
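The k restrictions X′e = 0 and the degrees-of-freedom correction in s² can be illustrated on simulated data. A sketch assuming a model with a constant term (first column of ones); all specifics below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 60, 4
# First column of ones acts as the constant term
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -2.0, 0.3]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

# e satisfies the k linear restrictions X'e = 0 ...
restrictions = X.T @ e

# ... so the OLS variance estimator divides e'e by n - k, not n - 1
s2 = (e @ e) / (n - k)
```

Because the constant column is in X, the first restriction says the residuals sum to zero.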
R-squared (R²)

Definition: R² = cor(y, ŷ)² = (Σᵢ(yᵢ − ȳ)(ŷᵢ − ŷ̄))² / (Σᵢ(yᵢ − ȳ)² · Σᵢ(ŷᵢ − ŷ̄)²),

where ‘cor’ is the correlation coefficient and ŷ = Xb.

A higher R² means a better fit of Xb to the observed y.

If the model contains a constant term (x₁ᵢ = 1 for all i = 1, …, n):

R² = 1 − e′e / Σᵢ₌₁ⁿ (yᵢ − ȳ)².

Lecture 2.3, Slide 9 of 10, Erasmus School of Economics

TRAINING EXERCISE 2.3

Train yourself by making the training exercise (see the website).

After making this exercise, check your answers by studying the
webcast solution (also available on the website).

Lecture 2.3, Slide 10 of 10, Erasmus School of Economics
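Returning to the R-squared slide: when the model contains a constant term, the squared-correlation definition and the formula 1 − e′e / Σ(yᵢ − ȳ)² coincide. A numerical check on simulated data (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant term included
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b
e = y - y_hat

# Definition: squared correlation between y and the fitted values
r2_cor = np.corrcoef(y, y_hat)[0, 1] ** 2

# Equivalent formula when a constant term is present
r2_ss = 1.0 - (e @ e) / np.sum((y - y.mean()) ** 2)
```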
