Properties of the OLS estimators

We now return to the properties that we would like our estimators to have. Based on the
assumptions of the CLRM, we can prove that the OLS estimators are best linear unbiased
estimators (BLUE). To do so, we first have to decompose the regression coefficients
estimated under OLS into their random and non-random components.
As a starting point, note that $Y_t$ has a non-random component, $a + \beta X_t$, as well
as a random component, captured by the error term $u_t$. Therefore, $\text{Cov}(X, Y)$ – which
depends on the values of $Y_t$ – will have a random and a non-random component:
$$\text{Cov}(X, Y) = \text{Cov}(X, a + \beta X + u) = \text{Cov}(X, a) + \text{Cov}(X, \beta X) + \text{Cov}(X, u) \quad (3.29)$$
However, because $a$ and $\beta$ are constants, we have that $\text{Cov}(X, a) = 0$ and that
$\text{Cov}(X, \beta X) = \beta\,\text{Cov}(X, X) = \beta\,\text{Var}(X)$. Thus:
$$\text{Cov}(X, Y) = \beta\,\text{Var}(X) + \text{Cov}(X, u) \quad (3.30)$$
and substituting that in Equation (3.28) yields:
$$\hat{\beta} = \frac{\text{Cov}(X, Y)}{\text{Var}(X)} = \beta + \frac{\text{Cov}(X, u)}{\text{Var}(X)} \quad (3.31)$$
which says that the OLS coefficient $\hat{\beta}$ estimated from any sample has a non-random
component, $\beta$, and a random component that depends on $\text{Cov}(X_t, u_t)$.
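Equation (3.31) can be checked numerically. The following is a minimal simulation sketch (the sample size, the true values $a = 2$, $\beta = 0.5$ and the normal errors are assumptions made purely for illustration): it computes $\hat{\beta}$ as Cov(X, Y)/Var(X) and confirms that it equals $\beta$ plus Cov(X, u)/Var(X).

```python
import numpy as np

# Sketch: simulate Y = a + beta*X + u with known (hypothetical) a and beta,
# then verify the decomposition beta_hat = beta + Cov(X, u)/Var(X).
rng = np.random.default_rng(0)
n = 200
a, beta = 2.0, 0.5                  # hypothetical true parameters
X = rng.uniform(0, 10, n)           # regressor, treated as fixed
u = rng.normal(0, 1, n)             # true error term
Y = a + beta * X + u

var_x = np.var(X, ddof=1)
beta_hat = np.cov(X, Y, ddof=1)[0, 1] / var_x      # OLS slope
decomposed = beta + np.cov(X, u, ddof=1)[0, 1] / var_x

print(beta_hat, decomposed)         # the two values coincide
```

Note that the ratio is unaffected by whether the sample covariance and variance divide by $n$ or $n - 1$, as long as both use the same convention.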
Linearity
Based on assumption 3, we have that X is non-stochastic and fixed in repeated samples.
Therefore, the X values can be treated as constants, and we need only concentrate
on the Y values. If the OLS estimators are linear functions of the Y values, then they
are linear estimators. From Equation (3.24) we have that:
$$\hat{\beta} = \frac{\sum x_t y_t}{\sum x_t^2} \quad (3.32)$$
Since the $X_t$ are regarded as constants, the deviations $x_t = X_t - \bar{X}$ are regarded as constants as well.
We have that:
$$\hat{\beta} = \frac{\sum x_t y_t}{\sum x_t^2} = \frac{\sum x_t (Y_t - \bar{Y})}{\sum x_t^2} = \frac{\sum x_t Y_t - \bar{Y}\sum x_t}{\sum x_t^2} \quad (3.33)$$
but because $\sum x_t = 0$ (deviations from the mean sum to zero), so that $\bar{Y}\sum x_t = 0$, we have that:
$$\hat{\beta} = \frac{\sum x_t Y_t}{\sum x_t^2} = \sum z_t Y_t \quad (3.34)$$
where $z_t = x_t / \sum x_t^2$ can also be regarded as constant, and therefore $\hat{\beta}$ is indeed a linear
estimator of the $Y_t$.
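To make the linearity concrete, here is a small self-contained sketch (the data-generating values are again hypothetical): the weights $z_t$ are built from the X values only, and $\hat{\beta}$ is simply a weighted sum of the $Y_t$.

```python
import numpy as np

# Sketch: beta_hat as a linear function of the Y values,
# with weights z_t = x_t / sum(x_t^2) that depend only on the fixed X.
rng = np.random.default_rng(1)
n = 200
X = rng.uniform(0, 10, n)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, n)   # hypothetical a = 2, beta = 0.5

x = X - X.mean()                  # deviations from the mean
z = x / np.sum(x**2)              # the z_t weights of Equation (3.34)
beta_hat = np.sum(z * Y)          # a weighted sum of the Y_t
print(beta_hat)                   # same value as Cov(X, Y)/Var(X)
```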
Unbiasedness
Unbiasedness of $\hat{\beta}$
To prove that $\hat{\beta}$ is an unbiased estimator of $\beta$ we need to show that $E(\hat{\beta}) = \beta$. We have:
$$E(\hat{\beta}) = E\left[\beta + \frac{\text{Cov}(X, u)}{\text{Var}(X)}\right] \quad (3.35)$$
However, $\beta$ is a constant and, using assumption 3 – that $X_t$ is non-random – we can
treat $\text{Var}(X)$ as a fixed constant as well, so both can be taken out of the expectation expression to give:
$$E(\hat{\beta}) = E(\beta) + \frac{1}{\text{Var}(X)}\,E[\text{Cov}(X, u)] \quad (3.36)$$
Therefore, it is enough to show that $E[\text{Cov}(X, u)] = 0$. We know that:
$$E[\text{Cov}(X, u)] = E\left[\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})(u_t - \bar{u})\right] \quad (3.37)$$
where $1/n$ is a constant, so we can take it out of the expectation expression, and the
expectation of the sum can be broken down into the sum of the expectations to give:
$$E[\text{Cov}(X, u)] = \frac{1}{n}\Bigl[E(X_1 - \bar{X})(u_1 - \bar{u}) + \cdots + E(X_n - \bar{X})(u_n - \bar{u})\Bigr] = \frac{1}{n}\sum_{t=1}^{n} E\bigl[(X_t - \bar{X})(u_t - \bar{u})\bigr] \quad (3.38)$$
Furthermore, because $X_t$ is non-random (again from assumption 3), we can take it out
of the expectation term to give:
$$E[\text{Cov}(X, u)] = \frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})\,E(u_t - \bar{u}) \quad (3.39)$$
Finally, using assumption 4, we have that $E(u_t) = 0$ and therefore $E(\bar{u}) = 0$. So
$E[\text{Cov}(X, u)] = 0$, and this proves that:
$$E(\hat{\beta}) = \beta$$
or, to put it in words, that $\hat{\beta}$ is an unbiased estimator of the true population parameter $\beta$.
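Unbiasedness is a statement about the average of $\hat{\beta}$ over repeated samples drawn with the same fixed X values. A minimal Monte Carlo sketch (sample size, replication count and the true values $a = 2$, $\beta = 0.5$ are illustrative assumptions) shows the estimates averaging to $\beta$:

```python
import numpy as np

# Monte Carlo sketch: hold X fixed across samples (assumption 3), redraw only
# the errors with E(u_t) = 0 (assumption 4), and average the OLS slopes.
rng = np.random.default_rng(42)
n, reps = 50, 10_000
a, beta = 2.0, 0.5                 # hypothetical true parameters
X = rng.uniform(0, 10, n)          # fixed in repeated samples
x = X - X.mean()

estimates = np.empty(reps)
for r in range(reps):
    Y = a + beta * X + rng.normal(0, 1, n)
    estimates[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2)

print(estimates.mean())            # close to 0.5, illustrating E(beta_hat) = beta
```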
Unbiasedness of $\hat{a}$
We know that $\hat{a} = \bar{Y} - \hat{\beta}\bar{X}$, so:
$$E(\hat{a}) = E(\bar{Y}) - E(\hat{\beta})\,\bar{X} \quad (3.40)$$
But we also have that:
$$E(Y_t) = a + \beta X_t + E(u_t) = a + \beta X_t \quad (3.41)$$
where the $E(u_t)$ term is eliminated because, according to assumption 4, $E(u_t) = 0$. Averaging
Equation (3.41) over the $n$ observations gives:
$$E(\bar{Y}) = a + \beta\bar{X} \quad (3.42)$$
Substituting Equation (3.42) into Equation (3.40) gives:
$$E(\hat{a}) = a + \beta\bar{X} - E(\hat{\beta})\,\bar{X} \quad (3.43)$$
We have proved before that $E(\hat{\beta}) = \beta$; therefore:
$$E(\hat{a}) = a + \beta\bar{X} - \beta\bar{X} = a \quad (3.44)$$
which proves that $\hat{a}$ is an unbiased estimator of $a$.
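The same repeated-sampling check works for $\hat{a}$ (again a sketch with purely illustrative parameter values):

```python
import numpy as np

# Monte Carlo sketch for the intercept: a_hat = Y_bar - beta_hat * X_bar
# should average to the true a over repeated samples with X held fixed.
rng = np.random.default_rng(7)
n, reps = 50, 10_000
a, beta = 2.0, 0.5                 # hypothetical true parameters
X = rng.uniform(0, 10, n)          # fixed regressors
x = X - X.mean()

a_hats = np.empty(reps)
for r in range(reps):
    Y = a + beta * X + rng.normal(0, 1, n)
    beta_hat = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    a_hats[r] = Y.mean() - beta_hat * X.mean()

print(a_hats.mean())               # close to 2.0, illustrating E(a_hat) = a
```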