
OLS Assumptions

• ASSUMPTION SLR.1 – Linear in Parameters

• In the population model, the dependent variable, y, is related to the independent
variable, x, and the error (or disturbance), u, as y = β0 + β1x + u … Eq. 2.47, where
β1 is the slope parameter and β0 is the intercept parameter.

• ASSUMPTION SLR.2 – Random Sampling

• We have a random sample of size n, {(xi, yi): i = 1, . . . , n}, following the population
model.
• We can write Eq. 2.47 in terms of the random sample as yi = β0 + β1xi + ui,
i = 1, 2, …, n.

Wooldridge, 2016

• As we saw earlier, the OLS slope and intercept estimates are not defined unless there is
some sample variation in the explanatory variable. This leads us to our third assumption.
• ASSUMPTION SLR.3 – Sample Variation in the Explanatory Variable
• The sample outcomes on x, namely {xi: i = 1, . . . , n}, are not all the same value.
• If x varies in the population, random samples on x will typically contain variation, unless
the population variation is minimal or the sample size is small. (Wooldridge, 2016)
• ASSUMPTION SLR.4 – Zero Conditional Mean
• The error u has an expected value of zero given any value of the explanatory variable, i.e.
E(u|x) = 0
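The four assumptions can be sketched in a short simulation. This is a minimal illustration, not from the text: the parameter values (β0 = 1.0, β1 = 0.5), the sample size, and the distributions of x and u are all my hypothetical choices.

```python
import numpy as np

# Hypothetical population parameters (my choice, not from the text)
beta0, beta1 = 1.0, 0.5
rng = np.random.default_rng(0)

# SLR.2: a random sample of size n from the population
n = 1000
x = rng.normal(loc=5.0, scale=2.0, size=n)  # SLR.3: x varies in the sample
u = rng.normal(loc=0.0, scale=1.0, size=n)  # SLR.4: u drawn independently of x, so E(u|x) = 0
y = beta0 + beta1 * x + u                   # SLR.1: linear in parameters (Eq. 2.47)
```

Because u is generated independently of x with mean zero, the zero-conditional-mean assumption holds by construction; in real data SLR.4 is an assumption about the population, not something one can impose.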

The OLS Estimator
• Since Σᵢ₌₁ⁿ (xi − x̄)(yi − ȳ) = Σᵢ₌₁ⁿ yi(xi − x̄),

• we can write β̂1 = [Σᵢ₌₁ⁿ yi(xi − x̄)] / [Σᵢ₌₁ⁿ (xi − x̄)²]

• β̂1 is a random variable in this context, as we are interested in the behaviour of β̂1 across all
possible random samples from the population. (Note: we take repeated random samples of size
n from the population and calculate β̂1 in each of these samples. This gives a sampling
distribution of β̂1.)
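The slope formula above translates directly into code. This is a minimal sketch on simulated data; the function name, the true parameters (intercept 2.0, slope 3.0), and the sample design are my illustrative choices.

```python
import numpy as np

def ols_slope(x, y):
    """Slope estimate: sum_i y_i (x_i - xbar) / sum_i (x_i - xbar)^2."""
    d = x - x.mean()                  # deviations x_i - xbar
    return (y * d).sum() / (d ** 2).sum()

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 3.0 * x + rng.normal(size=200)

b1 = ols_slope(x, y)

# The equivalent covariance form sum_i (x_i - xbar)(y_i - ybar) / sum_i (x_i - xbar)^2
# gives the same number, which is the identity the bullet above relies on.
b1_cov = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
```

The two expressions agree (up to floating-point error) because Σ(xi − x̄) = 0, so subtracting ȳ from each yi changes nothing.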
• Now, we write β̂1 in terms of the population coefficients and errors as
β̂1 = [Σᵢ₌₁ⁿ (xi − x̄)(β0 + β1xi + ui)] / SSTx
(here we have defined SSTx = Σᵢ₌₁ⁿ (xi − x̄)² to simplify the notation).
• Σᵢ₌₁ⁿ (xi − x̄)(β0 + β1xi + ui) = β0 Σᵢ₌₁ⁿ (xi − x̄) + β1 Σᵢ₌₁ⁿ (xi − x̄)xi + Σᵢ₌₁ⁿ ui(xi − x̄)
• = β1SSTx + Σᵢ₌₁ⁿ ui(xi − x̄) [because Σᵢ₌₁ⁿ (xi − x̄) = 0 and Σᵢ₌₁ⁿ (xi − x̄)xi = Σᵢ₌₁ⁿ (xi − x̄)² = SSTx]
• β̂1 = β1 + (1/SSTx)(Σᵢ₌₁ⁿ diui) (where di = xi − x̄) … Eq. 2.52
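Eq. 2.52 can be verified numerically on a single simulated sample: the OLS slope computed from the data equals β1 plus the weighted error term exactly. The parameter values and sample design below are my hypothetical choices.

```python
import numpy as np

# Hypothetical population (my choice): y = 1.0 + 0.5 x + u
rng = np.random.default_rng(2)
beta0, beta1 = 1.0, 0.5
x = rng.normal(5, 2, size=100)
u = rng.normal(0, 1, size=100)
y = beta0 + beta1 * x + u

d = x - x.mean()           # d_i = x_i - xbar
sst_x = (d ** 2).sum()     # SST_x = sum_i (x_i - xbar)^2

beta1_hat = (y * d).sum() / sst_x            # OLS slope from the data
decomposed = beta1 + (d * u).sum() / sst_x   # Eq. 2.52: beta1 + (1/SST_x) sum_i d_i u_i
```

The two quantities match to floating-point precision, confirming that the sampling variation in β̂1 comes entirely from the term (1/SSTx) Σ diui.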
Theorem: Unbiasedness of OLS
• Using Assumptions SLR.1 to SLR.4, E(β̂0) = β0 and E(β̂1) = β1 for any values of β0 and
β1. In other words, β̂0 is unbiased for β0, and β̂1 is unbiased for β1.
• Proof: In this proof, we make a technical simplification and derive the expected values
conditional on the xi. We also assume that the xi are non-random (i.e. the xi are fixed in
repeated samples).
• Because SSTx and the di are functions only of the xi, they are non-random.
• We know that β̂1 = β1 + (1/SSTx)(Σᵢ₌₁ⁿ diui)
• Keeping the conditioning on {x1, x2, …, xn} implicit, we have
• E(β̂1) = β1 + E[(1/SSTx)(Σᵢ₌₁ⁿ diui)]
• = β1 + (1/SSTx) E(Σᵢ₌₁ⁿ diui)
• = β1 + (1/SSTx) Σᵢ₌₁ⁿ E(diui)
• = β1 + (1/SSTx) Σᵢ₌₁ⁿ di·E(ui)
• = β1 + (1/SSTx) Σᵢ₌₁ⁿ di·0 = β1 [the expected value of ui conditional on {x1, x2, …, xn} is
zero under Assumptions SLR.2 and SLR.4]
• Since unbiasedness holds for any outcome on {x1, x2, …, xn}, unbiasedness also holds
without conditioning on {x1, x2, …, xn}.
• Taking the average of yi = β0 + β1xi + ui across i, we get ȳ = β0 + β1x̄ + ū
• Plug ȳ = β0 + β1x̄ + ū into the formula for β̂0:
• β̂0 = ȳ − β̂1x̄
• = β0 + β1x̄ + ū − β̂1x̄
• = β0 + (β1 − β̂1)x̄ + ū
• Then, conditional on the values of the xi,
• E(β̂0) = E[β0 + (β1 − β̂1)x̄ + ū]
• = β0 + E[(β1 − β̂1)]x̄ [since E(ū) = 0 by Assumptions SLR.2 and SLR.4]
• = β0 [we proved that E(β̂1) = β1, therefore E[(β1 − β̂1)] = 0]
• Both of these arguments are valid for any values of β0 and β1, and so we have established
unbiasedness.
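The "fixed in repeated samples" setting of the proof can be mimicked by Monte Carlo: draw x once, then redraw only the errors many times and average the estimates. The true parameters, sample size, and number of replications below are my illustrative choices.

```python
import numpy as np

# Hypothetical population (my choice): y = 1.0 + 0.5 x + u
rng = np.random.default_rng(3)
beta0, beta1 = 1.0, 0.5
n, reps = 50, 5000

x = rng.normal(5, 2, size=n)       # x drawn once and held fixed across samples
d = x - x.mean()
sst_x = (d ** 2).sum()

b0_hats, b1_hats = [], []
for _ in range(reps):
    u = rng.normal(0, 1, size=n)   # only the errors are redrawn each replication
    y = beta0 + beta1 * x + u
    b1 = (y * d).sum() / sst_x     # OLS slope in this sample
    b0 = y.mean() - b1 * x.mean()  # OLS intercept in this sample
    b1_hats.append(b1)
    b0_hats.append(b0)

mean_b0, mean_b1 = np.mean(b0_hats), np.mean(b1_hats)
# Averages across replications sit close to (beta0, beta1), as the theorem predicts.
```

Individual estimates vary from sample to sample (that is the sampling distribution noted earlier), but their average across replications is close to the true values, which is what unbiasedness asserts.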

Reference

• Wooldridge, Jeffrey M. (2016), Introductory Econometrics: A Modern Approach,
Cengage Learning.
