
y = 0 + 1x + u

1
• In the simple linear regression model, where y = β₀ + β₁x + u, we typically refer to y as the
  – Dependent Variable, or
  – Left-Hand Side Variable, or
  – Explained Variable, or
  – Regressand

2
• In the simple linear regression of y on x, we typically refer to x as the
  – Independent Variable, or
  – Right-Hand Side Variable, or
  – Explanatory Variable, or
  – Regressor, or
  – Covariate, or
  – Control Variable

3
• The average value of u, the error term, in the population is 0. That is,
  E(u) = 0
• This is not a restrictive assumption, since we can always use β₀ to normalize E(u) to 0

4
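Why this normalization is harmless: if the error had some nonzero mean, the intercept would simply absorb it. A one-line rearrangement (standard algebra, not from the slides) makes this explicit:

Suppose E(u) = \alpha \neq 0. Then
y = \beta_0 + \beta_1 x + u = (\beta_0 + \alpha) + \beta_1 x + (u - \alpha),
where the redefined error u - \alpha satisfies E(u - \alpha) = 0.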
• We need to make a crucial assumption about how u and x are related
• We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is, that
  E(u|x) = E(u) = 0, which implies
  E(y|x) = β₀ + β₁x

5
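The implication is immediate from taking expectations of the model conditional on x; a one-line check:

E(y \mid x) = \beta_0 + \beta_1 x + E(u \mid x) = \beta_0 + \beta_1 x.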
E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x)

[Figure: the line E(y|x) = β₀ + β₁x with the conditional density f(y) of y drawn at x₁ and x₂, each centered on the line.]
6
• Basic idea of regression is to estimate the population parameters from a sample
• Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population
• For each observation in this sample, it will be the case that
  yi = β₀ + β₁xi + ui

7
Population regression line, sample data points and the associated error terms

[Figure: sample points (x₁, y₁), …, (x₄, y₄) scattered around the population regression line E(y|x) = β₀ + β₁x, with the vertical distances u₁, …, u₄ marking each error term.]
8
• To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that
  Cov(x, u) = E(xu) = 0
9
• The slope estimate is the sample covariance between x and y divided by the sample variance of x
• If x and y are positively correlated, the slope will be positive
• If x and y are negatively correlated, the slope will be negative
• Only need x to vary in our sample

10
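A minimal numerical sketch of that recipe on simulated data (the variable names and parameter values are illustrative, not from the slides):

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(5.0, 2.0, n)        # regressor: needs some variation in the sample
u = rng.normal(0.0, 1.0, n)        # error term with E(u) = 0
y = 1.0 + 0.5 * x + u              # population model with beta0 = 1, beta1 = 0.5

# slope: sample covariance of (x, y) divided by sample variance of x
beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
# intercept: forces the fitted line through the point of sample means
beta0_hat = y.mean() - beta1_hat * x.mean()

print(beta0_hat, beta1_hat)        # close to 1 and 0.5 for this draw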
• Intuitively, OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares
• The residual, û, is an estimate of the error term, u, and is the difference between the fitted line (sample regression function) and the sample point

11
Sample regression line, sample data points and the associated estimated error terms

[Figure: the fitted line ŷ = β̂₀ + β̂₁x with sample points (x₁, y₁), …, (x₄, y₄) and the vertical distances û₁, …, û₄ marking each residual.]
12
• Given the intuitive idea of fitting a line, we can set up a formal minimization problem
• That is, we want to choose our parameters such that we minimize the following:

  \sum_{i=1}^{n} \hat{u}_i^2 = \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right)^2

13
• If one uses calculus to solve the minimization problem for the two parameters you obtain the following first order conditions, which are the same as we obtained before, multiplied by n

  \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) = 0

  \sum_{i=1}^{n} x_i \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) = 0

14
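Solving the two first order conditions gives the closed-form estimators described verbally earlier (slope = sample covariance over sample variance); a compact derivation:

From the first condition, \bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}, so
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}. Substituting this into the second
condition and rearranging yields

\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}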
• The sum of the OLS residuals is zero
• Thus, the sample average of the OLS residuals is zero as well
• The sample covariance between the regressors and the OLS residuals is zero
• The OLS regression line always goes through the mean of the sample

15
\sum_{i=1}^{n} \hat{u}_i = 0 \quad \text{and thus,} \quad \frac{1}{n} \sum_{i=1}^{n} \hat{u}_i = 0

\sum_{i=1}^{n} x_i \hat{u}_i = 0

\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}
16
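A quick numerical check of these three algebraic properties on simulated data (a sketch; names and values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(5.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
beta0_hat = y.mean() - beta1_hat * x.mean()
u_hat = y - (beta0_hat + beta1_hat * x)

print(u_hat.sum())                                     # ~0: residuals sum (and average) to zero
print((x * u_hat).sum())                               # ~0: zero sample covariance with the regressor
print(y.mean() - (beta0_hat + beta1_hat * x.mean()))   # ~0: line passes through the sample means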
We can think of each observation as being made up of an explained part and an unexplained part, y_i = \hat{y}_i + \hat{u}_i. We then define the following:

• \sum (y_i - \bar{y})^2 is the total sum of squares (SST)
• \sum (\hat{y}_i - \bar{y})^2 is the explained sum of squares (SSE)
• \sum \hat{u}_i^2 is the residual sum of squares (SSR)

Then SST = SSE + SSR


17
\sum (y_i - \bar{y})^2 = \sum \left[ (y_i - \hat{y}_i) + (\hat{y}_i - \bar{y}) \right]^2
= \sum \left[ \hat{u}_i + (\hat{y}_i - \bar{y}) \right]^2
= \sum \hat{u}_i^2 + 2 \sum \hat{u}_i (\hat{y}_i - \bar{y}) + \sum (\hat{y}_i - \bar{y})^2
= SSR + 2 \sum \hat{u}_i (\hat{y}_i - \bar{y}) + SSE

and we know that \sum \hat{u}_i (\hat{y}_i - \bar{y}) = 0
18
• How do we think about how well our sample regression line fits our sample data?
• We can compute the fraction of the total sum of squares (SST) that is explained by the model, and call this the R-squared of the regression
• R² = SSE/SST = 1 – SSR/SST

19
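A sketch of the fit measures on simulated data, verifying both the decomposition and the two equivalent R-squared formulas (names are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(5.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
beta0_hat = y.mean() - beta1_hat * x.mean()
y_hat = beta0_hat + beta1_hat * x
u_hat = y - y_hat

sst = ((y - y.mean()) ** 2).sum()        # total sum of squares
sse = ((y_hat - y.mean()) ** 2).sum()    # explained sum of squares
ssr = (u_hat ** 2).sum()                 # residual sum of squares

print(sst, sse + ssr)                    # SST = SSE + SSR, up to rounding
print(sse / sst, 1 - ssr / sst)          # the two R-squared formulas agree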
• Now that we’ve derived the formula for calculating the OLS estimates of our parameters, you’ll be happy to know you don’t have to compute them by hand
• Regressions in Stata are very simple: to run the regression of y on x, just type
  reg y x

20
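Outside Stata, the same regression is a one-liner in most packages; for example, a sketch using Python's statsmodels (assuming it is installed; not part of the original slides):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, 100)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, 100)

X = sm.add_constant(x)            # add the intercept column
results = sm.OLS(y, X).fit()      # OLS regression of y on x
print(results.params)             # [beta0_hat, beta1_hat]
print(results.rsquared)           # R-squared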
• Assume the population model is linear in parameters as y = β₀ + β₁x + u
• Assume we can use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model. Thus we can write the sample model yi = β₀ + β₁xi + ui
• Assume E(u|x) = 0 and thus E(ui|xi) = 0
• Assume there is variation in the xi

21
• In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameter
• Start with a simple rewrite of the formula as

  \hat{\beta}_1 = \frac{\sum (x_i - \bar{x}) y_i}{s_x^2}, \quad \text{where} \quad s_x^2 = \sum (x_i - \bar{x})^2
22
\sum (x_i - \bar{x}) y_i = \sum (x_i - \bar{x}) (\beta_0 + \beta_1 x_i + u_i)
= \sum (x_i - \bar{x}) \beta_0 + \sum (x_i - \bar{x}) \beta_1 x_i + \sum (x_i - \bar{x}) u_i
= \beta_0 \sum (x_i - \bar{x}) + \beta_1 \sum (x_i - \bar{x}) x_i + \sum (x_i - \bar{x}) u_i
23
\sum (x_i - \bar{x}) = 0, \qquad \sum (x_i - \bar{x}) x_i = \sum (x_i - \bar{x})^2

so, the numerator can be rewritten as \beta_1 s_x^2 + \sum (x_i - \bar{x}) u_i, and thus

\hat{\beta}_1 = \beta_1 + \frac{\sum (x_i - \bar{x}) u_i}{s_x^2}
24
let d_i = (x_i - \bar{x}), so that

\hat{\beta}_1 = \beta_1 + \left( \frac{1}{s_x^2} \right) \sum d_i u_i, \quad \text{then}

E(\hat{\beta}_1) = \beta_1 + \left( \frac{1}{s_x^2} \right) \sum d_i E(u_i) = \beta_1
25
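A small Monte Carlo sketch of what unbiasedness means: averaging the slope estimate over many simulated samples recovers the true slope (parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 0.5
n, reps = 100, 5000

slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(5.0, 2.0, n)
    u = rng.normal(0.0, 1.0, n)          # E(u|x) = 0 holds by construction
    y = beta0 + beta1 * x + u
    slopes[r] = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(slopes.mean())   # close to 0.5: the sampling distribution is centered on beta1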
• The OLS estimates of β₁ and β₀ are unbiased
• Proof of unbiasedness depends on our 4 assumptions – if any assumption fails, then OLS is not necessarily unbiased
• Remember unbiasedness is a description of the estimator – in a given sample we may be “near” or “far” from the true parameter

26
• Now we know that the sampling distribution of our estimate is centered around the true parameter
• Want to think about how spread out this distribution is
• Much easier to think about this variance under an additional assumption, so
• Assume Var(u|x) = σ² (Homoskedasticity)

27
• Var(u|x) = E(u²|x) - [E(u|x)]²
• E(u|x) = 0, so σ² = E(u²|x) = E(u²) = Var(u)
• Thus σ² is also the unconditional variance, called the error variance
• σ, the square root of the error variance, is called the standard deviation of the error
• Can say: E(y|x) = β₀ + β₁x and Var(y|x) = σ²

28
Homoskedastic Case

[Figure: conditional densities f(y|x) with the same spread at x₁ and x₂, each centered on the line E(y|x) = β₀ + β₁x.]
29
Heteroskedastic Case

[Figure: conditional densities f(y|x) centered on the line E(y|x) = β₀ + β₁x but with spread that changes with x, shown at x₁, x₂ and x₃.]
30
\mathrm{Var}(\hat{\beta}_1) = \mathrm{Var}\left( \beta_1 + \left( \frac{1}{s_x^2} \right) \sum d_i u_i \right)
= \left( \frac{1}{s_x^2} \right)^2 \mathrm{Var}\left( \sum d_i u_i \right)
= \left( \frac{1}{s_x^2} \right)^2 \sum d_i^2 \, \mathrm{Var}(u_i)
= \left( \frac{1}{s_x^2} \right)^2 \sum d_i^2 \, \sigma^2
= \sigma^2 \left( \frac{1}{s_x^2} \right)^2 \sum d_i^2
= \sigma^2 \left( \frac{1}{s_x^2} \right)^2 s_x^2
= \frac{\sigma^2}{s_x^2} = \mathrm{Var}(\hat{\beta}_1)
31
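A Monte Carlo sketch of the variance formula: holding x fixed across replications (the formula conditions on x), the simulated spread of the slope estimates should match σ²/s_x² (illustrative values):

import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 0.5, 1.0
n, reps = 100, 5000

x = rng.normal(5.0, 2.0, n)              # x is drawn once and held fixed
sx2 = ((x - x.mean()) ** 2).sum()        # s_x^2 = sum of squared deviations of x

slopes = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, sigma, n)        # homoskedastic errors, Var(u|x) = sigma^2
    y = beta0 + beta1 * x + u
    slopes[r] = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(slopes.var())                      # simulated sampling variance of beta1_hat
print(sigma ** 2 / sx2)                  # theoretical Var(beta1_hat) = sigma^2 / s_x^2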
• The larger the error variance, σ², the larger the variance of the slope estimate
• The larger the variability in the xi, the smaller the variance of the slope estimate
• As a result, a larger sample size should decrease the variance of the slope estimate
• A problem is that the error variance, σ², is unknown

32
• We don’t know what the error variance, σ², is, because we don’t observe the errors, ui
• What we observe are the residuals, ûi
• We can use the residuals to form an estimate of the error variance

33
\hat{u}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i
= (\beta_0 + \beta_1 x_i + u_i) - \hat{\beta}_0 - \hat{\beta}_1 x_i
= u_i - (\hat{\beta}_0 - \beta_0) - (\hat{\beta}_1 - \beta_1) x_i

Then, an unbiased estimator of \sigma^2 is

\hat{\sigma}^2 = \frac{1}{n-2} \sum \hat{u}_i^2 = SSR / (n - 2)
34
\hat{\sigma} = \sqrt{\hat{\sigma}^2} = \text{Standard error of the regression}

recall that \mathrm{sd}(\hat{\beta}_1) = \sigma / s_x

if we substitute \hat{\sigma} for \sigma then we have the standard error of \hat{\beta}_1,

\mathrm{se}(\hat{\beta}_1) = \hat{\sigma} \Big/ \left( \sum (x_i - \bar{x})^2 \right)^{1/2}
35
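Putting the last two slides together, a sketch of the error-variance estimate and the slope standard error on simulated data (illustrative names and values):

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(5.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
beta0_hat = y.mean() - beta1_hat * x.mean()
u_hat = y - beta0_hat - beta1_hat * x

sigma2_hat = (u_hat ** 2).sum() / (n - 2)                    # SSR / (n - 2)
sigma_hat = np.sqrt(sigma2_hat)                              # standard error of the regression
se_beta1 = sigma_hat / np.sqrt(((x - x.mean()) ** 2).sum())  # se(beta1_hat)

print(sigma_hat, se_beta1)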
