
Motivation · Consistency · Asymptotic Normality · Inference · Wald test · LM test · Efficiency · Summary
Multiple Regression Model: Asymptotic Inference
Wooldridge (Chapter 5)

Dr. Rachida Ouysse
School of Economics
UNSW

Dr. Rachida Ouysse

ie-Slides06


Lecture Plan

- Why large-sample properties (asymptotics)?
- Consistency of the OLS estimators
- Asymptotic normality of the OLS estimators


What we need for inference

- We need the sampling distribution of the OLS estimators.
  a) MLR1-4 imply the OLS estimators are unbiased.
  b) MLR1-6 (CLM) imply the OLS estimators are normally distributed.
  c) Normality leads to the exact distributions of the t-stat and the F-stat, which are the basis for inference.
- MLR6 (u iid Normal) is often too strong an assumption in practice.
  - Without MLR6, the results in b) and c) no longer hold exactly.
  - But they hold approximately in large samples.
  - Inference will then be based on large-sample approximations.

Asymptotic (large-sample) analysis

- Reluctant to assume MLR6, we proceed as follows:
  - find the asymptotic distribution of the estimators (the sampling distribution as n goes to infinity);
  - use the asymptotic distribution to approximate the sampling distribution of the estimators.
- The strategy will work if the asymptotic distribution is available (usually true) and the sample size n is large.
- The strategy does work for the OLS estimators under MLR1-5.

Asymptotic analysis: Consistency

- Let $\hat\beta_j$ be an estimator for parameter $\beta_j$, from a sample of size $n$.
- $\hat\beta_j$ is consistent for $\beta_j$ if and only if $P\big(|\hat\beta_j - \beta_j| > \varepsilon\big)$ tends to zero as $n$ goes to infinity, for every $\varepsilon > 0$ (see also Appendix C).
- Consistency comes from the LLN (law of large numbers).

Asymptotic analysis: Consistency


Definition (Law of Large Numbers)
For a random sample, the probability that the sample mean differs from the population mean tends to zero as $n$ goes to infinity. In other terms, consider a random variable $Z$ with mean $E(Z_i)$ and variance $\sigma_Z^2 = \mathrm{Var}(Z_i)$. For an iid sample of size $n$, $Z_i$, $i = 1, \dots, n$, let $\bar Z = \frac{1}{n}\sum_{i=1}^{n} Z_i$ be the sample mean. The LLN states that:

$$P\big(|\bar Z - E(Z_i)| < \varepsilon\big) \to 1 \text{ as } n \to \infty, \text{ for every } \varepsilon > 0.$$

We can express this result as follows:

$$\operatorname{plim} \bar Z = E(Z_i)$$

The result above is called convergence in probability: the sample mean $\bar Z$ converges in probability to the population mean $E(Z_i)$. Note that the LLN is the key statistical result used to establish the consistency of OLS.
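The LLN can be illustrated by simulation. A minimal sketch under an assumed setup (iid Uniform(0,1) draws, so $E(Z_i) = 0.5$; the function name and tolerance are illustrative): the Monte Carlo estimate of $P(|\bar Z - E(Z_i)| > \varepsilon)$ shrinks toward zero as $n$ grows.

```python
import random

random.seed(42)

def deviation_prob(n, eps=0.1, reps=1000, mu=0.5):
    """Monte Carlo estimate of P(|Zbar - mu| > eps) for iid Uniform(0,1) draws."""
    hits = 0
    for _ in range(reps):
        zbar = sum(random.random() for _ in range(n)) / n
        if abs(zbar - mu) > eps:
            hits += 1
    return hits / reps

# The deviation probability falls toward zero as n grows: convergence in probability.
for n in (10, 100, 1000):
    print(n, deviation_prob(n))
```

For n = 10 the probability is sizeable; by n = 1000 it is essentially zero, which is exactly the plim statement above.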

Asymptotic analysis: Consistency


Theorem (5.1 Consistency of OLS)
Under MLR1 to MLR4, the OLS estimator $\hat\beta_j$ is consistent for $\beta_j$, for all $j = 0, 1, \dots, k$.

- In fact, consistency holds under an assumption weaker than MLR4 (ZCM):
  - MLR4' (zero mean and zero correlation): $E(u_i) = 0$, and $\mathrm{cov}(u_i, x_{ij}) = 0$, for $j = 1, \dots, k$.
  - MLR4 implies MLR4'. Not-MLR4' implies Not-MLR4.
  - MLR4' is not sufficient for unbiasedness, nor for properly defining the PRF. (Think this one through!)


Asymptotic analysis: Consistency


In the simple regression model,

$$\hat\beta_1 = \beta_1 + \frac{\sum_{i=1}^{n}(u_i - \bar u)(x_i - \bar x)}{\sum_{i=1}^{n}(x_i - \bar x)^2} \tag{1}$$

Multiply both numerator and denominator by $1/n$. The LLN indicates that

$$\operatorname{plim} \hat\beta_1 = \beta_1 + \frac{\operatorname{plim} \sum_{i=1}^{n}(u_i - \bar u)(x_i - \bar x)/n}{\operatorname{plim} \sum_{i=1}^{n}(x_i - \bar x)^2/n} \tag{2}$$

$$\operatorname{plim} \hat\beta_1 = \beta_1 + \frac{\mathrm{Cov}(u_i, x_i)}{\mathrm{Var}(x_i)} \text{ as } n \to \infty. \tag{3}$$

When $\mathrm{Cov}(u_i, x_i) = 0$, we have consistency: $\operatorname{plim} \hat\beta_1 = \beta_1$. Inconsistency results from $\mathrm{Cov}(u, x_1) \neq 0$.
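A simulation sketch of this result, under an assumed data-generating process $y = 1 + 2x + u$ (the `rho` parameter is hypothetical and controls $\mathrm{Cov}(u,x)$): with $\mathrm{Cov}(u,x) = 0$ the slope estimate approaches $\beta_1 = 2$, while with $\mathrm{Cov}(u,x) \neq 0$ it converges to $\beta_1 + \mathrm{Cov}(u,x)/\mathrm{Var}(x)$ instead.

```python
import random

random.seed(0)

def ols_slope(n, rho=0.0, beta1=2.0):
    """Simulate y = 1 + beta1*x + u and return the OLS slope estimate.

    rho controls Cov(u, x): rho = 0 satisfies MLR4', rho != 0 violates it,
    and the plim of the slope becomes beta1 + rho (since Var(x) = 1 here).
    """
    x = [random.gauss(0, 1) for _ in range(n)]
    u = [rho * xi + random.gauss(0, 1) for xi in x]
    y = [1.0 + beta1 * xi + ui for xi, ui in zip(x, u)]
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

# Consistent when Cov(u, x) = 0; inconsistent (plim = 2.5) when rho = 0.5.
for n in (50, 5000, 200000):
    print(n, round(ols_slope(n), 3), round(ols_slope(n, rho=0.5), 3))
```

As n grows, the first column of estimates settles at 2 and the second at 2.5, matching equation (3).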

Asymptotic analysis: Normality

- Consistency is not enough for statistical inference: we need the sampling distribution of the OLS estimators.
- How do we do inference without MLR6?
- The central limit theorem (CLT) provides an answer: when n is large, the OLS estimators are approximately normally distributed.
- We can therefore do inference in large samples.

Asymptotic analysis: Normality


Theorem (5.2 Asymptotic normality of OLS)
Under MLR1 to MLR5, the OLS estimator $\hat\beta_j$ is consistent for $\beta_j$, for all $j = 0, 1, \dots, k$. Moreover:

(i) $\sqrt{n}\big(\hat\beta_j - \beta_j\big)$ is approximately normally distributed with zero mean and finite positive variance $\sigma^2/a_j^2$, where $a_j^2 = \operatorname{plim} \frac{1}{n}\sum_{i=1}^{n} \hat r_{ij}^2$.

(ii) $\hat\sigma^2$ is a consistent estimator for $\sigma^2 = \mathrm{Var}(u_i)$.

(iii) $\big(\hat\beta_j - \beta_j\big)/se(\hat\beta_j)$ is approximately Normal(0, 1), where $se(\hat\beta_j) = \sqrt{\widehat{\mathrm{Var}}(\hat\beta_j)}$ and

$$\widehat{\mathrm{Var}}(\hat\beta_j) = \frac{\hat\sigma^2}{SST_j(1 - R_j^2)}.$$
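A simulation sketch of part (iii), under an assumed non-normal error distribution (mean-zero exponential, so MLR6 fails): the standardized slope still lands inside ±1.96 about 95% of the time for moderate n.

```python
import math
import random

random.seed(1)

def t_stat(n, beta1=2.0):
    """One draw of (b1 - beta1)/se(b1) in y = 1 + beta1*x + u with skewed errors."""
    x = [random.gauss(0, 1) for _ in range(n)]
    u = [random.expovariate(1.0) - 1.0 for _ in range(n)]  # mean 0, non-normal
    y = [1.0 + beta1 * xi + ui for xi, ui in zip(x, u)]
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    ssr = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt((ssr / (n - 2)) / sxx)  # sigma2_hat / SST_x
    return (b1 - beta1) / se

reps = 2000
inside = sum(abs(t_stat(200)) <= 1.96 for _ in range(reps)) / reps
print("fraction inside +/-1.96:", inside)  # close to 0.95 despite non-normal u
```

Even though the errors are skewed, the CLT makes the standardized estimator approximately standard normal, which is what licenses large-sample inference.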

Asymptotic analysis: Inference

- Under CLM (MLR1-6), we did inference with $\big(\hat\beta_j - \beta_j\big)/se(\hat\beta_j) \sim t_{n-k-1}$, where $t_{n-k-1}$ is approximately Normal(0, 1) for large $n - k - 1$.
- With only MLR1-5, Theorem 5.2 indicates $\big(\hat\beta_j - \beta_j\big)/se(\hat\beta_j) \overset{a}{\sim} \text{Normal}(0, 1)$.
- For inference with large samples, we use normal critical values, in line with what we did before.
- Little has changed for single-parameter inference: e.g., the approximate 95% confidence interval is simply $CI = \hat\beta_j \pm 1.96\, se(\hat\beta_j)$.
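The interval formula can be sketched with hypothetical numbers (the estimate 0.92 and standard error 0.05 below are illustrative, not from the slides):

```python
# Hypothetical estimate and standard error, for illustration only.
b_j, se_bj = 0.92, 0.05
z = 1.96  # standard-normal critical value for a 95% interval

lower, upper = b_j - z * se_bj, b_j + z * se_bj
print(f"approximate 95% CI: [{lower:.3f}, {upper:.3f}]")  # [0.822, 1.018]
```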

Asymptotic analysis: Inference

- Theorem 5.2 also implies that the F-stat under $H_0$ has an approximate F-distribution in large samples. Hence joint hypothesis testing can be carried out as before. (The Wald statistic has a chi-squared distribution under $H_0$.)
- An interesting observation: the magnitude of $se(\hat\beta_j)$ is expected to shrink at the rate $(1/n)^{0.5}$. See Theorem 5.2(i): $\hat\beta_j \sim \text{Normal}(\beta_j, [\text{finite variance}]/n)$.
- E.g., suppose the standard error of $\hat\beta_j$ is .05 for $n = 100$. It is expected to shrink to about .01 when $n = 2500$.
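Because $se(\hat\beta_j) \propto 1/\sqrt{n}$, the slide's arithmetic can be checked directly (the helper name is illustrative):

```python
import math

def projected_se(se_old, n_old, n_new):
    """Project a standard error to a new sample size using the 1/sqrt(n) rate."""
    return se_old * math.sqrt(n_old / n_new)

# .05 at n = 100 shrinks by the factor sqrt(100/2500) = 1/5: about .01 at n = 2500.
print(round(projected_se(0.05, 100, 2500), 4))  # 0.01
```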

Large sample test: Wald statistic

- Consider $q$ restrictions and the joint hypothesis $H_0: \beta_{k-q+1} = 0, \dots, \beta_k = 0$.
- The unrestricted model: $y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$
- The restricted model: $y = \beta_0 + \beta_1 x_1 + \dots + \beta_{k-q} x_{k-q} + u^{(r)}$
- Test statistic:

$$W = \frac{SSR_r - SSR_{ur}}{SSR_{ur}/(n - k - 1)} \overset{a}{\sim} \chi_q^2 \text{ under } H_0. \tag{4}$$

- Decision rule: reject $H_0$ if $W > c$ (the $\chi_q^2$ critical value).
- Clearly, $W = qF$. For large $n - k - 1$, $F_{q,n-k-1} \approx (1/q)\chi_q^2$.
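A numerical sketch of the decision rule, using hypothetical SSR values ($q = 2$, $n = 500$, $k = 5$; the 5% chi-squared critical value with 2 df is 5.991):

```python
# Hypothetical sums of squared residuals, for illustration only.
ssr_r, ssr_ur = 120.0, 110.0
n, k, q = 500, 5, 2

w = (ssr_r - ssr_ur) / (ssr_ur / (n - k - 1))        # Wald statistic, eq. (4)
f = ((ssr_r - ssr_ur) / q) / (ssr_ur / (n - k - 1))  # the usual F statistic
chi2_crit = 5.991  # 5% critical value of chi-squared with q = 2 df

print(f"W = {w:.3f} = qF = {q * f:.3f}; reject H0: {w > chi2_crit}")
```

Note that W and qF agree exactly, as the slide's identity $W = qF$ requires.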
Large sample test: LM statistic

The LM test is mainly used for other purposes (e.g. heteroskedasticity).
We introduce it here as a test of linear restrictions.
a) Run the restricted regression and save the residuals, û^(r).
b) Regress the residuals û^(r) on all x variables (known as the auxiliary
regression) and save its R-squared, R²_{û^(r)}.
c) LM = n·R²_{û^(r)} = (sample size)×(R² of auxiliary regression).
Approximately, it has a χ²_q distribution under H0.
d) Reject the restrictions (i.e. H0) if LM > c (the χ²_q critical
value), where q is the number of restrictions.
LM is based on the restricted model, while Wald is based on
the unrestricted model.
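Steps a)–c) can be sketched in Python on simulated data (the data-generating process and variable names are hypothetical): estimate only the restricted model, regress its residuals on all regressors, and form LM = n·R². Since the true coefficients violate H0 here, LM should exceed the χ²_q critical value.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, q = 400, 2
x1, x2, x3 = rng.normal(size=(3, n))
# H0: beta2 = beta3 = 0 is false in the true model
y = 1.0 + 0.8 * x1 + 0.5 * x2 - 0.5 * x3 + rng.normal(size=n)

def resid(y, X):
    """OLS residuals from regressing y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# a) restricted regression: drop x2, x3
u_r = resid(y, np.column_stack([np.ones(n), x1]))
# b) auxiliary regression of the restricted residuals on ALL x variables
X_all = np.column_stack([np.ones(n), x1, x2, x3])
e = resid(u_r, X_all)
R2 = 1.0 - (e @ e) / ((u_r - u_r.mean()) @ (u_r - u_r.mean()))
# c) LM = n * R2, approximately chi2_q under H0
LM = n * R2
print(f"LM = {LM:.2f}, crit = {chi2.ppf(0.95, q):.2f}")
```

Note that only restricted fits are needed: the unrestricted model is never estimated, which is the practical appeal of the LM form.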

LM statistic: Example
Take the example of the housing price regression model:

log(price) = β0 + β1 log(assess) + β2 log(lotsize) +
             β3 log(sqrft) + β4 bdrms + u

If the assessment is a rational valuation,
H0: β1 = 1, β2 = 0, β3 = 0, β4 = 0 should hold.

Implementing the LM test:
a) Regress [log(price) − log(assess)] on a constant and save the
residuals, û^(r).
b) Regress the residuals û^(r) on
{log(assess), log(lotsize), log(sqrft), bdrms} and save the
R-squared, R²_{û^(r)}.
c) As in c) on the previous page: LM = n·R²_{û^(r)} is approximately
distributed as χ²₄.
d) As in d) on the previous page.
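The steps above can be sketched in Python. The data below are simulated, not the textbook housing sample, and are generated so that H0 holds by construction (price equals assessment up to noise), so LM should usually be small.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 300
# hypothetical simulated data in which H0 holds: log(price) = log(assess) + noise
lassess = rng.normal(11.0, 0.3, n)
llotsize = rng.normal(9.0, 0.5, n)
lsqrft = rng.normal(7.5, 0.3, n)
bdrms = rng.integers(2, 6, n).astype(float)
lprice = lassess + rng.normal(0.0, 0.1, n)

def resid(y, X):
    """OLS residuals from regressing y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# a) regress log(price) - log(assess) on a constant only
u_r = resid(lprice - lassess, np.ones((n, 1)))
# b) auxiliary regression on all four regressors (plus constant)
X = np.column_stack([np.ones(n), lassess, llotsize, lsqrft, bdrms])
e = resid(u_r, X)
R2 = 1.0 - (e @ e) / ((u_r - u_r.mean()) @ (u_r - u_r.mean()))
# c)-d) LM = n * R2, compared with the chi2_4 critical value
LM = n * R2
print(f"LM = {LM:.2f}, crit = {chi2.ppf(0.95, 4):.2f}")
```

Subtracting log(assess) before the restricted regression is what imposes β1 = 1; the remaining restrictions are imposed by omitting the other regressors.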
Efficiency of OLS

Under MLR1-5, there are other estimators of the βs which
can also be unbiased, consistent and asymptotically normal;
see (5.16)-(5.19).
In what sense is OLS better than the others?
The comparison is made on the basis of asymptotic variance.

Theorem (5.3, asymptotic efficiency of OLS)
Under MLR1-MLR5, the OLS estimators have the smallest
asymptotic variances among the class of linear estimators.

Summary

Theoretical properties of OLS under MLR1-5: consistency,
asymptotic normality, asymptotic efficiency.
The asymptotic normality of OLS implies that the t-stat and
F-stat can be used as we did in ie-Slides04/ie-Slides05, for
large samples. (pay attention to critical values)
We can do without MLR6: inference is carried out using
large-sample approximations.
Wald and LM tests. (pay attention to critical values)
