Chapter 1. Estimation Methods
Properties of an estimator
• Assume an unknown parameter θ, for example a mean, a variance, or the relationship between two variables.
• An estimator θ̂ is a given strategy or method to estimate θ.
• Using the same method but different samples we may get different estimates of θ; hence θ̂ can be treated as a random variable with properties such as a mean and a variance.
[Figure: an example of the distribution of θ̂, a normal distribution centered at E[θ̂]]
Properties of an estimator
• Desired properties of an estimator:
  Unbiasedness: E[θ̂] = θ
  Efficiency: θ̂ has minimum variance among unbiased estimators.
  Consistency: lim_{n→∞} P(|θ̂_n − θ| < δ) = 1 for all δ > 0
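These properties can be illustrated with a small simulation (a sketch of my own, not from the slides), using the sample mean as an estimator of a population mean:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0  # the true, "unknown" parameter theta

# Draw many samples of the same size; each sample gives one estimate,
# so the estimator itself behaves like a random variable.
estimates = np.array([rng.normal(mu, 1.0, size=50).mean() for _ in range(10_000)])

# Unbiasedness: the average of the estimates is close to the true mean.
print(abs(estimates.mean() - mu) < 0.05)  # True

# Consistency: with a much larger sample the estimate concentrates around mu.
big_sample_estimate = rng.normal(mu, 1.0, size=1_000_000).mean()
print(abs(big_sample_estimate - mu) < 0.01)  # True
```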
Estimation methods: Ordinary Least Squares (OLS)
Assume a linear relationship between a variable y and K explanatory variables:
yᵢ = β₁x₁ᵢ + β₂x₂ᵢ + … + β_K x_Kᵢ + εᵢ,   i = 1, …, N
In matrix notation, y = Xβ + ε, where y (N×1) stacks the observations yᵢ, X (N×K) stacks the rows (x₁ᵢ, x₂ᵢ, …, x_Kᵢ), β (K×1) collects the coefficients β₁, …, β_K, and ε (N×1) collects the errors.
[Figure: scatter of observations (xᵢ, yᵢ) around a fitted line; β₁ + β₂x is the equation of a line with intercept β₁ and slope β₂, and εᵢ is the distance between each observation and the line]
Find β₁ and β₂ that minimize the total distance between the line and all the observations.
Estimation methods: Ordinary Least Squares (OLS)
Estimation: find b that minimizes the sum of the squared errors, ε′ε = (y − Xβ)′(y − Xβ).
First order condition: β̂ = (X′X)⁻¹X′y
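As a quick numerical check (a minimal sketch; the data and variable names are invented for illustration), the normal-equation solution b = (X′X)⁻¹X′y matches NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 3
# Design matrix: an intercept column plus two random regressors
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=N)

# OLS estimator from the first order condition: b = (X'X)^{-1} X'y
# (solve is numerically safer than forming the inverse explicitly)
b = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer from NumPy's least-squares routine
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_lstsq))  # True
```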
Estimation methods: Ordinary Least Squares (OLS)
We also need the variance of the errors, var(ε) ≡ σ²I_N, which gives the variance of the estimator:
var(β̂) = σ²(X′X)⁻¹
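In practice σ² is unknown; one common choice (an assumption here, using a degrees-of-freedom correction) is to plug in the residual variance s² = ε̂′ε̂/(N − K). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 500, 2
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=N)  # true sigma^2 = 0.25

b = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b

# Estimate sigma^2 from the residuals (with a degrees-of-freedom correction)
s2 = resid @ resid / (N - K)

# var(beta_hat) = s^2 (X'X)^{-1}; standard errors are the sqrt of the diagonal
var_b = s2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(var_b))
print(se)
```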
Estimation methods: Ordinary Least Squares (OLS)
Assumptions:
1. Linear relationship.
2. E[ε] = 0.
3. Homoskedasticity and no autocorrelation: var(ε) = σ²I_N, where I_N is an identity matrix → efficiency.
4. X and ε are independent: cov(ε, x_k) = 0 for all k → unbiasedness.
5. Columns in X are linearly independent, so (X′X)⁻¹ exists.
6. Normal distribution: ε ~ N(0, σ²I_N).
t-statistic: t_k = β̂_k / se(β̂_k)
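The t-statistic β̂_k/se(β̂_k) can be computed directly from the pieces above; a sketch with simulated data (the true slope is 1.5, so its |t| should be very large):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000
x = rng.normal(size=N)
X = np.column_stack([np.ones(N), x])
y = 1.5 * x + rng.normal(size=N)  # true intercept 0, true slope 1.5

b = np.linalg.solve(X.T @ X, X.T @ y)               # OLS coefficients
resid = y - X @ b
s2 = resid @ resid / (N - 2)                        # estimated error variance
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))  # standard errors

t = b / se  # t-statistics for H0: beta_k = 0
print(abs(t[1]) > 10)  # True: the slope is clearly nonzero
```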
More generally, var(ε) = σ²Ω, where Ω is an N×N matrix:
• ω_ii ≠ 1 on the diagonal and ω_ij = 0 off the diagonal: heteroskedastic, no autocorrelation.
• ω_ii ≠ 1 on the diagonal and ω_ij ≠ 0 for j = i ∓ 1: heteroskedastic with 1st-order autocorrelation.
Generalized Least Squares: β̂_GLS = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y
Some definitions
Discrete distribution: a probability mass function f_X satisfies 0 ≤ f_X(xᵢ) ≤ 1 and Σᵢ f_X(xᵢ) = 1, i = 1, …, N.
[Figure: two bar charts of probabilities (0–20%) over the outcomes 1, …, 6]
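For instance (a trivial sketch, assuming a fair die), both conditions hold for the pmf f_X(xᵢ) = 1/6:

```python
import numpy as np

# pmf of a fair die: f_X(x_i) = 1/6 for each outcome x_i in {1, ..., 6}
pmf = np.full(6, 1 / 6)

in_range = bool(np.all((0 <= pmf) & (pmf <= 1)))  # 0 <= f_X(x_i) <= 1
sums_to_one = bool(np.isclose(pmf.sum(), 1.0))    # sum_i f_X(x_i) = 1
print(in_range, sums_to_one)  # True True
```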
Some definitions
Discrete vs. continuous distributions: the density function f_X(x) gives the relative likelihood for X to take a given value.
[Figure: left panel, a discrete distribution (bar chart over the outcomes 1–6); right panel, a continuous density over values from 1.00 to 6.00, with an interval from a to b marked]
Some definitions
[Figure: two panels of the same continuous density over values from 1.00 to 5.86; in the second panel the area under the curve up to a value x is marked]
Some definitions
L( ) = f x |
i i
Normal Distribution
𝐿 𝜇, 𝜎 = 𝑓 𝜀 |𝜇, 𝜎
= 2𝜋𝜎 𝑒
𝜇
= 2𝜋𝜎 𝑒
𝑁 𝜀 −𝜇
Taking log: 𝑙𝑛 𝐿 𝜇, 𝜎 =− 𝑙𝑛 2𝜋𝜎 −
2 2𝜎 28
Since 𝜇 = 0 and 𝑁 𝜀
𝜀 = 𝑦 − 𝜷′𝒙 𝑙𝑛 𝐿 𝜷, 𝜎 =− 𝑙𝑛 2𝜋𝜎 −
2 2𝜎
Estimation methods: Maximum Likelihood (ML)
In matrix form:
ln L(β, σ²) = −(N/2) ln(2πσ²) − (y − Xβ)′(y − Xβ) / (2σ²)
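Because maximizing ln L over β is the same as minimizing (y − Xβ)′(y − Xβ), the OLS coefficients also maximize this log-likelihood. A sketch with an invented helper (`log_likelihood` is my own name):

```python
import numpy as np

def log_likelihood(beta, sigma2, y, X):
    """Normal log-likelihood: -(N/2) ln(2 pi sigma^2) - (y - Xb)'(y - Xb) / (2 sigma^2)."""
    N = len(y)
    resid = y - X @ beta
    return -0.5 * N * np.log(2 * np.pi * sigma2) - resid @ resid / (2 * sigma2)

rng = np.random.default_rng(4)
N = 300
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, -2.0]) + rng.normal(size=N)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
ll_at_ols = log_likelihood(b_ols, 1.0, y, X)
ll_perturbed = log_likelihood(b_ols + 0.1, 1.0, y, X)
print(ll_at_ols > ll_perturbed)  # True: OLS maximizes the likelihood over beta
```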
Estimation methods: Maximum Likelihood (ML)
Asymptotically, θ̂ ~ N(θ, (1/N) I(θ)⁻¹), where I is the information matrix. It can be estimated:
i) based on the second derivatives: I_2D = −(1/N) Σᵢ ∂² ln Lᵢ / ∂θ ∂θ′
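A numerical sketch of the second-derivative estimate (helper names are my own), for the mean of a N(μ, 1), where the exact Fisher information is 1/σ² = 1:

```python
import numpy as np

def loglik_i(mu, x):
    # log-density of each observation under N(mu, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2

rng = np.random.default_rng(5)
x = rng.normal(2.0, 1.0, size=1_000)
mu_hat = x.mean()  # ML estimate of mu

# I_2D = -(1/N) sum_i d^2 ln L_i / d mu^2, via a central finite difference
h = 1e-4
d2 = (loglik_i(mu_hat + h, x) - 2 * loglik_i(mu_hat, x) + loglik_i(mu_hat - h, x)) / h**2
I_2D = -d2.mean()
print(round(I_2D, 3))  # close to 1.0, the exact information for N(mu, 1)
```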