Lectures 10-11
General framework
◮ We denote by $E_P$ or $E_{\theta,\tau}$ the expectation w.r.t. the c.d.f. $F_{\theta,\tau}$. Often, we drop the subscripts when there is no ambiguity.
◮ Denote θ0 and τ0 the “true parameters”. Keep in mind that θ0 and τ0 can be
anywhere in Θ and T , respectively.
Questions
M-estimators
Example: GMM estimator
$E_{\theta}(g(z_1, \theta)) = 0$
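A minimal numerical sketch of this moment condition (my own toy example, not from the lecture): for $g(z, \theta) = z - \theta$ the condition identifies the mean, and the GMM estimator minimizes the sample analogue of the quadratic form.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=1.0, size=5000)   # i.i.d. draws, true theta0 = 2

def gbar(theta):
    # Sample analogue of E(g(z1, theta)) for g(z, theta) = z - theta.
    return np.mean(z - theta)

def objective(theta, W=1.0):
    # GMM criterion gbar(theta)' W gbar(theta) (scalar case, m = p = 1).
    return gbar(theta) * W * gbar(theta)

# Minimize over a grid on the compact parameter set Theta = [0, 4].
grid = np.linspace(0.0, 4.0, 4001)
theta_hat = grid[int(np.argmin([objective(t) for t in grid]))]
```

In this just-identified scalar case the minimizer is the sample mean, so `theta_hat` lands on the grid point closest to `z.mean()`.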
A general consistency result
Theorem 1
Suppose Q : Θ → R is a nonrandom function so that
$\sup_{\theta \in \Theta} |Q_n(z_1, \ldots, z_n; \theta) - Q(\theta)| \xrightarrow{p} 0,$
A consequence
Lemma “GMMC”
then
$\sup_{\theta \in \Theta} |Q_n(z_1, \ldots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0$
for $Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta))$, the latter being continuous.
Theorem “ULLN”
then
$\sup_{\theta \in \Theta} \left| \frac{1}{n} \sum_{i=1}^{n} q(z_i, \theta) - E(q(z_1, \theta)) \right| \xrightarrow{p} 0,$
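A small simulation of this uniform convergence (illustrative only; the choice of q and of the distribution is mine, not the lecture's): for $z_1 \sim N(0,1)$ and $q(z, \theta) = (z - \theta)^2$ one has $E(q(z_1, \theta)) = \theta^2 + 1$, and the sup-deviation over a compact Θ shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(-2.0, 2.0, 401)         # grid over the compact set Theta = [-2, 2]

def sup_deviation(n):
    z = rng.normal(size=n)                   # z_i i.i.d. N(0, 1)
    m1, m2 = z.mean(), (z ** 2).mean()
    # q(z, theta) = (z - theta)^2, so the sample mean is m2 - 2*theta*m1 + theta^2,
    # while E(q(z1, theta)) = theta^2 + 1.
    sample = m2 - 2.0 * thetas * m1 + thetas ** 2
    population = thetas ** 2 + 1.0
    return float(np.max(np.abs(sample - population)))

devs = [sup_deviation(n) for n in (100, 10_000, 1_000_000)]
```

The sup-deviation here reduces to $|m_2 - 1| + 2\theta$-terms in the sample moments, so it inherits the ordinary LLN rate uniformly over the compact grid.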
Consistency of the GMM estimator
Theorem 5
In the GMM setting suppose that:
1. $W_n \xrightarrow{p} W$; W symmetric, positive definite, and non-random.
2. $E_{\theta_0}(g(z_1, \theta)) = 0$ if and only if θ0 = θ.
3. Θ is compact.
4. g(z, θ) is continuous in θ for every z ∈ Z, the latter being the range space of
z1 , and g(z1 , θ) is a random variable for every θ ∈ Θ.
5. $E_{\theta_0}(\sup_{\theta \in \Theta} \|g(z_1, \theta)\|) < \infty$.
Then, every sequence of minimizers θ̂n of the GMM objective function
$\left( \frac{1}{n} \sum_{i=1}^{n} g(z_i, \theta) \right)' W_n \left( \frac{1}{n} \sum_{i=1}^{n} g(z_i, \theta) \right)$
satisfies $\hat{\theta}_n \xrightarrow{p} \theta_0$.
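To illustrate the theorem (a sketch under my own choice of model, not part of the lecture): with $z_1 \sim N(\theta_0, 1)$ and the two moments $g(z, \theta) = (z - \theta,\; z^2 - \theta^2 - 1)$, the identification condition 2 holds, and minimizing the criterion over a compact Θ produces estimates that approach θ0 as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
theta0 = 1.5
grid = np.linspace(0.0, 3.0, 3001)           # compact parameter set Theta = [0, 3]

def gmm_estimate(z, W=np.eye(2)):
    m1, m2 = z.mean(), (z ** 2).mean()       # sample moments, computed once
    def crit(t):
        # Sample analogue of E(g(z1, t)) for the two moments above.
        gbar = np.array([m1 - t, m2 - t ** 2 - 1.0])
        return gbar @ W @ gbar               # (1/n sum g)' W (1/n sum g)
    return float(grid[int(np.argmin([crit(t) for t in grid]))])

# Estimates at increasing sample sizes; the last should be closest to theta0.
estimates = [gmm_estimate(rng.normal(theta0, 1.0, size=n)) for n in (200, 5_000, 200_000)]
```

The grid minimization stands in for a numerical optimizer; any consistent minimizer of the criterion would do for the theorem.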
Consistency of the MLE
Theorem 6
Let zi be i.i.d. with density f(z, θ) for θ ∈ Θ ⊆ R^p and z ∈ Z ⊆ R^q, where f(z, θ) > 0 for every z ∈ Z and every θ ∈ Θ, and $\int_Z f(y, \theta)\, dy = 1$ for all θ ∈ Θ.
Assume that:
1. Θ is compact;
2. the map θ → f (z, θ) is continuous on Θ for every z ∈ Z, and f (z1 , θ) is a
random variable for every θ ∈ Θ;
3. E(supθ∈Θ | log(f (z1 , θ))|) < ∞;
4. $E\left(\log\left(f(z_1, \theta_0)/f(z_1, \theta)\right)\right) > 0$ for every θ ∈ Θ so that θ ≠ θ0.
Then, the MLE θ̂n , i.e., any solution to the minimization problem
$\min_{\theta \in \Theta} n^{-1} \sum_{i=1}^{n} \left[-\log(f(z_i, \theta))\right],$
satisfies $\hat{\theta}_n \xrightarrow{p} \theta_0$, the latter denoting the true parameter.
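A concrete instance (my own toy model, not from the lecture): for the exponential density $f(z, \theta) = \theta e^{-\theta z}$, θ > 0, the conditions can be checked on a compact Θ bounded away from 0, and minimizing the average negative log-likelihood recovers θ0.

```python
import numpy as np

rng = np.random.default_rng(3)
theta0 = 2.0
z = rng.exponential(scale=1.0 / theta0, size=50_000)  # f(z, theta) = theta * exp(-theta * z)

grid = np.linspace(0.1, 5.0, 4901)     # compact Theta = [0.1, 5], bounded away from 0
zbar = z.mean()
# Average negative log-likelihood: n^{-1} sum_i [-log(theta) + theta * z_i].
neg_loglik = -np.log(grid) + grid * zbar
theta_hat = float(grid[int(np.argmin(neg_loglik))])   # grid analogue of the MLE
```

In this model the minimizer has the closed form $\hat\theta_n = 1/\bar{z}_n$, so the grid solution sits on the nearest grid point to it.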
Mean-value-theorem
Asymptotic normality of the GMM estimator
Theorem 8
In the GMM setting, assume conditions (1)-(5) in the GMM consistency theorem
above, and suppose that:
1. θ → g(z, θ) is continuously differentiable for every z ∈ Z.
2. The variance-covariance matrix $\mathrm{VC}(g(z_1, \theta_0)) =: \Omega(\theta_0)$ is positive definite.
3. $E\left(\sup_{\theta \in \Theta} \left\| \frac{\partial}{\partial \theta} g(z_1, \theta) \right\|\right) < \infty$.
4. $E\left(\frac{\partial}{\partial \theta} g(z_1, \theta_0)\right) =: G(\theta_0)$ is so that $G'(\theta_0) W G(\theta_0)$ is nonsingular.
5. θ0 is in the interior of Θ.
Then,
$\sqrt{n}(\hat{\theta}_n - \theta_0) \xrightarrow{d} N(0, \Sigma(\theta_0)),$
where Σ(θ0) is defined as
$\left[G'(\theta_0) W G(\theta_0)\right]^{-1} G'(\theta_0) W \Omega(\theta_0) W G(\theta_0) \left[G'(\theta_0) W G(\theta_0)\right]^{-1}.$
Estimation of Σ(θ0 )
Proposition 9
Under the conditions of the previous theorem, and if
$E\left(\sup_{\theta \in \Theta} \left\| g(z_1, \theta) g'(z_1, \theta) \right\|\right) < \infty,$
then the plug-in estimator
$\hat{\Sigma}_n = [\hat{G}' W_n \hat{G}]^{-1} \hat{G}' W_n \hat{\Omega} W_n \hat{G} [\hat{G}' W_n \hat{G}]^{-1},$
with
$\hat{G} = n^{-1} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} g(z_i, \hat{\theta}_n)$ and $\hat{\Omega} = n^{-1} \sum_{i=1}^{n} g(z_i, \hat{\theta}_n) g'(z_i, \hat{\theta}_n),$
satisfies $\hat{\Sigma}_n \xrightarrow{p} \Sigma(\theta_0)$.
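A numeric sketch of the plug-in construction (scalar just-identified case, my own example): with $g(z, \theta) = z - \theta$ we have $\hat G = -1$ and $\hat\Omega$ equal to the sample variance, so the sandwich collapses to $\hat\Omega$, matching the usual asymptotic variance of the sample mean.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(2.0, 3.0, size=100_000)
theta_hat = z.mean()                       # GMM estimate for g(z, theta) = z - theta (any W, since m = p)

W = 1.0
G_hat = -1.0                               # n^{-1} sum of (d/dtheta) g(z_i, theta_hat)
Omega_hat = np.mean((z - theta_hat) ** 2)  # n^{-1} sum of g(z_i, theta_hat) g(z_i, theta_hat)'
bread = 1.0 / (G_hat * W * G_hat)
Sigma_hat = bread * (G_hat * W * Omega_hat * W * G_hat) * bread
# Here the sandwich collapses to Omega_hat, i.e. the sample variance of z,
# estimating the asymptotic variance of sqrt(n) * (theta_hat - theta0).
```

With true variance 9, `Sigma_hat` should be close to 9 at this sample size.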
Further result
◮ Note that the right-hand side is the asymptotic covariance matrix with weighting matrix proportional to $\Omega^{-1}$.
◮ For estimating $\Omega^{-1}$ one typically needs an estimator for θ0 in the first place, leading to two-step GMM, iterated GMM, or continuously-updated GMM estimators.
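The simplification behind the first bullet can be written out: plugging $W = \Omega(\theta_0)^{-1}$ into the sandwich formula for $\Sigma(\theta_0)$ collapses it (a standard computation; the arguments $\theta_0$ are suppressed only for readability):

```latex
\Sigma(\theta_0)
  = \left[G'\,\Omega^{-1} G\right]^{-1}
    G'\,\Omega^{-1}\,\Omega\,\Omega^{-1} G
    \left[G'\,\Omega^{-1} G\right]^{-1}
  = \left[G'\,\Omega^{-1} G\right]^{-1},
```

since the middle factor reduces to $G'\,\Omega^{-1} G$ and cancels one of the outer inverses.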
J-test
◮ In case m > p one can use the so-called J-test to test the null hypothesis that there exists a θ ∈ Θ so that
$E(g(z_1, \theta)) = 0.$
The corresponding test statistic converges (under some conditions) under the null hypothesis in distribution to a $\chi^2$-distributed random variable with m − p degrees of freedom.
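A simulation sketch of the J-test (my own toy setup, not from the lecture: m = 2 moments, p = 1 parameter, two-step weighting with $\hat\Omega^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
z = rng.normal(1.0, 1.0, size=n)   # null holds at theta0 = 1 for the two moments below

# m = 2 moments, p = 1 parameter: g(z, theta) = (z - theta, z^2 - theta^2 - 1).
m1, m2 = z.mean(), (z ** 2).mean()
grid = np.linspace(0.0, 2.0, 2001)

def J_stat(theta, W):
    # J = n * gbar(theta)' W gbar(theta).
    gbar = np.array([m1 - theta, m2 - theta ** 2 - 1.0])
    return float(n * gbar @ W @ gbar)

# First step: W = I gives a preliminary estimate; second step: W = Omega_hat^{-1}.
t1 = float(grid[int(np.argmin([J_stat(t, np.eye(2)) for t in grid]))])
g_i = np.column_stack([z - t1, z ** 2 - t1 ** 2 - 1.0])
Omega_hat = (g_i.T @ g_i) / n
W2 = np.linalg.inv(Omega_hat)
t2 = float(grid[int(np.argmin([J_stat(t, W2) for t in grid]))])
J = J_stat(t2, W2)                 # approximately chi^2 with m - p = 1 df under the null
```

Under the null, values of `J` far in the right tail of the $\chi^2_1$ distribution would lead to rejection; here the null holds, so `J` should be moderate.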
A general asymptotic normality result for M-estimators
6. H(θ0 ) is nonsingular.
Then $\sqrt{n}(\hat{\theta}_n - \theta_0) \xrightarrow{d} N(0, H(\theta_0)^{-1} \Sigma(\theta_0) H(\theta_0)^{-1})$.