Econometric Analysis MT Official Problem Set Solution 3
KAMILA NOWAKOWICZ
The setting is a classic errors-in-variables model, but at heart this is more practice in deriving probability limits for certain averages.
In this question $z_i^*$ and $y_i^*$ are the true values. Instead of them we observe the mismeasured $z_i = z_i^* + \varepsilon_i$ and $y_i = y_i^* + \delta_i$. The true model is given in terms of the true (unobserved) values:
\[
y_i^* = \alpha + \beta z_i^*.
\]
We are told that $z_i^*$, $\delta_i$ and $\varepsilon_i$ are mutually independent sequences of independent random variables, that $z_i^{*2}$, $\delta_i^2$ and $\varepsilon_i^2$ are uniformly integrable, and that $E(\delta_i) = E(\varepsilon_i) = 0$. These are very strong assumptions on the nature of the measurement error, yet even in this case we expect to see inconsistency. In reality, the measurement error may well fail to be independent and random, which would make things even worse.
Useful results: We start by listing results which will be useful in solving this question. Like last week, we will use convergence in r-th mean to show convergence in probability.
We are given information about the squares of some variables being uniformly integrable. The following result will let us conclude that the variables themselves also have that property:
These solutions are adapted from solutions by Chen Qiu, which were based on Prof Hidalgo’s notes and
solutions for EC484. Their aim is to fill some gaps between notes and exercises. Prof Hidalgo’s notes should
always be the final reference. If you spot any mistakes in this file please contact me: K.Nowakowicz@lse.ac.uk.
Result: If $\{X_i^s\}$ is U.I. for some $s > 0$, then $\{X_i^r\}$ is also U.I. for any $0 < r < s$.
Proof: By Jensen's inequality (the map $t \mapsto t^{r/s}$ is concave since $r < s$):
\[
\sup_i E\left(|X_i|^r \mathbf{1}(|X_i| > \lambda)\right) \le \sup_i \left(E\left(|X_i|^s \mathbf{1}(|X_i| > \lambda)^{s/r}\right)\right)^{r/s} = \sup_i \left(E\left(|X_i|^s \mathbf{1}(|X_i| > \lambda)\right)\right)^{r/s} \to 0
\]
as $\lambda \to \infty$.
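Since Jensen's inequality also holds under the empirical (uniform) measure on a finite sample, the key inequality in this proof can be checked numerically. A sketch with an arbitrary lognormal sample (all distributional choices here are illustrative, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # stand-in for |X_i|

r, s, lam = 1.0, 2.0, 2.0  # r < s, and a truncation threshold lambda

tail = x > lam
# Empirical E(|X|^r 1(|X| > lam)) and (empirical E(|X|^s 1(|X| > lam)))^(r/s):
lhs = np.mean(x**r * tail)
rhs = np.mean(x**s * tail) ** (r / s)

# t -> t^(r/s) is concave for r < s, so Jensen gives lhs <= rhs exactly.
assert lhs <= rhs
```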
In particular, by Theorem 11:
\[
\frac{1}{n}\sum_{i=1}^n \left(z_i^* - E(z_i^*)\right) \xrightarrow{p} 0, \qquad \frac{1}{n}\sum_{i=1}^n \delta_i \xrightarrow{p} 0, \qquad \frac{1}{n}\sum_{i=1}^n \varepsilon_i \xrightarrow{p} 0.
\]
Theorem 10: If a sequence $\{x_i\}_{i \in \mathbb{N}}$ is uniformly integrable then $\max_i E(|x_i|) < \infty$.
Solution. We begin by rewriting the true (but unobservable) model in terms of the observable variables:
\begin{align*}
y_i^* &= \alpha + \beta z_i^* \\
y_i - \delta_i &= \alpha + \beta (z_i - \varepsilon_i) \\
y_i &= \alpha + \beta (z_i - \varepsilon_i) + \delta_i \\
&= \alpha + \beta z_i + \underbrace{\delta_i - \beta \varepsilon_i}_{u_i}
\end{align*}
We denote the composite error $\delta_i - \beta\varepsilon_i$ by $u_i$ to simplify notation. This way we get a model which looks like a standard linear regression with a single regressor. Notice that $z_i$ and $u_i$ both depend on $\varepsilon_i$: the regressor is endogenous, which is why we expect to see inconsistency. The OLS estimator for $\beta$ is:
\[
\hat\beta = \frac{\sum_{i=1}^n (z_i - \bar z)(y_i - \bar y)}{\sum_{i=1}^n (z_i - \bar z)^2} = \beta + \frac{\frac{1}{n}\sum_{i=1}^n (z_i - \bar z) u_i}{\frac{1}{n}\sum_{i=1}^n (z_i - \bar z)^2},
\]
and for $\alpha$:
\begin{align*}
\hat\alpha &= \bar y - \hat\beta \bar z \\
&= \alpha + \beta \bar z + \bar u - \hat\beta \bar z \\
&= \alpha + \left(\beta - \hat\beta\right) \bar z + \bar u.
\end{align*}
To get the probability limits of both coefficients we take advantage of Slutsky's Theorem and split them into smaller building blocks which we consider separately. We need to find the probability limits of the following four objects:
\begin{align*}
A &: \frac{1}{n}\sum_{i=1}^n (z_i - \bar z) u_i & B &: \frac{1}{n}\sum_{i=1}^n (z_i - \bar z)^2 & C &: \bar z = \frac{1}{n}\sum_{i=1}^n z_i & D &: \bar u = \frac{1}{n}\sum_{i=1}^n u_i
\end{align*}
$C$ and $D$ look simpler and will be useful in finding the limits of $A$ and $B$, so we start from the easier ones.
Step 1: The probability limit of D. Rewrite $u_i$ in terms of the random variables we have explicit assumptions for:
\[
\bar u = \frac{1}{n}\sum_{i=1}^n u_i = \frac{1}{n}\sum_{i=1}^n (\delta_i - \beta\varepsilon_i) = \frac{1}{n}\sum_{i=1}^n \delta_i - \beta\,\frac{1}{n}\sum_{i=1}^n \varepsilon_i.
\]
By Theorem 11,
\[
\frac{1}{n}\sum_{i=1}^n \delta_i \xrightarrow{p} 0; \qquad \frac{1}{n}\sum_{i=1}^n \varepsilon_i \xrightarrow{p} 0.
\]
So $D \xrightarrow{p} 0$ by Slutsky's Theorem.
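As a sanity check on Step 1, a quick simulation (with hypothetical Gaussian errors and an illustrative slope value) shows $\bar u$ collapsing to 0 for large $n$:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 2.0  # illustrative slope; any fixed value works

def u_bar(n):
    # ubar = (1/n) sum(delta_i) - beta * (1/n) sum(eps_i), both means of
    # mean-zero noise, so ubar is of order 1/sqrt(n).
    delta = rng.normal(0.0, 1.0, n)
    eps = rng.normal(0.0, 1.0, n)
    return delta.mean() - beta * eps.mean()

small = abs(u_bar(1_000_000))
assert small < 0.01  # tiny for n of this size
```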
Step 2: The probability limit of C. Again, we start by rewriting in terms of the original variables:
\[
\bar z = \frac{1}{n}\sum_{i=1}^n z_i = \frac{1}{n}\sum_{i=1}^n z_i^* + \frac{1}{n}\sum_{i=1}^n \varepsilon_i.
\]
As we have shown above (by Theorem 11), $\frac{1}{n}\sum_{i=1}^n \varepsilon_i \xrightarrow{p} 0$ and $\frac{1}{n}\sum_{i=1}^n \left(z_i^* - E(z_i^*)\right) \xrightarrow{p} 0$. Hence:
\[
\bar z = \underbrace{\frac{1}{n}\sum_{i=1}^n \left(z_i^* - E(z_i^*)\right)}_{\xrightarrow{p}\, 0} + \frac{1}{n}\sum_{i=1}^n E(z_i^*) + \underbrace{\frac{1}{n}\sum_{i=1}^n \varepsilon_i}_{\xrightarrow{p}\, 0} \xrightarrow{p} \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n E(z_i^*) \equiv \mu_*.
\]
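Step 2 says the sample mean of the mismeasured $z_i$ is still consistent for $\mu_*$: mean-zero measurement error washes out of averages. A numerical sketch (the normal distributions and parameter values are illustrative assumptions, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
mu_star = 2.0  # E(z_i*) in this illustration

z_star = rng.normal(mu_star, 1.0, n)  # true values
eps = rng.normal(0.0, 1.0, n)         # measurement error, mean zero
z = z_star + eps                      # what we actually observe

# zbar = (1/n) sum z_i* + (1/n) sum eps_i, and the second piece vanishes,
# so zbar should be close to mu_star despite the mismeasurement.
zbar = z.mean()
assert abs(zbar - mu_star) < 0.01
```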
Step 3: The probability limit of A. Expanding:
\begin{align*}
\frac{1}{n}\sum_{i=1}^n (z_i - \bar z) u_i &= \frac{1}{n}\sum_{i=1}^n z_i u_i - \bar z\,\frac{1}{n}\sum_{i=1}^n u_i \\
&= \frac{1}{n}\sum_{i=1}^n z_i u_i - \left(\frac{1}{n}\sum_{i=1}^n z_i\right)\left(\frac{1}{n}\sum_{i=1}^n u_i\right) \\
&= \frac{1}{n}\sum_{i=1}^n z_i u_i - \underbrace{C \cdot D}_{\xrightarrow{p}\, 0}
\end{align*}
The limit of the second term follows from what we have already done. It suffices to find the probability limit of $\frac{1}{n}\sum_{i=1}^n z_i u_i$. Plugging in $z_i = z_i^* + \varepsilon_i$ and $u_i = \delta_i - \beta\varepsilon_i$:
\[
\frac{1}{n}\sum_{i=1}^n z_i u_i = \frac{1}{n}\sum_{i=1}^n (z_i^* + \varepsilon_i)(\delta_i - \beta\varepsilon_i) = \frac{1}{n}\sum_{i=1}^n z_i^*\delta_i - \beta\,\frac{1}{n}\sum_{i=1}^n z_i^*\varepsilon_i + \frac{1}{n}\sum_{i=1}^n \varepsilon_i\delta_i - \beta\,\frac{1}{n}\sum_{i=1}^n \varepsilon_i^2.
\]
Consider the first term, $\frac{1}{n}\sum_{i=1}^n z_i^*\delta_i$. Since the summands are independent with mean $E(z_i^*)E(\delta_i) = 0$, the cross terms vanish and
\begin{align*}
E\left[\left(\frac{1}{n}\sum_{i=1}^n z_i^*\delta_i\right)^2\right] &= \frac{1}{n^2}\sum_{i=1}^n E\left(z_i^{*2}\right) E\left(\delta_i^2\right) \\
&\le \frac{1}{n^2}\sum_{i=1}^n \max_j E\left(z_j^{*2}\right) \max_k E\left(\delta_k^2\right) \\
&= \frac{1}{n}\underbrace{\max_j E\left(z_j^{*2}\right) \max_k E\left(\delta_k^2\right)}_{\equiv\, c\, <\, \infty} = \frac{c}{n} \to 0
\end{align*}
as $n \to \infty$. Note that the maxima of the expectations are bounded by Theorem 10 (this is one of the necessary conditions for U.I. sequences).
So $\frac{1}{n}\sum_{i=1}^n z_i^*\delta_i \xrightarrow{\text{2nd}} 0$ (its mean is $E(z_i^*\delta_i) = E(z_i^*)E(\delta_i) = 0$), and hence $\frac{1}{n}\sum_{i=1}^n z_i^*\delta_i \xrightarrow{p} 0$.
• By identical arguments, $\frac{1}{n}\sum_{i=1}^n z_i^*\varepsilon_i \xrightarrow{p} 0$ and $\frac{1}{n}\sum_{i=1}^n \varepsilon_i\delta_i \xrightarrow{p} 0$.
• Summarising, using also $\frac{1}{n}\sum_{i=1}^n \varepsilon_i^2 \xrightarrow{p} \sigma_\varepsilon^2$ (established in Step 4 below):
\[
A \xrightarrow{p} \left(0 - \beta \times 0 + 0 - \beta\sigma_\varepsilon^2\right) - \mu_* \times 0 = -\beta\sigma_\varepsilon^2.
\]
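The $1/n$ decay of the second moment of $\frac{1}{n}\sum_i z_i^*\delta_i$ driving this step can be checked by Monte Carlo (i.i.d. normal draws with illustrative moments, an assumption for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

def second_moment(n, reps=2000):
    # Monte Carlo estimate of E[((1/n) sum z_i* delta_i)^2]
    z_star = rng.normal(2.0, 1.0, (reps, n))  # E(z*^2) = 4 + 1 = 5
    delta = rng.normal(0.0, 1.0, (reps, n))   # E(delta^2) = 1
    means = (z_star * delta).mean(axis=1)
    return (means**2).mean()

# For i.i.d. draws the bound is exact: E[(...)^2] = E(z*^2) E(delta^2) / n = 5/n.
m100 = second_moment(100)
m1000 = second_moment(1000)
assert abs(m100 - 5 / 100) < 0.02
assert m1000 < m100  # shrinks as n grows
```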
Step 4: Find the probability limit of $B = \frac{1}{n}\sum_{i=1}^n (z_i - \bar z)^2$.
\[
\frac{1}{n}\sum_{i=1}^n (z_i - \bar z)^2 = \frac{1}{n}\sum_{i=1}^n z_i^2 - \underbrace{\bar z^2}_{\xrightarrow{p}\, \mu_*^2}
\]
We have previously shown that $\bar z \xrightarrow{p} \mu_*$, so by Slutsky's Theorem $\bar z^2 \xrightarrow{p} \mu_*^2$. For the other term:
\[
\frac{1}{n}\sum_{i=1}^n z_i^2 = \frac{1}{n}\sum_{i=1}^n z_i^{*2} + \frac{2}{n}\sum_{i=1}^n z_i^*\varepsilon_i + \frac{1}{n}\sum_{i=1}^n \varepsilon_i^2.
\]
Using our previously obtained results, together with Theorem 11 applied to the uniformly integrable $z_i^{*2}$ and $\varepsilon_i^2$:
\[
\frac{1}{n}\sum_{i=1}^n z_i^*\varepsilon_i \xrightarrow{p} 0, \qquad \frac{1}{n}\sum_{i=1}^n z_i^{*2} \xrightarrow{p} \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n E\left(z_i^{*2}\right) \equiv s_*^2, \qquad \frac{1}{n}\sum_{i=1}^n \varepsilon_i^2 \xrightarrow{p} \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n E\left(\varepsilon_i^2\right) = \sigma_\varepsilon^2.
\]
Summarising:
\[
B \xrightarrow{p} s_*^2 + 2 \times 0 + \sigma_\varepsilon^2 - \mu_*^2 = s_*^2 + \sigma_\varepsilon^2 - \mu_*^2.
\]
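Combining Steps 3 and 4 by Slutsky's Theorem gives $\hat\beta \xrightarrow{p} \beta - \beta\sigma_\varepsilon^2/(s_*^2 + \sigma_\varepsilon^2 - \mu_*^2)$, the classic attenuation bias; in the i.i.d. case the denominator is $\mathrm{Var}(z_i^*) + \sigma_\varepsilon^2$. A Monte Carlo sketch of this limit (all distributional choices and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
alpha, beta = 1.0, 2.0
var_z, sig_eps2 = 1.0, 1.0  # Var(z*) and sigma_eps^2 for this illustration

z_star = rng.normal(2.0, np.sqrt(var_z), n)
eps = rng.normal(0.0, np.sqrt(sig_eps2), n)
delta = rng.normal(0.0, 1.0, n)

y = alpha + beta * z_star + delta  # observed y = y* + delta
z = z_star + eps                   # observed, mismeasured regressor

# OLS slope of y on the mismeasured z:
beta_hat = np.cov(z, y)[0, 1] / np.var(z)

# Attenuation: plim beta_hat = beta * var_z / (var_z + sig_eps2), here beta/2.
plim = beta * var_z / (var_z + sig_eps2)
assert abs(beta_hat - plim) < 0.02
```

With equal signal and noise variances the estimated slope is roughly half the true one, exactly as the probability limits of $A$ and $B$ predict.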
The definition of almost sure convergence is a bit awkward to work with. A common approach to showing almost sure convergence is to use a stronger notion of convergence: since complete convergence $\Rightarrow$ almost sure convergence, we can address this question by showing complete convergence.
The advantage of using complete convergence is that it involves probability terms that can be bounded by expectations, and expectations are much easier for us to calculate.
Solution. We show that $\sum_{n=1}^\infty \Pr\left(\left\|\hat\beta_n - \beta\right\| > \epsilon\right) < \infty$, thus verifying that $\hat\beta_n$ converges to $\beta$ completely, and therefore with probability 1 as well. We aim to bound the terms $\Pr\left(\left\|\hat\beta_n - \beta\right\| > \epsilon\right)$ by expectation terms through the Markov inequality. Since
\[
\hat\beta_n - \beta = (Z'Z)^{-1} Z'U = \left(\hat M\right)^{-1} \frac{1}{n}\sum_{i=1}^n z_i u_i, \qquad \text{where } \hat M \equiv \frac{1}{n} Z'Z,
\]
by the Markov inequality:
\[
\Pr\left(\left\|\hat\beta_n - \beta\right\| > \epsilon\right) \le \frac{E\left(\left\|\hat\beta_n - \beta\right\|^4\right)}{\epsilon^4}.
\]
We calculate $E\left(\left\|\hat\beta_n - \beta\right\|^4\right)$:
\[
E\left(\left\|\left(\hat M\right)^{-1} \frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right) \le E\left(\left\|\left(\hat M\right)^{-1}\right\|^4 \left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right) \quad \text{by the Cauchy--Schwarz inequality}
\]
\[
= \left\|\left(\hat M\right)^{-1}\right\|^4 E\left(\left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right) = AB,
\]
where $A = \left\|\left(\hat M\right)^{-1}\right\|^4$ and $B = E\left(\left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right)$; since the regressors $z_i$ are non-random here, $\left\|\left(\hat M\right)^{-1}\right\|^4$ can be taken outside the expectation.
Expanding $B$:
\begin{align*}
\left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4 &= \frac{1}{n^4}\left(\left(\sum_{i=1}^n z_i u_i\right)'\left(\sum_{i=1}^n z_i u_i\right)\right)^2 \\
&= \frac{1}{n^4}\left(\sum_{i=1}^n \sum_{j=1}^n z_i' z_j\, u_i u_j\right)^2 \\
&= \frac{1}{n^4}\sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n z_i' z_j\, z_k' z_l\, u_i u_j u_k u_l,
\end{align*}
which means
\[
E\left(\left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right) = \frac{1}{n^4}\sum_{i,j,k,l=1}^n z_i' z_j\, z_k' z_l\, E\left(u_i u_j u_k u_l\right).
\]
Since the $u_i$ are independent with mean 0, if any index appears only once we can pull out $E(u_i) = 0$ and the whole term becomes zero. The only situations where the expectation is non-zero are:
• $i = j = k = l$,
• $i = j \ne k = l$,
• $i = k \ne j = l$,
• $i = l \ne j = k$.
In the first case we get $E(u_i^4)$; in the remaining ones we get a term of the form $E(u_i^2)E(u_j^2)$.
Hence:
\begin{align*}
E\left(\left\|\frac{1}{n}\sum_{i=1}^n z_i u_i\right\|^4\right) &\le \frac{1}{n^4}\sum_{i=1}^n \|z_i\|^4\, E\left(u_i^4\right) + \frac{3}{n^4}\left(\sum_{i=1}^n \|z_i\|^2\, E\left(u_i^2\right)\right)^2 \\
&\le \frac{1}{n^4}\sum_{i=1}^n \left(\sup_j \|z_j\|\right)^4 \sup_k E\left(u_k^4\right) + \frac{3}{n^4}\left(\sum_{i=1}^n \left(\sup_j \|z_j\|\right)^2 \sqrt{\sup_k E\left(u_k^4\right)}\right)^2
\end{align*}
The first inequality uses $z_i' z_i\, z_k' z_k = \|z_i\|^2\|z_k\|^2$ for the case $i = j \ne k = l$ and $(z_i' z_j)^2 \le \|z_i\|^2\|z_j\|^2$ (Cauchy--Schwarz) for the other two paired cases; extending the squared sum to include $i = j$ only adds non-negative terms. The second inequality uses $E(u_i^2) \le \sqrt{E(u_i^4)}$ by Jensen's inequality. Let $a = \sup_j \|z_j\|$ and $b = \sup_k E(u_k^4)$; then the bound equals
\[
\frac{a^4 b}{n^3} + \frac{3 a^4 b}{n^2} \le \frac{a^4 b}{n^2} + \frac{3 a^4 b}{n^2} = \frac{4 a^4 b}{n^2} = \frac{C}{n^2}
\]
for some constant $C$.
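The index-pairing logic above can be verified exactly for i.i.d. Rademacher ($\pm 1$) errors, for which every expectation can be computed by enumerating all sign vectors; this is only an illustrative check, not part of the problem:

```python
from itertools import product

n = 4  # small number of observations; u_i are i.i.d. Rademacher (+1/-1)

def exact_expectation(i, j, k, l):
    # Average u_i*u_j*u_k*u_l over all 2^n equally likely sign vectors:
    # this is the exact expectation for independent Rademacher u's.
    total = 0
    for u in product([-1, 1], repeat=n):
        total += u[i] * u[j] * u[k] * u[l]
    return total / 2**n

def pairs_up(i, j, k, l):
    # The four non-vanishing index patterns from the solution.
    return (i == j == k == l) or (i == j != k == l) \
        or (i == k != j == l) or (i == l != j == k)

nonzero = 0
for i, j, k, l in product(range(n), repeat=4):
    e = exact_expectation(i, j, k, l)
    if pairs_up(i, j, k, l):
        assert e == 1.0  # E(u^4) = 1 and E(u^2)E(u^2) = 1 for Rademacher
        nonzero += 1
    else:
        assert e == 0.0  # an unmatched index contributes E(u_i) = 0

# i=j=k=l gives n terms; each of the 3 paired patterns gives n*(n-1).
assert nonzero == n + 3 * n * (n - 1)
print(nonzero)  # 40
```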
Combining the above results:
\[
E\left(\left\|\hat\beta_n - \beta\right\|^4\right) \le \left\|\left(\hat M\right)^{-1}\right\|^4\, \frac{C}{n^2} = \frac{C}{n^2},
\]
possibly for some other constant $C$.
Finally:
\[
\sum_{n=1}^\infty \Pr\left(\left\|\hat\beta_n - \beta\right\| > \epsilon\right) \le \frac{1}{\epsilon^4}\sum_{n=1}^\infty E\left(\left\|\hat\beta_n - \beta\right\|^4\right) \le \frac{C}{\epsilon^4}\underbrace{\sum_{n=1}^\infty \frac{1}{n^2}}_{=\, \pi^2/6\, <\, \infty} < \infty.
\]
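The last sum is the Basel series; only its convergence matters for the argument, but its value can be confirmed numerically:

```python
import math

# Partial sums of sum_{n>=1} 1/n^2 approach pi^2/6 ~= 1.6449; the tail
# beyond N is below 1/N, so N = 10^6 pins down about six digits.
N = 1_000_000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
assert abs(partial - math.pi**2 / 6) < 1e-5
```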