
One-Step MLE.

Many estimators are consistent and asymptotically normal but not asymptotically efficient. Some of them can be improved to become asymptotically efficient. We observe
$$dX_t = S(\theta, X_t)\, dt + \sigma(X_t)\, dW_t, \qquad X_0, \quad 0 \le t \le T.$$
In the regular case the MLE $\hat\theta_T$ is asymptotically normal,
$$\mathcal{L}_\theta\left\{\sqrt{T}\left(\hat\theta_T - \theta\right)\right\} \Longrightarrow \mathcal{N}\left(0, I(\theta)^{-1}\right), \qquad I(\theta) = \mathbf{E}_\theta\left(\frac{\dot S(\theta, \xi)}{\sigma(\xi)}\right)^{2},$$
where $\dot S$ denotes the derivative with respect to $\theta$ and $\xi$ is a random variable with the invariant density $f(\theta,\cdot)$ of the diffusion,
and asymptotically efficient:
$$\lim_{\delta \to 0}\, \lim_{T \to \infty}\, \sup_{|\theta - \theta_0| < \delta} T\, \mathbf{E}_\theta\left(\hat\theta_T - \theta\right)^2 = I(\theta_0)^{-1}.$$
The family of measures is LAN:
$$L\left(\theta + \frac{u}{\sqrt T},\, \theta,\, X^T\right) = \exp\left\{ u\, \Delta_T\left(\theta, X^T\right) - \frac{u^2}{2}\, I(\theta) + r_T\left(\theta, u, X^T\right)\right\}.$$
Here $r_T \to 0$ and
$$\Delta_T\left(\theta, X^T\right) = \frac{1}{\sqrt T}\int_0^T \frac{\dot S(\theta, X_t)}{\sigma(X_t)^2}\left[dX_t - S(\theta, X_t)\, dt\right] \Longrightarrow \mathcal{N}\left(0, I(\theta)\right).$$
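For orientation, $\Delta_T$ is the normalized $\theta$-derivative of the log-likelihood ratio. A short sketch (my own remark, under the usual assumption that the measures induced by different $\theta$ with the same diffusion coefficient are equivalent, and ignoring the contribution of the initial value $X_0$):
$$\ln L\left(\theta, \theta_1; X^T\right) = \int_0^T \frac{S(\theta, X_t) - S(\theta_1, X_t)}{\sigma(X_t)^2}\, dX_t - \frac{1}{2}\int_0^T \frac{S(\theta, X_t)^2 - S(\theta_1, X_t)^2}{\sigma(X_t)^2}\, dt,$$
so that
$$\frac{\partial}{\partial\theta} \ln L\left(\theta, \theta_1; X^T\right) = \int_0^T \frac{\dot S(\theta, X_t)}{\sigma(X_t)^2}\left[dX_t - S(\theta, X_t)\, dt\right] = \sqrt{T}\, \Delta_T\left(\theta, X^T\right).$$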
Then, given a consistent and asymptotically normal estimator $\bar\theta_T$, we construct the estimator
$$\theta^\star_T = \bar\theta_T + \frac{\Delta_T\left(\bar\theta_T, X^T\right)}{\sqrt{T}\, I(\bar\theta_T)}$$
and show that this estimator is asymptotically efficient.
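One way to read this construction (a heuristic remark of mine, not from the original slides): it is a single Fisher-scoring step started from $\bar\theta_T$, since $\sqrt{T}\,\Delta_T(\bar\theta_T, X^T)$ is the score at $\bar\theta_T$ and $T\, I(\bar\theta_T)$ plays the role of the observed Fisher information:
$$\theta^\star_T = \bar\theta_T + \frac{\dfrac{\partial}{\partial\theta}\ln L\left(\theta,\theta_1;X^T\right)\Big|_{\theta=\bar\theta_T}}{T\, I(\bar\theta_T)} = \bar\theta_T + \frac{\Delta_T\left(\bar\theta_T, X^T\right)}{\sqrt{T}\, I(\bar\theta_T)}.$$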
$$\left(\theta^\star_T - \theta\right)\sqrt{T} = \left(\bar\theta_T - \theta\right)\sqrt{T} + \frac{\Delta_T(\theta)}{I(\theta)}\,(1 + o(1)) + \frac{1}{I(\theta)\sqrt{T}} \int_0^T \frac{\dot S\left(\bar\theta_T, X_t\right)}{\sigma(X_t)^2}\left[S(\theta, X_t) - S\left(\bar\theta_T, X_t\right)\right] dt\,(1 + o(1))$$
$$= \left(\bar\theta_T - \theta\right)\sqrt{T} + \frac{\Delta_T(\theta)}{I(\theta)}\,(1 + o(1)) - \left(\bar\theta_T - \theta\right)\sqrt{T}\; \frac{I(\theta)^{-1}}{T} \int_0^T \left(\frac{\dot S(\theta, X_t)}{\sigma(X_t)}\right)^2 dt\,(1 + o(1))$$
$$= \frac{\Delta_T(\theta)}{I(\theta)}\,(1 + o(1)) + o(1) \Longrightarrow \mathcal{N}\left(0, I(\theta)^{-1}\right).$$
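The last step uses the law of large numbers for ergodic diffusions (a standard fact, recalled here for completeness): for a function $g$ integrable with respect to the invariant density $f(\theta,\cdot)$,
$$\frac{1}{T}\int_0^T g(X_t)\, dt \longrightarrow \mathbf{E}_\theta\, g(\xi) = \int g(x)\, f(\theta, x)\, dx \qquad \text{(in probability)},$$
applied with $g(x) = \left(\dot S(\theta, x)/\sigma(x)\right)^2$, so that $\frac{1}{T}\int_0^T \left(\dot S(\theta,X_t)/\sigma(X_t)\right)^2 dt \to I(\theta)$ and the two $\left(\bar\theta_T-\theta\right)\sqrt{T}$ terms cancel asymptotically.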
It is easy to verify by the Itô formula that
$$\Delta_T\left(\theta, X^T\right) = \bar\Delta_T\left(\theta, X^T\right),$$
where
$$\bar\Delta_T\left(\theta, X^T\right) = \frac{1}{\sqrt T}\int_{X_0}^{X_T} \frac{\dot S(\theta, y)}{\sigma(y)^2}\, dy - \frac{1}{2\sqrt T}\int_0^T \dot S'(\theta, X_t)\, dt + \frac{1}{\sqrt T}\int_0^T \dot S(\theta, X_t)\left[\frac{\sigma'(X_t)}{\sigma(X_t)} - \frac{S(\theta, X_t)}{\sigma(X_t)^2}\right] dt$$
(the prime denotes the derivative with respect to the space variable). The point of this representation is that it contains no stochastic integral, so it can be evaluated at the data-dependent point $\bar\theta_T$. We therefore define the one-step maximum likelihood estimator by the same formula,
$$\theta^\star_T = \bar\theta_T + \frac{\bar\Delta_T\left(\bar\theta_T, X^T\right)}{\sqrt{T}\, I(\bar\theta_T)}.$$
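A sketch of the Itô computation behind the identity $\Delta_T = \bar\Delta_T$ (my reconstruction of the standard argument): put $G(\theta, x) = \int^x \dot S(\theta, y)\,\sigma(y)^{-2}\, dy$; then by the Itô formula
$$G(\theta, X_T) - G(\theta, X_0) = \int_0^T \frac{\dot S(\theta, X_t)}{\sigma(X_t)^2}\, dX_t + \frac{1}{2}\int_0^T \left[\frac{\dot S'(\theta, X_t)}{\sigma(X_t)^2} - \frac{2\,\dot S(\theta, X_t)\,\sigma'(X_t)}{\sigma(X_t)^3}\right]\sigma(X_t)^2\, dt,$$
so the stochastic integral in $\Delta_T$ can be replaced by $\int_{X_0}^{X_T}\dot S(\theta,y)\,\sigma(y)^{-2}\, dy$ plus ordinary Lebesgue integrals, which after collecting terms gives exactly $\bar\Delta_T\left(\theta, X^T\right)$.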
We prove that
$$\mathcal{L}_\theta\left\{\sqrt{T}\left(\theta^\star_T - \theta\right)\right\} \Longrightarrow \mathcal{N}\left(0, I(\theta)^{-1}\right).$$
Example. Let
$$dX_t = -\left(X_t - \theta\right)^3 dt + dW_t, \qquad X_0, \quad 0 \le t \le T.$$
The MLE cannot be written in explicit form, but the estimator of the method of moments (EMM)
$$\bar\theta_T = \frac{1}{T}\int_0^T X_t\, dt$$
is uniformly consistent and asymptotically normal. The one-step MLE is
$$\theta^\star_T = \bar\theta_T - \frac{3}{I\left(\bar\theta_T\right) T}\int_0^T \left(\bar\theta_T - X_t\right)^5 dt.$$
This estimator is consistent and asymptotically normal,
$$\mathcal{L}_\theta\left\{\sqrt{T}\left(\theta^\star_T - \theta\right)\right\} \Longrightarrow \mathcal{N}\left(0, I(\theta)^{-1}\right).$$
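To make the example concrete, here is a minimal simulation sketch (my own illustration, not from the slides; the function name is mine and the constant $3/(I\,T)$ follows the reconstruction above):

    import numpy as np

    def one_step_mle_demo(theta=1.0, T=1000.0, dt=0.01, seed=0):
        """Euler-Maruyama simulation of dX_t = -(X_t - theta)^3 dt + dW_t,
        followed by the EMM and the one-step MLE (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x = np.empty(n + 1)
        x[0] = theta  # start at the centre of the invariant density
        for k in range(n):
            x[k + 1] = x[k] - (x[k] - theta) ** 3 * dt + np.sqrt(dt) * rng.standard_normal()
        theta_bar = x.mean()  # EMM: time average of the trajectory
        I_hat = 9.0 * np.mean((x - theta_bar) ** 4)  # empirical Fisher information E[3(xi-theta)^2]^2
        # one-step correction with coefficient 3/(I*T), Riemann sum for the integral
        theta_star = theta_bar - 3.0 / (I_hat * T) * np.sum((theta_bar - x[:-1]) ** 5) * dt
        return theta_bar, theta_star

    print(one_step_mle_demo())

Both returned values should be close to the true theta, with the one-step MLE typically the sharper of the two for long observation windows.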
Lower Bounds.
The first one is the Cramér–Rao bound. Suppose that the observed diffusion process is
$$dX_t = S(\theta, X_t)\, dt + \sigma(X_t)\, dW_t, \qquad X_0, \quad 0 \le t \le T,$$
and we have to estimate some continuously differentiable function $\psi(\theta)$, $\theta \in \Theta \subset \mathbb{R}$.
For any estimator $\bar\psi_T$ of $\psi(\theta)$ we have the identity
$$\frac{\partial}{\partial\theta}\, \mathbf{E}_\theta\, \bar\psi_T = \mathbf{E}_\theta\left[\ell_T\left(\theta, X^T\right) \bar\psi_T\right],$$
where
$$\ell_T\left(\theta, X^T\right) = \frac{\dot f(\theta, X_0)}{f(\theta, X_0)} + \int_0^T \frac{\dot S(\theta, X_t)}{\sigma(X_t)^2}\left[dX_t - S(\theta, X_t)\, dt\right]$$
is the $\theta$-derivative of the log-likelihood and $f(\theta,\cdot)$ is the density of the initial value $X_0$.
Then we can write
$$\mathbf{E}_\theta\left[\ell_T\left(\theta, X^T\right)\bar\psi_T\right] = \mathbf{E}_\theta\left[\ell_T\left(\theta, X^T\right)\left(\bar\psi_T - \mathbf{E}_\theta\bar\psi_T\right)\right] \le \left[\mathbf{E}_\theta\left(\bar\psi_T - \mathbf{E}_\theta\bar\psi_T\right)^2\right]^{1/2}\left[\mathbf{E}_\theta\, \ell_T\left(\theta, X^T\right)^2\right]^{1/2}$$
(here we used $\mathbf{E}_\theta\,\ell_T\left(\theta, X^T\right) = 0$ and the Cauchy–Schwarz inequality),
and
$$\mathbf{E}_\theta\, \ell_T\left(\theta, X^T\right)^2 = \mathbf{E}_\theta\left(\frac{\dot f(\theta, X_0)}{f(\theta, X_0)}\right)^2 + T\, \mathbf{E}_\theta\left(\frac{\dot S(\theta, \xi)}{\sigma(\xi)}\right)^2 = I_T(\theta).$$
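A brief justification of this computation (my reconstruction of the standard argument): the cross term vanishes because the stochastic integral has zero mean conditionally on $X_0$, and by the Itô isometry together with stationarity
$$\mathbf{E}_\theta\left(\int_0^T \frac{\dot S(\theta, X_t)}{\sigma(X_t)^2}\left[dX_t - S(\theta, X_t)\, dt\right]\right)^2 = \int_0^T \mathbf{E}_\theta\left(\frac{\dot S(\theta, X_t)}{\sigma(X_t)}\right)^2 dt = T\, \mathbf{E}_\theta\left(\frac{\dot S(\theta, \xi)}{\sigma(\xi)}\right)^2 = T\, I(\theta).$$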
Hence
$$\mathbf{E}_\theta\left(\bar\psi_T - \mathbf{E}_\theta\bar\psi_T\right)^2 \ge \frac{\left(\dot\psi(\theta) + \dot b(\theta)\right)^2}{I_T(\theta)},$$
where $b(\theta) = \mathbf{E}_\theta\bar\psi_T - \psi(\theta)$ is the bias, so that $\frac{\partial}{\partial\theta}\mathbf{E}_\theta\bar\psi_T = \dot\psi(\theta) + \dot b(\theta)$.
Using the equality
$$\mathbf{E}_\theta\left(\bar\psi_T - \psi(\theta) - b(\theta)\right)^2 = \mathbf{E}_\theta\left(\bar\psi_T - \psi(\theta)\right)^2 - b(\theta)^2$$
we finally obtain
$$\mathbf{E}_\theta\left(\bar\psi_T - \psi(\theta)\right)^2 \ge \frac{\left(\dot\psi(\theta) + \dot b(\theta)\right)^2}{I_T(\theta)} + b(\theta)^2,$$
which is called the Cramér–Rao inequality. If $\psi(\theta) = \theta$ it becomes
$$\mathbf{E}_\theta\left(\bar\theta_T - \theta\right)^2 \ge \frac{\left(1 + \dot b(\theta)\right)^2}{I_T(\theta)} + b(\theta)^2,$$
and this last inequality is sometimes used to define an asymptotically efficient estimator $\bar\theta_T$ as an estimator satisfying, for any $\theta$, the relation
$$\lim_{T\to\infty} T\, \mathbf{E}_\theta\left(\bar\theta_T - \theta\right)^2 = \frac{1}{I(\theta)} \qquad \text{(wrong!)}.$$
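For context (a hedged sketch of mine, not part of the original slides): the Hodges-type counterexample referred to below modifies an efficient estimator, say the MLE $\hat\theta_T$, near a single point,
$$\tilde\theta_T = \hat\theta_T\,\mathbf{1}\left\{|\hat\theta_T| \ge T^{-1/4}\right\},$$
i.e. $\tilde\theta_T = \hat\theta_T$ if $|\hat\theta_T| \ge T^{-1/4}$ and $\tilde\theta_T = 0$ otherwise. Heuristically, $\tilde\theta_T$ attains the value $1/I(\theta)$ in the relation above for every $\theta \ne 0$ and is strictly better ("superefficient") at $\theta = 0$, while its maximal quadratic risk over shrinking neighbourhoods of $0$ tends to infinity; this is why a pointwise definition of efficiency is unsatisfactory and is replaced by the minimax bound that follows.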
Due to the well-known Hodges example this definition is not satisfactory. Therefore we use another bound (inequality) called the Hajek–Le Cam bound. For the quadratic loss function this lower bound is: for any estimator $\bar\theta_T$ and any $\theta_0 \in \Theta$,
$$\lim_{\delta\to 0}\,\lim_{T\to\infty}\, \sup_{|\theta-\theta_0|<\delta} T\, \mathbf{E}_\theta\left(\bar\theta_T - \theta\right)^2 \ge \frac{1}{I(\theta_0)}.$$
It can be considered as an asymptotic minimax version of the Cramér–Rao inequality.
To prove it we need the van Trees lower bound. Suppose that the unknown parameter $\theta \in \Theta = (\alpha, \beta)$ is a random variable with density $p(\theta)$, $p(\alpha) = p(\beta) = 0$, and finite Fisher information
$$I_p = \int_\alpha^\beta \frac{\dot p(\theta)^2}{p(\theta)}\, d\theta < \infty.$$
Further we suppose that
$$\frac{\partial}{\partial\theta}\, L\left(\theta, \theta_1; X^T\right) = \ell_T\left(\theta, X^T\right)\, L\left(\theta, \theta_1; X^T\right).$$
Then we can write
$$\int_\alpha^\beta \psi(\theta)\, \frac{\partial}{\partial\theta}\left\{L\left(\theta, \theta_1; X^T\right) p(\theta)\right\} d\theta = \psi(\theta)\, L\left(\theta, \theta_1; X^T\right) p(\theta)\Big|_\alpha^\beta - \int_\alpha^\beta \dot\psi(\theta)\, L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta = -\int_\alpha^\beta \dot\psi(\theta)\, L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta,$$
since the boundary term vanishes because $p(\alpha) = p(\beta) = 0$.
In a similar way,
$$\mathbf{E}_{\theta_1}\int_\alpha^\beta \left[\bar\psi_T - \psi(\theta)\right] \frac{\partial}{\partial\theta}\left\{L\left(\theta, \theta_1; X^T\right) p(\theta)\right\} d\theta = \mathbf{E}_{\theta_1}\int_\alpha^\beta \dot\psi(\theta)\, L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta = \int_\alpha^\beta \dot\psi(\theta)\, p(\theta)\, d\theta = \mathbf{E}_P\, \dot\psi(\theta).$$
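To connect this with the next step (a small bridging remark of mine): writing $\frac{\partial}{\partial\theta}\{Lp\} = \left(\frac{\partial}{\partial\theta}\ln[Lp]\right) Lp$, the identity just obtained reads
$$\mathbf{E}_P\,\dot\psi(\theta) = \mathbf{E}_{\theta_1}\int_\alpha^\beta \left[\bar\psi_T - \psi(\theta)\right]\left(\frac{\partial}{\partial\theta}\ln\left[L\left(\theta,\theta_1;X^T\right)p(\theta)\right]\right) L\left(\theta,\theta_1;X^T\right) p(\theta)\, d\theta,$$
and the Cauchy–Schwarz inequality is applied to this integral.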
The Cauchy–Schwarz inequality gives us
$$\left(\mathbf{E}_P\, \dot\psi(\theta)\right)^2 \le \mathbf{E}_{\theta_1}\int_\alpha^\beta \left(\bar\psi_T - \psi(\theta)\right)^2 L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta \;\times\; \mathbf{E}_{\theta_1}\int_\alpha^\beta \left(\frac{\partial}{\partial\theta}\ln\left[L\left(\theta, \theta_1; X^T\right) p(\theta)\right]\right)^2 L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta.$$
For the first integral we have
$$\mathbf{E}_{\theta_1}\int_\alpha^\beta \left(\bar\psi_T - \psi(\theta)\right)^2 L\left(\theta, \theta_1; X^T\right) p(\theta)\, d\theta = \int_\alpha^\beta \mathbf{E}_\theta\left(\bar\psi_T - \psi(\theta)\right)^2 p(\theta)\, d\theta = \mathbf{E}\left(\bar\psi_T - \psi(\theta)\right)^2,$$
and for the second integral we obtain $\mathbf{E}_P\, I_T(\theta) + I_p$.
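A sketch of that last computation (my reconstruction; it uses the assumption $\partial_\theta L = \ell_T L$ made above): expanding the logarithm,
$$\left(\frac{\partial}{\partial\theta}\ln\left[L\left(\theta,\theta_1;X^T\right)p(\theta)\right]\right)^2 = \left(\ell_T\left(\theta, X^T\right) + \frac{\dot p(\theta)}{p(\theta)}\right)^2,$$
and after integrating against $L\left(\theta,\theta_1;X^T\right)p(\theta)\,d\theta$ and taking $\mathbf{E}_{\theta_1}$, the square of the first term gives $\int_\alpha^\beta \mathbf{E}_\theta\,\ell_T\left(\theta,X^T\right)^2 p(\theta)\,d\theta = \mathbf{E}_P\, I_T(\theta)$, the square of the second gives $\int_\alpha^\beta \dot p(\theta)^2/p(\theta)\, d\theta = I_p$, and the cross term vanishes since $\mathbf{E}_\theta\,\ell_T\left(\theta, X^T\right) = 0$.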
Therefore
$$\mathbf{E}\left(\bar\psi_T - \psi(\theta)\right)^2 \ge \frac{\left(\mathbf{E}_P\, \dot\psi(\theta)\right)^2}{\mathbf{E}_P\, I_T(\theta) + I_p}.$$
This lower bound is due to van Trees (1968) and is called the van Trees inequality, global Cramér–Rao bound, integral type Cramér–Rao inequality or Bayesian Cramér–Rao bound. If we need to estimate $\theta$ only, then it becomes
$$\mathbf{E}\left(\bar\theta_T - \theta\right)^2 \ge \frac{1}{\mathbf{E}_P\, I_T(\theta) + I_p}.$$
The main advantage of this inequality is that the right-hand side does not depend on the properties of the estimators (say, bias) and so is the same for all estimators. It is widely used in asymptotic nonparametric statistics. In particular, it gives the Hajek–Le Cam inequality in the following elementary way.
Let us introduce a random variable $\eta$ with density function $p(v)$, $v \in [-1, 1]$, such that $p(-1) = p(1) = 0$ and the Fisher information $I_p < \infty$. Fix some $\delta > 0$, put $\theta = \theta_0 + \delta\eta$ and write $\mathbf{E}$ for the expectation with respect to the joint distribution of $X^T$ and $\eta$. Then we have
$$\lim_{T\to\infty}\, \sup_{|\theta-\theta_0|<\delta} T\, \mathbf{E}_\theta\left(\bar\theta_T - \theta\right)^2 \ge \lim_{T\to\infty} T\, \mathbf{E}\left(\bar\theta_T - \theta\right)^2 \ge \lim_{T\to\infty} \frac{T}{\mathbf{E}_P\, I_T(\theta) + \delta^{-2} I_p} = \frac{1}{\int_{-1}^{1} I\left(\theta_0 + \delta u\right) p(u)\, du}.$$
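The factor $\delta^{-2} I_p$ comes from rescaling the prior (a short check, under the construction above): the density of $\theta = \theta_0 + \delta\eta$ is $p_\delta(\theta) = \delta^{-1} p\left((\theta - \theta_0)/\delta\right)$, so
$$I_{p_\delta} = \int_{\theta_0-\delta}^{\theta_0+\delta} \frac{\dot p_\delta(\theta)^2}{p_\delta(\theta)}\, d\theta = \frac{1}{\delta^2}\int_{-1}^{1} \frac{\dot p(u)^2}{p(u)}\, du = \frac{I_p}{\delta^2},$$
while $\mathbf{E}_P\, I_T(\theta) = \mathbf{E}_P\,\mathbf{E}_\theta\left(\dot f(\theta,X_0)/f(\theta,X_0)\right)^2 + T\int_{-1}^{1} I\left(\theta_0 + \delta u\right) p(u)\, du$ grows linearly in $T$, which gives the last equality.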
Hence, by the continuity of the function $I(\theta)$, letting $\delta \to 0$ we obtain
$$\lim_{\delta\to 0}\, \lim_{T\to\infty}\, \sup_{|\theta-\theta_0|<\delta} T\, \mathbf{E}_\theta\left(\bar\theta_T - \theta\right)^2 \ge \frac{1}{I(\theta_0)}.$$