Principles of Parameter Estimation
[Fig. 12.1: the likelihood function, which peaks at the ML estimate $\hat{\theta}_{ML}(\mathbf{X})$.]
Given $X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n$, the joint p.d.f. $f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \theta)$ represents the likelihood function, and the ML estimate $\hat{\theta}_{ML}$ can be determined either from the likelihood equation

$$\sup_{\theta} f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \theta) \tag{12-4}$$

(the supremum being attained at $\theta = \hat{\theta}_{ML}$) or, equivalently, by maximizing the log-likelihood function.
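As a concrete illustration (a sketch, not from the text), the supremum in (12-4) can be located numerically when no closed form is available. The snippet below assumes i.i.d. $N(\theta, 1)$ observations; the data, bounds, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Assumed synthetic data: i.i.d. N(theta, 1) with theta unknown.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)

def neg_log_likelihood(theta):
    # Minimizing -log f_X over theta realizes the sup in (12-4).
    return -np.sum(norm.logpdf(x, loc=theta, scale=1.0))

res = minimize_scalar(neg_log_likelihood, bounds=(-10.0, 10.0), method="bounded")
print("theta_ML (numerical):", res.x)
print("sample mean (closed form for this model):", x.mean())
```

For this Gaussian model the numerical maximizer coincides with the sample mean, which is the closed-form ML estimate.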
In Example 12.1, where the observations are i.i.d. $N(\mu, \sigma^2)$ with $\mu$ the unknown parameter, the likelihood function is

$$f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \mu) = \frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\sum_{i=1}^{n}(x_i-\mu)^2/2\sigma^2}. \tag{12-9}$$
The resulting ML estimator, $\hat{\mu}_{ML}(\mathbf{X}) = \frac{1}{n}\sum_{i=1}^{n} X_i$ in (12-12), is unbiased, and its variance expands as

$$\operatorname{Var}(\hat{\mu}_{ML}) = \frac{1}{n^2}\left\{\sum_{i=1}^{n} E\big[(X_i-\mu)^2\big] + \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j\neq i}}^{n} E\big[(X_i-\mu)(X_j-\mu)\big]\right\} = \frac{\sigma^2}{n},$$

since the cross terms vanish for independent observations; this is the variance given in (12-14).
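A quick Monte Carlo check (an assumed setup, not from the text) confirms the $\sigma^2/n$ variance: repeatedly drawing $n$ Gaussian samples and averaging them reproduces the value above. The parameter values here are illustrative.

```python
import numpy as np

# Assumed parameters for the check; any values work.
rng = np.random.default_rng(1)
mu, sigma, n, trials = 5.0, 2.0, 25, 200_000

# Each row is one experiment; the row mean is the ML estimate of mu.
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
print("empirical variance:", estimates.var())   # close to 0.16
print("sigma^2 / n:       ", sigma**2 / n)      # exactly 0.16
```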
In the next example, where the observations are i.i.d. uniform over $(0, \theta)$, the likelihood function is

$$f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \theta) = \frac{1}{\theta^n}, \qquad 0 < \max(x_1, x_2, \ldots, x_n) \le \theta. \tag{12-17}$$

From (12-17), the likelihood function in this case is maximized by the minimum admissible value of $\theta$, and since $\theta \ge \max(X_1, X_2, \ldots, X_n)$, we get

$$\hat{\theta}_{ML}(\mathbf{X}) = \max(X_1, X_2, \ldots, X_n). \tag{12-18}$$
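For instance, a short sketch (assumed synthetic data, not from the text) realizes (12-18) directly:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 4.0                                  # assumed true parameter
x = rng.uniform(0.0, theta, size=100)
theta_ml = x.max()                           # ML estimate per (12-18)
print("theta_ML =", theta_ml)                # always <= theta
```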
In Example 12.3, where the observations are i.i.d. gamma distributed with unknown parameters $\alpha$ and $\beta$, the likelihood function is

$$f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \alpha, \beta) = \frac{\beta^{n\alpha}}{\big(\Gamma(\alpha)\big)^{n}} \prod_{i=1}^{n} x_i^{\alpha-1}\, e^{-\beta \sum_{i=1}^{n} x_i}. \tag{12-26}$$
This gives the log-likelihood function to be
$$L(x_1, x_2, \ldots, x_n; \alpha, \beta) = \log f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \alpha, \beta) = n\alpha \log \beta - n \log \Gamma(\alpha) + (\alpha - 1)\sum_{i=1}^{n} \log x_i - \beta \sum_{i=1}^{n} x_i. \tag{12-27}$$
Differentiating with respect to $\beta$ and setting the result to zero at $(\hat{\alpha}, \hat{\beta})$,

$$\frac{\partial L}{\partial \beta}\bigg|_{\hat{\alpha}, \hat{\beta}} = \frac{n\hat{\alpha}}{\hat{\beta}} - \sum_{i=1}^{n} x_i = 0. \tag{12-29}$$
Thus from (12-29),

$$\hat{\beta}_{ML}(\mathbf{X}) = \frac{\hat{\alpha}_{ML}}{\frac{1}{n}\sum_{i=1}^{n} x_i}, \tag{12-30}$$
and substituting (12-30) into (12-28) gives

$$\frac{\Gamma'(\hat{\alpha}_{ML})}{\Gamma(\hat{\alpha}_{ML})} - \log \hat{\alpha}_{ML} = \frac{1}{n}\sum_{i=1}^{n} \log x_i - \log\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right). \tag{12-31}$$
Notice that (12-31) is highly nonlinear in $\hat{\alpha}_{ML}$. In general, the (log-)likelihood equation can have more than one solution, or no solution at all. Further, the (log-)likelihood function may not even be differentiable, or it can be extremely complicated to solve explicitly (see Example 12.3, equation (12-31)); in such cases one resorts to a numerical solution, as sketched below.
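Here is a minimal numerical sketch (not from the text) for solving (12-31): it assumes synthetic gamma data and uses SciPy's digamma for the ratio $\Gamma'/\Gamma$ together with a bracketing root finder; the bracket endpoints and data parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

# Assumed synthetic data: i.i.d. gamma with shape 3 and rate 0.5.
rng = np.random.default_rng(4)
x = rng.gamma(shape=3.0, scale=2.0, size=500)

# Right side of (12-31); it is <= 0 by Jensen's inequality.
c = np.mean(np.log(x)) - np.log(np.mean(x))

def g(a):
    # digamma(a) = Gamma'(a)/Gamma(a), so g(a) = 0 is exactly (12-31).
    return digamma(a) - np.log(a) - c

alpha_ml = brentq(g, 1e-6, 1e6)       # illustrative bracket for the root
beta_ml = alpha_ml / np.mean(x)       # then (12-30) gives beta
print("alpha_ML =", alpha_ml, " beta_ML =", beta_ml)
```

The estimates should land near the assumed shape 3 and rate 0.5, illustrating that the nonlinear equation (12-31) is routinely handled by one-dimensional root finding.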
Best Unbiased Estimator:
Referring back to Example 12.1, we have seen that (12-12) represents an unbiased estimator for $\mu$ with variance given by (12-14). It is possible that, for a given $n$, there may be other unbiased estimators for this problem with even lower variance. If that is indeed the case, those estimators would naturally be preferable to (12-12). In a given scenario, is it possible to determine the lowest possible value for the variance of any unbiased estimator? Fortunately, a theorem by Cramér and Rao (Rao 1945; Cramér 1948) gives a complete answer to this problem.
Cramér-Rao Bound: The variance of any unbiased estimator $\hat{\theta}$ based on observations $X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n$ for $\theta$ must satisfy the lower bound

$$\operatorname{Var}(\hat{\theta}) \ge \frac{1}{E\left\{\left(\dfrac{\partial \ln f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \theta)}{\partial \theta}\right)^{2}\right\}} = \frac{1}{-E\left\{\dfrac{\partial^{2} \ln f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \theta)}{\partial \theta^{2}}\right\}}. \tag{12-32}$$
This important result states that the right side of (12-32) acts as a lower bound on the variance of all unbiased estimators for $\theta$, provided their joint p.d.f. satisfies certain regularity restrictions (see (8-79)-(8-81), Text).
Naturally, any unbiased estimator whose variance coincides with that in (12-32) must be the best; there are no better solutions! Such estimators are known as efficient estimators. Let us examine whether (12-12) represents an efficient estimator. Toward this end, using (12-11),
$$\left(\frac{\partial \ln f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \mu)}{\partial \mu}\right)^{2} = \frac{1}{\sigma^{4}}\left(\sum_{i=1}^{n}(X_i - \mu)\right)^{2}; \tag{12-33}$$
and

$$E\left\{\left(\frac{\partial \ln f_{\mathbf{X}}(x_1, x_2, \ldots, x_n; \mu)}{\partial \mu}\right)^{2}\right\} = \frac{1}{\sigma^{4}}\left\{\sum_{i=1}^{n} E\big[(X_i - \mu)^{2}\big] + \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j\neq i}}^{n} E\big[(X_i - \mu)(X_j - \mu)\big]\right\} = \frac{1}{\sigma^{4}}\sum_{i=1}^{n}\sigma^{2} = \frac{n}{\sigma^{2}}, \tag{12-34}$$
and substituting this into the first form on the right side of (12-32), we obtain the Cramér-Rao lower bound for this problem to be
$$\frac{\sigma^{2}}{n}. \tag{12-35}$$
But from (12-14) the variance of the ML estimator in (12-12)
is the same as (12-35), implying that (12-12) indeed represents
an efficient estimator in this case, the best of all possibilities!
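To see (12-33)-(12-35) numerically, the expected squared score can be estimated by simulation (an assumed setup, not from the text): it should match $n/\sigma^2$, whose reciprocal is the bound $\sigma^2/n$ attained by the sample mean. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, trials = 0.0, 1.5, 10, 100_000   # assumed parameters

x = rng.normal(mu, sigma, size=(trials, n))
score = (x - mu).sum(axis=1) / sigma**2        # d(ln f)/d(mu), as in (12-33)
print("E[score^2] (empirical):", (score**2).mean())
print("n / sigma^2:           ", n / sigma**2)
print("Cramer-Rao bound:      ", sigma**2 / n)
print("Var of sample mean:    ", x.mean(axis=1).var())  # attains the bound
```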
[Fig. 12.2: the posterior density $f(\theta \mid x_1, x_2, \ldots, x_n)$, which peaks at the MAP estimate $\hat{\theta}_{MAP}$.]