
Lecture 7

Minimum Variance Estimator:


An estimator that constrains the bias to be zero and then minimizes the variance is known as a minimum variance estimator.
In statistics, a minimum-variance unbiased estimator (MVUE), or uniformly minimum-variance unbiased estimator (UMVUE), is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter.

Theorem:

If the statistic $t$ is such that $\frac{\partial}{\partial\theta}\log L$ can be expressed in the form

$$\frac{\partial}{\partial\theta}\log L = A(\theta)\,(t - \theta),$$

then $t$ is an MVB (minimum variance bound) unbiased estimator of $\theta$ with variance $\dfrac{1}{A(\theta)}$.

Proof:
If $E(t) = \theta + b(\theta)$, where $b(\theta)$ is the bias of $t$ and is a differentiable function of $\theta$, then the MVB inequality is of the following type:

$$\operatorname{var}(t) \ge \frac{\left[1 + b'(\theta)\right]^2}{-E\left(\dfrac{\partial^2 \log L}{\partial\theta^2}\right)} \qquad \text{... ... ... (1)}$$

Condition under which the MVB is attained

If $t$ is an unbiased estimator of $\tau(\theta)$, then

$$\operatorname{var}(t) \ge \frac{\left[\tau'(\theta)\right]^2}{-E\left(\dfrac{\partial^2 \log L}{\partial\theta^2}\right)} \qquad \text{... ... ... (2)}$$

Again, we know that

$$\frac{\left[\operatorname{cov}\left(t,\ \dfrac{\partial\log L}{\partial\theta}\right)\right]^2}{\operatorname{var}(t)\cdot\operatorname{var}\left(\dfrac{\partial\log L}{\partial\theta}\right)} \le 1$$

$$\Rightarrow\ \left[\operatorname{cov}\left(t,\ \frac{\partial\log L}{\partial\theta}\right)\right]^2 \le \operatorname{var}(t)\cdot\operatorname{var}\left(\frac{\partial\log L}{\partial\theta}\right)$$
Equality holds when $t$ and $\dfrac{\partial\log L}{\partial\theta}$ are linearly related. Therefore, we can write

$$\frac{\partial\log L}{\partial\theta} - E\left(\frac{\partial\log L}{\partial\theta}\right) = A\,\big(t - \tau(\theta)\big)$$

and, since $E\left(\dfrac{\partial\log L}{\partial\theta}\right) = 0$,

$$\frac{\partial\log L}{\partial\theta} = A\,\big(t - \tau(\theta)\big),$$

where $A$ is independent of the $x$'s but may be a function of $\theta$. Then we can write (Cramér-Rao lower bound)

$$\frac{\partial\log L}{\partial\theta} = A(\theta)\,\big(t - \tau(\theta)\big) \qquad \text{... ... ... (3)}$$

Taking variances on both sides, we have

$$\operatorname{var}\left(\frac{\partial\log L}{\partial\theta}\right) = A(\theta)^2\,\operatorname{var}(t).$$

From equation (2), when the bound is attained, and noting that $-E\left(\dfrac{\partial^2\log L}{\partial\theta^2}\right) = \operatorname{var}\left(\dfrac{\partial\log L}{\partial\theta}\right)$, we can write

$$\operatorname{var}(t) = \frac{\left[\tau'(\theta)\right]^2}{\operatorname{var}\left(\dfrac{\partial\log L}{\partial\theta}\right)}$$

$$\Rightarrow\ \operatorname{var}(t) = \frac{\left[\tau'(\theta)\right]^2}{A(\theta)^2\,\operatorname{var}(t)}$$

$$\Rightarrow\ \left[\operatorname{var}(t)\right]^2 = \frac{\left[\tau'(\theta)\right]^2}{A(\theta)^2}$$

$$\Rightarrow\ \operatorname{var}(t) = \frac{\tau'(\theta)}{A(\theta)}$$

Thus $t$ is an MVB estimator of $\tau(\theta)$ with variance $\dfrac{\tau'(\theta)}{A(\theta)}$. If $\tau(\theta) = \theta$, then the variance is $\dfrac{1}{A(\theta)}$. Now from equation (3) we have

$$\frac{\partial\log L}{\partial\theta} = A(\theta)\,(t - \theta).$$

Note: If the frequency function is not of the form $\frac{\partial}{\partial\theta}\log L = A(\theta)\,(t - \theta)$, there may still be an estimator of $\tau(\theta)$ which has, uniformly in $\theta$, smaller variance than any other estimator; such an estimator is called an MVE (minimum variance estimator).

Note: The efficiency of an estimator is given by

$$\text{Efficiency} = \frac{\text{MVB}}{\text{variance of the given estimator}}.$$
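As a quick illustration of this ratio (my own example, not from the notes): for a sample of size $n$ from $N(\mu, \sigma^2)$, the MVB for $\mu$ is $\sigma^2/n$; a single observation $x_1$ is also unbiased for $\mu$ but has variance $\sigma^2$, so its efficiency is $(\sigma^2/n)/\sigma^2 = 1/n$. A minimal Monte Carlo sketch in Python (the parameters and seed are arbitrary choices):

```python
import numpy as np

# Arbitrary illustration values, not from the lecture notes
rng = np.random.default_rng(42)
n, mu, sigma = 10, 5.0, 2.0
reps = 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
mvb = sigma**2 / n                            # minimum variance bound for mu

eff_mean = mvb / samples.mean(axis=1).var()   # sample mean attains the bound
eff_first = mvb / samples[:, 0].var()         # single observation

print(f"efficiency of x-bar: {eff_mean:.3f}")   # close to 1
print(f"efficiency of x_1:   {eff_first:.3f}")  # close to 1/n = 0.1
```

The sample mean attains the bound (efficiency near 1), while a single observation ignores the other $n-1$ data points (efficiency near $1/n$).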
Problem 1: If $x_1, x_2, \ldots, x_n$ are drawn from the population $N(\mu, \sigma^2)$, where $\sigma^2$ is known, find the MVB estimator for $\mu$ and also find its variance.

Solution: Since $X \sim N(\mu, \sigma^2)$, the p.d.f. is

$$f(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}; \quad -\infty < x < \infty,\ -\infty < \mu < \infty,\ \sigma^2 > 0$$

$$\Rightarrow\ L = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2}$$

$$\Rightarrow\ \log L = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu)$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\mu} = \frac{1}{\sigma^2}\left(\sum_{i=1}^{n}x_i - n\mu\right)$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\mu} = \frac{1}{\sigma^2}\,(n\bar{x} - n\mu)$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\mu} = \frac{n}{\sigma^2}\,(\bar{x} - \mu)$$

Hence $\bar{x}$ is an MVB unbiased estimator for $\mu$ and $\operatorname{var}(\hat{\mu}) = \operatorname{var}(\bar{x}) = \dfrac{\sigma^2}{n} = \dfrac{1}{A(\mu)}$, where $A(\mu) = \dfrac{n}{\sigma^2}$.
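The result can be checked numerically; a minimal simulation sketch (the parameter values and seed are arbitrary choices of mine):

```python
import numpy as np

# Arbitrary parameter choices for the check
rng = np.random.default_rng(1)
n, mu, sigma = 25, 3.0, 1.5
reps = 100_000

# Sample mean of each of `reps` samples of size n from N(mu, sigma^2)
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(xbar.mean())   # close to mu = 3.0 (unbiased)
print(xbar.var())    # close to sigma^2 / n = 0.09 (the MVB)
```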

Problem 2: If $x_1, x_2, \ldots, x_n$ are drawn from the population $N(0, \sigma^2)$, where $\sigma^2$ is unknown, find the MVB estimator for $\sigma^2$ and also find its variance.

Solution: Since $X \sim N(0, \sigma^2)$, the p.d.f. is

$$f(x;\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x}{\sigma}\right)^2}; \quad -\infty < x < \infty,\ \sigma^2 > 0$$

$$\Rightarrow\ L = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}x_i^2}$$

$$\Rightarrow\ \log L = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}x_i^2$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}x_i^2$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\sigma^2} = \frac{n}{2\sigma^4}\left(\frac{\sum_{i=1}^{n}x_i^2}{n} - \sigma^2\right)$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\sigma^2} = \frac{n}{2\sigma^4}\left(\frac{1}{n}\sum_{i=1}^{n}x_i^2 - \sigma^2\right)$$

Hence $\dfrac{1}{n}\sum_{i=1}^{n}x_i^2$ is an MVB unbiased estimator for $\sigma^2$ and $\operatorname{var}\left(\hat{\sigma}^2\right) = \dfrac{2\sigma^4}{n} = \dfrac{1}{A(\sigma^2)}$, where $A(\sigma^2) = \dfrac{n}{2\sigma^4}$.
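Again this can be verified by simulation; a minimal sketch (the parameter values and seed are arbitrary choices of mine):

```python
import numpy as np

# Arbitrary parameter choices for the check
rng = np.random.default_rng(2)
n, sigma = 20, 1.3
reps = 200_000

x = rng.normal(0.0, sigma, size=(reps, n))
s2hat = (x**2).mean(axis=1)   # the estimator (1/n) * sum of x_i^2

print(s2hat.mean())   # close to sigma^2 = 1.69 (unbiased)
print(s2hat.var())    # close to 2*sigma^4 / n ~= 0.2856 (the MVB)
```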

Problem 3: A random sample $x_1, x_2, \ldots, x_n$ is drawn from $P(\lambda)$. Find the MVB estimator for $\lambda$ and also find its variance.

Solution: Since $X \sim P(\lambda)$, the p.m.f. is

$$f(x;\lambda) = \frac{e^{-\lambda}\,\lambda^{x}}{x!}; \quad x = 0, 1, 2, \ldots,\ \lambda > 0$$

$$\Rightarrow\ L = \frac{e^{-n\lambda}\,\lambda^{\sum x_i}}{x_1!\, x_2! \cdots x_n!}$$

$$\Rightarrow\ \log L = -n\lambda + \sum_{i=1}^{n}x_i \log\lambda - \sum_{i=1}^{n}\log x_i!$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\lambda} = -n + \frac{\sum x_i}{\lambda}$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\lambda} = \frac{n\bar{x}}{\lambda} - n$$

$$\Rightarrow\ \frac{\partial\log L}{\partial\lambda} = \frac{n}{\lambda}\,(\bar{x} - \lambda)$$

Hence $\bar{x}$ is an MVB unbiased estimator for $\lambda$ and $\operatorname{var}(\hat{\lambda}) = \dfrac{\lambda}{n} = \dfrac{1}{A(\lambda)}$, where $A(\lambda) = \dfrac{n}{\lambda}$.
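A quick numerical check of the Poisson case (the parameter values and seed are arbitrary choices of mine):

```python
import numpy as np

# Arbitrary parameter choices for the check
rng = np.random.default_rng(3)
n, lam = 15, 4.0
reps = 200_000

xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)

print(xbar.mean())   # close to lambda = 4.0 (unbiased)
print(xbar.var())    # close to lambda / n ~= 0.2667 (the MVB)
```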
Problem 4: A random sample $x_1, x_2, \ldots, x_n$ is drawn from $b(1, p)$. Find the MVB estimator for $p$ and also find its variance.

Solution: Since $X \sim b(1, p)$, the p.m.f. is

$$f(x;p) = p^{x}(1-p)^{1-x}; \quad x = 0, 1,\ 0 < p < 1$$

$$\Rightarrow\ L = p^{\sum x_i}\,(1-p)^{\,n - \sum x_i}$$

$$\Rightarrow\ \log L = \sum_{i=1}^{n}x_i \log p + \left(n - \sum_{i=1}^{n}x_i\right)\log(1-p)$$

$$\Rightarrow\ \frac{\partial\log L}{\partial p} = \frac{\sum x_i}{p} - \frac{n - \sum_{i=1}^{n}x_i}{1-p}$$

$$\Rightarrow\ \frac{\partial\log L}{\partial p} = \frac{n\bar{x}}{p} - \frac{n - n\bar{x}}{1-p}$$

$$\Rightarrow\ \frac{\partial\log L}{\partial p} = \frac{n\bar{x} - pn\bar{x} - np + pn\bar{x}}{p(1-p)}$$

$$\Rightarrow\ \frac{\partial\log L}{\partial p} = \frac{n}{p(1-p)}\,(\bar{x} - p)$$

Hence $\bar{x}$ is an MVB unbiased estimator for $p$ and $\operatorname{var}(\hat{p}) = \dfrac{p(1-p)}{n} = \dfrac{1}{A(p)}$, where $A(p) = \dfrac{n}{p(1-p)}$.
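And a numerical check of the Bernoulli case (the parameter values and seed are arbitrary choices of mine):

```python
import numpy as np

# Arbitrary parameter choices for the check
rng = np.random.default_rng(4)
n, p = 30, 0.3
reps = 200_000

xbar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

print(xbar.mean())   # close to p = 0.3 (unbiased)
print(xbar.var())    # close to p*(1-p)/n = 0.007 (the MVB)
```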
