Graduate Course on the CMU/Portugal ECE PhD Program, Spring 2008/2009
Chapter 3: Cramér-Rao Lower Bounds
Instructor: Prof. Paulo Jorge Oliveira, pjcro@isr.ist.utl.pt, Phone: +351 21 8418053 or 2053 (inside IST)
PO 0809
Syllabus:
Classical Estimation Theory
Chap. 2 - Minimum Variance Unbiased Estimation [1 week]: unbiased estimators; minimum variance criterion; extension to vector parameters; efficiency of estimators.
Chap. 3 - Cramér-Rao Lower Bound [1 week]: estimator accuracy; Cramér-Rao lower bound (CRLB); CRLB for signals in white Gaussian noise; examples.
Chap. 4 - Linear Models in the Presence of Stochastic Signals [1 week]: stationary and transient analysis; white Gaussian noise and linear systems; examples; sufficient statistics; relation with MVU estimators. (continues)
Estimator accuracy:
The accuracy of the estimates depends very much on the PDF $p(x[0];\theta)$.

[Figure: two PDFs $p(x[0];\theta)$ with different concentrations, plotted for $x[0]$ from $-80$ to $100$.]

If $\sigma^2$ is large, then the performance of the estimator is poor; if $\sigma^2$ is small, then the performance of the estimator is good. Equivalently: if the PDF concentration is high, then the parameter accuracy is high. How can the sharpness (or concentration) of a PDF be measured?
Estimator accuracy:
When a PDF is seen as a function of the unknown parameter, with $\mathbf{x}$ fixed, it is called the likelihood function. To measure its sharpness, note that (ln is monotone), for the single observation $x[0] = A + w[0]$ with $w[0]\sim\mathcal{N}(0,\sigma^2)$,

\frac{\partial\ln p(x[0];A)}{\partial A} = \frac{1}{\sigma^2}\,(x[0]-A)

and

\frac{\partial^2\ln p(x[0];A)}{\partial A^2} = -\frac{1}{\sigma^2}.
As we know that the estimator $\hat A = x[0]$ has variance $\sigma^2$ (at least for this example),

\mathrm{var}(\hat A) = \frac{1}{-\,\partial^2\ln p(x[0];A)/\partial A^2}.

Theorem (Cramér-Rao lower bound, scalar parameter): it is assumed that the PDF $p(x;\theta)$ satisfies the regularity condition

E\left[\frac{\partial\ln p(x;\theta)}{\partial\theta}\right] = 0 \quad \text{for all } \theta, \qquad (1)

where the expectation is taken with respect to $p(x;\theta)$. Then, the variance of any unbiased estimator $\hat\theta$ must satisfy

\mathrm{var}(\hat\theta) \ge \frac{1}{-E\left[\dfrac{\partial^2\ln p(x;\theta)}{\partial\theta^2}\right]}, \qquad (2)

where the derivative is evaluated at the true value of $\theta$ and the expectation is taken with respect to $p(x;\theta)$. Furthermore, an unbiased estimator can be found that attains the bound for all $\theta$ if and only if

\frac{\partial\ln p(x;\theta)}{\partial\theta} = I(\theta)\,(g(x)-\theta) \qquad (3)

for some functions $g(\cdot)$ and $I(\cdot)$. That estimator, which is the MVU estimator, is $\hat\theta = g(x)$, and the minimum variance is $1/I(\theta)$.
Proof (scalar parameter): consider an unbiased estimator $\hat\alpha$ of $\alpha = g(\theta)$, i.e.

E[\hat\alpha] = \alpha = g(\theta), \quad\text{or}\quad \int \hat\alpha\, p(x;\theta)\,dx = g(\theta). \qquad (p.1)

The regularity condition (1) is satisfied, since

E\left[\frac{\partial\ln p(x;\theta)}{\partial\theta}\right] = \int \frac{\partial\ln p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = \int \frac{\partial p(x;\theta)}{\partial\theta}\,dx = \frac{\partial}{\partial\theta}\int p(x;\theta)\,dx = \frac{\partial}{\partial\theta}\,1 = 0.

Remark: differentiation and integration are required to be interchangeable (Leibniz rule)! Let us differentiate (p.1) with respect to $\theta$ and use the previous results.
\int \hat\alpha\,\frac{\partial p(x;\theta)}{\partial\theta}\,dx = \frac{\partial g(\theta)}{\partial\theta}, \quad\text{or}\quad \int \hat\alpha\,\frac{\partial\ln p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = \frac{\partial g(\theta)}{\partial\theta}.

As

\int g(\theta)\,\frac{\partial\ln p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = g(\theta)\,E\left[\frac{\partial\ln p(x;\theta)}{\partial\theta}\right] = 0,

subtracting it from the previous expression gives

\int \left(\hat\alpha - g(\theta)\right)\frac{\partial\ln p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = \frac{\partial g(\theta)}{\partial\theta},

considering the previous results. Applying the Cauchy-Schwarz inequality to the left-hand side then yields

\left(\frac{\partial g(\theta)}{\partial\theta}\right)^2 \le \mathrm{var}(\hat\alpha)\; E\left[\left(\frac{\partial\ln p(x;\theta)}{\partial\theta}\right)^2\right].
It remains to relate $E\left[\left(\partial\ln p(x;\theta)/\partial\theta\right)^2\right] = \int \left(\partial\ln p(x;\theta)/\partial\theta\right)^2 p(x;\theta)\,dx$ with the expression in the theorem. Starting with the previous result

E\left[\frac{\partial\ln p(x;\theta)}{\partial\theta}\right] = \int \frac{\partial\ln p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = 0,

and differentiating with respect to $\theta$,

\int\left[\frac{\partial^2\ln p(x;\theta)}{\partial\theta^2}\,p(x;\theta) + \frac{\partial\ln p(x;\theta)}{\partial\theta}\,\frac{\partial p(x;\theta)}{\partial\theta}\right]dx = \int\left[\frac{\partial^2\ln p(x;\theta)}{\partial\theta^2}\,p(x;\theta) + \left(\frac{\partial\ln p(x;\theta)}{\partial\theta}\right)^2 p(x;\theta)\right]dx = 0.

And finally

E\left[\frac{\partial^2\ln p(x;\theta)}{\partial\theta^2}\right] = -\,E\left[\left(\frac{\partial\ln p(x;\theta)}{\partial\theta}\right)^2\right].
The result (3) will be obtained next. See also Appendix 3B for the derivation in the vector case.
Example:
Example (DC level in white Gaussian noise):
Problem: find the MVU estimator.
Signal model: $x[n] = A + w[n]$, $n = 0, 1, \dots, N-1$, with $w[n]$ white Gaussian noise of variance $\sigma^2$.
Approach: compute the CRLB; if the derivative of the log-likelihood has the form (3), we have the MVU estimator.
Likelihood function:

\frac{\partial\ln p(\mathbf{x};A)}{\partial A} = \frac{\partial}{\partial A}\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right] = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A) = \frac{N}{\sigma^2}\,(\bar{x}-A).

CRLB:

\mathrm{var}(\hat A) \ge \frac{\sigma^2}{N}.

The sample-mean estimator $\hat A = \bar{x}$ is unbiased and has exactly this variance, thus it is the MVU estimator! And it has the form

\frac{\partial\ln p(\mathbf{x};A)}{\partial A} = I(A)\,(g(\mathbf{x})-A), \quad\text{with}\quad I(A) = \frac{N}{\sigma^2} \quad\text{and}\quad g(\mathbf{x}) = \bar{x}.
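A quick Monte Carlo experiment (a sketch with hypothetical parameter values, not from the slides) confirms that the sample mean attains the bound $\sigma^2/N$:

```python
import numpy as np

# Monte Carlo check that the sample mean attains the CRLB sigma^2/N
# for x[n] = A + w[n], w[n] ~ N(0, sigma^2).
rng = np.random.default_rng(1)
A, sigma2, N, trials = 5.0, 4.0, 20, 100_000

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
A_hat = x.mean(axis=1)         # sample-mean estimator g(x) = x-bar

crlb = sigma2 / N              # CRLB from the example (0.2 here)
emp_var = A_hat.var()          # empirical variance of the estimator

print(crlb, emp_var)           # both approximately 0.2
```

The empirical variance matches the bound, as expected for an efficient estimator.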
Example:
Example (phase estimation):
Signal model: $x[n] = A\cos(2\pi f_0 n + \phi) + w[n]$, $n = 0, 1, \dots, N-1$, with known amplitude $A$ and frequency $f_0$, unknown phase $\phi$, and $w[n]$ white Gaussian noise of variance $\sigma^2$.
Likelihood function:

p(\mathbf{x};\phi) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(2\pi f_0 n+\phi)\right)^2\right].

Example (phase estimation cont.): as

\frac{1}{N}\sum_{n=0}^{N-1}\sin^2(2\pi f_0 n + \phi) \approx \frac{1}{2}

for large $N$, the CRLB becomes

\mathrm{var}(\hat\phi) \ge \frac{\sigma^2}{A^2\sum_{n=0}^{N-1}\sin^2(2\pi f_0 n+\phi)} \approx \frac{2\sigma^2}{N A^2}.

The bound decreases as the SNR $= A^2/(2\sigma^2)$ increases; the bound decreases as $N$ increases. Does an efficient estimator exist? Does an MVU estimator exist?
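The quality of the large-$N$ approximation $\sum_n \sin^2(2\pi f_0 n + \phi) \approx N/2$ can be checked directly; a small sketch with hypothetical values:

```python
import numpy as np

# Exact vs. large-N Fisher information for phase estimation:
# I(phi) = (A^2/sigma^2) * sum_n sin^2(2*pi*f0*n + phi) ~ N*A^2/(2*sigma^2).
A, sigma2, f0, phi = 1.0, 0.5, 0.08, 0.4
N = 1000
n = np.arange(N)

I_exact = (A**2 / sigma2) * np.sum(np.sin(2 * np.pi * f0 * n + phi) ** 2)
I_approx = N * A**2 / (2 * sigma2)

crlb_exact = 1.0 / I_exact     # exact CRLB for var(phi-hat)
crlb_approx = 2 * sigma2 / (N * A**2)
print(crlb_exact, crlb_approx) # nearly equal for large N
```

For $f_0$ not too close to $0$ or $1/2$, the oscillating term averages out and the two bounds agree to well under one percent.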
Fisher information:
We define the Fisher information as

I(\theta) = -\,E\left[\frac{\partial^2\ln p(\mathbf{x};\theta)}{\partial\theta^2}\right],

so that the CRLB reads $\mathrm{var}(\hat\theta) \ge 1/I(\theta)$. For independent observations the information is additive,

I(\theta) = -\,E\left[\frac{\partial^2\ln p(\mathbf{x};\theta)}{\partial\theta^2}\right] = \sum_{n=0}^{N-1} -\,E\left[\frac{\partial^2\ln p(x[n];\theta)}{\partial\theta^2}\right],

and for identically distributed observations the CRLB thus decreases as $1/N$.
Transformation of parameters:
Imagine that the CRLB is known for the parameter $\theta$. Can we easily compute the CRLB for a linear transformation of the form $\alpha = g(\theta) = a\theta + b$? Since

E[a\hat\theta + b] = a\,E[\hat\theta] + b = a\theta + b = \alpha,

the estimator $\hat\alpha = a\hat\theta + b$ remains unbiased, and the bound scales with the squared derivative of the transformation:

\mathrm{var}(\hat\alpha) \ge \left(\frac{\partial g}{\partial\theta}\right)^2\frac{1}{I(\theta)} = \frac{a^2}{I(\theta)}.

Hence efficiency is preserved under affine transformations.
Transformation of parameters:
Remark: after a nonlinear transformation, the good properties can be lost.
Example: suppose that, given the DC-level data $x[n] = A + w[n]$, we desire an estimator for $\alpha = g(A) = A^2$ (a power estimator). Note that the natural candidate $\bar{x}^2$ is biased:

E[\bar{x}^2] = E[\bar{x}]^2 + \mathrm{var}(\bar{x}) = A^2 + \frac{\sigma^2}{N} \ne A^2.

The bias vanishes only as $N\to\infty$; efficiency is maintained asymptotically.
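A short simulation (hypothetical values, not from the slides) makes the bias $\sigma^2/N$ visible:

```python
import numpy as np

# Monte Carlo check that x-bar^2 is a biased estimator of A^2:
# E[x-bar^2] = A^2 + sigma^2/N (the bias shrinks as N grows).
rng = np.random.default_rng(2)
A, sigma2, N, trials = 2.0, 9.0, 25, 200_000

xbar = rng.normal(A, np.sqrt(sigma2), size=(trials, N)).mean(axis=1)
power_hat = xbar**2            # nonlinear transformation g(A) = A^2

bias_theory = sigma2 / N       # predicted bias, 0.36 here
print(power_hat.mean())        # approximately A^2 + sigma2/N = 4.36
```

Averaged over many trials, the estimator overshoots $A^2 = 4$ by almost exactly $\sigma^2/N$, confirming the loss of unbiasedness under the nonlinear map.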
Theorem (Cramér-Rao lower bound, vector parameter): assuming the regularity condition $E[\partial\ln p(\mathbf{x};\boldsymbol{\theta})/\partial\boldsymbol{\theta}] = \mathbf{0}$, the covariance matrix of any unbiased estimator $\hat{\boldsymbol{\theta}}$ satisfies

\mathbf{C}_{\hat{\boldsymbol{\theta}}} - \mathbf{I}^{-1}(\boldsymbol{\theta}) \ge \mathbf{0},

where $\ge \mathbf{0}$ is interpreted as meaning the matrix is positive semi-definite. The Fisher information matrix $\mathbf{I}(\boldsymbol{\theta})$ is given as

[\mathbf{I}(\boldsymbol{\theta})]_{ij} = -\,E\left[\frac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_i\,\partial\theta_j}\right],

where the derivatives are evaluated at the true value of $\boldsymbol{\theta}$ and the expectation is taken with respect to $p(\mathbf{x};\boldsymbol{\theta})$. Furthermore, an unbiased estimator may be found that attains the bound for all $\boldsymbol{\theta}$ if and only if

\frac{\partial\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} = \mathbf{I}(\boldsymbol{\theta})\left(\mathbf{g}(\mathbf{x}) - \boldsymbol{\theta}\right) \qquad (3)

for some $p$-dimensional function $\mathbf{g}(\cdot)$ and some $p\times p$ matrix $\mathbf{I}(\cdot)$. The estimator, which is the MVU estimator, is $\hat{\boldsymbol{\theta}} = \mathbf{g}(\mathbf{x})$, and its covariance matrix is $\mathbf{I}^{-1}(\boldsymbol{\theta})$.
Transformation of parameters (vector case): for $\boldsymbol{\alpha} = \mathbf{g}(\boldsymbol{\theta})$, with $\mathbf{g}:\mathbb{R}^p\to\mathbb{R}^r$,

\mathbf{C}_{\hat{\boldsymbol{\alpha}}} - \frac{\partial\mathbf{g}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,\mathbf{I}^{-1}(\boldsymbol{\theta})\,\frac{\partial\mathbf{g}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}^{T} \ge \mathbf{0},

where the Jacobian is the $r\times p$ matrix

\frac{\partial\mathbf{g}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} = \begin{bmatrix} \dfrac{\partial g_1(\boldsymbol{\theta})}{\partial\theta_1} & \cdots & \dfrac{\partial g_1(\boldsymbol{\theta})}{\partial\theta_p} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial g_r(\boldsymbol{\theta})}{\partial\theta_1} & \cdots & \dfrac{\partial g_r(\boldsymbol{\theta})}{\partial\theta_p} \end{bmatrix}.

In the general Gaussian case $\mathbf{x}\sim\mathcal{N}(\boldsymbol{\mu}(\boldsymbol{\theta}),\mathbf{C}(\boldsymbol{\theta}))$, e.g. $x[n] = s[n] + w[n]$, the Fisher information matrix is

[\mathbf{I}(\boldsymbol{\theta})]_{ij} = \left[\frac{\partial\boldsymbol{\mu}(\boldsymbol{\theta})}{\partial\theta_i}\right]^{T}\mathbf{C}^{-1}(\boldsymbol{\theta})\left[\frac{\partial\boldsymbol{\mu}(\boldsymbol{\theta})}{\partial\theta_j}\right] + \frac{1}{2}\,\mathrm{tr}\left[\mathbf{C}^{-1}(\boldsymbol{\theta})\frac{\partial\mathbf{C}(\boldsymbol{\theta})}{\partial\theta_i}\,\mathbf{C}^{-1}(\boldsymbol{\theta})\frac{\partial\mathbf{C}(\boldsymbol{\theta})}{\partial\theta_j}\right].
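When $\mathbf{C}$ does not depend on $\boldsymbol{\theta}$, only the first term survives. As a sketch (with hypothetical numeric values), the snippet below evaluates that term for the linear mean model $\mu[n] = A + Bn$ with $\mathbf{C} = \sigma^2\mathbf{I}$, which is exactly the line-fitting example that follows:

```python
import numpy as np

# General Gaussian FIM, mean-derivative term only (C independent of theta):
# [I(theta)]_ij = (d mu / d theta_i)^T  C^{-1}  (d mu / d theta_j),
# evaluated for the line model mu[n] = A + B*n, C = sigma^2 * I.
sigma2, N = 2.0, 10
n = np.arange(N)
C_inv = np.eye(N) / sigma2

dmu_dA = np.ones(N)            # d mu / dA
dmu_dB = n.astype(float)       # d mu / dB
J = np.column_stack([dmu_dA, dmu_dB])

I_fim = J.T @ C_inv @ J        # 2x2 Fisher information matrix

# Closed-form counterpart (sums of n and n^2):
I_closed = np.array([[N, N*(N-1)/2],
                     [N*(N-1)/2, N*(N-1)*(2*N-1)/6]]) / sigma2
print(I_fim)
```

The matrix built from the mean derivatives reproduces the closed-form sums exactly.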
Example:
Example (line fitting):
Signal model: $x[n] = A + Bn + w[n]$, $n = 0, 1, \dots, N-1$, with $w[n]$ white Gaussian noise of variance $\sigma^2$ and $\boldsymbol{\theta} = [A\;\,B]^T$.
Likelihood function:

p(\mathbf{x};\boldsymbol{\theta}) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A-Bn)^2\right].

The Fisher information matrix is

\mathbf{I}(\boldsymbol{\theta}) = \begin{bmatrix} -E\left[\dfrac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2}\right] & -E\left[\dfrac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial B}\right] \\ -E\left[\dfrac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial B\,\partial A}\right] & -E\left[\dfrac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial B^2}\right] \end{bmatrix},

with first derivatives

\frac{\partial\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A-Bn) \quad\text{and}\quad \frac{\partial\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial B} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-A-Bn)\,n.
Example:
Example (cont.): moreover

\frac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2} = -\frac{N}{\sigma^2}, \qquad \frac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial B} = -\frac{1}{\sigma^2}\sum_{n=0}^{N-1} n, \qquad \frac{\partial^2\ln p(\mathbf{x};\boldsymbol{\theta})}{\partial B^2} = -\frac{1}{\sigma^2}\sum_{n=0}^{N-1} n^2.

Since the second-order derivatives do not depend on $\mathbf{x}$, we have immediately that

\mathbf{I}(\boldsymbol{\theta}) = \frac{1}{\sigma^2}\begin{bmatrix} N & \dfrac{N(N-1)}{2} \\ \dfrac{N(N-1)}{2} & \dfrac{N(N-1)(2N-1)}{6} \end{bmatrix},

and also

\mathbf{I}^{-1}(\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{2(2N-1)\sigma^2}{N(N+1)} & -\dfrac{6\sigma^2}{N(N+1)} \\ -\dfrac{6\sigma^2}{N(N+1)} & \dfrac{12\sigma^2}{N(N^2-1)} \end{bmatrix}.
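The closed-form inverse can be verified numerically; a sketch with hypothetical parameter values:

```python
import numpy as np

# Numeric check of the closed-form inverse Fisher matrix for line fitting:
# I(theta) = (1/sigma^2) [[N, sum n], [sum n, sum n^2]].
def crlb_line(N, sigma2):
    n = np.arange(N)
    I = np.array([[N, n.sum()], [n.sum(), (n**2).sum()]]) / sigma2
    return np.linalg.inv(I)    # diagonal entries bound var(A-hat), var(B-hat)

sigma2, N = 2.0, 20
C = crlb_line(N, sigma2)

var_A_bound = 2 * (2*N - 1) * sigma2 / (N * (N + 1))   # closed form for A
var_B_bound = 12 * sigma2 / (N * (N**2 - 1))           # closed form for B
print(C[0, 0], C[1, 1])
```

The numeric inverse matches the closed-form diagonal entries, and evaluating `crlb_line` for growing `N` shows the $1/N$ versus $1/N^3$ decay of the two bounds.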
Example:
Example (cont.): Remarks: if only the parameter $A$ were to be determined, the CRLB would be $\sigma^2/N \le \frac{2(2N-1)}{N+1}\frac{\sigma^2}{N}$. Thus a general result was obtained: when more parameters are to be estimated, the CRLB always degrades. Moreover, the parameter $B$ is easier to determine, as its CRLB decreases with $1/N^3$. This means that $x[n]$ is more sensitive to changes in $B$ than to changes in $A$.
Bibliography:
Further reading:
Harry L. Van Trees, Detection, Estimation, and Modulation Theory, Parts I-IV, John Wiley, 2001.
J. Bibby, H. Toutenburg, Prediction and Improved Estimation in Linear Models, John Wiley, 1977.
C. R. Rao, Linear Statistical Inference and Its Applications, John Wiley, 1973.
P. Stoica, R. Moses, "On Biased Estimators and the Unbiased Cramér-Rao Lower Bound", Signal Processing, vol. 21, pp. 349-350, 1990.