
Delft University of Technology

Faculty of Electrical Engineering, Mathematics, and Computer Science


Circuits and Systems Group

ET 4386 Estimation and Detection


Richard C. Hendriks

Example 1 - Lecture 6

This example is a continuation of an example given in lecture 5.


Let $x_1, \ldots, x_N$ be i.i.d. measurements from a Poisson($\lambda$) distribution with marginal pmf
$$p(x_n; \lambda) = e^{-\lambda} \frac{\lambda^{x_n}}{x_n!},$$
and with expected value $E[x_n] = \lambda$.

(a) Calculate $\frac{\partial \ln p(x_n; \lambda)}{\partial \lambda}$ and show that the regularity condition is satisfied.

(b) Determine the CRLB for $\mathrm{Var}[\hat{\lambda}]$ under the pmf $p(x; \lambda)$.

(c) Give the MVU estimator for λ.

In the following we will consider $\lambda$ to be a random variable instead of a deterministic parameter. Let the distribution of $\lambda$ be uniform on the interval $[0, c]$.

(d) Calculate the MMSE estimator $\hat{\lambda} = E[\lambda | x_1, \ldots, x_N]$.


Make use of the following relation:
$$\int_0^u x^{\nu-1} e^{-\mu x} \, dx = \mu^{-\nu} \gamma(\nu, \mu u), \qquad (1)$$
where $\gamma(s, x)$ is known as the lower incomplete gamma function, defined as $\gamma(s, x) = \int_0^x t^{s-1} e^{-t} \, dt$.

(e) Calculate the MAP estimator $\hat{\lambda}_{\mathrm{MAP}}$.

Now let the distribution of $\lambda$ be exponential with pdf $p(\lambda; a) = a e^{-a\lambda}$ for $\lambda \geq 0$, with $E[\lambda] = \frac{1}{a}$ and $\mathrm{var}[\lambda] = \frac{1}{a^2}$.

(f) Calculate the MMSE estimator $\hat{\lambda} = E[\lambda | x_1, \ldots, x_N]$.


Make use of the following relation:
$$\int_0^\infty x^{\nu-1} e^{-\mu x} \, dx = \mu^{-\nu} \Gamma(\nu). \qquad (2)$$

(g) Calculate the MAP estimator $\hat{\lambda}_{\mathrm{MAP}}$.


Answer Example 1 - Lecture 6

(a)
$$p(x; \lambda) = \prod_{n=1}^N e^{-\lambda} \frac{\lambda^{x_n}}{x_n!} = \frac{e^{-N\lambda} \, \lambda^{\sum_{n=1}^N x_n}}{\prod_{n=1}^N x_n!}$$

$$\frac{\partial \ln p(x; \lambda)}{\partial \lambda} = -N + \frac{\sum_{n=1}^N x_n}{\lambda}$$

The regularity condition holds as
$$E\left[\frac{\partial \ln p(x; \lambda)}{\partial \lambda}\right] = -N + \frac{\sum_{n=1}^N E[x_n]}{\lambda} = -N + \frac{N\lambda}{\lambda} = 0.$$

(b)
$$\frac{\partial^2 \ln p(x; \lambda)}{\partial \lambda^2} = -\frac{\sum_{n=1}^N x_n}{\lambda^2}, \qquad E\left[\frac{\partial^2 \ln p(x; \lambda)}{\partial \lambda^2}\right] = -\frac{N}{\lambda}.$$
The CRLB is then given by
$$\mathrm{Var}[\hat{\lambda}] \geq \frac{1}{-E\left[\frac{\partial^2 \ln p(x; \lambda)}{\partial \lambda^2}\right]} = \frac{\lambda}{N}.$$

(c) From question (a) we know that
$$\frac{\partial \ln p(x; \lambda)}{\partial \lambda} = -N + \frac{\sum_{n=1}^N x_n}{\lambda}.$$
This can be rewritten as
$$\frac{\partial \ln p(x; \lambda)}{\partial \lambda} = \underbrace{\frac{N}{\lambda}}_{I(\lambda)} \left( \underbrace{\frac{\sum_{n=1}^N x_n}{N}}_{\hat{\lambda}} - \lambda \right).$$
This is exactly the form
$$\frac{\partial \ln p(x; \lambda)}{\partial \lambda} = I(\lambda)(\hat{\lambda} - \lambda).$$
The MVU estimator is thus given by $\hat{\lambda} = \frac{\sum_{n=1}^N x_n}{N}$.
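As a quick sanity check of (b) and (c), the following sketch (not part of the original solution; the values of `lam`, `N` and `trials` are arbitrary choices) estimates the mean and variance of the sample-mean estimator by Monte Carlo simulation and compares the variance against the CRLB $\lambda/N$:

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    # Knuth's multiplication method for a Poisson(lam) draw (fine for small lam)
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
lam, N, trials = 3.0, 50, 20_000

# one sample-mean estimate per trial
estimates = [sum(poisson_sample(lam, rng) for _ in range(N)) / N
             for _ in range(trials)]

mean_est = statistics.mean(estimates)
var_est = statistics.variance(estimates)
crlb = lam / N  # = 0.06 here

print(mean_est)  # close to lam = 3.0 (unbiased)
print(var_est)   # close to the CRLB 0.06
```

The sample mean is unbiased and its empirical variance sits at the CRLB, consistent with it being the MVU estimator.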

(d)
$$\hat{\lambda} = E[\lambda | x_1, \ldots, x_N] = \frac{\int_0^c \lambda \, e^{-N\lambda} \frac{\lambda^{\sum_{n=1}^N x_n}}{\prod_{n=1}^N x_n!} \frac{1}{c} \, d\lambda}{\int_0^c e^{-N\lambda} \frac{\lambda^{\sum_{n=1}^N x_n}}{\prod_{n=1}^N x_n!} \frac{1}{c} \, d\lambda} = \frac{\int_0^c e^{-N\lambda} \lambda^{1 + \sum_{n=1}^N x_n} \, d\lambda}{\int_0^c e^{-N\lambda} \lambda^{\sum_{n=1}^N x_n} \, d\lambda}.$$

Using Eq. (1) we then obtain
$$E[\lambda | x_1, \ldots, x_N] = \frac{N^{-(2 + \sum_{n=1}^N x_n)} \, \gamma\big(2 + \sum_{n=1}^N x_n, Nc\big)}{N^{-(1 + \sum_{n=1}^N x_n)} \, \gamma\big(1 + \sum_{n=1}^N x_n, Nc\big)} = \frac{1}{N} \frac{\gamma\big(2 + \sum_{n=1}^N x_n, Nc\big)}{\gamma\big(1 + \sum_{n=1}^N x_n, Nc\big)}.$$

Notice that
$$\lim_{x \to \infty} \gamma(s, x) = \Gamma(s)$$
and that $\Gamma(x+1)/\Gamma(x) = x$. For $c \to \infty$ we thus obtain
$$\lim_{c \to \infty} E[\lambda | x_1, \ldots, x_N] = \frac{1}{N} \frac{\Gamma\big(2 + \sum_{n=1}^N x_n\big)}{\Gamma\big(1 + \sum_{n=1}^N x_n\big)} = \frac{1 + \sum_{n=1}^N x_n}{N}.$$
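The $c \to \infty$ limit can be checked numerically. The sketch below (hypothetical numbers; $N$, the data sum and $c$ are arbitrary choices) approximates the two posterior integrals with a midpoint Riemann sum on a large interval $[0, c]$ and compares the ratio against the closed form $(1 + \sum_n x_n)/N$:

```python
import math

N = 10
x_sum = 27            # hypothetical value of x_1 + ... + x_N
c = 20.0              # large enough that essentially all posterior mass lies in [0, c]
steps = 200_000
d = c / steps

num = den = 0.0
for i in range(steps):
    lam = (i + 0.5) * d
    # unnormalised posterior weight; the 1/c prior factor and the
    # 1/prod(x_n!) factor cancel between numerator and denominator
    w = math.exp(-N * lam + x_sum * math.log(lam))
    num += lam * w
    den += w

mmse_numeric = num / den          # the step widths cancel in the ratio
mmse_limit = (1 + x_sum) / N      # closed-form c -> infinity limit: 2.8

print(mmse_numeric)  # approx 2.8
```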

(e) $\hat{\lambda}_{\mathrm{MAP}} = \arg\max_\lambda \log p(\lambda | x_1, \ldots, x_N)$. The function $\log p(\lambda | x_1, \ldots, x_N)$ is concave for $\lambda \in [0, c]$.
$$\frac{d}{d\lambda} \log p(\lambda | x_1, \ldots, x_N) = \frac{d}{d\lambda} \left[ \log p(\lambda) + \log p(x_1, \ldots, x_N | \lambda) - \log p(x_1, \ldots, x_N) \right] = \frac{d}{d\lambda} \left[ -N\lambda + \Big( \sum_{n=1}^N x_n \Big) \log \lambda \right] = -N + \frac{\sum_{n=1}^N x_n}{\lambda},$$
where all terms that do not depend on $\lambda$ have been dropped (in particular, the uniform prior $\log p(\lambda)$ is constant on $[0, c]$).

Thus $\hat{\lambda}_{\mathrm{MAP}} = \frac{\sum_{n=1}^N x_n}{N}$ if $\frac{\sum_{n=1}^N x_n}{N} < c$. Otherwise, $\hat{\lambda}_{\mathrm{MAP}} = c$.
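The clipping at $c$ can be verified by brute force. This sketch (arbitrary illustrative values) maximises the log-posterior, up to $\lambda$-independent constants $-N\lambda + (\sum_n x_n)\log\lambda$, over a fine grid on $(0, c]$, once with the sample mean inside $[0, c]$ and once with it outside:

```python
import math

def log_post(lam, N, x_sum):
    # log-posterior under the uniform prior, up to lambda-independent constants
    return -N * lam + x_sum * math.log(lam)

N, x_sum = 5, 40                  # sample mean sum(x)/N = 8.0

results = []
for c in (10.0, 6.0):             # one case with mean < c, one clipped at c
    grid = [c * (i + 1) / 100_000 for i in range(100_000)]
    lam_map = max(grid, key=lambda l: log_post(l, N, x_sum))
    results.append(lam_map)
    print(lam_map, min(x_sum / N, c))  # grid maximiser vs. closed form
```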

(f)
$$\hat{\lambda} = E[\lambda | x_1, \ldots, x_N] = \frac{\int_0^\infty \lambda \, e^{-N\lambda} \frac{\lambda^{\sum_{n=1}^N x_n}}{\prod_{n=1}^N x_n!} \, a e^{-a\lambda} \, d\lambda}{\int_0^\infty e^{-N\lambda} \frac{\lambda^{\sum_{n=1}^N x_n}}{\prod_{n=1}^N x_n!} \, a e^{-a\lambda} \, d\lambda} = \frac{\int_0^\infty e^{-(a+N)\lambda} \lambda^{1 + \sum_{n=1}^N x_n} \, d\lambda}{\int_0^\infty e^{-(a+N)\lambda} \lambda^{\sum_{n=1}^N x_n} \, d\lambda}$$

Using Eq. (2) we then obtain
$$\hat{\lambda} = \frac{\dfrac{\Gamma\big(2 + \sum_{n=1}^N x_n\big)}{(N+a)^{2 + \sum_{n=1}^N x_n}}}{\dfrac{\Gamma\big(1 + \sum_{n=1}^N x_n\big)}{(N+a)^{1 + \sum_{n=1}^N x_n}}} = \frac{1 + \sum_{n=1}^N x_n}{N + a}.$$

If we define $\alpha = \frac{N}{N+a}$, we can write
$$E[\lambda | x_1, \ldots, x_N] = \alpha \frac{\sum_{n=1}^N x_n}{N} + (1 - \alpha) E[\lambda],$$
which clearly shows the trade-off between the data $(x_1, \ldots, x_N)$ and the information of the prior by means of $E[\lambda]$.
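The shrinkage identity above is a simple algebraic fact and can be confirmed with hypothetical numbers ($N$, $a$ and the data sum below are arbitrary choices for illustration):

```python
N, a = 20, 4
x_sum = 55                      # hypothetical sum of the observations

mmse = (1 + x_sum) / (N + a)    # closed form from part (f): 56/24

alpha = N / (N + a)
sample_mean = x_sum / N
prior_mean = 1 / a              # E[lambda] for the exponential prior
combo = alpha * sample_mean + (1 - alpha) * prior_mean

print(mmse, combo)  # both equal 56/24, approx 2.3333
```

For large $N$ (relative to $a$), $\alpha \to 1$ and the estimate is dominated by the sample mean; for small $N$ it is pulled towards the prior mean $1/a$.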

(g) $\hat{\lambda}_{\mathrm{MAP}} = \arg\max_\lambda \log p(\lambda | x_1, \ldots, x_N)$. The function $\log p(\lambda | x_1, \ldots, x_N)$ is concave.
$$\frac{d}{d\lambda} \log p(\lambda | x_1, \ldots, x_N) = \frac{d}{d\lambda} \left[ \log p(\lambda) + \log p(x_1, \ldots, x_N | \lambda) - \log p(x_1, \ldots, x_N) \right] = \frac{d}{d\lambda} \left[ -(a + N)\lambda + \Big( \sum_{n=1}^N x_n \Big) \log \lambda \right] = -(a + N) + \frac{\sum_{n=1}^N x_n}{\lambda}.$$

Thus $\hat{\lambda}_{\mathrm{MAP}} = \frac{\sum_{n=1}^N x_n}{N + a}$.
