
An Introduction to Signal Detection and Estimation - Second Edition


Chapter II: Selected Solutions
H. V. Poor
Princeton University
March 16, 2005

Exercise 2:
The likelihood ratio is given by
$$L(y) = \frac{3}{2(y+1)}, \qquad 0 \le y \le 1.$$

a. With uniform costs and equal priors, the critical region for minimum Bayes error is
given by $\{y \in [0,1] \mid L(y) \ge 1\} = \{y \in [0,1] \mid 3 \ge 2(y+1)\} = [0, 1/2]$. Thus the Bayes rule
is given by
$$\delta_B(y) = \begin{cases} 1 & \text{if } 0 \le y \le 1/2 \\ 0 & \text{if } 1/2 < y \le 1 \end{cases}.$$
The corresponding minimum Bayes risk is
$$r(\delta_B) = \frac{1}{2}\left[ \int_0^{1/2} \frac{2}{3}(y+1)\,dy + \int_{1/2}^{1} dy \right] = \frac{11}{24}.$$
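This value is easy to check numerically; the sketch below assumes the densities implied by $L(y)$, namely $p_0(y) = \frac{2}{3}(y+1)$ and $p_1(y) \equiv 1$ on $[0,1]$.

```python
# Numerical sanity check of the Bayes risk in Exercise 2a.
# Assumes p0(y) = (2/3)(y+1) and p1(y) = 1 on [0,1], as implied by L(y) = 3/(2(y+1)).
from scipy.integrate import quad

p0 = lambda y: (2.0 / 3.0) * (y + 1.0)   # density under H0
p1 = lambda y: 1.0                       # density under H1 (uniform)

# The Bayes rule with equal priors decides 1 on [0, 1/2].
P_F, _ = quad(p0, 0.0, 0.5)   # false-alarm probability
P_M, _ = quad(p1, 0.5, 1.0)   # miss probability
r = 0.5 * P_F + 0.5 * P_M
print(r, 11.0 / 24.0)         # both print 0.45833...
```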

b. With uniform costs, the least-favorable prior will be interior to (0, 1), so we examine
the conditional risks of Bayes rules for an equalizer condition. The critical region for the
Bayes rule $\delta_{\pi_0}$ is given by
$$\Gamma_1 = \left\{ y \in [0,1] \,\middle|\, L(y) \ge \frac{\pi_0}{1-\pi_0} \right\} = [0, \eta],$$
where
$$\eta = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\ \frac{3(1-\pi_0)}{2\pi_0} - 1 & \text{if } 3/7 < \pi_0 < 3/5 \\ 0 & \text{if } 3/5 \le \pi_0 \le 1 \end{cases}.$$

Thus, the conditional risks are:
$$R_0(\pi_0) = \int_0^{\eta} \frac{2}{3}(y+1)\,dy = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\ \frac{2\eta}{3}\left(\frac{\eta}{2}+1\right) & \text{if } 3/7 < \pi_0 < 3/5 \\ 0 & \text{if } 3/5 \le \pi_0 \le 1 \end{cases}$$
and
$$R_1(\pi_0) = \int_{\eta}^{1} dy = \begin{cases} 0 & \text{if } 0 \le \pi_0 \le 3/7 \\ 1-\eta & \text{if } 3/7 < \pi_0 < 3/5 \\ 1 & \text{if } 3/5 \le \pi_0 \le 1 \end{cases}.$$
By inspection, a minimax threshold $\eta_L$ is the solution to the equation
$$\frac{2\eta_L}{3}\left(\frac{\eta_L}{2}+1\right) = 1 - \eta_L,$$
which yields $\eta_L = (\sqrt{37}-5)/2$. The minimax risk is the value of the equalized conditional
risk; i.e., $V(\pi_L) = 1 - \eta_L$.
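A quick check that $\eta_L$ equalizes the two conditional risks (the equalizer equation reduces to $\eta^2 + 5\eta - 3 = 0$):

```python
# Check of the minimax threshold in Exercise 2b.
import math

eta_L = (math.sqrt(37.0) - 5.0) / 2.0           # claimed root, ~0.5414
R0 = (2.0 * eta_L / 3.0) * (eta_L / 2.0 + 1.0)  # false-alarm risk
R1 = 1.0 - eta_L                                # miss risk
print(R0, R1)   # equal, ~0.4586 = minimax risk
```
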
c. The Neyman-Pearson test is given by
$$\tilde\delta_{NP}(y) = \begin{cases} 1 & \text{if } \frac{3}{2(y+1)} > \eta \\ \gamma & \text{if } \frac{3}{2(y+1)} = \eta \\ 0 & \text{if } \frac{3}{2(y+1)} < \eta \end{cases},$$
where $\eta \ge 0$ and $\gamma$ are chosen to give false-alarm probability $\alpha$. Since $L(y)$ is monotone
decreasing in $y$, the above test is equivalent to
$$\tilde\delta_{NP}(y) = \begin{cases} 1 & \text{if } y < \eta' \\ \gamma & \text{if } y = \eta' \\ 0 & \text{if } y > \eta' \end{cases},$$
where $\eta' = \frac{3}{2\eta} - 1$. Since $Y$ is a continuous random variable, we can ignore the randomization. Thus, the false-alarm probability is:

$$P_F(\tilde\delta_{NP}) = P_0(Y < \eta') = \int_0^{\eta'} \frac{2}{3}(y+1)\,dy = \begin{cases} 0 & \text{if } \eta' \le 0 \\ \frac{2\eta'}{3}\left(\frac{\eta'}{2}+1\right) & \text{if } 0 < \eta' < 1 \\ 1 & \text{if } \eta' \ge 1 \end{cases}.$$

The threshold for $P_F(\tilde\delta_{NP}) = \alpha$ is the solution to
$$\frac{2\eta'}{3}\left(\frac{\eta'}{2}+1\right) = \alpha,$$
which is $\eta' = \sqrt{1+3\alpha} - 1$. So, an $\alpha$-level Neyman-Pearson test is
$$\tilde\delta_{NP}(y) = \begin{cases} 1 & \text{if } y \le \sqrt{1+3\alpha}-1 \\ 0 & \text{if } y > \sqrt{1+3\alpha}-1 \end{cases}.$$
The detection probability is
$$P_D(\tilde\delta_{NP}) = \int_0^{\eta'} dy = \eta' = \sqrt{1+3\alpha}-1, \qquad 0 < \alpha < 1.$$
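A Monte Carlo sketch of this test; samples under $H_0$ are drawn by inverting the cdf $F_0(y) = (y^2+2y)/3$, which happens to have the same functional form as the threshold map.

```python
# Spot-check of the alpha-level NP test of Exercise 2c via Monte Carlo.
import math, random

alpha = 0.2
eta_p = math.sqrt(1.0 + 3.0 * alpha) - 1.0   # threshold eta'

N = 200_000
# Inverse-cdf sampling under H0: solve (y^2 + 2y)/3 = u  =>  y = sqrt(1+3u) - 1
h0 = [math.sqrt(1.0 + 3.0 * random.random()) - 1.0 for _ in range(N)]
h1 = [random.random() for _ in range(N)]      # p1 is uniform on [0,1]

P_F = sum(y <= eta_p for y in h0) / N
P_D = sum(y <= eta_p for y in h1) / N
print(P_F, alpha)    # ~0.2
print(P_D, eta_p)    # ~0.265
```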

Exercise 4:
Here the likelihood ratio is given by
$$L(y) = \frac{\sqrt{2/\pi}\, e^{-y^2/2}}{e^{-y}} = \sqrt{\frac{2e}{\pi}}\; e^{-(y-1)^2/2}, \qquad y \ge 0.$$

a. Thus, Bayes critical regions are of the form
$$\Gamma_1 = \left\{ y \ge 0 \,\middle|\, (y-1)^2 \le \tau \right\},$$
where $\tau = 2\log\left(\sqrt{\frac{2e}{\pi}}\,\frac{1-\pi_0}{\pi_0}\right)$. There are three cases:
$$\Gamma_1 = \begin{cases} \emptyset & \text{if } \tau < 0 \\ \left[1-\sqrt{\tau},\; 1+\sqrt{\tau}\,\right] & \text{if } 0 \le \tau \le 1 \\ \left[0,\; 1+\sqrt{\tau}\,\right] & \text{if } \tau > 1 \end{cases}.$$

The condition $\tau < 0$ is equivalent to $\pi_0'' < \pi_0 \le 1$, where $\pi_0'' = \frac{\sqrt{2e/\pi}}{1+\sqrt{2e/\pi}}$; the condition
$0 \le \tau \le 1$ is equivalent to $\pi_0' \le \pi_0 \le \pi_0''$, where $\pi_0' = \frac{\sqrt{2/\pi}}{1+\sqrt{2/\pi}}$; and the condition $\tau > 1$ is
equivalent to $0 \le \pi_0 < \pi_0'$.


The minimum Bayes risk $V(\pi_0)$ can be calculated for the three regions:
$$V(\pi_0) = 1 - \pi_0, \qquad \pi_0'' < \pi_0 \le 1,$$
$$V(\pi_0) = \pi_0 \int_{1-\sqrt{\tau}}^{1+\sqrt{\tau}} e^{-y}\,dy + (1-\pi_0)\left[ \int_0^{1-\sqrt{\tau}} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy + \int_{1+\sqrt{\tau}}^{\infty} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy \right], \qquad \pi_0' \le \pi_0 \le \pi_0'',$$
and
$$V(\pi_0) = \pi_0 \int_0^{1+\sqrt{\tau}} e^{-y}\,dy + (1-\pi_0) \int_{1+\sqrt{\tau}}^{\infty} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy, \qquad 0 \le \pi_0 < \pi_0'.$$
b. The minimax rule can be found by equating conditional risks. Investigation of the
above shows that this equality occurs in the intermediate region $\pi_0' \le \pi_0 \le \pi_0''$, and thus
corresponds to a threshold $\tau_L \in (0, 1)$ solving
$$e^{-(1-\sqrt{\tau_L})} - e^{-(1+\sqrt{\tau_L})} = 1 - 2\left[\Phi\!\left(1+\sqrt{\tau_L}\right) - \Phi\!\left(1-\sqrt{\tau_L}\right)\right],$$
where $\Phi$ denotes the standard normal cdf. The minimax risk is then either of the equal conditional
risks; e.g.,
$$V(\tau_L) = e^{-(1-\sqrt{\tau_L})} - e^{-(1+\sqrt{\tau_L})}.$$
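The equalizer equation has no closed-form solution, so $\tau_L$ must be found numerically; a sketch using scipy.optimize.brentq, with $\Phi$ built from math.erf:

```python
# Numerical solution of the equalizer equation in Exercise 4b.
import math
from scipy.optimize import brentq

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal cdf

def R0(tau):   # false-alarm risk P0(Gamma_1)
    r = math.sqrt(tau)
    return math.exp(-(1.0 - r)) - math.exp(-(1.0 + r))

def R1(tau):   # miss risk P1(Gamma_0)
    r = math.sqrt(tau)
    return 1.0 - 2.0 * (Phi(1.0 + r) - Phi(1.0 - r))

tau_L = brentq(lambda t: R0(t) - R1(t), 1e-9, 1.0)
print(tau_L, R0(tau_L), R1(tau_L))   # equal risks = minimax risk
```
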
c. Here, randomization is unnecessary, and the Neyman-Pearson critical regions are
of the form
$$\Gamma_1 = \left\{ y \ge 0 \,\middle|\, (y-1)^2 \le \tau \right\},$$
where $\tau = 2\log\left(\sqrt{2e/\pi}\,/\,\eta\right)$. There are three cases:
$$\Gamma_1 = \begin{cases} \emptyset & \text{if } \tau < 0 \\ \left[1-\sqrt{\tau},\; 1+\sqrt{\tau}\,\right] & \text{if } 0 \le \tau \le 1 \\ \left[0,\; 1+\sqrt{\tau}\,\right] & \text{if } \tau > 1 \end{cases}.$$

The false-alarm probability is thus:
$$P_F(\tilde\delta_{NP}) = 0, \qquad \tau < 0,$$
$$P_F(\tilde\delta_{NP}) = \int_{1-\sqrt{\tau}}^{1+\sqrt{\tau}} e^{-y}\,dy = 2e^{-1}\sinh\left(\sqrt{\tau}\right), \qquad 0 \le \tau \le 1,$$
and
$$P_F(\tilde\delta_{NP}) = \int_0^{1+\sqrt{\tau}} e^{-y}\,dy = 1 - e^{-1-\sqrt{\tau}}, \qquad \tau > 1.$$

From this we see that the threshold for level-$\alpha$ NP testing is
$$\tau = \begin{cases} \left[\sinh^{-1}\!\left(\frac{\alpha e}{2}\right)\right]^2 & \text{if } 0 < \alpha \le 1-e^{-2} \\ \left[1 + \log(1-\alpha)\right]^2 & \text{if } 1-e^{-2} < \alpha < 1 \end{cases}.$$


The detection probability is thus
$$P_D(\tilde\delta_{NP}) = 2\left[\Phi\!\left(1+\sqrt{\tau}\right) - \Phi\!\left(1-\sqrt{\tau}\right)\right] = 2\left[\Phi\!\left(1+\sinh^{-1}\!\left(\tfrac{\alpha e}{2}\right)\right) - \Phi\!\left(1-\sinh^{-1}\!\left(\tfrac{\alpha e}{2}\right)\right)\right], \qquad 0 < \alpha \le 1-e^{-2},$$
and
$$P_D(\tilde\delta_{NP}) = 2\,\Phi\!\left(1+\sqrt{\tau}\right) - 1 = 2\,\Phi\!\left(-\log(1-\alpha)\right) - 1, \qquad 1-e^{-2} < \alpha < 1.$$
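A Monte Carlo sketch verifying the level and power formulas for one value of $\alpha$; the models are Exp(1) under $H_0$ and half-normal under $H_1$, as in the problem.

```python
# Monte Carlo check of the NP threshold/ROC formulas in Exercise 4c.
import math, random

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

alpha = 0.3                                   # below 1 - e^-2 ~ 0.865
tau = math.asinh(alpha * math.e / 2.0) ** 2   # threshold from the solution
lo, hi = 1.0 - math.sqrt(tau), 1.0 + math.sqrt(tau)

N = 200_000
h0 = [random.expovariate(1.0) for _ in range(N)]     # Exp(1) under H0
h1 = [abs(random.gauss(0.0, 1.0)) for _ in range(N)] # half-normal under H1

print(sum(lo <= y <= hi for y in h0) / N, alpha)     # ~alpha
print(sum(lo <= y <= hi for y in h1) / N,
      2.0 * (Phi(hi) - Phi(lo)))                     # ~P_D formula
```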

Exercise 6 a & b:
Here we have $p_0(y) = p_N(y+s)$ and $p_1(y) = p_N(y-s)$, which gives
$$L(y) = \frac{1+(y+s)^2}{1+(y-s)^2}.$$
a. With equal priors and uniform costs, the critical region for Bayes testing is $\Gamma_1 =
\{L(y) \ge 1\} = \{1+(y+s)^2 \ge 1+(y-s)^2\} = \{2sy \ge -2sy\} = [0, \infty)$. Thus, the Bayes
test is
$$\delta_B(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 0 & \text{if } y < 0 \end{cases}.$$

The minimum Bayes risk is then
$$r(\delta_B) = \frac{1}{2}\int_0^{\infty} \frac{dy}{\pi\left[1+(y+s)^2\right]} + \frac{1}{2}\int_{-\infty}^{0} \frac{dy}{\pi\left[1+(y-s)^2\right]} = \frac{1}{2} - \frac{\tan^{-1}(s)}{\pi}.$$

b. Because of the symmetry of this problem with uniform costs, we can guess that
1/2 is the least-favorable prior. To confirm this, we can check that the answer from Part
a gives an equalizer rule:
$$R_0(\delta_{1/2}) = \int_0^{\infty} \frac{dy}{\pi\left[1+(y+s)^2\right]} = \int_{-\infty}^{0} \frac{dy}{\pi\left[1+(y-s)^2\right]} = R_1(\delta_{1/2}).$$
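These risks are easy to verify numerically for a sample value of $s$; a sketch using scipy.integrate.quad, with standard Cauchy noise as in the problem:

```python
# Check of the Bayes risk formula in Exercise 6a.
import math
from scipy.integrate import quad

s = 1.5
p0 = lambda y: 1.0 / (math.pi * (1.0 + (y + s) ** 2))
p1 = lambda y: 1.0 / (math.pi * (1.0 + (y - s) ** 2))

R0, _ = quad(p0, 0.0, math.inf)      # false-alarm probability
R1, _ = quad(p1, -math.inf, 0.0)     # miss probability
print(R0, R1)                        # equal, by symmetry
print(0.5 * (R0 + R1), 0.5 - math.atan(s) / math.pi)
```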

Exercise 7:
a. The densities under the two hypotheses are:
$$p_0(y) = p(y) = e^{-y}, \qquad y > 0,$$
and
$$p_1(y) = \int_0^{y} p(y-s)\,p(s)\,ds = \int_0^{y} e^{-(y-s)}\, e^{-s}\,ds = y\,e^{-y}, \qquad y > 0.$$
Thus, the likelihood ratio is
$$L(y) = \frac{p_1(y)}{p_0(y)} = y, \qquad y > 0.$$

b. Randomization is irrelevant here, so the false-alarm probability for threshold $\eta$ is
$$P_F(\tilde\delta_{NP}) = P_0(Y > \eta) = e^{-\eta},$$
which gives the threshold $\eta = -\log\alpha$ for level-$\alpha$ Neyman-Pearson testing. The corresponding detection probability is
$$P_D(\tilde\delta_{NP}) = P_1(Y > \eta) = \int_{\eta}^{\infty} y\,e^{-y}\,dy = (\eta+1)\,e^{-\eta} = \alpha\left(1 - \log\alpha\right), \qquad 0 < \alpha < 1.$$
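A Monte Carlo sketch of this ROC point; under $H_1$, $Y = S + N$ with $S$ and $N$ independent Exp(1), so $Y$ has the density $y\,e^{-y}$ computed above.

```python
# Monte Carlo check of Exercise 7b.
import math, random

alpha = 0.1
eta = -math.log(alpha)

N = 200_000
h0 = [random.expovariate(1.0) for _ in range(N)]
h1 = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(N)]

print(sum(y > eta for y in h0) / N, alpha)                          # ~0.1
print(sum(y > eta for y in h1) / N, alpha * (1 - math.log(alpha)))  # ~0.33
```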

c. Here the densities under the two hypotheses become:
$$p_0(y) = \prod_{k=1}^{n} p(y_k) = \prod_{k=1}^{n} e^{-y_k}, \qquad 0 < \min\{y_1, y_2, \ldots, y_n\},$$
and, since the integrand vanishes unless $s < \min\{y_1, \ldots, y_n\}$,
$$p_1(y) = \int_0^{\infty} \left[\prod_{k=1}^{n} p(y_k - s)\right] p(s)\,ds = \int_0^{\min\{y_1,\ldots,y_n\}} \left[\prod_{k=1}^{n} e^{-(y_k - s)}\right] e^{-s}\,ds$$
$$= \frac{p_0(y)}{n-1}\left( e^{(n-1)\min\{y_1,y_2,\ldots,y_n\}} - 1 \right), \qquad 0 < \min\{y_1, y_2, \ldots, y_n\}.$$
Thus, the likelihood ratio is
$$L(y) = \frac{1}{n-1}\left( e^{(n-1)\min\{y_1,y_2,\ldots,y_n\}} - 1 \right), \qquad 0 < \min\{y_1, y_2, \ldots, y_n\}.$$

d. The false-alarm probability incurred by comparing $L(Y)$ from Part c to a threshold $\eta$
is
$$P_F(\tilde\delta_{NP}) = P_0\left(L(Y) > \eta\right) = P_0\left( \min\{Y_1, Y_2, \ldots, Y_n\} > \tau \right) = P_0\left( \bigcap_{k=1}^{n} \{Y_k > \tau\} \right) = \prod_{k=1}^{n} P_0(Y_k > \tau) = e^{-n\tau},$$
where $\tau = \log\left((n-1)\eta + 1\right)/(n-1)$. Setting $e^{-n\tau} = \alpha$, we have $\tau = -\frac{1}{n}\log\alpha$, or, equivalently,
$$\eta = \frac{e^{(n-1)\tau} - 1}{n-1} = \frac{\alpha^{-(n-1)/n} - 1}{n-1}.$$
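A Monte Carlo check of the size calculation for the minimum statistic, with $n$ i.i.d. Exp(1) observations under $H_0$:

```python
# Check that tau = -(1/n) log(alpha) gives level alpha in Exercise 7d.
import math, random

n, alpha = 4, 0.05
tau = -math.log(alpha) / n

N = 200_000
count = sum(
    min(random.expovariate(1.0) for _ in range(n)) > tau
    for _ in range(N)
)
print(count / N, alpha)   # ~0.05
```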

Exercise 15:
a. The LMP test is
$$\tilde\delta_{lo}(y) = \begin{cases} 1 & \text{if } \left.\frac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} > \eta\, p_0(y) \\ \gamma & \text{if } \left.\frac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} = \eta\, p_0(y) \\ 0 & \text{if } \left.\frac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} < \eta\, p_0(y) \end{cases}.$$
Here we have
$$\left.\frac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} \bigg/\, p_0(y) = \operatorname{sgn}(y);$$
thus
$$\tilde\delta_{lo}(y) = \begin{cases} 1 & \text{if } \operatorname{sgn}(y) > \eta \\ \gamma & \text{if } \operatorname{sgn}(y) = \eta \\ 0 & \text{if } \operatorname{sgn}(y) < \eta \end{cases}.$$
To set the threshold $\eta$, we consider
$$P_0\left(\operatorname{sgn}(Y) > \eta\right) = \begin{cases} 0 & \text{if } \eta \ge 1 \\ 1/2 & \text{if } -1 \le \eta < 1 \\ 1 & \text{if } \eta < -1 \end{cases}.$$
This implies that
$$\eta = \begin{cases} 1 & \text{if } 0 < \alpha < 1/2 \\ -1 & \text{if } 1/2 \le \alpha < 1 \end{cases}.$$
The randomization is
$$\gamma = \frac{\alpha - P_0\left(\operatorname{sgn}(Y) > \eta\right)}{P_0\left(\operatorname{sgn}(Y) = \eta\right)} = \begin{cases} 2\alpha & \text{if } 0 < \alpha < 1/2 \\ 2\alpha - 1 & \text{if } 1/2 \le \alpha < 1 \end{cases}.$$

The LMP test is thus
$$\tilde\delta_{lo}(y) = \begin{cases} 2\alpha & \text{if } y > 0 \\ 0 & \text{if } y \le 0 \end{cases}$$
for $0 < \alpha < 1/2$; and it is
$$\tilde\delta_{lo}(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 2\alpha - 1 & \text{if } y < 0 \end{cases}$$
for $1/2 \le \alpha < 1$.


For fixed $\theta > 0$, the detection probability is
$$P_D(\tilde\delta_{lo}; \theta) = P_\theta\left(\operatorname{sgn}(Y) > \eta\right) + \gamma\, P_\theta\left(\operatorname{sgn}(Y) = \eta\right)$$
$$= \begin{cases} 2\alpha \displaystyle\int_0^{\infty} \tfrac{1}{2} e^{-|y-\theta|}\,dy & \text{if } 0 < \alpha < 1/2 \\ \displaystyle\int_0^{\infty} \tfrac{1}{2} e^{-|y-\theta|}\,dy + (2\alpha-1) \displaystyle\int_{-\infty}^{0} \tfrac{1}{2} e^{-|y-\theta|}\,dy & \text{if } 1/2 \le \alpha < 1 \end{cases}$$
$$= \begin{cases} \alpha\left(2 - e^{-\theta}\right) & \text{if } 0 < \alpha < 1/2 \\ 1 + (\alpha - 1)e^{-\theta} & \text{if } 1/2 \le \alpha < 1 \end{cases}.$$
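A Monte Carlo sketch of the first branch, i.e., the randomized test $\tilde\delta_{lo}(y) = 2\alpha$ for $y > 0$, with Laplacian observations $\frac{1}{2}e^{-|y-\theta|}$ as above:

```python
# Monte Carlo check of the LMP detection probability in Exercise 15a.
import math, random

def laplace(theta):
    # inverse-cdf sample from (1/2) exp(-|y - theta|)
    u = random.random() - 0.5
    return theta - math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

alpha, theta = 0.3, 0.8   # alpha < 1/2, so the test is delta(y) = 2*alpha for y > 0
N = 200_000
hits = sum((2.0 * alpha if laplace(theta) > 0 else 0.0) for _ in range(N))
print(hits / N, alpha * (2.0 - math.exp(-theta)))   # ~0.465
```
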

b. For fixed $\theta$, the NP critical region is
$$\Gamma_\theta = \left\{ y \,\middle|\, |y| - |y-\theta| > \tau \right\} = \begin{cases} (-\infty, \infty) & \text{if } \tau < -\theta \\ \left(\frac{\tau+\theta}{2},\, \infty\right) & \text{if } -\theta \le \tau < \theta \\ \emptyset & \text{if } \tau \ge \theta \end{cases},$$
from which
$$P_0(\Gamma_\theta) = \begin{cases} 1 & \text{if } \tau < -\theta \\ \frac{1}{2} e^{-(\tau+\theta)/2} & \text{if } -\theta \le \tau < \theta \\ 0 & \text{if } \tau \ge \theta \end{cases}.$$
Clearly, we must know $\theta$ to set $\tau$, and thus the NP critical region depends on $\theta$. This
implies that there is no UMP test.
The generalized likelihood ratio test uses this statistic:
$$\sup_{\theta > 0} e^{|y| - |y-\theta|} = \exp\left\{ \sup_{\theta > 0}\left( |y| - |y-\theta| \right) \right\} = \begin{cases} 1 & \text{if } y < 0 \\ e^{y} & \text{if } y \ge 0 \end{cases}.$$

Exercise 16:
We have $M$ hypotheses $H_0, H_1, \ldots, H_{M-1}$, where $Y$ has distribution $P_j$ and density $p_j$
under hypothesis $H_j$. A decision rule $\delta$ is a partition of the observation set $\Gamma$ into regions
$\Gamma_0, \Gamma_1, \ldots, \Gamma_{M-1}$, where $\delta$ chooses hypothesis $H_i$ when we observe $y \in \Gamma_i$. Equivalently, a
decision rule can be viewed as a mapping from $\Gamma$ to the set of decisions $\{0, 1, \ldots, M-1\}$,
where $\delta(y)$ is the index of the hypothesis accepted when we observe $Y = y$.
On assigning costs $C_{ij}$ to the acceptance of $H_i$ when $H_j$ is true, for $0 \le i, j \le M-1$,
we can define conditional risks $R_j(\delta)$, $j = 0, 1, \ldots, M-1$, for a decision rule $\delta$, where
$R_j(\delta)$ is the conditional expected cost given that $H_j$ is true. We have
$$R_j(\delta) = \sum_{i=0}^{M-1} C_{ij}\, P_j(\Gamma_i).$$

Assuming priors $\pi_j = P(H_j \text{ occurs})$, $j = 0, 1, \ldots, M-1$, we can define an overall average
risk, or Bayes risk, as
$$r(\delta) = \sum_{j=0}^{M-1} \pi_j R_j(\delta).$$
A Bayes rule will minimize the Bayes risk.


We can write
$$r(\delta) = \sum_{j=0}^{M-1}\sum_{i=0}^{M-1} \pi_j C_{ij}\, P_j(\Gamma_i) = \sum_{i=0}^{M-1}\sum_{j=0}^{M-1} \pi_j C_{ij} \int_{\Gamma_i} p_j(y)\,\mu(dy) = \sum_{i=0}^{M-1} \int_{\Gamma_i} \left[ \sum_{j=0}^{M-1} \pi_j C_{ij}\, p_j(y) \right] \mu(dy).$$
Thus, by inspection, we see that the Bayes rule has decision regions given by
$$\Gamma_i = \left\{ y \,\middle|\, \sum_{j=0}^{M-1} \pi_j C_{ij}\, p_j(y) \le \min_{0 \le k \le M-1} \sum_{j=0}^{M-1} \pi_j C_{kj}\, p_j(y) \right\}.$$
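The resulting rule is easy to implement directly; a minimal sketch follows, where the Gaussian densities, priors, and uniform costs in the example are illustrative assumptions, not part of the exercise.

```python
# M-ary Bayes rule: decide the index i minimizing sum_j pi_j * C[i][j] * p_j(y).
import math

def bayes_decision(y, priors, costs, densities):
    """Return argmin_i of sum_j priors[j] * costs[i][j] * densities[j](y)."""
    M = len(priors)
    post_cost = [
        sum(priors[j] * costs[i][j] * densities[j](y) for j in range(M))
        for i in range(M)
    ]
    return min(range(M), key=post_cost.__getitem__)

# Example: three unit-variance Gaussian hypotheses, uniform costs C_ij = 1 - delta_ij.
gauss = lambda m: (lambda y: math.exp(-0.5 * (y - m) ** 2) / math.sqrt(2 * math.pi))
densities = [gauss(-1.0), gauss(0.0), gauss(1.0)]
priors = [1 / 3, 1 / 3, 1 / 3]
costs = [[0 if i == j else 1 for j in range(3)] for i in range(3)]
print(bayes_decision(0.9, priors, costs, densities))   # -> 2 (closest mean is 1.0)
```

With uniform costs and equal priors this reduces to the maximum-likelihood rule, as the example illustrates.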

Exercise 19:
a. The likelihood ratio is given by
$$L(y) = \frac{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-(y_k-\mu_1)^2/2\sigma_1^2}}{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_0} e^{-(y_k-\mu_0)^2/2\sigma_0^2}} = \left(\frac{\sigma_0}{\sigma_1}\right)^{\!n} \exp\left\{ \frac{n\mu_0^2}{2\sigma_0^2} - \frac{n\mu_1^2}{2\sigma_1^2} \right\} \exp\left\{ \frac{\sigma_1^2 - \sigma_0^2}{2\sigma_0^2\sigma_1^2} \sum_{k=1}^{n} y_k^2 + \left(\frac{\mu_1}{\sigma_1^2} - \frac{\mu_0}{\sigma_0^2}\right) \sum_{k=1}^{n} y_k \right\},$$
which shows the structure indicated.


b. If 1 = 0 and 12 > 02 , then the Neyman-Pearson test operates by comparing
'
the quantity nk=1 (yk )2 to a threshold, choosing H1 if the threshold is exceeded and
'
H0 otherwise. Alternatively, if 1 > 0 and 12 = 02 , then the NP test compares nk=1 yk
to a threshold, again choosing H1 when the threshold is exceeded. Note that, in the rst
case, the test statistic is quadratic in the observations, and in the second case it is linear.
c. For n = 1, 1 = 0 and 12 > 02 ,, the NP test is of the form


N P (y) =

1 if (y1 )2 
,
0 if (y1 )2 < 

where $\tau > 0$ is an appropriate threshold. We have
$$P_F(\tilde\delta_{NP}) = P_0\left((Y_1 - \mu)^2 > \tau\right) = 1 - P_0\left(-\sqrt{\tau} \le Y_1 - \mu \le \sqrt{\tau}\right) = 1 - \Phi\!\left(\frac{\sqrt{\tau}}{\sigma_0}\right) + \Phi\!\left(-\frac{\sqrt{\tau}}{\sigma_0}\right) = 2\left[1 - \Phi\!\left(\frac{\sqrt{\tau}}{\sigma_0}\right)\right].$$
Thus, for size $\alpha$ we set
$$\sqrt{\tau} = \sigma_0\, \Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right),$$
and the detection probability is
$$P_D(\tilde\delta_{NP}) = 1 - P_1\left(-\sqrt{\tau} \le Y_1 - \mu \le \sqrt{\tau}\right) = 2\left[1 - \Phi\!\left(\frac{\sigma_0}{\sigma_1}\, \Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right)\right)\right], \qquad 0 < \alpha < 1.$$
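A check of the size and power expressions using scipy.stats.norm; the values of $\mu$, $\sigma_0$, $\sigma_1$, and $\alpha$ below are illustrative.

```python
# Check of the size/power formulas in Exercise 19c.
from scipy.stats import norm

mu, sigma0, sigma1, alpha = 0.0, 1.0, 2.0, 0.1
root_tau = sigma0 * norm.ppf(1.0 - alpha / 2.0)   # sqrt(tau) for size alpha

# P_F and P_D computed directly from the two Gaussian models.
P_F = 2.0 * (1.0 - norm.cdf(root_tau / sigma0))
P_D = 2.0 * (1.0 - norm.cdf(root_tau / sigma1))
print(P_F)   # = alpha = 0.1
print(P_D, 2.0 * (1.0 - norm.cdf((sigma0 / sigma1) * norm.ppf(1.0 - alpha / 2.0))))
```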
