An Introduction To Signal Detection and Estimation - Second Edition
Exercise 2:
The likelihood ratio is given by
\[
L(y) = \frac{3}{2(y+1)}, \qquad 0 \le y \le 1.
\]
a. With uniform costs and equal priors, the critical region for minimum Bayes error is given by
\[
\Gamma_1 = \{ y \in [0,1] \mid L(y) \ge 1 \} = \{ y \in [0,1] \mid 3 \ge 2(y+1) \} = [0, 1/2].
\]
Thus the Bayes rule is given by
\[
\delta_B(y) = \begin{cases} 1 & \text{if } 0 \le y \le 1/2 \\ 0 & \text{if } 1/2 < y \le 1. \end{cases}
\]
The corresponding minimum Bayes risk is
\[
r(\delta_B) = \frac{1}{2}\int_0^{1/2} \frac{2}{3}(y+1)\,dy + \frac{1}{2}\int_{1/2}^{1} dy = \frac{11}{24}.
\]
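As a numerical sanity check (an addition, not part of the original solution), the following sketch recomputes the risk above by a midpoint rule, assuming the densities implicit in \(L(y)\): \(p_0(y) = \tfrac{2}{3}(y+1)\) and \(p_1(y) = 1\) on \([0,1]\).

```python
# Numeric check of r(delta_B) = 11/24 with equal priors and uniform costs.
# Assumes p0(y) = 2(y+1)/3 and p1(y) = 1 on [0, 1].

def p0(y):
    return 2.0 * (y + 1.0) / 3.0

def bayes_risk(n=200_000):
    """Midpoint-rule approximation of (1/2)*P0([0,1/2]) + (1/2)*P1((1/2,1])."""
    h = 1.0 / n
    pf = sum(p0((k + 0.5) * h) for k in range(n // 2)) * h  # P0(Y <= 1/2)
    pm = 0.5                                                # P1(Y > 1/2) = 1/2
    return 0.5 * pf + 0.5 * pm

print(round(bayes_risk(), 6))  # -> 0.458333, i.e. 11/24
```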
b. With uniform costs, the least-favorable prior will be interior to (0, 1), so we examine
the conditional risks of Bayes rules for an equalizer condition. The critical region for the
Bayes rule \(\delta_{\pi_0}\) is given by
\[
\Gamma_1 = \left\{ y \in [0,1] \,\middle|\, L(y) \ge \frac{\pi_0}{1-\pi_0} \right\} = [0, \eta],
\]
where
\[
\eta = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\[2pt] \dfrac{3(1-\pi_0)}{2\pi_0} - 1 & \text{if } 3/7 < \pi_0 < 3/5 \\[2pt] 0 & \text{if } 3/5 \le \pi_0 \le 1. \end{cases}
\]
The conditional risks are
\[
R_0(\pi_0) = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\[2pt] \displaystyle\int_0^{\eta} \frac{2}{3}(y+1)\,dy = \frac{\eta^2 + 2\eta}{3} & \text{if } 3/7 < \pi_0 < 3/5 \\[2pt] 0 & \text{if } 3/5 \le \pi_0 \le 1, \end{cases}
\]
and
\[
R_1(\pi_0) = \begin{cases} 0 & \text{if } 0 \le \pi_0 \le 3/7 \\[2pt] \displaystyle\int_\eta^1 dy = 1 - \eta & \text{if } 3/7 < \pi_0 < 3/5 \\[2pt] 1 & \text{if } 3/5 \le \pi_0 \le 1. \end{cases}
\]
By inspection, the minimax value \(\eta_L\) is the solution to the equalizer equation
\[
\frac{\eta_L^2 + 2\eta_L}{3} = 1 - \eta_L,
\]
which yields \(\eta_L = (\sqrt{37} - 5)/2\). The minimax risk is the value of the equalized conditional
risk; i.e., \(V(\pi_L) = 1 - \eta_L = (7 - \sqrt{37})/2\).
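A short check (added here) that \(\eta_L = (\sqrt{37}-5)/2\) indeed equalizes the two conditional risks written above:

```python
# Verify that eta_L = (sqrt(37) - 5)/2 equalizes R0 = (eta^2 + 2*eta)/3
# and R1 = 1 - eta from Exercise 2(b).
import math

eta_L = (math.sqrt(37.0) - 5.0) / 2.0
R0 = (eta_L**2 + 2.0 * eta_L) / 3.0
R1 = 1.0 - eta_L
print(eta_L, R0, R1)  # risks agree at V = 1 - eta_L ~ 0.4586
```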
c. The Neyman-Pearson test is given by
\[
\delta_{NP}(y) = \begin{cases} 1 & \text{if } \frac{3}{2(y+1)} > \eta \\[2pt] \gamma & \text{if } \frac{3}{2(y+1)} = \eta \\[2pt] 0 & \text{if } \frac{3}{2(y+1)} < \eta, \end{cases}
\]
where \(\eta\) and \(\gamma\) are chosen to give false-alarm probability \(\alpha\). Since \(L(y)\) is monotone
decreasing in \(y\), the above test is equivalent to
\[
\delta_{NP}(y) = \begin{cases} 1 & \text{if } y < \eta' \\ \gamma & \text{if } y = \eta' \\ 0 & \text{if } y > \eta', \end{cases}
\]
where \(\eta' = \frac{3}{2\eta} - 1\). Since \(Y\) is a continuous random variable, we can ignore the randomization. Thus, the false-alarm probability is:
\[
P_F(\delta_{NP}) = P_0(Y < \eta') = \begin{cases} 0 & \text{if } \eta' \le 0 \\[2pt] \displaystyle\int_0^{\eta'} \frac{2}{3}(y+1)\,dy = \frac{(\eta')^2 + 2\eta'}{3} & \text{if } 0 < \eta' < 1 \\[2pt] 1 & \text{if } \eta' \ge 1. \end{cases}
\]
Setting
\[
\frac{(\eta')^2 + 2\eta'}{3} = \alpha
\]
and solving gives \(\eta' = \sqrt{1+3\alpha} - 1\), so the \(\alpha\)-level Neyman-Pearson test is
\[
\delta_{NP}(y) = \begin{cases} 1 & \text{if } y \le \sqrt{1+3\alpha} - 1 \\ 0 & \text{if } y > \sqrt{1+3\alpha} - 1. \end{cases}
\]
The corresponding detection probability is
\[
P_D(\delta_{NP}) = \int_0^{\sqrt{1+3\alpha}-1} dy = \sqrt{1+3\alpha} - 1, \qquad 0 < \alpha < 1.
\]
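The \(P_F\) and \(P_D\) formulas for this test can be spot-checked by Monte Carlo (an added sketch, again assuming \(p_0(y)=\tfrac{2}{3}(y+1)\) and \(p_1(y)=1\) on \([0,1]\)); sampling under \(P_0\) uses the inverse CDF \(F_0^{-1}(u) = \sqrt{1+3u}-1\).

```python
# Monte Carlo check of the alpha-level NP test of Exercise 2(c):
# decide 1 iff y <= sqrt(1+3*alpha) - 1; then PF ~ alpha and
# PD ~ sqrt(1+3*alpha) - 1. Under P1, Y is uniform on [0, 1].
import math, random

random.seed(0)
alpha = 0.25
thr = math.sqrt(1.0 + 3.0 * alpha) - 1.0

n = 200_000
pf = sum(math.sqrt(1.0 + 3.0 * random.random()) - 1.0 <= thr for _ in range(n)) / n
pd = sum(random.random() <= thr for _ in range(n)) / n
print(pf, pd)  # pf near 0.25, pd near sqrt(1.75) - 1 ~ 0.3229
```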
Exercise 4:
Here the likelihood ratio is given by
\[
L(y) = \frac{\sqrt{2/\pi}\, e^{-y^2/2}}{e^{-y}} = \sqrt{\frac{2e}{\pi}}\, e^{-(y-1)^2/2}, \qquad y \ge 0.
\]
a. The critical region for the Bayes rule is
\[
\Gamma_1 = \left\{ y \ge 0 \,\middle|\, (y-1)^2 \le \tau' \right\},
\]
where \(\tau' = 2\log\!\left(\sqrt{2e/\pi}\,/\tau\right)\) and \(\tau = \pi_0/(1-\pi_0)\). Explicitly,
\[
\Gamma_1 = \begin{cases} \emptyset & \text{if } \tau' < 0 \\ \left[\,1-\sqrt{\tau'},\; 1+\sqrt{\tau'}\,\right] & \text{if } 0 \le \tau' \le 1 \\ \left[\,0,\; 1+\sqrt{\tau'}\,\right] & \text{if } \tau' > 1. \end{cases}
\]
The condition \(\tau' < 0\) corresponds to \(\pi_0 > \pi_0''\), and \(\tau' > 1\) to \(\pi_0 < \pi_0'\), where
\[
\pi_0' = \frac{\sqrt{2/\pi}}{1 + \sqrt{2/\pi}} \qquad \text{and} \qquad \pi_0'' = \frac{\sqrt{2e/\pi}}{1 + \sqrt{2e/\pi}}.
\]
The corresponding Bayes risks are
\[
V(\pi_0) = 1 - \pi_0, \qquad \pi_0'' \le \pi_0 \le 1,
\]
\[
V(\pi_0) = \pi_0 \int_{1-\sqrt{\tau'}}^{1+\sqrt{\tau'}} e^{-y}\,dy
+ (1-\pi_0)\left[ \int_0^{1-\sqrt{\tau'}} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy + \int_{1+\sqrt{\tau'}}^{\infty} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy \right], \qquad \pi_0' \le \pi_0 < \pi_0'',
\]
and
\[
V(\pi_0) = \pi_0 \int_0^{1+\sqrt{\tau'}} e^{-y}\,dy + (1-\pi_0) \int_{1+\sqrt{\tau'}}^{\infty} \sqrt{\tfrac{2}{\pi}}\, e^{-y^2/2}\,dy, \qquad 0 \le \pi_0 < \pi_0'.
\]
b. The minimax rule can be found by equating conditional risks. Investigation of the
above shows that this equality occurs in the intermediate region \(\pi_0' \le \pi_0 \le \pi_0''\), and thus
corresponds to a threshold \(\tau_L' \in (0,1)\) solving
\[
e^{\sqrt{\tau_L'}} - e^{-\sqrt{\tau_L'}} = 2e\left( \tfrac{1}{2} - \Phi\!\left(1+\sqrt{\tau_L'}\right) + \Phi\!\left(1-\sqrt{\tau_L'}\right) \right).
\]
The minimax risk is then either of the equal conditional risks; e.g.,
\[
V(\pi_L) = e^{-1+\sqrt{\tau_L'}} - e^{-1-\sqrt{\tau_L'}}.
\]
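The equalizer condition has no closed form, but it is easy to locate the root numerically. The sketch below (an addition) writes the two conditional risks as \(R_0 = \tfrac{2}{e}\sinh s\) and \(R_1 = 1 - 2[\Phi(1+s) - \Phi(1-s)]\) with \(s = \sqrt{\tau_L'}\), and bisects on \(s \in (0,1)\); \(\Phi\) is evaluated through `math.erf`.

```python
# Numerically solve the equalizer condition of Exercise 4(b) by bisection:
# R0(s) is increasing and R1(s) is decreasing in s on (0, 1).
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gap(s):
    R0 = 2.0 * math.sinh(s) / math.e                     # false-alarm risk
    R1 = 1.0 - 2.0 * (Phi(1.0 + s) - Phi(1.0 - s))       # miss risk
    return R0 - R1

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(mid) < 0.0:
        lo = mid
    else:
        hi = mid
s = 0.5 * (lo + hi)
print(s, 2.0 * math.sinh(s) / math.e)  # s = sqrt(tau'_L) and the minimax risk
```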
c. Here, randomization is unnecessary, and the Neyman-Pearson critical regions are
of the form
\[
\Gamma_1 = \left\{ y \ge 0 \,\middle|\, (y-1)^2 \le \tau' \right\},
\]
where \(\tau' = 2\log\!\left(\sqrt{2e/\pi}\,/\tau\right)\); that is,
\[
\Gamma_1 = \begin{cases} \emptyset & \text{if } \tau' < 0 \\ \left[\,1-\sqrt{\tau'},\; 1+\sqrt{\tau'}\,\right] & \text{if } 0 \le \tau' \le 1 \\ \left[\,0,\; 1+\sqrt{\tau'}\,\right] & \text{if } \tau' > 1. \end{cases}
\]
The false-alarm probability is
\[
P_F(\delta_{NP}) = 0, \qquad \tau' < 0,
\]
\[
P_F(\delta_{NP}) = \int_{1-\sqrt{\tau'}}^{1+\sqrt{\tau'}} e^{-y}\,dy = \frac{2}{e}\sinh\sqrt{\tau'}, \qquad 0 \le \tau' \le 1,
\]
and
\[
P_F(\delta_{NP}) = \int_0^{1+\sqrt{\tau'}} e^{-y}\,dy = 1 - e^{-1-\sqrt{\tau'}}, \qquad \tau' > 1.
\]
Setting \(P_F(\delta_{NP}) = \alpha\) and solving for \(\tau'\) gives
\[
\tau' = \begin{cases} \left[ \sinh^{-1}\!\left( \frac{\alpha e}{2} \right) \right]^2 & \text{if } 0 < \alpha \le 1 - e^{-2} \\[4pt] \left[ 1 + \log(1-\alpha) \right]^2 & \text{if } 1 - e^{-2} < \alpha < 1. \end{cases}
\]
The detection probability is then
\[
P_D(\delta_{NP}) = 2\left[ \Phi\!\left(1+\sqrt{\tau'}\right) - \Phi\!\left(1-\sqrt{\tau'}\right) \right]
= 2\left[ \Phi\!\left(1 + \sinh^{-1}\tfrac{\alpha e}{2}\right) - \Phi\!\left(1 - \sinh^{-1}\tfrac{\alpha e}{2}\right) \right], \qquad 0 < \alpha \le 1 - e^{-2},
\]
and
\[
P_D(\delta_{NP}) = 2\left[ \Phi\!\left(1+\sqrt{\tau'}\right) - \tfrac{1}{2} \right]
= 2\,\Phi\!\left( \log\tfrac{1}{1-\alpha} \right) - 1, \qquad 1 - e^{-2} < \alpha < 1.
\]
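A quick consistency check (an addition): at the breakpoint \(\alpha = 1 - e^{-2}\) the two branches of \(\tau'\) should meet at \(\tau' = 1\), and recomputing \(P_F\) from \(\tau'\) should return \(\alpha\) on both sides.

```python
# Consistency check of the tau'(alpha) and PF(tau') formulas of Exercise 4(c).
import math

def tau_prime(alpha):
    if alpha <= 1.0 - math.exp(-2.0):
        return math.asinh(alpha * math.e / 2.0) ** 2
    return (1.0 + math.log(1.0 - alpha)) ** 2

def pf(tp):
    s = math.sqrt(tp)
    if tp <= 1.0:
        return 2.0 * math.sinh(s) / math.e
    return 1.0 - math.exp(-1.0 - s)

edge = 1.0 - math.exp(-2.0)
print(tau_prime(edge))                          # -> 1.0 (branches meet)
print(pf(tau_prime(0.3)), pf(tau_prime(0.95)))  # -> 0.3, 0.95
```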
Exercise 6 a & b:
Here we have \(p_0(y) = p_N(y+s)\) and \(p_1(y) = p_N(y-s)\), which gives
\[
L(y) = \frac{1 + (y+s)^2}{1 + (y-s)^2}.
\]
a. With equal priors and uniform costs, the critical region for Bayes testing is
\[
\Gamma_1 = \{ L(y) \ge 1 \} = \left\{ 1 + (y+s)^2 \ge 1 + (y-s)^2 \right\} = \{ 2sy \ge -2sy \} = [0, \infty).
\]
Thus, the Bayes test is
\[
\delta_B(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 0 & \text{if } y < 0. \end{cases}
\]
b. Because of the symmetry of this problem with uniform costs, we can guess that
\(\pi_0 = 1/2\) is the least-favorable prior. To confirm this, we check that the rule from part a
is an equalizer rule:
\[
R_0(\delta_{1/2}) = \int_0^{\infty} \frac{dy}{\pi\left[1 + (y+s)^2\right]} = \int_{-\infty}^{0} \frac{dy}{\pi\left[1 + (y-s)^2\right]} = R_1(\delta_{1/2}).
\]
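The equalizer identity can also be seen numerically (an added sketch): under \(P_0\), \(Y = X - s\) with \(X\) standard Cauchy, so \(R_0 = \tfrac{1}{2} - \tfrac{1}{\pi}\arctan s\) in closed form, and the sketch compares this with a direct quadrature of \(R_1\).

```python
# Check the equalizer claim of Exercise 6(b) for Cauchy noise:
# R0(1/2) = P0(Y >= 0) in closed form vs R1(1/2) = P1(Y < 0) by quadrature.
import math

def R0(s):
    return 0.5 - math.atan(s) / math.pi

def R1(s, n=400_000, L=10_000.0):
    # midpoint rule for the integral of 1/(pi*(1+(y-s)^2)) over (-L, 0)
    h = L / n
    total = 0.0
    for k in range(n):
        y = -L + (k + 0.5) * h
        total += h / (math.pi * (1.0 + (y - s) ** 2))
    return total

s = 1.0
print(R0(s), R1(s))  # both near 1/2 - atan(1)/pi = 0.25
```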
Exercise 7:
a. The densities under the two hypotheses are
\[
p_0(y) = p(y) = e^{-y}, \qquad y > 0,
\]
and
\[
p_1(y) = \int_0^{\infty} p(y-s)\, p(s)\,ds = \int_0^{y} e^{-(y-s)}\, e^{-s}\,ds = y\, e^{-y}, \qquad y > 0.
\]
Thus the likelihood ratio is
\[
L(y) = \frac{p_1(y)}{p_0(y)} = y, \qquad y > 0,
\]
so the Neyman-Pearson test simply thresholds \(y\), for any level \(0 < \alpha < 1\).
b. With \(n\) observations we have
\[
p_0(y) = \prod_{k=1}^{n} p(y_k) = \prod_{k=1}^{n} e^{-y_k}, \qquad 0 < \min\{y_1, y_2, \ldots, y_n\},
\]
and
\[
p_1(y) = \int_0^{\infty} \prod_{k=1}^{n} p(y_k - s)\; p(s)\,ds
= \int_0^{\min_k y_k} \prod_{k=1}^{n} e^{-(y_k - s)}\; e^{-s}\,ds
= \prod_{k=1}^{n} e^{-y_k} \cdot \frac{e^{(n-1)\min\{y_1, y_2, \ldots, y_n\}} - 1}{n-1},
\]
for \(0 < \min\{y_1, y_2, \ldots, y_n\}\). The likelihood ratio is therefore
\[
L(y) = \frac{e^{(n-1)\min\{y_1, y_2, \ldots, y_n\}} - 1}{n-1}.
\]
The false-alarm probability of the NP test is
\[
P_F(\delta_{NP}) = P_0(L(Y) > \tau)
= P_0\!\left( \min\{Y_1, Y_2, \ldots, Y_n\} > \frac{\log\left((n-1)\tau + 1\right)}{n-1} \right)
= P_0\!\left( \bigcap_{k=1}^{n} \left\{ Y_k > \tfrac{\log((n-1)\tau + 1)}{n-1} \right\} \right)
\]
\[
= \prod_{k=1}^{n} P_0\!\left( Y_k > \tfrac{\log((n-1)\tau + 1)}{n-1} \right)
= \left[ (n-1)\tau + 1 \right]^{-n/(n-1)}.
\]
Setting this equal to \(\alpha\) gives the threshold \(\tau = \dfrac{\alpha^{-(n-1)/n} - 1}{n-1}\).
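The closed form for \(P_F\) can be spot-checked by Monte Carlo (an addition): under \(H_0\) the \(Y_k\) are i.i.d. unit exponentials, so the event \(\min_k Y_k > c\) with \(c = \log((n-1)\tau+1)/(n-1)\) should occur with probability \(((n-1)\tau+1)^{-n/(n-1)}\).

```python
# Monte Carlo check of PF = ((n-1)*tau + 1)^(-n/(n-1)) from Exercise 7(b).
import math, random

random.seed(1)
n, tau = 4, 2.0
c = math.log((n - 1) * tau + 1.0) / (n - 1)
target = ((n - 1) * tau + 1.0) ** (-n / (n - 1))

trials = 200_000
hits = sum(
    min(random.expovariate(1.0) for _ in range(n)) > c
    for _ in range(trials)
)
print(hits / trials, target)  # both near 7^(-4/3) ~ 0.0747
```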
Exercise 15:
a. The LMP test is
\[
\delta_{lo}(y) = \begin{cases} 1 & \text{if } \left.\dfrac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} > \eta\, p_0(y) \\[6pt] \gamma & \text{if } \left.\dfrac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} = \eta\, p_0(y) \\[6pt] 0 & \text{if } \left.\dfrac{\partial p_\theta(y)}{\partial\theta}\right|_{\theta=0} < \eta\, p_0(y). \end{cases}
\]
Here we have
\[
\frac{\left.\partial p_\theta(y)/\partial\theta\right|_{\theta=0}}{p_0(y)} = \operatorname{sgn}(y);
\]
thus
\[
\delta_{lo}(y) = \begin{cases} 1 & \text{if } \operatorname{sgn}(y) > \eta \\ \gamma & \text{if } \operatorname{sgn}(y) = \eta \\ 0 & \text{if } \operatorname{sgn}(y) < \eta. \end{cases}
\]
To set the threshold \(\eta\), we consider
\[
P_0(\operatorname{sgn}(Y) > \eta) = \begin{cases} 0 & \text{if } \eta \ge 1 \\ 1/2 & \text{if } -1 \le \eta < 1 \\ 1 & \text{if } \eta < -1. \end{cases}
\]
This implies that
\[
\eta = \begin{cases} 1 & \text{if } 0 < \alpha \le 1/2 \\ -1 & \text{if } 1/2 < \alpha \le 1. \end{cases}
\]
The randomization is
\[
\gamma = \frac{\alpha - P_0(\operatorname{sgn}(Y) > \eta)}{P_0(\operatorname{sgn}(Y) = \eta)} = \begin{cases} 2\alpha & \text{if } 0 < \alpha \le 1/2 \\ 2\alpha - 1 & \text{if } 1/2 < \alpha \le 1. \end{cases}
\]
Thus, for \(0 < \alpha \le 1/2\),
\[
\delta_{lo}(y) = \begin{cases} 2\alpha & \text{if } y > 0 \\ 0 & \text{if } y \le 0, \end{cases}
\]
and for \(1/2 < \alpha \le 1\),
\[
\delta_{lo}(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 2\alpha - 1 & \text{if } y < 0. \end{cases}
\]
The corresponding detection probability under \(p_\theta\) (for \(\theta > 0\)) is
\[
P_\theta(\delta_{lo}) = \begin{cases} 2\alpha \displaystyle\int_0^{\infty} \tfrac{1}{2} e^{-|y-\theta|}\,dy = \alpha\left(2 - e^{-\theta}\right) & \text{if } 0 < \alpha \le 1/2 \\[8pt] \displaystyle\int_0^{\infty} \tfrac{1}{2} e^{-|y-\theta|}\,dy + (2\alpha - 1)\displaystyle\int_{-\infty}^{0} \tfrac{1}{2} e^{-|y-\theta|}\,dy = 1 + (\alpha - 1)e^{-\theta} & \text{if } 1/2 < \alpha \le 1. \end{cases}
\]
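The power expression for \(\alpha \le 1/2\) can be checked by simulation (an addition): sampling \(Y \sim\) Laplace\((\theta, 1)\) by inverse transform, the randomized test decides 1 with probability \(2\alpha\) whenever \(y > 0\), so its power should be close to \(\alpha(2 - e^{-\theta})\).

```python
# Monte Carlo check of the LMP power alpha*(2 - exp(-theta)) of Exercise 15(a).
import math, random

random.seed(2)

def laplace(theta):
    """Inverse-transform sample from the density (1/2)*exp(-|y - theta|)."""
    u = random.random()
    return theta + (math.log(2.0 * u) if u < 0.5 else -math.log(2.0 * (1.0 - u)))

alpha, theta = 0.2, 0.7
trials = 200_000
power = sum(2.0 * alpha for _ in range(trials) if laplace(theta) > 0.0) / trials
target = alpha * (2.0 - math.exp(-theta))
print(power, target)  # both near 0.3007
```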
b. For a fixed \(\theta > 0\), the likelihood ratio is
\[
L_\theta(y) = e^{|y| - |y-\theta|} = \begin{cases} e^{-\theta} & \text{if } y < 0 \\ e^{2y-\theta} & \text{if } 0 \le y \le \theta \\ e^{\theta} & \text{if } y > \theta, \end{cases}
\]
so for \(|\log\tau| \le \theta\) the Neyman-Pearson critical region is
\[
\Gamma_1 = (\eta, \infty) = \left( \frac{\log\tau + \theta}{2},\; \infty \right),
\]
from which
\[
P_0(\Gamma_1) = \begin{cases} 1 & \text{if } \log\tau < -\theta \\ \tfrac{1}{2}\, e^{-(\log\tau + \theta)/2} & \text{if } |\log\tau| \le \theta \\ 0 & \text{if } \log\tau > \theta. \end{cases}
\]
Clearly, we must know \(\theta\) to set \(\eta\), and thus the NP critical region depends on \(\theta\). This
implies that there is no UMP test.
The generalized likelihood ratio test uses the statistic
\[
\sup_{\theta > 0} e^{|y| - |y-\theta|} = \exp\left\{ \sup_{\theta > 0} \left( |y| - |y-\theta| \right) \right\} = \begin{cases} 1 & \text{if } y < 0 \\ e^{y} & \text{if } y \ge 0. \end{cases}
\]
Exercise 16:
We have \(M\) hypotheses \(H_0, H_1, \ldots, H_{M-1}\), where \(Y\) has distribution \(P_i\) and density \(p_i\)
under hypothesis \(H_i\). A decision rule \(\delta\) is a partition of the observation set \(\Gamma\) into regions
\(\Gamma_0, \Gamma_1, \ldots, \Gamma_{M-1}\), where \(\delta\) chooses hypothesis \(H_i\) when we observe \(y \in \Gamma_i\). Equivalently, a
decision rule can be viewed as a mapping from \(\Gamma\) to the set of decisions \(\{0, 1, \ldots, M-1\}\),
where \(\delta(y)\) is the index of the hypothesis accepted when we observe \(Y = y\).
On assigning costs \(C_{ij}\) to the acceptance of \(H_i\) when \(H_j\) is true, for \(0 \le i, j \le M-1\),
we can define conditional risks \(R_j(\delta)\), \(j = 0, 1, \ldots, M-1\), for a decision rule \(\delta\), where
\(R_j(\delta)\) is the conditional expected cost given that \(H_j\) is true. We have
\[
R_j(\delta) = \sum_{i=0}^{M-1} C_{ij}\, P_j(\Gamma_i).
\]
For priors \(\pi_0, \pi_1, \ldots, \pi_{M-1}\), the Bayes risk is
\[
r(\delta) = \sum_{j=0}^{M-1} \pi_j\, R_j(\delta).
\]
Writing this out,
\[
r(\delta) = \sum_{j=0}^{M-1} \sum_{i=0}^{M-1} \pi_j\, C_{ij}\, P_j(\Gamma_i)
= \sum_{i=0}^{M-1} \int_{\Gamma_i} \left[ \sum_{j=0}^{M-1} \pi_j\, C_{ij}\, p_j(y) \right] \mu(dy).
\]
Thus, by inspection, we see that the Bayes rule has decision regions given by
\[
\Gamma_i = \left\{ y \,\middle|\, \sum_{j=0}^{M-1} \pi_j\, C_{ij}\, p_j(y) = \min_{0 \le k \le M-1} \sum_{j=0}^{M-1} \pi_j\, C_{kj}\, p_j(y) \right\}.
\]
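The M-ary Bayes rule above is easy to state in code. The sketch below is illustrative (the priors, costs, and Gaussian densities in the example are placeholders, not from the text): it picks the index \(i\) minimizing \(\sum_j \pi_j C_{ij} p_j(y)\).

```python
# Minimal sketch of the M-ary Bayes rule of Exercise 16.
import math

def bayes_decision(y, priors, costs, densities):
    """Return argmin_i of sum_j priors[j]*costs[i][j]*densities[j](y)."""
    M = len(priors)
    scores = [
        sum(priors[j] * costs[i][j] * densities[j](y) for j in range(M))
        for i in range(M)
    ]
    return min(range(M), key=scores.__getitem__)

# Illustrative example: three unit-variance Gaussian hypotheses with means
# -1, 0, 1 and uniform costs C[i][j] = 1 - delta_ij, for which the rule
# reduces to the MAP (here maximum-likelihood) choice.
gauss = lambda m: (lambda y: math.exp(-0.5 * (y - m) ** 2) / math.sqrt(2 * math.pi))
priors = [1 / 3, 1 / 3, 1 / 3]
costs = [[0 if i == j else 1 for j in range(3)] for i in range(3)]
dens = [gauss(-1.0), gauss(0.0), gauss(1.0)]
print([bayes_decision(y, priors, costs, dens) for y in (-2.0, 0.1, 1.4)])  # [0, 1, 2]
```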
Exercise 19:
a. The likelihood ratio is given by
\[
L(y) = \frac{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_1}\, e^{-(y_k-\mu_1)^2/2\sigma_1^2}}{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_0}\, e^{-(y_k-\mu_0)^2/2\sigma_0^2}}
= \left( \frac{\sigma_0}{\sigma_1} \right)^{\!n} \exp\left\{ \frac{1}{2}\left( \frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2} \right) \sum_{k=1}^{n} y_k^2 + \left( \frac{\mu_1}{\sigma_1^2} - \frac{\mu_0}{\sigma_0^2} \right) \sum_{k=1}^{n} y_k + n\left( \frac{\mu_0^2}{2\sigma_0^2} - \frac{\mu_1^2}{2\sigma_1^2} \right) \right\}.
\]
When \(n = 1\) and \(\mu_0 = \mu_1 = 0\) with \(\sigma_1 > \sigma_0\), the likelihood ratio is monotone increasing in \(y_1^2\), and the NP test becomes
\[
\delta_{NP}(y) = \begin{cases} 1 & \text{if } y_1^2 \ge \eta' \\ 0 & \text{if } y_1^2 < \eta'. \end{cases}
\]
The false-alarm probability is
\[
P_F(\delta_{NP}) = P_0\!\left( Y_1^2 > \eta' \right) = 1 - P_0\!\left( |Y_1| \le \sqrt{\eta'} \right) = 2\left[ 1 - \Phi\!\left( \frac{\sqrt{\eta'}}{\sigma_0} \right) \right].
\]
Setting this equal to \(\alpha\) gives \(\sqrt{\eta'} = \sigma_0\, \Phi^{-1}\!\left( 1 - \tfrac{\alpha}{2} \right)\), and the detection probability is
\[
P_D(\delta_{NP}) = 1 - P_1\!\left( |Y_1| \le \sqrt{\eta'} \right) = 2\left[ 1 - \Phi\!\left( \frac{\sigma_0}{\sigma_1}\, \Phi^{-1}\!\left( 1 - \frac{\alpha}{2} \right) \right) \right], \qquad 0 < \alpha < 1.
\]
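As a final check (an addition), setting \(\sqrt{\eta'} = \sigma_0\,\Phi^{-1}(1-\alpha/2)\) should make \(P_F\) recover \(\alpha\) exactly, with \(P_D > \alpha\) whenever \(\sigma_1 > \sigma_0\). The sketch below evaluates \(\Phi^{-1}\) by bisection on `math.erf` to stay within the standard library.

```python
# Check the PF/PD formulas of Exercise 19 for n = 1, zero means.
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha, s0, s1 = 0.1, 1.0, 2.0
root_eta = s0 * Phi_inv(1.0 - alpha / 2.0)       # sqrt(eta')
pf = 2.0 * (1.0 - Phi(root_eta / s0))
pd = 2.0 * (1.0 - Phi(root_eta / s1))
print(pf, pd)  # pf recovers 0.1; pd exceeds alpha since sigma1 > sigma0
```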