
Making Optimal Decisions

Remember the example:

$$H_0 : x[0] < \gamma$$

$$H_1 : x[0] > \gamma$$

Using detection theory, rules can be derived for how to choose the threshold γ:

• Neyman-Pearson theorem: maximize the detection probability for a given false alarm probability.

• Minimum probability of error

• Bayesian detector

Minimum Probability of Error
Assume the prior probabilities of H0 and H1 are known and represented by P(H0) and P(H1), respectively. The probability of error, Pe, is then defined as

$$P_e = P(H_1)P(H_0|H_1) + P(H_0)P(H_1|H_0) = P(H_1)P_M + P(H_0)P_{FA}$$

Our goal is to design a detector that minimizes Pe. It can be shown that the following detector is optimal in this case: decide H1 if

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$

In case P(H0) = P(H1), the detector is called the maximum likelihood (ML) detector.
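As a concrete illustration, here is a minimal Python sketch of this likelihood-ratio test, assuming Gaussian likelihoods with known means and common variance (the function name and all parameter values are illustrative, not from the slides):

```python
import numpy as np
from scipy.stats import norm

def min_pe_detector(x, mu0, mu1, sigma, p_h0, p_h1):
    """Decide H1 if the likelihood ratio exceeds gamma = P(H0)/P(H1)."""
    # Likelihoods p(x|H0) and p(x|H1) for a scalar observation x
    p_x_h0 = norm.pdf(x, loc=mu0, scale=sigma)
    p_x_h1 = norm.pdf(x, loc=mu1, scale=sigma)
    gamma = p_h0 / p_h1                 # threshold from the priors
    return p_x_h1 / p_x_h0 > gamma      # True -> decide H1

# Example: equal priors reduce the test to the ML detector
print(min_pe_detector(x=0.8, mu0=0.0, mu1=1.0, sigma=1.0, p_h0=0.5, p_h1=0.5))
```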

Minimum Probability of Error - Derivation

$$P_e = P(H_1)P(H_0|H_1) + P(H_0)P(H_1|H_0) = P(H_1)\int_{R_0} p(\mathbf{x}|H_1)\,d\mathbf{x} + P(H_0)\int_{R_1} p(\mathbf{x}|H_0)\,d\mathbf{x}$$

where Ri denotes the decision region in which we decide Hi. We know that

$$\int_{R_0} p(\mathbf{x}|H_1)\,d\mathbf{x} = 1 - \int_{R_1} p(\mathbf{x}|H_1)\,d\mathbf{x},$$

such that

$$P_e = P(H_1)\left(1 - \int_{R_1} p(\mathbf{x}|H_1)\,d\mathbf{x}\right) + P(H_0)\int_{R_1} p(\mathbf{x}|H_0)\,d\mathbf{x} = P(H_1) + \int_{R_1} \left[P(H_0)\,p(\mathbf{x}|H_0) - P(H_1)\,p(\mathbf{x}|H_1)\right]d\mathbf{x}$$

Minimum Probability of Error - Derivation
$$P_e = P(H_1) + \int_{R_1} \left[P(H_0)\,p(\mathbf{x}|H_0) - P(H_1)\,p(\mathbf{x}|H_1)\right]d\mathbf{x}$$

We want to minimize Pe, so an x should only be included in the region R1 if the integrand

$$P(H_0)\,p(\mathbf{x}|H_0) - P(H_1)\,p(\mathbf{x}|H_1)$$

is negative for that x:

$$P(H_0)\,p(\mathbf{x}|H_0) < P(H_1)\,p(\mathbf{x}|H_1)$$

or, equivalently, decide H1 if

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$
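The derivation can be sanity-checked numerically. The following Monte Carlo sketch (assuming scalar Gaussian observations and unequal priors; all settings are illustrative) estimates Pe for the optimal threshold γ = P(H0)/P(H1) and for two perturbed thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)
p_h0, p_h1 = 0.7, 0.3          # unequal priors
mu0, mu1, sigma = 0.0, 1.0, 1.0
n_trials = 200_000

# Draw the true hypothesis, then the observation
h = rng.random(n_trials) < p_h1          # True -> H1 is in force
x = rng.normal(np.where(h, mu1, mu0), sigma)

def error_rate(gamma):
    """Empirical Pe for an LRT with threshold gamma on p(x|H1)/p(x|H0)."""
    llr = (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2  # log-likelihood ratio
    decide_h1 = llr > np.log(gamma)
    return np.mean(decide_h1 != h)

for g in [0.5 * p_h0 / p_h1, p_h0 / p_h1, 2.0 * p_h0 / p_h1]:
    print(f"gamma = {g:.3f}: Pe = {error_rate(g):.4f}")
# The middle threshold (gamma = P(H0)/P(H1)) should give the smallest Pe.
```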

Minimum Probability of Error – Example: DC in WGN
Consider the following signal detection problem

$$H_0 : x[n] = w[n], \qquad n = 0, 1, \ldots, N-1$$

$$H_1 : x[n] = s[n] + w[n], \qquad n = 0, 1, \ldots, N-1$$

where the signal is s[n] = A for A > 0 and w[n] is WGN with variance σ². Now the minimum probability of error detector decides H1 if p(x|H1)/p(x|H0) > P(H0)/P(H1) = 1 (assuming P(H0) = P(H1) = 0.5), leading to

$$\frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\right)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right]} > 1$$

Taking the logarithm of both sides and simplifying results in

$$\frac{1}{N}\sum_{n=0}^{N-1} x[n] > \frac{A}{2}$$
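A minimal simulation sketch of this sample-mean detector (the values of N, A, and σ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, A, sigma = 10, 1.0, 1.0
n_trials = 100_000

# Simulate both hypotheses with equal priors and apply x_bar > A/2
h1_true = rng.random(n_trials) < 0.5
x = rng.normal(0.0, sigma, (n_trials, N)) + A * h1_true[:, None]
x_bar = x.mean(axis=1)
decide_h1 = x_bar > A / 2

print("empirical Pe:", np.mean(decide_h1 != h1_true))
# Theory: Pe = Q(sqrt(N*A^2/(4*sigma^2))) for this detector
```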
Neyman-Pearson Theorem – Example: DC in WGN
Consider the following signal detection problem

$$H_0 : x[n] = w[n], \qquad n = 0, 1, \ldots, N-1$$

$$H_1 : x[n] = s[n] + w[n], \qquad n = 0, 1, \ldots, N-1$$

where the signal is s[n] = A for A > 0 and w[n] is WGN with variance σ². Now the NP detector decides H1 if

$$\frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-A\right)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right]} > \gamma$$

Taking the logarithm of both sides and simplifying results in

$$\frac{1}{N}\sum_{n=0}^{N-1} x[n] > \frac{\sigma^2}{NA}\ln\gamma + \frac{A}{2} = \gamma'$$
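In practice γ' is often set from a target false-alarm probability rather than from γ directly. A short sketch, using the fact that under H0 the sample mean is N(0, σ²/N) (the target PFA value is an assumption for illustration):

```python
import numpy as np
from scipy.stats import norm

N, A, sigma = 10, 1.0, 1.0
pfa_target = 1e-2

# PFA = P(x_bar > gamma' | H0) = Q(gamma' / sqrt(sigma^2 / N))
gamma_prime = np.sqrt(sigma**2 / N) * norm.isf(pfa_target)

# Resulting detection probability: PD = Q((gamma' - A) / sqrt(sigma^2 / N))
pd = norm.sf((gamma_prime - A) / np.sqrt(sigma**2 / N))
print(f"gamma' = {gamma_prime:.3f}, PD = {pd:.3f}")
```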
Minimum Probability of Error – Ex. DC in WGN
We decide H1 if $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] > A/2$ (the same form of detector as for the NP criterion, except for the threshold!).

The probability of error,

$$P_e = Q\left(\sqrt{\frac{NA^2}{4\sigma^2}}\right),$$

decreases monotonically with the SNR $NA^2/\sigma^2$, which is of course the deflection coefficient!
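A small sketch evaluating this Pe expression as the deflection coefficient d² = NA²/σ² grows (the d² values are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Pe = Q(sqrt(d2 / 4)) with deflection coefficient d2 = N * A^2 / sigma^2
for d2 in [1.0, 4.0, 9.0, 16.0]:
    pe = norm.sf(np.sqrt(d2 / 4))   # norm.sf is the Q-function
    print(f"d2 = {d2:4.1f} -> Pe = {pe:.4f}")
```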
Minimum Probability of Error – MAP detector

Starting from

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$

and using Bayes' rule,

$$P(H_i|\mathbf{x}) = \frac{p(\mathbf{x}|H_i)\,P(H_i)}{p(\mathbf{x})},$$

we arrive at: decide H1 if

$$P(H_1|\mathbf{x}) > P(H_0|\mathbf{x}).$$

This is called the maximum a posteriori (MAP) detector, which, if P(H1) = P(H0), reduces again to the ML detector.
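A minimal sketch of the MAP rule via unnormalized posteriors (the Gaussian likelihoods and parameter values are assumptions for illustration):

```python
import numpy as np
from scipy.stats import norm

def map_detector(x, mu0, mu1, sigma, p_h0, p_h1):
    """Decide the hypothesis with the larger posterior P(Hi|x).

    The evidence p(x) is common to both posteriors, so comparing
    p(x|Hi) * P(Hi) is equivalent to comparing P(Hi|x).
    """
    post0 = norm.pdf(x, mu0, sigma) * p_h0
    post1 = norm.pdf(x, mu1, sigma) * p_h1
    return int(post1 > post0)   # 1 -> decide H1, 0 -> decide H0

print(map_detector(x=0.3, mu0=0.0, mu1=1.0, sigma=1.0, p_h0=0.5, p_h1=0.5))
```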

• Another form of the minimum probability of error detector follows directly from the Bayesian inequality; thus we decide H1 if

$$P(H_1|\mathbf{x}) > P(H_0|\mathbf{x})$$

• We choose the hypothesis whose a posteriori probability is maximum, which minimizes the probability of error (maximum a posteriori probability (MAP) detector).

• For equal prior probabilities the MAP detector reduces to the ML detector.

• The decision regions for the DC level in WGN with N = 1 and A = 1 (figure omitted).
Bayes Risk

A generalisation of the minimum Pe criterion is one where costs are assigned to each type of error. Let Cij be the cost if we decide Hi while Hj is true. Minimizing the expected cost we get the Bayes risk

$$\mathcal{R} = E[C] = \sum_{i=0}^{1}\sum_{j=0}^{1} C_{ij}\,P(H_i|H_j)\,P(H_j)$$

If C10 > C00 and C01 > C11, the detector that minimises the Bayes risk is to decide H1 when

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{(C_{10} - C_{00})\,P(H_0)}{(C_{01} - C_{11})\,P(H_1)} = \gamma.$$
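A brief sketch of how the Bayes-risk threshold changes with the costs (the cost values are illustrative, e.g. a miss costing ten times a false alarm):

```python
# Bayes-risk threshold: gamma = (C10 - C00) * P(H0) / ((C01 - C11) * P(H1))
C00, C11 = 0.0, 0.0      # no cost for correct decisions
C10, C01 = 1.0, 10.0     # a miss (decide H0 under H1) costs 10x a false alarm
p_h0, p_h1 = 0.5, 0.5

gamma = (C10 - C00) * p_h0 / ((C01 - C11) * p_h1)
print("decide H1 when p(x|H1)/p(x|H0) >", gamma)   # 0.1: biased toward H1
```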

• Consider we have one observation x[0] (N = 1).

• To minimize Pe we decide the hypothesis whose PDF is maximum at x[0]:

$$H_0 \text{ if } x[0] < -\frac{A}{2}, \qquad H_1 \text{ if } -\frac{A}{2} \le x[0] \le \frac{A}{2}, \qquad H_2 \text{ if } x[0] > \frac{A}{2}$$
• Consider we have multiple observations (N > 1); we cannot plot the multivariate PDFs and observe the regions over which each one yields the maximum. Instead we need to derive a test statistic.

• The conditional PDF is

$$p(\mathbf{x}|H_i) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n]-s_i[n]\right)^2\right]$$

where s_i[n] is the signal under hypothesis H_i.

• To maximize the likelihood we minimize the exponential term.

Minimum Distance Receiver
• We decide the hypothesis H_k whose distance

$$D_k^2 = \sum_{n=0}^{N-1}\left(x[n]-s_k[n]\right)^2$$

is minimum.

• Minimum distance, or equivalently maximum probability of a correct decision.
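A minimal sketch of a minimum distance receiver choosing among a few candidate DC levels (the levels and noise settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 10, 1.0
levels = np.array([-1.0, 0.0, 1.0])   # candidate DC levels (hypotheses)

# Receive noisy data generated under the middle hypothesis (s[n] = 0)
x = rng.normal(0.0, sigma, N)

# Distances D_k^2 = ||x - s_k||^2 for each candidate signal s_k[n] = levels[k]
d2 = ((x[None, :] - levels[:, None]) ** 2).sum(axis=1)
print("decide H%d" % np.argmin(d2), "with distances", np.round(d2, 2))
```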
