Chapter 2: Detection Theory: Statistical Signal Processing


Statistical Signal Processing

Chapter 2: DETECTION THEORY


Instructor:

Phuong T. Tran, PhD.


(slides are borrowed from Prof. Natasha Devroye, UIC)

Ton Duc Thang University


Faculty of Electrical and Electronics Engineering

Apr. 27 2019
Outline

1 Binary Hypothesis Testing


Neyman - Pearson test
Bayesian Risk

2 Multiple Hypothesis Testing





Objectives

Understand the concepts of hypothesis testing and Bayesian risk.
Apply the Neyman - Pearson theorem to design a decision rule for hypothesis testing problems.



Binary Hypothesis Testing




Binary hypothesis testing


Decide between two hypotheses: H0 or H1 .
To do so we find a decision rule that maps the observation x into either H0 or H1 . Because the observation process is modeled probabilistically, the four possible outcomes occur with the following probabilities:
P(H0 ; H0 ) = prob(decide H0 when H0 is true) = probability of a correct non-detection
P(H0 ; H1 ) = prob(decide H0 when H1 is true) = probability of a missed detection := PM
P(H1 ; H0 ) = prob(decide H1 when H0 is true) = probability of a false alarm := PFA
P(H1 ; H1 ) = prob(decide H1 when H1 is true) = probability of detection := PD
More generally, P(Hi ; Hj ) is the probability of deciding Hi when hypothesis Hj is true.

Binary hypothesis testing

We want to design a “good” detection/decision rule, so we need a criterion for “good”. Two criteria are introduced in this course:
1 Neyman - Pearson (NP): maximize PD subject to a desired fixed PFA .
2 Generalized Bayes risk: minimize the Bayesian risk (cost function) for arbitrary costs Cij for deciding Hi when Hj is true. This criterion takes into account the prior probabilities P(Hi ) and reduces to the following for specific choices of the costs Cij and priors P(Hi ):
Minimum probability of error (min PE ) or maximum a posteriori (MAP): Cii = 0, Cij = 1 for i ≠ j.
Maximum likelihood (ML): Cii = 0, Cij = 1 for i ≠ j AND all priors are equal, i.e. P(Hi ) = P(Hj ), ∀i, j.



Neyman - Pearson test




Neyman - Pearson hypothesis testing
Theorem (Neyman - Pearson Theorem (3.1, p. 65))
To maximize PD for a given PFA = α, decide H1 if

$$L(x) := \frac{p(x; H_1)}{p(x; H_0)} > \gamma$$

where the threshold γ is found from

$$P_{FA} = \int_{\{x \,:\, L(x) > \gamma\}} p(x; H_0)\, dx = \alpha$$

L(x) is the likelihood ratio, and comparing L(x) to a threshold is termed the likelihood ratio test.
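As an illustration (not part of the original slides), here is a minimal Python sketch of a likelihood ratio test; the function name and the Gaussian densities are illustrative choices, and in practice the threshold would be set from the target PFA as above.

import numpy as np
from scipy.stats import norm

def likelihood_ratio_test(x, p1, p0, gamma):
    # Decide H1 (return 1) if L(x) = p(x; H1)/p(x; H0) exceeds gamma, else H0 (return 0).
    return int(p1(x) / p0(x) > gamma)

# Illustrative densities: a single Gaussian observation with mean 0 (H0) or mean 1 (H1), unit variance
p0 = lambda x: norm.pdf(x, loc=0, scale=1)
p1 = lambda x: norm.pdf(x, loc=1, scale=1)

print(likelihood_ratio_test(0.2, p1, p0, gamma=1.0))  # 0: L(0.2) = exp(0.2 - 0.5) < 1
print(likelihood_ratio_test(1.4, p1, p0, gamma=1.0))  # 1: L(1.4) = exp(1.4 - 0.5) > 1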


Example 1
Consider the two hypotheses:

H0 : x [0] ∼ N (0, 1) (µ = 0)
H1 : x [0] ∼ N (1, 1) (µ = 1)

Based on the single observation x[0], decide which hypothesis it was generated from.

Useful result from Problem 2.1 in the textbook: if T ∼ N(µ, σ²), then

$$\Pr\{T > \gamma\} = Q\!\left(\frac{\gamma - \mu}{\sigma}\right)$$
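A possible numerical sketch for this example (my own illustration; the target PFA value is an arbitrary choice): under H0 the NP test reduces to comparing x[0] to a threshold set from PFA, and PD then follows from the Q-function result above.

from scipy.stats import norm

alpha = 0.1                      # desired P_FA (arbitrary choice)
# Under H0, x[0] ~ N(0,1): P_FA = Q(gamma') = alpha  =>  gamma' = Q^{-1}(alpha)
gamma_prime = norm.isf(alpha)    # isf is the inverse of Q(t) = 1 - CDF(t)
# Under H1, x[0] ~ N(1,1): P_D = Q(gamma' - 1)
PD = norm.sf(gamma_prime - 1.0)  # sf(t) = Q(t)
print(gamma_prime, PD)           # approx. 1.2816 and 0.3891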


Example 2
We are given N observations x [n], n = 0, 1, ..., N − 1, which are i.i.d.
and, depending on the hypothesis, are generated as

$$H_0 : x[n] \sim \mathcal{N}(0, \sigma_0^2), \qquad H_1 : x[n] \sim \mathcal{N}(0, \sigma_1^2) \quad (\sigma_1^2 > \sigma_0^2)$$

Determine the Neyman - Pearson hypothesis test.
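For reference, the likelihood ratio here depends on the data only through the energy Σ x²[n] (since σ1² > σ0²), so the NP test reduces to an energy detector. A minimal Monte Carlo sketch of that detector, with arbitrary illustrative parameter values (the threshold would normally be set from the target PFA using the distribution of the statistic under H0):

import numpy as np

rng = np.random.default_rng(0)
N, sigma0, sigma1 = 10, 1.0, 2.0   # N samples; noise standard deviations under H0 and H1 (arbitrary)
threshold = 20.0                   # energy threshold (would be chosen to meet the target P_FA)

def energy_detector(x, threshold):
    # Decide H1 (return 1) if the energy statistic sum(x[n]^2) exceeds the threshold.
    return int(np.sum(x**2) > threshold)

# Estimate P_FA and P_D by simulation
trials = 10000
pfa = np.mean([energy_detector(rng.normal(0, sigma0, N), threshold) for _ in range(trials)])
pd  = np.mean([energy_detector(rng.normal(0, sigma1, N), threshold) for _ in range(trials)])
print(pfa, pd)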


Example 3
We are given N observations x [n], n = 0, 1, ..., N − 1, which are i.i.d.
and, depending on the hypothesis, are generated as

$$H_0 : x[n] = w[n], \qquad H_1 : x[n] = A + w[n], \qquad w[n] \sim \mathcal{N}(0, \sigma^2) \text{ i.i.d.}$$

Determine the Neyman - Pearson hypothesis test.
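For reference, for A > 0 the NP likelihood ratio test for this problem reduces to comparing the sample mean to a threshold, and the threshold that meets a given PFA does not depend on A. A minimal sketch, with arbitrary illustrative values:

import numpy as np
from scipy.stats import norm

def np_mean_detector(x, sigma2, alpha):
    # NP test for a positive DC level in WGN: decide H1 (return 1) if the sample mean
    # exceeds a threshold set so that P_FA = alpha; under H0 the mean is N(0, sigma2/N).
    N = len(x)
    threshold = np.sqrt(sigma2 / N) * norm.isf(alpha)   # note: the threshold does not involve A
    return int(np.mean(x) > threshold)

rng = np.random.default_rng(1)
x = 0.5 + rng.normal(0, 1.0, 100)      # data generated under H1 with A = 0.5, sigma^2 = 1
print(np_mean_detector(x, sigma2=1.0, alpha=0.01))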


Deflection coefficient

The deflection coefficient d is defined, for a test statistic T, as

$$d^2 = \frac{[E(T; H_1) - E(T; H_0)]^2}{\operatorname{var}(T; H_0)}$$

and is useful in characterizing the performance of a detector. Usually, the larger the deflection coefficient, the easier it is to differentiate between the two signals, and thus the better the detection performance.
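For instance (my own illustration), for the DC-level problem of Example 3 with test statistic T = sample mean, E(T; H1) − E(T; H0) = A and var(T; H0) = σ²/N, so d² = N A²/σ²; a tiny sketch with arbitrary values:

def deflection_coefficient(mean_H1, mean_H0, var_H0):
    # d^2 = [E(T;H1) - E(T;H0)]^2 / var(T;H0)
    return (mean_H1 - mean_H0) ** 2 / var_H0

A, sigma2, N = 1.0, 2.0, 25            # arbitrary illustration values
print(deflection_coefficient(A, 0.0, sigma2 / N))   # = N*A^2/sigma2 = 12.5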


Receiver operating characteristic (ROC)

The receiver operating characteristic (ROC) is a graph of PD (y-axis) versus PFA (x-axis), showing the tradeoff between the two. For γ = +∞ you have PFA = PD = 0, while for γ = −∞ you have PFA = PD = 1. For intermediate γ you lie on a curve above the PD = PFA line.
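As an illustration (not from the slides), for the Gaussian mean-shift problem of Example 1 the ROC can be traced using the Q-function result, PD = Q(Q⁻¹(PFA) − 1); a short plotting sketch:

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

pfa = np.linspace(1e-4, 1 - 1e-4, 500)
pd = norm.sf(norm.isf(pfa) - 1.0)          # P_D = Q(Q^{-1}(P_FA) - 1) for Example 1
plt.plot(pfa, pd, label='ROC for Example 1')
plt.plot([0, 1], [0, 1], '--', label='P_D = P_FA line')
plt.xlabel('P_FA'); plt.ylabel('P_D'); plt.legend(); plt.show()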



Bayesian Risk




Bayesian risk
Associate with each of the four detection possibilities a cost, i.e. Cij is the cost of deciding hypothesis Hi when hypothesis Hj is true. In the binary hypothesis testing case, i, j ∈ {0, 1}. Let P(Hi |Hj ) be the probability of deciding Hi when Hj is true, and P(Hi ) be the prior probability of hypothesis Hi . Note that the Bayesian formulation, which assigns priors to the hypotheses, differs from the classical Neyman - Pearson criterion.

$$\text{Bayes risk} := R = E[C] = \sum_{i=0}^{1} \sum_{j=0}^{1} C_{ij}\, P(H_i | H_j)\, P(H_j)$$

Under the assumption that C10 > C00 and C01 > C11 , the detector that minimizes the Bayes risk decides H1 if

$$\frac{p(x|H_1)}{p(x|H_0)} > \frac{(C_{10} - C_{00})\, P(H_0)}{(C_{01} - C_{11})\, P(H_1)} = \gamma$$
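A minimal sketch of this Bayes-risk likelihood ratio test (the cost matrix, priors, and densities below are arbitrary illustrations):

from scipy.stats import norm

def bayes_lrt(x, p1, p0, C, priors):
    # Decide H1 (return 1) if p(x|H1)/p(x|H0) exceeds the Bayes threshold
    # gamma = (C10 - C00) P(H0) / ((C01 - C11) P(H1)).
    gamma = (C[1][0] - C[0][0]) * priors[0] / ((C[0][1] - C[1][1]) * priors[1])
    return int(p1(x) / p0(x) > gamma)

C = [[0.0, 1.0],                      # C[i][j] = cost of deciding Hi when Hj is true
     [1.0, 0.0]]
priors = [0.7, 0.3]                   # P(H0), P(H1) (arbitrary)
p0 = lambda x: norm.pdf(x, 0, 1)      # p(x|H0)
p1 = lambda x: norm.pdf(x, 1, 1)      # p(x|H1)
print(bayes_lrt(0.9, p1, p0, C, priors))   # 0 here: L(0.9) ≈ 1.49 < gamma ≈ 2.33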


Bayesian risk
The Bayesian risk detection framework encompasses:
Minimum probability of error (min PE ) or maximum a posteriori (MAP) (they are the same): Cii = 0, Cij = 1 for i ≠ j. These detectors decide H1 if
- min PE : $$\frac{p(x|H_1)}{p(x|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$
- MAP: P(H1 |x) > P(H0 |x)
Maximum likelihood (ML): Cii = 0, Cij = 1 for i ≠ j AND all priors are equal, i.e. P(Hi ) = P(Hj ), ∀i, j. This detector decides H1 if
- ML: p(x|H1 ) > p(x|H0 )
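To see that the MAP rule and the min-PE likelihood ratio test above are the same test, a small sketch (the priors and densities are arbitrary illustrations):

from scipy.stats import norm

priors = {'H0': 0.6, 'H1': 0.4}                 # arbitrary priors
lik = {'H0': lambda x: norm.pdf(x, 0, 1),       # p(x|H0)
       'H1': lambda x: norm.pdf(x, 1, 1)}       # p(x|H1)

x = 0.8
# MAP: compare (unnormalized) posteriors p(x|Hi) P(Hi)
map_decides_H1 = lik['H1'](x) * priors['H1'] > lik['H0'](x) * priors['H0']
# min-PE LRT: compare the likelihood ratio to P(H0)/P(H1)
lrt_decides_H1 = lik['H1'](x) / lik['H0'](x) > priors['H0'] / priors['H1']
print(map_decides_H1, lrt_decides_H1)           # the two rules always agree (both False here)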


Bayesian risk example

Example 4
Find the detection rule that minimizes the probability of error for the
following binary hypothesis testing problem:

$$H_0 : x[n] = w[n], \qquad H_1 : x[n] = A + w[n], \qquad w[n] \sim \mathcal{N}(0, \sigma^2) \text{ i.i.d.}, \; A > 0$$

Also determine the probability of error achieved by this detector.
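For reference, with equal priors the min-PE detector for this problem is known to compare the sample mean to A/2, giving PE = Q(√(N A²/(4σ²))); a Monte Carlo sketch that checks this (the parameter values are arbitrary):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
A, sigma, N, trials = 1.0, 2.0, 16, 20000        # arbitrary illustration values

errors = 0
for _ in range(trials):
    h1_true = rng.random() < 0.5                 # equal priors
    x = (A if h1_true else 0.0) + rng.normal(0, sigma, N)
    decide_h1 = np.mean(x) > A / 2               # min-PE rule for equal priors
    errors += decide_h1 != h1_true
print(errors / trials, norm.sf(np.sqrt(N * A**2 / (4 * sigma**2))))   # the two should roughly agree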



Multiple Hypothesis Testing




Multiple hypothesis testing


In binary hypothesis testing we detected one of 2 hypotheses. We now wish to detect one of M > 2 hypotheses. A Neyman - Pearson formulation is possible (see the reference, p. 81), but here we only consider Bayes risk minimization, i.e. we wish to minimize

$$R = \sum_{i=0}^{M-1} \sum_{j=0}^{M-1} C_{ij}\, P(H_i | H_j)\, P(H_j)$$

The detector that minimizes the Bayes risk chooses the hypothesis Hi for which

$$C_i(x) := \sum_{j=0}^{M-1} C_{ij}\, P(H_j | x)$$

is minimal over all i = 0, 1, ..., M − 1 (i.e. it picks the decision with minimal expected cost given the data).
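A minimal sketch of this minimum-Bayes-risk decision over M hypotheses, working with posteriors (the densities, priors, and costs below are arbitrary illustrations; the 0-1 cost matrix makes it a MAP/min-PE detector):

import numpy as np
from scipy.stats import norm

def bayes_mary_detector(x, liks, priors, C):
    # Choose the hypothesis i minimizing C_i(x) = sum_j C[i][j] * P(Hj | x).
    post = np.array([lik(x) * p for lik, p in zip(liks, priors)])
    post /= post.sum()                     # posteriors P(Hj | x)
    costs = np.asarray(C) @ post           # expected cost of deciding each Hi
    return int(np.argmin(costs))

liks = [lambda x: norm.pdf(x, -1, 1),      # p(x|H0)
        lambda x: norm.pdf(x, 0, 1),       # p(x|H1)
        lambda x: norm.pdf(x, 1, 1)]       # p(x|H2)
priors = [1/3, 1/3, 1/3]
C = 1 - np.eye(3)                          # 0-1 costs => MAP / min-PE detector
print(bayes_mary_detector(0.2, liks, priors, C))   # 1: x = 0.2 is closest to the mean under H1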

Multiple hypothesis testing (cont.)

Once again, the Bayes risk is a generalization of the MAP and ML detectors, which in the multiple hypothesis case reduce to:
Minimum probability of error (min PE ) or maximum a posteriori (MAP) (they are the same): Cii = 0, Cij = 1 for i ≠ j. These detectors decide Hi if P(Hi |x) > P(Hj |x), ∀j ≠ i.
Maximum likelihood (ML): Cii = 0, Cij = 1 for i ≠ j AND all priors are equal, i.e. P(Hi ) = P(Hj ), ∀i, j. This detector decides Hi if p(x|Hi ) > p(x|Hj ), ∀j ≠ i.


Multiple hypothesis testing example

Example 5
Assume that we have three hypotheses:

$$H_0 : x[n] = -A + w[n], \quad H_1 : x[n] = w[n], \quad H_2 : x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

where A > 0 and w[n] is white Gaussian noise with variance σ².

Assuming equal priors on the hypotheses, find the detector that minimizes the probability of error and find an expression for this probability of error.
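For reference, with equal priors the min-PE detector here picks the hypothesis whose mean is closest to the sample mean (equivalently, it compares the sample mean to ±A/2). The sketch below estimates its error probability by simulation and, as a cross-check, prints (4/3)Q(√(N A²/(4σ²))), which a quick calculation suggests is the exact answer (verify against your own derivation); all parameter values are arbitrary:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
A, sigma, N, trials = 1.0, 1.5, 10, 20000      # arbitrary illustration values
means = np.array([-A, 0.0, A])                 # means of x[n] under H0, H1, H2

errors = 0
for _ in range(trials):
    j = rng.integers(3)                        # true hypothesis, equal priors
    x = means[j] + rng.normal(0, sigma, N)
    i = np.argmin(np.abs(np.mean(x) - means))  # min-PE rule: hypothesis mean nearest the sample mean
    errors += i != j
pe_candidate = (4 / 3) * norm.sf(np.sqrt(N * A**2 / (4 * sigma**2)))   # candidate closed form
print(errors / trials, pe_candidate)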


Steven M. Kay. Fundamentals of Statistical Signal Processing, Volume II: Detection Theory, 1st ed. Prentice-Hall PTR, 1998.
