
Elements of Detection Theory

Sistemi di Telecomunicazioni LM
Communication Systems: Theory and Measurement

Academic Year 2022-2023

Dipartimento di Ingegneria dell’Energia Elettrica e dell’Informazione (DEI) - Guglielmo Marconi


University of Bologna

Prof. Davide Dardari


Outline

1 Elements of Detection Theory


MAP and ML decision rules
Binary hypothesis testing: LRT
Examples
2 Detection of two waveforms in AWGN
Problem setting
Likelihood Ratio test (LRT)
Sufficient statistics
3 Correlator- and Matched Filter-based Schemes

4 Performance

5 Generalizations
Extension to non-binary signaling
Extension to complex signals
6 Examples

7 The main plot





Hypothesis testing

Consider a random phenomenon whose state can take one out of $M$ outcomes (hypotheses)

$$H \in \{H_0, H_1, \ldots, H_{M-1}\}$$

with probabilities $p_i = P\{H = H_i\} = P_H(H_i)$, known a priori. $H$ is a random variable.

[Diagram: a random phenomenon in state H is observed through a noisy measurement; Y is a random vector and y a realization of Y (measurement).]

We cannot observe H directly, but only through a "noisy" measurement $Y = [Y_1, Y_2, \ldots, Y_N]$ whose statistical characteristics depend on the actual value taken by H through the (known) observation model

$$P_{Y|H}(y|H = H_i) = P_{Y|H}(y|H_i)$$

This is the likelihood of observing $y$ under the hypothesis $H_i$.
Hypothesis testing

Our objective is to find the rule that maximizes the probability of a correct decision when observing $y$:

$\hat H = \hat H(y)$ (detector), with $\hat H \in \{H_0, H_1, \ldots, H_{M-1}\}$ (decision). How do we find the function $\hat H(y)$?

The probability that $H_i$ is the correct hypothesis given that $y$ was observed, defined by $P_{H|Y}(H_i|Y = y) = P_{H|Y}(H_i|y)$, is given by (Bayes' rule)

$$P_{H|Y}(H_i|y) = \frac{P_{Y|H}(y|H_i)\, p_i}{P_Y(y)}$$

where $P_Y(y) = \sum_{i=0}^{M-1} P_{Y|H}(y|H_i)\, p_i$, and $p_i = P_H(H_i)$.
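As a quick numerical illustration, here is a minimal Python sketch of the posterior computation and MAP decision; the means, priors, and noise level below are illustrative choices, not taken from the slides:

```python
import numpy as np

# Minimal sketch: MAP detection among M hypotheses with Gaussian likelihoods.
# The means mu[i], priors p[i], and sigma are illustrative assumptions.
rng = np.random.default_rng(0)

mu = np.array([-2.0, 0.0, 2.0])   # mean of Y under hypothesis H_i
p = np.array([0.2, 0.3, 0.5])     # a-priori probabilities p_i
sigma = 1.0

H_true = rng.choice(len(mu), p=p)                        # hidden hypothesis
y = mu[H_true] + sigma * rng.standard_normal()           # noisy observation

# Likelihoods P(y|H_i) and posteriors P(H_i|y) via Bayes' rule
lik = np.exp(-(y - mu) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
post = lik * p / np.sum(lik * p)

H_map = int(np.argmax(post))                             # MAP decision
print(f"y = {y:.3f}: true H{H_true}, decided H{H_map}, posteriors {post.round(3)}")
```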



MAP decision rule

The rule that maximizes the probability of correct decision, and hence minimizes the probability of error, is the

Maximum a posteriori (MAP) decision rule

$$\hat H = \hat H(y) = \arg\max_{H_i:\, i = 0, \ldots, M-1} P_{H|Y}(H_i|y)$$


Binary hypothesis testing

Suppose $M = 2$, i.e., $H \in \{H_0, H_1\}$.

The MAP rule is equivalent to the test

$$P_{H|Y}(H_1|y) \underset{\hat H = H_0}{\overset{\hat H = H_1}{\gtrless}} P_{H|Y}(H_0|y)$$

It can be rewritten as

$$\frac{P_{Y|H}(y|H_1)\, p_1}{P_Y(y)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P_{Y|H}(y|H_0)\, p_0}{P_Y(y)} \qquad (1)$$

Likelihood ratio test (LRT)

$$\Lambda(y) = \frac{P_{Y|H}(y|H_1)}{P_{Y|H}(y|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p_0}{p_1} = \eta$$

$\Lambda(y)$: likelihood ratio (LR). $\eta$: threshold.



Likelihood ratio test (LRT)

[Diagram: the observation y feeds a "compute LR" block producing Λ(y), followed by a threshold device that outputs the decision Ĥ.]

Note: $\Lambda(y)$ is always a scalar, regardless of the dimension $N$ of the observation $y$.

$\Lambda(y)$ is said to be a sufficient statistic.

The log-likelihood ratio test (LLRT) is often used equivalently:

$$l(y) = \ln \Lambda(y) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta$$



Particular case: equiprobable hypotheses
Equiprobable hypotheses: $p_i = \frac{1}{M}$

The MAP decision rule is equivalent to the

Maximum likelihood (ML) decision rule

$$\hat H = \hat H(y) = \arg\max_{H_i:\, i = 0, \ldots, M-1} P_{Y|H}(y|H_i)$$

[Diagram: a bank of M blocks computes P(y|H_0), ..., P(y|H_{M-1}) from y; the decision Ĥ selects the hypothesis with the largest likelihood.]



Example: 1-bit transmission

Problem statement

[Diagram: a source emits +A or −A; the detector observes y and outputs the decision Ĥ.]

$$H_0: y = -A + n$$
$$H_1: y = +A + n$$

with $A > 0$ (known) and $n \sim \mathcal{N}(0, \sigma^2)$.

Question: Find the decision rule which minimizes the probability of error and compute the
error probability



Example: 1-bit transmission
Observation model (likelihood function)

$$P_{Y|H}(y|H_0) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y + A)^2}{2\sigma^2}\right) \qquad P_{Y|H}(y|H_1) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y - A)^2}{2\sigma^2}\right)$$

Likelihood ratio

$$\Lambda(y) = \frac{P_{Y|H}(y|H_1)}{P_{Y|H}(y|H_0)} = \exp\left(-\frac{1}{2\sigma^2}\left[\left(y^2 + A^2 - 2Ay\right) - \left(y^2 + A^2 + 2Ay\right)\right]\right) = \exp\left(\frac{2yA}{\sigma^2}\right)$$

Log-likelihood ratio test (LLRT)

$$l(y) \underset{H_0}{\overset{H_1}{\gtrless}} \ln\frac{p_0}{p_1} \quad\rightarrow\quad y \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma^2}{2A}\ln\frac{p_0}{p_1} = \lambda$$

[Diagram: threshold device comparing the observation y with λ to produce the decision Ĥ.]

If the hypotheses are equiprobable, $\lambda = 0$, and the test is simply $y \underset{H_0}{\overset{H_1}{\gtrless}} 0$.
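A minimal Monte Carlo sketch of this threshold test; A, σ, and the priors are illustrative assumptions:

```python
import numpy as np

# Sketch of the MAP threshold test y ≷ λ for the ±A example in Gaussian noise.
rng = np.random.default_rng(1)
A, sigma = 1.0, 0.5
p0, p1 = 0.5, 0.5
lam = sigma**2 / (2 * A) * np.log(p0 / p1)   # threshold λ (0 if equiprobable)

n_trials = 100_000
bits = rng.integers(0, 2, n_trials)          # 0 -> H0 (-A), 1 -> H1 (+A)
y = np.where(bits == 1, A, -A) + sigma * rng.standard_normal(n_trials)

decisions = (y > lam).astype(int)            # the LLRT reduces to a threshold on y
print("empirical Pe:", np.mean(decisions != bits))
```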
Example: 1-bit transmission

Probability of error

$$P_e = P_{e|H_0} \cdot P_H(H_0) + P_{e|H_1} \cdot P_H(H_1)$$

where $P_H(H_0) = P_H(H_1) = 0.5$.

$$P_{e|H_0} = P\{Y > 0 | H_0\} = \int_0^{\infty} P_{Y|H}(y|H_0)\, dy$$
$$P_{e|H_1} = P\{Y < 0 | H_1\} = \int_{-\infty}^{0} P_{Y|H}(y|H_1)\, dy$$





Example: 1-bit transmission
 
Note that $P\{Y > 0 | H_0\} = P\{n > A | H_0\} = \frac{1}{2}\operatorname{erfc}\left(\frac{A}{\sqrt{2}\,\sigma}\right)$.

Due to symmetry:

Probability of error

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\frac{A}{\sqrt{2}\,\sigma}\right) = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\mathrm{SNR}}\right)$$

where $\mathrm{SNR} = \frac{A^2}{2\sigma^2}$.

[Plot: $P_e$ versus SNR (dB); $P_e$ ranges from $10^0$ down to $10^{-9}$ as the SNR goes from 0 to 15 dB.]
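The curve can be reproduced with a short sketch comparing the closed form with a Monte Carlo estimate (A = 1; σ is set from the target SNR):

```python
import numpy as np
from scipy.special import erfc

# Sketch: Pe = 0.5*erfc(sqrt(SNR)) versus a simulation of the threshold detector.
rng = np.random.default_rng(2)
for snr_db in (0, 5, 10):
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))            # SNR = A^2/(2 sigma^2) with A = 1
    bits = rng.integers(0, 2, 1_000_000)
    y = np.where(bits == 1, 1.0, -1.0) + sigma * rng.standard_normal(bits.size)
    pe_sim = np.mean((y > 0).astype(int) != bits)
    pe_th = 0.5 * erfc(np.sqrt(snr))
    print(f"SNR {snr_db:2d} dB: simulated {pe_sim:.2e}, theory {pe_th:.2e}")
```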

Exercise: Compute $P_e$ in the case of non-equiprobable hypotheses.


Example: radar

Problem statement

$$H_0: y(t) = w(t)$$
$$H_1: y(t) = s(t) + w(t), \quad t \in [0, T_0]$$

$s(t)$ is known and zero outside the observation interval.

$w(t)$: zero-mean ergodic Gaussian random process with white spectrum $G_w(f) = \frac{N_0}{2}$.

Question: Determine the likelihood ratio test



Example: radar

The LR functional is

$$\Lambda(y(t)) = \frac{P(y(t)|H_1)}{P(y(t)|H_0)} = \frac{P(y(t)|s(t))}{P(y(t)|s(t) = 0)} \underset{H_0}{\overset{H_1}{\gtrless}} \eta$$

Recall that

$$P(y(t)|x(t)) = p(y|x) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{\|y - x\|^2}{2\sigma^2}\right) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{1}{2\sigma^2}\int_{T_0} |y(t) - x(t)|^2\, dt\right)$$

where $\sigma^2 = N_0/2$, and $y$ and $x$ are the vectors containing the series expansion coefficients of $y(t)$ and $x(t)$, respectively, according to some basis set in $[0, T_0]$.



Example: radar

 
$$\Lambda(y(t)) = \frac{P(y(t)|H_1)}{P(y(t)|H_0)} = \frac{\exp\left(-\frac{1}{N_0}\int_{T_0} |y(t) - s(t)|^2\, dt\right)}{\exp\left(-\frac{1}{N_0}\int_{T_0} |y(t)|^2\, dt\right)}$$

Taking the logarithm:

$$l(y(t)) = \ln \Lambda(y(t)) = -\frac{1}{N_0}\int_{T_0} y^2(t)\, dt - \frac{1}{N_0}\int_{T_0} s^2(t)\, dt + \frac{2}{N_0}\int_{T_0} y(t)\, s(t)\, dt + \frac{1}{N_0}\int_{T_0} y^2(t)\, dt$$

$$l(y(t)) = -\frac{E_s}{N_0} + \frac{2}{N_0}\int_{T_0} y(t)\, s(t)\, dt \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta = \ln\frac{p_0}{p_1}$$

After some rearrangement the test becomes

$$Z = \int_{T_0} y(t)\, s(t)\, dt \underset{H_0}{\overset{H_1}{\gtrless}} \frac{E_s}{2} + \frac{N_0}{2}\ln\frac{p_0}{p_1} = \lambda$$

$Z$ is the sufficient statistic. The test involves the cross-correlation between the received signal $y(t)$ and a local replica of $s(t)$.
Example: radar

Correlator

[Diagram: y(t) is multiplied by s(t) and integrated over [0, T0] to produce Z, followed by the threshold decision Ĥ.]

We can obtain the same result using the following equivalent scheme (Matched filter)

Matched filter

[Diagram: y(t) passes through a filter with impulse response h(t) and is sampled at t = T0 to produce Z, followed by the threshold decision Ĥ.]

where $h(t) = s(T_0 - t)$.

In fact, $z(t) = y(t) \otimes h(t) = \int y(\tau)\, h(t - \tau)\, d\tau = \int y(\tau)\, s(T_0 - t + \tau)\, d\tau$.
Sampling at $t = T_0$ we obtain

$$Z = z(T_0) = \int_{T_0} y(\tau)\, s(\tau)\, d\tau \quad \text{(correlation)}$$
Example: Detection of a binary sequence in Gaussian noise

Problem statement:
$a = [a_1, a_2, \ldots, a_N]$, with $a_i \in \{-1, +1\}$

We get the observations $y_i = a_i + n_i$, $i = 1, 2, \ldots, N$, with $n_i \sim \mathcal{N}(0, \sigma^2)$ i.i.d.

In vector form: $y = a + n$

Denote by $\mathcal{A}$ the set of all possible sequences. Note that we have $M = 2^N$ hypotheses (possible sequences).
Assume the sequences are equiprobable, i.e., $p_i = \frac{1}{M}$, $i = 1, \ldots, M$.

Question: Determine the detector providing the minimum probability of error

The ML decision rule provides the minimum probability of error:

$$\hat a = \hat a(y) = \arg\max_{a \in \mathcal{A}} p(y|a) = \arg\max_{a \in \mathcal{A}} \ln p(y|a)$$



Example: Detection of a binary sequence in Gaussian noise

$$p(y|a) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y_i - a_i)^2}{2\sigma^2}\right) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{\sum_{i=1}^{N}(y_i - a_i)^2}{2\sigma^2}\right)$$

$$\hat a = \hat a(y) = \arg\max_{a \in \mathcal{A}} \ln p(y|a) = \arg\min_{a \in \mathcal{A}} \sum_{i=1}^{N} (y_i - a_i)^2 = \arg\min_{a \in \mathcal{A}} \|y - a\|^2 \qquad (2)$$

We decide for the sequence $a$ that is closest to the observed sequence $y$.

Note that, in general, the complexity of computing (2) is prohibitive, as one has to test $M = 2^N$ sequences!¹

However, in this (lucky) particular case, in which all sequences are possible and equiprobable, (2) can be solved by decomposing it into $N$ simpler tests (symbol-by-symbol decision):

$$\hat a_i = \arg\min_{a_i \in \{-1, +1\}} (y_i - a_i)^2, \quad i = 1, 2, \ldots, N$$

¹ The estimated number of atoms in the observable universe is between $10^{78}$ and $10^{82}$; the number of binary sequences of length $N = 1000$, namely $2^{1000} \approx 10^{301}$, is astronomically larger.
Example: Detection of a binary sequence in Gaussian noise

The probability of decoding a single symbol erroneously is

$$P_{es} = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\mathrm{SNR}}\right)$$

with $\mathrm{SNR} = \frac{A^2}{2\sigma^2}$ (in this case $A = 1$).

Probability of error of the sequence (i.e., the probability that at least one symbol is in error):

$$P_e = 1 - (1 - P_{es})^N$$

Note: this holds only when the symbol errors are independent, as with symbol-by-symbol decoding; see the sketch below.
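A Monte Carlo sketch of the symbol-by-symbol detector and of this sequence error formula; N, σ, and the number of trials are illustrative:

```python
import numpy as np
from scipy.special import erfc

# Sketch: symbol-by-symbol ML detection of a ±1 sequence in i.i.d. Gaussian noise.
rng = np.random.default_rng(4)
N, sigma, n_seq = 10, 0.8, 200_000

a = rng.choice([-1.0, 1.0], size=(n_seq, N))      # equiprobable random sequences
y = a + sigma * rng.standard_normal((n_seq, N))
a_hat = np.sign(y)                                 # nearest point in {-1, +1}

pe_seq_sim = np.mean(np.any(a_hat != a, axis=1))   # at least one symbol wrong
pes = 0.5 * erfc(np.sqrt(1 / (2 * sigma**2)))      # single-symbol error probability
print(f"simulated {pe_seq_sim:.4f}, theory {1 - (1 - pes)**N:.4f}")
```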

In general, by introducing redundancy (coding), and hence dependence, one can improve the
performance.





Problem setting

Consider the following binary detection problem

$$H_1: y(t) = s_1(t) + w(t)$$
$$H_2: y(t) = s_2(t) + w(t)$$

where $s_1(t)$ and $s_2(t)$ are (known) finite-energy signals associated with the equiprobable hypotheses $H_1$ and $H_2$, respectively, and $w(t)$ is AWGN with power spectral density $G_w(f) = \frac{N_0}{2}$.

The target is to minimize the probability of error $P_e$ in deciding whether $s_1(t)$ ($H_1$) or $s_2(t)$ ($H_2$) was transmitted, based on the observation of $y(t)$ in $[0, T]$.



Problem setting

The problem is equivalent to the following one

$$H_1: y = s_1 + w$$
$$H_2: y = s_2 + w$$

where $y$, $s_1$, $s_2$ and $w$ are the $N$-dimensional vector representations of the signals $y(t)$, $s_1(t)$, $s_2(t)$ and $w(t)$, respectively.
In particular, $w \sim \mathcal{N}(0, \sigma^2 I)$, with $\sigma^2 = N_0/2$.



Likelihood Ratio test (LRT)

Likelihood ratio test (LRT)

$$\Lambda(y) = \frac{p(y|H_1)}{p(y|H_2)} \underset{H_2}{\overset{H_1}{\gtrless}} 1$$

Note: the LRT in this case corresponds to the MAP criterion, which minimizes the probability of error, and is equivalent to the ML criterion (equiprobable hypotheses).



Likelihood Ratio test (LRT)

In fact, since

$$p(y|H_i) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left(-\frac{\|y - s_i\|^2}{2\sigma^2}\right)$$

it is

Log-likelihood

$$l(y) = \ln \Lambda(y) = -\frac{1}{2\sigma^2}\|y - s_1\|^2 + \frac{1}{2\sigma^2}\|y - s_2\|^2 \underset{H_2}{\overset{H_1}{\gtrless}} 0$$

and hence

Nearest neighbor rule

$$\|y - s_2\|^2 \underset{H_2}{\overset{H_1}{\gtrless}} \|y - s_1\|^2 \quad\rightarrow\quad \hat H = \arg\min_{i=1,2} \|y - s_i\|^2 \qquad (3)$$



Sufficient statistics

Developing (3) we get

$$\|y\|^2 + \|s_2\|^2 - 2\, s_2^T y \underset{H_2}{\overset{H_1}{\gtrless}} \|y\|^2 + \|s_1\|^2 - 2\, s_1^T y \qquad (4)$$

$$(s_2 - s_1)^T y \underset{H_1}{\overset{H_2}{\gtrless}} \frac{1}{2}\left(\|s_2\|^2 - \|s_1\|^2\right) \qquad (5)$$

Sufficient statistic

$$(s_2 - s_1)^T y = \langle (s_2 - s_1), y \rangle \qquad (6)$$



Geometric interpretation

[Diagram: signal space with axes y1 and y2, showing the vectors s1 and s2, the observation y, and its projection onto s2 − s1.]

Sufficient statistic

$$(s_2 - s_1)^T y = \langle (s_2 - s_1), y \rangle$$

It represents the projection of the vector $y$ onto the vector $s_2 - s_1$.



Continuous time

Since

$$(s_2 - s_1)^T y = \langle (s_2 - s_1), y \rangle = \langle s_2(t) - s_1(t), y(t) \rangle = \int_0^T (s_2(t) - s_1(t))\, y(t)\, dt \qquad (7)$$

the LRT (5) becomes

$$\int_0^T (s_2(t) - s_1(t))\, y(t)\, dt \underset{H_1}{\overset{H_2}{\gtrless}} \frac{E_{s_2} - E_{s_1}}{2} = \eta \qquad (8)$$





Correlator scheme

Correlator

[Diagram: y(t) is multiplied by s2(t) − s1(t) and integrated over [0, T] to produce Z, followed by the threshold decision Ĥ.]

$$Z = \int_0^T (s_2(t) - s_1(t))\, y(t)\, dt \underset{H_1}{\overset{H_2}{\gtrless}} \frac{E_{s_2} - E_{s_1}}{2} = \eta \qquad (9)$$



Matched filter scheme

Define the filter with impulse response $h(t) = s_2(T - t) - s_1(T - t)$.

Matched filter

[Diagram: y(t) passes through the filter h(t) and is sampled at t = T to produce Z, followed by the threshold decision Ĥ.]

The response of the filter to y(t), sampled at time t = T, is equivalent to that of the correlator:

$$Z = \int_0^T (s_2(t) - s_1(t))\, y(t)\, dt = \int y(\tau)\, h(T - \tau)\, d\tau \qquad (10)$$





Performance

Divide both sides of (5) by $\|s_2 - s_1\|$:

$$z \underset{H_1}{\overset{H_2}{\gtrless}} \frac{1}{2}\,\frac{\|s_2\|^2 - \|s_1\|^2}{\|s_2 - s_1\|} \qquad (11)$$

where, for convenience, we have defined the normalized (scalar) sufficient statistic

$$z = \frac{(s_2 - s_1)^T}{\|s_2 - s_1\|}\, y = v^T y \qquad (12)$$

with

$$v = \frac{s_2 - s_1}{\|s_2 - s_1\|} \qquad (13)$$

Note: $v$ is the orientation of the vector $s_2 - s_1$ and $d_{12} = \|s_2 - s_1\|$ is the Euclidean distance between the signals $s_1$ and $s_2$.



Performance

Thanks to the isotropic property of Gaussian noise, the probability of error is the same for the two symbols, so we can focus on the transmission of $s_1$.
The received signal is

$$y = s_1 + w \qquad (14)$$

The normalized sufficient statistic (conditioned on $s_1$) is

$$z = v^T y = \frac{(s_2 - s_1)^T}{\|s_2 - s_1\|}\, s_1 + \tilde w \qquad (15)$$

Since $\|v\| = 1$, $\tilde w \sim \mathcal{N}(0, \sigma^2)$.



Performance

The test is

$$z = \frac{(s_2 - s_1)^T}{\|s_2 - s_1\|}\, s_1 + \tilde w \underset{H_1}{\overset{H_2}{\gtrless}} \frac{1}{2}\,\frac{\|s_2\|^2 - \|s_1\|^2}{\|s_2 - s_1\|}$$

There is an error (i.e., we decide for $s_2$) if

$$\tilde w > \frac{1}{2}\,\frac{\|s_2\|^2 - \|s_1\|^2}{\|s_2 - s_1\|} - \frac{(s_2 - s_1)^T}{\|s_2 - s_1\|}\, s_1 = \frac{d_{12}}{2}$$

The probability of error is therefore

$$P_e = P\left\{\tilde w > \frac{d_{12}}{2}\right\} = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{d_{12}^2}{8\sigma^2}}\right)$$

Note: it depends only on the distance between $s_1$ and $s_2$, not on their amplitudes or orientations.
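A sketch verifying this formula for an arbitrary pair of vectors, confirming that only their distance matters; the dimension, σ, and trial count are illustrative:

```python
import numpy as np
from scipy.special import erfc

# Sketch: simulated error rate of test (11) versus 0.5*erfc(sqrt(d12^2/(8 sigma^2))).
rng = np.random.default_rng(6)
N, sigma, n_trials = 8, 1.0, 500_000
s1, s2 = rng.standard_normal(N), rng.standard_normal(N)
d12 = np.linalg.norm(s2 - s1)
v = (s2 - s1) / d12                              # unit vector along s2 - s1

y = s1 + sigma * rng.standard_normal((n_trials, N))   # always transmit s1
z = y @ v                                        # normalized sufficient statistic
thr = (np.dot(s2, s2) - np.dot(s1, s1)) / (2 * d12)
pe_sim = np.mean(z > thr)                        # fraction erroneously decided as H2
pe_th = 0.5 * erfc(np.sqrt(d12**2 / (8 * sigma**2)))
print(f"d12 = {d12:.3f}: simulated {pe_sim:.4f}, theory {pe_th:.4f}")
```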



Particular case: collinear signals

$$s_i = h\, x_i, \quad i = 1, 2, \quad \text{with } x_1 = -a \text{ and } x_2 = a$$

$$y = h\, x + w, \qquad w \sim \mathcal{N}(0, \sigma^2 I) \qquad (16)$$

Sufficient statistic (normalized)

$$z = v^T y \quad \text{with} \quad v = \frac{s_2 - s_1}{\|s_2 - s_1\|} = \frac{2a\, h}{2a\,\|h\|} = \frac{h}{\|h\|}$$

$$z = v^T y = \frac{h^T h}{\|h\|}\, x + \frac{h^T}{\|h\|}\, w$$

Sufficient statistic

$$z = \|h\|\, x + \tilde w \qquad (17)$$

where $\tilde w \sim \mathcal{N}(0, \sigma^2)$ since $\|v\| = 1$.



Particular case: collinear signals

Probability of error

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{d_{12}^2}{8\sigma^2}}\right) = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{\|h\|^2 a^2}{2\sigma^2}}\right) \qquad (18)$$

since $d_{12} = \|s_2 - s_1\| = 2a\,\|h\|$.





Extension to non-binary signaling

$x_i \in$ real alphabet of cardinality $M$

$$s_i = h\, x_i, \quad i = 1, 2, \ldots, M$$

Sufficient statistic

$$z = \|h\|\, x + \tilde w \qquad (19)$$

with $\tilde w \sim \mathcal{N}(0, \sigma^2)$.

Note: the sufficient statistic is the same as in the binary case, but the computation of $P_e$ is much more involved; a Monte Carlo estimate is sketched below.
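A minimal Monte Carlo sketch for the M-ary case; the 4-PAM alphabet, ‖h‖, and σ are illustrative assumptions:

```python
import numpy as np

# Sketch: nearest-point ML detection on z = ||h|| x + w for a real M-ary alphabet.
rng = np.random.default_rng(7)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])   # illustrative 4-PAM constellation
h_norm, sigma, n_trials = 1.0, 0.6, 500_000

x = rng.choice(alphabet, n_trials)
z = h_norm * x + sigma * rng.standard_normal(n_trials)

# decide for the alphabet point closest to z / ||h||
x_hat = alphabet[np.argmin(np.abs(z[:, None] / h_norm - alphabet[None, :]), axis=1)]
print("estimated Pe:", np.mean(x_hat != x))
```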



Extension to complex signals

Received complex signal

$$y = s + w$$

with $w \sim \mathcal{CN}(0, 2\sigma^2 I)$.

Supposing again collinear signaling: $x_i \in$ complex alphabet of cardinality $M$ (constellation)

$$s_i = h\, x_i, \quad i = 1, 2, \ldots, M$$



Extension to complex signals

Sufficient statistic

$$z = v^H y = \|h\|\, x + \tilde w \quad \text{with} \quad v = \frac{h}{\|h\|} \qquad (20)$$

with $\tilde w \sim \mathcal{CN}(0, 2\sigma^2)$.

In the binary case ($M = 2$), the probability of error is identical to that of the real case:

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{d_{12}^2}{8\sigma^2}}\right) \qquad (21)$$

where $d_{12} = \|s_2 - s_1\|$.
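A sketch for the binary complex case, checking (21) with circularly symmetric noise; the channel vector h, dimension, and σ are illustrative assumptions:

```python
import numpy as np
from scipy.special import erfc

# Sketch: z = v^H y with w ~ CN(0, 2 sigma^2 I) and antipodal symbols x = ±1.
rng = np.random.default_rng(8)
N, sigma, n_trials = 4, 1.0, 200_000
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # illustrative h
v = h / np.linalg.norm(h)

x = rng.choice([-1.0, 1.0], n_trials)
w = sigma * (rng.standard_normal((n_trials, N))
             + 1j * rng.standard_normal((n_trials, N)))
y = np.outer(x, h) + w
z = y @ v.conj()                           # v^H y = ||h|| x + noise, complex scalar

x_hat = np.where(z.real > 0, 1.0, -1.0)    # threshold on Re{z}
d12 = 2 * np.linalg.norm(h)                # distance between the signals ±h
pe_th = 0.5 * erfc(np.sqrt(d12**2 / (8 * sigma**2)))
print(f"simulated {np.mean(x_hat != x):.4f}, theory {pe_th:.4f}")
```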





Examples

Note that $d_{12}^2 = \|s_1 - s_2\|^2 = \|s_1\|^2 + \|s_2\|^2 - 2\, s_1^T s_2 = E_1 + E_2 - 2\, E_{12}$.

In the particular case $E_1 = E_2 = E$ and $E_{12} = \rho\, E$, where $\rho$ is the (normalized) correlation coefficient between $s_1$ and $s_2$, we have $d_{12}^2 = 2E(1 - \rho)$.

Probability of error

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{d_{12}^2}{8\sigma^2}}\right) = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{E(1 - \rho)}{2 N_0}}\right)$$



Examples

$\rho = -1$: antipodal signals (optimum case)

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{E}{N_0}}\right)$$

$\rho = 0$: orthogonal signals

$$P_e = \frac{1}{2}\operatorname{erfc}\left(\sqrt{\frac{E}{2 N_0}}\right)$$
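Comparing the two formulas, antipodal signaling needs a factor of two (3 dB) less energy than orthogonal signaling for the same $P_e$. A short numerical comparison (the E/N0 grid is illustrative):

```python
import numpy as np
from scipy.special import erfc

# Sketch: Pe of antipodal (rho = -1) versus orthogonal (rho = 0) signaling.
for EN0_dB in (4, 8, 12):
    EN0 = 10 ** (EN0_dB / 10)
    pe_ant = 0.5 * erfc(np.sqrt(EN0))        # rho = -1
    pe_ort = 0.5 * erfc(np.sqrt(EN0 / 2))    # rho = 0
    print(f"E/N0 = {EN0_dB:2d} dB: antipodal {pe_ant:.2e}, orthogonal {pe_ort:.2e}")
```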



Homework

Detection of 1-bit 2-FSK (frequency shift keying) transmission

$$H_0: y(t) = s_1(t) + w(t)$$
$$H_1: y(t) = s_2(t) + w(t), \quad t \in [0, T_0]$$

where $w(t)$ is AWGN with PSD $G_w(f) = N_0/2$, and

$$s_1(t) = A\cos(2\pi f_1 t)$$
$$s_2(t) = A\cos(2\pi f_2 t), \quad t \in [0, T_0], \text{ zero otherwise}$$

with $f_1 = f_0 + \Delta/2$, $f_2 = f_0 - \Delta/2$, where $f_0$ is an integer multiple of $1/T_0$ and $f_0 \gg 1/T_0$.

Question
Find the minimum frequency shift $\Delta$ such that $s_1(t)$ and $s_2(t)$ are orthogonal in $[0, T_0]$.
Design the matched filter detector and find the probability of error.
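As an exploratory aid (not the solution), a sketch that evaluates the normalized correlation between s1(t) and s2(t) as a function of Δ, from which the smallest Δ giving zero correlation can be read off; f0, T0, A, and the sample rate are illustrative assumptions:

```python
import numpy as np

# Sketch: normalized correlation of the two FSK tones over [0, T0] versus Delta.
T0, fs, A = 1.0, 100_000, 1.0
f0 = 10 / T0                                   # an integer multiple of 1/T0
t = np.arange(0, T0, 1 / fs)

for delta in np.arange(0.25, 2.01, 0.25) / T0:
    s1 = A * np.cos(2 * np.pi * (f0 + delta / 2) * t)
    s2 = A * np.cos(2 * np.pi * (f0 - delta / 2) * t)
    rho = (np.sum(s1 * s2) / fs) / (A**2 * T0 / 2)   # normalized correlation
    print(f"Delta*T0 = {delta * T0:.2f}: rho = {rho:+.4f}")
```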





The main plot

• The MAP/ML criteria minimize Pe and lead to the nearest neighbor decision rule
• In binary detection, the correlator and matched filter schemes are optimal and equivalent
• The sufficient statistic is a scalar, and the performance depends only on the distance between the two transmitted signals
• The extension to complex signals leads to similar results

