Elements of Detection Theory
Sistemi di Telecomunicazioni LM
Communication Systems: Theory and Measurement
A.A. 2022-2023
4 Performance
5 Generalizations
Extension to non-binary signaling
Extension to complex signals
6 Examples
Consider a random phenomenon whose state can take one out of M outcomes (hypotheses)
H ∈ {H0 , H1 , . . . , HM−1 }
with probabilities pi = P{H = Hi } = PH (Hi ), known a priori. H is a random variable
[Figure: observation model — a random phenomenon in state H produces a (noisy) observation, modeled by the random vector Y; y denotes a realization of Y (the measurement).]
Our objective is to find the decision rule that maximizes the probability of correct decision when observing y.
The probability that Hi is the correct hypothesis given that y was observed, denoted PH|Y (Hi |Y = y) = PH|Y (Hi |y), is given by Bayes' rule:

PH|Y (Hi |y) = PY|H (y|Hi ) · pi / PY (y)

where PY (y) = Σ_{i=0}^{M−1} PY|H (y|Hi ) · pi , and pi = PH (Hi ).
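As a quick numerical illustration of Bayes' rule above, the following sketch computes the posteriors and the MAP decision for a binary case; the priors, noise level, and observed value are illustrative choices made here, not taken from the text.

```python
import numpy as np

# Illustrative two-hypothesis example: H0 has mean -A, H1 has mean +A
p = np.array([0.7, 0.3])             # priors p_i = P_H(H_i) (chosen here)
A, sigma = 1.0, 1.0
y = 0.2                              # observed value (chosen here)

# Likelihoods P_{Y|H}(y|H_i)
lik = np.array([
    np.exp(-(y + A) ** 2 / (2 * sigma**2)),   # H0: mean -A
    np.exp(-(y - A) ** 2 / (2 * sigma**2)),   # H1: mean +A
]) / np.sqrt(2 * np.pi * sigma**2)

# Bayes' rule: posterior = likelihood * prior / evidence
evidence = np.sum(lik * p)           # P_Y(y) = sum_i P_{Y|H}(y|H_i) p_i
posterior = lik * p / evidence

H_hat = int(np.argmax(posterior))    # MAP decision
print(posterior, "-> decide H%d" % H_hat)
```

With these numbers the larger prior on H0 outweighs the slightly higher likelihood of H1, so the MAP rule decides H0.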
The rule to maximize the probability of correct decision, and hence minimize the probability of
error, is
PH|Y (H1 |y) ≷_{Ĥ=H0}^{Ĥ=H1} PH|Y (H0 |y)

where the hypothesis above the ≷ symbol is chosen when the left-hand side is larger, and the one below otherwise (a convention used throughout).
It can be rewritten as a threshold test on the likelihood ratio:

[Figure: threshold device — from the observation y, compute the likelihood ratio Λ(y) and compare it with a threshold to obtain the decision Ĥ.]

For M hypotheses, the equivalent scheme computes P(y|H0 ), P(y|H1 ), . . . , P(y|HM−1 ) from the observation y and decides for the hypothesis with the largest value:

[Figure: bank of M "compute P(y|Hi )" branches feeding a "decide for the largest" block that outputs Ĥ.]
Problem statement
[Figure: antipodal transmission — the level +A or −A is sent, noise n is added, and a detector produces Ĥ from the observation y.]

H0 : y = −A + n
H1 : y = +A + n

where n is Gaussian noise with zero mean and variance σ².
Question: Find the decision rule which minimizes the probability of error and compute the
error probability
PY|H (y|H0 ) = (1/√(2πσ²)) exp( −(y + A)²/(2σ²) )
PY|H (y|H1 ) = (1/√(2πσ²)) exp( −(y − A)²/(2σ²) )
Likelihood ratio

Λ(y) = PY|H (y|H1 ) / PY|H (y|H0 ) = exp( −(1/(2σ²)) [ (y² + A² − 2Ay) − (y² + A² + 2Ay) ] ) = exp( 2Ay/σ² )
Threshold device

l(y) = ln Λ(y) = 2Ay/σ² ≷_{H0}^{H1} ln(p0 /p1 )   →   y ≷_{H0}^{H1} (σ²/(2A)) ln(p0 /p1 ) = λ

[Figure: threshold device — the observation y is compared with the threshold λ to produce the decision Ĥ.]

If the hypotheses are equiprobable, λ = 0, and the test reduces simply to y ≷_{H0}^{H1} 0.
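A minimal sketch of the resulting scalar test, with illustrative values of A, σ and the priors chosen here for the example:

```python
import numpy as np

# Scalar MAP test y ≷ λ with λ = (σ²/2A) ln(p0/p1); values are illustrative
A, sigma = 1.0, 0.5
p0, p1 = 0.6, 0.4

lam = sigma**2 / (2 * A) * np.log(p0 / p1)

def decide(y):
    """Return 1 (H1) if y exceeds the threshold, else 0 (H0)."""
    return 1 if y > lam else 0

# With p0 > p1 the threshold shifts toward +A, favoring H0
print("lambda =", lam)
```

When p0 = p1 the log term vanishes and the rule reduces to the sign of y, as stated above.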
D. Dardari —- Page 8/39
Example: 1-bit transmission
Probability of error
Pe |H0 = P{Y > 0|H0 } = ∫_0^∞ PY|H (y|H0 ) dy

Pe |H1 = P{Y < 0|H1 } = ∫_{−∞}^0 PY|H (y|H1 ) dy
Probability of error

Pe = (1/2) erfc( A/√(2σ²) ) = (1/2) erfc( √SNR )

where SNR = A²/(2σ²).
[Figure: log-scale plot of Pe (from 10⁰ down to 10⁻⁹) versus SNR from 0 to 15 dB.]
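The closed-form expression can be evaluated directly; this sketch reproduces a few points of the curve over the plotted range using the standard library's erfc:

```python
import math

# Pe = 0.5 * erfc(sqrt(SNR)), with SNR = A²/(2σ²), over the 0-15 dB range
snr_db = [0, 5, 10, 15]
pe = [0.5 * math.erfc(math.sqrt(10 ** (d / 10))) for d in snr_db]

for d, p in zip(snr_db, pe):
    print(f"SNR = {d:2d} dB -> Pe = {p:.2e}")
```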
Problem statement
H0 : y(t) = w(t)
H1 : y(t) = s(t) + w(t) t ∈ [0, T0 ]
The LR functional is

Λ(y(t)) = P (y(t)|H1 ) / P (y(t)|H0 ) = P (y(t)|s(t)) / P (y(t)|s(t) = 0) ≷_{H0}^{H1} η
Recall that
P (y(t)|x(t)) = p (y|x) = (1/(2πσ²)^{N/2}) exp( −||y − x||²/(2σ²) ) = (1/(2πσ²)^{N/2}) exp( −(1/(2σ²)) ∫_{T0} |y(t) − x(t)|² dt )
where σ 2 = N0 /2, y and x are the vectors containing the series expansion coefficients of y(t) and x(t),
respectively, according to some basis set in [0, T0 ].
Λ(y(t)) = P (y(t)|H1 ) / P (y(t)|H0 ) = exp( −(1/N0 ) ∫_{T0} |y(t) − s(t)|² dt ) / exp( −(1/N0 ) ∫_{T0} |y(t)|² dt )
Expanding the squares, the log-likelihood ratio test becomes

l(y(t)) = −Es /N0 + (2/N0 ) ∫_{T0} y(t) s(t) dt ≷_{H0}^{H1} ln η = ln(p0 /p1 )

Z = ∫_{T0} y(t) s(t) dt is the sufficient statistic: the test involves the cross-correlation between the received signal y(t) and a local replica of s(t).
Example: radar
Correlator

[Figure: correlator — y(t) is multiplied by the local replica s(t), integrated over [0, T0 ], producing Z, which is compared with a threshold to give Ĥ.]
We can obtain the same result using the following equivalent scheme (Matched filter)
Matched filter

[Figure: matched filter — y(t) is filtered by h(t) = s(T0 − t); the output is sampled at t = T0 and compared with a threshold to give Ĥ.]

In fact, z(t) = y(t) ⊗ h(t) = ∫ y(τ ) h(t − τ ) dτ = ∫ y(τ ) s(T0 − t + τ ) dτ .
Sampling at t = T0 we obtain
Z = z(T0 ) = ∫_{T0} y(τ ) s(τ ) dτ   (correlation)
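The correlator/matched-filter equivalence can be checked numerically in discrete time; this sketch uses an arbitrary signature waveform chosen here, and compares the correlator output with the matched-filter output sampled at the right instant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time sketch of the correlator / matched-filter equivalence
N = 64
s = np.sin(2 * np.pi * 4 * np.arange(N) / N)   # known signature (chosen here)
y = s + 0.5 * rng.standard_normal(N)           # H1: signal plus noise

# Correlator: Z = sum_n y[n] s[n]
Z_corr = np.dot(y, s)

# Matched filter: h[n] = s[N-1-n]; convolve and sample at n = N-1
h = s[::-1]
Z_mf = np.convolve(y, h)[N - 1]

print(Z_corr, Z_mf)   # identical up to floating-point error
```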
Example: Detection of a binary sequence in Gaussian noise
Problem statement:
a = [a1 , a2 , . . . , aN ], with ai ∈ {−1, +1}
In vector form: y = a + n
Denote by A the set of all possible sequences. Note that we have M = 2^N hypotheses (possible sequences).
Assume sequences are equiprobable, i.e., pi = 1/M, i = 1, . . . , M.
p (y|a) = ∏_{i=1}^{N} (1/√(2πσ²)) exp( −(yi − ai )²/(2σ²) ) = (1/(2πσ²)^{N/2}) exp( −Σ_{i=1}^{N} (yi − ai )²/(2σ²) )

â = â(y) = arg max_{a∈A} ln p (y|a) = arg min_{a∈A} Σ_{i=1}^{N} (yi − ai )² = arg min_{a∈A} ||y − a||   (2)
We decide for the sequence a which is the closest to the observed sequence y.
Note that, in general, the complexity of computing (2) is prohibitive, as one has to test M = 2^N sequences!
However, in this (lucky) particular case, in which all sequences are possible and equiprobable, (2) can be solved by decomposing it into N simpler tests (symbol-by-symbol decision).
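A short simulation of the symbol-by-symbol decomposition (sequence length and noise level are chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Symbol-by-symbol ML detection of a ±1 sequence in Gaussian noise.
# Since ||y - a||² decomposes over symbols, the joint arg-min over all
# 2^N sequences reduces to a per-symbol sign decision.
N = 1000
a = rng.choice([-1, 1], size=N)        # transmitted sequence
y = a + 0.5 * rng.standard_normal(N)   # y = a + n

a_hat = np.where(y >= 0, 1, -1)        # per-symbol decision: sign(y_i)

errors = int(np.sum(a_hat != a))
print(errors, "symbol errors out of", N)
```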
Probability of error of the sequence (i.e., the probability that at least one symbol is in error):
Pe = 1 − (1 − Pes )^N
Note: this is true only when symbols are independent and with symbol-by-symbol decoding!
In general, by introducing redundancy (coding), and hence dependence, one can improve the
performance.
where s1 (t) and s2 (t) are (known) finite-energy signals associated with the equiprobable hypotheses H1 and H2 , respectively, and w(t) is AWGN with power spectral density Gw (f ) = N0 /2.
The target is to minimize the probability of error Pe in taking the decision on whether s1 (t) (H1 )
or s2 (t) (H2 ) was transmitted based on the observation of y(t) in [0, T].
H1 : y = s1 + w
H2 : y = s2 + w
where y, s1 , s2 and w are the N-dimensional vector representations of the signals y(t), s1 (t), s2 (t) and w(t), respectively.
In particular, w ∼ N (0, σ² I), with σ² = N0 /2.
Note: the LRT in this case corresponds to the MAP criterion, which minimizes the probability of error, and is equivalent to the ML criterion (equiprobable hypotheses).
In fact, since
p (y|Hi ) = (1/(2πσ²)^{N/2}) exp( −||y − si ||²/(2σ²) )
it is
Log-likelihood

l(y) = ln Λ(y) = −(1/(2σ²)) ||y − s1 ||² + (1/(2σ²)) ||y − s2 ||² ≷_{H2}^{H1} 0
and hence
Nearest neighbor rule
||y − s2 ||² ≷_{H2}^{H1} ||y − s1 ||²   →   Ĥ = arg min_{i=1,2} ||y − si ||²   (3)

Expanding the squared norms, the test reduces to

(s2 − s1 )^T y ≷_{H1}^{H2} (1/2)( ||s2 ||² − ||s1 ||² )   (5)
Sufficient statistics
[Figure: signal space — the observation y is projected onto the direction of s2 − s1 ; only this projection matters for the decision.]
Since
Since

(s2 − s1 )^T y = ⟨(s2 − s1 ), y⟩ = ⟨s2 (t) − s1 (t), y(t)⟩ = ∫_0^T (s2 (t) − s1 (t)) y(t) dt   (7)

the test becomes

∫_0^T (s2 (t) − s1 (t)) y(t) dt ≷_{H1}^{H2} (Es2 − Es1 )/2 = η   (8)
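A quick numerical check that the nearest-neighbor rule (3) and the correlation test (5) give the same decision, on randomly drawn signals (an illustrative setup, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Compare the two equivalent forms of the binary MAP/ML test
N = 32
s1 = rng.standard_normal(N)
s2 = rng.standard_normal(N)
y = s1 + rng.standard_normal(N)        # H1 in effect

# Nearest-neighbor rule (3): decide for the closest signal
nn = 1 if np.sum((y - s1) ** 2) <= np.sum((y - s2) ** 2) else 2

# Correlation test (5): decide H2 iff (s2-s1)^T y > (||s2||² - ||s1||²)/2
corr = 2 if (s2 - s1) @ y > 0.5 * (s2 @ s2 - s1 @ s1) else 1

print(nn, corr)   # same decision
```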
Correlator

[Figure: correlator — y(t) is multiplied by s2 (t) − s1 (t), integrated over [0, T], producing Z, which is compared with η to give Ĥ.]

Z = ∫_0^T (s2 (t) − s1 (t)) y(t) dt ≷_{H1}^{H2} (Es2 − Es1 )/2 = η   (9)
Matched filter

[Figure: matched filter — y(t) is filtered by h(t) = s2 (T − t) − s1 (T − t); the output is sampled at t = T and compared with η to give Ĥ.]

The response of the filter to y(t) sampled at time t = T is equivalent to that of the correlator:

Z = ∫_0^T (s2 (t) − s1 (t)) y(t) dt = ∫ y(τ ) h(T − τ ) dτ   (10)
z = ( (s2 − s1 )^T / ||s2 − s1 || ) y = v^T y   (12)

with

v = (s2 − s1 ) / ||s2 − s1 ||   (13)
Note: v is the orientation of vector s2 − s1 and d1,2 = ks2 − s1 k is the Euclidean distance
between signals s1 and s2 .
Thanks to the isotropic property of Gaussian noise, the probability of error is the same for the two symbols, so we can focus on the transmission of s1 .
The received signal is
y = s1 + w   (14)

The normalized sufficient statistic (conditioned on s1 ) is

z = v^T y = ( (s2 − s1 )^T / ||s2 − s1 || ) s1 + w̃   (15)
The test is z ≷_{H1}^{H2} ( ||s2 ||² − ||s1 ||² ) / ( 2 ||s2 − s1 || ), i.e., the threshold of (5) normalized by ||s2 − s1 ||.
Note: the resulting error probability depends only on the distance between s1 and s2 and not on their amplitudes and orientations.
si = h xi ,  i = 1, 2,  with x1 = −a and x2 = a

y = h x + w,   w ∼ N (0, σ² I)   (16)

Sufficient statistics

z = ||h|| x + w̃   (17)

Probability of error

Pe = (1/2) erfc( √( d12² / (8σ²) ) ) = (1/2) erfc( √( ||h||² a² / (2σ²) ) )   (18)

being d12 = ||s2 − s1 || = 2a ||h||.
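Equation (18) can be checked by Monte Carlo simulation on the scalar sufficient statistic; channel dimension, amplitude, and noise level below are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of Pe = 0.5 erfc( sqrt(||h||² a² / (2 σ²)) )
Nh, a, sigma = 4, 1.0, 1.5             # illustrative parameters
h = rng.standard_normal(Nh)            # known channel vector
hn = np.linalg.norm(h)

trials = 200_000
x = rng.choice([-a, a], size=trials)
# Simulate the sufficient statistic directly: z = ||h|| x + w̃, w̃ ~ N(0, σ²)
z = hn * x + sigma * rng.standard_normal(trials)
x_hat = np.where(z >= 0, a, -a)
pe_sim = float(np.mean(x_hat != x))

pe_theory = 0.5 * math.erfc(math.sqrt(hn**2 * a**2 / (2 * sigma**2)))
print(pe_sim, pe_theory)
```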
si = h xi ,   i = 1, 2, . . . , M

Sufficient statistics

z = ||h|| x + w̃   (19)

with w̃ ∼ N (0, σ²)
Note: same sufficient statistics as in the binary case, but the computation of Pe is much more
involved.
y=s+w
with w ∼ CN 0, 2 σ 2 I
si = h xi i = 1, 2, . . . M
Sufficient statistics
h
z = vH y = khk x + w̃ with v = (20)
khk
with w̃ ∼ CN 0, 2 σ 2
In the binary case (M = 2), the probability of error is identical to that in the real case
Pe = (1/2) erfc( √( d12² / (8σ²) ) )   (21)
being d12 = ks2 − s1 k.
Note that d12² = ||s1 − s2 ||² = ||s1 ||² + ||s2 ||² − 2 s1^T s2 = E1 + E2 − 2 E12
Probability of error

Pe = (1/2) erfc( √( d12² / (8σ²) ) ) = (1/2) erfc( √( E (1 − ρ) / (2 N0 ) ) )

where the second form holds for equal-energy signals (E1 = E2 = E) with correlation coefficient ρ = E12 /E.
ρ = 0: orthogonal signals

Pe = (1/2) erfc( √( E / (2 N0 ) ) )
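A small sketch comparing the two cases: with equal-energy signals, antipodal signaling (ρ = −1) at a given E/N0 performs exactly like orthogonal signaling (ρ = 0) with 3 dB more energy, since 1 − ρ doubles.

```python
import math

def pe(E_over_N0_db, rho):
    """Pe = 0.5 erfc( sqrt( E (1 - rho) / (2 N0) ) ) for equal-energy signals."""
    g = 10 ** (E_over_N0_db / 10)
    return 0.5 * math.erfc(math.sqrt(g * (1 - rho) / 2))

# Antipodal at E/N0 matches orthogonal at E/N0 + 3 dB (factor 2 in 1 - rho)
for db in (4, 8, 12):
    print(db, pe(db, -1), pe(db + 10 * math.log10(2), 0))
```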
Question
Find the minimum frequency shift ∆ so that s1 (t) and s2 (t) are orthogonal in [0, T0 ].
Design the matched filter detector and find the probability of error
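The waveforms s1 (t) and s2 (t) are not reproduced here; assuming sinusoidal bursts s_i(t) = cos(2π f_i t) on [0, T0 ] (an assumption made for this sketch only), the minimum orthogonality shift Δ = 1/(2 T0 ) can be verified numerically:

```python
import numpy as np

# ASSUMPTION: s_i(t) = cos(2 pi f_i t) on [0, T0]; values chosen here
T0 = 1e-3                      # observation interval (illustrative)
f1 = 10e3                      # carrier of s1(t); f1 * T0 is an integer
t = np.linspace(0.0, T0, 100_000, endpoint=False)
dt = t[1] - t[0]

def inner(delta):
    """Riemann-sum inner product <s1, s2> over [0, T0] for shift delta."""
    s1 = np.cos(2 * np.pi * f1 * t)
    s2 = np.cos(2 * np.pi * (f1 + delta) * t)
    return float(np.sum(s1 * s2) * dt)

E = float(np.sum(np.cos(2 * np.pi * f1 * t) ** 2) * dt)  # energy, ~T0/2

print(inner(0.5 / T0) / E)     # ~0: orthogonal at Delta = 1/(2 T0)
print(inner(0.2 / T0) / E)     # clearly non-zero for a smaller shift
```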
• The MAP/ML criteria minimize Pe and lead to the nearest-neighbor decision rule
• In binary detection the correlator and matched-filter schemes are optimal and equivalent
• The sufficient statistic is a scalar and the performance depends only on the distance between the two transmitted signals
• Extension to complex signals leads to similar results