Exercises - Detection
Problem 1:

Problem 1a: Let $\mathbf{H} = [1, r, \ldots, r^{N-1}]^T$.

$$p(\mathbf{x}; \mathcal{H}_1) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left[-\frac{1}{2}(\mathbf{x} - A\mathbf{H})^T \mathbf{C}^{-1} (\mathbf{x} - A\mathbf{H})\right]$$

$$p(\mathbf{x}; \mathcal{H}_0) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left[-\frac{1}{2}\mathbf{x}^T \mathbf{C}^{-1} \mathbf{x}\right].$$

$$L(\mathbf{x}) = \frac{p(\mathbf{x}; \mathcal{H}_1)}{p(\mathbf{x}; \mathcal{H}_0)} > \lambda$$

$$\ln L(\mathbf{x}) = \mathbf{x}^T \mathbf{C}^{-1} \mathbf{H} A - \frac{1}{2}\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H} A^2 > \ln \lambda$$

$$T(\mathbf{x}) = \mathbf{x}^T \mathbf{C}^{-1} \mathbf{H} A > \ln \lambda + \frac{1}{2}\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H} A^2 = \lambda'$$
Problem 1b: $T(\mathbf{x})$ is Gaussian distributed under both $\mathcal{H}_1$ and $\mathcal{H}_0$, with the same variance under both hypotheses (the last step uses white noise, $\mathbf{C} = \sigma^2\mathbf{I}$):

$$\operatorname{var}[T; \mathcal{H}_1] = E\left[\left(\left((A\mathbf{H} + \mathbf{w}) - E[A\mathbf{H} + \mathbf{w}]\right)^T \mathbf{C}^{-1} \mathbf{H} A\right)^2\right] = E\left[\left(\mathbf{w}^T \mathbf{C}^{-1} \mathbf{H} A\right)^2\right] = \operatorname{var}[T; \mathcal{H}_0] = \frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}$$

The mean of $T(\mathbf{x})$ is $0$ under $\mathcal{H}_0$ and $A^2\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H} = \frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}$ under $\mathcal{H}_1$.
$$P_{fa} = Q\left(\frac{\lambda'}{\sqrt{\frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}}}\right) \;\Rightarrow\; \lambda' = Q^{-1}(P_{fa})\sqrt{\frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}}$$

$$P_D = Q\left(\frac{\lambda' - \frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}}{\sqrt{\frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}}}\right) = Q\left(Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}}\right)$$
Problem 1c: For $0 \le r < 1$, $\sum_{n=0}^{N-1} r^{2n} = \frac{1 - r^{2N}}{1 - r^2}$, so

$$P_D = Q\left(Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2}\,\frac{1 - r^{2N}}{1 - r^2}}\right)$$

When $N \to \infty$ (still with $0 \le r < 1$), $P_D$ converges to

$$P_D = Q\left(Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2}\,\frac{1}{1 - r^2}}\right)$$
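The closed-form $P_D$ above is easy to check numerically. A minimal sketch in standard-library Python (the function names `Q`, `Qinv`, `pd_closed_form` and the parameter values are mine, not from the solution):

```python
from statistics import NormalDist

def Q(x):
    """Gaussian right-tail probability Q(x) = P(Z > x)."""
    return 1.0 - NormalDist().cdf(x)

def Qinv(p):
    """Inverse of Q."""
    return NormalDist().inv_cdf(1.0 - p)

def pd_closed_form(A, sigma2, r, N, pfa):
    """P_D = Q(Q^{-1}(P_fa) - sqrt((A^2/sigma^2) * sum_{n=0}^{N-1} r^{2n}))."""
    energy = sum(r ** (2 * n) for n in range(N))
    return Q(Qinv(pfa) - (A * A / sigma2 * energy) ** 0.5)

A, sigma2, r, pfa = 1.0, 1.0, 0.9, 0.01
# For 0 <= r < 1, P_D grows with N and saturates at the N -> infinity limit.
limit = Q(Qinv(pfa) - (A * A / (sigma2 * (1.0 - r * r))) ** 0.5)
print(pd_closed_form(A, sigma2, r, 200, pfa), limit)
```

The saturation reflects the geometric decay of the signal: for $r < 1$ almost all of the detectable energy sits in the first few samples.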
Problem 2:
Problem 2a: As the noise is white and Gaussian, the shape of the signal does not influence the detection performance; only its power does. Since both signals have equal power, the detection performance will be equal.
Problem 2b: Using the matrix inversion lemma we can calculate $\mathbf{C}^{-1}$ (with $\mathbf{C} = \sigma^2\mathbf{I} + \mathbf{1}\mathbf{1}^T$), that is,

$$\mathbf{C}^{-1} = \frac{1}{\sigma^2}\mathbf{I} - \frac{1/\sigma^4}{1 + N/\sigma^2}\,\mathbf{1}\mathbf{1}^T.$$

We can use this result to calculate $P_D$:

$$P_D = Q\left(Q^{-1}(P_{fa}) - \sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}\right).$$
For $s_1[n]$ we then get

$$P_D = Q\left(Q^{-1}(P_{fa}) - \sqrt{\frac{A^2\frac{N}{\sigma^2}}{\frac{N}{\sigma^2} + 1}}\right).$$
For $s_2[n]$ and (even) $N$ we get

$$P_D = Q\left(Q^{-1}(P_{fa}) - \sqrt{A^2\frac{N}{\sigma^2}}\right).$$

The $P_D$ for even $N$ and $s_2[n]$ will thus always be larger.

One can also argue that $\mathbf{s}$ should ideally equal the eigenvector of $\mathbf{C}$ that corresponds to the minimum eigenvalue. The eigenvector corresponding to the largest eigenvalue is $\mathbf{1}$, which matches $s_1[n]$. Signal $s_2[n]$ is orthogonal to this eigenvector and corresponds to the minimum eigenvalue; $s_2[n]$ will thus have the best detection performance.
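A quick numerical cross-check of the two deflections $\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}$, which determine $P_D$ above (assuming the covariance model $\mathbf{C} = \sigma^2\mathbf{I} + \mathbf{1}\mathbf{1}^T$; the helper `deflection` and the parameter values are my own):

```python
def deflection(s, sigma2):
    """s^T C^{-1} s for C = sigma^2 I + 1 1^T, via the matrix inversion lemma:
    (s^T s)/sigma^2 - (1^T s)^2 / (sigma^4 (1 + N/sigma^2))."""
    N = len(s)
    energy = sum(v * v for v in s)
    total = sum(s)
    return energy / sigma2 - total ** 2 / (sigma2 ** 2 * (1.0 + N / sigma2))

A, sigma2, N = 1.0, 1.0, 8
s1 = [A] * N                            # constant signal
s2 = [A * (-1) ** n for n in range(N)]  # alternating signal, even N
d1 = deflection(s1, sigma2)
d2 = deflection(s2, sigma2)
print(d1, d2)
```

Both values match the closed forms in the text, $A^2(N/\sigma^2)/(N/\sigma^2+1)$ and $A^2 N/\sigma^2$, and $d_2 > d_1$: the alternating signal detects better.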
Problem 3:

Problem 3a: LRT:

$$\frac{(1-p_1)^k p_1}{(1-p_0)^k p_0} \ge \lambda$$

$$\frac{(1-p_1)^k}{(1-p_0)^k} \ge \lambda\,\frac{p_0}{p_1}$$

$$k \ge \frac{\log\left(\lambda\,\frac{p_0}{p_1}\right)}{\log\frac{1-p_1}{1-p_0}} = \lambda'$$

(taking logarithms; the last step assumes $\log\frac{1-p_1}{1-p_0} > 0$, i.e. $p_1 < p_0$, otherwise the inequality reverses).
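The reduction of the LRT to a threshold on $k$ can be sanity-checked numerically; a hedged sketch (the geometric pmf $p(k) = (1-p)^k p$ for $k = 0, 1, \ldots$ and the numeric values are my assumptions):

```python
import math

def lrt(k, p0, p1):
    """Likelihood ratio ((1-p1)^k p1) / ((1-p0)^k p0)."""
    return ((1.0 - p1) ** k * p1) / ((1.0 - p0) ** k * p0)

def k_threshold(lam, p0, p1):
    """lambda' = log(lam p0/p1) / log((1-p1)/(1-p0)); valid when p1 < p0."""
    return math.log(lam * p0 / p1) / math.log((1.0 - p1) / (1.0 - p0))

p0, p1, lam = 0.5, 0.2, 2.0
lam_prime = k_threshold(lam, p0, p1)
# Deciding H1 when L(k) >= lam is the same as deciding H1 when k >= lambda'.
```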
Problem 3b:

$$T(\mathbf{x}) = \sum_{n=0}^{N-1} \frac{\sigma_{s_n}^2}{\sigma_{s_n}^2 + \sigma^2}\, x^2[n]$$
Problem 5:

Problem 5a: We need to calculate $\hat{\mathbf{s}} = E[\mathbf{s}|\mathbf{x}]$. Note that $A$ and $\mathbf{w}$ are Gaussian (and thus also jointly Gaussian) distributed. In addition, the model is linear:

$$\mathbf{x} = \mathbf{1}A + \mathbf{w} = \mathbf{s} + \mathbf{w}.$$

In this case the MMSE estimator is given by

$$\hat{\mathbf{s}} = E[A|\mathbf{x}]\,\mathbf{1} = \left(C_A^{-1} + \mathbf{H}^T\mathbf{C}_w^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{C}_w^{-1}\mathbf{x}\;\mathbf{1} = \frac{\frac{N}{\sigma^2}\bar{x}}{\frac{1}{\sigma_A^2} + \frac{N}{\sigma^2}}\,\mathbf{1} = \frac{\sigma_A^2}{\sigma_A^2 + \frac{\sigma^2}{N}}\,\bar{x}\,\mathbf{1}$$

with $\mathbf{H} = \mathbf{1}$ and $\mathbf{C}_w = \sigma^2\mathbf{I}$.
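The shrinkage form of the estimator can be illustrated with a small Monte Carlo run (standard-library Python; the seed, parameter values, and names are mine). The MMSE estimate pulls the sample mean $\bar{x}$ toward the prior mean $0$ by the factor $\sigma_A^2/(\sigma_A^2 + \sigma^2/N)$, which should lower the average squared error relative to using $\bar{x}$ itself:

```python
import random
from statistics import mean

random.seed(0)
sigma_A2, sigma2, N, trials = 1.0, 4.0, 10, 20000
shrink = sigma_A2 / (sigma_A2 + sigma2 / N)  # MMSE shrinkage factor

mse_mmse = mse_xbar = 0.0
for _ in range(trials):
    A = random.gauss(0.0, sigma_A2 ** 0.5)  # A ~ N(0, sigma_A^2)
    xbar = mean(A + random.gauss(0.0, sigma2 ** 0.5) for _ in range(N))
    mse_xbar += (xbar - A) ** 2
    mse_mmse += (shrink * xbar - A) ** 2
mse_mmse /= trials
mse_xbar /= trials
print(mse_mmse, mse_xbar)  # roughly 0.29 vs 0.40 for these parameters
```

The empirical errors land near the theoretical values $(1/\sigma_A^2 + N/\sigma^2)^{-1}$ and $\sigma^2/N$.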
Problem 6:

Problem 6a: $\mathbf{s} = A\mathbf{H}$, $\mathbf{H} = [1, r, \ldots, r^{N-1}]^T$, with $\mathbf{s} \sim \mathcal{N}(\mathbf{0}, \sigma_A^2\mathbf{H}\mathbf{H}^T)$.

$$\hat{\mathbf{s}} = \mathbf{C}_s\left(\mathbf{C}_s + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$

Using the matrix inversion lemma it follows that

$$\hat{\mathbf{s}} = \mathbf{C}_s\left(\mathbf{C}_s + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x} = \frac{\sigma_A^2\mathbf{H}\mathbf{H}^T}{\sigma^2}\left(1 - \frac{\mathbf{H}^T\mathbf{H}\,\sigma_A^2}{\sigma^2 + \mathbf{H}^T\mathbf{H}\,\sigma_A^2}\right)\mathbf{x} = \frac{\sigma_A^2\mathbf{H}\mathbf{H}^T}{\sigma^2 + \mathbf{H}^T\mathbf{H}\,\sigma_A^2}\,\mathbf{x}$$
We then get

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} = \frac{\sigma_A^2\,\mathbf{x}^T\mathbf{H}\mathbf{H}^T\mathbf{x}}{\sigma^2 + \mathbf{H}^T\mathbf{H}\,\sigma_A^2} = \frac{\left(\sum_{n=0}^{N-1} r^n x[n]\right)^2}{\frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n}}$$
Equivalently, write $\hat{\mathbf{s}} = \hat{A}\mathbf{H}$ with

$$\hat{A} = \left(\frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n}\right)^{-1}\sum_{n=0}^{N-1} r^n x[n],$$

so that

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} = \mathbf{x}^T\mathbf{H}\hat{A} = \frac{\left(\sum_{n=0}^{N-1} r^n x[n]\right)^2}{\frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n}}$$
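As a consistency check (illustrative; the seed and parameter values are mine), the Wiener form $\mathbf{x}^T\hat{\mathbf{s}}$ and the correlator form of $T(\mathbf{x})$ derived above should agree on any data vector:

```python
import random

random.seed(1)
N, r, sigma2, sigma_A2 = 8, 0.7, 1.0, 2.0
H = [r ** n for n in range(N)]
x = [random.gauss(0.0, 1.0) for _ in range(N)]

HtH = sum(h * h for h in H)             # sum r^{2n}
Htx = sum(h * v for h, v in zip(H, x))  # sum r^n x[n]

# Form 1: T = x^T s_hat with s_hat = sigma_A^2 H (H^T x) / (sigma^2 + sigma_A^2 H^T H)
s_hat = [sigma_A2 * h * Htx / (sigma2 + sigma_A2 * HtH) for h in H]
T1 = sum(v * s for v, s in zip(x, s_hat))

# Form 2: T = (sum r^n x[n])^2 / (sigma^2/sigma_A^2 + sum r^{2n})
T2 = Htx ** 2 / (sigma2 / sigma_A2 + HtH)
print(T1, T2)
```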
Problem 7:

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} = \mathbf{x}^T\mathbf{C}_s\left(\mathbf{C}_s + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x} = \sum_{n=0}^{N-1} \frac{\sigma_{s_n}^2\, x^2[n]}{\sigma_{s_n}^2 + \sigma^2}$$
Problem 8:

Problem 8a: We have $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \mathbf{C}_w)$ and $\mathbf{s} \sim \mathcal{N}(\mathbf{0}, \mathbf{C}_s) = \mathcal{N}(\mathbf{0}, \eta\mathbf{C}_w)$. So,

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}_w^{-1}\mathbf{C}_s\left(\mathbf{C}_s + \mathbf{C}_w\right)^{-1}\mathbf{x} = \frac{\eta}{1+\eta}\,\mathbf{x}^T\mathbf{C}_w^{-1}\mathbf{x} \ge \gamma$$

and $T'(\mathbf{x}) = \mathbf{x}^T\mathbf{C}_w^{-1}\mathbf{x} \ge \gamma'$. We know that under $\mathcal{H}_0$, $\mathbf{x}^T\mathbf{C}_w^{-1}\mathbf{x} \sim \chi_N^2$ (whitening of $\mathbf{x}$).
Problem 8b:

$$\mathcal{H}_0:\; \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{C}_w), \qquad \mathcal{H}_1:\; \mathbf{x} \sim \mathcal{N}(\mathbf{0}, (1+\eta)\mathbf{C}_w)$$

so,

$$\mathcal{H}_0:\; T'(\mathbf{x}) \sim \chi_N^2, \qquad \mathcal{H}_1:\; \frac{T'(\mathbf{x})}{1+\eta} \sim \chi_N^2$$
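The scaling of $T'(\mathbf{x})$ under $\mathcal{H}_1$ can be illustrated by simulation. A sketch with the simplifying assumption $\mathbf{C}_w = \mathbf{I}$, so that $T'(\mathbf{x}) = \mathbf{x}^T\mathbf{x}$ (seed and parameter values are mine): $T'(\mathbf{x})/(1+\eta)$ should behave as $\chi_N^2$, i.e. have mean $N$ and variance $2N$.

```python
import random

random.seed(2)
N, eta, trials = 5, 3.0, 20000
vals = []
for _ in range(trials):
    # Under H1, x ~ N(0, (1+eta) I)
    x = [random.gauss(0.0, (1.0 + eta) ** 0.5) for _ in range(N)]
    vals.append(sum(v * v for v in x) / (1.0 + eta))
m = sum(vals) / trials
v = sum((u - m) ** 2 for u in vals) / trials
print(m, v)  # roughly N and 2N
```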
Problem 9: We can use the expression for general Gaussian detection. That is,

$$T'(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\left[\mathbf{C}_w^{-1}\mathbf{C}_s\left(\mathbf{C}_s + \mathbf{C}_w\right)^{-1}\right]\mathbf{x} + \mathbf{x}^T\left(\mathbf{C}_s + \mathbf{C}_w\right)^{-1}\boldsymbol{\mu}_s$$

$$= \frac{1}{2}\mathbf{x}^T\frac{\sigma_s^2}{\sigma^2}\left(\sigma_s^2 + \sigma^2\right)^{-1}\mathbf{x} + \mathbf{x}^T\left(\sigma_s^2 + \sigma^2\right)^{-1}A\mathbf{1}$$

$$= \frac{1}{2}\,\frac{\sigma_s^2}{\sigma^2\left(\sigma_s^2 + \sigma^2\right)}\,\mathbf{x}^T\mathbf{x} + \frac{A}{\sigma_s^2 + \sigma^2}\,\mathbf{x}^T\mathbf{1}$$

$$= \underbrace{\frac{N\sigma_s^2}{2\sigma^2\left(\sigma_s^2 + \sigma^2\right)}\cdot\frac{1}{N}\sum_{n=0}^{N-1} x^2[n]}_{\text{estimate of variance}} + \underbrace{\frac{NA}{\sigma_s^2 + \sigma^2}\cdot\frac{1}{N}\sum_{n=0}^{N-1} x[n]}_{\text{estimate of mean}}$$

From this we can clearly see the contributions to the detector from the deterministic component (mean) of the data and the random component (variance) of the data.
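The decomposition is just a regrouping of terms; a tiny check (illustrative values and names mine) that the direct form and the decomposed form of $T'(\mathbf{x})$ coincide on arbitrary data:

```python
import random

random.seed(3)
N, sigma_s2, sigma2, A = 6, 2.0, 1.0, 0.5
x = [random.gauss(0.0, 1.0) for _ in range(N)]

sum_sq = sum(v * v for v in x)  # x^T x
sum_x = sum(x)                  # x^T 1

# Direct form: (1/2) sigma_s^2/(sigma^2 (sigma_s^2+sigma^2)) x^T x
#              + A/(sigma_s^2+sigma^2) x^T 1
T_direct = 0.5 * sigma_s2 / (sigma2 * (sigma_s2 + sigma2)) * sum_sq \
    + A / (sigma_s2 + sigma2) * sum_x

# Decomposed form: weighted estimate of the variance plus weighted estimate of the mean
T_decomp = N * sigma_s2 / (2.0 * sigma2 * (sigma_s2 + sigma2)) * (sum_sq / N) \
    + N * A / (sigma_s2 + sigma2) * (sum_x / N)
print(T_direct, T_decomp)
```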
Problem 10:

Problem 10a: Let $\mathbf{H} = [1, r, \ldots, r^{N-1}]^T$.

$$p(\mathbf{x}; A, \mathcal{H}_1) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}(\mathbf{x} - A\mathbf{H})^T(\mathbf{x} - A\mathbf{H})\right]$$

$$p(\mathbf{x}; \mathcal{H}_0) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\mathbf{x}^T\mathbf{x}\right].$$

Setting

$$\frac{dp(\mathbf{x}; A, \mathcal{H}_1)}{dA} = 0$$

leads to

$$\hat{A}_{MLE} = \frac{\mathbf{H}^T\mathbf{x}}{\mathbf{H}^T\mathbf{H}} = \frac{\sum_{n=0}^{N-1} r^n x[n]}{\sum_{n=0}^{N-1} r^{2n}}.$$
Problem 10b:

$$L_G(\mathbf{x}) = \frac{p(\mathbf{x}; \hat{A}_{MLE}, \mathcal{H}_1)}{p(\mathbf{x}; \mathcal{H}_0)} = \frac{\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - \hat{A}_{MLE}\, r^n\right)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1} x^2[n]\right]} > \gamma$$
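The MLE in Problem 10a minimizes $\sum_n (x[n] - A r^n)^2$, so it can be validated by perturbing the squared error directly. A sketch (assumed data model $x[n] = A r^n + w[n]$ with white Gaussian $w[n]$; the seed and numeric values are mine):

```python
import random

random.seed(4)
N, r, A_true = 10, 0.8, 1.5
x = [A_true * r ** n + random.gauss(0.0, 0.3) for n in range(N)]

# A_hat = sum r^n x[n] / sum r^{2n}
A_hat = sum(r ** n * x[n] for n in range(N)) / sum(r ** (2 * n) for n in range(N))

def sse(A):
    """Sum of squared errors sum_n (x[n] - A r^n)^2; minimized at A_hat."""
    return sum((x[n] - A * r ** n) ** 2 for n in range(N))

print(A_hat, sse(A_hat))
```

Because the error is quadratic in $A$, any perturbation of `A_hat` strictly increases the squared error, which is exactly the stationarity condition used in the derivation.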