Digital Communication Systems: Last Time We Talked About
Receiver structure

[Figure: receiver block diagram. Step 1: demodulate & sample, with the output sampled at t = T (some blocks in the diagram are marked ESSENTIAL, others OPTIONAL). Step 2: detect, the decision-making process of selecting the possible digital symbol, e.g., by minimum distance.]
Detection of Binary Signal in Gaussian Noise

The sampled output z(T) is compared with a threshold \gamma to decide between the two possible binary hypotheses H_1 and H_0:

$$ z(T) \underset{H_0}{\overset{H_1}{\gtrless}} \gamma $$
Receiver Functionality

The recovery of the signal at the receiver consists of two parts:

1. Waveform-to-sample transformation: a demodulator followed by a sampler. At the sampling instant t = T the output is z(T) = a_i(T) + n_0(T), where a_i(T) is the desired signal component and n_0(T) is the noise component.

2. Detection of the symbol. Assume that the input noise is a zero-mean Gaussian random process and that the receiving filter is linear; then n_0(T) is a Gaussian random variable:

$$ p(n_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{n_0}{\sigma_0} \right)^2 \right] \qquad (3.4) $$

The test statistic z(T) is also a Gaussian random variable, with a mean of either a_1 or a_0 depending on whether a binary one or zero was transmitted:

$$ p(z \mid s_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_0}{\sigma_0} \right)^2 \right] $$

and similarly for p(z | s_1) with a_1 in place of a_0, where \sigma_0^2 is the noise variance.

The ratio of instantaneous signal power to average noise power, (S/N)_T, at time t = T at the sampler output is:

$$ \left( \frac{S}{N} \right)_T = \frac{a_i^2}{\sigma_0^2} \qquad (3.45) $$

We need to achieve the maximum (S/N)_T.
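The Gaussian model for z(T) can be checked numerically. The sketch below is a minimal illustration, not part of the slides; the amplitude a1 and noise level sigma0 are assumed values. It draws samples of z(T) = a_1 + n_0 and verifies that their mean and variance match a_1 and \sigma_0^2:

```python
import random

random.seed(0)

# Assumed values (not from the slides): signal-component mean and noise level.
a1 = 1.0        # mean of z(T) when a binary one was transmitted
sigma0 = 0.5    # standard deviation of the Gaussian noise n0(T)

# Draw many samples of z(T) = a1 + n0, given that s1 was transmitted.
samples = [a1 + random.gauss(0.0, sigma0) for _ in range(200_000)]

mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(mean, var)   # mean ≈ a1 = 1.0, variance ≈ sigma0**2 = 0.25
```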
Find Filter Transfer Function H0(f)

Objective: to maximize (S/N)_T.

Express the signal a_i(t) at the filter output in terms of the filter transfer function H(f):

$$ a_i(t) = \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f t}\, df \qquad (3.46) $$

where S(f) is the Fourier transform of the input signal s(t). The output noise power can be expressed as:

$$ \sigma_0^2 = \frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df \qquad (3.47) $$

Expressing (S/N)_T as:

$$ \left( \frac{S}{N} \right)_T = \frac{\left| \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f T}\, df \right|^2}{\frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df} \qquad (3.48) $$

To find the H(f) = H_0(f) that maximizes (S/N)_T, we make use of Schwarz's inequality:

$$ \left| \int f_1(x)\, f_2(x)\, dx \right|^2 \le \int |f_1(x)|^2\, dx \int |f_2(x)|^2\, dx \qquad (3.49) $$

Equality holds if f_1(x) = k f_2^*(x), where k is an arbitrary constant and * indicates the complex conjugate.

Associating H(f) with f_1(x) and S(f) e^{j2\pi fT} with f_2(x), we get:

$$ \left| \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f T}\, df \right|^2 \le \int_{-\infty}^{\infty} |H(f)|^2\, df \int_{-\infty}^{\infty} |S(f)|^2\, df \qquad (3.50) $$

Substituting into (3.48) yields:

$$ \left( \frac{S}{N} \right)_T \le \frac{2}{N_0} \int_{-\infty}^{\infty} |S(f)|^2\, df \qquad (3.51) $$

where the energy of the input signal s(t) is given by E = \int_{-\infty}^{\infty} |S(f)|^2\, df.
Therefore:

$$ \max \left( \frac{S}{N} \right)_T = \frac{2E}{N_0} $$

Thus the maximum (S/N)_T depends on the input signal energy and the power spectral density of the noise, and NOT on the particular shape of the waveform.

The equality in max(S/N)_T = 2E/N_0 holds for the optimum filter transfer function H_0(f), i.e., when:

$$ H(f) = H_0(f) = k\, S^*(f)\, e^{-j 2\pi f T} \qquad (3.54) $$

$$ h(t) = \mathcal{F}^{-1}\left\{ k\, S^*(f)\, e^{-j 2\pi f T} \right\} = k\, s(T - t), \quad 0 \le t \le T \qquad (3.55) $$

That is, the optimum (matched) filter's impulse response is a delayed, time-reversed copy of the signal.

Detection

Sampling the matched filter output reduces the received signal to a single variable z(T), after which the detection of the symbol is carried out. The concept of the maximum likelihood detector is based on statistical decision theory. It allows us to:
- formulate the decision rule that operates on the data
- optimize the detection criterion
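A small discrete-time sketch of the matched-filter result above: the filter h[n] = s[N-1-n] achieves the sampled SNR 2E/N_0, while any other filter falls short. The pulse shape and the alternative filter are arbitrary assumptions for illustration:

```python
# Discrete-time matched-filter demo; the pulse s[n] is an assumed shape.
s = [0.2, 0.5, 1.0, 0.5, 0.2]          # assumed transmit pulse s[n]
E = sum(x * x for x in s)              # pulse energy

h_matched = list(reversed(s))          # h[n] = s[N-1-n]  (k = 1)
h_other = [1.0, 0.0, 0.0, 0.0, 0.0]    # an arbitrary alternative filter

def snr_at_T(h, N0=1.0):
    # Signal component at t = T is the correlation sum h[n]*s[N-1-n];
    # noise power at the filter output is (N0/2) * sum(h[n]^2).
    signal = sum(hn * sn for hn, sn in zip(h, reversed(s)))
    noise_var = (N0 / 2) * sum(hn * hn for hn in h)
    return signal ** 2 / noise_var

print(snr_at_T(h_matched))   # equals 2E/N0
print(snr_at_T(h_other))     # strictly smaller
```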
Probabilities Review

- P[s0], P[s1]: a priori probabilities. These probabilities are known before transmission.
- p(z): pdf of the received sample.
- p(z|s0), p(z|s1): conditional pdfs of the received sample z, conditioned on the event that s_i was transmitted (i = 0, 1). Also called the likelihood of s_i.
- P[s0|z], P[s1|z]: a posteriori probabilities. After examining the sample z, we make a refinement of our previous knowledge about the transmitted symbol s_i.
- P[s1|s0], P[s0|s1]: probabilities of a wrong decision (error).
- P[s1|s1], P[s0|s0]: probabilities of a correct decision.

How to Choose the Decision Threshold?

The optimum decision rule is based on the maximum a posteriori (MAP) probability:

If P[s0|z] > P[s1|z], decide H0; else if P[s1|z] > P[s0|z], decide H1.

The problem is that the a posteriori probabilities are not known. Solution: use Bayes' theorem, P[s_i | z] = p(z|s_i) P(s_i) / p(z), to replace the a posteriori probabilities. Since p(z) is common to both sides, the rule becomes:

$$ p(z \mid s_1)\, P(s_1) \underset{H_0}{\overset{H_1}{\gtrless}} p(z \mid s_0)\, P(s_0) $$
Hence, for equally likely symbols, the likelihood ratio test becomes:

$$ L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} = \exp\left[ \frac{z (a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2 \sigma_0^2} \right] \underset{H_0}{\overset{H_1}{\gtrless}} 1 $$

Taking the natural log of both sides gives:

$$ \ln\{L(z)\} = \frac{z (a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2 \sigma_0^2} \underset{H_0}{\overset{H_1}{\gtrless}} 0 $$

Hence:

$$ z \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma_0^2 (a_1 - a_0)(a_1 + a_0)}{2 \sigma_0^2 (a_1 - a_0)} = \frac{a_1 + a_0}{2} = \gamma_0 $$

\gamma_0 is the optimum threshold level for minimizing the probability of making an incorrect decision in this binary case.

Example: For antipodal signals, s_1(t) = -s_0(t), so a_1 = -a_0 and:

$$ \gamma_0 = \frac{a_1 + a_0}{2} = 0, \qquad \text{i.e.,} \quad z \underset{H_0}{\overset{H_1}{\gtrless}} 0 $$
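The threshold rule derived above can be sketched in a few lines. This is a minimal simulation under assumed values (the antipodal amplitudes and noise level are not from the slides):

```python
import random

random.seed(1)

# Assumed values (not from the slides): antipodal means and noise level.
a1, a0, sigma0 = 1.0, -1.0, 0.4
gamma0 = (a1 + a0) / 2   # optimum threshold gamma0 = (a1 + a0)/2 = 0 here

def detect(z):
    """Decide H1 (bit 1) if z > gamma0, else H0 (bit 0)."""
    return 1 if z > gamma0 else 0

# Push random bits through the scalar AWGN model z = a_i + n0 and count errors.
n_bits, errors = 100_000, 0
for _ in range(n_bits):
    bit = random.getrandbits(1)
    z = (a1 if bit else a0) + random.gauss(0.0, sigma0)
    errors += detect(z) != bit
print(errors / n_bits)   # empirical error rate, small for this SNR
```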
This means that if the received signal was positive, s_1(t) was sent; else s_0(t) was sent.

ML Detection Rule Viewed as a Minimum Distance Detection Rule

For a general M-ary modulation scheme having an N-dimensional signal space, the detector chooses the vector s_i as the transmitted symbol which minimizes the Euclidean distance:

$$ \| \mathbf{z} - \mathbf{s}_i \| = \sqrt{ \sum_{j=1}^{N} (z_j - a_{ij})^2 } $$
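The minimum-distance rule translates directly to code. A minimal sketch; the QPSK-like constellation below is an assumption chosen to match the M = 4, N = 2 example that follows:

```python
import math

# Assumed QPSK-like constellation: M = 4 signal vectors in an N = 2 space.
constellation = [(1, 1), (-1, 1), (-1, -1), (1, -1)]

def detect(z):
    """Return the index i of the vector s_i closest to z in Euclidean distance."""
    return min(range(len(constellation)),
               key=lambda i: math.dist(z, constellation[i]))

print(detect((0.9, 1.2)))    # 0: nearest vector is (1, 1)
print(detect((-0.7, -0.1)))  # 2: nearest vector is (-1, -1)
```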
Schematic Example of ML Decision Regions for the General M-ary Case

[Figure: for M = 4, N = 2, the signal space spanned by \psi_1(t), \psi_2(t) is partitioned into M decision regions Z_1, ..., Z_4, one around each signal vector s_1, ..., s_4.]

Probability of Error

An error will occur if s_1 is sent but s_0 is estimated:

$$ P(H_0 \mid s_1) = P(e \mid s_1) = \int_{-\infty}^{\gamma_0} p(z \mid s_1)\, dz $$

and similarly:

$$ P(H_1 \mid s_0) = P(e \mid s_0) = \int_{\gamma_0}^{\infty} p(z \mid s_0)\, dz $$

The total probability of bit error is:

$$ P_B = P(H_0 \mid s_1)\, P(s_1) + P(H_1 \mid s_0)\, P(s_0) $$

Evaluating the Gaussian integral for equally likely symbols:

$$ P_B = \int_{\gamma_0}^{\infty} \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a_0}{\sigma_0} \right)^2 \right] dz = Q\left( \frac{a_1 - a_0}{2 \sigma_0} \right) \qquad \text{(equation B.18)} $$
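The Q-function in equation B.18 is not in the Python standard library by name, but it can be written via the complementary error function. A minimal sketch; the amplitudes and noise level are assumed values:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# P_B = Q((a1 - a0) / (2*sigma0)); the amplitudes and noise level are assumed.
a1, a0, sigma0 = 1.0, -1.0, 0.4
print(Q((a1 - a0) / (2 * sigma0)))   # Q(2.5) ≈ 0.0062
```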
Error Probability of Binary Signals

$$ P_B = Q\left( \frac{a_1 - a_0}{2 \sigma_0} \right) \qquad \text{(equation B.18)} $$

To minimize P_B, we need to maximize (a_1 - a_0)/\sigma_0. WHY? Because Q(x) decreases monotonically as its argument grows. An equivalent statement is that we need to maximize (a_1 - a_0)^2 / \sigma_0^2. With a filter matched to the difference signal s_1(t) - s_0(t), the maximum of this ratio is the difference signal-to-noise ratio:

$$ \max \frac{(a_1 - a_0)^2}{\sigma_0^2} = \frac{2 E_d}{N_0}, \qquad \text{where } E_d = \int_0^T \left[ s_1(t) - s_0(t) \right]^2 dt $$

Expanding E_d:

$$ E_d = \int_0^T s_1^2(t)\, dt + \int_0^T s_0^2(t)\, dt - 2 \int_0^T s_1(t)\, s_0(t)\, dt $$

Unipolar Signaling

$$ s_1(t) = A, \quad 0 \le t \le T, \quad \text{for binary 1} $$
$$ s_0(t) = 0, \quad 0 \le t \le T, \quad \text{for binary 0} $$

Here \int_0^T s_1(t)\, s_0(t)\, dt = 0, so E_d = A^2 T, and the average energy per bit is E_b = A^2 T / 2, i.e., E_d = 2 E_b. Therefore:

$$ P_b = Q\left( \sqrt{\frac{E_d}{2 N_0}} \right) = Q\left( \sqrt{\frac{A^2 T}{2 N_0}} \right) = Q\left( \sqrt{\frac{E_b}{N_0}} \right) $$
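The unipolar result P_b = Q(sqrt(E_b/N_0)) can be checked by Monte-Carlo simulation. A minimal sketch under assumed values: E_b/N_0 = 4 is arbitrary, and the scalar model picks a_1 - a_0 = 2*sigma0*sqrt(E_b/N_0) so that Q((a_1 - a_0)/(2*sigma0)) equals the theoretical expression:

```python
import math, random

random.seed(2)

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Assumed operating point (not from the slides).
EbN0 = 4.0
sigma0 = 1.0
a1, a0 = 2 * sigma0 * math.sqrt(EbN0), 0.0   # makes Q((a1-a0)/(2*sigma0)) = Q(sqrt(Eb/N0))
gamma0 = (a1 + a0) / 2

n_bits, errors = 200_000, 0
for _ in range(n_bits):
    bit = random.getrandbits(1)
    z = (a1 if bit else a0) + random.gauss(0.0, sigma0)
    errors += (z > gamma0) != bool(bit)

print(errors / n_bits, Q(math.sqrt(EbN0)))   # simulated BER vs theory, both ≈ 0.023
```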