Digital Communication Systems
Dr. Shurjeel Wyne
Lecture 6: Baseband Demodulation and Detection

Last time we talked about:

 Receiver structure
 Impact of AWGN and ISI on the transmitted signal
 Optimum filter to maximize SNR
 Matched filter and correlator receiver
 Signal space used for detection
 Orthogonal N-dimensional space
 Signal-to-waveform transformation and vice versa
Today we are going to talk about:

 Signal detection in AWGN channels
 MAP and maximum likelihood detectors
 Minimum distance detector
 Average probability of symbol error
 Union bound on error probability
 Upper bound on error probability based on the minimum distance

Demodulation and Detection

[Figure 3.1: Two basic steps in the demodulation/detection of digital signals. The received waveform (transmitted waveform plus AWGN) passes through frequency down-conversion (for bandpass signals), a receiving filter, and an optional equalizing filter that compensates for channel-induced ISI; the output is sampled at t = T (demodulate and sample), and the sample is compared to a threshold to decide the message symbol (detect).]

The digital receiver performs two basic functions:

 Demodulation, to recover a waveform to be sampled at t = nT
 Detection, the decision-making process of selecting the possible digital symbol

Detection of Binary Signal in Gaussian Noise

 For any binary modulation, the transmitted signal over a symbol interval (0, T) is:

    s_i(t) = s_0(t), 0 ≤ t ≤ T, for a binary 0
    s_i(t) = s_1(t), 0 ≤ t ≤ T, for a binary 1

 The received signal r(t), degraded by noise n(t) and possibly degraded by the impulse response of the channel h_c(t), is:

    r(t) = s_i(t) * h_c(t) + n(t),   i = 1, 2        (3.1)

   where n(t) is assumed to be a zero-mean AWGN process

 For an ideal distortionless channel, where h_c(t) is an impulse function and convolution with h_c(t) produces no degradation, r(t) can be represented as:

    r(t) = s_i(t) + n(t),   i = 1, 2,   0 ≤ t ≤ T        (3.2)

 The recovery of the signal at the receiver consists of two parts:
  Filter
   Reduces the received signal to a single variable z(T)
   z(T) is called the test statistic
  Detector (or decision circuit)
   Compares z(T) to some threshold level γ_0, i.e., decide H_1 if z(T) > γ_0 and H_0 otherwise:

      z(T) \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_0

     where H_1 and H_0 are the two possible binary hypotheses
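The signal model of eq. 3.2 and the filter/detector split can be sketched in a few lines. This is an illustrative simulation, not from the lecture: the pulse shapes, noise level, and the integrate-and-sample "filter" are all assumed for demonstration, and the threshold is taken midway between the two signal means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: rectangular pulses over one symbol interval (0, T)
T, fs = 1.0, 100                 # symbol duration and samples per symbol
t = np.arange(0, T, 1 / fs)
s0 = np.zeros_like(t)            # waveform for a binary 0
s1 = np.ones_like(t)             # waveform for a binary 1 (unit pulse)

def received(si, noise_std):
    """r(t) = s_i(t) + n(t) for an ideal distortionless channel (eq. 3.2)."""
    return si + rng.normal(0.0, noise_std, si.shape)

# Filter step: reduce r(t) to a single test statistic z(T) by integration
r = received(s1, noise_std=0.5)
z = r.sum() / fs

# Detector step: compare z(T) to the threshold gamma_0 (midpoint of means)
a0, a1 = s0.sum() / fs, s1.sum() / fs
gamma0 = (a0 + a1) / 2
decision = 1 if z > gamma0 else 0   # H1 if z > gamma0, else H0
print(decision)
```

With the signal present and moderate noise, the integrated statistic concentrates near a_1 = 1, so the detector almost always decides H_1 here.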

Receiver Functionality

The recovery of the signal at the receiver consists of two parts:

1. Waveform-to-sample transformation
  Demodulator followed by a sampler
  At the end of each symbol duration T, the output of the sampler yields a sample z(T), called the test statistic:

     z(T) = a_i(T) + n_0(T),   i = 0, 1        (3.3)

   where a_i(T) is the desired signal component and n_0(T) is the noise component

2. Detection of symbol
  Assume that the input noise is a zero-mean Gaussian random process and the receiving filter is linear; then n_0(T) is a Gaussian random variable:

     p(n_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{n_0}{\sigma_0}\right)^2\right]        (3.4)

 The test statistic z(T) is also a Gaussian random variable, with a mean of either a_1 or a_0 depending on whether a binary one or zero was transmitted:

     p(z \mid s_0) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{z - a_0}{\sigma_0}\right)^2\right]

     p(z \mid s_1) = \frac{1}{\sigma_0 \sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]

   where \sigma_0^2 is the noise variance

 The ratio of instantaneous signal power to average noise power, (S/N)_T, at time t = T at the output of the sampler is:

     \left(\frac{S}{N}\right)_T = \frac{a_i^2}{\sigma_0^2}        (3.45)

 We need to achieve the maximum (S/N)_T
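The two conditional pdfs above are plain Gaussians with means a_0 and a_1 and common variance σ_0². A minimal sketch (the numeric values a_0, a_1, σ_0 are assumed for illustration, not from the lecture) shows that the two likelihoods cross at the midpoint between the means:

```python
import math

def gaussian_pdf(z, mean, sigma):
    """Conditional pdf of the test statistic, p(z|s_i) ~ N(a_i, sigma0^2)."""
    return math.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Assumed illustrative values: a0 = 0, a1 = 1, sigma0 = 0.5
a0, a1, sigma0 = 0.0, 1.0, 0.5

# At the midpoint z = (a0 + a1)/2 the two likelihoods are equal; this is
# where the optimum threshold will turn out to lie for equal priors
z = (a0 + a1) / 2
print(gaussian_pdf(z, a0, sigma0), gaussian_pdf(z, a1, sigma0))

# (S/N)_T at the sampler output for symbol s1 (eq. 3.45): a1^2 / sigma0^2
snr_T = a1**2 / sigma0**2
```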

Find Filter Transfer Function H_0(f)

 Objective: to maximize (S/N)_T

 Expressing the signal a_i(t) at the filter output in terms of the filter transfer function H(f):

     a_i(t) = \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f t}\, df        (3.46)

   where S(f) is the Fourier transform of the input signal s(t)

 The output noise power can be expressed as:

     \sigma_0^2 = \frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df        (3.47)

 Expressing (S/N)_T as:

     \left(\frac{S}{N}\right)_T = \frac{\left| \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f T}\, df \right|^2}{\frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df}        (3.48)

 To find H(f) = H_0(f) that maximizes (S/N)_T, we make use of Schwarz's inequality:

     \left| \int_{-\infty}^{\infty} f_1(x)\, f_2(x)\, dx \right|^2 \le \int_{-\infty}^{\infty} |f_1(x)|^2\, dx \int_{-\infty}^{\infty} |f_2(x)|^2\, dx        (3.49)

   Equality holds if f_1(x) = k f_2^*(x), where k is an arbitrary constant and * indicates complex conjugate

 Associating H(f) with f_1(x) and S(f) e^{j 2\pi f T} with f_2(x), we get:

     \left| \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j 2\pi f T}\, df \right|^2 \le \int_{-\infty}^{\infty} |H(f)|^2\, df \int_{-\infty}^{\infty} |S(f)|^2\, df        (3.50)

 Substituting into eq. 3.48 yields:

     \left(\frac{S}{N}\right)_T \le \frac{2}{N_0} \int_{-\infty}^{\infty} |S(f)|^2\, df        (3.51)

   where the energy of the input signal s(t) is given by: E = \int_{-\infty}^{\infty} |S(f)|^2\, df

 S  2E
Detection
 Therefore max   
 N T N 0
 Sampling the Matched filter output reduces the received signal to a
 Thus max (S/N)T depends on input signal energy single variable z(T), after which the detection of symbol is carried out
and power spectral density of noise and  The concept of maximum likelihood detector is based on Statistical
NOT on the particular shape of the waveform Decision Theory
 It allows us to
S  2E
max     formulate the decision rule that operates on the data
 The equality in  N T N 0 holds for the optimum filter
transfer function H0(f), i.e., when:  optimize the detection criterion

H ( f )  H 0 ( f )  kS * ( f ) e  j 2  fT (3.54)

h (t )   1
kS * ( f )e  j 2  fT
 (3.55)

For real valued s(t):  kS (T  t ) 0  t  T H 1


h (t )   (3.56)
0 else where z (T ) 

 0
H 0
The MATCHED FILTER h(t) shown above is a linear filter designed to
provide the maximum signal-to-noise power ratio at its output for a given11 12

transmitted symbol waveform
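The matched filter of eq. 3.56 is just the time-reversed (and delayed) symbol waveform, so filtering and sampling at t = T amounts to correlating the received signal with s(t). A minimal sketch, with an assumed half-sine pulse and assumed noise level (neither is from the lecture), and k = 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example pulse: half-sine over one symbol interval
fs, T = 100, 1.0
t = np.arange(0, T, 1 / fs)
s = np.sin(np.pi * t / T)              # transmitted symbol waveform s(t)

# Matched filter impulse response h(t) = k * s(T - t) with k = 1 (eq. 3.56)
h = s[::-1]

r = s + rng.normal(0.0, 0.3, s.shape)  # received signal in AWGN

# Convolve with h and sample at t = T: equivalent to correlating r with s
y = np.convolve(r, h) / fs
z = y[len(s) - 1]                      # matched filter output sampled at t = T

# The noiseless output at t = T equals the signal energy E = \int s^2(t) dt
E = np.sum(s**2) / fs                  # = T/2 = 0.5 for this half-sine pulse
print(z, E)
```

Sampling at t = T catches the peak of the noiseless filter output, which is exactly the signal energy E; this is the (S/N)_T-maximizing property derived above.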

Probabilities Review

 P[s0], P[s1] — a priori probabilities
  These probabilities are known before transmission
 p(z)
  probability of the received sample
 p(z|s0), p(z|s1)
  conditional pdf of the received sample z, conditioned on the event that s_i was transmitted (i = 0, 1); also called the likelihood of s_i
 P[s0|z], P[s1|z] — a posteriori probabilities
  After examining the sample z, we make a refinement of our previous knowledge about the transmitted symbol s_i
 P[s1|s0], P[s0|s1]
  Probability of a wrong decision (error)
 P[s1|s1], P[s0|s0]
  Probability of a correct decision

How to Choose the Decision Threshold?

 Optimum decision rule based on maximum a posteriori (MAP) probability:
  If P[s0|z] > P[s1|z], decide H0
  else if P[s1|z] > P[s0|z], decide H1
 Problem: the a posteriori probabilities are not known.
 Solution: use Bayes' theorem to replace the a posteriori probabilities:

     P(s_i \mid z) = \frac{p(z \mid s_i)\, P(s_i)}{p(z)}

   so the MAP rule becomes:

     \frac{p(z \mid s_1)\, P(s_1)}{p(z)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p(z \mid s_0)\, P(s_0)}{p(z)}

 MAP detection rule:

     L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P(s_0)}{P(s_1)}   — likelihood ratio test (LRT)

 Maximum likelihood (ML) detection rule:
  When the two signals s0(t) and s1(t) are equally likely, i.e., P(s0) = P(s1) = 0.5, the maximum a posteriori (MAP) decision rule simplifies to:

     L(z) = \frac{p(z \mid s_1)}{p(z \mid s_0)} \underset{H_0}{\overset{H_1}{\gtrless}} 1   — maximum likelihood ratio test

 Substituting the pdfs:

     H_0: \quad p(z \mid s_0) = \frac{1}{\sigma_0\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{z - a_0}{\sigma_0}\right)^2\right]

     H_1: \quad p(z \mid s_1) = \frac{1}{\sigma_0\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]

     L(z) = \frac{\exp\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]}{\exp\left[-\frac{1}{2}\left(\frac{z - a_0}{\sigma_0}\right)^2\right]} \underset{H_0}{\overset{H_1}{\gtrless}} 1

 Hence:

     \exp\left[\frac{z(a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2\sigma_0^2}\right] \underset{H_0}{\overset{H_1}{\gtrless}} 1

 Taking the natural log of both sides gives:

     \frac{z(a_1 - a_0)}{\sigma_0^2} - \frac{a_1^2 - a_0^2}{2\sigma_0^2} \underset{H_0}{\overset{H_1}{\gtrless}} \ln\{L(z)\} = 0

 Hence:

     z \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma_0^2\,(a_1 - a_0)(a_1 + a_0)}{2\sigma_0^2\,(a_1 - a_0)} = \frac{a_1 + a_0}{2} = \gamma_0

 γ_0 = (a_1 + a_0)/2 is the optimum threshold level for minimizing the probability of making an incorrect decision in this binary case.

 Example: for antipodal signaling, s_1(t) = −s_0(t), so a_1 = −a_0, and:

     z \underset{H_0}{\overset{H_1}{\gtrless}} \frac{a_1 + a_0}{2} = 0
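The optimum threshold γ_0 = (a_1 + a_0)/2 can be checked empirically. In this sketch the antipodal means and noise level are assumed values for illustration, not from the lecture; the simulated error rate should land near Q((a_1 − a_0)/2σ_0):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed antipodal means of the test statistic and noise std
a0, a1, sigma0 = -1.0, 1.0, 0.7
gamma0 = (a0 + a1) / 2          # optimum ML threshold; = 0 for antipodal

# Simulate many test statistics z = a_i + n_0 and apply z >< gamma0
n = 100_000
bits = rng.integers(0, 2, n)
z = np.where(bits == 1, a1, a0) + rng.normal(0.0, sigma0, n)
decisions = (z > gamma0).astype(int)

error_rate = np.mean(decisions != bits)
print(gamma0, error_rate)
```

For these values the theoretical error probability is Q(1/0.7) ≈ 0.077, and the simulated rate is close to it.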

This means that if the received sample is positive, s_1(t) was sent; else s_0(t) was sent.

ML Detection Rule Viewed as a Minimum Distance Detection Rule

 For a general M-ary modulation scheme having an N-dimensional signal space, the detector chooses the vector s_i as the transmitted symbol which minimizes the Euclidean distance ‖z − s_i‖

 For an AWGN channel, the decision rule based on the ML criterion reduces to finding the signal s_i that is closest in distance to the received signal vector z, i.e., choose s_i for which ‖z − s_i‖ is minimum, for all i = 1, ..., M
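The minimum distance rule is a one-line computation over the signal vectors. A minimal sketch with an assumed four-point constellation (M = 4, N = 2; the points are illustrative, not from the lecture):

```python
import numpy as np

# Assumed signal constellation for M = 4, N = 2 (QPSK-like points)
signals = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)

def min_distance_detect(z, signals):
    """Choose the index i minimizing the Euclidean distance ||z - s_i||."""
    distances = np.linalg.norm(signals - z, axis=1)
    return int(np.argmin(distances))

# A received vector near s_2 = (-1, 0) decodes to index 2
print(min_distance_detect(np.array([-0.8, 0.1]), signals))  # -> 2
```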

Schematic Example of ML Decision Regions for the General M-ary Case

[Figure: Example with M = 4, N = 2. The signal space, spanned by ψ_1(t) and ψ_2(t), is partitioned into M decision regions Z_1, ..., Z_M around the signal vectors s_1, ..., s_4; the received vector z lies in region Z_i if ‖z − s_i‖ is minimum for all i = 1, ..., M.]

Probability of Error

 An error will occur if:
  s_1 is sent but s_0 is estimated:

     P(H_0 \mid s_1) = P(e \mid s_1) = \int_{-\infty}^{\gamma_0} p(z \mid s_1)\, dz

  s_0 is sent but s_1 is estimated:

     P(H_1 \mid s_0) = P(e \mid s_0) = \int_{\gamma_0}^{\infty} p(z \mid s_0)\, dz

 The total probability of error is the sum of the errors:

     P_B = \sum_{i=1}^{2} P(e, s_i) = P(e \mid s_1) P(s_1) + P(e \mid s_0) P(s_0)
         = P(H_0 \mid s_1) P(s_1) + P(H_1 \mid s_0) P(s_0)

 If the signals are equally probable:

     P_B = P(H_0 \mid s_1) P(s_1) + P(H_1 \mid s_0) P(s_0) = \frac{1}{2}\left[ P(H_0 \mid s_1) + P(H_1 \mid s_0) \right]

 By symmetry:

     P_B = P(H_1 \mid s_0) = P(H_0 \mid s_1)

 The probability of bit error P_B is the probability that an incorrect hypothesis is made
 Numerically, P_B is the area under the tail of either of the conditional distributions p(z|s1) or p(z|s0):

     P_B = \int_{\gamma_0}^{\infty} P(H_1 \mid s_0)\, dz = \int_{\gamma_0}^{\infty} p(z \mid s_0)\, dz
         = \int_{\gamma_0}^{\infty} \frac{1}{\sigma_0\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{z - a_0}{\sigma_0}\right)^2\right] dz

 With the change of variable u = (z − a_0)/σ_0, the lower limit becomes (γ_0 − a_0)/σ_0 = (a_1 − a_0)/2σ_0, so:

     P_B = \int_{(a_1 - a_0)/2\sigma_0}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du

 The above integral cannot be evaluated in closed form
 It is termed the Q-function, where the argument of the function is the lower limit of the integral; hence:

     P_B = Q\left(\frac{a_1 - a_0}{2\sigma_0}\right)        (equation B.18)
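Although the integral has no closed form, the Q-function is easy to evaluate numerically via the complementary error function, using the standard identity Q(x) = ½ erfc(x/√2). The numeric values of a_0, a_1, σ_0 below are assumed for illustration:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# P_B = Q((a1 - a0) / (2*sigma0)), eq. B.18 — assumed illustrative values
a0, a1, sigma0 = 0.0, 2.0, 0.5
PB = Q((a1 - a0) / (2 * sigma0))
print(PB)   # Q(2) ≈ 0.0228
```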

Error Probability of Binary Signals

Unipolar Signaling

     s_1(t) = A,  0 ≤ t ≤ T, for binary 1
     s_0(t) = 0,  0 ≤ t ≤ T, for binary 0

 To minimize P_B = Q((a_1 − a_0)/2σ_0), we need to maximize (a_1 − a_0)/σ_0. Why?
 An equivalent statement is that we need to maximize the signal-to-noise ratio:

     \frac{(a_1 - a_0)^2}{\sigma_0^2} = \frac{2 E_d}{N_0}

   where E_d is the energy of the difference signal:

     E_d = \int_0^T \left[ s_1(t) - s_0(t) \right]^2 dt
         = \int_0^T s_1^2(t)\, dt + \int_0^T s_0^2(t)\, dt - 2\int_0^T s_1(t)\, s_0(t)\, dt

 For unipolar signaling, \int_0^T s_1(t)\, s_0(t)\, dt = 0, so E_d = A^2 T, and the average bit energy is E_b = A^2 T / 2; therefore:

     P_b = Q\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\left(\sqrt{\frac{A^2 T}{2 N_0}}\right) = Q\left(\sqrt{\frac{E_b}{N_0}}\right)
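The two equivalent expressions for the unipolar bit error probability can be checked numerically. The amplitude, symbol duration, and noise density below are assumed values for illustration:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Unipolar signaling: s1 = A on (0, T), s0 = 0 — assumed values
A, T, N0 = 1.0, 1.0, 0.25

Ed = A**2 * T          # energy of the difference signal
Eb = A**2 * T / 2      # average bit energy (only half the symbols carry energy)

# The two expressions agree: Q(sqrt(Ed/2N0)) == Q(sqrt(Eb/N0))
Pb1 = Q(math.sqrt(Ed / (2 * N0)))
Pb2 = Q(math.sqrt(Eb / N0))
print(Pb1, Pb2)
```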

Bipolar Signaling (Antipodal)

     s_1(t) = +A,  0 ≤ t ≤ T, for binary 1
     s_0(t) = −A,  0 ≤ t ≤ T, for binary 0

 For antipodal signaling, E_d = \int_0^T [s_1(t) - s_0(t)]^2\, dt = 4A^2 T and E_b = A^2 T, so:

     P_b = Q\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\left(\sqrt{\frac{4 A^2 T}{2 N_0}}\right) = Q\left(\sqrt{\frac{2 E_b}{N_0}}\right)

 Comparison:

     Orthogonal:           P_b = Q\left(\sqrt{\frac{E_b}{N_0}}\right)
     Bipolar (antipodal):  P_b = Q\left(\sqrt{\frac{2 E_b}{N_0}}\right)

 Orthogonal signals require a factor of 2 increase in energy compared to bipolar to achieve the same P_b
 Since 10 log10(2) ≈ 3 dB, we say that bipolar signaling offers a 3 dB better performance than orthogonal
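The 3 dB advantage is easy to verify: at any Eb/N0, doubling the energy in the orthogonal expression reproduces the antipodal one. The Eb/N0 value below is assumed for illustration:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Compare orthogonal vs antipodal at the same Eb/N0 (assumed value, linear)
EbN0 = 4.0
Pb_orth = Q(math.sqrt(EbN0))        # orthogonal: Q(sqrt(Eb/N0))
Pb_anti = Q(math.sqrt(2 * EbN0))    # antipodal:  Q(sqrt(2*Eb/N0))

# Orthogonal needs twice the energy to match antipodal,
# a gap of 10*log10(2) ≈ 3.01 dB
print(Pb_orth, Pb_anti, 10 * math.log10(2))
```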

Comparing BER Performance

 For E_b/N_0 = 10 dB:

     P_B, orthogonal ≈ 9.2 × 10^−2
     P_B, antipodal ≈ 7.8 × 10^−4

 For the same received signal-to-noise ratio, antipodal signaling provides a lower bit error rate than orthogonal signaling

Eb/No Figure of Merit in Digital Communications

 SNR (or S/N) is the ratio of average signal power to average noise power. In DCS analysis, SNR is modified in terms of bit energy, because:
  Signals are transmitted within a symbol duration and hence are energy signals (zero average power)
  A merit at the bit level facilitates comparison of different DCSs transmitting different numbers of bits per symbol

     \frac{E_b}{N_0} = \frac{S\, T_b}{N / W} = \frac{S}{N} \cdot \frac{W}{R_b}

   where R_b is the bit rate and W is the bandwidth

 Note: S/N = E_b/N_0 × bandwidth efficiency (R_b/W)
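The conversion Eb/N0 = (S/N)·(W/Rb) is a one-liner. A minimal sketch with assumed example numbers (not from the lecture):

```python
import math

# Eb/N0 = (S/N) * (W/Rb): convert a measured SNR to the bit-level figure of merit
def ebn0_from_snr(snr_linear, bandwidth_hz, bit_rate_bps):
    """Return Eb/N0 (linear) given linear S/N, bandwidth W, and bit rate Rb."""
    return snr_linear * bandwidth_hz / bit_rate_bps

# Assumed example: S/N = 10 (i.e., 10 dB), W = 2000 Hz, Rb = 1000 bit/s
ebn0 = ebn0_from_snr(10.0, 2000.0, 1000.0)
print(ebn0, 10 * math.log10(ebn0))   # linear value and its dB equivalent
```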
