EC8501, EC6501 Digital Communication - Notes
EC6501
DIGITAL COMMUNICATION
UNIT I SAMPLING & QUANTIZATION
SAMPLING:
www.BrainKart.com
Click Here for Digital Communication full study material.
The instantaneously sampled signal is

g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)

Taking the Fourier transform,

G_\delta(f) = \frac{1}{T_s}\sum_{m=-\infty}^{\infty} G\!\left(f - \frac{m}{T_s}\right)

so that

g_\delta(t) \rightleftharpoons f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)   (3.2)

or

G_\delta(f) = f_s G(f) + f_s \sum_{m \neq 0} G(f - m f_s)   (3.5)

If G(f) = 0 for |f| \geq W and T_s = 1/(2W),

G_\delta(f) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right)   (3.4)
With

1. G(f) = 0 for |f| \geq W
2. f_s = 2W

we find from Equation (3.5) that

G(f) = \frac{1}{2W} G_\delta(f),   -W < f < W   (3.6)

Substituting (3.4) into (3.6), we may rewrite G(f) as

G(f) = \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right),   -W < f < W   (3.7)
n
Hence g(t) is uniquely determined by the samples g(n/2W) for all integers n; equivalently, the sequence g(n/2W) contains all the information of g(t).

To reconstruct g(t) from g(n/2W), we may write

g(t) = \int_{-W}^{W} G(f) \exp(j 2\pi f t)\, df
     = \int_{-W}^{W} \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right) \exp(j 2\pi f t)\, df
     = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W} \int_{-W}^{W} \exp\!\left[j 2\pi f \left(t - \frac{n}{2W}\right)\right] df   (3.8)
     = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{\sin(2\pi W t - n\pi)}{2\pi W t - n\pi}
     = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2Wt - n),   -\infty < t < \infty   (3.9)
Figure 3.4 (a) Anti-alias filtered spectrum of an information-bearing signal. (b) Spectrum
of instantaneously sampled version of the signal, assuming the use of a sampling rate
greater than the Nyquist rate. (c) Magnitude response of reconstruction filter.
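Equation (3.9) can be checked numerically: sample a band-limited tone at the Nyquist rate and rebuild intermediate values by sinc interpolation. This is a minimal sketch; the bandwidth W, the 3 Hz tone, and the truncation of the infinite sum to 401 samples are illustrative assumptions, not values from the text.

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

W = 4.0                                          # signal bandwidth in Hz (illustrative)
fs = 2 * W                                       # Nyquist-rate sampling, Ts = 1/(2W)
g = lambda t: math.cos(2 * math.pi * 3.0 * t)    # 3 Hz tone, inside |f| < W

# Samples g(n/2W), taken over a window wide enough for the sinc tails
samples = {n: g(n / fs) for n in range(-200, 201)}

def reconstruct(t):
    # Equation (3.9): g(t) = sum_n g(n/2W) sinc(2Wt - n)
    return sum(gn * sinc(fs * t - n) for n, gn in samples.items())

err = max(abs(reconstruct(t) - g(t)) for t in (0.05, 0.13, 0.21))
print(err)
```

The residual `err` comes only from truncating the infinite series in (3.9); widening the sample window shrinks it.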
Pulse-Amplitude Modulation :
Let s(t) denote the sequence of flat-top pulses:

s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)   (3.10)

h(t) = \begin{cases} 1, & 0 < t < T \\ \tfrac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases}   (3.11)
The instantaneously sampled version of m(t) is

m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, \delta(t - nT_s)   (3.12)

Convolving m_\delta(t) with h(t),

m_\delta(t) * h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\, h(t - \tau)\, d\tau
                   = \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, h(t - \tau)\, d\tau   (3.13)
                   = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)   (3.14)

In the frequency domain,

M_\delta(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)   (3.17)

S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f)   (3.18)
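Equations (3.10) to (3.14) say that flat-top PAM equals the instantaneous samples convolved with the rectangular pulse h(t). A small sketch, with T_s, T, and the message tone chosen purely for illustration:

```python
import math

Ts = 0.125                                        # sampling period (illustrative)
T = 0.05                                          # flat-top pulse width, T < Ts
m = lambda t: math.sin(2 * math.pi * 2.0 * t)     # message signal (illustrative)

def h(t):
    # Equation (3.11): unit-height rectangular pulse of duration T
    if 0 < t < T:
        return 1.0
    if t == 0 or t == T:
        return 0.5
    return 0.0

def s(t):
    # Equation (3.14): s(t) = sum_n m(nTs) h(t - nTs)
    return sum(m(n * Ts) * h(t - n * Ts) for n in range(-20, 21))

# During the n-th pulse, s(t) holds the sample value m(nTs); between pulses it is zero
print(s(2 * Ts + 0.01), m(2 * Ts))
```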
k
fs > 2fA(max)
Quantization Process:
Quantization Noise:
With m \in (-m_{\max}, m_{\max}) and L total quantization levels, the step size is

\Delta = \frac{2 m_{\max}}{L}   (3.25)

The quantization error Q is uniformly distributed:

f_Q(q) = \begin{cases} \dfrac{1}{\Delta}, & -\dfrac{\Delta}{2} < q \le \dfrac{\Delta}{2} \\ 0, & \text{otherwise} \end{cases}   (3.26)

\sigma_Q^2 = E[Q^2] = \int_{-\Delta/2}^{\Delta/2} q^2 f_Q(q)\, dq = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2\, dq = \frac{\Delta^2}{12}   (3.28)

The output signal-to-noise ratio is

(\mathrm{SNR})_O = \left(\frac{3P}{m_{\max}^2}\right) 2^{2R}   (3.33)
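Equation (3.28) predicts an error variance of \Delta^2/12 for a uniform quantizer, which can be checked by simulation. The mid-rise quantizer, L = 16 levels, and the uniform input are illustrative assumptions:

```python
import math
import random

random.seed(1)
m_max = 1.0
L = 16                          # number of quantizer levels (illustrative)
delta = 2 * m_max / L           # step size, eq. (3.25)

def quantize(m):
    # Mid-rise uniform quantizer: round to the midpoint of the current step
    index = math.floor(m / delta)
    return (index + 0.5) * delta

errors = []
for _ in range(100000):
    m = random.uniform(-m_max, m_max)
    errors.append(quantize(m) - m)

var = sum(e * e for e in errors) / len(errors)
print(var, delta**2 / 12)       # simulated vs. theoretical sigma_Q^2
```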
\mu-law

|v| = \frac{\log(1 + \mu|m|)}{\log(1 + \mu)}   (3.48)

\frac{d|m|}{d|v|} = \frac{\log(1 + \mu)}{\mu}\,(1 + \mu|m|)   (3.49)

A-law

|v| = \begin{cases} \dfrac{A|m|}{1 + \log A}, & 0 \le |m| \le \dfrac{1}{A} \\[4pt] \dfrac{1 + \log(A|m|)}{1 + \log A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases}   (3.50)

\frac{d|m|}{d|v|} = \begin{cases} \dfrac{1 + \log A}{A}, & 0 \le |m| \le \dfrac{1}{A} \\[4pt] (1 + \log A)\,|m|, & \dfrac{1}{A} \le |m| \le 1 \end{cases}   (3.51)
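The \mu-law compressor (3.48) in code, with \mu = 255 as used in North American PCM; the expander is simply the inverse of (3.48), and the function names are of course hypothetical:

```python
import math

MU = 255.0

def mu_compress(m):
    # |v| = log(1 + mu|m|) / log(1 + mu), sign preserved; eq. (3.48)
    return math.copysign(math.log1p(MU * abs(m)) / math.log1p(MU), m)

def mu_expand(v):
    # Inverse of (3.48): |m| = ((1 + mu)^{|v|} - 1) / mu
    return math.copysign(((1 + MU) ** abs(v) - 1) / MU, v)

for m in (-0.5, -0.01, 0.0, 0.01, 0.5, 1.0):
    v = mu_compress(m)
    print(m, round(v, 4), round(mu_expand(v), 6))
```

Note how small amplitudes are boosted before uniform quantization, which is exactly the point of companding.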
Time-Division Multiplexing(TDM):
Digital Multiplexers :
UNIT II WAVEFORM CODING
The linear predictor forms \hat{x}(n) = \sum_{k=1}^{p} w_k\, x(n-k), and the mean-square prediction error is

J = E\!\left[\left(x(n) - \sum_{k=1}^{p} w_k\, x(n-k)\right)^2\right]

With the autocorrelation R_X(kT_s) = R_X(k) = E[x(n)\, x(n-k)], we may simplify J as

J = \sigma_X^2 - 2\sum_{k=1}^{p} w_k R_X(k) + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k R_X(k - j)   (3.63)

Setting the derivatives to zero,

\frac{\partial J}{\partial w_k} = -2 R_X(k) + 2 \sum_{j=1}^{p} w_j R_X(k - j) = 0

\sum_{j=1}^{p} w_j R_X(k - j) = R_X(k),   k = 1, 2, \ldots, p   (3.64)

which determine the optimum weights in terms of R_X(0), R_X(1), \ldots, R_X(p).

Substituting (3.64) into (3.63) yields

J_{\min} = \sigma_X^2 - 2\sum_{k=1}^{p} w_k R_X(k) + \sum_{k=1}^{p} w_k R_X(k) = \sigma_X^2 - \sum_{k=1}^{p} w_k R_X(k)

In matrix form,

J_{\min} = \sigma_X^2 - \mathbf{w}^T \mathbf{r}_X = \sigma_X^2 - \mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X   (3.67)

Since \mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X > 0, J_{\min} is always less than \sigma_X^2.
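Equations (3.64) and (3.67) can be verified directly for a p = 2 predictor. The AR(1)-style autocorrelation R_X(k) = \rho^{|k|} is an illustrative assumption; for it the optimum predictor is known to be w_1 = \rho, w_2 = 0 with J_min = 1 - \rho^2:

```python
# Wiener-Hopf normal equations (3.64) for a p = 2 predictor,
# assuming an AR(1)-like autocorrelation R_X(k) = rho^|k| (illustrative).
rho = 0.8
R = lambda k: rho ** abs(k)     # normalized so that sigma_X^2 = R(0) = 1

# Solve [R(0) R(1); R(1) R(0)] [w1; w2] = [R(1); R(2)] by Cramer's rule
det = R(0) * R(0) - R(1) * R(1)
w1 = (R(1) * R(0) - R(2) * R(1)) / det
w2 = (R(0) * R(2) - R(1) * R(1)) / det

# Minimum prediction error, eq. (3.67): J_min = sigma_X^2 - w^T r_X
J_min = R(0) - (w1 * R(1) + w2 * R(2))
print(w1, w2, J_min)
```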
The steepest-descent update is

w_k(n+1) = w_k(n) - \frac{1}{2}\mu\, g_k,   k = 1, 2, \ldots, p   (3.69)

where \mu is a step-size parameter and the factor 1/2 is included for convenience of presentation, and

g_k = \frac{\partial J}{\partial w_k} = -2 R_X(k) + 2 \sum_{j=1}^{p} w_j R_X(k - j)

Replacing the expectations by instantaneous estimates gives the adaptive (LMS) update:

\hat{w}_k(n+1) = \hat{w}_k(n) + \mu\, x(n-k)\left[x(n) - \sum_{j=1}^{p} \hat{w}_j(n)\, x(n-j)\right]
             = \hat{w}_k(n) + \mu\, x(n-k)\, e(n),   k = 1, 2, \ldots, p   (3.72)

where, by (3.59) and (3.60),

e(n) = x(n) - \sum_{j=1}^{p} \hat{w}_j(n)\, x(n-j)   (3.73)
Figure 3.27
Block diagram illustrating the linear adaptive prediction process
PCM usually uses a sampling rate higher than the Nyquist rate, so the encoded signal contains redundant information; DPCM can efficiently remove this redundancy.
Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.
Processing Gain:
2. Assign the available bits in a perceptually efficient
manner.
UNIT III BASEBAND TRANSMISSION
Duo-binary Signaling:

The binary sequence {b_k} is first mapped to two-level pulses:

a_k = \begin{cases} +1, & \text{if symbol } b_k \text{ is } 1 \\ -1, & \text{if symbol } b_k \text{ is } 0 \end{cases}

and the duobinary coder output is c_k = a_k + a_{k-1}.

The overall frequency response is

H_I(f) = H_{\text{Nyquist}}(f)\left[1 + \exp(-j 2\pi f T_b)\right]
       = H_{\text{Nyquist}}(f)\left[\exp(j\pi f T_b) + \exp(-j\pi f T_b)\right]\exp(-j\pi f T_b)
       = 2 H_{\text{Nyquist}}(f)\cos(\pi f T_b)\exp(-j\pi f T_b)

H_{\text{Nyquist}}(f) = \begin{cases} 1, & |f| \le 1/(2T_b) \\ 0, & \text{otherwise} \end{cases}

The corresponding impulse response is

h_I(t) = \frac{\sin(\pi t / T_b)}{\pi t / T_b} + \frac{\sin[\pi (t - T_b)/T_b]}{\pi (t - T_b)/T_b} = \frac{T_b^2 \sin(\pi t / T_b)}{\pi t (T_b - t)}
To avoid error propagation, the precoder forms d_k = b_k \oplus d_{k-1}.
The sequence {d_k} is applied to a pulse-amplitude modulator, producing a corresponding two-level sequence of short pulses {a_k}, where a_k = +1 or -1 as before, and c_k = a_k + a_{k-1}.
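The duobinary chain above (precoder, two-level mapping, c_k = a_k + a_{k-1}, memoryless decision) can be sketched end to end. The message bits and the reference bit d_0 = 1 are illustrative assumptions, and the channel is taken as noiseless:

```python
bits = [0, 0, 1, 0, 1, 1, 0]          # message b_k (illustrative)

# Precoding: d_k = b_k XOR d_{k-1}, with reference bit d_0 = 1 (assumed)
d = [1]
for b in bits:
    d.append(b ^ d[-1])

# Two-level mapping a_k = +1 if d_k = 1 else -1, then c_k = a_k + a_{k-1}
a = [2 * x - 1 for x in d]
c = [a[k] + a[k - 1] for k in range(1, len(a))]

# Receiver decision, no memory needed: |c_k| = 2 -> symbol 0, c_k = 0 -> symbol 1
decoded = [0 if abs(ck) == 2 else 1 for ck in c]
print(c, decoded)
```

Because of the precoder, each decision depends only on the current c_k, so a single channel error cannot propagate.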
Modified Duo-binary Signaling:

c_k = a_k - a_{k-2}

H_{IV}(f) = H_{\text{Nyquist}}(f)\left[1 - \exp(-j 4\pi f T_b)\right] = 2j\, H_{\text{Nyquist}}(f)\sin(2\pi f T_b)\exp(-j 2\pi f T_b)

H_{IV}(f) = \begin{cases} 2j \sin(2\pi f T_b)\exp(-j 2\pi f T_b), & |f| \le 1/(2T_b) \\ 0, & \text{elsewhere} \end{cases}

Precoding: d_k = b_k \oplus d_{k-2}, i.e., d_k is symbol 1 if either symbol b_k or d_{k-2} (but not both) is 1, and symbol 0 otherwise.
If |c_k| = 1, the receiver makes a random guess in favor of symbol 1 or 0.
Generalized form of correlative-level coding:

h(t) = \sum_{n=0}^{N-1} w_n\, \mathrm{sinc}\!\left(\frac{t}{T_b} - n\right)
Baseband M-ary PAM Transmission:
Tapped-delay-line equalization :
The equalizer impulse response is

h(t) = \sum_{k=-N}^{N} w_k\, \delta(t - kT)

and the condition on the equalized pulse is

p(nT) = \begin{cases} 1, & n = 0 \\ 0, & n = \pm 1, \pm 2, \ldots, \pm N \end{cases}
Zero-forcing equalizer
– Optimum in the sense that it minimizes the peak distortion (ISI), i.e., the worst case
– Simple to implement
– The longer the equalizer, the closer it approaches the ideal condition for distortionless transmission
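The zero-forcing conditions p(nT) = \delta_n reduce to a small linear system in the tap weights. A sketch for a 3-tap equalizer; the discrete channel values and the generic Gaussian-elimination helper are illustrative, not from the notes:

```python
def gauss_solve(A, b):
    # Naive Gaussian elimination with partial pivoting (teaching sketch)
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Discrete channel pulse q(n): one precursor, main tap, one postcursor (illustrative)
q = {-1: 0.1, 0: 1.0, 1: 0.2}
qv = lambda n: q.get(n, 0.0)

# Zero-forcing conditions p(n) = sum_k w_k q(n - k) = delta_n for n = -1, 0, 1
A = [[qv(n - k) for k in (-1, 0, 1)] for n in (-1, 0, 1)]
w = gauss_solve(A, [0.0, 1.0, 0.0])

p = lambda n: sum(wk * qv(n - k) for wk, k in zip(w, (-1, 0, 1)))
print([round(p(n), 12) for n in (-2, -1, 0, 1, 2)])
```

Note the residual ISI at n = ±2: a finite equalizer forces nulls only inside its span, which is the "longer equalizer" remark above.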
Adaptive Equalizer :
Adaptive equalization
– Adjusts itself by operating on the input signal
Training sequence
– Precall equalization
– The channel changes little during an average data call
Prechannel equalization
– Requires a feedback channel
Postchannel equalization
Synchronous equalizer
– Tap spacing is the same as the symbol duration of the transmitted signal
Least-Mean-Square Algorithm:
The mean-square error is \varepsilon = E[e_n^2]. Its gradient with respect to tap weight w_k involves the ensemble-averaged cross-correlation R_{ex}(k) between the error and the input:

\frac{\partial \varepsilon}{\partial w_k} = 2E\!\left[e_n \frac{\partial e_n}{\partial w_k}\right] = -2E[e_n x_{n-k}] = -2 R_{ex}(k)

Optimality condition for minimum mean-square error:

\frac{\partial \varepsilon}{\partial w_k} = 0 \quad \text{for } k = 0, \pm 1, \ldots, \pm N

The mean-square error is a second-order (parabolic) function of the tap weights, a multidimensional bowl-shaped surface; the adaptive process makes successive adjustments of the tap weights, seeking the bottom of the bowl (the minimum value).

Steepest-descent algorithm
– The successive adjustments to each tap weight are made in the direction opposite to the gradient vector
– Recursive formula (\mu: step-size parameter):

w_k(n+1) = w_k(n) - \frac{1}{2}\mu \frac{\partial \varepsilon}{\partial w_k} = w_k(n) + \mu R_{ex}(k),   k = 0, \pm 1, \ldots, \pm N

Least-mean-square algorithm
– The steepest-descent algorithm is not available in an unknown environment
– It is approximated using instantaneous estimates:

\hat{R}_{ex}(k) = e_n x_{n-k}

\hat{w}_k(n+1) = \hat{w}_k(n) + \mu\, e_n x_{n-k}
For small \mu, the LMS algorithm behaves roughly like the steepest-descent algorithm.
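The LMS update \hat{w}_k(n+1) = \hat{w}_k(n) + \mu e_n x_{n-k} can be exercised in training mode, where the transmitted symbols are known at the receiver. The two-tap channel, \mu = 0.02, and the 6-tap equalizer length are illustrative assumptions:

```python
import random

random.seed(7)
channel = [1.0, 0.4]       # simple two-tap channel model (illustrative)
mu = 0.02                  # step-size parameter (illustrative)
N = 5                      # equalizer taps w_0 .. w_N
w = [0.0] * (N + 1)

# Known training symbols and the corresponding channel output
data = [random.choice((-1.0, 1.0)) for _ in range(5000)]
x = [sum(h * (data[n - i] if n - i >= 0 else 0.0)
         for i, h in enumerate(channel)) for n in range(len(data))]

# LMS adaptation: w_k(n+1) = w_k(n) + mu * e_n * x_{n-k}
for n in range(N, len(data)):
    y = sum(w[k] * x[n - k] for k in range(N + 1))
    e = data[n] - y
    for k in range(N + 1):
        w[k] += mu * e * x[n - k]

# With the trained taps, sign decisions on the equalized output match the data
errors = sum(1 for n in range(N, len(data))
             if (sum(w[k] * x[n - k] for k in range(N + 1)) > 0) != (data[n] > 0))
print(errors)
```

The trained taps approximate the inverse of the channel over the equalizer span, so the decision errors after convergence are essentially zero in this noiseless sketch.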
Implementation Approaches:
Analog
– CCD based; tap weights are stored in digital memory, with analog sampling and multiplication
– Used when the symbol rate is too high for digital processing
Digital
– Samples are quantized and stored in shift registers
– Tap weights are stored in shift registers; digital multiplication
Programmable digital
– Microprocessor based
– Flexible
– The same hardware may be time-shared
y_n = \sum_{k} h_k x_{n-k} = h_0 x_n + \sum_{k < 0} h_k x_{n-k} + \sum_{k > 0} h_k x_{n-k}

The idea is to use data decisions made on the basis of the precursors to take care of the postcursors
– The decisions would obviously have to be correct
In the case of an M-ary system, the eye pattern contains (M - 1) eye openings, where M is the number of discrete amplitude levels.
Interpretation of Eye Diagram:
UNIT IV DIGITAL MODULATION SCHEME
ASK, OOK, MASK:
• One amplitude encodes a 0 while another amplitude encodes
a 1 (a form of amplitude modulation)
Implementation of binary ASK:
s(t) = \begin{cases} A \cos(2\pi f_1 t), & \text{binary } 1 \\ A \cos(2\pi f_2 t), & \text{binary } 0 \end{cases}
FSK Bandwidth:
• Applications
– On voice-grade lines, used up to 1200 bps
– Used for high-frequency (3 to 30 MHz) radio transmission
– Used at higher frequencies on LANs that use coaxial cable
DBPSK:
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
s(t) = \begin{cases} A \cos\!\left(2\pi f_c t + \dfrac{\pi}{4}\right), & \text{dibit } 11 \\[4pt] A \cos\!\left(2\pi f_c t + \dfrac{3\pi}{4}\right), & \text{dibit } 01 \\[4pt] A \cos\!\left(2\pi f_c t - \dfrac{3\pi}{4}\right), & \text{dibit } 00 \\[4pt] A \cos\!\left(2\pi f_c t - \dfrac{\pi}{4}\right), & \text{dibit } 10 \end{cases}
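The four-phase signal set above amounts to a Gray-coded constellation; a minimal dibit-to-symbol mapper (the dictionary layout is just one way to write it):

```python
import cmath
import math

# Gray-coded QPSK mapping: dibit -> carrier phase, matching the signal set above
PHASES = {"11": math.pi / 4, "01": 3 * math.pi / 4,
          "00": -3 * math.pi / 4, "10": -math.pi / 4}

def qpsk_map(bits):
    # Group the bit string in pairs and emit one unit-energy complex symbol per dibit
    return [cmath.exp(1j * PHASES[bits[i:i + 2]]) for i in range(0, len(bits), 2)]

syms = qpsk_map("11011000")
print([(round(s.real, 3), round(s.imag, 3)) for s in syms])
```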
Concept of a constellation :
M-ary PSK:
Using multiple phase angles, each possibly with more than one amplitude, multiple signal elements can be achieved. The signaling (baud) rate is

D = \frac{R}{L} = \frac{R}{\log_2 M}

where R is the bit rate and L = \log_2 M is the number of bits per signal element.
QAM:
– As an example of QAM, 12 different phases are
combined with two different amplitudes
– Since only 4 phase angles have 2 different amplitudes,
there are a total of 16 combinations
– With 16 signal combinations, each baud equals 4 bits of
information (2 ^ 4 = 16)
– Combine ASK and PSK such that each signal
corresponds to multiple bits
– More phases than amplitudes
– Minimum bandwidth requirement same as ASK or PSK
QAM and QPR:
Offset quadrature phase-shift keying (OQPSK):
Generation and Detection of Coherent BPSK:
Figure 6.26 Block diagrams for (a) binary FSK transmitter and
(b) coherent binary FSK receiver.
Fig. 6.28
Figure 6.29 Signal-space diagram for MSK system.
Generation and Detection of MSK Signals:
Figure 6.31 Block diagrams for (a) MSK transmitter and (b)
coherent MSK receiver.
UNIT V ERROR CONTROL CODING
Block Codes:
• The maximum number of detectable errors is d_{\min} - 1
• The maximum number of correctable errors is given by

t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor

where d_{\min} is the minimum Hamming distance between any two codewords and \lfloor \cdot \rfloor denotes the largest integer not exceeding its argument.
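d_min can be found by exhaustive pairwise comparison on a small code. The sketch below builds the codebook of the systematic (7,4) Hamming code used later in these notes and confirms d_min = 3, hence detection of 2 errors and correction of t = 1:

```python
from itertools import product

# Systematic generator of the (7,4) Hamming code (as used later in these notes)
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(d):
    # c = sum_i d_i a_i (mod 2), with a_i the rows of G
    return [sum(di * gi for di, gi in zip(d, col)) % 2 for col in zip(*G)]

codebook = [encode(d) for d in product((0, 1), repeat=4)]
dmin = min(sum(a != b for a, b in zip(c1, c2))
           for i, c1 in enumerate(codebook) for c2 in codebook[i + 1:])
t = (dmin - 1) // 2     # maximum number of guaranteed-correctable errors
print(dmin, dmin - 1, t)    # -> 3 2 1
```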
• So, to obtain the codeword for dataword 1011, the first, third
and fourth codewords in the list are added together, giving
1011010
• This process will now be described in more detail
c = \sum_{i=1}^{k} d_i \mathbf{a}_i
• ai must be linearly independent, i.e.,
Since codewords are given by summations of the ai vectors,
then to avoid 2 datawords having the same codeword the ai vectors
must be linearly independent.
• The sum (mod 2) of any 2 codewords is also a codeword: for datawords d_1 and d_2 with d_3 = d_1 + d_2,

c_3 = \sum_{i=1}^{k} d_{3i} \mathbf{a}_i = \sum_{i=1}^{k} (d_{1i} + d_{2i}) \mathbf{a}_i = \sum_{i=1}^{k} d_{1i} \mathbf{a}_i + \sum_{i=1}^{k} d_{2i} \mathbf{a}_i = c_1 + c_2
Error Correcting Power of LBC:
G = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{bmatrix}

a_1 = [1 0 1 1]
a_2 = [0 1 0 1]

• For d = [1 1], then

c = a_1 + a_2 = [1 0 1 1] + [0 1 0 1] = [1 1 1 0]
Linear Block Codes – example 2:
G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}
Systematic Codes:
• Another possibility is algebraic decoding, i.e., the error flag
is computed from the received codeword (as in the case of
simple parity codes)
• How can this method be extended to more complex error
detection and correction codes?
Parity Check Matrix:
This is so since

c = \sum_{i=1}^{k} d_i \mathbf{a}_i

and so

\mathbf{b}_j \cdot c = \mathbf{b}_j \cdot \sum_{i=1}^{k} d_i \mathbf{a}_i = \sum_{i=1}^{k} d_i (\mathbf{a}_i \cdot \mathbf{b}_j) = 0
• In this example the H matrix has only one row, namely b1.
This vector is orthogonal to the plane containing the rows of
the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing
a1 and a2 (i.e., an invalid codeword) will thus have a
component in the direction of b1, yielding a non-zero dot product between itself and b1.
Error Syndrome:
• For systematic linear block codes, H is constructed as follows:

G = [I \mid P] \quad \text{and so} \quad H = [-P^T \mid I]

where I is the k \times k identity for G and the R \times R identity for H.

• Example, (7,4) code, d_{\min} = 3:

G = [I \mid P] = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}

H = [-P^T \mid I] = \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}

For the codeword c = [1 1 0 1 0 0 1],

s = c H^T = [1\ 1\ 0\ 1\ 0\ 0\ 1] \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = [0\ 0\ 0]
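A sketch of the syndrome computation s = rH^T for the (7,4) code above: a valid codeword gives s = 0, and a single bit error produces the corresponding column of H, which locates the error.

```python
# Parity-check matrix H = [-P^T | I] for the systematic (7,4) code above
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(r):
    # s = r H^T (mod 2)
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

c = [1, 1, 0, 1, 0, 0, 1]           # the codeword computed in the text
print(syndrome(c))                  # a valid codeword gives (0, 0, 0)

r = c[:]
r[2] ^= 1                           # flip the third bit (index 2)
s = syndrome(r)
# The syndrome equals the third column of H, locating the flipped bit
print(s, [row[2] for row in H])
```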
Standard Array:
c1 (all zero) c2 …… cM s0
e1 c2+e1 …… cM+e1 s1
e2 c2+e2 …… cM+e2 s2
e3 c2+e3 …… cM+e3 s3
… …… …… …… …
eN c2+eN …… cM+eN sN
Hamming Codes:
• The original form is non-systematic:

G = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix} \qquad H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}
• The constraint length C = n(L+1) is defined as the number of encoded bits that a single message bit can influence.
x'_j = m_{j-3} \oplus m_{j-2} \oplus m_j
x''_j = m_{j-3} \oplus m_{j-1} \oplus m_j
x'''_j = m_{j-2} \oplus m_j

Here each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits.
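A direct transcription of the three output equations above into a rate-1/3 encoder, assuming a zero-initialized shift register:

```python
def conv_encode(msg):
    # Rate-1/3 convolutional encoder implementing the tap equations above;
    # m[j-3], m[j-2], m[j-1] are zero-padded (zero initial state assumed).
    m = [0, 0, 0] + list(msg)
    out = []
    for j in range(3, len(m)):
        x1 = m[j - 3] ^ m[j - 2] ^ m[j]   # x'_j
        x2 = m[j - 3] ^ m[j - 1] ^ m[j]   # x''_j
        x3 = m[j - 2] ^ m[j]              # x'''_j
        out += [x1, x2, x3]
    return out

print(conv_encode([1, 0, 1, 1]))   # three output bits per message bit
```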
Convolution point of view in encoding and generator matrix:
g^{(1)} = [1\ 1\ 1\ 1]
g^{(2)} = [1\ 0\ 1\ 1]
Representing convolutional codes: Code tree:
Representing convolutional codes compactly: code trellis and
state diagram:
State diagram
• Assuming the encoder starts in the all-zero state, the encoded word for any input of k bits can be obtained. For instance, for u = (1 1 1 0 1), the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced:
THE VITERBI ALGORITHM:
The log-likelihood of path m decomposes into a sum of per-branch metrics:

\ln p(\mathbf{y} \mid \mathbf{x}_m) = \sum_{j=0} \ln p(y_j \mid x_{mj})
THE SURVIVOR PATH:
• For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered.
• Note that in principle the whole transmitted sequence must be received before a decision is made. In practice, however, storing states over an input length of 5L is quite adequate.
The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4; the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path.
(Black circles denote the deleted branches; dashed lines indicate that '1' was applied.)
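A hard-decision Viterbi decoder with Hamming branch metrics and survivor paths, as described above. For a self-contained sketch it uses the standard rate-1/2, constraint-length-3 code with octal generators (7, 5), which is an assumption for illustration, not necessarily the exact encoder behind the notes' trellis figures:

```python
# Hard-decision Viterbi decoding for a rate-1/2, K = 3 encoder, generators (7, 5) octal
G1, G2 = 0b111, 0b101

def encode(bits, state=0):
    out = []
    for b in bits:
        reg = (b << 2) | state      # [current bit, previous bit, bit before that]
        out += [bin(reg & G1).count("1") % 2, bin(reg & G2).count("1") % 2]
        state = reg >> 1
    return out

def viterbi(rx):
    n_states = 4
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                o = [bin(reg & G1).count("1") % 2, bin(reg & G2).count("1") % 2]
                d = (o[0] != rx[i]) + (o[1] != rx[i + 1])   # Hamming branch metric
                ns = reg >> 1
                if metric[s] + d < new_metric[ns]:          # keep only the survivor
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 1, 1, 0, 1, 0, 0]     # message including two tail bits (illustrative)
rx = encode(msg)
rx[3] ^= 1                      # introduce a single channel error
print(viterbi(rx))              # the error is corrected
```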
– The figure on the right shows a common point at a memory depth J
– J is a random variable whose applicable magnitude, shown in the figure as 5L, has been experimentally tested for negligible error-rate increase
– Note that this also introduces a delay of 5L!
• Hamming (7,4) code
• Generator matrix G: the first 4-by-4 block is the identity matrix
• Message information vector p
• Transmission vector x
• Received vector r
and error vector e
• Parity check matrix H
Error Correction:
Example of CRC:
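CRC encoding is long division over GF(2): append r zeros to the dataword, keep the remainder as the check field, and let the receiver re-divide the whole received word. The generator x^3 + x + 1 and the dataword below are illustrative choices, not from the notes:

```python
def gf2_mod(bits, gen):
    # Long division over GF(2): XOR the generator down the register
    bits = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return bits[-(len(gen) - 1):]

GEN = [1, 0, 1, 1]                  # generator x^3 + x + 1 (illustrative)
data = [1, 1, 0, 1, 0, 1]

# Transmitter: append r = 3 zeros, keep the remainder as the CRC field
crc = gf2_mod(data + [0, 0, 0], GEN)
codeword = data + crc

# Receiver: remainder of the received word is all-zero iff no error is detected
print(crc, gf2_mod(codeword, GEN))
```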
Example: Using generator matrix:
g^{(1)} = [1\ 1\ 1\ 1]
g^{(2)} = [1\ 0\ 1\ 1]
correct: 1+1+2+2+2 = 8;  8 × (0.11) = 0.88
false: 1+1+0+0+0 = 2;  2 × (2.30) = 4.6
total path metric: 0.88 + 4.6 = 5.48
Turbo Codes:
• Background
– Turbo codes were proposed by Berrou and Glavieux in
the 1993 International Conference in Communications.
– Performance within 0.5 dB of the channel capacity limit
for BPSK was demonstrated.
• Features of turbo codes
– Parallel concatenated coding
– Recursive convolutional encoders
– Pseudo-random interleaving
– Iterative decoding
Motivation: Performance of Turbo Codes
• Comparison:
– Rate 1/2 Codes.
– K=5 turbo code.
– K=14 convolutional code.
• Plot is from:
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by
C. Schlegel. IEEE Press, 1997
Pseudo-random Interleaving:
• Solution:
– Make the code appear random, while maintaining
enough structure to permit decoding.
– This is the purpose of the pseudo-random interleaver.
– Turbo codes possess random-like properties.
– However, since the interleaving pattern is known,
decoding is possible.
• In coded systems:
– Performance is dominated by low weight code words.
• A “good” code:
– will produce low weight outputs with very low
probability.
• An RSC code:
– Produces low weight outputs with fairly low
probability.
– However, some inputs still cause low weight outputs.
• Because of the interleaver:
– The probability that both encoders have inputs that
cause low weight outputs is very low.
– Therefore the parallel concatenation of both encoders
will produce a “good” code.
Iterative Decoding:
• There is one decoder for each elementary encoder.
• Each decoder estimates the a posteriori probability (APP) of
each data bit.
• The APPs are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.