EC6501
DIGITAL COMMUNICATION
OBJECTIVES:
• To know the principles of sampling & quantization
• To study the various waveform coding schemes
• To learn the various baseband transmission schemes
• To understand the various bandpass signaling schemes
• To know the fundamentals of channel coding
Get useful study materials from www.rejinpaul.com
SYLLABUS
UNIT I SAMPLING & QUANTIZATION (9)
Low pass sampling – Aliasing – Signal Reconstruction – Quantization – Uniform & non-uniform quantization – Quantization noise – Logarithmic companding of speech signal – PCM – TDM
TOTAL: 45 PERIODS
OUTCOMES
Upon completion of the course, students will be able to
• Design PCM systems
• Design and implement baseband transmission schemes
• Design and implement bandpass signaling schemes
• Analyze the spectral characteristics of bandpass signaling schemes and their noise performance
• Design error control coding schemes
UNIT - 1
INTRODUCTION
UNIT I
SAMPLING & QUANTIZATION (9)
Low pass sampling
Aliasing
Signal Reconstruction
Quantization
Uniform & non-uniform quantization
Quantization Noise
Logarithmic Companding of speech signal
PCM
TDM
Block diagram of a digital communication system:
Transmitter: analog input signal → low-pass filter → sampler → quantizer → source encoder → channel encoder → multiplexer → line encoder → pulse-shaping filters → carrier modulator → to channel.
Receiver: from channel → receiver filter → demodulator (with carrier reference signal) → detector → demultiplexer → channel decoder → digital-to-analog converter → message signal at the user end.
Nyquist Theorem
For lossless digitization, the sampling rate must be at least twice the maximum frequency component of the signal to be sampled.
In mathematical terms:
fs ≥ 2fm
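A quick numerical check of the theorem (the frequencies are our own toy choices): a 7 Hz tone sampled at 10 Hz violates fs ≥ 2fm and produces exactly the same samples, up to sign, as a 3 Hz tone.

```python
import numpy as np

fs = 10.0          # sampling rate, Hz (illustrative choice)
fm = 3.0           # in-band tone: fs >= 2*fm holds
f_alias = fs - fm  # 7 Hz tone violates fs >= 2*fm

n = np.arange(16)                       # sample indices
x = np.sin(2 * np.pi * fm * n / fs)     # samples of the 3 Hz tone
y = np.sin(2 * np.pi * f_alias * n / fs)  # samples of the 7 Hz tone

# sin(2*pi*(fs-fm)*n/fs) = sin(2*pi*n - 2*pi*fm*n/fs) = -sin(2*pi*fm*n/fs),
# so the 7 Hz tone is indistinguishable from a (sign-flipped) 3 Hz tone:
print(np.allclose(y, -x))   # True: 7 Hz folds onto 3 Hz
```

This is exactly the aliasing effect discussed next: frequencies above fs/2 fold back into the band below fs/2.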
Limited Sampling
But what if one cannot sample fast enough? Band-limit the signal first: an anti-aliasing low-pass (LP) filter reduces the signal bandwidth to at most half of the sampling frequency, so that the Nyquist rate is satisfied and the aliasing effect is avoided.
The quantizer transfer characteristic is v = g(m)   (18) (Fig. 10)
Linear Quantization
• Applicable when the signal lies in a finite range (fmin, fmax)
• The entire data range is divided into L equal intervals of length Q (known as the quantization interval or quantization step-size): Q = (fmax − fmin)/L
• Interval i is mapped to the middle value of that interval
• We store/send only the index of the quantized value
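A minimal sketch of this mid-point uniform quantizer (the function name and the example range and level count are our own choices):

```python
def uniform_quantize(f, fmin, fmax, L):
    """Map sample f to its interval index and the interval's middle value."""
    Q = (fmax - fmin) / L                 # quantization step size
    i = int((f - fmin) / Q)               # interval index
    i = min(max(i, 0), L - 1)             # clamp out-of-range samples
    mid = fmin + (i + 0.5) * Q            # middle value of interval i
    return i, mid

# Example: range (-1, 1) split into L = 8 intervals, so Q = 0.25.
idx, value = uniform_quantize(0.30, -1.0, 1.0, 8)
print(idx, value)   # 5 0.375
```

Only the index (here 5) needs to be transmitted; the quantization error is at most Q/2 = 0.125 (here |0.375 − 0.30| = 0.075).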
Quantization Noise
(Figure: staircase quantizer transfer characteristic — output sample XQ versus input sample X, with output levels ±2, ±4, ±6 for inputs between −8 and 8.)
Non-Linear Quantization
• The quantizing intervals are not of equal size
• Small quantizing intervals are allocated to small signal values (samples) and large quantization intervals to large samples, so that the signal-to-quantization distortion ratio is nearly independent of the signal level
• S/N ratios for weak signals are much better, but slightly worse for the stronger signals
• "Companding" is used to quantize signals this way
Companding
• Formed from the words compressing and expanding.
• A PCM compression technique in which analogue signal values are rounded on a non-linear scale.
• The data is compressed before being sent and then expanded at the receiving end using the same non-linear scale.
• Companding reduces the noise and crosstalk levels at the receiver.
• μ-law:
v = log(1 + μ|m|) / log(1 + μ)   (5.23)
d|m|/dv = [log(1 + μ)/μ] (1 + μ|m|)   (5.24)
• μ-law is neither strictly linear nor strictly logarithmic
• A-law:
v = A|m| / (1 + log A),  for 0 ≤ |m| ≤ 1/A
v = (1 + log(A|m|)) / (1 + log A),  for 1/A ≤ |m| ≤ 1   (5.25)
d|m|/dv = (1 + log A)/A,  for 0 ≤ |m| ≤ 1/A
d|m|/dv = (1 + log A)|m|,  for 1/A ≤ |m| ≤ 1   (5.26)   (Fig. 5.11)
(Fig. 11: companding of a speech/song signal — x[n], the companded signal y[n] = C(x[n]), and close views of matching segments of x[n] and y[n] over samples 2200–3000.)
Quantization
More About Non-Uniform Quantizers (Companding)
Non-uniform quantizer = use more levels where you need them.
The human ear follows a logarithmic process in which high-amplitude sounds don't require the same resolution as low-amplitude sounds.
One way to achieve non-uniform quantization is to use what is called "companding".
Companding = "Compression + Expanding"
(Figure: compressor function → uniform quantization → expander function C^(−1).)
Pulse-Code Modulation
3. Encoding
1. To translate the discrete set of sample values into a more appropriate form of signal (Fig. 11)
2. A binary code
The maximum advantage over the effects of noise in a transmission medium is obtained by using a binary code, because a binary symbol withstands a relatively high level of noise.
The binary code is easy to generate and regenerate (Table 2).
2. Reconstruction
1. Recover the message signal by passing the expander output through a low-pass reconstruction filter.
Categories of multiplexing: Time Division Multiplexing (TDM)
• Advantages:
– Only one carrier in the medium at any given time
– High throughput even for many users
– Common TX component design, only one power amplifier
– Flexible allocation of resources (multiple time slots).
Time Division Multiplexing
• Disadvantages
– Synchronization
– Requires the terminal to support a much higher data rate than the user information rate, hence possible problems with intersymbol interference.
• Application: GSM
GSM handsets transmit data at a rate of 270 kbit/s in
a 200 kHz channel using GMSK modulation.
Each frequency channel is assigned 8 users, each
having a basic data rate of around 13 kbit/s
Time Division Multiplexing
At the Transmitter
Simultaneous transmission of several signals on a time-sharing basis.
Each signal occupies its own distinct time slot, using all frequencies, for
the duration of the transmission.
Slots may be permanently assigned or assigned on demand.
At the Receiver
Decommutator (sampler) has to be synchronized with the incoming waveform (frame synchronization)
• Low-pass filter
• ISI – poor channel filtering
• Feedthrough of one channel's signal into another channel – crosstalk
TDM-PAM: Transmitter
TDM-PAM : Receiver
(Figures: samples of signal 1, g1(t), and samples of signal 2, g2(t), interleaved at 0, Ts, 2Ts, …; the commutator cycles through channels 1, 2, 3, 4 along the time axis.)
Problem
Two low-pass signals of equal bandwidth are sampled and time-division multiplexed using PAM. The TDM signal is passed through a low-pass filter and then transmitted over a channel with a bandwidth of 10 kHz.
End of Unit-1
Unit – II
Waveform Coding
Syllabus
Prediction Filtering
• Linear prediction is a mathematical operation
where future values of a discrete-time signal
are estimated as a linear function of previous
samples.
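This can be sketched numerically (our own toy setup, plain least squares rather than any particular named algorithm): fit an order-2 predictor to a sinusoid, which exactly obeys the recurrence x[n] = 2 cos(ω) x[n−1] − x[n−2].

```python
import numpy as np

def fit_predictor(x, p):
    """Fit coefficients a so that x[n] ~ a[0]*x[n-1] + ... + a[p-1]*x[n-p]."""
    N = len(x)
    # Row n of A holds the p previous samples x[n-1], ..., x[n-p].
    A = np.column_stack([x[p - 1 - j : N - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return a

w = 0.1
x = np.cos(w * np.arange(200))
a = fit_predictor(x, p=2)
print(np.allclose(a, [2 * np.cos(w), -1.0]))   # True: recurrence recovered
```

In DPCM (next) this predictor output is subtracted from the input, and only the small prediction error is quantized and transmitted.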
Principle of DPCM
Delta Modulation
• Delta modulation (DM or Δ-modulation) is an analog-to-digital and digital-to-analog signal conversion technique used for transmission of voice information where quality is not of primary importance.
Features
• the analog signal is approximated with a series of
segments
• each segment of the approximated signal is compared to
the original analog wave to determine the increase or
decrease in relative amplitude
• the decision process for establishing the state of
successive bits is determined by this comparison
• only the change of information is sent, that is, only an
increase or decrease of the signal amplitude from the
previous sample is sent whereas a no-change condition
causes the modulated signal to remain at the same 0 or 1
state of the previous sample.
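The bullet points above can be sketched directly (step size and signal are arbitrary illustrative choices): each transmitted bit only says whether the staircase approximation steps up or down.

```python
import numpy as np

def dm_encode(x, step):
    """One bit per sample: 1 = step up, 0 = step down."""
    bits, approx = [], 0.0
    for sample in x:
        bit = 1 if sample > approx else 0   # compare input to the staircase
        approx += step if bit else -step    # move the staircase one step
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    """Rebuild the staircase approximation from the bit stream."""
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return np.array(out)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * t)
bits = dm_encode(x, step=0.05)
y = dm_decode(bits, step=0.05)
print(np.max(np.abs(x - y)) < 0.2)   # True: the staircase tracks the slow sine
```

The step size trades off granular noise (step too large) against slope overload (step too small to follow a fast-changing input).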
VCA
• 4. i = 1, k1 = r(1)
• 5. For i − p ≤ j ≤ p:
qi(j) = qi−1(j) + ki · qi−1(i − j)
ki = qi−1(j) / qi(0)
aj(i) = qi−1(i − j)
E(i) = E(i−1)(1 − ki²)
• 6. If i < p, go back to step 5
• 7. Stop
• If we only need to calculate ki, then only the first two expressions in step 5 are enough. This makes the recursion suitable for fixed-point calculation (r ≤ 1) or hardware implementation.
Thank you
Unit 3
Baseband Transmission
Syllabus
Properties of Line codes – Power Spectral Density of Unipolar / Polar RZ & NRZ – Bipolar NRZ – Manchester – ISI – Nyquist criterion for distortionless transmission – Pulse shaping – Correlative coding – M-ary schemes – Eye pattern – Equalization
Baseband Transmission
• The digital signal used in baseband
transmission occupies the entire bandwidth
of the network media to transmit a single data
signal.
• Baseband communication is bidirectional,
allowing computers to both send and receive
data using a single cable.
Baseband Modulation
• An information-bearing signal must conform to the limits of its channel
• Generally modulation is a two-step process
– baseband: shaping the spectrum of input bits to fit in a limited spectrum
– passband: modulating the baseband signal onto the system RF carrier
• The most common baseband modulation is Pulse Amplitude Modulation (PAM)
– data amplitude-modulates a sequence of time translates of a basic pulse
– PAM is a linear form of modulation: easy to equalize; BW is the pulse BW
– typically baseband data will modulate in-phase [cosine] and quadrature [sine] data streams to the carrier passband
• Special cases of modulated PAM include
– phase shift keying (PSK)
– quadrature amplitude modulation (QAM)
Line Codes
• In telecommunication, a line code (also called digital baseband
modulation or digital baseband transmission method) is a code
chosen for use within a communications system for baseband
transmission purposes.
Unipolar coding
• Unipolar encoding is a line code. A positive voltage represents a binary 1, and zero volts indicates a binary 0. It is the simplest line code, directly encoding the bitstream, and is analogous to on-off keying in modulation.
• This is ideal if one symbol is sent much more often than the other and power considerations are necessary, and it also makes the signal self-clocking.
• It is called NRZ because the signal does not return to zero at the middle of the bit.
• Compared with its polar counterpart, unipolar NRZ is expensive in power terms: the normalized power (power required to send 1 bit per unit line resistance) is double that for polar NRZ.
Return-to-zero
• Return-to-zero (RZ) describes a line code used in
telecommunications signals in which the signal
drops (returns) to zero between each pulse.
• This takes place even if a number of consecutive
0s or 1s occur in the signal.
• The signal is self-clocking. This means that a
separate clock does not need to be sent
alongside the signal, but suffers from using twice
the bandwidth to achieve the same data-rate as
compared to non-return-to-zero format.
Polar RZ
BiPolar Signalling
Manchester Encoding
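The line codes above can be sketched by mapping each bit to two half-bit "chips", so that the mid-bit transitions of RZ and Manchester are representable (the chip convention, and the Manchester polarity 1 → [+1, −1], are our own assumptions; some texts use the opposite polarity):

```python
import numpy as np

def encode(bits, scheme):
    """Return the waveform as two amplitude chips per bit."""
    out = []
    for b in bits:
        if scheme == "unipolar_nrz":
            out += [b, b]                    # 1 -> +1, 0 -> 0, no mid-bit return
        elif scheme == "polar_nrz":
            out += [2 * b - 1, 2 * b - 1]    # 1 -> +1, 0 -> -1
        elif scheme == "polar_rz":
            out += [2 * b - 1, 0]            # returns to zero mid-bit
        elif scheme == "manchester":
            out += [1, -1] if b else [-1, 1]  # guaranteed mid-bit transition
    return np.array(out)

bits = [1, 0, 1, 1]
print(encode(bits, "manchester"))   # [ 1 -1 -1  1  1 -1  1 -1]
```

Note how Manchester places a transition in every bit interval (self-clocking), at the cost of doubled bandwidth, while unipolar NRZ can sit at a constant level for long runs.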
Disadvantages:
(a) an ideal LPF is not physically realizable.
(b) Note that
P_r(f) = Re[Σ_n (−1)^n P(f − n/T)] = T cos(π f T/2)
P_I(f) = Im[Σ_n (−1)^n P(f − n/T)] = 0
P(ω) = (ωT/2)/sin(ωT/2) for |ω| ≤ π/T, and 0 for |ω| > π/T
p(t) = (1/2π) ∫ from −π/T to π/T of [(ωT/2)/sin(ωT/2)] e^{jωt} dω
(2/T) ∫ from (2n−1)T/2 to (2n+1)T/2 of p(t) dt = 1 for n = 0, and 0 for n ≠ 0
Eye Diagram
• An eye diagram is a means of evaluating the quality of a received digital waveform
– By quality is meant the ability to correctly recover symbols and timing
– The received signal could be examined at the input to a digital receiver
or at some stage within the receiver before the decision stage
• Eye diagrams reveal the impact of ISI and noise
• Two major issues are 1) sample value variation, and 2) jitter
and sensitivity of sampling instant
• Eye diagram reveals issues of both
• Eye diagram can also give an estimate of achievable BER
• Check eye diagrams at the end of class for participation
(Figures: eye diagrams for 1st-Nyquist and 2nd-Nyquist pulse shaping.)
Thank you
UNIT IV
Geometric Representation of
Signals
• Objective: To represent any set of M energy
signals {si(t)} as linear combinations of N
orthogonal basis functions, where N ≤ M
• Real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T sec, are expanded over the orthogonal basis functions φj(t):
s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.5)
where the s_ij are the coefficients of the expansion.
• Coefficients:
s_ij = ∫_0^T s_i(t) φ_j(t) dt,  i = 1, 2, …, M;  j = 1, 2, …, N   (5.6)
• Real-valued basis functions are orthonormal:
∫_0^T φ_i(t) φ_j(t) dt = δ_ij = 1 if i = j, 0 if i ≠ j   (5.7)
(a) Synthesizer for generating the signal si(t). (b) Analyzer for
generating the set of signal vectors si.
So,
• Each signal in the set si(t) is completely determined by the vector of its coefficients:
s_i = [s_i1, s_i2, …, s_iN]^T,  i = 1, 2, …, M   (5.8)
Finally,
• The signal vector si concept can be extended to 2D, 3D
etc. N-dimensional Euclidian space
• Provides mathematical basis for the geometric
representation of energy signals that is used in noise
analysis
• Allows definition of
– Length of vectors (absolute value)
– Angles between vectors
– Squared value (inner product of si with itself)
||s_i||² = s_i^T s_i   (matrix transposition)
= Σ_{j=1}^{N} s_ij²,  i = 1, 2, …, M   (5.9)
Also,
What is the relation between the vector
representation of a signal and its energy value?
• Where s_i(t) is given by (5.5): s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t)
• After substitution: E_i = ∫_0^T [Σ_{j=1}^{N} s_ij φ_j(t)] [Σ_{k=1}^{N} s_ik φ_k(t)] dt
• After regrouping: E_i = Σ_{j=1}^{N} Σ_{k=1}^{N} s_ij s_ik ∫_0^T φ_j(t) φ_k(t) dt   (5.11)
• The φ_j(t) are orthonormal, so finally we have:
E_i = Σ_{j=1}^{N} s_ij² = ||s_i||²   (5.12)
The energy of a signal is equal to the squared length of its vector.
Euclidean Distance
• The Euclidean distance between two points represented by signal vectors is ||s_i − s_k||, and its squared value is given by:
||s_i − s_k||² = Σ_{j=1}^{N} (s_ij − s_kj)²   (5.14)
= ∫_0^T (s_i(t) − s_k(t))² dt
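Equations (5.6) and (5.12) can be checked numerically. Below, two rectangular orthonormal basis functions on [0, T] are our own toy choice; the signal is built with known coefficients (3, −2), and the integrals are approximated by Riemann sums.

```python
import numpy as np

T, M = 1.0, 1000
t = (np.arange(M) + 0.5) * T / M
dt = T / M
phi1 = np.where(t < T / 2, np.sqrt(2 / T), 0.0)   # unit energy on [0, T/2)
phi2 = np.where(t >= T / 2, np.sqrt(2 / T), 0.0)  # unit energy on [T/2, T]

s = 3 * phi1 - 2 * phi2            # signal whose vector should be (3, -2)
s1 = np.sum(s * phi1) * dt         # coefficient via Eq. (5.6)
s2 = np.sum(s * phi2) * dt
E = np.sum(s ** 2) * dt            # signal energy

print(round(s1), round(s2), round(E))   # 3 -2 13
```

As Eq. (5.12) predicts, E = 3² + (−2)² = 13: the energy equals the squared length of the signal vector.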
(Figure: BFSK modulator — the data switches between two oscillators, cos ω1t and cos ω2t.)
s_BFSK(t) = √(2Eb/Tb) cos(2π f_H t + θ1),  0 ≤ t ≤ Tb
s_BFSK(t) = √(2Eb/Tb) cos(2π f_c t + θ(t))
= √(2Eb/Tb) cos(2π f_c t + 2π k_FSK ∫ m(τ) dτ)
where θ(t) = 2π k_FSK ∫ m(τ) dτ (integral taken up to time t)
FSK Example
(Figure: data sequence 1 1 0 1 and the corresponding FSK signal; a VCO driven by the binary levels a0 → 0, a1 → 1 produces the modulated composite signal from the carrier cos ωct.)
BT = 2Δf + 2B
BT = 2(Δf + Rb)
∫_0^Tb v_H(t) v_L(t) dt = 0 ?
• interference between v_H(t) and v_L(t) will average to 0 during demodulation and integration of the received symbol
then v_H(t) v_L(t) = (2Eb/Tb) cos(2π(f_c + Δf)t) cos(2π(f_c − Δf)t)
= (Eb/Tb) [cos(2π(2f_c)t) + cos(2π(2Δf)t)]
QPSK
• Quadrature Phase Shift Keying (QPSK)
can be interpreted as two independent
BPSK systems (one on the I-channel
and one on Q-channel), and thus the
same performance but twice the
bandwidth (spectrum) efficiency.
Types of QPSK
(Constellation diagrams on the I/Q plane: carrier phases {0, π/2, π, 3π/2} or {π/4, 3π/4, 5π/4, 7π/4}.)
QPSK
• The striking result is that the bit error probability of QPSK is identical to
BPSK, but twice as much data can be sent in the same bandwidth. Thus,
when compared to BPSK, QPSK provides twice the spectral efficiency with
exactly the same energy efficiency.
• Similar to BPSK, QPSK can also be differentially encoded to allow non-coherent detection.
QAM Transmitter
QAM Receiver
Carrier Synchronization
• Synchronization is one of the most critical
functions of a communication system with
coherent receiver. To some extent, it is the
basis of a synchronous communication
system.
• Carrier synchronization
• Symbol/Bit synchronization
• Frame synchronization
(Figure: a π/2 phase shift of the carrier cos(ωct) yields −a sin(ωct).)
DPSK
• DPSK is a kind of phase shift keying which
avoids the need for a coherent reference
signal at the receiver.
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
Thank you
Unit - 5
ENTROPY
• The entropy of a source is defined as the average information produced per individual message or symbol in a particular interval.
• Then the number of messages is given as
• Thus the total amount of information due to L
PROPERTIES OF ENTROPY
• The entropy of a discrete memoryless channel
source
• Property 1
1. Entropy is zero if the event is sure or impossible
RATE OF INFORMATION
SOURCE CODING
• An important problem in communication system is the
efficient representation of data generated by a source, which
can be achieved by source encoding (or) source coding
process
• The device which performs source encoding is called source
encoder
STATEMENT
• Shannon's first theorem is stated as: Given a discrete memoryless source of entropy H, the average codeword length L for any distortionless source encoding is bounded as
L ≥ H
• According to the source coding theorem, the entropy H represents the fundamental limit on the average number of bits per source symbol necessary to represent a discrete memoryless source
• L can be made as small as, but not smaller than, the entropy H; thus, with Lmin = H, the coding efficiency η is
η = H/L
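The bound L ≥ H and the efficiency η = H/L can be illustrated numerically. The symbol probabilities and codeword lengths below are toy choices of ours; because the probabilities are powers of 1/2, a matched prefix code reaches η = 1.

```python
import math

probs   = [0.5, 0.25, 0.125, 0.125]   # dyadic source (assumed example)
lengths = [1, 2, 3, 3]                # a prefix code matched to the probabilities

H = -sum(p * math.log2(p) for p in probs)            # entropy, bits/symbol
L_bar = sum(p * l for p, l in zip(probs, lengths))   # average codeword length
eta = H / L_bar                                      # coding efficiency

print(H, L_bar, eta)   # 1.75 1.75 1.0
```

For non-dyadic probabilities no code achieves L = H exactly, and η falls below 1, consistent with the theorem.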
• Approach
The code rate is given as
STATEMENT
Two parts
Signal power
Noise power
MUTUAL INFORMATION
CHANNEL CAPACITY
CHANNEL CAPACITY
• What is channel capacity?
• Redundancy
Error Control
Coding
Introduction
• Error Control Coding (ECC)
– Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or
correction at the receiver
– Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
– The positions of the error control coding and
decoding are shown in the transmission model
Transmission Model
Transmitter: digital source → source encoder → error control coding → line coding → modulator (transmit filter, etc.) → X(ω) → channel Hc(ω); noise N(ω) is added in the channel.
Receiver: Y(ω) → demodulator (receive filter, etc.) → line decoding → error control decoding → source decoder → digital sink.
Error Models
• Many other types
• Burst errors, i.e., contiguous bursts of bit
errors
– output from DFE (error propagation)
– common in radio channels
– Insertion, deletion and transposition errors
• We will consider mainly random errors
Block Codes
• We will consider only binary data
• Data is grouped into blocks of length k bits
(dataword)
• Each dataword is coded into blocks of length n
bits (codeword), where in general n>k
• This is known as an (n,k) block code
Block Codes
• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)
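A sketch of a systematic (7,4) Hamming encoder. The three parity equations below are one common textbook convention, not necessarily the one behind the slide's example codeword, so the parity bits here may differ from (1101001); the rate and minimum distance are the same either way.

```python
from itertools import product

def encode(d):
    """Systematic (7,4) Hamming code: dataword followed by 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2          # assumed parity equations
    p2 = (d1 + d3 + d4) % 2
    p3 = (d2 + d3 + d4) % 2
    return [d1, d2, d3, d4, p1, p2, p3]

print(encode([1, 1, 0, 1]))          # [1, 1, 0, 1, 1, 0, 0]

# Code rate k/n = 4/7; the minimum Hamming distance over all pairs is 3:
codewords = [encode(list(d)) for d in product([0, 1], repeat=4)]
dmin = min(sum(a != b for a, b in zip(c1, c2))
           for i, c1 in enumerate(codewords) for c2 in codewords[:i])
print(dmin)                          # 3
```

A minimum distance of 3 is what gives this code its single-error-correcting capability, as discussed in the Hamming-distance slides that follow.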
Dataword (k bits) → channel coder → codeword (n bits) → channel → codeword + possible errors → channel decoder → dataword + error flags
Hamming Distance
• Error control capability is determined by
the Hamming distance
• The Hamming distance between two
codewords is equal to the number of
differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
• Alternatively, can compute by adding
codewords (mod 2)
=01001001 (now count up the ones)
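The slide's example, computed both ways (direct counting and the mod-2 addition shortcut):

```python
def hamming_distance(a, b):
    """Count the positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

c1 = "10011011"
c2 = "11010010"
# Mod-2 addition (XOR) of the two codewords:
xor = "".join(str(int(x) ^ int(y)) for x, y in zip(c1, c2))
print(xor)                        # 01001001
print(hamming_distance(c1, c2))   # 3, the weight (number of ones) of the XOR
```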
Hamming Distance
• The Hamming distance of a code is equal to
the minimum Hamming distance between
two codewords
• If Hamming distance is:
1 – no error control capability; i.e., a single error
in a received codeword yields another valid
codeword
XXXXXXX X is a valid codeword
Note that this representation is diagrammatic
only.
In reality each codeword is surrounded by n
codewords. That is, one for every bit that
could be changed
Hamming Distance
• If Hamming distance is:
2 – can detect single errors (SED); i.e., a single error
will yield an invalid codeword
XOXOXO X is a valid codeword
O is not a valid codeword
See that 2 errors will yield a valid (but
incorrect) codeword
Hamming Distance
• If Hamming distance is:
3 – can correct single errors (SEC) or can detect
double errors (DED)
XOOXOOX X is a valid codeword
O is not a valid codeword
See that 3 errors will yield a valid but incorrect
codeword
X is a valid codeword
O is an invalid codeword
• The maximum number of correctable errors is given by
t = ⌊(dmin − 1)/2⌋
where dmin is the minimum Hamming distance between two codewords and ⌊·⌋ means the largest integer not exceeding the argument.
Systematic Codes
• I is the k×k identity matrix. It ensures the dataword appears at the beginning of the codeword.
• P is a k×R matrix, where R = n − k is the number of parity bits.
CYCLIC CODES
Definition
• An (n,k) linear code C is cyclic if every cyclic
shift of a codeword in C is also a codeword in
C.
If c0 c1 c2 …. cn-2 cn-1 is a codeword, then
cn-1 c0 c1 …. cn-3 cn-2
cn-2 cn-1 c0 …. cn-4 cn-3
: : : : :
c1 c2 c3 …. cn-1 c0 are all codewords.
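The cyclic property can be checked exhaustively for a small code. Below we use the (7,4) code generated by g(X) = 1 + X + X³ (the generator polynomial used later in these notes); codewords are the multiples m(X)g(X) reduced modulo X⁷ + 1, and every cyclic shift must land back in the codeword set.

```python
from itertools import product

def poly_mul_gf2(a, b, n=7):
    """Multiply bit-vector polynomials over GF(2), reduced with X^n = 1."""
    c = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % n] ^= bj   # wrap-around implements mod (X^n + 1)
    return tuple(c)

g = [1, 1, 0, 1]                       # 1 + X + X^3, coefficients LSB first
codewords = {poly_mul_gf2(m, g) for m in product([0, 1], repeat=4)}

def cyclic_shift(c):
    return (c[-1],) + c[:-1]           # c_{n-1} c_0 c_1 ... c_{n-2}

print(len(codewords))                                        # 16
print(all(cyclic_shift(c) in codewords for c in codewords))  # True
```

The check succeeds because a cyclic shift is multiplication by X modulo X⁷ + 1, which maps multiples of g(X) to multiples of g(X).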
Example 3
• The (7,4) Hamming code discussed before is
cyclic:
Notice that
c(X) = Σ_{j=0}^{k−1} m_j X^j g(X),  where m_j = 0 if j < 0 or j > k − 1
Code Polynomial
• Let c = c0 c1 c2 …. cn-1. The code polynomial
of c: c(X) = c0 + c1X+ c2 X2 + …. + cn-1 Xn-1
where the power of X corresponds to the bit position,
and
the coefficients are 0’s and 1’s.
• Example:
1010001 ↔ 1 + X² + X⁶
0101110 ↔ X + X³ + X⁴ + X⁵
Each codeword is represented by a polynomial of degree less than or equal to n − 1: deg[c(X)] ≤ n − 1.
Example (addition is mod 2):
m(x) = m0 + m1x + m2x²
g(x) = g0 + g1x
m(x) + g(x) = (m0 + g0) + (m1 + g1)x + (m2 + 0)x²
• For the (7,4) code given in the Table, the nonzero code
polynomial of minimum degree is g(X) = 1 + X + X3
Generator Polynomial
• Since the code is cyclic: Xg(X), X²g(X), …, X^(n−r−1)g(X) are code polynomials in C. (Note that deg[X^(n−r−1)g(X)] = n − 1.)
Constructing g(X)
• The generator polynomial g(X) of an (n,k) cyclic
code is a factor of Xn+1.
X^k g(X) is a polynomial of degree n.
Dividing, X^k g(X)/(X^n + 1) has quotient 1 and remainder r(X): X^k g(X) = (X^n + 1) + r(X).
But r(X) = Rem[X^k g(X)/(X^n + 1)] = g^(k)(X) = a code polynomial = a(X)g(X).
Therefore, X^n + 1 = X^k g(X) + a(X)g(X) = {X^k + a(X)}g(X). Q.E.D.
(1)To construct a cyclic code of length n, find the
factors of the polynomial Xn+1.
(2)The factor (or product of factors) of degree n-k
serves as the generator polynomial of an (n,k)
cyclic code. Clearly, a cyclic code of length n does
not exist for every k.
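Step (1) can be checked for n = 7 by confirming that g(X) = 1 + X + X³ divides X⁷ + 1 over GF(2), using long division in which subtraction is XOR (the helper name is our own):

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are bit lists, LSB first."""
    r = dividend[:]
    dg = len(divisor) - 1
    for shift in range(len(r) - 1 - dg, -1, -1):
        if r[shift + dg]:                   # leading term present at this shift
            for i, b in enumerate(divisor):
                r[shift + i] ^= b           # subtract (XOR) divisor << shift
    return r

x7_plus_1 = [1, 0, 0, 0, 0, 0, 0, 1]        # 1 + X^7
g = [1, 1, 0, 1]                            # 1 + X + X^3
print(any(gf2_mod(x7_plus_1, g)))           # False: remainder is zero, so g | X^7 + 1
```

A non-factor such as 1 + X + X² leaves a nonzero remainder, so it cannot generate a cyclic code of length 7, illustrating the closing remark above.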
(Figures: shift-register circuits for multiplication and division by g(X) — feedback taps g0, g1, …, gr with mod-2 adders on the input and output paths.)
Encoder Circuit
(Figure: systematic cyclic encoder — a gate, feedback taps g1, g2, …, and mod-2 adders.)
Input: 1 1 0 1
Register contents: 000 (initial), 110 (1st shift), 101 (2nd shift), 100 (3rd shift), 100 (4th shift)
Codeword: 1 0 0 1 0 1 1
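The same codeword can be obtained arithmetically: systematic cyclic encoding computes the parity as the remainder of X^(n−k) m(X) divided by g(X), then prepends it to the message. With g(X) = 1 + X + X³ and the message bits fed high-order first, as in the shift-register example, this sketch reproduces the codeword 1001011 (the function name is our own):

```python
def cyclic_encode(msg_bits, g=(1, 1, 0, 1), n=7):
    """Systematic cyclic encoding; msg_bits are high-order first."""
    k = n - (len(g) - 1)
    # Coefficients of X^{n-k} m(X), LSB first, padded to length n:
    poly = [0] * (n - k) + list(reversed(msg_bits))
    r = poly[:]
    for shift in range(n - len(g), -1, -1):     # GF(2) long division
        if r[shift + len(g) - 1]:
            for i, b in enumerate(g):
                r[shift + i] ^= b
    parity = r[: n - k]                          # remainder = parity bits
    return parity + poly[n - k:]                 # codeword c0 c1 ... c6

print(cyclic_encode([1, 1, 0, 1]))   # [1, 0, 0, 1, 0, 1, 1]
```

The remainder here is 1 (i.e., parity 100), so the codeword is 100 followed by the message portion 1011, matching the register example above.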
Parity-Check Polynomial
• Xn +1 = g(X)h(X)
• deg[g(x)] = n-k, deg[h(x)] = k
• g(x)h(X) mod (Xn +1) = 0.
• h(X) is called the parity-check polynomial. It plays the role of the H matrix for linear codes.
• h(X) is the generator polynomial of an (n,n-k)
cyclic code, which is the dual of the (n,k) code
generated by g(X).
• STEPS:
(1) Syndrome computation
(2) Associating the syndrome to the error pattern
(3) Error correction
Syndrome Computation
(Figure: syndrome register with gate and mod-2 adders; the received word r = 0010110 is shifted in.)
Shift  Input  Register contents
0      –      000 (initial state)
1      0      000
2      1      100
3      1      110
4      0      011
5      1      011
6      0      111
7      0      101 (syndrome s)
Exercises: What is g(X)? Find the syndrome using long division. Find the syndrome using the shortcut for the remainder.
Association of Syndrome to Error
Pattern
• Look-up table implemented via a combinational logic circuit
(CLC). The complexity of the CLC tends to grow exponentially
with the code length and the number of errors to correct.
• Cyclic property helps in simplifying the decoding circuit.
• The circuit is designed to correct the error in a certain location
only, say the last location. The received word is shifted cyclically
to trap the error, if it exists, in the last location and then correct
it. The CLC is simplified since it is only required to yield a single
output e telling whether the syndrome, calculated after every
cyclic shift of r(X), corresponds to an error at the highest-order
position.
• The received digits are thus decoded one at a time.
Meggitt Decoder
Shift r(X) into the buffer B and the syndrome
register R simultaneously. Once r(X) is completely
shifted in B, R will contain s(X), the syndrome of
r(X).
1. Based on the contents of R, the detection circuit
yields the output e (0 or 1).
2. During the next clock cycle:
(a) Add e to the rightmost bit of B while shifting
the contents of B. (The rightmost bit of B may be
read out). Call the modified content of B r1(1)(X).
(cont’d)
(Figure: Meggitt decoder circuit — syndrome register with gates and mod-2 adders, receiving r = 0010110.)
Worked Example
Consider the (7,4) Hamming code generated by 1+X+X3.
• Theorem:
If g(X) has l roots (out of its n − k roots) that are consecutive powers of a, then the code it generates has a minimum distance d = l + 1.
• To design a cyclic code with a guaranteed minimum
distance of d, form g(X) to have d-1 consecutive roots. The
parameter d is called the designed minimum distance of
the code.
• Since roots occur in conjugates, the actual number of
consecutive roots, say l, may be greater than d-1. dmin = l +
1 is called the actual minimum distance of the code.
Design Example
X^15 + 1 has the roots 1 = a^0, a^1, …, a^14.
Conjugate group             Corresponding polynomial
(a^0)                       f1(X) = 1 + X
(a, a^2, a^4, a^8)          f2(X) = 1 + X + X^4
(a^3, a^6, a^9, a^12)       f3(X) = 1 + X + X^2 + X^3 + X^4
(a^5, a^10)                 f4(X) = 1 + X + X^2
(a^7, a^14, a^13, a^11)     f5(X) = 1 + X^3 + X^4
BCH Codes
• Definition of the codes:
• For any positive integers m (m>2) and t0 (t0 <
n/2), there is a BCH binary code of length n =
2m - 1 which corrects all combinations of t0 or
fewer errors and has no more than mt0 parity-
check bits.
Block length: n = 2^m − 1
Number of parity-check bits: n − k ≤ m·t0
Minimum distance: dmin ≥ 2t0 + 1

n      k      t0   g(X) (octal)
7      3      2    35 (try to find dmin!)
15     10     2    65
15     9      3    171
31     25     2    161
63     56     2    355
63     55     3    711
511    499    4    10451
1023   1010   4    22365
Basic Definitions
Generator Polynomial
• A convolutional code may be defined by a set
of n generating polynomials for each input bit.
• For the circuit under consideration:
g1(D) = 1 + D + D2
g2(D) = 1 + D2
• The set {gi(D)} defines the code completely. The length of the shift register is equal to the degree of the highest-degree generator polynomial.
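The encoder defined by these two polynomials can be sketched directly: each input bit produces one output bit per generator polynomial, computed as the mod-2 sum of the tapped register contents (the function name and the test input are our own choices).

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: g1(D) = 1 + D + D^2, g2(D) = 1 + D^2."""
    state = [0, 0]                       # shift register holds the 2 past bits
    out = []
    for b in bits:
        window = [b] + state             # taps: current bit, D, D^2
        out.append(sum(w & g for w, g in zip(window, g1)) % 2)
        out.append(sum(w & g for w, g in zip(window, g2)) % 2)
        state = [b, state[0]]            # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```

Feeding a single 1 followed by zeros returns the interleaved impulse response 11 10 11, i.e., the coefficients of g1(D) and g2(D) themselves.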
Decoding
(Figure: trellis diagram. Add the weight of the path at each state; compute the two possible paths entering each state and keep the one with the smaller cumulative Hamming weight — this is called the survival path. The decoded sequence in this example is m = [10 1110].)
Sequence 2:
code sequence: .. 00 11 10 11 00 ..
state sequence: a0 b c a1
Label: (D^2LN)(DL)(D^2L) = D^5 L^3 N
Properties: w = 5, d_inf = 1, diverges from the all-zero path by 3 branches.
Sequence 3:
code sequence: .. 00 11 01 01 00 10 11 00 ..
state sequence: a0 b d c b c a1
Label: (D^2LN)(DLN)(DL)(DL)(LN)(D^2L) = D^7 L^6 N^3
Properties: w = 7, d_inf = 3, diverges from the all-zero path by 6 branches.
Transfer Function
• Input-output relations:
a0 = 1
b = D^2 L N a0 + L N c
c = D L b + D L d
d = D L N b + D L N d
a1 = D^2 L c
• The transfer function is T(D, L, N) = a1/a0:
T(D, L, N) = D^5 L^3 N / (1 − D L N (1 + L))
Pr(y | c) = p^(d_H(y, c)) · (1 − p)^(m − d_H(y, c))
where p is the probability of bit error of the BSC from modulation and m is the length of the sequence.
Since p < 1/2, maximizing the likelihood over the codebook is equivalent to minimizing the Hamming distance:
max over c ∈ C of Pr(y | c)  ⇔  min over c ∈ C of d_H(y, c)
Choose the code sequence through the trellis which has the smallest Hamming distance to the received sequence!
What path through the trellis does the Viterbi algorithm choose?
SUMMARY
• Learnt the concepts of entropy and the source coding techniques
• Statements and theorems of Shannon
• Concepts of mutual information and channel capacity
• Understood error control coding techniques and the concepts of linear block codes, cyclic codes, convolutional codes and the Viterbi decoding algorithm
Thank you