
Advanced Data

Communication and
Networks

Chapter 3: Coding and Error Control


Dr Umar Sani Abdullahi
Umar.abdullahi@bazeuniversity.edu.ng
2022
Errors

• Errors in data are caused by the various impairments that occur during the process of transmission.
• An imperfect transmission medium or environment makes the original data prone to errors.
• The impairments can be classified as follows, as already discussed earlier:
• Attenuation, noise, and distortion
Types of Errors
• If the signal comprises binary data, two types of errors are possible during transmission: single-bit errors and burst errors.
Single-bit errors:
• In a single-bit error, a bit value of 0 changes to 1, or vice versa. Single-bit errors are more likely to occur in parallel transmission.

Burst errors:
• In a burst error, multiple bits change. A burst error can affect any two or more bits in a transmission; these bits need not be adjacent. Burst errors are more likely to occur in serial transmission.
Coding and Error Control

• Three approaches are in common use for coping with data transmission errors:
– Error detection codes: simply detect the presence of an error
– Error correction codes: also called forward error correction (FEC) codes; these not only detect errors but also correct them
– Automatic repeat request (ARQ) protocols: the receiver discards a block of data in which an error is detected and the transmitter retransmits that block of data
Outline

3.1 Error Detection

3.2 Block Error Correction Codes

3.3 Convolutional Code


Error Detection

 Basic Idea

 Parity Check

 Cyclic Redundancy Check


Error Detection

• Define these probabilities with respect to errors in


transmitted frames:
– Pb : Probability of single bit error (BER)
– P1 : Probability that a frame arrives with no bit errors
– P2 : When error detection is used, the probability that a frame arrives with one or more undetected errors. This is also known as the residual error rate: the probability that an error will go undetected despite the use of an error detection scheme.
– P3 : When error detection is used, the probability that a frame arrives with one or more detected bit errors but no undetected bit errors
Error Detection

• Assuming no error detection and Pb is constant


and independent for each bit:
P1 = (1 – Pb)^F
P2 = 1 – P1
P3 = 0
• Where F is the number of bits per frame.
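As a quick numerical check of these formulas, the following minimal Python sketch evaluates P1 and P2 for an assumed bit error rate and frame length (the values 10^-6 and 8000 bits are illustrative, not from the slides):

```python
# Frame probabilities with no error detection: P1 = (1 - Pb)^F, P2 = 1 - P1, P3 = 0.
Pb = 1e-6        # assumed bit error rate (BER)
F = 8000         # assumed frame length in bits

P1 = (1 - Pb) ** F    # probability that a frame arrives with no bit errors
P2 = 1 - P1           # probability that a frame arrives with at least one error
P3 = 0                # no detection scheme, so no errors are ever "detected"

print(f"P1 = {P1:.6f}")   # ~0.992032
print(f"P2 = {P2:.6f}")   # ~0.007968
```

Increasing either Pb or F drives P1 down, which is the point made on the next slide.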
Error Detection

• The probability that a frame arrives with no bit


errors decreases when the probability of a
single bit error increases
• The probability that a frame arrives with no bit
errors decreases with increasing frame length
Error Detection
• Error detection principles – transmitter:
– For a given frame of bits, additional bits that constitute an error-detecting code are added. These extra bits are redundant bits, which the receiver removes after receiving the data.
– This code is calculated as a function of the other transmitted
bits
– For a data block of k bits, the error detection algorithm yields
an error detection code of n-k bits, where (n-k)<k
– The error detection code (also check bits), is appended to the
data block to produce a frame of n bits which is then
transmitted
Error Detection

• Error detection principles –receiver:


– Separates the incoming frame into the k bits of
data and (n-k) bits of the error detection code.
– Performs the same error detection calculation on
the data bits and compares this value with the
value of the incoming error detection code.
– A detected error occurs if and only if there is a
mismatch
Error Detection

 Basic Idea

 Parity Check

 Cyclic Redundancy Check


Parity Check

• Parity check: the simplest error detection


scheme
– Append a parity bit to the end of a block of data
– A typical example is character transmission, in which a parity bit is attached to each 7-bit character.
– The value of this bit is selected so that the
character has an even number of 1s (even parity)
or an odd number of 1s (odd parity)
Parity Check

[Example] A transmitter sends 1110001 using odd parity: it appends a 1 and transmits 11100011, so that the number of 1s is odd.
The receiver examines the received character and, if the total number of 1s is odd, assumes that no error has occurred.
If one bit (or any odd number of bits) is erroneously inverted during transmission, the receiver will detect an error.
However, if two bits (or any even number of bits) are inverted due to error, an undetected error occurs.
Parity check is not foolproof, as noise impulses are often long enough to destroy more than one bit.
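A minimal Python sketch of this scheme, assuming odd parity as in the example (the function names are illustrative, not from the lecture):

```python
def add_parity(bits: str, odd: bool = True) -> str:
    """Append a parity bit so the total number of 1s is odd (or even)."""
    ones = bits.count("1")
    parity = (ones + 1) % 2 if odd else ones % 2
    return bits + str(parity)

def parity_ok(frame: str, odd: bool = True) -> bool:
    """Return True if the received frame has the expected parity (no error detected)."""
    ones = frame.count("1")
    return (ones % 2 == 1) if odd else (ones % 2 == 0)

frame = add_parity("1110001")     # '11100011', as in the example
print(frame, parity_ok(frame))    # no error detected
print(parity_ok("10100011"))      # False: a single-bit error is detected
print(parity_ok("00100011"))      # True: a double-bit error goes undetected
```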
Error Detection

 Basic Idea

 Parity Check

 Cyclic Redundancy Check


Cyclic Redundancy Check

• Cyclic redundancy check: one of most common,


most powerful error detecting codes
– Given a k-bit block of bits, the transmitter generates an (n – k)-bit sequence, known as a frame check sequence (FCS)
– The resulting frame, consisting of n bits, is exactly divisible by some predetermined number
– The receiver then divides the incoming frame by that number; if there is no remainder, it assumes there was no error.
Error Detection

 Cyclic Redundancy Check


 Modulo 2 Arithmetic

 Polynomials

 Digital Logic
Modulo 2 Arithmetic

• Modulo 2 arithmetic uses binary addition with no carries, which is just the XOR operation.
• Binary subtraction with no carries is also interpreted as the XOR operation.
Addition (XOR):      1111 + 1010 = 0101
Subtraction (XOR):   1111 – 0101 = 1010
Multiplication:      11001 × 11 = 101011
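The same operations in a short Python sketch (bit-string helpers written for this illustration):

```python
def xor_bits(a: str, b: str) -> str:
    """Modulo-2 addition/subtraction: bitwise XOR, padded to the wider operand."""
    width = max(len(a), len(b))
    return format(int(a, 2) ^ int(b, 2), f"0{width}b")

def mul_mod2(a: str, b: str) -> str:
    """Modulo-2 multiplication: shift-and-XOR instead of shift-and-add."""
    result = 0
    for i, bit in enumerate(reversed(b)):
        if bit == "1":
            result ^= int(a, 2) << i
    return format(result, "b")

print(xor_bits("1111", "1010"))   # 0101  (addition)
print(xor_bits("1111", "0101"))   # 1010  (subtraction: same operation)
print(mul_mod2("11001", "11"))    # 101011
```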
CRC with Modulo-2 Arithmetic

• Cyclic Redundancy Check (CRC) is an error detection technique


widely used in digital communication systems to detect errors
introduced during data transmission or storage.
• It works based on the principles of modulo 2 arithmetic, which is
also known as binary arithmetic or Boolean algebra.
• Modulo 2 arithmetic involves performing addition and subtraction without carries, so every digit remains either 0 or 1. This method is particularly efficient for error detection because it uses simple bitwise operations.
Cyclic Redundancy Check

• Now define:
• T = n-bit frame to be transmitted
• D = k-bit block of data; the first k bits of T
• F = (n – k)-bit FCS; the last (n – k) bits of T
• P = pattern of n–k+1 bits; this is the predetermined
divisor
• Q = Quotient
• R = Remainder
Basic Principle of CRC in Modulo 2 Arithmetic

• In CRC, the sender and receiver agree on a fixed divisor called the "generator polynomial" or
"check polynomial."
• The data to be transmitted is considered as a binary number, and the divisor is used to perform
modulo 2 division.
• The remainder obtained from this division is added to the end of the data as a checksum.
• When the receiver receives the data, it performs the same modulo 2 division with the agreed-upon
divisor.
• If the remainder is zero, it indicates that the data is likely error-free.
• Otherwise, the presence of a non-zero remainder indicates the presence of errors in the received
data.

• The key advantage of CRC based on modulo 2 arithmetic is its simplicity and efficiency in
detecting errors, especially single-bit errors and burst errors. It doesn't require complex
mathematical calculations, making it suitable for real-time error detection in various communication
protocols.
• Overall, CRC using modulo 2 arithmetic is a robust and widely used error detection technique that
enhances the reliability of data transmission in digital communication systems.
Basic Principle of CRC in Modulo 2 Arithmetic

1. Data Representation: The data to be transmitted is represented as a sequence of bits (0s and 1s).
2. Divisor Selection: A fixed divisor polynomial is chosen. This polynomial serves as the key element
in the CRC calculation. It's represented as a binary number, where the positions of 1s indicate the
powers of the polynomial. For example, if the divisor polynomial is 1101, it corresponds to x^3 +
x^2 + 1.
3. Appending Zeros (CRC Bit Positions): A number of zero bits equal to the degree of the divisor polynomial is appended to the original data. For example, if the divisor polynomial has degree 3, then 3 zero bits are appended to the end of the data; these positions will carry the CRC (check) bits.
4. CRC Calculation: Modulo 2 (binary) arithmetic is used to calculate the CRC bits. The data (with the appended zeros) is divided by the divisor polynomial using XOR (exclusive OR) operations. The remainder obtained from this division is the CRC value.
5. Transmitting Data: The original data along with the calculated CRC bits (checksum) are
transmitted to the receiver.
6. Error Detection: At the receiver's end, the received data is divided again by the same divisor
polynomial using modulo 2 arithmetic. If the remainder obtained is zero, no error is detected. If the
remainder is non-zero, an error is detected.
7. Verifying CRC: The receiver compares the calculated CRC bits with the received CRC bits. If they
match, the data is considered error-free. If they don't match, an error is indicated.
Cyclic Redundancy Check

• We want T/P to have no remainder. Start with

T = 2^(n–k) D + F

– Multiplying D by 2^(n–k) in effect shifts it to the left by n – k bits and pads out the result with zeroes.
– Adding F yields the concatenation of D and F, which is T.
Cyclic Redundancy Check

• Dividing 2^(n–k) D by P gives a quotient and a remainder:

2^(n–k) D / P = Q + R / P

• Use the remainder as the FCS:

T = 2^(n–k) D + R
Cyclic Redundancy Check

• Does R cause T/P to have no remainder?

T/P = (2^(n–k) D + R) / P = 2^(n–k) D / P + R / P

• Substituting,

T/P = Q + R/P + R/P = Q + (R + R)/P = Q

– In modulo 2 arithmetic any value added to itself is zero, so there is no remainder and T is exactly divisible by P.
Cyclic Redundancy Check

• The FCS is easily generated: simply divide 2^(n–k) D by P and use the (n – k)-bit remainder as the FCS.
• On reception, the receiver will divide T by P and will
get no remainder if there have been no errors.
• The pattern P is chosen to be one bit longer than the
desired FCS, and the exact bit pattern chosen depends
on the type of errors expected. At minimum, both the
high- and low-order bits of P must be 1.
Cyclic Redundancy Check

[Example] Given Message D=1010 0011 01 (10 bits)


Pattern P=110101 (6 bits)
FCS R= to be calculated (5 bits)
Solution
Thus, n=15, k=10, and (n-k)=5
The message is multiplied by 2^5, yielding 1010 0011 0100 000
This product is divided by P:
Cyclic Redundancy Check
1 1 0 1 0 1 0 1 1 0 ←Q
P→1 1 0 1 0 1/1 0 1 0 0 0 1 1 0 1 0 0 0 0 0 ←2^(n–k)D
1 1 0 1 0 1
1 1 1 0 1 1
1 1 0 1 0 1
1 1 1 0 1 0
1 1 0 1 0 1
1 1 1 1 1 0
1 1 0 1 0 1
1 0 1 1 0 0
1 1 0 1 0 1
1 1 0 0 1 0
1 1 0 1 0 1
0 1 1 1 0 ←R
Cyclic Redundancy Check

The remainder is added to 2^5 D to give T = 1010 0011 0101 110, which is then transmitted.
If there are no errors, the receiver receives T intact.
Then the received frame is divided by P: (on the next
page)

Because there is no remainder for T/P, it is assumed that


there have been no errors.
Cyclic Redundancy Check
1 1 0 1 0 1 0 1 1 0 ←Q
P→1 1 0 1 0 1/1 0 1 0 0 0 1 1 0 1 0 1 1 1 0 ←T
1 1 0 1 0 1
1 1 1 0 1 1
1 1 0 1 0 1
1 1 1 0 1 0
1 1 0 1 0 1
1 1 1 1 1 0
1 1 0 1 0 1
1 0 1 1 1 1
1 1 0 1 0 1
1 1 0 1 0 1
1 1 0 1 0 1
0 ←R
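A minimal Python sketch of modulo-2 division that reproduces this example (a bit-string illustration, not the LFSR implementation discussed later):

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Remainder of modulo-2 (XOR) long division, as a (len(divisor)-1)-bit string."""
    rem = int(dividend, 2)
    d = int(divisor, 2)
    dlen = len(divisor)
    for shift in range(len(dividend) - dlen, -1, -1):
        if (rem >> (shift + dlen - 1)) & 1:   # leading bit of the current window is 1
            rem ^= d << shift                 # "subtract" (XOR) the shifted divisor
    return format(rem, f"0{dlen - 1}b")

D, P = "1010001101", "110101"
fcs = mod2_div(D + "0" * (len(P) - 1), P)     # divide 2^(n-k) D by P
T = D + fcs
print(fcs)              # 01110, as in the long division above
print(T)                # 101000110101110
print(mod2_div(T, P))   # 00000: no remainder, so the receiver detects no error
```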
Cyclic Redundancy Check Example 2
[Example] Data = 1011, generator = 101.
1. Padding: To perform CRC, we append zeros to the data stream based on the degree of the generator polynomial. The generator polynomial has degree 2 (since it has 3 bits), so we add two zeros to the data:
• Data (padded): 1011 00
2. Division using Modulo 2 Arithmetic: We divide the padded data by the generator polynomial using XOR operations:
   101100
 ⊕ 101
   ------
   000100
 ⊕    101
   ------
   000001
3. Final Remainder: The final remainder is 01 (two bits, the degree of the generator polynomial).
4. Append CRC Check Value: The remainder (01) is the CRC check value. We append this value to the original data:
Transmitted Frame: 1011 01
The transmitted frame is sent to the receiver.
Cyclic Redundancy Check

• Concise method for specifying the occurrence


of one or more errors.
– An error results in the reversal of a bit.
– This is equivalent to taking the XOR of the bit and
1: 0+1=1; 1+1=0.
– The errors in an n-bit frame can be represented by
an n-bit field with 1s in each error position.
Cyclic Redundancy Check

• The resulting frame Tr can be expressed as

Tr = T ⊕ E

where
T = transmitted frame
E = error pattern with 1s in positions where errors occur
Tr = received frame
⊕ = bitwise exclusive-OR (XOR)
Error Detection

 Cyclic Redundancy Check


 Modulo 2 Arithmetic

 Polynomials

 Digital Logic
Outline

3.1 Error Detection

3.2 Block Error Correction Codes

3.3 Convolutional Code


Block Error Correction Codes

• Error detection codes require retransmission.
• This approach is inadequate for wireless applications because:
– The error rate on a wireless link can be high, which would result in a large number of retransmissions
– The propagation delay can be long compared to the transmission time, which results in a very inefficient system.
• The common approach to retransmission is to retransmit the
frame in error plus all subsequent frames
• An error in a single frame necessitates retransmitting many
frames.
Block Error Correction Codes
• Correct errors in an incoming transmission on the
basis of the bits in that transmission
– Transmitter
• Forward error correction (FEC) encoder maps each k-bit block
into an n-bit block codeword, which is transmitted after
modulation
• During transmission the signal is subject to noise, which may
produce bit errors in the signal.
– Receiver
• Incoming signal is demodulated to produce a bit string which may
contain errors
• Block passed through an FEC decoder
Block Error Correction Codes
Block Error Correction Codes

• There are four possible outcomes:


– No errors present: Codeword produced by decoder
matches original codeword
– Decoder detects and corrects bit errors: maps this block into the original data block
– Decoder detects but cannot correct bit errors: simply reports an uncorrectable error
– Decoder detects no bit errors, though errors are present: maps this block into a k-bit block that differs from the original one.
Block Error Correction Codes

• How is it possible for the decoder to correct bit


errors?
– In essence, error correction works by adding
redundancy to the transmitted message.
– Two ways to add redundancy:
• The original k-bit block shows up in the n-bit block (as in error detection codes)
• The original k-bit block does not appear in the codeword
• We are looking at block error correction codes in this section.
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
Block Code Principles

• Hamming distance:
– Hamming distance d(v1,v2) between two n-bit
sequences v1 and v2 is the number of bits in which
v1 and v2 disagree.
– If v1=011011, and v2=110001, then d(v1,v2) =3.
– If v1=100011011, and v2=101011001, then d(v1,v2)
=2.
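A one-line Python sketch of this definition, checked against the two examples above:

```python
def hamming_distance(v1: str, v2: str) -> int:
    """Number of bit positions in which two equal-length sequences disagree."""
    return sum(b1 != b2 for b1, b2 in zip(v1, v2))

print(hamming_distance("011011", "110001"))         # 3
print(hamming_distance("100011011", "101011001"))   # 2
```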
Block Code Principles

[Example] For k=2 and n=5, we can make the


following assignment:
Data Block Codeword
00 00000
01 00111
10 11001
11 11110

Now, suppose that a code block is received with the bit


pattern 00100. This is not a valid codeword and so
the receiver has detected an error.
Block Code Principles

We cannot be sure which data block was sent. However:
– It would require only a single bit change to
transform the valid codeword 00000 to 00100.
– It would take two bit changes to transform 00111 to
00100
– Three bit changes to transform 11110 to 00100
– Four bit changes to transform 11001 into 00100.
• Thus we can deduce that the most likely codeword that
was sent was 00000 and the desired data block is 00.
This is error correction.
Block Code Principles

In terms of Hamming distance, we have:


d(00000,00100)=1; d(00111,00100)=2;
d(11001,00100)=4; d(11110,00100)=3.
Therefore, if an invalid codeword is received, then the
valid codeword that is closest to it (minimum
distance) is selected.
This will only work if there is a unique valid codeword
at a minimum distance from each invalid codeword.
In our example, it is not true that there is only one valid
codeword at a minimum distance.
Block Code Principles
Invalid    Minimum   Valid             Invalid    Minimum   Valid
codeword   distance  codeword          codeword   distance  codeword
00001      1         00000             10000      1         00000
00010      1         00000             10001      1         11001
00011      1         00111             10010      2         00000 or 11110
00100      1         00000             10011      2         00111 or 11001
00101      1         00111             10100      2         00000 or 11110
00110      1         00111             10101      2         00111 or 11001
01000      1         00000             10110      1         11110
01001      1         11001             10111      1         00111
01010      2         00000 or 11110    11000      1         11001
01011      2         00111 or 11001    11010      1         11110
01100      2         00000 or 11110    11011      1         11001
01101      2         00111 or 11001    11100      1         11110
01110      1         11110             11101      1         11001
01111      1         00111             11111      1         11110
Block Code Principles

There are eight cases in which an invalid codeword is at


a distance 2 from two different valid codewords.
– If one such invalid codeword is received, the receiver has
no way to choose between the two alternatives.
– An error is detected but cannot be corrected.
In every case in which a single bit error occurs, the
resulting codeword is of distance 1 from only one
valid codeword and the decision can be made.
This code is therefore capable of correcting all single-
bit errors but cannot correct double bit errors.
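A minimal sketch of this minimum-distance decoding rule for the (5, 2) code above (the names are illustrative, not from the lecture):

```python
CODEBOOK = {"00": "00000", "01": "00111", "10": "11001", "11": "11110"}

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received: str):
    """Return (data, distance) for the unique closest codeword, or None on a tie."""
    dists = {data: hamming_distance(cw, received) for data, cw in CODEBOOK.items()}
    best = min(dists.values())
    winners = [data for data, d in dists.items() if d == best]
    return (winners[0], best) if len(winners) == 1 else None  # None: detected, not correctable

print(decode("00100"))   # ('00', 1): single-bit error corrected
print(decode("01010"))   # None: distance 2 from both 00000 and 11110 (see the table)
```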
Block Code Principles

• In general, an (n, k) block code encodes k data bits


into n-bit codewords
• It is equivalent to the design of a function of the form
vc=f(vd), where vd is a vector of k data bits and vc is a
vector of n codeword bits.
• With an (n, k) block code, there are 2^k valid codewords out of a total of 2^n possible codewords. The ratio of redundant bits to data bits, (n – k)/k, is called the redundancy of the code.
Block Code Principles

• The ratio of data bits to total bits, k/n is called the


code rate.
• In telecommunication and information theory, the
code rate (or information rate) of a forward error
correction code is the proportion of the data-stream
that is useful (non-redundant).
• The code rate is a measure of how much additional
bandwidth is required to carry data at the same data
rate as without the code.
Block Code Principles

• For a code consisting of the codewords w1, w2, …, ws, where s = 2^k (the number of valid codewords), the minimum distance dmin of the code is defined as:

dmin = min d(wi, wj), taken over all pairs i ≠ j

• For a given positive integer t, if a code satisfies dmin ≥ 2t + 1, then the code can correct all bit errors up to and including errors of t bits.
• If dmin ≥ 2t, then all errors of t – 1 or fewer bits can be corrected, and errors of t bits can only be detected.
Block Code Principles

• The maximum number of guaranteed correctable errors per codeword satisfies

t = ⌊(dmin – 1)/2⌋

where ⌊x⌋ denotes the largest integer not exceeding x.
• The number of errors, t, that can be detected satisfies:

t = dmin – 1
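Applied to the (5, 2) example code from earlier, a short Python check:

```python
from itertools import combinations

codewords = ["00000", "00111", "11001", "11110"]     # the (5, 2) example code
hd = lambda a, b: sum(x != y for x, y in zip(a, b))

dmin = min(hd(a, b) for a, b in combinations(codewords, 2))
print(dmin)             # 3
print((dmin - 1) // 2)  # 1: all single-bit errors are correctable
print(dmin - 1)         # 2: up to double-bit errors are detectable
```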
Block Code Principles

• The design of a block code:


– For given n and k, largest possible value of dmin
– Relatively easy to encode and decode, requiring
minimal memory and processing time
– The number of check bits, n – k, should be small, to reduce bandwidth
– The number of check bits, n – k, should be large, to reduce the error rate
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
Hamming Code

• Hamming codes are a family of (n, k) block error-


correcting code that have the following parameters:
– Block length: n = 2^m – 1
– Number of data bits: k = 2^m – m – 1
– Number of check bits: n – k = m
– Minimum distance: dmin = 3
– m ≥ 3
• Hamming codes are designed to correct single bit
errors.
Hamming Code

• Encoding: k data bits + (n -k) check bits


• Decoding: compares received (n -k) bits with
calculated (n -k) bits using XOR
– Resulting (n -k) bits called syndrome word
– The syndrome ranges between 0 and 2^(n–k) – 1
– Each bit of syndrome indicates a match (0) or
conflict (1) in that bit position
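A minimal sketch of a (7, 4) Hamming code with m = 3 check bits, using the common layout with check bits at positions 1, 2 and 4 so that the syndrome equals the position of a single bit error (an illustration; the lecture does not prescribe this particular layout):

```python
def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]   # codeword, positions 1..7

def hamming74_decode(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # recompute each parity check
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4       # 0 = no error, else error position
    if syndrome:
        c[syndrome - 1] ^= 1              # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome

codeword = hamming74_encode(1, 0, 1, 1)   # [0, 1, 1, 0, 0, 1, 1]
received = list(codeword)
received[5] ^= 1                          # single bit error at position 6
data, syndrome = hamming74_decode(received)
print(syndrome, data)                     # 6 [1, 0, 1, 1]: error located and corrected
```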
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
Cyclic Code

• Most error-correcting block codes are cyclic codes.


• For cyclic codes, a valid codeword (c0, c1, …, cn-1),
cyclically shifted right one bit, is also a valid
codeword (cn-1, c0, …, cn-2)
• Can be encoded and decoded using linear feedback
shift registers (LFSRs)
Cyclic Code

• The LFSR implementation of a cyclic error-


correcting encoder is the same as that of the CRC
error-detecting code.
• Key difference: cyclic error-correcting encoder takes
fixed-length input (k) and produces fixed-length
check code (n-k)
– In contrast, CRC error-detecting code accepts arbitrary
length input for fixed-length check code
Cyclic Code

• Decoding process:
– Process received bits to compute the syndrome
code (same fashion as the encoder processes the
data bits to produce the check code)
– If the syndrome bits are all zero, no error has been
detected.
– If the syndrome is nonzero, perform additional
processing on the syndrome for error correction.
Cyclic Code

• The syndrome pattern consists of n – k bits and therefore takes on 2^(n–k) possible values.
• A value of all zeros indicates no error, so a total of 2^(n–k) – 1 different error patterns can be corrected.
• To be able to correct all possible single bit errors with an (n, k) code, we need

n ≤ 2^(n–k) – 1

• To be able to correct all single and double bit errors, the relationship is

n + n(n – 1)/2 ≤ 2^(n–k) – 1
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
BCH Code
• BCH codes are among the most powerful cyclic
block codes and are widely used.
• For any positive pair of integers m and t, there is an (n, k) BCH code with the following parameters:
– Block length: n = 2^m – 1
– Number of check bits: n – k ≤ mt
– Minimum distance: dmin ≥ 2t + 1
• Correct combinations of t or fewer errors
• Flexibility in choice of parameters
– Block length, code rate
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
Reed-Solomon Codes

• Subclass of nonbinary BCH codes


• Data processed in chunks of m bits, called symbols
• An (n, k) RS code has parameters:
– Symbol length: m bits per symbol
– Block length: n = 2^m – 1 symbols = m(2^m – 1) bits
– Data length: k symbols
– Size of check code: n – k = 2t symbols = m(2t) bits
– Minimum distance: dmin = 2t + 1 symbols
• The encoding algorithm expands a block of k symbols
to n symbols by adding n-k redundant check symbols.
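A quick evaluation of these parameter relationships in Python, for an assumed symbol size of m = 8 bits and t = 16 correctable symbol errors (values chosen for illustration only):

```python
m, t = 8, 16                 # assumed symbol size (bits) and correctable symbol errors
n = 2**m - 1                 # block length in symbols
check = 2 * t                # check symbols
k = n - check                # data symbols
dmin = 2 * t + 1             # minimum distance in symbols

print(n, k, check, dmin)     # 255 223 32 33
print(n * m, k * m)          # block and data lengths in bits: 2040 1784
```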
Block Error Correction Codes

 Block Code Principles


 Hamming Code
 Cyclic Code
 BCH Code
 Reed-Solomon Codes
 Block Interleaving
Block Interleaving

• Block interleaving is a common technique


used with block codes in wireless systems.
• The advantage: a burst error that affects a sequence of bits is spread out over a number of separate blocks at the receiver, so that correction is possible.
Block Interleaving

• Data written to and read from memory in


different orders
• Data bits and corresponding check bits are
interspersed with bits from other blocks
• At receiver, data are deinterleaved to recover
original order
• A burst error that may occur is spread out over
a number of blocks, making error correction
possible
Block Interleaving

• A simple and common interleaving technique:


– Data to be transmitted are stored in a rectangular array in
which each row consists of n bits, equal to the block size.
– Data are then read out one column at a time.
– The result is that the data bits from a single n-bit block are
spread out and interspersed with bits from other blocks.
– If a burst of noise affects a consecutive sequence of bits during transmission, those bits belong to different blocks; hence only a fraction of the bits in any one block is in error and needs to be corrected by that block's check bits (see the sketch below).
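A minimal Python sketch of this row-write/column-read interleaving with a 4-bit burst error (the codewords here are arbitrary placeholders):

```python
def interleave(blocks):
    """Write codewords as rows, transmit column by column."""
    rows = [list(b) for b in blocks]
    return "".join(rows[r][c] for c in range(len(rows[0])) for r in range(len(rows)))

def deinterleave(stream, n_blocks, block_len):
    """Invert the column-by-column read to recover the original rows."""
    rows = [[""] * block_len for _ in range(n_blocks)]
    bits = iter(stream)
    for c in range(block_len):
        for r in range(n_blocks):
            rows[r][c] = next(bits)
    return ["".join(r) for r in rows]

blocks = ["1010101", "1100110", "0011001", "1111000"]   # four 7-bit codewords
tx = interleave(blocks)
# a 4-bit burst hits consecutive transmitted bits...
rx = tx[:8] + "".join("1" if b == "0" else "0" for b in tx[8:12]) + tx[12:]
damaged = deinterleave(rx, len(blocks), len(blocks[0]))
# ...but lands as one bit error per block, which a single-error-correcting code can fix
print([sum(a != b for a, b in zip(x, y)) for x, y in zip(blocks, damaged)])   # [1, 1, 1, 1]
```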
Block Interleaving
Outline

3.1 Error Detection

3.2 Block Error Correction Codes

3.3 Convolutional Code


Convolutional Code

 Convolutional Code Principles

 Decoding

 Turbo Code
• In block codes, information bits are followed by parity bits. In convolutional codes, information bits are spread along the sequence.
• Block codes are memoryless, whereas convolutional codes have memory. Convolutional codes use smaller codewords than block codes while achieving the same quality.
Convolutional Code Principles

• An (n, k) block code processes data in blocks of k bits at a time, producing a block of n bits as output for every block of k bits of input.
• This may not be convenient if data are transmitted and received in a more or less continuous stream, particularly with large n.
• A convolutional code generates redundant bits continuously, so that error checking and correction are carried out continuously.
Convolutional Code Principles

• A convolutional code is defined by three parameters:


n, k, and K.
– (n, k, K) code
• Input processes k bits at a time
• Output produces n bits for every k input bits
• K = constraint factor
• k and n generally very small
– n-bit output of (n, k, K) code depends on:
• Current block of k input bits
• Previous K-1 blocks of k input bits
– The current output of n bits is a function of the last K×k
input bits.
Convolutional Code Principles

• Shift register is convenient for describing and


implementing the encoding process:
– For an (n, k, K) code, the shift register contains the most recent K×k input bits. The register is initialized to all zeros.
– The encoder produces n output bits, after which the oldest k bits in the register are discarded and k new bits are shifted in.
– Although the output of n bits depends on K×k input bits, the
rate of encoding is n output bits per k input bits. The code
rate therefore is k/n.
– Most commonly k=1 and hence a shift register has a length
of K.
Convolutional Code Principles
[Example] For a (2, 1, 3) code, the encoder converts an input bit
un into two output bits vn1 and vn2, using the three most recent
bits.
Convolutional Code Principles

• A finite-state machine can also be used to represent a convolutional code.
• The machine has 2^(k(K–1)) states; the transition from one state to another is determined by the most recent k bits of input and produces n output bits.
• The initial state of the machine corresponds to the all-zeros state.
Convolutional Code Principles
• There are 4 states, one for each possible pair of values of the last two bits.
• The next input bit causes a transition and produces an output of two bits.
• If the last two bits were 10 (un–1 = 1, un–2 = 0) and the input bit is 1, then the current state b (10) changes to d (11).

vn1 = un–2 ⊕ un–1 ⊕ un
vn2 = un–2 ⊕ un
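A minimal Python sketch of this (2, 1, 3) encoder, built directly from the two output equations (written for this illustration, not taken from the lecture):

```python
def conv_encode(bits):
    """(2, 1, 3) convolutional encoder: vn1 = u(n-2)^u(n-1)^u(n), vn2 = u(n-2)^u(n)."""
    u1 = u2 = 0                    # shift register (previous two inputs), initially zero
    out = []
    for u in bits:
        v1 = u2 ^ u1 ^ u
        v2 = u2 ^ u
        out.append(f"{v1}{v2}")
        u2, u1 = u1, u             # discard the oldest bit, shift in the new one
    return " ".join(out)

print(conv_encode([1, 0, 1, 1, 0, 0, 0]))
# 11 10 00 01 01 11 00 — the same output as the trellis path example
# discussed in the Decoding subsection below
```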
Convolutional Code

 Convolutional Code Principles

 Decoding

 Turbo Code
Decoding

• Trellis diagram – expanded encoder diagram


– Constructed by reproducing the states horizontally and
showing the state transitions going from left to right
corresponding to time, or data input.
– The next figure demonstrates a (2,1,3) code.
– For example, the path a-b-c-b-d-c-a-a produces the output 11 10 00 01 01 11 00 and is generated by the input 1011000.
Decoding
Decoding

• Viterbi algorithm – error correction (decoding) algorithm


– Compares received sequence with all possible transmitted
sequences
– Algorithm chooses path through trellis whose coded
sequence differs from received sequence in the fewest
number of places
– Once a valid path is selected as the correct path, the
decoder can recover the input data bits from the output
code bits
– A metric is used to measure the difference between a received sequence and the valid sequences; normally the Hamming distance is used.
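A minimal Viterbi decoder for the (2, 1, 3) code above, using the Hamming distance as the branch metric. The example corrupts one bit of the codeword 11 10 00 01 01 11 00 from the trellis example and recovers the input 1011000 (a sketch of the principle, not the windowed decoder shown in the figure):

```python
def viterbi_decode(pairs):
    """pairs: received 2-bit outputs, e.g. ['11', '00', ...]; returns (bits, metric)."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]          # (u_{n-1}, u_{n-2})
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start in the all-zeros state
    paths = {(0, 0): []}

    for r in pairs:
        r1, r2 = int(r[0]), int(r[1])
        new_metric = {s: INF for s in states}
        new_paths = {}
        for (a, b), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):                         # hypothesised input bit
                v1, v2 = b ^ a ^ u, b ^ u            # expected encoder output
                branch = (v1 != r1) + (v2 != r2)     # Hamming distance to received pair
                nxt = (u, a)
                if m + branch < new_metric[nxt]:
                    new_metric[nxt] = m + branch
                    new_paths[nxt] = paths[(a, b)] + [u]
        metric, paths = new_metric, new_paths

    best = min(metric, key=metric.get)               # survivor that disagrees least
    return paths[best], metric[best]

tx = "11 10 00 01 01 11 00".split()                  # encoder output for input 1011000
rx = tx[:]
rx[1] = "00"                                         # one bit error in the second pair
bits, disagreements = viterbi_decode(rx)
print(bits)                                          # [1, 0, 1, 1, 0, 0, 0]
print(disagreements)                                 # 1
```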
Decoding

Viterbi algorithm for w=10 01 01 00 10 11 00 with decoding


window b=7
Decoding
Convolutional Code

 Convolutional Code Principles

 Decoding

 Turbo Code
Turbo Code

• Turbo code: emerged as a popular choice for 3G


wireless systems.
• Turbo codes exhibit performance, in terms of bit error
probability, that is very close to the Shannon limit.
• It can be efficiently implemented for high-speed use.
• A number of different turbo encoders and decoders
have been introduced, most of which are based on
convolutional encoding.
Turbo Code

• In a turbo encoding scheme, a specific encoder is replicated.
• One encoder receives the stream of input bits and produces a single output check bit C1i for each input bit.
• The other encoder receives an interleaved version of the input bit stream, producing a sequence of check bits C2i.
• The initial input bit plus the two check bits are then
multiplexed to produce the sequence I1C11C21I2C12C22…
Turbo Code
Turbo Code

• The resulting sequence has a code rate 1/3.


• Other code rates can be achieved by a process called puncturing.
– A code rate of ½ can be achieved by taking only half of the
check bits, alternating between outputs from the two
encoders.
I1 I2 I3 I4 I5 I6 I7
C11 C12 C13 C14 C15 C16 C17
C21 C22 C23 C24 C25 C26 C27
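A short Python sketch of this multiplexing and puncturing, using symbolic labels (I1, C11, C21, …) rather than actual encoder outputs:

```python
n = 7
I  = [f"I{i}"  for i in range(1, n + 1)]
C1 = [f"C1{i}" for i in range(1, n + 1)]
C2 = [f"C2{i}" for i in range(1, n + 1)]

# Rate 1/3: each input bit is multiplexed with both check bits.
rate_1_3 = [s for i in range(n) for s in (I[i], C1[i], C2[i])]
print(rate_1_3[:6])   # ['I1', 'C11', 'C21', 'I2', 'C12', 'C22']

# Rate 1/2 by puncturing: keep only half the check bits,
# alternating between the outputs of the two encoders.
rate_1_2 = [s for i in range(n) for s in (I[i], C1[i] if i % 2 == 0 else C2[i])]
print(rate_1_2[:8])   # ['I1', 'C11', 'I2', 'C22', 'I3', 'C13', 'I4', 'C24']
```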
Turbo Code

• For turbo coding, a variation of the convolutional


code, known as a recursive systematic convolutional
code is used for encoder 1 and 2.
– Recursive: a convolutional encoder output is fed back to
the shift register.
– Systematic: the encoder output includes the input bits themselves.
Turbo Code

The turbo encoder using two RSC coders


Turbo Code
Turbo Code
• Decoding Process:
– The received data are depunctured if necessary, by estimating the missing check bits or by setting the missing bits to 0.
– Decoder 1 operates first, using the I' and C'1 values to produce correction bits (X1), which are then fed into decoder 2.
– Decoder 2 uses I' and C'2 together with X1 to produce correction values X2.
– X2 is fed back to decoder 1 for a second iteration of the decoding algorithm.
– After sufficient iterations, an output bit is generated.
– During decoding, interleaving and deinterleaving must be performed to align the bits properly.
Turbo Code
