
EC401 M1-Information Theory & Coding

Module 6

For more study materials: WWW.KTUSTUDENTS.IN


Syllabus
• Convolutional Codes
• Encoding
• Time and frequency domain approach
• State, tree and trellis diagrams
• Transfer function and minimum free distance
• Maximum likelihood decoding
• Viterbi Algorithm
• Sequential Decoding



Convolutional coding
• Type of Channel Coding



Convolutional coding
• What is the difference between block codes, cyclic codes and convolutional codes?
• Block codes and cyclic codes: to get high redundancy, more memory locations are needed, so the digital communication system becomes more complex.
• Convolutional coding: high redundancy can be achieved using fewer memory locations, so the complexity is lower.


Convolutional coding
• (n,k,m ) Convolutional Encoder



Convolutional codes

[Block diagram: k input bits enter the (n,k,m) encoder and n output bits leave it; the encoder stores m bits in memory.]


CONVOLUTIONAL CODES
• An (n,k,m) convolutional code can be
implemented with a k-input, n-output linear
sequential circuit with input memory m.

• A convolutional code differs from block codes in that
– the encoder contains memory
– at any given time, the n outputs depend on the k inputs at that time and on the m previous input blocks.
• n and k are small integers with k < n.
• m must be large for low error probability.
Example 1: (2,1,3) Convolutional
Encoder

Implemented by a linear feedforward shift register. The generator sequences g(1) and g(2) (one per output, shown in the original figure) define the connections from the shift-register stages to the modulo-2 adders.
Encoding Equations



• Encoding equations: v(j) = u * g(j), where * denotes discrete convolution.
• Convolution operation: for any l ≥ 0,
    v_l(j) = Σ (i = 0 to m) u_(l−i) · g_i(j)   (modulo 2)
• For Example 1, the output sequences v(1) and v(2) are obtained by convolving the message u with g(1) and g(2), respectively.
• The codeword is given by interleaving the output sequences:
    v = (v_0(1) v_0(2), v_1(1) v_1(2), v_2(1) v_2(2), …)


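The discrete convolution above can be sketched directly in Python. The generator sequences used below, g(1) = (1 0 1 1) and g(2) = (1 1 1 1), are illustrative assumptions (the figure giving Example 1's generators is not reproduced here); the function itself implements v_l(j) = Σ u_(l−i) g_i(j) mod 2, including the m flush steps.

```python
def conv_encode(u, generators):
    """Encode message u with an (n, 1, m) convolutional code.

    Each generator is a tuple (g_0, ..., g_m); output bit l of stream j is
    v_l(j) = sum over i of u_(l-i) * g_i(j), modulo 2.
    """
    m = len(generators[0]) - 1
    L = len(u)
    word = []
    for l in range(L + m):            # encode the message plus m flush steps
        for g in generators:
            bit = 0
            for i, gi in enumerate(g):
                if gi and 0 <= l - i < L:
                    bit ^= u[l - i]   # modulo-2 accumulation
            word.append(bit)          # interleave the output streams
    return word

# Illustrative (2,1,3) generators (an assumption, not necessarily Example 1's):
g1, g2 = (1, 0, 1, 1), (1, 1, 1, 1)
print(conv_encode([1, 0, 1], (g1, g2)))
```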
Example 1



Generator Matrix of (n,k,m)
Convolutional Encoder



Example 1



Transform Domain Representation
• Convolution operations in the time domain can be replaced by more convenient polynomial multiplications in the transform (D) domain.
• For a (2,1,m) convolutional encoder:
    V(1)(D) = U(D) g(1)(D)
    V(2)(D) = U(D) g(2)(D)
    V(D) = V(1)(D²) + D · V(2)(D²)   (interleaved codeword polynomial)


Transform Domain Representation

In general, V(j)(D) = U(D) g(j)(D), where g(j)(D) = g_0(j) + g_1(j) D + … + g_m(j) D^m.


Transform Domain Representation
For Example 1

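Multiplication of polynomials with modulo-2 coefficients is exactly the convolution used in the time domain, which is why the transform-domain relation holds. A minimal sketch; the message and generator polynomials here are illustrative, not taken from the slides:

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (lowest degree first); the product is their convolution mod 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj      # XOR accumulates modulo-2 sums
    return out

# U(D) = 1 + D^2 (message 1 0 1), g(D) = 1 + D + D^2
print(gf2_poly_mul([1, 0, 1], [1, 1, 1]))
```

The result is the same bit sequence a time-domain convolution of the message with the generator sequence would produce.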


Example 2: (3,2,1) Convolutional
Encoder



Question
• Draw (3,1,2) Convolutional Encoder
Assume G(D) = (1+D,1+D2,1+D+D2)



Rate and Constraint Length of
Convolutional Code
• Rate of a convolutional encoder = k/n
• Constraint length (K) is defined as the number of shifts over which a single message bit can influence the encoder output.
• That is, K shifts are required for a message bit to enter the shift register and finally come out.
• K = m + 1



Question
• Draw a rate-½, constraint-length-4 convolutional encoder: Example 1



State Diagram
• Preparation of State Transition Table



State Diagram Representation
(2,1,3) Convolutional Encoder
Example 1



Question
Consider a (2,1,2) Convolutional code with

– Draw the encoder diagram.
– Draw the state diagram



Code Tree or Tree Diagram
• The tree diagram adds the dimension of time to the state diagram.
• The code tree can be explained with the help of the following example:
• Draw a convolutional encoder with constraint length 3 and rate ½, with g(1) = (1 1 1) and g(2) = (1 0 1). Prepare the state transition table.



Convolutional encoder with constraint length 3 and rate ½, with g(1) = (1 1 1) and g(2) = (1 0 1): Example 2



Convolutional Encoder with constraint
length 3 and rate ½. Example 2
Input bit | Present state (m0 m1) | Next state (m0 m1) | Output (V(1) V(2))
    0     |         0 0           |     0 0  (s0)      |       0 0
    1     |         0 0           |     1 0  (s1)      |       1 1
    0     |         1 0           |     0 1  (s2)      |       1 0
    1     |         1 0           |     1 1  (s3)      |       0 1
    0     |         0 1           |     0 0  (s0)      |       1 1
    1     |         0 1           |     1 0  (s1)      |       0 0
    0     |         1 1           |     0 1  (s2)      |       0 1
    1     |         1 1           |     1 1  (s3)      |       1 0

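The state transition table above can be generated programmatically from the Example 2 generator sequences g(1) = (1 1 1) and g(2) = (1 0 1). A sketch, using the slide's convention that the state is (m0 m1) with m0 the most recent input:

```python
GENS = ((1, 1, 1), (1, 0, 1))   # Example 2: g(1) = 111, g(2) = 101

def step(state, u, gens=GENS):
    """Return (next_state, outputs) for input bit u from state (m0, m1)."""
    bits = (u,) + state          # current input followed by register contents
    out = tuple(sum(b * g for b, g in zip(bits, gen)) % 2 for gen in gens)
    return (u,) + state[:-1], out   # shift the register: u becomes the new m0

for state in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    for u in (0, 1):
        nxt, out = step(state, u)
        print(u, state, "->", nxt, "out", out)
```

The printed rows reproduce the table, e.g. input 1 from state (0 0) goes to (1 0) with output 1 1.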


Code Tree or Tree Diagram



Code Tree or Tree Diagram

Message: 11011
Code Word:
11 01 01 00 01



Code Tree or Tree Diagram
• Input 0 specifies the upper branch and input 1 specifies the lower branch.
• Output binary symbols are indicated on the branches.
• A specific path in the tree is traced from left to
right in accordance with the input message
sequence.
• After the first m + 1 branches, the tree becomes repetitive.



TRELLIS DIAGRAM
• Trellis Diagram is the extension of state
diagram in time.



Trellis Diagram



Trellis Diagram
• The trellis contains L + K levels, where L is the length of the incoming message sequence and K is the constraint length of the code.

• The levels of the trellis are labeled j = 0, 1, …, L + K − 1.
• Input 0 is drawn as a solid line; input 1 is drawn as a dashed line.
• For the message 10011, the output sequence is 11 10 11 11 01.

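The quoted output sequence can be checked by convolving the message with the Example 2 generators g(1) = (1 1 1) and g(2) = (1 0 1), without the flush bits:

```python
GENS = ((1, 1, 1), (1, 0, 1))   # Example 2: g(1) = 111, g(2) = 101

def encode(u, gens=GENS):
    """v(1)_l = u_l ^ u_(l-1) ^ u_(l-2), v(2)_l = u_l ^ u_(l-2), interleaved."""
    m = len(gens[0]) - 1
    out = []
    for l in range(len(u)):
        for g in gens:
            out.append(sum(g[i] * u[l - i] for i in range(m + 1) if l - i >= 0) % 2)
    return out

print(encode([1, 0, 0, 1, 1]))   # branch pairs 11 10 11 11 01
```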


Maximum Likelihood Decoding
• Consider a codeword c generated from a message u.
• At the receiver, r is received. From r, c and u are to be estimated.
• Take p(r|c), the conditional probability of receiving r given that c was sent.
• The log-likelihood function equals log p(r|c).
• The maximum likelihood decoder chooses the estimate of c for which the log-likelihood function log p(r|c) is maximum.



Maximum Likelihood Decoding
For a BSC with transition probability p, if r differs from c in d(r, c) positions (the Hamming distance) and N is the codeword length:

    log p(r|c) = d(r, c) log p + [N − d(r, c)] log(1 − p)
               = d(r, c) log [p / (1 − p)] + N log(1 − p)


Maximum Likelihood Decoding
• N log(1 − p) is constant for all c, and log [p / (1 − p)] < 0 for p < ½.
• The maximum likelihood decoding rule for the BSC can therefore be restated as: choose the estimate of c that minimizes the Hamming distance between the received vector r and the transmitted vector c.



Viterbi Algorithm For BSC
 It is a maximum likelihood decoding algorithm.
 For the BSC, the decoding rule is based on minimum Hamming distance.
Step 1: Starting at level j = 0, compute the partial Hamming distance between the received word and the branch output for the single path entering each node. Store the path and its Hamming distance for each state.
Step 2: Increment the level j by 1. Compute the partial Hamming distance for all paths entering a state by adding the branch Hamming distance entering that state to the Hamming distance of the connecting state at the preceding time unit. For each state, store the path with the smallest Hamming distance, together with its Hamming distance, and eliminate all other paths.
Step 3: If j < L + m, repeat Step 2. Otherwise, stop.

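The three steps above amount to the following hard-decision Viterbi sketch, shown here for the Example 2 code (g(1) = 111, g(2) = 101). It assumes the message is terminated by m zero (flush) bits, so decoding runs over L + m levels as in Step 3:

```python
def branch_out(state, u, gens):
    """Encoder output bits for input u from state (most recent input first)."""
    bits = (u,) + state
    return tuple(sum(b * g for b, g in zip(bits, gen)) % 2 for gen in gens)

def viterbi_bsc(r, gens, L):
    """Hard-decision Viterbi decoding of a zero-terminated convolutional code."""
    m = len(gens[0]) - 1
    n = len(gens)
    survivors = {(0,) * m: (0, [])}          # state -> (Hamming distance, path)
    for j in range(L + m):                   # Steps 2-3: one trellis level at a time
        branch = r[j * n:(j + 1) * n]
        inputs = (0, 1) if j < L else (0,)   # last m inputs flush the register
        new = {}
        for state, (dist, path) in survivors.items():
            for u in inputs:
                out = branch_out(state, u, gens)
                d = dist + sum(a != b for a, b in zip(out, branch))
                nxt = (u,) + state[:-1]
                if nxt not in new or d < new[nxt][0]:
                    new[nxt] = (d, path + [u])   # keep smallest-distance path
        survivors = new
    _, path = survivors[(0,) * m]            # termination ends in the zero state
    return path[:L]

gens = ((1, 1, 1), (1, 0, 1))                # Example 2 generators
r = [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]      # terminated codeword for 10011
print(viterbi_bsc(r, gens, 5))               # recovers [1, 0, 0, 1, 1]
```

Flipping a received bit or two still yields the same message, consistent with this code's free distance of 5.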




Minimum Free Distance
• The free distance (dfree) of a convolutional code is
defined as the minimum Hamming distance
between any two code words in the code.

• A convolutional code with free distance dfree can correct t errors if and only if dfree > 2t.
• If more errors than this occur, the Viterbi algorithm breaks down, producing the same Hamming distance for many paths.

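Besides reading dfree off the transfer function (as done later in these slides), it can be found by a shortest-path search over the state diagram, weighting each branch by the Hamming weight of its output. A sketch using Dijkstra's algorithm, applied to the Example 2 code; it assumes a non-catastrophic code, so there is no zero-weight loop:

```python
import heapq

def branch_out(state, u, gens):
    """Encoder output bits for input u from state (most recent input first)."""
    bits = (u,) + state
    return tuple(sum(b * g for b, g in zip(bits, gen)) % 2 for gen in gens)

def free_distance(gens):
    """Minimum output weight over paths leaving and re-entering the zero state."""
    m = len(gens[0]) - 1
    zero = (0,) * m
    first = sum(branch_out(zero, 1, gens))      # diverge with input 1
    heap = [(first, (1,) + zero[:-1])]
    settled = set()
    while heap:
        d, state = heapq.heappop(heap)
        if state == zero:
            return d                            # first remerge with the zero path
        if state in settled:
            continue
        settled.add(state)
        for u in (0, 1):
            w = sum(branch_out(state, u, gens))
            heapq.heappush(heap, (d + w, (u,) + state[:-1]))

print(free_distance(((1, 1, 1), (1, 0, 1))))    # dfree of the Example 2 code
```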


Viterbi Algorithm For Non BSC
 For a non-BSC channel, the decoding rule is based on the maximum of the log-likelihood function log p(r|v).
 M(r|v) = log p(r|v)
Step 1: Starting at level j = 0, compute the partial metric between the received word and the branch output for the single path entering each node. Store the path and its metric for each state.
Step 2: Increment the level j by 1. Compute the partial metric for all paths entering a state by adding the branch metric entering that state to the metric of the connecting state at the preceding time unit. For each state, store the path with the largest metric, together with its metric, and eliminate all other paths.
Step 3: If j < L + m, repeat Step 2. Otherwise, stop.



Viterbi Algorithm For Non BSC

[Figure: b) log p(ri|vi) table; c) scaled branch metrics C2 log p(ri|vi) + C1, with C2 = 7.195 and C1 = 2.3]




Transfer Function: Example 2
a: 00
b: 10
c: 01
d: 11



Transfer Function: Example 2
• Using Mason's gain formula, the transfer function is obtained from the modified (signal-flow) state diagram.
• dfree = 5: the exponent of the first term (the smallest exponent) of the transfer function.
Transfer Function: Example 1

Here D is used in place of X; a factor L can also be appended per branch, as in the previous case.

dfree = 6



Sequential Decoding
• A drawback of the Viterbi algorithm is that it must perform a fixed number of computations per level (on the order of 2^K) irrespective of the noise level.
• Sequential decoding is a decoding procedure whose amount of computation adapts to the noise level.
• Its major drawback is that noisy frames require large amounts of computation, and the decoding time may exceed the upper limit, causing information to be lost or erased.



Sequential Decoding
The Viterbi metric can be modified into the Fano metric. For a BSC:

    M(r_i|v_i) = log2 [p(r_i|v_i) / p(r_i)] − R

and the metric of a path of l branches is the sum of its n·l bit metrics:

    M(r|v) = Σ (i = 1 to n·l) M(r_i|v_i)

where R is the code rate, n is the number of output bits per branch, and l is the number of branches.
When comparing paths of different lengths, the path with the largest Fano metric is the best path.
• Example: Consider a (3,1,2) convolutional encoder. Assume that a codeword from this code is transmitted over a BSC. (The received vector was given in the original slide.)
• For the BSC, the Hamming distance is taken as the metric, and the path with the smallest metric is selected as the maximum likelihood path.
• Now consider the partial path metrics for two paths of different lengths. The Hamming metric indicates that the shorter path is the better of the two, yet the longer path is more likely to be part of the maximum likelihood path.
• The Hamming metric is thus biased toward shorter paths; the Fano metric removes this bias.


• Ex: For the same BSC with transition probability p, the Fano metrics of the two truncated codewords (computed in the original slide) show that the longer path now has the larger metric, correctly indicating that it is the better of the two paths.


Sequential Decoding
• Two approaches to sequential decoding:
• Stack Algorithm
• Fano Algorithm


Stack Algorithm



Stack Algorithm
• Step 1: Load the stack with the origin node of the tree, whose metric is taken to be zero.
• Step 2: Compute the metrics of the successors of the top path in the stack.
• Step 3: Delete the top path from the stack.
• Step 4: Insert the new paths in the stack and rearrange the stack in order of decreasing metric values.
• Step 5: If the top path in the stack ends at a terminal node of the tree, stop. Otherwise, return to Step 2.
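The five steps above can be sketched with a priority queue playing the role of the ordered stack. The sketch below applies it to the Example 2 code over a BSC; the crossover probability p = 0.1 and the Fano bit metric in the form log2 p(ri|vi) + 1 − R are illustrative assumptions:

```python
import heapq
from math import log2

GENS = ((1, 1, 1), (1, 0, 1))                # Example 2 generators

def branch_out(state, u, gens=GENS):
    """Encoder output bits for input u from state (most recent input first)."""
    bits = (u,) + state
    return tuple(sum(b * g for b, g in zip(bits, gen)) % 2 for gen in gens)

def stack_decode(r, L, p=0.1, gens=GENS):
    """Stack sequential decoding over a BSC with an assumed Fano bit metric."""
    n, m = len(gens), len(gens[0]) - 1
    R = 1.0 / n                              # rate of this k = 1 encoder
    good, bad = log2(2 * (1 - p)) - R, log2(2 * p) - R
    stack = [(0.0, [])]                      # Step 1: origin node, metric 0
    while True:
        neg, path = heapq.heappop(stack)     # Steps 2-3: take and delete top path
        j = len(path)
        if j == L + m:
            return path[:L]                  # Step 5: terminal node reached
        state = tuple(reversed(([0] * m + path)[-m:]))
        branch = r[j * n:(j + 1) * n]
        for u in ((0, 1) if j < L else (0,)):   # Step 4: insert successors
            out = branch_out(state, u)
            metric = -neg + sum(good if a == b else bad
                                for a, b in zip(out, branch))
            heapq.heappush(stack, (-metric, path + [u]))

r = [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]      # terminated codeword for 10011
print(stack_decode(r, 5))                    # recovers [1, 0, 0, 1, 1]
```

A min-heap over negated metrics keeps the best path on top, which is exactly the reordering Step 4 describes.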
Fano Algorithm



Fano Algorithm
• The decoder starts at the origin node with T = 0 and M = 0.
• It then looks forward to the best of the 2^k succeeding nodes.
• If MF > T, the decoder moves forward, checking whether the end of the tree has been reached; threshold tightening is done if the node is being visited for the first time.
• Threshold tightening means increasing T to some extent.
Fano Algorithm
• If MF < T, the decoder looks backward to the preceding node.
• If MB < T, then T is lowered and the decoder looks forward to the best node again.
• If MB > T, the decoder moves back to the previous node.
