
Coding Techniques[1, 2, 3, 4, 5]

Manjunatha. P
manjup.jnnce@gmail.com

Professor
Dept. of ECE
J.N.N. College of Engineering, Shimoga

Overview
1 Convolutional Encoding
2 Convolutional Encoder Representation
3 Formulation of the Convolutional Decoding Problem
4 Properties of Convolutional Codes: Distance Property of Convolutional Codes
5 Systematic and Nonsystematic Convolutional Codes
6 Performance Bounds for Convolutional Codes, Coding Gain
7 Other Convolutional Decoding Algorithms:
    Sequential Decoding
    Feedback Decoding

Turbo Codes


Encoding of Convolutional Codes

Encoding of Convolutional Codes

Transmission through a noisy channel causes errors in the data: 1s become 0s and 0s become 1s.
To correct these errors, redundancy bits are added to the information sequence; at the receiver, the correlation introduced by this redundancy is exploited to locate the transmission errors.
Here, only binary transmission is considered.

Types of Errors

Single-bit error
    Only 1 bit in the data unit (packet, frame, cell) has changed: either 1 to 0, or 0 to 1.

Burst error
    2 or more bits in the data unit have changed.
    More likely to occur than the single-bit error, because the duration of the noise is normally longer than the duration of 1 bit.

Convolutional Encoding

Convolutional codes are commonly specified by three parameters (n, k, K), where
    n = total number of bits in the output codeword of the encoder,
    k = number of input bits,
    K = constraint length; it represents the number of k-tuple stages in the encoding shift register.
The constraint length K represents the number of k-bit shifts over which a single information bit can influence the encoder output.
The quantity k/n, called the code rate, is a measure of the efficiency of the code. Commonly, the parameters k and n range from 1 to 8 and K from 2 to 10.

Convolutional Encoding

A block diagram of a binary rate R=1/2 nonsystematic feedforward convolutional encoder with (n, k, K) = (2, 1, 3), i.e., memory order m = K-1 = 2, is shown in Figure 1.
The encoder consists of a K=3-stage shift register together with n=2 modulo-2 adders and a multiplexer for serializing the encoder outputs.
The mod-2 adder can be implemented as an EXCLUSIVE-OR gate.
Since mod-2 addition is a linear operation, the encoder is a linear feedforward shift register.
All convolutional encoders can be implemented using a linear feedforward shift register of this type.

Figure 1: Convolutional encoder (rate=1/2, K=3): the input bit m enters the shift register; the upper mod-2 adder produces the first code symbol u1 and the lower adder the second code symbol u2 of each output branch word.

Convolutional Encoding

Constraint length K=3, k=1 input, n=2 modulo-2 adders, i.e., rate k/n = 1/2.
At each input bit, the bit is shifted into the leftmost stage and the bits already in the register are shifted one position to the right.
The connection vectors for the encoder are as follows:

    g1 = 111        g2 = 101

where a 1 in the ith position indicates a connection between that stage of the shift register and the modulo-2 adder, and a 0 indicates no connection.

Figure 2: Convolutional encoder (rate=1/2, K=3)

Convolutional Encoding

Consider a message vector m = 1 0 1: the bits are input one at a time at the instants t1, t2, and t3.
K-1 = 2 zeros are input at times t4 and t5 to flush the register.
The output sequence of the encoder is 11, 10, 00, 10, 11.

Figure 3: Convolutionally encoded message (register contents and output symbols u1 u2 at times t1 to t6)
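The shifting-and-mod-2 procedure above can be sketched in Python. This is an illustrative sketch, not from the slides; the function name conv_encode and the tap-tuple argument are our own, but the taps (111, 101) are the slides' g1 and g2:

```python
def conv_encode(msg, g=((1, 1, 1), (1, 0, 1))):
    """Feedforward convolutional encoder; K = len(g[0]), newest bit leftmost."""
    K = len(g[0])
    sr = [0] * K
    out = []
    for b in list(msg) + [0] * (K - 1):   # K-1 flush zeros, as on the slide
        sr = [b] + sr[:-1]                # shift the register
        for taps in g:                    # one output symbol per mod-2 adder
            out.append(sum(t & s for t, s in zip(taps, sr)) % 2)
    return out

print(conv_encode([1, 0, 1]))  # [1,1, 1,0, 0,0, 1,0, 1,1] -> 11 10 00 10 11
```

The printed pairs reproduce the slide's output sequence 11 10 00 10 11.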

Impulse Response of the Encoder

The impulse response is the response of the encoder to a single 1 bit that moves through the encoder.
The output sequence for the input 1 is called the impulse response of the encoder.

    Register contents   Branch word
                        u1  u2
    100                 1   1
    010                 1   0
    001                 1   1

    Input sequence:  1  0  0
    Output sequence: 11 10 11

The output for the input sequence m = 1 0 1 can be found by the superposition, or linear addition, of the time-shifted input impulses:

    Input bit m        Output
    1                  11 10 11
    0                     00 00 00
    1                        11 10 11
    Modulo-2 sum       11 10 00 10 11
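The superposition table can be checked mechanically: XOR time-shifted copies of the impulse response, two code bits per message-bit shift. A minimal sketch with our own function name superpose:

```python
def superpose(msg, impulse, n=2):
    """Output = XOR of time-shifted copies of the impulse response.

    msg: message bits; impulse: encoder output for a single 1;
    n: code bits produced per input bit (here 2)."""
    out = [0] * (n * (len(msg) - 1) + len(impulse))
    for i, bit in enumerate(msg):
        if bit:                            # linearity: only 1-bits contribute
            for j, v in enumerate(impulse):
                out[n * i + j] ^= v        # modulo-2 sum of shifted impulses
    return out

# impulse response of the (2,1,3) encoder: 11 10 11
print(superpose([1, 0, 1], [1, 1, 1, 0, 1, 1]))  # -> 11 10 00 10 11
```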

Polynomial Representation

In any linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication.
Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by the corresponding polynomial, and the convolution operation replaced by polynomial multiplication.
In the polynomial representation of a binary sequence, the sequence itself is represented by the coefficients of the polynomial.

Polynomial Representation

For the encoder of Figure 2, g1(X) represents the upper connection and g2(X) represents the lower connection:

    g1(X) = 1 + X + X^2
    g2(X) = 1 + X^2

The estimation of the output sequence is as follows: U(X) = m(X)g1(X) interlaced with m(X)g2(X).

    m(X)g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
    m(X)g2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

    m(X)g1(X) = 1 + X + 0X^2 + X^3 + X^4
    m(X)g2(X) = 1 + 0X + 0X^2 + 0X^3 + X^4

    U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4
    U    = 11 10 00 10 11
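The two products and their interlacing can be verified with a small GF(2) polynomial multiply. A sketch; the helper name polymul_gf2 is ours, and coefficient lists are ordered lowest degree first:

```python
def polymul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # addition in GF(2) is XOR
    return out

m  = [1, 0, 1]   # m(X)  = 1 + X^2
g1 = [1, 1, 1]   # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]   # g2(X) = 1 + X^2

u1 = polymul_gf2(m, g1)   # [1,1,0,1,1] -> 1 + X + X^3 + X^4
u2 = polymul_gf2(m, g2)   # [1,0,0,0,1] -> 1 + X^4
U = [b for pair in zip(u1, u2) for b in pair]   # interlace the two outputs
print(U)  # [1,1, 1,0, 0,0, 1,0, 1,1]
```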

Graphically, there are three ways to represent a convolutional encoder, through which we can gain a better understanding of its operation. These are:
1 State Diagram
2 Tree Diagram
3 Trellis Diagram

State Representation and the State Diagram

The state diagram is a graph of the possible states of the encoder and the possible transitions from one state to another.
Each circle in the state diagram represents a state.
At any one time, the encoder resides in one of these states.
The lines to and from a state show the transitions that are possible as bits arrive.
Assuming that the encoder is initially in state S0 (the all-zero state), the code word corresponding to any given information sequence can be obtained by following the path through the state diagram and noting the corresponding outputs on the branch labels.
Following the last nonzero information block, the encoder is returned to state S0 by a sequence of m all-zero blocks appended to the information sequence.

State Representation and the State Diagram

Table 1: State transition table (rate=1/2, K=3)

    Input   Present State   Next State   Output
    0       00              00           00
    1       00              10           11
    0       01              00           11
    1       01              10           00
    0       10              01           10
    1       10              11           01
    0       11              01           01
    1       11              11           10

The output in the table is the X-OR of the input and the present state contents.
States represent the possible contents of the rightmost K-1 register stages; the state labels are a=00, b=10, c=01, d=11.
For this example there are only two transitions from each state, corresponding to the two possible input bits.
A solid line denotes input bit zero, and a dashed line denotes input bit one.

Figure 4: State diagram for rate=1/2 and K=3. Branches are labeled input(output): from a=00, 0(00) loops back to a and 1(11) goes to b; from b=10, 0(10) goes to c and 1(01) to d; from c=01, 0(11) goes to a and 1(00) to b; from d=11, 0(01) goes to c and 1(10) loops back to d.
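Table 1 can be captured directly as a lookup table and used to encode by walking the state diagram. A sketch; the names TRANSITIONS and encode_by_states are ours:

```python
# (input_bit, present_state) -> (next_state, output), from Table 1
TRANSITIONS = {
    (0, '00'): ('00', '00'), (1, '00'): ('10', '11'),
    (0, '01'): ('00', '11'), (1, '01'): ('10', '00'),
    (0, '10'): ('01', '10'), (1, '10'): ('11', '01'),
    (0, '11'): ('01', '01'), (1, '11'): ('11', '10'),
}

def encode_by_states(bits, start='00'):
    """Walk the state diagram from the all-zero state, collecting branch outputs."""
    state, out = start, []
    for b in bits:
        state, word = TRANSITIONS[(b, state)]
        out.append(word)
    return out

print(encode_by_states([1, 1, 0, 1, 1, 0, 0]))
# ['11', '01', '01', '00', '01', '01', '11']
```

The output matches the branch words tabulated on the next slide for the input 1101100.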

State Representation and the State Diagram

The encoder output for the input 1101100 (register initially 000, state 00) is:

Output sequence: U = 11 01 01 00 01 01 11

    Input   Register    State at   State at    Branch word
    bit     contents    time ti    time ti+1   u1  u2
    1       100         00         10          1   1
    1       110         10         11          0   1
    0       011         11         01          0   1
    1       101         01         10          0   0
    1       110         10         11          0   1
    0       011         11         01          0   1
    0       001         01         00          1   1

The Tree Diagram

The state diagram does not represent time history, i.e., it cannot track the encoder transitions as a function of time; the tree diagram provides this time history.
Encoding is described by traversing the tree from left to right.
If the input bit is a zero, its branch word is found by moving to the next rightmost branch in the upward direction; if the input bit is a one, its branch word is found by moving to the next rightmost branch in the downward direction.
Assuming the contents of the register are initially zero, if the first input bit is a zero its corresponding output is 00; otherwise, if the first input bit is a one, its corresponding output is 11.
The limitation of the tree diagram is that the number of branches increases as a function of 2^L, where L is the number of branch words.

Figure 5: Tree representation (branch words at times t1 to t5; states a=00, b=10, c=01, d=11)
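The 2^L growth of the tree can be made concrete by enumerating every input sequence of a given depth together with its branch-word path. A sketch using the Table 1 transitions; the names STEP and tree_paths are ours:

```python
from itertools import product

# branch transitions of the rate-1/2, K=3 encoder (from Table 1)
STEP = {(0, '00'): ('00', '00'), (1, '00'): ('10', '11'),
        (0, '01'): ('00', '11'), (1, '01'): ('10', '00'),
        (0, '10'): ('01', '10'), (1, '10'): ('11', '01'),
        (0, '11'): ('01', '01'), (1, '11'): ('11', '10')}

def tree_paths(depth):
    """All 2**depth input sequences with their branch-word paths through the tree."""
    paths = {}
    for bits in product([0, 1], repeat=depth):
        state, words = '00', []
        for b in bits:
            state, w = STEP[(b, state)]
            words.append(w)
        paths[bits] = words
    return paths

p = tree_paths(3)
print(len(p))          # 8 branches at depth 3: the 2**L growth
print(p[(1, 0, 1)])    # ['11', '10', '00']
```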

Trellis Diagram

It looks like a tree structure with remerging branches, hence it is called a trellis.
A trellis is more instructive than a tree in that it brings out explicitly the fact that the associated convolutional encoder is a finite-state machine.
The convention used in the figure to distinguish between input symbols 0 and 1 is as follows: a code branch produced by an input 0 is drawn as a solid line, whereas a code branch produced by an input 1 is drawn as a dashed line.
Each input sequence corresponds to a specific path through the trellis.
The trellis contains (L+K) levels, where L is the length of the incoming message sequence, and K is the constraint length of the code.
The levels of the trellis are labeled j = 0, 1, 2, ..., L+K-1. The first (K-1) levels correspond to the encoder's departure from the initial state a, and the last (K-1) levels correspond to the encoder's return to state a.
Not all states can be reached in these two portions of the trellis.
After the initial transient, the trellis contains four nodes at each stage, corresponding to the four states.
After the second stage, each node in the trellis has two incoming paths and two outgoing paths.
Of the two outgoing paths, one corresponds to the input bit 0 and the other corresponds to the input bit 1.

The Trellis Diagram

A solid line denotes input bit zero, and a dashed line denotes input bit one.
Nodes represent the encoder states: the first row represents state a=00, and subsequent rows correspond to states b=10, c=01, and d=11.
At each unit of time, the trellis requires 2^(K-1) nodes to represent the 2^(K-1) possible states.

Figure 6: Trellis diagram for rate=1/2 and K=3 (levels t1 to t6; after the initial transient the trellis reaches steady state)

Encoding of Convolutional Codes: Transform Domain

In any linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication.
Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by the corresponding polynomial, and the convolution operation replaced by polynomial multiplication.

For example, for a (2, 1, m) code, the encoding equations become

    v^(0)(D) = u(D) g^(0)(D)
    v^(1)(D) = u(D) g^(1)(D)

where u(D) = u_0 + u_1 D + u_2 D^2 + ...

The encoded sequences are

    v^(0)(D) = v_0^(0) + v_1^(0) D + v_2^(0) D^2 + ...
    v^(1)(D) = v_0^(1) + v_1^(1) D + v_2^(1) D^2 + ...

The generator polynomials of the code are

    g^(0)(D) = g_0^(0) + g_1^(0) D + ... + g_m^(0) D^m
    g^(1)(D) = g_0^(1) + g_1^(1) D + ... + g_m^(1) D^m

and the code word is V(D) = [v^(0)(D), v^(1)(D)].

After multiplexing, the code word becomes

    V(D) = v^(0)(D^2) + D v^(1)(D^2)

D is a delay operator, the power of D denoting the number of time units a bit is delayed with respect to the initial bit.
The general formula after multiplexing (n = number of outputs) is:

    V(D) = v^(0)(D^n) + D v^(1)(D^n) + ... + D^(n-1) v^(n-1)(D^n)

Example 11.1
For the (2,1,3) convolutional code with g^(0) = [1 0 1 1] and g^(1) = [1 1 1 1], the generator polynomials are

    g^(0)(D) = 1 + D^2 + D^3
    g^(1)(D) = 1 + D + D^2 + D^3

For the information sequence u(D) = 1 + D^2 + D^3 + D^4,

    v^(0)(D) = (1 + D^2 + D^3 + D^4)(1 + D^2 + D^3)
             = 1 + D^2 + D^3 + D^4 + D^2 + D^4 + D^5 + D^6 + D^3 + D^5 + D^6 + D^7
             = 1 + D^7

    v^(1)(D) = (1 + D^2 + D^3 + D^4)(1 + D + D^2 + D^3)
             = 1 + D^2 + D^3 + D^4 + D + D^3 + D^4 + D^5 + D^2 + D^4 + D^5 + D^6 + D^3 + D^5 + D^6 + D^7
             = 1 + D + D^3 + D^4 + D^5 + D^7

and the code word is

    v(D) = [1 + D^7, 1 + D + D^3 + D^4 + D^5 + D^7].

Example 11.1 (continued)
The code word is v(D) = [1 + D^7, 1 + D + D^3 + D^4 + D^5 + D^7].
After multiplexing, the code word becomes v(D) = v^(0)(D^2) + D v^(1)(D^2):

    v^(0)(D^2)   = 1 + D^14
    v^(1)(D^2)   = 1 + D^2 + D^6 + D^8 + D^10 + D^14
    D v^(1)(D^2) = D + D^3 + D^7 + D^9 + D^11 + D^15

    v(D) = v^(0)(D^2) + D v^(1)(D^2) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15

The result is the same as that obtained by convolution and matrix multiplication.
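The multiplexing step just interleaves the two coefficient sequences: even powers of D come from v^(0), odd powers from v^(1). A sketch with our own helper name multiplex:

```python
def multiplex(v0, v1):
    """Interleave two GF(2) coefficient lists: v(D) = v0(D^2) + D*v1(D^2)."""
    n = max(len(v0), len(v1))
    v0 = v0 + [0] * (n - len(v0))   # pad to equal length
    v1 = v1 + [0] * (n - len(v1))
    out = []
    for a, b in zip(v0, v1):
        out += [a, b]               # even slot from v0, odd slot from v1
    return out

v0 = [1, 0, 0, 0, 0, 0, 0, 1]      # 1 + D^7
v1 = [1, 1, 0, 1, 1, 1, 0, 1]      # 1 + D + D^3 + D^4 + D^5 + D^7
v = multiplex(v0, v1)
print([i for i, c in enumerate(v) if c])   # nonzero powers of D
# [0, 1, 3, 7, 9, 11, 14, 15]
```

The printed exponents reproduce v(D) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15 from Example 11.1.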

Decoding of Convolutional Codes

There are several different approaches to the decoding of convolutional codes. These are grouped in two basic categories:
1 Maximum likelihood decoding
    Viterbi decoding
2 Sequential decoding
    i) Stack Algorithm
    ii) Fano Algorithm

The Viterbi Decoding Algorithm

In 1967, Viterbi introduced a decoding algorithm for convolutional codes which has since become known as the Viterbi algorithm.
Later, Omura showed that the Viterbi algorithm was equivalent to finding the shortest path through a weighted graph.
Forney recognized that it was in fact a maximum likelihood decoding algorithm for convolutional codes; that is, the decoder output selected is always the code word that gives the largest value of the log-likelihood function.

In order to understand Viterbi's decoding algorithm, expand the state diagram of the encoder in time (i.e., represent each time unit with a separate state diagram).
The trellis diagram contains h+m+1 time units or levels, labeled from 0 to h+m (h is the information sequence length).
Assuming that the encoder always starts in state S0 and returns to state S0, the first m time units correspond to the encoder's departure from state S0, and the last m time units correspond to the encoder's return to state S0.
Not all states can be reached in the first m or the last m time units. However, in the center portion of the trellis, all states are possible, and each time unit contains a replica of the state diagram.
There are two branches leaving and entering each state.
The upper branch leaving each state at time unit i represents the input ui = 1, while the lower branch represents ui = 0.
Each branch is labeled with the n corresponding outputs vi, and each of the 2^h code words of length N = n(h + m) is represented by a unique path through the trellis.
In the general case of an (n, k, m) code and an information sequence of length kh, there are 2^k branches leaving and entering each state, and 2^kh distinct paths through the trellis corresponding to the 2^kh code words.

A partial path metric for the first j branches of a path can now be expressed as the sum of the branch metrics of those branches:

    M([r|v]_j) = sum over i = 0, ..., j-1 of M(r_i | v_i)

The following algorithm, when applied to the received sequence r from a DMC, finds the path through the trellis with the largest metric (i.e., the maximum likelihood path).
The algorithm processes r in an iterative manner.
At each step, it compares the metrics of all paths entering each state, and stores the path with the largest metric, called the survivor, together with its metric.

The Viterbi Algorithm

Step 1. Beginning at time unit j = m, compute the partial metric for the single path entering each state. Store the path (the survivor) and its metric for each state.

Step 2. Increase j by 1. Compute the partial metric for all the paths entering a state by adding the branch metric entering that state to the metric of the connecting survivor at the preceding time unit. For each state, store the path with the largest metric (the survivor), together with its metric, and eliminate all other paths.

Step 3. If j < h + m, repeat step 2. Otherwise, stop.
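The three steps can be sketched as a hard-decision Viterbi decoder for the rate-1/2, K=3 example code. Here the branch metric is Hamming distance, so survivors minimize rather than maximize the metric (equivalent to maximizing likelihood on a BSC); the names TRANS and viterbi_decode are ours:

```python
# Hard-decision Viterbi decoder for the (2,1,3) code, g1=111, g2=101.
TRANS = {(0, '00'): ('00', '00'), (1, '00'): ('10', '11'),
         (0, '01'): ('00', '11'), (1, '01'): ('10', '00'),
         (0, '10'): ('01', '10'), (1, '10'): ('11', '01'),
         (0, '11'): ('01', '01'), (1, '11'): ('11', '10')}

def viterbi_decode(received):
    """received: list of 2-bit strings; returns the decoded bits (flush bits included)."""
    INF = float('inf')
    states = ['00', '01', '10', '11']
    metric = {s: (0 if s == '00' else INF) for s in states}  # start in all-zero state
    path = {s: [] for s in states}
    for word in received:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for state in states:
            if metric[state] == INF:
                continue                                   # state not yet reachable
            for bit in (0, 1):
                nxt, out = TRANS[(bit, state)]
                d = sum(a != b for a, b in zip(out, word))  # branch metric
                m = metric[state] + d                       # add
                if m < new_metric[nxt]:                     # compare-select
                    new_metric[nxt] = m
                    new_path[nxt] = path[state] + [bit]
        metric, path = new_metric, new_path
    best = min(states, key=lambda s: metric[s])             # final survivor
    return path[best]

# decode the noiseless codeword of m = 1 0 1 (with two flush zeros)
print(viterbi_decode(['11', '10', '00', '10', '11']))  # [1, 0, 1, 0, 0]
```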

Add-Compare-Select Computation

Figure 7 shows the Add-Compare-Select (ACS) logic.
The new state metric for state a' is calculated by adding the previous state metric of a to the branch metric of branch aa', and the previous state metric of c to the branch metric of branch ca'.
This results in two possible path metrics as candidates for the new state metric of a'.
The two candidates are compared, and the one with the largest likelihood (smallest distance) is stored as the new state metric for state a'.
The new path history for state a' is the path history of the winning state augmented by the data bit of the winning path.
The figure also contains the ACS logic for node b', with its new state metric and path history.
The same operation is performed for the paths in the other cells.

Figure 7: Add-Compare-Select logic (adders, comparators, and select-1-of-2 units producing the new metrics and path histories for states a' and b')

G(X) = [1 + X + X^2, 1 + X^2]

    Input   Present State   Next State   Output
    0       00              00           00
    1       00              10           11
    0       01              00           11
    1       01              10           00
    0       10              01           10
    1       10              11           01
    0       11              01           01
    1       11              11           10

Figure 8: Convolutional encoder of R=1/2 and K=3, with its state diagram (states a=00, b=10, c=01, d=11; branches labeled input(output), solid for input 0 and dashed for input 1)

The trellis diagram is redrawn by labeling each branch with the Hamming distance between the received code symbol and the branch word.
The figure shows a message sequence m, the corresponding codeword sequence U, and a noise-corrupted received sequence Z: m = 1 1 0 1 1, transmitted U = 11 01 01 00 01, received Z = 11 01 01 10 01.
The branch words of the trellis are known a priori to both encoder and decoder.
At time t1 the received code symbol is 11; the 00 -> 00 state transition has an output of 00, which compared with the received symbol gives a Hamming distance of 2, and this is marked on that branch.
Similarly, at time t1 the 00 -> 10 state transition has an output of 11; compared with the received symbol its Hamming distance is 0, and this is marked on that branch.

[Figure: trellis labeled with the branch metrics (Hamming distances) for Z at times t1 to t6.]
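The branch-metric labeling above is just a per-symbol Hamming distance; a one-line sketch (the function name hamming is ours):

```python
def hamming(a, b):
    """Branch metric for hard-decision decoding: number of differing bits."""
    return sum(x != y for x, y in zip(a, b))

# received symbol 11 against the two branch words leaving state 00:
print(hamming('11', '00'))  # 2, marked on the 00 -> 00 branch
print(hamming('11', '11'))  # 0, marked on the 00 -> 10 branch
```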

The Viterbi Decoding Algorithm

At each time there are 2^(K-1) states, and at each state two paths are entering (and two paths leaving).
Computing the metrics for the two paths entering each state and eliminating one of them is done for each of the 2^(K-1) states at time ti; then the decoder moves to time ti+1 and repeats the process.

[Figure: trellis stages t1 to t3 with cumulative state metrics. After t2: Ta=2, Tb=0; after t3: Ta=3, Tb=3, Tc=2, Td=0.]

At time t3 two paths diverge from each state; as a result, two paths enter each state at time t4.
The path with the larger cumulative path metric entering each state can be eliminated. If the two paths have the same cumulative path metric, one path is selected arbitrarily.

[Figure: survivor selection at t4. State metrics after t4: Ta=3, Tb=3, Tc=0, Td=2.]

[Figure: survivor selection continues at t5. State metrics after t5: Ta=1, Tb=1, Tc=3, Td=2.]

[Figure: survivor selection at t6. State metrics after t6: Ta=2, Tb=2, Tc=2, Td=1.]

[Final trellis for Z with the branch metrics, the surviving state metrics at each stage, and the decoded output path marked.]

Figure 9: Add-compare-select computation in Viterbi decoding

The Viterbi Decoding Algorithm

Sklar 7.16: Using the branch word information on the encoder trellis of Figure 7.7, decode the sequence Z = (01 11 00 01 11, rest all 0) using hard-decision Viterbi decoding.
Solution: The trellis diagram and its branch outputs are shown in the first figure; in the second figure the branch metrics, the path metrics, and the decoded path are shown.

[Figures: trellis with branch words 00/11, 10/01, etc. at times t1 to t6; the same trellis labeled with branch metrics and state metrics for Z, with the decoded path marked.]

Proakis IV Edition, 8-26
G(D) = [1, 1 + D^2, 1 + D + D^2]

Table 2: State transition table for the (3,1,2) encoder

    Input   Present State   Next State   Output
    0       00              00           000
    1       00              10           111
    0       01              00           011
    1       01              10           100
    0       10              01           001
    1       10              11           110
    0       11              01           010
    1       11              11           101

Figure 10: R=1/3 convolutional encoder with memory m=2 (k=1 input, a shift register of three stages SR, n=3 outputs), and its state diagram with states s0=00, s1=10, s2=01, s3=11; branches labeled input(output), e.g., 1(111), 0(000), 0(011), 1(100), 0(001), 1(110), 0(010), 1(101).
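The generator G(D) = [1, 1 + D^2, 1 + D + D^2] can be sketched directly as tap equations on the two memory stages (the function name encode_r13 is ours):

```python
def encode_r13(bits):
    """Rate-1/3, memory-2 encoder for G(D) = [1, 1 + D^2, 1 + D + D^2]."""
    s1 = s2 = 0                     # the two memory stages (D and D^2)
    out = []
    for u in bits:
        v0 = u                      # generator 1
        v1 = u ^ s2                 # generator 1 + D^2
        v2 = u ^ s1 ^ s2            # generator 1 + D + D^2
        out.append(f'{v0}{v1}{v2}')
        s1, s2 = u, s1              # shift
    return out

print(encode_r13([1, 0, 0]))  # impulse response: ['111', '001', '011']
```

The impulse response 111, 001, 011 agrees with the 1(111), 0(001), 0(011) branches in Table 2.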

[Trellis diagram for the (3,1,2) code over several time units: from each state the solid branch (input 0) and dashed branch (input 1) are labeled with the 3-bit branch words of Table 2 (000, 111, 011, 100, 001, 110, 010, 101).]

[Worked Viterbi decoding on the (3,1,2) trellis: each received 3-bit block of r is compared with the branch words, the path metrics are accumulated at each node, and eliminated paths are marked X while the survivor metrics are carried forward.]

Sequential Decoding: The Fano Decoding Algorithm

The decoder starts at the origin node with the threshold T=0 and the metric value M=0.
It looks forward to the best of the 2^k succeeding nodes, i.e., the one with the largest metric.
If Mf is the metric of the forward node being examined and Mf >= T, the decoder moves to this node.
It then checks whether the end of the tree has been reached; otherwise, threshold tightening is performed if the node is being examined for the first time, i.e., T is increased by the largest multiple of a threshold increment Δ such that the new threshold does not exceed the current metric.
If the node has been examined previously, no threshold tightening is performed.
Then the decoder again looks forward to the best succeeding node.
If Mf < T, the decoder looks backward to the preceding node.
If Mb is the metric of the backward node being examined, and Mb < T, then T is lowered by Δ and the look-forward-to-the-best-node step is repeated.
If Mb >= T, the decoder moves back to the preceding node.

In this example the received sequence is 11 00 01 10 01.

Worked example (Z = 11 00 01 10 01):

At time t1 the decoder receives symbol 11; it moves downward and decodes the bit as 1.
At time t2 it receives symbol 00; there are two branches, 10 and 01; the decoder moves upward arbitrarily and decodes the bit as 0, with disagreement count = 1.
At time t3 it receives symbol 01; there are two branches, 11 and 00; the decoder moves upward arbitrarily and decodes the bit as 0, with disagreement count = 2.
At time t4 it receives symbol 10; there are two branches, 00 and 11; the decoder moves upward arbitrarily and decodes the bit as 0, with disagreement count = 3.
A count of 3 is the turnaround criterion: the decoder backs out and tries an alternate path, decrementing the count, i.e., count = 2.
At time t4 it receives symbol 10 and follows the 11 branch; the decoder moves downward and decodes the bit as 0, again in disagreement, so count = 3.
The count of 3 again triggers turnaround; all the alternative paths at this level have been traversed, so the decoder backs out, decrements the count to count = 1, and moves to the node at the t3 level.

[Figure: code tree for the Fano decoding example, time steps t1-t6, with the received sequence Z = 11 00 01 10 01 beneath the tree.]

At time t3 it receives symbol 01; of the two branches 11 and 00 the untried one is 00, so the decoder moves downward and decodes the bit as 1, with disagreement count = 2.
At time t4 it receives symbol 10 and follows the 10 branch; the decoder moves upward and decodes the bit as 0, in agreement, with count = 2.
At time t5 it receives symbol 01; there are two branches, 11 and 00; the decoder moves upward arbitrarily and decodes the bit as 0, with disagreement count = 3.
At this count the decoder backs up and resets the count to 2; at time t5 it tries the alternate path 00, with a disagreement of 1, and again count = 3.
The decoder backs out of this path and sets count = 2; all the alternate paths at the t5 level have been traversed, so the decoder returns to the node at t4 and resets count = 1.
At t4, with received data 10, the path 01 makes count = 3, so the decoder backs up to the node at time t2.
At time t2 it receives symbol 00 and now follows the branch word 01, with a disagreement of 1 and count = 1.

[Figure: continuation of the Fano decoding example on the code tree, Z = 11 00 01 10 01, time steps t1-t6.]
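The turnaround procedure of this worked example can be sketched as a depth-first search over the code tree. The slides do not reproduce the encoder, so the sketch assumes the standard rate 1/2, K = 3 encoder with g1(X) = 1 + X + X^2 and g2(X) = 1 + X^2, whose branch words match the 11/00/01/10 labels visible in the tree; this is a simplification of the full threshold-based Fano algorithm, not the algorithm itself.

```python
# Count-based backtracking tree search, as in the worked example.
# Assumed encoder: rate 1/2, K = 3, g1 = [1,1,1], g2 = [1,0,1].

def branch(state, bit):
    """One encoder transition; state = (s1, s2), s1 most recent."""
    s1, s2 = state
    return (bit, s1), (bit ^ s1 ^ s2, bit ^ s2)

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))

def search(received, state=(0, 0), count=0, turnaround=3):
    """Depth-first search that backs out of a path as soon as the
    cumulative disagreement count reaches the turnaround criterion."""
    if not received:
        return []
    options = []
    for bit in (0, 1):
        nxt, out = branch(state, bit)
        options.append((dist(out, received[0]), bit, nxt))
    options.sort()                      # better-matching branch first
    for d, bit, nxt in options:
        if count + d >= turnaround:     # turnaround criterion reached:
            continue                    # back out, try the alternate path
        tail = search(received[1:], nxt, count + d, turnaround)
        if tail is not None:
            return [bit] + tail
    return None

decoded = search([(1, 1), (0, 0), (0, 1), (1, 0), (0, 1)])
```

On Z = 11 00 01 10 01 the search backs out exactly when the disagreement count reaches 3, as in the slides, and settles on a path with only two disagreements.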

Feedback Decoding

The feedback decoder decodes at stage j based on metrics computed from stages j, j+1, ..., j+m, where m is a preselected integer. The look-ahead length L is defined as L = m + 1. With L = 3, the decoder decides the bit at stage j based on stages j, j+1, j+2.

Assume the received sequence is Z = 1 1 0 0 0 1 0 0 0 1. Examine the 2^L = 8 paths from time t1 through t3 and compute the Hamming metrics of these eight paths.

Beginning with the first branch, the decoder computes the Hamming path metrics and decides the bit is zero if the minimum distance lies in the upper part of the tree, and one if it lies in the lower part.

Upper-half metrics: 3, 3, 6, 4
Lower-half metrics: 2, 2, 1, 3

The minimum metric is contained in the lower part of the tree; therefore the first decoded bit is one. The next step is to extend the lower part of the tree and slide over two code symbols:

Upper-half metrics: 2, 4, 3, 3
Lower-half metrics: 3, 1, 4, 4

The same procedure is continued until the entire message is decoded.

[Figure: code tree for the feedback decoding example, time steps t1-t6, with Z = 11 00 01 00 01 beneath the tree.]
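The metric computation above can be reproduced numerically. A minimal sketch, again assuming the chapter's rate 1/2, K = 3 encoder g1 = 1 + X + X^2, g2 = 1 + X^2 (an assumption, since the extracted slide does not show the encoder): it enumerates the 2^L = 8 length-3 paths and computes their Hamming metrics against the first three received branches.

```python
# Feedback decoding, first look-ahead step: enumerate the 2^L paths
# (L = 3) and compute each path's Hamming metric.
from itertools import product

def branch(state, bit):
    """One transition of the assumed encoder g1=[1,1,1], g2=[1,0,1]."""
    s1, s2 = state
    return (bit, s1), (bit ^ s1 ^ s2, bit ^ s2)

def path_metric(bits, received, state=(0, 0)):
    metric = 0
    for bit, r in zip(bits, received):
        state, out = branch(state, bit)
        metric += (out[0] != r[0]) + (out[1] != r[1])
    return metric

def first_bit_decision(received, L=3):
    """Decide the first bit: 0 if the minimum metric lies in the upper
    half of the tree (first bit 0), 1 if it lies in the lower half."""
    upper = [path_metric((0,) + t, received)
             for t in product((0, 1), repeat=L - 1)]
    lower = [path_metric((1,) + t, received)
             for t in product((0, 1), repeat=L - 1)]
    return (0 if min(upper) <= min(lower) else 1), upper, lower
```

For Z = 11 00 01 this yields the upper-half metrics 3, 3, 6, 4 and lower-half metrics 2, 2, 1, 3 quoted above, so the first decoded bit is one.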

Structural Properties of Convolutional Codes

Signal Flow Graph:

To estimate the error-correcting capability and the coding gain, the transfer function T(D) has to be estimated for the given state diagram of the convolutional code. The transfer function is estimated by the signal flow graph method.

A signal flow graph consists of nodes and directed branches, and it operates by the following rules:
1. A branch multiplies the signal at its input node by the transmittance characterizing that branch.
2. A node with incoming branches sums the signals produced by all of these branches.
3. The signal at a node is applied equally to all the branches outgoing from that node.
4. The transfer function of the graph is the ratio of the output signal to the input signal.

The branches of the graph are labeled D^0 = 1, D^1, D^2, D^3, ..., where the exponent of D denotes the Hamming distance between the sequence of output bits corresponding to that branch and the sequence of output bits corresponding to the all-zeros branch.

Any system operation can be represented by a set of linear algebraic equations, which express the relationships between the system variables. A signal flow graph is a graphical representation of the relationships between the variables of such a set of linear equations. For example:

ax1 + dx3 = x2   (for node x2 the incoming path gains are ax1 and dx3)
bx2 + fx4 = x3   (for node x3 the incoming path gains are bx2 and fx4)
ex2 + cx3 = x4   (similarly for the other nodes)
gx3 + hx4 = x5

Basic Terminologies:
Node: A node represents a system variable.
Branch: The line joining x1 and x2 forms a branch. Branch j-k originates at node j and terminates at node k, the direction from j to k being indicated by an arrowhead on the branch. Each branch j-k has an associated quantity called the branch gain, and each node j has an associated quantity called the node signal.
Source (Input) Node: A source is a node having only outgoing branches.
Sink (Output) Node: A sink is a node having only incoming branches.

Path: A path is any continuous succession of branches traversed in the indicated branch directions.
Forward path: A forward path is a path from source to sink along which no node is encountered more than once.
Loop: A loop is a path which originates and terminates at the same node (a closed path that does not cross the same point more than once).
Non-Touching Loops: Loops are non-touching (non-interacting) if they do not pass through a common node, i.e., if they have no nodes in common.
Feedback Loop: A feedback loop is a path that forms a closed cycle along which each node is encountered once per cycle.
Path Gain: A path gain is the product of the branch gains along that path.
Loop Gain: The loop gain of a feedback loop is the product of the gains of the branches forming that loop. The gain of a flow graph is the signal appearing at the sink per unit signal applied at the source.
To find the graph gain, first locate all possible sets of non-touching loops and write the algebraic sum of their gain products as the denominator Δ of the gain formula.

Mason's Gain Formula:

The general expression for the graph gain may be written as

G = ( Σ_k G_k Δ_k ) / Δ

where
G_k = gain of the kth forward path
Δ = 1 - Σ_m P_m1 + Σ_m P_m2 - Σ_m P_m3 + ...
P_mr = gain product of the mth possible combination of r non-touching loops,
i.e., Δ = 1 - (sum of all loop gains) + (sum of the gain products of all combinations of two non-touching loops) - (sum of the gain products of all combinations of three non-touching loops) + ...
Δ_k = the value of Δ for that part of the graph not touching the kth forward path.

The Transfer Function of a Convolutional Code:

For every convolutional code, the transfer function gives information about the various paths through the trellis that start from the all-zero state and return to this state for the first time. According to the coding convention described before, any code word of a convolutional encoder corresponds to a path through the trellis that starts from the all-zero state and returns to the all-zero state.
The performance of a convolutional code depends on the distance properties of the code. The free distance of a convolutional code is defined as the minimum Hamming distance between any two code words in the code. A convolutional code with free distance dfree can correct t errors if and only if dfree is greater than 2t.
The free distance can be obtained from the state diagram of the convolutional encoder.
Any nonzero code sequence corresponds to a complete path beginning and ending at the all-zero state.
The state diagram is modified into a graph, called the signal flow graph, with a single input and a single output.

There is one path which departs from the all-zeros path at time t1 and remerges with it at time t4, with a distance of 5.
There are two paths of distance 6: one departs from the all-zeros path at time t1 and remerges at time t5, and the other departs at time t1 and remerges at time t6.

[Figure 11: Trellis diagram. Figure 12: Diagram showing distance from the all-zeros path.]

The branches of the state diagram are labeled D^0 = 1, D, or D^2, where the exponent of D denotes the Hamming distance from the branch word of that branch to the all-zeros branch.
Node a is split into two nodes, labeled a and e, one representing the input and the other the output of the state diagram.
All paths originate at a = 00 and terminate at e = 00.

[Figure 13: State diagram. Figure 14: Signal flow graph.]

The node equations of the signal flow graph are:
Xb = D^2 Xa + Xc
Xc = D Xb + D Xd
Xd = D Xb + D Xd
Xe = D^2 Xc
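The node equations can be sanity-checked numerically: fix a rational value of D, solve the equations with Xa = 1, and compare Xe with the closed form D^5/(1 - 2D) that Mason's formula yields for this graph. A small sketch using exact rational arithmetic:

```python
# Check the signal-flow-graph node equations
#   Xb = D^2*Xa + Xc,  Xc = D*Xb + D*Xd,  Xd = D*Xb + D*Xd,  Xe = D^2*Xc
# by eliminating variables with Xa = 1 and evaluating at a rational D.
from fractions import Fraction

def transfer_at(D):
    Xa = Fraction(1)
    # From Xd = D*Xb + D*Xd:          Xd = D*Xb / (1 - D)
    # Substitute into Xc:             Xc = D*Xb / (1 - D)
    # Substitute into Xb's equation:  Xb = D^2*(1 - D)*Xa / (1 - 2D)
    Xb = D**2 * (1 - D) * Xa / (1 - 2 * D)
    Xc = D * Xb / (1 - D)
    Xe = D**2 * Xc
    return Xe
```

Evaluating at, say, D = 1/10 confirms that the equations reduce to Xe/Xa = D^5/(1 - 2D).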

There are two forward paths:
Gain of the forward path (a b c e): G1 = D^2 · D · D^2 = D^5
Gain of the forward path (a b d c e): G2 = D^2 · D · D · D^2 = D^6

There are three loops in the graph:
Gain of the self loop at d: L1 = D
Gain of the loop (c b): L2 = D · 1 = D
Gain of the loop (c d b): L3 = D · D · 1 = D^2

The only non-touching pair of loops is (L1, L2), so
Δ = 1 - (L1 + L2 + L3) + (L1 L2) = 1 - (D + D + D^2) + D^2 = 1 - 2D
Δ1 = 1 - L1 = 1 - D   (the self loop at d does not touch the path a b c e)
Δ2 = 1   (no loop is non-touching with the path a b d c e)

T(D) = ( Σ_k G_k Δ_k ) / Δ = [ D^5 (1 - D) + D^6 (1) ] / (1 - 2D) = ( D^5 - D^6 + D^6 ) / (1 - 2D) = D^5 / (1 - 2D)

Using the binomial expansion (1 - x)^-1 = 1 + x + x^2 + x^3 + ...,
T(D) = D^5 + 2D^6 + 4D^7 + ... + 2^l D^(l+5) + ...

[Figure 15: State diagram. Figure 16: Signal flow graph. Figure 17: Loops in the signal flow graph.]
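The binomial expansion can be checked by polynomial long division; the sketch below expands a rational T(D) as a power series and confirms the coefficients 1, 2, 4, 8, ... (2^l paths at distance l + 5):

```python
# Expand T(D) = num(D)/den(D) as a power series by long division.
# Coefficient lists are low-order first: 1 - 2D  ->  [1, -2].

def series(num, den, nterms):
    """First nterms coefficients of num/den (den[0] must be nonzero)."""
    coeffs, rem = [], list(num) + [0] * nterms
    for i in range(nterms):
        c = rem[i] / den[0]
        coeffs.append(c)
        for j, d in enumerate(den):
            rem[i + j] -= c * d
    return coeffs

# T(D) = D^5 / (1 - 2D): numerator [0,0,0,0,0,1], denominator [1,-2]
coeffs = series([0, 0, 0, 0, 0, 1], [1, -2], 10)
```

The lowest nonzero power of D in the expansion is the free distance of the code.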

There is one path at distance 5 from the all-zeros path; it departs from the all-zeros path at time t1 and remerges with it at time t4.
Similarly, there are two paths at distance 6: one departs at time t1 and remerges at time t5, the other departs at time t1 and remerges at time t6, and so on.
In general, there are 2^l paths at distance l + 5 from the all-zeros path, where l = 0, 1, 2, ...
The minimum distance in the set of all arbitrarily long paths that diverge and remerge is called the minimum free distance, or simply the free distance; it is 5 in this example.
The error-correcting capability of a code with free distance df is

t = ⌊(df - 1)/2⌋ = ⌊(5 - 1)/2⌋ = 2

The encoder can correct any two channel errors.

[Figure 18: Trellis diagram showing distance from the all-zeros path.]

Problem 9.41 (Proakis): A convolutional code is described by

g1 = [1 0 0]
g2 = [1 0 1]
g3 = [1 1 1]

a. Draw the corresponding encoder.
b. Draw the state-transition diagram.
c. Draw the trellis diagram.
d. Find the transfer function and the free distance.
e. If the received sequence is r = (110, 110, 110, 111, 010, 101, 101), find the transmitted bit sequence using the Viterbi algorithm.

Solution:

Table 3: State transition table for the (3,1,2) encoder (k = 1, n = 3)

Input | Present State | Next State | Output
0     | 00            | 00         | 000
1     | 00            | 10         | 111
0     | 01            | 00         | 011
1     | 01            | 10         | 100
0     | 10            | 01         | 001
1     | 10            | 11         | 110
0     | 11            | 01         | 010
1     | 11            | 11         | 101

State labels: 00 = a, 01 = b, 10 = c, 11 = d.

[Figure 19: State diagram for the rate 1/3, m = 2 convolutional code (3,1,2), with branches labeled input(output) as in Table 3.]
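Table 3 can be generated mechanically from the connection vectors. A minimal sketch (state = the two most recent input bits, most recent first):

```python
# Generate the state transition table of a (3,1,2) encoder from its
# connection vectors (here g1 = [1,0,0], g2 = [1,0,1], g3 = [1,1,1]).

G = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]

def step(state, u, G=G):
    """state = (s1, s2), s1 the most recent past input bit."""
    reg = (u,) + state                 # register contents [u, s1, s2]
    out = tuple(sum(g * r for g, r in zip(grow, reg)) % 2 for grow in G)
    return (u, state[0]), out          # shifted state, n output bits

table = []
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for u in (0, 1):
        nxt, out = step(state, u)
        table.append((u, state, nxt, out))
```

Each row reproduces one line of Table 3, e.g., present state 10 with input 0 goes to state 01 with output 001.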

[Figure 20: Trellis diagram for the rate 1/3, m = 2 convolutional code (3,1,2); after the initial transient the trellis reaches steady state, with branch labels taken from the state transition table.]

There are two forward paths:
Gain of the forward path (a c b e): P1 = D^3 · D · D^2 = D^6
Gain of the forward path (a c d b e): P2 = D^3 · D^2 · D · D^2 = D^8

There are three loops in the graph:
Gain of the self loop at d: L1 = D^2
Gain of the loop (c b): L2 = D · D = D^2
Gain of the loop (c d b): L3 = D · D^2 · D = D^4

Δ = 1 - (L1 + L2 + L3) + (L1 L2) = 1 - (D^2 + D^2 + D^4) + D^4 = 1 - 2D^2
Δ1 = 1 - L1 = 1 - D^2
Δ2 = 1

T(D) = ( Σ_k G_k Δ_k ) / Δ = [ D^6 (1 - D^2) + D^8 (1) ] / (1 - 2D^2) = ( D^6 - D^8 + D^8 ) / (1 - 2D^2) = D^6 / (1 - 2D^2)

Using the binomial expansion (1 - x)^-1 = 1 + x + x^2 + ...,
T(D) = D^6 + 2D^8 + 4D^10 + 8D^12 + ...

Hence dfree = 6.

[Figure 21: State diagram. Figure 22: Signal flow graph. Figure 23: Loops in the signal flow graph.]

Applying the Viterbi algorithm to the received sequence r = (110, 110, 110, 111, 010, 101, 101) on the trellis of the (3,1,2) code, the losing paths (marked X) are eliminated at each step and the survivor metrics are updated.

The path traced by the Viterbi algorithm is (111, 110, 010, 011, 000, 000, 000), and the decoded sequence is (1 1 0 0 0 0 0).

[Figure: Viterbi decoding trellis for r, showing survivor paths and accumulated path metrics.]
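Part e can be sketched as a small dynamic program over the four states: hard-decision Hamming metrics, one survivor per state, and a path forced to terminate in the all-zeros state. This is a minimal illustration, not an optimized decoder; ties in the trellis mean the decoded bits may differ from the slide's choice while attaining the same minimum metric.

```python
# Hard-decision Viterbi decoding of the (3,1,2) code with
# g1 = [1,0,0], g2 = [1,0,1], g3 = [1,1,1].

G = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]

def step(state, u):
    """One encoder transition; state = (s1, s2), s1 most recent."""
    reg = (u,) + state
    out = tuple(sum(g * r for g, r in zip(grow, reg)) % 2 for grow in G)
    return (u, state[0]), out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi(received):
    # survivors: state -> (accumulated metric, decoded bits so far)
    survivors = {(0, 0): (0, [])}
    for r in received:
        nxt = {}
        for state, (metric, bits) in survivors.items():
            for u in (0, 1):
                ns, out = step(state, u)
                m = metric + hamming(out, r)
                if ns not in nxt or m < nxt[ns][0]:
                    nxt[ns] = (m, bits + [u])
        survivors = nxt
    return survivors[(0, 0)]   # best path terminating in state 00

r = [(1, 1, 0), (1, 1, 0), (1, 1, 0), (1, 1, 1),
     (0, 1, 0), (1, 0, 1), (1, 0, 1)]
metric, decoded = viterbi(r)
```

The minimum terminated path metric is 8, which is also the metric of the slide's answer (1 1 0 0 0 0 0).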

Problem 9.41 (Proakis): A convolutional code is described by

g1 = [1 1 0]
g2 = [1 0 1]
g3 = [1 1 1]

a. Draw the corresponding encoder.
b. Draw the state-transition diagram.
c. Draw the trellis diagram.
d. Find the transfer function and the free distance.

Solution:

a. The encoder has a single input and three modulo-2 adders connected according to g1, g2, g3 (encoder figure not reproduced here).

Table 4: State transition table for the (3,1,2) encoder

Input | Present State | Next State | Output
0     | 00            | 00         | 000
1     | 00            | 10         | 111
0     | 01            | 00         | 011
1     | 01            | 10         | 100
0     | 10            | 01         | 101
1     | 10            | 11         | 010
0     | 11            | 01         | 110
1     | 11            | 11         | 001

b. and c.

[Figure: state diagram (b) and trellis diagram (c) for the (3,1,2) code with g1 = [1 1 0], g2 = [1 0 1], g3 = [1 1 1]; branch labels input(output) follow Table 4, e.g., a --1(111)--> c, c --0(101)--> b, d --1(001)--> d. After the initial transient the trellis reaches steady state.]

d. There are two forward paths:
Gain of the forward path (a c b e): P1 = D^3 · D^2 · D^2 = D^7
Gain of the forward path (a c d b e): P2 = D^3 · D · D^2 · D^2 = D^8

There are three loops in the graph:
Gain of the self loop at d: L1 = D
Gain of the loop (c b): L2 = D · D^2 = D^3
Gain of the loop (c d b): L3 = D · D · D^2 = D^4

Δ = 1 - (L1 + L2 + L3) + (L1 L2) = 1 - (D + D^3 + D^4) + D^4 = 1 - D - D^3
Δ1 = 1 - L1 = 1 - D
Δ2 = 1

T(D) = ( Σ_k G_k Δ_k ) / Δ = [ D^7 (1 - D) + D^8 (1) ] / (1 - D - D^3) = ( D^7 - D^8 + D^8 ) / (1 - D - D^3) = D^7 / (1 - D - D^3)

Expanding, T(D) = D^7 + D^8 + D^9 + ...

Hence dfree = 7.

[Figure 24: State diagram. Figure 25: Signal flow graph. Figure 26: Loops in the signal flow graph.]

Systematic and Nonsystematic Convolutional Codes

A systematic convolutional code is one in which the input k-tuple appears as part of the output branch n-tuple.
The figure shows a binary rate 1/2, K = 3 systematic encoder.
For linear block codes, any nonsystematic code can be transformed into a systematic code with the same block distance properties. This is not the case for convolutional codes.
The reason is that convolutional codes depend largely on free distance; making the code systematic, in general, reduces the maximum possible free distance for a given constraint length and rate.
The table shows the maximum free distance for rate 1/2 systematic and nonsystematic codes for K = 2 through 8.

[Figure 27: Systematic convolutional encoder.]

Constraint Length | Free Distance (Systematic) | Free Distance (Nonsystematic)
2 | 3 | 3
3 | 4 | 5
4 | 5 | 6
6 | 6 | 7
7 | 6 | 10
8 | 7 | 10
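As an illustration of the systematic property, here is a sketch of a rate 1/2, K = 3 systematic encoder. The taps of the parity output are not visible in the extracted figure, so g2 = 1 + X + X^2 is assumed purely for illustration; the defining property is that the first output coordinate is the input bit itself.

```python
# Sketch of a systematic rate 1/2, K = 3 encoder: the first output
# is the input bit (g1 = [1,0,0]); the parity taps g2 = [1,1,1] are
# an assumption for illustration, not taken from the figure.

def systematic_encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        out.append((u, u ^ s1 ^ s2))   # (information bit, parity bit)
        s1, s2 = u, s1                 # shift the register
    return out

pairs = systematic_encode([1, 0, 1, 1, 0])
```

Whatever the parity taps, the information bits can be read directly off the first coordinate of each output pair.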

Catastrophic Error Propagation in Convolutional Codes

A catastrophic error is an event whereby a finite number of code symbol errors causes an infinite number of decoded data bit errors.
Massey and Sain derived a necessary and sufficient condition for a convolutional code to display catastrophic error propagation: the generator polynomials have a common polynomial factor.
For example, consider the rate 1/2, K = 3 encoder with upper and lower polynomials

g1(X) = 1 + X
g2(X) = 1 + X^2

The generators g1 and g2 have the common polynomial factor 1 + X, since 1 + X^2 = (1 + X)(1 + X).

[Figure: encoder and state diagram for this rate 1/2, K = 3 code.]

From the state diagram of a code of any rate, catastrophic errors occur if and only if some closed-loop path in the diagram has zero weight (zero distance from the all-zeros path).
Assume that the all-zeros path is the correct path and that the incorrect path a, b, d, d, ..., d, c has exactly 6 ones, no matter how many times the zero-weight self loop at d is traversed. A finite number of channel errors can therefore select this path, while the number of decoded bit errors grows with the number of traversals of the self loop at d.

[Figure 29: Catastrophic error propagation.]

Coding Gain

The coding gain, expressed in dB, is the reduction in the required Eb/N0 to achieve a specified error probability with the coded system, compared with an uncoded system using the same modulation and channel characteristics.
The basic formula for computing the coding gain is

G (dB) = (Eb/N0)_u (dB) - (Eb/N0)_c (dB)

where (Eb/N0)_u and (Eb/N0)_c are the required uncoded and coded Eb/N0, respectively.

The table lists an upper bound on the coding gain, compared with uncoded coherent BPSK, for several maximum-free-distance convolutional codes with constraint lengths from 3 to 9, over a channel with hard-decision decoding.

Constraint Length | df (rate 1/2) | Upper bound (dB) | df (rate 1/3) | Upper bound (dB)
3 | 5  | 3.97 | 8  | 4.26
4 | 6  | 4.76 | 10 | 5.23
5 | 7  | 5.43 | 12 | 6.02
6 | 8  | 6.00 | 13 | 6.37
7 | 10 | 6.99 | 15 | 6.99
8 | 10 | 6.99 | 16 | 7.27
9 | 12 | 7.78 | 18 | 7.78

The coding gain cannot increase indefinitely; it has an upper bound, which in dB is

coding gain ≤ 10 log10(r · df)

where r is the code rate and df is the free distance.
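The tabulated bounds follow directly from this formula, e.g.:

```python
# Upper bound on coding gain: G <= 10*log10(r * df) dB.
import math

def coding_gain_bound(r, df):
    return 10 * math.log10(r * df)

# Rate 1/2, df = 5 (K = 3) and rate 1/3, df = 8 (K = 3) from the table:
b1 = coding_gain_bound(0.5, 5)
b2 = coding_gain_bound(1 / 3, 8)
```

These evaluate to about 3.98 dB and 4.26 dB, matching the first row of the table (the table truncates 3.979 to 3.97).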

Solutions

Solutions to Chapter 7: Channel Coding (Bernard Sklar)

Note: The state diagram, tree diagram, and trellis diagram for K = 3 have the same structure; only the outputs change, depending on the connection vectors.

7.1 Draw the state diagram, tree diagram, and trellis diagram for the K = 3, rate 1/3 code generated by
g1(X) = X + X^2, g2(X) = 1 + X + X^2, g3(X) = 1 + X

Table 5: State transition table (rate 1/3, K = 3)

Input | State at time ti | State at time ti+1 | Output
0     | 00               | 00                 | 000
1     | 00               | 10                 | 011
0     | 01               | 00                 | 110
1     | 01               | 10                 | 101
0     | 10               | 01                 | 111
1     | 10               | 11                 | 100
0     | 11               | 01                 | 001
1     | 11               | 11                 | 010

States represent the possible contents of the rightmost K - 1 register stages.
There are only two transitions from each state, corresponding to the two possible input bits; a solid line denotes input bit zero and a dashed line denotes input bit one.
State labels: 00 = a, 10 = b, 01 = c, 11 = d.

[Figure 30: State diagram for the rate 1/3, K = 3 code.]

[Figure 31: Tree diagram and Figure 32: Trellis diagram for the rate 1/3, K = 3 code of Problem 7.1; after the initial transient the trellis reaches steady state.]

7.2 Given a K = 3, rate 1/2 binary convolutional code with the partially completed state diagram shown in Figure P7.1, find the complete state diagram and sketch the encoder diagram.

Table 6: State transition table (rate 1/2, K = 3)

Input | State at time ti | State at time ti+1 | Output
0     | 00               | 00                 | 00
1     | 00               | 10                 | 11
0     | 01               | 00                 | 01
1     | 01               | 10                 | 10
0     | 10               | 01                 | 10
1     | 10               | 11                 | 01
0     | 11               | 01                 | 11
1     | 11               | 11                 | 00

States represent the possible contents of the rightmost K - 1 register stages; there are two transitions from each state, corresponding to the two possible input bits.

[Figure 33: Partially completed state diagram. Figure 34: Encoder diagram. Figure 35: Modified (completed) state diagram.]

7.3 Draw the state diagram, tree diagram, and trellis diagram for the convolutional encoder characterized by the block diagram in Figure P7.2.

Table 7: State transition table (rate 1/2, K = 3)

Input | State at time ti | State at time ti+1 | Output
0     | 00               | 00                 | 00
1     | 00               | 10                 | 10
0     | 01               | 00                 | 11
1     | 01               | 10                 | 01
0     | 10               | 01                 | 11
1     | 10               | 11                 | 01
0     | 11               | 01                 | 00
1     | 11               | 11                 | 10

States represent the possible contents of the rightmost K - 1 register stages; solid lines denote input bit zero, dashed lines input bit one.
State labels: 00 = a, 10 = b, 01 = c, 11 = d.

[Figure 36: State diagram for the code.]

[Figure 37: Tree diagram and Figure 38: Trellis diagram for the rate 1/2, K = 3 code of Problem 7.3; after the initial transient the trellis reaches steady state.]

7.5 Consider the convolutional encoder shown in the figure. (a) Write the connection vectors and polynomials for this encoder. (b) Draw the state diagram, tree diagram, and trellis diagram.

[Figure 39: Problem 7.5 encoder.]

Solution:
Connection vectors and polynomials:
g1 = [1 0 1], g2 = [0 1 1]
g1(X) = 1 + X^2, g2(X) = X + X^2

Table 8: State transition table (rate 1/2, K = 3)

Input | State at time ti | State at time ti+1 | Output
0     | 00               | 00                 | 00
1     | 00               | 10                 | 10
0     | 01               | 00                 | 11
1     | 01               | 10                 | 01
0     | 10               | 01                 | 01
1     | 10               | 11                 | 11
0     | 11               | 01                 | 10
1     | 11               | 11                 | 00

State labels: 00 = a, 10 = b, 01 = c, 11 = d.

[Figure 40: State diagram for the rate 1/2, K = 3 code.]

[Figure 41: Tree diagram and Figure 42: Trellis diagram for the rate 1/2, K = 3 code of Problem 7.5; after the initial transient the trellis reaches steady state.]

7.6 An encoder diagram is shown in the figure. Find the encoder output for the input sequence 1 0 0 1 0 1 0.

[Figure 43: Problem 7.6 encoder.]

Solution:
Connection vectors and polynomials: g1 = [1 0 1], g2 = [1 1 1]
g1(X) = 1 + X^2, g2(X) = 1 + X + X^2

In the following table, column 1 gives the input, column 2 the register contents after the input bit is shifted in, and the last column the corresponding output.

Table 9: Encoding trace (rate 1/2, K = 3)

Input | Register Contents | State at ti | State at ti+1 | Output
-     | 000               | 00          | 00            | 00
1     | 100               | 00          | 10            | 11
0     | 010               | 10          | 01            | 01
0     | 001               | 01          | 00            | 11
1     | 100               | 00          | 10            | 11
0     | 010               | 10          | 01            | 01
1     | 101               | 01          | 10            | 00
0     | 010               | 10          | 01            | 01

The encoder output is U = 11 01 11 11 01 00 01.
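The encoding trace of Table 9 can be reproduced directly:

```python
# Reproduce Table 9: encode 1 0 0 1 0 1 0 with g1 = 1 + X^2 and
# g2 = 1 + X + X^2 (connection vectors g1 = [1,0,1], g2 = [1,1,1]).

def encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        v1 = u ^ s2            # g1 = [1, 0, 1]
        v2 = u ^ s1 ^ s2       # g2 = [1, 1, 1]
        out.append(f"{v1}{v2}")
        s1, s2 = u, s1         # shift the register
    return out

U = encode([1, 0, 0, 1, 0, 1, 0])
```

Each pair matches one row of Table 9.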

7.6 The figure shows an encoder for a (3,1) convolutional code. Find the transfer function T(D) and the minimum free distance for this code. Also, draw the state diagram for the code.

[Figure: encoder for the (3,1) convolutional code.]

References

[1] S. Lin and D. J. Costello, Jr., Error Control Coding, 2nd ed. Pearson/Prentice Hall, 2004.
[2] R. Blahut, Theory and Practice of Error Control Codes, 2nd ed. Addison-Wesley, 1984.
[3] J. G. Proakis, Digital Communications, 4th ed. Prentice Hall, 2001.
[4] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd ed. Prentice Hall, 2002.
[5] S. Haykin, Digital Communications, 2nd ed. Wiley, 1988.
