
Introduction

• Convolutional codes were introduced in 1955 by Elias.
• Convolutional encoders are linear, time-invariant systems given by the convolution of a binary data stream with a set of generator sequences.
• They can be implemented by shift registers.
• They can achieve a larger coding gain than a block code of the same complexity.
Convolutional Encoder

• The information, consisting of individual bits, is fed into a shift register in order to be encoded.
• A convolutional encoder is generally characterized in the form (n, k, K) or (k/n, K), where:
  – it has k inputs and n outputs (in practice, k = 1 is usually chosen);
  – Rc = k/n is the coding rate, determining the number of data bits per coded bit;
  – K is the constraint length of the convolutional code (the encoder has K-1 memory elements);
  – S is the number of memory elements.

[Encoder figure]

The encoder shown is a (2, 1, 3) code: rate 1/2, with 4 possible states.
Encoding Process

• The output bit combination is described by a polynomial. It is characterized by the number and positions of the taps on the shift register, which are indicated by generator polynomials g, the coefficients of which are 1 or 0 according to whether there is a tap at the respective position or not. It is common practice to combine the coefficients into octal numbers.
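As a quick sketch of this convention (the helper name taps_to_octal is ours, not from the slides), the tap coefficients can be read as a binary string and printed in octal:

```python
# Sketch: converting generator tap coefficients to the octal shorthand.
def taps_to_octal(taps):
    """taps is a list such as [1, 0, 1] for g(D) = 1 + D^2."""
    bits = "".join(str(b) for b in taps)   # e.g. [1, 0, 1] -> "101"
    return oct(int(bits, 2))               # binary string -> octal literal

print(taps_to_octal([1, 1, 1]))  # 0o7
print(taps_to_octal([1, 0, 1]))  # 0o5
```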
Non-Systematic Convolutional (NSC) Codes

Here is an example of an NSC code with constraint length K = 3 and generator polynomial

G(D) = [1 + D + D², 1 + D²]

[Encoder figure: the input m(t) feeds a shift register with two delay elements (D, D); the taps 1 1 1 form output c1(t) and the taps 1 0 1 form output c2(t).]

To make the generator polynomials more compact, we convert them to a binary string and group the bits into threes to write them in octal form. In this case 1 + D² is 101 and 1 + D + D² is 111. 101 is 5₈, where the subscript 8 indicates it is an octal number. Similarly, 111 is 7₈.

Therefore, this is the (7, 5)₈ NSC code.
Encoding Process

For example, to encode the message ms = [1, 1, 0, 0, 1, 0, 1], the outputs are:

c1 = [1, 0, 0, 1, 1, 1, 0], c2 = [1, 1, 1, 1, 1, 0, 0],

and the interleaved stream is:

c = [11, 01, 01, 11, 11, 10, 00]
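A minimal sketch of this encoder (the function name nsc75_encode is ours), reproducing the example above:

```python
# Minimal sketch of the (7, 5)_8 NSC encoder: c1 = m + S1 + S2, c2 = m + S2 (mod 2).
def nsc75_encode(message):
    s1 = s2 = 0                      # shift register starts in state 00
    out = []
    for m in message:
        c1 = m ^ s1 ^ s2             # generator 1 1 1: taps on input, S1, S2
        c2 = m ^ s2                  # generator 1 0 1: taps on input and S2
        out.append((c1, c2))
        s1, s2 = m, s1               # shift the register
    return out

print(nsc75_encode([1, 1, 0, 0, 1, 0, 1]))
# [(1, 1), (0, 1), (0, 1), (1, 1), (1, 1), (1, 0), (0, 0)]
```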

In polynomial form, the process is as follows:

Example 1: Consider the binary convolutional encoder with constraint length K = 3, k = 1, and n = 2. The generators are g1 = [111] and g2 = [101] (in octal these generators are (7, 5)):

[g1] = [1 1 1], C1 = m(t) + S1 + S2
[g2] = [1 0 1], C2 = m(t) + S2
Using the (7, 5)₈ NSC Code to Encode the Message 101000
Output calculation: c1(t) = m(t) ⊕ S1 ⊕ S2, c2(t) = m(t) ⊕ S2 (initialised state S1S2 = 00).

[Figure: register contents at each time step t = 1 to t = 6, summarised below.]

t   m(t)   S1 S2   c1 c2
1    1     0  0    1  1
2    0     1  0    1  0
3    1     0  1    0  0
4    0     1  0    1  0
5    0     0  1    1  1
6    0     0  0    0  0

The codeword is [c1(1)c2(1), c1(2)c2(2), c1(3)c2(3), c1(4)c2(4), c1(5)c2(5), c1(6)c2(6)] = [11 10 00 10 11 00]
State Table of a Convolutional Code

• The memory elements can contain four possible values: 00, 01, 10, 11
• These values are called states and only certain state transitions are allowed. For
example, we cannot go from state 00 to state 11 in one step or stay in state 10.
• All possible state transitions, along with their corresponding input and
outputs, can be determined from the encoder and stored in a state table.
• The state table for the (7, 5)8 NSC code is shown below:
Input   Initial State   Next State   Output
         S1 S2           S1 S2        c1 c2
  0       0  0            0  0         0  0
  1       0  0            1  0         1  1
  0       0  1            0  0         1  1
  1       0  1            1  0         0  0
  0       1  0            0  1         1  0
  1       1  0            1  1         0  1
  0       1  1            0  1         0  1
  1       1  1            1  1         1  0
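The table can also be generated by enumeration; here is a short sketch for the (7, 5)₈ code:

```python
# Sketch: deriving the state table of the (7, 5)_8 code by enumeration.
for s1, s2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:   # initial state S1 S2
    for m in (0, 1):                               # input bit
        c1, c2 = m ^ s1 ^ s2, m ^ s2               # outputs
        print(f"in={m}  state={s1}{s2}  next={m}{s1}  out={c1}{c2}")
```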
State Diagram of a Convolutional Code
• A more compact form of the state table is the state diagram
• It is much easier to determine a codeword corresponding to a message using
the state diagram
• Starting from state 00, trace the message through the state diagram and
record each output
[State diagram figure: the four states with transitions labelled Input/Output:
00 → 00 (0/00)   00 → 10 (1/11)   01 → 00 (0/11)   01 → 10 (1/00)
10 → 01 (0/10)   10 → 11 (1/01)   11 → 01 (0/01)   11 → 11 (1/10)]

What is the codeword for the message m = [1 0 0 1 1 1]? Start at the all-zero state.
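As a sketch of this tracing procedure, the state diagram can be stored as a transition map (the dictionary below is simply the state table re-keyed) and the message followed through it:

```python
# Sketch: tracing a message through the (7, 5)_8 state diagram.
# Keys are (state, input); values are (next_state, output), from the state table.
trans = {("00", 0): ("00", "00"), ("00", 1): ("10", "11"),
         ("01", 0): ("00", "11"), ("01", 1): ("10", "00"),
         ("10", 0): ("01", "10"), ("10", 1): ("11", "01"),
         ("11", 0): ("01", "01"), ("11", 1): ("11", "10")}

state, codeword = "00", []               # start at the all-zero state
for m in [1, 0, 0, 1, 1, 1]:
    state, out = trans[(state, m)]
    codeword.append(out)
print(codeword)  # ['11', '10', '11', '11', '01', '10']
```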
Tree Diagram of a Convolutional Code

• A tree diagram is obtained from the state table and is one method of showing all possible codewords up to a given time.
• An input of 0 is represented by moving to an upper branch in the tree and an input of 1 corresponds to moving to a lower branch.
• The states are denoted by a, b, c and d: a = 00, b = 10, c = 01, d = 11.
• The red line is the input 0, 1, 0, 1, which has the codeword 00, 11, 10, 00.

[Tree diagram figure for times 1 to 4, branches labelled with output pairs and states a, b, c, d.]
Trellis Diagrams

• One problem with the tree diagram is that the number of branches increases exponentially.
• We can represent all possible state transitions with a graph, where nodes represent states and edges represent transitions. Each edge is labelled with an input and its corresponding coded output.
• By making copies of the graph and joining them together we create a trellis, with each path corresponding to a codeword.

[State-transition graph figure, edges labelled Input/Output: from 00: 0/00, 1/11; from 01: 0/11, 1/00; from 10: 0/10, 1/01; from 11: 0/01, 1/10]
Trellis Diagrams

• A trellis diagram of the (7, 5)₈ NSC code is given below.
• The red line shows the message 10101, giving the codeword 11 10 00 10 00.

[Trellis figure: states 00, 01, 10, 11 plotted against time t = 1 to t = 5]
Convolutional Encoder

Example 2: Consider the binary convolutional encoder with constraint length K = 3, k = 1, and n = 3. The generators are g1 = [100], g2 = [101], and g3 = [111]. The generators are more conveniently given in octal form as (4, 5, 7).

[Encoder figure: shift register stages Q1 Q2 Q3]

[g1] = [1 0 0], C1 = Q1
[g2] = [1 0 1], C2 = Q1 + Q3
[g3] = [1 1 1], C3 = Q1 + Q2 + Q3
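A generic rate-1/n encoder sketch covers this case (conv_encode is our own helper name, with Q1 taken as the newest input bit):

```python
# Sketch of a generic rate-1/n encoder. Each generator lists its taps on the
# register [Q1, Q2, ..., QK], where Q1 holds the newest input bit.
def conv_encode(message, generators):
    K = len(generators[0])               # constraint length
    reg = [0] * K                        # register starts cleared
    out = []
    for m in message:
        reg = [m] + reg[:-1]             # shift the new bit into Q1
        out.append(tuple(sum(g * q for g, q in zip(gen, reg)) % 2
                         for gen in generators))
    return out

# The (4, 5, 7)_8 code of Example 2, encoding 1011:
print(conv_encode([1, 0, 1, 1], [[1, 0, 0], [1, 0, 1], [1, 1, 1]]))
# [(1, 1, 1), (0, 0, 1), (1, 0, 0), (1, 1, 0)]
```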
State Table of a Convolutional Code

Draw the state diagram.

For example, for the data sequence 1011…, the output will be: 111 001 100 110
Tree Diagram

K = 3, k = 1, n = 3 convolutional encoder

Input bits: 101
Output bits: 111 001 100

The states of the first (K-1)k stages of the shift register are denoted a = 00, b = 01, c = 10 and d = 11.

[Tree diagram figure]
Trellis Diagram

[Trellis figure for the K = 3, k = 1, n = 3 code]

Example 3: Consider the rate-2/3 convolutional encoder with generators [g1] = [1011], [g2] = [1101], [g3] = [1010] (in octal these generators are (13, 15, 12)):

C1 = Q1 + Q3 + Q4
C2 = Q1 + Q2 + Q4
C3 = Q1 + Q3
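The slides do not show the register layout for this k = 2 encoder; a plausible sketch, assuming each step shifts the new input pair into Q1 Q2 while the previous pair moves to Q3 Q4, is:

```python
# Hypothetical sketch of the rate-2/3 encoder (13, 15, 12)_8, assuming the
# register layout [Q1 Q2 | Q3 Q4] = [new input pair | previous input pair].
def conv_encode_k2(pairs):
    q3 = q4 = 0                          # memory starts cleared
    out = []
    for q1, q2 in pairs:                 # two input bits per step
        c1 = (q1 + q3 + q4) % 2          # g1 = [1011]
        c2 = (q1 + q2 + q4) % 2          # g2 = [1101]
        c3 = (q1 + q3) % 2               # g3 = [1010]
        out.append((c1, c2, c3))
        q3, q4 = q1, q2                  # shift the pair into memory
    return out

print(conv_encode_k2([(1, 0), (1, 1)]))  # example input pairs (ours)
# [(1, 1, 1), (0, 0, 0)]
```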
State Table of a Convolutional Code

[State table figure for Example 3]

State Diagram of a Convolutional Code

[State diagram figure for Example 3]

Trellis Diagram

[Trellis figure for Example 3]
Viterbi’s Algorithm

• As stated previously, it would be impractical to compare every possible codeword


with the received word until the codeword with the minimum Hamming (or
Euclidean) distance is found.

• One maximum-likelihood decoding algorithm which significantly reduces the


number of codewords that are compared with the received word is Viterbi’s
algorithm, presented by Andrew Viterbi in 1967.

• Viterbi’s algorithm is applied to the trellis diagram of the code. It is a recursive


algorithm that calculates the accumulated distance between the received word
and paths in the trellis at each state.

• When two paths converge into a state the least likely path is discarded.

• Finally, the path of length n with the smallest accumulated distance is the
codeword that maximises P(r|c).
Viterbi’s Algorithm

• A branch metric µt(s, s') is the distance between the received output and the branch output from a previous state s to a current state s' at time t.
• A path metric Mt(s') is the accumulated distance of a path at state s'.
• To obtain the path metrics at time t we add the previous path metrics that are connected to each state to the corresponding branch metrics, and keep the path with the smallest path metric:

Mt(s') = min{Mt-1(s1) + µt(s1, s'), Mt-1(s2) + µt(s2, s')}

[Figure: previous path metrics Mt-1(s1) and Mt-1(s2) entering the current state s' via branch metrics µt(s1, s') and µt(s2, s'). Branch metrics are Hamming or squared Euclidean distances.]
Viterbi’s Algorithm

1. At t = 0, initialise the path metric at the all-zero state to M0(0) = 0.
2. Set t = t + 1.
3. For each state at time t, calculate its path metric by adding each branch metric entering the state to the path metric of the corresponding previous state:

   Mt(s) = Mt-1(s') + µt(s', s)

4. Discard the path with the higher path metric at each state and store the remaining path metrics.
5. If t = n, find the state with the lowest path metric and trace back the path; this is the maximum-likelihood path. Otherwise, go to 2.
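A minimal hard-decision sketch of these steps for the (7, 5)₈ code (viterbi_decode is our own name; it stores full survivor paths rather than doing an explicit traceback):

```python
# Hard-decision Viterbi decoder sketch for the (7, 5)_8 code.
# States are (S1, S2); branch metrics are Hamming distances.
def viterbi_decode(received):                    # received: list of (r1, r2) bits
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: 0 if s == (0, 0) else INF for s in states}
    paths = {s: [] for s in states}              # survivor input sequences
    for r1, r2 in received:
        new_metric, new_paths = {}, {}
        for s1, s2 in states:                    # previous state
            for m in (0, 1):                     # input bit on the branch
                c1, c2 = m ^ s1 ^ s2, m ^ s2     # branch output
                nxt = (m, s1)                    # next state
                d = metric[(s1, s2)] + (c1 != r1) + (c2 != r2)
                if nxt not in new_metric or d < new_metric[nxt]:
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[(s1, s2)] + [m]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])  # lowest final path metric
    return paths[best], metric[best]
```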
Viterbi Decoding Example with No Errors

Received word: 11 10 00 10 11

[Trellis figures for t = 1 through t = 5: branch and path metrics are computed stage by stage. Where two branches enter a state, the two candidate metrics are compared (e.g. at t = 3, state 00: min{3 + 0, 0 + 2} = 2) and the path with the larger value is discarded. The surviving path metrics are:]

State   t=1   t=2   t=3   t=4   t=5
 00      2     3     2     3     0
 01      -     0     3     0     3
 10      0     3     0     3     2
 11      -     2     3     2     3

Choose the path with the smallest value: at t = 5 the smallest metric is 0, at state 00.
Viterbi Decoding Example with No Errors

• At t = 3 each state has two branches entering it. We calculate the path
metrics for each branch entering the state and keep the smallest one.
The other branch is discarded, eliminating a competitor path.

• We can see that Viterbi’s algorithm has found a single path in the
trellis that matches the received word.

• In the next example, we add one error to the received word and see if
Viterbi’s algorithm is able to correct it.
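Before that, using the viterbi_decode sketch from earlier, the error-free word decodes back to the message with accumulated distance 0:

```python
# Error-free case: the received word is itself a codeword.
msg, dist = viterbi_decode([(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)])
print(msg, dist)  # [1, 0, 1, 0, 0] 0
```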
Viterbi Decoding Example with One Error

Received word: 11 10 01 10 11 (an error in the third pair)

[Trellis figures for t = 3 through t = 5: e.g. at t = 3, state 00: min{3 + 1, 0 + 1} = 1. The surviving path metrics are:]

State   t=1   t=2   t=3   t=4   t=5
 00      2     3     1     2     1
 01      -     0     2     1     3
 10      0     3     1     2     2
 11      -     2     3     3     3

At t = 5 the smallest metric is 1, at state 00. Tracing back this path gives the codeword 11 10 00 10 11: the single error has been corrected.
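The same viterbi_decode sketch confirms this: with the corrupted third pair the minimum accumulated distance is 1, and the original message is still recovered:

```python
# One-error case: the third pair is corrupted (00 -> 01).
msg, dist = viterbi_decode([(1, 1), (1, 0), (0, 1), (1, 0), (1, 1)])
print(msg, dist)  # [1, 0, 1, 0, 0] 1
```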
Exercise: Draw the Viterbi decoder for the binary convolutional code given in Example 2 (K = 3, k = 1, n = 3).

[Figure: Viterbi decoding trellis for the K = 3, k = 1, n = 3 convolutional code]
Soft-Decision Decoding

[Block diagram: Channel → Demodulator → Decoder. With hard decisions the demodulator passes binary values to the decoder; with soft decisions it passes real values, and the decoder uses Euclidean distance.]

The decoder now takes in real values that give a measure of the likelihood of the received values.

[BPSK constellation figure: symbols at -1 and +1. A received value of 0.3 lies at distance 1.3 from -1 and 0.7 from +1.]
Soft-Decision Viterbi Algorithm

• The soft-decision Viterbi algorithm is applied in the same way as the hard-decision Viterbi algorithm.
• The only difference is that the branch metrics are now squared Euclidean distances and the trellis outputs are the modulated symbols (±1), not the coded bits.
• The squared Euclidean distances give a measure of how likely a received symbol is to be a 0 or a 1. This extra information results in a significant increase in error correction.
Soft-Decision Viterbi Decoding Example

• Now we will repeat the previous hard-decision Viterbi algorithm example, but with soft inputs.

Message: m = [1 0 1 0 0]
Codeword: c = [11 10 00 10 11]
After BPSK modulation (0 → -1, 1 → +1): x = [11, 1-1, -1-1, 1-1, 11]
Received values from the demodulator: r = [0.4, 0.6, 0.5, -0.3, -0.8, -0.7, 0.9, -0.2, 0.4, 0.3]
Soft-Decision Viterbi Decoding Example

Received pairs: (0.4, 0.6), (0.5, -0.3), (-0.8, -0.7), (0.9, -0.2), (0.4, 0.3)

[Trellis figure: squared Euclidean branch metrics and accumulated path metrics. The surviving path accumulates 0.52 → 1.26 → 1.39 → 2.04 → 2.89 (e.g. M5(00) = 2.04 + 0.85 = 2.89). The minimum-metric path corresponds to the codeword 11 10 00 10 11, i.e. the message 10100.]
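A soft-decision variant of the earlier decoder sketch (viterbi_soft is our own name) reproduces this result:

```python
# Soft-decision Viterbi sketch: branch outputs are BPSK symbols (0 -> -1, 1 -> +1)
# and branch metrics are squared Euclidean distances to the received reals.
def viterbi_soft(received):                      # received: list of (r1, r2) reals
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: 0.0 if s == (0, 0) else INF for s in states}
    paths = {s: [] for s in states}
    for r1, r2 in received:
        new_metric, new_paths = {}, {}
        for s1, s2 in states:
            for m in (0, 1):
                c1, c2 = m ^ s1 ^ s2, m ^ s2     # coded bits on the branch
                x1, x2 = 2 * c1 - 1, 2 * c2 - 1  # BPSK mapping
                d = metric[(s1, s2)] + (r1 - x1) ** 2 + (r2 - x2) ** 2
                nxt = (m, s1)
                if nxt not in new_metric or d < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = d, paths[(s1, s2)] + [m]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best], metric[best]

r = [(0.4, 0.6), (0.5, -0.3), (-0.8, -0.7), (0.9, -0.2), (0.4, 0.3)]
msg, dist = viterbi_soft(r)
print(msg, round(dist, 2))  # [1, 0, 1, 0, 0] 2.89
```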
