Channel coding
Francois Horlin
1
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
2
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
4
Outline: Introduction
5
Introduction
6
ARQ advantages/drawbacks
7
FEC advantages/drawbacks
8
Error correcting code (N,K)
Definitions:
Parity bits: N − K additional bits
Redundancy: (N − K)/K
Code rate: K/N < 1
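For example, a (7,4) code adds N − K = 3 parity bits to K = 4 message bits,
giving a redundancy of 3/4 and a code rate of 4/7 ≈ 0.57.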
9
Parity check codes
10
Single-parity check code
11
Rectangular code
12
Error performance versus bandwidth
13
Power versus bandwidth
Coding gain:
SNRuncod − SNRcod [dB]
14
Channel models
15
Binary Symmetric Channel (BSC)
16
Gaussian Channel
17
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
18
Outline: Block codes
19
Introduction to block codes
The inputs of the decoder are often the detected bits, which limits the
decoder complexity
Hard decoding, operating on the BSC model, is therefore assumed
20
Vector space and subspace
21
Example (N = 6)
Vector space:
22
Linear block codes
23
Vector space and subspace
24
Example (K = 3, N = 6)
25
Detection
26
Generator matrix
27
Generator matrix
Equivalently:
              [ v1 ]
u = [d1 … dK] [ ⋮  ] = d G
              [ vK ]
28
Example (K = 3, N = 6)
Generator matrix:
    [ v1 ]   [ 1 1 0 1 0 0 ]
G = [ v2 ] = [ 0 1 1 0 1 0 ]
    [ v3 ]   [ 1 0 1 0 0 1 ]
29
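A minimal sketch of the encoding operation u = d G over GF(2), using the
example generator matrix above (the helper name encode is hypothetical):

```python
import numpy as np

# Rows v1, v2, v3 of the example generator matrix (K = 3, N = 6)
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(d):
    """Map a K-bit message d to the N-bit codeword u = d G (mod 2)."""
    return d @ G % 2

# Enumerate all 2^K = 8 codewords of the code
for m in range(8):
    d = np.array([m >> 2 & 1, m >> 1 & 1, m & 1])
    print(d, "->", encode(d))
```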
Systematic linear block codes
Definition: the mapping is such that part of the code vector coincides
with the message vector
The generator matrix has the form:
G = [P  I_K]
where:
P is the parity array portion (size K × (N − K))
I_K is the identity matrix (size K × K)
The code vector u = [d P  d] is composed of the parity bits d P and of
the message vector d
30
Parity-check matrix
31
Syndrome testing
The received vector r is the transmitted vector u plus an error vector
e caused by the channel:
r = u + e
The syndrome is defined as:
s := r Hᵀ = u Hᵀ + e Hᵀ = e Hᵀ
since u Hᵀ = 0 for every codeword u
The syndrome is the same for the corrupted received vector and for the
corresponding error vector
An error is detected when the syndrome is different from 0
32
Example (K = 3, N = 6)
33
Standard array
34
Standard array
The first row contains all codewords. It starts with the all-zeros
codeword.
The first column contains all correctable error patterns. They are
chosen by the code designer.
35
Example (K = 3, N = 6)
36
Example (K = 3, N = 6)
37
Error correction
Steps:
Calculate the syndrome s = r Hᵀ
Determine the error pattern ej corresponding to the syndrome s
Estimate the transmitted code vector by correcting the received
vector: û = r + ej (modulo-2 subtraction and addition are equivalent!)
If the error caused by the channel is not a coset leader, an erroneous
decoding results
38
Example (K = 3, N = 6)
The syndrome and the corresponding error pattern are equal to:
s = r Hᵀ = [1 0 0]
e = [1 0 0 0 0 0]
39
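A minimal sketch of the full syndrome-decoding procedure for the systematic
(6,3) example, assuming G = [P I3] as above and the single-bit error patterns
as coset leaders (the names table and correct are hypothetical):

```python
import numpy as np

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])   # H = [I_{N-K}  P^T], so G H^T = 0

# Syndrome table: one correctable error pattern (coset leader) per syndrome.
# Here: the all-zeros pattern and the six single-bit errors; the one
# remaining syndrome would need a designer-chosen two-bit coset leader.
table = {(0, 0, 0): np.zeros(6, dtype=int)}
for i in range(6):
    e = np.zeros(6, dtype=int)
    e[i] = 1
    table[tuple(e @ H.T % 2)] = e

def correct(r):
    """s = r H^T, look up the error pattern ej, return u = r + ej (mod 2)."""
    s = tuple(r @ H.T % 2)
    return (r + table[s]) % 2

r = np.array([0, 1, 0, 1, 0, 0])   # codeword [1 1 0 1 0 0] with first bit hit
print(correct(r))                   # syndrome [1 0 0] -> error [1 0 0 0 0 0]
```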
Hamming weight and distance
Definitions:
W(u) is the number of non-zero elements in u
D(u, v) is the number of elements in which u and v differ
D(u, v) = W(u + v)
40
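A tiny sketch of the two definitions, with bits as 0/1 lists and function
names mirroring the slide's notation:

```python
def W(u):                     # Hamming weight: number of non-zero elements
    return sum(u)

def D(u, v):                  # Hamming distance: the weight of u + v (mod 2)
    return W([(a + b) % 2 for a, b in zip(u, v)])

print(D([1, 1, 0, 1, 0, 0], [0, 1, 1, 0, 1, 0]))   # two example codewords -> 4
```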
Minimum distance of a block code
41
Optimal decoder strategy
û = arg max_u P(r|u)
Equivalently (for the BSC with p < 1/2):
û = arg min_u D(r, u)
42
Error detection and correction
[Figure: transmitted codeword U and codeword V with received vectors
r1–r4 between them; a decision line separates corrupted codewords
outside of the code from a corrupted codeword inside the code]
43
Error detection and correction
44
Link with the minimum distance
Error detection capability: EDC = Dmin − 1
Error correction capability: t = ⌊(Dmin − 1)/2⌋
45
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
46
Outline: Low density parity check codes
Tanner graph
Hard decoding
Soft decoding in the probability domain
Soft decoding in the log domain
Code design criterion
47
Introduction to LDPC codes
Low density parity check (LDPC) codes are block codes of sparse
parity check matrix H:
As the number of non-zero elements in matrix H is small, the
decoder complexity can be kept low
Performance of hard decoding is generally poor, but soft decoding can
easily be implemented, significantly improving performance
Matrix H can be regular (all row sums are equal, all column sums are
equal) or irregular
48
Tanner graph
Bipartite graph: the nodes are separated into two classes, and the
undirected edges only connect two nodes of different classes
Tanner graph: bipartite graph used to represent the low-density
parity-check matrix H, as follows:
Variable nodes (v-nodes) ci correspond to the noisy codeword
Check nodes (c-nodes) fj correspond to the syndrome
Check node fj is connected to variable node ci if element Hji = 1
49
Small size example
[Figure: the low-density parity-check matrix H is represented by a Tanner
(bipartite) graph; e.g. check node f0 enforces c0 + c1 + c2 + c5 = 0; in
practice H is much larger]
50
Hard decoding
Exchanges the most probable bit between v-nodes and c-nodes (binary
messages)
Intrinsically satisfies the maximum a-posteriori (MAP) criterion:
û = arg max_u P(u|r)
51
Hard decoding: step 0
All variable nodes ci send a message to their connected check nodes
containing the bit they believe to be the correct one for them
[Figure: Tanner graph with check nodes f0–f3, variable nodes c0–c7 and
received bits y0–y7]
52
Hard decoding: step 1
Every c-node fj calculates a response to its connected variable nodes
The check node response contains the bit that fj believes to be the
correct one for the v-node ci, assuming that the other connected
v-nodes are correct
To calculate a new message for a v-node, the previous message from
that node is NOT taken into account!
[Figure: message passing between f0–f3 and c0–c7 on the Tanner graph]
53
Hard decoding: step 2
Variable nodes use the messages from the c-nodes AND the received
bit ri to make a decision (majority voting)
To calculate a new message for a c-node, the previous message from
that node is NOT taken into account!
54
Hard decoding: next steps
Repeat steps 1 and 2 until all parity checks are satisfied or a maximum
number of iterations is reached; the final decision is ûi = ci
55
Example
Transmitted codeword: [1 0 0 1 0 1 0 1]
Received codeword: [1 1 0 1 0 1 0 1]
56
Example
57
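A minimal sketch of hard-decision decoding, written as a simple bit-flipping
variant of the majority-voting idea. The 4×8 parity-check matrix H is an
assumed toy matrix (the slides' own H is in a lost figure), chosen so that
the transmitted codeword of the example satisfies all checks:

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

def hard_decode(r, max_iter=10):
    """Flip the bits taking part in the largest number of unsatisfied
    parity checks, until the syndrome is zero."""
    c = r.copy()
    for _ in range(max_iter):
        s = H @ c % 2                 # syndrome: one bit per check node
        if not s.any():               # all parity checks satisfied: done
            return c
        fails = H.T @ s               # unsatisfied checks per variable node
        c = (c + (fails == fails.max())) % 2
    return c

r = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # received word of the example
print(hard_decode(r))                     # -> [1 0 0 1 0 1 0 1]
```

On this received word, checks f0 and f1 fail; only c1 participates in both,
so it is flipped and the codeword is recovered in a single iteration.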
Soft decoding in the probability domain
Soft decoding works like hard decoding, but exchanges real values (bit
probabilities) instead of binary values between v-nodes and c-nodes
Intrinsically satisfies the MAP criterion:
û = arg max_u P(u|r)
58
Soft decoding: step 0
qij(0) = P(ci = 0|ri) = 1 / (1 + e^(−2ri/σw²))
qij(1) = P(ci = 1|ri) = 1 − qij(0)
59
Soft decoding: step 1
[Figure: a) calculation of rji(b) at check node fj; b) calculation of qij(b)
at variable node ci]
Response rji(0) of fj to ci is the probability that ci is a 0, equal to the
probability that the number of 1s among the connected variable nodes
except ci is even (Gallager's formula):
rji(0) = 1/2 + 1/2 ∏_{i′ ∈ Vj\i} (1 − 2 qi′j(1))
rji(1) = 1 − rji(0)
60
Gallager's formula
61
Soft decoding: step 2
[Figure: calculation of qij(b) at variable node ci]
Response qij(0) of ci to fj is the probability that ci is a 0, given the
observation ri and the messages communicated by all check nodes
except fj
62
Soft decoding: step 2
qij(0) = Kij P(ci = 0|ri) ∏_{j′ ∈ Fi\j} rj′i(0)
qij(1) = Kij P(ci = 1|ri) ∏_{j′ ∈ Fi\j} rj′i(1)
where Kij is a constant chosen such that qij (0) + qij (1) = 1
63
Soft decoding: next steps
Soft decision:
Qi(0) = Ki P(ci = 0|ri) ∏_{j ∈ Fi} rji(0)
Qi(1) = Ki P(ci = 1|ri) ∏_{j ∈ Fi} rji(1)
Hard decision: ûi = 1 if Qi(1) > Qi(0), else 0
64
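A minimal sketch of the probability-domain algorithm (steps 0, 1, 2 plus the
soft/hard decision) in the slides' notation, reusing the assumed toy H from
the hard-decoding sketch; the noise variance sigma2 and the received values
are assumptions for illustration:

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],     # assumed toy matrix, as before
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

def soft_decode(rx, sigma2, max_iter=20):
    M, N = H.shape
    Vj = [np.flatnonzero(H[j]) for j in range(M)]     # v-nodes of check fj
    Fi = [np.flatnonzero(H[:, i]) for i in range(N)]  # c-nodes of bit ci
    p1 = 1 / (1 + np.exp(2 * rx / sigma2))    # step 0: P(ci = 1 | ri)
    q1 = H * p1                               # q_ij(1), one value per edge
    r0 = np.zeros((M, N))
    for _ in range(max_iter):
        for j in range(M):                    # step 1: Gallager's formula
            for i in Vj[j]:
                rest = [i2 for i2 in Vj[j] if i2 != i]
                r0[j, i] = 0.5 + 0.5 * np.prod(1 - 2 * q1[j, rest])
        u = np.zeros(N, dtype=int)            # soft decision Q_i(b)
        for i in range(N):
            Q0 = (1 - p1[i]) * np.prod(r0[Fi[i], i])
            Q1 = p1[i] * np.prod(1 - r0[Fi[i], i])
            u[i] = int(Q1 > Q0)
        if not (H @ u % 2).any():             # all parity checks satisfied
            return u
        for i in range(N):                    # step 2: v-node responses
            for j in Fi[i]:
                rest = [j2 for j2 in Fi[i] if j2 != j]
                a0 = (1 - p1[i]) * np.prod(r0[rest, i])
                a1 = p1[i] * np.prod(1 - r0[rest, i])
                q1[j, i] = a1 / (a0 + a1)     # K_ij enforces q0 + q1 = 1
    return u

# BPSK mapping 0 -> +1, 1 -> -1; bit c1 is received unreliably (-0.2)
rx = np.array([-1.1, -0.2, 0.9, -1.0, 1.1, -0.9, 1.0, -0.8])
print(soft_decode(rx, sigma2=0.5))            # -> [1 0 0 1 0 1 0 1]
```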
Soft decoding in the log-domain
65
Soft decoding (log domain): step 0
Initialize:
L(qij) = L(ci) := log [P(ci = 0|ri) / P(ci = 1|ri)] = 2ri/σw²
66
Soft decoding (log domain): step 1
L(rji) = ( ∏_{i′ ∈ Vj\i} αi′j ) · φ( Σ_{i′ ∈ Vj\i} φ(βi′j) )
where:
αij := sign(L(qij))
βij := abs(L(qij))
and:
φ(x) := −log tanh(x/2) = log((eˣ + 1)/(eˣ − 1)) = φ⁻¹(x) ; x > 0
67
Soft decoding (log domain): step 2
L(qij) = L(ci) + Σ_{j′ ∈ Fi\j} L(rj′i)
68
Soft decoding (log domain): next steps
Soft decision:
L(Qi) = L(ci) + Σ_{j ∈ Fi} L(rji)
Hard decision:
ûi = 1 if L(Qi) < 0, else 0
69
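A small numeric sketch of the log-domain check-node update above; the three
incoming L(qij) values are assumed for illustration:

```python
import numpy as np

def phi(x):
    """phi(x) = -log tanh(x/2); phi is its own inverse for x > 0."""
    return -np.log(np.tanh(x / 2))

L_q = np.array([1.8, -0.7, 2.3])   # L(q_i'j) from the other connected v-nodes
alpha, beta = np.sign(L_q), np.abs(L_q)
L_r = alpha.prod() * phi(phi(beta).sum())
print(L_r)                         # about -0.40: the check believes ci = 1
```

The usual starting point for the complexity reduction discussed next is the
min-sum simplification, which replaces φ(Σ φ(β)) by min(β) and so avoids the
transcendental function entirely.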
Complexity reduction
70
Code design criterion
If H (and G) have a lot of structure (regularity), implementation is
easier; irregularity makes H (and G) harder to handle
Iterative decoding relies on the exchange of independent messages
between v-nodes and c-nodes
The diameter of the loops in the Tanner graph (the loop length) should
be as large as possible to make sure the messages are sufficiently
independent: a small loop diameter induces poor decoding
71
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
72
Outline: Convolutional codes
Connection representation
State diagram
Trellis diagram
Viterbi decoding
Distance properties
73
Introduction to convolutional codes
74
Connection representation
[Figure: rate-1/2 convolutional encoder; input d[k], shift-register contents
d[k−1] and d[k−2], outputs u1[k] = d[k] + d[k−1] + d[k−2] and
u2[k] = d[k] + d[k−2]]
75
Polynomial representation
g1(X) = 1 + X + X²
g2(X) = 1 + X²
76
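A minimal encoder sketch for this rate-1/2, K = 3 code (the function name
conv_encode is hypothetical):

```python
def conv_encode(bits):
    """Encode a bit list; returns the interleaved (u1, u2) output stream."""
    d1 = d2 = 0                      # shift-register contents d[k-1], d[k-2]
    out = []
    for d in bits + [0, 0]:          # two tail bits flush the register
        out += [d ^ d1 ^ d2,         # u1[k] = d[k] + d[k-1] + d[k-2]
                d ^ d2]              # u2[k] = d[k] + d[k-2]
        d2, d1 = d1, d
    return out

print(conv_encode([1, 1, 0, 1, 1]))  # the message of the following example
```

For the message 11011 this produces 11 01 01 00 01, followed by 01 11 for
the two tail bits.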
State diagram
77
Example: message 11011
78
State diagram
79
Trellis diagram
80
Trellis diagram
81
Optimal decoder strategy
û = arg max_u P(r|u) = arg max_u Σ_n log P(r[n]|u[n])
because:
the channel is memoryless, such that P(r|u) = ∏_n P(r[n]|u[n])
maximizing a function is equivalent to maximizing its logarithm
82
Hard decoding
P(r|u) = p^D (1 − p)^(L−D)
log P(r|u) = −D log((1 − p)/p) + L log(1 − p)
where D = D(r, u) is the Hamming distance and L the sequence length;
since p < 1/2, maximizing log P(r|u) amounts to minimizing D
83
Soft decoding
84
Viterbi algorithm
85
Example: rate 1/2, K = 3, hard decoding
86
Viterbi algorithm
87
Viterbi algorithm
88
Viterbi algorithm
89
Viterbi algorithm
90
Viterbi algorithm
91
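A minimal sketch of hard-decision Viterbi decoding for the same rate-1/2,
K = 3 code, reusing conv_encode from the encoder sketch above; the state is
(d[k−1], d[k−2]) and the branch metric is the Hamming distance:

```python
def viterbi_decode(rx):
    """rx: received bit list (u1, u2 pairs). Returns the decoded bits."""
    INF = float("inf")
    metric = {0: 0, 1: INF, 2: INF, 3: INF}   # path metrics, start in state 0
    paths = {s: [] for s in metric}           # survivor paths (decoded bits)
    for k in range(0, len(rx), 2):
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for s in metric:                      # s = 2*d[k-1] + d[k-2]
            if metric[s] == INF:
                continue
            d1, d2 = s >> 1, s & 1
            for d in (0, 1):                  # hypothesised input bit
                u1, u2 = d ^ d1 ^ d2, d ^ d2  # expected encoder output
                dist = (u1 != rx[k]) + (u2 != rx[k + 1])
                ns = (d << 1) | d1            # next state
                if metric[s] + dist < new_metric[ns]:   # keep the survivor
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [d]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)        # state with the smallest metric
    return paths[best]

rx = conv_encode([1, 1, 0, 1, 1])
rx[3] ^= 1                                    # introduce one channel error
print(viterbi_decode(rx)[:-2])                # drop the two tail bits: 11011
```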
Distance properties
92
Distance properties
Given the all-zero transmission, the most probable error event occurs
when the surviving path diverges from and directly remerges with the
all-zero path.
The free distance is given by the weight of the corresponding path (5
in the example).
93
Best known convolutional codes
94
Best known convolutional codes
95
Outline
Introduction
Block codes
Low density parity check codes
Convolutional codes
Exercises
96
Exercise 1
Consider a (7,4) code whose generator matrix is:
    [ 1 1 1 1 0 0 0 ]
G = [ 1 0 1 0 1 0 0 ]
    [ 0 1 1 0 0 1 0 ]
    [ 1 1 0 0 0 0 1 ]
98
Exercise
Suppose you are trying to find the quickest way to get from London to
Vienna. A trellis diagram has been constructed based on the various
schedules. The labels on each path are travel times.
Using the Viterbi algorithm, find the fastest route. How does the
algorithm work? What calculations must be made? What information
must be retained in the memory?
110