An Introduction To Low-Density Parity-Check Codes
Paul H. Siegel
Electrical and Computer Engineering
University of California, San Diego
5/31/07

© Copyright 2007 by Paul H. Siegel
Outline
[Figure: Shannon's model of a communication system — an information source produces a message; the transmitter sends a signal over the channel, where a noise source perturbs it; the receiver reconstructs the message for the destination.]
[Figure: channel transition probabilities f(y | x).]

• Code rate (information bits per channel use):

R ≜ (# information bits) / (channel use)

• Shannon: reliable transmission is possible if and only if R < C.
[Figure: binary erasure channel BEC(ε) — inputs 0 and 1 are received correctly with probability 1 − ε and erased (output "?") with probability ε. Capacity: C = 1 − ε.]

[Figure: binary symmetric channel BSC(p) — each bit is flipped with probability p and received correctly with probability 1 − p. Capacity: C = 1 − H2(p).]

H2(p) = − p log2 p − (1 − p) log2 (1 − p)
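These two capacity formulas are easy to check numerically; a minimal sketch (the function names are mine):

```python
import math

def h2(p: float) -> float:
    """Binary entropy function H2(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def capacity_bec(eps: float) -> float:
    """Capacity of the binary erasure channel: C = 1 - eps."""
    return 1 - eps

def capacity_bsc(p: float) -> float:
    """Capacity of the binary symmetric channel: C = 1 - H2(p)."""
    return 1 - h2(p)

print(capacity_bec(0.5))             # 0.5
print(capacity_bsc(0.5))             # 0.0 -- a useless channel
print(round(capacity_bsc(0.11), 2))  # ~0.5
```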
More Channels and Capacities
C = (1/2) log2 (1 + P/σ²)   [AWGN channel with signal power P and noise variance σ²]

[Figure: the conditional densities f(y | 0) and f(y | 1) for the binary-input AWGN channel.]
[Figure: Source → Encoder → Channel → Decoder → Sink]

message x = (x1, x2, …, xk);  codeword c = (c1, c2, …, cn);
received word y = (y1, y2, …, yn);  estimate x̂ = (x̂1, x̂2, …, x̂k)

Code rate: R = k/n
Shannon’s Coding Theorems
• A regrettable review:
Doob, J.L., Mathematical Reviews, MR0026286 (10,133e)
“The discussion is suggestive throughout, rather than
mathematical, and it is not always clear that the author’s
mathematical intentions are honorable.”
• Problem
Randomness + large block length + optimal decoding =
COMPLEXITY!
• Solution
• Long, structured, “pseudorandom” codes
• Practical, near-optimal decoding algorithms
• Examples
• Turbo codes (1993)
• Low-density parity-check (LDPC) codes (1960, 1999)
• State-of-the-art
• Turbo codes and LDPC codes have brought Shannon
limits to within reach on a wide range of channels.
[Figure: LDPC code performance curves, from Trellis and Turbo Coding, Schlegel and Perez, IEEE Press, 2004.]
Linear Block Codes - Basics
• (n,k) = (7,4), R = 4/7
• dmin = 3
• single error correcting
• double erasure correcting
• Encoding rule:
  1. Insert data bits in positions 1, 2, 3, 4.
  2. Insert "parity" bits in positions 5, 6, 7 to ensure an even number of 1's in each circle.

[Figure: Venn diagram of three overlapping circles containing bit positions 1–7.]
Example: (7,4) Hamming Code
• 2^k = 16 codewords
• Systematic encoder places input bits in positions 1, 2, 3, 4
• Parity bits are in positions 5, 6, 7
data parity      data parity
0000 000         1000 111
0001 011         1001 100
0010 110         1010 001
0011 101         1011 010
0100 101         1100 010
0101 110         1101 001
0110 011         1110 100
0111 000         1111 111
Hamming Code – Parity Checks
      1 2 3 4 5 6 7
H = [ 1 1 1 0 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]        H cᵀ = 0
    [ 1 1 0 1 0 0 1 ]

    [ 1 0 0 0 1 1 1 ]        u = (u1, u2, u3, u4)
G = [ 0 1 0 0 1 0 1 ]        c = (c1, c2, c3, c4, c5, c6, c7)
    [ 0 0 1 0 1 1 0 ]        c = u G
    [ 0 0 0 1 0 1 1 ]

[Figure: the three parity checks shown on the Venn diagram of positions 1–7.]
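The matrices above can be verified directly; the sketch below (variable names are mine) encodes all 16 data words with G and checks that H cᵀ = 0 and that the minimum distance is 3:

```python
import itertools

# Generator and parity-check matrices of the (7,4) Hamming code above.
G = [[1,0,0,0,1,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,0,1,1]]
H = [[1,1,1,0,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def encode(u):
    """c = u G over GF(2): data bits in positions 1-4, parity in 5-7."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

codewords = [encode(list(u)) for u in itertools.product([0, 1], repeat=4)]

# Every codeword satisfies H c^T = 0 ...
assert all(sum(h * x for h, x in zip(row, c)) % 2 == 0
           for c in codewords for row in H)

# ... and the minimum distance (= minimum nonzero weight) is 3.
print(len(codewords), min(sum(c) for c in codewords if any(c)))  # 16 3
```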
Parity-Check Equations
H = [ 1 1 1 0 1 0 0 ]        c1 + c2 + c3 + c5 = 0
    [ 1 0 1 1 0 1 0 ]        c1 + c3 + c4 + c6 = 0
    [ 1 1 0 1 0 0 1 ]        c1 + c2 + c4 + c7 = 0

[Figure: Tanner graph — variable nodes c1, …, c7 (bit, left) connected to one check node (constraint, right) per parity-check equation.]
The degree of a node is the number of edges connected to it.
[Figures: from Gallager, 1963.]
[Figure: comparison of irregular LDPC and (3,6)-regular LDPC codes, n = 10⁶, R = 1/2; Richardson, Shokrollahi, and Urbanke, 2001.]
• Degree distributions (node perspective):

L(x) = Σ_{i≥1} Li x^i        R(x) = Σ_{i≥2} Ri x^i

where Li is the number of variable nodes of degree i and Ri is the number of check nodes of degree i.

• Degree distributions (edge perspective):

λ(x) = Σ_{i≥1} λi x^{i−1}        ρ(x) = Σ_{i≥2} ρi x^{i−1}

where λi (resp. ρi) is the fraction of edges incident on variable (resp. check) nodes of degree i.

• Conversion rule:

λ(x) = L′(x) / L′(1),   ρ(x) = R′(x) / R′(1)
• Design rate:

R(λ, ρ) = 1 − (Σi ρi / i) / (Σi λi / i) = 1 − (∫₀¹ ρ(x) dx) / (∫₀¹ λ(x) dx)
• Under certain conditions related to codewords of weight n/2,
the design rate is achieved with high probability as n
increases.
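The design-rate formula can be evaluated directly; a small sketch (the dict-based representation of λ and ρ is my own choice):

```python
def design_rate(lam, rho):
    """R(lambda, rho) = 1 - (sum_i rho_i/i) / (sum_i lambda_i/i).
    lam and rho map degree i -> fraction of edges attached to
    degree-i variable / check nodes (equivalently, the ratio of the
    integrals of the edge-perspective polynomials)."""
    return 1 - (sum(r / i for i, r in rho.items())
                / sum(l / i for i, l in lam.items()))

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5.
print(design_rate({3: 1.0}, {6: 1.0}))  # 0.5
```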
[Figure: H put in lower-triangular form H′ (blocks of width n−k and k, with 1's on the diagonal of the (n−k) × (n−k) triangular block), so that the n−k parity bits can be computed by back-substitution.]

(complexity ~ O(n²))
• Another general encoding technique based upon "approximate lower triangulation" has complexity no more than O(n²), with the constant coefficient small enough to allow practical encoding for block lengths on the order of n = 10⁵.
Linear Encoding Complexity

• For many ensembles, the triangulation "gap" g is small, giving encoding complexity O(n + g²) — essentially linear in n in practice (Richardson and Urbanke, 2001).
• MAP bit decoding on the erasure channel:

x̂i^MAP(y) = argmax_{b ∈ {0,1}} Σ_{x ∈ C: xi = b} P_{X|Y}(x | y)
• Assume that codewords are transmitted equiprobably.
• If every codeword xX(y) satisfies xi=b, then set
xˆ MAP ( y ) b
• Otherwise, declare a bit erasure in position i.
• Estimated codeword: [0 0 0 0 0 0 0]
• Decoding successful!!
• This procedure would have worked no matter which codeword was transmitted.
• Initialization:
• Forward known variable node
values along outgoing edges
• Accumulate forwarded values at
check nodes and “record” the
parity
• Delete known variable nodes and
all outgoing edges
Peeling Decoder – Initialization

[Figure: known variable-node values (0's) are forwarded along outgoing edges; erased nodes are marked "?".]
[Figure: parity is accumulated at the check nodes; the known variable nodes and their edges are then deleted.]
Decoding with the Tanner Graph: a Peeling Decoder
• Decoding step:
• Select, if possible, a check node with one edge remaining;
forward its parity, thereby determining the connected
variable node
• Delete the check node and its outgoing edge
• Follow procedure in the initialization process at the known
variable node
• Termination
• If remaining graph is empty, the codeword is determined
• If decoding step gets stuck, declare decoding failure
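The initialization, decoding-step, and termination rules above can be sketched as a simple iterative routine; this is an illustrative implementation (function name mine), exercised on the (7,4) Hamming parity checks from earlier:

```python
def peel(H, y):
    """Peeling decoder sketch for the binary erasure channel.
    H: list of parity-check rows over {0,1}; y: received word,
    with None marking an erasure. Returns the completed word,
    or None if the decoder gets stuck (decoding failure)."""
    c = list(y)
    progress = True
    while progress and any(v is None for v in c):
        progress = False
        for row in H:
            pos = [j for j, h in enumerate(row) if h]
            unknown = [j for j in pos if c[j] is None]
            if len(unknown) == 1:
                # Degree-1 check: its accumulated parity over the known
                # positions determines the one remaining variable.
                c[unknown[0]] = sum(c[j] for j in pos if c[j] is not None) % 2
                progress = True
    return c if all(v is not None for v in c) else None

# (7,4) Hamming parity checks; codeword 0001011 with two erasures.
H = [[1,1,1,0,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]
print(peel(H, [0, None, 0, None, 0, 1, 1]))  # [0, 0, 0, 1, 0, 1, 1]
```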
Peeling Decoder – Steps 1–3

[Figures: at each step, a degree-1 check node is found; the check node and its edge are deleted, and its accumulated parity is forwarded, determining the attached variable node's value; the newly known variable node is then deleted and its value accumulated at its remaining checks. After Step 3, the residual graph is empty and decoding is complete.]
Message-Passing Decoding
• The local decoding procedure can be
described in terms of an iterative,
“message-passing” algorithm in
which all variable nodes and all
check nodes in parallel iteratively
pass messages along their adjacent
edges.
• The values of the code bits are
updated accordingly.
• The algorithm continues until all
erasures are filled in, or until the
completion of a specified number of
iterations.
Variable-to-Check Node Message

• If all incoming messages on the other edges (including the value from the channel) are erasures, the variable node sends v = ? on edge e; if any incoming message u is known, it sends v = u.

Check-to-Variable Node Message

• If all incoming messages v1, v2, v3, … on the other edges are known, the check node sends their mod-2 sum on edge e, e.g., u = v1 + v2 + v3; if any of them is an erasure, it sends u = ?.
Message-Passing Example – Rounds 1–3

[Figures: variable-to-check and check-to-variable messages (x = code bits, y = received values) over three rounds; the erasures are resolved one per round until decoding is complete after Round 3.]
Sub-optimality of Message-Passing Decoder
Hamming code: decoding of 3 erasures
• There are 7 patterns of 3 erasures that correspond to the support of a weight-3 codeword. These cannot be decoded by any decoder!
• The other 28 patterns of 3 erasures can be uniquely filled in by the optimal decoder.
• We just saw a pattern of 3 erasures that was corrected by the local decoder. Are there any that it cannot correct?
• Test: ? ? ? 0 0 1 0
• There is a unique way to fill the erasures and get a codeword: 1 1 0 0 0 1 0. The optimal decoder would find it.
• But every parity check has at least 2 erasures, so local decoding will not work!

[Figure: the codeword 0 0 0 1 0 1 1 on positions 1–7, with support set S = {4, 6, 7}.]
Message-passing is suboptimal!!
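The test pattern above can be checked by brute force; a short sketch (names mine) confirms that the optimal decoder's completion is unique, while every parity check sees at least two erasures, so local decoding stalls:

```python
import itertools

# (7,4) Hamming parity-check matrix from the earlier slides.
H = [[1,1,1,0,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def is_codeword(c):
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

y = [None, None, None, 0, 0, 1, 0]           # the test pattern ? ? ? 0 0 1 0

# Optimal decoding by enumeration: try all fillings of the three erasures.
fills = [f for f in itertools.product([0, 1], repeat=3)
         if is_codeword(list(f) + y[3:])]
print(fills)                                  # [(1, 1, 0)] -> 1 1 0 0 0 1 0

# Local decoding is stuck: every parity check involves >= 2 erasures.
for row in H:
    assert sum(1 for j, h in enumerate(row) if h and y[j] is None) >= 2
```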
• Bad news:
• Message-passing decoding on a Tanner graph is
not always optimal...
• Good news:
• For any code C, there is a parity-check matrix on whose Tanner graph message-passing is optimal, e.g., the matrix whose rows are the codewords of the dual code C⊥.
• Bad news:
• That Tanner graph may be very dense, so even
message-passing decoding is too complex.
H = [ 1 1 0 1 0 0 0 ]
    [ 0 0 1 1 0 1 0 ]        R = 4/7,  dmin = 2
    [ 0 0 0 1 1 0 1 ]
• Good news:
• If a Tanner graph is cycle-free, the message-
passing decoder is optimal!
• Bad news:
• Binary linear codes with cycle-free Tanner graphs
are necessarily weak...
• Good news:
• The Tanner graph of a long LDPC code behaves
almost like a cycle-free graph!
• Concentration
• With high probability, the performance of ℓ rounds
of MP decoding on a randomly selected (n, λ, ρ)
code converges to the ensemble average
performance as the length n→∞.
• Convergence to cycle-free performance
• The average performance of ℓ rounds of MP
decoding on the (n, λ, ρ) ensemble converges to the
performance on a graph with no cycles of length
≤ 2ℓ as the length n→∞.
• Threshold calculation
• There is a threshold probability p*(λ,ρ) such that,
for channel erasure probability ε < p*(λ,ρ), the
cycle-free error probability approaches 0 as the
number of iterations ℓ→∞.
• For a check node of degree d, the outgoing message is an erasure unless all d − 1 incoming messages are known:

Pr{u = ?} = 1 − (1 − p1)^{d−1}

• Averaging over the check-node edge-degree distribution ρd:

q1 = Σd ρd [1 − (1 − p1)^{d−1}] = 1 − ρ(1 − p1)
Density Evolution-2
• Consider a variable node of degree d with independent incoming messages.

Pr{v = ?} = Pr{u0 = ?} · Pr{ui = ?, for all i = 1, …, d−1} = p0 [1 − ρ(1 − p1)]^{d−1}

• The probability that edge e connects to a variable node of degree d is λd, so

Pr{v = ?} = Σd λd p0 [1 − ρ(1 − p1)]^{d−1} = p0 λ(1 − ρ(1 − p1))

• In general, after ℓ iterations:

pℓ = p0 λ(1 − ρ(1 − pℓ-1))
Threshold Property
pℓ = p0 λ(1 − ρ(1 − pℓ-1))

• If p0 < p*(λ, ρ), then lim_{ℓ→∞} pℓ = 0, for sufficiently large block length n.
• Example: (j,k) = (3,4):

f(x, p) = p [1 − (1 − x)³]²,   p* ≈ 0.6474

[Figure: iterates of x ← f(x, p) for p = 0.5, 0.6, 0.6474, 0.7; for p above p* the iteration stalls at a nonzero fixed point.]
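One way to reproduce p* ≈ 0.6474 numerically is the standard characterization p*(λ, ρ) = min over x ∈ (0,1] of x / λ(1 − ρ(1 − x)); a grid-search sketch for λ(x) = x², ρ(x) = x³ (function name mine):

```python
def threshold_34(n=100_000):
    """Grid search for p* = min over x in (0,1] of x / lambda(1 - rho(1 - x)),
    for the (3,4)-regular ensemble: lambda(x) = x^2, rho(x) = x^3."""
    return min(x / (1 - (1 - x) ** 3) ** 2
               for x in ((k + 1) / n for k in range(n)))

print(round(threshold_34(), 4))  # 0.6474
```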
• Equivalently, the threshold is

p*(λ, ρ) = min_{x ∈ (0,1]}  x / λ(1 − ρ(1 − x))

• Graphical determination: define

v_p(x) = p λ(x)   and   c(x) = 1 − ρ(1 − x)

Then pℓ → 0 if and only if

c(x) < v_p^{−1}(x),  for all x ∈ (0,1)

• For the example λ(x) = x², ρ(x) = x³:

v_p(x) = p x²,   c(x) = 1 − (1 − x)³,   v_p^{−1}(x) = (x/p)^{1/2}
[Figure: v_p^{−1}(x) for initial erasure probabilities p = 0.5, 0.6, 0.7, 0.8, plotted against c(x) = 1 − (1 − x)³; at the threshold p* ≈ 0.6474 the two curves become tangent.]
[Figure: graphical density-evolution iteration for p0 = 0.6 < p*, plotting the fraction of erasures from variable nodes (p) against the fraction of erasures from check nodes (q). Successive iterates: q1 = 0.936, p1 ≈ 0.5257; q2 ≈ 0.8933, p2 ≈ 0.4788; q3 ≈ 0.8584, p3 ≈ 0.4421; …, with pℓ → 0.]
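The iterates shown in the figures can be reproduced by running the (3,4) recursion directly; a sketch (names mine):

```python
def de_trace(p0, rounds=3):
    """BEC density evolution for the (3,4)-regular ensemble:
    check side   q_l = 1 - (1 - p_{l-1})^3
    variable side p_l = p0 * q_l^2"""
    p, out = p0, []
    for _ in range(rounds):
        q = 1 - (1 - p) ** 3
        p = p0 * q * q
        out.append((q, p))
    return out

for l, (q, p) in enumerate(de_trace(0.6), start=1):
    print(f"q{l} = {q:.4f}, p{l} = {p:.4f}")
# q1 = 0.9360, p1 = 0.5257
# q2 = 0.8933, p2 = 0.4788
# q3 = 0.8584, p3 = 0.4421
```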
x̂i(y) = argmax_{xi ∈ {±1}} Σ_{~xi} P_{X|Y}(x | y)
      = argmax_{xi ∈ {±1}} Σ_{~xi} P_{Y|X}(y | x) P_X(x)
      = argmax_{xi ∈ {±1}} Σ_{~xi} Π_{j=1}^{n} P_{Yj|Xj}(yj | xj) · f_C(x)

where Σ_{~xi} denotes the sum over all coordinates of x other than xi, and f_C is the indicator function of the code C.
• Variable-to-check:

v(b) = u_ch(b) Π_{k=1}^{d−1} uk(b),   for b ∈ {−1, +1}

• Check-to-variable:

u(b) = Σ_{{x1, …, x_{d−1}}} f(b, x1, …, x_{d−1}) Π_{k=1}^{d−1} vk(xk)

where f is the parity-check indicator function.
• Variable-to-check: a reasonable estimate to send to check node 0, based upon the other estimates, is the product of those estimates (suitably normalized):

P̂(x = b) = P_ch(x = b) Π_{k=1}^{d−1} Pk(x = b)

• We do not use the "intrinsic information" u0 provided by check node 0; the estimate v represents "extrinsic information".
Check-Node Update - Heuristic
u(b) = Σ_{{x1, …, x_{d−1}}} f(b, x1, …, x_{d−1}) Π_{k=1}^{d−1} vk(xk)

• Parity-check node equation: r ⊕ s ⊕ t = 0
• Over {−1, 1}, this translates to: r · s · t = 1

P(r = 1) = P(s = 1, t = 1) + P(s = −1, t = −1)
         = P(s = 1)P(t = 1) + P(s = −1)P(t = −1)   [by independence assumption]

Similarly,

P(r = −1) = P(s = 1, t = −1) + P(s = −1, t = 1)
          = P(s = 1)P(t = −1) + P(s = −1)P(t = 1)
Log-Likelihood Formulation
• The sum-product update is simplified using log-likelihoods
• For message u, define
L(u) = log [ u(1) / u(−1) ]

• Note that

u(1) = e^L(u) / (1 + e^L(u))   and   u(−1) = 1 / (1 + e^L(u))

• Variable-to-check update (u0 from the channel):

L(v) = Σ_{k=0}^{d−1} L(uk)

• Check-to-variable update:

L(u) = 2 tanh⁻¹ ( Π_{k=1}^{d−1} tanh( L(vk)/2 ) )

• For a degree-3 check node (r · s · t = 1):

(e^L(u) − 1) / (e^L(u) + 1) = [(e^L(v1) − 1) / (e^L(v1) + 1)] · [(e^L(v2) − 1) / (e^L(v2) + 1)]

• Noting that

tanh( L(a)/2 ) = (e^{L(a)/2} − e^{−L(a)/2}) / (e^{L(a)/2} + e^{−L(a)/2}) = (e^L(a) − 1) / (e^L(a) + 1)

we conclude

tanh( L(u)/2 ) = tanh( L(v1)/2 ) · tanh( L(v2)/2 )
• Threshold calculation
• There is a threshold channel parameter p*(λ,ρ)
such that, for any “better” channel parameter p,
the cycle-free error probability approaches 0 as
the number of iterations ℓ→∞.
• Here ⊛ denotes convolution of densities and is interpreted as an invertible operator on probability densities.
• We interpret λ(P) and ρ(P) as operations on densities:

λ(P) = Σ_{i≥2} λi P^{⊛(i−1)}   and   ρ(P) = Σ_{i≥2} ρi P^{⊛(i−1)}

• The fraction of incorrect (i.e., negative) messages after ℓ iterations is:

∫_{−∞}^{0} Pℓ(z) dz
Threshold
• The threshold σ* is the maximum σ such that

lim_{ℓ→∞} ∫_{−∞}^{0} Pℓ(z) dz = 0.

• Operationally, this represents the minimum SNR such that a code drawn from the (λ, ρ) ensemble will ensure reliable transmission as the block length approaches infinity.
• For a given rate, the objective is to optimize λ(x) and ρ(x) for the best threshold p*.
• The maximum left and right degrees are fixed.
• For some channels, the optimization procedure is not trivial, but there are some techniques that can be applied in practice.
BiAWGN, Rate R = ½, σ_Sh = 0.979

λmax    σ*
15      0.9622
20      0.9646
30      0.9690
40      0.9718
[Figure: bit-error rate of R = 1/2 LDPC codes on the AWGN channel for block lengths n = 10³, 10⁴, 10⁵, 10⁶; Richardson, Shokrollahi, and Urbanke, 2001.]

[Figure: optimized irregular LDPC code performing 0.0045 dB from the Shannon limit; Chung, et al., 2001.]
[Figure: performance of an R = 1/2, (3,6)-regular LDPC code; Hou, et al., 2001.]

[Figures: rate-7/8, regular j = 3, n = 495 LDPC code; Kurkoski, et al., 2002.]

[Figure: R = 0.7, n = 10⁶ LDPC code; Varnica and Kavcic, 2003.]
• B.M. Kurkoski, P.H. Siegel, and J.K. Wolf, "Joint message-passing decoding of LDPC codes over partial-response channels," IEEE Trans. Inform. Theory, vol. 48, no. 6, pp. 1410–1422, June 2002. (See also B.M. Kurkoski, P.H. Siegel, and J.K. Wolf, "Correction to 'Joint message-passing decoding of LDPC codes over partial-response channels'," IEEE Trans. Inform. Theory, vol. 49, no. 8, p. 2076, August 2003.)