
International Journal for Electro Computational World Knowledge Interface, Vol. 1, Issue 4, Dec 2011, ISSN No. 2249-541X

A Survey on LDPC Decoding Techniques

Varsha Vimal Sood¹, Dr. H. P. Sinha², Alka Kalra³

¹ ECE Dept., HIET, Kaithal-136027, Haryana, India
² Professor, ECE Dept., MMEC, MMU, Mullana-144003, Ambala, Haryana, India
³ ECE Dept., HCTM, Kaithal-136027, Haryana, India
¹ varsha.vimal@gmail.com
² drhpsinha@gmail.com
³ kalra_alka30@yahoo.co.in

Abstract— Error correcting codes are the most important tools for reliable communication over any noisy communication channel. An LDPC code is a linear error-correcting code that has a parity check matrix with a small number of nonzero elements in each row and column. LDPC codes are one of the best performing channel coding schemes, as they can theoretically reach Shannon's limit. The main advantage of LDPC codes is that they provide better bit error rate (BER) performance over different channels. LDPC codes are gaining increased attention in communication standards due to their linear decoding complexity and capacity-approaching performance. The high definition television (HDTV) satellite standard, known as the Digital Video Broadcasting (DVB-S2) transmission system standard, includes irregular LDPC codes. They are also strong contenders for coding in 4G wireless systems and hard disk drives. In this paper, the soft-decision, hard-decision and hybrid decoding algorithms used for LDPC decoding are discussed and compared.

Index Terms—LDPC, BF, SPA, WBF, IMWBF

I. INTRODUCTION

LDPC codes were first introduced by Gallager in his PhD thesis in 1960 [18]. An LDPC code is a linear error-correcting code that has a parity check matrix H with a small number of nonzero elements in each row and column. Although LDPC codes can be defined over any finite field, the majority of research is focused on LDPC codes over GF(2), in which "1" is the only nonzero element. The code is the set of vectors x such that Hx^T = 0 [13]. The main advantage of LDPC codes is that they provide a performance which is very close to the capacity for a lot of different channels, together with linear-time decoding algorithms. They are also suited for implementations that make heavy use of parallelism. Low-density parity-check (LDPC) codes are used by high-speed communication systems due to their near-Shannon-limit error-correcting capability. To achieve the desired bit error rate (BER), longer LDPC codes with higher code rate are preferred in practice. There have been simulations that perform within 0.04 dB of the Shannon limit at a bit error rate of 10⁻⁶ with a block length of 10⁷. An interesting fact is that those high-performance codes are irregular. LDPC allows for the reliable transmission, or storage, of data in noisy environments. Even short codes provide a substantial coding gain over uncoded or low-complexity coded systems. These results allow for lower transmission power and for transmission over noisier channels, with the same, if not better, reliability. LDPC codes were forgotten for a long time owing to their computational and decoding complexity, until their rediscovery by MacKay and Neal [14]. In recent years, iterative decoding of low-density parity-check (LDPC) codes has received much attention [9][16]. The standard belief-propagation (BP) decoding, which is based on soft decision decoding, can achieve excellent performance but often with heavy implementation complexity. Bit-flipping (BF) decoding is simple, but it often incurs considerable performance loss compared to BP decoding [10-11]. To bridge the performance gap between hard decision decoding and soft decision decoding, weighted BF (WBF) decoding [12] and its variants were proposed [1-8][17].
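The defining condition Hx^T = 0 can be made concrete with a small sketch in Python/NumPy. The tiny (2,4)-regular matrix H below is a toy example chosen purely for illustration; a practical LDPC matrix is far larger and far sparser.

```python
import numpy as np

# Toy (2,4)-regular parity check matrix over GF(2):
# every column contains 2 ones, every row contains 4 ones.
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 0, 1, 1]])

def is_codeword(H, x):
    # x belongs to the code iff every parity check is satisfied,
    # i.e. H x^T = 0 over GF(2)
    return not np.any(H @ x % 2)

print(is_codeword(H, np.array([1, 1, 0, 0, 1, 1, 0, 0])))  # True
print(is_codeword(H, np.array([1, 0, 0, 0, 0, 0, 0, 0])))  # False
```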


The improved version of the modified WBF algorithm proposed by Jiang et al., referred to as IMWBF, performs best so far when compared with the other variants of BF [8]. In this paper, we review the various types of hard and soft decision decoding techniques, such as BF and its variants, the SPA and the min-sum algorithm. For some high-rate LDPC codes of large row weight, it is shown in [7] that the LP-WBF algorithm and its improved version perform extraordinarily well. Unfortunately, its dual message-passing decoding does not work well. Compared to serial implementations, various WBF algorithms in their parallel form converge significantly faster and often perform better [8].

II. CLASSIFICATION OF LDPC CODES

Channel coding is a way of introducing controlled redundancy into a transmitted binary data stream in order to increase the reliability of transmission and to lower transmission power requirements. Channel coding is carried out by introducing redundant parity bits into the transmitted information stream. The need for a channel coding scheme exists only because of the noise introduced in the channel. Simple channel coding schemes allow the receiver to detect errors in the transmitted data, while more advanced channel coding schemes provide the ability to recover a finite amount of corrupted data. This results in more reliable communication and, in many cases, eliminates the need for retransmission. Although channel coding provides many benefits, it increases the number of bits being transmitted. This must be weighed when selecting the channel coding scheme that achieves the required bit error rate for a system.

LDPC codes can be classified into two categories, regular and irregular LDPC codes. A regular LDPC code is characterized by two values: wc, the number of ones in each column of the parity check matrix H, and wr, the number of ones in each row. An LDPC code is called regular if wc is constant for every column and wr = wc · (n/m) is also constant for every row. There are two different rates in LDPC codes. The true rate is the usual code rate R = K/N. The second rate is called the design rate, Rd = 1 − wc/wr. The relation between the two rates is R ≥ Rd. An irregular LDPC code is a code with varying numbers of ones in its rows and columns. Irregular codes are known to perform better than regular ones.

A. LDPC Encoding

The encoding of LDPC codes is similar to that of any linear block code, for which a generator matrix G and a parity check matrix H are defined. In order to achieve a systematic LDPC code, G must be of the form G = [I_k P], where I_k is a k × k identity matrix and P defines the parity bits. In some cases, a code may be specified by only the H matrix, and it then becomes necessary to solve for the G matrix. The H matrix is often in an arbitrary format, so it must be converted into the canonical form H = [P^T I_(n−k)], where I_(n−k) is an identity matrix and P^T defines the parity bits. Typically, encoding consists of using the G matrix to compute the parity bits, and decoding consists of using the H matrix and soft-decision decoding. In the encoding stage, the main task is identifying the positions of the fixed bits: since the code is systematic, the message portion of the transmitted codeword is identical to the message word itself.
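A minimal sketch of this systematic construction follows (Python/NumPy; the parity block P below is an arbitrary illustrative value, not derived from any standardized code):

```python
import numpy as np

k, n = 4, 8
# Illustrative parity block P (k x (n-k)); in practice P is obtained by
# converting a given H into the canonical form H = [P^T  I_(n-k)].
P = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1],
              [1, 1, 1, 0]])

G = np.hstack([np.eye(k, dtype=int), P])        # G = [I_k  P]
H = np.hstack([P.T, np.eye(n - k, dtype=int)])  # H = [P^T  I_(n-k)]

m = np.array([1, 0, 1, 1])     # message word
c = m @ G % 2                  # systematic codeword: first k bits equal m
assert not np.any(H @ c % 2)   # every codeword satisfies H c^T = 0
print(c)                       # -> [1 0 1 1 0 1 0 0]
```

Because H·G^T = P^T + P^T = 0 over GF(2), every codeword generated by G automatically satisfies all parity checks.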


B. LDPC Decoding

Soft-decision, hard-decision and hybrid decoding algorithms have been proposed for decoding LDPC codes. The decoding algorithms proposed by Gallager had two common features. Firstly, based on the observed channel output, these algorithms tried to iteratively find the codeword that was sent over the channel. Secondly, these algorithms operated locally, in the sense that they combined partial information that could then be used in further partial-information combining.

Hard decision decoding
1. Simple decoder construction
2. Input values do not take the channel information into account
3. Bit flipping algorithms
4. Faster convergence, with a significant impact on error correcting characteristics

Soft decision decoding
1. Complicated decoder construction
2. Channel information is considered in the decoding process
3. Message passing algorithms
4. Slowly converging, but more powerful methods of decoding

(i) Hard Decision Decoding: Hard-decision decoding algorithms are simple and fast, and their hardware implementations are easy. The major drawback of these algorithms is that they operate on hard decisions made at the channel output. This throws away valuable information coming from the channel, especially when we are dealing with continuous-output channels.

Bit Flip Decoding: The bit-flipping (BF) decoding algorithm is a hard-decision decoding algorithm which is much simpler than soft decision decoding techniques like the Sum-Product Algorithm (SPA) or its modifications, but does not perform as well. To reduce the performance gap between SPA and BF based decoders, variants of the latter such as the weighted bit-flipping (WBF) and improved modified weighted bit-flipping (IMWBF) algorithms have been proposed. They provide tradeoffs between computational complexity and error performance.

Algorithm for Bit Flipping decoding:
1. Define the parity-check equations from the parity check matrix H
2. Based on the parity-check equations, define evaluation metrics
3. The results of these metrics define a set of bits to be flipped
4. Flip the values of the selected bits
5. Check the result and proceed to the next iteration

Weighted bit-flipping algorithm
The standard WBF algorithm initially finds the most unreliable message node participating in each individual check. Since the magnitude of the received soft value determines the reliability of the hard decision, the least reliable message node's magnitude for each individual check is computed during the algorithm's initialization step as

y_min(m) = min_{n ∈ N(m)} |y_n| …(1)

where |y_n| denotes the absolute value, i.e. the magnitude, of the nth message node's soft value, while y_min(m) is the lowest magnitude among all message nodes participating in the mth check.

Description: The WBF algorithm smoothes the difference between simple BF and soft decoding. The main advantage of WBF is its simplicity. Like the BF algorithm, it too is based on the iterative approach.

Algorithm:
1. Flip only that bit of the codeword for which the metric

E_n = Σ_{m ∈ M(n)} (2s_m − 1) · y_min(m) …(2)

achieves the maximum value. The algorithm converges slowly because only one bit is flipped per iteration.
2. Compute the syndrome components for all check nodes:

s_m = Σ_{n ∈ N(m)} z_n mod 2 …(3)

where z_n is the hard decision on the nth received bit.
3. Compute the evaluation metric for each bit of the received codeword:

E_n = Σ_{m ∈ M(n)} (2s_m − 1) · y_min(m) …(4)

4. Flip the bit with the highest value of E_n.
5. Check the resulting codeword and proceed to the next iteration.

The WBF algorithm thus involves three steps: (1) The bit sequence obtained by hard decision is multiplied with the transpose of the parity check matrix (PCM), and the resultant syndrome vector is derived. (2) For each message node at position n, the WBF algorithm computes the error term E_n, which quantifies the probability that the bit at position n should be flipped. (3) The bit having the highest error term E_n is deemed the least reliable bit and hence flipped. The foregoing three steps are repeated until an all-zero syndrome vector s is obtained, or until the maximum affordable number of iterations has been reached. This loop is sketched in code below.
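The following sketch implements the loop of equations (1)–(4) in Python/NumPy; the BPSK mapping (bit 0 → +1, bit 1 → −1) and the iteration limit are assumptions made for illustration:

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    # y: received soft values, BPSK assumed (bit 0 -> +1, bit 1 -> -1)
    z = (y < 0).astype(int)                  # hard decisions
    # eq. (1): least reliable soft magnitude participating in each check
    w = np.array([np.abs(y[H[m] == 1]).min() for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H @ z % 2                        # eq. (3): syndrome
        if not s.any():                      # all parity checks satisfied
            break
        # eqs. (2)/(4): violated checks (s_m = 1) contribute +w_m,
        # satisfied checks (s_m = 0) contribute -w_m
        E = (2 * s - 1) * w @ H
        z[np.argmax(E)] ^= 1                 # flip the least reliable bit
    return z
```

Replacing E with a simple count of unsatisfied checks per bit turns the same skeleton into the plain BF decoder of the five steps listed above.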


Compared to serial implementations, various WBF algorithms in their parallel form converge significantly faster and often perform better. So far, the various reported WBF algorithms suffer from slow convergence, mainly due to their serial flipping strategy. To accelerate the rate of convergence, multiple bits should be allowed to flip in each iteration. The weighted bit-flipping (WBF) algorithm strikes a good trade-off between the associated decoding complexity and the achievable performance. The attractive property of the WBF algorithm is that during each iteration the weighted sum of the same values is computed, resulting in significantly lower decoding complexity in comparison to the SPA.

The drawback of the WBF algorithm is that it attributes the violation of a particular parity check to only the least reliable bit. Also, if the weighting factor utilized in the I-WBF algorithm is not optimum, the BER performance may be significantly degraded. The problem of convergence is significantly reduced by introducing parallelization into the WBF.

Improved weighted bit-flipping algorithm
As seen above, the WBF algorithm considers only the check-node based information during the evaluation of the error term E_n. By contrast, the I-WBF algorithm enhances the performance of the WBF algorithm, since it considers both the check-node based and the message-node based information during the evaluation of E_n. When the error term E_n is high, the corresponding bit is likely to be erroneous and hence ought to be flipped. However, when the soft value of a certain bit is high, the message node itself is demonstrating some confidence that the corresponding bit should not be flipped. Hence equation (4) is modified as:

E_n = Σ_{m ∈ M(n)} (2s_m − 1) · y_min(m) − α|y_n| …(5)

The term α|y_n| considers the extra information provided by the message node itself; thus a message node having a higher soft-value magnitude has a lower chance of being flipped, despite having a high error term E_n owing to encountering unreliable parity checks. We note, however, that for LDPC codes having different column weights, or operating at different SNRs, we should weight the effect of the soft value differently. Since I-WBF considers both the check-node based and the message-node based information during the evaluation of E_n, it enhances the performance of the WBF algorithm. Like the WBF, the I-WBF algorithm also attributes the violation of a particular parity check to only the least reliable bit. Thus, the BER performance of the I-WBF algorithm can be further improved by using more sophisticated bit-flipping, while avoiding any preprocessing such as finding the optimal weighting factor α of the I-WBF algorithm.

The disadvantage of the I-WBF algorithm is that the optimum weighting factor has to be found specifically for each particular column weight, and its value should be optimized for each individual SNR. Furthermore, both the WBF and the I-WBF algorithms consider only the specific check-node based information, which relies on the message node having the lowest soft value. However, all message nodes participating in the mth parity check contribute, i.e. all message nodes might be liable to change if the check they participate in is violated. Still, for two different message nodes participating in the same violated parity check, the probability that the check is violated owing to the message node having a high soft magnitude is lower than that associated with the message node having a low soft magnitude. The performance of the decoder improves significantly by introducing the parallelization feature into WBF.

Parallel Weighted Bit Flipping (PWBF)
The parallel weighted bit-flipping algorithm improves on the WBF algorithms in convergence time and number of iterations. It introduces the ability to flip several bits in parallel within one iteration. Parallelization improves the decoding speed and can even improve the overall performance of the decoder.

PWBF Algorithm (sketched in code after the list):
1. Compute the syndrome component for each parity-check equation
2. Compute the WBF metric for each bit of the received codeword
3. Each unsatisfied parity-check equation "votes" for one bit to be flipped
4. If the number of votes for a particular bit exceeds the threshold, this bit is flipped
5. All selected bits are flipped in parallel

The major advantage of the PWBF algorithm over the above-mentioned techniques is that the overall performance of the decoder increases due to the gains in speed and convergence time.
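A sketch of the voting procedure in Python/NumPy follows. The vote threshold, the BPSK mapping, and the rule by which a check selects its candidate (here, the participant with the largest WBF error term) are illustrative assumptions; published PWBF variants differ in these details:

```python
import numpy as np

def pwbf_decode(H, y, threshold=2, max_iter=50):
    z = (y < 0).astype(int)                  # hard decisions (BPSK assumed)
    w = np.array([np.abs(y[H[m] == 1]).min() for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H @ z % 2                        # step 1: syndrome components
        if not s.any():
            break
        E = (2 * s - 1) * w @ H              # step 2: WBF metric, eq. (4)
        votes = np.zeros(H.shape[1], dtype=int)
        for m in np.flatnonzero(s):          # step 3: unsatisfied checks vote
            bits = np.flatnonzero(H[m])
            votes[bits[np.argmax(E[bits])]] += 1
        flip = votes >= threshold            # step 4: threshold test
        if not flip.any():                   # fallback so progress is made
            flip[np.argmax(votes)] = True
        z[flip] ^= 1                         # step 5: flip all bits in parallel
    return z
```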


(ii) Soft Decision Decoding:
Soft decision decoding algorithms have better BER performance. Among the soft-decision decoding algorithms, the sum-product and min-sum algorithms are studied and expounded here. The message passing algorithm operates on the factor graph and computes the marginal functions associated with the global code constraint. The message passing algorithm exchanges extrinsic messages along the edges of the factor graph. At the nodes, local decoding operations update the extrinsic messages according to the message update rules. The messages can be expressed either in terms of probabilities or of L-values. The sum-product algorithm performs symbol-wise decoding, whereas the min-sum algorithm performs block-wise decoding. However, the performance of the sum-product algorithm is better (lower word error rate) than that of the min-sum algorithm at a fixed SNR. The min-sum algorithm can be considered an approximation of the sum-product algorithm. If the factor graph is cycle free, then the message passing algorithm is optimal (maximum likelihood); otherwise it is sub-optimal.

Sum-Product Algorithm (SPA)
The sum-product algorithm is a common form of the message-passing algorithm. The algorithm uses the channel information, i.e. the values coming from the channel. It forms a probabilistic value for each received bit and iteratively refreshes this value to find an estimate for that bit. L(q_ij) and L(r_ji) denote the messages that are passed between the ith variable node and the jth check node. In representing the connectivity of the factor graph, Col[i] refers to the set of all check nodes adjacent to the ith variable node, and Row[j] refers to the set of all variable nodes adjacent to the jth check node. Variable-to-check and check-to-variable messages are computed using the following equations:

L(q_ij) = Σ_{j′ ∈ Col[i]\{j}} L(r_j′i) + L(c_i) …(6)

L(r_ji) = [Π_{i′ ∈ Row[j]\{i}} sign(L(q_i′j))] · Φ(Σ_{i′ ∈ Row[j]\{i}} Φ(|L(q_i′j)|)) …(7)

where

Φ(x) = −log(tanh(x/2)), x ≥ 0 …(8)

L(q_ij) is referred to as the variable-to-check message; it is the message sent by variable node i to check node j. Every message always carries the pair P(0) and P(1), which stands for the amount of belief that the bit is a "0" or a "1". L(r_ji) is the check-to-variable message sent by check node j to variable node i. Again, there is a P(0) and a P(1) that indicate the (current) amount of belief that the bit is a "0" or a "1". The posterior LLR is computed in each iteration using the update

L(Q_i) = L(c_i) + Σ_{j ∈ Col[i]} L(r_ji) …(9)

A hard decision is made based on the posterior LLR in every iteration. The iterative decoding algorithm is allowed to run until the hard decisions satisfy all the parity check equations or until an upper limit on the iteration number is reached, whichever occurs earlier. These update rules are sketched in code below.
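A compact log-domain sketch of equations (6)–(9) in Python/NumPy follows. The channel-LLR convention L(c_i) = 2y_i/σ² for BPSK over AWGN, the clipping bounds inside Φ, and the iteration limit are assumptions made for illustration:

```python
import numpy as np

def phi(x):
    # eq. (8): Phi(x) = -log(tanh(x/2)); input clipped to avoid log(0)
    x = np.clip(x, 1e-12, 30.0)
    return -np.log(np.tanh(x / 2.0))

def spa_decode(H, Lc, max_iter=50):
    # Lc: channel LLRs L(c_i), e.g. 2*y/sigma^2 for BPSK over AWGN (assumed)
    mchk, n = H.shape
    rows, cols = np.nonzero(H)           # edges of the factor graph
    q = Lc[cols].astype(float)           # variable-to-check messages L(q_ij)
    r = np.zeros_like(q)                 # check-to-variable messages L(r_ji)
    z = (Lc < 0).astype(int)
    for _ in range(max_iter):
        # check-node update, eq. (7): product of signs times
        # Phi(sum of Phi(magnitudes)) over Row[j] \ {i}
        for j in range(mchk):
            e = np.flatnonzero(rows == j)
            sgn = np.where(q[e] < 0, -1.0, 1.0)
            mag = phi(np.abs(q[e]))
            for t, edge in enumerate(e):
                others = np.delete(np.arange(len(e)), t)
                r[edge] = sgn[others].prod() * phi(mag[others].sum())
        # posterior LLR, eq. (9): channel LLR plus all incoming r's
        LQ = Lc.astype(float)
        np.add.at(LQ, cols, r)
        z = (LQ < 0).astype(int)         # hard decision in every iteration
        if not np.any(H @ z % 2):        # stop when all checks are satisfied
            break
        q = LQ[cols] - r                 # eq. (6): extrinsic, exclude own check
    return z
```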

In the sum-product algorithm (and in other message-passing algorithms), the messages that a variable node sends to its neighboring check nodes represent its belief about its own value together with a reliability measure. Similarly, the message that a check node sends to a neighboring variable node is a belief about the value of that variable node together with some reliability measure. In a sum-product decoder, a variable node receives the beliefs that all the neighboring check nodes have about it. The variable node processes these messages (in this case a simple summation) and sends its updated belief about itself back to the neighboring check nodes. It can be seen that the reliability of messages at a variable node increases as it receives a number of (mostly correct) beliefs about itself. This is very similar to a repetition code, which receives multiple beliefs for a single bit from the channel and is hence able to make a more reliable decision. The main task of a check node is to force its neighbors to satisfy an even parity. So, for a neighboring variable node v, it processes the beliefs of the other neighboring variable nodes about themselves and sends a message to v which indicates the belief of this check node about v. The sign of this message is chosen to force an even parity check, and its magnitude depends on the reliability of the other incoming messages. Therefore, similar to a variable node, an outgoing message from a check node is due to the processing of all the incoming edges except the one which receives the outgoing message. However, unlike a variable node, a check node receives the beliefs of all the neighboring variable nodes about their own values. As a result, the reliability of the outgoing message is even less than that of the least reliable incoming message. In other words, the reliability of messages decreases at the check nodes. So, in simple words, at a check node we force the neighboring variable nodes to satisfy an even parity check at the expense of losing reliability in the messages, but at a variable node we strengthen the reliabilities. This process is repeated iteratively to clear the errors introduced by the channel.

The sum-product algorithm (SPA) achieves near-capacity performance asymptotically. The sum-product algorithm makes use of the soft received signal, which is vital when continuous-output channels are used. However, the computational complexity of the sum-product algorithm is very high. The SPA for decoding LDPC codes is hard to implement in practice, since it requires nonlinear functions and multiplications. To reduce the computational complexity of the SPA, an approximation of the exact check-node update L(r_ji) is taken instead, which simplifies the update rule at the check node. This modified version of the SPA is known as the Min-Sum algorithm.

Min-sum algorithm:
The min-sum algorithm (MSA), which replaces the nonlinear check node operation by a single minimum operation, was proposed to reduce the complexity of the standard SPA at the cost of a noticeable degradation in decoding performance. In the min-sum algorithm, the update rule at a variable node is the same as in the sum-product algorithm, but the update rule at a check node is simplified by taking the minimum of the incoming magnitudes instead of the exact L(r_ji), of which it is an approximation. The magnitude of L(r_ji) computed using the min-sum approximation is usually overestimated, and correction terms are introduced to reduce the approximation error. This approximation becomes more accurate as the magnitude of the messages increases. So in later iterations, when the magnitude of the messages is usually large, the performance of this algorithm is almost the same as that of the sum-product algorithm. The min-sum algorithm is less complex to implement, but it requires approximately an additional 0.5 dB of signal-to-noise ratio Eb/N0 to achieve the same bit error rate as the sum-product algorithm when a regular LDPC code is used for transmission over an additive white Gaussian noise (AWGN) channel with binary input. For irregular codes, the loss in performance can be up to 1.0 dB. The simplified check-node rule is sketched below.
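The following sketch shows the simplified rule that replaces the Φ-based check-node update of eq. (7) (Python/NumPy; the normalization factor 0.75 is an illustrative choice of the correction term mentioned above, not a value taken from this paper):

```python
import numpy as np

def minsum_check_update(q_in, norm=0.75):
    # Min-sum replacement for eq. (7): the outgoing magnitude is the
    # minimum incoming magnitude instead of Phi(sum(Phi(.))); `norm`
    # damps the systematic overestimate (normalized min-sum).
    sgn = np.where(q_in < 0, -1.0, 1.0)
    mag = np.abs(q_in)
    r = np.empty_like(q_in, dtype=float)
    for t in range(len(q_in)):
        others = np.delete(np.arange(len(q_in)), t)
        r[t] = norm * sgn[others].prod() * mag[others].min()
    return r

# Messages entering one check node:
print(minsum_check_update(np.array([1.8, -0.4, 2.5, 0.9])))
# -> [-0.3    0.675  -0.3   -0.3 ]
```

Substituting this routine for the Φ-based inner loop of the SPA sketch above yields a normalized min-sum decoder.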
III. CONCLUSION

LDPC codes are one of the best performing channel coding schemes, as they can theoretically reach Shannon's limit. LDPC codes can be thought of as a generic term for a class of error correcting codes distinguished from others by a very sparse parity-check matrix. LDPC performance improves as the block length increases, so these codes can theoretically achieve Shannon's limit as the block length goes to infinity. We have discussed LDPC decoding algorithms and compared them. Hard decision decoding algorithms are simple and fast and their hardware implementations are easy, but the input values do not take the channel information into account. Next generation optical communication systems operate at 40 Gb/s, and current circuit technology does not allow soft decision decoding algorithms to operate at these data rates. A soft decision decoder often needs several tens or hundreds of serial iterations for the iterative decoding process to converge, which is not always realistic for high speed communications because of the high decoding delay. Furthermore, because of the nature of its decoding, it can become too complex for hardware implementation. Compared to the hard decision decoding algorithms, however, soft decision algorithms give better performance, since the channel information is considered in the decoding process; they are the more powerful methods of decoding LDPC codes. Depending upon the application, one can choose the most suitable technique out of the two categories of hard and soft decision decoding.

REFERENCES

[1] L. R. Varshney, "Performance of LDPC codes under faulty iterative decoding," IEEE Trans. Inf. Theory, vol. 57, Jul. 2011.

[2] A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, "Low-complexity decoding for non-binary LDPC codes in higher order fields," IEEE Trans. Commun., vol. 58, May 2010.
[3] S. Hemati and A. H. Banihashemi, "Dynamics and performance analysis of analog iterative decoding for low-density parity-check (LDPC) codes," IEEE Trans. Commun., vol. 54, 2006.
[4] X. Wu, M. Jiang, C. Zhao, and X. You, "Fast weighted bit-flipping decoding of finite-geometry LDPC codes," in Proc. 2006 IEEE Information Theory Workshop, Oct. 2006, pp. 132–134.
[5] C.-H. Lee and W. Wolf, "Implementation-efficient reliability ratio based weighted bit-flipping decoding for LDPC codes," Electron. Lett., vol. 41, pp. 1356–1358, 2005.
[6] Z. Liu and D. A. Pados, "Low complexity decoding of finite geometry LDPC codes," IEEE Trans. Commun., vol. 53, pp. 415–421, 2005.
[7] M. Shan, C. Zhao, and M. Jiang, "Improved weighted bit-flipping algorithm for decoding LDPC codes," IEE Proc. Commun., vol. 152, pp. 919–922, 2005.
[8] M. Jiang, C. Zhao, Z. Shi, and Y. Chen, "An improvement on the modified weighted bit flipping decoding algorithm for LDPC codes," IEEE Commun. Lett., vol. 9, pp. 814–816, 2005.
[9] J. Zhang, "Iterative decoding of low density parity check codes and turbo codes," 2005.
[10] F. Guo and L. Hanzo, "Reliability ratio based weighted bit-flipping decoding for low-density parity-check codes," Electron. Lett., vol. 40, pp. 1356–1358, Oct. 2004.
[11] J. Chen and M. P. C. Fossorier, "Near-optimum universal belief propagation based decoding of low-density parity check codes," IEEE Trans. Commun., vol. 50, pp. 406–414, Mar. 2002.
[12] Y. Kou, S. Lin, and M. Fossorier, "Low-density parity-check codes based on finite geometries: a rediscovery and new results," IEEE Trans. Inf. Theory, vol. 47, pp. 2711–2736, 2001.
[13] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inf. Theory, vol. 45, pp. 399–432, Mar. 1999.
[14] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electron. Lett., vol. 33, pp. 457–458, 1997.
[15] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inf. Theory, vol. 27, no. 5, pp. 533–547, 1981.
[16] Y. Kou, S. Lin, and M. Fossorier, "Low density parity check codes based on finite geometries: a rediscovery and more," IEEE Trans. Inf. Theory, vol. 47, pp. 2711–2736, Nov. 2001.
[17] J. Zhang and M. Fossorier, "A modified weighted bit-flipping decoding of low density parity-check codes," IEEE Commun. Lett., vol. 8, pp. 165–167, Mar. 2004.
[18] R. Gallager, "Low density parity check codes," IRE Trans. Inf. Theory, vol. 8, pp. 21–28, 1962.

