Difference to Sum Ratio Factor based Min-Sum decoding

for Low Density Parity Check Codes




Mohammad Rakibul Islam¹, Khandaker Sultan Mahmood, Md. Moshiur Rahman Farazi, Md. Farhan Tasnim Oshim, Mohd. Azfar Nazim, Iftekhar Hasan
Dept. of Electrical and Electronic Engineering, Islamic University of Technology
Board Bazar, Gazipur-1704, Dhaka, Bangladesh
E-mail: rakibultowhid@yahoo.com
¹ Corresponding author

Abstract— Low Density Parity Check (LDPC) codes give groundbreaking performance which is known to approach Shannon's limit for sufficiently large block lengths. Historically and recently, LDPC codes have been known to outperform concatenated coding. In this paper, a proposal to modify the standard Min-Sum (MS) algorithm for decoding LDPC codes is presented. This is done by introducing a difference to sum ratio factor in the check to bit node updating process. The algorithm is further extended by implementing the hard decision of the Bit-Flipping (BF) algorithm over the soft decision of the MS algorithm. Simulation results demonstrate that the proposed algorithms are effective in imparting better performance in terms of a lower bit error rate (BER) at medium to high signal to noise ratio (SNR) when compared to the traditional MS algorithm, while adding a fair amount of complexity.

Keywords— Low-Density Parity-Check Codes; Min-Sum algorithm; Bit-Flipping algorithm; DSR factor; bit error rate (BER).

I. INTRODUCTION

Low density parity-check (LDPC) codes were first proposed by Gallager in 1963 [1] and, after more than 30 years, were rediscovered by MacKay and Neal [2] and Sipser and Spielman [3]. Extensive research on binary LDPC codes has shown that they achieve rates very close to the Shannon limit. Such codes have already been adopted in communication standards such as digital video broadcasting (DVB-S2), WiFi (802.11n), WiMAX (802.16e) and 10 Gigabit Ethernet (10GBASE-T). Codes of various schemes, with a wide range of complexity and from fairly effective to highly accurate performance, have been achieved.

Since the rediscovery by MacKay and Neal, several researchers have proposed algorithms to bring the performance of LDPC codes near the Shannon limit. M. Fossorier et al. [5] proposed a new algorithm to reduce the complexity of LDPC decoding based on belief propagation. Improvements of belief propagation decoding for LDPC codes were made by K. Chung et al. [6], Yuan-Mao Chang et al. [7], N. Varnica et al. [8], S. Gounai et al. [9] and Nedeljko Varnica et al. [10]. For the Binary Input Additive White Gaussian Noise (BIAWGN) channel, an LDPC code of length one million constructed by Richardson et al. [11] achieved a bit-error probability of 10^-6 less than 0.13 dB away from capacity, surpassing the best (Turbo) codes known hitherto. On that basis, Chung et al. [12] constructed another LDPC code which performed within 0.04 dB of the Shannon limit (the theoretical capacity of any channel set by Shannon for randomly constructed codes) at a bit error rate of 10^-6 using a block length of 10^7. The bit flipping algorithm [2] is hard decision decoding of LDPC codes where the messages are binary bits. The first improved bit flipping algorithm was proposed by N. Miladinovic and M.P.C. Fossorier [13]. Later, researchers devised weighted bit flipping [14], modified and improved weighted bit-flipping [15-17], parallel weighted bit-flipping [18], improved parallel weighted bit flipping [19], fast parallel weighted bit flipping [20] and low latency low power bit flipping [21] algorithms to increase the performance of LDPC codes. Bootstrap decoding [22] is applied to the weighted bit-flipping algorithm: it is initiated by first erasing a number of less reliable bits, then assigning new values and reliabilities to the erased bits by passing messages from non-erased bits through the reliable check equations. Stochastic decoding, where probabilities are encoded by a Bernoulli sequence, was first introduced in [23] for an acyclic (16, 8) LDPC code, and a later improvement for capacity-approaching LDPC codes on factor graphs was made in [24].

The paper is organized as follows: low density parity check codes are discussed in Section II. Sections III and IV review the standard Belief Propagation (BP) and Min-Sum (MS) decoding algorithms. Our proposed algorithms, the Min-Sum algorithm with Difference to Sum Ratio (DSR) factor (A) and the Bit Flipping Min-Sum (BFMS) algorithm with DSR factor (B), are discussed in Section V. The corresponding results and their interpretation are analyzed in Section VI. Section VII concludes the paper.

978-1-4577-1719-2/12/$26.00 ©2012 IEEE
II. LOW DENSITY PARITY CHECK CODES

LDPC codes are block codes with parity-check matrices H that contain only a very small number of non-zero entries. This sparseness of H is essential for an iterative decoding complexity that increases only linearly with the code length. The biggest difference between LDPC codes and classical block codes is how they are decoded. Classical block codes are generally decoded with Maximum Likelihood (ML) like decoding algorithms, and so are usually short and designed algebraically to make this task less complex. LDPC codes, however, are decoded iteratively using a graphical representation of their parity-check matrix, and so are much longer, less structured, and designed with the properties of H in mind.
A sparse binary parity check matrix H = [h_ij] of size M × N can be used to completely describe a binary (N, K) (w_c, w_r) LDPC code, where N denotes the length and K the dimension, with M ≥ N − K due to redundant parity check sums. w_c denotes the number of ones in each column and w_r the number of ones in each row. If w_c and w_r are fixed, the code is termed regular. An LDPC code may be represented by a bipartite graph, called a Tanner graph. Each row of the parity check matrix represents a parity check constraint f_i, where i = 1, ..., M, and each column represents a variable node c_j, where j = 1, ..., N. A sample parity check matrix from [25] is given in (1).

        | 1 1 0 1 0 0 |
    H = | 0 1 1 0 1 0 |        (1)
        | 1 0 0 0 1 1 |
        | 0 0 1 1 0 1 |





Fig. 1 Tanner graph representation of a (w_c = 2, w_r = 3) LDPC code with code length N = 6 bits.

The corresponding Tanner graph representation of the parity check matrix H is shown in Fig. 1. For a given row i, wherever the entry h_ij is 1, a connection is made between the coded bit c_j and the parity check equation f_i in the Tanner graph. The Tanner graph is useful in representing the links through which all the check nodes and variable nodes are updated in the decoding process.
Some of the extensively used LDPC decoders employ the Belief Propagation (BP) [4] and Min-Sum (MS) [6] algorithms, which are able to achieve highly efficient bit-error rate (BER) performance. The main decoding algorithm for LDPC codes is BP, but it requires complex computation at the check nodes and is therefore difficult to implement in hardware. In the MS algorithm, decoding performance is traded off against computational complexity. In order to obtain better decoding performance, two algorithms are devised that modify the Min-Sum algorithm with the Bit Flipping algorithm and introduce a unique Difference to Sum Ratio (DSR) factor. While Min-Sum decoding takes a soft decision approach to decoding the codeword, the Bit-Flipping algorithm imposes a hard decision on the same codeword. This, in turn, reduces the probability of sign changes in the decoded vector, from which the codeword is obtained more efficiently.
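The mapping between H and its Tanner graph can be sketched in a few lines of Python (an illustration, not from the paper; all names are ours), using the sample matrix from (1):

```python
# Sample (w_c = 2, w_r = 3) parity-check matrix from (1): M = 4 checks, N = 6 bits.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

M, N = len(H), len(H[0])

# Tanner graph as adjacency lists: an edge joins check node f_i and
# variable node c_j exactly where H[i][j] = 1.
check_neighbors = [[j for j in range(N) if H[i][j]] for i in range(M)]  # V_i
var_neighbors = [[i for i in range(M) if H[i][j]] for j in range(N)]    # C_j
```

For this H, every check touches w_r = 3 variables and every variable touches w_c = 2 checks, which is exactly the regularity condition described above.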

III. STANDARD BELIEF PROPAGATION (BP)
DECODING

Sum-product decoding is a soft decoding approach to decoding a transmitted message. Since log likelihood ratios are utilized, this algorithm is often termed log likelihood decoding. Let a binary message (x_1, x_2, ..., x_N) with N message bits be transmitted over an Additive White Gaussian Noise (AWGN) channel. Let the received message be (y_1, y_2, ..., y_N) and the binary hard decisions obtained from the received message be (b_1, b_2, ..., b_N):

    b_i = { 0,  y_i > 0
          { 1,  otherwise        where i = 1 to N

For BP decoding, the extrinsic information passed between nodes is given as probabilities rather than hard decisions. In the BP algorithm, LDPC codes are decoded iteratively in two sequential steps: check node updating and variable node updating. During the check node update, parity check operations are performed after receiving information from the neighboring variable nodes, and the results are sent back to the neighboring variable nodes. During the variable node update, the decoded bits and their corresponding soft information are updated from the check nodes, and the results are sent back to the check nodes.

Notations:

LLR_j : the information delivered by the log-likelihood ratio of the received symbol y_j,

    LLR_j = ln [ P(x_j = 0 | y_j) / P(x_j = 1 | y_j) ]        (2)

σ² : channel variance
β_ij : message from check node i to variable node j
α_ij : message from variable node j to check node i
V_i\j : all variable nodes in V_i except node j
C_j\i : all check nodes in C_j except node i

The steps for decoding by the Sum-Product algorithm are briefly discussed below:

a) Initialization: Set the iteration counter itr = 0 and the maximum number of iterations itr_max = l_max. Set the LLR of the channel output LLR_j = 2y_j/σ² for the Additive White Gaussian Noise (AWGN) channel. For each i and j, α_ij is initialized to the value of the extrinsic LLR of the received value y_j, which is LLR_j. Then the values of α and β are calculated and exchanged between the variable and check nodes until the parity-check equation is satisfied.

b) Check node computation: Compute β_ij from the messages originating from all other variable nodes connected to check node C_i. The nonlinear function Φ is defined as

    Φ(x) = -log(tanh(|x|/2))        (3)

The function Φ is used to update the check node using the following equation:

    β_ij^t = [ ∏_{j'∈V_i\j} sgn(α_ij'^t) ] · Φ( Σ_{j'∈V_i\j} Φ(|α_ij'^t|) )        (4)

c) Variable node computation: Variable nodes are updated after completion of the check node computation. The α_ij messages are computed using the channel information LLR_j and the other check node values connected to that particular variable node V_j:

    α_ij^t = LLR_j + Σ_{i'∈C_j\i} β_i'j^t        (5)

After computing α_ij, all the message bits in the j-th column are again updated by adding the channel information LLR_j. The syndrome check z_j is defined as

    z_j = LLR_j + Σ_{i∈C_j} β_ij^t        (6)

d) Message computation: From the updated vector z, the new estimated message bits are extracted. As this computation is done by sum-product decoding, the decision criterion for the new estimated message x̂ is

    x̂_j = { 1,  z_j < 0
           { 0,  z_j ≥ 0        (7)

e) Validation: The received code word is now x̂. The stopping criterion is

    H·x̂^T = { 0,          x̂ is a valid code word
            { otherwise,  x̂ is not a valid code word

If a valid code word is obtained the decoding stops; otherwise the decoder repeats the iteration from step (b) onwards.
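Steps (a)-(e) above can be sketched as a minimal sum-product decoder in Python (an illustrative sketch under the paper's AWGN/LLR conventions; the function and variable names are ours, and a small numerical clamp is added to keep Φ finite at zero):

```python
import math

# Sample parity-check matrix from (1).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

def phi(x):
    # phi(x) = -log(tanh(|x|/2)), eq. (3); clamped to avoid log(0) at x = 0
    x = max(abs(x), 1e-12)
    return -math.log(math.tanh(x / 2))

def bp_decode(H, y, sigma2, max_iter=10):
    M, N = len(H), len(H[0])
    Vi = [[j for j in range(N) if H[i][j]] for i in range(M)]  # vars per check
    Cj = [[i for i in range(M) if H[i][j]] for j in range(N)]  # checks per var
    llr = [2 * yj / sigma2 for yj in y]                        # channel LLRs, init (a)
    alpha = [[llr[j] for j in range(N)] for _ in range(M)]     # var -> check messages
    beta = [[0.0] * N for _ in range(M)]                       # check -> var messages
    x_hat = [0] * N
    for _ in range(max_iter):
        for i in range(M):                                     # check update, eq. (4)
            for j in Vi[i]:
                others = [alpha[i][k] for k in Vi[i] if k != j]
                sign = -1 if sum(a < 0 for a in others) % 2 else 1
                beta[i][j] = sign * phi(sum(phi(a) for a in others))
        z = [0.0] * N
        for j in range(N):                                     # variable update, eq. (5)
            for i in Cj[j]:
                alpha[i][j] = llr[j] + sum(beta[k][j] for k in Cj[j] if k != i)
            z[j] = llr[j] + sum(beta[i][j] for i in Cj[j])     # total LLR, eq. (6)
        x_hat = [1 if zj < 0 else 0 for zj in z]               # hard decision, eq. (7)
        if all(sum(H[i][j] * x_hat[j] for j in range(N)) % 2 == 0 for i in range(M)):
            break                                              # valid codeword: stop
    return x_hat
```

For example, an all-zero codeword sent as BPSK +1 values with one bit pushed negative by noise is still decoded back to the all-zero word by this sketch.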


IV. MIN-SUM (MS) DECODING

The Min-Sum (MS) algorithm is a soft decoding approach to decoding a transmitted message with lower complexity than the Belief Propagation (BP) algorithm [26][27]. Let a binary message (x_1, x_2, ..., x_N) with N message bits be transmitted over an additive white Gaussian noise (AWGN) channel. Let the received message be (y_1, y_2, ..., y_N) and the binary hard decisions obtained from the received message be (b_1, b_2, ..., b_N):

    b_i = { 0,  y_i > 0
          { 1,  otherwise        where i = 1 to N

In the MS algorithm, LDPC codes are decoded iteratively in two sequential steps: check node updating and variable node updating. During the check node update, parity check operations are performed after receiving information from the neighboring variable nodes, and the results are sent back to the neighboring variable nodes. During the variable node update, the decoded bits and their corresponding soft information are updated from the check nodes, and the results are sent back to the check nodes.

The steps for this decoding algorithm are as follows:

a) Initialization: Initialize α_ij with the channel reliability (CR):

    α_ij^{t=0} = LLR_j = 2y_j/σ²,   j = 0, ..., N-1,  i ∈ M(j)        (8)

b) Check node computation: The check node update equation is given by

    β_ij^t = [ ∏_{j'∈V_i\j} sgn(α_ij'^{(t-1)}) ] · min_{j'∈V_i\j} |α_ij'^{(t-1)}|        (9)

c) Variable node update: The variable node update equation is given by

    α_ij^t = LLR_j + Σ_{i'∈C_j\i} β_i'j^t        (10)

The message is then computed to give an estimated code word:

    z_j = LLR_j + Σ_{i∈C_j} β_ij^t        (11)

d) Message computation: The decision criterion for this algorithm is given as follows:

    x̂_j = { 1,  z_j < 0
           { 0,  otherwise

e) Validation: The received codeword is x̂ and the stopping criterion is

    H·x̂^T = { 0,          x̂ is a valid code word
            { otherwise,  x̂ is not a valid code word

If a valid code word is obtained the decoding stops; otherwise the decoder repeats the iteration from step (b) onwards.
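The check node update in (9) — the product of the signs times the minimum magnitude over the other edges of the check — can be sketched as follows (an illustration; the function and argument names are ours):

```python
def ms_check_update(alpha_row, neighbors):
    """Min-Sum check-to-variable messages, eq. (9): for each neighbor j,
    take the sign product and the minimum magnitude over the *other*
    variable nodes V_i\\j connected to this check."""
    beta = {}
    for j in neighbors:
        others = [alpha_row[k] for k in neighbors if k != j]
        sign = 1
        for a in others:
            if a < 0:
                sign = -sign
        beta[j] = sign * min(abs(a) for a in others)
    return beta
```

For instance, for a degree-3 check with incoming messages 1.5, -0.4 and 2.0, the outgoing messages are -0.4, 1.5 and -0.4 respectively, which shows why MS replaces the Φ computation of BP with a single comparison.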


V. PROPOSED ALGORITHMS

A. Min-Sum algorithm with Difference to Sum Ratio (DSR) Factor

Our proposed algorithm introduces a factor λ in the check to bit node updating process of the MS algorithm described in step (b). This factor λ lies between 0 and 1, i.e. λ ∈ [0, 1], and is computed as the ratio of the difference of the estimated codeword values to their sum in the present and preceding iterations; it is therefore termed the DSR factor. A value of λ = 0 signifies that the modified algorithm is equivalent to the MS algorithm. The DSR factor λ is introduced to minimize the probability of sign change during the check node update in later iterations. The modification step is described as:

    if  ∏_{j'∈V_i\j} sgn(α_ij'^{(t)}) = ∏_{j'∈V_i\j} sgn(α_ij'^{(t-1)})

    then  β_ij^t = [ ∏_{j'∈V_i\j} sgn(α_ij'^{(t-1)}) ] · min_{j'∈V_i\j} |α_ij'^{(t-1)}|

    else  β_ij^t = [ ∏_{j'∈V_i\j} sgn(α_ij'^{(t-1)}) ] · (1 − λ) · min_{j'∈V_i\j} |α_ij'^{(t-1)}|

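One possible reading of this modification is sketched below for a single check node. This is our interpretation of the update, stated as an assumption: the usual Min-Sum message is damped by (1 − λ) only on edges whose sign product changed between iterations t-1 and t, which makes λ = 0 coincide with plain MS; the exact form of the scaling, and all names, are ours.

```python
def dsr_check_update(alpha_prev, alpha_curr, neighbors, lam):
    """DSR-modified Min-Sum check update for one check node.

    Assumption: when the sign product over V_i\\j changes between
    successive iterations, the Min-Sum message is scaled by (1 - lam),
    where lam in [0, 1] is the DSR factor; otherwise plain MS is used.
    """
    def sign_prod(vals):
        s = 1.0
        for v in vals:
            if v < 0:
                s = -s
        return s

    beta = {}
    for j in neighbors:
        prev_others = [alpha_prev[k] for k in neighbors if k != j]
        curr_others = [alpha_curr[k] for k in neighbors if k != j]
        msg = sign_prod(prev_others) * min(abs(a) for a in prev_others)
        if sign_prod(curr_others) == sign_prod(prev_others):
            beta[j] = msg                # sign unchanged: plain Min-Sum message
        else:
            beta[j] = (1.0 - lam) * msg  # sign flipped: damp by (1 - lam)
    return beta
```

Note that the extra multiplication happens only on sign-change edges, consistent with the complexity discussion in Section VI.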
B. Bit Flipping Min-Sum (BFMS) algorithm with DSR Factor

In addition to the modification of the MS algorithm described in (A), Bit Flipping (BF) is introduced at the end of each iteration. This operation reduces the probability of accumulated errors in the codeword. The prime objective of this modification is to attain better performance by combining the effectiveness of the two algorithms, imposing a hard decision on the decoded vector x̂. The steps are described as follows:

i. At the end of each iteration of the MS algorithm, the syndrome vector S is found by multiplying the decoded vector with the transpose of the parity check matrix: S = x̂·H^T. If all the elements of S are zero, the decoding is declared a success; otherwise go to step (ii).

ii. For each check equation equal to zero, find the positions of the variable nodes in the parity check matrix where it assumes a 1; these nodes are called valid variable nodes, as shown in Fig. 2. Form a column vector with the positions of the valid variable nodes.

iii. Each bit in the codeword which does not belong to any position given in the column vector of step (ii) is flipped.



Fig. 2 Bit Flipping algorithm steps for a (2, 3) LDPC code with N = 6 bits.

Fig. 3 BER performance of the proposed algorithms for the rate-1/2 (1008, 504) LDPC code with iteration number = 10.

Fig. 4 BER performance of the proposed algorithms for the rate-1/2 (1008, 504) LDPC code with iteration number = 20.


VI. SIMULATION

In this section the effectiveness of the proposed algorithms
are verified by considering a regular (5, 13) LDPC code. The
LDPC code has a codeword length, N = 1008 and
information length, K = 504 which corresponds to a code rate
of 0.5. Fig 3 was obtained using a maximum of 10 iterations.
It is observed that DRS-MS algorithm gives a better
performance than MS algorithm for the given SNR range of -
1 to 5 dB while DRS-BFMS gives a better performance for
SNR range of 1 to 5 dB. Moreover, it is noticed that DSR-MS
algorithm attains a better performance than DSR-BFMS in
the SNR range -1 to 3.5 dB. A significant improvement is
observed in DSR-BFMS above 4 dB. Fig 4 was obtained for
the same algorithms but computed for a maximum of 20
iterations. This increase in iteration exhibits an improvement
for both DSR-BFMS and DSR-MS algorithm when compared
to MS in the range -1 to 5 dB. Similar to Fig 3, DRS-MS
gives a better performance than DRS-BFMS in Fig 4. From
the plots obtained, it is observable that the BER performance
of the proposed algorithms is better than Min-Sum decoding
algorithm within the SNR range of interest.
The computation of complexity reveals that DSR-
MS algorithm is only slightly complex relative to MS
algorithm which is prominent for its minimal complexity.
Only one addition multiplication operation is required in
DSR-MS compared to MS only for the circumstances where
sign change occur in the check to bit node updating process
between successive iterations. This in turn abates the
complexity in later iterations where the probability of sign
change is reduced. In DSR-BFMS complexity is reasonably
increased but this is outweighed by its BER performance.
Similar reduction of complexity is achieved in later iteration
for DSR-BFMS.

VII. CONCLUSION

LDPC codes are capable of near-Shannon-capacity performance when decoded with iterative message passing decoders such as the MS or BP algorithms, which are soft decoding approaches. In this paper, decoding algorithms have been proposed combining the hard and soft decoding approaches. We used the BF algorithm to introduce a hard decision over the soft-decoded code. Our modification allowed us to minimize the effect of sudden sign changes in the decoded vector during the check to bit node update in successive decoding iterations. In this paper we have introduced a new unique factor named the DSR factor, λ. This factor was updated every iteration and was computed as a vector, rather than a single valued variable, whose dimension was equal to that of the parity check matrix. The dynamic DSR factor ensures minimal sign flipping for the codeword computed in later iterations. The latter portion of the algorithm imposes a hard decision on the codeword after the soft decision has already been implemented. The
simulation results reflect the fact that the modifications improve BER performance as well as the decoder's convergence behavior, trading off with complexity. They further show that even with a very small number of iterations the proposed algorithms significantly outperform the standard MS decoder.

Analyses of such techniques are required for irregular LDPC codes and for non-binary instances. It has been shown that non-binary LDPC codes offer performance advantages over their binary counterparts; however, this gives rise to much higher computational complexity. The BF algorithm that was introduced as a hard decoding approach trades off between performance and complexity. Development of these algorithms is still an open challenge for non-binary cases. Further advancement in these iterative decoding algorithms is possible for the proposed algorithms, either in terms of performance or computational complexity.

REFERENCES
[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge,
MA: MIT Press, 1963
[2] D. J. C. MacKay and R. M. Neal, Near Shannon limit performance of low-density parity-check codes, Electron. Lett., vol. 32, pp. 1645-1646, Aug. 1996.
[3] S. Lin and D. J. Costello, Jr., Error Control Coding:
Fundamentals and Applications. Englewood Cliffs, NJ: Prentice
Hall, 1983.
[4] D. J. C. MacKay, Good error-correcting codes based on very sparse matrices, IEEE Trans. Inform. Theory, vol. IT-45, pp. 399-432, Mar. 1999.
[5] M. Fossorier, M. Mihaljevic, and H. Imai, Reduced complexity
iterative decoding of low-density parity-check codes, IEEE
Trans. Commun., vol. 47, pp. 673-680, May 1999.

[6] Chung, K. and Huo, J., Improved Belief Propagation (BP) decoding for LDPC codes with a large number of short cycles, IEEE 63rd Vehicular Technology Conference, 2006. VTC 2006-Spring.
[7] Yuan-Mao Chang; Vila Casado, A.I.; Chang, M.-C.F.; Wesel,
R.D. Lower-Complexity Layered Belief-Propagation Decoding
of LDPC Codes IEEE International Conference on
Communications, 2008. ICC'08.
[8] Varnica, N.; Fossorier, M.P.C.; Kavcic, A.; , "Augmented Belief
Propagation Decoding of Low-Density Parity Check Codes,"
Communications, IEEE Transactions on , vol.55, no.7, pp.1308-
1317, July 2007
[9] Gounai, S.; Ohtsuki, T.; Kaneko, T.; Modified Belief
Propagation Decoding Algorithm for Low-Density Parity Check
Code Based on Oscillation IEEE 63rd Vehicular Technology
Conference, 2006. VTC 2006-Spring. Volume: 3 Page(s): 1467
1471.
[10] Varnica Nedeljko; Fossorier Marc; , "Improvements in belief-
propagation decoding based on averaging information from
decoder and correction of clusters of nodes," Communications
Letters, IEEE , vol.10, no.12, pp.846-848, December 2006
[11] Richardson, T. J.; Shokrollahi, A.; and Urbanke R., Design of
capacity-approaching low-density parity-check codes, IEEE
Trans. Inform. Theory, vol. 47, Feb. 2001, pp. 619-637.
[12] S.-Y. Chung, G. D. Forney, Jr., T. Richardson, and R. Urbanke, On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit, IEEE Commun. Lett., vol. 5, pp. 58-60, Feb. 2001.
[13] Miladinovic, N.; Fossorier, M.P.C Improved bit flipping
decoding of low-density parity check codes IEEE International
Symposium on Information Theory, 2002.
[14] Guo F. and. Hanzo L., Reliability ratio based weighted bit-
flipping decoding for low-density parity-check codes, IEEE
Electron. Lett., vol. 40, pp. 1356-1358, Oct. 2004.
[15] Zhang J. and Fossorier M., A modified weighted bit-flipping
decoding of low density parity-check codes, IEEE Commun.
Lett., vol. 8, pp. 165-167, Mar. 2004.
[16] Ming Jiang; Chunming Zhao; Zhihua Shi; Yu Chen; , "An
improvement on the modified weighted bit flipping decoding
algorithm for LDPC codes," Communications Letters, IEEE ,
vol.9, no.9, pp. 814- 816, Sep 2005
[17] Shan M.; Zhao C.; Jiang M., Improved weighted bit-flipping algorithm for decoding LDPC codes, IEE Proc.-Commun., vol. 152, pp. 919-922, Dec. 2005.
[18] Xiaofu Wu ; Chunming Zhao ; Xiaohu You ; Parallel Weighted
Bit-Flipping Decoding IEEE Communications Letters,
Volume : 11 , Issue:8 On page(s): 671 Issue Date : August
2007.
[19] Guangwen Li ; Dashe Li ; Yuling Wang ; Wenyan Sun ;
Improved parallel weighted bit flipping decoding of finite
geometry LDPC codes Fourth International Conference on
Communications and Networking in China, 2009. ChinaCOM
2009.
[20] Vanek, M.; Farkas, P.; Fast Parallel Weighted Bit Flipping
decoding algorithm for LDPC codes Wireless
Telecommunications Symposium, 2009. WTS 2009.
[21] Ismail, M. ; Ahmed, I. ; Coon, J. ; Armour, S. ; Kocak, T. ;
McGeehan, J. ; Low latency low power bit flipping algorithms
for LDPC decoding 21st International Symposium on Personal
Indoor and Mobile Radio Communications (PIMRC), 2010 IEEE.
[22] Nouh A.; Banihashemi A.H., Bootstrap decoding of low-
density parity-check codes, IEEE Communications Letters, Vol.
6, Issue 9, Page 391 - 393, 2002.
[23] Gross W.; Gaudet V. and Milner A., Stochastic implementation
of LDPC decoders, Proc. 39th Asilomar Conf. on Signals,
Systems, and Comput- ers, Nov. 2005.
[24] Sharifi Tehrani, S.; Gross, W.J.; Mannor, S.; "Stochastic decoding
of LDPC codes," Communications Letters, IEEE, vol.10, no.10,
pp.716-718, Oct. 2006.
[25] R. A. Carrasco, M. Johnston, Non-binary error control coding
for Wireless Communication and Data Storage, Chichester, WS:
John Wiley & Sons, Ltd., 2008.
[26] N. Wiberg, Codes and Decoding on general graphs, Ph.D.
dissertation, Linkoping University, Sweden, 1996.
[27] M. C. Davey, Error-correction using Low-Density Parity-Check Codes, Ph.D. dissertation, University of Cambridge, UK, 1999.






