Iterative Hard-Decision Decoding Algorithms For Binary Reed-Muller Codes


Received April 19, 2022, accepted May 28, 2022, date of publication June 6, 2022, date of current version June 9, 2022.
Digital Object Identifier 10.1109/ACCESS.2022.3180368
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

Iterative Hard-Decision Decoding Algorithms for Binary Reed-Muller Codes
YONG-TING NI, DUC NHAT NGUYEN, FENG-KAI LIAO, TZU-CHIEH KAO, AND CHAO-YU CHEN (Member, IEEE)
Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan
Corresponding author: Chao-Yu Chen (super@mail.ncku.edu.tw)
This work was supported by the Ministry of Science and Technology, Taiwan, under Grant MOST 109–2628–E–006–008–MY3 and Grant
MOST 111–2218–E–305–002.

ABSTRACT In this paper, novel hard-decision iterative decoding algorithms for binary Reed-Muller (RM) codes are presented. First, two hard-decision decoding algorithms, the bit-flipping (BF) and the normalized bit-flipping (NBF) decoding algorithms, are devised based on the majority-logic decoding algorithm together with reliability measures of the received sequence. According to the updated hard reliability measures, the BF and NBF algorithms flip one bit of the received hard-decision sequence at a time in each iteration. The NBF decoding algorithm performs better than the BF decoding algorithm by normalizing the reliability measures of the information bits. Moreover, the BF and NBF algorithms are modified to flip multiple bits in one iteration to reduce the average number of iterations. The modified decoding algorithms are called the multiple-bits-flipping (MBF) algorithm and the normalized multiple-bits-flipping (NMBF) algorithm, respectively. The proposed algorithms have low computational complexity and converge rapidly within a small number of iterations. The simulation results show that the proposed hard-decision decoding algorithms outperform the conventional decoding algorithm.

INDEX TERMS Reed-Muller code, bit-flipping, majority-logic decoding, iterative decoding.

The associate editor coordinating the review of this manuscript and approving it for publication was Mingjun Dai.

I. INTRODUCTION
The Reed-Muller (RM) code is a class of multiple-error-correction codes. In the past, RM codes were used in wireless communications, especially in deep-space communications. RM codes were first discovered by Muller [1], and the conventional decoding scheme of RM codes, the majority-logic decoding, was proposed by Reed in 1954 [2]. Ever since their discovery, a number of efficient decoding schemes have been constructed. In 1995, Schnabl and Bossert proposed a generalization of Reed-Muller codes by multiple concatenations and provided a new decoding procedure using soft-decision information [3]. In 1999, a maximum-likelihood (ML) decoder which uses a distance-preserving map and multiple fast Hadamard transforms (FHTs) was presented by Jones and Wilkinson [4], whereas a maximum a posteriori (MAP) decoding algorithm for the first-order RM codes was proposed in [5]. In 2000, a new soft-decision majority-logic decoding algorithm was presented by revising the conventional majority-logic decoding [6]. Besides, recursive decoding algorithms were provided in [7]–[11]. In 2006, the recursive list decoding was modified by Dumer and Shabunov to approach the performance of ML decoding [9]. Recently, Ye and Abbe proposed the recursive projection-aggregation (RPA) decoding algorithm with lists to achieve performance comparable with the recursive list decoding [10]. In the same year, the recursive puncturing-aggregation (RXA) algorithm was proposed by replacing the projection in RPA with puncturing [11].
In recent years, the breakthrough of polar codes [12]–[15] has brought attention back to RM codes due to the similarity of these two codes. The performance comparison between RM codes and polar codes was demonstrated in [10], [15]–[17]. It has been shown that RM codes with RPA decoding [10] can significantly outperform polar codes under successive-cancellation list (SCL) decoding [18]–[20]. In addition, by exploiting the idea of decoding polar codes, [21], [22] presented permutation-based decoding methods for RM codes.
Although various algorithms for decoding RM codes have been designed, most of them are soft-decision decoding. In this paper, the hard-decision decoding is taken


into consideration with low complexity. The conventional majority-logic decoding has low complexity but the performance is not good enough. In contrast, the hard-decision ML decoding algorithm has better performance but the complexity of the ML decoding is extremely large. In this work, we aim for new algorithms outperforming the conventional majority-logic decoding with controllable complexity.

This paper presents novel iterative decoding algorithms for Reed-Muller codes by exploiting the idea of the bit-flipping [23] and min-sum [24] decoding algorithms for low-density parity-check (LDPC) codes [25]. These proposed algorithms are devised based on the min-sum operation and the reliability measures of the received sequence. The first proposed algorithm is called the bit-flipping (BF) decoding algorithm because we flip one bit of the received hard-decision sequence at a time in each iteration. Through decoding iterations, the hard-decision sequence is updated.∗ Next, the second decoding algorithm, called the normalized bit-flipping (NBF) decoding algorithm, is proposed. The reliability measures of the information bits are normalized depending on the degree to balance the number of votes. Therefore, the NBF decoding algorithm offers better performance compared with the BF decoding algorithm. To reduce the number of iterations and the complexity, a termination process is provided as well. These proposed iterative decoding algorithms can converge very rapidly. For the hard-decision BF and NBF decoding algorithms, we additionally provide a condition to allow flipping multiple bits in each iteration. The advantage of multiple-bits flipping is to reduce the required number of iterations and hence to have less computational complexity. We call them multiple-bits-flipping (MBF) decoding and normalized multiple-bits-flipping (NMBF) decoding, respectively.

∗ The BF algorithm was briefly described in the conference paper [26].

The numerical experiment results show that the proposed hard-decision decoding algorithms perform better than the conventional majority-logic decoding algorithm over the additive white Gaussian noise (AWGN) channel.

The rest of this paper is organized as follows. Notations and definitions are provided in Section II. Section III introduces the conventional majority-logic decoding for RM codes. In Section IV, the steps and examples for the proposed algorithms are given. Section V shows the simulation results over the AWGN channel. Finally, the concluding remarks are given in Section VI.

II. PRELIMINARY AND NOTATIONS
We denote the binary rth-order Reed-Muller code of length 2^m over GF(2) by RM(r, m). Let the 2^m-tuple vectors

x_i = (00···0 11···1 00···0 ··· 11···1)   (1)

consist of 2^{m−i+1} alternating all-zero and all-one segments, each of length 2^{i−1}, for i = 1, 2, ..., m, and let x_0 = (11···1) be the all-one vector. For vectors a = (a_0, a_1, ..., a_{n−1}) and b = (b_0, b_1, ..., b_{n−1}), we denote ab = (a_0·b_0, a_1·b_1, ..., a_{n−1}·b_{n−1}), where "·" represents the product, and we say that the product vector x_{i_1} x_{i_2} ··· x_{i_γ}, where 1 ≤ i_1 < i_2 < ··· < i_γ ≤ m, has degree γ. Then, RM(r, m) can be generated by the following vectors:

{x_0, x_1, x_2, ..., x_m, x_1 x_2, ..., x_{m−1} x_m, ..., up to products of degree r}.   (2)

Therefore, the generator matrix of RM(r, m) can also be expressed as

G = [ x_0
      x_1
      ⋮
      x_m
      x_1 x_2
      ⋮
      x_{m−1} x_m
      ⋮
      up to products of degree r ].   (3)

Assume that u = (u_0, u_1, ..., u_{K−1}) is the information vector, where K is the number of information bits. Then, the encoded codeword c = (c_0, ..., c_{2^m−1}) can be obtained by

c = u · G = Σ_{j=0}^{K−1} u_j g_j   (4)

where g_j is the jth row of the generator matrix G. For 1 ≤ j ≤ K − 1, if u_j is the coefficient of the product vector x_{i_1} x_{i_2} ··· x_{i_γ}, where 1 ≤ i_1 < i_2 < ··· < i_γ ≤ m, we call u_j the information bit corresponding to a product vector of degree γ. Also, u_0 is called the information bit corresponding to the all-one vector because it is the coefficient of x_0. Take RM(2, 3) for example: if the information vector is u = (u_0, u_1, u_2, u_3, u_4, u_5, u_6), then the encoded codeword is expressed as

c = u_0 x_0 + u_1 x_1 + u_2 x_2 + u_3 x_3 + u_4 x_1 x_2 + u_5 x_1 x_3 + u_6 x_2 x_3

where u_0 is the information bit corresponding to the all-one vector; u_1, u_2, and u_3 are the information bits corresponding to the product vectors of degree 1; and u_4, u_5, u_6 are the information bits corresponding to the product vectors of degree 2.
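To make the construction in (1)–(4) concrete, the following short Python sketch builds the rows of G from the monomial vectors and encodes an information vector. It is only an illustrative implementation, not code from the paper: the function names (rm_generator_rows, rm_encode) are ours, and the ordering of same-degree products is assumed to be lexicographic, which matches the RM(2, 3) example above.

```python
import itertools

def rm_generator_rows(r, m):
    """Rows of G in (3): x_0 followed by all products x_{i1}...x_{i_gamma} of
    degree up to r.  Position l of x_i equals bit (i-1) of l, which gives the
    2^(m-i+1) alternating segments of length 2^(i-1) described in (1).
    Same-degree monomials are listed in lexicographic order (an assumption)."""
    n = 1 << m
    monomials = [()] + [idx for d in range(1, r + 1)
                        for idx in itertools.combinations(range(1, m + 1), d)]
    rows = [[int(all((l >> (i - 1)) & 1 for i in idx)) for l in range(n)]
            for idx in monomials]
    return rows, monomials

def rm_encode(u, rows):
    """Codeword c = u * G over GF(2), as in (4)."""
    n = len(rows[0])
    return [sum(u_j * row[l] for u_j, row in zip(u, rows)) % 2 for l in range(n)]

# RM(2, 3) example from the text: setting only u_2 = 1 selects the row x_2.
rows, monomials = rm_generator_rows(2, 3)
print(monomials)                                 # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]
print(rm_encode([0, 0, 1, 0, 0, 0, 0], rows))    # [0, 0, 1, 1, 0, 0, 1, 1]
```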


III. REVIEW OF THE CONVENTIONAL MAJORITY-LOGIC DECODING
For hard-decision decoding, the conventional decoding algorithm is the majority-logic decoding algorithm, which was first proposed by Reed in [2]. The concept of the conventional majority-logic decoding is simple and the majority-logic decoding has low complexity. However, the performance is limited. In contrast, the hard-decision ML decoding algorithm has better performance but the complexity is extremely large. The comparison of complexities for different hard-decision decoding algorithms is given in Table 1. In our work, the major contribution is to modify the conventional majority-logic decoding with better performance and acceptable complexity. In the following, we will introduce the procedure of the conventional majority-logic decoding.

During the decoding process, there are 2^{m−γ} votes that can be used to estimate each information bit corresponding to a product vector of degree γ. We denote by k the index of the vote, where 0 ≤ k < 2^{m−γ}. Let u_j be the coefficient of the product vector x_{i_1} x_{i_2} ··· x_{i_γ} of degree γ, where 1 ≤ i_1 < i_2 < ··· < i_γ ≤ m. We first define the set A as

A = { a_1 2^{i_1−1} + a_2 2^{i_2−1} + ··· + a_γ 2^{i_γ−1} : a_t ∈ {0, 1}, for t = 1, 2, ..., γ }.   (5)

Then, the set E is defined as

E = {0, 1, ..., m−1} \ {i_1−1, i_2−1, ..., i_γ−1} = {j_1, j_2, ..., j_{m−γ}}   (6)

and the set B is defined as

B = { b_1 2^{j_1} + b_2 2^{j_2} + ··· + b_{m−γ} 2^{j_{m−γ}} : b_t ∈ {0, 1}, for t = 1, 2, ..., m−γ }.   (7)

For each q_k ∈ B, we can construct the following set

S_k = q_k + A = {q_k + p : p ∈ A}   (8)

consisting of bit indices of the received sequence.

The steps of the conventional majority-logic decoding for RM(r, m) can be described as follows:
Initialization: Let γ = r and n = 2^m.
Step 1. For 0 ≤ l < n, make a hard decision on each received value y_l from the noisy channel by

v_l = { 0, if y_l > 0;  1, otherwise.   (9)

Step 2. For any information bit u_j which is the coefficient of a product vector of degree γ, compute each estimate û_{j,k} for u_j by the modulo-2 sum

û_{j,k} = Σ_{l∈S_k} v_l   (10)

for k = 0, 1, ..., 2^{m−γ}−1, where S_k is given in (8).
Step 3. Then, determine the estimation of u_j according to the majority voting of the 2^{m−γ} equations from (10). If γ = 0, go to Step 5.
Step 4. Remove the estimated information of the u_j's corresponding to vectors of degree γ from the received sequence and modify v_l → v′_l for 0 ≤ l < n by

v′ = v − Σ_{1≤i_1<i_2<···<i_γ≤m} u_j x_{i_1} x_{i_2} ··· x_{i_γ}   (11)

where v = (v_0, v_1, ..., v_{n−1}) and v′ = (v′_0, v′_1, ..., v′_{n−1}). Set γ = γ − 1 and go to Step 2.
Step 5. Stop decoding and obtain all decoded information bits u_j.

In Step 2, it can be found that there are 2^{m−γ} estimates for one information bit u_j, and the estimated u_j is determined based on these 2^{m−γ} votes in Step 3. The estimation of u_j is the value (∈ {0, 1}) with the majority of votes, which is why it is called the majority-logic decoding. Then, Step 4 decreases the degree by setting γ = γ − 1 and repeats the decoding process until γ = 0.
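As an illustration of (5)–(10), the following Python sketch builds the index sets S_k for one information bit and forms its majority-vote estimate. It is only a sketch under our own naming (vote_index_sets, majority_estimate); ties among the 2^{m−γ} votes are broken toward 0 here, a choice the paper does not specify.

```python
import itertools

def vote_index_sets(idx, m):
    """For the information bit whose product vector is x_{i1}...x_{i_gamma}
    (idx = (i1, ..., i_gamma)), build the 2^(m-gamma) index sets S_k of (8)
    from the sets A of (5) and B of (7)."""
    A = [sum(a << (i - 1) for a, i in zip(bits, idx))
         for bits in itertools.product((0, 1), repeat=len(idx))]
    E = [j for j in range(m) if j + 1 not in idx]        # the exponents of (6)
    B = [sum(b << j for b, j in zip(bits, E))
         for bits in itertools.product((0, 1), repeat=len(E))]
    return [[q + p for p in A] for q in B]               # S_k = q_k + A

def majority_estimate(v, idx, m):
    """Steps 2-3: each vote is the modulo-2 sum of the hard-decision bits
    indexed by S_k as in (10); the estimate of u_j is the majority value.
    Ties are (arbitrarily) resolved to 0."""
    votes = [sum(v[l] for l in S) % 2 for S in vote_index_sets(idx, m)]
    return 1 if sum(votes) > len(votes) // 2 else 0

# RM(1, 3): for the hard decisions of the codeword x_2, the degree-1 bit u_2
# is estimated from the four sets {0,2}, {1,3}, {4,6}, {5,7}.
v = [0, 0, 1, 1, 0, 0, 1, 1]
print(majority_estimate(v, (2,), 3))                     # 1
print(majority_estimate(v, (1,), 3))                     # 0
```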


IV. PROPOSED ITERATIVE DECODING ALGORITHMS
In this section, we first demonstrate the hard-decision BF and NBF decoding algorithms. Then, two modified decoding algorithms, i.e., the MBF and NMBF algorithms, which consider multiple-bit flipping, are introduced.

A. BF DECODING ALGORITHM
We follow the same notations given in Section III. Besides, let I_max be the maximum number of iterations in the decoding process. For 1 ≤ i ≤ I_max, let v^{(i)} = (v_0^{(i)}, v_1^{(i)}, ..., v_{n−1}^{(i)}) denote the hard-decision codeword generated in the ith decoding iteration. For 1 ≤ i ≤ I_max and 0 ≤ j < K, R_j^{(i)} denotes the reliability measure of the jth information bit u_j in the ith decoding iteration. Furthermore, M_l represents the set of indices j such that G_{j,l} = 1, where G_{j,l} is the element in the jth row and lth column of the generator matrix G in (3). Adding a condition on γ, M_l^γ represents the collection of j's such that G_{j,l} = 1 and the jth row vector of G is a product vector of degree γ. Moreover, D_l denotes the difference between the estimated codeword bit v_l^{(i)} and the reliability measure in the lth position of the received sequence. Based on the notations defined above, the block diagram of the decoding scheme is illustrated in Fig. 1. In each decoding iteration, only one bit of the updated codeword is allowed to be flipped at a time.

FIGURE 1. The block diagram of the hard-decision bit-flipping decoding algorithm.

Initialization: Set i = 1 and γ = r. Let I_max denote the maximum number of iterations and let y = (y_0, y_1, ..., y_{n−1}) be the received sequence over the AWGN channel.
Step 1. For 0 ≤ l < n, determine v_l^{(1)} for each received bit y_l according to the hard-decision rule

v_l^{(1)} = { 1, if y_l > 0;  −1, otherwise.   (12)

Also, let φ^{(1)} = v^{(1)}.
Step 2. For 0 ≤ k < 2^{m−γ}, compute each vote information for the information bit u_j corresponding to a vector of degree γ:

û_{j,k} = Π_{l∈S_k} φ_l^{(i)}   (13)

where S_k is the set of indices in the received sequence which are related to u_j as given in (8).
Step 3. Obtain the reliability measure of the information bit u_j by summing all 2^{m−γ} votes from Step 2:

R_j^{(i)} = Σ_{k=0}^{2^{m−γ}−1} û_{j,k}.   (14)

If γ = 0, go to Step 5.
Step 4. For l = 0, 1, ..., n − 1, if φ_l^{(i)} does not involve the information of any information bit of degree γ, then φ_l^{(i)′} = φ_l^{(i)}; otherwise,

φ_l^{(i)′} = φ_l^{(i)} Π_{j∈M_l^γ} sgn(R_j^{(i)}).   (15)

Set γ = γ − 1 and φ^{(i)} = φ^{(i)′}. Go to Step 2.
Step 5. If i = I_max, make the following hard decision: 1) u_j = 1, if R_j^{(i)} ≤ 0; 2) u_j = 0, if R_j^{(i)} > 0, and stop decoding. Otherwise, go to Step 6.
Step 6. (Re-encoding) Let v̂ = (v̂_0, v̂_1, ..., v̂_{n−1}) where

v̂_l = Π_{j∈M_l} sgn(R_j^{(i)})   (16)

and 0 ≤ l < n. Go to Step 7.
Step 7. (Termination) If v_l^{(i)} = v̂_l for all l, make the following hard decision: 1) u_j = 1, if R_j^{(i)} ≤ 0; 2) u_j = 0, if R_j^{(i)} > 0, and stop decoding. Otherwise, go to Step 8.
Step 8. For l = 0, 1, ..., n − 1, if

v_l^{(i)} ≠ v̂_l,   (17)

calculate

D_l = | v_l^{(i)} − v̂_l · min_{j∈M_l} |R_j^{(i)}| |;

otherwise, D_l = 0. Go to Step 9.
Step 9. Find l′ with the maximum value of D_l, i.e.,

l′ = argmax_{0≤l<n} D_l.

Set i = i + 1. Then, for l = 0, 1, ..., n − 1, update the estimated codeword bit as follows:

v_l^{(i)} = { −v_l^{(i−1)}, if l = l′;  v_l^{(i−1)}, otherwise.

Set γ = r and φ^{(i)} = v^{(i)}. Go to Step 2.

To reduce the complexity, a termination process is developed in Step 7. The decoding process is terminated when the updated sequences are identical within two consecutive iterations.
Note that Step 2, Step 4, and Step 8 of the BF algorithm in [26] are modified in this manuscript to have simpler expressions.
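For concreteness, here is a compact Python sketch of Steps 1–9. It is an illustrative implementation under our own naming (bf_decode) and the assumptions noted in the comments (lexicographic ordering of same-degree monomials, sgn mapping 0 to −1 as in (12)); it is not the authors' code. Running it on the received sequence of Example 1 below reproduces the decoded information vector (0, 0, 1, 0).

```python
import itertools
from math import prod

def bf_decode(y, r, m, i_max=30):
    """Sketch of the BF decoder (Steps 1-9).  Bits use the +1/-1 alphabet of
    (12): +1 <-> bit 0 and -1 <-> bit 1.  Returns the decoded information bits,
    ordered as in (2)/(3) with same-degree monomials in lexicographic order."""
    n = 1 << m
    monomials = [()] + [idx for d in range(1, r + 1)
                        for idx in itertools.combinations(range(1, m + 1), d)]
    K = len(monomials)
    in_row = lambda idx, l: all((l >> (i - 1)) & 1 for i in idx)  # G_{j,l} = 1?
    sgn = lambda t: 1 if t > 0 else -1                            # sgn(0) -> -1, as in (12)

    def index_sets(idx):                         # the sets S_k of (8), via (5)-(7)
        A = [sum(a << (i - 1) for a, i in zip(bits, idx))
             for bits in itertools.product((0, 1), repeat=len(idx))]
        E = [j for j in range(m) if j + 1 not in idx]
        return [[sum(b << j for b, j in zip(bits, E)) + p for p in A]
                for bits in itertools.product((0, 1), repeat=len(E))]

    v = [sgn(t) for t in y]                                       # Step 1
    R = [0] * K
    for _ in range(i_max):
        phi = list(v)
        for gamma in range(r, -1, -1):
            deg_g = [j for j in range(K) if len(monomials[j]) == gamma]
            for j in deg_g:                                       # (13) and (14)
                R[j] = sum(prod(phi[l] for l in S)
                           for S in index_sets(monomials[j]))
            if gamma == 0:
                break                                             # Step 3
            for l in range(n):                                    # Step 4, (15)
                for j in deg_g:
                    if in_row(monomials[j], l):
                        phi[l] *= sgn(R[j])
        v_hat = [prod(sgn(R[j]) for j in range(K) if in_row(monomials[j], l))
                 for l in range(n)]                               # Step 6, (16)
        if v_hat == v:                                            # Step 7
            break
        D = [abs(v[l] - v_hat[l] * min(abs(R[j]) for j in range(K)
                                       if in_row(monomials[j], l)))
             if v[l] != v_hat[l] else 0
             for l in range(n)]                                   # Step 8
        v[max(range(n), key=lambda l: D[l])] *= -1                # Step 9: flip one bit
    return [1 if R[j] <= 0 else 0 for j in range(K)]              # Steps 5/7

# Received sequence of Example 1 below: decodes to [0, 0, 1, 0].
print(bf_decode([1.5, 0.4, -2.3, 1.5, 0.3, 2.4, -1.2, -0.7], 1, 3))
```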

Example 1: To demonstrate the decoding process, let us consider RM(1, 3) with the generator matrix given by

G = [ 1 1 1 1 1 1 1 1
      0 1 0 1 0 1 0 1
      0 0 1 1 0 0 1 1
      0 0 0 0 1 1 1 1 ].

Assume the information vector is u = (u_0, u_1, u_2, u_3) = (0, 0, 1, 0), where u_0 is the information bit corresponding to the all-one vector and u_1, u_2, and u_3 are the information bits corresponding to vectors of degree 1. Hence, we have the encoded codeword c = (c_0, c_1, ..., c_7) = (0, 0, 1, 1, 0, 0, 1, 1). Assume the received sequence is

y = (y_0, y_1, ..., y_7) = (1.5, 0.4, −2.3, 1.5, 0.3, 2.4, −1.2, −0.7)

over the AWGN channel. First of all, we make the hard decision of y and obtain the initial information φ^{(1)} = (φ_0^{(1)}, φ_1^{(1)}, ..., φ_7^{(1)}) = (1, 1, −1, 1, 1, 1, −1, −1) according to Step 1. From (13), we have the following four equations for j = 1:

û_{1,0} = φ_0^{(1)} φ_1^{(1)} = 1;
û_{1,1} = φ_2^{(1)} φ_3^{(1)} = −1;
û_{1,2} = φ_4^{(1)} φ_5^{(1)} = 1;
û_{1,3} = φ_6^{(1)} φ_7^{(1)} = 1.

Then, the reliability measure of u_1 in the first iteration is

R_1^{(1)} = û_{1,0} + û_{1,1} + û_{1,2} + û_{1,3} = 1 + (−1) + 1 + 1 = 2

according to (14). Similarly, we can also obtain R_2^{(1)} = −2 and R_3^{(1)} = 2, which are the reliability measures of u_2 and u_3, respectively. Next, we need to modify φ^{(1)} by removing the contributions from u_1, u_2, and u_3. Then, we obtain the estimations of u_0 as follows:

û_{0,0} = φ_0^{(1)} = 1;
û_{0,1} = φ_1^{(1)} sgn(R_1^{(1)}) = 1 · sgn(2) = 1;
û_{0,2} = φ_2^{(1)} sgn(R_2^{(1)}) = −1 · sgn(−2) = 1;
û_{0,3} = φ_3^{(1)} sgn(R_1^{(1)}) sgn(R_2^{(1)}) = 1 · sgn(2) sgn(−2) = −1;
û_{0,4} = φ_4^{(1)} sgn(R_3^{(1)}) = 1 · sgn(2) = 1;
û_{0,5} = φ_5^{(1)} sgn(R_1^{(1)}) sgn(R_3^{(1)}) = 1 · sgn(2) sgn(2) = 1;
û_{0,6} = φ_6^{(1)} sgn(R_2^{(1)}) sgn(R_3^{(1)}) = −1 · sgn(−2) sgn(2) = 1;
û_{0,7} = φ_7^{(1)} sgn(R_1^{(1)}) sgn(R_2^{(1)}) sgn(R_3^{(1)}) = −1 · sgn(2) sgn(−2) sgn(2) = 1.

Therefore, the reliability measure of u_0 is

R_0^{(1)} = û_{0,0} + û_{0,1} + û_{0,2} + û_{0,3} + û_{0,4} + û_{0,5} + û_{0,6} + û_{0,7} = 1 + 1 + 1 + (−1) + 1 + 1 + 1 + 1 = 6.

In Step 6, we can re-encode the estimated codeword based on the reliability measures of the information bits:

v̂_0 = sgn(R_0^{(1)}) = sgn(6) = 1;
v̂_1 = sgn(R_0^{(1)}) sgn(R_1^{(1)}) = sgn(6) sgn(2) = 1;
v̂_2 = sgn(R_0^{(1)}) sgn(R_2^{(1)}) = sgn(6) sgn(−2) = −1;
v̂_3 = sgn(R_0^{(1)}) sgn(R_1^{(1)}) sgn(R_2^{(1)}) = sgn(6) sgn(2) sgn(−2) = −1;
v̂_4 = sgn(R_0^{(1)}) sgn(R_3^{(1)}) = sgn(6) sgn(2) = 1;
v̂_5 = sgn(R_0^{(1)}) sgn(R_1^{(1)}) sgn(R_3^{(1)}) = sgn(6) sgn(2) sgn(2) = 1;
v̂_6 = sgn(R_0^{(1)}) sgn(R_2^{(1)}) sgn(R_3^{(1)}) = sgn(6) sgn(−2) sgn(2) = −1;
v̂_7 = sgn(R_0^{(1)}) sgn(R_1^{(1)}) sgn(R_2^{(1)}) sgn(R_3^{(1)}) = sgn(6) sgn(2) sgn(−2) sgn(2) = −1.

In this example, we find v_3^{(1)} ≠ v̂_3. Therefore, we have

D_3 = | v_3^{(1)} − v̂_3 · min{|6|, |2|, |−2|} | = |1 − (−2)| = 3

and

D = (D_0, D_1, ..., D_7) = (0, 0, 0, 3, 0, 0, 0, 0),

where there is only one position of v^{(1)} distinct from the re-encoded codeword. The largest value is D_3 = 3, so we obtain l′ = 3. Thus, we flip one bit of the initial sequence v^{(1)} by

v_l^{(2)} = { −v_l^{(1)}, if l = 3;  v_l^{(1)}, otherwise.

Then, φ^{(2)} = v^{(2)} = (1, 1, −1, −1, 1, 1, −1, −1) is the updated sequence for the next decoding iteration.
After performing Steps 1 to 6 in the second iteration, we can obtain that v_l^{(2)} = v̂_l for all l = 0, 1, ..., n − 1. That is,

v_0^{(2)} = v̂_0 = sgn(R_0^{(2)}) = 1;
v_1^{(2)} = v̂_1 = sgn(R_0^{(2)}) sgn(R_1^{(2)}) = 1;
v_2^{(2)} = v̂_2 = sgn(R_0^{(2)}) sgn(R_2^{(2)}) = −1;
v_3^{(2)} = v̂_3 = sgn(R_0^{(2)}) sgn(R_1^{(2)}) sgn(R_2^{(2)}) = −1;
v_4^{(2)} = v̂_4 = sgn(R_0^{(2)}) sgn(R_3^{(2)}) = 1;
v_5^{(2)} = v̂_5 = sgn(R_0^{(2)}) sgn(R_1^{(2)}) sgn(R_3^{(2)}) = 1;
v_6^{(2)} = v̂_6 = sgn(R_0^{(2)}) sgn(R_2^{(2)}) sgn(R_3^{(2)}) = −1;
v_7^{(2)} = v̂_7 = sgn(R_0^{(2)}) sgn(R_1^{(2)}) sgn(R_2^{(2)}) sgn(R_3^{(2)}) = −1.

It means that the re-encoded codeword v̂ is identical to the updated sequence v^{(2)} in the previous iteration. Hence, there is no need to flip any bit. The updated hard-decision sequence v^{(2)} = (v_0^{(2)}, v_1^{(2)}, ..., v_7^{(2)}) = (1, 1, −1, −1, 1, 1, −1, −1) is a valid codeword, so we can terminate the decoding process. Then, we output the decoded information vector (û_0, û_1, û_2, û_3) = (0, 0, 1, 0) based on the hard decision of the reliability measures R_0^{(2)}, R_1^{(2)}, R_2^{(2)}, and R_3^{(2)} in Step 7.
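The first-iteration numbers of Example 1 can be checked with a few lines of Python. This is only a quick illustration with the index sets of (8) written out by hand for RM(1, 3); the variable names are ours.

```python
phi = [1, 1, -1, 1, 1, 1, -1, -1]                   # hard decisions of y, as in (12)
S = {1: [(0, 1), (2, 3), (4, 5), (6, 7)],           # S_k for u_1 (vector x_1)
     2: [(0, 2), (1, 3), (4, 6), (5, 7)],           # S_k for u_2 (vector x_2)
     3: [(0, 4), (1, 5), (2, 6), (3, 7)]}           # S_k for u_3 (vector x_3)
R = {j: sum(phi[a] * phi[b] for a, b in S[j]) for j in S}
print(R)                                            # {1: 2, 2: -2, 3: 2}, as in the text

sgn = {j: (1 if R[j] > 0 else -1) for j in R}
peeled = [phi[l] * (sgn[1] if l & 1 else 1)         # (15): remove the degree-1
              * (sgn[2] if l & 2 else 1)            # contributions from phi
              * (sgn[3] if l & 4 else 1)
          for l in range(8)]
print(sum(peeled))                                  # R_0 = 6, matching the text
```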


B. NBF DECODING ALGORITHM
For the BF algorithm, the reliability measure is updated only based on the signum of the R_j^{(i)}'s in (15). In the NBF algorithm, we want to introduce the magnitudes of the reliability measures when the φ_l^{(i)}'s are updated. In addition, the concept of the min-sum algorithm is exploited, and then (13) and (15) are modified with the min-sum operations. Besides, the reliability measures R_j^{(i)} in (14) have to be normalized to balance the number of votes for different degrees. Therefore, the modified algorithm is called the normalized BF algorithm. By letting µ_γ be the normalization factor used for degree γ, Steps 2 to 4 of the BF decoding are modified and described as follows.
Step 2. For 0 ≤ k < 2^{m−γ}, compute each vote information for the information bit u_j corresponding to a vector of degree γ by the min-sum operation:

û_{j,k} = ( Π_{l∈S_k} sgn(φ_l^{(i)}) ) × min_{l∈S_k} |φ_l^{(i)}|   (18)

where S_k is the set of indices in the received sequence which are related to u_j as given in (8).
Step 3. Obtain the reliability measure of the information bit u_j by summing all 2^{m−γ} votes from Step 2:

R_j^{(i)} = (1/µ_γ) Σ_{k=0}^{2^{m−γ}−1} û_{j,k}   (19)

where µ_γ is a positive integer. If γ = 0, go to Step 5.
Step 4. For l = 0, 1, ..., n − 1, if φ_l^{(i)} does not involve the information of any information bit of degree γ, then φ_l^{(i)′} = φ_l^{(i)}; otherwise,

φ_l^{(i)′} = ( sgn(φ_l^{(i)}) Π_{j∈M_l^γ} sgn(R_j^{(i)}) ) × min{ |φ_l^{(i)}|, min_{j∈M_l^γ} |R_j^{(i)}| }.   (20)

Set γ = γ − 1 and φ^{(i)} = φ^{(i)′}. Go to Step 2.

Remark 1: For the BF decoding algorithm, because φ_l^{(i)} ∈ {1, 0, −1} and R_j^{(i)} is an integer, (18) and (20) can be reduced to

û_{j,k} = ( Π_{l∈S_k} sgn(φ_l^{(i)}) ) × min_{l∈S_k} |φ_l^{(i)}| = Π_{l∈S_k} φ_l^{(i)}

and

φ_l^{(i)′} = ( sgn(φ_l^{(i)}) Π_{j∈M_l^γ} sgn(R_j^{(i)}) ) × min{ |φ_l^{(i)}|, min_{j∈M_l^γ} |R_j^{(i)}| } = φ_l^{(i)} Π_{j∈M_l^γ} sgn(R_j^{(i)})

which are identical to (13) and (15), respectively.
Remark 2: To obtain better performance, µ_γ can be adjusted for different RM(r, m). We will discuss its impact on the error performance in Section V.

C. MBF AND NMBF DECODING ALGORITHMS
In addition to the notations given in Section IV-A, we define the new parameter D_Th, which is a pre-defined threshold. If the distances calculated in Step 8 of the BF algorithm are larger than D_Th, then the corresponding bits are flipped. Therefore, multiple bits can be flipped at a time in each iteration. The MBF algorithm can be obtained by modifying Step 9 of the BF algorithm.
Initialization: Set i = 1 and γ = r. Assign a pre-defined value to D_Th. Let the maximum number of iterations be I_max.
Step 9. Find the index l′ with the maximum value of D_l, i.e.,

l′ = argmax_{0≤l<n} D_l.

Set i = i + 1. Then, update the estimated codeword by flipping not only the codeword bit with the largest distance but also the codeword bits whose distances are larger than D_Th. That is, for 0 ≤ l < n,

v_l^{(i)} = { −v_l^{(i−1)}, if l = l′;
             −v_l^{(i−1)}, if l ≠ l′ and D_l > D_Th;
              v_l^{(i−1)}, if l ≠ l′ and D_l ≤ D_Th.

Set γ = r and φ^{(i)} = v^{(i)}. Go to Step 2.

The major difference between the BF and the MBF algorithms is that the MBF algorithm allows flipping multiple bits in each iteration. Therefore, the required number of iterations of the MBF algorithm is reduced. In addition, the bit error rate (BER) performance is not affected if an appropriate value of D_Th is selected.
Next, by utilizing the concept of normalization in NBF, we can devise the normalized MBF decoding algorithm, i.e., the NMBF decoding algorithm. The NMBF algorithm can perform close to the NBF algorithm with fewer iterations, which will be demonstrated in the next section.
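Before moving to the simulation results, the two modifications introduced in this section can be summarized in code. The first sketch writes out the min-sum operations of (18)–(20); the second writes out the thresholded flip of the modified Step 9. Both are illustrative fragments under our own naming (nbf_vote, nbf_reliability, nbf_update, mbf_flip) and assume sgn(0) = 0, a convention the text leaves implicit.

```python
def sgn(t):
    # sign function with sgn(0) = 0 (assumed convention)
    return (t > 0) - (t < 0)

def nbf_vote(phi, S_k):
    """Min-sum vote of (18): product of the signs over S_k times the
    smallest magnitude |phi_l| in S_k."""
    sign = 1
    for l in S_k:
        sign *= sgn(phi[l])
    return sign * min(abs(phi[l]) for l in S_k)

def nbf_reliability(votes, mu_gamma):
    """Normalized reliability measure of (19)."""
    return sum(votes) / mu_gamma

def nbf_update(phi_l, R_degree_gamma):
    """Min-sum update of (20) at one position l, given the reliabilities
    R_j of the degree-gamma information bits with G_{j,l} = 1."""
    sign = sgn(phi_l)
    for R_j in R_degree_gamma:
        sign *= sgn(R_j)
    return sign * min([abs(phi_l)] + [abs(R_j) for R_j in R_degree_gamma])
```

Replacing (13)–(15) in the BF sketch of Section IV-A with these operations would give one way to obtain an NBF decoder; with µ_γ chosen as a power of two, as discussed in Section V, the division in (19) reduces to a shift in fixed-point implementations.

```python
def mbf_flip(v_prev, D, d_th):
    """Modified Step 9: flip the position with the largest distance D_l and,
    in the same iteration, every other position whose D_l exceeds the
    threshold D_Th (d_th)."""
    l_star = max(range(len(D)), key=lambda l: D[l])
    return [-v_prev[l] if (l == l_star or D[l] > d_th) else v_prev[l]
            for l in range(len(v_prev))]

# With the distances of Example 1 and d_th = 2, only position 3 is flipped:
print(mbf_flip([1, 1, -1, 1, 1, 1, -1, -1], [0, 0, 0, 3, 0, 0, 0, 0], 2))
```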


V. SIMULATION RESULTS AND COMPLEXITY ANALYSIS
In this section, we compare the performance and complexity among various decoding algorithms over the AWGN channel. First, we will discuss the influence of the normalization factor µ_γ on the BER performance. In the NBF decoding algorithm, R_j^{(i)} is normalized by µ_γ since the number of votes could be very large. Furthermore, in order to balance the number of votes for different degrees, we set µ_γ = 2µ_{γ+1} for γ = 0, 1, ..., r − 1. For ease of presentation, we use "NX" to denote the normalization factor µ_0 = X. Also, "IX" represents that the maximum number of iterations is X. For example, in Fig. 2, "N4" and "I30" represent that we use µ_0 = 4 and I_max = 30, respectively, for the NBF decoding.

FIGURE 2. BER performances of RM(2, 7) decoded with the NBF algorithm for different µ_0.
FIGURE 3. BER performances of RM(2, 7) decoded with the BF algorithm for different iterations.
FIGURE 4. BER performances of RM(2, 7) decoded with the NBF algorithm for different iterations.

Fig. 2 demonstrates the BER performances of the NBF decoding algorithm with different µ_0 for RM(2, 7). From Fig. 2, we can observe that the NBF decoding algorithm with µ_0 = 32 (N32) has the best BER performance for RM(2, 7).
To evaluate the number of iterations for the BF and NBF decoding algorithms to reach the performance of convergence, we provide the BER performances of RM(2, 7) decoded with these two algorithms in Fig. 3 and Fig. 4, respectively. Fig. 3 shows that the performances of RM(2, 7) decoded with the BF decoding algorithm converge at 10 iterations. Similarly, the NBF decoding algorithm requires 10 iterations to converge to the best performance according to Fig. 4. Numerical experiments show that a small number of iterations is needed for performance convergence. In the sequel, we set I_max = 30 for the BF and NBF decoding algorithms.

FIGURE 5. BER performances of RM(2, 7) decoded with various hard-decision algorithms.

Moreover, the BER performances of different hard-decision decoding algorithms are shown in Fig. 5 and Fig. 6. We compare the performances among the proposed BF decoding algorithm, the NBF decoding algorithm, and the conventional majority-logic decoding algorithm (labeled "hard majority") in Figs. 5 and 6. We can see that the BF decoding algorithm outperforms the conventional majority-logic decoding algorithm. At a BER of 10^−5, the proposed BF decoding algorithm has at least 0.3 dB gain over the conventional majority-logic decoding algorithm. The NBF decoding algorithm performs better than the BF decoding algorithm. As shown in Fig. 6, the NBF algorithm can have 0.5 dB gain compared to the BF algorithm. Moreover, there are 0.55 dB and 0.81 dB gains in comparison with the conventional majority-logic decoding algorithm for RM(2, 7) and RM(3, 8), respectively.

TABLE 1. The comparison of computational complexities for different hard-decision decoding algorithms.
(Note that, for the first-order RM codes, the complexity order of the hard-decision maximum likelihood (HDML) algorithm is O(n log n) when FHT decoding is used [27], [28]. Besides, K denotes the code dimension and 2^K is the number of codewords.)

The general complexity comparison for hard-decision algorithms is provided in Table 1. From Table 1, we find


that our proposed BF and NBF algorithms have the same complexities of additions. The NBF algorithm outperforms the BF algorithm in BER, but it requires additional multiplications. However, we can choose the normalization factor to be a power of two. Hence, multiplications can be avoided for fixed-point implementation.

FIGURE 6. BER performances of RM(3, 8) decoded with various hard-decision algorithms.
FIGURE 7. The average number of iterations for RM(2, 7) decoded with the BF and NBF algorithms.
FIGURE 8. BLER performances of RM(2, 7) decoded with various hard-decision algorithms.
FIGURE 9. BLER performances of RM(2, 7) decoded with the MBF algorithm for different D_Th.

The comparison of the average number of iterations for the proposed algorithms is shown in Fig. 7. We set I_max = 30. That is, the decoding process will be terminated before or at the 30th iteration. Since the BF and NBF decoding algorithms flip only one bit in each iteration, they require more iterations to successfully decode the erroneous codewords. The MBF and NMBF decoding algorithms require fewer iterations compared with the BF and NBF algorithms, respectively. The number of iterations can be reduced by 40% to 80%. The major complexities of these iterative algorithms are the computations in each iteration. For each iteration, the complexity order of the required operations is O( Σ_{i=0}^{r} C(m, i) · 2^{m−i} ). If we can terminate the decoding process with fewer iterations, the overall complexity can be reduced significantly.
Fig. 8 shows the block error rate (BLER) performances of the BF, MBF, NBF, and NMBF algorithms. The NBF and NMBF algorithms have the same BLER performances. The MBF algorithm performs close to the BF algorithm. There is only a 0.02 dB difference at a BLER of 10^−4. However, at an SNR of 7 dB, the average number of iterations for the BF algorithm is 9.44, while that for the MBF algorithm is only 2.07. The reduction ratio is 78.07% ((9.44 − 2.07)/9.44).
Fig. 9 demonstrates the performances of the MBF algorithm with different values of D_Th. We observe that if an appropriate threshold D_Th is selected, the MBF algorithm can perform close to the BF algorithm.
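As a quick illustration of the per-iteration complexity order quoted above, the following snippet evaluates Σ_{i=0}^{r} C(m, i) · 2^{m−i} for the two codes used in the simulations. The resulting numbers are our own evaluation of the formula, intended only to give a feel for the operation counts, not a measured benchmark.

```python
from math import comb

def ops_per_iteration(r, m):
    """Evaluate sum_{i=0}^{r} C(m, i) * 2^(m - i), the per-iteration
    complexity order discussed above (an order-of-magnitude count)."""
    return sum(comb(m, i) * (1 << (m - i)) for i in range(r + 1))

print(ops_per_iteration(2, 7))   # 1248 for RM(2, 7)
print(ops_per_iteration(3, 8))   # 4864 for RM(3, 8)
```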


VI. CONCLUSION
In this paper, we have proposed several hard-decision decoding algorithms for binary RM codes. They are iterative decoding algorithms that have not been proposed in the literature. The performance can be improved by updating the reliability measures and flipping codeword bits. These algorithms do not need large computational complexity since they can converge very rapidly in a small number of iterations. For hard-decision decoding, our proposed NBF decoding algorithm outperforms the conventional majority-logic decoding algorithm with 0.55 dB to 0.8 dB gain. These proposed algorithms have the same complexity orders. Specifically, the BF and MBF decoding algorithms only require integer additions.
Decoding the RM codes with our iterative decoding algorithms converges very fast with a small number of iterations. Moreover, the proposed MBF/NMBF algorithms can decrease the number of iterations such that the overall complexity can be reduced. The average number of iterations can be reduced by 40% to 80% compared to the BF/NBF algorithms.
Possible future work includes the theoretical analysis of the core parameters of the proposed algorithms, e.g., µ_0 and D_Th. Furthermore, the generalization of our proposed algorithms to decode non-binary RM codes is suggested as a future research topic.

ACKNOWLEDGMENT
The authors would like to express their sincere gratitude to the Associate Editor, Dr. Mingjun Dai, and an anonymous reviewer for their valuable comments that have significantly improved the quality of this paper.

REFERENCES
[1] D. E. Muller, "Application of Boolean algebra to switching circuit design and to error detection," Trans. IRE Prof. Group Electron. Comput., vol. EC-3, no. 3, pp. 6–12, Sep. 1954.
[2] I. Reed, "A class of multiple-error-correcting codes and the decoding scheme," IRE Prof. Group Inf. Theory, vol. 4, no. 4, pp. 38–49, Sep. 1954.
[3] G. Schnabl and M. Bossert, "Soft-decision decoding of Reed-Muller codes as generalized multiple concatenated codes," IEEE Trans. Inf. Theory, vol. 41, no. 1, pp. 304–308, Jan. 1995.
[4] A. E. Jones and T. A. Wilkinson, "Performance of Reed-Muller codes and a maximum-likelihood decoding algorithm for OFDM," IEEE Trans. Commun., vol. 47, no. 7, pp. 949–952, Jul. 1999.
[5] A. Ashikhmin and S. Litsyn, "Simple MAP decoding of first-order Reed-Muller and Hamming codes," IEEE Trans. Inf. Theory, vol. 50, no. 8, pp. 1812–1818, Aug. 2004.
[6] I. Dumer and R. Krichevskiy, "Soft-decision majority decoding of Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 46, no. 1, pp. 258–264, Jan. 2000.
[7] I. Dumer, "Recursive decoding and its performance for low-rate Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 50, no. 5, pp. 811–823, May 2004.
[8] I. Dumer, "Soft-decision decoding of Reed-Muller codes: A simplified algorithm," IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 954–963, Mar. 2006.
[9] I. Dumer and K. Shabunov, "Soft-decision decoding of Reed-Muller codes: Recursive lists," IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 1260–1266, Mar. 2006.
[10] M. Ye and E. Abbe, "Recursive projection-aggregation decoding of Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 66, no. 8, pp. 4948–4965, Aug. 2020.
[11] M. Lian, C. Häger, and H. D. Pfister, "Decoding Reed-Muller codes using redundant code constraints," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2020, pp. 42–47.
[12] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
[13] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Commun. Lett., vol. 16, no. 10, pp. 1668–1671, Oct. 2012.
[14] K. Niu, K. Chen, J. Lin, and Q. T. Zhang, "Polar codes: Primary concepts and practical decoding algorithms," IEEE Commun. Mag., vol. 52, no. 7, pp. 192–203, Jul. 2014.
[15] S. A. Hashemi, N. Doan, M. Mondelli, and W. J. Gross, "Decoding Reed-Muller and polar codes by successive factor graph permutations," in Proc. IEEE 10th Int. Symp. Turbo Codes Iterative Inf. Process. (ISTC), Dec. 2018, pp. 1–5.
[16] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Commun. Lett., vol. 12, no. 6, pp. 447–449, Jun. 2008.
[17] M. Mondelli, S. H. Hassani, and R. L. Urbanke, "From polar to Reed-Muller codes: A technique to improve the finite-length performance," IEEE Trans. Commun., vol. 62, no. 9, pp. 3084–3091, Sep. 2014.
[18] K. Chen, K. Niu, and J. R. Lin, "List successive cancellation decoding of polar codes," Electron. Lett., vol. 48, no. 9, pp. 500–501, Apr. 2012.
[19] I. Tal and A. Vardy, "List decoding of polar codes," IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2213–2226, May 2015.
[20] A. Balatsoukas-Stimming, M. B. Parizi, and A. Burg, "LLR-based successive cancellation list decoding of polar codes," IEEE Trans. Signal Process., vol. 63, no. 19, pp. 5165–5179, Oct. 2015.
[21] M. Kamenev, Y. Kameneva, O. Kurmaev, and A. Maevskiy, "A new permutation decoding method for Reed-Muller codes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Paris, France, Jul. 2019, pp. 26–30.
[22] K. Ivanov and R. Urbanke, "Permutation-based decoding of Reed-Muller codes in binary erasure channel," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Paris, France, Jul. 2019, pp. 21–25.
[23] J. Zhang and M. P. C. Fossorier, "A modified weighted bit-flipping decoding of low-density parity-check codes," IEEE Commun. Lett., vol. 8, no. 3, pp. 165–167, Mar. 2004.
[24] M. P. C. Fossorier, M. Mihaljevic, and H. Imai, "Reduced complexity iterative decoding of low-density parity check codes based on belief propagation," IEEE Trans. Commun., vol. 47, no. 5, pp. 673–680, May 1999.
[25] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inf. Theory, vol. IT-8, no. 1, pp. 21–28, Jan. 1962.
[26] Y.-T. Ni, C.-Y. Pai, and C.-Y. Chen, "An iterative bit-flipping decoding algorithm for binary Reed-Muller codes," in Proc. Int. Symp. Inf. Theory Appl. (ISITA), Kapolei, HI, USA, Oct. 2020, pp. 1–5.
[27] R. R. Green, "A serial orthogonal decoder," JPL Space Programs Summary, vol. 4, nos. 37–39, pp. 247–253, 1966.
[28] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: Elsevier, 1977.

YONG-TING NI received the B.S. and M.S. degrees in engineering science from the National Cheng Kung University (NCKU), Tainan, Taiwan, in 2018 and 2020, respectively. Currently, he is an Engineer with Phison Electronics Corporation, Taiwan.

DUC NHAT NGUYEN received the B.S. and M.S. degrees in engineering science from the National Cheng Kung University (NCKU), Tainan, Taiwan, in 2019 and 2021, respectively. Currently, he is an Engineer with Wistron NeWeb Corporation, Taiwan.


FENG-KAI LIAO received the B.S. degree in mechanical and electro-mechanical engineering from the National Sun Yat-sen University (NSYSU), Kaohsiung, in 2020, and the M.S. degree in engineering science from the National Cheng Kung University (NCKU), Tainan, Taiwan, in 2022.

TZU-CHIEH KAO received the B.S. degree in engineering science from the National Cheng Kung University (NCKU), Tainan, in 2021, where he is currently pursuing the Ph.D. degree in engineering science. His research interests include sequence design and error-correcting codes.

CHAO-YU CHEN (Member, IEEE) received the B.S. degree in electrical engineering and the M.S. and Ph.D. degrees in communications engineering from the National Tsing Hua University (NTHU), Hsinchu, Taiwan, in 2000, 2002, and 2009, respectively, under the supervision of Prof. Chi-Chao Chao.
From 2008 to 2009, he was a Visiting Ph.D. Student with the University of California, Davis (with Prof. Shu Lin). From 2009 to 2016, he was the Technical Manager with the Communication System Design Division, Mediatek Inc., Hsinchu. From July 2018 to August 2018, he was with the University of California, Davis, as a Visiting Scholar (with Prof. Shu Lin). Since February 2016, he has been a Faculty Member with the National Cheng Kung University (NCKU), Tainan, Taiwan, where he is currently an Associate Professor with the Department of Engineering Science. His current research interests include sequence design, error-correcting codes, digital communications, and wireless networks.
Dr. Chen was a recipient of the 15th and 20th Y. Z. Hsu Science Paper Award Administered by the Far Eastern Y. Z. Hsu Science and Technology Memorial Foundation, Taiwan, in 2017 and 2022, and the Best Paper Award for Young Scholars by the IEEE Information Theory Society Taipei/Tainan Chapter and the IEEE Communications Society Taipei/Tainan Chapter, in 2018. Since January 2019, he has been serving as the Vice Chair for the IEEE Information Theory Society Tainan Chapter.

