BCH Decoding
Yunghsiang S. Han
Decoding Procedure
Decoding of BCH/RS codes has four steps:
1. Syndrome computation.
2. Solving the key equation for the error-locator polynomial σ(x).
3. Searching for the error locations, given σ(x), by simply finding the inverse roots of σ(x).
4. (Only nonbinary codes need this step) Determining the error magnitude at each error location by the error-evaluator polynomial Ω(x).
The decoding procedure can be performed in the time or the frequency domain.
This lecture considers only the decoding procedure in the time domain.
Graduate Institute of Communication Engineering, National Taipei University
Syndrome Computation
Let α, α^2, . . . , α^{2t} be the 2t consecutive roots of the generator polynomial for the BCH/RS code, where α is an element in the finite field GF(q^m) with order n.
Let y(x) be the received vector, and suppose there are v errors of values e_{i_1}, . . . , e_{i_v} at positions i_1, . . . , i_v. Then define the syndrome S_j, 1 ≤ j ≤ 2t, as follows:

    S_j = y(α^j) = Σ_{i=0}^{n-1} e_i (α^j)^i = Σ_{k=1}^{v} e_{i_k} α^{i_k j}.    (1)

(The codeword part of y(x) vanishes because each α^j is a root of the generator polynomial.)
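The syndrome sum in (1) is easy to sketch in code. The snippet below assumes GF(2^4) with α^4 = α + 1 (the field of the worked example later in the lecture), a binary code (all e_i = 1), and log/antilog tables EXP/LOG of our own making:

```python
# Build log/antilog tables for GF(2^4) with primitive polynomial x^4 + x + 1.
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:          # reduce modulo x^4 = x + 1
        x ^= 0x13
for i in range(15, 30):   # second copy simplifies modular indexing
    EXP[i] = EXP[i - 15]

def syndromes(error_positions, num_syndromes):
    """S_j = sum over error positions i of (alpha^j)^i, for a binary code."""
    S = []
    for j in range(1, num_syndromes + 1):
        s = 0
        for i in error_positions:
            s ^= EXP[(i * j) % 15]   # addition in GF(2^m) is XOR
        S.append(s)
    return S

# y(x) = x^7 + x^2 received over the all-zero codeword: errors at 7 and 2.
S = syndromes([7, 2], 6)
# S_1 = alpha^12, S_2 = alpha^9, S_3 = 0, S_4 = alpha^3, S_5 = 1, S_6 = 0
```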
Key Equation
Recall that the error-locator polynomial is

    σ(x) = (1 - xX_1)(1 - xX_2) · · · (1 - xX_v) = σ_0 + Σ_{i=1}^{v} σ_i x^i,

where σ_0 = 1.
Define the infinite-degree syndrome polynomial (though we only know the first 2t coefficients) as

    S(x) = Σ_{j=0}^{∞} S_{j+1} x^j = Σ_{j=0}^{∞} x^j ( Σ_{k=1}^{v} Y_k X_k^{j+1} ) = Σ_{k=1}^{v} Y_k X_k / (1 - xX_k),    (2)

where the last equality sums the geometric series Σ_j (xX_k)^j. The error-evaluator polynomial is then

    Ω(x) = σ(x) S(x) = Σ_{k=1}^{v} Y_k X_k Π_{j=1, j≠k}^{v} (1 - xX_j),    (3)

since the factor (1 - xX_k) of σ(x) cancels the denominator of the k-th term of S(x).
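A quick numerical check of (2)-(3): multiplying σ(x) by the truncated S(x) mod x^{2t} should leave a low-degree Ω(x). This sketch assumes GF(2^4) with α^4 = α + 1 and the locator and syndromes of the lecture's later worked example:

```python
# Numerical check of Omega(x) = sigma(x) S(x) mod x^{2t} over GF(2^4).
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x; LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_mul_mod(p, q, n):
    """Product of coefficient lists (low degree first), truncated mod x^n."""
    out = [0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < n:
                out[i + j] ^= gf_mul(a, b)
    return out

sigma = [1, EXP[12], EXP[9]]                  # 1 + a^12 x + a^9 x^2
S = [EXP[12], EXP[9], 0, EXP[3], 1, 0]        # S_1, ..., S_6
omega = poly_mul_mod(sigma, S, 6)
# omega == [EXP[12], 0, 0, 0, 0, 0]: Omega(x) = a^12, degree < v = 2
```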
Chien Search
The next important decoding step is to find the actual error locations X_1 = α^{i_1}, X_2 = α^{i_2}, . . . , X_v = α^{i_v}.
Note that σ(x) has roots

    X_1^{-1} = α^{-i_1}, X_2^{-1} = α^{-i_2}, . . . , X_v^{-1} = α^{-i_v}.

Observe that an error occurs in position i if and only if σ(α^{-i}) = 0, or

    Σ_{k=0}^{v} σ_k α^{-ik} = 0.

Then

    σ(α^{-(i-1)}) = Σ_{k=0}^{v} σ_k α^{-(i-1)k} = Σ_{k=0}^{v} (σ_k α^{-ik}) α^k,

so the terms for position i - 1 are obtained from those for position i by multiplying the k-th term by α^k, which is what makes the search efficient in hardware.
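A minimal sketch of the search, assuming GF(2^4) with α^4 = α + 1 and the locator σ(x) from the worked example later in the lecture (the function name chien_search is ours):

```python
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x; LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def chien_search(sigma_logs, n=15):
    """sigma_logs[k] = log_alpha of sigma_k, or None for a zero coefficient."""
    positions = []
    for i in range(n):
        acc = 0
        for k, lg in enumerate(sigma_logs):
            if lg is not None:
                acc ^= EXP[(lg - i * k) % 15]   # sigma_k * alpha^{-ik}
        if acc == 0:
            positions.append(i)
    return positions

# sigma(x) = 1 + a^12 x + a^9 x^2: errors in positions 2 and 7
errs = chien_search([0, 12, 9])
```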
Forney's Formula
For nonbinary BCH or RS codes one still needs to determine the error magnitude for each error location.
These values, Y_1, Y_2, . . . , Y_v, can be obtained by utilizing the error-evaluator polynomial. This step is known as Forney's formula.
By substituting X_k^{-1} = α^{-i_k} into the error-evaluator polynomial we have

    Ω(X_k^{-1}) = Y_k X_k Π_{j=1, j≠k}^{v} (1 - X_k^{-1} X_j),

since every other term of (3) contains the vanishing factor (1 - X_k^{-1} X_k). Hence

    X_k Π_{j≠k} (1 - X_k^{-1} X_j) = (1/Y_k) Ω(X_k^{-1}),

and the error magnitude follows as

    Y_k = Ω(X_k^{-1}) / ( X_k Π_{j≠k} (1 - X_k^{-1} X_j) ).    (4)
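Forney's formula (4) can be sketched as follows, again over GF(2^4) with α^4 = α + 1. For the binary worked example (Ω(x) = α^12 and locators α^7, α^2, values taken from the earlier computation) both magnitudes must come out as 1:

```python
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x; LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def poly_eval(coeffs, x):           # coefficients listed low degree first
    acc, p = 0, 1
    for c in coeffs:
        acc ^= gf_mul(c, p)
        p = gf_mul(p, x)
    return acc

def forney(omega, locators):
    """Y_k = Omega(X_k^{-1}) / (X_k * prod_{j != k} (1 - X_k^{-1} X_j))."""
    values = []
    for k, Xk in enumerate(locators):
        num = poly_eval(omega, gf_inv(Xk))
        den = Xk
        for j, Xj in enumerate(locators):
            if j != k:
                den = gf_mul(den, 1 ^ gf_mul(gf_inv(Xk), Xj))
        values.append(gf_mul(num, gf_inv(den)))
    return values

Y = forney([EXP[12]], [EXP[7], EXP[2]])
# binary code: both error values are 1
```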
As an example, compute the greatest common divisor d(x) of x^5 + 1 and x^3 + 1 over GF(2):

    x^5 + 1 = x^2 (x^3 + 1) + (x^2 + 1)
    x^3 + 1 = x (x^2 + 1) + (x + 1)
    x^2 + 1 = (x + 1)(x + 1) + 0

Since d(x) divides x^5 + 1 and x^3 + 1, it must also divide x^2 + 1. Since it divides x^3 + 1 and x^2 + 1, it must also divide x + 1. Consequently, d(x) = x + 1.
The useful aspect of this process is that, at each iteration, a set of polynomials f_i(x), g_i(x), and r_i(x) is generated such that

    f_i(x) a(x) + g_i(x) b(x) = r_i(x).    (5)

A way to obtain f_i(x) and g_i(x) is as follows.
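The process can be sketched with binary polynomials stored as Python ints (bit i holds the coefficient of x^i; all function names here are ours). The returned triple satisfies equation (5) at the final step:

```python
def pdeg(p):
    return p.bit_length() - 1

def pmul(a, b):
    """Carry-less (GF(2)[x]) polynomial product."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Quotient and remainder of a / b over GF(2)[x]."""
    q = 0
    while a and pdeg(a) >= pdeg(b):
        shift = pdeg(a) - pdeg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def ext_euclid(a, b):
    """Return (d, f, g) with f*a + g*b = d = gcd(a, b) over GF(2)[x]."""
    f0, g0, r0 = 1, 0, a
    f1, g1, r1 = 0, 1, b
    while r1:
        q, r = pdivmod(r0, r1)
        f0, f1 = f1, f0 ^ pmul(q, f1)
        g0, g1 = g1, g0 ^ pmul(q, g1)
        r0, r1 = r1, r
    return r0, f0, g0

a, b = 0b100001, 0b1001     # x^5 + 1 and x^3 + 1
d, f, g = ext_euclid(a, b)
# d = 0b11 = x + 1, and pmul(f, a) ^ pmul(g, b) == d
```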
Example
Consider the triple-error-correcting BCH code whose generator polynomial has α, α^2, . . . , α^6 as roots, where α is a primitive element of GF(2^4) with α^4 = α + 1. Let the received vector be y(x) = x^7 + x^2. We now want to find the error locations of the received vector.
First we need to calculate the syndrome coefficients. By (1), we have

    S(x) = x^4 + α^3 x^3 + α^9 x + α^{12}.

Next we perform the Sugiyama algorithm as follows:
    i     g_i(x)                  r_i(x)               q_i(x)
    -1    0                       x^6                  -
    0     1                       S(x)                 -
    1     x^2 + α^3 x + α^6       α^{11} x + α^3       x^2 + α^3 x + α^6

Normalizing g_1(x) by its constant term g_1(0) = α^6 gives the error-locator polynomial σ(x) = 1 + α^{12} x + α^9 x^2.
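A sketch of the Sugiyama step as an extended Euclidean iteration over GF(2^4) (α^4 = α + 1): divide x^{2t} by S(x), track the Bezout coefficient g(x), stop once deg r_i < t, and normalize g(x) so that σ(0) = 1. Function names and the list-based polynomial representation (lowest degree first) are our choices:

```python
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x; LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[(15 - LOG[a]) % 15]

def pdeg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def pdiv(a, b):
    """Quotient and remainder of a / b over GF(16); lists, low degree first."""
    a = a[:]
    db = pdeg(b)
    q = [0] * max(1, len(a) - db)
    lead = ginv(b[db])
    while pdeg(a) >= db:
        da = pdeg(a)
        c = gmul(a[da], lead)
        q[da - db] ^= c
        for i in range(db + 1):
            a[i + da - db] ^= gmul(c, b[i])
    return q, a

def sugiyama(S, t):
    r0, r1 = [0] * (2 * t) + [1], S + [0]   # x^{2t} and S(x)
    g0, g1 = [0], [1]
    while pdeg(r1) >= t:
        q, r = pdiv(r0, r1)
        g = g0 + [0] * (len(q) + len(g1))   # g = g0 + q * g1
        for i, qc in enumerate(q):
            for j, gc in enumerate(g1):
                g[i + j] ^= gmul(qc, gc)
        r0, r1, g0, g1 = r1, r, g1, g
    c = ginv(g1[0])                         # normalize so sigma(0) = 1
    sigma = [gmul(c, gc) for gc in g1]
    while sigma and sigma[-1] == 0:
        sigma.pop()
    return sigma

S = [EXP[12], EXP[9], 0, EXP[3], 1, 0]      # syndromes of y(x) = x^7 + x^2
sigma = sugiyama(S, 3)
# sigma(x) = 1 + a^12 x + a^9 x^2
```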
For binary codes the syndromes and the coefficients of σ(x) are related by Newton's identities:

    S_1 + σ_1 = 0
    S_3 + σ_1 S_2 + σ_2 S_1 + σ_3 = 0
    S_5 + σ_1 S_4 + σ_2 S_3 + σ_3 S_2 + σ_4 S_1 + σ_5 = 0
    . . .    (7)

At stage μ of the iterative procedure, the current candidate locator polynomial is

    σ^{(μ)}(x) = 1 + σ_1^{(μ)} x + σ_2^{(μ)} x^2 + · · · + σ_{d_μ}^{(μ)} x^{d_μ}.
Example
Consider the same BCH code and received vector as in the previous example. Then

    S(x) = x^4 + α^3 x^3 + α^9 x + α^{12}.

Next we perform the Berlekamp-Massey algorithm as follows:
The iterations proceed as follows: the algorithm is initialized at μ = -1/2 and μ = 0 with σ^{(μ)}(x) = 1; the first nonzero discrepancy, α^{12}, updates the polynomial to 1 + α^{12} x (taking μ' = -1/2); the next nonzero discrepancy updates it to 1 + α^{12} x + α^9 x^2 (taking μ' = 0); all remaining discrepancies are zero, so the final error-locator polynomial is

    σ(x) = 1 + α^{12} x + α^9 x^2,

in agreement with the previous example.
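The same σ(x) can be obtained with a compact Berlekamp-Massey sketch over GF(2^4) (α^4 = α + 1). This is the standard LFSR-synthesis form of the algorithm; the variable names C, B, L, b, m are conventional, not from the slides:

```python
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x; LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp_massey(S):
    """Return (sigma, L): shortest LFSR generating the syndrome list S."""
    C, B = [1], [1]          # current / previous connection polynomials
    L, m, b = 0, 1, 1        # LFSR length, shift since last update, last d
    for n in range(len(S)):
        d = S[n]             # discrepancy d_n
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        coef = gf_mul(d, gf_inv(b))
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] ^= gf_mul(coef, Bi)   # C(x) += (d/b) x^m B(x)
        if 2 * L <= n:       # length change
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

S = [EXP[12], EXP[9], 0, EXP[3], 1, 0]
sigma, L = berlekamp_massey(S)
# sigma(x) = 1 + a^12 x + a^9 x^2, L = 2
```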
Since Ω(x) = σ(x)S(x) has degree less than v, the syndromes satisfy the linear recurrence

    S_j = - Σ_{i=1}^{v} σ_i S_{j-i},    j = v + 1, v + 2, . . . , 2t.

That is, σ(x) describes an LFSR that generates the syndrome sequence. At step k the algorithm maintains the shortest LFSR, with coefficients σ_i^{[k]} and length L_k, that generates the first k syndromes:

    S_j = - Σ_{i=1}^{L_k} σ_i^{[k]} S_{j-i},    j = L_k + 1, L_k + 2, . . . , k.
Before S_k is taken into account, the current register predicts

    Ŝ_k = - Σ_{i=1}^{L_{k-1}} σ_i^{[k-1]} S_{k-i},

so the discrepancy at step k is

    d_k = S_k - Ŝ_k = S_k + Σ_{i=1}^{L_{k-1}} σ_i^{[k-1]} S_{k-i} = Σ_{i=0}^{L_{k-1}} σ_i^{[k-1]} S_{k-i},

using σ_0^{[k-1]} = 1.
If d_k ≠ 0, the register is updated by the formula

    σ^{[k]}(x) = σ^{[k-1]}(x) + A x^ℓ σ^{[m-1]}(x),    (9)

where m < k is an earlier step with nonzero discrepancy d_m and ℓ = k - m. The new discrepancy is

    d_k' = Σ_{i=0}^{L_k} σ_i^{[k]} S_{k-i} = Σ_{i=0}^{L_{k-1}} σ_i^{[k-1]} S_{k-i} + A Σ_{i=0}^{L_{m-1}} σ_i^{[m-1]} S_{k-ℓ-i}.

Since k - ℓ = m, the second summation gives

    A Σ_{i=0}^{L_{m-1}} σ_i^{[m-1]} S_{m-i} = A d_m.

If we choose

    A = - d_m^{-1} d_k,

then

    d_k' = d_k - d_m^{-1} d_k d_m = 0.

We still need to prove that such a selection indeed makes a shortest LFSR.
To see this, suppose the contrary and expand the discrepancy in two ways. The old register satisfies

    S_j = - Σ_{i=1}^{L_{k-1}} σ_i^{[k-1]} S_{j-i},    j = L_{k-1} + 1, . . . , k - 1,

but fails at j = k (its prediction Ŝ_k differs from S_k), while the new register satisfies

    S_j = - Σ_{i=1}^{L_k} σ_i^{[k]} S_{j-i},    j = L_k + 1, L_k + 2, . . . , k.

In particular, we have

    S_k = - Σ_{i=1}^{L_k} σ_i^{[k]} S_{k-i}.

If L_k + L_{k-1} < k, every S_{k-i} on the right can itself be expanded by the old register's recurrence, giving

    S_k = Σ_{i=1}^{L_k} Σ_{j=1}^{L_{k-1}} σ_i^{[k]} σ_j^{[k-1]} S_{k-i-j}.

However, expanding Ŝ_k = - Σ_{i=1}^{L_{k-1}} σ_i^{[k-1]} S_{k-i} by the new register's recurrence gives the same double summation:

    Ŝ_k = Σ_{j=1}^{L_{k-1}} Σ_{i=1}^{L_k} σ_j^{[k-1]} σ_i^{[k]} S_{k-i-j}.

Hence S_k = Ŝ_k, contradicting d_k ≠ 0. Therefore L_k + L_{k-1} ≥ k, i.e., L_k ≥ k - L_{k-1}, which is the desired lower bound on the register length.
Notations
The generator polynomial for an (n, k) RS code can be written as

    g(x) = Π_{i=1}^{2t} (x - α^i),
and

    r(x) = Σ_{i=0}^{2t-1} r_i x^i.
Define also

    p(x) = Π_k (x - α^k) = Σ_i p_i x^i.
For a single error of value a at location X, manipulating the remainder coefficients (starting from expressions of the form X^{-1} r(α) = X^{-1} Σ_{i=0}^{2t-1} r_i α^i) leads to the relation Y = a f(X), where

    f(x) = Σ_{i=0}^{2t-1} α^{2i} p_i / (α^i - x)

for x ∈ L_m; f(x) can be pre-computed for all values of x ∈ L_m. The remainder coefficients of a single error then satisfy

    r_i = Y α^i p_i / ( f(X) W_m(α^i) ).

Assume that there are v ≥ 1 errors, with error locators X_i and corresponding error values Y_i for i = 1, 2, . . . , v. By linearity we have

    r_k = p_k Σ_{i=1}^{v} Y_i / ( f(X_i)(α^k - X_i) ),    k = 0, 1, . . . , 2t - 1.
Define

    F(x) = Σ_{i=1}^{v} Y_i / ( f(X_i)(x - X_i) ) = N_m(x) / W_m(x),

where W_m(x) = Π_{i=1}^{v} (x - X_i) is the error-locator polynomial for the errors among the message locations.
Note that the error-locator polynomial defined here is different from the one previously defined by Peterson.
It is clear that deg(N_m(x)) < deg(W_m(x)).
Y. S. Han
We have
rk
k
Nm ( ) =
W
(
), k Lc = 0, 1, . . . , 2t 1.
m
k
pk
k
50
Y. S. Han
On the other hand, for a single error of value Y in check location e,

    r_k = Y if k = e, and r_k = 0 otherwise.
WB Key Equation
Let E_m = {i_1, i_2, . . . , i_{v_l}} ⊆ L_m denote the error locations among the message locations.
Let E_c = {i_{v_l+1}, i_{v_l+2}, . . . , i_v} ⊆ L_c denote the error locations among the check locations.
The (error location, error value) pairs are (X_i, Y_i), i = 1, 2, . . . , v.
By linearity,

    r_k = p_k Σ_{i=1}^{v_l} Y_i / ( f(X_i)(α^k - X_i) ) + { Y_j if error locator X_j is in check location k; 0 otherwise }.
We have

    r_k / p_k = N_m(α^k) / W_m(α^k),    k ∈ L_c \ E_c.    (10)
We write (10) as

    N(x_i) = W(x_i) y_i,    i = 1, 2, . . . , 2t.    (11)
Y. S. Han
i=1
Y [Xi ]
Nm (x)Wc (x)
N (x)
=
=
,
f (Xi )(x Xi )
Wm (x)Wc (x)
iEcm (x Xi )
where Ecm = Ec Em .
Suppose we want determine Y [Xk ] at message location.
Multiplying both sides of the above equation by
Y [Xk ]
i=k (Xk Xi )
iEcm
= N (Xk ).
f (Xk )
Graduate Institute of Communication Engineering, National Taipei University
55
Y. S. Han
The derivative of W(x) = Π_{i∈E_cm} (x - X_i) is

    W'(x) = Σ_{j∈E_cm} Π_{i∈E_cm, i≠j} (x - X_i),

and

    W'(X_k) = Π_{i∈E_cm, i≠k} (X_k - X_i).

Thus,

    Y[X_k] = f(X_k) N(X_k) / W'(X_k).

When the error is in a check location, X_j = α^k for k ∈ E_c, we have

    r_k = Y[X_j] + p_k Σ_{i=1}^{v_l} Y[X_i] / ( f(X_i)(α^k - X_i) ) = Y[X_j] + p_k X_j N(X_j) / W(X_j).
Thus,

    Y[X_j] = r_k - p_k X_j N(X_j) / W(X_j).

Both N(X_j) = N_m(X_j) W_c(X_j) and W(X_j) = W_m(X_j) W_c(X_j) are 0 (since W_c(X_j) = 0), so L'Hopital's rule must be used. Since

    N'(X_j) = N_m'(X_j) W_c(X_j) + N_m(X_j) W_c'(X_j) = N_m(X_j) W_c'(X_j)

and

    W'(X_j) = W_m'(X_j) W_c(X_j) + W_m(X_j) W_c'(X_j) = W_m(X_j) W_c'(X_j),

so

    N'(X_j) / W'(X_j) = N_m(X_j) / W_m(X_j) ≠ 0.

Then

    Y[X_j] = r_k - p_k X_j N'(X_j) / W'(X_j).    (12)
Welch-Berlekamp Algorithm
We are interested in a solution satisfying deg(N(x)) < deg(W(x)) and deg(W(x)) ≤ m/2.
The rank of a solution [N(x), W(x)] is defined as

    rank[N(x), W(x)] = max{ 2 deg(W(x)), 1 + 2 deg(N(x)) }.

The WB algorithm constructs a solution to the rational interpolation problem of rank m and shows that it is unique.
Since the solution is unique, by the definition of the rank, the degree of N(x) is less than the degree of W(x).
Let P(x) be an interpolation polynomial such that

    P(x_i) = y_i,    i = 1, 2, . . . , m.    (13)
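The rank function and the interpolation condition can be illustrated with a tiny sketch (over the rationals for readability; a real decoder works over GF(2^m); all names here are ours):

```python
from fractions import Fraction

def deg(p):                      # degree of a coefficient list, low degree first
    return max((i for i, c in enumerate(p) if c), default=-1)

def rank(N, W):
    """rank[N(x), W(x)] = max{2 deg(W), 1 + 2 deg(N)}."""
    return max(2 * deg(W), 1 + 2 * deg(N))

def peval(p, x):
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def solves(N, W, points):
    """Interpolation condition N(x_i) = W(x_i) y_i for every point."""
    return all(peval(N, x) == peval(W, x) * y for x, y in points)

# [N, W] = [x, x^2 - 1] matches y = x / (x^2 - 1) and has rank max(4, 3) = 4.
N, W = [0, 1], [-1, 0, 1]
pts = [(2, Fraction(2, 3)), (3, Fraction(3, 8))]
ok = solves(N, W, pts)
```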
Let

    N(x) - W(x) P(x) = Q(x) φ(x) + R(x).    (14)

Define the mapping

    E : S → { h ∈ F[x] | deg(h(x)) < m },

where φ_k(x) = Π_{i=1}^{k} (x - x_i) and P_k(x) is a polynomial that interpolates the first k points, P_k(x_i) = y_i, i = 1, 2, . . . , k.
The WB algorithm finds a sequence of solutions [N(x), W(x)] of minimum rank satisfying the interpolation(k) problem, for k = 1, 2, . . . , m.
If [N(x), W(x)] is an irreducible solution to the interpolation(k) problem and [M(x), V(x)] is another solution such that rank[N(x), W(x)] + rank[M(x), V(x)] ≤ 2k, then [M(x), V(x)] can be reduced to [N(x), W(x)].
Proof: By assumption, there exist two polynomials Q_1(x) and Q_2(x) such that

    N(x) - W(x) P_k(x) = Q_1(x) φ_k(x),
    M(x) - V(x) P_k(x) = Q_2(x) φ_k(x).    (15)

Recall that N(x_i) = y_i W(x_i) and M(x_i) = y_i V(x_i) for i = 1, . . . , k. Hence
    N(x_i) V(x_i) = M(x_i) W(x_i),    i = 1, . . . , k,

which implies

    φ_k(x) | ( N(x)V(x) - M(x)W(x) ).    (16)

From the definition of the rank we have

    deg(N(x)V(x)) = deg(N(x)) + deg(V(x)) ≤ ( rank[N(x), W(x)] - 1 )/2 + rank[M(x), V(x)]/2 < k

and

    deg(M(x)W(x)) = deg(M(x)) + deg(W(x)) ≤ ( rank[M(x), V(x)] - 1 )/2 + rank[N(x), W(x)]/2 < k.

Then deg(N(x)V(x) - M(x)W(x)) < k. From (16), we have

    N(x)V(x) - M(x)W(x) = 0.    (17)
Equivalently,

    N(x) / W(x) = M(x) / V(x).    (18)

Since [N(x), W(x)] is irreducible, there is a polynomial h(x) with M(x)/N(x) = V(x)/W(x) = h(x), so

    M(x) = h(x) N(x),    V(x) = h(x) W(x),    (19)

and [M(x), V(x)] reduces to [N(x), W(x)].
67
Y. S. Han
68
Y. S. Han
69
Y. S. Han
70
Y. S. Han
71
Y. S. Han
72
Y. S. Han
73
Y. S. Han
74