Chapter Linear Block Codes

KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY

CEMSE DIVISION ELECTRICAL ENGINEERING PROGRAM


EE 242 Digital Communications and Coding
Linear Block Codes

Channel Coding
Channel coding encodes the data sent over a communication channel by adding redundancy bits to the symbols to be transmitted, in order to lower the error rate. Such methods are widely deployed in wireless
communications. There are two major classes of channel codes:

1. Block codes.

2. Convolutional Codes.

Here are some applications in which channel coding techniques are heavily used:

1. CD (Compact Disk)

• Interleaved Reed-Solomon codes


• Protection against burst errors (consecutive errors)
• Interference
• ...

2. Data Modems

• Convolutional codes, coded modulation (Ungerboeck 1982), . . .


• Digital Subscriber Line (DSL): Reed-Solomon forward error correction and trellis coded
modulation (TCM)
• ...

3. Mobile Communications

• Interleaved convolutional codes


• Reed-Solomon codes, Hamming codes
• ...

4. Satellite Communications

• Convolutional codes, Reed-Solomon codes, Reed-Muller codes


• Turbo codes
• Code concatenation
• ...

Historical developments
Year Development
1948 Shannon founds information theory
1950 One-bit-error-correcting Hamming codes
1952 Gilbert bounds for block codes
1961/62 Sequential decoding (Wozencraft, Fano)
1967 Viterbi decoding
1968/69 Berlekamp-Massey algorithm for decoding BCH codes and Reed-Solomon (RS) codes
1968 Convolutional codes in Pioneer spacecraft
1974 Convolutional codes in Helios spacecraft
1976 Ungerboeck codes (coded modulation)
1982 Trellis coded modulation (TCM)
1985 VLSI (Very Large Scale Integration) realization of RS and Viterbi decoders
1988/90 Punctured convolutional codes for mobile communications
1993 Turbo codes

Shannon’s Channel Coding Theorem

For any rate Rc smaller than the channel capacity Cc, there exists a code of rate Rc with which the
error probability can be made arbitrarily small (but not zero); conversely, for any code with Rc > Cc
the error probability cannot be made arbitrarily small. However, the code length and decoding
complexity are not restricted, and the theorem gives no construction of a code that is capable of
achieving capacity.
Shannon’s coding theorem gives coding theorists targets to aim at, and they have worked on several
fronts, most notably:

1. Design codes that asymptotically achieve capacity.

2. Reduce code length.

3. Reduce the complexity of encoding and decoding.

Block Codes
In block codes, a binary sequence of length k, representing the information bits, is mapped into one
of M = 2^k codewords of length n (n > k).
Note: if n is known and we know the rank of the parity check matrix H, then the dimension k of the
linear block code is given by k = n − rank(H).
How do we use the n bits? The codeword is usually transmitted over the communication channel
by sending a sequence of n binary symbols using one of the modulation techniques that we have
studied: BPSK, QPSK, FSK. (Do we have to send the n bits together?)
Block codes are memoryless: each set of k bits is independent of the next sequence of k bits, so the
sequence of codewords is independent of each other.
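The relation k = n − rank(H) can be checked numerically. The sketch below is not from the notes; it implements Gaussian elimination over GF(2) in plain Python and uses, as an assumed example, the standard parity check matrix of the (7, 4) Hamming code.

```python
def gf2_rank(matrix):
    """Rank over GF(2) of a binary matrix (list of rows), via Gaussian elimination."""
    rows = [row[:] for row in matrix]   # work on a copy
    rank, n_cols = 0, len(rows[0])
    for col in range(n_cols):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # eliminate this column from every other row (XOR = addition in GF(2))
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Assumed example: parity check matrix of the (7,4) Hamming code, rank 3.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
n = len(H[0])
k = n - gf2_rank(H)
print(k)  # 4
```

Real (GF(2)) rank is used rather than a floating-point rank, since linear independence here is over the binary field.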

Convolutional Codes
A convolutional code is a set of infinite sequences of code symbols, i.e., no data blocks but data
streams. Encoding is done by a linear shift register circuit (with or without feedback). In general, a
convolutional encoder maps k input streams to n output streams as shown in Figure 1.

Figure 1: Convolutional code with two memory elements

Code rate: Rc = k/n represents the number of information bits carried per binary symbol transmitted
over the channel.
(Given a sequence of k bits mapped into a sequence of n bits, how does the entropy of the
codewords differ?)
n > k ⇒ Rc < 1

General Properties of Linear Block Codes
A binary block code C consists of a set of M vectors of length n:

c_m = (c_m1, c_m2, . . . , c_mn)^T   (1)

How many possible codewords are there? 2^n (n bits). We choose only 2^k of them (k bits).

A block of k information bits is mapped into a codeword of length n selected from the set of M codewords.
This is called an (n, k) block code with rate Rc = k/n.
Why linear block codes? Linearity guarantees easy implementation of encoding and decoding.
What does linearity mean in this context? For any two codewords c1 and c2,
c1 + c2 is also a codeword.

(The all-zero vector is a codeword - why?)


To turn a block code into a linear block code:

1. We must ensure that the zero vector 0 is a valid codeword. If not, we should add it.

2. We must ensure that the sum of every two codewords is also a valid codeword. If not, we should
add the missing sums to our block code to make it linear.

We can represent G in systematic form by using row operations.

Generator and Parity Check Matrices

The mapping from k bits to n bits can be represented by a k × n generator matrix G as follows:

c_m = u_m G,   1 ≤ m ≤ 2^k   (2)

where c_m is the corresponding codeword and u_m is the information sequence. G can be written in terms of its rows as

G =
g_1
g_2
...
g_k   (3)

g_1 is a codeword - why?
Which information sequence generates it?
Thus, the codeword corresponding to the input sequence u_m = (u_m1, . . . , u_mk) is

c_m = u_m G = Σ_{i=1}^{k} u_mi g_i   (4)

The codewords are the set of linear combinations of the rows of G.

Codewords are the row space of G.
We can put things in a more formal theoretical perspective. Consider the set of all n-tuples
F^n ≜ {0, 1}^n equipped with binary addition (XOR) and scalar multiplication. It is easy to see that
(F^n, +, ·) is a vector space.
A linear block code is simply a subspace of the vector space defined above: by definition, a linear
code is closed under addition and is trivially closed under scalar multiplication, so it is a subspace
of the mother vector space. One question then is how to determine the dimension of the code. We
write the matrix G in reduced row echelon form; from that form we deduce the
dimension of the space (= rank(G)).
Example: Consider the following 4 × 8 generator matrix G. The row rank of G is k ≤ 4. We
write it in row echelon form to determine its rank.

G =
1 1 1 1 1 1 1 1
0 1 0 0 1 1 0 1
0 0 1 0 1 0 1 1
0 0 0 1 0 1 1 1

Replace row 1 by row 1 + row 2 to get

1 0 1 1 0 0 1 0
0 1 0 0 1 1 0 1
0 0 1 0 1 0 1 1
0 0 0 1 0 1 1 1

Now replace row 1 by row 1 + row 3 + row 4 to get

1 0 0 0 1 1 1 0
0 1 0 0 1 1 0 1
0 0 1 0 1 0 1 1
0 0 0 1 0 1 1 1

This is an (8, 4) code of the form G = [I | P], so that

c = u [I | P] = [u | uP]

Explicitly,

c = uG = [u1, u2, u3, u4] G
  = [u1, u2, u3, u4, u1 + u2 + u3, u1 + u2 + u4, u1 + u3 + u4, u2 + u3 + u4]

From the above, we see that the number of nonzero rows in the reduced row echelon form is equal to 4.
Note that the row space of G and that of its reduced form are the same, so the code generated
by the two matrices is the same.
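Encoding by c = uG (all arithmetic mod 2) can be sketched directly with the reduced generator matrix of this example; the input u = (1, 0, 1, 0) is an assumed test vector, not from the notes.

```python
import numpy as np

# Reduced generator matrix of the (8,4) code from the example above.
G = np.array([[1, 0, 0, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 0, 1],
              [0, 0, 1, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1, 1, 1]])

def encode(u, G):
    """Codeword c = uG, with arithmetic over GF(2)."""
    return np.mod(np.array(u) @ G, 2)

c = encode([1, 0, 1, 0], G)
print(c)  # [1 0 1 0 0 1 0 1]
```

The systematic form is visible in the output: the first four bits reproduce u, the last four are parity bits.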

Systematic Code
In the example above, the reduced row echelon form takes the form

G = [I_k | P]

It is not always possible to do so. For example, consider the generator matrix

G =
1 1 1 1 1 1 1 1
0 1 0 1 0 1 0 1
0 0 1 1 0 0 1 1
0 0 0 0 1 1 1 1

We can show that the reduced row echelon form is given by

G =
1 0 0 1 0 1 1 0
0 1 0 1 0 1 0 1
0 0 1 1 0 0 1 1
0 0 0 0 1 1 1 1

which is not in the [I_k | P] form.
If G has the structure

G = [I_k | P]   (5)

such that P is of size k × (n − k) and I_k is the k × k identity matrix, the resulting linear block
code is systematic.

c_m = [u1, u2, . . . , uk] G = [u1, u2, . . . , uk] [I_k | P] = [ u1, u2, . . . , uk | [u1, u2, . . . , uk] P ]   (6)

where the first part of the final result above, [u1, u2, . . . , uk], is the information sequence (k bits), and the
second part, [u1, u2, . . . , uk] P, is the parity check bits (n − k bits), which provide redundancy against
errors.
Any linear block code has a systematic equivalent, i.e.,

G = [I_k | P]   (7)

obtained by elementary row and column operations.


The codewords form a k-dimensional subspace of the n-dimensional space.
⇒ Its orthogonal complement is of dimension (n − k) in the n-dimensional space. It is also called the dual
space, because a basis of it forms the generator matrix of the dual code C⊥, the code orthogonal to
C. In other words, ⟨c, c′⟩ = 0 for all c ∈ C and c′ ∈ C⊥.
Let H be the corresponding matrix of dimension (n − k) × n; then for any codeword c, cH^T = 0.
The rows of G are codewords ⇒ GH^T = 0.
For systematic codes

G = [I_k | P]   (8)

then

H = [P^T | I_{n−k}]   (9)

where P^T is of size (n − k) × k.
Check that with the following example:

G = [I_4 | P] =
1 0 0 0 1 0 1
0 1 0 0 1 1 1
0 0 1 0 1 1 0
0 0 0 1 0 1 1

Let u = (u1, u2, u3, u4) be the information sequence. Then

c1 = u1
c2 = u2
c3 = u3
c4 = u4
c5 = u1 + u2 + u3
c6 = u2 + u3 + u4
c7 = u1 + u2 + u4
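The construction H = [P^T | I_{n−k}] and the identity GH^T = 0 can be verified numerically for this example; the sketch below (plain numpy, not part of the notes) builds H from G and checks the orthogonality.

```python
import numpy as np

# Systematic generator matrix G = [I4 | P] from the example above.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
k, n = G.shape
P = G[:, k:]                   # the k x (n-k) parity part

# Parity check matrix of a systematic code: H = [P^T | I_{n-k}].
H = np.hstack([P.T, np.eye(n - k, dtype=int)])

# Every row of G is a codeword, so G @ H^T must vanish over GF(2).
print(np.mod(G @ H.T, 2))      # all-zero 4x3 matrix
```

Over GF(2), GH^T = [I | P][P ; I] = P + P = 0, which is exactly what the check confirms.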

Weight and Distance for Linear Block Codes

The weight of a codeword, w(c), is the number of nonzero components of the codeword;
w(0) = 0, and 0 is a codeword.
The Hamming distance between c1 and c2 is the number of components at which c1 and c2 differ:

d(c1, c2) = w(c1 − c2)
w(c) = d(c, 0)
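These definitions translate directly into code. The following small sketch (plain Python; the two vectors are assumed examples) also checks the identity d(c1, c2) = w(c1 − c2), where subtraction over GF(2) is XOR.

```python
def weight(c):
    """Hamming weight: number of nonzero components."""
    return sum(1 for bit in c if bit != 0)

def distance(c1, c2):
    """Hamming distance: number of positions where c1 and c2 differ."""
    return sum(1 for a, b in zip(c1, c2) if a != b)

c1 = [1, 0, 1, 1, 0]
c2 = [0, 0, 1, 0, 1]
print(distance(c1, c2))                          # 3
# d(c1, c2) = w(c1 - c2); over GF(2), subtraction is XOR:
print(weight([a ^ b for a, b in zip(c1, c2)]))   # 3
```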

Let v, w and x be three binary n-tuples. Then

d(v, w) + d(w, x) ≥ d(v, x)   (triangle inequality)

Proof:

d(v, w) = w(v + w)
d(w, x) = w(w + x)
d(v, x) = w(v + x)

We know that w(a) + w(b) ≥ w(a + b). Letting a = v + w and b = w + x, we get

w(v + w) + w(w + x) ≥ w(v + w + w + x) = w(v + x)
⇒ d(v, w) + d(w, x) ≥ d(v, x)

dmin = wmin

Proof:

dmin = min{d(v, w) : v, w ∈ C, v ≠ w}
     = min{w(v + w) : v, w ∈ C, v ≠ w}
     = min{w(x) : x ∈ C, x ≠ 0}
     = wmin
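Because dmin = wmin for a linear code, the minimum distance can be found by scanning the weights of the nonzero codewords only, instead of all pairwise distances. A brute-force sketch (assumed example: the systematic (7, 4) generator matrix from the earlier parity-check section, whose code has dmin = 3):

```python
import itertools
import numpy as np

# Systematic (7,4) generator matrix from the earlier example.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
k = G.shape[0]

# dmin = wmin: minimum weight over all 2^k - 1 nonzero codewords.
dmin = min(int(np.mod(np.array(u) @ G, 2).sum())
           for u in itertools.product([0, 1], repeat=k) if any(u))
print(dmin)  # 3
```

This enumeration costs 2^k encodings, which is feasible only for small k; it is meant as a check, not a practical algorithm.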

Theorem:

Let C be an (n, k) linear code with parity check matrix H. For each codeword of Hamming weight l,
there exist l columns of H such that the vector sum of these l columns is equal to the zero vector.

Proof:

• H = [h0, h1, ..., h_{n−1}], where h_i represents the ith column of H.

• Let v_{i1}, v_{i2}, ..., v_{il} be the l nonzero components of the codeword v, where
0 ≤ i1 < i2 < ... < il ≤ n − 1; then v_{i1} = v_{i2} = ... = v_{il} = 1.
Since v is a codeword we must have

0 = v H^T = v0 h0 + v1 h1 + ... + v_{n−1} h_{n−1}
  = v_{i1} h_{i1} + v_{i2} h_{i2} + ... + v_{il} h_{il}
  = h_{i1} + h_{i2} + ... + h_{il}

• Conversely: if there exist l columns of H whose vector sum is the zero vector, there exists a
codeword of Hamming weight l in C.

• If no d − 1 or fewer columns of H add to 0, the code has minimum weight at least d.

• The minimum weight of C, dmin, is equal to the smallest number of columns of H that add to 0.

Error Detection and Correction Capability of Block Codes
dmin is the minimum separation between a pair of codewords.
dmin errors can transform one of the 2^k codewords into another.
When this happens, we have an undetected error.
If the number of errors is less than dmin, it is not possible for the errors to transform one codeword
into another.
Another way of seeing this: any dmin − 1 columns of H are linearly independent, so any error
pattern of weight ≤ dmin − 1 cannot result in eH^T = 0.
Once an error is detected, we can ask for retransmission.

Error Correction Capability

Theorem:

A linear code is capable of correcting λ or fewer errors and simultaneously detecting l (l > λ) or
fewer errors if dmin ≥ λ + l + 1.

Proof:

Since dmin ≥ λ + l + 1 > 2λ + 1, we have λ ≤ ⌊(dmin − 1)/2⌋, so all error patterns of λ or fewer errors can be used as coset
leaders in a standard array. Hence, they are correctable.
In order to show that any error pattern of l or fewer errors is detectable, we need to show that no
error pattern x of l or fewer errors can be in the same coset as an error pattern y of λ or fewer errors.
Suppose x and y are in the same coset ⇒ x + y is a nonzero codeword. Then

w(x + y) ≤ w(x) + w(y) ≤ l + λ < dmin   (10)

This is impossible since wmin of the code is dmin. Hence x and y are in different cosets. As a
result, when x occurs it will not be mistaken for y. Therefore, x is detectable.

Theorem:

A block code C with minimum distance dmin is capable of correcting all error patterns of weight
t or less, where t is an integer such that

2t + 1 ≤ dmin ≤ 2t + 2   (11)

Proof:

Let v be the transmitted codeword and r the received sequence. Let w ≠ v be any other codeword. Then

d(v, w) ≤ d(v, r) + d(r, w)   (triangle inequality)   (12)

If the error pattern has weight t′, then d(v, r) = t′.
Since v and w are codewords,

d(v, w) ≥ dmin ≥ 2t + 1   (13)

Therefore, d(r, w) ≥ d(v, w) − d(v, r) ≥ 2t + 1 − t′.
If t′ ≤ t, then

d(r, w) ≥ t + 1 > t′ = d(v, r)   (14)

so r is closer to v than to any other codeword, and minimum-distance decoding returns v.
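The theorem gives the random error correcting capability t = ⌊(dmin − 1)/2⌋, i.e., the largest t with 2t + 1 ≤ dmin. A one-line sketch (the sample dmin values are assumed illustrations):

```python
def correcting_capability(dmin):
    """Largest t with 2t + 1 <= dmin, i.e. t = floor((dmin - 1) / 2)."""
    return (dmin - 1) // 2

# dmin = 3 or 4 corrects 1 error; dmin = 5 corrects 2; dmin = 7 (Golay) corrects 3.
print([correcting_capability(d) for d in (3, 4, 5, 7)])  # [1, 1, 2, 3]
```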

Theorem:

For all l ≥ t + 1, there is at least one error pattern of weight l that may not be correctly decoded
by an ML decoder, where t is the random error correcting capability of the code.

Theorem:

The minimum distance dmin of a linear block code C must satisfy dmin ≥ 2v + e + 1 for C to
simultaneously correct v errors and e erasures.

Proof:

Deleting the e erased components results in a shortened code of length n − e with minimum distance at least

dmin − e ≥ 2v + 1

Hence v errors in the unerased positions can be corrected. As a result, the codeword of the shortened code with
the e components erased can be recovered.
Finally, since dmin ≥ e + 1, there is one and only one codeword in the original code that agrees
with the unerased components.
Hence, the entire codeword can be recovered.

Examples of Linear Block Codes

1. The Binary Repetition Code

• The most obvious way to add redundancy: repeat each information bit n times.
• Length n, dimension k = 1, dmin = n.
• It can be generated using the generator matrix:

G = [1 1 · · · 1]

• The parity check matrix of the repetition code may be given by H = [I_{n−1} | 1]:

H =
1 0 0 · · · 0 1
0 1 0 · · · 0 1
. . .       . .
0 0 · · · 1 1

• Problem: rate Rc = 1/n: the energy expended on redundancy compensates the coding gain.
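Since the repetition code has dmin = n, majority-vote decoding corrects up to ⌊(n − 1)/2⌋ errors. A minimal sketch (the n = 5 example and the flipped positions are assumed illustrations):

```python
def rep_encode(bit, n):
    """(n, 1) repetition code: repeat the single information bit n times."""
    return [bit] * n

def rep_decode(r):
    """Majority-vote decoding; corrects up to floor((n-1)/2) errors."""
    return 1 if sum(r) > len(r) // 2 else 0

r = rep_encode(1, 5)
r[0] ^= 1
r[3] ^= 1                 # two channel errors
print(rep_decode(r))      # 1: the (5,1) code with dmin = 5 corrects 2 errors
```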

Example: Let H be the parity check matrix of an (n, k) linear code C that has both odd- and
even-weight codewords. Construct a new linear code C1 with the following parity check matrix,
obtained from H by appending an all-zero column on the left and an all-ones row of length n + 1 at
the bottom:

H1 =
0
0        H
...
0
1  1 1 · · · 1   (15)

1. Show that C1 is an (n + 1, k) linear code.

2. Show that every codeword of C1 has even weight.

3. Show that C1 can be obtained from C by adding an extra parity check digit, denoted by
v∞, to the left of each codeword v as follows:
(a) if v has odd weight, then v∞ = 1, and
(b) if v has even weight, then v∞ = 0.

Solution:

* The matrix H1 is an (n − k + 1) × (n + 1) matrix.

* First we note that the n − k rows of H are linearly independent. It is clear that the first
n − k rows of H1 are also linearly independent.

* The last row of H1 has a ”1” at its first position, but the other rows of H1 have a ”0” at
their first position, so any linear combination including the last row of H1 can never yield
the zero vector.
* Thus all the rows of H1 are linearly independent. Hence the row space of H1 has
dimension n − k + 1.
* The dimension of its null space, C1, is then equal to

dim(C1) = (n + 1) − (n − k + 1) = k

* Hence C1 is an (n + 1, k) linear code.

2. Show that every codeword of C1 has even weight.

Solution:

* The last row of H1 is the all-one vector. Hence for any odd-weight vector v,

vH1^T ≠ 0

and v cannot be a codeword in C1.

* Therefore, C1 consists only of even-weight codewords.

3. Show that C1 can be obtained from C by adding an extra parity check digit, denoted by v∞,
to the left of each codeword v as follows:

(a) if v has odd weight, then v∞ = 1, and

(b) if v has even weight, then v∞ = 0.

Solution: Let v be a codeword in C. Then vH^T = 0. Extend v by adding a digit v∞ to its
left.

* This results in a vector of n + 1 digits:

v1 = (v∞, v) = (v∞, v0, v1, ..., v_{n−1})

* For v1 to be a vector in C1, we must require that

v1 H1^T = 0

* Note that the inner product of v1 with any of the first n − k rows of H1 is 0.
* The inner product of v1 with the last row of H1 is

v∞ + v0 + v1 + ... + v_{n−1}

* For this sum to be zero, we must require that v∞ = 1 if the vector v has odd weight and
v∞ = 0 if the vector v has even weight.
* Therefore, any vector v1 formed as above is a codeword in C1; there are 2^k such codewords.
* Since the dimension of C1 is k, these 2^k codewords are all the codewords of C1.

2. Single Parity Check (SPC) Codes

• Add a single even parity bit to the information bits so that the total number of ones in
the codeword is even. (Why always use even parity in the case of linear block codes?)
• Length n, dimension n − 1, dmin = 2.
• It can be generated using the generator matrix:

G =
1 0 0 · · · 0 1
0 1 0 · · · 0 1
. . .       . .
0 0 · · · 1 1

• The parity check matrix of the SPC code may be given by:

H = [1 1 · · · 1]

• It is clear from G and H that the SPC code is the dual code of the repetition code.
• Problem: distance 2 does not allow for correcting errors.
• Based on this simple idea, a very important class of codes has been created:
the so-called Low Density Parity Check (LDPC) codes, which are considered
capacity-approaching codes. They were designed by Gallager in his doctoral
dissertation at the Massachusetts Institute of Technology (MIT) in 1960.
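A sketch of SPC encoding and error detection (the 4-bit input is an assumed example); with dmin = 2, a single error is detected but cannot be located, illustrating the bullet above:

```python
def spc_encode(u):
    """Append one even-parity bit so the total number of ones is even."""
    return u + [sum(u) % 2]

def spc_check(r):
    """Even overall parity <=> no (odd number of) errors detected."""
    return sum(r) % 2 == 0

c = spc_encode([1, 0, 1, 1])
print(c, spc_check(c))    # [1, 0, 1, 1, 1] True
c[2] ^= 1                 # a single channel error
print(spc_check(c))       # False: detected, but not correctable (dmin = 2)
```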

Bounds for the Parameters of Linear Block Codes

The minimum distance dmin is a very important parameter in characterizing the performance of a
given code. Hence, bounds on the minimum distance will help us find bounds on the probability of
error and accordingly assess the capabilities of codes.
We are interested in finding the minimum number of parity bits (n − k) required for a t-error-correcting binary
code of length n.
In the following sections we study some bounds on the minimum distance of linear block codes.

Hamming Bound for Linear Block Codes
The Hamming bound is a limit on the parameters of an arbitrary block code, also called the sphere-packing
bound or the volume bound, from an interpretation in terms of packing balls into the space
of all possible codewords. It provides an important relation limiting how efficiently any
error-correcting code can exploit the space in which its codewords are embedded. A code that
attains the Hamming bound is said to be a perfect code.

Hamming Bound:

For any binary (n, k) linear code with minimum distance 2t + 1 or greater, the number of parity
check bits satisfies the following inequality:

n − k ≥ log2( 1 + C(n, 1) + C(n, 2) + . . . + C(n, t) )

Proof:

The number of vectors (n-tuples) of weight t or less (which we can use as coset leaders) is

1 + C(n, 1) + C(n, 2) + . . . + C(n, t)

⇒ 2^{n−k} ≥ 1 + C(n, 1) + C(n, 2) + . . . + C(n, t)

Taking log2 on both sides,

n − k ≥ log2( Σ_{j=0}^{t} C(n, j) )

and we end up with

Σ_{j=0}^{t} C(n, j) ≤ 2^{n−k}   (16)
The Hamming code CH(n, k, dmin = 3) satisfies the Hamming bound with equality and hence
is a perfect code, as is the repetition code with odd length n. The only
nontrivial perfect code, in the sense of correcting more than one bit error, is the binary Golay code with parameters
CG(n = 23, k = 12, dmin = 7). There exists only one nonbinary perfect code, the ternary
Golay code CG(n = 11, k = 6, dmin = 5).
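The sphere-packing count in (16) is easy to evaluate; the sketch below checks the equality (i.e., perfectness) for the (7, 4) Hamming code and the (23, 12) binary Golay code mentioned above.

```python
from math import comb

def hamming_bound_sides(n, k, t):
    """Return (sum_{j=0}^{t} C(n, j), 2^(n-k)); equality means a perfect code."""
    return sum(comb(n, j) for j in range(t + 1)), 2 ** (n - k)

print(hamming_bound_sides(7, 4, 1))    # (8, 8): the Hamming code is perfect
print(hamming_bound_sides(23, 12, 3))  # (2048, 2048): the binary Golay code is perfect
```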

Singleton Bound for Linear Block Codes
The Singleton bound provides an upper bound on the minimum distance dmin of an arbitrary block code
C with block length n and size M = 2^k. The Singleton bound is formulated as follows:

dmin ≤ n − k + 1   (17)

Proof:

Any nonzero codeword with only one nonzero information bit can have weight at most n − k + 1
(one information position plus at most n − k parity positions). Since the minimum distance of a
linear code is equal to the minimum weight of a nonzero codeword,

⇒ dmin ≤ n − k + 1

Linear block codes that achieve equality in the Singleton bound are called Maximum Distance
Separable (MDS) codes. Examples of such codes include repetition codes (dmin = n), single parity
check codes (dmin = 2), and their duals; however, these codes are trivial. Examples of nontrivial
MDS codes include Reed-Solomon (RS) codes and their extended versions.

Theorem:

Consider an (n, k) linear code C whose G contains no zero column. If we arrange all the
codewords of C as rows of a 2^k by n array, then each column of the array consists of 2^{k−1} zeros and
2^{k−1} ones.

Proof:

We will show that the number of codewords with a 1 at the lth position is the same as the number of
codewords with a 0 at the lth position.
Since G has no zero column, each column of the codeword array contains at least one nonzero entry. Let S0 be the set of codewords with
0 at the lth position, S1 the set of codewords with 1 at the lth position, and x a codeword from S1. Adding
x to each vector in S0 gives a set of distinct codewords with 1 at the lth position:

⇒ |S0| ≤ |S1|   (18)

Adding x to each vector in S1 gives a set of distinct codewords with 0 at the lth position:

⇒ |S1| ≤ |S0|   (19)

From (18) and (19), we get |S1| = |S0| ⇒ the lth column contains 2^{k−1} zeros and 2^{k−1} ones.

Theorem:

The dmin of the previous code C satisfies the following inequality:

dmin ≤ n 2^{k−1} / (2^k − 1)

Proof:

The total number of 1’s in the array is n 2^{k−1}. Each nonzero codeword has weight at least dmin.
Hence,

(2^k − 1) dmin ≤ n 2^{k−1}

dmin ≤ n 2^{k−1} / (2^k − 1)   ⇒ Plotkin bound   (20)

Varshamov-Gilbert Bound for Block Codes

The Gilbert-Varshamov bound is a limit on the parameters of a block code (not necessarily linear).
It is also known as the Gilbert-Shannon-Varshamov (GSV) bound. All the previously mentioned
bounds are necessary conditions that must be satisfied by the three main parameters (n, k, dmin)
of a block code. The Varshamov-Gilbert bound provides a sufficient condition for the existence of
an (n, k) block code with minimum distance dmin; moreover, it goes further and proves the existence of a
linear block code with the given parameters. The Varshamov-Gilbert bound states that if the inequality

Σ_{i=0}^{dmin−2} C(n−1, i) ≤ 2^{n−k}   (21)

is satisfied, then there exists a binary (n, k) code with minimum distance at least dmin. This is a positive
result, giving us some optimism that good codes do exist, but it does not give an efficient
procedure to find them. It is similar to Shannon’s theorem in this sense, and also similar
to Shannon’s theorem in the sense that the code construction embedded in the proof
is not used in practical applications. Here is a proof of the bound for linear block codes.

Proof: First of all, keep in mind that a linear block code has minimum distance dmin if
and only if every set of dmin − 1 columns of its parity check matrix H is linearly independent. Consider
the process of choosing the n columns of H so that every dmin − 1 of them are linearly independent. H is
an (n − k) × n matrix of rank n − k. Suppose that in the construction procedure of H
we have successfully chosen n − 1 columns with no dmin − 1 of them linearly dependent. For
choosing the nth column, we choose among binary columns of length n − k, which have a cardinality
of 2^{n−k}. Moreover, we have to exclude the all-zero column, exclude any previously selected column,
exclude every sum of any two previously selected columns, and so on, up to excluding every sum of
dmin − 2 previously selected columns. In the worst case, all of the columns that we have to exclude
are different, leaving us with

2^{n−k} − ( 1 + C(n−1, 1) + C(n−1, 2) + · · · + C(n−1, dmin − 2) )   (22)

available columns. Hence, to make sure that a choice is available, this quantity has to be positive, which gives the
Varshamov-Gilbert bound at d = dmin.
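The sufficient condition (21) is a one-line computation; the sketch below checks it for (n, k, d) = (7, 4, 3), where the Hamming code confirms existence (an assumed illustrative triple).

```python
from math import comb

def gv_code_exists(n, k, d):
    """Varshamov-Gilbert sufficient condition: if sum_{i=0}^{d-2} C(n-1, i) <= 2^(n-k),
    an (n, k) linear code with minimum distance >= d exists."""
    return sum(comb(n - 1, i) for i in range(d - 1)) <= 2 ** (n - k)

print(gv_code_exists(7, 4, 3))   # True: the (7,4) Hamming code indeed has dmin = 3
```

Note the asymmetry with the necessary bounds above: the GV condition guarantees existence but, like the theorem itself, says nothing about how to construct the code efficiently.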

Hard Decision Decoding of Linear Block Codes

We could pursue soft decoding of linear block codes; however, we will not do that here. Instead, we
assume that the analog samples are quantized.
Decoding is then done on the detected bits. This results in some loss in performance but reduces the
computational complexity.

Minimum Distance Decoding

The decoder compares the received sequence y with the M possible transmitted codewords using

d(y, ci)   (23)

and decides in favor of the codeword ci closest to y.

Minimum-distance decoding results in the minimum probability of codeword error for the binary
symmetric channel (BSC).
Another way to do it: write

y = ci* + e   (24)

and add y to all codewords cj:

y + cj = ci* + cj + e   (25)

The codeword cj that results in the minimum weight of y + cj is the most probable codeword
that must have been transmitted.
So compute the M error patterns

em = y + cm   (26)

and choose the cm for which em has minimum weight.
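This brute-force rule can be sketched directly; the code below uses a small (5, 2) code (the same generator matrix as the standard-array example later in these notes) and an assumed received sequence with one channel error.

```python
import itertools
import numpy as np

# Generator matrix of a (5,2) code (also used in the standard-array example).
G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1]])

# Enumerate all M = 2^k codewords once.
codewords = [np.mod(np.array(u) @ G, 2)
             for u in itertools.product([0, 1], repeat=G.shape[0])]

def md_decode(y):
    """Minimum-distance decoding: choose the codeword c minimizing w(y + c) = d(y, c)."""
    return min(codewords, key=lambda c: int(np.mod(y + c, 2).sum()))

y = np.array([1, 1, 1, 0, 0])   # codeword 11110 hit by a single error in position 4
print(md_decode(y))             # [1 1 1 1 0]
```

The cost is M = 2^k distance computations per received word, which motivates the syndrome-based shortcut of the next section.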

Syndrome and Standard Array Decoding

(Makes use of hard decision decoding.)
Let cm be the transmitted codeword and y be the received sequence:

y = cm + e   (27)

where e is the binary error vector.

Let us calculate yH^T:

yH^T = cm H^T + eH^T
s = eH^T   (28)

where s is the syndrome vector (of dimension n − k) of the error pattern; s is characteristic
of the error pattern (hence the name syndrome).
If s = 0 ⇒ the error pattern equals a codeword ⇒ undetected error.
An error remains undetected if it is equal to one of the codewords;
there are 2^k − 1 such patterns.
There are a total of 2^n − 1 nonzero error patterns, of which 2^k − 1 are undetected because they correspond
to actual codewords.
The remaining 2^n − 2^k nonzero patterns can be detected but not all corrected, because there are only 2^{n−k}
syndromes: different error patterns result in the same syndrome.
For ML decoding, we are looking for the error pattern of least weight among all patterns with the same syndrome.
Construct a decoding table:

c1 = 0        c2               c3               ...   c_{2^k}
e2            c2 + e2          c3 + e2          ...   c_{2^k} + e2
...           ...              ...              ...   ...               (29)
e_{2^{n−k}}   c2 + e_{2^{n−k}} c3 + e_{2^{n−k}} ...   c_{2^k} + e_{2^{n−k}}

The table is of size 2^{n−k} × 2^k and is called the standard array.

Each row consists of the 2^k received sequences that would result from one error pattern.
Each row is a coset: the set of all possible received sequences resulting from one error pattern.
The leftmost member of each row is the coset leader.
By construction, the coset leader has the lowest weight among all coset members.

Theorem:

For an (n, k) linear code C with minimum distance dmin, all n-tuples of weight t = ⌊(dmin − 1)/2⌋ or less
can be used as coset leaders of a standard array of C.

Proof:

• Recall that dmin = wmin.

• Let x and y be n-tuples of weight t or less.

• w(x + y) ≤ w(x) + w(y) ≤ 2t < dmin

• Suppose x and y are in the same coset; then x + y must be a nonzero codeword in C.

• This is impossible, since the weight of x + y is less than dmin.

Theorem:

For an (n, k) linear code C with minimum distance dmin, if all n-tuples of weight t = ⌊(dmin − 1)/2⌋
or less are used as coset leaders of a standard array of C, then there is at least one n-tuple of
weight t + 1 that cannot be used as a coset leader.

Proof:

Let v be a minimum-weight codeword of C. Let x and y be two n-tuples that satisfy the following
conditions:

a) x + y = v,

b) x and y have no nonzero components in common places.

• By construction, x and y must be in the same coset and w(x) + w(y) = w(v) = dmin.

• If w(y) = t + 1 ⇒ w(x) = t or t + 1 (because 2t + 1 ≤ dmin ≤ 2t + 2).

• Therefore, if x is chosen as a coset leader, y cannot be a coset leader.
Suppose ei is a coset leader and cm the transmitted codeword ⇒ received sequence y = cm + ei ⇒ syndrome

s = yH^T
  = cm H^T + ei H^T
  = ei H^T

All received sequences in the same coset result in the same syndrome, and each coset has its own
syndrome, so there is a one-to-one correspondence between cosets and syndromes.
To decode: compute the syndrome, find the coset member with the lowest weight (the coset leader),
and add it to the received y.
Coset leaders are the only error patterns that are correctable.
There are 2^n − 1 nonzero error patterns: 2^k − 1 (the codewords) are not detectable, and 2^n − 2^k are
detectable, of which 2^{n−k} − 1 are correctable.
Example: Construct the standard array for the (5, 2) systematic code with the generator matrix

G =
1 0 1 0 1
0 1 0 1 1

There are M = 2^k = 2^2 = 4 codewords:

00000 01011 10101 11110
00001 01010 10100 11111
00010 01001 10111 11100
00100 01111 10001 11010
01000 00011 11101 10110
10000 11011 00101 01110
11000 10011 01101 00110
10010 11001 00111 01100

The coset leaders are the all-zero pattern, the five error patterns of weight 1, and
two error patterns of weight 2.

The parity check matrix is

H =
1 0 1 0 0
0 1 0 1 0
1 1 0 0 1

of size (n − k) × n = 3 × 5.

Syndrome   Most Likely Error Pattern

000        00000
001        00001
010        00010
100        00100
011        01000
101        10000
110        11000
111        10010
