
EE 572

Exercises: Error correction codes


The Hamming (7, 4) code (which takes k = 4 bits in, and furnishes n = 7 bits out) is listed
in the following table:
Input   Codeword   Weight
0000    0000000    0
0001    0001110    3
0010    0010101    3
0011    0011011    4
0100    0100011    3
0101    0101101    4
0110    0110110    4
0111    0111000    3
1000    1000111    4
1001    1001001    3
1010    1010010    3
1011    1011100    4
1100    1100100    3
1101    1101010    4
1110    1110001    4
1111    1111111    7
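As a quick sanity check, the code table above can be reproduced programmatically. The short sketch below (Python; the variable names are ours, not from the exercise) recomputes the Weight column and verifies that the minimum Hamming distance of the code is 3, which is what permits single-error correction.

```python
from itertools import combinations

# The 16 codewords of the Hamming (7, 4) code, copied from the table.
codewords = [
    "0000000", "0001110", "0010101", "0011011",
    "0100011", "0101101", "0110110", "0111000",
    "1000111", "1001001", "1010010", "1011100",
    "1100100", "1101010", "1110001", "1111111",
]

# Hamming weight = number of 1 bits; this reproduces the Weight column.
weights = [w.count("1") for w in codewords]
print(weights)

# Minimum distance: smallest number of differing positions over all
# pairs of distinct codewords (equals the minimum nonzero weight,
# since the code is linear).
d_min = min(sum(a != b for a, b in zip(c1, c2))
            for c1, c2 in combinations(codewords, 2))
print(d_min)  # 3, so t = (d_min - 1) // 2 = 1 error is correctable
```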

Consider the following setup in which, according to the switch position, the four input bits (u1, u2, u3, u4) are either sent over the channel raw, or first encoded into (x1, . . . , x7), passed through the channel, and then decoded, to obtain the output (û1, û2, û3, û4):

[Figure: block diagram with a switch routing (u1, u2, u3, u4) either directly through the binary symmetric channel, or through the Encoder, producing (x1, . . . , x7), then the channel, then the Decoder, producing (û1, û2, û3, û4). The channel transition diagram has correct-transmission probability 1 − q and crossover probability q.]

The channel is a memoryless binary symmetric channel, with crossover probability q < 1/2:

   Pr(output = 0 | input = 1) = Pr(output = 1 | input = 0) = q        (channel error)
   Pr(output = 0 | input = 0) = Pr(output = 1 | input = 1) = 1 − q    (correct output)

1. Calculate the probability of receiving

   ûi = ui,   for all i,

when the four bits are sent over the channel raw.
2. Repeat the above when the four bits are first coded, and then decoded at the channel
output.
3. Find the generator matrix G and the parity-check matrix H associated with this code.
4. From the parity-check matrix H, construct a syndrome decoding table.
Solution:
1. For the first part, the channel is memoryless, so that

   Pr(û1 = u1, û2 = u2, û3 = u3, û4 = u4) = Pr(û1 = u1) Pr(û2 = u2) Pr(û3 = u3) Pr(û4 = u4)
                                          = (1 − q)^4

is the probability of receiving all four bits correctly.
2. For the second part, the Hamming code is a perfect code, meaning that balls of Hamming radius t centered at the codewords form a partition of the space of all n-bit words; here t = 1 and n = 7, so the code can tolerate at most t = 1 error over the n = 7 bits transmitted. Let x = (x1, . . . , x7) denote the transmitted codeword, and x̂ = (x̂1, . . . , x̂7) the seven bits received at the channel output. If

   d(x, x̂) ≤ 1        [d(·, ·) = Hamming distance]

meaning that the number of bit errors is zero or one, then the closest codeword to x̂ is the transmitted word x, so that x̂ decodes to û = u, as desired. If instead two or more bit errors occur over the channel, then x̂ is closer to a different codeword than x, and so x̂ decodes to a û which differs from u, giving a word error.
The probability that û = u is thus

   Pr(û = u) = Pr(x̂1 = x1, x̂2 = x2, . . . , x̂7 = x7)      (all bits correct)
             + Pr(x̂1 ≠ x1, x̂2 = x2, . . . , x̂7 = x7)      (one error: bit x1)
             + Pr(x̂1 = x1, x̂2 ≠ x2, . . . , x̂7 = x7)      (one error: bit x2)
             + · · ·
             + Pr(x̂1 = x1, x̂2 = x2, . . . , x̂7 ≠ x7)      (one error: bit x7)
             = (1 − q)^7 + 7q(1 − q)^6

A comparison of the uncoded and coded correct word probabilities, versus the raw channel bit error probability q, appears in the following graph:

[Figure: correct word probability (0.0 to 1.0) versus raw bit error probability q (0.0 to 0.5), comparing the coded and uncoded schemes; the coded curve lies above the uncoded curve for all 0 < q < 1/2.]
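The two correct-word probabilities can also be compared numerically. The sketch below (Python; the sample values of q are chosen for illustration) evaluates both expressions and confirms that the coded scheme outperforms raw transmission at these crossover probabilities.

```python
def p_uncoded(q):
    # All four raw bits must individually arrive correctly.
    return (1 - q) ** 4

def p_coded(q):
    # Correct decoding iff zero or one of the 7 coded bits is flipped.
    return (1 - q) ** 7 + 7 * q * (1 - q) ** 6

for q in (0.01, 0.05, 0.1, 0.2):
    print(f"q = {q}: uncoded = {p_uncoded(q):.4f}, coded = {p_coded(q):.4f}")

# Coding wins at each of these sample points (and, as the graph
# suggests, for every 0 < q < 1/2).
assert all(p_coded(q) > p_uncoded(q) for q in (0.01, 0.05, 0.1, 0.2))
```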

3. The generator matrix G fulfills the equation

   [x1 x2 x3 x4 x5 x6 x7]^T = G [u1 u2 u3 u4]^T

From the coding table, we see that when u = (1, 0, 0, 0), we have x = (1, 0, 0, 0, 1, 1, 1), or

   [1 0 0 0 1 1 1]^T = G [1 0 0 0]^T

This identifies the first column of G. The second, third, and fourth columns can likewise be identified as

   [0 1 0 0 0 1 1]^T = G [0 1 0 0]^T,
   [0 0 1 0 1 0 1]^T = G [0 0 1 0]^T,
   [0 0 0 1 1 1 0]^T = G [0 0 0 1]^T.
Arranging these column vectors then gives the generator matrix as

       [ 1 0 0 0 ]
       [ 0 1 0 0 ]
       [ 0 0 1 0 ]
   G = [ 0 0 0 1 ]
       [ 1 0 1 1 ]
       [ 1 1 0 1 ]
       [ 1 1 1 0 ]

We observe that the first four rows give the identity matrix, because the code is systematic: the first four output bits contain the input bits. We can thus write

   G = [ I4 ]
       [ P  ]

where the 3 × 4 submatrix P contains the final three rows.


The parity-check matrix H satisfies HG = 0. We can then take

   H = [ P  I3 ] = [ 1 0 1 1 1 0 0 ]
                   [ 1 1 0 1 0 1 0 ]
                   [ 1 1 1 0 0 0 1 ]

since this gives

   HG = [ P  I3 ] [ I4 ] = P + P = 0
                  [ P  ]

(recalling that, in modulo-2 arithmetic, 1 + 1 = 0).
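The algebra HG = P + P = 0 can also be checked mechanically. The sketch below (Python with NumPy; P is the 3 × 4 parity submatrix read off above) builds G and H, verifies HG = 0 (mod 2), and re-encodes u = (1, 0, 0, 0) to recover the table entry 1000111.

```python
import numpy as np

# Parity submatrix P, read off from rows 5-7 of G above.
P = np.array([[1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])

G = np.vstack([np.eye(4, dtype=int), P])  # G = [I4; P]  (7 x 4)
H = np.hstack([P, np.eye(3, dtype=int)])  # H = [P I3]   (3 x 7)

# HG = P + P vanishes in modulo-2 arithmetic.
print((H @ G) % 2)  # 3 x 4 all-zero matrix

# Encoding the input u = (1, 0, 0, 0):
u = np.array([1, 0, 0, 0])
print((G @ u) % 2)  # [1 0 0 0 1 1 1], matching the code table
```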


4. A given word x is a codeword if and only if it lies in the null space of the parity-check matrix:

   Hx = 0   ⟺   x is a codeword.

Let the received word be x̂ = x + e, where e collects the channel errors (i.e., ei = 0 if the i-th bit was received correctly, and ei = 1 if there is an error on the i-th bit). By linearity, we have

   s = Hx̂ = Hx + He = He,

since Hx = 0. The vector s is called the syndrome, since it depends only on the channel errors e, not on the codeword sent. If s ≠ 0, one seeks the maximum-likelihood estimate for the channel errors, i.e., the ê of smallest Hamming weight for which Hê = s for the given s. The receiver then forms x̂ + ê as the maximum-likelihood estimate for the transmitted codeword, which is then decoded to give û. By exhausting candidate choices for ê, we obtain the following table (rewriting the syndrome and the error estimate as row vectors):
syndrome s   error estimate ê
000          0000000
001          0000001
010          0000010
011          0100000
100          0000100
101          0010000
110          0001000
111          1000000
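The syndrome table can be regenerated by running every correctable error pattern through H. The sketch below (Python with NumPy; H is the parity-check matrix from part 3) maps each weight-0 or weight-1 error e to its syndrome s = He (mod 2).

```python
import numpy as np

# Parity-check matrix H = [P I3] from part 3.
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 0, 1]])

# Build the syndrome table: the no-error pattern plus the seven
# single-bit error patterns e1, ..., e7.
table = {}
for i in range(-1, 7):                 # i = -1 stands for "no error"
    e = np.zeros(7, dtype=int)
    if i >= 0:
        e[i] = 1
    s = "".join(map(str, H @ e % 2))   # syndrome s = He (mod 2)
    table[s] = "".join(map(str, e))

for s in sorted(table):
    print(s, table[s])
```

A single-bit error in position i produces the i-th column of H as its syndrome; since all seven columns of H are distinct and nonzero, the eight syndromes are distinct and the table above is recovered exactly.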
