Hamming Code Exercise
Codeword    Weight
0000000     0
0001110     3
0010101     3
0011011     4
0100011     3
0101101     4
0110110     4
0111000     3
1000111     4
1001001     3
1010010     3
1011100     4
1100100     3
1101010     4
1110001     4
1111111     7
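As a quick sanity check (a small Python sketch, not part of the original exercise), the weight column can be recomputed directly from the codeword list:

```python
# Recompute the Hamming weight (number of 1s) of each listed codeword.
codewords = [
    "0000000", "0001110", "0010101", "0011011",
    "0100011", "0101101", "0110110", "0111000",
    "1000111", "1001001", "1010010", "1011100",
    "1100100", "1101010", "1110001", "1111111",
]
weights = [w.count("1") for w in codewords]
print(weights)
# For a linear code, the minimum nonzero weight equals the minimum distance.
print(min(w for w in weights if w > 0))   # 3
```

The minimum distance 3 is what allows the code to correct any single bit error.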
Consider the following setup in which, according to the switch, the four input bits (u1, u2, u3, u4) can be sent over the channel raw, or first coded and then decoded, to obtain the output (û1, û2, û3, û4):

[Figure: block diagram. The input (u1, u2, u3, u4) either goes straight into the binary symmetric channel, or first through the Encoder to give (x1, …, x7), then through the channel and the Decoder to give the output (û1, û2, û3, û4). The channel maps each bit to itself with probability 1 − q and flips it with probability q.]
The channel is a memoryless binary symmetric channel, with crossover probability q < 1/2:

    Pr(output = 0 | input = 1) = Pr(output = 1 | input = 0) = q          (channel error)
    Pr(output = 0 | input = 0) = Pr(output = 1 | input = 1) = 1 − q      (correct output)

1. Find the probability that ûi = ui for all i, when the four bits are sent over the channel raw.
2. Repeat the above when the four bits are first coded, and then decoded at the channel
output.
3. Find the generator matrix G and the parity-check matrix H associated with this code.
4. From the parity-check matrix H, construct a syndrome decoding table.
Solution:
1. For the first part, the channel is memoryless, so that

    Pr(û1 = u1, û2 = u2, û3 = u3, û4 = u4) = Pr(û1 = u1) Pr(û2 = u2) Pr(û3 = u3) Pr(û4 = u4)
                                           = (1 − q)^4

is the probability of receiving all four bits correctly.
2. For the second part, the Hamming code is a perfect code, meaning that balls of Hamming radius t centered at the codewords form a partition of the space of all n-bit words; here t = 1 and n = 7, so the code can tolerate at most t = 1 error over the n = 7 bits transmitted. Let x = (x1, …, x7) denote the transmitted codeword, and x̂ = (x̂1, …, x̂7) the received seven bits at the channel output. If

    d(x, x̂) ≤ 1        [d(·, ·) = Hamming distance]

meaning that the number of bit errors is zero or one, then the closest codeword to x̂ is the transmitted word x, so that x̂ decodes to û = u, as desired. If instead two or more bit errors occur over the channel, then x̂ is closer to a different codeword than x, and so x̂ decodes to a û which differs from u, giving a word error.
The probability that û = u is thus

    Pr(û = u) = Pr(x̂1 = x1, x̂2 = x2, …, x̂7 = x7)
              + Pr(x̂1 ≠ x1, x̂2 = x2, …, x̂7 = x7)
              + Pr(x̂1 = x1, x̂2 ≠ x2, …, x̂7 = x7)
              + ⋯
              + Pr(x̂1 = x1, x̂2 = x2, …, x̂7 ≠ x7)
              = (1 − q)^7 + 7q(1 − q)^6
A comparison of the uncoded and coded correct-word probabilities, versus the raw channel bit error probability q, appears in the following graph:

[Graph: probability of correct word reception versus q, for 0 ≤ q ≤ 0.5. The "Coded" curve, (1 − q)^7 + 7q(1 − q)^6, lies above the "Uncoded" curve, (1 − q)^4, with the two meeting at q = 0 and q = 0.5.]
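The two curves can be reproduced numerically (a Python sketch; the function names are mine, not from the exercise):

```python
# Probability of receiving the 4-bit word correctly: uncoded vs coded.
def p_uncoded(q):
    """All four raw bits survive the BSC: (1 - q)^4."""
    return (1 - q) ** 4

def p_coded(q):
    """Zero or one error over the 7 coded bits: (1 - q)^7 + 7 q (1 - q)^6."""
    return (1 - q) ** 7 + 7 * q * (1 - q) ** 6

for q in (0.01, 0.1, 0.3, 0.5):
    print(f"q={q}: uncoded={p_uncoded(q):.4f}  coded={p_coded(q):.4f}")
```

For every q strictly between 0 and 1/2 the coded probability is larger, with equality at the endpoints.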
3. For the third part, the encoder maps the input u = (u1, u2, u3, u4) to the codeword x = (x1, …, x7) via

    (x1, x2, x3, x4, x5, x6, x7)ᵀ = G (u1, u2, u3, u4)ᵀ
From the coding table, we see that when u = (1, 0, 0, 0), we have x = (1, 0, 0, 0, 1, 1, 1), or

    (1, 0, 0, 0, 1, 1, 1)ᵀ = G (1, 0, 0, 0)ᵀ
This identifies the first column of G. The second, third, and fourth columns can likewise be identified as

    (0, 1, 0, 0, 0, 1, 1)ᵀ = G (0, 1, 0, 0)ᵀ ,
    (0, 0, 1, 0, 1, 0, 1)ᵀ = G (0, 0, 1, 0)ᵀ ,
    (0, 0, 0, 1, 1, 1, 0)ᵀ = G (0, 0, 0, 1)ᵀ ,

so that

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 0]
    G = [0 0 0 1]
        [1 0 1 1]
        [1 1 0 1]
        [1 1 1 0]
We observe that the first four rows give the identity matrix, because the code is systematic: the first four output bits contain the input bits. We can thus write

        [ I4 ]
    G = [    ]
        [ P  ]

where

        [1 0 1 1]
    P = [1 1 0 1]
        [1 1 1 0]

The parity-check matrix associated with this systematic form is

                    [1 0 1 1 1 0 0]
    H = [P  I3]  =  [1 1 0 1 0 1 0]
                    [1 1 1 0 0 0 1]
4. For the last part, note that

    HG = [P  I3] [ I4 ]  =  P + P  =  0      (mod-2 arithmetic),
                 [ P  ]

so that Hx = HGu = 0 whenever x is a codeword.
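The identity HG = 0, and the encoding of u = (1, 0, 0, 0), can both be checked with a short mod-2 computation (a Python sketch using the G and H found above; the helper `matmul2` is mine):

```python
# Mod-2 sanity checks on the generator and parity-check matrices.
G = [
    [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
    [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0],
]
H = [
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1],
]

def matmul2(A, B):
    """Matrix product over GF(2) (all arithmetic mod 2)."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

# Every codeword x = G u satisfies H x = 0, because H G = P + P = 0.
print(matmul2(H, G))                        # 3 x 4 all-zero matrix

# Encoding u = (1, 0, 0, 0) reproduces the first column of G.
x = [row[0] for row in matmul2(G, [[1], [0], [0], [0]])]
print(x)                                    # [1, 0, 0, 0, 1, 1, 1]
```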
Let the received word be x̂ = x + e, where e collects the channel errors (i.e., ei = 0 if the i-th bit was received correctly, and ei = 1 if there is an error on the i-th bit). By linearity, we have

    s = Hx̂ = Hx + He = He        (since Hx = 0).

The vector s is called the syndrome, since it depends only on the channel errors e, not on the codeword sent. If s ≠ 0, one seeks the maximum likelihood estimate for the channel errors, i.e., the ê of smallest Hamming weight for which Hê = s for the given s. The receiver then forms x̂ + ê as the maximum likelihood estimate for the transmitted codeword, which is then decoded to give û. By exhausting candidate choices for ê, we obtain the following table (rewriting the syndrome and the error estimate as row vectors):
syndrome s    error estimate ê
000           0000000
001           0000001
010           0000010
011           0100000
100           0000100
101           0010000
110           0001000
111           1000000