
Chapter 6: Channel Coding for Digital Communication
Overview
 Error control
 Redundancy
 Hamming distance & error correction
 Error correction
 Block codes, Hamming Codes

2
Channel Coding – Error Control

[Block diagram]
Transmitter: Information Source → Source encode → Channel encode → Baseband mod. → Bandpass mod.  (digital modulation)
                     ↓
                  Channel
                     ↓
Receiver:   Information Sink ← Source decode ← Channel decode ← Detect ← Demod. & Sample  (digital demodulation)
3
Error Control …
 Digital transmission systems introduce errors
 Applications require certain reliability level
 Data applications require error-free transfer
 Voice & video applications tolerate some errors
 Error control (or channel) coding ensures a data stream is
transmitted to a certain level of accuracy despite errors
 Usually built on top of frames (blocks of bits, or codewords)

4
Error Control …
 Two basic approaches:
 Error detection – are there incorrect bits?
 Error correction – repair any errors that have occurred
 Forward error correction (FEC)
 Invest effort before an error happens
 Backward error correction
 Invest effort after an error has happened; try to repair it

Error control
├── Error detection
└── Error correction
    ├── Forward error correction
    └── Backward error correction

5
Error Control – Redundancy
 Any form of error control requires redundancy in the
frames/codewords
 Without redundancy
 A frame of length k can represent 2^k different frames
 All of them are legal!
 How could a receiver possibly decide that one legal frame
is not the one that had originally been transmitted?
 Not possible!

[Figure: the set of all possible frames (000…000, 000…001, …, 111…110, 111…111) is identical to the set of all legal frames]
6
Error Control – Redundancy …
 Core idea: Declare some of the possible messages illegal!
 Still need to be able to express 2^k legal frames
⇒ More than 2^k possible frames are required
⇒ More than k bits are required in a frame
 Use frames with total length n > k
 r = n − k are the redundant bits (typically sent as header or trailer)
 Having more possible than legal frames allows the receiver to
detect illegal frames

[Figure: the set of all legal frames (e.g., 000…000000, 000…001010, 111…110110, 111…111011) is a small subset of the set of all possible frames (which also contains e.g. 000…000010, 000…000011, 111…111101, 111…111111)]
7
How do illegal messages help with detecting bit errors?
 Transmitter only sends legal frame
 Physical medium/receiver might corrupt some bits
 Hope: A legal frame is only corrupted into an illegal
message
 But one legal frame is never turned into another legal frame
 Necessary to realize this hope:
 Physical medium only alters up to a certain number of bits (by
assumption) – say, b bits per frame
 This is only an assumption!
 How does it relate to the BER or the SNR?
 Legal messages are sufficiently different so that it is not possible to
change one legal frame into another by altering at most b bits

8
Altering frames by changing bits
 Suppose the following frames are the only legal bit
patterns: 0000, 0011, 1100, 1111

[Figure: all 16 four-bit patterns, with lines connecting frames that only differ in a single bit = that can be converted into each other by flipping one bit. Legal frames: 0000, 0011, 1100, 1111. Illegal frames: 0001, 0010, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1101, 1110. Here: no single bit error can convert one legal frame into another one!]

9
Simple Redundancy Example: Parity
 A simple rule to construct 1 redundant bit (i.e., n=k+1):
Parity
 Odd parity: Add one bit, choose its value such that the number of
1’s in the entire message is odd
 Even parity: Add one bit, choose its value such that the number of
1’s in the entire message is even

 Example:
 Original message without redundancy: 01101011001
 Odd parity: 011010110011
 Even parity: 011010110010

 Parity bit used in ASCII code
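The parity rule above translates directly into code. A minimal Python sketch (the function name add_parity is ours, not from the slides):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    check = ones % 2 if even else (ones + 1) % 2
    return bits + str(check)

msg = "01101011001"                                    # the slide's example message
assert add_parity(msg, even=False) == "011010110011"   # odd parity
assert add_parity(msg, even=True) == "011010110010"    # even parity
```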

10
How good is the single parity check code?
 Redundancy: single parity check code adds 1 redundant bit
per k information bits
 Overhead = 1/(k + 1)

 Coverage: For even parity, all error patterns with an odd # of
errors can be detected
 An error pattern is a binary (k + 1)-tuple with 1s where errors occur
and 0s elsewhere
 Of the 2^(k+1) binary (k + 1)-tuples, half have odd weight
 So 50% of error patterns can be detected

 Is it possible to detect more errors if we add more check
bits?
 Yes, with the right codes
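The 50% coverage figure can be checked by brute force for a small k; a Python sketch (illustrative only, k = 7 chosen arbitrarily):

```python
from itertools import product

k = 7                  # arbitrary small example; the argument holds for any k
n = k + 1              # frame length with the single parity bit added
patterns = list(product([0, 1], repeat=n))
# Even parity detects exactly the error patterns with an odd number of 1s:
detected = [p for p in patterns if sum(p) % 2 == 1]
assert len(detected) == len(patterns) // 2   # exactly half of all 2^(k+1) patterns
```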

11
Two-Dimensional Parity Check
 More parity bits to improve coverage
 Arrange information as columns
 Add single parity bit to each column
 Add a final “parity” column
 Used in early error control systems
    1 0 0 1 0 | 0
    0 1 0 0 0 | 1
    1 0 0 1 0 | 0      Last column consists of check bits for each row
    1 1 0 1 1 | 0
    ----------+--
    1 0 0 1 1 | 1      Bottom row consists of check bits for each column

12
Error-detecting capability
One error – failed checks: row 2, column 2
    1 0 0 1 0 0
    0 0 0 0 0 1
    1 0 0 1 0 0
    1 1 0 1 1 0
    1 0 0 1 1 1

Two errors – failed checks: rows 2 and 4
    1 0 0 1 0 0
    0 0 0 0 0 1
    1 0 0 1 0 0
    1 0 0 1 1 0
    1 0 0 1 1 1

Three errors – failed checks: row 4, column 4
    1 0 0 1 0 0
    0 0 0 1 0 1
    1 0 0 1 0 0
    1 0 0 1 1 0
    1 0 0 1 1 1

Four errors – no failed checks (undetectable)
    1 0 0 1 0 0
    0 0 0 1 0 1
    1 0 0 1 0 0
    1 0 0 0 1 0
    1 0 0 1 1 1

1, 2, or 3 errors can always be detected; not all patterns of 4 or more
errors can be detected. (In the original figure, arrows indicate failed check bits.)
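The two-dimensional parity scheme of the last two slides can be sketched in Python (function names are ours; even parity assumed, as in the figures):

```python
def encode_2d(rows):
    """Append an even-parity check column to each row, then a check row."""
    with_col = [r + [sum(r) % 2] for r in rows]
    check_row = [sum(col) % 2 for col in zip(*with_col)]
    return with_col + [check_row]

def failed_checks(block):
    """Indices of rows and columns whose even-parity check fails."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2]
    return bad_rows, bad_cols

# The data bits from the slide's figure (check bits are recomputed):
data = [[1, 0, 0, 1, 0],
        [0, 1, 0, 0, 0],
        [1, 0, 0, 1, 0],
        [1, 1, 0, 1, 1]]
block = encode_2d(data)
assert failed_checks(block) == ([], [])    # a clean block passes all checks

block[1][2] ^= 1                           # introduce a single bit error
assert failed_checks(block) == ([1], [2])  # failed row + column pinpoint the bit
```

Note that the intersection of the failed row and failed column locates a single error, which is why this scheme can also correct it.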

13
Content
 Error control
 Redundancy
 Hamming distance & error correction
 Error correction
 Block codes, Hamming Codes

14
What is a good code?
 Error patterns are introduced by channels
 These error patterns map a transmitted codeword to a nearby n-tuple
 If codewords are close to each other, then detection failures will occur
 Good codes should maximize the separation between codewords

[Figure: two clouds of n-tuples, x = codewords, o = non-codewords. In the first, codewords lie close together – poor distance properties. In the second, codewords are spread far apart – good distance properties]

15
Altering frames by changing bits
 Example: Suppose the following frames are the only legal
bit patterns: 0000, 0011, 1100, 1111

[Figure: as on the earlier slide, lines connect frames that only differ in a single bit = that can be converted into each other by flipping one bit. Here: no single bit error can convert one legal frame into another one! Two bit changes are necessary to go from one legal frame to another]

16
Distance between frames
 In previous example: Two bit changes necessary to go
from one legal frame to another
 Formally: Hamming distance
 Let x = x1,…,xn and y = y1,…,yn be frames
 d(x,y) = number of 1 bits in x XOR y (x ⊕ y)
 Intuitively: the number of bit positions where x and y are different

Example:  x = 0011010111
          y = 0110100101
      x ⊕ y = 0101110010

d(x,y) = 5

17
Hamming distance of a set of frames
 The Hamming distance of a set of frames S:

 The smallest distance between any two frames in the set

Examples:
 {0000, 0011, 1100, 1111}: all pairwise distances are 2
⇒ distance of the set is 2
 {001011, 011101, 101011}: pairwise distances are 3, 4, and 1 – one distance is 1!
⇒ distance of the set is 1
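Both definitions translate directly into Python (the names hamming and code_distance are ours):

```python
from itertools import combinations

def hamming(x: str, y: str) -> int:
    """d(x, y) = number of bit positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def code_distance(S):
    """Hamming distance of a set of frames: smallest pairwise distance."""
    return min(hamming(x, y) for x, y in combinations(S, 2))

assert hamming("0011010111", "0110100101") == 5          # earlier slide's example
assert code_distance({"0000", "0011", "1100", "1111"}) == 2
assert code_distance({"001011", "011101", "101011"}) == 1
```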

18
Hamming distance and error detection/correction
 What happens if d(S) = 0?
 This does not make sense, by definition

 What happens if d(S) = 1?


 There exist x, y ∈ S such that d(x,y) = 1; no other pair is closer

    x ——— y
    (1 bit difference)

 A single bit error converts one legal frame x into another legal
frame y
 Cannot detect or correct anything

19
Hamming distance and detection/correction
 What happens if d(S) = 2?
 There exist x, y ∈ S such that d(x,y) = 2; no other pair is closer
 In particular: any u with d(x,u) = 1 is illegal,
 As is any u with d(y,u) = 1

    x ——— u ——— y
    (1 bit difference each)

 I.e., errors which modify a single bit always lead to an illegal frame
⇒ Can be detected!
 Generalizes to all legal frames, because the Hamming distance
describes the “critical cases”
 But not corrected – upon receiving u, there is no way to decide whether x
or y had been sent (symmetry!)

20
Hamming distance and detection/correction
 What happens if d(S) = 3?
 There exist x, y ∈ S such that d(x,y) = 3; no other pair is closer
 Every s with d(x,s) = 1 is illegal AND d(y,s) > 1!

    x ——— s ——— u ——— y
    (1 bit difference each)

 Hence: the receipt of s could have the following causes:


 Originally, x had been sent, but 1 bit error occurred
 Originally, y had been sent, but 2 bit errors occurred
 (Originally, some other frame had been sent, but at least 2 bit errors
occurred)
 Assuming that fewer errors have happened, a received frame s
can be mapped to a frame x!
 Hence, the error has been “corrected” – hopefully, correctly!

21
Generalization – Required Hamming distances
 The examples above can be generalized

 To detect d bit errors, a Hamming distance of d+1 in the


set of legal frames is required
 So that it is not possible to re-write a legal frame into another one
using at most d bits

 To correct d bit errors, a Hamming distance of 2d+1 in the


set of legal frames is required
 So that all illegal frames at most d bits away from legal frame are
more than d bits away from any other legal frame

22
Frame sets – code books, codes
 A terminology aspect:
 The set of legal frames S ⊆ {0,1}^n is also called a code book or
simply a code

 The rate R of a code S is defined as:

    R = log2 |S| / n

 Rate characterizes the efficiency

 The distance d(S) of a code S is defined as:

    d(S) = min { d(x,y) : x, y ∈ S, x ≠ y }

 Distance characterizes error correction/detection capabilities

 A good code should have large distance and large rate –


but arbitrary combinations are not possible

23
Content
 Error control
 Redundancy
 Hamming distance & error correction
 Error correction
 Linear block codes and Hamming codes

24
Linear Block Code
 The information bit stream is chopped into blocks of k bits
 Each block is encoded to a larger block of n bits
 Called codeword
 The coded bits are modulated and sent over channel
 The reverse procedure is done at the receiver

    Data block ──→ Channel encoder ──→ Codeword
      k bits                            n bits

 n − k redundant bits

 Code rate: Rc = k/n information bits per codeword symbol

25
Linear Block Code …
 In a linear block code (LBC), the extra n − k bits are linear
functions (combinations) of the original k bits
 These extra bits are called parity-check bits

 LBCs use a larger number of parity bits to either


 Detect more than one error or
 Correct one or more errors

26
Linear Block Codes …
 Consider an (n, k) binary block code
 There are 2^n possible n-bit words
 Select 2^k codewords from the 2^n possibilities as the code
 Each k-bit information block is uniquely mapped to one of these 2^k
codewords
 Code rate Rc = k/n information bits per codeword symbol

 For 2^k << 2^n the distance between codewords can be
increased
 Thus provides for better error correction and detection

27
Linear Block Codes …
 Define a generator matrix G of the form

        [ V1 ]   [ v11  v12  …  v1n ]
    G = [ V2 ] = [ v21  v22  …  v2n ]
        [ ⋮  ]   [  ⋮    ⋮         ⋮ ]
        [ Vk ]   [ vk1  vk2  …  vkn ]

28
Linear Block Codes …
 Encoding in (n, k) block code

    U = mG

                                      [ V1 ]
    (u1, u2, …, un) = (m1, m2, …, mk) [ V2 ]
                                      [ ⋮  ]
                                      [ Vk ]

    (u1, u2, …, un) = m1·V1 ⊕ m2·V2 ⊕ … ⊕ mk·Vk

 G describes how codewords are generated from information bits

29
Linear Block Codes …
 Example: Block code (6,3)
        [ V1 ]   [ 1 1 0 1 0 0 ]
    G = [ V2 ] = [ 0 1 1 0 1 0 ]
        [ V3 ]   [ 1 0 1 0 0 1 ]

    Message vector    Codeword
    000               000000
    100               110100
    010               011010
    110               101110
    001               101001
    101               011101
    011               110011
    111               000111
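The table above can be reproduced by XOR-ing together the rows of G selected by the message bits; a short Python sketch (encode is our name):

```python
V = ["110100", "011010", "101001"]     # the rows V1, V2, V3 of G

def encode(m: str) -> str:
    """u = mG over GF(2): XOR the rows of G selected by the 1-bits of m."""
    u = 0
    for bit, row in zip(m, V):
        if bit == "1":
            u ^= int(row, 2)
    return format(u, "06b")

# Reproduce two entries of the slide's table:
assert encode("110") == "101110"
assert encode("111") == "000111"
```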

30
Systematic Block Codes
 For a systematic block code (n, k), the first (or last) k
elements in the codeword are information bits

    G = [ P | Ik ]

    Ik = k × k identity matrix
    P  = k × (n − k) matrix

    U = (u1, u2, …, un) = (p1, p2, …, p(n−k), m1, m2, …, mk)
                           parity bits        message bits

31
Systematic Block Codes
 For any linear code we can find an (n − k) × n matrix H whose
rows are orthogonal to the rows of G:

    G H^T = 0

 H checks the parity of the received word
 i.e., it maps the n-bit word to an (n − k)-bit syndrome
 Codewords (= mG) should have a syndrome of 0

 H is called the parity check matrix and its rows are linearly
independent
 For systematic linear block codes:

    H = [ I(n−k) | P^T ]

32
Systematic Block Codes
 For the (6,3) block code example seen earlier, the parity
check matrix is given as

        [ 1 0 0 1 0 1 ]
    H = [ 0 1 0 1 1 0 ]
        [ 0 0 1 0 1 1 ]

 Thus, multiplication of any valid codeword with the parity-
check matrix results in an all-zero vector
 This property is used to determine whether the received
vector is a valid codeword or has been corrupted
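The orthogonality G H^T = 0 is easy to verify numerically; a Python sketch using the (6,3) matrices G and H from these slides:

```python
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

# Every row of G is orthogonal (mod 2) to every row of H, i.e. G H^T = 0:
GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
assert GHt == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```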

33
Systematic Block Codes
[Block diagram: Data source → Source coder → m → Channel encoding → U → Modulation → channel → Demodulation → Detection → r → Channel decoding → m̂ → Source decoder → Data sink]

    r = U ⊕ e
    r = (r1, r2, …, rn)   received codeword or vector
    e = (e1, e2, …, en)   error pattern or vector

 Syndrome testing
 S is the syndrome of r, corresponding to the error pattern e:

    S = r H^T = e H^T

34
Syndrome Testing
 If r is a valid codeword then S = 0
 Thus, the syndrome equals the all-zero vector if
 The transmitted codeword is not corrupted or
 Is corrupted in a manner such that the received codeword is a valid
codeword in the code but is different from the transmitted
codeword
 If the received codeword r contains detectable errors, then
S ≠ 0
 If the received codeword contains correctable errors, the
syndrome identifies the error pattern corrupting the
transmitted codeword
 These errors can then be corrected

35
Syndrome Testing
 Note that the syndrome is a function only of the error
pattern e and not the transmitted codeword U

 Because S = eH^T corresponds to only n − k equations in n
unknowns
 Hence, the error pattern cannot be determined uniquely by solving
these equations
 However, since the probability of bit error is typically small
and independent for each bit, the most likely error pattern
is the one with minimal weight
 This corresponds to the least number of errors introduced
in the channel

36
Systematic Block Codes
 For the (6,3) block code example above

    Error pattern    Syndrome
    000000           000
    000001           101
    000010           011
    000100           110
    001000           001
    010000           010
    100000           100
    010001           111

    U = (101110) is transmitted; r = (001110) is received.
    The syndrome of r is computed:  S = rH^T = (001110)H^T = (100)
    The error pattern corresponding to this syndrome is ê = (100000)
    The corrected vector is estimated as
    Û = r ⊕ ê = (001110) ⊕ (100000) = (101110)

There is a unique mapping Syndrome ↔ Error Pattern
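The whole decoding procedure on this slide can be sketched in Python (names are ours; the syndrome table is built from the correctable error patterns listed above):

```python
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r):
    """S = r H^T over GF(2)."""
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# Syndrome table for the error patterns listed above:
patterns = ["000000", "000001", "000010", "000100",
            "001000", "010000", "100000", "010001"]
table = {syndrome([int(b) for b in e]): [int(b) for b in e] for e in patterns}

r = [0, 0, 1, 1, 1, 0]                    # received vector from the slide
s = syndrome(r)
assert s == (1, 0, 0)
e_hat = table[s]                          # most likely (minimum-weight) error
u_hat = [ri ^ ei for ri, ei in zip(r, e_hat)]
assert u_hat == [1, 0, 1, 1, 1, 0]        # the transmitted codeword is recovered
```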

37
Generator Matrix
 Example: The generator matrix for a (7, 4) systematic code is

1. Compute the code generated for 1000, 0100 and 0010


2. Let 1101000 be transmitted and 1100000 be received.
Show how the error is detected.

38
Assignment
 Give a (7,4) code with generator & parity check metrics G
and H as follows
 Compute the code generated for 1000, 0100, 0010, and 0001
 Check that GHT = 0
 Let 1101000 be transmitted and 1100000 be received. Show how the
error is detected.

1 1 0 1 0 0 0 
  1 0 0 1 0 1 1
H 0 1 0 1 1 1 0
G  
0 1 1 0 1 0 0   P | I 
1 1 1 0 0 1 0
k

 
 1 0 1 0 0 0 1 0 0 1 0 1 1 1
 k ( n  k ) 
      
k k

39
Content
 Error control
 Redundancy
 Hamming distance & error correction
 Error correction
 Linear block codes and Hamming codes

40
History
 Developed by Richard Hamming in the late 1940’s
 He recognized that the further evolution of computers
required greater reliability
 In particular the ability to detect and correct errors
 His search for error-correcting codes led to the Hamming
Codes
 Hamming Codes are still widely used in computing,
telecommunication, and other applications

41
Hamming Codes
 Class of error-correcting codes
 Can detect single and double-bit errors
 Can correct single-bit errors
 For each m > 2, there is a Hamming code of length
n = 2^m − 1 with n − k = m parity check bits

    m    n = 2^m − 1    k = n − m    Redundancy m/n
    3         7              4            3/7
    4        15             11            4/15
    5        31             26            5/31
    6        63             57            6/63

42
Hamming (7,4) Code
 m = 3 Hamming Code
 Information bits are b1, b2, b3, b4
 Equations for parity checks b5, b6, b7

    b5 = b1 ⊕ b3 ⊕ b4        [ b5 ]   [ 1 0 1 1 ]  [ b1 ]
    b6 = b1 ⊕ b2 ⊕ b4        [ b6 ] = [ 1 1 0 1 ]  [ b2 ]
    b7 = b2 ⊕ b3 ⊕ b4        [ b7 ]   [ 0 1 1 1 ]  [ b3 ]
                                                   [ b4 ]

 There are 2^4 = 16 codewords
 (0,0,0,0,0,0,0) is a codeword
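The three check equations give a one-line encoder; a Python sketch (hamming74_encode is our name):

```python
from itertools import product

def hamming74_encode(b1, b2, b3, b4):
    """Append the three check bits given by the slide's parity equations."""
    b5 = b1 ^ b3 ^ b4
    b6 = b1 ^ b2 ^ b4
    b7 = b2 ^ b3 ^ b4
    return (b1, b2, b3, b4, b5, b6, b7)

assert hamming74_encode(0, 0, 0, 0) == (0,) * 7          # all-zero codeword
assert hamming74_encode(1, 0, 1, 1) == (1, 0, 1, 1, 1, 0, 0)

# All 16 codewords; the smallest nonzero weight (= code distance) is 3,
# which is what allows single-error correction:
codewords = [hamming74_encode(*b) for b in product([0, 1], repeat=4)]
assert len(codewords) == 16
assert min(sum(c) for c in codewords if any(c)) == 3
```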

43
Hamming (7,4) Code …
 Hamming code really refers to a specific (7,4) code
Hamming introduced in 1950
 Hamming code adds 3 additional check bits to every 4
data bits of the message for a total of 7
 Hamming's (7,4) code can correct any single-bit error, and
detect all two-bit errors
 Since the medium would have to be uselessly noisy for 2
out of 7 bits (about 30%) to be lost, Hamming's (7,4) is
effectively lossless

44
Hamming (7,4) Code …
Information Codeword Weight
b1 b2 b3 b4    b1 b2 b3 b4 b5 b6 b7    w(b)
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 1 1 1 1 4
0 0 1 0 0 0 1 0 1 0 1 3
0 0 1 1 0 0 1 1 0 1 0 3
0 1 0 0 0 1 0 0 0 1 1 3
0 1 0 1 0 1 0 1 1 0 0 3
0 1 1 0 0 1 1 0 1 1 0 4
0 1 1 1 0 1 1 1 0 0 1 4
1 0 0 0 1 0 0 0 1 1 0 3
1 0 0 1 1 0 0 1 0 0 1 3
1 0 1 0 1 0 1 0 0 1 1 4
1 0 1 1 1 0 1 1 1 0 0 4
1 1 0 0 1 1 0 0 1 0 1 4
1 1 0 1 1 1 0 1 0 1 0 4
1 1 1 0 1 1 1 0 0 0 0 3
1 1 1 1 1 1 1 1 1 1 1 7

45
Parity Check Equations
 Rearrange the parity check equations:

    0 = b1 ⊕ b3 ⊕ b4 ⊕ b5
    0 = b1 ⊕ b2 ⊕ b4 ⊕ b6
    0 = b2 ⊕ b3 ⊕ b4 ⊕ b7

 In matrix form:
                                  [ b1 ]
    [ 0 ]   [ 1 0 1 1 1 0 0 ]     [ b2 ]
    [ 0 ] = [ 1 1 0 1 0 1 0 ]  ·  [ b3 ]      i.e.  H b^T = 0
    [ 0 ]   [ 0 1 1 1 0 0 1 ]     [ b4 ]
                                  [ b5 ]
                                  [ b6 ]
                                  [ b7 ]
 All codewords must satisfy these equations

46
Error Detection with Hamming Code

Single error, e.g. e = (0,0,1,0,0,0,0)^T:

        [ 1 0 1 1 1 0 0 ]        [ 1 ]
    s = [ 1 1 0 1 0 1 0 ] e  =   [ 0 ]  ≠ 0      Single error detected
        [ 0 1 1 1 0 0 1 ]        [ 1 ]

Double error, e.g. e = (0,1,0,0,1,0,0)^T:

                 [ 0 ]   [ 1 ]   [ 1 ]
    s = H e  =   [ 1 ] + [ 0 ] = [ 1 ]  ≠ 0      Double error detected
                 [ 1 ]   [ 0 ]   [ 1 ]

(s is the sum of the columns of H at the error positions)
47
Error Detection with Hamming Code

Triple error, e.g. e = (1,1,1,0,0,0,0)^T:

                 [ 1 ]   [ 0 ]   [ 1 ]   [ 0 ]
    s = H e  =   [ 1 ] + [ 1 ] + [ 0 ] = [ 0 ]   Triple error not detected
                 [ 0 ]   [ 1 ]   [ 1 ]   [ 0 ]   (1110000 is itself a codeword)
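These three cases can be reproduced with the parity check matrix H; a Python sketch (the specific error patterns are our reading of the garbled figures):

```python
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(e):
    """s = H e over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

assert syndrome([0, 0, 1, 0, 0, 0, 0]) != (0, 0, 0)   # single error: detected
assert syndrome([0, 1, 0, 0, 1, 0, 0]) != (0, 0, 0)   # double error: detected
# A triple error matching the codeword 1110000 yields s = 0 and goes unseen:
assert syndrome([1, 1, 1, 0, 0, 0, 0]) == (0, 0, 0)
```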

48
