
EE3101

Communication Engineering
Chapter 4, Error Correction Coding
Course Intended Learning Outcomes
Explain the basic concepts of error detection/correction coding and
perform error analysis
4.1 Block Codes
4.2 Cyclic Codes
Error correction or channel coding builds redundancy into the transmitted data to enable reliable transmission that can withstand the effects of noise, interference and multipath fading.

4.1 Block Codes


• Data is segmented into blocks of k data bits; each block can represent one of 2^k distinct messages.
• An encoder transforms each block into a larger block of n bits (code bits or channel symbols), referred to as an (n, k) code.
• The n−k added bits are called redundant bits, parity bits or check bits; they are used for error detection and/or correction.

• Redundancy = (n−k)/k
• Code rate = k/n
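As a quick check of the two definitions above, a minimal Python sketch (the (7, 4) code used here is only an illustrative choice, not one discussed in this chapter):

```python
def redundancy(n, k):
    """Redundancy of an (n, k) block code: (n - k) / k."""
    return (n - k) / k

def code_rate(n, k):
    """Code rate of an (n, k) block code: k / n."""
    return k / n

# Illustrative values for a hypothetical (7, 4) code:
print(redundancy(7, 4))  # 0.75
print(code_rate(7, 4))   # 0.5714...
```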
4.1.1 Parity-check code

1) Single-Parity-Check Code
• Redundant bit = 1 bit  even or odd parity
• Code rate = k/(k+1)
• Can detect only an odd number of bit errors; cannot correct errors
• Probability of undetected error P_ud (an even number of bits inverted):

P_ud = Σ_{j = 2, 4, 6, …} C(n, j) p^j (1−p)^(n−j)

where p is the bit error probability and n = k+1 is the codeword length

Example: Even-Parity (4, 3) Code

Message | Parity | Codeword
  000   |   0    |  0 000
  100   |   1    |  1 100
  010   |   1    |  1 010
  110   |   0    |  0 110
  001   |   1    |  1 001
  101   |   0    |  0 101
  011   |   0    |  0 011
  111   |   1    |  1 111

• A (4, 3) even-parity, error-detection code.
• Can detect single or triple error patterns.
• The probability of undetected error equals the probability that two or four errors occur anywhere in a codeword:
P_ud = C(4,2) p^2 (1−p)^2 + C(4,4) p^4
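The parity rule and the undetected-error sum above can be sketched as follows (function names are my own; the parity bit is prepended to match the table):

```python
from math import comb

def even_parity_encode(bits):
    """Prepend one parity bit so that the codeword has even weight."""
    return [sum(bits) % 2] + bits

def p_undetected(n, p):
    """P_ud: probability that an even (>= 2) number of the n code bits invert."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(2, n + 1, 2))

print(even_parity_encode([1, 0, 0]))  # [1, 1, 0, 0] -> codeword 1 100
print(p_undetected(4, 0.01))          # terms for j = 2 and j = 4
```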
 
2) Rectangular Code (Product Code)
• Can be considered as parallel data transmission.
• Data block: M rows and N columns.
• Append
  • a horizontal parity check to each row, and
  • a vertical parity check to each column.
• Coded block: (M+1) rows and (N+1) columns.
• Can correct a single bit error (the failing row parity and failing column parity intersect at the error position).

Example (M = 4, N = 5); the last column holds the horizontal parity checks and the last row the vertical parity checks:

1 1 1 0 0 | 1
0 0 0 0 1 | 1
1 1 0 0 0 | 0
1 1 1 1 0 | 0
----------+--
1 1 0 1 1 | 0

• Serial transmission: 111001 000011 110000 111100 110110
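The encoding and single-error correction steps above can be sketched in Python (function names are my own; the data block is the 4×5 example from this slide):

```python
def rect_encode(block):
    """Append a horizontal parity to each row, then a vertical parity row."""
    coded = [row + [sum(row) % 2] for row in block]
    coded.append([sum(col) % 2 for col in zip(*coded)])
    return coded

def rect_correct(coded):
    """Flip the single bit where a failing row parity meets a failing column parity."""
    bad_rows = [i for i, row in enumerate(coded) if sum(row) % 2]
    bad_cols = [j for j, col in enumerate(zip(*coded)) if sum(col) % 2]
    if bad_rows and bad_cols:
        coded[bad_rows[0]][bad_cols[0]] ^= 1
    return coded

data = [[1, 1, 1, 0, 0],
        [0, 0, 0, 0, 1],
        [1, 1, 0, 0, 0],
        [1, 1, 1, 1, 0]]
coded = rect_encode(data)   # 5 rows x 6 columns, matching the example above
print(coded[4])             # vertical parity row: [1, 1, 0, 1, 1, 0]
```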
4.1.2 Linear Block Codes

• Belong to a class of parity-check codes characterized by the (n, k) notation.
• The encoder transforms a block of k message digits (a message vector) into a longer block of n codeword digits (a code vector) constructed from a given alphabet of elements.
• The 2^k k-tuple messages (sequences of k digits) are linearly and uniquely mapped to 2^k n-tuple codewords.
• This mapping can be accomplished via a look-up table.
4.1.2.1 Vector Spaces

• The set of all binary n-tuples, V_n, is called a vector space over the binary field of two elements (0 and 1).
• The field has two operations: addition (XOR) and multiplication (AND).

Addition (XOR): 0⊕0 = 0, 0⊕1 = 1, 1⊕0 = 1, 1⊕1 = 0
Multiplication (AND): 0·0 = 0, 0·1 = 0, 1·0 = 0, 1·1 = 1
4.1.2.2 Vector Subspaces

A subset S of the vector space V_n is called a subspace if:
• The all-zeros vector is in S.
• The sum of any two vectors in S is also in S (closure property).
Suppose V_i and V_j are two codewords (code vectors) in an (n, k) binary block code. The code is said to be linear if V_i ⊕ V_j is also a code vector (closure property).
Example: 2^4 = 16 vectors make up the total population of the vector space V_4:
0000 0001 0010 0011 0100 0101 0110 0111
1000 1001 1010 1011 1100 1101 1110 1111

An example of a subspace of V_4 contains:

0000 0101 1010 1111

Add any two of these subspace vectors. What do you get? (Another member of the subspace, e.g. 0101 ⊕ 1010 = 1111.)

In a noisy channel, a codeword within the 2^k allowable ones may be received as any one of the 2^n possible vectors (see SKLAR Fig. 6.10). If the perturbed vector is not too unlike the valid codeword, it may still be possible to decode correctly.
Goals for coding
1. Pack the largest number of code vectors into the vector space, to increase coding efficiency.
2. Keep code vectors as far apart from each other as possible.

Example: (6, 3) linear block code. Only 8 of the 64 6-tuples in the vector space V_6 are used as code vectors.

Message Vector | Code Vector
000 | 000 000
100 | 110 100
010 | 011 010
110 | 101 110
001 | 101 001
101 | 011 101
011 | 110 011
111 | 000 111
4.1.2.3 Generator Matrix

• When k is large, a look-up table is not feasible; instead a generator matrix is used.
• For a k-dimensional subspace of the n-dimensional binary vector space (k < n), a set of k linearly independent n-tuples can generate all the 2^k member vectors of the subspace.
• The generating set of vectors is said to span the subspace.
• A code vector U is generated as a linear combination of k linearly independent n-tuples V1, V2, …, Vk:

U = m1 V1 + m2 V2 + … + mk Vk

where mi = (0 or 1) are the message digits, i = 1, 2, …, k.

In general, the generator matrix is a (k × n) matrix:

G = [V1]   [V11 V12 ⋯ V1n]
    [V2] = [V21 V22 ⋯ V2n]      Generator matrix
    [⋮ ]   [ ⋮   ⋮  ⋱  ⋮ ]
    [Vk]   [Vk1 Vk2 ⋯ Vkn]

If the message vector m is expressed as a row vector

m = [m1 m2 ⋯ mk]      Message vector

then the code vector U is given by U = mG.      Code vector


Example;

G = [V1]   [1 1 0 1 0 0]
    [V2] = [0 1 1 0 1 0]      Generator matrix
    [V3]   [1 0 1 0 0 1]

For the message vector m = [1 1 0], the code vector is

U = 1·V1 + 1·V2 + 0·V3 = 110100 + 011010 = 101110      Code vector

• Thus the code vector U corresponding to a message vector m is a linear combination of the rows of G.
• The encoder only needs to store the k rows of G instead of the total 2^k code vectors.
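The product U = mG over the binary field can be sketched as an XOR of selected rows of G (a minimal sketch using the (6, 3) generator matrix from the example above):

```python
G = [[1, 1, 0, 1, 0, 0],   # V1
     [0, 1, 1, 0, 1, 0],   # V2
     [1, 0, 1, 0, 0, 1]]   # V3

def encode(m, G):
    """U = mG over GF(2): XOR together the rows of G selected by message bits."""
    u = [0] * len(G[0])
    for mi, row in zip(m, G):
        if mi:
            u = [a ^ b for a, b in zip(u, row)]
    return u

print(encode([1, 1, 0], G))  # [1, 0, 1, 1, 1, 0] -> codeword 101110
```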
4.1.2.4 Systematic Linear Block Codes

• A systematic (n, k) linear block code maps each k-dimensional message vector to an n-dimensional code vector.
• The mapping is done in such a way that part of the generated sequence coincides with the k message digits.
• The remaining (n−k) digits are parity bits.

G = [P | I_k] =
[p11 p12 ⋯ p1,n−k | 1 0 ⋯ 0]
[p21 p22 ⋯ p2,n−k | 0 1 ⋯ 0]
[ ⋮   ⋮      ⋮    | ⋮ ⋮ ⋱ ⋮]
[pk1 pk2 ⋯ pk,n−k | 0 0 ⋯ 1]

where P is the parity-array portion of the generator matrix with pij = (0 or 1), and I_k is the k×k identity matrix.
Reduced complexity: it is not necessary to store the identity-matrix portion.
Systematic code vector:

U = mG = (p1, p2, …, p_(n−k), m1, m2, …, mk)

where each parity bit is p_j = m1 p1j + m2 p2j + … + mk pkj (mod 2); the first n−k digits come from the P portion and the last k digits (the message itself) from the I_k portion.
Example; the (6, 3) generator matrix above is systematic, G = [P | I3], so

U = (m1+m3, m1+m2, m2+m3, m1, m2, m3)
     Redundant bits          Original message bits

The 3 redundant bits provide a greater ability to detect and correct errors than a single parity check.
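The systematic structure (parity bits followed by the message itself) can be sketched with the parity array P of the (6, 3) code above:

```python
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]   # parity-array portion of the (6, 3) generator matrix

def systematic_encode(m):
    """Codeword = (n - k) parity bits followed by the k message bits."""
    parity = [sum(m[i] & P[i][j] for i in range(3)) % 2 for j in range(3)]
    return parity + list(m)

print(systematic_encode([1, 0, 0]))  # [1, 1, 0, 1, 0, 0] -> parity 110, message 100
```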
4.1.2.5 Parity-Check Matrix

• The parity-check matrix H is used to decode and validate the codeword.
• The rows of H must be orthogonal to the rows of G, i.e. GH^T = 0.
• H is an (n−k)×n matrix; G is a k×n matrix.
• To be orthogonal to G = [P | I_k], H must be of the form H = [I_(n−k) | P^T], and H^T is given by (over the binary field, −P = P):

H^T = [I_(n−k)] = [1   0   ⋯ 0     ]
      [   P   ]   [0   1   ⋯ 0     ]
                  [⋮   ⋮   ⋱ ⋮     ]
                  [0   0   ⋯ 1     ]
                  [p11 p12 ⋯ p1,n−k]
                  [p21 p22 ⋯ p2,n−k]
                  [⋮   ⋮      ⋮    ]
                  [pk1 pk2 ⋯ pk,n−k]
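The orthogonality condition GH^T = 0 can be checked directly for the (6, 3) code used in this chapter (a minimal sketch; `matmul_gf2` is my own helper name):

```python
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]          # G = [P | I_3]
HT = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 1],
      [1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]                  # H^T = [I_3 ; P]

def matmul_gf2(A, B):
    """Matrix product over GF(2): + is XOR, * is AND."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

print(matmul_gf2(G, HT))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```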

 
4.1.2.6 Syndrome Testing

• During transmission of U the code vector may be corrupted; let r be the received code vector:
r = U + e, where e = (e1, e2, …, en) is the error vector.
• The syndrome S is defined as:

S = rH^T = (U + e)H^T = eH^T, since UH^T = 0

• Important property of linear block codes: the mapping between correctable error patterns and syndromes is one-to-one, so the error can be identified and corrected.

Example: transmitted code vector U = 101110, received code vector r = 001110, so e = 100000.
The syndrome of the corrupted code vector equals the syndrome of the error pattern: rH^T = eH^T.
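The syndrome computation S = rH^T can be sketched for this example (the `syndrome` helper is my own name; H^T is the (6, 3) parity-check transpose from above):

```python
HT = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
      [1, 1, 0], [0, 1, 1], [1, 0, 1]]   # H^T for the (6, 3) code

def syndrome(r, HT):
    """S = r H^T over GF(2)."""
    return [sum(ri & HT[i][j] for i, ri in enumerate(r)) % 2
            for j in range(len(HT[0]))]

print(syndrome([0, 0, 1, 1, 1, 0], HT))  # [1, 0, 0] -> syndrome of r
print(syndrome([1, 0, 0, 0, 0, 0], HT))  # [1, 0, 0] -> same as the error's
```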


4.1.2.7 Error Correction

• The one-to-one mapping between error patterns and syndromes implies that detected errors can be corrected.
• A standard array for an (n, k) code is defined as follows:
First row: all the 2^k code vectors, starting with the all-zeros vector.
First column: all the 2^(n−k) correctable error patterns.
Each row is called a coset, which consists of:
I. the coset leader, a correctable error pattern, in the first column;
II. all code vectors perturbed by that error pattern.

U1          U2            ⋯  Ui            ⋯  U_2^k
e2          U2+e2         ⋯  Ui+e2         ⋯  U_2^k+e2
⋮           ⋮                ⋮                ⋮
ej          U2+ej         ⋯  Ui+ej         ⋯  U_2^k+ej
⋮           ⋮                ⋮                ⋮
e_2^(n−k)   U2+e_2^(n−k)  ⋯  Ui+e_2^(n−k)  ⋯  U_2^k+e_2^(n−k)

The array has 2^(n−k) × 2^k = 2^n elements, corresponding to all the 2^n n-tuples in the vector space (U1 = e1 is the all-zeros vector).
Syndrome of a Coset

• If ej is the coset leader (error pattern) of the jth coset, then Ui + ej is an n-tuple in this coset, and its syndrome is:

S = (Ui + ej)H^T = Ui H^T + ej H^T = ej H^T

• All members of a coset have the same syndrome.
• Syndromes of different cosets are different.
Error Correction Decoding Procedure

1) Calculate the syndrome using S = rH^T.
2) Locate the coset leader ê (the estimated error pattern) whose syndrome equals the calculated S.
3) The estimated error pattern is assumed to be the corruption, and the corrected vector is estimated as

Û = r + ê = U + e + ê

where Û denotes the estimated value of U and ê denotes the estimated value of e.

If ê = e, then Û = U: the original codeword is correctly recovered when the estimated error pattern equals the actual one. Otherwise, an undetectable decoding error exists.
Example of a standard array for the (6, 3) code; the first row contains the 8 valid code vectors (the codeword assigned to each message), and the array covers all 2^6 = 64 6-tuples.

Message | Code Vector
000 | 000 000
100 | 110 100
010 | 011 010
110 | 101 110
001 | 101 001
101 | 011 101
011 | 110 011
111 | 000 111

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110

• Correctable error patterns: the non-zero coset leaders.
• The last coset leader, 010001, is an arbitrarily chosen correctable error pattern to make up the 64 n-tuples.

For each coset leader: S = ej H^T, with

H^T = [1 0 0]
      [0 1 0]
      [0 0 1]
      [1 1 0]
      [0 1 1]
      [1 0 1]

Corresponding syndrome look-up table:

Error Pattern | Syndrome
000000 | 000
000001 | 101
000010 | 011
000100 | 110
001000 | 001
010000 | 010
100000 | 100
010001 | 111

If code vector U = 101110 is sent but r = 001110 is received:
Syndrome of r: S = [001110]H^T = [100]
Estimated error pattern (from the look-up table): ê = 100000
Corrected vector: Û = r + ê = 001110 + 100000 = 101110
The estimate is correct because the actual error pattern is one of the 8 correctable ones.
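The whole syndrome-decoding procedure (syndrome, look-up, correction) can be sketched with the table above (the dictionary name is my own):

```python
HT = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
      [1, 1, 0], [0, 1, 1], [1, 0, 1]]

# Syndrome -> coset-leader look-up table from the standard array above.
COSET_LEADERS = {
    (0, 0, 0): [0, 0, 0, 0, 0, 0],
    (1, 0, 1): [0, 0, 0, 0, 0, 1],
    (0, 1, 1): [0, 0, 0, 0, 1, 0],
    (1, 1, 0): [0, 0, 0, 1, 0, 0],
    (0, 0, 1): [0, 0, 1, 0, 0, 0],
    (0, 1, 0): [0, 1, 0, 0, 0, 0],
    (1, 0, 0): [1, 0, 0, 0, 0, 0],
    (1, 1, 1): [0, 1, 0, 0, 0, 1],
}

def decode(r):
    """Syndrome decoding: S = rH^T, look up the coset leader, add it to r."""
    s = tuple(sum(ri & HT[i][j] for i, ri in enumerate(r)) % 2 for j in range(3))
    e_hat = COSET_LEADERS[s]
    return [ri ^ ei for ri, ei in zip(r, e_hat)]

print(decode([0, 0, 1, 1, 1, 0]))  # [1, 0, 1, 1, 1, 0] -> U = 101110 recovered
```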
4.1.3 Coding Strength
4.1.3.1 Weight and Distance of Binary Vectors
Not all error patterns can be correctly decoded; the code's correction capability is determined by its weight and distance properties.
1. The Hamming weight w(U) of a code vector U is the number of non-zero elements in U.
2. The Hamming distance d(U, V) between two code vectors U and V is the number of elements in which they differ:

d(U, V) = w(U + V) = d(U + V, 0), where 0 is the all-zeros code vector

Example                    w
U     = 100101101          5
V     = 011110100          5
U + V = 111011001          6 = d(U, V)
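The weight and distance definitions above, applied to this example (function names are my own):

```python
def hamming_weight(u):
    """Number of non-zero elements in u."""
    return sum(u)

def hamming_distance(u, v):
    """Number of positions in which u and v differ: w(u + v) over GF(2)."""
    return sum(a ^ b for a, b in zip(u, v))

U = [1, 0, 0, 1, 0, 1, 1, 0, 1]   # 100101101, weight 5
V = [0, 1, 1, 1, 1, 0, 1, 0, 0]   # 011110100, weight 5
print(hamming_distance(U, V))     # 6
```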
4.1.3.2 Minimum Distance of a Linear Code

• The minimum distance d_min is a measure of the code's capability for error detection and correction.
• By the closure property of linear block codes, W = U + V must also be a code vector, so
d(U, V) = w(U + V) = w(W) = d(W, 0)
⇒ d_min = min d(U, V) = min d(W, 0) over all distinct code vectors U, V (i.e. over all non-zero W).

• d_min is therefore the smallest distance between the all-zeros code vector and all the other code vectors.
4.1.3.4 Error Detection and Correction
[Figure: distance d_min between codewords U and V, shown for even d_min and for odd d_min, with the correction radius t and the detection range e marked.]
• The error-detection capability is e = d_min − 1: all error patterns of (d_min − 1) or fewer bit errors are guaranteed to be detected.
• The error-correction capability is

t = ⌊(d_min − 1)/2⌋, where ⌊x⌋ is the largest integer not exceeding x, e.g. ⌊1.9⌋ = 1

• t is the maximum number of guaranteed correctable errors per codeword.
• An (n, k) code is capable of detecting 2^n − 2^k error patterns of length n.
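Because the code is linear, d_min equals the minimum weight of a non-zero codeword, so it can be found by enumerating all 2^k messages (a brute-force sketch for the (6, 3) code; `d_min` is my own helper name):

```python
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def d_min(G):
    """For a linear code, d_min = minimum weight of a non-zero codeword."""
    best = len(G[0])
    for m in product([0, 1], repeat=len(G)):
        if not any(m):
            continue                      # skip the all-zeros codeword
        u = [0] * len(G[0])
        for mi, row in zip(m, G):
            if mi:
                u = [a ^ b for a, b in zip(u, row)]
        best = min(best, sum(u))
    return best

d = d_min(G)
print(d, d - 1, (d - 1) // 2)  # d_min = 3, so e = 2 detectable, t = 1 correctable
```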
Example: using the same (6, 3) code as before.
Code vector U = 101110, received vector r = 001110.
Syndrome of r: S = [001110]H^T = [100]
Assume BPSK with P_B = 10^−2. The minimum Hamming distance is 3, i.e. the code can correct 1 error and detect 2 errors.

If the correct codeword is: | Hamming distance to r
000 000 | 3
110 100 | 4
011 010 | 2
101 110 | 1  ← highest probability
101 001 | 4
011 101 | 3
110 011 | 5
000 111 | 2

The codeword at the smallest distance from r (101110, distance 1) has the highest probability of being the transmitted one.
Example: using the same code.
Code vector U = 101110, received vector r = 010110 (not a valid codeword).

If the correct codeword is: | Hamming distance to r
000 000 | 3
110 100 | 2  ← these three may be correct, as they
011 010 | 2  ← have the same highest probability
101 110 | 3
101 001 | 6
011 101 | 3
110 011 | 3
000 111 | 2  ←

Three codewords (110100, 011010, 000111) lie at the same minimum distance of 2, so the two-bit error pattern is detected but cannot be corrected unambiguously.
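The minimum-distance decision in both examples can be sketched by comparing r against all 8 codewords (the helper name is my own):

```python
CODEWORDS = [[0,0,0,0,0,0], [1,1,0,1,0,0], [0,1,1,0,1,0], [1,0,1,1,1,0],
             [1,0,1,0,0,1], [0,1,1,1,0,1], [1,1,0,0,1,1], [0,0,0,1,1,1]]

def nearest_codewords(r):
    """Return (minimum distance, list of all codewords at that distance)."""
    dists = [(sum(a ^ b for a, b in zip(r, c)), c) for c in CODEWORDS]
    best = min(d for d, _ in dists)
    return best, [c for d, c in dists if d == best]

print(nearest_codewords([0, 0, 1, 1, 1, 0]))     # unique winner at distance 1
print(nearest_codewords([0, 1, 0, 1, 1, 0])[0])  # distance 2: a three-way tie
```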
