
Outline of Presentation

 Introduction

 Modern Algebra

 Code Construction

 Research Problems

 Conclusion

 References
1 3/8/2021
Introduction…
General Block Diagram of Communication System
[Block diagram: Information Source → Source Encoder → Channel Encoder → Modulator (writing unit) → Channel (storage medium, with noise) → Demodulator (reading unit) → Channel Decoder → Source Decoder → Destination]

Fig 1 General Block Diagram of Communication System
Introduction
 Claude Shannon is called the father of coding theory.

 Claude Shannon published a landmark paper, "A Mathematical Theory of Communication," in 1948, on achieving reliable communication over a noisy transmission channel.

 Shannon's central theme was that if the signaling rate of the system is less than the channel capacity, reliable communication can be achieved by choosing proper encoding and decoding techniques.

 There has been increasing demand for efficient and reliable digital data transmission and storage systems.
Introduction…
 The design of good codes and efficient decoding methods was initiated by Hamming, Slepian, and others in the early 1950s.

 Recent developments in high-speed digital systems have contributed towards realistically achieving the required performance.

 The use of error-correcting codes for error control has become an integral part of modern communication and data storage systems.
Introduction…
 Linear Block Code (n, k), where
 n - codeword length,

 k - message length (block length)

 Construction of an ECC technique

 requires a systematic generator matrix G of size (k × n), i.e. G = [Ik : P] or G = [P : Ik], or

 a generator polynomial of degree (n-k) (cyclic code);

 a systematic parity-check matrix H = [P^T : I(n-k)] or H = [I(n-k) : P^T] of size (n-k) × n, or a parity-check polynomial of degree k (cyclic code).
Introduction…
 What is ECC?
 Error correcting codes are an important tool used in digital communication systems for correcting errors introduced by a noisy channel.
 What are the types of ECC?
 These codes are mainly of two types:
 Backward Error Correcting (BEC) codes

 Forward Error Correcting (FEC) codes

 In BEC codes, if errors are detected at the receiver end, a request for retransmission is sent to the transmitter end.

 In FEC codes, if errors are detected at the receiver end, correction is executed at the receiver end.
Introduction…
 Forward Error Correcting (FEC) codes include

 Linear Block codes

 Hamming Codes

 Cyclic code

 Low-density parity-check codes(LDPC)

 BCH codes,

 R-S codes,

 Turbo codes

Introduction…
 Forward Error Correcting (FEC) codes include

 Fire codes

 Concatenated codes

 Interleaved codes

 Product Codes

 Burton Code

 R. M. Code

 Convolutional codes etc..

Introduction…
Applications of codes
 Practical applications of error correcting codes include
 High data rate applications

 Satellite-based DVB

 5G mobile telephony; ITU-T G.hn, operating at data rates up to 10 Gbit/s

 Data storage systems

 Optical communication

 10GBASE-T Ethernet, which sends data at 10 Gbit/s over twisted-pair cables

 Wi-Fi: the 802.11 standard, as an optional part of 802.11n and 802.11ac

 IEEE wireless local area networks
Modern Algebra
 Much of the work on ECC is mathematical in nature and requires an extensive background in modern algebra and probability theory to understand.

 Brief introduction to modern algebra:

 Groups

 Fields

 Vector Spaces
Modern Algebra
Groups
 Let G be a set of elements.
 A binary operation * on G is a rule that assigns to each pair of elements a and b a uniquely defined third element c = a * b in G.
 When such a binary operation * is defined on G, we say that G is closed under *.
 For example, let G be the set of all integers & let the binary operation on G be real addition +.
 We all know that the sum of any two integers i & j is a uniquely defined integer in G.
 Hence, the set of integers is closed under real addition.
Modern Algebra
Groups:
 A binary operation * on G is said to be associative if, for any a, b, & c in G,
a * (b * c) = (a * b) * c.
 G contains an element e such that for any a in G, a * e = e * a = a. This element e is called an identity element.
 For any element a in G there exists another element a' in G such that a * a' = a' * a = e. The element a' is called an inverse of a (a is also an inverse of a').
 A group G is said to be commutative if its binary operation * also satisfies the following condition:
 For any a & b in G,
a * b = b * a.
Modern Algebra…
 Groups:
 Example
 The set of all integers is a commutative group under real addition.
 In this case, the integer 0 is the identity element, & the integer -i is the inverse of the integer i.
 The set of all rational numbers excluding zero is a commutative group under real multiplication.
 The integer 1 is the identity element with respect to real multiplication, & the rational number b/a is the multiplicative inverse of a/b.
 Groups with finite numbers of elements do exist, as we shall see in the next example.
 The number of elements in a group is called the order of the group.
 A group of finite order is called a finite group.
Modern Algebra…
Group:
 Example:
Consider the set of two integers G = {0, 1}. Let us define a binary operation, denoted by ⊕, on G as follows.
 Solution
0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0.

 This binary operation is called modulo-2 addition. The set G = {0, 1} is a group under modulo-2 addition. It follows from the definition of modulo-2 addition ⊕ that G is closed under ⊕, & ⊕ is commutative. The inverse of 0 is itself, and the inverse of 1 is also itself; thus G together with ⊕ is a commutative group.
 For any positive integer m, it is possible to construct a group of order m under a binary operation that is very similar to real addition.
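The group properties above can be checked mechanically; a minimal sketch (the function name is illustrative) that verifies {0, 1, …, m-1} under modulo-m addition is a commutative group:

```python
def is_mod_group(m):
    # check closure, identity, inverses, and commutativity of
    # {0, 1, ..., m-1} under modulo-m addition
    G = range(m)
    add = lambda a, b: (a + b) % m
    closed = all(add(a, b) in G for a in G for b in G)
    identity = all(add(a, 0) == a for a in G)
    inverses = all(any(add(a, b) == 0 for b in G) for a in G)
    commutative = all(add(a, b) == add(b, a) for a in G for b in G)
    return closed and identity and inverses and commutative

print(is_mod_group(2))  # True: the modulo-2 group {0, 1} above
print(is_mod_group(7))  # True
```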
Modern Algebra…
Fields:
 Let F be a set of elements on which two binary operations, called addition "+" & multiplication "·", are defined.
 The set F together with the two binary operations "+" & "·" is a field if the following conditions are satisfied:
i. F is a commutative group under addition +.
ii. The identity element with respect to addition is called the zero element or the additive identity of F & is denoted by 0.
iii. The set of nonzero elements in F is a commutative group under multiplication. The identity element with respect to multiplication is called the unit element or the multiplicative identity of F and is denoted by 1.
iv. Multiplication is distributive over addition; that is, for any three elements a, b, & c in F,
a·(b + c) = a·b + a·c
Modern Algebra…
 Fields:
 It follows from the definition that a field consists of at least two elements, the additive and multiplicative identities.

 The number of elements in a field is called the order of the field.

 A field with a finite number of elements is called a finite field.

 In a field, the additive inverse of an element a is denoted by -a, and the multiplicative inverse of a is denoted by a^-1.

 For example:

 GF(2) = {0, 1} is the smallest finite field, the Galois field of 2 elements.
Modern Algebra…
 Vector Spaces Vn:
 Let V be a set of elements on which a binary operation called addition, +, is defined. Let F be a field.

 A multiplication operation "·" between the elements of F and the elements of V is also defined.

 The set V is called a vector space over the field F if it satisfies the following conditions:
 V is a commutative group under addition.

 For any element a in F and any element v in V, a·v is an element in V.
Modern Algebra…
 Vector Spaces Vn:
 (Distributive Law) For any elements u and v in V and any elements a and b in F,
a·(u + v) = a·u + a·v,
(a + b)·v = a·v + b·v.
 (Associative Law) For any v in V and any a and b in F,
(a·b)·v = a·(b·v).
 Let 1 be the unit element of F. Then, for any v in V, 1·v = v.
 The elements of V are called vectors & the elements of the field F are called scalars.
 The addition on V is called vector addition.
 The multiplication that combines a scalar in F and a vector in V is referred to as scalar multiplication (or product).
 The additive identity of V is denoted by 0.
Modern Algebra…
 Vector Spaces Vn:
 Let n = 5. The vector space V5 of all 5-tuples over GF(2) consists of the following set of 32 distinct vectors:
00000 00001 00010 00011
00100 00101 00110 00111
01000 01001 01010 01011
01100 01101 01110 01111
10000 10001 10010 10011
10100 10101 10110 10111
11000 11001 11010 11011
11100 11101 11110 11111

 These are the linear combinations of the basis vectors, or spanning set.
Modern Algebra…
 Vector Spaces Vn:
 Addition of vectors: let v1 = (1 0 1 1 1) & v2 = (1 1 0 0 1).
 The vector sum of v1 & v2 is
(10111) + (11001) = (1+1, 0+1, 1+0, 1+0, 1+1) = (01110).
 Scalar multiplication with vectors: let "0" & "1" be the scalars,
0·(11010) = (0·1, 0·1, 0·0, 0·1, 0·0) = (00000),
1·(11010) = (1·1, 1·1, 1·0, 1·1, 1·0) = (11010).
 The vector space of all n-tuples over any field F is constructed in a similar manner.
 However, we are mostly concerned with the vector space of all n-tuples over GF(2) or over an extension field of GF(2) [e.g. GF(2^m)].
 Because V is a vector space over a field F, it may happen that a subset S of V is also a vector space over F.
 Such a subset is called a subspace of V.
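The componentwise vector addition and scalar multiplication over GF(2) can be sketched directly (helper names are illustrative):

```python
def vadd(u, v):
    # componentwise modulo-2 addition of two binary n-tuples
    return tuple((a + b) % 2 for a, b in zip(u, v))

def smul(c, v):
    # multiplication by a scalar c from GF(2) = {0, 1}
    return tuple((c * a) % 2 for a in v)

v1 = (1, 0, 1, 1, 1)
v2 = (1, 1, 0, 0, 1)
print(vadd(v1, v2))              # (0, 1, 1, 1, 0)
print(smul(0, (1, 1, 0, 1, 0)))  # (0, 0, 0, 0, 0)
print(smul(1, (1, 1, 0, 1, 0)))  # (1, 1, 0, 1, 0)
```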
Modern Algebra…
Vector Spaces Vn:
 Let S be a nonempty subset of a vector space V over a field F. Then S is a subspace of V if the following conditions are satisfied:

 For any two vectors u & v in S, u + v is also a vector in S.

 For any element a in F & any vector u in S, a·u is also in S.

 Consider the vector space of all 5-tuples over GF(2): the set
{(00000), (00111), (11010), (11101)}
satisfies both conditions and hence is a subspace.
Modern Algebra…
 Vector Spaces Vn:
 Linear Combination of vectors
 Let v1, v2, ⋯, vk be k vectors in a vector space V over a field F, and let a1, a2, ⋯, ak be k scalars from F. The sum of products of scalars and vectors, that is,
a1·v1 + a2·v2 + ⋯ + ak·vk,
is called a linear combination of v1, v2, ⋯, vk.
 Clearly, the sum of two linear combinations of v1, v2, ⋯, vk,
(a1·v1 + a2·v2 + ⋯ + ak·vk) + (b1·v1 + b2·v2 + ⋯ + bk·vk)
= (a1 + b1)·v1 + (a2 + b2)·v2 + ⋯ + (ak + bk)·vk,
is also a linear combination of v1, v2, ⋯, vk.

 Scalar Product:
 The product of a scalar c in F & a linear combination of v1, v2, ⋯, vk is

c·(a1·v1 + a2·v2 + ⋯ + ak·vk) = (c·a1)·v1 + (c·a2)·v2 + ⋯ + (c·ak)·vk,

which is also a linear combination of v1, v2, ⋯, vk.
Modern Algebra…
Vector Spaces Vn:
 Statement:

 Let v1, v2, ⋯, vk be k vectors in a vector space V over a field F. The set of all linear combinations of v1, v2, ⋯, vk forms a subspace of V.

 Definition:

 A set of vectors v1, v2, ⋯, vk in a vector space V over a field F is said to be linearly dependent if and only if there exist k scalars a1, a2, ⋯, ak from the field F, not all zero, such that
a1·v1 + a2·v2 + ⋯ + ak·vk = 0.

 A set of vectors v1, v2, ⋯, vk is said to be linearly independent if it is not linearly dependent.
Modern Algebra…
 Vector Spaces Vn :
 That is, v1, v2, ⋯, vk are linearly independent if and only if
a1·v1 + a2·v2 + ⋯ + ak·vk ≠ 0
 unless a1 = a2 = ⋯ = ak = 0.
 Example:

 Consider the vector space of all 5-tuples over GF(2). The linear combinations of (00111) & (11101) are

0·(00111) + 0·(11101) = (00000)
0·(00111) + 1·(11101) = (11101)
1·(00111) + 0·(11101) = (00111)
1·(00111) + 1·(11101) = (11010)
Modern Algebra…
Vector Spaces Vn :
 Example:
 The vectors (10110), (01001), & (11111) are linearly dependent, since
1·(10110) + 1·(01001) + 1·(11111) = (00000);

 However, (10110), (01001), & (11011) are linearly independent.

 All eight combinations of these vectors are given here:

0·(10110) + 0·(01001) + 0·(11011) = (00000),
0·(10110) + 0·(01001) + 1·(11011) = (11011),
0·(10110) + 1·(01001) + 0·(11011) = (01001),
0·(10110) + 1·(01001) + 1·(11011) = (10010),
1·(10110) + 0·(01001) + 0·(11011) = (10110),
1·(10110) + 0·(01001) + 1·(11011) = (01101),
1·(10110) + 1·(01001) + 0·(11011) = (11111),
1·(10110) + 1·(01001) + 1·(11011) = (00100).

A set of vectors is said to span a vector space V if every vector in V is a linear combination of the vectors in the set.
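The dependence test above can be checked by enumerating all 2^k linear combinations over GF(2); a small sketch:

```python
from itertools import product

def vadd(u, v):
    # componentwise modulo-2 addition
    return tuple((a + b) % 2 for a, b in zip(u, v))

def dependent(vectors):
    # linearly dependent iff some combination with a nonzero
    # coefficient tuple sums to the all-zero vector
    n = len(vectors[0])
    zero = (0,) * n
    for coeffs in product([0, 1], repeat=len(vectors)):
        if not any(coeffs):
            continue
        s = zero
        for c, v in zip(coeffs, vectors):
            if c:
                s = vadd(s, v)
        if s == zero:
            return True
    return False

print(dependent([(1,0,1,1,0), (0,1,0,0,1), (1,1,1,1,1)]))  # True
print(dependent([(1,0,1,1,0), (0,1,0,0,1), (1,1,0,1,1)]))  # False
```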
Modern Algebra…
 Vector Spaces Vn :
 Basis Vector
 In any vector space or subspace there exists at least one set B of linearly independent vectors that span the space.

 This is called a basis (or base) of the vector space.

 The number of vectors in a basis of a vector space is called the dimension of the vector space.

 Consider the vector space Vn of all n-tuples over GF(2).

 Let us form the following n n-tuples:

e0 = (1, 0, 0, 0, ⋯, 0, 0)
e1 = (0, 1, 0, 0, ⋯, 0, 0)
⋮
e_(n-1) = (0, 0, 0, 0, ⋯, 0, 1)
Modern Algebra…
 Vector Spaces Vn :
 where the n-tuple ei has only one nonzero component, at the ith position.

 Then every n-tuple (a0, a1, a2, ⋯, a_(n-1)) in Vn can be expressed as a linear combination of e0, e1, ⋯, e_(n-1) as follows:
(a0, a1, a2, ⋯, a_(n-1)) = a0·e0 + a1·e1 + a2·e2 + ⋯ + a_(n-1)·e_(n-1).
 Therefore e0, e1, ⋯, e_(n-1) span the vector space Vn of n-tuples over GF(2).

 We also see that e0, e1, ⋯, e_(n-1) are linearly independent.

 Hence, they form a basis for Vn, & the dimension of Vn is n.

 If k < n & v1, v2, ⋯, vk are k linearly independent vectors in Vn, then all the linear combinations of v1, v2, ⋯, vk of the form u = c1·v1 + c2·v2 + ⋯ + ck·vk form a k-dimensional subspace S of Vn.
Modern Algebra…
Vector Spaces Vn :
 Because each ci has two possible values, 0 or 1, there are 2^k possible distinct linear combinations of v1, v2, …, vk.

 Thus, S consists of 2^k vectors and is a k-dimensional subspace of Vn.

 Let u = (u0, u1, ⋯, u_(n-1)) & v = (v0, v1, ⋯, v_(n-1)) be two n-tuples in Vn.

 We define the inner product (or dot product) of u & v as:

u·v = u0·v0 + u1·v1 + ⋯ + u_(n-1)·v_(n-1)
 where the multiplications ui·vi and additions ui·vi + u_(i+1)·v_(i+1) are carried out in modulo-2 arithmetic.

 Hence, the inner product u·v is a scalar in GF(2).

 If u·v = 0, u & v are said to be orthogonal to each other.
Modern Algebra…
Vector Spaces Vn :
 Statement:
 Let S be a k-dimensional subspace of the vector space Vn of n-tuples over GF(2).
 The dimension of its null space Sd is n-k; in other words,
dim(S) + dim(Sd) = n
 Irreducible polynomial:
 For a polynomial f(X) over GF(2), if the polynomial has an even number of terms, it is divisible by X + 1.
 A polynomial p(X) over GF(2) of degree m is said to be irreducible over GF(2) if it is not divisible by any polynomial over GF(2) of degree less than m but greater than zero.
Modern Algebra…
 Irreducible polynomial p(X):
 Among the four polynomials of degree 2, the polynomials X^2, X^2 + 1, & X^2 + X are not irreducible, since they are divisible by X or X + 1;

 However, X^2 + X + 1 does not have either 0 or 1 as a root & so is not divisible by any polynomial of degree 1.

 Therefore X^2 + X + 1 is not divisible by X or X + 1.

 Therefore, X^2 + X + 1 is an irreducible polynomial of degree 2.

 The polynomial X^3 + X + 1 is an irreducible polynomial of degree 3.

 X^3 + X + 1 is neither divisible by any polynomial of degree 1 nor by any polynomial of degree 2.
Modern Algebra …
 An irreducible polynomial p(X) of degree m is said to be primitive if the smallest positive integer n for which p(X) divides X^n + 1 is n = 2^m - 1.
 For example, p(X) = X^3 + X + 1 is a primitive (and hence irreducible) polynomial over GF(2).

 A primitive polynomial helps us to construct the extension field GF(2^m) of GF(2), in which the roots of p(X) exist.
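The primitivity definition can be tested mechanically; a sketch representing GF(2) polynomials as integer bitmasks (bit i = coefficient of X^i; function names are illustrative):

```python
def poly_mod(a, m):
    # remainder of a(X) divided by m(X) over GF(2)
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def smallest_n(p):
    # smallest positive n such that p(X) divides X^n + 1
    n = 1
    while poly_mod((1 << n) | 1, p) != 0:
        n += 1
    return n

p = 0b1011  # p(X) = X^3 + X + 1
print(smallest_n(p))  # 7 == 2**3 - 1, so p(X) is primitive
```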

Modern Algebra
 Extension field GF(2^m) (where m is the degree of the irreducible polynomial):
 Let m = 4. The polynomial p(X) = 1 + X + X^4 is a primitive polynomial over GF(2); set p(α) = 1 + α + α^4 = 0.
 α is a primitive element and exists in the extension field GF(2^m) of GF(2).
 Then α^4 = 1 + α; using this relation, we can construct GF(2^4).
 The identity α^4 = 1 + α is used repeatedly to form the polynomial representation for the elements of GF(2^4):

α^5 = α·α^4 = α(1 + α) = α + α^2
α^6 = α·α^5 = α(α + α^2) = α^2 + α^3
α^7 = α·α^6 = α(α^2 + α^3) = α^3 + α^4 = α^3 + 1 + α = 1 + α + α^3
Modern Algebra

 To multiply two elements α^i & α^j, we simply add their exponents & use the fact that α^15 = 1.
 For example, α^5·α^7 = α^12 & α^12·α^7 = α^19 = α^4.
 Dividing α^j by α^i, we simply multiply α^j by the multiplicative inverse α^(15-i) of α^i.
 For example, α^4/α^12 = α^4·α^3 = α^7 & α^12/α^5 = α^12·α^10 = α^22 = α^7.
 To add α^i & α^j, we use their polynomial representations given above:

α^5 + α^7 = (α + α^2) + (1 + α + α^3) = 1 + α^2 + α^3 = α^13,

1 + α^5 + α^10 = 1 + (α + α^2) + (1 + α + α^2) = 0.
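The arithmetic above can be sketched by building the exponent table of GF(2^4) from α^4 = 1 + α (a minimal sketch; the zero element has no logarithm, so `add` here assumes a nonzero sum):

```python
# exp[i] is the polynomial form of alpha^i as a 4-bit integer
# b0 + b1*alpha + b2*alpha^2 + b3*alpha^3
exp = [1]  # alpha^0
for _ in range(14):
    x = exp[-1] << 1           # multiply by alpha
    if x & 0b10000:            # reduce with alpha^4 = 1 + alpha
        x ^= 0b10011
    exp.append(x)
log = {v: i for i, v in enumerate(exp)}

def mul(i, j):
    # alpha^i * alpha^j = alpha^((i+j) mod 15)
    return (i + j) % 15

def add(i, j):
    # add polynomial forms, return the exponent of the (nonzero) sum
    return log[exp[i] ^ exp[j]]

print(mul(5, 7))             # 12
print(add(5, 7))             # 13: alpha^5 + alpha^7 = alpha^13
print(exp[5] ^ exp[10] ^ 1)  # 0: 1 + alpha^5 + alpha^10 = 0
```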

Modern Algebra
 Extension field GF(2m): (Where m is the degree of irreducible
polynomial)
 Table 1: All 16 elements of GF(16), i.e. GF(2^4), given as powers of α, in polynomial form in α, & in 4-tuple (binary) form. [Table not reproduced here.]
Modern Algebra …
 Factorization of X^n + 1 over GF(2), where n = 2^m - 1
 Let f(X) be a polynomial with coefficients from GF(2). Let β be an element in the extension field GF(2^m).

 If β is a root of f(X), then for any l ≥ 0, β^(2^l) is also a root of f(X).

 The element β^(2^l) is called a conjugate of β.

 The 2^m - 1 nonzero elements of GF(2^m) form all the roots of X^(2^m - 1) + 1.

 The elements of GF(2^m) form all the roots of X^(2^m) + X.

 Let ∅(X) be the polynomial of smallest degree over GF(2) such that ∅(β) = 0. Then ∅(X) is called the minimal polynomial of β; ∅(X) is unique.

 The minimal polynomial ∅(X) of the field element β is irreducible.
Modern Algebra …
 Factorization of X^n + 1 over GF(2), where n = 2^m - 1
 Consider the Galois field GF(16). Let β = α^3.
 The conjugates of β are
β^2 = α^6, β^(2^2) = α^12, β^(2^3) = α^24 = α^9.

 The minimal polynomial of β = α^3 is then

∅(X) = (X + α^3)(X + α^6)(X + α^9)(X + α^12)
 Multiplying out the right-hand side of the preceding equation, we obtain
∅(X) = (X^2 + (α^3 + α^6)X + α^9)(X^2 + (α^12 + α^9)X + α^21)
= (X^2 + α^2·X + α^9)(X^2 + α^8·X + α^6)
= X^4 + (α^2 + α^8)X^3 + (α^6 + α^10 + α^9)X^2 + (α^17 + α^8)X + α^15
= X^4 + X^3 + X^2 + X + 1
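The expansion above can be reproduced numerically; a sketch using 4-bit integers for GF(16) elements, with α^4 = 1 + α for reduction:

```python
# exp[i] = polynomial form of alpha^i in GF(16)
exp = [1]
for _ in range(14):
    x = exp[-1] << 1
    if x & 0b10000:
        x ^= 0b10011
    exp.append(x)

def gf_mul(a, b):
    # multiply two polynomial-form elements of GF(16)
    r = 0
    for i in range(4):
        if b & (1 << i):
            r ^= a << i
    for i in (6, 5, 4):            # reduce degrees 6..4 via alpha^4 = 1 + alpha
        if r & (1 << i):
            r ^= 0b10011 << (i - 4)
    return r

# phi(X) as a list of GF(16) coefficients, lowest degree first
phi = [1]
for e in (3, 6, 12, 9):            # the conjugacy class of alpha^3
    root = exp[e]
    new = [0] * (len(phi) + 1)     # multiply phi(X) by (X + root)
    for i, c in enumerate(phi):
        new[i + 1] ^= c            # c * X
        new[i] ^= gf_mul(c, root)  # c * root
    phi = new

print(phi)  # [1, 1, 1, 1, 1] -> X^4 + X^3 + X^2 + X + 1
```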

Modern Algebra …
 Factorization of X^n + 1 over GF(2), where n = 2^m - 1
 Consider the element α^5 in GF(16). Since (α^5)^(2^2) = α^20 = α^5,
 the only conjugate of α^5 is α^10. Both α^5 & α^10 have order 3. The minimal polynomial of α^5 & α^10 is X^2 + X + 1, whose degree is a factor of m = 4.

 The conjugates of α^3 are α^6, α^9, & α^12; they all have order 5.

 The conjugates of α^7 are α^11, α^13, & α^14. The minimal polynomial of α^7 is:
(X + α^7)(X + α^11)(X + α^13)(X + α^14)
= (X^2 + (α^7 + α^11)X + α^18)(X^2 + (α^13 + α^14)X + α^27)
= (X^2 + α^8·X + α^3)(X^2 + α^2·X + α^12)
= X^4 + (α^8 + α^2)X^3 + (α^12 + α^10 + α^3)X^2 + (α^20 + α^5)X + α^15
= X^4 + X^3 + 1

Modern Algebra…
 Factorization of X^n + 1 over GF(2), where n = 2^m - 1
 The conjugacy class of each element of GF(16) and the corresponding minimal polynomial are given in the table below.
 Therefore the factorization of X^15 + 1 into the minimal polynomials of the nonzero elements of GF(16) is as follows:
X^15 + 1 = (X + 1)(X^4 + X + 1)(X^4 + X^3 + X^2 + X + 1)(X^2 + X + 1)(X^4 + X^3 + 1)

Conjugate Roots              Minimal Polynomials
0                            X
1                            X + 1
α, α^2, α^4, α^8             X^4 + X + 1
α^3, α^6, α^9, α^12          X^4 + X^3 + X^2 + X + 1
α^5, α^10                    X^2 + X + 1
α^7, α^11, α^13, α^14        X^4 + X^3 + 1
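The factorization can be verified by multiplying the five minimal polynomials over GF(2); a sketch with polynomials as integer bitmasks (bit i = coefficient of X^i):

```python
def poly_mul(a, b):
    # multiply two GF(2) polynomials stored as integer bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

factors = [
    0b11,     # X + 1
    0b10011,  # X^4 + X + 1
    0b11111,  # X^4 + X^3 + X^2 + X + 1
    0b111,    # X^2 + X + 1
    0b11001,  # X^4 + X^3 + 1
]
prod = 1
for f in factors:
    prod = poly_mul(prod, f)
print(bin(prod))  # 0b1000000000000001, i.e. X^15 + 1
```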
Modern Algebra …
Factorization of X^n + 1 over GF(2), where n = 2^m - 1
 The systematic (n, k) cyclic code is a subclass of linear block codes, constructed using a generator polynomial of degree n-k

 and a parity-check polynomial of degree k, where n is the codeword length and k the message length:
X^n + 1 = g(X)·h(X)
 Theorem 1:
 The generator polynomial g(X) of an (n, k) cyclic code is a factor of X^n + 1, where n = 2^m - 1.
 Theorem 2:
 If g(X) is a polynomial of degree (n-k) and is a factor of X^n + 1, then g(X) generates an (n, k) cyclic code, where n = 2^m - 1.
Construction of Codes
 Linear Block Code (n, k):
 Information in most current digital data communication and storage systems is coded in the binary digits 0 & 1; we discuss only linear block codes with symbols from the binary field GF(2).

 In block coding the information sequence is segmented into message blocks of fixed length; each message block, denoted by u, consists of k information digits.

 There are 2^k distinct messages.

 The encoder, according to v = u·G (mod 2), transforms each input message u into a binary n-tuple v with n > k.
Construction of Codes …
 Linear Block Codes (n, k):
 These binary n-tuples are called the codewords (code vectors) of the message u.

 Therefore, corresponding to the 2^k possible messages there are 2^k codewords.

 This set of 2^k codewords is called a block code.

 For the block code to be useful, the 2^k codewords must be distinct.

 Therefore, there should be a one-to-one correspondence between a message u and its codeword v.
Construction of Codes …
Linear Block Codes (n, k):
 A block code of length n with 2^k codewords is called a linear (n, k) code if and only if its 2^k codewords form a k-dimensional subspace of the vector space of all the n-tuples over the field GF(2).

 For a block code with 2^k codewords and length n, unless it has a certain special structure, the encoding apparatus will be prohibitively complex for large k & n,

 since it would have to store the 2^k codewords of length n in a dictionary.

 A desirable structure for a block code to possess is linearity, which greatly reduces the encoding complexity.
Construction of Codes …
Linear Block Codes (n, k):
 Matrix representation of codes
 In fact, a binary code is linear if & only if the modulo-2 sum of two codewords is also a codeword.
 Because an (n, k) linear code C is a k-dimensional subspace of the vector space Vn of all the binary n-tuples, it is possible to find k linearly independent codewords g0, g1, ⋯, g_(k-1) such that every codeword v in C is a linear combination of these k codewords; that is,
v = u0·g0 + u1·g1 + ⋯ + u_(k-1)·g_(k-1)
 where ui = 0 or 1 for 0 ≤ i < k.
 We arrange these k linearly independent codewords as the rows of a k × n matrix as follows:
Construction of Codes …
Linear Block Codes (n , k):

        | g0      |   | g0,0       g0,1       g0,2       ⋯  g0,n-1      |
    G = | g1      | = | g1,0       g1,1       g1,2       ⋯  g1,n-1      |
        | ⋮       |   | ⋮                                               |
        | g(k-1)  |   | g(k-1),0   g(k-1),1   g(k-1),2   ⋯  g(k-1),n-1  |
 where gi = (gi,0, gi,1, ⋯, gi,n-1) for 0 ≤ i < k.
 If u = (u0, u1, ⋯, u_(k-1)) is the message to be encoded, the corresponding codeword can be given as
v = u·G = u0·g0 + u1·g1 + ⋯ + u_(k-1)·g_(k-1)
Construction of Codes …
 Linear Block Codes (n , k):

v = u0·g0 + u1·g1 + ⋯ + u_(k-1)·g_(k-1)
 Clearly, the rows of G generate (or span) the linear code C.

 For this reason the matrix G is called a generator matrix for C.

 Note that any k linearly independent codewords of an (n, k) linear code can be used to form a generator matrix for the code.

 An (n, k) linear code is completely specified by the k rows of the generator matrix G.

 Therefore the encoder has only to store the k rows of G and to form a linear combination of these k rows based on the input message u = (u0, u1, ⋯, u_(k-1)).
Construction of Codes …
Linear Block Codes (n , k):
 Here P is the k × (n-k) parity submatrix, where p(i,j) = 0 or 1.

 Let Ik denote the k × k identity matrix.

 Then G = [P | Ik].
Construction of Codes …
Linear Block Codes (n , k):
 A desirable property for a linear block code to possess is the systematic structure of the codeword.

 A codeword is divided into two parts, the message part and the redundant checking part ((n-k) parity bits).
Construction of Codes …
 Linear Block Code (n, k):
 Let u = (u0, u1, ⋯, u_(k-1)) be the message to be encoded.
 The corresponding codeword is
v = (v0, v1, ⋯, v_(n-1)) = (u0, u1, ⋯, u_(k-1))·G
 The components of v are:
v_(n-k+i) = ui for 0 ≤ i < k, & vj = u0·p0,j + u1·p1,j + ⋯ + u_(k-1)·p(k-1),j for 0 ≤ j < n-k.
 The rightmost k digits of a codeword v are identical to the information digits u0, u1, ⋯, u_(k-1) to be encoded.
 The leftmost n-k redundant digits are linear sums of the information digits.
 The n-k equations are called the parity-check equations of the code.
Construction of Codes …
Encoding:
 Encoding, i.e. constructing the encoded sequence, is as follows:

 The (7, 4) linear code given in the table has the following matrix as a generator matrix:

        | g0 |   | 1 1 0 1 0 0 0 |
    G = | g1 | = | 0 1 1 0 1 0 0 |
        | g2 |   | 1 1 1 0 0 1 0 |
        | g3 |   | 1 0 1 0 0 0 1 |
 If u = (1 1 0 1) is the message to be encoded, its corresponding codeword is
v = 1·g0 + 1·g1 + 0·g2 + 1·g3
v = (1 1 0 1 0 0 0) + (0 1 1 0 1 0 0) + (1 0 1 0 0 0 1)
v = (0 0 0 1 1 0 1)
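The row-sum encoding above can be sketched directly (this assumes the systematic generator matrix used elsewhere in these slides):

```python
G = [
    (1, 1, 0, 1, 0, 0, 0),
    (0, 1, 1, 0, 1, 0, 0),
    (1, 1, 1, 0, 0, 1, 0),
    (1, 0, 1, 0, 0, 0, 1),
]

def encode(u, G):
    # v = u . G (mod 2): sum the rows of G selected by the message bits
    v = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            v = [(a + b) % 2 for a, b in zip(v, row)]
    return tuple(v)

print(encode((1, 1, 0, 1), G))  # (0, 0, 0, 1, 1, 0, 1)
```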
Construction of Codes …
 Linear Block Code (n, k):
 The rightmost 4 digits of each codeword are identical to the corresponding information digits.
 A linear systematic (n, k) code is completely specified by a k × n matrix G of the form above.
 Table 2 for the (7, 4) systematic block code is given below:
Message Codeword | Message Codeword [table not reproduced here]
Construction of Codes …
Encoding:
 The matrix G given is in systematic form. Let u = (u0, u1, u2, u3) be the message to be encoded & v = (v0, v1, v2, v3, v4, v5, v6) be the corresponding codeword. Then
                       | 1 1 0 1 0 0 0 |
v = (u0, u1, u2, u3) · | 0 1 1 0 1 0 0 |
                       | 1 1 1 0 0 1 0 |
                       | 1 0 1 0 0 0 1 |
 By matrix multiplication, we get the following digits of the codeword v:

v6 = u3, v5 = u2, v4 = u1, v3 = u0,
v2 = u1 + u2 + u3, v1 = u0 + u1 + u2, v0 = u0 + u2 + u3
 The codeword corresponding to the message u = (1 0 1 1) is v = (1 0 0 1 0 1 1).
Construction of Codes …
Calculations: C = u·G (mod 2)
                | 1 1 0 1 0 0 0 |
C = (1 0 1 1) · | 0 1 1 0 1 0 0 |
                | 1 1 1 0 0 1 0 |
                | 1 0 1 0 0 0 1 |

c1 = 1·1 + 0·0 + 1·1 + 1·1 = 1,
c2 = 1·1 + 0·1 + 1·1 + 1·0 = 0,
c3 = 1·0 + 0·1 + 1·1 + 1·1 = 0,
c4 = 1·1 + 0·0 + 1·0 + 1·0 = 1,
c5 = 1·0 + 0·1 + 1·0 + 1·0 = 0,
c6 = 1·0 + 0·0 + 1·1 + 1·0 = 1,
c7 = 1·0 + 0·0 + 1·0 + 1·1 = 1.
 The codeword is
C = (1 0 0 1 0 1 1)
Construction of Codes …
Encoding:
 Statement:
 For any k × n matrix G over GF(2) with k linearly independent rows, there exists an (n-k) × n matrix H over GF(2) with n-k linearly independent rows such that for any row gi in G & any row hj in H, gi·hj = 0.
 The row space of G is the null space of H, and vice versa.
 Consider the following 3 × 6 matrix over GF(2):
    | 1 1 0 1 1 0 |
G = | 0 0 1 1 1 0 |
    | 0 1 0 0 1 1 |
 The null space of the row space of this matrix is the row space of
    | 1 0 1 1 0 0 |
H = | 0 1 1 0 1 0 |
    | 1 1 0 0 0 1 |
 We can easily check that each row of G is orthogonal to each row of H:
G·H^T = 0 (mod 2)
 Hence, for any codeword r in the row space of G, r·H^T = 0 (mod 2).
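The orthogonality claim G·H^T = 0 (mod 2) can be checked directly:

```python
G = [(1, 1, 0, 1, 1, 0), (0, 0, 1, 1, 1, 0), (0, 1, 0, 0, 1, 1)]
H = [(1, 0, 1, 1, 0, 0), (0, 1, 1, 0, 1, 0), (1, 1, 0, 0, 0, 1)]

def dot(u, v):
    # inner product over GF(2)
    return sum(a * b for a, b in zip(u, v)) % 2

GHt = [[dot(g, h) for h in H] for g in G]
print(GHt)  # all-zero 3x3 matrix
```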
Construction of Codes…
Decoding of Linear Block Codes
 Decoding (to recover the original n-bit sequence transmitted through the channel):

 Compute r·H^T.

 If the result is r·H^T = 0 (mod 2), then the received codeword is taken to be the transmitted one.

 Errors are detected by the syndrome computation method and then corrected.

 The syndrome selects the error-pattern vector, which is added to the received vector, and the error is thereby rectified.

 Consider the simple code construction explained above.
Decoding of Linear Block Codes
Decoding:
 Let the parity-check matrix of the (7,4) Hamming code be
    | 1 0 0 1 0 1 1 |
H = | 0 1 0 1 1 1 0 |
    | 0 0 1 0 1 1 1 |

 Suppose that the codeword vector v = (1 0 0 1 0 1 1) is transmitted & r = (1 0 0 1 1 1 1) is the received vector.

 For decoding we compute the syndrome of r:
S = r·H^T
                      | 1 0 0 |
                      | 0 1 0 |
                      | 0 0 1 |
S = (1 0 0 1 1 1 1) · | 1 1 0 |
                      | 0 1 1 |
                      | 1 1 1 |
                      | 1 0 1 |
S = (0 1 1)
Decoding of Linear Block Codes
Decoding:
 The correctable error patterns & their corresponding syndromes are given in Table 3.
 The error pattern corresponding to the syndrome (0 1 1) is e = (0 0 0 0 1 0 0), caused by the channel.
 The received vector r is decoded into
v* = r + e
v* = (1 0 0 1 1 1 1) + (0 0 0 0 1 0 0)
v* = (1 0 0 1 0 1 1)
Table 3: The correctable error patterns & their corresponding syndromes
Syndrome    Coset Leader
(1 0 0)     (1 0 0 0 0 0 0)
(0 1 0)     (0 1 0 0 0 0 0)
(0 0 1)     (0 0 1 0 0 0 0)
(1 1 0)     (0 0 0 1 0 0 0)
(0 1 1)     (0 0 0 0 1 0 0)
(1 1 1)     (0 0 0 0 0 1 0)
(1 0 1)     (0 0 0 0 0 0 1)
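The table-lookup decoding above can be sketched end to end (Table 3 is rebuilt here from the columns of H, since each single-error syndrome is the corresponding column):

```python
H = [
    (1, 0, 0, 1, 0, 1, 1),
    (0, 1, 0, 1, 1, 1, 0),
    (0, 0, 1, 0, 1, 1, 1),
]

def syndrome(r):
    # s = r . H^T over GF(2)
    return tuple(sum(a * b for a, b in zip(r, row)) % 2 for row in H)

# syndrome -> coset leader, for the seven single-error patterns
leaders = {}
for i in range(7):
    e = tuple(int(j == i) for j in range(7))
    leaders[syndrome(e)] = e

r = (1, 0, 0, 1, 1, 1, 1)
s = syndrome(r)
e = leaders.get(s, (0,) * 7)
v = tuple((a + b) % 2 for a, b in zip(r, e))
print(s)  # (0, 1, 1)
print(v)  # (1, 0, 0, 1, 0, 1, 1), the transmitted codeword
```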
Decoding of Linear Block Codes
Decoding:
 If two or more errors occur during transmission through the channel, the decoder commits an error.
 Let v = (0 0 0 0 0 0 0) be transmitted and r = (1 0 0 0 0 1 0) be received.
 We see that two errors have occurred during the transmission of v; this error pattern is not correctable and will cause a decoding error.
 When r is received, the receiver computes the syndrome s = r·H^T = (0 1 1); from the decoding table we see that the corresponding coset leader is e = (0 0 0 0 1 0 0), so
v* = r + e = (1 0 0 0 1 1 0)
 Since v* is not the actual code vector transmitted, the decoder commits an error.
Construction of Codes …
Cyclic Codes (n, k)
 The systematic (n, k) cyclic code is a subclass of linear block codes which is constructed using a generator polynomial.

 Theorem:
 The generator polynomial g(X) of an (n, k) cyclic code is a factor of X^n + 1.

 Theorem:
 If g(X) is a polynomial of degree (n-k) and is a factor of X^n + 1, then g(X) generates an (n, k) cyclic code.
Construction of codes …
Generalized Block Diagram of Encoding of (7,4)
Code

Fig 2 Generalized Block Diagram of Encoding of (7,4) Code


Construction of codes …
 Block Diagram of Encoding of (7,4) Code, where n=7 and k=4


Fig 3 Block Diagram of Encoding of (7,4) Code


Construction of codes …
General Block Diagram of Decoding

Fig 4 General Block Diagram of Decoding


Construction of codes …
 Diagram for Syndrome Computation:

Fig 5 Diagram for Syndrome Computation
Construction of codes…
 Diagram for Error Correction:

Fig 6 Diagram for Error Correction
Construction of Codes …
Analytical Method for Cyclic Codes (n, k)
 Encoding in systematic form consists of three steps:

 Pre-multiply the message u(X) by X^(n-k).

 Obtain the remainder b(X) from dividing X^(n-k)·u(X) by the generator polynomial g(X).

 Combine b(X) and X^(n-k)·u(X) to obtain the code polynomial b(X) + X^(n-k)·u(X).

 Consider the (7, 4) cyclic code generated by g(X) = 1 + X + X^3.

 Let u = (1 0 1 1), i.e. u(X) = 1 + X^2 + X^3, be the message to be encoded.

 Divide X^3·u(X) = X^3 + X^5 + X^6 by g(X).

 The codeword polynomial is v(X) = b(X) + X^3·u(X), i.e. the codeword is (b : u).
Construction of Codes …
Example of Analytical Method:
 Dividing X^6 + X^5 + X^3 by g(X) = X^3 + X + 1 gives quotient X^3 + X^2 + X + 1 and remainder b(X) = 1, i.e. parity bits b = (1 0 0).

Codeword = (1 0 0 1 0 1 1)
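The three encoding steps can be sketched with GF(2) polynomials as integer bitmasks (bit i = coefficient of X^i):

```python
def poly_mod(a, m):
    # remainder of a(X) divided by m(X) over GF(2)
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def cyclic_encode(u, g, n, k):
    shifted = u << (n - k)       # step 1: X^(n-k) u(X)
    b = poly_mod(shifted, g)     # step 2: parity polynomial b(X)
    return shifted | b           # step 3: b(X) + X^(n-k) u(X)

g = 0b1011  # g(X) = 1 + X + X^3
u = 0b1101  # u(X) = 1 + X^2 + X^3, i.e. u = (1 0 1 1)
v = cyclic_encode(u, g, n=7, k=4)
print(format(v, '07b')[::-1])  # '1001011', coefficients X^0..X^6
```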
Construction of Codes …
 Shift Register Method:
 Consider the (7,4) cyclic code generated by g(X) = 1 + X + X^3.
 Suppose that the message u = (1 0 1 1) is to be encoded.

Fig 7 Encoder circuit for the (7,4) cyclic code generated by g(X) = 1 + X + X^3
Construction of Codes …
 Example of Cyclic Encoding:

u = (1 0 1 1)

 Register update equations per shift:
S0 = S2 + m
S1 = S0 + m + S2
S2 = S1

 After four shifts, the contents of the register are (1 0 0).

 Thus, the complete codeword is (1 0 0 1 0 1 1), with parity bits (1 0 0) followed by message bits (1 0 1 1), and the code polynomial is 1 + X^3 + X^5 + X^6.
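The register updates above can be simulated; a sketch assuming the message bits are shifted in high-order first (the function name is illustrative):

```python
def lfsr_encode(msg):
    # (7,4) encoder register for g(X) = 1 + X + X^3;
    # msg = (u0, u1, u2, u3), shifted in as u3, u2, u1, u0
    s0 = s1 = s2 = 0
    for m in reversed(msg):
        f = (m + s2) % 2                 # feedback = input + S2
        s0, s1, s2 = f, (s0 + f) % 2, s1  # simultaneous register update
    return (s0, s1, s2) + tuple(msg)      # codeword = parity : message

print(lfsr_encode((1, 0, 1, 1)))  # (1, 0, 0, 1, 0, 1, 1)
```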
Construction of Codes …
Cyclic Decoding:
 A syndrome circuit for the (7,4) cyclic code generated by g(X) = 1 + X + X^3.
 Suppose that the received vector is r = (0 0 1 0 1 1 0).
 As the received vector is shifted into the circuit, the register contents are given in the table.
 At the end of the seventh shift, the register contains the syndrome
S = (1 0 1)
 If the register is shifted once more with the gate disabled, the new contents will be S^(1) = (1 0 0), which is the syndrome of r^(1) = (0 0 0 1 0 1 1), a cyclic shift of r.
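The syndrome register computes s(X) = r(X) mod g(X), and shifting it once more with the gate disabled computes X·s(X) mod g(X); a sketch reproducing S and S^(1):

```python
def poly_mod(a, m):
    # remainder of a(X) divided by m(X) over GF(2); bit i = coeff of X^i
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

g = 0b1011     # g(X) = 1 + X + X^3
r = 0b0110100  # r = (0 0 1 0 1 1 0): r(X) = X^2 + X^4 + X^5

s = poly_mod(r, g)
print(format(s, '03b')[::-1])   # '101': S = (1 0 1)

s1 = poly_mod(s << 1, g)        # one extra shift, gate disabled
print(format(s1, '03b')[::-1])  # '100': S(1) = (1 0 0)
```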

Construction of Codes …
Consider that r = (0 0 1 0 1 1 0) is received.

Fig 8 Syndrome circuit for the (7,4) cyclic code generated by g(X) = 1 + X + X^3
Construction of Codes …

Fig 9 General cyclic code decoder with received polynomial r(X) = r0 + r1·X + r2·X^2 + ⋯ + r_(n-1)·X^(n-1) shifted into the syndrome register from the left end.
Construction of Codes …
 Cyclic Decoding (Meggitt Decoder):
 Consider the decoding of the (7,4) cyclic code generated by g(X) = 1 + X + X^3.

 The single-error patterns form all the correctable error patterns.

 Consider r = (r0, r1, r2, ⋯, r_(n-1)).

 Suppose that the received polynomial r(X) = r0 + r1·X + r2·X^2 + ⋯ + r6·X^6 is shifted into the shift register from the left end.
Construction of Codes …
Decoding Using Shift Register Method
 Error Pattern and their syndromes:
 The received polynomial r(X) is shifted into the syndrome register
from the left end.
 The seven single-error patterns and their corresponding
syndromes are listed in Table 4
Table 4: Error Pattern and their syndromes in polynomial
and binary form

72 3/8/2021
Construction of Codes …
Cyclic Decoding (Meggit Decoder ):
 We see that e6(X) = X6 is the only error pattern with an error at
location X6; its syndrome is (1 0 1).

 If the single error instead occurs at location Xi with i ≠ 6, the syndrome in the
syndrome register will not be (1 0 1) after the entire received polynomial r(X)
has entered the syndrome register.

 However, after another 6−i shifts, the contents of the syndrome
register will be (1 0 1).

 The complete decoding circuit is shown in figure 10.

73 3/8/2021
Construction of Codes …

 Fig 10 Decoding circuit for the (7,4) cyclic code generated by


𝑔 𝑋 = 1 + 𝑋 + 𝑋3.
74 3/8/2021
Construction of Codes …

 Cyclic Decoding (Meggit Decoder ):

 Fig 11 shows the decoding process.

 Suppose that the code-word v = (1 0 0 1 0 1 1) or v(X) = 1+X^3+X^5+X^6 is transmitted

and r = (1 0 1 1 0 1 1) or r(X) = 1+X^2+X^3+X^5+X^6 is received.

 A single error occurs at location X2.

 There is a pointer to indicate the error location after each shift.

 We see that after four more shifts the contents in the syndrome register are (101),
and the erroneous digit r2 is the next digit to come out from the buffer register.

75 3/8/2021
Construction of Codes …

76 3/8/2021

Fig 11 Error correcting process of the shift register method
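The decoding process above can also be sketched as a syndrome table lookup, which for single errors is functionally equivalent to the Meggitt decoder (this is an illustrative sketch, not the shift-register circuit itself):

```python
# Sketch: single-error correction for the (7,4) cyclic code. The syndrome of
# the error pattern X^i identifies the error position i.
def poly_mod(p, g):
    rem = p[:]
    for i in range(len(rem) - 1, len(g) - 2, -1):
        if rem[i]:
            for j, d in enumerate(g):
                rem[i - len(g) + 1 + j] ^= d
    return rem[:len(g) - 1]

def decode_single_error(r, g, n=7):
    s = poly_mod(r[:], g)                     # syndrome of the received vector
    if any(s):
        for i in range(n):                    # find i with syndrome(X^i) == s
            e = [0] * n
            e[i] = 1
            if poly_mod(e, g) == s:
                r = r[:]
                r[i] ^= 1                     # correct the single error
                break
    return r

g = [1, 1, 0, 1]
r = [1, 0, 1, 1, 0, 1, 1]                     # v = 1001011 with an error at X^2
print(decode_single_error(r, g))              # -> [1, 0, 0, 1, 0, 1, 1]
```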


LDPC Codes
Codes under consideration here are Low-Density
Parity-Check (LDPC) codes.

The following steps are involved in the design of LDPC
codes:

Message Block for formation of LDPC Code


Generation of parity check ‘H’ matrix
Generation of Generator matrix ‘G’ from ‘H’ matrix
Encoding of the message using ‘G’ matrix
Decoding of received codeword
77 3/8/2021
LDPC Codes ….
 Generation of proper Parity Check Matrix (H) is the key to
success for LDPC codes.
 Parity Check Matrix (H)
 It is sparse matrix (Low-Density)
Matrix must have following properties;
 Row consists of fixed number of 1’s
 Column consists of fixed number of 1’s
 Any two columns should have 1’s in common at no more than
one position.
 Number of 1’s in rows and columns should be small compared to
the code length

 An LDPC code is called regular if its parity-check matrix ' H ' satisfies the first two
conditions; otherwise it is called an irregular LDPC code
 ‘H’ has small density of 1’s, hence the name Low Density parity-check codes

78 3/8/2021
LDPC Codes ….
 The digital communication system with an Error Correcting
Coding technique mainly consists of

 Code formation (Encoder) at transmitter

 Transmission of the codes through channel with BPSK


modulation and

 Retrieval of these codes at receiver end (Decoder).

79 3/8/2021
Encoder
Encoding is the process in which a k-bit message
(information block) is augmented with ‘n-k’ parity-check
bits to form an n-bit LDPC codeword
Let
‘m’ be message;
(m =[ 1 1 0]=[ m1, m2, m3], k =03 bit )
Then
LDPC code for the message shall be
c=mG
where,
‘G’ is Generator matrix
80 3/8/2021
Encoder….
Generator matrix ‘G’ is
Gk x n = [Ik P k x (n-k)]
where,
G is a generator matrix of size k×n
I is an identity matrix of size k×k
P is a parity bit matrix of size k×(n-k)
The ‘G’ matrix is derived from Parity Check matrix ‘H’, where ,
H(n-k)x n = [PT In-k ]
Relation between matrix G and H is such that
H.GT = 0

81 3/8/2021
Encoder….
Let us consider an example (6, 3)
 Consider the 3 bit message
m =[ 1 1 0]=[ m1, m2, m3]
 Code formation required ‘G’ matrix which can be
derived from ‘H’ matrix,
 Let ‘H’ matrix be of size 3×6
1 1 0 1 0 0
H  0 1 1 0 1 0
1 1 1 0 0 1

H  PT 
I n k 
82 3/8/2021
Encoder….
 Generator Matrix ‘G’
G = [Ik P]

    1 0 0 1 0 1
G = 0 1 0 1 1 1
    0 0 1 0 1 1

 LDPC Code Formation
C = mG

C = [1 1 0 0 1 0]

Thus the code C contains 3 message bits & 3 parity bits
83 3/8/2021
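The encoding step above can be reproduced in a few lines; a minimal sketch using the G matrix derived on the slide:

```python
# Sketch of the (6, 3) LDPC encoding C = mG (mod 2).
import numpy as np

G = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1]])
m = np.array([1, 1, 0])
C = m @ G % 2                # matrix product over GF(2)
print(C.tolist())            # -> [1, 1, 0, 0, 1, 0]
```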
Decoder
 Code word is sent through Additive White Gaussian Noise
(AWGN) channel with BPSK modulation.
 Received code word is checked for error using ‘H’ matrix as
H.CT = 0

For C = [1 1 0 0 1 0]:

       1.1 + 1.1 + 0.0 + 1.0 + 0.1 + 0.0     0
H.CT = 0.1 + 1.1 + 1.0 + 0.0 + 1.1 + 0.0  =  0
       1.1 + 1.1 + 1.0 + 0.0 + 0.1 + 1.0     0

 If the result is zero then the received code is correct; otherwise it is
erroneous.
84 3/8/2021
Decoder….
 If the error occurs in the code during transmission then
correction is made at receiver.
 Error occurred during transmission in the last bit, hence the
received code is C = [1 1 0 0 1 1]

       1.1 + 1.1 + 0.0 + 1.0 + 0.1 + 0.1     0
H.CT = 0.1 + 1.1 + 1.0 + 0.0 + 1.1 + 0.1  =  0
       1.1 + 1.1 + 1.0 + 0.0 + 0.1 + 1.1     1
 As the resultant is non-zero the received codeword is
erroneous. Error correction is done using Message
passing algorithm(MPA) 85 3/8/2021
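The two syndrome checks above can be sketched as:

```python
# Sketch: checking received words with H.C^T mod 2 (zero syndrome = valid).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

s_ok = (H @ np.array([1, 1, 0, 0, 1, 0]) % 2).tolist()   # correct codeword
s_bad = (H @ np.array([1, 1, 0, 0, 1, 1]) % 2).tolist()  # last bit in error
print(s_ok, s_bad)    # -> [0, 0, 0] [0, 0, 1]
```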
Decoder….
 Tanner graph of ‘H’ matrix plays an important role in
message passing algorithm.
Tanner Graph
• A Tanner graph is a graphical representation of the H matrix
• It consists of two sets of nodes,
1. Variable node vi = {v0, v1, …} &
2. Check node cj={c0, c1….,}
є = {e1, e2,…} is the set of edges, corresponding to the interactions of
check nodes & variable nodes in the H matrix
The number of edges incident on a vertex vi is called the
degree of variable node (dvi ) and that on check node cj is
called degree of check node(dcj) .

86 3/8/2021
Decoder….
 The Tanner Graph for the ‘H’ matrix under consideration is
as under.

    1 1 0 1 0 0
H = 0 1 1 0 1 0
    1 1 1 0 0 1

Fig: Tanner graph with variable nodes V0–V5 (top) connected to check nodes C0–C2 (bottom)
 A path that starts & ends at the same vertex is called a short
cycle
 The number of edges in a short cycle is called the length of the
short cycle
 Here, in the given Tanner graph, (v0 - c0 - v1 - c2 - v0) forms a
short cycle of length 4
 The length of the smallest short cycle in a Tanner graph is called the Girth of the
Tanner graph
87 3/8/2021
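Length-4 short cycles can be read directly from H: two columns that share 1's in two or more rows close a cycle of length 4. A small sketch:

```python
# Sketch: find variable-node pairs that form length-4 cycles in the Tanner
# graph of H (columns sharing 1's in >= 2 rows).
import numpy as np
from itertools import combinations

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

cycles4 = [(a, b) for a, b in combinations(range(H.shape[1]), 2)
           if int(H[:, a] @ H[:, b]) >= 2]   # rows shared by columns a and b
print(cycles4)    # -> [(0, 1), (1, 2)]
```

The pair (0, 1) corresponds to the cycle v0 - c0 - v1 - c2 - v0 identified above.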
Decoder….
 Steps in Message Passing Algorithm(MPA):
 Bits in the received codeword are assigned to the variable
nodes of the Tanner graph (Intrinsic Bits)
 Variable node gets extrinsic bits from their neighboring
check nodes (Interaction amongst vi & cj in H matrix)
 Correctness of the intrinsic bit is decided on the basis of
majority of bits received at variable node.
 Correction is made accordingly at variable node and again
condition H.CT = 0 is verified.
 The process is executed till correction of complete code
word.

88 3/8/2021
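The steps above can be sketched as a hard-decision bit-flipping decoder. The version below is a normalized variant (flip the bit with the largest fraction of failed checks), not an exact transcription of the slide's majority rule:

```python
# Sketch of a normalized bit-flipping decoder for the (6, 3) example.
import numpy as np

def bit_flip_decode(H, c, max_iter=10):
    c = c.copy()
    deg = H.sum(axis=0)                      # variable-node degrees
    for _ in range(max_iter):
        s = H @ c % 2                        # syndrome: 1 = unsatisfied check
        if not s.any():
            break                            # valid codeword reached
        fails = s @ H                        # unsatisfied checks per bit
        c[np.argmax(fails / deg)] ^= 1       # flip the most suspicious bit
    return c

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])
r = np.array([1, 1, 0, 0, 1, 1])             # last bit in error
print(bit_flip_decode(H, r).tolist())        # -> [1, 1, 0, 0, 1, 0]
```

On the slide's example only v5 has all of its checks failing, so it is flipped first and the decoder terminates with a zero syndrome.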
Decoder….
Example for Code correction
 Correct Code was C  1 1 0 0 1 0
 Erroneous code received at Receiver C  1 1 0 0 1 1
 At receiver the bits are assigned to variable nodes vi
v0  1, v1  1, v2  0, v3  0, v4  1, v5  1
 Degree of variable nodes decide the edges between check node and
variable node

Variable Node v0:


 Degree of node is 2,
Two edges, from check nodes c0 & c2
 Total 3 bits will be received at the node,
 01 Intrinsic bit from channel
 02 Extrinsic bits from check nodes (c0 & c2)
89 3/8/2021
Decoder….
The Tanner graph for the case is represented here
(transmitted codeword 1 1 0 0 1 0, received codeword 1 1 0 0 1 1;
variable nodes V0–V5, check nodes C0–C2)

• Check node c0 is connected to three variable nodes v0, v1, v3
• Extrinsic bits sent along the edges to the variable nodes are calculated using
modulo-2 addition
• The extrinsic bit from c0 to v0 is calculated as:
the bit assigned to v1 = 1 and the bit assigned to v3 = 0, hence the modulo-2 sum of these
two = (v1 ⊕ v3) = (1 ⊕ 0) = 1
• Similarly, the extrinsic bit from c2 to v0 is = 0
• Bits received at v0 are (1, 1, 0); the decision is taken on majority, i.e. ‘1’,
and the same is updated at node v0
90 3/8/2021
Decoder….
Variable node v1
 Degree of node is 3
 Four bits are received at v1
 one intrinsic bit = 1 from channel
 3 extrinsic bits from check node c0 ,c1 and c2 ( 1,1,0)
 Majority selection amongst bits (1,1,1,0) is 1 and same is updated at v1
91 3/8/2021
Decoder….
Variable node v2
 Degree of node is 2
 Three bits are received at v2
 one intrinsic bit = 0 from channel
 2 extrinsic bits from check node c1 ,c2 (0, 1)
 Majority selection amongst bits (0, 0,1) is 0 and same is updated at v2

92 3/8/2021
Decoder….
Variable node v3
 Degree of node is 1
 Two bits are received at v3
 one intrinsic bit = 0 from channel
 one extrinsic bits from check node c0 (0)
 Majority selection amongst bits (0, 0) is 0 and same is updated at v3
93 3/8/2021
Decoder….
Variable node v4
 Degree of node is 1
 Two bits are received at v4
 one intrinsic bit = 1 from channel
 one extrinsic bits from check node c1 (1)
 Majority selection amongst bits (1, 1) is 1 and same is updated at v4
94 3/8/2021
Decoder….
Variable node v5
 Degree of node is 1
 Two bits are received at v5
 one intrinsic bit = 1 from channel
 one extrinsic bit from check node c2 (0)
 Majority selection amongst bits (1, 0) gives a tie; in this case the authenticity of the
extrinsic bit is considered, and v5 is updated to 0
The updated code word is verified with H.CT = 0. If the condition holds, the
correction is completed; otherwise the process is continued till the
condition is satisfied
95 3/8/2021
Sum Product Algorithm
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

Y1 = -0.1 Y2 = 0.5 Y3 = -0.8 Y4 = 1.0 Y5 = -0.7 Y6 = 0.5

 LLR Values From Variable Node To Check Node


𝑅𝑖 = 4 ∗ 𝑌𝑖 ∗ (𝐸𝑏/𝑁0),  where 0 < 𝑖 ≤ 𝑛 and 𝐸𝑏/𝑁0 = 1.25
𝐿1 = 𝑅1 = 4 ∗ (−0.1) ∗ 1.25 = −0.5   𝐿2 = 𝑅2 = 4 ∗ 0.5 ∗ 1.25 = 2.5
𝐿3 = 𝑅3 = 4 ∗ (−0.8) ∗ 1.25 = −4.0   𝐿4 = 𝑅4 = 4 ∗ 1.0 ∗ 1.25 = 5
𝐿5 = 𝑅5 = 4 ∗ (−0.7) ∗ 1.25 = −3.5   𝐿6 = 𝑅6 = 4 ∗ 0.5 ∗ 1.25 = 2.5
96 3/8/2021
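The channel LLR initialization above in code (a trivial sketch):

```python
# Sketch: channel LLRs R_i = 4 * Y_i * (Eb/N0) for the received samples above.
Y = [-0.1, 0.5, -0.8, 1.0, -0.7, 0.5]
EbN0 = 1.25
R = [4 * y * EbN0 for y in Y]
print(R)    # -> [-0.5, 2.5, -4.0, 5.0, -3.5, 2.5]
```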


E11, E12, E14 Calculation
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

Y1 = -0.1 Y2 = 0.5 Y3 = -0.8 Y4 = 1.0 Y5 = -0.7 Y6 = 0.5

 Expectations from Check node to Variable Node:


𝐸1,1 = ln[(1 + tanh(𝑅2/2)·tanh(𝑅4/2)) / (1 − tanh(𝑅2/2)·tanh(𝑅4/2))]
     = ln[(1 + tanh(2.5/2)·tanh(5/2)) / (1 − tanh(2.5/2)·tanh(5/2))] = 2.4217

𝐸1,2 = ln[(1 + tanh(𝑅1/2)·tanh(𝑅4/2)) / (1 − tanh(𝑅1/2)·tanh(𝑅4/2))]
     = ln[(1 + tanh(−0.5/2)·tanh(5/2)) / (1 − tanh(−0.5/2)·tanh(5/2))] = −0.4930

𝐸1,4 = ln[(1 + tanh(𝑅1/2)·tanh(𝑅2/2)) / (1 − tanh(𝑅1/2)·tanh(𝑅2/2))]
     = ln[(1 + tanh(−0.5/2)·tanh(2.5/2)) / (1 − tanh(−0.5/2)·tanh(2.5/2))] = −0.4217
97 3/8/2021
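The "tanh rule" used in these calculations can be checked numerically; a short sketch reproducing E1,1:

```python
# Sketch: check-to-variable message ("tanh rule") from the other LLRs on the check.
import math

def check_msg(others):
    p = 1.0
    for L in others:
        p *= math.tanh(L / 2)
    return math.log((1 + p) / (1 - p))

E11 = check_msg([2.5, 5.0])   # message from C1 to V1 uses R2 and R4
print(round(E11, 4))          # -> 2.4217
```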
E22, E23, E25 Calculation
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

Y1 = -0.1 Y2 = 0.5 Y3 = -0.8 Y4 = 1.0 Y5 = -0.7 Y6 = 0.5

 Expectations from Check node to Variable Node:


𝐸2,2 = ln[(1 + tanh(𝑅3/2)·tanh(𝑅5/2)) / (1 − tanh(𝑅3/2)·tanh(𝑅5/2))]
     = ln[(1 + tanh(−4.0/2)·tanh(−3.5/2)) / (1 − tanh(−4.0/2)·tanh(−3.5/2))] = 3.0265

𝐸2,3 = ln[(1 + tanh(𝑅2/2)·tanh(𝑅5/2)) / (1 − tanh(𝑅2/2)·tanh(𝑅5/2))]
     = ln[(1 + tanh(2.5/2)·tanh(−3.5/2)) / (1 − tanh(2.5/2)·tanh(−3.5/2))] = −2.1892

𝐸2,5 = ln[(1 + tanh(𝑅2/2)·tanh(𝑅3/2)) / (1 − tanh(𝑅2/2)·tanh(𝑅3/2))]
     = ln[(1 + tanh(2.5/2)·tanh(−4.0/2)) / (1 − tanh(2.5/2)·tanh(−4.0/2))] = −2.3001
98 3/8/2021
E31, E35, E36 Calculation
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

Y1 = -0.1 Y2 = 0.5 Y3 = -0.8 Y4 = 1.0 Y5 = -0.7 Y6 = 0.5

 Expectations from Check node to Variable Node:


𝐸3,1 = ln[(1 + tanh(𝑅5/2)·tanh(𝑅6/2)) / (1 − tanh(𝑅5/2)·tanh(𝑅6/2))]
     = ln[(1 + tanh(−3.5/2)·tanh(2.5/2)) / (1 − tanh(−3.5/2)·tanh(2.5/2))] = −2.1892

𝐸3,5 = ln[(1 + tanh(𝑅1/2)·tanh(𝑅6/2)) / (1 − tanh(𝑅1/2)·tanh(𝑅6/2))]
     = ln[(1 + tanh(−0.5/2)·tanh(2.5/2)) / (1 − tanh(−0.5/2)·tanh(2.5/2))] = −0.4217

𝐸3,6 = ln[(1 + tanh(𝑅1/2)·tanh(𝑅5/2)) / (1 − tanh(𝑅1/2)·tanh(𝑅5/2))]
     = ln[(1 + tanh(−0.5/2)·tanh(−3.5/2)) / (1 − tanh(−0.5/2)·tanh(−3.5/2))] = 0.4696
99 3/8/2021
E43, E44, E46 Calculation
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

Y1 = -0.1 Y2 = 0.5 Y3 = -0.8 Y4 = 1.0 Y5 = -0.7 Y6 = 0.5

 Expectations from Check node to Variable Node:


𝐸4,3 = ln[(1 + tanh(𝑅4/2)·tanh(𝑅6/2)) / (1 − tanh(𝑅4/2)·tanh(𝑅6/2))]
     = ln[(1 + tanh(5/2)·tanh(2.5/2)) / (1 − tanh(5/2)·tanh(2.5/2))] = 2.4217

𝐸4,4 = ln[(1 + tanh(𝑅3/2)·tanh(𝑅6/2)) / (1 − tanh(𝑅3/2)·tanh(𝑅6/2))]
     = ln[(1 + tanh(−4.0/2)·tanh(2.5/2)) / (1 − tanh(−4.0/2)·tanh(2.5/2))] = −2.3001

𝐸4,6 = ln[(1 + tanh(𝑅3/2)·tanh(𝑅4/2)) / (1 − tanh(𝑅3/2)·tanh(𝑅4/2))]
     = ln[(1 + tanh(−4.0/2)·tanh(5/2)) / (1 − tanh(−4.0/2)·tanh(5/2))] = −3.6869
100 3/8/2021
LLR Calculation to Check Validity
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

L1 = -0.2676 L2 = 5.0334 L3 = -3.7676 L4 = 2.2783 L5 = -6.2217 L6 = -0.7173

 Updated Li at Variable node:


𝐿𝑖 = −0.2676 5.0334 −3.7676 2.2783 −6.2217 −0.7173
 Calculation of z :
𝑧𝑖 = 1 if 𝐿𝑖 ≤ 0;  𝑧𝑖 = 0 if 𝐿𝑖 > 0
𝑧= 1 0 1 0 1 1
101 3/8/2021
Validation of Code

 Calculate H.zT
𝐻𝑧𝑇: all elements = 0 → Terminate;  any element ≠ 0 → Continue
    1 1 0 1 0 0
𝐻 = 0 1 1 0 1 0    (4×6, from the Tanner graph: checks C1–C4 over V1–V6)
    1 0 0 0 1 1
    0 0 1 1 0 1

z = [1 0 1 0 1 1]T

𝐻𝑧𝑇 = [1 0 1 0]T

 Since the elements of HzT are not all zero, we will

continue the iteration: the decoded code is not a valid
codeword

102 3/8/2021
Calculation of LLR Values
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿1,1 = 𝑅1 + 𝐸3,1 = −0.5 + (−2.1892) = −2.6892
𝐿3,1 = 𝑅1 + 𝐸1,1 = −0.5 + 2.4217 = 1.9217

103 3/8/2021
Calculation of LLR Values
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿1,2 = 𝑅2 + 𝐸2,2 = 2.5 + 3.0265 = 5.5265
𝐿2,2 = 𝑅2 + 𝐸1,2 = 2.5 + (−0.4930) = 2.0070

104 3/8/2021
Calculation of LLR Values
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿2,3 = 𝑅3 + 𝐸4,3 = −4.0 + 2.4217 = −1.5783
𝐿4,3 = 𝑅3 + 𝐸2,3 = −4.0 + (−2.1892) = −6.1892

105 3/8/2021
LLR Calculation to Check Validity
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿1,4 = 𝑅4 + 𝐸4,4 = 5 + −2.3001 = 2.6999
𝐿4,4 = 𝑅4 + 𝐸1,4 = 5 + −0.4217 = 4.5783

106 3/8/2021
LLR Calculation to Check Validity
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿2,5 = 𝑅5 + 𝐸3,5 = −3.5 + (−0.4217) = −3.9217
𝐿3,5 = 𝑅5 + 𝐸2,5 = −3.5 + (−2.3001) = −5.8001

107 3/8/2021
LLR Calculation to Check Validity
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Variable node to Check Node:


𝐿3,6 = 𝑅6 + 𝐸4,6 = 2.5 + −3.6869 = −1.1869
𝐿4,6 = 𝑅6 + 𝐸3,6 = 2.5 + 0.4696 = 2.9696
 Aposteriori Probabilities:
−2.6892 5.5265 0 2.6999 0 0
𝐿𝑐𝑖 = 0 2.0070 −1.5783 0 −3.9217 0
1.9217 0 0 0 −5.8001 −1.1869
0 0 −6.1892 4.5783 0 2.9696
108 3/8/2021
Iteration II
 Expectations at check nodes (computed from the updated 𝐿𝑗,𝑖 values of iteration I)
        2.6426 −2.0060 0 −2.6326 0 0
𝐸𝑖,𝑗 =  0 1.4907 −1.8721 0 −1.1041 0
        1.1779 0 0 0 −0.8388 −1.9016
        0 0 2.7877 −2.9305 0 −4.3963

 LLR values at Variable nodes

𝐿𝑖 = 3.3205 1.9847 −3.0844 −0.5631 −5.4429 −3.6979

 Decoded Message From LLR values

𝑍 = 0 0 1 1 1 1

 Checking Validity by HzT

𝐻𝑧𝑇 = [1 0 0 1]T ≠ 0 → Continue

 Since the elements are not all zero we will continue

 Aposteriori Probabilities (the variable-to-check messages for the next iteration)
        0.6779 3.9907 0 2.0695 0 0
𝐿𝑖,𝑗 =  0 0.4940 −1.2123 0 −4.3388 0
        2.1426 0 0 0 −4.6041 −1.8963
        0 0 −5.8721 2.3674 0 0.5984

109 3/8/2021
Iteration III
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

Aposteriori Probabilities
0.6779 3.9907 0 2.0695 0 0
𝐿𝑖,𝑗 = 0 0.4940 −1.2123 0 −4.3388 0
2.1426 0 0 0 −4.6041 −1.8963
0 0 −5.8721 2.3674 0 0.5984

110 3/8/2021
E11, E12, E14 Calculation Iteration III
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Expectations from Check node to Variable Node:


𝐸1,1 = ln[(1 + tanh(𝐿1,2/2)·tanh(𝐿1,4/2)) / (1 − tanh(𝐿1,2/2)·tanh(𝐿1,4/2))]
     = ln[(1 + tanh(3.9907/2)·tanh(2.0695/2)) / (1 − tanh(3.9907/2)·tanh(2.0695/2))] = 1.9352

𝐸1,2 = ln[(1 + tanh(𝐿1,1/2)·tanh(𝐿1,4/2)) / (1 − tanh(𝐿1,1/2)·tanh(𝐿1,4/2))]
     = ln[(1 + tanh(0.6779/2)·tanh(2.0695/2)) / (1 − tanh(0.6779/2)·tanh(2.0695/2))] = 0.5180

𝐸1,4 = ln[(1 + tanh(𝐿1,1/2)·tanh(𝐿1,2/2)) / (1 − tanh(𝐿1,1/2)·tanh(𝐿1,2/2))]
     = ln[(1 + tanh(0.6779/2)·tanh(3.9907/2)) / (1 − tanh(0.6779/2)·tanh(3.9907/2))] = 0.6515
111 3/8/2021
E22, E23, E25 Calculation Iteration III
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Expectations from Check node to Variable Node:


𝐸2,2 = ln[(1 + tanh(𝐿2,3/2)·tanh(𝐿2,5/2)) / (1 − tanh(𝐿2,3/2)·tanh(𝐿2,5/2))]
     = ln[(1 + tanh(−1.2123/2)·tanh(−4.3388/2)) / (1 − tanh(−1.2123/2)·tanh(−4.3388/2))] = 1.1733

𝐸2,3 = ln[(1 + tanh(𝐿2,2/2)·tanh(𝐿2,5/2)) / (1 − tanh(𝐿2,2/2)·tanh(𝐿2,5/2))]
     = ln[(1 + tanh(0.4940/2)·tanh(−4.3388/2)) / (1 − tanh(0.4940/2)·tanh(−4.3388/2))] = −0.4808

𝐸2,5 = ln[(1 + tanh(𝐿2,2/2)·tanh(𝐿2,3/2)) / (1 − tanh(𝐿2,2/2)·tanh(𝐿2,3/2))]
     = ln[(1 + tanh(0.4940/2)·tanh(−1.2123/2)) / (1 − tanh(0.4940/2)·tanh(−1.2123/2))] = −0.2637
112 3/8/2021
E31, E35, E36 Calculation Iteration III
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Expectations from Check node to Variable Node:


𝐸3,1 = ln[(1 + tanh(𝐿3,5/2)·tanh(𝐿3,6/2)) / (1 − tanh(𝐿3,5/2)·tanh(𝐿3,6/2))]
     = ln[(1 + tanh(−4.6041/2)·tanh(−1.8963/2)) / (1 − tanh(−4.6041/2)·tanh(−1.8963/2))] = 1.8332

𝐸3,5 = ln[(1 + tanh(𝐿3,1/2)·tanh(𝐿3,6/2)) / (1 − tanh(𝐿3,1/2)·tanh(𝐿3,6/2))]
     = ln[(1 + tanh(2.1426/2)·tanh(−1.8963/2)) / (1 − tanh(2.1426/2)·tanh(−1.8963/2))] = −1.3362

𝐸3,6 = ln[(1 + tanh(𝐿3,1/2)·tanh(𝐿3,5/2)) / (1 − tanh(𝐿3,1/2)·tanh(𝐿3,5/2))]
     = ln[(1 + tanh(2.1426/2)·tanh(−4.6041/2)) / (1 − tanh(2.1426/2)·tanh(−4.6041/2))] = −2.0620
113 3/8/2021
E43, E44, E46 Calculation Iteration III
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

R1 = -0.5 R2 = 2.5 R3 = -4.0 R4 = 5 R5 = -3.5 R6 = 2.5

 Expectations from Check node to Variable Node:


𝐸4,3 = ln[(1 + tanh(𝐿4,4/2)·tanh(𝐿4,6/2)) / (1 − tanh(𝐿4,4/2)·tanh(𝐿4,6/2))]
     = ln[(1 + tanh(2.3674/2)·tanh(0.5984/2)) / (1 − tanh(2.3674/2)·tanh(0.5984/2))] = 0.4912

𝐸4,4 = ln[(1 + tanh(𝐿4,3/2)·tanh(𝐿4,6/2)) / (1 − tanh(𝐿4,3/2)·tanh(𝐿4,6/2))]
     = ln[(1 + tanh(−5.8721/2)·tanh(0.5984/2)) / (1 − tanh(−5.8721/2)·tanh(0.5984/2))] = −0.5948

𝐸4,6 = ln[(1 + tanh(𝐿4,3/2)·tanh(𝐿4,4/2)) / (1 − tanh(𝐿4,3/2)·tanh(𝐿4,4/2))]
     = ln[(1 + tanh(−5.8721/2)·tanh(2.3674/2)) / (1 − tanh(−5.8721/2)·tanh(2.3674/2))] = −2.3381
114 3/8/2021
LLR Calculation to Check Validity
C1 C2 C3 C4

V1 V2 V3 V4 V5 V6

L1 = 3.2684 L2 = 4.1912 L3 = -3.9896 L4 = 5.0567 L5 = -5.0999 L6 = -1.9001

 Updated Li at Variable node:


𝐿𝑖 = 3.2684 4.1912 −3.9896 5.0567 −5.0999 −1.9001
 Calculation of z :
𝑧𝑖 = 1 if 𝐿𝑖 ≤ 0;  𝑧𝑖 = 0 if 𝐿𝑖 > 0
𝑧= 0 0 1 0 1 1
115 3/8/2021
Validation of Code

 Calculate H.zT
𝐻𝑧𝑇: all elements = 0 → Terminate;  any element ≠ 0 → Continue
    1 1 0 1 0 0
𝐻 = 0 1 1 0 1 0    (4×6, from the Tanner graph: checks C1–C4 over V1–V6)
    1 0 0 0 1 1
    0 0 1 1 0 1

z = [0 0 1 0 1 1]T

𝐻𝑧𝑇 = [0 0 0 0]T

 Since all elements of HzT are equal to zero, we terminate
the iteration: the decoded code is a valid
codeword
116 3/8/2021
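The whole procedure above (tanh-rule check updates, total LLRs, hard decision, syndrome test, message update) can be sketched end-to-end. This is an illustrative implementation assuming the 4×6 H implied by the Tanner graph with checks C1–C4; on the example LLRs it converges to z = (0 0 1 0 1 1):

```python
# Sketch of the full sum-product decoder for the worked example above.
import math

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
R = [-0.5, 2.5, -4.0, 5.0, -3.5, 2.5]       # channel LLRs R_i = 4*Y_i*(Eb/N0)

def spa_decode(H, R, max_iter=20):
    m, n = len(H), len(R)
    L = [[R[i] for i in range(n)] for _ in range(m)]   # variable-to-check msgs
    z = [1 if r <= 0 else 0 for r in R]
    for _ in range(max_iter):
        # check-to-variable messages (tanh rule)
        E = [[0.0] * n for _ in range(m)]
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    p = 1.0
                    for k in range(n):
                        if H[j][k] and k != i:
                            p *= math.tanh(L[j][k] / 2)
                    E[j][i] = math.log((1 + p) / (1 - p))
        # total LLRs and hard decision
        tot = [R[i] + sum(E[j][i] for j in range(m) if H[j][i]) for i in range(n)]
        z = [1 if t <= 0 else 0 for t in tot]
        if all(sum(H[j][i] * z[i] for i in range(n)) % 2 == 0 for j in range(m)):
            return z                        # zero syndrome: valid codeword
        # updated variable-to-check messages for the next iteration
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    L[j][i] = tot[i] - E[j][i]
    return z

print(spa_decode(H, R))    # -> [0, 0, 1, 0, 1, 1]
```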
References
 1. Shu Lin and D. J. Costello, “Error Control Coding,” Second edition,
Pearson Prentice Hall, 2004.
 2. K. Sam Shanmugam, “Digital and Analog Communication Systems,”
John Wiley & Sons, 1998.
 3. W. Cary Huffman and Vera Pless, “Fundamentals of Error-Correcting
Codes,” Cambridge University Press, 2003.
 4. Richard E. Blahut, “Theory and Practice of Error Control Codes,”
Addison-Wesley Publishing Company, Boston, 1983.
 5. R. G. Gallager, “Low-Density Parity-Check Codes,” MIT Press, 1963.
 6. R. G. Gallager, “Low-density parity-check codes,” IRE Transactions on
Information Theory, vol. 8, no. 1, pp. 21-28, January 1962.
 7. Sarah J. Johnson, “Introducing Low-Density Parity-Check Codes”.
 8. Samuele Bandi, Velio Tralli, Andrea Conti, and Maddalena Nonato,”On
Girth Conditioning for Low-Density Parity-Check Codes”, IEEE
Transactions on Communications, vol. 59, no. 2, February 2011.

117 3/8/2021
References

 9. Xia Zheng, Francis C. M. Lau and Chi K. Tse, "Constructing


Short-Length Irregular LDPC Codes with Low Error Floor," IEEE
Transactions on Communications, Vol. 58, No. 10, pp. 2823-2834,
October 2010.
 10. T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design
of capacity-approaching irregular low-density parity-check
codes,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 619–637,
February 2001
 11. D. J. C. MacKay and R. M. Neal, “Near Shannon limit
performance of low-density parity-check codes,” Electron. Lett.,
vol. 32, no. 18, pp. 1645–1646, 1996; reprinted Electron.
Lett., vol. 33, no. 6, pp. 457–458, March 1997.
118 3/8/2021
Thank You
