
Linear Block Code Decoding Using Neural Network

Kiran Maru
Electronics and Communication Engineering
Institute of Technology, Nirma University
Ahmedabad, Gujarat
15bec061@nirmauni.ac.in

Subham Jadav
Electronics and Communication Engineering
Institute of Technology, Nirma University
Ahmedabad, Gujarat
16bec050@nirmauni.ac.in

Abstract—In this paper a linear block code decoder is constructed using a neural network. A block code uses an encoder that accepts a block of message symbols and generates a block of codeword symbols at the output. This is in contrast to a convolutional code, where the encoder accepts a continuous stream of symbols and similarly generates a continuous encoded output stream. The neural network will be adapted for a single-bit error. Each layer of the neural network will simulate one stage of a linear block code decoder. The syndrome generation, error detection, and error correction stages of the linear block code decoder will be simulated by the proposed neural network.

Index Terms—Linear Block Code, Neural Network, Syndrome, Error Detection, Error Correction

• INTRODUCTION

The field of neural networks is now extremely vast and interdisciplinary, drawing interest from researchers in many different areas such as engineering (including biomedical engineering), physics, neurology, psychology, medicine, mathematics, computer science, chemistry, and economics. Artificial neural networks provide a neurocomputing approach to
solve complex problems that might otherwise not have a
tractable solution. Applications of neural networks
include (but are not limited to) prediction and
forecasting, associative memory, function
approximation, clustering, data compression, speech
recognition, feature extraction, combinatorial
optimization, solution of matrix algebra problems, blind
source separation and solution of differential equations.
When a digital message is transmitted over a long
distance, the received message may not be exactly as it
was sent since there may be some interference. In these
situations we should be able to detect and if possible to
correct errors.
Neural networks (NNs) are powerful computational models that have attracted much attention in many applications [4]. However, the decoding problem poses a certain level of difficulty for most NN classifiers, because there are a large number of categories to be classified and the classification requires very high precision so as to discriminate among patterns only one Hamming distance from one another. We shall show, in this paper, that the decoding rules of a number of LBCs have a close connection with the class of high-order perceptrons.

• LINEAR BLOCK CODE DECODER

Binary linear block codes are defined over the binary Galois field GF(2) of two elements {0, 1} using modulo-2 arithmetic. A linear block code is called a linear (n, k) code, where "k" is the message length to be coded and "n" is the length of the message after coding. Linear block codes are described in terms of their generator matrix (G), parity-check matrix (H), minimum distance (dmin), and syndrome (S). All three phases of the linear block code decoder will be constructed using a neural network, so a neural network with five layers will be built. The first and second layers will estimate the syndrome vector, the second and third layers will detect the error, and the third, fourth, and fifth layers will correct the error, as shown in Fig. 2. The part of the neural network plotted in bold lines serves only to delay the input so that it is ready to be exclusive-ORed with the error vector "e".

The syndrome-generation phase corresponds to the first block of the conventional linear block code decoder, which is responsible for generating the syndrome vector "S". Since S = r · H^T, the synaptic weights of this phase must equal the transpose of the parity-check matrix "H".
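As a concrete illustration of this phase, the following C program computes S = r · H^T over GF(2), with the synaptic weight matrix taken as H^T as described above. This is a minimal sketch, not the paper's implementation; the (7,4) Hamming parity-check matrix used here is the one implied by the parity equations given later in the simulation section, and the function names are illustrative.

```c
#include <stdio.h>

#define N 7  /* codeword length */
#define M 3  /* number of parity checks (n - k) */

/* Parity-check matrix of the (7,4) Hamming code, H = [P^T | I3],
 * derived from x5 = x1^x2^x4, x6 = x1^x3^x4, x7 = x2^x3^x4. */
static const int H[M][N] = {
    {1, 1, 0, 1, 1, 0, 0},
    {1, 0, 1, 1, 0, 1, 0},
    {0, 1, 1, 1, 0, 0, 1},
};

/* First decoder stage: syndrome neuron i sums the received bits
 * weighted by row i of H (i.e., column i of H^T) and reduces mod 2. */
static void syndrome(const int r[N], int s[M])
{
    for (int i = 0; i < M; i++) {
        int acc = 0;
        for (int j = 0; j < N; j++)
            acc ^= r[j] & H[i][j];   /* modulo-2 accumulation */
        s[i] = acc;
    }
}

int main(void)
{
    int r[N] = {1, 0, 1, 1, 0, 1, 0};   /* a valid codeword */
    int s[M];

    syndrome(r, s);
    printf("S = [%d %d %d]\n", s[0], s[1], s[2]);  /* prints [0 0 0] */
    return 0;
}
```

Each syndrome entry is the modulo-2 sum of the received bits selected by one row of H, which is exactly what one neuron of the syndrome layer must compute.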

• The Random Neural Network (RNN) Model

In the random neural network model [5], signals in the form of spikes of unit amplitude circulate among the neurons. Positive signals represent excitation and negative signals represent inhibition. Each neuron's state is a non-negative integer called its potential, which increases when an excitation signal arrives and decreases when an inhibition signal arrives. An excitatory spike is interpreted as a "+1" signal at a receiving neuron, while an inhibitory spike is interpreted as a "-1" signal. A neuron i emitting a spike, whether it is an excitation or an inhibition, loses one unit of potential, going from some state of value ki to the state of value ki - 1. The state of the n-neuron network at time t is represented by the vector of non-negative integers k(t) = (k1(t), ..., kn(t)), where ki(t) is the potential, or integer state, of neuron i. We will denote by k and ki arbitrary values of the state vector and of the i-th neuron's state.
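The potential dynamics just described can be sketched in C as below. This is a minimal illustration of only the update rules (arrival of excitatory or inhibitory spikes and the unit cost of firing); the probabilistic spike routing and rate parameters of the full model [5] are deliberately omitted, and all names are illustrative.

```c
#include <stdio.h>

#define NEURONS 4

/* k[i] is the non-negative integer potential of neuron i. */
static int k[NEURONS];

/* An arriving excitatory spike ("+1") raises the potential. */
static void excite(int i)  { k[i] += 1; }

/* An arriving inhibitory spike ("-1") lowers it, never below 0. */
static void inhibit(int i) { if (k[i] > 0) k[i] -= 1; }

/* A neuron can emit a spike only from a positive potential; emitting
 * a spike, excitatory or inhibitory, costs one unit of potential. */
static int fire(int i)
{
    if (k[i] == 0) return 0;   /* cannot fire from potential 0 */
    k[i] -= 1;
    return 1;
}

int main(void)
{
    excite(0); excite(0); inhibit(0);   /* k[0]: 0 -> 2 -> 1 */
    if (fire(0)) excite(1);             /* neuron 0 spikes at neuron 1 */
    printf("k = (%d, %d, %d, %d)\n", k[0], k[1], k[2], k[3]);
    return 0;
}
```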
• Decoding of Linear Block Codes

The general idea of decoding an LBC can be described as follows. A systematic (n, k) code, in which there are k information bits and m = n - k parity bits in each codeword, can be represented by a parity-check matrix H = [hij]m,n, hij ∈ {0, 1}. Let A = [a1, a2, ..., ak] be an information word. After A is extended with its parity bits and transmitted, the receiving end receives a word V = [v1, v2, ..., vn], possibly with errors in some of its bits. The received word V is then multiplied (modulo 2) by the parity-check matrix H to produce a syndrome vector S = V · H^T = [s1, s2, ..., sm].

The syndrome S provides the information needed to decode the word V. If S is the zero vector, there are no detectable errors; the parity bits vi, i = k+1, k+2, ..., n, are discarded and the information bits are directly accessed, i.e., ai = vi for i = 1, 2, ..., k. If a single error occurs in bit vj, then S matches the jth column of H and the jth bit vj has to be complemented. For t-error-correcting codes, if S matches the modulo-2 sum of p (p ≤ t) columns of H, then the p bits corresponding to these p columns of H are in error.

• Genetic Algorithms and Simulated Annealing
Genetic algorithms (GAs) and simulated annealing (SA) are two useful stochastic techniques that can be used to solve optimization problems efficiently. SA is based on thermodynamics and can be viewed as an algorithm that generates a sequence of Markov chains to approach the optimal solutions of the problem. This sequence of Markov chains is controlled by a gradually decreasing temperature of the system. Theoretically, the probability distribution of the system configurations generated by SA approaches the Boltzmann distribution once the system has reached equilibrium at a given fixed temperature; as the system temperature decreases gradually to zero, this distribution concentrates on the optimal solutions. GAs are general-purpose optimization techniques that borrow the spirit of natural selection from evolution theory. Based on natural selection, a GA tries to pass genes with good fitness from generation to generation. To eliminate poorly fitting candidates in each iteration, the GA uses a reproduction plan to exert selection pressure, making it hard for bad candidates to survive. In addition, the GA applies genetic operations, such as crossover and mutation, to existing genes in order to move through the search space. Reproduction plans and genetic operators may behave in very different ways, but generally speaking, a typical GA consists of a reproduction phase and a manipulation phase.

• The Neural Decoders

The conventional decoding rules for cyclic and BCH codes have no direct transforms resulting in decoding structures representable by high-order perceptrons. To reduce the number of terms required, one depends on a robust optimization method, and the method we choose is genetic evolution. GAs are more suitable for this structure-adaptation application than other major optimization methods, such as gradient-based algorithms, enumeration, and random search, for several reasons. First, the search space of possible network structures for n binary inputs is large; such a multimodal surface would baffle most hill-climbing methods. Another, more favorable reason for using GAs is that highly fit terms are less likely to be destroyed under the genetic operators, which often leads to faster convergence, especially compared with SA.

A GA begins with a randomly generated population of individuals, which are often bit-string encodings of potential solutions to the problem at hand. Each individual is evaluated to determine its fitness. High-fitness individuals receive a high selection probability for reproduction; a simple selection rule might use a roulette wheel to select an individual according to the percentage of its fitness over the total population fitness. A number of genetic operators, such as mutation and crossover, are applied to the selected individuals to produce the new population of the next generation, as sketched below.
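The following is a minimal sketch of such a GA loop in C, using roulette-wheel selection, one-point crossover, and bit-flip mutation, with the crossover rate 0.9 and mutation rate 0.05 reported in the simulation section. The fitness function here is a placeholder (it simply counts 1-bits); a neural-decoder GA would instead score the perceptron structure encoded by each bit string.

```c
#include <stdio.h>
#include <stdlib.h>

#define POP   20      /* population size (problem dependent, even) */
#define LEN   16      /* bits per individual */
#define GENS  50      /* number of generations */
#define P_CROSS 0.9   /* crossover rate, as in the simulations */
#define P_MUT   0.05  /* mutation rate,  as in the simulations */

static int pop[POP][LEN], next[POP][LEN];
static double fit[POP];

/* Placeholder fitness: counts 1-bits; kept strictly positive. */
static double fitness(const int *ind)
{
    double f = 0.0;
    for (int i = 0; i < LEN; i++) f += ind[i];
    return f + 1e-9;
}

/* Roulette-wheel selection: pick i with probability fit[i]/total. */
static int roulette(double total)
{
    double r = ((double)rand() / RAND_MAX) * total, acc = 0.0;
    for (int i = 0; i < POP; i++) {
        acc += fit[i];
        if (r <= acc) return i;
    }
    return POP - 1;
}

int main(void)
{
    srand(1);
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < LEN; j++) pop[i][j] = rand() & 1;

    for (int g = 0; g < GENS; g++) {
        double total = 0.0;
        for (int i = 0; i < POP; i++) total += fit[i] = fitness(pop[i]);

        for (int i = 0; i < POP; i += 2) {
            int a = roulette(total), b = roulette(total);
            /* one-point crossover with probability P_CROSS */
            int cut = ((double)rand() / RAND_MAX < P_CROSS)
                      ? 1 + rand() % (LEN - 1) : LEN;
            for (int j = 0; j < LEN; j++) {
                next[i][j]     = (j < cut) ? pop[a][j] : pop[b][j];
                next[i + 1][j] = (j < cut) ? pop[b][j] : pop[a][j];
            }
            /* bit-flip mutation with probability P_MUT per bit */
            for (int j = 0; j < LEN; j++) {
                if ((double)rand() / RAND_MAX < P_MUT) next[i][j] ^= 1;
                if ((double)rand() / RAND_MAX < P_MUT) next[i + 1][j] ^= 1;
            }
        }
        for (int i = 0; i < POP; i++)
            for (int j = 0; j < LEN; j++) pop[i][j] = next[i][j];
    }

    double best = 0.0;
    for (int i = 0; i < POP; i++) {
        double f = fitness(pop[i]);
        if (f > best) best = f;
    }
    printf("best fitness after %d generations: %.0f\n", GENS, best);
    return 0;
}
```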

• Training Patterns
The codewords of the (7,4) Hamming code are used, excluding the all-0's and all-1's codewords as mentioned before. Thus, we have n = 7 (inputs), N = 14 (patterns), m = 14 (outputs) and I = {1, 2, ..., 14}. As an example, the codeword x4 = (1, 1, 1, 1, 1, 1, 1) will be associated with the target output y4 = (1, 1, 1, 0.1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), and hence its class label is c = 4. The training process is carried out using the RNNSIM v1.0 package (Random Neural Network Simulator) after some slight modifications to incorporate the general recurrent training algorithm into the program.
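Following the target-output pattern described above (the value 0.1 at the position of the class label and 1 elsewhere, as in y4), the 14 targets can be generated mechanically. This sketch covers only that bookkeeping, with illustrative names; the actual codeword values and training are handled by the RNNSIM package.

```c
#include <stdio.h>

#define OUTPUTS 14   /* m: one output neuron per class/pattern */

/* Build the target output for class c: following the pattern in the
 * text, output c takes the value 0.1 and all other outputs take 1. */
static void target(int c, double y[OUTPUTS])
{
    for (int i = 0; i < OUTPUTS; i++)
        y[i] = (i == c - 1) ? 0.1 : 1.0;   /* class labels are 1-based */
}

int main(void)
{
    double y4[OUTPUTS];
    target(4, y4);                         /* class label c = 4 */
    for (int i = 0; i < OUTPUTS; i++) printf("%g ", y4[i]);
    printf("\n");
    return 0;
}
```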

• Simulation Results
The proposed neural decoder was implemented in the C language on a Sun Sparc-II workstation. Unless otherwise stated, a typical parameter setting uses a crossover rate of 0.9, a mutation rate of 0.05, and an iteration bound IB = 10. The population size and the maximum number of generations are chosen depending on the number of training patterns.

Simulation examples are taken from short-length cyclic codes. Short-length codes yield reasonable sizes of training sets and can be extended to useful long codes by various techniques from the theory of ECCs, such as product codes and nested codes [1]. They also serve as good examples to illustrate the effectiveness of the presented approach.

The first example is the class of (2^m - 1, 2^m - 1 - m) single-error-correcting Hamming codes. The simplest nontrivial Hamming code is the (7,4) code. There are 128 words of length 7 in total, of which 16 are legal codewords; the remaining words lie at Hamming distance 1 from the legal ones (7 erroneous words for each legal codeword). The four bipolar information bits are denoted x1, x2, x3, x4 and the three parity bits are x5, x6, x7. They are related as x5 = x1x2x4, x6 = x1x3x4, and x7 = x2x3x4. This relationship is often described by a parity matrix; in binary form these equations yield the parity-check matrix

H = [ 1 1 0 1 1 0 0
      1 0 1 1 0 1 0
      0 1 1 1 0 0 1 ]
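To make the example concrete, the sketch below enumerates the 16 legal codewords from the stated parity equations and verifies that each yields a zero syndrome under the parity-check matrix H given above; variable names are illustrative.

```c
#include <stdio.h>

static const int H[3][7] = {
    {1, 1, 0, 1, 1, 0, 0},   /* x5 = x1 ^ x2 ^ x4 */
    {1, 0, 1, 1, 0, 1, 0},   /* x6 = x1 ^ x3 ^ x4 */
    {0, 1, 1, 1, 0, 0, 1},   /* x7 = x2 ^ x3 ^ x4 */
};

int main(void)
{
    for (int msg = 0; msg < 16; msg++) {
        int x[7];
        /* information bits x1..x4 */
        for (int j = 0; j < 4; j++) x[j] = (msg >> j) & 1;
        /* parity bits from the equations in the text */
        x[4] = x[0] ^ x[1] ^ x[3];
        x[5] = x[0] ^ x[2] ^ x[3];
        x[6] = x[1] ^ x[2] ^ x[3];

        /* verify the codeword: S = x * H^T must be the zero vector */
        int ok = 1;
        for (int i = 0; i < 3; i++) {
            int s = 0;
            for (int j = 0; j < 7; j++) s ^= x[j] & H[i][j];
            if (s) ok = 0;
        }
        for (int j = 0; j < 7; j++) printf("%d", x[j]);
        printf("  syndrome %s\n", ok ? "zero" : "NONZERO");
    }
    return 0;
}
```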

• Conclusions

The linear block code decoder that works over the Galois field of two elements, GF(2), has been realized by the proposed neural network. The proposed decoder can reduce the error probability to zero within the error-correcting capacity of the used code. It has been shown through simulation that the neural-based decoder is much better than the hard-decision decoder for decoding codewords corrupted with more errors than the correction capability of the used code. In the future, we can consider using the RNN model as a dynamical system in which the codewords are the constant attractors of the system.

• References

E. R. Berlekamp, R. McEliece, and H. van Tilborg, "On the inherent intractability of certain coding problems," IEEE Transactions on Information Theory, vol. 24, no. 3, pp. 384-386, May 1978.

R. E. Blahut, Theory and Practice of Error Control Codes. Addison-Wesley, MA, 1984.

F. M. Ham and I. Kostanic, Principles of Neurocomputing for Science and Engineering. McGraw-Hill, 2001.

E. Gelenbe, "Learning in the recurrent random neural network," Neural Computation, vol. 5, no. 1, pp. 154-164, 1993.
