Linear Block Code Decoding Using Neural Network

Kiran Maru
Electronics and Communication Engineering
Institute of Technology, Nirma University
Ahmedabad, Gujarat
15bec061@nirmauni.ac.in

Subham Jadav
Electronics and Communication Engineering
Institute of Technology, Nirma University
Ahmedabad, Gujarat
16bec050@nirmauni.ac.in
Abstract—In this paper, a linear block code decoder is constructed using a neural network. A block code uses an encoder that accepts a block of message symbols and generates a block of codeword symbols at the output. This is in contrast to a convolutional code, where the encoder accepts a continuous stream of symbols and generates a continuous encoded output stream. The neural network will be adapted for a single-bit error. Each layer of the neural network will simulate one stage of a linear block code decoder: the syndrome generation, error detection, and error correction stages of the linear block code decoder will be simulated by the proposed neural network.

Index Terms—Linear Block Code, Neural Network, Syndrome, Error Detection, Error Correction

• INTRODUCTION

The field of neural networks is now extremely vast and interdisciplinary, drawing interest from researchers in many different areas such as engineering (including biomedical engineering), physics, neurology, psychology, medicine, mathematics, computer science, chemistry, and economics. Artificial neural networks provide a neurocomputing approach to solving complex problems that might otherwise not have a tractable solution. Applications of neural networks include (but are not limited to) prediction and forecasting, associative memory, function approximation, clustering, data compression, speech recognition, feature extraction, combinatorial optimization, solution of matrix algebra problems, blind source separation, and solution of differential equations.

When a digital message is transmitted over a long distance, the received message may not be exactly as it was sent, since there may be some interference. In these situations we should be able to detect and, if possible, correct errors.

Neural networks (NNs) are powerful computational models that have attracted much attention in many applications [4]. However, the decoding problem poses a certain level of difficulty to most NN classifiers, because there is a large number of categories to be classified and the classification requires very high precision in order to discriminate among patterns that are only one Hamming distance from one another. We shall show, in this paper, that the decoding rules of a number of LBCs have a close connection with the class of high-order networks.

• LINEAR BLOCK CODE DECODER

Binary linear block codes are defined over the binary Galois field GF(2) of two elements {0, 1}, using modulo-2 arithmetic. A linear block code is called a linear (n, k) code, where "k" is the length of the message to be coded and "n" is the length of the message after coding. Linear block codes are described in terms of their generator matrix (G), parity check matrix (H), minimum distance (dmin), and syndrome (S). All three phases of the linear block code decoder will be constructed using a neural network, so a neural network with five layers will be built. The first and second layers will estimate the syndrome vector, the second and third layers will detect the error, and the third, fourth, and fifth layers will correct the error, as shown in Fig. 2. The part of the neural network plotted in bold lines is just for delaying the input so that it is ready to be exclusive-ORed with the error vector "e".

The first phase is similar to the first block in the linear block code decoder, which is responsible for generating the syndrome vector "S". Since S = r · H^T, the synaptic weights of this phase must be equal to the transpose of the parity check matrix "H".
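As a concrete (non-neural) sketch of the three decoder phases the network layers emulate, the following Python fragment decodes a (7, 4) Hamming code: syndrome generation via S = r · H^T, error detection by testing for a nonzero syndrome, and single-bit error correction by matching the syndrome to a column of H. The specific matrices G and H and the helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative systematic (7,4) Hamming code: G = [I_k | P], H = [P^T | I_{n-k}].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # generator matrix (4x7)
H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity check matrix (3x7)

def encode(m):
    """Encode a 4-bit message into a 7-bit codeword (mod-2 arithmetic)."""
    return (m @ G) % 2

def decode(r):
    """Three-phase decode: syndrome generation, error detection, error correction."""
    s = (r @ H.T) % 2                  # phase 1: syndrome S = r . H^T
    if not s.any():                    # phase 2: zero syndrome -> no error detected
        return r
    # phase 3: a single-bit error at position i produces syndrome equal to
    # column i of H; locate that column and flip (XOR) the received bit.
    # (Errors beyond single-bit are outside this sketch's capability.)
    i = int(np.argmax((H.T == s).all(axis=1)))
    e = np.zeros(7, dtype=int)
    e[i] = 1
    return (r + e) % 2                 # r XOR e

m = np.array([1, 0, 1, 1])
c = encode(m)
r = c.copy()
r[2] ^= 1                              # introduce a single-bit error
assert np.array_equal(decode(r), c)    # the error is corrected
```

This mirrors the five-layer structure described above: the first matrix multiply corresponds to the syndrome-generation layers, the zero test to the error-detection layers, and the final XOR with the error vector "e" to the error-correction layers.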
• Training Patterns

The training patterns are chosen according to the error-correction capability of the used code. The codewords of the Hamming code are used, excluding the all-0s codeword and the all-1s codeword, as mentioned before. Thus we have n = 7 (inputs), N = 14 (patterns), m = 14 (outputs), and I = {1, 2, ..., 14}. As an example, the codeword x4 = (1, 1, 1, 1, 1, 1, 1) will be associated with the target output y4 = (1, 1, 1, 0.1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), and hence its class label is c = 4. The training process is carried out using the RNNSIM v1.0 package (Random Neural Network Simulator), after some slight modifications to incorporate the general recurrent training algorithm into the program.

• Conclusions

In the future, we can consider using the RNN model as a dynamical system in which the codewords are the constant attractors of the system.

• References

E. R. Berlekamp, R. J. McEliece, and H. C. A. van Tilborg, "On the inherent intractability of certain coding problems," IEEE Transactions on Information Theory, vol. 24, no. 3, pp. 384-386, May 1978.