Random Neural Network Decoder For Error Correcting Codes
Let $q_i(t)$ be the probability that neuron $i$ is excited at time $t$. The stationary probability distribution associated with the model is given by:

$$p(k) = \lim_{t \to \infty} p(k, t), \qquad q_i = \lim_{t \to \infty} q_i(t), \qquad i = 1, \ldots, n, \quad (1)$$

and

$$\lambda^+_i = \sum_j q_j w^+_{ji} + \Lambda_i, \qquad \lambda^-_i = \sum_j q_j w^-_{ji} + \lambda_i,$$
$$q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i}, \quad \text{if } q_i < 1. \quad (2)$$
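The stationary probabilities in Eq. (2) are coupled through the $q_j$ terms and are typically obtained by successive substitution. The following is a minimal illustrative sketch, not from the paper; the function name, convergence tolerance, and the clipping of $q_i$ just below 1 are our own choices:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, tol=1e-9, max_iter=1000):
    """Fixed-point iteration for the stationary excitation probabilities
    q_i of the random neural network, per Eqs. (1)-(2):
        lambda+_i = sum_j q_j w+_ji + Lambda_i
        lambda-_i = sum_j q_j w-_ji + lam_i
        q_i = lambda+_i / (r_i + lambda-_i), with q_i < 1.
    W_plus[j, i] is the excitatory rate w+_ji from neuron j to neuron i."""
    n = len(r)
    q = np.zeros(n)
    for _ in range(max_iter):
        lam_plus = q @ W_plus + Lambda    # total excitatory arrival rates
        lam_minus = q @ W_minus + lam     # total inhibitory arrival rates
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0 - 1e-12)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

Successive substitution is the usual way these signal-flow equations are solved; the condition $q_i < 1$ corresponds to stability of neuron $i$.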
4 RNN Based Decoder (RNND)

The model presented here is designed to decode the (7,4) Hamming code discussed in Section 2. The RNND, as shown in Figure 1, consists of two layers. The input layer, which has 7 neurons, is a fully interconnected layer; it is connected to the output layer, which consists of 14 neurons, through feedforward connections. Thanks to the general training algorithm [...]

[Figure 1: Structure of the RNND.]

4.1 Classification Technique

Let $T = \{(x, c)\}$ be a training set of $N$ vectors, where $x \in \mathbb{R}^n$ represents a codeword of the block code family under consideration and $c \in I$ is its class label from an index set $I$. Let $y \in \mathbb{R}^m$ be the desired output vector associated with the input vector $x$. In our application, we relate $c$ to $y$ via the following relation: $c = \{j : y_j < y_k\ \forall k = 1, 2, \ldots, m\}$; that is, $c$ is the index of the output neuron that produces the minimum value. The RNN decoder acts as a mapping $\mathbb{R}^n \to \mathbb{R}^m$, which assigns a vector in $\mathbb{R}^m$ to each vector in $\mathbb{R}^n$; equivalently, it can be considered as a mapping $C : \mathbb{R}^n \to I$, which assigns a class label in $I$ to each vector in $\mathbb{R}^n$. The RNN specifies a partitioning of the codewords into regions $R_j \triangleq \{x \in \mathbb{R}^n : C(x) = j\}$, where $\bigcup_j R_j = \mathbb{R}^n$ and $\bigcap_j R_j = \emptyset$. It also induces a partitioning of the training set into sets $T_j \subseteq T$, where $T_j \triangleq \{(x, c) : x \in R_j,\ (x, c) \in T\}$. A training pair $(x, c) \in T$ is misclassified if $C(x) \neq c$. The performance [...]
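The classification rule above is simply an arg-min over the output neurons. A small sketch of the induced mapping $C : \mathbb{R}^n \to I$, where `rnn_forward` is a placeholder for the trained RNND's forward pass (not defined in the paper):

```python
import numpy as np

def classify(y):
    """Class label rule of Section 4.1: c is the index of the output
    neuron producing the minimum value (1-based, matching I = {1,...,m})."""
    return int(np.argmin(y)) + 1

def decode(x, rnn_forward):
    """The decoder as a mapping C : R^n -> I. `rnn_forward` is assumed
    to map an input vector in R^n to the m output-neuron values in R^m."""
    return classify(rnn_forward(x))
```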
For training, the 14 codewords of the (7,4) Hamming code are used, excluding the all-0's code and the all-1's code as mentioned before. Thus, we have $n = 7$ (inputs), $N = 14$ (patterns), $m = 14$ (outputs), and $I = \{1, 2, \ldots, 14\}$. As an example, the codeword $x_4 = (-1, 1, -1, -1, 1, -1, 1)$ will be associated with the target output $y_4 = (1, 1, 1, 0.1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)$, and hence its class label is $c = 4$. The training process is carried out using the RNNSIM v1.0 package¹ (Random Neural Network Simulator) after some slight modifications to incorporate the general recurrent training algorithm into the program.
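One way to assemble this training set is sketched below. The systematic generator matrix and the BPSK mapping $0 \to -1$, $1 \to +1$ are assumptions on our part, since the paper does not state them, but they reproduce the stated counts $n = 7$, $N = 14$, $m = 14$:

```python
import numpy as np

# One standard generator matrix for the (7,4) Hamming code (systematic
# form G = [I | P]); the paper's exact generator may differ.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 16 codewords, then drop the all-0's and all-1's words -> N = 14.
msgs = np.array([[(k >> b) & 1 for b in range(4)] for k in range(16)])
words = msgs @ G % 2
words = np.array([w for w in words if 0 < w.sum() < 7])

X = 2.0 * words - 1.0     # BPSK mapping, assuming 0 -> -1, 1 -> +1
Y = np.ones((14, 14))     # target outputs: all ones ...
np.fill_diagonal(Y, 0.1)  # ... except 0.1 at the row's own class index,
                          # matching targets like y4 (0.1 in position 4)
```

With `X` and `Y` as above, training pair $k$ is `(X[k], Y[k])` with class label $k + 1$.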
5 Simulation Results

After training the RNN, the simulation is performed for the (7,4) Hamming code transmitted over an AWGN channel. The whole communication system has been simulated with the aid of an interactive JAVA program². To evaluate the performance of the proposed RNND and compare it to the HDD and SDD, two different simulations have been carried out.

In the first simulation, the probability of word error is recorded versus the SNR in dB, as shown in Figure 2. The SNR is related to the channel variance $\sigma^2$ through the relation $E_b = \frac{1}{2\sigma^2}$. The vertical axis represents logarithmically the probability of word error associated with HDD, SDD, and RNND. The RNND shows coding gain relative to the HDD, and its performance approaches that of the optimum SDD.

[Figure 2: Probability of word error for the (7,4) Hamming code in an AWGN channel. The plot shows the probability of word error versus SNR for HDD, RNND, and SDD.]
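Under the stated relation $E_b = \frac{1}{2\sigma^2}$, the noise standard deviation for a given SNR follows directly. A short sketch, interpreting $E_b$ as the linear SNR value $10^{\mathrm{SNR_{dB}}/10}$ (an assumption on our part):

```python
import numpy as np

def sigma_from_snr_db(snr_db):
    """Noise standard deviation from E_b = 1 / (2 * sigma^2),
    with E_b taken as the linear SNR value 10^(SNR_dB / 10)."""
    eb = 10.0 ** (snr_db / 10.0)
    return np.sqrt(1.0 / (2.0 * eb))

def awgn(x, snr_db, rng=None):
    """Pass a +/-1 codeword vector through the AWGN channel."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma_from_snr_db(snr_db), size=x.shape)
```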
In the second simulation, three groups of testing codewords, each consisting of 100000 randomly chosen (7,4) Hamming codewords, are applied to the HDD, SDD, and RNND. AWGN is added to each testing group so as to produce a single error, two errors, and three errors in the codewords of the first, second, and third testing groups, respectively (one way to generate such groups is sketched below). The noisy testing codewords are then decoded using each of the considered techniques, and the number of erroneously decoded codewords is recorded for each testing group. The results are shown in Tables 1, 2, and 3.
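One way to realize such a testing group is rejection sampling: keep adding AWGN to randomly chosen codewords and retain only received vectors whose hard decisions contain exactly the desired number of bit errors. A hypothetical sketch, since the paper does not describe its generation procedure (`X` is the matrix of $\pm 1$ codewords from the earlier sketch):

```python
import numpy as np

def make_group(X, k_errors, size, snr_db, rng=np.random.default_rng(0)):
    """Draw `size` random codewords from X (rows of +/-1) and add AWGN,
    keeping only received vectors whose hard decisions flip exactly
    `k_errors` bits."""
    sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
    group, labels = [], []
    while len(group) < size:
        c = int(rng.integers(len(X)))
        r = X[c] + rng.normal(0.0, sigma, size=X[c].shape)
        if int(np.sum(np.sign(r) != X[c])) == k_errors:
            group.append(r)
            labels.append(c + 1)   # 1-based class label
    return np.array(group), np.array(labels)
```

The probability of word error for a group is then the fraction of its vectors that a given decoder maps to the wrong class label.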
SNR (dB)   HDD    RNND    SDD
   -1       0     0.071   0.057
    0       0     0.043   0.036
    1       0     0.027   0.022
    2       0     0.006   0.004

Table 1: Probability of word error for the first group

SNR (dB)   HDD    RNND    SDD
   -1       1     0.409   0.363
    0       1     0.314   0.297
    1       1     0.253   0.201
    2       1     0.217   0.190

Table 2: Probability of word error for the second group
SNR (dB)   HDD    RNND    SDD
   -1       1     0.774   0.772
    0       1     0.686   0.648

Table 3: Probability of word error for the third group

¹ Free GUI-based software, available via anonymous ftp at ftp://ftp.mathworks.com/pub/contrib/v5/nnet/rnnsim
² http://www.cs.ucf.edu/~ahossam/rnndecoder/HammRnndApp.html