

International Journal of Fuzzy System Applications
Volume 11 • Issue 2

Optimization of Hopfield Neural Network for Improved Pattern Recall and Storage Using Lyapunov Energy Function and Hamming Distance: MC-HNN
Jay Kant Pratap Singh Yadav, Jamia Millia Islamia, New Delhi, India*
Zainul Abdin Jaffery, Jamia Millia Islamia, New Delhi, India
Laxman Singh, Noida Institute of Engineering and Technology, Greater Noida, India

ABSTRACT

In this paper, the authors propose a multiconnection-based Hopfield neural network (MC-HNN) based on the Hamming distance and the Lyapunov energy function to address the limited storage and inadequate recall capability of Hopfield neural networks (HNN). This study uses the Lyapunov energy function and the Hamming distance to recall the correct stored patterns corresponding to noisy test patterns during the convergence phase. The proposed method also extends the traditional HNN storage capacity by storing the individual patterns in the form of etalon arrays through unique connections among neurons. Hence, the storage capacity now depends on the number of connections and is independent of the total number of neurons in the network. The proposed method achieved an average recall success rate of 100% for bit-map images with noise levels of 0, 2, 4, and 6 bits, which is a better recall success rate than those of the traditional and genetic algorithm-based HNN methods. The proposed method also shows quite encouraging results on handwritten images compared with some of the latest state-of-the-art methods.

Keywords
Associative Memory, Hamming Distance, Hebb Rule, Hopfield Neural Network, Lyapunov Energy Function,
Multiple Connections, Patterns Association, Recall Success Rate

1. INTRODUCTION

Nowadays, artificial intelligence (AI) has a significant impact on our daily lives. However, human data such as voice, pictures, video, and handwritten characters have not yet received adequate treatment, as standard solutions are still lacking (Liang & Li, 2020; Hopfield, 1982). Associative Memory (AM)
is an emerging research topic in pattern recognition that still needs an optimal solution due to the
unavailability of a standard solution. Associative memory works as a content addressable memory
that stores the data in a distributed manner and can be addressed through its contents. An AM has

DOI: 10.4018/IJFSA.296592 *Corresponding Author



Copyright © 2022, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.



the capability to recall the complete patterns when triggered with partial or noisy patterns. Figure 1
illustrates the working of content addressable memory or associative memory. In artificial intelligence
literature, associative memories are broadly classified into two types: auto-associative memory and
hetero-associative memory. In an auto-associative memory, the primary focus is given to recall the
perfect pattern when a distorted or noisy version of the pattern is given as input. On the other hand,
hetero-associative memory stores input-output pattern pairs in which the input pattern may differ from the output pattern, and recall of the output pattern is triggered by a noisy or partial version of the input pattern of the pair. Associative memories are usually implemented by artificial neural networks. The Hopfield neural network is a widely used artificial neural network for implementing auto-associative memory; it mimics the functionality of the human brain (Hopfield, 1984; Hu et al., 2015).

Figure 1. Working of content addressable memory

The Hopfield Neural Network (HNN) (Hopfield, 1982) is considered a dynamic feedback system, where the output of the previous iteration is fed as input to the next iteration. This network is also termed a "recurrent network" owing to the presence of feedback connections, and it tends to behave like a nonlinear dynamic system (Hopfield, 1984), leading to the generation of multiple behavior patterns. Out of these, one potential behavior pattern leads to the stability of the network, i.e., the network converges to a fixed or motionless point. Owing to this capability, the same fixed point can be treated as both input and output for such networks (Hopfield, 1984). This keeps the network in the same state. The network may also show oscillatory or chaotic behavior.
It has been observed that Hopfield neural networks can work as a stable system with more than
one fixed point (Hebb, 1949). The convergence of the network to a fixed point can be determined by
the initial point chosen at the beginning of the iteration. In the case of the Hopfield neural network,
these fixed points are called attractors, and a set of points attracted towards a particular attractor
during the iteration is known as attraction basin. All the points, which are part of the attraction
basin, are connected with an attractor. This can be understood by considering the following example,
wherein a specific (desirable) image is considered as an attractor, and the basin of attraction contains
the noisy or partial version of the desired image. Therefore, the noisy or partial image that vaguely
recalls the desired image may be remembered by the network associated with this image. The set of
these attractors are called memory, and, in this case, the network can operate as associative memory.
However, the HNN suffers from a large number of spurious memory attractors. The network may get stuck in these attractors, which prevents the true memory attractors from being retrieved. Thus, the presence of these spurious minima (or false minima) increases the probability of error in recalling the stored patterns. If we consider the Hopfield neural network as a dynamic system, the attractors lie at minimum energy values in the energy landscape, whereas spurious patterns lie nearer to the starting point of the attraction basin.
Storage of a single pattern results in two attractors (pattern and inverse of pattern) that minimize
the possibility of spurious attractors. Therefore, to overcome the problem of spurious patterns, we
propose Multiple-connection based Hopfield Neural Network (MC-HNN) in this paper, wherein
each pattern is stored through the dedicated connections among neurons resulting in the formation
of an etalon array. In this way, multiple patterns are stored through multiple connections, creating multiple two-dimensional etalon arrays (Garimella et al., 2021). The ordered collection of these etalon arrays forms a 3-D weight matrix, in which the third dimension represents the connection number or pattern number. This idea allows the network to store as many patterns as required by increasing


interconnections among neurons. However, the selection of the correct attraction basin (etalon array)
in which output patterns are stored corresponding to the input test pattern is a challenging task with
this arrangement. Nevertheless, the Lyapunov energy function computation with each etalon array
gives a clue to select the correct attraction basin (etalon array). However, in some cases, it may select
more than one attraction basin corresponding to the input test pattern. Then, this conflict can be
resolved by applying the Hamming distance with the Hebb rule. The Hamming distance (HD), introduced by the American mathematician Richard Hamming (Hamming, 1950), is a measure of the number of bit positions at which two equal-length binary strings differ. Hence, after recalling the output patterns by the Hebb rule from the selected etalon arrays, we compute the Hamming distance between the input pattern and each output pattern and select the one with the minimum Hamming distance as the correctly recalled output pattern. Therefore, this study presents the use of the Lyapunov energy function and the Hamming distance to select the correct etalon array for perfect recall of the patterns from their noisy versions.
The major contributions of this study are:

1. To propose a multi-connection-based Hopfield neural network (MC-HNN) architecture that improves the limited storage capacity of HNN and reduces the problem of spurious patterns.
2. To improve the recall success rate of the MC-HNN network by using the Lyapunov energy function and the Hamming distance with the Hebb rule for perfect recall of a pattern corresponding to its noisy version.
3. To analyze the performance of the proposed method using bit-map images of English alphabet characters.
4. To evaluate and compare the performance of MC-HNN with other state-of-the-art methods in terms of storage capacity and recall success rate on a real-time dataset of handwritten character images.

This paper contains six sections. Section 2 presents related work describing the different variants of the Hopfield neural network. Section 3 describes the architecture of the MC-HNN method.
Section 4 talks about the implementation of the proposed architecture. Section 5 includes results and
discussion. Finally, the conclusion is presented in section 6.

2. RELATED WORKS

The purpose of this review is to present the capability of some commonly used HNN-based approaches
that have been employed in literature efficiently to recall and handle the closely related objectives.
In this reviewed study, some technical aspects of prevailing approaches are discussed. Numerous
researchers have put their sincere efforts into optimizing the storage and recalling efficiency of the
Hopfield Neural Network. For instance, Hillar et al. (2012) proposed an algorithm for improving the
learning and storage of the Hopfield network using the Ising model with minimum probability flow,
which fitted parameters into energy based probabilistic model. Ising model with minimum probability
flow is used to learn a set of binary target patterns. After learning, model parameters are used to
define the Hopfield network that stores all the patterns as fixed points. This approach outperforms other algorithms in terms of speed, storage recovery, and recall of noisy patterns. Network complexity is its major drawback, as it increases with the number of training patterns.
Kumar & Singh (2010) investigated the methods for generating weight matrices using the genetic
algorithm. The fitness function checks the suitability of the newly generated population of weight
matrices. In their study, authors found the performance of the genetic algorithm better than that of
the Hebbian rule for retrieval of stored patterns. Singh & Jabin (2012) used the genetic algorithm to
compute the optimal weight matrices for patterns storage and recalling of the induced noisy patterns.
In Kumar & Singh (2012), the authors employed two different fitness functions to minimize GA
randomness, resulting in reduced population and search time. However, the major limitation of this


algorithm was its poor performance for noisy signals. Davey et al. (2004) introduced pseudo-inverse algorithms to create a weight matrix that ensures the convergence of training patterns to stable points and can store up to n patterns in a network of n units. M. P. Singh & Dixit (2013) optimized the Hopfield neural network using the simulated annealing (SA) technique in order to store and retrieve patterns. In their study, the authors tried to minimize the effect of false minima for
improving the recalling success rate. Rodriuez et al. (2007) proposed an algorithm to increase the
storage capacity of the Hopfield neural network based on random and fixed reference points. However,
the proposed approach did not yield satisfactory results. Dehghan et al. (2009) applied modified Hopfield finite-difference techniques to the Helmholtz equation, minimizing energy functions so as to mimic the HNN property of "monotone minimization of energy as time proceeds" (Wu et al., 2005). Gosti et al. (2019)
made an effort to improve the storage capacity of HNN using autapses with a stable state. This strategy
allows the HNN to store an exponential number of memory patterns. Based on the heterogeneous
connections among neurons, a new method named scale-free Hopfield neural network was introduced
by Kim et al. (2017) that tremendously enhanced the storage capacity of the network. However, the
developed method presents some errors during memory retrieval. Kobayashi (2017) implemented
Hopfield neural network based on complex neurons and used the projection rule to eliminate local
minima and thereby accelerated the recall process. Recently, Tsuji et al. (2021) extended this complex-valued HNN idea using gradient descent learning, but noisy pattern retrieval remains a significant issue. Rebentrost et al. (2018) developed a quantum algorithm for HNN that stores an exponentially large network in a polynomial number of qubits to reduce the computational complexity. Mutter et al. (2006)
proposed an idea of a multi-connect Hopfield neural network for the storage of grayscale and color
images. In this study, each bit plane of grayscale or color image was stored separately by designing
multiple connections (Mutter et al., 2006; Mutter et al., 2007). Kareem et al. (2012) later applied this approach by dividing patterns into groups of three bits and storing the complete pattern through multiple connections. Yadav et al. (2017) presented a comparative analysis between HNN and the Hamming network; of these two approaches, the Hamming network produced better results than HNN. Sahoo & Pradhan (2020) used a Hopfield neural network with HOG features to extract a compact feature set of the images used for storage and recall. In the HOG feature descriptor, gradient orientations are counted over small connected regions of the image. This method
improves the convergence rate; however, it offers limited storage capability and therefore tends to
suffer from the storage problem. Goel et al. (2019) applied discrete wavelet transform (DWT) for the
decomposition of handwritten images into four different bands as LL, LH, HL, and HH. These
extracted bands were used for storage and recall of real test patterns from noisy images. This method
presented a good performance for recalling the noisy patterns. However, the proposed method fails
to provide an adequate storage capacity.
The prime focus of this paper is to develop a multiple-connection-based architecture of HNN that improves the storage and recall capability of the network through suppression of spurious minima. In MC-HNN, each pattern is stored as an etalon array, i.e., a weight matrix built from the neuron interconnections belonging to a specific connection. The correct stored pattern is recalled from the noisy prototype test pattern by applying the test image to its respective etalon array using the Hebb rule. However, the correct etalon array must be selected as the one storing the perfect pattern corresponding to the test prototype pattern, which is difficult. In this paper, we use the energy function to select the correct etalon array. Using the energy function, the selection is not strictly confined to a single etalon array; in some specific cases, more than one etalon array may be selected for the stored pattern. This particular issue is resolved using the concept of Hamming distance, which is another feature of the proposed method.
The performance evaluation in terms of pattern recall is done for both methods, HNN and MC-HNN, on noisy input patterns of English alphabet letters. The simulation results demonstrate that MC-HNN outperforms the HNN method in terms of recall success rate for noisy English alphabet letters.


In addition to this, the proposed method also improves the storage capacity of the network, which is multiple times higher than that of the traditional HNN method.

3. MODELS

Figure 2. Hopfield neural network architecture

Fig. 2 presents the architecture of the discrete Hopfield neural network. The Hopfield neural network is a fully connected network, where the output of each neuron is fed back as input to every other neuron
(excluding itself). In HNN, the input or output of each neuron is either binary (0, 1) or bipolar (-1, 1). The connection weight between neurons $i$ and $j$ is denoted by $w_{ij}$. In general, the network has symmetric weights without self-connections, i.e., $w_{ij} = w_{ji}$ and $w_{ii} = 0$. During the learning phase, the network constructs a symmetric weight matrix $W = [w_{ij}]_{n \times n}$ of integers with $w_{ii} = 0$.
During the convergence phase, the input vector $I$ is multiplied with the weight matrix $W$ using conventional matrix-vector multiplication. The input vector $I$ is fed into the respective neurons, and the corresponding output vector is computed. If all neurons are updated together, the updating is called synchronous updating. Synchronous updating suffers from smaller basins of attraction. Hence, asynchronous updating is most widely used, in which, for each iteration, only one randomly selected component of the output vector $O = [o_i]$ is updated by passing it through a thresholding operation whose output is bipolar. The corresponding component of the input vector is replaced by this value and acts as the input vector for the next iteration. This process continues until equivalence between the input and output vectors is achieved, i.e., until a fixed or motionless point is reached. The algorithm of the Hopfield neural network is discussed in the subsequent subsections:

3.1. Learning Phase


The learning phase is also called the training phase, in which the weight matrix is computed using the Hebb rule (Hebb, 1949; Kohonen & Ruohonen, 1973). The weight matrix is the summation of etalon arrays.
of etalon arrays. Usually, these etalon arrays are the fixed points of the Hopfield neural network, but


this statement is not always true. In order to ensure that these etalon arrays act as attractors, the weight matrix can be defined as follows.
Let a set of bipolar patterns $S(p)$, $p = 1, \ldots, P$, be stored,

where $S(p) = (s_1(p), s_2(p), \ldots, s_i(p), \ldots, s_n(p))$.

The weight matrix is defined as:

$$W_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p) \quad \text{for } i \neq j \tag{1}$$

Here, the weights have no self-connections, i.e., $w_{ij} = 0$ for $i = j$.
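As an illustration, here is a minimal Python/NumPy sketch of this Hebbian learning step for bipolar patterns; the function name and the small example patterns are hypothetical, chosen only for demonstration.

```python
import numpy as np

def hebb_weight_matrix(patterns):
    """Hebb-rule weight matrix for bipolar patterns (equation 1).

    patterns: array of shape (P, n) with entries in {-1, +1}.
    Returns an (n, n) symmetric matrix with a zero diagonal.
    """
    patterns = np.asarray(patterns, dtype=float)
    W = patterns.T @ patterns      # sum over p of outer products s(p) s(p)^T
    np.fill_diagonal(W, 0)         # no self-connections: w_ii = 0
    return W

# Example: store two 4-bit bipolar patterns.
stored = np.array([[1, -1, 1, -1],
                   [1, 1, -1, -1]])
print(hebb_weight_matrix(stored))
```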

3.2. Convergence Phase


This phase is referred to as the testing phase of a Hopfield Neural Network, wherein we test a network
with noisy or partial patterns to recall the perfect patterns.

Algorithm 1: Recall/Convergence
1. Initialize the weight values as per the Hebb rule for bipolar patterns.
2. Set the initial input signal $O_i$ of the network equal to the external signal $I_i$, i.e., $O_i = I_i$ $(i = 1, 2, \ldots, n)$.
3. Select the input units $i = 1, 2, \ldots, n$ in random order (asynchronous update of units). Compute the net input of each unit:

$$O_{net\_i} = I_i + \sum_{j} O_j w_{ij} \tag{2}$$

4. Apply the activation over the net input to calculate the output:

$$O_i = \begin{cases} 1 & \text{if } O_{net\_i} > \theta \\ 0 & \text{if } O_{net\_i} = \theta \\ -1 & \text{if } O_{net\_i} < \theta \end{cases} \tag{3}$$

where $\theta$ is the threshold value.
5. Now feed the obtained output $O_i$ back to the other units. Thus, the activation vector is updated.
6. Repeat steps 3, 4, and 5 until the pattern has converged or ceased to change.
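The following Python sketch illustrates this asynchronous recall loop; helper and variable names are illustrative only, and at the threshold the unit simply keeps its previous state in this sketch.

```python
import numpy as np

def hopfield_recall(W, x, theta=0.0, max_sweeps=100, rng=None):
    """Asynchronous recall of a stored pattern from a noisy bipolar input x."""
    rng = np.random.default_rng() if rng is None else rng
    o = np.array(x, dtype=float)          # step 2: O_i = I_i
    n = len(o)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):      # step 3: random asynchronous order
            net = x[i] + W[i] @ o         # equation (2)
            new = 1.0 if net > theta else (-1.0 if net < theta else o[i])
            if new != o[i]:               # steps 4-5: update and feed back
                o[i] = new
                changed = True
        if not changed:                   # step 6: fixed point reached
            break
    return o
```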

Here, a scalar quantity called 'energy' is associated with each state of the Hopfield neural network. It is given by the energy function (also called the Lyapunov function), which is a bounded and non-increasing function. This determines the stability property of HNN and can be expressed as follows (Haykin, 1998):

$$E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n} O_i O_j w_{ij} - \sum_{i=1}^{n} I_i O_i + \sum_{i=1}^{n} \theta_i O_i \tag{4}$$


If HNN is stable, the value of energy function E decreases or remains constant whenever the
state of any node changes.
Let us assume that a node $i$ has changed its state from $O_i^k$ to $O_i^{k+1}$, i.e., its output has changed. Then the change in energy $\Delta E$ is given by:

$$\Delta E = E\!\left(O_i^{k+1}\right) - E\!\left(O_i^{k}\right) = -\left(\sum_{\substack{j=1 \\ j \neq i}}^{n} O_j w_{ij} + I_i - \theta_i\right)\left(O_i^{k+1} - O_i^{k}\right) = -(net_i)\,\Delta O_i \tag{5}$$

where $\Delta O_i = O_i^{k+1} - O_i^k$. The change in energy relies on the fact that only one node can update its activation at a time. The change-in-energy equation for $\Delta E$ also exploits the facts that $O_j^{k+1} = O_j^k$ for $j \neq i$, $w_{ij} = w_{ji}$, and $w_{ii} = 0$. There are three cases in which a change $\Delta O_i$ can occur in the activation of neuron $i$. If $O_i$ is +1, then it changes to -1; thus $\Delta O_i = -2$, and from equation (2) we get $net_i < 0$; substituting into equation (5) gives $\Delta E \leq 0$. On the other hand, if $O_i$ is -1 and changes to +1, then $\Delta O_i = 2$ and from equation (2) we get $net_i \geq 0$; substituting this into equation (5) again gives $\Delta E \leq 0$. There is also the case where $O_i$ does not change; then $\Delta O_i = 0$, and substituting into equation (5) gives $\Delta E = 0$. Therefore, the energy continuously decreases or remains the same. As a result, because the energy is bounded, the HNN must reach a stable equilibrium state such that the energy does not change with further iterations. From this, it is inferred that the energy change depends on the change of activation of one unit and on the symmetry of the weight matrix with zeros on the diagonal.
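To make the energy computation concrete, a short Python helper evaluating equation (4) is sketched below; it assumes a symmetric weight matrix with a zero diagonal, and the external inputs and thresholds default to zero, as they do in equations (6), (7), and (10) later in the paper.

```python
import numpy as np

def hopfield_energy(W, o, I=None, theta=None):
    """Lyapunov energy of state o under weight matrix W (equation 4)."""
    o = np.asarray(o, dtype=float)
    I = np.zeros_like(o) if I is None else np.asarray(I, dtype=float)
    theta = np.zeros_like(o) if theta is None else np.asarray(theta, dtype=float)
    # With a zero diagonal, o @ W @ o equals the double sum over i != j.
    return -0.5 * (o @ W @ o) - I @ o + theta @ o
```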
HNN always converges to a state which is a local minimum in the energy function in a finite
number of updating steps. In case a state is a local minimum in the energy function, it is considered a
stable state for the network. A stable state is a retrieved pattern corresponding to a noisy or incomplete

Figure 3. Energy Landscape of Hopfield Neural Network


input pattern. Fig.3 shows the energy landscape of HNN, highlighting the current state of the network,
an attractor state (a local minimum) (to which the network will converge), and a basin of attraction.
Storage capacity of an HNN is another crucial factor (Amit et al., 1985; Abu-Mostafa & Jacques, 1985; Abu-Mostafa et al., 2012). J. Hopfield suggested that an HNN can store and recall approximately $C_{binary\_patterns} = 0.15 \times n$ binary patterns correctly if the network consists of $n$ nodes (Hopfield, 1984). Abu-Mostafa et al. refined this value for bipolar patterns and calculated the storage capacity $C_{bipolar\_patterns} = n / (2\log_2 n)$ for a network of $n$ nodes (Abu-Mostafa & Jacques, 1985).
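For orientation, plugging the 35-neuron network used later in Section 4 into these expressions gives approximately:

$$C_{binary\_patterns} \approx 0.15 \times 35 \approx 5 \text{ patterns}, \qquad C_{bipolar\_patterns} = \frac{35}{2\log_2 35} \approx 3.4 \text{ patterns}$$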
The application of HNN as an auto-associative memory suffers from various limitations. Some
of them are (Abu-mostafa & Jacques, 1985):

1. HNN has limited storage capacity. If a large number of patterns is stored, the network may converge to a spurious (false) pattern that differs from all of the example patterns.
2. During network convergence, it is desirable to reach the global minimum rather than to settle at a local minimum. Fig. 4 shows the graph of the energy function to illustrate the concept of local and global minima. In this graph, A and B both indicate a minimum energy level relative to the other points in their attraction basins, but point B has a lower energy value than A. Hence, point A corresponds to a local minimum. If the state reaches point C, further movement toward point B, not point A, is desired; similarly, if the state has arrived close to point A, it should ideally avoid settling there and proceed to point B instead.

Figure 4. Global and local minima

3. The Hopfield neural network suffers from correlation problems (Sampath & Srivastava, 2020). Two or more patterns are correlated if they share multiple bits. Therefore, convergence of one pattern may land in the stable state of another pattern in the network. This problem can be avoided by using orthogonal patterns, i.e., all stored patterns must be orthogonal to each other.
4. The ratio of missing and mistaken bits that can be tolerated in noisy or incomplete input patterns is limited.
5. Cross-association is an important issue, in which the network converges to the inverse of a pattern.

To overcome the spurious pattern problem and the limited storage capacity of HNN, we propose a modified Hopfield neural network architecture with multiple connections. The reason for applying multiple connections is to store each training pattern as an isolated weight matrix using Hebbian learning. If only a single pattern is stored in the network, spurious local minima are absent from the energy landscape, and the stored pattern acts as an attractor. This idea motivated us to use multiple connections.


Figure 5. Proposed MC-HNN Architecture

Using multiple connections improves the storage capacity, since the network's storage capacity now depends on the number of these connections.
Fig. 5 shows the architecture of the Multiple-Connection-based Hopfield Neural Network (MC-HNN). In this architecture, each neuron is connected to the other neurons through multiple connections. The connection weight between neurons $i$ and $j$ for the $k$th connection is given by $w_{ij}^k$ (with $w_{ij}^k = w_{ji}^k$ and $w_{ii}^k = 0$). During the learning phase of the network, each pattern is stored using a specific connection in the network such that there is an etalon array corresponding to that particular connection (for example, the $k$th pattern is stored in the network by using the $k$th connection of all neurons, and so on). During the learning phase, the weight matrix $W_{n \times n \times k}$ is determined by collecting these etalon arrays (where $n$ is the size of the pattern and $k$ is the number of connections), as shown in Fig. 6.

Figure 6. Snapshot of weight matrix of MC-HNN method

In order to determine the stability of the states (i.e., the global minima), the energy value of each pattern is computed using the Lyapunov function given by equation (6):

$$E(p) = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n} s_i(p)\, s_j(p)\, w_{ij}^p \quad \text{for } p = 1, \ldots, P \text{ and } \theta = 0 \tag{6}$$

In order to recall a complete pattern, it is quite difficult to locate the stable state corresponding to
a particular pattern because each etalon array contains a stable state. This problem can be resolved by
computing the energy value of the input pattern with each etalon array using the Lyapunov function
as given in equation (7). The minimum energy value of the input pattern with etalon arrays denotes
the affinity of the input pattern to attractors. In other words, we can represent the input pattern as a
point in the attraction basin that corresponds to its respective etalon array.

$$SE(p) = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n} I_i\, I_j\, w_{ij}^p \quad \text{for } p = 1, \ldots, P \text{ and } \theta = 0 \tag{7}$$

Thereafter, we compute the output vectors corresponding to each etalon array using asynchronous
updates.
Equations (8) and (9) present the computation of the output patterns, selecting the input units $i = 1, 2, \ldots, n$ in random order (asynchronous update of units); the energy value corresponding to each output pattern, which reflects the convergence of the input pattern to an output pattern at a minimum energy value, is then computed using equation (10).

$$O_{net\_i}(p) = I_i + \sum_{j} O_j(p)\, w_{ij}^p \quad \text{for } p = 1, \ldots, P \tag{8}$$

$$O_i(p) = \begin{cases} 1 & \text{if } O_{net\_i}(p) > \theta \\ 0 & \text{if } O_{net\_i}(p) = \theta \\ -1 & \text{if } O_{net\_i}(p) < \theta \end{cases} \tag{9}$$

$$SE_1(p) = -\frac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n} O_i(p)\, O_j(p)\, w_{ij}^p \quad \text{for } p = 1, \ldots, P \text{ and } \theta = 0 \tag{10}$$

To get the correct output, the affinity of the input pattern to the attraction basin(s) of the various motionless points can be determined by computing the minimum value of the energy function $SE(p)$ (for $p = 1, 2, \ldots, P$), i.e., $\min\{SE(1), SE(2), SE(3), \ldots, SE(P)\}$. In the second step, we choose the output patterns corresponding to those etalon arrays that show this affinity with the input test pattern. If more than one output pattern is a candidate solution, the Hamming distance (Hamming,


1950) plays an important role in order to determine the correct output pattern. Hamming distance
also gives information about the number of bit differences in two binary vectors. A pattern with a
smaller Hamming distance from the input test pattern gives the correct output pattern. Algorithm 2 for Hamming distance computation is given below:
Algorithm 2: Hamming Distance
1. Let A and B be two vectors of equal length. Compute the length of A (or B).
2. C = 0;
3. for i = 1 : length(A)
4.     if (A(i) ~= B(i))
5.         C = C + 1;
6.     end_if
7. end_for
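For reference, the same computation in Python/NumPy (an illustrative equivalent of Algorithm 2, with hypothetical example vectors):

```python
import numpy as np

def hamming_distance(a, b):
    """Number of positions at which two equal-length binary/bipolar vectors differ."""
    a, b = np.asarray(a), np.asarray(b)
    return int(np.sum(a != b))

# Example: two 5-bit bipolar vectors differing in two positions.
print(hamming_distance([1, -1, 1, 1, -1], [1, 1, 1, -1, -1]))  # prints 2
```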
To ensure the stability of the output patterns, we compare the energy value of the output pattern with the energy value of the stable point. If the energy values of both patterns are the same, the recalled pattern is considered stable (or correct); otherwise it is not. The learning phase and convergence phase of the MC-HNN method are discussed in more detail in the subsequent sections.

3.3. Learning Phase


During the learning phase, the weight matrix is computed. It is determined by computing an etalon array corresponding to each pattern; the etalon arrays thus obtained are then stored in the weight matrix. Thus, the ordered arrangement of these etalon arrays along the third dimension forms the weight matrix. These etalon arrays are the fixed points of MC-HNN. In order to ensure that these etalon arrays act as attractors, the weight matrix is defined as follows.
Let a set of bipolar patterns $S(p)$, $p = 1, \ldots, P$, be stored,

where $S(p) = (s_1(p), s_2(p), \ldots, s_i(p), \ldots, s_n(p))$.

The etalon array corresponding to each pattern is defined as:

$$W_{ij}^p = s_i(p)\, s_j(p) \quad \text{for } i \neq j \tag{11}$$

Here, the weights have no self-connections, i.e., $w_{ij}^p = 0$ for $i = j$.

The weight matrix is the collection of the etalon arrays:

$$W = \left(W^1, W^2, W^3, \ldots, W^P\right) \tag{12}$$
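A minimal Python sketch of this learning phase, stacking one etalon array per stored pattern into a 3-D weight matrix; the shapes and function names are illustrative, not taken from the paper.

```python
import numpy as np

def mc_hnn_learn(patterns):
    """Build the MC-HNN weight matrix of shape (P, n, n), one etalon array per pattern.

    patterns: array of shape (P, n) with bipolar entries in {-1, +1}.
    W[p] is the etalon array of pattern p (equation 11).
    """
    patterns = np.asarray(patterns, dtype=float)
    P, n = patterns.shape
    W = np.zeros((P, n, n))
    for p in range(P):
        W[p] = np.outer(patterns[p], patterns[p])  # s_i(p) * s_j(p)
        np.fill_diagonal(W[p], 0)                  # w_ii^p = 0
    return W
```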

3.4. Convergence Phase


This phase is known as the testing phase of the MC-HNN method and is used here to test the network with noisy or partial patterns for perfect recall of the patterns.
Algorithm 3: Convergence Phase
1. Initialize the weight matrix as per the Hebb rule for the bipolar patterns, as given in equation (12).
2. Calculate the energy function value $E(p)$ (for $p = 1, 2, \ldots, P$) corresponding to each etalon array and its associated pattern using equation (6).
3. For the input test pattern, compute the energy function value $SE(p)$ (for $p = 1, 2, \ldots, P$) corresponding to each etalon array and the input pattern using equation (7).
4. Compute the output patterns $O(p)$ (for $p = 1, 2, \ldots, P$) corresponding to the input pattern and each etalon array by asynchronous update using equations (8) and (9).
5. Compute the energy function value $SE_1(p)$ ($p = 1, 2, \ldots, P$) for the output pattern corresponding to each etalon array using equation (10).
6. To get the correct output, identify the affinity of the input pattern to the attraction basin(s) of the various motionless points or attractors by determining the minimum energy function value(s) of $SE(p)$ (for $p = 1, 2, \ldots, P$), i.e., $\min\{SE(1), SE(2), SE(3), \ldots, SE(P)\}$.
7. Select the output pattern(s) corresponding to the minimum energy function value found in step 6.
8. If more than one output pattern is a candidate solution, select the pattern that has the minimum Hamming distance from the input test pattern, as given in Algorithm 2.
9. Compare the energy function values $E(p)$ and $SE_1(p)$ to check the stability of the output pattern; if both values are the same, the pattern is perfectly recalled, otherwise print an error message.
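A condensed Python sketch of this convergence phase is shown below; it reuses the hopfield_recall helper sketched earlier, assumes a zero threshold and no external field in the energy term (as in equation (7)), and is only an illustrative outline of Algorithm 3, not the authors' MATLAB implementation.

```python
import numpy as np

def mc_hnn_recall(W, x):
    """Recall a pattern from noisy bipolar input x using the 3-D weight matrix W of shape (P, n, n)."""
    x = np.asarray(x, dtype=float)
    P = W.shape[0]
    # Steps 3 and 6: energy of the input against every etalon array (equation 7, theta = 0).
    se = np.array([-0.5 * (x @ W[p] @ x) for p in range(P)])
    candidates = np.flatnonzero(np.isclose(se, se.min()))
    # Steps 4 and 7: asynchronous Hebb-rule recall on each candidate etalon array.
    outputs = [hopfield_recall(W[p], x) for p in candidates]
    # Step 8: break ties with the Hamming distance to the input pattern.
    distances = [int(np.sum(o != x)) for o in outputs]
    return outputs[int(np.argmin(distances))]
```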

4. IMPLEMENTATION

The proposed method was implemented on MATLAB-9.6 (R2019a) using a personal computer with a
processor speed of 1.8GHz and 6 GB memory. In the current study, the proposed network consists of
35 neurons but contains 26 connections between all neurons to store 26 letters of the English alphabet.
In this study, we used self-generated bit-map images of English alphabet letters as input patterns,
as shown in Fig 7. Each bit-map image is of size 7 × 5 pixels, wherein 0 and 1 represent black and
white pixels. In order to make input data suitable for storage in HNN, patterns are converted into a
series of bipolar values 1 and -1. For instance, the pattern vector corresponding to the bit-map image of the letter A can be represented as:

[-1 -1 1 -1 -1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 -1 -1 -1 1 1 1 1 1 1 1 -1 -1 -1 1 1 -1 -1 -1 1]
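A small illustrative snippet of this encoding step in Python; the bit map below is a hypothetical placeholder, not the paper's exact bit map of the letter A.

```python
import numpy as np

def bitmap_to_bipolar(bitmap):
    """Flatten a 7x5 binary bit map (0 = black, 1 = white) into a length-35 bipolar vector."""
    b = np.asarray(bitmap).reshape(-1)
    return np.where(b == 1, 1, -1)

# Hypothetical 7x5 bit map: a single vertical stroke, just for illustration.
letter = np.zeros((7, 5), dtype=int)
letter[:, 2] = 1
print(bitmap_to_bipolar(letter))
```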

Figure 7. The set of patterns (Bit-map image for alphabet letter A) used for training


The pattern shown in Fig.7 contains 35 bipolar values of 1 and -1. Therefore, 35 neurons are
needed in the network to store and recall the patterns. In this study, HNN was trained with the Hebbian
learning rule using patterns of size 35 corresponding to each alphabet letter. The pattern information
corresponding to these alphabets was encoded in the form of a weight matrix that ultimately represents
the encoded information of the memorized patterns. In the case of MC-HNN, by contrast, each pattern of the English alphabet is memorized in the form of an etalon array through a specific set of connections among neurons; the 26 independent connections lead to the formation of 26 etalon arrays. The
ordered collection of these etalon arrays represents encoded information corresponding to input
patterns in terms of the weight matrix.
During the recall process, noisy prototype input patterns are presented to the HNN network. The network continues to iterate using the Hebb rule until it reaches a stable state. The attained stable state may represent any one of the memorized patterns or a false minimum. In the MC-HNN algorithm, each pattern is encoded in the form of an etalon array; due to this, it is not possible to apply the Hebb rule directly. Therefore, in this study, we used the energy function to identify the attraction basin for the noisy input pattern. Using the energy function, a suitable etalon array was selected, wherein the noisy pattern is encoded in the form of a perfect pattern. Subsequently, we applied the Hebb rule on the selected etalon array in an iterative manner to generate the output pattern(s). Moreover, in this study, we also used the Hamming distance to determine the closeness between two binary patterns (the stored binary input pattern and the noisy test input image pattern). The Hamming distance was used to resolve the conflict, by counting the number of differing bits between two binary patterns, whenever more than one candidate solution was obtained.

5. RESULTS AND DISCUSSION

The simulation results presented in this section demonstrate that MC-HNN outperforms the traditional HNN algorithm for recalling English alphabet letters using the Hebb rule. Figures 8(a) and 8(b) present the stored original input image patterns and the noisy test image patterns. The image patterns shown in Fig. 8(b) contain noise with errors of 2, 4, 6, 8, and 10 bits, respectively. The noise was randomly generated and introduced into the original input image patterns to test and evaluate the performance of the proposed method. Fig. 8(c) illustrates the recalled image patterns of English alphabet letters. The recalled image patterns demonstrate that the images have been recalled perfectly using the MC-HNN method. Here, the recall success rate is given as:

$$\text{Recall Success Rate (RSR)} = \frac{\text{Number of successful trials}}{\text{Total number of trials with noisy test patterns}}$$

The recall success rate is thus the fraction of successful trials out of the total number of trials with noisy test patterns. A trial is said to be successful when it achieves the global minimum.
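As an illustration of how such trials can be scored, the following Python sketch flips a chosen number of random bits of a stored bipolar pattern and counts exact recalls; the trial count and the recall_fn hook are arbitrary choices for this example, not the paper's exact protocol.

```python
import numpy as np

def recall_success_rate(recall_fn, pattern, n_bit_errors, trials=1000, rng=None):
    """Fraction of trials in which recall_fn recovers `pattern` from a noisy copy."""
    rng = np.random.default_rng() if rng is None else rng
    pattern = np.asarray(pattern)
    successes = 0
    for _ in range(trials):
        noisy = pattern.copy()
        flip = rng.choice(len(pattern), size=n_bit_errors, replace=False)
        noisy[flip] = -noisy[flip]        # flip the selected bipolar bits
        if np.array_equal(recall_fn(noisy), pattern):
            successes += 1
    return successes / trials
```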
Tables 1-5 illustrate the recall success rates for test pattern images containing zero-, two-, four-, six-, and eight-bit errors, respectively. The results clearly show that the traditional HNN gives a satisfactory performance with regard to recall of input patterns with zero bit errors for all the example images, but its performance subsequently deteriorates, and the recall success rate drops to 0.48% for two-bit errors and 0.00% for four-bit and six-bit errors. In contrast, the MC-HNN method achieved a recall success rate of 100% up to 6-bit errors, which decreased to 93.30% and 81.62% for 8-bit and 10-bit errors, respectively, as illustrated in Tables 1-5 (results for the 10-bit error are not shown in Tables 1-5). We also conducted a test with 10-bit errors and observed recall success rates of 81.62%, 0%, and 0% for the MC-HNN, HNN, and HNN-GA (Tables 6-9) methods, respectively. It was observed that both comparison methods failed to recall the image patterns at higher noise levels (i.e., beyond 2-bit errors for HNN and 4-bit errors for HNN-GA), whereas the proposed method worked well up

Figure 8. Recalling simulation results of MC-HNN method for alphabet character A - Column 1: (a) Stored input patterns, (b) Noisy
image patterns with 0, 2, 4, 6, 8, & 10-bit error noise, (c) Recalled pattern using the proposed method


Table 1. Comparative results of recalling of letters of the English alphabet with 0-bit error

Letter   Traditional HNN (%)   Multiple Connections HNN (%)   Letter   Traditional HNN (%)   Multiple Connections HNN (%)
A 69.30 100 N 85.60 100
B 92.50 100 O 95.10 100
C 90.10 100 P 90.60 100
D 96.60 100 Q 81.30 100
E 88.00 100 R 92.10 100
F 86.30 100 S 86.40 100
G 90.10 100 T 93.60 100
H 91.00 100 U 87.00 100
I 98.80 100 V 84.60 100
J 91.60 100 W 95.50 100
K 77.40 100 X 87.30 100
L 76.80 100 Y 89.60 100
M 88.10 100 Z 85.40 100

Table 2. Comparative results of recalling of letters of the English alphabet with 2-bit error

Letter   Traditional HNN (%)   Multiple Connections HNN (%)   Letter   Traditional HNN (%)   Multiple Connections HNN (%)
A 0.00 100 N 0.30 100
B 0.40 100 O 0.20 100
C 0.30 100 P 0.20 100
D 0.30 100 Q 0.00 100
E 0.20 100 R 0.20 100
F 0.30 100 S 0.30 100
G 0.20 100 T 0.20 100
H 0.10 100 U 0.10 100
I 0.60 100 V 0.00 100
J 0.20 100 W 0.10 100
K 0.00 100 X 0.20 100
L 0.20 100 Y 0.20 100
M 0.10 100 Z 0.00 100


Table 3. Comparative results of recalling of letters of the English alphabet with 4-bit error

Letter   Traditional HNN (%)   Multiple Connections HNN (%)   Letter   Traditional HNN (%)   Multiple Connections HNN (%)
A 0.00 100 N 0.00 100
B 0.00 100 O 0.00 100
C 0.00 100 P 0.00 100
D 0.00 100 Q 0.00 100
E 0.00 100 R 0.00 100
F 0.00 100 S 0.00 100
G 0.00 100 T 0.00 100
H 0.00 100 U 0.00 100
I 0.00 100 V 0.00 100
J 0.00 100 W 0.00 100
K 0.00 100 X 0.00 100
L 0.00 100 Y 0.00 100
M 0.00 100 Z 0.00 100

Table 4. Comparative results of recalling of letters of English alphabet with 6-bit error

Letter   Traditional HNN (%)   Multiple Connections HNN (%)   Letter   Traditional HNN (%)   Multiple Connections HNN (%)
A 0.00 100 N 0.00 100
B 0.00 100 O 0.00 100
C 0.00 100 P 0.00 100
D 0.00 100 Q 0.00 100
E 0.00 100 R 0.00 100
F 0.00 100 S 0.00 100
G 0.00 100 T 0.00 100
H 0.00 100 U 0.00 100
I 0.00 100 V 0.00 100
J 0.00 100 W 0.00 100
K 0.00 100 X 0.00 100
L 0.00 100 Y 0.00 100
M 0.00 100 Z 0.00 100


Table 5. Comparative results of recalling of letters of English alphabet with 8-bit error

Letter   Traditional HNN (%)   Multiple Connections HNN (%)   Letter   Traditional HNN (%)   Multiple Connections HNN (%)
A 0.00 97.00 N 0.00 97.00
B 0.00 95.20 O 0.00 94.50
C 0.00 95.30 P 0.00 94.80
D 0.00 94.80 Q 0.00 95.10
E 0.00 94.40 R 0.00 94.30
F 0.00 93.30 S 0.00 96.40
G 0.00 94.40 T 0.00 95.50
H 0.00 96.60 U 0.00 94.70
I 0.00 94.50 V 0.00 96.80
J 0.00 94.20 W 0.00 97.30
K 0.00 93.80 X 0.00 96.60
L 0.00 93.80 Y 0.00 95.70
M 0.00 96.40 Z 0.00 95.50

to the 10-bit noise level. Thus, we found our method to be more competent than the methods reported in the literature for recalling the input test patterns from noisy images. Our method achieved a consistent recall success rate of 100% up to 6-bit errors, which dropped to about 80% for 10-bit errors, indicating the strong performance of the developed method.
Using the same dataset, we also compared and analyzed in detail the performance of our method against the sub-optimal genetic algorithm (GA) based HNN method proposed by Kumar & Singh (2012). Results of both methods were compared for different noise levels, viz., zero-, two-, four-, and six-bit errors in the input test patterns, as listed in Tables 6-9. From the results shown in Tables 6-9, it is quite clear that the sub-optimal GA based HNN algorithm obtained a recall capability of 100% for the noiseless patterns (i.e., 0-bit error); however, its performance degrades as the noise level, in terms of bit errors in the test patterns, increases. It achieved maximum recall success rates of 98.7%, 29.2%, and 0.0% for 2-bit, 4-bit, and 6-bit error test patterns, respectively. The proposed algorithm, by contrast, achieved a recall rate of 100% for the same noise levels (i.e., 2-bit, 4-bit, and 6-bit errors).
The storage capacity of the traditional HNN with the Hebb rule is approximately $n / (2 \times \log_2 n)$ bipolar patterns (Abu-Mostafa & Jacques, 1985), where $n$ is the number of neurons in the HNN. If such a network is overloaded with a number of patterns exceeding its capacity, its recall success rate declines towards zero very rapidly, as can be noted from the results given in Tables 1-5.
If each neuron is connected to the other neurons through multiple connections, then the storage capacity of the multiple-connection-based HNN network is modified as per the following equation:

$$C = m \times n / (2 \times \log_2 n) \tag{13}$$


Table 6. Comparative results of recalling of letters of English alphabet with 0-bit error

Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)   Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)
A 100 100 N 100 100
B 100 100 O 100 100
C 100 100 P 100 100
D 100 100 Q 100 100
E 100 100 R 100 100
F 100 100 S 100 100
G 100 100 T 100 100
H 100 100 U 100 100
I 100 100 V 100 100
J 100 100 W 100 100
K 100 100 X 100 100
L 100 100 Y 100 100
M 100 100 Z 100 100

Table 7. Comparative results of recalling of letters of English alphabet with 2-bit error

Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)   Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)
A 81.20 100 N 72.25 100
B 87.19 100 O 99.50 100
C 90.98 100 P 86.80 100
D 98.70 100 Q 64.00 100
E 75.73 100 R 84.50 100
F 85.85 100 S 83.90 100
G 89.44 100 T 92.20 100
H 87.30 100 U 87.30 100
I 99.50 100 V 74.30 100
J 86.50 100 W 89.75 100
K 70.00 100 X 93.90 100
L 70.50 100 Y 86.43 100
M 82.55 100 Z 76.12 100


Table 8. Comparative results of recalling of letters of English alphabet with 4-bit error

Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)   Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)
A 6.70 100 N 14.90 100
B 20.25 100 O 27.10 100
C 22.60 100 P 18.65 100
D 29.95 100 Q 11.20 100
E 16.05 100 R 19.05 100
F 16.75 100 S 17.75 100
G 19.70 100 T 21.70 100
H 19.80 100 U 21.30 100
I 29.25 100 V 14.15 100
J 20.50 100 W 25.50 100
K 8.90 100 X 22.95 100
L 10.70 100 Y 13.60 100
M 19.15 100 Z 12.95 100

Table 9. Comparative results of recalling of letters of English alphabet with 6-bit error

Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)   Letter   HNN with Sub-optimal GA (%)   Multiple Connections HNN (%)
A 0.00 100 N 0.00 100
B 0.00 100 O 0.00 100
C 0.00 100 P 0.00 100
D 0.00 100 Q 0.00 100
E 0.00 100 R 0.00 100
F 0.00 100 S 0.00 100
G 0.00 100 T 0.00 100
H 0.00 100 U 0.00 100
I 0.00 100 V 0.00 100
J 0.00 100 W 0.00 100
K 0.00 100 X 0.00 100
L 0.00 100 Y 0.00 100
M 0.00 100 Z 0.00 100


where $m$ denotes the number of multiple connections, and $n / (2 \times \log_2 n)$ represents the number of stored patterns corresponding to each connection.
In the proposed MC-HNN architecture, each connection among the 35 neurons is employed to store a single pattern in order to minimize spurious patterns; therefore, the term $n / (2 \times \log_2 n)$ is set to 1. Hence, equation (13) turns out to be as follows:

$$C = m \times 1 = m \tag{14}$$

Equation (14) shows that the storage capacity of MC-HNN is improved by a factor of $m$ relative to the HNN method.
In this study, HNN offers a storage capacity of $35 / (2 \times \log_2 35) = 3.41 \approx 4$ patterns for the 26 English alphabet letters of size 35 ($7 \times 5$), whereas MC-HNN offers a storage capacity of 26 (computed using equation 14). This happens because, in MC-HNN, we have 26 connections to store the 26 English alphabet letters, where each letter is stored via dedicated connections among the 35 neurons. The results presented in this study illustrate that the storage capacity of MC-HNN is improved by 84.61% (i.e., enhanced 6.5 times) compared with the HNN method, which shows the great success of the proposed method.
In the presented method, the storage capacity depends upon the number of connections in the network; hence it can be increased (or decreased) by changing the number of connections as required. The major advantage of the MC-HNN method is that it extends the capability of the traditional HNN method by storing the individual patterns in etalon arrays through dedicated connections among neurons. Hence, its storage capacity ultimately depends upon the number of connections and is independent of the total number of neurons in the network, which ultimately enhances the storage capacity of the MC-HNN method. This network might, however, need more computational power. Nowadays, the problem of high computational power is not a big issue due to the easy availability of fast graphics processing units (GPUs) and owing to the power of parallel and distributed computing.
In this study, the Hopfield neural network achieved average recall success rates of 88.10%, 0.18%, 0%, 0%, 0%, and 0% for images containing noise levels of 0-, 2-, 4-, 6-, 8-, and 10-bit errors, respectively. The HNN-GA method obtained average results of 100%, 84.6%, and 18.51% for 0-, 2-, and 4-bit errors, respectively. Although the HNN-GA method produced good results for 0-, 2-, and 4-bit noisy images, it failed to yield satisfactory results for 6- and 8-bit noisy images, as illustrated in Fig. 9. In the case of our proposed method, the average recall success rate was 100% for 0-, 2-, 4-, and 6-bit errors, which slowly decreases to 95.30% and 81.62% for 8- and 10-bit noisy images. The comparative analysis of the average recall success rates of the different methods is shown in Fig. 9.
Apart from Ref. (Kumar & Singh, 2012), we have also compared the performance of our method with those of the current state-of-the-art methods (Sahoo & Pradhan, 2020) and (Goel et al., 2019), as shown in Table 10. In (Goel et al., 2019), the authors used recurrent neural networks and the discrete wavelet transform (DWT) for recalling static images. The DWT and an image dilation algorithm were used to preprocess the input images. After preprocessing, each input image was converted into a bipolar pattern vector of size 3600 × 1 and subsequently fed as an input to the RNN for encoding using the Hebbian correlation rule. Sahoo & Pradhan (2020) used HNN with HOG features to recall handwritten Odia characters (vowels, numbers, and consonants). To recall the noisy images from the stored patterns of the network, 0-90% external noise was introduced into the original images. Our proposed method recalled 31.62% of the images correctly even with 90% noise.
The results shown in Table 10 illustrate the efficacy of the proposed method over the other state-of-the-art methods on 36 handwritten Hindi alphabet character images of 50 × 50 pixels, as shown in Fig. 10.


Figure 9. Comparative analysis of average recall success rate of HNN, HNN-GA, and MC-HNN method

Table 10. Performance comparison of the proposed method with other state-of-the-art methods

Methods    Average recall success rate (%) at noise levels of:
           0%      20%      40%      60%      80%      90%
HNN-sub-optimal GA [12] 80% 30.6% 0.7% 0.0% 0.0% 0.0%
HNN-HOG [26] 100% 93.7% 16.2% 0.0% 0.0% 0.0%
RNN-DWT [27] 100% 90.4% 12.4% 0.9% 0.0% 0.0%
MC-HNN (Proposed Method) 100% 100% 100% 100% 70.30% 31.62%

Figure 10. Snapshot of Hindi alphabet characters images

6. CONCLUSION

The simulation results presented in this paper indicate that the MC-HNN yields a higher recall success
rate than those of other HNN-based traditional methods available in the literature. In traditional
HNN-based methods, all the stored vectors information is encoded in a single weight matrix, which
increases the probability of noisy input prototype test patterns to get stuck in local minima resulting
in spurious patterns. However, this problem has been resolved in the MC-HNN method by using
etalon arrays corresponding to each pattern to be stored in the network that reduces the chances of
test patterns getting trapped in local minima, resulting in perfect recall. In addition to this, the proposed methodology also offers a higher storage capacity in comparison to traditional HNN-based methods. Due to the usage of etalon arrays for the storage of individual patterns, the MC-HNN method can store as many patterns as demanded merely by introducing extra interconnections among neurons.
In this research, the direct applications of multiple connections and an energy function to
pattern associations have been explored. The aim of the proposed study was to explore an alternative
architecture to resolve the issues associated with the traditional HNN method and other methods which


are reliant upon it. The experimental results with the modified architecture are quite encouraging.
Nevertheless, more work needs to be performed, especially on the tests of noisy input patterns. We
also see the applicability of the proposed network for identifying objects, shapes, overlapped images,
etc. The performance, as well as applicability of the proposed architecture, can be further enhanced
in the future by introducing the concept of orthogonal patterns in the network. Apart from this, its application can be extended to store and recall grayscale images in the form of bit planes corresponding to multiple connections (eight connections) among neurons, and it can further be extended to color images. It is also noticed that the major limitation of the proposed method is its high computational time compared with traditional HNN. In the future, we will also try to reduce it to a level comparable with traditional HNN by incorporating feature extraction and selection methods on the patterns.


REFERENCES

Abu-Mostafa, Y. S., & Jacques, J. S. (1985). Information Capacity of the Hopfield Model. IEEE Transactions on Information Theory, IT-31(4), 461–464. doi:10.1109/TIT.1985.1057069
Abu-Mostafa, Y. S., Magdon-Ismail, M., & Lin, H. T. (2012). Learning From Data. AMLBook.
Amit, D. J., Gutfreund, H., & Sompolinsky, H. (1985). Storing Infinite Numbers of Patterns in a Spin-Glass
Model of Neural Networks. Physical Review Letters, 55(14), 1530–1533. doi:10.1103/PhysRevLett.55.1530
PMID:10031847
Davey, N., Hunt, S. P., Adams, R. G., & Davey, N. (2004). High Capacity Recurrent Associative Memories.
Neurocomputing-IJON, 62, 459–491. doi:10.1016/j.neucom.2004.02.007
Dehghan, M., Nourian, M., & Menhaj, M. B. (2009). Numerical solution of helmholtz equation by the modified
hopfield finite difference techniques. Numerical Methods for Partial Differential Equations, 25(3), 637–656.
doi:10.1002/num.20366
Garimella, R. M., Singh, A., Jyothi Prasanna, G., Jagannadan, M., Vankam, V. S., & Bairaju, M. L. (2021).
1-D/2-D/3-D Hopfield Associative Memories. IOP Conference Series. Materials Science and Engineering,
1049(1), 1–6. doi:10.1088/1757-899X/1049/1/012001
Goel, R. K., Vishnoi, S., & Shrivastava, S. (2019). Image denoising by hybridizing preprocessed discrete wavelet
transformation and recurrent neural networks. International Journal of Innovative Technology and Exploring
Engineering, 8(10), 3451–3457. doi:10.35940/ijitee.J9718.0881019
Gosti, G., Folli, V., Leonetti, M., & Ruocco, G. (2019). Beyond the maximum storage capacity limit in hopfield
recurrent neural networks. Entropy (Basel, Switzerland), 21(8), 1–12. doi:10.3390/e21080726 PMID:33267440
Hamming, R. W. (1950). Error detecting and error correcting codes. The Bell System Technical Journal, 29(2),
147–160. doi:10.1002/j.1538-7305.1950.tb00463.x
Haykin, S. (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall PTR.
Hebb, D. (1949). The Organization of Behavior a Neuropsychological Theory. Wiley.
Hillar, C., Dickstein, J. S., & Koepsell, K. (2012). Efficient and optimal Little-Hopfield auto-associative memory
storage using minimum probability flow. Neural Information Processing Systems (NIPS) Workshop on Discrete
Optimization in Machine Learning (DISCML), 1–6.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities.
Proceedings of the National Academy of Sciences of the United States of America, 79(8), 2554–2558. doi:10.1073/
pnas.79.8.2554 PMID:6953413
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America, 81(10), 3088–3092.
Hu, S., Hu, S. G., Liu, Y., Liu, Z., Chen, T. P., Wang, J. J., Yu, Q., Deng, L. J., Yin, Y., & Hosaka, S. (2015). A
memristive Hopfield network for associative memory. Nature Communications, 6, 1–8. https://doi.org/10.1038/
protex.2015.070
Kareem, E. A., Kareem, A., Ali, W. A. H., & Jantan, A. (2012). MCA: A Developed Associative Memory
Using Multi-Connect Architecture. Intelligent Automation and Soft Computing, 18(3), 291–308. https://www.
researchgate.net/publication/330013030
Kim, D. H., Park, J., & Kahng, B. (2017). Enhanced storage capacity with errors in scale-free Hopfield neural
networks: An analytical study. PLoS One, 12(10), 1–12. https://doi.org/10.1371/journal.pone.0184683
Kobayashi, M. (2017). Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.
Computational Intelligence and Neuroscience, 2017, 1–6. https://doi.org/10.1155/2017/4894278
Kohonen, T., & Ruohonen, M. (1973). Representation of associated data by matrix operators. Institute of Electrical
and Electronics Engineers Transactions on Computers, C22(7), 701–708.


Kumar, S., & Singh, M. P. (2010). Pattern recall analysis of the Hopfield neural network with a genetic algorithm.
Computers & Mathematics with Applications (Oxford, England), 60(4), 1049–1057. https://doi.org/10.1016/j.
camwa.2010.03.061
Kumar, S., & Singh, M. P. (2012). Study of Hopfield neural network with sub-optimal and random GA for
pattern recalling of English characters. Applied Soft Computing, 12(8), 2593–2600. https://doi.org/10.1016/j.
asoc.2012.03.049
Liang, K.-R., & Li, D.-F. (2020). A Biobjective Biform Game Approach to Optimizing Strategies in Bilateral
Link Network Formation. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1–10. 10.1109/
TSMC.2020.3034480
Mutter, K. N., Kaream, I. I. A., & Moussa, H. A. (2006). Gray Image Recognition Using Hopfield Neural Network
With Multi-Bitplane and Multi-Connect Architecture. International Conference on Computer Graphics, Imaging
and Visualisation (CGIV’06), 2006, 236–242.
Mutter, K. N., Mat, Z., Azlan, J., & Aziz, A. (2007). Hopfield Neural Network (HNN) Improvement for Color
Image Recognition Using Multi-Bitplane and Multi-Connect Architecture. IEEE International Conference on
Computer Graphics, Imaging and Visualisation (CGIV’07), 403–407.
Rebentrost, P., Bromley, T. R., Weedbrook, C., & Lloyd, S. (2018). Quantum Hopfield neural network. Physical
Review. A, 8, 1–13. https://doi.org/10.1103/PhysRevA.98.042308
Rodriuez, D. L., Casermerio, E. M., & Ortiz-de-Lazcano-Labato, J. M. (2007). Hopfield network as associative memory with multiple reference points. International Journal of Mathematical and Computational Sciences, 1(7), 324–329.
Sahoo, R. C., & Pradhan, S. K. (2020). Pattern Storage and Recalling Analysis of Hopfield Network for
Handwritten Odia Characters Using HOG. Advances in Machine Learning and Computational Intelligence,
Algorithms for Intelligent Systems. http://www.springer.com/series/16171
Sampath, S., & Srivastava, V. (2020). On stability and associative recall of memories in attractor neural networks.
PLoS One, 15(9), 1–22. https://doi.org/10.1371/journal.pone.0238054
Singh, M. P., & Dixit, R. S. (2013). Optimization of stochastic networks using simulated annealing for the storage
and recalling of compressed images using SOM. Engineering Applications of Artificial Intelligence, 26(10),
2383–2396. https://doi.org/10.1016/j.engappai.2013.07.003
Singh, T. P., & Jabin, S. (2012). Evolving Connection Weights for Pattern Storage and Recall in Hopfield Model
of Feedback Neural Networks Using a Genetic Algorithm. International Journal of Soft Computing, 3(2), 55–62.
https://doi.org/10.5121/ijsc.2012.3205
Tsuji, M., Isokawa, T., Kobayashi, M., Matsui, N., & Kamiura, N. (2021). Gradient Descent Learning for
Hyperbolic Hopfield Associative Memory. Transactions of the Institute of Systems, Control and Information
Engineers, 34(1).
Wu, Y. D., Chen, Y. H., & Zhang, H. Y. (2005). An improved algorithm for image restoration based on modified
Hopfield neural network. 5th International Conference on Machine Learning and Cybernetics, 2005, 4720–4723.
Yadav, J. K. P. S., Singh, L., & Jaffery, Z. A. (2017). Comparative Analysis of Recurrent Networks for Pattern
Storage and Recalling of Static Images. International Journal of Computers and Applications, 170(10), 975–8887.


Jay Kant Pratap Singh Yadav completed his B. Tech. and M. Tech. (NIT, Surat) and currently pursuing his Ph.D.
from the Department of Electrical Engineering, Jamia Millia Islamia (A Central University), New Delhi. His area of
Interest is Soft Computing, Image Processing, and Machine Learning.

Zainul Abdin Jaffery is at present professor in the Department of Electrical Engineering, Jamia Millia Islamia, New
Delhi, India. He completed his B.Sc. (Engineering) and M.Sc. (Engineering) from Aligarh Muslim University Aligarh
and Ph.D. from JMI New Delhi. He is a Senior Member of IEEE and other academic societies. His areas of interest
are Digital Signal Processing, Soft Computing, Digital Image Processing, Computer Vision, Embedded System
Design, Applications in Power & Electronics Engineering.

Laxman Singh completed his BE (Electronics and Communication Engineering) in 2004 from C.R. State (Govt.)
College of Engineering, Murthal, Haryana and M.Tech in Instrumentation and Control Engineering in 2009 from
APJ College of Engineering, Gurugram. He obtained his Ph.D in Digital Image Processing and Artificial Intelligence
in 2016 from Jamia Millia Islamia, Central University, New Delhi. He has total teaching experience (UG and PG) of
17 years and currently working as an Associate Professor in Dept. of Electronics & Communication Engineering
at NIET, Greater Noida. The total number of undergraduate and postgraduate projects guided by him for both
Electronics and Communication Engineering and Computer Science Engineering is around 50. He has worked
as a Head of the Department of Electronics and Communication Engineering at SIMT, Mathura. At present, a total of 7 PhD research scholars are working with him in the fields of computer science and AI. The total number of research
articles in International/National Journals/Conferences is around 50. He has chaired 1 International Conferences
and 2 National Conferences. He is working on two research projects funded by Dr. A.P.J Abdul Kalam University,
Lucknow. He has five International patents granted in his name and has published four national patents as of now.
He is a member of various professional bodies like IAENG, and ISTE etc. His research areas include Modeling
and Simulation, Neural Networks, Fuzzy Systems and Genetic Algorithm, Pattern Recognition, Medical Image
Analysis, Signal and Image Processing, Optimization techniques, and AI.

