
Hopfield Networks

Recurrent networks of non-linear units are generally very hard to analyze. They
can behave in many different ways: settle to a stable state, oscillate, or follow
chaotic trajectories that cannot be predicted far into the future. A Hopfield
network consists of a single layer of fully connected recurrent neurons; it is
composed of binary threshold units with recurrent connections between them.
In 1982, John Hopfield realized that if the connections are symmetric, there is a
global energy function. Each binary "configuration" of the whole network has an
energy, and the binary threshold decision rule causes the network to settle to a
minimum of this energy function.

The discrete Hopfield network belongs to a class of algorithms called
autoassociative memories. It can store useful information in memory and later
reproduce that information from partially corrupted patterns, much like human
memory. For instance, imagine looking at an old picture of a place you visited a
long time ago, but the picture is of very bad quality and very blurry. It could be
a building, a neighborhood, or anything else. Looking at the picture, you can
still recognize a few objects even though they are blurry, and although you
cannot clearly see everything, you start to remember things and retrieve from
memory images that cannot be seen in the picture at all.

Architecture and configuration of a Hopfield network


A Hopfield network is a simple assembly of perceptrons that is able to overcome
the XOR problem. The array of neurons is fully connected, although neurons do
not have self-loops. This leads to K(K − 1) interconnections if there are K nodes,
with a weight wij on each. In this arrangement, the neurons transmit signals
back and forth to each other in a closed feedback loop, eventually settling in
stable states.
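For instance, a network of K = 4 neurons has 4 × 3 = 12 directed connections;
with symmetric weights these collapse to 6 distinct values of wij.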

An important assumption is that the weights of neural interactions are
symmetric, wij = wji. This is unrealistic for real neural systems, in which two
neurons are unlikely to act on each other symmetrically. The state si of each
neuron is either +1 or −1. These states are initially the input to the Hopfield
network and ultimately become its output. Each neuron is activated according to
the following rule:

si ⟵ +1 if ∑j wij sj ≥ θi,
     −1 otherwise
where θi is a threshold value corresponding to node i; in practice the threshold
is generally 0. This equation calculates the state si of neuron i by multiplying
each weight wij by the state sj of the corresponding neuron j and summing the
products. Because the state of a given neuron depends so strongly on the states
of the other neurons, the order in which the states are calculated matters. The
following two strategies are employed to update all the neurons in a Hopfield
network: asynchronous and synchronous.
The asynchronous strategy is closer to real biological systems: a node is picked
to start the update, and subsequent nodes are activated in a predefined order,
so only one neuron is updated at a time.
In the synchronous strategy, all units are updated at the same time, which is
much easier to deal with computationally. This strategy is less realistic, since
biological organisms lack a global clock that synchronizes the neurons.
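As a minimal sketch of both strategies in Python (assuming NumPy, a symmetric
weight matrix W with zero diagonal, a ±1 state vector s, and a threshold vector
theta; all names here are illustrative, not from the text):

```python
import numpy as np

def async_update(W, s, theta):
    """One asynchronous sweep: neurons update one at a time, each
    seeing the already-updated states of the neurons before it."""
    for i in range(len(s)):
        # s_i <- +1 if the weighted input reaches the threshold, else -1
        s[i] = 1 if W[i] @ s >= theta[i] else -1
    return s

def sync_update(W, s, theta):
    """One synchronous step: every neuron updates from the same old state."""
    return np.where(W @ s >= theta, 1, -1)
```

Repeating either update until s stops changing implements the "run until the
neurons stabilize" procedure described next.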
The Hopfield network is run until the values of all neurons stabilize. Although
each neuron's state depends strongly on the states of the others, the network
will usually converge to a stable state. It is useful to have some indication of
how close the network is to convergence: an energy value can be calculated for a
Hopfield network, and this value decreases as the network moves toward a more
stable state. The stability of the network is evaluated with the energy equation
shown below.

E = −∑i<j wij si sj + ∑i θi si

Under asynchronous updates, each state change can only decrease this energy or
leave it unchanged, which is why the network settles into energy minima.
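As a sketch (same illustrative names as above), this energy can be computed
directly; because W is symmetric with a zero diagonal, the sum over pairs
i < j equals half the full double sum:

```python
def energy(W, s, theta):
    # E = -sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i
    # The pair sum over i < j equals 0.5 * s^T W s for symmetric,
    # zero-diagonal W.
    return -0.5 * (s @ W @ s) + theta @ s
```

Tracking this value between sweeps gives the indication of convergence
mentioned above.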


In a model called Hebbian learning, simultaneous activation of neurons leads to
increments in the synaptic strength between those neurons: the higher the value
of a weight wij, the more likely the two connected neurons are to activate
simultaneously. The Hebbian rule is both local and incremental. For Hopfield
networks it takes the following form when learning N patterns, where N is the
number of training set elements x and xki denotes component i of pattern k:

wij = (1/N) ∑k=1..N xki xkj


Here each pattern xk is stored in bipolar form: each component xki is either +1
or −1, matching the neuron states.

If the components corresponding to neurons i and j are equal in pattern xk, the
product xki xkj is positive; this in turn increases the weight wij, and the
states of neurons i and j will tend to align. The opposite happens when the
components corresponding to neurons i and j differ.
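Putting the pieces together, here is a minimal, self-contained sketch of Hebbian
storage and autoassociative recall; the patterns, network size, and zero
threshold below are illustrative assumptions, not values from the text:

```python
import numpy as np

def train_hebbian(patterns):
    """w_ij = (1/N) * sum_{k=1..N} x_i^k x_j^k over the N training patterns."""
    N = len(patterns)                # number of stored patterns
    W = (patterns.T @ patterns) / N  # sum of outer products x^k (x^k)^T
    np.fill_diagonal(W, 0)           # no self-loops: w_ii = 0
    return W

def recall(W, s, theta=0.0, max_sweeps=100):
    """Asynchronous sweeps until the state vector stops changing."""
    s = s.copy()
    for _ in range(max_sweeps):
        previous = s.copy()
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= theta else -1
        if np.array_equal(s, previous):
            break                    # a stable state: an energy minimum
    return s

# Two bipolar patterns of 6 components each (illustrative values)
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])
W = train_hebbian(patterns)

cue = patterns[0].copy()
cue[0] = -cue[0]                     # corrupt one component of the stored pattern
print(recall(W, cue))                # usually settles back to patterns[0]
```

This illustrates the autoassociative behavior described at the start: a
corrupted cue is driven back to the nearest stored pattern.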
