Neuralntw 4202108121054
The neural network architectures of these models and the structure of the
corresponding association weight matrix W of the associative memory are depicted below.
Linear associator model (two layers): The linear associator is a feed-forward
network whose output is produced in a single feed-forward computation.
The model comprises two layers of processing units: one works as an input layer while
the other works as an output layer. The inputs are directly associated with the outputs
through a series of weights. The weighted connections link each input to every
output. Each neuron node computes the sum of the products of the weights and the
inputs. The architecture of the linear associator is given below.
All p input units are connected to all q output units via the associative weight matrix W.
The aim of an associative memory is to produce the associated output pattern
whenever one of the input patterns is applied to the neural network. The input pattern
may be applied to the network either as an input or as an initial state, and the output
pattern is observed at the outputs of some neurons constituting the network.
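The linear associator described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the document's own code: the function names are hypothetical, and it assumes bipolar pattern pairs stored with a Hebbian outer product.

```python
# Linear associator sketch (hypothetical example, pure Python).
# The weight matrix stores pattern pairs via the Hebb outer product:
# w[i][j] += s[i] * t[j]; recall is a single feed-forward pass.

def train(pairs, p, q):
    # Accumulate the outer product of each (input s, target t) pair.
    w = [[0] * q for _ in range(p)]
    for s, t in pairs:
        for i in range(p):
            for j in range(q):
                w[i][j] += s[i] * t[j]
    return w

def recall(w, x):
    # Single feed-forward computation: weighted sum at each output unit.
    return [sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(w[0]))]

s = [1, -1, 1]      # example bipolar input pattern (p = 3)
t = [1, -1]         # associated output pattern (q = 2)
w = train([(s, t)], 3, 2)
print(recall(w, s)) # → [3, -3]: each output carries the sign of the stored target
```

Applying the stored input reproduces the associated output up to a positive scale factor, which is the behavior the text describes.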
1) According to the way the network handles errors in the input pattern, associative
memories are classified as interpolative and accretive.
In interpolative memory, some deviation from the desired output pattern is allowed
when noise is added to the related input pattern.
In accretive memory, however, the output is required to be exactly the same as the
associated output pattern, even if the input pattern is noisy.
2) Another classification is based on the patterns themselves: a memory in which the
associated input and output patterns differ is called a hetero associative memory,
while one in which they are the same is called an auto associative memory.
Neural networks working on the basis of pattern association can store different
patterns and, at the time of giving an output, produce one of the stored patterns by
matching it with the given input pattern. These types of memories are also called
Content-Addressable Memory (CAM). An associative memory performs a parallel
search over the stored patterns, treating them as data files.
Following are the two types of associative memories we can observe: auto associative memory and hetero associative memory.
Auto Associative Memory
Architecture
As shown in the following figure, the architecture of an auto associative memory
network has ‘n’ input training vectors and a similar ‘n’ number of output target vectors.
Training Algorithm
For training, this network uses the Hebb or Delta learning rule.
Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to n)
Step 2 − Perform steps 3-4 for each input vector.
Step 3 − Activate each input unit as follows −
xi = si (i = 1 to n)
Step 4 − Activate each output unit as follows −
yj = sj (j = 1 to n)
Step 5 − Adjust the weights as follows −
wij(new) = wij(old) + xi yj
Testing Algorithm
Step 1 − Set the weights obtained during training for Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to n:
yinj = ∑i xi wij
Step 5 − Apply the following activation function to calculate the output:
yj = f(yinj) = +1 if yinj > 0, −1 if yinj ≤ 0
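The training and testing steps above can be sketched in plain Python. This is a minimal illustration under stated assumptions (bipolar patterns, a simple sign threshold as the activation; the function names are hypothetical):

```python
# Auto-associative memory sketch following the steps above
# (assumed bipolar patterns; pure Python, no external libraries).

def train_auto(patterns, n):
    # Step 1: initialize weights to zero.
    # Steps 3-5: Hebb rule, w_ij(new) = w_ij(old) + x_i * y_j,
    # where input and target are the same pattern s.
    w = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                w[i][j] += s[i] * s[j]
    return w

def recall_auto(w, x):
    # Testing Step 4: net input y_in_j = sum_i x_i * w_ij.
    # Testing Step 5: threshold activation, +1 if positive else -1.
    n = len(x)
    return [1 if sum(x[i] * w[i][j] for i in range(n)) > 0 else -1
            for j in range(n)]

s = [1, 1, -1, -1]
w = train_auto([s], 4)
noisy = [1, -1, -1, -1]        # one flipped bit
print(recall_auto(w, noisy))   # → [1, 1, -1, -1]: the stored pattern
```

Even with one bit flipped, recall returns the stored pattern, which is the accretive behavior described earlier.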
Neural networks are usually used to implement these associative memory models,
and are then called neural associative memories (NAM). The linear associator is the
simplest artificial neural associative memory.
Hetero Associative Memory
Architecture
As shown in the following figure, the architecture of a hetero associative memory
network has ‘n’ input training vectors and ‘m’ output target vectors.
Training Algorithm
For training, this network uses the Hebb or Delta learning rule.
Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to m)
Step 2 − Perform steps 3-4 for each input vector.
Step 3 − Activate each input unit as follows −
xi = si (i = 1 to n)
Step 4 − Activate each output unit as follows −
yj = sj (j = 1 to m)
Step 5 − Adjust the weights as follows −
wij(new) = wij(old) + xi yj
Testing Algorithm
Step 1 − Set the weights obtained during training for Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to m:
yinj = ∑i xi wij
Step 5 − Apply the following activation function to calculate the output:
yj = f(yinj) = +1 if yinj > 0, 0 if yinj = 0, −1 if yinj < 0
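The hetero-associative version differs from the auto-associative one only in that input and target patterns differ (n inputs, m outputs) and the activation has a zero case. A minimal sketch, again assuming bipolar patterns and hypothetical function names:

```python
# Hetero-associative memory sketch (assumed bipolar patterns,
# n input units, m output units; pure Python).

def train_hetero(pairs, n, m):
    # Hebb rule: w_ij(new) = w_ij(old) + x_i * y_j over all (s, t) pairs.
    w = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += s[i] * t[j]
    return w

def sgn(v):
    # Testing Step 5 activation: +1 if positive, 0 if zero, -1 if negative.
    return 1 if v > 0 else (-1 if v < 0 else 0)

def recall_hetero(w, x):
    # Testing Step 4: net input y_in_j = sum_i x_i * w_ij, then activate.
    return [sgn(sum(x[i] * w[i][j] for i in range(len(x))))
            for j in range(len(w[0]))]

pairs = [([1, -1, 1, -1], [1, -1]),
         ([-1, 1, -1, 1], [-1, 1])]
w = train_hetero(pairs, 4, 2)
print(recall_hetero(w, [1, -1, 1, -1]))   # → [1, -1]
```

Each stored input pattern recalls its associated (different) output pattern, which is what distinguishes hetero-association from auto-association.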
Hopfield Network
A Hopfield network is a special kind of neural network whose response differs from
that of other neural networks.
Its output is computed by an iterative process that converges. It has just one layer of
neurons, whose number matches the size of the input and output, which must be the
same. When such a network recognizes, for example, digits, we present a list of
correctly rendered digits to the network. Subsequently, the network can transform a
noisy input into the corresponding perfect output.
The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists
of a single layer containing one or more fully connected recurrent neurons. The
Hopfield network is commonly used for auto-association and optimization tasks.
Discrete Hopfield Network
A discrete Hopfield network operates in a discrete-time fashion; in other words, its
input and output patterns are discrete vectors, which can be either
binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetrical weights
with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about discrete Hopfield network −
This model consists of neurons with one inverting and one non-inverting output.
The output of each neuron is fed as input to the other neurons, but not back to
itself.
The weight/connection strength is represented by wij.
Connections can be excitatory as well as inhibitory: a connection is excitatory if the
output of the neuron is the same as the input, otherwise inhibitory.
Weights should be symmetrical, i.e., wij = wji.
Testing Algorithm
Step 1 − Initialize the weights, which are obtained from the training algorithm using
the Hebbian principle.
Step 2 − Perform steps 3-9 if the activations of the network have not converged.
Step 3 − For each input vector X, perform steps 4-8.
Step 4 − Make the initial activation of the network equal to the external input vector X
as follows −
yi = xi (i = 1 to n)
For a given state X ∈ {−1, 1}N of the network and for any set of association
weights wij with wij = wji and wii = 0, let the energy be
E = −(1/2) ∑i ∑j wij Xi Xj
Here, we need to update Xm to X'm, denote the new energy by E', and show that
E' − E ≤ 0. Since only Xm changes,
E' − E = −(X'm − Xm) ∑i wmi Xi = −(X'm − Xm) hm
where hm = ∑i wmi Xi is the net input to unit m. If Xm = −1 and X'm = 1, then
X'm − Xm = 2 and the update rule implies hm ≥ 0.
Thus, E' − E ≤ 0.
Similarly, if Xm = 1 and X'm = −1, then Xm − X'm = 2 and hm = ∑i wmi Xi < 0.
Here, again, E' − E ≤ 0.
This argument relies on the fact that only one unit can update its activation
at a time.
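The energy argument above can be checked numerically with a small sketch: store one pattern with the Hebb rule (zero self-connections), then update units one at a time and verify that the energy never increases. This is an illustrative example, not the document's own code; variable names are hypothetical.

```python
# Discrete Hopfield sketch: asynchronous updates and the energy function
# E = -1/2 * sum_ij w_ij x_i x_j (bipolar states, symmetric w, zero diagonal).

def energy(w, x):
    n = len(x)
    return -0.5 * sum(w[i][j] * x[i] * x[j]
                      for i in range(n) for j in range(n))

def update_unit(w, x, m):
    # One asynchronous step: unit m takes the sign of its net input h_m.
    h = sum(w[m][i] * x[i] for i in range(len(x)))
    x[m] = 1 if h >= 0 else -1

# Store one pattern with the Hebb rule, keeping w_ii = 0 and w symmetric.
s = [1, -1, 1, -1]
n = len(s)
w = [[0 if i == j else s[i] * s[j] for j in range(n)] for i in range(n)]

x = [1, 1, 1, -1]                     # noisy version of s (one flipped bit)
for m in range(n):                    # sweep the units one at a time
    e_before = energy(w, x)
    update_unit(w, x, m)
    assert energy(w, x) <= e_before   # each update is non-increasing in E
print(x)                              # → [1, -1, 1, -1]: converged to s
```

The in-loop assertion is exactly the E' − E ≤ 0 property derived above, and the network settles into the stored pattern as the text describes.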