Unit-IV: Single Layer Feedback Networks

Associative memories: Linear Association, Basic Concepts of Recurrent Auto-associative Memory, Retrieval Algorithm, Storage Algorithm, Hopfield Networks, Applications of Associative Memories

Single node with its own feedback

1. When outputs can be directed back as inputs to the same layer or to preceding layer nodes, the result is a feedback network.
2. Recurrent networks are feedback networks with closed loops. The figure below shows a single recurrent network having a single neuron with feedback to itself.

Single Layer Recurrent Network

1. This network is a single-layer network with a feedback connection, in which a processing element's output can be directed back to itself, to other processing elements, or to both.
2. A recurrent neural network is a class of artificial neural network in which the connections between nodes form a directed graph along a sequence.
3. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feed-forward neural networks, RNNs can use their internal state (memory) to process sequences of inputs.
Associative Memory
Associative memory searches stored data by the data value itself rather than by an address. This type of search reduces the search time to a large extent. When data is accessed by data content rather than by data address, the memory is referred to as associative memory or content-addressable memory.
Associative memory organization
The associative memory hardware structure consists of:
• a memory array,
• logic for m words with n bits per word, and
• several registers, such as the input register, mask register, select register, and output register.
The block diagram showing the organization of associative memory is shown below:
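As a functional illustration of the search these registers perform, here is a minimal software sketch of a masked content-addressable lookup (an illustrative simulation only, not the hardware; the register names follow the list above, and the data values are made up):

```python
import numpy as np

def cam_search(memory, argument, mask):
    """Simulate an associative (content-addressable) memory search.

    memory:   (m, n) array of stored words (bits).
    argument: length-n input register holding the search key.
    mask:     length-n mask register; 1 = compare this bit, 0 = ignore.
    Returns a length-m select register: 1 where the unmasked bits match.
    """
    # A word matches when every unmasked bit equals the key bit.
    matches = (memory == argument) | (mask == 0)
    return matches.all(axis=1).astype(int)

# Example: 4 words of 4 bits; search on the first two bits only.
memory = np.array([[1, 0, 1, 1],
                   [1, 0, 0, 0],
                   [0, 1, 1, 0],
                   [1, 1, 0, 1]])
argument = np.array([1, 0, 1, 1])
mask     = np.array([1, 1, 0, 0])          # compare only bits 0 and 1
print(cam_search(memory, argument, mask))  # -> [1 1 0 0]
```

Note that every stored word is compared against the key in one vectorized step, which mirrors how the hardware completes the whole search in a single cycle.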

Associative memories can be implemented using either feed-forward or recurrent neural networks. Such associative neural networks are used to associate one set of vectors with another set of vectors, say input and output patterns.
Network architectures of Associative Memory Models:
Neural associative memory models use various neural network architectures to memorize data. The network comprises either a single layer or two layers. The linear association model refers to a feed-forward type network comprising two layers of different processing units: the first layer serves as the input layer while the other serves as the output layer. The Hopfield model refers to a single layer of processing elements where each unit is connected to every other unit in the given network. The bidirectional associative memory (BAM) model is the same as the linear associator, but the associations are bidirectional.

The neural network architectures of these models and the structure of the corresponding association weight matrix W of the associative memory are depicted below.

Linear Associator model (two layers): The linear associator is a feed-forward type network whose output is produced in a single feed-forward computation. The model comprises two layers of processing units, one serving as an input layer while the other serves as an output layer. The inputs are directly connected to the outputs through a series of weights; the weighted connections link each input to every output. Each neuron node computes the sum of the products of the weights and the inputs. The architecture of the linear associator is given below.

All p input units are connected to all q output units via the association weight matrix W = [wij]p×q, where wij describes the strength of the unidirectional connection from the ith input unit to the jth output unit.

The connection weight matrix stores the z different associated pattern pairs {(Xk, Yk); k = 1, 2, 3, ..., z}. Constructing an associative memory means building the connection weight matrix W such that, when an input pattern is presented, the stored pattern associated with that input is recovered.
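As a concrete sketch of this construction, here is a minimal NumPy illustration, assuming Hebbian outer-product storage of bipolar patterns (the function and variable names are mine, not from the source):

```python
import numpy as np

def build_linear_associator(X, Y):
    """Build the p x q weight matrix W of a linear associator.

    X: (z, p) array whose rows are the input patterns Xk (bipolar +1/-1).
    Y: (z, q) array whose rows are the associated output patterns Yk.
    W is the sum over all z pairs of the outer products Xk^T Yk.
    """
    return X.T @ Y

def recall(W, x):
    """Single feed-forward computation: threshold of x W."""
    return np.sign(x @ W)

# Store two pattern pairs (p = 4 input units, q = 2 output units).
X = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1]])
Y = np.array([[ 1, -1],
              [-1,  1]])
W = build_linear_associator(X, Y)
print(recall(W, X[0]))   # recovers Y[0] = [ 1 -1]
```

Exact recovery of this kind is guaranteed when the stored input patterns are mutually orthogonal, as they are in this example.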

The aim of an associative memory is to produce the associated output pattern whenever one of the input patterns is applied to the neural network. The input pattern may be applied to the network either as an input or as an initial state, and the output pattern is observed at the outputs of some neurons constituting the network.
1) According to the way the network handles errors in the input pattern, associative memories are classified as interpolative and accretive memories. In an interpolative memory, some deviation from the desired output pattern is allowed when noise is added to the related input pattern. In an accretive memory, however, the output is required to be exactly the same as the associated output pattern, even if the input pattern is noisy.
2) Another classification of associative memory is based on the patterns themselves: a memory in which the associated input and output patterns differ is called a hetero-associative memory, while it is called an auto-associative memory if they are the same.
Neural networks working on the basis of pattern association can store different patterns and, at the time of producing an output, return one of the stored patterns by matching it with the given input pattern. These types of memories are also called content-addressable memories (CAM). Associative memory performs a parallel search over the stored patterns as data files.
Following are the two types of associative memories we can observe −
• Auto Associative Memory
• Hetero Associative Memory

Auto Associative Memory

This is a single-layer neural network in which the input training vector and the output target vectors are the same. The weights are determined so that the network stores a set of patterns. An auto-associative memory recovers a previously stored pattern that most closely resembles the current pattern. It is also known as an auto-associative correlator.
Let x[1], x[2], x[3], ..., x[M] be the M stored pattern vectors, and let x[m] be the elements of these vectors, representing characteristics obtained from the patterns. The auto-associative memory will return the pattern vector x[m] when presented with a noisy or incomplete version of x[m].

Architecture

As shown in the following figure, the architecture of the auto-associative memory network has ‘n’ input training vectors and, similarly, ‘n’ output target vectors.

Training Algorithm

For training, this network uses the Hebb or delta learning rule.
Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to n)
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Activate each input unit as follows −
xi = si (i = 1 to n)
Step 4 − Activate each output unit as follows −
yj = sj (j = 1 to n)
Step 5 − Adjust the weights as follows −
wij(new) = wij(old) + xi yj

Testing Algorithm

Step 1 − Set the weights obtained during training with Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to n:
yinj = Σi xi wij
Step 5 − Apply the following activation function to calculate the output:
yj = f(yinj) = +1 if yinj > 0, −1 if yinj ≤ 0
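The following is a minimal NumPy sketch of both algorithms above (a hedged illustration assuming bipolar patterns; the helper names and the example pattern are mine):

```python
import numpy as np

def train_auto(patterns):
    """Hebb-rule training (steps 1-5): W accumulates outer products s s^T."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=int)
    for s in patterns:               # steps 3-5 for each training vector
        W += np.outer(s, s)          # wij(new) = wij(old) + xi*yj
    return W

def recall_auto(W, x):
    """Testing (steps 3-5): net input yin_j = sum_i xi*wij, then threshold."""
    yin = x @ W
    return np.where(yin > 0, 1, -1)  # f(yin) = +1 if yin > 0, else -1

# Store one bipolar pattern, then recall it from a one-bit-corrupted probe.
s = np.array([1, 1, -1, -1])
W = train_auto(s[np.newaxis, :])
noisy = np.array([1, -1, -1, -1])    # second bit flipped
print(recall_auto(W, noisy))         # -> [ 1  1 -1 -1]
```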

Hetero Associative Memory

Similar to the auto-associative memory network, this is also a single-layer neural network. However, in this network the input training vector and the output target vectors are not the same. The weights are determined so that the network stores a set of patterns. A hetero-associative network is static in nature; hence, there are no non-linear or delay operations. In a hetero-associative memory, the recovered pattern is generally different from the input pattern not only in type and format but also in content. It is also known as a hetero-associative correlator.
Consider a number of key-response pairs {a(1), x(1)}, {a(2), x(2)}, ..., {a(M), x(M)}. The hetero-associative memory will return the pattern vector x(m) when a noisy or incomplete version of a(m) is given.

Neural networks are usually used to implement these associative memory models; such a network is called a neural associative memory (NAM). The linear associator is the simplest artificial neural associative memory. These models follow distinct neural network architectures to memorize data.

Architecture
As shown in the following figure, the architecture of the hetero-associative memory network has ‘n’ input training vectors and ‘m’ output target vectors.

Training Algorithm
For training, this network uses the Hebb or delta learning rule.
Step 1 − Initialize all the weights to zero: wij = 0 (i = 1 to n, j = 1 to m)
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Activate each input unit as follows −
xi = si (i = 1 to n)
Step 4 − Activate each output unit as follows −
yj = sj (j = 1 to m)
Step 5 − Adjust the weights as follows −
wij(new) = wij(old) + xi yj
Testing Algorithm
Step 1 − Set the weights obtained during training with Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to m:
yinj = Σi xi wij
Step 5 − Apply the following activation function to calculate the output:
yj = +1 if yinj > 0, 0 if yinj = 0, −1 if yinj < 0
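A matching NumPy sketch for the hetero-associative case (again a hedged illustration; the bipolar key-response pairs are made up):

```python
import numpy as np

def train_hetero(A, X):
    """Hebb rule: the n x m matrix W accumulates outer products a x^T."""
    return A.T @ X                     # same update as the loop form above

def recall_hetero(W, a):
    """Net input per output unit, then the three-level threshold."""
    yin = a @ W
    return np.sign(yin).astype(int)    # +1 / 0 / -1

# Two key-response pairs: keys a(m) of length 4, responses x(m) of length 2.
A = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1]])
X = np.array([[ 1, -1],
              [-1,  1]])
W = train_hetero(A, X)
print(recall_hetero(W, A[1]))          # -> [-1  1], i.e. x(2)
```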

Hopfield Network
A Hopfield network is a special kind of neural network whose response is different from that of other neural networks. The response is calculated by a converging iterative process. The network has just one layer of neurons, whose number matches the size of the input and output, which must be the same. When such a network is trained to recognize, for example, digits, we present a list of correctly rendered digits to the network; subsequently, the network can transform a noisy input into the corresponding perfect output.
The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single layer containing one or more fully connected recurrent neurons. The Hopfield network is commonly used for auto-association and optimization tasks.
Discrete Hopfield Network
A Hopfield network that operates in a discrete-time fashion; in other words, its input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about the discrete Hopfield network −
• This model consists of neurons with one inverting and one non-inverting output.
• The output of each neuron is fed as input to the other neurons, but not to itself.
• Weight/connection strength is represented by wij.
• Connections can be excitatory as well as inhibitory: excitatory if the output of a neuron is the same as its input, otherwise inhibitory.
• Weights should be symmetrical, i.e., wij = wji.

The output from Y1 going to Y2, Yi and Yn has the weights w12, w1i and w1n respectively. Similarly, the other arcs have their own weights.
Training Algorithm
During training of a discrete Hopfield network, the weights are updated. As we know, we can have binary input vectors as well as bipolar input vectors. Hence, in both cases, the weight updates can be done with the following relations.
Case 1 − Binary input patterns
For a set of binary patterns s(p), p = 1 to P
Here, s(p) = s1(p), s2(p), ..., si(p), ..., sn(p)
The weight matrix is given by
wij = Σp=1 to P [2si(p) − 1][2sj(p) − 1]   for i ≠ j
Case 2 − Bipolar input patterns
For a set of bipolar patterns s(p), p = 1 to P
Here, s(p) = s1(p), s2(p), ..., si(p), ..., sn(p)
The weight matrix is given by
wij = Σp=1 to P si(p) sj(p)   for i ≠ j
Neurons pull in or push away from each other:

Suppose the connection weight wij = wji between two neurons i and j.
If wij > 0, the updating rule implies:
o If Xj = 1, then the contribution of j to the weighted sum, i.e., wij Xj, is positive. Thus the value of Xi is pulled by j towards its value Xj = 1.
o If Xj = −1, then wij Xj is negative, and Xi is again pulled by j towards its value Xj = −1.
Thus, if wij > 0, the value of i is pulled by the value of j. By symmetry, the value of j is also pulled by the value of i. If wij < 0, then the value of i is pushed away by the value of j.
It follows that for a particular set of values Xi ∈ {−1, 1} for 1 ≤ i ≤ N, the selection of weights taken as wij = Xi Xj for 1 ≤ i, j ≤ N corresponds to the Hebbian rule.

Testing Algorithm
Step 1 − Initialize the weights, which are obtained from the training algorithm using the Hebbian principle.
Step 2 − Perform steps 3-9 as long as the activations of the network have not converged.
Step 3 − For each input vector X, perform steps 4-8.
Step 4 − Make the initial activation of the network equal to the external input vector X as follows −
yi = xi (i = 1 to n)
Step 5 − For each unit Yi, perform steps 6-9.
Step 6 − Calculate the net input of the network as follows −
yini = xi + Σj yj wji
Step 7 − Apply the activation as follows over the net input to calculate the output −
yi = 1 if yini > θi; yi unchanged if yini = θi; 0 if yini < θi
Here θi is the threshold.
Step 8 − Broadcast this output yi to all other units.
Step 9 − Test the network for convergence.
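A compact NumPy sketch of this asynchronous recall loop, using bipolar states and zero thresholds for simplicity (an assumption on my part; the algorithm above is stated for binary units with thresholds θi):

```python
import numpy as np

def hopfield_recall(W, x, max_sweeps=100, rng=None):
    """Asynchronous update: one randomly ordered unit at a time,
    until a full sweep produces no change (convergence)."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(y)):        # steps 5-8
            yin = x[i] + W[i] @ y                # net input to unit i
            new = 1 if yin > 0 else -1           # threshold at 0
            if new != y[i]:
                y[i], changed = new, True
        if not changed:                          # step 9: converged
            return y
    return y

# Recall a stored pattern from a corrupted probe.
s = np.array([ 1, -1,  1, -1, 1])
W = np.outer(s, s); np.fill_diagonal(W, 0)
probe = np.array([ 1, 1, 1, -1, 1])              # one bit flipped
print(hopfield_recall(W, probe))                 # -> [ 1 -1  1 -1  1]
```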
Energy Function Evaluation

Hopfield networks have an energy function that diminishes or is unchanged with asynchronous updating.
For a given state X ∈ {−1, 1}N of the network and for any set of connection weights wij with wij = wji and wii = 0, let

E = −(1/2) Σi Σj≠i wij Xi Xj

Suppose unit m updates Xm to X'm, and denote the new energy by E'. We show that

E' − E = (Xm − X'm) Σi≠m wmi Xi

Using the above equation, if X'm = Xm then E' = E.
If Xm = −1 and X'm = 1, then Xm − X'm = −2, and the update occurred only because hm = Σi wmi Xi ≥ 0. Thus, E' − E ≤ 0.
Similarly, if Xm = 1 and X'm = −1, then Xm − X'm = 2 and hm = Σi wmi Xi < 0. Thus, E' − E < 0 again.

An energy function is defined as a bounded, non-increasing function of the state of the system. The energy function Ef, also called a Lyapunov function, determines the stability of the discrete Hopfield network and is characterized as follows −
Condition − In a stable network, whenever the state of a node changes, the above energy function decreases.

Suppose node i has changed state from yi(k) to yi(k+1); then the energy change ΔEf is given by the following relation
ΔEf = Ef(yi(k+1)) − Ef(yi(k)) = −(Σj wij yj + xi − θi)(yi(k+1) − yi(k)) = −(neti) Δyi
Here Δyi = yi(k+1) − yi(k).
The change in energy relies on the fact that only one unit can update its activation at a time.
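A short sketch that computes this energy and checks that it never increases during asynchronous updating (illustrative, reusing the bipolar zero-threshold network from the recall snippet above):

```python
import numpy as np

def energy(W, y, x):
    """E = -1/2 * y^T W y - x . y  (bipolar states, zero thresholds)."""
    return -0.5 * y @ W @ y - x @ y

s = np.array([ 1, -1,  1, -1, 1])
W = np.outer(s, s); np.fill_diagonal(W, 0)
x = np.array([ 1, 1, 1, -1, 1])          # corrupted probe as external input
y = x.copy()
for i in range(len(y)):                  # one asynchronous sweep
    e_before = energy(W, y, x)
    y[i] = 1 if x[i] + W[i] @ y > 0 else -1
    assert energy(W, y, x) <= e_before   # energy never increases
print(y, energy(W, y, x))
```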

Continuous Hopfield Network

In comparison with the discrete Hopfield network, the continuous network has time as a continuous variable. It is also used in auto-association and optimization problems such as the travelling salesman problem.
Model − The model or architecture can be built up by adding electrical components such as amplifiers, which map the input voltage to the output voltage through a sigmoid activation function.
Energy Function Evaluation
In standard form, the energy of the continuous network is
Ef = −(1/2) Σi Σj wij yi yj − Σi xi yi + (1/λ) Σi gri ∫0→yi a−1(y) dy
Here λ is the gain parameter, gri the input conductance, and a−1 the inverse of the amplifier's activation function.
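A hedged numerical sketch of the continuous dynamics, using Euler integration of the standard amplifier equation du/dt = −u + W v + x with v = tanh(λu) (the gain, step size, and pattern values are my own illustrative choices):

```python
import numpy as np

def continuous_hopfield(W, x, lam=5.0, dt=0.01, steps=2000):
    """Euler-integrate du/dt = -u + W v + x, with v = tanh(lam * u)."""
    u = np.zeros(len(x))
    for _ in range(steps):
        v = np.tanh(lam * u)                  # amplifier (sigmoid) output
        u += dt * (-u + W @ v + x)            # continuous-time update
    return np.tanh(lam * u)

s = np.array([ 1, -1,  1, -1, 1], dtype=float)
W = np.outer(s, s); np.fill_diagonal(W, 0)
x = np.array([ 1, 1, 1, -1, 1], dtype=float)   # noisy probe as bias input
print(np.round(continuous_hopfield(W, x)))     # settles near the stored pattern
```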


Applications of associative memory
Associative memory is also known as content-addressable memory (CAM), associative storage, or an associative array. It is a special type of memory that is optimized for performing searches through data, as opposed to providing simple direct access to the data based on an address. Associative memory consists of conventional semiconductor memory with an added comparison circuit that enables a search operation to complete in a single clock cycle. It is a hardware search engine, a special type of computer memory used in certain very-high-speed searching applications.

Applications of Associative memory:-

1. It can be used in memory allocation schemes.

2. It is widely used in database management systems, etc.

3. Application of associative memory neural networks to the control of a switched reluctance motor.

4. Application of an associative memory to the analysis of document fax images.

5. Associative memory with online learning in a spiking neural network on neuromorphic hardware.

Advantages of Associative memory:-

1. It is used where the search time needs to be short.

2. It is suitable for parallel searches.

3. It is often used to speed up databases.

4. It is used in the page tables of virtual memory systems and in neural networks.

Disadvantages of Associative memory:-

1. It is more expensive than RAM.

2. Each cell must have storage capability and logic circuits for matching its content with an external argument.
