
Neural Network

Unit 5
Associative Memory

Er. Sachita Nand Mishra

Introduction
• An efficient associative memory can store a large set of
patterns as memories.
• During recall, the memory is excited with a key pattern (also
called the search argument) containing a portion of
information about a particular member of a stored pattern
set.
• This particular stored prototype can be recalled through
association of the key pattern and the information
memorized.
• Associative memories belong to a class of neural networks that
learn according to a certain recording algorithm. They usually
acquire information a priori, and their connectivity (weight)
matrices most often need to be formed in advance.
Introduction
• Associative memory usually enables a parallel search
within a stored data file.
• The purpose of the search is to output either one or all
stored items that match the given search argument,
and to retrieve them either entirely or partially.
• It is also believed that biological memory operates
according to associative memory principles.
• No memory locations have addresses; storage is
distributed over a large, densely interconnected
ensemble of neurons.

Associative Memory Basic Concepts
• Figure shows a general block diagram of an associative memory
performing an associative mapping of an input vector x into an
output vector v; the mapping can be written as v = M[x] (Equation 6.1).
• The operator M denotes a general nonlinear matrix-type operator,
and it has a different meaning for each of the memory models.
• For a given memory model, the form of the operator M is usually
expressed in terms of given prototype vectors that must be stored.
• The algorithm allowing the computation of M is called the
recording or storage algorithm.
Associative Memory Basic Concepts
• The mapping as in Equation (6.1) performed on a key vector x is
called a retrieval.
• Retrieval may or may not provide the desired prototype; it may
produce an undesired stored prototype, or even no stored
prototype at all.
• The storage algorithm depends on whether an autoassociative or a
heteroassociative type of memory is designed.
• Let us assume that the memory has certain prototype vectors
stored in such a way that once a key input has been applied, an
output produced by the memory and associated with the key is the
memory response.
• Assuming that there are p stored pairs of associations defined as
x(i) → v(i), for i = 1, 2, ..., p (autoassociative when
v(i) = x(i), heteroassociative otherwise). A sketch of the
simplest such operator M follows.

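For the simplest memory model, the linear associator, M reduces to a matrix built from the stored pairs by Hebbian outer-product recording. A minimal sketch, with illustrative prototype vectors (not ones from the text):

```python
import numpy as np

# Hebbian (outer-product) recording: M = sum over pairs of v(i) x(i)^T.
# The vectors here are illustrative and chosen to be orthogonal.
x1, v1 = np.array([1, -1, 1, -1]), np.array([1, 1, -1])
x2, v2 = np.array([1, 1, -1, -1]), np.array([-1, 1, 1])

M = np.outer(v1, x1) + np.outer(v2, x2)

# Retrieval v = M[x]: here a linear map followed by a sign threshold.
key = x1
print(np.sign(M @ key))   # -> [ 1  1 -1], i.e. recovers v1
```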
Associative Memory Basic Concepts
• Associative memory, which uses neural network concepts,
bears very little resemblance to digital computer memory.
Let us compare the two different addressing modes which
are commonly used for memory data retrieval.
• In digital computers, data are accessed when their correct
addresses in the memory are given.
• As can be seen from Figure 6.2(a), which shows a typical
memory organization, data have input and output lines,
and a word line accesses and activates the entire row
of binary cells containing the word's data bits.
• This activation takes place whenever the binary address is
decoded by the address decoder.
• The addressed word can be either "read" or replaced
during the "write" operation. This is called address-
addressable memory.
Associative Memory Basic Concepts
• In contrast with this mode of addressing, associative
memories are content addressable.
• The words in this memory are accessed based on the content
of the key vector.
• An associative memory network refers to a content-addressable
memory structure that associates a relationship between the
set of input patterns and output patterns. A content-addressable
memory structure is a kind of memory structure
that enables the recollection of data based on the degree of
similarity between the input pattern and the patterns stored
in the memory.
• When the network is excited with a portion of the stored data
x(i), i = 1, 2, ..., p, the desired response of the
autoassociative network is the complete x(i) vector.
Autoassociative and Heteroassociative Memory
Autoassociative memory network
• An autoassociative memory requires suppression of noise at the
memory output. This can be done by thresholding the output and by
recycling the output through a forward pass.
• An autoassociative memory network, often implemented as a
recurrent neural network, is a type of associative memory
that is used to recall a pattern from partial or degraded
inputs.
• In an autoassociative network, the output of the network is
fed back into the input, allowing the network to learn and
remember the patterns it has been trained on. This type of
memory network is commonly used in applications such as
speech and image recognition, where the input data may be
incomplete or noisy. A minimal sketch of this threshold-and-recycle
recall is shown below.
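A minimal sketch of autoassociative recall, assuming bipolar patterns and Hebbian outer-product storage (the vectors and function names are illustrative):

```python
import numpy as np

def store(patterns):
    """Hebbian (outer-product) autoassociative weights, zero diagonal."""
    n = patterns.shape[1]
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def recall(W, x, passes=3):
    """Recycle the thresholded output through repeated forward passes."""
    for _ in range(passes):
        x = np.where(W @ x >= 0, 1, -1)   # thresholding suppresses noise
    return x

patterns = np.array([[1, -1, 1, -1, 1], [1, 1, -1, -1, 1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, -1])   # stored pattern with one bit flipped
print(recall(W, noisy))                # -> [ 1 -1  1 -1  1]
```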
Heteroassociative memory network
• A heteroassociative memory network is a type of associative
memory that is used to associate one set of patterns with
another.
• In a heteroassociative network, the input pattern is
associated with a different output pattern, allowing the
network to learn and remember the associations between the
two sets of patterns.
• This type of memory network is commonly used in
applications such as data compression and data retrieval. A
minimal sketch of such an association is given below.

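A minimal Hebbian sketch of a pattern-pair association (the pairs are illustrative):

```python
import numpy as np

# Hebbian recording of input -> output pairs: W = sum of y(p) x(p)^T.
x_set = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # input patterns
y_set = np.array([[1, -1], [-1, 1]])                  # associated outputs

W = sum(np.outer(y, x) for x, y in zip(x_set, y_set))

# Recall: a key from the input set retrieves its associated output.
print(np.where(W @ x_set[0] >= 0, 1, -1))   # -> [ 1 -1]
```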
What is Hopfield Network?
• A Hopfield network is a single-layer, fully interconnected
recurrent neural network, proposed by John Hopfield in 1982, that
serves as an autoassociative, content-addressable memory.
• Its weights are symmetric (wij = wji) and it has no
self-connections (wii = 0), which guarantees convergence to a
stable state.
Discrete Hopfield Network
• Discrete Hopfield Network: It is a fully interconnected neural
network where each unit is connected to every other unit.
• It behaves in a discrete manner, i.e., it gives finite distinct
outputs, generally of two types:
– Binary (0/1)
– Bipolar (-1/1)

Discrete Hopfield Network: Training Algorithm
• To store a set of patterns s(p), p = 1, ..., P, the weights are
obtained with the Hebb rule.
• For binary input patterns, the weight matrix W = {wij} is given by
wij = Σp [2si(p) - 1][2sj(p) - 1], for i ≠ j
• For bipolar input patterns, it is given by
wij = Σp si(p) sj(p), for i ≠ j
• In both cases there are no self-connections, i.e., wii = 0 for
all i. A sketch of this storage rule appears below.
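A minimal sketch of the bipolar storage rule (the patterns and the function name are illustrative):

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian storage for bipolar patterns: W = sum of outer products,
    with the diagonal zeroed (no self-connections, w_ii = 0)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)
    np.fill_diagonal(W, 0)
    return W

patterns = np.array([[1, 1, -1, -1], [-1, 1, 1, -1]])
print(hopfield_weights(patterns))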
Discrete Hopfield Network: Example
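As an illustration of recall in a discrete Hopfield network (the pattern and key below are illustrative, not the slide's original example): store one bipolar pattern, corrupt two of its bits, and recover it with asynchronous updates.

```python
import numpy as np

def recall_async(W, x, sweeps=3):
    """Asynchronous recall: update units one at a time with
    y_i = 1 if net_i >= 0 else -1, for a few full sweeps."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

stored = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)          # no self-connections: w_ii = 0

key = stored.copy()
key[:2] *= -1                   # corrupt two bits of the stored pattern
print(recall_async(W, key))     # -> [ 1 -1  1 -1  1 -1]
```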
Continuous Hopfield Network

The output is defined as

vi = g(ui)

where g is a continuous, monotonically increasing activation,
commonly the sigmoid g(ui) = ½[1 + tanh(λ ui)] with gain λ, and

vi = output from the continuous Hopfield network
ui = internal activity of a node in the continuous Hopfield
network. A sketch of this nonlinearity follows.

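A sketch of this output nonlinearity, assuming the common sigmoid form with a gain parameter λ (here `gain`):

```python
import numpy as np

def continuous_output(u, gain=1.0):
    """Sigmoid output v_i = 0.5 * (1 + tanh(gain * u_i)): maps the
    internal activity u_i smoothly into the interval (0, 1)."""
    return 0.5 * (1.0 + np.tanh(gain * u))

print(continuous_output(np.array([-2.0, 0.0, 2.0])))
```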
Energy Function

• The energy function Ef, also called the Lyapunov function,
determines the stability of the discrete Hopfield network, and is
characterized as follows:

Ef = -½ Σi Σj, j≠i yi yj wij - Σi xi yi + Σi θi yi

where yi is the state of node i, xi its external input, and θi its
threshold.
Energy Function

Condition − In a stable network, whenever the state of a node changes,
the above energy function will decrease. Suppose node i changes state
from yi(k) to yi(k+1); the resulting change in energy is

ΔEf = Ef(yi(k+1)) - Ef(yi(k)) = -(Σj wij yj + xi - θi) Δyi = -(neti) Δyi

where Δyi = yi(k+1) - yi(k). Under the update rule, Δyi always has the
same sign as neti, so ΔEf ≤ 0 and the network settles into a
minimum-energy stable state. A numerical check is sketched below.
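A numerical check of this property with illustrative weights (one stored pattern), zero external inputs, and zero thresholds:

```python
import numpy as np

def energy(W, y, x, theta):
    """E_f = -1/2 y^T W y - x^T y + theta^T y (W has zero diagonal)."""
    return -0.5 * y @ W @ y - x @ y + theta @ y

stored = np.array([1, -1, 1, -1])
W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0)
x = np.zeros(4)                  # no external inputs
theta = np.zeros(4)              # zero thresholds

y = np.array([-1, -1, 1, -1])    # stored pattern with one bit flipped
e_before = energy(W, y, x, theta)
y[0] = 1 if W[0] @ y + x[0] - theta[0] >= 0 else -1   # update node 0
print(e_before, energy(W, y, x, theta))   # -> 0.0 -6.0 (energy decreased)
```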
Bidirectional Associative Memory (BAM)
• The Hopfield network represents an autoassociative type of
memory − it can retrieve a corrupted or incomplete
memory but cannot associate this memory with another
different memory.
• Human memory is necessarily associative. It uses a chain of
mental associations to recover a lost memory, like
associations of faces with names, exam questions with
answers, etc.
• To make such associations of one type of object with
another, a Recurrent Neural Network (RNN) is needed that
receives a pattern of one set of neurons as an input and
generates a related, but different, output pattern of another
set of neurons.

Bidirectional Associative Memory (BAM)
• Bidirectional Associative Memory (BAM) is a supervised
learning model in artificial neural networks.
• This is a hetero-associative memory: for an input pattern, it
returns another pattern, which is potentially of a different size.

Bidirectional Associative Memory (BAM)
• Bidirectional associative memory (BAM) was first proposed by
Bart Kosko in 1988.
• The BAM network performs forward and backward associative
searches for stored stimulus responses.
• The BAM is a recurrent heteroassociative pattern-matching
network that encodes binary or bipolar patterns using the
Hebbian learning rule.
• It associates patterns from set A with patterns from set B,
and vice versa.
• BAM neural nets can respond to input from either layer
(input layer or output layer).

Bidirectional Associative Memory (BAM)
• The architecture of a BAM network consists of two layers of neurons
connected by directed weighted path interconnections.
The network dynamics involve two layers of interaction.
• The BAM network iterates by sending signals back and forth
between the two layers until all the neurons reach equilibrium.
• The weights associated with the network are bidirectional. Thus,
BAM can respond to inputs in either layer.
• It performs forward and backward search.
• It encodes binary/bipolar patterns using the Hebbian learning rule.
• Two types:
– Discrete BAM
– Continuous BAM

Bidirectional Associative Memory (BAM): Architecture
• Figure shows a BAM network consisting of n units in the X layer
and m units in the Y layer.
• The layers are connected in both directions (bidirectionally), so
that the weight matrix for signals sent from the X layer to the
Y layer is W, while the weight matrix for signals sent from the Y
layer to the X layer is WT.
• Thus, the same weight matrix serves both directions.

Why BAM is required?
• The main objective of introducing such a network
model is to store heteroassociative pattern pairs.
• It is used to retrieve a pattern given a noisy or
incomplete input.

BAM Architecture:
• When BAM accepts as input an n-dimensional vector X from set A,
the model recalls an m-dimensional vector Y from set B.
• Similarly, when Y is treated as input, the BAM recalls X.

Determination of Weights
• Let the input vectors be denoted by s(p) and the target vectors by
t(p), p = 1, ..., P. Then the weight matrix to store a set of input
and target vectors, where
s(p) = (s1(p), ..., si(p), ..., sn(p))
t(p) = (t1(p), ..., tj(p), ..., tm(p))
can be determined by the Hebb rule training algorithm.
• In case of input vectors being binary, the weight matrix W =
{wij} is given by
wij = Σp=1..P [2si(p) - 1][2tj(p) - 1]
• When the input vectors are bipolar, the weight matrix W =
{wij} can be defined as
wij = Σp=1..P si(p) tj(p)
A matrix-form sketch of the bipolar rule is given below.

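A sketch of the bipolar Hebb rule in matrix form (the pattern pairs are illustrative):

```python
import numpy as np

# Bipolar Hebb rule: w_ij = sum over p of s_i(p) * t_j(p),
# i.e. W = S^T T for pattern matrices S (P x n) and T (P x m).
S = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # input vectors s(p)
T = np.array([[1, -1], [-1, 1]])                  # target vectors t(p)

W = S.T @ T        # n x m weight matrix used from the X to the Y layer
print(W)           # W.T carries signals back from the Y to the X layer
```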
Determination of Weights
• The activation function is applied to the net input, and its form
depends on whether the input/target vector pairs used are binary
or bipolar.
• For binary patterns: yj = 1 if yinj > 0; yj unchanged if yinj = 0;
yj = 0 if yinj < 0.
• For bipolar patterns: yj = 1 if yinj > θj; yj unchanged if
yinj = θj; yj = -1 if yinj < θj.

Determination of Weights
• Step 0: Initialize the weights to store p vectors. Also
initialize all the activations to zero.
• Step 1: Perform Steps 2-6 for each testing input.
• Step 2: Set the activations of the X layer to the current input
pattern, i.e., present the input pattern x to the X layer, and
similarly present the input pattern y to the Y layer. Even
though it is a bidirectional memory, at one time step signals
can be sent from only one layer; so either of the input
patterns may be the zero vector.
• Step 3: Perform Steps 4-6 while the activations have not
converged.

BAM: Algorithm
• Step 4: Update the activations of the Y layer. Calculate the net
inputs yinj = Σi xi wij, apply the activation function
yj = f(yinj), and send the signals to the X layer.
• Step 5: Update the activations of the X layer. Calculate the net
inputs xini = Σj yj wij, apply the activation function
xi = f(xini), and send the signals to the Y layer.
• Step 6: Test for convergence: if the activation vectors x and y
have reached equilibrium, stop; otherwise, continue. A sketch of
this bidirectional iteration is given below.
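A minimal sketch of this bidirectional iteration, using the bipolar Hebb weights from the previous sketch (illustrative patterns; when the net input is zero, the previous activation is kept):

```python
import numpy as np

def bam_recall(W, x, sweeps=5):
    """Send signals X -> Y (via W) and Y -> X (via W.T) for a few
    sweeps; activations keep their old value when the net input is 0."""
    y = np.where(x @ W >= 0, 1, -1)            # initial Y-layer response
    for _ in range(sweeps):
        net_x = y @ W.T
        x = np.where(net_x > 0, 1, np.where(net_x < 0, -1, x))
        net_y = x @ W
        y = np.where(net_y > 0, 1, np.where(net_y < 0, -1, y))
    return x, y

S = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # stored s(p)
T = np.array([[1, -1], [-1, 1]])                  # stored t(p)
W = S.T @ T                                       # bipolar Hebb weights

print(bam_recall(W, S[0]))   # -> (array([ 1, -1,  1, -1]), array([ 1, -1]))
```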
Limitations of BAM
• Storage capacity of the BAM: the number of stored associations
should not exceed the number of neurons in the smaller layer.
• Incorrect convergence: the closest association may not always be
produced by BAM.
