Soft Computing Question Answer


For McCulloch-Pitts Neuron Model numerical questions, follow this link

(https://www.ques10.com/p/39252/all-types-of-numericals-1/)

Question - Implement the AND function using a McCulloch-Pitts Neuron (take binary data).
OR
Question - Implement the XOR function using a McCulloch-Pitts Neuron (take binary data).
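For illustration, a minimal Python sketch of both questions follows. The weight and threshold values are standard textbook choices assumed here (not given in the question), and the XOR case is built from several McCulloch-Pitts neurons because a single neuron cannot represent XOR.

# McCulloch-Pitts neuron: fires (outputs 1) when the weighted sum of its
# binary inputs reaches the threshold. Weights and thresholds below are
# chosen by inspection for the AND and XOR truth tables.
def mp_neuron(inputs, weights, threshold):
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

def and_gate(x1, x2):
    # AND: both inputs must be 1, so weights (1, 1) with threshold 2.
    return mp_neuron((x1, x2), (1, 1), 2)

def xor_gate(x1, x2):
    # XOR is not linearly separable, so a single neuron is not enough;
    # it is built from two hidden neurons (x1 AND NOT x2, NOT x1 AND x2)
    # followed by an OR neuron.
    z1 = mp_neuron((x1, x2), (1, -1), 1)   # x1 AND NOT x2
    z2 = mp_neuron((x1, x2), (-1, 1), 1)   # NOT x1 AND x2
    return mp_neuron((z1, z2), (1, 1), 1)  # z1 OR z2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "XOR:", xor_gate(a, b))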

KOHONEN SELF ORGANISING NEURAL NETWORK


An Artificial Neural Network (ANN) is based on a collection of connected nodes called artificial
neurons, which model neurons in a way loosely analogous to the human brain. The self-organizing map (SOM), or
Kohonen network, is a type of artificial neural network that is trained with an unsupervised
learning algorithm.

Kohonen Self-Organizing Neural Network:


The Kohonen Map or Self-Organizing Map (SOM) is a type of neural network. It was developed
by Teuvo Kohonen in 1982. It is called self-organizing because it does not require supervision:
the network follows an unsupervised learning approach and is trained through a competitive
learning algorithm.
The major characteristic of this algorithm is that input data that are close in the high-dimensional
input space are mapped to nearby nodes in the two-dimensional (2D) grid. This technique is therefore used
for dimensionality reduction, as it maps the high-dimensional input to a low-dimensional
discretized representation. A further advantage is that the nodes are self-organizing, so supervision is
not needed.
Feature Maps: - Self-organizing maps (SOM) are also called feature maps, as they
retain the features of the input data while training to capture the similarities between the nodes.
This makes SOMs useful for visualization, as they create low-dimensional views of high-dimensional data
and can represent the relationships between the data points.
Vector quantization: - Vector quantization is one of the properties of Self-Organizing Maps;
it is a compression technique that provides a way to represent multi-dimensional data in a
lower-dimensional space, typically in one or two dimensions. The SOM uses competitive
learning instead of error correction to modify the weights, and each update applies only to the winning node and its neighbourhood rather than to the entire network.
Let us now discuss the architecture of the self-organizing neural network.

Architecture of Self-Organizing Maps:


The self-organizing neural network differs from other ANNs in both its architectural and algorithmic
properties.

This self-organizing neural network consists of a single-layer, linear 2D grid of neurons rather
than a series of layers. All the nodes in this lattice are connected directly to the input vector, so the
SOM network consists of two layers: the input layer and the output layer.
The weights are updated as a function of the input data: at every iteration the grid re-maps its
coordinates to follow the distribution of the inputs. Although all nodes respond when an instance of the
input vector is presented to the network, only a single node (the winner) is activated at each iteration.
Stages of operation:
The functioning of a self-organizing neural network is divided into three stages:

Construction: - The self-organizing network consists of a few basic elements. The input signals
stimulate a matrix of neurons; these signals are grouped and transferred to every neuron.
Learning: - This mechanism determines the similarity between every neuron and the input
signal and assigns the neuron with the shortest distance as the winner. At the start of the process the
weights are small random numbers; after learning, these weights are modified and reflect the
internal structure of the input data.
Identification: - In the final stage, the weights of the winning neuron and its neighbours are
adapted, and the network topology is defined by determining the neighbours of every neuron.

Properties:
Some of the properties to be known are:
Best Matching Unit (BMU): The BMU is the node chosen by computing the distance between the current
input vector and every node in the network:
Distance from input = \sqrt{\sum_{i=0}^{n} (I_i - W_i)^2}
where I is the current input vector, W is the node's weight vector, and n is the number of weights.
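As a brief illustration, the BMU can be computed directly from this formula. The following Python/NumPy sketch is illustrative only; the function and variable names are assumptions, not part of the original material.

import numpy as np

def find_bmu(I, all_weights):
    # all_weights: one row per node (each row is a node's weight vector W).
    # Distance from input = sqrt(sum_i (I_i - W_i)^2) for every node.
    dists = np.sqrt(np.sum((np.asarray(all_weights) - np.asarray(I)) ** 2, axis=1))
    # The BMU is the node with the smallest distance to the input.
    return int(np.argmin(dists))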
Algorithm:
Step 1: Initialize the weights wij to small random values.
Step 2: Choose a random input vector x(t).
Step 3: Repeat steps 4 and 5 for all nodes on the map.
Step 4: Calculate the Euclidean distance between the weight vector wij and the input vector x(t), and
compute the square of this distance.
Step 5: Track the node that produces the smallest distance to x(t).
Step 6: Determine the overall Best Matching Unit (BMU), i.e. the node with the smallest distance
among all those calculated.
Step 7: Discover the topological neighbourhood of the BMU in the Kohonen map.
Step 8: Repeat for all nodes in the BMU neighbourhood:
update each node's weights by adding a fraction of the difference between the input vector x(t)
and the node's weight vector w(t).
Step 9: Repeat steps 2 to 8 until the chosen number of iterations is reached.
Here, step 1 represents the initialization phase, while steps 2 to 9 represent the training phase.
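The steps above can be combined into a short training loop. The following Python/NumPy sketch is only illustrative: the grid size, learning-rate and radius decay schedules, and the Gaussian neighbourhood function are common choices assumed here, not values prescribed by the algorithm description.

import numpy as np

def train_som(data, rows=10, cols=10, n_iters=1000, lr0=0.5, radius0=5.0):
    n_features = data.shape[1]
    rng = np.random.default_rng(0)
    # Step 1: initialise the weight vectors wij with small random numbers.
    weights = rng.random((rows, cols, n_features))
    # Grid coordinates, used to measure topological distance on the 2D map.
    coords = np.array([[r, c] for r in range(rows) for c in range(cols)]).reshape(rows, cols, 2)

    for t in range(n_iters):
        # Step 2: choose a random input vector x(t).
        x = data[rng.integers(len(data))]
        # Steps 3-6: squared Euclidean distance to every node, then the BMU.
        dists = np.sum((weights - x) ** 2, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Decay the learning rate and neighbourhood radius over time (assumed schedule).
        lr = lr0 * np.exp(-t / n_iters)
        radius = radius0 * np.exp(-t / n_iters)
        # Step 7: topological neighbourhood of the BMU on the grid (Gaussian influence).
        grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
        influence = np.exp(-grid_dist2 / (2 * radius ** 2))
        # Step 8: move each node's weights a fraction of (x - w) towards the input,
        # scaled by its distance from the BMU.
        weights += lr * influence[..., np.newaxis] * (x - weights)
    return weights

# Usage: map 3-dimensional points onto a 10 x 10 grid.
# som = train_som(np.random.rand(500, 3))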

Advantages:
 It is easily interpreted and understood.

 It reduces the dimensionality of the data.

 Grid clustering makes it easy to observe similarities in the data.

Disadvantages:
 It does not build a generative model for the data.

 It relies on a predefined distance in feature space (a problem shared by most clustering
algorithms, to be fair).

 The magnification factors are not well understood.

 The 1D (proven) topological ordering property does not extend to 2D.

 Training is slow, and the map is hard to train against slowly evolving data.

 It is not very intuitive: neurons that are close on the map (topological proximity) may be far away in
feature space.

 It does not behave gracefully with categorical data, and even worse with mixed data.
Adaptive Resonance Theory (ART)

Adaptive resonance theory is a type of neural network technique developed by Stephen
Grossberg and Gail Carpenter in 1987. The basic ART model uses an unsupervised learning technique.
The terms "adaptive" and "resonance" suggest that these networks are open to new
learning (i.e. adaptive) without discarding the previous or old information (i.e. resonance). ART
networks are known to solve the stability-plasticity dilemma: stability refers to their
ability to memorize what has been learned, and plasticity refers to the fact that they remain flexible enough to gain
new information. ART networks implement a clustering algorithm: input is presented to the
network, and the algorithm checks whether it fits into one of the already stored clusters.

Types of Adaptive Resonance Theory (ART)

Carpenter and Grossberg developed different ART architectures as a result of 20 years of
research. The ARTs can be classified as follows:
 ART1 – It is the simplest and most basic ART architecture. It is capable of clustering binary
input values.
 ART2 – It is an extension of ART1 that is capable of clustering continuous-valued input data.
 Fuzzy ART – It is the augmentation of ART with fuzzy logic.
 ARTMAP – It is a supervised form of ART learning where one ART module learns based on the
previous ART module. It is also known as predictive ART.
 FARTMAP – This is a supervised ART architecture with fuzzy logic included.

Basic of Adaptive Resonance Theory (ART) Architecture

Adaptive resonance theory is a type of neural network that is self-organizing and competitive.
It can be of either type, unsupervised (ART1, ART2, ART3, etc.) or supervised
(ARTMAP). Generally, the supervised algorithms are named with the suffix "MAP".
The basic ART model, however, is unsupervised in nature and consists of:
 The F1 layer or comparison field (where the inputs are processed)
 The F2 layer or recognition field (which consists of the clustering units)
 The reset module (which acts as a control mechanism)
The F1 layer accepts the inputs, performs some processing, and transfers them to the F2 layer, where
they are compared against the stored cluster units to find the best match.

There exist two sets of weighted interconnections for controlling the degree of similarity between
the units in the F1 layer and those in the F2 layer.

The F2 layer is a competitive layer. The cluster unit with the largest net input becomes the
candidate to learn the input pattern first, and the rest of the F2 units are ignored.

The reset unit decides whether or not the candidate cluster unit is allowed to learn the input
pattern, depending on how similar its top-down weight vector is to the input vector. This
comparison is called the vigilance test.
Thus we can say that the vigilance parameter controls how new memories or new
information are incorporated: higher vigilance produces more detailed memories, lower vigilance produces more
general memories.
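To make the vigilance test concrete, here is a minimal Python sketch of an ART1-style clustering step for binary inputs. The match rule |x AND w| / |x| >= rho and the intersection-based fast-learning update follow the standard ART1 formulation; the variable names, the simplified F2 ordering, and the fast-learning shortcut are assumptions made for illustration only.

import numpy as np

def art1_step(x, prototypes, rho=0.7):
    """Present one binary input vector x and return the updated prototype list.

    prototypes: list of binary NumPy arrays (the stored clusters / F2 units).
    rho: vigilance parameter in (0, 1]; higher rho gives more, finer clusters.
    """
    x = np.asarray(x)
    # Candidate order: prototypes ranked by how strongly the input activates them
    # (a simplified stand-in for "largest net input" in the F2 layer).
    order = sorted(range(len(prototypes)),
                   key=lambda j: -np.sum(np.minimum(x, prototypes[j])))
    for j in order:
        w = prototypes[j]
        # Vigilance test: similarity of the top-down weight vector to the input.
        match = np.sum(np.minimum(x, w)) / max(np.sum(x), 1)
        if match >= rho:
            # Resonance: the winning cluster learns (fast learning keeps only
            # the features shared by the input and the prototype).
            prototypes[j] = np.minimum(x, w)
            return prototypes
        # Otherwise the reset unit inhibits this node and the search continues.
    # No stored cluster passes the vigilance test: create a new cluster.
    prototypes.append(x.copy())
    return prototypes

# Usage: present a few binary patterns one at a time.
# protos = []
# for pattern in [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]]:
#     protos = art1_step(pattern, protos, rho=0.8)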

Advantage of Adaptive Resonance Theory (ART)


 It exhibits stability and is not disturbed by a wide variety of inputs provided to its network.
 It can be integrated and used with various other techniques to give better results.
 It can be used in various fields such as mobile robot control, face recognition, land cover
classification, target recognition, medical diagnosis, signature verification, clustering web
users, etc.
 It has advantages over plain competitive learning (as used in networks such as BPNN): competitive
learning lacks the capability to add new clusters when deemed necessary and does not guarantee
stability in forming clusters.

Limitations of Adaptive Resonance Theory


Some ART networks are inconsistent (such as Fuzzy ART and ART1), as their results depend upon the
order in which the training data are presented, or upon the learning rate.
