CS 403-Soft Computing QA-Part-1


Q1.

Discuss and explain about Artificial Neural Model

Solution- The term "Artificial Neural Network" (ANN) refers to a biologically inspired sub-field of artificial intelligence modelled after the brain. An artificial neural network is a computational network based on the biological neural networks that build the structure of the human brain. Just as the human brain has neurons interconnected with one another, an artificial neural network has neurons that are linked to one another in the various layers of the network. These neurons are known as nodes.


The term "Artificial Neural Network" is derived from Biological neural networks that
develop the structure of a human brain. Similar to the human brain that has neurons
interconnected to one another, artificial neural networks also have neurons that are
interconnected to one another in various layers of the networks. These neurons are
known as nodes.

The typical biological neural network and the typical artificial neural network are usually shown side by side in diagrams (figures not reproduced here). The correspondence between the two is as follows: dendrites in the biological neural network represent the inputs of the artificial neural network, the cell nucleus represents the nodes, synapses represent the weights, and the axon represents the output.

An artificial neural network is an attempt, in the field of artificial intelligence, to mimic the network of neurons that makes up the human brain, so that computers can understand things and make decisions in a human-like manner. The artificial neural network is designed by programming computers to behave, in a simplified way, like interconnected brain cells.

There are roughly 100 billion neurons in the human brain, and each neuron forms somewhere between 1,000 and 100,000 connection points with other neurons. In the human brain, data is stored in a distributed manner, and we can extract more than one piece of this data from memory in parallel when necessary. We can say that the human brain is an incredibly powerful parallel processor.
Q2. Define ANN Architecture. Also discuss classification Taxonomy of ANN
Connectivity.

Solution- To understand the architecture of an artificial neural network, we must first understand what a neural network consists of. A neural network consists of many artificial neurons, termed units, arranged in a sequence of layers. Let us look at the various types of layers available in an artificial neural network. An artificial neural network primarily consists of three layers:

Input Layer:
As the name suggests, it accepts inputs in several different formats provided by the
programmer.
Hidden Layer:
The hidden layer is present between the input and output layers. It performs all the calculations needed to find hidden features and patterns.
Output Layer:
The input goes through a series of transformations in the hidden layer, and the final result is conveyed through the output layer.

The artificial neural network takes the inputs, computes their weighted sum, and adds a bias. This computation is represented in the form of a transfer function. The weighted total is then passed as input to an activation function to produce the output. The activation function decides whether a node should fire or not; only the nodes that fire contribute to the output layer. Several distinct activation functions are available, chosen according to the type of task being performed. A sketch of this computation for a single neuron is given below.
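As a minimal sketch (not part of the original answer) of the computation just described, the code below implements a single neuron in Python: a weighted sum of the inputs plus a bias, passed through a sigmoid activation function. The specific weights, bias, and choice of sigmoid are illustrative assumptions.

import numpy as np

def neuron_output(inputs, weights, bias):
    # Transfer function: weighted sum of the inputs plus a bias
    weighted_total = np.dot(weights, inputs) + bias
    # Sigmoid activation decides how strongly the node "fires"
    return 1.0 / (1.0 + np.exp(-weighted_total))

# Example: a neuron with three inputs (values chosen only for illustration)
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.3, -0.2])   # weights
b = 0.1                          # bias
print(neuron_output(x, w, b))    # output between 0 and 1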
Taxonomy of ANNs

Artificial neural networks (ANN) are adaptive models that can establish almost any
relationship between data. They can be regarded as black boxes to build mappings
between a set of input and output vectors. ANNs are quite promising in solving
problems where traditional models fail, especially for modelling complex phenomena
which show a non-linear relationship.

Neural networks can be roughly divided into three categories:

• Signal transfer networks. In signal transfer networks, the input signal is transformed into an output signal. Note that the dimensionality of the signal may change during this process. The signal is propagated through the network and is thus changed by the internal mechanisms of the network. Most network models are based on some kind of predefined basis function (e.g. Gaussian peaks, as in the case of radial basis function (RBF) networks, or sigmoid functions, as in the case of multi-layer perceptrons).
• State transition networks. Examples: Hopfield networks, and Boltzmann
machines.
• Competitive learning networks. In competitive networks (sometimes also called self-organizing maps, or SOMs) all the neurons of the network compete for the input signal. The neuron which "wins" gets the chance to move towards the input signal in n-dimensional space. Example: the Kohonen feature map. A minimal winner-take-all update is sketched after this list.
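The following is a minimal sketch, not taken from the original answer, of a single competitive-learning step: the neuron whose weight vector is closest to the input "wins" and is moved towards the input signal. The learning rate, network size, and random data are illustrative assumptions.

import numpy as np

def competitive_step(weights, x, lr=0.1):
    # Each row of `weights` is one neuron's weight vector.
    # The winner is the neuron whose weights are closest to the input x.
    distances = np.linalg.norm(weights - x, axis=1)
    winner = np.argmin(distances)
    # Only the winning neuron moves towards the input signal
    weights[winner] += lr * (x - weights[winner])
    return winner

rng = np.random.default_rng(0)
W = rng.random((4, 2))            # 4 competing neurons, 2-dimensional inputs
for x in rng.random((100, 2)):    # a stream of input signals
    competitive_step(W, x)
print(W)                          # weight vectors have moved towards regions of the input space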

Q3. Discuss and explain about Learning Strategy in detail.


Solution- Artificial neural networks are among the most powerful learning models. They have the versatility to approximate a wide range of complex functions representing multi-dimensional input-output maps. Neural networks also have inherent adaptability and can perform robustly even in noisy environments. An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information-processing system: it is composed of a large number of highly interconnected simple processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and this is true of ANNs as well. ANNs can process information at great speed owing to their massive parallelism. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections for new situations of interest and to answer "what if" questions. A minimal sketch of this learn-by-example process is given below.
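As an illustrative sketch of this learn-by-example process (not from the original answer), the code below trains a small network on the XOR pattern-classification task; the library, layer size, solver, and iteration count are assumptions chosen only for the example.

from sklearn.neural_network import MLPClassifier

# Learning by example: the four XOR patterns and their target classes
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# The network is configured for this task through a learning process:
# fit() repeatedly adjusts the connection weights to reduce the error on the examples.
net = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    solver='lbfgs', max_iter=2000, random_state=1)
net.fit(X, y)

# The trained network can now act as an "expert" on this kind of pattern
print(net.predict([[1, 0], [1, 1]]))   # ideally [1 0]; results can vary with initialisation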
Advantages of ANN:

1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Q4. What do you mean by Learning Rules? Explain briefly about Error
Correction.

Solution- A learning rule, or learning process, is a method or mathematical logic that improves an artificial neural network's performance when the rule is applied over the network. Learning rules update the weights and bias levels of a network as the network is simulated in a specific data environment. Applying a learning rule is an iterative process; it helps a neural network learn from the existing conditions and improve its performance. Let us look at the different learning rules used in neural networks:
• Hebbian learning rule – The Hebbian rule was the first learning rule. Donald Hebb developed it in 1949 as a learning algorithm for unsupervised neural networks. We can use it to determine how to improve the weights between the nodes of a network. The Hebb learning rule assumes that if two neighbouring neurons are activated and deactivated at the same time, the weight connecting them should increase; for neurons operating in opposite phases, the weight between them should decrease; and if there is no signal correlation, the weight should not change. When the inputs of both nodes are either both positive or both negative, a strong positive weight develops between the nodes. If the input of one node is positive and the other negative, a strong negative weight develops between them (a one-step Hebbian update is sketched after this list).

• Perceptron learning rule – As you know, each connection in a neural network has an associated weight, which changes in the course of learning. The perceptron rule is an example of supervised learning: the network starts its learning by assigning a random value to each weight. It then calculates the output values for a set of records whose expected output values are known; this set is called the learning sample. The network compares each calculated output value with the expected value and then calculates an error function ε, which can be the sum of the squares of the errors over the individuals in the learning sample.
• Delta learning rule – Developed by Widrow and Hoff, the delta rule is one of the most common learning rules. It depends on supervised learning. The rule states that the modification in the synaptic weight of a node is proportional to the product of the error and the input (see the sketch after this list).
• Correlation learning rule – The correlation learning rule is based on a principle similar to the Hebbian learning rule. It assumes that weights between neurons that respond in the same way should be more positive, and weights between neurons with opposite reactions should be more negative.

• Outstar learning rule – We use the outstar learning rule when we assume that the nodes or neurons in a network are arranged in a layer. Here the weights connected to a certain node should be equal to the desired outputs for the neurons connected through those weights. The outstar rule produces the desired response t for the layer of n nodes.
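As referenced in the list above, the following is a minimal sketch (not from the original answer) of one-step weight updates for the Hebbian and delta rules; the learning rate and the example vectors are illustrative assumptions.

import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # Hebbian rule: strengthen weights when input x and output y are correlated
    return w + lr * y * x

def delta_update(w, x, y, target, lr=0.1):
    # Delta rule (Widrow-Hoff): the weight change is proportional to error * input
    error = target - y
    return w + lr * error * x

w = np.array([0.2, -0.1, 0.4])
x = np.array([1.0, 0.0, 1.0])
y = float(np.dot(w, x))                    # output of a simple linear neuron

print(hebbian_update(w, x, y))             # unsupervised: no target needed
print(delta_update(w, x, y, target=1.0))   # supervised: uses the known target value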

Q5. Write and explain about Pattern Clustering. Explain with suitable example.

Solution- Pattern recognition is a mature field in computer science with well-established techniques for the assignment of unknown patterns to categories, or classes. A pattern is defined as a vector of some number of measurements, called features. Usually, a pattern recognition system uses training samples from known categories to form a decision rule for unknown patterns. The unknown pattern is assigned to one of the categories according to the decision rule. Since we are interested in the classes of documents that have been assigned by the user, we can use pattern recognition techniques to try to classify previously unseen documents into the user's categories. While pattern recognition techniques require that the number and labels of categories are known, clustering techniques are unsupervised, requiring no external knowledge of categories. Clustering methods simply try to group similar patterns into clusters whose members are more like each other (according to some distance measure) than to members of other clusters. There is no a priori knowledge of patterns that belong to certain groups, or even how many groups are appropriate.
Unlike traditional clustering methods that focus on grouping objects with similar values
on a set of dimensions, pattern-based clustering finds objects that exhibit coherent
patterns in subspaces. Pattern-based clustering extends the concept of traditional
clustering and benefits a wide range of applications.
Example: Customer Segmentation for an E-commerce Website
Imagine you are working for an e-commerce website, and you want to better understand
your customers to provide targeted marketing strategies. You have collected data on
customer purchases, including items bought, purchase frequency, and total spending.
You decide to use pattern clustering to segment your customers into distinct groups based
on their purchasing behavior.
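A minimal sketch of this customer-segmentation example, assuming k-means as the clustering method and purely illustrative feature values (monthly purchase frequency and total spending); the library and all numbers below are assumptions, not data from the original text.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [purchases per month, total spending]
customers = np.array([
    [1,   50], [2,   80], [1,   60],    # occasional, low-spending customers
    [8,  400], [9,  450], [7,  380],    # frequent, mid-spending customers
    [3, 2000], [2, 2500], [4, 1800],    # rare but high-value customers
])

# Group the customers into 3 segments by similarity of purchasing behaviour
# (in practice the features would be scaled first so spending does not dominate the distance)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # segment assignment for each customer
print(kmeans.cluster_centers_)  # the "typical" customer of each segment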
Q6. What do you mean by Function Approximation? Explain.

Solution- Function approximation is the study of selecting functions in a class that match target functions. It is a process that is useful in applied mathematics and computer science. Function approximation is often related to a Markov decision process (MDP), which consists of an agent and various states.
To understand function approximation well, it's important to know that in this term the
word "function" doesn't refer to an object-oriented programming function that takes a
variable and provides a result. The word "function" refers to the mathematical use of
function, where a function matches one item in a data set to another single item in another
data set.

Another key point is that function approximation often works with value iteration in an MDP. Researchers have shown how function approximation and value iteration can be used to build gameplay strategies for various video games, which is one of the most prominent and easiest ways to show how MDPs work.

Key Concepts:
➢ Target Function: The function that you want to approximate is known as the
target function. It could be a real-world process, a mathematical concept, or a
mapping between inputs and outputs.
➢ Approximating Function: The function used to approximate the target function
is called the approximating function or the model. It is typically chosen from a
specific class of functions that are suitable for the problem at hand.
➢ Parameters: In many cases, the approximating function has parameters that need
to be adjusted to achieve a better fit to the target function. The process of adjusting
these parameters is known as parameter estimation or model training.
➢ Fitting: The process of finding the optimal parameters for the approximating
function is often referred to as fitting the model to the data. This involves
minimizing the discrepancy between the predictions of the approximating
function and the actual values of the target function.
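A minimal sketch of these four concepts, not from the original answer: here the target function is assumed to be sin(x), the approximating function is a cubic polynomial, the polynomial coefficients are the parameters, and np.polyfit performs the fitting by minimising the squared discrepancy.

import numpy as np

# Target function: the mapping we want to approximate
def target(x):
    return np.sin(x)

# Sampled input/output pairs from the target function
x = np.linspace(0, np.pi, 50)
y = target(x)

# Approximating function: a degree-3 polynomial; its coefficients are the parameters.
# Fitting: polyfit chooses the parameters that minimise the squared error.
params = np.polyfit(x, y, deg=3)
approx = np.poly1d(params)

# Compare the approximation with the target at a few points
for xi in [0.5, 1.5, 2.5]:
    print(f"x={xi:.1f}  target={target(xi):.3f}  approx={approx(xi):.3f}")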
