CS 403-Soft Computing QA-Part-1
Solution- This Artificial Neural Network tutorial covers basic and advanced concepts of
ANNs and is intended for beginners as well as professionals. It covers all the aspects
related to artificial neural networks, including ANNs themselves, adaptive resonance
theory, Kohonen self-organizing maps, building blocks, unsupervised learning, genetic
algorithms, and related topics.
The term "Artificial Neural Network" is derived from the biological neural networks that
form the structure of the human brain. Just as the human brain has neurons
interconnected with one another, artificial neural networks have neurons
interconnected with one another across the various layers of the network. These
neurons are known as nodes.
The given figure illustrates the typical diagram of Biological Neural Network.
The typical Artificial Neural Network looks something like the given figure.
Dendrites from Biological Neural Network represent inputs in Artificial Neural
Networks, cell nucleus represents Nodes, synapse represents Weights, and Axon
represents Output.
The human brain contains roughly 86 billion neurons, and each neuron forms
somewhere in the range of 1,000 to 100,000 synaptic connections. In the human brain,
data is stored in a distributed manner, and we can retrieve more than one piece of this
data from memory in parallel when necessary. In this sense, the human brain is an
incredibly powerful parallel processor.
Q2. Define ANN Architecture. Also discuss classification Taxonomy of ANN
Connectivity.
Input Layer:
As the name suggests, it accepts inputs in several different formats provided by the
programmer.
Hidden Layer:
The hidden layer is present in-between input and output layers. It performs all the
calculations to find hidden features and patterns.
Output Layer:
The input passes through a series of transformations in the hidden layers, and the final
result is conveyed through this layer.
The artificial neural network takes input and computes the weighted sum of the inputs
and includes a bias. This computation is represented in the form of a transfer function.
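As a minimal sketch of the computation just described, a single artificial neuron can take a weighted sum of its inputs plus a bias and pass it through a transfer function. The logistic sigmoid used here is one common choice of transfer function, and the input and weight values are illustrative assumptions:

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias (the net input to the neuron)
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Transfer function: here a logistic sigmoid squashing net into (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

# Example with two inputs (values chosen arbitrarily for illustration)
out = neuron_output([1.0, 0.5], [0.4, -0.2], bias=0.1)
```

Any other transfer function (step, tanh, ReLU) could be substituted in the same place without changing the weighted-sum structure.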
Artificial neural networks (ANN) are adaptive models that can establish almost any
relationship between data. They can be regarded as black boxes to build mappings
between a set of input and output vectors. ANNs are quite promising in solving
problems where traditional models fail, especially for modelling complex phenomena
which show a non-linear relationship.
1. Adaptive learning: An ability to learn how to do tasks based on the data given
for training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of
the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and
special hardware devices are being designed and manufactured which take advantage of
this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a
network leads to the corresponding degradation of performance. However, some
network capabilities may be retained even with major network damage.
Q4. What do you mean by Learning Rules? Explain briefly about Error
Correction.
The Hebb learning rule states that if two connected neurons are activated at the same
time, the weight between them should increase. For neurons operating in opposite
phase, the weight between them should decrease. If there is no correlation between
their signals, the weight should not change.
When the activations of both nodes are either both positive or both negative, a strong
positive weight develops between the nodes. If the activation of one node is positive
and the other negative, a strong negative weight develops between them.
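The Hebb rule above can be sketched as a single weight update, where the weight change is the product of the learning rate, the pre-synaptic activation x, and the post-synaptic activation y (the learning-rate value is an illustrative assumption):

```python
def hebb_update(w, x, y, lr=0.1):
    # Hebbian rule: delta_w = lr * x * y
    # The weight grows when x and y have the same sign, shrinks when
    # they have opposite signs, and is unchanged when either is zero.
    return w + lr * x * y
```

This directly mirrors the three cases in the text: same-sign activity strengthens the connection, opposite-sign activity weakens it, and uncorrelated (zero) activity leaves it alone.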
• Perceptron learning rule – Each connection in a neural network has an associated
weight, which changes in the course of learning. The perceptron rule is an example of
supervised learning: the network starts its learning by assigning a random value to
each weight, then calculates the output for a set of records whose expected output
values are known. This set is called the learning sample. The network then compares
the calculated output value with the expected value and computes an error function E,
which can be the sum of the squared errors occurring for each individual record in the
learning sample.
• Delta learning rule – Developed by Widrow and Hoff, the delta rule is one of the
most common learning rules. It is a supervised learning rule. It states that the
modification in the synaptic weight of a node is equal to the product of the error and
the input.
• Correlation learning rule – The correlation learning rule is based on a similar
principle as the Hebbian learning rule. It assumes that weights between neurons that
respond together should be more positive, and weights between neurons with opposite
reactions should be more negative.
• Outstar learning rule – We use the outstar learning rule when we assume that the
nodes or neurons in a network are arranged in a layer. Here the weights connecting to a
certain node should be equal to the desired outputs of the neurons connected through
those weights. The outstar rule produces the desired response t for the layer of n
nodes.
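The delta rule described above can be sketched for a single linear unit: the weight change is the learning rate times the error times the input. The training data (the target function y = 2x) and learning rate here are illustrative assumptions:

```python
def delta_rule_step(weights, bias, x, target, lr=0.1):
    # Output of a single linear unit: weighted sum of inputs plus bias
    y = sum(w * xi for w, xi in zip(weights, x)) + bias
    error = target - y
    # Delta rule: weight change = lr * error * input
    new_w = [w + lr * error * xi for w, xi in zip(weights, x)]
    new_b = bias + lr * error
    return new_w, new_b

# Train a single linear neuron on a learning sample drawn from y = 2*x
w, b = [0.0], 0.0
for _ in range(200):
    for x, t in [([1.0], 2.0), ([2.0], 4.0), ([3.0], 6.0)]:
        w, b = delta_rule_step(w, b, x, t)
# After training, w[0] approaches 2 and b approaches 0
```

Each pass compares the calculated output with the expected value and nudges the weights in proportion to the error, exactly as the perceptron/delta descriptions above state.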
Q5. Write and explain about Pattern Clustering. Explain with suitable example.
Solution- Pattern recognition is a mature field in computer science with well-established
techniques for the assignment of unknown patterns to categories, or classes. A pattern is
defined as a vector of some number of measurements, called features. Usually, a pattern
recognition system uses training samples from known categories to form a decision rule
for unknown patterns. The unknown pattern is assigned to one of the categories
according to the decision rule. Since we are interested in the classes of documents that
have been assigned by the user, we can use pattern recognition techniques to try to
classify previously unseen documents into the user's categories. While pattern
recognition techniques require that the number and labels of categories are known,
clustering techniques are unsupervised, requiring no external knowledge of categories.
Clustering methods simply try to group similar patterns into clusters whose members are
more similar to each other (according to some distance measure) than to members of
other clusters. There is no a priori knowledge of which patterns belong to which groups,
or even of how many groups are appropriate.
Unlike traditional clustering methods that focus on grouping objects with similar values
on a set of dimensions, pattern-based clustering finds objects that exhibit coherent
patterns in subspaces. Pattern-based clustering extends the concept of traditional
clustering and benefits a wide range of applications.
Example: Customer Segmentation for an E-commerce Website
Imagine you are working for an e-commerce website, and you want to better understand
your customers to provide targeted marketing strategies. You have collected data on
customer purchases, including items bought, purchase frequency, and total spending.
You decide to use pattern clustering to segment your customers into distinct groups based
on their purchasing behavior.
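A minimal sketch of this segmentation, using a hand-rolled k-means clustering loop. The customer data (purchase frequency, total spending) and the initial cluster centres are invented for illustration:

```python
def kmeans(points, centers, iters=20):
    # Simple k-means: assign each point to its nearest centre,
    # then move each centre to the mean of its assigned points.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers, clusters

# Hypothetical customers as (purchase_frequency, total_spending) pairs:
# three low-spending occasional buyers and three high-spending frequent buyers
customers = [(1, 50), (2, 60), (1, 55), (10, 900), (12, 950), (11, 920)]
centers, clusters = kmeans(customers, centers=[(0, 0), (15, 1000)])
```

The two resulting clusters correspond to the two purchasing-behaviour segments, each of which could then receive its own targeted marketing strategy.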
Q6. What do you mean by Function Approximation? Explain.
Solution- Function approximation is the study of selecting functions in a class that match
target functions. It is a process that is useful in applied mathematics and computer science.
Function approximation is often related to a Markov decision process (MDP) which
consists of an agent and various states.
To understand function approximation well, it's important to know that in this term the
word "function" doesn't refer to an object-oriented programming function that takes a
variable and provides a result. The word "function" refers to the mathematical use of
function, where a function matches one item in a data set to another single item in another
data set.
Another key point is that function approximation often works with value iteration in an
MDP. Researchers have shown how function approximation and value iteration can be
used to build gameplay strategies for various video games, which is one of the most
prominent and accessible ways to demonstrate how MDPs work.
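To make the value-iteration side of this concrete, here is a sketch of tabular value iteration (i.e. without function approximation) on a tiny, invented two-state MDP with deterministic transitions:

```python
def value_iteration(n_states, transitions, rewards, gamma=0.9, iters=100):
    # transitions[s][a] -> next state; rewards[s][a] -> immediate reward
    V = [0.0] * n_states
    for _ in range(iters):
        # Bellman optimality backup for every state
        V = [max(rewards[s][a] + gamma * V[transitions[s][a]]
                 for a in range(len(transitions[s])))
             for s in range(n_states)]
    return V

# Toy MDP (hypothetical): in each state, action 0 stays put, action 1 switches
transitions = [[0, 1], [1, 0]]
rewards = [[0.0, 0.0], [1.0, 0.0]]  # only staying in state 1 yields reward
V = value_iteration(2, transitions, rewards)
```

In larger problems (such as video-game state spaces) the value table V becomes too big to store, which is exactly where an approximating function replaces the table.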
Key Concepts:
➢ Target Function: The function that you want to approximate is known as the
target function. It could be a real-world process, a mathematical concept, or a
mapping between inputs and outputs.
➢ Approximating Function: The function used to approximate the target function
is called the approximating function or the model. It is typically chosen from a
specific class of functions that are suitable for the problem at hand.
➢ Parameters: In many cases, the approximating function has parameters that need
to be adjusted to achieve a better fit to the target function. The process of adjusting
these parameters is known as parameter estimation or model training.
➢ Fitting: The process of finding the optimal parameters for the approximating
function is often referred to as fitting the model to the data. This involves
minimizing the discrepancy between the predictions of the approximating
function and the actual values of the target function.
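The concepts above can be sketched end to end with the simplest case: an approximating function y = a*x + b whose two parameters are fitted to samples of a target function by closed-form least squares. The target function f(x) = 3x + 2 is an assumption chosen purely for illustration:

```python
def fit_line(xs, ys):
    # Closed-form least-squares fit of the approximating function y = a*x + b:
    # this minimizes the sum of squared discrepancies between predictions
    # and the sampled target values.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Samples of a hypothetical target function f(x) = 3x + 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3 * x + 2 for x in xs]
a, b = fit_line(xs, ys)
```

Here the target function, the approximating function, the parameters (a, b), and the fitting step each correspond directly to one of the four key concepts listed above.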