
Module 3 (Neural Networks & Genetic Algorithms for solving business problems)

Explain in brief the History of Artificial Neural Network (ANN) ?

A neural network is a method in artificial intelligence that teaches computers to process data in a
way that is inspired by the human brain. It is a type of machine learning process, called deep
learning, that uses interconnected nodes or neurons in a layered structure that resembles the human
brain. It creates an adaptive system that computers use to learn from their mistakes and improve
continuously. Thus, artificial neural networks attempt to solve complicated problems, like
summarizing documents or recognizing faces, with greater accuracy.

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how
neurons might work. In order to describe how neurons in the brain might work, they modeled a
simple neural network using electrical circuits.

In 1949, Donald Hebb took the idea further in his book, The Organization of Behaviour, which pointed out that neural pathways are strengthened each time they are used.

As computers became more advanced in the 1950s, it was finally possible to simulate a hypothetical neural network.

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It builds directly on the McCulloch-Pitts neuron of 1943 and was introduced by Frank Rosenblatt in the late 1950s.

In 1950, undergraduate students at Harvard, Marvin Minsky and Dean Edmonds, built the first neural network computer. It was called SNARC and was made of vacuum tubes.

Two major concepts that are precursors to Neural Networks are:

Threshold Logic - Converting continuous input to discrete output

Hebbian Learning - A model of learning based on neural plasticity, proposed by Donald Hebb in his book "The Organization of Behaviour", often summarized by the phrase: "Cells that fire together, wire together."

McCulloch-Pitts neuron model is a simplified model of how the brain works. It consists of inputs that
are either on or off, and each input has a weight. If the sum of the weighted inputs exceeds a certain
threshold, the neuron will "fire" and produce an output. It's kind of like a light switch that turns on
when enough pressure is applied.
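
The following is a minimal sketch of a McCulloch-Pitts style threshold unit in Python. The specific weights and threshold shown (configured here as a 2-input AND gate) are illustrative assumptions, not values from the original 1943 paper.

def mcculloch_pitts(inputs, weights, threshold):
    # Weighted sum of binary (0/1) inputs
    total = sum(w * x for w, x in zip(weights, inputs))
    # The neuron "fires" (outputs 1) only if the sum reaches the threshold
    return 1 if total >= threshold else 0

# Example: a 2-input AND gate (both inputs must be on for the unit to fire)
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", mcculloch_pitts([a, b], weights=[1, 1], threshold=2))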
Explain the Evolution of Artificial Neural Network ?

Artificial Neural Networks (ANNs) are computing systems inspired by the structure and function of
the human brain. The idea of ANNs was first proposed in the 1940s and 50s, but it wasn't until the
1980s that they started to become more widely used.

At first, ANNs were quite simple, with just a few layers and nodes. But as researchers learned more
about how the human brain works, they began to develop more complex ANNs with multiple layers
and thousands of nodes.

In the 1990s and early 2000s, ANNs began to be used for various tasks, such as image and speech
recognition, natural language processing, and even playing games like chess and Go.

Today, ANNs are an integral part of many technologies we use, from virtual assistants like Siri and
Alexa to self-driving cars. Researchers continue to work on improving ANNs and finding new ways to
apply them in various fields.

Explain Mark 1 Perceptron model of McCulloch ?

The Mark I Perceptron takes its name from the perceptron hardware Frank Rosenblatt built in 1958, which implemented the artificial neuron model described by Warren McCulloch and Walter Pitts in 1943. It is a mathematical model of a neural network, which describes how signals are processed and transmitted within the brain.

The model consists of a network of artificial neurons, which are connected to each other in a specific
way. Each neuron receives input from other neurons, and then produces an output based on that
input.

The basic idea behind the Mark I Perceptron model is that it can be used to simulate the way that humans perceive patterns in the world around us. For example, the model can be used to recognize visual patterns in images, or to recognize speech patterns in audio recordings.

Overall, the Mark I Perceptron model was an important step forward in the development of artificial intelligence and neural networks.
Explain in brief Multilayer perceptron MLP invented by Minsky and Papert ?

A multilayer perceptron (MLP) is a type of artificial neural network that consists of one or more
layers of interconnected nodes, also known as neurons. These neurons process information by
transmitting signals from one layer to the next, with each layer performing its own unique
computations. The MLP was invented by Marvin Minsky, founder of the MIT AI Lab, and Seymour Papert, a director of the lab, in the 1960s as a way to model human cognition. It is a flexible and
powerful tool for many different types of machine learning tasks, including classification, regression,
and prediction.

In 1969, Minsky and Papert wrote the book Perceptrons, which showed that a simple form of neural network, the perceptron, could learn anything it could represent. However, it could represent very little.

Two main issues were found:

• A two-input perceptron could not be trained to recognize when its two inputs were different (XOR); the sketch after this list shows how adding a hidden layer overcomes this.

• Computers at that time could not handle the long run times needed for large neural networks.
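
As a minimal illustration of the XOR limitation and its fix, the sketch below hand-wires a tiny two-layer network in Python with NumPy. The specific weights and thresholds are illustrative assumptions, not learned values: one hidden unit acts as OR, the other as AND, and the output unit combines them into XOR, which no single-layer perceptron can compute.

import numpy as np

def step(x):
    # Threshold activation: 1 if the weighted sum is positive, else 0
    return (x > 0).astype(int)

# Hidden layer: unit 0 acts as OR(a, b), unit 1 acts as AND(a, b)
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])   # OR fires above 0.5, AND above 1.5

# Output unit: OR minus AND gives XOR
W_out = np.array([1.0, -1.0])
b_out = -0.5

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array([a, b])
    h = step(W_hidden @ x + b_hidden)   # hidden layer
    y = step(W_out @ h + b_out)         # output layer
    print(a, b, "->", int(y))           # prints the XOR of a and b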

Explain Hopfield’s Energy approach 1982 ?

Hopfield's energy approach is a mathematical model that explains how a system made of
interconnected nodes, such as a neural network, can settle into a stable pattern. This model uses the
concept of energy to describe the stability of such a pattern. The energy level of the system
decreases as it moves towards a stable state, much like a ball rolling down a hill. Hopfield's approach
is often used in pattern recognition and optimization problems.
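
A minimal Python sketch of the idea, assuming a small network of +1/-1 units, the standard Hopfield energy function E = -1/2 * s^T W s, and one pattern stored in the weights with a Hebbian outer-product rule (the pattern and network size here are illustrative):

import numpy as np

def energy(W, s):
    # Hopfield energy: lower energy means a more stable pattern
    return -0.5 * s @ W @ s

def update(W, s):
    # Asynchronous update: each unit takes the sign of its weighted input
    for i in range(len(s)):
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern with a Hebbian (outer-product) rule
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                  # no self-connections

noisy = np.array([1, 1, 1, -1])         # corrupted version of the stored pattern
print("energy before:", energy(W, noisy))
recalled = update(W, noisy.copy())
print("recalled:", recalled, "energy after:", energy(W, recalled))

Running this, the energy drops as the noisy state settles back into the stored pattern, which is the "ball rolling down a hill" behaviour described above.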

Explain in brief Biological Neural network and how it works compared to Artificial Neural Network ?
A biological neural network is a network of neurons that are interconnected in the brain. These
neurons receive and send signals to each other through electrochemical impulses. The neurons are
linked together through synapses, which act as the connectors between the neurons. The neurons
are activated when the signals they receive reach a certain threshold, which then triggers the neuron
to send an impulse to other neurons.

On the other hand, an artificial neural network is a computational model that mimics the structure
and function of biological neural networks. It consists of multiple layers of artificial neurons, which
are connected to each other, and each neuron has a set of inputs, weights, and biases that determine
the output. The neural network learns from a set of training data, adjusting the weights and biases to
minimize the error in the output.

Overall, biological neural networks and artificial neural networks function in broadly similar ways, but artificial neural networks are much simpler, engineered systems designed to carry out specific computational tasks quickly on modern hardware.
Explain in brief what is Artificial Neural Network (ANN) and how it works ?

An artificial neural network (ANN) is a type of machine learning algorithm designed to simulate the
way the human brain works. It is made up of interconnected nodes, called neurons, which process
and transmit information.

In a basic neural network, there are three types of neurons: input neurons, hidden neurons, and
output neurons. Input neurons receive data from the outside world, and send it to the hidden
neurons. The hidden neurons then process this information and pass it on to the output neurons,
which produce the final result.

During training, the neural network adjusts the strength of the connections between neurons, based
on the input it receives and the desired output. This allows the network to learn and improve over
time, making it better at performing the task it was designed for.
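
A minimal sketch of this weight-adjustment idea, assuming a single sigmoid neuron trained by gradient descent to reproduce the AND function; the learning rate, epoch count, and task are illustrative choices, not part of the text above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)      # desired outputs (AND)

rng = np.random.default_rng(0)
w = rng.normal(size=2)                       # connection strengths
b = 0.0
lr = 0.5                                     # learning rate (assumed value)

for epoch in range(5000):
    out = sigmoid(X @ w + b)                 # forward pass
    error = out - y                          # difference from desired output
    w -= lr * X.T @ error / len(X)           # adjust the connection strengths
    b -= lr * error.mean()

print(np.round(sigmoid(X @ w + b), 2))       # approaches [0, 0, 0, 1]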

ANNs are used in a wide variety of applications, including image recognition, speech recognition, and
natural language processing.

Explain in brief Biological Neural network and how it works ?

A biological neural network is a network of neurons that are interconnected in the brain. These
neurons receive and send signals to each other through electrochemical impulses. The neurons are
linked together through synapses, which act as the connectors between the neurons. The neurons
are activated when the signals they receive reach a certain threshold, which then triggers the neuron
to send an impulse to other neurons.

Working of a Biological Neuron

A typical neuron consists of the following four parts, with the help of which we can explain its working:

Dendrites − They are tree-like branches, responsible for receiving information from the other neurons the neuron is connected to. In another sense, we can say that they are like the ears of the neuron.

Soma − It is the cell body of the neuron and is responsible for processing the information received from the dendrites.

Axon − It is just like a cable through which the neuron sends information.

Synapses − The connections between the axon of one neuron and other neurons are called synapses.
Explain in brief XOR Truth Table ?

The exclusive-or (XOR) function could not be computed by a single-layer network; this limitation was not overcome in practice until multi-layer networks could be trained with the backpropagation algorithm, created by Werbos in 1975.

An XOR truth table is a table that shows the output of an XOR (exclusive OR) gate for all possible
combinations of inputs. An XOR gate takes two binary inputs and produces a binary output. The
output of an XOR gate is 1 if the two inputs are different, and 0 if they are the same.

For example, if the inputs are 1 and 0, the output is 1, but if the inputs are both 1 or both 0, the
output is 0. The XOR truth table helps to understand the behavior of the XOR gate and is useful in
digital circuit design and logic operations.
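
For reference, the full XOR truth table (inputs A and B, output A XOR B):

A  B  |  A XOR B
0  0  |  0
0  1  |  1
1  0  |  1
1  1  |  0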
Explain in brief Rumelhart and McClelland's detailed idea of using Connectionism in computers to simulate neural processes ?

Parallel distributed processing, or connectionism:

Rumelhart and McClelland (1986) gave a detailed account of using connectionism in computers to simulate neural processes.

Rumelhart and McClelland proposed using Connectionism, a type of artificial neural network, to
simulate the way the brain processes information. They suggested that instead of relying on pre-
programmed rules, computers could learn and adapt by making connections between different
pieces of information. This means that the computer would be able to recognize patterns and
relationships in the data it is given, and use that information to make predictions or produce outputs.

In essence, Connectionism allows computers to "think" like a human brain, making it a powerful tool
for artificial intelligence research.

Explain history of neural network in 1990 ?

By the 1990s, neural networks were back, this time truly catching the imagination of the world and finally coming up to par with, if not overtaking, expectations.

Yet again, today we are asking the same questions of AI and projecting onto it our all too human fears, and yet again we are farther than we think from bowing in deference to our digital overlords.

Explain Deep Learning ?

Deep learning is a type of artificial intelligence that involves training neural networks to learn and
recognize patterns in data. In simpler terms, it's a way for computers to learn and make decisions on
their own by analyzing large amounts of data. It's used in a lot of different applications, like image
and speech recognition, natural language processing, and even self-driving cars.
• In the last 10 years, the best-performing AI examples, such as speech recognizers on phones or the latest automatic translator from Google, were created using deep learning.
• Since 2010, this remarkably successful return has been fueled mainly by powerful graphics chips (GPUs).
• GPUs pack thousands of cores (that do simple calculations).
• Researchers quickly realized that GPU architecture is very similar to a neural net.

Deep Learning history

• 1960s: 1-layer networks

• 1980s: 2-3 layer networks

• Today: 10-50+ layer networks

Explain Machine learning ?

Machine learning is a subset of artificial intelligence that involves training machines to learn from
data. The idea is to enable machines to automatically improve their performance on a specific task
over time, without being explicitly programmed to do so.

For example, let's say we have a dataset of cat and dog images, and we want to build a machine
learning model that can classify new images as either a cat or a dog. We could use a type of machine
learning algorithm called a neural network, which can automatically learn features from the images
and make predictions based on those features.

We would train the model on a portion of the dataset (the "training set"), and then evaluate its
performance on another portion (the "test set"). The model would keep adjusting its parameters
until it can accurately classify new images.

Once the model is trained, we can use it to classify new images it hasn't seen before. And if the
model makes mistakes, we can retrain it on more data to improve its performance. That's the basic
idea behind machine learning!
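
A minimal sketch of this train/evaluate workflow using scikit-learn; the small built-in digits dataset stands in for the cat and dog images (an assumed substitution so the example is self-contained and runnable):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load a small labelled image dataset (8x8 handwritten digits)
X, y = load_digits(return_X_y=True)

# Hold out a portion of the data as the "test set"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a small neural network on the training set
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Evaluate on images the model has never seen
print("test accuracy:", model.score(X_test, y_test))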
Classification of Machine learning or forms of learning

1. Supervised Learning

Supervised learning is the type of machine learning in which machines are trained using well "labelled" training data, and on the basis of that data, machines predict the output. Labelled data means input data that is already tagged with the desired output.

For example: the machine has a "teacher" who guides it by providing sample inputs along with the desired outputs. The machine then learns the mapping from inputs to outputs. This is similar to how we teach very young children with picture books.

2. Unsupervised Learning

This is the most important and most difficult type of learning and would be better titled Predictive
Learning.

In unsupervised learning, the computer system identifies patterns and relationships in the data on its own, without being given any specific instructions or labelled examples.

For example, let's say we want to group customers based on their purchase history, but we don't
know how many groups there are or what the groups should be called. Unsupervised learning
algorithms could automatically identify patterns in the data and group customers based on
similarities in their purchasing behavior. This can help businesses identify target markets, personalize
marketing strategies, and improve customer retention.
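
A minimal sketch of this kind of customer grouping, assuming scikit-learn's k-means algorithm and a made-up two-feature purchase history (orders per year and average basket value):

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical purchase-history features: [orders per year, average basket value]
customers = np.array([
    [2, 15], [3, 20], [1, 10],       # occasional, low-spend shoppers
    [25, 80], [30, 95], [28, 70],    # frequent, high-spend shoppers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)                # the segment each customer was placed in
print(kmeans.cluster_centers_)       # the "centre" of each segment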

3. Reinforcement Learning

Reinforcement learning is a type of machine learning that focuses on how an agent interacts with an
environment to learn how to make decisions that maximize rewards.

Here's an example: Imagine you're teaching a dog to perform a trick. You would give the dog a treat
when they perform the trick correctly and withhold the treat when they perform the trick incorrectly.
Over time, the dog learns which actions lead to rewards and which actions don't.

In reinforcement learning, the agent is like the dog and the environment is like the trick. The agent
takes actions in the environment, and receives feedback in the form of rewards or penalties. Through
trial and error, the agent learns which actions lead to the highest rewards and modifies its behavior
accordingly.
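
A minimal sketch of this trial-and-error loop, assuming a toy agent with two possible actions and a simple running-average value update; the action names, rates, and reward rule are all illustrative, not from the text above.

import random

actions = ["sit", "roll"]
Q = {a: 0.0 for a in actions}          # the agent's value estimate for each action
alpha, epsilon = 0.1, 0.2              # learning rate and exploration rate (assumed)

def reward(action):
    return 1.0 if action == "sit" else 0.0   # only "sit" earns a treat

random.seed(0)
for _ in range(500):
    # Explore occasionally, otherwise exploit the best-known action
    a = random.choice(actions) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (reward(a) - Q[a])       # nudge the estimate toward the reward received

print(Q)    # Q["sit"] ends up near 1.0, Q["roll"] stays near 0.0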
Explain Inductive Learning ?

Inductive learning is a type of machine learning that uses data to make predictions or to generate a set of classification rules, typically in "IF-THEN" form.

When examples of a function's inputs and outputs are fed into the AI system, inductive learning attempts to learn the function so that it can be applied to new data.

It is based on the idea that if a set of data points have certain characteristics, then future data points
will also have those characteristics.

There are basically two methods for knowledge extraction: first from domain experts, and then with machine learning. For very large amounts of data, domain experts are not very practical or reliable, so we move towards the machine learning approach. One option is to replicate the expert's logic in the form of hand-coded rules or algorithms, but this work is tedious, time-consuming, and expensive. So we move towards inductive algorithms, which generate the strategy for performing a task on their own and do not need to be instructed separately at each step; a small sketch of rule induction follows this paragraph.
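
A minimal sketch of rule induction using scikit-learn's decision tree on its built-in iris dataset (an assumed stand-in for real business data); export_text prints the learned tree as nested IF-THEN style conditions:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Learn classification rules directly from labelled examples
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the induced rules in a readable IF-THEN style
print(export_text(tree, feature_names=load_iris().feature_names))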

Some practical examples of inductive learning:

• Credit risk assessment.

• Disease diagnosis.

• Face recognition.

• Automatic steering (autonomous driving).

Inductive Learning may be helpful in the following four situations:

1. Problems in which no human expertise is available. People cannot write a program to solve a
problem if they do not know the answer. These are areas ripe for exploration.

2. Humans can perform the task, but no one can describe how they do it. There are situations in which humans can do things that computers cannot or do not do well. Riding a bike or driving a car are two examples.

3. Problems where the desired function is frequently changing. Humans could describe it and write a
program to solve it, but the problem changes too frequently. It is not economical. The stock market is
one example.

4. Problems where each user requires a unique function. Writing a custom program for each user is
not cost-effective. Consider Netflix or Amazon recommendations for movies or books.

Explain Computational Learning Theory ?

Computational learning theory is a subfield of artificial intelligence (AI) that is concerned with
mathematical methods and deals with the design and analysis of machine learning algorithms.
Computational learning theory uses formal methods to study learning tasks and learning algorithms.

CLT can help managers understand the limits and possibilities of these algorithms, enabling them to
make data-driven decisions and predictions.

By understanding the trade-offs between the complexity of the model and the amount of data required to achieve a certain level of accuracy, managers can build more efficient systems.

Computational learning theory can be considered an extension of statistical learning theory, or SLT for short, that makes use of formal methods for the purpose of quantifying learning algorithms.

Computational Learning Theory (CoLT): Formal study of learning tasks.

Statistical Learning Theory (SLT): Formal study of learning algorithms.

There are two kinds of time complexity results:

Positive results - Showing that a certain class of functions is learnable in polynomial time.

Negative results - Showing that certain classes cannot be learned in polynomial time.

Explain Statistical learning theory ?

Statistical learning theory is a framework in artificial intelligence that deals with designing algorithms
that can learn patterns and relationships in data. In simple words, it is a way to train computers to
make predictions or decisions based on existing data.

For example, let's say we have a dataset of customer transactions at a store which includes
information such as age, gender, location, and the items purchased. Using statistical learning theory,
we can build a machine learning model that can analyze this data and make predictions about which
products are most likely to be purchased by a customer based on their age, gender, and location.

The model learns from the existing data and can improve over time by learning from new data as it
becomes available. This can be applied in various industries, such as healthcare, finance, and
marketing to help make predictions and decisions more accurately and efficiently.

Explain Probably Approximately Correct (PAC) learning theory developed by Leslie Valiant ?

The Probably Approximately Correct (PAC) learning theory, developed by Leslie Valiant, is a machine
learning concept that focuses on creating algorithms that can make accurate predictions based on
incomplete or uncertain data.

PAC learning seeks to quantify the difficulty of a learning task, and might be considered a subfield of computational learning theory.

In simple terms, it means that even if the data we have is not 100% accurate, we can still create
algorithms that will give us a close enough approximation of the correct answer.
PAC learning theory helps us to understand the relationship between the amount of data we have
and the accuracy of the predictions we can make. By using this theory, we can create algorithms that
can learn from examples and adapt to new situations, even with limited data.
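
A small illustration of that relationship, assuming the classic PAC bound for a consistent learner over a finite hypothesis class, m >= (1/epsilon) * (ln|H| + ln(1/delta)) training examples:

import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    # Number of examples sufficient to be "probably (1 - delta)
    # approximately (error <= epsilon) correct" for a finite hypothesis class
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. one million candidate hypotheses, 5% error tolerance, 95% confidence
print(pac_sample_bound(10**6, epsilon=0.05, delta=0.05))   # roughly a few hundred examples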
