
Subject: Artificial Neural Network
Prof. Trupti Farande (Mo. No. 8208706107)

Course Contents

Unit I: Introduction to ANN

1.1 Introduction to ANN

This Artificial Neural Network tutorial provides basic and advanced concepts of ANNs. It is developed for beginners as well as professionals.

The term "artificial neural network" refers to a biologically inspired sub-field of artificial intelligence modeled after the brain. An artificial neural network is a computational network based on the biological neural networks that make up the structure of the human brain. Just as the human brain has neurons interconnected with one another, artificial neural networks have neurons that are linked to one another in the various layers of the network. These neurons are known as nodes.

This tutorial covers all aspects of artificial neural networks. We will discuss ANNs, adaptive resonance theory, the Kohonen self-organizing map, building blocks, unsupervised learning, genetic algorithms, and more.

What is Artificial Neural Network?


An artificial neural network is an attempt, in the field of artificial intelligence, to mimic the network of neurons that makes up the human brain, so that computers have an option to understand things and make decisions in a human-like manner. An artificial neural network is designed by programming computers to behave like interconnected brain cells.

There are roughly 86 billion neurons in the human brain, and each neuron is connected to somewhere between 1,000 and 100,000 others. In the human brain, data is stored in a distributed manner, and we can retrieve more than one piece of this data from memory in parallel when necessary. We can say that the human brain is made up of incredibly powerful parallel processors.

We can understand the artificial neural network with an example. Consider a digital logic gate that takes an input and gives an output: an "OR" gate takes two inputs. If one or both inputs are "On," the output is "On"; if both inputs are "Off," the output is "Off." Here the output is a fixed function of the input. Our brain does not perform the task this way: the relationship between outputs and inputs keeps changing, because the neurons in our brain are "learning."

1.2 History of Neural Network

A Brief History of ANN

The history of ANN can be divided into the following three eras −

ANN during 1940s to 1960s

Some key developments of this era are as follows −

1943 − The concept of neural networks is generally taken to have started with the work of physiologist Warren McCulloch and mathematician Walter Pitts, who in 1943 modeled a simple neural network using electrical circuits in order to describe how neurons in the brain might work.

1949 − Donald Hebb's book, The Organization of Behavior, proposed that the repeated activation of one neuron by another strengthens the connection between them each time they are used.

1956 − An associative memory network was introduced by Taylor.

1958 − A learning method for McCulloch and Pitts neuron model named Perceptron was
invented by Rosenblatt.
1960 − Bernard Widrow and Marcian Hoff developed models called "ADALINE" and "MADALINE."

ANN during 1960s to 1980s

Some key developments of this era are as follows −

1961 − Rosenblatt proposed a "backpropagation" scheme for multilayer networks, though the attempt was unsuccessful.

1964 − Taylor constructed a winner-take-all circuit with inhibitions among output units.

1969 − Minsky and Papert published Perceptrons, demonstrating the limitations of single-layer perceptrons (for example, their inability to compute XOR).

1971 − Kohonen developed Associative memories.

1976 − Stephen Grossberg and Gail Carpenter developed Adaptive resonance theory.

ANN from 1980s till Present

Some key developments of this era are as follows −

1982 − The major development was Hopfield’s Energy approach.

1985 − Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.

1986 − Rumelhart, Hinton, and Williams introduced Generalised Delta Rule.

1988 − Kosko developed the Bidirectional Associative Memory (BAM) and also introduced the concept of fuzzy logic in ANN.

The historical review shows that significant progress has been made in this field. Neural
network based chips are emerging and applications to complex problems are being
developed. Surely, today is a period of transition for neural network technology.
1.3 Structure and working of Biological Neural Network

Humans have made several attempts to mimic biological systems, and one of them is the artificial neural network, inspired by the biological neural networks in living organisms. However, the artificial and biological versions are very different in several ways. For example, birds inspired humans to create airplanes, and four-legged animals inspired us to develop cars.

The artificial counterparts can be more powerful at their specific tasks and make our lives better. Perceptrons, the predecessors of artificial neurons, were created to mimic certain parts of a biological neuron, such as the dendrites, axon, and cell body, using mathematical models, electronics, and the limited information we have about biological neural networks.

Working of a Biological Neuron

A typical neuron consists of the following four parts, with the help of which we can explain its working −

Dendrites − They are tree-like branches responsible for receiving information from the other neurons the neuron is connected to. In a sense, they are the ears of the neuron.

Soma − It is the cell body of the neuron and is responsible for processing the information received from the dendrites.

Axon − It is like a cable through which the neuron sends information.

Synapses − These are the connections between the axon and the dendrites of other neurons.

1.4 Topology of neural network architecture

Processing of ANN depends upon the following three building blocks −

 Network Topology
 Adjustments of Weights or Learning
 Activation Functions

We will now discuss these three building blocks of ANN in detail.

Network Topology

A network topology is the arrangement of a network along with its nodes and connecting
lines. According to the topology, ANN can be classified as the following kinds −

Feedforward Network

It is a non-recurrent network with processing units/nodes arranged in layers, where all the nodes in a layer are connected with the nodes of the previous layer. The connections carry different weights. There is no feedback loop, meaning the signal can flow in only one direction, from input to output. It may be divided into the following two types −

Single layer feedforward network − A feedforward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.

Multilayer feedforward network − A feedforward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, these are called hidden layers.
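A forward pass through such a multilayer feedforward network can be sketched in a few lines of Python. This is an illustrative sketch only: the layer sizes, weights, and biases below are arbitrary hand-picked values, not trained ones.

```python
import math

# Minimal sketch of a multilayer feedforward pass: 2 inputs -> 2 hidden -> 1 output.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node: activation of (weighted sum of all inputs + bias).
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden_w = [[0.5, -0.4], [0.3, 0.8]]   # first weighted layer (hidden)
hidden_b = [0.1, -0.2]
output_w = [[1.2, -0.7]]               # second weighted layer (output)
output_b = [0.05]

x = [1.0, 0.0]                         # input vector
h = layer(x, hidden_w, hidden_b)       # hidden layer activations
y = layer(h, output_w, output_b)       # output layer activations
print(y)
```

The signal flows strictly forward, input to hidden to output, with no loops, which is exactly the feedforward property described above.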

Feedback Network

As the name suggests, a feedback network has feedback paths, which means the signal can
flow in both directions using loops. This makes it a non-linear dynamic system, which
changes continuously until it reaches a state of equilibrium. It may be divided into the
following types −

Recurrent networks − They are feedback networks with closed loops. Following are the two
types of recurrent networks.

Fully recurrent network − It is the simplest neural network architecture because all nodes
are connected to all other nodes and each node works as both input and output.
Jordan network − It is a closed-loop network in which the output goes back to the input as feedback.

Adjustments of Weights or Learning

Learning, in artificial neural network, is the method of modifying the weights of connections
between the neurons of a specified network. Learning in ANN can be classified into three
categories namely supervised learning, unsupervised learning, and reinforcement learning.

Supervised Learning

As the name suggests, this type of learning is done under the supervision of a teacher; the learning process depends on a teacher signal (the desired output).

During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired/target output vector. An error signal is generated if there is a difference between the actual and desired output vectors. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.

Unsupervised Learning

As the name suggests, this type of learning is done without the supervision of a teacher; the learning process proceeds without any teacher signal.

During the training of ANN under unsupervised learning, the input vectors of similar type are
combined to form clusters. When a new input pattern is applied, then the neural network
gives an output response indicating the class to which the input pattern belongs.

There is no feedback from the environment about what the desired output should be or whether it is correct. Hence, in this type of learning, the network itself must discover the patterns and features in the input data, and the relationship between the input data and the output.

Reinforcement Learning

As the name suggests, this type of learning is used to reinforce or strengthen the network based on critic information. The learning process is similar to supervised learning, but we may have much less information.
During the training of network under reinforcement learning, the network receives some
feedback from the environment. This makes it somewhat similar to supervised learning.
However, the feedback obtained here is evaluative not instructive, which means there is no
teacher as in supervised learning. After receiving the feedback, the network performs
adjustments of the weights to get better critic information in future.

1.5 Model of Artificial Neural Network

The general model of an ANN consists of inputs multiplied by connection weights, a summation unit that computes the net input, and an activation function applied to the net input to produce the output.
Features of Artificial Neural Networks

(1) Artificial neural networks are extremely powerful computational devices (universal computers).

(2) ANNs are modeled on the basis of current brain theories, in which information is represented by weights.

(3) ANNs have massive parallelism, which makes them very efficient.

(4) They can learn and generalize from training data, so there is no need for enormous feats of programming.

(5) Storage is fault tolerant, i.e., some portions of the neural net can be removed with only a small degradation in the quality of stored data.

(6) They are particularly fault tolerant, which is equivalent to the "graceful degradation" found in biological systems.

(7) Data are naturally stored in the form of associative memory, in contrast with conventional memory, in which data are recalled by specifying the address of that data.

(8) They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.

(9) In practice, they can do anything a symbolic/logic system can do, and more.

(10) Neural networks can interpolate and extrapolate from their stored information. They can also be trained: special training teaches the net to look for significant features or relationships in the data.
 

Characteristics of ANNs

1. It Is Central to a Machine Learning Subset Called Deep Learning

Machine learning equips computer systems with the capability to learn from training datasets. Deep learning is one of its subsets. It has advantages over other machine learning models because it uses artificial neural networks that can process and learn from huge amounts of data, surpassing the capabilities of traditional models.

2. It Is an Algorithm Modeled After Biological Neural Systems

An ANN is a computational model or an algorithm modeled after the human brain. Take note
that an algorithm is a set of rules or instructions followed in calculations, computer
applications, or other problem-solving operations. A particular ANN has a set of rules that
simulate the electrical activity within a biological neural system.

3. It Can Be a Hardware-Based Network or a Software-Based Network

There are two types of artificial neural networks based on their structure and properties:
physical hardware-based and software-based neural networks. Physical neural networks
depend on the hardware components used to emulate neurons. Software-based neural
networks are algorithms or digitized computer models written in computer language.

4. It Has Three Major Components Connected Via Artificial Neurons

Note that the simulated biological system is a multi-layer network architecture with three
major components called main layers: input layer, hidden layers, and output layer. These
main layers are connected by network nodes called artificial neurons or neurodes. Each
neuron can process input and forward output to other neurons in the network.

5. It Has Multiple Hidden Layers in its Network Architecture

The defining characteristic of an artificial neural network is its multiple hidden layers in its
network architecture. Note that traditional or shallow networks have one to two hidden
layers. An ANN has hidden layers that can range from several to hundreds. Multiple layers
can perform much more complex processing and representation of data.

6. It Has Expanded to Different Types with Different Characteristics

There are also different types of artificial neural networks, defined by their unique characteristics and applications. These include convolutional neural networks (CNNs), which are useful for computer vision, and recurrent neural networks (RNNs), which are well suited to natural language processing and other sequence tasks.

Types of Artificial Neural Network

There are various types of artificial neural networks, which perform tasks in ways modeled on the neurons and network functions of the human brain. Most artificial neural networks bear some similarity to their more complex biological counterparts and are very effective at their intended tasks, such as segmentation or classification.

Feedback ANN

In this type of ANN, the output is fed back into the network to achieve the best possible results internally. As per the University of Massachusetts Lowell Centre for Atmospheric Research, feedback networks feed information back into themselves and are well suited to solving optimization problems. Internal system error corrections utilize feedback ANNs.

Feed-Forward ANN

A feed-forward network is a basic neural network comprising an input layer, an output layer, and at least one layer of neurons. By evaluating its output against its input, the strength of the network can be observed in the group behavior of the connected neurons, and the output is decided. The primary advantage of this network is that it learns to evaluate and recognize input patterns.

Advantages of Artificial Neural Network

Parallel processing capability:

Because computation is distributed across many nodes, artificial neural networks can perform more than one task simultaneously.
Storing data on the entire network:

Unlike in traditional programming, where data is stored in a database, data in an ANN is stored across the whole network. The disappearance of a few pieces of data in one place does not prevent the network from working.

Capability to work with incomplete knowledge:

After training, an ANN may produce output even with incomplete input data. The loss of performance depends on the importance of the missing data.

Having a memory distribution:

For an ANN to be able to adapt, it is important to choose representative examples and to train the network on them according to the desired output. The success of the network is directly proportional to the chosen instances; if the problem cannot be shown to the network in all its aspects, the network can produce false output.

Having fault tolerance:

Corruption of one or more cells of an ANN does not prevent it from generating output, and this feature makes the network fault tolerant.

Disadvantages of Artificial Neural Network:

Assurance of proper network structure:

There is no particular guideline for determining the structure of an artificial neural network. An appropriate network structure is arrived at through experience and trial and error.

Unrecognized behavior of the network:

This is the most significant issue with ANNs. When an ANN produces a solution, it provides no insight into why or how, which decreases trust in the network.

Hardware dependence:

Artificial neural networks need processors with parallel processing power, in line with their structure, making them dependent on suitable hardware.

Difficulty of showing the issue to the network:


ANNs can work only with numerical data. Problems must be converted into numerical values before being introduced to the ANN. The representation chosen here directly impacts the performance of the network, and choosing it well depends on the user's abilities.

1.6 McCulloch-Pitts Neuron

The McCulloch-Pitts neural model, the earliest ANN model, has only two types of inputs — excitatory and inhibitory. Excitatory inputs have weights of positive magnitude and inhibitory inputs have weights of negative magnitude. The inputs of the McCulloch-Pitts neuron can be either 0 or 1, and it has a threshold function as its activation function: the output yout is 1 if the net input ysum is greater than or equal to a given threshold value, else 0.

Simple McCulloch-Pitts neurons can be used to design logical operations. For that purpose, the connection weights need to be decided correctly, along with the threshold value of the activation function.

The McCulloch-Pitts network is considered to be the first neural network. Its neurons are connected by directed, weighted paths. A McCulloch-Pitts neuron allows only binary activation (1 ON or 0 OFF): it either fires with activation 1 or does not fire, with activation 0. If w > 0, the connection is said to be excitatory; otherwise it is inhibitory. Excitatory connections have positive weights and inhibitory connections have negative weights. Each neuron has a fixed firing threshold: if the net input to the neuron reaches the threshold, it fires. Programs (for example in MATLAB) can be written to generate the output of various logical functions using the McCulloch-Pitts neuron algorithm.
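The text mentions MATLAB programs for logical functions; the same idea can be sketched in Python. In this illustrative sketch, a McCulloch-Pitts unit with hand-picked integer weights and thresholds realizes AND, and ANDNOT using an inhibitory (negative-weight) input:

```python
# McCulloch-Pitts unit: binary inputs, fixed integer weights, hard threshold.
# Weights and thresholds below are hand-picked per gate, not learned.
def mp_neuron(inputs, weights, threshold):
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

def AND(x1, x2):     # both inputs excitatory; fires only when both are 1
    return mp_neuron([x1, x2], [1, 1], threshold=2)

def ANDNOT(x1, x2):  # x2 is inhibitory (negative weight): fires for x1=1, x2=0
    return mp_neuron([x1, x2], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), ANDNOT(a, b))
```

The inhibitory input illustrates the excitatory/inhibitory distinction described above: a single active inhibitory input is enough to keep the ANDNOT unit below its threshold.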

1.7 Activation Functions

An activation function may be defined as a transformation applied over a neuron's net input to obtain its output. In an ANN, we apply activation functions over the input to get the desired output. The following are some activation functions of interest.

An activation function decides whether a neuron should be activated or not. That is, it decides whether the neuron's input to the network is important in the process of prediction, using simple mathematical operations.

The role of the activation function is to derive the output from a set of input values fed to a node (or a layer).

Linear Activation Function

The linear activation function, also known as "no activation" or the "identity function" (the input is simply multiplied by 1.0), is one where the activation is proportional to the input. The function does nothing to the weighted sum of the input; it simply returns the value it was given.

However, a linear activation function has two major problems:

It is not possible to use backpropagation effectively, as the derivative of the function is a constant and has no relation to the input x.

All layers of the neural network will collapse into one if a linear activation function is used. No matter the number of layers in the neural network, the last layer will still be a linear function of the first layer, so a linear activation function effectively turns the neural network into a single layer.
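The layer-collapse claim can be checked numerically. In this illustrative sketch (the 2x2 weight matrices are chosen arbitrarily), applying two linear layers W1 then W2 gives exactly the same result as the single layer W2·W1:

```python
# Demonstration that two stacked linear layers equal one linear layer.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[2.0, -1.0], [0.5, 3.0]]   # first linear layer
W2 = [[1.0, 4.0], [-2.0, 0.5]]   # second linear layer
x = [1.0, 2.0]

two_layers = matvec(W2, matvec(W1, x))   # layer-by-layer application
collapsed = matvec(matmul(W2, W1), x)    # single equivalent layer W2·W1
print(two_layers, collapsed)
```

Both computations produce the same vector, which is why depth buys nothing without a non-linear activation between layers.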

Non-Linear Activation Functions

The linear activation function shown above is simply a linear regression model. 

Because of its limited power, this does not allow the model to create complex mappings
between the network’s inputs and outputs. 

Non-linear activation functions solve the following limitations of linear activation functions:

They allow backpropagation because now the derivative function would be related to the
input, and it’s possible to go back and understand which weights in the input neurons can
provide a better prediction.

They allow the stacking of multiple layers of neurons as the output would now be a non-
linear combination of input passed through multiple layers. Any output can be represented as
a functional computation in a neural network.

Now, let's look at some widely used non-linear activation functions and their characteristics.

Non-Linear Neural Networks Activation Functions

Sigmoid / Logistic Activation Function 

This function takes any real value as input and outputs values in the range 0 to 1. The larger (more positive) the input, the closer the output will be to 1.0; the smaller (more negative) the input, the closer the output will be to 0.0.

Binary sigmoidal function − This activation function maps the input to a value between 0 and 1. It is positive in nature and always bounded: its output cannot be less than 0 or more than 1. It is also strictly increasing: the greater the input, the higher the output.

Bipolar sigmoidal function − This activation function maps the input to a value between -1 and 1. Its output can be positive or negative, and it is always bounded between -1 and 1. Like the binary sigmoid, it is strictly increasing.

Learning and Adaptation

As stated earlier, ANNs are inspired by the way the biological nervous system, i.e. the human brain, works. The most impressive characteristic of the human brain is its ability to learn, and ANNs aim to acquire the same capability.

What Is Learning in ANN?

Basically, learning means adapting to change as and when the environment changes. An ANN is a complex system, or more precisely a complex adaptive system, which can change its internal structure based on the information passing through it.

Why Is It important?

Being a complex adaptive system, learning in an ANN implies that a processing unit is capable of changing its input/output behavior in response to changes in the environment. Once a particular network is constructed, its activation function and its input/output vectors are fixed; therefore, to change the input/output behavior, we need to adjust the weights.

Classification

Classification may be defined as the process of learning to distinguish sample data into different classes by finding common features between samples of the same class. For example, to train an ANN we have some training samples with unique features, and to test it we have some testing samples with other unique features. Classification is an example of supervised learning.
1.8 Neural Network Learning Rules

We know that, during ANN learning, to change the input/output behavior, we need to adjust
the weights. Hence, a method is required with the help of which the weights can be modified.
These methods are called Learning rules, which are simply algorithms or equations.
Following are some learning rules for the neural network −

Hebbian Learning Rule

This rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The
Organization of Behavior in 1949. It is a kind of feed-forward, unsupervised learning.

Basic Concept − This rule is based on a proposal given by Hebb, who wrote −

“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes
part in firing it, some growth process or metabolic change takes place in one or both cells
such that A’s efficiency, as one of the cells firing B, is increased.”

From the above postulate, we can conclude that the connections between two neurons might
be strengthened if the neurons fire at the same time and might weaken if they fire at different
times.

Mathematical Formulation − According to the Hebbian learning rule, the weight of a connection is increased at every time step as follows:

Δwji(t) = α xi(t) yj(t)

where α is the learning rate, xi(t) is the input value, and yj(t) is the output of the post-synaptic neuron at time step t.
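A minimal sketch of the Hebbian update Δw = α·x·y (the learning rate, pattern, and number of presentations are illustrative choices): weights grow where pre- and post-synaptic activity coincide.

```python
# One Hebbian update step per presentation: w_i <- w_i + lr * x_i * y
def hebbian_step(weights, x, y, lr=0.1):
    return [w + lr * xi * y for w, xi in zip(weights, x)]

w = [0.0, 0.0]
# Present the bipolar pattern x = [1, -1] with response y = 1 three times:
for _ in range(3):
    w = hebbian_step(w, [1, -1], 1)
print(w)  # weight toward the co-active input strengthens, the other weakens
```

Note that the plain Hebbian rule only ever strengthens co-active connections; without normalization the weights can grow without bound, which is one motivation for the later rules below.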

Perceptron Learning Rule

This rule is an error-correcting supervised learning algorithm for single-layer feedforward networks with a linear activation function, introduced by Rosenblatt.

Basic Concept − As being supervised in nature, to calculate the error, there would be a
comparison between the desired/target output and the actual output. If there is any difference
found, then a change must be made to the weights of connection.

Mathematical Formulation − To explain its mathematical formulation, suppose we have n finite input vectors x(n), each with a desired/target output vector t(n), where n = 1 to N.

The output y can be calculated, as explained earlier, on the basis of the net input y_in, with the activation function applied over that net input:

y = f(y_in) = 1 if y_in > θ, else 0

where θ is the threshold. The weights are then updated as w(new) = w(old) + α (t − y) x, where α is the learning rate, t the target output, and y the actual output.
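A minimal sketch of the perceptron rule w ← w + α(t − y)x, trained here on the linearly separable AND function (the dataset, learning rate of 1, and epoch count are illustrative choices, not from the text):

```python
# Perceptron learning rule: w <- w + lr * (t - y) * x, bias updated likewise.
# Integer weights and lr = 1 keep every update exact.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0, 0], 0, 1

for epoch in range(20):              # a few passes over the training set
    for x, t in data:
        y = predict(w, b, x)
        err = t - y                   # error signal (t - y)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print([predict(w, b, x) for x, _ in data])  # matches the AND targets
```

Because AND is linearly separable, the perceptron convergence theorem guarantees that this loop reaches a weight vector that classifies all four patterns correctly.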
Delta Learning Rule

Introduced by Bernard Widrow and Marcian Hoff, and also called the Least Mean Square (LMS) method, this rule minimizes the error over all training patterns. It is a kind of supervised learning algorithm with a continuous activation function.

Basic Concept − The basis of this rule is the gradient-descent approach. The delta rule updates the synaptic weights so as to minimize the difference between the net input to the output unit and the target value:

Δw = α (t − y_in) x

where α is the learning rate, t is the target output, and y_in is the net input.

Competitive Learning Rule

It is concerned with unsupervised training in which the output nodes try to compete with each
other to represent the input pattern. To understand this learning rule, we must understand the
competitive network which is given as follows −

Basic Concept of Competitive Network − This network is just like a single layer feedforward
network with feedback connection between outputs. The connections between outputs are
inhibitory type, shown by dotted lines, which means the competitors never support
themselves.

Basic Concept of Competitive Learning Rule − As said earlier, there will be a competition
among the output nodes. Hence, the main concept is that during training, the output unit with
the highest activation to a given input pattern, will be declared the winner. This rule is also
called Winner-takes-all because only the winning neuron is updated and the rest of the
neurons are left unchanged.

Condition on the sum of weights − Another constraint of the competitive learning rule is that the sum total of the weights into a particular output neuron is 1. For example, if we consider neuron k, then Σj wkj = 1.
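A hedged sketch of winner-take-all updating: only the output unit with the highest activation for a given input is updated, here with the common competitive update w ← w + α(x − w), which moves the winner's weight vector toward the input (all values below are illustrative):

```python
# Winner-take-all step: the unit with the highest activation (dot product
# with the input) wins, and only its weights move toward the input.
def competitive_step(weights, x, lr=0.5):
    acts = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    k = acts.index(max(acts))                   # index of the winning unit
    weights[k] = [wi + lr * (xi - wi) for wi, xi in zip(weights[k], x)]
    return k

weights = [[0.9, 0.1], [0.1, 0.9]]              # two output units
winner = competitive_step(weights, [1.0, 0.0])  # pattern closest to unit 0
print(winner, weights)
```

Unit 0 wins and its weight vector moves toward the input, while the losing unit's weights are left unchanged, exactly the "winner-takes-all" behavior described above.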

Outstar Learning Rule

This rule, introduced by Grossberg, is concerned with supervised learning because the desired
outputs are known. It is also called Grossberg learning.

Basic Concept − This rule is applied over the neurons arranged in a layer. It is specially
designed to produce a desired output d of the layer of p neurons.


Perceptron

Developed by Frank Rosenblatt using the McCulloch and Pitts model, the perceptron is the basic operational unit of artificial neural networks. It employs a supervised learning rule and is able to classify data into two classes.

Operational characteristics of the perceptron: it consists of a single neuron with an arbitrary number of inputs and adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold. It also has a bias whose weight is always 1.
Perceptron thus has the following three basic elements −

Links − It has a set of connection links, which carry weights, including a bias link that always has weight 1.

Adder − It adds the inputs after they have been multiplied by their respective weights.

Activation function − It limits the output of neuron. The most basic activation function is a
Heaviside step function that has two possible outputs. This function returns 1, if the input is
positive, and 0 for any negative input.

Training Algorithm for Multiple Output Units

The perceptron can be extended to multiple output classes by using one output neuron per class, each with its own set of adjustable weights and its own bias; every output unit is trained independently with the same perceptron learning rule.
Adaptive Linear Neuron (Adaline)

Adaline which stands for Adaptive Linear Neuron, is a network having a single linear unit. It
was developed by Widrow and Hoff in 1960. Some important points about Adaline are as
follows −

It uses a bipolar activation function.

It uses delta rule for training to minimize the Mean-Squared Error (MSE) between the actual
output and the desired/target output.

The weights and the bias are adjustable.

Architecture

The basic structure of Adaline is similar to the perceptron, with an extra feedback loop by which the actual output is compared with the desired/target output. After this comparison, on the basis of the training algorithm, the weights and bias are updated.
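As an illustrative sketch of the delta (LMS) rule Adaline uses, the linear unit below is trained on a bipolar AND problem; the learning rate, epoch count, and data encoding are arbitrary choices, not from the text:

```python
# Adaline sketch: linear net input y_in = w·x + b, delta (LMS) update
#   w <- w + lr * (t - y_in) * x   (error measured on the LINEAR output).
data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(50):
    for x, t in data:
        y_in = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear unit output
        err = t - y_in
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# Mean squared error over the four training patterns after training:
mse = sum((t - (sum(wi * xi for wi, xi in zip(w, x)) + b)) ** 2
          for x, t in data) / len(data)
print(w, b, mse)
```

Unlike the perceptron, which updates on the thresholded output, Adaline minimizes the squared error of the linear output itself; the weights settle near the least-squares solution, and a final threshold at 0 then yields the correct AND classifications.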

1.9 Applications of Neural Networks

Neural networks are reshaping key sectors including finance, healthcare, and automotive. Because these artificial neurons function in a way similar to the human brain, they can be used for image recognition, character recognition, and stock market prediction. Let's look at some of the diverse applications of neural networks.
 1. Facial Recognition 

Facial recognition systems serve as robust systems of surveillance. They match a human face and compare it with digital images, and are used in offices for selective entry. The system thus authenticates a human face and matches it against the list of IDs present in its database.

Convolutional Neural Networks (CNNs) are used for facial recognition and image processing. A large number of pictures is fed into the database to train the neural network, and the collected images are further processed for training.

 2. Stock Market Prediction

Investments are subject to market risks. It is nearly impossible to predict upcoming changes in the highly volatile stock market; the ever-changing bullish and bearish phases were unpredictable before the advent of neural networks.

To make stock predictions in real time, a Multilayer Perceptron (MLP), a class of feedforward artificial neural network, is employed. An MLP comprises multiple layers of nodes, each layer fully connected to the next. A stock's past performance, annual returns, and related financial ratios are considered when building the MLP model.

 3. Social Media

 No matter how cliché it may sound, social media has altered the normal, boring course of life.
Artificial neural networks are used to study the behaviour of social media users. Data
shared every day via virtual conversations is tracked and analyzed for competitive analysis.

 Neural networks model the behaviour of social media users. After analyzing individuals'
behaviour on social media networks, the data can be linked to their spending
habits. Multilayer Perceptron ANNs are used to mine data from social media applications.

 MLP models forecast social media trends and are evaluated with error metrics such as Mean
Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Squared Error (MSE).
MLP takes into consideration several factors, such as a user's favourite Instagram pages and
bookmarked choices; these factors serve as inputs for training the MLP model.
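The three error metrics mentioned above are simple to compute; a small self-contained sketch with made-up numbers:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the errors."""
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    """Mean Squared Error: average of the squared errors."""
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    """Root Mean Squared Error: MSE back in the original units."""
    return np.sqrt(mse(y_true, y_pred))

# Made-up targets and predictions, purely for illustration
y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])
# errors are 0.5, 0.0, -2.0, so:
# MAE  = (0.5 + 0.0 + 2.0) / 3  ≈ 0.8333
# MSE  = (0.25 + 0.0 + 4.0) / 3 ≈ 1.4167
# RMSE = sqrt(MSE)              ≈ 1.1902
```

MSE and RMSE punish large errors more heavily than MAE, which is why a model is usually judged on more than one of them.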

 4. Aerospace
Aerospace engineering is an expansive term covering developments in spacecraft and
aircraft. Fault diagnosis, high-performance autopiloting, securing aircraft control
systems, and modeling key dynamic simulations are some of the areas that neural
networks have taken over. Time Delay Neural Networks (TDNNs) can be employed for
modelling non-linear, time-varying dynamic systems.

Time Delay Neural Networks are used for position-independent (time-shift-invariant)
feature recognition. Algorithms built on them can recognize patterns in sequential data,
which the network learns automatically from the feature units of the training data.

Besides this, TDNNs are also used to give neural network models stronger dynamics. As
passenger safety is of utmost importance inside an aircraft, algorithms built with neural
networks help ensure the accuracy of the autopilot system. Since most autopilot functions
are automated, it is important to maximize their security.
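The tapped-delay-line idea behind a Time Delay Neural Network can be sketched as follows (a minimal illustration: each input row stacks the current sample with its recent past, so the same feedforward weights see every time shift of the signal; `n_taps` is an illustrative name):

```python
import numpy as np

def delay_window(signal, n_taps):
    """Build tapped-delay-line inputs for a time delay neural network:
    each row holds the current sample plus the n_taps - 1 previous ones,
    giving a feedforward network access to the signal's recent history."""
    return np.array([signal[i:i + n_taps]
                     for i in range(len(signal) - n_taps + 1)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
windows = delay_window(signal, n_taps=3)
# windows -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

Feeding these windows to an ordinary feedforward network is what makes the recognition position-independent in time: a pattern produces the same response whichever window it falls into.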

5. Defence

 Defence is the backbone of every country, and a country's standing in the international
domain is assessed partly by its military capability. Neural networks shape the defence
operations of technologically advanced countries; the United States, Britain, and Japan are
among those using artificial neural networks to develop active defence strategies.

 Neural networks are used in logistics, armed-attack analysis, and object location. They
are also used in air and maritime patrols and for controlling automated drones. The
defence sector is getting the much-needed boost of artificial intelligence to scale up its
technologies.

 Convolutional Neural Networks (CNNs) are employed for determining the presence of
underwater mines, explosive devices that pose a serious threat to vessels. Unmanned Aerial
Vehicles (UAVs) and Unmanned Undersea Vehicles (UUVs) are autonomous vehicles that
use convolutional neural networks for this image processing.
 Convolutional layers form the basis of Convolutional Neural Networks. These layers apply
different learned filters to distinguish features within images, with later layers combining
the filtered channels to extract higher-level features.
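The filtering operation a convolutional layer performs can be illustrated with raw sliding-window arithmetic (a minimal single-channel sketch, not a full CNN; like most deep-learning libraries, it actually computes cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image and
    take a weighted sum of the pixels under it at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge, and a filter that responds to it
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])
response = conv2d(image, edge_filter)
# response has value 2.0 along the edge column and 0.0 elsewhere
```

A real convolutional layer learns many such filters at once and stacks their responses into output channels; the sliding-window sum above is the core operation in each of them.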

 6.  Healthcare

The age-old saying goes, "Health is wealth". Modern-day individuals are leveraging the
advantages of technology in the healthcare sector. Convolutional Neural Networks are
actively employed in the healthcare industry for analyzing X-ray, CT-scan, and ultrasound
images.

As CNNs are suited to image processing, the medical imaging data retrieved from the
aforementioned tests is analyzed and assessed with neural network models. Recurrent
Neural Networks (RNNs) are also employed in the development of voice recognition
systems.

 7. Signature Verification and Handwriting Analysis 

Signature verification, as the self-explanatory term suggests, is used for verifying an
individual's signature. Banks and other financial institutions use signature verification to
cross-check the identity of an individual.

Usually, signature verification software is used to examine the signatures. As cases of
forgery are fairly common in financial institutions, signature verification is an important
safeguard that closely examines the authenticity of signed documents.

Artificial neural networks are used for verifying signatures: ANNs are trained to
recognize the difference between genuine and forged signatures, and they can be used for
the verification of both offline and online signatures.

Handwriting analysis plays an integral role in forensics. It is used to evaluate the
variations between two handwritten documents, and the way a person puts words on a
blank sheet can also be used for behavioural analysis. Convolutional Neural Networks
(CNNs) are used for handwriting analysis and handwriting verification.

 8. Weather Forecasting

Forecasts from meteorological departments were far less accurate before artificial
intelligence techniques were adopted. Weather forecasting is primarily undertaken to
anticipate upcoming weather conditions in advance; in the modern era, forecasts are even
used to predict the possibility of natural disasters.

Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent
Neural Networks (RNNs) are used for weather forecasting. Traditional multilayer ANN
models can also be used to predict climatic conditions up to 15 days in advance, and a
combination of different neural network architectures can be used to predict air temperatures.

1.10 Comparison of BNN and ANN
