
Artificial Neural Network

 An artificial neural network (ANN) may be defined as an information-processing model that is inspired by the way biological nervous systems, such as the brain, process information. This model tries to replicate only the most basic functions of the brain.
 The key element of ANN is the novel structure of its information processing system.
 A neural network is a processing device, either an algorithm or actual hardware, whose design was inspired by the design and functioning of animal brains and components thereof. The computing world has a lot to gain from neural networks, also known as artificial neural networks or neural nets.
 Neural networks have the ability to learn by example, which makes them very flexible and powerful.
 With neural networks, there is no need to devise an algorithm to perform a specific task; that is, there is no need to understand the internal mechanism of that task.
 These networks are also well suited for real-time systems because of their fast response and computation times, which result from their parallel architecture.
Advantages of ANN
 Adaptive learning:
o An ANN is endowed with the ability to learn how to do tasks based on the data given for training or initial experience.
 Self-organization:
o An ANN can create its own organization or representation of the information it
receives during learning time.
 Real-time operation:
o ANN computations may be carried out in parallel. Special hardware devices are being designed and manufactured to take advantage of this capability of ANNs.
 Fault tolerance via redundant information coding:
o Partial destruction of a neural network leads to the corresponding degradation of
performance. However, some network capabilities may be retained even after
network damage.
Characteristics of ANN
 It is a neurally implemented mathematical model.
 There exists a large number of highly interconnected processing elements, called neurons, in an ANN.
 The interconnections with their weighted linkages hold the informative knowledge.
 The input signals arrive at the processing elements through connections and connecting
weights.
 The processing elements of the ANN have the ability to learn, recall and generalize from
the given data by suitable assignment or adjustment of weights.
 The computational power can be demonstrated only by the collective behavior of neurons,
and it should be noted that no single neuron carries specific information.



Application of neural network

 Air traffic control could be automated with the location, altitude, direction and speed of
each radar blip taken as input to the network. The output would be the air traffic
controller's instruction in response to each blip.
 Animal behavior, predator/prey relationships and population cycles may be suitable for
analysis by neural networks.
 Criminal sentencing could be predicted using a large sample of crime details as input and
the resulting sentences as output.
 Data mining, cleaning and validation could be achieved by determining which records
suspiciously diverge from the pattern of their peers.
 Handwriting and typewriting could be recognized by imposing a grid over the writing, and
then each square of the grid becomes an input to the neural network. This is called
"Optical Character Recognition."
 Medical diagnosis is an ideal application for neural networks.
 Lake water levels could be predicted based upon precipitation patterns and river/dam
flows.
 Photos and fingerprints could be recognized by imposing a fine grid over the photo. Each square of the grid becomes an input to the neural network.
 Voice recognition could be obtained by analyzing the audio oscilloscope pattern, much like
a stock market graph.
Important terminology of ANN
 Activation function
 The activation function is applied over the net input to calculate the output of an
ANN.
 Weights
 In the architecture of an ANN, each neuron is connected to other neurons by means
of directed communication links, and each communication link is associated with
weights. The weights contain information about the input signal. This information is
used by the net to solve a problem.
 Bias
 The bias included in the network has its impact in calculating the net input.
 The bias is considered like another weight.
 Threshold
 The threshold is a set value based upon which the final output of the network may be calculated. The threshold value is used in the activation function.
 A comparison is made between the calculated net input and the threshold to obtain
the network output.
 Learning rate
 The learning rate is denoted by α (alpha). It is used to control the amount of weight adjustment at each step of training.
 The learning rate, ranging from 0 to 1, determines the rate of learning at each step.
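As a minimal sketch of how these terms fit together (all values and names below are illustrative assumptions, not taken from the text): the net input of a neuron is the weighted sum of its inputs plus the bias, the activation function compares that net input with the threshold to produce the output, and the learning rate α scales each weight update.

import numpy as np

def neuron_output(x, w, b, threshold=0.0):
    # Net input: weighted sum of the input signals plus the bias.
    net = np.dot(w, x) + b
    # Activation function: compare the net input with the threshold (step function).
    return 1 if net >= threshold else 0

# Illustrative values only.
x = np.array([1.0, 0.5])     # input signals
w = np.array([0.4, -0.2])    # weights on the connections
b = 0.1                      # bias, treated like another weight
alpha = 0.1                  # learning rate, between 0 and 1
print(neuron_output(x, w, b))   # net input = 0.4*1.0 + (-0.2)*0.5 + 0.1 = 0.4, so the output is 1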



Artificial Neuron VS Biological Neuron

 Cycle time of execution: a few nanoseconds in an artificial neuron (AN), versus a few milliseconds in a biological neuron (BN).
 Size and complexity: the size and complexity of an ANN depend on the chosen application and the network designer, whereas the brain contains about 10^11 neurons with about 10^15 interconnections.
 Information storage: an artificial neuron stores information in contiguous memory locations, while a biological neuron stores information in its interconnections, i.e. in synapse strengths.
 Control: there is a control unit in the case of an artificial neuron, whereas there is no control unit in a biological neuron.

Terminology relationship between biological and artificial neurons

Biological Neuron        Artificial Neuron
Cell                     Neuron
Dendrites                Weights or interconnections
Soma                     Net input
Axon                     Output

Layers in ANN
 A neural network may have different layers of neurons like input layer, hidden layer, and
output layer.
 The input layer receives input data from the user and propagates a signal to the next layer, called the hidden layer. While doing so, it multiplies the input signal by the connection weights.
 The hidden layer is a middle layer which lies between the input and the output layers.
 The output layer sends its calculated output to the user, from which decisions can be made.
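A small sketch of this layered flow, assuming an arbitrary network with 2 input, 3 hidden, and 1 output neurons and a sigmoid activation (the sizes and weights are illustrative assumptions, not from the text):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 2))   # weights from the input layer to the hidden layer
W_output = rng.normal(size=(1, 3))   # weights from the hidden layer to the output layer

x = np.array([0.2, 0.7])             # input layer: data received from the user
h = sigmoid(W_hidden @ x)            # hidden layer: weighted input passed through the activation
y = sigmoid(W_output @ h)            # output layer: calculated output returned to the user
print(y)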



Types of neural network

 Single layer ANN
 Multi-layer ANN
o Fully connected
o Partially connected
 Feed forward ANN
 Feed backward ANN
 Recurrent ANN

Feed forward Neural Network

 This neural network is one of the simplest forms of ANN, where the data or the input travels in one direction. The data passes through the input nodes and exits at the output nodes.
 This neural network may or may not have the hidden layers.
 If the neural network has no hidden layer, then it is referred to as a single-layer feed-forward neural network.
 If the neural network has one or more hidden layers, then it is referred to as a multi-layer feed-forward neural network.
 Applications of feed-forward neural networks are found in computer vision and speech recognition, where classifying the target classes is complicated. These kinds of neural networks are responsive to noisy data and easy to maintain.
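To illustrate the single-layer versus multi-layer distinction, here is a rough sketch with arbitrarily chosen weights and a step activation; it is only an assumed example of data flowing in one direction, not taken from the text:

import numpy as np

def step(z):
    # Simple threshold activation.
    return (z >= 0).astype(float)

x = np.array([1.0, 0.0, 0.5])

# Single-layer feed-forward: input nodes connect directly to the output nodes.
W = np.array([[0.3, -0.6, 0.9]])
y_single = step(W @ x)

# Multi-layer feed-forward: input -> hidden -> output, still flowing in one direction only.
W1 = np.array([[0.2, 0.4, -0.1],
               [-0.5, 0.3, 0.8]])
W2 = np.array([[0.7, -0.2]])
y_multi = step(W2 @ step(W1 @ x))

print(y_single, y_multi)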

Recurrent neural network

 When the output of a neuron is fed back as input to one or more neurons, the network is known as a recurrent neural network.
 It may be single layer recurrent neural network or multi-layer recurrent neural network.
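A rough sketch of this feedback idea, assuming a single neuron whose previous output is fed back as an extra input at each time step (the weights and input sequence are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_in, w_rec, b = 0.8, 0.5, -0.2    # input weight, recurrent (feedback) weight, bias
inputs = [0.0, 1.0, 1.0, 0.0]      # input sequence

y = 0.0                            # previous output, fed back at the next step
for x in inputs:
    y = sigmoid(w_in * x + w_rec * y + b)   # the neuron's output becomes part of its next input
    print(round(float(y), 3))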



Perceptron
 Perceptron networks come under single-layer feed-forward networks and are also called simple perceptrons.
 The perceptron learning algorithm reaches a solution in a finite number of steps.
 A Perceptron maps an m-dimensional input vector onto an n-dimensional output vector. A
distinct feature of a Perceptron is that the weights are not pre-calculated but are adjusted
by an iterative process called training.
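A minimal sketch of that iterative training, assuming the classic perceptron learning rule on a toy AND-gate data set; the data, learning rate, and number of epochs are illustrative choices, not from the text:

import numpy as np

# Toy data set: logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)   # weights are not pre-calculated; they start at zero
b = 0.0
alpha = 0.1       # learning rate

for epoch in range(20):
    for x, target in zip(X, t):
        y = 1.0 if np.dot(w, x) + b >= 0 else 0.0   # threshold activation
        # Perceptron rule: adjust weights by learning rate * error * input.
        w += alpha * (target - y) * x
        b += alpha * (target - y)

print(w, b)   # weights and bias learned by the iterative training process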

Error back propagation


 A multi-layer perceptron can be trained using the back-propagation algorithm.
 Goal: To learn the weights for all links in an interconnected multilayer network.
 Algorithm
 Create a network with n_in input nodes, n_hidden internal (hidden) nodes, and n_out output nodes.
 Initialize all weights to small random numbers.
 Until error is small do:
For each example X do
 Propagate example X forward through the network
 Propagate errors backward through the network
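A compact sketch of the forward and backward propagation steps above, assuming one hidden layer with sigmoid units, squared error, and the XOR problem as training data; the layer sizes, learning rate, and number of epochs are illustrative assumptions, not part of the original algorithm description:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
# Initialize all weights to small random numbers.
W1 = rng.normal(scale=0.5, size=(4, 2)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(scale=0.5, size=(1, 4)); b2 = np.zeros(1)   # hidden -> output
alpha = 0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

for epoch in range(10000):
    for x, t in zip(X, T):
        # Propagate the example forward through the network.
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        # Propagate the error backward and adjust the weights.
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        W2 -= alpha * np.outer(delta_out, h); b2 -= alpha * delta_out
        W1 -= alpha * np.outer(delta_hid, x); b1 -= alpha * delta_hid

# Outputs should move toward 0, 1, 1, 0 as the error becomes small.
print([sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2).item() for x in X])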



Self-organizing maps

 A self-organizing map, or self-organizing feature map, is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they apply competitive learning as opposed to error-correction learning.
 Components of self-organizing maps
o Initialization: All the connection weights are initialized with small random values.
o Competition: For each input pattern, the neurons compute their respective values
of a discriminant function which provides the basis for competition. The particular
neuron with the smallest value of the discriminant function is declared the winner.
o Cooperation: The winning neuron determines the spatial location of a topological
neighborhood of excited neurons, thereby providing the basis for cooperation among
neighboring neurons.
o Adaptation: The excited neurons decrease their individual values of the
discriminant function in relation to the input pattern through suitable adjustment of
the associated connection weights, such that the response of the winning neuron to
the subsequent application of a similar input pattern is enhanced.

 Euclidean distance (the discriminant function): d_j = √( Σ_i (x_i − w_ji)² ), the distance between the input vector x and the weight vector w_j of neuron j.
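A rough sketch of one initialization–competition–cooperation–adaptation cycle under these definitions, assuming a small one-dimensional map, a Gaussian neighborhood, and arbitrary sizes; the Euclidean distance above serves as the discriminant function:

import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_inputs = 5, 3
W = rng.random((n_neurons, n_inputs)) * 0.1    # Initialization: small random connection weights
alpha, sigma = 0.5, 1.0                        # learning rate and neighborhood width (assumed values)

x = np.array([0.2, 0.8, 0.4])                  # one input pattern

# Competition: the neuron with the smallest Euclidean distance to x is declared the winner.
d = np.linalg.norm(W - x, axis=1)
winner = int(np.argmin(d))

# Cooperation: a topological neighborhood of excited neurons centred on the winner.
positions = np.arange(n_neurons)
h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))

# Adaptation: move each excited neuron's weights toward x, scaled by its neighborhood value,
# so the winner responds more strongly to similar input patterns next time.
W += alpha * h[:, None] * (x - W)

print(winner, np.round(d, 3))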

