CHP 5 New ANN
ARTIFICIAL NEURAL NETWORK
5.1 Introduction – Fundamental Concept – Basic Models of Artificial Neural Networks – Important Terminologies of ANNs – McCulloch-Pitts Neuron
5.2 Neural Network Architecture: Perceptron, Single Layer Feed Forward ANN, Multilayer Feed Forward ANN, Activation Functions, Supervised Learning: Delta Learning Rule, Back Propagation Algorithm
• A biological neuron has 3 parts:
– Dendrites (hair-like structures): collect stimuli from the neighboring neurons and pass them on to the soma.
– Soma or cell body (the processing element of the neuron): accumulates the stimuli received through the dendrites.
– Axon (long cylindrical fibre): carries the electric impulses (stimuli) of the neuron to neighboring neurons.
• The small gap between an axon terminal and the adjacent dendrite of the neighboring neuron is called the synapse.
• An electric impulse is transmitted across the synaptic gap by means of an electrochemical process.
Megha V Gupta, NHITM
• This weight w, together with the other synaptic weights, embodies the knowledge stored in the network of neurons.
• The axon endings almost touch the dendrites or cell body of the next neuron.
• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters: chemicals released from the first neuron which bind to receptors on the second.
“Machine Learning” by Anuradha Srinivasaraghavan & Vincy Joseph, Copyright © 2019 Wiley India Pvt. Ltd. All rights reserved.
ARTIFICIAL NEURON
y = f(y_in)
The function applied over the net input y_in is called the activation function.
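Two activation functions used later in the chapter, the threshold (step) function and the binary sigmoid, can be sketched as follows (the function names and sample values here are illustrative, not from the slides):

```python
import math

def binary_step(y_in, theta=0.0):
    """Threshold (step) activation: fire (1) only if the net input reaches theta."""
    return 1 if y_in >= theta else 0

def sigmoid(y_in):
    """Binary sigmoidal activation: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y_in))

print(binary_step(0.4, theta=0.5))  # 0 (net input below threshold)
print(sigmoid(0.0))                 # 0.5
```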
QUIZ
Given inputs [x1, x2, x3] = [0.3, 0.5, 0.6] and weights [w1, w2, w3] = [0.2, 0.1, −0.3], the net input is calculated as
y_in = x1·w1 + x2·w2 + x3·w3
y_in = 0.3 × 0.2 + 0.5 × 0.1 + 0.6 × (−0.3) = 0.06 + 0.05 − 0.18 = −0.07
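The same worked example can be checked in a few lines of Python:

```python
x = [0.3, 0.5, 0.6]   # inputs x1, x2, x3
w = [0.2, 0.1, -0.3]  # weights w1, w2, w3

# The net input is the weighted sum of inputs and weights.
y_in = sum(xi * wi for xi, wi in zip(x, w))
print(round(y_in, 2))  # -0.07
```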
BASIC MODELS OF ANN
An ANN model is characterized by three basic entities:
• Interconnections
• Learning rules
• Activation function
MULTILAYER FEED FORWARD NETWORK
SINGLE NODE WITH OWN FEEDBACK
SINGLE LAYER RECURRENT NETWORKS
• A single-layer network with a feedback connection, in which a PE's (processing element's) output can be directed back to the PE itself, to another PE, or to both.
• If the output of the PEs is directed back as input to PEs in the same layer, it is called lateral feedback.
LEARNING
[Figure: supervised learning — the input X passes through the neural network (weights W) to produce the actual output Y; an error-signal generator compares Y with the desired output D and feeds the error signals (D − Y) back to adjust the weights.]
[Figure: reinforcement learning — the same loop, except the error-signal generator is guided only by a reinforcement signal R rather than the full desired output.]
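The supervised-learning loop above, in which the error signal (D − Y) adjusts the weights, is exactly what the delta rule does: w ← w + η(D − Y)x. A minimal single-neuron sketch (the learning rate, input, and target values are illustrative assumptions):

```python
def delta_update(w, x, d, eta=0.1):
    """One delta-rule step: w <- w + eta * (d - y) * x."""
    y = sum(wi * xi for wi, xi in zip(w, x))     # actual output Y
    error = d - y                                # error signal (D - Y)
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(50):                  # repeatedly present one training pair
    w = delta_update(w, x=[1.0, 1.0], d=1.0)
print([round(wi, 2) for wi in w])    # -> [0.5, 0.5], so Y = 1.0 = D
```

Each presentation shrinks the error by a constant factor, so the weights converge to values whose weighted sum reproduces the desired output.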
The internal state of a neuron, a function of the inputs the neuron receives, is called __________.
a. Weight
b. Activation or activity level of the neuron
c. Bias
d. Node
Correct answer: b
IMPORTANT TERMINOLOGIES OF ANNs
• Weights
• Bias
• Threshold
• Learning rate
• Momentum factor
• Vigilance parameter
• Notations used in ANN
W is the weight matrix, whose rows are the transposed weight vectors w1^T, w2^T, …, wn^T; wij denotes the weight from input unit i to output unit j:

W = | w11 w12 w13 ... w1m |
    | w21 w22 w23 ... w2m |
    | ................... |
    | wn1 wn2 wn3 ... wnm |

The net input to output unit j is the weighted sum
y_inj = Σ (i = 0 to n) xi · wij
where the bias is written bj = w0j, with x0 = 1.
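With the bias absorbed as w0j (and x0 = 1), each output unit's net input is its bias plus a weighted sum of the inputs. A small sketch with illustrative numbers (here each row of W lists the weights into one output unit, an implementation convenience):

```python
def net_inputs(x, W, b):
    """y_inj = b_j + sum_i x_i * w_ij, for every output unit j."""
    return [bj + sum(xi * wij for xi, wij in zip(x, row))
            for bj, row in zip(b, W)]

x = [1.0, 2.0]                  # inputs x1, x2 (illustrative)
W = [[0.5, -0.5], [1.0, 0.0]]   # W[j] holds the weights into output unit j
b = [0.1, -0.2]                 # biases b_j = w_0j
print([round(v, 2) for v in net_inputs(x, W, b)])  # [-0.4, 0.8]
```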
• Logical AND (weights w1 = w2 = 1, threshold θ = 2):
x1 x2 | y
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1
• Logical OR (weights w1 = w2 = 1, threshold θ = 1):
x1 x2 | y
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 1
LOGIC GATES WITH MP NEURONS
• Logical AND NOT (y = x1 AND NOT x2), realised with weights w1 = 1, w2 = −1 and θ = 1, or equivalently with w1 = 2, w2 = −1 and θ = 2:
x1 x2 | y
0  0  | 0
0  1  | 0
1  0  | 1
1  1  | 0
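Each gate above is just an MP neuron with fixed weights and a threshold, which a tiny function can verify:

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts neuron: fires (1) iff the weighted sum reaches theta."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= theta else 0

# Verify the AND and AND-NOT truth tables from the slides.
for x1 in (0, 1):
    for x2 in (0, 1):
        y_and = mp_neuron((x1, x2), (1, 1), theta=2)       # logical AND
        y_andnot = mp_neuron((x1, x2), (1, -1), theta=1)   # x1 AND NOT x2
        print(x1, x2, y_and, y_andnot)
```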
What was the name of the first model which could perform a weighted sum of inputs?
a. McCulloch-Pitts neuron model
b. Marvin Minsky neuron model
c. Hopfield model of neuron
d. Perceptron
Correct answer: a
The decision line is drawn separating the positive response region from the negative response region.
LINEAR SEPARABILITY
• Separation of the input space into regions is based on whether the network response is positive or negative.
• The line of separation is called the linearly separable line.
• Examples:
– The AND and OR functions are linearly separable.
– The XOR function is linearly inseparable.
• Problems (such as XOR) which cannot be classified in this way are said to be non-linearly separable.
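The difference shows up directly in training: perceptron learning drives the error to zero on AND but never on XOR. A minimal sketch (the learning rate, epoch count, and 0/1 step output are illustrative choices):

```python
def misclassified_after_training(samples, epochs=25, eta=1.0):
    """Train a step-output perceptron; return how many samples it still gets wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, d in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            b += eta * (d - y)
    return sum(d != (1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0)
               for x, d in samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(misclassified_after_training(AND))  # 0: linearly separable
print(misclassified_after_training(XOR))  # > 0: no separating line exists
```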
When two classes can be separated by a straight line, they are known as?
a. linearly separable
b. linearly inseparable classes
c. may be separable or inseparable, it depends on the system
d. inseparable
Correct answer: a
A simple perceptron learns using
A) Evolutionary algorithms
B) Particle swarm optimization
C) Genetic algorithms
D) Gradient descent method
Correct answer: d
• Structure of Neighborhoods
• Architecture of SOM
SOM consists of two layers: an input layer and an output (cluster) layer. The winning unit is identified using either the dot-product or the Euclidean-distance method, and weight updation using the Kohonen learning rule is performed on the winning cluster unit.
KOHONEN SELF ORGANIZING MAPS
Architecture
[Figure: the Kohonen layer of neurons; for an input vector X, the winning neuron i with its weight vector wi is highlighted.]
KOHONEN SOM (SELF ORGANIZING MAPS)
• Algorithm:
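The algorithm referred to above can be sketched in its simplest (winner-take-all) form: find the winning unit by Euclidean distance, then apply the Kohonen update to it. The data, number of units, and decay schedule below are illustrative assumptions:

```python
import random

def train_som(data, n_units=3, dim=2, epochs=20, alpha=0.5):
    """Winner-take-all Kohonen SOM: move the winning unit toward each input."""
    random.seed(0)                      # fixed seed so the sketch is repeatable
    W = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Winning unit: smallest squared Euclidean distance to x.
            j = min(range(n_units),
                    key=lambda k: sum((xi - wi) ** 2 for xi, wi in zip(x, W[k])))
            # Kohonen learning rule: w_j <- w_j + alpha * (x - w_j)
            W[j] = [wi + alpha * (xi - wi) for xi, wi in zip(x, W[j])]
        alpha *= 0.9                    # decay the learning rate each epoch
    return W

clusters = [[0.1, 0.1], [0.9, 0.9], [0.1, 0.9]]
W = train_som(clusters)                 # each unit settles near one input point
```

A full SOM would also update a neighborhood around the winner (shrinking over time), which is what gives the map its topological ordering.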
Applications of ANNs
• Medical applications
• Information searching & retrieval
• Chemistry
• Education
• Business & management
• Signal processing
• Pattern recognition, e.g. handwritten characters or face identification
• Diagnosis, or mapping symptoms to a medical case
• Speech recognition
• Human emotion detection
• Educational loan forecasting