PMR5406 Redes Neurais e Lógica Fuzzy
Lecture 3
Multilayer Perceptrons
Based on:
Neural Networks, Simon Haykin, Prentice-Hall, 2nd edition
Course slides by Elena Marchiori, Vrije Universiteit
Multilayer Perceptrons
Architecture
(Diagram: an input layer, one or more hidden layers, and an output layer.)
(Plot: sigmoidal activation as a function of the induced local field $v_j$.)
The induced local field of neuron $j$ is
$$v_j = \sum_{i=0}^{m} w_{ji}\, y_i$$
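As a concrete illustration, here is a minimal Python sketch of the induced local field and a logistic activation. The names (`w_j`, `y`, `a`) are illustrative assumptions; following the sum from $i = 0$ above, `y[0] = +1` carries the bias.

```python
import numpy as np

def induced_local_field(w_j, y):
    """v_j = sum_{i=0}^{m} w_ji * y_i, with y[0] = +1 carrying the bias."""
    return np.dot(w_j, y)

def logistic(v, a=1.0):
    """Logistic sigmoid phi(v) = 1 / (1 + exp(-a * v))."""
    return 1.0 / (1.0 + np.exp(-a * v))

y = np.array([1.0, 0.5, -0.2])    # y[0] = +1 (bias input), then two inputs
w_j = np.array([0.1, 0.4, -0.3])  # w_j[0] is the bias weight
v_j = induced_local_field(w_j, y)
print(v_j, logistic(v_j))
```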
• Back-propagation algorithm
  – Forward step: function signals propagate from the input layer to the output layer.
  – Backward step: error signals propagate from the output layer back through the network.
• It adjusts the weights of the NN in order to minimize the average squared error (see the sketch below).
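A minimal runnable trace of the two passes, assuming a toy two-layer network with logistic units; the weight values are arbitrary, and the update rule itself is derived on the following slides.

```python
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

W1 = np.array([[0.2, -0.1], [0.4, 0.3]])  # input -> hidden weights (toy values)
W2 = np.array([[0.5, -0.4]])              # hidden -> output weights
x, d = np.array([1.0, 0.5]), np.array([1.0])

# Forward step: function signals flow from input to output.
y1 = logistic(W1 @ x)
y2 = logistic(W2 @ y1)

# Backward step: error signals flow from the output back through the network.
e = d - y2
delta2 = e * y2 * (1 - y2)                # output-layer error signal
delta1 = (W2.T @ delta2) * y1 * (1 - y1)  # error signal at the hidden layer
print(e, delta2, delta1)
```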
• Average squared error over the output layer $C$:
  $$E(n) = \frac{1}{2} \sum_{j \in C} e_j^2(n)$$
• Measure of learning performance ($N$ = size of the training set):
  $$E_{av} = \frac{1}{N} \sum_{n=1}^{N} E(n)$$
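A direct transcription of these two error measures, as a sketch; `D` and `Y` are assumed toy arrays of desired and actual outputs.

```python
import numpy as np

def instantaneous_error(d, y):
    """E(n) = 1/2 * sum_{j in C} e_j(n)^2, with e = d - y over the output layer."""
    e = d - y
    return 0.5 * np.sum(e ** 2)

def average_error(D, Y):
    """E_av = (1/N) * sum_{n=1}^{N} E(n) over a training set of size N."""
    return np.mean([instantaneous_error(d, y) for d, y in zip(D, Y)])

D = np.array([[1.0, 0.0], [0.0, 1.0]])  # desired outputs (toy data)
Y = np.array([[0.8, 0.1], [0.2, 0.7]])  # actual network outputs
print(average_error(D, Y))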
Step in the direction opposite to the gradient:
$$\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}}$$
By the chain rule,
$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial v_j} \frac{\partial v_j}{\partial w_{ji}}, \qquad \frac{\partial v_j}{\partial w_{ji}} = y_i$$
so with the local gradient $\delta_j = -\partial E / \partial v_j$ the update becomes $\Delta w_{ji} = \eta\, \delta_j\, y_i$.
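In vectorized form the update for a whole layer is an outer product; a minimal sketch, where `eta` (learning rate) and the array shapes are illustrative assumptions.

```python
import numpy as np

def weight_update(delta, y, eta=0.1):
    """Delta w[j, i] = eta * delta[j] * y[i], via an outer product."""
    return eta * np.outer(delta, y)

delta = np.array([0.05, -0.02])  # local gradients of two neurons
y = np.array([1.0, 0.3, 0.7])    # inputs to the layer (y[0] = bias input)
print(weight_update(delta, y))   # one gradient-descent step for the layer
```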
Compute the local gradient of neuron $j$. For an output neuron the error is
$$e_j = d_j - y_j$$
Then
$$\delta_j = (d_j - y_j)\, \varphi'(v_j)$$
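A sketch of the output-neuron local gradient; `phi_prime_v` stands for $\varphi'(v_j)$ and is supplied by the caller, since the slide keeps the activation generic at this point.

```python
import numpy as np

def output_delta(d, y, phi_prime_v):
    """delta_j = (d_j - y_j) * phi'(v_j) for output neurons."""
    return (d - y) * phi_prime_v

d = np.array([1.0, 0.0])
y = np.array([0.8, 0.3])
print(output_delta(d, y, y * (1 - y)))  # logistic case with a = 1
```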
For a hidden neuron $j$, start from
$$E(n) = \frac{1}{2} \sum_{k \in C} e_k^2(n)$$
$$\frac{\partial E}{\partial y_j} = \sum_{k \in C} e_k \frac{\partial e_k}{\partial y_j} = \sum_{k \in C} e_k \frac{\partial e_k}{\partial v_k} \frac{\partial v_k}{\partial y_j}$$
From
$$\frac{\partial e_k}{\partial v_k} = -\varphi'(v_k), \qquad \frac{\partial v_k}{\partial y_j} = w_{kj}$$
we obtain
$$\frac{\partial E}{\partial y_j} = -\sum_{k \in C} \delta_k w_{kj}$$
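The backpropagated sum is a matrix-vector product with the transposed weight matrix; a minimal sketch with assumed toy shapes.

```python
import numpy as np

def backpropagated_sum(delta_next, W_next):
    """sum_k delta_k * w_kj for every hidden unit j, i.e. -dE/dy_j."""
    return W_next.T @ delta_next

delta_next = np.array([0.05, -0.02])          # local gradients of the layer above
W_next = np.array([[0.1, 0.4], [-0.3, 0.2]])  # W_next[k, j] = w_kj
print(backpropagated_sum(delta_next, W_next))
```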
(Figure: signal-flow graph of back-propagation error signals to neuron $j$; each output neuron $k = 1, \ldots, m$ feeds $e_k\, \varphi'(v_k)$ back through the weight $w_{kj}$.)
For the logistic sigmoid, $\varphi'(v_j) = a\, y_j [1 - y_j]$, so
$$\delta_j = a\, y_j [1 - y_j] \sum_k \delta_k w_{kj} \quad \text{if } j \text{ is a hidden node}$$
$$\delta_j = a\, y_j [1 - y_j] [d_j - y_j] \quad \text{if } j \text{ is an output node}$$
A combined sketch follows below.
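Tying the two cases together, here is a minimal end-to-end back-propagation step for a single-hidden-layer MLP with logistic units ($a = 1$). Biases are omitted for brevity, and the names (`W1`, `W2`, `eta`) and sizes are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(W1, W2, x, d, eta=0.5):
    # Forward step: function signals.
    y1 = logistic(W1 @ x)    # hidden activations
    y2 = logistic(W2 @ y1)   # output activations
    # Backward step: error signals (a = 1, so phi'(v) = y * (1 - y)).
    delta2 = y2 * (1 - y2) * (d - y2)         # output nodes
    delta1 = y1 * (1 - y1) * (W2.T @ delta2)  # hidden nodes
    # Delta rule applied layer by layer (updates W1, W2 in place).
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)
    return 0.5 * np.sum((d - y2) ** 2)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))  # 2 inputs -> 3 hidden units
W2 = rng.normal(scale=0.5, size=(1, 3))  # 3 hidden units -> 1 output
x, d = np.array([0.2, 0.9]), np.array([1.0])
for _ in range(100):
    E = backprop_step(W1, W2, x, d)
print(E)  # the instantaneous error shrinks as the weights adapt
```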
Continuous functions:
• Every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer.
• Any function can be approximated with arbitrary accuracy by a network with two hidden layers.
Generalized delta rule (adding a momentum term with momentum constant $\alpha$):
$$\Delta w_{ji}(n) = \alpha\, \Delta w_{ji}(n-1) + \eta\, \delta_j(n)\, y_i(n)$$
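A sketch of the momentum update: the previous weight change is re-applied, scaled by $\alpha$ ($0 \le \alpha < 1$). The default values of `eta` and `alpha` here are illustrative assumptions.

```python
import numpy as np

def momentum_update(prev_dw, delta, y, eta=0.1, alpha=0.9):
    """Delta w(n) = alpha * Delta w(n-1) + eta * delta_j(n) * y_i(n)."""
    return alpha * prev_dw + eta * np.outer(delta, y)

dw = np.zeros((2, 3))  # Delta w(n-1), zero at the first iteration
delta, y = np.array([0.05, -0.02]), np.array([1.0, 0.3, 0.7])
dw = momentum_update(dw, delta, y)  # carry dw between iterations
print(dw)
```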
The sigmoidal function
$$\varphi(v) = \frac{1}{1 + e^{-av}}$$
is nonsymmetric: its output lies in $(0, 1)$, with $\varphi(0) = 1/2$.
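A quick numerical check of these properties and of the derivative identity $\varphi'(v) = a\, \varphi(v)(1 - \varphi(v))$ used above; the value $a = 2$ is an arbitrary assumption.

```python
import numpy as np

a = 2.0
phi = lambda v: 1.0 / (1.0 + np.exp(-a * v))

v, h = 0.3, 1e-6
numeric = (phi(v + h) - phi(v - h)) / (2 * h)  # central-difference derivative
analytic = a * phi(v) * (1 - phi(v))           # closed-form derivative
print(phi(0.0), numeric, analytic)  # 0.5, then two matching derivatives
```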
($m$ = number of weights of the network)
Output representation for classification: for input pattern $x_j$, the MLP produces outputs $y_{1,j}, y_{2,j}, \ldots, y_{M,j}$, one per class, with desired response
$$d_{k,j} = \begin{cases} 1, & x_j \in C_k \\ 0, & x_j \notin C_k \end{cases}$$
so only the $k$-th element of the desired output vector is 1 (see the sketch below).
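A sketch of this 1-of-$M$ target encoding, plus the usual companion decision rule of picking the strongest output; the helper names are hypothetical.

```python
import numpy as np

def one_hot(k, M):
    """d_k = 1 iff the pattern belongs to class C_k; all other entries 0."""
    d = np.zeros(M)
    d[k] = 1.0
    return d

def classify(y):
    """Assign the pattern to the class whose output neuron fires strongest."""
    return int(np.argmax(y))

print(one_hot(2, 4))                        # [0. 0. 1. 0.]
print(classify(np.array([0.1, 0.7, 0.2])))  # class 1
```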