
[Figure: a 2-2-1 back-propagation network. Inputs x1 and x2 plus a bias unit (1) feed hidden units z1 and z2 through weights v11, v21, v12, v22 and biases v01, v02; z1 and z2 plus a bias unit (1) feed the output y through weights w1, w2 and bias w0.]

Error back-propagation training algorithm (MLP)


Notation: input layer X = x1, …, xi; hidden layer Z = z1, …, zj; output layer Y = y1, …, yk.
Using the back-propagation algorithm, find the new weights for the network shown in the figure above. It is presented with the input pattern [x1, x2] = [0, 1] and the target output is t = 1. Use a learning rate α = 0.25 and the binary sigmoidal activation function f(x) = 1/(1 + e^(-x)).
Initial weights:
[v11, v21, v01] = [0.6, -0.1, 0.3]
[v12, v22, v02] = [-0.3, 0.4, 0.5]
[w1, w2, w0] = [0.4, 0.1, -0.2]
Feedforward phase:
zin1 = v01 + x1*v11 + x2*v21
zin2 = v02 + x1*v12 + x2*v22
z1 = f(zin1) = 1/(1 + e^(-zin1))
z2 = f(zin2) = 1/(1 + e^(-zin2))
yin = w0 + z1*w1 + z2*w2
y = f(yin) = 1/(1 + e^(-yin))
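The original solution does not show the intermediate feedforward values; a minimal Python sketch (assuming the initial weights listed above, with f a helper for the binary sigmoid) reproduces them:

import math

def f(x):
    # binary sigmoid activation
    return 1.0 / (1.0 + math.exp(-x))

# input pattern, target, and learning rate from the problem statement
x1, x2, t, alpha = 0, 1, 1, 0.25
# initial weights
v11, v21, v01 = 0.6, -0.1, 0.3
v12, v22, v02 = -0.3, 0.4, 0.5
w1, w2, w0 = 0.4, 0.1, -0.2

# feedforward phase
zin1 = v01 + x1 * v11 + x2 * v21   # = 0.2
zin2 = v02 + x1 * v12 + x2 * v22   # = 0.9
z1, z2 = f(zin1), f(zin2)          # ≈ 0.5498, 0.7109
yin = w0 + z1 * w1 + z2 * w2       # ≈ 0.0910
y = f(yin)                         # ≈ 0.5227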
Backpropagation phase:
Error between hidden and output layer (k = 1):
δk = (tk - yk)*f'(yin)
f'(yin) = f(yin)*(1 - f(yin))
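Continuing the sketch above, the output error term evaluates to:

# error term at the output unit
delta_k = (t - y) * y * (1 - y)    # ≈ 0.4773 * 0.2495 ≈ 0.1191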
Error between input and hidden layer (j = 1, 2):
δinj = Σ δk*wjk, summed over k = 1 to m (here m = 1)
δin1 = δk*w1
δin2 = δk*w2
δj = δinj*f'(zinj)
f'(zinj) = f(zinj)*(1 - f(zinj))
δ1 = δin1*f'(zin1)
δ2 = δin2*f'(zin2)
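In the sketch, the hidden-layer error terms follow directly:

# error terms at the hidden units
delta_in1 = delta_k * w1               # ≈ 0.0476
delta_in2 = delta_k * w2               # ≈ 0.0119
delta1 = delta_in1 * z1 * (1 - z1)     # ≈ 0.01179
delta2 = delta_in2 * z2 * (1 - z2)     # ≈ 0.00245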
Calculation of change in weights:
Hidden–output layer weights:
Δw1 = α*δk*z1 = 0.01637
Δw2 = α*δk*z2 = 0.021167
Δw0 = α*δk = 0.029775
Input–hidden layer weights:
Δv11 = α*δ1*x1 = 0
Δv21 = α*δ1*x2 = 0.0029475
Δv01 = α*δ1 = 0.0029475
Δv12 = α*δ2*x1 = 0
Δv22 = α*δ2*x2 = 0.00061195
Δv02 = α*δ2 = 0.00061195
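The same updates in the sketch (the last digits differ slightly from the worked values because the text rounds δk to 0.1191 before multiplying):

# weight changes, Δ = α * δ * input
dw1 = alpha * delta_k * z1    # ≈ 0.01637
dw2 = alpha * delta_k * z2    # ≈ 0.02116
dw0 = alpha * delta_k         # ≈ 0.02977
dv11 = alpha * delta1 * x1    # = 0 (since x1 = 0)
dv21 = alpha * delta1 * x2    # ≈ 0.0029470
dv01 = alpha * delta1         # ≈ 0.0029470
dv12 = alpha * delta2 * x1    # = 0
dv22 = alpha * delta2 * x2    # ≈ 0.0006117
dv02 = alpha * delta2         # ≈ 0.0006117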
New weights calculation, w(new) = w(old) + Δw:
v11(new) = v11(old) + Δv11
v21(new) = v21(old) + Δv21
v01(new) = v01(old) + Δv01
v12(new) = v12(old) + Δv12
v22(new) = v22(old) + Δv22
v02(new) = v02(old) + Δv02
w1(new) = w1(old) + Δw1
w2(new) = w2(old) + Δw2
w0(new) = w0(old) + Δw0
v11(new) = 0.6
v21(new) = -0.0970525
v01(new) = 0.3029475
v12(new) = -0.3
v22(new) = 0.40061195
v02(new) = 0.50061195
w1(new) = 0.41637
w2(new) = 0.121167
w0(new) = -0.170225
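Applying the updates in the sketch reproduces these values (up to the rounding noted above):

# new weights
v11, v21, v01 = v11 + dv11, v21 + dv21, v01 + dv01   # 0.6, ≈ -0.0970530, ≈ 0.3029470
v12, v22, v02 = v12 + dv12, v22 + dv22, v02 + dv02   # -0.3, ≈ 0.4006117, ≈ 0.5006117
w1, w2, w0 = w1 + dw1, w2 + dw2, w0 + dw0            # ≈ 0.41637, ≈ 0.12116, ≈ -0.17023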
