Multi Layer Perceptron - Neural Network


1.

For the data shown in the following table, show the first iteration of computing the
membership values for the input variables x1, x2, x3, and x4 in the regions R1 and R2. Use a
4×3×2 neural network with a random set of weights.

x1 x2 x3 x4 R1 R2

4 2 -4 1 0 1

SOLUTION NO 6

x1 x2 x3 x4 R1 R2

4 2 -4 1 0 1

[Figure: the 4×3×2 network. Inputs X1–X4 connect to the three hidden nodes through
first-layer weights w(1,i,j) (input i to hidden node j); the hidden nodes connect to the
outputs R1 and R2 through second-layer weights w(2,j,k) (hidden node j to output k).]

Assigning a random set of weights:


w(1,1,1) = 0.2 w(2,1,1) = 0.9
w(1,2,1) = 0.4 w(2,2,1) = 0.2
w(1,3,1) = 0.6 w(2,3,1) = 0.1
w(1,4,1) = 0.7 w(2,1,2) = 0.2
w(1,1,2) = 0.5 w(2,2,2) = 0.3
w(1,2,2) = 0.3 w(2,3,2) = 0.5
w(1,3,2) = 0.8
w(1,4,2) = 0.2
w(1,1,3) = 0.4
w(1,2,3) = 0.3
w(1,3,3) = 0.2
w(1,4,3) = 0.3

Suppose the threshold value is t = 0 and the activation is the logistic (sigmoid) function; the outputs of the 2nd and 3rd layers can then be determined as below.

Output of the 2nd layer:
$O_1^2 = \frac{1}{1+\exp[-(x_1 W_{11}^1 + x_2 W_{21}^1 + x_3 W_{31}^1 + x_4 W_{41}^1 - t)]}$
$\quad\; = \frac{1}{1+\exp[-((4)(0.2) + (2)(0.4) + (-4)(0.6) + (1)(0.7))]} = 0.475021$

$O_2^2 = \frac{1}{1+\exp[-(x_1 W_{12}^1 + x_2 W_{22}^1 + x_3 W_{32}^1 + x_4 W_{42}^1 - t)]}$
$\quad\; = \frac{1}{1+\exp[-((4)(0.5) + (2)(0.3) + (-4)(0.8) + (1)(0.2))]} = 0.401312$

$O_3^2 = \frac{1}{1+\exp[-(x_1 W_{13}^1 + x_2 W_{23}^1 + x_3 W_{33}^1 + x_4 W_{43}^1 - t)]}$
$\quad\; = \frac{1}{1+\exp[-((4)(0.4) + (2)(0.3) + (-4)(0.2) + (1)(0.3))]} = 0.845535$
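As a check, the hidden-layer outputs above can be reproduced with a short Python sketch (the variable names and weight-matrix layout are my own; all values are taken from the assignment above):

```python
import math

def sigmoid(net):
    # logistic activation: 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

x = [4, 2, -4, 1]                      # input pattern (x1..x4)
t = 0                                  # threshold
# first-layer weights W1[j][i]: hidden node j, input i (values from the text)
W1 = [[0.2, 0.4, 0.6, 0.7],
      [0.5, 0.3, 0.8, 0.2],
      [0.4, 0.3, 0.2, 0.3]]

# net input of each hidden node, passed through the sigmoid
O2 = [sigmoid(sum(w * xi for w, xi in zip(row, x)) - t) for row in W1]
print([round(o, 6) for o in O2])       # [0.475021, 0.401312, 0.845535]
```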

Output of the 3rd layer:

$O_1^3 = \frac{1}{1+\exp[-(O_1^2 W_{11}^2 + O_2^2 W_{21}^2 + O_3^2 W_{31}^2 - t)]}$
$\quad\; = \frac{1}{1+\exp[-((0.475021)(0.9) + (0.401312)(0.2) + (0.845535)(0.1))]} = 0.643901$

$O_2^3 = \frac{1}{1+\exp[-(O_1^2 W_{12}^2 + O_2^2 W_{22}^2 + O_3^2 W_{32}^2 - t)]}$
$\quad\; = \frac{1}{1+\exp[-((0.475021)(0.2) + (0.401312)(0.3) + (0.845535)(0.5))]} = 0.654339$
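The output-layer values can be verified the same way (a sketch reusing the hidden outputs computed above; the names and layout are illustrative):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

O2 = [0.475021, 0.401312, 0.845535]    # hidden-layer outputs from the previous step
t = 0
# second-layer weights W2[k][j]: output node k, hidden node j (values from the text)
W2 = [[0.9, 0.2, 0.1],
      [0.2, 0.3, 0.5]]

O3 = [sigmoid(sum(w * o for w, o in zip(row, O2)) - t) for row in W2]
print([round(o, 6) for o in O3])       # [0.643901, 0.654339]
```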

Next, we determine the error at the output layer:

$R_1: \; E_1^3 = R_1 - O_1^3 = 0 - 0.643901 = -0.643901$

$R_2: \; E_2^3 = R_2 - O_2^3 = 1 - 0.654339 = 0.345661$

Then we propagate the error back to the 2nd layer:

$E_1^2 = O_1^2(1-O_1^2)\,[W_{11}^2 E_1^3 + W_{12}^2 E_2^3]$
$\quad\; = 0.475021 \times (1-0.475021) \times [(0.9)(-0.643901) + (0.2)(0.345661)] = -0.127276$

$E_2^2 = O_2^2(1-O_2^2)\,[W_{21}^2 E_1^3 + W_{22}^2 E_2^3]$
$\quad\; = 0.401312 \times (1-0.401312) \times [(0.2)(-0.643901) + (0.3)(0.345661)] = -0.006026$

$E_3^2 = O_3^2(1-O_3^2)\,[W_{31}^2 E_1^3 + W_{32}^2 E_2^3]$
$\quad\; = 0.845535 \times (1-0.845535) \times [(0.1)(-0.643901) + (0.5)(0.345661)] = 0.014163$
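The back-propagated hidden-layer errors can be sketched as follows (variable names are illustrative; the rule is O·(1−O)·Σₖ W·Eₖ, as in the calculations above):

```python
O2 = [0.475021, 0.401312, 0.845535]    # hidden-layer outputs
E3 = [-0.643901, 0.345661]             # output-layer errors (R - O3)
W2 = [[0.9, 0.2, 0.1],                 # W2[k][j]: output node k, hidden node j
      [0.2, 0.3, 0.5]]

# error at hidden node j: sigmoid derivative times weighted sum of output errors
E2 = [O2[j] * (1 - O2[j]) * sum(W2[k][j] * E3[k] for k in range(2))
      for j in range(3)]
print([round(e, 6) for e in E2])       # [-0.127276, -0.006026, 0.014163]
```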

Finally, we can update the weights for the first iteration, using the rule w_new = w_old + η × E × (input to the weight) with learning rate η = 0.3 (the 0.3 factor in each line below):

w(1,1,1)= 0.2+0.3*(-0.127276)*4 = 0.047269


w(1,2,1)= 0.4+0.3*(-0.127276)*2 = 0.323634
w(1,3,1)= 0.6+0.3*(-0.127276)*-4 = 0.752731
w(1,4,1)= 0.7+0.3*(-0.127276)*1 = 0.661817
w(1,1,2)= 0.5+0.3*(-0.006026)*4 = 0.492769
w(1,2,2)= 0.3+0.3*(-0.006026)*2 = 0.296384
w(1,3,2)= 0.8+0.3*(-0.006026)*-4 = 0.807231
w(1,4,2)= 0.2+0.3*(-0.006026)*1 = 0.198192
w(1,1,3)= 0.4+0.3*(0.014163)*4 = 0.416996
w(1,2,3)= 0.3+0.3*(0.014163)*2 = 0.308498
w(1,3,3)= 0.2+0.3*(0.014163)*-4 = 0.183004
w(1,4,3)= 0.3+0.3*(0.014163)*1 = 0.304249
w(2,1,1)= 0.9+0.3*(-0.643901)* 0.475021 = 0.808240
w(2,2,1)= 0.2+0.3*(-0.643901)* 0.401312 = 0.122478
w(2,3,1)= 0.1+0.3*(-0.643901)* 0.845535 = -0.063332
w(2,1,2)= 0.2+0.3*(0.345661)* 0.475021 = 0.249259
w(2,2,2)= 0.3+0.3*(0.345661)* 0.401312 = 0.341615
w(2,3,2)= 0.5+0.3*(0.345661)* 0.845535 = 0.587681
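The full weight update can be checked with a sketch (names are my own; η = 0.3 as assumed above — first-layer weights use the raw inputs, second-layer weights use the hidden outputs):

```python
eta = 0.3                              # learning rate
x  = [4, 2, -4, 1]                     # input pattern
O2 = [0.475021, 0.401312, 0.845535]    # hidden-layer outputs
E2 = [-0.127276, -0.006026, 0.014163]  # hidden-layer errors
E3 = [-0.643901, 0.345661]             # output-layer errors

W1 = [[0.2, 0.4, 0.6, 0.7],            # W1[j][i]: hidden node j, input i
      [0.5, 0.3, 0.8, 0.2],
      [0.4, 0.3, 0.2, 0.3]]
W2 = [[0.9, 0.2, 0.1],                 # W2[k][j]: output node k, hidden node j
      [0.2, 0.3, 0.5]]

# delta rule: w_new = w_old + eta * (error at destination node) * (input to weight)
W1_new = [[w + eta * E2[j] * x[i] for i, w in enumerate(row)]
          for j, row in enumerate(W1)]
W2_new = [[w + eta * E3[k] * O2[j] for j, w in enumerate(row)]
          for k, row in enumerate(W2)]
print([round(w, 6) for w in W1_new[0]])  # [0.047269, 0.323634, 0.752731, 0.661817]
```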

For the detailed calculations behind this answer, open the Excel file "Check_For_Number_6.xlsx".
