Artificial Neural Networks For Engineers and Scientists. 24


Preliminaries of Artificial Neural Network 7

Step 3: The error value is computed as

E = ½(dk − ok)² + E

Here,
dk is the desired output
ok is the output of the ANN
Step 4: The error signal terms of the output layer and the hidden layer
are computed as

δok = (dk − ok) f′(vk·y)   (error signal of the output layer)

δyj = (1 − yj) f(wj·x) δok vkj   (error signal of the hidden layer)

where ok = f(vk·y), j = 1, 2, 3, and k = 1.
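As an illustration, the two error signals can be evaluated for a sigmoid activation f(z) = 1/(1 + e^(−z)), for which f′(z) = f(z)(1 − f(z)) — so (1 − yj)f(wj·x) = yj(1 − yj) in the hidden-layer formula. The input, weights, and desired output below are hypothetical values, not taken from Figure 1.4.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 1-3-1 network: one input (i = 1), three hidden units
# (j = 1, 2, 3), one output (k = 1), as in the text.
x = np.array([0.5])                    # input vector
W = np.array([[0.1], [0.2], [-0.1]])   # input-to-hidden weights w_ji, shape (3, 1)
v = np.array([0.3, -0.2, 0.1])         # hidden-to-output weights v_kj

y = sigmoid(W @ x)                     # hidden outputs y_j = f(w_j . x)
o = sigmoid(v @ y)                     # network output o_k = f(v_k . y)
d = 1.0                                # desired output d_k (assumed)

delta_o = (d - o) * o * (1 - o)        # output-layer signal: (d_k - o_k) f'(v_k . y)
delta_y = (1 - y) * y * delta_o * v    # hidden-layer signals: (1 - y_j) y_j delta_o v_kj
```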


Step 5: Compute components of error gradient vectors as
∂E/∂wji = δyjxi for j = 1, 2, 3 and i = 1. (For the particular ANN model,
see Figure 1.4.)
∂E/∂vkj = δokyj for j = 1, 2, 3 and k = 1. (For the particular ANN model,
see Figure 1.4.)
Step 6: Weights are modified using the gradient descent method from
the input layer to the hidden layer and from the hidden layer to the
output layer as

wji^(n+1) = wji^n + Δwji^n = wji^n − η (∂E/∂wji)^n

vkj^(n+1) = vkj^n + Δvkj^n = vkj^n − η (∂E/∂vkj)^n

where
η is the learning parameter
n is the iteration step
E is the error function
Step 7: If E < Emax, terminate the training session; otherwise, set E ← 0
and initiate a new training cycle starting from step 2.
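Putting the steps together, here is a minimal Python sketch of the whole training loop for the 1-3-1 network, assuming a sigmoid activation; the training pattern, initial weights, η, and Emax are illustrative choices, not values from the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed single training pattern and hyperparameters (illustrative only).
x, d = np.array([0.8]), 0.9          # input vector and desired output d_k
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))          # input-to-hidden weights w_ji
v = rng.normal(size=3)               # hidden-to-output weights v_kj
eta, E_max = 0.5, 1e-4               # learning parameter and error bound

for n in range(10_000):
    y = sigmoid(W @ x)               # hidden outputs y_j = f(w_j . x)
    o = sigmoid(v @ y)               # network output o_k = f(v_k . y)
    E = 0.5 * (d - o) ** 2           # Step 3: error value
    if E < E_max:                    # Step 7: stop once the error is small enough
        break
    delta_o = (d - o) * o * (1 - o)        # Step 4: output-layer error signal
    delta_y = (1 - y) * y * delta_o * v    # Step 4: hidden-layer error signals
    W += eta * np.outer(delta_y, x)        # Step 6: gradient-descent update of w_ji
    v += eta * delta_o * y                 # Step 6: gradient-descent update of v_kj
```

With a single pattern the loop typically reaches the error bound in a few hundred iterations; in practice E would be accumulated over all training patterns before the test in step 7, as the "+ E" term in step 3 indicates.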

The generalized delta learning rule propagates the error back by one layer,
allowing the same process to be repeated for every layer.
