Ci - Adaline & Madaline Network1
Prof. B. R. Suthar
Learning rule/Learning process
• The net input is calculated based on training input patterns and the
weights.
• The training process continues until the error, i.e. the difference
between the target and the net input, becomes minimal.
Algorithm Steps
1. Initialize weights (not zero; small random values are used).
Set the learning rate α.
2. While stopping condition is false, do Steps 3-7.
3. For each bipolar training pair s:t, perform Steps 4-6.
4. Set activations of input units xi = si, for i=1 to n.
5. Compute net input to output unit: y-in = b + ∑ xiwi, for i=1 to n
6. Update bias and weights, i=1 to n.
wi(new) = wi(old) + α * (t - y-in) * xi
b(new) = b(old) + α * (t - y-in)
Activation function:
f(p) = 1, if p >= 0
f(p) = -1, if p < 0
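The Adaline steps above can be sketched in Python. The function and parameter names here are illustrative, and the stopping condition (largest weight change below a tolerance) is one common choice:

```python
import random

def f(p):
    """Bipolar step activation: +1 if p >= 0, else -1."""
    return 1 if p >= 0 else -1

def adaline_train(samples, targets, alpha=0.1, max_epochs=100, tol=0.01):
    """Delta-rule training of a single Adaline unit (illustrative sketch)."""
    n = len(samples[0])
    # Step 1: small random (non-zero) weights and bias
    w = [random.uniform(-0.1, 0.1) for _ in range(n)]
    b = random.uniform(-0.1, 0.1)
    for _ in range(max_epochs):                      # Step 2: repeat until stopping
        largest_change = 0.0
        for x, t in zip(samples, targets):           # Steps 3-4: each bipolar pair s:t
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # Step 5: net input
            err = t - y_in                           # error uses the NET input, not f(y_in)
            for i in range(n):                       # Step 6: update weights and bias
                w[i] += alpha * err * x[i]
            b += alpha * err
            largest_change = max(largest_change, abs(alpha * err))
        if largest_change < tol:                     # Step 7: stopping condition
            break
    return w, b
```

For example, training on the bipolar AND patterns `[(1,1),(1,-1),(-1,1),(-1,-1)]` with targets `[1,-1,-1,-1]` should drive the weights near (0.5, 0.5) with bias near -0.5, so that f(y-in) reproduces the targets.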
MRI Algorithm Steps
1. Initialize weights, bias and set learning rate as α.
v1=v2=0.5 and b3=0.5. Other weights may be small random values.
2. While stopping condition is false, do Steps 3-9.
3. For each bipolar training pair s:t, do steps 4-8.
4. Set activations of input units:
xi=si for i=1 to n
5. Calculate net input of hidden Adaline units.
z-in1 = b1 + x1w11 + x2w21
z-in2 = b2 + x1w12 + x2w22
MRI Algorithm Steps (cont…)
6. Find the output of each hidden Adaline unit using the activation mentioned above.
z1 = f(z-in1)
z2 = f(z-in2)
7. Calculate net input to the output unit:
y-in = b3 + z1v1 + z2v2
Apply the activation to get the output of the net:
y = f(y-in)
8. Find the error and do weight updation.
If t = y, no weight updation.
If t != y, then:
If t = 1, update weights on the unit zj whose net input is closest to 0:
wij(new) = wij(old) + α * (1 - z-inj) * xi
bj(new) = bj(old) + α * (1 - z-inj)
If t = -1, update weights on all units zk which have positive net input:
wik(new) = wik(old) + α * (-1 - z-ink) * xi
bk(new) = bk(old) + α * (-1 - z-ink)
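The MRI steps can be sketched in Python for the two-input, two-hidden-Adaline network described above, with the output-unit weights fixed at v1 = v2 = b3 = 0.5 as the notes specify. Function names and the random initialization range are illustrative assumptions:

```python
import random

def f(p):
    """Bipolar step activation: +1 if p >= 0, else -1."""
    return 1 if p >= 0 else -1

def madaline_mri_train(samples, targets, alpha=0.5, max_epochs=100):
    """MRI training of a 2-input Madaline with two hidden Adalines (sketch)."""
    # Step 1: hidden-layer weights start as small random values;
    # the output unit (v1 = v2 = b3 = 0.5) stays fixed during training.
    w = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]  # w[i][j]
    b = [random.uniform(-0.1, 0.1) for _ in range(2)]
    for _ in range(max_epochs):                              # Step 2
        changed = False
        for x, t in zip(samples, targets):                   # Steps 3-4
            z_in = [b[j] + x[0] * w[0][j] + x[1] * w[1][j]   # Step 5: hidden net inputs
                    for j in range(2)]
            z = [f(zi) for zi in z_in]                       # Step 6: hidden outputs
            y = f(0.5 + 0.5 * z[0] + 0.5 * z[1])             # Step 7: net output
            if t == y:                                       # Step 8: no error, no update
                continue
            changed = True
            if t == 1:
                # Nudge the unit whose net input is closest to 0 toward +1.
                j = min(range(2), key=lambda k: abs(z_in[k]))
                for i in range(2):
                    w[i][j] += alpha * (1 - z_in[j]) * x[i]
                b[j] += alpha * (1 - z_in[j])
            else:
                # t = -1: nudge every unit with positive net input toward -1.
                for k in range(2):
                    if z_in[k] > 0:
                        for i in range(2):
                            w[i][k] += alpha * (-1 - z_in[k]) * x[i]
                        b[k] += alpha * (-1 - z_in[k])
        if not changed:                                      # Step 9: stop when error-free
            break
    return w, b

def madaline_predict(x, w, b):
    """Forward pass using the fixed output unit (v1 = v2 = b3 = 0.5)."""
    z = [f(b[j] + x[0] * w[0][j] + x[1] * w[1][j]) for j in range(2)]
    return f(0.5 + 0.5 * z[0] + 0.5 * z[1])
```

With the fixed output weights, the output unit computes the logical OR of the two hidden outputs, which is why this architecture is classically trained on the bipolar XOR problem; whether a given run converges on XOR depends on the random initialization.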