Unit I – Introduction to ANN

S. Vivekanandan

Cabin: TT 319A
E-Mail: svivekanandan@vit.ac.in
Mobile: 8124274447
Content

Adaline
• Delta Learning Rule
• Adaline introduction
• Architecture
• Algorithm
• Examples for logic functions
09-02-2017 Dr. S. Vivekanandan Asso. Prof.-SELECT 2


Delta Learning Rule
• Also known as the Widrow-Hoff rule, after its originators Widrow and Hoff (1960).
• Valid only for continuous activation functions and in the supervised training mode.
• "The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse."
• This rule assumes that the error signal is directly measurable. The aim is to minimize the error.
• Adjusts the weights to reduce the difference between the net input to the output unit and the desired output, which results in the least mean squared error (LMS error).
• Also called the LMS learning rule.
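The rule above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are mine, not from any standard library):

```python
# Minimal sketch of the delta (Widrow-Hoff / LMS) rule for one neuron.
# The weight adjustment is proportional to error * input, as stated above.

def delta_update(w, b, x, t, alpha):
    """One LMS update step for weights w, bias b, input x, target t."""
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))   # net input
    error = t - y_in                                   # error signal
    w = [wi + alpha * error * xi for wi, xi in zip(w, x)]
    b = b + alpha * error
    return w, b, error

# One update for input (1, 1), target -1, starting from w1 = w2 = b = 0.2:
w, b, e = delta_update([0.2, 0.2], 0.2, [1, 1], -1, alpha=0.2)
```

This single call already reproduces the first row of the worked example that appears later in these slides.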



ADALINE

• ADALINE = ADAptive LINEar neuron (or ADAptive LINEar system).
• Uses bipolar activations for its inputs and target.
• The weights and the bias are adjustable.
• The bias activation is always 1.
• It resembles a single-layer network.
• X1 to Xn are the inputs and Y is the output.
• W1 to Wn are the weights, which change as training progresses.
• Training continues until the error, which is the difference between the target and the net input, becomes minimal.

[Figure: a single ADALINE unit with inputs X1 ... Xn, weights w1 ... wn, a bias b whose input is fixed at 1, and output Y]
Algorithm
Step 1: Initialize all weights (not zero, but small random values).
        Set the learning rate α (0 to 1).
Step 2: While the stopping condition is false, do steps 3-7.
Step 3: For each input/target pair (s : t), perform steps 4-6.
Step 4: Set the activations of the input vector:
        xi = si (i = 1 to n)
Step 5: Compute the output unit response:
        Yin = b + Σ xi wi
Step 6: Update the weights and bias:
        wi(new) = wi(old) + α(t − Yin) xi
        b(new) = b(old) + α(t − Yin)
Step 7: Test for the stopping condition.
* Stopping condition: e.g. when the weight change becomes sufficiently small, or after a fixed number of iterations.
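Steps 1-7 can be sketched as a short training loop (an illustrative implementation; the function name, the tolerance-based stopping test, and the starting weights are my choices, not prescribed by the slides):

```python
# Sketch of the ADALINE training algorithm (steps 1-7 above).
# Stopping: largest weight change below a tolerance, or max_epochs reached.

def train_adaline(samples, alpha=0.1, tol=1e-3, max_epochs=100, b=0.0):
    n = len(samples[0][0])
    w = [0.1] * n                                # step 1: small nonzero weights
    for _ in range(max_epochs):                  # step 2
        max_change = 0.0
        for x, t in samples:                     # step 3: each (s : t) pair
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # step 5
            err = t - y_in
            for i in range(n):                   # step 6: update weights
                dw = alpha * err * x[i]
                w[i] += dw
                max_change = max(max_change, abs(dw))
            b += alpha * err                     # ... and bias
        if max_change < tol:                     # step 7: stopping test
            break
    return w, b

# AND NOT with bipolar inputs/targets (the example worked out later):
data = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = train_adaline(data, alpha=0.1)
```

On this data the weights settle near (0.5, -0.5) with bias near -0.5, which matches the final weights quoted at the end of the example.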
Contd.
• Commonly the value α = 0.1 is chosen. If too large a value is chosen, the learning process will not converge; if too small, learning will be extremely slow.
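This sensitivity to α can be seen numerically (an illustrative experiment using a plain LMS loop on the AND NOT data from the later slides; the function name and starting weights are mine):

```python
# Illustrative: the same LMS loop run with two learning rates.
# A small alpha shrinks the total squared error; too large a value diverges.

def epoch_errors(alpha, epochs=5):
    data = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
    w, b = [0.2, 0.2], 0.2
    totals = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))
            err = t - y_in
            total += err ** 2
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            b += alpha * err
        totals.append(total)                     # total squared error per epoch
    return totals

small = epoch_errors(alpha=0.1)   # error per epoch shrinks steadily
large = epoch_errors(alpha=1.0)   # error per epoch grows: no convergence
```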



AND NOT function (bipolar inputs and targets)

X1  X2 |  t
 1   1 | -1
 1  -1 |  1
-1   1 | -1
-1  -1 | -1

• The AND NOT function gives a high output '1' only when x1 is high and x2 is low.
• Initially the weights and bias are set to small random values:
  W1 = W2 = b = 0.2, α = 0.2
• Set the activations of the input units, e.g. Xi = (1, 1).
• Calculate the net input:
  Yin = b + Σ Xi Wi
• The operations are carried out for 6 epochs, during which the mean squared error is minimised.
Epoch 1 (initial weights: W1 = 0.2, W2 = 0.2, b = 0.2)

X1  X2   b |  t | Y-in  | t - Y-in | ΔW1    ΔW2    Δb    | W1     W2     b     | (t - Y-in)²
 1   1   1 | -1 |  0.6  | -1.6     | -0.32  -0.32  -0.32 | -0.12  -0.12  -0.12 | 2.56
 1  -1   1 |  1 | -0.12 |  1.12    |  0.22  -0.22   0.22 |  0.10  -0.34   0.10 | 1.25
-1   1   1 | -1 | -0.34 | -0.66    |  0.13  -0.13  -0.13 |  0.24  -0.48  -0.03 | 0.43
-1  -1   1 | -1 |  0.21 | -1.2     |  0.24   0.24  -0.24 |  0.48  -0.23  -0.27 | 1.47

ΣE = 5.71
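The epoch-1 rows above can be reproduced with a short script (a check of the hand computation; the slides round to two decimals, which explains small differences):

```python
# Re-compute epoch 1 of the AND NOT example: W1 = W2 = b = 0.2, alpha = 0.2.
data = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
w, b, alpha = [0.2, 0.2], 0.2, 0.2
total_error = 0.0
for x, t in data:
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # net input
    err = t - y_in
    total_error += err ** 2
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    b += alpha * err
    print(round(y_in, 2), round(err, 2), [round(v, 2) for v in w], round(b, 2))
# Final row matches the table: w ≈ (0.48, -0.23), b ≈ -0.27,
# and the epoch's total error is 5.7156 ≈ 5.71.
```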


Epoch 2 (initial weights: W1 = 0.48, W2 = -0.23, b = -0.27)

X1  X2   b |  t | Y-in  | t - Y-in | ΔW1     ΔW2     Δb     | W1    W2     b     | (t - Y-in)²
 1   1   1 | -1 | -0.02 | -0.98    | -0.195  -0.195  -0.195 | 0.28  -0.43  -0.46 | 0.95
 1  -1   1 |  1 |  0.25 |  0.76    |  0.15   -0.15    0.15  | 0.43  -0.58  -0.31 | 0.57
-1   1   1 | -1 | -1.33 |  0.33    | -0.065   0.065   0.065 | 0.37  -0.51  -0.25 | 0.106
-1  -1   1 | -1 | -0.11 | -0.90    |  0.18    0.18   -0.18  | 0.55  -0.33   0.43 | 0.8

ΣE = 2.43


Epoch 3 (initial weights: W1 = 0.55, W2 = -0.33, b = 0.43)

X1  X2   b |  t | Y-in  | t - Y-in | ΔW1     ΔW2     Δb     | W1    W2     b    | (t - Y-in)²
 1   1   1 | -1 |  0.64 | -1.64    | -0.33   -0.33   -0.33  | 0.22  -0.66  0.1  | 2.69
 1  -1   1 |  1 |  0.98 |  0.018   |  0.036   0.036   0.036 | 0.22  -0.69  0.14 | 0.003
-1   1   1 | -1 | -0.79 | -0.21    |  0.043   0.043   0.043 | 0.27  -0.74  0.09 | 0.046
-1  -1   1 | -1 |  0.57 | -1.57    |  0.313   0.313  -0.313 | 0.58  -0.43  0.22 | 2.46

ΣE = 5.198


Epoch 4 (initial weights: W1 = 0.58, W2 = -0.43, b = 0.22)

X1  X2   b |  t | Y-in   | t - Y-in | ΔW1     ΔW2     Δb     | W1    W2     b     | (t - Y-in)²
 1   1   1 | -1 | -0.069 | -0.93    | -0.186  -0.186  -0.186 | 0.39  -0.61  -0.41 | 0.8668
 1  -1   1 |  1 |  0.601 |  0.39    |  0.08    0.08   -0.08  | 0.47  -0.69  -0.33 | 0.159
-1   1   1 | -1 | -1.49  |  0.49    | -0.099   0.099   0.099 | 0.37  -0.59  -0.23 | 0.248
-1  -1   1 | -1 |  0.006 | -0.994   |  0.2     0.2    -0.2   | 0.57  -0.4   -0.45 | 0.988

ΣE = 2.2257


Epoch 5 (initial weights carried over from epoch 4: W1 = 0.57, W2 = -0.4, b = -0.45)

X1  X2   b |  t | Y-in   | t - Y-in | ΔW1     ΔW2     Δb     | W1    W2     b     | (t - Y-in)²
 1   1   1 | -1 | -0.273 | -0.727   | -0.145  -0.145  -0.145 | 0.43  -0.55  -0.59 | 0.528
 1  -1   1 |  1 |  0.33  |  0.62    |  0.124  -0.124   0.124 | 0.55  -0.67  -0.47 | 0.382
-1   1   1 | -1 | -1.69  |  0.69    | -0.138   0.138   0.138 | 0.42  -0.53  -0.33 | 0.476
-1  -1   1 | -1 | -0.21  | -0.79    |  0.157   0.157  -0.157 | 0.57  -0.37  -0.49 | 0.612

ΣE = 2.004


Epoch 6 (initial weights: W1 = 0.57, W2 = -0.37, b = -0.49)

X1  X2   b |  t | Y-in   | t - Y-in | ΔW1     ΔW2     Δb     | W1    W2      b      | (t - Y-in)²
 1   1   1 | -1 | -0.289 | -0.711   | -0.142  -0.142   0.142 | 0.43  -0.52   -0.63  | 0.5055
 1  -1   1 |  1 |  0.317 |  0.68    |  0.137  -0.137   0.137 | 0.57  -0.65   -0.492 | 0.4665
-1   1   1 | -1 | -1.712 |  0.71    | -0.142   0.142   0.142 | 0.42  -0.6    -0.35  | 0.495
-1  -1   1 | -1 | -0.264 | -0.74    |  0.147   0.147  -0.147 | 0.57  -0.452  -0.497 | 0.541

ΣE = 2.004



Final weights: W1 = 0.5, W2 = -0.5, b = -0.5
We know that
Yin = b + Σ xi wi
E = (t − Yin)²

X1  X2   b |  t | Yin (with W1 = 0.5, W2 = -0.5, b = -0.5) | E = (t − Yin)²
 1   1   1 | -1 | -0.5                                     | 0.25
 1  -1   1 |  1 |  0.5                                     | 0.25
-1   1   1 | -1 | -1.5                                     | 0.25
-1  -1   1 | -1 | -0.5                                     | 0.25

ΣE = Σ(t − Yin)² = 1
Thus the total error is reduced from 5.71 at epoch 1 to about 2 at epoch 6; with the rounded final weights W1 = 0.5, W2 = -0.5, b = -0.5 it drops to 1.
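This closing check can be scripted as well (verifying that the rounded final weights give total squared error 1 and the correct output sign for every AND NOT pattern):

```python
# Verify the final ADALINE weights for AND NOT: W1 = 0.5, W2 = -0.5, b = -0.5.
data = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = [0.5, -0.5], -0.5
total = 0.0
for x, t in data:
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # net input per pattern
    total += (t - y_in) ** 2                          # each term is 0.25
    assert (y_in > 0) == (t > 0)                      # sign matches the target
print(total)  # 1.0
```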

