Soft Computing
Perceptron Networks
These are the basic networks used in supervised learning.
Perceptron networks perform better than networks trained with the Hebb rule.
The perceptron network consists of three units:
– Sensory unit (input unit)
– Associator unit (hidden unit)
– Response unit (output unit)
• The input unit is connected to the hidden units with fixed weights (1, 0, or -1) assigned at random.
• A binary activation function is used in the input and hidden units.
• The output unit takes the activations 1, 0, or -1. A binary step function with a fixed threshold θ is used as the activation.
• The output of the perceptron is
$$y = f(y_{in}), \qquad f(y_{in}) = \begin{cases} 1 & \text{if } y_{in} > \theta \\ 0 & \text{if } -\theta \le y_{in} \le \theta \\ -1 & \text{if } y_{in} < -\theta \end{cases}$$
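A minimal Python sketch of this step function; the threshold value theta = 0.2 is an arbitrary illustration, not a value from the slides:

```python
def perceptron_activation(y_in, theta=0.2):
    """Binary step with fixed threshold theta: returns 1, 0, or -1."""
    if y_in > theta:
        return 1
    elif y_in < -theta:
        return -1
    return 0  # -theta <= y_in <= theta
```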
• Weight updation takes place between the hidden and output units.
• The error between the target and the calculated output is checked:
• Error = target - calculated output
• Weights are adjusted only when there is an error.
$w_i(\text{new}) = w_i(\text{old}) + \alpha\, t\, x_i$
$b(\text{new}) = b(\text{old}) + \alpha\, t$
• α is the learning rate and t is the target, which is -1 or +1.
• If there is no error, the weights do not change and training is stopped.
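For example (with illustrative values): if α = 1, t = 1, x_1 = 1, w_1(old) = 0, and b(old) = 0, then an error triggers the update w_1(new) = 0 + 1·1·1 = 1 and b(new) = 0 + 1·1 = 1.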
Single Classification Perceptron Network
Perceptron Training Algorithm for Single Output Classes
Step 0: Initialize the weights, bias, and learning rate α (0 < α ≤ 1).
Step 1: Perform Steps 2-6 until the final stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar or binary training pair, denoted s:t.
Step 3: Apply the identity activation function at the input layer:
$x_i = s_i$
Step 4: Calculate the output response for each output unit j = 1 to m.
First the net input is calculated: $y_{inj} = b_j + \sum_i x_i w_{ij}$.
Then the activation function is applied over the net input to calculate the output response.
Step 5: Adjust the weights and bias for j = 1 to m and i = 1 to n:
$\Delta w_{ij} = \alpha\,(t_j - y_{inj})\,x_i$
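A compact Python sketch of Steps 0-5, using the single-output update rule from the earlier slide (weights change by α·t·x_i only on error); the bipolar AND data, θ = 0, and α = 1 are illustrative assumptions:

```python
def train_perceptron(samples, targets, alpha=1.0, theta=0.0, max_epochs=100):
    """Perceptron training for a single output class (Steps 0-5)."""
    n = len(samples[0])
    w = [0.0] * n                               # Step 0: initial weights
    b = 0.0                                     # ... and bias
    for _ in range(max_epochs):                 # Step 1: repeat until no change
        changed = False
        for x, t in zip(samples, targets):      # Steps 2-3: x_i = s_i
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # Step 4: net input
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                          # Step 5: update only on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                         # no error, no weight change: stop
            break
    return w, b

# Illustrative usage: the bipolar AND function
w, b = train_perceptron([(1, 1), (1, -1), (-1, 1), (-1, -1)], [1, -1, -1, -1])
```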
Adaline Model
[Figure: Adaline model. Inputs x_1 ... x_n (units X_1 ... X_n) with weights w_1 ... w_n and a bias b on the fixed input x_0 = 1 feed the net input y_in = Σ x_i w_i. The activation f(y_in) gives the output, and the output error e = t - y_in drives the adaptive weight-adjustment algorithm.]
Adaline Training Algorithm
Step 0: Set the weights and bias to random values other than zero, and set the learning rate parameter α.
Step 1: Perform Steps 2-6 when the stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar training pair s:t.
[Flowchart: Adaline training. Initialize the weights, bias, and α. For each training pair s:t, update
$w_i(\text{new}) = w_i(\text{old}) + \alpha\,(t - y_{in})\,x_i$
$b(\text{new}) = b(\text{old}) + \alpha\,(t - y_{in})$
then calculate the error $E_i = \sum (t - y_{in})^2$ and stop when $E_i = E_s$, i.e., when the error no longer changes.]
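A minimal Python sketch of this training loop using the delta rule from the flowchart; the bipolar AND data and the stopping tolerance are illustrative assumptions:

```python
import random

def train_adaline(samples, targets, alpha=0.1, tol=1e-3, max_epochs=1000):
    """Adaline training: delta (LMS) rule with a squared-error stopping test."""
    n = len(samples[0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # Step 0: nonzero random values
    b = random.uniform(-0.5, 0.5)
    prev_error = float("inf")
    for _ in range(max_epochs):
        error = 0.0
        for x, t in zip(samples, targets):             # for each pair s:t
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            w = [wi + alpha * (t - y_in) * xi for wi, xi in zip(w, x)]
            b += alpha * (t - y_in)
            error += (t - y_in) ** 2                   # E_i = sum (t - y_in)^2
        if abs(prev_error - error) < tol:              # stop when E_i ~= E_s
            break
        prev_error = error
    return w, b

# Illustrative usage: bipolar AND
w, b = train_adaline([(1, 1), (1, -1), (-1, 1), (-1, -1)], [1, -1, -1, -1])
```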
Step 7: Each hidden unit sums its delta inputs from the output units: $\delta_{inj} = \sum_k \delta_k w_{jk}$. The term $\delta_{inj}$ is then multiplied by the derivative of $f(z_{inj})$ to calculate the error term: $\delta_j = \delta_{inj}\, f'(z_{inj})$.
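A small Python sketch of Step 7 for one hidden unit, assuming a binary sigmoid f(z) = 1/(1 + e^-z), whose derivative is f(z)(1 - f(z)); all variable names are illustrative:

```python
import math

def f(z):
    """Binary sigmoid activation (an assumed choice of f)."""
    return 1.0 / (1.0 + math.exp(-z))

def hidden_error_term(z_inj, output_deltas, w_j):
    """Step 7: delta_inj = sum_k delta_k * w_jk, multiplied by f'(z_inj)."""
    delta_inj = sum(d_k * w_jk for d_k, w_jk in zip(output_deltas, w_j))
    f_z = f(z_inj)
    return delta_inj * f_z * (1.0 - f_z)   # f'(z) = f(z) * (1 - f(z)) for the sigmoid
```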
Back-Propagation Network (BPN)
Weight and bias updation (Phase-III)
Problem
Initial weights:
[V11, V21, V01] = [0.6, -0.1, 0.3]
[V12, V22, V02] = [-0.3, 0.4, 0.5]
[W1, W2, W0] = [0.4, 0.1, -0.2]
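A Python sketch of one forward pass and one update with these initial weights. The input [0, 1], target t = 1, learning rate α = 0.25, and binary sigmoid activation are assumptions for illustration, since the slide does not show them:

```python
import math

def sigmoid(z):
    """Binary sigmoid activation f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

# Initial weights from the slide: V[i][j] connects input x_i to hidden unit z_j
V  = [[0.6, -0.3],    # [V11, V12]
      [-0.1, 0.4]]    # [V21, V22]
v0 = [0.3, 0.5]       # hidden biases [V01, V02]
W  = [0.4, 0.1]       # hidden-to-output weights [W1, W2]
w0 = -0.2             # output bias W0

# Assumed for illustration (not shown on the slide):
x, t, alpha = [0.0, 1.0], 1.0, 0.25

# Phase I: forward pass
z_in = [v0[j] + sum(x[i] * V[i][j] for i in range(2)) for j in range(2)]
z = [sigmoid(s) for s in z_in]
y_in = w0 + sum(z[j] * W[j] for j in range(2))
y = sigmoid(y_in)

# Phase II: back-propagation of error; f'(y_in) = y(1 - y) for the sigmoid
delta_out = (t - y) * y * (1.0 - y)
delta_hid = [delta_out * W[j] * z[j] * (1.0 - z[j]) for j in range(2)]

# Phase III: weight and bias updation
W  = [W[j] + alpha * delta_out * z[j] for j in range(2)]
w0 += alpha * delta_out
V  = [[V[i][j] + alpha * delta_hid[j] * x[i] for j in range(2)] for i in range(2)]
v0 = [v0[j] + alpha * delta_hid[j] for j in range(2)]
```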
The output of the ith RBF (hidden) unit is Gaussian: $o_i = \exp\!\big(-\sum_j (x_{ji} - \hat{x}_{ji})^2 / \sigma_i^2\big)$, where $\hat{x}_{ji}$ is the center of the ith RBF unit for the input variables; $\sigma_i$ is the width of the ith RBF unit; and $x_{ji}$ is the jth variable of the input pattern.
Step 7: Calculate the output of the neural network as a weighted sum of the RBF unit outputs (a sketch follows the figure below).
[Figure: RBF network training. The input x_p feeds a layer of basis functions whose centers c_i are found by K-means and whose widths σ_i are set by the K-Nearest-Neighbor heuristic; linear regression then determines the output weights w.]
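A Python/NumPy sketch of this pipeline. For brevity it uses fixed, evenly spaced centers and a constant width in place of the K-means and K-nearest-neighbor steps, and fits the weights by ordinary least squares; the sin(x) data set is purely illustrative:

```python
import numpy as np

def rbf_outputs(X, centers, widths):
    """o_i(x) = exp(-sum_j (x_ji - c_ij)^2 / sigma_i^2), one Gaussian per RBF unit."""
    # X: (n_samples, n_dims), centers: (n_units, n_dims), widths: (n_units,)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / widths ** 2)

# Illustrative data: 1-D regression of sin(x) on [0, 2*pi]
X = np.linspace(0.0, 2.0 * np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()

centers = np.linspace(0.0, 2.0 * np.pi, 8).reshape(-1, 1)  # stand-in for K-means centers c_i
widths = np.full(8, 0.8)                                   # stand-in for KNN-derived widths sigma_i

Phi = rbf_outputs(X, centers, widths)
Phi = np.hstack([Phi, np.ones((len(X), 1))])   # bias column w_0
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear-regression step for the weights
y_hat = Phi @ w                                # Step 7: network output
```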