BACK PROPAGATION ALGORITHM
During the feed-forward stage, each input unit receives an input signal and transmits it to each of the hidden units z1, ..., zp. Each hidden unit then computes its activation function and sends its signal zj to each output unit. Each output unit computes its activation function to form the response of the net for the given input pattern.
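The feed-forward stage above can be sketched in Python (a minimal illustration, assuming a binary sigmoid activation; the layer layout and function names are hypothetical, not taken from the text):

```python
import math

def sigmoid(x):
    # Binary sigmoid activation, commonly used with backpropagation
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(x, v, w):
    # x: input vector
    # v: input-to-hidden weights, v[i+1][j]; row 0 holds the hidden biases
    # w: hidden-to-output weights, w[j+1][k]; row 0 holds the output biases
    z = [sigmoid(v[0][j] + sum(x[i] * v[i + 1][j] for i in range(len(x))))
         for j in range(len(v[0]))]          # hidden-unit signals z1..zp
    y = [sigmoid(w[0][k] + sum(z[j] * w[j + 1][k] for j in range(len(z))))
         for k in range(len(w[0]))]          # net response y1..ym
    return z, y
```

With all weights zero, every unit outputs sigmoid(0) = 0.5, which is a quick sanity check for the indexing.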
During back propagation of errors, each output unit compares its computed activation yk with its target value tk to determine the associated error for that pattern with that unit. Based on this error, the factor δk (k = 1, 2, ..., m) is computed and is used to distribute the error at output unit yk back to all units in the previous layer. Similarly, the factor δj (j = 1, 2, ..., p) is computed for each hidden unit zj.
During the final stage, the weights and biases are updated using the factor δ and the activations.
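The error factors δk and δj described above can be sketched as follows (assuming sigmoid activations, whose derivative is f'(x) = f(x)(1 - f(x)); the helper names are hypothetical):

```python
def output_deltas(y, t):
    # delta_k = (t_k - y_k) * f'(y_in_k); for the sigmoid, f' = y_k * (1 - y_k)
    return [(t[k] - y[k]) * y[k] * (1.0 - y[k]) for k in range(len(y))]

def hidden_deltas(z, w, delta_k):
    # delta_j = (sum_k delta_k * w_jk) * f'(z_in_j)
    # w[j+1][k] is the hidden-to-output weight; row 0 holds the biases
    return [sum(delta_k[k] * w[j + 1][k] for k in range(len(delta_k)))
            * z[j] * (1.0 - z[j]) for j in range(len(z))]
```

Each δk depends only on that output unit's own error, while each δj aggregates the output deltas weighted by the connections leaving unit zj.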
INITIALIZATION OF WEIGHTS:
FEED FORWARD:
Step 4: Each input unit receives the input signal xi and transmits it to all units in the hidden layer.
Step 5: Each hidden unit sums its weighted input signals.
Each hidden unit (zj; j = 1, 2, ..., p) updates its bias and weights (i = 0, 1, 2, ..., n):
the weight correction term is Δvij = α δj xi
the bias correction term is Δv0j = α δj
vij(new) = vij(old) + Δvij ; v0j(new) = v0j(old) + Δv0j
Step 10: Test the stopping condition.
The stopping condition is met when the total error falls to an acceptably small value.
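Putting the steps together, a minimal training loop for a hypothetical 2-2-1 network on the XOR patterns (discussed later in these notes), including the Step 10 stopping test, might look like this. The layer sizes, learning rate α, initialization range, and epoch limit are assumptions for illustration, not values given in the text:

```python
import math
import random

def train_xor(alpha=0.5, max_epochs=20000, tol=0.01, seed=1):
    # Hypothetical 2-2-1 sigmoid network trained by the steps above:
    # feed forward, back-propagate the delta factors, update weights and
    # biases, and stop when the total squared error drops below tol.
    random.seed(seed)
    f = lambda x: 1.0 / (1.0 + math.exp(-x))
    # Small random initial weights (small weights reduce computation, per the merits above)
    v = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]  # input->hidden; row 0 = biases
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]                      # hidden->output; w[0] = bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    total = float("inf")
    for epoch in range(max_epochs):
        total = 0.0
        for x, t in data:
            # Feed forward (Steps 4-5)
            z = [f(v[0][j] + x[0] * v[1][j] + x[1] * v[2][j]) for j in range(2)]
            y = f(w[0] + z[0] * w[1] + z[1] * w[2])
            total += (t - y) ** 2
            # Back propagate the error factors
            dk = (t - y) * y * (1 - y)                                  # output factor delta_k
            dj = [dk * w[j + 1] * z[j] * (1 - z[j]) for j in range(2)]  # hidden factors delta_j
            # Update weights and biases: delta_w = alpha * delta * activation
            w[0] += alpha * dk
            for j in range(2):
                w[j + 1] += alpha * dk * z[j]
                v[0][j] += alpha * dj[j]
                v[1][j] += alpha * dj[j] * x[0]
                v[2][j] += alpha * dj[j] * x[1]
        if total < tol:   # Step 10: test the stopping condition
            return epoch, total
    return max_epochs, total
```

Note that with only two hidden units, gradient descent on XOR can stall in a local minimum (one of the demerits listed below), so convergence depends on the initial weights.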
MERITS OF BPA:
1.The mathematical formulation used here can be applied to any network
2.The computation is reduced if the weights chosen at the beginning are small
DEMERITS OF BPA:
1.Slow convergence
2.Local minima problem
3.Scaling
4.Need for a teacher
APPLICATIONS:
Image compression
Data compression
Control problems
Non-linear simulation
Fault detection problems
Face recognition
Load forecasting problems
LEARNING RATE:
MOMENTUM:
CHARACTERISTICS:
1.Can operate in an unknown environment
2.Can operate in stationary and non-stationary environments
3.Minimizes the instantaneous squared error
4.Stochastic
5.Approximate
Advantages:
1.Simplicity of implementation
2.Stable and robust performance under different signal conditions
Disadvantages:
1.Slow convergence
The estimated output is computed as a linear combination of the input signal x(k) and the weight vector w(k). The estimated output y(k) is then compared with the desired output to find the error. If there is any difference, the error signal is used as a feedback mechanism to adjust the weight vector w(k). On the other hand, if there is no difference between these signals, no adjustment is needed, since the estimated output is the desired one.
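One common form of this feedback adjustment is the LMS (delta-rule) update, w ← w + μ·e·x. The sketch below assumes that form; the step size mu and the function name are hypothetical:

```python
def lms_step(w, x, d, mu=0.1):
    # Estimated output: linear combination of input vector x and weight vector w
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y                      # error between desired and estimated output
    if e != 0.0:                   # adjust only when the outputs differ
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w, y, e
```

Fed a stream of input/desired-output pairs, repeated calls drive the error toward zero, at which point the weights stop changing.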
XOR PROBLEM:
The XOR problem can be solved by introducing a single hidden layer between the input and output layers. Consider two neurons in the hidden layer, since there are two inputs to the input layer.
The Architecture for an XOR problem is drawn below:
FOR N1:
The neuron labeled N1 in the hidden layer has the weights w11 = w12 = 1.
By including one hidden layer we can solve the XOR problem, which is not linearly separable.
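A hand-crafted solution can be illustrated with binary step neurons. The text only specifies w11 = w12 = 1 for N1; the second hidden unit's weights and all thresholds below are assumptions chosen so that the hidden units compute OR and AND:

```python
def step(net, theta):
    # Binary step activation with threshold theta
    return 1 if net >= theta else 0

def xor_net(x1, x2):
    # Hypothetical 2-2-1 network: both hidden units use weights 1, 1 (as for N1)
    z1 = step(1 * x1 + 1 * x2, 0.5)   # hidden unit 1 computes x1 OR x2
    z2 = step(1 * x1 + 1 * x2, 1.5)   # hidden unit 2 computes x1 AND x2
    return step(z1 - z2, 0.5)         # output: OR but not AND, i.e. XOR
```

The hidden layer maps the four input patterns into a space where a single line separates the two classes, which is exactly why one hidden layer suffices.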
A set is defined as a collection of objects.