PART - I - Chapter - 3 - Neural Network
PART 1: Neural Network
Instructor: Dr. Dang Xuan Ba
Email: badx@hcmute.edu.vn
Chapter 2: Single-layer Feedforward Neural Network
Number of layers: $L$
Each layer $i$ contains $n_i$ neurons.
Each neuron $j$ of layer $i$ has:
+ Input signal: ${}^{[i]}x_j = [\,{}^{[i-1]}y_1 \;\ldots\; {}^{[i-1]}y_{n_{i-1}} \; 1\,]^T$
+ Weight matrix: ${}^{[i]}w_j = [\,{}^{[i]}w_1 \;\ldots\; {}^{[i]}w_{n_{i-1}} \; {}^{[i]}w_{n_{i-1}+1}\,]^T$
+ Integration function (${}^{[i]}net_j$): linear, quadratic... functions
+ Activation function (${}^{[i]}a(net)$): linear, s-shape, threshold... functions
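The definitions above can be sketched for a single neuron; a minimal illustration assuming a linear integration function and an s-shape (sigmoid) activation with slope lambda. The function name and the numeric inputs are illustrative, not from the slides:

```python
import numpy as np

# One neuron of layer i: linear integration net_j = w_j^T x_j,
# followed by a sigmoid activation a(net) = 1 / (1 + exp(-lam * net)).
def neuron_forward(x, w, lam=1.0):
    """x: input vector with the bias term appended; w: weight vector."""
    net = np.dot(w, x)                     # integration function
    y = 1.0 / (1.0 + np.exp(-lam * net))   # activation function
    return net, y

net, y = neuron_forward(np.array([0.5, -0.2, 1.0]),
                        np.array([0.3, 0.8, 0.1]))
```

With these example numbers, `net` is 0.09 and the sigmoid output lies just above 0.5.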
Supervised learning:
Consider the $k$-th sample:
Input signal: $x = [x_1 \; x_2 \; \ldots \; x_n \; 1]^T$
Labelled desired signal: $d$
NN output signal: $y = f(w, x)$
Estimation error: $E = \sum_{i=1}^{n_L} (y_i - d_i)^2$
Learning law (back-propagation, steepest descent):
$\Delta\,{}^{[i]}w_{jk} = -\eta \dfrac{\partial E}{\partial\left({}^{[i]}w_{jk}\right)}$
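The steepest-descent law above can be illustrated numerically; a minimal sketch assuming a single linear neuron $y = w^T x$ and the quadratic error $E = (y - d)^2$, so that $\partial E / \partial w = 2(y - d)\,x$. The function name and values are illustrative:

```python
import numpy as np

# One steepest-descent step: w <- w - eta * dE/dw,
# with E = (y - d)^2 and y = w^T x for a linear neuron.
def sgd_step(w, x, d, eta=0.1):
    y = np.dot(w, x)
    grad = 2.0 * (y - d) * x   # dE/dw = 2 (y - d) x
    return w - eta * grad

w = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
for _ in range(50):
    w = sgd_step(w, x, d=1.0)
```

After 50 repeated steps on the same sample the output $w^T x$ converges to the desired value 1.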
Part I: Neural Network. Presenter: Dr. Dang Xuan Ba
Chapter 3: Multi-layer Feedforward Neural Network
Implementation (supervised learning):
Initialization: $\eta,\; E_{stop},\; E = 0,\; w_0,\; epoch_{max},\; epoch = 0$
1. Set $E = 0$, $k = 0$; $epoch \mathbin{+}{=} 1$
2. Forward computation for sample $x_k = [x_1 \; x_2 \; \ldots \; x_n \; 1]^T$: compute $net$ and $y = f(w, x_k)$
3. Error calculation: $e_k = d_k - y_k$; accumulate $E = E + e_k^2$
4. Updating law: update $w_k$; $k \mathbin{+}{=} 1$; repeat from step 2 while $k \le K$
5. If $E > E_{stop}$ and $epoch < epoch_{max}$, go to step 1; otherwise stop.
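The flowchart above can be sketched as a training loop; a minimal illustration assuming a single linear neuron and the steepest-descent update. The names `epoch_max` and `E_stop` follow the slide; the two-sample dataset is purely illustrative:

```python
import numpy as np

# Training loop matching the flowchart: initialize, then repeat
# forward computation, error calculation, and the updating law
# until E < E_stop or epoch_max epochs have elapsed.
def train(samples, eta=0.1, E_stop=1e-6, epoch_max=1000):
    w = np.zeros(3)                  # initialization: w0
    for epoch in range(epoch_max):   # epoch++ up to epoch_max
        E = 0.0                      # E = 0 at the start of each epoch
        for x, d in samples:         # loop over the K samples
            y = np.dot(w, x)         # forward computation: net, y
            e = d - y                # error calculation: e_k = d_k - y_k
            E += e * e               # accumulate E = E + e_k^2
            w = w + eta * e * x      # updating law (steepest descent)
        if E < E_stop:               # stopping criterion
            break
    return w, E

samples = [(np.array([0.0, 0.0, 1.0]), 0.0),
           (np.array([1.0, 1.0, 1.0]), 2.0)]
w, E = train(samples)
```

Because the target here is linearly realizable, the loop exits well before `epoch_max` with a tiny accumulated error.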
x1 | x2 | d = x1 xor x2
 0 |  0 | 0
 0 |  1 | 1
 1 |  0 | 1
 1 |  1 | 0
Learning process: Forward propagation
Hidden layer:
  INPUT: $X_h = [x_1;\; x_2;\; -1]^T$
  Weight matrix: $W_h = [w_{h11} \; w_{h12} \; w_{h13};\; w_{h21} \; w_{h22} \; w_{h23}]$
  $y_h = \dfrac{1}{1 + e^{-\lambda_h net_h}}$
Output layer:
  INPUT: $X_o = [y_h;\; -1]^T$
  Weight matrix: $W_o = [w_{o1};\; w_{o2};\; w_{o3}]$
  $y_o = \dfrac{1}{1 + e^{-\lambda_o net_o}}$
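The forward propagation above can be sketched for the 2-2-1 XOR network, with the bias input $-1$ as in the slides. The weight values below are hand-picked for illustration (the first hidden neuron approximates OR, the second approximates AND); they are not from the slides:

```python
import numpy as np

def sigmoid(net, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * net))

# Forward pass of the 2-2-1 network: Xh = [x1; x2; -1], Xo = [yh; -1].
def forward(x1, x2, Wh, Wo, lam_h=1.0, lam_o=1.0):
    Xh = np.array([x1, x2, -1.0])   # hidden-layer input with bias
    net_h = Wh @ Xh                 # Wh is 2x3
    yh = sigmoid(net_h, lam_h)
    Xo = np.append(yh, -1.0)        # output-layer input with bias
    net_o = Wo @ Xo                 # Wo is 1x3 (row vector)
    yo = sigmoid(net_o, lam_o)
    return yh, yo

# Illustrative weights: hidden neuron 1 ~ OR, neuron 2 ~ AND,
# output ~ OR AND NOT(AND) = XOR.
Wh = np.array([[5.0, 5.0, 2.5],
               [5.0, 5.0, 7.5]])
Wo = np.array([10.0, -10.0, 5.0])
```

Evaluating the four rows of the truth table with these weights gives outputs near 0 for (0,0) and (1,1) and near 1 for (0,1) and (1,0).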
Learning process: Back-propagation
COMMON RULE: $\Delta W_o = -\eta_o \left(\dfrac{\partial E}{\partial W_o}\right)^T$, $\quad \Delta W_h = -\eta_h \left(\dfrac{\partial E}{\partial W_h}\right)^T$
LEARNING LAWS OF OUTPUT WEIGHT MATRIX:
Error function: $E = 0.5\,(y_o - y_d)^2$
Chain rule: $\dfrac{\partial E}{\partial W_o} = \dfrac{\partial E}{\partial y_o}\,\dfrac{\partial y_o}{\partial net_o}\,\dfrac{\partial net_o}{\partial W_o}$
$\dfrac{\partial E}{\partial y_o} = (y_o - y_d)$
$\dfrac{\partial y_o}{\partial net_o} = \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}$
$\dfrac{\partial net_o}{\partial W_o} = x_o^T$
Update: $W_o(k+1) = W_o(k) - \eta_o (y_o - y_d)\,\dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, x_o$
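The output-weight update above can be sketched directly; a minimal illustration using the identity that the sigmoid derivative $\lambda_o e^{-\lambda_o net_o}/(1 + e^{-\lambda_o net_o})^2$ equals $\lambda_o\, y_o (1 - y_o)$. The function name is illustrative:

```python
import numpy as np

# Output-weight update: Wo(k+1) = Wo(k) - eta_o * dE/dWo,
# with E = 0.5 (yo - yd)^2 and sigmoid output yo.
def update_output_weights(Wo, xo, yo, yd, eta_o=0.5, lam_o=1.0):
    dE_dyo = yo - yd                     # dE/dyo
    dyo_dnet = lam_o * yo * (1.0 - yo)   # dyo/dnet_o (sigmoid derivative)
    grad = dE_dyo * dyo_dnet * xo        # dnet_o/dWo = xo^T
    return Wo - eta_o * grad
```

For example, with zero weights, input `xo = [1, 1, -1]` (so `yo = 0.5`) and target `yd = 1`, one step moves each weight by 0.0625 in the direction that raises the output.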
Learning process: Back-propagation
COMMON RULE: $\Delta W_o = -\eta_o \left(\dfrac{\partial E}{\partial W_o}\right)^T$, $\quad \Delta W_h = -\eta_h \left(\dfrac{\partial E}{\partial W_h}\right)^T$
LEARNING LAWS OF HIDDEN WEIGHT MATRIX:
Error function: $E = 0.5\,(y_o - y_d)^2$
Chain rule: $\dfrac{\partial E}{\partial W_h} = \dfrac{\partial E}{\partial y_o}\,\dfrac{\partial y_o}{\partial net_o}\,\dfrac{\partial net_o}{\partial y_h}\,\dfrac{\partial y_h}{\partial net_h}\,\dfrac{\partial net_h}{\partial W_h}$
$\dfrac{\partial E}{\partial y_o} = (y_o - y_d)$
$\dfrac{\partial y_o}{\partial net_o} = \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}$
$\dfrac{\partial net_o}{\partial y_h} = W_o^T I_{32}'$ (the bias weight of $W_o$ is dropped)
$\dfrac{\partial y_h}{\partial net_h} = \mathrm{diag}\!\left(\dfrac{\lambda_h e^{-\lambda_h net_h}}{\left(1 + e^{-\lambda_h net_h}\right)^2}\right)$
$\dfrac{\partial net_h}{\partial W_h} = \mathrm{tensor}(X^T)$
Update: $W_h(k+1) = W_h(k) - \eta_h (y_o - y_d)\,\dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, W_o^T I_{32}'\, \mathrm{diag}\!\left(\dfrac{\lambda_h e^{-\lambda_h net_h}}{\left(1 + e^{-\lambda_h net_h}\right)^2}\right) \mathrm{tensor}(X^T)$
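The hidden-weight chain rule above can be sketched for the 2-2-1 network; a minimal illustration where the sigmoid derivatives are written as $\lambda\, y (1 - y)$, dropping the bias weight of $W_o$ plays the role of $I_{32}'$, and an outer product with $X_h$ plays the role of $\mathrm{tensor}(X^T)$. The function name is illustrative:

```python
import numpy as np

# Hidden-weight update for the 2-2-1 network:
#   delta_o = dE/dyo * dyo/dnet_o
#   delta_h = delta_o * Wo[:2] * dyh/dnet_h   (bias weight dropped)
#   dE/dWh  = outer(delta_h, Xh)
def update_hidden_weights(Wh, Wo, Xh, yh, yo, yd,
                          eta_h=0.5, lam_o=1.0, lam_h=1.0):
    delta_o = (yo - yd) * lam_o * yo * (1.0 - yo)
    delta_h = delta_o * Wo[:2] * lam_h * yh * (1.0 - yh)
    grad = np.outer(delta_h, Xh)   # shape 2x3, matches Wh
    return Wh - eta_h * grad
```

Starting from zero hidden weights with `Xh = [1, 0, -1]`, `Wo = [1, -1, 0]` and target 1, one step pushes the two hidden neurons in opposite directions, as the opposite signs of the output weights dictate.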
Result:
${}^{[L]}e_j = a'\!\left({}^{[L]}net_j\right)(y_j - y_{dj})$
Learning process: Back-propagation (simple)
COMMON RULE: $\Delta W_o = -\eta_o \left(\dfrac{\partial E}{\partial W_o}\right)^T$, $\quad \Delta W_h = -\eta_h \left(\dfrac{\partial E}{\partial W_h}\right)^T$
LEARNING LAWS OF OUTPUT WEIGHT MATRIX:
Output-layer error term: ${}^{[L]}e_j = a'\!\left({}^{[L]}net_j\right)(y_j - y_{dj})$
Update: $W_o(k+1) = W_o(k) - \eta_o (y_o - y_d)\,\dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, x_o$
Learning process: Back-propagation (simple)
COMMON RULE: $\Delta W_o = -\eta_o \left(\dfrac{\partial E}{\partial W_o}\right)^T$, $\quad \Delta W_h = -\eta_h \left(\dfrac{\partial E}{\partial W_h}\right)^T$
LEARNING LAWS OF HIDDEN WEIGHT MATRIX:
Error function: $E = 0.5\,(y_o - y_d)^2$
Output layer: ${}^{[L]}e_j = a'\!\left({}^{[L]}net_j\right)(y_j - y_{dj})$
Hidden layer $s$: ${}^{[s]}e_j = a'\!\left({}^{[s]}net_j\right)\sum_{r=1}^{n_{s+1}} {}^{[s+1]}e_r\,{}^{[s+1]}w_{rj}$
Update: ${}^{[s]}w_{ji}(k+1) = {}^{[s]}w_{ji}(k) - \eta\,{}^{[s]}e_j\,{}^{[s-1]}y_i$
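The layer-wise recursion above can be sketched generically; a minimal illustration assuming a linear activation $a(net) = net$ (so $a' = 1$), with each weight matrix carrying a trailing bias column that is dropped when deltas are propagated backwards. The function name and shapes are illustrative:

```python
import numpy as np

# Back-propagate the error terms e_j^[s] layer by layer:
#   output layer:  e^[L] = a'(net^[L]) * (y - yd)            (a' = 1 here)
#   hidden layer:  e_j^[s] = a'(net_j^[s]) * sum_r e_r^[s+1] w_rj^[s+1]
def backprop_deltas(weights, layer_sizes, y_out, y_des):
    deltas = [None] * len(layer_sizes)
    deltas[-1] = y_out - y_des                        # output-layer deltas
    for s in range(len(layer_sizes) - 2, -1, -1):
        W_next = weights[s + 1][:, :layer_sizes[s]]   # drop bias column
        deltas[s] = W_next.T @ deltas[s + 1]          # sum_r e_r w_rj
    return deltas
```

With the deltas in hand, each weight is then updated by the rule above, $w_{ji} \leftarrow w_{ji} - \eta\, e_j\, y_i$, using the output of the previous layer.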
Example 2: Derive the learning rule for the following network using
steepest descent method:
Example 3: Derive the learning rule for the following network using
steepest descent method:
Example 4: Derive the learning rule for the following network using
steepest descent method:
Example 5: Derive the learning rule for the following network using
steepest descent method: