
INTELLIGENT CONTROL SYSTEM

PART 1: Neural Network

Instructor: Dr. Dang Xuan Ba
Email: badx@hcmute.edu.vn

Department of Automatic Control


Content

Chapter 1: Introduction to Neural Networks
Chapter 2: Single-layer Feedforward Neural Network
Chapter 3: Multi-layer Feedforward Neural Network
Chapter 4: RBF Neural Network
Chapter 5: Several Applications


CHAPTER 3 – MULTI-LAYER FEEDFORWARD NEURAL NETWORK


Chapter 3: Multi-layer Feedforward Neural Network

1. Motivation

Example 1: Classification of two distinguished sets.

Example 2: Design a network to separate the following data into two distinguished classes.

Example 3: Design a network to separate the following data into two distinguished classes:

x1  x2  d = x1 XOR x2
0   0   0
0   1   1
1   0   1
1   1   0

Example 4: Build a function representing the following data.

What will we do exactly?
Chapter 3: Multi-layer Feedforward Neural Network

2. Structure:

Number of layers: $L$.
Each layer $s$ contains $n_s$ neurons.
Each neuron $j$ of layer $i$ has:
+ Input signal: $^{[i]}x_j = [\,^{[i-1]}y_1 \;\dots\; ^{[i-1]}y_{n_{i-1}} \;\; 1\,]^T$
+ Weight matrix: $^{[i]}w_j = [\,^{[i]}w_1 \;\dots\; ^{[i]}w_{n_{i-1}} \;\; ^{[i]}w_{n_{i-1}+1}\,]^T$
+ Integration function ($^{[i]}net_j$): linear, quadratic, ... functions.
+ Activation function ($^{[i]}a(net)$): linear, s-shape, threshold, ... functions.

Its applications include complex classification, approximation, recognition, ...
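To make this structure concrete, here is a minimal sketch (not from the slides) of the forward computation for a sigmoid network with a linear integration function; the layer sizes, the slope λ, and the random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(net, lam=1.0):
    """S-shape activation: a(net) = 1 / (1 + e^(-lam*net))."""
    return 1.0 / (1.0 + np.exp(-lam * net))

def forward(x, weights, lam=1.0):
    """Forward computation through L layers: layer i receives the previous
    layer's outputs plus a constant bias entry, forms net = W^T x (linear
    integration function), and applies the activation function."""
    y = np.asarray(x, dtype=float)
    for W in weights:                  # W has shape (n_{i-1} + 1, n_i)
        x_aug = np.append(y, 1.0)      # input signal [y_1 ... y_{n_{i-1}} 1]^T
        y = sigmoid(W.T @ x_aug, lam)  # activation of the integration result
    return y

# Illustrative 2-2-1 network with random weights (shapes only; untrained)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(3, 1))]
print(forward([0.0, 1.0], weights))
```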
Chapter 3: Multi-layer Feedforward Neural Network

3. Learning algorithm:

Supervised learning. Consider the $k$-th sample:

Input signal: $x = [x_1;\; x_2;\; \dots;\; x_n;\; 1]^T$
Labelled desired signal: $d$
NN output signal: $y = f(w, x)$

Estimation error:
$E = \sum_{i=1}^{n_L} (y_i - d_i)^2$

Learning law (back-propagation):
$\Delta\,^{[i]}w_{jk} = -\eta\, \nabla_{^{[i]}w_{jk}} E = -\eta \left( \dfrac{\partial E}{\partial\,^{[i]}w_{jk}} \right)^T$
Chapter 3: Multi-layer Feedforward Neural Network

Implementation (supervised learning):

1. Initialization: learning rate $\eta$, stop threshold $E_{stop}$, error $E = 0$, initial weights $w_0$, epoch limit $epoch_{max}$, $epoch = 0$.
2. Start an epoch: set $E = 0$, $k = 0$, $epoch{+}{+}$.
3. Forward computation on sample $k$: compute $net$, $y$; then $k{+}{+}$.
4. Error calculation: $e_k = d_k - y_k$, $E = E + e_k^2$.
5. Updating law (back-propagation):
$\Delta\,^{[i]}w_{jk} = -\eta\, \nabla_{^{[i]}w_{jk}} E = -\eta \left( \dfrac{\partial E}{\partial\,^{[i]}w_{jk}} \right)^T$
6. If $k \le K$, return to step 3.
7. Stop checking: if $E < E_{stop}$ or $epoch \ge epoch_{max}$, done; otherwise return to step 2.
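As a code-level view of this procedure, a minimal Python sketch; `forward` and `backward_update` are hypothetical placeholders for the network-specific forward computation and updating law derived on the following slides, and only the loop structure is taken from the slide.

```python
import numpy as np

def train(samples, labels, w, eta, E_stop, epoch_max, forward, backward_update):
    """Training loop following the implementation steps: for each epoch, run
    every sample k through forward computation, error calculation, and the
    updating law; stop when E < E_stop or the epoch limit is reached."""
    epoch = 0
    while True:
        E, epoch = 0.0, epoch + 1                 # new epoch: reset E, epoch++
        for x_k, d_k in zip(samples, labels):     # k = 1..K
            net, y_k = forward(x_k, w)            # forward computation
            e_k = d_k - y_k                       # error calculation
            E += float(np.sum(e_k ** 2))          # accumulate E = E + e_k^2
            w = backward_update(w, x_k, net, y_k, d_k, eta)  # updating law
        if E < E_stop or epoch >= epoch_max:      # stop checking
            return w, E, epoch
```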
Chapter 3: Multi-layer Feedforward Neural Network

Example 3: Design a network to separate the following data into two distinguished classes:

x1  x2  d = x1 XOR x2
0   0   0
0   1   1
1   0   1
1   1   0


Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:


Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Learning process:
Forward propagation

Hidden layer:
INPUT: $X_h = [x_1;\; x_2;\; -1]^T$
Weight matrix: $W_h = [w_{h11}\; w_{h12}\; w_{h13};\; w_{h21}\; w_{h22}\; w_{h23}]^T$
Integration function: $net_h = W_h^T X_h$
Activation function: $y_h = \dfrac{1}{1 + e^{-\lambda_h net_h}}$

Output layer:
INPUT: $X_o = [y_h;\; -1]^T$
Weight matrix: $W_o = [w_{o1};\; w_{o2};\; w_{o3}]$
Integration function: $net_o = W_o^T X_o$
Activation function: $y_o = \dfrac{1}{1 + e^{-\lambda_o net_o}}$

Error function: $E = 0.5\,(y_o - y_d)^2$
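A minimal sketch of this forward pass in Python/NumPy (not part of the slides); λh, λo and the random initial weights are illustrative assumptions.

```python
import numpy as np

lam_h, lam_o = 1.0, 1.0                    # assumed activation slopes

def forward_xor(x1, x2, Wh, Wo):
    """Forward pass of the 2-2-1 network above: hidden layer, then output."""
    Xh = np.array([x1, x2, -1.0])          # hidden input [x1; x2; -1]
    net_h = Wh.T @ Xh                      # neth = Wh^T Xh (two values)
    yh = 1.0 / (1.0 + np.exp(-lam_h * net_h))
    Xo = np.append(yh, -1.0)               # output input [yh; -1]
    net_o = Wo @ Xo                        # neto = Wo^T Xo (scalar)
    yo = 1.0 / (1.0 + np.exp(-lam_o * net_o))
    return Xh, net_h, yh, Xo, net_o, yo

rng = np.random.default_rng(1)
Wh = rng.normal(size=(3, 2))               # hidden weight matrix (3x2)
Wo = rng.normal(size=3)                    # output weights [wo1; wo2; wo3]
print(forward_xor(0, 1, Wh, Wo)[-1])       # untrained output for (0, 1)
```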
Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Learning process:
Back-propagation

COMMON RULE:
$\Delta W_o = -\eta_o \left( \dfrac{\partial E}{\partial W_o} \right)^T, \qquad \Delta W_h = -\eta_h \left( \dfrac{\partial E}{\partial W_h} \right)^T$

LEARNING LAW OF THE OUTPUT WEIGHT MATRIX:
$\dfrac{\partial E}{\partial W_o} = \dfrac{\partial E}{\partial y_o} \dfrac{\partial y_o}{\partial net_o} \dfrac{\partial net_o}{\partial W_o}$, with

$\dfrac{\partial E}{\partial y_o} = (y_o - y_d)$

$\dfrac{\partial y_o}{\partial net_o} = \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}$

$\dfrac{\partial net_o}{\partial W_o} = x_o^T$

Update: $W_o(k+1) = W_o(k) - \eta_o (y_o - y_d) \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, x_o$

Error function: $E = 0.5\,(y_o - y_d)^2$
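In code, the derived output-layer updating law might look like the following sketch; the function and variable names are assumptions. Note that for the sigmoid, $\lambda_o e^{-\lambda_o net_o}/(1 + e^{-\lambda_o net_o})^2 = \lambda_o\, y_o (1 - y_o)$, a cheaper equivalent form.

```python
import numpy as np

def update_output_weights(Wo, Xo, net_o, yo, yd, eta_o, lam_o=1.0):
    """Wo(k+1) = Wo(k) - eta_o * (yo - yd) * dyo/dneto * Xo, with
    dyo/dneto = lam_o e^(-lam_o neto) / (1 + e^(-lam_o neto))^2."""
    dyo_dneto = lam_o * np.exp(-lam_o * net_o) / (1.0 + np.exp(-lam_o * net_o)) ** 2
    return Wo - eta_o * (yo - yd) * dyo_dneto * Xo
```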


Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Learning process:
Back-propagation

COMMON RULE:
$\Delta W_o = -\eta_o \left( \dfrac{\partial E}{\partial W_o} \right)^T, \qquad \Delta W_h = -\eta_h \left( \dfrac{\partial E}{\partial W_h} \right)^T$

LEARNING LAW OF THE HIDDEN WEIGHT MATRIX:
$\dfrac{\partial E}{\partial W_h} = \dfrac{\partial E}{\partial y_o} \dfrac{\partial y_o}{\partial net_o} \dfrac{\partial net_o}{\partial x_o} \dfrac{\partial x_o}{\partial y_h} \dfrac{\partial y_h}{\partial net_h} \dfrac{\partial net_h}{\partial W_h}$, with

$\dfrac{\partial E}{\partial y_o} = (y_o - y_d)$

$\dfrac{\partial y_o}{\partial net_o} = \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}$

$\dfrac{\partial net_o}{\partial x_o} = W_o^T$

$\dfrac{\partial x_o}{\partial y_h} = I'_{3\times 2}$

$\dfrac{\partial y_h}{\partial net_h} = \mathrm{diag}\!\left( \dfrac{\lambda_h e^{-\lambda_h net_h}}{\left(1 + e^{-\lambda_h net_h}\right)^2} \right)$

$\dfrac{\partial net_h}{\partial W_h} = \mathrm{tensor}(X^T)_{2\times 2}$

Update:
$W_h(k+1) = W_h(k) - \eta_h \left[ (y_o - y_d) \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, W_o^T\, I'_{3\times 2}\, \mathrm{diag}\!\left( \dfrac{\lambda_h e^{-\lambda_h net_h}}{\left(1 + e^{-\lambda_h net_h}\right)^2} \right) \mathrm{tensor}(X^T) \right]^T$

Error function: $E = 0.5\,(y_o - y_d)^2$
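A sketch of the hidden-layer updating law in code; rather than forming $I'_{3\times2}$ and $\mathrm{tensor}(X^T)$ explicitly, it propagates the scalar output term through the first two entries of Wo (the ones connected to yh; the bias weight drops out) and takes an outer product with Xh, which yields the same gradient. All names are assumptions.

```python
import numpy as np

def update_hidden_weights(Wh, Xh, net_h, yh, Wo, net_o, yo, yd,
                          eta_h, lam_h=1.0, lam_o=1.0):
    """Wh(k+1) = Wh(k) - eta_h * dE/dWh via the chain rule above."""
    delta_o = (yo - yd) * lam_o * np.exp(-lam_o * net_o) \
              / (1.0 + np.exp(-lam_o * net_o)) ** 2       # dE/dneto (scalar)
    dyh_dneth = lam_h * np.exp(-lam_h * net_h) \
                / (1.0 + np.exp(-lam_h * net_h)) ** 2     # per hidden neuron
    delta_h = delta_o * Wo[:2] * dyh_dneth                # back through Wo, yh
    return Wh - eta_h * np.outer(Xh, delta_h)             # gradient is 3x2
```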
Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Result: (plot of the error function $E = 0.5\,(y_o - y_d)^2$ during training)
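To reproduce a result like this, here is a compact, self-contained training sketch combining the forward pass and both updating laws over the four XOR samples; λ, η, the seed, and the epoch limit are illustrative assumptions, and depending on the initialization the run may need more epochs to converge.

```python
import numpy as np

def sig(net, lam):
    return 1.0 / (1.0 + np.exp(-lam * net))

lam_h = lam_o = 4.0                        # assumed activation slopes
eta_h = eta_o = 0.5                        # assumed learning rates
rng = np.random.default_rng(2)
Wh, Wo = rng.normal(size=(3, 2)), rng.normal(size=3)
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # XOR samples

for epoch in range(10000):
    E = 0.0
    for x1, x2, yd in data:
        Xh = np.array([x1, x2, -1.0])                 # forward propagation
        yh = sig(Wh.T @ Xh, lam_h)
        Xo = np.append(yh, -1.0)
        yo = sig(Wo @ Xo, lam_o)
        E += 0.5 * (yo - yd) ** 2                     # E = 0.5 (yo - yd)^2
        # back-propagation, using the identity dy/dnet = lam * y * (1 - y)
        delta_o = (yo - yd) * lam_o * yo * (1.0 - yo)
        delta_h = delta_o * Wo[:2] * lam_h * yh * (1.0 - yh)
        Wo -= eta_o * delta_o * Xo                    # output updating law
        Wh -= eta_h * np.outer(Xh, delta_h)           # hidden updating law
    if E < 1e-4:
        break

for x1, x2, yd in data:
    yh = sig(Wh.T @ np.array([x1, x2, -1.0]), lam_h)
    yo = sig(Wo @ np.append(yh, -1.0), lam_o)
    print(f"({x1},{x2}) -> {yo:.3f}  (target {yd})")
```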
Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Learning process (alternative way):
Back-propagation (simple)

COMMON RULE:
$\Delta W_o = -\eta_o \left( \dfrac{\partial E}{\partial W_o} \right)^T, \qquad \Delta W_h = -\eta_h \left( \dfrac{\partial E}{\partial W_h} \right)^T$

LEARNING LAW OF THE OUTPUT WEIGHT MATRIX:
$\dfrac{\partial E}{\partial W_o} = \dfrac{\partial E}{\partial y_o} \dfrac{\partial y_o}{\partial net_o} \dfrac{\partial net_o}{\partial W_o}$

Output-layer error term:
$^{[L]}e_j = a'\!\left(^{[L]}net_j\right)(y_j - y_{dj})$
Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is designed as follows:

Learning process:
Back-propagation (simple)

LEARNING LAW OF THE OUTPUT WEIGHT MATRIX:
$\Delta W_o = -\eta_o \left( \dfrac{\partial E}{\partial W_o} \right)^T$, with output error term
$^{[L]}e_j = a'\!\left(^{[L]}net_j\right)(y_j - y_{dj})$

For the sigmoid activation this gives
$W_o(k+1) = W_o(k) - \eta_o (y_o - y_d) \dfrac{\lambda_o e^{-\lambda_o net_o}}{\left(1 + e^{-\lambda_o net_o}\right)^2}\, x_o$


Chapter 3: Multi-layer Feedforward Neural Network

Solution: A multi-layer neural network is


designed as follows:

Learning process:
Back-propagation (Lan truyền ngược)
COMMON RULE: LEARNING LAWS OF HIDDEN WEIGHT MATRIX:

( ) ( ) (y

)
T
 E 
WO = −o 

  WO 

[ L]
ej = a ' [ L]
net j [ L]
e1 j = a ' [ L]
net j j − ydj
 ns +1

( ) ( )
T
  E  [ s +1] [ s +1]
Wh = − h 

  Wh


[s]
ej = a ' [s]
net j er wrj
r =1
Error function: E = 0.5(yo - yd)2

[s]
w ji (k + 1) =[ s ] w ji (k + 1) − h[ s ]e j[ s −1] y j

Part I: Neural Network Presenter: Dr. Dang Xuan Ba 16
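A sketch of this layer-wise delta rule for arbitrary depth, assuming sigmoid activations (so $a'(net) = \lambda\, y (1 - y)$) and the −1 bias convention of the XOR example; `weights[s]` has shape $(n_{s-1}+1, n_s)$ and all names are assumptions.

```python
import numpy as np

def backprop_step(weights, x, yd, eta, lam=1.0):
    """One sample of the generic delta rule: forward pass storing each
    layer's input/output, output delta a'(net)(y - yd), hidden deltas
    folded back through the next layer's weights, and the update
    [s]w_ji <- [s]w_ji - eta * [s]e_j * [s-1]y_i."""
    inputs, outputs = [], []
    y = np.asarray(x, dtype=float)
    for W in weights:                          # forward pass
        x_aug = np.append(y, -1.0)             # trailing bias entry
        inputs.append(x_aug)
        y = 1.0 / (1.0 + np.exp(-lam * (W.T @ x_aug)))
        outputs.append(y)
    delta = lam * y * (1.0 - y) * (y - np.asarray(yd, dtype=float))
    for s in range(len(weights) - 1, -1, -1):  # backward pass
        grad = np.outer(inputs[s], delta)      # [s-1]y_i * [s]e_j
        if s > 0:                              # hidden delta (bias row dropped)
            y_prev = outputs[s - 1]
            delta = lam * y_prev * (1.0 - y_prev) * (weights[s][:-1, :] @ delta)
        weights[s] = weights[s] - eta * grad
    return weights
```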


Chapter 3: Multi-layer Feedforward Neural Network

Example 2: Derive the learning rule for the following network using the steepest-descent method:


Chapter 3: Multi-layer Feedforward Neural Network

Example 3: Derive the learning rule for the following network using the steepest-descent method:


Chapter 3: Multi-layer Feedforward Neural Network

Example 4: Derive the learning rule for the following network using the steepest-descent method:


Chapter 3: Multi-layer Feedforward Neural Network

Example 5: Derive the learning rule for the following network using the steepest-descent method:
