
Introduction to Artificial Neural Networks in Control
Andrew Paice, 2009

Outline

Motivation
Neurons
Artificial Neuron
Types of Neural Networks
Multi-Layer Perceptron Neural Network (MLPNN)
Hopfield Neural Network
Neural Network Learning and Back-Propagation
Radial Basis Function Neural Network (RBFNN)
Effectiveness of NN Learning
Application to Control
Examples from CDC 2006
Summary

Motivation

Neurons

Artificial Neuron

An artificial neuron computes a weighted sum of its inputs plus a bias and passes the result through an activation function f, as made explicit in the MLPNN slides that follow.

Types of Neural Networks

The networks treated in the following slides fall into two classes: feedforward networks (the multi-layer perceptron and the radial basis function network) and recurrent networks (the Hopfield network).

Multi-Layer Perceptron Neural Network (MLPNN)

[Figure: a three-layer network. Inputs x_1, x_2, ..., x_n connect to a hidden layer z_1, ..., z_p through weights v_11, ..., v_np with biases v_01, ..., v_0p; the hidden units connect to outputs y_1, y_2, ..., y_m through weights w_11, ..., w_pm with biases w_01, ..., w_0m.]

Each unit applies the sigmoid activation

\[ y = \frac{1}{1 + e^{-x}} \]

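To make the picture concrete, here is a minimal NumPy sketch of the forward pass of the network above; the function and variable names are illustrative, not from the slides:

import numpy as np

def sigmoid(a):
    # Logistic activation from the slide: y = 1 / (1 + e^(-a))
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, V, v0, W, w0):
    # One hidden layer, as pictured: V is p x n (hidden weights v_ij, biases v0),
    # W is m x p (output weights w_jk, biases w0).
    z = sigmoid(V @ x + v0)   # hidden units z_1 ... z_p
    y = sigmoid(W @ z + w0)   # outputs y_1 ... y_m
    return y, z

# Example with n = 3 inputs, p = 4 hidden units, m = 2 outputs.
rng = np.random.default_rng(0)
V, v0 = rng.normal(size=(4, 3)), np.zeros(4)
W, w0 = rng.normal(size=(2, 4)), np.zeros(2)
y, _ = mlp_forward(np.array([0.5, -1.0, 2.0]), V, v0, W, w0)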

Hopfield Neural Network

Neural Network Learning (for MLPNNs)

The goal is to approximate an unknown function y = f(x) by a network

\[ y \approx NN(x; P^{(1)}, P^{(2)}, \ldots, P^{(n)}) \]

where the P^{(i)} are the adjustable parameters (weights and biases) of the layers. Given training data \{(x_i, y_i)\}_{i=1}^{N}, learning means choosing the parameters to minimize the error between the targets y_i and the network outputs, e.g. the sum of squared errors.

The Back-Propagation Algorithm

With the sigmoid activation

\[ y = \frac{1}{1 + e^{-x}} \]

the derivative takes the simple form

\[ \frac{\partial y}{\partial x} = y\,(1 - y), \]

so the gradient of the network output with respect to a weight of layer n can be computed from quantities already available from the forward pass:

\[ \frac{\partial NN}{\partial P^{(n)}_{ij}} = y_i (1 - y_i)\, x_j^{(n-1)}. \]

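The y(1 - y) identity is what makes the sigmoid convenient for back-propagation; a one-line derivation (standard calculus, not shown on the slide):

\[
\frac{d}{dx}\left(\frac{1}{1+e^{-x}}\right)
= \frac{e^{-x}}{(1+e^{-x})^{2}}
= \frac{1}{1+e^{-x}}\cdot\frac{e^{-x}}{1+e^{-x}}
= y\,(1-y).
\]
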
Backpropagation I
Step 1: Initialize weights to small random values.
Step 2: While the stopping condition is false, do Steps 3-10.
Step 3: For each training pair, do Steps 4-9.

Feedforward pass
Step 4: Each input unit receives the input signal x_i and transmits this signal to all units in the hidden layer.
Step 5: Each hidden unit z_j, j = 1, ..., p sums its weighted input signals

\[ z_{in,j} = v_{0j} + \sum_{i=1}^{n} x_i v_{ij}, \]

applies the activation function z_j = f(z_{in,j}), and sends this to all units in the output layer.
Step 6: Each output unit y_k, k = 1, ..., m sums its weighted input signals

\[ y_{in,k} = w_{0k} + \sum_{j=1}^{p} z_j w_{jk} \]

and applies its activation function to calculate the output signal y_k = f(y_{in,k}).

Backpropagation II
Backward pass
Step 7: Each output unit y_k, k = 1, ..., m receives a target pattern t_k corresponding to the input pattern; the error information term is calculated as

\[ \delta_k = (t_k - y_k)\, f'(y_{in,k}). \]

Step 8: Each hidden unit z_j, j = 1, ..., p sums its delta inputs from the units in the layer above,

\[ \delta_{in,j} = \sum_{k=1}^{m} \delta_k w_{jk}. \]

The error information term is calculated as

\[ \delta_j = \delta_{in,j}\, f'(z_{in,j}). \]

Updating weights and biases
Step 9: Each output unit y_k, k = 1, ..., m updates its bias and weights (j = 0, ..., p). With learning rate \alpha, the weight correction term is given by

\[ \Delta w_{jk} = \alpha\, \delta_k z_j \]

and the bias correction term is given by

\[ \Delta w_{0k} = \alpha\, \delta_k. \]

Therefore, w_{jk}(new) = w_{jk}(old) + \Delta w_{jk} and w_{0k}(new) = w_{0k}(old) + \Delta w_{0k}.
Each hidden unit z_j, j = 1, ..., p updates its bias and weights (i = 0, ..., n). The weight correction terms are

\[ \Delta v_{ij} = \alpha\, \delta_j x_i, \qquad \Delta v_{0j} = \alpha\, \delta_j. \]

Therefore, v_{ij}(new) = v_{ij}(old) + \Delta v_{ij} and v_{0j}(new) = v_{0j}(old) + \Delta v_{0j}.
Step 10: Test the stopping condition, e.g. the error falling below a threshold or a maximum number of iterations being reached.

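The ten steps translate almost line for line into code. A minimal NumPy sketch of the algorithm above, with learning rate alpha; all names are illustrative, and the XOR example at the end is only a quick smoke test (it may need many epochs to converge):

import numpy as np

def f(a):
    # Logistic activation; note f'(a) = f(a) * (1 - f(a)).
    return 1.0 / (1.0 + np.exp(-a))

def train_backprop(X, T, p, alpha=0.5, epochs=10000, tol=1e-3, seed=0):
    # X: (N, n) inputs, T: (N, m) targets, p: number of hidden units.
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], T.shape[1]
    V = rng.uniform(-0.5, 0.5, (p, n)); v0 = rng.uniform(-0.5, 0.5, p)   # Step 1
    W = rng.uniform(-0.5, 0.5, (m, p)); w0 = rng.uniform(-0.5, 0.5, m)
    for epoch in range(epochs):                                          # Step 2
        sse = 0.0
        for x, t in zip(X, T):                                           # Step 3
            z = f(v0 + V @ x)                                            # Steps 4-5
            y = f(w0 + W @ z)                                            # Step 6
            delta_k = (t - y) * y * (1 - y)                              # Step 7, f' = y(1-y)
            delta_j = (W.T @ delta_k) * z * (1 - z)                      # Step 8
            W += alpha * np.outer(delta_k, z); w0 += alpha * delta_k     # Step 9
            V += alpha * np.outer(delta_j, x); v0 += alpha * delta_j
            sse += float(np.sum((t - y) ** 2))
        if sse < tol:                                                    # Step 10
            break
    return V, v0, W, w0

# Smoke test: learn XOR with p = 2 hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
params = train_backprop(X, T, p=2, alpha=1.0, epochs=20000)
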
Radial Basis Function Neural Network (RBFNN)

The output is a weighted sum of radial basis functions centred at points c_i:

\[ y_j = \sum_{i=1}^{n} w_{ij}\, \varphi\!\left( \lVert x - c_i \rVert_2 \right) \]

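A sketch of an RBF network, assuming a Gaussian basis function (the slide leaves phi generic); the helper names are hypothetical. Because the output is linear in the weights, fitting them for fixed centres reduces to linear least squares:

import numpy as np

def rbf_forward(x, centers, widths, W):
    # y_j = sum_i w_ij * phi(||x - c_i||), with a Gaussian phi.
    r = np.linalg.norm(centers - x, axis=1)     # ||x - c_i|| for each center
    phi = np.exp(-(r / widths) ** 2)            # Gaussian radial basis
    return W.T @ phi                            # outputs y_1 ... y_m

def fit_weights(X, Y, centers, widths):
    # Linear least-squares fit of the output weights for fixed centers.
    R = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(R / widths) ** 2)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W
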
Effectiveness of NN Learning

Application to Control

Neural networks can serve as nonlinear models of plant dynamics. A state-space form, with both maps learned from data:

\[ x^{+} = NN((x; u), P_1), \qquad y = NN((x; u), P_2) \]

A form with a known linear part:

\[ x^{+} = NN(Ax + Bu, P_1), \qquad y = NN(x, P_2) \]

An input-output form built from past outputs and inputs:

\[ y_{k+1} = NN((Y; U; w_k), P_1), \quad Y = (y_k, y_{k-1}, \ldots, y_{k-n}), \quad U = (u_k, u_{k-1}, \ldots, u_{k-m}) \]

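A sketch of the input-output model as a one-step-ahead predictor; 'nn' stands for any trained regressor (e.g. the MLP sketched earlier), and the w_k term in the slide's regressor, presumably a noise or reference signal, is omitted here:

import numpy as np

def narx_predict(nn, Y, U):
    # One-step prediction y_{k+1} = NN((Y; U), P1), with Y = (y_k, ..., y_{k-n})
    # and U = (u_k, ..., u_{k-m}) stacked into a single regressor vector.
    return nn(np.concatenate([Y, U]))

# Usage sketch with n = 2, m = 1: the regressor is (y_k, y_{k-1}, y_{k-2}, u_k, u_{k-1}).
Y = np.array([0.8, 0.6, 0.5])
U = np.array([1.0, 0.0])
# y_next = narx_predict(trained_nn, Y, U)   # 'trained_nn' is hypothetical
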
Examples from CDC 2006

Summary

Artificial neural networks (MLP, Hopfield, RBF) can approximate unknown functions from data.
The back-propagation algorithm adjusts the weights using gradients that are cheap to compute with the sigmoid activation.
In control, trained networks can serve as nonlinear models of plant dynamics.
