EE699 Adaptive Neurofuzzy Control
Funkhouser 313

ADAPTIVE NEUROFUZZY CONTROL
Introduction (1-2): Actually 3 including 2 for the example
Adaptive Control of Linear Systems (3-5)
Identification of Linear Models (2-3)
Project 1
Control of Nonlinear Systems (1-2)
Neural and Fuzzy Control (1-2)
Neural and Fuzzy Modeling (4-6)
Project 2: Modeling
Adaptive Neurofuzzy Control Design (7-9) & Project 2: Control
Design Examples (2)
Presentation (2)
Final Examination (1)
Projects:
1. Adaptive control of a linear system
2. Neurofuzzy modeling and control of a non-linear system
CHAPTER I: INTRODUCTION
Primary References:
Y. M. Zhang and R. Kovacevic, "Neurofuzzy model-based control of weld fusion zone
geometry," IEEE Transactions on Fuzzy Systems, 6(3): 389-401, 1998.
R. Kovacevic and Y. M. Zhang, "Neurofuzzy model-based weld fusion state estimation," IEEE
Control Systems, 17(2): 30-42, 1997.
1. Linear Systems
Adaptive Control: Identify the real parameters of the model to minimize the mismatch
Robust Control: Allow the mismatch
2. Non-linear Systems
Unified model structure for non-linear systems: neural network models and fuzzy models
Comparison
Modeling: Disadvantage: large number of parameters
Advantages: adequate accuracy, simplicity
Control: Disadvantages: performance evaluation
Advantage: unified methods
Neural network and fuzzy methods
Modeling:
Neural networks: large number of parameters, but automated algorithm
Fuzzy models: moderate number of parameters, lack of automated algorithm
Control design:
Neural networks: large number of parameters
Fuzzy models: moderate number of parameters, time consuming
Neurofuzzy Control
Introduction
2. Simple Methods
Industrial processes are often approximated by a first-order-plus-dead-time model:
G(s) = K e^{-s Td} / (1 + sT)
Step input: u(t) = U (t >= 0), u(t) = 0 (t < 0)
Step response: y(t) = K U (1 - e^{-(t - Td)/T}) (t >= Td)
Parameters: Td (dead time), K (gain), T (time constant)
[Figure: step response, amplitude vs. time (sec.), for Td = 0, K = 5, T = 5 seconds]
[Figure: step response, amplitude vs. time (sec.), for Td = 0, K = 5, T = 1 second]
In the time domain,
T dy/dt + y = K u(t)
Integrating over an interval [t1, t2]:
int_{t1}^{t2} (T dy/dt + y) dt = int_{t1}^{t2} K u(t) dt
T [y(t2) - y(t1)] + int_{t1}^{t2} y(t) dt = K int_{t1}^{t2} u(t) dt
The integrals are computed numerically from sampled data with sampling period h, e.g.
int_{t1}^{t2} u(t) dt ≈ h sum_i u(ih) (sum over the samples in [t1, t2])
Then, writing the integral relation over several intervals l gives linear equations in the two unknowns T and K:
a1(l) T + b(l) K = a2(l) (l = 1, 2, ...)
which can be solved for T and K (by least squares when more than two intervals are used).
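As a sketch of this area method, the following recovers K and T from a simulated noise-free first-order step response; the interval endpoints and sample period are illustrative choices, not values from the notes.

```python
import numpy as np

# Integral (area) method: over [t1, t2],
#   T*[y(t2)-y(t1)] + int y dt = K * int u dt
# rearranged as  T*a + K*b = c, one equation per interval.
K_true, T_true, U, h = 5.0, 5.0, 1.0, 0.01
t = np.arange(0.0, 30.0, h)
y = K_true * U * (1.0 - np.exp(-t / T_true))   # step response, Td = 0
u = np.full_like(t, U)

def interval_eq(i1, i2):
    """Coefficients of T*a + K*b = c over samples [i1, i2)."""
    a = y[i2] - y[i1]
    b = -h * np.sum(u[i1:i2])       # K-term moved to the left side
    c = -h * np.sum(y[i1:i2])
    return [a, b], c

A, c = [], []
for i1, i2 in [(0, 1000), (1000, 2500)]:   # two illustrative intervals
    row, rhs = interval_eq(i1, i2)
    A.append(row)
    c.append(rhs)
T_hat, K_hat = np.linalg.solve(np.array(A), np.array(c))
print(T_hat, K_hat)   # close to the true values (5, 5)
```

With noisy data one would use more intervals and solve the overdetermined system by least squares.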
Example: pure-gain model
Model: y(t) = K u(t-1)
Control error: e(t) = w(t) - y(t)
- On-line Identification
At instant t-1: the estimate of the gain is Khat(t-1)
Predicted output: yhat(t | t-1) = Khat(t-1) u(t-1)
At instant t: prediction error
eps(t) = y(t) - yhat(t | t-1) = u(t-1) (K - Khat(t-1))
Update: Khat(t) = Khat(t-1) + eps(t) / u(t-1)
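The scalar gain update can be sketched as follows; the input sequence is illustrative, and with noise-free data the estimate converges in a single step.

```python
import numpy as np

# On-line identification of a pure gain:
#   eps(t) = y(t) - Khat(t-1)*u(t-1);  Khat(t) = Khat(t-1) + eps(t)/u(t-1)
rng = np.random.default_rng(0)
K_true, K_hat = 5.0, 0.0
u_prev = 1.0
for t in range(50):
    y = K_true * u_prev                 # noise-free plant output
    eps = y - K_hat * u_prev            # prediction error
    K_hat = K_hat + eps / u_prev        # gain update
    u_prev = 1.0 + 0.5 * rng.random()   # next (nonzero) input
print(K_hat)   # converges to 5.0
```

With measurement noise, dividing by u(t-1) amplifies the error, which motivates the least-squares estimators of the following sections.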
3. Plant Model Structures M(theta)

First-order system:
M(theta) = b0 / (a1 s + a0), theta = {a0, a1, b0}
or, normalized,
M(theta) = b0 / (s + a0), theta = {a0, b0}
or
M(theta) = K / (a1 s + 1), theta = {K, a1}

Second-order system:
M(theta) = (b1 s + b0) / (a2 s^2 + a1 s + a0), theta = {a0, a1, a2, b0, b1}
or
M(theta) = (b1 s + b0) / (s^2 + a1 s + a0), theta = {a0, a1, b0, b1}
or
M(theta) = K (b1 s + 1) / (a2 s^2 + a1 s + 1), theta = {K, a1, a2, b1}
Continuous model with disturbance:
y(t) = [B(s) / A(s)] u(t - Td) + d(t)

Impulse-response (weighting-sequence) model:
y(t) = sum_{i=1}^{inf} h_i u(t - i)
which must be truncated; the slower the dynamics, the more terms are needed (e.g., a fast pole may need only i = 7 terms, while a pole at a = 0.9 needs about i = 44).

Difference-equation model:
y(t) = - sum_{j=1}^{na} a_j y(t - j) + sum_{j=1}^{nb} b_j u(t - j)

Backward-shift operator z^{-1}:
z^{-1} y(j) = y(j - 1), z^{-n} y(j) = y(j - n)
z^{-1} u(j) = u(j - 1), z^{-n} u(j) = u(j - n)

A(z^{-1}) y(t) = B(z^{-1}) u(t), i.e., y(t) = [B(z^{-1}) / A(z^{-1})] u(t)
Disturbance Modeling: zero-mean disturbance
Additive disturbance:
y(t) = [B(z^{-1}) / A(z^{-1})] u(t) + d(t)
Disturbance model:
d(t) = [C1(z^{-1}) / D1(z^{-1})] e(t) (stationary random sequence)
Uncorrelated random sequence e(t) (white noise), used in modeling the disturbance d(t):
E[e(t)] = 0, E[e^2(t)] = sigma_e^2, E[e(t) e(t - j)] = 0 (j != 0, j a positive integer)
Sample estimates over a window [t1, t2]:
(1/(t2 - t1 + 1)) sum_{t=t1}^{t2} e(t) ≈ 0
(1/(t2 - t1 + 1)) sum_{t=t1}^{t2} e^2(t) ≈ sigma_e^2
(1/(t2 - t1 + 1)) sum_{t=t1}^{t2} e(t) e(t - j) ≈ 0
Polynomial orders: B: nb, C1: nc, with c0 = 1.
Disturbance with offset:
d(t) = [C1(z^{-1}) / D1(z^{-1})] e(t) + c (c: unknown constant or slowly changing)
Applying the difference operator Delta = 1 - z^{-1} removes the constant offset:
Delta d(t) = [(1 - z^{-1}) C1(z^{-1}) / D1(z^{-1})] e(t)
CARIMA model: A(z^{-1}) y(t) = B(z^{-1}) u(t) + C(z^{-1}) e(t) / Delta
4A. Least Squares Method
Model:
y(t) = -a1 y(t-1) - a2 y(t-2) - ... - a_na y(t-na) + b1 u(t-1) + b2 u(t-2) + ... + b_nb u(t-nb) + e(t)
Regressor and parameter vectors:
x(t) = (-y(t-1), -y(t-2), ..., -y(t-na), u(t-1), u(t-2), ..., u(t-nb))^T
theta = (a1, ..., a_na, b1, ..., b_nb)^T
e(t) = y(t) - x^T(t) theta
Stacking N >= na + nb equations: E = Y - X theta, with
X = (x^T(t0+1); x^T(t0+2); ...; x^T(t0+N))
Cost function:
J = sum_{j=1}^{N} e^2(t0+j) = E^T E = (Y - X theta)^T (Y - X theta)
Criterion for determining the optimal estimate theta*:
min_theta J, theta in R^{na+nb}
dJ/dtheta = -2 X^T Y + 2 X^T X theta = 0
X^T Y = X^T X theta*
theta* = (X^T X)^{-1} X^T Y
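A minimal sketch of the batch estimate theta* = (X^T X)^{-1} X^T Y for a first-order ARX model; the plant coefficients and noise level below are illustrative.

```python
import numpy as np

# Batch least squares for y(t) = -a1*y(t-1) + b1*u(t-1) + e(t),
# simulated with a1 = -0.8, b1 = 0.5 (illustrative values).
rng = np.random.default_rng(1)
N = 500
u = rng.standard_normal(N)
e = 0.01 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + e[t]

X = np.column_stack([-y[:-1], u[:-1]])     # regressors x(t) = (-y(t-1), u(t-1))
Y = y[1:]
theta = np.linalg.solve(X.T @ X, X.T @ Y)  # theta = (a1, b1)
print(theta)   # approximately (-0.8, 0.5)
```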
12
Principle
yhat(t | t-1) = x^T(t) thetahat(t-1)
y(t) = x^T(t) theta + e(t)
eps(t) = y(t) - yhat(t | t-1)
A Recursive Estimator
- Cost Function:
J(t) = (theta(t) - theta(0))^T S(0) (theta(t) - theta(0)) + sum_{i=1}^{t} (y(i) - x^T(i) theta(t))^2
Recursive Form:
S(t) = S(t-1) + x(t) x^T(t)
thetahat(t) = thetahat(t-1) + S^{-1}(t) x(t) eps(t)
Initials: S(0), thetahat(0)
Effects of S(0), thetahat(0): the larger S(0) relative to S(t), the more weight the initial guess thetahat(0) carries.
Gain-vector form:
k(t) = P(t-1) x(t) / (1 + x^T(t) P(t-1) x(t))
Parameter Update: thetahat(t) = thetahat(t-1) + k(t) eps(t)
Covariance Update: P(t) = [I - k(t) x^T(t)] P(t-1)
Initials: P(0) = alpha^2 I (alpha large) and thetahat(0)
Forgetting Factor lambda:
k(t) = P(t-1) x(t) / (lambda + x^T(t) P(t-1) x(t))
Parameter Update: thetahat(t) = thetahat(t-1) + k(t) eps(t)
Covariance Update: P(t) = [I - k(t) x^T(t)] P(t-1) / lambda
(0 < lambda <= 1)
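The gain-vector recursion with forgetting can be sketched as follows, on an illustrative first-order plant (noise-free for clarity):

```python
import numpy as np

# Recursive least squares with forgetting factor lam:
#   k(t)     = P(t-1)x(t) / (lam + x(t)'P(t-1)x(t))
#   theta(t) = theta(t-1) + k(t)*(y(t) - x(t)'theta(t-1))
#   P(t)     = (I - k(t)x(t)') P(t-1) / lam
rng = np.random.default_rng(2)
lam = 0.98
theta = np.zeros(2)              # initial estimate theta(0)
P = 100.0 * np.eye(2)            # P(0) = alpha^2 * I, alpha large
y_prev = 0.0
for t in range(400):
    u_prev = rng.standard_normal()
    x = np.array([-y_prev, u_prev])
    y = 0.8 * y_prev + 0.5 * u_prev          # true plant: a1 = -0.8, b1 = 0.5
    k = P @ x / (lam + x @ P @ x)
    theta = theta + k * (y - x @ theta)
    P = (np.eye(2) - np.outer(k, x)) @ P / lam
    y_prev = y
print(theta)   # approximately (-0.8, 0.5)
```

The forgetting factor discounts old data so the estimator can track slowly varying parameters, at the cost of higher estimate variance.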
5. Predictive Models
Example:
y(t) = [1 / (1 - 0.9 z^{-1})] e(t) = (1 + sum_{j=1}^{inf} 0.9^j z^{-j}) e(t)
- k-step-ahead prediction:
Model: y(t+k) = e(t+k) + 0.9 e(t+k-1) + 0.9^2 e(t+k-2) + 0.9^3 e(t+k-3) + ...
Prediction (keeping the terms known at time t):
yhat(t+k | t) = 0.9^k e(t) + 0.9^{k+1} e(t-1) + 0.9^{k+2} e(t-2) + ...
= 0.9^k {e(t) + 0.9 e(t-1) + 0.9^2 e(t-2) + ...}
= [0.9^k / (1 - 0.9 z^{-1})] e(t) = 0.9^k y(t)
Prediction Error:
ytilde(t+k) = y(t+k) - yhat(t+k | t) = sum_{j=0}^{k-1} 0.9^j e(t+k-j)
Variance of Prediction Error:
sum_{j=0}^{k-1} 0.9^{2j} sigma^2 = [(1 - 0.9^{2k}) / (1 - 0.9^2)] sigma^2
Variance of y:
[1 / (1 - 0.9^2)] sigma^2
General case:
Model: y(t+k) = N(z^{-1}) e(t+k) = N_k(z^{-1}) e(t+k) + N_k*(z^{-1}) e(t)
where N_k(z^{-1}) collects the first k terms of N (powers z^0 to z^{-(k-1)}) and N_k*(z^{-1}) = z^k [N(z^{-1}) - N_k(z^{-1})].
k-Step-Ahead Prediction:
yhat(t+k | t) = N_k*(z^{-1}) e(t) = [N_k*(z^{-1}) / N(z^{-1})] y(t)
ARMA Model: y(t) = [C(z^{-1}) / A(z^{-1})] e(t)
For the ARMA model, N(z^{-1}) = C(z^{-1}) / A(z^{-1}); write
C(z^{-1}) / A(z^{-1}) = E(z^{-1}) + z^{-k} F(z^{-1}) / A(z^{-1})
with
E(z^{-1}) = N_k(z^{-1}) = e0 + e1 z^{-1} + ... + e_{k-1} z^{-(k-1)}
F(z^{-1}) = f0 + f1 z^{-1} + ...
Diophantine Identity: C(z^{-1}) = A(z^{-1}) E(z^{-1}) + z^{-k} F(z^{-1})
Polynomial orders: { C: nc, A: na, E: ne = k - 1, F: nf = na - 1 }
k-step-ahead prediction:
y(t+k) = [C(z^{-1}) / A(z^{-1})] e(t+k) = E(z^{-1}) e(t+k) + [F(z^{-1}) / A(z^{-1})] e(t)
= E(z^{-1}) e(t+k) + [F(z^{-1}) / C(z^{-1})] y(t)
yhat(t+k | t) = [F(z^{-1}) / C(z^{-1})] y(t)
Prediction error:
ytilde(t+k | t) = y(t+k) - yhat(t+k | t) = E(z^{-1}) e(t+k)
Example:
A(z^{-1}) = 1 - 0.9 z^{-1}, C(z^{-1}) = 1 + 0.7 z^{-1}, k = 2
na = 1, nc = 1, ne = 1, nf = 0
Diophantine Identity:
1 + 0.7 z^{-1} = (1 - 0.9 z^{-1})(e0 + e1 z^{-1}) + z^{-2} f0
1 + 0.7 z^{-1} = e0 + (e1 - 0.9 e0) z^{-1} + (f0 - 0.9 e1) z^{-2}
Solution: e0 = 1, e1 = 0.9 e0 + 0.7 = 1.6, f0 = 0.9 e1 = 1.44
Two-step-ahead prediction:
yhat(t+2 | t) = [F(z^{-1}) / C(z^{-1})] y(t) = [1.44 / (1 + 0.7 z^{-1})] y(t)
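The example's Diophantine solution can be checked numerically by multiplying the polynomials in z^{-1} (coefficient arrays ordered by increasing power of z^{-1}):

```python
import numpy as np

# Check C = A*E + z^{-2}*F for
# A = 1 - 0.9 z^{-1}, E = 1 + 1.6 z^{-1}, F = 1.44
A = np.array([1.0, -0.9])
E = np.array([1.0, 1.6])
F0 = 1.44
C = np.convolve(A, E)   # A*E = 1 + 0.7 z^{-1} - 1.44 z^{-2}
C[2] += F0              # add the z^{-2} * F term
print(C)                # approximately [1.0, 0.7, 0.0]
```

The z^{-2} coefficient cancels, leaving C(z^{-1}) = 1 + 0.7 z^{-1} as required.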
6. Minimum-Variance (MV) Control
Process model: A(z^{-1}) y(t) = B(z^{-1}) u(t-k) + C(z^{-1}) e(t)
Diophantine Identity: E A = C - z^{-k} F
y(t+k) = [F / C] y(t) + [E B / C] u(t) + E e(t+k)
Prediction: yhat(t+k | t) = [F / C] y(t) + [E B / C] u(t)
Prediction Error: ytilde(t+k | t) = y(t+k) - yhat(t+k | t) = E(z^{-1}) e(t+k)
MV Control: choose u(t) so that yhat(t+k | t) = 0:
u(t) = - [F(z^{-1}) / (E(z^{-1}) B(z^{-1}))] y(t)
Potential Problem: nonminimum-phase system (the controller inverts B)
MV Controller for the example above (with B = 0.5):
u(t) = - [1.44 / 0.5] / (1 + 1.6 z^{-1}) y(t)
7. Minimum-Variance Self-Tuning
Direct Adaptive Control: identify the control (predictor) model directly
Indirect Adaptive Control: identify the process model, then design the controller
Define
G(z^{-1}) = E(z^{-1}) B(z^{-1}) = g0 + g1 z^{-1} + ... + g_{k-1+nb} z^{-(k-1+nb)}
F(z^{-1}) = f0 + f1 z^{-1} + ... + f_nf z^{-nf}
Prediction: yhat(t+k | t) = [F / C] y(t) + [G / C] u(t), i.e.
C(z^{-1}) yhat(t+k | t) = F(z^{-1}) y(t) + G(z^{-1}) u(t)
Shifting back k steps:
C(z^{-1}) yhat(t | t-k) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k)
yhat(t | t-k) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k) - sum_j c_j yhat(t-j | t-k-j)
With C(z^{-1}) = 1 this simplifies to
yhat(t | t-k) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k)
and, since y(t) = yhat(t | t-k) + E(z^{-1}) e(t),
y(t) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k) + E(z^{-1}) e(t)
which is linear in the parameters, y(t) = x^T(t) theta + E e(t), and can be estimated by LS.
Problems of MV:
(1) Nonminimum phase: the control law
u(t) = - [F(z^{-1}) / (E(z^{-1}) B(z^{-1}))] y(t)
cancels the process zeros, so an unstable zero in B(z^{-1}) makes the controller unstable.
(2) Nominal delay < actual delay: if the assumed delay k is smaller than the true delay, the predictor regresses on inputs that have not yet affected the output, and the design degrades.
Batch (long-range) predictive form:
Y = G U + P
where Y stacks the predicted outputs, U the future controls (U in R^{NU}), G the N x NU matrix of step-response coefficients, and P the free response.
Minimize J = E^T E with E = W - Y (W: set-point vector):
U = (G^T G)^{-1} G^T (W - P)
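A sketch of the batch control computation U = (G^T G)^{-1} G^T (W - P); the step-response samples below are illustrative, not from the notes.

```python
import numpy as np

# Predictive control moves from the stacked model Y = G*U + P:
#   U = (G'G)^{-1} G' (W - P)
g = np.array([0.2, 0.36, 0.488, 0.59])   # step-response samples (assumed)
N = 4
G = np.zeros((N, N))
for i in range(N):
    G[i, :i + 1] = g[i::-1]              # lower-triangular Toeplitz: G[i,j] = g[i-j]
P = np.zeros(N)                          # zero free response (illustrative)
W = np.ones(N)                           # set-point vector
U = np.linalg.solve(G.T @ G, G.T @ (W - P))
print(U)
```

Here G is square and invertible, so the resulting moves drive every predicted output exactly to the set-point; with NU < N the same formula gives the least-squares compromise.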
Project 1: Adaptive Control of a Time-Varying Linear System
The process parameters vary with time (0 <= t <= 1000):
a1(t) = 0.3 + 0.0002 t
a2(t) = 0.9 - 0.0002 t
b3(t) = 1 - 0.0005 t
b4(t) = 0.5 + 0.0002 t
e(t) ~ N(0, 0.1^2)
Design an adaptive system to control y(t) (0 <= t <= 1000) for set-point y0 = 1.
Report Requirements:
(1) Method selection
(2) System Design
(3) Program
(4) Simulation Results
(5) Results Analysis
(6) Conclusions
A. Crisp Sets
Membership function mu_A(x):
mu_A(x) = 1 (x in A); mu_A(x) = 0 (x not in A)
Equivalence: set A <-> membership function mu_A(x)
Example 1: Cars: color, domestic/foreign, cylinders
B. Fuzzy Sets
Membership function mu_A(x) in [0, 1]: a measurement of the degree of similarity
Example 1 (cont'd): domestic/foreign
An element can reside in more than one fuzzy set, with different degrees of similarity (membership values).
Representation of a fuzzy set:
- F = {(x, mu_F(x)) | x in U}
(pairs of element and membership value)
- F = int_U mu_F(x)/x (continuous universe of discourse U), or
F = sum_U mu_F(x)/x (discrete universe)
Example 2: F = integers close to 10
F = 0.1/7 + 0.5/8 + 0.8/9 + 1/10 + 0.8/11 + 0.5/12 + 0.1/13
(Elements with zero mu_F(x), subjectiveness of mu_F(x), symmetry)
C. Linguistic Variables
Linguistic variables: variables whose values are not given by numbers but by words or sentences
u: name of a (linguistic) variable
x: numerical value of a (linguistic) variable, x in U
(often interchangeable with u when u is a single letter)
Set of terms T(u): linguistic values of a (linguistic) variable
Specification of terms: fuzzy sets (names of the terms and membership functions)
Example 3: Pressure
- Name of the variable: pressure
- Terms: T(pressure) = {weak, low, okay, strong, high}
- Universe of discourse U = [100 psi, 2300 psi]
- Weak: below 200 psi; low: close to 700 psi; okay: close to 1050 psi;
strong: close to 1500 psi; high: above 2200 psi
Linguistic descriptions <-> membership functions
D. Membership Functions
mu_F(x)
Examples
Number of membership functions (terms): resolution vs. computational complexity
Overlap (a glass can be partially full and partially empty at the same time)
E. Some Terminology
The support of a fuzzy set
Crossover point
Fuzzy singleton: a fuzzy set whose support is a single point with unity membership value.
F. Set Theoretic Operations
F1. Crisp Sets
A and B: subsets of U
Union of A and B: A ∪ B
mu_{A∪B}(x) = 1 if x in A or x in B; 0 if x not in A and x not in B
Intersection of A and B: A ∩ B
mu_{A∩B}(x) = 1 if x in A and x in B; 0 if x not in A or x not in B
Complement of A: Abar
mu_Abar(x) = 1 if x not in A; 0 if x in A
Equivalently:
A ∪ B: mu_{A∪B}(x) = max[mu_A(x), mu_B(x)]
A ∩ B: mu_{A∩B}(x) = min[mu_A(x), mu_B(x)]
mu_Abar(x) = 1 - mu_A(x)
Union and intersection: commutative, associative, and distributive
De Morgan's Laws: complement of (A ∪ B) = Abar ∩ Bbar; complement of (A ∩ B) = Abar ∪ Bbar
The two fundamental (Aristotelian) laws of crisp set theory:
- Law of Contradiction: A ∩ Abar = ∅
- Law of Excluded Middle: A ∪ Abar = U
F2. Fuzzy Sets
Fuzzy set A: mu_A(x); fuzzy set B: mu_B(x)
Operations on fuzzy sets:
mu_{A∪B}(x) = max[mu_A(x), mu_B(x)]
mu_{A∩B}(x) = min[mu_A(x), mu_B(x)]
mu_Abar(x) = 1 - mu_A(x)
Law of Contradiction? A ∩ Abar = ∅? (no: min[mu_A, 1 - mu_A] is not identically zero)
Law of Excluded Middle? A ∪ Abar = U? (no: max[mu_A, 1 - mu_A] is not identically one)
Multiple definitions:
Fuzzy union: maximum, or algebraic sum mu_{A∪B}(x) = mu_A(x) + mu_B(x) - mu_A(x) mu_B(x)
Fuzzy intersection: minimum, or algebraic product mu_{A∩B}(x) = mu_A(x) mu_B(x)
Fuzzy union: t-conorm (s-norm); fuzzy intersection: t-norm
Examples:
t-conorms:
Bounded sum: mu_{A∪B} = min(1, mu_A + mu_B)
Drastic sum: mu_{A∪B} = mu_A if mu_B = 0; mu_B if mu_A = 0; 1 if mu_A > 0 and mu_B > 0
t-norms:
Bounded product: mu_{A∩B} = max(0, mu_A + mu_B - 1)
Drastic product: mu_{A∩B} = mu_A if mu_B = 1; mu_B if mu_A = 1; 0 if mu_A < 1 and mu_B < 1
Generalization of De Morgan's Laws (c: complement):
s[mu_A(x), mu_B(x)] = c{t[c(mu_A(x)), c(mu_B(x))]}
t[mu_A(x), mu_B(x)] = c{s[c(mu_A(x)), c(mu_B(x))]}
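The pointwise union and intersection definitions can be sketched as plain functions on membership grades a = mu_A(x), b = mu_B(x) in [0, 1]:

```python
# t-conorms (fuzzy union) and t-norms (fuzzy intersection), applied
# pointwise to membership grades in [0, 1].
def s_max(a, b):       return max(a, b)              # maximum
def s_alg_sum(a, b):   return a + b - a * b          # algebraic sum
def s_bounded(a, b):   return min(1.0, a + b)        # bounded sum

def t_min(a, b):       return min(a, b)              # minimum
def t_alg_prod(a, b):  return a * b                  # algebraic product
def t_bounded(a, b):   return max(0.0, a + b - 1.0)  # bounded product

def complement(a):     return 1.0 - a

# Generalized De Morgan's law: s(a, b) = c(t(c(a), c(b)))
a, b = 0.6, 0.3
assert abs(s_max(a, b) - complement(t_min(complement(a), complement(b)))) < 1e-12
```

Note that the law of contradiction fails under min/complement: t_min(a, complement(a)) is 0.4 at a = 0.6, not 0.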
III. SHORT PRIMER ON FUZZY LOGIC
A. Crisp Logic
Rules: a form of propositions
Proposition: an ordinary statement involving terms which have been defined
Example: IF the damping ratio is low, THEN the system's impulse response oscillates a long time before it dies.
Propositions: true or false
Logical reasoning: the process of combining given propositions into other propositions, ...
Combinations:
- Conjunction p ∧ q (simultaneous truth)
- Disjunction p ∨ q (truth of either or both)
- Implication p → q (IF-THEN rule): antecedent, consequent
- Negation ~p
- Equivalence p ↔ q (both true or both false)
Truth Table
The fundamental axioms of traditional propositional logic:
- Every proposition is either true or false
- The expressions given by defined terms are propositions
- The truth tables for conjunction, disjunction, implication, negation, and equivalence
Tautology: a proposition formed by combining other propositions (p, q, r, ...) which is true regardless of the truth or falsehood of p, q, r, ...
Examples: (p → q) ↔ ~[p ∧ (~q)]
(p → q) ↔ (~p) ∨ q
Membership function for p → q:
mu_{p→q}(x, y) = 1 - min[mu_p(x), 1 - mu_q(y)]
mu_{p→q}(x, y) = max[1 - mu_p(x), mu_q(y)]
mu_{p→q}(x, y) = 1 - mu_p(x)(1 - mu_q(y))
mu_{p→q}(x, y) = min[1, 1 - mu_p(x) + mu_q(y)]
Inference Rules:
Modus Ponens: Premise 1: "x is A"; Premise 2: "IF x is A THEN y is B"
Consequence: "y is B" (A → B)
(p ∧ (p → q)) → q
Modus Tollens: Premise 1: "y is not B"; Premise 2: "IF x is A THEN y is B"
Consequence: "x is not A"
(~q ∧ (p → q)) → ~p
B. Fuzzy Logic
Membership function of the IF-THEN statement "IF u is A, THEN v is B" (u in U, v in V):
mu_{A→B}(x, y): truth degree of the implication relation between x and y
B1. Crisp Logic → Fuzzy Logic?
From crisp logic:
mu_{A→B}(x, y) = 1 - min[mu_A(x), 1 - mu_B(y)]
mu_{A→B}(x, y) = max[1 - mu_A(x), mu_B(y)]
mu_{A→B}(x, y) = 1 - mu_A(x)(1 - mu_B(y))
Do they make sense in fuzzy logic?
Generalized Modus Ponens: Premise 1: "u is A*"; Premise 2: "IF u is A THEN v is B"
Consequence: "v is B*"
Example: "IF a man is short, THEN he will not make a very good professional basketball player"
A: short man, B: not a very good player
"This man is under 5 feet tall" -- A*: man under 5 feet tall
"He will make a poor professional basketball player" -- B*: poor player
Crisp logic: (A* ∘ (A → B)) → B* (composition of relations)
mu_B*(y) = sup_{x in A*} [mu_A*(x) * mu_{A→B}(x, y)]
Examine mu_B*(y) = sup_{x in A*} [mu_A*(x) * mu_{A→B}(x, y)]
using mu_{A→B}(x, y) borrowed from crisp logic
and the singleton fuzzifier mu_A*(x') = 1, mu_A*(x) = 0 for x != x':
mu_B*(y) = mu_A*(x') * mu_{A→B}(x', y)
= 1 * mu_{A→B}(x', y) = min[1, mu_{A→B}(x', y)]
= mu_{A→B}(x', y) = 1 - min[mu_A(x'), 1 - mu_B(y)]
B2. Engineering Implications of Fuzzy Logic
Product implication: mu_{A→B}(x, y) = mu_A(x) mu_B(y)
(Disagreement with propositional logic)
Rule form:
R^(l): IF x1 is F1^l and ... and xp is Fp^l, THEN v is G^l, l = 1, 2, ..., M
F_i^l's: fuzzy sets in U_i ⊂ R
G^l: fuzzy set in V ⊂ R
Multiple Antecedents
Example 18: Ball on beam
Objective: to drive the ball to the origin and maintain it at the origin
Control variable: u = d^2(theta)/dt^2
Nonlinear system; states: r, dr/dt, theta, dtheta/dt
Rules:
R^(1): IF r is positive and dr/dt is near zero and theta is positive and dtheta/dt is near zero,
THEN u is negative
...
Rules:
Example 21: Time Series x(k), k = 1, 2, ...
Problem: from x(k-n+1), x(k-n+2), ..., x(k), predict x(k+1)
Given: x(1), x(2), ..., x(D)
D - n training pairs:
x^(1): [x(1), x(2), ..., x(n); x(n+1)]
x^(2): [x(2), x(3), ..., x(n+1); x(n+2)]
...
Degree of a generated rule:
D(R^(j)) = mu_X(x1^(j)) mu_X(x2^(j)) ... mu_X(xn^(j)) mu_X(y^(j))
Nonobvious Rules:
Rule as a fuzzy implication:
R^(l): A → B, with A = F1^l × F2^l × ... × Fp^l and B = G^l
mu_{R^(l)}(x, y) = mu_{A→B}(x, y) = mu_{F1^l}(x1) * mu_{F2^l}(x2) * ... * mu_{Fp^l}(xp) * mu_{G^l}(y)
B^l = A_x ∘ R^(l)
Combining Rules:
Final fuzzy set: B = A_x ∘ [R^(1), R^(2), ..., R^(M)] = ∪_{l=1}^{M} A_x ∘ R^(l) = ∪_{l=1}^{M} B^l
Using a t-conorm: B = B^1 ∨ B^2 ∨ ... ∨ B^M
Additive combiner: weights
Example 22: Truck backing up
phi(t_i) = 140 deg, x(t_i) = 6
phi(t_i): B1, B2
x(t_i): S1, S2
C. Fuzzification
Maps a crisp point x = col(x1, x2, ..., xn) in U into a fuzzy set A* defined in U
Singleton fuzzifier:
mu_A*(x) = 1 if x = x'; 0 if x != x'
Nonsingleton fuzzifier:
mu_{B^l}(y) = sup_{x in U} [mu_{A_x}(x) * mu_{A→B}(x, y)]
= sup_{x in U} [mu_{X1}(x1) * mu_{X2}(x2) * ... * mu_{Xp}(xp) * mu_{F1^l}(x1) * ... * mu_{Fp^l}(xp) * mu_{G^l}(y)]
Gaussian input fuzzy sets:
mu_{Xk}(xk) = exp{-(1/2) [(xk - m_{Xk}) / sigma_{Xk}]^2}
Let Q_k^l(xk) = mu_{Xk}(xk) mu_{F_k^l}(xk)
With Gaussian mu_{F_k^l}, Q_k^l is maximized at
x_{k,max} = (sigma_{Xk}^2 m_{F_k^l} + sigma_{F_k^l}^2 m_{Xk}) / (sigma_{Xk}^2 + sigma_{F_k^l}^2)
With m_{Xk} = xk' (the measured value):
x_{k,max} = (sigma_{Xk}^2 m_{F_k^l} + sigma_{F_k^l}^2 xk') / (sigma_{Xk}^2 + sigma_{F_k^l}^2)
Fuzzifier: acts as a prefilter
mu_{B^l}(y) = mu_{G^l}(y) prod_{k=1}^{p} Q_k^l(x_{k,max})
For the singleton fuzzifier, x_{k,max} = xk'
D. Defuzzifier
1) Maximum Defuzzifier
2) Mean of Maximum Defuzzifier
3) Centroid Defuzzifier
4) Height Defuzzifier
5) Modified Defuzzifier
E. Possibilities
Height (center-average) defuzzifier with the singleton fuzzifier:
y = f_s(x) = sum_{l=1}^{M} [ybar^l prod_{i=1}^{p} mu_{F_i^l}(x_i)] / sum_{l=1}^{M} [prod_{i=1}^{p} mu_{F_i^l}(x_i)]
where ybar^l is the point at which mu_{G^l} attains its maximum:
mu_{G^l}(ybar^l) = max_y mu_{G^l}(y) = 1
With the nonsingleton (Gaussian) fuzzifier,
mu_{B^l}(y) = mu_{G^l}(y) prod_{k=1}^{p} Q_k^l(x_{k,max})
and
y = f_ns(x) = sum_{l=1}^{M} [ybar^l prod_{k=1}^{p} Q_k^l(x_{k,max})] / sum_{l=1}^{M} [prod_{k=1}^{p} Q_k^l(x_{k,max})]
Fuzzy Basis Functions (FBFs):
y = f(x) = sum_{l=1}^{M} ybar^l phi^l(x)
FBF phi^l(x) (l = 1, ..., M):
phi^l(x) = prod_{k=1}^{p} mu_{F_k^l}(xk) / sum_{j=1}^{M} prod_{k=1}^{p} mu_{F_k^j}(xk) (singleton)
phi^l(x) = prod_{k=1}^{p} Q_k^l(x_{k,max}) / sum_{j=1}^{M} prod_{k=1}^{p} Q_k^j(x_{k,max}) (nonsingleton, Gaussian)
FBFs depend on the fuzzifier, membership functions, composition, inference, defuzzifier, and number of rules.
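A sketch of the FBF expansion with Gaussian membership functions and a singleton fuzzifier; the centers, widths, and rule centers ybar are illustrative values, not from the notes.

```python
import numpy as np

# Center-average fuzzy system f(x) = sum_l ybar_l * phi_l(x) with
# Gaussian memberships (product over the p = 2 inputs, per rule).
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # rule antecedent centers
sigma = 0.5
ybar = np.array([0.0, 1.0, 2.0])                          # rule consequent centers

def fbf(x):
    """Normalized fuzzy basis functions phi_l(x)."""
    w = np.exp(-0.5 * np.sum(((x - centers) / sigma) ** 2, axis=1))
    return w / np.sum(w)

def f(x):
    return float(ybar @ fbf(x))

print(f(np.array([1.0, 1.0])))   # dominated by rule 2 -> 1.0
```

The normalized basis functions sum to one at every x, so the output is always a convex combination of the ybar^l.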
Combining rules from numerical data and expert linguistic knowledge:
y = f(x) = sum_{j=1}^{M} ybar^j phi^j(x) = sum_{i=1}^{M_N} ybar_N^i phi_{N,i}(x) + sum_{k=1}^{M_L} ybar_L^k phi_{L,k}(x)
(M = M_N + M_L)
FBFs from the numerical data:
phi_{N,i}(x) = prod_{s=1}^{p} mu_{F_s^i}(x_s) / sum_{j=1}^{M} prod_{s=1}^{p} mu_{F_s^j}(x_s), i = 1, ..., M_N
and from the linguistic rules:
phi_{L,k}(x) = prod_{s=1}^{p} mu_{F_s^k}(x_s) / sum_{j=1}^{M} prod_{s=1}^{p} mu_{F_s^j}(x_s), k = 1, ..., M_L
Training data pairs:
x^(1): y^(1)
x^(2): y^(2)
...
x^(N): y^(N)
Parameter set theta:
y = f(theta, x)
Minimize the amplitude of y^(i) - f(theta, x^(i)):
Nonlinear optimization of the cost function
min_theta J(theta) = sum_{i=1}^{N} [y^(i) - f(theta, x^(i))]^2
INTRODUCTION
FUZZY SETS, FUZZY RULES, FUZZY REASONING, AND FUZZY MODELS
ADAPTIVE NETWORKS
H. Architecture
Feedforward adaptive network & recurrent adaptive network
Fixed nodes & adaptive nodes
Layered representation & topological-ordering representation (no links from node i to node j, i >= j)
Example 3: An adaptive network with a single linear node
x3 = f3(x1, x2; a1, a2, a3) = a1 x1 + a2 x2 + a3
Example 4: A building block for the perceptron or the back-propagation neural network
x3 = f3(x1, x2; a1, a2, a3) = a1 x1 + a2 x2 + a3
x4 = f4(x3) = 1 if x3 >= 0; 0 if x3 < 0
Linear classifier; building block of the classical perceptron
Step function: discontinuous gradient
Sigmoid function: continuous gradient
x4 = f4(x3) = 1 / (1 + e^{-x3})
x7 = 1 / (1 + exp[-(w_{4,7} x4 + w_{5,7} x5 + w_{6,7} x6 + t7)])
Cost function for training:
E_p = sum_{k=1}^{N(L)} (d_k - x_{L,k})^2, E = sum_{p=1}^{P} E_p
Ordered derivative: eps_{l,i} = d+E_p / dx_{l,i}
Example 6: the ordinary partial derivative dE_p/dx_{l,i} versus the ordered derivative d+E_p/dx_{l,i}
y = f(x)
z = g(x, y)
Ordinary partial derivative:
dz/dx = dg(x, y)/dx (y held fixed)
Ordered derivative:
d+z/dx = dg(x, f(x))/dx = dg(x, y)/dx |_{y=f(x)} + dg(x, y)/dy |_{y=f(x)} * df(x)/dx
Numeric example:
y = f(x) = 2x
z = g(x, y) = 5x + 2y
Ordinary partial derivative: dz/dx = 5
Ordered derivative: d+z/dx = 5 + 2 * 2 = 9
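The ordinary versus ordered derivative in this example can be checked by finite differences (y held fixed for the ordinary partial, substituted as f(x) for the ordered one):

```python
# y = f(x) = 2x,  z = g(x, y) = 5x + 2y
# ordinary partial:  dz/dx|_y = 5
# ordered derivative: d+z/dx = dg/dx + dg/dy * df/dx = 5 + 2*2 = 9
def f(x):
    return 2.0 * x

def g(x, y):
    return 5.0 * x + 2.0 * y

def z(x):
    return g(x, f(x))   # y substituted, as in back-propagation

h = 1e-6
ordered = (z(1.0 + h) - z(1.0 - h)) / (2 * h)                     # central difference
ordinary = (g(1.0 + h, f(1.0)) - g(1.0 - h, f(1.0))) / (2 * h)    # y frozen at f(1)
print(ordinary, ordered)   # 5.0 and 9.0, up to rounding
```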
Back-Propagation Equations:
Output layer: eps_{L,i} = dE_p / dx_{L,i} = -2 (d_i - x_{L,i})
Hidden layers:
eps_{l,i} = d+E_p / dx_{l,i} = sum_{m=1}^{N(l+1)} eps_{l+1,m} * (df_{l+1,m} / dx_{l,i}) (0 <= l <= L-1)
For a parameter alpha (with S the set of node outputs x* whose functions f* depend on alpha):
d+E_p / dalpha = sum_{x* in S} (d+E_p / dx*) * (df* / dalpha)
Total cost: d+E / dalpha = sum_{p=1}^{P} d+E_p / dalpha
Update formula: Delta(alpha) = -eta * (d+E / dalpha)
eta: learning rate; eta can be determined by a step size kappa:
eta = kappa / sqrt( sum_alpha (dE/dalpha)^2 )
On-Line Learning
Different ways of combining GD and LSE
K. Neural Networks as Special Cases of Adaptive Networks
D1. Back-Propagation Neural Networks (BPNNs)
Node function: composition of a weighted sum and a nonlinear function (activation function or transfer function)
Activation function: a differentiable sigmoidal- or hypertangent-type function which approximates the step function
Four types of activation functions:
Step function: f(x) = 1 if x >= 0; 0 if x < 0
Sigmoidal function: f(x) = 1 / (1 + e^{-x})
Hypertangent function: f(x) = (1 - e^{-x}) / (1 + e^{-x})
Identity function: f(x) = x
Example: a three-input node
Inputs: x1, x2, x3; output of the node: x4
Weighted sum: xbar4 = sum_{i=1}^{3} w_{i4} x_i - t4
Sigmoid function: x4 = 1 / (1 + e^{-xbar4})
w_{i4}: weights
t4: threshold
Example: a two-layer BPNN with 3 inputs and 2 outputs
D2. The Radial Basis Function Networks (RBFNs)
Radial basis function approximation: local receptive fields
Example: an RBFN with five receptive-field units
Activation level of the ith receptive field:
w_i = R_i(x) = R_i(||x - c_i|| / sigma_i), i = 1, 2, ..., H
Gaussian function: R_i(x) = exp(-||x - c_i||^2 / sigma_i^2)
or
Logistic function: R_i(x) = 1 / (1 + exp{||x - c_i||^2 / sigma_i^2})
Maximized at the center x = c_i
Final output:
f(x) = sum_{i=1}^{H} f_i w_i = sum_{i=1}^{H} f_i R_i(x)
or (weighted average)
f(x) = sum_{i=1}^{H} f_i w_i / sum_{i=1}^{H} w_i = sum_{i=1}^{H} f_i R_i(x) / sum_{i=1}^{H} R_i(x)
Identification:
c_i's: clustering techniques
sigma_i's: heuristic
then f_i = a_i^T x + b_i: least squares method
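A sketch of the two RBFN output forms above, with Gaussian receptive fields; the centers, widths, and unit outputs f_i are illustrative values.

```python
import numpy as np

# RBFN with H = 3 Gaussian receptive fields:
#   w_i = R_i(x) = exp(-||x - c_i||^2 / sigma_i^2)
c = np.array([[0.0], [1.0], [2.0]])       # receptive-field centers
sigma = np.array([0.5, 0.5, 0.5])
fvals = np.array([0.0, 1.0, 4.0])         # f_i: output associated with each unit

def rbf_weights(x):
    return np.exp(-np.sum((x - c) ** 2, axis=1) / sigma ** 2)

def f_weighted_sum(x):
    w = rbf_weights(x)
    return float(fvals @ w)

def f_normalized(x):
    w = rbf_weights(x)
    return float(fvals @ w / np.sum(w))   # weighted-average form

print(f_normalized(np.array([1.0])))      # dominated by the center at 1 -> near 1.0
```

The normalized form interpolates between the f_i; with f_i = a_i^T x + b_i fitted by least squares it becomes a local linear model blend.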
IV. ANFIS
A. ANFIS Architecture
Example: a two-input (x and y), one-output (z) ANFIS
Rule 1: IF x is A1 and y is B1, then f1 = p1 x + q1 y + r1
Rule 2: IF x is A2 and y is B2, then f2 = p2 x + q2 y + r2
ANFIS architecture:
Layer 1: adaptive nodes
O_{1,i} = mu_{A_i}(x), i = 1, 2
O_{1,i} = mu_{B_{i-2}}(y), i = 3, 4
mu_{A_i} and mu_{B_i}: any appropriate parameterized membership functions, e.g. the generalized bell
mu_{A_i}(x) = 1 / (1 + |(x - c_i) / a_i|^{2 b_i})
{a_i, b_i, c_i}: premise parameters
Layer 2: fixed nodes with the function of multiplication
O_{2,i} = w_i = mu_{A_i}(x) mu_{B_i}(y), i = 1, 2 (firing strength of a rule)
Layer 3: fixed nodes with the function of normalization
O_{3,i} = wbar_i = w_i / (w1 + w2), i = 1, 2 (normalized firing strength)
Layer 4: adaptive nodes
O_{4,i} = wbar_i f_i = wbar_i (p_i x + q_i y + r_i)
{p_i, q_i, r_i}: consequent parameters
Layer 5: a fixed node with the function of summation
O_{5,1} = overall output = sum_i wbar_i f_i
Example: a two-input first-order Sugeno fuzzy model with nine rules
B. Hybrid Learning Algorithm
When the premise parameters are fixed:
f = [w1 / (w1 + w2)] f1 + [w2 / (w1 + w2)] f2 = wbar1 f1 + wbar2 f2
= (wbar1 x) p1 + (wbar1 y) q1 + (wbar1) r1 + (wbar2 x) p2 + (wbar2 y) q2 + (wbar2) r2
a linear function of the consequent parameters
Hybrid learning scheme: forward pass, LSE for the consequent parameters; backward pass, gradient descent for the premise parameters
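The five layers can be sketched as a forward pass; the bell membership and all parameter values below are illustrative, not from the notes.

```python
import numpy as np

# Forward pass of the two-rule ANFIS:
# Layer 1: bell memberships; Layer 2: w_i = mu_Ai(x)*mu_Bi(y);
# Layer 3: wbar_i = w_i / sum(w); Layers 4-5: f = sum wbar_i * f_i.
def bell(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# premise parameters {a_i, b_i, c_i} for A1, A2 (on x) and B1, B2 (on y)
A = [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
B = [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
# consequent parameters {p_i, q_i, r_i}
pqr = [(1.0, 1.0, 0.0), (2.0, 0.5, 1.0)]

def anfis(x, y):
    w = np.array([bell(x, *A[i]) * bell(y, *B[i]) for i in range(2)])
    wbar = w / w.sum()                          # layer 3: normalization
    fi = np.array([p * x + q * y + r for (p, q, r) in pqr])
    return float(wbar @ fi)                     # layer 5: overall output

print(anfis(0.0, 0.0))   # rule 1 dominates -> close to f1 = 0.0
```

With the premise parameters frozen, each output is linear in {p_i, q_i, r_i}, which is exactly what makes the forward-pass LSE step of the hybrid algorithm possible.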
C. Application to Chaotic Time Series Prediction
Example: Mackey-Glass differential delay equation
dx/dt = 0.2 x(t - tau) / (1 + x^10(t - tau)) - 0.1 x(t)
Prediction problem: from x(t - (D-1)Delta), ..., x(t - Delta), x(t), predict x(t + P)
D = 4, P = 6, Delta = 6, 1000 data pairs:
x(t - 18), x(t - 12), x(t - 6), x(t) -> x(t + 6)
500 pairs for training, 500 for verification
Input partition: 2 membership functions per input -> 16 rules; number of parameters: 104
(premise: 24, consequent: 80)
Prediction results:
no significant difference in prediction error for training and validating
Reasons for excellence
V. NEURO-FUZZY CONTROL
Dynamic Model: xdot = f(x(t), u(t))
Desired Trajectory: x_d(t)
Control Law: u(t) = g(x(t))
Discrete System
Dynamic Model: x(k+1) = f(x(k), u(k))
Desired Trajectory: x_d(k)
Control Law: u(k) = g(x(k))
A. Mimicking Another Working Controller
Skilled human operators
Nonlinear approximation ability
Refining the membership functions
B. Inverse Control
Minimizing the control error
C. Specialized Learning
Minimizing the output error: needs the model of the process
D. Back-Propagation Through Time and Real-Time Recurrent Learning
Principle
Computation and implementation: off-line / on-line
L. Feedback Linearization and Sliding Control
M. Gain Scheduling
Sugeno fuzzy controller (inverted pendulum; theta: pole angle, z: cart position):
If pole is short, then f1 = k11 theta + k12 thetadot + k13 z + k14 zdot
If pole is medium, then f2 = k21 theta + k22 thetadot + k23 z + k24 zdot
If pole is long, then f3 = k31 theta + k32 thetadot + k33 z + k34 zdot
Operating points -> linear controllers -> fuzzy control rules
G. Analytic Design
Project 2: Neuro-Fuzzy Non-Linear Control System Design
1. Given Process
The process is described by the following fuzzy model.
A. Rules:
Rule 1: IF u(k-1) is VERY SMALL, then y1(k) = a1 u(k-1) + b1 u(k-2)
Rule 2: IF u(k-1) is SMALL, then y2(k) = a2 u(k-1) + b2 u(k-2)
Rule 3: IF u(k-1) is MEDIUM, then y3(k) = a3 u(k-1) + b3 u(k-2)
Rule 4: IF u(k-1) is LARGE, then y4(k) = a4 u(k-1) + b4 u(k-2)
Rule 5: IF u(k-1) is VERY LARGE, then y5(k) = a5 u(k-1) + b5 u(k-2)
where u is the input, yi (i = 1, 2, ..., 5) is the output from Rule i, and a1 = 0.5, a2 = 0.4,
a3 = 0.3, a4 = 0.2, a5 = 0.1, b1 = 1, b2 = 0.9, b3 = 0.8, b4 = 0.7, and b5 = 0.6 are the
consequent parameters.
B. Membership functions:
mu_VerySmall(u) = exp(-(u - 0)^2 / 0.5^2)
mu_Small(u) = exp(-(u - 1)^2 / 0.5^2)
mu_Medium(u) = exp(-(u - 2)^2 / 0.5^2)
mu_Large(u) = exp(-(u - 3)^2 / 0.5^2)
mu_VeryLarge(u) = exp(-(u - 4)^2 / 0.5^2)
C. System output
y = sum_{i=1}^{5} wbar_i y_i, wbar_i = w_i / sum_{j=1}^{5} w_j
where w_i (i = 1, 2, 3, 4, 5) is the firing strength of Rule i.
2. The desired trajectory of the system output is:
y0(k) = 0.5 (0 <= k < 400)
y0(k) = 1 (400 <= k < 700)
y0(k) = 1.5 (700 <= k <= 1000)
3. Assume that the consequent parameters a_j's and b_j's are unknown. Design an adaptive
control system for the given system to achieve the desired trajectory of the output under the
constraint 0 <= u <= 4. (An off-line identification procedure may be used to obtain the initials of
the premise parameters.)
Report Requirements:
(1) Method selection
(2) System Design
(3) Program
(4) Simulation Results
(5) Results Analysis
(6) Conclusions
Due: 12/14/98
Rules:
Rule 1: IF u(k-1) is VERY SMALL, then y1(k) = a1 u(k-1) + b1 u(k-2)
Rule 2: IF u(k-1) is SMALL, then y2(k) = a2 u(k-1) + b2 u(k-2)
Rule 3: IF u(k-1) is MEDIUM, then y3(k) = a3 u(k-1) + b3 u(k-2)
Rule 4: IF u(k-1) is LARGE, then y4(k) = a4 u(k-1) + b4 u(k-2)
Rule 5: IF u(k-1) is VERY LARGE, then y5(k) = a5 u(k-1) + b5 u(k-2)
where u is the input and yi (i = 1, 2, ..., 5) is the output from Rule i.
Membership functions:
mu_VerySmall(u) = exp(-(u - 0)^2 / 0.5^2)
mu_Small(u) = exp(-(u - 1)^2 / 0.5^2)
mu_Medium(u) = exp(-(u - 2)^2 / 0.5^2)
mu_Large(u) = exp(-(u - 3)^2 / 0.5^2)
mu_VeryLarge(u) = exp(-(u - 4)^2 / 0.5^2)
If the output of the system is given by
y = sum_{i=1}^{5} wbar_i y_i,
explain the role of w_i (i = 1, 2, 3, 4, 5) and give a way to determine w_i (i = 1, 2, 3, 4, 5).
(20%)
4. The system is
y_k = a y_{k-1} + b u_{k-1} + eps_k
where
a and b: parameters of the system,
y_k and u_k: output and input at instant k, and
eps_k: the system's noise at instant k, with E(eps_k) = 0, E(eps_k^2) = sigma^2,
and E(eps_k eps_j) = 0 (k != j).