Transfer functions y = f(s) and their Matlab function names:

    hardlim   Hard limit:                   y = 0 if s < 0;  y = 1 if s >= 0
    hardlims  Symmetric hard limit:         y = -1 if s < 0;  y = 1 if s >= 0
    logsig    Logarithmic sigmoid:          y = 1 / (1 + e^(-s))
    tansig    Hyperbolic tangent sigmoid:   y = (e^s - e^(-s)) / (e^s + e^(-s))
    satlin    Saturating linear:            y = 0 if s < 0;  s if 0 <= s <= 1;  1 if s > 1
    satlins   Symmetric saturating linear:  y = -1 if s < -1;  s if -1 <= s <= 1;  1 if s > 1
    tribas    Triangular basis:             y = 1 - |s| if -1 <= s <= 1;  0 otherwise
    radbas    Radial basis:                 y = e^(-s^2)
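The transfer functions listed above can be sketched in Python for experimentation. The function names mirror the Matlab ones; this is an illustrative sketch, not code from the slides.

```python
import math

def hardlim(s):      # hard limit: 0 or 1
    return 1.0 if s >= 0 else 0.0

def hardlims(s):     # symmetric hard limit: -1 or +1
    return 1.0 if s >= 0 else -1.0

def logsig(s):       # logarithmic sigmoid, range (0, 1)
    return 1.0 / (1.0 + math.exp(-s))

def tansig(s):       # hyperbolic tangent sigmoid, range (-1, 1)
    return math.tanh(s)

def satlin(s):       # saturating linear, clipped to [0, 1]
    return min(max(s, 0.0), 1.0)

def satlins(s):      # symmetric saturating linear, clipped to [-1, 1]
    return min(max(s, -1.0), 1.0)

def tribas(s):       # triangular basis
    return max(1.0 - abs(s), 0.0)

def radbas(s):       # radial basis (Gaussian)
    return math.exp(-s * s)

print(hardlim(-0.5), logsig(0.0), radbas(0.0))   # 0.0 0.5 1.0
```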
June 2002 7
Feedforward Neural Network (FNN) Model
1-input 1-output single feedforward neuron model: the input u is scaled by a weight w, shifted by a bias b, and passed through a nonlinear activation function f.

Math model:

    s = w u + b
    y = f(s)
[Plots: I/O mapping y versus u for s = w*u + b with y = logsig(s) (w = 1, b = 1) and with y = radbas(s) (w = 1, b = 1).]
where w = weight and b = bias.
Examples of I/O mapping
[Plots: effects of bias b on logsig (w = 1; b = -1, 0, 1); effects of weight w on logsig (w = 0.5, 1.0, 2.0; b = 0); effects of bias b on radbas (w = 1; b = -1, 0, 1); effects of weight w on radbas (w = 0.5, 1.0, 2.0; b = 0).]

Math model:  s = w u + b,  y = f(s)
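A minimal Python sketch of this math model, using the logsig activation for illustration:

```python
import math

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def neuron(u, w, b):
    """1-input 1-output feedforward neuron: s = w*u + b, y = logsig(s)."""
    return logsig(w * u + b)

print(neuron(0.0, 1.0, 0.0))                          # 0.5
print(neuron(0.0, 1.0, 1.0) > neuron(0.0, 1.0, 0.0))  # True: the bias shifts the curve
```

Increasing b shifts the sigmoid left along the u axis; increasing w steepens it, which is exactly what the plots above show.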
2-input 1-output single feedforward neuron model: inputs u1 and u2, weights w1 and w2, bias b1, nonlinear activation function f.

    s1 = w1 u1 + w2 u2 + b1
    y1 = f(s1)
Examples of I/O mapping
2-input, 2-neuron feedforward layer: inputs u1 and u2 feed two neurons with weights wij, biases b1 and b2, and activation function f. (The output neuron then combines y1 and y2 to produce y.)

    s1 = w11 u1 + w12 u2 + b1,   y1 = f(s1)
    s2 = w21 u1 + w22 u2 + b2,   y2 = f(s2)
2-2-1 FNN (2 inputs, 2 hidden neurons, 1 output neuron)
Note: Feedforward neural networks (FNN) are capable of mapping various input-output patterns!
Examples of I/O mapping
More examples
2-2-1 FNN (2 inputs, 2 hidden neurons, 1 output neuron)
Matrix-Vector Models for FNN

Individual Variable Form (2-input single neuron):

    s = w1 u1 + w2 u2 + b,   y = f(s)

Matrix-Vector Form:

    s = W u + b,   y = f(s),   where  W = [w1  w2]  and  u = [u1; u2]

For an r-input single neuron the Individual Variable Form is

    s = w1 u1 + ... + wr ur + b,   y = f(s)

and the Matrix-Vector Form is

    s = W u + b,   y = f(s),   with  W = [w1 ... wr]  and  u = [u1; ...; ur]
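The matrix-vector form amounts to a dot product. A small Python sketch, illustrative only, with logsig assumed for f:

```python
import math

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def neuron_mv(W, u, b):
    """Single neuron in matrix-vector form: s = W.u + b, y = f(s).
    W and u are plain Python lists of length r."""
    s = sum(w_i * u_i for w_i, u_i in zip(W, u)) + b
    return logsig(s)

# Equivalent to the individual-variable form s = w1*u1 + w2*u2 + b:
print(neuron_mv([1.0, -2.0], [0.5, 0.25], 0.0))   # logsig(0.0) = 0.5
```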
Individual Variable Form (r inputs, 2 outputs):

    s1 = w11 u1 + ... + w1r ur + b1,   y1 = f(s1)
    s2 = w21 u1 + ... + w2r ur + b2,   y2 = f(s2)

Stacking the neurons gives the Matrix-Vector Form

    s = W u + b
    y = f(s)

where s = [s1; ...; sm], b = [b1; ...; bm], y = [y1; ...; ym], u = [u1; ...; ur], and W is the m-by-r matrix of weights wij.
r-input m-output FNN

Individual Variable Form:

    s1 = w11 u1 + ... + w1r ur + b1,   y1 = f(s1)
    ...
    sm = wm1 u1 + ... + wmr ur + bm,   ym = f(sm)

Matrix-Vector Form:

    s = W u + b
    y = f(s)

with u an r-vector, s, b and y m-vectors, and W the m-by-r matrix of weights wij.
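The r-input m-output layer can be sketched in Python as follows (illustrative; logsig assumed for f):

```python
import math

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(W, u, b):
    """One FNN layer in matrix-vector form: s = W u + b, y = f(s).
    W is an m-by-r matrix (list of m rows), u an r-vector, b an m-vector."""
    y = []
    for row, b_i in zip(W, b):
        s_i = sum(w * u_j for w, u_j in zip(row, u)) + b_i
        y.append(logsig(s_i))
    return y

W = [[1.0, 0.0],
     [0.0, -1.0]]
print(layer(W, [0.0, 0.0], [0.0, 0.0]))   # [0.5, 0.5]
```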
r-n-m FNN (r inputs, n hidden nodes, m outputs)

Individual Variable Form (superscripts denote the layer):

    Layer 1:  s1_i = w1_i1 u1 + ... + w1_ir ur + b1_i,      y1_i = f1(s1_i),   i = 1, ..., n
    Layer 2:  s2_j = w2_j1 y1_1 + ... + w2_jn y1_n + b2_j,  y2_j = f2(s2_j),   j = 1, ..., m

Matrix-Vector Form:

    s1 = W1 u + b1,    y1 = f1(s1)
    s2 = W2 y1 + b2,   y2 = f2(s2)

where W1 is n-by-r, W2 is m-by-n, and the hidden-layer output y1 feeds layer 2.
r-n1-n2-m FNN (r inputs, n1 nodes, n2 nodes, m outputs) (3 layers)

Matrix-Vector Form:

    s1 = W1 u + b1,    y1 = f1(s1)
    s2 = W2 y1 + b2,   y2 = f2(s2)
    s3 = W3 y2 + b3,   y3 = f3(s3)

(Each layer expands into individual-variable equations as before, with W1 n1-by-r, W2 n2-by-n1, and W3 m-by-n2.)
FNN Can Emulate Examples
Humans learn to imitate the actions of others.
Biological neural networks are responsible for the decision-making capability of a person.
Artificial neural networks can be programmed to imitate the perception involved in the decision.
For example
Biological Neural Network (BNN)
Observation → Perception → Decision
(input u = observation, output y = decision)

The BNN may be replaced by Artificial Neural Network (ANN) firmware whose output is yN:

    Layer 1:    s1 = W1 u + b1,            y1 = f1(s1)
    Layer 2:    s2 = W2 y1 + b2,           y2 = f2(s2)
    ...
    Layer N-1:  sN-1 = WN-1 yN-2 + bN-1,   yN-1 = fN-1(sN-1)
    Layer N:    sN = WN yN-1 + bN,         yN = fN(sN)
Note: In practice, we'd need to specify:
The number of layers of neural network
The number of neurons in each layer
The type of activation functions
The weights and biases for these
Note: In general, we need not restrict an ANN to emulate only a BNN.
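The layered equations above are a simple recursion. A hedged Python sketch, with logsig assumed in every layer for simplicity:

```python
import math

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(layers, u):
    """N-layer FNN forward pass: for each layer, s = W y + b and y = f(s),
    starting from y = u. Each layer is a (W, b) pair of nested lists."""
    y = u
    for W, b in layers:
        s = [sum(w * y_j for w, y_j in zip(row, y)) + b_i
             for row, b_i in zip(W, b)]
        y = [logsig(s_i) for s_i in s]
    return y

# A 2-2-1 network (weights chosen arbitrarily for illustration):
layers = [([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # layer 1: W1, b1
          ([[1.0, 1.0]],              [0.0])]        # layer 2: W2, b2
out = forward(layers, [0.0, 0.0])
print(out)
```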
Training FNN to Learn from Examples
Acquire several (if possible) sets of observation (input u) → decision (output y) patterns representing the perception we would like to retain.
Specify the configuration of the FNN (e.g., r-n1-n2-m layers and the type of activation functions).
Guess the weights and biases, and compare the FNN outputs to the training patterns, as shown in the figure below.
Tune the FNN by adjusting the weights and biases so that its output yN matches that of the pattern y.
The goal then is to make the error e = y - yN as small as possible when the FNN is presented with observation u.
[Figure: the observation u drives both the Observation → Decision pattern (output y) and the ANN firmware (layers 1 through N, output yN); the error e = y - yN drives adjustments of the weights and biases.]
Optimization of Compared Outcome
The observation u and decision y in a multi-input multi-output pattern are vectors:

    u = [u1; ...; un],   y = [y1; ...; yn]

There will be many instances/samples of the patterns:

    U = [u(1) u(2) ... u(K)],   Y = [y(1) y(2) ... y(K)]

Similarly, the ANN output can be expressed as

    YN = [yN(1) yN(2) ... yN(K)]

The error vectors (compared outcome) can then be represented as

    E = [y(1)-yN(1)  y(2)-yN(2)  ...  y(K)-yN(K)] = [e(1) e(2) ... e(K)]

A typical cost function for judging the goodness of fit in the emulation is given by

    J = (1/2) [e'(1)e(1) + e'(2)e(2) + ... + e'(K)e(K)] = (1/2) Σ_{k=1..K} e'(k)e(k)

Find the weights Wi and biases bj such that the cost function is minimized.
[Figure: the cost function J plotted over the weights Wi and biases bj, showing a local minimum, non-unique minima, and the global minimum.]
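The cost function J can be computed directly from the error vectors. A small Python sketch, for illustration:

```python
def cost(Y, YN):
    """J = (1/2) * sum_k e'(k) e(k), where e(k) = y(k) - yN(k).
    Y and YN are lists of K output vectors (plain lists)."""
    J = 0.0
    for y, yN in zip(Y, YN):
        e = [yi - yNi for yi, yNi in zip(y, yN)]   # error vector e(k)
        J += 0.5 * sum(ei * ei for ei in e)        # (1/2) e'(k) e(k)
    return J

Y  = [[1.0, 0.0], [0.0, 1.0]]   # pattern outputs
YN = [[0.5, 0.0], [0.0, 0.5]]   # ANN outputs
print(cost(Y, YN))   # 0.25
```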
Optimization Techniques
Given a cost function, there are several ways to find an optimum solution. Optimization techniques can be categorized into the following approaches:

Calculus: gradient techniques, which adjust a parameter p based on the sensitivity ∂J/∂p (= variation of the cost function over variation of the parameter). Examples: the Delta Rule, Hill Climbing, Back-Propagation, etc.

AI: heuristic techniques, which expand a branch in a tree search based on evaluation of the cost function. Examples: breadth-first, depth-first, the A* algorithm, etc.

Evolution: techniques which use a population of parameters to evolve better and better generations of parameters. Examples: Evolution Algorithms, Genetic Algorithms, Genetic Programming.
Delta Rule & Back-Propagation
The Delta Rule adjusts a parameter p (which may be a vector, although drawn here as a scalar) down the slope of the cost function:

    p(new) = p(old) + Δp,   Δp = -η ∂J/∂p,   η > 0

[Figure: J versus p; stepping p against the slope ∂J/∂p decreases J.]

Back-propagation is a well-known gradient search technique for training FNNs that is based on the Delta Rule.
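A minimal sketch of Delta Rule iterations in Python. The cost J(p) = (p - 3)^2 and learning rate η = 0.1 are invented for illustration:

```python
def delta_rule(p, grad, eta=0.1):
    """One Delta Rule step: p(new) = p(old) + dp, with dp = -eta * dJ/dp."""
    return p - eta * grad

# Illustrative cost J(p) = (p - 3)^2, so dJ/dp = 2*(p - 3):
p = 0.0
for _ in range(100):
    p = delta_rule(p, 2.0 * (p - 3.0))
print(p)   # approaches the minimizing value p = 3
```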
Training FNN using Back-Propagation
Back-Propagation for a 1-1 Neuron

Math model of neuron:

    s1 = w1 u + b1,   y1 = f(s1)

Input-output pattern to be emulated:

    {u(1), u(2), ..., u(K)} → {y(1), y(2), ..., y(K)}

Input-output generated by the neuron model:

    {u(1), u(2), ..., u(K)} → {y1(1), y1(2), ..., y1(K)}

Cost function to be minimized:

    J = (1/2)(y(1) - y1(1))^2 + (1/2)(y(2) - y1(2))^2 + ... + (1/2)(y(K) - y1(K))^2

Back-Propagation update (chain rule, with g1 = f' and learning rate η):

    ∂J/∂w1 = (∂J/∂y1)(∂y1/∂s1)(∂s1/∂w1) = (y - y1)(-1) g1(s1) u
    ∂J/∂b1 = (∂J/∂y1)(∂y1/∂s1)(∂s1/∂b1) = (y - y1)(-1) g1(s1) · 1

    w1 ← w1 + Δw1,   Δw1 = -η ∂J/∂w1
    b1 ← b1 + Δb1,   Δb1 = -η ∂J/∂b1
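Putting the update formulas together, a hedged Python sketch of back-propagation for the 1-1 logsig neuron. The training pattern and learning rate below are invented for illustration; for logsig, g1(s1) = y1(1 - y1).

```python
import math

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_neuron(us, ys, eta=0.5, epochs=2000):
    """Back-propagation for y1 = logsig(w*u + b).
    dJ/dw = -(y - y1) * y1*(1 - y1) * u, dJ/db = -(y - y1) * y1*(1 - y1)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for u, y in zip(us, ys):
            y1 = logsig(w * u + b)
            delta = (y - y1) * y1 * (1.0 - y1)   # = -dJ/ds1
            w += eta * delta * u                 # w <- w - eta * dJ/dw
            b += eta * delta                     # b <- b - eta * dJ/db
    return w, b

# An increasing normalized pattern (illustrative):
us = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [0.1, 0.3, 0.5, 0.7, 0.9]
w, b = train_neuron(us, ys)
y_mid = logsig(w * 0.0 + b)
print(round(y_mid, 2))
```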
Example: Back-Propagation Training for a 1-1 Neuron
A pattern or phenomenon that we can observe. In this example, it so happens that the pattern behaves like this (y is normalized).

[Figure: observed pattern y versus u.]

A single neuron model with logsig activation:

    y1 = 1 / (1 + e^(-s1)),   s1 = w u + b

Back-propagation algorithm:

    w(new) = w(old) + Δw,   Δw = -η ∂J/∂w = η e1 y1(1 - y1) u
    b(new) = b(old) + Δb,   Δb = -η ∂J/∂b = η e1 y1(1 - y1)

where e1 = y - y1 and, for logsig, g1(s1) = y1(1 - y1).
[Plots: 1-input 1-output feedforward neuron, observed Y versus trained Y1 over u in [-10, 10]; training record over 17 epochs, "Performance is 0.00930709, Goal is 0.001".]
Back-Propagation Training for a 2-1 Neuron

A pattern or phenomenon that we can observe. In this example, it so happens that the pattern behaves like this (y is normalized).

Neuron model in matrix-vector form:

    s = W u + b,   y1 = f(s),   W = [w1  w2],   u = [u1; u2]

Back-propagation algorithm:

    w1(new) = w1(old) + Δw1,   Δw1 = -η ∂J/∂w1 = η e1 y1(1 - y1) u1
    w2(new) = w2(old) + Δw2,   Δw2 = -η ∂J/∂w2 = η e1 y1(1 - y1) u2
    b(new)  = b(old)  + Δb,    Δb  = -η ∂J/∂b  = η e1 y1(1 - y1)

[Plot: 2-input 1-output feedforward neuron, surfaces of Y and Y1 over inputs P1 and P2.]

A 1-layer NN may not fit all data points.
Back-Propagation Training for a Two-Layer r-n-m Neural Network

A pattern or phenomenon that we can observe. In this example, it so happens that the pattern behaves like this (y is normalized).

Two-layer model:

    Layer 1:  s1 = W1 u + b1,    y1 = f1(s1)
    Layer 2:  s2 = W2 y1 + b2,   y2 = f2(s2)

Back-propagation updates, output layer first (g1 = f1', g2 = f2'):

    ∂J/∂s2 = (y - y2)(-1) g2(s2)
    W2(new) = W2(old) + ΔW2,   ΔW2 = -η ∂J/∂W2 = -η (∂J/∂s2) y1'
    b2(new) = b2(old) + Δb2,   Δb2 = -η ∂J/∂b2 = -η (∂J/∂s2)

then the hidden layer, with a diagonal matrix of activation derivatives:

    ∂J/∂s1 = diag(g1(s1_1), ..., g1(s1_n)) W2' (∂J/∂s2)
    W1(new) = W1(old) + ΔW1,   ΔW1 = -η ∂J/∂W1 = -η (∂J/∂s1) u'
    b1(new) = b1(old) + Δb1,   Δb1 = -η ∂J/∂b1 = -η (∂J/∂s1)

[Plot: 2-5-1 feedforward network, surfaces of Y and Y2 over inputs P1 and P2.]

A 2-layer NN fits the data points better.
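A hedged Python sketch of the two-layer updates, trained on an XOR-like pattern that a single neuron cannot fit. The architecture (2-5-1, logsig in both layers), learning rate and data are illustrative; a few random restarts are used because back-propagation can stall in a local minimum.

```python
import math
import random

def logsig(s):
    return 1.0 / (1.0 + math.exp(-s))

def predict(net, u):
    """Forward pass; returns (output y2, hidden outputs y1)."""
    W1, b1, W2, b2 = net
    y1 = [logsig(sum(w * x for w, x in zip(row, u)) + b)
          for row, b in zip(W1, b1)]
    y2 = logsig(sum(w * h for w, h in zip(W2, y1)) + b2)
    return y2, y1

def cost(net, samples):
    return 0.5 * sum((y - predict(net, u)[0]) ** 2 for u, y in samples)

def train(net, samples, eta=0.5, epochs=3000):
    W1, b1, W2, b2 = net
    for _ in range(epochs):
        for u, y in samples:
            y2, y1 = predict((W1, b1, W2, b2), u)
            d2 = (y - y2) * y2 * (1.0 - y2)            # = -dJ/ds2
            d1 = [h * (1.0 - h) * W2[i] * d2           # = -dJ/ds1_i
                  for i, h in enumerate(y1)]
            for i in range(len(W2)):                   # gradient updates
                W2[i] += eta * d2 * y1[i]
                b1[i] += eta * d1[i]
                for j in range(len(u)):
                    W1[i][j] += eta * d1[i] * u[j]
            b2 += eta * d2
    return W1, b1, W2, b2

def make_net(rng, r, n):
    return ([[rng.uniform(-1, 1) for _ in range(r)] for _ in range(n)],
            [0.0] * n,
            [rng.uniform(-1, 1) for _ in range(n)],
            0.0)

# XOR-like pattern (targets kept inside logsig's range):
samples = [([0.0, 0.0], 0.1), ([0.0, 1.0], 0.9),
           ([1.0, 0.0], 0.9), ([1.0, 1.0], 0.1)]

best = None
for seed in range(3):                  # restarts guard against local minima
    net = train(make_net(random.Random(seed), 2, 5), samples)
    J = cost(net, samples)
    best = J if best is None else min(best, J)
print(best)
```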
Example Application of FNN
Use an FNN to mimic an operator's eye-hand coordination
PC-based Simulation/Animation
Human-in-the-loop
Eyes
Decision
Hand
Visual Animation
VRML View of Driving Scenery
Dynamics System Simulation
Motorized Kinematics Automobile Model
Your neurons and you
Input Device
Joystick
Output Device
Monitor
Vehicle
Throttle & Steer
(Hand action)
Artificial Neural Network
Emulation of Driving Skill
PC-based Simulation/Animation
Visual Animation
VRML View of Driving Scenery
Dynamics System Simulation
Motorized Kinematics Automobile Model
You've been replaced!
Preview deviations
(Eye observation)
[Simulink diagrams. Human-in-the-loop: the Joystick block (X, Y, Z, Buttons) drives Velocity and Steering into the VRML Driving Simulation block; lane data (X, Y, H, LD, and lane offsets LO 3m, LO 6m, LO 12m from the Lane Offset block) are logged to RUN1.mat. Trained FNN: the Lane Offset signals pass through matrix gains W1 and W2 with constants b1 and b2, and tansig/purelin/netsum stages, to produce Velocity and Steering.]
Virtual Simulation of Lane Keeping
Trained FNN outputs (NNThrottle, NNSteer) replace the human operator
Human outputs (Throttle, Steer)
[Plots: lane deviations LD, LD3, LD6, LD12 over 0-50 s; human outputs (Throttle, Steer) versus trained FNN outputs (NNThrottle, NNSteer) over 0-50 s.]
Comparison of Trained FNN outputs (NNThrottle, NNSteer) to Human outputs (Throttle, Steer)
Recursive Neural Network (RNN)

Past outputs are fed back through unit delays (1/z I) and additional weight matrices V1 and V2; e.g., for the second layer,

    s2 = W2 y1 + b2 + V2 y2d,   y2 = f2(s2)

where y1d = y1 delayed and y2d = y2 delayed.

Such a network can learn and mimic the I/O behavior of a pattern that depends on past output values.
Intelligent Control Systems Methods
Fuzzy Logic
Prof Ka C Cheok
Dept of Electrical and Systems Engineering
Oakland University
Rochester MI 48309
Summer Technical Workshop Series
NDIA 2nd Annual Intelligent Vehicle Systems Symposium
Grand Traverse Resort & Spa
Traverse City, MI
June 3-5, 2002
Historical notes:
1920s-30s: Heisenberg and Max Black (mathematician) introduced principles of uncertainty.
1930s: Probability theory.
1965: Lotfi Zadeh introduced fuzzy set theory & fuzzy logic.
Many claimed that fuzzy logic theory, which represents possibility theory, resembled probability theory, even though they are mathematically and conceptually different.
1970s: Several successful commercial applications of FL in Japan made the world aware of its potential: accelerating and decelerating a subway train, camera adjustments, consumer appliances, etc.
Precision & Non-Precision
Precise Statement
Mathematics is a precise language for describing scientific, engineering and business principles.
E.g., "You will earn 3.4% A.P.R. as interest on your savings account. The interest dividend will be computed and distributed at the end of each month." Bank statements?
E.g. Force-Acceleration-Speed-Displacement calculus
    m a + b v + k d = f
    v(t) = ∫[0,t] a dτ + v(0)
    d(t) = ∫[0,t] v dτ + d(0)
Non-Precise Statement
Adjectives and adverbs are non-precise words for describing ideas with fuzzy meanings.
"You look nice."
Better still, "You look like shit. What's your secret?"
Did you like it? Er, interesting.
Few, several, many, plentiful, lots
Fuzzy Variables, Values and Membership
Rating a movie from 0 to 10: "Spiderman is a nine out of 10." Or simply "marvelous."
"He's twenty-five years old" means he could be between 25.000 and 25.999 years of age.
Humans are non-precise in thinking and speaking:
E.g., "I would like to spend fifty grand dollars on a car."
$50,000 is really close to the verbal "fifty grand." (100% membership)
$55,000 is on the high side of "fifty grand." (Say 50% membership of fifty grand)
$42,000 is on the low side of "fifty grand." (Say 25% membership)
[Figure: membership function for "fifty grand," equal to 1.00 near $50,000 and falling toward 0.00 around $40,000 and $60,000.]
Precise Variables and Values

A computer, on the other hand, can operate with high precision.
E.g., a double-precision floating-point number can represent any number in the range Variable = ±y.yyyyyyyyyyyyyyy × 10^(±308).
E.g., π = 3.141592653589793 is a magical number.
Our numbering system would be simpler and more natural if we had only 2, 4, 8, or 2^n fingers. Octopi got the number system right.
Boolean Sets versus Fuzzy Sets

Classical Boolean Set (crisp membership value, either 0 or 1):

    characteristic function:  f_A(x) = 1 if x ∈ A;  0 if x ∉ A,    for x ∈ X, class A ⊂ X

Fuzzy Set (soft membership values varying between 0 and 1):

    membership function:  μ_A(x) : X → [0, 1],    for x ∈ X, class A ⊂ X
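The contrast can be sketched in Python; the interval set and the triangular membership function below are illustrative choices, not the slide's figures.

```python
def crisp_membership(x, a, b):
    """Characteristic function of the Boolean set A = [a, b]: 1 if x in A, else 0."""
    return 1.0 if a <= x <= b else 0.0

def fuzzy_membership(x, a, b, c):
    """Triangular fuzzy membership with feet a, c and peak b: a degree in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

print(crisp_membership(4.9, 5, 6))     # 0.0: in or out, nothing in between
print(fuzzy_membership(4.9, 4, 5, 6))  # about 0.9: partial membership
```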
Shapes of Membership Functions

[Plots: the membership function shapes below, each drawn over the interval 0 to 10.]

    trimf     Triangular:                      μ(x) = max( min( (x-a)/(b-a), (c-x)/(c-b) ), 0 )
    trapmf    Trapezoidal:                     μ(x) = max( min( (x-a)/(b-a), 1, (d-x)/(d-c) ), 0 )
    gaussmf   Gaussian standard bell shape:    μ(x) = e^(-(x-c)^2 / (2σ^2))
    gauss2mf  Gaussian with two different sides
    gbellmf   Generalized Gaussian bell shape: μ(x) = 1 / (1 + |(x-c)/a|^(2b))
    sigmf     Sigmoid:                         μ(x) = 1 / (1 + e^(-a(x-c)))
    dsigmf    Difference of 2 sigmoids:        μ(x) = 1/(1 + e^(-a1(x-c1))) - 1/(1 + e^(-a2(x-c2)))
    psigmf    Product of 2 sigmoids:           μ(x) = [1/(1 + e^(-a1(x-c1)))] · [1/(1 + e^(-a2(x-c2)))]
    zmf       Z-shape spline
    pimf      Π-shape spline
    smf       S-shape spline
ANY OTHER CONVEX TYPE SHAPE
WILL ALSO DO!!!
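A few of these shapes sketched in Python; parameter conventions follow the formulas above, and the names mirror the Matlab ones (illustrative only):

```python
import math

def trimf(x, a, b, c):
    """Triangular: max(min((x-a)/(b-a), (c-x)/(c-b)), 0)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapmf(x, a, b, c, d):
    """Trapezoidal: max(min((x-a)/(b-a), 1, (d-x)/(d-c)), 0)."""
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

def gaussmf(x, sigma, c):
    """Gaussian bell: exp(-(x-c)^2 / (2*sigma^2))."""
    return math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def sigmf(x, a, c):
    """Sigmoid: 1 / (1 + exp(-a*(x-c)))."""
    return 1.0 / (1.0 + math.exp(-a * (x - c)))

print(trimf(5.0, 0.0, 5.0, 10.0))         # 1.0 at the triangle's peak
print(trapmf(5.0, 0.0, 2.0, 8.0, 10.0))   # 1.0 on the plateau
print(gaussmf(3.0, 1.0, 3.0))             # 1.0 at the centre
```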
If-Then Rules

If-Then rules pair an antecedent (the "If" part) with a consequent (the "then" part).

A single-antecedent, single-consequent If-Then rule has the form:
Rule1. If x is A1 then y is B1
Rule2. If x is A2 then y is B2
Rule3. If x is A3 then y is B3
Compound-antecedent, single-consequent rules take the form:

Rule1. If x is A1 and/or y is B1 then z is C1
Rule2. If x is A2 and/or y is B2 then z is C2
Rule3. If x is A3 and/or y is B3 then z is C3
If-Then rules are the most commonly used fuzzy logic statements.
They can be used to represent knowledge.
If Road is Left then Steer to Left
If Road is Middle then Steer to Middle
If Road is Right then Steer to Right
If Road is Left or Obstacle is Right then Steer to Left
If Road is Left and Obstacle is Left then Steer to Middle
And so on
Fuzzy Inference Systems (FIS)
Knowledge & Data Base
Rule1. If x is A1 and/or y is B1 then z is C1
Rule2. If x is A2 and/or y is B2 then z is C2
Rule3. If x is A3 and/or y is B3 then z is C3
Fuzzification
Convert crisp value
into fuzzy association
Defuzzification
Convert fuzzy set into
a crisp value
Crisp numerical
input value
Crisp numerical output value
Sensors Actuators
FUZZIFICATION
(a crisp numerical value → fuzzy values with degrees of membership)
Example: Sometimes we rate a movie on a scale of 0 to 10. Let's say we just want to use fuzzy labels like 'Horrible', 'Bad', 'OK', 'Good', 'Excellent' to describe the movie. Suppose that the membership functions for these fuzzy labels/values are as shown below:
[Figure: five membership functions, Horrible, Bad, OK, Good and Excellent, over the rating scale 0 to 10, with the crisp value 6.50 marked.]

We associate the crisp value 6.50 with:

    0%  Horrible  (0.00)
    22% Bad       (0.22)
    75% OK        (0.75)
    97% Good      (0.97)
    27% Excellent (0.27)
Associate a crisp numerical input value to fuzzy values with degree of membership.
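A hedged sketch of fuzzification in Python. The Gaussian centres and widths below are stand-ins, not the exact curves on the slide:

```python
import math

def gaussmf(x, sigma, c):
    return math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

# Hypothetical membership functions for the five movie labels (sigma, centre):
labels = {'Horrible': (1.0, 0.0), 'Bad': (1.0, 2.5), 'OK': (1.0, 5.0),
          'Good': (1.0, 7.5), 'Excellent': (1.0, 10.0)}

def fuzzify(x):
    """Map a crisp rating to a degree of membership in each fuzzy label."""
    return {name: gaussmf(x, sigma, c) for name, (sigma, c) in labels.items()}

degrees = fuzzify(6.5)
print(max(degrees, key=degrees.get))   # the label 6.5 belongs to most strongly
```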
DEFUZZIFICATION
(a fuzzy set → a crisp numerical value)
Example: Suppose we have the fuzzy set shown below:
Convert fuzzy sets into a crisp numerical output value.
[Figure: an aggregated fuzzy set YAggregate(y) over 0 to 10, with the crisp results of the centroid, bisector and maximum methods marked (5.8791, 6.6232, 6.3).]

    Centroid:  yCentroid = ∫[0,10] y · YAggregate(y) dy / ∫[0,10] YAggregate(y) dy
    Bisector:  yBisector = median of YAggregate(y) (the value splitting the area under the set in half)
    Maximum:   yAggMax = the y (or the mean of the y's) at which YAggregate(y) is maximum
There are many ways to do this:
Centroid.
Bisector/Median.
Maximum
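The centroid method can be approximated on a grid. A Python sketch, for illustration:

```python
def centroid(mu, y_min=0.0, y_max=10.0, n=1000):
    """Discrete approximation of the centroid defuzzifier:
    y* = sum(y * mu(y)) / sum(mu(y)) over a grid of y values."""
    dy = (y_max - y_min) / n
    ys = [y_min + (i + 0.5) * dy for i in range(n)]
    num = sum(y * mu(y) for y in ys)
    den = sum(mu(y) for y in ys)
    return num / den

# A symmetric triangular set peaking at 5 defuzzifies to its centre:
tri = lambda y: max(1.0 - abs(y - 5.0) / 2.0, 0.0)
print(centroid(tri))
```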
LOGICAL OPERATIONS

An AND operation can be treated as equivalent to a MIN or a PRODUCT operation.

Boolean AND operation (the same truth table holds for min(A, B) and for A*B):

    A  B | A and B | min(A, B) | A*B
    0  0 |    0    |     0     |  0
    0  1 |    0    |     0     |  0
    1  0 |    0    |     0     |  0
    1  1 |    1    |     1     |  1

Fuzzy AND operation:

[Figure: two overlapping membership functions A and B. With min(A, B), 0.5 is the largest value of "A and B" over the overlap; with the product A*B, 0.25 is the largest value.]
An OR operation can be treated as equivalent to a MAX or a PROBOR (probabilistic OR) operation.

Boolean OR operation (the same truth table holds for max(A, B) and for A+B-A*B):

    A  B | A or B | max(A, B) | A+B-A*B
    0  0 |   0    |     0     |    0
    0  1 |   1    |     1     |    1
    1  0 |   1    |     1     |    1
    1  1 |   1    |     1     |    1

Fuzzy OR operation:

[Figure: two overlapping membership functions A and B. With max(A, B), 0.5 is the smallest value of "A or B" over the overlap; with the probor A+B-A*B, 0.75 is the smallest value.]
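Both families of operators fit in a few lines of Python (illustrative):

```python
def fuzzy_and(a, b, mode="min"):
    """Fuzzy AND on degrees in [0, 1]: MIN or PRODUCT."""
    return min(a, b) if mode == "min" else a * b

def fuzzy_or(a, b, mode="max"):
    """Fuzzy OR on degrees in [0, 1]: MAX or PROBOR (a + b - a*b)."""
    return max(a, b) if mode == "max" else a + b - a * b

# Both reduce to Boolean logic at the crisp values 0 and 1:
print(fuzzy_and(1, 1), fuzzy_or(0, 1))                              # 1 1
# ...and interpolate in between:
print(fuzzy_and(0.5, 0.5, "prod"), fuzzy_or(0.5, 0.5, "probor"))    # 0.25 0.75
```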
Mamdani Style FUZZY LOGIC: Example

Rule1. If x is A1 and y is B1 then z is C1.
Rule2. If x is A2 or y is B2 then z is C2.

The consequents C1 and C2 are singletons at z1 and z2. The five steps, for crisp inputs x0 and y0:

1. Fuzzification: evaluate μA1(x0), μB1(y0), μA2(x0), μB2(y0).
2. Fuzzy operation:

       s1 = min(μA1(x0), μB1(y0))     (and → MIN)
       s2 = max(μA2(x0), μB2(y0))     (or → MAX)

3. Implication: weight each singleton consequent by its rule strength.
4. Aggregation:

       μC,Aggregate(z) = s1 @ z1 + s2 @ z2     (strength s1 placed at z1, s2 at z2)

5. Defuzzification (weighted average of the singletons):

       z0 = (s1 z1 + s2 z2) / (s1 + s2)
The Math for the Fuzzy Logic
Surprisingly simple!!!
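A hedged Python sketch of the two-rule Mamdani example. The triangular membership functions and the singleton locations z1, z2 are invented for illustration:

```python
def trimf(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani(x0, y0):
    """Two-rule Mamdani FIS with singleton consequents at z1 and z2.
    Assumes at least one rule fires (s1 + s2 > 0)."""
    # 1. Fuzzification (illustrative triangular memberships):
    A1, B1 = trimf(x0, 0, 2, 4), trimf(y0, 0, 2, 4)
    A2, B2 = trimf(x0, 2, 4, 6), trimf(y0, 2, 4, 6)
    # 2. Fuzzy operation: Rule 1 uses AND (min), Rule 2 uses OR (max):
    s1 = min(A1, B1)
    s2 = max(A2, B2)
    # 3-5. Implication on singletons, aggregation, weighted-average defuzzification:
    z1, z2 = 1.0, 5.0
    return (s1 * z1 + s2 * z2) / (s1 + s2)

print(mamdani(2.0, 2.0))   # 1.0: rule 1 fully fires, rule 2 not at all
```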
Sugeno Style FUZZY LOGIC: Example

Rule1. If x is A1 and y is B1 then u is u1 = m1 x + n1 y + c1
Rule2. If x is A2 or y is B2 then u is u2 = m2 x + n2 y + c2
Rule3. If x is A3 and y is B3 then u is u3 = m3 x + n3 y + c3

1. Fuzzification: evaluate μA1(x), μB1(y), ..., μA3(x), μB3(y) at the crisp inputs x0, y0.
2. Fuzzy operation (rule strengths):

       s1 = μA1(x) · μB1(y)                        (and → product)
       s2 = μA2(x) + μB2(y) - μA2(x) · μB2(y)      (or → probor)
       s3 = μA3(x) · μB3(y)                        (and → product)

3. Defuzzification (weighted average of the rule outputs):

       u(x, y) = (s1 u1 + s2 u2 + s3 u3) / (s1 + s2 + s3)
The Math for the Fuzzy Logic
Surprisingly simple!!!
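A hedged Python sketch of a three-rule Sugeno system. The Gaussian membership functions and the linear consequent coefficients (m, n, c) are invented placeholders:

```python
import math

def gaussmf(x, sigma, c):
    return math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def sugeno(x, y):
    """Three-rule Sugeno FIS with linear consequents u_i = m_i*x + n_i*y + c_i."""
    # Rule strengths: and -> product, or -> probor:
    s1 = gaussmf(x, 1, 0) * gaussmf(y, 1, 0)
    a2, b2 = gaussmf(x, 1, 2), gaussmf(y, 1, 2)
    s2 = a2 + b2 - a2 * b2
    s3 = gaussmf(x, 1, 4) * gaussmf(y, 1, 4)
    # Linear consequents (coefficients are illustrative):
    u1 = 0.0 * x + 0.0 * y + 0.0
    u2 = 0.5 * x + 0.5 * y + 0.0
    u3 = 1.0 * x + 1.0 * y + 0.0
    # Weighted average:
    return (s1 * u1 + s2 * u2 + s3 * u3) / (s1 + s2 + s3)

print(round(sugeno(2.0, 2.0), 3))
```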
ANFIS (Adaptive Network Fuzzy Inference System)
ANFIS is a Sugeno-style Fuzzy Logic that learns to emulate an I/O pattern
(similar to a neural network).
Rules (Sugeno style, with linear consequents):

    If x is A1 and y is B1, then z is g1 = l1 x + m1 y + n1.
    If x is A2 and y is B2, then z is g2 = l2 x + m2 y + n2.
    If x is A3 and y is B3, then z is g3 = l3 x + m3 y + n3.

Membership functions (sigmoid for A1, B1, A3, B3; Gaussian for A2, B2):

    μA1(x) = 1 / (1 + e^(-a1(x-c1))),   μB1(y) = 1 / (1 + e^(-a4(y-c4)))
    μA2(x) = e^(-((x-c2)/s2)^2),        μB2(y) = e^(-((y-c5)/s5)^2)
    μA3(x) = 1 / (1 + e^(-a3(x-c3))),   μB3(y) = 1 / (1 + e^(-a6(y-c6)))

[Figure: the observed surface z_obs(x, y) alongside the Sugeno-style fuzzy logic output z(x, y).]

ANFIS will automatically tune the parameters a1, s2, a3, a4, s5, a6, c1, c2, c3, c4, c5 and c6, and l1, m1, n1, l2, m2, n2, l3, m3, n3, so that the fuzzy logic output z matches up with the observed z_obs.
Sugeno fuzzy logic:

    w1(x, y) = μA1(x) · μB1(y) = [1 / (1 + e^(-a1(x-c1)))] · [1 / (1 + e^(-a4(y-c4)))]
    w2(x, y) = μA2(x) · μB2(y) = e^(-((x-c2)/s2)^2) · e^(-((y-c5)/s5)^2)
    w3(x, y) = μA3(x) · μB3(y) = [1 / (1 + e^(-a3(x-c3)))] · [1 / (1 + e^(-a6(y-c6)))]

    g1 = g1(x, y) = l1 x + m1 y + n1
    g2 = g2(x, y) = l2 x + m2 y + n2
    g3 = g3(x, y) = l3 x + m3 y + n3

    z = (w1 g1 + w2 g2 + w3 g3) / (w1 + w2 + w3)

Cost function to be minimized:

    J = (1/2) Σ_{k=1..N} (R_k - Z_k)^2
ANFIS scheme:

1. Initialize the coefficients a1, s2, a3, a4, s5, a6, c1, c2, c3, c4, c5 and c6 to some estimated values.
2. Calculate w1, w2 & w3 for each set of input data, Xi and Yi.
3. Apply a least-squares estimation technique to calculate l1, m1, n1, l2, m2, n2, l3, m3, n3.
4. Apply a gradient search technique to update a1, s2, a3, a4, s5, a6, c1, c2, c3, c4, c5 and c6.
5. Repeat from Step 2, until satisfactory.
Example Application of ANFIS
Use an ANFIS to mimic an operator's eye-hand coordination
PC-based Simulation/Animation
Human-in-the-loop
Eyes
Decision
Hand
Visual Animation
VRML View of Driving Scenery
Dynamics System Simulation
Motorized Kinematics Automobile Model
Your neurons and you
Input Device
Joystick
Output Device
Monitor
Vehicle
Throttle & Steer
(Hand action)
Adaptive Network Fuzzy Inference System
Emulation of Driving Skill
PC-based Simulation/Animation
Visual Animation
VRML View of Driving Scenery
Dynamics System Simulation
Motorized Kinematics Automobile Model
You've been replaced!
Preview deviations
(Eye observation)
[Simulink diagrams. Trained controller: Lane Offset outputs (X, Y, H, LD, LO 3m, LO 6m, LO 12m) feed a Fuzzy Logic Controller block that drives Velocity and Steering into the VRML Driving Simulation. Human-in-the-loop: the Joystick block (X, Y, Z, Buttons) drives Velocity and Steering, with lane data logged to RUN1.mat.]
Virtual Simulation of Lane Keeping
Trained ANFIS outputs (FLThrottle, FLSteer) replace the human operator
Human outputs (Throttle, Steer)