
ISBN: 978-960-603-078-9

Copyright 2015

This work is licensed under a Creative Commons Attribution - NonCommercial - NoDerivatives (BY-NC-ND) 3.0 Greece license:
https://creativecommons.org/licenses/by-nc-nd/3.0/gr/

9, 15780
www.kallipos.gr

-1

0: 0-1

-: -1

1: 1-1
1.1 1-1
1.2 Perceptrons 1-4
1.3 1-9
1.4 - 1-11
1.5 1-12
1.6 1-17
1-17
1-19

2: 2-1
2.1 2-1
2.2 2-3
2.3 2-5
2.4 2-8
2.5 Mamdani, Sugeno 2-13
2.6 2-24
2-26
2-27

3: 3-1
3.1 3-1
3.2 3-2
3.3 3-3
3.4 3-4
3.5 3-10
3.6 3-11
3-12
3-13

-: -1

4: 4-1
4.1 - 4-1
4.2 4-2
4.3 4-4
4.4 4-5
4.5 4-12
4-13
4-14
5: , , 5-1
5.1 5-1
5.2 5-5
5.3 5-15
5-17
5-19

6: 6-1
6.1 Bayes 6-1
6.2 6-2
6.3 6-3
6-12
6-14

-: -1

7: 7-1
7.1 7-1
7.2 7-2
7.3 7-7
( ) 7-12
7-18
7-18

8: 8-1
8.1 8-1
8.2 8-4
8.3 8-6
8.4 8-7
8-8
8-9

9: 9-1
9.1 9-1
9.2 9-10
9.3 9-15
9-18
9-23

: MATLAB -1

: -1

: -1


:
:
:
: Bayes
: Markov
:
2: -2
:
:
:
:
:
:
:
:
:
:
: Hebb
VC: Vapnik-Chervonenkis
: Hebb
:
:
/:
:
:
:
:
MATLAB: MATrix LABoratory
: - Hebb
:
:
MM:
:
: -
- : -
: -
:
:
:
:
-:
- ( -):
1: -1
2: -2
:
:
:

-1
0:
- Shockley, Bardeen Brattain
Bell ... 1947, .
,
(. )
Turing,
.
, 1990 ,
() (computational intelligence (CI)). ,

,
. ,
.
() (soft computing (SC)).

. ,
(Polycarpou, 2013). ,
, , (big
data), . ,
, (Kaburlasos & Papakostas,
2015). , ,
.
() ()
(artificial intelligence (AI)) ( .., 2006 Russell & Norvig, 2005),
.
John McCarthy, ,
, (Andresen,
2002). , . ,
, . ,
,
- ,
. , .

/. , () (artificial neural network
(ANN)), , .
, () (fuzzy systems (FSs)),
. () (evolutionary computation (EC)),
, () .
(classic CI).
, / ,
, ,
(enhanced CI).
- , , ,
.. , () ,
. -
, ,
.
(function) (
7).
. ,
.
Such a computed function is called a model. More specifically, consider a function f: R^N -> T, where R denotes the set of real numbers and T is a set that may equal R (in regression problems) or a finite set of labels (in classification problems) (Kaburlasos & Kehagias, 2014). Given pairs (x1,f(x1)),...,(xn,f(xn)), the computation of an estimate f^: R^N -> T of the function f: R^N -> T is called learning, whereas the capacity of f^ to approximate the value f(x0) for an x0 in R^N with x0 != xi, i in {1,...,n}, is called generalization.
Typical learning problems include: (a) clustering, i.e. the computation of clusters of the data x1,...,xn when the values f(x1),...,f(xn) are not available; (b) classification, i.e. the estimation of a function f: R^N -> T in pattern recognition applications, where T is a finite set of class labels; and (c) (statistical) regression, i.e. the estimation of a function f: R^N -> R^M.
1990 2000, ..
(2007) (1998), ,
/ . , (, 2007)
, (, 1998)
.
,
( & , 2010 , 2008),
.

.
, : ) (data)
, /,
) (information) , )
(knowledge) .
, .
,
(number crunching) ,
.
(. -)
. ,
(research driven education).
. -
. , 1
.
, . 2

.
. 3
.
.
- . ,
4 / .
4
/ ,
. 5
/ .
- . 6

, -
. -
, .
- , , (order theory
, , lattice theory), ,

.
. , 7
.
.
8 ,
, ,
, . 9
() (intervals number (INs)) ,
.
.
. ,
MATLAB , ,

. .
:
.

, 2015
,

Andresen, S.L. (2002). John McCarthy: Father of AI. IEEE Intel Syst, 17(5), 84-85.
, ., , ., , ., , . & , . (2006).
(3 .). , : . .
, . (2007). . , : .
Kaburlasos, V.G. & Kehagias, A. (2014). Fuzzy inference system (FIS) extensions based on lattice theory.
IEEE Transactions on Fuzzy Systems, 22(3) 531-546.
Kaburlasos, V.G. & Papakostas, G.A. (2015). Learning distributions of image features by interactive fuzzy
lattice reasoning (FLR) in pattern recognition applications. IEEE Computational Intelligence Magazine,
10(3), 42-51.
, .-. (1998). . , : .
, . & , . (2010). . , :
.
Polycarpou, M.M. (2013). Computational intelligence in the undergraduate curriculum. IEEE Computational
Intelligence Magazine, 8(2) 3.
Russell, S. & Norvig, P. (2005). : (2 .) (. ,
). , : .
, .. (2008). : . , :
.

-:
-

.
(Castro & Delgado, 1996 Gori .., 1998 Hornik .., 1989 Kaburlasos & Kehagias,
2014 Kosko, 1994 Nauck & Kruse 1999, Zeng & Singh, 1996). -
,
, .


Castro, J.L. & Delgado, M. (1996). Fuzzy systems with defuzzification are universal approximators.
IEEE Transactions on Systems, Man, Cybernetics B, 26(1), 149-152.
Gori, M., Scarselli, F. & Tsoi, A.C. (1998). On the closure of the set of functions that can be realized
by a given multilayer perceptron. IEEE Transactions on Neural Networks, 9(6), 1086-1098.
Hornik, K., Stinchcombe, M. & White, H. (1989). Multilayer feedforward networks are universal
approximators. Neural Networks, 2(5), 359-366.
Kaburlasos, V.G. & Kehagias, A. (2014). Fuzzy inference system (FIS) extensions based on lattice
theory. IEEE Transactions on Fuzzy Systems, 22(3), 531-546.
Kosko, B. (1994). Fuzzy systems as universal approximators. IEEE Transactions on Computers, 43(11),
1329-1333.
Nauck, D. & Kruse, R. (1999). Neuro-fuzzy systems for function approximation. Fuzzy Sets and
Systems, 101(2), 261-271.
Zeng, X.J. & Singh, M.G. (1996). Approximation accuracy analysis of fuzzy systems as function
approximators. IEEE Transactions on Fuzzy Systems, 4(1), 44-63.

1:
(/) von Neumann
. , /
. ,
, /, . ,
, / .
/
/ . , ,
/ . ,
(>109) (.
) . (neuron)
,
.
.
, , ,
.
, ,
20 .
. , Hebb 1949
,
(Hebb, 1949). Hebb (Hebbian
learning).

, , T
(transpose) .

1.1
, f: RNT
() .
.
1.1 ,
. x1,,xn
w1,,wn
weights. In addition, an extra (bias) input x_{n+1} = 1 with corresponding weight w_{n+1} is typically included. The total input u of the neuron is the weighted sum

u = SUM_{i=1}^{n} w_i x_i + w_{n+1} x_{n+1} = SUM_{i=1}^{n+1} w_i x_i = w^T x,

where x = (x1,...,xn,1) and w = (w1,...,wn,w_{n+1}) are the augmented input and weight vectors, respectively.
()
(-) f(), (transfer
function), 1.1. f()
.
(a) Linear: f(u) = u, i.e. the output equals the total input u.

(b) Step (Perceptron): f(u) = 1, if u >= 0; f(u) = 0, if u < 0.

(c) Step with threshold T: f(u) = 1, if u >= T; f(u) = 0, if u < T.

(d) Sigmoid (logistic): f(u) = 1/(1 + e^{-u}).

(e) Hyperbolic tangent: f(u) = (1 - e^{-u})/(1 + e^{-u}).

Figure 1.1: Model of an artificial neuron: the inputs x1, x2,..., xn and the bias input x_{n+1} = 1 are weighted by w1, w2,..., w_{n+1}; their sum u is passed through the transfer function f(u) to produce the neuron output.
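As an illustration, the following Python sketch (ours, not the book's, whose demos use MATLAB) implements the neuron model of Figure 1.1 with the transfer functions listed above; all function names are assumptions for demonstration.

import numpy as np

def neuron_output(x, w, transfer="sigmoid", T=0.0):
    """Output of a single artificial neuron.
    x: inputs (n,); a bias input x_{n+1}=1 is appended internally.
    w: weights (n+1,), the last entry being the bias weight."""
    x = np.append(np.asarray(x, float), 1.0)   # augment with bias input
    u = w @ x                                  # total input u = w^T x
    if transfer == "linear":
        return u
    if transfer == "step":                     # Perceptron transfer function
        return 1.0 if u >= T else 0.0
    if transfer == "sigmoid":
        return 1.0 / (1.0 + np.exp(-u))
    if transfer == "tanh":
        return (1.0 - np.exp(-u)) / (1.0 + np.exp(-u))
    raise ValueError(transfer)

# Example: two inputs, weights (0.5, -0.3) and bias weight 0.1
print(neuron_output([1.0, 2.0], np.array([0.5, -0.3, 0.1])))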


. ,
1.2,
(feedforward ANN) (. )
.

(directed connection) ( ),
( ).
,
. , . ,
.

Figure 1.2: A feedforward artificial neural network with an input, a hidden and an output layer. An input pattern xp enters at the bottom and the network output is op; a component of xp at input neuron i is denoted xpi and the output of neuron j is opj. A weight wji (or w'ji) is attached to the directed connection from neuron i to neuron j.

1.2 1.1.
, i, ,
xpi ( xp )
. (w).
. wji
j i
, 1.2.
.
() , ,
(supervision), , () ,
. , ,
(xi, ti), i{1,,n} (, ) .
xi ,
ti, i{1,,n} .
ti, . ,
ti, ,
i , ,
ti, i{1,,n}. xi,
i{1,,n} , ti, i{1,,n}.

: (1) (. )
-. ,
, , (2) -
-, (3)
, (4) (5) .
,

. , f: RNT
.
() ( 1.2) ()
( 1.1), . f() = . W1
W2
. h
h= W1x, o
o= W2h = W2W1x = Wx, W= W2W1 x .
,
,
.
, perceptron.

1.2 Perceptrons
,
() Widrow-Hoff , , (Delta rule).
(ip, tp), p{1,,n} (, )
. tpj
j tp, ipi i
ip.
tpjipi tpj ipi
, , wji.
wji tpj ipi, :

Dp wji = tpj ipi

W,
(ip,tp) :

DpW = tp ip^T

Starting from W(0) = 0 and presenting each of the pairs (ip,tp), p in {1,...,n}, once, the accumulated weight matrix is

W = SUM_{p=1}^{n} tp ip^T

Assume the input vectors ip, p in {1,...,n}, are orthonormal, that is ip^T iq = 0 for p != q and ip^T iq = 1 for p = q.
, ()
, , () .

W ik = ( SUM_{p=1}^{n} tp ip^T ) ik = tk

(ip,tp), p{1,,n}
, ip, p{1,,n}
, tp, p{1,,n}.
, ip, p{1,,n}.
, , (ip,tp), p{1,,n}
ip . , -
ip, p{1,,n} ,
, W,

W(n) = W(n-1) + e d(n) i^T(n),  (1.1)

where W(n) is the weight matrix after the presentation of the n-th input i(n), e is a positive constant called the learning rate, and d(n) = t(n) - o(n) = t(n) - W(n-1)i(n) is the difference between the target output t(n) and the actual output o(n) = W(n-1)i(n) for input i(n).
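A minimal Python sketch of training with the update of Eq. (1.1) follows; the toy data and function name are ours, for illustration only.

import numpy as np

def delta_rule_epochs(inputs, targets, eta=0.1, epochs=8):
    """Train a single-layer linear associator with the Delta rule, Eq. (1.1)."""
    W = np.zeros((len(targets[0]), len(inputs[0])))   # W(0) = 0
    for _ in range(epochs):                           # one pass = one epoch
        for i_n, t_n in zip(inputs, targets):
            o_n = W @ i_n                             # o(n) = W(n-1) i(n)
            delta = t_n - o_n                         # d(n) = t(n) - o(n)
            W += eta * np.outer(delta, i_n)           # Eq. (1.1)
    return W

# Toy example with two orthonormal inputs: W converges so that W ip ~ tp.
ins = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
outs = [np.array([1.0, -1.0]), np.array([0.5, 2.0])]
W = delta_rule_epochs(ins, outs, eta=0.5, epochs=20)
print(W @ ins[0], W @ ins[1])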
(neuron
representation),
().
(pattern representation),
().
, (. )
. ,
,
.
, PI i
i* PI i , , P
t t* PT t .
o, ,
, o o* PTo .
W .
i, o o = Wi.
, W*
i* o*. , o* = W*i*,
o* PT o i* PI i . , o* = W*i* PT o = W*PI i o = PT1W*PI i = Wi PT1W*PI = W
W* = PT WPI1 .
.(1.1)
PT PI1 , . :

PT WPI1(n ) PT WPI1(n 1) PT(n )i T (n )PI1 ,

(n ) t(n ) W(n 1)i(n ) PT (n ) PT t(n ) PT W(n 1)i(n ) =

= t* (n ) PT W( n 1)PT1i* ( n ) = t* (n ) W* (n 1)i* (n ) = t* ( n ) o* ( n ) = ( n ) .

W* (n) = W* ( n 1) PT (n ) i T (n )PI1 = W* ( n 1) * ( n ) i( n ) PI1 =
T


= W* ( n 1) * ( n ) PI1i* ( n ) PI1 = W* ( n 1) * ( n ) i* ( n ) PI1 PI1 .
T T T


In summary, the update rule in the new representation is:

W*(n) = W*(n-1) + e d*(n) (i*(n))^T C,  (1.2)

where C = P_I^{-1} (P_I^{-1})^T.
. PI
, .
, i *j 1 j 0 .
i j PI1i *j , j PI1 j


T
. , C PI1 PI1 iiij
ii ij, .

, .

As an example, consider four training pairs (x1,y1), (x2,y2), (x3,y3) and (x4,y4) with 8-dimensional inputs xi and 6-dimensional targets yi (the numeric entries of the vectors appear in the original layout).

The inputs x1, x2, x3 and x4 above are unit-length but not mutually orthogonal. The matrix C (see above) of their inner products xi^T xj is:

C = [ x1.x1  x1.x2  x1.x3  x1.x4 ]   [ 1    0.75  0    0    ]
    [ x2.x1  x2.x2  x2.x3  x2.x4 ] = [ 0.75 1     0    0    ]
    [ x3.x1  x3.x2  x3.x3  x3.x4 ]   [ 0    0     1    0.25 ]
    [ x4.x1  x4.x2  x4.x3  x4.x4 ]   [ 0    0     0.25 1    ]

( x1y1, x2y2, x3y3


x4y4) .(1.1), .(1.2), (epoch).
1.3 , , W W*
, , 1, 4 8 .
W W* .(1.1) .(1.2)
W(0) = W*(0) = [0], . W W* n=0
.
1.3 W* W.
, W
, (crosstalk)
. , W*
,
. ,
, () (mean square error (MSE))
, .
*j ( n ) j
n *j (n ) t *j W* ( n )i *j . W* ( n ) .(1.2)

: *j (n) = t *j W* ( n 1) * ( n ) i* ( n ) C i *j = t *j W* ( n 1)i *j * ( n ) i * (n) Ci *j =


T T


T
*j ( n 1) *j ( n ) i*j Ci*j .
i *j 1, 0. ,
T
i *j Ci *j C jj . , , : *j (n) = *j (n 1) *j ( n )C jj
1
*j ( n ) 1 C jj = *j ( n 1) *j ( n ) = *j ( n 1) .
1 C jj
C jj j , . C jj 0 ,
1
1 . , *j ( n ) < *j ( n 1) . , . ,
1 C jj
, ,
(, )
.
() ,
(. )

.
.
() , ,
.

Table 1.3: The weight matrices W (left, standard representation) and W* (right, new representation) together with the corresponding mean square errors after 1, 4 and 8 training epochs. After 1 epoch the errors are approximately 0.05 (for W) and 0.09 (for W*); after 4 and 8 epochs both errors are approximately 0.00. (Numeric matrix entries as listed in the original.)

XOR, 1.4().
1.4() XOR . ,
, XOR
, () ,
. , . ,
, .
perceptron XOR (Minsky & Papert, 1969)
1960.
1980 , 1.4(),
XOR.
- , 2 .
, 1.4() 1.5

0.5. 1.4()
1.4() XOR.

1.3
(backpropagation
learning) , .
Werbos (1974),
(Rumelhart .., 1986) (generalized Delta rule).

x1 x2 XOR
0  0  0
0  1  1
1  0  1
1  1  0

Figure 1.4: (a) The XOR values of two binary variables x1 and x2. (b) The XOR function cannot be computed by a single linear separator in the plane. (c) A two-layer network of threshold neurons, with thresholds 0.5 and 1.5 and weights +1, +1 and -2, computes the XOR function.

1.3.1
Consider a feedforward network (Fig. 1.2) presented with an input pattern xp together with its target output vector tp. Let op denote the actual output vector; training adjusts the weights wji so that op approaches tp. A standard error measure for pattern p is

Ep = (1/2) SUM_j (tpj - opj)^2,

which is a function of the weights wji. Gradient descent changes each weight proportionally to the negative gradient, Dp wji proportional to -dEp/dwji. Writing net_pj = SUM_k wjk opk for the total input of neuron j (cf. Fig. 1.1), where wjk is the weight from neuron k to neuron j and opk is the output of neuron k, the chain rule gives

dEp/dwji = (dEp/dnet_pj)(dnet_pj/dwji) = (dEp/dnet_pj) opi.

Defining the (local) error of neuron j as d_pj = -dEp/dnet_pj, the weight update becomes Dp wji = e d_pj opi, which is the generalized Delta rule. The errors d_pj are computed as follows. Since opj = fj(net_pj),

d_pj = -(dEp/dopj)(dopj/dnet_pj) = -(dEp/dopj) fj'(net_pj),

where fj'(net_pj) is the derivative of the transfer function f at net_pj. Two cases are distinguished.

First, let j be an output neuron. Then dEp/dopj = -(tpj - opj), hence d_pj = (tpj - opj) fj'(net_pj).

Second, let j be a hidden neuron. Then

dEp/dopj = SUM_k (dEp/dnet_pk)(dnet_pk/dopj) = -SUM_k d_pk wkj,  hence  d_pj = ( SUM_k d_pk wkj ) fj'(net_pj),

where the sum runs over the neurons k fed by neuron j. In other words, the error d of a hidden neuron is computed by propagating backwards the errors d of the neurons in the next layer, hence the name backpropagation. Training by gradient descent may also employ the optimization methods of Chapter 3.
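A minimal Python sketch of the generalized Delta rule for a two-layer sigmoid network follows; it is our illustration (not the book's code), trained here on XOR, and may need a different random seed or more epochs to converge.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def backprop_step(x, t, W1, W2, eta=0.5):
    """One generalized-Delta-rule update for a 2-layer sigmoid network."""
    h = sigmoid(W1 @ x)                       # hidden outputs
    h1 = np.append(h, 1.0)                    # hidden outputs plus bias input
    o = sigmoid(W2 @ h1)                      # network outputs
    # Output layer: d = (t - o) f'(net), with f' = o(1 - o) for the sigmoid
    delta_o = (t - o) * o * (1.0 - o)
    # Hidden layer: d = (sum_k d_k w_kj) f'(net_j)
    delta_h = (W2[:, :-1].T @ delta_o) * h * (1.0 - h)
    W2 += eta * np.outer(delta_o, h1)         # Dw_ji = eta d_pj o_pi
    W1 += eta * np.outer(delta_h, x)
    return 0.5 * np.sum((t - o) ** 2)         # error E_p

# Learn XOR with 2 hidden neurons; bias handled via an extra constant input.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(1, 3))
data = [([0,0,1],[0]), ([0,1,1],[1]), ([1,0,1],[1]), ([1,1,1],[0])]
for epoch in range(20000):
    for x, t in data:
        backprop_step(np.array(x, float), np.array(t, float), W1, W2)
for x, t in data:
    h1 = np.append(sigmoid(W1 @ np.array(x, float)), 1.0)
    print(x[:2], sigmoid(W2 @ h1))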

1.3.2
, (
) . , , ,
(recurrent ANN) (Hopfield, 1982 Hochreiter .., 2001
Kosko, 1988). ,
/.

1.4 -
- (self-organizing maps) (Kohonen, 1995)
.
-
RN RM, M<N,
. , RN
RM. .

( - R2) ( -
R), 1.5. ,
L1 L2, , 1.5.
= (1,2).
-,
y , - ( )
-.
(topologically ordered maps).
-
(1 2), ,
-, 1.5().
, k - netk : netk= Twk.
, y*, . ,
, :

wki(t+1) = wki(t) + e(t) L(|y-y*|) xi  (1.3)

where e(t) is the learning rate at step t and L(|y-y*|) is a neighborhood function that takes its maximum value 1 for |y-y*| = 0, i.e. for y = y*, and decreases with increasing |y-y*|. Note the following:
- ,
-, .
, ||w|| = 1. (t) ,
.

Figure 1.5: (a) Input data on two line segments L1 and L2 in the plane; a point P(px,py) is encoded by a pair of angles (f1,f2). (b) A one-dimensional self-organizing map: the neighborhood function L(|y-y*|), centered on the winner neuron y*, over the neuron positions y.

.(1.3) : , y*
- , .
y* ,
. -
-.
, y - -.
- ,

. ,
,
/ .
- -
p(x) -. ,
- -. -
, , -
-
.
- ,
-. ,
, , /ee/ /eh/
, , /ee/ /oo/
.
(visualization ) .
- .
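The following Python sketch (our own illustration) trains a one-dimensional self-organizing map with the update of Eq. (1.3); it uses the common variant that moves each weight toward the input, whereas the text's version adds e(t)L(.)xi and keeps the weights normalized.

import numpy as np

def train_som_1d(data, n_neurons=10, epochs=50, eta0=0.5, sigma0=2.0):
    """Train a 1-D self-organizing map (cf. Eq. (1.3))."""
    rng = np.random.default_rng(1)
    W = rng.random((n_neurons, data.shape[1]))        # weight vectors w_k
    pos = np.arange(n_neurons)                        # neuron grid positions y
    for t in range(epochs):
        eta = eta0 * (1.0 - t / epochs)               # decreasing learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.1     # shrinking neighborhood
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))        # y*
            lam = np.exp(-((pos - winner) ** 2) / (2 * sigma**2))    # L(|y-y*|)
            W += eta * lam[:, None] * (x - W)         # move toward the input
    return W

# Map 2-D points on a ring onto a 1-D chain of neurons.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta)]
print(train_som_1d(ring).round(2))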

1.5
() (adaptive resonance theory (ART)) (Carpenter &
Grossberg, 1987) ,
Perceptrons (Minsky, 1969)
.


.
(dynamic field theory) (Sandamirskaya .., 2013),

.
()
(Carpenter .., 1991 Simpson, 1993) () (Carpenter .., 1992 Simpson,
1992). .

1.5.1
(objective
function), , .
. ,
,
, .
, , , .
, , . ,
, ,
. /
(stability/plasticity dilemma).


,
.
(competitive learning), , ,
, .
d- ,
x0=1, , ||x||=1. ,
(d+1)- .
wj, , ||wj||=1, j=1,,c.
. x ,
wj netj= wTjx.
net ( x)

w(t+1) = w(t) + x,

. wTx
w x. wj
netj
.
.

1. Initialize e, n, c, k, w1, w2,..., wc.
2. xi <- {1, xi}, i = 1,...,n (augment each pattern)
3. xi <- xi/||xi||, i = 1,...,n (normalize each pattern)
4. Accept the next input pattern x.
5. j <- arg max_j wj^T x (find the winner neuron for x)
6. wj <- wj + ex (update the winner's weight vector)
7. wj <- wj/||wj|| (normalize the weight vector)
8. If no weight vector w changed by more than k, stop; otherwise go to step 4.
9. Return w1, w2,..., wc.
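A compact Python sketch of the competitive learning loop above follows; the data and initialization choice (centers seeded from random patterns) are our assumptions.

import numpy as np

def competitive_learning(X, c=3, eta=0.1, epochs=20, seed=0):
    """Competitive learning on unit-normalized, augmented patterns."""
    rng = np.random.default_rng(seed)
    X = np.c_[np.ones(len(X)), X]                 # step 2: augment with 1
    X /= np.linalg.norm(X, axis=1, keepdims=True) # step 3: normalize patterns
    W = X[rng.choice(len(X), c, replace=False)].copy()  # init from data
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmax(W @ x)                  # step 5: winner by max w^T x
            W[j] += eta * x                       # step 6: update winner
            W[j] /= np.linalg.norm(W[j])          # step 7: renormalize
    return W

X = np.vstack([np.random.default_rng(1).normal(m, 0.1, (30, 2))
               for m in ([0, 2], [2, 0], [-2, 0])])
print(competitive_learning(X).round(3))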


, . ( )
(), ..
(t)= (0)t 0<<1, t . ,
, x
() , x.
(c)
, c .
c , .

c
.
.
.
.
, -
(k-means algorithm),

,

. c
,
.
- (leader-follower algorithm) , wi
i, .

1. Initialize e, th.
2. w1 <- x (the first input becomes the first cluster center)
3. Accept the next input pattern x.
4. j <- arg min_j ||x - wj|| (find the nearest cluster center)
5. If ||x - wj|| < th then
6.   wj <- wj + ex (update the nearest center)
7. else create a new cluster center w <- x
8. w <- w/||w|| (normalize)
9. If the data are exhausted, stop; otherwise go to step 3.
10. Return w1, w2,..., wc.

:
,
. ,
. , -
, ..
. - -
, .

1.5.2 -
-, , - (hyperboxes)
N- RN. -
.
. x
[0,1]. , - x=
(x1,,xN) x= (x1,,xN,1-x1,,1-xN)
(complement coding).
w , . x
(x)=0. xw x w.

1. Initialize the vigilance parameter r.
2. w1 <- x (the first input defines the first hyperbox)
3. Accept the next input pattern x.
4. j <- arg max_j s(wj)/s(x v wj) (select the most competitive hyperbox)
5. If s(x v wj) < r (vigilance test) then
6.   wj <- x v wj (expand the winner hyperbox)
7. else create a new hyperbox w <- x
8. If the data are exhausted, stop; otherwise go to step 3.
9. Return w1, w2,..., wc.

1-14
Figure 1.6 shows two hyperboxes x and w in the unit square, together with their join x v w. In Fig. 1.6 it holds s(x) = 0.4, s(w) = 1.4 and s(x v w) = 2.4.

Figure 1.6: Hyperboxes x and w in the unit square. The join x v w is the smallest hyperbox that contains both x and w.

- :
xRN, w, w
xw, (xw) .
(reset) .
x , x .
w . -
,
(). .
-
. ,
().
1.7
.

1.5.3 -
- 1.8
f: INT
IN. XRN ,
- , l(X)T ,
- . Fab
F2 .

IN -
(complete lattice) (Kaburlasos & Petridis, 2000 Kaburlasos .., 2013).

(category proliferation problem), . .

1-15
Figure 1.7: A neural architecture for clustering (see text) based on hyperboxes. Layer F1 accepts an input x; each neuron of layer F2 stores a hyperbox among w1,...,wK and computes its activation s(x v wk) for x. A reset mechanism disables the winner neuron when the vigilance test fails, so that another F2 neuron may be selected for input x.

Figure 1.8: A two-module neural architecture for supervised learning (classification) of a function f: IN -> T. Module a (layers F1a, F2a) processes an input X, module b (layers F1b, F2b) processes the corresponding label l(X), and the inter-module map Fab associates categories of module a with categories of module b.
1-16
1.6
1
- RN,
(principal
component analysis), (independent component analysis), ..
(, 2007 Bishop, 2004).
2 (deep learning)
(Farabet .., 2013 LeCun ., 2015).
,
.

.
(-)
.
Neocognitron (Fukushima, 1980).
, (deep convolutional ANN),
(recursive ANN) ..

. ,
, , ,
(sequential data)
.
3 (spiking ANN),
(Kasabov .., 2015).
, .

-.
Arbib
(2003). , ,
/ RN.
4 - /
RN. ,
(Hirvensalo, 2003)
. ,
(semantics) (partially ordered
relations) (Kaburlasos .., 2013) , ,
.

1.1)
.
1.2) -- f: RNRM.

.
1.3) perceptron () ,
() . .

1.4)
pj j xp .
:
() pj ,
() pj .
1.5) . w1,,w5
T1 T2, XOR.
t

T2

w3 w5
w4

T1

w1 w2

x1 x2

1.6) A 3x4x2 feedforward network of linear neurons, f(u) = u, has weight matrices

W1 = [1 2 1 3; 6 2 8 5; 9 7 4 1]  and  W2 = [3 2; 1 9; 7 3; 6 1].

Show that the function f: R3 -> R2 computed by this two-layer network can also be computed by an equivalent network without a hidden layer.
1.7) 1.4 -
5 y (. 1.5())
w1= (2.1, -1), w2= (-1, 2), w3= (1.7, 1.4), w4= (1.8, -1.8) w5= (1.5,-1.5).
(|y-y*|) 1 2.
= (1,2) w2.
1.8) . ( /) w1=
[0.1,0.2][0.7,0.8] w2= [0.3,0.7][0.2,0.5] x= (0.25,0.6) .
w1 w2
x. .


Arbib, M.A. (d.). (2003). The Handbook of Brain Theory and Neural Networks (2nd ed.). Cambridge, MA:
The MIT Press.
Bishop, C.M. (2004). Neural Networks for Pattern Recognition. Oxford, Great Britain: Oxford University
Press.
Carpenter, G.A. & Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern
recognition machine. Computer Vision, Graphics, and Image Processing, 37(1), 54-115.
Carpenter, G.A., Grossberg, S. & Rosen, D.B. (1991). Fuzzy ART: fast stable learning and categorization of
analog patterns by an adaptive resonance system. Neural Networks, 4(6), 759-771.
Carpenter, G.A, Grossberg, S., Markuzon, N., Reynolds, J.H. & Rosen, D.B. (1992). Fuzzy ARTMAP: a
neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE
Transactions on Neural Networks, 3(5), 698-713.
, . (2007). . , : .
Farabet, C., Couprie, C., Najman, L. & LeCun, Y. (2013). Learning hierarchical features for scene labeling.
IEEE Transactions on Pattern Analysis & Machine Intelligence, 35(8), 1915-1929.
Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern
recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202.
Hebb, D.O. (1949). The Organization of Behavior. New York, N.Y.: Wiley.
Hirvensalo, M. (2003). Quantum Computing (2nd ed.) (Series in Natural Computing). Berlin, Germany:
Springer.
Hochreiter, S., Bengio, Y., Frasconi, P. & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the
difficulty of learning long-term dependencies. In S.C. Kremer & J.F. Kolen (Eds.), A Field Guide to
Dynamical Recurrent Neural Networks. Piscataway, NJ: IEEE Press.
Hopfield, J.J. (1982). Neural networks and physical systems with emergent collective computational abilities.
In Proceedings of the National Academy of Sciences of the USA, 79(8), 2554-2558.
Kaburlasos, V.G., Petridis, V. (2000). Fuzzy lattice neurocomputing (FLN) models. Neural Networks, 13(10),
1145-1170.
Kaburlasos, V.G., Papadakis, S.E. & Papakostas, G.A. (2013). Lattice computing extension of the FAM neural
classifier for human facial expression recognition. IEEE Transactions on Neural Networks and Learning
Systems, 24(10), 1526-1538.
Kasabov, N., Scott, N., et al. (to be published). Evolving spatio-temporal data machines based on the
NeuCube neuromorphic framework: Design methodology and selected applications. Neural Networks.
Kohonen, T. (1995). Self-Organizing Maps (Springer Series in Information Sciences 30). Heidelberg,
Germany: Springer-Verlag, 1995.
Kosko, B. (1988). Bidirectional associative memories. IEEE Transactions on Systems, Man, and Cybernetics,
18(1), 49-60.
LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Minsky, M. & Papert, S. (1969). Perceptrons. Oxford, England: MIT Press.
Rumelhart, D.E., McClelland, J.L. & the PDP Research Group (1986). Parallel Distributed Processing,
Volume 1: Foundations. Cambridge, MA: The MIT Press.
Sandamirskaya, Y., Zibner, S.K.U., Schneegans, S. & Schöner, G. (2013). Using dynamic field theory to
extend the embodiment stance toward higher cognition. New Ideas in Psychology, 31(3), 322-339.
Simpson, P.K. (1992). Fuzzy min-max neural networks - part1: classification. IEEE Transactions on Neural
Networks, 3(5), 776-786.
Simpson, P.K. (1993). Fuzzy min-max neural networks - part2: clustering, IEEE Transactions on Fuzzy
Systems, 1(1), pp. 32-45.
Werbos, P. (1974). Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences.
PhD thesis, Harvard University, Cambridge, MA.

2:

(Zadeh,
1999).
(concept)
.
, , ,
.

2.1
.
, , , (universe of discourse).
(fuzzy set) (characteristic
function) : [0,1]. A(x) , ,
(membership function) (Zadeh, 1965).
(,).
(information granule) (Kaburlasos, 2006 Zadeh, 1999)
, / .
A(x) . ,
A(x) (), ,
' .

A = {(x, A(x)) | x in X, A(x) in [0,1]}  (2.1)

where A(x) is the degree of membership of x in A. An alternative notation is A = {A(x)/x | x in X, A(x) in [0,1]}. When the universe of discourse X is countable (e.g. finite), one writes A = SUM_{i=1}^{n} A(xi)/xi, where the summation symbol denotes enumeration rather than addition. When X is uncountable (e.g. X = [0,1]), one writes A = INT_X A(x)/x,
,
() .

= {, , , , }

: [0,1] , (x)
x . ,
= {(,0.2), (,0.9), (,0), (,1), (,0.2)}.
0. 2
, , = +

0.9 0 1 0.2
+ + + , +

A( x )
(). (x,A(x)), , A(x)=0 . ,
x
(x)
x (x).
, () = 1 ,
() = 0 . ,
() = 0.8 () = 0.3 ..,
(x).

{0,1}
[0,1]. ,

= {, , , , , , }


.
, (x), {0, 1}. ,
() = 1 () = 0, .
, , .

2.1.1
:

1) - (-cut) ,
:

Aa = {x in X | A(x) >= a},  a in [0,1]  (2.2)

= 1, 1- 1 (core) .
- (
) -
.

2) , , (support)
, supp() :

supp(A) = {x in X | A(x) > 0}  (2.3)

3) (height) ,
(least upper bound , , (sup)premum) (degrees of
membership) :
hgt(A) = sup_{x in X} A(x)  (2.4)

, ,
, (sup) (max)
. hgt() = 1,
(normal). , (
) , 1.

4) (empty) ,
x (x) = 0.

5) (equal)
x (x) = (x). ,
' () .
6) , , .
() , ( )
, (x) (x), x. ,
'
() .
7) (fuzzy power set)
, F() = [0,1] = {| : [0,1]}.
(), () () F() :

(A UNION B)(x) = max{A(x), B(x)}
(A INTERSECT B)(x) = min{A(x), B(x)}  (2.5)
(NOT A)(x) = 1 - A(x)

max (. maximum )
, , sup (. supremum
) . min (. minimum
), - inf, .
(greatest lower bound , , (inf)imum)) .
, max min, ,
() () (AB)(x) =
A(x)B(x) (AB)(x) = A(x)B(x), .
(F(),) (
7).
, ,
(, 2010) lattice.
, , .
, .


.
.
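The following Python sketch (our own, for illustration) implements the standard operations of Eq. (2.5) and the a-cut of Eq. (2.2) on discrete fuzzy sets represented as dictionaries; the example sets are hypothetical.

def f_union(A, B):
    return {x: max(A.get(x, 0.0), B.get(x, 0.0)) for x in set(A) | set(B)}

def f_intersection(A, B):
    return {x: min(A.get(x, 0.0), B.get(x, 0.0)) for x in set(A) | set(B)}

def f_complement(A, universe):
    return {x: 1.0 - A.get(x, 0.0) for x in universe}

def alpha_cut(A, a):
    return {x for x, m in A.items() if m >= a}

U = {1, 2, 3, 4, 5}
A = {1: 0.2, 2: 0.9, 4: 1.0, 5: 0.2}
B = {2: 0.5, 3: 0.7, 4: 0.3}
print(f_union(A, B))
print(f_intersection(A, B))
print(f_complement(A, U))
print(alpha_cut(A, 0.5))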

2.2
, , .

2.2.1

(resolution identity theorem), , ,
- (Zadeh, 1975). -
--.
, :
- .

-.

2.2.2
(extension principle) (fuzzification)
, . , () X Y
f: XY. X,
Y, f:

B(y) = f(A)(y) = sup_{x in X | y = f(x)} A(x)  (2.6)
- :

[f(A)]a = f(Aa) (2.7)


. ,

.

2.2.3
(fuzzy relation) R () X Y
XY R: XY[0,1]. ,
(Cartesian product) N () . , ()
= {, } =
{, }. R =
0.8 0.2 0.1 0.6
R(x,y) = + + + .
(, ) (, ) (, ) (, )
: .. 0.2/(,)
( ) 0.2.
0.8 0.2
R R = .
0.1 0.6

2.2.4
(projection) . ,
(x1,,xN) X=X1XN
H Xi1 Xim {i1,,im}= I I ={1,,N} X. (projection)
(x1,,xN) H Xi1 Xim

Proj_H A(x1,...,xN) = P(x_{i1},...,x_{im}) = max over the coordinates xi with i not in {i1,...,im} of A(x1,...,xN)  (2.8)

- A(x1,,xN) max
sup , , . , R(x,y), xX=, yY=
. R(x,y) Y B(y) =
ProjYR(x,y) = R( x, y ) .
x

2.2.5
, : [0,1]
x [0,1].
-1 (type-1). , -1
.
[0,1]. , x
[0,1]. , (interval-valued fuzzy set).
,
[0,1][0,1]. ,
x = [0,1], :
F([0,1]), (F([0,1]),) , .

() -2 ((generalized) type-2 fuzzy set).
() -2.

: L, (L,) . .. L =
[0,1] -1, L = F([0,1]) -2 ..
, ,
,
, , .. ,
(intuitionistic fuzzy set) (, 2014), , ,
: [0,1] , - (non-membership
function) : [0,1] 0 (x) + (x) 1.

2.3
(fuzzy logic)
() , () , (degree of
truth). , (two-valued)
{0,1}, (multi-valued) () [0,1].
.


,
, , , .

, , . ,
, . . ,
,
.


, . ,

. , ,
, ,
,
/.

, ,
() [0,1]. , ,
, () (Goguen, 1967).
()
, () . ,
() .

2.3.1
P (composite proposition),
/ / .. P1,,Pn.
, , P, P1 Pn. ,
P , P1,,Pn .
, (rules),
R: - -


R: PQ

P - Q -. P
(antecedent) Q (consequent) R: PQ.
R: PQ ( ) ,
p q P() Q () .
2.1 R: PQ (Boolean)
(implication function) pq. , 2.1 pq
pq pq. ,
pq p q, pq
( 1), ( 0), p q.

p q pq pq pq

0 0 1 1 1
0 1 1 1 1
1 0 0 0 0
1 1 1 1 1

2.1 pq, p q
P () Q () PQ. pq p q
pq.


R: ,
P , Q .
P Q , R: PQ
, , pq 2.1
.
, P , ,
, Q . , P Q,
, , R: PQ
. , R: PQ
P Q . ,
R: PQ P Q
- , ,
(P,Q) .
, R: PQ ()
( P) ( Q).
.
(algorithm) , , (system) ,
,
.
.

2.3.2
[0,1].

, ()
. , T: [0,1][0,1][0,1] ,
, - (triangular norm , , t-norm), ,
S: [0,1][0,1][0,1] , , - -

(triangular conorm , , t-conorm s-norm), , ,
, N: [0,1][0,1] (Klir & Yuan, 1995 Nguyen & Walker, 2005).

A t-norm T satisfies, for all x, y, z in [0,1]:

1. T(x,y) = T(y,x) (commutativity).
2. T(x,T(y,z)) = T(T(x,y),z) (associativity).
3. y <= z implies T(x,y) <= T(x,z) (monotonicity).
4. T(x,1) = x (1 is the neutral element).

A t-conorm S satisfies, for all x, y, z in [0,1]:

1. S(x,y) = S(y,x) (commutativity).
2. S(x,S(y,z)) = S(S(x,y),z) (associativity).
3. y <= z implies S(x,y) <= S(x,z) (monotonicity).
4. S(x,0) = x (0 is the neutral element).

A negation N satisfies, among others:

1. x <= y implies N(x) >= N(y) (it is decreasing).

Popular t-norms include:
TM(x,y) = min{x,y} - the minimum, or Gödel, t-norm.
TP(x,y) = xy - the product t-norm.
TL(x,y) = max{x+y-1, 0} - the Łukasiewicz t-norm.

Popular t-conorms include:
SM(x,y) = max{x,y} - the maximum, or Gödel, t-conorm.
SP(x,y) = x+y-xy - the probabilistic sum.
SL(x,y) = min{x+y, 1} - the Łukasiewicz t-conorm, or bounded sum.

A popular negation is N(x) = 1-x.
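A short Python sketch of the operators just listed, with a check of the boundary axioms; this is our illustration, not code from the book.

def t_min(x, y):         return min(x, y)             # Gödel t-norm
def t_prod(x, y):        return x * y                 # product t-norm
def t_lukasiewicz(x, y): return max(x + y - 1.0, 0.0)

def s_max(x, y):         return max(x, y)             # Gödel t-conorm
def s_prob(x, y):        return x + y - x * y         # probabilistic sum
def s_lukasiewicz(x, y): return min(x + y, 1.0)       # bounded sum

def negation(x):         return 1.0 - x

# Quick check of the boundary axioms T(x,1) = x and S(x,0) = x:
for x in (0.0, 0.3, 1.0):
    assert t_min(x, 1.0) == x and t_prod(x, 1.0) == x
    assert s_max(x, 0.0) == x and s_prob(x, 0.0) == x
print(t_lukasiewicz(0.7, 0.6), s_lukasiewicz(0.7, 0.6))  # 0.3 1.0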

() /
/ , 1,,n.
1,,n,
-, - .
, 1 n, a
a = min{a1,,an} a =
a1an .., a1,,an A1,,An .
, : [0,1][0,1][0,1]
(a,b) a b, a,b{0,1} I(a,b)
2.1.
(Hatzimichailidis .., 2012):

IS(a,b) = S(n(a), b) - the S-implications,
IR(a,b) = sup{x in [0,1] | T(a,x) <= b} - the R-implications,
IQL(a,b) = S(n(a), T(a,b)) - the QL-implications,

where T(.,.) is a t-norm, S(.,.) is a t-conorm and n(.) is a negation.

Popular fuzzy implications include:
IB(a,b) = (1-a) v b - the Boole implication.
IL(a,b) = 1 ^ (1-a+b) - the Łukasiewicz implication.
IZ(a,b) = (a ^ b) v (1-a) - the Zadeh implication.
IM(a,b) = a ^ b - the Mamdani implication.
(. Boole, Lukasiewicz
Zadeh) 2.1 . ,
Mamdani ,
, .
,
.
() . () :
() ( )
( ). , ,
. ,
.
.
(fuzzy algorithm) , ,
(fuzzy system) (fuzzy model)
(spoken language) .

2.4
.
, ,
, .

2.4.1
(categorical),
(), .
: .
(subject) (Margaris, 1990),
(predicate), (predicative) .

/, ,
, , . ,
() , . .
(fuzzy variable)
(Zadeh, 1975). ,
() : , , , .. , ,
- : ,
, , , . , ()
.
() (label) , .
. , ()
(descriptor) (hedge). ,
, ,
.
/.
. ,
,
A (. ) X, = .
/ /
.
()
. ()
.

2.4.2
, , , , , ,
, ..
. , ,
, ,
. (measurement)
. /
:
. , ,
.

. ,
, . 1. ,
: [0,1] ()
-, [0,1], () .
: [0,1] .
(-1)
(fuzzy number) , , (fuzzy interval).
- () .
.
F1.
()
. , , ..
.
2.1.
(fuzzy number arithmetics) -
, () . 9
(Belohlavek, 2007).

Figure 2.1: (a) A fuzzy number (triangular membership function over the real line); (b) a fuzzy interval (trapezoidal membership function).

, .
, 2.2 i: [0,1], i{1,2,3,4}. ,
A1 A2 , A3 A4 . , 2.3 ()
- 0.6 hgt() = 0.8 ()
supp(B) = B0 B. 2.4 (
) ( )
, , A(x) B(x), x. (), () ()
, 2.5.

Figure 2.2: (a) A1, (b) A2, (c) A3, (d) A4: four example membership functions Ai: X -> [0,1], i in {1,2,3,4} (see text).

Figure 2.3: (a) A fuzzy set A with height hgt(A) = 0.8 and its a-cut A0.6 at level a = 0.6. (b) A fuzzy set B; its support supp(B) equals the cut B0.

Figure 2.4: Two fuzzy sets A (dashed) and B (solid) with A contained in B, that is A(x) <= B(x) for every x.

Figure 2.5: (a) Two fuzzy sets A and B. (b) Their union A UNION B. (c) Their intersection A INTERSECT B. (d) The complement of B, computed pointwise as 1 - B(x).

2.4.3
, () ()
, (.
) . , = {(1,1),
(2,0.9), (3,0.6), (4,0.2)}, = {(3,0.1), (4,0.5), (5,0.7), (6,0.9), (7,1), (8,1), (9,1)}, = {(1,0.1), (2,0.3),
(3,0.9), (4,1), (5,0.6), (6,0.2)} = {1, 2, 3, 4, 5, 6, 7, 8, 9}.
, .

= {(3,0.1), (4,0.2)}.
= {(1,0.1), (2,0.3), (3,0.9), (4,1), (5,0.7), (6,0.9), (7,1), (8,1), (9,1)}.
() = {(1,0.1), (2,0.3), (3,0.6), (4,0.5), (5,0.6), (6,0.2)}.
= {(1,1), (2,1), (3,0.9), (4,0.5), (5,0.3), (6,0.1)}.

.

2.4.4
(inference) .
modus ponens (
) .

: .
: .
, : .

, :
: .
: .

, , modus ponens,
60. , .
, Modus
Ponens 60.
Modus Ponens
(expert systems). ,
.
: 40 C
.
: 39.9 C .
Modus Ponens
, ,
39.9 C 40 C.
,
Modus Ponens .
: .
: A, A , , A.
, : , B ( ) ,
, B.
, R: (X A(x)) (Y B(y)), R: (X =
A(x))(Y = B(y)) , , R: A(x)B(y), A(x) B(y)
, .
Modus Ponens :

: R: (X = A(x))(Y = B(y)).
: X = A(x)

(y) A(x)
, R.
(compositional
rule of inference) Zadeh:
B = AR,
R R: (X=(x))(Y=(y)),
-, :

B'(y0) = SUP_{(x,y0)} T(A'(x), R(x,y))  (2.9)

where the supremum is taken over all pairs (x,y) with y = y0. In particular, for the t-norm min, the fuzzy set B' is computed by

B'(y0) = OR_{(x,y0)} [ A'(x) ^ R(x,y) ].  (2.10)

- . ,
:

1) Boole: R(x,y) = (1-A(x)) v B(y)
2) Łukasiewicz: R(x,y) = 1 ^ (1-A(x)+B(y))
3) Zadeh: R(x,y) = (A(x) ^ B(y)) v (1-A(x))
4) Mamdani: R(x,y) = A(x) ^ B(y)

R
[0,1][0,1] XY,
R(A(x),B(y)). R(x,y) R: (x)(y)

. B(y0)
Zadeh R(x,y) = T ( A( x), R( x, y )) y.
( x, y )


Zadeh
B R: () ()
. - TM(x,y) = min(x,y)
Mamdani R(x,y) = A(x)B(y), . .(2.10). , A =
{(1,0), (2,0.1), (3,0.4), (4,0.6), (5,0.9), (6,1)} B = {(0,1), (10,1), (20,0.7), (30,0.3), (40,0)}.
R 2.2.
A = {(1,0), (2,0.2), (3,0.7), (4,1), (5,1), (6,0.7)} R:
. A(x) 2.2.
R(x,y) = A( x) R( x, y ) 2.3. ,
( x, y )
R(x,y) Y = {0, 10, 20, 30, 40} B(y0) = R( x, y ) B
( x,y0 )
= {(0,0.9), (10,0.9), (20,0.7), (30,0.3), (40,0)}, y0Y. , (
) A A, B
B.

           B(y):  y=0   y=10  y=20  y=30  y=40
x   A(x)          1     1     0.7   0.3   0      A'(x)
1   0             0     0     0     0     0      0
2   0.1           0.1   0.1   0.1   0.1   0      0.2
3   0.4           0.4   0.4   0.4   0.3   0      0.7
4   0.6           0.6   0.6   0.6   0.3   0      1
5   0.9           0.9   0.9   0.7   0.3   0      1
6   1             1     1     0.7   0.3   0      0.7

Table 2.2: The fuzzy relation R(x,y) of the rule A(x) -> B(y) computed with the Mamdani implication R(x,y) = A(x) ^ B(y). The last column shows the membership function A'(x) of the observed input.


The minima A'(x) ^ R(x,y) for each x and y are shown in Table 2.3.

x=1:  0    0    0    0    0
x=2:  0.1  0.1  0.1  0.1  0
x=3:  0.4  0.4  0.4  0.3  0
x=4:  0.6  0.6  0.6  0.3  0
x=5:  0.9  0.9  0.7  0.3  0
x=6:  0.7  0.7  0.7  0.3  0

Table 2.3: The minima A'(x) ^ R(x,y); taking the maximum of each column yields B'(y0) = OR_{(x,y0)} [A'(x) ^ R(x,y)] by Zadeh's compositional rule of inference.
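The following Python sketch (ours) reproduces the worked example above with max-min composition; the arrays hold exactly the membership values of Tables 2.2 and 2.3.

import numpy as np

A  = np.array([0.0, 0.1, 0.4, 0.6, 0.9, 1.0])   # A(x), x = 1..6
B  = np.array([1.0, 1.0, 0.7, 0.3, 0.0])        # B(y), y = 0,10,20,30,40
A1 = np.array([0.0, 0.2, 0.7, 1.0, 1.0, 0.7])   # A'(x), the observed input

R = np.minimum.outer(A, B)           # Mamdani relation R(x,y) = A(x) ^ B(y)
B1 = np.max(np.minimum(A1[:, None], R), axis=0)  # B'(y) = max_x [A'(x) ^ R(x,y)]
print(B1)   # -> [0.9 0.9 0.7 0.3 0.0], as in the text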

2.5 Mamdani, Sugeno



f: NT (Kaburlasos & Kehagias, 2014), 2.6,
T M,
(regressor), () L,

(classifier) N f .
, , f: NT,

f.

Figure 2.6: A fuzzy inference system (FIS) implements a function f: R^N -> T. A fuzzifier converts the numeric inputs x1,...,xN into fuzzy sets, the rule base computes fuzzy outputs, and a de-fuzzifier produces the value y = f(x1,...,xN) in T.

() (fuzzy inference
systems (FISs)) Mamdani Sugeno (Guillaume, 2001) .

2.5.1 o Mamdani
(fuzzy rule) Mamdani (Mamdani & Assilian, 1975)
:
RM: ( ) ( ).

RM: = =
RM: ( = ) ( = )
RM: .
RM ,
. , , = ,
= RM.
Mamdani .
Mamdani Ri: = i = i, i{1,,I}
(i,i), i{1,,I} f: F1F1,
. , Mamdani
(experts).

2.5.2 o Sugeno
Sugeno (Sugeno & Kang, 1988 Takagi & Sugeno, 1985)
:
RS: ( ) ( y f(x)).

RS: = y = f(x)
RS: ( = ) (y = f(x))
RS: y = f(x),
x=(x1,,xN) 2.6.

RS ,
y f(x) . , , = ,
y = f(x) RS.
Sugeno Ri: = i y = fi(X), i{1,,I}
. fi(X) .
, Sugeno .
, Mamdani/Sugeno
,
. , Mamdani
Sugeno.

2.5.3 Mamdani
Mamdani
(X A) (Y B) (x) (y) (x) (y)
: [0,1] : [0,1]. x y (x) (y),
,
. , - (X A)
/ / . ,
Mamdani (1, 2) (1).

K1: EAN ( 1 ) ( 2 ) ( 1 )
K2: EAN ( 1 ) ( 2 ) ( 1 )
K3: EAN ( 1 ) ( 2 ) ( 1 )

, 1, 2 1
. 1 [0,10],
2 [10,20], 1 [50,100].
/
. , 1
, 2.7().
2 1 2.7 () (), .

.
()
,
.
() (overlapping)
, ()
f: NT (. 2.6), .
2.8 1, 2 3
2.7. 1, 2 3 () 2.6.
2.9 x1=6 x2=15 (. 2.6 N=2).
x1=6 x2=15
, 2.10.
( x1 x2)
x1=6 x2=15 ,
(knowledge base) . , 2.10 3
x1=6 0.75, x2=15
0.167.
0.75 0.167
2.10. , 2.10, 2,
0.4 0.5, ,
. 1
0 1 .


Figure 2.7: Membership functions (partitions) of the linguistic values of (a) input variable 1 over [0,10], (b) input variable 2 over [10,20], and (c) output variable 1 over [50,100].


Figure 2.8: The membership functions employed by rules K1, K2 and K3, drawn from the partitions of Figure 2.7.

1, 2, 3
. , -,
TM(x,y) = min{x,y}. , x1=6 x2=15
(degree of activation) 1 1(0,1)= min{0,1}= 0,
2 2(0.4,0.5)= min{0.4,0.5}= 0.4 3
3(0.75,0.167)= min{0.75,0.167}= 0.167.

2.11 1 .
,
, , ,
Mamdani
2.11 1, 2 3. ,
2.11 ()
.


Figure 2.9: The inputs x1 = 6 and x2 = 15 shown (vertical dashed lines) against the membership functions of rules K1, K2 and K3 of Figure 2.8.

Figure 2.10: The degrees of membership of the inputs x1 = 6 and x2 = 15 in the antecedent fuzzy sets of each rule. For example, in rule K3 the input x1 = 6 has membership degree 0.75 and the input x2 = 15 has membership degree 0.167.

Figure 2.11: (top) The activation of each rule: K1 fires to degree 0, K2 to degree 0.4 and K3 to degree 0.167; the consequent fuzzy set of each rule is clipped at the rule's degree of activation. (bottom) The aggregate output fuzzy set over [50,100], to be de-fuzzified.

2.6
- (de-fuzzification), .
T f: NT. T=
B(y) ,

2.11. - (Runkler, 1997).
.

The maximum criterion selects y_M = arg max_y B(y), i.e. a value y_M at which the membership function B(y) attains its maximum; when the maximum is attained on an interval, the midpoint of that interval may be used. For instance, in Fig. 2.12(a) the maximum criterion gives y_M = arg max_y B(y) = 5, whereas in Fig. 2.12(b) it gives y_M = (3+5)/2 = 4.

- :
y
B( y )dy y
B( y )dy

y ,
B(y) . ,
2.12() -

y 9
1 ( 0.25 y + 2.25)dy ( 0.25 y + 2.25)dy y 2 18 y 69 0 y

5.536 (
5 y
y 12.464 ). , 2.12()

y y 7
B( y )dy y
B( y )dy 1.5 + 2 + 5 ( 0.3 y + 2.5)dy ( 0.3 y + 2.5)dy
y
+ 2 + 0.4

0.3 y 2 5 y 17.8 0 y 5.153 ( y 11.513 ).

Figure 2.12: Two fuzzy sets used to demonstrate defuzzification techniques. (a) A triangular fuzzy set with B(y) = 0.5y - 1.5 on [3,5] and B(y) = -0.25y + 2.25 on [5,9]. (b) A composite fuzzy set whose membership function includes, among others, the segment B(y) = -0.3y + 2.5 on the interval [5,7].

The centroid criterion selects the center of gravity of B(y):

y_bar = INT y B(y)dy / INT B(y)dy

The centroid criterion is the most widely used defuzzification technique, even though it is computationally the most demanding. For instance, in Fig. 2.12(a) the centroid criterion gives

y_bar = [ INT_3^5 y(0.5y - 1.5)dy + INT_5^9 y(-0.25y + 2.25)dy ] / INT_3^9 B(y)dy = 17/3 ~ 5.67.

The corresponding computation of y_bar for Fig. 2.12(b) is carried out similarly.
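A short Python sketch (ours) that verifies the centroid of Fig. 2.12(a) numerically; the membership function B_a below encodes the two line segments given in the caption.

import numpy as np

def defuzzify_centroid(B, a, b, n=100001):
    """Numerically compute y_bar = INT y B(y)dy / INT B(y)dy on [a,b]."""
    y = np.linspace(a, b, n)
    m = B(y)
    return np.trapz(y * m, y) / np.trapz(m, y)

# Membership function of Fig. 2.12(a): rises on [3,5], falls on [5,9].
def B_a(y):
    return np.clip(np.minimum(0.5 * y - 1.5, -0.25 * y + 2.25), 0.0, 1.0)

print(defuzzify_centroid(B_a, 0.0, 10.0))   # ~5.6667 = 17/3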
A demonstration (video) of Mamdani-type fuzzy inference and defuzzification in MATLAB is available as a multimedia file. Figure 2.13 shows a snapshot of the fuzzy toolbox of MATLAB.

Figure 2.13: A snapshot of the fuzzy toolbox of MATLAB.


() Mamdani Sugeno. .

.
(. ) . , -

B = {(yk,B(yk))| yk, B(yk)[0,1] k{1,,K}}

yM arg max B( yk ) . , yk ,
k{1, ,K }
B.
, ().

2.5.4 Sugeno
Sugeno (1, 2)
( 1).

K1: EAN ( 1 ) ( 2 ) y1(x1,x2)=2x1+x2


K2: EAN ( 1 ) ( 2 ) y2(x1,x2)=3x1+x2-5
K3: EAN ( 1 ) ( 2 ) y3(x1,x2)=2x1+1.2x2-1

K1, K2 K3 K1,
K2 K3, , Mamdani . , ,
x1=6 x2=15. ,
, . ,
rule K1 is activated to degree K1(x1,x2) = min{0,1} = 0, rule K2 to degree K2(x1,x2) = min{0.4,0.5} = 0.4, and rule K3 to degree K3(x1,x2) = min{0.75,0.167} = 0.167. Next,
, . ,
K1
y1(x1,x2)=2x1+x2 K1. y1(6,15) = 2*6+ 15 = 27.
K2 y2(6,15) = 3*6+ 15 5 = 28 K3
y3(6,15) = 2*6+ 1.2*15 1 = 29. Sugeno
, . ,
,
, . ,
- , ,
, Sugeno,
.

y_bar(x1,x2) = [ K1(x1,x2)y1(x1,x2) + K2(x1,x2)y2(x1,x2) + K3(x1,x2)y3(x1,x2) ] / [ K1(x1,x2) + K2(x1,x2) + K3(x1,x2) ]
            = [ (0)(27) + (0.4)(28) + (0.167)(29) ] / (0 + 0.4 + 0.167) ~ 28.2945
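A Python sketch (ours) of the Sugeno weighted average above; the antecedent membership degrees for x1 = 6, x2 = 15 are hard-coded from the text (they stem from the membership functions of Figure 2.7).

def sugeno_output(x1, x2):
    """Weighted average of the rule outputs of the Sugeno example above."""
    # Degrees of activation (min of the antecedent membership degrees):
    K = [min(0.0, 1.0), min(0.4, 0.5), min(0.75, 0.167)]
    # Rule consequents y_i(x1, x2):
    y = [2*x1 + x2, 3*x1 + x2 - 5, 2*x1 + 1.2*x2 - 1]
    return sum(k * yi for k, yi in zip(K, y)) / sum(K)

print(sugeno_output(6, 15))   # ~28.29, as computed in the text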

2.5.5
(
Mamdani, Sugeno) (.. )
1
f: RNRM, 2

f: RNRM, 2= 2 1 >1 1 R
(Kaburlasos & Kehagias, 2007). , ,
(.. ) .

2.5.6
Mamdani RM: , Sugeno RS: y = f(x).

L, . . RN f:
RNT ( 2.6) .

2.5.7
2.6
.
IM(a,b) = ab. , . ,
- TM(x,y) =
min{x,y}. , -. ,
, , .
.

1,,, 1,, L.

Mamdani/Sugeno . .
4
f: NT NT.

2.5.8
1,,
(. 2.7). ,
(exponential explosion),
,
(Kóczy & Hirota, 1997).
(. 2.6)
(uncertainty) . , x=(x1,,xN) 2.6
, - .
9
.

2.6
, -2
: [0,1][0,1], . x
[0,1][0,1].

-2, x=
[0,1]. -2 (2) (type-2
fuzzy number) (Karnik .., 1999 Mendel, 2014 Mendel & John, 2002). 2.14
2, .
2.14() f, t
F, = . f, t F
(primary) . , f F,
(lower)- (upper)- , . , 2
f F f(x) F(x) x.
x0 2.14 () f
F (x0, f(x0)) (x0, F(x0)), . 2.14 ()
(secondary)
(x0, f(x0)) (x0, F(x0)).
x x x
() t.

2 , ,
.
2
() . , (computing with words)
2. () , 2
2
.
( ) 2
.

Figure 2.14: (a) An interval type-2 fuzzy number is bounded below by a (lower) membership function f and above by an (upper) membership function F; a function t between them is also shown. (b) The secondary membership function at x = x0, defined over the interval [f(x0), F(x0)].


2.1) = {(1,0.1), (2,0.4), (3,0.7), (4,0.9), (*,1)}, =


{(,0.4), (1,1), (2,0.5), (3,0.9), (4,0.2) } = {(,1), (1,0.3), (2,0.1), (4,0.9), (*,0.5)}
= {, 1, 2, 3, 4, *} ()
().
2.2) .
: 2 = {0.1/0, 0.4/1, 1/2, 0.3/3, 0.1/4}
: 4 = {0.1/2, 0.2/3, 1/4, 0.5/5, 0.1/6}
| |
= | 2 - 4|.
2.3) A(x1,x2,x3)= {((1,,c),0.7), ((1,,c),0.4), ((2,,d),0.2), ((2,,e),1), ((2,,e),0.3),
((3,,c),0.4), ((3,,d),0.8), ((3,,e),1)} H= X1 X=X1X2X3=
{1,2,3}{,,}{c,d,e}. A(x1,x2,x3) H=X1.
2.4) -
y - B(y) 2.12.
2.5) Sugeno ( input1, input2) ( output1)
:

S1: EAN ( input1 small) ( input2 low) y1(x1,x2)=x1+x2+4,


S2: EAN ( input1 medium) ( input2 normal) y2(x1,x2)=2x1+x2,
S3: EAN ( input1 large) ( input2 high) y3(x1,x2)=3x1+x2-3,

2.7. x1=5 x2=16.


y ( x1, x2 ) .
2.6) = {(a,1) (b,1), (c,0.7), (d,0.5), (e,0.2)}, = {(1,0.1), (2,0.3), (3,0.6),
(4,0.8), (5,0.9), (6,1)}, : , = {(a,0.6) (b,.8), (c,1), (d,0.7), (e,0.4)}.
Zadeh
Lukasiewicz .


Belohlavek, R. (2007). Do exact shapes of fuzzy sets matter?. International Journal of General Systems, 36(5),
527-555.
Goguen, J.A. (1967). L-fuzzy sets. Journal of Mathematical Analysis and Applications, 18(1), 145-174.
Guillaume, S. (2001). Designing fuzzy inference systems from data: an interpretability-oriented review. IEEE
Trans. on Fuzzy Systems, 9(3), 426-443.
Hatzimichailidis, A.G., Papakostas, G.A. & Kaburlasos, V.G. (2012). A study on fuzzy D-implications. In
Proceedings of the 10th International Conference on Uncertainty Modeling in Knowledge Engineering
and Decision Making (FLINS 2012), Istanbul, Turkey, 26-29 August.
, . (2014). . , .
Kaburlasos, V.G. (2006). Towards a Unified Modeling and Knowledge-Representation Based on Lattice
Theory. Heidelberg, Germany: Springer, series: Studies in Computational Intelligence, 27.
Kaburlasos, V.G. & Kehagias, A. (2007). Novel fuzzy inference system (FIS) analysis and design based on
lattice theory. IEEE Transactions on Fuzzy Systems, 15(2), 243-260.
Kaburlasos, V.G. & Kehagias, A. (2014). Fuzzy inference system (FIS) extensions based on lattice theory.
IEEE Trans. on Fuzzy Systems, 22(3), 531-546.
Karnik, N.N., Mendel, J.M. & Liang, Q. (1999). Type-2 fuzzy logic systems. IEEE Transactions on Fuzzy
Systems, 7(6), 643-658.
Klir, G.J. & Yuan, B. (1995). Fuzzy Sets and Fuzzy Logic: Theory and Applications. Upper Saddle River, NJ:
Prentice Hall.
Kóczy, L.T. & Hirota, K. (1997). Size reduction by interpolation in fuzzy rule bases. IEEE Transactions on
Systems, Man, Cybernetics B, 27(1), 14-25.
Mamdani, E.H. & Assilian, S. (1975). An experiment in linguistic synthesis with a fuzzy logic controller. Intl.
Journal of Man-Machine Studies, 7(1), 1-13.
Margaris, A. (1990). First Order Mathematical Logic. Mineola, NY: Dover Publications.
Mendel, J.M. (2014). General type-2 fuzzy logic systems made simple: a tutorial. IEEE Trans. on Fuzzy
Systems, 22(5), 1162-1182.
Mendel, J.M. & John, R.I. (2002). Type-2 fuzzy sets made simple. IEEE Trans. on Fuzzy Systems, 10(2), 117-
127.
Nguyen, H.T. & Walker, E.A. (2005). A First Course in Fuzzy Logic. 3rd ed. Boca Raton: Chapman & Hall /CRC.
Runkler, T.A. (1997). Selection of appropriate defuzzification methods using application specific properties.
IEEE Trans. on Fuzzy Systems, 5(1), 72-79.
Sugeno, M. & Kang, G.T. (1988). Structure identification of fuzzy model. Fuzzy Sets and Systems, 28(1), 15-
33.
Takagi, T. & Sugeno, M. (1985). Fuzzy identification of systems and its applications to modeling and control.
IEEE Trans. on Systems, Man and Cybernetics, 15(1), 116-132.
, .. (2010). (Fuzzy Logic). , .
Zadeh, L.A. (1965). Fuzzy sets. Information and Control, 8, 338-353.
Zadeh, L.A. (1975). The concept of a linguistic variable and its application to approximate reasoning I.
Information Sciences, 8(3), 199-249.
Zadeh, L.A. (1999). From computing with numbers to computing with words. From manipulation of
measurements to manipulation of perceptions. IEEE Trans. on Circuits and Systems - I, 46(1), 105-119.

3:

(optimization),
, ,
() (constraints). , ,
, (least-squares method) Gauss,

.
(emulation)
. '
(nature inspired).
,
. ,
(genetic algorithms).

3.1
(Boyd & Vandenberghe, 2004 Deb, 2001),
an objective function f: R^n -> R is minimized, possibly subject to constraints:

min_x f(x)
subject to g_i(x) <= th_i,  i = 1, 2, 3,..., m.  (3.1)

x = x1 , x 2 , x 3 ,..., x n n , gi x
i , i = 1,2,3,...,m () , .
, , (unconstrained)
, , (constrained),

() .
. .(3.1)
(linear) - (non-linear) ,
f x gi x - .
x
() (solution (search) space)
.
,
.(3.1) ( 1 / f x ), ( - f x )
.
f x
gi x (convex), (Bertsekas .., 2003 Deb, 2001):

f(l x1 + (1-l) x2) <= l f(x1) + (1-l) f(x2)  (3.2)

for every l with 0 <= l <= 1, as illustrated in Fig. 3.1. Moreover,
.(3.2) -.

(global minimum)
(local minimum).
. , .
.(3.1)
.
(multi-objective optimization) (Deb, 2001),


, . .
, (multi-modal) ,
( )
.
.
,
.

.

3.1 .

3.2
,
(descent) :

x_{k+1} = x_k + e Dx_k  (3.3)

where k = 0,1,... is the iteration index, x_k and x_{k+1} are the estimates of the minimizer of f(.) at iterations k and k+1 respectively, e is the step size, and Dx_k is the search direction. A descent method requires f(x_{k+1}) < f(x_k) at every iteration, so that the sequence of estimates moves toward a (local) minimum. The choice Dx_k = -grad f(x_k) yields the gradient descent method. Other well-known choices of search direction in Eq. (3.3) include Newton's method (Coleman & Li, 1994) and conjugate descent (Møller, 1993).

, 1. ,
, ,

.

3.2.1

. ,
.
1. . ,
.
2. , ()
,
. , . ,
()
,
. ,
.
3. ,
, . ,

.


.
, ,
.

3.3
1950 1960.
,
, (individual) ( ).

, (.
). , ,
--,
.
(exploration),
(exploitation) .
(. )
(operators), . ,
(Darwin, 2014), . ,
, (. )
.
.
.

(premature convergence).
, . ,
( )
.
(diversity) (Toffolo & Benini, 2003), .
.

3.4
() (genetic algorithms (GA))

(Darwin, 2014). 1960
Holland (1992) . Holland

(genetic plan),
.

3.4.1

,
. ,
. ,
, , .
, ,
. , ,
. ,

.
.
(. )
, (natural selection) ,
(crossover). (. )
(mates) .
(. ) (genes), .
,
.
:
, ,
, , .., . , .
, . ,
() .

.
(. ).
(chromosome)
. ( )
(degree of fitness), ,
.
. ,
,
,
(. ) . ,

(. ) .
, , .
,
. () (.
) .

.

3.4.2
,
(Coley, 1999 Mitchell, 1998)
3.2. (encoding)
. ,
(fitness function),
. .

3.2 .

()
. , (mutation)

.
,
, .

, 3.2.

3.4.2.1
.

/ .
. .

(binary encoding). ,
(bits) 0 1.

. , i
li, i= 1,,M, (L) ,
, :
L = SUM_{i=1}^{M} l_i  (3.4)

Table 3.1 shows an example of two binary-encoded chromosomes; each chromosome consists of three genes of 8 bits, for a total length of 24 bits.

00000000 00000001 00000100
00010000 00000100 00000001

Table 3.1: An example of two binary-encoded chromosomes.

,
. ,
(Coley, 1999) :

r = z (r_max - r_min)/(2^l - 1) + r_min  (3.5)

where [r_min, r_max] is the range of the real variable, l is the number of bits of the gene, and z is the (decimal) integer value of the l-bit binary string.
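A one-function Python sketch (ours) of the decoding of Eq. (3.5); the example values are hypothetical.

def decode_gene(bits, r_min, r_max):
    """Map an l-bit binary gene to a real value in [r_min, r_max], Eq. (3.5)."""
    l = len(bits)
    z = int(bits, 2)                        # decimal value of the bit string
    return z * (r_max - r_min) / (2**l - 1) + r_min

print(decode_gene("00000000", 0.0, 10.0))  # -> 0.0  (r_min)
print(decode_gene("11111111", 0.0, 10.0))  # -> 10.0 (r_max)
print(decode_gene("00000100", 0.0, 10.0))  # -> 4 * 10/255 ~ 0.157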

(permutation encoding).
.
, , .

(tree encoding).
, () /
.
(evolving programs), ..

(value encoding).
, , , ..
,
, .

,
, .. . ,
, .
,
.

3.4.2.2
,
,
.

.
.

.
,

. , ,
,
. .

3.4.2.3

, .
,
.
.
(stochastic) (deterministic).
(Coley, 1999).

(top mate selection). ,


. ,
,
. , ( 50%)
, (. )
. .
,
.

. -
,
, .

(roulette wheel selection). .


. ,
, , .
, .
,
. , 3.3
, ( 30%)
.
. ,
.

(tournament selection). A .

.
.

(local selection). .
. ,

.

(stochastic universal sampling).


.


.
,
. ,
.
. . ,
[0,1/] ,
1/ ()
. , 4
/, 1/4=0.25.
,
, , .

3.3 .

3.4.2.4

. (. )
,
.
, Pc.
, , .
. .
,
. .

(multi-point crossover).
( 1 N )
, 3.2. ,
, .

.
,
. .

(arithmetic crossover).
, ,
. , (..
AND ..) , 3.2.

o (uniform crossover).
.
.
, (mask crossover) ,
.
3.2 .
, 0
, 1
. 0 1 0.5
( ).



Multi-point crossover:   parent 1: 110|010|10   parent 2: 001|001|11
                         child 1:  110|001|10   child 2:  001|010|11

Arithmetic crossover:    parent 1: 11001011     parent 2: 11011111
                         child (AND): 11001011  child (XOR): 00010100

Uniform (mask) crossover: parent 1: 10110001    parent 2: 00011110
                          mask: 00110011        children: 00111101, 10010010

Table 3.2: Examples of crossover operators.

3.4.2.5
,
. , ,
, . ,
.

, . ,
Pm, .
[0,1] ,
Pm, .
, , , . , ..
0 1, . 3.3

.
,

.



before mutation:  01001001   153790   12.1 21.2 35.4 56
after mutation:   01001101   153890   12.1 22.3 35.4 56

Table 3.3: Examples of mutation (the changed elements are highlighted in the original) for binary, integer and real-value encodings.


, (elitism),
, .
,
.

3.4.2.6

(reinsertion) ,
. ,
, .

,
(termination criteria):

1)
2) .

3.5
() (particle swarm optimization (PSO))
Eberhart & Kennedy (1995).
(swarm) , , , ..,
.
, , ,
,
. ,
/ , , ( ) (particle (of the
swarm)), , . ,
,
( ) . ,
,
. , ,
,
.

3.5.1

T
i xi xi1 , xi2 , xi3 ,.., xin


T
n- ui ui1 ,ui2 ,ui3 ,..,uin ,
. f x

. , :
. , , , () ,
, .
i , i{1,,M} x*i
, .
g* x*i
. o
3.4.

At iteration t, the velocity and the position of particle i are updated according to:

u_i^{t+1} = th u_i^t + a r1 (g* - x_i^t) + b r2 (x_i* - x_i^t)  (3.6)

x_i^{t+1} = x_i^t + u_i^{t+1}  (3.7)

where th, a and b are user-defined parameters; typical values are a, b ~ 2 (Yang, 2008). The velocity is usually bounded in [0, u_max], where u_max is a maximum allowed velocity, and r1, r2 are random numbers drawn uniformly from [0,1].

(Parsopoulos
&Vrahatis, 2002 del Valle .., 2008).


Initialize the objective function f(x), the positions x_i and the velocities u_i of the M particles; set t = 0 and f_min = min{f(x1),...,f(xM)}. While the termination criterion is not met: set t = t + 1; draw random numbers r1, r2 in [0,1]; update the velocities by Eq. (3.6) and the positions by Eq. (3.7); evaluate f(x_i), i = 1,...,M, at the new positions; update f_min = min{f(x1),...,f(xM)} and the best positions x_i* and g*. Finally, return x_i* and g*.

Figure 3.4: Flowchart of the particle swarm optimization (PSO) algorithm.
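The following minimal Python optimizer (our own sketch, not the book's code) implements Eqs. (3.6)-(3.7) without velocity clamping, and tests it on the Rastrigin function used in the exercises.

import numpy as np

def pso(f, dim, n_particles=30, iters=200, theta=0.7, alpha=2.0, beta=2.0,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimizer implementing Eqs. (3.6)-(3.7)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions x_i
    u = np.zeros_like(x)                            # velocities u_i
    xbest = x.copy()                                # personal bests x_i*
    fbest = np.apply_along_axis(f, 1, x)
    g = xbest[np.argmin(fbest)].copy()              # global best g*
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        u = theta*u + alpha*r1*(g - x) + beta*r2*(xbest - x)   # Eq. (3.6)
        x = x + u                                              # Eq. (3.7)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fbest
        xbest[improved], fbest[improved] = x[improved], fx[improved]
        g = xbest[np.argmin(fbest)].copy()
    return g, fbest.min()

rastrigin = lambda x: 10*len(x) + np.sum(x**2 - 10*np.cos(2*np.pi*x))
print(pso(rastrigin, dim=2))    # should approach (0,0) with value near 0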

3.6

. ,

(Yang, 2008).
(heuristic)
-- (trial-and-error).

.
(evolution strategy) (Rechenberg, 1971)
(differential evolution) (Storn & Price, 1997)

.
.

(ant colony) (Colorni .., 1991) (artificial bee colony)
(Karaboga, 2005).
(swarm intelligence)
.
(gravitational search) (Rashedi .., 2009),
.
(firefly) (Yang, 2008),
(cuckoo search method) (Yang & Deb, 2009),
(bat) (Yang, 2010) (biogeography-based optimization)
(Simon, 2008). ()
/
.

3.1) ()
00010110 00100101 ()
.
3.2)
10,
Pm= 0.1.
3.3) A chromosome consists of 3 genes of 4 bits each; each gene encodes a variable x_i in [0,10], i = 1,2,3. Decode the chromosome 010100110010.
3.4) In a particle swarm optimization, a particle currently has position x = (12.3, 2.3, 3.1)^T and velocity u = (3, 7.1, 2.8)^T. Given th = 1, a = 2, b = 2, compute the particle's position at iteration (t+1), assuming personal best x* = (10, 3, 4)^T and appropriate values for the remaining quantities.

3.5)
perceptron ,
, ;
3.6) Global Optimization Toolbox MATLAB
Schwefel (p=3).

f(x) = SUM_{i=1}^{p} ( SUM_{j=1}^{i} x_j )^2,  x_i in [-65.536, 65.536].

Does the algorithm locate the global minimum x* = (0,...,0)^T with f(x*) = 0?


3.7) Global Optimization Toolbox MATLAB
Rastrigin (p=5).

f(x) = 10p + SUM_{i=1}^{p} ( x_i^2 - 10 cos(2 pi x_i) ),  x_i in [-2.048, 2.048].

Does the algorithm locate the global minimum x* = (0,0,0,0,0)^T with f(x*) = 0?

3.8) MATLAB,
.
3.9) MATLAB, .
3.10) MATLAB,
perceptron .

(sin(x)).


Bertsekas, D.P., Nedi, A. & Ozdaglar, A.E. (2003). Convex Analysis and Optimization. Belmont, MA:
Athena Scientific.
Boyd, S. & Vandenberghe, L. (2004). Convex Optimization. UK: Cambridge University Press.
Coleman, T.F. & Li, Y. (1994). On the convergence of interior-reflective Newton methods for nonlinear
minimization subject to bounds. Mathematical Programming, 67(1-3), 189-224.
Coley, D.A. (1999). An Introduction on Genetic Algorithms for Scientists and Engineers. Singapore: World
Scientific Publishing.
Colorni, A., Dorigo, M. & Maniezzo, V. (1991). Distributed Optimization by Ant Colonies. Actes de la
première conférence européenne sur la vie artificielle, Paris, France, Elsevier Publishing, 134-142.
Darwin, C. (2014) On the Origin of Species by Means of Natural Selection. Tustin, CA: Xist Publishing.
Deb, K. (2001). Multi-Objective Optimization using Evolutionary Algorithms. UK: John Wiley & Sons, 2001.
Eberhart, R.C. & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the 6th
International Symposium on Micro Machine and Human Science, 39-43, Nagoya - Japan.
Holland, J.H. (1992). Adaptation in Natural and Artificial Systems. USA: MIT Press.
Karaboga, D. (2005). An Idea Based On Honey Bee Swarm for Numerical Optimization. Technical Report-
TR06, Erciyes University, Engineering Faculty, Computer Engineering Department.
Mitchell, M. (1998). An Introduction to Genetic Algorithms. USA: MIT Press.
Møller, M.F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks,
6(4), 525-533.
Parsopoulos, K.E. & Vrahatis, M.N. (2002). Recent approaches to global optimization problems through
Particle Swarm Optimization. Natural Computing, 1(2-3), 235-306.
Rashedi, E., Nezamabadi-pour, H. & Saryazdi, S. (2009). GSA: a gravitational search algorithm. Information
Sciences, 179(13), 2232-2248.
Rechenberg, I. (1971). Evolutionsstrategie Optimierung technischer Systeme nach Prinzipien der
biologischen Evolution. Phd Thesis.
Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12,
702-713.
Storn, R., & Price, K. (1997). Differential evolution - a simple and efficient heuristic for global optimization
over continuous spaces. Journal of Global Optimization, 11, 341-359.
Toffolo, A. & Benini, E. (2003). Genetic diversity as an objective in multi-objective evolutionary algorithms.
Evolutionary Computation, 11(2), 151-167.
del Valle, Y., Venayagamoorthy, G.K., Mohagheghi, S., Hernandez, J.-C. & Harley, R.G. (2008). Particle
Swarm Optimization: basic concepts, variants and applications in power systems. IEEE Transactions on
Evolutionary Computation, 12(2), 171-195.
Yang, X.S. (2008). Nature-Inspired Metaheuristic Algorithms. Luniver Press.
Yang, X.S. (2010). A new metaheuristic Bat-inspired algorithm. In: J. R. Gonzalez et al., eds., Nature
Inspired Cooperative Strategies for Optimization (Studies in Computational Intelligence 284: 65-74).
Berlin, Germany: Springer.
Yang, X.S. & Deb, S. (2009). Cuckoo search via Lévy flights. In World Congress on Nature & Biologically
Inspired Computing (NaBIC, 2009), 210-214.

-:
-
, -
. ,

. -
(. ), .
(Duda .., 2001), ,
. ,
, (natural computing),

(Pal & Meher, 2013).

.


Duda, R.O., Hart, P.E. & Stork, D.G. (2001). Pattern Classification (2nd ed.). New York, N.Y.: John
Wiley & Sons.
Pal, S.K. & Meher, S.K. (2013). Natural computing: A problem solving paradigm with granular
information processing. Applied Soft Computing, 13(9), 3944-3955.

4:

, .

4.1 -
, , (
) f: RNRM ,
. , ,
, ,
. ,
, .

,
. , - () (neuro-
fuzzy systems (NFSs)) (Mitra & Hayashi, 2000).
- () (adaptive neuro-fuzzy inference
systems (ANFISs)) (Jang .., 1997 Kaburlasos & Kehagias, 2014),
, 4.1.
, 4.1
Mamdani (. 2.5.3).
(x1,f(x1)),,(xn,f(xn)) f: RNRM
f : RNRM f,
x0RN, x0xi, i{1,,n}, f ( x ) ,
0

, f(x0), f .
: (1)
(2) .
/ /- .
,
,
.

4.1 - Mamdani.
.

, Sugeno,
(..
) .

.

4.2

() (radial basis function (RBF) network).
,
.
,
. 4.2.

Figure 4.2: The architecture of a radial basis function (RBF) network.

Let x = (x1, x2, x3,..., xN) denote the input vector and F = (f1, f2, f3,..., fK) the radial basis functions of the hidden layer. The i-th output of the network is computed as:

y_i = SUM_{j=1}^{K} w_ji f_j(||x - c_j||) + b_i  (4.1)

In Eq. (4.1), w_ji is the weight connecting hidden neuron j with output i, f_j is the radial basis function of hidden neuron j, c_j is the center of hidden neuron j, and b_i is the bias of output i.
A popular radial basis function is the Gaussian

f(||x - c||) = exp( -||x - c||^2 / (2 s^2) )  (4.2)

where s is the spread parameter, x is the input vector and c is the center of the neuron; the activation decreases as the distance of x from c increases. The norm ||.|| in Eq. (4.2) is typically the Euclidean one, although other distances (e.g. Minkowski, Mahalanobis) may also be employed.
,

(Looney, 1997).
.
, (universal approximator) (Poggio
& Girosi, 1990 Park & Sandberg, 1991), . ,
.

Cover
(1965): -
.
(. )
- .

4.2.1

. ,
.

C c1, c2 ,..., cK


M
W w1j, w 2 j,..., w Kj . , ,
j1

C c1, c2 ,..., cK : (1)


, (2)
, .. - (k-means algorithm),
(3) ,
.. . ,

M
W w1j, w 2 j,..., w Kj (Looney, 1997 Haykin, 1999).
j1
- (pseudo-inverse) (Broomhead & Lowe, 1988),
.
The output weights are computed as:

w = F^+ t  (4.3)

where t is the vector of target outputs and F^+ is the pseudo-inverse of the matrix F = [f_ji] of hidden-layer activations.
,
.. (singular value decomposition). ,

, .

4.3
, , -
,
() (k nearest neighbors (kNN) model).

.
.
Let the training set consist of N pairs {(x_j, t_j)}_{j=1}^{N}, where x_j in R^n is an input vector and t_j in {-1, +1} is the corresponding class label.
, .
.
To classify a new input x0, the distances of x0 from all training vectors are computed using a distance function dist(.,.):

d_i = dist(x0, x_i),  i = 1, 2,..., N  (4.4)

x 0
x 0 .
(Wikipedia, 2015) (. )
(. )
=3 =5 4.3. , ,
=3,
. , =5
.

Figure 4.3: Classification of a test pattern by the kNN rule for k = 3 (inner circle) and k = 5 (outer circle); the majority class among the k nearest neighbors may differ for different values of k.


.
(
). ,


. - ,
.
(Paredes & Vidal,
2006), :

dist_Euclidean(x0, x_j) = sqrt( SUM_{i=1}^{n} (x0_i - xj_i)^2 )  (4.5)

dist_CityBlock(x0, x_j) = SUM_{i=1}^{n} |x0_i - xj_i|  (4.6)

dist_Chebyshev(x0, x_j) = max_i |x0_i - xj_i|  (4.7)
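A small Python sketch (ours) of the kNN rule with the three distances of Eqs. (4.5)-(4.7); the toy data are hypothetical.

import numpy as np
from collections import Counter

def knn_classify(x0, X, t, k=3, dist="euclidean"):
    """Classify x0 by majority vote among its k nearest neighbors, Eq. (4.4)."""
    diff = X - x0
    if dist == "euclidean":                  # Eq. (4.5)
        d = np.sqrt((diff ** 2).sum(axis=1))
    elif dist == "cityblock":                # Eq. (4.6)
        d = np.abs(diff).sum(axis=1)
    else:                                    # Chebyshev, Eq. (4.7)
        d = np.abs(diff).max(axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(t[i] for i in nearest).most_common(1)[0][0]

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
t = np.array([-1, -1, -1, 1, 1, 1])
print(knn_classify(np.array([0.5, 0.5]), X, t, k=3))   # -> -1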

(Theodoridis &
Koutroumbas, 2003) :

P_B <= P <= P_B + sqrt(1/(k e))  (4.8)

where P_B is the Bayesian error probability and P is the asymptotic error probability of the k nearest neighbors classifier.
, .

4.4
, ..
, .., ,
.
.
,
.

() (support vector
machines (SVMs)) o Vapnik (Boser .., 1992 Cortes & Vapnik, 1995
Vapnik, 1995).

4.4.1
-
, (Haykin,
1999), .
,
.

(support vectors),
.

, 4.4().
+ .
. ,
4.4(), (1, 2, 3, ...)
. (*) ,
, 4.4().

Figure 4.4: (a) Several hyperplanes (e1, e2, e3, ...) separate the two classes. (b) The optimal separating hyperplane e* maximizes the margin of separation.

.
- (Suykens
.., 2002) .

4.4.2
. ,
xk , t k k 1 , t k 1, 1
N
xk n

k .
: - .

.
,
the separating hyperplane is described by the equation:

w^T x + b = 0  (4.9)

where x is an input vector, w is the weight vector and b is the bias. For a hyperplane separating the two classes with margin, the training data satisfy:

w^T x_k + b >= +1, for t_k = +1
w^T x_k + b <= -1, for t_k = -1  (4.10)

or, equivalently:

t_k ( w^T x_k + b ) >= 1,  k = 1, 2, 3,..., N  (4.11)

The optimal hyperplane maximizes the margin of separation between the two classes, which equals 2/||w||; maximizing the margin is therefore equivalent to minimizing ||w|| subject to the constraints of Eq. (4.11).


.
Specifically, the optimal parameters w*, b* minimize the cost function J(w) = (1/2)w^T w, i.e. they solve the constrained problem:

min_{w,b} J(w) = (1/2) w^T w,
subject to t_k ( w^T x_k + b ) >= 1,  k = 1, 2, 3,..., N  (4.12)

, (primal),
w (Suykens .., 2002).
The problem is handled with the method of Lagrange multipliers, by forming the Lagrangian function:

L(w, b, l) = (1/2) w^T w - SUM_{k=1}^{N} l_k [ t_k ( w^T x_k + b ) - 1 ]  (4.13)

where l_k >= 0, k = 1,..., N, are the Lagrange multipliers.

The solution is a saddle point of L(w, b, l): the Lagrangian is minimized with respect to w and b and maximized with respect to l (Haykin, 1999; Suykens et al., 2002), i.e.:

max_l min_{w,b} L(w, b, l)  (4.14)

Setting the partial derivatives of Eq. (4.13) with respect to w and b equal to zero:

dL(w,b,l)/dw = 0,  dL(w,b,l)/db = 0  (4.15)

Eq. (4.15) yields:

w = SUM_{k=1}^{N} l_k t_k x_k,  SUM_{k=1}^{N} l_k t_k = 0  (4.16)

Substituting w back into Eq. (4.13) leads to the dual problem, which involves only the multipliers l:

max_l Q(l) = SUM_{k=1}^{N} l_k - (1/2) SUM_{l=1}^{N} SUM_{m=1}^{N} l_l l_m t_l t_m x_l^T x_m,
subject to SUM_{k=1}^{N} l_k t_k = 0,  l_k >= 0,  k = 1,..., N.  (4.17)

Eq. (4.17) is a quadratic programming problem, solved with respect to the Lagrange multipliers l_k. Given the optimal multipliers l_k*, the optimal weight vector w* follows from Eq. (4.16) and the optimal bias b* from Eq. (4.10).

. -
.
, ,
- , .
,
.
(slack
parameters) (Cortes & Vapnik, 1995),
. :


t_k ( w^T x_k + b ) >= 1 - x_k,  k = 1, 2, 3,..., N  (4.18)

where x_k >= 0 are the slack variables, and the optimization problem of Eq. (4.12) becomes:

min_{w,b,x} J(w, x) = (1/2) w^T w + c SUM_{k=1}^{N} x_k,
subject to t_k ( w^T x_k + b ) >= 1 - x_k,  x_k >= 0,  k = 1, 2, 3,..., N  (4.19)

where c is a user-defined parameter that penalizes constraint violations.

The corresponding Lagrangian function is:

L(w, b, x, l, v) = (1/2) w^T w + c SUM_{k=1}^{N} x_k - SUM_{k=1}^{N} l_k [ t_k ( w^T x_k + b ) - 1 + x_k ] - SUM_{k=1}^{N} v_k x_k  (4.20)

where v_k >= 0, k = 1,..., N, are additional Lagrange multipliers associated with the constraints x_k >= 0.
:

max_{l,v} min_{w,b,x} L(w, b, x, l, v)  (4.21)

which leads to the dual problem:

max_l Q(l) = SUM_{k=1}^{N} l_k - (1/2) SUM_{l=1}^{N} SUM_{m=1}^{N} l_l l_m t_l t_m x_l^T x_m,
subject to SUM_{k=1}^{N} l_k t_k = 0,  0 <= l_k <= c,  k = 1,..., N  (4.22)

The only difference from Eq. (4.17) is that the multipliers l_k are now upper-bounded by c. The optimal w* is again given by Eq. (4.16), whereas b* is computed from support vectors with l_k < c and x_k = 0 (Haykin, 1999).

4.4.3 -
-
Vapnik (1995). -
Cover (1965) Cover :

-
.

( )
( ) - x .
(Cristianini & Shawe-Taylor, 2000):
SUM_{i=1}^{m} w_i f_i(x) + b = 0  (4.23)

where f_i(x), i = 1,..., m, are nonlinear functions mapping the n-dimensional input x to an m-dimensional feature space. Setting f_0(x) = 1 with w_0 = b and f(x) = (f_0(x), f_1(x),..., f_m(x))^T, Eq. (4.23) is written compactly as:

SUM_{i=0}^{m} w_i f_i(x) = w^T f(x) = 0  (4.24)

In the feature space, the optimal weight vector is expressed via the Lagrange multipliers as:

w = SUM_{k=1}^{N} l_k t_k f(x_k)  (4.25)

Combining Eq. (4.24) with Eq. (4.25), the decision boundary becomes:

SUM_{k=1}^{N} l_k t_k f^T(x_k) f(x) = 0  (4.26)

Eq. (4.26) involves the feature vectors only through inner products f^T(x_k) f(x). Such an inner product defines a kernel function (Herbrich, 2002; Schölkopf & Smola, 2002):

K(x_k, x) = f^T(x_k) f(x)  (4.27)

which, by Mercer's theorem (Mercer, 1909), can be expanded as:

K(x_k, x) = SUM_{i=0}^{m} f_i(x_k) f_i(x),  k = 1, 2,..., N  (4.28)

This substitution is known as the kernel trick. Using Eq. (4.28), the decision boundary of Eq. (4.26) becomes:

SUM_{k=1}^{N} l_k t_k K(x_k, x) = 0  (4.29)

and the dual problem becomes:

max_l Q(l) = SUM_{k=1}^{N} l_k - (1/2) SUM_{l=1}^{N} SUM_{m=1}^{N} l_l l_m t_l t_m K(x_l, x_m),
subject to SUM_{k=1}^{N} l_k t_k = 0,  l_k >= 0,  k = 1,..., N.  (4.30)

Given the optimal Lagrange multipliers l_k*, the optimal weight vector is:

w* = SUM_{k=1}^{N} l_k* t_k f(x_k)  (4.31)

from which w* and b* determine the classifier.

. o
Mercer (1909)
. :


K(x, x_k) = ( x_k^T x )^d - polynomial kernel,

K(x, x_k) = exp( -(1/(2 s^2)) ||x - x_k||^2 ) - radial basis function kernel,  (4.32)

K(x, x_k) = tanh( b1 x_k^T x + b2 ) - sigmoid (perceptron-like) kernel.

In Eq. (4.32), d is the degree of the polynomial kernel, s is the spread of the radial basis function kernel, and b1, b2 are parameters of the sigmoid kernel; the latter satisfies Mercer's conditions only for certain values of b1, b2.

4.4.4 VC
(. ) ,
Vapnik-Chervonenkis (VC) (VC dimension (VCdim)). VC
, (Vapnik, 1995),
. VC .
5.1.3. ,
X . c , .
cX. C
. S . DX
. C C: DX 2 DX
C(S)= {cS| cC}. C(S) (dichotomy) S.
|C(S)| = 2|S|, S (shattered) C.
, S C, C
S.

VC (cardinality) SX
C. VC C d,
d. ,
VC C d, d+1
(Kearns & Vazirani, 1994). .


X , C
() [a,b]. C
2 , 3 ,
4.5. , VC
VC = 2.

Figure 4.5: A labeling (+ - +) of three collinear points that cannot be produced by the concept class C of closed intervals [a,b] on the real line.


X Rd, C
, - Rd.
, d= 2 (. ) -,
4.6(). , ,
-, 4.6().
,
-, 4.6 () (). ,
VC VC = 3.
Rd, VC= d+1. ,
Perceptron d 1 VC
d+1.

Figure 4.6: (a) Three points in the plane shattered by the concept class C of half-planes. (b), (c) Four points in the plane cannot be shattered by half-planes: some labelings are not linearly separable.


X , C
.

, 4.7().
.
,
, 4.7 (). ,
(-),
(+). .
VC VC = 4.

Figure 4.7: (a) Four points in the plane shattered by the concept class of axis-aligned rectangles. (b) A labeling of five points (four positive, one negative interior point) that no axis-aligned rectangle can produce.

The VC dimension h bounds the generalization ability of a classifier; with probability at least 1 - d it holds:

P <= P_emp + sqrt( [ h(log(2N/h) + 1) - log(d/4) ] / N )

where h is the VC dimension, 0 <= d <= 1, and N is the number of training examples (the bound is meaningful for h << N).

4.5
,
,
(Domingos,1996 Jain .., 1999 Kolodner, 1993 Mitchell, 1997 Mitra .., 2002 Pearl, 2000).
, ,
, - . ,
, . ,
, ,
.

. , ,
. ,
() (extreme
learning machines (ELMs)) (Cambria, 2013 Huang .., 2006).
,

,
.
(Huang, 2003),
.
,
, (Hirose, 2003),
.
, ,
.
, ,
(rough sets) (Pawlak, 1991), (Atanassov,
2012) .. .


4.1)
. , Minkowski,
Mahalanobis, .
MATLAB.
4.2) *
.
().

[Scatter plot of the training points for the exercise: three points marked * at heights 1, 3 and 4 over the horizontal range 2-10.]

4.3) X , C
-. VC .
4.4) .
.
4.5) MATLAB .
Iris dataset.
4.6) .

(>2).
4.7) MATLAB
.


Atanassov, .. (2012). On Intuitionistic Fuzzy Sets Theory. Berlin, Germany: Springer.
Boser, B.E., Guyon, I.M. & Vapnik, V.N. (1992). A training algorithm for optimal margin classifiers. In 5th
Annual Workshop on Computational Learning Theory (COLT'92), 144-152.
Broomhead, D.S. & Lowe, D. (1988). Multivariable functional interpolation and adaptive networks. Complex
Systems, 2, 321355.
Cambria, E. (2013). Extreme learning machines. IEEE Intell. Syst., 28(6), 30-59.
Cortes, C. & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273-297.
Cover, T.M. (1965). Geometrical and statistical properties of systems of linear inequalities with applications
in pattern recognition. IEEE Transactions on Electronic Computers, EC-14, 326-334.
Cristianini, N. & Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-
based Learning Methods. Cambridge, UK: Cambridge University Press.
Domingos, P. (1996). Unifying instance-based and rule-based induction. Machine Learning, 24(2), 141-168.
Haykin, S. (1999). Neural Networks - A Comprehensive Foundation. 2nd Ed., New Jersey, USA: Prentice-
Hall.
Herbrich, R. (2002). Learning Kernel Classifiers: Theory and Algorithms. Cambridge, MA: The MIT Press.
Hirose A. (Ed.). (2003). Complex-Valued Neural Networks: Theories and Applications (Series on Innovative
Intelligence, 5). Singapore: World Scientific.
Huang, G.-B. (2003). Learning capability and storage capacity of two-hidden-layer feedforward networks.
IEEE Transactions on Neural Networks, 14(2), 274-281.
Huang, G.-B., Chen L. & Siew, C.-K. (2006). Universal approximation using incremental constructive
feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4), 879-
892.
Jain, A.K., Murty, M.N. & Flynn, P.J. (1999). Data clustering: a review. ACM Computing Surveys, 31(3),
264-323.
Jang, J.S.R., Sun, C.T. & Mizutani, E. (1997). Neuro-fuzzy and Soft Computing A Computational Approach
to Learning and Machine Intelligence. Upper Saddle River, NJ: Prentice Hall.
Kaburlasos, V.G. & Kehagias, . (2014). Fuzzy inference system (FIS) extensions based on lattice theory.
IEEE Transactions on Fuzzy Systems, 22(3), 531-546.
Kearns, M.J. & Vazirani, U.V. (1994). An Introduction to Computational Learning Theory. Cambridge, MA:
The MIT Press.
Kolodner, J. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
Looney, C.G. (1997). Pattern Recognition Using Neural Networks. New York, USA: Oxford University
Press.
Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral
equations. Philos. Trans. Roy. Soc. London A, 209, 415-446.
Mitchell, T.M. (1997). Machine Learning. New York, NY: McGraw-Hill.
Mitra, S. & Hayashi, Y. (2000). Neuro-fuzzy rule generation: survey in soft computing framework. IEEE
Transactions on Neural Networks, 11(3), 748-768.
Mitra, S., Pal, S.K. & Mitra, P. (2002). Data mining in soft computing framework: A survey. IEEE
Transactions on Neural Networks, 13(1), 3-14.
Paredes, R. & Vidal, E. (2006). Learning weighted metrics to minimize nearest-neighbor classification error.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(7), 1100-1110.
Park, J. & Sandberg, I.W. (1991). Universal approximation using radial-basis-function networks. Neural
Computation, 3, 246-257.
Pawlak, Z. (1991). Rough Sets: Theoretical Aspects of Reasoning About Data. Boston, MA: Kluwer.
Pearl, J. (2000). Causality. Cambridge, UK: Cambridge University Press.
Poggio, T. & Girosi, F. (1990). Networks for approximation and learning. Proc. IEEE, 78(9), 1484-1497.
Schölkopf, B. & Smola, A.J. (2002). Learning with Kernels - Support Vector Machines, Regularization,
Optimization, and Beyond. Cambridge, MA: The MIT Press.
Suykens, J.A.K., Gestel, T.V., De Brabanter, J., De Moor, B. & Vandewalle, J. (2002). Least Squares Support
Vector Machines. Singapore: World Scientific Publishing.

Theodoridis, S. & Koutroumbas, K. (2003). Pattern Recognition (2nd ed.). Amsterdam, The Netherlands:
Academic Press - Elsevier.
Vapnik, V. (1995). The Nature of Statistical Learning Theory. New York, USA: Springer.
Wikipedia, (2015). k-nearest neighbors algorithm. https://en.wikipedia.org/wiki/K-
nearest_neighbors_algorithm.

5: , ,
, ,
. , (Jain
.., 2000 Vapnik, 1999). ,
/ (Kuncheva, 1996),
.

5.1
(boosting) (machine learning) ,
(weak classifiers/learners)
(strong classifier/learner) (Freund & Schapire, 1999 Zhou, 2012),
. ,
, ,
,
.
.
- (meta-algorithm).
,
(ensemble of classifiers)
(committee of experts) (Kittler .., 1998 Kuncheva, 2004 Opitz & Maclin, 1999).
(Schapire, 1990):
;
, (bias),
(variance) .

5.1.1 Adaboost
Adaboost (Adaptive boosting) (Freund & Schapire,
1997), .

Y={0,1}. X .
h: X[0,1], ( h)
. h(x) x 1, 1-h(x)
x 0.

Algorithm AdaBoost

Input: a sequence of N labeled examples (x1, y1),...,(xN, yN) with X = {x1,..., xN}; a distribution D over the N examples; a weak learning algorithm WeakLearn; the number of iterations T.

Initialize the weight vector w_i^1 = D(i) for i = 1,..., N.

For t = 1, 2,..., T:
1. Set p^t = w^t / SUM_{i=1}^{N} w_i^t.
2. Call WeakLearn, providing it with the distribution p^t; get back a hypothesis h_t: X -> [0,1].
3. Compute the error of h_t: e_t = SUM_{i=1}^{N} p_i^t |h_t(x_i) - y_i|.
4. Set b_t = e_t / (1 - e_t).
5. Update the weights: w_i^{t+1} = w_i^t b_t^{1 - |h_t(x_i) - y_i|}.

Output the final hypothesis:

h_f(x) = 1, if SUM_{t=1}^{T} (log(1/b_t)) h_t(x) >= (1/2) SUM_{t=1}^{T} log(1/b_t); otherwise h_f(x) = 0.
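A Python sketch (ours) of the algorithm above; it uses decision stumps as WeakLearn, a common choice that the algorithm text leaves unspecified, and a tiny hypothetical data set.

import numpy as np

def adaboost(X, y, T=30):
    """AdaBoost (as listed above) with decision stumps as the weak learner."""
    N = len(y)
    w = np.full(N, 1.0 / N)                     # w_i^1 = D(i), uniform here
    stumps, betas = [], []
    for _ in range(T):
        p = w / w.sum()                         # step 1
        best = None
        for f in range(X.shape[1]):             # step 2: fit a stump on p
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    h = ((sign * (X[:, f] - thr)) >= 0).astype(float)
                    err = np.sum(p * np.abs(h - y))     # step 3
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, h)
    # (loop body continues)
        err, f, thr, sign, h = best
        beta = max(err, 1e-10) / (1.0 - err)    # step 4
        w = w * beta ** (1.0 - np.abs(h - y))   # step 5
        stumps.append((f, thr, sign)); betas.append(beta)

    def predict(x):
        s = sum(np.log(1/b) * float((sg*(x[f]-th)) >= 0)
                for (f, th, sg), b in zip(stumps, betas))
        return 1 if s >= 0.5 * sum(np.log(1/b) for b in betas) else 0
    return predict

X = np.array([[0.1], [0.2], [0.4], [0.6], [0.8], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = adaboost(X, y)
print([clf(x) for x in X])     # -> [0, 0, 0, 1, 1, 1]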

,
.
. ,

.
-
,
,
.
, .
, .
, ,
. ,
,
,
(leveraging algorithms).

5.1.2

. , ,
, .
,
,
, ..
(x1,f(x1)),,(xn,f(xn)) f(xi), i{1,,n}.
-,
.
- (active learning)

X , .
, U,
X. ,
, q U X,
q .
,
(pixels) (Tuia .., 2011).
(bootstrap) ,
- (bootstrap aggregating ,
, bagging) (Breiman, 1996). ,
, ,
/,
(overfitting). , ,
(Breiman .., 1998), .
D n, -
m Di, i{1,,m} n ,
.
A separate model is trained on each bootstrap sample Di, i in {1,...,m}. For n' = n and large n, each Di is expected to contain a fraction 1 - (1/e) ~ 63.2% of the distinct examples of D, the remainder being duplicates. The m trained models are then combined, typically by (majority) voting for classification or by averaging for regression.
(decision trees)
(random forests) (Breiman, 2001).
(Ho, 1998).
.

5.1.3
() (probably approximately correct (PAC)) (Valiant,
1984) .
(Kearns & Vazirani, 1994).
R 5.1. p
D. R(.),
, R(p) = 1 p R, R(p) = 0.
(p, R(p))
R R,
.

Figure 5.1: An axis-aligned rectangle R in the plane; points inside R are labeled positive (+) and points outside R negative (-).

R
, 5.2 ( R = ).
R R, . R R. () R
R p D
RR = (R-R)(R-R). , R, RR,
RR = R-R.
R
0< 0<<1/2 m (p,
R(p)), 1- R R .
R
R R, 5.3.
m (p, R(p)), T
/4. m.

Figure 5.2: The tightest-fit rectangle R' (contained in R) that includes all positive (+) training examples.

each of probability e/4. Consider the top strip T: if R' meets all four strips, the error region R - R' is contained in their union and has probability at most e. The probability that a random point p misses a given strip is 1 - e/4; hence the probability that all m training points miss that strip is (1 - e/4)^m and, by the union bound, the probability that some strip is missed by all m points is at most 4(1 - e/4)^m. Using the inequality (1 - x) <= e^{-x}, this is at most 4e^{-em/4}. Requiring 4e^{-em/4} <= d and solving for m gives m >= (4/e)ln(4/d). In conclusion, for any distribution D, a sample of m >= (4/e)ln(4/d) points suffices for the tightest-fit rectangle R' to have, with probability at least 1 - d, error (with respect to D) at most e.

Figure 5.3: The (shaded) top strip T of R has probability e/4. If some of the m training samples (p, R(p)) fall inside T, then R' reaches into T and the error contributed by T is less than e/4.

,
4.4.4. , X
( X
x-y). c , .
cX. c
x-y. C
. , C
x-y. (c, D)
(oracle), (x, c(x)), xX
D, c(x)=1 xc, c(x)=0.
L C , L
: cC, D , (0,0.5)
(0,0.5), L (c, D), 1-
hC .

. , ,
Boolean (Kearns .., 1994).

5.2

, .
(unsupervised learning).

:
1) ,
, .. .


- .
2) - ,
, ..
(data mining).
3) (features , , attributes)
.

5.2.1
Pearson (1894),
Gauss.
. :

1) c .
2) P(wj) , j=1,c, .
3) p(x|wj,j) , j=1,c,
.
4) c 1,,c .
5) .

: , wj
P(wj) , x p(x/wj,j).
, :

p(x|th) = SUM_{j=1}^{c} p(x|wj, th_j) P(wj),  (5.1)

= (1,,c)t. (mixture
density), p(x/wj,j) ()
(component (probability) densities), P(wj)
(mixing parameters). .
x. ,
, (a posteriori).

.
x
p(x|) .
p(x|), .
p(x|),
. .
p(x|) (identifiable),
x , p(x|) p(x|). p(x|) ,
, .

1
:
1
1 x 1 ( 1 2 ) if x 1
P(x|)= 1 (1-1)1-x + 2x (1-2)1-x = 2
2 2 1
1 ( 1 2 ) if x 0
2

, , P(x=1|) = 0.6, P(x=0|) = 0.4. ,


P(x|) ,
=[1, 2]. 1+2=1.2.
.

5.2.2
D={x1,,xn} n -
:
c
p(x|)= p(x|wj,j)P(wj),
j 1

.
(Duda .., 2001) :
p(D|th) = PROD_{k=1}^{n} p(x_k|th)

,
p(D|). , p(D|) ,
. l
i l l . :
l = ln p(D|th) = SUM_{k=1}^{n} ln p(x_k|th)

and its gradient with respect to th_i is:

grad_{thi} l = SUM_{k=1}^{n} (1/p(x_k|th)) grad_{thi} [ SUM_{j=1}^{c} p(x_k|wj, th_j) P(wj) ]

:

Assume that:

1) the parameters th_i and th_j are functionally independent for i != j, and
2) the posterior probability is P(wi|x_k, th) = p(x_k|wi, th_i)P(wi) / p(x_k|th).

Then:

grad_{thi} l = SUM_{k=1}^{n} (1/p(x_k|th)) grad_{thi} [ p(x_k|wi, th_i)P(wi) ] = SUM_{k=1}^{n} P(wi|x_k, th) grad_{thi} ln p(x_k|th_i).

Setting grad_{thi} l = 0 at a maximum of l, the maximum-likelihood estimate must satisfy:

SUM_{k=1}^{n} P(wi|x_k, th) grad_{thi} ln p(x_k|th_i) = 0,  i = 1,..., c.

i
.
,
P(wj) .
() p(D|) P(wi),
:
c
P(wi) 0, i=1,c P(wi) = 1.
i 1

P ( w j ) P(wi), i
i. ,
P ( wi ) 0 i, P ( wi ) i
.(5.2) .(5.3):

P^(wi) = (1/n) SUM_{k=1}^{n} P^(wi|x_k, th^)  (5.2)

SUM_{k=1}^{n} P^(wi|x_k, th^) grad_{thi} ln p(x_k|th^_i) = 0  (5.3)

where

P^(wi|x_k, th^) = p(x_k|wi, th^_i)P^(wi) / SUM_{j=1}^{c} p(x_k|wj, th^_j)P^(wj)  (5.4)

. .(5.2)

,
. .(5.4) Bayes.
wi
i , .

5.2.3
,
(normal), . p(x/wj,j) (i, i). 5.1
, (. )
(. ?).

1 . 2 , 3
.

Case   m_i   S_i     P(wi)   c
1      ?     known   known   known
2      ?     ?       ?       known
3      ?     ?       ?       ?

Table 5.1: The three cases of unknown (?) parameters considered for mixtures of normal densities.

1:

Here the only unknown parameters are the mean vectors, th_i = m_i. Then:

ln p(x|wi, m_i) = -ln[ (2 pi)^{d/2} |S_i|^{1/2} ] - (1/2)(x - m_i)^T S_i^{-1} (x - m_i),

hence

grad_{mi} ln p(x|wi, m_i) = S_i^{-1}(x - m_i).

The maximum-likelihood estimate m^_i must therefore satisfy:

SUM_{k=1}^{n} P(wi|x_k, m^) S_i^{-1}(x_k - m^_i) = 0,  m^ = (m^_1,..., m^_c)^T.

Solving for m^_i:

m^_i = SUM_{k=1}^{n} P(wi|x_k, m^) x_k / SUM_{k=1}^{n} P(wi|x_k, m^)  (5.5)

i
. , xk
xk i. , P(wi|xk, )
1.0 0.0 , i
i.
i . ,
:

p(x k | wi , i ) P( wi )
P( wi | x k , )
p(x k | w j , j ) P (w j )
c
j 1

p(x/wj, i ) ( i , i), -
. , ,

.
i (0) ,
()
:

m^_i(j+1) = SUM_{k=1}^{n} P(wi|x_k, m^(j)) x_k / SUM_{k=1}^{n} P(wi|x_k, m^(j))  (5.6)

.(5.6)
. , ,
.

2
,
:

p(x|m1, m2) = (1/3)(1/sqrt(2 pi)) exp( -(1/2)(x - m1)^2 ) + (2/3)(1/sqrt(2 pi)) exp( -(1/2)(x - m2)^2 )

with mixing weights 1/3 for class w1 and 2/3 for class w2. With m1 = -2 and m2 = 2, a set of 25 samples was drawn, listed in Table 5.2. Maximizing the log-likelihood

l(m1, m2) = SUM_{k=1}^{n} ln p(x_k|m1, m2)

with respect to m1 and m2 yields the global maximum at m^1 = -2.130, m^2 = 1.668, close to the true values m1 = -2 and m2 = 2. However, l also has a local maximum at m^1 = 2.085, m^2 = -1.257, where the roles of m1 and m2 are approximately interchanged; a gradient-based procedure may therefore converge to either solution, depending on its starting point.

k = 1-5:   0.608, -1.590, 0.235, 3.949, -2.249
k = 6-10:  2.704, -2.473, 0.672, 0.262, 1.072
k = 11-15: -1.773, 0.537, 3.240, 2.400, -2.499
k = 16-20: 2.608, -3.458, 0.257, 2.569, 1.415
k = 21-25: 1.410, -2.653, 1.396, 3.286, -0.712

Table 5.2: The 25 samples used in the example.

E.(5.6) E.(5.5),
1 (0) 2 (0). 1 (0)= 2 (0)
,
P(i|xk, 1 (0), 2 (0)) = P(i). E.(5.6)
1 (0) 2 (0) .

2:

i, i, P(wi) ,
, - ,
.

Example 3
Consider the one-dimensional mixture

p(x|m, s) = (1/(2*sqrt(2*pi)*s)) exp(-(1/2)((x-m)/s)^2) + (1/(2*sqrt(2*pi))) exp(-(1/2)x^2),

i.e. an equal mixture of N(m, s^2) and N(0, 1). Let n samples be drawn and set m = x1. Then:

p(x1|m, s) = 1/(2*sqrt(2*pi)*s) + (1/(2*sqrt(2*pi))) exp(-(1/2)x1^2).

For the remaining samples,

p(xk|m, s) >= (1/(2*sqrt(2*pi))) exp(-(1/2)xk^2),

hence

p(x1,...,xn|m, s) >= { 1/s + exp(-(1/2)x1^2) } (1/(2*sqrt(2*pi))^n) exp(-(1/2) SUM_{k=2}^{n} xk^2).

Letting s tend to 0, the right-hand side grows without bound: the likelihood is unbounded and the ML estimate is degenerate, 'explaining' x1 with an infinitely narrow spike.

In practice, nevertheless, meaningful solutions are obtained by seeking the largest finite local maximum of the likelihood. Proceeding as before, one differentiates the log-likelihood with respect to mi, Si and P(wi) and sets the derivatives to zero, which generalizes Eqs. (5.2)-(5.4). It is convenient to differentiate with respect to the elements of the inverse covariance Si^{-1}. Using

ln p(xk|wi, thi) = ln( |Si^{-1}|^{1/2} / (2*pi)^{d/2} ) - (1/2)(xk-mi)^t Si^{-1} (xk-mi),

GRAD_{mi} ln p(xk|wi, thi) = Si^{-1} (xk-mi),

and writing xp(k) for the p-th component of xk, mp(i) for the p-th component of mi, spq(i) for the pq-th element of Si and s^{pq}(i) for the pq-th element of Si^{-1}:

d ln p(xk|wi, thi) / d s^{pq}(i) = (1 - dpq/2) [ spq(i) - (xp(k) - mp(i))(xq(k) - mq(i)) ],

where dpq is the Kronecker delta.
Substituting into Eq. (5.3) and solving, the ML estimates P^(wi), mi^, Si^ must satisfy:

P^(wi) = (1/n) SUM_{k=1}^{n} P^(wi|xk, th^),    (5.7)

mi^ = SUM_{k=1}^{n} P^(wi|xk, th^) xk / SUM_{k=1}^{n} P^(wi|xk, th^),    (5.8)

Si^ = SUM_{k=1}^{n} P^(wi|xk, th^)(xk - mi^)(xk - mi^)^t / SUM_{k=1}^{n} P^(wi|xk, th^),    (5.9)

where

P^(wi|xk, th^) = p(xk|wi, thi^) P^(wi) / SUM_{j=1}^{c} p(xk|wj, thj^) P^(wj)
 = |Si^|^{-1/2} exp(-(1/2)(xk-mi^)^t (Si^)^{-1} (xk-mi^)) P^(wi) / SUM_{j=1}^{c} |Sj^|^{-1/2} exp(-(1/2)(xk-mj^)^t (Sj^)^{-1} (xk-mj^)) P^(wj).    (5.10)

As in Case 1, these are fixed-point conditions rather than explicit solutions: starting from initial estimates of mi^, Si^, P^(wi), one evaluates the posteriors P^(wi|xk, th^) by Eq. (5.10) and then updates mi^, Si^, P^(wi) by Eqs. (5.7)-(5.9), iterating until the estimates stabilize.

The above ML approach is computationally demanding, since each iteration evaluates all posteriors for all samples. A drastic simplification leads to the well-known k-means algorithm (Duda et al., 2001).
In Eq. (5.10), the posterior P^(wi|xk, th^) is largest for the class wi whose Mahalanobis distance (1/2)(xk-mi^)^t (Si^)^{-1} (xk-mi^) to xk is smallest. Approximating the Mahalanobis distance by the squared Euclidean distance ||xk - mi^||^2, let m be the index of the mean nearest to xk and approximate the posterior by the hard indicator

P^(wi|xk, m^) = { 1, if i = m;  0, otherwise }.

Then the iterative application of Eq. (5.6) reduces to the following procedure for computing the c means m1,...,mc from the n samples.

Algorithm 1: k-means clustering

1. begin initialize: n, c, m1, m2,..., mc.
2.   classify the n samples, each to its nearest mean mi.
3.   recompute each mean mi from the samples assigned to it.
4.   until no mean mi changes.
5.   return m1, m2,..., mc.
6. end
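The following is a minimal MATLAB sketch of Algorithm 1 (a function file; the data matrix X, with one sample per row, and the number of clusters c are its assumed inputs); it is one possible implementation, not the only one.

function mu = kmeans_sketch(X, c)
% Minimal k-means per Algorithm 1: X is n-by-d, c the number of clusters.
n  = size(X,1);
mu = X(randperm(n,c),:);                       % step 1: c random samples as means
for it = 1:100
    D = sum(X.^2,2) + sum(mu.^2,2)' - 2*X*mu'; % squared distances, n-by-c
    [~,lab] = min(D,[],2);                     % step 2: nearest-mean labels
    newmu = mu;
    for i = 1:c                                % step 3: recompute the means
        if any(lab==i), newmu(i,:) = mean(X(lab==i,:),1); end
    end
    if isequal(newmu,mu), break; end           % step 4: stop when nothing changes
    mu = newmu;
end
end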

The final means depend on the number of clusters c and on the initial estimates. The computational complexity of k-means is O(ndcT), where d is the dimension of the samples and T the number of iterations; in practice T grows only weakly with n and the algorithm terminates after few iterations. k-means is a stochastic hill-climbing procedure in the sense that its outcome depends on the initialization of m1, m2,..., mc, and it converges to a local optimum of the underlying criterion.

In k-means each sample is assigned to exactly one cluster at each step. A popular variant relaxes this to graded memberships: each sample xj belongs to every cluster i with a normalized membership value P^(wi|xj), interpreted as a fuzzy posterior. The fuzzy k-means algorithm (Duda et al., 2001) minimizes the criterion

Jfuz = SUM_{i=1}^{c} SUM_{j=1}^{n} [P^(wi|xj, th^)]^b ||xj - mi||^2,

where b > 1 is a free parameter that blends the memberships (the larger b, the softer the assignments).
The memberships are required to be normalized as probabilities:

SUM_{i=1}^{c} P^(wi|xj) = 1,  j = 1,...,n.    (5.11)

Setting the partial derivatives of (the Lagrangian of) Jfuz to zero,

dJfuz/dmi = 0 and dJfuz/dP^j = 0,

leads to the necessary conditions:

mi = SUM_{j=1}^{n} [P^(wi|xj)]^b xj / SUM_{j=1}^{n} [P^(wi|xj)]^b,    (5.12)

P^(wi|xj) = (1/dij)^{1/(b-1)} / SUM_{r=1}^{c} (1/drj)^{1/(b-1)},  where dij = ||xj - mi||^2.    (5.13)

Eqs. (5.12) and (5.13) are coupled, and the algorithm iterates them until convergence.

Algorithm 2: fuzzy k-means clustering

1. begin initialize: n, c, b, m1,..., mc, and P^(wi|xj), i = 1,...,c; j = 1,...,n.
2.   normalize the memberships P^(wi|xj) by Eq. (5.11).
3.   recompute the means mi by Eq. (5.12).
4.   recompute the memberships P^(wi|xj) by Eq. (5.13).
5.   until the changes in mi and P^(wi|xj) are negligible.
6.   return m1, m2,..., mc.
7. end
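A minimal MATLAB sketch of Algorithm 2 follows (assumed inputs: data matrix X, number of clusters c, fuzzifier b > 1); a fixed iteration count stands in for the convergence test of step 5.

function [mu,U] = fuzzy_kmeans_sketch(X, c, b)
% Minimal fuzzy k-means per Algorithm 2: X is n-by-d, b > 1 the fuzzifier.
n = size(X,1);
U = rand(c,n); U = U./sum(U,1);                % random memberships, Eq. (5.11)
for it = 1:100
    Ub = U.^b;
    mu = (Ub*X)./sum(Ub,2);                    % means, Eq. (5.12)
    D  = max(sum(mu.^2,2) + sum(X.^2,2)' - 2*mu*X', eps);  % d_ij = ||x_j-mu_i||^2
    U  = (1./D).^(1/(b-1));
    U  = U./sum(U,1);                          % memberships, Eq. (5.13)
end
end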

When the memberships are crisp, i.e.

P(wi|xj) = { 1, if ||xj - mi|| <= ||xj - mi'|| for every i' different from i;  0, otherwise },

Eq. (5.12) reduces to Eq. (5.5) and fuzzy k-means reduces to plain k-means.
The graded memberships often improve convergence over plain k-means. On the other hand, the normalization of Eq. (5.11) forces the memberships of each sample xj to sum to one even when xj is far from every cluster, so probability mass can be distributed in a counter-intuitive way; this is a known criticism of the method.

5.2.4 Bayesian learning
Maximum likelihood treats the parameter vector th as a fixed but unknown quantity. Bayesian methods instead treat th as a random variable with a known prior distribution p(th); observing the training samples D converts the prior into the posterior density p(th|D) (Looney, 1997). The assumptions are:

1) The number of classes c is known.
2) The a priori probabilities P(wj), j = 1,...,c, are known.
3) The forms of the class-conditional densities p(x|wj, thj), j = 1,...,c, are known, but the parameter vector th = (th1,...,thc)^t is unknown.
4) Part of the knowledge about th is encoded in a known prior density p(th).
5) The rest of the knowledge about th is contained in the set D of n samples x1,...,xn drawn independently from the mixture

p(x|th) = SUM_{j=1}^{c} p(x|wj, thj) P(wj).

The goal is to compute the posterior p(th|D) and, through it, the class posteriors P(wj|x, D), which constitute the best classification information obtainable from the data. Since the priors P(wi) are known, Bayes' rule gives:

P(wi|x, D) = p(x|wi, D) P(wi|D) / SUM_{j=1}^{c} p(x|wj, D) P(wj|D).

Because the priors are known, the samples in D carry no further information about them, i.e. P(wi|D) = P(wi), and therefore:

P(wi|x, D) = p(x|wi, D) P(wi) / SUM_{j=1}^{c} p(x|wj, D) P(wj).    (5.14)

The densities p(x|wi, D) are obtained by integrating out the parameters:

p(x|wi, D) = INT p(x, th|wi, D) dth = INT p(x|th, wi, D) p(th|wi, D) dth.

Two simplifications apply. First, given th the sample x is independent of D, so p(x|th, wi, D) = p(x|wi, thi). Second, the class label wi adds nothing to our knowledge of th once D is known, so p(th|wi, D) = p(th|D). Hence:

p(x|wi, D) = INT p(x|wi, thi) p(th|D) dth.    (5.15)

That is, the class-conditional density p(x|wi, D) is the average of p(x|wi, thi) weighted by the posterior p(th|D), which is therefore the central quantity to compute.

The posterior follows from Bayes' rule:

p(th|D) = p(D|th) p(th) / INT p(D|th) p(th) dth,    (5.16)

where, by independence of the samples,

p(D|th) = PROD_{k=1}^{n} p(xk|th).

Writing D^n for the set of the first n samples, Eq. (5.16) can also be applied recursively, one sample at a time:

p(th|D^n) = p(xn|th) p(th|D^{n-1}) / INT p(xn|th) p(th|D^{n-1}) dth.

This is the basis of recursive (on-line) Bayesian learning. Under broad conditions, as n grows, p(D|th) - and hence p(th|D) - becomes sharply peaked around some value th^. If p(th|D) can be approximated by a spike at th = th^, then Eq. (5.15) reduces to

p(x|wi, D) ~ p(x|wi, thi^),

and Eq. (5.14) becomes

P(wi|x, D) ~ p(x|wi, thi^) P(wi) / SUM_{j=1}^{c} p(x|wj, thj^) P(wj).

In this approximation the Bayesian result coincides in form with the maximum likelihood one. The Bayesian approach answers in principle the question of how the remaining uncertainty about th should be handled (it is averaged over), at the price of integrals that are rarely tractable in closed form. For this reason, and for its computational simplicity, maximum likelihood is often preferred in practice over Bayesian estimation.

5.3 Evidence theory
Evidence theory deals with the combination of evidence from different sources in order to quantify degrees of belief. As a motivating example, suppose two sensors independently assess the same situation (e.g. a target): sensor-1 assigns 60% to one interpretation, 30% to a second and 10% to a third, while sensor-2 assigns 30%, 20% and 50% to the same three interpretations; the question is how to fuse the two assessments into a single one.
Unlike the probabilistic (Bayesian) treatment, evidence theory allows belief to be assigned to sets of alternatives without dividing it among their elements, which makes it convenient when the available evidence is partial or ambiguous. The basic concepts and the combination rule are presented next, together with a known pitfall of the rule.

5.3.1 Basic concepts
Evidence theory is also known as the theory of belief functions, or Dempster-Shafer theory (Dempster, 1967; Shafer, 1976). Let X be the universe of discourse and 2^X its power set. A mass function (also called basic belief assignment or probability mass assignment) is a function m: 2^X -> [0,1] satisfying:

1. m(empty set) = 0,
2. SUM_{A subset of X} m(A) = 1.

Condition 1 states that no belief is committed to the empty set, and condition 2 that the total belief equals one. The value m(A), for A a subset of X, expresses the proportion of evidence supporting exactly the set A - and not any particular proper subset of A.

From the mass function, two set functions are defined:

1) the belief function bel: 2^X -> [0,1], with bel(A) = SUM_{B: B subset of A} m(B), and
2) the plausibility function pl: 2^X -> [0,1], with pl(A) = SUM_{B: B intersects A} m(B).

The two are linked by pl(A) + bel(A') = 1, where A' = X\A denotes the complement of A. Moreover, bel(.) determines m(.) uniquely: for every A subset of X,

m(A) = SUM_{B: B subset of A} (-1)^{|A-B|} bel(B),

where |A-B| is the cardinality of the set difference of A and B. Hence mass, belief and plausibility are three equivalent representations of the same evidence.
Example 4
Suppose evidence concerns three mutually exclusive hypotheses (the hypothesis names of the original table were lost in extraction). Table 5.3 lists, for four subsets of the universe, the mass m, the belief bel and the plausibility pl:

Subset                    | m   | bel | pl
(empty set, presumably)   | 0   | 0   | 0
.                         | 0.2 | 0.2 | 0.5
.                         | 0.5 | 0.5 | 0.8
(whole universe, presum.) | 0.3 | 1.0 | 1.0

Table 5.3: Mass, belief and plausibility values of the example.

When a probability measure P over X is compatible with the evidence, it is bracketed by belief and plausibility: bel(A) <= P(A) <= pl(A) for every A subset of X; for this reason bel and pl are also interpreted as lower and upper probabilities.
Suppose now that two independent sources of evidence (e.g. two sensors or two experts) provide mass functions m1: 2^X -> [0,1] and m2: 2^X -> [0,1] over the same universe. Dempster's rule of combination defines the joint mass function m1,2 = m1 (+) m2: 2^X -> [0,1] by:

1) m1,2(empty set) = 0,
2) m1,2(A) = (m1 (+) m2)(A) = (1/(1-K)) SUM_{B meet C = A} m1(B) m2(C),  where K = SUM_{B meet C = empty} m1(B) m2(C).

The factor 1/(1-K) normalizes away the conflicting mass K that the two sources assign to incompatible sets.
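The rule is straightforward to implement. The following minimal MATLAB sketch combines two mass functions over a three-element universe, with subsets coded as bit masks; the masses assigned below are those of the two-doctor example discussed next.

% Frame X = {x1,x2,x3}; a subset is coded by a bit mask in 1..7.
m1 = zeros(1,7); m2 = zeros(1,7);
m1(1) = 0.99; m1(2) = 0.01;          % source 1: {x1} 0.99, {x2} 0.01
m2(4) = 0.99; m2(2) = 0.01;          % source 2: {x3} 0.99, {x2} 0.01
m12 = zeros(1,7); K = 0;
for B = 1:7
    for C = 1:7
        A = bitand(B,C);             % intersection of the focal subsets
        if A == 0
            K = K + m1(B)*m2(C);     % conflicting mass
        else
            m12(A) = m12(A) + m1(B)*m2(C);
        end
    end
end
m12 = m12/(1-K);                     % normalized joint mass; here m12(2) = 1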

Dempster's rule can, however, produce counter-intuitive results when the sources are highly conflicting. In a well-known example, two doctors examine the same patient. The first assigns mass 0.99 to meningitis (X) and 0.01 to brain tumor (Y); the second assigns mass 0.99 to concussion (Z) and 0.01 to brain tumor (Y). Applying Dempster's rule: (1) m1({X})m2({Y}) = (0.99)(0.01) goes to the empty set, (2) m1({X})m2({Z}) = (0.99)(0.99) goes to the empty set, (3) m1({Y})m2({Y}) = (0.01)(0.01) supports {Y}, and (4) m1({Y})m2({Z}) = (0.01)(0.99) goes to the empty set. Hence K = (1.01)(0.99) and

(m1 (+) m2)({Y}) = (0.01)(0.01) / (1 - (1.01)(0.99)) = 1.

The combination concludes brain tumor with 100% certainty, although both doctors considered it very unlikely (1%); they merely failed to agree on anything else. The combined verdict thus rests entirely on the negligible common ground of the two opinions, while their overwhelming, mutually conflicting evidence (99% for meningitis, 99% for concussion) is normalized away.
Such counter-intuitive behavior of Dempster's rule under strong conflict has been analyzed extensively (Jøsang & Simon, 2012).
It is also instructive to compare Dempster-Shafer theory with probability theory (and Bayesian reasoning). In a probability space (cf. Section 7.3.2) the measure of an event and of its complement sum to one, whereas in general m(A) + m(A') <= 1, so part of the belief may remain uncommitted. Moreover, belief functions are not additive: for a universe of three mutually exclusive hypotheses, probabilities satisfy P(first or second) = P(first) + P(second), whereas in general only bel(first or second) >= bel(first) + bel(second) holds.
Finally, note that a mass function m: 2^X -> [0,1] is defined on the complete lattice (2^X, subset-of) of subsets of X. The definition extends to mass functions m: L -> [0,1] on an arbitrary complete lattice (L, order), together with a corresponding extension of Dempster's rule (Grabisch, 2009).


Exercises
5.1) Consider the PAC learning of rectangles of Section 5.1 on the unit square, with target rectangle R = [0.2,0.7]x[0.5,0.8] (figure lost in extraction: R is drawn inside the unit square).
(a) Give a probability distribution D (a mass assignment m) that places 99% of the probability inside R and 1% outside it.
(b) Give a probability distribution D (a mass assignment m) under which learning the rectangle R is as hard as possible.
5.2) Show that, provided P^(wi) is nonzero for every i, the maximum likelihood estimates P^(wi) satisfy Eqs. (5.2) and (5.3).
5.3) For the mixture of Example 1,

P(x|th) = (1/2) th1^x (1-th1)^{1-x} + (1/2) th2^x (1-th2)^{1-x} = { (th1+th2)/2, if x = 1;  1 - (th1+th2)/2, if x = 0 },

suppose P(x=1|th) = 0.6, so that th1 + th2 = 1.2. Can th1 and th2 be determined? Justify your answer.
5.4) MATLAB -.
5.5) MATLAB -
.
5.6) {(),
(), ()}.
.
Dempster
m1,2= m1m2.
m1,2;


m1 (-1) m2 (-2)
{} 0 0
{} 0.05 0.15
{} 0 0
{} 0.05 0.05
{, } 0.15 0.05
{, } 0.10 0.20
{, } 0.05 0.05
{, , } 0.60 0.50


Breiman, L. (1996). Bagging Predictors. Machine Learning, 24(2), 123-140.
Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5-32.
Breiman, L., Friedman, J.H., Olshen, R.A. & Stone, C.J. (1998). Classification and Regression Trees. Boca
Raton, FL: Chapman & Hall /CRC.
Dempster, A.P. (1967). Upper and lower probabilities induced by a multivalued mapping. The Annals of
Mathematical Statistics, 38(2), 325-339.
Duda, R., Hart, P.E. & Stork, D.G. (2001). Pattern Classification (2nd edition). New York, NY: Wiley & Sons
A Wiley-Interscience Publication.
Freund, Y. & Schapire, R.E. (1997). A decision-theoretic generalization of on-line learning and an application
to boosting. J. of Computer and System Sciences, 55, 119-139.
Freund, Y. & Schapire, R.E. (1999). A short introduction to boosting. J. Japan Soc. Artificial Intelligence,
14(5), 771-780.
Grabisch, M. (2009). Belief functions on lattices. International Journal of Intelligent Systems, 24(1), 76-95.
Ho, T.K. (1998). The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 20(8), 832844.
Jain, A.K., Duin, R.P.W. & Mao, J. (2000). Statistical pattern recognition: a review. IEEE Transactions on
Pattern Analysis Machine Intelligence, 22(1), 4-37.
Jøsang, A. & Simon, P. (2012). Dempster's rule as seen by little colored balls. Computational Intelligence,
28(4), 453-474.
Kearns, M.J. & Vazirani, U.V. (1994). An Introduction to Computational Learning Theory. Cambridge, MA:
The MIT Press.
Kearns, M., Li, M. & Valiant, L. (1994) Learning Boolean formulas. Journal of the Association of Computing
Machinery, 41(6), 1298-1328.
Kittler, J., Hatef, M., Duin, R.P.W. & Matas, J. (1998). On combining classifiers. IEEE Transactions on
Pattern Analysis Machine Intelligence, 20(3), 226-239.
Kuncheva, L.I. (1996). On the equivalence between fuzzy and statistical classifiers. Intl. J. Uncertainty,
Fuzziness and Knowledge-Based Systems, 4(3), 245-253.
Kuncheva, L.I. (2004). Combining Pattern Classifiers: Methods and Algorithms. Cambridge, UK: Wiley-
Interscience.
Long, P.M. & Tan, L. (1998). PAC learning axis-aligned rectangles with respect to product distributions from
multiple-instance examples. Machine Learning, 30(1), 7-21.
Looney, C.G. (1997). Pattern Recognition Using Neural Networks. New York, NY: Oxford University Press.
Opitz, D. & Maclin, R. (1999). Popular ensemble methods: an empirical study. J. Artificial Intelligence
Research, 11, 169-198.
Pearson, K. (1894). Contributions to the mathematical theory of evolution. Phil. Trans. Roy. Soc. London A,
185, 71-110.
Schapire, R.E. (1990). The strength of weak learnability. Machine Learning, 5(2), 197-227.
Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton, NJ: Princeton University Press.
Tuia, D., Volpi, M., Copa, L., Kanevski, M. & Muñoz-Marí, J. (2011). A survey of active learning algorithms
for supervised remote sensing image classification. IEEE Journal of Selected Topics in Signal
Processing, 5(3), 606-617.
Valiant, L.G. (1984). A theory of the learnable. Comm ACM, 27(11), 1134-1142.
Vapnik, V.N. (1999). An overview of statistical learning theory. IEEE Transactions on Neural Networks,
10(5), 988-999.
Zhou, Z.-H. (2012). The term boosting refers to a family of algorithms that are able to convert weak learners
to strong learners. Ensemble Methods: Foundations and Algorithms (p. 23). Boca Raton, FL: Chapman
& Hall /CRC.

6:

. (graph theory)
(2000). RP2
PP, 7.
,
, , .. ,
7. .

6.1 Bayesian networks
Bayesian networks (BNs) were introduced by Pearl (1985); early book-length treatments are (Neapolitan, 1989; Pearl, 1988). Also known as belief networks, they are probabilistic models over directed acyclic graphs (DAGs) (Edwards, 2000; Jordan & Sejnowski, 2001): each node represents a random variable and each directed edge a direct probabilistic dependence, so that missing edges encode conditional independences. (The corresponding undirected graphical models are the Markov networks (MNs).)
Figure 6.1 shows an example network of five variables; here each variable is Boolean, taking the values true (T) and false (F), and each node carries a (conditional) probability table quantifying the influence of its parents on it.
The defining property of a Bayesian network is that the joint probability distribution function factors compactly. The chain rule gives, in general,

P(X1, X2, X3, X4, X5) = P(X1) P(X2|X1) P(X3|X1,X2) P(X4|X1,X2,X3) P(X5|X1,X2,X3,X4),

whereas the network structure of Figure 6.1 reduces this to a product in which each variable is conditioned only on its parents (the original variable names were lost in extraction), with correspondingly fewer parameters.

6.1.1 Inference
Two basic types of inference are distinguished:

(a) Prediction: compute the probability of a variable Xi given evidence on ancestors of Xi; reasoning flows from causes to effects (top-down reasoning).
(b) Diagnosis: compute the probability of a variable Xi given evidence on descendants of Xi; reasoning flows from effects to causes (bottom-up reasoning).

Both reduce to computing conditional probabilities in the network of Figure 6.1 by marginalizing the factored joint distribution, e.g.:
P(cause = T | effect = T) = P(cause = T, effect = T) / P(effect = T),

where both the numerator and the denominator are obtained by summing the factored joint distribution over the unobserved variables, each sum running over {T, F}.

[Figure 6.1: An example Bayesian network of five Boolean variables with their (conditional) probability tables; the variable names were lost in extraction. The recoverable table entries are: P(.=T) = 0.8 and 0.02 for the two root nodes; P(.=T|parent) = 0.9 / 0.01; P(.=T|parents) = 0.9, 0.2, 0.9, 0.01 for the four parent configurations of the two-parent node; and P(.=T|parent) = 0.7 / 0.1.]

6.2 Decision trees
A decision tree classifies a sample by a sequence of tests, one per internal node, starting from the root; each leaf assigns a class, as in the example tree of Figure 6.2. Decision trees are popular because the induced model is readable: every path from root to leaf is a rule. Decision-tree induction constructs such a tree from a training set; well-known algorithm families are documented in (Quinlan, 1992; Kotsiantis, 2013).

[Figure 6.2: An example decision tree; internal nodes test attributes (one test splits on humidity <= 75% / > 75%) and leaves assign classes.]

6.2.1 Tree construction
Decision trees are built top-down by a divide-and-conquer strategy. Let T be the training set at the current node and let C1, C2,..., Ck be the classes. Three cases are distinguished:

1. T contains samples of a single class Cj. The node becomes a leaf labeled Cj.
2. T is empty. The node becomes a leaf whose label is decided heuristically, e.g. by the majority class of the parent node.
3. T contains samples of several classes. The node is split: a test on an attribute is chosen, with mutually exclusive outcomes O1, O2,..., On, and T is partitioned into subsets T1, T2,..., Tn, where Ti contains the samples of T with outcome Oi. The same procedure is applied recursively to each subset.

To illustrate, consider the training set of Table 6.3, whose samples describe weather conditions; splitting repeatedly by the above procedure produces the partition of Figure 6.4 and, from it, the decision tree of Figure 6.5.
The tree of Figure 6.5 is consistent with the training data of Table 6.3; note, however, that a simpler tree, such as that of Figure 6.2, may misclassify some training samples yet generalize better.

Temperature (oC) | Humidity (%)
20 | 70
25 | 90
27 | 85
19 | 95
18 | 70
19 | 90
26 | 78
17 | 65
25 | 75
19 | 80
15 | 70
20 | 80
18 | 80
19 | 96

Table 6.3: The training samples of the example (the categorical attributes and class labels of the original table were lost in extraction).

6.2.2
.
. -
Ti, ( )
. , ,
,
.
, ,
.
.
(blind search),
. .
- ,
. , 6.3
106 .
(greedy) . ,

.

.

6.2.3 Choosing the test: the gain criterion
The choice of test at each node determines the size and quality of the tree, so a criterion is needed for selecting the most informative attribute. Shannon's information theory (Shannon, 1948) provides one. The information content of an event is measured by how improbable the event is: an event A with probability PA carries -log2(PA) bits of information. For example, learning the outcome of a fair draw among 8 equally likely alternatives carries -log2(1/8) = 3 bits.
[Figure 6.4: The partition of the training set of Table 6.3 induced by the tests of the tree. The recoverable groups of (temperature, humidity) pairs are: {(20,70), (18,70)} and {(25,90), (27,85), (19,95)} under the split humidity <= 75% / > 75%; {(19,90), (26,78), (17,65), (25,75)}; {(19,80), (15,70)}; and {(20,80), (18,80), (19,96)}. The attribute values labeling the branches were lost in extraction.]

[Figure 6.5: The decision tree corresponding to the partition of Figure 6.4.]

For n mutually exclusive events A1,...,An with probabilities P1,...,Pn, the expected information, or entropy, is

info(A1,...,An) = -SUM_{i=1}^{n} Pi log2(Pi).

Applied to a training set T: let freq(Cj, T) denote the number of samples of class Cj, j in {1,...,k}, in T, and |T| the size of T. A sample of class Cj is selected from T with probability freq(Cj, T)/|T|, so the class entropy of T is:

info(T) = -SUM_{j=1}^{k} (freq(Cj, T)/|T|) log2( freq(Cj, T)/|T| ).

If a test X partitions T into subsets T1,...,Tn, the expected information after the test is the weighted average

infoX(T) = SUM_{i=1}^{n} (|Ti|/|T|) info(Ti),

and the quantity gain(X) = info(T) - infoX(T) is called the information gain (in information-theoretic terms, the mutual information of T and X). The gain criterion selects the test X with maximum gain(X).
Consider a training set T like that of Table 6.3 with 14 samples, 9 of one class and 5 of the other. Then

info(T) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940 bits.

A test X that partitions T into three subsets with class counts (2,3), (4,0) and (3,2) gives

infoX(T) = (5/14)[-(2/5)log2(2/5) - (3/5)log2(3/5)] + (4/14)[-(4/4)log2(4/4) - 0] + (5/14)[-(3/5)log2(3/5) - (2/5)log2(2/5)] = 0.694 bits,

so gain(X) = 0.940 - 0.694 = 0.246 bits. A second test, partitioning T into two subsets with class counts (3,3) and (6,2), gives

infoX(T) = (6/14)[-(3/6)log2(3/6) - (3/6)log2(3/6)] + (8/14)[-(6/8)log2(6/8) - (2/8)log2(2/8)] = 0.892 bits,

i.e. gain 0.940 - 0.892 = 0.048 bits. The gain criterion therefore prefers the first test.
The gain criterion has a known bias: it favors tests with many outcomes. In the extreme, a test on an identifying attribute (a different value for every sample) partitions T into singletons, so that

infoX(T) = SUM_{i=1}^{n} (|Ti|/|T|) info(Ti) = 0, since every info(Ti) = 0,

maximizing the gain while being useless for prediction. A normalization corrects the bias: define the split information

split info(X) = -SUM_{i=1}^{n} (|Ti|/|T|) log2(|Ti|/|T|),

which is the entropy of the partition itself, and select the test maximizing the gain ratio

gain ratio(X) = gain(X) / split info(X).

For the identifying attribute above, split info(X) = log2(n) when T is split into n singletons, so its gain ratio becomes small and the degenerate test is no longer preferred.
A remark on attribute types: tests on categorical attributes typically create one branch per value, while numeric attributes are handled by binary tests of the form A <= Z versus A > Z, where Z is a threshold chosen among the values of A in T.

6.3 Cognitive maps
Cognitive maps (CM) were introduced by Axelrod (1976) for modeling political decision making. A cognitive map is a signed directed graph: its nodes represent concepts (variables of the modeled domain) and its directed edges represent causal influences between concepts, each labeled + or - according to whether the influence is positive or negative. Figure 6.6 shows an example. Inference over a cognitive map amounts to propagating activations along the causal edges.

[Figure 6.6: An example cognitive map; nodes are concepts, signed directed edges are causal influences.]
6.3.1 Fuzzy cognitive maps
Fuzzy cognitive maps (FCM) extend cognitive maps by quantifying both concepts and influences. Their main characteristics are:

1. Each concept is activated to a degree, rather than being simply on or off.
2. Each causal influence has a strength, rather than being simply positive or negative (cf. point 1).
3. The map evolves in discrete time, by iterated propagation of the activations.
4. Concepts and weights may be given linguistically and then be fuzzified.

FCMs were introduced by Kosko (1986, 1992) and combine characteristics of fuzzy systems and neural networks. Figure 6.7 shows an example.

[Figure 6.7: An example fuzzy cognitive map.]

As in Figure 6.7, an FCM consists of concepts Ci, i = 1, 2,..., N, where N is the number of concepts, connected by weighted directed edges. Following (Stylios & Groumpos, 1998), the weight between concepts Ci and Cj expresses: (a) the direction of causality, i.e. which concept is the cause and which the effect, (b) whether the causality is positive or negative, and (c) the strength of the influence, so that each weight, e.g. Wij from Ci to Cj, takes a value in [-1, 1].
Let Ai denote the activation of concept Ci. The activations evolve according to the rule

Ai^{t+1} = f( Ai^t + SUM_{j=1, j<>i}^{N} Wji Aj^t ),    (6.1)

where Ai^{t+1} and Ai^t are the activations of Ci at times t+1 and t, Aj^t is the activation of Cj at time t, Wji is the weight of the edge from Cj to Ci, and f is a transfer (squashing) function mapping the result into [0,1].
The repeated application of Eq. (6.1) can lead to one of three behaviors (Stylios & Groumpos, 1998):
1. the activations converge to a fixed point,
2. the activations settle into a limit cycle, or
3. the activations vary chaotically.

In practice the fixed-point behavior is usually the desired one, since then the converged activation vector constitutes the FCM's inference for the given initial conditions.
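A minimal MATLAB sketch of the iteration of Eq. (6.1) follows; the weight matrix and initial activations are hypothetical, and the sigmoid f is one common choice of transfer function.

N = 4;                                   % hypothetical map of four concepts
W = [ 0   0.4  0   0.3;                  % W(j,i): weight of the edge C_j -> C_i
      0   0    0.7 0  ;
     -0.5 0    0   0.6;
      0   0.2  0   0  ];
A = [0.5 0.4 0.7 0.1];                   % initial activations
f = @(x) 1./(1+exp(-x));                 % sigmoid transfer function into (0,1)
for t = 1:20
    A = f(A + A*W);                      % Eq. (6.1); diag(W)=0 excludes self-feedback
end
disp(A)                                  % converged activations, if a fixed point exists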

6.3.1.1 Learning in FCMs
The weights of an FCM are typically set by domain experts; learning algorithms adjust them automatically from data, which reduces the dependence on experts and can improve performance. Reviews of FCM learning are given in (Papageorgiou, 2012; Papakostas et al., 2012), and FCM learning has also been applied to pattern recognition problems (Papakostas et al., 2012).

A. Hebbian-type learning
A family of unsupervised FCM learning algorithms adapts the classical Hebbian rule (cf. Chapter 1). Representatives include differential Hebbian learning (DHL) (Dickerson & Kosko, 1994), non-linear Hebbian learning (NHL) (Papageorgiou et al., 2003) and active Hebbian learning (AHL) (Papageorgiou et al., 2004), presented next.

A1. Differential Hebbian learning (DHL)
DHL (Dickerson & Kosko, 1994) correlates the changes of the activations at the two ends of each edge. The weight wij from concept Ci to concept Cj is updated by:

wij^{t+1} = { wij^t + eta_t [ DAi^t DAj^t - wij^t ],  if DAi^t <> 0;  wij^t,  if DAi^t = 0 },    (6.2)

where the activation changes are

DAi^t = Ai^t - Ai^{t-1},    (6.3)

and eta_t is a decreasing learning rate, e.g.:

eta_t = 0.1 [ 1 - t/(1.1 N) ],    (6.4)

where N is a constant. The learning rate decays with the iterations t, so that the weights stabilize as learning proceeds.
A2. Non-linear Hebbian learning (NHL)
NHL (Papageorgiou et al., 2003; 2004) adapts Oja's rule (Oja, 1989), Eq. (6.5), to FCMs:

Dwij(n+1) = eta yi(n) [ xj(n) - yi(n) wij(n) ],    (6.5)

where xj is the pre-synaptic (input) activation, yi the post-synaptic (output) activation and eta a learning rate. Adapted to the concepts of an FCM, the update becomes:

wij^{t+1} = wij^t + eta Aj^t [ Ai^t - Aj^t wij^t ].    (6.6)

In a later variant (Papageorgiou et al., 2006), only the weights of edges designated by the experts are updated, and the sign of each weight is preserved via the sgn(.) function:

wij^{t+1} = wij^t + eta Aj^t [ Ai^t - sgn(wij^t) Aj^t wij^t ].    (6.7)

Finally, introducing a decay factor gamma on the previous weight gives:

wij^{t+1} = gamma wij^t + eta Aj^t [ Ai^t - sgn(wij^t) Aj^t wij^t ].    (6.8)

A3. Active Hebbian learning (AHL)
AHL modifies both the activation rule of Eq. (6.1) and the weights. The experts determine a sequence of activation steps and, at each step, the concepts acting as activators; the remaining concepts are activated through them. Eq. (6.1) becomes:

Ai^{t+1} = f( Ai^t + SUM_{j=1, j<>i}^{N} Wji^{act} Aj^t ),    (6.9)

where Wji^{act} denotes the weights of the currently active edges. The weights are updated by:

wij^{t+1} = (1 - gamma^t) wij^t + eta^t Ai^t [ Aj^t - wij^t Ai^t ],    (6.10)

where the learning parameters eta^t and gamma^t decay exponentially with t:

eta^t = b1 e^{-l1 t},  gamma^t = b2 e^{-l2 t},    (6.11)

with 0.01 < b1 < 0.09, 0.1 < l1 < 1, and b2, l2 chosen accordingly. The decaying parameters make the adaptation progressively more conservative, so that the weights converge (subject to appropriate termination conditions).

6.3.1.2

,
(Papageorgiou & Salmeron, 2013).

() (fuzzy cognitive maps FCMper) (Papakostas &
Koulouriotis, 2010 Papakostas .., 2008, 2012).
, , .
,
,
. 6.8.
, ,
(Papageorgiou &
Kannappan, 2012).

.
() (fuzzy grey cognitive maps (FGCM))
(Salmeron, 2010), ()
(evidential cognitive maps (ECM)) Dempster Shafer
(Kang .., 2012) () (intuitionistic fuzzy cognitive
maps (IFCM)), (intuitionistic fuzzy logic)
(Papageorgiou & Iakovidis, 2013). ,
(granular computing) (Pedrycz & Homenda, 2014),
(Papakostas .., 2015). ,
6.9.

6.8 .

6.9 .


6.1) Consider a Bayesian network of three Boolean variables (the original node names were lost in extraction; A, B, C are used below as reconstruction labels): B is a root node, A depends on B, and C depends on both A and B. The (conditional) probability tables, reconstructed from the worked solution, are:

P(B=T) = 0.2,  P(B=F) = 0.8.

P(A=T|B=F) = 0.4,  P(A=T|B=T) = 0.01.

A | B | P(C=T|A,B)
F | F | 0.0
F | T | 0.8
T | F | 0.9
T | T | 0.99

Compute P(B=T | C=T), i.e. the probability of the cause B given that the effect C is observed true.
Solution:

P(B=T | C=T) = P(C=T, B=T) / P(C=T) = SUM_{A in {T,F}} P(C=T, A, B=T) / SUM_{A,B in {T,F}} P(C=T, A, B),

where

SUM_A P(C=T, A, B=T) = P(C=T|A=T,B=T) P(A=T|B=T) P(B=T) + P(C=T|A=F,B=T) P(A=F|B=T) P(B=T)
= (0.99)(0.01)(0.2) + (0.8)(0.99)(0.2) = 0.00198 + 0.1584 = 0.16038.

Similarly,

SUM_{A,B} P(C=T, A, B) = (0.99)(0.01)(0.2) + (0.8)(0.99)(0.2) + (0.9)(0.4)(0.8) + (0.0)(0.6)(0.8)
= 0.00198 + 0.1584 + 0.288 + 0 = 0.44838.

Hence P(B=T | C=T) = 0.16038 / 0.44838 = 0.3577.
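The computation is easily checked numerically; a minimal MATLAB sketch follows (the node names A, B, C are the reconstruction labels used above).

pB = 0.2;                         % P(B=T)
pA = [0.4 0.01];                  % P(A=T|B=F), P(A=T|B=T)
pC = [0.0 0.8;                    % P(C=T|A=F,B=F), P(C=T|A=F,B=T)
      0.9 0.99];                  % P(C=T|A=T,B=F), P(C=T|A=T,B=T)
num = pC(2,2)*pA(2)*pB + pC(1,2)*(1-pA(2))*pB;               % P(C=T, B=T)
den = num + pC(2,1)*pA(1)*(1-pB) + pC(1,1)*(1-pA(1))*(1-pB); % P(C=T)
P   = num/den                     % = 0.16038/0.44838 = 0.3577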

6.2)
.
, .
6.3) ,
. ,
.
, 0.68 H 0.74
0.74 G 0.80 . , 5 (H, G, V1, V2, V3)
:
0 0.4 0.25 0 0.3
0.36 0 ,
0 0 0 (0.10, 0.01, 0.45, 0.39 0.04).
0.45 0 0 0 0

0.9 0 0 0 0
0 0.6 0 0.3 0

6.4) MATLAB,
.(6.1), .
6.5) MATLAB, DHL.


Axelrod, R. (1976). Structure of Decision: The Cognitive Maps of Political Elites. Princeton, NJ: Princeton
University Press.
Dickerson, J.A. & Kosko, B. (1994). Virtual worlds as fuzzy cognitive maps. Presence, 3(2), 173-189.
Edwards, D. (2000). Introduction to Graphical Modelling (2nd ed.). Springer, New York, NY.
Jordan, M.I. & Sejnowski. T.J. (Eds.). (2001). Graphical Models: Foundations of Neural Computation.
Cambridge, MA: The MIT Press.
Kang, B., Deng, Y., Sadiq, R. & Mahadevan, S. (2012). Evidential cognitive maps. Knowledge-Based
Systems, 35, 77-86.
Kosko, B. (1986). Fuzzy cognitive maps. Intl. Journal Man-Machine Studies, 24, 65-75.
Kosko, B. (1992). Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine
Intelligence. Upper Saddle River, NJ: Prentice-Hall.
Kotsiantis, S.B. (2013). Decision trees: a recent overview. Artificial Intelligence Review, 39(4), 261-283.
, . (2000). - - . ,
, .
Neapolitan, R.E. (1989). Probabilistic Reasoning in Expert Systems: Theory and Algorithms. New York,
N.Y.: Wiley.
Oja, E. (1989). Neural networks, principal components and subspaces. International Journal of Neural
Systems, 1, 61-68.
Papageorgiou, E.I. (2012). Learning algorithms for fuzzy cognitive maps - a review study. IEEE Transactions
on Systems, Man, and Cybernetics, Part C, 42(2), 150-163.
Papageorgiou, E.I. & Iakovidis, D.K. (2013). Intuitionistic fuzzy cognitive maps. IEEE Transactions on Fuzzy
Systems, 21(2), 342-354.
Papageorgiou, E.I. & Kannappan, A. (2012). Fuzzy cognitive map ensemble learning paradigm to solve
classification problems: application to autism identification. Applied Soft Computing, 12(12), 3798-
3809.
Papageorgiou, E.I. & Salmeron, J.L. (2013). A review of fuzzy cognitive maps research during the last
decade. IEEE Transactions on Fuzzy Systems, 21(1), 66-79.
Papageorgiou, E.I., Stylios, C.D. & Groumpos, P.P. (2003). Fuzzy cognitive map learning based on nonlinear
Hebbian rule. Advances in Artificial Intelligence, (LNAI 2903, pp. 254-266). Berlin, Germany:
Springer.
Papageorgiou, E.I., Stylios, C.D. & Groumpos, P.P. (2004). Active Hebbian learning algorithm to train fuzzy
cognitive maps. International Journal of Approximate Reasoning, 37(3), 219-249.
Papageorgiou, E.I., Stylios, C.D. & Groumpos, P.P. (2006). Unsupervised learning techniques for fine-tuning
fuzzy cognitive map casual links. International Journal of Human-Computer Studies, 64(8), 727-743.
Papakostas, G.A. & Koulouriotis, D.E. (2010). Classifying patterns using fuzzy cognitive maps. In M. Glykas
(Ed.), Fuzzy Cognitive Maps: Advances in Theory, Methodologies, Tools and Applications (pp. 291-
306). Berlin, Germany: Springer.
Papakostas, G.A., Papageorgiou, E.I. & Kaburlasos, V.G. (2015). Linguistic Fuzzy Cognitive Map (LFCM)
for pattern recognition. In IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2-5 August
2015.
Papakostas, G.A., Boutalis, Y.S., Koulouriotis, D.E. & Mertzios, B.G. (2008). Fuzzy cognitive maps for
pattern recognition applications. International Journal of Pattern Recognition and Artificial
Intelligence, 22(8), 1461-1468.
Papakostas, G.A., Koulouriotis, D.E., Polydoros, A.S. & Tourassis, V.D. (2012). Towards Hebbian learning
of fuzzy cognitive maps in pattern classification problems. Expert Systems with Applications, 39(12),
10620-10629.
Pearl, J. (1985). Bayesian networks: a model of self-activated memory for evidential reasoning. In
Proceedings of the 7th Conference of the Cognitive Science Society, University of California, Irvine,
CA, 329-334.
Pearl, J. (1988) Probabilistic Reasoning in Intelligent Systems. San Francisco, CA: Morgan Kaufmann.
Pedrycz, W. & Homenda, W. (2014). From fuzzy cognitive maps to granular cognitive maps. IEEE
Transactions on Fuzzy Systems, 22(4), 859-869.

Quinlan, J.R. (1992). C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufman.
Salmeron, J.L. (2010). Modelling grey uncertainty with Fuzzy Grey Cognitive Maps. Expert Systems with
Applications, 37(12), 7581-7588.
Shannon, C.E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379-
423.
Stylios, C.D. & Groumpos, P.P. (1998). The challenge of modelling supervisory systems using fuzzy
cognitive maps. Journal of Intelligent Manufacturing, 9, 339-345.
Swartout, W. & Tate, A. (1999). Ontologies. IEEE Intelligent Systems, 14(1), 18-19.

-:

-

. -
, . ,
(Birkhoff, 1967),
.
, . ,
(),

R .
R (.
) f: RNRM (Kaburlasos, 2006). ,
, , ..
RN.
f: RNRM (Kaburlasos & Kehagias, 2007).
, RN
- ,
.
/ , ,
- , , , () , , ..
- .
, .. , ..
()
.
(lattice computing)

- ,
, , , , .. (Esmi ..,
Graa & Chyzhyk, Kaburlasos & Papakostas, 2015 Kaburlasos .., 2013
Sussner, Valle & Sussner, 2013).
- , ,
.. .
, ,
, . ,
R
( ) , , f: LK,
L K .
, ,

. ,
(Liu .., 2013 Sussner & Esmi, 2011),
(Pedrycz ..,
2008 Zadeh, 1997).
- . 7
.
8 (1)
, (2) (3) ,
. , 9
(),

/.
(Kaburlasos, 2011 Kaburlasos
& Ritter, 2007 Kaburlasos .., 2008 Kaburlasos .., 2014 Liu .., ).


Birkhoff, G. (1967). Lattice Theory (Colloquium Publications 25). Providence, RI: American
Mathematical Society.
Esmi, E., Sussner, P. & Sandri, S. (to be published). Tunable equivalence fuzzy associative memories.
Fuzzy Sets and Systems.
Graa, M. & Chyzhyk, D. (to be published). Image understanding applications of lattice
autoassociative memories. IEEE Transactions on Neural Networks and Learning Systems.
Kaburlasos, V.G. (2006). Towards a Unified Modeling and Knowledge-Representation Based on
Lattice Theory (Studies in Computational Intelligence 27). Heidelberg, Germany: Springer.
Kaburlasos, V.G. (Ed.). (2011). Special Issue on: Information Engineering Applications Based on
Lattices. Information Sciences, 181(10), 1771-1773.
Kaburlasos, V.G & Kehagias, A. (2007). Novel fuzzy inference system (FIS) analysis and design based
on lattice theory. IEEE Transactions on Fuzzy Systems, 15(2), 243-260.
Kaburlasos, V.G. & Papakostas, G.A. (2015). Learning distributions of image features by interactive
fuzzy lattice reasoning (FLR) in pattern recognition applications. IEEE Computational
Intelligence Magazine, 10(3), 42-51.
Kaburlasos, V.G. & Ritter, G.X. (Eds.). (2007). Computational Intelligence Based on Lattice Theory
(Studies in Computational Intelligence 67). Heidelberg, Germany: Springer.
Kaburlasos, V.G., Papadakis, S.E. & Papakostas, G.A. (2013). Lattice computing extension of the
FAM neural classifier for human facial expression recognition. IEEE Transactions on Neural
Networks and Learning Systems, 24(10), 1526-1538.
Kaburlasos, V., Priss, U. & Graa, M. (Eds.). (2008). Proc. of the Lattice-Based Modeling (LBM)
Workshop, in conjunction with The Sixth International Conference on Concept Lattices and
Their Applications (CLA). Olomouc, Czech Republic: Palack University.
Kaburlasos, V.G., Tsoukalas, V. & Moussiades, L. (2014). FCknn: a granular knn classifier based on
formal concepts. In Proceedings of the World Congress on Computational Intelligence (WCCI)
2014, FUZZ-IEEE Program, 6-11 July 2014, (pp. 61-68).
Liu, Y., Kaburlasos, V.G. & Xu, Y. (to be published). Lattice implication algebra with application in
fuzzy lattice reasoning. In G.A. Papakostas, A.G. Hatzimichailidis, V.G. Kaburlasos (Eds.),
Handbook of Fuzzy Sets Comparison - Theory, Algorithms and Applications (Gate to Computer
Science and Research 7). Xanthi, Greece: Science Gate Publishing.
Liu, H., Xiong, S. & Wu, C.-a. (2013). Hyperspherical granular computing classification algorithm
based on fuzzy lattices. Mathematical and Computer Modelling, 57(3-4), 661-670.
Pedrycz, W., Skowron, A. & Kreinovich, V. (Eds.). (2008). Handbook of Granular Computing.
Chichester, U.K.: Wiley.
Sussner, P. (to be published). Lattice fuzzy transforms from the perspective of mathematical
morphology. Fuzzy Sets and Systems.
Sussner, P. & Esmi, E.L. (2011). Morphological perceptrons with competitive learning: Lattice-
theoretical framework and constructive learning algorithm. Information Sciences, 181(10),
1929-1950.
Valle, M.E. & Sussner, P. (2013). Quantale-based autoassociative memories with an application to the
storage of color images. Pattern Recognition Letters, 34(14), 1589-1601.
Zadeh, L.A. (1997). Toward a theory of fuzzy information granulation and its centrality in human
reasoning and fuzzy logic. Fuzzy Sets Systems, 90(2), 111-127.

7:

, ,
( ) ,
, ( ), () , , , ..
.
,
.
.
.

7.1
() (L,), x, y L (x,y), (x,y). ,
x y , . ,
, x y .
,
(L,), x, y L L .
L L .
(fuzzy lattice) (L,,), (L,)
L L, , x, y 1 , x y.
(L,,) (L,).

, (Ajmal & Thomas, 1994 Kehagias & Konstantinidou, 2003
Tepavevi & Trajkovski, 2001). , -
(Kaburlasos, 1992).
(. )
(Chakrabarty, 2001 Nanda, 1989), Bayes
(Knuth, 2005).
,
- (. ) . ,
x, y L L x, y 0,1
x - y. x y, x y
( 1), ,
x || y ( x y -). ,
() (weak (fuzzy) partial order relation).
, ,
x, y 1 y, z 1 , x, z 1 . x, y 1 y, z 1 ,
x, z [0,1].
L- (Goguen,
1967). ()
, , ,
[0,1] , .

(Kaburlasos & Papakostas, 2015).

Definition. Let (L, =<) be a lattice. A fuzzy order on L is a function s: LxL -> [0,1] satisfying:

C1. u =< w if and only if s(u, w) = 1.
C2. u =< w implies s(x, u) <= s(x, w). (Consistency property)


/ . , (xy)
x, y .
C1 (u,w)=1
, u / w. , C1

(L,). , C2
. u w, C2
x u , w.
(xy)
() (Fan
.., 1999 Sinha & Dougherty 1993 Sinha & Dougherty 1995 Young, 1996 Zhang & Zhang, 2009).
, , ()
, . ,
, ()
() , () .
()
() (fuzzy lattice reasoning (FLR)) (Kaburlasos & Kehagias, 2014). ,
() ,
(reasoning by analogy), .
, (deductive
reasoning) , ,
() ac, () a0 a0a, c.
, (approximate
reasoning) . ,
() aici, i{1,,L}, () a0 a0ai, i{1,,L},
aJcJ,
J arg max (a0 ai ) . cJ.
i{1,...,L}

Given a positive valuation v: L -> R>=0 on a lattice (L, =<), two fuzzy order functions are defined for u, w, x in L:

(a) the sigma-join: s_join(x, u) = v(u) / v(x v u), and
(b) the sigma-meet: s_meet(x, u) = v(x ^ u) / v(x).

Both s_join: LxL -> [0,1] and s_meet: LxL -> [0,1] satisfy conditions C1 and C2, hence (L, =<, s_join) and (L, =<, s_meet) are fuzzy lattices; each extends the crisp order of (L, =<) to a graded degree of inclusion between incomparable elements.
If (L, =<) has a least element o, then s_join(x, o) = 0 = s_meet(x, o) for x different from o, i.e. the (nonzero) elements are 'minimally included' in o; this requires v(o) = 0.
7.2
, ,
.

7.2.1
(L,)
(constituent) (Li,), i=1,,N. (L,) = (L1,)(LN,)
(Li,), i=1,,N . ,
, , , .
o i, ,
, . () () (L,) =
(L1,)(LN,) , ,
xy = (x1,,xN)(y1,,yN) = (x1y1,,xNyN)
xy = (x1,,xN)(y1,,yN) = (x1y1,,xNyN).

, (x1,,xN) (y1,,yN) x1y1,,xNyN.


v1,,vN (L1,),,(LN,), ,
v: L=L1LNR v= v1++vN
(L=L1LN,). v1,,vN v= v1++vN
. , , v1,,vN v
.
Metrics on a product lattice are built from metrics on its constituents as follows. If vi: Li -> R>=0 is a positive valuation on (Li, =<), then di: LixLi -> R>=0 with di(xi, yi) = vi(xi v yi) - vi(xi ^ yi) is a metric on (Li, =<), for i = 1,...,N. Then, on the product lattice (L, =<) = (L1, =<)x...x(LN, =<), the Minkowski metrics

d(x, y; p) = [ d1(x1, y1)^p + ... + dN(xN, yN)^p ]^{1/p},  p >= 1,

are defined for x = (x1,...,xN) and y = (y1,...,yN).

Likewise, if v1,...,vN are positive valuations on (L1, =<),...,(LN, =<), then v: L = L1x...xLN -> R>=0 with v = v1 + ... + vN is a positive valuation on (L, =<), and the corresponding fuzzy orders on the product are:

sigma-join: s_join(x, u) = v(u) / v(x v u) = SUM_{i=1}^{N} vi(ui) / SUM_{i=1}^{N} vi(xi v ui),

sigma-meet: s_meet(x, u) = v(x ^ u) / v(x) = SUM_{i=1}^{N} vi(xi ^ ui) / SUM_{i=1}^{N} vi(xi).

Alternatively, fuzzy orders defined per constituent can be combined. Let si: LixLi -> [0,1] be a fuzzy order on (Li, =<) - for instance a sigma-join or a sigma-meet - for i in {1,...,N}. For positive l1,...,lN with l1+...+lN = 1, the convex combination

sc( x = (x1,...,xN), u = (u1,...,uN) ) = SUM_{i=1}^{N} li si(xi, ui)

is a fuzzy order on (L, =<) = (L1x...xLN, =<). The minimum s(x, u) = min_{i in {1,...,N}} si(xi, ui) and the product s(x, u) = PROD_{i=1}^{N} si(xi, ui) are further alternatives.
i 1

7.2.2
(I,) () (L,).
(I{},), () (1) x= x(I{}) (2) [a,b][c,d] =
[ac,bd] acbd [a,b][c,d] = acbd, [a,b],[c,d]I. , ()
(1) x=x, x(I{}) (2) [a,b][c,d] = [ac,bd], [a,b],[c,d]I.
(I{},) , [a,b]I ,
[a,b] = [a,a][b,b]. (I{},)
-, .
(L,)
o i . (1=I{},)
I=[o,i]. O (),
O=[i,o]. I1= I{[i,o]} -1 (1) (type-1
(T1) interval). O=[i,o] .
O=[i,o] [a,b][c,e],
[a,b][c,e] ca..be (I1,), [a,b][c,e]
cabe (I,). O=[i,o]
(I1,)
(I{},), () () (I1,) [a,b][c,e] = [ac,be]
acbe, [a,b][c,e] = [i,o], acbe () () (I1,) [a,b][c,e] =
[ac,be]. , , (I{},) (I1,)
, (I{},) (I1,). (I1,) .

The lattice (I1, =<) of type-1 intervals is not merely of theoretical interest; the following argument shows why valuation-based sizes, rather than ad hoc size measures, are appropriate on it.
Let [a, b], [c, e] and [c', e'] be intervals of (R, <=) with a <= b <= c <= c' <= e' <= e, as in Figure 7.1, so that [c', e'] is contained in [c, e]. Consider a function v1: I union {empty} -> R>=0 intended to quantify interval size, and require the valuation conditions:

1) v1([a,b]) + v1([c,e]) = v1([a,b] v [c,e]) + v1([a,b] ^ [c,e]) = v1([a,e]) + v1(empty),
2) v1([a,b]) + v1([c',e']) = v1([a,b] v [c',e']) + v1([a,b] ^ [c',e']) = v1([a,e']) + v1(empty).

Subtracting gives v1([c,e]) - v1([c',e']) = v1([a,e]) - v1([a,e']) >= 0, i.e. v1([c',e']) <= v1([c,e]): a size function consistent with the lattice operations is automatically monotone with respect to interval inclusion. This motivates defining interval sizes through (positive) valuation functions.

[Figure 7.1: Points a <= b <= c <= c' <= e' <= e on the real line; the interval [c', e'] is contained in [c, e].]


(I1,) 1 , .
(I1,) (G,),
v1: G R 0 ()
.
v1: G R 0 (
), ( 7.1)
(G,). (. )
(I1,),
.
(I1,) 1,
(L,) o i . (L,)
v: L R 0 7.1,
, (reasonable constraints)
v(o)=0 v(i)<+. (I1,)
(L,)(L,) = (LL,)
. (generalized interval) [a,b],
a,bL. () (LL,) [a,b][c,d] = [ac,bd], ()

(LL,) [a,b][c,d] = [ac,bd]. (LL,)
O=[i,o] I=[o,i], .

(L,) (L,)
(LL,). v:
L R 0 (L,), v(o)=0
v(i)<+. (L,).
v: L R 0 (L,)
(L,), . ,
x,y(L,) v(x)+v(y) = v(xy)+v(xy). , v(.) (L,),
xy (L,) xy (L,), xy (L,) v(x)>v(y).
(L,),
v: L R 0 (L,),
() : (L,)(L,), xy (L,)
(x)(y) (L,). , xy (L,) (x)(y) (L,)
, , v((x)) < v((y)). , (composite) v (.)
(L,) v1([x,y]) = v((x))+v(y)
(LL,) . ,
, (.) , (o)=i,
(i)=o. , v(.)
v1(.). , , v1(O=[i,o])= v((i))+v(o)= 2v(o)= 0 , ,
v1(O=[o,i])= v((o))+v(i)= 2v(i) < +.
A metric d1: (LxL, =<)x(LxL, =<) -> R>=0 follows from the valuation v1([a,b]) = v(th(a)) + v(b):

d1([a,b], [c,e]) = v1([a,b] v [c,e]) - v1([a,b] ^ [c,e]) = v1([a^c, b v e]) - v1([a v c, b^e])
= v(th(a^c)) + v(b v e) - v(th(a v c)) - v(b^e) = d(th(a), th(c)) + d(b, e),

whose restriction to (I1, =<) is a metric between type-1 intervals. Likewise, a fuzzy order s_join: (LxL)x(LxL) -> [0,1] is given by

s_join([a,b], [c,e]) = v1([c,e]) / v1([a,b] v [c,e]) = [v(th(c)) + v(e)] / [v(th(a^c)) + v(b v e)],

whose restriction s_join: I1xI1 -> [0,1] is a fuzzy order (inclusion measure) on (I1, =<); if v1([a,b] v [c,e]) = 0 then [a,b] = [c,e] and s_join([a,b],[c,e]) := 1 by convention. Similarly,

s_meet([a,b], [c,e]) = v1([a,b] ^ [c,e]) / v1([a,b]) = [v(th(a v c)) + v(b^e)] / [v(th(a)) + v(b)],

whose restriction s_meet: I1xI1 -> [0,1] is a fuzzy order (inclusion measure) on (I1, =<); if v1([a,b]) = 0 then [a,b] = O and s_meet([a,b],[c,e]) := 1 by convention.
Finally, a positive valuation v: L -> R>=0 also induces on (I, =<) the classical size measure Z1: I -> R>=0, Z1([a,b]) = v(b) - v(a), which equals the diameter Z1([a,b]) = d(a,b) = max_{x,y in [a,b]} d(x,y), where d(x,y) = v(x v y) - v(x ^ y) is the metric of (L, =<). Trivial intervals [a,a] have zero size, Z1([a,a]) = 0, yet they are distinct elements of the lattice (I1, =<).
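A minimal MATLAB sketch follows for type-1 intervals of (R, <=) with v(x) = x and th(x) = -x, in which case v1([a,b]) = b - a and the sigma-join admits the simple form below; the intervals used are hypothetical.

v1    = @(I) -I(1) + I(2);                        % v1([a,b]) = v(th(a))+v(b) = b-a
joinI = @(I,J) [min(I(1),J(1)), max(I(2),J(2))];  % interval join [a^c, b v e]
sjoin = @(I,J) v1(J)/v1(joinI(I,J));              % sigma-join([a,b] =< [c,e])
sjoin([1 3],[2 6])                                % (6-2)/(6-1) = 0.8
sjoin([2 3],[1 6])                                % 1: [2,3] is contained in [1,6]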

7.2.3
2L (L,).
L N
.
2L 2L U ,W 2L U W ,
u U , w W : u w . , 2L , U W {u w}
uU , wW

U W {u w} .
uU , wW

S L (simplified) , , (quotient),
S - L . - S
, X S
X . (2L ) 2L 2 L
L .
((2L ), ) , (2L) : 2L2L[0,1]
C2 (). ,
c : (2L ) (2L ) [0,1] ((2L ), ) .
: L L 0,1 (L,).
N
c : (2L ) (2L ) [0,1] , c(UW)=
i 1
i max (ui
j{1, ,N}
wj ) ,

, U={u1,,uM}, W={w1,,wN}(2L).

7.3

. , .
.
.
.

7.3.1 The lattice of real numbers
The totally ordered lattice (R, <=) of real numbers is the basis of most of the tools above and is used extensively in Chapter 9. Any strictly increasing function v: R -> R is a positive valuation on (R, <=), and any strictly decreasing function th: R -> R is a dual isomorphism on (R, <=). Products (R^N, =<) = (R, <=)^N follow.
With v(x) = x, the product lattice (R^N, =<) carries the Minkowski metrics

d(x, y; p) = [ d(x1,y1)^p + ... + d(xN,yN)^p ]^{1/p} = [ |x1-y1|^p + ... + |xN-yN|^p ]^{1/p},

for x = (x1,...,xN) and y = (y1,...,yN), i.e. the familiar Lp metrics. In particular, L1 gives the Hamming (city-block) distance d(x,y;1) = |x1-y1| + ... + |xN-yN|, L2 the Euclidean distance d(x,y;2) = sqrt((x1-y1)^2 + ... + (xN-yN)^2), and L-infinity the metric d(x,y;inf) = max{|x1-y1|,...,|xN-yN|}.

2 2

RR
- (R,).
(F,) R
. , f,gF, f g f ( x) g ( x)
x R , () . ()
() f g (F,) fg = f(x)g(x) := min{ f ( x), g ( x)} , ()
xR

(F,) fg = f(x)g(x) := max{ f ( x), g ( x)} .


xR

7.3.2

(measure space) , , m , ,
- m . -
.
- (-algebra)
:

1. ,
2. A (\A) ,
3. Ai , i D,
( iD
Ai ) .

, -
.
(measure) , - m : R0
:

1. m () 0 ,
2. D
AiS m ( iD Ai ) iD m (Ai ) .

(,) (measurable space).


- ,

. , , , m , (,),
, o= i=.
(,) ()
(), . m (.)
(,). , (A)= \= A, A A,
(,). ,
(,).

,

, , m m () 1 . , ,

,
, m , - 2.

, F() = [0,1]. (F(),) .

7.3.3
(/) Boole (Birkhoff, 1967).
(/)
. , .
m : R0 , . ,
, , , m . ,
.

7.3.4

/ , / ()
/ . , 7.2
.
A hierarchy of this kind is a join lattice: any two nodes x and y have a least upper bound x v y, their nearest common ancestor, whereas a meet x ^ y may not exist. For instance, in the hierarchy of Figure 7.2, c1 v c4 = c13, c5 v c12 = c14, c2 v c14 = c15, etc.

[Figure 7.2: An example hierarchy (join lattice) of 2^3 = 8 leaves c1,...,c8 at level -3, intermediate nodes c9,...,c14, and root c15 at level 0.]
To compute with the hierarchy of Figure 7.2, sizes are needed on the nodes. One way to obtain them is illustrated in Figure 7.3: first, positive numbers are assigned to the leaves at level -3 (Figure 7.3(a)); then the number assigned to every interior node ck with children ci and cj is v(ck) = v(ci) + v(cj) (Figure 7.3(b)), and so on up to the root (Figure 7.3(c)); finally a least element (o) with v(o) = 0 is appended below the leaves at level -4 (Figure 7.3(d)), while the greatest element is the root i = c15. The function v so obtained induces both distances (d) and inclusion measures on the hierarchy. For example, d(c1, c13) = v(c1 v c13) - v(c1 ^ c13) = v(c13) - v(c1) = 11.7 - 2 = 9.7 and d(c3, c13) = v(c13) - v(c3) = 11.7 - 5 = 6.7: although c1 and c3 are both leaves at level -3 under c13, c3 is 'closer' to c13 than c1 because it carries more weight. Likewise, d(c1, c3) = v(c1 v c3) - v(c1 ^ c3) = v(c13) - v(o) = 11.7 - 0 = 11.7; note that the distance of the two leaves c1 and c3 is mediated by their nearest common ancestor c13.

[Figure 7.3: The hierarchy of Figure 7.2 with numbers on its nodes. (a) Positive numbers on the leaves: 2, 1.5, 5, 3.2, 1, 4.8, 0.4, 2.4. (b) Sums at level -2: 3.5, 8.2, 5.8, 2.8. (c) Sums at level -1: 11.7, 8.6; and at the root: 20.3. (d) A least element (o) with v(o) = 0 appended at level -4.]
Inclusion measures are computed on the same structure. For the comparable pair c5 =< c14: s_join(c5, c14) = v(c14)/v(c5 v c14) = v(c14)/v(c14) = 1 and s_meet(c5, c14) = v(c5 ^ c14)/v(c5) = v(c5)/v(c5) = 1. For the incomparable pair c1, c14: s_join(c1, c14) = v(c14)/v(c1 v c14) = v(c14)/v(c15) = 8.6/20.3 = 0.423, whereas s_meet(c1, c14) = v(c1 ^ c14)/v(c1) = 0/2 = 0; by either measure, c1 is at most partly included in c14. In the reverse direction, s_join(c14, c5) = v(c5)/v(c14 v c5) = v(c5)/v(c14) = 1/8.6 = 0.116 = v(c14 ^ c5)/v(c14) = s_meet(c14, c5); inclusion is clearly not symmetric.
A remark is in order for trees in which an interior node ck covering children ci, i in I, is assigned a value strictly larger than the sums of its children's values, v(ck) > max_{i,j in I, i<>j} {v(ci) + v(cj)}. Then the lattice identity v(x) + v(y) = v(x v y) + v(x ^ y) may fail for incomparable x, y; nevertheless, the function d(ci, cj) = 2v(ci v cj) - v(ci) - v(cj) can be used as a distance and s_join(x, u) = v(u)/v(x v u) as an inclusion measure, so computations of the above kind remain available.

, . , .
,
.
cicj , ci cj ,
( ). , ,
cicj ,
7.2,
7.3(). , , ,
, .
(L,),
v(.), (.),
(L,), . (L,).
9.1.2,
(1,) (RR,)
v(.), (.).
, ,
(L,).

7.3.5

. ,
(ontologies) (Guarino, 2009) , ..
(semantic Web),
, 7.3.3 ,
.

. ,
. , ()
, ,
(disparate data fusion)
. , , ..
, .

9.

( )
.


(binary relation) R ( P Q)
P Q , . R P Q . p, q R pRq . P Q ,
P. (inverse) R
R 1 , . qR1 p : pRq .
RPQ (function), (p,q1)R
(p,q2)R q1q2. , f p
P f(p) Q. f(p) (image) p.
Q P, f
(surjection). , ,
(bijection). ,
1-1 . .
R P P P (partially
ordered), :

1. ( x, x) R ()
2. ( x, y) R x y ( y, x) R ()
3. ( x, y) R ( y, z) R ( x, z) R ()

2 :

2 ( x, y) R ( y, x) R x y ()

xRy ( x, y) R x y (x,y)
x y x y x y. x y
x y x y x y x
y. x y x y R 1 .


( )
.
R P P P
(equivalence relation), :

1. ( x, x) R ()
2. ( x, y) R ( y, x) R ()
3. ( x, y) R ( y, z) R ( x, z) R ()

() (partially ordered set (poset)) (P,),


P () P. P
() , .. 1 2.
: PQ (P,) (Q,)
(order preserving) (monotone), x y (x) (y), x, y P . ,
, x y (x) (y),
(order embedded).
(isomorphism). (P,)
(Q,), (P,) (Q,) , (P,) (Q,). (dual)
(P,) (P,) = (P,) = (P,)
. (P,) (Q,) (P,) (Q,), (P,) (Q,)
(dually) .
(duality principle) ( ):
. ,
-1 . ,
.
(dual statement) S S ,
S . S ,
S .
x y (comparable), x y y
x. - (incomparable) x y (parallel), x || y .
. (chain) , ,
(totally ordered set) (P,),
. , R
R, .
7.4.
(P,) Hasse (Hasse
diagram), P () ,
a P b P , ,
a b a (cover) b (P,) a b
x P , a x b. 7.4 Hasse
2{a,b,c}, {a,b,c} {a,b}, {a,c} {c} ..
S, 2S, S.

{a,b,c}

{a,b} {a,c} {b,c}

{a} {c}
{b}

{}

7.4 Hasse 2{a,b,c} S a, b, c .

(least) , , X P (P,)
a X , , ax x X .
X P (greatest) . , ,
o. , i.
(P,) o. x P o, x ,
(atom). .. a , b , c 7.4 .
(P,) a, b P a b. () ((ordinary) interval)
a, b a, b := {xP: a x b}.
I () (P,). (I,),
. [a,b][c,e] cabe. (I,)
(). ,
(I{},).
(I{},),
() . [x,x]I (I{},) .
(P,) , a : x P : x a (principal
ideal) ( a ), b : x P : x b (principal
filter) ( b ).
(size) (P,) (-)
: P R 0 , :

S1. u w (u) < (w).

N (P1,1)(PN,N) (P1PN,) =
(P1PN,1N) (x1,,xN)(y1,,yN) : x11y1, , xNNyN.
(P,) ( P, ) ( Pi , i) ,
i

, i: Pi R 0 (Pi,i)

, , P . , A(P,), AiPi,

i, : P R , :
0

(A) = i ( Ai )dP
.

, . ,
P : [0,1] .


(lattice theory), , (order theory),
Garrett Birkhoff (Birkhoff, 1967 Davey & Priestley, 1990 Grtzer, 2003).
.
(P,) X P. (upper bound) aP x
a, xX. (least upper bound) , ,
. (lattice join),
(join), supX X. (lower bound)
(greatest lower bound) .
(lattice meet), (meet), infX X.
X x, y , xy supX xy infX .
.
(lattice) (L,) , x, y L
xy, xy.
, R, , .
(L,) (complete), X
L . X L , (-)
o i .
(semantic lattice definition).
, , , (algebraic
lattice definition) (operations) ()
() (algebra) A [S,F], S -
, F fa,
Sn(a) S S n(a). .
(L,) (L,,)
L1 - L4, .

L1. xx = x, xx = x ()
L2. xy = yx, xy = yx ()
L3. x(yz) = (xy)z, x(yz) = (xy)z ()
L4. x(xy) = x(xy) = x ()

x y

xy = x xy = y ()

() () . ,
, . yz xy xz xy xz, xL. ,

(L,) ox = o, ox = x, xi = x xi = i, xL. , (atomic)
.

, .
,
, . ,
,
7.3.
, (L,) (L,) (L,), (L,)
. (L,)
(L,),
. .
( ): ()
, , , , o, i , , , i, o .
(sublattice) (S,) (L,) SL. (L,)
(superlattice) (S,). , (S,) (L,).
(a,b) (L,) ([a,b],), [a,b] [a,b]:=
{xL: axb}, . (S,) (L,) , a,bS
[ab,ab]S.
(distributive),
x, y, z .

L5. x(yz) = (xy)(xz) L5. x(yz) = (xy)(xz)

(complement), , x (L,)
() o(i) yL, , xy = o xy = i. ,
xL xL. (L,)
(complemented), . ,
Boole (Boolean lattice).
Boole (2S,), 2S
S .
Boole (2S,) () () .
Boole (Boolean algebra) (L,,,) ,
L1 - L8.

L6. xx = o, xx = i
L7. x x
L8. (xy) = xy (xy) = xy

Boole
7.3.3, , , / Boole
.
8.1.

(L,) (M,). : LM :

() (join morphism), (xy) = (x)(y), x, y L .


() (meet morphism), (xy) = (x)(y), x, y L .

() ((lattice) morphism),
- -.
() ((lattice) isomorphism).
, .
(valuation function) (L,)
v: LR v(x)+v(y) = v(xy)+v(xy).
(monotone), x y v x v y (positive) x y
v x v y .

. , , - ,
.
() ,
. ,
, , ()
. , ,
, . 7.1,
.
v: LR (L,)
d :L L R0 , d(x,y)=v(xy)-v(xy), x, y L . , ,
v(.) , d(x,y) = v(xy)-v(xy)
() .
A metric on a set X is a function d: XxX -> R>=0 satisfying:

M1. d(x, y) = 0 if and only if x = y (identity of indiscernibles)
M2. d(x, y) = d(y, x) (symmetry)
M3. d(x, z) <= d(x, y) + d(y, z) (triangle inequality)

If M1 is weakened to 'd(x, y) = 0 if x = y' while M2 and M3 hold, then d(.,.) is called a pseudo-metric. A set X equipped with a metric d is a metric space (X, d).
If a lattice (L, =<) has a greatest element i and a positive valuation v(.), then every x in L is at finite distance from i exactly when v(i) < +infinity: indeed d(x, i) = v(x v i) - v(x ^ i) = v(i) - v(x), which is finite for all x in L if and only if v(i) < +infinity.

7.1) 2
2.
7.2) . ,
.
7.3) (I{},) . :
() (), 7.2.2,
.
7.4) () (,) () (L,).
d :L L R0 : L L [0,1]
. ()
(,).


Ajmal, N. & Thomas, K.V. (1994). Fuzzy lattices. Information Sciences, 79(3-4), 271-291.
Birkhoff, G. (1967). Lattice Theory (Colloquium Publications 25). Providence, RI: American Mathematical
Society.
Chakrabarty, K. (2001). On Fuzzy Lattice. In: W. Ziarko, Y.Y. Yao, eds., Rough Sets and Current Trends in
Computing (Lecture Notes in Computer Science 2005: 238-242). Berlin, Germany: Springer-Verlag.
Davey, B.A. & Priestley, H.A. (1990). Introduction to Lattices and Order. Cambridge, UK: Cambridge
University Press.
Fan, J., Xie, W. & Pei, J. (1999). Subsethood measure: new definitions. Fuzzy Sets and Systems, 106(2), 201-
209.
Goguen, J.A. (1967). L-fuzzy sets. Journal of Mathematical Analysis and Applications, 18(1), 145-174.
Grätzer, G. (2003). General Lattice Theory (2nd ed.). Basel, Switzerland: Birkhäuser Verlag AG.
Guarino, N., Oberle, D. & Staab, S. (2009). What is an ontology?. In: S. Staab, R. Studer, eds., Handbook on
Ontologies (International Handbooks on Information Systems: 1-17). Berlin, Germany: Springer-
Verlag.
Kaburlasos, V.G. (1992). Adaptive Resonance Theory With Supervised Learning and Large Database
Applications (Library of Congress-Copyright Office). Reno, NV: Univ. Nevada, Ph.D. Dissertation.
Kaburlasos, V.G. & Kehagias, A. (2014). Fuzzy inference system (FIS) extensions based on lattice theory.
IEEE Transactions on Fuzzy Systems, 22(3), 531-546.
Kaburlasos, V.G. & Papakostas, G.A. (2015). Learning distributions of image features by interactive fuzzy
lattice reasoning (FLR) in pattern recognition applications. IEEE Computational Intelligence Magazine,
10(3), 40-49.
Kehagias, A. & Konstantinidou, M. (2005). L-fuzzy valued inclusion measure, L-fuzzy similarity and L-fuzzy
distance. Fuzzy Sets and Systems, 136(3), 313-332.
Knuth, K.H. (2005). Lattice duality: the origin of probability and entropy. Neurocomputing, 67, 245-274.
Nanda, S. (1989). Fuzzy lattice. Bulletin Calcutta Mathematical Society, 81, 201-202.
Sinha, D. & Dougherty, E.R. (1993). Fuzzification of set inclusion: theory and applications. Fuzzy Sets and
Systems, 55(1), 15-42.
Sinha, D. & Dougherty, E.R. (1995). A general axiomatic theory of intrinsically fuzzy mathematical
morphologies. IEEE Transactions on Fuzzy Systems, 3(4), 389-403.
Tepavevi, A. & Trajkovski, G. (2001). L-fuzzy lattices: an introduction. Fuzzy Sets and Systems, 123(2),
209-216.
Young, V.R. (1996). Fuzzy subsethood. Fuzzy Sets and Systems, 77(3), 371-384.
Zhang, H.-Y. & Zhang, W.-X. (2009). Hybrid monotonic inclusion measure and its use in measuring
similarity and distance between fuzzy sets. Fuzzy Sets and Systems, 160(1), 107-118.

8:
H
. , 19
George Boole Boole.
Boole 19 Peirce Schrder
() , Dedekind
(ideals of algebraic numbers) ()
(Grtzer, 2003).
, , .
Birkhoff (1967) 1930 Harvard
, ,
. Birkhoff
, , , ..
/ (logicians), :
Jnsson, Kurosh. Malcev, Ore, von Neumann Tarski (Rota, 1997).
,
-.
(Bloch & Maitre, 1995 Maragos, 2005 Nachtegael & Kerre,
2001). (paradigms)
#1. , #2. #3.
. , #1 #2
, #3
.

8.1
(Birkhoff & von Neumann, 1936
Edmonds, 1980 Gaines, 1978 Halmos & Givant, 1998). ,
- (L-fuzzy set) (Goguen, 1967). ,
-
, [0,1] .
, (reasoning),

. , -
(Xu .., 2003). 2
.

8.1.1 Fuzzy implications. In classical (binary) logic an implication is a proposition of the form 'if a then b', symbolically a -> b, whose truth table is the following (it is false only when the antecedent is true and the consequent false):

a \ b | 0 | 1
0     | 1 | 1
1     | 0 | 1

Table 8.1: Truth table of the classical implication a -> b (rows: a; columns: b).

In fuzzy logic, propositions take truth values in the whole interval [0,1]. A fuzzy implication is accordingly a function

I: [0,1]x[0,1] -> [0,1]

which, restricted to the crisp values {0,1}, reproduces the classical truth table above.
a b a b , , :
I S a, b S n a , b , a, b 0,1 (8.1)
S , T n , - ( ), - ( ) ,
. S T n , . De Morgan.
, a b a b , :

a b max x 0,1 a x b , a, b 0,1

a b a a b , a, b 0,1


I R a, b supx 0,1 T a, x b , a, b 0,1 (8.2)

IQL a, b S n a , T a, b , a, b 0,1 (8.3)

.(8.1) S,
.(8.2) R-,
.(8.3) QL. ,
, .
,
.

A1. a b I a, x I b, x . , ,
.
A2. a b I x, a I x, b . , ,
.
A3. I 0, a 1 . , .
A4. I 1, b b . .
A5. I a, a 1 . ,
.
A6. I a, I b, x I b, I a, x . a b x
b a x , .
A7. I a, b 1 a b . , ,
, .
A8. I a, b I n b , n a n . ,
, .
A9. I .
,
.

1 9 , .. A3 A5
A7, . 1 9.
1 9,
.

Theorem (Smets & Magrez). A function I: [0,1]x[0,1] -> [0,1] satisfies axioms A1-A9 if and only if there exists a strictly increasing continuous function f: [0,1] -> [0,+inf), with f(0) = 0, such that:

I(a, b) = f^{(-1)}( f(1) - f(a) + f(b) ),  a, b in [0,1],
n(a) = f^{-1}( f(1) - f(a) ),  a in [0,1],

where f^{(-1)} denotes the pseudo-inverse of f.

Weaker axiom systems have also been proposed; a representative definition follows:
Fodor Roubens. :

I : 0,1 0,1 0,1 ,

, 0,1 0,1
a,b 0,1 :

(i) a b I a, x I b, x .
(ii) a b I x, a I x, b .
(iii) I 0, a 1 .
(iv) I a,1 1 .
(v) I 1,0 0 .

8.1.2
( L, , , O, I ) O I, , '
, : L L L . ( L, , , , , O, I )
(lattice implication algebra),
x, y, z L :

(1) x ( y z) y ( x z) .
(2) x x I .
(3) x y y ' x ' .
(4) x y y x I x y .
(5) ( x y) y ( y x) x .
(L1) ( x y) z ( x z) ( y z) .
(L2) ( x y) z ( x z) ( y z) .

1
( L, , , ) Boole. x, y L x y x' y . , ( L, , , )
.

2
L [0,1] , , , x, y, z L :
x y max{x, y} , x y min{x, y} , x' 1 x x y min{x,1 x y} . ,
([0,1], , ,', ,0,1) , ukasiewicz
[0, 1].

3
L {ai | i 1,2, , n} , , , 1 j, k n :
a j ak amax{ j ,k } , a j ak amin{ j ,k } , (a j )' an j 1 a j ak amin{n j k ,n} . ,
( L, , ,', , a1 , an ) , ukasiewicz
a1,a2,,an.

( L, , ,', , O, I ) . , x, y, z L
:

(1) I x I x I .
(2) I x x , x O x' .
(3) OxI , xI I .
(4) ( x y) y x y .
(5) x y x y I .
(6) (( x y) y) y x y .
(7) ( x y) x' ( y x) y ' .
(8) x y x z y z z x z y .
(9) ( x y) (( y z) ( x z)) I .

-
(resolution principle) (automated reasoning) (.
)
.

8.2 Formal concept analysis
A formal context is a triple (G, M, I), where G is a set of objects, M a set of attributes and I a binary relation between G and M; g I m, also written gIm or (g, m) in I, means that object g has attribute m.
For a set of objects A, a subset of G, define

A' = { m in M | gIm for every g in A },

the set of attributes common to all objects of A. Dually, for a set of attributes B, a subset of M, define

B' = { g in G | gIm for every m in B },

the set of objects that have all attributes of B. A formal concept of the context (G, M, I) is a pair (A, B), with A a subset of G and B a subset of M, such that A' = B and B' = A; A is the extent and B the intent of the concept.
The formal concepts of a context, ordered by the inclusion of their extents, form a complete lattice, the concept lattice of the context (Ganter & Wille, 1999). The mathematical study of concept lattices is known as formal concept analysis (FCA), with applications to knowledge representation, data mining and information retrieval, among others.
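A minimal MATLAB sketch of the two derivation operators follows, for a small hypothetical context given as a logical objects-by-attributes matrix; a pair (A, B) is verified to be a formal concept by checking A' = B and B' = A.

I = logical([1 1 0;                      % hypothetical context: 3 objects (rows)
             1 0 1;                      % by 3 attributes (columns)
             0 1 1]);
up   = @(A) all(I(A,:),1);               % A -> A': attributes shared by all of A
down = @(B) all(I(:,B),2).';             % B -> B': objects carrying all of B
A = logical([1 1 0]);                    % candidate extent {g1,g2}
B = up(A);                               % its intent: here the first attribute only
isConcept = isequal(down(B), A)          % true: (A,B) is a formal concept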

4
8.2 19
8.1.


a b c d e f g h i
1
2
3
4
5
6
7
8

8.2 : a:
, b: , c: , d:
, e: , f: , g: , h: , i: .

[Figure 8.1: The concept lattice of the context of Table 8.2. Each node is a formal concept; the recoverable labels show extents (object sets such as 1234, 5678, 123, 234, 678, 34, 36, 56, 68, ...) and intents (attribute sets such as a, ag, ac, ab, ad, agh, adf, acgh, abg, abc, acd, abdf, acg, abgh, acdf, acde, abcgh, abcdf, abcde, fghi).]


(Caro-Contreras & Mendez-Vazquez, 2013). ,
(Belohlavek, 2000). (retrieval)
(data bases) (Carpineto & Romano, 1996 Priss, 2000).
(Formica, 2006).

8.3
(MM) (mathematical morphology (MM))
. Matheron (1975) Serra (1982),
,
.
( )
(Dougherty & Sinha, 1995). ,
,
(Bloch .., 2007). ,
(structure element),
: (dilation), (erosion), (opening)
(closing), /
(patterns) . (L,) (M,),
: LM : LM :
( M) (M) ( M) (M) ,
() () {()|} {()|} .

8.3.1
Let E be a set, e.g. the grid Z^2, and consider its power set 2^E. The complete lattice (2^E, subset-of) is a Boolean lattice, and binary images are subsets of E (Meyer, 1991). For X, Y in 2^E the usual set operations X union Y, X intersect Y, X\Y and the complement X^c are available. For h in E and a structuring element B, a subset of E, define the translation X_h = { x + h : x in X } of X by h, and the reflection (transpose) X^t = { -x : x in X }.

The Minkowski dilation and erosion of X by B are, respectively:

X (+) B = UNION_{b in B} X_b,    (8.4)
X (-) B = INTERSECT_{b in B} X_{-b}.    (8.5)

Dilation expands the set X and erosion shrinks it, each in a way shaped by B. The corresponding operators on 2^E are

delta_B(X) = X (+) B,    (8.6)
eps_B(X) = X (-) B,    (8.7)

which form an adjoint pair on the lattice (2^E, subset-of). Dilation and erosion are dual by complementation:

(X (-) B)^c = X^c (+) B^t,    (8.8)
(X (+) B)^c = X^c (-) B^t.    (8.9)
Erosion followed by dilation, or dilation followed by erosion, yields two further operators that selectively remove details smaller than the structuring element while approximately preserving overall shape. The opening and the closing of X by B are, respectively:

X o B = (X (-) B) (+) B,    (8.10)
X * B = (X (+) B) (-) B.    (8.11)

Opening removes protrusions and isolated details smaller than B; closing fills holes and gaps smaller than B. Both are idempotent:

(X o B) o B = X o B,    (8.12)
(X * B) * B = X * B.    (8.13)
8.3.2
, ,
, ,
, .. , ,
. , .
, .
, .
: (1)
, (2) , (3)
(4) .
,
(coordinate logical filter) (Mertzios & Tsirikolias, 1998)
(amoeba filter) (Lerallut .., 2007). ,
,
.

8.3.3
. ,
Hopfield (. 1) .

(. min) . ,
,
, .. (Ritter
& Gader, 2006 Ritter & Urcid, 2003 Ritter & Wilson, 2000).

8.4
,
. ,
(Pessoa & Maragos, 2000 Sussner & Graa, 2003 Yang & Maragos, 1995).
, ,
/,
,

.
7
. ,
: 1) -
2) .


v b
8.1) a, b , [0,1], b(0,1]. v(x)=x
v a b
a, b =1 =b=0.
() o ( ).
() o Fodor & Roubens.
() o ;
() o () ,
,b[0,1] .
8.2)
( I ,b,c[0,1]):
1. I a, b, c I a, b , I a, c .


2. I a, b , I n a , b b .
3. I 0.5, b , I 0.5, b b .
4. I a, I b, c I a, b , c .
b
a, b .
ab
/ , ,b,c[0,1],
.
b
8.3) a, b , ,b[0,1] ( )
ab
a b a, b . :
1. a b c = a c b c .
2. a b c a c b c .
3. a b c a c b c .
4. a b c a c b c .
8.4) a b a, b , ,b[0,1].
:

1. a1 a2 a3 ... an an , a1 ... an n ,

a a a ... a
1 2 3 n 1 , a1 ... an n .
2. a1 a2 a2 a3 ... an1 an 1 , a1 ... an .
8.5) 8.4 :
1. a,0 0 , (0,1] .
2. a,1 1 a 0,1 .
3. a, b 1 , a b 0 b 1 .
4. a, b , c a, b, c , ,b,c 0,1 .
5. a, a, b = a, a, b , a b .
6. a, a, b b , a b .
7. a, b b a, b 0,1 .

b
8.6) a, b , ,b[0,1] x =1-x.
ab
:

1. a, b a, b a, b 0,1 .
2. a, a a a 0,1 .
3. a, a a a 0,1 .
4. a, b a , a b .
v b
8.7) a, b , v
v a b
v : R R+0 . v ,
=, , 8.1 8.6; .
v a b
8.8) a, b , a (0,1], b [0,1]
va
a, b 1, =b=0.
.
8.9) MATLAB ,
.
8.10) MATLAB ,
.
8.11) MATLAB
8.3.2 .
.
;


Belohlavek, R. (2000). Representation of concept lattices by birectional associative memories. Neural
Computation, 12(10), 2279-2290.
Birkhoff, G. (1967). Lattice Theory (Colloquium Publications 25). Providence, RI: American Mathematical
Society.
Birkhoff, G. & von Neumann, J. (1936). The logic of quantum mechanics. Annals of Mathematics, 37(4), 823-
843.
Bloch, I. & Maitre, H. (1995). Fuzzy mathematical morphologies: a comparative study. Pattern Recognition,
28(9), 1341-1387.
Bloch, I., Heijmans, H. & Ronse, C. (2007). Mathematical morphology. In M. Aiello, I. Pratt-Hartmann & J.
van Benthem (Eds.), Handbook of Spatial Logics (pp. 857-944). Heidelberg, Germany: Springer.
Caro-Contreras, D.E. & Mendez-Vazquez, A. (2013). Computing the concept lattice using dendritical neural
networks. In M. Ojeda-Aciego & J. Outrata (Eds.), CLA (pp. 131-152). University of La Rochelle,
France: Laboratory L3i.
Carpineto, C. & Romano, G. (1996). A lattice conceptual clustering system and its application to browsing
retrieval. Machine Learning, 24(2), 95-122.
Dougherty, E.R. & Sinha, D. (1995). Computational gray-scale mathematical morphology on lattices (a
comparator-based image algebra) part II: image operators. Real-Time Imaging, 1, 283-295.
Edmonds, E.A. (1980). Lattice fuzzy logics. Intl. J. Man-Machine Studies, 13(4), 455-465.
Formica, A. (2006). Ontology-based concept similarity in Formal Concept Analysis. Information Sciences,
176(18), 26242641.

Gaines, B.R. (1978). Fuzzy and probability uncertainty logics. Information and Control, 38, 154-169.
Ganter, B. & Wille, R. (1999). Formal Concept Analysis. Heidelberg, Germany: Springer.
Goguen, J.A. (1967). L-fuzzy sets. Journal of Mathematical Analysis and Applications, 18(1), 145-174.
Grätzer, G. (2003). General Lattice Theory. Basel, Switzerland: Birkhäuser Verlag AG.
Halmos, P. & Givant, S. (1998). Logic as Algebra (The Dolciani Mathematical Expositions 21). Washington,
D.C.: The Mathematical Association of America.
Lerallut, R., Decencire, . & Meyer, F. (2007). Image filtering using morphological amoebas. Image and
Vision Computing, 25(4), 395-404.
Maragos, P. (2005). Lattice image processing: a unification of morphological and fuzzy algebraic systems. J.
Math. Imaging and Vision, 22(2-3), 333-353.
Matheron, G. (1975). Random Sets and Integral Geometry. New York, N.Y.: Wiley & Sons.
Mertzios, B.G. & Tsirikolias, K. (1998). Coordinate logic filters and their applications in image processing
and pattern recognition. Circuits, Systems, and Signal Processing, 17(4), 517-538.
Meyer, F. (1991). Un algorithme optimal pour la ligne de partage des eaux. 8ème Congrès de Reconnaissance des formes et Intelligence Artificielle 2, Lyon, France, 847-857.
Nachtegael, M. & Kerre, E.E. (2001). Connections between binary, gray-scale and fuzzy mathematical
morphologies. Fuzzy Sets and Systems, 124(1), 73-85.
Pessoa, L.F.C. & Maragos, P. (2000). Neural networks with hybrid morphological /rank /linear nodes: a
unifying framework with applications to handwritten character recognition. Pattern Recognition, 33(6),
945-960.
Priss, U. (2000). Lattice-based information retrieval. Knowledge Organization, 27(3), 132-142.
Ritter, G.X. & Gader, P.D. (2006). Fixed Points of Lattice Transforms and Lattice Associative Memories. In
P. Hawkes (Ed.), Advances in Imaging and Electron Physics, 144 (pp. 165-242). Amsterdam, The
Netherlands: Elsevier.
Ritter, G.X. & Urcid, G. (2003). Lattice algebra approach to single-neuron computation. IEEE Transactions on
Neural Networks, 14(2), 282-295.
Ritter, G.X. & Wilson, J.N. (2000). Handbook of Computer Vision Algorithms in Image Algebra (2nd ed.).
Boca Raton, FL: CRC Press.
Rota. G.C. (1997). The many lives of lattice theory. Notices of the American Mathematical Society, 44(11),
1440-1445.
Serra, J. (1982). Image Analysis and Mathematical Morphology. London, U.K.: Academic Press.
Sussner, P. & Graa, M. (Eds.). (2003). Special Issue on: Morphological Neural Networks. J. Math. Imaging
and Vision, 19(2), 79-80.
Xu, Y., Ruan, D., Qin, K. & Liu, J. (2003). Lattice-Valued Logic (Studies in Fuzziness and Soft Computing
132). Heidelberg, Germany: Springer.
Yang, P.-F. & Maragos, P. (1995). Min-max classifiers: learnability, design and application. Pattern
Recognition, 28(6), 879-899.

9:
7
, (R,)
. ,
. R .
R (measurements)
(Kaburlasos, 2006). ,
, , .
.
R 2.500
(6 . ..), ()
. ,
.
(rational).
, - (.
(irrational)) , . 20
-, .
. ,
( R). R .
, , R
. R
.

9.1
.
.

9.1.1 -0: (R,)


(R,) (Davey & Priestley, 1990). ,
o

i , R R {, }, .
(L=[o,i],) ,
o R i R , o<i. x y
, xy, ,
xy. v: L R 0 (L,)
, v(o)=0 v(i)<+. ,
: LL (L,)
, (o)=i, (i)=o. , , (L=[-
1
,+],) v( x) (x)
1 e x
= -x. , (L=[0,1],) v(x) = x (x) =
1-x. , (.) v(.) -,
, .. ..
, v: L R 0 ,
d: LL R 0 , d(x,y)= v(xy) v(xy).

9.1.2 -1: (1,) -1 (1)



(Alefeld & Herzberger, 1983 Moore, 1979 Tanaka & Lee 1998).

, ,
, .
(1,) 1
(L=[o,i],) . 1 [a,b] [c,e]
[a,b][c,e]= [ac,be], acbe [a,b][c,e]= = [i,o], ac>be. ,
1 [a,b] [c,e] [a,b][c,e]= [ac,be].
(L=[o,i],) [i,o].
() v: L R 0 () : LL
(L,), -0,
v1: LL R 0 (LL,) ,
v1([a,b])= v((a))+v(b). , d1(.,.),
(.,.) (.,.) (LL,).
(1,),
(LL,). , (1,).
The metric d1: I1xI1 -> R>=0 of (I1, =<) takes the form

d1([a,b], [c,e]) = [v(th(a^c)) - v(th(a v c))] + [v(b v e) - v(b^e)] = d(th(a), th(c)) + d(b, e).    (9.1)

Likewise, the inclusion measures s_meet: I1xI1 -> [0,1] and s_join: I1xI1 -> [0,1] of (I1, =<) take the forms:

s_meet(x=[a,b], y=[c,e]) = { 1, if x = O;  v1(x ^ y)/v1(x) = [v(th(a v c)) + v(b^e)] / [v(th(a)) + v(b)], otherwise },    (9.2)

s_join(x=[a,b], y=[c,e]) = { 1, if x =< y;  v1(y)/v1(x v y) = [v(th(c)) + v(e)] / [v(th(a^c)) + v(b v e)], otherwise }.    (9.3)
1 , 1.
1 (I1p,). 1: I1p R 0 ,
1([a,b])= v1([a,b])= v((a))+v(b), (I1p,).
, 7 1([a,b])= v(b)-v(a)
.
(.) v(.) . ,
(x)= -x v(.), v(x) = -v(-x), v1([a,b]) = v(b)-
v(a) = 1([a,b]). , d1([a,b],[c,e])= [v(ac)-v(ac)] + [v(be)-v(be)]. ,
v(x)= x L1 (Hamming) d1([a,b],[c,e])= |a-c| + |b-e|.

9.1.3 -2: (2,) -2 (2)


-2 (2) (type-2 (T2) interval) 1.
, 2 [[a1,a2],[b1,b2]], [a1,a2] [b1,b2] 1, .
[a1,a2], [b1,b2](1,) [a1,a2][b1,b2].

(2,) 2
(1,) 1. 2,
[[a1,a2],[b1,b2]] [[c1,c2],[e1,e2]], [[a1,a2],[b1,b2]] [[c1,c2],[e1,e2]]=
[[a1c1,a2c2],[b1e1,b2e2]], [a1c1,a2c2] [b1e1,b2e2] [[a1,a2],[b1,b2]] [[c1,c2],[e1,e2]]= =
[[o,i],[i,o]], [a1c1,a2c2] [b1e1,b2e2].
[[a1,a2],[b1,b2]] [[c1,c2],[e1,e2]]= [[a1c1,a2c2],[b1e1,b2e2]].
(2,) [[o,i],[i,o]].
-1 v1: LL R 0
(LL,), v1([a,b])= v((a))+v(b). , 1: LLLL,
1([a,b])= [b,a], (LL,)
,
. v2: LLLL R 0 ,
(LLLL, ) ,
v2([[a1,a2],[b1,b2]])= v1(1([a1,a2]))+v1([b1,b2])= v(a1)+v((a2))+v((b1))+v(b2). ,
d2(.,.), (.,.) (.,.)
(LLLL, ). (2,),
(LLLL, ). ,
(2,).
d2: 22 R 0 (I2,) :

d2([[a1,a2],[b1,b2]],[[c1,c2],[e1,e2]])= d(a1,c1)+d((a2),(c2))+d((b1),(e1))+d(b2,e2) (9.4)

: I2I2[0,1] : I2I2[0,1] (2,)


:

([[a1 , a2 ],[b1 , b2 ]],[[c1 , c2 ],[e1, e2 ]])

1, b1 b2
0, b1 b2 , b1 d1 b2 d 2

0, b1 b2 , b1 d1 b2 d 2 , [a1 c1 , a2 c2 ] [b1 e1 , b2 e2 ] (9.5)
v ([[a , a ],[b , b ]] [[c , c ],[e , e ]])
2 1 2 1 2 1 2 1 2
,
v2 ([[a1 , a2 ],[b1 , b2 ]])


1, b1 b2

([[a1 , a2 ],[b1 , b2 ]],[[c1 , c2 ],[e1 , e2 ]]) 0, b1 b2 , e1 e2 (9.6)
v2 ([[c1 , c2 ],[e1 , e2 ]])
,
v2 ([[a1 , a2 ],[b1 , b2 ]] [[c1 , c2 ],[e1 , e2 ]])

2 , 2.
2 (I2p,).
2, [[a1,a2],[b1,b2]], 2:

I2p R 0 , 2([[a1,a2],[b1,b2]])= v1([b1,b2])-v1([a1,a2])= v((b1))+v(b2)- v((a1))-v(a2).

9.1.4 -3: (F1,) -1 ( 1)
2 ,
-
. . ,
() . ,
-. , (),
. , , .
A generalized intervals number (GIN) is a function f: [0,1] -> (LxL, =<dual x =<), i.e. an assignment of a generalized interval to every h in [0,1]; let G denote the set of GINs. The set (G, =<), ordered pointwise, is a complete lattice, since (LxL, =<dual x =<) is.
Of special interest is the sublattice of intervals numbers: an intervals number (IN) of type-1 (T1) is a GIN F: [0,1] -> I1 taking only interval values and satisfying (1) h1 <= h2 implies F_{h1} contains F_{h2}, and (2) INTERSECT_{h in X} F_h = F_{sup X} for every X a subset of [0,1].
Let F1 denote the set of INs of type-1, partially ordered pointwise in the lattice (F1, =<) (Kaburlasos & Papadakis, 2006). An IN can represent a distribution: it may be interpreted either as a possibility distribution (a fuzzy set) (Kaburlasos, 2004) or as a probability distribution over an interval of R (Kaburlasos & Kehagias, 2006). Moreover, F1 is a metric lattice (Papadakis & Kaburlasos, 2010).
, F1 (Papadakis & Kaburlasos, 2010).
An IN F admits two equivalent representations: the family of its level intervals F_h, h in [0,1] (the interval-representation), and the function

F(x) = sup{ h in [0,1] : x in F_h }

(the membership-function-representation); Figure 9.1 shows both for an IN E. The join (v) and meet (^) in (F1, =<) are computed level-wise, (F v G)_h = F_h v G_h and (F ^ G)_h = F_h ^ G_h; Figure 9.2 shows the join and the meet of two INs F and G in the membership-function-representation.
For F, G in F1 the lattice order has the following equivalent expressions (Kaburlasos & Kehagias, 2014):

F =< G  iff  (for every h in [0,1]: F_h is contained in G_h)  iff  (for every x in L: F(x) <= G(x)).

The height hgt(E) of a fuzzy set E is the supremum of its membership function, hgt(E) = sup_{x in [o,i]} E(x); for example hgt(E) = 1 in Figure 9.1(a), whereas hgt(F ^ G) = h1 in Figure 9.2(c).

E, hgt(E), ()
, . hgt ( E ) E ( x) . , 9.1() hgt(E)= 1,
x[ o ,i ]

9.2() hgt(FG)= h1.


Inclusion measures s_meet: F1xF1 -> [0,1] and s_join: F1xF1 -> [0,1] follow from their interval counterparts s_meet: I1xI1 -> [0,1] and s_join: I1xI1 -> [0,1] by integration over the levels:

s_meet(E, F) = INT_0^1 s_meet(E_h, F_h) dh,    (9.7)

s_join(E, F) = INT_0^1 s_join(E_h, F_h) dh.    (9.8)

[Figure 9.1: An IN E shown (a) in the membership-function-representation E(x) and (b) in the interval-representation, as the family of its level intervals E_h, h in [0,1].]

[Figure 9.2: Join and meet in the lattice (F1, =<) in the membership-function-representation. (a) Two INs F and G. (b) Their join F v G. (c) Their meet F ^ G, whose height is h1.]
9-5
A most useful consequence is that membership values are themselves inclusion measures: for F in F1 and the trivial IN T = [t,t] (i.e. T_h = [t,t] for all h in [0,1]), Figure 9.3 illustrates the identity (Kaburlasos & Kehagias, 2014):

F(x=t) = s_meet(T, F) = INT_0^1 s_meet(T_h, F_h) dh.    (9.9)

Eq. (9.9) is significant for several reasons. First, it links the membership-function-representation with the interval-representation through the function s_meet: F1xF1 -> [0,1]: the number s_meet(T, F), computed from intervals, equals the membership F(x) of x = t in F. Second, it suggests a two-fold generalization of fuzzy inference: on the one hand s_meet(T, F) - and likewise s_join(T, F) - is defined also for non-trivial INs T, i.e. for fuzzy rather than crisp inputs; on the other hand s_meet(T, F) remains defined, in general nonzero, even when T lies outside the support F_0 of F, where a conventional fuzzy inference system would return zero. Both properties support principled decision-making, in the sense of conditions C1 and C2, beyond the reach of conventional membership functions.
Eq. (9.9) is also significant for function approximation: using s_meet(.,.) and s_join(.,.), as in Section 9.1.6, one may seek approximations of functions f: R^N -> R whose 'atoms' are INs, e.g. Gaussian-like possibility distributions, instead of points.

[Figure 9.3: An IN F and a trivial IN T = [t,t], T_h = [t,t] for h in [0,1]. By Eq. (9.9), the membership value F(x=t) equals s_meet(T, F) = INT_0^1 s_meet(T_h, F_h) dh.]
Furthermore, a metric D1: F1xF1 -> R>=0 between INs of type-1 follows by integrating the interval metric over the levels:

D1(F, G) = INT_0^1 d1(F_h, G_h) dh,    (9.10)

where d1: I1xI1 -> R>=0 is the metric of Eq. (9.1).
The size of a type-1 IN F with height hgt(F) is defined through the interval size Z1: I1 -> R>=0 as

Z1(F) = INT_0^{hgt(F)} Z1(F_h) p(h) dh,    (9.11)

where p(h) is a (probability density) weight function on the levels = [0,1]; unless stated otherwise, the uniform weight p(h) = 1, h in [0,1], is assumed. Note that INs of F1 have, by definition, height hgt(F) = 1.
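Numerically, an IN is conveniently handled through a finite number of levels (cf. Section 9.2.2 below); the following minimal MATLAB sketch approximates D1 of Eq. (9.10) with L = 32 levels, for v(x) = x and th(x) = -x so that d1 of Eq. (9.1) reduces to |a-c| + |b-e|. The two triangular INs are hypothetical.

L  = 32; h = linspace(1/L,1,L)';      % L levels in (0,1]
F  = [5-2*(1-h), 5+3*(1-h)];          % hypothetical triangular IN around 5
G  = [7-1*(1-h), 7+2*(1-h)];          % hypothetical triangular IN around 7
d1 = abs(F(:,1)-G(:,1)) + abs(F(:,2)-G(:,2));   % Eq. (9.1) per level
D1 = mean(d1)                         % numerical approximation of Eq. (9.10)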

9.1.5 -4: (F2,) -2 ( 2)


() -2 (2) (intervals number (IN) type-2 (T2)), T2
, 1. , T2 [U,W] {XF1: U
X W}, U (lower) , W (upper) ( 2 [U,W]).
2 , , (F2,).
2 (Kaburlasos & Papadakis, 2006).
2 , , [U,W]h, h[0,1]
( -), U(x)= {h : x U h } W(x)=
h[0,1]

{h : x Wh } ( --). 9.4
h[0,1]

, , 2. () ()
(F2,) (FG)h = FhGh (FG)h = FhGh , h[0,1]. ,
() () (F2,) 9.5
-. , 9.5() 2 [f,F]
[g,G], f,F,g,GF1, fF gG. [f,F][g,G] = [fg,FG] 9.5(),
(fg)h = h(h1,1]. To 9.5() [f,F][g,G] = [fg,FG], (fg)h
= h(h3,1], (FG)h = h(h4,1].
Inclusion measures s_meet: F2xF2 -> [0,1] and s_join: F2xF2 -> [0,1] for type-2 INs follow from Eqs. (9.7) and (9.8), with the interval functions s_meet: I2xI2 -> [0,1] and s_join: I2xI2 -> [0,1] of Eqs. (9.5) and (9.6), respectively. A metric D2: F2xF2 -> R>=0 between INs of type-2 is given by:

D2(F, G) = INT_0^1 d2(F_h, G_h) dh,    (9.12)

where d2: I2xI2 -> R>=0 is the metric of Eq. (9.4). The size of a type-2 IN F = [U, W] is defined as:

Z2(F) = Z1(W) - Z1(U),    (9.13)

where Z1: F1 -> R>=0 is the type-1 size of Eq. (9.11).
[Figure 9.4: A type-2 IN shown (a) in the membership-function-representation and (b) in the interval-representation; its lower and upper envelopes meet at height 0.6471.]

[Figure 9.5: Join and meet in the lattice (F2, =<) in the membership-function-representation. (a) Two type-2 INs [f, F] and [g, G], with f, F, g, G in F1, f =< F and g =< G. (b) The meet [f, F] ^ [g, G] = [f ^ g, F ^ G]. (c) The join [f, F] v [g, G] = [f v g, F v G].]
9.1.6 Tool-5: products involving F1 and F2
Let G = G1x...xGN, where each (Gi, =<) is the lattice (F1, =<). If vi: Li -> R>=0 and thi: Li -> Li are, per Tool-0, a positive valuation and a dual isomorphism on (Li, =<), then vi1([a,b]) = vi(thi(a)) + vi(b) induces, level-wise and coordinate-wise, inclusion measures on the product lattice (G, =<). Specifically, for F = (F1,...,FN) and E = (E1,...,EN), integrating over the levels h of the N-tuples of intervals in (I1^N, =<) gives (Kaburlasos & Papadakis, 2009):

s_join(F, E) = INT_0^1 [ SUM_{i=1}^{N} vi1((Ei)_h) / SUM_{i=1}^{N} vi1((Fi v Ei)_h) ] dh,    (9.14)

s_meet(F, E) = INT_0^1 [ SUM_{i=1}^{N} vi1((Fi ^ Ei)_h) / SUM_{i=1}^{N} vi1((Fi)_h) ] dh.    (9.15)
Alternative inclusion measures exist on (F1^N, =<). Let si: F1xF1 -> [0,1], i in {1,...,N}, be inclusion measures per Eq. (9.7) or Eq. (9.8). Then the convex combination

sc(F = (F1,...,FN), E = (E1,...,EN)) = l1 s1(F1, E1) + ... + lN sN(FN, EN),

with l1,...,lN > 0 and l1+...+lN = 1, is an inclusion measure on (F, =<), and so are the minimum s(F, E) = min_{i in {1,...,N}} si(Fi, Ei) and the product s(F, E) = PROD_{i=1}^{N} si(Fi, Ei) (Kaburlasos & Kehagias, 2014). Note that s(F, E) by the minimum is bounded above by each si(Fi, Ei), i in {1,...,N}, whereas the convex combination is not.
Likewise, a metric D: F1^N x F1^N -> R>=0 on (F1^N, =<) is given by

D(F = (F1,...,FN), E = (E1,...,EN)) = [ D1(F1, E1)^p + ... + D1(FN, EN)^p ]^{1/p},    (9.16)

with p >= 1, where D1: F1xF1 -> R>=0 is the metric of Eq. (9.10).
Finally, the size of an N-tuple A = (A1,...,AN) of type-1 INs is defined as

Z(A) = p1 Z1(A1) + ... + pN Z1(AN),    (9.17)

in the spirit of size functions on product posets (cf. Chapter 7), with positive weights pi. In particular, each Z1: F1 -> R>=0 in Eq. (9.17) is computed by Eq. (9.11) over the levels = [0,1].
Analogous definitions hold for N-tuples of type-2 INs.
9.1.7 Three-dimensional INs
The type-1/type-2 constructions extend to higher dimensions: a 2-D membership function (a type-1 IN) generalizes to a 3-D one, in the spirit of general type-2 fuzzy sets (Kaburlasos & Papakostas, 2015).
A 3-D IN of type-1 (respectively, type-2) is a function F: [0,1] -> F, where F = F1 (respectively, F = F2), such that z1 <= z2 implies F_{z1} >= F_{z2}. The value of a 3-D IN F at z = z0, i.e. F_{z0}, is called a zSlice; it is a 2-D IN of type-1 (respectively, type-2). For example, Figure 9.6(a) shows a 3-D IN of type-2, F_z, z in [0,1]; its zSlice F_{0.5} is the 2-D type-2 IN of Figure 9.6(b).
Let Fg denote a set of 3-D INs of type-1 or of type-2. The order in (Fg, =<) is E =< F iff E_z =< F_z, z in [0,1]. An inclusion measure s_Fg: FgxFg -> [0,1] on (Fg, =<) is given by:

s_Fg(E, F) = INT_0^1 INT_0^1 I( (E_z)_h, (F_z)_h ) dh dz,    (9.18)

where I(.,.) is one of the inclusion measures of Eqs. (9.2), (9.3), (9.5), (9.6).

[Figure 9.6: (a) A three-dimensional (3-D) IN of type-2, F, drawn over the x and z axes with membership on the h axis. (b) The zSlice F_{0.5} of F, which is a 2-D IN of type-2.]
9.2 Practical issues
This section discusses practical matters concerning the computation and use of INs.

9.2.1 Induction of INs from data
The relation between a possibility (distribution) and a probability (distribution) has been studied extensively (Ralescu & Ralescu, 1984; Wonneberger, 1994); an IN can encode either. A convenient way to induce an IN from a set of measurements is the following. Consider a population of samples as in Figure 9.7(a), with median M = 5.96; Figure 9.7(b) shows the corresponding cumulative distribution function c(.), for which c(5.96) = 0.5. An IN E is then computed so that: for x <= M = 5.96 the membership is E(x) = 2c(x), while for x > M = 5.96 it is E(x) = 2(1 - c(x)). The algorithm CALCIN implements this computation and produces the membership-function-representation E(x) of Figure 9.7(c) from the data. By construction, for any h in [0,1] the level interval E_h contains 100(1-h)% of the samples, leaving 100h% of them outside, split evenly on either side, so the interval shrinks toward the median as h tends to 1 (Kaburlasos, 2004; Kaburlasos, 2006; Kaburlasos & Pachidis, 2014; Kaburlasos & Papadakis, 2006; Papadakis & Kaburlasos, 2010).

[Figure 9.7: (a) A population of samples in [0, 10] with median M = 5.96. (b) The cumulative distribution function c(.), with c(5.96) = 0.5. (c) The IN E computed from c(.) by CALCIN: E(x) = 2c(x) for x <= M and E(x) = 2(1 - c(x)) for x > M.]

[a,b], h[0,1],
[a,b] . ,
(Kaburlasos, 2006), (Dietterich ..,
1997 Long & Tan, 1998 Salzberg, 1991 Samet, 1988)
. , ,
(inclusion measure),
- . ,
(. ) .

9.2.2
, F L2 [a1 b1; a2
b2;; aL bL] , L , h1,
h2,, hL, 0<h1h2hL=1. L=16 L=32
[0,1]. 16 32
- (Kaburlasos &
Kehagias, 2014 Uehara & Fujise, 1993 Uehara & Hirota, 1998).
, 2- 2, [U,W], L4
, L h :
U W. , 3- 2
L4L , L z 2-
2.

9.2.3 An arithmetic of INs
Arithmetic operations between INs are based on interval arithmetic (Kaufmann & Gupta, 1985; Moore & Lodwick, 2003), as follows.
Consider the lattice (L = [-inf, +inf], <=). The set of generalized intervals over it becomes a linear space (Papadakis & Kaburlasos, 2010) under the following operations. Addition of two generalized intervals is defined by

[a, b] + [c, e] = [a+c, b+e],

and multiplication by a real number k by

k[a, b] = [ka, kb].

The definitions extend to GINs level-wise: for F, H in G, (F + H)_h = F_h + H_h, h in [0,1], and, for k in R, (kF)_h = kF_h, h in [0,1]; thus (G, =<) is a linear space as well.
For ordinary intervals, [a, b], [c, e] in I1 implies [a+c, b+e] in I1, so I1 is closed under addition; however, for k < 0 the product k[a, b] satisfies ka > kb and is not an ordinary interval.
1
. (cone)
C , x1,x2C - 1,2 0
(1x1+2x2) C.
1 1.
F1 . , F G
(F + G) = Fh + Gh, h[0,1], F
k kF = kFh, h[0,1]. , ,
F1 (Papadakis & Kaburlasos, 2010).
, -
- (Zhang & Hirota, 1997), .
-,

(Luxemburg & Zaanen, 1971 Vulikh, 1967).

- (Kaburlasos .., 2013)
1 f: RR. ,
1 [a,b] 1 [f(a), f(b)].
F1 , , f: RR,
[f(F)]h = f(Fh), h[0,1]. , 9.8()
, f(x) = (1-e-x)/(1+e-x),
X1, X2 X3 9.8() Y1= f(X1), Y2= f(X2) Y3=
f(X3), , 9.8().

[Figure 9.8: (a) The strictly increasing function f(x) = (1-e^{-x})/(1+e^{-x}) together with three INs X1, X2 and X3 on its domain. (b) The image INs Y1 = f(X1), Y2 = f(X2) and Y3 = f(X3) in [0,1], computed level-wise by [f(F)]_h = f(F_h).]

A note on the consistency property (C2) of inclusion measures follows (cf. Chapter 7), in the product lattice ([0,1], <=)x([0,1], <=) with v(x) = x and th(x) = 1-x on ([0,1], <=), illustrated in Figures 9.9(a) and (b). Both figures show the boxes u = [0.5,0.6]x[0.3,0.4] and w = [0.4,0.9]x[0.2,0.8], with u =< w. In Figure 9.9(a) a box x = [0.15,0.2]x[0.15,0.2] lies outside both u and w, whereas in Figure 9.9(b) the box x = [0.85,0.9]x[0.55,0.6] lies outside u but inside w.
9-13
[Figure: see caption]

Figure 9.9 (a), (b): Boxes u ⪯ w together with a box x outside both u and w; in (a) the box x lies nearer to u, whereas in (b) it lies nearer to w.

Indeed, x∨u = [0.15,0.6]×[0.15,0.4] and x∨w = [0.15,0.9]×[0.15,0.8] in Figure 9.9(a), whereas x∨u = [0.5,0.9]×[0.3,0.6] and x∨w = w in Figure 9.9(b). Consider the inclusion measure σ(.,.). On the one hand,

σ(x,u) = V(u)/V(x∨u) = [v(θ(0.5))+v(0.6)+v(θ(0.3))+v(0.4)] / [v(θ(0.15))+v(0.6)+v(θ(0.15))+v(0.4)] = 2.2/2.7 = 0.8148

and σ(x,w) = 3.1/3.4 = 0.9118; hence σ(x,u) ≤ σ(x,w) in Figure 9.9(a). On the other hand,

σ(x,u) = V(u)/V(x∨u) = [v(θ(0.5))+v(0.6)+v(θ(0.3))+v(0.4)] / [v(θ(0.5))+v(0.9)+v(θ(0.3))+v(0.6)] = 2.2/2.7 = 0.8148

and σ(x,w) = 1; hence σ(x,u) ≤ σ(x,w) in Figure 9.9(b) as well.
A practical advantage of the proposed techniques is the capacity to handle, in a principled manner, both missing data and don't care data in a data dimension. Specifically, a missing datum in a constituent lattice is replaced by the greatest element of intervals, i.e. the (empty) generalized interval [i,o], whereas a don't care datum is replaced by the least element, i.e. the (full) interval [o,i] of the corresponding complete lattice (L=[o,i],≤).
The handling of missing/don't care data is demonstrated next in the lattice (C2, ⪯) of boxes in ([0,1],≤)×([0,1],≤) with v(x) = x and θ(x) = 1−x, shown in Figures 9.10(a) and (b). Both figures display the boxes u = [0.6,0.7]×[0.5,0.6] and w = [0.5,0.9]×[0.4,0.8], where u ⪯ w. In Figure 9.10(a), a missing datum in the second dimension yields the box xm = [0.3,0.3]×[1,0], whereas in Figure 9.10(b) a don't care datum yields the box xd = [0.3,0.3]×[0,1].
It follows that xm∨u = [0.3,0.7]×[0.5,0.6], xm∨w = [0.3,0.9]×[0.4,0.8], xd∨u = [0.3,0.7]×[0,1] and xd∨w = [0.3,0.9]×[0,1]. Consider the inclusion measure σ(.,.). On the one hand,

σ(xm,u) = V(u)/V(xm∨u) = [v(θ(0.6))+v(0.7)+v(θ(0.5))+v(0.6)] / [v(θ(0.3))+v(0.7)+v(θ(0.5))+v(0.6)] = 2.2/2.5 = 0.8800

and σ(xm,w) = 2.8/3.0 = 0.9333; hence σ(xm,u) ≤ σ(xm,w) in Figure 9.10(a). On the other hand,

σ(xd,u) = V(u)/V(xd∨u) = [v(θ(0.6))+v(0.7)+v(θ(0.5))+v(0.6)] / [v(θ(0.3))+v(0.7)+v(θ(0))+v(1)] = 2.2/3.4 = 0.6471

and σ(xd,w) = 2.8/3.6 = 0.7778; hence σ(xd,u) ≤ σ(xd,w) in Figure 9.10(b).

[Figure: see caption]

Figure 9.10 Boxes u ⪯ w together with a box x that encodes a missing or a don't care datum in its second dimension. (a) Missing datum: xm = [0.3,0.3]×[1,0]. (b) Don't care datum: xd = [0.3,0.3]×[0,1].
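The computations above can be reproduced with a few MATLAB lines (a sketch; a box [a,b]×[c,d] is encoded as the row vector [a b c d]):

V   = @(B) (1-B(1)) + B(2) + (1-B(3)) + B(4);  % V = v(th(a))+v(b)+v(th(c))+v(d), with v(x)=x, th(x)=1-x
vee = @(X,U) [min(X(1),U(1)), max(X(2),U(2)), min(X(3),U(3)), max(X(4),U(4))]; % join of boxes
u = [0.5 0.6 0.3 0.4];
x = [0.15 0.2 0.15 0.2];
sigma_xu = V(u)/V(vee(x,u))   % = 2.2/2.7 = 0.8148, as in Fig. 9.9(a)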

9.3
This section demonstrates the tools of this chapter in clustering as well as in classification applications.

9.3.1
This subsection considers algorithms for clustering (unsupervised learning) and for classification (supervised learning) (Kaburlasos & Kehagias, 2014). In particular, the clustering algorithm of Figure 9.11, the training (learning) algorithm of Figure 9.12 and the testing (generalization) algorithm of Figure 9.13 operate on N-tuples of INs in F1^N. A (point) datum in R^N is represented in F1^N by an N-tuple of trivial INs. Algorithms of this family have been proposed as alternatives to classical pattern recognition schemes (Duda et al., 2001), including granular extensions of neural classifiers for clustering/classification (Kaburlasos, 2004; Kaburlasos & Papadakis, 2009; Kaburlasos et al., 2012). For simplicity, nonnegative data are assumed in the following.
It is convenient to employ a positive valuation v(.) and a dual automorphism θ(.) such that v1([a,a]) = v(θ(a))+v(a) = 1 for every trivial interval [a,a]; then the hyperboxes W1,…,W|Ca| computed from the training data Xi (see below) can be interpreted via interval diagonals. Specifically,
v1([a,b]) = v(θ(a))+v(b) = v(b)−[1−v(θ(a))]+1 = [v(b)−v(a)]+1 = δ1([a,b])+1,
where δ1([a,b]) = v(b)−v(a) is the (generalized) diagonal of the interval [a,b]. Hence, for V([a1,b1]×…×[aN,bN]) = v1([a1,b1])+…+v1([aN,bN]), the inclusion measure satisfies
σ(WJ ⪯ Xi) = V(Xi)/V(WJ∨Xi) = [Δ(Xi)+N]/[Δ(WJ∨Xi)+N],
where Δ(.) denotes the sum of the N diagonals δ1(.) of a hyperbox.

9-15
Recall from Figure 9.12 that an input Xi is assimilated by a hyperbox WJ, i.e. WJ is replaced by WJ∨Xi, only if σ(WJ ⪯ Xi) ≥ ρa. For a trivial (point) input Xi it holds Δ(Xi) = 0, hence σ(WJ ⪯ Xi) = N/[Δ(WJ∨Xi)+N], and the condition σ(WJ ⪯ Xi) ≥ ρa is equivalent to Δ(WJ∨Xi) ≤ N(1−ρa)/ρa. In other words, the vigilance parameter ρa bounds from above the total diagonal of a hyperbox WJ∨Xi by N(1−ρa)/ρa. The same considerations carry over from (F1^N, ⪯) to (F2^N, ⪯): an input Xi is assimilated by a hyperbox WJ only if σ(WJ ⪯ Xi) ≥ ρa.
A pair of functions v(.) and θ(.) satisfying v1([a,a]) = v(θ(a))+v(a) = 1 on (L=[0,1],≤) is v(x) = x with θ(x) = 1−x. Likewise, v1([a,a]) = v(θ(a))+v(a) = 1 holds on (L=[−∞,+∞],≤) for v(x) = 1/(1+e^(−λ(x−μ))) with θ(x) = 2μ−x. The function v(x) = 1/(1+e^(−λ(x−μ))) is known as the logistic function in statistics (Kleinbaum & Klein, 2002) and as a sigmoid function in neural computing (Duda et al., 2001). The parameters λ and μ of v(.) may be tuned in an application.

1: Let {W1,…,W|C|} be an initially empty set C ⊆ 2^(F1^N) of clusters, K = |C| the number of clusters, and ρ∈[0,1] a user-defined vigilance parameter.
2: for i = 1 to ntrn do
3:   Present the (next) input Xi ∈ F1^N.
4:   S ← C.
5:   J = argmax_{Wj∈S} [σ(Xi ⪯ Wj)].
6:   while (S ≠ ∅) .AND. (σ(WJ ⪯ Xi) < ρ) do
7:     S ← S\{WJ}.
8:     J = argmax_{Wj∈S} [σ(Xi ⪯ Wj)].
9:   end while // of the while loop
10:  if S = ∅ then
11:    C ← C∪{Xi}.
12:    K ← K+1.
13:  else
14:    WJ ← WJ∨Xi.
15:  end if // of the if statement
16: end for // of the for loop

Figure 9.11 A clustering algorithm for N-tuples of INs.
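A compact MATLAB sketch of the above clustering loop follows (one plausible simplification: boxes in [0,1]^N stored as 2×N matrices [mins; maxs] in a cell array X, with v(x)=x and θ(x)=1−x; rho is an assumed vigilance value):

V   = @(B) sum(1 - B(1,:)) + sum(B(2,:));               % positive valuation of a box
vee = @(P,Q) [min(P(1,:),Q(1,:)); max(P(2,:),Q(2,:))];  % lattice join of two boxes
rho = 0.9;                                              % vigilance (assumed value)
W = X(1);                                               % the first input seeds cluster W1
for i = 2:numel(X)
    s = cellfun(@(Wj) V(Wj)/V(vee(X{i},Wj)), W);        % sigma(Xi <= Wj) per cluster
    [smax, J] = max(s);
    if smax >= rho
        W{J} = vee(W{J}, X{i});                         % assimilate Xi: WJ <- WJ v Xi
    else
        W{end+1} = X{i};                                % vigilance failed: new cluster
    end
end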

9-16
1: Let {W1,…,W|Ca|} be an initially empty set Ca ⊆ 2^(F1^N) of categories, K = |Ca| the number of categories, ρa∈[0,1] a user-defined (baseline) vigilance parameter, B = {b1,…,bL} the set of class labels, and ℓ: F1^N→B the labeling function on Ca.
2: for i = 1 to ntrn do
3:   Present the (next) labeled input (Xi, ℓ(Xi)) ∈ F1^N×B.
4:   S ← Ca.
5:   J = argmax_{Wj∈S} [σ(Xi ⪯ Wj)].
6:   if ℓ(WJ) ≠ ℓ(Xi) then set ρa = σ(WJ ⪯ Xi) + ε.
7:   while (S ≠ ∅) .AND. (σ(WJ ⪯ Xi) < ρa) do
8:     S ← S\{WJ}.
9:     J = argmax_{Wj∈S} [σ(Xi ⪯ Wj)].
10:    if ℓ(WJ) ≠ ℓ(Xi) then set ρa = σ(WJ ⪯ Xi) + ε.
11:  end while // of the while loop
12:  if S = ∅ then
13:    Ca ← Ca∪{Xi} and K ← K+1.
14:    if ℓ(Xi)∉B then (B ← B∪{ℓ(Xi)} and L ← L+1).
15:  else
16:    WJ ← WJ∨Xi.
17:  end if // of the if statement
18: end for // of the for loop

Figure 9.12 A training (learning) algorithm for labeled N-tuples of INs.

1: Given the set {W1,…,WK} = C ⊆ 2^(F1^N) of trained categories, |C| their number, the set B = {b1,…,bL} of class labels and the labeling function ℓ: F1^N→B.
2: for i = 1 to ntst do
3:   Present the (next) labeled input (Xi, bi) ∈ F1^N×B.
4:   J = argmax_{Wj∈C} [σ(Xi ⪯ Wj)].
5:   Classify Xi to the class with label ℓ(WJ).
6: end for // of the for loop
7: Compute the classification accuracy.

Figure 9.13 A testing (generalization) algorithm.

9-17
9.3.2
The tools of this chapter support further extensions. For example, a fuzzy inference system (FIS) may be extended so as to implement a function f: F1^N→F1 (Kaburlasos et al., 2013); such an extension supports reasoning beyond a FIS's rule base, i.e. rule interpolation and extrapolation. Related work on type-2 fuzzy sets and systems appears in (Mendel et al., 2013).
In conclusion, note that INs enable the processing of populations of data samples (per input) rather than of single numbers. Moreover, (statistical) distributions of data may be learned, e.g. by interactive fuzzy lattice reasoning in pattern recognition applications (Kaburlasos & Papakostas, 2015).


1
Exercises
9.1) Show that the functions v(x) = 1/(1+e^(−λ(x−μ))) and θ(x) = 2μ−x satisfy v1([a,a]) = v(θ(a))+v(a) = 1 for any trivial interval [a,a] of the lattice (I1, ⪯) of intervals of type-1 in (L=[−∞,+∞],≤). What is the corresponding result for v(x) = x and θ(x) = 1−x in (L=[0,1],≤)?
9.2) Give (a) a positive valuation v1(.) such that v1(O=[i,o]) = 0 and v1(I=[o,i]) < +∞, and (b) a positive valuation v2(.) such that v2(O=[[o,i],[i,o]]) = 0 and v2(I=[[i,o],[o,i]]) < +∞.
9.3) Show that the function d1: I1×I1→R0+ on the lattice (I1, ⪯) of intervals of type-1, given by Eq. (9.1) as d1([a,b],[c,e]) = d(θ(a),θ(c)) + d(b,e), is a metric.
9.4) The set I2 of intervals of type-2 in a lattice (L,≤) was defined in the text. Show that the function d2: I2×I2→R0+ of Eq. (9.4), d2([[a1,a2],[b1,b2]],[[c1,c2],[e1,e2]]) = d(a1,c1) + d(θ(a2),θ(c2)) + d(θ(b1),θ(e1)) + d(b2,e2), is a metric.
9.5) Consider a lattice (L,≤). Show that the function θ1: L×L→L×L given by θ1([a,b]) = [b,a] is a dual automorphism on the lattice (L×L, ⪯) of generalized intervals.
9.6) Consider the set C of circles in the plane (Kaburlasos, 2006). As illustrated below, (C, ⪯) is a complete lattice in which, for p,q∈C, the join p∨q is the smallest circle that includes both p and q, whereas the meet p∧q is the largest circle included in both p and q.
9-18
[Figure: two circles p and q together with their join p∨q and their meet p∧q]

Hints: (a) Consider two circles p,q∈C. (b) A circle c∈C may be represented by an interval [ac,bc] on a directed straight line: if ac ≤ bc, then c is a (positive) circle with radius (bc−ac)/2, whereas if ac > bc, then c is a negative circle with radius (bc−ac)/2. In this manner, every circle c∈C corresponds to an interval [ac,bc] with diagonal (bc−ac)/2, hence the lattice of circles C may be studied in a lattice ([−∞,+∞]×[−∞,+∞], ⪯) of generalized intervals, where the join and the meet of the intervals [ap,bp] and [aq,bq] of circles p,q∈C are given by
[ap,bp]∨[aq,bq] = [ap∧aq, bp∨bq] and [ap,bp]∧[aq,bq] = [ap∨aq, bp∧bq].
A positive valuation v: R→R0+ and a dual automorphism θ: R→R may then be employed. For simplicity, assume that v(.) and θ(.) act identically in all directions, i.e. isotropically, in the plane R^N = R^2.
Questions:
(a) Compute the join and the meet of the circle centered at (1,1) with radius r = 1 and the circle centered at (2,3) with R = −2.
(b) Give a positive valuation v: C→R0+ such that v(O) = 0 and v(I) < +∞.
9.7) A positive valuation v: L→R0+ on a complete lattice (L=[o,i],≤) satisfies v(o) = 0 and v(i) < +∞. Such a v(.) may be computed as v(x) = ∫_o^x m(t)dt, where m: L→R0+ is a mass function satisfying m(x) ≥ 0 for x∈L.
Compute:
(1) the mass function m1: R→R0+ that corresponds to the positive valuation v1(x) = 1/[1+e^(−λ(x−μ))], and
(2) the mass function m2: [0,1]→[0,1] that corresponds to the positive valuation v2(x) = x.

9-19
9.8) Consider the fuzzy numbers F1 and F2 with membership functions
f1(x) = { 0.5x−0.5 for 1≤x≤3;  −x+4 for 3≤x≤4;  0 otherwise }   and
f2(x) = { −(x−6)²+1 for 5≤x≤6;  −0.5x+4 for 6≤x≤8;  0 otherwise },
respectively. Let θ(x) = −x and v(x) = x.
(a) Determine the INs F1 and F2.
(b) Compute the metric distance D1(F1,F2).
(c) Compute the inclusion measures σ(F1,F2) and σ(F2,F1).
(d) Repeat for θ(x) = −e^x and v(x) = e^(x/2).
9.9) Consider the fuzzy numbers F1 and F2 with membership functions
f1(x) = { x−1 for 1≤x≤2;  −0.5x+2 for 2≤x≤4;  0 otherwise }   and
f2(x) = { −(x−7)²+1 for 6≤x≤8;  0 otherwise },
respectively. Let θ(x) = −2x and v(x) = x+1.
(a) Determine the INs F1 and F2.
(b) Compute the metric distance D1(F1,F2).
(c) Compute the inclusion measures σ(F1,F2) and σ(F2,F1).
(d) Repeat for θ(x) = ln(1/x) and v(x) = 1/(1+e^(-x)).


9.10) Consider the fuzzy numbers F1 and F2 with membership functions
f1(x) = { −(x−2)²+1 for 1≤x≤3;  0 otherwise }   and
f2(x) = { −x²+10x−24 for 4≤x≤6;  0 otherwise },
respectively. Let θ(x) = −x and v(x) = 2x+1.
(a) Determine the INs F1 and F2.
(b) Compute the metric distance D1(F1,F2).
(c) Compute the inclusion measures σ(F1,F2) and σ(F2,F1).
(d) Repeat for θ(x) = e^(-x) and v(x) = ln(x).


9.11) Consider the fuzzy intervals F1 and F2 with membership functions
f1(x) = { x−1 for 1≤x≤2;  1 for 2≤x≤3;  −0.5x+2.5 for 3≤x≤5;  0 otherwise }   and
f2(x) = { x−6 for 6≤x≤7;  1 for 7≤x≤10;  −0.5x+6 for 10≤x≤12;  0 otherwise },
respectively. Let θ(x) = −2x+1 and v(x) = x+1.
(a) Determine the INs F1 and F2.
(b) Compute the metric distance D1(F1,F2).
(c) Compute the inclusion measures σ(F1,F2) and σ(F2,F1).
(d) Repeat for θ(x) = e^(-x) and v(x) = ln(x).

9-20
9.12) Consider the fuzzy intervals F1 and F2 with membership functions
f1(x) = { 2x−1 for 0.5≤x≤1;  −x/4+5/4 for 1≤x≤5;  0 otherwise }   and
f2(x) = { 0.5x−3 for 6≤x≤8;  1 for 8≤x≤10;  −x+11 for 10≤x≤11;  0 otherwise },
respectively. Let θ(x) = −2x+1 and v(x) = e^x.
(a) Determine the INs F1 and F2.
(b) Compute the metric distance D1(F1,F2).
(c) Compute the inclusion measures σ(F1,F2) and σ(F2,F1).

9.13) Consider the fuzzy numbers F1, F2, F3 and F4 with membership functions
f1(x) = { (2/3)x − 1/3 for 0.5≤x≤2;  −(2/3)x + 7/3 for 2≤x≤3.5;  0 otherwise },
f2(x) = { (2/3)x − 10/3 for 5≤x≤6.5;  −(2/3)x + 16/3 for 6.5≤x≤8;  0 otherwise },
f3(x) = { 0.5x for 0≤x≤2;  −0.5x+2 for 2≤x≤4;  0 otherwise }   and
f4(x) = { 0.5x − 9/4 for 4.5≤x≤6.5;  −0.5x + 17/4 for 6.5≤x≤8.5;  0 otherwise },
respectively. Let θ(x) = −2x and v(x) = 3x+2.
(a) Determine the INs F1, F2, F3 and F4.
(b) Compute the metric distances D1(F1,F2) and D1(F3,F4).
(c) Compute the inclusion measures σ(F1,F2) and σ(F3,F4).
(d) Compare the results of (b) with those of (c) and comment.

9.14) Consider the fuzzy numbers F1, F2, F3 and F4 with membership functions
f1(x) = { (2/3)x − 1/3 for 0.5≤x≤2;  −(2/3)x + 7/3 for 2≤x≤3.5;  0 otherwise },
f2(x) = { (2/3)x − 10/3 for 5≤x≤6.5;  −(2/3)x + 16/3 for 6.5≤x≤8;  0 otherwise },
f3(x) = { 0.5x for 0≤x≤2;  −0.5x+2 for 2≤x≤4;  0 otherwise }   and
f4(x) = { 0.5x − 9/4 for 4.5≤x≤6.5;  −0.5x + 17/4 for 6.5≤x≤8.5;  0 otherwise },
respectively. Let θ(x) = e^(−x/2) and v(x) = e^(2x+1).
(a) Determine the INs F1, F2, F3 and F4.
(b) Compute the metric distances D1(F1,F2) and D1(F3,F4).
(c) Compute the inclusion measures σ(F1,F2) and σ(F3,F4).
(d) Compare the results of (b) with those of (c) and comment.

9-21
9.15) Compute the metric distance D1(F,E) between the INs F = {F(h) = [ah,bh], h∈[0,1]} and E = {E(h) = [ch,dh], h∈[0,1]} shown below, for θ(x) = −x and v(x) = √x.

[Figure: trapezoidal membership functions F and E, with breakpoints at 0, 2, 4, 6, 9 and 12 on the x axis]

9.16) Compute the metric distance D1(F,E) between the INs F = {F(h) = [ah,bh], h∈(0,1]} and E = {E(h) = [ch,dh], h∈(0,1]} shown below, for θ(x) = −x and v(x) = √x.

[Figure: membership functions F and E, with breakpoints at 1, 3, 4 and 6 on the x axis]

9.17) Consider the metric distance D1(F,E) between the INs F = {F(h) = [ah,bh], h∈[0,1]} and E = {E(h) = [ch,dh], h∈[0,1]} shown below, for θ(x) = −x and
v(x) = { x for x ≤ k;  kx for x > k }.
Find k > 0 such that D1(F,E) = 21.5.

[Figure: membership functions F and E, with breakpoints at 0, 2, 5, k−1, k and k+2 on the x axis]


Useful integrals:

∫ 1/(ah+b) dh = (1/a)·ln(ah+b) + C0,

∫ √(ah+b) dh = 2√((ah+b)³)/(3a) + C0.

9-22

Alefeld, G. & Herzberger, J. (1983). Introduction to Interval Computation. New York, NY: Academic Press.
Davey, B.A. & Priestley, H.A. (1990). Introduction to Lattices and Order. Cambridge, UK: Cambridge
University Press.
Dietterich, T.G., Lathrop, R.H. & Lozano-Perez, T. (1997). Solving the multiple-instance problem with axis-
parallel rectangles. Artificial Intelligence, 89(1-2), 31-71.
Duda, R.O., Hart, P.E. & Stork, D.G. (2001). Pattern Classification. 2nd ed. New York, N.Y.: John Wiley &
Sons, Inc..
Kaburlasos, V.G. (2004). FINs: lattice theoretic tools for improving prediction of sugar production from
populations of measurements. IEEE Transactions on Systems, Man and Cybernetics Part B, 34(2), 1017-
1030.
Kaburlasos, V.G. (2006). Towards a Unified Modeling and Knowledge-Representation Based on Lattice
Theory. Heidelberg, Germany: Springer, series: Studies in Computational Intelligence, 27.
Kaburlasos, V.G. & Kehagias, A. (2006). Novel fuzzy inference system (FIS) analysis and design based on
lattice theory. part I: working principles. International Journal of General Systems, 35(1), 45-67.
Kaburlasos, V.G. & Kehagias, A. (2014). Fuzzy inference system (FIS) extensions based on lattice theory.
IEEE Transactions on Fuzzy Systems, 22(3), 531-546.
Kaburlasos, V.G. & Pachidis, T. (2014). A Lattice-Computing ensemble for reasoning based on formal fusion
of disparate data types, and an industrial dispensing application. Information Fusion, 16, 68-83.
Kaburlasos, V.G. & Papadakis, S.E. (2006). Granular self-organizing map (grSOM) for structure identification.
Neural Networks, 19(5), 623-643.
Kaburlasos, V.G. & Papadakis, S.E. (2009). A granular extension of the fuzzy-ARTMAP (FAM) neural
classifier based on fuzzy lattice reasoning (FLR). Neurocomputing, 72(10-12), 2067-2078.
Kaburlasos, V.G. & Papakostas, G.A. (2015). Learning distributions of image features by interactive fuzzy
lattice reasoning (FLR) in pattern recognition applications. IEEE Computational Intelligence Magazine,
10(3), 42-51.
Kaburlasos, V.G., Papadakis, S.E. & Amanatiadis, A. (2012). Binary image 2D shape learning and recognition
based on lattice computing (LC) techniques. Journal of Mathematical Imaging and Vision, 42(2-3), 118-
133.
Kaburlasos, V.G., Papakostas, G.A., Pachidis, T. & Athinellis, A. (2013). Intervals numbers (INs) interpolation
/extrapolation. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE
2013), Hyderabad, India, 7-10 July.
Kaufmann, A. & Gupta, M.M. (1985). Introduction to Fuzzy Arithmetic Theory and Applications. New
York, NY: Van Nostrand Reinhold.
Kleinbaum, D.G. & Klein, M. (2002). Logistic Regression A Self-Learning Text. (2nd ed.) (Statistics for
Biology and Health). New York, NY: Springer Science.
Long, P.M. & Tan, L. (1998). PAC learning axis-aligned rectangles with respect to product distributions from
multiple-instance examples. Machine Learning, 30(1), 7-21.
Luxemburg, W.A.J. & Zaanen, A.C. (1971). Riesz Spaces. Amsterdam, The Netherlands: North-Holland.
Mendel, J.M., Hagras, H. & John, R.I. (Eds.) (2013). Special issue on: Type-2 fuzzy sets and systems. IEEE
Transactions on Fuzzy Systems, 21(3), 397-398.
Moore, R.E. (1979). Methods and Applications of Interval Analysis. Philadelphia, PA: SIAM Studies in
Applied Mathematics.
Moore, R. & Lodwick, W. (2003). Interval analysis and fuzzy set theory. Fuzzy Sets and Systems, 135(1), 5-9.
Papadakis, S.E. & Kaburlasos, V.G. (2010). Piecewise-linear approximation of nonlinear models based on
probabilistically/possibilistically interpreted Intervals Numbers (INs). Information Sciences, 180(24),
5060-5076.
Ralescu, A.L. & Ralescu, D.A. (1984). Probability and fuzziness. Information Sciences, 34(2), 85-92.
Salzberg, S. (1991). A nearest hyperrectangle learning method. Machine Learning, 6(3), 251-276.
Samet, H. (1988). Hierarchical representations of collections of small rectangles. ACM Computing Surveys,
20(4), 271-309.
Tanaka, H. & Lee, H. (1998). Interval regression analysis by quadratic programming approach. IEEE
Transactions on Fuzzy Systems, 6(4), 473-481.

9-23
Uehara, K. & Fujise, M. (1993). Fuzzy inference based on families of -level sets. IEEE Transactions on
Fuzzy Systems, 1(2), 111-124.
Uehara, K. & Hirota, K. (1998). Parallel and multistage fuzzy inference based on families of -level sets.
Information Sciences, 106(1-2), 159-195.
Vulikh, B.Z. (1967). Introduction to the Theory of Partially Ordered Vector Spaces. Gronigen, The
Netherlands: Wolters-Noordhoff Scientific Publications.
Wonneberger, S. (1994). Generalization of an invertible mapping between probability and possibility. Fuzzy
Sets and Systems, 64(2), 229-240.
Zhang, K. & Hirota, K. (1997). On fuzzy number lattice (R,). Fuzzy Sets and Systems, 92(1), 113-122.

9-24
Appendix A:
Introduction to MATLAB
The name MATLAB derives from MATrix LABoratory. MATLAB is an interactive environment for numerical, matrix-centered computation, widely used in science and engineering. Functionality for specific application domains (e.g., fuzzy systems, neural networks, etc.) is provided by dedicated toolboxes of MATLAB. This appendix is a brief introduction to MATLAB.

A.1 Getting started with MATLAB

A.1.1 Entering matrices
A row vector:
A=[1 2 3 4]
A=[1,2,3,4];

A matrix (rows separated by semicolons):
A=[1 2 3 4;5 6 7 8]

The transpose of a vector/matrix:
B=A'

A.1.2 Loading and saving data
Load an ASCII data file:
load IN.DAT

Save the variables VarName1 VarName2 VarName3 to the file Filename:

save Filename VarName1 VarName2 VarName3

Save the variable VarName in ASCII format to the file Filename.Extension:

save Filename.Extension VarName -ascii

A.1.3 Display formats and utility commands
Set the number display format:
format short
format long

MATLAB provides online documentation through the command help.

The commands dir and cd list and change the MATLAB working directory, respectively.

clear deletes the variables of the workspace.

clc clears the command window.

-1
A.2 Matrix handling
A.2.1 Basic operations
The element in row i, column j of matrix A:

A(i,j)

Delete the fourth element of vector x:

x(4)=[]

The dimensions of matrix A:

size(A)

The length of a vector A:

length(A)

List the variables of the workspace:

who

List the variables of the workspace with details:

whos

A.2.2 Ranges of values
A range of values is defined by an initial value, a step and a final value, e.g.:
x=0:0.01:5;
x=0:pi/4:pi

Alternatively, linearly spaced points are produced by, e.g.,

k=linspace(-pi,pi,4)

whereas

k=logspace(p1, p2, #points)

produces logarithmically spaced points between 10^p1 and 10^p2; e.g. logspace(1, 2) spans [10^1, 10^2].

A.2.3 Submatrices and logical indexing
Extract a submatrix:

B=A(1:3,1:3)

Given a vector a and a 0/1 vector k of the same length, a(find(k)) returns the elements of a at the positions where k equals 1:

a=[2 4 45 22];
k=[0 1 0 1];
a(find(k))
ans=
4 22

-2
A.2.4 Special matrices
Identity matrix:
eye(4)
eye(3,4)

Matrix of zeros:
zeros(3)

Matrix of ones:
ones(1,5)

Diagonal matrices (two uses of diag):
A=diag([1 2 3 4])
diag(A)

A.2.5 Character matrices
A=str2mat('today','it will','rain')

A.3 Useful built-in variables of MATLAB:
ans      most recent unassigned result.
eps      floating-point relative accuracy.
computer computer type.
pi       the number π.
i, j     the imaginary unit, i.e. the square root of -1.
Inf      infinity.
NaN      not-a-number, e.g. 0/0.
clock    current date/time.
cputime  elapsed CPU time.
date     current date.
realmax  largest positive floating-point number.
realmin  smallest positive floating-point number.
nargin   number of input arguments of a function.
nargout  number of output arguments of a function.

A.3.1 Arithmetic operations
+ addition, - subtraction, * multiplication, / division.

A scalar a times a vector x: a*x.

The matrix product of conforming x and y: x*y.

The element-by-element product of equally sized x and y: x.*y.
Likewise, x./y and x.^y operate element-by-element.

-3
A.3.2 Mathematical functions
Rounding:
- round  round to the nearest integer.
- fix    round toward zero.
- floor  round toward minus infinity.
- ceil   round toward plus infinity.

Rational arithmetic:
- rem    remainder after division.
- rat    rational approximation.
- rats   rational output format.

Integer arithmetic:
- gcd    greatest common divisor.
- lcm    least common multiple.

Complex numbers:
- real   real part.
- imag   imaginary part.
- conj   complex conjugate.
- abs    magnitude.
- angle  phase angle.

Cartesian to polar transformation: cart2pol.
Polar to Cartesian transformation: pol2cart.
Cartesian to spherical transformation: cart2sph.
Spherical to Cartesian transformation: sph2cart.

A.3.3 Examples
Tabulate the logarithm:
x=(1:0.1:5)';
y=log(x);
[x y]

Tabulate the function e^(3t)·sin(5πt):
t=linspace(-2,2,45)';
y=exp(3*t).*sin(5*pi*t);
[t y]

Compute the magnitude of f(s) = (3s²+5s+7)/(s³+5s²+7s+12) for s = jω, ω in [10^-2, 10^2]:
omega=logspace(-2,2);
s=j*omega;
x1=3*s.^2+5*s+7;
x2=s.^3+5*s.^2+7*s+12;
x=x1./x2;
x=abs(x);
[s x]

-4
A.3.4 Relational and logical operations
Relational operators: <, <=, >, >=, ==, ~=.
Logical operators: & (and), | (or), xor, ~ (not).

Example: given
A=[1 2 3 4;5 6 7 8];

the command
P=(rem(A,2)==0)

marks the positions of the even entries:
ans =
0 1 0 1
0 1 0 1

A.4 Matrix functions
Most functions apply element-by-element to matrices. E.g., for
A = [1 3; 4 2]
the command C=exp(A) returns C = [e^1 e^3; e^4 e^2].

A.4.1 Transpose
B=A'
(the transpose operator is the single quote ').

A.4.2 Algebraic operations
+, -, * have their usual matrix meanings.
The inverse of A: inv(A) or A^(-1).

A.4.3 Useful matrix functions
max(x) (min(x)) returns the largest (smallest) element of a vector x; applied to a matrix, it returns the row vector of its column maxima (minima). The overall maximum of a matrix is max(max(A)).
sort(x) sorts in ascending order. [y,ind]=sort(x) additionally returns in ind the indices of the sorted elements y.
sum(x) and mean(x) return the sum and the mean of the elements of x, respectively.
rank(A) returns the rank of A.
det(A) returns the determinant of A.
poly(A) returns the coefficients of the characteristic polynomial p(λ)=|λI-A| of A.
trace(A) returns the trace of A (i.e., the sum of its diagonal elements).
norm(X) returns the norm of X.

expm(A) returns the matrix exponential e^A = Σ_{n≥0} A^n/n!.
-5

Example: sort the rows of the matrix

R = [1 2; 5 1; 3 3; 2 4]

in ascending order of its first column. Note that A=sort(R) returns A = [1 1; 2 2; 3 3; 5 4], i.e. it sorts each column independently. Instead, the commands

[S,i1]=sort(R);
A=R(i1(:,1),:);

return the required

A = [1 2; 2 4; 3 3; 5 1].

A.4.4 Logical tests on matrices
any(x), for a 0/1 vector x, returns 1 if at least one element of x equals 1.
all(x), for a 0/1 vector x, returns 1 if all the elements of x equal 1.

Applied to matrices they operate column-wise; e.g. all(all(A<=0.5)) returns 1 when every entry of A is at most 0.5.

find returns the positions of the non-zero elements of its argument.

Example:
t=0:0.005:20;
y=sin(t);
i=find(abs(y-0.5)<0.05);

returns in i the indices where y differs from 0.5 by less than 0.05 (i.e., approximately where the sine equals 0.5).

A.5 Polynomials
A polynomial is represented by the vector of its coefficients; e.g.
p(s)=s³+4s²+2s+5 is represented by p=[1 4 2 5], and s³+1 by [1 0 0 1].

-6
A.5.1 Polynomial functions
poly(A) returns the coefficients of the characteristic polynomial p(λ)=|λI-A| of a matrix A.
poly also builds a polynomial from its roots; e.g. for the roots 1, -2, j, -j, the command poly([1,-2,j,-j]) returns [1 1 -1 1 -2].
conv multiplies polynomials; e.g. for a=[1 2 3] and b=[4 5 6], c=conv(a,b) returns c=[4 13 28 27 18].
roots(p) returns the roots of p.
polyder(p) returns the derivative dp(s)/ds of p.
The partial-fraction expansion of b(s)/a(s),

b(s)/a(s) = r1/(s-p1) + r2/(s-p2) + … + rn/(s-pn) + k(s),

is computed by [r,p,k]=residue(b,a).
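For example, the expansion of (s+2)/(s²+5s+6) = 1/(s+3) is obtained by:

b = [1 2]; a = [1 5 6];
[r, p, k] = residue(b, a)   % residue 1 at pole -3, residue 0 at pole -2, k = []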

A.5.2 Curve fitting and interpolation
Given data vectors x and y, p=polyfit(x,y,n) returns the coefficients of the n-th degree polynomial that fits the points (xi,yi) in the least-squares sense.
For interpolation, see the functions spline, interpft, interp1 and interp2.
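For example, fitting a cubic polynomial to samples of the sine:

x = (0:0.5:5)';
y = sin(x);
p = polyfit(x, y, 3);    % least-squares cubic fit
yi = polyval(p, 2.3)     % evaluate the fitted polynomial at x = 2.3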

A.6 Graphics
A.6.1 Plotting a function
t=0:0.05:4*pi;
y=sin(t);
plot(t,y)

A.6.2 Multiple curves in one plot
t1=(0:0.1:3);
y1=sin(t1);
t2=(1:0.1:4);
y2=cos(t2);
plot(t1,y1,t2,y2)

A.6.3 Axes control
axis([xmin xmax ymin ymax]) sets the scaling of the x and y axes.
axis('square') makes the plot region square.
axis('equal') uses equal scaling on the x and y axes.
grid adds grid lines.

-7
A.6.5 Three-dimensional graphics
Example of a surface plot:
x=0:0.1:4;
y=-2:0.1:1;
[X,Y]=meshgrid(x,y);
Z=sin(X).*cos(Y);
mesh(X,Y,Z);

view(az,el) sets the viewpoint, where az and el are the azimuth and elevation angles of Figure A.1; the default values are az=-37.5° and el=30°.

[Figure: see caption]

Figure A.1 The azimuth (az) and elevation (el) angles.

A.7 Programming in MATLAB
A.7.1 Control flow
Conditional execution in MATLAB:

if <condition>,
   (block 1)
elseif <condition>,
   (block 2)
else
   (block 3)
end

Loops in MATLAB:

1. for i=1:n,
      (loop body)
   end
2. while <condition>
      (loop body)
   end

-8
A.7.2 Example: script file (M-file)

% An M-file to compute the Fibonacci numbers

f=[1 1]; i=1;
while f(i)+f(i+1)<1000
f(i+2)=f(i)+f(i+1);
i=i+1;
end
plot(f)

A.7.3 Functions
Unlike a script file, a function has its own workspace and is declared as

function [output variables]=FunctionName(input variables)

Example:
function t = trace(a)
% TRACE Sum of diagonal elements.
% TRACE(A) is the sum of the diagonal elements of A,
% which is also the sum of the eigenvalues of A.

t =sum(diag(a));

A.8 Function analysis
Consider the function

humps(x) = 1/((x-0.3)²+0.01) + 1/((x-0.9)²+0.04) - 6

implemented as
function y = humps(x)
y=1./((x-0.3).^2+0.01) + 1./((x-0.9).^2+0.04) - 6;

and plotted by
x=-1:0.01:2;
plot(x,humps(x))

A.8.1 Numerical differentiation and integration
diff returns the differences of consecutive elements, i.e.
diff(x)=[x(2)-x(1), x(3)-x(2), …, x(n)-x(n-1)].
Hence Dy=diff(y)./diff(x) approximates the derivative.
s=quad('humps',0,1) integrates humps numerically from 0 to 1
(here quad('humps',0,1)=29.858).

A.8.2 Zeros and minima of a function
fzero(FunctionName,x0) finds a zero of the (single-variable) function FunctionName near x0.

xz1=fzero('humps',0)
xz2=fzero('humps',1)
(returns xz1 ≈ -0.131 and xz2 ≈ 1.299)

-9

A function is minimized over an interval by
x=fminbnd(FunctionName,x1,x2).

Example (the minimum of the cosine near π):
pic= fminbnd('cos',3,4)

This concludes the brief introduction to MATLAB.

The examples that follow demonstrate MATLAB implementations of methods presented in this book.

Examples for Chapter 1
Two examples demonstrate artificial neural networks, namely function approximation of a y = f(x) and classification.
Specifically, Listing 1 presents MATLAB code that trains a feedforward neural network to approximate the function y = sin(x). The network is trained on samples of the sine function and is then simulated on different (testing) samples; finally, both the testing data and the network output are plotted.
Listing 2 presents MATLAB code for classifying the Iris benchmark data set (Fisher, 1936) with a feedforward neural network. The data set includes three classes (species of the iris plant) with 50 four-dimensional samples each, i.e. 150 samples in total. The data are randomly split into a training set (70% of the data) and a testing set (the remaining 30%); a feedforward network with two hidden layers (of 20 neurons each) is trained, and the classification rate on the testing data is finally displayed.

-10
Listing 1: Approximation of the sine function with a feedforward neural network

%% clear the MATLAB workspace
clear all;
close all;
clc;

%% training data
x_train = 0:pi/16:2*pi;
y_train = sin(x_train);
plot(x_train, y_train); % plot the training data

%% testing data
x_test = 0:pi/7:2*pi;
y_test = sin(x_test);

%% create a feedforward network (1 hidden layer with 10 neurons)
net = newff(x_train,y_train,10);

%% train the network
net = train(net,x_train,y_train);

%% simulate the trained network on the testing data
y_net = net(x_test);

%% plot the testing data together with the network output
figure;
plot(x_test,y_test,x_test,y_net,'--r');
legend('Training Sine','Simulated Sine');

-11
Listing 2: Classification of the Iris benchmark data with a feedforward neural network
%% clear the MATLAB workspace
clear all;
close all;
clc;

%% load the iris data
load fisheriris

%% construct the target class labels
y = zeros(3,length(species));
y(1,1:50) = 1;
y(2,51:100) = 1;
y(3,101:150)= 1;

%% split the data into a training (70%) and a testing (30%) set
inds = randperm(size(meas,1));
x_train = meas(inds(1:105),1:4)';
y_train = y(:,inds(1:105));
x_test = meas(inds(106:end),1:4)';
y_test = y(:,inds(106:end));

%% create a feedforward network (2 hidden layers with 20 neurons each)
net = newff(x_train,y_train,[20 20]);

%% train the network
net = train(net,x_train,y_train);

%% simulate the trained network on the testing data
y_net = net(x_test);

%% evaluate the classification performance
[vals1,y_labels] = max(y_test);
[vals2,y_net_labels] = max(y_net);
cp = classperf(y_labels, y_net_labels);

%% display the classification rate
disp(['Classification Rate(%) = ' num2str(cp.CorrectRate*100)]);

-12
Examples for Chapter 2
Two examples demonstrate fuzzy inference systems.
First, Listing 3 presents MATLAB code for a Mamdani-type fuzzy inference system that controls the speed of a fan from the ambient temperature and humidity. The temperature ranges from 0 °C to 50 °C, the humidity from 20% to 100%, and the fan speed from 120 to 1200 RPM (Rotations Per Minute). Five indicative operating points are shown in Table A.1.

#  Temperature (°C)  Humidity (%)  Fan speed (RPM)
1  8                 20            300
2  15                40            400
3  21                50            600
4  32                70            1000
5  41                90            900

Table A.1 Indicative operating points of the fan controller.

Second, Listing 4 presents MATLAB code for a Sugeno-type fuzzy inference system in a water treatment application, which computes the quantity of the coagulant PAC from two inputs: 1) the PH and 2) the AL (turbidity) of the water. Let x1 denote the PH, x2 denote the AL, and y denote the (crisp) output quantity of PAC. The rules are:

1. If PH is medium and AL is low, then y = 2x1 + x2 + 1.
2. If PH is high and AL is medium, then y = x1 + 2x2 + 3.
3. If PH is low and AL is medium, then y = 2x1 + 3x2 - 1.

The ranges of PH, AL and PAC are [6,8], [40,60] and [50,200], respectively.
-13
Listing 3: A Mamdani-type fuzzy inference system
%% clear the MATLAB workspace
clear all;
close all;
clc;

%% create a Mamdani-type model
model = newfis('MamModel','mamdani');

%% define the input and output variables and their ranges
model = addvar(model,'input','temp',[0 50]);
model = addvar(model,'input','hum',[20 100]);
model = addvar(model,'output','speed',[120 1200]);

%% define the membership functions
model = addmf(model,'input',1,'low','gaussmf',[7.5 0]);
model = addmf(model,'input',1,'medium','gaussmf',[7.5 25]);
model = addmf(model,'input',1,'high','gaussmf',[7.5 50]);
model = addmf(model,'input',2,'low','gaussmf',[10 20]);
model = addmf(model,'input',2,'medium','gaussmf',[10 60]);
model = addmf(model,'input',2,'high','gaussmf',[10 100]);
model = addmf(model,'output',1,'low','gaussmf',[120 120]);
model = addmf(model,'output',1,'medium','gaussmf',[120 660]);
model = addmf(model,'output',1,'high','gaussmf',[120 1200]);

% plot the membership functions
subplot(1,3,1);plotmf(model,'input',1);
subplot(1,3,2);plotmf(model,'input',2);
subplot(1,3,3);plotmf(model,'output',1);

%% define the rule base
rule1 = [1 1 1 1 1];
rule2 = [2 2 2 1 1];
rule3 = [2 2 3 1 1];
rule4 = [3 3 3 1 1];
ruleList = [rule1;rule2;rule3;rule4];
model = addrule(model,ruleList);

% display the rules
showrule(model)

%% display the structure of the model
figure;
plotfis(model);

%% evaluate the model at an input
inputs = [8 20];
outputs = evalfis(inputs, model)

-14
Listing 4: A Sugeno-type fuzzy inference system
%% clear the MATLAB workspace
clear all;
close all;
clc;

%% create a Sugeno-type model
model = newfis('SugModel','sugeno');

%% define the input and output variables and their ranges
model = addvar(model,'input','PH',[6 8]);
model = addvar(model,'input','AL',[40 60]);
model = addvar(model,'output','PAC',[50 200]);

%% define the membership functions
model = addmf(model,'input',1,'low','trimf',[4.2 6 6.8]);
model = addmf(model,'input',1,'medium','trimf',[6.2 7 7.8]);
model = addmf(model,'input',1,'high','trimf',[7.2 8 9.8]);
model = addmf(model,'input',2,'low','trimf',[32 40 48]);
model = addmf(model,'input',2,'medium','trimf',[42 50 58]);
model = addmf(model,'input',2,'high','trimf',[52 60 68]);
model = addmf(model,'output',1,'low','linear',[2 1 1]);
model = addmf(model,'output',1,'medium','linear',[1 2 3]);
model = addmf(model,'output',1,'high','linear',[2 3 -1]);

% plot the membership functions
subplot(1,2,1);plotmf(model,'input',1);
subplot(1,2,2);plotmf(model,'input',2);

%% define the rule base
rule1 = [2 1 1 1 1];
rule2 = [3 2 2 1 1];
rule3 = [1 2 3 1 1];
ruleList = [rule1;rule2;rule3];
model = addrule(model,ruleList);

% display the rules
showrule(model)

%% display the structure of the model
figure;
plotfis(model);

%% evaluate the model at an input
inputs = [7 40];
outputs = evalfis(inputs, model)

-15
Examples for Chapter 3
The following examples demonstrate the minimization of the Rastrigin function, here for n = 2 variables:

f(x) = 10n + Σ_{i=1}^{n} [xi² − 10cos(2πxi)],   −5.12 ≤ xi ≤ 5.12,

whose global minimum f(x) = 0 is attained at xi = 0.
The Rastrigin function (Figure A.2) is a standard optimization benchmark because it is highly multimodal, with a unique global minimum f(x) = 0 at x = 0.

[Figure: see caption]

Figure A.2 The Rastrigin function for two variables.

The minimization of the Rastrigin function is demonstrated, first, by a genetic algorithm (GA) in Listing 5 and, second, by particle swarm optimization (PSO) in Listing 6. Listing 6 is due to (Mostapha Kalami Heris, 2015).
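Note that Listing 6 calls a function rastrigin(x) that is not included here; a minimal definition consistent with the formula above is:

function y = rastrigin(x)
% Rastrigin function: y = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))
n = numel(x);
y = 10*n + sum(x.^2 - 10*cos(2*pi*x));
end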

-16
Listing 5: Minimization of the Rastrigin function by a genetic algorithm (GA)
%% clear the MATLAB workspace
clear all;
close all;
clc;

%% problem definition
nVar = 2; % number of variables
LB = -5.12*ones(1,nVar); % lower bounds
UB = 5.12*ones(1,nVar); % upper bounds
CostFunction = @(x)rastriginsfcn(x); % cost/objective function

%% GA parameters
nPop = 50; % population size
MaxIt = 50; % maximum number of generations
options = gaoptimset('PopulationSize',nPop,'PopInitRange',[LB;UB],'EliteCount',2,'CrossoverFraction',0.8,'Generations',MaxIt,'PlotFcns',{@gaplotbestf});

%% run the genetic algorithm
[x, fval, exitflag, output] = ga(CostFunction,nVar,[],[],[],[],LB,UB,[],options);

%% display the result
disp(['Minimum = ' num2str(fval) ' for x1=' num2str(x(1)) ' x2=' num2str(x(2))]);

-17
Listing 6: Minimization of the Rastrigin function by particle swarm optimization (PSO)
% Copyright (c) 2015, Yarpiz (www.yarpiz.com)
% All rights reserved. Please read the "license.txt" for license terms.
%
% Project Code: YPEA102
% Project Title: Implementation of Particle Swarm Optimization in MATLAB
% Publisher: Yarpiz (www.yarpiz.com)
%
% Developer: S. Mostapha Kalami Heris (Member of Yarpiz Team)
%
% Contact Info: sm.kalami@gmail.com, info@yarpiz.com
%

clc;
clear;
close all;

%% Problem Definition

CostFunction=@(x) rastrigin(x); % Cost Function

nVar=2; % Number of Decision Variables

VarSize=[1 nVar]; % Size of Decision Variables Matrix

VarMin=-5.12; % Lower Bound of Variables


VarMax= 5.12; % Upper Bound of Variables

%% PSO Parameters

MaxIt=50; % Maximum Number of Iterations

nPop=50; % Population Size (Swarm Size)

% PSO Parameters
w=1; % Inertia Weight
wdamp=0.99; % Inertia Weight Damping Ratio
c1=1.5; % Personal Learning Coefficient
c2=2.0; % Global Learning Coefficient

% If you would like to use Constriction Coefficients for PSO,
% uncomment the following block and comment the above set of parameters.

% % Constriction Coefficients
% phi1=2.05;
% phi2=2.05;
% phi=phi1+phi2;
% chi=2/(phi-2+sqrt(phi^2-4*phi));
% w=chi; % Inertia Weight

-18
% wdamp=1; % Inertia Weight Damping Ratio
% c1=chi*phi1; % Personal Learning Coefficient
% c2=chi*phi2; % Global Learning Coefficient

% Velocity Limits
VelMax=0.1*(VarMax-VarMin);
VelMin=-VelMax;

%% Initialization

empty_particle.Position=[];
empty_particle.Cost=[];
empty_particle.Velocity=[];
empty_particle.Best.Position=[];
empty_particle.Best.Cost=[];

particle=repmat(empty_particle,nPop,1);

GlobalBest.Cost=inf;

for i=1:nPop

% Initialize Position
particle(i).Position=unifrnd(VarMin,VarMax,VarSize);

% Initialize Velocity
particle(i).Velocity=zeros(VarSize);

% Evaluation
particle(i).Cost=CostFunction(particle(i).Position);

% Update Personal Best


particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;

% Update Global Best


if particle(i).Best.Cost<GlobalBest.Cost

GlobalBest=particle(i).Best;
end
end

BestCost=zeros(MaxIt,1);

%% PSO Main Loop

for it=1:MaxIt

for i=1:nPop

% Update Velocity
        particle(i).Velocity = w*particle(i).Velocity ...
            +c1*rand(VarSize).*(particle(i).Best.Position-particle(i).Position) ...
            +c2*rand(VarSize).*(GlobalBest.Position-particle(i).Position);

-19
% Apply Velocity Limits
particle(i).Velocity = max(particle(i).Velocity,VelMin);
particle(i).Velocity = min(particle(i).Velocity,VelMax);

% Update Position
        particle(i).Position = particle(i).Position + particle(i).Velocity;

% Velocity Mirror Effect


        IsOutside=(particle(i).Position<VarMin | particle(i).Position>VarMax);
particle(i).Velocity(IsOutside)=-particle(i).Velocity(IsOutside);

% Apply Position Limits


particle(i).Position = max(particle(i).Position,VarMin);
particle(i).Position = min(particle(i).Position,VarMax);

% Evaluation
particle(i).Cost = CostFunction(particle(i).Position);

% Update Personal Best


if particle(i).Cost<particle(i).Best.Cost

particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;

% Update Global Best


if particle(i).Best.Cost<GlobalBest.Cost

GlobalBest=particle(i).Best;
end
end
end

    BestCost(it)=GlobalBest.Cost;

    disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);

w=w*wdamp;

end

BestSol = GlobalBest;

%% Results
figure;
%plot(BestCost,'LineWidth',2);
semilogy(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
disp(['Minimum = ' num2str(GlobalBest.Cost) ' for x1=' num2str(GlobalBest.Position(1)) ' x2=' num2str(GlobalBest.Position(2))]);

-20
Examples for Chapter 9
Listing 7 presents MATLAB code that induces an IN from a population of (measurement) data. Specifically, the code section 'Preprocessing: Load the data' loads the data file area20050301_7a.txt into the variable population1. Then, the function fin(population) computes the membership function of a FIN, returning its domain points in pts1 and the corresponding membership values in val1, whereas the function in(pts1,val1,L) computes the representation of the corresponding IN by L=32 α-cuts in [0,1].
The code section 'plot a population as a histogram' of Listing 7 produces the histogram of Figure A.3(a). The section 'plot a population as a IN membership function' produces the membership function of Figure A.3(b), and the section 'plot a population as a IN set of a-cuts' produces the α-cut representation of Figure A.3(c).
Listing 8 presents the MATLAB function fin, which implements algorithm CALCIN: it computes the membership function of a FIN from a data population vector x, returning the domain points pts and the corresponding values val.
Listing 9 presents the MATLAB function in, which computes from pts and val the representation of the corresponding IN by L uniformly spaced α-cut levels in [0,1].

Listing 7: Induction of an IN from a population of data
clear all
clc

%Preprocessing: Load the data


load area20050301_7a.txt
population1= area20050301_7a;
step= 2;
width= 50;
[pts1, val1] = fin(population1);
L= 32;
[level, acuts1]= in(pts1, val1, L);

whitebg('w')
figure(1)

%plot a population as a histogram


CDF= [];
for i = 0:1:width/step,
CDF= [CDF; sum(pts1<i*step)];
end
CDF;
dif= diff(CDF);
ppdf= (dif/max(CDF));

subplot(3,1,1);
x= step:step:width;

-21
bar(x-step/2,ppdf,0.8,'w');
grid;
axis([0 width 0 0.25]);
set(gca,'XTick',[0:step:width]);
set(gca,'YTick',0:0.05:0.25);

%plot a population as a IN membership function


subplot(3,1,2);
plot(pts1,val1,'k-');
grid;
axis([0 width 0 1.2]);
set(gca,'XTick',[0:step:width]);
set(gca,'YTick',0:0.2:1);

%plot a population as a IN set of a-cuts


subplot(3,1,3);
for i = 1:1:L,
plot([acuts1(i,1) acuts1(i,2)],[level(i) level(i)],'k-');
hold on;
% grid;
end
grid;
axis([0 width 0 1.2]);
set(gca,'XTick',[0:step:width]);
set(gca,'YTick',0:0.2:1);

[Figure: see caption]

Figure A.3 (a) The data population as a histogram. (b) The membership function of the induced IN, computed by algorithm CALCIN. (c) The representation of the IN by its α-cuts.

-22
Listing 8: The function fin, implementing algorithm CALCIN
function [pts, val] = fin(x)
%[pts, val] = fin(x) computes a FIN out of a (real number) data population vector 'x'
%Column vector 'pts' returns the (FIN's) domain points
%Column vector 'val' returns the values on the domain points

epsilon= 0.000001; %used to produce nearby copies of identical numbers

% The following lines "arrange" such that there are no identical points in
% vector 'x' by adding a small positive (real) number where appropriate
x= sort(x);
if min(abs(diff(x))) == 0, %condition for identifying identical numbers
for i=1:length(x),
j= i+1;
while (j <= length(x))&(x(i) == x(j)),
x(j)= x(i) + epsilon;
j= j+1;
end %while
end %for
end %if

% Next, the FIN which corresponds to vector 'x' is calculated


pts= [median(x)];
x_left= x(find(x<median(x)));
x_right= x(find(x>median(x)));
data=[x_left; x_right];

while length( data(1,:) ) ~= 1,


[M,N]= size(data);
rslt= [];
for i=1:M,
x= data(i,:);
pts= [pts; median(x)];
x_left= x(find(x<median(x)));
x_right= x(find(x>median(x)));
rslt=[rslt; x_left; x_right];
end
data= rslt;
end

pts= [pts; data];


pts= sort(pts);
step= power(2, -(log2(length(pts)+1)-1) );
val= [];
for i= 1:ceil(length(pts)/2),
val= [val; i*step];
end
flipped= flipud(val);
val= [val; flipped(2:length(flipped),:)];

-23
Listing 9: The function in, computing the α-cut representation of an IN from a membership function
function [level, acuts] = in(pts, val, L)
% [level, acuts] = in(pts, val, L) computes a IN in terms of its acuts ([ah, bh]) per level
% Column vector 'pts' includes the x-values of a FIN membership function
% Column vector 'val' includes the y-values of a FIN membership function
% Integer number 'L' is the number of a-cut levels

epsilon = 0.000001;

%preprocess so as to create the desired a-cut levels
dmy= linspace(0,1,1+L)';
level= dmy(2:1:1+L);

%if necessary, we augment vectors pts and val at both ends, so as
%the corresponding IN starts from zero
if val(1)> 0
dmy1= pts(1) - (pts(2)-pts(1))*val(1)/(val(2)-val(1));
n= length(pts);
dmy2= pts(n) + (pts(n)-pts(n-1))*val(n)/(val(n-1)-val(n));

pts= [dmy1; pts; dmy2];
val= [0; val; 0];
end

%it follows the main loop that computes a-cuts, one at a loop
acuts = [];
for i = 1:1:L,
indices= find(level(i)-epsilon <= val);
minIndex= min(indices);
maxIndex= max(indices);

ah= pts(minIndex) - (pts(minIndex)-pts(minIndex-1))*(val(minIndex)-level(i))/(val(minIndex)-val(minIndex-1));
bh= pts(maxIndex) + (pts(maxIndex+1)-pts(maxIndex))*(val(maxIndex)-level(i))/(val(maxIndex)-val(maxIndex+1));
acuts= [acuts; ah bh];
end

-24
Worked examples for Chapter 9
Recall Eq. (9.1) for the metric on the lattice (I1, ⪯) of intervals of type-1: d1([a,b],[c,e]) = [v(θ(a∧c))−v(θ(a∨c))] + [v(b∨e)−v(b∧e)] = d(θ(a),θ(c)) + d(b,e). For θ(x) = −x and a positive valuation v(.) satisfying v(x) = −v(−x), e.g. v(x) = x, this yields the L1 (Hamming) distance d1([a,b],[c,e]) = |a−c| + |b−e|. Note, however, that for trivial intervals [a,a] and [b,b], which represent the real numbers a and b, it computes d1([a,a],[b,b]) = |a−b| + |a−b| = 2|a−b|. In order for d1(.,.) to return the usual distance |a−b| between two real numbers a and b, a factor 0.5 may be introduced in Eq. (9.1), i.e. d1([a,b],[c,e]) = 0.5[d(θ(a),θ(c)) + d(b,e)]; this factor is employed in the distances between INs computed below.
Example: Consider the pairs of INs E1 and E2, F1 and F2, G1 and G2 of Figure A.4, with membership functions

e1(x) = { x−2 for 2≤x≤3;  −x+4 for 3≤x≤4;  0 otherwise },
e2(x) = { x−7 for 7≤x≤8;  −x+9 for 8≤x≤9;  0 otherwise },
f1(x) = { 0.5x−0.5 for 1≤x≤3;  −0.5x+2.5 for 3≤x≤5;  0 otherwise },
f2(x) = { 0.5x−3 for 6≤x≤8;  −0.5x+5 for 8≤x≤10;  0 otherwise },
g1(x) = { 0.334x for 0≤x≤3;  −0.334x+2 for 3≤x≤6;  0 otherwise },
g2(x) = { 0.334x−1.667 for 5≤x≤8;  −0.334x+3.667 for 8≤x≤11;  0 otherwise }.

[Figure: see caption]

Figure A.4 The membership functions of the INs E1 and E2 (top), F1 and F2 (middle), and G1 and G2 (bottom).

-25
The α-cuts are (E1)h = [ah,bh] and (E2)h = [ch,dh] with ah = h+2, bh = 4−h, ch = h+7, dh = 9−h; (F1)h = [ah,bh] and (F2)h = [ch,dh] with ah = 2h+1, bh = 5−2h, ch = 2h+6, dh = 10−2h; and (G1)h = [ah,bh] and (G2)h = [ch,dh] with ah = 3h, bh = 6−3h, ch = 3h+5, dh = 11−3h.

The computations below are carried out for two different pairs of functions θ(.) and v(.).
First, consider the complete lattice (L = [0,11], ≤) with θ(x) = 11−x and v(x) = x. Indeed, θ(.) maps the least element (o = 0) to 11 and the greatest element (i = 11) to 0, whereas v(.) satisfies v(o = 0) = 0 and v(i = 11) = 11. For the INs E1 and E2 it follows that

D1(E1,E2) = 0.5 ∫₀¹ 10 dh = 5,

σ(E1,E2) = ∫₀¹ (13−2h)/(18−2h) dh = σ(E2,E1).

The integral is computed numerically: create in MATLAB the ASCII file func.m containing

function y = func(h)
y= (13 - 2*h)./(18 - 2*h);

and type quad('func',0,1) in the Command Window of MATLAB; the returned value of the integral is

ans =
    0.7055

Finally, since the INs E1 and E2 do not overlap, the meet-based inclusion measure equals σ∧(E1,E2) = 0 = σ∧(E2,E1).

For the INs F1 and F2 it follows that

D1(F1,F2) = 0.5 ∫₀¹ 10 dh = 5,

σ(F1,F2) = ∫₀¹ (15−4h)/(20−4h) dh = σ(F2,F1).

The ASCII file func.m now contains

function y = func(h)
y= (15 - 4*h)./(20 - 4*h);

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.7211
-26
Since the INs F1 and F2 do not overlap, σ∧(F1,F2) = 0 = σ∧(F2,F1).

For the INs G1 and G2 it follows that

D1(G1,G2) = 0.5 ∫₀¹ 10 dh = 5,

σ(G1,G2) = ∫₀¹ (17−6h)/(22−6h) dh = σ(G2,G1).

The ASCII file func.m now contains

function y = func(h)
y= (17 - 6*h)./(22 - 6*h);

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.7346

Moreover, since G1 and G2 overlap for h ≤ 1/6,

σ∧(G1,G2) = ∫₀^(1/6) (12−6h)/(17−6h) dh = σ∧(G2,G1).

The ASCII file func.m now contains

function y = func(h)
y= (12 - 6*h)./(17 - 6*h);

and quad('func',0,1/6) in the Command Window of MATLAB returns

ans =
    0.1161

Second, consider the lattice (R, ≤) with θ(x) = −x and the positive valuation v(x) = 1/(1+e^(−(x−3))) shown in Figure A.5. Here θ(.) maps the least element (o = −∞) to +∞ and the greatest element (i = +∞) to −∞, whereas v(.) satisfies v(o = −∞) = 0 and v(i = +∞) = 1.

[Figure: see caption]

Figure A.5 The positive valuation function v(x) = 1/(1+e^(−(x−3))).

For the INs E1 and E2 it follows that
-27
D1(E1,E2) = 0.5 ∫₀¹ [ 1/(1+e^(h+5)) − 1/(1+e^(h+10)) + 1/(1+e^(h−6)) − 1/(1+e^(h−1)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= 0.5*(1./(1+exp(h+5)) - 1./(1+exp(h+10)) + 1./(1+exp(h-6)) - 1./(1+exp(h-1)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.1899
Likewise,

σ(E1,E2) = ∫₀¹ [ 1/(1+e^(h+10)) + 1/(1+e^(h−6)) ] / [ 1/(1+e^(h+5)) + 1/(1+e^(h−6)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(h+10)) + 1./(1+exp(h-6)))./(1./(1+exp(h+5)) + 1./(1+exp(h-6)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.9958

Furthermore,

σ(E2,E1) = ∫₀¹ [ 1/(1+e^(h+5)) + 1/(1+e^(h−1)) ] / [ 1/(1+e^(h+5)) + 1/(1+e^(h−6)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(h+5)) + 1./(1+exp(h-1)))./(1./(1+exp(h+5)) + 1./(1+exp(h-6)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.6242

Since the INs E1 and E2 do not overlap, σ∧(E1,E2) = 0 = σ∧(E2,E1).

For the INs F1 and F2 it follows that
1
1 1 1 1
D1 ( F1 , F2 ) 0.5* 2h4
2 h 9
2 h 7
2h2
dh
0 1 e 1 e 1 e 1 e

ASCII func.m :

-28
function y = func(h)
y= 0.5*(1./(1+exp(2*h+4)) - 1./(1+exp(2*h+9)) + 1./(1+exp(2*h-7)) - 1./(1+exp(2*h-2)))

Command Window MATLAB quad('func',0,1)


:

ans =
0.1440
1 1
1 2 h 9

( F1 , F2 ) 1 e 1 e2 h 7 dh
1 1
0
1 e 2 h 4 1 e 2 h 7

ASCII func.m :

function y = func(h)
y= (1./(1+exp(2*h+9)) + 1./(1+exp(2*h-7)))./(1./(1+exp(2*h+4)) + 1./(1+exp(2*h-7)))

Command Window MATLAB quad('func',0,1)


:

ans =
0.9923
1 1
1 2h4

( F2 , F1 ) 1 e 1 e 2 h 3 dh
1 1
0
1 e 2 h 4 1 e 2 h 7

ASCII func.m :

function y = func(h)
y= (1./(1+exp(2*h+4)) + 1./(1+exp(2*h-3)))./(1./(1+exp(2*h+4)) + 1./(1+exp(2*h-7)))

Command Window MATLAB quad('func',0,1)


:

ans =
0.8709

- F1 F2 ( F1 , F2 ) 0 ( F2 , F1 ) .

For the INs G1 and G2 it follows that

D1(G1,G2) = 0.5 ∫₀¹ [ 1/(1+e^(3h+3)) − 1/(1+e^(3h+8)) + 1/(1+e^(3h−8)) − 1/(1+e^(3h−3)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= 0.5*(1./(1+exp(3*h+3)) - 1./(1+exp(3*h+8)) + 1./(1+exp(3*h-8)) - 1./(1+exp(3*h-3)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.1140

Likewise,

σ(G1,G2) = ∫₀¹ [ 1/(1+e^(3h+8)) + 1/(1+e^(3h−8)) ] / [ 1/(1+e^(3h+3)) + 1/(1+e^(3h−8)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(3*h+8)) + 1./(1+exp(3*h-8)))./(1./(1+exp(3*h+3)) + 1./(1+exp(3*h-8)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.9851

Furthermore,

σ(G2,G1) = ∫₀¹ [ 1/(1+e^(3h+3)) + 1/(1+e^(3h−3)) ] / [ 1/(1+e^(3h+3)) + 1/(1+e^(3h−8)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(3*h+3)) + 1./(1+exp(3*h-3)))./(1./(1+exp(3*h+3)) + 1./(1+exp(3*h-8)));

and quad('func',0,1) in the Command Window of MATLAB returns

ans =
    0.7885

Since G1 and G2 overlap for h ≤ 1/6,

σ∧(G1,G2) = ∫₀^(1/6) [ 1/(1+e^(3h+8)) + 1/(1+e^(3h−3)) ] / [ 1/(1+e^(3h+3)) + 1/(1+e^(3h−3)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(3*h+8)) + 1./(1+exp(3*h-3)))./(1./(1+exp(3*h+3)) + 1./(1+exp(3*h-3)));

and quad('func',0,1/6) in the Command Window of MATLAB returns

ans =
    0.1603

Finally,

σ∧(G2,G1) = ∫₀^(1/6) [ 1/(1+e^(3h+8)) + 1/(1+e^(3h−3)) ] / [ 1/(1+e^(3h+8)) + 1/(1+e^(3h−8)) ] dh.

The ASCII file func.m now contains

function y = func(h)
y= (1./(1+exp(3*h+8)) + 1./(1+exp(3*h-3)))./(1./(1+exp(3*h+8)) + 1./(1+exp(3*h-8)));

and quad('func',0,1/6) in the Command Window of MATLAB returns

ans =
    0.1566

In conclusion, for two INs F and E, the metric distance D1(F,E) depends critically on the underlying functions θ(.) and v(.); so does the inclusion measure σ(F,E). Hence, by tuning θ(.) and v(.), a user may calibrate both D1(F,E) and σ(F,E) in a given application.


Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2), 179-
188.
Mostapha Kalami Heris, S. (2015). Yarpiz. (www.yarpiz.com).

-31
Appendix B:
Validation of models
Recall that a model here is a function f: R^N→T, where T is a nonempty set; typically T is totally ordered, e.g. T = R (regression), or a set of labels (classification). Validation estimates how well a model generalizes on data not used for its design (Devijver & Kittler, 1982).
Popular validation techniques are summarized next.
Holdout validation uses part of the available data to design (train) the model and the remaining data to evaluate (test) it. Specifically, the available data are partitioned into a training data set and a testing data set. Typically, a percentage of the data, e.g. 70%, is used for training and the remaining data are used for testing; training and testing data are not mixed.
A popular index of a model's performance on the testing data is the (least) squares error (LSE); alternative performance indices, e.g. the classification rate, may also be used.

In some cases a third data set, namely the validation data set, is also employed during training, e.g. to decide when training should stop so as to avoid overfitting. A drawback of holdout validation is the variability of its estimates: a different split of the data into training/testing sets may produce a substantially different error estimate.
Cross validation reduces the aforementioned variability by averaging error estimates over several training/testing partitions of the data; its main variants are described next.
Exhaustive cross validation considers all possible partitions of the data into training and testing sets. It yields the most reliable estimates, at a potentially prohibitive computational cost. Non-exhaustive variants are practical compromises, as follows.
Leave-k-out cross validation employs k (of the n available) data for testing and the remaining n−k data for training. Since k out of n data can be selected in

C(n,k) = n!/(k!(n−k)!)

different ways, the model is trained and tested C(n,k) times and the resulting error estimates are averaged.
A popular special case is leave-1-out cross validation, i.e. leave-k-out with k = 1. Since C(n,1) = n, the model is trained n times, each time testing on the single datum left out.
Leave-k-out cross validation quickly becomes computationally prohibitive as k increases, because the number C(n,k) of combinations grows very fast. On the other hand, leave-k-out estimates are (nearly) unbiased, as discussed below.
k-fold cross validation randomly partitions the data into k subsets (folds) of (approximately) equal size. Each fold is used in turn for testing, with the remaining k−1 folds used for training; hence the model is trained and tested k times and the k error estimates are averaged. An advantage over repeated random sub-sampling validation (described below) is that every datum is used exactly once for testing. A popular choice is k = 10 (McLachlan et al., 2004); for k = n, k-fold cross validation reduces to leave-1-out cross validation.
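A minimal k-fold cross-validation sketch in MATLAB follows (trainfn and predictfn are hypothetical user-supplied routines; X holds one datum per row and y the corresponding labels):

k = 10;  n = size(X,1);
fold = mod(randperm(n), k) + 1;          % random assignment of the n data to k folds
acc = zeros(k,1);
for f = 1:k
    te = (fold == f);                    % fold f is the testing set
    model  = trainfn(X(~te,:), y(~te));  % train on the remaining k-1 folds (hypothetical)
    yhat   = predictfn(model, X(te,:));  % predict on the testing fold (hypothetical)
    acc(f) = mean(yhat == y(te));        % classification accuracy on fold f
end
cvAccuracy = mean(acc)                   % the k-fold cross-validated estimate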
Repeated random sub-sampling validation randomly splits the data into a training and a testing set a number, say k, of times; the model is trained and tested on each split and the error estimates are averaged. An advantage over k-fold cross validation is that the proportion of training to testing data does not depend on the number k of repetitions. A disadvantage is that some data may never be selected for testing, whereas others may be selected repeatedly. In other words, the estimate depends on the random splits; leave-k-out cross validation, by contrast, averages over all possible selections.

Validation may additionally support model selection. For example, given, say, 20 candidate models (e.g., of different complexity), the model with the best cross-validated performance may be selected. Validation is likewise used for tuning design (hyper)parameters of a model, e.g. a vigilance parameter, the number of hidden neurons of a neural network, etc.

Statistical properties of the estimates are summarized next.
Let F* denote an estimate of a model's expected error EF. The estimate F* is called unbiased when its expected value equals EF. Cross-validation estimates are nearly unbiased; e.g., in leave-1-out cross validation the model is each time trained on n−1 of the n data, hence it is nearly identical to the model trained on all the data, and F* is a nearly unbiased estimate of EF (Efron & Tibshirani, 1997; Stone, 1977). Nevertheless, even a (nearly) unbiased estimate may vary considerably around EF.
Confidence intervals for F* may also be computed (Efron & Tibshirani, 1997); in practice, computationally simpler estimations are often carried out (Kohavi, 1995), as described next.
The bootstrap estimates the distribution of a statistic (e.g., of an error estimate) by re-sampling the available data (Efron, 1979; Efron & Tibshirani, 1993). It is typically employed when the available sample is small, or when the distribution of the statistic of interest is unknown or difficult to derive analytically.
The technique proceeds as follows. Let a data sample of size N be available. A bootstrap sample, also of size N, is drawn from the original sample uniformly at random with replacement; hence, a datum of the original sample may appear in a bootstrap sample more than once, or not at all. By re-sampling, a large number of bootstrap samples (typically 1,000 to 10,000) is drawn and the statistic of interest is computed on each one; the resulting collection of values constitutes the bootstrap estimate of the statistic's distribution, from which, e.g., its standard deviation may be estimated. The spread of this distribution quantifies the reliability of the statistic.
An example follows. Consider an experiment with two possible outcomes, 0 and 1, repeated 10 times, producing the sample X = x1, x2, …, x10 of 10 values (0 or 1). The mean of the 10 values is

x̄ = (x1 + x2 + … + x10)/10.

How reliable is x̄? Consider the bootstrap sample X1* = x2, x1, x10, x10, x3, x4, x6, x7, x1, x9, i.e. 10 values drawn from X uniformly at random with replacement, and let μ1* denote the mean of X1*. Likewise, let μ2* denote the mean of a second bootstrap sample X2*, and so on for, say, 100 bootstrap samples. The resulting values μ1*, μ2*, …, μ100* constitute a bootstrap estimate of the distribution of the sample mean, and their spread (e.g., their standard deviation) quantifies the reliability of x̄.
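The example above translates into a few MATLAB lines (a sketch, assuming B = 100 bootstrap samples):

X  = double(rand(10,1) > 0.5);   % ten 0/1 outcomes (assumed sample)
B  = 100;
mu = zeros(B,1);
for b = 1:B
    Xb    = X(randi(10,10,1));   % draw 10 values from X with replacement
    mu(b) = mean(Xb);            % the mean of the b-th bootstrap sample
end
std(mu)                          % bootstrap estimate of the spread of the sample mean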
More formal comparisons of models may be carried out by statistical hypothesis testing.


Devijver, P.A. & Kittler, J. (1982). Pattern Recognition: A Statistical Approach. London, G.B.: Prentice-Hall.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7(1), 1-26.
Efron, B. & Tibshirani, R. (1993). An Introduction to the Bootstrap. Boca Raton, FL: Chapman & Hall/CRC.
Efron, B. & Tibshirani, R. (1997). Improvements on cross-validation: The .632 + Bootstrap Method. J.
American Statistical Association, 92(438), 548-560.
Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. In:
Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 2 (12): 1137-
1143. San Mateo, CA: Morgan Kaufmann.
McLachlan, G.J., Do, K.-A. & Ambroise, C. (2004). Analyzing Microarray Gene Expression Data. New
York, N.Y.: Wiley.
Stone, M. (1977). Asymptotics for and against cross-validation. Biometrika, 64(1), 29-35.

-3
Appendix Γ:
Epilogue
This book presented both established and more recent techniques of computational intelligence. Pointers for further study are collected below in seven areas: (1) machine learning, (2) cognitive maps, (3) image processing, (4) prediction, (5) control, (6) decision support and (7) feature selection. The cited works are indicative starting points rather than an exhaustive survey; the reader is encouraged to combine them with the techniques presented in this book.

Γ.1 Machine learning
Machine learning is a vast field (Bishop, 2006; Shalev-Shwartz & Ben-David, 2014). Among its many paradigms, reinforcement learning trains software agents by rewarding desirable behavior (Sutton & Barto, 1998). Kaburlasos (2006) describes lattice-theoretic modeling and knowledge representation. Techniques such as the above may be combined with the techniques of this book.
Γ.2 Cognitive maps
Fuzzy cognitive maps (FCMs) model causal relations among concepts; applications include the forecasting of artificial emotions (Salmeron, 2012) and medical decision making (Papageorgiou, 2011). FCMs have also been employed in pattern recognition (Papakostas et al., 2008), including lattice-computing extensions (Jamshidi & Kaburlasos, 2014; Kaburlasos et al., 2013). Moreover, evolutionary computation has been used for the automatic generation of neural network architectures (Vonk et al., 1997).
Γ.3 Image processing
Computational intelligence techniques have been applied to image processing, e.g. moment-based image watermarking via genetic optimization (Papakostas et al., 2014), filter modeling using the gravitational search algorithm (Rashedi et al., 2011) and edge detection using ant algorithms (Nezamabadi-pour et al., 2006).
Γ.4 Prediction
For time-series prediction, see the predictive modular neural networks of (Petridis & Kehagias, 1998), as well as the evolutionary learning of fuzzy grey cognitive maps for forecasting multivariate, interval-valued time series (Froelich & Salmeron, 2014).
-1
Γ.5 Control
Applications in control and engineering design include large-scale design optimization by the artificial bee colony algorithm (Akay & Karaboga, 2012) and robust control with genetic algorithms (Jamshidi et al., 2003). Fuzzy cognitive maps have been employed in supervisory control (Stylios & Groumpos, 2000; Kottas et al., 2006), in modeling IT project success (Rodriguez-Repiso et al., 2007) and in cotton yield management in precision farming (Papageorgiou et al., 2009).

Γ.6 Decision support
Decision-support applications include geographic information systems (Liu & Satur, 1999) and urban design (Xirogiannis et al., 2004).

Γ.7 Feature selection
Finally, regarding data preprocessing, the recognition performance of moment features can be improved by feature selection (Papakostas, 2015); genetic approaches have also been proposed for structure identification problems (Papadakis et al., 2005).


Akay, B. & Karaboga, D. (2012). Artificial bee colony algorithm for large-scale problems and engineering
design optimization. J. of Intel. Manuf., 23(4) 1001-1014.
Bishop, C.M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics).
Heidelberg, Germany: Springer.
Froelich, W. & Salmeron, J.L. (2014). Evolutionary learning of fuzzy grey cognitive maps for the forecasting
of multivariate, interval-valued time series. International Journal of Approximate Reasoning, 55(6),
1319-1335.
Jamshidi, M., Coelho, L.D.S., Krohling, R.A. & Fleming, P.J. (2003). Robust Control Systems with Genetic
Algorithms. Boca Raton, Florida: CRC Press.
Jamshidi, Y. & Kaburlasos, V.G. (2014). gsaINknn: A GSA optimized, lattice computing knn classifier.
Engineering Applications of Artificial Intelligence, 35, 277-285.
Kaburlasos, V.G. (2006). Towards a Unified Modeling and Knowledge-Representation Based on Lattice
Theory (Studies in Computational Intelligence 27). Heidelberg, Germany: Springer.
Kaburlasos, V.G., Papadakis, S.E. & Papakostas, G.A. (2013). Lattice computing extension of the FAM
neural classifier for human facial expression recognition. IEEE Transactions on Neural Networks &
Learning Systems, 24(10), 1526-1538.
Kottas, T.L., Boutalis, Y.S. & Karlis, A.D. (2006). New maximum power point tracker for PV arrays using
fuzzy controller in close cooperation with fuzzy cognitive networks. IEEE Transactions on Energy
Conversion, 21(3), 793-803.
Liu, Z. & Satur, R. (1999). Contextual fuzzy cognitive map for decision support in geographic information
systems. IEEE Transactions on Fuzzy Systems, 7(5), 495-507.
Nezamabadi-pour, H., Saryazdi, S. & Rashedi, E. (2006). Edge detection using ant algorithms. Soft
Computing, 10(7), 623-628.
Papadakis, S.E., Tzionas, P., Kaburlasos, V.G. & Theocharis, J.B. (2005). A genetic based approach to the Type
I structure identification problem. Informatica, 16(3), 365-382.
Papageorgiou, E.I. (2011). A new methodology for Decisions in Medical Informatics using fuzzy cognitive
maps based on fuzzy rule-extraction techniques. Applied Soft Computing, 11(1), 500-513.

-2
Papageorgiou, E.I., Markinos, A. & Gemptos, T. (2009). Application of fuzzy cognitive maps for cotton yield
management in precision farming. Expert Systems with Applications, 36(10), 12399-12413.
Papakostas, G.A. (2015). Improving the Recognition Performance of Moment Features by Selection. In U.
Stanczyk & L.C. Jain (Eds.), Feature Selection for Data and Pattern Recognition (Studies in
Computational Intelligence 584, pp. 305-327). Berlin, Germany: Springer.
Papakostas, G.A., Boutalis, Y.S., Koulouriotis, D.E. & Mertzios, B.G. (2008). Fuzzy cognitive maps for
pattern recognition applications. International Journal of Pattern Recognition and Artificial
Intelligence, 22(8), 1461-1468.
Papakostas, G.A., Tsougenis, E.D., Koulouriotis, D.E. & Tourassis, V.D. (2014). Moment-based local image
watermarking via genetic optimization. Applied Mathematics and Computation, 227, 222-236.
Petridis, V. & Kehagias. A. (1998). Predictive modular neural networks Applications to time series. Norwell,
MA: Kluwer Academic Publishers.
Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. (2011). Filter modeling using gravitational search
algorithm. Engineering Applications of Artificial Intelligence, 24(1), 117-122.
Rodriguez-Repiso, L., Setchi, R. & Salmeron, J.L. (2007). Modelling IT projects success with Fuzzy
Cognitive Maps. Expert Systems with Applications, 32(2), 543-559.
Shalev-Shwartz, S. & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms.
New York, NY: Cambridge University Press.
Salmeron, J.L. (2012). Fuzzy cognitive maps for artificial emotions forecasting. Applied Soft Computing,
12(12), 3704-3710.
Stylios, C.D. & Groumpos, P.P. (2000). Fuzzy cognitive maps in modeling supervisory control systems.
Journal of Intelligent and Fuzzy Systems, 8(2), 83-98.
Sutton, R.S. & Barto, A.G. (1998). Reinforcement Learning An Introduction. Cambridge, MA: The MIT
Press.
Vonk, E., Jain, L.C. & Johnson, R.P. (1997). Automatic Generation of Neural Network Architecture Using
Evolutionary Computation. Singapore: World Scientific.
Xirogiannis, G., Stefanou, J. & Glykas, M. (2004). A fuzzy cognitive map approach to support urban design.
Expert Systems with Applications, 26(2), 257-268.

-3

