Uncertainty Theory
Third Edition
Baoding Liu
Uncertainty Theory Laboratory
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu
Contents

Preface

1 Probability Theory
1.1 Probability Space
1.2 Random Variables
1.3 Probability Distribution
1.4 Independence
1.5 Identical Distribution
1.6 Expected Value
1.7 Variance
1.8 Moments
1.9 Critical Values
1.10 Entropy
1.11 Distance
1.12 Inequalities
1.13 Convergence Concepts
1.14 Conditional Probability
1.15 Stochastic Process
1.16 Stochastic Calculus
1.17 Stochastic Differential Equation

2 Credibility Theory
2.1 Credibility Space
2.2 Fuzzy Variables
2.3 Membership Function
2.4 Credibility Distribution
2.5 Independence
2.6 Identical Distribution
2.7 Expected Value
2.8 Variance
2.9 Moments
2.10 Critical Values
2.11 Entropy
2.12 Distance
2.13 Inequalities
2.14 Convergence Concepts
2.15 Conditional Credibility
2.16 Fuzzy Process
2.17 Fuzzy Calculus
2.18 Fuzzy Differential Equation

3 Chance Theory
3.1 Chance Space
3.2 Hybrid Variables
3.3 Chance Distribution
3.4 Expected Value
3.5 Variance
3.6 Moments
3.7 Independence
3.8 Identical Distribution
3.9 Critical Values
3.10 Entropy
3.11 Distance
3.12 Inequalities
3.13 Convergence Concepts
3.14 Conditional Chance
3.15 Hybrid Process
3.16 Hybrid Calculus
3.17 Hybrid Differential Equation

4 Uncertainty Theory
4.1 Uncertainty Space
4.2 Uncertain Variables
4.3 Identification Function
4.4 Uncertainty Distribution
4.5 Expected Value
4.6 Variance
4.7 Moments
4.8 Independence
4.9 Identical Distribution
4.10 Critical Values
4.11 Entropy
4.12 Distance
4.13 Inequalities
4.14 Convergence Concepts
4.15 Conditional Uncertainty
4.16 Uncertain Process
4.17 Uncertain Calculus

B Classical Measures
C Measurable Functions
D Lebesgue Integral
E Euler-Lagrange Equation
G Uncertainty Relations

Bibliography
Index
Preface
There are various types of uncertainty in the real world. Randomness is a basic type of objective uncertainty, and probability theory is a branch of mathematics for studying the behavior of random phenomena. The study of probability theory was started by Pascal and Fermat (1654), and an axiomatic foundation of probability theory was given by Kolmogoroff (1933) in his Foundations of Probability Theory. Probability theory has been widely applied in science and engineering. Chapter 1 introduces probability theory.

Fuzziness is a basic type of subjective uncertainty initiated by Zadeh (1965). Credibility theory is a branch of mathematics for studying the behavior of fuzzy phenomena. The study of credibility theory was started by Liu and Liu (2002), and an axiomatic foundation of credibility theory was given by Liu (2004) in his Uncertainty Theory. Chapter 2 introduces credibility theory.

Sometimes fuzziness and randomness appear simultaneously in a system. A hybrid variable was proposed by Liu (2006) as a tool to describe quantities with both fuzziness and randomness. Both fuzzy random variables and random fuzzy variables are instances of hybrid variables. In addition, Li and Liu (2007) introduced the concept of chance measure for hybrid events. After that, chance theory developed steadily. Essentially, chance theory is a hybrid of probability theory and credibility theory. Chapter 3 presents chance theory.

In order to deal with general uncertainty, Liu (2007) founded an uncertainty theory in his Uncertainty Theory, making it a branch of mathematics based on the normality, monotonicity, self-duality, and countable subadditivity axioms. Probability theory, credibility theory, and chance theory are three special cases of uncertainty theory. Chapter 4 is devoted to uncertainty theory.

For this new edition the entire text has been totally rewritten. More importantly, uncertain processes and uncertain calculus, as well as uncertain differential equations, have been added.

The book is suitable for mathematicians, researchers, engineers, designers, and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, and management science. Readers will learn the axiomatic approach of uncertainty theory, and find this work a stimulating and useful reference.
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
March 5, 2008
Chapter 1
Probability Theory
Probability measure is essentially a set function (i.e., a function whose argument is a set) satisfying normality, nonnegativity and countable additivity
axioms. Probability theory is a branch of mathematics for studying the behavior of random phenomena. The emphasis in this chapter is mainly on
probability space, random variable, probability distribution, independence,
identical distribution, expected value, variance, moments, critical values, entropy, distance, convergence almost surely, convergence in probability, convergence in mean, convergence in distribution, conditional probability, stochastic
process, renewal process, Brownian motion, stochastic calculus, and stochastic differential equation. The main results in this chapter are well-known.
For this reason the credit references are not provided.
1.1 Probability Space

For every countable sequence of mutually disjoint events \(\{A_i\}\), the probability measure satisfies the countable additivity axiom
\[
\Pr\Bigl\{\bigcup_{i=1}^{\infty} A_i\Bigr\} = \sum_{i=1}^{\infty} \Pr\{A_i\}. \tag{1.1}
\]
Definition 1.1 The set function Pr is called a probability measure if it satisfies the normality, nonnegativity, and countable additivity axioms.
Example 1.1: Let \(\Omega = \{\omega_1, \omega_2, \ldots\}\), and let \(\mathcal{A}\) be the power set of \(\Omega\). Assume that \(p_1, p_2, \ldots\) are nonnegative numbers such that \(p_1 + p_2 + \cdots = 1\). Define a set function on \(\mathcal{A}\) as
\[
\Pr\{A\} = \sum_{\omega_i \in A} p_i, \qquad A \in \mathcal{A}. \tag{1.2}
\]
Then \(\Pr\) is a probability measure.
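The discrete construction of Example 1.1 can be sketched numerically; the three-point universe and the weights below are hypothetical, not from the book:

```python
# Sketch of Example 1.1: a discrete probability measure Pr{A} = sum of p_i
# over outcomes in A (equation (1.2)). Outcome names and weights are ad hoc.
from itertools import chain, combinations

universe = ("w1", "w2", "w3")
p = {"w1": 0.5, "w2": 0.25, "w3": 0.25}   # nonnegative, sums to 1

def pr(event):
    """Pr{A} = sum of p_i over the outcomes in A."""
    return sum(p[w] for w in event)

# All events: the power set of the universe
events = list(chain.from_iterable(combinations(universe, r) for r in range(4)))

# Normality and nonnegativity
assert abs(pr(universe) - 1.0) < 1e-12
assert all(pr(A) >= 0 for A in events)
# Additivity on disjoint events
assert abs(pr(("w1", "w2")) - (pr(("w1",)) + pr(("w2",)))) < 1e-12
```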
Theorem (Probability Continuity Theorem): Let \(\Pr\) be a probability measure, and let \(A_1, A_2, \ldots\) and \(A\) be events with
\[
A = \lim_{i\to\infty} A_i. \tag{1.3}
\]
Then \(\lim_{i\to\infty} \Pr\{A_i\} = \Pr\{A\}\).

Proof: First suppose the sequence is increasing, and write \(A_0 = \emptyset\). Then the sets \(A_i \setminus A_{i-1}\) are mutually disjoint with
\[
\bigcup_{i=1}^{\infty} (A_i \setminus A_{i-1}) = A, \qquad \bigcup_{i=1}^{k} (A_i \setminus A_{i-1}) = A_k.
\]
It follows from the countable additivity axiom that
\[
\Pr\{A\} = \Pr\Bigl\{\bigcup_{i=1}^{\infty} (A_i \setminus A_{i-1})\Bigr\}
= \sum_{i=1}^{\infty} \Pr\{A_i \setminus A_{i-1}\}
= \lim_{k\to\infty} \sum_{i=1}^{k} \Pr\{A_i \setminus A_{i-1}\}
= \lim_{k\to\infty} \Pr\{A_k\}.
\]
A decreasing sequence is handled by taking complements. For a general sequence with \(A_i \to A\), note that
\[
\bigcap_{i=k}^{\infty} A_i \subset A_k \subset \bigcup_{i=k}^{\infty} A_i,
\qquad\text{so}\qquad
\Pr\Bigl\{\bigcap_{i=k}^{\infty} A_i\Bigr\} \le \Pr\{A_k\} \le \Pr\Bigl\{\bigcup_{i=k}^{\infty} A_i\Bigr\}.
\]
Note that
\[
\bigcap_{i=k}^{\infty} A_i \uparrow A, \qquad \bigcup_{i=k}^{\infty} A_i \downarrow A. \tag{1.5}
\]
Applying the monotone cases to both sides and letting \(k \to \infty\) completes the proof.
1.2 Random Variables

Definition 1.5: A random variable is a measurable function \(\xi\) from a probability space \((\Omega, \mathcal{A}, \Pr)\) to the set of real numbers, i.e., for any Borel set \(B\) of real numbers, the set
\[
\{\xi \in B\} = \{\omega \in \Omega \mid \xi(\omega) \in B\} \tag{1.6}
\]
is an event.

Similarly, an \(n\)-dimensional random vector is a measurable function from a probability space \((\Omega, \mathcal{A}, \Pr)\) to \(\Re^n\), i.e., for any Borel set \(B\) of \(\Re^n\), the set \(\{\xi \in B\} = \{\omega \in \Omega \mid \xi(\omega) \in B\}\) is an event.
Theorem 1.3: The vector \((\xi_1, \xi_2, \ldots, \xi_n)\) is a random vector if and only if \(\xi_1, \xi_2, \ldots, \xi_n\) are random variables.

Proof: Write \(\xi = (\xi_1, \xi_2, \ldots, \xi_n)\). Suppose that \(\xi\) is a random vector on the probability space \((\Omega, \mathcal{A}, \Pr)\). For any Borel set \(B\) of \(\Re\), the set \(B \times \Re^{n-1}\) is also a Borel set of \(\Re^n\). Thus we have
\[
\{\omega \mid \xi_1(\omega) \in B\}
= \{\omega \mid \xi_1(\omega) \in B,\ \xi_2(\omega) \in \Re, \ldots, \xi_n(\omega) \in \Re\}
= \{\omega \mid \xi(\omega) \in B \times \Re^{n-1}\} \in \mathcal{A}
\]
which implies that \(\xi_1\) is a random variable. A similar argument shows that \(\xi_2, \xi_3, \ldots, \xi_n\) are random variables.

Conversely, suppose that \(\xi_1, \xi_2, \ldots, \xi_n\) are all random variables on the probability space \((\Omega, \mathcal{A}, \Pr)\). We define
\[
\mathcal{B} = \bigl\{ B \subset \Re^n \bigm| \{\omega \in \Omega \mid \xi(\omega) \in B\} \in \mathcal{A} \bigr\}.
\]
For any product of open intervals we have
\[
\xi^{-1}\Bigl(\prod_{i=1}^{n} (a_i, b_i)\Bigr) = \bigcap_{i=1}^{n} \bigl\{\omega \mid \xi_i(\omega) \in (a_i, b_i)\bigr\} \in \mathcal{A},
\]
and for any sequence of sets \(B_i \in \mathcal{B}\),
\[
\xi^{-1}\Bigl(\bigcup_{i=1}^{\infty} B_i\Bigr) = \bigcup_{i=1}^{\infty} \{\omega \mid \xi(\omega) \in B_i\} \in \mathcal{A}. \tag{1.10}
\]
Thus \(\mathcal{B}\) is a \(\sigma\)-algebra containing all open intervals of \(\Re^n\), and hence contains all Borel sets. Therefore \(\xi\) is a random vector.
1.3 Probability Distribution

Definition: The probability distribution \(\Phi: \Re \to [0, 1]\) of a random variable \(\xi\) is defined by
\[
\Phi(x) = \Pr\bigl\{\omega \in \Omega \bigm| \xi(\omega) \le x\bigr\}. \tag{1.11}
\]
That is, \(\Phi(x)\) is the probability that the random variable \(\xi\) takes a value less than or equal to \(x\).

Example 1.9: Assume that the random variables \(\xi\) and \(\eta\) have the same probability distribution. One question is whether \(\xi = \eta\) or not. Generally speaking, it is not true. Take \((\Omega, \mathcal{A}, \Pr)\) to be \(\{\omega_1, \omega_2\}\) with \(\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5\). We now define two random variables as follows,
\[
\xi(\omega) = \begin{cases} -1, & \text{if } \omega = \omega_1 \\ 1, & \text{if } \omega = \omega_2, \end{cases}
\qquad
\eta(\omega) = \begin{cases} 1, & \text{if } \omega = \omega_1 \\ -1, & \text{if } \omega = \omega_2. \end{cases}
\]
Then \(\xi\) and \(\eta\) have the same probability distribution,
\[
\Phi(x) = \begin{cases} 0, & \text{if } x < -1 \\ 0.5, & \text{if } -1 \le x < 1 \\ 1, & \text{if } x \ge 1. \end{cases}
\]
However, it is clear that \(\xi \ne \eta\) in the sense of Definition 1.7.
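Example 1.9 can be checked mechanically; the dictionaries below encode the two-point space (the outcome names w1, w2 are ad hoc):

```python
# Sketch of Example 1.9: two random variables on {w1, w2} with
# Pr{w1} = Pr{w2} = 0.5 that share one distribution yet differ as functions.
pr = {"w1": 0.5, "w2": 0.5}
xi = {"w1": -1, "w2": 1}
eta = {"w1": 1, "w2": -1}

def dist(var, x):
    """Phi(x) = Pr{var <= x}, as in (1.11)."""
    return sum(p for w, p in pr.items() if var[w] <= x)

# Same probability distribution at every test point...
assert all(dist(xi, x) == dist(eta, x) for x in [-2, -1, 0, 1, 2])
# ...but xi(w) != eta(w) for every outcome w, so xi != eta as random variables.
assert all(xi[w] != eta[w] for w in pr)
```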
Theorem 1.5 (Sufficient and Necessary Condition for Probability Distribution): A function \(\Phi: \Re \to [0, 1]\) is a probability distribution if and only if it is an increasing and right-continuous function with
\[
\lim_{x \to -\infty} \Phi(x) = 0; \qquad \lim_{x \to +\infty} \Phi(x) = 1. \tag{1.12}
\]

Proof: The parts (a), (b), (c) and (d) follow immediately from the definition. Next we prove part (e). If \(\xi\) is a continuous random variable, then \(\Pr\{\xi = x\} = 0\). It follows from the probability continuity theorem that
\[
\lim_{y \uparrow x} \bigl(\Phi(x) - \Phi(y)\bigr) = \lim_{y \uparrow x} \Pr\{y < \xi \le x\} = \Pr\{\xi = x\} = 0.
\]
Definition: The probability density function \(\phi: \Re \to [0, +\infty)\) of a random variable \(\xi\) is a function such that
\[
\Phi(x) = \int_{-\infty}^{x} \phi(y)\,dy \tag{1.13}
\]
holds for all \(x \in \Re\), where \(\Phi\) is the probability distribution of \(\xi\). Then for any Borel set \(B\) of real numbers,
\[
\Pr\{\xi \in B\} = \int_{B} \phi(y)\,dy. \tag{1.14}
\]

Proof: Let \(\mathcal{B}\) be the class of all subsets \(C\) of \(\Re\) for which the relation
\[
\Pr\{\xi \in C\} = \int_{C} \phi(y)\,dy \tag{1.15}
\]
holds. We will show that \(\mathcal{B}\) contains all Borel sets of \(\Re\). It follows from the probability continuity theorem and relation (1.15) that \(\mathcal{B}\) is a monotone class. It is also clear that \(\mathcal{B}\) contains all intervals of the form \((-\infty, a]\), \((a, b]\), \((b, \infty)\) and \(\emptyset\) since
\[
\Pr\{\xi \in (-\infty, a]\} = \Phi(a) = \int_{-\infty}^{a} \phi(y)\,dy,
\qquad
\Pr\{\xi \in (b, +\infty)\} = 1 - \Phi(b) = \int_{b}^{+\infty} \phi(y)\,dy,
\]
\[
\Pr\{\xi \in (a, b]\} = \Phi(b) - \Phi(a) = \int_{a}^{b} \phi(y)\,dy,
\qquad
\Pr\{\xi \in \emptyset\} = 0 = \int_{\emptyset} \phi(y)\,dy.
\]
Let \(\mathcal{F}\) be the algebra consisting of all finite unions of disjoint sets of these forms. Note that for any disjoint sets \(C_1, C_2, \ldots, C_k\) of \(\mathcal{F}\) and \(C = C_1 \cup C_2 \cup \cdots \cup C_k\), we have
\[
\Pr\{\xi \in C\} = \sum_{j=1}^{k} \Pr\{\xi \in C_j\} = \sum_{j=1}^{k} \int_{C_j} \phi(y)\,dy = \int_{C} \phi(y)\,dy.
\]
Hence \(\mathcal{B}\) contains the algebra \(\mathcal{F}\), and by the monotone class theorem it contains all Borel sets of \(\Re\).

Uniform Distribution: A random variable \(\xi\) has a uniform distribution if its probability density function is defined by
\[
\phi(x) = \begin{cases} \dfrac{1}{b-a}, & \text{if } a \le x \le b \\[4pt] 0, & \text{otherwise} \end{cases} \tag{1.16}
\]
denoted by \(\mathcal{U}(a, b)\), where \(a\) and \(b\) are given real numbers with \(a < b\).

Exponential Distribution: A random variable \(\xi\) has an exponential distribution if its probability density function is defined by
\[
\phi(x) = \begin{cases} \dfrac{1}{\beta} \exp\Bigl(-\dfrac{x}{\beta}\Bigr), & \text{if } x \ge 0 \\[4pt] 0, & \text{if } x < 0 \end{cases} \tag{1.17}
\]
denoted by \(\mathcal{EXP}(\beta)\), where \(\beta\) is a positive number.
Normal Distribution: A random variable \(\xi\) has a normal distribution if its probability density function is defined by
\[
\phi(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\Bigl(-\frac{(x-\mu)^2}{2\sigma^2}\Bigr), \qquad x \in \Re \tag{1.18}
\]
where \(\mu\) and \(\sigma\) are real numbers with \(\sigma > 0\).
Definition: The joint probability distribution \(\Phi: \Re^n \to [0, 1]\) of a random vector \((\xi_1, \xi_2, \ldots, \xi_n)\) is defined by
\[
\Phi(x_1, x_2, \ldots, x_n) = \Pr\bigl\{\xi_1(\omega) \le x_1,\ \xi_2(\omega) \le x_2, \ldots, \xi_n(\omega) \le x_n\bigr\}.
\]

Definition 1.14: The joint probability density function \(\phi: \Re^n \to [0, +\infty)\) of a random vector \((\xi_1, \xi_2, \ldots, \xi_n)\) is a function such that
\[
\Phi(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1}\!\int_{-\infty}^{x_2}\!\cdots\int_{-\infty}^{x_n} \phi(y_1, y_2, \ldots, y_n)\,dy_n \cdots dy_2\,dy_1
\]
holds for all \((x_1, x_2, \ldots, x_n) \in \Re^n\).
1.4 Independence

Definition: The random variables \(\xi_1, \xi_2, \ldots, \xi_m\) are said to be independent if
\[
\Pr\Bigl\{\bigcap_{i=1}^{m} \{\xi_i \in B_i\}\Bigr\} = \prod_{i=1}^{m} \Pr\{\xi_i \in B_i\} \tag{1.19}
\]
for any Borel sets \(B_1, B_2, \ldots, B_m\) of real numbers.

If \(\xi_1, \xi_2, \ldots, \xi_m\) are independent and \(f_1, f_2, \ldots, f_m\) are measurable functions, then \(f_1(\xi_1), f_2(\xi_2), \ldots, f_m(\xi_m)\) are independent random variables, since for any Borel sets \(B_1, B_2, \ldots, B_m\), we have
\[
\Pr\Bigl\{\bigcap_{i=1}^{m} \{f_i(\xi_i) \in B_i\}\Bigr\}
= \Pr\Bigl\{\bigcap_{i=1}^{m} \{\xi_i \in f_i^{-1}(B_i)\}\Bigr\}
= \prod_{i=1}^{m} \Pr\{\xi_i \in f_i^{-1}(B_i)\}
= \prod_{i=1}^{m} \Pr\{f_i(\xi_i) \in B_i\}. \tag{1.20}
\]
Theorem: Let \(\xi_1, \xi_2, \ldots, \xi_m\) be random variables with probability density functions \(\phi_1, \phi_2, \ldots, \phi_m\), respectively. Then \(\xi_1, \xi_2, \ldots, \xi_m\) are independent if and only if the joint probability density function factors as \(\phi(x_1, x_2, \ldots, x_m) = \phi_1(x_1)\phi_2(x_2)\cdots\phi_m(x_m)\).

Proof: Let \(\mathcal{B}\) be the class of all subsets \(C\) of \(\Re\) for which the relation
\[
\Pr\{\xi_1 \in C,\ \xi_2 \le x_2, \ldots, \xi_m \le x_m\} = \Pr\{\xi_1 \in C\} \prod_{i=2}^{m} \Pr\{\xi_i \le x_i\} \tag{1.21}
\]
holds. We will show that \(\mathcal{B}\) contains all Borel sets of \(\Re\). It follows from the probability continuity theorem and relation (1.21) that \(\mathcal{B}\) is a monotone class. It is also clear that \(\mathcal{B}\) contains all intervals of the form \((-\infty, a]\), \((a, b]\), \((b, \infty)\) and \(\emptyset\). Let \(\mathcal{F}\) be the algebra consisting of all finite unions of disjoint sets of the form \((-\infty, a]\), \((a, b]\), \((b, \infty)\) and \(\emptyset\). Note that for any disjoint sets \(C_1, C_2, \ldots, C_k\) of \(\mathcal{F}\) and \(C = C_1 \cup C_2 \cup \cdots \cup C_k\), we have
\[
\Pr\{\xi_1 \in C,\ \xi_2 \le x_2, \ldots, \xi_m \le x_m\}
= \sum_{j=1}^{k} \Pr\{\xi_1 \in C_j,\ \xi_2 \le x_2, \ldots, \xi_m \le x_m\},
\]
so \(\mathcal{B}\) contains \(\mathcal{F}\), and by the monotone class theorem it contains all Borel sets. Repeating the argument in each coordinate, if the density factors then
\[
\Phi(x_1, x_2, \ldots, x_m)
= \int_{-\infty}^{x_1} \phi_1(t_1)\,dt_1 \int_{-\infty}^{x_2} \phi_2(t_2)\,dt_2 \cdots \int_{-\infty}^{x_m} \phi_m(t_m)\,dt_m
\]
for all \((x_1, x_2, \ldots, x_m) \in \Re^m\). Thus \(\xi_1, \xi_2, \ldots, \xi_m\) are independent. Conversely, if \(\xi_1, \xi_2, \ldots, \xi_m\) are independent, then for any \((x_1, x_2, \ldots, x_m) \in \Re^m\) we have \(\Phi(x_1, x_2, \ldots, x_m) = \Phi_1(x_1)\Phi_2(x_2)\cdots\Phi_m(x_m)\), and differentiating gives the factored density.
1.5 Identical Distribution

Definition: The random variables \(\xi\) and \(\eta\) are said to be identically distributed if \(\Pr\{\xi \in B\} = \Pr\{\eta \in B\}\) for any Borel set \(B\) of real numbers.

Theorem: The random variables \(\xi\) and \(\eta\) are identically distributed if and only if they have the same probability distribution.

Proof: Let \(\mathcal{B}\) be the class of all subsets \(C\) of \(\Re\) for which the relation
\[
\Pr\{\xi \in C\} = \Pr\{\eta \in C\} \tag{1.24}
\]
holds. We will show that \(\mathcal{B}\) contains all Borel sets of \(\Re\). It follows from the probability continuity theorem and relation (1.24) that \(\mathcal{B}\) is a monotone class. It is also clear that \(\mathcal{B}\) contains all intervals of the form \((-\infty, a]\), \((a, b]\), \((b, \infty)\) and \(\emptyset\) since \(\xi\) and \(\eta\) have the same probability distribution. Let \(\mathcal{F}\) be the algebra consisting of all finite unions of disjoint sets of the form \((-\infty, a]\), \((a, b]\), \((b, \infty)\) and \(\emptyset\). Note that for any disjoint sets \(C_1, C_2, \ldots, C_k\) of \(\mathcal{F}\) and \(C = C_1 \cup C_2 \cup \cdots \cup C_k\), we have
\[
\Pr\{\xi \in C\} = \sum_{j=1}^{k} \Pr\{\xi \in C_j\} = \sum_{j=1}^{k} \Pr\{\eta \in C_j\} = \Pr\{\eta \in C\}.
\]
Hence \(\mathcal{B}\) contains \(\mathcal{F}\), and by the monotone class theorem it contains all Borel sets.
1.6 Expected Value

Definition: Let \(\xi\) be a random variable. Then the expected value of \(\xi\) is defined by
\[
E[\xi] = \int_{0}^{+\infty} \Pr\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \Pr\{\xi \le r\}\,dr \tag{1.25}
\]
provided that at least one of the two integrals is finite.

Example: Let \(\xi\) be a uniformly distributed random variable on \([a, b]\). If \(a \ge 0\), then \(\Pr\{\xi \le r\} = 0\) for all \(r \le 0\), and
\[
\Pr\{\xi \ge r\} = \begin{cases} 1, & \text{if } r \le a \\ (b-r)/(b-a), & \text{if } a \le r \le b \\ 0, & \text{if } r \ge b, \end{cases}
\]
so
\[
E[\xi] = \Bigl(\int_{0}^{a} 1\,dr + \int_{a}^{b} \frac{b-r}{b-a}\,dr + \int_{b}^{+\infty} 0\,dr\Bigr) - \int_{-\infty}^{0} 0\,dr = \frac{a+b}{2}.
\]
If \(b \le 0\), then \(\Pr\{\xi \ge r\} = 0\) for all \(r \ge 0\), and
\[
\Pr\{\xi \le r\} = \begin{cases} 1, & \text{if } r \ge b \\ (r-a)/(b-a), & \text{if } a \le r \le b \\ 0, & \text{if } r \le a, \end{cases}
\]
so
\[
E[\xi] = \int_{0}^{+\infty} 0\,dr - \Bigl(\int_{-\infty}^{a} 0\,dr + \int_{a}^{b} \frac{r-a}{b-a}\,dr + \int_{b}^{0} 1\,dr\Bigr) = \frac{a+b}{2}.
\]
If \(a < 0 < b\), then
\[
\Pr\{\xi \ge r\} = \begin{cases} (b-r)/(b-a), & \text{if } 0 \le r \le b \\ 0, & \text{if } r \ge b, \end{cases}
\qquad
\Pr\{\xi \le r\} = \begin{cases} 0, & \text{if } r \le a \\ (r-a)/(b-a), & \text{if } a \le r \le 0, \end{cases}
\]
so
\[
E[\xi] = \int_{0}^{b} \frac{b-r}{b-a}\,dr - \int_{a}^{0} \frac{r-a}{b-a}\,dr = \frac{a+b}{2}.
\]
In every case, \(E[\xi] = (a+b)/2\).

If \(\xi\) is a simple random variable taking values \(x_1, x_2, \ldots, x_n\) with probabilities \(p_1, p_2, \ldots, p_n\), then
\[
E[\xi] = \sum_{i=1}^{n} p_i x_i.
\]

Theorem: Let \(\xi\) be a random variable whose probability density function \(\phi\) exists. If the Lebesgue integral \(\int_{-\infty}^{+\infty} x \phi(x)\,dx\) is finite, then we have
\[
E[\xi] = \int_{-\infty}^{+\infty} x \phi(x)\,dx. \tag{1.26}
\]
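Definition (1.25) can be verified numerically for a uniform variable; the midpoint-rule integrator below is a rough sketch, not part of the text:

```python
# Numeric sketch of definition (1.25): E[xi] from the two tail integrals
# for a uniform U(a, b) variable; should match (a+b)/2.
a, b = -2.0, 6.0

def pr_ge(r):
    """Pr{xi >= r} for xi ~ U(a, b)."""
    return min(1.0, max(0.0, (b - r) / (b - a)))

def pr_le(r):
    """Pr{xi <= r} for xi ~ U(a, b)."""
    return min(1.0, max(0.0, (r - a) / (b - a)))

def integral(f, lo, hi, n=100000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# E[xi] = int_0^inf Pr{xi >= r} dr - int_{-inf}^0 Pr{xi <= r} dr;
# both integrands vanish outside [a, b], so finite limits suffice.
e = integral(pr_ge, 0.0, b) - integral(pr_le, a, 0.0)
assert abs(e - (a + b) / 2) < 1e-3
```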
Proof: It follows from the definition of expected value and Fubini's theorem that
\[
E[\xi] = \int_{0}^{+\infty} \Pr\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \Pr\{\xi \le r\}\,dr
= \int_{0}^{+\infty} \Bigl[\int_{r}^{+\infty} \phi(x)\,dx\Bigr] dr - \int_{-\infty}^{0} \Bigl[\int_{-\infty}^{r} \phi(x)\,dx\Bigr] dr
\]
\[
= \int_{0}^{+\infty} \Bigl[\int_{0}^{x} \phi(x)\,dr\Bigr] dx - \int_{-\infty}^{0} \Bigl[\int_{x}^{0} \phi(x)\,dr\Bigr] dx
= \int_{0}^{+\infty} x\phi(x)\,dx + \int_{-\infty}^{0} x\phi(x)\,dx
= \int_{-\infty}^{+\infty} x\phi(x)\,dx.
\]

Theorem: Let \(\xi\) be a random variable with probability distribution \(\Phi\). If the Lebesgue-Stieltjes integral \(\int_{-\infty}^{+\infty} x\,d\Phi(x)\) is finite, then we have
\[
E[\xi] = \int_{-\infty}^{+\infty} x\,d\Phi(x). \tag{1.27}
\]
Proof: Since the Lebesgue-Stieltjes integral \(\int_{-\infty}^{+\infty} x\,d\Phi(x)\) is finite, we have
\[
\lim_{y\to+\infty} \int_{0}^{y} x\,d\Phi(x) = \int_{0}^{+\infty} x\,d\Phi(x),
\qquad
\lim_{y\to-\infty} \int_{y}^{0} x\,d\Phi(x) = \int_{-\infty}^{0} x\,d\Phi(x)
\]
and
\[
\lim_{y\to+\infty} \int_{y}^{+\infty} x\,d\Phi(x) = 0,
\qquad
\lim_{y\to-\infty} \int_{-\infty}^{y} x\,d\Phi(x) = 0.
\]
It follows from
\[
\int_{y}^{+\infty} x\,d\Phi(x) \ge y \lim_{z\to+\infty} \bigl(\Phi(z) - \Phi(y)\bigr) = y\bigl(1 - \Phi(y)\bigr) \ge 0, \quad \text{if } y > 0,
\]
\[
\int_{-\infty}^{y} x\,d\Phi(x) \le y\,\Phi(y) \le 0, \quad \text{if } y < 0
\]
that
\[
\lim_{y\to+\infty} y\bigl(1 - \Phi(y)\bigr) = 0, \qquad \lim_{y\to-\infty} y\,\Phi(y) = 0.
\]
Let \(0 = x_0 < x_1 < x_2 < \cdots < x_n = y\) be a partition of \([0, y]\). Then we have
\[
\sum_{i=0}^{n-1} x_i \bigl(\Phi(x_{i+1}) - \Phi(x_i)\bigr) \to \int_{0}^{y} x\,d\Phi(x)
\]
and
\[
\sum_{i=0}^{n-1} \bigl(1 - \Phi(x_{i+1})\bigr)(x_{i+1} - x_i) \to \int_{0}^{y} \Pr\{\xi \ge r\}\,dr
\]
as \(\max\{|x_{i+1} - x_i| : i = 0, 1, \ldots, n-1\} \to 0\). Since the two sums differ by terms that vanish in the limit, letting \(y \to +\infty\) gives
\[
\int_{0}^{+\infty} \Pr\{\xi \ge r\}\,dr = \int_{0}^{+\infty} x\,d\Phi(x). \tag{1.28}
\]
A symmetric argument applies on \((-\infty, 0]\), and the theorem follows from (1.25).
Proof: Step 1: We first prove that \(E[\xi + b] = E[\xi] + b\) for any real number \(b\). When \(b \ge 0\), we have
\[
E[\xi + b] = \int_{0}^{+\infty} \Pr\{\xi + b \ge r\}\,dr - \int_{-\infty}^{0} \Pr\{\xi + b \le r\}\,dr
= \int_{0}^{+\infty} \Pr\{\xi \ge r - b\}\,dr - \int_{-\infty}^{0} \Pr\{\xi \le r - b\}\,dr
\]
\[
= E[\xi] + \int_{0}^{b} \bigl(\Pr\{\xi \ge r - b\} + \Pr\{\xi < r - b\}\bigr)\,dr
= E[\xi] + b.
\]
If \(b < 0\), a similar argument shows that \(E[\xi + b] = E[\xi] + b\).

Step 2: We prove that \(E[a\xi] = aE[\xi]\). If \(a > 0\), then the substitution \(r \mapsto r/a\) gives
\[
E[a\xi] = \int_{0}^{+\infty} \Pr\{a\xi \ge r\}\,dr - \int_{-\infty}^{0} \Pr\{a\xi \le r\}\,dr
= a\int_{0}^{+\infty} \Pr\Bigl\{\xi \ge \frac{r}{a}\Bigr\}\,d\frac{r}{a} - a\int_{-\infty}^{0} \Pr\Bigl\{\xi \le \frac{r}{a}\Bigr\}\,d\frac{r}{a}
= aE[\xi].
\]
If \(a < 0\), the two tail integrals exchange roles and the same substitution yields \(E[a\xi] = aE[\xi]\).

Step 3: We prove that \(E[\xi + \eta] = E[\xi] + E[\eta]\) when both \(\xi\) and \(\eta\) are nonnegative simple random variables taking values \(a_1, a_2, \ldots, a_m\) and \(b_1, b_2, \ldots, b_n\), respectively. Then \(\xi + \eta\) is also a nonnegative simple random variable taking values \(a_i + b_j\), \(i = 1, 2, \ldots, m\), \(j = 1, 2, \ldots, n\). Thus we have
\[
E[\xi + \eta] = \sum_{i=1}^{m}\sum_{j=1}^{n} (a_i + b_j) \Pr\{\xi = a_i, \eta = b_j\}
= \sum_{i=1}^{m}\sum_{j=1}^{n} a_i \Pr\{\xi = a_i, \eta = b_j\} + \sum_{i=1}^{m}\sum_{j=1}^{n} b_j \Pr\{\xi = a_i, \eta = b_j\}
\]
\[
= \sum_{i=1}^{m} a_i \Pr\{\xi = a_i\} + \sum_{j=1}^{n} b_j \Pr\{\eta = b_j\}
= E[\xi] + E[\eta].
\]

Step 4: We prove that \(E[\xi + \eta] = E[\xi] + E[\eta]\) when both \(\xi\) and \(\eta\) are nonnegative random variables. For every \(i \ge 1\) and every \(\omega \in \Omega\), we define
\[
\xi_i(\omega) = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i} \le \xi(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \ldots, i2^i \\[6pt] i, & \text{if } i \le \xi(\omega), \end{cases}
\qquad
\eta_i(\omega) = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i} \le \eta(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \ldots, i2^i \\[6pt] i, & \text{if } i \le \eta(\omega). \end{cases}
\]
Then \(\{\xi_i\}\), \(\{\eta_i\}\) and \(\{\xi_i + \eta_i\}\) are three sequences of nonnegative simple random variables such that \(\xi_i \uparrow \xi\), \(\eta_i \uparrow \eta\) and \(\xi_i + \eta_i \uparrow \xi + \eta\) as \(i \to \infty\). Note that the functions \(\Pr\{\xi_i > r\}\), \(\Pr\{\eta_i > r\}\), \(\Pr\{\xi_i + \eta_i > r\}\), \(i = 1, 2, \ldots\) are also simple. It follows from the probability continuity theorem that
\[
\Pr\{\xi_i > r\} \uparrow \Pr\{\xi > r\}, \qquad r \ge 0
\]
as \(i \to \infty\). Since the expected value \(E[\xi]\) exists, monotone convergence gives \(E[\xi_i] \to E[\xi]\), and likewise for \(\eta\) and \(\xi + \eta\). Applying Step 3 to \(\xi_i\) and \(\eta_i\) and letting \(i \to \infty\) yields \(E[\xi + \eta] = E[\xi] + E[\eta]\).

Step 5: We prove that \(E[\xi + \eta] = E[\xi] + E[\eta]\) for arbitrary random variables \(\xi\) and \(\eta\) with finite expected values. Define the truncations
\[
\xi_i(\omega) = \begin{cases} \xi(\omega), & \text{if } \xi(\omega) \ge -i \\ -i, & \text{otherwise}, \end{cases}
\qquad
\eta_i(\omega) = \begin{cases} \eta(\omega), & \text{if } \eta(\omega) \ge -i \\ -i, & \text{otherwise}. \end{cases}
\]
Since the expected values \(E[\xi]\) and \(E[\eta]\) are finite, we have
\[
\lim_{i\to\infty} E[\xi_i] = E[\xi], \qquad \lim_{i\to\infty} E[\eta_i] = E[\eta], \qquad \lim_{i\to\infty} E[\xi_i + \eta_i] = E[\xi + \eta].
\]
The truncated variables are bounded below, so Steps 1 and 4 give \(E[\xi_i + \eta_i] = E[\xi_i] + E[\eta_i]\), and letting \(i \to \infty\) yields \(E[\xi + \eta] = E[\xi] + E[\eta]\).

Step 6: The linearity \(E[a\xi + b\eta] = aE[\xi] + bE[\eta]\) follows immediately from Steps 2 and 5. The theorem is proved.
Theorem: Let \(\xi\) and \(\eta\) be independent random variables with finite expected values. Then the expected value of \(\xi\eta\) exists and
\[
E[\xi\eta] = E[\xi]E[\eta]. \tag{1.29}
\]

Proof: Step 1: We first prove the case where both \(\xi\) and \(\eta\) are nonnegative simple random variables taking values \(a_1, a_2, \ldots, a_m\) and \(b_1, b_2, \ldots, b_n\), respectively. Then \(\xi\eta\) is also a nonnegative simple random variable taking values \(a_i b_j\), \(i = 1, 2, \ldots, m\), \(j = 1, 2, \ldots, n\). It follows from the independence of \(\xi\) and \(\eta\) that
\[
E[\xi\eta] = \sum_{i=1}^{m}\sum_{j=1}^{n} a_i b_j \Pr\{\xi = a_i, \eta = b_j\}
= \sum_{i=1}^{m}\sum_{j=1}^{n} a_i b_j \Pr\{\xi = a_i\}\Pr\{\eta = b_j\}
= \Bigl(\sum_{i=1}^{m} a_i \Pr\{\xi = a_i\}\Bigr)\Bigl(\sum_{j=1}^{n} b_j \Pr\{\eta = b_j\}\Bigr)
= E[\xi]E[\eta].
\]

Step 2: Next we prove the case where \(\xi\) and \(\eta\) are nonnegative random variables. For every \(i \ge 1\) and every \(\omega \in \Omega\), we define
\[
\xi_i(\omega) = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i} \le \xi(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \ldots, i2^i \\[6pt] i, & \text{if } i \le \xi(\omega), \end{cases}
\qquad
\eta_i(\omega) = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i} \le \eta(\omega) < \dfrac{k}{2^i},\ k = 1, 2, \ldots, i2^i \\[6pt] i, & \text{if } i \le \eta(\omega). \end{cases}
\]
Then \(\{\xi_i\}\), \(\{\eta_i\}\) and \(\{\xi_i \eta_i\}\) are three sequences of nonnegative simple random variables such that \(\xi_i \uparrow \xi\), \(\eta_i \uparrow \eta\) and \(\xi_i \eta_i \uparrow \xi\eta\) as \(i \to \infty\). It follows from the independence of \(\xi\) and \(\eta\) that \(\xi_i\) and \(\eta_i\) are independent. Hence we have \(E[\xi_i \eta_i] = E[\xi_i]E[\eta_i]\) for \(i = 1, 2, \ldots\) It follows from the probability continuity theorem that \(\Pr\{\xi_i > r\}\), \(i = 1, 2, \ldots\) are simple functions such that
\[
\Pr\{\xi_i > r\} \uparrow \Pr\{\xi > r\}, \qquad \text{for all } r \ge 0
\]
as \(i \to \infty\). Since the expected values exist, letting \(i \to \infty\) gives \(E[\xi\eta] = E[\xi]E[\eta]\).

Step 3: Finally, if \(\xi\) and \(\eta\) are arbitrary independent random variables, write \(\xi = \xi^+ - \xi^-\) and \(\eta = \eta^+ - \eta^-\). The pairs remain independent, so Step 2 gives
\[
E[\xi^+\eta^+] = E[\xi^+]E[\eta^+], \quad E[\xi^+\eta^-] = E[\xi^+]E[\eta^-], \quad E[\xi^-\eta^+] = E[\xi^-]E[\eta^+], \quad E[\xi^-\eta^-] = E[\xi^-]E[\eta^-].
\]
It follows that
\[
E[\xi\eta] = E[(\xi^+ - \xi^-)(\eta^+ - \eta^-)]
= E[\xi^+\eta^+] - E[\xi^+\eta^-] - E[\xi^-\eta^+] + E[\xi^-\eta^-]
\]
\[
= \bigl(E[\xi^+] - E[\xi^-]\bigr)\bigl(E[\eta^+] - E[\eta^-]\bigr)
= E[\xi]E[\eta]
\]
which proves the theorem.
Expected Value of Function of Random Variable

Theorem 1.18: Let \(\xi\) be a random variable with probability distribution \(\Phi\), and \(f: \Re \to \Re\) a measurable function. If the Lebesgue-Stieltjes integral \(\int_{-\infty}^{+\infty} f(x)\,d\Phi(x)\) is finite, then we have
\[
E[f(\xi)] = \int_{-\infty}^{+\infty} f(x)\,d\Phi(x). \tag{1.30}
\]

Proof: By the definition of expected value,
\[
E[f(\xi)] = \int_{0}^{+\infty} \Pr\{f(\xi) \ge r\}\,dr - \int_{-\infty}^{0} \Pr\{f(\xi) \le r\}\,dr.
\]
If \(f\) is a nonnegative simple measurable function, i.e.,
\[
f(x) = \begin{cases} a_1, & \text{if } x \in B_1 \\ a_2, & \text{if } x \in B_2 \\ \ \vdots \\ a_m, & \text{if } x \in B_m \end{cases}
\]
where \(B_1, B_2, \ldots, B_m\) are mutually disjoint Borel sets, then
\[
E[f(\xi)] = \int_{0}^{+\infty} \Pr\{f(\xi) \ge r\}\,dr
= \sum_{i=1}^{m} a_i \Pr\{\xi \in B_i\}
= \sum_{i=1}^{m} a_i \int_{B_i} d\Phi(x)
= \int_{-\infty}^{+\infty} f(x)\,d\Phi(x). \tag{1.31}
\]
If \(f\) is a nonnegative measurable function, take an increasing sequence of nonnegative simple functions \(f_i \uparrow f\). Then
\[
E[f_i(\xi)] = \int_{0}^{+\infty} \Pr\{f_i(\xi) \ge r\}\,dr = \int_{-\infty}^{+\infty} f_i(x)\,d\Phi(x),
\]
and letting \(i \to \infty\),
\[
E[f(\xi)] = \lim_{i\to\infty} \int_{-\infty}^{+\infty} f_i(x)\,d\Phi(x) = \int_{-\infty}^{+\infty} f(x)\,d\Phi(x).
\]
Finally, for a general measurable \(f\), write \(f = f^+ - f^-\) and apply the nonnegative case to each part:
\[
E[f(\xi)] = \int_{-\infty}^{+\infty} f^+(x)\,d\Phi(x) - \int_{-\infty}^{+\infty} f^-(x)\,d\Phi(x) = \int_{-\infty}^{+\infty} f(x)\,d\Phi(x).
\]
Theorem (Wald's Identity): Assume that \(\{\xi_i\}\) is a sequence of iid random variables, and \(\eta\) is a positive integer-valued random variable independent of \(\{\xi_i\}\), both with finite expected values. Then
\[
E\Bigl[\sum_{i=1}^{\eta} \xi_i\Bigr] = E[\eta]E[\xi_1]. \tag{1.32}
\]

Proof: First suppose the \(\xi_i\) are nonnegative. Since
\[
\Pr\Bigl\{\sum_{i=1}^{\eta} \xi_i \ge r\Bigr\} = \sum_{k=1}^{\infty} \Pr\{\eta = k\} \Pr\{\xi_1 + \xi_2 + \cdots + \xi_k \ge r\},
\]
we have
\[
E\Bigl[\sum_{i=1}^{\eta} \xi_i\Bigr]
= \int_{0}^{+\infty} \Pr\Bigl\{\sum_{i=1}^{\eta} \xi_i \ge r\Bigr\}\,dr
= \sum_{k=1}^{\infty} \Pr\{\eta = k\} \int_{0}^{+\infty} \Pr\{\xi_1 + \cdots + \xi_k \ge r\}\,dr
\]
\[
= \sum_{k=1}^{\infty} \Pr\{\eta = k\}\,k\,E[\xi_1]
= E[\eta]E[\xi_1].
\]
If the \(\xi_i\) are arbitrary random variables, then \(\xi_i = \xi_i^+ - \xi_i^-\), and
\[
E\Bigl[\sum_{i=1}^{\eta} \xi_i\Bigr]
= E\Bigl[\sum_{i=1}^{\eta} \xi_i^+\Bigr] - E\Bigl[\sum_{i=1}^{\eta} \xi_i^-\Bigr]
= E[\eta]E[\xi_1^+] - E[\eta]E[\xi_1^-]
= E[\eta]E[\xi_1^+ - \xi_1^-]
= E[\eta]E[\xi_1].
\]
1.7 Variance

Definition: Let \(\xi\) be a random variable with finite expected value \(e\). Then the variance of \(\xi\) is defined by \(V[\xi] = E[(\xi - e)^2]\).

Theorem: Let \(\xi_1, \xi_2, \ldots, \xi_n\) be independent random variables with finite expected values. Then
\[
V[\xi_1 + \xi_2 + \cdots + \xi_n] = V[\xi_1] + V[\xi_2] + \cdots + V[\xi_n]. \tag{1.33}
\]

Proof: Expanding the square gives
\[
V\Bigl[\sum_{i=1}^{n} \xi_i\Bigr]
= \sum_{i=1}^{n} E\bigl[(\xi_i - E[\xi_i])^2\bigr]
+ 2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} E\bigl[(\xi_i - E[\xi_i])(\xi_j - E[\xi_j])\bigr].
\]
Since \(\xi_1, \xi_2, \ldots, \xi_n\) are independent, \(E[(\xi_i - E[\xi_i])(\xi_j - E[\xi_j])] = 0\) for all \(i, j\) with \(i \ne j\). Thus (1.33) holds.
Maximum Variance Theorem
Let be a random variable that takes values in [a, b], but whose probability
distribution is otherwise arbitrary. If its expected value is given, what is the
possible maximum variance? The maximum variance theorem will answer
this question, thus playing an important role in treating games against nature.
Theorem 1.23 (Edmundson-Madansky Inequality): Let \(f\) be a convex function on \([a, b]\), and \(\xi\) a random variable that takes values in \([a, b]\) and has expected value \(e\). Then
\[
E[f(\xi)] \le \frac{b-e}{b-a} f(a) + \frac{e-a}{b-a} f(b). \tag{1.34}
\]

Proof: For each \(\omega\), we have \(a \le \xi(\omega) \le b\) and
\[
\xi(\omega) = \frac{b-\xi(\omega)}{b-a}\,a + \frac{\xi(\omega)-a}{b-a}\,b.
\]
It follows from the convexity of \(f\) that
\[
f(\xi(\omega)) \le \frac{b-\xi(\omega)}{b-a} f(a) + \frac{\xi(\omega)-a}{b-a} f(b).
\]
Taking the expected value on both sides yields (1.34).

Theorem (Maximum Variance Theorem): Let \(\xi\) be a random variable that takes values in \([a, b]\) and has expected value \(e\). Then \(V[\xi] \le (e-a)(b-e)\), and the maximum variance is attained by the two-point random variable with
\[
\Pr\{\xi = x\} = \begin{cases} \dfrac{b-e}{b-a}, & \text{if } x = a \\[6pt] \dfrac{e-a}{b-a}, & \text{if } x = b. \end{cases} \tag{1.35}
\]
Indeed, applying (1.34) to the convex function \(f(x) = (x-e)^2\) gives
\[
V[\xi] = E[(\xi-e)^2] \le \frac{b-e}{b-a}(a-e)^2 + \frac{e-a}{b-a}(b-e)^2 = (e-a)(b-e). \tag{1.36}
\]
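As a numeric sanity check of (1.34)-(1.36), the following sketch uses a hypothetical three-point distribution on [0, 10] and the convex function f(x) = x²:

```python
# Numeric sketch of the Edmundson-Madansky inequality (1.34): for convex f on
# [a, b] and xi in [a, b] with mean e, E[f(xi)] <= (b-e)/(b-a) f(a) +
# (e-a)/(b-a) f(b). The three-point distribution below is hypothetical.
a, b = 0.0, 10.0
points = {1.0: 0.3, 4.0: 0.5, 9.0: 0.2}   # value -> probability
f = lambda x: x ** 2

e = sum(x * p for x, p in points.items())
lhs = sum(f(x) * p for x, p in points.items())
rhs = (b - e) / (b - a) * f(a) + (e - a) / (b - a) * f(b)
assert lhs <= rhs + 1e-12

# Maximum variance theorem: V[xi] <= (e - a)(b - e), with equality only for
# the two-point distribution (1.35).
var = lhs - e ** 2            # V[xi] = E[xi^2] - e^2 since f(x) = x^2
var_max = (e - a) * (b - e)
assert var <= var_max + 1e-12
```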
1.8 Moments

Definition: Let \(\xi\) be a random variable and \(k\) a positive integer. Then \(E[\xi^k]\) is called the \(k\)th moment of \(\xi\). If \(\xi\) is nonnegative, then
\[
E[\xi^k] = \int_{0}^{+\infty} \Pr\{\xi^k \ge x\}\,dx = \int_{0}^{+\infty} \Pr\{\xi \ge r\}\,dr^k = k\int_{0}^{+\infty} r^{k-1}\Pr\{\xi \ge r\}\,dr. \tag{1.37}
\]

Theorem: Let \(\xi\) be a random variable that takes values in \([a, b]\) and has expected value \(e\). Then for any positive integer \(k\), applying the Edmundson-Madansky inequality to the convex functions \(|x|^k\) and \(|x - e|^k\) gives
\[
E[|\xi|^k] \le \frac{b-e}{b-a}|a|^k + \frac{e-a}{b-a}|b|^k, \tag{1.38}
\]
\[
E[|\xi - e|^k] \le \frac{b-e}{b-a}(e-a)^k + \frac{e-a}{b-a}(b-e)^k. \tag{1.39}
\]
1.9 Critical Values

Let \(\xi\) be a random variable. In order to measure it, we may use its expected value. Alternately, we may employ the \(\alpha\)-optimistic value and the \(\alpha\)-pessimistic value as a ranking measure.

Definition 1.20: Let \(\xi\) be a random variable, and \(\alpha \in (0, 1]\). Then
\[
\xi_{\sup}(\alpha) = \sup\bigl\{ r \bigm| \Pr\{\xi \ge r\} \ge \alpha \bigr\} \tag{1.40}
\]
is called the \(\alpha\)-optimistic value of \(\xi\), and
\[
\xi_{\inf}(\alpha) = \inf\bigl\{ r \bigm| \Pr\{\xi \le r\} \ge \alpha \bigr\} \tag{1.41}
\]
is called the \(\alpha\)-pessimistic value of \(\xi\).

Theorem: Let \(\xi\) be a random variable and \(\alpha \in (0, 1]\). Then
\[
\Pr\{\xi \ge \xi_{\sup}(\alpha)\} \ge \alpha, \qquad \Pr\{\xi \le \xi_{\inf}(\alpha)\} \ge \alpha. \tag{1.42}
\]

Proof: It follows from the definition of the optimistic value that there exists an increasing sequence \(\{r_i\}\) such that \(\Pr\{\xi \ge r_i\} \ge \alpha\) and \(r_i \uparrow \xi_{\sup}(\alpha)\) as \(i \to \infty\). Since \(\{\omega \mid \xi(\omega) \ge r_i\} \downarrow \{\omega \mid \xi(\omega) \ge \xi_{\sup}(\alpha)\}\), it follows from the probability continuity theorem that
\[
\Pr\{\xi \ge \xi_{\sup}(\alpha)\} = \lim_{i\to\infty} \Pr\{\xi \ge r_i\} \ge \alpha.
\]
The inequality for the pessimistic value is proved similarly.

Example: The inequalities in (1.42) may be strict. For a suitable discrete random variable with \(\alpha = 0.8\), we may have \(\xi_{\sup}(0.8) = 0\) with \(\Pr\{\xi \ge \xi_{\sup}(0.8)\} = 1 > 0.8\), and \(\xi_{\inf}(0.8) = 1\) with \(\Pr\{\xi \le \xi_{\inf}(0.8)\} = 1 > 0.8\).

Theorem 1.28: Let \(\xi\) be a random variable. Then we have
(a) \(\xi_{\inf}(\alpha)\) is an increasing and left-continuous function of \(\alpha\);
(b) \(\xi_{\sup}(\alpha)\) is a decreasing and left-continuous function of \(\alpha\).

Proof: (a) It is easy to prove that \(\xi_{\inf}(\alpha)\) is an increasing function of \(\alpha\). Next, we prove the left-continuity of \(\xi_{\inf}(\alpha)\) with respect to \(\alpha\). Let \(\{\alpha_i\}\) be an arbitrary sequence of positive numbers such that \(\alpha_i \uparrow \alpha\). Then \(\{\xi_{\inf}(\alpha_i)\}\) is an increasing sequence. If its limit is equal to \(\xi_{\inf}(\alpha)\), then the left-continuity is proved. Otherwise, there exists a number \(z^*\) such that
\[
\lim_{i\to\infty} \xi_{\inf}(\alpha_i) < z^* < \xi_{\inf}(\alpha).
\]
Then \(\Pr\{\xi \le z^*\} \ge \alpha_i\) for each \(i\), so \(\Pr\{\xi \le z^*\} \ge \alpha\), contradicting the definition of \(\xi_{\inf}(\alpha)\).
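Definitions (1.40)-(1.41) can be evaluated directly for a small discrete variable; the three-point distribution below is hypothetical:

```python
# Sketch of Definition 1.20: alpha-optimistic and alpha-pessimistic values
# for a hypothetical discrete random variable (dyadic probabilities so the
# sums are exact in floating point).
values = [(-1.0, 0.25), (0.0, 0.5), (2.0, 0.25)]   # (value, probability)

def pr_ge(r):
    return sum(p for v, p in values if v >= r)

def pr_le(r):
    return sum(p for v, p in values if v <= r)

grid = [v for v, _ in values]

def sup_alpha(alpha):
    """xi_sup(alpha) = sup{ r | Pr{xi >= r} >= alpha }  (1.40)."""
    return max(r for r in grid if pr_ge(r) >= alpha)

def inf_alpha(alpha):
    """xi_inf(alpha) = inf{ r | Pr{xi <= r} >= alpha }  (1.41)."""
    return min(r for r in grid if pr_le(r) >= alpha)

assert sup_alpha(0.25) == 2.0    # Pr{xi >= 2} = 0.25
assert sup_alpha(0.75) == 0.0    # Pr{xi >= 0} = 0.75
assert inf_alpha(0.75) == 0.0    # Pr{xi <= 0} = 0.75
```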
1.10 Entropy

Definition: Let \(\xi\) be a discrete random variable taking values \(x_i\) with probabilities \(p_i\), \(i = 1, 2, \ldots, n\). Then its entropy is defined by
\[
H[\xi] = -\sum_{i=1}^{n} p_i \ln p_i. \tag{1.43}
\]

Theorem: \(H[\xi] \ge 0\), (1.44) and equality holds if and only if there exists an index \(k\) such that \(p_k = 1\), i.e., \(\xi\) is essentially a deterministic number.

[Figure: the function \(S(t) = -t \ln t\)]

Theorem: Let \(\xi\) be a discrete random variable taking \(n\) values with probabilities \(p_1, p_2, \ldots, p_n\). Then \(H[\xi] \le \ln n\), (1.45) with equality if and only if \(p_1 = p_2 = \cdots = p_n = 1/n\). Indeed, since the function \(S(t) = -t\ln t\) is concave, Jensen's inequality gives
\[
-\sum_{i=1}^{n} p_i \ln p_i
\le -n\Bigl(\frac{1}{n}\sum_{i=1}^{n} p_i\Bigr)\ln\Bigl(\frac{1}{n}\sum_{i=1}^{n} p_i\Bigr)
= \ln n.
\]

Definition: Let \(\xi\) be an absolutely continuous random variable with probability density function \(\phi\). Then its entropy is defined by
\[
H[\xi] = -\int_{-\infty}^{+\infty} \phi(x) \ln \phi(x)\,dx. \tag{1.46}
\]
Example: Let \(\xi\) be an absolutely continuous random variable on \([a, b]\). The maximum entropy probability density function \(\phi(x)\) should maximize the entropy
\[
-\int_{a}^{b} \phi(x) \ln \phi(x)\,dx
\]
subject to the natural constraint \(\int_{a}^{b} \phi(x)\,dx = 1\). The Lagrangian is
\[
L = -\int_{a}^{b} \phi(x) \ln \phi(x)\,dx - \lambda\Bigl(\int_{a}^{b} \phi(x)\,dx - 1\Bigr).
\]
It follows from the Euler-Lagrange equation that the maximum entropy probability density function meets
\[
\ln \phi(x) + 1 + \lambda = 0
\]
and has the form \(\phi(x) = \exp(-1-\lambda)\). Substituting it into the natural constraint, we get
\[
\phi^*(x) = \frac{1}{b-a}, \qquad a \le x \le b
\]
which is just the uniformly distributed random variable, and the maximum entropy is \(H[\xi^*] = \ln(b-a)\).

Example: Let \(\xi\) be an absolutely continuous random variable on \([0, +\infty)\) with prescribed expected value \(\beta\). The maximum entropy probability density function \(\phi(x)\) should maximize the entropy
\[
-\int_{0}^{+\infty} \phi(x) \ln \phi(x)\,dx
\]
subject to the constraints
\[
\int_{0}^{+\infty} \phi(x)\,dx = 1, \qquad \int_{0}^{+\infty} x\phi(x)\,dx = \beta.
\]
The Lagrangian is
\[
L = -\int_{0}^{+\infty} \phi(x) \ln \phi(x)\,dx - \lambda_1\Bigl(\int_{0}^{+\infty} \phi(x)\,dx - 1\Bigr) - \lambda_2\Bigl(\int_{0}^{+\infty} x\phi(x)\,dx - \beta\Bigr).
\]
The Euler-Lagrange equation gives \(\ln \phi(x) + 1 + \lambda_1 + \lambda_2 x = 0\), and substituting into the constraints yields
\[
\phi^*(x) = \frac{1}{\beta} \exp\Bigl(-\frac{x}{\beta}\Bigr), \qquad x \ge 0
\]
which is just the exponentially distributed random variable, and the maximum entropy is \(H[\xi^*] = 1 + \ln \beta\).

Example 1.24: Let \(\xi\) be an absolutely continuous random variable on \((-\infty, +\infty)\). Assume that the expected value and variance of \(\xi\) are prescribed to be \(\mu\) and \(\sigma^2\), respectively. The maximum entropy probability density function \(\phi(x)\) should maximize the entropy
\[
-\int_{-\infty}^{+\infty} \phi(x) \ln \phi(x)\,dx
\]
subject to the constraints
\[
\int_{-\infty}^{+\infty} \phi(x)\,dx = 1, \qquad \int_{-\infty}^{+\infty} x\phi(x)\,dx = \mu, \qquad \int_{-\infty}^{+\infty} (x-\mu)^2 \phi(x)\,dx = \sigma^2.
\]
The Lagrangian is
\[
L = -\int_{-\infty}^{+\infty} \phi(x) \ln \phi(x)\,dx - \lambda_1\Bigl(\int_{-\infty}^{+\infty} \phi(x)\,dx - 1\Bigr) - \lambda_2\Bigl(\int_{-\infty}^{+\infty} x\phi(x)\,dx - \mu\Bigr) - \lambda_3\Bigl(\int_{-\infty}^{+\infty} (x-\mu)^2 \phi(x)\,dx - \sigma^2\Bigr).
\]
Solving the Euler-Lagrange equation under the constraints yields
\[
\phi^*(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\Bigl(-\frac{(x-\mu)^2}{2\sigma^2}\Bigr)
\]
which is just the normally distributed random variable, and the maximum entropy is \(H[\xi^*] = 1/2 + \ln(\sqrt{2\pi}\,\sigma)\).
1.11 Distance

1.12 Inequalities
Theorem: Let \(\xi\) be a random variable, and let \(f\) be a nonnegative, even, and increasing function on \([0, +\infty)\). Then for any given number \(t > 0\), we have
\[
\Pr\{|\xi| \ge t\} \le \frac{E[f(\xi)]}{f(t)}. \tag{1.48}
\]

Proof: It follows from the nonnegativity of \(f(\xi)\) that
\[
E[f(\xi)] = \int_{0}^{+\infty} \Pr\{f(\xi) \ge r\}\,dr
= \int_{0}^{+\infty} \Pr\{|\xi| \ge f^{-1}(r)\}\,dr
\ge \int_{0}^{f(t)} \Pr\{|\xi| \ge f^{-1}(r)\}\,dr
\]
\[
\ge \int_{0}^{f(t)} dr \cdot \Pr\{|\xi| \ge f^{-1}(f(t))\}
= f(t)\,\Pr\{|\xi| \ge t\}
\]
which proves the inequality.

Theorem 1.36 (Markov Inequality): Let \(\xi\) be a random variable. Then for any given numbers \(t > 0\) and \(p > 0\), we have
\[
\Pr\{|\xi| \ge t\} \le \frac{E[|\xi|^p]}{t^p}. \tag{1.49}
\]
This follows from (1.48) with \(f(x) = |x|^p\).

Theorem (Chebyshev Inequality): Let \(\xi\) be a random variable with finite variance \(V[\xi]\). Then for any given number \(t > 0\), we have
\[
\Pr\{|\xi - E[\xi]| \ge t\} \le \frac{V[\xi]}{t^2}. \tag{1.50}
\]
This follows from the Markov inequality applied to \(\xi - E[\xi]\) with \(p = 2\).
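The Chebyshev bound (1.50) can be checked exhaustively on a hypothetical discrete distribution:

```python
# Numeric sketch of the Chebyshev inequality (1.50):
# Pr{|xi - E[xi]| >= t} <= V[xi] / t**2, for a hypothetical distribution.
values = [(-3.0, 0.1), (0.0, 0.6), (2.0, 0.2), (5.0, 0.1)]  # (value, prob.)

e = sum(v * p for v, p in values)          # E[xi] = 0.6
var = sum((v - e) ** 2 * p for v, p in values)   # V[xi] = 3.84

for t in [0.5, 1.0, 2.0, 3.0, 4.0]:
    tail = sum(p for v, p in values if abs(v - e) >= t)
    assert tail <= var / t ** 2 + 1e-12    # Chebyshev holds at every t
```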
Theorem 1.38 (Hölder's Inequality): Let \(p\) and \(q\) be two positive real numbers with \(1/p + 1/q = 1\), and let \(\xi\) and \(\eta\) be random variables with \(E[|\xi|^p] < \infty\) and \(E[|\eta|^q] < \infty\). Then we have
\[
E[|\xi\eta|] \le \sqrt[p]{E[|\xi|^p]}\ \sqrt[q]{E[|\eta|^q]}. \tag{1.51}
\]

Proof: The inequality holds trivially if at least one of \(\xi\) and \(\eta\) is zero a.s. Now we assume \(E[|\xi|^p] > 0\) and \(E[|\eta|^q] > 0\). It is easy to prove that the function \(f(x, y) = \sqrt[p]{x}\,\sqrt[q]{y}\) is concave on \(D = \{(x, y) : x \ge 0,\ y \ge 0\}\). Thus there exist real numbers \(\alpha\) and \(\beta\) such that
\[
f(x, y) \le f(x_0, y_0) + \alpha(x - x_0) + \beta(y - y_0), \qquad (x, y) \in D,
\]
where \(x_0 = E[|\xi|^p]\) and \(y_0 = E[|\eta|^q]\). Substituting \(x = |\xi|^p\) and \(y = |\eta|^q\) and taking expected values on both sides yields (1.51).
1.13 Convergence Concepts

[Table: relations among convergence concepts — convergence in mean and convergence almost surely each imply convergence in probability, which implies convergence in distribution]

Theorem: The sequence \(\{\xi_i\}\) converges a.s. to \(\xi\) if and only if for every \(\varepsilon > 0\), we have
\[
\lim_{n\to\infty} \Pr\Bigl\{\bigcup_{i=n}^{\infty} \{|\xi_i - \xi| \ge \varepsilon\}\Bigr\} = 0. \tag{1.58}
\]

Proof: For every \(\varepsilon > 0\) and \(i \ge 1\), define
\[
X = \bigl\{\omega \in \Omega \bigm| \lim_{i\to\infty} \xi_i(\omega) \ne \xi(\omega)\bigr\},
\qquad
X_i(\varepsilon) = \bigl\{\omega \in \Omega \bigm| |\xi_i(\omega) - \xi(\omega)| \ge \varepsilon\bigr\}.
\]
It is clear that
\[
X = \bigcup_{\varepsilon > 0} \Bigl(\bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} X_i(\varepsilon)\Bigr).
\]
Note that \(\xi_i \to \xi\) a.s. if and only if \(\Pr\{X\} = 0\). That is, \(\xi_i \to \xi\) a.s. if and only if
\[
\Pr\Bigl\{\bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} X_i(\varepsilon)\Bigr\} = 0
\]
for every \(\varepsilon > 0\). Since
\[
\bigcup_{i=n}^{\infty} X_i(\varepsilon) \downarrow \bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} X_i(\varepsilon),
\]
it follows from the probability continuity theorem that
\[
\lim_{n\to\infty} \Pr\Bigl\{\bigcup_{i=n}^{\infty} X_i(\varepsilon)\Bigr\}
= \Pr\Bigl\{\bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} X_i(\varepsilon)\Bigr\} = 0.
\]
Theorem: If the sequence \(\{\xi_i\}\) converges a.s. to \(\xi\), then \(\{\xi_i\}\) converges in probability to \(\xi\), since for every \(\varepsilon > 0\),
\[
\lim_{n\to\infty} \Pr\Bigl\{\bigcup_{i=n}^{\infty} \{|\xi_i - \xi| \ge \varepsilon\}\Bigr\} = 0
\]
and
\[
\{|\xi_n - \xi| \ge \varepsilon\} \subset \bigcup_{i=n}^{\infty} \{|\xi_i - \xi| \ge \varepsilon\}.
\]

Example: Convergence in probability does not imply convergence in mean. Take a sequence \(\{\xi_i\}\) with
\[
\xi_i = \begin{cases} 2^i, & \text{with probability } 1/2^i \\ 0, & \text{otherwise.} \end{cases}
\]
Then for any small number \(\varepsilon > 0\),
\[
\Pr\{|\xi_i - 0| \ge \varepsilon\} = \frac{1}{2^i} \to 0,
\]
so \(\{\xi_i\}\) converges in probability to 0. However,
\[
E[|\xi_i - 0|] = 2^i \cdot \frac{1}{2^i} = 1
\]
for every \(i\), so \(\{\xi_i\}\) does not converge in mean to 0.
It follows from (1.59) and (1.60) that \(\Phi_i(x) \to \Phi(x)\). The theorem is proved.

Example 1.31: Convergence in distribution does not imply convergence in probability. For example, take \((\Omega, \mathcal{A}, \Pr)\) to be \(\{\omega_1, \omega_2\}\) with \(\Pr\{\omega_1\} = \Pr\{\omega_2\} = 0.5\), and
\[
\xi(\omega) = \begin{cases} -1, & \text{if } \omega = \omega_1 \\ 1, & \text{if } \omega = \omega_2. \end{cases}
\]
We also define \(\xi_i = -\xi\) for all \(i\). Then \(\xi_i\) and \(\xi\) are identically distributed. Thus \(\{\xi_i\}\) converges in distribution to \(\xi\). But, for any small number \(\varepsilon > 0\), we have \(\Pr\{|\xi_i - \xi| > \varepsilon\} = \Pr\{\Omega\} = 1\). That is, the sequence \(\{\xi_i\}\) does not converge in probability to \(\xi\).
1.14 Conditional Probability

Definition: Let \(A\) and \(B\) be events with \(\Pr\{B\} > 0\). Then the conditional probability of \(A\) given \(B\) is defined by
\[
\Pr\{A \mid B\} = \frac{\Pr\{A \cap B\}}{\Pr\{B\}}. \tag{1.61}
\]
Then \(\Pr\{\cdot \mid B\}\) is itself a probability measure. Normality holds since
\[
\Pr\{\Omega \mid B\} = \frac{\Pr\{\Omega \cap B\}}{\Pr\{B\}} = \frac{\Pr\{B\}}{\Pr\{B\}} = 1,
\]
nonnegativity is obvious, and countable additivity holds since for mutually disjoint events \(A_1, A_2, \ldots\),
\[
\Pr\Bigl\{\bigcup_{i=1}^{\infty} A_i \Bigm| B\Bigr\}
= \frac{\Pr\Bigl\{\bigl(\bigcup_{i=1}^{\infty} A_i\bigr) \cap B\Bigr\}}{\Pr\{B\}}
= \frac{\displaystyle\sum_{i=1}^{\infty} \Pr\{A_i \cap B\}}{\Pr\{B\}}
= \sum_{i=1}^{\infty} \Pr\{A_i \mid B\}.
\]
Theorem (Bayes Formula): Let the events \(A_1, A_2, \ldots, A_n\) form a partition of the space \(\Omega\) with \(\Pr\{A_i\} > 0\) for \(i = 1, 2, \ldots, n\), and let \(B\) be an event with \(\Pr\{B\} > 0\). Then
\[
\Pr\{A_k \mid B\} = \frac{\Pr\{A_k\} \Pr\{B \mid A_k\}}{\displaystyle\sum_{i=1}^{n} \Pr\{A_i\} \Pr\{B \mid A_i\}} \tag{1.62}
\]
for \(k = 1, 2, \ldots, n\).

Proof: Since \(A_1, A_2, \ldots, A_n\) form a partition of the space \(\Omega\), we have
\[
\Pr\{B\} = \sum_{i=1}^{n} \Pr\{A_i \cap B\} = \sum_{i=1}^{n} \Pr\{A_i\} \Pr\{B \mid A_i\}
\]
which is also called the formula for total probability. Thus, for any \(k\), we have
\[
\Pr\{A_k \mid B\} = \frac{\Pr\{A_k \cap B\}}{\Pr\{B\}} = \frac{\Pr\{A_k\} \Pr\{B \mid A_k\}}{\displaystyle\sum_{i=1}^{n} \Pr\{A_i\} \Pr\{B \mid A_i\}}.
\]
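Formula (1.62) is easy to exercise numerically; the prior and likelihood values below are hypothetical:

```python
# Sketch of the Bayes formula (1.62) over a hypothetical partition A1, A2, A3
# of the sample space.
prior = {"A1": 0.5, "A2": 0.3, "A3": 0.2}          # Pr{Ak}
likelihood = {"A1": 0.1, "A2": 0.4, "A3": 0.8}     # Pr{B | Ak}

# Formula for total probability: Pr{B} = sum_k Pr{Ak} Pr{B|Ak}
pr_b = sum(prior[k] * likelihood[k] for k in prior)

# Bayes: Pr{Ak|B} = Pr{Ak} Pr{B|Ak} / Pr{B}
posterior = {k: prior[k] * likelihood[k] / pr_b for k in prior}

assert abs(pr_b - 0.33) < 1e-12
assert abs(sum(posterior.values()) - 1.0) < 1e-12  # posterior is normalized
```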
Definition: The conditional probability distribution \(\Phi(\cdot \mid B): \Re \to [0, 1]\) of a random variable \(\xi\) given the event \(B\) with \(\Pr\{B\} > 0\) is defined by
\[
\Phi(x \mid B) = \Pr\{\xi \le x \mid B\}. \tag{1.64}
\]
In particular, when \(\eta\) is a discrete random variable with \(\Pr\{\eta = y\} > 0\),
\[
\Phi(x \mid \eta = y) = \frac{\Pr\{\xi \le x,\ \eta = y\}}{\Pr\{\eta = y\}}.
\]
The conditional probability density function \(\phi(\cdot \mid B)\) of \(\xi\) given \(B\) is a function such that
\[
\Phi(x \mid B) = \int_{-\infty}^{x} \phi(y \mid B)\,dy \tag{1.65}
\]
holds for all \(x\).

Let \((\xi, \eta)\) be a random vector with joint probability density function \(\psi\), and write the marginal density functions
\[
f(x) = \int_{-\infty}^{+\infty} \psi(x, y)\,dy, \qquad g(y) = \int_{-\infty}^{+\infty} \psi(x, y)\,dx.
\]
Since
\[
\Pr\{\xi \le x,\ \eta \le y\} = \int_{-\infty}^{x}\int_{-\infty}^{y} \psi(r, t)\,dt\,dr
= \int_{-\infty}^{y} \Bigl[\int_{-\infty}^{x} \frac{\psi(r, t)}{g(t)}\,dr\Bigr] g(t)\,dt,
\]
the conditional probability density function of \(\xi\) given \(\eta = y\) is
\[
\phi(x \mid \eta = y) = \frac{\psi(x, y)}{g(y)} \tag{1.66}
\]
\[
= \frac{\psi(x, y)}{\displaystyle\int_{-\infty}^{+\infty} \psi(x, y)\,dx}, \quad \text{a.s.} \tag{1.67}
\]
Note that (1.66) and (1.67) are defined only for \(g(y) \ne 0\). In fact, the set \(\{y \mid g(y) = 0\}\) has probability 0. Especially, if \(\xi\) and \(\eta\) are independent random variables, then \(\psi(x, y) = f(x)g(y)\) and \(\phi(x \mid \eta = y) = f(x)\).

Definition 1.31: Let \(\xi\) be a random variable. Then the conditional expected value of \(\xi\) given \(B\) is defined by
\[
E[\xi \mid B] = \int_{0}^{+\infty} \Pr\{\xi \ge r \mid B\}\,dr - \int_{-\infty}^{0} \Pr\{\xi \le r \mid B\}\,dr \tag{1.68}
\]
provided that at least one of the two integrals is finite.

Definition (Hazard Rate): Let \(\xi\) be a nonnegative random variable representing lifetime. Then the hazard rate at time \(x\) is
\[
h(x) = \lim_{\Delta \downarrow 0} \frac{\Pr\{\xi \le x + \Delta \mid \xi > x\}}{\Delta}. \tag{1.69}
\]
The hazard rate tells us the probability of a failure just after time \(x\) when it is functioning at time \(x\). If \(\xi\) has probability distribution \(\Phi\) and probability density function \(\phi\), then the hazard rate
\[
h(x) = \frac{\phi(x)}{1 - \Phi(x)}.
\]
1.15 Stochastic Process

A stochastic process is a family of random variables indexed by time. Let \(\xi_1, \xi_2, \ldots\) be iid positive interarrival times, and define
\[
S_0 = 0, \qquad S_n = \xi_1 + \xi_2 + \cdots + \xi_n, \quad n \ge 1. \tag{1.72}
\]
Then the renewal process is
\[
N_t = \max_{n \ge 0}\bigl\{ n \bigm| S_n \le t \bigr\}, \tag{1.73}
\]
and its expected value satisfies
\[
E[N_t] = \sum_{n=1}^{\infty} \Pr\{S_n \le t\}. \tag{1.74}
\]
[Figure: a sample path of the renewal process \(N_t\), with jump times \(S_1, S_2, S_3, S_4\) and interarrival times \(\xi_1, \xi_2, \xi_3, \xi_4\)]

Proof of (1.74): Since \(N_t\) takes only nonnegative integer values, we have
\[
E[N_t] = \int_{0}^{\infty} \Pr\{N_t \ge r\}\,dr
= \sum_{n=1}^{\infty} \Pr\{N_t \ge n\}
= \sum_{n=1}^{\infty} \Pr\{S_n \le t\}.
\]
Example: If the interarrival times are exponentially distributed with parameter \(\lambda\), then \(N_t\) is a Poisson process with
\[
\Pr\{N_t = n\} = \frac{e^{-\lambda t}(\lambda t)^n}{n!}, \qquad n = 0, 1, 2, \ldots \tag{1.75}
\]
\[
E[N_t] = \lambda t. \tag{1.76}
\]
Theorem 1.48 (Renewal Theorem): Let \(N_t\) be a renewal process with interarrival times \(\xi_1, \xi_2, \ldots\) Then we have
\[
\lim_{t\to\infty} \frac{E[N_t]}{t} = \frac{1}{E[\xi_1]}. \tag{1.77}
\]
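The renewal theorem can be illustrated by simulation, assuming exponential interarrival times (so that Nt is Poisson and E[Nt] = λt by (1.76)):

```python
# Simulation sketch of the renewal theorem (1.77): E[Nt]/t -> 1/E[xi_1].
# Exponential interarrival times with rate lam give E[xi_1] = 1/lam.
import random

random.seed(0)
lam, t, runs = 2.0, 50.0, 2000

def count_renewals():
    """Count renewals on [0, t] for one sample path."""
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)   # interarrival time, mean 1/lam
        if s > t:
            return n
        n += 1

mean_nt = sum(count_renewals() for _ in range(runs)) / runs
# E[Nt]/t should be close to 1/E[xi_1] = lam
assert abs(mean_nt / t - lam) < 0.1
```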
Brownian Motion
In 1828 the botanist Brown observed irregular movement of pollen suspended
in liquid. This movement is now known as Brownian motion. Bachelier used
Brownian motion as a model of stock prices in 1900. An equation for Brownian motion was obtained by Einstein in 1905, and a rigorous mathematical
definition of Brownian motion was given by Wiener in 1931. For this reason,
Brownian motion is also called Wiener process.
[Figure: a sample path of Brownian motion]

Existence: For each \(n\), let \(\epsilon_1, \epsilon_2, \ldots\) be iid standard normal random variables, and define
\[
X_n(t) = \begin{cases} \dfrac{1}{\sqrt{n}} \displaystyle\sum_{i=1}^{k} \epsilon_i, & \text{if } t = \dfrac{k}{n}\ (k = 0, 1, \ldots, n) \\[8pt] \text{linear}, & \text{otherwise.} \end{cases}
\]
Since the limit
\[
\lim_{n\to\infty} X_n(t)
\]
exists almost surely, we may verify that the limit meets the conditions of standard Brownian motion. Hence there is a standard Brownian motion.
If \(B_t\) is a standard Brownian motion, then for each \(s > 0\) the shifted process
\[
X_t = B_{t+s} - B_s \tag{1.80}
\]
is also a standard Brownian motion. For any level \(x > 0\) and any time \(t > 0\), we have
\[
\Pr\Bigl\{\max_{0 \le s \le t} B_s \ge x\Bigr\} = 2\Pr\{B_t \ge x\} \tag{1.81}
\]
which is the so-called reflection principle. For any level \(x < 0\) and any time \(t > 0\), we have
\[
\Pr\Bigl\{\min_{0 \le s \le t} B_s \le x\Bigr\} = 2\Pr\{B_t \le x\}. \tag{1.82}
\]

Example 1.38: Let \(B_t\) be a Brownian motion with drift \(e > 0\) and diffusion coefficient \(\sigma\). Then the first passage time \(\tau\) at which the Brownian motion reaches the barrier \(x > 0\) has the probability density function
\[
\phi(t) = \frac{x}{\sigma\sqrt{2\pi t^3}} \exp\Bigl(-\frac{(x - et)^2}{2\sigma^2 t}\Bigr), \qquad t > 0 \tag{1.83}
\]
whose expected value and variance are
\[
E[\tau] = \frac{x}{e}, \qquad V[\tau] = \frac{x\sigma^2}{e^3}. \tag{1.84, 1.85}
\]
1.16 Stochastic Calculus

For a standard Brownian motion \(B_t\), the infinitesimal increment \(dB_t\) satisfies
\[
E[dB_t] = 0, \qquad V[dB_t] = dt, \qquad E[dB_t^2] = dt, \qquad V[dB_t^2] = 2dt^2.
\]

Definition: Let \(X_t\) be a stochastic process and \(B_t\) a standard Brownian motion. For any partition of the closed interval \([a, b]\) with \(a = t_1 < t_2 < \cdots < t_{k+1} = b\), write \(\Delta = \max_i |t_{i+1} - t_i|\). Then the Ito integral of \(X_t\) with respect to \(B_t\) is
\[
\int_{a}^{b} X_t\,dB_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}\bigl(B_{t_{i+1}} - B_{t_i}\bigr) \tag{1.87}
\]
provided that the limit exists in mean square and is a random variable.

Example 1.39: Let \(B_t\) be a standard Brownian motion. Then for any partition \(0 = t_1 < t_2 < \cdots < t_{k+1} = s\), we have
\[
\int_{0}^{s} dB_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} \bigl(B_{t_{i+1}} - B_{t_i}\bigr) \equiv B_s - B_0 = B_s.
\]
Example: For any partition \(0 = t_1 < t_2 < \cdots < t_{k+1} = s\), we have
\[
sB_s = \sum_{i=1}^{k} \bigl(t_{i+1}B_{t_{i+1}} - t_i B_{t_i}\bigr)
= \sum_{i=1}^{k} t_i\bigl(B_{t_{i+1}} - B_{t_i}\bigr) + \sum_{i=1}^{k} B_{t_{i+1}}(t_{i+1} - t_i)
\to \int_{0}^{s} t\,dB_t + \int_{0}^{s} B_t\,dt
\]
as \(\Delta \to 0\). It follows that
\[
\int_{0}^{s} t\,dB_t = sB_s - \int_{0}^{s} B_t\,dt.
\]

Example: For any partition \(0 = t_1 < t_2 < \cdots < t_{k+1} = s\), we have
\[
B_s^2 = \sum_{i=1}^{k} \bigl(B_{t_{i+1}}^2 - B_{t_i}^2\bigr)
= \sum_{i=1}^{k} \bigl(B_{t_{i+1}} - B_{t_i}\bigr)^2 + 2\sum_{i=1}^{k} B_{t_i}\bigl(B_{t_{i+1}} - B_{t_i}\bigr)
\to s + 2\int_{0}^{s} B_t\,dB_t
\]
as \(\Delta \to 0\). That is,
\[
\int_{0}^{s} B_t\,dB_t = \frac{1}{2}B_s^2 - \frac{1}{2}s.
\]
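The derivation above can be checked by simulating the Riemann-Ito sums of (1.87); the simulation parameters are ad hoc:

```python
# Simulation sketch: the discretized Ito sum sum_i B_{t_i}(B_{t_i+1}-B_{t_i})
# should approach B_s^2/2 - s/2 in mean square as the mesh shrinks.
import random

random.seed(1)
s, n, runs = 1.0, 1000, 400
dt = s / n

err = 0.0
for _ in range(runs):
    b, ito = 0.0, 0.0
    for _ in range(n):
        db = random.gauss(0.0, dt ** 0.5)   # Brownian increment
        ito += b * db                       # left endpoint, as in (1.87)
        b += db
    err += (ito - (b * b / 2 - s / 2)) ** 2
mse = err / runs
# Mean-square error of the Riemann-Ito sum shrinks with the mesh size
assert mse < 0.01
```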
Theorem (Ito Formula): Let \(B_t\) be a standard Brownian motion, and let \(h(t, b)\) be a twice continuously differentiable function. Define \(X_t = h(t, B_t)\). Then we have
\[
dX_t = \frac{\partial h}{\partial t}(t, B_t)\,dt + \frac{\partial h}{\partial b}(t, B_t)\,dB_t + \frac{1}{2}\frac{\partial^2 h}{\partial b^2}(t, B_t)\,dt. \tag{1.88}
\]

Proof: Since \(h\) is twice continuously differentiable, a Taylor expansion of the increment of \(X_t\) gives
\[
\Delta X_t = \frac{\partial h}{\partial t}(t, B_t)\,\Delta t + \frac{\partial h}{\partial b}(t, B_t)\,\Delta B_t + \frac{1}{2}\frac{\partial^2 h}{\partial b^2}(t, B_t)\,(\Delta B_t)^2
+ \frac{1}{2}\frac{\partial^2 h}{\partial t^2}(t, B_t)\,(\Delta t)^2 + \frac{\partial^2 h}{\partial t\,\partial b}(t, B_t)\,\Delta t\,\Delta B_t.
\]
Since we can ignore the terms \((\Delta t)^2\) and \(\Delta t\,\Delta B_t\) and replace \((\Delta B_t)^2\) with \(\Delta t\), the Ito formula is obtained because it makes
\[
X_s = X_0 + \int_{0}^{s} \frac{\partial h}{\partial t}(t, B_t)\,dt + \int_{0}^{s} \frac{\partial h}{\partial b}(t, B_t)\,dB_t + \frac{1}{2}\int_{0}^{s} \frac{\partial^2 h}{\partial b^2}(t, B_t)\,dt
\]
for any \(s \ge 0\).
Remark 1.5: The infinitesimal increment \(dB_t\) in (1.88) may be replaced with the Ito process
\[
dY_t = u_t\,dt + v_t\,dB_t \tag{1.89}
\]
where \(u_t\) is an absolutely integrable stochastic process, and \(v_t\) is a square integrable stochastic process, thus producing
\[
dh(t, Y_t) = \frac{\partial h}{\partial t}(t, Y_t)\,dt + \frac{\partial h}{\partial b}(t, Y_t)\,dY_t + \frac{1}{2}\frac{\partial^2 h}{\partial b^2}(t, Y_t)\,v_t^2\,dt. \tag{1.90}
\]

Remark 1.6: Assume that \(B_{1t}, B_{2t}, \ldots, B_{mt}\) are standard Brownian motions, and \(h(t, b_1, b_2, \ldots, b_m)\) is a twice continuously differentiable function. Define
\[
X_t = h(t, B_{1t}, B_{2t}, \ldots, B_{mt}).
\]
Then we have the following multi-dimensional Ito formula
\[
dX_t = \frac{\partial h}{\partial t}\,dt + \sum_{i=1}^{m} \frac{\partial h}{\partial b_i}\,dB_{it} + \frac{1}{2}\sum_{i=1}^{m} \frac{\partial^2 h}{\partial b_i^2}\,dt. \tag{1.91}
\]
Example 1.42: The Ito formula is the chain rule for stochastic differentiation. Applying the Ito formula with \(h(t, b) = tb\), we obtain
\[
d(tB_t) = B_t\,dt + t\,dB_t.
\]
Hence we have
\[
sB_s = \int_{0}^{s} d(tB_t) = \int_{0}^{s} B_t\,dt + \int_{0}^{s} t\,dB_t,
\]
that is,
\[
\int_{0}^{s} t\,dB_t = sB_s - \int_{0}^{s} B_t\,dt.
\]

Example: Applying the Ito formula with \(h(t, b) = b^2\), we obtain \(d(B_t^2) = 2B_t\,dB_t + dt\). Hence
\[
B_s^2 = \int_{0}^{s} d(B_t^2) = 2\int_{0}^{s} B_t\,dB_t + \int_{0}^{s} dt = 2\int_{0}^{s} B_t\,dB_t + s.
\]
It follows that
\[
\int_{0}^{s} B_t\,dB_t = \frac{1}{2}B_s^2 - \frac{1}{2}s.
\]

Example: Applying the Ito formula with \(h(t, b) = b^3\), we obtain \(d(B_t^3) = 3B_t^2\,dB_t + 3B_t\,dt\). Hence
\[
B_s^3 = \int_{0}^{s} d(B_t^3) = 3\int_{0}^{s} B_t^2\,dB_t + 3\int_{0}^{s} B_t\,dt,
\]
that is,
\[
\int_{0}^{s} B_t^2\,dB_t = \frac{1}{3}B_s^3 - \int_{0}^{s} B_t\,dt.
\]
Theorem 1.51 (Integration by Parts): Suppose that \(B_t\) is a standard Brownian motion and \(F(t)\) is an absolutely continuous function. Then
\[
\int_{0}^{s} F(t)\,dB_t = F(s)B_s - \int_{0}^{s} B_t\,dF(t). \tag{1.92}
\]

Proof: By defining \(h(t, B_t) = F(t)B_t\) and using the Ito formula, we get
\[
d(F(t)B_t) = B_t\,dF(t) + F(t)\,dB_t.
\]
Thus
\[
F(s)B_s = \int_{0}^{s} d(F(t)B_t) = \int_{0}^{s} B_t\,dF(t) + \int_{0}^{s} F(t)\,dB_t
\]
which completes the proof.
1.17 Stochastic Differential Equation

Definition: Suppose \(B_t\) is a standard Brownian motion, and \(f\) and \(g\) are given functions. Then
\[
dX_t = f(t, X_t)\,dt + g(t, X_t)\,dB_t \tag{1.93}
\]
is called a stochastic differential equation. A solution is a stochastic process \(X_t\) that satisfies (1.93) identically in \(t\), i.e., in the integral form
\[
X_s = X_0 + \int_{0}^{s} f(t, X_t)\,dt + \int_{0}^{s} g(t, X_t)\,dB_t. \tag{1.94}
\]
However, the differential form is convenient for us. This is the main reason why we accept the differential form.
Example 1.45: Let $B_t$ be a standard Brownian motion. Then the stochastic differential equation
$$dX_t = a\,dt + b\,dB_t$$
has a solution $X_t = at + bB_t$, which is just a Brownian motion with drift coefficient $a$ and diffusion coefficient $b$.

Example 1.46: Let $B_t$ be a standard Brownian motion. Then the stochastic differential equation
$$dX_t = aX_t\,dt + bX_t\,dB_t$$
has a solution
$$X_t = \exp\left(\left(a - \frac{b^2}{2}\right)t + bB_t\right),$$
which is just a geometric Brownian motion.

Example 1.47: Let $B_t$ be a standard Brownian motion. Then the stochastic differential equations
$$dX_t = -\frac{1}{2}X_t\,dt - Y_t\,dB_t, \qquad dY_t = -\frac{1}{2}Y_t\,dt + X_t\,dB_t$$
have a solution $(X_t, Y_t) = (\cos B_t, \sin B_t)$, which is called a Brownian motion on the unit circle since $X_t^2 + Y_t^2 \equiv 1$.
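As a quick numerical illustration of Example 1.46 (a sketch; the coefficients, step size, and seed below are arbitrary choices, not from the book), an Euler discretization of $dX_t = aX_t\,dt + bX_t\,dB_t$ driven by a simulated Brownian path should land close to the closed form evaluated on the same path:

```python
import math
import random

def euler_gbm(a, b, T=1.0, n=20_000, seed=1):
    """Euler scheme for dX = a X dt + b X dB on one simulated Brownian
    path; returns the Euler endpoint and the closed-form solution
    exp((a - b^2/2) T + b B_T) evaluated on the same path."""
    random.seed(seed)
    dt = T / n
    x, B = 1.0, 0.0
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))
        x += a * x * dt + b * x * dB
        B += dB
    return x, math.exp((a - b * b / 2) * T + b * B)

approx, exact = euler_gbm(0.05, 0.2)
```

With this step size the two endpoints agree to within a couple of percent on a typical path.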
Chapter 2
Credibility Theory
The concept of fuzzy set was initiated by Zadeh [245] via membership function
in 1965. In order to measure a fuzzy event, Zadeh [248] proposed the concept
of possibility measure. Although possibility measure has been widely used,
it has no self-duality property. However, a self-dual measure is absolutely
needed in both theory and practice. In order to define a self-dual measure,
Liu and Liu [126] presented the concept of credibility measure. In addition,
a sufficient and necessary condition for credibility measure was given by Li
and Liu [100].
Credibility theory, founded by Liu [129] in 2004 and refined by Liu [132] in
2007, is a branch of mathematics for studying the behavior of fuzzy phenomena. The emphasis in this chapter is mainly on credibility measure, credibility space, fuzzy variable, membership function, credibility distribution, independence, identical distribution, expected value, variance, moments, critical
values, entropy, distance, convergence almost surely, convergence in credibility, convergence in mean, convergence in distribution, conditional credibility,
fuzzy process, fuzzy calculus, and fuzzy differential equation.
2.1
Credibility Space
Let Θ be a nonempty set, and P the power set of Θ (i.e., the largest σ-algebra over Θ). Each element in P is called an event. In order to present an axiomatic definition of credibility, it is necessary to assign to each event A a number Cr{A} which indicates the credibility that A will occur. In order to ensure that the number Cr{A} has certain mathematical properties which we intuitively expect a credibility to have, we accept the following four axioms:

Axiom 1. (Normality) Cr{Θ} = 1.
Axiom 2. (Monotonicity) Cr{A} ≤ Cr{B} whenever A ⊂ B.
Axiom 3. (Self-Duality) Cr{A} + Cr{A^c} = 1 for any event A.
Axiom 4. (Maximality) Cr{∪_i A_i} = sup_i Cr{A_i} for any collection of events {A_i} with sup_i Cr{A_i} < 0.5.
Theorem. Suppose that Cr is a credibility measure. Then
$$\mathrm{Cr}\{A \cup B\} = \mathrm{Cr}\{A\} \vee \mathrm{Cr}\{B\} \quad \text{if } \mathrm{Cr}\{A \cup B\} \le 0.5, \tag{2.1}$$
$$\mathrm{Cr}\{A \cap B\} = \mathrm{Cr}\{A\} \wedge \mathrm{Cr}\{B\} \quad \text{if } \mathrm{Cr}\{A \cap B\} \ge 0.5. \tag{2.2}$$
The above equations hold not only for a finite number of events but also for an infinite number of events.

Proof: If Cr{A∪B} < 0.5, then Cr{A} ∨ Cr{B} < 0.5 by using Axiom 2. Thus the equation (2.1) follows immediately from Axiom 4. If Cr{A∪B} = 0.5 and (2.1) does not hold, then we have Cr{A} ∨ Cr{B} < 0.5. It follows from Axiom 4 that
$$\mathrm{Cr}\{A \cup B\} = \mathrm{Cr}\{A\} \vee \mathrm{Cr}\{B\} < 0.5.$$
This contradiction proves (2.1). Next we prove (2.2). Since Cr{A∩B} ≥ 0.5, we have Cr{A^c ∪ B^c} ≤ 0.5 by the self-duality. Thus
$$\mathrm{Cr}\{A \cap B\} = 1 - \mathrm{Cr}\{A^c \cup B^c\} = 1 - \mathrm{Cr}\{A^c\} \vee \mathrm{Cr}\{B^c\} = \mathrm{Cr}\{A\} \wedge \mathrm{Cr}\{B\}.$$
The theorem is proved.
Theorem (Credibility Subadditivity Theorem). The credibility measure is subadditive, i.e.,
$$\mathrm{Cr}\{A \cup B\} \le \mathrm{Cr}\{A\} + \mathrm{Cr}\{B\} \tag{2.3}$$
for any events A and B. In fact, the credibility measure is not only finitely subadditive but also countably subadditive.

Proof: The argument breaks down into three cases.
Case 1: Cr{A} < 0.5 and Cr{B} < 0.5. It follows from Axiom 4 that
$$\mathrm{Cr}\{A \cup B\} = \mathrm{Cr}\{A\} \vee \mathrm{Cr}\{B\} \le \mathrm{Cr}\{A\} + \mathrm{Cr}\{B\}.$$
Case 2: Cr{A} ≥ 0.5. For this case, by using Axioms 2 and 3, we have Cr{A^c} ≤ 0.5 and Cr{A∪B} ≥ Cr{A} ≥ 0.5. Then
$$\mathrm{Cr}\{A^c\} = \mathrm{Cr}\{A^c \cap B\} \vee \mathrm{Cr}\{A^c \cap B^c\} \le \mathrm{Cr}\{A^c \cap B\} + \mathrm{Cr}\{A^c \cap B^c\} \le \mathrm{Cr}\{B\} + \mathrm{Cr}\{A^c \cap B^c\}.$$
Applying this inequality, we obtain
$$\mathrm{Cr}\{A\} + \mathrm{Cr}\{B\} = 1 - \mathrm{Cr}\{A^c\} + \mathrm{Cr}\{B\} \ge 1 - \mathrm{Cr}\{B\} - \mathrm{Cr}\{A^c \cap B^c\} + \mathrm{Cr}\{B\} = 1 - \mathrm{Cr}\{A^c \cap B^c\} = \mathrm{Cr}\{A \cup B\}.$$
Case 3: Cr{B} ≥ 0.5. This case may be proved by a process similar to Case 2. The theorem is proved.
Remark 2.1: For any events A and B, it follows from the credibility subadditivity theorem that the credibility measure is null-additive, i.e., Cr{A ∪ B} = Cr{A} + Cr{B} if either Cr{A} = 0 or Cr{B} = 0.

Theorem 2.4 Let {B_i} be a decreasing sequence of events with Cr{B_i} → 0 as i → ∞. Then for any event A, we have
$$\lim_{i\to\infty}\mathrm{Cr}\{A \cup B_i\} = \lim_{i\to\infty}\mathrm{Cr}\{A \setminus B_i\} = \mathrm{Cr}\{A\}. \tag{2.4}$$

Theorem (Credibility Semicontinuity Law). For events A_1, A_2, ..., we have
$$\lim_{i\to\infty}\mathrm{Cr}\{A_i\} = \mathrm{Cr}\Big\{\lim_{i\to\infty} A_i\Big\} \tag{2.5}$$
if one of the following conditions is satisfied: (a) Cr{A} ≤ 0.5 and A_i ↑ A; (b) lim_{i→∞} Cr{A_i} < 0.5 and A_i ↑ A; (c) Cr{A} ≥ 0.5 and A_i ↓ A; (d) lim_{i→∞} Cr{A_i} > 0.5 and A_i ↓ A.

Proof: (a) Since Cr{A} ≤ 0.5, we have Cr{A_i} ≤ 0.5 for each i. It follows from Axiom 4 that
$$\mathrm{Cr}\{A\} = \mathrm{Cr}\{\cup_i A_i\} = \sup_i \mathrm{Cr}\{A_i\} = \lim_{i\to\infty}\mathrm{Cr}\{A_i\}.$$
(b) Since lim_{i→∞} Cr{A_i} < 0.5, we have sup_i Cr{A_i} < 0.5. It follows from Axiom 4 that
$$\mathrm{Cr}\{A\} = \mathrm{Cr}\{\cup_i A_i\} = \sup_i \mathrm{Cr}\{A_i\} = \lim_{i\to\infty}\mathrm{Cr}\{A_i\}.$$
Cases (c) and (d) follow from (a) and (b) by the self-duality of the credibility measure.

Theorem (Credibility Asymptotic Theorem). For any events A_1, A_2, ..., we have
$$\lim_{i\to\infty}\mathrm{Cr}\{A_i\} \ge 0.5, \text{ if } A_i \uparrow \Theta; \qquad \lim_{i\to\infty}\mathrm{Cr}\{A_i\} \le 0.5, \text{ if } A_i \downarrow \emptyset. \tag{2.7}$$

Proof: Assume A_i ↑ Θ. If lim_{i→∞} Cr{A_i} < 0.5, it follows from the credibility semicontinuity law that
$$\mathrm{Cr}\{\Theta\} = \lim_{i\to\infty}\mathrm{Cr}\{A_i\} < 0.5,$$
which is in contradiction with Cr{Θ} = 1. The second inequality follows by self-duality.
Suppose that a number Cr{θ} is assigned to each θ ∈ Θ. The credibility extension condition requires that
$$\sup_{\theta\in\Theta}\mathrm{Cr}\{\theta\} \ge 0.5, \qquad \mathrm{Cr}\{\theta^*\} + \sup_{\theta\ne\theta^*}\mathrm{Cr}\{\theta\} \ge 1 \text{ for any } \theta^* \text{ with } \mathrm{Cr}\{\theta^*\} \ge 0.5. \tag{2.8}$$
Under this condition, Cr{θ} has a unique extension to a credibility measure on P, namely
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{\theta\in A}\mathrm{Cr}\{\theta\}, & \text{if } \displaystyle\sup_{\theta\in A}\mathrm{Cr}\{\theta\} < 0.5 \\[4pt] 1 - \displaystyle\sup_{\theta\in A^c}\mathrm{Cr}\{\theta\}, & \text{if } \displaystyle\sup_{\theta\in A}\mathrm{Cr}\{\theta\} \ge 0.5. \end{cases} \tag{2.9}$$

Proof: We first prove that the set function Cr{A} defined by (2.9) is a credibility measure.
Step 1: By the credibility extension condition sup_{θ∈Θ} Cr{θ} ≥ 0.5, we have Cr{Θ} = 1 − sup_{θ∈∅} Cr{θ} = 1.
Step 2: Monotonicity. If A ⊂ B, then sup_{θ∈A} Cr{θ} ≤ sup_{θ∈B} Cr{θ} and sup_{θ∈A^c} Cr{θ} ≥ sup_{θ∈B^c} Cr{θ}, so Cr{A} ≤ Cr{B} in every case of (2.9).
Step 3: Self-duality. If sup_{θ∈A} Cr{θ} < 0.5, then sup_{θ∈A^c} Cr{θ} ≥ 0.5 by the extension condition, and
Cr{A} + Cr{A^c} = sup_{θ∈A} Cr{θ} + 1 − sup_{θ∈A} Cr{θ} = 1.
The case sup_{θ∈A} Cr{θ} ≥ 0.5 is symmetric.
Step 4: For any collection {A_i} with sup_i Cr{A_i} < 0.5, we have
$$\mathrm{Cr}\{\cup_i A_i\} = \sup_{\theta\in\cup_i A_i}\mathrm{Cr}\{\theta\} = \sup_i\,\sup_{\theta\in A_i}\mathrm{Cr}\{\theta\} = \sup_i \mathrm{Cr}\{A_i\}.$$
Thus Cr is a credibility measure. To prove uniqueness, let Cr1 and Cr2 be credibility measures agreeing on all singletons, and let A be an event.
Case 1: Cr1{A} < 0.5. Then Axiom 4 forces Cr1{A} = sup_{θ∈A} Cr1{θ} = sup_{θ∈A} Cr2{θ} = Cr2{A}.
Case 2: Cr1{A} > 0.5. For this case, we have Cr1{A^c} < 0.5. It follows from the first case that Cr1{A^c} = Cr2{A^c}, which implies Cr1{A} = Cr2{A}.
Case 3: Cr1{A} = 0.5. For this case, we have Cr1{A^c} = 0.5, and by monotonicity
Cr2{A} ≥ sup_{θ∈A} Cr2{θ} = sup_{θ∈A} Cr1{θ}, Cr2{A^c} ≥ sup_{θ∈A^c} Cr2{θ} = sup_{θ∈A^c} Cr1{θ},
which together with self-duality imply Cr2{A} = 0.5 = Cr1{A}. The theorem is proved.
Definition 2.2 Let Θ be a nonempty set, P the power set of Θ, and Cr a credibility measure. Then the triplet (Θ, P, Cr) is called a credibility space.

Example 2.3: The triplet (Θ, P, Cr) is a credibility space if
$$\Theta = \{\theta_1, \theta_2, \ldots\}, \quad \mathrm{Cr}\{\theta_i\} \equiv 1/2 \text{ for } i = 1, 2, \ldots, \tag{2.10}$$
$$\mathrm{Cr}\{A\} = \begin{cases} 0, & \text{if } A = \emptyset \\ 1, & \text{if } A = \Theta \\ 1/2, & \text{otherwise.} \end{cases}$$

Example 2.4: The triplet (Θ, P, Cr) is a credibility space if
$$\Theta = \{\theta_1, \theta_2, \ldots\}, \quad \mathrm{Cr}\{\theta_i\} = i/(2i+1) \text{ for } i = 1, 2, \ldots, \tag{2.11}$$
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{\theta_i\in A}\frac{i}{2i+1}, & \text{if } A \text{ is finite} \\[6pt] 1 - \displaystyle\sup_{\theta_i\in A^c}\frac{i}{2i+1}, & \text{if } A \text{ is infinite.} \end{cases}$$

Example 2.5: The triplet (Θ, P, Cr) is a credibility space if
$$\Theta = \{\theta_1, \theta_2, \ldots\}, \quad \mathrm{Cr}\{\theta_1\} = 1/2, \quad \mathrm{Cr}\{\theta_i\} = 1/i \text{ for } i = 2, 3, \ldots, \tag{2.12}$$
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{\theta_i\in A} 1/i, & \text{if } A \text{ contains neither } \theta_1 \text{ nor } \theta_2 \\[4pt] 1/2, & \text{if } A \text{ contains only one of } \theta_1 \text{ and } \theta_2 \\[4pt] 1 - \displaystyle\sup_{\theta_i\in A^c} 1/i, & \text{if } A \text{ contains both } \theta_1 \text{ and } \theta_2. \end{cases}$$

Example 2.6: The triplet (Θ, P, Cr) is a credibility space if Θ = [0, 1], Cr{θ} = θ/2 for θ ∈ Θ, and
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{\theta\in A}\frac{\theta}{2}, & \text{if } \displaystyle\sup_{\theta\in A}\theta < 1 \\[6pt] 1 - \displaystyle\sup_{\theta\in A^c}\frac{\theta}{2}, & \text{if } \displaystyle\sup_{\theta\in A}\theta = 1. \end{cases} \tag{2.13}$$
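On a finite universe the extension formula (2.9) can be exercised directly. A minimal sketch (the singleton values are illustrative and chosen to satisfy the extension condition), checking self-duality over all events:

```python
from itertools import chain, combinations

def cr(sing, A):
    """Extend singleton credibilities `sing` to an event A by (2.9)."""
    A = set(A)
    s_in = max((sing[t] for t in A), default=0.0)
    if s_in < 0.5:
        return s_in
    return 1.0 - max((sing[t] for t in set(sing) - A), default=0.0)

# hypothetical singletons: sup >= 0.5 and 0.7 + 0.3 >= 1, so (2.8) holds
sing = {"t1": 0.7, "t2": 0.3, "t3": 0.2}
events = list(chain.from_iterable(combinations(sing, r) for r in range(4)))
self_dual = all(
    abs(cr(sing, A) + cr(sing, set(sing) - set(A)) - 1.0) < 1e-12 for A in events
)
```

Normality, monotonicity, and self-duality all hold for the extended measure, e.g. Cr{t1} = 1 − 0.3 = 0.7 and Cr{t2, t3} = 0.3.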
Axiom 5. (Product Credibility Axiom) Let Θ_k be nonempty sets on which Cr_k are credibility measures, k = 1, 2, ..., n, respectively, and Θ = Θ_1 × Θ_2 × ··· × Θ_n. Then
$$\mathrm{Cr}\{(\theta_1, \theta_2, \ldots, \theta_n)\} = \min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\} \tag{2.14}$$
for each (θ_1, θ_2, ..., θ_n) ∈ Θ.

Theorem 2.10 (Product Credibility Theorem) Let Θ_k be nonempty sets on which Cr_k are the credibility measures, k = 1, 2, ..., n, respectively, and Θ = Θ_1 × Θ_2 × ··· × Θ_n. Then Cr = Cr_1 ∧ Cr_2 ∧ ··· ∧ Cr_n defined by Axiom 5 has a unique extension to a credibility measure on Θ as follows,
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{(\theta_1,\ldots,\theta_n)\in A}\,\min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\}, & \text{if } \displaystyle\sup_{(\theta_1,\ldots,\theta_n)\in A}\,\min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\} < 0.5 \\[8pt] 1 - \displaystyle\sup_{(\theta_1,\ldots,\theta_n)\in A^c}\,\min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\}, & \text{if } \displaystyle\sup_{(\theta_1,\ldots,\theta_n)\in A}\,\min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\} \ge 0.5. \end{cases} \tag{2.15}$$

Proof: We verify that Cr{θ} = min_k Cr_k{θ_k} satisfies the credibility extension condition. First,
$$\sup_{\theta\in\Theta}\mathrm{Cr}\{\theta\} = \sup_{(\theta_1,\ldots,\theta_n)}\,\min_{1\le k\le n}\mathrm{Cr}_k\{\theta_k\} = \min_{1\le k\le n}\,\sup_{\theta_k\in\Theta_k}\mathrm{Cr}_k\{\theta_k\} \ge 0.5, \tag{2.16}$$
since sup_{θ_k} Cr_k{θ_k} ≥ 0.5 for k = 1, 2, ..., n. Now suppose that Cr{(θ_1*, ..., θ_n*)} ≥ 0.5. Then
$$\mathrm{Cr}_k\{\theta_k^*\} \ge 0.5, \quad k = 1, 2, \ldots, n,$$
and the extension condition of each Cr_k gives
$$\mathrm{Cr}_k\{\theta_k^*\} + \sup_{\theta_k\ne\theta_k^*}\mathrm{Cr}_k\{\theta_k\} \ge 1, \quad k = 1, 2, \ldots, n.$$
Write Cr{θ*} = Cr_i{θ_i*} where the minimum in (2.14) is attained at index i. Varying only the i-th coordinate,
$$\sup_{\theta\ne\theta^*}\mathrm{Cr}\{\theta\} \ge \min\Big(\min_{k\ne i}\mathrm{Cr}_k\{\theta_k^*\},\ \sup_{\theta_i\ne\theta_i^*}\mathrm{Cr}_i\{\theta_i\}\Big).$$
If the minimum is the second term, then Cr{θ*} + sup_{θ≠θ*} Cr{θ} ≥ Cr_i{θ_i*} + 1 − Cr_i{θ_i*} = 1; otherwise the minimum is at least 0.5, and Cr{θ*} + sup_{θ≠θ*} Cr{θ} ≥ 0.5 + 0.5 = 1. Hence
$$\mathrm{Cr}\{(\theta_1^*, \ldots, \theta_n^*)\} + \sup_{\theta\ne\theta^*}\mathrm{Cr}\{\theta\} \ge 1. \tag{2.20}$$
Thus Cr satisfies the credibility extension condition. It follows from the credibility extension theorem that Cr{A} is just the unique extension of Cr{θ}. The theorem is proved.

Definition 2.3 Let (Θ_k, P_k, Cr_k), k = 1, 2, ..., n be credibility spaces, Θ = Θ_1 × Θ_2 × ··· × Θ_n and Cr = Cr_1 ∧ Cr_2 ∧ ··· ∧ Cr_n. Then (Θ, P, Cr) is called the product credibility space of (Θ_k, P_k, Cr_k), k = 1, 2, ..., n.
For infinitely many credibility spaces (Θ_k, P_k, Cr_k), k = 1, 2, ..., with Θ = Θ_1 × Θ_2 × ···, the set function
$$\mathrm{Cr}\{A\} = \begin{cases} \displaystyle\sup_{(\theta_1,\theta_2,\ldots)\in A}\,\inf_{1\le k<\infty}\mathrm{Cr}_k\{\theta_k\}, & \text{if } \displaystyle\sup_{(\theta_1,\theta_2,\ldots)\in A}\,\inf_{1\le k<\infty}\mathrm{Cr}_k\{\theta_k\} < 0.5 \\[8pt] 1 - \displaystyle\sup_{(\theta_1,\theta_2,\ldots)\in A^c}\,\inf_{1\le k<\infty}\mathrm{Cr}_k\{\theta_k\}, & \text{if } \displaystyle\sup_{(\theta_1,\theta_2,\ldots)\in A}\,\inf_{1\le k<\infty}\mathrm{Cr}_k\{\theta_k\} \ge 0.5 \end{cases} \tag{2.22}$$
is a credibility measure on P.

2.2 Fuzzy Variables
2.3 Membership Function

Example: Consider the credibility space (Θ, P, Cr) with Θ = {θ_1, θ_2} and Cr{θ_1} = Cr{θ_2} = 1/2, and define
$$\xi_1(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 1, & \text{if } \theta = \theta_2, \end{cases} \qquad \xi_2(\theta) = \begin{cases} 1, & \text{if } \theta = \theta_1 \\ 0, & \text{if } \theta = \theta_2. \end{cases}$$
It is clear that both of them are fuzzy variables and have the same membership function, μ(x) ≡ 1 on x = 0 or 1.
Theorem 2.14 (Credibility Inversion Theorem) Let ξ be a fuzzy variable with membership function μ. Then for any set B of real numbers, we have
$$\mathrm{Cr}\{\xi\in B\} = \frac{1}{2}\left(\sup_{x\in B}\mu(x) + 1 - \sup_{x\in B^c}\mu(x)\right). \tag{2.27}$$
In particular, since the two suprema lie in [0, 1], it follows from (2.27) that
$$\mathrm{Cr}\{\xi\in B\} \ge \frac{1}{2}\sup_{x\in B}\mu(x), \tag{2.28}$$
$$\mathrm{Cr}\{\xi\in B\} \le 1 - \frac{1}{2}\sup_{x\in B^c}\mu(x). \tag{2.29}$$

Theorem. Let ξ be a fuzzy variable with membership function μ. Then for any real number x,
$$\mathrm{Cr}\{\xi\le x\} = \frac{1}{2}\left(\sup_{y\le x}\mu(y) + 1 - \sup_{y>x}\mu(y)\right), \tag{2.30}$$
$$\mathrm{Cr}\{\xi\ge x\} = \frac{1}{2}\left(\sup_{y\ge x}\mu(y) + 1 - \sup_{y<x}\mu(y)\right), \tag{2.31}$$
$$\mathrm{Cr}\{\xi = x\} = \frac{1}{2}\left(\mu(x) + 1 - \sup_{y\ne x}\mu(y)\right), \tag{2.32}$$
$$\mathrm{Cr}\{\xi = x\} \ge \frac{\mu(x)}{2}. \tag{2.33}$$
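For a fuzzy variable with finitely many possible values, the inversion theorem (2.27) is a two-line computation. A minimal sketch (the membership values are illustrative):

```python
def cr_event(mu, B):
    """Credibility inversion (2.27) for a discrete support:
    Cr{xi in B} = (sup_{x in B} mu(x) + 1 - sup_{x in B^c} mu(x)) / 2."""
    in_b = max((m for x, m in mu.items() if x in B), default=0.0)
    out_b = max((m for x, m in mu.items() if x not in B), default=0.0)
    return 0.5 * (in_b + 1.0 - out_b)

mu = {0: 1.0, 1: 0.6, 2: 0.8}   # hypothetical membership function
p = cr_event(mu, {0})           # 0.5 * (1.0 + 1 - 0.8) = 0.6
q = cr_event(mu, {1, 2})        # 0.5 * (0.8 + 1 - 1.0) = 0.4
```

Self-duality Cr{B} + Cr{B^c} = 1 holds automatically in this form, since the two suprema swap roles.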
Theorem 2.15 (Sufficient and Necessary Condition for Membership Function) A function μ: ℝ → [0, 1] is a membership function if and only if sup μ(x) = 1.

Proof: If μ is a membership function, then there exists a fuzzy variable ξ whose membership function is just μ, and
$$\sup_x \mu(x) = \sup_x\,(2\,\mathrm{Cr}\{\xi = x\})\wedge 1.$$
If there is some point x ∈ ℝ such that Cr{ξ = x} ≥ 0.5, then sup μ(x) = 1. Otherwise, we have Cr{ξ = x} < 0.5 for each x ∈ ℝ. It follows from Axiom 4 that
$$\sup_x \mu(x) = \sup_x\,(2\,\mathrm{Cr}\{\xi = x\})\wedge 1 = 2\sup_x\mathrm{Cr}\{\xi = x\} = 2\,(\mathrm{Cr}\{\Theta\}\wedge 0.5) = 1.$$
Conversely, suppose that sup μ(x) = 1. Take Θ = ℝ and define
$$\mathrm{Cr}\{x\} = \frac{1}{2}\left(\mu(x) + 1 - \sup_{y\ne x}\mu(y)\right)$$
for each x ∈ ℝ. It is clear that
$$\sup_x \mathrm{Cr}\{x\} \ge \frac{1}{2}(1 + 1 - 1) = 0.5.$$
For any x* with Cr{x*} ≥ 0.5, we have μ(x*) = 1, and since Cr{y} ≥ μ(y)/2 for every y,
$$\mathrm{Cr}\{x^*\} + \sup_{y\ne x^*}\mathrm{Cr}\{y\} \ge \frac{1}{2}\left(\mu(x^*) + 1 - \sup_{y\ne x^*}\mu(y)\right) + \frac{1}{2}\sup_{y\ne x^*}\mu(y) = 1 - \frac{1}{2}\sup_{y\ne x^*}\mu(y) + \frac{1}{2}\sup_{y\ne x^*}\mu(y) = 1.$$
Thus Cr{x} satisfies the credibility extension condition, and has a unique extension to a credibility measure on P(ℝ) by using the credibility extension theorem. Now we define a fuzzy variable ξ as an identity function from the credibility space (ℝ, P(ℝ), Cr) to ℝ. Then the membership function of the fuzzy variable ξ is
$$(2\,\mathrm{Cr}\{\xi = x\})\wedge 1 = \left(\mu(x) + 1 - \sup_{y\ne x}\mu(y)\right)\wedge 1 = \mu(x)$$
for each x. The theorem is proved.
By a triangular fuzzy variable we mean the fuzzy variable fully determined by the triplet (a, b, c) of crisp numbers with a < b < c, whose membership function is given by
$$\mu_2(x) = \begin{cases} \dfrac{x-a}{b-a}, & \text{if } a \le x \le b \\[4pt] \dfrac{x-c}{b-c}, & \text{if } b \le x \le c \\[4pt] 0, & \text{otherwise.} \end{cases}$$
By a trapezoidal fuzzy variable we mean the fuzzy variable fully determined by the quadruplet (a, b, c, d) of crisp numbers with a < b < c < d, whose membership function is given by
$$\mu_3(x) = \begin{cases} \dfrac{x-a}{b-a}, & \text{if } a \le x \le b \\[4pt] 1, & \text{if } b \le x \le c \\[4pt] \dfrac{x-d}{c-d}, & \text{if } c \le x \le d \\[4pt] 0, & \text{otherwise.} \end{cases}$$
(Figure: the membership functions μ1(x), μ2(x), and μ3(x).)
Conversely, a membership function may be obtained from a credibility measure by
$$\mu(x) = (2\,\mathrm{Cr}\{\xi = x\})\wedge 1. \tag{2.34}$$

2.4 Credibility Distribution

The credibility distribution Φ: ℝ → [0, 1] of a fuzzy variable ξ is defined by
$$\Phi(x) = \mathrm{Cr}\{\theta\in\Theta \mid \xi(\theta) \le x\}. \tag{2.35}$$
That is, Φ(x) is the credibility that the fuzzy variable ξ takes a value less than or equal to x. Generally speaking, the credibility distribution is neither left-continuous nor right-continuous.

Example 2.13: The credibility distribution of an equipossible fuzzy variable (a, b) is
$$\Phi_1(x) = \begin{cases} 0, & \text{if } x < a \\ 1/2, & \text{if } a \le x < b \\ 1, & \text{if } x \ge b. \end{cases}$$
Especially, if ξ is an equipossible fuzzy variable on ℝ, then its credibility distribution is Φ(x) ≡ 1/2.

Example 2.14: The credibility distribution of a triangular fuzzy variable (a, b, c) is
$$\Phi_2(x) = \begin{cases} 0, & \text{if } x \le a \\[2pt] \dfrac{x-a}{2(b-a)}, & \text{if } a \le x \le b \\[6pt] \dfrac{x+c-2b}{2(c-b)}, & \text{if } b \le x \le c \\[6pt] 1, & \text{if } x \ge c. \end{cases}$$
Example 2.15: The credibility distribution of a trapezoidal fuzzy variable (a, b, c, d) is
$$\Phi_3(x) = \begin{cases} 0, & \text{if } x \le a \\[2pt] \dfrac{x-a}{2(b-a)}, & \text{if } a \le x \le b \\[6pt] \dfrac{1}{2}, & \text{if } b \le x \le c \\[6pt] \dfrac{x+d-2c}{2(d-c)}, & \text{if } c \le x \le d \\[6pt] 1, & \text{if } x \ge d. \end{cases}$$
(Figure: the credibility distributions Φ1(x), Φ2(x), and Φ3(x).)
It follows from the credibility inversion theorem that
$$\Phi(x) = \frac{1}{2}\left(\sup_{y\le x}\mu(y) + 1 - \sup_{y>x}\mu(y)\right). \tag{2.36}$$

Theorem. A function Φ: ℝ → [0, 1] is a credibility distribution if and only if it is an increasing function with
$$\lim_{x\to-\infty}\Phi(x) \le 0.5 \le \lim_{x\to+\infty}\Phi(x), \tag{2.37}$$
$$\lim_{y\downarrow x}\Phi(y) = \Phi(x) \quad \text{if } \lim_{y\downarrow x}\Phi(y) > 0.5 \text{ or } \Phi(x) \ge 0.5. \tag{2.38}$$

Proof: It is obvious that a credibility distribution is an increasing function. The inequalities (2.37) follow from the credibility asymptotic theorem immediately. Assume that x is a point at which lim_{y↓x} Φ(y) > 0.5, that is,
$$\lim_{y\downarrow x}\mathrm{Cr}\{\xi\le y\} > 0.5.$$
Since {ξ ≤ y} ↓ {ξ ≤ x} as y ↓ x, it follows from the credibility semicontinuity law that lim_{y↓x} Φ(y) = Φ(x); the same conclusion holds when Φ(x) ≥ 0.5. Thus (2.37) and (2.38) are proved.

Conversely, if Φ: ℝ → [0, 1] is an increasing function satisfying (2.37) and (2.38), then
$$\mu(x) = \begin{cases} 2\Phi(x), & \text{if } \Phi(x) < 0.5 \\[2pt] 1, & \text{if } \displaystyle\lim_{y\uparrow x}\Phi(y) < 0.5 \le \Phi(x) \\[2pt] 2 - 2\Phi(x), & \text{if } 0.5 \le \displaystyle\lim_{y\uparrow x}\Phi(y) \end{cases} \tag{2.39}$$
takes values in [0, 1] and sup μ(x) = 1. It follows from Theorem 2.15 that there is a fuzzy variable ξ whose membership function is just μ. Let us verify that Φ is the credibility distribution of ξ, i.e., Cr{ξ ≤ x} = Φ(x) for each x. The argument breaks down into two cases. (i) If Φ(x) < 0.5, then we have sup_{y>x} μ(y) = 1, and μ(y) = 2Φ(y) for each y with y ≤ x. Thus
$$\mathrm{Cr}\{\xi\le x\} = \frac{1}{2}\left(\sup_{y\le x}\mu(y) + 1 - \sup_{y>x}\mu(y)\right) = \frac{1}{2}\left(2\Phi(x) + 1 - 1\right) = \Phi(x).$$
(ii) If Φ(x) ≥ 0.5, then we have sup_{y≤x} μ(y) = 1 and Φ(y) ≥ Φ(x) ≥ 0.5 for each y with y > x. Thus μ(y) = 2 − 2Φ(y) and
$$\mathrm{Cr}\{\xi\le x\} = \frac{1}{2}\left(1 + 1 - \sup_{y>x}(2 - 2\Phi(y))\right) = \lim_{y\downarrow x}\Phi(y) = \Phi(x),$$
where the last equality follows from (2.38). The theorem is proved.
Example: Let a and b be numbers with 0 ≤ a ≤ 0.5 ≤ b ≤ 1, and let ξ be a fuzzy variable with membership function
$$\mu(x) = \begin{cases} 2a, & \text{if } x < 0 \\ 1, & \text{if } x = 0 \\ 2 - 2b, & \text{if } x > 0. \end{cases}$$
Then its credibility distribution satisfies
$$\lim_{x\to-\infty}\Phi(x) = a, \qquad \lim_{x\to+\infty}\Phi(x) = b,$$
which shows that the limits in (2.37) may take any values a ≤ 0.5 ≤ b.

Example: Let ξ be an equipossible fuzzy variable on ℝ, i.e., μ(x) ≡ 1. Then its credibility distribution is Φ(x) ≡ 0.5. It is clear that Φ(x) is simple and continuous. But the fuzzy variable ξ is neither simple nor continuous.
Definition 2.13 A continuous fuzzy variable is said to be (a) singular if its credibility distribution is a singular function; (b) absolutely continuous if its credibility distribution is absolutely continuous.

Definition 2.14 (Liu [124]) The credibility density function φ: ℝ → [0, +∞) of a fuzzy variable ξ is a function such that
$$\Phi(x) = \int_{-\infty}^{x}\phi(y)\,dy, \quad \forall x\in\mathbb{R}, \tag{2.40}$$
$$\int_{-\infty}^{+\infty}\phi(y)\,dy = 1, \tag{2.41}$$
where Φ is the credibility distribution of ξ.
73
2(b
a)
1
(x) =
,
2(c b)
0,
Example 2.19: The credibility density function of a trapezoidal fuzzy variable (a, b, c, d) is
, if a x b
2(b
a)
1
(x) =
, if c x d
2(d c)
0,
otherwise.
Example 2.20: The credibility density function of an equipossible fuzzy variable (a, b) does not exist.

Example 2.21: The credibility density function does not necessarily exist even if the membership function is continuous and unimodal with a finite support. Let f be the Cantor function, and set
$$\mu(x) = \begin{cases} f(x), & \text{if } 0 \le x \le 1 \\ f(2-x), & \text{if } 1 < x \le 2 \\ 0, & \text{otherwise.} \end{cases} \tag{2.42}$$
Then μ is a continuous and unimodal function with μ(1) = 1. Hence μ is a membership function. However, its credibility distribution is not an absolutely continuous function. Thus the credibility density function does not exist.
Theorem 2.22 Let ξ be a fuzzy variable whose credibility density function φ exists. Then we have
$$\mathrm{Cr}\{\xi\le x\} = \int_{-\infty}^{x}\phi(y)\,dy, \qquad \mathrm{Cr}\{\xi\ge x\} = \int_{x}^{+\infty}\phi(y)\,dy. \tag{2.43}$$
Proof: The first part follows immediately from the definition. In addition, by the self-duality of credibility measure, we have
$$\mathrm{Cr}\{\xi\ge x\} = 1 - \mathrm{Cr}\{\xi < x\} = \int_{-\infty}^{+\infty}\phi(y)\,dy - \int_{-\infty}^{x}\phi(y)\,dy = \int_{x}^{+\infty}\phi(y)\,dy.$$
Moreover, for any numbers a ≤ b,
$$\mathrm{Cr}\{a\le\xi\le b\} = \int_{a}^{b}\phi(y)\,dy.$$
The joint credibility density function φ: ℝⁿ → [0, +∞) of a fuzzy vector (ξ1, ξ2, ..., ξn) is a function such that
$$\Phi(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1}\int_{-\infty}^{x_2}\cdots\int_{-\infty}^{x_n}\phi(y_1, y_2, \ldots, y_n)\,dy_1\,dy_2\cdots dy_n$$
holds for all (x_1, x_2, ..., x_n), where Φ is the joint credibility distribution of the fuzzy vector.
2.5 Independence

The independence of fuzzy variables has been discussed by many authors from different angles, for example, Zadeh [248], Nahmias [163], Yager [231], Liu [129], Liu and Gao [148], and Li and Liu [99]. A lot of equivalent conditions of independence have been presented. Here we use the following condition.

Definition 2.17 (Liu and Gao [148]) The fuzzy variables ξ1, ξ2, ..., ξm are said to be independent if
$$\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i\in B_i\}\right\} = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\} \tag{2.44}$$
for any sets B1, B2, ..., Bm of ℝ.

Theorem. The fuzzy variables ξ1, ξ2, ..., ξm are independent if and only if
$$\mathrm{Cr}\left\{\bigcup_{i=1}^{m}\{\xi_i\in B_i\}\right\} = \max_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\} \tag{2.45}$$
for any sets B1, B2, ..., Bm of ℝ. Indeed, by the self-duality of credibility measure,
$$\mathrm{Cr}\left\{\bigcup_{i=1}^{m}\{\xi_i\in B_i\}\right\} = 1 - \mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i\in B_i^c\}\right\} = 1 - \min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i^c\} = \max_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\}.$$

Theorem 2.24 The fuzzy variables ξ1, ξ2, ..., ξm are independent if and only if
$$\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i = x_i\}\right\} = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i = x_i\} \tag{2.46}$$
for any real numbers x1, x2, ..., xm.

Proof: If ξ1, ξ2, ..., ξm are independent, then (2.46) follows from (2.44) by taking B_i = {x_i}. Conversely, if (2.46) holds, then
$$\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i\in B_i\}\right\} = \sup_{x_i\in B_i, 1\le i\le m}\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i = x_i\}\right\} = \sup_{x_i\in B_i, 1\le i\le m}\,\min_{1\le i\le m}\mathrm{Cr}\{\xi_i = x_i\}$$
$$= \min_{1\le i\le m}\,\sup_{x_i\in B_i}\mathrm{Cr}\{\xi_i = x_i\} = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\}.$$
The theorem is proved.
Theorem 2.25 Let ξi be fuzzy variables with membership functions μi, i = 1, 2, ..., m, respectively, and μ the joint membership function of the fuzzy vector (ξ1, ξ2, ..., ξm). Then ξ1, ξ2, ..., ξm are independent if and only if
$$\mu(x_1, x_2, \ldots, x_m) = \min_{1\le i\le m}\mu_i(x_i) \tag{2.47}$$
for any real numbers x1, x2, ..., xm.

Proof: Suppose that ξ1, ξ2, ..., ξm are independent. It follows from Theorem 2.24 that
$$\mu(x_1, \ldots, x_m) = \left(2\,\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i = x_i\}\right\}\right)\wedge 1 = \left(2\min_{1\le i\le m}\mathrm{Cr}\{\xi_i = x_i\}\right)\wedge 1 = \min_{1\le i\le m}\,(2\,\mathrm{Cr}\{\xi_i = x_i\})\wedge 1 = \min_{1\le i\le m}\mu_i(x_i).$$
Conversely, suppose that μ(x1, x2, ..., xm) = min μi(xi). Then
$$\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i = x_i\}\right\} = \frac{1}{2}\mu(x_1, \ldots, x_m) = \frac{1}{2}\min_{1\le i\le m}\mu_i(x_i) = \frac{1}{2}\min_{1\le i\le m}\,(2\,\mathrm{Cr}\{\xi_i = x_i\})\wedge 1 = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i = x_i\}$$
whenever the credibilities involved do not exceed 0.5; the remaining cases are proved similarly. It follows from Theorem 2.24 that ξ1, ξ2, ..., ξm are independent. The theorem is proved.
Theorem 2.26 Let Φi be credibility distributions of fuzzy variables ξi, i = 1, 2, ..., m, respectively, and Φ the joint credibility distribution of the fuzzy vector (ξ1, ξ2, ..., ξm). If ξ1, ξ2, ..., ξm are independent, then we have
$$\Phi(x_1, x_2, \ldots, x_m) = \min_{1\le i\le m}\Phi_i(x_i) \tag{2.48}$$
for any real numbers x1, x2, ..., xm. Indeed,
$$\Phi(x_1, \ldots, x_m) = \mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i\le x_i\}\right\} = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i\le x_i\} = \min_{1\le i\le m}\Phi_i(x_i).$$
Note that for any fuzzy variable ξ with credibility distribution Φ,
$$\mathrm{Cr}\{(\xi\le x_1)\cap(\xi\le x_2)\} = \mathrm{Cr}\{\xi\le x_1\wedge x_2\} = \Phi(x_1)\wedge\Phi(x_2)$$
for any real numbers x1 and x2. But, generally speaking, a fuzzy variable is not independent of itself.

Theorem 2.27 Let ξ1, ξ2, ..., ξm be independent fuzzy variables, and f1, f2, ..., fm real-valued functions. Then f1(ξ1), f2(ξ2), ..., fm(ξm) are independent fuzzy variables.

Proof: For any sets B1, B2, ..., Bm of ℝ, we have
$$\mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{f_i(\xi_i)\in B_i\}\right\} = \mathrm{Cr}\left\{\bigcap_{i=1}^{m}\{\xi_i\in f_i^{-1}(B_i)\}\right\} = \min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in f_i^{-1}(B_i)\} = \min_{1\le i\le m}\mathrm{Cr}\{f_i(\xi_i)\in B_i\}.$$
The theorem is proved.
Theorem 2.28 (Extension Principle of Zadeh) Let ξ1, ξ2, ..., ξn be independent fuzzy variables with membership functions μ1, μ2, ..., μn, respectively, and f: ℝⁿ → ℝ a function. Then the membership function μ of ξ = f(ξ1, ξ2, ..., ξn) is derived from the membership functions μ1, μ2, ..., μn by
$$\mu(x) = \sup_{x = f(x_1, x_2, \ldots, x_n)}\,\min_{1\le i\le n}\mu_i(x_i) \tag{2.49}$$
for any x ∈ ℝ. Here we set μ(x) = 0 if there are no real numbers x1, x2, ..., xn such that x = f(x1, x2, ..., xn).

Proof: It follows from Definition 2.10 that the membership function of ξ = f(ξ1, ξ2, ..., ξn) is
$$\mu(x) = (2\,\mathrm{Cr}\{f(\xi_1, \ldots, \xi_n) = x\})\wedge 1 = \left(2\sup_{x = f(x_1, \ldots, x_n)}\mathrm{Cr}\left\{\bigcap_{i=1}^{n}\{\xi_i = x_i\}\right\}\right)\wedge 1$$
$$= \left(2\sup_{x = f(x_1, \ldots, x_n)}\,\min_{1\le i\le n}\mathrm{Cr}\{\xi_i = x_i\}\right)\wedge 1 \qquad \text{(by independence)}$$
$$= \sup_{x = f(x_1, \ldots, x_n)}\,\min_{1\le i\le n}\,(2\,\mathrm{Cr}\{\xi_i = x_i\})\wedge 1 = \sup_{x = f(x_1, \ldots, x_n)}\,\min_{1\le i\le n}\mu_i(x_i).$$
The theorem is proved.
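For discrete memberships the extension principle (2.49) is a direct double loop. A minimal sketch with f(x1, x2) = x1 + x2 (the membership values are illustrative):

```python
def ext_sum(mu1, mu2):
    """Membership of xi1 + xi2 by the extension principle (2.49):
    mu(x) = sup over x1 + x2 = x of min(mu1(x1), mu2(x2))."""
    out = {}
    for x1, m1 in mu1.items():
        for x2, m2 in mu2.items():
            s = x1 + x2
            out[s] = max(out.get(s, 0.0), min(m1, m2))
    return out

mu1 = {0: 0.5, 1: 1.0, 2: 0.5}   # hypothetical discrete memberships
mu2 = {0: 1.0, 1: 0.5}
mu = ext_sum(mu1, mu2)
```

Note that the result still satisfies sup μ = 1, as Theorem 2.15 requires of any membership function.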
2.6 Identical Distribution

Definition 2.18 (Liu [129]) The fuzzy variables ξ and η are said to be identically distributed if
$$\mathrm{Cr}\{\xi\in B\} = \mathrm{Cr}\{\eta\in B\} \tag{2.50}$$
for any set B of ℝ.

Theorem 2.29 The fuzzy variables ξ and η are identically distributed if and only if ξ and η have the same membership function.

Proof: Let μ and ν be the membership functions of ξ and η, respectively. If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℝ, we have
$$\mu(x) = (2\,\mathrm{Cr}\{\xi = x\})\wedge 1 = (2\,\mathrm{Cr}\{\eta = x\})\wedge 1 = \nu(x).$$
Thus ξ and η have the same membership function.
Conversely, if ξ and η have the same membership function, i.e., μ(x) ≡ ν(x), then, by using the credibility inversion theorem, we have
$$\mathrm{Cr}\{\xi\in B\} = \frac{1}{2}\left(\sup_{x\in B}\mu(x) + 1 - \sup_{x\in B^c}\mu(x)\right) = \frac{1}{2}\left(\sup_{x\in B}\nu(x) + 1 - \sup_{x\in B^c}\nu(x)\right) = \mathrm{Cr}\{\eta\in B\}$$
for any set B of ℝ. Thus ξ and η are identically distributed fuzzy variables.

Theorem 2.30 The fuzzy variables ξ and η are identically distributed if and only if Cr{ξ = x} = Cr{η = x} for each x ∈ ℝ.

Proof: If ξ and η are identically distributed fuzzy variables, then we immediately have Cr{ξ = x} = Cr{η = x} for each x. Conversely, it follows from
$$\mu(x) = (2\,\mathrm{Cr}\{\xi = x\})\wedge 1 = (2\,\mathrm{Cr}\{\eta = x\})\wedge 1 = \nu(x)$$
that ξ and η have the same membership function. Thus ξ and η are identically distributed fuzzy variables.

Theorem 2.31 If ξ and η are identically distributed fuzzy variables, then ξ and η have the same credibility distribution.

Proof: If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℝ, we have Cr{ξ ∈ (−∞, x]} = Cr{η ∈ (−∞, x]}. Thus ξ and η have the same credibility distribution.
Example 2.28: The converse of Theorem 2.31 is not true. We consider two fuzzy variables with the following membership functions,
$$\mu(x) = \begin{cases} 1.0, & \text{if } x = 0 \\ 0.6, & \text{if } x = 1 \\ 0.8, & \text{if } x = 2, \end{cases} \qquad \nu(x) = \begin{cases} 1.0, & \text{if } x = 0 \\ 0.7, & \text{if } x = 1 \\ 0.8, & \text{if } x = 2. \end{cases}$$
They have the same credibility distribution,
$$\Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ 0.6, & \text{if } 0 \le x < 2 \\ 1, & \text{if } x \ge 2. \end{cases}$$
However, they are not identically distributed fuzzy variables.

Theorem 2.32 Let ξ and η be two fuzzy variables whose credibility density functions exist. If ξ and η are identically distributed, then they have the same credibility density function.

Proof: It follows from Theorem 2.31 that the fuzzy variables ξ and η have the same credibility distribution. Hence they have the same credibility density function.
2.7 Expected Value

There are many ways to define an expected value operator for fuzzy variables. See, for example, Dubois and Prade [34], Heilpern [59], Campos and González [13], González [53] and Yager [225][236]. The most general definition of expected value operator of fuzzy variable was given by Liu and Liu [126]. This definition is applicable not only to continuous fuzzy variables but also to discrete ones.

Definition 2.19 (Liu and Liu [126]) Let ξ be a fuzzy variable. Then the expected value of ξ is defined by
$$E[\xi] = \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr - \int_{-\infty}^{0}\mathrm{Cr}\{\xi\le r\}\,dr \tag{2.51}$$
provided that at least one of the two integrals is finite.

Example: Let ξ be an equipossible fuzzy variable (a, b) with 0 ≤ a < b. Then
$$\mathrm{Cr}\{\xi\ge r\} = \begin{cases} 1, & \text{if } r\le a \\ 0.5, & \text{if } a < r\le b \\ 0, & \text{if } r > b, \end{cases}$$
and Cr{ξ ≤ r} = 0 for any r < 0. Thus
$$E[\xi] = \int_0^a 1\,dr + \int_a^b 0.5\,dr + \int_b^{+\infty}0\,dr - \int_{-\infty}^0 0\,dr = \frac{a+b}{2}.$$
If a < b ≤ 0, then
$$\mathrm{Cr}\{\xi\le r\} = \begin{cases} 1, & \text{if } r\ge b \\ 0.5, & \text{if } a\le r < b \\ 0, & \text{if } r < a, \end{cases}$$
and Cr{ξ ≥ r} = 0 for any r > 0. Thus
$$E[\xi] = -\left(\int_{-\infty}^a 0\,dr + \int_a^b 0.5\,dr + \int_b^0 1\,dr\right) = \frac{a+b}{2}.$$
If a < 0 < b, then
$$\mathrm{Cr}\{\xi\ge r\} = \begin{cases} 0.5, & \text{if } 0\le r\le b \\ 0, & \text{if } r > b, \end{cases} \qquad \mathrm{Cr}\{\xi\le r\} = \begin{cases} 0, & \text{if } r < a \\ 0.5, & \text{if } a\le r\le 0, \end{cases}$$
and thus
$$E[\xi] = \int_0^b 0.5\,dr - \int_a^0 0.5\,dr = \frac{a+b}{2}.$$
In every case, E[ξ] = (a + b)/2.
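Definition 2.19 can also be integrated numerically. The sketch below uses a triangular fuzzy variable with positive support (so only the first integral in (2.51) contributes) and compares the result with the closed form (a + 2b + c)/4 known for triangular fuzzy variables; the triplet and step count are arbitrary choices:

```python
def cr_ge(r, a, b, c):
    """Cr{xi >= r} for a triangular fuzzy variable (a, b, c)."""
    if r <= a:
        return 1.0
    if r <= b:
        return 1.0 - (r - a) / (2 * (b - a))
    if r <= c:
        return (c - r) / (2 * (c - b))
    return 0.0

a, b, c = 1.0, 2.0, 4.0   # positive support: the second integral in (2.51) vanishes
n = 100_000
h = c / n
E = sum(cr_ge((i + 0.5) * h, a, b, c) for i in range(n)) * h   # midpoint rule
exact = (a + 2 * b + c) / 4
```

The midpoint rule is nearly exact here because the integrand is piecewise linear.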
Example: Let ξ be a fuzzy variable with membership function
$$\mu(x) = \begin{cases} 0, & \text{if } x < 0 \\ x, & \text{if } 0\le x\le 1 \\ 1, & \text{if } x > 1. \end{cases}$$
Then Cr{ξ ≥ r} = 0.5 for every r > 1, so the first integral in (2.51) diverges and E[ξ] = +∞. Similarly, let η be a fuzzy variable with membership function
$$\nu(x) = \begin{cases} 1, & \text{if } x < 0 \\ 1 - x, & \text{if } 0\le x\le 1 \\ 0, & \text{if } x > 1. \end{cases}$$
Then its expected value is E[η] = −∞.
Example 2.35: The expected value may not exist for some fuzzy variables. For example, the fuzzy variable ξ with membership function
$$\mu(x) = \frac{1}{1 + |x|}, \quad x\in\mathbb{R},$$
has no expected value because both
$$\int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr \quad\text{and}\quad \int_{-\infty}^{0}\mathrm{Cr}\{\xi\le r\}\,dr$$
are infinite.

Example 2.36: The definition of expected value operator is also applicable to the discrete case. Assume that ξ is a simple fuzzy variable whose membership function is given by
$$\mu(x) = \begin{cases} \mu_1, & \text{if } x = x_1 \\ \mu_2, & \text{if } x = x_2 \\ \cdots \\ \mu_m, & \text{if } x = x_m \end{cases} \tag{2.52}$$
where x1, x2, ..., xm are distinct numbers. Note that μ1 ∨ μ2 ∨ ··· ∨ μm = 1. Definition 2.19 implies that the expected value of ξ is
$$E[\xi] = \sum_{i=1}^{m}w_i x_i \tag{2.53}$$
where, provided that x1 < x2 < ··· < xm, the weights are given by
$$w_i = \frac{1}{2}\left(\max_{1\le j\le i}\mu_j - \max_{1\le j<i}\mu_j + \max_{i\le j\le m}\mu_j - \max_{i<j\le m}\mu_j\right)$$
for i = 1, 2, ..., m (with the conventions that a maximum over an empty index set is 0).
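The weights in (2.53) are easy to implement for sorted support points. A minimal sketch (support points and memberships are illustrative):

```python
def expected_simple(xs, mus):
    """E[xi] of a simple fuzzy variable via the weights of (2.53),
    assuming x1 < x2 < ... < xm and max(mus) == 1."""
    m = len(xs)
    w = [
        0.5 * (max(mus[: i + 1]) - max(mus[:i], default=0.0)
               + max(mus[i:]) - max(mus[i + 1:], default=0.0))
        for i in range(m)
    ]
    return sum(wi * xi for wi, xi in zip(w, xs))

e1 = expected_simple([0.0, 1.0], [1.0, 1.0])          # equipossible: (0 + 1)/2
e2 = expected_simple([0.0, 1.0, 2.0], [1.0, 0.6, 0.8])
```

For the second call the weights come out as (0.6, 0, 0.4), matching a direct integration of (2.51).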
Example 2.38: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ··· < xm and there exists an index k with 1 < k < m such that
$$\mu_1 \le \mu_2 \le \cdots \le \mu_k \quad\text{and}\quad \mu_k \ge \mu_{k+1} \ge \cdots \ge \mu_m.$$
Note that μk ≡ 1. Then the expected value is determined by (2.53) and the weights are given by
$$w_i = \begin{cases} \dfrac{\mu_1}{2}, & \text{if } i = 1 \\[4pt] \dfrac{\mu_i - \mu_{i-1}}{2}, & \text{if } i = 2, 3, \ldots, k-1 \\[4pt] 1 - \dfrac{\mu_{k-1} + \mu_{k+1}}{2}, & \text{if } i = k \\[4pt] \dfrac{\mu_i - \mu_{i+1}}{2}, & \text{if } i = k+1, k+2, \ldots, m-1 \\[4pt] \dfrac{\mu_m}{2}, & \text{if } i = m. \end{cases}$$

Example 2.39: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ··· < xm and μ1 ≤ μ2 ≤ ··· ≤ μm (μm ≡ 1). Then the expected value is determined by (2.53) and the weights are given by
$$w_i = \begin{cases} \dfrac{\mu_1}{2}, & \text{if } i = 1 \\[4pt] \dfrac{\mu_i - \mu_{i-1}}{2}, & \text{if } i = 2, 3, \ldots, m-1 \\[4pt] 1 - \dfrac{\mu_{m-1}}{2}, & \text{if } i = m. \end{cases}$$

Example 2.40: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ··· < xm and μ1 ≥ μ2 ≥ ··· ≥ μm (μ1 ≡ 1). Then the expected value is determined by (2.53) and the weights are given by
$$w_i = \begin{cases} 1 - \dfrac{\mu_2}{2}, & \text{if } i = 1 \\[4pt] \dfrac{\mu_i - \mu_{i+1}}{2}, & \text{if } i = 2, 3, \ldots, m-1 \\[4pt] \dfrac{\mu_m}{2}, & \text{if } i = m. \end{cases}$$
Theorem 2.33 (Liu [124]) Let ξ be a fuzzy variable whose credibility density function φ exists. If the Lebesgue integral
$$\int_{-\infty}^{+\infty}x\phi(x)\,dx$$
is finite, then we have
$$E[\xi] = \int_{-\infty}^{+\infty}x\phi(x)\,dx. \tag{2.54}$$
Proof: It follows from the definition of expected value operator and the Fubini theorem that
$$E[\xi] = \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi\le r\}\,dr = \int_0^{+\infty}\left(\int_r^{+\infty}\phi(x)\,dx\right)dr - \int_{-\infty}^0\left(\int_{-\infty}^r\phi(x)\,dx\right)dr$$
$$= \int_0^{+\infty}\left(\int_0^x\phi(x)\,dr\right)dx - \int_{-\infty}^0\left(\int_x^0\phi(x)\,dr\right)dx = \int_0^{+\infty}x\phi(x)\,dx + \int_{-\infty}^0 x\phi(x)\,dx = \int_{-\infty}^{+\infty}x\phi(x)\,dx.$$
The theorem is proved.

One might hope that E[ξ] = ∫ x dΦ(x) holds for any fuzzy variable with credibility distribution Φ, but this is not always true. Consider the fuzzy variable with membership function
$$\mu(x) = \begin{cases} 0, & \text{if } x < 0 \\ x, & \text{if } 0\le x\le 1 \\ 1, & \text{if } x > 1, \end{cases}$$
whose expected value is +∞. Its credibility distribution is Φ(x) = x/2 on [0, 1] and Φ(x) = 1/2 for x > 1, so that
$$\int_{-\infty}^{+\infty}x\,d\Phi(x) = \int_0^1 \frac{x}{2}\,dx = \frac{1}{4} \ne +\infty = E[\xi].$$
Theorem 2.34 (Liu [129]) Let ξ be a fuzzy variable with credibility distribution Φ. If
$$\lim_{x\to-\infty}\Phi(x) = 0, \qquad \lim_{x\to+\infty}\Phi(x) = 1$$
and the Lebesgue–Stieltjes integral ∫ x dΦ(x) is finite, then we have
$$E[\xi] = \int_{-\infty}^{+\infty}x\,d\Phi(x). \tag{2.55}$$
Proof: Since the Lebesgue–Stieltjes integral is finite, we immediately have
$$\lim_{y\to+\infty}\int_0^y x\,d\Phi(x) = \int_0^{+\infty}x\,d\Phi(x), \qquad \lim_{y\to-\infty}\int_y^0 x\,d\Phi(x) = \int_{-\infty}^0 x\,d\Phi(x),$$
$$\lim_{y\to+\infty}\int_y^{+\infty}x\,d\Phi(x) = 0, \qquad \lim_{y\to-\infty}\int_{-\infty}^y x\,d\Phi(x) = 0.$$
It follows from
$$\int_y^{+\infty}x\,d\Phi(x) \ge y\,(1 - \Phi(y)) \ge 0 \text{ for } y > 0, \qquad \int_{-\infty}^y x\,d\Phi(x) \le y\,\Phi(y) \le 0 \text{ for } y < 0$$
that
$$\lim_{y\to+\infty}y\,(1 - \Phi(y)) = 0, \qquad \lim_{y\to-\infty}y\,\Phi(y) = 0.$$
Let 0 = x_0 < x_1 < x_2 < ··· < x_n = y be a partition of [0, y]. Then we have
$$\sum_{i=0}^{n-1}x_i\left(\Phi(x_{i+1}) - \Phi(x_i)\right) \to \int_0^y x\,d\Phi(x)$$
and
$$\sum_{i=0}^{n-1}\left(1 - \Phi(x_{i+1})\right)(x_{i+1} - x_i) \to \int_0^y \mathrm{Cr}\{\xi\ge r\}\,dr$$
as max{|x_{i+1} − x_i| : i = 0, 1, ..., n−1} → 0. Since
$$\sum_{i=0}^{n-1}x_i\left(\Phi(x_{i+1}) - \Phi(x_i)\right) + \sum_{i=0}^{n-1}\left(1 - \Phi(x_{i+1})\right)(x_{i+1} - x_i) = y\,(1 - \Phi(y)) \to 0 \quad (y\to+\infty),$$
we obtain
$$\int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr = \int_0^{+\infty}x\,d\Phi(x).$$
A similar argument shows that
$$-\int_{-\infty}^0\mathrm{Cr}\{\xi\le r\}\,dr = \int_{-\infty}^0 x\,d\Phi(x),$$
and (2.55) follows. The theorem is proved.
Theorem 2.35 (Liu and Liu [126]) Let ξ and η be independent fuzzy variables with finite expected values. Then for any numbers a and b, we have
$$E[a\xi + b\eta] = aE[\xi] + bE[\eta]. \tag{2.56}$$
Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. If b ≥ 0, we have
$$E[\xi + b] = \int_0^{+\infty}\mathrm{Cr}\{\xi + b\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi + b\le r\}\,dr = \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r - b\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi\le r - b\}\,dr$$
$$= E[\xi] + \int_0^b\left(\mathrm{Cr}\{\xi\ge r - b\} + \mathrm{Cr}\{\xi < r - b\}\right)dr = E[\xi] + b.$$
If b < 0, then a similar shift of the integration variable gives E[ξ + b] = E[ξ] + b.
Step 2: We prove that E[aξ] = aE[ξ] for any real number a. The case a = 0 is trivial. If a > 0, we have
$$E[a\xi] = \int_0^{+\infty}\mathrm{Cr}\{a\xi\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{a\xi\le r\}\,dr = \int_0^{+\infty}\mathrm{Cr}\left\{\xi\ge\frac{r}{a}\right\}dr - \int_{-\infty}^0\mathrm{Cr}\left\{\xi\le\frac{r}{a}\right\}dr$$
$$= a\int_0^{+\infty}\mathrm{Cr}\left\{\xi\ge\frac{r}{a}\right\}d\!\left(\frac{r}{a}\right) - a\int_{-\infty}^0\mathrm{Cr}\left\{\xi\le\frac{r}{a}\right\}d\!\left(\frac{r}{a}\right) = aE[\xi].$$
If a < 0, we have
$$E[a\xi] = \int_0^{+\infty}\mathrm{Cr}\left\{\xi\le\frac{r}{a}\right\}dr - \int_{-\infty}^0\mathrm{Cr}\left\{\xi\ge\frac{r}{a}\right\}dr = aE[\xi].$$
Step 3: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are simple fuzzy variables with membership functions
$$\mu(x) = \begin{cases} \mu_1, & \text{if } x = a_1 \\ \cdots \\ \mu_m, & \text{if } x = a_m, \end{cases} \qquad \nu(x) = \begin{cases} \nu_1, & \text{if } x = b_1 \\ \cdots \\ \nu_n, & \text{if } x = b_n. \end{cases}$$
Then ξ + η is also a simple fuzzy variable taking values a_i + b_j with membership degrees μ_i ∧ ν_j, i = 1, 2, ..., m, j = 1, 2, ..., n, respectively. Now we define
$$w_i = \frac{1}{2}\left(\max_{a_k\le a_i}\mu_k - \max_{a_k<a_i}\mu_k + \max_{a_k\ge a_i}\mu_k - \max_{a_k>a_i}\mu_k\right),$$
$$w_j' = \frac{1}{2}\left(\max_{b_l\le b_j}\nu_l - \max_{b_l<b_j}\nu_l + \max_{b_l\ge b_j}\nu_l - \max_{b_l>b_j}\nu_l\right),$$
$$w_{ij} = \frac{1}{2}\Big(\max_{a_k+b_l\le a_i+b_j}\{\mu_k\wedge\nu_l\} - \max_{a_k+b_l<a_i+b_j}\{\mu_k\wedge\nu_l\} + \max_{a_k+b_l\ge a_i+b_j}\{\mu_k\wedge\nu_l\} - \max_{a_k+b_l>a_i+b_j}\{\mu_k\wedge\nu_l\}\Big),$$
where the maxima are taken over 1 ≤ k ≤ m and 1 ≤ l ≤ n. It is also easy to verify that
$$w_i = \sum_{j=1}^n w_{ij}, \qquad w_j' = \sum_{i=1}^m w_{ij}.$$
Since
$$E[\xi] = \sum_{i=1}^m a_i w_i, \qquad E[\eta] = \sum_{j=1}^n b_j w_j', \qquad E[\xi + \eta] = \sum_{i=1}^m\sum_{j=1}^n (a_i + b_j)w_{ij},$$
we obtain E[ξ + η] = E[ξ] + E[η].
Step 4: We prove that E[ξ + η] = E[ξ] + E[η] for arbitrary independent fuzzy variables ξ and η with finite expected values. For each i ≥ 1, we approximate ξ by a simple fuzzy variable ξ_i via
$$\mathrm{Cr}\{\xi_i\le x\} = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\xi\le x\} < \dfrac{k}{2^i},\ k = 1, 2, \ldots, 2^{i-1} \\[6pt] \dfrac{k}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\xi\le x\} < \dfrac{k}{2^i},\ k = 2^{i-1}+1, \ldots, 2^i \\[6pt] 1, & \text{if } \mathrm{Cr}\{\xi\le x\} = 1, \end{cases} \tag{2.57}$$
and similarly η by η_i from Cr{η ≤ x}. By independence,
$$\mathrm{Cr}\{\xi_i + \eta_i\ge r\} = \sup_{x\ge 0,\,y\ge 0,\,x+y\ge r}\mathrm{Cr}\{\xi_i\ge x\}\wedge\mathrm{Cr}\{\eta_i\ge y\} \ge \sup_{x\ge 0,\,y\ge 0,\,x+y\ge r}\mathrm{Cr}\{\xi\ge x\}\wedge\mathrm{Cr}\{\eta\ge y\} = \mathrm{Cr}\{\xi + \eta\ge r\}.$$
That is,
$$\mathrm{Cr}\{\xi_i + \eta_i\ge r\} \ge \mathrm{Cr}\{\xi + \eta\ge r\}, \quad \text{if } r\ge 0.$$
A similar way may prove that
$$\mathrm{Cr}\{\xi_i + \eta_i\le r\} \ge \mathrm{Cr}\{\xi + \eta\le r\}, \quad \text{if } r\le 0.$$
Since the expected values E[ξ] and E[η] exist, we have
$$E[\xi_i] = \int_0^{+\infty}\mathrm{Cr}\{\xi_i\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi_i\le r\}\,dr \to E[\xi],$$
$$E[\eta_i] = \int_0^{+\infty}\mathrm{Cr}\{\eta_i\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\eta_i\le r\}\,dr \to E[\eta],$$
$$E[\xi_i + \eta_i] = \int_0^{+\infty}\mathrm{Cr}\{\xi_i + \eta_i\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{\xi_i + \eta_i\le r\}\,dr \to E[\xi + \eta]$$
as i → ∞.
Step 5: It follows from Step 3 that E[ξ_i + η_i] = E[ξ_i] + E[η_i] for each i. Letting i → ∞, we obtain E[ξ + η] = E[ξ] + E[η].
Step 6: We prove that E[aξ + bη] = aE[ξ] + bE[η] for any real numbers a and b. In fact, the equation follows immediately from Steps 2 and 5. The theorem is proved.

Example 2.42: Theorem 2.35 does not hold if ξ and η are not independent. For example, take (Θ, P, Cr) to be {θ1, θ2, θ3} with Cr{θ1} = 0.7, Cr{θ2} = 0.3 and Cr{θ3} = 0.2. The fuzzy variables are defined by
$$\xi_1(\theta) = \begin{cases} 1, & \text{if } \theta = \theta_1 \\ 0, & \text{if } \theta = \theta_2 \\ 2, & \text{if } \theta = \theta_3, \end{cases} \qquad \xi_2(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 2, & \text{if } \theta = \theta_2 \\ 3, & \text{if } \theta = \theta_3. \end{cases}$$
Then we have
$$(\xi_1 + \xi_2)(\theta) = \begin{cases} 1, & \text{if } \theta = \theta_1 \\ 2, & \text{if } \theta = \theta_2 \\ 5, & \text{if } \theta = \theta_3. \end{cases}$$
Thus E[ξ1] = 0.9, E[ξ2] = 0.8, and E[ξ1 + ξ2] = 1.9. This fact implies that
$$E[\xi_1 + \xi_2] > E[\xi_1] + E[\xi_2].$$
If the fuzzy variables are defined by
$$\xi_1(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 1, & \text{if } \theta = \theta_2 \\ 2, & \text{if } \theta = \theta_3, \end{cases} \qquad \xi_2(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 3, & \text{if } \theta = \theta_2 \\ 1, & \text{if } \theta = \theta_3, \end{cases}$$
then we have
$$(\xi_1 + \xi_2)(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 4, & \text{if } \theta = \theta_2 \\ 3, & \text{if } \theta = \theta_3. \end{cases}$$
Thus E[ξ1] = 0.5, E[ξ2] = 0.9, and E[ξ1 + ξ2] = 1.2. This fact implies that
$$E[\xi_1 + \xi_2] < E[\xi_1] + E[\xi_2].$$

Expected Value of Function of Fuzzy Variable

Let ξ be a fuzzy variable, and f: ℝ → ℝ a function. Then the expected value of f(ξ) is
$$E[f(\xi)] = \int_0^{+\infty}\mathrm{Cr}\{f(\xi)\ge r\}\,dr - \int_{-\infty}^0\mathrm{Cr}\{f(\xi)\le r\}\,dr.$$
For the random case, it has been proved that the expected value E[f(ξ)] is the Lebesgue–Stieltjes integral of f(x) with respect to the probability distribution of ξ. The following example shows that this is not necessarily the case for fuzzy variables.
Example: Consider the fuzzy variable ξ with membership function
$$\mu(x) = \begin{cases} 0.6, & \text{if } -1\le x < 0 \\ 1, & \text{if } 0\le x\le 1 \\ 0, & \text{otherwise.} \end{cases}$$
Then the expected value E[ξ²] = 0.5. However, the credibility distribution of ξ is
$$\Phi(x) = \begin{cases} 0, & \text{if } x < -1 \\ 0.3, & \text{if } -1\le x < 0 \\ 0.5, & \text{if } 0\le x < 1 \\ 1, & \text{if } x\ge 1, \end{cases}$$
and the Lebesgue–Stieltjes integral
$$\int_{-\infty}^{+\infty}x^2\,d\Phi(x) = (-1)^2\times 0.3 + 0^2\times 0.2 + 1^2\times 0.5 = 0.8 \ne 0.5 = E[\xi^2].$$

Remark 2.6: When f(x) is a monotone and continuous function, Zhu and Ji [263] proved that
$$E[f(\xi)] = \int_{-\infty}^{+\infty}f(x)\,d\Phi(x) \tag{2.58}$$
where Φ is the credibility distribution of ξ.
i = E [
n1 ] .
(2.59)
i=1
i r
Cr
i=1
sup
Cr{
n = n}
sup Cr{
n = n}
nxr
min Cr {i = x}
1in
= sup Cr{
n = n} Cr{1 = x}
nxr
= Cr {
n1 r} .
min Cr {i = xi }
1in
On the other hand, for any given > 0, there exists an integer n and real
numbers x1 , x2 , , xn with x1 + x2 + + xn r such that
n
i r
Cr
Cr{
n = n} Cr{i = xi }
i=1
i r
Cr
Cr{
n = n} Cr{1 = x1 } Cr {
n1 r} .
i=1
Letting 0, we get
n
Cr
i r
Cr {
n1 r} .
i r
= Cr {
n1 r} .
i=1
It follows that
n
Cr
i=1
Similarly, the above identity still holds if the symbol is replaced with
. Finally, by the definition of expected value operator, we have
n
i =
0
i=1
dr
i r
Cr
i=1
dr
i=1
Cr {
n1 r} dr
i r
Cr
Cr {
n1 r} dr = E [
n1 ] .
2.8 Variance

Definition 2.20 (Liu and Liu [126]) Let ξ be a fuzzy variable with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].

The variance of a fuzzy variable provides a measure of the spread of the distribution around its expected value.

Example 2.44: Let ξ be an equipossible fuzzy variable (a, b). Then its expected value is e = (a + b)/2, and for any positive number r, we have
$$\mathrm{Cr}\{(\xi - e)^2\ge r\} = \begin{cases} 1/2, & \text{if } r\le (b-a)^2/4 \\ 0, & \text{if } r > (b-a)^2/4. \end{cases}$$
Thus the variance is
$$V[\xi] = \int_0^{+\infty}\mathrm{Cr}\{(\xi - e)^2\ge r\}\,dr = \int_0^{(b-a)^2/4}\frac{1}{2}\,dr = \frac{(b-a)^2}{8}.$$
A fuzzy variable ξ is called normally distributed if it has a normal membership function
$$\mu(x) = 2\left(1 + \exp\left(\frac{\pi|x - e|}{\sqrt{6}\,\sigma}\right)\right)^{-1}, \quad x\in\mathbb{R},\ \sigma > 0. \tag{2.60}$$
(Figure: the normal membership function μ(x), with maximum 1 at x = e and μ(e ± σ) ≈ 0.434.)
Theorem. Let ξ be a fuzzy variable that takes values in [a, b] and has expected value e. Then for any convex function f,
$$E[f(\xi)] \le \frac{b-e}{b-a}f(a) + \frac{e-a}{b-a}f(b). \tag{2.61}$$
Proof: For each θ, we have a ≤ ξ(θ) ≤ b and
$$\xi(\theta) = \frac{b-\xi(\theta)}{b-a}a + \frac{\xi(\theta)-a}{b-a}b.$$
It follows from the convexity of f that
$$f(\xi(\theta)) \le \frac{b-\xi(\theta)}{b-a}f(a) + \frac{\xi(\theta)-a}{b-a}f(b).$$
Taking expected values on both sides, we obtain (2.61).

Theorem (Maximum Variance). Let ξ be a fuzzy variable that takes values in [a, b] and has expected value e. Then
$$V[\xi] \le (e-a)(b-e), \tag{2.62}$$
and equality holds for the fuzzy variable determined by the membership function
$$\mu(x) = \begin{cases} \left(\dfrac{2(b-e)}{b-a}\right)\wedge 1, & \text{if } x = a \\[6pt] \left(\dfrac{2(e-a)}{b-a}\right)\wedge 1, & \text{if } x = b. \end{cases} \tag{2.63}$$
Proof: Applying (2.61) with the convex function f(x) = (x − e)² yields
$$V[\xi] = E[(\xi-e)^2] \le \frac{b-e}{b-a}(a-e)^2 + \frac{e-a}{b-a}(b-e)^2 = (e-a)(b-e).$$
2.9 Moments

Definition 2.21 (Liu [128]) Let ξ be a fuzzy variable, and k a positive number. Then
(a) the expected value E[ξ^k] is called the kth moment;
(b) the expected value E[|ξ|^k] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])^k] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|^k] is called the kth absolute central moment.

Note that the first central moment is always 0, the first moment is just the expected value, and the second central moment is just the variance.

Example 2.48: A fuzzy variable ξ is called exponentially distributed if it has an exponential membership function
$$\mu(x) = 2\left(1 + \exp\left(\frac{\pi x}{\sqrt{6}\,m}\right)\right)^{-1}, \quad x\ge 0,\ m > 0. \tag{2.64}$$

Theorem. Let ξ be a nonnegative fuzzy variable and k a positive number. Then
$$E[\xi^k] = k\int_0^{+\infty}r^{k-1}\mathrm{Cr}\{\xi\ge r\}\,dr. \tag{2.65}$$
Proof: It follows from the definition of expected value that
$$E[\xi^k] = \int_0^{+\infty}\mathrm{Cr}\{\xi^k\ge x\}\,dx = \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,d(r^k) = k\int_0^{+\infty}r^{k-1}\mathrm{Cr}\{\xi\ge r\}\,dr,$$
where the substitution x = r^k is used.
Theorem 2.42 (Li and Liu [104]) Let ξ be a fuzzy variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the kth absolute moment and kth absolute central moment satisfy the following inequalities,
$$E[|\xi|^k] \le \frac{b-e}{b-a}|a|^k + \frac{e-a}{b-a}|b|^k, \tag{2.66}$$
$$E[|\xi - e|^k] \le \frac{b-e}{b-a}(e-a)^k + \frac{e-a}{b-a}(b-e)^k. \tag{2.67}$$
2.10 Critical Values

In order to rank fuzzy variables, we may use two critical values: optimistic value and pessimistic value.

Definition 2.22 (Liu [124]) Let ξ be a fuzzy variable, and α ∈ (0, 1]. Then
$$\xi_{\sup}(\alpha) = \sup\left\{r \mid \mathrm{Cr}\{\xi\ge r\}\ge\alpha\right\} \tag{2.68}$$
is called the α-optimistic value to ξ, and
$$\xi_{\inf}(\alpha) = \inf\left\{r \mid \mathrm{Cr}\{\xi\le r\}\ge\alpha\right\} \tag{2.69}$$
is called the α-pessimistic value to ξ.

Example: Let ξ be an equipossible fuzzy variable (a, b). Then
$$\xi_{\sup}(\alpha) = \begin{cases} b, & \text{if } \alpha\le 0.5 \\ a, & \text{if } \alpha > 0.5, \end{cases} \qquad \xi_{\inf}(\alpha) = \begin{cases} a, & \text{if } \alpha\le 0.5 \\ b, & \text{if } \alpha > 0.5. \end{cases}$$
Example: Let ξ be a triangular fuzzy variable (a, b, c). Then
$$\xi_{\sup}(\alpha) = \begin{cases} 2\alpha b + (1-2\alpha)c, & \text{if } \alpha\le 0.5 \\ (2\alpha-1)a + (2-2\alpha)b, & \text{if } \alpha > 0.5, \end{cases} \qquad \xi_{\inf}(\alpha) = \begin{cases} (1-2\alpha)a + 2\alpha b, & \text{if } \alpha\le 0.5 \\ (2-2\alpha)b + (2\alpha-1)c, & \text{if } \alpha > 0.5. \end{cases}$$
Example: Let ξ be a trapezoidal fuzzy variable (a, b, c, d). Then
$$\xi_{\sup}(\alpha) = \begin{cases} 2\alpha c + (1-2\alpha)d, & \text{if } \alpha\le 0.5 \\ (2\alpha-1)a + (2-2\alpha)b, & \text{if } \alpha > 0.5, \end{cases} \qquad \xi_{\inf}(\alpha) = \begin{cases} (1-2\alpha)a + 2\alpha b, & \text{if } \alpha\le 0.5 \\ (2-2\alpha)c + (2\alpha-1)d, & \text{if } \alpha > 0.5. \end{cases}$$
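Since the credibility distribution of a triangular fuzzy variable is continuous and strictly monotone between a and c, Cr{ξ ≥ r} evaluated at the α-optimistic value should equal α exactly. A sketch checking the formulas above (the triplet and the α grid are arbitrary choices):

```python
def tri_sup(alpha, a, b, c):
    """alpha-optimistic value of a triangular fuzzy variable (a, b, c)."""
    if alpha <= 0.5:
        return 2 * alpha * b + (1 - 2 * alpha) * c
    return (2 * alpha - 1) * a + (2 - 2 * alpha) * b

def cr_ge(r, a, b, c):
    """Cr{xi >= r} for the same triangular variable."""
    if r <= a:
        return 1.0
    if r <= b:
        return 1.0 - (r - a) / (2 * (b - a))
    if r <= c:
        return (c - r) / (2 * (c - b))
    return 0.0

a, b, c = 0.0, 1.0, 3.0
checks = all(
    abs(cr_ge(tri_sup(al, a, b, c), a, b, c) - al) < 1e-12
    for al in (0.2, 0.4, 0.5, 0.6, 0.8)
)
```

For instance, at α = 0.2 the optimistic value is 2.2 and indeed Cr{ξ ≥ 2.2} = 0.2.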
Theorem. Let ξ be a fuzzy variable and α > 0.5. Then
$$\mathrm{Cr}\{\xi\le\xi_{\inf}(\alpha)\}\ge\alpha, \qquad \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)\}\ge\alpha. \tag{2.70}$$
Proof: It follows from the definition of the α-pessimistic value that there exists a decreasing sequence {x_i} such that Cr{ξ ≤ x_i} ≥ α and x_i ↓ ξ_inf(α) as i → ∞. Since {ξ ≤ x_i} ↓ {ξ ≤ ξ_inf(α)} and lim_{i→∞} Cr{ξ ≤ x_i} ≥ α > 0.5, it follows from the credibility semicontinuity law that
$$\mathrm{Cr}\{\xi\le\xi_{\inf}(\alpha)\} = \lim_{i\to\infty}\mathrm{Cr}\{\xi\le x_i\}\ge\alpha.$$
The inequality for the α-optimistic value may be proved similarly.
Thus {ξ_inf(α_i)} is an increasing sequence. If its limit equals ξ_inf(α), then the left-continuity is proved. Otherwise, there exists a number z such that
$$\lim_{i\to\infty}\xi_{\inf}(\alpha_i) < z < \xi_{\inf}(\alpha).$$

Theorem 2.47 Suppose that ξ and η are independent fuzzy variables. Then for any α ∈ (0, 1], we have
$$(\xi+\eta)_{\sup}(\alpha) = \xi_{\sup}(\alpha) + \eta_{\sup}(\alpha), \qquad (\xi+\eta)_{\inf}(\alpha) = \xi_{\inf}(\alpha) + \eta_{\inf}(\alpha);$$
$$(\xi\eta)_{\sup}(\alpha) = \xi_{\sup}(\alpha)\,\eta_{\sup}(\alpha), \qquad (\xi\eta)_{\inf}(\alpha) = \xi_{\inf}(\alpha)\,\eta_{\inf}(\alpha), \quad \text{if } \xi\ge 0 \text{ and } \eta\ge 0;$$
$$(\xi\vee\eta)_{\sup}(\alpha) = \xi_{\sup}(\alpha)\vee\eta_{\sup}(\alpha), \qquad (\xi\vee\eta)_{\inf}(\alpha) = \xi_{\inf}(\alpha)\vee\eta_{\inf}(\alpha);$$
$$(\xi\wedge\eta)_{\sup}(\alpha) = \xi_{\sup}(\alpha)\wedge\eta_{\sup}(\alpha), \qquad (\xi\wedge\eta)_{\inf}(\alpha) = \xi_{\inf}(\alpha)\wedge\eta_{\inf}(\alpha).$$

Proof: For any given number ε > 0, since ξ and η are independent fuzzy variables, we have
$$\mathrm{Cr}\{\xi+\eta\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)-\varepsilon\} \ge \mathrm{Cr}\{\{\xi\ge\xi_{\sup}(\alpha)-\varepsilon/2\}\cap\{\eta\ge\eta_{\sup}(\alpha)-\varepsilon/2\}\}$$
$$= \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)-\varepsilon/2\}\wedge\mathrm{Cr}\{\eta\ge\eta_{\sup}(\alpha)-\varepsilon/2\}\ge\alpha,$$
which implies
$$(\xi+\eta)_{\sup}(\alpha)\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)-\varepsilon. \tag{2.71}$$
A similar argument shows that
$$(\xi+\eta)_{\sup}(\alpha)\le\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)+\varepsilon. \tag{2.72}$$
Letting ε → 0, we obtain the first identity; the remaining identities may be proved similarly.

Example: The independence condition in Theorem 2.47 cannot be removed. Consider the credibility space {θ1, θ2} with Cr{θ1} = Cr{θ2} = 1/2 and the fuzzy variables
$$\xi(\theta) = \begin{cases} 0, & \text{if } \theta = \theta_1 \\ 1, & \text{if } \theta = \theta_2, \end{cases} \qquad \eta(\theta) = \begin{cases} 1, & \text{if } \theta = \theta_1 \\ 0, & \text{if } \theta = \theta_2. \end{cases}$$
2.11 Entropy

Fuzzy entropy provides a measure of the uncertainty associated with a fuzzy variable. Let ξ be a discrete fuzzy variable taking values in {x1, x2, ...}. Then its entropy is defined by
$$H[\xi] = \sum_{i=1}^{\infty}S(\mathrm{Cr}\{\xi = x_i\}) \tag{2.73}$$
where S(t) = −t ln t − (1 − t) ln(1 − t).
(Figure: the function S(t) = −t ln t − (1 − t) ln(1 − t), which reaches its maximum ln 2 at t = 0.5.)
S(Cr{ = xi }) n ln 2
H[] =
i=1
and equality holds if and only if Cr{ = xi } = 0.5, i.e., (xi ) 1 for all
i = 1, 2, , n.
This theorem states that the entropy of a fuzzy variable reaches its maximum when the fuzzy variable is an equipossible one. In this case, there is
no preference among all the values that the fuzzy variable will take.
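The discrete entropy and its upper bound n ln 2 can be checked numerically; a minimal sketch (the function S and the credibility values are taken directly from the definitions above):

```python
import math

def S(t):
    """S(t) = -t ln t - (1-t) ln(1-t), with S(0) = S(1) = 0 by convention."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def entropy(cr_values):
    """Entropy of a discrete fuzzy variable given Cr{xi = x_i} for each value."""
    return sum(S(c) for c in cr_values)

# An equipossible fuzzy variable on n points has Cr{xi = x_i} = 0.5 for all i,
# which attains the maximum n ln 2.
n = 4
assert abs(entropy([0.5] * n) - n * math.log(2)) < 1e-12
# Any other credibility profile gives strictly smaller entropy.
assert entropy([0.5, 0.3, 0.2, 0.1]) < n * math.log(2)
```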
Let ξ be a continuous fuzzy variable with membership function μ. Then its entropy is defined by

H[ξ] = ∫_{−∞}^{+∞} S(Cr{ξ = x}) dx.   (2.76)

Since Cr{ξ = x} = μ(x)/2, the entropy may be written as

H[ξ] = −∫_{−∞}^{+∞} [ (μ(x)/2) ln(μ(x)/2) + (1 − μ(x)/2) ln(1 − μ(x)/2) ] dx.   (2.77)
Example 2.56: Let ξ be an equipossible fuzzy variable (a, b). Then μ(x) = 1
if a ≤ x ≤ b, and 0 otherwise. Thus its entropy is

H[ξ] = −∫_a^b [ (1/2) ln(1/2) + (1 − 1/2) ln(1 − 1/2) ] dx = (b − a) ln 2.
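The continuous entropy integral can be checked numerically against the closed forms; a minimal sketch using midpoint-rule integration (the interval endpoints below are arbitrary choices):

```python
import math

def S(t):
    # S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0
    return 0.0 if t in (0.0, 1.0) else -t * math.log(t) - (1 - t) * math.log(1 - t)

def entropy(mu, lo, hi, n=200_000):
    """H[xi] = integral of S(mu(x)/2) dx, midpoint rule on [lo, hi]."""
    h = (hi - lo) / n
    return sum(S(mu(lo + (i + 0.5) * h) / 2) * h for i in range(n))

# Equipossible (a, b): entropy (b - a) ln 2
a, b = 1.0, 4.0
H_eq = entropy(lambda x: 1.0 if a <= x <= b else 0.0, a, b)
assert abs(H_eq - (b - a) * math.log(2)) < 1e-3

# Triangular (a, b, c): entropy (c - a) / 2
a, b, c = 0.0, 1.0, 3.0
tri = lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
assert abs(entropy(tri, a, c) - (c - a) / 2) < 1e-3
```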
Example 2.57: Let ξ be a triangular fuzzy variable (a, b, c). Then its
entropy is H[ξ] = (c − a)/2.

Example 2.58: Let ξ be a trapezoidal fuzzy variable (a, b, c, d). Then its
entropy is H[ξ] = (d − a)/2 + (ln 2 − 0.5)(c − b).

Example 2.59: Let ξ be an exponentially distributed fuzzy variable with
second moment m². Then its entropy is H[ξ] = πm/√6.

Example 2.60: Let ξ be a normally distributed fuzzy variable with expected
value e and variance σ². Then its entropy is H[ξ] = √6πσ/3.
Theorem 2.50 Let be a continuous fuzzy variable. Then H[] > 0.
Proof: The positivity is clear. In addition, when a continuous fuzzy variable
tends to a crisp number, its entropy tends to the minimum 0. However, a
crisp number is not a continuous fuzzy variable.
Theorem 2.51 Let ξ be a continuous fuzzy variable taking values on the
interval [a, b]. Then

H[ξ] ≤ (b − a) ln 2   (2.78)

and equality holds if and only if ξ is an equipossible fuzzy variable (a, b).

Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.
Theorem 2.52 Let ξ and η be two continuous fuzzy variables with membership functions μ(x) and ν(x), respectively. If μ(x) ≤ ν(x) for any x ∈ ℝ,
then we have H[ξ] ≤ H[η].

Proof: Since μ(x) ≤ ν(x) ≤ 1 and S is increasing on [0, 0.5], we have S(μ(x)/2) ≤ S(ν(x)/2) for any x ∈ ℝ.
It follows that H[ξ] ≤ H[η].
Theorem 2.53 Let ξ be a continuous fuzzy variable. Then for any real
numbers a and b, we have H[aξ + b] = |a|H[ξ].

Proof: It follows from the definition of entropy (substituting x = ay + b) that

H[aξ + b] = ∫_{−∞}^{+∞} S(Cr{aξ + b = x}) dx = |a| ∫_{−∞}^{+∞} S(Cr{ξ = y}) dy = |a|H[ξ].   (2.79)
For a nonnegative fuzzy variable ξ with decreasing membership function μ and second moment m², we have

E[ξ²] = ∫_0^{+∞} Cr{ξ² ≥ x} dx = ∫_0^{+∞} 2x Cr{ξ ≥ x} dx = ∫_0^{+∞} x μ(x) dx = m².

The maximum entropy problem is therefore to maximize

H[ξ] = −∫_0^{+∞} [ (μ(x)/2) ln(μ(x)/2) + (1 − μ(x)/2) ln(1 − μ(x)/2) ] dx

subject to the constraint

∫_0^{+∞} x μ(x) dx = m².
The Lagrangian is

L = −∫_0^{+∞} [ (μ(x)/2) ln(μ(x)/2) + (1 − μ(x)/2) ln(1 − μ(x)/2) ] dx − λ( ∫_0^{+∞} x μ(x) dx − m² ).

The stationarity condition gives the exponential membership function

μ(x) = 2( 1 + exp( πx/(√6 m) ) )^{−1},  x ≥ 0,

which satisfies the constraint and attains the entropy value πm/√6. Hence, for any nonnegative fuzzy variable ξ with second moment m²,

H[ξ] ≤ πm/√6

and equality holds if ξ is an exponentially distributed fuzzy variable with second moment m².
Theorem: Let ξ be a continuous fuzzy variable with expected value e and variance σ². Then

H[ξ] ≤ √6πσ/3   (2.80)

and the equality holds if ξ is a normally distributed fuzzy variable with expected
value e and variance σ².
Proof: Consider a membership function μ symmetric about e. Then

V[ξ] = ∫_0^{+∞} Cr{(ξ − e)² ≥ x} dx = ∫_e^{+∞} (x − e) μ(x) dx

and, by symmetry,

H[ξ] = −2∫_e^{+∞} [ (μ(x)/2) ln(μ(x)/2) + (1 − μ(x)/2) ln(1 − μ(x)/2) ] dx.

The Lagrangian is

L = −2∫_e^{+∞} [ (μ(x)/2) ln(μ(x)/2) + (1 − μ(x)/2) ln(1 − μ(x)/2) ] dx − λ( ∫_e^{+∞} (x − e) μ(x) dx − σ² ).

The stationarity condition

ln(μ(x)/2) − ln(1 − μ(x)/2) + λ(x − e) = 0

gives the normal membership function

μ(x) = 2( 1 + exp( π|x − e|/(√6σ) ) )^{−1}.

A verification similar to the second-moment case shows that this μ satisfies the variance constraint and attains the bound, so that for any fuzzy variable ξ with variance σ²,

H[ξ] ≤ √6πσ/3.
2.12
Distance

The distance between fuzzy variables has been defined in many ways, for example, the Hausdorff distance (Puri and Ralescu [192], Klement et al. [81]) and the Hamming distance (Kacprzyk [72]). However, those definitions lack the identification property. In order to overcome this shortcoming, Liu [129] proposed the following definition of distance.

Definition 2.27 (Liu [129]) The distance between fuzzy variables ξ and η is
defined as

d(ξ, η) = E[|ξ − η|].   (2.81)
Example 2.61: Let ξ and η be equipossible fuzzy variables (a₁, b₁) and
(a₂, b₂), respectively, with (a₁, b₁) ∩ (a₂, b₂) = ∅. Then |ξ − η| is an equipossible
fuzzy variable on the interval with endpoints |a₁ − b₂| and |b₁ − a₂|. Thus
the distance between ξ and η is the expected value of |ξ − η|, i.e.,

d(ξ, η) = (|a₁ − b₂| + |b₁ − a₂|)/2.

Example 2.62: Let ξ and η be triangular fuzzy variables (a₁, b₁, c₁) and (a₂, b₂, c₂), respectively, with (a₁, c₁) ∩ (a₂, c₂) = ∅. Then

d(ξ, η) = (|a₁ − c₂| + 2|b₁ − b₂| + |c₁ − a₂|)/4.

Example 2.63: Let ξ and η be trapezoidal fuzzy variables (a₁, b₁, c₁, d₁) and (a₂, b₂, c₂, d₂), respectively, with (a₁, d₁) ∩ (a₂, d₂) = ∅. Then

d(ξ, η) = (|a₁ − d₂| + |b₁ − c₂| + |c₁ − b₂| + |d₁ − a₂|)/4.
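The equipossible case can be checked numerically: |ξ − η| is equipossible on [u, v] with u, v the sorted endpoints |a₁ − b₂|, |b₁ − a₂|, and E = ∫_0^∞ Cr{· ≥ r} dr gives (u + v)/2. A minimal sketch (the interval endpoints are arbitrary choices):

```python
# Cr{zeta >= r} for an equipossible fuzzy variable zeta on [u, v], 0 <= u <= v:
# 1 for r <= u, 0.5 for u < r <= v, 0 for r > v.
def cr_ge(r, u, v):
    return 1.0 if r <= u else (0.5 if r <= v else 0.0)

def expected_value(u, v, n=100_000):
    """E = integral_0^inf Cr{zeta >= r} dr, midpoint rule on [0, v]."""
    h = v / n
    return sum(cr_ge((i + 0.5) * h, u, v) * h for i in range(n))

# xi equipossible on (1, 2), eta equipossible on (4, 6): disjoint intervals
a1, b1, a2, b2 = 1.0, 2.0, 4.0, 6.0
u, v = sorted((abs(a1 - b2), abs(b1 - a2)))      # endpoints of |xi - eta|
d = expected_value(u, v)
assert abs(d - (abs(a1 - b2) + abs(b1 - a2)) / 2) < 1e-3   # = (u + v)/2
```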
Theorem 2.54 (Li and Liu [105]) Let ξ, η, ζ be fuzzy variables, and let d(·, ·)
be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, ζ) + 2d(ζ, η).

Proof: Parts (a), (b) and (c) follow immediately from the definition.
Now we prove part (d). It follows from the credibility subadditivity
theorem that

d(ξ, η) = ∫_0^{+∞} Cr{|ξ − η| ≥ r} dr
≤ ∫_0^{+∞} Cr{|ξ − ζ| + |ζ − η| ≥ r} dr
≤ ∫_0^{+∞} Cr{ {|ξ − ζ| ≥ r/2} ∪ {|ζ − η| ≥ r/2} } dr
≤ ∫_0^{+∞} Cr{|ξ − ζ| ≥ r/2} dr + ∫_0^{+∞} Cr{|ζ − η| ≥ r/2} dr
= 2d(ξ, ζ) + 2d(ζ, η).
Example 2.64: The coefficient 2 in part (d) cannot be replaced by 1. One can construct fuzzy variables ξ, ζ and η ≡ 0 with d(ξ, ζ) = d(ζ, η) = 1/2 and d(ξ, η) = 3/2, so that

d(ξ, η) = (3/2)(d(ξ, ζ) + d(ζ, η)).

Hence the ordinary triangle inequality does not hold for the distance (2.81).

2.13
Inequalities
There are several useful inequalities for random variables, such as the Markov
inequality, Chebyshev inequality, Hölder's inequality, Minkowski inequality,
and Jensen's inequality. This section presents the analogous inequalities for fuzzy variables.
Theorem 2.55 (Liu [128]) Let ξ be a fuzzy variable, and let f be a nonnegative even function increasing on [0, +∞). Then for any t > 0,

Cr{|ξ| ≥ t} ≤ E[f(ξ)]/f(t).

Proof: We have

E[f(ξ)] = ∫_0^{+∞} Cr{f(ξ) ≥ r} dr = ∫_0^{+∞} Cr{|ξ| ≥ f^{−1}(r)} dr
≥ ∫_0^{f(t)} Cr{|ξ| ≥ f^{−1}(r)} dr ≥ ∫_0^{f(t)} dr · Cr{|ξ| ≥ f^{−1}(f(t))}
= f(t) · Cr{|ξ| ≥ t}

which proves the inequality.
Theorem 2.56 (Liu [128], Markov Inequality) Let ξ be a fuzzy variable.
Then for any given numbers t > 0 and p > 0, we have

Cr{|ξ| ≥ t} ≤ E[|ξ|^p]/t^p.   (2.83)

Theorem 2.57 (Liu [128], Chebyshev Inequality) Let ξ be a fuzzy variable whose variance V[ξ] exists. Then for any given number t > 0, we have

Cr{|ξ − E[ξ]| ≥ t} ≤ V[ξ]/t².   (2.84)

Theorem 2.58 (Liu [128], Hölder's Inequality) Let p and q be positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent fuzzy variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} · (E[|η|^q])^{1/q}.   (2.85)
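The Markov inequality can be exercised numerically using the credibility induced by a membership function, Cr{A} = (sup_A μ + 1 − sup_{Aᶜ} μ)/2; a minimal sketch with a hypothetical triangular membership function on (0, 2, 4):

```python
def mu(x):                                     # hypothetical triangular (0, 2, 4)
    return max(0.0, min(x / 2, (4 - x) / 2))

GRID = [i * 0.01 for i in range(401)]          # discretization of [0, 4]

def cr(event):
    """Cr{A} = (sup_A mu + 1 - sup_{A^c} mu) / 2, approximated on the grid."""
    sup_in = max((mu(x) for x in GRID if event(x)), default=0.0)
    sup_out = max((mu(x) for x in GRID if not event(x)), default=0.0)
    return 0.5 * (sup_in + 1.0 - sup_out)

def e_abs_pow(p, n=400):
    """E[|xi|^p] = integral_0^inf Cr{|xi|^p >= r} dr; here |xi| <= 4."""
    h = (4.0 ** p) / n
    return sum(cr(lambda x, r=(i + 0.5) * h: abs(x) ** p >= r) * h
               for i in range(n))

p, t = 2.0, 3.0
lhs = cr(lambda x: abs(x) >= t)
rhs = e_abs_pow(p) / t ** p
assert lhs <= rhs + 1e-6                       # Markov: Cr{|xi| >= t} <= E[|xi|^p]/t^p
```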
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function
f(x, y) = x^{1/p} y^{1/q} is concave on D = {(x, y) : x ≥ 0, y ≥ 0}, and the inequality follows.

Theorem 2.59 (Liu [128], Minkowski Inequality) Let p ≥ 1 be a real number, and let ξ and η be independent fuzzy variables with E[|ξ|^p] < ∞ and E[|η|^p] < ∞. Then we have

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.   (2.86)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function
f(x, y) = (x^{1/p} + y^{1/p})^p is concave on D = {(x, y) : x ≥ 0, y ≥ 0}, and the inequality follows.

Theorem 2.60 (Liu [128], Jensen's Inequality) Let ξ be a fuzzy variable, and let f: ℝ → ℝ be a convex function. If E[ξ] and E[f(ξ)] are finite, then

f(E[ξ]) ≤ E[f(ξ)].   (2.87)
Proof: Since f is a convex function, for each y there exists a number k such
that f(x) − f(y) ≥ k·(x − y). Replacing x with ξ and y with E[ξ], we obtain

f(ξ) − f(E[ξ]) ≥ k·(ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f(ξ)] − f(E[ξ]) ≥ k·(E[ξ] − E[ξ]) = 0

which proves the inequality.
2.14
Convergence Concepts

This section discusses some convergence concepts for sequences of fuzzy variables: convergence almost surely (a.s.), convergence in credibility, convergence in mean, and convergence in distribution.

Table 2.1: Relations among Convergence Concepts

Convergence in Mean ⇒ Convergence in Credibility ⇒ Convergence in Distribution;
Convergence Almost Surely (not comparable with the other three in general).
Definition 2.28 (Liu [128]) Suppose that ξ, ξ₁, ξ₂, ⋯ are fuzzy variables defined on the credibility space (Θ, P, Cr). The sequence {ξ_i} is said to be convergent a.s. to ξ if and only if there exists an event A with Cr{A} = 1 such
that

lim_{i→∞} |ξ_i(θ) − ξ(θ)| = 0   (2.88)

for every θ ∈ A.

Definition 2.29 (Liu [128]) The sequence {ξ_i} is said to converge in credibility to ξ if, for every ε > 0,

lim_{i→∞} Cr{|ξ_i − ξ| ≥ ε} = 0.   (2.89)

Definition 2.30 (Liu [128]) The sequence {ξ_i} (with finite expected values) is said to converge in mean to ξ if

lim_{i→∞} E[|ξ_i − ξ|] = 0.   (2.90)
Definition 2.31 (Liu [128]) Suppose that Φ, Φ₁, Φ₂, ⋯ are the credibility distributions of fuzzy variables ξ, ξ₁, ξ₂, ⋯, respectively. The sequence {ξ_i} is said to converge in distribution to ξ if Φ_i(x) → Φ(x) at every continuity point x of Φ.   (2.91)

Convergence in Credibility vs. Convergence in Mean

Example 2.65: Convergence in credibility does not imply convergence in mean. Take (Θ, P, Cr) to be {θ₁, θ₂, ⋯} with Cr{θ_j} = 1/(2j) for j = 1, 2, ⋯, and define

ξ_i(θ_j) = i, if j = i; 0, otherwise.

Then Cr{|ξ_i| ≥ ε} = 1/(2i) → 0, so {ξ_i} converges in credibility to ξ ≡ 0. However, E[|ξ_i − ξ|] = i·1/(2i) ≡ 1/2, so {ξ_i} does not converge in mean to ξ.

Example 2.66: Convergence a.s. does not imply convergence in credibility. Take (Θ, P, Cr) to be {θ₁, θ₂, ⋯} with Cr{θ_j} = j/(2j + 1) for j = 1, 2, ⋯, and define

ξ_i(θ_j) = i, if j = i; 0, otherwise.

Then {ξ_i} converges a.s. to ξ ≡ 0. However, for any small ε > 0,

Cr{|ξ_i − ξ| ≥ ε} = i/(2i + 1) → 1/2 ≠ 0.
(2.93)
It follows from (2.92) and (2.93) that i (x) (x). The theorem is proved.
Example 2.67: Convergence in distribution does not imply convergence
in credibility. For example, take (Θ, P, Cr) to be {θ₁, θ₂} with Cr{θ₁} =
Cr{θ₂} = 1/2, and define

ξ(θ) = −1, if θ = θ₁; 1, if θ = θ₂.

We also define η_i = −ξ for i = 1, 2, ⋯. Then η_i and ξ are identically distributed. Thus {η_i} converges in distribution to ξ. But, for any small number
ε > 0, we have Cr{|η_i − ξ| > ε} = Cr{Θ} = 1. That is, the sequence {η_i}
does not converge in credibility to ξ.
Convergence Almost Surely vs. Convergence in Distribution

Example 2.68: Convergence in distribution does not imply convergence a.s.
For example, take (Θ, P, Cr) to be {θ₁, θ₂} with Cr{θ₁} = Cr{θ₂} = 1/2, and
define

ξ(θ) = −1, if θ = θ₁; 1, if θ = θ₂.

We also define η_i = −ξ for i = 1, 2, ⋯. Then {η_i} converges in distribution
to ξ. However, {η_i} does not converge a.s. to ξ.

Example 2.69: Convergence a.s. does not imply convergence in distribution.
For example, take (Θ, P, Cr) to be {θ₁, θ₂, ⋯} with Cr{θ_j} = j/(2j + 1) for
j = 1, 2, ⋯. The fuzzy variables are defined by

ξ_i(θ_j) = i, if j = i; 0, otherwise

for i = 1, 2, ⋯. Then {ξ_i} converges a.s. to ξ ≡ 0. The credibility distributions of ξ_i are

Φ_i(x) = 0, if x < 0; (i + 1)/(2i + 1), if 0 ≤ x < i; 1, if x ≥ i,

i = 1, 2, ⋯, respectively. The credibility distribution of ξ is

Φ(x) = 0, if x < 0; 1, if x ≥ 0.
It is clear that Φ_i(x) does not converge to Φ(x) at any x > 0. That is, the sequence {ξ_i} does not
converge in distribution to ξ.
2.15
Conditional Credibility

We now consider the credibility of an event A after it has been learned that
some other event B has occurred. This new credibility of A is called the
conditional credibility of A given B.

The first problem is whether the conditional credibility is determined
uniquely and completely. The answer is negative: it is doomed to failure to
define an unalterable and widely accepted conditional credibility. For this
reason, it is appropriate to speak of a certain person's subjective conditional
credibility rather than of the true conditional credibility.

In order to define a conditional credibility measure Cr{A|B}, we first
have to enlarge Cr{A ∩ B} because Cr{A ∩ B} < 1 for all events A whenever
Cr{B} < 1. It seems that we have no alternative but to divide Cr{A ∩ B} by
Cr{B}. Unfortunately, Cr{A ∩ B}/Cr{B} is not always a credibility measure.
However, the value Cr{A|B} should not be greater than Cr{A ∩ B}/Cr{B}
(otherwise the normality will be lost), i.e.,

Cr{A|B} ≤ Cr{A ∩ B}/Cr{B}.   (2.94)
Similarly, in order to preserve self-duality, the value Cr{A|B} should not be less than 1 − Cr{Aᶜ ∩ B}/Cr{B}, i.e.,

Cr{A|B} ≥ 1 − Cr{Aᶜ ∩ B}/Cr{B}.   (2.95)

Furthermore, since (A ∩ B) ∪ (Aᶜ ∩ B) = B, the credibility subadditivity theorem yields Cr{B} ≤ Cr{A ∩ B} + Cr{Aᶜ ∩ B}, so that

0 ≤ 1 − Cr{Aᶜ ∩ B}/Cr{B} ≤ Cr{A ∩ B}/Cr{B} ≤ 1.   (2.96)

Hence any value between 1 − Cr{Aᶜ ∩ B}/Cr{B} and Cr{A ∩ B}/Cr{B} is a reasonable conditional credibility. Following the maximum uncertainty principle, the conditional credibility is defined as follows.

Definition 2.32 (Liu [132]) Let (Θ, P, Cr) be a credibility space, and A, B two events with Cr{B} > 0. Then the conditional credibility measure of A given B is defined by

Cr{A|B} =
 Cr{A ∩ B}/Cr{B},       if Cr{A ∩ B}/Cr{B} < 0.5
 1 − Cr{Aᶜ ∩ B}/Cr{B},  if Cr{Aᶜ ∩ B}/Cr{B} < 0.5   (2.97)
 0.5,                   otherwise.
It follows immediately from the definition that

1 − Cr{Aᶜ ∩ B}/Cr{B} ≤ Cr{A|B} ≤ Cr{A ∩ B}/Cr{B}.   (2.98)

Theorem (Liu [132]): Let B be an event with Cr{B} > 0. Then Cr{·|B} defined by (2.97) is a credibility measure on (Θ, P).

Proof: First, Cr{·|B} satisfies the normality axiom:

Cr{Θ|B} = 1 − Cr{Θᶜ ∩ B}/Cr{B} = 1 − Cr{∅}/Cr{B} = 1.
Next we verify monotonicity. Let A₁ ⊂ A₂ be events. If

Cr{A₁ ∩ B}/Cr{B} < 0.5 and Cr{A₂ ∩ B}/Cr{B} < 0.5,

then

Cr{A₁|B} = Cr{A₁ ∩ B}/Cr{B} ≤ Cr{A₂ ∩ B}/Cr{B} = Cr{A₂|B}.

If

Cr{A₁ ∩ B}/Cr{B} < 0.5 ≤ Cr{A₂ ∩ B}/Cr{B},

then Cr{A₁|B} < 0.5 ≤ Cr{A₂|B}. If

0.5 ≤ Cr{A₁ ∩ B}/Cr{B} ≤ Cr{A₂ ∩ B}/Cr{B},

then we have

Cr{A₁|B} = (1 − Cr{A₁ᶜ ∩ B}/Cr{B}) ∨ 0.5 ≤ (1 − Cr{A₂ᶜ ∩ B}/Cr{B}) ∨ 0.5 = Cr{A₂|B}.

This means that Cr{·|B} satisfies the monotonicity axiom. For self-duality, take any event A. If

Cr{A ∩ B}/Cr{B} ≥ 0.5 and Cr{Aᶜ ∩ B}/Cr{B} ≥ 0.5,

then Cr{A|B} = Cr{Aᶜ|B} = 0.5 and the sum is 1. Otherwise, without loss of generality suppose Cr{A ∩ B}/Cr{B} < 0.5; then
Cr{A|B} + Cr{Aᶜ|B} = Cr{A ∩ B}/Cr{B} + (1 − Cr{A ∩ B}/Cr{B}) = 1.

That is, Cr{·|B} satisfies the self-duality axiom. Finally, for any events {A_i}
with sup_i Cr{A_i|B} < 0.5, we have Cr{A_i ∩ B}/Cr{B} < 0.5 for each i and

sup_i Cr{A_i|B} = sup_i Cr{A_i ∩ B}/Cr{B} = Cr{∪_i A_i ∩ B}/Cr{B} = Cr{∪_i A_i|B}.

That is, Cr{·|B} satisfies the maximality axiom. Hence Cr{·|B} is a credibility measure.
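The three-case rule (2.97) can be exercised on a finite credibility space; a minimal sketch (the space and the membership weights are hypothetical) that also checks the self-duality Cr{A|B} + Cr{Aᶜ|B} = 1:

```python
# Finite credibility space: Cr built from membership weights mu with sup mu = 1,
# via Cr{A} = (sup_A mu + 1 - sup_{A^c} mu) / 2.
mu = {"g1": 1.0, "g2": 0.8, "g3": 0.4}     # hypothetical weights

def cr(A):
    sup_in = max((mu[g] for g in A), default=0.0)
    sup_out = max((mu[g] for g in mu if g not in A), default=0.0)
    return 0.5 * (sup_in + 1 - sup_out)

def cr_cond(A, B):
    """Cr{A|B} by the three-case rule (2.97); requires Cr{B} > 0."""
    A, B = set(A), set(B)
    r1 = cr(A & B) / cr(B)
    r2 = cr((set(mu) - A) & B) / cr(B)
    if r1 < 0.5:
        return r1
    if r2 < 0.5:
        return 1 - r2
    return 0.5

B = {"g1", "g2"}
for A in ({"g1"}, {"g2"}, {"g1", "g3"}, set(), set(mu)):
    Ac = set(mu) - A
    assert abs(cr_cond(A, B) + cr_cond(Ac, B) - 1.0) < 1e-12   # self-duality
```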
Example 2.70: Let ξ be a fuzzy variable, and X a set of real numbers with Cr{ξ ∈ X} > 0. Then for any x ∈ X,

Cr{ξ = x | ξ ∈ X} =
 Cr{ξ = x}/Cr{ξ ∈ X},            if Cr{ξ = x}/Cr{ξ ∈ X} < 0.5
 1 − Cr{ξ ≠ x, ξ ∈ X}/Cr{ξ ∈ X}, if Cr{ξ ≠ x, ξ ∈ X}/Cr{ξ ∈ X} < 0.5
 0.5,                            otherwise.
Example 2.71: Let ξ and η be two fuzzy variables, and Y a set of real
numbers such that Cr{η ∈ Y} > 0. Then we have

Cr{ξ = x | η ∈ Y} =
 Cr{ξ = x, η ∈ Y}/Cr{η ∈ Y},     if Cr{ξ = x, η ∈ Y}/Cr{η ∈ Y} < 0.5
 1 − Cr{ξ ≠ x, η ∈ Y}/Cr{η ∈ Y}, if Cr{ξ ≠ x, η ∈ Y}/Cr{η ∈ Y} < 0.5
 0.5,                            otherwise.
Definition 2.33 (Liu [132]) The conditional membership function of a fuzzy
variable ξ given B is defined by

μ(x|B) = (2Cr{ξ = x|B}) ∧ 1,   x ∈ ℝ,   (2.99)

provided that Cr{B} > 0.
Example 2.72: Let ξ be a fuzzy variable with membership function μ(x), and X a set of real numbers with μ(x) > 0 for some x ∈ X. Then the conditional membership function of ξ given ξ ∈ X is

μ(x|X) =
 ( 2μ(x) / sup_{x∈X} μ(x) ) ∧ 1,        if sup_{x∈X} μ(x) < 1
 ( 2μ(x) / (2 − sup_{x∈Xᶜ} μ(x)) ) ∧ 1, if sup_{x∈X} μ(x) = 1   (2.100)

for x ∈ X, and μ(x|X) = 0 for x ∉ X.
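A basic sanity check on (2.100) is that the conditional membership function is normalized, i.e., its supremum over X equals 1. A minimal sketch on a finite grid (the triangular membership function and the conditioning sets are hypothetical):

```python
# Conditional membership (2.100) on a finite grid.
xs = [i * 0.01 for i in range(401)]                  # grid on [0, 4]
mu = {x: max(0.0, min(x / 2, (4 - x) / 2)) for x in xs}   # triangular (0, 2, 4)

def conditional(mu, X):
    sup_in = max(mu[x] for x in mu if x in X)
    sup_out = max((mu[x] for x in mu if x not in X), default=0.0)
    denom = sup_in if sup_in < 1 else 2 - sup_out    # the two cases of (2.100)
    return {x: min(2 * mu[x] / denom, 1.0) for x in mu if x in X}

X = {x for x in xs if x <= 1.5}                      # sup over X is below 1
cond = conditional(mu, X)
assert abs(max(cond.values()) - 1.0) < 1e-9          # normalized on X

X2 = {x for x in xs if x >= 1.0}                     # contains the peak
cond2 = conditional(mu, X2)
assert max(cond2.values()) == 1.0
```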
[Figures: graphs of the conditional membership function μ(x|X) obtained from (2.100), in the two cases sup_{x∈X} μ(x) < 1 and sup_{x∈X} μ(x) = 1.]
Example 2.73: Let ξ and η be fuzzy variables with joint membership function μ(x, y), and Y a set of real numbers. Then the conditional membership function of ξ given η ∈ Y is

μ(x|Y) =
 ( 2 sup_{y∈Y} μ(x, y) / sup_{x∈ℝ, y∈Y} μ(x, y) ) ∧ 1,        if sup_{x∈ℝ, y∈Y} μ(x, y) < 1
 ( 2 sup_{y∈Y} μ(x, y) / (2 − sup_{x∈ℝ, y∈Yᶜ} μ(x, y)) ) ∧ 1, if sup_{x∈ℝ, y∈Y} μ(x, y) = 1.   (2.101)

Especially, the conditional membership function of ξ given η = y is

μ(x|y) =
 ( 2μ(x, y) / sup_{x∈ℝ} μ(x, y) ) ∧ 1,          if sup_{x∈ℝ} μ(x, y) < 1
 ( 2μ(x, y) / (2 − sup_{x∈ℝ, z≠y} μ(x, z)) ) ∧ 1, if sup_{x∈ℝ} μ(x, y) = 1.   (2.102)
Example 2.74: The conditional credibility distribution of ξ given η = y is

Φ(x | η = y) =
 Cr{ξ ≤ x, η = y}/Cr{η = y},     if Cr{ξ ≤ x, η = y}/Cr{η = y} < 0.5
 1 − Cr{ξ > x, η = y}/Cr{η = y}, if Cr{ξ > x, η = y}/Cr{η = y} < 0.5
 0.5,                            otherwise

provided that Cr{η = y} > 0.
Definition 2.35 (Liu [132]) The conditional credibility density function φ(·|B)
of a fuzzy variable ξ given B is a nonnegative function such that

Φ(x|B) = ∫_{−∞}^x φ(y|B) dy,   x ∈ ℝ,   (2.103)

∫_{−∞}^{+∞} φ(y|B) dy = 1   (2.104)

where Φ(x|B) is the conditional credibility distribution of ξ given B.

Definition 2.36 (Liu [132]) The conditional expected value of a fuzzy variable ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} Cr{ξ ≥ r|B} dr − ∫_{−∞}^0 Cr{ξ ≤ r|B} dr   (2.105)

provided that at least one of the two integrals is finite. The hazard rate h(x) of a fuzzy variable ξ is then defined as the conditional credibility of a failure just after time x given survival at time x.   (2.106)
The hazard rate tells us the credibility of a failure just after time x when it
is functioning at time x.

Example 2.75: Let ξ be an exponentially distributed fuzzy variable. Then
its hazard rate h(x) ≡ 0.5. In fact, the hazard rate is always 0.5 if the
membership function is positive and decreasing.

Example 2.76: Let ξ be a triangular fuzzy variable (a, b, c) with a ≥ 0.
Then its hazard rate is

h(x) =
 0,                  if x ≤ a
 (x − a)/(b − a),    if a ≤ x ≤ (a + b)/2
 0.5,                if (a + b)/2 ≤ x < c
 0,                  if x ≥ c.
2.16
Fuzzy Process
[Figure: a sample path of a fuzzy renewal process N_t, a step function rising through 1, 2, 3, 4 at the renewal times S₁, S₂, S₃, S₄, with interarrival times ξ₁, ξ₂, ξ₃, ξ₄ and S₀ = 0.]
Since the interarrival times ξ₁, ξ₂, ⋯ are independent and identically distributed, we have

Cr{N_t ≥ n} = Cr{S_n ≤ t} = Cr{ξ₁ ≤ t/n}.   (2.109)

Theorem: Let N_t be a fuzzy renewal process. Then

E[N_t] = Σ_{n=1}^∞ Cr{ξ₁ ≤ t/n}.   (2.110)

Proof: Since N_t takes only nonnegative integer values,

E[N_t] = ∫_0^∞ Cr{N_t ≥ r} dr = Σ_{n=1}^∞ ∫_{n−1}^n Cr{N_t ≥ n} dr
= Σ_{n=1}^∞ Cr{N_t ≥ n} = Σ_{n=1}^∞ Cr{S_n ≤ t} = Σ_{n=1}^∞ Cr{ξ₁ ≤ t/n}.

Example: Suppose the interarrival times are equipossible fuzzy variables (a, b) with 0 < a < b. Then

Cr{N_t ≥ n} = 0, if t < na; 0.5, if na ≤ t < nb; 1, if t ≥ nb,   (2.111)
and

E[N_t] = (1/2)( ⌊t/a⌋ + ⌊t/b⌋ ).   (2.112)

Example: Suppose the interarrival times are triangular fuzzy variables (a, b, c) with 0 < a < b < c. Then

Cr{N_t ≥ n} =
 0,                           if t ≤ na
 (t − na)/(2n(b − a)),        if na ≤ t ≤ nb   (2.113)
 (nc − 2nb + t)/(2n(c − b)),  if nb ≤ t ≤ nc
 1,                          if t ≥ nc.
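For the equipossible case, the series (2.110) can be summed directly and compared with the closed form (2.112); a minimal sketch (the interarrival bounds and horizon are arbitrary choices):

```python
import math

# Interarrival times: equipossible fuzzy variable on (a, b), 0 < a < b, so
# Cr{xi <= x} = 0 for x < a, 0.5 for a <= x < b, 1 for x >= b.
def cr_le(x, a, b):
    return 0.0 if x < a else (0.5 if x < b else 1.0)

def expected_renewals(t, a, b, n_max=10_000):
    """E[N_t] = sum over n >= 1 of Cr{xi_1 <= t/n}  (equation 2.110)."""
    return sum(cr_le(t / n, a, b) for n in range(1, n_max + 1))

a, b, t = 2.0, 3.0, 10.0
series = expected_renewals(t, a, b)
closed = 0.5 * (math.floor(t / a) + math.floor(t / b))   # equation (2.112)
assert abs(series - closed) < 1e-12                      # both equal 4.0 here
```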
Theorem 2.66 (Zhao and Liu [251], Renewal Theorem) Let N_t be a fuzzy
renewal process with interarrival times ξ₁, ξ₂, ⋯. Then

lim_{t→∞} E[N_t]/t = E[1/ξ₁].   (2.114)

Proof: Since N_t/t and 1/ξ₁ are nonnegative, we have

E[N_t]/t = ∫_0^∞ Cr{N_t/t ≥ r} dr,   E[1/ξ₁] = ∫_0^∞ Cr{1/ξ₁ ≥ r} dr.

It follows from (2.109) that

Cr{N_t/t ≥ r} ≤ Cr{1/ξ₁ ≥ r}

and

lim_{t→∞} Cr{N_t/t ≥ r} = Cr{1/ξ₁ ≥ r}

for any real numbers t > 0 and r > 0. It follows from the Lebesgue dominated
convergence theorem that

lim_{t→∞} ∫_0^∞ Cr{N_t/t ≥ r} dr = ∫_0^∞ Cr{1/ξ₁ ≥ r} dr,

i.e., lim_{t→∞} E[N_t]/t = E[1/ξ₁]. The theorem is proved.
C Process

This subsection introduces a fuzzy counterpart of Brownian motion, and
provides some basic mathematical properties.

Definition 2.41 (Liu [133]) A fuzzy process C_t is said to be a C process if
(i) C₀ = 0,
(ii) C_t has stationary and independent increments,
(iii) every increment C_{s+t} − C_s is a normally distributed fuzzy variable with
expected value et and variance σ²t².

The parameters e and σ are called the drift and diffusion coefficients,
respectively. The C process is said to be standard if e = 0 and σ = 1. Any C
process may be represented by et + σC_t where C_t is a standard C process.
[Figure: a sample path of a standard C process C_t.]
To see that a standard C process exists, let ξ₁, ξ₂, ⋯ be a sequence of normally distributed fuzzy variables and, for each n, define a fuzzy process

X_n(t) = Σ_{i=1}^k ξ_i/n, if t = k/n (k = 0, 1, ⋯, n); linear, otherwise.

Since the limit of X_n(t) as n → ∞
exists almost surely, we may verify that the limit meets the conditions of a C
process. Hence there is a standard C process.
Remark 2.9: Suppose that C_t is a standard C process. It has been proved
that

X₁(t) = −C_t,   (2.115)
X₂(t) = aC_{t/a},   (2.116)
X₃(t) = C_{t+s} − C_s   (2.117)

are each standard C processes. For any level x > 0 and any time t > 0, we have

Cr{ max_{0≤s≤t} C_s ≥ x } = Cr{C_t ≥ x}.   (2.118)

In addition, for any level x < 0 and any time t > 0, we have

Cr{ min_{0≤s≤t} C_s ≤ x } = Cr{C_t ≤ x}.   (2.119)
Example 2.80: Let C_t be a C process with drift e > 0 and diffusion coefficient σ. Then the first passage time that the C process reaches the barrier
x > 0 has the membership function

φ(t) = 2( 1 + exp( π|x − et|/(√6σt) ) )^{−1},   t > 0.   (2.120)

The fuzzy process G_t = exp(et + σC_t) is called a geometric C process.   (2.121)
At each time t, G_t is a lognormally distributed fuzzy variable with membership function

μ(z) = 2( 1 + exp( π|ln z − et|/(√6σt) ) )^{−1},   z ≥ 0,   (2.122)

whose expected value exists if and only if

σt < π/√6.   (2.123)

In addition, the first passage time that a geometric C process G_t reaches the
barrier x > 1 is just the time that the C process with drift e and diffusion σ
reaches ln x.
A Basic Stock Model

It was assumed that stock price follows geometric Brownian motion, and
stochastic financial mathematics was then founded on this assumption. Liu [133] presented an alternative assumption that stock price follows a
geometric C process. Based on this assumption, we obtain a basic stock
model for a fuzzy financial market in which the bond price X_t and the stock
price Y_t follow

X_t = X₀ exp(rt),   Y_t = Y₀ exp(et + σC_t)   (2.124)

where r is the riskless interest rate, e is the stock drift, σ is the stock diffusion,
and C_t is a standard C process. It is just a fuzzy counterpart of the Black–Scholes
stock model [8]. For further development of fuzzy stock models,
the interested reader may consult Gao [50], Peng [191], Qin and Li [194],
and Zhu [267].
2.17
Fuzzy Calculus

It can also be shown that V[dC_t²] ≤ 7dt⁴.
Definition 2.43 (Liu [133]) Let X_t be a fuzzy process and let C_t be a standard C process. For any partition of the closed interval [a, b] with a = t₁ < t₂ <
⋯ < t_{k+1} = b, the mesh is written as

Δ = max_{1≤i≤k} |t_{i+1} − t_i|.

Then the fuzzy integral of X_t with respect to C_t is

∫_a^b X_t dC_t = lim_{Δ→0} Σ_{i=1}^k X_{t_i} · (C_{t_{i+1}} − C_{t_i})   (2.125)

provided that the limit exists almost surely and is a fuzzy variable.
Example 2.81: Let C_t be a standard C process. Then for any partition
0 = t₁ < t₂ < ⋯ < t_{k+1} = s, we have

∫_0^s dC_t = lim_{Δ→0} Σ_{i=1}^k (C_{t_{i+1}} − C_{t_i}) = C_s − C₀ = C_s.

Example 2.82: For any partition 0 = t₁ < ⋯ < t_{k+1} = s, we have

sC_s = Σ_{i=1}^k (t_{i+1}C_{t_{i+1}} − t_iC_{t_i})
= Σ_{i=1}^k t_i(C_{t_{i+1}} − C_{t_i}) + Σ_{i=1}^k C_{t_{i+1}}(t_{i+1} − t_i)
→ ∫_0^s t dC_t + ∫_0^s C_t dt

as Δ → 0. It follows that

∫_0^s t dC_t = sC_s − ∫_0^s C_t dt.

Example 2.83: For any partition 0 = t₁ < ⋯ < t_{k+1} = s, we have

C_s² = Σ_{i=1}^k (C_{t_{i+1}}² − C_{t_i}²)
= Σ_{i=1}^k (C_{t_{i+1}} − C_{t_i})² + 2Σ_{i=1}^k C_{t_i}(C_{t_{i+1}} − C_{t_i})
→ 0 + 2∫_0^s C_t dC_t
as Δ → 0. That is,

∫_0^s C_t dC_t = C_s²/2.

This equation shows that the fuzzy integral does not behave like the Itô integral. In
fact, the fuzzy integral behaves like ordinary integrals.
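For a smooth sample path, the defining Riemann–Stieltjes sums can be checked directly; a minimal sketch with a hypothetical differentiable path c(t) standing in for one realization of C_t:

```python
import math

def c(t):                                   # hypothetical smooth path, c(0) = 0
    return 0.5 * t + math.sin(t)

def stieltjes(f, c, s, k=100_000):
    """sum f(t_i)(c(t_{i+1}) - c(t_i)) over a uniform partition of [0, s]."""
    h = s / k
    return sum(f(i * h) * (c((i + 1) * h) - c(i * h)) for i in range(k))

s = 2.0
# Path by path: integral_0^s C_t dC_t = C_s^2 / 2
assert abs(stieltjes(c, c, s) - c(s) ** 2 / 2) < 1e-3
# And integral_0^s t dC_t = s*C_s - integral_0^s C_t dt (cf. Example 2.82)
riemann = sum(c((i + 0.5) * (s / 10_000)) * (s / 10_000) for i in range(10_000))
assert abs(stieltjes(lambda t: t, c, s) - (s * c(s) - riemann)) < 1e-3
```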
Theorem 2.68 (Liu [133]) Let C_t be a standard C process, and let h(t, c) be
a continuously differentiable function. Define X_t = h(t, C_t). Then we have
the following chain rule:

dX_t = ∂h/∂t (t, C_t) dt + ∂h/∂c (t, C_t) dC_t.   (2.126)

That is, in integral form,

X_s = X₀ + ∫_0^s ∂h/∂t (t, C_t) dt + ∫_0^s ∂h/∂c (t, C_t) dC_t

for any s ≥ 0.
Remark 2.10: The infinitesimal increment dC_t in (2.126) may be replaced
with the derived C process

dY_t = u_t dt + v_t dC_t,   (2.127)

yielding, for X_t = h(t, Y_t),

dX_t = ∂h/∂t (t, Y_t) dt + ∂h/∂c (t, Y_t) dY_t.   (2.128)

Remark 2.11: Assume that C_{1t}, C_{2t}, ⋯, C_{mt} are standard C processes,
and h(t, c₁, c₂, ⋯, c_m) is a continuously differentiable function. Define

X_t = h(t, C_{1t}, C_{2t}, ⋯, C_{mt}).

You [243] proved the following chain rule:

dX_t = ∂h/∂t dt + Σ_{i=1}^m ∂h/∂c_i dC_{it}.   (2.129)
Example 2.84: Applying the chain rule, we obtain the following formula:

d(tC_t) = C_t dt + t dC_t.

Hence we have

sC_s = ∫_0^s d(tC_t) = ∫_0^s C_t dt + ∫_0^s t dC_t,

that is,

∫_0^s t dC_t = sC_s − ∫_0^s C_t dt.

Example 2.85: Applying the chain rule, we obtain d(C_t²) = 2C_t dC_t. Hence

C_s² = ∫_0^s d(C_t²) = 2∫_0^s C_t dC_t.

It follows that

∫_0^s C_t dC_t = C_s²/2.

Example 2.86: Applying the chain rule, we obtain d(C_t³) = 3C_t² dC_t. Hence

C_s³ = ∫_0^s d(C_t³) = 3∫_0^s C_t² dC_t,

that is,

∫_0^s C_t² dC_t = C_s³/3.
Theorem 2.69 (Liu [133], Integration by Parts) Suppose that C_t is a standard C process and F(t) is an absolutely continuous function. Then

∫_0^s F(t) dC_t = F(s)C_s − ∫_0^s C_t dF(t).   (2.130)

Proof: By defining h(t, C_t) = F(t)C_t and using the chain rule, we get

d(F(t)C_t) = C_t dF(t) + F(t) dC_t.

Thus

F(s)C_s = ∫_0^s d(F(t)C_t) = ∫_0^s C_t dF(t) + ∫_0^s F(t) dC_t

which proves the theorem.
2.18
Fuzzy Differential Equation

Suppose C_t is a standard C process, and f and g are some given functions. Then

dX_t = f(t, X_t) dt + g(t, X_t) dC_t   (2.131)

is called a fuzzy differential equation. A solution is a fuzzy process X_t satisfying (2.131) identically in t, i.e.,

X_s = X₀ + ∫_0^s f(t, X_t) dt + ∫_0^s g(t, X_t) dC_t.   (2.132)

However, the differential form is more convenient for us. This is the main
reason why we accept the differential form.
Example 2.87: Let Ct be a standard C process. Then the fuzzy differential
equation
dXt = adt + bdCt
has a solution
Xt = at + bCt
which is just a C process with drift coefficient a and diffusion coefficient b.
Example 2.88: Let Ct be a standard C process. Then the fuzzy differential
equation
dXt = aXt dt + bXt dCt
has a solution
Xt = exp (at + bCt )
which is just a geometric C process.
Example 2.89: Let Ct be a standard C process. Then the fuzzy differential
equations
dXt = Yt dCt
dYt = Xt dCt
have a solution
(X_t, Y_t) = (cos C_t, sin C_t)

which is called a C process on the unit circle since X_t² + Y_t² ≡ 1.
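Along any smooth sample path, the solution of Example 2.88 can be verified with finite differences; a minimal sketch with a hypothetical path c(t) and arbitrary coefficients:

```python
import math

def c(t):                          # hypothetical smooth sample path, c(0) = 0
    return 0.3 * t + 0.2 * math.sin(t)

a_coef, b_coef = 0.5, 1.2

def X(t):                          # candidate solution X_t = exp(a t + b C_t)
    return math.exp(a_coef * t + b_coef * c(t))

# Check dX = a X dt + b X dC at several times via central differences
h = 1e-6
for t in (0.5, 1.0, 2.0):
    dX = (X(t + h) - X(t - h)) / (2 * h)
    dC = (c(t + h) - c(t - h)) / (2 * h)
    assert abs(dX - (a_coef * X(t) + b_coef * X(t) * dC)) < 1e-4
```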
Chapter 3
Chance Theory
Fuzziness and randomness are two basic types of uncertainty. In many cases,
fuzziness and randomness appear simultaneously in a system. In order to
describe this phenomenon, a fuzzy random variable was introduced by Kwakernaak [87][88] as a random element taking fuzzy variable values. In addition, a random fuzzy variable was proposed by Liu [124] as a fuzzy element
taking random variable values. For example, it might be known that the
lifetime of a modern engine is an exponentially distributed random variable
with an unknown parameter. If the parameter is provided as a fuzzy variable,
then the lifetime is a random fuzzy variable.
More generally, a hybrid variable was introduced by Liu [130] as a tool
to describe the quantities with fuzziness and randomness. Fuzzy random
variable and random fuzzy variable are instances of hybrid variable. In order
to measure hybrid events, a concept of chance measure was introduced by Li
and Liu [103]. Chance theory is a hybrid of probability theory and credibility
theory. Perhaps the reader would like to know what axioms we should assume
for chance theory. In fact, chance theory will be based on the three axioms
of probability and four axioms of credibility.
The emphasis in this chapter is mainly on chance space, hybrid variable,
chance measure, chance distribution, independence, identical distribution,
expected value, variance, moments, critical values, entropy, distance, convergence almost surely, convergence in chance, convergence in mean, convergence
in distribution, conditional chance, hybrid process, hybrid calculus, and hybrid differential equation.
3.1
Chance Space
Chance theory begins with the concept of chance space that inherits the
mathematical foundations of both probability theory and credibility theory.
Definition 3.1 (Liu [130]) Suppose that (Θ, P, Cr) is a credibility space and
(Ω, A, Pr) is a probability space. The product (Θ, P, Cr) × (Ω, A, Pr) is called
a chance space.

The universal set Θ × Ω is clearly the set of all ordered pairs of the form
(θ, ω), where θ ∈ Θ and ω ∈ Ω. What is the product σ-algebra P × A? What
is the product measure Cr × Pr? Let us discuss these two basic problems.
A subset Λ ⊂ Θ × Ω is called an event if Λ(θ) = {ω ∈ Ω | (θ, ω) ∈ Λ} ∈ A for each θ ∈ Θ.   (3.1)

For example, let X ∈ P and Y ∈ A, and define

(X × Y)(θ) = Y, if θ ∈ X; ∅, if θ ∈ Xᶜ.

Then X × Y is a subset of Θ × Ω with (X × Y)(θ) ∈ A for each θ, so X × Y is an event. Moreover, for events Λ₁, Λ₂, ⋯ we have

(∪_{i=1}^∞ Λ_i)(θ) = {ω | (θ, ω) ∈ ∪_{i=1}^∞ Λ_i} = ∪_{i=1}^∞ Λ_i(θ) ∈ A,

so the class of all events is a σ-algebra over Θ × Ω.
The chance measure of an event Λ is defined as

Ch{Λ} =
 sup_θ (Cr{θ} ∧ Pr{Λ(θ)}),      if sup_θ (Cr{θ} ∧ Pr{Λ(θ)}) < 0.5   (3.2)
 1 − sup_θ (Cr{θ} ∧ Pr{Λᶜ(θ)}), if sup_θ (Cr{θ} ∧ Pr{Λ(θ)}) ≥ 0.5.

For instance, in suitable examples one computes Ch{(θ₂, ω₂)} = 0.3, and with a suitable credibility measure on Θ = [0, 1] and Lebesgue measure on Ω = [0, 1],

Ch{[0, a] × [0, b]} = b, if b < 0.5; 0.5, if 0.5 ≤ b < 1; 1, if a = b = 1.
Theorem 3.2 Let (Θ, P, Cr) × (Ω, A, Pr) be a chance space and Ch a chance
measure. Then we have

Ch{∅} = 0,   (3.3)
Ch{Θ × Ω} = 1,   (3.4)
0 ≤ Ch{Λ} ≤ 1   (3.5)

for any event Λ.
(3.6)
(3.7)
(3.8)
Proof: It follows from the basic properties of probability and credibility that

sup_θ (Cr{θ} ∧ Pr{Λ(θ)}) ∨ sup_θ (Cr{θ} ∧ Pr{Λᶜ(θ)}) ≤ 1 ∧ 1 = 1.

The inequalities (3.8) follow immediately from the above inequalities and
the definition of chance measure.
Theorem 3.4 (Li and Liu [103]) The chance measure is increasing. That
is,

Ch{Λ₁} ≤ Ch{Λ₂}   (3.9)

for any events Λ₁ and Λ₂ with Λ₁ ⊂ Λ₂.

Proof: Since Λ₁(θ) ⊂ Λ₂(θ) and Λ₂ᶜ(θ) ⊂ Λ₁ᶜ(θ) for each θ ∈ Θ, we have

sup_θ (Cr{θ} ∧ Pr{Λ₁(θ)}) ≤ sup_θ (Cr{θ} ∧ Pr{Λ₂(θ)}),
sup_θ (Cr{θ} ∧ Pr{Λ₂ᶜ(θ)}) ≤ sup_θ (Cr{θ} ∧ Pr{Λ₁ᶜ(θ)}).

Case 1: sup_θ (Cr{θ} ∧ Pr{Λ₂(θ)}) < 0.5. Then both suprema are below 0.5 and the conclusion follows directly.
Case 2: sup_θ (Cr{θ} ∧ Pr{Λ₂(θ)}) ≥ 0.5 and sup_θ (Cr{θ} ∧ Pr{Λ₁(θ)}) < 0.5. Then Ch{Λ₁} < 0.5 ≤ Ch{Λ₂}.
Case 3: sup_θ (Cr{θ} ∧ Pr{Λ₂(θ)}) ≥ 0.5 and sup_θ (Cr{θ} ∧ Pr{Λ₁(θ)}) ≥ 0.5. Then

Ch{Λ₁} = 1 − sup_θ (Cr{θ} ∧ Pr{Λ₁ᶜ(θ)}) ≤ 1 − sup_θ (Cr{θ} ∧ Pr{Λ₂ᶜ(θ)}) = Ch{Λ₂}.

Theorem 3.5 (Li and Liu [103]) The chance measure is self-dual, i.e., Ch{Λ} + Ch{Λᶜ} = 1 for any event Λ.   (3.10)

Proof: By the definition of chance measure,

Ch{Λᶜ} =
 sup_θ (Cr{θ} ∧ Pr{Λᶜ(θ)}),     if sup_θ (Cr{θ} ∧ Pr{Λᶜ(θ)}) < 0.5
 1 − sup_θ (Cr{θ} ∧ Pr{Λ(θ)}),  if sup_θ (Cr{θ} ∧ Pr{Λᶜ(θ)}) ≥ 0.5.

In each case, combining this with (3.2) gives Ch{Λ} + Ch{Λᶜ} = 1.
Theorem 3.6 (Li and Liu [103]) For any event X × Y, we have

Ch{X × Y} = Cr{X} ∧ Pr{Y}.   (3.11)

Especially, Ch{Θ × Y} = Pr{Y}.   (3.12)
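The product formula (3.11) can be checked exhaustively on a finite chance space; a minimal sketch (the credibility weights and probabilities below are hypothetical):

```python
# Finite chance space: Cr from membership-like weights mu on Theta
# (Cr{A} = (sup_A mu + 1 - sup_{A^c} mu)/2), Pr a probability on Omega.
mu = {"t1": 1.0, "t2": 0.6}             # hypothetical credibility weights
pr = {"w1": 0.4, "w2": 0.6}             # hypothetical probabilities

def cr(A):
    sup_in = max((mu[t] for t in mu if t in A), default=0.0)
    sup_out = max((mu[t] for t in mu if t not in A), default=0.0)
    return 0.5 * (sup_in + 1 - sup_out)

def ch(event):                          # event: set of (theta, omega) pairs
    def score(ev):
        return max(min(cr({t}), sum(pr[w] for w in pr if (t, w) in ev))
                   for t in mu)
    s = score(event)
    comp = {(t, w) for t in mu for w in pr} - set(event)
    return s if s < 0.5 else 1 - score(comp)

# Verify Ch{X x Y} = Cr{X} ^ Pr{Y} on all rectangles
for X in ({"t1"}, {"t2"}, {"t1", "t2"}):
    for Y in ({"w1"}, {"w2"}, {"w1", "w2"}):
        rect = {(t, w) for t in X for w in Y}
        assert abs(ch(rect) - min(cr(X), sum(pr[w] for w in Y))) < 1e-12
```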
Theorem 3.7 (Li and Liu [103], Chance Subadditivity Theorem) The chance
measure is subadditive. That is,

Ch{Λ₁ ∪ Λ₂} ≤ Ch{Λ₁} + Ch{Λ₂}   (3.13)

for any events Λ₁ and Λ₂. In fact, chance measure is not only finitely subadditive but also countably subadditive.

Proof: The proof breaks down into three cases.

Case 1: Ch{Λ₁ ∪ Λ₂} < 0.5. Then Ch{Λ₁} < 0.5, Ch{Λ₂} < 0.5 and

Ch{Λ₁ ∪ Λ₂} = sup_θ (Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)(θ)})
≤ sup_θ (Cr{θ} ∧ (Pr{Λ₁(θ)} + Pr{Λ₂(θ)}))
≤ sup_θ (Cr{θ} ∧ Pr{Λ₁(θ)}) + sup_θ (Cr{θ} ∧ Pr{Λ₂(θ)})
= Ch{Λ₁} + Ch{Λ₂}.
Case 2: Ch{Λ₁ ∪ Λ₂} ≥ 0.5 and Ch{Λ₁} ∨ Ch{Λ₂} < 0.5. We first have

sup_θ (Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)(θ)}) ≥ 0.5.

For any sufficiently small number ε > 0, there exists a point θ such that

Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)(θ)} > 0.5 − ε > Ch{Λ₁} ∨ Ch{Λ₂},
Cr{θ} > 0.5 − ε > Pr{Λ₁(θ)},
Cr{θ} > 0.5 − ε > Pr{Λ₂(θ)}.

Thus we have

Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Cr{θ} ∧ Pr{Λ₁(θ)} + Cr{θ} ∧ Pr{Λ₂(θ)}
= Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{Λ₁(θ)} + Pr{Λ₂(θ)}
≥ Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{(Λ₁ ∪ Λ₂)(θ)} ≥ 1 − 2ε

because if Cr{θ} ≥ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)}, then

Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{(Λ₁ ∪ Λ₂)(θ)}
= Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{(Λ₁ ∪ Λ₂)(θ)} = 1 ≥ 1 − 2ε

and if Cr{θ} < Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)}, then

Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{(Λ₁ ∪ Λ₂)(θ)}
= Cr{θ} + Pr{(Λ₁ ∪ Λ₂)(θ)} ≥ (0.5 − ε) + (0.5 − ε) = 1 − 2ε.

Taking the supremum on both sides and letting ε → 0, we obtain

Ch{Λ₁ ∪ Λ₂} = 1 − sup_θ (Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)}) ≤ Ch{Λ₁} + Ch{Λ₂}.

Case 3: Ch{Λ₁ ∪ Λ₂} ≥ 0.5 and Ch{Λ₁} ∨ Ch{Λ₂} ≥ 0.5. Without loss
of generality, suppose Ch{Λ₁} ≥ 0.5. For each θ, we first have

Cr{θ} ∧ Pr{Λ₁ᶜ(θ)} = Cr{θ} ∧ Pr{(Λ₁ᶜ(θ) ∩ Λ₂ᶜ(θ)) ∪ (Λ₁ᶜ(θ) ∩ Λ₂(θ))}
≤ Cr{θ} ∧ (Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Pr{Λ₂(θ)})
≤ Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} + Cr{θ} ∧ Pr{Λ₂(θ)},
i.e., Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)} ≥ Cr{θ} ∧ Pr{Λ₁ᶜ(θ)} − Cr{θ} ∧ Pr{Λ₂(θ)}. It
follows from Theorem 3.3 that

Ch{Λ₁ ∪ Λ₂} = 1 − sup_θ (Cr{θ} ∧ Pr{(Λ₁ ∪ Λ₂)ᶜ(θ)}) ≤ Ch{Λ₁} + Ch{Λ₂}.

The theorem is proved.
Remark 3.1: For any events Λ₁ and Λ₂, it follows from the chance subadditivity theorem that the chance measure is null-additive, i.e., Ch{Λ₁ ∪ Λ₂} =
Ch{Λ₁} + Ch{Λ₂} if either Ch{Λ₁} = 0 or Ch{Λ₂} = 0.
Theorem 3.8 Let {Λ_i} be a decreasing sequence of events with Ch{Λ_i} → 0
as i → ∞. Then for any event Λ, we have

lim_{i→∞} Ch{Λ ∪ Λ_i} = lim_{i→∞} Ch{Λ\Λ_i} = Ch{Λ}.   (3.14)

Theorem (Chance Semicontinuity Law): For events Λ₁, Λ₂, ⋯, we have

lim_{i→∞} Ch{Λ_i} = Ch{ lim_{i→∞} Λ_i }   (3.15)

if one of the following conditions is satisfied: (a) Ch{Λ} ≤ 0.5 and Λ_i ↑ Λ; (b) lim_{i→∞} Ch{Λ_i} < 0.5 and Λ_i ↑ Λ; (c) Ch{Λ} ≥ 0.5 and Λ_i ↓ Λ; (d) lim_{i→∞} Ch{Λ_i} > 0.5 and Λ_i ↓ Λ.
Proof sketch: (b) Assume lim_{i→∞} Ch{Λ_i} < 0.5 and Λ_i ↑ Λ. We have

sup_θ (Cr{θ} ∧ Pr{Λ(θ)}) ≤ lim_{i→∞} sup_θ (Cr{θ} ∧ Pr{Λ_i(θ)}) < 0.5.

It follows that Ch{Λ} < 0.5 and part (b) holds by using (a).
(c) Assume Ch{Λ} ≥ 0.5 and Λ_i ↓ Λ. We have Ch{Λᶜ} ≤ 0.5 and Λ_iᶜ ↑ Λᶜ.
It follows from (a) that

lim_{i→∞} Ch{Λ_i} = 1 − lim_{i→∞} Ch{Λ_iᶜ} = 1 − Ch{Λᶜ} = Ch{Λ}.

(d) Assume lim_{i→∞} Ch{Λ_i} > 0.5 and Λ_i ↓ Λ. Then lim_{i→∞} Ch{Λ_iᶜ} < 0.5 and Λ_iᶜ ↑ Λᶜ, and the conclusion follows from (b).
3.2
Hybrid Variables
[Figure: a hybrid variable generalizes both a fuzzy variable (defined on a credibility space) and a random variable (defined on a probability space).]
Definition 3.4 (Liu [130]) A hybrid variable is a measurable function ξ from a chance space (Θ, P, Cr) × (Ω, A, Pr) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

{ξ ∈ B} = {(θ, ω) ∈ Θ × Ω | ξ(θ, ω) ∈ B}   (3.18)

is an event.
Remark 3.2: A hybrid variable degenerates to a fuzzy variable if the value
of ξ(θ, ω) does not vary with ω. For example,

ξ(θ, ω) = θ,  ξ(θ, ω) = θ² + 1,  ξ(θ, ω) = sin θ.

Remark 3.3: A hybrid variable degenerates to a random variable if the value of ξ(θ, ω) does not vary with θ. For example,

ξ(θ, ω) = ω,  ξ(θ, ω) = ω² + 1,  ξ(θ, ω) = sin ω.
Remark 3.4: For each fixed θ* ∈ Θ, it is clear that the hybrid variable ξ(θ*, ω)
is a measurable function from the probability space (Ω, A, Pr) to the set of
real numbers. Thus it is a random variable and we will denote it by ξ(θ*, ·).
Then a hybrid variable ξ(θ, ω) may also be regarded as a function from a
credibility space (Θ, P, Cr) to the set {ξ(θ*, ·) | θ* ∈ Θ} of random variables.
Thus ξ is a random fuzzy variable as defined by Liu [124].

Remark 3.5: For each fixed ω* ∈ Ω, it is clear that the hybrid variable
ξ(θ, ω*) is a function from the credibility space (Θ, P, Cr) to the set of real
numbers; thus it is a fuzzy variable, and ξ may also be regarded as a fuzzy random variable.
Model I: Assume that ã is a fuzzy variable with membership function μ, and η is a random variable with probability density function φ. If f is a measurable function, then ξ = f(ã, η) is a hybrid variable with

Ch{f(ã, η) ∈ B} =
 sup_x [ μ(x)/2 ∧ ∫_{f(x,y)∈B} φ(y) dy ],      if sup_x [ μ(x)/2 ∧ ∫_{f(x,y)∈B} φ(y) dy ] < 0.5   (3.19)
 1 − sup_x [ μ(x)/2 ∧ ∫_{f(x,y)∈Bᶜ} φ(y) dy ], if sup_x [ μ(x)/2 ∧ ∫_{f(x,y)∈B} φ(y) dy ] ≥ 0.5.

More generally, let ã_1, ã_2, ⋯, ã_m be fuzzy variables, and let η_1, η_2, ⋯, η_n be
random variables. If f: ℝ^{m+n} → ℝ is a measurable function, then

ξ = f(ã_1, ã_2, ⋯, ã_m; η_1, η_2, ⋯, η_n)   (3.20)

is a hybrid variable.

Model II: Let ã_1, ã_2, ⋯, ã_m be fuzzy variables with membership functions μ_1, μ_2, ⋯, μ_m, and let p_1, p_2, ⋯, p_m be positive numbers with p_1 + p_2 + ⋯ + p_m = 1. Then

ξ = ã_i with probability p_i (i = 1, 2, ⋯, m)   (3.21)

is a hybrid variable.
Its chance measure is

Ch{ξ ∈ B} =
 sup_{x_1,⋯,x_m} [ min_{1≤i≤m} μ_i(x_i)/2 ∧ Σ{p_i | x_i ∈ B} ],      if this supremum < 0.5
 1 − sup_{x_1,⋯,x_m} [ min_{1≤i≤m} μ_i(x_i)/2 ∧ Σ{p_i | x_i ∈ Bᶜ} ], if the first supremum ≥ 0.5.
Model III
Let η_1, η_2, ⋯, η_m be random variables with probability density functions φ_1, φ_2, ⋯, φ_m, and let u_1, u_2, ⋯, u_m be nonnegative
numbers with u_1 ∨ u_2 ∨ ⋯ ∨ u_m = 1. Then ξ = η_i with membership degree u_i (i = 1, 2, ⋯, m) is a hybrid variable with

Ch{ξ ∈ B} =
 max_{1≤i≤m} [ u_i/2 ∧ ∫_B φ_i(x) dx ],        if max_{1≤i≤m} [ u_i/2 ∧ ∫_B φ_i(x) dx ] < 0.5
 1 − max_{1≤i≤m} [ u_i/2 ∧ ∫_{Bᶜ} φ_i(x) dx ], if max_{1≤i≤m} [ u_i/2 ∧ ∫_B φ_i(x) dx ] ≥ 0.5.
Model IV
In many statistics problems, the probability density function is completely
known except for the values of one or more parameters. For example, it
Suppose η has probability density function φ(x; y_1, y_2, ⋯, y_m) in which the parameters are fuzzy variables ã_1, ã_2, ⋯, ã_m with membership functions μ_1, μ_2, ⋯, μ_m. Then ξ is a hybrid variable with

Ch{ξ ∈ B} =
 sup_{y_1,⋯,y_m} [ min_{1≤i≤m} μ_i(y_i)/2 ∧ ∫_B φ(x; y_1, ⋯, y_m) dx ],        if this supremum < 0.5
 1 − sup_{y_1,⋯,y_m} [ min_{1≤i≤m} μ_i(y_i)/2 ∧ ∫_{Bᶜ} φ(x; y_1, ⋯, y_m) dx ], if the first supremum ≥ 0.5.
Model V
Suppose a fuzzy variable ξ has a normal membership function with unknown
expected value e and variance σ². If e and σ are provided as random variables,
then ξ is a hybrid variable. More generally, suppose that ξ has a membership
function

μ(x; η_1, η_2, ⋯, η_m),  x ∈ ℝ,   (3.24)

in which the parameters η_1, η_2, ⋯, η_m are random variables rather than deterministic numbers. Then ξ is a hybrid variable if μ(x; y_1, y_2, ⋯, y_m) is a
membership function for any values (y_1, y_2, ⋯, y_m) that (η_1, η_2, ⋯, η_m) may take.

When are two hybrid variables equal to each other?

Definition 3.5 Let ξ_1 and ξ_2 be hybrid variables defined on the chance space
(Θ, P, Cr) × (Ω, A, Pr). We say ξ_1 = ξ_2 if ξ_1(θ, ω) = ξ_2(θ, ω) for almost all
(θ, ω) ∈ Θ × Ω.
Hybrid Vectors

Definition 3.6 An n-dimensional hybrid vector is a measurable function
from a chance space (Θ, P, Cr) × (Ω, A, Pr) to the set of n-dimensional real
vectors, i.e., for any Borel set B of ℝⁿ, the set

{ξ ∈ B} = {(θ, ω) ∈ Θ × Ω | ξ(θ, ω) ∈ B}   (3.25)

is an event.

Theorem 3.11 The vector (ξ_1, ξ_2, ⋯, ξ_n) is a hybrid vector if and only if
ξ_1, ξ_2, ⋯, ξ_n are hybrid variables.

Proof: Write ξ = (ξ_1, ξ_2, ⋯, ξ_n). Suppose that ξ is a hybrid vector on
the chance space (Θ, P, Cr) × (Ω, A, Pr). For any Borel set B of ℝ, the set
B × ℝ^{n−1} is a Borel set of ℝⁿ. Thus the set

{(θ, ω) | ξ_1(θ, ω) ∈ B}
= {(θ, ω) | ξ_1(θ, ω) ∈ B, ξ_2(θ, ω) ∈ ℝ, ⋯, ξ_n(θ, ω) ∈ ℝ}
= {(θ, ω) | ξ(θ, ω) ∈ B × ℝ^{n−1}}

is an event, so ξ_1 is a hybrid variable; similarly for ξ_2, ⋯, ξ_n.

Conversely, suppose ξ_1, ⋯, ξ_n are hybrid variables, and let B denote the class of Borel sets B ⊂ ℝⁿ for which {ξ ∈ B} is an event. For any open intervals (a_i, b_i),

{(θ, ω) | ξ(θ, ω) ∈ Π_{i=1}^n (a_i, b_i)} = ∩_{i=1}^n {(θ, ω) | ξ_i(θ, ω) ∈ (a_i, b_i)}

is an event. Moreover, if {ξ ∈ B} is an event, then

{(θ, ω) | ξ(θ, ω) ∈ Bᶜ} = {(θ, ω) | ξ(θ, ω) ∈ B}ᶜ

is an event. This means that Bᶜ ∈ B. Finally, if B_i ∈ B and

{(θ, ω) | ξ(θ, ω) ∈ B_i}

are events for i = 1, 2, ⋯, then

{(θ, ω) | ξ(θ, ω) ∈ ∪_i B_i} = ∪_i {(θ, ω) | ξ(θ, ω) ∈ B_i}

is an event. This means that ∪_i B_i ∈ B. Since the smallest σ-algebra containing all open intervals of ℝⁿ is just the Borel algebra of ℝⁿ, the class B
contains all Borel sets of ℝⁿ. The theorem is proved.
Hybrid Arithmetic

Definition 3.7 Let f: ℝⁿ → ℝ be a measurable function, and ξ_1, ξ_2, ⋯, ξ_n
hybrid variables on the chance space (Θ, P, Cr) × (Ω, A, Pr). Then ξ =
f(ξ_1, ξ_2, ⋯, ξ_n) is a hybrid variable defined as

ξ(θ, ω) = f(ξ_1(θ, ω), ξ_2(θ, ω), ⋯, ξ_n(θ, ω)),   ∀(θ, ω) ∈ Θ × Ω.   (3.26)

Example 3.8: Let ξ_1 and ξ_2 be two hybrid variables. Then the sum ξ =
ξ_1 + ξ_2 is a hybrid variable defined by

ξ(θ, ω) = ξ_1(θ, ω) + ξ_2(θ, ω),   ∀(θ, ω) ∈ Θ × Ω.
3.3
Chance Distribution
Chance distribution has been defined in several ways. Yang and Liu [240]
presented the concept of chance distribution of fuzzy random variables, and
Zhu and Liu [261] proposed the chance distribution of random fuzzy variables.
Li and Liu [103] gave the following definition of chance distribution of hybrid
variables.
Definition 3.8 The chance distribution Φ: ℝ → [0, 1] of a hybrid variable ξ
is defined by

Φ(x) = Ch{ (θ, ω) ∈ Θ × Ω | ξ(θ, ω) ≤ x }.   (3.27)
Theorem: A function Φ: ℝ → [0, 1] is a chance distribution if and only if it is an increasing function with

lim_{x→−∞} Φ(x) ≤ 0.5 ≤ lim_{x→+∞} Φ(x),   (3.28)

lim_{y↓x} Φ(y) = Φ(x) whenever lim_{y↓x} Φ(y) > 0.5 or Φ(x) ≥ 0.5.   (3.29)
For this case, we have proved that lim_{y↓x} Φ(y) = Φ(x). Thus (3.29) is proved.
Conversely, suppose Φ: ℝ → [0, 1] is an increasing function satisfying
(3.28) and (3.29). Theorem 2.19 states that there is a fuzzy variable whose
credibility distribution is just Φ(x). Since a fuzzy variable is a special hybrid
variable, the theorem is proved.
Definition 3.9 The chance density function φ: ℝ → [0, +∞) of a hybrid
variable ξ is a function such that

Φ(x) = ∫_{−∞}^x φ(y) dy,   ∀x ∈ ℝ,   (3.30)

∫_{−∞}^{+∞} φ(y) dy = 1,   (3.31)

where Φ is the chance distribution of ξ.

Theorem: Let ξ be a hybrid variable whose chance density function φ exists. Then we have

Ch{ξ ≤ x} = ∫_{−∞}^x φ(y) dy,   Ch{ξ ≥ x} = ∫_x^{+∞} φ(y) dy.   (3.32)

Proof: The first part follows immediately from the definition. In addition,
by the self-duality of chance measure, we have

Ch{ξ ≥ x} = 1 − Ch{ξ < x} = ∫_{−∞}^{+∞} φ(y) dy − ∫_{−∞}^x φ(y) dy = ∫_x^{+∞} φ(y) dy.

The joint chance distribution Φ(x_1, x_2, ⋯, x_n) = Ch{ξ_1 ≤ x_1, ξ_2 ≤ x_2, ⋯, ξ_n ≤ x_n} of a
hybrid vector, and a joint chance density function φ: ℝⁿ → [0, +∞), are defined analogously.
3.4
Expected Value
Expected value has been defined in several ways. For example, Kwakernaak
[87], Puri and Ralescu [193], Kruse and Meyer [86], and Liu and Liu [140]
gave different expected value operators of fuzzy random variables. Liu and
Liu [141] presented an expected value operator of random fuzzy variables. Li
and Liu [103] suggested the following definition of expected value operator
of hybrid variables.
Definition 3.12 Let ξ be a hybrid variable. Then the expected value of ξ is
defined by

E[ξ] = ∫_0^{+∞} Ch{ξ ≥ r}dr − ∫_{−∞}^0 Ch{ξ ≤ r}dr   (3.33)

provided that at least one of the two integrals is finite.

Example 3.11: If a hybrid variable ξ degenerates to a random variable η, then

Ch{ξ ≤ x} = Pr{η ≤ x},   Ch{ξ ≥ x} = Pr{η ≥ x},   ∀x ∈ ℜ.

It follows from (3.33) that E[ξ] = E[η]. In other words, the expected value
operator of hybrid variable coincides with that of random variable.
Example 3.12: If a hybrid variable ξ degenerates to a fuzzy variable ã, then

Ch{ξ ≤ x} = Cr{ã ≤ x},   Ch{ξ ≥ x} = Cr{ã ≥ x},   ∀x ∈ ℜ.

It follows from (3.33) that E[ξ] = E[ã]. In other words, the expected value
operator of hybrid variable coincides with that of fuzzy variable.
Theorem Let ξ be a hybrid variable whose chance density function φ exists.
If the Lebesgue integral ∫_{−∞}^{+∞} xφ(x)dx is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} xφ(x)dx.   (3.34)
Proof: It follows from the definition of expected value operator and Fubini
Theorem that

E[ξ] = ∫_0^{+∞} Ch{ξ ≥ r}dr − ∫_{−∞}^0 Ch{ξ ≤ r}dr
= ∫_0^{+∞} [∫_r^{+∞} φ(x)dx]dr − ∫_{−∞}^0 [∫_{−∞}^r φ(x)dx]dr
= ∫_0^{+∞} [∫_0^x φ(x)dr]dx − ∫_{−∞}^0 [∫_x^0 φ(x)dr]dx
= ∫_0^{+∞} xφ(x)dx + ∫_{−∞}^0 xφ(x)dx
= ∫_{−∞}^{+∞} xφ(x)dx.

The theorem is proved.
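The expected value integral (3.33) can be evaluated numerically once the chance distribution Φ of ξ is known: for a continuous Φ, self-duality gives Ch{ξ ≥ r} = 1 − Φ(r). A minimal Python sketch under that assumption (the function names and the truncation bounds are illustrative):

```python
import math

# Numerical sketch of (3.33), assuming the chance distribution Phi is
# continuous so that self-duality gives Ch{xi >= r} = 1 - Phi(r).
def expected_value(Phi, lo=-50.0, hi=50.0, n=50000):
    """E[xi] = integral_0^inf Ch{xi >= r} dr - integral_-inf^0 Ch{xi <= r} dr,
    approximated by the midpoint rule on truncated ranges [0, hi] and [lo, 0]."""
    h_pos = hi / n
    pos = sum((1.0 - Phi((i + 0.5) * h_pos)) * h_pos for i in range(n))
    h_neg = -lo / n
    neg = sum(Phi(lo + (i + 0.5) * h_neg) * h_neg for i in range(n))
    return pos - neg

# check against a known case: exponential distribution Phi(x) = 1 - e^{-x}
# (x >= 0) has expected value 1
Phi_exp = lambda x: 0.0 if x < 0 else 1.0 - math.exp(-x)
print(round(expected_value(Phi_exp), 3))   # 1.0
```

The truncation bounds must be wide enough that the tails of Φ contribute negligibly.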
Theorem Let ξ be a hybrid variable with chance distribution Φ. If

lim_{x→−∞} Φ(x) = 0,   lim_{x→+∞} Φ(x) = 1,

and the Lebesgue–Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, then we have

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).   (3.35)
Proof: Since the Lebesgue–Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we
immediately have

lim_{y→+∞} ∫_0^y x dΦ(x) = ∫_0^{+∞} x dΦ(x),   lim_{y→−∞} ∫_y^0 x dΦ(x) = ∫_{−∞}^0 x dΦ(x)

and

lim_{y→+∞} ∫_y^{+∞} x dΦ(x) = 0,   lim_{y→−∞} ∫_{−∞}^y x dΦ(x) = 0.

It follows from

∫_y^{+∞} x dΦ(x) ≥ y [lim_{z→+∞} Φ(z) − Φ(y)] = y(1 − Φ(y)) ≥ 0, for y > 0,

∫_{−∞}^y x dΦ(x) ≤ y [Φ(y) − lim_{z→−∞} Φ(z)] = yΦ(y) ≤ 0, for y < 0

that

lim_{y→+∞} y(1 − Φ(y)) = 0,   lim_{y→−∞} yΦ(y) = 0.

Let 0 = x₀ < x₁ < x₂ < ⋯ < xₙ = y be a partition of [0, y]. Then we have

Σ_{i=0}^{n−1} xᵢ(Φ(x_{i+1}) − Φ(xᵢ)) → ∫_0^y x dΦ(x)

and

Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − xᵢ) → ∫_0^y Ch{ξ ≥ r}dr

as max{|x_{i+1} − xᵢ| : i = 0, 1, ⋯, n−1} → 0. Since

Σ_{i=0}^{n−1} xᵢ(Φ(x_{i+1}) − Φ(xᵢ)) − Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − xᵢ) = y(Φ(y) − 1) → 0

as y → +∞, we have

∫_0^{+∞} Ch{ξ ≥ r}dr = ∫_0^{+∞} x dΦ(x).

A similar way may prove that

−∫_{−∞}^0 Ch{ξ ≤ r}dr = ∫_{−∞}^0 x dΦ(x).

It follows that (3.35) holds.
Theorem Let ξ be a hybrid variable whose expected value exists, and let a
and b be real numbers. Then

E[aξ + b] = aE[ξ] + b.   (3.36)
Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number
b. If b ≥ 0, we have

E[ξ + b] = ∫_0^{+∞} Ch{ξ + b ≥ r}dr − ∫_{−∞}^0 Ch{ξ + b ≤ r}dr
= ∫_0^{+∞} Ch{ξ ≥ r − b}dr − ∫_{−∞}^0 Ch{ξ ≤ r − b}dr
= E[ξ] + ∫_0^b (Ch{ξ ≥ r − b} + Ch{ξ < r − b})dr
= E[ξ] + b.

If b < 0, a similar argument shows that E[ξ + b] = E[ξ] + b.

Step 2: We prove E[aξ] = aE[ξ] for any real number a. If a = 0, the equation
holds trivially. If a > 0, we have

E[aξ] = ∫_0^{+∞} Ch{aξ ≥ r}dr − ∫_{−∞}^0 Ch{aξ ≤ r}dr
= ∫_0^{+∞} Ch{ξ ≥ r/a}dr − ∫_{−∞}^0 Ch{ξ ≤ r/a}dr
= a ∫_0^{+∞} Ch{ξ ≥ t}dt − a ∫_{−∞}^0 Ch{ξ ≤ t}dt
= aE[ξ].

If a < 0, we have

E[aξ] = ∫_0^{+∞} Ch{aξ ≥ r}dr − ∫_{−∞}^0 Ch{aξ ≤ r}dr
= ∫_0^{+∞} Ch{ξ ≤ r/a}dr − ∫_{−∞}^0 Ch{ξ ≥ r/a}dr
= a ∫_0^{+∞} Ch{ξ ≥ t}dt − a ∫_{−∞}^0 Ch{ξ ≤ t}dt
= aE[ξ].

Step 3: For any real numbers a and b, it follows from Steps 1 and 2 that

E[aξ + b] = E[aξ] + b = aE[ξ] + b.

The theorem is proved.
3.5
Variance
The variance has been defined in several ways. Liu and Liu [140][141] proposed variance definitions for fuzzy random variables and random fuzzy
variables. Li and Liu [103] suggested the following variance definition of
hybrid variables.
Definition 3.13 Let ξ be a hybrid variable with finite expected value e. Then
the variance of ξ is defined by V[ξ] = E[(ξ − e)²].
Theorem 3.18 If ξ is a hybrid variable with finite expected value, and a and b
are real numbers, then V[aξ + b] = a²V[ξ].
Proof: If ξ has finite expected value e, then E[aξ + b] = ae + b and

V[aξ + b] = E[(aξ + b − ae − b)²] = a²E[(ξ − e)²] = a²V[ξ].

Theorem (Li and Liu [103]) Let f be a convex function on [a, b], and ξ a
hybrid variable that takes values in [a, b] and has expected value e. Then

E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b).   (3.37)

Proof: For each (θ, ω) ∈ Θ × Ω, we have a ≤ ξ(θ, ω) ≤ b and

ξ(θ, ω) = ((b − ξ(θ, ω))/(b − a)) a + ((ξ(θ, ω) − a)/(b − a)) b.

It follows from the convexity of f that

f(ξ(θ, ω)) ≤ ((b − ξ(θ, ω))/(b − a)) f(a) + ((ξ(θ, ω) − a)/(b − a)) f(b).   (3.38)

Taking expected values on both sides, the theorem is proved.
151
3.6
Moments
Liu [129] defined the concepts of moments of both fuzzy random variables and
random fuzzy variables. Li and Liu [103] discussed the moments of hybrid
variables.
Definition 3.14 Let ξ be a hybrid variable. Then for any positive integer k,
(a) the expected value E[ξᵏ] is called the kth moment;
(b) the expected value E[|ξ|ᵏ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])ᵏ] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|ᵏ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Theorem 3.22 Let ξ be a nonnegative hybrid variable, and k a positive number. Then the k-th moment

E[ξᵏ] = k ∫_0^{+∞} r^{k−1} Ch{ξ ≥ r}dr.   (3.39)

Proof: It follows from the nonnegativity of ξ that

E[ξᵏ] = ∫_0^{+∞} Ch{ξᵏ ≥ x}dx = ∫_0^{+∞} Ch{ξ ≥ r}drᵏ = k ∫_0^{+∞} r^{k−1} Ch{ξ ≥ r}dr.

The theorem is proved.
0
3.7
Independence
Theorem Let ξ₁, ξ₂, ⋯, ξₙ be independent hybrid variables, and f₁, f₂, ⋯, fₙ
measurable functions. Then

E[∏_{i=1}^n fᵢ(ξᵢ)] = ∏_{i=1}^n E[fᵢ(ξᵢ)]   (3.42)

provided that the expected values exist and are finite.
152
(3.43)
3.8
Identical Distribution

Definition 3.16 The hybrid variables ξ and η are said to be identically distributed if

Ch{ξ ∈ B} = Ch{η ∈ B}   (3.44)

for any Borel set B of real numbers.
3.9
Critical Values
In order to rank fuzzy random variables, Liu [122] defined two critical values:
optimistic value and pessimistic value. Analogously, Liu [124] gave the concepts of critical values of random fuzzy variables. Li and Liu [103] presented
the following definition of critical values of hybrid variables.
Definition 3.17 Let ξ be a hybrid variable, and α ∈ (0, 1]. Then

ξ_sup(α) = sup{r | Ch{ξ ≥ r} ≥ α}   (3.45)

is called the α-optimistic value to ξ, and

ξ_inf(α) = inf{r | Ch{ξ ≤ r} ≥ α}   (3.46)

is called the α-pessimistic value to ξ.

The hybrid variable ξ reaches upwards of the α-optimistic value ξ_sup(α)
with chance α, and is below the α-pessimistic value ξ_inf(α) with chance α.
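The α-pessimistic value ξ_inf(α) = inf{r | Ch{ξ ≤ r} ≥ α} can be computed by bisection whenever the chance distribution Φ(r) = Ch{ξ ≤ r} is available as an increasing function. A minimal sketch, assuming Φ is continuous (the function names and search bounds are illustrative):

```python
def pessimistic_value(Phi, alpha, lo=-1e6, hi=1e6, tol=1e-9):
    """xi_inf(alpha) = inf{ r | Ch{xi <= r} >= alpha }, found by bisection
    on an increasing chance distribution Phi (assumed continuous here)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if Phi(mid) >= alpha:
            hi = mid        # mid already satisfies Ch{xi <= mid} >= alpha
        else:
            lo = mid
    return hi

# illustrative check: uniform distribution on [0, 1], Phi(x) = x clipped to [0, 1]
Phi_unif = lambda x: max(0.0, min(1.0, x))
print(round(pessimistic_value(Phi_unif, 0.9), 6))   # 0.9
```

By the duality ξ_sup(α) = −(−ξ)_inf(α), the same routine also yields optimistic values.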
Example 3.14: If a hybrid variable ξ degenerates to a random variable η, then

Ch{ξ ≤ x} = Pr{η ≤ x},   Ch{ξ ≥ x} = Pr{η ≥ x},   ∀x ∈ ℜ.

Thus ξ_sup(α) = η_sup(α) and ξ_inf(α) = η_inf(α) for all α ∈ (0, 1].
In other words, the critical values of hybrid variable coincide with that of
random variable.
Example 3.15: If a hybrid variable ξ degenerates to a fuzzy variable ã, then

Ch{ξ ≤ x} = Cr{ã ≤ x},   Ch{ξ ≥ x} = Cr{ã ≥ x},   ∀x ∈ ℜ.

Thus ξ_sup(α) = ã_sup(α) and ξ_inf(α) = ã_inf(α) for all α ∈ (0, 1].
In other words, the critical values of hybrid variable coincide with that of
fuzzy variable.
Theorem 3.32 Let ξ be a hybrid variable. If α > 0.5, then we have

Ch{ξ ≤ ξ_inf(α)} ≥ α,   Ch{ξ ≥ ξ_sup(α)} ≥ α.   (3.47)
Proof: It follows from the definition of α-pessimistic value that there exists
a decreasing sequence {xᵢ} such that Ch{ξ ≤ xᵢ} ≥ α and xᵢ ↓ ξ_inf(α) as
i → ∞. Since {ξ ≤ xᵢ} ↓ {ξ ≤ ξ_inf(α)} and limᵢ→∞ Ch{ξ ≤ xᵢ} ≥ α > 0.5, it
follows from the chance semicontinuity theorem that

Ch{ξ ≤ ξ_inf(α)} = limᵢ→∞ Ch{ξ ≤ xᵢ} ≥ α.

Similarly, there exists an increasing sequence {xᵢ} such that Ch{ξ ≥ xᵢ} ≥ α
and xᵢ ↑ ξ_sup(α) as i → ∞, from which Ch{ξ ≥ ξ_sup(α)} = limᵢ→∞ Ch{ξ ≥ xᵢ} ≥ α.
Proof: (a) If c = 0, then the part (a) is obvious. In the case of c > 0, we
have

(cξ)_sup(α) = sup{r | Ch{cξ ≥ r} ≥ α} = c sup{r/c | Ch{ξ ≥ r/c} ≥ α} = c ξ_sup(α).

A similar way may prove (cξ)_inf(α) = c ξ_inf(α). In order to prove the part (b),
it suffices to prove that (−ξ)_sup(α) = −ξ_inf(α) and (−ξ)_inf(α) = −ξ_sup(α).
In fact, we have

(−ξ)_sup(α) = sup{r | Ch{−ξ ≥ r} ≥ α} = −inf{r | Ch{ξ ≤ r} ≥ α} = −ξ_inf(α).

Similarly, we may prove that (−ξ)_inf(α) = −ξ_sup(α). The theorem is proved.
Theorem 3.34 Let ξ be a hybrid variable. Then we have
(a) if α > 0.5, then ξ_inf(α) ≥ ξ_sup(α);
(b) if α ≤ 0.5, then ξ_inf(α) ≤ ξ_sup(α).

Proof: Part (a): Write ξ̄(α) = (ξ_inf(α) + ξ_sup(α))/2. Assume ξ_inf(α) < ξ_sup(α).
It follows from the definitions of critical values that

1 ≥ Ch{ξ < ξ̄(α)} + Ch{ξ > ξ̄(α)} ≥ α + α > 1.

A contradiction proves ξ_inf(α) ≥ ξ_sup(α). Part (b): Assume ξ_inf(α) > ξ_sup(α).
It follows from the definitions of critical values that

1 ≤ Ch{ξ ≤ ξ̄(α)} + Ch{ξ ≥ ξ̄(α)} < α + α ≤ 1.

A contradiction proves ξ_inf(α) ≤ ξ_sup(α). The theorem is verified.
Theorem 3.35 Let ξ be a hybrid variable. Then we have
(a) ξ_sup(α) is a decreasing and left-continuous function of α;
(b) ξ_inf(α) is an increasing and left-continuous function of α.
Proof: (b) It is easy to prove that ξ_inf(α) is an increasing function of α.
Next, we prove the left-continuity of ξ_inf(α) with respect to α. Let {αᵢ} be
an arbitrary sequence of positive numbers such that αᵢ ↑ α. Then {ξ_inf(αᵢ)}
is an increasing sequence. If the limit is equal to ξ_inf(α), then the left-continuity is proved. Otherwise, there exists a number z such that

limᵢ→∞ ξ_inf(αᵢ) < z < ξ_inf(α).
156
3.10
Entropy

Definition 3.18 Suppose that ξ is a simple hybrid variable taking values in
{x₁, x₂, ⋯, xₙ}. Then its entropy is defined by

H[ξ] = Σ_{i=1}^n S(Ch{ξ = xᵢ})   (3.48)

where S(t) = −t ln t − (1 − t) ln(1 − t). Since S(t) reaches its maximum ln 2
at t = 0.5, we have

H[ξ] = Σ_{i=1}^n S(Ch{ξ = xᵢ}) ≤ n ln 2,

with equality if and only if Ch{ξ = xᵢ} = 0.5 for all i.
3.11
Distance
Definition 3.19 (Li and Liu [103]) The distance between hybrid variables ξ
and η is defined as

d(ξ, η) = E[|ξ − η|].   (3.51)
Theorem 3.38 (Li and Liu [103]) Let ξ, η, τ be hybrid variables, and let
d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the chance subadditivity theorem
that

d(ξ, η) = ∫_0^{+∞} Ch{|ξ − η| ≥ r}dr
≤ ∫_0^{+∞} Ch{|ξ − τ| + |τ − η| ≥ r}dr
≤ ∫_0^{+∞} (Ch{|ξ − τ| ≥ r/2} + Ch{|τ − η| ≥ r/2})dr
= 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(η, τ).
3.12
Inequalities
Yang and Liu [237] proved some important inequalities for fuzzy random
variables, and Zhu and Liu [262] presented several inequalities for random
fuzzy variables. Li and Liu [103] also verified the following inequalities for
hybrid variables.
Theorem 3.39 Let ξ be a hybrid variable, and f a nonnegative function. If
f is even and increasing on [0, ∞), then for any given number t > 0, we have

Ch{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).   (3.52)
Proof: It follows from the definition of expected value operator that

E[f(ξ)] = ∫_0^{+∞} Ch{f(ξ) ≥ r}dr
= ∫_0^{+∞} Ch{|ξ| ≥ f⁻¹(r)}dr
≥ ∫_0^{f(t)} Ch{|ξ| ≥ f⁻¹(r)}dr
≥ ∫_0^{f(t)} dr · Ch{|ξ| ≥ f⁻¹(f(t))}
= f(t) · Ch{|ξ| ≥ t}

which proves the inequality.
Theorem 3.40 (Markov Inequality) Let ξ be a hybrid variable. Then for
any given numbers t > 0 and p > 0, we have

Ch{|ξ| ≥ t} ≤ E[|ξ|ᵖ] / tᵖ.   (3.53)

Proof: It is a special case of Theorem 3.39 when f(x) = |x|ᵖ.

Theorem (Chebyshev Inequality) Let ξ be a hybrid variable whose variance
V[ξ] exists. Then for any given number t > 0, we have

Ch{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².   (3.54)

Proof: It is a special case of Theorem 3.39 when the hybrid variable ξ is
replaced with ξ − E[ξ], and f(x) = x².

Theorem (Hölder's Inequality) Let p and q be positive real numbers with
1/p + 1/q = 1, and let ξ and η be independent hybrid variables with E[|ξ|ᵖ] < ∞
and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ ᵖ√(E[|ξ|ᵖ]) · ^q√(E[|η|^q]).   (3.55)
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|ᵖ] > 0 and E[|η|^q] > 0. It is easy to prove that the function
f(x, y) = ᵖ√x · ^q√y is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus
there exist two real numbers a and b such that

f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀), ∀(x, y) ∈ D,

where x₀ = E[|ξ|ᵖ] and y₀ = E[|η|^q]. Letting x = |ξ|ᵖ and y = |η|^q and taking
expected values on both sides, we obtain the inequality.
Theorem (Minkowski Inequality) Let p be a real number with p ≥ 1, and
let ξ and η be independent hybrid variables with E[|ξ|ᵖ] < ∞ and E[|η|ᵖ] < ∞.
Then we have

ᵖ√(E[|ξ + η|ᵖ]) ≤ ᵖ√(E[|ξ|ᵖ]) + ᵖ√(E[|η|ᵖ]).   (3.56)
Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|ᵖ] > 0 and E[|η|ᵖ] > 0. It is easy to prove that the function
f(x, y) = (ᵖ√x + ᵖ√y)ᵖ is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}.
Arguing as in the proof of Hölder's inequality, with x = |ξ|ᵖ and y = |η|ᵖ,
and taking expected values on both sides, we obtain the inequality.
(3.57)
160
3.13
Convergence Concepts
Liu [129] gave the convergence concepts of fuzzy random sequence, and Zhu
and Liu [264] introduced the convergence concepts of random fuzzy sequence.
Li and Liu [103] discussed the convergence concepts of hybrid sequence: convergence almost surely (a.s.), convergence in chance, convergence in mean,
and convergence in distribution.
Table 3.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Chance ⇒ Convergence in Distribution
(3.58)
(3.59)
(3.60)
(3.61)
161
i, if j = i
0, otherwise
i
1
.
2i + 1
2
i, if k/2j (k + 1)/2j
0, otherwise
1
0
2i
162
i, if j = i
0, otherwise
E[|i |] = 1,
i.
2i , if j = i
0, otherwise
i, if k/2j (k + 1)/2j
0, otherwise
163
(3.63)
It follows from (3.62) and (3.63) that i (x) (x). The theorem is proved.
Example 3.23: Convergence in distribution does not imply convergence
in chance. Take a credibility space (Θ, 𝒫, Cr) to be {θ₁, θ₂} with Cr{θ₁} =
Cr{θ₂} = 1/2 and take an arbitrary probability space (Ω, 𝒜, Pr). We define
a hybrid variable as

ξ(θ, ω) = −1, if θ = θ₁;  1, if θ = θ₂.

We also define ξᵢ = −ξ for i = 1, 2, ⋯. Then ξᵢ and ξ have the same chance
distribution, so {ξᵢ} converges in distribution to ξ. However, Ch{|ξᵢ − ξ| ≥ 1} ≡ 1/2
for all i, so {ξᵢ} does not converge in chance to ξ.
i, if j = i
0, otherwise
0,
if x < 0
1,
if x i
for i = 1, 2, ⋯, respectively. The chance distribution of ξ is

Φ(x) = 0, if x < 0;  1, if x ≥ 0.

It is clear that Φᵢ(x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξᵢ} does not converge in distribution to ξ.
3.14
Conditional Chance
We consider the chance measure of an event A after it has been learned that
some other event B has occurred. This new chance measure of A is called
the conditional chance measure of A given B.
In order to define a conditional chance measure Ch{A|B}, at first we
have to enlarge Ch{A ∩ B} because Ch{A ∩ B} < 1 for all events whenever
Ch{B} < 1. It seems that we have no alternative but to divide Ch{A ∩ B} by
Ch{B}. Thus the conditional chance measure should not exceed

Ch{A ∩ B} / Ch{B}.   (3.64)

On the other hand, in order to preserve the self-duality, the conditional
chance measure should not be less than

1 − Ch{Aᶜ ∩ B} / Ch{B}.   (3.65)

Furthermore, since (A ∩ B) ∪ (Aᶜ ∩ B) = B, the subadditivity of chance
measure yields

Ch{A ∩ B}/Ch{B} + Ch{Aᶜ ∩ B}/Ch{B} ≥ 1.   (3.66)

Definition 3.24 (Li and Liu [106]) Let (Θ, 𝒫, Cr) × (Ω, 𝒜, Pr) be a chance
space and A, B two events. Then the conditional chance measure of A given
B is defined by

Ch{A|B} = Ch{A ∩ B}/Ch{B}, if Ch{A ∩ B}/Ch{B} < 0.5;
Ch{A|B} = 1 − Ch{Aᶜ ∩ B}/Ch{B}, if Ch{Aᶜ ∩ B}/Ch{B} < 0.5;
Ch{A|B} = 0.5, otherwise,   (3.67)

provided that Ch{B} > 0.

It follows immediately from the definition that

1 − Ch{Aᶜ ∩ B}/Ch{B} ≤ Ch{A|B} ≤ Ch{A ∩ B}/Ch{B}.   (3.68)

Furthermore, it is clear that the conditional chance measure obeys the maximum uncertainty principle.
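The three branches of (3.67) are easy to mechanize once the three chance values Ch{A ∩ B}, Ch{Aᶜ ∩ B} and Ch{B} are known. A minimal Python sketch (the function name and the numeric inputs are illustrative assumptions):

```python
def conditional_chance(ch_A_and_B, ch_Ac_and_B, ch_B):
    """Conditional chance measure per (3.67); the arguments are Ch{A n B},
    Ch{A^c n B} and Ch{B}, with Ch{B} > 0 assumed."""
    if ch_A_and_B / ch_B < 0.5:
        return ch_A_and_B / ch_B
    if ch_Ac_and_B / ch_B < 0.5:
        return 1.0 - ch_Ac_and_B / ch_B
    return 0.5

# the three branches in action
print(round(conditional_chance(0.2, 0.6, 0.8), 3))   # 0.25  (first branch)
print(round(conditional_chance(0.6, 0.3, 0.8), 3))   # 0.625 (dual branch)
print(round(conditional_chance(0.5, 0.5, 0.8), 3))   # 0.5   (maximum uncertainty)
```

Swapping the first two arguments computes Ch{Aᶜ|B}, and the two values sum to 1, which exhibits the self-duality proved below.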
166
Remark 3.7: Let X and Y be events in the credibility space. Then the
conditional chance measure of X given Y is

Ch{X|Y} = Cr{X ∩ Y}/Cr{Y}, if Cr{X ∩ Y}/Cr{Y} < 0.5;
Ch{X|Y} = 1 − Cr{Xᶜ ∩ Y}/Cr{Y}, if Cr{Xᶜ ∩ Y}/Cr{Y} < 0.5;
Ch{X|Y} = 0.5, otherwise,

provided that Cr{Y} > 0.

Example: Let ξ and η be two hybrid variables. Then the conditional chance
measure of ξ = x given η = y is

Ch{ξ = x | η = y} = Ch{ξ = x, η = y}/Ch{η = y}, if Ch{ξ = x, η = y}/Ch{η = y} < 0.5;
Ch{ξ = x | η = y} = 1 − Ch{ξ ≠ x, η = y}/Ch{η = y}, if Ch{ξ ≠ x, η = y}/Ch{η = y} < 0.5;
Ch{ξ = x | η = y} = 0.5, otherwise,

provided that Ch{η = y} > 0.
Theorem 3.47 (Li and Liu [106]) Conditional chance measure is a type of
uncertain measure. That is, conditional chance measure is normal, increasing, self-dual and countably subadditive.

Proof: At first, the conditional chance measure Ch{·|B} is normal, i.e.,

Ch{Θ × Ω|B} = 1 − Ch{∅ ∩ B}/Ch{B} = 1.

For any events A₁ ⊂ A₂, if

Ch{A₁ ∩ B}/Ch{B} ≤ Ch{A₂ ∩ B}/Ch{B} < 0.5,

then

Ch{A₁|B} = Ch{A₁ ∩ B}/Ch{B} ≤ Ch{A₂ ∩ B}/Ch{B} = Ch{A₂|B}.
167
If
Ch{A1 B}
Ch{A2 B}
0.5
,
Ch{B}
Ch{B}
Ch{A1 B}
Ch{A2 B}
,
Ch{B}
Ch{B}
then we have
Ch{A1 |B} = 1
Ch{Ac1 B}
Ch{Ac2 B}
0.5 1
0.5 = Ch{A2 |B}.
Ch{B}
Ch{B}
Ch{Ac B}
0.5,
Ch{B}
Ch{A B}
Ch{A B}
+ 1
Ch{B}
Ch{B}
= 1.
That is, Ch{|B} is self-dual. Finally, for any countable sequence {Ai } of
events, if Ch{Ai |B} < 0.5 for all i, it follows from the countable subadditivity
of chance measure that
Ai B
Ch
Ai B
Ch
i=1
Ch{B}
i=1
Ch{Ai B}
i=1
Ch{Ai |B}.
Ch{B}
i=1
i = 2, 3,
Ai B
Ch
i=1
Ch{Ai |B}.
i=1
If Ch{i Ai |B} > 0.5, we may prove the above inequality by the following
facts:
Ac1 B
Aci B
(Ai B)
i=2
i=1
168
Ch{Ac1 B}
Aci B
Ch{Ai B} + Ch
i=2
i=1
Ai |B
Ch
Aci B
Ch
=1
i=1
i=1
Ch{B}
Ch{Ai B}
Ch{Ac1 B}
+
Ch{Ai |B} 1
Ch{B}
i=1
i=2
Ch{B}
If there are at least two terms greater than 0.5, then the countable subadditivity is clearly true. Thus Ch{|B} is countably subadditive. Hence Ch{|B}
is an uncertain measure.
Definition 3.25 (Li and Liu [106]) The conditional chance distribution
Φ: ℜ → [0, 1] of a hybrid variable ξ given B is defined by

Φ(x|B) = Ch{ξ ≤ x|B}   (3.69)

provided that Ch{B} > 0.
Example: Let ξ and η be hybrid variables. Then the conditional chance
distribution of ξ given η = y is

Φ(x|η = y) = Ch{ξ ≤ x, η = y}/Ch{η = y}, if Ch{ξ ≤ x, η = y}/Ch{η = y} < 0.5;
Φ(x|η = y) = 1 − Ch{ξ > x, η = y}/Ch{η = y}, if Ch{ξ > x, η = y}/Ch{η = y} < 0.5;
Φ(x|η = y) = 0.5, otherwise,

provided that Ch{η = y} > 0.
Definition 3.26 (Li and Liu [106]) The conditional chance density function
φ of a hybrid variable ξ given B is a nonnegative function such that

Φ(x|B) = ∫_{−∞}^x φ(y|B)dy, ∀x ∈ ℜ,   (3.70)

∫_{−∞}^{+∞} φ(y|B)dy = 1,   (3.71)

where Φ(x|B) is the conditional chance distribution of ξ given B.
Definition 3.27 (Li and Liu [106]) Let ξ be a hybrid variable. Then the
conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_0^{+∞} Ch{ξ ≥ r|B}dr − ∫_{−∞}^0 Ch{ξ ≤ r|B}dr   (3.72)

provided that at least one of the two integrals is finite.
3.15
Hybrid Process
Definition 3.28 (Liu [133]) Let T be an index set, and (Θ, 𝒫, Cr) × (Ω, 𝒜, Pr)
a chance space. A hybrid process is a measurable function from T × (Θ, 𝒫, Cr)
× (Ω, 𝒜, Pr) to the set of real numbers, i.e., for each t ∈ T and any Borel set
B of real numbers, the set

{(θ, ω) ∈ Θ × Ω | X(t, θ, ω) ∈ B}   (3.73)

is an event.

That is, a hybrid process X(t, θ, ω) is a function of three variables such
that the function X_t(θ, ω) is a hybrid variable for each t. For each fixed
(θ*, ω*), the function X_t(θ*, ω*) is called a sample path of the hybrid process.
A hybrid process X_t(θ, ω) is said to be sample-continuous if the sample path
is continuous for almost all (θ, ω).
Definition 3.29 (Liu [133]) A hybrid process X_t is said to have independent
increments if

X_{t₁} − X_{t₀}, X_{t₂} − X_{t₁}, ⋯, X_{t_k} − X_{t_{k−1}}   (3.74)

are independent hybrid variables for any times t₀ < t₁ < ⋯ < t_k. A hybrid
process X_t is said to have stationary increments if, for any given t > 0, the
increments X_{s+t} − X_s are identically distributed hybrid variables for all s > 0.
Example 3.28: Let Xt be a fuzzy process and let Yt be a stochastic process.
Then Xt + Yt is a hybrid process.
Hybrid Renewal Process
Definition 3.30 (Liu [133]) Let ξ₁, ξ₂, ⋯ be iid positive hybrid variables.
Define S₀ = 0 and Sₙ = ξ₁ + ξ₂ + ⋯ + ξₙ for n ≥ 1. Then the hybrid process

N_t = max_{n≥0} {n | Sₙ ≤ t}   (3.75)

is called a hybrid renewal process.
Note that N_t ≥ n is equivalent to Sₙ ≤ t, i.e.,

Ch{N_t ≥ n} = Ch{Sₙ ≤ t}.   (3.76)

Theorem (Liu [133]) Let N_t be a hybrid renewal process. Then we have

E[N_t] = Σ_{n=1}^∞ Ch{Sₙ ≤ t}.   (3.77)

Proof: Since N_t takes only nonnegative integer values, we have

E[N_t] = ∫_0^∞ Ch{N_t ≥ r}dr = Σ_{n=1}^∞ ∫_{n−1}^n Ch{N_t ≥ r}dr
= Σ_{n=1}^∞ Ch{N_t ≥ n} = Σ_{n=1}^∞ Ch{Sₙ ≤ t}.

The theorem is proved.
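Formula (3.77) can be checked in the special case where the interarrival times degenerate to iid rate-1 exponential random variables: then Ch = Pr, the renewal process is a Poisson process, and E[N_t] = Σₙ Pr{Sₙ ≤ t} = t. A Monte Carlo sketch of this degenerate case (the sample size and seed are illustrative choices):

```python
import random

def mean_renewals(t, runs=10000, seed=1):
    """Monte Carlo estimate of E[N_t] when the iid interarrival times
    degenerate to rate-1 exponential random variables (so Ch = Pr and
    the renewal process is Poisson, with E[N_t] = t)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        s, n = 0.0, 0
        while True:
            s += rng.expovariate(1.0)   # next interarrival time
            if s > t:
                break
            n += 1                      # one more renewal before time t
        total += n
    return total / runs

print(mean_renewals(3.0))   # close to 3.0
```

For genuinely hybrid interarrival times, the chance values Ch{Sₙ ≤ t} in (3.77) would have to be evaluated from the joint chance measure instead.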
[Figure: sample paths of the processes Ct and Bt.]
3.16
Hybrid Calculus
Definition (Liu [133]) Let X_t be a hybrid process and let D_t be a standard
Brownian motion or a standard C process. For any partition of the closed
interval [a, b] with a = t₁ < t₂ < ⋯ < t_{k+1} = b, write Δ = max_{1≤i≤k} |t_{i+1} − tᵢ|.
Then the hybrid integral of X_t with respect to D_t is

∫_a^b X_t dD_t = lim_{Δ→0} Σ_{i=1}^k X_{tᵢ} · (D_{t_{i+1}} − D_{tᵢ})   (3.82)

provided that the limit exists in mean square and is a hybrid variable.
Remark 3.9: The hybrid integral may also be written as follows,
b
Xt dDt =
a
(3.83)
(1 dBt + 2 dCt ) = 1 Bs + 2 Cs
0
1 2
(B s + Cs2 ).
2 s
Ci = Cti+1 Cti
and obtain
k
Bs Cs =
i=1
k
i=1
s
i=1
s
Bt dCt +
0
Bi Ci
Cti Bi +
Bti Ci +
i=1
Ct dBt + 0
0
as Δ → 0. That is,

∫_0^s B_t dC_t + ∫_0^s C_t dB_t = B_s C_s.

Theorem (Liu [133]) Let B_t be a standard Brownian motion, C_t a standard
C process, and h(t, b, c) a twice continuously differentiable function. Define
X_t = h(t, B_t, C_t). Then we have the chain rule

dX_t = ∂h/∂t (t, B_t, C_t)dt + ∂h/∂b (t, B_t, C_t)dB_t + ∂h/∂c (t, B_t, C_t)dC_t + (1/2) ∂²h/∂b² (t, B_t, C_t)dt.   (3.84)
Proof: Write ΔX_t = h(t + Δt, B_t + ΔB_t, C_t + ΔC_t) − h(t, B_t, C_t). A
second-order Taylor expansion gives

ΔX_t = ∂h/∂t Δt + ∂h/∂b ΔB_t + ∂h/∂c ΔC_t
+ (1/2) ∂²h/∂t² (Δt)² + (1/2) ∂²h/∂b² (ΔB_t)² + (1/2) ∂²h/∂c² (ΔC_t)²
+ ∂²h/∂t∂b ΔtΔB_t + ∂²h/∂t∂c ΔtΔC_t + ∂²h/∂b∂c ΔB_tΔC_t,

where each partial derivative is evaluated at (t, B_t, C_t). Since we can ignore
the terms (Δt)², (ΔC_t)², ΔtΔB_t, ΔtΔC_t, ΔB_tΔC_t and replace (ΔB_t)² with
Δt, the chain rule is obtained because it makes

X_s = X_0 + ∫_0^s ∂h/∂t dt + ∫_0^s ∂h/∂b dB_t + ∫_0^s ∂h/∂c dC_t + (1/2) ∫_0^s ∂²h/∂b² dt

for any s ≥ 0.
Remark 3.10: The infinitesimal increments dB_t and dC_t in (3.84) may be
replaced with the derived D process

dY_t = u_t dt + v_{1t} dB_t + v_{2t} dC_t   (3.85)

where u_t and v_{2t} are absolutely integrable hybrid processes, and v_{1t} is a
square integrable hybrid process, thus producing

dh(t, Y_t) = ∂h/∂t (t, Y_t)dt + ∂h/∂b (t, Y_t)dY_t + (1/2) ∂²h/∂b² (t, Y_t) v²_{1t} dt.   (3.86)
Remark 3.11: Assume that B_{1t}, B_{2t}, ⋯, B_{mt} are standard Brownian motions, C_{1t}, C_{2t}, ⋯, C_{nt} are standard C processes, and

h(t, b₁, b₂, ⋯, b_m, c₁, c₂, ⋯, c_n)

is a twice continuously differentiable function. Define

X_t = h(t, B_{1t}, B_{2t}, ⋯, B_{mt}, C_{1t}, C_{2t}, ⋯, C_{nt}).

You [244] proved the following chain rule

dX_t = ∂h/∂t dt + Σ_{i=1}^m ∂h/∂bᵢ dB_{it} + Σ_{j=1}^n ∂h/∂cⱼ dC_{jt} + (1/2) Σ_{i=1}^m ∂²h/∂bᵢ² dt.   (3.87)
Example 3.32: Applying the chain rule, we obtain the following formulas:

d(B_t C_t) = C_t dB_t + B_t dC_t,
d(tB_t C_t) = B_t C_t dt + tC_t dB_t + tB_t dC_t.
174
3.17
Hybrid Differential Equation

Suppose B_t is a standard Brownian motion, C_t a standard C process, and
f, g₁, g₂ some given functions. Then

dX_t = f(t, X_t)dt + g₁(t, X_t)dB_t + g₂(t, X_t)dC_t   (3.88)

is called a hybrid differential equation. A solution is a hybrid process X_t
that satisfies (3.88) identically in t, i.e.,

X_s = X_0 + ∫_0^s f(t, X_t)dt + ∫_0^s g₁(t, X_t)dB_t + ∫_0^s g₂(t, X_t)dC_t.   (3.89)

However, the differential form is more convenient for us. This is the main
reason why we accept the differential form.
Example 3.33: Let B_t be a standard Brownian motion, and let ã and b̃ be
two fuzzy variables. Then the hybrid differential equation

dX_t = ã dt + b̃ dB_t

has a solution

X_t = ã t + b̃ B_t.

The hybrid differential equation

dX_t = ã X_t dt + b̃ X_t dB_t

has a solution

X_t = exp((ã − b̃²/2) t + b̃ B_t).

More generally, with C_t a standard C process and c̃ a fuzzy variable, the
hybrid differential equation dX_t = ã X_t dt + b̃ X_t dB_t + c̃ X_t dC_t has a solution

X_t = exp((ã − b̃²/2) t + b̃ B_t + c̃ C_t).
Chapter 4
Uncertainty Theory
A classical measure is essentially a set function satisfying the nonnegativity and countable additivity axioms.
4.1
Uncertainty Space
178
Axiom 1. (Normality) ℳ{Γ} = 1.
Axiom 2. (Monotonicity) ℳ{Λ₁} ≤ ℳ{Λ₂} whenever Λ₁ ⊂ Λ₂.
Axiom 3. (Self-Duality) ℳ{Λ} + ℳ{Λᶜ} = 1 for any event Λ.
Axiom 4. (Countable Subadditivity) For every countable sequence of events {Λᵢ}, we have

ℳ{⋃_{i=1}^∞ Λᵢ} ≤ Σ_{i=1}^∞ ℳ{Λᵢ}.   (4.1)
Remark 4.1: Pathology occurs if self-duality axiom is not assumed. For
example, we define a set function that takes value 1 for each set. Then it
satisfies all axioms but self-duality. Is it not strange if such a set function
serves as a measure?
Remark 4.2: Pathology occurs if subadditivity is not assumed. For example, suppose that a universal set contains 3 elements. We define a set
function that takes value 0 for each singleton, and 1 for each set with at least
2 elements. Then such a set function satisfies all axioms but subadditivity.
Is it not strange if such a set function serves as a measure?
Remark 4.3: Pathology occurs if countable subadditivity axiom is replaced
with finite subadditivity axiom. For example, assume the universal set consists of all real numbers. We define a set function that takes value 0 if the
set is bounded, 0.5 if both the set and complement are unbounded, and 1 if
the complement of the set is bounded. Then such a set function is finitely
subadditive but not countably subadditive. Is it not strange if such a set
function serves as a measure?
Definition 4.1 (Liu [132]) The set function ℳ is called an uncertain measure if it satisfies the normality, monotonicity, self-duality, and countable
subadditivity axioms.
Example 4.1: Probability, credibility and chance measures are instances of
uncertain measure.
Example 4.2: Let Pr be a probability measure, Cr a credibility measure,
and a a number in [0, 1]. Then

ℳ{Λ} = a Pr{Λ} + (1 − a) Cr{Λ}   (4.2)

is an uncertain measure.
Example 4.3: Let Γ = {γ₁, γ₂, γ₃}. For this case, there are only 8 events.
Define

ℳ{γ₁} = 0.6, ℳ{γ₂} = 0.3, ℳ{γ₃} = 0.2,

ℳ{∅} = 0, ℳ{Γ} = 1, and assign each two-element event the value forced by
self-duality,

ℳ{γ₁, γ₂} = 0.8, ℳ{γ₁, γ₃} = 0.7, ℳ{γ₂, γ₃} = 0.4.

Then ℳ is an uncertain measure.
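A finite example like this can be checked exhaustively by machine: with only 8 events, the normality, monotonicity, self-duality and (here finite) subadditivity axioms reduce to a small number of inequalities. A Python sketch, using singleton values 0.6, 0.3, 0.2 and two-element events fixed by self-duality (the event labels g1, g2, g3 are illustrative):

```python
# Exhaustively check the uncertain measure axioms on a three-point universe
# with M{g1}=0.6, M{g2}=0.3, M{g3}=0.2 and complements fixed by self-duality.
universe = frozenset({"g1", "g2", "g3"})
M = {frozenset(): 0.0, frozenset({"g1"}): 0.6, frozenset({"g2"}): 0.3,
     frozenset({"g3"}): 0.2, frozenset({"g2", "g3"}): 0.4,
     frozenset({"g1", "g3"}): 0.7, frozenset({"g1", "g2"}): 0.8,
     universe: 1.0}

events = list(M)
assert M[universe] == 1.0                                   # normality
for a in events:
    assert abs(M[a] + M[universe - a] - 1.0) < 1e-12        # self-duality
    for b in events:
        if a <= b:
            assert M[a] <= M[b] + 1e-12                     # monotonicity
        assert M[a | b] <= M[a] + M[b] + 1e-12              # subadditivity
print("all axioms hold")
```

The same brute-force check works for any uncertain measure on a small finite universe, since the power set is enumerable.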
Theorem 4.1 Suppose that ℳ is an uncertain measure. Then

0 ≤ ℳ{Λ} ≤ 1   (4.4)

for any event Λ.

Theorem 4.2 Suppose that ℳ is an uncertain measure. Then for any events
Λ₁ and Λ₂, we have

ℳ{Λ₁} ∨ ℳ{Λ₂} ≤ ℳ{Λ₁ ∪ Λ₂} ≤ ℳ{Λ₁} + ℳ{Λ₂}.   (4.5)

Proof: The left-hand inequality follows from the monotonicity axiom and
the right-hand inequality follows from the countable subadditivity axiom immediately.
Theorem 4.3 Let = {1 , 2 , }. If
{i } + {j } 1
k=1
{k }
(4.6)
180
Proof: Since
Since = k {k } and
1 = {} =
{k }
k=1
{k }.
k=1
{i } 0
{ i } = i
lim {\i } = {}.
(4.7)
{} by using {i } 0.
Since
{\i } {} {\i } + {i }.
Hence
{\i } {} by using {i } 0.
Remark 4.4: It follows from the above theorem that the uncertain measure
is null-additive, i.e., ℳ{Λ₁ ∪ Λ₂} = ℳ{Λ₁} + ℳ{Λ₂} if either ℳ{Λ₁} = 0
or ℳ{Λ₂} = 0. In other words, the uncertain measure remains unchanged if
the event is enlarged or reduced by an event with measure zero.
Uncertainty Asymptotic Theorem

Theorem 4.5 (Uncertainty Asymptotic Theorem) For any events Λ₁, Λ₂, ⋯,
we have

lim_{i→∞} ℳ{Λᵢ} > 0, if Λᵢ ↑ Γ,   (4.8)

lim_{i→∞} ℳ{Λᵢ} < 1, if Λᵢ ↓ ∅.   (4.9)
181
{i }.
i=1
0,
if =
,
if
is upper bounded
(4.10)
{} = 0.5, if both and c are upper unbounded
1 , if is upper bounded
1,
if = .
It is easy to verify that is an uncertain measure. Write i = (, i] for
i = 1, 2, Then i and limi {i } = . Furthermore, we have
ci and limi {ci } = 1 .
Uncertainty Space
Definition 4.2 (Liu [132]) Let Γ be a nonempty set, ℒ a σ-algebra over
Γ, and ℳ an uncertain measure. Then the triplet (Γ, ℒ, ℳ) is called an
uncertainty space.
4.2
Uncertain Variables
(4.11)
is an event.
Example 4.6: Random variable, fuzzy variable and hybrid variable are
instances of uncertain variable.
182
{ = x1 , = x2 , , = xm } = 0;
(4.12)
{ = x1 , = x2 , } = 0.
(4.13)
It is clear that 0 { = x} 1, and there is at most one point x0 such
that { = x0 } > 0.5. For a continuous uncertain variable, we always have
0 { = x} 0.5.
Definition 4.5 Let 1 and 2 be uncertain variables defined on the uncertainty space (, , ). We say 1 = 2 if 1 () = 2 () for almost all .
Uncertain Vector
Definition 4.6 An n-dimensional uncertain vector is a measurable function
from an uncertainty space (, , ) to the set of n-dimensional real vectors,
i.e., for any Borel set B of n , the set
{ B} = () B
(4.14)
is an event.
Theorem 4.6 The vector (1 , 2 , , n ) is an uncertain vector if and only
if 1 , 2 , , n are uncertain variables.
Proof: Write = (1 , 2 , , n ). Suppose that is an uncertain vector on
the uncertainty space (, , ). For any Borel set B of , the set B n1
is a Borel set of n . Thus the set
1 () B
=
1 () B, 2 () , , n ()
() B
n1
183
()
(ai , bi )
i=1
i () (ai , bi )
=
i=1
() B
is an event, and
{ () B c } = { () B}c
is an event. This means that B c
{ |() Bi } are events and
()
Bi
i=1
{ () Bi }
=
i=1
is an event. This means that i Bi . Since the smallest -algebra containing all open intervals of n is just the Borel algebra of n , the class
contains all Borel sets of n . The theorem is proved.
Uncertain Arithmetic
Definition 4.7 Suppose that f : n
is a measurable function, and
1 , 2 , , n uncertain variables on the uncertainty space (, , ). Then
= f (1 , 2 , , n ) is an uncertain variable defined as
() = f (1 (), 2 (), , n ()),
(4.15)
Example 4.7: Let 1 and 2 be two uncertain variables. Then the sum
= 1 + 2 is an uncertain variable defined by
() = 1 () + 2 (),
184
4.3
Identification Function
(x)dx = 1;
(4.16)
{ B} = 21
xB c
(x)dx.
(4.17)
Remark 4.5: It is not true that all uncertain variables have their own
identification functions.
Remark 4.6: The uncertain variable with identification function (, ) is
essentially a fuzzy variable if
sup (x) = 1.
x
{ = x} = (x)
,
2
x .
(4.18)
185
Theorem 4.8 Suppose (x) is a nonnegative function and (x) is a nonnegative and integrable function satisfying (4.16). Then there is an uncertain
variable such that (4.17) holds.
Proof: Let be the universal set. For each Borel set B of real numbers, we
define a set function
{B} = 12
xB c
(x)dx.
B
4.4
Uncertainty Distribution
[0, 1] of an
(x) = () x .
(4.19)
x+
[0, +)
(x) =
(y)dy,
x ,
(4.21)
(y)dy = 1
(4.22)
186
Example 4.8: The uncertainty density function may not exist even if the
uncertainty distribution is continuous and differentiable a.e. Suppose f is the
Cantor function, and set

Φ(x) = 0, if x < 0;  f(x), if 0 ≤ x ≤ 1;  1, if x > 1.   (4.23)

Then Φ is an increasing and continuous function, and is an uncertainty distribution. Note that Φ′(x) = 0 almost everywhere, and

∫_{−∞}^{+∞} Φ′(x)dx = 0 ≠ 1.
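The Cantor function in this example is easy to evaluate digit by digit: scan the base-3 expansion of x, stop at the first digit 1 (that point lies in a removed middle third, where the function is flat), and read the earlier digits 0 and 2 as binary digits 0 and 1. A minimal sketch (the truncation depth is an illustrative choice):

```python
def cantor(x, depth=40):
    """Evaluate the Cantor function on [0, 1] by scanning base-3 digits.
    The function rises from 0 to 1 yet is constant on every removed
    middle third, so its derivative is 0 almost everywhere."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            value += scale          # plateau value on a removed middle third
            break
        value += scale * (digit // 2)
        scale /= 2
    return value

print(round(cantor(0.25), 6))       # 0.333333 (known value 1/3)
```

Plugging this into Φ of (4.23) gives a continuous uncertainty distribution whose derivative integrates to 0 rather than 1, exactly as the example states.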
{ x} =
{ x} =
(y)dy,
(y)dy.
(4.24)
Proof: The first part follows immediately from the definition. In addition,
by the self-duality of uncertain measure, we have
{ x} = 1 { < x} =
(y)dy
(y)dy =
(y)dy.
x
x2
(x1 , x2 , , xn ) =
, and
[0, +)
xn
187
4.5
Expected Value
E[] =
0
{ r}dr
{ r}dr
(4.25)
x(x)dx
E[] =
x(x)dx.
(4.26)
Proof: It follows from the definition of expected value operator and Fubini
Theorem that
+
E[] =
0
{ r}dr
{ r}dr
(x)dx dr
=
0
(x)dx dr
r
+
(x)dr dx
=
0
+
(x)dr dx
x(x)dx +
x(x)dx
0
+
x(x)dx.
188
xd(x)
xd(x).
E[] =
(4.27)
xd(x) =
lim
xd(x),
xd(x) = 0,
y+
xd(x)
lim
xd(x) =
lim
and
lim
xd(x) = 0.
It follows from
+
xd(x) y
y
z+
= y (1 (y)) 0,
for y > 0,
= y(y) 0,
for y < 0
that
lim y (1 (y)) = 0,
y+
lim y(y) = 0.
Let 0 = x0 < x1 < x2 < < xn = y be a partition of [0, y]. Then we have
n1
xi ((xi+1 ) (xi ))
xd(x)
0
i=0
and
n1
(1 (xi+1 ))(xi+1 xi )
0
i=0
{ r}dr
as max{|xi+1 xi | : i = 0, 1, , n 1} 0. Since
n1
n1
xi ((xi+1 ) (xi ))
i=0
{ r}dr =
xd(x).
0
189
{ r}dr =
xd(x).
(4.28)
Proof: Step 1: We first prove that E[ + b] = E[] + b for any real number
b. If b 0, we have
+
E[ + b] =
0
+
=
0
{ + b r}dr
{ r b}dr
b
= E[] +
0
{ + b r}dr
{ r b}dr
({ r b} + { < r b})dr
= E[] + b.
If b < 0, then we have
0
E[a + b] = E[]
b
E[a] =
0
+
=
0
{a r}dr
{ r/a}dr
=a
0
{ t}dt a
{a r}dr
{ r/a}dr
{ t}dt
= aE[].
If a < 0, we have
+
E[a] =
0
+
=
0
{a r}dr
{ r/a}dr
=a
0
= aE[].
{ t}dt a
{a r}dr
{ r/a}dr
{ t}dt
190
Finally, for any real numbers a and b, it follows from Steps 1 and 2 that the
theorem holds.
Theorem 4.14 Let f be a convex function on [a, b], and an uncertain
variable that takes values in [a, b] and has expected value e. Then
E[f ()]
be
ea
f (a) +
f (b).
ba
ba
(4.29)
b ()
() a
a+
b.
ba
ba
() a
b ()
f (a) +
f (b).
ba
ba
4.6
Variance
E[( e)2 ] =
0
which implies
{( e)2 r}dr
191
That is, { = e} = 1.
Conversely, if { = e} = 1, then we have
{( e)2 r} = 0 for any r > 0. Thus
+
V [] =
0
{( e)2
= 0} = 1 and
{( e)2 r}dr = 0.
(4.30)
4.7
Moments
Definition 4.16 (Liu [132]) Let ξ be an uncertain variable. Then for any
positive integer k,
(a) the expected value E[ξᵏ] is called the kth moment;
(b) the expected value E[|ξ|ᵏ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])ᵏ] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|ᵏ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Theorem 4.18 Let ξ be a nonnegative uncertain variable, and k a positive
number. Then the k-th moment

E[ξᵏ] = k ∫_0^{+∞} r^{k−1} ℳ{ξ ≥ r}dr.   (4.31)

Proof: It follows from the nonnegativity of ξ that

E[ξᵏ] = ∫_0^{+∞} ℳ{ξᵏ ≥ x}dx = ∫_0^{+∞} ℳ{ξ ≥ r}drᵏ = k ∫_0^{+∞} r^{k−1} ℳ{ξ ≥ r}dr.

The theorem is proved.
Conversely, if (4.32) holds for some positive number t, then E[||s ] < for
any 0 s < t.
192
E[||t ] =
0
Thus we have
+
lim
xt /2
{||t r}dr = 0.
{||
xt /2
xt
r}dr
xt /2
E[||s ] =
0
a
=
0
a
{||s r}dr +
{||s r}dr +
{||s r}dr + s
{||s r}dr
rst1 dr
0
< +.
by
0
E[| e|k ]
be k ea k
|a| +
|b| ,
ba
ba
be
ea
(e a)k +
(b e)k .
ba
ba
(4.33)
(4.34)
193
4.8
Independence
fi (i ) =
i=1
E[fi (i )]
(4.35)
i=1
(4.36)
4.9
Identical Distribution
194
4.10
Critical Values
Definition 4.19 (Liu [132]) Let ξ be an uncertain variable, and α ∈ (0, 1].
Then

ξ_sup(α) = sup{r | ℳ{ξ ≥ r} ≥ α}   (4.38)

is called the α-optimistic value to ξ, and

ξ_inf(α) = inf{r | ℳ{ξ ≤ r} ≥ α}   (4.39)

is called the α-pessimistic value to ξ.
{ r/c} }
= csup ().
A similar way may prove (c)inf () = cinf (). In order to prove the part (b),
it suffices to prove that ()sup () = inf () and ()inf () = sup ().
In fact, we have
{ r} }
= inf{r | { r} }
()sup () = sup{r
= inf ().
Similarly, we may prove that ()inf () = sup (). The theorem is proved.
195
1 { < ()}
+ { > ()}
+ > 1.
A contradiction proves inf () sup ().
Part (b): Assume that inf () > sup (). It follows from the definition
1 { ()}
+ { ()}
< + 1.
4.11
Entropy
Definition 4.20 (Liu [132]) Suppose that ξ is a simple uncertain variable
taking values in {x₁, x₂, ⋯, xₙ}. Then its entropy is defined by

H[ξ] = Σ_{i=1}^n S(ℳ{ξ = xᵢ})   (4.40)

where S(t) = −t ln t − (1 − t) ln(1 − t).
196
{ = xi } = 0 or 1 for each i. That is, there exists one and only one index k
such that { = xk } = 1, i.e., is essentially a deterministic/crisp number.
This theorem states that the entropy of an uncertain variable reaches its
minimum 0 when the uncertain variable degenerates to a deterministic/crisp
number. In this case, there is no uncertainty.
Theorem 4.30 Suppose that ξ is a simple uncertain variable taking values
in {x₁, x₂, ⋯, xₙ}. Then

H[ξ] ≤ n ln 2   (4.42)

and equality holds if and only if ℳ{ξ = xᵢ} = 0.5 for all i.

Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have

H[ξ] = Σ_{i=1}^n S(ℳ{ξ = xᵢ}) ≤ n ln 2

and equality holds if and only if ℳ{ξ = xᵢ} = 0.5 for all i.
This theorem states that the entropy of an uncertain variable reaches its
maximum when the uncertain variable is an equipossible one. In this case,
there is no preference among all the values that the uncertain variable will
take.
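The entropy (4.40) and the bound H[ξ] ≤ n ln 2 can be checked directly for a simple uncertain variable, since H[ξ] depends only on the n measure values ℳ{ξ = xᵢ}. A minimal Python sketch (the function names are illustrative):

```python
import math

def S(t):
    """S(t) = -t ln t - (1-t) ln(1-t), with S(0) = S(1) = 0."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1.0 - t) * math.log(1.0 - t)

def entropy(measures):
    """H[xi] = sum of S(M{xi = x_i}) over the values of a simple
    uncertain variable, per (4.40)."""
    return sum(S(m) for m in measures)

# the equipossible case attains the maximum n ln 2
n = 4
print(abs(entropy([0.5] * n) - n * math.log(2)) < 1e-12)   # True
# a deterministic variable has entropy 0
print(entropy([1.0, 0.0, 0.0, 0.0]))                       # 0.0
```

The two printed cases are exactly the minimum- and maximum-entropy situations discussed above.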
4.12
Distance
Definition 4.21 (Liu [132]) The distance between uncertain variables ξ and
η is defined as

d(ξ, η) = E[|ξ − η|].   (4.43)
Theorem 4.31 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the countable subadditivity axiom
that

d(ξ, η) = ∫_0^{+∞} ℳ{|ξ − η| ≥ r}dr
≤ ∫_0^{+∞} ℳ{|ξ − τ| + |τ − η| ≥ r}dr
≤ ∫_0^{+∞} (ℳ{|ξ − τ| ≥ r/2} + ℳ{|τ − η| ≥ r/2})dr
= 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(η, τ).
Example: The factor 2 in the triangle inequality cannot be reduced to 1.
Define uncertain variables

ξ(γ) = 1, if γ = γ₃; 0, otherwise,   ζ(γ) = −1, if γ = γ₁; 0, otherwise,   η(γ) ≡ 0.

It is easy to verify that d(ξ, η) = d(η, ζ) = 1/2 and d(ξ, ζ) = 3/2. Thus

d(ξ, ζ) = (3/2) (d(ξ, η) + d(η, ζ)).
Inequalities
Theorem 4.32 (Liu [132]) Let ξ be an uncertain variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given
number t > 0, we have

ℳ{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).   (4.44)
Proof: It follows from the definition of expected value operator that

E[f(ξ)] = ∫_0^{+∞} ℳ{f(ξ) ≥ r}dr
= ∫_0^{+∞} ℳ{|ξ| ≥ f⁻¹(r)}dr
≥ ∫_0^{f(t)} ℳ{|ξ| ≥ f⁻¹(r)}dr
≥ ∫_0^{f(t)} dr · ℳ{|ξ| ≥ f⁻¹(f(t))}
= f(t) · ℳ{|ξ| ≥ t}

which proves the inequality.
Theorem 4.33 (Liu [132], Markov Inequality) Let ξ be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have

ℳ{|ξ| ≥ t} ≤ E[|ξ|ᵖ] / tᵖ.   (4.45)

Proof: It is a special case of Theorem 4.32 when f(x) = |x|ᵖ.

Theorem (Liu [132], Chebyshev Inequality) Let ξ be an uncertain variable
whose variance V[ξ] exists. Then for any given number t > 0, we have

ℳ{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².   (4.46)

Proof: It is a special case of Theorem 4.32 when the uncertain variable ξ is
replaced with ξ − E[ξ], and f(x) = x².

Theorem (Liu [132], Hölder's Inequality) Let p and q be positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables
with E[|ξ|ᵖ] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ ᵖ√(E[|ξ|ᵖ]) · ^q√(E[|η|^q]).   (4.47)
Proof: The inequality holds trivially if at least one of and is zero a.s. Now
we assumeE[||p ] > 0 and E[||q ] > 0. It is easy to prove that the function
(x, y) D.
199
E[| + |p ]
E[||p ] +
E[||p ].
(4.48)
Proof: The inequality holds trivially if at least one of and is zero a.s. Now
we assume
E[||p ] > 0 and E[||p ] > 0. It is easy to prove that the function
(x, y) D.
(4.49)
200
4.14
Convergence Concepts
We have the following four convergence concepts of uncertain sequence: convergence almost surely (a.s.), convergence in measure, convergence in mean,
and convergence in distribution.
Table 4.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution
Definition 4.22 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, ⋯ are uncertain variables defined on the uncertainty space (Γ, ℒ, ℳ). The sequence {ξᵢ} is said
to be convergent a.s. to ξ if there exists an event Λ with ℳ{Λ} = 1 such that

lim_{i→∞} |ξᵢ(γ) − ξ(γ)| = 0   (4.50)

for every γ ∈ Λ. In that case we write ξᵢ → ξ, a.s.

Definition 4.23 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, ⋯ are uncertain variables. We say that the sequence {ξᵢ} converges in measure to ξ if

lim_{i→∞} ℳ{|ξᵢ − ξ| ≥ ε} = 0   (4.51)

for every ε > 0.
(4.53)
201
(4.55)
It follows from (4.54) and (4.55) that i (x) (x). The theorem is proved.
4.15
Conditional Uncertainty
(4.56)
202
(4.57)
ℳ{A ∩ B}/ℳ{B} + ℳ{Aᶜ ∩ B}/ℳ{B} ≥ 1.   (4.58)

Hence any numbers between 1 − ℳ{Aᶜ ∩ B}/ℳ{B} and ℳ{A ∩ B}/ℳ{B} are
reasonable values that the conditional uncertain measure may take. Based
on the maximum uncertainty principle, we have the following conditional
uncertain measure.

Definition (Liu [132]) Let (Γ, ℒ, ℳ) be an uncertainty space and A, B two
events. Then the conditional uncertain measure of A given B is defined by

ℳ{A|B} = ℳ{A ∩ B}/ℳ{B}, if ℳ{A ∩ B}/ℳ{B} < 0.5;
ℳ{A|B} = 1 − ℳ{Aᶜ ∩ B}/ℳ{B}, if ℳ{Aᶜ ∩ B}/ℳ{B} < 0.5;
ℳ{A|B} = 0.5, otherwise,   (4.59)

provided that ℳ{B} > 0.
203
{A2 B}
1 B}
{A1 |B} = {A
{B} {B} = {A2 |B}.
If
{A B}
{B}
= 1.
That is, {|B} satisfies the self-duality axiom. Finally, for any countable
sequence {Ai } of events, if {Ai |B} < 0.5 for all i, it follows from the
countable subadditivity axiom that
Ai B
i=1
Ai B
i=1
{B}
i=1
{Ai B}
{B}
=
i=1
i = 2, 3,
{Ai |B}.
204
If
Ai B
i=1
{Ai |B}.
i=1
If M{⋃_i Ai|B} > 0.5, we may prove the above inequality by the following facts:
\[ A_1^c \cap B \subset \bigcup_{i=2}^{\infty} (A_i \cap B) \cup \Big(\bigcap_{i=1}^{\infty} A_i^c \cap B\Big), \]
\[ \mathcal{M}\{A_1^c \cap B\} \le \sum_{i=2}^{\infty} \mathcal{M}\{A_i \cap B\} + \mathcal{M}\Big\{\bigcap_{i=1}^{\infty} A_i^c \cap B\Big\}, \]
\[ \mathcal{M}\Big\{\bigcup_{i=1}^{\infty} A_i \,\Big|\, B\Big\} = 1 - \frac{\mathcal{M}\big\{\bigcap_{i=1}^{\infty} A_i^c \cap B\big\}}{\mathcal{M}\{B\}}, \]
\[ \sum_{i=1}^{\infty} \mathcal{M}\{A_i|B\} \ge 1 - \frac{\mathcal{M}\{A_1^c \cap B\}}{\mathcal{M}\{B\}} + \sum_{i=2}^{\infty} \frac{\mathcal{M}\{A_i \cap B\}}{\mathcal{M}\{B\}}. \]
If there are at least two terms greater than 0.5, then the countable subadditivity is clearly true. Thus M{·|B} satisfies the countable subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.
Example 4.12: Let ξ and η be two uncertain variables. Then we have
\[
\mathcal{M}\{\xi = x \,|\, \eta = y\} =
\begin{cases}
\dfrac{\mathcal{M}\{\xi = x, \eta = y\}}{\mathcal{M}\{\eta = y\}}, & \text{if } \dfrac{\mathcal{M}\{\xi = x, \eta = y\}}{\mathcal{M}\{\eta = y\}} < 0.5 \\[2ex]
1 - \dfrac{\mathcal{M}\{\xi \ne x, \eta = y\}}{\mathcal{M}\{\eta = y\}}, & \text{if } \dfrac{\mathcal{M}\{\xi \ne x, \eta = y\}}{\mathcal{M}\{\eta = y\}} < 0.5 \\[2ex]
0.5, & \text{otherwise}
\end{cases} \tag{4.61}
\]
provided that M{η = y} > 0.
The conditional uncertainty distribution of ξ given η = y is
\[
\Phi(x \,|\, \eta = y) =
\begin{cases}
\dfrac{\mathcal{M}\{\xi \le x, \eta = y\}}{\mathcal{M}\{\eta = y\}}, & \text{if } \dfrac{\mathcal{M}\{\xi \le x, \eta = y\}}{\mathcal{M}\{\eta = y\}} < 0.5 \\[2ex]
1 - \dfrac{\mathcal{M}\{\xi > x, \eta = y\}}{\mathcal{M}\{\eta = y\}}, & \text{if } \dfrac{\mathcal{M}\{\xi > x, \eta = y\}}{\mathcal{M}\{\eta = y\}} < 0.5 \\[2ex]
0.5, & \text{otherwise}
\end{cases}
\]
provided that M{η = y} > 0.
If the conditional uncertainty distribution admits a density φ(x|B), then
\[ \Phi(x|B) = \int_{-\infty}^{x} \phi(y|B)\,dy \tag{4.62} \]
with
\[ \int_{-\infty}^{+\infty} \phi(y|B)\,dy = 1. \tag{4.63} \]
The conditional expected value of an uncertain variable ξ given B is
\[ E[\xi|B] = \int_0^{+\infty} \mathcal{M}\{\xi \ge r \,|\, B\}\,dr - \int_{-\infty}^{0} \mathcal{M}\{\xi \le r \,|\, B\}\,dr \tag{4.64} \]
provided that at least one of the two integrals is finite.
4.16 Uncertain Process

An uncertain process Xt is said to have independent increments if
\[ X_{t_1} - X_{t_0},\; X_{t_2} - X_{t_1},\; \ldots,\; X_{t_k} - X_{t_{k-1}} \tag{4.66} \]
are independent uncertain variables for any times t0 < t1 < ··· < tk. An uncertain process Xt is said to have stationary increments if, for any given t > 0, the increments Xs+t − Xs are identically distributed uncertain variables for all s > 0.
Uncertain Renewal Process

Definition 4.34 (Liu [133]) Let ξ1, ξ2, ... be iid positive uncertain variables. Define S0 = 0 and Sn = ξ1 + ξ2 + ··· + ξn for n ≥ 1. Then the uncertain process
\[ N_t = \max_{n \ge 0}\,\{ n \,|\, S_n \le t \} \tag{4.67} \]
is called an uncertain renewal process.

Since Nt ≥ n is equivalent to Sn ≤ t, the expected renewal number is
\[ E[N_t] = \sum_{n=1}^{\infty} \mathcal{M}\{S_n \le t\}. \tag{4.69} \]
In fact, because Nt takes only nonnegative integer values,
\[ E[N_t] = \int_0^{\infty} \mathcal{M}\{N_t \ge r\}\,dr = \sum_{n=1}^{\infty} \int_{n-1}^{n} \mathcal{M}\{N_t \ge r\}\,dr = \sum_{n=1}^{\infty} \mathcal{M}\{N_t \ge n\} = \sum_{n=1}^{\infty} \mathcal{M}\{S_n \le t\}. \]
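The series (4.69) can be evaluated numerically once the distribution of each Sn is known. The sketch below is an illustration under assumed data: iid linear interarrival times on [1, 2], for which the sum of n independent linear terms is again linear, on [n, 2n]. The function names, the distribution choice, and that distributional fact are assumptions made for the example, not statements from the text.

```python
def phi_sn(n, t, a=1.0, b=2.0):
    """M{S_n <= t} when S_n has the linear distribution on [n*a, n*b]."""
    lo, hi = n * a, n * b
    if t <= lo:
        return 0.0
    if t >= hi:
        return 1.0
    return (t - lo) / (hi - lo)

def expected_renewals(t, a=1.0, b=2.0):
    """E[N_t] = sum_{n>=1} M{S_n <= t}; terms vanish once n*a > t."""
    total, n = 0.0, 1
    while n * a <= t:
        total += phi_sn(n, t, a, b)
        n += 1
    return total

# t = 3: M{S_1 <= 3} = 1, M{S_2 <= 3} = 0.5, and zero from n = 3 on.
assert abs(expected_renewals(3.0) - 1.5) < 1e-12
```

Because the interarrival times are at least a, the sum is automatically finite: only the terms with n ≤ t/a can be nonzero.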
Canonical Process
Definition 4.35 (Liu [133]) An uncertain process Wt is said to be a canonical process if
(i) W0 = 0 and Wt is sample-continuous,
(ii) Wt has stationary and independent increments,
(iii) W1 is an uncertain variable with expected value 0 and variance 1.
Theorem 4.40 (Existence Theorem) There is a canonical process.
Proof: In fact, standard Brownian motion and standard C process are instances of canonical process.
Example 4.14: Let Bt be a standard Brownian motion, and Ct a standard C process. Then for each number a ∈ [0, 1], we may verify that
\[ W_t = aB_t + (1-a)C_t \tag{4.70} \]
is a canonical process.
Theorem 4.41 Let Wt be a canonical process. Then E[Wt] = 0 for any t.

Proof: Let f(t) = E[Wt]. Then for any times t1 and t2, by using the property of stationary and independent increments, we obtain
\[ f(t_1 + t_2) = E[W_{t_1+t_2}] = E[W_{t_1+t_2} - W_{t_2} + W_{t_2} - W_0] = E[W_{t_1}] + E[W_{t_2}] = f(t_1) + f(t_2) \]
which implies that there is a constant e such that f(t) = et. It follows from f(1) = E[W1] = 0 that e = 0. Hence E[Wt] = f(t) = 0 for any t.
Theorem 4.42 Let Wt be a canonical process. If for any t1 and t2, we have
\[ V[W_{t_1+t_2}] = V[W_{t_1}] + V[W_{t_2}], \]
then V[Wt] = t for any t.

Proof: Let f(t) = V[Wt]. Then for any times t1 and t2, by using the property of stationary and independent increments as well as the variance condition, we obtain
\[ f(t_1 + t_2) = V[W_{t_1+t_2}] = V[W_{t_1+t_2} - W_{t_2} + W_{t_2} - W_0] = V[W_{t_1}] + V[W_{t_2}] = f(t_1) + f(t_2) \]
which implies that there is a constant σ² such that f(t) = σ²t. It follows from f(1) = V[W1] = 1 that σ² = 1 and f(t) = t. Hence the theorem is proved.
Similarly, if for any t1 and t2 we have
\[ \sqrt{V[W_{t_1+t_2}]} = \sqrt{V[W_{t_1}]} + \sqrt{V[W_{t_2}]}, \]
then, letting f(t) = \(\sqrt{V[W_t]}\), the same argument gives f(t1 + t2) = f(t1) + f(t2), which implies that there is a constant σ such that f(t) = σt. It follows from f(1) = 1 that σ = 1 and f(t) = t, that is, V[Wt] = t². The theorem is verified.
Definition 4.36 For any partition of the closed interval [0, t] with 0 = t1 < t2 < ··· < t_{k+1} = t, the mesh is written as
\[ \Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|. \]
Then the α-variation of the uncertain process Wt is
\[ \lim_{\Delta \to 0} \sum_{i=1}^{k} |W_{t_{i+1}} - W_{t_i}|^{\alpha} \tag{4.71} \]
provided that the limit exists in mean square and is an uncertain process. Especially, the α-variation is called total variation if α = 1, and squared variation if α = 2.
Definition 4.37 (Liu [133]) Let Wt be a canonical process. Then et + σWt is called a derived canonical process, and the uncertain process
\[ G_t = \exp(et + \sigma W_t) \tag{4.72} \]
is called a geometric canonical process.
4.17 Uncertain Calculus

Let Wt be a canonical process, and let Xt be an uncertain process. For any partition of the closed interval [0, s] with 0 = t1 < t2 < ··· < t_{k+1} = s, the uncertain integral of Xt with respect to Wt is
\[ \int_0^s X_t\,dW_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}\,(W_{t_{i+1}} - W_{t_i}) \tag{4.75} \]
provided that the limit exists in mean square and is an uncertain variable.
Example 4.15: Let Wt be a canonical process. Then for any partition 0 = t1 < t2 < ··· < t_{k+1} = s, we have
\[ \int_0^s dW_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (W_{t_{i+1}} - W_{t_i}) \equiv W_s - W_0 = W_s. \]
Example 4.16: Let Wt be a canonical process. For any partition 0 = t1 < t2 < ··· < t_{k+1} = s, we have the telescoping identity
\[ sW_s = \sum_{i=1}^{k} t_i\,(W_{t_{i+1}} - W_{t_i}) + \sum_{i=1}^{k} W_{t_{i+1}}(t_{i+1} - t_i), \]
whose two sums converge to \(\int_0^s t\,dW_t\) and \(\int_0^s W_t\,dt\), respectively, as Δ → 0. It follows that
\[ \int_0^s t\,dW_t = sW_s - \int_0^s W_t\,dt. \]
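The identity in Example 4.16 is a pure telescoping fact that holds on every partition and every sample path, so it can be checked numerically on a simulated path. The sketch below uses one simulated path of standard Brownian motion (an instance of a canonical process, per Example 4.14) purely for illustration; the step count and seed are arbitrary assumptions.

```python
import random

random.seed(1)
# One simulated sample path of standard Brownian motion on [0, s],
# on a uniform partition with k subintervals.
k, s = 1000, 1.0
dt = s / k
w = [0.0]
for _ in range(k):
    w.append(w[-1] + random.gauss(0.0, dt ** 0.5))

# Riemann sums for both sides of  int_0^s t dW_t = s W_s - int_0^s W_t dt,
# with the time integral evaluated at right endpoints as in the derivation.
lhs = sum((i * dt) * (w[i + 1] - w[i]) for i in range(k))
rhs = s * w[k] - sum(w[i + 1] * dt for i in range(k))
assert abs(lhs - rhs) < 1e-9  # exact telescoping, up to rounding
```

The agreement is exact (up to floating-point rounding) even before Δ → 0, because the telescoping step does not rely on any limit.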
Theorem 4.44 (Liu [133]) Let Wt be a canonical process, and let h(t, w) be a twice continuously differentiable function. Define Xt = h(t, Wt). Then we have the following chain rule
\[ dX_t = \frac{\partial h}{\partial t}(t, W_t)\,dt + \frac{\partial h}{\partial w}(t, W_t)\,dW_t + \frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t, W_t)\,dW_t^2. \tag{4.76} \]
Proof: By the Taylor expansion, the infinitesimal increment of Xt is
\[ \Delta X_t = \frac{\partial h}{\partial t}(t, W_t)\Delta t + \frac{\partial h}{\partial w}(t, W_t)\Delta W_t + \frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t, W_t)(\Delta W_t)^2 + \frac{1}{2}\frac{\partial^2 h}{\partial t^2}(t, W_t)(\Delta t)^2 + \frac{\partial^2 h}{\partial t \partial w}(t, W_t)\Delta t\,\Delta W_t. \]
Since we can ignore the terms (Δt)² and ΔtΔWt, the chain rule is proved because it makes
\[ X_s = X_0 + \int_0^s \frac{\partial h}{\partial t}(t, W_t)\,dt + \int_0^s \frac{\partial h}{\partial w}(t, W_t)\,dW_t + \frac{1}{2}\int_0^s \frac{\partial^2 h}{\partial w^2}(t, W_t)\,dW_t^2 \]
for any s ≥ 0.
Remark 4.10: The infinitesimal increment dWt in (4.76) may be replaced with the derived canonical process
\[ dY_t = u_t\,dt + v_t\,dW_t, \tag{4.77} \]
in which case, for Xt = h(t, Yt),
\[ dX_t = \frac{\partial h}{\partial t}(t, Y_t)\,dt + \frac{\partial h}{\partial w}(t, Y_t)\,dY_t + \frac{1}{2}\frac{\partial^2 h}{\partial w^2}(t, Y_t)\,v_t^2\,dW_t^2. \tag{4.78} \]
Example 4.17: Applying the chain rule, we obtain the following formula
\[ d(tW_t) = W_t\,dt + t\,dW_t. \]
Hence we have
\[ sW_s = \int_0^s d(tW_t) = \int_0^s W_t\,dt + \int_0^s t\,dW_t. \]
That is,
\[ \int_0^s t\,dW_t = sW_s - \int_0^s W_t\,dt. \]
Theorem 4.45 (Liu [133], Integration by Parts) Suppose that Wt is a canonical process and F(t) is an absolutely continuous function. Then
\[ \int_0^s F(t)\,dW_t = F(s)W_s - \int_0^s W_t\,dF(t). \tag{4.79} \]
Proof: By defining h(t, Wt) = F(t)Wt and using the chain rule, we get
\[ d(F(t)W_t) = W_t\,dF(t) + F(t)\,dW_t. \]
Thus
\[ F(s)W_s = \int_0^s d(F(t)W_t) = \int_0^s W_t\,dF(t) + \int_0^s F(t)\,dW_t \]
which is just (4.79).
4.18 Uncertain Differential Equation

Suppose Wt is a canonical process, and f and g are some given functions. An uncertain differential equation is
\[ dX_t = f(t, X_t)\,dt + g(t, X_t)\,dW_t, \tag{4.80} \]
which is equivalent to the integral form
\[ X_s = X_0 + \int_0^s f(t, X_t)\,dt + \int_0^s g(t, X_t)\,dW_t. \tag{4.81} \]
However, the differential form is more convenient for us. This is the main reason why we accept the differential form.

Example 4.18: Let Wt be a canonical process. Then the uncertain differential equation
\[ dX_t = a\,dt + b\,dW_t \]
has a solution Xt = at + bWt.
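For constant coefficients the Euler discretization of (4.80) telescopes to the closed form Xt = at + bWt exactly, since every step adds a·Δt + b·ΔWt. The sketch below demonstrates this on a simulated Brownian path (again used as a concrete canonical process); the coefficient values, step count, and seed are assumptions for the example.

```python
import random

random.seed(2)
a, b = 0.3, 1.2           # hypothetical constant coefficients
k, s = 512, 2.0
dt = s / k

w, x = 0.0, 0.0           # W_0 = 0 and X_0 = 0
for _ in range(k):
    dw = random.gauss(0.0, dt ** 0.5)
    x += a * dt + b * dw  # Euler step of dX_t = a dt + b dW_t
    w += dw

# The discrete scheme reproduces X_s = a s + b W_s up to rounding.
assert abs(x - (a * s + b * w)) < 1e-9
```

With state-dependent coefficients f and g the Euler scheme would only approximate the solution, but for this equation the agreement is exact.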
Appendix A
Measurable Sets
Algebra and σ-algebra are very important and fundamental concepts in measure theory.
Definition A.1 Let Γ be a nonempty set. A collection A is called an algebra over Γ if the following conditions hold:
(a) Γ ∈ A;
(b) if A ∈ A, then A^c ∈ A;
(c) if Ai ∈ A for i = 1, 2, ..., n, then ⋃_{i=1}^{n} Ai ∈ A.
If the condition (c) is replaced with closure under countable union, then A is called a σ-algebra over Γ.
Example A.1: Assume that Γ is a nonempty set. Then {∅, Γ} is the smallest σ-algebra over Γ, and the power set P (all subsets of Γ) is the largest σ-algebra over Γ.
Example A.2: Let A be the set of all finite disjoint unions of intervals of the form (−∞, a], (a, b], (b, ∞) and ∅. Then A is an algebra over ℜ, but not a σ-algebra, because Ai = (0, (i−1)/i] ∈ A for all i but
\[ \bigcup_{i=1}^{\infty} A_i = (0, 1) \notin \mathcal{A}. \]
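For a finite universe, the three conditions of Definition A.1 can be checked mechanically; over a finite set, pairwise closure under union suffices for condition (c) by induction. A minimal sketch (the function name is an assumption):

```python
from itertools import combinations

def is_algebra(universe, collection):
    """Check conditions (a)-(c) of Definition A.1 over a finite universe."""
    sets = {frozenset(s) for s in collection}
    if frozenset(universe) not in sets:
        return False                        # (a) Gamma must belong
    if any(frozenset(universe) - s not in sets for s in sets):
        return False                        # (b) closed under complement
    return all(a | b in sets for a in sets for b in sets)  # (c) unions

gamma = {1, 2, 3}
assert is_algebra(gamma, [set(), gamma])              # smallest sigma-algebra
power_set = [set(c) for r in range(4) for c in combinations(gamma, r)]
assert is_algebra(gamma, power_set)                   # largest sigma-algebra
assert not is_algebra(gamma, [set(), {1}, gamma])     # complement {2,3} missing
```

On a finite universe every algebra is automatically a σ-algebra; the distinction in Example A.2 only arises over infinite sets.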
For a monotone sequence {Ai} of sets, the limit is
\[ \lim_{i\to\infty} A_i = \bigcup_{i=1}^{\infty} A_i \;\text{ (if } A_1 \subset A_2 \subset \cdots\text{)}; \qquad \lim_{i\to\infty} A_i = \bigcap_{i=1}^{\infty} A_i \;\text{ (if } A_1 \supset A_2 \supset \cdots\text{)}. \tag{A.1} \]
For a general sequence {Ai},
\[ \limsup_{i\to\infty} A_i = \bigcap_{k=1}^{\infty}\bigcup_{i=k}^{\infty} A_i; \tag{A.2} \]
\[ \liminf_{i\to\infty} A_i = \bigcup_{k=1}^{\infty}\bigcap_{i=k}^{\infty} A_i. \tag{A.3} \]
If lim sup Ai = lim inf Ai, then the common value is called the limit of the sequence and is denoted by
\[ \lim_{i\to\infty} A_i. \tag{A.4} \]
\[ D = \bigcup_{i=1}^{\infty}\bigcap_{j=1}^{\infty} D_{ij}. \tag{A.5} \]
Each point x of the Cantor set can be expressed as
\[ x = \sum_{i=1}^{\infty} \frac{a_i}{3^i} \tag{A.6} \]
where ai = 0 or 2 for each i.
Appendix B
Classical Measures
This appendix introduces the concepts of classical measure, measure space,
Lebesgue measure, and product measure.
Definition B.1 Let Γ be a nonempty set, and A a σ-algebra over Γ. A classical measure π is a set function on A satisfying
Axiom 1. (Nonnegativity) π{A} ≥ 0 for any A ∈ A;
Axiom 2. (Countable Additivity) For every countable sequence of mutually disjoint measurable sets {Ai}, we have
\[ \pi\Big\{\bigcup_{i=1}^{\infty} A_i\Big\} = \sum_{i=1}^{\infty} \pi\{A_i\}. \tag{B.1} \]
Example B.1: Length, area and volume are instances of the measure concept.

Definition B.2 Let Γ be a nonempty set, A a σ-algebra over Γ, and π a measure on A. Then the triplet (Γ, A, π) is called a measure space.

It has been proved that there is a unique measure π on the Borel algebra of ℜ such that π{(a, b]} = b − a for any interval (a, b] of ℜ. Such a measure is called the Lebesgue measure.

Let (Γi, Ai, πi), i = 1, 2, ..., n, be measure spaces, and write Γ = Γ1 × Γ2 × ··· × Γn. Then there is a unique product measure π = π1 × π2 × ··· × πn on the product σ-algebra such that
\[ \pi\{A_1 \times A_2 \times \cdots \times A_n\} = \pi_1\{A_1\}\,\pi_2\{A_2\}\cdots\pi_n\{A_n\} \tag{B.2} \]
for any Ai ∈ Ai, i = 1, 2, ..., n. A similar result holds for the countably infinite product π = π1 × π2 × ···.
Appendix C
Measurable Functions
This appendix introduces the concepts of measurable function, simple function, step function, absolutely continuous function, singular function, and
Cantor function.
Definition C.1 A function f from (Γ, A) to the set of real numbers is said to be measurable if
\[ f^{-1}(B) = \{\gamma \in \Gamma \,|\, f(\gamma) \in B\} \in \mathcal{A} \tag{C.1} \]
for any Borel set B of real numbers.

Let f be a measurable function. Then its positive and negative parts
\[ f^+(\gamma) = \begin{cases} f(\gamma), & \text{if } f(\gamma) \ge 0 \\ 0, & \text{otherwise,} \end{cases} \qquad f^-(\gamma) = \begin{cases} -f(\gamma), & \text{if } f(\gamma) \le 0 \\ 0, & \text{otherwise} \end{cases} \]
are measurable functions. If f1, f2, ... are measurable functions, then
\[ \inf_{1 \le i < \infty} f_i(\gamma) \tag{C.2} \]
is a measurable function.
For a nonnegative measurable function f, define
\[ h_i(\gamma) = \begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i} \le f(\gamma) < \dfrac{k}{2^i},\; k = 1, 2, \ldots, i2^i \\[1.5ex] i, & \text{if } i \le f(\gamma) \end{cases} \tag{C.3} \]
for i = 1, 2, ... Then {hi} is an increasing sequence of nonnegative simple measurable functions with
\[ f(\gamma) = \lim_{i\to\infty} h_i(\gamma). \tag{C.4} \]
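The dyadic construction (C.3) can be exercised numerically at a single point γ, treating y = f(γ) as a given nonnegative number. A minimal sketch (the function name is an assumption):

```python
def h(i, y):
    """Value of the simple function h_i of (C.3) at a point where f = y >= 0:
    (k-1)/2^i on the cell [(k-1)/2^i, k/2^i), and i once y >= i."""
    if y >= i:
        return float(i)
    k = int(y * 2 ** i) + 1   # the unique k with (k-1)/2^i <= y < k/2^i
    return (k - 1) / 2 ** i

# h_i(y) increases to y: monotone approximation by simple functions
for y in (0.0, 0.3, 1.7, 10.0):
    vals = [h(i, y) for i in range(1, 25)]
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
    assert 0.0 <= y - vals[-1] < 2 ** -20
```

The cap at i handles unbounded f: for each fixed γ the cap stops binding once i exceeds f(γ), after which hi(γ) is within 2^{-i} of f(γ).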
Definition C.3 A function f : ℜ → ℜ is said to be Lipschitz continuous if there is a positive number K such that
\[ |f(y) - f(x)| \le K|y - x|, \quad \forall x, y \in \Re. \tag{C.5} \]
A function f is said to be absolutely continuous if, for any given ε > 0, there exists δ > 0 such that
\[ \sum_{i=1}^{m} |f(y_i) - f(x_i)| < \varepsilon \tag{C.6} \]
for every finite disjoint class {(xi, yi), i = 1, 2, ..., m} of bounded open intervals for which
\[ \sum_{i=1}^{m} |y_i - x_i| < \delta. \tag{C.7} \]
Define a function g on the Cantor set C by
\[ g\Big(\sum_{i=1}^{\infty} \frac{a_i}{3^i}\Big) = \sum_{i=1}^{\infty} \frac{a_i}{2^{i+1}} \tag{C.8} \]
where ai = 0 or 2 for i = 1, 2, ... Then g(x) is an increasing function and g(C) = [0, 1]. The Cantor function is defined on [0, 1] as follows,
\[ f(x) = \sup\{\, g(y) \,|\, y \in C,\; y \le x \,\}. \tag{C.9} \]
In particular, f(1) = 1 and f(x) = g(x) for all x ∈ C.
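A direct way to evaluate the Cantor function follows (C.8): scan the ternary digits of x, map each digit 2 to a binary digit 1, and stop at the first ternary digit 1 (where f is flat). A minimal sketch (the function name and digit depth are assumptions):

```python
def cantor(x, depth=48):
    """Cantor function f(x) on [0, 1] via the ternary expansion of x."""
    if x >= 1.0:
        return 1.0
    total, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        d = int(x)                 # next ternary digit of x
        x -= d
        if d == 1:
            return total + scale   # x left the Cantor set; f is constant here
        total += (d // 2) * scale  # digit 2 contributes a binary digit 1
        scale *= 0.5
    return total

assert cantor(0.0) == 0.0 and cantor(1.0) == 1.0
assert abs(cantor(1.0 / 3.0) - 0.5) < 1e-12   # f(1/3) = 1/2
assert abs(cantor(2.0 / 9.0) - 0.25) < 1e-12  # f(2/9) = 1/4
```

Truncating at `depth` digits bounds the error by 2^{-depth}, which is far below the tolerances used above.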
Appendix D
Lebesgue Integral
This appendix introduces Lebesgue integral, Lebesgue-Stieltjes integral, monotone convergence theorem, Lebesgue dominated convergence theorem, and
Fubini theorem.
Definition D.1 Let h(x) be a nonnegative simple measurable function defined by
\[ h(x) = \begin{cases} c_1, & \text{if } x \in A_1 \\ c_2, & \text{if } x \in A_2 \\ \;\vdots \\ c_m, & \text{if } x \in A_m \end{cases} \]
where A1, A2, ..., Am are Borel sets. Then the Lebesgue integral of h on a Borel set A is
\[ \int_A h(x)\,dx = \sum_{i=1}^{m} c_i\,\lambda\{A \cap A_i\} \tag{D.1} \]
where λ is the Lebesgue measure.
For a nonnegative measurable function f on the Borel set A, take an increasing sequence {hi} of nonnegative simple measurable functions with hi(x) → f(x). Then the Lebesgue integral of f on A is
\[ \int_A f(x)\,dx = \lim_{i\to\infty} \int_A h_i(x)\,dx. \tag{D.2} \]
Definition D.3 Let f(x) be a measurable function on the Borel set A, and define
\[ f^+(x) = \begin{cases} f(x), & \text{if } f(x) > 0 \\ 0, & \text{otherwise,} \end{cases} \qquad f^-(x) = \begin{cases} -f(x), & \text{if } f(x) < 0 \\ 0, & \text{otherwise.} \end{cases} \]
Then the Lebesgue integral of f on A is
\[ \int_A f(x)\,dx = \int_A f^+(x)\,dx - \int_A f^-(x)\,dx \tag{D.3} \]
provided that at least one of \(\int_A f^+(x)\,dx\) and \(\int_A f^-(x)\,dx\) is finite. If {fi} is an increasing sequence of nonnegative measurable functions, the monotone convergence theorem states
\[ \lim_{i\to\infty} \int_A f_i(x)\,dx = \int_A \lim_{i\to\infty} f_i(x)\,dx. \tag{D.4} \]
Example D.2: The condition |fi| ≤ g in the Lebesgue dominated convergence theorem cannot be removed. Let A = (0, 1), fi(x) = i if x ∈ (0, 1/i) and 0 otherwise. Then fi(x) → 0 everywhere on A. However,
\[ \int_A \lim_{i\to\infty} f_i(x)\,dx = 0 \ne 1 = \lim_{i\to\infty} \int_A f_i(x)\,dx. \]
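The two sides of the counterexample can be made concrete in a few lines. The sketch below encodes fi and checks both facts: each fi integrates to 1 (value i on an interval of length 1/i), while fi(x) → 0 at every fixed x (the function name is an assumption):

```python
def f(i, x):
    """f_i(x) = i on (0, 1/i), and 0 elsewhere."""
    return i if 0.0 < x < 1.0 / i else 0

# each f_i integrates to i * (1/i) = 1 over (0, 1)
assert all(abs(f(i, 0.5 / i) * (1.0 / i) - 1.0) < 1e-12 for i in range(1, 200))

# yet f_i(x) -> 0 pointwise: for fixed x, f_i(x) = 0 as soon as 1/i < x
x = 0.123
assert f(10 ** 6, x) == 0
```

The spikes grow taller as they narrow, so no single integrable g dominates every fi, which is exactly why the dominated convergence theorem does not apply.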
The Fubini theorem states that, for an integrable function f(x, y),
\[ \int\!\!\int f(x, y)\,dx\,dy = \int\Big(\int f(x, y)\,dy\Big)dx = \int\Big(\int f(x, y)\,dx\Big)dy. \tag{D.6} \]
Let Φ be an increasing, right-continuous function. Then there is a unique measure π on the Borel algebra such that
\[ \pi\{(a, b]\} = \Phi(b) - \Phi(a) \]
for all a and b with a < b. Such a measure is called the Lebesgue-Stieltjes measure corresponding to Φ.
Let h(x) be a nonnegative simple measurable function defined by
\[ h(x) = \begin{cases} c_1, & \text{if } x \in A_1 \\ c_2, & \text{if } x \in A_2 \\ \;\vdots \\ c_m, & \text{if } x \in A_m. \end{cases} \]
Then the Lebesgue-Stieltjes integral of h on the Borel set A is
\[ \int_A h(x)\,d\Phi(x) = \sum_{i=1}^{m} c_i\,\pi\{A \cap A_i\}, \tag{D.7} \]
and, for a nonnegative measurable function f approximated by simple functions hi(x) ↑ f(x),
\[ \int_A f(x)\,d\Phi(x) = \lim_{i\to\infty} \int_A h_i(x)\,d\Phi(x). \tag{D.8} \]
Definition D.7 Let f(x) be a measurable function on the Borel set A, and define
\[ f^+(x) = \begin{cases} f(x), & \text{if } f(x) > 0 \\ 0, & \text{otherwise,} \end{cases} \qquad f^-(x) = \begin{cases} -f(x), & \text{if } f(x) < 0 \\ 0, & \text{otherwise.} \end{cases} \]
Then the Lebesgue-Stieltjes integral of f on A is
\[ \int_A f(x)\,d\Phi(x) = \int_A f^+(x)\,d\Phi(x) - \int_A f^-(x)\,d\Phi(x) \tag{D.9} \]
provided that at least one of \(\int_A f^+(x)\,d\Phi(x)\) and \(\int_A f^-(x)\,d\Phi(x)\) is finite.
Appendix E
Euler-Lagrange Equation
Let
\[ L(y) = \int_a^b F\big(x, y(x), y'(x)\big)\,dx \tag{E.1} \]
where F is a known function with continuous first and second partial derivatives. If L has an extremum (maximum or minimum) at y(x), then
\[ \frac{\partial F}{\partial y}(x) - \frac{d}{dx}\,\frac{\partial F}{\partial y'}(x) = 0 \tag{E.2} \]
which is called the Euler-Lagrange equation.
Note that the Euler-Lagrange equation is only a necessary condition for the
existence of an extremum. However, if the existence of an extremum is clear
and there exists only one solution to the Euler-Lagrange equation, then this
solution must be the curve for which the extremum is achieved.
Appendix F
Maximum Uncertainty
Principle
An event has no uncertainty if its measure is 1 (or 0) because we may believe
that the event occurs (or not). An event is the most uncertain if its measure
is 0.5 because the event and its complement may be regarded as equally
likely.
In practice, if there is no information about the measure of an event, we should assign 0.5 to it. Sometimes, only partial information is available. In this case, the value of the measure may be specified within some range. What value should the measure take? For safety, we should assign it the value as close to 0.5 as possible. This is the maximum uncertainty principle proposed by Liu [132].
Maximum Uncertainty Principle: For any event, if there are multiple
reasonable values that a measure may take, then the value as close to 0.5 as
possible is assigned to the event.
Perhaps the reader would like to ask what values are reasonable. The
answer is problem-dependent. At least, the values should ensure that all
axioms about the measure are satisfied, and should be consistent with the
given information.
Example F.1: Let Λ be an event. Based on some given information, it is known that the measure value of Λ lies in the interval [a, b]. By using the maximum uncertainty principle, we should assign
\[ \{\Lambda\} \mapsto \begin{cases} a, & \text{if } 0.5 < a \le b \\ 0.5, & \text{if } a \le 0.5 \le b \\ b, & \text{if } a \le b < 0.5. \end{cases} \]
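The three cases of Example F.1 collapse to a single operation: clamp 0.5 into the interval [a, b]. A minimal sketch (the function name is an assumption):

```python
def assign_measure(a, b):
    """Value in [a, b] closest to 0.5, per the maximum uncertainty principle."""
    if not 0.0 <= a <= b <= 1.0:
        raise ValueError("need 0 <= a <= b <= 1")
    return max(a, min(b, 0.5))

assert assign_measure(0.6, 0.9) == 0.6   # case 0.5 < a <= b
assert assign_measure(0.2, 0.9) == 0.5   # case a <= 0.5 <= b
assert assign_measure(0.1, 0.3) == 0.3   # case a <= b < 0.5
```

Writing the rule as a clamp makes clear that the assigned value is unique: the interval [a, b] is convex, so it has exactly one point nearest to 0.5.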
Appendix G
Uncertainty Relations
Probability theory is a branch of mathematics based on the normality, nonnegativity, and countable additivity axioms. In fact, those three axioms may be replaced with four axioms: normality, monotonicity, self-duality, and countable additivity. Thus all of probability, credibility, chance, and uncertain measures meet the normality, monotonicity and self-duality axioms. The essential difference among those measures is how they determine the measure of a union. For any mutually disjoint events {Ai} with sup_i π{Ai} < 0.5: if π satisfies the countable additivity axiom, i.e.,
\[ \pi\Big\{\bigcup_{i=1}^{\infty} A_i\Big\} = \sum_{i=1}^{\infty} \pi\{A_i\}, \tag{G.1} \]
then π is a probability measure; if π satisfies the maximality axiom, i.e.,
\[ \pi\Big\{\bigcup_{i=1}^{\infty} A_i\Big\} = \sup_{1 \le i < \infty} \pi\{A_i\}, \tag{G.2} \]
then π is a credibility measure; and if π satisfies only the countable subadditivity axiom, i.e.,
\[ \pi\Big\{\bigcup_{i=1}^{\infty} A_i\Big\} \le \sum_{i=1}^{\infty} \pi\{A_i\}, \tag{G.3} \]
then π is an uncertain measure.
[Figure: the probability model, credibility model, and hybrid model as special cases of the uncertainty model.]
Bibliography
[1] Alefeld G, Herzberger J, Introduction to Interval Computations, Academic
Press, New York, 1983.
[2] Atanassov KT, Intuitionistic Fuzzy Sets: Theory and Applications, Physica-Verlag, Heidelberg, 1999.
[3] Bamber D, Goodman IR, Nguyen HT, Extension of the concept of propositional deduction from classical logic to probability: An overview of
probability-selection approaches, Information Sciences, Vol.131, 195-250,
2001.
[4] Bedford T, and Cooke MR, Probabilistic Risk Analysis, Cambridge University
Press, 2001.
[5] Bandemer H, and Nather W, Fuzzy Data Analysis, Kluwer, Dordrecht, 1992.
[6] Bellman RE, and Zadeh LA, Decision making in a fuzzy environment, Management Science, Vol.17, 141-164, 1970.
[7] Bhandari D, and Pal NR, Some new information measures of fuzzy sets,
Information Sciences, Vol.67, 209-228, 1993.
[8] Black F, and Scholes M, The pricing of option and corporate liabilities, Journal of Political Economy, Vol.81, 637-654, 1973.
[9] Bouchon-Meunier B, Mesiar R, Ralescu DA, Linear non-additive setfunctions, International Journal of General Systems, Vol.33, No.1, 89-98,
2004.
[10] Buckley JJ, Possibility and necessity in optimization, Fuzzy Sets and Systems,
Vol.25, 1-13, 1988.
[11] Buckley JJ, Stochastic versus possibilistic programming, Fuzzy Sets and Systems, Vol.34, 173-177, 1990.
[12] Cadenas JM, and Verdegay JL, Using fuzzy numbers in linear programming,
IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol.27, No.6,
1016-1022, 1997.
[13] Campos L, and González A, A subjective approach for ranking fuzzy numbers, Fuzzy Sets and Systems, Vol.29, 145-153, 1989.
[14] Campos L, and Verdegay JL, Linear programming problems and ranking of
fuzzy numbers, Fuzzy Sets and Systems, Vol.32, 1-11, 1989.
[15] Campos FA, Villar J, and Jimenez M, Robust solutions using fuzzy chance
constraints, Engineering Optimization, Vol.38, No.6, 627-645, 2006.
[34] Dubois D, and Prade H, The mean value of a fuzzy number, Fuzzy Sets and
Systems, Vol.24, 279-300, 1987.
[35] Dubois D, and Prade H, Twofold fuzzy sets and rough sets some issues in
knowledge representation, Fuzzy Sets and Systems, Vol.23, 3-18, 1987.
[36] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[37] Dubois D, and Prade H, Rough fuzzy sets and fuzzy rough sets, International
Journal of General Systems, Vol.17, 191-200, 1990.
[38] Dunyak J, Saad IW, and Wunsch D, A theory of independent fuzzy probability for system reliability, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
286-294, 1999.
[39] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[40] Feng X, and Liu YK, Measurability criteria for fuzzy random vectors, Fuzzy
Optimization and Decision Making, Vol.5, No.3, 245-253, 2006.
[41] Feng Y, Yang LX, A two-objective fuzzy k-cardinality assignment problem,
Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-244,
2006.
[42] Fung RYK, Chen YZ, Chen L, A fuzzy expected value-based goal programming
model for product planning using quality function deployment, Engineering
Optimization, Vol.37, No.6, 633-647, 2005.
[43] Gao J, and Liu B, New primitive chance measures of fuzzy random event,
International Journal of Fuzzy Systems, Vol.3, No.4, 527-531, 2001.
[44] Gao J, Liu B, and Gen M, A hybrid intelligent algorithm for stochastic multilevel programming, IEEJ Transactions on Electronics, Information and Systems, Vol.124-C, No.10, 1991-1998, 2004.
[45] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[46] Gao J, and Lu M, Fuzzy quadratic minimum spanning tree problem, Applied
Mathematics and Computation, Vol.164, No.3, 773-788, 2005.
[47] Gao J, and Feng X, A hybrid intelligent algorithm for fuzzy dynamic inventory
problem, Journal of Information and Computing Science, Vol.1, No.4, 235-244, 2006.
[48] Gao J, Credibilistic game with fuzzy information, Journal of Uncertain Systems, Vol.1, No.1, 74-80, 2007.
[49] Gao J, and Zhou J, Uncertain Process Online, http://orsc.edu.cn/process.
[50] Gao J, Credibilistic option pricing: a new model, http://orsc.edu.cn/process/
071124.pdf.
[51] Gao X, Option pricing formula for hybrid stock model with randomness and
fuzziness, http://orsc.edu.cn/process/080112.pdf.
[52] Gil MA, Lopez-Diaz M, Ralescu DA, Overview on the development of fuzzy
random variables, Fuzzy Sets and Systems, Vol.157, No.19, 2546-2557, 2006.
[53] González A, A study of the ranking function approach through mean values,
Fuzzy Sets and Systems, Vol.35, 29-41, 1990.
[54] Guan J, and Bell DA, Evidence Theory and its Applications, North-Holland,
Amsterdam, 1991.
[55] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[56] Guo R, Guo D, Thiart C, and Li X, Bivariate credibility-copulas, Journal of
Uncertain Systems, Vol.1, No.4, 303-314, 2007.
[57] Hansen E, Global Optimization Using Interval Analysis, Marcel Dekker, New
York, 1992.
[58] He Y, and Xu J, A class of random fuzzy programming model and its application to vehicle routing problem, World Journal of Modelling and Simulation,
Vol.1, No.1, 3-11, 2005.
[59] Heilpern S, The expected value of a fuzzy number, Fuzzy Sets and Systems,
Vol.47, 81-86, 1992.
[60] Higashi M, and Klir GJ, On measures of fuzziness and fuzzy complements,
International Journal of General Systems, Vol.8, 169-180, 1982.
[61] Higashi M, and Klir GJ, Measures of uncertainty and information based on
possibility distributions, International Journal of General Systems, Vol.9, 43-58, 1983.
[62] Hisdal E, Conditional possibilities independence and noninteraction, Fuzzy
Sets and Systems, Vol.1, 283-297, 1978.
[63] Hisdal E, Logical Structures for Representation of Knowledge and Uncertainty, Physica-Verlag, Heidelberg, 1998.
[64] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[65] Inuiguchi M, and Ramík J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic programming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[66] Ishibuchi H, and Tanaka H, Multiobjective programming in optimization of
the interval objective function, European Journal of Operational Research,
Vol.48, 219-225, 1990.
[67] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[68] Ji XY, and Shao Z, Model and algorithm for bilevel Newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[69] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[70] John RI, Type 2 fuzzy sets: An appraisal of theory and applications, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.6,
No.6, 563-576, 1998.
[71] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main developments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[72] Kacprzyk J, Multistage Fuzzy Control, Wiley, Chichester, 1997.
[73] Karnik NN, Mendel JM, and Liang Q, Type-2 fuzzy logic systems, IEEE
Transactions on Fuzzy Systems, Vol.7, No.6, 643-658, 1999.
[74] Karnik NN, Mendel JM, and Liang Q, Centroid of a type-2 fuzzy set, Information Sciences, Vol.132, 195-220, 2001.
[75] Kaufmann A, Introduction to the Theory of Fuzzy Subsets, Vol.I, Academic
Press, New York, 1975.
[76] Kaufmann A, and Gupta MM, Introduction to Fuzzy Arithmetic: Theory and
Applications, Van Nostrand Reinhold, New York, 1985.
[77] Kaufmann A, and Gupta MM, Fuzzy Mathematical Models in Engineering
and Management Science, 2nd ed, North-Holland, Amsterdam, 1991.
[78] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[79] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of randomness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[80] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Technical Report, 2005.
[81] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-182, 1986.
[82] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[83] Klir GJ, and Yuan B, Fuzzy Sets and Fuzzy Logic: Theory and Applications,
Prentice-Hall, New Jersey, 1995.
[84] Knopfmacher J, On measures of fuzziness, Journal of Mathematical Analysis
and Applications, Vol.49, 529-534, 1975.
[85] Kosko B, Fuzzy entropy and conditioning, Information Sciences, Vol.40, 165-174, 1986.
[86] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[87] Kwakernaak H, Fuzzy random variables I: Definitions and theorems, Information Sciences, Vol.15, 1-29, 1978.
[88] Kwakernaak H, Fuzzy random variables II: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[89] Lai YJ, and Hwang CL, Fuzzy Multiple Objective Decision Making: Methods
and Applications, Springer-Verlag, New York, 1994.
[90] Lee ES, Fuzzy multiple level programming, Applied Mathematics and Computation, Vol.120, 79-90, 2001.
[91] Lee KH, First Course on Fuzzy Theory and Applications, Springer-Verlag,
Berlin, 2005.
[92] Lertworasirkul S, Fang SC, Joines JA, and Nuttle HLW, Fuzzy data envelopment analysis (DEA): a possibility approach, Fuzzy Sets and Systems,
Vol.139, No.2, 379-394, 2003.
[93] Li HL, and Yu CS, A fuzzy multiobjective program with quasiconcave membership functions and fuzzy coefficients, Fuzzy Sets and Systems, Vol.109,
No.1, 59-81, 2000.
[94] Li J, Xu J, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[95] Li P, and Liu B, Entropy of credibility distributions for fuzzy variables, IEEE
Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[96] Li SM, Ogura Y, and Nguyen HT, Gaussian processes and martingales for
fuzzy valued random variables with continuous parameter, Information Sciences, Vol.133, 7-21, 2001.
[97] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[98] Li SQ, Zhao RQ, and Tang WS, Fuzzy random homogeneous Poisson process and compound Poisson process, Journal of Information and Computing
Science, Vol.1, No.4, 207-224, 2006.
[99] Li X, and Liu B, The independence of fuzzy variables with applications,
International Journal of Natural Sciences & Technology, Vol.1, No.1, 95-100,
2006.
[100] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[101] Li X, and Liu B, New independence definition of fuzzy random variable and
random fuzzy variable, World Journal of Modelling and Simulation, Vol.2,
No.5, 338-342, 2006.
[102] Li X, and Liu B, Maximum entropy principle for fuzzy variables, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[103] Li X, and Liu B, Chance measure for hybrid events with fuzziness and randomness, Soft Computing, to be published.
[104] Li X, and Liu B, Moment estimation theorems for various types of uncertain
variable, Technical Report, 2007.
[105] Li X, and Liu B, On distance between fuzzy variables, Technical Report,
2007.
[106] Li X, and Liu B, Conditional chance measure for hybrid events, Technical
Report, 2007.
[107] Li X, and Liu B, Cross-entropy and generalized entropy for fuzzy variables,
Technical Report, 2007.
[108] Li X, Expected value and variance of geometric Liu process, http://orsc.
edu.cn/process/071123.pdf.
[109] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[110] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[111] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[112] Liu B, and Iwamura K, Modelling stochastic decision systems using
dependent-chance programming, European Journal of Operational Research,
Vol.101, No.1, 193-203, 1997.
[113] Liu B, and Iwamura K, Chance constrained programming with fuzzy parameters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[114] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[115] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[116] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transactions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[117] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[118] Liu B, Uncertain Programming, Wiley, New York, 1999.
[119] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[120] Liu B, Uncertain programming: A unifying optimization theory in various uncertain environments, Applied Mathematics and Computation, Vol.120, Nos.1-3, 227-234, 2001.
[121] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[122] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[123] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[124] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Heidelberg, 2002.
[125] Liu B, Toward fuzzy optimization without mathematical ambiguity, Fuzzy
Optimization and Decision Making, Vol.1, No.1, 43-63, 2002.
[126] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[127] Liu B, Random fuzzy dependent-chance programming and its hybrid intelligent algorithm, Information Sciences, Vol.141, Nos.3-4, 259-271, 2002.
[128] Liu B, Inequalities and convergence concepts of fuzzy and rough variables,
Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.
[129] Liu B, Uncertainty Theory, Springer-Verlag, Berlin, 2004.
[130] Liu B, A survey of credibility theory, Fuzzy Optimization and Decision Making, Vol.5, No.4, 387-408, 2006.
[131] Liu B, A survey of entropy of fuzzy variables, Journal of Uncertain Systems,
Vol.1, No.1, 4-13, 2007.
[132] Liu B, Uncertainty Theory, 2nd ed., Springer-Verlag, Berlin, 2007.
[133] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Uncertain Systems, Vol.2, No.1, 3-16, 2008.
[134] Liu B, Theory and Practice of Uncertain Programming, 2nd ed., http://orsc.
edu.cn/liu/up.pdf.
[135] Liu LZ, Li YZ, The fuzzy quadratic assignment problem with penalty:
New models and genetic algorithm, Applied Mathematics and Computation,
Vol.174, No.2, 1229-1244, 2006.
[136] Liu XC, Entropy, distance measure and similarity measure of fuzzy sets and
their relations, Fuzzy Sets and Systems, Vol.52, 305-318, 1992.
[137] Liu XW, Measuring the satisfaction of constraints in fuzzy linear programming, Fuzzy Sets and Systems, Vol.122, No.2, 263-275, 2001.
[138] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[139] Liu YK, and Liu B, Fuzzy random programming problems with multiple
criteria, Asian Information-Science-Life, Vol.1, No.3, 249-256, 2002.
[140] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value operator, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[141] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[142] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[143] Liu YK, and Liu B, On minimum-risk problems in fuzzy random decision
systems, Computers & Operations Research, Vol.32, No.2, 257-283, 2005.
[144] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[145] Liu YK, Fuzzy programming with recourse, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[146] Liu YK, Convergent results about the use of fuzzy simulation in fuzzy optimization problems, IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 295-304, 2006.
[147] Liu YK, and Wang SM, On the properties of credibility critical value functions, Journal of Information and Computing Science, Vol.1, No.4, 195-206,
2006.
[148] Liu YK, and Gao J, The independence of fuzzy variables in credibility theory and its applications, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[149] Loo SG, Measures of fuzziness, Cybernetica, Vol.20, 201-210, 1977.
[150] Lopez-Diaz M, Ralescu DA, Tools for fuzzy random variables: Embeddings
and measurabilities, Computational Statistics & Data Analysis, Vol.51, No.1,
109-114, 2006.
[151] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-133, 2003.
[152] Lucas C, and Araabi BN, Generalization of the Dempster-Shafer Theory:
A fuzzy-valued measure, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
255-270, 1999.
[153] Luhandjula MK, Fuzziness and randomness in an optimization framework,
Fuzzy Sets and Systems, Vol.77, 291-297, 1996.
[154] Luhandjula MK, and Gupta MM, On fuzzy stochastic optimization, Fuzzy
Sets and Systems, Vol.81, 47-55, 1996.
[155] Luhandjula MK, Optimisation under hybrid uncertainty, Fuzzy Sets and Systems, Vol.146, No.2, 187-203, 2004.
[156] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[157] Maiti MK, Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[158] Maleki HR, Tata M, and Mashinchi M, Linear programming with fuzzy variables, Fuzzy Sets and Systems, Vol.109, No.1, 21-33, 2000.
[159] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[160] Mizumoto M, and Tanaka K, Some properties of fuzzy sets of type 2, Information and Control, Vol.31, 312-340, 1976.
[161] Mohammed W, Chance constrained fuzzy goal programming with right-hand
side uniform random variable coefficients, Fuzzy Sets and Systems, Vol.109,
No.1, 107-110, 2000.
[162] Molchanov IS, Limit Theorems for Unions of Random Closed Sets, Springer-Verlag, Berlin, 1993.
[163] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[164] Negoita CV, and Ralescu D, On fuzzy optimization, Kybernetes, Vol.6, 193-195, 1977.
[165] Negoita CV, and Ralescu D, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[166] Neumaier A, Interval Methods for Systems of Equations, Cambridge University Press, New York, 1990.
[167] Nguyen HT, On conditional possibility distributions, Fuzzy Sets and Systems,
Vol.1, 299-309, 1978.
[168] Nguyen HT, Fuzzy sets and probability, Fuzzy Sets and Systems, Vol.90, 129-132, 1997.
[169] Nguyen HT, Kreinovich V, Zuo Q, Interval-valued degrees of belief: Applications of interval computations to expert systems and intelligent control,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.5, 317-358, 1997.
[170] Nguyen HT, Nguyen NT, Wang TH, On capacity functionals in interval probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.5, 359-377, 1997.
[171] Nguyen HT, Kreinovich V, Shekhter V, On the possibility of using complex
values in fuzzy logic for representing inconsistencies, International Journal of
Intelligent Systems, Vol.13, 683-714, 1998.
[172] Nguyen HT, Kreinovich V, Wu BL, Fuzzy/probability similar to fractal/smooth, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.7, 363-370, 1999.
[173] Nguyen HT, Nguyen NT, On Chu spaces in uncertainty analysis, International Journal of Intelligent Systems, Vol.15, 425-440, 2000.
[174] Nguyen HT, Some mathematical structures for computational information,
Information Sciences, Vol.128, 67-89, 2000.
[175] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[176] Øksendal B, Stochastic Differential Equations, 6th ed., Springer-Verlag,
Berlin, 2005.
[177] Pal NR, and Pal SK, Object background segmentation using a new definition
of entropy, IEE Proc. E, Vol.136, 284-295, 1989.
[178] Pal NR, and Pal SK, Higher order fuzzy entropy and hybrid entropy of a set,
Information Sciences, Vol.61, 211-231, 1992.
[179] Pal NR, Bezdek JC, and Hemasinha R, Uncertainty measures for evidential reasoning I: a review, International Journal of Approximate Reasoning,
Vol.7, 165-183, 1992.
[180] Pal NR, Bezdek JC, and Hemasinha R, Uncertainty measures for evidential
reasoning II: a new measure, International Journal of Approximate Reasoning, Vol.8, 1-16, 1993.
[181] Pal NR, and Bezdek JC, Measuring fuzzy uncertainty, IEEE Transactions on
Fuzzy Systems, Vol.2, 107-118, 1994.
[182] Pawlak Z, Rough sets, International Journal of Information and Computer
Sciences, Vol.11, No.5, 341-356, 1982.
[183] Pawlak Z, Rough sets and fuzzy sets, Fuzzy Sets and Systems, Vol.17, 99-102,
1985.
[184] Pawlak Z, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer,
Dordrecht, 1991.
[185] Pedrycz W, Optimization schemes for decomposition of fuzzy relations, Fuzzy
Sets and Systems, Vol.100, 301-325, 1998.
[186] Peng J, and Liu B, Stochastic goal programming models for parallel machine
scheduling problems, Asian Information-Science-Life, Vol.1, No.3, 257-266,
2002.
[187] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[188] Peng J, and Liu B, A framework of birandom theory and optimization methods, Information: An International Journal, Vol.9, No.4, 629-640, 2006.
[189] Peng J, and Zhao XD, Some theoretical aspects of the critical values of birandom variable, Journal of Information and Computing Science, Vol.1, No.4,
225-234, 2006.
[190] Peng J, and Liu B, Birandom variables and birandom programming, Computers & Industrial Engineering, Vol.53, No.3, 433-453, 2007.
[191] Peng J, Credibilistic stopping problems for fuzzy stock market, http://orsc.
edu.cn/process/071125.pdf.
[192] Puri ML, and Ralescu D, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications, Vol.91, 552-558, 1983.
[193] Puri ML, and Ralescu D, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[194] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[195] Qin ZF, and Liu B, On some special hybrid variables, Technical Report, 2007.
[196] Qin ZF, On analytic functions of complex Liu process, http://orsc.edu.cn/
process/071026.pdf.
[197] Raj PA, and Kumar DN, Ranking alternatives with fuzzy weights using maximizing set and minimizing set, Fuzzy Sets and Systems, Vol.105, 365-375,
1999.
[198] Ralescu DA, Sugeno M, Fuzzy integral representation, Fuzzy Sets and Systems, Vol.84, No.2, 127-133, 1996.
[199] Ralescu AL, Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets and
Systems, Vol.86, No.3, 321-330, 1997.
[200] Ramer A, Conditional possibility measures, International Journal of Cybernetics and Systems, Vol.20, 233-247, 1989.
[201] Ramík J, Extension principle in fuzzy optimization, Fuzzy Sets and Systems,
Vol.19, 29-35, 1986.
[202] Ramík J, and Rommelfanger H, Fuzzy mathematical programming based on
some inequality relations, Fuzzy Sets and Systems, Vol.81, 77-88, 1996.
[203] Saade JJ, Maximization of a function over a fuzzy domain, Fuzzy Sets and
Systems, Vol.62, 55-70, 1994.
[222] Wang Z, and Klir GJ, Fuzzy Measure Theory, Plenum Press, New York, 1992.
[223] Yager RR, On measures of fuzziness and negation, Part I: Membership in the
unit interval, International Journal of General Systems, Vol.5, 221-229, 1979.
[224] Yager RR, On measures of fuzziness and negation, Part II: Lattices, Information and Control, Vol.44, 236-260, 1980.
[225] Yager RR, A procedure for ordering fuzzy subsets of the unit interval, Information Sciences, Vol.24, 143-161, 1981.
[226] Yager RR, Generalized probabilities of fuzzy events from fuzzy belief structures, Information Sciences, Vol.28, 45-62, 1982.
[227] Yager RR, Measuring tranquility and anxiety in decision making: an application of fuzzy sets, International Journal of General Systems, Vol.8, 139-144,
1982.
[228] Yager RR, Entropy and specificity in a mathematical theory of evidence,
International Journal of General Systems, Vol.9, 249-260, 1983.
[229] Yager RR, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Transactions on Systems, Man and Cybernetics,
Vol.18, 183-190, 1988.
[230] Yager RR, Decision making under Dempster-Shafer uncertainties, International Journal of General Systems, Vol.20, 233-245, 1992.
[231] Yager RR, On the specificity of a possibility distribution, Fuzzy Sets and
Systems, Vol.50, 279-292, 1992.
[232] Yager RR, Measures of entropy and fuzziness related to aggregation operators,
Information Sciences, Vol.82, 147-166, 1995.
[233] Yager RR, Modeling uncertainty using partial information, Information Sciences, Vol.121, 271-294, 1999.
[234] Yager RR, Decision making with fuzzy probability assessments, IEEE Transactions on Fuzzy Systems, Vol.7, 462-466, 1999.
[235] Yager RR, On the entropy of fuzzy measures, IEEE Transactions on Fuzzy
Systems, Vol.8, 453-461, 2000.
[236] Yager RR, On the evaluation of uncertain courses of action, Fuzzy Optimization and Decision Making, Vol.1, 13-41, 2002.
[237] Yang L, and Liu B, On inequalities and critical values of fuzzy random variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[238] Yang L, and Liu B, A sufficient and necessary condition for chance distribution of birandom variable, Information: An International Journal, Vol.9,
No.1, 33-36, 2006.
[239] Yang L, and Liu B, On continuity theorem for characteristic function of fuzzy
variable, Journal of Intelligent and Fuzzy Systems, Vol.17, No.3, 325-332,
2006.
[240] Yang L, and Liu B, Chance distribution of fuzzy random variable and laws
of large numbers, Technical Report, 2004.
[241] Yang N, Wen FS, A chance constrained programming approach to transmission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[242] Yazenin AV, On the problem of possibilistic optimization, Fuzzy Sets and
Systems, Vol.81, 133-140, 1996.
[243] You C, Multidimensional Liu process, differential and integral, Proceedings
of the First Intelligent Computing Conference, Lushan, October 10-13, 2007,
pp.153-158. (Also available at http://orsc.edu.cn/process/071015.pdf)
[244] You C, Some extensions of Wiener-Liu process and Ito-Liu integral, http://
orsc.edu.cn/process/071019.pdf.
[245] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[246] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[247] Zadeh LA, The concept of a linguistic variable and its application to approximate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[248] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[249] Zadeh LA, A theory of approximate reasoning, In: J Hayes, D Michie and
RM Thrall, eds, Mathematical Frontiers of the Social and Policy Sciences,
Westview Press, Boulder, Colorado, 69-129, 1979.
[250] Zhao R, and Liu B, Stochastic programming models for general redundancy
optimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-191, 2003.
[251] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[252] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[253] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[254] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[255] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[256] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[257] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[258] Zhou J, and Liu B, Analysis and algorithms of bifuzzy systems, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.12, No.3,
357-376, 2004.
List of Frequently Used Symbols

E	expected value
V	variance
H	entropy
(Γ, L, M)	uncertainty space
(Ω, A, Pr)	probability space
(Θ, P, Cr)	credibility space
(Θ, P, Cr) × (Ω, A, Pr)	chance space
ai ↑ a	increasing sequence of numbers with limit a
ai ↓ a	decreasing sequence of numbers with limit a
Ai ↑ A	increasing sequence of sets with limit A
Ai ↓ A	decreasing sequence of sets with limit A
Index
algebra, 213
Borel algebra, 214
Borel set, 214
Brownian motion, 44
canonical process, 207
Cantor function, 220
Cantor set, 214
chance asymptotic theorem, 137
chance density function, 145
chance distribution, 143
chance measure, 131
chance semicontinuity law, 136
chance space, 129
chance subadditivity theorem, 134
Chebyshev inequality, 33, 108, 158
conditional chance, 165
conditional credibility, 114
conditional membership function, 116
conditional probability, 40
conditional uncertainty, 202
convergence almost surely, 35, 110
convergence in chance, 160
convergence in credibility, 110
convergence in distribution, 36, 111
convergence in mean, 36, 110
convergence in probability, 35
countable additivity axiom, 1
countable subadditivity axiom, 178
credibility asymptotic theorem, 57
credibility density function, 72
credibility distribution, 69
credibility extension theorem, 58
credibility inversion theorem, 65
credibility measure, 53
credibility semicontinuity law, 57
credibility space, 60
credibility subadditivity theorem, 55
critical value, 26, 96, 153, 194
distance, 32, 106, 157, 196
maximum entropy principle, 30, 103
maximum uncertainty principle, 225
measurable function, 218
membership function, 65
Minkowski inequality, 34, 109, 159
moment, 25, 95, 151, 191
monotone convergence theorem, 222
monotone class theorem, 214
monotonicity axiom, 53, 178
nonnegativity axiom, 1
normal distribution, 10
normal membership function, 93
normality axiom, 1, 53, 178
optimistic value, see critical value
pessimistic value, see critical value
power set, 213
probability continuity theorem, 2
probability density function, 9
probability distribution, 7
probability measure, 3
probability space, 3
product credibility axiom, 61
product credibility space, 62
product credibility theorem, 61
product probability space, 4
product probability theorem, 4
random fuzzy variable, 129
random sequence, 35
random variable, 4
random vector, 5
renewal process, 43, 119, 169, 206
self-duality axiom, 53, 178
set function, 1
σ-algebra, 213
simple function, 218
singular function, 220
step function, 218
stochastic differential equation, 50
stochastic integral equation, 50
stochastic process, 43
stock model, 124, 171, 208
trapezoidal fuzzy variable, 68
triangular fuzzy variable, 68
uncertain calculus, 209
uncertain differential equation, 211
uncertain integral equation, 211
uncertain measure, 178
uncertain process, 205
uncertain sequence, 200
uncertain variable, 181
uncertain vector, 182
uncertainty asymptotic theorem, 180
uncertainty density function, 185
uncertainty distribution, 185
uncertainty space, 181
uniform distribution, 10
variance, 23, 92, 149, 190
Wiener process, 44