
IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 26, NO. 6, DECEMBER 2018 3403

Dynamic Fusion of Multisource Interval-Valued Data by Fuzzy Granulation

Yanyong Huang, Tianrui Li, Senior Member, IEEE, Chuan Luo, Hamido Fujita, Senior Member, IEEE, and Shi-Jinn Horng

Abstract—Information fusion is capable of fusing and transforming multiple data derived from different sources to provide a unified representation for centralized knowledge mining that facilitates effective decision-making, classification, and prediction. Multisource interval-valued data, which characterize uncertain phenomena in the form of intervals across different sources, are among the most common symbolic data and widely exist in many real-world applications. This paper concentrates on efficiently fusing multisource interval-valued data under the dynamic updating of data sources, involving the addition of new sources and the deletion of obsolete sources. We propose a novel data fusion method based on fuzzy information granulation, which translates multisource interval-valued data into trapezoidal fuzzy granules. Building on this fusion capability, we develop incremental mechanisms and algorithms for fusing multisource interval-valued data under a dynamic variation of data sources. Finally, extensive experiments are carried out to verify the effectiveness of the proposed algorithms in comparison with six different fusion algorithms. Experimental results show that the proposed fusion method outperforms other related approaches. Furthermore, the proposed incremental fusion algorithms reduce the computing overhead in comparison with the static fusion algorithm when adding and deleting multiple data sources.

Index Terms—Fuzzy granulation, incremental fusion, information fusion, multisource interval-valued data.

Manuscript received April 26, 2017; revised October 31, 2017 and January 26, 2018; accepted April 26, 2018. Date of publication May 2, 2018; date of current version November 29, 2018. This work was supported in part by the National Science Foundation of China under Grant 61573292, Grant 61572406, Grant 61602327, and Grant 61603313; in part by the Fundamental Research Funds for the Central Universities under Grant 2682015QM02; in part by the China Postdoctoral Science Foundation under Grant 2016M602688; and in part by the MOST under Grant 106-2221-E-011-149-MY2 and Grant 106-3114-E-011-008. (Corresponding author: Tianrui Li.)

Y. Huang and T. Li are with the School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China (e-mail: yyhswjtu@163.com; trli@swjtu.edu.cn).

C. Luo is with the College of Computer Science, Sichuan University, Chengdu 610065, China (e-mail: cluo@scu.edu.cn).

Hamido Fujita is with the Faculty of Software and Information Science, Iwate Prefectural University 020-0693, Iwate, Japan (e-mail: HFujita-799@acm.org).

S.-J. Horng is with the School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China, and also with the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan (e-mail: horngsj@yahoo.com.tw).

This paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the author.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TFUZZ.2018.2832608

I. INTRODUCTION

WITH the rapid development of society and the economy, the data collected from real applications are characterized by uncertainty, a multisource nature, and dynamicity. Interval-valued data are commonly employed to characterize imprecise and ambiguous phenomena, such as the change of temperature [10], the fluctuation of stock prices [38], and the range of blood pressure [12]. Furthermore, interval-valued data may be obtained from multiple different locations or sources. For example, meteorological data from different weather stations are collected to forecast the weather more accurately. Considering that multisource interval-valued data evolve over time, it is a challenge to fuse these data effectively.

Information fusion technology can combine and transform information from multiple different sources to construct a unified representation, which helps reduce the ambiguity and uncertainty of data and improve the quality of information [9]. Information fusion methods have been applied to deal with multisource data [11] and interval-valued data [8]. Xu et al. presented an information fusion approach based on information entropy to fuse multisource fuzzy incomplete data [42]. Yager proposed a monotonic set measure for the fusion of multisource hybrid data, including different types of data [45]. By utilizing Dempster-Shafer evidence theory, Xu et al. presented a fault diagnosis method that fuses different diagnosis evidence in the form of interval-valued data [41]. However, these approaches cannot be directly applied to fuse multisource interval-valued data. This paper proposes a fusion method for multisource interval-valued data by utilizing the granular computing (GrC) methodology.

GrC is a useful tool for information processing, machine learning, knowledge discovery, etc. [1], [30], [52]. There are two basic issues in GrC, namely, the generation of information granules and computation with granules [26], [34], [49]. Many related works on GrC have been studied in the recent three decades [28], [40], [43]. Pedrycz et al. presented an information granulation method based on fuzzy set theory for constructing information granules in the analysis of temporal data [27]. Yao et al. proposed a hierarchical granulation structure in terms of rough set theory and investigated the corresponding rough approximation structures [47]. Owing to the simplicity of information granule representation and the convenience of granular computation, several information fusion methods incorporating GrC have been investigated in multisource environments. Yager

1063-6706 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


presented a fusion framework for dealing with the conflict of different data sources by utilizing a granular method [46]. Huang et al. combined the theory of three-way decisions and formal concept analysis for fusing local granular concepts [14]. Qian et al. proposed a multigranulation fusion method based on the pessimistic rough set model [31]. Lin et al. discussed the relation between multigranulation rough set theory and evidence theory and further proposed a granulation fusion method combining these two theories [21]. Based on internal-confidence and external-confidence degrees, Xu et al. presented a source selection algorithm for multisource data and fused the selected information sources by employing the method of triangular fuzzy information granules [44]. However, these methods are suited to multisource single-valued data; they cannot be applied directly to fuse multisource interval-valued data. In this paper, we present a new information fusion approach that merges multiple interval-valued data coming from different sources to construct trapezoidal fuzzy granules. First, a novel dominant matrix is proposed and its properties are investigated. Then, we compute and sort the row sums of the dominant matrix to determine the core of a trapezoidal fuzzy granule. The optimal support of a fuzzy granule is computed by the derivation of two objective functions. Finally, uncertainty measures of the fusion results based on rough set theory are discussed.

Another motivation of our study is that the number of sources of multisource interval-valued data will change continuously as new sources are inserted or obsolete sources are deleted in a dynamic multisource environment. For example, to improve the precision of weather prediction, more meteorological sensors are installed at different weather monitoring stations. Moreover, aged meteorological sensors or unreasonable monitoring locations will be removed due to their ineffectiveness. Since the variation of sources results in a change of the fuzzy granule structures, traditional static fusion methods need to recompute the whole process of information fusion, which is too expensive or even infeasible for large datasets. To reduce the cost of computing, incremental fusion methods are developed by incorporating the newly updated data and the accumulated information. Keong et al. presented an incremental method to fuse unstructured texts when adding a new text [6]. Kotwal et al. proposed a fusion rule with consistency for incrementally evaluating the fusion performance of hyperspectral images [17]. Lhuilier dynamically integrated three fusion approaches for the combination of GPS and Structure-from-Motion based on bundle adjustment [19]. Sklarz et al. suggested a road map estimation approach by incrementally fusing a newly observed track curve [33]. Nonetheless, these approaches are not suitable for fusing multisource interval-valued data based on fuzzy granulation under the variation of sources. To improve fusion efficiency, this paper presents incremental fusion mechanisms for updating trapezoidal fuzzy granules by utilizing previously accumulated results of information granules when adding and deleting information sources. Instead of computing the dominant matrix from scratch, the row sums of the dominant matrix are first updated by incorporating the previously calculated results and the newly modified (added or discarded) data. Then, the core of a fuzzy granule is obtained based on the order of the row sums. Finally, the support of a fuzzy granule is modified in terms of the updated core and the previously acquired information.

This paper is organized as follows. In Section II, fundamental concepts of interval-valued information systems and fuzzy set theory are briefly reviewed. In Section III, an information fusion method based on fuzzy granulation is presented for fusing multisource interval-valued data. In Section IV, two incremental fusion mechanisms are presented for the addition and deletion of data sources, respectively, and the corresponding algorithms are developed and analyzed. Section V carries out a series of comparative experiments to verify the effectiveness of the proposed methods. The paper ends with conclusions and future work in Section VI.

II. PRELIMINARIES

In this section, basic concepts related to interval-valued information systems (IvIS) [18] and fuzzy set theory [51], [53] are briefly reviewed. For convenience, the symbols used in this paper are summarized in Table A in the supplementary material.

A. IvIS

Interval-valued data have been widely applied to characterize imprecise and uncertain situations in real applications. Interval-valued data are described in a tabular form, called an IvIS, defined as follows.

Definition II.1: [18] Let IvIS = {U, AT, V, f} be an IvIS, where U is a nonempty finite set of objects, called the universe; AT is a nonempty finite set of condition attributes; V = ∪_{a∈AT} V_a, where V_a is the domain of attribute a; and f : U × AT → V is an information function such that f(x, a) = [f^-(x, a), f^+(x, a)] ∈ V_a for every a ∈ AT, x ∈ U, where f^-(x, a) and f^+(x, a) denote the lower and upper bounds of the interval, respectively.

In many real applications, interval-valued data are usually collected from multiple sensors with the same function that are located at different sites, e.g., measurements of temperature and humidity across a country [32]. Interval information grouped from different sources in the form of IvIS is called a multisource interval-valued information system (MIvIS), which is formulated as follows.

Definition II.2: Let MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N} be an MIvIS, where
1) IvIS_i is the i-th IvIS in the MIvIS;
2) U is a nonempty finite set of objects;
3) AT_i is a nonempty finite set of attributes of the i-th IvIS_i;
4) V_{AT_i} is the domain of the attribute set AT_i in the i-th subsystem IvIS_i;
5) f_i : U × AT_i → V_i is an information function in the i-th subsystem IvIS_i.

Definition II.3: Let MIvDS = {MIvIS ∪ MD} be a multisource interval-valued decision system (MIvDS), where MIvIS is a multisource interval-valued information system and MD = {D, V_D, f_D}, where D is the decision attribute set,


TABLE I
MULTISOURCE INTERVAL-VALUED DECISION SYSTEM

V_D is the domain of D, and f_D : U × D → V_D is an information function.

Example II.1: Table I shows an MIvDS, MIvDS = {MIvIS ∪ MD}, where MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, 3}, in which meteorological data are collected from three different weather stations. In each subsystem IvIS_i, U = {x_j | j ∈ {1, 2, ..., 5}} denotes five different districts, A_i = {temperature, relative humidity} ≜ {a1, a2}, and D = {precipitation}. The precipitation is divided into three states, i.e., V_D = {no rain, little rain, heavy rain} ≜ {0, 1, 2}.

B. Fuzzy Set Theory and Trapezoidal Fuzzy Numbers

Fuzzy set theory has played a key role in data mining, machine learning, decision-making systems, and other domains [7], [13], [15], [16] since it was initially presented by Zadeh [51]. In fuzzy sets, a membership function characterizes the degree to which an object belongs to a class. This differs from classical set theory, in which elements either belong to a set or do not.

Let X denote a collection of objects. A fuzzy set A on X is a set of ordered pairs A = {<x, A(x)>}, where A(x) is the membership function, which satisfies A(x) ∈ [0, 1], ∀x ∈ X. F(X) is the family of fuzzy subsets of X. For any fuzzy subsets A, B ∈ F(X), the two basic operators, intersection ∩ and union ∪, are defined as follows, respectively:

(A ∩ B)(x) = A(x) ∧ B(x)
(A ∪ B)(x) = A(x) ∨ B(x)

where ∧ and ∨ stand for the minimum and maximum operations, respectively.

Because fuzzy numbers represent imprecise and uncertain information, they have been employed in many fields, such as medical diagnosis and power load forecasting [37], [39], [48]. In this paper, we focus on trapezoidal fuzzy numbers. A trapezoidal fuzzy number A is defined by a quadruple, viz., A = (a, b, c, d), whose membership function has the following form:

A(x) = (x − a)/(b − a),  if a ≤ x < b
       1,                if b ≤ x ≤ c
       (d − x)/(d − c),  if c ≤ x < d
       0,                otherwise.

Fig. 1. General trapezoidal fuzzy number.

Fig. 1 shows the general form of a trapezoidal fuzzy membership function. [a, d] and [b, c] are defined as the support and the core of the trapezoidal fuzzy number A, respectively; b − a is called the left width and d − c the right width. In particular, when b = c, A is commonly called a triangular fuzzy number.

Information granulation is a basic issue of GrC: it decomposes the whole data space into granules of vicinal or similar objects. After granulation, these information granules can be utilized for data mining, knowledge discovery, and other tasks [23], [29], [35]. Pedrycz et al. proposed a method of information granulation based on fuzzy sets, which aims to construct an information granule (a fuzzy set) with high legitimacy and specificity [27]. Following this idea, Yu et al. designed an optimization problem that granulates crisp data into a triangular fuzzy granule (a triangular fuzzy number) [50]. More formally, given a collection of data {x1, x2, ..., xn}, to construct a triangular fuzzy granule that is highly legitimate (viz., embraces sufficient experimental data) and specific enough (namely, keeps the support of the fuzzy set as compact as possible), the granulation problem is transformed into the optimization task max Σ_{i=1}^{n} A(x_i) / (d − a), where a and d are the left and right endpoints of the triangular fuzzy number A, respectively. The solution procedure is divided into two steps: 1) sort the given data {x1, x2, ..., xn} to determine the median point m0 and select it as the core of A; and 2) obtain the left and right endpoints a and d by solving the two optimization problems max_a Σ_{xi < m0} A(x_i)/(m0 − a) and max_d Σ_{xi > m0} A(x_i)/(d − m0).

Zhang et al. extended this method to deal with the granulation of interval-valued data [54].
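The trapezoidal membership function above is straightforward to compute. The following is a small Python sketch (the function name is ours; the quadruple is assumed to satisfy a < b ≤ c < d so that no denominator vanishes):

```python
def trapezoid(x, a, b, c, d):
    """Membership degree A(x) of the trapezoidal fuzzy number A = (a, b, c, d).

    Rises linearly on [a, b), equals 1 on the core [b, c], falls linearly on
    (c, d), and is 0 elsewhere.  Assumes a < b <= c < d.
    """
    if a <= x < b:
        return (x - a) / (b - a)
    if b <= x <= c:
        return 1.0
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0
```

For example, trapezoid(0.5, 0.0, 1.0, 2.0, 3.0) returns 0.5, halfway up the left slope, while any point inside the core [1, 2] returns 1.0.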


The interval-valued data can be granulated as a trapezoidal fuzzy granule in two steps: first, determine the core [b, c] of the trapezoidal fuzzy number A by sorting all interval-valued data {x1, x2, ..., xn}; then solve

max_a Σ_{xi < b} ∫_{x_i^-}^{x_i^+} A(x)dx / (b − a)  and  max_d Σ_{xi > c} ∫_{x_i^-}^{x_i^+} A(x)dx / (d − c)

to obtain the support [a, d] of the trapezoidal fuzzy set A, where x_i^- and x_i^+ denote the left and right endpoints of the interval-valued datum x_i, respectively.

III. INTERVAL-VALUED INFORMATION FUSION BASED ON FUZZY GRANULATION

Information fusion is useful to reduce the uncertainty and ambiguity of multisource data and improve the precision of discrimination. In this section, we present an information fusion method based on fuzzy granulation that combines information from multiple interval-valued datasets into a unified fuzzy information table, in which each element is a trapezoidal fuzzy number.

Let MIvDS = {MIvIS ∪ MD} be an MIvDS, where MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N} includes N subsystems. Following Pedrycz's and Zhang's methods [27], [54], for any x ∈ U we construct an information granule (a trapezoidal fuzzy number) A to fuse the N interval-valued data {f1(x, a), f2(x, a), ..., fN(x, a)} from the different IvIS under the condition attribute a ∈ AT. The core and support of the optimal trapezoidal fuzzy granule A can be determined by the approaches in [27], [54]. The core of the information granule A is a numeric representative of the data set {f1(x, a), f2(x, a), ..., fN(x, a)}; it can be estimated by the median of these data once the data set is ordered [27]. Since the interval-valued data may overlap, their order cannot be obtained directly. Hence, we first present a novel dominant degree to determine the order of the interval-valued data derived from different sources and thereby obtain the core of the trapezoidal fuzzy number. The support of the trapezoidal fuzzy number is then determined by solving two optimization problems according to the method presented by Zhang et al. [54].

Definition III.1: Given two interval-valued data x = [x^-, x^+] and y = [y^-, y^+], the degree that x dominates y is defined as follows:

D(x, y) = 0,                                            if x^+ ≤ y^-             (condition 1)
          (x^+ − y^-)^2 / (2(x^+ − x^-)(y^+ − y^-)),     if x^- ≤ y^- < x^+ ≤ y^+ (condition 2)
          (2x^+ − y^+ − y^-) / (2(x^+ − x^-)),           if x^- < y^- ≤ y^+ < x^+ (condition 3)
          (x^+ + x^- − 2y^-) / (2(y^+ − y^-)),           if y^- ≤ x^- ≤ x^+ ≤ y^+ (condition 4)
          1 − (y^+ − x^-)^2 / (2(x^+ − x^-)(y^+ − y^-)), if y^- < x^- ≤ y^+ < x^+ (condition 5)
          1,                                            if y^+ ≤ x^-.            (condition 6)   (1)

The dominant degree D(x, y) measures how much x is larger than y. Notice that, under the definition of the dominant degree in [54], D(x, y) = 0 and D(x, y) = 1 can occur under conditions 2 and 5 of (1), respectively; Fig. 2 shows the corresponding conditions. From the perspective of probability theory, the interval-valued datum x = [x^-, x^+] indicates that the object can take any value in [x^-, x^+] with uniform probability [2]. Hence, D(x, y) = 0 in Fig. 2(a) and D(x, y) = 1 in Fig. 2(b) are unreasonable. From the probabilistic perspective, since the objects x and y derive from different IvIS under the same attribute, we can assume that x and y are independent and follow the uniform distributions U[x^-, x^+] and U[y^-, y^+], respectively. Let the dominant degree be D(x, y) = ∬_Ω 1/((x^+ − x^-)(y^+ − y^-)) dx dy, where Ω = {(x, y) | x ≥ y}; this is the probability of x ≥ y. When condition 2 is satisfied, the integration evaluates to D(x, y) = ∬_Ω 1/((x^+ − x^-)(y^+ − y^-)) dx dy = (x^+ − y^-)^2 / (2(x^+ − x^-)(y^+ − y^-)), where Ω is the shaded area in Fig. 3. The other cases can be obtained in a similar way. Hence, the novel definition of the dominant degree reasonably describes the comparison of interval-valued data in probabilistic terms.

Fig. 2. Conditions 2 and 5 in Eq. (1).

Fig. 3. Integration region of condition 2.

Proposition III.1: 0 ≤ D(x, y) ≤ 1.

Proposition III.2: D(x, y) + D(y, x) = 1.

These properties will be employed to incrementally compute the dominant matrix in the next section.

Definition III.2: Given an MIvDS, MIvDS = {MIvIS ∪ MD}, where MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N}, the dominant matrix with respect to the object x ∈ U under the condition attribute a ∈ AT_i is defined as DM_a(x) = (D_{ij}^a(x))_{N×N}, where D_{ij}^a(x) = D(f_i(x, a), f_j(x, a)).

The dominant matrix DM_a(x) describes the dominant degrees among the group of interval-valued data {f1(x, a), f2(x, a), ..., fN(x, a)} from the N different IvIS. In DM_a(x), the sum of the i-th row, Σ_{j=1}^{N} D_{ij}^a(x), describes how much the object f_i(x, a) is larger than all other objects. Hence, the order of the interval-valued data {f1(x, a), f2(x, a), ..., fN(x, a)} can be determined by comparing the row sums of DM_a(x). Then the median interval, chosen from these interval-valued data according to the corresponding order, is determined as the core (denoted [m_a(x), n_a(x)]) of the trapezoidal fuzzy number A with respect to the object x under the attribute a.


TABLE II
INFORMATION FUSION RESULT OF MIVDS

After obtaining the core of A, the support of A can be determined by solving the following two optimization problems according to the method in [54]:

max_{l_a(x)} Q(l_a(x)) = Σ_{k=1}^{N} ∫_{f_k^-(x,a) ∧ m_a(x)}^{f_k^+(x,a) ∧ m_a(x)} A(x)dx / (m_a(x) − l_a(x))

and

max_{u_a(x)} Q(u_a(x)) = Σ_{k=1}^{N} ∫_{f_k^-(x,a) ∨ n_a(x)}^{f_k^+(x,a) ∨ n_a(x)} A(x)dx / (u_a(x) − n_a(x)),

where l_a(x) and u_a(x) denote the left and right endpoints of the support of A, respectively. Moreover, when A(x) is a trapezoidal membership function, the optimal solutions l_a(x) and u_a(x) can be obtained by differentiating Q(l_a(x)) and Q(u_a(x)), respectively:

l_a(x) = Σ_{k=1}^{N} [(f_k^+(x,a) ∧ m_a(x))^2 − (f_k^-(x,a) ∧ m_a(x))^2] / Σ_{k=1}^{N} [(f_k^+(x,a) ∧ m_a(x)) − (f_k^-(x,a) ∧ m_a(x))] − m_a(x)   (2)

u_a(x) = Σ_{k=1}^{N} [(f_k^+(x,a) ∨ n_a(x))^2 − (f_k^-(x,a) ∨ n_a(x))^2] / Σ_{k=1}^{N} [(f_k^+(x,a) ∨ n_a(x)) − (f_k^-(x,a) ∨ n_a(x))] − n_a(x)   (3)

Example III.1: (Continuation of Example II.1) To fuse the interval-valued data derived from the three different IvIS shown in Table I, we first compute, according to Definition III.2, the dominant matrix DM_{a1}(x1) = (D_{ij}^{a1}(x))_{3×3} with respect to the object x1 under the attribute a1:

DM_{a1}(x1) = ( 0       0.9652  0.3473 )
              ( 0.0348  0       0.0006 )
              ( 0.6527  0.9994  0      )

Then the core [m_{a1}(x1), n_{a1}(x1)] = [22.9, 31.2] and the support [l_{a1}(x1), u_{a1}(x1)] = [20.5, 32.7] are obtained from the order of the row sums of DM_{a1}(x1) and from Equations (2) and (3), respectively. Finally, the three interval-valued data for the object x1 under the attribute a1 are fused into the trapezoidal fuzzy number [20.5, 22.9, 31.2, 32.7]. The other objects are handled in the same way. Table II shows the fusion result of the proposed method: each attribute value is an information granule (a trapezoidal fuzzy number) in place of the original interval-valued data from the three different IvIS in the MIvDS.

In the following, we call the information fusion result of an MIvDS a trapezoidal fuzzy fusion information system (TFFIS), and we introduce the corresponding rough approximations and uncertainty measures.

Definition III.3: Let TFFIS = (U, AT = A ∪ D, V = V_A ∪ V_D, Tf) be a TFFIS, where U is the universe of discourse of the MIvDS, AT is the attribute set of the MIvDS, V is the domain of AT, including the domain V_A of the condition attribute set A and the domain V_D of the decision attribute set D, and Tf : U × AT → V is an information function such that Tf(x, a) = [l_a(x), m_a(x), n_a(x), u_a(x)] is a trapezoidal fuzzy number for any a ∈ A, x ∈ U.

The distance function proposed by Li et al. [20] is employed to characterize the similarity relation of the objects in a TFFIS.

Definition III.4: [20] Given a TFFIS, TFFIS = (U, AT = A ∪ D, V = V_A ∪ V_D, Tf), the distance between two objects x_i and x_j (x_i, x_j ∈ U) is defined as d(x_i, x_j) = (1/|A|) Σ_{a∈A} d_a(x_i, x_j), where d_a(x_i, x_j) = |l_a(x_i) − l_a(x_j)| + |(m_a(x_i) − l_a(x_i)) − (m_a(x_j) − l_a(x_j))| + |(n_a(x_i) − m_a(x_i)) − (n_a(x_j) − m_a(x_j))| + |(u_a(x_i) − m_a(x_i)) − (u_a(x_j) − m_a(x_j))|, and |A| denotes the cardinality of A.

Definition III.5: The similarity degree of x_i and x_j is defined as follows:

Sim(x_i, x_j) = 1 − d(x_i, x_j) / MaxD_i  (j ≥ i)   (4)

where MaxD_i = max(d(x_i, x_k)) (k = i + 1, i + 2, ..., n). If i ≥ j, let Sim(x_i, x_j) = Sim(x_j, x_i).

The similarity degree Sim(x_i, x_j) describes how similar the object x_i is to the object x_j, and its value lies in [0, 1].

Definition III.6: Given a TFFIS, TFFIS = (U, AT = A ∪ D, V = V_A ∪ V_D, Tf), and a threshold α ∈ (0, 1], where U = {x1, x2, ..., xn}, the similarity relation between the objects x_i and x_j is defined as Sim_α = {(x_i, x_j) ∈ U × U | Sim(x_i, x_j) ≥ α}.

By utilizing the similarity relation, we can further define the lower and upper approximations in a TFFIS.

Definition III.7: Let TFFIS = (U, AT = A ∪ D, V = V_A ∪ V_D, Tf) be a TFFIS. ∀X ⊆ U, the lower and upper approximations of X with regard to the similarity relation Sim_α are defined as follows, respectively:

R_(α)(X) = {x | [x]_Sim_α ⊆ X}   (5)

R^(α)(X) = {x | [x]_Sim_α ∩ X ≠ ∅}   (6)

where [x]_Sim_α = {y | (x, y) ∈ Sim_α}, α ∈ (0, 1].

The approximate classification precision (denoted AP) and approximate classification quality (denoted AQ) are used to measure the precision of approximation classification [24], [25]. Let U/D = {D1, D2, ..., Dr} be a classification of the universe U in a TFFIS, TFFIS = (U, AT = A ∪ D, V = V_A ∪ V_D, Tf).
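The closed-form support endpoints of Eqs. (2)-(3) and the per-attribute distance of Definition III.4 can be sketched in Python (a minimal illustration with names of our own choosing; intervals are (lo, hi) pairs, trapezoids are (l, m, n, u) quadruples, and degenerate denominators are not guarded):

```python
def fuse_support(intervals, m, n):
    """Support endpoints of the trapezoidal granule (Eqs. (2)-(3)), given the
    core [m, n] and the source intervals (f_k^-, f_k^+).  The min/max calls
    play the roles of the ∧ and ∨ operators in the paper's formulas."""
    num_l = sum(min(hi, m) ** 2 - min(lo, m) ** 2 for lo, hi in intervals)
    den_l = sum(min(hi, m) - min(lo, m) for lo, hi in intervals)
    num_u = sum(max(hi, n) ** 2 - max(lo, n) ** 2 for lo, hi in intervals)
    den_u = sum(max(hi, n) - max(lo, n) for lo, hi in intervals)
    return num_l / den_l - m, num_u / den_u - n


def trapezoid_distance(t1, t2):
    """Per-attribute distance d_a of Definition III.4: sums the difference of
    the left endpoints and of the three left-anchored spreads (m - l, n - m,
    u - m), following the printed formula."""
    l1, m1, n1, u1 = t1
    l2, m2, n2, u2 = t2
    return (abs(l1 - l2)
            + abs((m1 - l1) - (m2 - l2))
            + abs((n1 - m1) - (n2 - m2))
            + abs((u1 - m1) - (u2 - m2)))
```

As a sanity check, fuse_support([(1, 3), (2, 4), (3, 5)], 2, 4) returns (1.0, 5.0): with core [2, 4], the support widens to the hull of the three intervals in this symmetric case. Shifting a trapezoid rigidly by 1 changes only the left-endpoint term of the distance, so trapezoid_distance((0, 1, 2, 3), (1, 2, 3, 4)) is 1.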


The AP and AQ of U/D with respect to the similarity relation Sim_α are defined as follows, respectively:

AP_Sim_α(U/D) = Σ_{Di∈U/D} |R_(α)(Di)| / Σ_{Di∈U/D} |R^(α)(Di)|   (7)

AQ_Sim_α(U/D) = Σ_{Di∈U/D} |R_(α)(Di)| / |U|.   (8)

Obviously, the values of AP_Sim_α(U/D) and AQ_Sim_α(U/D) range in [0, 1], and higher values indicate better precision and quality of the approximation classification, respectively.

Example III.2: (Continuation of Example III.1) Clearly, Table II is a TFFIS with U/D = {D1, D2, D3}, where D1 = {x1}, D2 = {x2, x5}, and D3 = {x3, x4}. Given a threshold α = 0.6, we have R_(α)(D1) = {x | [x]_Sim_α ⊆ D1} = {x1}, R^(α)(D1) = {x | [x]_Sim_α ∩ D1 ≠ ∅} = {x1}, R_(α)(D2) = {x5}, R^(α)(D2) = {x2, x4, x5}, R_(α)(D3) = {x3}, R^(α)(D3) = {x2, x3, x4}, AP_Sim_α(U/D) = 0.43, and AQ_Sim_α(U/D) = 0.6.

According to the above notions, an algorithm (see Algorithm 1) based on fuzzy granulation is developed to fuse multisource interval-valued data. Algorithm 1 consists of four processes for computing the trapezoidal fuzzy granules: 1) computing the dominant matrix in Steps 4-8, with time complexity O(N^2); 2) calculating the row sums of the dominant matrix in Steps 9-11, with time complexity O(N^2); 3) determining the core of a trapezoidal fuzzy granule in Step 12, with time complexity O(N log2(N)); and 4) computing the left and right endpoints of the fuzzy support in Steps 13 and 14, with time complexity O(4N). Hence, the total complexity of Algorithm 1 (Steps 2-16) is O(|U||A|(2N^2 + N log2(N) + 4N)).

IV. INCREMENTAL FUSION OF MIVDS UNDER THE VARIATION OF DATA SOURCES

The variation of data sources in an MIvDS, including the addition and deletion of sources, results in a change of the fuzzy granule structures in a dynamic multisource environment. In this section, we discuss the mechanisms of incremental fusion of multisource interval-valued data under the variation of data sources. In the fusion methodology based on fuzzy granulation in an MIvDS, the core of a fuzzy granule is induced from the dominant matrix, whose computation is time consuming. To reduce the computing overhead, an incremental method is proposed that updates the dominant matrix instead of reconstructing it from scratch. Furthermore, the incremental updating of the support of a fuzzy granule utilizes the accumulated information to reduce the computing time. In the following, we introduce the methods of incremental fusion under the addition and the deletion of data sources, respectively.

A. Incremental Fusion of MIvDS With the Addition of Data Sources

Let MIvDS^t = {MIvIS^t ∪ MD} be an MIvDS at time t, where MIvIS^t = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N}. At time t, ∀x ∈ U and a ∈ AT_i, the interval-valued data {f1(x, a), f2(x, a), ..., fN(x, a)} from the N different IvIS are fused as [l_a^t(x), m_a^t(x), n_a^t(x), u_a^t(x)] according to the information fusion method of Section III. In what follows, the incremental mechanisms for updating the core [m_a^t(x), n_a^t(x)] and the support [l_a^t(x), u_a^t(x)] are discussed for the addition of data sources.

Theorem IV.1: Suppose ΔN IvIS {IvIS_i = (U, AT_i, V_{AT_i}, f_i)} (i = N+1, N+2, ..., N+ΔN) are appended to MIvDS^t at time t+1. ∀x ∈ U and ∀a ∈ AT_i, let Sum_i^{t+1} denote the sum of the i-th row of the dominant matrix DM_a(x) with respect to the attribute a in MIvDS^{t+1} at time t+1. Sum_i^{t+1} can be updated as follows:
1) Sum_i^{t+1} = Sum_i^t + Σ_{j=N+1}^{N+ΔN} D(f_i(x, a), f_j(x, a)) (i = 1, 2, ..., N);
2) Sum_i^{t+1} = N − Σ_{j=1}^{N} D(f_j(x, a), f_i(x, a)) + Σ_{j=N+1}^{N+ΔN} D(f_i(x, a), f_j(x, a)) (i = N+1, N+2, ..., N+ΔN).

The proof can be found in Appendix A.

Corollary 1: The core [m_a^{t+1}(x), n_a^{t+1}(x)] with regard to the object x under the attribute a at time t+1 can be obtained as the median interval of {f1(x, a), f2(x, a), ..., f_{N+ΔN}(x, a)} based on the order of Sum_i^{t+1} (i = 1, 2, ..., N+ΔN).

Theorem IV.1 and Corollary 1 give an incremental method to update the core of a fuzzy granule by utilizing the prior matrix information rather than recomputing the dominant matrix.

To employ the prior information to update the left and right endpoints of the support of a fuzzy granule, we sort the right and the left endpoints of the interval-valued data {f1(x, a), f2(x, a), ..., fN(x, a)} from MIvDS^t in ascending order at time t, respectively.

Authorized licensed use limited to: ULAKBIM UASL - SELCUK UNIVERSITESI. Downloaded on November 11,2022 at 07:55:14 UTC from IEEE Xplore. Restrictions apply.
HUANG et al.: DYNAMIC FUSION OF MULTISOURCE INTERVAL-VALUED DATA BY FUZZY GRANULATION 3409
the right and left endpoints of the interval-valued data {f_1(x,a), f_2(x,a), ..., f_N(x,a)} from MIvDS^t in ascending order at time t, respectively. The corresponding sorted results are denoted as {f_(1)^+(x,a), f_(2)^+(x,a), ..., f_(N)^+(x,a)} and {f_(1)^−(x,a), f_(2)^−(x,a), ..., f_(N)^−(x,a)}, where f_(i)^+(x,a) and f_(i)^−(x,a) denote the i-th order of the right and left endpoints of these interval-valued data, respectively.

Theorem IV.2: Let K1 denote the location which satisfies f_(K1)^+(x,a) < m_a^t(x) and f_(K1+1)^+(x,a) ≥ m_a^t(x). If m_a^{t+1}(x) ≥ m_a^t(x), the left endpoint l_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

l_a^{t+1}(x) = (ALOS12 + ALIS12 + ALNS12) / (ALOS11 + ALIS11 + ALNS11) − m_a^{t+1}(x)    (9)

where

ALOS1p = Σ_{i=1}^{K1} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p],

ALIS1p = Σ_{i=K1+1}^{N} [(f_(i)^+(x,a) ∧ m_a^{t+1}(x))^p − (f_(i)^−(x,a) ∧ m_a^{t+1}(x))^p]

and

ALNS1p = Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^p − (f_i^−(x,a) ∧ m_a^{t+1}(x))^p]  (p = 1, 2).

The proof can be found in Appendix A.

Theorem IV.3: Let K2 denote the location which satisfies f_(K2)^−(x,a) < m_a^t(x) and f_(K2+1)^−(x,a) ≥ m_a^t(x). If m_a^{t+1}(x) < m_a^t(x), the left endpoint l_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

l_a^{t+1}(x) = (ALIS22 + ALNS22) / (ALIS21 + ALNS21) − m_a^{t+1}(x)    (10)

where

ALIS2p = Σ_{i=1}^{K2} [(f_(i)^+(x,a) ∧ m_a^{t+1}(x))^p − (f_(i)^−(x,a) ∧ m_a^{t+1}(x))^p]

and

ALNS2p = Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^p − (f_i^−(x,a) ∧ m_a^{t+1}(x))^p]  (p = 1, 2).

Proof: It can be obtained in a similar way as Theorem IV.2. □

Theorem IV.4: Let K3 denote the location which satisfies f_(K3)^−(x,a) < n_a^t(x) and f_(K3+1)^−(x,a) ≥ n_a^t(x). If n_a^{t+1}(x) ≤ n_a^t(x), the right endpoint u_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

u_a^{t+1}(x) = (AROS12 + ARIS12 + ARNS12) / (AROS11 + ARIS11 + ARNS11) − n_a^{t+1}(x)    (11)

where

AROS1p = Σ_{i=K3+1}^{N} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p],

ARIS1p = Σ_{i=1}^{K3} [(f_(i)^+(x,a) ∨ n_a^{t+1}(x))^p − (f_(i)^−(x,a) ∨ n_a^{t+1}(x))^p]

and

ARNS1p = Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^p − (f_i^−(x,a) ∨ n_a^{t+1}(x))^p]  (p = 1, 2).

The proof can be found in Appendix A.

Theorem IV.5: Let K4 denote the location which satisfies f_(K4)^+(x,a) < n_a^t(x) and f_(K4+1)^+(x,a) ≥ n_a^t(x). If n_a^{t+1}(x) > n_a^t(x), the right endpoint u_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

u_a^{t+1}(x) = (ARIS22 + ARNS22) / (ARIS21 + ARNS21) − n_a^{t+1}(x)    (12)

where

ARIS2p = Σ_{i=K4}^{N} [(f_(i)^+(x,a) ∨ n_a^{t+1}(x))^p − (f_(i)^−(x,a) ∨ n_a^{t+1}(x))^p]

and

ARNS2p = Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^p − (f_i^−(x,a) ∨ n_a^{t+1}(x))^p]  (p = 1, 2).

Proof: The proof is similar to that of Theorem IV.4. □

Remark: ALOS and AROS represent the prior information, ALIS and ARIS represent the interactive information with respect to the original data and the updated core, and ALNS and ARNS represent the updated information.

By the utilization of the prior information, Theorems IV.2–IV.5 show the incremental mechanisms for updating the support of a fuzzy granule when adding data sources. Algorithm 2 describes the process of incremental fusion of multisource interval-valued data with the addition of new sources.

In Algorithm 2, Steps 2–19 are divided into three parts to compute the trapezoidal fuzzy granules: 1) updates the row sums of the dominant matrix in Steps 4–6, whose time complexity is O((N + ΔN)ΔN); 2) renews the value of the original trapezoidal fuzzy core in Step 7, whose time
complexity is O((N + ΔN) log2(N + ΔN)); 3) updates the left and right endpoints of the original fuzzy support in Steps 8–17, whose time complexity is determined by the values of K1, K2, K3, and K4. Thus, the worst-case total complexity of Algorithm 2 is O(|U||A|((N + ΔN)ΔN + (N + ΔN) log2(N + ΔN) + 4(N + ΔN))), reached when K1 = 0 or K2 = N, and K3 = N or K4 = ΔN. In the best-case scenario, it becomes O(|U||A|((N + ΔN)ΔN + (N + ΔN) log2(N + ΔN) + 4ΔN)), reached when K1 = N or K2 = 0, and K3 = 0 or K4 = N. Obviously, the time complexity of the incremental fusion algorithm (Algorithm 2) is better than that of the static algorithm (Algorithm 1) when adding data sources.

B. Incremental Fusion of MIvDS With the Deletion of Data Sources

Let MIvDS^t = {MIvIS^t ∪ MD} be an MIvDS at time t, where MIvIS^t = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N, N+1, ..., N+ΔN}. For these interval-valued data {f_1(x,a), f_2(x,a), ..., f_{N+ΔN}(x,a)} from N + ΔN different IvIS, the corresponding fusion result is [l_a^t(x), m_a^t(x), n_a^t(x), u_a^t(x)] (∀x ∈ U, a ∈ AT_i). In the following, we discuss how to incrementally update the core [m_a^t(x), n_a^t(x)] and the support [l_a^t(x), u_a^t(x)] when deleting data sources.

Theorem IV.6: Suppose ΔN IvIS {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i)} (i = N+1, N+2, ..., N+ΔN) are removed from MIvDS^t at time t + 1. ∀x ∈ U and ∀a ∈ AT_i, let Sum_i^{t+1} denote the sum of the i-th row of the dominant matrix DM_a(x) with respect to the attribute a in MIvDS^{t+1} at time t + 1. Sum_i^{t+1} can be updated as follows:

Sum_i^{t+1} = Sum_i^t − Σ_{j=N+1}^{N+ΔN} D(f_i(x,a), f_j(x,a))  (i = 1, 2, ..., N).

Proof: It can be obtained by the same method as employed in Theorem IV.1. □

Corollary 2: The core [m_a^{t+1}(x), n_a^{t+1}(x)] with regard to the object x under the attribute a at time t + 1 can be updated in terms of the median interval of {f_1(x,a), f_2(x,a), ..., f_N(x,a)} based on the order of Sum_i^{t+1} (i = 1, 2, ..., N).

Instead of updating the whole dominant matrix, Theorem IV.6 and Corollary 2 show an incremental approach to update the core by means of the prior matrix information.

Similar to the scenario of updating the support of a fuzzy granule when adding data sources, we sort the right and left endpoints of the original interval-valued data {f_1(x,a), f_2(x,a), ..., f_{N+ΔN}(x,a)} in ascending order at time t, respectively. The corresponding sorted results are {f_(1)^+(x,a), f_(2)^+(x,a), ..., f_(N+ΔN)^+(x,a)} and {f_(1)^−(x,a), f_(2)^−(x,a), ..., f_(N+ΔN)^−(x,a)}.

Theorem IV.7: Let Q1 denote the location which satisfies f_(Q1)^−(x,a) < m_a^t(x) and f_(Q1+1)^−(x,a) ≥ m_a^t(x). If m_a^{t+1}(x) < m_a^t(x), the left endpoint l_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

l_a^{t+1}(x) = DLIS22 / DLIS21 − m_a^{t+1}(x)    (13)

where

DLIS2p = Σ_{i∈LIndex1} [(f_(i)^+(x,a) ∧ m_a^{t+1}(x))^p − (f_(i)^−(x,a) ∧ m_a^{t+1}(x))^p]  (p = 1, 2),

LIndex1 = {(1), (2), ..., (Q1)} − {N+1, N+2, ..., N+ΔN}.

The proof can be found in Appendix B.

Theorem IV.8: Let Q2 denote the location which satisfies f_(Q2)^+(x,a) < m_a^t(x) and f_(Q2+1)^+(x,a) ≥ m_a^t(x). If m_a^{t+1}(x) ≥ m_a^t(x), the left endpoint l_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

l_a^{t+1}(x) = (DLOS12 + DLIS12 − DLNS12) / (DLOS11 + DLIS11 − DLNS11) − m_a^{t+1}(x)    (14)

where

DLOS1p = Σ_{i≤Q2} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p],

DLIS1p = Σ_{i∈LIndex2} [(f_(i)^+(x,a) ∧ m_a^{t+1}(x))^p − (f_(i)^−(x,a) ∧ m_a^{t+1}(x))^p]

and

DLNS1p = Σ_{i∈LIndex3} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p]  (p = 1, 2).

Moreover,

LIndex2 = {(Q2+1), (Q2+2), ..., (N+ΔN)} − {N+1, N+2, ..., N+ΔN}

and

LIndex3 = {(1), (2), ..., (Q2)} ∩ {N+1, N+2, ..., N+ΔN}.

Proof: It can be obtained in a similar way as Theorem IV.7. □

Theorem IV.9: Let Q3 denote the location which satisfies f_(Q3)^+(x,a) < n_a^t(x) and f_(Q3+1)^+(x,a) ≥ n_a^t(x). If n_a^{t+1}(x) > n_a^t(x), the right endpoint u_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

u_a^{t+1}(x) = DRIS22 / DRIS21 − n_a^{t+1}(x)    (15)

where

DRIS2p = Σ_{i∈RIndex1} [(f_(i)^+(x,a) ∨ n_a^{t+1}(x))^p − (f_(i)^−(x,a) ∨ n_a^{t+1}(x))^p]  (p = 1, 2),

RIndex1 = {(Q3+1), (Q3+2), ..., (N+ΔN)} − {N+1, N+2, ..., N+ΔN}.

The proof can be found in Appendix B.

Theorem IV.10: Let Q4 denote the location which satisfies f_(Q4)^−(x,a) < n_a^t(x) and f_(Q4+1)^−(x,a) ≥ n_a^t(x). If n_a^{t+1}(x) ≤ n_a^t(x), the right endpoint u_a^{t+1}(x) of the support with respect to the object x under the attribute a at time t + 1 can be updated as follows:

u_a^{t+1}(x) = (DROS12 + DRIS12 − DRNS12) / (DROS11 + DRIS11 − DRNS11) − n_a^{t+1}(x)    (16)

where

DROS1p = Σ_{i=Q4+1}^{N+ΔN} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p],

DRIS1p = Σ_{i∈RIndex2} [(f_(i)^+(x,a) ∨ n_a^{t+1}(x))^p − (f_(i)^−(x,a) ∨ n_a^{t+1}(x))^p]

and

DRNS1p = Σ_{i∈RIndex3} [(f_(i)^+(x,a))^p − (f_(i)^−(x,a))^p]  (p = 1, 2).

Moreover,

RIndex2 = {(1), (2), ..., (Q4)} − {N+1, N+2, ..., N+ΔN}

and

RIndex3 = {(Q4+1), (Q4+2), ..., (N+ΔN)} ∩ {N+1, N+2, ..., N+ΔN}.

Proof: It can easily be obtained in a similar way as Theorem IV.9. □

Remark: DLOS and DROS represent the prior information, DLIS and DRIS represent the interactive information with respect to the original data and the updated core, and DLNS and DRNS represent the updated information.

Theorems IV.7–IV.10 display the incremental approaches to update the support of a fuzzy granule when deleting data sources. Algorithm 3 shows how to incrementally update trapezoidal fuzzy granules in MIvDS when obsolete sources are removed.

Algorithm 3 is divided into three parts for updating the trapezoidal fuzzy granules in Steps 2–19: 1) computes the row sums of the dominant matrix in Steps 4–6, whose time complexity is O(NΔN); 2) updates the value of the original trapezoidal fuzzy core in Step 7, whose time complexity is O(N log2 N); 3) renews the left and right endpoints of the original fuzzy support in Steps 8–17, whose time complexity is determined by the values of |LIndex1|, |LIndex2|, |LIndex3|, |RIndex1|, |RIndex2|, and |RIndex3|.

In the worst-case scenario, the time complexity of Algorithm 3 is O(|U||A|(NΔN + N log2 N + 4(N + ΔN))), reached when |LIndex2| = N, |LIndex3| = ΔN, |RIndex2| = N, and |RIndex3| = ΔN. The best case of the total complexity of Algorithm 3 is O(|U||A|(NΔN + N log2 N)), reached when |LIndex1| = 0 and |RIndex1| = 0. It is clear that the time complexity of the incremental fusion algorithm (Algorithm 3) is better than that of the static algorithm (Algorithm 1) when deleting data sources.
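As a concrete illustration of the row-sum updates that Algorithms 2 and 3 rely on (Theorems IV.1 and IV.6), the sketch below maintains Sum_i incrementally under source addition and deletion. The dominance degree D used here is a hypothetical possibility-degree measure for intervals — the paper's actual definition of D is given before this excerpt — chosen only because it satisfies the complement property D(A, B) = 1 − D(B, A) (Proposition III.2), which is all the updates require.

```python
# Sketch of the incremental row-sum updates (Theorems IV.1 and IV.6).
# D is a stand-in possibility-degree measure, NOT the paper's definition
# (which lies outside this excerpt); it satisfies D(A, B) + D(B, A) = 1,
# the property the incremental rules rely on.

def D(A, B):
    """Degree to which interval A = [a1, a2] dominates B = [b1, b2]."""
    (a1, a2), (b1, b2) = A, B
    width = (a2 - a1) + (b2 - b1)
    if width == 0:                       # two degenerate intervals
        return 0.5 if a1 == b1 else float(a1 > b1)
    return min(1.0, max(0.0, (a2 - b1) / width))

def row_sums(intervals):
    """Static computation: Sum_i = sum_j D(f_i, f_j) over all sources."""
    return [sum(D(fi, fj) for fj in intervals) for fi in intervals]

def add_sources(sums, old, new):
    """Theorem IV.1: update Sum_i when ΔN sources are appended."""
    n = len(old)
    # Rule 1: retained sources gain the new sources' dominance terms.
    updated = [s + sum(D(fi, fj) for fj in new) for s, fi in zip(sums, old)]
    # Rule 2: each new source reuses the complement of the old columns.
    for fi in new:
        updated.append(n - sum(D(fj, fi) for fj in old)
                         + sum(D(fi, fj) for fj in new))
    return updated

def delete_sources(sums, kept, removed):
    """Theorem IV.6: retained sources lose the removed sources' terms."""
    return [s - sum(D(fi, fj) for fj in removed)
            for s, fi in zip(sums, kept)]
```

Both updates reproduce the static row sums without rebuilding the dominant matrix, mirroring the O((N + ΔN)ΔN) and O(NΔN) costs stated for Algorithms 2 and 3.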
TABLE III
DESCRIPTION OF DATA SETS

V. EXPERIMENTAL ANALYSIS

In this section, comparative experiments based on nine data sets from the UCI Repository of Machine Learning are carried out to demonstrate the effectiveness and efficiency of the proposed methods. The detailed information of these data sets is summarized in Table III. All concerned experiments are performed with MATLAB 2012a on a personal computer with an Intel Core i5-4200U CPU at 1.60 GHz and 4.0 GB of memory.

Since multisource interval-valued datasets are not available in any public databases, we construct a synthetic MIvDS according to the following steps:

1) Let S′ = (U, AT ∪ D, V′ = V_AT ∪ V_D, g′) denote an original real-valued decision system. Generate an IvIS, IvIS = (U, AT, V_AT, g), according to the method presented by Leung et al. [18]: ∀x ∈ U, ∀a ∈ AT, g−(x, a) = g′(x, a) − 2σ_a and g+(x, a) = g′(x, a) + 2σ_a, where σ_a denotes the standard deviation of the attribute a in the same decision class.

2) Draw a random number r from a normal distribution N(0, 0.1). Let each AT_i = AT (i = 1, 2, ..., N). If r > 0, f−(x, a) = g−(x, a)(1 − r) and f+(x, a) = g+(x, a)(1 + r); otherwise, f−(x, a) = g−(x, a)(1 + r) and f+(x, a) = g+(x, a)(1 − r). Let MD = {D, V_D, g_D}, where D and V_D are the decision attribute set and the corresponding decision domain from the original decision system, respectively, and g_D is the information function with respect to the decision attribute D. Then we obtain an MIvDS, MIvDS = {MIvIS ∪ MD}, where MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i)} (i = 1, 2, ..., N).

A. Effectiveness Analysis

In this section, we compare the proposed fusion method based on fuzzy granulation (abbreviated as FgF for convenience) with six other related fusion methods to verify its effectiveness.

Let MIvDS = {MIvIS ∪ MD} be an MIvDS, where MIvIS = {IvIS_i | IvIS_i = (U, AT_i, V_{AT_i}, f_i), i = 1, 2, ..., N} and AT_i ⊆ A. In this section, we fix N = 20. Assume min and max denote the minimum and maximum operations, respectively, and mean indicates the averaging operation. ∀x ∈ U, ∀a ∈ A, the six other fusion approaches are: 1) maximum fusion method (abbreviated as MaxF): MaxF_a^−(x) = min{f_1^−(x,a), f_2^−(x,a), ..., f_N^−(x,a)}, MaxF_a^+(x) = max{f_1^+(x,a), f_2^+(x,a), ..., f_N^+(x,a)}, where MaxF_a^−(x) and MaxF_a^+(x) denote the left and right endpoints of a maximum fusion
TABLE IV
AP AND AQ OF DIFFERENT FUSION APPROACHES IN THE WINE DATASET

TABLE V
MEANS AND STANDARD DEVIATIONS OF CLASSIFICATION ACCURACY FOR DIFFERENT FUSION METHODS

result, respectively; 2) minimum fusion method (abbreviated as MinF): MinF_a^−(x) = max{f_1^−(x,a), f_2^−(x,a), ..., f_N^−(x,a)}, MinF_a^+(x) = min{f_1^+(x,a), f_2^+(x,a), ..., f_N^+(x,a)}, where MinF_a^−(x) and MinF_a^+(x) denote the left and right endpoints of a minimum fusion result, respectively; 3) mean fusion method (abbreviated as MeanF): MeanF_a^−(x) = mean{f_1^−(x,a), f_2^−(x,a), ..., f_N^−(x,a)}, MeanF_a^+(x) = mean{f_1^+(x,a), f_2^+(x,a), ..., f_N^+(x,a)}, where MeanF_a^−(x) and MeanF_a^+(x) denote the left and right endpoints of a mean fusion result, respectively; 4) single mean fusion method (abbreviated as SMF): SMF_a(x) = mean{(f_1^−(x,a) + f_1^+(x,a))/2, (f_2^−(x,a) + f_2^+(x,a))/2, ..., (f_N^−(x,a) + f_N^+(x,a))/2}, where SMF_a(x) denotes a single mean fusion result; 5) maximal source fusion method (abbreviated as MSF): choose the i-th IvIS_i as the fusion result according to the maximum evaluation indexes; 6) the fusion method proposed by Zhang et al. [54] (abbreviated as ZF). AP, AQ, and the accurate classification rate are employed to measure the fusion effectiveness in this section.

Table IV illustrates the AP and AQ of the different fusion approaches on the multisource interval-valued dataset Wine when the similarity degree parameter α changes from 0.50 to 0.95 in intervals of 0.05. The bold values in Table IV indicate the best AP and AQ among the seven fusion approaches when changing the parameter α. Due to the limited space, the other datasets' results with respect to the comparisons of AP and AQ between different fusion methods can be found in the supplementary materials (Tables B–I). According to the experimental results, the proposed fusion method based on fuzzy granulation outperforms all the other fusion approaches in most situations under the variation of α.

A K-nearest neighbor classifier is employed to verify the fusion effectiveness. Table V shows the average accurate classification rate and standard deviation obtained by tenfold cross-validation. Furthermore, a two-sample t-test [36] using MATLAB [22] is conducted to check whether the proposed fusion method (FgF) is significantly better than the other six fusion approaches. Given a significance level of 5%, let the null hypothesis be H0: μ_FgF = μ_OF and the alternative hypothesis be Ha: μ_FgF > μ_OF, where μ_FgF and μ_OF denote the means of classification accuracy w.r.t. the fusion method FgF and the other fusion methods, respectively. In the supplementary material, Table J displays the detailed P-values of the comparison results between FgF and the other fusion methods. According to the t-test results, FgF is statistically better than the other fusion methods w.r.t. classification accuracy on all datasets with the exception of the data sets Wine and BCW. The classification accuracies of FgF on the Wine and BCW data sets are statistically better than those of most of the other fusion approaches, excepting MeanF, MaxF, and MSF. The bold values in Table V
TABLE VI
SOURCE NUMBERS OF DIFFERENT MULTISOURCE INTERVAL-VALUED DATASETS

Fig. 4. Running times of Algorithm 1 (Static) and Algorithm 2 (Incremental) when inserting different ratios of sources. (a) Wine, (b) Ecoli, (c) Climate, (d) BCW, (e) Diabetic, (f) Image, (g) Wall, (h) Occ, (i) Default.

describe the best classification accuracy with a statistically significant difference between the different fusion methods. In the first and fourth rows of Table V, the bold values with superscript ∗ correspond to the best classification accuracy, and there are no significant differences among them.

Hence, compared with the other six fusion approaches, these experimental results show that the proposed fusion method based on fuzzy granulation is a more reasonable choice for the fusion of multisource interval-valued data.

B. Efficiency Analysis

In this section, we compare the running time between the static fusion algorithm (Algorithm 1) and the incremental fusion algorithms (Algorithms 2 and 3) under the variation of data sources on the nine datasets shown in Table III.

To demonstrate the efficiency of the proposed incremental fusion algorithms, we generate nine multisource interval-valued datasets whose numbers of sources are doubled successively from 100 to 25600. Table VI illustrates the detailed information of the data sources. In order to analyze the computational efficiency under the addition and deletion of data sources, respectively, we choose 50% of the data sources from the whole dataset as the basic dataset, and the source number is increased by 10% from the remaining dataset when adding data sources. The whole dataset is considered as the basic dataset, and the source number is decreased by 10% from the original dataset when deleting data sources.

Due to the limited space, the detailed running times and the incremental speedup ratios with respect to the static and incremental algorithms are appended to the supplementary materials. Tables K and L in the supplementary materials show the computation times of the static and incremental algorithms when inserting and deleting different ratios of sources, respectively. Table M (in the supplementary materials) describes the incremental speedup ratio when inserting and deleting different ratios of data sources. It shows that the incremental fusion algorithm (Algorithm 2) yields a 1.47–56.99× speedup over the static fusion algorithm (Algorithm 1) when adding data sources. Similarly, the incremental fusion algorithm (Algorithm 3) achieves a 1.13–587.63× speedup over the static fusion algorithm (Algorithm 1) when deleting data sources.

Fig. 5. Running times of Algorithm 1 (Static) and Algorithm 3 (Incremental) when deleting different ratios of sources. (a) Wine, (b) Ecoli, (c) Climate, (d) BCW, (e) Diabetic, (f) Image, (g) Wall, (h) Occ, (i) Default.

Figs. 4 and 5 show the more detailed change trends of the compared algorithms with the increase and decrease of the number of sources, respectively. In each subfigure (a)–(i) of Figs. 4 and 5, the x-coordinate pertains to the variation of ratios and the y-coordinate pertains to the running time of the static and incremental algorithms. Although there are some fluctuations of the running time under the variation of source numbers, the computational efficiency of the incremental fusion algorithms is better than that of the static fusion algorithm. Obviously, this is coherent with the analysis of time complexity in Section IV. Furthermore, we employ the Wilcoxon signed-rank test to determine whether the performance differences between the static and incremental algorithms are indeed statistically significant. With a 5% significance level, the P-values of the observed results in Figs. 4 and 5 are 3.835e−4 and 5.825e−16, respectively. Both of them are less than 0.05. Hence, the performances of the proposed incremental fusion algorithms (Algorithms 2 and 3) are statistically better than that of the static algorithm (Algorithm 1) when inserting and deleting data sources in MIvDS.

VI. CONCLUSION

Multisource interval-valued data are common in many practical applications. How to effectively fuse multisource interval-valued data is a challenge in a dynamic multisource environment. In this paper, we presented a novel fusion method via the utilization of fuzzy granulation to transform multiple interval-valued data into a trapezoidal fuzzy granule. A new dominant matrix was constructed to compute the core of a trapezoidal fuzzy granule, and the corresponding support was obtained by maximizing two objective functions. Furthermore, considering that the variation of data sources would incur changes of the fuzzy granule structures in MIvDS, an incremental fusion approach was proposed by utilizing the accumulated granule information to improve the computing efficiency. We investigated the incremental fusion mechanisms and developed the corresponding incremental fusion algorithms for adding and deleting data sources. Experimental results on several UCI datasets demonstrated that the proposed fusion method outperformed most of the other six fusion approaches in terms of AP, AQ, and classification accuracy. Moreover, we constructed nine MIvDS containing source numbers from 100 to 25600 to verify the efficiency of the incremental fusion algorithms in comparison with the static fusion algorithm. The comparative results showed that the proposed incremental fusion algorithms can effectively improve the computation performance when adding and deleting multiple data sources.

In this paper, we focused on the scenario in which each IvIS has the same attributes in MIvDS. However, in real-world applications, there may exist different attributes in each source. In the future, we will extend the proposed method to deal with this situation. Furthermore, in this paper, the multisource interval-valued data include label information, whereas in many situations we encounter interval-valued data without labels. The clustering problem of interval-valued data has been intensively investigated [3]–[5]. Another future work will investigate fusion technologies for multisource unlabeled interval-valued data to improve the effectiveness of clustering.

APPENDIX A
PROOFS OF THEOREMS IV.1, IV.2, AND IV.4

In this appendix, we give the proofs of the incremental fusion mechanisms, viz., Theorems IV.1, IV.2, and IV.4, under the addition of data sources.

Proof of Theorem IV.1: While ΔN interval-valued data from different IvIS are inserted into MIvDS^t at time t + 1, ∀x ∈ U and ∀a ∈ AT_i, the dominant matrix DM_a(x) = (D_ij^a(x))_{(N+ΔN)×(N+ΔN)}, where D_ij^a(x) = D(f_i(x,a), f_j(x,a)) (i, j ∈ {1, 2, ..., N+ΔN}). Evidently, the first N rows and N columns of DM_a(x) are the entries of the original dominant matrix according to Definition III.2. Hence,

Sum_i^{t+1} = Sum_i^t + Σ_{j=N+1}^{N+ΔN} D(f_i(x,a), f_j(x,a))  (i = 1, 2, ..., N).

In terms of Proposition III.2,

D(f_i(x,a), f_j(x,a)) = 1 − D(f_j(x,a), f_i(x,a))  (i ∈ {N+1, N+2, ..., N+ΔN}, j ∈ {1, 2, ..., N+ΔN}).
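This complement property can be checked numerically for a concrete choice of dominance degree. The measure below is hypothetical — the paper's definition of D precedes this excerpt and may differ — and serves only as an illustration.

```python
# Numerical illustration of the complement property D(A, B) = 1 - D(B, A)
# (Proposition III.2).  possibility_degree is a HYPOTHETICAL dominance
# measure for intervals; the paper's actual definition of D is given
# before this excerpt.

def possibility_degree(A, B):
    """Degree to which interval A = [a1, a2] dominates B = [b1, b2]."""
    (a1, a2), (b1, b2) = A, B
    width = (a2 - a1) + (b2 - b1)        # total width of both intervals
    if width == 0:                       # two degenerate intervals
        return 0.5 if a1 == b1 else float(a1 > b1)
    return min(1.0, max(0.0, (a2 - b1) / width))

# The degrees of each ordered pair sum to one:
for A, B in [((1.0, 3.0), (2.0, 5.0)), ((0.0, 4.0), (3.0, 6.0)),
             ((2.0, 7.0), (2.5, 3.5))]:
    assert abs(possibility_degree(A, B) + possibility_degree(B, A) - 1.0) < 1e-12
```

Any D satisfying D(A, B) + D(B, A) = 1 supports the row-sum decomposition used in this proof.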
Thus,

Sum_i^{t+1} = Σ_{j=1}^{N+ΔN} D(f_i(x,a), f_j(x,a))
 = Σ_{j=1}^{N} D(f_i(x,a), f_j(x,a)) + Σ_{j=N+1}^{N+ΔN} D(f_i(x,a), f_j(x,a))
 = Σ_{j=1}^{N} (1 − D(f_j(x,a), f_i(x,a))) + Σ_{j=N+1}^{N+ΔN} D(f_i(x,a), f_j(x,a))
 = N − Σ_{j=1}^{N} D(f_j(x,a), f_i(x,a)) + Σ_{j=N+1}^{N+ΔN} D(f_i(x,a), f_j(x,a))  (i = N+1, N+2, ..., N+ΔN). □

Proof of Theorem IV.2: According to Equation (2), we have

l_a^{t+1}(x) = Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2] / Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x)) − (f_i^−(x,a) ∧ m_a^{t+1}(x))] − m_a^{t+1}(x).

The numerator of the first term of l_a^{t+1}(x) can be divided into the sum of two parts, viz.,

Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2]
 = Σ_{i=1}^{N} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2] + Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2].

Since f_(i)^+(x,a) < m_a^t(x) (i ∈ {1, 2, ..., K1}) and m_a^{t+1}(x) ≥ m_a^t(x), then f_(i)^+(x,a) ∧ m_a^{t+1}(x) = f_(i)^+(x,a). Because f_(i)^−(x,a) ≤ f_(i)^+(x,a), then f_(i)^−(x,a) ∧ m_a^{t+1}(x) = f_(i)^−(x,a). Hence,

Σ_{i=1}^{N} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2]
 = Σ_{i=1}^{K1} [(f_(i)^+(x,a))^2 − (f_(i)^−(x,a))^2] + Σ_{i=K1+1}^{N} [(f_(i)^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_(i)^−(x,a) ∧ m_a^{t+1}(x))^2].

Clearly, Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∧ m_a^{t+1}(x))^2 − (f_i^−(x,a) ∧ m_a^{t+1}(x))^2] = ALOS12 + ALIS12 + ALNS12. The denominator of the first term of l_a^{t+1}(x) can be obtained in a similar way. Therefore, we have

l_a^{t+1}(x) = (ALOS12 + ALIS12 + ALNS12) / (ALOS11 + ALIS11 + ALNS11) − m_a^{t+1}(x). □

Proof of Theorem IV.4: In terms of Equation (3),

u_a^{t+1}(x) = Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2] / Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x)) − (f_i^−(x,a) ∨ n_a^{t+1}(x))] − n_a^{t+1}(x)

while ΔN interval-valued data are appended to MIvDS^t at time t + 1. The numerator of the first term of u_a^{t+1}(x) can be partitioned into two items, namely,

Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2]
 = Σ_{i=1}^{N} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2] + Σ_{i=N+1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2].

Since f_(i)^−(x,a) ≥ n_a^t(x) (i ∈ {K3+1, K3+2, ..., N}) and n_a^t(x) ≥ n_a^{t+1}(x), then f_(i)^−(x,a) ∨ n_a^{t+1}(x) = f_(i)^−(x,a). Because f_(i)^−(x,a) ≤ f_(i)^+(x,a), then f_(i)^+(x,a) ∨ n_a^{t+1}(x) = f_(i)^+(x,a). Thus, the first term of Σ_{i=1}^{N+ΔN} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2] can be divided into the sum of two parts, viz.,

Σ_{i=1}^{N} [(f_i^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_i^−(x,a) ∨ n_a^{t+1}(x))^2]
 = Σ_{i=K3+1}^{N} [(f_(i)^+(x,a))^2 − (f_(i)^−(x,a))^2] + Σ_{i=1}^{K3} [(f_(i)^+(x,a) ∨ n_a^{t+1}(x))^2 − (f_(i)^−(x,a) ∨ n_a^{t+1}(x))^2].

Similarly, the denominator of the first term of u_a^{t+1}(x) can be obtained in a similar way. Then, we have

u_a^{t+1}(x) = (AROS12 + ARIS12 + ARNS12) / (AROS11 + ARIS11 + ARNS11) − n_a^{t+1}(x). □

APPENDIX B
PROOFS OF THEOREMS IV.7 AND IV.9

Due to the limited space, the proofs of Theorems IV.7 and IV.9 can be found in the supplementary materials.
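The incremental support update proved in Appendix A can also be checked numerically. The sketch below takes the direct left-endpoint formula as it appears in the proof of Theorem IV.2 (Eq. (2)) and compares it with the incremental form of Eq. (9); the interval data and the core endpoints m_a^t, m_a^{t+1} are hypothetical, and binary search is merely one way to locate K1.

```python
# Numerical check of the incremental left-endpoint update (Theorem IV.2,
# Eq. (9)) against direct recomputation via Eq. (2).  The interval data
# and core endpoints below are hypothetical illustration values.
import bisect

def left_endpoint(intervals, m):
    """Eq. (2): l = sum((f+ ^ m)^2 - (f- ^ m)^2) /
                    sum((f+ ^ m) - (f- ^ m)) - m,  with ^ = min."""
    num = sum(min(hi, m) ** 2 - min(lo, m) ** 2 for lo, hi in intervals)
    den = sum(min(hi, m) - min(lo, m) for lo, hi in intervals)
    return num / den - m

def left_endpoint_incremental(old, new, m_old, m_new):
    """Eq. (9), valid when m_new >= m_old (Theorem IV.2)."""
    assert m_new >= m_old
    rights = sorted(hi for _, hi in old)         # f+_(1) <= ... <= f+_(N)
    lefts = sorted(lo for lo, _ in old)          # f-_(1) <= ... <= f-_(N)
    k1 = bisect.bisect_left(rights, m_old)       # f+_(K1) < m_old <= f+_(K1+1)
    parts = []
    for p in (1, 2):
        alos = sum(rights[i] ** p - lefts[i] ** p for i in range(k1))
        alis = sum(min(rights[i], m_new) ** p - min(lefts[i], m_new) ** p
                   for i in range(k1, len(old)))
        alns = sum(min(hi, m_new) ** p - min(lo, m_new) ** p for lo, hi in new)
        parts.append(alos + alis + alns)
    return parts[1] / parts[0] - m_new
```

The terms for i ≤ K1 enter unclipped (ALOS), so only the tail of the sorted old data and the ΔN new sources need the min operation, which is where the incremental saving comes from.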
REFERENCES

[1] A. Bargiela and W. Pedrycz, "Toward a theory of granular computing for human-centered information processing," IEEE Trans. Fuzzy Syst., vol. 16, no. 2, pp. 320–330, Apr. 2008.
[2] L. Billard and E. Diday, "From the statistics of data to the statistics of knowledge: Symbolic data analysis," J. Amer. Stat. Assoc., vol. 98, no. 462, pp. 470–487, 2003.
[3] F. A. T. de Carvalho, "Fuzzy c-means clustering methods for symbolic interval data," Pattern Recognit. Lett., vol. 28, no. 4, pp. 423–437, 2007.
[4] F. A. T. de Carvalho and Y. Lechevallier, "Dynamic clustering of interval-valued data based on adaptive quadratic distances," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 6, pp. 1295–1306, Nov. 2009.
[5] F. A. T. de Carvalho and C. P. Tenório, "Fuzzy K-means clustering algorithms for interval-valued data based on adaptive quadratic distances," Fuzzy Sets Syst., vol. 161, no. 23, pp. 2978–2999, 2010.
[6] H. K. Chan, Z. X. Hu, and L. H. Beng, "Fusion of simplified entity networks from unstructured text," in Proc. 14th Int. Conf. Inf. Fusion, 2011, pp. 1–7.
[7] D. Dubois, "The role of fuzzy sets in decision sciences: Old techniques and new directions," Fuzzy Sets Syst., vol. 184, no. 1, pp. 3–28, 2011.
[8] D. Dubois, "On the use of aggregation operations in information fusion processes," Fuzzy Sets Syst., vol. 142, no. 1, pp. 143–161, 2004.
[9] D. Dubois, W. R. Liu, and J. B. Ma, "The basic principles of uncertain information fusion," Inf. Fusion, vol. 32, pp. 12–39, 2016.
[10] P. Durso and J. M. Leski, "Fuzzy c-ordered medoids clustering for interval-valued data," Pattern Recognit., vol. 58, pp. 49–67, 2016.
[11] R. Gravina, P. Alinia, H. Ghasemzadeh, and G. Fortino, "Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges," Inf. Fusion, vol. 35, pp. 68–80, 2017.
[12] D. Guru, B. B. Kiranagi, and P. Nagabhushan, "Multivalued type proximity measure and concept of mutual similarity value useful for clustering symbolic patterns," Pattern Recognit. Lett., vol. 25, no. 10, pp. 1203–1213, 2004.
[13] T. P. Hong, K. Y. Lin, and S. L. Wang, "Fuzzy data mining for interesting generalized association rules," Fuzzy Sets Syst., vol. 138, no. 2, pp. 255–269, 2003.
[14] C. C. Huang, J. H. Li, C. L. Mei, and W. Z. Wu, "Three-way concept learning based on cognitive operators: An information fusion viewpoint," Int. J. Approx. Reason., vol. 83, pp. 218–242, 2017.
[15] E. Hullermeier, "Fuzzy sets in machine learning and data mining," Appl. Soft Comput., vol. 11, no. 2, pp. 1493–1505, 2011.
[16] C. Kahraman, B. Öztayşi, and S. Ç. Onar, "A comprehensive literature review of 50 years of fuzzy set theory," Int. J. Comput. Intell. Syst., vol. 9, pp. 3–24, 2016.
[28] W. Pedrycz and P. Rai, "Collaborative clustering with the use of fuzzy c-means and its quantification," Fuzzy Sets Syst., vol. 159, no. 18, pp. 2399–2427, 2008.
[29] W. Pedrycz and M. L. Song, "Analytic hierarchy process (AHP) in group decision making and its optimization with an allocation of information granularity," IEEE Trans. Fuzzy Syst., vol. 19, no. 3, pp. 527–539, Jun. 2011.
[30] W. Pedrycz and S. M. Chen, Granular Computing and Decision-Making. New York, NY, USA: Springer, 2015.
[31] Y. H. Qian, S. Y. Li, J. Y. Liang, Z. Z. Shi, and F. Wang, "Pessimistic rough set based decisions: A multigranulation fusion strategy," Inf. Sci., vol. 264, pp. 196–210, 2014.
[32] S. Rehman and M. Mohandes, "Artificial neural network estimation of global solar radiation using air temperature and relative humidity," Energy Policy, vol. 36, no. 2, pp. 571–576, 2008.
[33] S. E. Sklarz, A. Novoselsky, and M. Dorfan, "Incremental fusion of GMTI tracks for road map estimation," in Proc. 11th Int. Conf. Inf. Fusion, 2008, pp. 1–7.
[34] A. Skowron and J. Stepaniuk, "Information granules: Towards foundations of granular computing," Int. J. Intell. Syst., vol. 16, no. 1, pp. 57–85, 2001.
[35] A. Skowron and P. Synak, "Reasoning in information maps," Fundam. Inform., vol. 59, no. 2, pp. 241–259, 2003.
[36] G. W. Snedecor and W. G. Cochran, Statistical Methods, 8th ed. Ames, IA, USA: Iowa State Univ. Press, 1989.
[37] K. B. Song, Y. S. Baek, D. H. Hong, and G. Jang, "Short-term load forecasting for the holidays using fuzzy linear regression method," IEEE Trans. Power Syst., vol. 20, no. 1, pp. 96–101, Feb. 2005.
[38] F. de A. T. de Carvalho, P. Bertrand, and E. C. Simões, "Batch SOM algorithms for interval-valued data with automatic weighting of the variables," Neurocomputing, vol. 182, pp. 66–81, 2016.
[39] J. Q. Wang, J. T. Wu, J. Wang, H. Y. Zhang, and X. H. Chen, "Multi-criteria decision-making methods based on the Hausdorff distance of hesitant fuzzy linguistic numbers," Soft Comput., vol. 20, no. 4, pp. 1621–1633, 2016.
[40] W. Z. Wu, Y. Leung, and J. S. Mi, "Granular computing and knowledge reduction in formal contexts," IEEE Trans. Knowl. Data Eng., vol. 21, no. 10, pp. 1461–1474, Oct. 2009.
[41] X. B. Xu, Z. Zhang, D. L. Xu, and Y. W. Chen, "Interval-valued evidence updating with reliability and sensitivity analysis for fault diagnosis," Int. J. Comput. Intell. Syst., vol. 9, no. 3, pp. 396–415, 2016.
[42] W. H. Xu and W. T. Li, "Information fusion based on information entropy in fuzzy multi-source incomplete information system," Int. J. Fuzzy Syst., vol. 19, no. 4, pp. 1200–1216, 2017.
[43] W. H. Xu and W. T. Li, "Granular computing approach to two-way learning based on formal concept analysis in fuzzy datasets," IEEE Trans. Cybern., vol. 46, no. 2, pp. 366–379, Feb. 2016.
[44] W. H. Xu and J. H. Yu, "A novel approach to information fusion in multi-
[17] K. Kotwal and S. Chaudhuri, “A novel approach to quantitative evaluation source datasets: A granular computing viewpoint,” Inf. Sci., vol. 378,
of hyperspectral image fusion techniques,” Inf. Fusion, vol. 14, no. 1, pp. 410–423, 2016.
pp. 5–18, 2013. [45] R. R. Yager, “Set measure directed multi-Source information fusion,”
[18] Y. Leung, M. M. Fischer, W. Z. Wu, and J. S. Mi, “A rough set approach IEEE Trans. Fuzzy Syst., vol. 19, no. 6, pp. 1031–1039, Dec. 2011.
for the discovery of classification rules in interval-valued information [46] R. R. Yager, “A framework for multi-source data fusion,” Inf. Sci., vol. 163,
systems,” Int. J. Approx. Reason., vol. 47, no. 2, pp. 233–246, 2008. no. 13, pp. 175–200, 2004.
[19] M. Lhuillier, “Incremental fusion of structure-from-motion and GPS using [47] Y. Y. Yao, “Information granulation and rough set approximation,” Int. J.
constrained bundle adjustments,” IEEE Trans. Pattern Anal. Mach. Intell., Intell. Syst., vol. 16, pp. 87–104, 2001.
vol. 34, no. 12, pp. 2489–2495, Dec. 2012. [48] J. F. Yao and J. S. Yao, “Fuzzy decision making for medical diagnosis
[20] J. H. Li, W. Y. Zeng, J. J. Xie, and Q. Yin, “A new fuzzy regression based on fuzzy number and compositional rule of inference,” Fuzzy Sets
model based on least absolute deviation,” Eng. Appl. Artif. Intell., vol. 52, Syst., vol. 120, no. 2, pp. 351–366, 2001.
pp. 54–64, 2016. [49] Y. Y. Yao, “Granular computing: Basic issues and possible solutions,” in
[21] G. P. Lin, J. Y. Liang, and Y. H. Qian, “An information fusion approach Proc. 5th Joint Conf. Inf. Sci., 2002, pp. 186–189.
by combining multigranulation rough sets and evidence theory,” Inf. Sci., [50] F. S. Yu and W. Pedrycz, “The design of fuzzy information granules: Trade-
vol. 314, pp. 184–199, 2015. offs between specificity and experimental evidence,” Appl. Soft Comput.,
[22] W. L. Martinez and A. R. Martinez, Computational Statistics Handbook vol. 9, no. 1, pp. 264–273, 2011.
With MATLAB. Boca Raton, FL, USA: CRC Press, 2002. [51] L. A. Zadeh, “Fuzzy sets,” Inf. Control, vol. 8, no. 3, pp. 338–353, 1965.
[23] S. Mitra, S. K. Pal, and P. Mitra, “Data mining in soft computing frame- [52] L. A. Zadeh, “Fuzzy logic = computing with words,” IEEE Trans. Fuzzy
work: A survey,” IEEE Trans. Neural Netw., vol. 13, no. 1, pp. 3–14, Syst., vol. 4, no. 2, pp. 103–111, May 1996.
Jan. 2002. [53] L. A. Zadeh, “Toward a theory of fuzzy information granulation and its
[24] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning About Data. centrality in human reasoning and fuzzy logic,” Fuzzy Sets Syst., vol. 90,
Norwell, MA, USA: Kluwer, 1991. no. 2, pp. 111–127, 1997.
[25] Z. Pawlak and A. Skowron, “Rudiments of rough sets,” Inf. Sci., vol. 177, [54] M. X. Zhang, K. Q. Dong, and F. S. Yu, “Fuzzy granulation of interval
no. 1, pp. 3–27, 2007. numbers,” in Proc. 8th Int. Conf. Fuzzy Syst. Knowl. Discovery, 2011,
[26] W. Pedrycz, Granular Computing : An Introduction. Heidelberg, pp. 372–376.
Germany: Physica-Verlag HD, 2000, pp. 309–328.
[27] W. Pedrycz and A. Gacek, “Temporal granulation and its application to
signal analysis,” Inf. Sci., vol. 143 no. 1, pp. 47–71, 2002. Authors’ photographs and biographies not available at the time of publication.