
Information Fusion 36 (2017) 162–171

Contents lists available at ScienceDirect

Information Fusion
journal homepage: www.elsevier.com/locate/inffus

Full Length Article

Divide-and-conquer architecture based collaborative sensing for target monitoring in wireless sensor networks

Kejiang Xiao a,b, Rui Wang a,∗, Tun Fu b, Jian Li b, Pengcheng Deng b
a School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
b State Grid Information & Communication Company of Hunan Electric Power Company, Changsha, 410007, China

Article history:
Received 21 January 2016
Revised 17 November 2016
Accepted 22 November 2016
Available online 23 November 2016

Keywords:
Wireless sensor network
Divide-and-conquer
Targets surveillance
Sensing diversity
AdaBoost
Sensor selection

Abstract

Most surveillance applications in wireless sensor networks (WSNs) impose stringent accuracy requirements on target surveillance while demanding a maximized system lifetime, yet the large volume of continuous sensing data and the limited resources of WSNs pose great challenges. It is therefore necessary to select appropriate sensors that can work collaboratively in order to balance accuracy against system lifetime. However, because of sensing diversity and the scale of WSN data, most existing methods cannot select appropriate sensors to cover all critical monitoring locations in large-scale real deployments. Accordingly, an AdaBoost-based algorithm is first proposed to identify valid sensors, i.e., those that contribute to accuracy improvement; it reduces computation and communication overhead by excluding invalid sensors. The valid sensors are combined and work collaboratively, which yields better performance than other combinations. Then, exploiting the independence of the monitoring locations, a divide-and-conquer architecture based method (EasiSS) is proposed to select the most informative sensor clusters from the valid sensors for the critical monitoring locations. EasiSS achieves higher classification accuracy under different user requirements. Finally, through experiments on real data, we demonstrate that the proposed method achieves better sensor selection performance than traditional methods.

© 2016 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail address: wangrui@ustb.edu.cn (R. Wang).
http://dx.doi.org/10.1016/j.inffus.2016.11.014
1566-2535/© 2016 Elsevier B.V. All rights reserved.

1. Introduction

Many wireless sensor network (WSN) based surveillance applications [1], such as WSN-based vehicle classification [2], body-sensor-network-based health monitoring [3], and WSN-based military surveillance, require the sensing system to be as intelligent as possible so that it can accurately sense and recognize targets of interest. However, because WSNs are energy-limited and target surveillance requires continuous sensing data, it is necessary to build an energy-efficient method for WSNs that saves energy while maintaining a classification accuracy that meets user requirements. In this paper we take target classification as the running example; "target monitoring" below means target classification.

Generally, WSNs have two universal characteristics: a large number of redundant nodes in real deployments, and heterogeneous diversity. The redundant sensor nodes enable target classification to be performed collaboratively, while sensor sensing capabilities are diverse. In previous works, sensor nodes near the target yield measurements and perform classification tasks independently [4,5]. The final decision is made at a fusion node, which fuses the decisions gathered from neighboring nodes. To strike a good balance between accuracy and energy consumption, a subset of sensors should join the target classification process rather than all of them. However, most existing works select sensors based only on the distance between sensor and target, and because of sensing diversity in dynamic environments the selected sensors are sometimes not the most capable ones. Thus, such schemes cannot cluster appropriate sensors to meet user requirements and save energy.

As sensing diversity is ubiquitous in WSNs for target detection [6], sensor capability in dynamic environments is implied by sensing diversity, and we therefore try to exploit it for target classification in WSNs. Sensing diversity refers to differences in sensing ability among sensors of the same or different modalities; it is caused by variation among cheap off-the-shelf motes and by the in-situ reality of a specific deployment [7]. Many existing works [8] do not take sensing diversity into consideration and assume that all sensors have similar sensing capabilities, while other works attempt to overcome sensing diversity by correcting for the differences in readings from different sensors [6,9]. But the latter select sensor nodes according to an importance score defined as the sum of all

contributions on the node for all of its sensors and all sensitive locations in its fusion range. Consequently, the most important node may not be the most informative for all of its sensitive locations, or sensor nodes with little sensing capability may also take part in perception for some monitoring locations, which brings unnecessary overhead because new sensors must then join. Unlike previous works, we make full use of the differences in sensing capability among sensors, together with a divide-and-conquer architecture based method, to achieve the user accuracy requirements for all critical monitoring locations. There are, however, challenges in target monitoring with sensors of various abilities, summarized as follows.

• On-demand collaborative sensing. It is important to decide when individual sensors are sufficient and when additional sensors need to collaborate, because such collaboration can save valuable resources while meeting user accuracy requirements. In particular, sensors with little contribution towards accuracy improvement should be identified and excluded to reduce system overhead.
• Distributed sensor cluster selection covering all monitoring locations. Because of sensing diversity and the limited resources of WSNs, it is difficult to select the right sensor clusters to cover large numbers of monitoring locations in a large-scale deployment while minimizing energy consumption. Moreover, because environments are dynamic, a distributed scheme is needed to adapt to changes efficiently.

Using machine learning, we explore sensing diversity in heterogeneous sensor networks for vehicle classification under user-specified accuracy requirements. We show that sensing capabilities differ significantly among sensors in real deployments. When additional sensors or nodes are needed to collaborate, arbitrary sensor selection often fails to meet the user classification accuracy requirements. Besides, some sensors contribute little towards accuracy improvement, and the AdaBoost algorithm can calculate sensor weights from training; such weights reflect each sensor's contribution to the sensing capability of the existing sensor cluster. We therefore use an AdaBoost-based method to identify low-contribution sensors via their weights. Then, a divide-and-conquer architecture based collaborative sensing scheme, EasiSS, is proposed to split the set of monitoring locations into single locations and to select an appropriate sensor cluster for each location independently. A distributed online sensor selection method named CSSM is presented as the basic algorithm for a single monitoring location; it exploits sensing diversity in practical deployments and clusters sensors that collaborate to meet the user classification accuracy requirements with minimal energy usage when individual sensors are not accurate enough. Moreover, at run time CSSM can adapt to environmental changes that degrade accuracy; specifically, it adjusts the membership of the collaborating sensors or trains new classifiers adaptively. The contributions of this paper are summarized as follows.

• We propose an AdaBoost-based method (SensorBoost) that computes sensor weights to identify valid sensors and to exclude invalid sensors with little contribution towards accuracy improvement. SensorBoost thus reduces communication and computation overhead. In addition, we analyze theoretically the performance of the valid sensors when the classifiers trained on them are combined.
• We provide a divide-and-conquer architecture based collaborative sensing scheme (EasiSS) to select appropriate sensor clusters from the valid sensors for all critical locations. In particular, EasiSS splits the set of monitoring locations into single locations according to their fusion ranges. The basic algorithm (CSSM) is presented to cluster appropriate sensors for a single location; it is a distributed sensor selection method based on the sensing diversity of individual sensors and of sensor clusters. Moreover, CSSM can adapt to dynamic environments and satisfy user accuracy requirements while minimizing energy consumption.

The rest of the paper is organized as follows. Sections 2 and 3 review related work and present our motivation, respectively. Section 4 gives an overview of the design. Section 5 introduces the SensorBoost algorithm and the related theoretical analysis. Section 6 presents the divide-and-conquer based method and its basic algorithm CSSM. Sections 7 and 8 present experiments on real data and conclude the paper, respectively.

2. Related work

There are many research works on target monitoring, such as [4,10], that focus on target detection and classification. For example, in [9] the target classification result is produced by a static classifier in a centralized manner. The classifier proposed in [10] uses a simple distributed architecture: local hard decisions from each sensor node are communicated over noisy links to a manager node, which then optimally fuses this information to make the final decision. However, these works mainly focus on detecting or tracking objects and omit the details of collaborative sensing scheme design.

Most existing works use distance-based collaborative sensing schemes to cluster appropriate sensors for target monitoring. For example, [11] proposes a distance-based decision fusion scheme that exploits the relationships among sensor-to-target distance, classification rate, and signal-to-noise ratio. The classification of moving ground vehicles is addressed in [12], which presents a distributed framework to classify vehicles based on FFT (fast Fourier transform) and PSD (power spectral density) features and proposes three distributed algorithms based on the k-nearest-neighbor (k-NN) method. The authors of [13] propose a binary classification tree based framework for distributed target classification in multimedia sensor networks, taking advantage of both the efficient computation of the classification tree and the high classification accuracy of SVMs. These works evaluate node importance based on the distance between sensor and target and, because of sensing diversity, sometimes cannot cluster the right sensors.

Some other works attempt to handle sensing diversity by accounting for sensing differences among sensors, but they either cannot guarantee user accuracy requirements [14] or let less informative sensors participate in perception, bringing unnecessary overhead. Besides, some existing works use sensing diversity based collaborative sensing schemes to cover critical monitoring locations. For example, sensing diversity is used in [15] to cluster sensors that provide sensing confidence, but the sensor selection there is centralized. A collaborative sensor selection approach [16] trains a composite classifier for shared classification of human activities, but it does not fully explore the effect of sensing diversity on clustering appropriate sensors for target monitoring. The most closely related work is the sensor cluster selection method for target monitoring proposed in [6]. It selects the most important sensor node according to sensing diversity, where importance is defined as the sum of the contributions of all of the node's sensors over all of its sensitive locations. However, the most important node may not be the most important for every one of its sensitive locations; when it contributes little to some of them, more nodes must take part in detection as the user classification requirement rises, which typically costs more sensing and communication energy.

Fig. 1. CDF of classification accuracy of individual sensors and sensor clusters.

Fig. 2. Relationship between sensor cluster accuracy and individual sensor accuracy.

3. Motivation

In this section, we demonstrate the need for a new scheme to meet user classification accuracy requirements and explore how to make full use of sensing diversity to select sensors on demand in real deployments. We use the Wisconsin SensIT vehicle trace data [2] and the AdaBoost algorithm to perform classification. In this paper, the trace data comprises 8 vehicle passes: aav3-aav6 and dw3-dw6. aav3 and dw3 are used for training, and aav4-aav6 and dw4-dw6 are used for testing. We plot the CDF of classification accuracy for all acoustic and seismic sensors in Fig. 1, which exhibits the sensing capabilities of the sensors within a specific deployment.

Fig. 1 shows that sensors of the same modality experience significant differences in classification performance. For example, 10% of acoustic sensors exhibit an accuracy below 50% and 20% of acoustic sensors exhibit an accuracy above 80%, with minimum and maximum accuracies of 40% and 83%, respectively. This diversity in sensing capability can be linked to the quality of the sensor itself and to dynamic environments; because of sensing diversity, even a single sensor may perform differently in different environments. Fig. 1 also illustrates the differences between sensing modalities: 50% of seismic sensors exhibit an accuracy below 50% and only 1% of seismic sensors reach an accuracy of 80%, with minimum and maximum accuracies of 20% and 80%, respectively. Conversely, only 10% of acoustic sensors exhibit an accuracy below 50%. Many existing methods [17] rely on per-modality sensing models, which makes sensor collaboration difficult. Thus, significant sensing diversity exists among sensors of different modalities as well as of the same modality, and it must be addressed.

Using the vehicle trace data mentioned above and the vehicle location ground truth [2], we form random clusters of size 1 to 6. We perform classification with the AdaBoost algorithm and use the ground truth for each individual sensor or cluster reading to determine classification accuracy. For each generated cluster, we compare the accuracy of its best individual member sensor, treated as a singleton cluster, with the accuracy of the generated cluster, and plot the results in Fig. 2, where the blue points relate best-individual-sensor accuracy to sensor cluster accuracy. Fig. 2 shows that the best individual sensor's accuracy is below the cluster's accuracy in most cases. Thus, when an individual sensor cannot meet the user requirements, other sensors can be selected to collaborate with it.

From the analysis above we obtain the following observations: (a) sensing abilities differ greatly among sensors of the same modality as well as of different modalities in real deployment environments; (b) some sensors contribute little to target classification, bringing unnecessary overhead without improving accuracy; (c) the best individual sensor's accuracy is below the sensor cluster's accuracy in most situations, so collaborative sensing is needed because a single sensor cannot meet the user requirements in most cases. Based on these conclusions, we try to exclude invalid sensors to reduce system overhead and to use sensing diversity to cluster appropriate sensors.

Fig. 3. Overview design.

4. Overview design

In this paper, we present a divide-and-conquer based collaborative sensing scheme for target monitoring named EasiSS. It explores and exploits sensing diversity and the divide-and-conquer method to cluster the right sensor clusters for all monitoring locations in a distributed way, as shown in Fig. 3. We first present the SensorBoost algorithm to identify and exclude sensors with little information, which we call invalid sensors. Then, in order to cover all critical monitoring locations, we propose a divide-and-conquer based collaborative sensing scheme that selects the most informative sensor clusters from the valid sensors. Furthermore, based on sensing diversity, we design a distributed collaborative sensing scheme, CSSM, as the basic algorithm that selects sensors for each single monitoring location.

Valid sensor identification. We propose an AdaBoost algorithm based approach named SensorBoost to calculate sensor weights from training. Based on the sensor weights, the valid sensors can be identified. The invalid sensors cannot improve accuracy and bring unnecessary overhead, so they should be excluded. Besides, we analyze theoretically the performance of the valid sensors when the classifiers trained on them are combined.

Collaborative sensor cluster selection. The selection of sensor clusters for all critical monitoring locations is solved by a divide-and-conquer architecture based method that selects a sensor cluster for each single monitoring location independently.
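The cluster-versus-individual comparison behind Fig. 2 can be sketched with synthetic data. This is illustrative only: it replaces the SensIT traces (not included here) with simulated binary decisions and the AdaBoost fusion with a plain majority vote, and the per-sensor accuracies 0.65 to 0.75 are hypothetical stand-ins for the diverse capabilities shown in Fig. 1.

```python
import random

random.seed(0)

def sensor_readings(truth, accuracy, n):
    """Simulate n binary decisions from one sensor that is correct
    with probability `accuracy` (synthetic stand-in for real traces)."""
    return [t if random.random() < accuracy else 1 - t for t in truth]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def majority_vote(rows):
    """Fuse per-sensor decisions by unweighted majority vote."""
    return [1 if sum(col) * 2 > len(rows) else 0 for col in zip(*rows)]

truth = [random.randint(0, 1) for _ in range(2000)]
# Three sensors with diverse sensing capability (cf. Fig. 1).
sensors = [sensor_readings(truth, a, len(truth)) for a in (0.65, 0.70, 0.75)]

per_sensor = [accuracy(s, truth) for s in sensors]
cluster = accuracy(majority_vote(sensors), truth)

print("individual accuracies:", [round(a, 3) for a in per_sensor])
print("cluster (majority vote):", round(cluster, 3))
```

With independent errors, the fused accuracy typically exceeds the best individual sensor, which is the behavior the blue points in Fig. 2 exhibit.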

The basic algorithm CSSM is designed to cluster appropriate sensors for a single monitoring location and works as follows. CSSM first evaluates the sensing capability of the sensors on each candidate node. Then it quantifies each node's capability from the sensing capabilities of its individual sensors. Finally, based on node capability, CSSM uses a back-off timer to run the collaborative sensing scheme in a distributed way and selects the most informative sensors on these nodes. Moreover, when a sensor cluster fails to meet the user requirements, it can be adaptively updated according to the user accuracy requirements.

5. Valid sensor identification

In this section, an AdaBoost algorithm based approach (SensorBoost) is proposed to identify invalid sensors, i.e., those with little contribution to classification accuracy. We first introduce the identification process and then give a theoretical analysis.

5.1. Identifying valid sensors

The AdaBoost algorithm [18] can use a weak learning algorithm to train on sensor data and generate simple local classifiers. In each round it selects the local classifier with the lowest error rate and assigns it a weight according to that error rate; the selected classifiers are combined to form a strong classifier. Compared with other machine learning methods, such as the Bayesian classifier and the Chair-Varshney fusion rule, AdaBoost better reflects the collaborative sensing performance of a sensor cluster because it considers the weights of the classifiers and of the training samples at the same time. Besides, the weak classifiers generated by AdaBoost can run on cheap off-the-shelf motes thanks to their low complexity. Thus, AdaBoost can calculate sensor weights from training, and these weights reflect each sensor's contribution to the sensing capability of the existing sensor cluster in a wireless sensor network. On the other hand, the classification accuracy of some classifiers trained on real sensor data is lower than 50%, meaning they perform worse than a random decision; such sensors contribute little to target classification and cannot improve classification accuracy. We consider these sensors invalid, defining an invalid sensor as one whose classifier accuracy is below 50%, and we use the AdaBoost-based method (SensorBoost) to decide the validity of sensors.

In the SensorBoost algorithm (Algorithm 1), the weak learner on the t-th iteration generates a hypothesis h_{t,s}, and decision stumps are used as the base classifiers to identify valid sensors. A decision stump is built for each dimension of the feature vector, where a dimension corresponds to a sensor. After training the weak learner of the s-th dimension on the t-th iteration, the error of h_{t,s} is calculated as

    ε_{t,s} = Σ_i w_{t,i} |h_{t,s}(x_{i,s}) − y_i|                                  (1)

where w_{t,i} is the weight of sample x_{i,s} in the t-th iteration.

The decision stump with the minimum error over all dimensions is taken as the t-th weak learner, and the dimension index that attains the minimum error is recorded as s_t in order to identify the valid sensors, as shown in Algorithm 1. Next, the weight of classifier h_t on the t-th iteration is computed as

    α_t = 0.5 × log((1 − ε_t)/ε_t)                                                  (2)

Then the training data weights w_{t+1,i} are updated as

    w_{t+1,i} = w_{t,i} × (ε_t/(1 − ε_t))^{1−e_i}                                   (3)

The weight of each sensor can then be computed as

    ν_s = Σ_{t=1}^{T} α_t × Ψ_{s_t,s}                                               (4)

where Ψ_{s_t,s} is the Kronecker delta, which has the value 1 if s_t is s and 0 otherwise. If ν_s > 0, sensor s is a valid sensor, because by Algorithm 1 its classification accuracy is higher than 50%.

Algorithm 1 SensorBoost algorithm.
Input:
    Sequence of N labeled examples (x_1, y_1), ..., (x_N, y_N)
    Distribution D over the examples
    Weak learning algorithm WeakLearn
    Number of learning iterations T
Output:
    Valid sensor set C_s
 1: Initialize ν_s and the sensor set C_s: ν_s = 0, C_s = ∅, s = 1, 2, 3, ..., N
 2: for t = 1, 2, 3, ..., T do
 3:     Normalize the weights w_{t,i} = w_{t,i} / Σ_{i=1}^{n} w_{t,i}, so that w_t is a probability distribution.
 4:     for each sensor s do
 5:         Train a classifier h_{t,s} that is restricted to use the feature from sensor s.
 6:         Compute the classification error of sensor s at iteration t: ε_{t,s} = Σ_i w_{t,i} |h_{t,s}(x_{i,s}) − y_i|.
 7:     end for
 8:     Compute the lowest error ε_t = min_s ε_{t,s} and choose the classifier with the lowest error, h_t = arg min_s ε_{t,s}.
 9:     Update the weights w_{t+1,i} = w_{t,i} × β_t^{1−e_i}, where e_i = 0 if example x_i is classified correctly and e_i = 1 otherwise, and β_t = ε_t/(1 − ε_t).
10:     Compute the weight of the classifier h_t: α_t = log(1/β_t).
11: end for
12: for each sensor s do
13:     Compute ν_s = Σ_{t=1}^{T} α_t Ψ_{s_t,s}.
14:     if ν_s > 0 then
15:         s is a valid sensor and is added to the set: C_s = C_s ∪ {s}.
16:     end if
17: end for
18: return C_s

5.2. SensorBoost algorithm

As shown in Algorithm 1, let there be N sensor nodes to classify the monitored target (e.g., vehicles). Each training sample consists of one or more dimensions, and each dimension is associated with one sensor's data. Local classifiers are trained from the data of the different sensors, and in each iteration the local classifier with the lowest error is selected. The weights of the training samples are updated according to the classification error rate, and a higher weight is assigned to the best weak classifier. In this process, we construct the correspondence between local classifiers and sensors, and the valid sensors are selected based on the weights learned from the sensor data. All the local classifiers selected over the rounds and trained on the valid sensors together form the strongest final classifier; by Theorem 1, combining only a subset of these local classifiers undermines the final classifier's performance. Thus, the valid sensors identified by SensorBoost can work together effectively in a collaborative way.

Definition 1 (Final classifier χ_f). Suppose H = {h_1(x), h_2(x), h_3(x), ..., h_T(x)}, where each h_i(x) (1 ≤ i ≤ T) is generated by the SensorBoost algorithm, and let χ ⊆ H be a randomly selected subset of H. The classifiers in χ are combined to assemble the final strong classifier χ_f, computed as

    χ_f(x) = sign( Σ_{h_t ∈ χ} α_t h_t(x) − (1/2) Σ_{h_t ∈ χ} α_t )                 (5)
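The boosting loop of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the authors' implementation: brute-force threshold stumps play the role of WeakLearn, the toy dataset (one informative sensor, one pure-noise sensor) and its parameters are invented, and the weight update β_t^{1−e_i} and the sensor weight ν_s follow steps 3 to 13 of the pseudocode.

```python
import math, random

random.seed(1)

def best_stump(X, y, w, s):
    """Best threshold/polarity decision stump restricted to sensor
    (feature) s; returns (weighted error, predict_fn). Labels are 0/1,
    so the error matches eps_{t,s} = sum_i w_i |h(x_i) - y_i|."""
    best = (float("inf"), None)
    for thr in sorted({row[s] for row in X}):
        for pol in (0, 1):  # pol: label predicted when x[s] >= thr
            pred = [pol if row[s] >= thr else 1 - pol for row in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if err < best[0]:
                best = (err, (thr, pol))
    thr, pol = best[1]
    return best[0], lambda row, t=thr, p=pol, s=s: p if row[s] >= t else 1 - p

def sensorboost(X, y, T=10):
    """Per round, keep the single-sensor stump with the lowest weighted
    error; a sensor's weight nu_s accumulates alpha_t over the rounds
    in which it was chosen (Algorithm 1, steps 2-13)."""
    n, n_sensors = len(X), len(X[0])
    w = [1.0 / n] * n
    nu = [0.0] * n_sensors
    ensemble = []
    for _ in range(T):
        total = sum(w)
        w = [wi / total for wi in w]                          # step 3
        per_sensor = [best_stump(X, y, w, s) for s in range(n_sensors)]
        errs = [e for e, _ in per_sensor]
        s_t = min(range(n_sensors), key=errs.__getitem__)     # step 8
        eps, h = per_sensor[s_t]
        eps = min(max(eps, 1e-9), 1 - 1e-9)                   # avoid log(0)
        beta = eps / (1 - eps)
        alpha = math.log(1 / beta)                            # step 10
        w = [wi * (beta if h(row) == yi else 1.0)             # step 9
             for wi, row, yi in zip(w, X, y)]
        nu[s_t] += alpha                                      # step 13
        ensemble.append((alpha, h))
    valid = [s for s in range(n_sensors) if nu[s] > 0]        # step 14
    return valid, nu, ensemble

# Toy data: sensor 0 is informative, sensor 1 is pure noise.
y = [random.randint(0, 1) for _ in range(200)]
X = [[yi + random.gauss(0, 0.4), random.gauss(0, 1)] for yi in y]
valid, nu, _ = sensorboost(X, y, T=10)
print("valid sensors:", valid, "weights:", [round(v, 2) for v in nu])
```

In runs like this the informative sensor accumulates a much larger ν than the noise sensor, which is exactly the signal the validity test ν_s > 0 (and a comparison of weights) exploits.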

Theorem 1. Suppose the weak learning algorithm WeakLearn is given and SensorBoost generates hypotheses h_1(x), h_2(x), h_3(x), ..., h_T(x) with errors ε_1, ε_2, ε_3, ..., ε_T, as defined in Algorithm 1. The upper bound of the error ε_f of the final classifier χ_f is

    ε_f ≤ ( Π_{t=1}^{T} 2(1 − ε_t) ) × ( Π_{h_t ∈ χ} √(ε_t/(1 − ε_t)) )             (6)

The theorem can be proved straightforwardly following [18].

According to the AdaBoost analysis, the upper bound of the error ε = Pr_{i∼w}[h(x_i) ≠ y_i] of the final classifier generated by SensorBoost is Π_{t=1}^{T} 2√(ε_t(1 − ε_t)) [18]. By Theorem 1, however, the upper bound of the error of χ_f(x) is ( Π_{t=1}^{T} 2(1 − ε_t) ) × ( Π_{h_t ∈ χ} √(ε_t/(1 − ε_t)) ), which is far larger than Π_{t=1}^{T} 2√(ε_t(1 − ε_t)) when χ is a proper subset of H; the bound for χ_f(x) is too slack. Thus, the final classifier formed from all the local classifiers trained on the valid sensors over the rounds performs better than a final classifier formed from only a subset of these local classifiers. Besides, in this paper the algorithm complexity is mainly reflected by the convergence speed. The convergence speed of the error rate during training is (T − n + 1)/T if ε_1 ≥ ε_n, and 0 if ε_1 < ε_n, where T is the number of learning iterations, n is the round at which the error ratio changes, ε_1 is the error ratio of the first round, and ε_n is the error ratio of the n-th round.

6. Collaborative sensor cluster selection

The valid sensors are identified by the SensorBoost method of Section 5. In this section, given the limited resources of sensor networks, we cluster the right sensors from the valid sensors for all critical monitoring locations so as to meet the user requirements while minimizing energy usage. We first formalize the problem and then describe the proposed scheme.

6.1. Problem formulation

We first define a set of nodes τ = {n_1, n_2, n_3, ..., n_n}. Each node n_i ∈ τ contains λ_i sensors, forming the set ϑ of all sensors, ϑ = {s_1^1, s_1^2, s_1^3, ..., s_1^{λ_1}, s_2^1, s_2^2, s_2^3, ..., s_2^{λ_2}, ..., s_n^1, s_n^2, s_n^3, ..., s_n^{λ_n}}, where s_i^m is the m-th sensor on node n_i. Because sensor networks are resource-constrained, energy saving is one of the most important concerns. Our goal is therefore to select the optimal group set of sensor clusters μ∗ that covers all critical monitoring locations and obtains a precise classification result for each monitoring location while minimizing energy usage. Motivated by this, we study the selection of sensor clusters for all critical monitoring locations under a minimum-cost criterion and formulate the optimal sensor cluster selection problem as follows: choose a set of optimal sensor clusters μ∗ that minimizes the energy usage c(μ∗) subject to Φ(μ_k) ≥ UR, k = 1, ..., T, where μ_k is the sensor cluster for monitoring location L_k and UR is a predefined accuracy threshold. The problem can be expressed as

    μ∗ = arg min_{μ_k ∈ μ∗, Φ(μ_k) ≥ UR} c(μ∗)                                      (7)

Here, Φ(μ_k) measures how much the sensor cluster μ_k contributes towards meeting the user classification accuracy requirement for the single monitoring location L_k; we quantify the utility of μ_k by its accuracy, Φ(μ_k) = 1 − δ(μ_k), where δ(μ_k) is the false classification rate. Besides, c(μ∗) is the total energy consumption over all monitoring locations; because only decisions are transmitted among the nodes of each single monitoring location, the communication energy is small. We measure energy usage as active node sampling time plus transmission energy, as defined in [19].

6.2. Divide-and-conquer based sensor cluster selection

In order to cover all critical locations in large-scale deployments and save energy, we need to cluster the right sensors within the fusion range of the corresponding location. Following [6], we cluster sensors within one communication hop of each monitoring location to save bandwidth and energy. Sensor cluster selection for each location can thus be conducted independently, and the final result is the combination of the sensor clusters of all locations, as shown in Algorithm 2, where |L| is the size of the set of monitoring locations and L_0 is the minimum size below which the set is no longer divided.

Algorithm 2 Divide-and-conquer based sensor cluster selection scheme: Divide-and-conquer(P).
1: if |L| ≤ L_0 then
2:     Call Algorithm 3 to solve the problem P and return its result.
3: end if
4: Divide the problem into smaller subproblems P_1, P_2, P_3, ..., P_i, ..., P_T
5: for i = 1, 2, 3, ..., T do
6:     μ_i = Divide-and-conquer(P_i)
7: end for
8: return merge(μ_1, ..., μ_k, ..., μ_T)

The sensor cluster selection for all locations can therefore be solved by the divide-and-conquer method. Specifically, we divide the sensor cluster selection for all locations into subproblems, each selecting appropriate sensors for a single location within its fusion range. The subproblem can be formulated as

    μ_k = arg min_{μ_k ∈ M_k, Φ(μ_k) ≥ UR} c(μ_k)                                   (8)

where M_k is the set of possible sensor clusters within the fusion range of monitoring location L_k. The subproblem is solved by CSSM, as shown in Algorithm 3.

For example, as shown in Fig. 4, in order to save energy we need to select appropriate sensor clusters to cover the three locations L_1, L_2 and L_3. The problem can be divided into three subproblems according to the monitoring locations, and sensors can be selected within each location's fusion range independently. We can therefore select the most informative sensors for each location and reduce the communication cost of new sensors joining an existing cluster, whereas the method proposed in [6] cannot select the most important sensors for some locations, so new sensors must join as the user accuracy requirement increases. In this example, the contributions of the acoustic and seismic sensors of a node towards a monitoring location are marked as {·,·}; for instance, {0.6, 0.8} on node n_1 means that the contributions of the acoustic and seismic sensors of n_1 towards location L_1 are 0.6 and 0.8, respectively, so the capability of n_1 for L_1 is 1.4. Because the capability of n_3 (2.1 in total) is larger than that of the other nodes, n_3 would be selected as the fusion node of L_1, L_2 and L_3 simultaneously if the method of [6] were used; but n_3's capabilities for L_1 and L_2 are smaller than those of n_1 and n_2, respectively. With our method, because nodes n_1, n_2 and n_3 have the largest capabilities for locations L_1, L_2 and L_3 within their respective fusion ranges, the three nodes are selected as the fusion nodes of L_1, L_2 and L_3, respectively. Compared with our method, therefore, the method of [6] cannot select the most informative sensor node for every monitoring location.

Fig. 4. An example: divide-and-conquer based process to cover three locations.

Next, we introduce the basic algorithm, named CSSM, which selects the right sensors for each single monitoring location in a distributed way while minimizing the number of active sensor nodes to reduce energy consumption.

6.3. Distributed collaborative sensor selection

In this section, we introduce the basic algorithm CSSM of our divide-and-conquer based method EasiSS. The CSSM algorithm selects the most informative sensor cluster for each single monitoring location within its fusion range while minimizing energy consumption. According to the analysis in Section 3, a single sensor residing on a single node may not be enough to meet the user classification accuracy requirement in most cases. Thus, in this section, we propose a distributed method, CSSM, to solve the sensor selection problem defined in Section 6.2. Based on node sensing capabilities, CSSM selects sensors to meet user classification accuracy requirements. In the process of sensor selection for each location, CSSM clusters sensors on nodes based on their sensing capabilities and clusters only the valid sensors. The sensors on the nodes are added to the cluster in decreasing order of the learned sensing capability of the nodes. Thus, nodes first quantify their sensing capability and then compete to declare themselves as the fusion node. The cluster is trained using the AdaBoost algorithm and the observation history of all sensors in the cluster. Note that the SensorBoost method is used to identify valid sensors at design time, while during deployment the sensor cluster with maximum capability is selected from the valid sensors. As for the ground truth, in the process of event training and cluster formation, each active node maintains a history of recent observations for all of its sensors, and each node also maintains an application-level feedback mechanism, such as a vehicle tracking application, to provide event ground truth in a manner similar to [7], which proposed the PSAM (Physical Sensing Area Modeling) method. PSAM can identify accurate non-parametric sensing patterns (areas) that are close to the on-the-ground truth. This is achieved by capturing the time-space relationships of controlled or monitored events and matching event positions with the event detection results of individual sensor nodes. We now describe CSSM as shown in Algorithm 3 and then describe how the sensor cluster can be updated adaptively if it fails to meet user requirements.

Algorithm 3 Collaborative Sensor Selection (CSSM)
Input: All sensors ϑ on nodes τ in the fusion range of monitoring location Lk
Output: A sensor cluster μk
1: for i = 1, 2, 3, . . . , n do
2:   Learn the sensing capability of the sensors on node ni according to Eq. (9) using the history of observations on ni.
3:   Compute the node capability ϕ(ni) using Eq. (10), and set a back-off timer according to the node capability.
4: end for
5: if the back-off timer of node ni fires and no cluster exists then
6:   ni creates sensor cluster μk.
7:   Set ni as the fusion node and compute 1 − ε(μk).
8:   if 1 − ε(μk) ≥ UR then
9:     The user requirement is met and the cluster μk is formed.
10:    The sensor selection process ends.
11:  end if
12: else
13:  if 1 − ε(μk) < UR then
14:    Update ϕ(ni) using Eq. (12) and set the timer using ϕ(ni).
15:    The sensors on node ni compete to join μk.
16:  end if
17: end if
18: return μk

a) Qualifying sensing capability
We use the AdaBoost algorithm to learn the sensing capability of each node ni by training its sensors s_i^m. In particular, each node ni determines how much each of its sensors can contribute towards meeting the user accuracy requirement UR, which is defined as Φ_m(s_i^m). Φ_m(s_i^m) is a real number between 0 and 1. Values closer to 0 indicate that the sensor s_i^m contributes very little towards meeting user requirements, while the maximum possible value 1 − UR indicates that user requirements are met. Φ_m(s_i^m) is defined in (9).

Φ_m(s_i^m) = 1 − ε_m(s_i^m)    (9)

where ε_m(s_i^m) is the false classification rate of sensor s_i^m.
Then, a node quantifies its sensing capability by calculating the sum of the contributions of all of its sensors. The more capable a node, the more valuable it is towards meeting user requirements; more capable nodes are more likely to have very capable sensors. Each node sets a back-off timer based on its capability, where greater capability values result in shorter timers. The capability ϕ(ni) is computed as follows.

ϕ(ni) = Σ_{m=1}^{S_i} Φ_m(s_i^m)    (10)

where S_i is the number of sensors on node ni.
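As a minimal sketch of Eqs. (9) and (10), assuming the per-sensor false classification rates have already been estimated from AdaBoost training: each sensor's contribution is one minus its error rate, and the node capability is their sum. The inverse timer mapping below is a hypothetical choice; the text only requires that greater capability yields a shorter back-off timer.

```python
# Sketch of Eqs. (9) and (10). Error rates are assumed to come from
# training on the node's observation history; the concrete timer
# function is a hypothetical illustration.
def sensor_contribution(error_rate):          # Eq. (9): Phi = 1 - eps
    return 1.0 - error_rate

def node_capability(error_rates):             # Eq. (10): sum over sensors
    return sum(sensor_contribution(e) for e in error_rates)

def backoff_timer(capability, base=1.0):
    return base / capability                  # more capable -> fires sooner

# e.g. an acoustic sensor with error 0.4 and a seismic sensor with 0.2
cap = node_capability([0.4, 0.2])             # contributions 0.6 + 0.8
```

This matches the earlier worked example: contributions {0.6, 0.8} give a node capability of 1.4.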


b) Capability-based sensor selection
When a timer first fires on node ni, node ni becomes the fusion node, creating a sensor cluster μk and adding one or more of its sensors to μk. Otherwise, node ni declares itself a member node and adds its sensors to the existing cluster μk containing other member sensors. In both cases, the declaring node ni adds only its valid sensors, identified by the SensorBoost algorithm. The cluster μk is trained using the AdaBoost algorithm and the observation history of all sensors in μk. If the declared sensor cluster μk does not meet the user accuracy requirement UR, other candidate nodes nj attempt to add their sensors to the cluster μk via competition. The competition process is described as follows.
Node nj first determines its sensors' contributions towards meeting the user requirement if it adds its sensors to μk. For each of its valid sensors s_j^m, node nj computes the contribution Φ(μk, s_j^m) towards μk as in (11).

Φ(μk, s_j^m) = ε(μk) − ε(μk ∪ s_j^m)    (11)

Then the capability of nj when it attempts to add its sensors to μk is computed as follows.

ϕ(nj) = Σ_{m=1}^{S_j} Φ(μk, s_j^m)    (12)

where S_j is the number of sensors on node nj. Finally, node nj sets a back-off timer based on (12) to compete with other nodes to join the cluster μk. The competition process described above repeats in cycles until the formed cluster meets the user accuracy requirement.
After the sensors have been selected to join in target classification, member sensor nodes first make local decisions and transmit them at each sample interval to the fusion node. The fusion node makes a final decision at each sample interval using the weighted majority voting method. Environmental changes might decrease the classification accuracy of cluster μk and run the risk of not meeting user requirements. Such environmental changes often include changes in background noise or in the properties of the target. In these cases, the existing sensor cluster is dissolved and a new, more accurate sensor cluster is formed. Thus, EasiSS can adapt to environmental changes via this strategy. The fusion node maintains a moving accuracy using the observation history and ground truth, so the sensor cluster can detect an increase of the classification error rate ε(μk). If 1 − ε(μk) < UR, the fusion node broadcasts an update message. When the current members of μk receive the update message, they stop making local decisions and stop sending these decisions to the fusion node. All nodes then compete to form a new cluster as in Algorithm 3.
Next, we give an illustration of our EasiSS scheme. As shown in Fig. 4(a), we explore the sensing diversity of all the sensors in the fusion range of each location. In location L1, node n1 is the most important node and will be selected as the fusion node; likewise, the fusion nodes for L2 and L3 are n2 and n3 respectively. Because n3 is in the fusion range of L1, L2 and L3 simultaneously, if n3 were the most important node for all three locations at the same time, it could be shared as their common fusion node. Under this circumstance, the energy cost of our method is the same as WolfPack [6]. The sensors on the fusion node will be clustered to monitor each location. If the existing sensor cluster at location L1 does not meet the accuracy requirement, new sensors should be added to the existing cluster. As shown in Fig. 4(b), nodes n6 and n7 are in the fusion range of L1, and they compete to join the existing cluster. The competing process is as follows. First, the fusion node broadcasts its history information to all its neighbors. Then, the total contribution increment (node capability) of the sensors on nodes n6 and n7 towards the existing sensor cluster, were they to join it, is computed according to Eq. (12). The contribution increments of nodes n6 and n7 are 0.12 and 0.09 respectively, as shown in Fig. 4(b). Finally, each node's back-off timer is set according to its contribution increment, where greater increments result in shorter timers. So n6 wins the competition and joins the existing cluster when its timer fires.

Fig. 5. Nodes deployment [2] and the marked monitoring locations.

7. Evaluation

To evaluate the performance of our proposed method, we have conducted trace-data-driven simulation experiments. Next, we first describe the experimental methodology and settings, and then we introduce the corresponding results thoroughly.

7.1. Experiments methodology and settings

We utilize the Wisconsin SensIT vehicle trace data [2] to perform classification; the trace data of each sensor is provided at a sampling rate of 4960 Hz. There are 23 nodes deployed along roads, each node containing an acoustic, a seismic and an infrared sensor. We perform vehicle classification at the specific locations marked in Fig. 5. We assume each node is a low-power mote-class device equipped with an 802.15.4 radio, such as the Telos mote. Besides, since we focus on sensing accuracy, we assume communication is reliable.
Although a vehicle passing through the deployment might take several minutes, the event series will be much shorter, because it only spans the short period of time when the vehicle is close to the node. We select 30 monitoring locations along the road throughout the deployment, and we use only acoustic and seismic sensor trace data in this paper. The fusion range of each location is set to 100 m. Based on the energy level of the acoustic signal, one 50-dimensional FFT feature is abstracted from every 0.75 s of event time series [2] when the vehicle occurs. In this experiment, we use two types of vehicles: aav and dw. We classify sensor and sensor cluster readings into the type of the vehicle. The trace data of vehicle passes aav3 and dw3 are used for initial training, and vehicle passes aav4, dw4; aav5, dw5; aav6, dw6; aav7, dw7; aav8, dw8; aav9, dw9 are used for runtime classification. The vehicle path deviates slightly with each pass, creating environmental dynamics during runtime. According to [18], AdaBoost is one of the first practical boosting algorithms and can combine weak classifiers to form a strong classifier, while the weak classifiers can run on cheap off-the-shelf motes with lower performance. By collaborating sensors wisely, AdaBoost can achieve high performance and low complexity. Thus, we use AdaBoost to learn the sensing capability of individual sensors and sensor clusters.
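The competition step of Eqs. (11) and (12) can be sketched as follows, assuming the error rate of the current cluster and of the cluster with each candidate sensor added are known. The numeric error rates below are hypothetical, chosen only so that the increments match the 0.12 and 0.09 of the Fig. 4(b) example.

```python
# Sketch of the competition via contribution increments. Eq. (11): the
# contribution of adding sensor s is the resulting drop in the cluster's
# false classification rate. Eq. (12): a candidate node's capability is
# the sum over its valid sensors. Error rates are hypothetical.
def contribution(err_cluster, err_cluster_with_s):        # Eq. (11)
    return err_cluster - err_cluster_with_s

def candidate_capability(err_cluster, err_with_each):     # Eq. (12)
    return sum(contribution(err_cluster, e) for e in err_with_each)

err_mu = 0.20                                  # current cluster error rate
inc_n6 = candidate_capability(err_mu, [0.13, 0.15])   # drops 0.07 + 0.05
inc_n7 = candidate_capability(err_mu, [0.15, 0.16])   # drops 0.05 + 0.04
winner = "n6" if inc_n6 > inc_n7 else "n7"     # larger increment, shorter timer
```

The node with the larger total increment sets the shorter back-off timer and therefore joins the cluster first.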
In order to verify the effectiveness of the EasiSS scheme, we selected three contrasting methods: WolfPack [6], BinTree [13] and the AllSen method. WolfPack selects the most informative sensors on a node according to sensing diversity and all its sensitive locations share the node, while minimizing the number of active sensor nodes. The differences between our work EasiSS and WolfPack include three aspects. (1) Different valid sensor identification method: our paper utilizes an AdaBoost based method to identify valid sensors, while the previous work uses a simple threshold method; thus, our method reflects the sensors' real capability better than the previous work. (2) Different capability metric: the capability in our paper is defined as the sum of all contributions on the node within the fusion range of the corresponding single location, while the capability in the previous work is defined as the sum of all contributions on the node for all of its sensors and sensitive locations. (3) Different problem: our paper selects optimal sensor clusters for each single location instead of for all sensitive locations in the whole network. In particular, our paper aims to maximize the cluster capabilities per location instead of the global maximum of the previous work. The BinTree method selects sensor nodes to classify targets based on the distance between the sensors and the target in a distributed way, while minimizing energy consumption. Compared with BinTree, our method EasiSS takes the differences in sensor and cluster sensing capability into consideration, which can reflect the sensors' real contributions towards each monitoring location. The AllSen method makes all the sensors take part in classification and keeps all nodes always active; other aspects of AllSen are the same as our method.

Fig. 6. Numbers of valid sensors and corresponding sensor cluster accuracy at different iterations.
Fig. 7. Fusion node and the corresponding acoustic and seismic sensors' average contribution towards each location.

To evaluate the performance of our method, we define several metrics as follows. (a) Energy usage: we measure energy usage as active node sampling time and transmission energy as defined in [19]. When the Telos node is active, the power of the node is 111.226 mW according to [20]. (b) Classification accuracy: the ratio of correctly classified samples to total samples. (c) User requirements meeting ratio (URuMet): calculated by URuMet = YuMet/TuMet, where YuMet is the number of cases meeting the requirements and TuMet is the total number of cases. One case is a 50-time-interval data segment when a target occurs; one time interval is 0.75 s.
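The URuMet metric defined above can be computed directly from per-case outcomes; the boolean case outcomes below are hypothetical.

```python
# Sketch of URuMet = YuMet / TuMet: the fraction of test cases (each a
# 50-interval segment, 0.75 s per interval) in which the selected
# cluster met the user accuracy requirement. Outcomes are hypothetical.
def urumet(case_met_requirement):
    y_met = sum(1 for met in case_met_requirement if met)  # YuMet
    t_met = len(case_met_requirement)                      # TuMet
    return y_met / t_met

cases = [True] * 29 + [False]   # e.g. 29 of 30 cases met the requirement
ratio = urumet(cases)
```

For 29 of 30 cases met, the ratio is about 96.7%, the same granularity as the values reported in Table 1.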

7.2. Experiment results and discussion

In this section, we first discuss valid sensor identification according to the SensorBoost algorithm and the corresponding valid sensor cluster accuracy. Secondly, we discuss the fusion node's average capability for each monitoring location, and we analyze the sensor clusters' average accuracy with different numbers of sensor nodes. Thirdly, our method's classification accuracy and total energy usage are compared with the WolfPack, BinTree and AllSen methods. What is more, the URuMet of EasiSS is compared with WolfPack, BinTree and AllSen under different user accuracy requirements (UR). Finally, we compare EasiSS's classification accuracy with WolfPack's and BinTree's in dynamic environments and analyze the number of updates when the sensor cluster detects a decrease of classification accuracy in dynamic environments.
(1) Valid sensor identification
We identify valid sensors by the SensorBoost algorithm proposed in Section 4 and set the iterations T = 1 to 150. Fig. 6 depicts the number of valid sensors selected from all the sensors and the corresponding sensor cluster classification accuracy at each iteration. As shown in Fig. 6, we use 150 iterations to obtain maximum classification accuracy, and SensorBoost can exclude 20∼30% of the sensors as invalid. In our experiment, there are 9 candidate sensors on average for all the monitoring locations, while only 7 sensors are valid on average. During runtime, any invalid sensors are disabled to save energy and reduce communication costs.
(2) Fusion node capability and sensor cluster accuracy
In this section, we compare EasiSS with the WolfPack and BinTree methods in terms of sensor node capability and the sensor clusters' average accuracy for all the monitoring locations.

Fig. 8. Sensor cluster average accuracy with different numbers of nodes.

The average capability of the fusion node and the average contributions of its corresponding sensors, as selected by EasiSS, WolfPack and BinTree for all the locations, are compared in Fig. 7. The fusion node's average capability under EasiSS is greater than that of WolfPack and BinTree. The reason is that EasiSS is based on the sensing capabilities of the sensors on the node and utilizes the divide-and-conquer method to select the most informative sensors on the nodes for each monitoring location, while WolfPack cannot, and BinTree selects sensor nodes according to the distance between the sensor nodes and the target, which might not reflect the real capability in dynamic environments that include many kinds of noise and other obstacles. Besides, the average contribution of the acoustic sensor is greater than that of the seismic sensor. What is more, we analyze the sensor cluster average accuracy with different numbers of nodes, as shown in Fig. 8. The accuracy of EasiSS exceeds that of WolfPack and BinTree because EasiSS bases its selection on the node capability for each monitoring location.
Table 1
Results of average URuMet.

Method     UR         Real type (t/r/c)^a   URuMet
EasiSS     UR = 0.80   30/30/0              100%
           UR = 0.85   30/30/0              100%
           UR = 0.90   30/29/1              96.7%
WolfPack   UR = 0.80   30/30/0              100%
           UR = 0.85   30/29/1              96.7%
           UR = 0.90   30/27/3              90%
BinTree    UR = 0.80   30/30/0              100%
           UR = 0.85   30/29/1              96.7%
           UR = 0.90   30/28/2              93.3%
AllSen     UR = 0.80   30/30/0              100%
           UR = 0.85   30/30/0              100%
           UR = 0.90   30/29/1              96.7%
^a t/r/c are the numbers of total cases, cases meeting user requirements and cases not meeting user requirements, respectively.

Fig. 9. Average classification accuracy in different user requirements (UR).
Fig. 10. Energy consumption in different user requirements (UR).
Fig. 11. Classification accuracy in different user requirements (UR) under dynamic environments in 8∗50 time intervals.

(3) Meeting user requirements and energy consumption
In this section, we demonstrate that EasiSS can meet user classification accuracy requirements in most cases. We use 30 cases to test classification accuracy; each case has 50 time intervals and each time interval is a 0.75 s event time series.
As shown in Table 1, when UR = 0.80 and 0.85, the URuMet of EasiSS reaches 100%. The URuMet of EasiSS is 96.7% when UR = 0.90, which is higher than that of the WolfPack and BinTree methods. Because all sensors take part in the classification, the AllSen method also has a higher URuMet than WolfPack and BinTree. Next, we compare our method EasiSS with BinTree and AllSen in terms of sensor cluster average accuracy and total energy usage under different user requirements UR (UR = 0.80, 0.85, 0.90). We use one case on which all methods meet the user requirements to test accuracy and total energy consumption.
As shown in Fig. 9, the average accuracy of EasiSS is greater than that of WolfPack, BinTree and AllSen, because EasiSS is based on the sensing capabilities and utilizes the divide-and-conquer method to select the most informative sensors on the nodes for each monitoring location, while WolfPack and BinTree cannot. Fig. 10 shows that when the four methods select sensors meeting the user accuracy requirements at different UR, the total energy consumption of EasiSS is smaller than that of BinTree and AllSen. That is because EasiSS selects only the most important nodes to take part in target classification for each monitoring location, so the number of active nodes is smaller than for BinTree and AllSen. Because the number of active sensor nodes selected by WolfPack is smaller than that of EasiSS at lower UR, the energy consumption of EasiSS is greater than that of WolfPack at UR = 0.80 and 0.85. Besides, as AllSen uses all sensors to classify the target, its energy consumption is the largest of the four methods. BinTree selects part of the nodes based on the distance between node and target, so its energy usage is less than AllSen's. We can also conclude that our proposed method has higher classification accuracy than WolfPack at different user requirements, while our method's energy consumption approaches WolfPack's as UR increases. When the user requirement (UR) is lower, WolfPack (Keally et al. 2011) is more energy efficient than the proposed method, because WolfPack activates fewer sensor nodes than our proposed method. With the increase of UR, the number of active sensor nodes in WolfPack increases more rapidly than in our proposed method. Thus, our proposed method's energy consumption is close to WolfPack's when the user requirement is higher (UR = 0.90).
(4) Adaptive collaboration in dynamic environments
In this section, we demonstrate that EasiSS with adaptive sensor selection is able to update and maintain accuracy under different UR when the environment changes. We select 8 cases for testing, all of which meet the user requirements for EasiSS under different UR. Fig. 11 shows that EasiSS and WolfPack maintain accuracy, or drop only slightly, when the environment changes, while BinTree fails to meet the user requirements 8 times because it has no adaptive collaboration scheme for dynamic environments. Besides, the accuracy of EasiSS is higher than that of the WolfPack and BinTree methods in dynamic environments, because EasiSS can select the most informative sensor cluster for each monitoring location. As shown in Fig. 12, the number of updates of EasiSS is smaller than or equal to that of WolfPack at different UR, which shows that EasiSS performs better than WolfPack in dynamic environments.
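The adaptive update rule evaluated above can be sketched as follows. The moving-accuracy window size and the class structure are hypothetical details; only the trigger condition (moving accuracy below UR causes a cluster update) follows the description in Section 6.3.

```python
# Sketch of the fusion node's adaptive update: track recent decision
# correctness from ground-truth feedback, and when the moving accuracy
# 1 - eps(mu_k) drops below UR, count an update (dissolve the cluster
# and re-run selection as in Algorithm 3). Window size is hypothetical.
from collections import deque

class FusionNode:
    def __init__(self, ur, window=50):
        self.ur = ur
        self.history = deque(maxlen=window)  # recent correct/incorrect flags
        self.updates = 0

    def observe(self, decision_correct):
        self.history.append(decision_correct)
        accuracy = sum(self.history) / len(self.history)
        if accuracy < self.ur:
            self.updates += 1        # broadcast update, re-form the cluster
            self.history.clear()     # new cluster starts a fresh history

node = FusionNode(ur=0.9)
for flag in [True] * 9 + [False]:    # 10th decision brings accuracy to 0.90
    node.observe(flag)
```

After these ten observations the moving accuracy sits exactly at UR, so no update fires; one more misclassification pushes it below UR and triggers a single update.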
Fig. 12. The number of updates in different user requirements (UR).

8. Conclusion

Since large amounts of continuous sensing data pose a great challenge to target monitoring in large-scale sensor networks, we present a divide-and-conquer based collaborative sensing scheme. It can cluster appropriate sensors for all the monitoring locations to meet user accuracy requirements while minimizing energy consumption. Through evaluation with vehicle trace data, we show the superior performance of our method over existing solutions in terms of meeting user classification accuracy requirements and energy usage. In future work, we will predict how the sensor cluster changes when generating a new cluster, to reduce overhead when the current sensor cluster fails to meet user requirements.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (No. 61379134).

References

[1] L. Gu, D. Jia, P. Vicaire, T. Yan, L. Luo, A. Tirumala, Q. Cao, T. He, J.A. Stankovic, T. Abdelzaher, Lightweight detection and classification for wireless sensor networks in realistic environments, in: SenSys, 2005, pp. 205–217.
[2] M.F. Duarte, Y.H. Hu, Vehicle classification in distributed sensor networks, J. Parallel Distrib. Comput. 64 (13) (2004) 826–838.
[3] M. Keally, G. Zhou, G. Xing, J. Wu, A. Pyles, PBN: towards practical activity recognition using smartphone-based body sensor networks, in: ACM Conference on Embedded Networked Sensor Systems, 2011, pp. 246–259.
[4] W. Xue, W. Sheng, B. Daowei, Distributed visual-target-surveillance system in wireless sensor networks, IEEE Trans. Syst. Man Cybern. Part B 39 (5) (2009) 1134–1146.
[5] R. Bajwa, R. Rajagopal, P. Varaiya, R. Kavaler, In-pavement wireless sensor network for vehicle classification, in: Information Processing in Sensor Networks (IPSN), 2011 10th International Conference on, 2011, pp. 85–96.
[6] M. Keally, G. Zhou, G. Xing, J. Wu, Exploiting sensing diversity for confident sensing in wireless sensor networks, in: IEEE INFOCOM Proceedings, 2011, pp. 1719–1727.
[7] J. Hwang, T. He, Y. Kim, Exploring in-situ sensing irregularity in wireless sensor networks, IEEE Trans. Parallel Distrib. Syst. 21 (4) (2007) 289–303.
[8] G. Xing, X. Wang, Y. Zhang, C. Lu, R. Pless, C. Gill, Integrated coverage and connectivity configuration for energy conservation in sensor networks, ACM Trans. Sens. Netw. 1 (1) (2005) 36–72.
[9] R. Tan, G. Xing, X. Liu, J. Yao, Z. Yuan, Adaptive calibration for fusion-based wireless sensor networks, in: INFOCOM, 2010 Proceedings IEEE, 2010, pp. 1–9.
[10] X. Wang, S. Wang, Collaborative signal processing for target tracking in distributed wireless sensor networks, J. Parallel Distrib. Comput. 67 (5) (2007) 501–515.
[11] M. Duarte, Y.H. Hu, Distance based decision fusion in a distributed wireless sensor network, Telecommun. Syst. 26 (2–4) (2002) 556–557.
[12] B. Malhotra, I. Nikolaidis, J. Harms, Distributed classification of acoustic targets in wireless audio-sensor networks, Comput. Netw. 52 (13) (2008) 2582–2593.
[13] L. Liu, A. Ming, H. Ma, X. Zhang, A binary-classification-tree based framework for distributed target classification in multimedia sensor networks, in: INFOCOM, 2012 Proceedings IEEE, 2012, pp. 594–602.
[14] S. Subramaniam, V. Kalogeraki, T. Palpanas, Distributed real-time detection and tracking of homogeneous regions in sensor networks, in: RTSS, Rio de Janeiro, 2007, pp. 401–411.
[15] M. Keally, G. Zhou, G. Xing, Watchdog: confident event detection in heterogeneous sensor networks, in: Real-Time and Embedded Technology and Applications Symposium (RTAS), 2010 16th IEEE, 2010, pp. 279–288.
[16] M. Keally, G. Zhou, G. Xing, J. Wu, Remora: sensing resource sharing among smartphone-based body sensor networks, IEEE, 2013.
[17] A. Singh, C.R. Ramakrishnan, I.V. Ramakrishnan, D.S. Warren, J.L. Wong, A methodology for in-network evaluation of integrated logical, in: Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, 2008.
[18] Y. Freund, R.E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst. Sci. 55 (1) (1997) 119–139.
[19] V. Shnayder, M. Hempstead, B.R. Chen, H.M. Welsh, PowerTOSSIM: efficient power simulation for TinyOS applications, in: Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
[20] D. Jung, T. Teixeira, A. Savvides, Sensor node lifetime analysis: models and tools, ACM Trans. Sens. Netw. 5 (1) (2009) 457–469.