
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2018.2838676, IEEE Sensors Journal

Wireless Visual Sensor Network Platform for Indoor Localization and Tracking of a Patient for Rehabilitation Task

Monaem IDOUDI a,b, El-Bay BOURENNANE a, Khaled GRAYAA b
Abstract — Wireless Visual Sensor Networks (WVSNs) commonly use the information technologies of modern networking and computing platforms. Today, visual network computing applications face a high demand for powerful network functionality and performance. This paper proposes specific WVSN nodes that are able to sense the signals coming from a patient in a rehabilitation room and to perform local computations that are communicated wirelessly within the considered WVSN. Assuming a WVSN platform for the rehabilitation supervision of patients, this paper discusses the specifications of the concept and the development of WVSN nodes relying on KINECT and Raspberry Pi 3 (RPi 3) boards. Several technologies, such as the RPi 3, the Kinect, and an ID sensor, are utilized in the proposed platform to realize the sensors, image-capturing units, and processing cores. This collaborative platform consists of three nodes designed around three processing cores, with an entire workflow supporting the development steps. The elements communicate with one another, and with the components, via Wi-Fi. The implementation of a preprocessing stage for component integration and the development of software development kit tools are necessary. In its final part, the paper summarizes the behavior of the proposed platform in different scenarios which run on the nodes; it also provides evaluation results that prove the feasibility of the approach.

Keywords: WVSN, Rehabilitation Task, Joint Skeleton, Tracking & Localization, RPi3, Performances, Scenarios.

I. INTRODUCTION

Fig. 1. Structure of the WVSN platform for the rehabilitation issue

Patient rehabilitation is an exercise program (Figure 1) whose objective is to achieve a physical functioning level that allows elders or athletes to regain their initial capabilities after an accident or surgery. Research has shown that an increased amount of exercise during rehabilitation leads to an increase in the functional recovery of patients [1]. The need for in-home care, monitoring, and rehabilitation of elderly and impaired people is increasing as the world population continues to age. The United Nations expected that 16% of the population would reach 65 years or older by 2020. This estimation, together with the fact that falls are the major cause of elderly people's injuries [2], has prompted the need to remotely monitor the condition of older people and provide them with an efficient rehabilitation tool for balance assessment. Accordingly, many research teams have designed nonintrusive surveillance tools, such as systems that can record gait parameters over time [3] or perform daily monitoring activities [4]; these are therefore suitable for smart-home and health applications [5]. Intensive exercise programs require continuous supervision of patients, and this increases the load on therapists and medical staff.

Existing supervision systems acquire data about changes in body motion and position in real time and process the information to characterize the movements. Many of these works have focused on software/hardware, platform/architecture, or network communications. This paper describes and demonstrates a WVSN node for patient rehabilitation supervision. Its main contributions include the following points:
 A platform for real-time localization and tracking of a patient in rehabilitation.
 A low-cost Internet of Things (IoT) solution, as it proposes a novel node by making an association between the Raspberry Pi 3 (RPi 3) and the Kinect.
 Improved localization accuracy in comparison with other works.
 An evaluation of the energy consumption when using a WVSN node and a global platform.

The rest of this paper is organized as follows. Section 2 discusses the related work on existing platforms and on WVSNs, as well as the classification of monitoring into various categories. Section 3 presents the hardware technology choices, and Section 4 presents the proposed approach and the WVSN node for patient tracking and localization. Section 5 presents the extensive experimental results and elaborates some performance measures to enhance the level of availability of the WVSN node. Section 6 provides the evaluation of the proposal. Section 7 concludes the study.

Fig. 2. Characterization of the environment

a Laboratoire LE2I UMR CNRS 6306, Université Bourgogne Franche-Comté, Dijon 21000, France.
b ENSIT, Université de Tunis, 1008 Tunis, Tunisia.
Corresponding author: idoudimonaem@gmail.com; Monaem.Idoudi@u-bourgogne.fr

1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

II. STATE OF THE ART

Several technologies have been proposed for patient monitoring and have provided support to rehabilitation tasks. This paper presents an in-home rehabilitation solution that employs WVSNs to monitor patients and elderly people with special needs. Therapy at home is flexible and convenient for patients, as it allows frequent repetition of exercises. The smart area is one of the outstanding examples of the IoT, and modern homes contain an increasing number of smart objects. The localization problem has received considerable attention in the area of wireless sensor networks (WSNs) [6]. The use of cameras and image processing in the IoT is an application that has not been fully explored in the literature, especially in rehabilitation. However, existing localization algorithms cannot solve target localization in WVSNs [7] because of the significant differences from conventional WSNs in terms of data acquisition (capturing) and processing methods. This paper develops a platform and reports experiments conducted with various phases and scenarios to detect patient falls, which allows us to classify their technological performances. The WVSN nodes are combined with an ID sensor, and the smart nodes can issue warnings to designated people when required. This paper considers the following features of WVSNs. Coverage optimization: the faulty nodes that are artificially produced by a monitoring schedule or coverage optimization are verified; the selected set of nodes depends on the way visual information is monitored by the application. Monitoring requirements: changing the monitoring requirements may alter the role of the WVSN nodes for such an application. Low-quality monitoring: environmental conditions or poor configuration and adjustment of the sensor cameras may reduce the quality of the retrieved visual information. Poor deployment: WVSN nodes may be damaged during deployment. For most of these applications, users are interested not only in the existence of targets, but also in their positions [8], because this facilitates target detection, recognition, and tracking. The localization task provides the coordinates of both sensors and targets in sensor networks [9]. The aim of target localization is to estimate the location of the target based on the visual information of the camera [10]. The problem of target localization is well studied in WSNs. The measurement techniques in sensor localization include angle-of-arrival, distance-related, and received signal strength (RSS) measurements [11]. Target localization in WVSNs faces great challenges. First, image processing is costly to implement in local nodes [12] because their computing capabilities are limited. Second, bandwidth resources are also restricted in WVSNs; thus, transmitting the large amount of visual data generated by cameras to a central node or base station is critical [13]. Third, the location information of a target in the depth dimension is lost in an image because the sensing capability of a camera is directional. Fourth, visual nodes in WVSNs are equipped with low-resolution optical sensors because of cost limitations [14]. Consequently, the accuracy of filtering and extraction of the target position cannot be guaranteed at the local sensor level. Vision-based surveillance by multiple cameras receives considerable attention because multiple cameras enlarge the covered area, and numerous problems can be solved by drawing on the information that comes from multiple views [9]. Many countries have reduced hospital resources and moved healthcare services, such as medical checkups, to the home [15]. Extensive work needs to be done on models and algorithms to utilize data for the decision-making activities of health care, diagnosis, and medical treatment [16]. Several collaboration steps, as shown in Figure 2, need to be performed. The IoT can be defined as a set of interconnected things (humans, tags, and sensors) over the Internet, which can measure, communicate, and act worldwide.

Table 1. Hardware components in WVSN platforms

The key idea of the IoT is to obtain information concerning our environment in order to understand, control, or even make decisions that help us in our daily life [17]. Regarding the network bandwidth consumption, we consider the fact that the size of


visual data is much larger than that of scalar data; hence, more network bandwidth is required for transmission. These performance requirements make visual data transmission within WVSNs more challenging than transmission within WSNs [28].

III. WVSN NODE ARCHITECTURE CHOICES

A. HW structure of the WVSN node

The sensor identifies the patient ID and permanently sends the RSSI signal to the three WVSN nodes within a medical rehabilitation room, as shown in Figure 3. RPi 3: after receiving the RSSI signal, the RPi checks whether the received power is in the RSSI range that allows WVSN_N1 to wake up, and thus awakens the Kinect. KINECT: the captured depth image is based on the skeletal joint position (X, Y, and Z).

Fig. 3. Proof of Concept: WVSN Node KINECT and RPi3

RPi 3 boards with an MS Kinect camera are used here. This system proposes a novel approach in which a patient holds an ID sensor that generates the RSSI signal. WVSN_Ni receives the RSSI and sends image measurements to confirm the localization and to track the patient indoors with improved accuracy. The WVSN node is compared with other visual nodes in Table 3. First, the MS Kinect is connected to the RPi 3 board through a Universal Serial Bus (USB) interface. The MS Kinect [5] is a marker-less motion-capturing sensor that can track a user's skeleton and capture data at a rate of 30 frames per second using the MS Kinect for Windows SDK [5]. Each tracked skeleton contains 20 joints with 3D coordinates [19]. The Kinect sensor can also maintain tracking through an extended range of approximately 70–600 cm, at the cost of some loss of accuracy. The FOV of the sensor has a pyramid shape; the sensor has an angular FOV of 57° horizontally and 43° vertically, and the motorized pivot can tilt the sensor up to 27° either up or down [20]. The advantages of this technology with respect to classical video-based ones are as follows [21]. It is less sensitive to variations in light intensity and texture changes. It provides 3D information using a single camera, whereas a stereoscopic system is necessary in the RGB domain to achieve the same goal. Skeletal joint coordinates may be extracted using a depth sensor. Finally, the privacy of the patient is maintained because the facial details of the people captured by the depth camera cannot be recognized; this feature helps to keep identities confidential. The hardware requirements for the remote enrolment node are listed below. In the node, the images are taken by the MS Kinect and processed by the RPi 3 board, as shown in Figure 3.

Table 2. Hardware Boards Comparison

                      Raspberry Pi 3           Arduino            Beaglebone
Base price (US$)      30                       35                 50
Processor             quad-core ARM            ATmega328P         1 GHz TI Sitara
                      Cortex-A7, 1.2 GHz                          AM3359 ARM Cortex-A8
Memory                1 GB LPDDR2 +            on the order       512 MB DDR3L
                      64 GB external           of MB              @ 400 MHz
Power draw            ~150 mA @ 5 V            ~50 mA @ 5 V       ~250 mA @ 5 V
Operating system      64-bit Windows 10 IoT    Custom             Linux
Suited for            Software                 Hardware           Software
Number of I/O pins    8 digital                14 digital (6      65 digital
                                               PWM), 6 analog
Peripherals           4 USB hosts, 1           None               1 USB host, 1
                      micro-USB power,                            mini-USB client,
                      1 10/100 Mbps                               1 10/100 Mbps
                      Ethernet, Wi-Fi,                            Ethernet
                      Bluetooth
Internet              Yes                      Via shield         Yes

Table 3. Localization & Tracking Nodes Comparison

                  Our node      MicaZ      TelosB     Iris        Cricket     Lotus
Size (mm)         85.6×53×21    58×32×7    65×31×6    58×32×7     58×32×7     76×34×7
Weight (g)        85            18         23         18          18          18
Processor         ARM           ATMEGA128  TI MSP430  ATMEGA1281  ATMEL128L   —
                  Cortex-A7
RAM               1 GB + 256 kB 4 K        10 K       8 K         128 K       512 K
External memory   8–64 GB       128 K      48 K       4 K         512 K       64 K
OS                WIN 10 IoT    TinyOS,    TinyOS,    TinyOS      TinyOS,     RTOS,
                                Mote       SOS,                   Mote        TinyOS
                                Runner     MANTIS                 Runner
Price (US$)       90–125        99         99         115         225         300

B. KINECT Skeleton Joint positions

Depth imaging technology has undergone a remarkable development in recent years, especially once a consumer price point was reached with the release of the Microsoft (MS) Kinect. The MS Kinect (MS Corporation) has shown great reliability for depth image capturing; its popularity comes from its low cost, high sample rate, and capability to combine visual and depth information [26]. Kinect software comprises the available tools for working with the Kinect hardware and various libraries that expose its functionalities for developing applications. Several tools for the Kinect, such as the MS Kinect software development kit (SDK) [23], OpenNI [24], and OpenKinect [25], are available for developing applications.
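As an illustration of the pyramid-shaped sensing volume described earlier in this section (57° × 43° FOV, roughly 70–600 cm of usable depth), the following sketch tests whether a point in the camera frame falls inside the Kinect's field of view. The function name and the camera-frame convention (z along the optical axis, coordinates in metres) are our own assumptions, not part of the paper.

```python
import math

def in_kinect_fov(x, y, z, h_fov_deg=57.0, v_fov_deg=43.0,
                  z_min=0.7, z_max=6.0):
    """Check whether a point (metres, camera frame, z = optical axis)
    lies inside the pyramid-shaped Kinect sensing volume: within the
    usable depth range and inside both angular half-FOV cones."""
    if not (z_min <= z <= z_max):
        return False
    return (abs(x) <= z * math.tan(math.radians(h_fov_deg / 2)) and
            abs(y) <= z * math.tan(math.radians(v_fov_deg / 2)))
```

For instance, a point half a metre off-axis at 3 m depth is visible, while a point only 0.5 m in front of the sensor is below the minimum depth and is not.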


OpenNI 2.0 is the latest version of OpenNI; the MS Kinect SDK, released by MS, is currently at version 1.8. OpenKinect is a free, open-source library maintained by an open community of Kinect developers. The SDK enables users to develop sophisticated computer-based human motion-tracking applications in the C# and C++ programming languages [27]. A version of skeleton estimation is incorporated in the Kinect SDK, and a stream of skeleton frames can be obtained directly. A model is developed to represent a full human skeleton in order to accomplish real-time human skeleton estimation; the model is then trained with extensive labeled data and incorporated into the SDK for real-time skeleton tracking. Figure 4 shows the coordinates of the skeletal joints that will be extracted. In addition, Figure 7 shows the two-phase system used for fall detection: the first phase characterizes the pre-task, to identify the patient before and during the task; the second phase concerns the critical event that could possibly happen to the patient, such as a fall situation.

Fig. 4. Skeleton Joints Position of the Patient

 Data structure for the proposed scheme

The range information is expressed in 2 bytes for every attribute value, while the distance information result is converted into hexadecimal numbers to prevent identification of the information even if it is added to a hash value. Figure 9 describes the process of recording and extracting data using the proposed scheme and adding range information. Below is an example of the basic data structure utilized in the proposed method. When a WVSN node collects data, as shown in the algorithm, the data are expressed as shown in Figure 5.

"Joint_Position_....binary" file: each row contains the following fields:
- Position (x, y, z);
- Joint tracking state (in joint rows)*
* Tracking states: 0 = joint is not tracked; 1 = joint is inferred; 2 = joint is tracked.

"Joint_Orientation_....binary" file: each row contains the following fields:
- Hierarchical joint angle as a quaternion (w, x, y, z);
- Angle status*
* Angle status:
- In joint rows: 0 = successful; any other value implies that the angle computation failed and the angle has been set to the identity quaternion (1, 0, 0, 0).
- In CoM rows: not applicable; the angle status is set to S_FALSE = 1.

Fig. 5. Data structure for KINECT depth distance

IV. THE PROPOSED SOLUTION

4.1 Proposed WVSN platform

This section presents the proposed approach, which is based on three WVSN nodes and designed for location/tracking scenarios in a clinical rehabilitation setting. The nodes include an image/video capture unit, a processing component, and a communication element. The design of the complete platform involves three WVSN nodes and is based on (A) the RSS indicator (RSSI) and information processing to compute the initial location information as output; (B) the acquisition of image-based skeletal joint location data of patients, the extraction of distance, and the storing, analysis, monitoring, and displaying of data in a centralized management server; and (C) the system realization by implementing the various available technologies. This subsection presents the block diagram of the proposed platform. It is necessary to consider how to find the location coordinates from raw RSSI data.

In Figure 6, we present the operating blocks of the platform; these blocks are detailed in two layers of hardware and software conception. Basically, as explained before, to compensate the location error margin fixed via RSSI, we introduce the extraction of the position via image as well. The RSSI values are configured from the reference node (at the patient's body level) within the distance estimation step. Environmental characterization using these RSSI values can be conducted to find the suitable parameters for the area, so we can define the field of sense (FOS) before moving on to the next step. Following the calibration process, the environmental parameters are fixed and will not change unless significant changes happen to the objects within the area. D1, D2, and D3 represent the distances between the patient and the three nodes; by entering from one FOS zone to another, one camera among the three cameras C1, C2, or C3 is activated. The whole approach of the passage between blocks is detailed in the algorithm (Section 3.2-b). The next step is to obtain continuous RSSI values online from the reference nodes. With both the RSSI values and the environmental parameters, it is possible


to convert those RSSI values into distance using the path loss model. After the RSSI–distance conversion, the distances between the target sensor node and the reference nodes are obtained. Trilateration combines the distances and finds the exact location coordinates of the target sensor node within the area.

Fig. 6. Block Diagram of the global Platform

A. Software structure of the WVSN node

Figures 4 & 6 show the components of the WVSN node used. The capturing unit consists of one ID sensor and an image-capture element (KINECT). The RSSI issued by the ID sensor, or the image data signals, are then input to the processing unit. The processing unit usually consists of a microcontroller or microprocessor with memory on the same die or integrated circuit package; it may also contain an application-specific processor. The processing unit provides intelligent control and processing of information for the sensor node. Figure 7 presents the software solution architecture of the system for patient identification and localization. Because the joint tracking status of the input skeletal data can be known from each KINECT, only the joint's depth distance Z' is extracted and stored, in order to be used after detecting a prospective fall. Moreover, the localization error of the coordinates is reduced by comparing the KINECT depth distance Z with DRSSI.

Fig. 7. WVSN Node Sw Solution Function Block

B. RSSI-range technique model

Information training is a common technique to reduce the uncertainty in raw data sets and provide an accurate position for patient localization. The technique is designed so that both the RSSI tracking and the depth distance of the captured image within the tracking scene contribute their respective strengths and improve accuracy. Figure 8 shows the approach of balancing between Zdepth and ZRSSI that has been used to reduce measurement uncertainty; the two complement each other in the research model in order to enhance tracking. WVSN node activation and RSSI range measurements have interesting complementarities. However, the number of wireless camera networks that use RSSI is very low. Miyaki et al. estimated the target location individually using cameras and RSSI, and then integrated both estimations using a sensor fusion method [11] or a particle filter [12]. These methods were validated using outdoor tests, but their accuracy and robustness were not analyzed. Nodes outside the cluster can be switched off, which allows energy saving. RSSI depends on the distance between nodes, but RSSI values do not change linearly with distance. Although RSSI is sensitive to some types of environmental noise, the RSSI from a fixed location almost always indicates the same value, regardless of time. Therefore, the main idea in this paper is to reduce the environmental effects on RSSI not by using a single RSSI, but by using a pattern extracted from several RSSIs. Three pattern recognition methods must be introduced to realize this idea. The solution assumed in this paper proposes a localization algorithm based on the RSSI–trilateration localization technique.
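The RSSI-to-distance conversion and the trilateration step described above can be sketched as follows. This is a minimal illustration, assuming a standard log-distance path-loss model with calibrated parameters (reference RSSI at 1 m and path-loss exponent n) and a 2D solution for three fixed reference nodes; none of the numeric parameters come from the paper.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, n=2.0):
    """Invert the log-distance path-loss model into a range estimate.
    rssi_at_d0 and n are calibration values (assumed, not from the paper)."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, dists):
    """2D trilateration: subtract the first circle equation from the other
    two to get a 2x2 linear system, then solve it by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1  # nonzero when the anchors are not collinear
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det)
```

For example, with reference nodes at (0, 0), (10, 0), and (0, 10) m, the exact ranges 5, √65, and √45 recover the position (3.0, 4.0). In practice the RSSI-derived ranges are noisy, which is why the platform cross-checks them against the Kinect depth distance.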


Fig. 8. RSSI Communication protocol overview

Trilateration is a general method to calculate the position of nodes using an arbitrarily large set of distances estimated from fixed points. The technique relies on knowing the distance of the moving object relative to a set of references with known positions [29]. As depicted in Figures 6 & 8, the inputs of the system include RSSI scans and image captures and readings. Two major improvements are applied with respect to traditional systems based on RSSI or images alone. First, a WVSN-nodes-based initialization phase is introduced to the platform. This method requires several scans from the RSSI module and a distance model built by an FOV comparison algorithm. It filters all the positions by the FOV and FOS and keeps the decision making for the initialization and activation of nodes N1, N2 & N3, which increases the accuracy of the position rate as well as the efficiency. After a normal initialization, an activation phase via RSSI intervals, an improved identification, and a camera activation method are introduced. This phase initially permits the collection of multiple frames and position estimations in the second scenario mentioned in Section 4.3. Then, the depth distance extraction algorithm is performed to construct a target and position model. Each position obtains coordinates from the constructed model. Finally, the next frame is resampled based on the active nodes, and the average distance and location are calculated from the new frames.

Fig. 9. Operation structure of data processing step

C. Patient localization technique approach

 Our approach for fall detection phases & modes

Fig. 10. Fall detection phases

 Based on the skeleton joint positions of the patient during his movement, the proposed joints for the different physical models were selected, i.e., a distance-based algorithm.
 The algorithm concept defines the relationship among the body parts (five parts) and extracts the distances of the human skeletal joints to the body parts.
 The distances among the body parts will increase or decrease along the y-axis depending on posture, for example the difference in distance between the shoulder and the knee while standing and while sitting.
 The proposal derives each 3D Euclidean distance among the five "seals" to analyze deviation rates. The 3D Euclidean distance among the skeletal joint "seals" is as follows:

d(p, k) = √((p1 − k1)² + (p2 − k2)² + (p3 − k3)²)

where d(p, k) is the distance among p values of the result as


the initial position k in the final position of the skeletal joints, such as head → left foot. p1, p2, and p3 sample the x, y, and z axes, respectively, from the initial position of the skeleton; k1, k2, and k3 sample x, y, and z, respectively, of the position of the backbone end.

However, not all information is pertinent in the framework of fall detection. In the proposed method, a maximum of seven joints is considered. These seven joints (see Figure 4) are combined to make four modes in the experiment:
1st mode uses 3 joints: Head, Spine, Shoulder_Center
2nd mode uses 5 joints: Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right
3rd mode uses 5 joints: Head, Spine, Shoulder_Center, Shoulder_Left, Knee_Left
4th mode uses 7 joints: Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right, Shoulder_Left, Knee_Left.

Fig. 11. Approach Flowchart

4.2 Algorithm Phases

The pseudo-code for WVSN-based identification & initialization is detailed in Algorithm 1. The input of this algorithm is a set of RSSI estimations and the depth distance issued by the camera. The model is constructed from the depth distances extracted from frames captured by the Kinect camera of our WVSN node and a few experimentally determined parameters. Steps 2 and 3 (Figure 10) are the model generation phases (processing of the input data, collaboration task & processing of falling motion phases on scenarios). The algorithm selects the scenario with the data points for the set of distances from the frames, as shown in the flowchart of the platform algorithm (Figure 11).

Algorithm 1: The proposed WVSN platform
Require: ID sensor; three nodes based on the RPi 3 and MS Kinect for image capturing
Ensure: Acquisition and capturing, processing, transmitting, and collaboration of WVSN nodes; patient fall detection
Phase 1 – Data acquisition and image capturing
1: The ID sensor sends an RSSIid signal to a WVSN node, which converts the RSSI signal into a distance Drssi
2: Send the first RSSI signal through the ID sensor and receive it via the ZigBee module
3: If Drssi is in the average range → the patient is in the field of sense; activate the MS Kinect of the WVSN node
4: Image shooting and depth data capturing based on the patient skeleton
Phase 2 – Processing of the input data and collaboration task
5: If the Kinect is activated → Zdepth = ZKinect; else Zdepth = ZRSSI
6: Use the image input via the Kinect component to track the skeleton of the patient
7: Verify whether the patient's skeletal joints are in range → compute the skeletal joints
8: Storage of each stream: recording the patient skeleton in the form of a Joint_Position binary file [120 rows / skeleton data / frame]
9: Each row contains the following fields: position (x, y, z); joint-tracking state (in joint rows) → extract five to seven rows
10: The extracted distance Patient/WVSN_N: Z = Z(joint)
Phase 3 – Processing of falling motion phases on scenarios
11: Fall detection? Patient map tracking via Bi blocks of processing:
    B1: No processing in the node: send id, initial position, and image
    B2: Low-level detection: (x, y) position, image, and tracking
    B3: Composite event detection "fall"
12: Scenario A (mode without fall): B1 → B2 → B2
    Scenario B (mode with fall): B1 → B2 → B3
13: For each skeleton in the video sequence: identify a maximum of seven joint coordinates (Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right, Shoulder_Left, Knee_Left). Determine the distance d among the joints:
    d(p, k) = √((p1 − k1)² + (p2 − k2)² + (p3 − k3)²)
    where d(p, k) is the distance between p as the initial position and k as the end position of the skeletal joints, such as head → left foot; p1, p2, and p3 are the instances of the x, y, and z axes, respectively, of the initial skeletal position, and k1, k2, and k3 are the instances of the x, y, and z axes, respectively, of the end skeletal position.
14: For each distance, classify the fall situation into the four modes
Phase 4 – Communication phase
15: Prepare to switch to other nodes depending on FOS and Zdepth
16: Activate and forward to other WVSN nodes:
    If a node receives RSSI in range then
        If the node contains important information then
            If Fos2 > Fos1 then  // if the remaining data > a predefined threshold
                Transfer information [Wi-Fi and ZigBee ACT/DESACT];
            EndIf
        EndIf
    EndIf


4.3 Proposed Scenarios of localization

Three modes, the basic scenario (B1), the intermediate scenario (B2), and the higher scenario (B3), are used to evaluate the different phases of the localization and tracking measurements and thus to validate the proposed WVSN platform. In this part of the scheme, three scenarios are defined to detect a composite event. An event of interest is assumed to be composite when it can be detected by at least two WVSN nodes. The overall methodology of the nodes, the scenarios, and the management of the patient state are presented in the algorithm below (Figure 12).

The event of interest is to detect the patient and track the exercise that he must perform in the rehabilitation room. If the patient begins the rehabilitation task, it is verified whether he is in the view of node WVSN2. The RPi 3 provides local processing on the node. In this scenario, after the Kinect wakes up, the RPi 3 performs skeleton imaging to detect the skeletal joints of the moving patient. The RPi 3 (the node is adapted to hybrid Wi-Fi and ZigBee communication components to reduce energy consumption) transmits only the position over ZigBee and forwards the image over Wi-Fi. First point: images are not transmitted every time. Instead, the RPi 3 transmits images only if the position of the detected patient satisfies a certain criterion in the case of simple events/movements (the patient moves and changes the task).
 The captured and transmitted image size is 640 × 480.
 Second point: the RPi 3 board transmits only the portion of the image containing the object, instead of the entire frame.

A. Higher Scenario: B3
The WVSN nodes perform local processing in this mode as
well. If a composite event is defined as a sequence of critical
events across the FOV of one node or more, then the first node in
this sequence transmits a message addressed to the next node
once it detects the first critical event. The WVSN node: The RPi 3
board does perform a local processing via an algorithm to identify
a critical “fall” event. The event of interest can be defined as a
sequence of two steps of events. The first step event is detecting if
the skeleton of the patient has a problem in the target on the view
of Kinect A. The second step event is detecting whether the
patient falls in the way (based on the skeletal joint position
distance) specified in the view of Kinect B. As the first node
detects the first step, it transmits a message addressed to the
second one instead of transmitting a portion of the image to the
server. Composite event detection avoids redundant
communication, because the application is not interested in
Fig.12. Localization & Tracking Scenarios Flowchart multiple patients entering the room. Instead, a higher-level
composite event is of interest.
A. First Scenario: B1
Basic mode: The ID of the patient is given by the ID sensor to
the WVSN nodes. The ID sensor sends the patient ID to the
WVSN node and/or to a field-programmable array server for
further processing. In the WVSN node, the RPi 3 board does not
perform any image local processing (Figure 13). Nevertheless, the
node does some scalar preprocessing, such as RSSI conversion, to
find the initial position of the patient. Such preprocessing, as
shown in Figure 6, is embedded in the node board. This scenario
serves as a baseline for the following two scenarios (B2 and/or
Fig.13. Image processing tasks in
B3). The patient status is then detected, which will cause an WVSN node, in the first setup, an
end of image every time motion. ID sensor wakens the camera mote
when it detects motion in the scene
B. Intermediate Scenario: B2
In this scenario, the WVSN node is not limited to capturing
image, but also to perform local processing (Figure 9). The node
detects and computes the patient position to be sent to the server.
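In scenario B2 the node transmits only the image portion containing the patient rather than the whole 640 × 480 frame. A minimal sketch of that cropping step, assuming the skeletal joints have already been projected to pixel coordinates (the joint list and the margin are illustrative):

```python
import numpy as np

def crop_patient_roi(frame, joints_px, margin=10):
    """Return the sub-image bounding the detected skeletal joints.

    frame     : H x W x 3 array (a 640 x 480 capture in scenario B2)
    joints_px : list of (x, y) pixel positions of the skeletal joints
    margin    : extra border, in pixels, kept around the skeleton
    """
    xs = [x for x, _ in joints_px]
    ys = [y for _, y in joints_px]
    h, w = frame.shape[:2]
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, w)
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, h)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)         # full capture
roi = crop_patient_roi(frame, [(300, 120), (340, 260), (310, 400)])
print(roi.shape)  # → (300, 60, 3): a fraction of the full frame
```

Sending only this sub-array is what keeps the Wi-Fi payload small in the simple-event case.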
V. EXPERIMENTAL RESULTS & PERFORMANCE ANALYSIS:

A. Processing:
Compared with ordinary data computing in WSNs, image processing in WVSNs is more complex and demands more hardware resources, such as CPU power and memory. The data size can be reduced (via data compression, collaboration among multiple sensors, and minimization of the time spent on imaging by using basic information) to decrease the required hardware resources. The collaboration among the Kinect nodes requires the exchange of FOV information. The sensor hardware capability should be considered when designing visual data-processing techniques.
Preprocessing option: in this step, the image is analyzed on the local CPU of the RPi 3; a real-time skeleton image is generated from the color image by using the SDK tools installed via the Windows 10 IoT OS on the board. This real-time extraction of the skeletal joint positions makes it possible to localize the patient and to keep tracking a fall event. In the proposed platform, collaboration among multiprocessor SoCs is carried out, the hardware being based on the three quad-core 1.2 GHz ARM Cortex-A53 processors of the RPi 3 boards. The considered prototype is implemented and tested on a 1.2 GHz quad-core CPU. The system can run in real time at 30 frames per second. The results of 3D skeleton tracking in the 3D virtual scene are demonstrated and tested on the WVSN node. The WVSN node is tested in different situations, in which the memory, execution time, and power are measured, as shown in Table 4. The test is carried out with video recordings of indoor environments to make the scene realistic.

Fig.14. Test bed layout and WVSN Node

The proposed approach is compared with the D_RSSI method and with the D_Z extracted distances to demonstrate its performance. The effect of the WVSN-node approach is examined according to the criterion of accuracy improvement. The energy consumption of the WVSN nodes is also considered in the scenarios; energy is the parameter that defines the lifetime of a WVSN node and of the platform. This section presents the results of a detailed analysis of the performance and localization accuracy of the three operation scenarios described above, along with the results of simulations conducted to examine the performance of the proposed platform based on the WVSN node, as presented in Figure 14. Three nodes are deployed in an 8 m × 6 m 2D room field in order to ensure the reliability of the tests. The positions of the cameras are shown in the test-layout figure. All the cameras have a FOV of 57.4° and a radius of 10 m. This parameter is considered the most important criterion, and the performance of the proposed approach is evaluated according to it, as indicated in Figure 15. The experimental test layout is fixed with the three nodes to generate a set of results for each case. The obtained results are presented in Figure 17, which shows the effect of the number of nodes on energy needs.

Fig.15. WVSN Node Memory & Process Consumption

A custom program is created in C++ to (1) capture images and save the data, (2) display the information as depth and skeleton images, (3) allow the user to store a binary file input for the tasks, (4) identify the joint coordinates to calculate the D parameters
online, and (5) give memory and process performance feedback during execution regarding the online/offline status. The performance measures of the various processing stages operated in the different phases are presented in Figure 15. In the conducted test (Figure 15), it has been assumed that the nodes consume limited resources in terms of memory and processing; the node performs excellently. The test focuses on the performance of the WVSN node; likewise, the platform is exercised by creating different practical scenarios.

Table.4. Performance of practical WVSN node test

                                Scenario A   Scenario B   Scenario C
Memory (MB)            OFF      65           65           130
                       ON       161          161          350
CPU                    OFF      3%           3%           3%
                       ON       56%          61%          78%
Execution time (sec)            57           57           133
Draw (amps)            Idle     0.31         0.31         –
                       Load     0.58         0.66         0.82
Time execution (sec)            49.02        0.55         0.36

B. Energy Management:
The lifetime of a wireless visual sensor node is correlated with the battery current usage profile. By estimating the energy consumption of the WVSN nodes, the applications, routing protocols, and node management (FOV and on/off) can make informed decisions that increase the WVSN lifetime. Minimizing energy consumption and size are important points to make WVSNs deployable. Energy management is fundamental to the network reliability. The nature of the application may make interaction with the WVSN infeasible once it has been deployed: the nodes are frequently located in remote areas and thus are impossible to access. Economics is also a factor; when thousands of sensors exist, the power of a given sensor should be considered. Communication is the primary consumer of energy in wireless networks.
The desire to save energy has also affected research on routing algorithms, scheduling, data collection and aggregation, and medium access control protocols. The tradeoff between energy savings and latency is a major concern; some time-critical applications cannot tolerate delay in packet delivery. Some techniques are considered to conserve energy, including:
• Node switching between active (on) and sleeping (off) modes
• Independent nodes
• Physical-layer-aware protocols
The energy efficiency of the proposed scheme is also evaluated in different usage cases. An Arduino Nano is used to monitor the energy consumption, because it can ensure accuracy and synchronism in data collection through an automated code to start and finish the measurement, which can be linked to the start and finish times of events in the Raspberry Pi. The electric current can be kept within 800 mA by using a 1 Ω resistor in series with the RPi 3. The voltage drop in the resistor can then be measured to infer the precise electric current of the circuit. The Arduino is used to collect data, such as time, voltage, and current, 170 times per second from the node end-devices through a shield made to measure the energy. The joule (J) is the unit commonly used to measure mechanical energy (work). The energy consumption, in joules, is calculated from the voltage and current at time t [V(t) and I(t)] as

    WVSN_Joules = ∫₀ˣ V1,2,3(t) · I1,2,3(t) dt        (1)

The total transmission energy drained for N hops is given by equation (2):

    Enode = Ecpu + Ecom + Ecapturing        (2)

where
    Ecom(n, d) = Eelec(n) + Eampl(n, d) = Eelec · n + Eampl · n · d²
    Ecapturing = Ekinect
    Ecpu = Σ (i = 1 … n) Pcpu-state(i) · Tcpu-state(i)

Here Enode is the energy consumed in the system, given in joules; Ecom is the energy consumed in the transmitter/receiver RF system; Ecapturing is the energy consumed in the sensor system; Pcpu-state is the power consumed in state i of the CPU, given in watts; Tcpu-state is the duration of state i, given in seconds (this time includes the transition time among states); Eelec is the energy consumed in the transceiver circuitry for n processed bits; and Eampl is the energy consumed in the RF amplifier circuit to transmit n bits of information over distance d.

The first experiment consists of considering a single-hop network with no bit rate constraint (minimizing the activity time of the nodes). In this network, the nodes are located 4–8 m away from one another, as they would be in an actual video patient-surveillance deployment. Figure 16 illustrates the total energy consumption for all the nodes in the platform for each block of work in Section 4.3. The results show that deploying three collaborative WVSN nodes and communicating minimal data provides 27.56% savings in energy consumption compared with using non-collaborative nodes and exchanging compressed data.

Fig. 16. The overall energy consumption for different Blocks
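Equations (1) and (2) can be checked numerically. The sketch below integrates the sampled V(t)·I(t) trace at the 170 Hz rate of the Arduino shield and evaluates the per-term energy model; the radio and CPU constants are illustrative placeholders, not the measured values of the testbed.

```python
def wvsn_joules(v_samples, i_samples, rate_hz=170.0):
    """Eq. (1): integrate V(t) * I(t) over the sampled trace."""
    dt = 1.0 / rate_hz
    return sum(v * i for v, i in zip(v_samples, i_samples)) * dt

def e_com(n_bits, d_m, e_elec=50e-9, e_ampl=100e-12):
    """Eq. (2), radio term: Eelec*n + Eampl*n*d^2 (assumed constants)."""
    return e_elec * n_bits + e_ampl * n_bits * d_m ** 2

def e_cpu(states):
    """Eq. (2), CPU term: sum of Pcpu-state(i) * Tcpu-state(i)."""
    return sum(p * t for p, t in states)

def e_node(n_bits, d_m, cpu_states, e_capturing):
    """Eq. (2): Enode = Ecpu + Ecom + Ecapturing."""
    return e_cpu(cpu_states) + e_com(n_bits, d_m) + e_capturing

# One second of samples at 5 V and 0.58 A (the "Load" draw of Table 4):
print(wvsn_joules([5.0] * 170, [0.58] * 170))   # approximately 2.9 J
```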
Fig. 17. The distribution of the consumed energy

The total energy consumption in Scenario A when tracking one patient is 23.75% less than in the collaborative setup, as shown in Figure 17. In addition to the energy consumption, we measure the average power consumption of each node for the scenarios described in Section 4; Figure 17 shows the obtained histograms. The distance between the nodes and the ID sensor node is approximately 3–4 m, and the communication is performed over two hops. Figure 18 shows the amount of time needed to complete waiting, processing, and communication for all three scenarios. The accuracy of tracking the patient is consistently measured on the FOV of the three nodes for the proposed approach and for the previous RSSI work configurations. The latency introduced during these different scenarios is also measured. In all latency measurements, the measured time intervals include the wakeup time of the node sensor, which is approximately 4 s. For the first scenario described in Section 4.1, the time interval from the KINECT waking up to the WVSN_N1 node sending the complete image is measured.

Fig. 18. The latencies of different components of operation for different scenarios B1, B2 & B3

The results are displayed in Figure 19. While the average error decreases when introducing the camera distance extracted on the considered node, the reliability is very high for real-time scenes. Given that only RSSI location data are exchanged, the reliability decreases for scalar scenes. The use of wireless network protocol channels increases the accuracy at the cost of high delays and high energy consumption. Figure 19 illustrates the node of the platform used during the execution of the patient rehabilitation task, which allows the performance comparison with different state-of-the-art methods. For the tests performed, the results are the average values for each point of passage of the patient in exercise, according to the different scenarios. The two curves represent the precisions of the two methodologies used, with and without images; the difference between the two ways of localization is clear.

C. Communication:
Given that the quality-of-service requirements make visual data transmission in WVSNs even more challenging than in WSNs, hybrid communication protocols using ZigBee and Wi-Fi are selected. The effect of the communication energy cost, which is the fundamental critical point of this field, is also considered. Small-sized packets are exchanged between the nodes and the ID sensor to reduce power consumption and delay. The maximum data rate of 802.15.4 is 250 kbps per frequency channel (16 channels are available in the 2.4 GHz band), which is far too low for a WVSN node to stream images back to the server at a sufficiently high quality and frame rate for real-time applications. A key tenet of the design is to push computing out to the edge of the network and only send preprocessed data. If an event of interest occurs in the network, then a query can be sent for the relevant image sequence to be compressed. Figure 18 presents the impact of the considered execution scenario; the communication cost weighs more heavily depending on the scenario.

D. Accuracy:
We now analyze the accuracy, as shown in Figure 19, of the RSSI model training method proposed in Section IV. We employed the same setting in the integrated WVSN testbed and tracked the patient using an image that integrates only distance measurements. We compared the performance of the EIF with three different RSSI models: the default RSSI model, the RSSI calibration method presented in [11], and the RSSI model training method proposed in this paper. The experiment was repeated 10 times. Figure 19 shows the cumulative patient localization errors obtained when using the models. The method proposed in this paper has significantly higher accuracy: the mean error is 29 cm, and the error was lower than 50 cm in 90% of the samples. Errors were significantly higher in the other cases, even considering that both RSSI models were generated for that specific environment. Our RSSI training mechanism behaves better because it estimates the target location with cameras and trains the RSSI model dynamically, considering the local surroundings of the target and of the static node. As presented in Section IV, the proposed scheme includes self-comparative mechanisms that improve robustness against RSSI value errors. We analyzed the robustness of the training method against the most common errors in the RSSI. Occlusions originated high uncertainties in the RSSI training method, and the default RSSI model was then selected. However, the integration of RSSI measurements, even using the default RSSI model, was a significant advantage when no camera
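The bandwidth argument above can be made concrete with a back-of-the-envelope calculation: even at the full 802.15.4 channel rate, one raw frame needs tens of seconds, which is why only position data travel over ZigBee while image portions go over Wi-Fi. The 16-bit-per-pixel packing and the nominal Wi-Fi rate are assumptions for the arithmetic.

```python
# Transmission-time estimate for one raw 640 x 480 frame per radio.
WIDTH, HEIGHT = 640, 480
BITS_PER_PIXEL = 16                            # assumed raw packing
frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL   # 4,915,200 bits

ZIGBEE_BPS = 250_000        # 802.15.4 maximum rate per channel
WIFI_BPS = 54_000_000       # nominal 802.11g rate, for comparison

print(f"ZigBee: {frame_bits / ZIGBEE_BPS:.1f} s per frame")       # → 19.7 s
print(f"Wi-Fi : {frame_bits / WIFI_BPS * 1e3:.1f} ms per frame")  # → 91.0 ms
```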
measurements were available. The proposed method, consuming 40% less energy, has robustness against other platform errors comparable to that of Method 2 and is significantly more robust to target occlusions.

Fig.19. The accuracy of the different approaches and the scheduling of the nodes in the active situation

Table.5. Comparison of platforms performances

WVSN Node    Kinect   RPi 3   ZigBee unit   Total
Cost (€)     55       35      15            105

Regarding the cost criterion, the proposed solution costs 350 euros in total, and a single node does not exceed 120 euros, which is much cheaper than the other platforms. For the three WVSN sensor nodes, as long as one deploys a Kinect, an RPi 3, and a ZigBee reader module, the wireless sensor nodes in the target environments are not expensive, because the deployment is limited to only three nodes.

DISCUSSION
The first difference between the proposed approach and the approaches presented in the previously mentioned works is the use of a Windows 10 IoT OS for the implementation of the prototype, which is one of the main modern operating systems. Also, to create the driver of the node sensor, the open-source Pi4J library is used; this is a bridge between the VS15 & SDK libraries for full access to the Raspberry Pi. A Windows 10 IoT service was preinstalled on the processing unit in order to make a connection between the Raspberry Pi and the KINECT. Thus, the advance of the developed testbed is its modularity: the developed codes, written in C++, allow the written software to be reused in future projects. The solution implementation is quite simple, and it is accessible to a large number of users, as opposed to microcontroller programming, which usually depends on the development kit.
The second difference is the use of RSSI treatment together with image capturing: the fusion of the depth distance DZkinect and the pre-calculated initial position Drssi. The other specificity is that the WVSN nodes perform local image pre-processing before sending data to the server. In summary, according to the computational capability and system resource consumption, there are two types of WVSN platforms. The first is the low-end WVSN platform that is designed specifically for energy efficiency (e.g., Cyclops [30]). The other is the high-end WVSN platform (e.g., Meerkats [31]) that is designed for sophisticated visual-data applications, where the system resource requirements and energy consumption are about one order of magnitude higher. This offers new research directions and opportunities for designing new WVSN platforms that could perform sophisticated visual-data operations in energy-efficient ways. This section reviews the five major hardware components of the sensor node in a WVSN; other detailed hardware architectures can be seen in [32].
Table 5 summarizes the different components of the WVSN platform in comparison with other systems. One specificity concerning image quality is the IR camera of the Kinect; the memory size (1 GB + 256 KB) is by far larger than that of the other nodes; and, for communication, the combination
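The fusion of the RSSI-derived distance Drssi with the Kinect depth DZkinect discussed above can be sketched with the common log-distance path-loss model; here a simple inverse-variance average stands in for the EIF used in the paper, and the reference power, path-loss exponent, and variance weights are illustrative, not the calibrated values of the testbed.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_n=2.5):
    """Invert the log-distance path-loss model: RSSI (dBm) -> metres."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_n))

def fuse_distances(d_rssi, d_kinect, var_rssi=1.0, var_kinect=0.04):
    """Inverse-variance fusion of the two distance estimates; the
    depth camera is weighted far more heavily than the radio."""
    w_r, w_k = 1.0 / var_rssi, 1.0 / var_kinect
    return (w_r * d_rssi + w_k * d_kinect) / (w_r + w_k)

d_rssi = rssi_to_distance(-65.0)          # coarse initial position
d_fused = fuse_distances(d_rssi, 3.10)    # refined with Kinect depth
print(round(d_rssi, 2), round(d_fused, 2))   # → 10.0 3.37
```

The fused estimate stays close to the depth measurement while the RSSI term keeps providing a position whenever the patient leaves the camera FOV.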
between ZigBee and Wi-Fi is selected, for reasons of consumption optimization and to take advantage of the high computing capability of the ARM Cortex-A53 microcontroller. For the energy consumption, the interval is corrected to 200–4000 mAh. Thanks to the collaboration and management provided by the different scenarios, we obtain an average of 500 mAh for each node over a rehabilitation exercise session.

VI. CONCLUSION
A new WVSN platform based on the proposed WVSN node is presented for patient rehabilitation supervision. This system is a low-cost, lightweight, and easy-to-use monitoring system which can be used at home or in a hospital. The proposed solution relies on a novel association of the RPi 3 with the Kinect using a hybrid communication protocol (ZigBee, Wi-Fi). It also meets the localization and tracking requirements of patient rehabilitation supervision in terms of data processing and data communication. The functionality and the performance of the prototype have been proved by testing it in a real environment. The experimental results of two tracking methods (one with depth information) on indoor patient localization benchmarks and on a challenging RSSI dataset demonstrate the effectiveness of the proposed scheme, which introduces the KINECT on the node for depth and visual feature extraction and for accurate 3D detection of a complex activity and interaction. In particular, the overall nodes are installed in a clinical room.

FUTURE WORK
A WVSN platform is proposed to monitor and track elderly or athlete patients. This platform significantly reduces the costs of processing and communication, as well as the energy consumption, which has a significant effect on the WVSN performances. Several studies, as explained in Section 2, have shown that WVSN nodes are more efficient than the traditional architecture of client/server-based WSNs or the simple use of cameras. Among these works, the RSSI and the depth distance extracted from the skeletal joint coordinates (Zk) are fused to precisely localize the patients indoors. The RSSI seeks the most adequate node FOV of the MS Kinect to activate that node, and it ensures collaboration among the three used nodes. This study proposes the use of an intelligent strategy based on a new association of the RPi 3 with the MS KINECT for data capturing, processing, and transmission, as well as the intelligent fusion of distance localization processing in a WVSN node. As perspectives, we will exploit this platform to centralize all of the frames and data on a reconfigurable-architecture server based on an FPGA board, explore dynamic reconfiguration with our WVSN nodes, and provide connectivity of the Windows 10 IoT Core on the RPi 3.

REFERENCES
[1]. G. Kwakkel, R. van Peppen, R. C. Wagenaar, S. W. Dauphinee, C. Richards, A. Ashburn, K. Miller, N. Lincoln, C. Partridge, I. Wellwood, and P. Langhorne, "Effects of augmented exercise therapy time after stroke: A meta-analysis," Stroke, vol. 35, no. 11, pp. 2529–2539, Nov. 2004.
[2]. J. A. Painter, S. J. Elliott, and S. Hudson, "Falls in community-dwelling adults aged 50 years and older: Prevalence and contributing factors," J. Allied Health, vol. 38, no. 4, pp. 201–207, Jan. 2009.
[3]. E. E. Stone and M. Skubic, "Evaluation of an inexpensive depth camera for passive in-home fall risk assessment," in Proc. Int. Conf. Pervasive Comput. Technol. Healthcare (PervasiveHealth), Dublin, Ireland, May 2011, pp. 71–77.
[4]. S. Patel, H. Park, P. Bonato, L. Chan, and M. Rodgers, "A review of wearable sensors and systems with application in rehabilitation," J. NeuroEng. Rehabil., vol. 9, pp. 21:1–21:17, Jan. 2012.
[5]. Microsoft Kinect. [Online]. Available: http://www.microsoft.com/enus/kinectforwindows/, December 2013.
[6]. G. Q. Mao, B. Fidan, and B. Anderson, "Wireless sensor network localization techniques," Computer Networks, vol. 51, no. 10, pp. 2529–2553, 2011.
[7]. S. Rajeev, A. Ananda, C. Mun, and T. Wei, Mobile, Wireless, and Sensor Networks: Technology, Applications, and Future Directions. John Wiley and Sons, 2005.
[8]. L. Liu, X. Zhang, and H. Ma, "Optimal node selection for target localization in wireless camera sensor networks," IEEE Trans. Veh. Technol., vol. 59, no. 7, pp. 3562–3576, 2010.
[9]. S. Soro and W. Heinzelman, "A survey of visual sensor networks," Adv. Multimedia, vol. 2009, pp. 1–21, 2009.
[10]. P. Kulkarni, "SensEye: A multi-tier heterogeneous camera sensor network," University of Massachusetts, Amherst, MA, 2007.
[11]. G. Mao, B. Fidan, and B. D. O. Anderson, "Wireless sensor network localization techniques," Comput. Netw., vol. 51, no. 10, pp. 2529–2553, 2011.
[12], [13]. D. Ercan, Yang, A. E. Gamal, and L. Guibas, "Optimal placement and selection of camera network nodes for target localization," in Proc. DCOSS, 2006, pp. 389–404.
[14]. Y. Charfi, N. Wakamiya, and M. Murata, "Challenging issues in visual sensor networks," IEEE Wireless Commun., vol. 16, no. 2, pp. 44–49, 2009.
[15]. I. F. Akyildiz, T. Melodia, and K. R. Chowdury, "A survey on wireless multimedia sensor networks," IEEE Wireless Commun., vol. 14, no. 6, pp. 32–39, 2007.
[16]. F. Yang, E. Tschetter, X. Léauté, N. Ray, G. Merlino, and D. Ganguli, "Druid: A real-time analytical data store," in Proc. 2014 ACM SIGMOD Int. Conf. Management of Data, 2014, pp. 157–168.
[17]. H. Yan, L. D. Xu, Z. Bi, Z. Pang, J. Zhang, and Y. Chen, "An emerging technology: wearable wireless sensor networks with applications in human health condition monitoring," J. Manag. Anal., vol. 2, pp. 121–137, 2015. http://dx.doi.org/10.1080/23270012.2015.1029550
[18]. R. Dai and I. F. Akyildiz, "A spatial correlation model for visual information in wireless multimedia sensor networks," IEEE Trans. Multimedia, vol. 11, no. 6, pp. 1148–1159, 2009.
[19]. J. Webb and J. Ashley, Beginning Kinect Programming with the Microsoft Kinect SDK, 2012, pp. 52, 67–100.
[20]. Kinect, Wikipedia. [Online]. Available: http://en.wikipedia.org/wiki/Kinect, December 2013.
[21]. S. Gasparrini, E. Cippitelli, S. Spinsante, and E. Gambi, "A depth-based fall detection system using a Kinect sensor," Sensors, vol. 14, no. 2, pp. 2756–2775, February 2014.
[22]. Raspberry Pi. [Online]. Available: http://www.raspberrypi.org
[23]. Microsoft Kinect SDK. [Online]. Available: http://www.microsoft.com/en-us/kinectforwindows/
[24]. OpenNI. [Online]. Available: http://www.openni.org/
[25]. OpenKinect. [Online]. Available: https://github.com/OpenKinect/libfreenect/
[26]. A. Taha, H. H. Zayed, M. E. Khalifa, and E. El-Horbaty, "Human action recognition based on MSVM and depth images," IJCSI Int. J. Computer Science Issues, vol. 11, issue 4, no. 2, July 2014.
[27]. R. Lun, "A survey of applications and human motion recognition with Microsoft Kinect," Int. J. Pattern Recognition and Artificial Intelligence, March 2015.
[28]. F. G. H. Yap and H.-H. Yen, "A survey on sensor coverage and visual data capturing/processing/transmission in wireless visual sensor networks," Sensors, vol. 14, pp. 3506–3527, 2014.
[29]. M. Idoudi, J. Cabral, E. Bourennane, and K. Grayaa, "WSN localization scheme based on Received Signal Strength Indicator (RSSI) for ZigBee network," in CIE45 Proceedings, Metz, France, 28–30 October 2015.
[30]. M. Rahimi, R. Baer, O. I. Iroezi, J. C. Garcia, J. Warrior, D. Estrin, and M. Srivastava, "Cyclops: In situ image sensing and interpretation in wireless sensor networks," in Proc. Int. Conf. Embedded Networked Sensor Systems, San Diego, CA, USA, 2–4 November 2005, pp. 192–204.
[31]. J. Boice, X. Lu, C. Margi, G. Stanek, G. Zhang, R. Manduchi, and K. Obraczka, "Meerkats: A power-aware, self-managing wireless camera network for wide area monitoring," in Proc. Workshop on Distributed Smart Cameras, Boulder, CO, USA, 31 October 2006.
[32]. B. Tavli, K. Bicakci, R. Zilan, and J. M. Barcelo-Ordinas, "A survey of visual sensor network platforms," Multimed. Tools Appl., vol. 60, pp. 689–726, 2012.
Monaem IDOUDI was born in Tunis, Tunisia, in 1988. He received the B.S. degree in electronic and telecommunication engineering from the University of Tunis, Tunis, in 2010, and the Master's degree in science and technology of information and communication, specialty Electronics and Advanced Technology Engineering, from ENSIT - University of Tunis, Tunis, Tunisia, in 2013. He is currently pursuing the Ph.D. in electronics and informatics at the University of Burgundy, Dijon, France. His research interests include wireless communications and embedded image processing, FPGA design and real-time implementation, and IoT technologies.

El-Bay BOURENNANE is Professor of Electronics at the LE2I laboratory (Laboratory of Electronics, Computer Science and Image) at the University of Burgundy, Dijon, France. His research interests include dynamically reconfigurable systems, image processing, embedded systems, FPGA design, and real-time implementation.

Khaled GRAYAA is Professor of Electronics at the LIRINA laboratory (Laboratory of Intelligent Networks and Nanotechnology) at the University of Tunis, Tunis, Tunisia. His research interests include smart sensor and smart grid networks, embedded systems, FPGA design, and real-time implementation.