I. INTRODUCTION
1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2018.2838676, IEEE Sensors Journal
II. STATE OF THE ART
Several technologies have been proposed for patient monitoring and have provided support to rehabilitation tasks. This paper presents an in-home rehabilitation solution that employs WVSNs to monitor patients and elderly people with special needs. Therapy at home is flexible and convenient for patients, as it allows frequent repetition of exercises. The smart area is one of the outstanding examples of the IoT, and modern homes contain an increasing number of smart objects. The localization problem has received considerable attention in the area of wireless sensor networks (WSNs) [6]. The use of cameras and image processing in the IoT is an application that has not been fully explored in the literature, especially in rehabilitation. However, the existing localization algorithms cannot solve target localization in WVSNs [7], because of the significant differences in data acquisition (capturing) and processing methods compared with the conventional WSN. This paper develops a platform and reports experiments conducted with various phases and scenarios to detect patient falls, which allows us to classify their technological performances. The WVSN nodes are combined with an ID sensor, and the smart nodes can issue warnings to designated people when required. This paper considers the following features of WVSNs. Coverage optimization: the faulty nodes that are artificially produced by a monitoring schedule or coverage optimization are verified; the selected set of nodes depends on the way visual information is monitored by the application. Monitoring requirements: changing the monitoring requirements may alter the role of the WVSN nodes for such an application. Low-quality monitoring: environmental conditions or poor configuration and adjustment of the sensor cameras may reduce the quality of the retrieved visual information. Poor deployment: WVSN nodes may be damaged during deployment. For most of these applications, users are interested not only in the existence of targets but also in their positions [8], because this facilitates target detection, recognition, and tracking. The localization task provides coordinates of both sensors and targets in sensor networks [9]. The aim of target localization is to estimate the location of the target based on the visual information of the camera [10]. The problem of target localization is well studied in WSNs; the measurement techniques in sensor localization include angle-of-arrival, distance-related, and received signal strength (RSS) measurements [11]. Target localization in WVSNs faces great challenges. First, image processing is costly to implement in local nodes [12] because their computing capabilities are limited. Second, bandwidth resources are also restricted in WVSNs; thus, transmitting the large amount of visual data generated by cameras to a central node or a base station is critical [13]. Third, the location information of a target in the depth dimension is lost in an image because the sensing capability of a camera is directional. Fourth, visual nodes in WVSNs are equipped with low-resolution optical sensors because of cost limitations [14]. Consequently, the accuracy of filtering and extraction of the target position cannot be guaranteed at the local sensor level. Vision-based surveillance by multiple cameras receives considerable attention because multiple cameras enlarge the covered area, and numerous problems can be solved by drawing on the information that comes from multiple views [9]. Many countries have reduced hospital resources and moved healthcare services, such as medical checkups, to the home [15]. Extensive work needs to be done on models and algorithms to utilize data for the decision-making activities of health care, diagnosis, and medical treatment [16]. Several collaboration steps, as shown in Figure 2, need to be performed. The IoT can be defined as a set of interconnected things (humans, tags, and sensors) over the Internet, which can measure, communicate, and act worldwide.

Table 1. Hardware components in WVSN platforms

The key idea of the IoT is to obtain information concerning our environment to understand, control, or even make a decision to help us in our daily life [17]. Regarding the network bandwidth consumption, we consider the fact that the size of
visual data is much larger than that of scalar data; hence, more network bandwidth resource is required for transmission. These performance requirements make visual data transmission within WVSNs more challenging than transmission within WSNs [28].

captured by the depth camera cannot be recognized. This feature helps to keep identity confidentiality. The hardware requirements for the remote enrolment node are listed below. In the node, the images are taken by the MS Kinect and processed by the RPi 3 board, as shown in Figure 3.
developing applications. OpenNI 2.0 is the highest version of OpenNI. The MS Kinect SDK, released by MS, has 1.8 as its current version. OpenKinect is a free, open-source library maintained by an open community of Kinect people. The SDK enables users to develop sophisticated computer-based human motion-tracking applications in the C# and C++ programming languages [27]. A version of skeleton estimation is incorporated in the Kinect SDK, and a stream of skeleton frames can be obtained directly. A model is developed to represent a full human skeleton to accomplish real-time human skeleton estimation; the model is trained with extensive labeled data and then incorporated into the SDK for real-time skeleton tracking. Figure 4 shows the coordinates of the skeletal joints that will be extracted. Besides, Figure 7 shows the two-phase system used for fall detection. The first phase of the system characterizes the pre-task, to identify the patient before and during the task. The second phase of the system concerns the critical event that could possibly happen to the patient, such as a fall situation.

Fig.4. Skeleton Joints Position of the Patient

Data structure for the proposed scheme
The range information is expressed in 2 bytes for every attribute value, while the distance result is converted into hexadecimal numbers to prevent identification of the information even if it is added to a hash value. Figure 9 describes the process of recording and extracting data using the proposed scheme and adding the range information. Below is an example of the basic data structure utilized in the proposed method. When a WVSN node collects data, as shown in the algorithm, the data is expressed as shown in Figure 5.

"Joint_Position_....binary" file — each row contains the following fields:
- Position (x, y, z);
- Joint tracking state (in joint rows)*
* Tracking states: 0 = joint is not tracked; 1 = joint is inferred; 2 = joint is tracked.

"Joint_Orientation_....binary" file — each row contains the following fields:
- Hierarchical joint angle as a quaternion (w, x, y, z);
- Angle status*
* Angle status:
- In joint rows: 0 = successful; any other value implies that the angle computation failed and the angle has been set to the identity quaternion (1, 0, 0, 0).
- In CoM rows: not applicable; the angle status is set to S_FALSE = 1.

Fig.5. Data structure for KINECT depth distance

IV. THE PROPOSED SOLUTION

4.1 Proposed WVSN platform
This section presents the proposed approach, which is based on three WVSN nodes and designed for location/tracking scenarios in a clinical rehabilitation setting. The nodes include an image/video capture unit, a processing component, and a communication element. The design of the complete platform is based on (A) the RSS indicator (RSSI) and information processing to compute the initial location information as output; (B) the acquisition of image-based skeletal joint location data of patients, the extraction of distance, and the storing, analysis, monitoring, and displaying of data in a centralized management server; and (C) the system realization by implementing various available technologies. This subsection presents the block diagram of the proposed platform. It is necessary to consider how to find the location coordinates from the raw RSSI data.

In Figure 6, we present the operating blocks of the platform; these blocks are detailed in two layers of hardware and software design. Basically, as explained before, to compensate for the location error margin obtained via RSSI, we also introduce the extraction of the position via image. The RSSI values are acquired from the reference node (at the patient's body level) within the distance estimation step. Environmental characterization using these RSSI values can be conducted to find the suitable parameters for that area, so that the field of sense (FOS) can be defined before moving on to the next step. Following the calibration process, the environmental parameters are fixed and will not change unless significant changes happen to the objects within the area. D1, D2 and D3 represent the distances between the patient and the three nodes; by entering from one FOS zone to another, one camera among the three cameras C1, C2 or C3 will be activated. The whole approach of the passage between blocks is detailed in the algorithm (Section 3.2-b). The next step is to obtain continuous RSSI values online from the reference nodes. With both the RSSI values and environmental parameters, it is possible
to convert those RSSI values into distance using the path loss
model. After the RSSI–distance conversion, the distances
between the target sensor node and the reference nodes can be
obtained. Trilateration combines the distances and finds the exact
location coordinates of the target sensor node within the area.
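The RSSI-to-distance conversion and the trilateration step described above can be sketched as follows. This is an illustrative Python sketch (the paper's own code is written in C++); the log-distance model constants rssi_at_1m and path_loss_exponent are hypothetical stand-ins for the paper's calibrated environmental parameters:

```python
import math

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exponent=2.0):
    """Convert an RSSI reading (dBm) to a distance estimate (m) using
    the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exponent))

def trilaterate(anchors, distances):
    """Locate a 2D target from three reference nodes.
    Subtracting the first circle equation from the other two yields a
    linear 2x2 system in (x, y), solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - d2**2 + d1**2
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - d3**2 + d1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

For example, with reference nodes at (0, 0), (4, 0), and (0, 3) and converted distances to a target at (1, 1), the solver recovers that point exactly; in practice the noisy RSSI distances make the result an estimate, which is why the paper fuses it with the Kinect depth.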
method are introduced. This phase initially permits the collection of multiple frames and position estimations in the second scenario, where d(p, k) is the distance between
the initial position p and the final position k of the skeletal joints, such as head → left foot. p1, p2, and p3 sample the x, y, and z axes, respectively, from the initial position of the skeleton; k1, k2, and k3 sample x, y, and z, respectively, of the position of the backbone end.

However, not all information is pertinent in the framework of fall detection. In the proposed method, a maximum of seven joints is considered. These seven joints (Figure 4) are combined into four modes in the experiment:
1st mode uses 3 joints: Head, Spine, Shoulder_Center
2nd mode uses 5 joints: Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right
3rd mode uses 5 joints: Head, Spine, Shoulder_Center, Shoulder_Left, Knee_Left
4th mode uses 7 joints: Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right, Shoulder_Left, Knee_Left

Fig.11. Approach Flowchart

4.2 Algorithm Phases

The pseudo-code for WVSN-based identification and initialization is detailed in Algorithm 1. The input of this algorithm is a set of RSSI estimations and the depth distance issued by the camera. The model is constructed from the depth distance extracted via frames captured by the Kinect camera of our WVSN node and a few experimentally determined parameters. Steps 2 and 3 (Figure 10) are the model generation phases (processing of the input data, collaboration task, and processing of falling motion phases on scenarios). The algorithm selects the scenario by matching data points to the set of distances from frames, as shown in the flowchart of the platform algorithm (Figure 11).

Algorithm 1: The proposed WVSN platform
Require: ID sensor; three nodes based on RPi 3 and MS Kinect for image capturing
Ensure: Acquisition and capturing, processing, transmitting, and collaboration of WVSN nodes; patient fall detection
Phase 1 – Data acquisition and image capturing
1: ID sensors send an RSSIid signal to a WVSN node, which converts the RSSI signal into a distance Drssi
2: Send the first RSSI signal through the ID sensor and receive it via the ZigBee module
3: If Drssi indicates that the patient is in the field of sense, activate the MS Kinect of the WVSN node
4: Image shooting and depth-data capturing based on the patient skeleton
Phase 2 – Processing of the input data and collaboration task
5: If the Kinect is activated then Zdepth = ZKinect, else Zdepth = ZRSSI
6: Use the image input from the Kinect component to track the skeleton of the patient
7: If the patient's skeletal joints are in range, compute the skeletal joints
8: Storage of each stream (Saver: ON): record the patient skeleton in the form of a Joint_Position binary file [120 rows / skeleton data / frame]
9: Each row contains the following fields: position (x, y, z); joint-tracking state (in joint rows); extract five to seven rows
10: The extracted distance Patient/WVSN_N (Z) = Z (joint)
Phase 3 – Processing of falling motion phases on scenarios
11: Fall detection? Patient map tracking via Bi blocks of processing:
B1: No processing in the node: send id, initial position, and image
B2: Low-level detection: (x, y) position, image, and tracking
B3: Composite event detection "fall"
12: Scenario A (mode without fall): B1, B2, B2
Scenario B (mode with fall): B1, B2, B3
13: For each skeleton in the video sequence, identify a maximum of seven joint coordinates (Head, Spine, Shoulder_Center, Shoulder_Right, Knee_Right, Shoulder_Left, Knee_Left). Determine the 3D Euclidean distance d between the skeletal joints:
d(p, k) = sqrt((p1 − k1)² + (p2 − k2)² + (p3 − k3)²),
where d(p, k) is the distance between p as the initial position and k as the end position of the skeletal joints, such as head → left foot. p1, p2, and p3 are the x, y, and z instances, respectively, of the initial skeletal position; k1, k2, and k3 are the x, y, and z instances, respectively, of the end skeletal position.
14: For each distance, classify the fall situation into the four modes
Phase 4 – Communication phase
15: Prepare to switch to other nodes depending on FOS and Zdepth
16: Activate and forward other WVSN nodes:
If a node receives RSSI in range then
  If the node contains important information then
    If Fos2 > Fos1 then // if the remaining data > a predefined threshold
      Transfer information [Wi-Fi and ZigBee ACT/DESACT];
    EndIf
  EndIf
EndIf
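Steps 13 and 14 of the algorithm can be illustrated with a short Python sketch (the paper's implementation is in C++). The joint subsets of the four modes come from the text above; the per-frame displacement rule and its threshold are hypothetical illustrations, since the paper does not publish its decision rule:

```python
import math

# Joint subsets for the four experimental modes (from the paper).
MODES = {
    1: ["Head", "Spine", "Shoulder_Center"],
    2: ["Head", "Spine", "Shoulder_Center", "Shoulder_Right", "Knee_Right"],
    3: ["Head", "Spine", "Shoulder_Center", "Shoulder_Left", "Knee_Left"],
    4: ["Head", "Spine", "Shoulder_Center", "Shoulder_Right", "Knee_Right",
        "Shoulder_Left", "Knee_Left"],
}

def joint_distance(p, k):
    """3D Euclidean distance d(p, k) between two joints (step 13)."""
    return math.sqrt(sum((pi - ki) ** 2 for pi, ki in zip(p, k)))

def frame_displacements(prev, curr, mode=4):
    """Per-joint displacement between two skeleton frames, restricted to
    the joints of the chosen mode. prev/curr map joint name -> (x, y, z)."""
    return {j: joint_distance(prev[j], curr[j]) for j in MODES[mode]}

def looks_like_fall(prev, curr, mode=4, threshold=0.5):
    """Flag a candidate fall when every tracked joint of the mode moved
    more than `threshold` metres between frames. The threshold is a
    hypothetical value for illustration only."""
    return all(d > threshold for d in frame_displacements(prev, curr, mode).values())
```

The mode number trades robustness for cost: mode 1 tracks only the head-to-spine axis, while mode 4 also watches shoulders and knees on both sides.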
4.3 Proposed Scenarios of Localization

Three modes, the basic scenario (B1), the intermediate scenario (B2), and the higher scenario (B3), are used to evaluate the different phases of localization and tracking measurements and thus to validate the proposed WVSN platform. In this part of the scheme, three scenarios are defined to detect a composite event. An event of interest is assumed to be composite when it can be detected by at least two WVSN nodes. The overall methodology of the nodes, the scenarios, and the management of the patient state are presented in the algorithm below (Figure 12).

Fig.12. Localization & Tracking Scenarios Flowchart

A. First Scenario: B1
Basic mode: The ID of the patient is given by the ID sensor to the WVSN nodes. The ID sensor sends the patient ID to the WVSN node and/or to a field-programmable gate array server for further processing. In the WVSN node, the RPi 3 board does not perform any local image processing (Figure 13). Nevertheless, the node does some scalar preprocessing, such as RSSI conversion, to find the initial position of the patient. Such preprocessing, as shown in Figure 6, is embedded in the node board. This scenario serves as a baseline for the following two scenarios (B2 and/or B3). The patient status is then detected, which triggers an image capture on every motion event.

Fig.13. Image processing tasks in a WVSN node; in the first setup, an ID sensor wakens the camera mote when it detects motion in the scene

B. Intermediate Scenario: B2
In this scenario, the WVSN node is not limited to capturing images; it also performs local processing (Figure 9). The node detects and computes the patient position to be sent to the server. The event of interest is to detect the patient and track the exercise that he must do in the rehabilitation room. If the patient begins the rehabilitation task, it is verified whether he is in the view of node WVSN2. The RPi 3 provides local processing on the node. In this scenario, after the Kinect wakes up, the RPi 3 performs skeleton imaging to detect the skeletal joints of the moving patient. The RPi 3 (the considered node is adapted to the hybrid communication components of Wi-Fi and ZigBee to reduce energy consumption) transmits only the position over ZigBee and forwards the image over Wi-Fi. The first point: images are not transmitted every time. Instead, the RPi 3 transmits images only if the position of the detected patient satisfies a certain criterion in the case of simple events/movements (the patient moves and changes the task). The captured and transmitted image size is 640 × 480. The second point: the RPi 3 board transmits only the portion of the image containing the object, instead of the entire frame.

C. Higher Scenario: B3
The WVSN nodes perform local processing in this mode as well. If a composite event is defined as a sequence of critical events across the FOV of one node or more, then the first node in this sequence transmits a message addressed to the next node once it detects the first critical event. In the WVSN node, the RPi 3 board performs local processing via an algorithm to identify a critical "fall" event. The event of interest can be defined as a sequence of two step events. The first step event is detecting whether the skeleton of the patient has a problem with the target in the view of Kinect A. The second step event is detecting whether the patient falls in the way (based on the skeletal joint position distance) specified in the view of Kinect B. As the first node detects the first step, it transmits a message addressed to the second one instead of transmitting a portion of the image to the server. Composite event detection avoids redundant communication, because the application is not interested in multiple patients entering the room; instead, a higher-level composite event is of interest.
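The B3 collaboration pattern, in which the first node forwards only a short message to the next node after the first critical event instead of sending imagery to the server, can be sketched as a minimal Python state machine. Class and method names are illustrative, not from the paper:

```python
class WVSNNode:
    """Minimal sketch of B3-style composite-event collaboration:
    the first node that sees its critical event sends a short token to
    the next node; the composite "fall" event is raised only when the
    final node confirms its own step while holding the token."""

    def __init__(self, name, next_node=None):
        self.name = name
        self.next_node = next_node   # None for the last node in the chain
        self.armed = False           # token received from the previous node
        self.messages_sent = 0

    def receive_token(self):
        self.armed = True

    def observe(self, critical_event):
        """Process one observation; returns "fall" when this node
        completes the composite event, otherwise None."""
        if not critical_event:
            return None
        if self.next_node is not None:
            # First step detected: notify the next node, not the server.
            self.next_node.receive_token()
            self.messages_sent += 1
            return None
        # Final step: only a confirmed sequence counts as a fall.
        return "fall" if self.armed else None
```

A step seen by Kinect B alone is ignored; only the A-then-B sequence yields the composite "fall", which is exactly what keeps redundant image traffic off the network.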
V. EXPERIMENTAL RESULTS & PERFORMANCE ANALYSIS

A. Processing
Compared with ordinary data computing in WSNs, image processing in WVSNs is more complex and demands more hardware resources, such as CPU power and memory. The data size can be reduced (via data compression, collaboration among multiple sensors, and minimizing the time consumed by imaging using basic information) to decrease the required hardware resources. The collaboration among Kinect nodes requires exchanging FOV information, and the sensor hardware capability should be considered when designing visual data-processing techniques. Preprocessing option: in this step, the image is analyzed on the local CPU of the RPi 3; a real-time skeleton image is generated from the colored image by using the SDK tools installed via a Windows 10 IoT OS on board. This real-time extraction of skeletal joint positions makes it possible to localize the patient and to continue tracking a fall event. In the proposed platform, collaboration among multiprocessor SoCs is carried out; the hardware is based on three quad-core ARM Cortex-A7 1.2 GHz processors. The considered prototype is implemented and tested on a 1.2 GHz quad-core CPU. The system can run in real time at 30 frames per second. The results of 3D skeleton tracking in the 3D virtual scene are demonstrated and tested in the WVSN node. The WVSN node is tested in different situations, in which the memory, execution time, and power are measured, as shown in Table 4. The test is carried out with video storage of indoor environments to make the scene realistic.

each case. The obtained results are presented in Figure 17, which shows the effect of the number of nodes on energy needs.
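The per-node measurements of execution time reported in Table 4 can be reproduced with a generic timing harness such as the following Python sketch; the paper's actual instrumentation on the RPi 3 is not published, so this is only an assumed shape:

```python
import time

def measure_processing(frames, process):
    """Time a per-frame processing function and report the average
    latency and the achieved frame rate (a generic harness; the 30 fps
    figure in the text would correspond to avg_latency_s <= 1/30)."""
    start = time.perf_counter()
    for frame in frames:
        process(frame)
    elapsed = time.perf_counter() - start
    n = len(frames)
    return {
        "frames": n,
        "avg_latency_s": elapsed / n,
        "fps": n / elapsed if elapsed > 0 else float("inf"),
    }
```

Running the real skeleton-extraction step through such a harness on each scenario (A and B) would yield the execution-time column of a table like Table 4.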
Online, and (5) give memory and process performance feedback regarding the online/offline status. The performance measures of the various processing stages operated in different phases are presented in Figure 15. In the conducted test (Figure 15), it has been assumed that the nodes consume limited resources in terms of memory and processing; the node performs well, and the test focuses on the performance of the WVSN node as the platform is exercised through different practical scenarios.

Table 4. Performance of the practical WVSN node test (Scenario A / Scenario B)

A shield made to measure the energy collects data, such as time, voltage, and current, 170 times per second from the node end-devices. The joule (J) is a unit commonly used to measure mechanical energy (work). The energy consumption, in joules, is calculated from the voltage and current at time t, V(t) and I(t), as

WVSN_Joules = ∫0→x V1,2,3(t) · I1,2,3(t) dt    (1)

The total transmission energy drained for N hops is given by equation (2):

Enode = Ecpu + Ecom + Ecapturing    (2)

where Ecom(n, d) = Eelec · n + Eampl · n · d², and Ecapturing = Ekinect.
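Equations (1) and (2) can be checked numerically with a short Python sketch. The Riemann-sum step follows the 170 samples-per-second acquisition described above; the per-bit radio constants Eelec and Eampl are typical textbook values, not measurements from the paper:

```python
def energy_joules(voltages, currents, sample_rate_hz=170.0):
    """Equation (1): E = integral of V(t) * I(t) dt, approximated by a
    Riemann sum over samples taken sample_rate_hz times per second."""
    dt = 1.0 / sample_rate_hz
    return sum(v * i for v, i in zip(voltages, currents)) * dt

def e_com(n_bits, d_m, e_elec=50e-9, e_ampl=100e-12):
    """Radio term of equation (2): Ecom(n, d) = Eelec*n + Eampl*n*d^2.
    The constants here (50 nJ/bit, 100 pJ/bit/m^2) are assumed textbook
    values for a first-order radio model."""
    return e_elec * n_bits + e_ampl * n_bits * d_m ** 2

def e_node(e_cpu, n_bits, d_m, e_kinect):
    """Equation (2): Enode = Ecpu + Ecom + Ecapturing, with
    Ecapturing = Ekinect."""
    return e_cpu + e_com(n_bits, d_m) + e_kinect
```

For instance, a node drawing a constant 0.2 A at 5 V for one second (170 samples) dissipates 1 J, which is the kind of per-interval figure the energy shield accumulates.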
measurements were available. The proposed method, consuming 40% less, has robustness against other platform errors comparable to that of Method 2 and is significantly more robust for target solutions.

DISCUSSION
The first difference between the proposed approach and the approaches presented in the previously mentioned works is the use of the Win 10 IoT OS for the implementation of the prototype, which is one of the main modern operating systems. Also, to create the driver of the sensor node, the open-source Pi4J library is used; it is a bridge between the VS15 & SDK libraries for full access to the Raspberry Pi. A Win 10 IoT service was preinstalled on the processing unit in order to make the connection between the Raspberry Pi and the Kinect. Thus, the advance of the developed testbed is in its modularity: the developed codes, written in C++, allow multiple reuses of the written software in future projects. The solution implementation is quite simple and accessible to a large number of users, as opposed to microcontroller programming, which usually depends on the development kit. The second difference is the use of RSSI treatment together with image capturing: the fusion of the depth distance DZkinect with the pre-calculated initial position Drssi. The other specificity is that the WVSN nodes perform local image pre-processing before sending data to the server. In summary, according to the computational capability and system resource consumption, there are two types of WVSN platforms. The first is the low-end WVSN platform, designed specifically for energy efficiency (e.g., Cyclops [30]). The other is the high-end WVSN platform (e.g., Meerkats [31]), designed for sophisticated visual data applications where the system resource requirements and energy consumption are about one order of magnitude higher. This offers new research directions and opportunities for designing new WVSN platforms that could perform sophisticated visual data operations in energy-efficient ways. This section reviews the five major hardware components of the sensor node in a WVSN; other detailed hardware architecture can be seen in [32].

Table 5. Comparison of platforms performances
Table 5 summarizes the different components of the WVSN platform in comparison with other systems. As a specificity of image quality, there is the IR camera of the Kinect; the memory size (1 GB + 256 KB) is by far larger than that of the other nodes; for communication, the combination
between Zigbee and Wifi is used for reasons of consumption optimization, while taking advantage of the high computing capability of the ARM Cortex-A53 microcontroller. For the energy consumption, the interval is corrected as follows: 200–4000 mAh. Thanks to the collaboration and management provided by the different scenarios, we obtain an average of 500 mAh for each node over a rehabilitation exercise session.

VI. CONCLUSION

A new WVSN platform based on the proposed WVSN node is presented for patient rehabilitation supervision. This system is a low-cost, lightweight, and easy-to-use monitoring system that can be used at home or in a hospital. The proposed solution relies on a novel association of the RPi 3 with the Kinect using a hybrid communication protocol (ZigBee, Wi-Fi). It also meets the localization and tracking requirements of patient rehabilitation supervision in terms of data processing and data communication. The functionality and the performance of the prototype have been proved by testing it in a real environment. The experimental results of two tracking methods (one with depth information) used to localize the indoor benchmarks of patients, together with a challenging RSSI dataset, demonstrate the effectiveness of the proposed scheme: introducing the Kinect on the used node for depth and visual feature extraction, and accurate 3D detection of a complex activity and interaction. In particular, the overall nodes are installed in a clinical room.

FUTURE WORK

A WVSN platform is proposed to monitor and track elderly or athlete patients. This platform significantly reduces the costs of processing and communication, as well as the energy consumption, which has a significant effect on WVSN performance. Several studies, as explained in Section 2, have shown that WVSN nodes are more efficient than the traditional architecture of client/server-based WSNs or the simple use of cameras. Among these works, the RSSI and the depth distance extracted from skeletal joint coordinates (Zk) are fused to precisely localize the patients indoors. The RSSI seeks the most adequate node FOV of the MS Kinect to activate that node, and it ensures collaboration among the three used nodes. This study proposes an intelligent strategy based on a new association of the RPi 3 with the MS Kinect for data capturing, processing, and transmission, as well as intelligent fusion of distance localization processing in a WVSN node. As perspectives, we exploited this platform to centralize all

REFERENCES
[3] E. E. Stone and M. Skubic, "Evaluation of an inexpensive depth camera for passive in-home fall risk assessment," in Proc. Int. Conf. Pervasive Comput. Technol. Healthcare (PervasiveHealth), Dublin, Ireland, May 2011, pp. 71–77.
[4] S. Patel, H. Park, P. Bonato, L. Chan, and M. Rodgers, "A review of wearable sensors and systems with application in rehabilitation," J. NeuroEng. Rehabil., vol. 9, pp. 21:1–21:17, Jan. 2012.
[5] Microsoft Kinect. [Online]. Available: http://www.microsoft.com/enus/kinectforwindows/, Dec. 2013.
[6] G. Q. Mao, B. Fidan, and B. Anderson, "Wireless sensor network localization techniques," Computer Networks, vol. 51, no. 10, pp. 2529–2553, 2011.
[7] S. Rajeev, A. Ananda, C. Mun, and T. Wei, Mobile, Wireless, and Sensor Networks: Technology, Applications, and Future Directions. John Wiley and Sons, 2005.
[8] L. Liu, X. Zhang, and H. Ma, "Optimal node selection for target localization in wireless camera sensor networks," IEEE Trans. Veh. Technol., vol. 59, no. 7, pp. 3562–3576, 2010.
[9] S. Soro and W. Heinzelman, "A survey of visual sensor networks," Adv. Multimedia, vol. 2009, pp. 1–21, 2009.
[10] P. Kulkarni, "SensEye: A multi-tier heterogeneous camera sensor network," Univ. of Massachusetts, Amherst, MA, 2007.
[11] G. Mao, B. Fidan, and B. D. O. Anderson, "Wireless sensor network localization techniques," Comput. Netw., vol. 51, no. 10, pp. 2529–2553, 2011.
[12], [13] A. O. Ercan, D. B. Yang, A. El Gamal, and L. J. Guibas, "Optimal placement and selection of camera network nodes for target localization," in Proc. DCOSS, 2006, pp. 389–404.
[14] Y. Charfi, N. Wakamiya, and M. Murata, "Challenging issues in visual sensor networks," IEEE Wireless Commun., vol. 16, no. 2, pp. 44–49, 2009.
[15] I. F. Akyildiz, T. Melodia, and K. R. Chowdury, "A survey on wireless multimedia sensor networks," IEEE Wireless Commun., vol. 14, no. 6, pp. 32–39, 2007.
[16] F. Yang, E. Tschetter, X. Léauté, N. Ray, G. Merlino, and D. Ganguli, "Druid: A real-time analytical data store," in Proc. ACM SIGMOD Int. Conf. Management of Data, 2014, pp. 157–168.
[17] H. Yan, L. D. Xu, Z. Bi, Z. Pang, J. Zhang, and Y. Chen, "An emerging technology: Wearable wireless sensor networks with applications in human health condition monitoring," J. Manag. Anal., vol. 2, pp. 121–137, 2015. http://dx.doi.org/10.1080/23270012.2015.1029550
[18] R. Dai and I. F. Akyildiz, "A spatial correlation model for visual information in wireless multimedia sensor networks," IEEE Trans. Multimedia, vol. 11, no. 6, pp. 1148–1159, 2009.
[19] J. Webb and J. Ashley, Beginning Kinect Programming with the Microsoft Kinect SDK, 2012, pp. 52, 67–100.
[20] Kinect, Wikipedia. [Online]. Available:
http://en.wikipedia.org/wiki/Kinect, December 2013
of the frames and data with Reconfigurable architecture server of an
[21]. Samuele Gasparrini, EneaCippitelli, Susanna Spinsante and
FPGA board and we explored the dynamic Reconfiguration with our Ennio Gambi, "A Depth-Based Fall Detection System Using a
WVSN nodes. Providing connectivity of de Windows 10 IoT Core on the Kinect Sensor," In the International Journal of Sensors, Volume
RPI 3. 14, Issue 2, pp. 2756-2775, February 2014.
REFERENCES [22]. http://www.raspberrypi.org, about raspberry Pi.
[23]. Microsoft Kinect SDK [Online]. Available:
http://www.microsoft.com/en- us/kinectforwindows/.
[1]. [G. Kwakkel, R. van Peppen, R. C. Wagenaar, S. W. Dauphinee, C. [24]. OpenNI [Online]. Available: http://www.openni.org/
Richards, A. Ashburn, K. Miller, N. Lincoln, C. Partridge, I. [25]. OpenKinect[Online].Available:https://github.com/OpenKinect/libfr
Wellwood, and P. Langhorne, “Effects of augmented exercise eenect/.
therapy time after stroke: A Meta-Analysis,” Stroke, vol. 35, no. [26]. A.Taha1, Hala H. Zayed, M. E. KhalifaandEl.El-Horbaty ,Human
11, pp. 2529–2539, Nov. 2004. Action Recognition based on MSVM and Depth Images, IJCSI
[2]. J. A. Painter, S. J. Elliott, and S. Hudson, “Falls in community- International Journal of Computer Science Issues, Vol. 11, Issue 4,
dwelling adults aged 50 years and older: Prevalence and No 2, July 2014.
contributing factors,” J. Allied Health, vol. 38, no. 4, pp. 201–207, [27]. RoannaLun, A survey of applications and human motion
Jan. 2009. recognition with Microsoft kinect International Journal of Pattern