This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2019.2903482, IEEE Sensors Journal.
1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
activities through a decision-tree algorithm [17]. This approach is probably the most promising; it effectively utilizes edge computing, and more and more researchers are working in this area. Yet, this model still faces some challenges, such as designing a simple preprocessing algorithm, compressing raw data into small feature values, and extending battery usage time.

In a previous study, the third approach was used to develop a wearable vest incorporating a 3-axial accelerometer, a gyroscope, and Bluetooth. The acceleration and angular velocity of human activities were collected in real time by a sensor board in the vest. The corresponding signal vector norms of acceleration and angular velocity were calculated and then sent to a mobile phone, where fall detection and alarm were actualized using the k-nearest-neighbor (kNN) algorithm [18]. The following issues were identified during the use of this system: (1) the transmission distance of Bluetooth is short (approximately 10 m) and its ability to penetrate walls is weak, so the elderly are required to carry mobile phones to use the system, which is inconvenient and not suitable for use in elderly communities; (2) data need to be sent by the wearable vest in real time, which leads to great Bluetooth power consumption (a 600-mAh lithium battery can only provide power for 6 hours); (3) there are signal drift errors on the gyroscope, and voltage fluctuations are generated by the 3-axial accelerometer in the motion state. These factors affect the accuracy and effectiveness of the fall detection algorithm. In this work, low-power ZigBee and the MPU6050, which can collect and cache data in sleep mode, are utilized for the sensor board. In addition, an interrupt-driven motion data acquisition and transmission algorithm is carefully designed. At the receiving end, the server normalizes the received raw data according to the range specification and caches them into a sliding window. The cached data are mapped into a bitmap, and FD-CNN is designed to identify falls from activities of daily living (ADLs).

The rest of the paper is organized as follows. In Section II, the human motion model and the algorithm for human activity data acquisition and transmission are presented. In Section III, the technology for normalization and visualization of human activity data is introduced. The FD-CNN is introduced in detail in Section IV. The experiment and its analysis are discussed in Section V. The conclusion and our future works are presented in Section VI.

II. LOW POWER MOTION SENSING TECHNOLOGY

In this section, the hardware structure of the sensor board integrated with the MPU6050 and ZigBee is introduced first, and a low-power algorithm for human activity data acquisition and transmission is designed.

A. Motion Sensing Technology

In the course of a movement, a human body's acceleration and angular velocity change in real time. In a study by Erdogan et al. [19], the upper torso of the human body (i.e., above the waist and below the neck) was proved to be the optimal place to acquire acceleration data and to distinguish falls from other daily activities. Considering the comfort of the wearable device and the reliability of the system, the sensor board is placed at the waist of the custom vest, and a human motion model based on the Cartesian coordinate system is established in accordance with the placement direction of the sensor (Fig. 1).

Fig.1. The motion model of human activity. (Left panel: the acceleration coordinate system; right panel: the angular velocity coordinate system.)

In Fig. 1, the left-hand panel shows the coordinate system of acceleration, and a_x, a_y, and a_z denote the acceleration along the x, y, and z axis, respectively. The right-hand panel shows the angular velocity of the human body and the coordinate system of 3-dimensional angles. ω_x, ω_y, and ω_z denote the angular velocity of the body around the x, y, and z axis, respectively.

Fig.2 The sensor board. (a) The architecture of the sensor board: the CC2530 microcontroller exchanges data and control signals with the RF transceiver, the MPU6050 (accelerometer and gyroscope), a real-time clock, LEDs, and a push button, with interrupt lines from the sensors and the button, and is powered by a battery-fed regulator. (b) The size of the sensor board.

Based on the human motion model, a sensor board is designed which mainly consists of a CC2530 microcontroller, an MPU6050 sensor, a ZigBee radio-frequency module, and a power management module. Fig. 2 shows the architecture of the sensor board. The dimensions of the sensor board are 30 mm × 30 mm × 5 mm, slightly bigger than a one-dollar coin. The size of the module makes it suitable for placement at the waist of the wearable vest to collect motion data. The transmission rate of the ZigBee module is 115200 baud, with a maximum transmission distance of 100 m. A 3-axial MEMS gyroscope, an accelerometer, and an expandable digital motion processor (DMP) are integrated into the MPU-6050. The measurement range of the gyroscope is up to ±2000°/s, while the range of the accelerometer is up to ±16 g. Since the frequency of human activity is usually less than 20 Hz, the sampling frequency is set to 100 Hz to collect the user activity data from the accelerometer and gyroscope.

The MPU-6050 contains a 1-kB first-in, first-out (FIFO) register as a data cache. Meanwhile, its DMP can read data from the gyroscope and accelerometer in sleep mode and cache them into the FIFO buffer. Since there is no need to access the microcontroller and the ZigBee radio frequency, this mode has low power consumption. Besides, the MPU6050 has a programmable interrupt system which supports free-fall, zero-motion, and FIFO overflow interrupts [20]. Among them, the MPU6050 uses the FF_THR register to set the threshold of free fall. If the measured 3-axial accelerations are within the threshold, the sampling value is ignored; otherwise, a free-fall interrupt will be triggered and a flag will be generated. The MPU6050 accelerometer has a configurable digital high-pass filter (DHPF); a zero-motion interrupt will be generated when the acceleration read by the DHPF is less than the threshold. The ZRMOT_THR register is used to set the zero-motion threshold.
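The interrupt-driven behaviour described in Section II (free-fall, zero-motion, and FIFO overflow interrupts driving the F1/F2 states) can be summarized as a small state machine. The sketch below is an illustrative host-side model under my own naming (`SensorNode`, `send`), not the actual CC2530 firmware:

```python
# Sketch of the interrupt-driven F1 (zero-motion) / F2 (active) logic.
# Interrupt callbacks stand in for MPU6050 interrupt lines; `send`
# stands in for the ZigBee transmission to the server.
from collections import deque

class SensorNode:
    def __init__(self, fifo_size=200, send=print):
        self.fifo = deque(maxlen=fifo_size)  # FIFO cache: oldest samples dropped first
        self.state = "F1"                    # start in the low-power zero-motion state
        self.send = send

    def on_sample(self, sample):
        # The DMP caches every sample; deque(maxlen=...) models FIFO
        # overflow by discarding the oldest entry (first in, first out).
        self.fifo.append(sample)

    def on_free_fall(self, duration_ms):
        # Free-fall interrupt: enter the active state; if it persists
        # longer than 40 ms, report the buffered window to the server.
        self.state = "F2"
        if duration_ms > 40:
            self.send(list(self.fifo))

    def on_fifo_overflow(self):
        if self.state == "F2":
            # Active state: flush the window to the server, then rest.
            self.send(list(self.fifo))
            self.fifo.clear()
            self.state = "F1"
        # In F1 the deque has already updated itself first in, first out.

    def on_zero_motion(self, duration_s):
        if duration_s > 1:
            self.state = "F1"  # stay in / return to the low-power mode
```

This mirrors the paper's design intent: ZigBee transmission only happens on fall-like or buffer-full events, so the radio stays idle during quiet periods.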
F1: zero-motion state. 3-axial acceleration and angular velocity data are collected and cached into the FIFO buffer by the DMP. If a FIFO overflow interrupt is triggered, the data in the FIFO buffer will be updated by the DMP according to the first-in, first-out principle, and the zero-motion interrupt flag will be reset. Otherwise, if the zero-motion interrupt lasts longer than 1 s, the FIFO buffer will be updated, and the zero-motion interrupt flag is reset to continue the low-power mode.

F2: active state. If a free-fall interrupt occurs and it lasts more than 40 ms, the data stored in the FIFO buffer will be sent to the server via ZigBee. Otherwise, if the FIFO overflow interrupt occurs, the module will send the data in the FIFO buffer to the server via ZigBee, clean the FIFO data, and return to F1. Else the FIFO buffer will be updated according to the first-in, first-out principle, and then return to F1.

Fig.4 Schematic illustration of mapping 3-axial data into an RGB bitmap: the X-, Y-, and Z-axial accelerations fill the pixels from (0,0) onward, and the X-, Y-, and Z-axial angular velocities fill the remaining pixels up to (19,19).

At the server, the sliding window caches human activity data for 2 seconds, i.e., 200 3-axial accelerations and 200 3-axial angular velocities (400 pieces of data in total). If the 3 axes of the human motion model are considered as the 3 channels of an RGB image, the values of the X-, Y-, and Z-axial data can be mapped into the values of the R, G, and B channels of an RGB image, respectively. Namely, each 3-axial datum can be converted into an RGB pixel. The 400 pieces of 3-axial data cached in the sliding window can thus be viewed as a bitmap with a size of 20 × 20 pixels. Fig. 4 schematically illustrates the way to map 3-axial accelerations and angular velocities into the bitmap. In Fig. 4, the first 200 data of the bitmap are 3-axial accelerations, and the latter 200 are 3-axial angular velocities. Namely, the data from (0, 0) to (9, 19) are 3-axial accelerations, and those from (10, 0) to (19, 19) are 3-axial angular velocities.

Because the range of image data is from 0 to 255, and the ranges of the accelerometer and gyroscope data are different, the data of acceleration and angular velocity are normalized to the range of 0 ~ 255 according to Equation (1):

Result = 255 × (Value + Range) / (2 × Range)    (1)

Range is the range value of the accelerometer or gyroscope, and Value is the measured value. The calculated Result is a float value, and it is converted down to an integer value. For example, consider an acceleration datum in which a_x, a_y, and a_z are 8.302, -9.532, and 0.962, respectively. The range value of the accelerometer is 16 g. The calculated Result is as follows:

result_x = (8.302 + 16) * 255 / 32 = 193
result_y = (-9.532 + 16) * 255 / 32 = 51
result_z = (0.962 + 16) * 255 / 32 = 135

That is, the calculated Result is (193, 51, 135).

Fig.6 (a) Acceleration of Fall. (b) Angular velocity of Fall. (c) Acceleration of Walk. (d) Angular velocity of Walk.
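Equation (1) and the 20 × 20 bitmap mapping can be sketched together as follows. This is my reading of the text, not the authors' released code; the function names are mine, and the ±16 g and ±2000 °/s ranges come from Section II:

```python
def normalize(value: float, rng: float) -> int:
    """Equation (1): map a reading in [-rng, rng] to an integer 0..255."""
    return int(255 * (value + rng) / (2 * rng))

def to_pixel(sample, rng):
    """One 3-axial sample (x, y, z) -> one (R, G, B) pixel."""
    x, y, z = sample
    return (normalize(x, rng), normalize(y, rng), normalize(z, rng))

def window_to_bitmap(accel, gyro, accel_rng=16.0, gyro_rng=2000.0):
    """200 acceleration + 200 angular-velocity samples -> 20x20 RGB bitmap.
    Rows 0-9 hold accelerations, rows 10-19 hold angular velocities."""
    pixels = [to_pixel(s, accel_rng) for s in accel] + \
             [to_pixel(s, gyro_rng) for s in gyro]
    assert len(pixels) == 400, "a 2 s window at 100 Hz yields 400 samples"
    return [pixels[row * 20:(row + 1) * 20] for row in range(20)]
```

With the worked example above, `normalize(8.302, 16)`, `normalize(-9.532, 16)`, and `normalize(0.962, 16)` give the pixel (193, 51, 135).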
and angular velocity of fall are quite different from those of other ADLs. Hence, falls can be distinguished from other ADLs by selecting appropriate classification algorithms.

Fig.7 The bitmaps of ADLs and Fall. (a) The bitmap of Fall. (b) The bitmap of Walk. (c) The bitmap of Jog. (d) The bitmap of Jump. (e) The bitmap of Go-upstairs. (f) The bitmap of Stand-up. (g) The bitmap of Go-downstairs. (h) The bitmap of Sit-down.

Fig. 7 shows the corresponding bitmaps after converting the data of fall and daily activities in Fig. 6 into RGB pixels. From Fig. 7, it can be found that the bitmap of fall is different from those of daily activities, which provides the basis for using a classification algorithm based on image recognition to identify falls from ADLs. CNN shows excellent accuracy for image detection and recognition, and LeNet [21], operating directly on 32 × 32 pixel images, has succeeded in character recognition. Therefore, a CNN-based algorithm for fall detection (namely, the FD-CNN algorithm) is designed according to the architecture of LeNet.

IV. FALL SENSING ALGORITHM

Fig. 8 shows the architecture of FD-CNN, which has two convolutional layers, two subsampling layers, and two fully-connected layers (not including the input).

A. The Architecture of FD-CNN

In Fig. 8, a convolutional layer is labeled as Cx, a subsampling layer is labeled as Sx, and a fully connected layer is labeled as Fx, where x is the index of the layer. The input is a 3-channel 20 × 20 RGB image, and each pixel value is normalized according to Equation (2). That is, each pixel value ranging from 0 to 255 is normalized to the range from -1 to 1. This not only speeds up the network training, but also improves the accuracy of the network.

scaled_value = value × 2 / 255 − 1    (2)

The C1 layer is a convolutional layer with 32 feature maps. Before convolution, each input datum is expanded by a padding edge, so the size of the expanded input is 22x22x3. The padding edge can preserve more features of the input bitmap. The size of the convolutional kernel is 5x5x3, and only one common convolution kernel is used in each feature map. The convolution kernel strides one pixel at a time, so the size of each feature map is 18x18. Each convolution kernel has 5x5x3 join parameters and a bias, namely, 76 parameters, and each unit is activated according to Equation (3) (Rectified Linear Unit, ReLU) after the convolution. C1 contains 2432 trainable parameters.

f(x) = x, if x > 0; f(x) = 0, if x ≤ 0    (3)

The S2 layer is a sub-sampling layer with 32 feature maps. Before sub-sampling, each feature map of C1 is expanded by a padding edge, so the size of each expanded feature map is 20x20. Each unit in each feature map is connected to a 2x2 neighborhood in the corresponding expanded feature map, and max-pooling is adopted for sub-sampling. Since the receptive fields are non-overlapping, the size of each feature map is 10x10.

The C3 layer is a convolutional layer with 64 feature maps. Before convolution, each feature map of S2 is expanded by a padding edge, so the size of each expanded feature map is 12x12. The size of the convolutional kernel is 5x5x32, and only one common convolution kernel is used in each feature map. The convolution kernel strides one pixel at a time, so the size of each feature map in C3 is 8x8. Each convolution kernel has 5x5x32 join parameters and a bias, namely, 801 parameters, and each unit is activated according to ReLU after the convolution. C3 contains 51264 trainable parameters.

The S4 layer is a sub-sampling layer with 64 feature maps. Before sub-sampling, each feature map of C3 is expanded by a padding edge, so the size of each expanded feature map is 10x10. Each unit in each feature map is connected to a 2x2 neighborhood in the corresponding expanded feature map, and max-pooling is adopted for sub-sampling. Since the receptive fields are non-overlapping, the size of each feature map is 5x5.

F5 contains 512 units and is fully connected to S4. Each unit is activated according to ReLU after the full connection. Dropout is adopted to prevent over-fitting during network training.

Finally, the output layer is fully connected to F5 with 8 units. Softmax is used to compute the probability of each unit, and the one with the maximum probability will be the predicted result.

B. FD-CNN Training

The public SisFall [22] and MobiFall [23] datasets were first extracted and transformed according to the coordinate system of Fig. 1, so as to ensure the data share the same coordinate system. Besides, the transformed data were normalized by the range specification and mapped into bitmaps to form the open dataset. Among the open dataset, 1000 sets each of Walk, Jump, Jog, Go-upstairs, and Go-downstairs, and 500 sets of Fall were extracted from MobiFall; 1000 sets each of Sit-down and Stand-up, and 500 sets of Fall were extracted from SisFall. Due to the shortage of falls, the forward falls, backward falls, left falls, and right falls were not distinguished from each other; they were all classified as falls. In addition, the experimental environment shown in Fig. 9 was used to obtain data of the 7 types of daily activities and falls from 20 subjects to build the experimental dataset, which included 200 sets each of Walk, Jump, Jog, Go-upstairs, Go-downstairs, Sit-down, Stand-up, and Fall
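The input scaling of Equation (2), the ReLU activation of Equation (3), and the softmax output described in Section IV-A can be written out in a minimal sketch (plain Python, my own helper names; not the authors' training code):

```python
import math

def scale_pixel(value: int) -> float:
    """Equation (2): map a pixel value 0..255 to the range -1..1."""
    return value * 2 / 255 - 1

def relu(x: float) -> float:
    """Equation (3): f(x) = x for x > 0, else 0."""
    return x if x > 0 else 0.0

def softmax(logits):
    """Output layer: probabilities over the 8 activity classes."""
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The predicted class is then simply the index of the maximum softmax probability, as the text states.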
Fig.8 The architecture of FD-CNN. Input data: 20*20*3. C1 feature maps: 32@18*18 (Convolution Layer 1, kernel size 5*5*3*32, stride 1). S2 feature maps: 32@10*10 (Maxpool Layer 1, kernel size 2*2*1, strides 1*2*2*1). C3 feature maps: 64@8*8 (Convolution Layer 2, kernel size 5*5*32*64, stride 1). S4 feature maps: 64@5*5 (Maxpool Layer 2, kernel size 2*2*1, strides 1*2*2*1). F5: fully connected layer with 512 units (Fully Connected Layer 1). Output: 8 classes (Fully Connected Layer 2).
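The feature-map sizes and parameter counts quoted for Fig. 8 can be checked with simple shape arithmetic. The sketch below (pure Python, my own helper names) reproduces the 18/10/8/5 spatial sizes and the 2432 and 51264 parameter counts from Section IV-A:

```python
def conv_output(size, kernel, pad, stride=1):
    """Spatial output size of a convolution over a padded input."""
    return (size + 2 * pad - kernel) // stride + 1

def conv_params(kernel, in_ch, out_ch):
    """Trainable parameters: one kernel of weights plus one bias per map."""
    return (kernel * kernel * in_ch + 1) * out_ch

# C1: 20x20x3 input padded to 22x22, 5x5x3 kernels, 32 maps
c1_size = conv_output(20, 5, pad=1)       # 18
c1_params = conv_params(5, 3, 32)         # (75 + 1) * 32 = 2432

# S2: pad 18 -> 20, non-overlapping 2x2 max-pooling
s2_size = (c1_size + 2) // 2              # 10

# C3: pad 10 -> 12, 5x5x32 kernels, 64 maps
c3_size = conv_output(s2_size, 5, pad=1)  # 8
c3_params = conv_params(5, 32, 64)        # (800 + 1) * 64 = 51264

# S4: pad 8 -> 10, non-overlapping 2x2 max-pooling
s4_size = (c3_size + 2) // 2              # 5
```

Note that each pooling stage pads by one pixel before the 2x2 max-pool, which is why 18 -> 10 and 8 -> 5 rather than the usual halving.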
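Section V evaluates each activity class by accuracy, sensitivity, and specificity. These per-class metrics follow from one-vs-rest confusion counts; the sketch below is a generic illustration, not the authors' evaluation script:

```python
def class_metrics(tp: int, fp: int, fn: int, tn: int):
    """Per-class accuracy, sensitivity (recall), and specificity
    from one-vs-rest confusion counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity
```

For fall detection, sensitivity measures how many true falls are caught, while specificity measures how rarely ADLs are misreported as falls.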
specificity of FD-CNN are 2.71%, 2.71%, and 0.38% higher, respectively, than those of the Random Forest algorithm, which is the best one in Weka, and 19.01%, 17.46%, and 3.68% higher than those of the SMO algorithm, which is the worst one in Weka. Meanwhile, the FD-CNN algorithm running on a PC with a GTX1080 graphics card spends only 0.13 s to classify, which fully fits the need of real-time fall detection.

TABLE Ⅲ
THE COMPARISON OF DIFFERENT ALGORITHMS WITH FD-CNN.
Algorithm       Accuracy  Avg. Sensitivity  Avg. Specificity  Test Time (s)
FD-CNN          98.61%    98.62%            99.80%            0.13
Lazy IBK        89.61%    88.73%            98.54%            11.5
Naive Bayes     87.28%    82.73%            97.92%            0.27
Bayes Net       82.72%    88.01%            98.04%            0.62
Random Forest   95.90%    95.91%            99.42%            10.18
Random Tree     81.20%    81.12%            96.32%            0.06
Bagging         93.12%    93.64%            98.73%            0.04
Ripper          86.34%    86.31%            97.30%            1.94
SMO             79.60%    81.16%            96.12%            0.2

TABLE Ⅳ
THE RESULT OF THE ONLINE TEST.
Activity       Accuracy  Sensitivity  Specificity
Fall           99.77%    100.00%      99.74%
Walk           100.00%   100.00%      100.00%
Jog            99.54%    100.00%      99.47%
Jump           99.31%    96.77%       99.73%
Go-upstairs    98.85%    95.91%       99.22%
Go-downstairs  98.85%    92.98%       99.73%
Stand-up       99.31%    93.87%       100.00%
Sit-down       99.31%    100.00%      99.19%
Average        97.47%    97.44%       99.63%

Six graduate students were invited to wear the vest integrating the sensor board to carry out the online experiment. Table Ⅳ shows that the average accuracy is 97.47%, while the average sensitivity and specificity are 97.44% and 99.63%, respectively. Besides, the sensitivity and specificity of the fall detection are 100.00% and 99.74%, respectively.

The sensor board was integrated with a 600-mAh battery in order to test its power efficiency. Firstly, the sensor board was set to send the data stored in the FIFO buffer every 0.8 seconds to simulate the continuous motion of the human body. The test proved that the sensor board could work continuously for more than 30 hours in this mode. By contrast, the same 600-mAh battery in reference [18] could only work for 6 hours. Secondly, two subjects (one male, one female) were selected to put on a wearable vest integrating the sensor board and carry out their daily work in the laboratory from 8 a.m. to 10 p.m. every day, in order to test the power consumption of the sensor board in daily work situations. The test results showed that the sensor board could work continuously for more than 10 days, namely, for more than 140 hours of wear. These experiments show that the interrupt-driven, ZigBee-based activity sensor board is a low-power-consumption system, which is suitable for activity perception and fall detection in elderly communities.

C. Discussion

Even though the inertial sensor has been widely used in most wearable devices, it has non-negligible measurement noise. El-Sheimy et al. [24] used the Allan variance to model and analyze inertial sensors. The results show that the quantization noise is the prominent error term at short cluster times, whereas the drift rate-ramp term dominates at long cluster times. To address the measurement noise, reference [25] introduced a Kalman filter to preprocess the raw data so as to reduce noise, and then used a Bayes network to distinguish falls from 5 kinds of ADLs. The experiment showed that it distinguished simulated falls from ADLs with an accuracy of 95.67%, while sensitivity and specificity were 99.0% and 95.0%, respectively. However, FD-CNN achieves higher accuracy; in particular, its average sensitivity and specificity of fall detection are up to 100%. The reason is that the normalization used in this paper according to Equations (1) and (2) can reduce the gross influences of measurement noise and anomalous time series. For example, Equation (2), which transforms the pixel values of acceleration from the range 0~255 to -1~1, can mitigate the effects of measurement noise. Hence, it improves the accuracy of fall detection. Besides, the padding edge added at the convolutional layers makes sure that each pixel is calculated at least twice, so more features of the bitmap can be preserved. ReLU, the activation function in FD-CNN, has the advantages of sparse activation, fewer vanishing-gradient problems, and efficient computation. It took only 0.13 seconds to do the test on a Lenovo ThinkCentre M6200t with an i5 CPU, 8 GB of memory, and a GTX-970 graphics card. In addition, the effectiveness of deep ANNs has been demonstrated in many fields besides image classification, such as natural language processing and transfer learning [26]. Consequently, studies on deep-learning-based solutions for human activity recognition (HAR) via wearable sensors have multiplied over the past few years [27]. For example, Ordonez [28] introduced deep convolutional and Long Short-Term Memory (LSTM) recurrent neural networks for multimodal wearable activity recognition, which mainly focused on recognizing modes of locomotion and postures, especially sporadic gestures (such as open/close door, open/close fridge, etc.). Frédéric et al. [29] carried out experiments on the OPPORTUNITY [30] and UniMiB-SHAR [31] datasets, and proved the effectiveness of hybrid deep-learning architectures involving convolutional and LSTM layers for HAR. However, when periodic activities (such as fall, stand up, go down, etc.) and sporadic gestures (e.g., open drawer) are considered together, its accuracy is about 92.21%, which is lower than that of the method presented in this paper. Additionally, the experimental results from reference [28] showed that a 2-second sliding window is better than 1- and 3-second windows, which is one of the reasons that the size of the sliding window in this paper is 2 seconds.

VI. CONCLUSION

In this paper, an interrupt-driven, ZigBee-based sensor board is designed to realize low-power human activity data acquisition and transmission. Meanwhile, inspired by the idea of 3-channel RGB image coding, the 3-axial acceleration and angular velocity data are mapped into an RGB bitmap, and a fall detection CNN is designed to distinguish falls from ADLs, even though the existing technologies for fall detection based on inertial sensors mostly use traditional machine learning. The experimental results prove that the average accuracy of our proposed technology is 98.61%, while its average sensitivity and specificity are 98.62%
and 99.80%, respectively. It offers the advantages of high accuracy for fall detection, low power consumption, the long communication distance of ZigBee, and so on. Hence, it is very suitable for fall detection in the elderly community. In the future, NB-IoT and edge computing technologies will be studied to design and construct a sensor board based on low-power logistic network technology.

REFERENCES
[1] P. Kannus, J. Parkkari, S. Koskinen et al., "Fall-induced injuries and deaths among older adults," The Journal of the American Medical Association, vol. 281, no. 20, pp. 1895-1899, 1999.
[2] State Statistical Bureau, "The sixth national population census of the people's republic of China," Chinese Journal of Family Planning, vol. 19, no. 8, pp. 511-512, 2011.
[3] N. Pannurat, S. Tiemjarus, and E. Nantajeewarawat, "Automatic fall monitoring: a review," Sensors, vol. 14, no. 7, pp. 12900-12936, 2014.
[4] A. Buke, F. Gaoli, W. Yongcai, S. Lei, and Y. Zhiqi, "Healthcare algorithms by wearable inertial sensors: a survey," China Communications, vol. 29, no. 4, pp. 7-15, 2015.
[5] Y. Zigel, D. Litvak, and I. Gannot, "A method for automatic fall detection of elderly people using floor vibrations and sound: proof of concept on human mimicking doll falls," IEEE Transactions on Biomedical Engineering, vol. 56, no. 12, pp. 2858-2867, 2009.
[6] A. Yazar, F. Erden, and A. E. Cetin, "Multi-sensor ambient assisted living system for fall detection," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '14), pp. 1-3, May 2014.
[7] M. Yu, A. Rhuma, S. M. Naqvi, L. Wang, and J. Chambers, "A posture recognition-based fall detection system for monitoring an elderly person in a smart home environment," IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 6, pp. 1274-1286, 2012.
[8] A. Ariani, S. J. Redmond, D. Chang, and N. H. Lovell, "Simulated unobtrusive falls detection with multiple persons," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3185-3196, 2012.
[9] B. U. Toreyin, E. B. Soyer, I. Onaran, and E. E. Cetin, "Falling person detection using multisensor signal processing," EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 149304, 2008.
[10] N. Twomey, T. Diethe, X. Fafoutis, A. Elsts, R. McConville, P. Flach, and I. Craddock, "A comprehensive study of activity recognition using accelerometers," Informatics, vol. 5, no. 27, pp. 1-2, 2018.
[11] A. Bulling, U. Blanke, and B. Schiele, "A tutorial on human activity recognition using body-worn inertial sensors," ACM Computing Surveys, vol. 46, pp. 1-33, 2014.
[12] C. Becker, L. Schwickert, S. Mellone, F. Bagalà, et al., "Proposal for a multiphase fall model based on real-world fall recordings with body-fixed sensors," Zeitschrift für Gerontologie und Geriatrie, vol. 45, no. 8, pp. 707-715, 2012.
[13] H. Gjoreski, S. Kozina, M. Gams, and M. Lustrek, "RAReFall: real-time activity recognition and fall detection system," in Proceedings of the IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS '14), pp. 145-147, Budapest, Hungary, March 2014.
[14] M. Benocci, C. Tacconi, E. Farella, L. Benini, L. Chiari, and L. Vanzago, "Accelerometer-based fall detection using optimized ZigBee data streaming," Microelectronics Journal, vol. 41, no. 11, pp. 703-710, 2010.
[15] G. Shi, C. S. Chan, W. J. Li, et al., "Mobile human airbag system for fall protection using MEMS sensors and embedded SVM classifier," IEEE Sensors Journal, vol. 9, no. 5, pp. 495-503, 2009.
[16] C. Wang, W. Lu, M. Narayanan, D. Chang, and S. Lord, "Low-power fall detector using triaxial accelerometry and barometric pressure sensing," IEEE Transactions on Industrial Informatics, vol. 12, no. 6, pp. 2302-2311, 2016.
[17] J. Yuan, K. K. Tan, T. H. Lee, and G. C. H. Koh, "Power-efficient interrupt-driven algorithms for fall detection and classification of activities of daily living," IEEE Sensors Journal, vol. 15, pp. 1377-1387, 2015.
[18] J. He, C. Hu, and X. Y. Wang, "A smart device enabled system for autonomous fall detection and alert," International Journal of Distributed Sensor Networks, vol. 2016, Article ID 2308183, pp. 1-10, 2016.
[19] S. Z. Erdogan and T. T. Bilgin, "A data mining approach for fall detection by using k-nearest neighbour algorithm on wireless sensor network data," IET Communications, vol. 6, no. 18, pp. 3281-3287, 2012.
[20] InvenSense, "MPU-6000/6050 Six-Axis (Gyro + Accelerometer) MEMS Motion Tracking Devices," Available: http://www.invensense.com/mems/gyro/mpu6050.html, accessed March 14, 2016.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[22] A. Sucerquia, J. D. López, and J. F. Vargasbonilla, "SisFall: a fall and movement dataset," Sensors, vol. 17, no. 1, p. 198, 2017.
[23] G. Vavoulas, M. Pediaditis, C. Chatzaki, et al., "The MobiFall dataset: fall detection and classification with a smartphone," International Journal of Monitoring and Surveillance Technologies Research, vol. 2, no. 1, pp. 44-56, 2014.
[24] N. El-Sheimy, H. Hou, and X. Niu, "Analysis and modeling of inertial sensors using Allan variance," IEEE Transactions on Instrumentation and Measurement, vol. 57, pp. 140-149, 2008.
[25] J. He, S. Bai, and X. Wang, "An unobtrusive fall detection and alerting system based on Kalman filter and Bayes network classifier," Sensors, vol. 17, no. 6, p. 1393, 2017.
[26] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: a review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 1798-1828, 2013.
[27] N. Y. Hammerla, S. Halloran, and T. Ploetz, "Deep, convolutional, and recurrent models for human activity recognition using wearables," in Proceedings of IJCAI 2016, pp. 1533-1540, July 2016.
[28] F. J. Ordonez and D. Roggen, "Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition," Sensors, vol. 16, no. 1, p. 115, 2016.
[29] L. Frédéric, S. Kimiaki, A. N. Muhammad, L. Köping, and M. Grzegorzek, "Comparison of feature learning methods for human activity recognition using wearable sensors," Sensors, vol. 18, no. 2, p. 679, 2018.
[30] R. Chavarriaga, H. Sagha, A. Calatroni, S. Digumarti, G. Tröster, J. Millán, and D. Roggen, "The Opportunity challenge: a benchmark database for on-body sensor-based activity recognition," Pattern Recognition Letters, vol. 34, pp. 2033-2042, 2013.
[31] D. Micucci, M. Mobilio, and P. Napoletano, "UniMiB SHAR: a new dataset for human activity recognition using acceleration data from smartphones," arXiv:1611.07688, 2016.

Jian He received the M.S. degree in Computer Software from Northwest University, Xi'an, China, in 2000, and the Ph.D. degree in Computer Software from Xi'an Jiaotong University, Xi'an, China, in 2005. He is an associate professor at the School of Software Engineering, Beijing University of Technology. His research interests include Ubiquitous Computing, Embedded Systems, and HCI.

Zhihao Zhang received the B.S. degree from the Computer College of Shijiazhuang University in 2016. He is a graduate student at the School of Software Engineering, Beijing University of Technology. His research interests include Ubiquitous Computing and Wearable Technology.

Xiaoyi Wang received his B.S. and Ph.D. degrees in computer science and technology from Tsinghua University, Beijing, China, in 2004 and 2010, respectively. He is a lecturer with Beijing University of Technology, China. His research interests include data mining in IoT systems.

Shengqi Yang received the double B.S. degree in mechanical engineering and economics and the M.S. degree in electrical engineering from Peking University, Beijing, China, in 2000 and 2002, respectively, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, USA, in 2006. He is an adjunct professor at the Beijing Advanced Innovation Center for Future Internet Technology, Beijing University of Technology, China. His research interests include IoT, embedded system design, and big data in digital health.