Automated Vision-based Surveillance System to Detect Drowning Incidents in Swimming Pools
Abstract— At present, swimming pools are built in hotels, sport clubs, schools, and private residences. Although various regulations have been put into place to reduce drowning accidents in some countries, communities still experience many drowning incidents. Accordingly, a real-time system that tracks swimmers in a pool using machine learning techniques and prevents drowning accidents is proposed. The system consists of a Raspberry Pi running the Raspbian operating system, a Pixy camera, an Arduino Nano board, stepper motors, an alarm system, and motor drivers. The proposed system uses a color-based algorithm to locate and rescue swimmers who are drowning; the device then sends an alarm to the lifeguards. To verify the performance of the proposed system, a prototype has been developed, implemented, and tested. The results from experiments indicate that the system has a unique capability to monitor and track swimmers, thereby enabling it to mitigate and curb the number of deaths by drowning.

Keywords— Drowning Detection, Alert System, Real-Time Image Processing, Swimming Pools.

I. INTRODUCTION

In recent years, there has been growing interest in integrating computer vision into swimming pool surveillance systems. Automating such a process provides communities with an efficient way of detecting drowning incidents that may occur while swimming. As sensor technology and image processing algorithms advance day by day, different approaches for handling such emergencies have been proposed. Existing drowning detection technologies can be categorized into two types: vision-based systems [1] and wearable sensor-based systems [2].

Vision-based systems are classified according to the positions of the cameras. They mainly depend on two types of cameras: underwater cameras [3] and above-water cameras [4]. Underwater cameras are mainly used to eliminate interference caused by the water. This method still has a limitation, since the camera might miss incidents taking place above the water. Above-water cameras also have a limitation, in that it is difficult to cover the entire swimming pool with one camera.

Wearable-based systems are also used in this regard to monitor an individual's depth in water, heart rate, and pressure, and to monitor how much time an individual spends there. Wristbands and headbands are two examples of wearable-based systems. When preset values are exceeded, the wearable system sends a wireless alarm signal to the lifeguards. Another device, which monitors the nervous system's activity through electrocardiographic (ECG) signals, has also been proposed. Furthermore, accelerometers and gyroscopes have been used to measure swimmers' behavior in water. The measurements from the above-mentioned sensors are used to trigger the drowning danger alarm. A current limitation of such systems is discomfort, which is especially felt by younger children.

For this project, a hybrid system that automatically detects a drowning person and then sets off an alarm to alert the lifeguards has been developed. The system mainly consists of three modules: a vision module, an event-inference module, and an event-driven module. The vision module is responsible for monitoring and detecting the position of the person who is drowning. The event-inference module is responsible for determining a swimmer's position, velocity, and path of movement. The event-driven module is responsible for initiating the rescue by sending an alarm alerting the lifeguard. This project is a modified version of our project submitted to the Think Science competition, where swimmers' positions, velocities, movement paths, and time under water were collected and used to infer imminent danger.

The main contribution of this project is to develop a system for monitoring a swimming pool to prevent the onset of a drowning incident. The remainder of this paper is organized as follows: the next section surveys current research on the technologies employed in vision-based detection of drowning incidents in swimming pools. Section III presents an overview of the proposed approach. The proposed system is described in Section IV. Experimental results are presented in Section V. Finally, conclusions and plans for future projects are drawn in Section VI.

II. RELATED WORK

Automated vision-based surveillance systems are considered one of the most effective ways of monitoring unusual activity in swimming pools. In this regard, processing the background of input video frames has been proposed. The
Authorized licensed use limited to: VIT University- Chennai Campus. Downloaded on March 07,2022 at 10:16:35 UTC from IEEE Xplore. Restrictions apply.
authors in [5] developed a video surveillance system capable of detecting drowning incidents in swimming pools. The system first models the background; swimmers are then detected by comparing the similarity between a color model and the background model. A drowning detection method based on background subtraction was also presented in [6]. The authors used a Gaussian mixture model to describe the background and the swimmers. Their model was able to eliminate shadows and noise from the foreground objects.

In [7], the authors combined mean-shift clustering and a cascaded boosting learning algorithm for swimmer detection. They proposed three steps in their work: background modeling, swimmer detection, and swimmer tracking. To tackle the background and crowded scenarios at pools, the authors in [8] proposed a set of methods such as background subtraction, de-noising, data fusion, and blob splitting to recognize different swimming activities and to detect the occurrence of early drowning incidents.

Detection of swimmers' body parts was also researched in this regard. In [9], the authors propose a method based on pixel classification for the detection of objects in water. They analyzed the standard color models and proposed YCbCr as the best color model for describing the uniqueness of an object. Measuring swimming biomechanics was proposed in [10]. The authors used computer vision techniques to enhance images, improving clarity and automating the detection of both arms. Off-time surveillance was also researched: in [11], the authors developed two sub-algorithms for intruder detection using a thermal imaging system. The algorithm can detect not only the movement of a

The Pixy camera combines a camera and a microcontroller running an optimized color detection algorithm. It processes input frames at a resolution of 640 x 400 and outputs images at a resolution of 320 x 200 pixels. The camera requires 5 V and consumes 140 mA. The camera is trained to detect the color yellow; it can learn up to seven color signatures. When an object is detected, the camera outputs a block of data that contains the position, width, and height of the object in pixels. The output from the camera is then fed to the Raspberry Pi through a USB connection for further processing. Furthermore, the camera has analog and digital output pins: pin 1 goes high (3.3 V) when an object is detected, and pin 8 outputs a voltage that is linearly proportional to the object's position in the image. The two pins are connected directly to the ElekStar controller. Figure 2 shows the Pixy camera and ElekStar I/O pins.

[Block diagram of the proposed system: Pixy camera and home switches X/Y feeding the ElekStar controller (Arduino Nano), which drives stepper motors X/Y; a level shifter; three relays switching the R/G/B LED strips and the robot; and a speaker driven through an amplifier.]
The Raspberry Pi was attached to two Pixy cameras. The two cameras are oriented at about 90° relative to one another and are also attached to the linear stage controller. A Python script processes the data from the cameras and then commands the linear stage accordingly. Meanwhile, the output is also fed to the level shifter: since the Raspberry Pi GPIO pins work with 3.3 V logic levels while the other devices work with 5 V logic levels, a level shifter is added to interface the communication between them (figures 3 and 4). An LED strip with three colors (red, green, and blue) is used. The green LEDs turn on when the system is switched on. If any abnormal event occurs, the red LEDs on the linear stage turn on. Lastly, the blue LEDs turn on while the claw is working. The strip is interfaced through three relays since it requires 12 V. An audio message is also programmed into the Raspberry Pi to announce whether a person is drowning or not.

The ElekStar linear stage consists of three stepper motors, a controller, two switches, and one robot arm (claw). As shown in figure 5 (a), the claw is equipped with a Vex DC motor and a Vex motor controller. The arm is controlled separately by an Arduino Nano board and is used to handle the rescue tools. The stepper motors as well as the two switches are shown in figure 5 (b). The main purpose of the switches is to bring the motors back to their home coordinates.
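The script's mapping from a detection in the 320 x 200 image to stepper-motor targets on the X/Y stage can be sketched as follows. The usable stage travel and the steps-per-millimeter ratio below are illustrative assumptions, not values reported in this paper; a real deployment would calibrate both against the home switches.

```python
# Pixy output resolution (pixels), as stated in the text.
IMG_W, IMG_H = 320, 200
# Assumed usable stage travel in mm (hypothetical calibration values).
TRAVEL_X_MM, TRAVEL_Y_MM = 520.0, 520.0
# Assumed steps per mm for the stepper drive (hypothetical).
STEPS_PER_MM = 5.0

def pixel_to_steps(px, py):
    """Map an image coordinate to absolute step counts measured from
    the home-switch origin, clamping out-of-range inputs."""
    px = min(max(px, 0), IMG_W - 1)
    py = min(max(py, 0), IMG_H - 1)
    x_mm = px / (IMG_W - 1) * TRAVEL_X_MM
    y_mm = py / (IMG_H - 1) * TRAVEL_Y_MM
    return round(x_mm * STEPS_PER_MM), round(y_mm * STEPS_PER_MM)
```

The controller would then step each motor toward the returned target, returning to the home switches between rescues to cancel accumulated error.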
Fig. 3. Level shifter pins: low-voltage side (G, A4–A1, LV) and high-voltage side (G, B4–B1, HV).

Fig. 4. Raspberry Pi GPIO pins.

Fig. 5. ElekStar linear stage (a) robot arm, (b) stepper motors and home switches.

Fig. 6. ElekStar controller (a) Arduino Nano board and two A4988 stepper motor drivers, (b) A4988 chip.
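The event-inference idea outlined in the introduction, combining position samples, velocity, and time under water, can be sketched as follows. The 10-second submersion threshold and the sampling format are illustrative assumptions only; the paper does not specify the exact inference rule.

```python
# Assumed danger threshold (seconds of continuous submersion); a
# hypothetical value chosen for illustration.
SUBMERGED_ALARM_S = 10.0

def velocity(p0, p1, dt):
    """Mean speed between two (x, y) position samples dt seconds
    apart, useful for judging whether a swimmer has stopped moving."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return ((dx * dx + dy * dy) ** 0.5) / dt

def infer_danger(samples):
    """samples: list of (t_seconds, underwater) observations in time
    order. Returns True once continuous submersion reaches the
    threshold, at which point the alarm module would be triggered."""
    start = None
    for t, under in samples:
        if under:
            if start is None:
                start = t
            if t - start >= SUBMERGED_ALARM_S:
                return True
        else:
            start = None  # swimmer surfaced; reset the timer
    return False
```

In the full system, a positive inference would drive the red LEDs, the audio message, and the rescue arm described above.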
V. EXPERIMENTAL RESULTS

As shown in figure 7, a prototype has been developed in our laboratories to validate the proposed system. The prototype consists of two parts: the electronics and the pool. The electronic parts are assembled on an aluminum frame with physical dimensions of (92 x 92 x 45) cm. The pool is fabricated from glass with physical dimensions of (52 x 52 x 25) cm. To test and demonstrate the laboratory prototype, several experiments were conducted. The purpose of the preliminary experiments was to collect pre-test data and then evaluate the factors that should be considered when implementing such a system in a real pool.
Fig. 7. Hardware architecture of the prototype: (a) front side, (b) right side, (c) left side, (d) back side.

TABLE I. Data returned by the Pixy camera (units in pixels).
B. Swimmer detection and tracking

The second set of experiments aimed at assessing object detection in the pool. The experimental setup is shown in figure 9. Unfortunately, the initial results were unsatisfactory because of false triggering. The problem was solved by fine-tuning the Pixy camera. Pixy processes 60 frames of video per second; we adjusted the frame rate to a value less than 60. In addition, the signature range was tuned to improve detection accuracy. After loading all settings to the Pixy, good results were obtained when the object was at the top-left, top-right, bottom-left, and bottom-right of the image. All results were taken with respect to the Pixy monitor. The results are documented in table II.

system, the number of drownings would be reduced. For future development, the system is currently being improved by attaching an infrared LED to the swimmer's vest.

REFERENCES

[1] W. Lu, Y. Tan, and Y. Peng, "A Vision-Based Approach to Early Detection of Drowning Incidents in Swimming Pools," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 2, pp. 159–178, 2004.
[2] S. Nagalikitha and A. Kiranmai, "Automatic Waist Airbag Drowning Prevention System Based on Motion Information Measured by MEMS Accelerometer and Pressure," International Journal of Emerging Trends in Engineering Research (IJETER), vol. 3, no. 6, pp. 204–206, 2015.
[3] Z. Chi, X. Li, and F. Lei, "A Novel Camera-Based Drowning Detection Algorithm," Advances in Image and Graphics Technologies, Springer Berlin Heidelberg, pp. 224–233, 2015.
[4] N. Salehi, M. Keyvanara, and S. Monadjemmi, "An Automatic Video-based Drowning Detection System for Swimming Pools Using Active Contours," I.J. Image, Graphics and Signal Processing, vol. 8, no. 8, 2016.
[5] A. Kam, W. Lu, and W. Yau, "A Video-based Drowning Detection System," Proceedings of the European Conference on Computer Vision, LNCS, vol. 2353, pp. 297–311, 2002.
[6] F. Lei, W. Xueli, and C. Dongsheng, "Drowning Detection Based on Background Subtraction," Embedded Software and Systems (ICESS '09), International Conference on, IEEE, 2009.
[7] W. Chen, P. Cho, P. Fan, and Y. Yang, "A Framework for Vision-based Swimmer Tracking," International Conference on Uncertainty Reasoning and Knowledge Engineering, pp. 44–47, 2011.
[8] H. Eng and K. Toh, "DEWS: A Live Visual Surveillance System for Early Drowning Detection at Pool," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 2, pp. 196–210, 2008.
[9] P. Vladimir and V. Papić, "Features Analysis for Tracking Players in Water Polo," 16th International Conference on Automatic Control, Modelling & Simulation, 2014.
[10] R. Dubois, D. Thiel, and D. James, "Using Image Processing for Biomechanics Measures in Swimming," Procedia Engineering, vol. 34, pp. 807–812, 2012.
[11] W. Wong, J. Hui, C. Loo, and W. Lim, "Off-time Swimming Pool Surveillance Using Thermal Imaging System," International Journal of Innovative Computing, Information and Control, vol. 9, no. 3, pp. 366–371, 2013.
Fig. 9. Experimental setup for underwater object detection: (a) Pixy camera alignment, (b) home switch alignment, (c) testing the alarm unit, (d) placing the pool in the middle.

TABLE II. Data returned by the Pixy camera when the object is under water.