A Novel Indoor Positioning System for Firefighters in Unprepared Scenarios
Fall 10-17-2018
Recommended Citation
Vadlamani, Vamsi Karthik. "A Novel Indoor Positioning System for Firefighters in Unprepared Scenarios." (2018).
https://digitalrepository.unm.edu/ece_etds/452
This Thesis is brought to you for free and open access by the Engineering ETDs at UNM Digital Repository. It has been accepted for inclusion in
Electrical and Computer Engineering ETDs by an authorized administrator of UNM Digital Repository. For more information, please contact
disc@unm.edu.
Vamsi Karthik Vadlamani
Candidate
This thesis is approved, and it is acceptable in quality and form for publication:
Trilce Estrada
Ramiro Jordan
A Novel Indoor Positioning System
for Firefighters in Unprepared Scenarios
by
Vamsi Karthik Vadlamani
THESIS
Master of Science
Computer Engineering
December, 2018
Dedication
To my parents, Sai Lakshmi and Ramam, for their support and encouragement. To my professor, Manel Martinez, and to my colleagues and friends for their support.
Acknowledgments
I would like to thank my advisor, Professor Manel Martinez Ramon, for his support and for sharing his knowledge and expertise throughout my thesis. I would like to thank my co-researcher Manish Bhattarai for sharing his knowledge and helping me collect data. I would like to thank Sophia Thompson for reviewing my thesis and suggesting the required changes. I would like to thank my friend Usha Revathi for working on the visual representations in the thesis and extending her support all the way through. I would like to thank my friend Chaitanya Boggavarupu for helping me with data collection.
A Novel Indoor Positioning System
for Firefighters in Unprepared Scenarios
by
Vamsi Karthik Vadlamani
Abstract
In firefighting environments, thermal imaging cameras are used because of smoke and low visibility, so obtaining relative orientation from camera information alone becomes very difficult. We gained a deeper appreciation of the significance of maintaining a sense of relative orientation to a first responder's ability to navigate successfully when we first visited a first responders' training facility for data collection. The technique that is the subject of this research is twofold: first, we implement a novel optical-flow and gyroscope-based relative orientation estimation; second, we implement a velocity estimation technique that fuses accelerometer data with LIDAR. We also provide insight into, and discuss, the data we use for these implementations.
Contents

List of Figures
Glossary

1 Introduction
1.1 Overview
1.3 Summary

2 SIFT - Scale Invariant Feature Transform
2.1 Introduction
2.3 Summary

3 Lucas-Kanade Optical Flow
3.1 Introduction
3.3 Summary

4 Data
4.1 Introduction
4.2 IMU

5 Relative Orientation Estimation
5.1 Introduction
5.3.2 Results

6 Velocity Estimation
6.1 Introduction
6.4.1 SVR

7 Indoor Position Estimation
7.1 Results
7.4 Conclusion

8 Future Work

Appendices
References
List of Figures

4.1 The IMU, Lidar setup on a single mount to have the same body of reference
4.2 The Thermal Imaging Camera (TIC) we used to conduct our experiments, firefighting qualified
6.3 Least squares estimation of position data in orange, true data in blue
7.3 Algorithm selecting the shortest path out of all available paths

List of Tables

Glossary
Chapter 1
Introduction
1.1 Overview
According to the firefighter fatalities report issued by the U.S. Fire Administration, ninety-one firefighters, including 56 volunteer, 30 career, and five wildland-agency firefighters, died while on duty in 2014. Although the total number of fire incidents across the country is actually decreasing, being a firefighter is becoming more and more dangerous despite significant advances in various areas of science and technology. In the proposed research project, we investigate a new connected and smart infrastructure that allows the development of next-generation first responder coordination protocols. The proposed system will fundamentally augment existing systems used by first responders, with an initiative that couples novel hardware and software components to the firefighters' existing equipment while keeping added weight and additional training minimal. The proposed system will provide a predictive modeling capability that supports incident commanders in evaluating possible alternative actions and choosing the best approach based on their experience and available resources. In the following sections we discuss an indoor positioning system designed for firefighters and a reinforcement learning based approach to achieve smart path planning, which results in minimal risk and maximum reward.
In this thesis, we discuss the technology we developed to provide situational awareness, position information for victims and firefighting officers, and efficient path planning. Indoor location tracking and relative orientation estimation for firefighters are very appealing and helpful tools for reducing first responder fatalities. The current GPS signal structure and signal power levels are barely sufficient for indoor applications. Recent developments in high-sensitivity receiver technology are promising for indoor positioning inside light structures such as wooden frame houses, but generally not for concrete high-rise buildings. Errors due to multipath and noise associated with weak indoor signals limit the accuracy and availability of global navigation satellite systems (GNSS) in difficult indoor environments [1].
There are a few techniques for indoor positioning systems (IPS), such as Bluetooth beacon based triangulation [2, 3], WiFi signal based location extraction [4], structure-from-motion based scene reconstruction, and purely inertial measurement unit (IMU) based location tracking.
The WiFi based indoor localization problem (WILP) is a difficult task in the firefighting environment because the WiFi data are very noisy and highly dependent on the local environmental conditions due to multi-path and shadow fading effects indoors. The WiFi signal distribution is constantly changing depending on various factors, such as human movement and changes in temperature and humidity. In general, location-estimation systems using radio frequency (RF) signal strength function in two phases: an offline training phase and an online localization phase. In the offline phase, a radio map is built by tabulating the signal strength values received from the access points at selected locations in the area of interest. These values comprise a radio map of the physical region, which is compiled into a deterministic or statistical prediction model for the online phase. In the online localization phase, the real-time signal strength samples received from the access points are used to search the radio map and estimate the current location based on the learned model. This simplistic approach poses a serious problem in a dynamic environment with unpredictable movements [5].
These techniques are not suitable for a firefighting environment due to high temperatures and constant condition fluctuations. There are also IMU based techniques that address the IPS problem. However, the use of a strapdown inertial navigation system (INS) and its traditional mechanization as a personal indoor positioning system is rather unrealistic due to the rapidly growing positioning errors caused by accelerometer drift [1]. Even a high-performance INS will accumulate hundreds of meters of positioning error in 30 minutes without GPS updates [1]. Using a standard Structure from Motion (SFM) technique for scene reconstruction and camera pose estimation requires parallax between images. Parallax is the difference in the apparent position of an object due to viewing it from different lines of sight. It is practically impossible to obtain in thermal images because of the camera's low resolution and very limited dynamic range.
The orientation information alone is a powerful tool for firefighters, as their rescue protocol and successful navigation into and out of the building depend on the recollection of their orientation. However, in heavy smoke, heat, and the other hazardous conditions associated with such environments, orientation can become extremely difficult for them to keep track of. We gained a deeper appreciation of the significance of maintaining a sense of relative orientation to a first responder's ability to navigate successfully when we first visited a first responders' training facility for data collection. There are a few existing techniques to estimate relative orientation using the camera input [6, 7]. In these techniques, the relative orientation of the camera is extracted using a vanishing point estimation technique. A vanishing point is a point on the image plane where the projections of lines in 3D space appear to converge. If a significant number of the imaged line segments meet very accurately in a point, this point is very likely to be a good candidate for a real vanishing point [6]. For vanishing point estimation, commonly the Canny edge detector [8] is used, followed by RANSAC [9] to remove the outliers and extract the lines required to estimate vanishing points. However, these techniques cannot be applied to the firefighting scenario because of the lack of edges that can be discerned in thermal imagery.
1.3 Summary
For an indoor positioning system, GPS is not a good solution because of its restricted ability to locate indoor subjects. Beacon based approaches and WILP also do not work well, due to the high temperatures and constant changes in the environment. IMU based techniques are not reliable because the errors accumulate rapidly over time. It is very difficult to obtain relative orientation using vanishing point detection due to the lack of edges and the limited resolution of thermal images.
We estimate the camera movement and obtain the relative orientation estimate using the scale-invariant feature transform (SIFT) [10] for feature extraction and L-K optical flow [11] for flow estimation, and we fuse this with information from the gyroscope. We then assign a velocity to the extracted orientations to obtain the indoor location of firefighters. Once we obtain the paths taken by each firefighter, we use this knowledge to devise a reinforcement learning algorithm that helps in efficient path planning. We implement a series of individual algorithms, each solving its own problem in the context of breaking down the environmental data received from the various sensors. These individual algorithms then cascade into one another, and their individual products are added together to recreate the entire system. Figure 1.1 is the block diagram of the complete implementation.
• We compute the Lucas-Kanade (L-K) [11] optical flow of SIFT keypoint positions
• We estimate velocity using the data from the LIDAR Lite V3 and the accelerometer, with least squares and SVR techniques for data fusion
Figure 1.1: Block diagram of the complete implementation. IR camera frames pass through image enhancement, SIFT feature extraction, L-K optical flow, and 2σ thresholding; the result is fused with gyroscope measurements from the IMU for orientation estimation. LIDAR and accelerometer measurements feed the velocity estimation. Orientation and velocity are combined for indoor position estimation, which in turn feeds the deep learning and reinforcement learning system.
Later, we take as inputs the detections of fire, persons, and objects from our deep learning systems. We apply a reinforcement learning technique to the extracted path and the detections along this path to obtain the most efficient path to be taken by a firefighter during a first-responding scenario. The idea behind reinforcement learning is to make an agent (usually a robot, an automobile, or a software bot) learn from the rewards and penalties it receives based on its actions.
Chapter 2
SIFT - Scale Invariant Feature Transform
2.1 Introduction
Thermal and infrared (IR) imagery does not provide the same visual quality, in terms of sharpness or the level of detail needed for individual object delineation, as the RGB images obtained from normal pin-hole cameras. Intense shifts in the movement and orientation of the view in hand-held cameras, caused by the real-time movements and reactions of the firefighter holding the camera, require an algorithm that is robust against variability in illumination, scale and rotation.
All the above mentioned conditions can be managed by using SIFT feature detection [10] in the relative orientation estimation process. The SIFT algorithm is a distillation of pixels through four stages: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint descriptor computation. After each stage, the pixels that are retained become invariant to scale, location, and orientation. If there are two images that have approximately the same
content, the SIFT descriptors for these images will be similar even if one image has been changed in scale, orientation, or rotation with respect to the other. In our methodology we extract the SIFT keypoints of the frames, as we assume that the keypoints of two subsequent images will be approximately the same.
SIFT feature extraction primarily consists of the stages listed above. Each stage of this feature extraction makes the features highly robust and invariant to scale and orientation. We discuss each stage in the following subsections.
In this step we primarily focus on obtaining scale invariance, i.e., the features obtained when an object in an image is captured from different distances must not vary. To achieve this we construct a scale space of the given image, take differences of the images at all scales, and do this over several octaves. Each octave starts from an image progressively reduced to half the size of the previous one. Each scale within an octave is the image progressively blurred a number of times. To construct a scale space as in figure 2.2, we apply Gaussian blurring progressively, which means that we convolve the given image with a Gaussian kernel. Mathematically it can be represented as follows.
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) \qquad (2.2)$$
Now we need to take the difference of these Gaussians at every scale, between the image and its blurred versions. This is approximately equivalent to taking the Laplacian of Gaussian.
Regarding the different parameters, the paper [12] gives some empirical data, which can be summarized as: number of octaves = 4, number of scale levels = 5, initial $\sigma = 1.6$, and $k = \sqrt{2}$ as optimal values.
After scale-space extrema detection is done, the resulting images $D(x, y, \sigma)$ are processed further to eliminate any points that have low contrast or are poorly localized along an edge. The initial scrutiny is to locate approximate maxima/minima by comparing each pixel with the pixels around it in the current image and in the images above and below it at different scales in the octave. Once we obtain approximate maxima/minima, we then analyze for subpixel maxima/minima. Subpixel here refers to locations at a finer resolution than the pixel grid. We can
find subpixel extrema by using a Taylor series expansion around the approximate keypoint extracted from the previous step. Mathematically it can be represented as:
$$D(\mathbf{x}) = D + \frac{\partial D^T}{\partial \mathbf{x}} \mathbf{x} + \frac{1}{2} \mathbf{x}^T \frac{\partial^2 D}{\partial \mathbf{x}^2} \mathbf{x} \qquad (2.4)$$
We can find the extreme points of this equation (differentiate and equate to zero). On solving, we get the subpixel keypoint locations. If the intensity at such a location is below a threshold, we reject the keypoint. According to the paper [12] the threshold is 0.03. This is called the contrast threshold.
The difference of Gaussians usually has a high response along edges, and these edge responses need to be removed. A 2 × 2 Hessian matrix is used to compute the principal curvatures; whenever an edge is present, one eigenvalue is much larger than the other, as in the Harris corner detector. Taking the ratio of these eigenvalues and applying a threshold to this ratio is called edge thresholding; according to the paper [12] the threshold is 10.
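As a rough illustration of this test, the ratio can be checked through the trace and determinant of the 2 x 2 Hessian. This is a minimal sketch, not the thesis implementation; the finite-difference scheme for the Hessian is an assumption.

import numpy as np

def passes_edge_test(D, x, y, r=10.0):
    # 2x2 Hessian of the DoG image D at (x, y) from finite differences (assumed scheme)
    dxx = D[y, x + 1] - 2 * D[y, x] + D[y, x - 1]
    dyy = D[y + 1, x] - 2 * D[y, x] + D[y - 1, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1] - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy ** 2
    if det <= 0:
        return False  # curvatures of opposite sign: reject as edge-like
    # keep the keypoint only if the curvature ratio is below (r + 1)^2 / r, with r = 10 as in [12]
    return tr ** 2 / det < (r + 1) ** 2 / r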
Figure 2.3: Calculating HoG and assigning the orientation to key points
2.3 Summary
The video feed is given as input to the SIFT feature extraction algorithm, and the keypoint positions obtained are forwarded to the next module of our implementation. Lucas-Kanade optical flow takes the keypoint positions as input and helps us identify the movement of the features across frames.
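A minimal sketch of this keypoint extraction step is given below, assuming OpenCV; the constructor name depends on the OpenCV version (cv2.SIFT() in 2.x, as in Appendix A, cv2.SIFT_create() in recent releases), and the video filename is a placeholder.

import cv2
import numpy as np

cap = cv2.VideoCapture('thermal_feed.avi')   # placeholder filename
sift = cv2.SIFT_create()                     # cv2.SIFT() on OpenCV 2.x

ret, frame = cap.read()
while ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # keypoint pixel positions handed to the optical-flow module
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    ret, frame = cap.read()
cap.release()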
Chapter 3
Lucas-Kanade Optical Flow
3.1 Introduction
$$I(x, y, t) = I(x + \Delta x,\, y + \Delta y,\, t + \Delta t), \qquad U = \Delta x / \Delta t, \quad V = \Delta y / \Delta t \qquad (3.1)$$
where $x, y$ are pixel positions, $t$ is time, and $\Delta$ represents the change in the respective quantity. $U$ and $V$ are the horizontal and vertical velocities, respectively; they are also called flow vectors in some applications. Expanding $I(x + \Delta x, y + \Delta y, t + \Delta t)$ in a first-order Taylor series and dividing by $\Delta t$ yields
$$I_x U + I_y V + I_t = 0 \qquad (3.3)$$
Let us consider a pixel of interest $p$ with pixels $q_1, q_2, \ldots, q_n$ around it; then we can perform least squares. We can express the process with the help of the following equations, where $I_x(q_i)$, $I_y(q_i)$, $I_t(q_i)$ are the partial derivatives of the image $I$ with respect to position $x$, $y$ and time $t$, evaluated at the point $q_i$. The equations (3.4) can be written in the form $Av = b$ as follows:
$$A = \begin{bmatrix} I_x(q_1) & I_y(q_1) \\ I_x(q_2) & I_y(q_2) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}, \qquad v = \begin{bmatrix} V_x \\ V_y \end{bmatrix}, \qquad b = \begin{bmatrix} -I_t(q_1) \\ -I_t(q_2) \\ \vdots \\ -I_t(q_n) \end{bmatrix} \qquad (3.5)$$
From (3.5) we can see that there are only two unknowns but many equations; thus the system is overdetermined. The L-K algorithm reaches an optimal solution by using the least squares principle. It solves the resulting $2 \times 2$ system as follows:
$$A^T A v = A^T b, \qquad v = (A^T A)^{-1} A^T b \qquad (3.6)$$
$$\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum_i I_x(q_i)^2 & \sum_i I_x(q_i) I_y(q_i) \\ \sum_i I_y(q_i) I_x(q_i) & \sum_i I_y(q_i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_i I_x(q_i) I_t(q_i) \\ -\sum_i I_y(q_i) I_t(q_i) \end{bmatrix} \qquad (3.7)$$
In the above equation, the summations run over $i = 1, \ldots, n$, where $n$ is the number of pixels we choose to consider around the pixel of interest.
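A minimal numpy sketch of this closed-form solution for a single window of pixels follows; it assumes the spatial and temporal derivatives at the pixels $q_1, \ldots, q_n$ are already available as arrays.

import numpy as np

def lk_flow_for_window(Ix, Iy, It):
    """Solve (3.6)-(3.7) for one window: Ix, Iy, It are 1-D arrays of
    derivatives sampled at the pixels q_1..q_n around the pixel of interest."""
    A = np.column_stack((Ix, Iy))          # n x 2 system matrix
    b = -It                                # right-hand side
    # v = (A^T A)^{-1} A^T b, solved in a numerically stable way
    v = np.linalg.lstsq(A, b, rcond=None)[0]
    return v                               # [Vx, Vy], the flow vector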
This problem can be solved using other criteria. In particular, we can use the so-called Structural Risk Minimization (SRM) principle [13]. This principle gives rise, in particular, to the well-known Support Vector Machines for classification [14] and for regression [15]. This approach has the advantage of having excellent generalization properties with small sample sets and of being robust to outliers thanks to its implicit cost function, which is a combination of Vapnik's original $\varepsilon$-insensitive cost function and the Huber robust cost function [16, 17]. We can also use Gaussian Process Regression, which employs maximum likelihood over a Bayesian model [18]. This method offers the advantage of providing a probabilistic model for the likelihood of the observations, which can give more information to the user, for example a confidence interval for the estimation.
3.3 Summary
Using the above procedure we can find the optical flow given a sequence of images. In our research we used SIFT keypoint locations as our pixels of interest. In the following chapters we detail how we used the velocities obtained from L-K optical flow to track the relative orientation of firefighters.
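A minimal sketch of this pairing with OpenCV's pyramidal L-K implementation is shown below; the frame and keypoint arrays are assumed to come from the previous module, and the window, pyramid, and termination parameters mirror those in Appendix A.

import cv2
import numpy as np

LK_PARAMS = dict(winSize=(10, 10), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def flow_at_keypoints(prev_gray, next_gray, pts):
    """Track SIFT keypoint positions pts (shape (n, 1, 2), float32) from
    prev_gray to next_gray and return per-keypoint flow in pixels/frame."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **LK_PARAMS)
    mask = status.ravel() == 1
    good_new = new_pts[mask].reshape(-1, 2)
    good_old = pts.reshape(-1, 2)[mask]
    return good_new - good_old            # columns are (U, V)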
Chapter 4
Data
4.1 Introduction
In this experiment we have extracted data from various sensors and a thermal imaging
camera. We have used Bosch XDK110 device to extract IMU data like accelerometer
data, gyroscope data. We have also used Garmin V3 LIDAR to obtain the position
information for velocity estimation. We have used a MSA 5200HD2 Thermal Imaging
Camera (TIC) Camera to obtain infrared video feed. In the following section we
would like to briefly describe about the data collection.
4.2 IMU
We used an IMU attached to the camera in order to fuse its data with the data we obtain from the other sensors. The Bosch XDK110 is a cross-domain development kit that allows rapid prototyping of sensor-based products and applications. The XDK110 is an integrated product with multiple Micro-Electromechanical Systems (MEMS) sensors. It comes with an integrated development environment, which we use to flash the required program onto the XDK110. The XDK110 contains an accelerometer (BMA280), gyroscope (BMG160), magnetometer (BMM150), inertial measurement unit (BMI160), and various other sensors. However, we only collect acceleration (accelerometer) and rotation (gyroscope) information in all three axes. The mount we used to bind the LIDAR and IMU to the same body frame of reference can be seen in figure 4.1.
Figure 4.1: The IMU, Lidar setup on a single mount to have the same body of reference
We also use the LIDAR Lite V3 for velocity estimation. This device measures distance by calculating the time delay between the transmission of a near-infrared laser signal and its reception after reflecting off of a target; this is translated into distance using the known speed of light. The device supports the I2C and PWM protocols; we used the I2C protocol to interface and communicate with it.
We used an Arduino UNO board, which communicates over the I2C protocol, as an interface to obtain the data. The code is presented in the appendix of this thesis. We communicate with the LIDAR through the Arduino to collect the data and make it available on the serial interface of the computer. We then used an application called "CoolTerm" to capture the data placed on the serial interface.
According to our observations, the LIDAR data are sampled at 60 samples/sec on average, and the uneven sampling of the data points across time results in noise that is not i.i.d., has non-zero mean, and is non-Gaussian. Hence we should use an algorithm that accounts for all these factors.
Extensive video data were acquired using an IR MSA 5200HD2 TIC camera, with $N_x = 320$ pixels, where $N_x$ is the number of pixels in the horizontal direction, and $\alpha = 49°$, where $\alpha$ is the horizontal Field of View (FOV). This camera is a multipurpose firefighting tool designed for search and rescue and structural firefighting and can be seen in figure 4.2. It uses an uncooled microbolometer vanadium oxide (VOx) detector, which comprises a 320x240 Focal Plane Array (FPA) with a pitch of 38 um and a spectral range of 7.5 to 13.5 um. The resolution is a necessary aspect for capturing the features used for orientation estimation and the IPS in this project. It records the image with a 320x240 focal plane array sensor and has the ability to record imagery in two different modes, i.e., low and high sensitivity modes, generating 76,000 pixels of image detail in both. The output video is in NTSC format with a frame rate of up to 30 fps and a scene temperature range of up to 560 degrees Celsius (1040 degrees Fahrenheit).
Figure 4.2: The Thermal Imaging Camera (TIC) we used to conduct our experiments, firefighting qualified
The dataset used in this thesis was recorded at the Computer Engineering Building, University of New Mexico, Albuquerque, and the Santa Fe Firefighting Facility, located in Santa Fe, New Mexico. Of the extensive data we collected, we used for this experiment 4 videos that involve a person holding the camera in hand and navigating through the indoors of the Computer Engineering building. The setup involves attaching the IMU and LIDAR to the body frame of the camera. Hence we are able to obtain the gyroscope data, accelerometer data, LIDAR data, and the videos within one single frame of reference.
Chapter 5
Relative Orientation Estimation
5.1 Introduction
The orientation information alone is a powerful tool for firefighters, as their rescue protocol and successful navigation into and out of the building depend on the recollection of their orientation. However, in heavy smoke, heat, and the other hazardous conditions associated with such environments, orientation can become extremely difficult for them to keep track of. We gained a deeper appreciation of the significance of maintaining a sense of relative orientation to a first responder's ability to navigate successfully when we first visited a first responders' training facility for data collection. There are a few existing techniques to estimate relative orientation using the camera input [6, 7]. In these techniques, the relative orientation of the camera is extracted using a vanishing point estimation technique. A vanishing point is a point on the image plane where the projections of lines in 3D space appear to converge. If a significant number of the imaged line segments meet very accurately in a point, this point is very likely to be a good candidate for a real vanishing point [6]. For vanishing point estimation, commonly the Canny edge detector [8] is used, followed by RANSAC [9] to remove the outliers and extract the lines required to estimate vanishing points. However, these techniques cannot be applied to the firefighting scenario because of the lack of edges that can be discerned in thermal imagery.
Thermal and infrared (IR) imagery does not provide the same visual quality, in terms of sharpness or the level of detail needed for individual object delineation, as the RGB images obtained from normal pinhole cameras. Intense shifts in the movement and orientation of the view in hand-held cameras, caused by the real-time movements and reactions of the firefighter holding the camera, require an algorithm that is robust against variability in illumination, scale and rotation. All the above mentioned conditions can be managed by using SIFT feature detection [10] in the relative orientation estimation process.
Usually, L-K optical flow implementations use corners and blobs as areas of interest. However, in thermal images it is very difficult to find corners, edges, and blobs. Hence, we find SIFT keypoints and use the SIFT keypoint locations to perform the optical flow algorithm. Once the SIFT keypoint positions are given as interest points to the L-K optical flow estimation algorithm, we obtain the vertical and horizontal velocities of the keypoint movement in pixels/frame. We further process the horizontal velocity to obtain the orientation of the firefighter. We pass it through a thresholding stage to remove outlier velocities and also velocities due to foreground activity, such as moving human subjects, moving flames, or hot temperatures.
The firefighting scenario is highly dynamic, with rapidly changing environmental conditions. The imagery captured in real time by first responders may lead to imperfect orientation estimation because of the high levels of foreground movement being captured. To correct for these high levels of foreground movement, we apply a threshold to reject higher velocities. We observed the histogram of the data with foreground objects to set the threshold for foreground activity.
After thresholding, we compute the mean of the remaining flow vectors, which gives the average movement of the camera.
Figure 5.2: Model of the camera rotation: a SIFT keypoint $S_o$ on the image plane of the prior state moves to $S_n$ on the image plane of the new state as the camera rotates.
The estimated velocities are used here to compute the change of angle $\theta$ per frame in the horizontal plane of the camera movement, in order to estimate the relative orientation of the firefighter. Figure 5.2 shows the model for this estimation.
Consider the initial position of the camera, from which we extract the SIFT keypoint positions $S_o$ in the corresponding image plane $P_o$. When there is a change of angle $\theta$, the camera is in a new position whose corresponding image plane $P_n$ contains the SIFT keypoint positions $S_n$. The camera has a horizontal Field of View $\alpha$ and a horizontal resolution of $N_x$ pixels. If the FOV is much smaller than $180°$, the angular resolution can be approximated by a constant $R_x = \alpha / N_x$ degrees per pixel. This is used to convert the velocity $U$ to an angular velocity. Here we consider $U$ as the tangential velocity of the camera rotation in the horizontal plane, and the corresponding angular velocity
is then
$$\theta_c = \frac{\alpha\, U}{N_x} \qquad (5.1)$$
which is expressed in degrees per frame. The angular velocity is then integrated across frames to compute the current orientation. This is represented on the screen as a compass. This information is very robust and highly accurate when the number of features obtained from the images is high. To maintain the robustness of the system, we also consider the orientation information obtained from the gyroscope.
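A minimal sketch of this conversion and integration follows, assuming the horizontal FOV of 49 degrees and 320 horizontal pixels from Chapter 4; the input is the thresholded mean horizontal flow produced by the previous module.

FOV_DEG = 49.0      # horizontal field of view (alpha)
NX = 320            # horizontal resolution in pixels

def integrate_camera_orientation(u_per_frame):
    """u_per_frame: mean horizontal flow per frame (pixels/frame) after thresholding.
    Returns the relative orientation theta_c (degrees) after each frame."""
    theta = 0.0
    history = []
    for u in u_per_frame:
        theta += u * FOV_DEG / NX    # equation (5.1): degrees per frame
        history.append(theta)
    return history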
5.3.2 Results
In this section we show some results that we obtained, showing the deflection of the compass for relative orientation estimation (figure 5.3).
The setup currently has a gyroscope attached to the body frame of the camera. We collect data from the gyroscope at 100 Hz. The data collected are a measure of angular velocity in mdeg/sec. We take the data from the gyroscope and median-filter every 10 samples to obtain consistent data. These data are further processed and integrated to obtain the change in angle $\theta_g$ of the camera frame. The information obtained from the gyroscope is very robust to rapid changes and reflects the body frame movement accurately.
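A minimal sketch of this gyroscope processing, mirroring the steps in Appendix B (median filter over blocks of 10 samples, then integration), is shown below; the 100 Hz rate and mdeg/s units follow the text, and the explicit time step is an assumption.

import numpy as np

def gyro_orientation(gyro_mdeg_s, fs=100.0, block=10):
    """gyro_mdeg_s: raw angular-velocity samples in mdeg/s at fs Hz.
    Median-filter every `block` samples, then integrate to get theta_g in degrees."""
    w = np.asarray(gyro_mdeg_s, dtype=float) / 1000.0          # mdeg/s -> deg/s
    n = (len(w) // block) * block
    w_filt = np.median(w[:n].reshape(-1, block), axis=1)       # one value per block
    dt = block / fs                                            # seconds per filtered sample
    return np.cumsum(w_filt) * dt                              # theta_g in degrees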
Since we have orientation information from two sources, we would like to fuse the information and relay the necessary information to the firefighters. Orientation information alone is a very powerful tool for firefighter rescue operations. We would like to relay this information on the firefighter's video feed in the form of a compass and enable them to navigate through the building with ease. The relative orientation is currently obtained by a simple fusion of the two information sources as follows:
$$\Theta = \lambda\, \theta_g + (1 - \lambda)\, \theta_c \qquad (5.2)$$
where $w_g[i]$, $w_c[i]$ are the corresponding filter coefficients. The optimization criterion is the minimization of the expectation of the squared error with respect to all parameters, i.e.
$$\min_{w, \lambda} L(w, \lambda) \qquad \text{subject to } 0 \le \lambda \le 1 \qquad (5.4)$$
where $w_g$ and $w_c$ are column vectors containing the parameters $w_g[i]$, $w_c[i]$, and $\boldsymbol{\theta}_g[n-1]$, $\boldsymbol{\theta}_c[n-1]$ are column vectors containing the samples $\theta_g[i]$, $\theta_c[i]$, $n - P \le i \le n - 1$.
The gradient descent approach needs a time estimate of the expectation, but the data are clearly nonstationary. Among the classical approaches that can be applied here, Recursive Least Squares (RLS) and Least Mean Squares (LMS) [19] seem to be the best ones. In the simulations, the LMS approach is taken, which consists of approximating the expectations by the present data samples. Hence, the adaptation algorithm is
$$\begin{aligned} w_c[n] &= w_c[n-1] - \mu\, \sigma(\lambda)\, \frac{e[n]\, \boldsymbol{\theta}_c[n-1]}{\|\boldsymbol{\theta}_c[n-1]\|^2} \\ w_g[n] &= w_g[n-1] - \mu\, (1 - \sigma(\lambda))\, \frac{e[n]\, \boldsymbol{\theta}_g[n-1]}{\|\boldsymbol{\theta}_g[n-1]\|^2} \\ \lambda[n] &= \lambda[n-1] - \mu\, e[n] \left( \theta_g[n] - w_g^T \boldsymbol{\theta}_g[n-1] - \theta_c[n] + w_c^T \boldsymbol{\theta}_c[n-1] \right) \sigma'(\lambda) \end{aligned} \qquad (5.8)$$
Chapter 6
Velocity Estimation
6.1 Introduction
Velocity estimation is a very important step in the IPS routine. To achieve this task we use the data from two sensors, an accelerometer and a LIDAR. We use the Garmin LIDAR Lite V3 and interface it to an Arduino UNO board for data collection, while the accelerometer readings are obtained by a stand-alone IoT device, the Bosch XDK110. We use nonlinear least squares estimation to fuse the data obtained from the accelerometer and the LIDAR.
From the LIDAR we obtain 60 samples/sec on average, with uneven sampling in the time domain, which causes noise that is not i.i.d. The velocity can be estimated by differentiating the distance data. However, since the LIDAR data are very noisy, it is practically impossible to depend completely on these data. The LIDAR data can be seen in figure 6.1.
We also observed that the noise does not have zero mean and is not independent and identically distributed (i.i.d.). Hence, we would also like to apply Support Vector Regression (SVR), observe the performance of the algorithm, and compare the results against the nonlinear least squares method.
Figure 6.2: Initial conditions estimation from the convolution of a sigmoid function with the orientation information. Here the thresholding amplitude is 150.
As we discussed, the measurements need to follow the rules of real-world processes; we verify this with the help of the orientation data obtained from the previous module. We perform a non-overlapping moving-window convolution of a sigmoid function with the orientation information. We then detect peaks in the resulting signal by thresholding it. The peak detection and thresholding can be seen in figure 6.2. After obtaining the time stamps at which the initial conditions change, we use them in the nonlinear least squares and SVR techniques to obtain a better estimate of the position data.
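A minimal sketch of this change-point detection follows; it assumes a sigmoid-shaped kernel applied to non-overlapping windows of the orientation signal and a peak threshold of 150 as in figure 6.2, with the window length and kernel scale chosen only for illustration.

import numpy as np

def turn_timestamps(theta, win=50, threshold=150.0):
    """theta: orientation samples. Correlate non-overlapping windows with a
    sigmoid-shaped kernel and return the sample indices whose response exceeds
    the threshold, i.e. the time stamps where the initial conditions change."""
    t = np.linspace(-6, 6, win)
    kernel = 1.0 / (1.0 + np.exp(-t))                      # sigmoid kernel
    n = (len(theta) // win) * win
    windows = np.asarray(theta[:n], dtype=float).reshape(-1, win)
    response = np.abs(windows.dot(kernel))                 # one value per window
    return np.where(response > threshold)[0] * win         # indices of detected turns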
We implemented a least squares technique to minimize the error by fusing the data from two sensory sources, the LIDAR and the accelerometer. We have position data from the LIDAR and acceleration data from the accelerometer. To fuse the information, we modeled the fusion in the form of a Volterra-type equation:
$$S_l = w_0 + w_1 t + w_2 t^2 + w_3 t^3 + S_a, \qquad \text{where} \quad S_a = \int_0^t \int_0^{\tau'} a(\tau)\, d\tau\, d\tau' \qquad (6.1)$$
We take this model and implement a nonlinear least squares fit. From the least squares model, the error to be modeled is
Figure 6.3: Least squares estimation of position data in Orange, true data in Blue
$$E = S_l - S_a \qquad (6.4)$$
Here $t_1, t_2, t_3, \ldots, t_n$ are the time stamps of the input $E$, and $D$ is the degree of nonlinearity. We obtained the estimate shown in figure 6.3 after applying the initial conditions to the LIDAR and accelerometer data.
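A minimal sketch of this fit between change points follows, assuming ldm is the median-filtered LIDAR distance, dp is the doubly integrated acceleration $S_a$ over the same samples, and gamma is a small ridge term as in Appendix B.

import numpy as np
from numpy.linalg import inv

def fuse_segment(ldm, dp, l1, l2, D=4, gamma=0.05):
    """Fit S_l - S_a with a degree-(D-1) polynomial in time over samples [l1, l2)
    and return the fused position estimate for that segment (equation 6.1)."""
    S, Sa = ldm[l1:l2], dp[l1:l2]
    t = np.arange(l1, l2, dtype=float)
    X = np.vstack([t ** i for i in range(D)])            # D x n design matrix
    w = inv(X.dot(X.T) + gamma * np.identity(D)).dot(X.dot(S - Sa))
    return X.T.dot(w) + Sa                               # fused estimate of S_l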
This problem can be solved using other criteria. In particular, we can use the so-called Structural Risk Minimization (SRM) principle [13]. This principle gives rise, in particular, to the well-known Support Vector Machines for classification [14] and for regression [15]. This approach has the advantage of having excellent generalization properties with small sample sets and of being robust to outliers thanks to its implicit cost function, which is a combination of Vapnik's original $\varepsilon$-insensitive cost function and the Huber robust cost function [16, 17]. We can also use Gaussian Process Regression, which employs maximum likelihood over a Bayesian model [18]. This method offers the advantage of providing a probabilistic model for the likelihood of the observations, which can give more information to the user, for example a confidence interval for the estimation.
6.4.1 SVR
We also implemented SVR [15] on the same data obtained from the LIDAR to get a good estimate of the position data. SVR is very robust to noisy data and is a highly reliable estimation method even when the noise is not well behaved.
In this method we have a special criterion that minimizes the linear empirical error together with a term that is monotonic with the complexity [15]. This is a problem that can be solved with SVR as follows:
$$L_p(W, \xi, \xi^*, \epsilon) = \frac{1}{2}\|W\|^2 + C \sum_{i=1}^{N} (\xi_i + \xi_i^*) \qquad (6.6)$$
subject to
$$y_i - W^T X_i - b \le \epsilon + \xi_i, \qquad -y_i + W^T X_i + b \le \epsilon + \xi_i^*, \qquad \xi_i, \xi_i^* \ge 0$$
By applying Lagrange multipliers to the above problem, we can solve a dual with the parameters $\alpha_i$, $\alpha_i^*$, $\mu_i$, $\mu_i^*$; the Lagrangian is maximized with respect to the multipliers and minimized with respect to the primal variables. Taking into account the KKT complementarity conditions for the products of constraints and Lagrange multipliers [20], this leads to the minimization of
$$L_D = \frac{1}{2} (\alpha - \alpha^*)^T K (\alpha - \alpha^*) - (\alpha - \alpha^*)^T \mathbf{y} + \epsilon\, (\alpha + \alpha^*)^T \mathbf{1} \qquad (6.7)$$
constrained to
$$0 \le \alpha_n, \alpha_n^* \le C \qquad (6.8)$$
The resulting estimator is
$$f(x) = \sum_i (\alpha_i - \alpha_i^*)\, K(X_i, x) + b \qquad (6.9)$$
The dot product in Euclidean space can be replaced by a kernel dot product, or Mercer kernel function, in a higher-dimensional Hilbert space [21]. See also [22] for a complete description of kernels and kernel methods.
One popular kernel is the p-th order polynomial kernel; another is the Gaussian (radial basis function) kernel, with the form
$$K(X_i, X_j) = \exp\left(-\frac{1}{2\sigma^2}\|X_i - X_j\|^2\right) \qquad (6.11)$$
By differentiating the position estimate $S_{SVR}$ obtained from the SVR, we obtain the velocity information. Since there are sudden rises and falls in $S_{SVR}$, we apply a threshold to the obtained velocities $V_{SVR}$ so that the process obeys the laws of physics. The thresholded velocities are replaced by the median of the obtained velocities. In figure 6.5 we can observe the need for thresholding by looking at the unnatural velocities detected without it; we can also see the effect of thresholding.
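A minimal sketch of this step with scikit-learn follows, using the SVR parameters that appear in Appendix B; the velocity limit of 50 is taken from that code, and the time indexing is illustrative.

import numpy as np
from sklearn.svm import SVR

def svr_velocity(segment, t0, velocity_limit=50.0):
    """Smooth one position segment with SVR, differentiate it, and replace
    physically implausible velocities by the median velocity."""
    t = np.arange(t0, t0 + len(segment), dtype=float).reshape(-1, 1)
    svr = SVR(C=1000, epsilon=0.0, gamma='auto')
    s_svr = svr.fit(t, segment).predict(t)     # smoothed position S_SVR
    v = np.gradient(s_svr)                     # velocity V_SVR
    v[np.abs(v) > velocity_limit] = np.median(v)
    return s_svr, v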
Chapter 7
Indoor Position Estimation
Indoor position estimation is the most crucial part of the research and is the final product of a series of algorithms. From the previous modules we obtained orientation and velocity information by applying various procedures. In this module we use that information to reconstruct the path by combining the velocity with the orientation information; in other words, we assign a velocity to each orientation estimate and integrate them.
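A minimal sketch of this dead-reckoning step is given below, assuming per-frame speeds and orientations (in degrees) produced by the previous modules; it mirrors the final loop in Appendix B.

import math

def reconstruct_path(speeds, thetas_deg):
    """Advance by each speed along the current orientation and integrate
    to obtain the (x, y) positions of the indoor path."""
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for v, theta in zip(speeds, thetas_deg):
        x += v * math.cos(math.radians(theta))
        y += v * math.sin(math.radians(theta))
        path.append((x, y))
    return path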
7.1 Results
Figures 7.1 and 7.2 show the results of two of the experiments at the University of New Mexico. The solid blue line represents the estimated trajectory of the subject carrying the camera with the mounted sensors. The image also shows the floor plan of the building, showing that the trajectory is consistent with the structure of the building. The estimated accuracy with respect to the actual start and end points is less than 2 meters.
Therefore, by efficiently using the data from the camera, gyroscope, and LIDAR, and applying the procedures described above, we are able to estimate the indoor position of the firefighter.
We are also building a system to label the objects detected in the camera feed using another module, which consists of a series of deep learning frameworks and algorithms. We are going to use this information in the future for Simultaneous Localization and Mapping (SLAM) and reinforcement learning.
Q-learning mainly consists of an agent, a set of states $S$, and a set of actions per state, $A$. By performing an action $a \in A$, the agent transitions from state to state. The agent has to choose an action that results in a state with positive reward. The goal of the agent is to maximize the total reward, which is typically defined in the reward matrix $R$. There is also a terminal state, which stops the agent from taking further actions, thereby limiting the reward to some value. To maximize the reward, Q-learning uses the following approach.
The weight for a step from a state $\Delta t$ steps into the future is calculated as $\gamma^{\Delta t}$, where $\gamma$ (the discount factor) is a number between 0 and 1 ($0 \le \gamma \le 1$) and has the effect of valuing rewards received earlier more highly than those received later (reflecting the value of a "good start"). $\gamma$ may also be interpreted as the probability to succeed (or survive) at every step $\Delta t$.
The algorithm, therefore, has a function that calculates the quality of a state-
action combination:
$$Q : S \times A \to \mathbb{R} \qquad (7.1)$$
Then, at each time $t$, the agent selects an action $a_t$, observes a reward $r_t$, and enters a new state $s_{t+1}$ (which may depend on both the previous state $s_t$ and the selected action), and $Q$ is updated. The core of the algorithm is a simple value-iteration update, using a weighted average of the old value and the new information:
$$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left( r_t + \gamma \max_a Q(s_{t+1}, a) \right)$$
At the beginning, $Q$ is initialized to some arbitrary value, and as the algorithm runs the Q-values are updated.
where $r_t$ is the reward from the current state and $\alpha$ is the learning rate. If $s_{t+1}$ is a terminal state, the algorithm stops updating. This process repeats for several iterations, and finally the Q-values encode the best possible actions and states that the agent can take.
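A minimal tabular Q-learning sketch of this update follows; the random exploration policy, grid of states, reward matrix R, transition function, and parameter values are illustrative assumptions, not the thesis configuration.

import numpy as np

def q_learning(R, terminal, n_states, n_actions, step,
               alpha=0.1, gamma=0.8, episodes=500):
    """R[s, a]: reward matrix; terminal: set of terminal states;
    step(s, a) -> next state. Returns the learned Q table."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = np.random.randint(n_states)
        while s not in terminal:
            a = np.random.randint(n_actions)          # exploratory action
            s_next = step(s, a)
            # weighted average of the old value and the new information
            Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (R[s, a] + gamma * Q[s_next].max())
            s = s_next
    return Q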
7.4 Conclusion
Figure 7.3: Algorithm selecting the shortest path out of all available paths
The LIDAR-based estimation does not have a systematic, cumulative error, but it is very noisy due to the different obstacles encountered by the laser. The IMU is much less noisy, but it is subject to cumulative errors. In computing the rotation, the image data are also prone to cumulative errors, but they are less noisy than the IMU and provide a better estimation when the number of features is high.
Figure 7.4: Algorithm selecting efficient and shortest path by avoiding fire
These data are combined to estimate the absolute position of the firefighter from an initial point with an error of less than 2 meters. This result is competitive with other published results [24, 3, 1, 25, 2, 5]. Those works [24, 3, 1, 25, 2, 5] require prior information about the scenario, such as the positions of Bluetooth beacons or WiFi routers, or better image resolution to perform a 3D reconstruction and location mapping. Our approach is autonomous, since these external sources are not needed to produce an estimate.
Such sources can be combined with the methodology presented in this thesis in order to improve the position estimate and further reduce the systematic errors, but in firefighting scenarios this side information is usually unavailable. Hence, the IPS algorithm presented in this thesis is ideal for unprepared firefighting scenarios.
Chapter 8
Future Work
Future work includes the use of this methodology for floor plan reconstruction. In order to do that, we need to combine this algorithm with feature recognition strategies that detect objects of interest, such as doors and windows, that are intrinsically associated with walls, ceilings, and floors. We will also implement reinforcement learning based real-time route assistance by integrating the information provided by deep neural networks that will be deployed to detect fire, doors, windows, and other objects of interest. The reinforcement learning pipeline will be used to produce efficient path planning in search and rescue operations, thereby helping firefighters in decision making.
Appendices
Appendix A
Code for orientation estimation of Camera feed
import numpy as np
import cv2
import pandas as pd
import pickle
import cv
from math import hypot, sqrt
import math
import itertools
from sklearn.cluster import AgglomerativeClustering
import matplotlib.pyplot as plt
from sklearn.neighbors import kneighbors_graph

sift = cv2.SIFT()
#cap = cv2.VideoCapture('comb.ASF')
cap = cv2.VideoCapture('exp2.ASF')
#cap = cv2.VideoCapture('2.avi')

import cPickle
from statistics import median
from scipy.stats import mode

MIN_MATCH_COUNT = 10
#pt = (197, 21)
length = 20

# params for ShiTomasi corner detection
feature_params = dict(maxCorners=100,
                      qualityLevel=0.3,
                      minDistance=7,
                      blockSize=7)

# Parameters for lucas kanade optical flow
lk_params = dict(winSize=(10, 10),
                 maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))  # criteria truncated in the extracted source; standard values assumed

# camera intrinsic matrix
cam = [[1803.23, 0, 118.49522241],
       [0, 1801.67, 165.88792572],
       [0, 0, 1]]
from numpy.linalg import inv
K_I = inv(cam)

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Create some random colors
color = np.random.randint(0, 255, (100, 3))
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
key = 0          # assumed initializations; the page defining these was lost in extraction
orient = []
T = []
R = []
previousAngle = 0
currentAngle2 = 0
previousAngle2 = 0
veldump = []
fgbg = cv2.BackgroundSubtractorMOG()
ptix = 0
ptiy = 0
xplot = []
yplot = []
while (1):
    try:
        thr = []
        ret, frame = cap.read()
        h, w, d = frame.shape
        # binarize the previous frame to emphasize structure in the thermal image
        old_gray = cv2.adaptiveThreshold(old_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # method truncated in source
                                         cv2.THRESH_BINARY, 11, 2)
        key = key + 1
        # Create a mask image for drawing purposes
        kp1, des1 = sift.detectAndCompute(old_gray, None)
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_gray = cv2.adaptiveThreshold(frame_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # method truncated in source
                                           cv2.THRESH_BINARY, 11, 2)
        kp2, des2 = sift.detectAndCompute(frame_gray, None)
        matches = flann.knnMatch(des1, des2, k=2)
        good = []
        for m, n in matches:
            if m.distance < 0.7 * n.distance:
                good.append(m)
        kp = np.asarray(kp1)
        #print len(kp)
        kp_pt = []
        i = 0
        for i in range(0, len(kp)):
            kp_pt.append(kp[i].pt)
        kp_pt = np.asarray(kp_pt, dtype=np.float32)
        kp_pt = kp_pt.reshape(len(kp_pt), 1, 2)
        # track the SIFT keypoints from the previous frame into the current frame
        p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, kp_pt, None, **lk_params)  # call truncated in source
        good_new = p1[np.where(st == 1)]
        good_old = kp_pt[np.where(st == 1)]
        veldiff = lambda (p1, p2): ((p1[0] - p2[0]), (p1[1] - p2[1]))
        dp1 = good_old.reshape(len(good_old), 2)
        dp2 = good_new.reshape(len(good_new), 2)
        vel = map(veldiff, zip(dp1, dp2))
        # clip flow components outside [-20, 20] pixels/frame (foreground rejection)
        clus = lambda (p1): (p1 if p1 >= -20 and p1 <= 20 else 0)
        vel = np.asarray(vel)
        veldump.append(vel)
        lu = map(clus, vel[:, 0])
        lv = map(clus, vel[:, 1])
        u = np.mean(lu)
        v = np.mean(lv)
        theta = (u * 2)
        thetav = (v * 2)
        T.append((theta, thetav))
        currentAngle = theta + previousAngle
        orient.append(currentAngle)
        currentAngle2 = theta + previousAngle2
        # draw the compass needle from the integrated orientation
        ptx = int((w - w / 2) + length * math.cos(currentAngle * cv.CV_PI / 180))
        pty = int(21 + length * math.sin(currentAngle * cv.CV_PI / 180))
        pt1x = (ptix + 1 * .8 * math.cos(currentAngle * cv.CV_PI / 180))
        pt1y = (ptiy - 1 * .8 * math.sin(currentAngle * cv.CV_PI / 180))
        print pt1x, pt1y
        previousAngle = currentAngle
        ptix = pt1x
        ptiy = pt1y
        xplot.append(pt1x / 10)
        yplot.append(pt1y / 10)
        cv2.circle(frame, (w - w / 2, 21), 19, color=(255, 255, 255), thickness=1)
        cv2.line(frame, (w - w / 2, 21), (ptx, pty), (255, 255, 255), 1)
        for i, (new, old) in enumerate(zip(good_new, good_old)):
            a, b = new.ravel()
            c, d = old.ravel()
            mask = cv2.line(frame, (a, b), (c, d), (255, 25, 25), 2)
            #frame = cv2.circle(frame, (a, b), 5, (255, 25, 25), 1)
        cv2.imshow('frame', frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break
        # Now update the previous frame and previous points
        old_gray = frame_gray.copy()
        #print (h - 23, 21)
        kp_pt = good_new.reshape(-1, 1, 2)
    except:
        print('entered exception')
        break
tocsv = np.vstack((xplot, yplot))
O = np.asarray(orient)
T = np.asarray(T)
R = np.asarray(R)
fig = plt.figure()
plt.show()
cv2.destroyAllWindows()
cap.release()
Appendix B
Code for Velocity Estimation
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import math

data = pd.read_csv('LOG_007.CSV', delimiter=';')
o = np.load('opt_4.npy')
l1 = pd.read_table('12f.txt', delimiter=',', names=['t', 'd'])
gx = data['bmg160_y[mDeg]']
c = (math.pi / 180)
j = 0.00277778
r = 1 / 360
gx = (gx / 1000)              # mdeg/s -> deg/s
gx = np.asarray(gx)
print len(gx)
gx = gx[:].reshape(-1, 10)
gxg = np.median(gx, axis=1)   # median filter over blocks of 10 samples
print len(gxg)
        # (body of the turn-detection loop; its header is on a page missing from the extracted source)
        #print(turn_idx[k])
        count.append((turn_idx[k], c))
        k = i + 1
        c = 0
count = np.asarray(count)
count
ax = data['bma280_x[mg]']
ax = ax / 1000                # mg -> g
ax = np.asarray(ax)[:].reshape(-1, 10)
apm = np.median(ax, axis=1)
#plt.plot(apm)
vp = np.cumsum(apm)           # velocity from acceleration
#plt.plot(vp)
dp = np.cumsum(vp)            # position from velocity (S_a)
l1 = l1[~l1['d'].isnull()]
print(len(l1))
l1 = l1[l1.d != 1]
print(len(l1))
l1 = l1[l1.d < 2000]
print(len(l1))
ld = l1['d']
print(len(l1))
shape_l = (len(ld) - int(len(ld) / 6) * 6)
if shape_l != 0:
    ldm = np.median(np.asarray(ld[:-shape_l]).reshape(-1, 6), axis=1)
else:
    ldm = np.median(np.asarray(ld[:]).reshape(-1, 6), axis=1)
plt.plot(ldm)
from numpy.linalg import inv

def nonls(l1, l2, D, gama):
    D = D
    gama = gama
    #l1, l2 = 700, 850
    S = ldm[l1:l2]
    Sa = dp[l1:l2]
    X = np.ones(len(S))
    for i in range(1, D):
        x = np.asarray(range(l1, l2)) ** i
        X = np.vstack((X, x))
    X.shape
    # regularized least squares fit of S - Sa with the polynomial basis
    w = np.matmul(inv(np.matmul(X, X.T) + gama * np.identity(X.shape[0])),
                  np.matmul(X, S - Sa))      # right-hand factor truncated in source; assumed
    se = np.matmul(X.T, w) + Sa
    #plt.plot(se)
    #plt.plot(ldm[l1:l2])
    return se

se = np.zeros(len(ldm))
se = np.zeros(len(ldm))
k = 0
for i in range(0, len(count) + 1):
    if i != len(count):
        l1 = count[i, 0]
        l2 = count[i, 0] + count[i, 1]
        #print k, l1, l2
        se[k:l1] = nonls(k, l1, 4, 0.05)
        se[l1:l2] = nonls(l1, l2, 4, 0.1)
        k = l2
    else:
        se[k:len(ldm)] = nonls(k, len(ldm), 3, 0.05)
plt.plot(ldm)
plt.plot(se)
#plt.plot(ldm)
plt.xlabel('time in milliseconds/10')
plt.ylabel('distance in centimeters')
plt.title('orange estimated, blue real lidar data. Fully automated')
veldme = np.gradient(np.median(se[:-4].reshape(-1, 10), axis=1))
#plt.plot(abs(veldme))
def svrl(l1, l2, D):
    D = 3
    S = se[l1:l2]
    Sa = dp[l1:l2]
    X = np.ones(len(S))
    for i in range(1, D):
        x = np.asarray(range(l1, l2)) ** i
        X = np.vstack((X, x))
    X = X.reshape(-1, 3)
    #print len(X)
    y = S
    #print len(y)
    y = y.reshape(-1, 1)
    #S = S.reshape(-1, 1)
    #Sa = Sa.reshape(-1, 1)
    clf.fit(X, y)
    return clf.predict(X)

from sklearn.svm import SVR
sv = np.zeros(len(ldm))
clf = SVR(C=1000, epsilon=0.0, gamma='auto', degree=4)
#print len(ldm), len(dp)
k = 0
for i in range(0, len(count) + 1):
    if i != len(count):
        l1 = count[i, 0]
        l2 = count[i, 0] + count[i, 1]
        #print k, l1, l2
        sv[k:l1] = svrl(k, l1, 3)
        sv[l1:l2] = svrl(l1, l2, 3)
        k = l2
    else:
        sv[k:len(ldm)] = svrl(k, len(ldm), 3)
        #print k, len(ldm)
plt.plot(ldm)
plt.plot(sv, linewidth=2.0)
plt.xlabel('time in milliseconds/10')
plt.ylabel('distance in centimeters')
plt.title('orange estimated, blue real lidar data. Fully automated')
veldmesvr = np.gradient(np.median(sv[:-4].reshape(-1, 10), axis=1))
#plt.plot(abs(veldme))
# replace physically implausible velocities by the median velocity
veldmesvr[abs(veldmesvr) > 50] = np.median(veldmesvr)
plt.plot(abs(veldmesvr))
gx = data['bmg160_y[mDeg]']
c = (math.pi / 180)
j = 0.00277778
r = 1 / 360
gx = (gx / 1000)
#plt.plot(gx)
# line truncated in source; symmetric trim to a multiple of 100 samples assumed
trim = len(gx) - int(len(gx) / 100) * 100
gx = np.asarray(gx)[trim / 2: len(gx) - (trim - trim / 2)].reshape(-1, 100)
gx = np.median(gx, axis=1)
gxc = np.cumsum(gx)           # integrated gyro angle
ptx = []
pty = []
ptix = 0
ptiy = 0
for i in range(2, min(len(gxc), len(veldme)) + 1):
    #print i
    # dead-reckon the path: advance by the speed along the integrated gyro angle
    ptix = ptix + abs(veldmesvr[i - 1]) * math.cos((gxc[i] / 180) * math.pi)
    ptiy = ptiy - abs(veldmesvr[i - 1]) * math.sin((gxc[i] / 180) * math.pi)
    ptx.append(ptix)
    pty.append(ptiy)
    ptix = ptix
    ptiy = ptiy
plt.plot(ptx, pty)
Acknowledgements
This work has been supported by NSF S&CC EAGER grant 1637092.
References
[2] S. Subedi, G. R. Kwon, S. Shin, S. Seung Hwang, and J.-Y. Pyun, "Beacon based indoor positioning system using weighted centroid localization approach," in 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), July 2016, pp. 1016–1019.
[3] S. Subedi and J.-Y. Pyun, "Practical fingerprinting localization for indoor positioning system by using beacons," Journal of Sensors, vol. 2017, 2017.
[5] J. Yin, Q. Yang, and L. Ni, "Adaptive temporal radio maps for indoor location estimation," in Third IEEE International Conference on Pervasive Computing and Communications, March 2005, pp. 85–94.
[7] J. Kosecka and W. Zhang, "Video compass," in Proceedings of the 7th European Conference on Computer Vision - Part IV, ser. ECCV '02. London, UK: Springer-Verlag, 2002, pp. 476–490.
[11] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence - Volume 2, ser. IJCAI'81. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1981, pp. 674–679.
[13] V. N. Vapnik, The Nature of Statistical Learning Theory. New York, NY, USA: Springer-Verlag New York, Inc., 1995.
[19] S. Haykin, Adaptive Filter Theory (4th Edition). Prentice Hall, Sep. 2001.
[22] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge, UK: Cambridge University Press, Jun. 2004.