


A Deep Learning Approach using Blurring Image Techniques for
Bluetooth-based Indoor Localisation

Manuel Castillo-Cara a,∗,†, Reewos Talla-Chumpitaz b,†, Luis Orozco-Barbosa c and Raúl García-Castro a

a Universidad Politécnica de Madrid, Madrid (Spain)
b Center of Information and Communication Technologies, Universidad Nacional de Ingenieria, Lima (Peru)
c Albacete Research Institute of Informatics, Universidad de Castilla-La Mancha, Albacete (Spain)

Preprint, SSRN Electronic Journal, August 2022. DOI: 10.2139/ssrn.4180099

† These authors have contributed equally to this work.
∗ Corresponding author: manuel.castillo.cara@upm.es (M. Castillo-Cara); reewos.talla.c@uni.pe (R. Talla-Chumpitaz); luis.orozco@uclm.es (L. Orozco-Barbosa); r.garcia@upm.es (R. García-Castro)
ORCID(s): 0000-0002-2990-7090 (M. Castillo-Cara); 0000-0003-1510-1608 (L. Orozco-Barbosa); 0000-0002-0421-452X (R. García-Castro)

ARTICLE INFO

Keywords:
Indoor positioning
Fingerprinting localisation
Metaheuristic algorithm optimisation
Image blurring technique
Convolutional Neural Network
Image generation

ABSTRACT

The growing interest in the use of IoT technologies has generated the development of numerous and diverse applications. Many of the services provided by these applications are based on knowledge of the localisation and profile of the end user. Thus, the present work aims to develop a system for indoor localisation prediction using Bluetooth-based fingerprinting and Convolutional Neural Networks (CNN). For this purpose, a novel technique has been developed that simulates the diffusion behaviour of the wireless signal by transforming tidy data into images. For this transformation, we have used the technique known in painting as the blurring technique, simulating the diffusion of the signal spectrum. Our proposal also includes the use and a comparative analysis of two dimensionality reduction algorithms, PCA and t-SNE. Finally, an evolutionary algorithm has been implemented to configure and optimise our solution with the combination of different transmission power levels. The results reported in this work are close to 94% accuracy, which clearly shows the great potential of this novel technique for the development of more accurate indoor localisation systems.
1. Introduction

The development of wireless networks and devices equipped with multiple sensors (Mercuri et al., 2019), and their connection to storage centres and data processing over the Internet, has led to the implementation of the Internet of Things (IoT) (Yang, 2019). The growing interest in the use of IoT technologies has led to the development of numerous and diverse applications (Roy and Chowdhury, 2021; Rahman et al., 2020). The proper functioning of these applications requires the control of huge flows of data generated by mobile devices towards the intelligent decision-making centres deployed in the cloud (Real Ehrlich and Blankenbach, 2019). Many of the services provided by the applications are based on knowledge of the localisation and profile of the end user (Roy et al., 2019).

Therefore, the main effort focuses on the main components of the development of localisation systems for IoT applications (Kwok et al., 2020): (i) the analysis of the signal that transmitters emit to the receiver and how to adequately treat the generated data (Chen et al., 2021; Lovón-Melgarejo et al., 2019); and (ii) algorithms for processing wireless signals that allow indoor localisation (Chen et al., 2020; Bencak et al., 2022).

Regarding the first point, the design and development of wireless-based fingerprint localisation techniques presents major challenges, as indoor propagation of signals is highly sensitive to the multipath fading effect (Fan et al., 2021; Zhou et al., 2021). It is also widely recognised that the capabilities of the surveying devices play a major role in the quantity, quality, and time of the effort invested to produce valuable Received Signal Strength Indicator (RSSI) fingerprints (Gu et al., 2019; Ishida et al., 2018).

On the second point, one of the main research directions is the characterisation of the behaviour of wireless signals (Lovón-Melgarejo et al., 2019; Liu et al., 2021). The results of different investigations show that the use of classification algorithms in the signal characterisation process is capable of greatly improving the performance of indoor localisation mechanisms (Zhou et al., 2021; Tiglao et al., 2021).

Since there are many learning methods to solve this problem, this work focuses on the development of a novel prediction technique using Convolutional Neural Networks (CNN) (Sinha and Hwang, 2019). However, since the input format for a CNN is images, this input does not match the format of the data obtained by IoT devices, which are tabular data, also called tidy data (from now on named tidy data), based on RSSI (Liu et al., 2021; Borisov et al., 2021). The main contribution of this work is the transposition of tidy data (Wickham, 2014) into images with a novel preprocessing technique that simulates the behaviour of a wireless signal in the image construction process (Borisov et al., 2021; Vasquez-Espinoza et al., 2021).

Therefore, the proposal of this work is to transform these tidy data into images (Maaten and Hinton, 2008; Borisov et al., 2021), since the input information is the RSSI collected in the sample collection phase. The tidy data must thus be transformed into images, enabling the classification process to be carried out by a CNN.
For this purpose, the following work uses the methods and mechanisms of the article (Sharma et al., 2019) as a basis for developing the model. Given the high accuracy reported by the aforementioned article, it is expected that the accuracy of the model to be developed will be equally high. To this end, two dimensionality reduction techniques are used when transforming the tidy data into images: t-SNE (Wattenberg et al., 2016) and PCA (Kim et al., 2020). Finally, an evolutionary algorithm is used to find the best combination of results using different Transmission Power levels (from now on named TxPower) in the transmitter devices.

The remainder of this paper is organised as follows. Section 2 reviews the recent Bluetooth localisation literature in two sections: indoor localisation fingerprint and techniques developed to construct images from tidy data. Section 3 specifies our indoor environment and the devices used as transmitters and receivers, depicted in Figure 1 in the blocks called “Analysis of the Environment”. Subsequently, Section 4, “Image transformation methodology”, describes step-by-step the details of the mechanisms and techniques used to create images from the tidy data collected. This section also covers the preprocessing methodology and the importance of the blurring painting technique to emulate the behaviour of the signal. Section 5 presents our first set of results using symmetric and asymmetric TxPower configurations of the beacons with a parallel CNN and an evolutionary algorithm, represented in the block “Indoor localisation model”. Finally, Section 6 presents our conclusions and future work directions.

Figure 1: Overall Schema Proposal.

2. Related Work

This section introduces the main terminology and an analysis of related works for a better understanding and development of the proposed solution.

2.1. Indoor localisation fingerprint

A large number of RSSI-based localization studies have been reported in the literature. Among the most popular wireless technologies, Bluetooth Low Energy (BLE) has been the object of many studies. BLE devices (from now on named beacons) use 40 2-MHz channels to broadcast information. The protocol uses short message durations to reduce power consumption. To avoid interference between devices, since Wi-Fi and BLE use the same 2.4 GHz band, BLE4.0 uses channels labelled 37 (2402 MHz), 38 (2426 MHz), and 39 (2480 MHz) (Faragher and Harle, 2015; Lovón-Melgarejo et al., 2019). Various authors have developed beacon performance tests for positioning (Kwok et al., 2020), which have translated into the publication of recommendations on the placement and density of the beacons, transmission intervals, fingerprint space layout and sampling intervals in physical learning spaces for sustainable eLearning environments.

However, indoor localisation systems based on a single technology have exhibited limitations (Rahman et al., 2020; Niu et al., 2019), such as the drift of inertial navigation and the fluctuation of the received strength of the Bluetooth signal, making them unable to provide reliable positioning. In order to overcome the shortcomings of a single source of information, various authors have developed multi-sensor solutions by benefiting from the use of the different sensors encountered nowadays in most mobile devices, together with algorithms or other sources of information, such as particle filters, indoor space maps or various other wireless technologies (Chen et al., 2021).

Kriz et al. (Kriz et al., 2016) have reported a localisation experiment using a set of Wi-Fi access points (AP) accompanied by BLE devices. Their localisation mechanism was based on the Weighted-Nearest Neighbours in Signal Space algorithm. The main objectives of their study were to improve the indoor localisation accuracy by introducing the use of beacons and to deploy a system that constantly updates the RSSI levels reported by the mobile devices (receivers). During their experiments, they varied two parameters of the beacons: the duration of the RSSI signal scan, and the density. However, throughout all their experiments, the power of the beacons was set to its maximum value.

A more in-depth study on localisation with beacons, and prior to the one described in this article, is the one described in (Lovón-Melgarejo et al., 2019). The authors carried out an exhaustive study on two main areas to be taken into account when performing indoor localisation based on wireless signals. First, they explored mitigating the negative effect of multipath fading on localisation by using different transmission powers. Second, they developed a methodology to determine the best combination of transmission powers used by the different beacons. Their solution was based on the use of metaheuristics. Their study showed a great improvement in terms of localization accuracy with respect to previous results reported in the literature.

Regarding the data processing methods used in the development of fingerprinting-based indoor localisation environments, they mainly involve three types of algorithmic methods (Roy and Chowdhury, 2021): probabilistic methods such as the particle filter (Chen et al., 2021), deterministic methods such as the Support Vector Machine (Lovón-Melgarejo et al., 2019), and neural networks such as CNN (Liu et al., 2021). Nowadays, most studies focus on the use of CNN to improve the accuracy and processing of indoor localisation solutions (Oh et al., 2021).
2.2. CNN for indoor localisation fingerprint

One of the main aspects when developing CNNs is to be able to have the data reinforced in a preprocessing phase (Borisov et al., 2021). In terms of the classification technique used when working with images, CNNs predominate. The works developed in terms of the use of a neural network architecture are varied, but the vast majority focus on having an input image obtained in the sample collection phase (Alhomayani and Mahoor, 2020).

A different research work from the classical one of collecting samples from images and classifying them with CNN is presented in (Hsieh et al., 2019; Ishida et al., 2018). The authors present an approach based on CNN using transmission channel quality metrics, mainly RSSI and Channel State Information (CSI). By testing different methods, they conclude that a one-dimensional CNN with CSI obtains better metrics than other models while reducing complexity.

More specifically, another more sophisticated research work on image integration is described in (Jiao et al., 2018). The authors propose a new localisation algorithm in which wireless signals and images are combined. The novelty of this work lies in first obtaining a coarse-grained estimation based on the visualisation of wireless signals by fingerprinting. Additionally, the authors perform a matching process to determine the correspondences between two- and three-dimensional pixels based on the images collected. Based on their results, the method that combines visual and wireless data significantly improves the localisation metrics and robustness.

Therefore, this work has been based on the technique used in Sharma et al. (2019), in which the authors focus on developing a method for the transformation of structured into unstructured data, in this case from tidy data to images for prediction using CNN. More specifically, in that research, the models created with five datasets (1 cancer, 1 vowel, 1 text, and 2 artificial) are evaluated. For the transformation, dimensionality reduction algorithms are used with the purpose of obtaining Cartesian coordinates for each feature in the dataset. From the generated coordinates, an image pattern is generated where a pixel represents the feature and the value of each pixel is given by the value of the feature. Finally, the generated images go through a CNN with parallel layers, obtaining the final classification result. The accuracy obtained is higher than 90% for most of the datasets used in that work.

As can be seen, the use of different machine/deep learning techniques is widely developed, especially in the use of known algorithms. Therefore, the developed work shows a novel method for developing an indoor localisation technique in which structured data (tidy data) are transformed into unstructured data (images) in order to use a neural network architecture. For the transformation of the data, the methodology described in the article (Sharma et al., 2019) has been used, as well as the methodology described in (Lovón-Melgarejo et al., 2019) for the signal specifications.

3. Background: Devices and Methods

In the design and development of BLE localisation systems, it is widely recognised that the capabilities of the surveying devices play a major role in the quantity, quality, and time of effort invested to produce valuable RSSI fingerprints. Details about the area, transmitter/receiver, survey campaigns, Multipath Fading (MPF) and intraday signal attenuation were analysed in previous works. Hence, we discuss additional information about BLE4.0 signal characterization and conduct an in-depth analysis about the impact of different materials/structures on RSSI (Lovón-Melgarejo et al., 2019).

3.1. Experimental Area

Our experiments were carried out in the laboratory of our research institute. We placed four beacons, one at each of the four corners of a rectangular area of 9.3m × 6.3m. The fifth beacon was placed in the middle of one of the longest edges of the room. Figure 2 depicts the experimental area in which the five beacons have been labelled as “Be07”, “Be08”, “Be09”, “Be10” and “Be11”. We divided the experimental area into 15 sectors of 1m², each separated by a guard distance of 0.5m. A 1.5m-wide strip was left around the experimental area. This arrangement will allow us to better differentiate the RSSI level of the joint sectors when reporting our results. Measurements were performed by placing the mobile device at the centre of each one of the 15 sectors, as described below. The shortest distance between a beacon and a receiver was limited to 1.5m. Figure 3 shows four views taken from each one of the four corners of the laboratory. As shown in the figure, we placed beacons “Be10” and “Be11” in front of a window, Figures 3(d) and 3(b), respectively, while all other beacons were placed in front of the opposite plasterboard wall. We further noticed that beacon “Be08” was placed at the left edge of the entrance door, close to the corridor with a glass wall (Figure 3(c)).

Figure 2: Beacon indoor experimental area setup (beacons “Be07” to “Be11” at the edges; sectors 1 to 15 inside the area).

3.2. Transmitter and Receiver Devices
For this experiment, JAALEE beacon devices were used (JAALEE Inc., 2018). According to the specifications of the five beacons used in our experiments, they may operate at one of eight different TxPower levels. Following the specifications, the TxPower levels are labelled in consecutive order from highest to lowest level as TxPower=0x01, TxPower=0x02, ..., TxPower=0x08 (from now on named Tx01, ..., Tx08) (ultra wide range TxPower: 4dBm to -40dBm), although Tx07 and Tx08 were discarded since they did not adequately cover the signal spectrum in the entire area. During our experiments, we conducted several measurement campaigns by fixing the TxPower level of all beacons at the beginning of each campaign. Additionally, all measurements were performed under line-of-sight conditions.

As receiver, we used a Raspberry Pi equipped with a USB BLE4.0 antenna (Trendnet, 2018), hereinafter referred to as BLE4.0 antenna. Finally, the dataset can be downloaded from (Castillo-Cara, 2022).

Figure 3: Pictures from each of the four corners of the laboratory: (a) view from “Be07”; (b) view from “Be11”; (c) view from “Be08”; (d) view from “Be10”.

3.3. Acquisition phase data format

In the data acquisition phase, the RSSI obtained from the five beacons will be denoted as the features, X. The output value, i.e., the target, is the sector in which the person is located, denoted as Y. Note that datasets are available from Tx01 to Tx06. A sample of the input data for Tx04 can be seen in Table 1. Note that the features, X, are the RSSI values named from “Be07” to “Be11”, and the target, Y, is the feature named “Sector”. It can be seen that the input data is in tidy data format.

Table 1
Structure of the RSSI for Tx04 obtained in the data acquisition phase. An instance of the RSSI for the different sectors (15 in total) is shown as a sample example.

Be07  Be08  Be09  Be10  Be11  Sector
-65   -61   -74   -73   -67   1
-60   -57   -83   -62   -69   2
-66   -70   -78   -63   -73   3
...   ...   ...   ...   ...   ...
-58   -66   -71   -73   -69   14
-60   -62   -73   -69   -57   15

Since the main purpose is to convert the tidy data into images for the use of a CNN, a partition of the samples must be made, i.e., for train/test. The partition ratio is 70% for training and 30% for testing.
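As an illustration of this data format, the following minimal Python sketch loads one of the tidy datasets into a dataframe and performs the 70/30 train/test partition described above. The file name rssi_tx04.csv, the use of pandas/scikit-learn, the fixed seed and the stratification by sector are assumptions for illustration; only the column names and the split ratio come from the text.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file with the Tx04 tidy data (columns as in Table 1).
df = pd.read_csv("rssi_tx04.csv")

X = df[["Be07", "Be08", "Be09", "Be10", "Be11"]]  # RSSI features
y = df["Sector"]                                  # target: sector 1..15

# 70% training / 30% testing; stratification and a fixed seed keep the
# same partition across the different experiments.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=7)
```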
3.4. Convolutional Neural Network Architecture

For the development of the CNN, the structure of the incoming and outgoing data must be understood. The resulting images from the previous process, when imported, must be converted into a data matrix of size p × p × 1 (one channel, since the images are given in black and white), which will be the input to the CNN. On the other hand, we have the target data, which are related to the sector; being a classification problem, each sample is converted into a vector of size C. Note that C is the number of classes, in this case 15, referring to the sectors of the experimental area.

In this context, Figure 4 shows the architecture of the parallel CNN developed in this work. The resulting CNN is the one that maximised the results when testing different configurations empirically. We work with a CNN with 2 parallel branches, both having the same input and four blocks. Each block consists of 4 different layers, with all 4 blocks having the same configuration. Thus, the initial layer of each block is a convolutional layer followed by a batch normalisation layer that resets and rescales the results obtained by the previous layer. The next is a ReLU activation layer and the last is a pooling layer. The difference between one branch and the other is the size of the kernel filter and the type of pooling. Maximising the results gave a configuration with a filter of size (3, 3) and MaxPooling for the first branch and, for the second branch, (5, 5) and AveragePooling. For both branches, the number of filters for the convolution layers is 16, 32, 64, and 64 for each block, in that order.

The two branches then converge into one (concatenate). The Flatten layer follows, which arranges the data, flattening the matrices created by the previous layers. Subsequently, 1 dense layer of 256 neurones with ReLU activation and 3 dense layers with Sigmoid activation (128, 64 and 32 neurones, respectively) are used. The last layer of the CNN has Softmax activation, which aims to provide values (probabilities) to classify the image, and has 15 neurons due to the number of classes (sectors) in the experimental area.

Figure 4: Convolutional Neural Network Architecture.
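The architecture just described can be sketched with the Keras functional API as follows. This is a minimal sketch, not the authors' implementation: the 40 × 40 input size anticipates the upscaling discussed in Section 4.2, and the optimiser and loss function are assumptions (they are not specified in the text); the branch structure, kernel sizes, pooling types, filter counts and dense layers follow the description above.

```python
from tensorflow.keras import layers, models, Input

def branch(x, kernel, pool_cls):
    """One branch: four Conv -> BatchNorm -> ReLU -> Pooling blocks (16, 32, 64, 64 filters)."""
    for filters in (16, 32, 64, 64):
        x = layers.Conv2D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = pool_cls(pool_size=(2, 2))(x)
    return x

def build_parallel_cnn(pixel=40, n_classes=15):
    inputs = Input(shape=(pixel, pixel, 1))                # one-channel (black-and-white) image
    b1 = branch(inputs, (3, 3), layers.MaxPooling2D)       # first branch: (3, 3) kernels, MaxPooling
    b2 = branch(inputs, (5, 5), layers.AveragePooling2D)   # second branch: (5, 5) kernels, AveragePooling
    x = layers.Concatenate()([b1, b2])
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    for units in (128, 64, 32):
        x = layers.Dense(units, activation="sigmoid")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    # Optimiser and loss are assumptions; categorical cross-entropy matches the
    # one-hot target vectors of size C described above.
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```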
4. Methodology on Image Transformation

As discussed in the aforementioned sections, one of the main contributions of this work is to develop and formalise a methodology for converting tidy data into images. Thus, this section describes this methodology and the mathematical formalisation that justifies this novel mechanism for indoor localisation fingerprint.

4.1. Tidy Data into Image Transformation

One of the first translations to be performed is to convert the RSSI obtained for each Bluetooth device into the same image, i.e., the corresponding sample (row) of the sampling phase (see Table 1). To develop a tidy data into image transformation, the data processing shown in the pipeline detailed in Figure 5 must be performed. As can be seen, the conversion from tidy data into image data is done through a two-dimensional space in X and Y. This translation to two-dimensional space will allow us to build an image of characteristic pixels for each sample of the dataset, i.e., each data sample is converted to a two-dimensional space. Note that for the data transformation we have used the basic methodology (although adapted to this particular research) set out in (Sharma et al., 2019).

Figure 5: Process for obtaining the feature coordinates from the transpose matrix. It can be seen that the tidy data are converted into a two-dimensional matrix through the different phases with their respective techniques, i.e., delimitation, translation, and so on.

As seen from Figure 5, the pipeline to convert each tidy data sample into a two-dimensional space consists of the following main processing tasks:

1. Data dimensionality reduction: The initial matrix is transposed. Note that each feature has been represented with a different colour for comprehension, although the resulting figure will consist of two channels, i.e., black and white. Therefore, this task makes use of a dimensionality reduction algorithm. In our case, we have explored two such algorithms: PCA and t-SNE.
2. Centre of mass and delimitation: Once the coordinates have been obtained, the centre of gravity of the points is determined and subsequently the area is delimited.
3. Scaling and pixel positions: The matrix is transposed, scaled and the values are rounded to integer positions.
4. Characteristic pixel positions: The values obtained are the positions of the characteristic pixels for the creation of the image pattern.

As mentioned above, this work has been based on the methodology presented by (Sharma et al., 2019) for the transformation of tidy data into images. However, the authors do not provide a formal specification of the steps to obtain the positions of the characteristic pixels from the initial tidy data. In the following paragraphs, we not only provide a precise description of the steps of our solution, but we also fully analyse in Section 4.2 the relevance of the tidy data into image patterns obtained when applying the two dimensionality reduction algorithms, i.e., PCA and t-SNE, and the blurring painting technique in the image generation process. Our analysis fully identifies the shortcomings of the t-SNE algorithm and provides a mechanism to avoid the overlapping of characteristic pixels.
Dimensional reduction

This first task, not shown in Figure 5, takes as input the initial data matrix, denoted by A ∈ IR^{m×n}, where m is the number of samples (rows) and n is the number of features (columns):

$$A = \begin{bmatrix} a_{11} & \dots & a_{1n} \\ a_{21} & \dots & a_{2n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \dots & a_{mn} \end{bmatrix} \qquad (1)$$

The matrix A is transposed, denoted by A^t ∈ IR^{n×m}:

$$A^{t} = \begin{bmatrix} a_{11} & a_{21} & \dots & a_{m1} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \dots & a_{mn} \end{bmatrix} \qquad (2)$$

The transposed matrix A^t is then reduced by applying one of the two dimensionality reduction algorithms (t-SNE or PCA). The resulting matrix, denoted as ARD, is reduced from m columns to two columns. Therefore, the resulting matrix B ∈ IR^{n×2} can be simply denoted as:

$$ARD(A^{t}, 2) = B = \begin{bmatrix} b_{11} & b_{12} \\ \vdots & \vdots \\ b_{n1} & b_{n2} \end{bmatrix} \qquad (3)$$

In the case that the use of the t-SNE algorithm is preferred, we should take care to avoid the overlapping of characteristic pixels. This condition may arise due to its stochastic nature and results in the loss of information; see Section 4.2 for an in-depth analysis. Accordingly, in this case the following three processing tasks should be performed when using the t-SNE algorithm.

• Given the matrix A^t, a new matrix D ∈ IR^{kn×m} is created by stacking k times the matrix A^t:

$$D = \begin{bmatrix} A^{t}_{1} \\ \vdots \\ A^{t}_{k} \end{bmatrix} = \begin{bmatrix} a^{(1)}_{11} & a^{(1)}_{21} & \dots & a^{(1)}_{m1} \\ \vdots & \vdots & \ddots & \vdots \\ a^{(1)}_{1n} & a^{(1)}_{2n} & \dots & a^{(1)}_{mn} \\ a^{(2)}_{11} & a^{(2)}_{21} & \dots & a^{(2)}_{m1} \\ \vdots & \vdots & \ddots & \vdots \\ a^{(k)}_{1n} & a^{(k)}_{2n} & \dots & a^{(k)}_{mn} \end{bmatrix} \qquad (4)$$

• Dimension reduction is performed as specified by (3):

$$\text{t-SNE}(D, 2) = D' = \begin{bmatrix} b'_{11} & b'_{12} \\ \vdots & \vdots \\ b'_{n1} & b'_{n2} \\ b'_{(n+1)1} & b'_{(n+1)2} \\ \vdots & \vdots \\ b'_{(kn)1} & b'_{(kn)2} \end{bmatrix} \qquad (5)$$

• A linear transformation T : IR^{kn×2} → IR^{n×2} is performed, such that T is defined as follows:

$$T(D') = \left[ b_{ij} = \frac{\sum_{h=1}^{k} b'_{((h-1)n+i)j}}{k} \right] = B \qquad (6)$$

As seen from the above steps, the procedure basically consists of replicating the matrix A^t k times before applying the t-SNE algorithm, and then reducing the number of rows from kn to n by averaging over the rows (h − 1) ⋅ n + i, where h ∈ [1, k], for each i ∈ [1, n].
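The replication-and-averaging procedure of Equations (4)–(6) can be sketched in a few lines of Python. This is an illustrative implementation under the assumption that scikit-learn's PCA and TSNE are used as dimensionality reduction algorithms; the function and variable names are ours, and k = 16 corresponds to the four consecutive duplications of the five features discussed in Section 4.2.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def reduce_to_2d(A, method="pca", k=16, random_state=7):
    """Map the n features of a tidy-data matrix A (m x n) to 2-D coordinates B (n x 2)."""
    At = A.T                                   # n x m, one row per feature
    if method == "pca":
        return PCA(n_components=2).fit_transform(At)
    # t-SNE: replicate A^t k times (Eq. 4), embed it (Eq. 5), then average the
    # k coordinate registers of each feature (Eq. 6) to stabilise the pattern.
    # Note: kn must exceed the t-SNE perplexity (default 30); k=16 gives 80 rows here.
    n = At.shape[0]
    D = np.vstack([At] * k)                    # kn x m
    Dp = TSNE(n_components=2, random_state=random_state).fit_transform(D)
    return Dp.reshape(k, n, 2).mean(axis=0)    # n x 2
```

The reshape-and-mean step relies on the rows of D' keeping the block order of Equation (4), so row (h − 1)·n + i is the h-th register of feature i.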
Centre of mass and delimitation

In this second task, each row of B is considered as a point p_i, where i ∈ [1, n], whose X and Y coordinates correspond to the elements of the first and second column, respectively:

$$p_i = (b_{i1}, b_{i2}) \rightarrow P = \{p_1, \dots, p_n\} \qquad (7)$$

We proceed to find the centre of mass of the points p_i, represented as p_{MC}, where all points have the same mass:

$$p_{MC} = \left( \frac{\sum_{i=1}^{n} b_{i1}}{n}, \frac{\sum_{i=1}^{n} b_{i2}}{n} \right) \qquad (8)$$

Then, the maximum integer distance from the farthest point to the centre of mass, denoted d_{max}, is calculated:

$$d_{max} = \left\lceil \max_{i \in [1,n]} \left\| \vec{p}_i - \vec{p}_{MC} \right\| \right\rceil \qquad (9)$$

where ‖·‖ is the norm function, ⌈·⌉ is the ceiling function, and \vec{p} is the vector pointing from the origin of the coordinates to the point p.

The delimitation of the section of the Cartesian plane containing the characteristic points p_i, ∀i ∈ [1, n], denoted as S, is then made:

$$S = \{(x, y) : x, y \in \mathrm{IR} \;/\; x \in [p_{MC_x} - d_{max},\; p_{MC_x} + d_{max}],\; y \in [p_{MC_y} - d_{max},\; p_{MC_y} + d_{max}]\} \qquad (10)$$

A linear transformation F_1 : S → T is performed, such that F_1 is defined as follows:

$$F_1(x, y) = (x - p_{MC_x} + d_{max},\; y - p_{MC_y} - d_{max}); \quad \forall (x, y) \in S \qquad (11)$$

Using this maximum distance d_{max}, the square is bounded, so each side of the image square will have a size of 2 × d_{max}, where the set T is defined as follows:

$$T = \{(x, y) : x, y \in \mathrm{IR} \;/\; x \in [0,\; 2 \times d_{max}],\; y \in [-2 \times d_{max},\; 0]\} \qquad (12)$$

In our case, the translation was made to the fourth quadrant because of its similarity to the order of the matrices (upper left position as origin). In this case, the values of the coordinates are negative, but they are resolved in later steps (see expressions 13 and 14).
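As a companion to Equations (7)–(12), the following Python sketch computes the centre of mass, the bounding distance d_max and the translation F_1 for the n two-dimensional feature coordinates returned by the previous step. It is an illustrative helper with our own naming, not the authors' reference code.

```python
import numpy as np

def centre_and_translate(B):
    """B: (n, 2) array of feature coordinates -> (coordinates translated by F1, d_max)."""
    p_mc = B.mean(axis=0)                                            # centre of mass (Eq. 8)
    d_max = int(np.ceil(np.max(np.linalg.norm(B - p_mc, axis=1))))   # Eq. 9
    # F1 (Eq. 11) shifts the bounding square so that x lies in [0, 2*d_max]
    # and y in [-2*d_max, 0] (fourth quadrant).
    translated = np.column_stack((B[:, 0] - p_mc[0] + d_max,
                                  B[:, 1] - p_mc[1] - d_max))
    return translated, d_max
```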
Scaling and pixel positions

This task starts with a linear transformation for the scaled coordinates of the features, F_2 : T → U, defined as follows:

$$F_2(x, y) = \left( \left\lfloor x \times \frac{pixel - 1}{2 \times d_{max}} \right\rceil,\; \left\lfloor \left| y \times \frac{pixel - 1}{2 \times d_{max}} \right| \right\rceil \right); \quad \forall (x, y) \in T \qquad (13)$$

where pixel is the number of pixels per side of the resulting image, and |·| and ⌊·⌉ are the absolute value and rounding functions (solely to integer positions), respectively.

In turn, the set U is defined as:

$$U = \{(x, y) : x, y \in \mathrm{IN} \;/\; x \in [0,\; pixel - 1],\; y \in [0,\; pixel - 1]\} \qquad (14)$$

Consequently, having the characteristic points p_i, we transform them into characteristic pixel positions q_i:

$$q_i = F_2(F_1(p_i)) \rightarrow Q = \{q_1, \dots, q_n\} \qquad (15)$$

Then the values of the initial matrix A should be scaled over the interval [0, 255]. This is done by first obtaining the global minimum (see Equation 16) and global maximum (see Equation 17) of matrix A:

$$a_{min} = \min_{\{i \in [1,m];\, j \in [1,n]\}} \{a_{ij} \,/\, a_{ij} \in A\} \qquad (16)$$

$$a_{max} = \max_{\{i \in [1,m];\, j \in [1,n]\}} \{a_{ij} \,/\, a_{ij} \in A\} \qquad (17)$$

We proceed to normalise the values of the matrix A by means of the linear transformation G_1 : R^{m×n} → R^{m×n}, defined as follows:

$$G_1(X) = \left[ \frac{x_{ij} - a_{min}}{a_{max} - a_{min}} \right] \in R^{m \times n} = [y_{ij}] \in R^{m \times n} = Y \qquad (18)$$

The scaling over [0, 255] is finally completed by means of the linear transformation G_2 : R^{m×n} → R^{m×n}, defined as follows:

$$G_2(Y) = [\lfloor 255 \times y_{ij} \rceil] \in R^{m \times n} = [z_{ij}] \in R^{m \times n} = Z \qquad (19)$$

In summary, by applying the linear transformations to the matrix A:

$$C = G_2(G_1(A)) \qquad (20)$$

where C ∈ R^{m×n} is the matrix containing the final values, after normalisation to [0, 1] and scaling to [0, 255], to be placed at the corresponding positions of the characteristic pixels. For this specific procedure, the MinMaxScaler class of the scikit-learn Python library was used.
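The following sketch illustrates Equations (13)–(20): it maps the translated coordinates to integer pixel positions and rescales the RSSI values to [0, 255]. It assumes the helper from the previous sketch and scikit-learn's MinMaxScaler; since MinMaxScaler normalises column-wise, the matrix is flattened to a single column so that the global minimum and maximum of Equations (16)–(17) are used.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def to_pixel_positions(translated, d_max, pixel=10):
    """F2 (Eq. 13): translated (n, 2) coordinates -> integer pixel positions Q (n, 2)."""
    scale = (pixel - 1) / (2 * d_max)
    q_x = np.rint(translated[:, 0] * scale).astype(int)
    q_y = np.rint(np.abs(translated[:, 1] * scale)).astype(int)
    return np.column_stack((q_x, q_y))

def scale_values(A):
    """G2(G1(A)) (Eqs. 18-20): global min-max normalisation of A, then scaling to [0, 255]."""
    flat = MinMaxScaler().fit_transform(A.reshape(-1, 1)).reshape(A.shape)
    return np.rint(255 * flat).astype(int)
```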
Characteristic pixel positions

With the purpose of ordering the data to feed the CNN, a linear transformation H : R^{m×n} × R^{n×2} → R^{m×pixel×pixel} must be performed, as follows:

$$H(Z, Q) = \left[ w_{ijk} = \begin{cases} z_{i\lambda}, & \text{if } (j, k) = q_{\lambda} \in Q,\; \lambda \in [1, n] \\ 0, & \text{otherwise} \end{cases} \right] \in R^{m \times pixel \times pixel} \qquad (21)$$

where Z ∈ R^{m×n} and Q ∈ R^{n×2}.
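A direct reading of Equation (21) in Python is sketched below: every sample i produces a pixel × pixel matrix whose only non-zero entries are its scaled RSSI values, placed at the characteristic pixel positions. Again, this is an illustrative composition of the helpers sketched above, not the authors' code.

```python
import numpy as np

def build_images(Z, Q, pixel=10):
    """H(Z, Q) (Eq. 21): Z (m, n) scaled values, Q (n, 2) pixel positions -> images (m, pixel, pixel)."""
    m, n = Z.shape
    W = np.zeros((m, pixel, pixel))
    for lam in range(n):            # one characteristic pixel per feature
        j, k = Q[lam]
        W[:, j, k] = Z[:, lam]
    return W
```

For the Bluetooth datasets used here, Z holds the five RSSI columns of Table 1, so n = 5 and each scene contains five characteristic pixels.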
4.2. On the relevance of Image Pattern Recognition

In this section, an analysis of the patterns in the obtained images is performed based on the methodology explained in Section 4. When creating images, we have to take into account that the scenes should contain as much information as possible to be processed by a CNN in order to extract the relevant features for classification. Therefore, it is important to identify the main parameters used during the tidy data into image process and their impact on the representation of the pixels in the scene. Hence, in this section, four relevant parameters are analysed.

On the relevance of dimensionality reduction algorithms

Concerning the image pattern produced, and in particular the positions of the characteristic pixels, we have found that the dimensionality reduction algorithms (PCA and t-SNE) are affected by different inputs to the transformation process, such as the sample size and the seed (see Table 2).

Table 2
Comparison of the patterns generated in the image creation by the two dimensionality reduction algorithms (Alg.): PCA and t-SNE. In the different images, the impact of the choice of seeds can be seen when generating the image of the same sample. PCA: images (a), (b) and (c) for seed=1, seed=7 and seed=23; t-SNE: images (d) and (e) for seed=7 and seed=23.

Table 2 shows five images of the same sample when applying the two reduction algorithms and three different seeds. The images labelled (a), (b) and (c) have been obtained using the PCA algorithm with seeds 1, 7, and 23, respectively. In turn, the images labelled (d) and (e) have been obtained using the t-SNE algorithm with seeds 7 and 23, respectively. Note that we have not included the image for the case of the t-SNE algorithm with seed = 1: our results have shown that for this case there is a big overlap of pixels, reporting only 4 pixels in the image. From the table, we see that there are clear differences in the resulting image patterns. Furthermore, PCA always gives, when applied to a given data instance, the same pattern regardless of the random seed. In contrast, in the case of the t-SNE algorithm, the pattern varies for each seed. We can then conclude that for PCA, being invariant to the random seed, no optimisation or change is required. On the other hand, t-SNE does require some optimisation, or at least some way to avoid overlapping of the characteristic pixels, as we explain below based on the findings reported in (Wattenberg et al., 2016).

On the relevance of t-SNE parameters

As observed in Table 2, the PCA algorithm is indifferent to the given seed, i.e., the samples have the same pattern, whereas t-SNE is vulnerable to the seed, i.e., the result depends on the chosen seed. In this context, using the initial default parameters, there is the possibility of overlapping one characteristic pixel with another. We should take into account that the characteristic pixels must not be superimposed, as the information of a pixel would be lost, not obtaining the 5 points in the two-dimensional space.

The authors of (Wattenberg et al., 2016) mention the need to tune parameters for optimisation. Furthermore, t-SNE is mentioned as a way to visualise the data, but not as an indication of any relationship between point-to-point distances. In this case, this would not be a problem, since t-SNE is not used for that purpose in this work but only allows us to obtain a visual pattern. According to the authors, the following parameters are recommended to be analysed for an optimal result (taking into account that the parameters not specified keep their default values):

• n_components: Defines the number of final components. In our case, these components correspond to the coordinates of each pixel.
• random_state: Defines the random seed. Several values should be used to evaluate their impact on the patterns of the images, as shown above (see Table 2).
• perplexity: Indicates how easy it is to predict the probability distribution. Its default value is 30. By modifying this parameter to 50, the result was that the variation of the patterns per random seed is not as abrupt as presented in Table 2. Moreover, the result is similar, but has slight variations, such as rotation of the pixels with respect to the central axis.
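In scikit-learn terms, the three parameters discussed above map directly onto the TSNE constructor. The snippet below is only meant to show where they are set; the matrix dimensions and concrete values are illustrative and follow the discussion above.

```python
import numpy as np
from sklearn.manifold import TSNE

# D stands for the k-times replicated transposed matrix of Eq. (4); random values
# are used here only so the snippet runs standalone (80 rows = 5 features x 16 replicas).
D = np.random.rand(80, 300)

# n_components=2 -> 2-D pixel coordinates; random_state fixes the seed;
# perplexity raised from its default of 30 to 50 (it must stay below the row count).
coords = TSNE(n_components=2, random_state=23, perplexity=50).fit_transform(D)
```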
four as in our empirical experiments this value fulfilled the
On the relevance of the overlapping pixels for t-SNE

This process is only used when t-SNE is used as the dimensionality reduction algorithm. As seen in Table 2, t-SNE is vulnerable to the chosen seed value, while PCA remains constant in the characteristic pixel representation. This makes sense due to the stochastic nature of t-SNE, which requires an analysis of the seed set-up (Wattenberg et al., 2016). An improper set-up may result in two duplicate pixels falling on the same two-dimensional point, representing fewer than the five characteristic pixels required as input data. Note that our proposal requires N characteristic pixels per dataset; in our case, we must have N = 5 characteristic pixels in the resulting image.

However, when designing a solution, we should realise that the patterns of the generated images should operate properly irrespective of the seed being used. In other words, we should provide a solution allowing us to overcome the stochastic nature of t-SNE. Our proposal involves the reproduction of k instances of the transposed matrix before applying the t-SNE algorithm. In this way, we make use of k coordinate registers for each characteristic pixel i, then we average the k registers and plot their average in the two-dimensional plane.

Table 3
Variations of the image pattern using consecutive duplication (dupl.). The duplication values analysed in the images range from 0 to 6: (a) dupl.=0; (b) dupl.=1; (c) dupl.=2; (d) dupl.=3; (e) dupl.=4; (f) dupl.=5; (g) dupl.=5 (different seed); (h) dupl.=6.

Table 3 presents an example of the different images obtained by consecutive duplication: (a) duplication is 0 and there would be only 5 samples; (b) duplication is 1 and there would be 10 samples; (c) duplication is 2 and there would be 20 samples; (d) duplication is 3 and there would be 40 samples; (e) duplication is 4 and there would be 80 samples; (f) duplication is 5 and there would be 160 samples; (g) duplication is 5 and there would be 160 samples and, with respect to the previous one, another random seed is used; and (h) duplication is 6 and there would be 320 samples. Some similarity can be observed between (c), (e) and (f). On the other hand, (a) and (h) are also similar, but this would be an indication not to increase further, i.e., not to make more than 6 consecutive duplications. Due to its similar structure to the PCA pattern, option (e) could be chosen, i.e., duplicated four consecutive times. As a summary of the above, the duplication value was set to four, as in our empirical experiments this value fulfilled the purpose of maximising the model output.

As we have observed, with the pixel duplication technique we fulfil two requirements arising from t-SNE: (i) stabilise the image pattern regardless of the seed used; and (ii) avoid having overlapping pixels in the two-dimensional representation. While it is true that the image can be created with fewer characteristic pixels due to the issue of overlapping pixels, this resulted in slightly worse final performance, so it was decided to condition the experiment on the duplication of pixels.
CNN with blurring technique for indoor localisation

with fewer characteristic pixels due to the issue of overlap- with more pixels. This allows the necessary convolutions to
ping pixels, the results resulted in slightly worse final perfor- be made to improve the prediction.

ed
mance, so it was decided to condition the experiment with In this context, the comparison of the training and test
the duplication of pixels. for images of an initial size of 10 × 10 and 20 × 20 pixels
is shown in Figure 7. We can observe that the accuracy of
On the relevance of pixel-image size the test curve for an image of 20 × 20 pixels is very unsta-
For the development of this experiment, one of the chal- ble, varying up to approximately 30% in a single epoch of
lenges was also to find a suitable size of the pixels in the difference. This reflects a large uncertainty, and therefore a

iew
images. Summarising the tests carried out, the preliminary totally random model that does not generalise. At the same
results of using images of 10×10, 20×20, and 40×40 pixels time, using images of 10 × 10 pixels, we see fully smoothed,
will be presented below. stable curves with almost no bias, indicating robust learning
Therefore, Figure 6 depicts these preliminary training re- of the model.
sults for the three image sizes used in pixels: 10 × 10 (see
Figure 6(a)); 20 × 20 (see Figure 6(b)); and 40 × 40 (see Fi-
gure 6(c)). All images are scaled. As can be seen, it would

ev
be difficult to visually compare samples in 40 × 40 (see Fi-
gure 6(c)), compared to samples 10 × 10 (see Figure 6(a)).
This is due to the ratio of characteristic pixels to the total
number of pixels: 10 × 10 (see Figure 6(a)) has a ratio of
5%, while 20 × 20 (see Figure 6(b)) has 1.25% and 40 × 40

rr
(see Figure 6(c)) has 0.3125%.
ee
Figure 7: Comparison of accuracy Vs. epoch plots using diffe-
tp

rent initial image sizes (in pixels). (i) The blue-coloured curve
represents the training curve for the size of 10 × 10; (ii) the
(a) Size: 10 × 10 pixels. (b) Size: 20 × 20 pixels.
orange-coloured curve represents the test curve for the size of
10 × 10; and (iii) the green-coloured curve represents the trai-
ning curve for the size of 20 × 20; and (iv) the red-coloured
no

curve represents the test curve for the size of 20 × 20 pixels.

Summarising, we can note that the size of the image will


be closely linked to the number of characteristic pixels. In
our case, for the five characteristic pixels, the size 10 ade-
quately optimises CNN learning. Note that if more wireless
t

points are used, a larger two-dimensional space should be


(c) Size: 40 × 40 pixels. used, which would lead to a rescaling of the scene.
rin

Figure 6: Comparison of scaled images with respect to the


number of pixels for the different matrix sizes for the images.
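The 10×10 to 40×40 upscaling mentioned above preserves the original pattern as long as no interpolation is introduced. A minimal sketch, assuming a nearest-neighbour-style enlargement with NumPy (the authors do not specify which library or resizing method they used):

```python
import numpy as np

def upscale(image, factor=4):
    """Enlarge a (pixel x pixel) scene by an integer factor without altering its pattern,
    e.g. 10x10 -> 40x40, by repeating every pixel `factor` times along both axes."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)
```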
4.3. Image rendering techniques

As the procedure used has been to convert tidy data into images, a CNN will be developed that can carry out the classification process and, therefore, the localization model. In order to evaluate the localization spectrum in a wireless environment, two techniques have been developed as case studies to be evaluated: one without blurring of the characteristic pixels, and the other one with blurring of the characteristic pixels.
Manuel Castillo-Cara et al.: Preprint submitted to Elsevier Page 9 of 15

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4180099
CNN with blurring technique for indoor localisation

𝐴𝑠𝑐𝑎𝑙𝑒 sample. All other values of the matrix 𝑀 are zeros, • 𝑖𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦0 is the value of the characteristic pixel;
so it is not relevant in this case.
• amplitude is a constant that would allow the intensity

ed
Therefore, for the creation of images from the tidy data,
to extend along the strokes. For this study, we put the
the characteristic pixel position and the scale generated by
amplitude with the value 𝜋, simplifying the value 𝜋
the normalisation are used. Figure 10(a) shows an example
of the denominator and improving the blurring tech-
of the result without the blurring technique.
nique;
Case 2: Characteristic pixels with blurring • distance is a percentage value that can range from 0 to

iew
This procedure has been created empirically, based on 1. For this study, the distance was chosen 0.1 (10%)
the technique of blurring in the plastic arts, specifically in corresponding to one pixel in the image of size 10𝑥10
drawing and painting. This technique is often used to soften pixels; and
and extend strokes so that the transition from intense to faint
is uniform. Then, by making an analogy with the characte- • total_steps is the number of steps for image fading. It
ristic pixels as spots on a canvas, one could blur these spots has been set to four since in the following steps, the in-
so that they cover more area without losing the intensity of tensity value is very small, close to zero. Accordingly,

ev
the original characteristic pixel. the value 𝑖 ∈ [0, 𝑡𝑜𝑡𝑎𝑙_𝑠𝑡𝑒𝑝𝑠].
Therefore, the use of the blurring technique of the cha- Figure 9 shows the rationale for the parameters chosen
racteristic pixels is proposed to enlarge their area. To do as an example. Therefore, a comparison of the intensities
this, the thickness, number, and intensity of the strokes must per step is made for the amplitudes 𝑎𝑚𝑝 = [1, 2, 3, 𝜋, 4] and
be defined. Each stroke would be a circumference around 𝑖𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦0 = 0.8. In the figure, the distance is represented

rr
the pixel and the previous stroke with a thickness called dis- over all the steps interval, i.e. as a continuous function. Ho-
tance, represented as 𝑟; and the number of strokes or cir- wever, in practice, the distance value is only defined for a
cles would be denoted as steps, represented as 𝐶𝑥 , see Fi- discrete, integer values, of steps (pixels).
gure 8. Moreover, the intensity of that stroke or circum-
ee
ference would be determined by the encompassed area of
the circumference, with the centre (the characteristic pixel)
being the initial intensity, represented as 𝑃 . Moreover, the 0.8 amp=1

intensity fades as you move away from the characteristic pi-


amp=2
0.7
amp=3

xel. The intensity of the stroke or circumference, 𝑖, where 0.6 amp=PI


tp
1 ≤ 𝑠𝑡𝑒𝑝 ≤ 𝑡𝑜𝑡𝑎𝑙_𝑠𝑡𝑒𝑝𝑠, would be given by the following amp=4

0.5
equation:
intensity

0.4

0.3

𝑖𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦0
𝑖𝑛𝑡𝑒𝑛𝑠𝑖𝑡𝑦𝑖 = 𝑎𝑚𝑝𝑙𝑖𝑡𝑢𝑑𝑒 × 0.2

𝜋 × (𝑡𝑜𝑡𝑎𝑙_𝑠𝑡𝑒𝑝𝑠 × 𝑑𝑖𝑠𝑡𝑎𝑛𝑐𝑒)2
no

(22)
0.1

0.0

0 1 2 3 4 5 6

steps

Figure 9: Comparison of intensities by distance for different


t

amplitudes. The amplitudes evaluated are 𝑎𝑚𝑝 = [1, 2, 3, 𝜋, 4].


rin

Finally, by using this technique, it is possible to deter-


mine the step size where the blurring of two characteristic
pixels overlap. In our case, we have experimented with two
possible solutions for the representation of the scene with
overlapping pixels:
ep

• Average value: An alternative to the above is to ave-


rage the intensity of the two, this being the one already
in the matrix and the new one, see Figure 10(b).
Figure 8: Representation of blurring technique, where r would • Maximum: The highest scoring value pixel of the
Pr

be the distance, P the localisation of the characteristic pixel,


overlapping pixels is chosen, i.e., if the value of the
and the colours would represent the intensity for each stroke
or circumference.
pixel in the matrix is lower than the value of the in-
coming pixel, it is replaced, otherwise it is kept, see
Figure 10(c).
where,

Manuel Castillo-Cara et al.: Preprint submitted to Elsevier Page 10 of 15

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4180099
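To make the rendering concrete, the sketch below blurs one characteristic pixel by drawing total_steps concentric strokes around it and merging overlaps with either the maximum or the average policy. Several choices here are assumptions rather than the authors' implementation: the strokes are drawn as square (Chebyshev) rings, and the fading profile used, amplitude × intensity_0 / (π × (step × distance × pixel)²), is only our reading of Equation (22) and of the parameter discussion (the π of the amplitude cancelling the π of the denominator, and the intensity becoming negligible after four steps).

```python
import numpy as np

def blur_pixel(scene, row, col, intensity0, total_steps=4, distance=0.1,
               amplitude=np.pi, overlap="max"):
    """Blur one characteristic pixel of `scene` (pixel x pixel array) in place."""
    pixel = scene.shape[0]
    scene[row, col] = max(scene[row, col], intensity0)          # keep the centre intact
    for step in range(1, total_steps + 1):
        # Assumed fading profile: inverse-square decay over the stroke index.
        value = amplitude * intensity0 / (np.pi * (step * distance * pixel) ** 2)
        for r in range(max(0, row - step), min(pixel, row + step + 1)):
            for c in range(max(0, col - step), min(pixel, col + step + 1)):
                if max(abs(r - row), abs(c - col)) != step:
                    continue                                     # only the ring of this step
                if overlap == "max":                             # Maximum policy
                    scene[r, c] = max(scene[r, c], value)
                else:                                            # Average-value policy
                    scene[r, c] = value if scene[r, c] == 0 else (scene[r, c] + value) / 2
    return scene
```

A full scene is obtained by calling this helper once per characteristic pixel of the sample, which is what produces examples such as Figures 10(b) and 10(c).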
5. Experimental Results

Once the CNN input has been analysed, we now show the results obtained from the different procedures explored above, both the generated images and the CNN predictive results. First, we look at the baseline results with a symmetric TxPower setup and then discuss the results using metaheuristic optimisation to find the best combination of TxPower.

5.1. Image Pattern Recognition

In this section, we present the samples generated using PCA and t-SNE for each dataset, with and without the blurring technique. The same seed was used in all evaluations for the separation of train and test data, i.e., for the same dataset the same conditions are used, varying only the use of blurring and the type of overlapping.

Figure 11 and Figure 12 show the generation of three sample images for the different cases under study, for PCA and t-SNE, respectively. These sample images are clear examples of the resulting images that will be the input to the CNN.

Figure 11: Image samples for PCA depending on the use of the blurring technique: (a) without blurring; (b) blurring with maximum value; (c) blurring with average value. For all images, Tx04 has been used for sector 4 of the experimental area.

Figure 12: Image samples for t-SNE depending on the use of the blurring technique: (a) without blurring; (b) blurring with maximum value; (c) blurring with average value. For all images, Tx04 has been used for sector 4 of the experimental area.

5.2. Symmetric Transmission Power Setups

This first analysis will allow us to identify which of the two dimensionality reduction algorithms, combined with the three different pixel representation techniques, provides the best results. By best results, we mean not only the final localisation results, but also the setting and configuration of the algorithms and techniques used in the data processing tasks. Accordingly, we define three cases: (i) without blurring; (ii) with maximum blurring; and (iii) with average blurring.
Case 1: Without Blurring

Table 4 shows the results obtained for the six TxPower levels, i.e., Tx01, ..., Tx06, without the blurring technique. Each dataset is evaluated with PCA and t-SNE. The results are shown in accuracy, loss and mean error (m). From the results, it is clear that the power level plays a major role on all counts (metrics). The results also show that the best results are obtained for the lowest power level used in our experiments, i.e., Tx06. The results also show that, for a given transmission power level, both dimensionality reduction algorithms, PCA and t-SNE, report similar values. This proves the effectiveness of the extra processing introduced in our proposal to overcome the shortcomings of the t-SNE algorithm.

Table 4
Results in accuracy (Acc), loss, and mean error (ME) without the blurring technique for each TxPower in the test dataset for PCA and t-SNE. Best results are shown in bold.

TxPower  Algorithm  Acc    Loss    ME (m)
Tx01     t-SNE      67.49  1.1517  0.7661
Tx01     PCA        68.15  1.1422  0.7669
Tx02     t-SNE      68.49  1.1675  0.8093
Tx02     PCA        66.83  1.1508  0.8749
Tx03     t-SNE      69.16  1.1485  0.9334
Tx03     PCA        71.04  1.0431  0.8784
Tx04     t-SNE      68.71  1.0054  0.7937
Tx04     PCA        69.27  0.9831  0.7775
Tx05     t-SNE      65.68  1.2827  1.0863
Tx05     PCA        68.57  1.1744  1.0118
Tx06     t-SNE      77.38  0.9195  0.6470
Tx06     PCA        76.04  0.9288  0.6768

Case 2: With Average Blurring

Table 5 shows the results obtained for the six TxPower levels, i.e., Tx01, ..., Tx06, using the average blurring technique. Each dataset is evaluated with PCA and t-SNE. The results are shown in accuracy, loss and mean error (m).

The results show a substantial improvement with respect to the values reported for the previous case. In terms of accuracy, the results show an improvement of more than 10% and a mean error of less than half for all the transmission powers under study. Similar to the previous case, the best results are obtained for the transmission power Tx06 for both reduction algorithms, with a remarkable improvement in all metrics: an accuracy improvement of more than 14%, and a loss and a mean error of less than half of those reported when no blurring is applied. We also note that the best results obtained for the two dimensionality reduction algorithms are very similar.

Table 5
Results in accuracy (Acc), loss and mean error (ME) using the average blurring technique for each TxPower in the test dataset for PCA and t-SNE. Best results are shown in bold.

TxPower  Algorithm  Acc    Loss    ME (m)
Tx01     t-SNE      79.68  0.7302  0.4916
Tx01     PCA        81.67  0.6618  0.4267
Tx02     t-SNE      79.30  0.8020  0.5331
Tx02     PCA        79.08  0.8098  0.5154
Tx03     t-SNE      82.52  0.7189  0.5452
Tx03     PCA        85.24  0.6254  0.4569
Tx04     t-SNE      83.43  0.6354  0.4270
Tx04     PCA        84.52  0.5756  0.3894
Tx05     t-SNE      78.20  0.8251  0.6289
Tx05     PCA        78.92  0.7753  0.6158
Tx06     t-SNE      89.26  0.4578  0.3041
Tx06     PCA        90.50  0.4093  0.2738

Case 3: With Maximum Blurring

Table 6 shows the results obtained for the six TxPower levels, i.e., Tx01, ..., Tx06, using the maximum blurring technique.

Similar to the previous case, the use of the blurring technique reports much better results with respect to the case when no blurring is applied. However, when comparing the results with respect to the previous case, average blurring, the results are very similar for each one of the transmission powers under study. Once again, the best results are reported for the case when the transmission power is set to Tx06. We notice that the best results when using either of the two blurring techniques report an improvement of more than 14% in terms of accuracy, and a loss and mean error of less than half of those reported when no blurring technique is applied. These results clearly show the effectiveness of using the blurring technique.

Table 6
Results in accuracy (Acc), loss and mean error (ME) using the maximum blurring technique for each TxPower in the test dataset for PCA and t-SNE. Best results are shown in bold.

TxPower  Algorithm  Acc    Loss    ME (m)
Tx01     t-SNE      81.55  0.6829  0.4413
Tx01     PCA        81.45  0.6596  0.4403
Tx02     t-SNE      79.78  0.7840  0.5249
Tx02     PCA        80.09  0.7808  0.5059
Tx03     t-SNE      85.02  0.6092  0.4488
Tx03     PCA        85.57  0.5939  0.4437
Tx04     t-SNE      84.07  0.5822  0.4058
Tx04     PCA        85.36  0.5692  0.3678
Tx05     t-SNE      79.58  0.7999  0.5877
Tx05     PCA        79.78  0.7656  0.5996
Tx06     t-SNE      89.58  0.4265  0.2938
Tx06     PCA        90.33  0.4173  0.2778

From our results, it is clear that the use of blurring for image generation considerably improves the performance of the developed model compared to the use of the characteristic pixels alone. This is mainly due to the fact that, in the produced images, the values of the non-characteristic pixels are mostly non-zero and the characteristic pixels cover more space, similar to a signal propagation. This setting is fully exploited by the convolution process.

For Case 1, without blurring, we can observe that the accuracy values are between 67.49% and 77.38%. However, these values can be improved by using blurring, even reaching 90% for both dimensionality reduction algorithms. This experimental study demonstrates that having more information in a scene makes the classification process more orderly, maximising the metrics.
CNN with blurring technique for indoor localisation

Regarding the dimension reduction algorithms, both algorithms report very similar results. They operate very similarly, enabling us to extract valuable information and improving the learning process. This is demonstrated by the fact that the accuracy for most of the TxPower cases has increased by more than 10% when the blurring technique is applied, e.g., the Tx06 dataset using PCA went from 89% to 90% of accuracy (average blurring).
This analysis can be verified with the learning curves. Figure 13 shows an example of the accuracy and loss metrics using maximum blurring with PCA. From the figure, we can see that, having used 100 epochs, we obtain an orderly learning process. In fact, the number of epochs could have been reduced to 60-70. It should be noted that the t-SNE and PCA learning curves for the other configurations show the same behaviour.

[Figure 13: two panels, (a) Accuracy and (b) Loss, plotting the train and test curves against the epoch (0-100).]
Figure 13: Loss and accuracy learning curves with respect to the epochs of the CNN model, using PCA and maximum blurring, for the train and test datasets.
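Since the curves flatten well before the hundredth epoch, the training length could also be bounded automatically instead of being fixed by hand. A minimal sketch with a Keras-style early-stopping callback is given below; it is an illustration rather than part of the training setup used in this work, and the patience value is an assumption.

# Sketch: stop training once the validation loss stops improving, instead of
# always running the full 100 epochs. The patience of 10 epochs is an assumed value.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch the test/validation loss curve of Figure 13(b)
    patience=10,                 # tolerate 10 epochs without improvement
    restore_best_weights=True)   # roll back to the best epoch (around 60-70 here)

# model.fit(x_train, y_train, validation_data=(x_test, y_test),
#           epochs=100, callbacks=[early_stop])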
5.3. Asymmetric TxPower Configuration: Metaheuristic Optimisation
In (Lovón-Melgarejo et al., 2019), the authors proposed the use of an asymmetric power configuration of the beacons as a means to improve the performance of wireless-based indoor localisation mechanisms. They also made use of metaheuristics in order to determine the best power setting of the beacons, i.e., the one reporting the best performance results. In this section, we carry out a similar study, which should allow us to show the great benefits of our proposal. Our main aim is to show that the solution proposed in this work provides results close to the optimal setting. It is worth mentioning that the search for the optimal transmission power configuration of the various beacons requires high computational resources. Even though metaheuristic optimisation algorithms may prove effective in computing the optimal asymmetric power transmission setup, the results reported in this section should show that the image blurring technique provides results close to the optimal ones.
In this section, we make use of an Evolutionary Algorithm (EA), specifically a Genetic Algorithm (GA) (Bencak et al., 2022; Lovón-Melgarejo et al., 2019). An EA is a generic population-based metaheuristic optimisation algorithm that uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.
The operation of the EA is depicted in Algorithm 1 (Bäck et al., 2000; Fortin et al., 2012), where: population denotes a list of individuals (value: 100); cxpb is the probability of crossing two individuals (value: 0.5); mutpb is the mutation probability (value: 0.2); ngen is the number of generations (value: 20); and the bNGF() (buildNextGenerationFrom) function computes the next-generation population by applying crossing and mutation operations to the selected population, validates the new individuals, and computes the statistics of this new population.

Algorithm 1 Evolutionary Algorithm.
1: evaluate(population)
2: for g in range(ngen) do
3:     population = select(population, len(population))
4:     offspring = bNGF(population, cxpb, mutpb)
5:     evaluate(offspring)
6:     population = offspring
7: end for

Moreover, the methods used in this evolutionary algorithm are: chromosome representation as a list of integers, flat crossover, random mutation, and tournament selection with a tournament size of four. Finally, an adaptive mutation probability was empirically established as a function of the generation number, according to the following equation (Sivanandam and Deepa, 2008):

P_m(i) = 0.2 · e^((1 − i)/10)    (23)

where:
• i is the generation; and
• P_m is the mutation probability.
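As an illustration of this configuration, the sketch below implements the GA loop with the DEAP library (Fortin et al., 2012). It is not the code released with this work: the five-gene chromosome and the 1-6 TxPower range are assumptions drawn from the Tx01, . . . , Tx06 notation and the individuals reported in Table 7, cnn_accuracy() is a hypothetical placeholder for the dataset-generation and CNN-training step, and uniform crossover is used as a simple stand-in for the flat crossover mentioned above.

# Illustrative GA sketch with DEAP; chromosome = one TxPower level per beacon.
import math
import random
from deap import base, creator, tools

N_BEACONS, TX_LEVELS = 5, 6            # assumed: 5 beacons, power levels 1..6

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr_tx", random.randint, 1, TX_LEVELS)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_tx, n=N_BEACONS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def cnn_accuracy(individual):
    """Hypothetical fitness: build the blurred-image dataset for this asymmetric
    TxPower combination, train the CNN and return its accuracy as a 1-tuple."""
    return (0.0,)   # placeholder

toolbox.register("evaluate", cnn_accuracy)
toolbox.register("mate", tools.cxUniform, indpb=0.5)       # stand-in for flat crossover
toolbox.register("mutate", tools.mutUniformInt,
                 low=1, up=TX_LEVELS, indpb=0.2)           # random mutation (indpb assumed)
toolbox.register("select", tools.selTournament, tournsize=4)

def run_ga(ngen=20, pop_size=100, cxpb=0.5):
    population = toolbox.population(n=pop_size)
    for ind, fit in zip(population, map(toolbox.evaluate, population)):
        ind.fitness.values = fit
    for gen in range(1, ngen + 1):
        mutpb = 0.2 * math.exp((1 - gen) / 10)    # adaptive mutation probability, Eq. (23)
        offspring = [toolbox.clone(ind)
                     for ind in toolbox.select(population, len(population))]
        for c1, c2 in zip(offspring[::2], offspring[1::2]):   # crossover
            if random.random() < cxpb:
                toolbox.mate(c1, c2)
                del c1.fitness.values, c2.fitness.values
        for ind in offspring:                                 # mutation
            if random.random() < mutpb:
                toolbox.mutate(ind)
                del ind.fitness.values
        for ind in (i for i in offspring if not i.fitness.valid):
            ind.fitness.values = toolbox.evaluate(ind)        # re-evaluate new individuals
        population = offspring
    return tools.selBest(population, 1)[0]

Following Eq. (23), the mutation probability starts at 0.2 in the first generation and decays to roughly 0.03 by the twentieth, so the search explores aggressively at the beginning and mostly exploits towards the end.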


Table 7 shows the results obtained with the GA for each classification model. It is important to note that the value of the evaluation function used in both algorithms is the accuracy obtained for each specific classification model of each individual in the population, i.e., for each candidate solution to the problem.

Table 7
Best two results for PCA and t-SNE in accuracy (Acc), loss and mean error (ME) using the asymmetric TxPower combination and the average (Aver.) and maximum (Max.) blurring techniques. The best result is shown in bold.

TxPower       Alg.    Blurring   Acc     Loss    ME (m)
[3-6-6-3-1]   PCA     Max.       93.94   0.254   0.148
[3-6-1-3-5]   t-SNE   Max.       93.35   0.283   0.157
[3-6-1-3-2]   PCA     Aver.      92.91   0.364   0.161
[3-4-1-2-2]   t-SNE   Aver.      92.54   0.373   0.173

In this final result, we can observe that PCA with maximum blurring achieves the best combination of results across the evaluated metrics. Likewise, applying t-SNE with maximum blurring, we can see that it is quite close to PCA in these metrics.
As a final discussion of the process, it can be said that both dimensionality reduction algorithms perform very similarly in terms of results. Likewise, using the maximum or the average blurring technique has no impact on the final results, since they are very similar in all the cases evaluated. Therefore, as a recommendation, PCA could be chosen for the process of transforming tidy data into images, due to the representation of the characteristic pixels in the image. In turn, t-SNE, due to its stochastic nature, must be configured more carefully with respect to the analysis of its parameters and their impact on the created image.

6. Conclusions and Open Challenges
In this article, we have explored the transformation of tidy data into images, supplemented by blurring image techniques, to process BLE signals with the aim of developing wireless indoor localisation mechanisms. Our results have shown that this process successfully contributes to mitigating the effect of Multipath Fading (MPF). We have also shown that the use of optimisation techniques based on metaheuristics may be used to determine the best TxPower configuration setting of the BLE devices.
In addition, the images created from tidy data were used as input to the CNN. To this end, the partial contributions that have made possible the development of this work are the use of a CNN with two parallel branches for localisation as the base learning model. To achieve this transformation, two dimensionality reduction algorithms, t-SNE and PCA, have been evaluated. In this context, two cases of image generation have been tested: (i) using the characteristic pixels alone; and (ii) applying the blurring technique to simulate the signal fading. The blurring method demonstrates that the generated image emulates the signal fading, maximising the results of the analysed metrics and reducing the mean error. Moreover, to improve the results with asymmetric TxPower, an evolutionary algorithm has been used, which determines the best combination of TxPowers, further maximising the final results and bringing the solution closer to a minimum error.
Finally, regarding the open challenges, one of the main points to develop is to be able to set the blurring technique in accordance with the signal degradation itself, and not empirically as it has been done in this work. Using channel propagation models to derive the blurring of the characteristic pixels in the image, together with its similarity to the signal's own degradation with respect to the device, may lead to a hybrid communications-based blurring approach to the problem.

Acknowledgments
The research leading to these results has been funded by the European Commission H2020 project "Construction-phase diGItal Twin mOdel (COGITO)" under contract #958310. Manuel Castillo-Cara has been funded by the European Next Generation and the Spanish Ministry of Universities in a postdoctoral programme (Recualificación - María Zambrano).

Data availability
This project contains the following extended data:
• Dataset used in this analysis (Castillo-Cara, 2022).
• Python code used in the analysis to develop all of this work (Castillo-Cara and Talla-Chumpitaz, 2022).

References
Alhomayani, F., Mahoor, M.H., 2020. Deep learning methods for fingerprint-based indoor positioning: a review. Journal of Location Based Services 14, 129–200.
Bäck, T., Fogel, D.B., Michalewicz, Z., 2000. Evolutionary Computation 1: Basic Algorithms and Operators. volume 1. CRC Press.
Bencak, P., Hercog, D., Lerher, T., 2022. Indoor positioning system based on Bluetooth Low Energy technology and a nature-inspired optimization algorithm. Electronics 11, 308.
Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., Kasneci, G., 2021. Deep neural networks and tabular data: A survey. arXiv preprint arXiv:2110.01889.
Castillo-Cara, M., 2022. Bluetooth indoor localization dataset. URL: https://doi.org/10.5281/zenodo.6343083, doi:10.5281/zenodo.6343083. [Online: accessed 18-may-2022].
Castillo-Cara, M., Talla-Chumpitaz, R., 2022. Deep learning indoor localization from tidy data into synthetic images. URL: https://doi.org/10.5281/zenodo.6557364, doi:10.5281/zenodo.6557364. [Online: accessed 18-may-2022].
Chen, J., Zhou, B., Bao, S., Liu, X., Gu, Z., Li, L., Zhao, Y., Zhu, J., Lia, Q., 2021. A data-driven inertial navigation/Bluetooth fusion algorithm for indoor localization. IEEE Sensors Journal, 1–1. doi:10.1109/JSEN.2021.3089516.
Chen, Y.Y., Huang, S.P., Wu, T.W., Tsai, W.T., Liou, C.Y., Mao, S.G., 2020. UWB system for indoor positioning and tracking with arbitrary target orientation, optimal anchor location, and adaptive NLOS mitigation. IEEE Transactions on Vehicular Technology 69, 9304–9314.
Fan, S., Wu, Y., Han, C., Wang, X., 2021. SIABR: A structured intra-attention bidirectional recurrent deep learning method for ultra-accurate terahertz indoor localization. IEEE Journal on Selected Areas in Communications 39, 2226–2240.
Faragher, R., Harle, R., 2015. Location fingerprinting with Bluetooth Low Energy beacons. IEEE Journal on Selected Areas in Communications 33, 2418–2428.


Fortin, F.A., De Rainville, F.M., Gardner, M.A., Parizeau, M., Gagné, C., 2012. DEAP: Evolutionary algorithms made easy. Journal of Machine Learning Research 13, 2171–2175.
Gu, F., Hu, X., Ramezani, M., Acharya, D., Khoshelham, K., Valaee, S., Shang, J., 2019. Indoor localization improved by spatial context—a survey. ACM Computing Surveys (CSUR) 52, 1–35.
Hsieh, C.H., Chen, J.Y., Nien, B.H., 2019. Deep learning-based indoor localization using received signal strength and channel state information. IEEE Access 7, 33256–33267.
Ishida, S., Takashima, Y., Tagashira, S., Fukuda, A., 2018. Design and Initial Evaluation of Bluetooth Low Energy Separate Channel Fingerprinting. Springer International Publishing, Cham. pp. 19–33.
JAALEE Inc., 2018. Beacon IB0004-N Plus. Link: https://www.jaalee.com/. [Online: accessed 18-may-2022].
Jiao, J., Li, F., Tang, W., Deng, Z., Cao, J., 2018. A hybrid fusion of wireless signals and RGB image for indoor positioning. International Journal of Distributed Sensor Networks 14, 1550147718757664.
Kim, D., Joo, D., Kim, J., 2020. TiVGAN: Text to image to video generation with step-by-step evolutionary generator. IEEE Access 8, 153113–153122. doi:10.1109/ACCESS.2020.3017881.
Kriz, P., Maly, F., Kozel, T., 2016. Improving indoor localization using Bluetooth Low Energy beacons. Mobile Information Systems 2016.
Kwok, C.Y.T., Wong, M.S., Griffiths, S., Wong, F.Y.Y., Kam, R., Chin, D.C., Xiong, G., Mok, E., 2020. Performance evaluation of iBeacon deployment for location-based services in physical learning spaces. Applied Sciences 10. doi:10.3390/app10207126.
Liu, J., Jia, B., Guo, L., Huang, B., Wang, L., Baker, T., 2021. CTSLoc: an indoor localization method based on CNN by using time-series RSSI. Cluster Computing, 1–12.
Lovón-Melgarejo, J., Castillo-Cara, M., Huarcaya-Canal, O., Orozco-Barbosa, L., García-Varea, I., 2019. Comparative study of supervised learning and metaheuristic algorithms for the development of Bluetooth-based indoor localization mechanisms. IEEE Access 7, 26123–26135. doi:10.1109/ACCESS.2019.2899736.
Maaten, L.v.d., Hinton, G., 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9, 2579–2605.
Mercuri, M., Lorato, I.R., Liu, Y.H., Wieringa, F., Hoof, C.V., Torfs, T., 2019. Vital-sign monitoring and spatial tracking of multiple people using a contactless radar-based sensor. Nature Electronics 2, 252–262.
Niu, Q., Li, M., He, S., Gao, C., Gary Chan, S.H., Luo, X., 2019. Resource-efficient and automated image-based indoor localization. ACM Transactions on Sensor Networks (TOSN) 15, 1–31.
Oh, Y., Noh, H.M., Shin, W., 2021. C-CNNLoc: Constrained CNN for robust indoor localization with building boundary. Electronics Letters 57, 422–425.
Rahman, A.B.M.M., Li, T., Wang, Y., 2020. Recent advances in indoor localization via visible lights: A survey. Sensors 20.
Real Ehrlich, C., Blankenbach, J., 2019. Indoor localization for pedestrians with real-time capability using multi-sensor smartphones. Geo-spatial Information Science 22, 73–88.
Roy, P., Chowdhury, C., 2021. A survey of machine learning techniques for indoor localization and navigation systems. Journal of Intelligent & Robotic Systems 101, 1–34.
Roy, P., Chowdhury, C., Ghosh, D., Bandyopadhyay, S., 2019. JUIndoorLoc: A ubiquitous framework for smartphone-based indoor localization subject to context and device heterogeneity. Wireless Personal Communications 106, 739–762.
Sharma, A., Vans, E., Shigemizu, D., Boroevich, K.A., Tsunoda, T., 2019. DeepInsight: A methodology to transform a non-image data to an image for convolution neural network architecture. Scientific Reports 9, 1–7.
Sinha, R.S., Hwang, S.H., 2019. Comparison of CNN applications for RSSI-based fingerprint indoor localization. Electronics 8. doi:10.3390/electronics8090989.
Sivanandam, S., Deepa, S., 2008. Introduction to Genetic Algorithms. Chapter 3: Terminologies and Operators of GA. Springer International Publishing. doi:10.1007/978-3-540-73190-0.
Tiglao, N.M., Alipio, M., Cruz, R.D., Bokhari, F., Rauf, S., Khan, S.A., 2021. Smartphone-based indoor localization techniques: State-of-the-art and classification. Measurement 179, 109349.
Trendnet, 2018. Micro Bluetooth USB Adapter. Link: https://www.trendnet.com/products/USB-adapters/TBW-107UB/. [Online: accessed 18-may-2022].
Vasquez-Espinoza, L., Castillo-Cara, M., Orozco-Barbosa, L., 2021. On the relevance of the metadata used in the semantic segmentation of indoor image spaces. Expert Systems with Applications 184, 115486.
Wattenberg, M., Viégas, F., Johnson, I., 2016. How to use t-SNE effectively. Distill. doi:10.23915/distill.00002.
Wickham, H., 2014. Tidy data. Journal of Statistical Software 59, 1–23. doi:10.18637/jss.v059.i10.
Yang, Y., 2019. Multi-tier computing networks for intelligent IoT. Nature Electronics 2, 4–5.
Zhou, M., Li, Y., Tahir, M.J., Geng, X., Wang, Y., He, W., 2021. Integrated statistical test of signal distributions and access point contributions for Wi-Fi indoor localization. IEEE Transactions on Vehicular Technology 70, 5057–5070. doi:10.1109/TVT.2021.3076269.
