
Aplicaciones Telemáticas Avanzadas

4º Grado en Ingeniería Telemática

Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación
Universidad Politécnica de Madrid

All rights reserved.

Economic exploitation and transformation of this work are not permitted. Printing of the work in its entirety is permitted.
Safety Robot

Jaime Pérez Gómez, Antonio Luna de Toledo León, George Ion,

Daniel Hermoso Jiménez, Jorge Nuevo López, Rubén López Martínez

Abstract

Emergencies must be faced with efficiency, clear coordination, and fast response. Throughout
history, different technologies have been used and developed to deal with these challenges.
This project explores alternative ways to confront these problems and provides vital real-time information
from distinct perspectives. By means of autonomous actuators such as UGVs, this approach aims to
support the authorities. Interoperability is implemented to sense data and to interpret the information
received from these self-operating devices, allowing the global system to perform real-time mapping of
any complex scenario. A scheme and a simulation of a use case are detailed. Taking advantage of this
technology offers an additional tool for the competent authority, so damage and harm may be prevented or
significantly reduced. The more equipment and preparation firefighters have, the more efficiently they can
work. Ultimately, that is the main objective of this project, developed with constantly
updated technology.

Document Version: 13/05/2022


Keywords: Autonomous; Mapping; Perception; Sensing; UGV; Interoperability; LiDAR; Network

Contents
1. Introduction
1.1. Motivation
1.2. Use Case
1.3. Objectives
2. State of the Art
3. Technologies
ROS
Gazebo
RVIZ
ThRend
Netica
3.1. Mapping
Gmapping Package
3.2. Sensing
LiDAR
SWIR, MWIR and LWIR cameras
Thermal cameras conclusion
3.3. Interpretation
Bayesian networks
Simulation of the Theoretical Proposal using Netica
Python Script – Avoiding Obstacles
4. Proof of Concept
5. Conclusions and Next Steps
6. References
7. Distribution of work

1. Introduction
This project consists of a robot whose purpose is to provide additional information to emergency
authorities. Its work is divided into two phases.

In the first phase, the robot builds a provisional map of the affected zone where a fire or landslide has
taken place. This provides a quick look at the situation and gathers essential data. Once the map is
created, the second phase begins: the robot uses this data to plan a route through the dangerous area.

The robot is also capable of measuring the temperature of the area thanks to cameras and thermal sensors,
in addition to measuring air quality and the quantity of volatile gases and carbon dioxide in the
air.

1.1. Motivation

Let us introduce some facts to give an overview. In 2021, in the city of Madrid alone, firefighters carried
out a daily average of 57 interventions; two out of three were fires, damaged structures, or rescues. In
these situations, emergency operators must deal with several risks and threats at once while working
as fast as possible.

Figure 1 - Incidents in Madrid. Year 2021

We are going to use the most advanced technology to help protect emergency services and to speed up
their response. We have developed a robot that enters the most difficult and dangerous areas, such as
a house on fire or unstable locations like a structurally damaged building, in order to map the zone
and locate people or objects.

With our robot, firstly, we reduce the risk exposure that emergency services suffer; secondly, we provide
highly relevant information through different sensors; and thirdly, we ease decision making in
complex scenarios where time is the most valuable factor.

1.2. Use Case

One example where our robot is useful is a building collapsed by a gas explosion. In this situation,
the main objective of the robot is to move around the affected area while mapping it, in order to detect
threats and other risks, as well as to locate people. It can also be used in large fires to retrieve additional
valuable information and improve the response time. The robot will stream all information in real time
and offer a view from a high-definition camera. This information may be presented in an
intuitive and simple GUI on a tablet or a laptop.

The robot will be able to avoid obstacles automatically and recognize the best path for mapping the area.
To do this, it will be capable of identifying these obstacles, some of which may be people. It will also be
able to search the area established by the operator, seeking potentially dangerous zones.

1.3. Objectives

As we have described in our motivation, it will be essential for us to receive a large amount of information
and process it. For this task we focus on technologies such as mapping, sensing, and interpretation. Overall,
the main objectives of these three topics are:

Mapping

We plan to map the affected terrain, recognising obstacles and zones. To deal with the SLAM
problem we will make use of the gmapping package and the RVIZ software.

Sensing

The objectives set for this topic are the mapping of dangerous areas using the LiDAR sensor and the
measurement of temperatures in the affected area, with the purpose of detecting gas leaks, fire zones,
people, etc. For these functions we will make use of LWIR cameras.

Another objective is to obtain a real-time view through an RGB camera for the operators.

Interpretation

Our goal is the evaluation of the situation, i.e., which direction the robot will follow or how it will react in
certain circumstances, considering all the data obtained by the sensors. For this, we initially decided to
use Bayesian Networks, which we developed theoretically; however, for the implementation of the
robot in the simulated environment, we observed that it was more efficient and simpler to implement an
obstacle-avoidance function in Python.

2. State of the Art
Robotic applications have been used and developed in rescue scenarios to help find people, identify
different threats, or obtain a view from another perspective. These robots use many sensors and
technologies to fulfil difficult tasks. In this section, we give an overview of how robots implement these
tools and compare several technologies.

Mapping tools

At the beginning of our research into the different technologies, we found a paper from Temple
University in Philadelphia named Robot Mapping for Rescue Robots, which describes the main technologies
that robots have been using for mapping an environment. Each of them has its benefits and drawbacks.

Odometry is a technique that mobile robots and other wheel-based vehicles use to estimate their position. These
robots track the rotation of the wheels while they move to know the distance they have travelled. It is an
easy way to estimate position and is reliable over short distances. However, there is an induced error due
to the structure of the wheels and environmental variables, such as slippery ground or uncontrolled
accelerations. Moreover, the further a robot travels, the greater the error becomes.
Thus, this technology is not the most suitable for our use case.
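
As a toy illustration of the basic idea, a wheel's travelled distance follows from its radius and the encoder tick count; this is a minimal sketch with made-up values (the wheel radius and encoder resolution below are not from this project):

import math

# Toy wheel-odometry estimate: distance from encoder ticks (illustrative values).
WHEEL_RADIUS_M = 0.035   # assumed wheel radius in metres
TICKS_PER_REV = 360      # assumed encoder resolution

def distance_travelled(ticks):
    # Ignores wheel slip and uncontrolled accelerations, the very
    # error sources mentioned above, so the error grows with distance.
    return 2 * math.pi * WHEEL_RADIUS_M * ticks / TICKS_PER_REV

print(distance_travelled(1800))  # 5 revolutions -> ~1.10 m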

Another way to keep track of the robot in a mapped area is by using a GNSS. Global service is provided
by different systems such as Galileo, GPS, GLONASS or BeiDou. This satellite-based technology is quite
useful and accurate outdoors and cheap to implement. Nevertheless, when a robot needs to map an
area inside a building, a tunnel or another structure surrounded by large obstacles, a GNSS is not suitable
for a rescue scenario either.

Instead of using conventional high-definition cameras to map and maintain scenario awareness, robots also
make use of a scanner. This device allows robots to obtain very accurate measurements, e.g., distances
between the robot and an obstacle or a wall, which can be logged to create a map of the surrounding
area. A versatile scanner used for this purpose is the LiDAR sensor. Among its advantages, we
highlight that it requires no daylight or any light conditions to work, and the precision it can achieve. More
details are given in the following section.

Velodyne LIDAR

The LiDAR scanner is an essential component for robot navigation and autonomy. The list of
possible applications is quite extensive: industry, 3D modelling of structures, mapping, and autonomous
vehicles.

After doing some research, we have discovered the main capabilities of this sensor and what degree of
development has been achieved. In particular, the Velodyne LiDAR is a LiDAR scanner that offers a full
range of visibility and 360º point-cloud capability. Moreover, it can project up to 695,000 points per second
at ranges up to 100 m, with an accuracy of around 2 cm.

Figure 2 - Velodyne LIDAR sensors

The Velodyne LiDAR is designed to operate in extreme conditions, such as adverse weather or very
high temperatures, which makes it very suitable for rescue scenarios.

SLAM problem

One of the main problems when a robot maps an environment is its ability to locate itself
while travelling around the scenario. This problem is known as Simultaneous Localization and Mapping
(SLAM). We found a paper related to this problem and how it is handled in a robot named the Mobile,
Dexterous, Social (MDS) Robot; the paper can be found at reference [1]. There are several techniques and
algorithms to run a SLAM solution, among them 2D SLAM, 3D sensors and 6D SLAM.

2D mapping consists of data points traced by sensor beams, such as laser scanners or sonar
sensors. Distances measured with these devices are stored as the robot moves around. However, the height
at which these distances are taken is fixed. A tool we have used for this purpose is explained in the
mapping section.

A further step is the use of 3D sensors to expand the range of service. Laser sensors may be rotated on their
vertical axis to cover the surrounding area. To collect data and be able to analyse it, the robot must be static
while performing this rotation.

6D SLAM refers to maps described not only by coordinates (x, y, z) but also by the rotation about each
axis (yaw, pitch, roll). These additional coordinates allow covering the whole area in every direction.
Nevertheless, 6D SLAM has an important disadvantage in computational cost, because it requires more
resources and processing capability.

Detecting capabilities

Sensing the environment is one of the most important capabilities of robots; without it, robotics would
have far fewer applications. In this field there is a wide range of sensors, scanners, antennas, and other
devices capable of retrieving information in different ways. This information is then analysed and
converted into an output or a decision.

One example of a robot with well-developed awareness is the FLIR PackBot 510, a military robot designed to
support services in extreme conditions. The main objective of this robot is to help people and avoid
dangerous risks. To achieve this goal, the robot carries an extensive list of sensors: high-definition
cameras, obstacle detection, thermographic sensors, night vision, and biological and nuclear
radiation detection.

Figure 3 - FLIR PackBot 510


We focused on this robot to research sensing and related devices, such as LWIR cameras.

Roidmi Self-Collecting Robot Vacuum

Robots and Internet of Things (IoT) devices are increasingly present in our homes. These interconnected
devices have capabilities similar to industrial or even military robots. One example is the popular use
of autonomous vacuum cleaners like the Roidmi Robot Vacuum EVE Plus. [7]

Figure 4 - Roidmi Robot Vacuum EVE Plus.

This robot, developed by Roidmi, contains a LiDAR sensor that provides the capability of mapping different
rooms and avoiding obstacles. In this use case, the information is retrieved, stored, and then analysed to
ensure that every zone of the house is cleaned. It is also capable of going to a specific location and returning
to the base to charge itself automatically. Overall, SLAM is also handled.

3. Technologies
In this section we will explain the technologies we have decided to use and why.

ROS

ROS is an open-source robotics middleware (a "meta-operating system") that provides a set of libraries
and tools to help create robot applications. Specifically, we have used the ROS Kinetic version, which is
designed to work with Ubuntu 16.04.
It uses nodes to represent different processes, and the way to transmit information between nodes is through
a topic, to which they can publish or subscribe. Another type of communication between nodes is the use
of services, where messages are transmitted bidirectionally, unlike topics, which are unidirectional.
To integrate the different technologies of our project we have decided to use ROS, since it offers a set of
packages called gazebo_ros_pkgs that provides the necessary interfaces to simulate a robot in Gazebo using
messages and services and managing the different topics.
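
As a minimal sketch of the publish/subscribe model (the topic name and message type below are illustrative, not our robot's actual interface):

#!/usr/bin/env python
# Minimal ROS publish/subscribe sketch (illustrative topic name).
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("received: %s", msg.data)

rospy.init_node('demo_node')
pub = rospy.Publisher('/status', String, queue_size=10)
rospy.Subscriber('/status', String, callback)

rate = rospy.Rate(1)  # publish at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='robot alive'))
    rate.sleep()

Here the same node publishes and subscribes to one topic only for brevity; in practice the publisher and subscriber usually live in different nodes.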

Gazebo

Gazebo is an open-source software package that allows 3D simulation of robots, sensors, and objects in virtual
worlds. It also features realistic physics, where robots interact with objects under gravity and obstacle
collisions.
We have used this software to create a virtual world simulating a house for the proof of concept.
Into this world we imported an open-source robot called TurtleBot, which allowed us to simulate our
project since it has characteristics similar to our robot's. It also incorporates a LiDAR sensor and an RGB
camera.

RVIZ

RVIZ is a graphical ROS interface that allows you to visualize a great deal of information, using plugins for
the many available data types. RViz works by reading and interpreting the data contained in ROS
messages, so you need an external generator of these messages, such as a real or simulated robot. In our
case, we use the TurtleBot robot, from which RViz reads the real-time data obtained from the RGB camera,
the LiDAR sensor, and the position estimated by odometry.

ThRend

ThRend is a raytracing-based infrared radiation renderer designed to quickly generate simulated
thermograms of 3D scenes. It can be considered a post-processing tool that, given the 3D geometry and
a point field with nodal temperatures, simulates the behaviour of the long-wave infrared radiation (LWIR)
that arrives, reflected, at the infrared sensor (thermal camera).

We have used this software to test our proposal for the simulation of the LWIR cameras, but we have not
been able to integrate it into the simulated environment, because while researching for the proof of concept
we saw that this method is not very developed within robotics.

Netica

Netica is an application from the Canadian company Norsys Software that allows simulating the
behavior of a Bayesian Network. The software is commercial, but a limited free version is available on their
website. With Netica we can design and train a network with the Junction Tree algorithm.

We have used this software to test our theoretical proposal for Interpretation, but we have not been able
to integrate it, because while investigating the proof of concept we saw that this method is not well
developed within robotics and is not the most efficient.

3.1. Mapping

For our robot, mapping the area is the main objective. In our first approaches we found an
application named ORB-SLAM. This open-source software, developed at the University of Zaragoza and
based on ROS, creates a map of an environment with monocular and stereo cameras. It allows continuous
relocalisation by tracking key points and using cumulative images of the zone. Nevertheless, we tried to
implement this application unsuccessfully, due to a complex installation process and incompatibility
issues with the native Linux OS.

Gmapping Package

We then did additional research and found a ROS package named gmapping that is quite suitable for our use
case. This package provides a SLAM technique based on a laser sensor, rather than optical images, making
it useful for our robot, which has a LiDAR scanner and RGB cameras.

Gmapping makes use of the LiDAR information to measure distances to objects, odometry, and the Rao-
Blackwellized Particle Filter [2] to estimate the pose and location of the robot. The output is a 2D map of
the area.

This package is managed by:

1. The slam_gmapping node, which retrieves the messages from the LiDAR scanner and builds the
map
2. The subscribed topic scan, where it receives the laser information
3. The subscribed topic tf, where it manages the odometry and laser transform information
4. The published topic map, where the node publishes the updated mapping results

The output of this package, and the process to obtain it, can be observed in the proof-of-concept section;
a minimal sketch of a map consumer follows Figure 5 below.

To work with the gmapping package, ROS must be installed: ROS is the middleware that manages the
messages between the nodes through topics. Another layer is also needed to visualize the results; for this
purpose, we have chosen RVIZ to draw the mapping data on a 2D plane.

Figure 5 - Basic structure of the package: the /slam_gmapping node, its subscribed topics /scan and /tf, and its published topic /map
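
As a minimal sketch of how another node could consume the map published by slam_gmapping (the occupancy values follow the standard nav_msgs/OccupancyGrid convention):

#!/usr/bin/env python
# Minimal sketch: read the 2D map that slam_gmapping publishes on /map.
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    info = grid.info
    # grid.data holds row-major cells: -1 = unknown, 0 = free, 100 = occupied
    occupied = sum(1 for cell in grid.data if cell > 50)
    rospy.loginfo("map %dx%d at %.2f m/cell, %d occupied cells",
                  info.width, info.height, info.resolution, occupied)

rospy.init_node('map_listener')
rospy.Subscriber('/map', OccupancyGrid, map_callback)
rospy.spin()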

3.2. Sensing

To collect the information required by the algorithms and networks our project implements, we need
different types of sensors designed for the conditions in which our robot is going to move.

LiDAR

A LiDAR unit works by firing lasers and recording how long each pulse takes to hit an
object and return to the unit. The more lasers a LiDAR unit has, the more data it can pick up during
a scan. These lasers can find even the smallest gaps between objects, allowing a LiDAR unit to map between
the leaves of a forest canopy. [8]
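
The underlying time-of-flight calculation is simple: the range is d = c·Δt/2, since the pulse travels to the object and back; a minimal sketch:

# Time-of-flight range estimate: d = c * dt / 2 (pulse travels out and back).
C = 299792458.0  # speed of light in m/s

def lidar_range(round_trip_s):
    return C * round_trip_s / 2.0

print(lidar_range(667e-9))  # a return after ~667 ns means a target at ~100 m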

The advanced mapping capabilities of LiDAR have great potential to aid search and rescue efforts. Not
only can a LiDAR sensor assist in identifying a human form on any given terrain, it can also be used to
plot the most efficient route to the lost person. Knowing what types of hazards the rescue team may face,
such as dangerous terrain or unstable foundations, increases the efficiency of rescue efforts and
minimizes injury to the rescue team.

Unlike cameras, LiDAR does not require light to capture data, which makes it quite effective in night
rescues. It can work even in bad weather conditions, making it a suitable sensor for search and
rescue.

SWIR, MWIR and LWIR cameras

Fires are risk situations involving very high temperatures and dense smoke. Therefore, our robot
must carry cameras adapted to these conditions. Thermographic cameras are quite useful for these
purposes.

SWIR cameras

Shortwave infrared (SWIR) cameras provide effective capabilities that complement LWIR and MWIR
cameras.

Sensors in this infrared band perceive the environment much like visible light: as with the human
eye, photons are reflected by objects rather than emitted by them, as occurs in the MWIR and LWIR
bands. Moreover, SWIR cameras may be used in both daylight and night conditions.

SWIR cameras operate in the range between 0.9 and 1.7 μm. It is the only technology that can penetrate
smog, haze, and smoke. [3]

Figure 6 - Comparison of a volcano scene using a visual and a SWIR camera

MWIR cameras

Medium Wave Infrared (MWIR) cameras are important for detecting high temperatures and gas leaks that
are invisible to the human eye. The band is typically defined as the wavelength range from 3.0 to 5.0 μm.
The interesting thing about this type of camera is the very high speed at which it can take thermal
measurements.

MWIR is an interesting sensor to include in our robot to detect gas leaks and the large sources of fire
and heat that could arise in a building. [4]

Figure 7 - Two frames of an MWIR demo video where a gas leak may be observed

LWIR cameras

Long-wavelength infrared cameras are the main devices used in temperature inspection practices. LWIR
sensors operate in the range between 8 and 14 μm. Detectors operating in the LWIR band are well
suited to imaging room-temperature objects (buildings, people, etc.). [5]

Figure 8 - LWIR camera performance comparison

For LWIR sensors we distinguish between uncooled LWIR microbolometer sensors and cooled
LWIR sensors.

An uncooled LWIR microbolometer is a thermal sensor with a resistor at each pixel. The resistance value
changes depending on the amount of incident radiation; this change is measured and processed at each
pixel and then used to create an image.

Thermal cameras conclusion

According to Wien's law, the higher the temperature of an object, the shorter the wavelength of the
energy it radiates. For example, a source heated to 300 K (≈ 27 ºC) has its maximum power
intensity at 9.7 μm, whereas for a source heated to 1000 K (≈ 727 ºC) this maximum occurs at 2.9 μm.
Thus, LWIR detectors are suitable for imaging objects at ambient temperature (people, buildings,
structures…) and MWIR detectors are useful for objects at higher temperatures (engines, pipes, gas
systems…). [12][13]
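
As a quick check of these numbers, Wien's displacement law gives the peak wavelength as λ_max = b / T, with b ≈ 2898 μm·K; a minimal sketch:

# Wien's displacement law: lambda_max = b / T
B = 2897.8  # Wien's displacement constant, in um*K

for temp_k in (300.0, 1000.0):
    print("T = %4.0f K -> peak wavelength = %.1f um" % (temp_k, B / temp_k))
# T =  300 K -> 9.7 um (inside the 8-14 um LWIR band)
# T = 1000 K -> 2.9 um (near the 3-5 um MWIR band)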

ThRend Software for LWIR cameras.

This software can be considered a post-processing tool that, given the 3D geometry and the field of
points with the nodal temperatures, simulates the behavior of the long-wave radiation reaching an
infrared sensor (thermal camera). [14]

Figure 9 - LWIR cameras

ThRend input data is handled through two configuration files: viewSettings and materials. viewSettings
contains the configuration of the scene, the camera and the output images; materials describes the infrared
properties of the materials to be used.

We can change the parameters in both files and thereby configure the outputs to our liking.

Finally, we decided not to use ThRend because it is hard to integrate, but in the future we may
investigate its use and implementation.

3.3. Interpretation

For the path-interpretation problem we chose Bayesian Networks for a simple reason:
the robot enters an unknown area whose structure has undergone radical changes, and it must
move among dynamic and static objects, making use of all the sensors that feed the network.

Theoretically, this solution seemed very good and efficient, but when performing the simulations we
concluded that it was not the best solution, so we replaced it with a Python collision-avoidance script that
we explain later.

Bayesian networks

A Bayesian network is a probabilistic graphical model for representing knowledge about an uncertain
domain, where each node corresponds to a random variable and each edge represents the conditional
probability relation between the corresponding random variables. These networks can represent discrete or
continuous variables, both qualitative and quantitative. They are normally defined by a pair (G, JPD),
where G is a directed acyclic graph and JPD is a joint probability distribution. [10][11]

Parent nodes are denoted Xj and child nodes Xi. Each child node Xi is associated
with a conditional probability distribution given its parent nodes Pa(Xi), and each node is
conditionally independent of its non-descendants. These independencies simplify the representation
of knowledge and reasoning.

When we talk about Bayesian Networks, we must define what inference is. Once the graph is known, the
effects of the evidence must be propagated through the network to obtain the posterior probability of the
variables; that is, values are given to some variables (the evidence) and the probabilities of the other
variables are obtained. There are different algorithms to calculate this posterior probability, but we
focus on the Junction Tree algorithm, which transforms the original network into a tree by grouping
nodes.

Figure 10 - Dynamic Bayesian Network

Thanks to its onboard sensors, and especially the LiDAR, the robot obtains measurements that help it
take the correct direction; that is, we intend the direction to be the parent node and the sensors to be the
child nodes.

The robot must move for some time through an environment that is not fixed, so, within Bayesian networks,
we will use a Dynamic Network, in which the state of the variables (in our case, the sensor measurements)
is given at each moment in time. For this type of Bayesian Network, the following assumptions
must hold:

1. Markovian process: the current state depends only on the previous state.
2. Stationary process in time: the conditional probabilities do not change with time.

P(Sensor | Direction)                      P(Direction)

Sensor (m)   Dir = 90º   Dir = 180º       Direction   P(Dir)
2            0.2         0.8              90º         0.6
5            0.8         0.2              180º        0.4

Each node has its probability table: the direction is expressed as the robot's turning angle, and the sensor
readings are distances in metres. With this data, at each instant of time at which a scan is performed, a
probability is obtained for each rotation angle depending on the distance measured by the sensors. Finally,
the angle with maximum probability is chosen, and the optimal path is built.
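
As an illustrative sketch (not the trained Netica model), the posterior over directions for a single sensor reading follows directly from Bayes' rule applied to the tables above:

# Minimal sketch: posterior P(Direction | Sensor) via Bayes' rule,
# using the illustrative values from the tables above.
prior = {"90": 0.6, "180": 0.4}            # P(Direction)
likelihood = {                              # P(Sensor = d | Direction)
    "90":  {2: 0.2, 5: 0.8},
    "180": {2: 0.8, 5: 0.2},
}

def posterior(sensor_m):
    unnorm = {d: likelihood[d][sensor_m] * prior[d] for d in prior}
    z = sum(unnorm.values())
    return {d: round(p / z, 3) for d, p in unnorm.items()}

print(posterior(2))  # {'90': 0.273, '180': 0.727} -> obstacle close, prefer 180º
print(posterior(5))  # {'90': 0.857, '180': 0.143} -> path clear, keep 90º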

Simulation of the Theoretical Proposal using Netica

In our network, the node "Direccion" is the parent node and represents the main action of the robot. It is
directly related to the rest of the child nodes, namely:

● Fuga_gas: whether there is a gas leak.
● Tipo_gas: the three main gases that can appear in a building fire.
● Debilidad_estructural: whether the robot can see clear damage to the structure.
● Obstaculo: whether the robot detects an obstacle in its path.
● Persona: whether a detected obstacle is observed to be a person.

To train our network we use a database of possible cases, which can be loaded into Netica. This is how
we obtain the probabilities observed in the bars of the network image.

Figure 11 - Bayesian Network design

Below we show how we performed the simulation and the data we obtained (https://youtu.be/jNY_ZLoouk0).

We obtain results for several parameters, such as errors and processing times, but the ones that really
matter to us are:

● Error rate: the proportion of errors, between 0% and 100%, that the simulation made when
processing the data; we consider a value of 5% or less acceptable for deploying the
network.
● Gini coefficient: a value between 0 and 1; a value close to 1 suggests that the network model is good
and consistent. We obtained a value of 0.99.
● Area under the ROC curve: a value between 0 and 1; a value of 0 means the predictions
are incorrect and the model is not effective. We obtained a value of 0.995.

With the results obtained from the simulation it can be determined that the network is correct and stable
and can be incorporated.

Figure 12 - Output file for our Bayesian Network design

Python Script – Avoiding Obstacles

The solution we actually integrated into our robot is a Python script that lets it move freely through
any area without a person controlling it: interpreting the environment, the robot avoids any type of
obstacle and finds a path along which to move.

We have designed code in which the robot moves with an angular velocity for rotation about the z-axis
and a linear velocity for moving along the x-axis. To carry out the movement, the robot considers its
distance from obstacles: if the obstacle is farther than the indicated threshold (0.7 m), the robot continues
forward; if the obstacle is closer than 0.7 m, the robot stops and rotates to avoid the object until it finds an
escape route along which it can continue. For these conditions we check several angles of rotation.
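
A minimal sketch of such a node follows (assuming TurtleBot-style /scan and /cmd_vel topics; the topic names, speeds and frontal cone width are assumptions based on the description above, not the exact script):

#!/usr/bin/env python
# Minimal obstacle-avoidance sketch (assumed TurtleBot-style topics).
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

SAFE_DIST = 0.7  # metres, the threshold described above

def scan_callback(scan):
    cmd = Twist()
    # Look at a frontal cone of readings (assumption: index 0 faces forward).
    front = min(min(scan.ranges[0:15]), min(scan.ranges[-15:]))
    if front > SAFE_DIST:
        cmd.linear.x = 0.3       # path clear: advance along the x-axis
    else:
        cmd.angular.z = 0.5      # obstacle close: stop and rotate about the z-axis
    pub.publish(cmd)

rospy.init_node('avoid_obstacles')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/scan', LaserScan, scan_callback)
rospy.spin()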

Figure 13 - Avoiding Obstacles

4. Proof of Concept

Figure 14 - Our proof-of-concept diagram and the technologies used

To generate the simulation scenario we used the Gazebo tool with the TurtleBot packages (a UGV
with similar features); gmapping to generate the map and locate the robot; and
teleop_twist_keyboard to send movement signals to the robot. To visualize the information we used
the RViz tool, and to implement the functionality of the LiDAR sensor and the camera we used
ROS Kinetic.

In the simulation we demonstrate how the robot moves freely around the room, generates its map,
and recognizes objects in three dimensions thanks to the LiDAR sensor.

Figure 16 - A desk captured in 3D with RViz; in the future this could be used for detecting humans

Figure 15 - The map of our simulated environment in RViz

5. Conclusions and Next Steps
In conclusion, we achieved the objectives of our project: the robot can create a map of an unknown area
with the help of various sensors, provide 3D images of the elements in that area, and move freely while
avoiding obstacles.

At the beginning of the project we investigated several technologies we could use, but the deeper we got
into them, the more difficult they seemed to implement, so we redid our research and obtained similar
solutions with other technologies.

As for the next steps, we plan to introduce the LWIR cameras mentioned above, as we think they can make
a difference in certain situations. We have also thought of implementing a neural network capable of
recognizing dangerous elements such as heat sources and human beings. Another way to increase the
capabilities of this robot would be to provide it with actuators, which would allow it to extinguish small
fire outbreaks in inaccessible or dangerous places or to clear whatever cuts off the robot's path.

6. References

[1] M. Guirguis, "Robot Search and Rescue - A Comparison of 3D Mapping Techniques." [Online]. Available:
http://dspace.mit.edu/bitstream/handle/1721.1/61000/698254285-MIT.pdf;sequence=2.
[2] W.A.S. Norzam et al., "Analysis of Mobile Robot Indoor Mapping using GMapping Based SLAM," 2019.
[Online]. Available: https://iopscience.iop.org/article/10.1088/1757-899X/705/1/012037/pdf.
[3] C. B. and D. Gustafsson, "Single Pixel SWIR Imaging using Compressed Sensing." [Online]. Available:
https://www.researchgate.net/publication/315752468_Single_Pixel_SWIR_Imaging_using_Compressed_Sensing.
[4] S. S. Ltd, "OGI (Optical Gas Imaging) Cameras | Gas Detection." [Online]. Available:
https://www.youtube.com/watch?v=7oI7sNSF19I.
[5] L. Jackson, "How to Build LWIR (Long-Wave Infrared) Cameras That Run on Raspberry Pi." [Online].
Available: https://www.arducam.com/how-to-build-an-lwir-camera/.
[6] [Online]. Available: https://robots.media.mit.edu/wp-content/uploads/sites/7/2015/01/GuirguisMEThesis10.pdf.
[7] [Online]. Available: https://www.lasexta.com/tecnologia-tecnoxplora/gadgets/nuevo-robot-aspirador-xiaomi-presume-sensor-lidar-como-iphone_2021012760114f9ba81ee90001d7bcf2.html.
[8] [Online]. Available: https://www.pix4d.com/es/blog/lidar-fotogrametria.
[9] Bayesian Robot Programming. [Online].
[10] [Online]. Available: https://es.slideshare.net/hmartinezc2/inteligencia-artificial-redes-bayesianas.
[11] [Online]. Available: https://ccc.inaoep.mx/~esucar/Clases-mgp/caprb.pdf.
[12] [Online]. Available: https://www.intechopen.com/chapters/38815.
[13] [Online]. Available: https://www.xenics.com/infrared-technologies/.
[14] [Online]. Available: https://github.com/jpaguerre/ThRend.

7. Distribution of work
Distribution of activities

Name                             Developed point

Antonio Luna de Toledo León      LiDAR sensor, Gazebo & ROS simulation, Proof of Concept, SOTA, revision
Rubén López Martínez             Mapping, Gmapping, Motivation, Gazebo simulation and SOTA
Jaime Pérez Gómez                Interpretation, Technologies, revision
Daniel Hermoso                   Introduction, Objectives, Conclusions and Next Steps
George Ion                       Proof of Concept with ROS Kinetic, Gmapping, Gazebo, RViz; LWIR simulation in ThRend
Jorge Nuevo López                Bayesian networks simulated with Netica, Sensing
