
Way-finding and Mapping Using Robot ROS

Bhavya Sharma1, Harshit Singh2, Navendu Kumar3, Ujjwal Gupta4, Md. Saquib Faraz5
1,2,3,4,5 Department of Electrical and Electronics,
Dr. Akhilesh Das Gupta Institute of Technology and Management,
New Delhi, India
1bhavyaaganpal@gmail.com, 2harshitsingh1006@gmail.com, 3kumarnavendu14s10n@gmail.com, 4guptaujjwal1b7@gmail.com

Abstract—This paper aims to give the reader an understanding of autonomous navigation and how it works in real-life scenarios. It also introduces various algorithms, such as the SLAM algorithm and sensor fusion algorithms like the Kalman filter. All the processing and communication is done with the help of ROS (Robot Operating System), which comes with various packages that make building robotic applications much simpler. The working of the proposed model depends on a master-client connection: in our case the mobile robot is the master node, where all the processing and path planning algorithms run, and our computer/laptop is the client, which helps us visualize the data processed by the mobile robot. A path planning algorithm, the A* (A Star) algorithm, is used to obtain the optimized path towards the goal, and all the data can be visualized with the help of Rviz and Gazebo running at the client node.

Keywords—SLAM, Kalman filter, A*, ROS, Gazebo

I. INTRODUCTION
Autonomous navigation is the ability of a robot to determine its location and plan a path to some goal without human intervention [8]. This goal can be achieved by various methods, but in this paper we focus on a ROS-based method. Navigation spans a wide range, from teleoperation to fully autonomous navigation, all of which helps the robot reach its goal. In teleoperation, commands are given by the user via a remote connection and the robot behaves on the basis of those commands; the human intervention this requires leads to human error, and labour costs rise because of the human-based operation. With autonomous navigation there is less error, and the cost of operation also decreases. There are two types of autonomous navigation:
i) Heuristic approach
ii) Optimal approach
The heuristic approach is more like a probabilistic approach in which the mobile robot does not achieve the optimal path, and it is impractical in some cases. A maze-solving robot, for example, does not require information about the environment to reach the goal, but it takes more time and the path is not optimized [10]. The optimal approach, on the other hand, is best suited for autonomous navigation, since with its help we can achieve the optimized path using knowledge of the environment. Self-driving cars and warehouse robots are the best examples of this kind of navigation. But, as discussed above, the optimal approach requires environment information to achieve an optimized result, and in practice it is difficult for the robot to know the environment every time. Robots and cars must be designed to work well even in unknown environments. To solve this we turn to the SLAM algorithm, which stands for simultaneous localization and mapping. With the help of this algorithm the robot is able to map an unknown environment and navigate through it at the same time. Fig. 1 gives a brief idea of the navigation stack of a mobile robot.

Fig. 1: Autonomous navigation stack of a mobile robot

This research article is organized as follows. Section II gives a brief description of pose estimation for mobile robots, followed by Section III, which explains the SLAM algorithm. Section IV describes the need for and application of the path planning algorithm, and Section V describes the implementation of our mobile robot.

II. POSE ESTIMATION
During navigation, the robot needs to know its location and orientation. We can estimate the position and orientation of a mobile robot using a particle filter. When a robot is first placed in an environment it does not know where it is within the map; it could be anywhere, so the problem is that the robot must determine its position and orientation within the environment [1]. Our robot has a lidar sensor that returns the distance to objects within its field of view, so if the sensor is looking at a wall and an obstacle it might return a point cloud as shown in Fig. 2.
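For illustration, the point cloud in Fig. 2 can be reconstructed from a standard sensor_msgs/LaserScan message, which carries the range readings together with the start angle and the angular increment between beams. This is a minimal sketch; the /scan topic and the node name are conventional assumptions rather than details from the paper:

```python
# Sketch: convert a ROS LaserScan into 2D (x, y) points in the sensor frame.
# The topic and node names are illustrative, not taken from the paper.
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_to_points(scan):
    points = []
    angle = scan.angle_min
    for r in scan.ranges:
        # Keep only valid returns within the sensor's rated range.
        if scan.range_min < r < scan.range_max:
            points.append((r * math.cos(angle), r * math.sin(angle)))
        angle += scan.angle_increment
    return points

def callback(scan):
    rospy.loginfo("got %d valid points", len(scan_to_points(scan)))

if __name__ == "__main__":
    rospy.init_node("scan_to_points")
    rospy.Subscriber("/scan", LaserScan, callback)
    rospy.spin()
```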
Fig. 2: Point cloud data received from the lidar sensor

But the measurement is actually noisy. We can match the sensed pattern against the map to localize the robot, but this is not useful in some cases; in a rectangular room, for example, our bot could be facing any side of the room, and out of the four walls it would be difficult to determine which wall the robot is facing. With the help of the lidar sensor and the odometry data received from the encoder motors and the IMU, however, the robot can dead reckon its position. Dead reckoning is when a future position is calculated using a past position and relative measurements such as velocity and angular rate. Dead reckoning can be used over shorter time frames with a lot of success, but due to noise in the relative measurements the error grows over time and needs to be corrected by measuring where the robot is relative to the environment. Though the Kalman filter can be used to fuse lidar and odometry data to get the best result, we do not use it in our case, because the Kalman filter requires the inputs and outputs to be Gaussian, and there are many situations where the distribution is not Gaussian. To deal with this, we used a particle filter [11].
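Dead reckoning can be illustrated with the standard differential-drive odometry update below. This is a minimal sketch; the function and variable names are ours rather than anything from the paper's code:

```python
import math

def dead_reckon(x, y, theta, v, omega, dt):
    """One dead-reckoning step: integrate linear velocity v and angular
    rate omega over dt from the previous pose (x, y, theta). Noise in v
    and omega accumulates, which is why the estimate drifts over time
    and must be corrected against the environment."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: drive straight at 0.2 m/s for one second in ten steps.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(*pose, v=0.2, omega=0.0, dt=0.1)
print(pose)  # approximately (0.2, 0.0, 0.0)
```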
With a particle filter, what the robot can do is place many simulations of itself on the map. We can imagine that if there were enough simulated robots, at least one would end up in the same position as the real robot. When the real robot moves, the simulated robots move in exactly the same way, just with a little bit of error. The real robot is actually quite imperfect: when you tell it to move straight forward, it won't. It will turn as it drives, it won't go quite far enough, or it will go a little bit too far. This could be because the floor is slippery, or maybe the wheels don't drive at the same speed. In other words, the robot might end up in the right place, or it could end up in any of several nearby places. When the robot moves it is most likely to end up where it is supposed to be; it is unlikely to end up far from where it is supposed to be, but it could. In fact, the robot could wind up anywhere [6]. How does this help us find which simulated robot is closest to the real robot's position? The next step is called redistribution. The simulated robots are removed from the map and then placed back on the map according to the probabilities. Positions with higher probabilities are more likely to have one or multiple robots placed there, and those robots are placed at exactly the same location [10] (in the figures they are overlapped a little so that the multiple robots are visible). Positions with low probabilities are unlikely to receive a robot, and the result is the final distribution of robots. The robots can then be moved, redistributed, moved, and redistributed again and again in a loop. Over time, the robots begin to cluster around a single spot. Sometimes other clusters break off into areas that look similar to the one the real robot is in, but they disappear as the robot moves into a new area. The exception to this is when the robot is driving somewhere where the map looks exactly the same from several locations [5]: the robot could be in any of those places and the sensor readings would always be exactly the same no matter how the robot moves. If there ends up being only one cluster, where that final cluster ends up is actually just a matter of chance.
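The move-weight-redistribute loop described above can be written down compactly. The following is a toy one-dimensional particle filter; the world model, the noise levels, and all numbers are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def predict(particles, motion, noise=0.05):
    # Move every simulated robot the same way the real robot moved,
    # plus a little error to model slippery floors and unequal wheels.
    return [p + motion + random.gauss(0.0, noise) for p in particles]

def weight(particles, measurement, world, noise=0.1):
    # Particles whose expected sensor reading matches the real
    # measurement receive a higher probability.
    ws = [math.exp(-0.5 * ((measurement - world(p)) / noise) ** 2)
          for p in particles]
    total = sum(ws) or 1.0
    return [w / total for w in ws]

def redistribute(particles, weights):
    # Remove all simulated robots and place them back according to
    # probability: high-weight positions receive one or more copies.
    return random.choices(particles, weights=weights, k=len(particles))

# One run: the robot starts at x = 2.0 and senses a wall at x = 10.
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
world = lambda p: 10.0 - p          # expected range reading at position p
truth = 2.0
for _ in range(20):
    truth += 0.1
    particles = predict(particles, motion=0.1)
    particles = redistribute(particles, weight(particles, world(truth), world))
```

Iterating this loop is exactly the move-and-redistribute cycle described above: the surviving particles cluster around the true pose.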
However, as we home in on the real location of the robot and the distributions get narrower, it does not take as many particles to represent them fully. We could still use 50, but that would just slow the filter down without adding much value. This is the idea behind adaptive Monte Carlo localization (AMCL), which recalculates the number of particles after each generation so that computational resources are not wasted. With this approach we are able to localize and estimate the position of a robot in any environment.
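In the ROS navigation stack this idea is implemented by the amcl node, whose adaptive particle count is bounded by its standard min_particles and max_particles parameters. Below is a minimal sketch of reading its estimate; amcl_pose is the node's standard output topic, while the listener's node name is our own:

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_pose(msg):
    p = msg.pose.pose.position
    rospy.loginfo("AMCL estimate: x=%.2f y=%.2f", p.x, p.y)

if __name__ == "__main__":
    rospy.init_node("amcl_listener")
    # amcl publishes its estimated pose with covariance on "amcl_pose".
    rospy.Subscriber("amcl_pose", PoseWithCovarianceStamped, on_pose)
    rospy.spin()
```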
Fig. 3: Final map generated by the robot in Rviz

III. SLAM ALGORITHM
In the previous section we discussed how to estimate the robot pose, that is, its location and orientation, in a known environment. But when working with robots in real time, the environment is not necessarily known to the robot every time, and here the SLAM algorithm plays the key role. In our solution we build a map using sensor data and simultaneously navigate through the environment. Specifically, we have used the graph-based SLAM technique to obtain the best result. There are many different SLAM algorithms, but they can mostly be classified into two groups:
i. Filtering
ii. Smoothing
Filtering, like the extended Kalman filter or the particle filter, models the problem as online state estimation in which the robot state (and possibly part of the environment) is updated on the go as new measurements become available. Smoothing techniques estimate the full robot trajectory from the complete set of measurements, not just the new ones.
A. Pose Graph Optimization
Pose graph optimization is a modern technique used in SLAM to optimize the estimated trajectory. We have a lidar sensor on our bot to sense the distances and angles to nearby obstacles, and we have given it a way to dead reckon its relative position over time using odometry: in this case it uses wheel encoders to count the number of rotations each wheel makes as it drives, and from that it estimates how far it has gone and how it has turned since its last known position. The first thing our robot does is take a measurement of the environment. This measurement is associated with the currently estimated robot pose, and we can add both to the pose graph. The pose is defined as an x and y location and a rotation angle, and along with it we save off the distances and angles to the sensed obstacles; there are also uncertainties associated with this pose entry. The robot then drives for a little while and our estimated pose starts to deviate from the real pose. Another measurement of the environment is taken, associated with the new estimated pose, and this combination is saved in the pose graph. We now have two poses, each with its own local estimate of where the obstacles are, and even though we don't know where these two poses are in the environment, we do have an idea of about how far apart they are from each other. In our case this knowledge came from counting the wheel rotations, but the idea behind the SLAM algorithm [7] is not tied to a specific set of sensors; the relative pose distances could have come from another internal measurement source like an IMU.

Fig. 4: Vector representation of pose optimization

In this graph-based optimization, every node corresponds to a robot pose and its laser measurement, and an edge between two nodes represents a constraint between them [8]. The final map of the environment is obtained with the help of the graph in which all the nodes are connected.
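Concretely, pose graph optimization finds the poses that best satisfy all edge constraints in a least-squares sense. Below is a toy one-dimensional sketch using SciPy; the edge values are invented for illustration and are deliberately inconsistent, as real odometry drift would make them:

```python
import numpy as np
from scipy.optimize import least_squares

# Edges (i, j, z): measured offset from pose i to pose j. Three odometry
# edges plus one loop-closure edge from node 0 to node 3.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 3.2)]

def residuals(x):
    # Anchor pose 0 at the origin, then one residual per edge: the
    # measured offset minus the offset implied by the current estimates.
    return [x[0]] + [z - (x[j] - x[i]) for i, j, z in edges]

x0 = np.zeros(4)                      # initial guess (e.g. dead reckoning)
sol = least_squares(residuals, x0)
print(sol.x)                          # optimized 1-D poses
```

Note how the loop-closure edge pulls the drifted odometry-only estimates back into mutual consistency.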

B. Binary Grid
The idea behind the binary grid is that the environment is broken up into a grid of cells: if you believe a cell is occupied you set its value to 1, and if it is not occupied you set it to 0.

Fig. 5: Binary grid representation

C. Probabilistic Occupancy Grids
A cell does not have to be marked fully occupied or fully free; instead there is a probability between zero and one that it is occupied. With this model, cells in which you have full confidence are black or white, and everywhere else is some shade of grey depending on the uncertainty.

Fig. 6: Probabilistic occupancy grid representation
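A probabilistic occupancy grid is commonly stored in log-odds form so that each observation becomes a simple addition, and thresholding the resulting probabilities recovers the binary grid. The sketch below is a minimal illustration; the grid size and the increment values are our assumptions:

```python
import math
import numpy as np

# A 100 x 100 grid of log-odds values; 0.0 corresponds to probability
# 0.5, i.e. "unknown". The increments below are illustrative choices.
grid = np.zeros((100, 100))
L_OCC, L_FREE = 0.85, -0.4

def update_cell(i, j, hit):
    # A lidar return in the cell raises its log-odds of being occupied;
    # a beam passing through the cell lowers it.
    grid[i, j] += L_OCC if hit else L_FREE

def probability(i, j):
    # Convert log-odds back to occupancy probability: 1 is black
    # (occupied), 0 is white (free), anything between is grey.
    return 1.0 / (1.0 + math.exp(-grid[i, j]))

update_cell(10, 10, hit=True)
update_cell(10, 10, hit=True)
print(round(probability(10, 10), 2))   # about 0.85 after two hits
```

Thresholding probability() at, say, 0.5 yields the binary grid of Section III-B.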
IV. PATH PLANNING ALGORITHM
Now that we have discussed robot pose estimation and dealt with the SLAM algorithm, we will discuss the path planning algorithm we used to make our robot reach its desired goal. We needed a path planning algorithm so that the robot can reach the goal by the optimized path and save time. Path planning algorithms are categorized into two parts:
i) Search-based algorithms (such as A*)
ii) Sampling-based algorithms (such as RRT and RRT*)
Motion planning is concerned with the path and its derivatives (velocity, acceleration, and rotation). The ideal path between the initial node and the goal node is a straight line, the shortest possible distance between them. Since we have used a graph-based approach, the A* algorithm is the best-suited algorithm for us.

Fig. 7: Graphical representation of the A* search algorithm
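A* expands nodes in order of f(n) = g(n) + h(n), the cost accumulated so far plus a heuristic estimate of the remaining cost to the goal, as illustrated in Fig. 7. The sketch below runs on a small occupancy grid; the 4-connected neighbourhood and the Manhattan heuristic are illustrative choices, not details taken from the paper:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).
    Returns the optimized path as a list of (row, col) cells."""
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                ng = g + 1             # uniform step cost
                heapq.heappush(open_set, (ng + h((nr, nc), goal), ng,
                                          (nr, nc), path + [(nr, nc)]))
    return None                        # no path exists

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the returned path is optimal.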

V. IMPLEMENTATION
The bot was implemented in real life with the help of hardware that made it possible to build this type of mobile robot. An acrylic sheet was laser-cut to 15 cm, which was the footprint dimension of the mobile robot. Hardware such as an Arduino Uno (acting as the low-level controller in our case) along with a Raspberry Pi 3B (the host computer for the mobile robot) was used while building the robot. The robot pose is calculated with the help of the encoder motors and an IMU (in our case an MPU6050). We used 200 RPM encoder motors to get the best result.
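As a sketch of the low-level data flow, the snippet below turns cumulative wheel-encoder ticks into a travelled distance on the host computer. The topic name, tick resolution, and wheel radius are illustrative assumptions that the paper does not specify; an Arduino typically publishes such counts via rosserial:

```python
import math
import rospy
from std_msgs.msg import Int32

WHEEL_RADIUS = 0.03       # metres, illustrative
TICKS_PER_REV = 360       # encoder resolution, illustrative

def on_ticks(msg):
    # Distance travelled by one wheel since start, from cumulative ticks.
    dist = 2 * math.pi * WHEEL_RADIUS * msg.data / TICKS_PER_REV
    rospy.loginfo("left wheel distance: %.3f m", dist)

if __name__ == "__main__":
    rospy.init_node("encoder_listener")
    # The low-level controller is assumed to publish cumulative tick
    # counts on this topic.
    rospy.Subscriber("left_ticks", Int32, on_ticks)
    rospy.spin()
```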
Along with the hardware, software plays a key role in the development of this mobile robot. Ubuntu 18.04 is the operating system we used to build our solution. ROS (Robot Operating System) has various versions available, from ROS Kinetic to ROS Noetic, but we used ROS Melodic as it is stable and has a greater number of open-source packages compared to other ROS versions. Ubuntu MATE 18.04 is installed on the Raspberry Pi, which helps us run the ROS packages and compute all the data required for autonomous navigation. All the maps generated by the mobile robot can be visualized on the host computer via an SSH connection between the host and the client computer.
system," 2016 IEEE International Conference on Mechatronics and
Automation, 2016, pp. 275-280, doi: 10.1109/ICMA.2016.7558574.
[10] S. Tarao, Y. Fujiwara, N. Tsuda and S. Takata, "Development of
Autonomous Mobile Robot Platform Equipped with a Drive Unit
Consisting of Low-End In-Wheel Motors," 2020 5th International
Conference on Control and Robotics Engineering (ICCRE), 2020, pp.
42-47, doi: 10.1109/ICCRE49379.2020.9096433.
[11] M. Liu et al., "Campus Guide: A Lidar-based Mobile Robot," 2019
European Conference on Mobile Robots (ECMR), 2019, pp. 1-6, doi:
10.1109/ECMR.2019.8870916.
