
Review of SLAM Algorithms for Indoor Mobile Robot with LIDAR and RGB-D Camera Technology

Chinmay Kolhatkar and Kranti Wagle

Abstract Simultaneous localization and mapping (SLAM) is a technique for mapping homogeneous environments by localizing the sensors' position in the environment with centimetre accuracy. This paper reviews the different techniques used in the mapping and localization of a mobile robot and the design of a low-cost mobile platform with sensors such as the RPLIDAR and Microsoft Kinect. The paper also discusses the use of the Robot Operating System (ROS) and the different packages and topics used for the implementation of SLAM. Along with the software implementation in ROS, it covers a comparison of hardware boards such as the NVIDIA Jetson Nano and NVIDIA TK1 GPUs for running heavy and complex CUDA algorithms; lower slave boards such as the Arduino Mega and STM32 are also discussed. The paper further focuses on the use of different localization techniques such as AMCL, ORB-SLAM, Hector SLAM, Gmapping, RTAB-MAP and particle filter SLAM. The Robot Operating System plays the crucial role in all the complex processing and communication between the different running nodes. Autonomous navigation is achieved using the ROS navigation stack, and simulations for the same are done in the Rviz and Gazebo real-time environments. 2D point-cloud-based algorithms with a laser scanner are compared against 3D visualization techniques. At the end, a clear comparison between the benchmark algorithms Hector SLAM, Gmapping, RTAB-MAP, ORB-SLAM and ZEDfu is presented for clear understanding when selecting an algorithm for future research.

Keywords SLAM · ROS · LIDAR · Kinect · Navigation stack · AMCL · Gmapping · ORB-SLAM · Mobile platform · Rviz

C. Kolhatkar (B) · K. Wagle
Fr. Conceicao Rodrigues College of Engineering, Bandstand, Mumbai University, Bandra (W), Mumbai, Maharashtra 400050, India
e-mail: chinmayhere84@gmail.com


1 Introduction

With advances in electronics and computer vision technology, service robots are becoming part of our daily life. Service robots are usually deployed as assistants in indoor environments; a few examples are the "Roomba" vacuum cleaning robot, the Personal Robot PR2, the Pepper service robot, the Husky UGV and many more [1]. GPS-based navigation cannot provide centimetre accuracy and is also not suitable for indoor environments; hence, navigating a mobile robot in an office- or home-style environment becomes a challenge. The problem becomes more difficult when the environment is complex and unknown. When the robot knows neither the map nor its own location, the problem is called the SLAM problem. SLAM techniques use sensors such as LIDAR, RGB-D cameras and IMUs to construct a map of the surrounding environment while simultaneously navigating the robot towards the target location. The SLAM problem requires accurate sensors and precise real-time algorithms. There is a variety of real-time SLAM algorithms, such as SLAM based on a monocular camera (mono SLAM), parallel tracking and mapping (PTAM) and oriented FAST and rotated BRIEF (ORB-SLAM), whereas SLAM algorithms implemented using RGB cameras are known as stereo SLAM or visual SLAM (V-SLAM) algorithms [2]. RGB cameras with depth-sensing capabilities, like the Microsoft Kinect, are widely used in SLAM-based navigation because their depth sensing allows them to map the environment in 3D. LIDARs, on the other hand, are 360-degree rotating horizontal scanners that scan the environment in 2D; they are expensive but most effective [2] at mapping the environment and constructing the map at speed with minimal computational power.
This review is targeted at the problems faced by newcomers to the field of ROS and SLAM with real-world robotics platforms. Most well-known premade robotics platforms such as TurtleBot, Kobuki and Husky are very costly and can run to thousands of dollars. Instead, building your own platform from a low-cost controller such as an Arduino and a high-end controller such as an NVIDIA CUDA GPU makes a combination that is more than sufficient for an entry-level researcher to perform research on ROS and robotics platforms. The platform can be designed with DC geared motors or stepper motors along with encoders for odometry purposes. Additionally, a 6- to 10-DOF IMU sensor can be added to improve localization and mapping quality. Sensors such as the RPLIDAR A1 can be used as a 2D planar mapping sensor, and sensors like the Microsoft Kinect and Asus cameras can be used as 3D depth-sensing mapping cameras for excellent dense map building and accurate navigation, along with features such as loop closure. Stereo cameras such as the ZED can further improve map quality and density; more features can then be detected, which improves the quality of navigation. 2D techniques such as Gmapping and Hector SLAM can be used for 2D mapping and navigation of a robotic vehicle, whereas techniques such as RTAB-MAP, DPPTAM and ORB-SLAM can be used for 3D mapping and navigation. This review explores all of these techniques and their pros and cons. The paper covers AMCL, Hector SLAM, Gmapping, ORB-SLAM, RTAB-MAP, DPPTAM and ZEDfu, together with the usage and quality of each sensor with different algorithms in different scenarios. We also discuss the hardware utilized by other researchers to implement the SLAM algorithms, and a comparative analysis of the different SLAM techniques is presented. Finally, conclusions are drawn that can give the newcomer a clear idea when selecting a particular technique and utilizing hardware, so as to reduce the cost of set-up and experimentation.

2 Mobile Robot Platform (System Set-Up)

For autonomous navigation in an indoor environment, a mobile robotic vehicle is designed with multiple sensory inputs and actuating outputs. The mobile robot, also called an unmanned ground vehicle (UGV), is designed to move freely with its own computational power. The UGV or mobile robotic platform is built on a differential drive base, which has two driving wheels and two auxiliary (omnidirectional) wheels. For the driving wheels, either DC geared motors with encoders or stepper motors are used; according to the author, stepper motors have a smaller accumulated error [1]. For driving these motors, a motor driver with suitable ampere ratings is used. In all the designs, the mobile robotic platform is divided into two sections: a bottom layer called the low-level computing layer, and an upper layer called the high-performance processing layer. The lower layer consists of a slave device such as a microcontroller/microprocessor, e.g. an Arduino or STM32, which has two main functions:
1. Collect data from on-board sensors like the wheel encoders and inertial measurement unit (IMU), and send that data in an appropriate format to the upper processing layer (see the sketch after this list).
2. Control the motor directions (PWM control) as per the commands from the upper processing node.
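As a concrete illustration of this two-layer split, the following minimal Python sketch shows the upper-layer side of the link: it reads encoder ticks from the lower board over a serial port and republishes them as a ROS topic. The "L<left> R<right>" line format, the port name and the /ticks topic name are assumptions made for illustration, not details taken from the reviewed papers (rospy and pyserial are assumed to be installed).

```python
#!/usr/bin/env python
# Sketch: upper-layer bridge that reads encoder ticks from a hypothetical
# lower board (Arduino/STM32) over serial and republishes them in ROS.
import rospy
import serial
from std_msgs.msg import Int32MultiArray

rospy.init_node("lower_board_bridge")
pub = rospy.Publisher("/ticks", Int32MultiArray, queue_size=10)
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1.0)  # assumed port

while not rospy.is_shutdown():
    # Assumed line format sent by the lower board, e.g. "L1024 R1019".
    line = port.readline().decode(errors="ignore").strip()
    parts = line.split()
    if len(parts) == 2 and parts[0].startswith("L") and parts[1].startswith("R"):
        msg = Int32MultiArray(data=[int(parts[0][1:]), int(parts[1][1:])])
        pub.publish(msg)
```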
Usually, the upper layer consists of a processor and a GPU for processing complex data, such as the data from the camera modules and LIDAR, and for running ROS. The upper-controller configuration used by Ilmir Z. Ibragimov and Ilya Afanasyev consists of an Intel Core i3 3.6 GHz four-core CPU and a CUDA-capable GeForce GT740M GPU [2]. Kartik Madhira and Jignesh Patel performed a quantitative study of mapping and localization algorithms on a ROS-based differential drive robot; they designed a robot that uses the RPLIDAR A2 as its laser scanner and 560 CPR, 300 RPM encoder motors for odometry calculations, with a CUDA-based NVIDIA Jetson TK1 board and an Arduino Mega 2560 as the lower controller [3]. Other robotic platforms and computational configurations are presented in Table 1. The Robot Operating System (ROS) runs on the upper-layer processor; the upper layer processes the data from the LIDAR and other sensors such as the RGB-D camera, collects data such as encoder ticks (for odometry) from the lower layer, and sends this data to the connected master ROS node, which may be a laptop or a PC [4].
Table 1 Different robotic platforms and their configurations with utilized sensors and ROS packages

Ilmir Z. Ibragimov and Ilya M. Afanasyev [2]
- Processor: Intel Core i3
- GPU: GeForce GT740M (CUDA)
- Laser scanner: Hokuyo UTM-30LX
- Camera: (1) Microsoft Kinect, (2) Stereolabs ZED camera, (3) Basler acA2000-50gc GigE
- Communication protocol: –
- Motor and lower controller: –
- ROS packages/nodes/frameworks: rosbag, RosAria, hector_slam, ORB-SLAM2, ZED, RTAB-MAP, OctoMap

Manuel Gonzalez Ocando and Novel Certad [12]
- Processor: PC/laptop as the ROS node for 3D reconstruction of the map over Wi-Fi
- GPU: not used
- Laser scanner: 2× Hokuyo URG-04LX-UG01
- Camera: not used
- Communication protocol: RS232
- Motor and lower controller: Omron Adept mobile robot
- ROS packages/nodes/frameworks: Rviz tool, move_base, Smach viewer, OctoMap

Yong Li and Changxing Shi [1]
- Processor: industrial computer
- GPU: –
- Laser scanner: RPLIDAR
- Camera: Microsoft Kinect
- Communication protocol: serial port
- Motor and lower controller: stepper motor drive with STM32f103 ARM Cortex-M3 as lower controller
- ROS packages/nodes/frameworks: tf2_ros, Rviz, teleop_twist_keyboard, AMCL node

Sukkpranhachai Gatesichapakorn and Jun Takamatsu [10]
- Processor: Intel NUC Kit D54250WYK and Raspberry Pi 3 Model B+
- GPU: –
- Laser scanner: SICK LMS100-10000
- Camera: ASUS Xtion PRO Live
- Communication protocol: USB to RS232
- Motor and lower controller: Raspberry Pi as lower controller
- ROS packages/nodes/frameworks: slam_gmapping, AMCL

Kartik Madhira and Jignesh Patel [3]
- Processor: NVIDIA TK1, and laptop with 1 GB GPU, 4 GB RAM and Core i3
- GPU: NVIDIA Kepler GK20a with 192 CUDA cores
- Laser scanner: RPLIDAR A2
- Camera: not used
- Communication protocol: –
- Motor and lower controller: 560 CPR, 300 RPM encoder motors; Arduino Mega 2560
- ROS packages/nodes/frameworks: ROS framework based on ClearkBot; /ticks, /cmd_vel, /odom; hector_slam, slam_gmapping, coreslam and amcl

Maksim Filipenko and Ilya Afanasyev [8]
- Processor: quad-core ARM A57
- GPU: NVIDIA Maxwell with 4 GB of RAM
- Laser scanner: Hokuyo UTM-30LX
- Camera: Basler acA1300-200uc, ZED stereo camera
- Communication protocol: serial
- Motor and lower controller: Traxxas 7407 radio-controlled car model
- ROS packages/nodes/frameworks: nav_msgs/OccupancyGrid, geometry_msgs/PoseWithCovarianceStamped, sensor_msgs/Image, sensor_msgs/LaserScan, base_link

In ROS terminology, the on-board upper processing layer running on the ground vehicle is called the slave node, and the ROS node running on the laptop or PC is called the master node. The master node is usually used for visually monitoring the different sensor data in real time; for example, tools such as the ROS visualizer (Rviz) can show exactly what the LIDAR and RGB-D cameras perceive in real time. The master node subscribes to the topics published by the slave node (the robot) over Wi-Fi on the same network. In constructing this mobile robotic platform, sensors such as RGB-D cameras, LIDARs, IMUs and motor encoders are used, which we discuss next.
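For illustration, a minimal master-side node of the kind described above might look as follows: it subscribes to the laser-scan topic published by the robot, the same data Rviz would visualize. The /scan topic name is the common default and an assumption here, and both machines are assumed to share one ROS master over Wi-Fi (via ROS_MASTER_URI).

```python
#!/usr/bin/env python
# Sketch: master-side monitor for the robot's laser scans.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # Keep only valid range readings and report the nearest obstacle.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    rospy.loginfo("scan: %d beams, nearest obstacle %.2f m",
                  len(scan.ranges), min(valid) if valid else float("inf"))

rospy.init_node("scan_monitor")
rospy.Subscriber("/scan", LaserScan, on_scan, queue_size=1)
rospy.spin()
```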
Yong Li and Changxing Shi used an STM32f103 as the lower controller, loaded with µC/OS-II; it takes commands from the upper controller for controlling the speed of the stepper motors, which is regulated through variable-frequency control [1]. The mobile robot system uses both an RGB-D camera and a LIDAR at different levels; usually, the LIDAR is placed on top of the robot so that its 360-degree scanning range is not blocked. Since the LIDAR is mounted at a height of 5 cm, this scheme also allows the robot to avoid obstacles below 5 cm within range. The 2D LIDAR is used through an "ObstacleCostmapPlugin", and the RGB-D camera through a "VoxelCostmapPlugin"; further, PCL is used for converting the depth-sensing data to point cloud data, and the openni2_launch package is utilized for handling the point cloud from the Kinect.
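As a sketch of consuming the Kinect point cloud mentioned above, the hypothetical node below subscribes to the depth point cloud published by openni2_launch (the /camera/depth/points topic name is the usual default and an assumption here) and counts points within one metre, the kind of data a voxel costmap layer would use; the sensor_msgs helper is used instead of PCL to keep the example small.

```python
#!/usr/bin/env python
# Sketch: reading the Kinect point cloud published via openni2_launch.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def on_cloud(cloud):
    # Unpack (x, y, z) points, skipping invalid (NaN) depth readings.
    pts = point_cloud2.read_points(cloud, field_names=("x", "y", "z"),
                                   skip_nans=True)
    near = sum(1 for x, y, z in pts if z < 1.0)  # points within 1 m
    rospy.loginfo("points closer than 1 m: %d", near)

rospy.init_node("cloud_monitor")
rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud, queue_size=1)
rospy.spin()
```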

3 Navigation and Localization Techniques (Implementation)

The Kalman filter, extended Kalman filter SLAM (EKF-SLAM) and particle filter algorithms are known as the baseline algorithms amongst the many SLAM-based algorithms in mobile robotics applications. ROS supports multiple navigation and localization algorithms, and the navigation stack can be utilized to navigate the robot towards its destination using multiple techniques. Algorithms such as Hector SLAM, Gmapping, RTAB-MAP, ORB-SLAM and AMCL are used to generate a map of the environment and to localize and navigate the robot within it. We now discuss a few algorithms used in the localization and autonomous navigation of a robot. Hector SLAM and Gmapping are mostly used with range sensors/LIDARs, while ORB-SLAM and RTAB-MAP are mostly used with an RGB-D camera set-up along with range sensors.

3.1 Extended Kalman Filter SLAM (EKF-SLAM)

The most common baseline algorithm used in robotics is the EKF-SLAM algorithm, which can be implemented either for multiple-sensor fusion or for state estimation [3]. The Kalman filter works on the principle of a weighted mean of the measured and predicted Gaussian distributions. It assumes that the robot's state evolves linearly and that the noise is Gaussian, so the belief distribution is never multimodal but always Gaussian. Real sensor data is noisy, however, and the underlying models are nonlinear and non-Gaussian. Hence, EKF-SLAM first linearizes the nonlinear models so that the distribution can again be treated as Gaussian. The main problem with EKF-SLAM is that it requires simultaneous updating of many matrix elements, so the computational complexity grows rapidly with the complexity of the environment, i.e. the number of landmarks.
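To make the predict/update cycle concrete, here is a minimal, self-contained EKF sketch for a 2D robot pose under a unicycle motion model. It is illustrative only; the reviewed papers do not give an implementation, and the direct position-measurement model is an assumption chosen to keep the example short.

```python
import numpy as np

# Minimal EKF sketch for 2D robot localization. State x = [px, py, theta].

def f(x, u, dt):
    """Unicycle motion model: u = [v, w] (linear and angular velocity)."""
    px, py, th = x
    v, w = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + w * dt])

def F_jacobian(x, u, dt):
    """Jacobian of f w.r.t. the state -- the EKF linearization step."""
    _, _, th = x
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                     [0.0, 1.0,  v * np.cos(th) * dt],
                     [0.0, 0.0, 1.0]])

def ekf_step(x, P, u, z, dt, Q, R):
    # Predict: propagate the mean through the nonlinear model and the
    # covariance through its linearization.
    x_pred = f(x, u, dt)
    F = F_jacobian(x, u, dt)
    P_pred = F @ P @ F.T + Q

    # Update: the measurement is assumed to observe [px, py] directly,
    # e.g. a landmark-derived position fix, so H is constant here.
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```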

3.2 Particle Filter SLAM

Although EKF-SLAM is the baseline for many SLAM techniques, most practical techniques such as Hector SLAM and Gmapping use the particle filter as their baseline algorithm. A particle filter such as the Rao-Blackwellized particle filter, which is the baseline for the Gmapping algorithm, uses a large number of particles to represent hypotheses of the robot's pose and of the map [3]; a minimal sketch of one filter step follows.
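The sketch below shows one sample-weight-resample step with systematic (low-variance) resampling, as is common in RBPF implementations. The motion_model and likelihood callables are placeholders the caller must supply; this is a toy illustration of the mechanism, not the papers' implementations.

```python
import numpy as np

# Toy particle filter step: particles are robot pose hypotheses, and weights
# come from how well each hypothesis explains the latest measurement.

def resample(particles, weights, rng):
    """Systematic (low-variance) resampling."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    indices = np.searchsorted(cumulative, positions)
    return particles[indices]

def pf_step(particles, u, z, motion_model, likelihood,
            rng=np.random.default_rng()):
    # 1. Sample a new pose for each particle from the motion model.
    particles = np.array([motion_model(p, u, rng) for p in particles])
    # 2. Weight each particle by the measurement likelihood p(z | pose).
    weights = np.array([likelihood(z, p) for p in particles])
    weights /= weights.sum()
    # 3. Resample so that unlikely hypotheses die out.
    return resample(particles, weights, rng)
```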

3.3 AMCL

AMCL is a probabilistic localization system for a robot moving in 2D. It implements the adaptive (or KLD-sampling) Monte Carlo localization approach, which uses a particle filter to track the pose of a robot against a known map [3]. AMCL takes in a laser-based map, laser scans and transform messages, and outputs pose estimates. The particle filter data can be visualized in the Rviz visualization tool, where the particles are shown in green and the trajectory in red. The start and end points for navigating the robot towards its destination can be given by clicking on particular points in the map [5]; the sketch below shows the programmatic equivalent.
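The hypothetical snippet below is the programmatic counterpart of those Rviz clicks: it seeds AMCL with an initial pose on /initialpose and sends a navigation goal on /move_base_simple/goal, which are the standard navigation-stack topics. The pose values are placeholders.

```python
#!/usr/bin/env python
# Sketch: seeding AMCL and sending a goal, like "2D Pose Estimate" and
# "2D Nav Goal" in Rviz.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped, PoseStamped

rospy.init_node("amcl_goal_demo")
init_pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped,
                           queue_size=1, latch=True)
goal_pub = rospy.Publisher("/move_base_simple/goal", PoseStamped,
                           queue_size=1, latch=True)
rospy.sleep(1.0)  # give the publishers time to connect

init = PoseWithCovarianceStamped()
init.header.frame_id = "map"
init.pose.pose.position.x = 0.0        # placeholder start pose
init.pose.pose.orientation.w = 1.0
init_pub.publish(init)

goal = PoseStamped()
goal.header.frame_id = "map"
goal.pose.position.x = 2.0             # placeholder goal: 2 m ahead
goal.pose.orientation.w = 1.0
goal_pub.publish(goal)
rospy.sleep(1.0)
```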

3.4 Hector SLAM

Hector SLAM is one of the well-known algorithms used for 2D map building and localization in mobile robots. It is specifically designed for range sensors and works on the 2D laser-scan data obtained from the LIDAR sensor. Hector SLAM does not require odometry information; it only requires the 2D laser scans [2]. The algorithm is able to build a 2D map and localize the robot at the same frequency as the scanning frequency of the LIDAR. Hector SLAM builds an occupancy grid map from the laser-scan data and publishes this information as nav_msgs/OccupancyGrid messages [2]. Combined with an IMU, Hector SLAM uses scan matching with a Kalman filter to compute the robot pose in a full 3D environment. When Hector SLAM is visualized in Rviz, each cell of the map is painted according to its occupancy probability: black means the cell is occupied, light grey means the cell is free, and dark grey means the cell has not yet been scanned. The green line in the Hector map represents the UGV trajectory. According to the author, the Hector SLAM algorithm has a very high update rate and low measurement noise; position and orientation are estimated through relative overlapping of the beam endpoints with the current map [3]. Hector SLAM uses Gauss–Newton minimization, a variant of Newton's method with the advantage that second derivatives need not be computed. It also takes less effort to cope with changes in a dynamic environment. However, this method does not solve the loop closure problem. A minimal sketch of reading the published occupancy grid follows.
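This sketch subscribes to the occupancy grid (assumed to be on the usual /map topic) and tallies cells using the standard OccupancyGrid value convention, which mirrors the colour coding described above: 100 for occupied, 0 for free, -1 for unknown. The threshold of 65 for "occupied" is an arbitrary choice for the example.

```python
#!/usr/bin/env python
# Sketch: inspecting the occupancy grid published by hector_slam.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    cells = msg.data
    occ = sum(1 for c in cells if c >= 65)        # "black" cells
    free = sum(1 for c in cells if 0 <= c < 65)   # "light grey" cells
    unknown = sum(1 for c in cells if c < 0)      # "dark grey" cells
    rospy.loginfo("map %dx%d @ %.2f m/cell: occ=%d free=%d unknown=%d",
                  msg.info.width, msg.info.height, msg.info.resolution,
                  occ, free, unknown)

rospy.init_node("map_listener")
rospy.Subscriber("/map", OccupancyGrid, on_map, queue_size=1)
rospy.spin()
```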

3.5 Gmapping

The Gmapping algorithm is the most widely used algorithm in the field of mobile robotics. Gmapping is based on the Rao-Blackwellized particle filter (RBPF) [6]. It creates a 2D occupancy-grid map and uses particles as representations of the map. These particles are regarded as hypotheses, meaning a particle can be either true or false, and each particle obtains its measurement from the laser-scan data. The RBPF filters these particles to obtain the most probable particle, which the algorithm takes as the map and trajectory. The RBPF is used to solve the metric (grid-based) SLAM problem, where the inputs are the sensor readings $z_{1:t}$ and the odometry or control signals $u_{1:t-1}$, and the outputs are the trajectory estimate $x_{1:t}$ and the map $M$. The joint posterior is $p(x_{1:t}, M \mid z_{1:t}, u_{1:t-1})$. To simplify its calculation, two steps are conducted: first the trajectory is estimated from the odometry data, and then the map posterior $p(M \mid x_{1:t}, z_{1:t})$ is computed. The joint posterior then factorizes as

$$p(x_{1:t}, M \mid z_{1:t}, u_{1:t-1}) = p(M \mid x_{1:t}, z_{1:t}) \, p(x_{1:t} \mid z_{1:t}, u_{1:t-1}),$$

after which an individual map is built for each particle [7]. ROS uses the "gmapping" package and the slam_gmapping node to start the map-building process. This node takes laser-scan messages, which are sent from the LIDAR to the ROS stack, and uses the information in these messages to transform the scan into the odometry tf frame, as sketched below.
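The following sketch shows the kind of transform lookup slam_gmapping relies on: expressing the laser frame in the odometry frame via tf2. The frame names "odom" and "base_laser_link" are common defaults and are assumptions here.

```python
#!/usr/bin/env python
# Sketch: looking up the laser pose in the odometry frame with tf2.
import rospy
import tf2_ros

rospy.init_node("scan_frame_demo")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # Pose of the laser in the odom frame at the latest common time.
        t = buf.lookup_transform("odom", "base_laser_link", rospy.Time(0))
        rospy.loginfo("laser in odom: x=%.2f y=%.2f",
                      t.transform.translation.x, t.transform.translation.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass  # transform not available yet
    rate.sleep()
```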

3.6 ORB_SLAM

Oriented FAST and rotated BRIEF (ORB-SLAM) is a fast feature-detection-based algorithm. ORB-SLAM is a feature-based method that maps the environment in 3D space using RGB-D cameras. It uses a bundle adjustment algorithm to create the 3D environment, extracting features from different images and placing them in 3D. ORB-SLAM tracks ORB features to estimate the robot's pose, calculates the camera trajectory and recovers a sparse 3D scene of the environment. This detector algorithm works in real time and creates a sparse point cloud as the map.
The main features of the ORB-SLAM algorithm are loop closure detection, keyframe selection and localization for each frame. The ORB_SLAM library is used in ROS for monocular trajectory tracking. A visualization tool such as Rviz can be used to visualize the ORB-SLAM map in 3D; the ORB_SLAM library also has its own GUI for visualization [8]. The red points represent the point cloud of the generated map, the green line represents the trajectory, and the blue rectangles represent the planes in which the camera shoots at a particular frame. According to the author, ORB-SLAM is very robust and provides a good approximation of the robot trajectory.
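To illustrate the front end only, the sketch below extracts and matches ORB features between two frames with OpenCV; it is not ORB-SLAM itself (there is no tracking, mapping or loop closing), and the image file names are hypothetical.

```python
# Sketch: the ORB feature-extraction front end, using OpenCV.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching, appropriate for binary descriptors
# like ORB; in a full system the best matches would feed pose estimation.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("keypoints: %d / %d, matches: %d" % (len(kp1), len(kp2), len(matches)))
```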

3.7 RTAB-MAP

Real-time appearance-based mapping (RTAB-MAP) is a visual SLAM algorithm that creates a 3D map of the environment using a graph-based SLAM approach. The algorithm relies on graph-based SLAM and global loop closure detection to compute the robot pose and map. The map is stored as a graph with the robot poses and associated images as nodes and the odometry/loop-closure transformations as edges [8]. It can be used with any sensor providing 3D scanning information; mostly, stereo or RGB-D cameras like the Microsoft Kinect are used with this algorithm. The algorithm provides excellent loop closure and pose estimation. The loop closure detection uses a "bag-of-words" algorithm to compare the images captured at the current location with the images of previously visited locations [8]; each such closure then helps in optimizing the graph. The algorithm can be implemented with or without odometry information. According to Maksim Filipenko and Ilya Afanasyev, the algorithm sometimes fails in pose estimation when the robot moves close to monotonous walls, but the failure-recovery system is able to detect the failure and determine the robot's pose properly. According to the authors, the system is fairly accurate and robust; it solves the localization problem more accurately than LIDAR without additional manipulation. RTAB-MAP is integrated with ROS via the rtabmap_ros package. The algorithm provides a 3D map in the form of a dense point cloud and a 2D map in the form of an occupancy grid, both of which can be visualized using the Rviz tool. According to Ilmir Z. Ibragimov and Ilya Afanasyev, during their test the UGV motion trajectory deviated from the marked trajectory, but the Kinect RGB-D camera and camera-based RTAB-MAP odometry gave results close to the ground truth.
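The toy sketch below illustrates the bag-of-words idea only: it quantizes ORB descriptors into a small visual vocabulary, describes each image as a word histogram and scores similarity between histograms. A real RTAB-MAP-style system uses an incremental vocabulary and Bayesian filtering on top of this; the image file names and vocabulary size are hypothetical.

```python
# Sketch: naive bag-of-words place similarity for loop closure intuition.
import cv2
import numpy as np

def orb_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = cv2.ORB_create(500).detectAndCompute(img, None)
    return des.astype(np.float32)

paths = ["view_a.png", "view_b.png", "view_a_revisit.png"]  # hypothetical
descs = [orb_descriptors(p) for p in paths]

# Build a tiny vocabulary of k visual words from all descriptors.
k = 32
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, _, vocab = cv2.kmeans(np.vstack(descs), k, None, criteria, 3,
                         cv2.KMEANS_PP_CENTERS)

def bow_histogram(des):
    # Assign each descriptor to its nearest visual word, then normalize.
    d = np.linalg.norm(des[:, None, :] - vocab[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=k).astype(np.float32)
    return hist / hist.sum()

h = [bow_histogram(d) for d in descs]
# Cosine similarity: a revisited place should score higher than a new one.
sim = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print("a vs b:", sim(h[0], h[1]), "a vs revisit:", sim(h[0], h[2]))
```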

3.8 DPPTAM

Dense piecewise planar tracking and mapping (DPPTAM) from a monocular sequence is a new method of real-time direct visual SLAM. DPPTAM uses the assumption that homogeneous, monochrome regions belong to approximately the same plane [5]. According to Maksim Filipenko and Ilya Afanasyev, DPPTAM is not a robust enough method, as it lost tracking during turns.

Table 2 Comparison of different SLAM algorithms based on pose estimation trajectory accuracy and map building

| SLAM algorithm | 2D/3D | LIDAR/RGB-D or both | Odometry/IMU | Map building density | Pose estimation | Loop closure algorithm |
|----------------|-------|---------------------|--------------|----------------------|-----------------|------------------------|
| AMCL | 2D | LIDAR | Odometry | Sparse | Good | None |
| Hector SLAM | 2D | LIDAR | IMU | Sparse | Poor | None |
| Gmapping | 2D | Mono (single-line) LIDAR | Odometry | Sparse | Good | None |
| ORB-SLAM | 3D | RGB-D cameras | Better with data association | Semi-dense | Good | FabMap |
| RTAB-MAP | 3D | RGB-D mono/stereo | Better with data association | Semi-dense | Good | Bag of words |
| DPPTAM | 3D mono | Direct RGB-D | Odometry | Dense | Poor | None |
| ZEDfu | Stereo 3D | RGB-D | Odometry | Dense | Good | None |

Ilmir Z. Ibragimov also concluded that the DPPTAM real-time trajectory and the marked white-line trajectory were mismatched; hence, it fails as a visual odometry method in an indoor navigation environment.

3.9 ZEDfu

ZEDfu is a stereo visual SLAM method. Stereolabs provides its cameras with various tools and interfaces. The ZED stereo camera can be used for generating a 3D map of the environment in the form of a point cloud, and the ZED SDK integrates with ROS through the zed-ros-wrapper package [2]. Ilmir Z. Ibragimov and Ilya Afanasyev performed an experiment with the ZEDfu tool; the recorded point cloud is stored in SVO format. According to them, the map is dense, and even obstacles such as chairs and tables are clearly visible, which is not the case with Hector SLAM. To build the UGV trajectory, the ZED camera visualization data and the position-tracking data from an OpenGL window are used. According to the authors, the trajectory calculated using the ZEDfu tool matches the ground-truth trajectory with a difference of only 10 cm (Table 2).

4 Future Scope

SLAM is a baseline technique that can be used for the autonomous localization and navigation of a mobile robot. SLAM algorithms such as Hector SLAM and Gmapping are highly dependent on sensor accuracy; hence, work can be done in the direction of reducing sensor noise and improving the accuracy of these algorithms.

Data association is a crucial part of mobile robotics and machine vision. If the sensory data from the LIDAR and RGB-D cameras are combined, the accuracy of navigation improves. Algorithms such as RRT, PRM, Dijkstra and A* can be used in association with the existing SLAM algorithms in order to improve navigation. Along with the SLAM algorithm, machine learning algorithms, especially deep reinforcement learning algorithms, can be developed in order to avoid the dependency on prior map building, and accuracy can also be improved to some extent. Deep reinforcement learning algorithms such as DQN, DDPG, actor-critic and SARSA can also be deployed to improve the accuracy of SLAM algorithms and to improve navigation. The problem of loop closure can be tackled more easily with improved sensor quality and sensor data association. Various other approaches can also be deployed, such as ceiling-feature matching and landmark tracking [9], which are used in low-cost dust-cleaning robots to improve overall system performance and make the mobile robot cost-effective. SLAM is a benchmark technique in the field of robotics and computer vision; it has been used for several years and will continue to be used in the future, a good example being Xiaomi's use of SLAM in its smart dust-cleaning robots such as the RoBoRock S650.

5 Conclusions

1. Low-cost, high-performance hardware can be built by using a low-cost LIDAR and a sensor like the Microsoft Kinect. Optionally, to improve experimental performance, odometry can be added using motor encoders and IMU sensors.
2. A ready-made platform such as the Omron Adept mobile robot can be used to design a low-cost mobile SLAM-based robot [6].
3. Even though it is possible to place a system like a laptop on the mobile robot, it is preferable to use a small on-board computer like the Jetson TK1 or Intel NUC to reduce the robot's weight and mobility constraints [3].
4. As the ROS navigation stack performs heavy computation during navigation and processes large amounts of data from all nodes, it is advisable to use an on-board GPU [3].
5. ROS open-source repositories and navigation-stack packages greatly reduce code length and shorten development time [1].
6. The LIDAR is a 2D sensor that turns out to be the fastest and most precise of all for 2D map-building and navigation experiments.
7. As the LIDAR is a single-line laser radar, only obstacles at the LIDAR's level are detected, and obstacles at other levels cannot be effectively avoided [4].
8. Using a Kinect sensor together with a laser range sensor can greatly improve the obstacle avoidance capabilities of a mobile robot.
9. Odometry sensors tend to be noisy, and algorithms such as Gmapping are highly affected by measurement noise.

10. Rough terrain can cause the robot's wheels to get stuck in place; since the motors keep running and the wheels keep rotating, Rviz shows the robot as still moving, causing serious errors in scan matching and mapping inaccuracies [4].
11. The Gmapping algorithm worked well in all scenarios and came out best for 2D planar navigation, despite not being originally designed for RGB-D cameras [10].
12. Although Gmapping comes out to be accurate, it fails to detect some obstacles like chairs and tables, treating them as homogeneous walls [4].
13. Both Gmapping and Hector SLAM realize the map correctly, but the Gmapping algorithm needs odometry information, which puts a certain limit on the robot's speed. Navigation-wise, Hector SLAM works much faster and with less noise in the maps, but it requires a higher laser scanning frequency.
14. In semi-outdoor environments, the laser sensor is unable to provide accurate scans due to bright light coming from the outdoor area; this causes the SLAM algorithm to misinterpret the orientation of the robot [4].
15. Faster movement of the robot during map generation causes poor map building and errors in scan matching [4].
16. No monocular SLAM algorithm estimates the absolute scale of the obtained map and localization; stereo and RGB-D sensors give better results than mono sensors in map building and localization.
17. ORB-SLAM comes out better than the other visual SLAM/odometry algorithms, and it is more robust in homogeneous environments than DPPTAM; DPPTAM, however, provides a dense map [5].
18. The ORB-SLAM method should be used if high performance is required and the environment does not contain flat monochrome objects.
19. The DPPTAM method should be used if a dense area map is required to build an obstacle map, or if the environment does not contain enough features, and the hardware has enough processing power. A lack of feature points affected the DPPTAM method adversely [8].
20. ZED camera odometry also shows good results, with errors close to those of the LIDAR data; however, it loses tracking over sharp turns [2].
21. RTAB-MAP can be considered a good solution for 3D mapping scenarios, with loop closure detection and a good recovery feature in case of moving off the trajectory [11].

References

1. Li Y, Shi C (2018) Localization and navigation for indoor mobile robot based on ROS. IEEE,
New York
2. Ibragimov IZ, Afanasyev IM (2017) Comparison of ROS-based visual SLAM methods in
homogeneous indoor environment. In: 14th WPNC. IEEE, New York
3. Madhira K, Patel J (2017) A quantitative study of mapping and localization algorithm on ROS
based differential robot. In: NUiCONE. IEEE, New York
4. Syaqur WA, Kamarudin K (2018) Mobile robot based simultaneous localization and mapping in UniMAP's unknown environment. IEEE, New York
5. da Silva BMF, Xavier RS (2017) Experimental evaluation of ROS compatible SLAM algorithm
for RGB-D sensors. IEEE, New York
6. Cheng Y, Wang GY (2018) Mobile robot navigation based on LIDAR. IEEE, New York
7. Eliwa M, Adham A (2017) A critical comparison between Fast and Hector SLAM algorithms.
REST J Emerging Trends Modell Manuf
8. Filipenko M, Afanasyev I (2018) Comparison of various SLAM systems for mobile robot in
an indoor environment. IEEE, New York
9. Hwang S-Y, Song J-B (2013) Clustering and probabilistic matching of arbitrarily shaped ceiling
features for monocular vision-based SLAM. Adv Robot 27(10):739–747
10. Gatesichapakorn S, Takamatsu J (2019) ROS based autonomous mobile robot navigation using
2D LiDAR and RGB-D Camera. IEEE, New York
11. Yagfarov R, Ivanou M (2018) Map comparison of LIDAR based 2D SLAM algorithms using
precise ground truth. In: ICARCV. IEEE, New York
12. Ocando MG, Certad N (2017) Autonomous 2D SLAM and 3D Mapping of an environment
using a single 2D LIDAR and ROS. IEEE, New York
13. Megalingam RK, Teja CR (2018) ROS based autonomous indoor navigations simulation using
SLAM algorithm. Int J Pure Appl Math 118(7):199–205
