


The International Conference on Advanced Engineering in Petrochemical Industry (ICAEPI’17)
November 28-30, 2017, Skikda, Algeria.

Turtlebot 2 Autonomous Navigation Under the Virtual World Gazebo

Chaima BENSACI, LGCES Laboratory, Université 20 Août 1955 Skikda, Skikda, Algeria (ch.bensaci@univ-skikda.dz)
Youcef ZENNIR, Automatic Laboratory of Skikda, Université 20 Août 1955 Skikda, Skikda, Algeria (y.zennir@univ-skikda.dz)
Denis POMORSKI, CRIStAL Laboratory, UMR 9189, Université Lille 1, Lille, France (denis.pomorski@univ-lille1.fr)

Abstract — In this paper we use the simulated version of Turtlebot 2, a mobile robot with two differential wheels, in the Gazebo world. We chose this kind of robot because its simple architecture makes it possible to study its navigation on its own. The purpose of this study is to achieve autonomous navigation; as a first step we have planned different trajectories and tried to follow them. Our navigation strategy is based on odometric information, namely the robot's position and orientation errors. Velocity commands are sent to make the robot follow the chosen trajectory.

Keywords — autonomous navigation; Gazebo environment; trajectory following; Turtlebot 2.

I. INTRODUCTION

Nowadays, the use of robotics is widespread and encompasses many areas of life: robots are found in homes, in the fields, at the bottom of the sea, in space and especially in industrial zones, for example industrial robot manipulators, whose main objective is to facilitate worker tasks and increase productivity while preserving human and material safety within these hazardous environments.

As the progress in this area is continuous, the major challenge facing the industrial world is how to benefit from this progress and integrate mobile robots to move products, especially in places that threaten the lives of workers, such as chemistry laboratories. Improved control of autonomous robot navigation is therefore required, and this study has been proposed in that context.

In this paper we realize a planned navigation of the differential wheeled mobile robot Turtlebot 2, simulated in the Gazebo environment, using a velocity command that regulates the robot's position with respect to the followed trajectory, with the aim of reaching our desired objective in future works.

II. AUTONOMOUS MOBILE ROBOTS NAVIGATION

Autonomous systems are systems that develop, for themselves, the laws and strategies according to which they regulate their behavior: they are self-governing as well as self-regulating. Autonomous mobile robot navigation therefore means that these robots have the ability to navigate automatically by themselves, without the intervention of a human being, whose role is limited to supervision [1][2]. Autonomous navigation requires a number of heterogeneous capabilities, including the ability to execute elementary goal-achieving actions, like reaching a given location; to react in real time to unexpected events, like the sudden appearance of an obstacle; to build, use and maintain a map of the environment; to determine the robot's position with respect to this map; to form plans that pursue specific goals or avoid undesired situations; and to adapt to changes in the environment [3]. An autonomous navigation system consists of three main steps: environment perception and localization, trajectory planning and robot control, as shown in the following figure.

Fig. 1. Autonomous navigation system.

A. Navigation strategies

Diverse mobile robot navigation strategies are adopted to allow a mobile robot to move, avoid obstacles and reach the goal. Trullier et al. have classified them into five categories [4]:

1) Approach of an object: This basic capacity makes it possible to move towards a visible object from the current position of the robot. It is generally performed by a gradient ascent based on the perception of the object. This strategy uses reflex actions, in which each perception is directly associated with an action.

2) Guidance: This ability seems to be used by certain insects; it allows reaching a goal that is not a directly visible material object but a point of space characterized by the spatial configuration of a set of remarkable objects. This strategy also uses reflex actions.
3) Action associated with a location: This strategy makes it possible to reach a goal from positions for which this goal, or the landmarks that characterize its location, are invisible. It requires an internal representation of the environment, which consists of defining places as zones of space in which perceptions remain similar, and of associating an action to be performed at each of these places. The sequence of actions defines a route that allows reaching the goal.

4) Topological navigation: This is an extension of the previous strategy which memorizes, in the internal model, the spatial relations between the different places. These relations indicate the possibility of moving from one place to another. The internal model is therefore a graph that makes it possible to compute different paths between two arbitrary places. This model, however, only allows planning displacements among the known places and along the known paths.

5) Metric navigation: This strategy allows the robot to plan paths within unexplored areas of its environment. For this it memorizes the relative metric positions of the different places, in addition to the possibility of passing from one to the other. By simple composition of vectors, these relative positions allow computing a trajectory from one place to another, even if the possibility of this displacement has not been memorized in the form of a link.

B. Systems and Methods for mobile robot navigation

1) Odometry and Other Dead-Reckoning Methods: Odometry is one of the most commonly adopted navigation methods for mobile robot positioning. This method uses encoders to measure wheel rotation and/or steering orientation. It provides good short-term accuracy, is inexpensive, and allows high sampling rates (a minimal dead-reckoning sketch is given at the end of this section).

2) Inertial Navigation: This method uses gyroscopes and sometimes accelerometers to measure the rate of rotation and the acceleration. Measurements are integrated once (or twice) to yield position. Inertial sensors are thus unsuitable for accurate positioning over an extended period of time.

3) Active Beacon Navigation Systems: This method computes the absolute position of the robot from the measured directions of incidence of three or more actively transmitted beacons. The transmitters, usually using light or radio frequencies, must be located at known sites in the environment.

4) Landmark Navigation: In this method, distinctive artificial landmarks are placed at known locations in the environment. The advantage of artificial landmarks is that they can be designed for optimal detectability even under adverse environmental conditions, but this approach is computationally intensive and not very accurate.

5) Map-based Positioning: In this method, information acquired from the robot's onboard sensors is compared to a map or world model of the environment. If features from the sensor-based map and the world model map match, then the vehicle's absolute location can be estimated. Map-based positioning often includes improving global maps based on new sensory observations in a dynamic environment and integrating local maps into the global map to cover previously unexplored areas. The maps used in navigation include two major types: geometric maps and topological maps. Geometric maps represent the world in a global coordinate system, while topological maps represent the world as a network of nodes and arcs [3].

C. Reactive Navigation

Reactive navigation differs from planned navigation in that, while a mission is assigned or a goal location is known, the robot does not plan its path but rather navigates itself by reacting to its immediate environment in real time. Reactive methods share a number of important features:

• First, sensors are tightly coupled to actuators through fairly simple computational mechanisms.

• Second, complexity is managed by decomposing the problem according to tasks rather than functions [3].

In this paper we use the Turtlebot 2 robot simulated in Gazebo to perform a planned autonomous navigation; as a first step we define different trajectories and try to follow them.
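As a brief illustration of the dead-reckoning idea of method II.B.1, the following Python sketch integrates wheel-encoder increments into a planar pose estimate. It is a minimal sketch, not taken from the paper; the wheel radius, track width and encoder resolution are hypothetical placeholder values.

```python
import math

# Hypothetical differential-drive parameters (not from the paper).
WHEEL_RADIUS = 0.035   # m
WHEEL_BASE = 0.23      # m, distance between the two wheels
TICKS_PER_REV = 2578   # encoder ticks per wheel revolution

def odometry_step(x, y, theta, d_ticks_left, d_ticks_right):
    """Update the pose (x, y, theta) from encoder tick increments."""
    # Distance travelled by each wheel since the last update.
    dl = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d = (dl + dr) / 2.0              # displacement of the robot centre
    dtheta = (dr - dl) / WHEEL_BASE  # change of heading
    # First-order dead-reckoning update in the global frame.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta
```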
III. ROBOT DESCRIPTION

The Turtlebot 2 is one of the non-holonomic robots officially proposed by Willow Garage for development under the operating system dedicated to robotics, ROS. It is equipped with a Kinect sensor, a netbook, trays for the installation of these two components, and a Kobuki base, as shown in figure 2.

Fig. 2. Components of Turtlebot 2.

The Kobuki base is a mobile wheeled robot with two differential wheels and a castor wheel; it also contains proximity sensors, encoders on the wheels and a gyrometer for each axis [5].

The Turtlebot 2 is described by the following model. We consider that (X, Y) is the global coordinate system and (XR, YR) is the local coordinate system of the robot. The position of the robot P(x, y, θ) is represented in Cartesian coordinates in the global coordinate system.

The relationship between the robot frame and the global frame is given by the basic transformation matrix:

R(θ) = [  cos θ   sin θ   0 ]
       [ −sin θ   cos θ   0 ]
       [    0       0     1 ]

The figure below shows the relationship between the two frames.

Fig. 3. Turtlebot 2 representation in a Cartesian coordinate system.
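To make the role of R(θ) concrete, the short sketch below applies the transformation numerically: the robot-frame velocity (v, 0, w) is mapped into the global frame with the transpose of R(θ), which yields the Cartesian kinematic model used later in section III.A. This is an illustrative sketch only; the heading and velocity values are arbitrary.

```python
import numpy as np

def rotation_matrix(theta):
    """Basic transformation matrix R(theta): global frame -> robot frame."""
    return np.array([
        [ np.cos(theta), np.sin(theta), 0.0],
        [-np.sin(theta), np.cos(theta), 0.0],
        [ 0.0,           0.0,           1.0],
    ])

# Arbitrary example: heading of 30 degrees, v = 0.3 m/s, w = 0.1 rad/s.
theta = np.deg2rad(30.0)
xi_robot = np.array([0.3, 0.0, 0.1])             # (v, 0, w) in the robot frame
xi_global = rotation_matrix(theta).T @ xi_robot  # (x_dot, y_dot, theta_dot)
print(xi_global)  # [0.3*cos(theta), 0.3*sin(theta), 0.1]
```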
The wheels are motorized independently. When both wheels turn with the same speed in the forward direction, the robot moves forward; otherwise it moves backwards. Turning right is done by actuating the left wheel at a higher speed than the right wheel, and vice versa to turn left. The robot can also rotate on the spot by actuating one wheel forward and the other in the opposite direction at the same speed. The third wheel is a free wheel that preserves the robot's stability.

The Turtlebot 2 wheels are non-holonomic; they impose the non-holonomic constraint:

ẋ sin θ − ẏ cos θ = 0    (1)

The relationship between the speeds of the robot (v, w) and the speeds of the wheels (wr, wl) is expressed by these two equations:

v = vx = v1 + v2 = r (wr + wl) / 2    (2)

w = w1 + w2 = r (wr − wl) / d    (3)

where r is the wheel radius and d is the distance between the wheels.
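As an illustration of equations (2) and (3), the following sketch converts between the robot body velocities (v, w) and the wheel angular speeds (wr, wl). The numerical values of the wheel radius and wheel separation are hypothetical, not the Kobuki datasheet figures.

```python
# Hypothetical geometry (replace with the real Kobuki parameters if needed).
r = 0.035  # wheel radius [m]
d = 0.23   # distance between the wheels [m]

def body_from_wheels(w_r, w_l):
    """Equations (2) and (3): body velocities from wheel angular speeds."""
    v = r * (w_r + w_l) / 2.0
    w = r * (w_r - w_l) / d
    return v, w

def wheels_from_body(v, w):
    """Inverse relations: wheel angular speeds that realize (v, w)."""
    w_r = (2.0 * v + w * d) / (2.0 * r)
    w_l = (2.0 * v - w * d) / (2.0 * r)
    return w_r, w_l
```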
A. Kinematic equations

Suppose that the robot is in an arbitrary position P(xc, yc, θ) and that the distance between its current position and the desired position R(xd, yd, β), both defined with reference to the global frame, is greater than 0.

Fig. 4. Turtlebot 2 position and orientation error vectors.

The robot Cartesian kinematics are given by:

ẋ = v cos θ
ẏ = v sin θ        (4)
θ̇ = w

where x, y and θ are measured relative to the global frame.

The robot position can also be represented in polar coordinates with a distance error ρ > 0:

ρ̇ = −v cos(β − θ) = −v cos α
β̇ = v sin α / ρ        (5)
θ̇ = w

where α = β − θ is the angle between the axis XR of the robot frame and the axis ρ, and ρ is given by:

ρ = √((xd − xc)² + (yd − yc)²)    (6)

The kinematic equations obtained on the basis of the polar coordinates are:

ρ̇ = −v cos α
α̇ = −w + v sin α / ρ        (7)
β̇ = v sin α / ρ

This set of equations is only valid for ρ ≠ 0.
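A small numerical sketch of equations (4)-(7) is given below: it integrates the Cartesian model with a forward Euler step and computes the polar errors ρ and α with respect to a goal point. The step size and the pose values used to call these functions are left to the reader; only the model itself is transcribed.

```python
import math

def integrate_unicycle(x, y, theta, v, w, dt):
    """Forward Euler step of equation (4): x_dot = v cos(theta), y_dot = v sin(theta), theta_dot = w."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

def polar_errors(x_c, y_c, theta, x_d, y_d):
    """Errors of equations (5)-(7): distance rho, orientation alpha, bearing beta."""
    rho = math.hypot(x_d - x_c, y_d - y_c)     # equation (6)
    beta = math.atan2(y_d - y_c, x_d - x_c)    # bearing of the goal in the global frame
    alpha = math.atan2(math.sin(beta - theta), math.cos(beta - theta))  # alpha = beta - theta, wrapped
    return rho, alpha, beta
```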
B. Control law and stability

The control algorithm must be designed to move the robot from its current configuration to its desired configuration, so we send linear and angular velocity commands u = (v, w) until the robot reaches the desired position. The proposed control law is state-dependent, i.e. (v, w) = f(ρ, α, β), which ensures that the state moves toward (0, 0, β) without reaching ρ = 0 in a finite time interval. One of the most commonly used methods for studying asymptotic behavior is based on Lyapunov stability theory. Consider a simple positive definite quadratic Lyapunov function:

V = V1 + V2 = ½ ρ² + ½ α²    (8)

where ρ and α are respectively the distance and orientation errors. Its time derivative is given by:

V̇ = V̇1 + V̇2 = ρ ρ̇ + α α̇    (9)

Using the kinematic equations:

V̇ = ρ (−v cos α) + α (−w + v sin α / ρ)    (10)

The first term can be made non-positive by choosing the linear velocity in the form:

v = Kρ ρ cos α, with Kρ > 0
V̇1 = ρ (−Kρ ρ cos² α)    (11)
V̇1 = −Kρ ρ² cos² α ≤ 0

This means that the term V1 is always non-increasing over time and therefore converges asymptotically to a non-negative finite limit. Likewise, the second term can be made non-positive by choosing the angular velocity in the form:

w = Kρ sin α cos α + Kα α, with Kα > 0    (12)

V̇2 = α (−Kρ sin α cos α − Kα α + Kρ ρ sin α cos α / ρ)    (13)

V̇2 = −Kα α² ≤ 0    (14)

Finally, the time derivative of the Lyapunov function V becomes:

V̇ = V̇1 + V̇2 = −Kρ ρ² cos² α − Kα α² ≤ 0    (15)

The result is negative semi-definite. By applying Barbalat's lemma, it follows that V̇ necessarily converges to zero in time, which implies that the state vector converges from (ρ, α, β) to (0, 0, β). We therefore conclude that the following expressions of the linear and angular velocities make the movement of the robot smooth and stable [5]:

v = Kρ ρ cos α
w = Kρ sin α cos α + Kα α    (16)
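The control law of equation (16) can be written compactly as a function of the errors ρ and α. The sketch below is a direct transcription; the gain values are hypothetical and would need tuning (the paper only requires them to be positive).

```python
import math

# Hypothetical gains; equation (16) only requires K_rho > 0 and K_alpha > 0.
K_RHO = 0.5
K_ALPHA = 1.2

def control_law(rho, alpha):
    """Equation (16): linear and angular velocity commands from the errors."""
    v = K_RHO * rho * math.cos(alpha)
    w = K_RHO * math.sin(alpha) * math.cos(alpha) + K_ALPHA * alpha
    return v, w
```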

IV. PROPOSED CONTROL ARCHITECTURE

The planned motion of the Turtlebot 2 robot must conform to the chosen trajectory. We have therefore chosen a control law based on the regulation of the linear and angular velocities according to the distance and orientation errors between the desired and current positions of the robot. Good tracking of these trajectories is a first step towards the desired autonomous navigation. The figure below represents the proposed control architecture loop: the reference trajectory feeds the motion controller, the controller sends the velocity commands (v, w) to the Turtlebot, and the measured pose (xc, yc, θc) is fed back to the controller (a short closed-loop sketch is given after the trajectory list below).

Fig. 5. The proposed control architecture loop.

The reference trajectory gives the destination according to the current position and the trajectory. The controller is tested on different reference trajectories:

• Infinite trajectory:

x = 9 (cos(2t) − 1) cos(t)
y = 9 (cos(2t) − 1) sin(t)    (17)

x = 9 (cos(2t) + 1) cos(t)
y = 9 (cos(2t) + 1) sin(t)    (18)

• Circle trajectory:

x = 18 cos(t)
y = 18 sin(t)    (19)

• Ellipse trajectory:

x = t
y = x² − 19    (20)

x = 18 (cos(2t) − 1) cos(t)
y = 18 (cos(−2t) + 1) sin(t)    (21)

• Inclined straight trajectory:

x = 20 cos(t) sin(−t)
y = 20 sin(t) cos(−t)    (22)
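To illustrate the loop of Fig. 5, the sketch below combines the previous pieces: at each step the reference point on the circle of equation (19) is taken as the desired position, the polar errors are computed, the control law of equation (16) produces (v, w), and the pose is integrated with the unicycle model. It is a minimal sketch under arbitrary assumptions (time step, horizon, speed of the reference point along the curve); it is not the authors' MATLAB/Simulink implementation.

```python
import math

K_RHO, K_ALPHA = 0.5, 1.2   # hypothetical controller gains
DT, T_END = 0.05, 60.0      # time step and horizon [s], arbitrary
REF_SPEED = 0.05            # angular speed of the reference point [rad/s], arbitrary

x, y, theta = 0.0, 0.0, 0.0  # initial pose, arbitrary
t = 0.0
while t < T_END:
    # Reference point on the circle trajectory, equation (19).
    x_d = 18.0 * math.cos(REF_SPEED * t)
    y_d = 18.0 * math.sin(REF_SPEED * t)

    # Polar errors rho and alpha, equations (5)-(7).
    rho = math.hypot(x_d - x, y_d - y)
    beta = math.atan2(y_d - y, x_d - x)
    alpha = math.atan2(math.sin(beta - theta), math.cos(beta - theta))

    # Control law, equation (16).
    v = K_RHO * rho * math.cos(alpha)
    w = K_RHO * math.sin(alpha) * math.cos(alpha) + K_ALPHA * alpha

    # Unicycle kinematics, equation (4), forward Euler integration.
    x += v * math.cos(theta) * DT
    y += v * math.sin(theta) * DT
    theta += w * DT
    t += DT

print(f"final pose: x={x:.2f}, y={y:.2f}, theta={theta:.2f}")
```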

V. SIMULATION UNDER MATLAB

We have used this control strategy with the infinity curve as the reference trajectory; results for the different trajectories are shown below.

Fig. 6. Reference infinite trajectory.

A. Infinite trajectory

• Trajectory following

The following figures show the Turtlebot 2 following the desired trajectory:

Fig. 7. Turtlebot 2 follows the infinite trajectory.

Fig. 8. Turtlebot 2 follows the infinite trajectory.

• Trajectory tracing

The following figure shows the trajectory traced by the Turtlebot during its navigation:

Fig. 9. Infinite trajectory traced by the movement of Turtlebot 2.

B. Circle trajectory

Fig. 10. Circle trajectory followed by Turtlebot 2.

C. Ellipse trajectory

Fig. 11. Ellipse trajectory followed by Turtlebot 2.

Fig. 12. Ellipse trajectory followed by Turtlebot 2.

D. Inclined straight trajectory


Fig. 13. Inclined straight trajectory traced by the movement of Turtlebot 2.

VI. 3D-SIMULATION UNDER GAZEBO

Gazebo is a robotic three-dimensional simulator for indoor and outdoor environments. It is capable of simulating a population of robots, sensors and objects. It generates both realistic sensor feedback and physically plausible interactions between objects (it includes an accurate simulation of rigid-body physics). By realistically simulating robots and environments, code designed to operate a physical robot can be executed on its virtual counterpart. Numerous researchers have also used Gazebo to develop and run experiments solely in a simulated environment. A Gazebo simulation can thus be regarded as an experimental copy of the real robot in a virtual world [6]. The figure below shows the Turtlebot 2 robot simulated in the Gazebo world.

Fig. 14. Turtlebot 2 robot simulated under Gazebo world.
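In practice, the velocity commands (v, w) computed by the controller are sent to the simulated Turtlebot 2 as ROS Twist messages. The sketch below shows this interface in Python with rospy; the topic name /cmd_vel_mux/input/navi is an assumption (it varies between Turtlebot bringup configurations), and the constant command is only a placeholder for the output of the control law.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def send_velocity_commands():
    rospy.init_node('turtlebot_trajectory_follower')
    # Topic name is an assumption; common alternatives are /cmd_vel or
    # /mobile_base/commands/velocity depending on the Turtlebot bringup.
    pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz command rate
    cmd = Twist()
    while not rospy.is_shutdown():
        # Placeholder values; in the real loop these would come from the
        # control law of equation (16).
        cmd.linear.x = 0.2   # v [m/s]
        cmd.angular.z = 0.1  # w [rad/s]
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        send_velocity_commands()
    except rospy.ROSInterruptException:
        pass
```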
The result of the 3D simulation with Turtlebot 2 in the Gazebo world, using the infinity reference trajectory, is represented in the following figure:

Fig. 15. Result of 3D simulation with Turtlebot 2 in Gazebo world.

We note from this result that the robot does not follow the desired trajectory. In future work we will try to solve this issue by improving the robot control.

VII. CONCLUSION

In this paper, we have presented the principle of our robot (Turtlebot 2) together with the kinematic model used for its navigation. We have described the control strategy used to control the robot's navigation along different reference trajectories. The 3D simulation has been developed in the Gazebo software interacting with Matlab-Simulink. The obtained results show the efficiency of the navigation control strategy in simulation, but under Gazebo more manipulations are needed in order to tune the control parameters. In future work, we will generalize this study to two or more robots and to an artificial-intelligence-based navigation control strategy.

REFERENCES

[1] Gilles Tagne Fokam, "Commande et planification de trajectoires pour la navigation de véhicules autonomes" (Control and trajectory planning for the navigation of autonomous vehicles), PhD thesis, Université de Technologie de Compiègne, 2014.
[2] Luc Steels, "When are robots intelligent autonomous agents?", Robotics and Autonomous Systems, vol. 15, issues 1-2, July 1995, pp. 3-9.
[3] Jin Cao, "Vision Techniques and Autonomous Navigation for an Unmanned Mobile Robot", Master thesis, Department of Mechanical, Industrial and Nuclear Engineering, College of Engineering, University of Cincinnati, 1995.
[4] David Filliat, "Robotique Mobile" (Mobile Robotics), engineering school lecture notes, ENSTA ParisTech, 2011, 175 p.
[5] J. Al Hage, F. El Zahraa Wehbi, D. Pomorski, M. El Badaoui El Najjar, "Contrôle d'une flotte de robots mobiles via l'environnement Matlab/ROS" (Control of a fleet of mobile robots via the Matlab/ROS environment), Congrès National de la Recherche en IUT (CNRIUT'2016), Nantes, France, June 2016.
[6] Wei Qian, Zeyang Xia, Jing Xiong, Yangzhou Gan, Yangchao Guo, Shaokui Weng, Hao Deng, Ying Hu, Jianwei Zhang, "Manipulation Task Simulation using ROS and Gazebo", Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics, December 5-10, 2014, Bali, Indonesia.

