Proceedings of the 20th World Congress
The International Federation of Automatic Control
Toulouse, France, July 9-14, 2017

Available online at www.sciencedirect.com

ScienceDirect

IFAC PapersOnLine 50-1 (2017) 10274–10279
Autonomous exploration with prediction of the quality of vision-based localization

Hélène Roggeman ∗, Julien Marzat ∗, Anthelme Bernard-Brunel ∗, Guy Le Besnerais ∗

∗ ONERA – The French Aerospace Lab, F-91123, Palaiseau, France.
e-mail: firstname.lastname@onera.fr.
Abstract: This paper presents an algorithm to perform autonomous exploration with robotic platforms equipped with a stereo-vision system in indoor, unknown and cluttered environments. The accuracy of the vision-based localization depends on the quantity of visible features in the scene captured by the cameras. A Model Predictive Control approach makes it possible to perform the exploration task with obstacle avoidance while taking into account the quality of the scene, in order to avoid areas where visual odometry is likely to fail. Experiments were carried out with a mobile robot to assess the improvement in localization accuracy and coverage for exploration.

© 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Autonomous vehicles, Autonomous exploration, Stereo-vision, Model Predictive Control
1. INTRODUCTION

Autonomous exploration makes it possible to build maps of unknown environments without human intervention. This can be interesting, for instance, for search-and-rescue missions in dangerous areas. UAVs and mobile robots can be complementary to perform this type of mission. Their success depends on the accurate localization of the robot. The localization algorithms are designed to be embedded on UAVs and mobile robots. UAVs have a low payload, which implies a limitation of the number of embedded sensors. Moreover, these exploration missions are most of the time conducted in indoor environments, where no global localization systems, such as GPS, are available. Thus, a good solution to address these constraints is to install camera sensors, because they are lightweight and inexpensive and the images provide a high amount of information. A stereo rig composed of two cameras, together with a visual odometry algorithm, makes it possible to compute the localization of the platform from features extracted in the images (Sanfourche et al., 2013).

All passive vision-based navigation systems are likely to fail in low-textured environments, where the lack of interest points in the images prevents the computation of a good localization. The aim of this work is to develop an autonomous exploration system with a command strategy that seeks to avoid the situations where the robot is likely to lose its localization due to a lack of texture in the environment. We experiment with a mobile robot in order to validate the developed strategy, but the overall system can easily be adapted to a UAV.

1.1 Related Work

The issue of evaluating the scene quality for vision-based localization was studied in the context of navigation between waypoints. Sadat et al. (2014) take into account the richness of the environment for the path planning: they used an RRT* algorithm and added a criterion which computes a viewpoint score based on the density of the triangles in the 3D mesh of the environment. Mostegel et al. (2014) evaluate the quality of the camera motions for the localization quality and the possibility of seeing new features. A combination of criteria on the features is computed and used to decide whether a future position will be well suited for the localization. In these two references, the robotic platforms are equipped with a monocular camera, which involves specific depth estimation issues to compute the localization. In previous work (Roggeman et al., 2016), we focused on the same problem but with a stereo-vision system, which makes it possible to obtain depth information about the points directly. The present paper elaborates on this work by addressing autonomous exploration missions.

Some authors were interested in the active reduction of the uncertainty during an autonomous exploration mission. Bourgault et al. (2002) aimed at exploring and building an accurate map of the environment with a mobile robot equipped with a laser range finder, selecting the control actions that maximize the accuracy of the localization. Bryson and Sukkarieh (2008) developed an information-based path planning method for a UAV. It plans a trajectory which maximizes the accuracy of the map and the vehicle location during the exploration of unknown areas. It is based on the computation of the entropic information gain before and after taking an action. This system can be used with vision sensors.

1.2 Problem statement

This paper describes a complete architecture for autonomous exploration on robotic platforms. The mission considered in this paper is the exploration by a mobile robot of an unknown and cluttered environment which presents some low-textured areas. The robot has to complete its exploration mission while avoiding the obstacles in the
Copyright © 2017 IFAC
2405-8963 © 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Peer review under responsibility of International Federation of Automatic Control.
10.1016/j.ifacol.2017.08.1479

room while keeping an accurate localization during the whole mission.

To achieve this mission, it is necessary to localize the robot and map the environment. The localization is estimated from the images of the stereo-vision system by the visual odometry algorithm (Sanfourche et al., 2013), described in Section 2.1. The mapping task relies on an RGB-D sensor; explanations can be found in Section 2.2. For the detection of the low-textured areas, a criterion based on the prediction of the amount of information available in the future images makes it possible to decide whether a position is appropriate for the localization. This criterion is explained in Section 3. The Model Predictive Control (MPC) strategy presented in Section 4 uses the information of mapping, localization and quality of texture of the scene in order to find the optimal control to send to the motors. Experiments were made in real situations and the results are reported in Section 5.

2. PERCEPTION

2.1 Visual odometry algorithm: eVO

The localization of the robot is ensured by a visual odometry algorithm, using the images provided by the stereo rig. The aim is to estimate the localization of the robot from its starting point. In our experiments, we use eVO (Sanfourche et al., 2013), but other algorithms could be used as well (Klein and Murray, 2007).

The following is a brief description of the algorithm. Two tasks work in parallel:

• Mapping: this task consists in providing a map with a limited number of points localized in space. Interest points (Harris and Stephens (1988) or FAST (Rosten and Drummond, 2006)) are extracted from both images and matched. The 3D position of the points in space is then computed by triangulation (see Eq. 1).
• Localization: the matching between the 2D points in the left image and the 3D points in the map derives from the temporal tracking of 2D points with KLT (Shi and Tomasi, 1994). The position and orientation of the left camera are computed by minimization of the reprojection error, within a RANSAC procedure (Fischler and Bolles, 1981).

The 3D points computed during the mapping task will serve to evaluate the ability of the robot to localize itself from a given position, see Section 3.
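As an illustration of the localization task described above, the sketch below shows how one such pose-estimation step could be written with OpenCV (KLT tracking followed by a RANSAC PnP). This is a minimal, hypothetical example and not the eVO implementation; the function name, its arguments and the parameter values are our own assumptions.

```python
# Hypothetical sketch of one localization step of a stereo visual odometry
# pipeline (not the actual eVO code): 2D points tracked with KLT are matched
# to the 3D map points, and the left-camera pose is estimated by RANSAC PnP.
import numpy as np
import cv2

def localize_step(prev_left, curr_left, prev_pts_2d, map_pts_3d, K, dist=None):
    # Temporal tracking of the 2D points in the left image (KLT).
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_left, curr_left, prev_pts_2d.astype(np.float32), None)
    ok = status.ravel() == 1
    # Pose from the 2D-3D correspondences, robust to outliers (RANSAC).
    # reprojectionError is an illustrative threshold, in pixels.
    found, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_pts_3d[ok], curr_pts[ok], K, dist, reprojectionError=2.0)
    return found, rvec, tvec, inliers
```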
2.2 Environment reconstruction

For the obstacle avoidance and exploration tasks, it is necessary to have an occupancy map of the environment. A Kinect sensor is installed on the robot and gives a 3D representation of the environment in a 3D point cloud format. We first remove the ground plane, using a RANSAC method. Then, the obtained point cloud is transformed into an OctoMap model (Hornung et al. (2013)), which is a representation of the volumetric occupancy. Finally, the OctoMap is projected onto the ground plane and two 2D maps are created: an exploration map with the explored and unexplored areas, and an obstacle map with the occupied and free areas. At time tn, the exploration map is represented as a matrix denoted G(n) whose elements gi,j are 0 or 1: 0 if the location is not explored, 1 if it is explored.
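The following minimal sketch illustrates the idea of reducing a 3D point cloud to the two 2D grids used here. It is a simplification under stated assumptions: the real pipeline uses RANSAC ground removal and OctoMap, whereas this example uses a fixed height threshold and a direct projection; all names and parameter values are illustrative.

```python
# Simplified sketch (not the actual OctoMap-based implementation): project a
# 3D point cloud onto the ground plane to update an exploration grid G and an
# obstacle grid. A fixed height threshold stands in for RANSAC ground removal.
import numpy as np

def project_to_grids(points_xyz, grid_size=(55, 55), res=0.2, ground_z=0.05):
    """points_xyz: (N, 3) array in the map frame; res in meters per cell."""
    explo = np.zeros(grid_size, dtype=np.uint8)      # 1 = explored cell
    obstacles = np.zeros(grid_size, dtype=np.uint8)  # 1 = occupied cell
    cells = np.floor(points_xyz[:, :2] / res).astype(int)
    inside = np.all((cells >= 0) & (cells < np.array(grid_size)), axis=1)
    cells, z = cells[inside], points_xyz[inside, 2]
    explo[cells[:, 0], cells[:, 1]] = 1               # cells seen by the sensor
    above = z > ground_z                              # crude ground removal
    obstacles[cells[above, 0], cells[above, 1]] = 1   # non-ground returns
    return explo, obstacles
```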
3. VISUAL QUALITY FOR THE LOCALIZATION

From the 3D points extracted by the odometry algorithm and with a known future position, the proposed visual quality criterion is derived from the prediction of the number of visible points in the future images, considering the uncertainty (Roggeman et al., 2016). The whole process is illustrated in Figure 1 and described in the following.

Fig. 1. Two 3D points are triangulated with their covariances at time tn. The two points are projected onto the camera plane after a displacement (R, T): the green one is predicted to lie in the image whereas the red one is outside.

3.1 Future point position

From the 2D points extracted in the stereo images, the position of a 3D point in the current camera frame, defined by the position of the left camera, is given by

Y = (x, y, z)^T = Π^-1(u, v, d) = (-b/d) · (u - u0, v - v0, α)^T    (1)

(u, v) is the 2D position of the point in the image and d is the disparity. b denotes the baseline between the left and right cameras, α the focal length, and (u0, v0) are the coordinates of the principal point.

A change of basis is necessary in order to express the position of the 3D point in the future camera frame:

Ỹ′ = P · Ỹ = [ R  T ; 0  1 ] · (x, y, z, 1)^T    (2)

R and T are respectively the rotation and the translation between the two frames.

The 3D point is then projected onto the image plane associated with this future camera frame:

p′ = Π(Y′),    p̃′ = (u′, v′, 1)^T = z′^-1 K · Y′    (3)


with K the camera matrix:

K = [ α  0  u0 ; 0  α  v0 ; 0  0  1 ]    (4)

The expression of p′, the position of the future 2D point, as a function of the position p, the disparity d and the displacement parameters Θ is:

p′ = f(Θ, u, v, d)    (5)

p′ = Π[Y′] = Π[P · Ỹ] = Π[P · Π̃^-1(u, v, d)]    (6)
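A compact sketch of this prediction chain (our transcription of Eqs. (1) to (6), with illustrative variable names) is given below; the disparity sign convention follows Eq. (1) as written.

```python
# Sketch of the point prediction of Eqs. (1)-(6): triangulate a 2D point with
# its disparity, move it into the future camera frame and re-project it.
# Variable names are illustrative; the disparity sign follows Eq. (1).
import numpy as np

def triangulate(u, v, d, b, alpha, u0, v0):
    """Eq. (1): 3D point in the current left-camera frame from (u, v, d)."""
    return (-b / d) * np.array([u - u0, v - v0, alpha])

def predict_point(u, v, d, R, T, K, b):
    """Eqs. (2)-(6): predicted pixel position p' after a displacement (R, T)."""
    alpha, u0, v0 = K[0, 0], K[0, 2], K[1, 2]
    Y = triangulate(u, v, d, b, alpha, u0, v0)
    Y_prime = R @ Y + T                   # Eq. (2): change of basis
    p_tilde = K @ Y_prime / Y_prime[2]    # Eq. (3): projection on the image
    return p_tilde[:2]                    # predicted (u', v')
```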

3.2 Uncertainty

The uncertainty on the position of the point p′ is expressed as a 2D covariance matrix. We assume that the uncertainties related to Θ and to (u, v, d) are independent. The covariance is:

Σp′ = JfΘ · ΣΘ · JfΘ^T + Jfu,v,d · Σu,v,d · Jfu,v,d^T    (7)

with JfΘ and Jfu,v,d the Jacobian matrices of f with respect to Θ and to (u, v, d), respectively:

JfΘ = ∂f/∂Θ = JΠ(Y′) · JY′(Y)    (8)

Jfu,v,d = ∂f/∂(u, v, d) = JΠ(Y′) · R · JΠ̃^-1(u, v, d)    (9)

JΠ, JΠ̃^-1 and JY′ are the Jacobian matrices of the projection function, the triangulation function and the change of basis function. Their expressions can be found by differentiating the functions given in Section 3.1.
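As a sketch of Eq. (7), the example below propagates the two uncertainty sources through the prediction function using finite-difference Jacobians instead of the analytic expressions of Eqs. (8)-(9); predict_point is the sketch given after Eq. (6), and theta_to_RT is a hypothetical helper mapping the displacement parameters Θ to (R, T).

```python
# Sketch of the covariance propagation of Eq. (7), with finite-difference
# Jacobians standing in for the analytic Jacobians of Eqs. (8)-(9).
# theta_to_RT is a hypothetical helper returning (R, T) from the parameters.
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f (which returns a 2-vector)."""
    J = np.zeros((2, len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x))
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def predicted_covariance(u, v, d, theta, theta_to_RT, K, b,
                         Sigma_theta, Sigma_uvd):
    """Eq. (7): 2D covariance of the predicted point p'."""
    f_theta = lambda th: predict_point(u, v, d, *theta_to_RT(th), K, b)
    f_uvd = lambda m: predict_point(m[0], m[1], m[2], *theta_to_RT(theta), K, b)
    J_theta = numerical_jacobian(f_theta, np.asarray(theta, dtype=float))
    J_uvd = numerical_jacobian(f_uvd, np.array([u, v, d], dtype=float))
    return (J_theta @ Sigma_theta @ J_theta.T
            + J_uvd @ Sigma_uvd @ J_uvd.T)
```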
3.3 Prediction of the number of points

We want to estimate the probability that the point p′ lies in the future image. Using the 2D covariance matrix Σp′, representing the uncertainty on the position of the predicted point p′, the corresponding 90% confidence ellipse is considered. Let x = (u, v)^T be an image point. The ellipse equation in the image frame is

(x - p′)^T Σp′^-1 (x - p′) = s    (10)

with s = 4.605 for a 90% confidence ellipse; this value can be found in a table of the χ2 distribution with 2 degrees of freedom.

Then, the area of the intersection between the ellipse and the image is computed. If the ellipse is entirely located in the image, the area is that of the ellipse; if it is completely outside, the area is zero. In the other cases, the area is estimated by computing the double integration of the domain delimited by the ellipse and the edges of the image.

The probability of a point lying in the future image is estimated by dividing the obtained area by the whole ellipse area. A threshold on this probability is used to decide whether a point is considered to be in the image. The final criterion is the number of points satisfying this condition.
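The sketch below approximates this visibility probability. The paper evaluates the ellipse/image intersection by double integration; here the same ratio is estimated by uniform sampling inside the 90% ellipse, which is a simpler stand-in. The 0.5 threshold matches the value reported in Section 5.1; all other names are illustrative.

```python
# Sketch of the visibility test of Section 3.3: fraction of the s-confidence
# ellipse (s = 4.605) that lies inside the image, approximated by sampling
# instead of the double integration used in the paper.
import numpy as np

def prob_in_image(p, Sigma, width, height, s=4.605, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(s * Sigma)        # maps the unit disk to the ellipse
    r = np.sqrt(rng.uniform(size=n))         # uniform sampling inside a disk
    a = rng.uniform(0.0, 2.0 * np.pi, size=n)
    disk = np.stack([r * np.cos(a), r * np.sin(a)])
    pts = np.asarray(p).reshape(2, 1) + L @ disk
    inside = ((pts[0] >= 0) & (pts[0] < width) &
              (pts[1] >= 0) & (pts[1] < height))
    return inside.mean()

def count_visible(points, covariances, width, height, threshold=0.5):
    """Final criterion: number of predicted points considered in the image."""
    return sum(prob_in_image(p, S, width, height) >= threshold
               for p, S in zip(points, covariances))
```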
4. CONTROL

4.1 Robot kinematic model

We denote by X = (x, y, θ) the 2D position and orientation of the robot; U = (v, ω) is the control input, with v the linear speed and ω the angular speed, and te is the sampling period. The kinematic model of the mobile robot is:

xn+1 = xn + te vn cos θn
yn+1 = yn + te vn sin θn    (11)
θn+1 = θn + te ωn

4.2 MPC

Using the robot model (11), trajectories are simulated over a finite prediction horizon Hp, from a sequence of control inputs U defined on a control horizon Hc. The control inputs and simulated state sequences at time tn are denoted:

Un = {Un, Un+1, . . . , Un+Hc-1}    (12)
Xn = {Xn+1, Xn+2, . . . , Xn+Hp}    (13)

The linear and angular velocities are bounded by (-vmax, -ωmax)^T and (vmax, ωmax)^T.

A cost function J(Un, Xn) is defined to mathematically represent the mission objectives (see Section 4.3). The minimization of this cost function gives the optimal control Un∗ to apply to the system:

Un∗ = arg min_Un J(Un, Xn),  with Xk satisfying (11), ∀k ∈ [n + 1, n + Hp]    (14)

Only the first component Un∗ of the optimal solution is applied to the system, and the process is repeated at the next time step.

This optimization problem is non-linear and non-convex. There are different ways to solve it. Global optimization algorithms can be used, but their execution time is not constant and can be too long for a real-time application. In this work, we chose to define a finite set of predefined control sequences and to select the optimal solution within this set. This strategy limits the risk of falling into a local minimum.
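A minimal sketch of this receding-horizon scheme is shown below: candidate control sequences are rolled out with model (11) and the first input of the best sequence is applied. The candidate set (constant speed/turn-rate pairs) and the cost callback are placeholders, not the actual sequences used in the experiments.

```python
# Sketch of the MPC step of Section 4.2 with the unicycle model (11): roll out
# a finite set of candidate control sequences over Hp steps and return the
# first input of the best one. The candidate set and cost J are placeholders.
import itertools
import numpy as np

def simulate(X, U_seq, te=0.25, Hp=20):
    """Roll out model (11) over Hp steps, holding the last input beyond Hc."""
    traj = []
    x, y, th = X
    for k in range(Hp):
        v, w = U_seq[min(k, len(U_seq) - 1)]
        x, y, th = x + te * v * np.cos(th), y + te * v * np.sin(th), th + te * w
        traj.append((x, y, th))
    return np.array(traj)

def mpc_step(X, cost_J, v_max=0.25, w_max=0.4, Hc=8):
    """Return the first control input of the best candidate sequence."""
    speeds = [0.0, v_max]
    rates = [-w_max, -w_max / 2, 0.0, w_max / 2, w_max]
    candidates = [[(v, w)] * Hc for v, w in itertools.product(speeds, rates)]
    best = min(candidates, key=lambda U: cost_J(simulate(X, U), U))
    return best[0]
```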
4.3 Cost function

The expression of the cost function is

J = wloc Jloc + wexpl Jexpl + wobs Jobs    (15)

Each cost J• represents an objective of the mission; they are explained in the following paragraphs. All the costs are normalized. The weights w• give more importance to one objective compared to the others. Modifying the weights induces changes in the behavior of the robot, and finding an optimal tuning of the weights requires trial and error. This is discussed in Section 5.2.

Localization quality cost Jloc
This cost relies on the visual quality criterion defined in Section 3. It favors trajectories which go through positions where a large number of points will be visible.

Jloc = 1 - N(Xn+Hloc, Yn) / Nmax    (16)


where N(Xn+Hloc, Yn) is the number of predicted landmarks at n + Hloc and Nmax is the total number of 3D points. Hloc is the time horizon at which the prediction is computed; it must be fixed beforehand (see Section 5.2).

Exploration cost Jexpl
This cost favors trajectories which go towards the unexplored areas. From a copy of the exploration map at the current time, G(n) (see Section 2.2), the amount of future explored space is estimated along the trajectory. From the known values of the range and the opening angle of the mapping sensor, for each predicted position the corresponding explored space is added as explored area in G(n). The map obtained after the time horizon is denoted G(n + Hp). The proposed cost tends to maximize the additional explored area from G(n) to G(n + Hp). The same process was used successfully in a previous work (Roggeman et al., 2014). The expression of the cost is:

Jexpl = r² / (Hp · α · d²) · Σi Σj ( gi,j(n) - gi,j(n + Hp) )    (17)

d is the range and α the opening angle of the sensor, and r is the resolution of the map. i and j are the coordinates of the elements of G.
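The sketch below illustrates Eq. (17) on a grid copy. The sensor footprint model (cells within range d and half-angle α/2 of the heading) is our own simplification of the footprint update described above, and the parameter values are illustrative.

```python
# Sketch of the exploration cost of Eq. (17): mark the predicted sensor
# footprints as explored in a copy of G(n) and compute the normalized gain.
# The circular-sector footprint is a simplifying assumption.
import numpy as np

def j_expl(G, traj, d=2.5, alpha=0.5, res=0.2, Hp=20):
    G_future = G.copy()
    h, w = G.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    cx, cy = (ii + 0.5) * res, (jj + 0.5) * res          # cell centers (m)
    for x, y, th in traj:
        ang = np.arctan2(cy - y, cx - x) - th
        ang = (ang + np.pi) % (2.0 * np.pi) - np.pi       # wrap to [-pi, pi]
        in_fov = (np.hypot(cx - x, cy - y) <= d) & (np.abs(ang) <= alpha / 2)
        G_future[in_fov] = 1                              # newly explored cells
    gain = np.sum(G.astype(int) - G_future.astype(int))   # <= 0 by construction
    return res**2 * gain / (Hp * alpha * d**2)
```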
Obstacle avoidance cost Jobs
This cost penalizes trajectories which pass too close to the obstacles or which intersect them. The expression of the cost is:

Jobs = (1 / Hp) · Σ (k = n+1 to n+Hp) fobs(Gdist(Xk))    (18)

The obstacle map (see Section 2.2) is transformed into a distance map, called Gdist: for each square in the grid, the distance to the nearest obstacle is computed.

fobs is a function which normalizes the distance into a cost between 0 and 1. Two distances are defined: dsec is the security distance, which the robot must not cross, and ddes is a distance from which the obstacle is no longer considered. fobs(d) is equal to 1 if d < dsec and to zero if d > ddes, with a smooth decrease in-between. The curve is displayed in Figure 2.

Fig. 2. Penalty function for obstacle avoidance
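A possible implementation of this penalty is sketched below. The exact interpolation used between dsec and ddes is not specified in the text (only Figure 2 shows it), so a smoothstep decrease is assumed; the default d_des value is also illustrative.

```python
# Sketch of the obstacle penalty f_obs of Eq. (18): 1 below the security
# distance d_sec, 0 beyond d_des, with an assumed smoothstep decrease between
# the two (the exact curve of Figure 2 is not specified in the text).
import numpy as np

def f_obs(dist, d_sec=0.5, d_des=1.0):
    """Normalized obstacle cost for a distance-to-obstacle value (meters)."""
    if dist <= d_sec:
        return 1.0
    if dist >= d_des:
        return 0.0
    t = (dist - d_sec) / (d_des - d_sec)      # 0 at d_sec, 1 at d_des
    return 1.0 - (3.0 * t**2 - 2.0 * t**3)    # smooth decrease

def j_obs(traj_cells, G_dist, Hp=20):
    """Eq. (18): average penalty over the predicted trajectory cells."""
    return sum(f_obs(G_dist[i, j]) for i, j in traj_cells) / Hp
```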
5. EXPERIMENTS

5.1 Experimental set-up

Mission
The aim of the mission is to explore the room shown in Figure 3. It measures approximately 11x11 meters. In this room, there is a wall with low texture. If the robot arrives in front of this wall, the visual localization is likely to fail. We want to verify whether the addition of the criterion on the quality of the localization improves the behavior of the robot, that is, whether it succeeds in exploring the whole space while keeping an accurate localization. Moreover, there are two obstacles in the room that the robot has to avoid.

Fig. 3. The robot in its environment; in the back of the room, there is a textureless wall which is a difficulty for the visual localization.

Material
For the experiments, we use a Turtlebot (see Figure 4), equipped with a stereo rig composed of two cameras with 4.0 mm lenses separated by a 21 cm baseline, and a Kinect sensor with a range fixed to 2.5 m and an opening angle of 0.5 rad. The perception algorithms run on the embedded computer whereas the MPC algorithm runs on a remote station. A motion capture system computes the localization of the robot; it is considered as the ground truth and serves for comparison with the visual localization.

Parametrization

• Localization criterion
  · Probability threshold: sproba = 0.5
  · Uncertainty: ΣΘ = diag(σθx², σθy², σθz², σx², σy², σz²) and Σu,v,d = diag(σu², σv², σd²) with:
  · position: σx = σy = σz = 0.005 m
  · orientation: σθx = σθy = σθz = 0.001 rad
  · image point position: σu = σv = 0.2 pixel
  · disparity: σd = 0.4 pixel
• MPC
  · Time step: te = 0.25 s
  · Horizons: Hp = 20, Hc = 8
  · Obstacle distances: ddes = 0.5 m, dsec = 0.5 m
  · Map resolution: r = 0.2 m per pixel
  · Speed limitations: vmax = 0.25 m.s-1, ωmax = 0.4 rad.s-1
  · Weights: wexpl = 0.3, wobs = 0.7

The weights on the exploration cost and the obstacle cost are fixed at the beginning. Only the weight on the quality of the localization is modified during the experiments. The weights are normalized after each change of wloc so that the sum of the weights remains equal to 1.

5.2 Results

In Figure 5, two results of exploration missions are displayed. In the left image, the weight on the localization quality cost is zero. At the beginning, the robot turns to the right; it is the optimal trajectory because it maximizes


the observed area, but this leads the robot in front of the textureless wall and the visual localization becomes inaccurate, rapidly leading to a large drift of the estimated trajectory, and the exploration mission cannot be performed. In the second experiment, the weight wloc is increased to 0.07. The robot begins at the same position, but in this case it goes straight forward and does not face the wall. The mission then continues successfully and the zone is entirely explored.

Fig. 4. The Turtlebot platform used for the experiments: the stereo rig is visible on the upper part and the Kinect sensor is below.

(a) wloc = 0   (b) wloc = 0.07
Fig. 5. Results of two experiments with different values of wloc: the visual localization is the blue line and the ground truth is the red one; the grey areas are the free explored spaces and the black areas are the occupied areas.

Other experiments were made to confirm these results and to define what could be the best parametrization for the robot in order to obtain the best exploration result. The results of these experiments are summarized in Table 1. The values of Hloc and wloc were modified, and we computed measures of the localization error and of the success of the exploration mission. The relative error is the error between the total travelled distance computed by the visual localization algorithm and the ground truth, relative to the total travelled distance; this value is expressed as a percentage. The maximal distance column shows the maximal distance estimated by the visual localization between two time steps. As the maximal speed was set to 0.25 m.s-1 and the maximal duration between two time steps is approximately 0.20 s, the maximal travelled distance between two time steps is approximately 0.05 m. Hence the value of the maximal distance allows to evaluate the drift induced by the visual navigation.

nb   Hloc  wloc   duration  distance  coverage  rel. err.  max dist.
0    0     0.0     66.41     19.03     27.24     107.98     0.513
1    0     0.0     16.36      2.92     13.28      40.19     0.073
2    0     0.0     14.1       2.74     12.24      29.97     0.08
3    0     0.0     32.76      6.26     26.84      21.39     0.111
4    0     0.0     35.09      7.58     31         91.42     0.152
5    1     0.02   184.67     52.07     72.92      25.76     0.063
6    1     0.05   229.18     63.82     75.04      34.96     0.076
7    1     0.07   186.51     48.75     71.68      30.62     0.08
8    1     0.11    66.54     19.1      51.68      33.48     0.068
9    2     0.02   102.95     28.75     61.12      28.18     0.065
10   2     0.05   132.74     37.37     66.04      37.39     0.058
11   2     0.07    98.78     27.37     59.8       21.33     0.064
12   2     0.11    91.7      23.31     55.96      24.83     0.063
13   3     0.07   106.68     28.13     60.72      32.63     0.064
14   3     0.08   106.53     27.53     59.08      23.26     0.071
15   4     0.07    80.65     20.36     53.96      19.5      0.059

Table 1. Experimental results: the duration is expressed in seconds, the distance in meters, the coverage as a percentage of the whole area to explore, the relative error as a percentage and the maximal distance in meters.

The first five runs (numbers 0-4) were done without using the localization cost (i.e. Hloc = wloc = 0). Among them, three experiments present a high error on the maximal distance (experiments 0, 3 and 4, with 0.513, 0.111 and 0.152 m respectively), whereas with a positive weight the maximal error is below 0.08 m. These experiments also present the lowest exploration coverages because they were too hazardous and had to be stopped by the user. We can conclude that the addition of the criterion on the quality of the localization avoids the occurrence of large drifts in the visual localization.

During experiments 5-15, Hloc was fixed between 1 and 4, and Table 1 shows that the coverage is higher when Hloc is lower. Indeed, if Hloc is low, the robot goes more easily into the corners of the room, whereas it tends to avoid them when Hloc is higher (see Figure 6). In this figure, with the same value of wloc, the result of the exploration appears completely different depending on the value of Hloc. Let us recall that the visual quality is computed from 3D points seen in the past. The best way to keep them in the field of view over a large time horizon is to follow a straight trajectory. Using a shorter horizon permits curved paths and eases the exploration of corners.

Figure 7 presents four results obtained with a short horizon Hloc = 1 and various values of wloc. We can see that the exploration results depend on the value of wloc: if wloc is too high (experiment 8, wloc = 0.11), the robot adopts a cautious behavior by keeping a large number of previously seen points in its field of view, which prevents it from exploring new areas, whereas with lower values


(experiments 5, 6 and 7, wloc = 0.02, 0.05 and 0.07 respectively), the robot explores the whole area while keeping a good visual localization.

(a) Hloc = 1   (b) Hloc = 2   (c) Hloc = 3   (d) Hloc = 4
Fig. 6. Results of four experiments with different values of Hloc, with wloc = 0.07.

Fig. 7. Results of four experiments with different values of wloc, with Hloc = 1.

According to this parametric experimental study, we conclude that a good parametrization uses Hloc = 1 and wloc between 0.02 and 0.07.

6. CONCLUSION

In this paper, we have presented a complete architecture to perform autonomous exploration with a mobile robot using a vision-based localization system. The model predictive control strategy is designed to avoid the situations where the localization is likely to fail due to the lack of texture in the environment. Experiments were carried out in the real world and demonstrated that this system can improve the exploration behavior of the robot and ensure a safe localization.

REFERENCES

Bourgault, F. et al. (2002). Information based adaptive robotic exploration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, volume 1, 540–545.
Bryson, M. and Sukkarieh, S. (2008). Observability analysis and active control for airborne SLAM. IEEE Transactions on Aerospace and Electronic Systems, 44(1), 261–280.
Fischler, M.A. and Bolles, R.C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.
Harris, C. and Stephens, M. (1988). A combined corner and edge detector. In Proc. of the Fourth Alvey Vision Conference, 147–151.
Hornung, A. et al. (2013). OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots, 34(3), 189–206.
Klein, G. and Murray, D. (2007). Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 225–234.
Mostegel, C. et al. (2014). Active monocular localization: Towards autonomous monocular exploration for multirotor MAVs. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 3848–3855.
Roggeman, H., Marzat, J., Bernard-Brunel, A., and Le Besnerais, G. (2016). Prediction of the scene quality for stereo vision-based autonomous navigation. In 9th IFAC Symposium on Intelligent Autonomous Vehicles IAV, Leipzig, Germany, 94–99.
Roggeman, H., Marzat, J., Sanfourche, M., and Plyer, A. (2014). Embedded vision-based localization and model predictive control for autonomous exploration. In IROS Workshop on Visual Control of Mobile Robots (ViCoMoR), Chicago, United States, 13–20.
Rosten, E. and Drummond, T. (2006). Machine learning for high-speed corner detection. In Proceedings of the European Conference on Computer Vision, Graz, Austria, volume 1, 430–443.
Sadat, S.A. et al. (2014). Feature-rich path planning for robust navigation of MAVs with mono-SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 3870–3875.
Sanfourche, M. et al. (2013). eVO: a realtime embedded stereo odometry for MAV applications. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2107–2114.
Shi, J. and Tomasi, C. (1994). Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jerusalem, Israel, 593–600.
