
A Range Security System for Vehicle Navigation

M. Garduño, B. Vachon
HEUDIASYC URA CNRS 817
University of Compiègne
BP 649, 60206 COMPIEGNE Cedex, FRANCE
Phone (33) 44.23.44.23, Fax (33) 44.23.44.77
mgarduno@hds.univ-compiegne.fr bvachon@hds.univ-compiegne.fr

Abstract

The contribution of this research is the development of a security system the purpose of which is to assist the teleoperator of a mobile robot by providing him with pertinent information about the robot's working space. To this aim, the range image based perception system must be able to detect and localize, as fast as possible, obstacles (nontraversable regions) in the scene during the robot's motion. In this paper we present an accelerated strategy of obstacle detection using the knowledge of objects contained in previous images and their predicted positions in the new image. An iterative procedure based on a Kalman filter is used to improve the knowledge of the robot's state. Then, an image segmentation takes place according to security and emergency criteria. The range image is divided into priority areas to reduce the obstacle search time. Results obtained through experimentation of this method on synthetic images are presented in this paper.

1. Introduction

1.1 Navigation of Vehicles

When a mobile robot works in unknown environments, especially in outdoor ones, it may encounter unexpected situations and have to solve problems like obstacle detection and avoidance. Such situations are common and easily solved by a human being. To achieve these goals, man extracts information from his environment through natural senses (eyes, ears, skin...). Situation analysis is carried out by the brain and a decision may be taken. However, transposing this to robots is a very complex problem. In the last few years many efforts were made to endow robots with enough intelligence to realize this kind of task. Like humans, robots use sensory capabilities. They are equipped with physical sensors capable of catching information from the environment in which they are operating. The remaining problem is to know how and by whom sensory data will be processed and interpreted. This is the point where sophisticated perception systems become useful in vehicle navigation. A wide variety of techniques based on different philosophies have been used. See, for example, [Dunlay, 88] at Denver for the description of the autonomous land vehicle ALV, or [Kanade, 87] for details on Carnegie Mellon's vehicle Navlab. Another algorithm for autonomous detection and avoidance is described in [Eibert, 89].

1.2 Assistance systems

The purpose of most perception systems is generally either to give some autonomy to a given vehicle, or to assist a human operator in its guidance. The second problem is addressed in this paper.
Vision based aid systems have found important and useful applications in intelligent vehicle design. Advanced systems such as IVHS [IVHS, 92] in the United States or the Prometheus project [Le Fort, 93] in Europe are intended to warn the driver when a critical situation occurs while driving on a highway. Although video images are often used, some works deal with range image finders [Devy, 94].

1.3 ROBUST project

Our work is part of the ROBUST* project. ROBUST is a teleoperated robot intended to navigate in outdoor environments. Teleoperation of a vehicle is complex, because the human operator has no physical feeling of the vehicle but must use information obtained via sensors.
When only a vision system is available, perceiving the environment through a screen may lead to inappropriate actions or decisions of the teleoperator. To face this problem, we have developed a range image based system to improve the security of a teleoperated outdoor robot. Severe constraints are to be considered: unstructured environment, natural lighting conditions and motion conditions, since the vehicle may reach a speed of up to 40 km/h.
Sensing is based on range images. Physical range finders nowadays have a slow scanning process [Lecoeur-T, 92], but a geometric description of the scene may be obtained directly from depth information, and this is quite important as response time is essential in our application. Moreover, laser range finders offer the benefit of being relatively independent of lighting conditions [Hebert, 89].

* This project is conducted by the Robotics Department of the Dassault Electronique company (92, France) and has been subsidised by the French Department for Research and Technology (aide MRT 90P0962).

2. General system synopsis

The system is composed of different modules as shown in figures 1(a, b). The range image sensor is not yet available and has been replaced by a simulator. The simulator (figure 1a) produces range images from artificial scenes and takes into account distortions due to motion of the sensor during the scanning process. The first image of a sequence is analyzed in a specific way to detect obstacles in the scene. As this is the start-up of a robot's mission, the vehicle is usually motionless and the response time is not significant. This allows separating obstacle from non-obstacle regions in the image and extracting a symbolic representation of these areas. The dynamic image analysis detects obstacles during the motion. This process deals with tracking features and the robot's location. A predictor/estimator method based on a classical Kalman filter is applied for this purpose (figure 1b). Another module in the diagram is the search for new obstacles. It performs a segmentation of the images into priority regions in order to reduce the search time of newly visible obstacles.

Figure 1a: synopsis of the range image generator.
Figure 1b: synopsis of the perception system.

2.1 Simulator

A range generator has been developed in the framework of our project to support the perception system. This simulator is able to model a three-dimensional sensor carried on a mobile vehicle. It is based on a point laser principle; this means that it measures the distance along a single direction vector from a supposed optical center to a visible surface point in the scene.
A range image represents a view by the sensor of a synthetic scene composed of simple volumes such as spheres, cylinders, and parallelepipeds. The data we obtain is a function of sensor and motion specifications. Thus simulation is based on geometric characteristics of real sensors and on other parameters such as the robot's speed and displacement. It produces distorted images due to the time elapsed and the motion between two scanned pixels. In addition to this degradation, Gaussian noise is introduced in the ray position and the measured range.
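As a rough illustration of this measurement model, the fragment below draws one noisy range sample per ray. It is a minimal sketch under our own naming, and the noise standard deviations are free parameters of the example, not values used in ROBUST; the paper only states that Gaussian noise affects the ray position and the measured range.

```cpp
#include <random>

// Minimal sketch of the simulated point-laser measurement: one range
// sample per ray, perturbed by Gaussian noise in ray position and range.
// Class and parameter names are illustrative, not ROBUST project code.
struct RaySample {
    double azimuth;    // ray direction after angular jitter (rad)
    double elevation;
    double range;      // measured distance after range noise (m)
};

class RangeSimulator {
public:
    RangeSimulator(double sigmaAngle, double sigmaRange)
        : gen_(std::random_device{}()),
          angleNoise_(0.0, sigmaAngle),
          rangeNoise_(0.0, sigmaRange) {}

    // idealRange: exact distance from the optical center to the first
    // visible surface point along (azimuth, elevation), as given by the
    // geometric model of the synthetic scene.
    RaySample measure(double azimuth, double elevation, double idealRange) {
        return { azimuth    + angleNoise_(gen_),   // jitter in ray position
                 elevation  + angleNoise_(gen_),
                 idealRange + rangeNoise_(gen_) }; // Gaussian range error
    }

private:
    std::mt19937 gen_;
    std::normal_distribution<double> angleNoise_, rangeNoise_;
};
```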
2.2 Obstacle binarization

Separating obstacles from the ground region is the first step in the image processing. It requires grouping pixels belonging to the same obstacle. Two criteria are generally used according to the nature of the data: 3D images are often depth images but sometimes elevation maps. Both formalisms are described in [Daily, 87]. In the first one, height discontinuities are computed to classify obstacle pixels. In the second one, range discontinuities are extracted directly from range data. In [Veatch, 87], first derivative discontinuity in range images is used. A fast method developed in the ALV project [MMC, 86] compares and analyzes each range in the image with respect to the corresponding range in an ideal image. These four techniques have been integrated into our system to detect obstacles. Implementation, advantages and drawbacks of these methods for obstacle detection with respect to time and security constraints are discussed in [Garduño, 94].
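Among these techniques, the ideal-image comparison of the ALV project is the easiest to sketch. The fragment below is an illustrative reconstruction, not the project's code: the function name, data layout and threshold are our assumptions; it simply flags a pixel as an obstacle when the measured range deviates too much from the range an obstacle-free ground image would contain at that pixel.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch of the ALV-style binarization [MMC, 86]: each
// measured range is compared with the range of an ideal (obstacle-free,
// flat-ground) image at the same pixel. The threshold is an assumption.
std::vector<std::uint8_t> binarize(const std::vector<double>& measured,
                                   const std::vector<double>& ideal,
                                   double threshold)
{
    std::vector<std::uint8_t> obstacle(measured.size(), 0);
    for (std::size_t i = 0; i < measured.size(); ++i) {
        // A measured range that deviates from the ideal ground range means
        // the ray hit something standing above the ground (or a hole in it).
        if (std::fabs(measured[i] - ideal[i]) > threshold)
            obstacle[i] = 1;
    }
    return obstacle;
}
```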
2.3 Symbolic representation

From the binary image, obstacles are characterized by representative features stored by the system. A segmentation algorithm is applied to compute areas of interest around each object. From these areas, distinctive attributes are extracted and registered as the symbolic representation of each obstacle. The output of this process is a set of vectors

A(r) = [a1(r), a2(r), ..., an(r)]    (1)

where ai(r) is the value of the ith attribute of obstacle r. The attributes representing an obstacle are its gravity center coordinates with respect to the world and with respect to the image plane, the corresponding range values, its compactness and its dispersion [Fabre, 89].
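Equation (1) translates naturally into a small record per obstacle. The structure below is a hypothetical sketch of this symbolic representation: the field names follow the attribute list above, while the exact layout and types used in ROBUST are not specified in the paper.

```cpp
// Hypothetical sketch of the attribute vector A(r) of equation (1).
// Field names follow the attributes listed above; layout is an assumption.
struct ObstacleAttributes {
    double xw, yw, zw;   // gravity center in world coordinates
    double u, v;         // gravity center in image plane coordinates
    double range;        // corresponding range value
    double compactness;  // shape compactness [Fabre, 89]
    double dispersion;   // shape dispersion  [Fabre, 89]
};
```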

3. Tracking process

A well known problem of obstacle tracking with a vision sensor is due to the limited field of view. Indeed, even if obstacles are motionless, the robot's motion makes them appear and disappear in the robot's perception field. Localizing objects with respect to the robot (and vice-versa) is a main objective of our system. The "tracking" concept is then introduced in our problem. Obstacles are tracked through a sequence of range images scanned during the robot's motion.

3.1 Motion model

Since the system is intended to be embedded on a moving vehicle, a trajectory generation method was implemented for simulation experiments. This method is based on Bezier curves, which allow modeling a displacement guiding the robot through a set of defined points while taking into account kinematic restrictions and orientation constraints [Segovia, 93]. A Bezier segment is a vector function P of an independent parameter u giving the coordinates of a geometric point in a plane, defined as:

P : u ∈ R → (x, y) ∈ R²
P(u) = (P3 - 3P2 + 3P1 - P0) u^3 + 3(P2 - 2P1 + P0) u^2 + 3(P1 - P0) u + P0    (2)

where u ∈ [0, 1]. The four control points (P0 to P3) required by the 3rd degree polynomial are P0 and P3, respectively the initial and final robot positions, and P1 and P2, intermediate configurations computed by the system. As the result of the evaluation of the polynomial, a set of points P(ui) = [x(ui), y(ui)] defining the curve is obtained. In the simulator, the number of points P(u), i.e. the number of evaluations in u, is a function of the robot's speed and of the simulated sensor measurement frequency.

3.2 Obstacle prediction

An obstacle detected in a current image may appear in the next one. These already spotted obstacles are easily located in new images as their position can be estimated. Nevertheless, to compute these estimates in sequences of images varying in time, motion information must be available.
In our system, the obstacle position estimate is computed as its gravity center (u', v') in the image plane coordinate system. During the processing of the range image taken at time t-1, the gravity center position (xt-1, yt-1, zt-1) of the obstacle with respect to the world coordinate system was obtained. Let f be the focal distance of the range image sensor, h its vertical location with respect to the ground and Yτ = (Xτ, Yτ, Θτ) the robot's position at some instant τ. Then, if the obstacle has been seen at instant τ by the sensor and if the robot's position is known at this instant τ, the estimate (u', v') of the obstacle gravity center position in image t is given by the projection equations (3) and (4), which express (u', v') as a function of (xt-1, yt-1, zt-1), f, h and Yτ.
The problem is now to know how the robot's position at instant τ can be estimated. One of the advantages of using Bezier polynomials for motion generation is that it is possible to obtain any point P(u) at any instant on the curve. Any portion of the trajectory can be itself approximated by another Bezier curve. In this case the initial and final positions (P0', P3') are defined when the first and the last pixels of a range image are scanned. Considering that the time spent between two measures is negligible, one can assume a constant robot speed and associate each pixel of the image with one point of the segment. Therefore the value of u at which the polynomial (2) will be evaluated is directly given by the pixel position in the image matrix.

3.3 Matching process

When an obstacle is located in one image, it is characterized (see section 2.3) and stored in the New Obstacle List (NOL). A second list, created by the system, is the Known Obstacle List (KOL), which contains the characteristics of obstacles detected in the previous images. The matching process will then make the correspondence between NOL and KOL to select obstacles appearing in two successive frames.
Let us consider two images, scanned at instants t-1 and t, that contain a certain number of obstacles stored in the KOL and the NOL respectively. The tracking process can be summarized in two steps (code sketches illustrating the motion model and the matching step follow at the end of this section):
• The first step consists in predicting the positions in image It of the obstacles known from image It-1. Equations (3) and (4) provide reasonable estimates of the expected values. This estimation is used to perform tracking by providing a possible point (or a reduced area) around which the obstacle may appear in image t. If the estimate (u', v') belongs to an obstacle, then this obstacle is characterized and stored in the NOL.
• In the second step a match is to be found between obstacles of the NOL and obstacles of the KOL. To this aim, the distance Δ between the estimate (from KOL) and the observed value (in NOL) of the gravity center position is computed. All obstacle attributes are considered in the matching process, but trials are first attempted according to this distance Δ. Thus, an object of the NOL matches the closest (in the sense of Δ) object of the KOL if their attributes are not significantly different and if Δ is below a specified threshold.
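As a concrete illustration of equation (2) and of the pixel-to-parameter association of section 3.2, a cubic Bezier segment can be evaluated as follows. Only the polynomial itself comes from the paper; the function names and the uniform pixel-to-u mapping are our assumptions (pixelCount is assumed to be at least 2).

```cpp
#include <cstddef>

struct Point2 { double x, y; };

// Cubic Bezier segment of equation (2), expanded in powers of u, u in [0,1].
Point2 bezier(const Point2 p[4], double u)
{
    auto eval = [u](double a0, double a1, double a2, double a3) {
        // (P3 - 3P2 + 3P1 - P0)u^3 + 3(P2 - 2P1 + P0)u^2 + 3(P1 - P0)u + P0
        double c3 = a3 - 3.0 * a2 + 3.0 * a1 - a0;
        double c2 = 3.0 * (a2 - 2.0 * a1 + a0);
        double c1 = 3.0 * (a1 - a0);
        return ((c3 * u + c2) * u + c1) * u + a0;
    };
    return { eval(p[0].x, p[1].x, p[2].x, p[3].x),
             eval(p[0].y, p[1].y, p[2].y, p[3].y) };
}

// Section 3.2: assuming constant speed during a scan, the pixel index
// directly gives the parameter u at which the segment is evaluated.
Point2 robotPositionAtPixel(const Point2 p[4], std::size_t pixelIndex,
                            std::size_t pixelCount)
{
    double u = static_cast<double>(pixelIndex) /
               static_cast<double>(pixelCount - 1);
    return bezier(p, u);
}
```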

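The second, matching step reduces in its first pass to a nearest-neighbour search on gravity centers in the image plane. The sketch below is a simplified illustration with hypothetical types: the comparison of the remaining attributes of A(r) is abstracted into a single predicate, and the threshold on Δ is passed in as a parameter.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct TrackedObstacle {
    double u, v;  // observed (NOL) or predicted (KOL) gravity center
    // ... the other attributes of A(r) are omitted in this sketch
};

// Returns, for each NOL entry, the index of the matching KOL entry or -1.
// maxDelta is the threshold on the image-plane distance; the
// attributesCompatible predicate stands in for the comparison of the
// remaining attributes of A(r) and is an assumption of this example.
std::vector<int> matchNolToKol(
    const std::vector<TrackedObstacle>& nol,
    const std::vector<TrackedObstacle>& kol,
    double maxDelta,
    bool (*attributesCompatible)(const TrackedObstacle&,
                                 const TrackedObstacle&))
{
    std::vector<int> match(nol.size(), -1);
    for (std::size_t i = 0; i < nol.size(); ++i) {
        double best = maxDelta;
        for (std::size_t j = 0; j < kol.size(); ++j) {
            double delta = std::hypot(nol[i].u - kol[j].u,
                                      nol[i].v - kol[j].v);
            // Trials are ordered by delta; attributes must also agree.
            if (delta < best && attributesCompatible(nol[i], kol[j])) {
                best = delta;
                match[i] = static_cast<int>(j);
            }
        }
    }
    return match;
}
```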
4. Estimation of the robot's position

As the estimate Ŷt-1/t-1 of the robot's configuration Yt-1 is known at instant t-1 and the robot's motion is measured by odometry, an estimate Ŷt/t-1 of the robot's position at instant t, Yt, can be computed. Visual measures provided by the NOL allow improving the accuracy of the robot's position estimate. This estimation is done by an Extended Kalman Filter that stabilizes the measurements and the system state in the presence of noise and detection errors.
Let At^r be the attribute vector of obstacle r computed from the image It. After the matching process, the world coordinates (xt-1^r, yt-1^r, zt-1^r) of obstacle r computed from image It-1 and the image coordinates (ut^r, vt^r) extracted from image t are available. The Kalman filter equations then provide an estimate Ŷt/t of Yt with respect to At^r:

Ŷt/t = Ŷt/t-1 + Kt [At^r - G(Ŷt/t-1)]    (5)

The accuracy of the estimate is given by the covariance matrix

Pt/t = (I - Kt Ht) Pt/t-1    (6)

where Kt is the Kalman gain and Ht the Jacobian matrix of G [Chenavier, 92]. The measurement equations are the projection equations (3) and (4), which express the measured image coordinates as G(Ŷt/t-1), where Ŷt/t-1 = (x̂t, ŷt, θ̂t)^T.
The estimate Ŷt/t-1 is updated for every obstacle r matched in the KOL and the NOL. When every obstacle has been taken into account, Ŷt/t gives the best estimate of the robot's position. A sketch of this correction step is given below.
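For readers who prefer the correction of equations (5) and (6) in algorithmic form, a per-obstacle update of the three-component state (x, y, θ) with a two-component image measurement can be sketched as follows. This is the generic extended Kalman filter step under our own notation, not the project's code; the measurement function G and its Jacobian H depend on the projection equations (3) and (4) and are passed in as callbacks.

```cpp
#include <array>

using Vec3  = std::array<double, 3>;                 // state (x, y, theta)
using Mat3  = std::array<std::array<double, 3>, 3>;  // state covariance P
using Vec2  = std::array<double, 2>;                 // measurement (u, v)
using Mat2  = std::array<std::array<double, 2>, 2>;  // measurement noise R
using Mat23 = std::array<std::array<double, 3>, 2>;  // Jacobian H of G

// One correction step of equations (5) and (6) for a single matched
// obstacle. Generic EKF form under assumed notation: G predicts the image
// coordinates of the obstacle from the state estimate, H is its Jacobian.
void ekfUpdate(Vec3& y, Mat3& p, const Vec2& z,
               Vec2 (*G)(const Vec3&), Mat23 (*H)(const Vec3&), const Mat2& r)
{
    const Mat23 h = H(y);

    // Innovation covariance S = H P H^T + R (2x2).
    double s[2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            s[i][j] = r[i][j];
            for (int k = 0; k < 3; ++k)
                for (int l = 0; l < 3; ++l)
                    s[i][j] += h[i][k] * p[k][l] * h[j][l];
        }

    // Inverse of the 2x2 matrix S.
    const double det = s[0][0] * s[1][1] - s[0][1] * s[1][0];
    const double si[2][2] = {{  s[1][1] / det, -s[0][1] / det },
                             { -s[1][0] / det,  s[0][0] / det }};

    // Kalman gain K = P H^T S^{-1} (3x2).
    double k[3][2] = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j)
            for (int l = 0; l < 3; ++l)
                for (int m = 0; m < 2; ++m)
                    k[i][j] += p[i][l] * h[m][l] * si[m][j];

    // Equation (5): Y <- Y + K (z - G(Y)).
    const Vec2 pred = G(y);
    const double innov[2] = { z[0] - pred[0], z[1] - pred[1] };
    for (int i = 0; i < 3; ++i)
        y[i] += k[i][0] * innov[0] + k[i][1] * innov[1];

    // Equation (6): P <- (I - K H) P.
    double ikh[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            ikh[i][j] = (i == j ? 1.0 : 0.0)
                      - (k[i][0] * h[0][j] + k[i][1] * h[1][j]);
    const Mat3 p0 = p;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            p[i][j] = 0.0;
            for (int l = 0; l < 3; ++l)
                p[i][j] += ikh[i][l] * p0[l][j];
        }
}
```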
5. Detection of new obstacles

A final process is applied to detect new obstacles in the image. It consists in dividing the image into priority regions to identify the most dangerous obstacles first. The part of the working space that the robot will go through in the near future is the most critical for its security. This area is called the security area. The part of the range image corresponding to the security area will be called the security region and must be scanned first, since obstacles contained in it have to be highlighted with priority to the teleoperator. As the security area depends on the robot's motion, the security region has to be determined dynamically.
The range image segmentation is performed in two steps: the determination of the security area in the working space of the robot, and the determination of the security region in the image.

5.1 Determination of the security area

Considering the space in which the robot has to move without collision, a security area is determined. The optimal space is established taking into account the robot's dimensions and the nature of the displacement. As long as the robot's move is a translation, the security zone width is defined by the vehicle width. But if rotation is considered, the obstacle-free area must be wider. No a priori information on the nature of the vehicle's movement being available, the security zone is defined as a circle whose diameter is the length of the longest diagonal of the robot (see figure 2) increased by a distance depending on the vehicle's speed. Likewise, the vertical height of the security area is defined by the height of the vehicle increased by a distance depending on the vehicle's speed; a sizing sketch is given below.

Figure 2: security zone of the robot (static security zone; dynamic security zone due to robot's speed).

Locating the security zone in the working space is therefore based on motion criteria, since critical areas are those crossed during the next movement. In most cases, the image top corresponds to the sky area whereas the bottom informs of the approximate ground area on which the robot is moving. Furthermore, the central zone has the highest priority for straight movements, the rightside zone when turning to the right, and the leftside one when turning to the left. When the robot is travelling, the current motion is in some way given by odometric information, but no knowledge about future motion is available. A kinematic model of the robot is used to obtain an estimate of this future motion. Orientation constraints and curvature continuity due to the non-holonomy of the robot are taken into account. This allows establishing a relation between the future displacement and the previous movement (see figure 3).
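The sizing rule announced above is simple enough to state directly. In the following sketch the speed-dependent margin is modelled as a hypothetical time headway multiplied by the speed; the paper only says that the added distance depends on the vehicle's speed.

```cpp
#include <cmath>

struct SecurityZone {
    double diameter; // of the circular footprint (m)
    double height;   // vertical extent (m)
};

// Sketch of the security zone sizing of section 5.1. The margin model
// (headway * speed) is an assumption; the paper states only that the
// added distance depends on the vehicle's speed.
SecurityZone securityZone(double length, double width, double vehicleHeight,
                          double speed, double headway)
{
    double diagonal = std::sqrt(length * length + width * width);
    double margin = headway * speed; // speed-dependent safety margin
    return { diagonal + margin, vehicleHeight + margin };
}
```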

5.2 Determination of the security region

The estimate of the security area, defined in the robot's coordinate system, is then projected into the sensor's space. The result of this projection in the range image is the security region. It represents the part of the image where the presence of any obstacle is dangerous for the vehicle and must be signaled to the teleoperator in a very urgent manner. It will therefore be treated with the utmost priority. Other parts of the image may then be examined for obstacle detection with priorities depending on their horizontal proximity to the security region. When the regions corresponding to the vehicle's security have been checked, detection of obstacles in the remaining parts of the image only improves the accuracy of the location process and may be interrupted as soon as the next range image becomes available; a sketch of this priority ordering follows.
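Once the security region is known, the scan order of the rest of the image can be derived from horizontal proximity alone. The following sketch is our own illustration, assuming a division of the image into vertical bands: bands are visited by increasing distance from the band containing the security region, so that detection can be interrupted whenever the next range image arrives.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Illustrative sketch: divide the image into vertical bands and scan them
// by increasing horizontal distance from the band holding the security
// region. The band granularity is an assumption made for this example.
std::vector<int> scanOrder(int bandCount, int securityBand)
{
    std::vector<int> order(bandCount);
    for (int i = 0; i < bandCount; ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return std::abs(a - securityBand) < std::abs(b - securityBand);
    });
    return order; // order[0] is the security region itself
}
```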
Figure 3: determination of the security area and the security region.

6. Experiment results

The range image generator and the perception system are written in C++ and implemented on a Sun Sparc 2 workstation. Experiments have been carried out with different simple artificial outdoor environments. In the case illustrated in figures 4a to 7b, the environment is composed of two houses. Two range images are created during the motion of a notional robot in that scene. Distortions of the houses' shapes due to the sensor motion can be seen (see figure 4). During the image generation, the robot is moving in the scene as shown in figure 5a. To keep the same notations as in previous sections, image 4a will be referred to as It-1 and image 4b as It.

Fig. 4a: Image It-1. Fig. 4b: Image It.

As it is the first range image of the sequence, It-1 is analyzed with an a priori empty KOL(t-2). No matching or prediction can then occur, and only obstacle detection is carried out (see figure 5b). Two obstacles are spotted and put in NOL(t-1). KOL(t-1) is computed as the union of KOL(t-2) and NOL(t-1) and therefore contains the two obstacles spotted in image It-1.

Fig. 5a: Robot's motion during image scanning. Fig. 5b: Obstacle detection in image It-1.

Odometric information about the robot's motion between t-1 and t is used to predict, in image t, the position of the center of gravity of the obstacles contained in KOL(t-1). Predicted positions are marked with black crosses in figure 6a. Obstacles are then searched for around these positions and characterized, giving NOL(t). The obstacle domain appears as a white dotted rectangle and the computed position of its center of gravity as a black dot in figure 6b.

Fig. 6a: Obstacle prediction in image It. Fig. 6b: Obstacle detection in image It.

The robot's position estimate and its uncertainty are computed by a Kalman Filter. In figure 7a, the result of the estimation based on odometric information is shown. The uncertainty about the robot's position is marked with an ellipse, the uncertainty about the robot's orientation by an angular sector in the top-right part of the figure. Obstacle matching is then carried out and the robot's position accuracy is improved. Figure 7b shows the reduction of the position and orientation uncertainty.

Fig. 7a: Robot's localization with odometry. Fig. 7b: Robot's localization with obstacles.
The Kalman filter is an iterative procedure and gives satisfactory results after several iterations. However, a problem occurs when obstacle matching fails. Estimation uncertainty then grows, and several steps are necessary for the filter to converge again. Problems in obstacle matching mainly occur when occlusion takes place, or when obstacles disappear, partially or entirely, from the field of view of the sensor. However, when the range image finder is fast enough or the robot's motion is smooth enough, simulation results are very satisfactory.

7. Conclusion

A Range Security System for Vehicle Navigation has been described in this paper. It is intended to assist the teleoperator of an outdoor mobile robot by warning him or her when obstacles may threaten the vehicle's security. A range image simulator has been implemented to simulate real laser range finders. The perception system is based on a Kalman Filter. Obstacles are tracked in images to improve the robot's localization accuracy. A security area is identified in the working space of the robot and the corresponding security region in the range image is determined. Detection of obstacles is then carried out, taking into account emergency and efficiency criteria. A dynamic segmentation of the range image is done. Regions important for security reasons are scanned first; afterwards, if time is available, other regions are scanned to improve localization accuracy. The system is implemented in C++ on a Sun Sparc 2 workstation. Experiments have yielded promising results.

References

[Chenavier, 92] Chenavier F., Crowley J., "Position estimation for a mobile robot using vision and odometry", Proc. IEEE Int. Conf. on Robotics and Automation, Nice, France, 1992, vol. 3, pp. 2588-2593.

[Daily, 87] Daily M.J., Harris J.G., Reiser K., "Detecting obstacles in range imagery", Image Understanding Workshop Proc., Los Angeles, 1987, pp. 87-97.

[Devy, 94] Devy M., Chatila R., Fillatreau Ph., Lacroix S., Nashashibi F., "On autonomous navigation in a natural environment", Proc. of IRS'94, Grenoble, France, July 1994, pp. 213-220.

[Dunlay, 88] Dunlay T., "Obstacle avoidance perception processing for the autonomous land vehicle", IEEE Int. Conf. on Robotics and Automation, Philadelphia, April 1988, vol. 2, pp. 912-917.

[Eibert, 89] Eibert M., Lux P., Schaefer C.H., "Autonomous obstacle avoidance for off-road vehicles", Proc. Int. Conf. on Intelligent Autonomous Systems, Amsterdam, Dec. 1989, vol. 2, pp. 737-747.

[Fabre, 89] Fabre P., "Exercices de reconnaissance des formes par ordinateur", Masson, France, 1989.

[Garduño, 94] Garduño M., Vachon B., "Range image based obstacle detection during mobile robot motion", International Journal Machine Graphics & Vision, Poland, 1994, vol. 3, pp. 309-318.

[Hebert, 89] Hebert M., "Building and navigating maps of road scenes using an active sensor", IEEE Int. Conf. on Robotics and Automation, Scottsdale, Arizona (USA), May 1989, vol. 2, pp. 1136-1142.

[IVHS, 92] "Strategic plan for intelligent vehicle highway systems in the United States", Report No. IVHS-AMER-92-3, published by IVHS America, May 1992.

[Kanade, 87] Kanade T., Thorpe C., Shafer S., Hebert M., "Carnegie Mellon Navlab vision system", Intelligent Autonomous Systems, Hertzberger and Groen editors, North-Holland, 1987, pp. 681-693.

[Le Fort, 93] Le Fort N., Piat E., Ramamonjisoa D., "Towards a co-pilot architecture based on embedded real time expert systems", First IFAC Int. Workshop on Intelligent Autonomous Vehicles, Southampton (UK), April 1993.

[Lecoeur-T, 92] Lecoeur-Taibi L., Ballard Ph., Vacherand F., "Télémétrie laser", Research Report No. 4, VAP Program of CNES, INRIA and LETI laboratories, France, January 1992, pp. 15-31.

[MMC, 86] Martin Marietta Corporation, "The autonomous land vehicle second quarterly report", MRC-84-600, Sept. 1986.

[Segovia, 93] Segovia A., Rombaut M., "Continuous curvature path finding for a non-holonomic mobile robot", First IFAC Int. Workshop on Intelligent Autonomous Vehicles, Southampton (UK), April 1993, pp. 481-486.

[Veatch, 87] Veatch P.A., Davis L.S., "Range imagery algorithms for the detection of obstacles by autonomous vehicles", Center for Automation Research, University of Maryland, July 1987.
