
Robotics and Autonomous Systems 16 (1995) 81-91
Elsevier

3D environment modelling using laser range sensing


Vítor Sequeira a,*, João G.M. Gonçalves a, M. Isabel Ribeiro b
a European Commission, Joint Research Centre, TP 270, 21020 Ispra (VA), Italy
b Dept. Eng. Electrotécnica e Computadores, Instituto Superior Técnico, Av. Rovisco Pais 1, 1096 Lisboa, Portugal

Abstract

This paper describes a technique for constructing a geometric model of an unknown environment based on data
acquired by a Laser Range Finder on board a mobile robot. The geometric model is most useful both for
navigation and verification purposes. The paper presents all the steps needed for the description of the environment,
including range image acquisition and processing, 3D surface reconstruction, and the problem of merging
multiple images in order to obtain a complete model.

Keywords: Range images; Laser sensing; Surface reconstruction; Multiple views; 3D scene modelling

0921-8890/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved
SSDI 0921-8890(95)00036-4

1. Introduction

Most computer vision research has concentrated on replicating human vision using digitised grey-scale intensity images as sensor data. With this approach it has proved most difficult to understand and describe images in a general-purpose way. An important problem is the lack of explicit depth information in intensity images, capable of directly relating the image to the real world. In recent years digitised range data have become available from both active and passive sensors. Range data are produced in the form of a rectangular array of numbers, referred to as a depth map or range image, where the numbers quantify the distances from the sensor plane to the surfaces within the field of view. Active range sensing (e.g. with a Laser Range Finder) has several advantages over passive range sensing: (a) it provides explicit range information (without the computation overhead of conventional passive techniques such as stereo vision); (b) it is independent of illumination conditions; and (c) it is largely independent of the reflectivity of the objects being sensed. Intensity image problems with shadows and surface markings do not occur.

The main objective of the work described in the paper is to build a 3D volumetric model (i.e. a description of the topology and the contents) of an unknown environment based on data acquired by a Laser Range Finder (LRF) on board a mobile robot. Such a description would be most useful both for navigation and verification purposes [12]. Constructing the model requires sensing and interpretation, ranging from low-level data collection to high-level scene modelling.

The paper starts by describing range image acquisition and error sources in distance measurements. There follows a discussion of methods for the extraction of geometric features in range images, and the presentation of the 3D surface

* Corresponding author. E-mail: vitor.sequeira@jrc.it; Fax: +39 332 789185; Tel.: +39 332 785195.
reconstruction process. The combination of multiple range images into a single 3D scene representation is then described, and results presented.

Fig. 1. Range image acquisition by scanning an LRF.

2. Range images

To obtain a range image, it is necessary to scan the laser range finder (LRF) in both the horizontal and vertical directions. The mechanical scan is achieved by rotating the LRF by means of a computer-controlled pan-and-tilt unit. Fig. 1 sketches this scanning principle.

One should not forget that all range images suffer from geometrical deformations associated with the geometry of the scanning system. Many applications require a geometrical correction phase prior to processing. Fig. 2a shows a grey-level representation of a range image of an office scene; lighter areas correspond to shorter distances. It can be seen that the lines separating the wall from both the ceiling and the ground are curved. This illustrates the geometrical deformation inherent to the raw range image. A 3D view of the same range image with geometrically corrected data is shown in Fig. 2b. Some details of the scene illustrate the potential of range images, in particular: (i) the drawers' handles; (ii) a small thermostat on the wall; (iii) the door frame and handle.

Fig. 2. Representation of a range image of an office scene: (a) grey-level display, (b) 3D perspective view.

2.1. Error sources

There are several sources introducing errors in distance measurements. Some errors are characteristic of the LRF, such as measurement noise or limited measurement resolution; others are inherent to the positioning system. A main limitation in the spatial and distance resolution of the present images is due to the fact that the laser beam is not an ideal line, but rather a cylinder with a diameter of about 4 cm and a divergence of 0.16 degrees. The distance measured at each pixel is an average of range values over a 2D area, the laser footprint, which is the intersection of the cylinder with the target surface. The size of the footprint also depends on the angle θ between the surface normal and the beam, as shown in Fig. 3.
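The growth of the footprint with distance and incidence angle can be sketched as follows; the 4 cm beam diameter and 0.16° divergence are the figures quoted above, while the planar-target model and the function itself are illustrative assumptions, not part of the paper.

```python
import math

def footprint_axes(distance_m, theta_rad, beam_diameter_m=0.04,
                   divergence_rad=math.radians(0.16)):
    """Approximate the laser footprint on a planar target.

    The beam is modelled as a cylinder of the given diameter that widens
    with the stated divergence; intersecting it with a plane whose normal
    makes an angle theta with the beam axis yields an ellipse whose major
    axis grows as 1 / cos(theta).
    """
    # Beam width at the target, grown by the divergence half-angle.
    width = beam_diameter_m + 2.0 * distance_m * math.tan(divergence_rad / 2.0)
    minor = width                        # across the tilt direction
    major = width / math.cos(theta_rad)  # elongated along the tilt direction
    return major, minor

# At 5 m and 60 degrees incidence the footprint is strongly elongated.
major, minor = footprint_axes(5.0, math.radians(60.0))
```

This simple model already shows why grazing incidence degrades resolution: at 60° the averaging area is twice as long as it is wide.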
A major effect that distorts the image at specific locations is known as the mixed point problem, illustrated in Fig. 3: the laser footprint crosses an edge between two objects. In this case the distance measurement is a combination of the distances to the two objects, and does not have any physical meaning. This effect can be seen in Fig. 2: the top edge of the cupboard is not a straight line, but rather appears as a sawtooth. It renders edge detection more difficult.

Another disturbing effect is due to the reflectance properties of the target surface. When a surface is highly specular no laser echo is detected, since most of the radiation is reflected away. Furthermore, objects at the same distance but with different reflectance coefficients tend to be measured at different distances: brighter objects appear to be 1 to 2 cm closer than darker objects. This is due to the different beam energy reflected by the target, which affects the detection threshold level.

Fig. 3. Sources of noise in range data.

3. Range image processing

The goal of range image processing is the extraction of geometric primitives or features relevant to the current application. Range image processing segments the data into geometric primitives so that each image point is assigned a geometric representation, such as vertices, curves, surfaces, and/or volumes.

3.1. Extraction of geometrical features from range images

Range image segmentation techniques are classified into two major categories: (a) edge or surface discontinuity based, and (b) surface or region based.

Edge-based segmentation techniques [9,15,17] attempt the detection of edges or surface discontinuities in the range image. Region-based techniques [2,5,14,19] detect regions corresponding to continuous surfaces. Since the segmentation process needs to detect and identify both surface discontinuities and continuous surfaces, the roles of edge-based and region-based techniques can be seen as complementary.

Most region-based techniques detect only planar surfaces. Since the goal is to detect regions (and not edges), the resulting region boundaries do not necessarily correspond to the projection of surface discontinuities. Furthermore, region-based methods often do not classify region boundaries, and cannot detect curvature extrema.

Generally, results obtained with edge-based methods are more closely related to real edges than those obtained with region-based methods. Detected edges are, however, often fragmented, no matter which method is used. Edge linking and polygonal approximation are additional processing steps required to organise fragmented edges into a structured description corresponding to relevant scene features.

A requirement for the extraction of geometric features for geometric modelling is that the localisation of edges, especially those corresponding to surface discontinuities, must be as precise as possible, and that the classification of edges must be correct. Based on these criteria, it was decided to start by using edge-based methods, given the poor edge localisation and classification ability of region-based methods.

3.2. Edge detection

Methods for detecting edges without computing surface curvatures are divided into two categories:
1. detecting extrema or zero-crossings of surface derivatives, and
2. geometric model fitting.

Surface derivative methods [6] suffer from edge displacement because an implicit smoothing is used before the computation of derivatives. The basic idea of model fitting methods [15] is to fit pre-defined edge models to the data, and then make a decision to either accept or reject the presence of such models. Edge model fitting is in fact a smoothing process, so precautions must be taken to prevent the fitting from crossing surface discontinuities; when this happens the fitting error often becomes very high. These methods can explicitly provide a mechanism for classifying the detected edges.

For the current application the method used to detect edges is based on geometric model fitting [7,15]. The basic principle of this method is to fit a curve to the points in a neighbourhood left of p, and fit a separate curve to the neighbourhood right of p. The limits of these two curves are then compared statistically as they approach p. If they are significantly different, there is a C0 (jump) discontinuity at p. The same process can be used to detect a C1 (crease) discontinuity. The discontinuity detection for 1D sampled points can be summarised as follows:
1. Fit a curve to each of the left and right hand neighbourhoods of every point;
2. Eliminate those points whose neighbourhoods contain a discontinuity;
3. Apply a statistical test to the fitted curves of each remaining point to decide whether it is a discontinuity point.
Step (2) is aimed at preventing the fitting (smoothing) from crossing a potential discontinuity. To detect discontinuities in a range image, this procedure is applied in two passes, to each row and then to each column. A very simple principle is used to combine the results of these two passes: once a discontinuity is detected at some position during the first pass, that position is not tested during the second pass. With this method, detected edge points can be labelled as jump or crease edges. For crease edge points, it can further be decided whether they are convex or concave by comparing the fitted curves at the left and right-hand neighbourhoods.

Fig. 4. Range image processing: edge extraction.

After the extraction of discontinuity edge points, these points are linked into edges of the same label on the basis of m-connectivity, or mixed connectivity. This linking is performed in such a way that the range values along the resulting edges are continuous in depth. Edges are oriented such that the surface closest to the sensor is to the right of the oriented edge. Edge detection results are shown in Fig. 4. It can be seen that both jump and crease edges were well detected.

Jump edges are caused by occlusions between object surfaces; thus two surfaces meet along a jump edge. In the previous processing, a detected jump edge belongs to the limit of the surface which is closer to the sensor. It is thus necessary to duplicate each jump edge in its other adjacent surface in order to provide boundary limits for both adjacent surfaces along the jump edge. These boundary limits will be used, when combining range images from multiple views, for planning the next view [16]. An additional processing requirement, aiming at surface reconstruction, is the polygonal approximation of the extracted surface characteristics. The basic principle of the algorithm for 3D curve approximation is to recursively refine an initial polygonal approximation of the curves at the locations where the error is greater than a given threshold [13]. Such refining is realised by adding points where the error is a local maximum in the previous approximation.
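The 1D discontinuity test summarised in steps 1-3 above can be sketched as follows. Straight-line fits stand in for the general curve fits, a plain k-sigma threshold stands in for the paper's statistical test, and step (2), discarding points whose neighbourhoods already contain a discontinuity, is omitted; `half` and `k` are illustrative choices, not values from the paper.

```python
import numpy as np

def jump_discontinuities(profile, half=4, k=5.0):
    """Flag C0 (jump) discontinuities along a 1D range profile.

    For each sample p, a line is fitted independently to the `half`
    samples on its left and on its right, both fits are extrapolated to
    the position of p, and the gap between them is compared against the
    combined fit residuals.
    """
    profile = np.asarray(profile, dtype=float)
    n = len(profile)
    flags = np.zeros(n, dtype=bool)
    xl = np.arange(-half, 0, dtype=float)     # offsets of the left samples
    xr = np.arange(1, half + 1, dtype=float)  # offsets of the right samples
    for i in range(half, n - half):
        left = profile[i - half:i]
        right = profile[i + 1:i + 1 + half]
        sl, bl = np.polyfit(xl, left, 1)      # left-hand line fit
        sr, br = np.polyfit(xr, right, 1)     # right-hand line fit
        gap = abs(bl - br)                    # both lines extrapolated to p
        resid = (np.std(left - (sl * xl + bl))
                 + np.std(right - (sr * xr + br)) + 1e-6)
        flags[i] = gap > k * resid
    return flags
```

On a clean step profile the two one-sided fits disagree only at the step, so exactly the samples adjacent to the jump are flagged.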
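The recursive polygonal approximation just described can be sketched as follows; the split-at-worst-point scheme is a Ramer-Douglas-Peucker-style stand-in for the exact algorithm of [13], and `tol` is an illustrative threshold.

```python
import numpy as np

def refine_polyline(points, tol):
    """Recursively refine a polygonal approximation of a 3D curve.

    Start with the single segment joining the endpoints and, while the
    largest point-to-segment error exceeds `tol`, insert the worst point
    and recurse on both halves. Returns the indices of retained vertices.
    """
    pts = np.asarray(points, dtype=float)

    def seg_dist(p, a, b):
        # Distance from p to the segment a-b (clamped projection).
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def rec(i, j):
        if j <= i + 1:
            return []
        errs = [seg_dist(pts[k], pts[i], pts[j]) for k in range(i + 1, j)]
        k = i + 1 + int(np.argmax(errs))      # worst point of this span
        if errs[k - i - 1] <= tol:
            return []                         # span already well approximated
        return rec(i, k) + [k] + rec(k, j)

    return [0] + rec(0, len(pts) - 1) + [len(pts) - 1]
```

An L-shaped sampled curve, for instance, is reduced to its two straight legs: only the endpoints and the corner vertex survive.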

4. 3D surface reconstruction

The scene reconstruction process being investigated is based on range segments, uses 2D Delaunay triangulation done in the image plane, and results in a triangular-faced piecewise linear description of the scene surfaces. The choice of surface triangulation as surface topology was based on the following considerations:
1. it is efficiently computed;
2. it is the only topology allowing for a systematic construction when data are scattered in 3D space;
3. surface features are easily embedded as constraints;
4. triangular patches can represent complex surfaces.
Since the input data is a range image, a 2D triangulation method can be used to construct a surface triangulation.

The construction of a surface triangulation constrained to the pre-extracted surface features is the same as having a 2D triangulation constrained to the projection of these features in the image domain. The computed 2D mesh is then back-projected into 3D space using the corresponding 3D segment endpoints evaluated during the edge-detection phase. The result of the whole procedure is a triangular-faced piecewise linear surface, in which the segments are preserved. An interesting feature of this approach is its fairly low computational cost, due to the fact that most of the processing is done in 2D.

The surface reconstruction procedure consists of three phases:
1. range segments reconstruction;
2. constrained Delaunay triangulation in the image plane;
3. back-projection of the 2D tessellation into 3D space.
Delaunay triangulation solves the problem of 3D data interpolation since it guarantees the preservation of depth discontinuities. A problem with Delaunay triangulation is that it is not obvious how to include line segments, triangles or even tetrahedra in the data set and make sure the final triangulation stays the same. This problem is known in computational geometry as the constrained triangulation problem.

Fig. 5. 3D perspective view of the surface triangulation.

The algorithm for the constrained Delaunay triangulation is described in detail in [4,11]. The basic idea of this algorithm is to modify the input data by adding more points, in such a way that the resulting Delaunay triangulation is guaranteed to contain a set of initial edges. Results from this algorithm applied to a set of segments coming from a real image are shown in Fig. 5. In order to give a more expressive representation, each triangle was projected onto the image plane, the average grey level over the triangle was computed on the original image, and a new image was created in which each visible triangle is drawn and filled with the grey level obtained (see Fig. 6).

5. Combining multiple range images

One range image captures only a portion of the information present in a 3D working domain. To resolve the ambiguities caused by occlusions (see the left side of the cupboard in Fig. 6), images from different viewpoints are required. Merging representations from different viewpoints produces a description of the environment with more information than any of the individual descriptions.
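Before moving to multiple views, the triangulate-in-2D, back-project-to-3D idea of Section 4 can be sketched as follows. A regular-grid triangulation stands in for the constrained Delaunay triangulation of [4,11], and `step_rad`, the angular spacing between pixels of the pan-and-tilt scan, is an assumed parameter, not a value from the paper.

```python
import numpy as np

def backproject_grid(range_img, step_rad):
    """Lift an image-plane triangulation of a range image into 3D.

    Each pixel (v, u) corresponds to pan/tilt angles on the scanning
    grid; its measured range is back-projected to a 3D vertex, and each
    grid cell is split into two triangles.
    """
    rows, cols = range_img.shape
    v, u = np.mgrid[0:rows, 0:cols].astype(float)
    pan = (u - cols / 2.0) * step_rad    # horizontal scan angle per pixel
    tilt = (v - rows / 2.0) * step_rad   # vertical scan angle per pixel
    r = range_img
    xyz = np.stack([r * np.cos(tilt) * np.sin(pan),   # x
                    r * np.sin(tilt),                 # y
                    r * np.cos(tilt) * np.cos(pan)],  # z, along the boresight
                   axis=-1).reshape(-1, 3)
    idx = np.arange(rows * cols).reshape(rows, cols)
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a, b = idx[i, j], idx[i, j + 1]
            c, d = idx[i + 1, j], idx[i + 1, j + 1]
            tris.append((a, b, c))   # upper-left triangle of the cell
            tris.append((b, d, c))   # lower-right triangle of the cell
    return xyz, np.array(tris)
```

Because the vertices come straight from the range measurements, a constant-range image back-projects onto a sphere around the sensor, which is a convenient sanity check.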

For example, an object observed by the LRF creates an unknown area behind it: the range shadow, from which no useful information can be extracted. The shape and position of the range shadow change as the laser range finder is moved to another location. Merging images from several locations will therefore reduce the size of the shadow, thus providing a more complete description of the environment.

For the reconstruction of a complete surface description of the environment, the following steps are necessary:
1. data acquisition;
2. registration of partially overlapping range images;
3. fusion of partial surface descriptions from each image.

Fig. 6. A 3D synthetic image created from the polyhedral model.

5.1. Registration between views

The goal of registration is to find the transformation between the data from different views (also known as solving the correspondence problem). In general, registration methods can be divided into two categories: registration with or without correspondence of features. The accuracy of the transformations derived by systems based on some kind of feature correspondence depends mostly on the position accuracy of the selected features, and is often not good enough for model construction.

In the present case, the problem is to register two partially overlapping surfaces that may lack significant surface features. It is also required that the registration be as precise as possible. From these two considerations, it can be seen that methods based on minimising a distance measure [3,8] are more appropriate to this case.

The iterative registration algorithm described in this section assumes that an initial estimate of the 3D rigid transformation between two partially overlapping range images P and Q is available. It can be obtained by a high-level registration process or provided by the data acquisition process. Using this initial estimate, the overlapping regions S_P ⊆ P and S_Q ⊆ Q can be approximately determined. S_P and S_Q do not need to be the entire overlapping regions of the two images; it is only required that they are sub-regions of the real overlapping regions.

The problem of registering P and Q is then concentrated on minimising the error function:

    e = Σ_{p ∈ S_P} d(Tp, S_Q) + Σ_{q ∈ S_Q} d(T⁻¹q, S_P),    (1)

where T is the 3D rigid transformation.

In the following only the first term of Eq. (1) is considered, and the squared Euclidean distance norm is used for d. Therefore, the error function to be minimised becomes:

    e = Σ_{p ∈ S_P} ‖Tp − q‖²,  with  q = argmin_{q' ∈ S_Q} ‖Tp − q'‖.    (2)

This is very difficult to implement since finding q is an optimisation process in itself. An approximate solution can be found using an iterative method, starting from a good initial estimate of the transformation T⁰ that brings each p ∈ S_P into near registration with S_Q. Then the corresponding point q of p on S_Q is found, which is either exactly or approximately:

    q = argmin_{q' ∈ S_Q} ‖T⁰p − q'‖.    (3)
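A single matching step of this iterative scheme, Eqs. (2)-(3), can be sketched as follows. The brute-force nearest-neighbour search is a stand-in for the tangent-plane approximation discussed below, and the function name is illustrative.

```python
import numpy as np

def nearest_correspondences(T, P, Q):
    """One matching step of the iterative registration (Eqs. (2)-(3)).

    Every control point p of S_P is moved by the current rigid transform
    T (a 4x4 homogeneous matrix) and paired with its nearest neighbour
    in S_Q; the summed squared distances give the first term of the
    error e of Eq. (2).
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Ph = np.hstack([P, np.ones((len(P), 1))])      # homogeneous coordinates
    TP = (Ph @ T.T)[:, :3]                         # T p for every p in S_P
    d2 = ((TP[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)                        # q = argmin ||Tp - q'||
    e = float(d2[np.arange(len(P)), idx].sum())    # error of Eq. (2)
    return Q[idx], e
```

With the correct transform the error vanishes; a full iteration loop would alternate this step with the transform estimation described in the next subsection until e stops decreasing.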

From the pairs {(p_i, q_i)} determined, a new estimate T¹ can be found by minimising e in Eq. (2). The new estimate is used to start the next iteration, and the procedure continues until a stopping criterion is met. The key to this iterative method resides in finding the nearest point q ∈ S_Q to a given point p ∈ S_P, and in computing an estimate of the transformation T given a set of pairs {(p_i, q_i)}.

The problem of finding the nearest point q on a geometric shape Q to a given point p has been discussed by Besl and McKay [3], where Q can be represented by a set of segments, implicit curves, parametric curves, triangular faceted surfaces, implicit surfaces, or parametric surfaces. In the present case Q is a digital surface arranged in a range image. If Q is big, the exhaustive search for the nearest point would be very time consuming. Another approach to this search problem is to relax the demand for finding the exact nearest point q on Q to p. This idea has been used by Potmesil [18] and by Chen and Medioni [8].

The present approach is based on that of Chen and Medioni [8]. The idea is to approximate Q by its tangent plane S^k_{q'} at q' (see Fig. 7b). That is,

    q = argmin_{q'' ∈ S^k_{q'}} d²(p, q''),    (4)

with:

    S^k_{q'} = {q'' | n^k_{q'} · (q' − q'') = 0},  q' = l^k_p ∩ Q,    (5)

where d is the Euclidean distance from a point to a plane, l^k_p = {a | (p − a) × n_p = 0} is the line normal to P at p (see Fig. 7a), q' is the intersection point of Q with l^k_p, and n^k_{q'} is the normal to surface Q at q'.

Fig. 7. Corresponding point.

In order to reduce the computational cost, only a subset of the points p ∈ S_P, called the control points, is used in the registration process. Since the control points do not need to represent meaningful surface features, and they are only needed from one surface S_P, they can simply be picked from a regular grid instead of taking every point on S_P. Control points are required to be in smooth areas for two reasons. First, the corresponding neighbourhoods on surface Q of the control points will likely be smooth; consequently, the outcome of the algorithm used for computing the intersection of a line and the digital surface will be more reliable and stable. Second, the surface normal can be computed more reliably.

To check the smoothness of the surface in the neighbourhood of a point p, a smooth surface (for implementation simplicity, a plane) is fitted to a 9 × 9 neighbourhood window using a least-squares method, and the residual standard deviation is checked. The threshold on the residual standard deviation was adjusted experimentally.

It is worth pointing out that finding a corresponding point q of p on a digital surface Q is different from finding a feature correspondence. Here q is found based on a distance measure, while feature correspondence is based on a similarity between features, highly dependent on the feature type.

Estimating the 3D rigid transformation

The 3D rigid transformation can be represented by the 4 × 4 homogeneous transformation matrix T,

    T = [ R      t ]
        [ 0 0 0  1 ],    (6)

where R is a 3 × 3 rotation matrix which specifies the orientation, and t is a 3 × 1 translation vector.

Given a set of corresponding points {(p_i, q_i)}, the least-squares rotation matrix and translation vector can be estimated using either the singular value decomposition (SVD) technique [1] or the quaternion method [10]. In both methods, the estimation of the translation is based on the estimated rotation, i.e., the translation vector is a

function of the optimal rotation matrix R and the data measurements. The problem with these methods is that errors accumulate: errors in computing the rotation propagate into the calculation of the translation. A technique based on dual number quaternions [20] is used in this work for computing the least-squares translation vector and rotation matrix. The problem is formulated as an optimisation problem using dual quaternions to represent translation and rotation. The advantage of this representation is that the method simultaneously solves for the orientation and the position estimates by minimising a single cost function associated with the sum of orientation and position errors.
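For comparison, the closed-form SVD solution [1] mentioned above can be sketched as follows; note how the translation is derived from the estimated rotation, which is precisely the coupling the dual-quaternion method avoids. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def estimate_rigid_svd(P, Q):
    """Least-squares rigid transform from paired points (SVD technique).

    Given pairs {(p_i, q_i)}, find R and t minimising
    sum_i ||R p_i + t - q_i||^2: the rotation comes from the SVD of the
    cross-covariance of the centred point sets, the translation from the
    centroids and the estimated rotation.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cq - R @ cp                          # translation follows the rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                 # 4x4 matrix of Eq. (6)
```

Applied to points related by a known rotation and translation, the routine recovers both exactly, but any error in R feeds directly into t through the last step.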

5.2. Integration of views

After having registered two neighbouring views, a surface description is first constructed from one view, as described in Section 4. The bounding contours are then transported into the second view using the 3D rigid transformation obtained from the registration process. These transported contours are embedded as constraints in the surface description of the second view. Fig. 8 shows the results of merging two surface descriptions into a single 3D representation. A complete 3D shaded model, resulting from the integration of three surface descriptions, is shown in Fig. 9.

Fig. 8. Merging of two surface descriptions into a single 3D representation: (a,c) domain triangulation with embedded contours, (b,d) parts of the two surface triangulations to be fused, (e) integrated view.

6. Dynamic range sensing

The implementation presented above for range data processing assumes two independent, consecutive phases: acquisition and interpretation. A future implementation will be a dynamic (active) approach to range image interpretation, e.g. following a spatial feature such as an edge. The LRF and its associated scanning mechanism receive commands from the interpretation procedure indicating the direction (solid angle) to explore next, i.e., the direction in which the continuation of the spatial feature is likely to be. It is no longer possible to separate the acquisition phase from the low-level processing and interpretation phases, since one drives the other: data acquisition is driven by the interpretation procedure, which is itself driven by the acquired range data. In spite of the common factors between the "classic" and the dynamic approach, some differences occur at the level of algorithm implementation and supporting data structures.

7. Discussion

The quality of the final 3D model depends highly on the precision of the extracted features and on the quality of the registration between the different images. Sensor resolution and problems of a geometrical nature influence the quality of the extracted features.

Both the extracted features and the registration process are also dependent on the type of environment. Present results were obtained for an office environment, which is generally made of blocks. For this kind of scene good results were obtained, especially for the registration process. The shape of the object surfaces was well kept, which shows the good performance of the registration. Tests were made for initial errors of about 10 degrees in rotation between the images to be registered, leading to a final error of about 0.3 degree.

Present results are not final and there is room for improvements:
1. current work is in progress to improve range segmentation by merging edge detection with region detection techniques;
2. automatic refinement of step edges by re-acquiring range data with higher resolution, perpendicularly to the features extracted;
3. investigate the use of 3D constrained Delaunay triangulation for surface reconstruction;
4. automate the step of image acquisition based on the detection of range occlusions;
5. use a high-level registration technique (feature correspondence) to compute an initial estimate of the 3D transformation;
6. tracking of spatial features.

Fig. 9. 3D surface model: (a) left view, (b) right view.

8. Conclusions

The paper presents a 3D environment modelling system using depth images acquired by a laser range finder. Range data map directly into a 3D volumetric model, as opposed to data from TV cameras (intensity images) or other sensors. This characteristic makes range data suited to building 3D models of objects, architectural features and, in general, real-world scenes.

The paper presents all the steps needed for the description of the environment, including range image acquisition and processing, 3D surface reconstruction and the problem of merging

multiple images in order to obtain a complete 3D model. Direct applications of the resulting 3D model are 3D scene analysis and interpretation, free space modelling for robot navigation, and design verification measurements.

Acknowledgements

The authors wish to thank the Electronics and Sensor Based Applications unit head, Mr. F. Sorel, for all his support and for providing the facilities that made this work possible, and Mr. G. Campos for the fruitful discussions and for the use of his planar extraction software. Mr. Sequeira acknowledges the "Programa CIÊNCIA" from "Junta Nacional de Investigação Científica e Tecnológica", Portugal, for his Ph.D. grant.

References

[1] K. Arun, T. Huang and S. Blostein, Least-squares fitting of two 3-D point sets, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-9 (5) (1987) 698-700.
[2] P.J. Besl and R.C. Jain, Segmentation through variable-order surface fitting, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-10 (2) (1988) 167-192.
[3] P.J. Besl and N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-14 (2) (1992) 239-256.
[4] M. Buffa, Navigation d'un robot mobile à l'aide de la stéréovision et de la triangulation de Delaunay, Ph.D. Thesis, University of Nice Sophia-Antipolis, 1993.
[5] G. Campos and J.G.M. Gonçalves, Segmentation of range images based on the split and merge paradigm, JRC Technical Note No. 1.93.01 (1993).
[6] J. Canny, A computational approach to edge detection, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-8 (6) (1986) 679-698.
[7] X. Chen, Modélisation géométrique par vision artificielle, Ph.D. Thesis, Télécom Paris 92E023, 1992.
[8] Y. Chen and G. Medioni, Object modelling by registration of multiple range images, International Journal of Image and Vision Computing 10 (3) (1992) 145-155.
[9] T.J. Fan, Describing and Recognizing 3-D Objects Using Surface Properties (Springer-Verlag, Berlin, 1990).
[10] O.D. Faugeras and M. Hebert, The representation, recognition, and locating of 3-D objects, Intern. Journ. Robotics Research 5 (3) (1986) 27-52.
[11] O.D. Faugeras, E. Le Bras-Mehlman and J.D. Boissonnat, Representing stereo data with the Delaunay triangulation, Artificial Intelligence 44 (1990) 41-87.
[12] J.G.M. Gonçalves and V. Sequeira, Application of laser range images to design information verification, Proc. IAEA Symposium on International Safeguards, Vienna, Austria (March 1994) 219-230.
[13] R.C. Gonzalez and P. Wintz, Digital Image Processing (Addison-Wesley, Reading, MA, 1987).
[14] R. Hoffman and A.K. Jain, An evidence-based 3D vision system for range images, Proc. 1st Int. Conf. on Computer Vision, London, England (1987) 521-525.
[15] Y.G. Leclerc and S.W. Zucker, The local structure of image discontinuities in one dimension, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-9 (3) (1987) 341-355.
[16] J. Maver and R. Bajcsy, Occlusions as a guide for planning the next view, IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-15 (5) (1993) 417-433.
[17] B. Parvin and G. Medioni, Adaptive multiscale feature extraction from range data, Computer Vision, Graphics, and Image Processing 45 (1989) 346-356.
[18] M. Potmesil, Generating models of solid objects by matching 3D surface segments, Proc. 8th Int. Joint Conf. on Artificial Intelligence, Karlsruhe, Germany (August 1983) 1089-1093.
[19] R.W. Taylor, M. Savini and A.P. Reeves, Fast segmentation of range imagery into planar regions, Computer Vision, Graphics, and Image Processing 45 (1989) 42-60.
[20] M.W. Walker, L. Shao and R.A. Volz, Estimating 3-D location parameters using dual number quaternions, CVGIP: Image Understanding 54 (3) (1991) 358-367.

Vítor Sequeira received a degree in Electronic and Telecommunications Engineering from the University of Aveiro, Portugal (1990). From June 1991 to June 1993 he developed a control and navigation architecture for a mobile robot at the European Commission's Joint Research Centre (JRC), Ispra, Italy. He is currently working towards the Ph.D. degree with the Instituto Superior Técnico of the Technical University of Lisbon, Portugal, in collaboration with the JRC, in the area of 3D scene modelling. His research interests include range images, laser sensing, 3D world modelling, and CAD-based vision for robotic applications.

João G.M. Gonçalves has been with the European Commission's Joint Research Centre, Institute for Systems Engineering and Informatics, Electronics and Sensor Based Applications Unit, since 1988. He is responsible for projects involving the use of robotics technologies for nuclear safeguards applications, namely the use of mobile robotics for the remote verification of fissile materials storage areas. He was the initiator of the SMART (Semi-autonomous Monitoring and Robotics Technologies) Research Network. Prior to coming to the JRC, he held the position of Associate Professor of Electronics at the University of Aveiro, Portugal, where he worked in medical image processing and lectured courses on digital systems and digital image processing. He has a degree in Electrical Engineering from the Technical University of Lisbon (Portugal), an M.Sc. in Digital Systems from Brunel University (UK), and a Ph.D. from the Medical School of the University of Manchester (UK). His current research interests lie in the areas of 3D reconstruction, robot navigation, control, and human-computer interfaces.

M. Isabel Ribeiro received the M.Sc. and Ph.D. degrees in Electrical Engineering from Instituto Superior Técnico (IST), Lisbon, Portugal, in 1983 and 1988. She is currently Associate Professor in the Department of Electrical and Computer Engineering of Instituto Superior Técnico and a member of the Institute for Systems and Robotics (ISR), Portugal. Starting in October 1994, she spent a six-month period at the Joint Research Centre, Italy. Her research interests include ultrasound and laser data processing applied to the navigation of autonomous robots and to 3D world modelling. She is a member of the IEEE and of Ordem dos Engenheiros, Portugal.
