Three-Dimensional Navigation With Scanning Ladars: Concept & Initial Verification


ANDREY SOLOVIEV
University of Florida

MAARTEN UIJT DE HAAG
Ohio University

This paper investigates the use of scanning laser radars (LADARs) for 3D navigation of autonomous vehicles in structured environments such as outdoor urban navigation scenarios. The navigation solution (position and orientation) is determined in unknown environments where no a priori map information is available. The navigation is based on the use of planar surfaces (planes) extracted from LADAR scan images. Changes in plane parameters between scans are applied to compute position and orientation changes. Feasibility of the algorithms developed is verified using simulation results and initial results of live data tests.

Manuscript received September 13, 2007; revised April 2, 2008; released for publication July 22, 2008. Refereeing of this contribution was handled by D. Gebre-Egziabher. The work presented in this paper was supported in part through the Air Force Office of Scientific Research (AFOSR) 07NE174 research grant. The authors would especially like to thank Jon Sjogren of the AFOSR for supporting this research. Authors' addresses: A. Soloviev, University of Florida, Research and Engineering Education Facility, 1350 N. Poquito Rd., Shalimar, FL 32579-1163, E-mail: soloviev@ufl.edu; M. U. de Haag, Ohio University, 210 Stocker Center, Athens, OH 45701.

I. INTRODUCTION

This paper focuses on the use of laser radars (LADARs) for navigation of unmanned aerial vehicles (UAVs) in an urban environment. To enable operation of UAVs at any time in any environment, a precision navigation, attitude, and time (PNAT) capability onboard the vehicle is required. This capability should be robust and not solely dependent on the Global Positioning System (GPS), since GPS may not be available due to shadowing, significant signal attenuation, and multipath caused by buildings, or due to intentional denial or deception. The following operational scenario is considered in this paper. A UAV will take off at a known position in a known environment. After the take-off phase, the UAV will enter an unknown or partially known environment and start its mission toward the urban target environment. Upon arrival in the urban environment, the UAV may perform tasks such as surveillance. Navigation during the en-route phase is based on terrain-referenced navigation (TRN) techniques. The urban environment is fundamentally different from the en-route flight environment; whereas during the en-route flight most information is found in the environment below the UAV, in the urban environment navigable information is mostly found around the aerial vehicle. Hence, the UAV platform must be capable of observing features in a wide field-of-view (FOV): 2D LADARs and 3D imaging sensors are excellent candidates for this approach. A conceptual picture of the urban UAV scenario is shown in Fig. 1.

During the en-route phase of flight of the UAV, terrain-referenced techniques may be applied in the absence of GPS. Many TRN techniques were successfully employed in the past. Two well-known TRN schemes are terrain contour matching (TERCOM) and Sandia inertial terrain aided navigation (SITAN) [1]. These methods integrate sensor data from a radar altimeter and a baro-altimeter with an inertial navigation system (INS) and an a priori known terrain database to obtain an estimate of user position and velocity. A more recent variant of a TRN system that exploits the use of an airborne scanning LADAR instead of a radar altimeter has been shown to provide meter-level accuracies [2]. However, TRN techniques are operationally limited by the availability of an onboard terrain database at the location of interest. Reference [3] describes a system that uses a passive sensor such as a vision camera or an active sensor such as a radar to detect terrain features and perform self-localization and mapping for UAVs. That paper bases the position estimates on features extracted from the imagery data. More recently, a method has been proposed to perform the navigation function using two airborne scanning LADARs integrated with an INS [4].

Fig. 1. Sensor configurations for feature-based navigation in urban environment. (a) 2D LADAR configuration. (b) 3D imaging
LADAR configuration.

Due to the lack of terrain during the UAV navigation in an urban environment, the LADAR-based methods must exploit features such as surfaces, corners, points, etc. For the feature-based navigation, changes in position and orientation are estimated from changes in the parameters of features that are extracted from LADAR images. Two-dimensional (2D) laser scanners and feature-based localization methods have been used extensively to enable navigation of robots in an indoor environment. For example, [5] describes a method to estimate the translation and rotation of a robot platform from a set of extracted lines and points using a 2D sensor. Reference [6] discusses the feature extraction and localization aspects of mobile robots and addresses the statistical aspects of these methods, whereas [7] introduces improved environment-dependent error models and establishes relationships between the position and heading uncertainty and the laser observations, thus enabling a statistical assessment of the quality of the estimates. In [8], 2D scanning LADAR measurements are tightly integrated with inertial measurement unit (IMU) measurements to estimate the relative position of a van in an urban environment. The idea of using 3D measurements and planar surfaces for 2D localization is introduced in [9]. Note that the above applications focus on 2D navigation (two position coordinates and a platform heading angle). However, for applications such as autonomous UAVs, a 3D navigation solution is required, especially for those cases where the platform attitude varies in pitch, roll, and yaw directions. To enable 3D navigation, the utilization of the laser range scanner measurements must, somehow, be expanded for estimation of 3D position and attitude. The use of 3D features from 3D flash LADAR imagery was introduced in [10]. Existing flash LADARs have, however, a limited measurement range and a limited FOV: 8 m and 45 deg, typically [10]. This limits the feature availability for navigation in urban environments. Scanning LADARs have a significantly larger measurement range (80 m, typical) and a FOV of up to 360 deg. Hence, this paper extends the 3D navigation methodology presented in [10] for the case of 2D scanning LADARs.

This paper develops a methodology for using measurements of a 2D scanning LADAR for 3D navigation for the urban part of the UAV mission. Navigation herein is performed in completely unknown environments. No map information is assumed to be available a priori. Fully autonomous 3D relative positioning and 3D relative attitude determination are considered. The navigation solution is computed in a local coordinate frame that is defined by the LADAR position and orientation at the initial scan. A relative navigation solution is thus provided. Estimating local frame position and orientation in one of the commonly used navigation frames (e.g., East-North-Up and Earth-Centered Earth-Fixed frames) allows for the transformation of the relative navigation solution into an absolute navigation solution.

The remainder of the paper is organized as follows. Key aspects of the 3D LADAR-based navigation are first summarized. The 3D navigation approach proposed uses planar surfaces as the basis navigation feature. LADAR imaging technologies are then discussed. Next, a method for extracting planar surfaces from LADAR images is developed. The paper then discusses algorithms for computing the relative 3D position and orientation solution based on parameters of planar surfaces that are extracted from scan images. Simulation results and live data test results are used to initially demonstrate the feasibility of the 3D plane-based navigation developed. The paper is concluded by summarizing the main results achieved.

Fig. 2. Examples of planar surfaces observed in urban images: multiple planes can be extracted for indoor and outdoor image examples.

Fig. 3. Generic routine of 3D navigation that uses images of scanning LADAR.

II. 3D LADAR-BASED NAVIGATION

This paper exploits planar surfaces (planes) as the basis feature for the 3D navigation solution. The rationale for the use of planes for navigation in 3D urban environments is that planes are common in man-made environments. To exemplify, Fig. 2 shows typical urban indoor (hallway) and outdoor (urban canyon) images. Multiple planes can be extracted from both images as illustrated in Fig. 2. Since changes in image feature parameters between two different scans are used for navigation, this feature must be observed in both scans. Feature repeatability is thus essential for the LADAR-based navigation. Planar surfaces satisfy this requirement as they are highly repeatable from scan to scan. If a wall of a building stays in the LADAR measurement range then the plane associated with that wall repeats in the scan images.

Fig. 3 illustrates a generic navigation routine that exploits planar surfaces to derive the navigation solution. A 3D scan image of the environment is obtained by a scanning LADAR. Planes are extracted from LADAR images and used to estimate the navigation solution that is comprised of changes in LADAR position and orientation between scans. In order to use a planar surface for the estimation of position and orientation changes from one scan to the next, this planar surface must be observed in both scans and it must be known with certainty that a plane in one scan corresponds to the plane in the next scan. Hence, the feature matching procedure establishes a correspondence between planes extracted from the current scan and planes extracted from previous scans. The navigation routine stores planes extracted from previous scans into the plane list. The plane list is initially populated at the initial scan. If a new plane is observed during one of the following scans, the plane list is updated to include this new plane. In [8], INS data are exploited to match lines extracted from 2D LADAR images for a 2D navigation case. In order to use INS data for plane matching, line matching algorithms developed in [8] must be extended for a 3D case. Hence, the feature matching procedure has to use position and orientation outputs of the INS to predict plane location and orientation in the current scan based on plane parameters observed in previous scans. If predicted plane parameters match closely to the parameters of the plane extracted from the current scan, a match is declared and a matched plane is used for navigation computations. Note that INS data can be also applied to compensate for LADAR motion during scans for those cases where such motion can introduce significant distortions to LADAR scan images. Following feature matching, changes in parameters of the planes that are matched between different scans are exploited to estimate the navigation solution. Changes in plane parameters are also applied to periodically recalibrate the INS to reduce drift terms in inertial navigation outputs in order to improve the quality of the INS-based plane prediction used by the feature matching procedure.

This paper focuses on the key aspects of the planar-based navigation that are related to LADAR data processing only. Development of LADAR/INS integrated components will be addressed by future research. Accordingly, two key questions that are addressed by the remainder of the paper are: 1) how to extract planes from LADAR scan images, and 2) how to use parameters of extracted planes to compute the navigation solution. To address these questions, LADAR imaging technologies are discussed first. A method for extracting plane parameters from LADAR measurements is then developed. The use of plane parameters for the estimation of relative position and orientation is finally described.

Aspects of the 3D navigation routine that are related to the LADAR/INS integration will be considered by future development. Particularly, future development will address the use of INS data for feature matching, INS-based compensation of distortions in scan images created by LADAR motion during scans, and LADAR-based INS calibration.
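As a structural sketch of the matching step in the routine of Fig. 3, the short Python snippet below keeps a running plane list and declares a match when a predicted plane agrees with a newly extracted one within simple angle and range thresholds. The Plane type, threshold names, and values are illustrative assumptions, not taken from the paper; the INS-based prediction itself is discussed in later sections.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Plane:
    normal: np.ndarray   # unit normal resolved in the LADAR body frame
    rho: float           # closest distance from the LADAR to the plane, m

def match_planes(predicted, extracted, ang_thresh_rad=0.05, rho_thresh_m=0.1):
    """Pair predicted planes (from the plane list) with planes extracted from the current scan."""
    matches = []
    for k, p in enumerate(predicted):
        for n, e in enumerate(extracted):
            ang = np.arccos(np.clip(p.normal @ e.normal, -1.0, 1.0))
            if ang < ang_thresh_rad and abs(p.rho - e.rho) < rho_thresh_m:
                matches.append((k, n))
                break
    return matches

plane_list = [Plane(np.array([1.0, 0.0, 0.0]), 10.0)]      # populated at the initial scan
current = [Plane(np.array([0.999, 0.04, 0.0]), 10.05)]     # extracted from the current scan
print(match_planes(plane_list, current))                    # [(0, 0)]
```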

III. 3D IMAGING TECHNOLOGIES

Various optical approaches exist to obtain 3D imagery of the environment, such as stereo-vision camera systems, the combination of a digital camera and projected light from a laser source, flash LADAR systems, and systems based on a LADAR scanning in both azimuth and elevation directions.

Fig. 4. Zero elevation scan: lines observed in scan image are created by intersection of LADAR scanning beam with planar surfaces such as building walls.

Flash LADAR sensors consist of a modulated laser emitter coupled with a focal plane array detector and the required optics. Similar to a conventional camera, this sensor creates an "image" of the environment, but instead of producing a 2D image where each pixel has associated intensity values, the flash LADAR generates an image where each pixel measurement consists of an associated range and intensity value. Current low-cost flash LADAR technology is capable of greater than 100 × 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. Example commercial products are produced by MESA Imaging, Canesta, Inc., and PMD Technologies GmbH. These cameras derive the range by measuring the phase difference (shift) between the transmitted and received (from the target) signal from a modulated light source and have a range limitation determined by the wavelength of the modulation. Other commercial sensors such as the sensors by Advanced Scientific Concepts, Inc. (ASC) measure the time-of-flight of a light pulse to compute distance. The advantage of all these 3D imaging sensors is the instantaneous acquisition of all pixels within the FOV. The disadvantage is their often limited range and limited FOV. The limited FOV can significantly limit the availability of features that can be used for navigation. Note that the limited FOV mainly depends on the optics used for the camera and that a larger FOV results in a higher power requirement since the light source must provide the same light density over a larger spherical area.

3D imaging sensors based on scanning LADARs are also commercially available, for example, from Velodyne, AutonoSys, Riegl, and Optech. In contrast to the flash LADAR sensors, these scanning systems require a large amount of optics and precise scanning mechanisms and are, therefore, expensive. Since these systems are pulsed and have a very narrow instantaneous FOV, their ranges are longer and the range accuracy is higher. The FOV of these sensors is, furthermore, determined by the scanning mechanism and is in general much larger (as large as 360 deg). This type of scanner is designed primarily for mapping applications. The scan rate is generally slow (from a few seconds to a few minutes per FOV) due to extensive scans at different elevation angles, which is not required for navigation applications as shown in the following sections.

This paper proposes a low-cost alternative to existing 3D scanning LADARs in order to develop and verify 3D navigation methods. An inexpensive 2D scanning LADAR (SICK LMS-200) is augmented by a low-cost servo motor that enables LADAR rotations in a limited elevation range. The elevation range is chosen to allow for plane reconstruction as described in the next section. The 3D navigation methods described in this paper are also developed to meet UAV payload requirements, since the limited elevation scan range allows for a simple and light sensor design and requires limited processing power for the LADAR data.

2D LADAR sensor imagery has been previously considered for 3D plane reconstruction in mapping applications. Particularly, in [11] 2D LADAR images are used to construct planar maps of indoor office environments. Specifically, [11] employs an upward-looking 2D LADAR that is mounted on a robotic vehicle. Planar surfaces are extracted from multiple LADAR images that are collected as the robot moves through the indoor hallway. While [11] performs a 3D mapping, the navigation task is still carried out in two dimensions using data of a 2D forward-looking LADAR. As mentioned previously, the focus of this paper is 3D autonomous navigation as opposed to 3D mapping. Hence, the plane extraction method described in the following section is not optimized for mapping purposes but for estimation of the UAV's 3D navigation solution from the changes in plane parameters between scans.

IV. PLANE RECONSTRUCTION USING 2D LADAR ROTATIONS IN A LIMITED ELEVATION RANGE

This paper proposes the use of a 2D LADAR that is rotated in a limited elevation range for 3D plane reconstruction. A 2D LADAR first performs a scan at zero elevation as shown in Fig. 4.

The LADAR scanning beam intersects with a planar surface created, for example, by a wall of a building. A line is obtained in the scan image as a result of this intersection. This line can be extracted from the scan image using line extraction techniques such as the ones reported in [12]. One line is obviously insufficient for the plane reconstruction since this line can belong to multiple planes as illustrated in Fig. 5.

Fig. 5. Zero elevation scan: multiple planes can be fit through single line extracted from zero elevation scan; hence, one scan is insufficient for plane reconstruction.

The LADAR is thus elevated and a second scan is taken as shown in Fig. 6. Two intersect lines are obtained after the elevated scan is performed: 1) intersection of the planar surface with the nonelevated LADAR scanning plane (Fig. 4) and 2) intersection of the planar surface with the elevated LADAR scanning plane (Fig. 6). These two lines are applied to the plane reconstruction. A plane reconstruction that is solely based on two lines can still be ambiguous. Particularly, if there is a second planar surface present within the FOV of the LADAR, a fictitious plane can be fit through two lines that belong to different real planes as illustrated in Fig. 6. Hence, information contained in two LADAR images is insufficient to separate real and fictitious planes.

Fig. 6. First elevated scan: second intersect line obtained for each planar surface in LADAR FOV; fictitious planes can still exist since a plane can be fit through two lines that belong to different real planes.

A third scan (second elevated scan) is taken to resolve the plane reconstruction ambiguity. Fig. 7 illustrates the second elevated scan. A third intersect line is extracted from the third scan image. This line belongs to the real plane but does not belong to the fictitious plane. The fictitious plane is thus removed, which completes the plane reconstruction.

Fig. 7. Second elevated scan: third intersect line extracted from LADAR scan image; use of this line allows for removal of fictitious planes.

The above consideration demonstrates that three consecutive LADAR scans (zero elevation scan and two elevated scans) are sufficient for the reconstruction of planar surfaces. A formal description of the reconstruction procedure is offered next. Fig. 8 illustrates the LADAR body frame. Fig. 9 represents a planar surface. In Fig. 9, $\mathbf{n}$ is the plane normal vector, which is the unit vector that originates from the LADAR body-frame origin perpendicular to the planar surface; $\rho$ is the plane range, which is the closest distance from the body-frame origin to the plane; $\theta$ is the plane tilt angle, which is the angle between the plane normal vector and the $x_b$, $y_b$ plane; $\alpha$ is the plane azimuth angle, which is the angle between the projection of $\mathbf{n}$ on the $x_b$, $y_b$ plane and the $x_b$ axis. Note that the plane normal vector is related to the plane angular parameters (azimuth and tilt angles) as follows:

$$\mathbf{n} = \begin{bmatrix} \cos(\alpha)\cos(\theta) \\ \sin(\alpha)\cos(\theta) \\ \sin(\theta) \end{bmatrix}. \qquad (1)$$

A plane can also be represented by its normal point, where the normal point is the intersection of the plane and a line originating from the LADAR location perpendicular to the plane of interest.

Equation (2) formulates the plane equation in Cartesian coordinates:

$$x_b\cos(\alpha)\cos(\theta) + y_b\sin(\alpha)\cos(\theta) + z_b\sin(\theta) = \rho \qquad (2)$$

where $x_b$, $y_b$, and $z_b$ are the Cartesian coordinates of any point that belongs to the plane; these coordinates are expressed in the LADAR body frame.
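The parameterization in (1) and (2) is easy to exercise numerically. The following sketch (hypothetical helper names, plain NumPy) forms the unit normal from the azimuth and tilt angles and checks whether a body-frame point satisfies the plane equation.

```python
import numpy as np

def plane_normal(alpha, theta):
    """Unit normal of a plane with azimuth alpha and tilt theta, per (1)."""
    return np.array([np.cos(alpha) * np.cos(theta),
                     np.sin(alpha) * np.cos(theta),
                     np.sin(theta)])

def on_plane(point_b, alpha, theta, rho, tol=1e-9):
    """Check the Cartesian plane equation (2): n . p = rho."""
    return abs(plane_normal(alpha, theta) @ point_b - rho) < tol

# Example: a plane 10 m away, azimuth 30 deg, tilt 5 deg.
alpha, theta, rho = np.deg2rad(30.0), np.deg2rad(5.0), 10.0
normal_point = rho * plane_normal(alpha, theta)   # closest point on the plane
print(on_plane(normal_point, alpha, theta, rho))  # True
```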

Fig. 8. LADAR body frame: $x_b$ and $y_b$ axes lie in scanning plane ($x_b$ axis is in direction of zero scanning angle, $y_b$ axis is in direction of 90 deg scanning angle), $z_b$ axis perpendicular to scanning plane.

Fig. 9. Representation of planar surface.

Fig. 10 shows lines of intersection of the LADAR scanning plane with the planar surface being reconstructed for cases of zero elevation scan and elevated scan.

Fig. 10. Intersections of planar surface with nonelevated and elevated LADAR scanning planes; for elevated scan, LADAR is rotated about its $x_b$ axis on angle $\varphi$.

Using a polar line representation [13], the intersect line for the zero elevation scan is expressed as follows:

$$x_b\cos(\hat{\alpha}_{\mathrm{line1}}) + y_b\sin(\hat{\alpha}_{\mathrm{line1}}) = \hat{\rho}_{\mathrm{line1}}. \qquad (3)$$

In (3), $\hat{\alpha}_{\mathrm{line1}}$ is the line angle and $\hat{\rho}_{\mathrm{line1}}$ is the line range. Note that $\hat{\alpha}_{\mathrm{line1}}$ and $\hat{\rho}_{\mathrm{line1}}$ are estimated by a line extraction procedure (e.g., by an iterative split and merge procedure [7]) that is applied to LADAR scan data for the zero elevation scan. The intersect line should also satisfy the equation of intersection of the planar surface with the LADAR scanning plane (a horizontal plane $z_b = 0$, in this case). A corresponding equation of the intersect line is given below:

$$x_b\cos(\alpha)\cos(\theta) + y_b\sin(\alpha)\cos(\theta) + z_b\sin(\theta) = \rho, \qquad z_b = 0 \qquad (4)$$

or

$$x_b\cos(\alpha)\cos(\theta) + y_b\sin(\alpha)\cos(\theta) = \rho. \qquad (5)$$

Dividing both sides of (5) by $\cos(\theta)$ yields

$$x_b\cos(\alpha) + y_b\sin(\alpha) = \frac{\rho}{\cos(\theta)}. \qquad (6)$$

Comparison of (3) and (6) allows relating estimates of the intersect line parameters with parameters of the planar surface:

$$\hat{\alpha} = \hat{\alpha}_{\mathrm{line1}}, \qquad \frac{\hat{\rho}}{\cos(\hat{\theta})} = \hat{\rho}_{\mathrm{line1}}. \qquad (7)$$

Equation (7) partially formulates the planar surface based on zero elevation scan data. A tilted scan is employed next to complete the plane formulation. The LADAR is rotated around its $x_b$ axis by the angle $\varphi$ for the tilted scan case. Equation (8) defines the coordinate transformation from the elevated LADAR frame ($x'_b$, $y'_b$, $z'_b$) into the zero elevation frame:

$$\begin{aligned}
x_b &= x'_b \\
y_b &= \cos(\varphi)\,y'_b - \sin(\varphi)\,z'_b \\
z_b &= \sin(\varphi)\,y'_b + \cos(\varphi)\,z'_b.
\end{aligned} \qquad (8)$$

Substitution of the coordinate transformation (8) into the plane equation (2) provides the plane equation expressed in the elevated LADAR frame:

$$x'_b\cos(\alpha)\cos(\theta) + y'_b\bigl(\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)\bigr) + z'_b\bigl(\sin(\theta)\cos(\varphi) - \sin(\alpha)\cos(\theta)\sin(\varphi)\bigr) = \rho. \qquad (9)$$

This plane intersects with the LADAR scanning plane at $z'_b = 0$. The intersect line equation is thus expressed as follows:

$$x'_b\cos(\alpha)\cos(\theta) + y'_b\bigl(\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)\bigr) = \rho. \qquad (10)$$

This line can also be expressed using line parameters estimated from the elevated scan image. The expression is similar to (3) above:

$$x'_b\cos(\hat{\alpha}_{\mathrm{line2}}) + y'_b\sin(\hat{\alpha}_{\mathrm{line2}}) = \hat{\rho}_{\mathrm{line2}}. \qquad (11)$$

Equations (10) and (11) formulate the intersect line equation using plane parameters and parameters of the line (range and angle) determined from LADAR data, correspondingly. Evaluating (10) and (11) at $x'_b = 0$ yields

$$x'_b = 0 \;\Rightarrow\; y'_b = \frac{\rho}{\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)}, \qquad
x'_b = 0 \;\Rightarrow\; y'_b = \frac{\hat{\rho}_{\mathrm{line2}}}{\cos(\hat{\alpha}_{\mathrm{line2}})}. \qquad (12)$$

From (12) it follows

$$\frac{\rho}{\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{\mathrm{line2}}}{\cos(\hat{\alpha}_{\mathrm{line2}})}. \qquad (13)$$

Substitution of (7) into (13) provides the following expression:

$$\frac{\hat{\rho}_{\mathrm{line1}}\cos(\theta)}{\sin(\hat{\alpha}_{\mathrm{line1}})\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{\mathrm{line2}}}{\cos(\hat{\alpha}_{\mathrm{line2}})} \qquad (14)$$

or

$$\frac{\hat{\rho}_{\mathrm{line1}}}{\sin(\hat{\alpha}_{\mathrm{line1}})\cos(\varphi) + \tan(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{\mathrm{line2}}}{\cos(\hat{\alpha}_{\mathrm{line2}})}. \qquad (15)$$

The plane tilt angle is thus related to the estimates of intersect line parameters obtained from the zero elevation scan and the elevated scan:

$$\tan(\theta) = \frac{\hat{\rho}_{\mathrm{line1}}\cos(\hat{\alpha}_{\mathrm{line2}}) - \hat{\rho}_{\mathrm{line2}}\sin(\hat{\alpha}_{\mathrm{line1}})\cos(\varphi)}{\hat{\rho}_{\mathrm{line2}}\sin(\varphi)}. \qquad (16)$$

A combined use of (7) and (16) completes the formulation of the planar surface:

$$\begin{aligned}
\hat{\alpha} &= \hat{\alpha}_{\mathrm{line1}} \\
\hat{\theta} &= \arctan\!\left(\frac{\hat{\rho}_{\mathrm{line1}}\cos(\hat{\alpha}_{\mathrm{line2}}) - \hat{\rho}_{\mathrm{line2}}\sin(\hat{\alpha}_{\mathrm{line1}})\cos(\varphi)}{\hat{\rho}_{\mathrm{line2}}\sin(\varphi)}\right) \\
\hat{\rho} &= \hat{\rho}_{\mathrm{line1}}\cos(\hat{\theta}).
\end{aligned} \qquad (17)$$

As mentioned previously, a second elevated scan is applied to remove fictitious planes. Fictitious planes can be created by fitting a plane through two lines that belong to different real planes (see Fig. 6). To remove fictitious planes, (17) is applied to compute estimates of plane parameters for the zero and first elevated scans ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$), and zero and second elevated scans ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Differences in plane parameter estimates are then compared with predetermined threshold values ($\delta\alpha$, $\delta\theta$, $\delta\rho$). The plane is extracted if the differences between estimates are below the thresholds, i.e., if the following conditions are satisfied:

$$|\hat{\alpha}_1 - \hat{\alpha}_2| < \delta\alpha, \qquad |\hat{\theta}_1 - \hat{\theta}_2| < \delta\theta, \qquad |\hat{\rho}_1 - \hat{\rho}_2| < \delta\rho. \qquad (18)$$

Otherwise, the plane extraction is not declared and the plane is removed from consideration. Removal of fictitious planes completes the plane reconstruction procedure.

The threshold values in (18) ($\delta\alpha$, $\delta\theta$, $\delta\rho$) are currently predetermined based on specifications of LADAR measurement errors. Particularly, the following threshold values are used:

$$\delta\alpha = 3\,\Delta\alpha, \qquad \delta\theta = 3\,\Delta\alpha, \qquad \delta\rho = 3\,\sigma_\rho \qquad (19)$$

where $\Delta\alpha$ and $\sigma_\rho$ are the LADAR angular resolution and the standard deviation of the ranging measurement noise, accordingly. The use of predetermined thresholds can be modified into an adaptive threshold choice by evaluating the real quality of lines extracted from scan images and then transforming line extraction errors into plane errors through the plane estimation equation (17). Particularly, the approach proposed in [14] exploits the actual line noise samples comprised of LADAR measurement errors and a texture of a scanned surface to estimate sigma values of line extraction errors. Hence, this approach evaluates the actual line quality and characterizes it by one sigma values of errors in line parameter estimates (range and angle). For an adaptive choice of plane extraction thresholds, line errors must first be transformed into plane parameter errors for ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$) and ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Adaptive extraction thresholds then need to accommodate combined errors in ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$) and ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Aspects of the adaptive threshold choice will be addressed by future research.

The plane extraction procedure can separate planes only if differences between plane parameters exceed the threshold values in (19). Thus, planar surfaces with closely located normal points (see Fig. 9 for the illustration of the plane normal point) are merged into a single plane. The procedure does not separate those planes that are nearly coplanar (i.e., differences in angular plane parameters are below the threshold) and are nearly at the same distance from the origin of the LADAR body-frame (i.e., differences in plane ranges are below the threshold). This feature can limit the use of the plane extraction method proposed herein for mapping applications. However, from the navigation perspective, separation of planar surfaces that are nearly coplanar does not have a considerable influence on the observability of navigation states (position and attitude). Particularly, dilution of precision (DOP) values that characterize the influence of planar geometry on the navigation solution accuracy (see the section on the use of plane parameters for computing the navigation solution for the definition of DOP) stay practically unchanged if nearly coplanar surfaces are used separately for computing the navigation solution. Thus, this paper does not address this separation.
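To make the three-scan reconstruction concrete, the sketch below rebuilds a plane's (α, θ, ρ) from two intersect lines: it samples points on the polar lines measured in the zero-elevation and elevated scans, maps the elevated-scan points into the zero-elevation frame with (8), and fits the plane through those points. This is a geometric alternative to the closed form (17), not the paper's exact implementation; the round-trip check at the end uses a synthetic plane and a forward model implied by (7) and (10).

```python
import numpy as np

def points_on_line(alpha_l, rho_l):
    """Two points (x, y) on the polar line x*cos(alpha_l) + y*sin(alpha_l) = rho_l."""
    n2 = np.array([np.cos(alpha_l), np.sin(alpha_l)])   # 2D line normal
    t2 = np.array([-np.sin(alpha_l), np.cos(alpha_l)])  # direction along the line
    p0 = rho_l * n2
    return p0, p0 + t2

def elevated_to_base(p_prime, phi):
    """Transformation (8): elevated frame (x', y', z') -> zero elevation frame."""
    c, s = np.cos(phi), np.sin(phi)
    C = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return C @ p_prime

def plane_from_two_lines(line1, line2, phi):
    """Reconstruct (alpha, theta, rho) from intersect lines of the zero-elevation
    and elevated scans; geometric alternative to the closed form (17)."""
    a1, r1 = line1
    a2, r2 = line2
    p1, p2 = [np.append(p, 0.0) for p in points_on_line(a1, r1)]         # z_b = 0 scan
    q1, q2 = [elevated_to_base(np.append(p, 0.0), phi)
              for p in points_on_line(a2, r2)]                            # elevated scan
    n = np.cross(p2 - p1, q1 - p1)                                        # plane normal
    n /= np.linalg.norm(n)
    rho = n @ p1
    if rho < 0:                                                           # keep rho >= 0
        n, rho = -n, -rho
    return np.arctan2(n[1], n[0]), np.arcsin(n[2]), rho

# Round-trip check with a synthetic plane (alpha = 40 deg, theta = 10 deg, rho = 12 m).
alpha, theta, rho, phi = np.deg2rad(40), np.deg2rad(10), 12.0, np.deg2rad(5)
line1 = (alpha, rho / np.cos(theta))                                      # from (7)
a = np.cos(alpha) * np.cos(theta)
b = np.sin(alpha) * np.cos(theta) * np.cos(phi) + np.sin(theta) * np.sin(phi)
line2 = (np.arctan2(b, a), rho / np.hypot(a, b))                          # polar form of (10)
alpha_hat, theta_hat, rho_hat = plane_from_two_lines(line1, line2, phi)
print(np.rad2deg([alpha_hat, theta_hat]), rho_hat)                        # ~[40, 10] deg, ~12 m
```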

It must also be noted that a limited elevation scan range is used for the plane extraction method presented in this section. As a result, the method has limited application in 3D mapping, i.e., map building. For navigation applications, the plane extraction method described herein allows for a complete reconstruction of planar surfaces that are then used to compute a 3D navigation solution.

V. INFLUENCE OF LADAR MOTION ON PLANE EXTRACTION

LADAR motion during scans and between consecutive scans taken at different elevation angles can degrade the accuracy of the plane extraction procedure described in the previous section. First, LADAR motion during a single scan (i.e., motion between measuring the first and last points in a scan) can distort lines observed in scan images. Second, LADAR motion between consecutive scans at different elevation angles can influence the choice of plane extraction thresholds that are applied in (17). This section discusses the influence of LADAR motion on the planar-based navigation.

The influence of LADAR motion during scans is evaluated below for the case of the SICK LMS-200 scanning LADAR. This LADAR has a scan duration of 6.5 ms, an angular resolution of 0.5 deg, and a standard deviation of the ranging noise of 1 cm. LADAR motion does not introduce considerable distortions to the scan image if the LADAR displacement and LADAR rotation over the scan duration do not exceed the ranging noise level and angular resolution, correspondingly. The following conditions must be satisfied:

$$|\langle V_{\mathrm{LADAR}}\rangle|\,\Delta t_{\mathrm{scan}} < \sigma_\rho, \qquad |\langle\omega_{\mathrm{LADAR}}\rangle|\,\Delta t_{\mathrm{scan}} < \Delta\alpha \qquad (20)$$

where $|\langle V_{\mathrm{LADAR}}\rangle|$ is the absolute value of the average LADAR velocity during the scanning interval, $|\langle\omega_{\mathrm{LADAR}}\rangle|$ is the absolute value of the average LADAR rotation rate during the scan, and $\Delta t_{\mathrm{scan}}$ is the scan duration. Applying the range and angular error specification of the SICK LMS-200 LADAR in (20) shows that the LADAR velocity must not exceed 1.5 m/s and the angular rate must not exceed 77 deg/s in order to avoid motion-related distortions of scan images. While the angular motion generally stays below this angular rate threshold for most UAV applications, the velocity threshold can be exceeded for at least some of the UAV flight scenarios. For these scenarios, LADAR range measurements can be adjusted using INS estimates of position changes as described in [7].

LADAR scans at different elevation angles are separated by a finite time interval. If the LADAR motion between scans exceeds the ranging noise and angular resolution, this motion must be taken into account. The maximum allowable translational motion and rotational motion between scans are computed as follows:

$$|\langle V_{\mathrm{LADAR}}\rangle|\,\Delta T_{\mathrm{scans}} < \sigma_\rho, \qquad |\langle\omega_{\mathrm{LADAR}}\rangle|\,\Delta T_{\mathrm{scans}} < \Delta\alpha \qquad (21)$$

where $\Delta T_{\mathrm{scans}}$ is the time interval between two consecutive scans at different elevation angles. For the current system implementation that is described in the test setup section below, $\Delta T_{\mathrm{scans}}$ is equal to 0.7 s. In this case, the maximum allowable LADAR velocity and angular rate that do not require the use of motion compensation procedures are estimated as 1.4 cm/s and 0.7 deg/s, accordingly.
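The velocity and rate limits quoted above follow directly from (20) and (21); the few lines below simply redo that arithmetic with the SICK LMS-200 figures given in the text.

```python
# SICK LMS-200 figures quoted in the text.
sigma_rho = 0.01          # ranging noise standard deviation, m
delta_alpha = 0.5         # angular resolution, deg
dt_scan = 6.5e-3          # single-scan duration, s
dT_scans = 0.7            # interval between scans at different elevations, s

# Conditions (20): motion during a single scan.
v_max_scan = sigma_rho / dt_scan          # ~1.5 m/s
w_max_scan = delta_alpha / dt_scan        # ~77 deg/s

# Conditions (21): motion between consecutive elevation scans.
v_max_between = sigma_rho / dT_scans      # ~0.014 m/s (1.4 cm/s)
w_max_between = delta_alpha / dT_scans    # ~0.7 deg/s

print(v_max_scan, w_max_scan, v_max_between, w_max_between)
```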

For UAV operational scenarios where these motion thresholds are exceeded, INS data must be used for the LADAR motion compensation. To apply the INS-based motion compensation approach, the coordinate transformation (8) is modified to accommodate the LADAR motion between horizontal and elevated scans. For a general case of an arbitrary LADAR motion between these scans, (8) is modified as follows:

$$\begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix} = \mathbf{C}_{\mathrm{INS}}\begin{bmatrix} x'_b \\ y'_b \\ z'_b \end{bmatrix} - \begin{bmatrix} \Delta x_{\mathrm{INS}} \\ \Delta y_{\mathrm{INS}} \\ \Delta z_{\mathrm{INS}} \end{bmatrix} \qquad (22)$$

where $\mathbf{C}_{\mathrm{INS}}$ is the INS estimate of the direction cosine matrix (DCM) for the LADAR rotation between scans (for a general case, this rotation includes both the forced elevation rotation of the LADAR and any additional rotations due to the motion of the autonomous vehicle), and $\Delta x_{\mathrm{INS}}$, $\Delta y_{\mathrm{INS}}$, and $\Delta z_{\mathrm{INS}}$ are the INS estimates of the LADAR displacement components resolved in the axes of the LADAR body frame at the horizontal scan. Taking into account (22), the equation for the planar surface in the tilted scan frame is modified as follows:

$$\begin{aligned}
& x'_b\bigl(C_{11}\cos(\alpha)\cos(\theta) + C_{21}\sin(\alpha)\cos(\theta) + C_{31}\sin(\theta)\bigr) \\
& \quad + y'_b\bigl(C_{12}\cos(\alpha)\cos(\theta) + C_{22}\sin(\alpha)\cos(\theta) + C_{32}\sin(\theta)\bigr) \\
& \quad + z'_b\bigl(C_{13}\cos(\alpha)\cos(\theta) + C_{23}\sin(\alpha)\cos(\theta) + C_{33}\sin(\theta)\bigr) \\
& \quad + \Delta x_{\mathrm{INS}}\cos(\alpha)\cos(\theta) + \Delta y_{\mathrm{INS}}\sin(\alpha)\cos(\theta) + \Delta z_{\mathrm{INS}}\sin(\theta) = \rho
\end{aligned} \qquad (23)$$

where $C_{k,j}$, $k = 1,\ldots,3$, $j = 1,\ldots,3$ are the elements of the INS DCM $\mathbf{C}_{\mathrm{INS}}$. Accordingly, (10), which expresses the line extracted from the tilted scan image (i.e., for $z'_b = 0$), is modified as follows:

$$\begin{aligned}
& x'_b\bigl(C_{11}\cos(\alpha)\cos(\theta) + C_{21}\sin(\alpha)\cos(\theta) + C_{31}\sin(\theta)\bigr) \\
& \quad + y'_b\bigl(C_{12}\cos(\alpha)\cos(\theta) + C_{22}\sin(\alpha)\cos(\theta) + C_{32}\sin(\theta)\bigr) \\
& \quad + \Delta x_{\mathrm{INS}}\cos(\alpha)\cos(\theta) + \Delta y_{\mathrm{INS}}\sin(\alpha)\cos(\theta) + \Delta z_{\mathrm{INS}}\sin(\theta) = \rho.
\end{aligned} \qquad (24)$$

Similar to the derivation of (12) through (17), the modified plane extraction procedure is derived from (7), (24), and (11):

$$\begin{aligned}
\hat{\alpha} &= \hat{\alpha}_{\mathrm{line1}} \\
\hat{\theta} &= \arctan\!\left(\frac{1}{C_{32}}\cdot\frac{\hat{\rho}_{\mathrm{line1}}\sin(\hat{\alpha}_{\mathrm{line2}}) - \hat{\rho}_{\mathrm{line2}}\bigl(C_{12}\cos(\hat{\alpha}_{\mathrm{line1}}) + C_{22}\sin(\hat{\alpha}_{\mathrm{line1}})\bigr) - \bigl(\Delta x_{\mathrm{INS}}\cos(\hat{\alpha}_{\mathrm{line1}}) + \Delta y_{\mathrm{INS}}\sin(\hat{\alpha}_{\mathrm{line1}})\bigr)\sin(\hat{\alpha}_{\mathrm{line2}})}{\hat{\rho}_{\mathrm{line2}} + \Delta z_{\mathrm{INS}}\sin(\hat{\alpha}_{\mathrm{line2}})}\right) \qquad (25) \\
\hat{\rho} &= \hat{\rho}_{\mathrm{line1}}\cos(\hat{\theta}).
\end{aligned}$$

A modified plane extraction procedure, which uses inertial data ($\mathbf{C}_{\mathrm{INS}}$, $\Delta x_{\mathrm{INS}}$, $\Delta y_{\mathrm{INS}}$, and $\Delta z_{\mathrm{INS}}$) for the LADAR motion compensation as formulated by (25), will be implemented by future development that will consider LADAR/INS integration aspects of the planar-based navigation.

VI. USE OF PLANE PARAMETERS FOR COMPUTING NAVIGATION SOLUTION

Fig. 11. Use of plane normal points for navigation: changes in perceived location of normal point between scan i and scan j applied to estimate position changes.

Plane-based navigation is performed using changes in perceived locations of plane normal points between scans. As stated previously, a plane normal point is defined as the intersection of the plane and a line originating from the LADAR location perpendicular to the plane of interest. Fig. 11 illustrates the geometry involved in navigating off the plane normal points. In Fig. 11, $\Delta\mathbf{R}$ is the delta position vector (displacement vector between scans $i$ and $j$, in this case); $\mathbf{n}_i$ is the plane normal vector whose components are resolved in the LADAR body frame at scan $i$; $\mathbf{n}_j$ is the plane normal vector whose components are resolved in the LADAR body frame at scan $j$; and $\rho_i$ and $\rho_j$ are the shortest distances from the LADAR to the plane at scans $i$ and $j$, accordingly. Note that the plane normal vector at scan $i$ is parallel to the plane normal vector at scan $j$ since stationary planes are assumed. Components of the plane normal vector in the LADAR body frame at scan $i$ generally differ from components of the normal vector in the body frame at scan $j$ since the body frame can rotate between scans $i$ and $j$. Note that a single scan herein is a 3D scan that includes three consecutive 2D scans at different elevation angles (zero elevation scan and two elevated scans). From the geometry presented in Fig. 11, a projection of the LADAR displacement between scans $i$ and $j$ on the plane normal vector is related to the change in the normal point range between scans $i$ and $j$:

$$(\Delta\mathbf{R}, \mathbf{n}_i) = \rho_i - \rho_j \qquad (26)$$

where $(\,,\,)$ is the vector inner product.

The proposed plane-based navigation algorithm utilizes (26) to estimate the delta position vector. A minimum of three noncollinear planar surfaces is required to estimate the three delta position components. For measurements to multiple planar surfaces, (26) can be expanded into the following

matrix form:

$$\mathbf{H}\,\Delta\mathbf{R} = \Delta\boldsymbol{\rho} \qquad (27)$$

where

$$\mathbf{H} = \begin{bmatrix} \mathbf{n}_{i,1}^{T} \\ \vdots \\ \mathbf{n}_{i,m}^{T} \\ \vdots \\ \mathbf{n}_{i,M}^{T} \end{bmatrix}, \qquad
\Delta\boldsymbol{\rho} = \begin{bmatrix} \rho_{i,1} - \rho_{j,1} \\ \vdots \\ \rho_{i,m} - \rho_{j,m} \\ \vdots \\ \rho_{i,M} - \rho_{j,M} \end{bmatrix}. \qquad (28)$$

In (28), superscript $T$ denotes a matrix transpose, $m$ is the plane index, and $M$ is the total number of planes used to estimate delta position. A standard least mean square (LMS) formulation is applied to estimate the delta position vector:

$$\Delta\hat{\mathbf{R}} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\Delta\hat{\boldsymbol{\rho}}. \qquad (29)$$

In (29), $\Delta\hat{\boldsymbol{\rho}}$ is the estimated delta range vector, which contains differences in estimates of plane ranges computed based on LADAR data (see (17)) for scans $i$ and $j$. Note that (29) uses a nonweighted LMS formulation. This formulation can be modified into a weighted LMS procedure if standard deviations of plane range changes are evaluated. Modification of the nonweighted LMS above into a weighted LMS solution is considered as a topic for future research.

The LMS position accuracy depends on the relative plane geometry, which is determined by the LMS measurement matrix (the $\mathbf{H}$ matrix). This paper uses DOP factors to characterize the geometry influence on the relationship between the localization accuracy and the planar range accuracies. DOP factors for the LMS solution defined by (29) are formulated in this section. The next section uses simulation results to illustrate the influence of relative planar geometry on the delta positioning accuracy.

The DOP-based approach is adopted from GPS [15], where DOPs are employed to characterize the influence of satellite geometry on the positioning accuracy. Generally speaking, the use of DOP factors for plane-based localization allows evaluation of the localization accuracy for a given plane geometry and accuracy of the plane range estimates. More specifically, the DOP is defined as a geometry-dependent linear coefficient that relates a standard deviation of the delta position estimation error to a standard deviation of the delta range error. For instance, a vertical DOP (VDOP) relates a standard deviation of the vertical delta position error ($\sigma_{\Delta R_V}$) with a standard deviation of the error in plane range changes ($\sigma_{\Delta\rho}$):

$$\sigma_{\Delta R_V} = \mathrm{VDOP}\cdot\sigma_{\Delta\rho}. \qquad (30)$$

DOP factors can be formulated for the plane-based navigation similarly to the GPS DOP formulation (see [15] for the corresponding formulation of GPS DOPs). From (29) it follows that the relationship between the position error vector ($\delta(\Delta\mathbf{R})$) and the range change error vector ($\delta(\Delta\boldsymbol{\rho})$) is given by

$$\delta(\Delta\mathbf{R}) = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\,\delta(\Delta\boldsymbol{\rho}). \qquad (31)$$

The variance of $\delta(\Delta\mathbf{R})$ is derived from (31) and yields the following variance relation:

$$\mathrm{VAR}_{\delta(\Delta\mathbf{R})} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\,\mathrm{VAR}_{\delta(\Delta\boldsymbol{\rho})}\,\mathbf{H}\,(\mathbf{H}^{T}\mathbf{H})^{-1} \qquad (32)$$

where

$$\mathrm{VAR}_{\mathbf{x}} = E[\mathbf{x}\,\mathbf{x}^{T}] \qquad (33)$$

and $E[\cdot]$ is the expected value. If the components of $\delta(\Delta\boldsymbol{\rho})$ are assumed to be independent and identically distributed (IID), then

$$\mathrm{VAR}_{\delta(\Delta\boldsymbol{\rho})} = \mathbf{I}\,\sigma_{\Delta\rho}^{2} \qquad (34)$$

where $\mathbf{I}$ is a unit matrix and $\sigma_{\Delta\rho}$ is the standard deviation of the delta range error. Substitution of (34) into (32) yields

$$\mathrm{VAR}_{\delta(\Delta\mathbf{R})} = (\mathbf{H}^{T}\mathbf{H})^{-1}\sigma_{\Delta\rho}^{2}. \qquad (35)$$

DOP factors are thus formulated as follows:

$$\mathbf{D} = \sqrt{(\mathbf{H}^{T}\mathbf{H})^{-1}} \qquad (36)$$

where $\mathrm{xDOP} = [\mathbf{D}]_{11}$, $\mathrm{yDOP} = [\mathbf{D}]_{22}$, and $\mathrm{zDOP} = [\mathbf{D}]_{33}$, correspondingly. As mentioned previously, (36) is derived assuming that range errors for different planar surfaces are identically distributed and uncorrelated with each other. The noncorrelation assumption is generally valid since different planes are computed from different LADAR measurements that are normally uncorrelated, and computation of plane parameters for different planes is completely separate. However, range errors associated with ranges to different planes can have different standard deviation values. In this case, the unweighted LMS estimation (see (29)) must be modified to a weighted LMS solution procedure. DOP formulation for a weighted LMS solution is recommended as a topic for future research.
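A compact sketch of the delta-position LMS of (27)-(29) and the DOP factors of (36) is given below. The plane normals and ranges would come from the extraction procedure of Section IV; the numerical values here are synthetic and only for illustration.

```python
import numpy as np

def delta_position_lms(normals_i, rho_i, rho_j):
    """Solve H * dR = d_rho in the least-squares sense, per (27)-(29).
    normals_i: (M, 3) plane normals resolved in the scan-i frame.
    rho_i, rho_j: length-M plane ranges at scans i and j."""
    H = np.asarray(normals_i, dtype=float)           # measurement matrix (28)
    d_rho = np.asarray(rho_i) - np.asarray(rho_j)    # delta range vector (28)
    dR, *_ = np.linalg.lstsq(H, d_rho, rcond=None)   # equivalent to (29)
    return dR

def dop_factors(normals_i):
    """xDOP, yDOP, zDOP from the plane geometry, per (36)."""
    H = np.asarray(normals_i, dtype=float)
    return np.sqrt(np.diag(np.linalg.inv(H.T @ H)))

# Three illustrative planes: two near-vertical walls and one tilted surface.
normals = np.array([[1.0, 0.0, 0.05],
                    [0.0, 1.0, 0.05],
                    [0.6, 0.6, 0.5]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
rho_i = np.array([10.0, 12.0, 8.0])
dR_true = np.array([0.3, -0.2, 0.1])
rho_j = rho_i - normals @ dR_true                    # consistent with (26)
print(delta_position_lms(normals, rho_i, rho_j))     # ~[0.3, -0.2, 0.1]
print(dop_factors(normals))
```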

The attitude of the LADAR platform is estimated using plane normal vectors. Equation (37) relates a change in the LADAR orientation between scans $i$ and $j$ to the change in the orientation of a plane normal vector:

$$\mathbf{C}_{ij}\,\mathbf{n}_j = \mathbf{n}_i \qquad (37)$$

where $\mathbf{C}_{ij}$ is the DCM that corresponds to the coordinate transformation from the LADAR body frame at scan $i$ ($i$-scan frame) to the LADAR body frame at scan $j$ ($j$-scan frame). To compute the DCM, the attitude estimation algorithm needs to solve Wahba's problem [17]: given a first set of normal vectors with vector components resolved in the $i$-scan's frame and a second set of the same vectors with their components resolved in the $j$-scan's frame, find the DCM that brings the second set into the best least squares correspondence with the first. At least two noncollinear normal vectors are required for the attitude estimation. Attitude is generally estimated by solving an eigenvalue/eigenvector problem, which requires the solving of nonlinear equations. For instance, the quaternion estimation algorithm (QUEST) finds the optimal LMS quaternion by computing eigenvalues and eigenvectors of a four-by-four Hessian matrix [18]. In this case, a fourth-order equation has to be solved in order to compute eigenvalues. This paper implements a two-step attitude estimation approach. First, an initial (nonoptimal) DCM is computed based on two noncollinear normal vectors. Second, DCM initialization errors are optimally estimated by applying a standard linear LMS formulation. The use of linear versus nonlinear solution techniques is beneficial for error analysis, since it allows for a direct transformation of plane extraction errors into attitude estimation errors. The two-step attitude estimation procedure is discussed next.

As stated previously, the DCM is first initialized based on two noncollinear vectors. The initial DCM is found based on two computational rotations of the $j$-frame that align $j$-frame vector components with their components in the $i$-frame. Corresponding DCM computations are described in detail in [16]. The main computational steps are summarized below. Two associated noncollinear plane normal vectors extracted from scan $i$ and scan $j$ ($\hat{\mathbf{n}}_{i,k_0}$, $\hat{\mathbf{n}}_{i,m_0}$ and $\hat{\mathbf{n}}_{j,k_0}$, $\hat{\mathbf{n}}_{j,m_0}$) are used to compute the initial DCM. Two vectors with the maximum absolute value of their cross product are chosen amongst all available plane normal vectors to maximize noncollinearity. An extensive search is performed through all possible pairs of normal vectors extracted from scan $i$ to find the vector pair that maximizes the cross-product absolute value:

$$|\hat{\mathbf{n}}_{i,k_0} \times \hat{\mathbf{n}}_{i,m_0}| = \max_{k,m=1,\ldots,M} |\hat{\mathbf{n}}_{i,k} \times \hat{\mathbf{n}}_{i,m}| \qquad (38)$$

where $\times$ is the vector cross product and $M$ is the total number of planes extracted.

Fig. 12. Computational rotation of j-frame that aligns vector components at j-frame with its components at i-frame.

As stated above, the DCM is computed based on two computational rotations of the $j$-frame that match $j$-frame vector components $\hat{\mathbf{n}}_{j,k_0}$ and $\hat{\mathbf{n}}_{j,m_0}$ with their $i$-frame components $\hat{\mathbf{n}}_{i,k_0}$ and $\hat{\mathbf{n}}_{i,m_0}$. The first rotation matches components of $\hat{\mathbf{n}}_{j,k_0}$ and $\hat{\mathbf{n}}_{i,k_0}$: i.e., the $j$-frame is computationally rotated such that $\hat{\mathbf{n}}_{j,k_0}$ vector components become $\hat{\mathbf{n}}_{i,k_0}$ components at the end of the rotation as illustrated in Fig. 12. Hence, $\hat{\mathbf{n}}_{j,k_0}$ is rotated relative to the $j$-frame as shown in Fig. 12. The rotation axis $\hat{\boldsymbol{\mu}}_1$ is perpendicular to both $\hat{\mathbf{n}}_{i,k_0}$ and $\hat{\mathbf{n}}_{j,k_0}$, i.e.,

$$\hat{\boldsymbol{\mu}}_1 = \hat{\mathbf{n}}_{i,k_0} \times \hat{\mathbf{n}}_{j,k_0} \qquad (39)$$

and the rotation angle $\phi_1$ is the angle between $\hat{\mathbf{n}}_{i,k_0}$ and $\hat{\mathbf{n}}_{j,k_0}$:

$$\phi_1 = \arccos(\hat{\mathbf{n}}_{j,k_0}, \hat{\mathbf{n}}_{i,k_0}). \qquad (40)$$

To support this vector rotation, the $j$-frame must be rotated in the direction opposite to the vector rotation. Thus, based on the rotation angle and rotation axis, the DCM of the first rotation is computed as follows:

$$\hat{\mathbf{C}}_1 = \mathrm{expm}(\hat{\phi}_1\,\hat{\boldsymbol{\mu}}_1\times) \qquad (41)$$

where $\mathrm{expm}$ is the matrix exponential function and $\hat{\boldsymbol{\mu}}_1\times$ is the skew-symmetric matrix defined as follows:

$$\hat{\boldsymbol{\mu}}_1\times = \begin{bmatrix} 0 & -\hat{\mu}_{1z} & \hat{\mu}_{1y} \\ \hat{\mu}_{1z} & 0 & -\hat{\mu}_{1x} \\ -\hat{\mu}_{1y} & \hat{\mu}_{1x} & 0 \end{bmatrix}. \qquad (42)$$

After the first rotation, the following condition is satisfied:

$$\hat{\mathbf{n}}_{i,k_0} = \hat{\mathbf{C}}_1\,\hat{\mathbf{n}}_{j,k_0} \qquad (43)$$

and the components of the second normal vector are transformed as follows:

$$\hat{\mathbf{n}}'_{j,m_0} = \hat{\mathbf{C}}_1\,\hat{\mathbf{n}}_{j,m_0}. \qquad (44)$$

The second rotation matches $\hat{\mathbf{n}}'_{j,m_0}$ with $\hat{\mathbf{n}}_{i,m_0}$ while the previously matched $\hat{\mathbf{n}}_{i,k_0}$ and $\hat{\mathbf{n}}_{j,k_0}$ remain unchanged:

$$\hat{\mathbf{n}}_{i,m_0} = \hat{\mathbf{C}}_2\,\hat{\mathbf{n}}'_{j,m_0}, \qquad \hat{\mathbf{C}}_2\,\hat{\mathbf{n}}_{i,k_0} = \hat{\mathbf{n}}_{i,k_0}. \qquad (45)$$

Correspondingly, $\hat{\mathbf{n}}_{i,k_0}$ serves as the rotation axis for the second rotation (i.e., $\hat{\boldsymbol{\mu}}_2 = \hat{\mathbf{n}}_{i,k_0}$). The rotation angle is chosen to satisfy the first condition in (45). In this case, the rotation angle $\phi_2$ can be estimated as the angle between the projections of $\hat{\mathbf{n}}'_{j,m_0}$ and $\hat{\mathbf{n}}_{i,m_0}$ on the planar surface perpendicular to $\hat{\mathbf{n}}_{i,k_0}$ (see [16] for more details). Computation of the DCM for the second rotation is similar to the DCM computation for the first rotation:

$$\hat{\mathbf{C}}_2 = \mathrm{expm}(\hat{\phi}_2\,\hat{\boldsymbol{\mu}}_2\times). \qquad (46)$$

The initial DCM estimate is determined as a superposition of the above rotations:

$$(\hat{\mathbf{C}}_{ij})_0 = \hat{\mathbf{C}}_2\,\hat{\mathbf{C}}_1. \qquad (47)$$

The initial estimate of the DCM can be represented as follows:

$$(\hat{\mathbf{C}}_{ij})_0 = \delta\mathbf{C}_{ij}\,\mathbf{C}_{ij} \qquad (48)$$

where $\delta\mathbf{C}_{ij}$ is the DCM estimation error matrix.
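The two-rotation initialization of (38)-(47) can be prototyped in a few lines of NumPy. The sketch below is our own construction of a DCM that maps the scan-j components of two noncollinear normals onto their scan-i counterparts; it follows the same idea (rotate about the cross product, then rotate about the first aligned vector), but the sign conventions are ours rather than the paper's frame-rotation convention, and the degenerate case of already-aligned vectors is not handled.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix (v x), as in (42)."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def rot(axis, angle):
    """Rotation matrix expm(angle * skew(axis)) via the Rodrigues formula."""
    K = skew(axis / np.linalg.norm(axis))
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def initial_dcm(n_i_k, n_i_m, n_j_k, n_j_m):
    """Two-rotation initialization: a DCM mapping scan-j normal components onto scan-i."""
    # Rotation 1: bring n_j_k onto n_i_k (axis perpendicular to both, cf. (39)-(43)).
    axis1 = np.cross(n_j_k, n_i_k)
    ang1 = np.arccos(np.clip(n_j_k @ n_i_k, -1.0, 1.0))
    C1 = rot(axis1, ang1)
    # Rotation 2: about n_i_k, align the projections of C1 @ n_j_m and n_i_m
    # on the plane perpendicular to n_i_k (cf. (44)-(46)).
    k = n_i_k / np.linalg.norm(n_i_k)
    u = C1 @ n_j_m - (k @ (C1 @ n_j_m)) * k
    v = n_i_m - (k @ n_i_m) * k
    ang2 = np.arctan2(k @ np.cross(u, v), u @ v)
    return rot(k, ang2) @ C1                              # superposition, cf. (47)

# Synthetic check: rotate two normals by a known DCM and recover it.
C_true = rot(np.array([0.3, -0.5, 0.8]), np.deg2rad(7.0))
n_i_k = np.array([1.0, 0.0, 0.1]); n_i_k /= np.linalg.norm(n_i_k)
n_i_m = np.array([0.1, 1.0, 0.0]); n_i_m /= np.linalg.norm(n_i_m)
n_j_k, n_j_m = C_true.T @ n_i_k, C_true.T @ n_i_m         # components in the j-frame
C0 = initial_dcm(n_i_k, n_i_m, n_j_k, n_j_m)
print(np.allclose(C0 @ n_j_k, n_i_k), np.allclose(C0 @ n_j_m, n_i_m))   # True True
```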

Linear approximation of this matrix yields [19]

$$\delta\mathbf{C}_{ij} \approx \mathbf{I} + \delta\boldsymbol{\Omega}\times \qquad (49)$$

where $\delta\boldsymbol{\Omega}\times$ is the skew-symmetric matrix

$$\delta\boldsymbol{\Omega}\times = \begin{bmatrix} 0 & -\delta\psi & \delta\theta \\ \delta\psi & 0 & -\delta\phi \\ -\delta\theta & \delta\phi & 0 \end{bmatrix} \qquad (50)$$

and $\delta\theta$, $\delta\phi$, and $\delta\psi$ are the errors in pitch, roll, and heading angles after the DCM initialization stage. The second stage of the DCM estimation procedure implements a linear LMS solution to estimate these angular errors. This LMS solution procedure is discussed below.

Substitution of (49) into (48) provides the following expression:

$$(\hat{\mathbf{C}}_{ij})_0 = \delta\mathbf{C}_{ij}\,\mathbf{C}_{ij} \approx (\mathbf{I} + \delta\boldsymbol{\Omega}\times)\,\mathbf{C}_{ij} = \mathbf{C}_{ij} + \delta\boldsymbol{\Omega}\times\mathbf{C}_{ij}. \qquad (51)$$

Correspondingly,

$$(\hat{\mathbf{C}}_{ij})_0\,\mathbf{n}_j - \mathbf{n}_i = (\mathbf{I} + \delta\boldsymbol{\Omega}\times)\,\mathbf{C}_{ij}\,\mathbf{n}_j - \mathbf{n}_i = (\mathbf{I} + \delta\boldsymbol{\Omega}\times)\,\mathbf{n}_i - \mathbf{n}_i = (\delta\boldsymbol{\Omega}\times)\,\mathbf{n}_i = \delta\boldsymbol{\Omega}\times\mathbf{n}_i = -\mathbf{n}_i\times\delta\boldsymbol{\Omega}. \qquad (52)$$

Note that (52) uses the equivalency of the matrix multiplication $(\delta\boldsymbol{\Omega}\times)\,\mathbf{n}_i$ to the vector cross product $\delta\boldsymbol{\Omega}\times\mathbf{n}_i$. Expanding (52) to include all available normal vectors yields

$$\Delta\mathbf{n} = \mathbf{H}_{\Omega}\,\delta\boldsymbol{\Omega} \qquad (53)$$

where

$$\Delta\mathbf{n} = \begin{bmatrix} (\hat{\mathbf{C}}_{ij})_0\,\mathbf{n}_{j,1} - \mathbf{n}_{i,1} \\ \vdots \\ (\hat{\mathbf{C}}_{ij})_0\,\mathbf{n}_{j,M} - \mathbf{n}_{i,M} \end{bmatrix}, \qquad
\mathbf{H}_{\Omega} = \begin{bmatrix} -\mathbf{n}_{i,1}\times \\ \vdots \\ -\mathbf{n}_{i,M}\times \end{bmatrix}. \qquad (54)$$

An LMS solution is applied to estimate the initial angular errors:

$$\delta\hat{\boldsymbol{\Omega}} = (\mathbf{H}_{\Omega}^{T}\mathbf{H}_{\Omega})^{-1}\mathbf{H}_{\Omega}^{T}\,\Delta\hat{\mathbf{n}}. \qquad (55)$$

Note that the measurement vector $\Delta\hat{\mathbf{n}}$ is based on estimated values of plane normal vectors that are computed from LADAR data. Finally, the initial DCM estimate is adjusted to incorporate the LMS estimates of the initial angular errors:

$$\hat{\mathbf{C}}_{ij} = \mathrm{expm}(-\delta\hat{\boldsymbol{\Omega}}\times)\,(\hat{\mathbf{C}}_{ij})_0. \qquad (56)$$

Equation (55) provides the linear relation between plane normal vectors and angular adjustments. Unlike nonlinear attitude estimation methods, this linear relation can be directly applied to formulate the influence of relative plane geometry on the estimation accuracy of the LADAR platform pitch, roll, and heading angles. Similar to the delta position LMS solution above, DOP factors can be derived to relate errors in plane normal vectors to angular errors. However, unlike the range errors for the position case, the normal vector errors are generally correlated. Particularly, errors in components of the same normal vector are correlated, as can be inferred from (1) and (17). This correlation needs to be taken into account for the derivation of DOP factors for the attitude case. This derivation is outside the scope of this paper.

In order to use planar surfaces for the estimation of position and orientation changes as formulated in this section, it must be known with certainty that a plane in scan $i$ corresponds to the plane in scan $j$. For plane matching, INS data can be used to predict the plane range and normal vector in scan $j$ based on the range and normal vector extracted from scan $i$:

$$\hat{\rho}_j^{-} = \hat{\rho}_i - (\Delta\hat{\mathbf{R}}_{\mathrm{INS}}, \hat{\mathbf{n}}_i), \qquad \hat{\mathbf{n}}_j^{-} = \Delta\hat{\mathbf{C}}_{\mathrm{INS}}\,\hat{\mathbf{n}}_i \qquad (57)$$

where $\hat{\rho}_j^{-}$ and $\hat{\mathbf{n}}_j^{-}$ are the predicted range and normal vector; $\hat{\rho}_i$ and $\hat{\mathbf{n}}_i$ are the plane range and normal vector extracted from scan $i$; and $\Delta\hat{\mathbf{R}}_{\mathrm{INS}}$ and $\Delta\hat{\mathbf{C}}_{\mathrm{INS}}$ are the INS estimates of the position change vector and DCM increment between scans $i$ and $j$. If the predicted range and normal vector ($\hat{\rho}_j^{-}$ and $\hat{\mathbf{n}}_j^{-}$) match closely the range and normal vector extracted from scan $j$ ($\hat{\rho}_j$ and $\hat{\mathbf{n}}_j$), the plane correspondence is established between scans $i$ and $j$. Note that plane matching thresholds must accommodate both plane extraction errors and INS drift errors. As stated previously, implementation of the feature matching procedure that exploits inertial data will be addressed by future research. For the current realization of the 3D navigation solution, simulation and test scenarios are designed such that a direct plane correspondence can be used to match planes between different scans, i.e., a $k$th plane extracted from scan $i$ always corresponds to the $k$th plane extracted from scan $j$.
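A sketch of the linear refinement step (49)-(56) described above: given the initial DCM and the matched normal vectors from both scans, it stacks the residuals of (54), solves the LMS problem (55), and applies the correction (56). The stacking convention follows our reconstruction of (52)-(54) and should be treated as an assumption.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def expm_skew(w):
    """Matrix exponential of a skew-symmetric matrix (Rodrigues formula)."""
    ang = np.linalg.norm(w)
    if ang < 1e-12:
        return np.eye(3)
    K = skew(w / ang)
    return np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)

def refine_dcm(C0, normals_i, normals_j):
    """Linear LMS refinement of an initial DCM estimate, per (53)-(56).
    normals_i, normals_j: (M, 3) matched plane normals at scans i and j."""
    residuals = (normals_j @ C0.T) - normals_i            # rows: (C0 n_j) - n_i, cf. (54)
    H = np.vstack([-skew(n) for n in normals_i])          # rows: -(n_i x), cf. (54)
    d_omega, *_ = np.linalg.lstsq(H, residuals.reshape(-1), rcond=None)   # (55)
    return expm_skew(-d_omega) @ C0                       # correction (56)

# Example: a deliberately perturbed initial DCM refined with four planes.
axis = np.array([0.2, 0.5, 0.8]); axis /= np.linalg.norm(axis)
C_true = expm_skew(np.deg2rad(5.0) * axis)
normals_i = np.array([[1, 0, 0.1], [0, 1, 0.1], [0.7, 0.7, 0.2], [0.5, -0.5, 0.7]], float)
normals_i /= np.linalg.norm(normals_i, axis=1, keepdims=True)
normals_j = normals_i @ C_true                            # so that C_true @ n_j = n_i
C0 = expm_skew(np.deg2rad(1.0) * np.array([1.0, 0.0, 0.0])) @ C_true
C_hat = refine_dcm(C0, normals_i, normals_j)
residual_angle = np.arccos(np.clip((np.trace(C_hat @ C_true.T) - 1) / 2, -1.0, 1.0))
print(np.rad2deg(residual_angle))                         # small residual angle, deg
```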

VII. SIMULATION RESULTS

The plane-based navigation methodology is first verified using simulations. Three planar surfaces are simulated according to the geometry shown in Fig. 13. For this planar geometry, DOP factors for the position estimation are computed as 0.7, 3.1, and 10.6 for the xDOP, yDOP, and zDOP, respectively. 2D LADAR scans are simulated at 0 deg, 5 deg, and 10 deg elevation angles. The simulated LADAR measurements conform to the specifications of a SICK LMS-200 LADAR. The LADAR angular range is from 0 to 180 deg with an angular resolution of 0.5 deg. The LADAR distance range is from 0 to 80 m with a 1 cm ranging noise standard deviation. The translational motion trajectory is simulated as a constant velocity motion from the start to the end trajectory points. Simultaneous rotations of the LADAR about the x, y, and z axes of the LADAR body frame are simulated with rotation rates of 0.2 deg per 3D scan, 0.25 deg per 3D scan, and 0.3 deg per 3D scan, correspondingly. Note that one 3D scan corresponds to three consecutive 2D scans at 0 deg, 5 deg, and 10 deg elevation angles. Six motion components (three translational motion components and three rotational motion components) are thus implemented for the simulation test. Axes of the LADAR body frame coincide with the navigation frame axes (x, y, z axes in Fig. 8) at the initial time (start of the trajectory).

Fig. 13. Simulation scenario for 3D plane-based navigation: three planar surfaces simulated.

Delta position and delta orientation estimates are computed based on changes in plane parameters between the initial 3D scan and the current 3D scan. Thus, delta position and orientation estimates correspond to position and orientation changes between the start of the trajectory and the current trajectory point.

Fig. 14 shows errors in the delta position estimate. Delta position errors herein are computed as the differences between the delta position vector derived from LADAR measurements and the true delta position vector. Delta position errors are at the centimeter level with one sigma values estimated as 0.5 cm, 2.3 cm, and 8.4 cm for the x, y, and z delta position error components, accordingly. Note that the ratio of xDOP, yDOP, and zDOP values (0.7 : 3.1 : 10.6) closely reflects the ratio of delta position sigma values (0.5 : 2.3 : 8.4).

Fig. 14. 3D plane-based navigation: delta position errors of 3D navigation solution.

Fig. 15 shows errors in angular estimates. To compare the computed attitude with the reference attitude trajectory, the estimated DCM was transformed into Euler angles (pitch, roll, and heading) using a standard transformation routine described in [19]. Standard deviations of errors in pitch, roll, and heading estimates are computed as 0.07 deg, 0.1 deg, and 0.01 deg, respectively.

Fig. 15. 3D plane-based navigation: angular errors of 3D planar-based navigation solution.

To illustrate the influence of the plane geometry on the delta position accuracy, the tilt angle of one of the simulated planes is increased from 3 deg to 30 deg as shown in Fig. 16. As a result, the VDOP value is decreased from 10.6 to 2.3.

Fig. 16. 3D plane-based navigation (scenario 2): three planar surfaces simulated, tilt angle of one of the planes is increased to 30 deg to improve VDOP factor.

Fig. 17 shows the corresponding delta position error plots. Accordingly, the standard deviation of the z delta position error is decreased from 8.4 cm to 2.2 cm as a result of the VDOP decrease.

The simulation scenarios implemented above demonstrate that nonvertical planes are required to observe changes in the z position component. For applications such as autonomous operation of UAVs in urban environments, this requirement may not always be satisfied, particularly for those cases where planar surfaces are created by vertical walls of surrounding buildings. In these cases, the system can be augmented by a downward-looking scanning LADAR capable of extracting the horizontal planar surfaces created, for instance, by urban roads.
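The geometry effect can be reproduced qualitatively with the DOP expression (36). The plane azimuths and tilts below are our own guesses (the paper does not list them), so the numbers will not match the 0.7/3.1/10.6 and 2.3 figures; the point is only that zDOP drops sharply once one plane is strongly tilted.

```python
import numpy as np

def dop(normals):
    """xDOP, yDOP, zDOP from stacked unit plane normals, per (36)."""
    H = np.asarray(normals, float)
    return np.sqrt(np.diag(np.linalg.inv(H.T @ H)))

def normal(az_deg, tilt_deg):
    a, t = np.deg2rad(az_deg), np.deg2rad(tilt_deg)
    return [np.cos(a) * np.cos(t), np.sin(a) * np.cos(t), np.sin(t)]

# Scenario 1: all three planes nearly vertical (3 deg tilt).
planes_1 = [normal(0, 3), normal(60, 3), normal(120, 3)]
# Scenario 2: the tilt of one plane is increased to 30 deg.
planes_2 = [normal(0, 30), normal(60, 3), normal(120, 3)]

print(dop(planes_1))   # large zDOP: near-vertical normals barely observe z
print(dop(planes_2))   # zDOP drops once a strongly tilted plane is present
```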

Fig. 17. 3D plane-based navigation (scenario 2): errors in delta position solution.

Fig. 18. Photograph of test setup: setup includes scanning LADAR, rotating bracket, low-cost servo motor, and FPGA-based control and data collection board.

Fig. 19. Diagram of data collection setup: FPGA-based data collection system detects LADAR measurement message, controls servo angular position (elevation angle), includes current value of elevation angle into message, and sends updated message to PC where measurement messages are collected for postprocessing.

A. Test Setup

Fig. 18 shows a photograph of the test setup that is developed to demonstrate the feasibility of 3D trajectory estimation from LADAR measurements. A SICK LMS-200 2D scanning LADAR is mounted in a bracket that is capable of rotating around the x axis of the LADAR body frame. LADAR rotations are implemented using a low-cost Futaba digital servo motor. Scans are taken at elevation angles of 0 deg, 5 deg, and 10 deg. The servo control and LADAR data collection functions are implemented in a Xilinx Spartan 3 field programmable gate array (FPGA).

Fig. 19 shows the diagram of the data collection setup. The scanning LADAR outputs measurement messages where each message comprises measurements for a single scan. LADAR messages are detected by the message detection block of the FPGA-based control and data collection board. Once the LADAR message is detected, the board updates the servo elevation angle through the servo control block. One of the key requirements for the setup design is that the LADAR motion between different elevation angles does not interfere with the LADAR scans themselves. In other words, the servo motor must change the LADAR elevation angle between scans and not during the scans to avoid introducing distortions into the scan images. To satisfy this requirement, the FPGA-based control board sends the elevation angle change command to the servo motor immediately after the detection of the LADAR measurement message. Thus, the servo has enough time to change the LADAR elevation angle before the next scan is taken. A 2D scan repetition period of about 0.7 s is implemented for the test setup. This time interval is sufficient to change the LADAR elevation angle for the low-cost servo option used in the setup. The data collection block forms output messages where each message contains the scanning measurements of a single scan and the value of the scan's elevation angle that is provided by the servo control block. Output messages are stored into a binary file on a PC through the USB port.

Fig. 20. Segmentation of LADAR measurements from data file: first, output measurement messages are extracted from file; second, scan image and its corresponding elevation angle are decoded from each message; third, groups of three consecutive images (0 deg, 5 deg, 10 deg elevation angles) are formed.

To process experimental data, the data segmentation scheme is implemented as illustrated in Fig. 20. Measurement messages in the data file are identified by the message header and extracted from the file. The scan image and the scan's elevation angle are decoded from each message extracted. Scan images that correspond to three consecutive elevation angles (0 deg, 5 deg, and 10 deg) are formed into groups of 2D scans. Groups of three consecutive scan images are then processed to extract planar surfaces and estimate LADAR position and orientation changes as discussed previously in the paper.

from the file. The scan image and the scan's elevation angle are decoded from each message extracted. Scan images that correspond to three consecutive elevation angles (0 deg, 5 deg, and 10 deg) are formed into groups of 2D scans. Groups of three consecutive scan images are then processed to extract planar surfaces and estimate LADAR position and orientation changes as discussed previously in the paper.
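As a rough illustration of this segmentation step, the sketch below parses a logged binary file into (elevation angle, ranges) records and groups consecutive records into 3D scans. The record layout (header bytes, field order, units) is entirely hypothetical; the actual SICK telegram format and the FPGA message wrapper are not reproduced here.

import struct

# Hypothetical record layout for the post-processing file; the real SICK telegram
# and the FPGA message wrapper differ, so treat this purely as a sketch.
HEADER = b"\xA5\x5A"          # assumed message header used to locate records
NUM_BEAMS = 181               # assumed number of range returns per scan

def read_scans(path):
    """Yield (elevation_deg, ranges) tuples from the logged binary file."""
    with open(path, "rb") as f:
        data = f.read()
    offset = 0
    record_len = 2 + 4 + 2 * NUM_BEAMS   # header + float32 elevation + uint16 ranges
    while (idx := data.find(HEADER, offset)) != -1 and idx + record_len <= len(data):
        elev = struct.unpack_from("<f", data, idx + 2)[0]
        ranges = struct.unpack_from("<%dH" % NUM_BEAMS, data, idx + 6)
        yield elev, [r / 100.0 for r in ranges]   # assumed cm -> m scaling
        offset = idx + record_len

def group_into_3d_scans(scans, angles=(0.0, 5.0, 10.0)):
    """Group consecutive 2D scans at the three elevation angles into one 3D scan."""
    groups, current = [], []
    for elev, ranges in scans:
        current.append((elev, ranges))
        if len(current) == len(angles):
            groups.append(current)
            current = []
    return groups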
Fig. 21. Plane reconstruction based on LADAR data: photograph of live data test scenario 1.

Fig. 22. Reconstructed planes for test scenario 1: planes associated with walls in LADAR FOV are successfully reconstructed.

Fig. 23. Plane reconstruction: photograph of live data test scenario 2.

Fig. 24. Reconstructed planes for test scenario 2: two planes associated with wooden boards as well as two planes associated with office walls are successfully reconstructed; the third office wall is not visible to the LADAR as it is blocked by the wooden boards.

B. Test Results

This section uses results of live data tests to demonstrate the feasibility of the methods developed in this paper. Test scenarios are designed to demonstrate: 1) reconstruction of planar surfaces from LADAR data and 2) estimation of position and orientation changes of the LADAR based on the extracted planar surface parameters.
Fig. 21 shows a photograph of the test scene for the first live data test scenario used to demonstrate the plane reconstruction. The scans are taken in an indoor office environment. Note that the LADAR scan that corresponds to the zero elevation angle is a 2D horizontal scan.
Fig. 22 shows planes reconstructed from scan images. Three planes associated with the office walls in the LADAR FOV are successfully reconstructed from live LADAR data. Two wooden boards were added to the environment for the second test scenario as illustrated in Fig. 23. Plane reconstruction results are shown in Fig. 24, which demonstrates successful reconstruction of the planar surfaces visible to the LADAR.
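A generic way to reconstruct a planar surface from the body-frame points of one 3D scan is a total least-squares fit based on the SVD of the centered point cloud, as sketched below. This is a standard formulation given for illustration only and is not necessarily the plane parameter estimator developed earlier in the paper.

import numpy as np

def fit_plane(points):
    """Total least-squares plane fit (generic SVD approach, not the paper's estimator).

    points : (N, 3) array of 3D points believed to lie on one planar surface
    Returns (n, d) with unit normal n and distance d such that n . p = d for points p.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = float(n @ centroid)
    # Keep a consistent sign convention so that d is non-negative.
    if d < 0:
        n, d = -n, -d
    return n, d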
Feasibility of the planar-based navigation methods presented in this paper was initially verified with experimental data. As mentioned previously, this paper does not address the LADAR/INS integration aspects of the generic 3D navigation scheme shown in Fig. 2. Hence, the test scenario was designed such that INS-related procedures (motion compensation and feature matching) do not have to be applied. Particularly, the test was performed in the office environment shown in Fig. 22. In this case, a direct correspondence between planes observed in different scan images can be used for feature matching (i.e., the kth plane in one image always corresponds to the kth plane in another image). In addition, the test scenario implemented does not require compensation of LADAR motion during 3D scans, where a 3D scan is defined as three consecutive 2D scans at 0 deg, 5 deg, and 10 deg elevation angles. For this test scenario, stationary LADAR data were first collected for thirty 3D scans. A 3D LADAR motion was then applied: the LADAR was displaced simultaneously in the x, y, and z directions (0.2 m, 0.95 m, and -0.25 m displacements, respectively) with simultaneous roll and heading rotations (-15 deg and 45 deg, respectively). Following the LADAR motion, another thirty stationary scans were collected.
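For reference, one standard way to recover the rotation between two 3D scans from matched plane normals is to solve Wahba's problem [17], for instance with the SVD method sketched below; this generic solution may differ from the linear attitude routine developed in the paper.

import numpy as np

def rotation_from_normals(normals_before, normals_after):
    """Rotation that maps plane normals of one scan onto the matched normals of the next.

    Generic SVD solution of Wahba's problem; normals_* are (K, 3) arrays of matched
    unit normals (plane k in one scan corresponds to plane k in the other).
    """
    A = np.asarray(normals_after, dtype=float)
    B = np.asarray(normals_before, dtype=float)
    # Attitude profile matrix built from the matched normal pairs.
    M = A.T @ B
    U, _, Vt = np.linalg.svd(M)
    # Enforce a proper rotation (determinant +1).
    D = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ D @ Vt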

Fig. 25. Reconstruction of 3D translational motion: true trajectory versus motion trajectory estimated by 3D planar-based navigation that uses LADAR data.

Fig. 26. Reconstruction of 3D translational motion: errors in delta position estimates that are computed from LADAR data.

Fig. 27. Reconstruction of 3D angular motion: true attitude trajectory versus attitude estimated based on plane parameters extracted from LADAR data.

Fig. 28. Reconstruction of 3D angular motion: errors in angular estimates of 3D planar-based navigation that uses LADAR data.

Fig. 25 compares delta position estimates computed from LADAR data with the true delta position. Fig. 26 shows delta position errors. Position errors shown are at a centimeter level. Standard deviations of errors in the x, y, and z position components are computed as 3.5 cm, 0.7 cm, and 2.1 cm, respectively.
Fig. 27 compares estimated rotation angles with the true attitude trajectory. Similar to the simulation scenario, DCM estimates were converted into Euler angles using standard computations (see, for example, [19]).
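The conversion referred to here is the standard extraction of Euler angles from a DCM (see, for example, [19]). A minimal sketch, assuming a body-to-navigation DCM built with the usual heading-pitch-roll (z-y-x) rotation sequence, follows.

import numpy as np

def dcm_to_euler(C):
    """Extract roll, pitch, heading (radians) from a body-to-navigation DCM.

    Assumes C = Rz(heading) @ Ry(pitch) @ Rx(roll), so that C[2, 0] = -sin(pitch);
    valid away from the pitch = +/-90 deg singularity.
    """
    pitch = np.arcsin(-C[2, 0])
    roll = np.arctan2(C[2, 1], C[2, 2])
    heading = np.arctan2(C[1, 0], C[0, 0])
    return roll, pitch, heading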
Fig. 28 represents the angular error plots. Standard deviations of errors in the pitch, roll, and heading angles are estimated as 1.6 deg, 2.5 deg, and 1 deg, respectively. It is noted that attitude errors for the live data test are notably larger than attitude errors for the simulation test: the increase is from a subdegree level to a degree level. This error increase is mainly attributed to the deviation of the actual elevation angle of the low-cost servo motor from the commanded angular value that is used for plane computations (see (17)).
Overall, test results presented in this section prove the validity of the methodology developed to reconstruct translational motion in three dimensions at the centimeter accuracy level and to reconstruct rotational motion in three dimensions at the degree accuracy level. This accuracy level can be enhanced by improving the precision of the LADAR elevation rotations or by measuring these rotations precisely with the INS.

VIII. CONCLUSIONS

This paper investigates the use of scanning LADARs for autonomous navigation in three dimensions. The navigation solution is based on planar surfaces extracted from LADAR scan images. The paper develops a method for estimating plane parameters using images of a 2D scanning LADAR that is rotated in a limited elevation range (three different elevation angles are implemented). Changes in plane parameters between scans are applied to compute position and orientation changes. Least-squares linear position and attitude computation routines are presented. The use of DOP factors is introduced to formulate the influence of planar geometry on the navigation accuracy. Simulation results and test results presented demonstrate the feasibility of the 3D navigation methods developed.

ACKNOWLEDGMENTS

The authors would especially like to thank Jon Sjogren of the AFOSR for supporting this research.

REFERENCES

[1] Hostetler, L. D., and Andreas, R. D.
Nonlinear Kalman filtering techniques for terrain-aided navigation.
IEEE Transactions on Automatic Control, AC-28, 3 (Mar. 1983).
[2] Campbell, J. L., Uijt de Haag, M., and van Graas, F.
Terrain-referenced positioning using airborne laser scanner.
Navigation, Journal of the Institute of Navigation, 52, 4 (Winter 2005–2006), 189–197.
[3] Kim, J., and Sukkarieh, S.
Autonomous airborne navigation in unknown terrain environments.
IEEE Transactions on Aerospace and Electronic Systems, 40, 3 (July 2004), 1031–1045.
[4] Vadlamani, A. K., and Uijt de Haag, M.
Use of laser range scanners for precise navigation in unknown environments.
In Proceedings of the Institute of Navigation GNSS-2006, Sept. 2006, 1104–1114.
[5] Borges, G. A., and Aldon, M-J.
Optimal robot pose estimation using geometrical maps.
IEEE Transactions on Robotics and Automation, 18, 1 (Feb. 2002), 87–94.
[6] Pfister, S. T.
Algorithms for mobile robot localization and mapping incorporating detailed noise modeling and multi-scale feature extraction.
Ph.D. dissertation, California Institute of Technology, Pasadena, 2006.
[7] Bates, D.
Navigation using optical tracking of objects at unknown locations.
M.S. thesis, Ohio University, Athens, 2006.
[8] Soloviev, A., Bates, D., and van Graas, F.
Tight coupling of laser scanner and inertial measurements for a fully autonomous relative navigation solution.
Navigation, Journal of the Institute of Navigation, 54, 3 (Fall 2007), 189–205.
[9] Horn, J. P.
Bahnführung eines mobilen Roboters mittels absoluter Lagebestimmung durch Fusion von Entfernungsbild- und Koppelnavigationsdaten (Guidance of a mobile robot by means of absolute pose determination through fusion of range image and dead-reckoning data).
Ph.D. dissertation, Technical University of Munich, Germany, 1997.
[10] Uijt de Haag, M., Venable, D., and Smearcheck, M.
Integration of an inertial measurement unit and 3D imaging sensor for urban and indoor navigation of unmanned vehicles.
In Proceedings of the Institute of Navigation National Technical Meeting 2007, Jan. 2007, 829–840.
[11] Thrun, S., Martin, C., Liu, Y., Hahnel, D., Emery-Montemerlo, R., Chakrabarti, D., and Burgard, W.
A real-time expectation maximization algorithm for acquiring multi-planar maps of indoor environments with mobile robots.
IEEE Transactions on Robotics and Automation, 20, 3 (June 2004), 433–442.
[12] Nguyen, V., Martinelli, A., Tomatis, N., and Siegwart, R.
A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics.
In Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS), 2005.
[13] Borges, G. A., and Aldon, M-J.
Line extraction in 2D range images for mobile robotics.
Journal of Intelligent and Robotic Systems, 40, 3 (2004), 267–297.
[14] Bates, D., and van Graas, F.
Covariance analysis considering the propagation of laser scanning errors for use in LADAR navigation.
In Proceedings of the Institute of Navigation Annual Meeting, Apr. 2007, 624–634.
[15] Kaplan, E., and Hegarty, C. (Eds.)
Understanding GPS: Principles and Applications (2nd ed.).
Norwood, MA: Artech House, 2006.
[16] Mostov, K., Soloviev, A., and Koo, T. J.
Initial attitude determination and correction of gyro-free INS angular orientation on the basis of GPS linear navigation parameters.
In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Nov. 1997, 1034–1039.
[17] Wahba, G.
A least squares estimate of satellite attitude.
SIAM Review, 7, 3 (July 1965), 409.
[18] Psiaki, M. L.
Attitude-determination filtering via extended quaternion estimation.
Journal of Guidance, Control, and Dynamics, 23, 2 (Mar.–Apr. 2000), 206–214.
[19] Titterton, D. H., and Weston, J. L.
Strapdown Inertial Navigation Technology (2nd ed.).
Reston, VA, and Stevenage, UK: The American Institute of Aeronautics and Astronautics, and The Institution of Electrical Engineers, 2004.

Andrey Soloviev is a research assistant professor at the University of Florida.
Previously he served as a senior research engineer at the Ohio University
Avionics Engineering Center. He received his Master of Science in applied
mathematics and physics from Moscow University of Physics and Technology
in 1997 and a Ph.D. in electrical engineering from Ohio University in 2002. His
research interests focus on all aspects of multi-sensor integration for navigation
including integrated processing of GPS, inertial, laser radar and imagery signals.
Dr. Soloviev received the RTCA William E. Jackson Award in 2002 and the
Institute of Navigation Early Achievement Award in 2006.

Maarten Uijt de Haag is an Associate Professor of Electrical Engineering at Ohio University and a principal investigator with the Ohio University Avionics
Engineering Center. He received his M.S.E.E. degree from Delft University
of Technology in 1994 and his Ph.D. in electrical engineering from Ohio
University in 1999. He has been involved with laser-based navigation systems,
GPS landing systems, advanced signal processing techniques for GPS receivers,
GPS/INS integrated systems, terrain-referenced navigation systems, and intelligent
integrated flight deck technologies.
Dr. Uijt de Haag was the recipient of the 2007 Institute of Navigation
Thurlow Award.
