Three-Dimensional Navigation With Scanning Ladars: Concept & Initial Verification
I. INTRODUCTION
14 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 46, NO. 1 JANUARY 2010
Authorized licensed use limited to: IEEE Xplore. Downloaded on May 13,20 at 1:530 UTC from IEEE Xplore. Restrictions apply.
Fig. 1. Sensor configurations for feature-based navigation in urban environment. (a) 2D LADAR configuration. (b) 3D imaging
LADAR configuration.
Due to the lack of terrain during the UAV navigation in an urban environment, the LADAR-based methods must exploit features such as surfaces, corners, points, etc. For the feature-based navigation, changes in position and orientation are estimated from changes in the parameters of features that are extracted from LADAR images. Two-dimensional (2D) laser scanners and feature-based localization methods have been used extensively to enable navigation of robots in an indoor environment. For example, [5] describes a method to estimate the translation and rotation of a robot platform from a set of extracted lines and points using a 2D sensor. Reference [6] discusses the feature extraction and localization aspects of mobile robots and addresses the statistical aspects of these methods, whereas [7] introduces improved environment-dependent error models and establishes relationships between the position and heading uncertainty and the laser observations, thus enabling a statistical assessment of the quality of the estimates. In [8], 2D scanning LADAR measurements are tightly integrated with inertial measurement unit (IMU) measurements to estimate the relative position of a van in an urban environment. The idea of using 3D measurements and planar surfaces for 2D localization is introduced in [9]. Note that the above applications focus on 2D navigation (two position coordinates and a platform heading angle). However, for applications such as autonomous UAVs, a 3D navigation solution is required, especially for those cases where the platform attitude varies in pitch, roll, and yaw directions. To enable 3D navigation, the utilization of the laser range scanner measurements must somehow be expanded for estimation of 3D position and attitude. The use of 3D features from 3D flash LADAR imagery was introduced in [10]. Existing flash LADARs, however, have a limited measurement range and a limited FOV: 8 m and 45 deg, typically [10]. This limits the feature availability for navigation in urban environments. Scanning LADARs have a significantly larger measurement range (80 m, typical) and a FOV of up to 360 deg. Hence, this paper extends the 3D navigation methodology presented in [10] for the case of 2D scanning LADARs.

This paper develops a methodology for using measurements of a 2D scanning LADAR for 3D navigation for the urban part of the UAV mission. Navigation herein is performed in completely unknown environments. No map information is assumed to be available a priori. Fully autonomous 3D relative positioning and 3D relative attitude determination are considered. The navigation solution is computed in a local coordinate frame that is defined by the LADAR position and orientation at the initial scan. A relative navigation solution is thus provided. Estimating local frame position and orientation in one of the commonly used navigation frames (e.g., East-North-Up and Earth-Centered Earth-Fixed frames) allows for the transformation of the relative navigation solution into an absolute navigation solution.

The remainder of the paper is organized as follows. Key aspects of the 3D LADAR-based navigation are first summarized. The 3D navigation approach proposed uses planar surfaces as the basis navigation feature. LADAR imaging technologies are then discussed. Next, a method for extracting planar surfaces from LADAR images is developed. The paper then discusses algorithms for computing the relative 3D position and orientation solution based on parameters of planar surfaces that are extracted from scan images. Simulation results and live data test results are used to initially demonstrate the feasibility of the 3D plane-based navigation developed. The paper is concluded by summarizing the main results achieved.
Fig. 3. Generic routine of 3D navigation that uses images of
scanning LADAR.
Fig. 2. Examples of planar surfaces observed in urban images:
multiple planes can be extracted for indoor and outdoor image
examples.
II. 3D LADAR-BASED NAVIGATION

This paper exploits planar surfaces (planes) as the basis feature for the 3D navigation solution. The rationale for the use of planes for navigation in 3D urban environments is that planes are common in man-made environments. To exemplify, Fig. 2 shows typical urban indoor (hallway) and outdoor (urban canyon) images. Multiple planes can be extracted from both images as illustrated in Fig. 2. Since changes in image feature parameters between two different scans are used for navigation, this feature must be observed in both scans. Feature repeatability is thus essential for the LADAR-based navigation. Planar surfaces satisfy this requirement as they are highly repeatable from scan to scan. If a wall of a building stays in the LADAR measurement range, then the plane associated with that wall repeats in the scan images.

Fig. 3 illustrates a generic navigation routine that exploits planar surfaces to derive the navigation solution. A 3D scan image of the environment is obtained by a scanning LADAR. Planes are extracted from LADAR images and used to estimate the navigation solution that is comprised of changes in LADAR position and orientation between scans. In order to use a planar surface for the estimation of position and orientation changes from one scan to the next, this planar surface must be observed in both scans and it must be known with certainty that a plane in one scan corresponds to the plane in the next scan. Hence, the feature matching procedure establishes a correspondence between planes extracted from the current scan and planes extracted from previous scans. The navigation routine stores planes extracted from previous scans into the plane list. The plane list is initially populated at the initial scan. If a new plane is observed during one of the following scans, the plane list is updated to include this new plane. In [8], INS data are exploited to match lines extracted from 2D LADAR images for a 2D navigation case. In order to use INS data for plane matching, the line matching algorithms developed in [8] must be extended for a 3D case. Hence, the feature matching procedure has to use position and orientation outputs of the INS to predict plane location and orientation in the current scan based on plane parameters observed in previous scans. If predicted plane parameters match closely to the parameters of the plane extracted from the current scan, a match is declared and a matched plane is used for navigation computations. Note that INS data can be also applied to compensate for LADAR motion during scans for those cases where such motion can introduce significant distortions to LADAR scan images. Following feature matching, changes in parameters of the planes that are matched between different scans are exploited to estimate the navigation solution. Changes in plane parameters are also applied to periodically recalibrate the INS to reduce drift terms in inertial navigation outputs in order to improve the quality of the INS-based plane prediction used by the feature matching procedure.

This paper focuses on the key aspects of the planar-based navigation that are related to LADAR data processing only. Development of LADAR/INS integrated components will be addressed by future research. Accordingly, two key questions that are addressed by the remainder of the paper are: 1) how to extract planes from LADAR scan images, and 2) how to use parameters of extracted planes to compute the navigation solution. To address these questions, LADAR imaging technologies are discussed first. A method for extracting plane parameters from LADAR measurements is then developed. The use of plane parameters for the estimation of relative position and orientation is finally described.

Aspects of the 3D navigation routine that are related to the LADAR/INS integration will be considered by future development. Particularly, future development will address the use of INS data for feature matching, INS-based compensation of distortions in scan images created by LADAR motion during scans, and LADAR-based INS calibration.
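The plane-list bookkeeping of the generic routine (Fig. 3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: planes are represented as plain (alpha, theta, rho) tuples, the helper names are invented here, and matching compares raw parameters against fixed thresholds rather than INS-predicted ones.

```python
# Sketch of plane-list maintenance for one scan of the generic routine
# (Fig. 3). Representation and thresholds are illustrative assumptions.

def match_plane(pred, obs, d_alpha=0.01, d_theta=0.01, d_rho=0.05):
    """Declare a match if every parameter is within its threshold."""
    return all(abs(p - o) < t
               for p, o, t in zip(pred, obs, (d_alpha, d_theta, d_rho)))

def update_plane_list(plane_list, extracted):
    """Return (matched pairs, updated plane list) for one scan."""
    matched = []
    for obs in extracted:
        hit = next((p for p in plane_list if match_plane(p, obs)), None)
        if hit is not None:
            matched.append((hit, obs))   # used for navigation computations
        else:
            plane_list.append(obs)       # new plane observed: list is updated
    return matched, plane_list
```

The matched pairs would then feed the position and orientation estimation, while unmatched planes extend the list for later scans.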
III. 3D IMAGING TECHNOLOGIES
Fig. 6. First elevated scan: a second intersect line is obtained for each planar surface in the LADAR FOV; fictitious planes can still exist since a plane can be fit through two lines that belong to different real planes.
Fig. 8. LADAR body frame: the x_b and y_b axes lie in the scanning plane (the x_b axis is in the direction of the zero scanning angle, the y_b axis is in the direction of the 90 deg scanning angle); the z_b axis is perpendicular to the scanning plane.

Fig. 10. Intersections of a planar surface with the nonelevated and elevated LADAR scanning planes; for the elevated scan, the LADAR is rotated about its x_b axis on angle φ.

where x_b, y_b, and z_b are the Cartesian coordinates of any point that belongs to the plane; these coordinates are expressed in the LADAR body frame.

Fig. 10 shows lines of intersection of the LADAR scanning plane with the planar surface being reconstructed for the cases of the zero elevation scan and the elevated scan. Using a polar line representation [13], the intersect line for the zero elevation scan is expressed as follows:

$$x_b \cos(\hat{\alpha}_{line1}) + y_b \sin(\hat{\alpha}_{line1}) = \hat{\rho}_{line1}. \quad (3)$$

In (3), $\hat{\alpha}_{line1}$ is the line angle and $\hat{\rho}_{line1}$ is the line range. Note that $\hat{\alpha}_{line1}$ and $\hat{\rho}_{line1}$ are estimated by a line extraction procedure (e.g., by an iterative split and merge procedure [7]) that is applied to LADAR scan data for the zero elevation scan. The intersect line should also satisfy the equation of intersection of the planar surface with the LADAR scanning plane (a horizontal plane z_b = 0, in this case). A corresponding equation of the intersect line is given below:

$$x_b \cos(\alpha)\cos(\theta) + y_b \sin(\alpha)\cos(\theta) + z_b \sin(\theta) = \rho, \qquad z_b = 0 \quad (4)$$

or

$$x_b \cos(\alpha)\cos(\theta) + y_b \sin(\alpha)\cos(\theta) = \rho. \quad (5)$$

For the elevated scan, coordinates in the nonelevated body frame are related to coordinates in the elevated frame as follows:

$$x_b = x_{b'}, \qquad y_b = \cos(\varphi)\, y_{b'} - \sin(\varphi)\, z_{b'}, \qquad z_b = \sin(\varphi)\, y_{b'} + \cos(\varphi)\, z_{b'}. \quad (8)$$

Substitution of the coordinate transformation (8) into the plane equation (2) provides the plane equation expressed in the elevated LADAR frame:

$$x_{b'} \cos(\alpha)\cos(\theta) + y_{b'}\left(\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)\right) + z_{b'}\left(\sin(\theta)\cos(\varphi) - \sin(\alpha)\cos(\theta)\sin(\varphi)\right) = \rho. \quad (9)$$

This plane intersects with the LADAR scanning plane at $z_{b'} = 0$. The intersect line equation is thus expressed as follows:

$$x_{b'} \cos(\alpha)\cos(\theta) + y_{b'}\left(\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)\right) = \rho. \quad (10)$$
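The forward relations (5) and (10) can be illustrated with a short sketch that converts a plane (α, θ, ρ) into the polar parameters (angle, range) of its intersect line for a given elevation angle φ. The normalization of a·x + b·y = ρ into polar line form is the standard one; the function name is invented here.

```python
import math

# Sketch: polar parameters of the line in which a plane (alpha, theta,
# rho) intersects the scanning plane, per (5) (phi = 0) and (10).

def intersect_line(alpha, theta, rho, phi=0.0):
    a = math.cos(alpha) * math.cos(theta)
    if phi == 0.0:                      # equation (5): zero elevation scan
        b = math.sin(alpha) * math.cos(theta)
    else:                               # equation (10): elevated scan
        b = (math.sin(alpha) * math.cos(theta) * math.cos(phi)
             + math.sin(theta) * math.sin(phi))
    norm = math.hypot(a, b)             # rescale a*x + b*y = rho to polar form
    return math.atan2(b, a), rho / norm  # (line angle, line range)
```

For the zero elevation scan this reproduces the relation used later in the derivation: the line angle equals α and the line range equals ρ/cos(θ).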
This line can be also expressed using line parameters estimated from the elevated scan image. The expression is similar to (9) above:

$$x_{b'} \cos(\hat{\alpha}_{line2}) + y_{b'} \sin(\hat{\alpha}_{line2}) = \hat{\rho}_{line2}. \quad (11)$$

Equations (10) and (11) formulate the intersect line equation using plane parameters and parameters of the line (range and angle) determined from LADAR data, correspondingly. Evaluating (10) and (11) at $x_{b'} = 0$ yields

$$x_{b'} = 0 \;\Rightarrow\; y_{b'} = \frac{\rho}{\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)}, \qquad x_{b'} = 0 \;\Rightarrow\; y_{b'} = \frac{\hat{\rho}_{line2}}{\cos(\hat{\alpha}_{line2})}. \quad (12)$$

From (12) it follows

$$\frac{\rho}{\sin(\alpha)\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{line2}}{\cos(\hat{\alpha}_{line2})}. \quad (13)$$

Substitution of (7) into (13) provides the following expression:

$$\frac{\hat{\rho}_{line1}\cos(\theta)}{\sin(\hat{\alpha}_{line1})\cos(\theta)\cos(\varphi) + \sin(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{line2}}{\cos(\hat{\alpha}_{line2})} \quad (14)$$

or

$$\frac{\hat{\rho}_{line1}}{\sin(\hat{\alpha}_{line1})\cos(\varphi) + \mathrm{tg}(\theta)\sin(\varphi)} = \frac{\hat{\rho}_{line2}}{\cos(\hat{\alpha}_{line2})}. \quad (15)$$

The plane tilt angle is thus related to the estimates of intersect line parameters obtained from the zero elevation scan and the elevated scan:

$$\mathrm{tg}(\theta) = \frac{\hat{\rho}_{line1}\cos(\hat{\alpha}_{line2}) - \hat{\rho}_{line2}\sin(\hat{\alpha}_{line1})\cos(\varphi)}{\hat{\rho}_{line2}\sin(\varphi)}. \quad (16)$$

A combined use of (7) and (16) completes the formulation of the planar surface:

$$\hat{\alpha} = \hat{\alpha}_{line1}, \qquad \hat{\theta} = \mathrm{arctg}\left(\frac{\hat{\rho}_{line1}\cos(\hat{\alpha}_{line2}) - \hat{\rho}_{line2}\sin(\hat{\alpha}_{line1})\cos(\varphi)}{\hat{\rho}_{line2}\sin(\varphi)}\right), \qquad \hat{\rho} = \hat{\rho}_{line1}\cos(\hat{\theta}). \quad (17)$$

As mentioned previously, a second elevated scan is applied to remove fictitious planes. Fictitious planes can be created by fitting a plane through two lines that belong to different real planes (see Fig. 6). To remove fictitious planes, (17) is applied to compute estimates of plane parameters for the zero and first elevated scans ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$), and the zero and second elevated scans ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Differences in plane parameter estimates are then compared with predetermined threshold values (δα, δθ, δρ). The plane is extracted if the differences between estimates are below the thresholds, i.e., if the following conditions are satisfied:

$$|\hat{\alpha}_1 - \hat{\alpha}_2| < \delta\alpha, \qquad |\hat{\theta}_1 - \hat{\theta}_2| < \delta\theta, \qquad |\hat{\rho}_1 - \hat{\rho}_2| < \delta\rho. \quad (18)$$

Otherwise, the plane extraction is not declared and the plane is removed from consideration. Removal of fictitious planes completes the plane reconstruction procedure.

The threshold values in (18) (δα, δθ, δρ) are currently predetermined based on specifications of LADAR measurement errors. Particularly, the following threshold values are used:

$$\delta\alpha = 3\Delta\alpha, \qquad \delta\theta = 3\Delta\alpha, \qquad \delta\rho = 3\sigma_\rho \quad (19)$$

where Δα and σ_ρ are the LADAR angular resolution and the standard deviation of the ranging measurement noise, accordingly. The use of predetermined thresholds can be modified into an adaptive threshold choice by evaluating the real quality of lines extracted from scan images and then transforming line extraction errors into plane errors through the plane estimation equation (17). Particularly, the approach proposed in [14] exploits the actual line noise samples, comprised of LADAR measurement errors and a texture of a scanned surface, to estimate sigma values of line extraction errors. Hence, this approach evaluates the actual line quality and characterizes it by one-sigma values of errors in line parameter estimates (range and angle). For an adaptive choice of plane extraction thresholds, line errors must first be transformed into plane parameter errors for ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$) and ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Adaptive extraction thresholds then need to accommodate combined errors in ($\hat{\alpha}_1$, $\hat{\theta}_1$, $\hat{\rho}_1$) and ($\hat{\alpha}_2$, $\hat{\theta}_2$, $\hat{\rho}_2$). Aspects of the adaptive threshold choice will be addressed by future research.

The plane extraction procedure can separate planes only if differences between plane parameters exceed the threshold values in (19). Thus, planar surfaces with closely located normal points (see Fig. 9 for the illustration of the plane normal point) are merged into a single plane. The procedure does not separate those planes that are nearly coplanar (i.e., differences in angular plane parameters are
below the threshold) and are nearly at the same distance from the origin of the LADAR body frame (i.e., differences in plane ranges are below the threshold). This feature can limit the use of the plane extraction method proposed herein for mapping applications. However, from the navigation perspective, separation of planar surfaces that are nearly coplanar does not have a considerable influence on the observability of navigation states (position and attitude). Particularly, dilution of precision (DOP) values that characterize the influence of planar geometry on the navigation solution accuracy (see the section on the use of plane parameters for computing the navigation solution for the definition of DOP) stay practically unchanged if nearly coplanar surfaces are used separately for computing the navigation solution. Thus, this paper does not address this separation.

It must be also noted that a limited elevation scan range is used for the plane extraction method presented in this section. As a result, the method has limited application in 3D mapping, i.e., map building. For navigation applications, the plane extraction method described herein allows for a complete reconstruction of planar surfaces that are then used to compute a 3D navigation solution.

$$|\langle V_{LADAR}\rangle| \cdot \Delta t_{scan} < \sigma_\rho, \qquad |\langle\omega_{LADAR}\rangle| \cdot \Delta t_{scan} < \Delta\alpha \quad (20)$$

where $|\langle V_{LADAR}\rangle|$ is the absolute value of the average LADAR velocity during the scanning interval, $|\langle\omega_{LADAR}\rangle|$ is the absolute value of the average LADAR rotation rate during the scan, and $\Delta t_{scan}$ is the scan duration. Applying the range and angular error specification of the SICK LMS-200 LADAR in (20) shows that the LADAR velocity must not exceed 1.5 m/s and the angular rate must not exceed 77 deg/s in order to avoid motion-related distortions of scan images. While the angular motion generally stays below this angular rate threshold for most UAV applications, the velocity threshold can be exceeded for at least some of the UAV flight scenarios. For these scenarios, LADAR range measurements can be adjusted using INS estimates of position changes as described in [7].

LADAR scans at different elevation angles are separated by a finite time interval. If the LADAR motion between scans exceeds the ranging noise and angular resolution, this motion must be taken into account. The maximum allowable translational motion and rotational motion between scans are computed as follows:

$$|\langle V_{LADAR}\rangle| \cdot \Delta T_{scans} < \sigma_\rho, \qquad |\langle\omega_{LADAR}\rangle| \cdot \Delta T_{scans} < \Delta\alpha. \quad (21)$$
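The plane reconstruction of (17) and the fictitious-plane test of (18)-(19) can be sketched as follows. The function names are invented for illustration, and the sketch assumes the paper's conventions for the line parameters (α̂_line1, ρ̂_line1 from the zero elevation scan and α̂_line2, ρ̂_line2 from a scan at elevation φ); it is not the authors' implementation.

```python
import math

# Sketch of plane reconstruction (17) and the threshold test (18)-(19).

def plane_from_lines(alpha_l1, rho_l1, alpha_l2, rho_l2, phi):
    """Equation (17): plane (alpha, theta, rho) from two intersect lines."""
    tan_theta = ((rho_l1 * math.cos(alpha_l2)
                  - rho_l2 * math.sin(alpha_l1) * math.cos(phi))
                 / (rho_l2 * math.sin(phi)))
    theta = math.atan(tan_theta)
    return alpha_l1, theta, rho_l1 * math.cos(theta)

def is_real_plane(est1, est2, d_angle, sigma_rho):
    """Equations (18)-(19): accept the plane if the two elevated scans
    give consistent estimates (thresholds 3*resolution, 3*sigma)."""
    d_alpha, d_theta, d_rho = 3 * d_angle, 3 * d_angle, 3 * sigma_rho
    return (abs(est1[0] - est2[0]) < d_alpha
            and abs(est1[1] - est2[1]) < d_theta
            and abs(est1[2] - est2[2]) < d_rho)
```

A fictitious plane, fit through lines of two different real planes, generally fails the consistency test for the second elevated scan and is removed.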
follows:

$$x_{b'}\left(C_{11}\cos\alpha\cos\theta + C_{21}\sin\alpha\cos\theta + C_{31}\sin\theta\right) + y_{b'}\left(C_{12}\cos\alpha\cos\theta + C_{22}\sin\alpha\cos\theta + C_{32}\sin\theta\right) + z_{b'}\left(C_{13}\cos\alpha\cos\theta + C_{23}\sin\alpha\cos\theta + C_{33}\sin\theta\right) + \Delta x_{INS}\cos\alpha\cos\theta + \Delta y_{INS}\sin\alpha\cos\theta + \Delta z_{INS}\sin\theta = \rho \quad (23)$$

where $C_{k,j}$, $k = 1,\ldots,3$, $j = 1,\ldots,3$ are the elements of the INS DCM $C_{INS}$. Accordingly, (10), which expresses the line extracted from the tilted scan image (i.e., for $z_{b'} = 0$), is modified as follows:

$$x_{b'}\left(C_{11}\cos\alpha\cos\theta + C_{21}\sin\alpha\cos\theta + C_{31}\sin\theta\right) + y_{b'}\left(C_{12}\cos\alpha\cos\theta + C_{22}\sin\alpha\cos\theta + C_{32}\sin\theta\right) + \Delta x_{INS}\cos\alpha\cos\theta + \Delta y_{INS}\sin\alpha\cos\theta + \Delta z_{INS}\sin\theta = \rho. \quad (24)$$

Similar to the derivation of (12) through (17), the modified plane extraction procedure was derived from (7), (24), and (11):

$$\hat{\alpha} = \hat{\alpha}_{line1}, \qquad \hat{\theta} = \mathrm{arctg}\left(\frac{1}{C_{32}}\cdot\frac{\hat{\rho}_{line1}\sin(\hat{\alpha}_{line2}) - \hat{\rho}_{line2}\left(C_{12}\cos(\hat{\alpha}_{line1}) + C_{22}\sin(\hat{\alpha}_{line1})\right) - \left(\Delta x_{INS}\cos(\hat{\alpha}_{line1}) + \Delta y_{INS}\sin(\hat{\alpha}_{line1})\right)\sin(\hat{\alpha}_{line2})}{\hat{\rho}_{line2} + \Delta z_{INS}\sin(\hat{\alpha}_{line2})}\right), \qquad \hat{\rho} = \hat{\rho}_{line1}\cos(\hat{\theta}). \quad (25)$$

A modified plane extraction procedure, which uses inertial data ($C_{INS}$, $\Delta x_{INS}$, $\Delta y_{INS}$, and $\Delta z_{INS}$) for the LADAR motion compensation as formulated by (25), will be implemented by future development that will consider LADAR/INS integration aspects of the planar-based navigation.

VI. USE OF PLANE PARAMETERS FOR COMPUTING NAVIGATION SOLUTION

Fig. 11. Use of plane normal points for navigation: changes in perceived location of the normal point between scan i and scan j are applied to estimate position changes.

Plane-based navigation is performed using changes in perceived locations of plane normal points between scans. As stated previously, a plane normal point is defined as the intersection of the plane and a line originating from the LADAR location perpendicular to the plane of interest. Fig. 11 illustrates the geometry involved in navigating off the plane normal points. In Fig. 11, ΔR is the delta position vector (displacement vector between scans i and j, in this case); n_i is the plane normal vector whose components are resolved in the LADAR body frame at scan i; n_j is the plane normal vector whose components are resolved in the LADAR body frame at scan j; and ρ_i and ρ_j are the shortest distances from the LADAR to the plane at scans i and j, accordingly. Note that the plane normal vector at scan i is parallel to the plane normal vector at scan j since stationary planes are assumed. Components of the plane normal vector in the LADAR body frame at scan i generally differ from components of the normal vector in the body frame at scan j since the body frame can rotate between scans i and j. Note that a single scan herein is a 3D scan that includes three consecutive 2D scans at different elevation angles (zero elevation scan and two elevated scans). From the geometry presented in Fig. 11, a projection of the LADAR displacement between scans i and j on the plane normal vector is related to the change in the normal point range between scans i and j:

$$(\Delta R, n_i) = \rho_i - \rho_j \quad (26)$$

where ( , ) is the vector inner product.

The proposed plane-based navigation algorithm utilizes (26) to estimate the delta position vector. A minimum of three noncollinear planar surfaces is required to estimate the three delta position components. For measurements to multiple planar surfaces, (26) can be expanded into the following
matrix form:

$$H \cdot \Delta R = \Delta\rho \quad (27)$$

where

$$H = \begin{bmatrix} n_{i,1}^T \\ \cdots \\ n_{i,m}^T \\ \cdots \\ n_{i,M}^T \end{bmatrix}, \qquad \Delta\rho = \begin{bmatrix} \rho_{i,1} - \rho_{j,1} \\ \cdots \\ \rho_{i,m} - \rho_{j,m} \\ \cdots \\ \rho_{i,M} - \rho_{j,M} \end{bmatrix}. \quad (28)$$

In (28), superscript T denotes the matrix transpose, m is the plane index, and M is the total number of planes used to estimate delta position. A standard least mean square (LMS) formulation is applied to estimate the delta position vector:

$$\Delta\hat{R} = (H^T H)^{-1} H^T \cdot \Delta\hat{\rho}. \quad (29)$$

In (29), $\Delta\hat{\rho}$ is the estimated delta range vector, which contains differences in estimates of plane ranges computed based on LADAR data (see (17)) for scans i and j. Note that (29) uses a nonweighted LMS formulation. This formulation can be modified into a weighted LMS procedure if standard deviations of plane range changes are evaluated. Modification of the nonweighted LMS above into a weighted LMS solution is considered as a topic for future research.

The LMS position accuracy depends on the relative plane geometry, which is determined by the LMS measurement matrix (the H matrix). This paper uses DOP factors to characterize the geometry influence on the relationship between the localization accuracy and the planar range accuracies. DOP factors for the LMS solution defined by (29) are formulated in this section. The next section uses simulation results to illustrate the influence of relative planar geometry on the delta positioning accuracy.

The DOP-based approach is adopted from GPS [15], where DOPs are employed to characterize the influence of satellite geometry on the positioning accuracy. Generally speaking, the use of DOP factors for plane-based localization allows evaluation of the localization accuracy for a given plane geometry and accuracy of the plane range estimates. More specifically, the DOP is defined as a geometry-dependent linear coefficient that relates a standard deviation of the delta position estimation error to a standard deviation of the delta range error. For instance, a vertical DOP (VDOP) relates a standard deviation of the vertical delta position error ($\sigma_{\Delta R_V}$) with a standard deviation of error in plane range changes ($\sigma_{\Delta\rho}$):

$$\sigma_{\Delta R_V} = \mathrm{VDOP} \cdot \sigma_{\Delta\rho}. \quad (30)$$

DOP factors can be formulated for the plane-based navigation similarly to the GPS DOP formulation (see [15] for the corresponding formulation of GPS DOPs). From (29) it follows that the relationship between the position error vector ($\delta(\Delta R)$) and the range change error vector ($\delta(\Delta\rho)$) is given by

$$\delta(\Delta R) = (H^T H)^{-1} H^T \cdot \delta(\Delta\rho). \quad (31)$$

The variance of $\delta(\Delta R)$ is derived from (31) and yields the following variance relation:

$$\mathrm{VAR}_{\delta(\Delta R)} = (H^T H)^{-1} H^T \cdot \mathrm{VAR}_{\delta(\Delta\rho)} \cdot H (H^T H)^{-1} \quad (32)$$

where

$$\mathrm{VAR}_x = E[x \cdot x^T] \quad (33)$$

and E[·] is the expected value. If the components of $\delta(\Delta\rho)$ are assumed to be independent and identically distributed (IID), then

$$\mathrm{VAR}_{\delta(\Delta\rho)} = \sigma_{\Delta\rho}^2 \cdot I \quad (34)$$

where I is a unit matrix and $\sigma_{\Delta\rho}$ is the standard deviation of the delta range error. Substitution of (34) into (32) yields

$$\mathrm{VAR}_{\delta(\Delta R)} = (H^T H)^{-1} \cdot \sigma_{\Delta\rho}^2. \quad (35)$$

DOP factors are thus formulated as follows:

$$D = \sqrt{(H^T H)^{-1}} \quad (36)$$

where xDOP = [D]_11, yDOP = [D]_22, and zDOP = [D]_33, correspondingly. As mentioned previously, (36) is derived assuming that range errors for different planar surfaces are identically distributed and uncorrelated with each other. The noncorrelation assumption is generally valid since different planes are computed from different LADAR measurements that are normally uncorrelated, and the computation of plane parameters for different planes is completely separate. However, range errors associated with ranges to different planes can have different standard deviation values. In this case, the unweighted LMS estimation (see (29)) must be modified to a weighted LMS solution procedure. DOP formulation for a weighted LMS solution is recommended as a topic for future research.

The attitude of the LADAR platform is estimated using plane normal vectors. Equation (37) relates a change in the LADAR orientation between scans i and j to the change in the orientation of a plane normal vector:

$$C_{ij} \cdot n_j = n_i \quad (37)$$

where $C_{ij}$ is the DCM that corresponds to the coordinate transformation from the LADAR body frame at scan j (j-scan frame) to the LADAR body frame at scan i (i-scan frame). To compute the DCM, the attitude estimation algorithm needs to solve Wahba's problem [17]: given a first set of normal vectors with vector components resolved at the i-scan's frame and the second set of the same vectors with their components resolved at the j-scan's
frame, find the DCM that brings the second set into the best least squares correspondence with the first. At least two noncollinear normal vectors are required for the attitude estimation. Attitude is generally estimated by solving an eigenvalue/eigenvector problem, which requires the solving of nonlinear equations. For instance, the quaternion estimation algorithm (QUEST) finds the optimal LMS quaternion by computing eigenvalues and eigenvectors of a four-by-four Hessian matrix [18]. In this case, a fourth-order equation has to be solved in order to compute eigenvalues. This paper implements a two-step attitude estimation approach. First, an initial (nonoptimal) DCM is computed based on two noncollinear normal vectors. Second, DCM initialization errors are optimally estimated by applying a standard linear LMS formulation. The use of linear versus nonlinear solution techniques is beneficial for error analysis, since it allows for a direct transformation of plane extraction errors into attitude estimation errors. The two-step attitude estimation procedure is discussed next.

Fig. 12. Computational rotation of the j-frame that aligns vector components at the j-frame with its components at the i-frame.

As stated previously, the DCM is first initialized based on two noncollinear vectors. The initial DCM is found based on two computational rotations of the j-frame that align j-frame vector components with their components at the i-frame. Corresponding DCM computations are described in detail in [16]. The main computational steps are summarized below. Two associated noncollinear plane vectors extracted from scan i and scan j ($\hat{n}_{i,k_0}$, $\hat{n}_{i,m_0}$ and $\hat{n}_{j,k_0}$, $\hat{n}_{j,m_0}$) are used to compute the initial DCM. Two vectors with the maximum absolute value of their cross product are chosen amongst all available plane normal vectors to maximize noncollinearity. An extensive search is performed through all possible pairs of normal vectors extracted from scan i to find the vector pair that maximizes the cross-product absolute value:

$$|\hat{n}_{i,k_0} \times \hat{n}_{i,m_0}| = \max_{k=1,\ldots,M;\; m=1,\ldots,M} |\hat{n}_{i,k} \times \hat{n}_{i,m}| \quad (38)$$

where × is the vector cross product and M is the total number of planes extracted.

As stated above, the DCM is computed based on two computational rotations of the j-frame that match j-frame vector components $\hat{n}_{j,k_0}$ and $\hat{n}_{j,m_0}$ with their i-frame components $\hat{n}_{i,k_0}$ and $\hat{n}_{i,m_0}$. The first rotation matches components of $\hat{n}_{j,k_0}$ and $\hat{n}_{i,k_0}$: i.e., the j-frame is computationally rotated such that $\hat{n}_{j,k_0}$ vector components become $\hat{n}_{i,k_0}$ components at the end of the rotation, as illustrated in Fig. 12. Hence, $\hat{n}_{j,k_0}$ is rotated relative to the j-frame as shown in Fig. 12. The rotation axis $\hat{\mu}_1$ is perpendicular to both $\hat{n}_{i,k_0}$ and $\hat{n}_{j,k_0}$, i.e.,

$$\hat{\mu}_1 = \hat{n}_{i,k_0} \times \hat{n}_{j,k_0} \quad (39)$$

and the rotation angle $\phi_1$ is the angle between $\hat{n}_{i,k_0}$ and $\hat{n}_{j,k_0}$:

$$\phi_1 = \arccos(\hat{n}_{j,k_0}, \hat{n}_{i,k_0}). \quad (40)$$

To support this vector rotation, the j-frame must be rotated in the direction opposite to the vector rotation. Thus, based on the rotation angle and rotation axis, the DCM of the first rotation is computed as follows:

$$\hat{C}_1 = \mathrm{expm}(\hat{\phi}_1 \cdot \hat{\mu}_1\times) \quad (41)$$

where expm is the matrix exponential function and $\hat{\mu}_1\times$ is the skew-symmetric matrix defined as follows:

$$\hat{\mu}_1\times = \begin{bmatrix} 0 & -\hat{\mu}_{1z} & \hat{\mu}_{1y} \\ \hat{\mu}_{1z} & 0 & -\hat{\mu}_{1x} \\ -\hat{\mu}_{1y} & \hat{\mu}_{1x} & 0 \end{bmatrix}. \quad (42)$$

After the first rotation, the following condition is satisfied:

$$\hat{n}_{i,k_0} = \hat{C}_1 \cdot \hat{n}_{j,k_0} \quad (43)$$

and components of the second normal vector are transformed as follows:

$$\hat{n}'_{j,m_0} = \hat{C}_1 \cdot \hat{n}_{j,m_0}. \quad (44)$$

The second rotation matches $\hat{n}'_{j,m_0}$ with $\hat{n}_{i,m_0}$ while the previously matched $\hat{n}_{i,k_0}$ and $\hat{n}_{j,k_0}$ remain unchanged:

$$\hat{n}_{i,m_0} = \hat{C}_2 \cdot \hat{n}'_{j,m_0}, \qquad \hat{C}_2 \cdot \hat{n}_{i,k_0} = \hat{n}_{i,k_0}. \quad (45)$$

Correspondingly, $\hat{n}_{i,k_0}$ serves as the rotation axis for the second rotation (i.e., $\hat{\mu}_2 = \hat{n}_{i,k_0}$). The rotation angle is chosen to satisfy the first condition in (45). In this case, the rotation angle $\phi_2$ can be estimated as the angle between projections of $\hat{n}'_{j,m_0}$ and $\hat{n}_{i,m_0}$ on the planar surface perpendicular to $\hat{n}_{i,k_0}$ (see [16] for more details). Computation of the DCM for the second rotation is similar to the DCM computation for the first rotation:

$$\hat{C}_2 = \mathrm{expm}(\hat{\phi}_2 \cdot \hat{\mu}_2\times). \quad (46)$$

The initial DCM estimate is determined as a superposition of the above rotations:

$$(\hat{C}_{ij})_0 = \hat{C}_2 \cdot \hat{C}_1. \quad (47)$$

The initial estimate of the DCM can be represented as follows:

$$(\hat{C}_{ij})_0 = \delta C_{ij} \cdot C_{ij} \quad (48)$$
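The two-rotation DCM initialization of (38)-(47) can be sketched as follows. This is a sketch under stated assumptions, not the paper's implementation: the matrix exponential of (41) is evaluated with the standard Rodrigues formula, and the rotation signs follow the usual right-hand-rule convention, which may differ from the paper's "opposite direction" frame-rotation convention; the function names are my own.

```python
import numpy as np

# Sketch of the two-rotation DCM initialization (38)-(47).

def skew(u):
    """Skew-symmetric matrix u-cross, as in (42)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rodrigues(axis, angle):
    """expm(angle * skew(axis)) via the Rodrigues formula."""
    n = np.linalg.norm(axis)
    if n < 1e-12:                       # degenerate axis: no rotation
        return np.eye(3)
    K = skew(axis / n)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def initial_dcm(ni_k, ni_m, nj_k, nj_m):
    """DCM bringing j-frame components (nj_k, nj_m) onto (ni_k, ni_m)."""
    # first rotation: align nj_k with ni_k about an axis perpendicular to both
    phi1 = np.arccos(np.clip(np.dot(nj_k, ni_k), -1.0, 1.0))
    C1 = rodrigues(np.cross(nj_k, ni_k), phi1)
    # second rotation about ni_k: align the rotated nj_m with ni_m
    njm1 = C1 @ nj_m
    p1 = njm1 - np.dot(njm1, ni_k) * ni_k    # projections onto the plane
    p2 = ni_m - np.dot(ni_m, ni_k) * ni_k    # perpendicular to ni_k
    phi2 = np.arctan2(np.dot(np.cross(p1, p2), ni_k), np.dot(p1, p2))
    C2 = rodrigues(ni_k, phi2)
    return C2 @ C1                            # superposition, as in (47)
```

With noise-free normal vectors, the returned DCM maps both j-frame vectors exactly onto their i-frame counterparts; with noisy vectors it serves only as the initialization refined by the linear LMS stage.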
where δCij is the DCM estimation error matrix. Linear approximation of this matrix yields [19]

δCij ≈ I + (δΩ×)  (49)

where (δΩ×) is the skew-symmetric matrix

          ⎡  0    −δψ    δθ ⎤
(δΩ×) =   ⎢  δψ    0    −δφ ⎥   (50)
          ⎣ −δθ    δφ    0  ⎦

and δθ, δφ, and δψ are the errors in the pitch, roll, and heading angles after the DCM initialization stage. The second stage of the DCM estimation procedure implements a linear LMS solution to estimate these angular errors. This LMS solution procedure is discussed below.

Substitution of (49) into (48) provides the following expression:

(Ĉij)0 = δCij · Cij ≈ (I + (δΩ×)) · Cij = Cij + (δΩ×) · Cij.  (51)

Correspondingly,

(Ĉij)0 · nj − ni = (I + (δΩ×)) · Cij · nj − ni = (I + (δΩ×)) · ni − ni
                 = ni + (δΩ×) · ni − ni = δΩ × ni = −ni × δΩ.  (52)

Note that (52) uses the equivalency of the matrix multiplication (δΩ×) · ni to the vector cross product δΩ × ni. Expanding (52) to include all available normal vectors yields

Δn = HΩ · δΩ  (53)

where

     ⎡ (Ĉij)0 · nj,1 − ni,1 ⎤         ⎡ −(ni,1×) ⎤
Δn = ⎢          ⋮           ⎥ ,  HΩ = ⎢     ⋮    ⎥ .  (54)
     ⎣ (Ĉij)0 · nj,M − ni,M ⎦         ⎣ −(ni,M×) ⎦

An LMS solution is applied to estimate the initial angular errors:

δΩ̂ = (HΩᵀ · HΩ)⁻¹ · HΩᵀ · Δn̂.  (55)

Note that the measurement vector Δn̂ is based on estimated values of the plane normal vectors that are computed from LADAR data. Finally, the initial DCM estimate is adjusted to incorporate the LMS estimates of the initial angular errors:

Ĉij = expm(−(δΩ̂×)) · (Ĉij)0.  (56)

Equation (55) provides the linear relation between plane normal vectors and angular adjustments. Unlike nonlinear attitude estimation methods, this linear relation can be directly applied to formulate the influence of the relative plane geometry on the estimation accuracy of the LADAR platform pitch, roll, and heading angles. Similar to the delta position LMS solution above, DOP factors can be derived to relate errors in the plane normal vectors to angular errors. However, unlike the range errors for the position case, the normal vector errors are generally correlated. Particularly, errors in components of the same normal vector are correlated, as can be inferred from (1) and (17). This correlation needs to be taken into account in the derivation of DOP factors for the attitude case. This derivation is outside the scope of this paper.

In order to use planar surfaces for the estimation of position and orientation changes as formulated in this section, it must be known with certainty that a plane in scan i corresponds to the same plane in scan j. For plane matching, INS data can be used to predict the plane range and normal vector in scan j based on the range and normal vector extracted from scan i:

ρ̂j⁻ = ρ̂i − (ΔR̂INS, n̂i)
n̂j⁻ = ΔĈINS · n̂i  (57)

where ρ̂j⁻ and n̂j⁻ are the predicted range and normal vector; ρ̂i and n̂i are the plane range and normal vector extracted from scan i; and ΔR̂INS and ΔĈINS are the INS estimates of the position change vector and the DCM increment between scans i and j. If the predicted range and normal vector (ρ̂j⁻ and n̂j⁻) closely match the range and normal vector extracted from scan j (ρ̂j and n̂j), the plane correspondence is established between scans i and j. Note that the plane matching thresholds must accommodate both plane extraction errors and INS drift errors. As stated previously, implementation of a feature matching procedure that exploits inertial data will be addressed by future research. For the current realization of the 3D navigation solution, the simulation and test scenarios are designed such that a direct plane correspondence can be used to match planes between different scans, i.e., the kth plane extracted from scan i always corresponds to the kth plane extracted from scan j.

VII. SIMULATION RESULTS

The plane-based navigation methodology is first verified using simulations. Three planar surfaces are simulated according to the geometry shown in Fig. 13. For this planar geometry, the DOP factors for the position estimation are computed as 0.7, 3.1, and 10.6 for the xDOP, yDOP, and zDOP, respectively. 2D LADAR scans are simulated at 0 deg, 5 deg, and 10 deg elevation angles. The simulated LADAR measurements conform to the specifications of a SICK LMS-200 LADAR. The LADAR angular range is from 0 to 180 deg with the angular resolution of
Fig. 13. Simulation scenario for 3D plane-based navigation: three planar surfaces simulated.
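Since the DOP factors for the delta position solution are determined entirely by the plane-normal geometry, they can be sketched directly from the stacked unit normals: with a measurement matrix H whose rows are the plane normals, the DOP factors are the square roots of the diagonal of (HᵀH)⁻¹. The sketch below is illustrative only; the normals are invented and do not reproduce the Fig. 13 geometry or its 0.7/3.1/10.6 values:

```python
import numpy as np

def dop_factors(normals):
    """xDOP, yDOP, zDOP for a plane-based delta-position LMS solution:
    H has one row per plane (the unit normal); DOPs are the square roots
    of the diagonal of (H^T H)^-1, mapping range errors to position errors."""
    H = np.atleast_2d(normals)
    cov = np.linalg.inv(H.T @ H)
    return np.sqrt(np.diag(cov))

# Illustrative three-plane geometry (hypothetical, not the Fig. 13 planes):
# mostly x-facing, mostly y-facing, and mostly z-facing surfaces
normals = np.array([
    [0.98, 0.00, 0.20],
    [0.30, 0.95, 0.10],
    [0.10, 0.20, 0.97],
])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(dop_factors(normals))   # larger values indicate weaker geometry on that axis
```

Three mutually orthogonal normals give DOPs of exactly 1 on each axis; as the normals become coplanar, the DOP along the poorly observed axis grows without bound, which matches the large zDOP reported for the shallow-elevation geometry.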
Fig. 17. 3D plane-based navigation (scenario 2): errors in delta position solution.
A. Test Setup
Fig. 18 shows a photograph of the test setup that is developed to demonstrate the feasibility of 3D trajectory estimation from LADAR measurements. A SICK LMS-200 2D scanning LADAR is mounted in a bracket that is capable of rotating around the x axis of the LADAR body frame. LADAR rotations are implemented using a low-cost Futaba digital servo motor. Scans are taken at elevation angles of 0 deg, 5 deg, and 10 deg. The servo control and LADAR data collection functions are implemented in a Xilinx Spartan 3 field programmable gate array (FPGA).

Fig. 19 shows the diagram of the data collection setup. The scanning LADAR outputs measurement messages, where each message comprises the measurements for a single scan. LADAR messages are detected by the message detection block of the FPGA-based control and data collection board. Once a LADAR message is detected, the board updates the servo elevation angle through the servo control block. One of the key requirements for the setup design is that the LADAR motion between different elevation angles does not interfere with the LADAR scans themselves. In other words, the servo motor must change the LADAR elevation angle between scans, and not during the scans, to avoid introducing distortions into the scan images. To satisfy this requirement, the FPGA-based control board sends the elevation angle change command to the servo motor immediately after the detection of the LADAR measurement message. Thus, the servo has enough time to change the LADAR elevation angle before the next scan is taken. A 2D scan repetition interval of about 0.7 s is implemented for the test setup. This time interval is sufficient to change the LADAR elevation angle for the low-cost servo used in the setup. The data collection block forms output messages, where each message contains the scanning measurements of a single scan and the value of the scan's elevation angle provided by the servo control block. Output messages are stored into a binary file on a PC through the USB port.

Fig. 20. Segmentation of LADAR measurements from the data file: first, output measurement messages are extracted from the file; second, the scan image and its corresponding elevation angle are decoded from each message; third, groups of three consecutive images (0 deg, 5 deg, 10 deg elevation angles) are formed.

To process the experimental data, the data segmentation scheme is implemented as illustrated in Fig. 20. Measurement messages in the data file are identified by the message header and extracted
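The triplet-forming step of the Fig. 20 segmentation can be sketched in a few lines. The binary message framing and field layout are not specified here, so the `ScanMessage` container and the grouping function below are hypothetical stand-ins that only illustrate grouping decoded scans into 0/5/10 deg elevation triplets:

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class ScanMessage:
    elevation_deg: float        # elevation reported by the servo control block
    ranges: List[float]         # one range per angular step of the 2D scan

def group_into_triplets(messages: Iterable[ScanMessage],
                        elevations=(0.0, 5.0, 10.0)) -> List[List[ScanMessage]]:
    """Form groups of three consecutive scans taken at the expected
    elevation sequence (0, 5, 10 deg); incomplete or out-of-order
    partial groups are discarded."""
    triplets, current = [], []
    for msg in messages:
        expected = elevations[len(current)]
        if msg.elevation_deg == expected:
            current.append(msg)
        else:
            # restart: keep the message only if it begins a new sequence
            current = [msg] if msg.elevation_deg == elevations[0] else []
        if len(current) == len(elevations):
            triplets.append(current)
            current = []
    return triplets

# Example: a stream with one glitch (a repeated 0-deg scan mid-sequence)
stream = [ScanMessage(e, []) for e in (0, 5, 10, 0, 0, 5, 10)]
print(len(group_into_triplets(stream)))   # 2
```

Discarding partial groups mirrors the requirement that each 3D processing step sees a complete three-elevation image set.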
Fig. 21. Plane reconstruction based on LADAR data: photograph of live data test scenario 1.

Fig. 23. Plane reconstruction: photograph of live data test scenario 2.
Fig. 25. Reconstruction of 3D translational motion: true trajectory versus motion trajectory estimated by 3D plane-based navigation that uses LADAR data.

Fig. 27. Reconstruction of 3D angular motion: true attitude trajectory versus attitude estimated based on plane parameters extracted from LADAR data.
ACKNOWLEDGMENTS

The authors would especially like to thank Jon Sjogren of the AFOSR for supporting this research.

REFERENCES

[1] Hostetler, L. D., and Andreas, R. D. Nonlinear Kalman filtering techniques for terrain-aided navigation. IEEE Transactions on Automatic Control, AC-28, 3 (Mar. 1983).
[2] Campbell, J. L., Uijt de Haag, M., and van Graas, F. Terrain-referenced positioning using airborne laser scanner. Navigation, Journal of the Institute of Navigation, 52, 4 (Winter 2005-2006), 189-197.
[3] Kim, J., and Sukkarieh, S. Autonomous airborne navigation in unknown terrain environments. IEEE Transactions on Aerospace and Electronic Systems, 40, 3 (July 2004), 1031-1045.
[4] Vadlamani, A. K., and Uijt de Haag, M. Use of laser range scanners for precise navigation in unknown environments. In Proceedings of the Institute of Navigation GNSS-2006, Sept. 2006, 1104-1114.
[5] Borges, G. A., and Aldon, M.-J. Optimal robot pose estimation using geometrical maps. IEEE Transactions on Robotics and Automation, 18, 1 (Feb. 2002), 87-94.
[6] Pfister, S. T. Algorithms for Mobile Robot Localization and Mapping Incorporating Detailed Noise Modeling and Multi-Scale Feature Extraction. Ph.D. dissertation, California Institute of Technology, Pasadena, 2006.
[7] Bates, D. Navigation Using Optical Tracking of Objects at Unknown Locations. M.S. thesis, Ohio University, Athens, 2006.
[8] Soloviev, A., Bates, D., and van Graas, F. Tight coupling of laser scanner and inertial measurements for a fully autonomous relative navigation solution. Navigation, Journal of the Institute of Navigation, 54, 3 (Fall 2007), 189-205.
[9] Horn, J. P. Bahnführung eines mobilen Roboters mittels absoluter Lagebestimmung durch Fusion von Entfernungsbild- und Koppelnavigationsdaten (Guidance of a mobile robot using absolute position determination by fusion of range-image and dead-reckoning data). Ph.D. dissertation, Technical University of Munich, Germany, 1997.
[10] Uijt de Haag, M., Venable, D., and Smearcheck, M. Integration of an inertial measurement unit and 3D imaging sensor for urban and indoor navigation of unmanned vehicles. In Proceedings of the Institute of Navigation National Technical Meeting 2007, Jan. 2007, 829-840.
[11] Thrun, S., Martin, C., Liu, Y., Hähnel, D., Emery-Montemerlo, R., Chakrabarti, D., and Burgard, W. A real-time expectation-maximization algorithm for acquiring multi-planar maps of indoor environments with mobile robots. IEEE Transactions on Robotics and Automation, 20, 3 (June 2004), 433-442.
[12] Nguyen, V., Martinelli, A., Tomatis, N., and Siegwart, R. A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics. In Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS), 2005.
[13] Borges, G. A., and Aldon, M.-J. Line extraction in 2D range images for mobile robotics. Journal of Intelligent and Robotic Systems, 40, 3 (2004), 267-297.
[14] Bates, D., and van Graas, F. Covariance analysis considering the propagation of laser scanning errors for use in LADAR navigation. In Proceedings of the Institute of Navigation Annual Meeting, Apr. 2007, 624-634.
[15] Kaplan, E., and Hegarty, C. (Eds.) Understanding GPS: Principles and Applications (2nd ed.). Norwood, MA: Artech House, 2006.
[16] Mostov, K., Soloviev, A., and Koo, T. J. Initial attitude determination and correction of gyro-free INS angular orientation on the basis of GPS linear navigation parameters. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Nov. 1997, 1034-1039.
[17] Wahba, G. A least squares estimate of satellite attitude. SIAM Review, 7, 3 (July 1965), 409.
[18] Psiaki, M. L. Attitude-determination filtering via extended quaternion estimation. Journal of Guidance, Control, and Dynamics, 23, 2 (Mar.-Apr. 2000), 206-214.
[19] Titterton, D. H., and Weston, J. L. Strapdown Inertial Navigation Technology (2nd ed.). Reston, VA, and Stevenage, UK: The American Institute of Aeronautics and Astronautics and The Institution of Electrical Engineers, 2004.
Andrey Soloviev is a research assistant professor at the University of Florida.
Previously he served as a senior research engineer at the Ohio University
Avionics Engineering Center. He received his Master of Science in applied
mathematics and physics from Moscow University of Physics and Technology
in 1997 and a Ph.D. in electrical engineering from Ohio University in 2002. His
research interests focus on all aspects of multi-sensor integration for navigation,
including integrated processing of GPS, inertial, laser radar, and imagery signals.
Dr. Soloviev received the RTCA William E. Jackson Award in 2002 and the
Institute of Navigation Early Achievement Award in 2006.