Efficient Sparse Pose Adjustment for 2D Mapping
Kurt Konolige
Willow Garage
Menlo Park, CA 94025
Email: konolige@willowgarage.com

Giorgio Grisetti, Rainer Kümmerle, Wolfram Burgard
University of Freiburg
Freiburg, Germany
Email: grisetti@informatik.uni-freiburg.de

Benson Limketkai, Regis Vincent
SRI International
Menlo Park, CA 94025
Email: regis.vincent@sri.com
TABLE I

H = CreateSparse(e, cf)
Input: set of constraints eij, and a list of free nodes (variables) cf
Output: sparse upper triangular H matrix in CCS format
1) Initialize a vector of size ||cf|| of C++ std::map's; each map is associated with the corresponding column of H. The key of the map is the row index, and the data is an empty 3x3 matrix. Let map[i, j] stand for the j'th entry of the i'th map.
2) For each constraint eij, assuming i < j:
   a) In the steps below, create the map entries if they do not exist.
   b) If ci is free, map[i, i] += Ji⊤ Λij Ji.
   c) If cj is free, map[j, j] += Jj⊤ Λij Jj.
   d) If ci, cj are free, map[j, i] += Ji⊤ Λij Jj.
3) Set up the sparse upper triangular matrix H.
   a) In the steps below, ignore elements of the 3x3 map[i, i] entries that are below the diagonal.
   b) Go through map[] in column then row order, and set up col_ptr and row_ind by determining the number of elements in each column, and their row positions.
   c) Go through map[] again in column then row order, and insert entries into val sequentially.

TABLE II

ContinuableLM(c, e, λ)
Input: nodes c and constraints e, and diagonal increment λ
Output: updated c
1) If λ = 0, set λ to the stored value from the previous run.
2) Set up the sparse H matrix using CreateSparse(e, c − c0), with c0 as the fixed pose.
3) Solve (H + λ diag H) ∆x = J⊤ Λ e, using sparse Cholesky with AMD.
4) Update the variables c − c0 using Equation (6).
5) If the error e has decreased, divide λ by two and save, and return the updated poses for c − c0.
6) If the error e has increased, multiply λ by two and save, and return the original poses for c − c0.
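As a concrete illustration of step 3 of CreateSparse (Table I), the following C++ sketch converts the per-column std::map representation into CCS arrays. It is not the SPA implementation: the type names (Block3, CCSMatrix) are placeholders, Eigen is assumed only for the 3x3 blocks, and the separate counting and filling passes of steps 3b and 3c are collapsed into a single pass with push_back.

#include <Eigen/Core>
#include <map>
#include <vector>

// Sketch of step 3 of CreateSparse: converting the per-column std::map
// representation into compressed column storage (CCS).
// cols[j] maps a block-row index i to the 3x3 block of block column j;
// only blocks with i <= j are stored (upper triangular at the block level).
typedef Eigen::Matrix3d Block3;

struct CCSMatrix {
  std::vector<int> col_ptr;   // index of the first entry of each scalar column
  std::vector<int> row_ind;   // scalar row index of each stored entry
  std::vector<double> val;    // entry values, stored column by column
};

CCSMatrix BuildCCS(const std::vector<std::map<int, Block3> >& cols) {
  CCSMatrix H;
  const int nb = static_cast<int>(cols.size());   // number of block columns (free nodes)
  H.col_ptr.reserve(3 * nb + 1);
  H.col_ptr.push_back(0);

  // Walk the maps in column then row order, expanding each 3x3 block into
  // scalar entries.  For the diagonal block (i == j) only the elements on or
  // above the diagonal are kept, as in step 3a.
  for (int j = 0; j < nb; ++j) {
    for (int jc = 0; jc < 3; ++jc) {               // scalar column 3*j + jc
      for (std::map<int, Block3>::const_iterator it = cols[j].begin();
           it != cols[j].end(); ++it) {
        const int i = it->first;                   // block row
        const Block3& B = it->second;
        const int last_row = (i == j) ? jc : 2;    // upper triangle of the diagonal block
        for (int ic = 0; ic <= last_row; ++ic) {
          H.row_ind.push_back(3 * i + ic);
          H.val.push_back(B(ic, jc));
        }
      }
      H.col_ptr.push_back(static_cast<int>(H.val.size()));
    }
  }
  return H;
}

Because std::map keeps its keys sorted, the rows within each column come out in increasing order, which is exactly what the CCS format requires.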
F. Continuable LM System

The LM system algorithm is detailed in Table II. It does one step of the LM algorithm, for a set of nodes c with associated measurements. Running a single iteration allows for incremental operation of LM, so that more nodes can be added between iterations. The algorithm is continuable in that λ is saved between iterations, so that successive iterations can change λ based on their results. The idea is that adding a few nodes and measurements doesn't change the system that much, so the value of λ carries information about the state of gradient descent vs. Gauss-Newton behavior. When a loop closure occurs, the system can have trouble finding a good minimum, and λ will tend to rise over the next few iterations to start the system down a good path.

There are many different ways of adjusting λ; we choose a simple one. The system starts with a small λ, 10^-4. If the updated system has a lower error than the original, λ is halved. If the error is the same or larger, λ is doubled. This works quite well in the case of incremental optimization. As long as the error decreases when adding nodes, λ decreases and the system stays in the Gauss-Newton region. When a link is added that causes a large distortion that does not get corrected, λ can rise and the system falls back to the more robust gradient descent.
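As a sketch of how such a continuable step can be organized, the code below follows the control flow of Table II. The graph interface (linearize, error, applyIncrement, revertIncrement) is hypothetical, and Eigen's SimplicialLDLT, which applies its own fill-reducing (AMD-style) ordering, stands in for the sparse Cholesky solver of step 3, so this illustrates the λ bookkeeping rather than the actual SPA code.

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>

// One continuable LM step (cf. Table II).  GraphT is a hypothetical interface
// that must provide:
//   Eigen::SparseMatrix<double> H;  // J^T Λ J (full symmetric here, for simplicity)
//   Eigen::VectorXd b;              // J^T Λ e
//   double error();                 // total χ² error of all constraints
//   void linearize();               // recompute H and b at the current poses
//   void applyIncrement(const Eigen::VectorXd& dx);   // update poses (Eq. 6)
//   void revertIncrement(const Eigen::VectorXd& dx);  // restore previous poses
template <class GraphT>
double ContinuableLMStep(GraphT& g, double& lambda) {
  if (lambda <= 0.0) lambda = 1e-4;      // initial damping; otherwise reuse stored value

  g.linearize();
  Eigen::SparseMatrix<double> A = g.H;
  for (int k = 0; k < A.rows(); ++k)     // damp the diagonal: H + λ diag(H)
    A.coeffRef(k, k) *= (1.0 + lambda);
  A.makeCompressed();

  // Sparse Cholesky with a fill-reducing ordering.
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > solver(A);
  Eigen::VectorXd dx = solver.solve(g.b);

  const double old_error = g.error();
  g.applyIncrement(dx);
  const double new_error = g.error();

  if (new_error < old_error) {
    lambda *= 0.5;                       // accept: move toward Gauss-Newton
    return new_error;
  }
  g.revertIncrement(dx);                 // reject: keep the original poses
  lambda *= 2.0;                         // move toward gradient descent
  return old_error;
}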
V. SCAN MATCHING

SPA requires precision (inverse covariance) estimates from matching of laser scans (or other sensors). Several scan-matching algorithms can provide this; for example, Gutmann et al. [11] use point matches to lines extracted in the reference scan, and return a Gaussian estimate of error. More recently, the correlation method of Konolige and Chou [17], extended by Olson [22], provides an efficient method for finding the globally best match within a given range, while returning an accurate covariance. The method allows either a single scan or a set of aligned scans to be matched against another single scan or set of aligned scans. This method is used in SRI's mapping system Karto¹ for both local matching of sequential scans and loop-closure matching of sets of scans, as in [12]. To generate the real-world datasets for the experiments, we ran Karto on 63 stored robot logs of various sizes, using its scan matching and optimizer to build a map and generate constraints, including loop closures. The graphs were saved and used as input to all methods in the experiments.

¹ Information on Karto can be found at www.kartorobotics.com.

VI. EXPERIMENTS

In this section, we present experiments in which we compare SPA with state-of-the-art approaches on 63 real-world datasets and on a large simulated dataset. We considered a broad variety of approaches, including the best of the state of the art:
• Information filter: DSIF [7]
• Stochastic gradient descent: TORO [10]
• Decomposed nonlinear system: Treemap [8]
• Sparse pose adjustment: SPA, with (a) a sparse direct Cholesky solver, and (b) iterative PCG [15]

We updated the PCG implementation to use the same "continuable LM" method as SPA; the only difference is in the underlying linear solver. The preconditioner is an incomplete Cholesky method, and the conjugate gradient is implemented in sparse matrix format.

We also evaluated a dense Cholesky solver, but both its computational and memory requirements were several orders of magnitude larger than those of the other approaches. As an example, for a dataset with 1600 constraints and 800 nodes, one iteration using a dense Cholesky solver takes 2.1 seconds, while the other approaches require an average of a few milliseconds. All experiments were executed on an Intel Core i7-920 running at 2.67 GHz.
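To make the PCG variant used in the comparison concrete, here is a minimal sketch of a preconditioned conjugate gradient solve for H ∆x = b. For brevity a Jacobi (diagonal) preconditioner replaces the incomplete Cholesky preconditioner used in the actual implementation, and the function name and defaults are illustrative.

#include <Eigen/Sparse>

// Minimal preconditioned conjugate gradient sketch for H dx = b, where H is
// the full symmetric (damped) system matrix in sparse format.  A Jacobi
// preconditioner stands in for the incomplete Cholesky preconditioner
// described in the text.
Eigen::VectorXd SolvePCG(const Eigen::SparseMatrix<double>& H,
                         const Eigen::VectorXd& b,
                         int max_iter = 100, double tol = 1e-9) {
  Eigen::VectorXd x = Eigen::VectorXd::Zero(b.size());
  Eigen::VectorXd diag = H.diagonal();
  Eigen::VectorXd inv_diag = diag.cwiseInverse();    // Jacobi preconditioner M^-1

  Eigen::VectorXd r = b;                             // residual (x starts at zero)
  Eigen::VectorXd z = inv_diag.cwiseProduct(r);      // z = M^-1 r
  Eigen::VectorXd p = z;
  double rz = r.dot(z);

  for (int k = 0; k < max_iter && r.norm() > tol * b.norm(); ++k) {
    Eigen::VectorXd Hp = H * p;
    const double alpha = rz / p.dot(Hp);
    x += alpha * p;                                  // update the solution
    r -= alpha * Hp;                                 // update the residual
    z = inv_diag.cwiseProduct(r);                    // apply the preconditioner
    const double rz_new = r.dot(z);
    p = z + (rz_new / rz) * p;                       // new search direction
    rz = rz_new;
  }
  return x;
}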
In the following, we report a cumulative analysis of the behavior of the approaches under different operating conditions; results for all datasets are available online at www.ros.org/research/2010/spa. We tested each method both in batch mode and on-line. In batch mode, we

Fig. 5. On-line optimization on real-world datasets. Left: the average χ² error per constraint after adding a node during incremental operation. Right: the average time required to optimize the graph after inserting a node. Each data point represents one data set; the x-axis shows the total number of constraints of that data set. Note that the error for PCG and SPA is the same in the left graph. [Both panels plot against # constraints; legends: SPA, TORO, PCG (left) and SPA, TORO, PCG, Treemap, DSIF (right).]

Fig. 6. Large simulated dataset containing 100k nodes and 400k constraints used in our experiments. Left: initial guess computed from the odometry. Right: optimized graph. Our approach requires about 10 seconds to perform a full optimization of the graph when using the spanning tree as initial guess.
[Figure: χ² error plots. Left panel: SPA and PCG with odometry and spanning-tree initial guesses, and TORO. Right panel: SPA and PCG.]

... not handle non-circular covariances, which limit its ability to achieve a minimal χ². Treemap is much harder to analyze, [...] for optimization. For these datasets, it appears to have a large tree (large dataset loops) with small leaves. The tree structure [...]
[3] T. A. Davis. Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms 2). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2006.
[4] F. Dellaert. Square Root SAM. In Proc. of Robotics: Science and Systems (RSS), pages 177–184, Cambridge, MA, USA, 2005.
[5] J. Dongarra, A. Lumsdaine, R. Pozo, and K. Remington. A sparse matrix [...]