Auton Robot (2008) 24: 13–27

DOI 10.1007/s10514-007-9050-y

A line feature based SLAM with low grade range sensors using
geometric constraints and active exploration for mobile robot
Young-Ho Choi · Tae-Kyeong Lee · Se-Young Oh

Received: 27 February 2007 / Accepted: 29 August 2007 / Published online: 19 October 2007
© Springer Science+Business Media, LLC 2007

Abstract This paper describes a geometrically constrained Extended Kalman Filter (EKF) framework for line feature based SLAM, applicable to a rectangular indoor environment. Its focus is on how to handle sparse and noisy sensor data, such as PSD infrared sensors of limited range and limited number, in order to develop a low-cost navigation system. It has been applied to a vacuum cleaning robot in our research. In order to meet the real-time objective with low computing power, we develop an efficient line feature extraction algorithm based upon an iterative end point fit (IEPF) technique assisted by our constrained version of the Hough transform. It uses the geometric constraint that every line is orthogonal or parallel to the others, because in a general indoor setting most furniture and walls satisfy this constraint. By adding this constraint to the measurement model of the EKF, we build a geometrically constrained EKF framework which can estimate line feature positions more accurately, as well as allow their covariance matrices to converge more rapidly, when compared to the case of an unconstrained EKF. The experimental results demonstrate the accuracy and robustness of the approach in the presence of sensor noise and errors in an actual indoor environment.

Keywords Line feature · SLAM · EKF · Geometric constraint · Environmental modeling

Y.-H. Choi (corresponding author) · T.-K. Lee · S.-Y. Oh
Department of Electrical Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
e-mail: rockboy@postech.ac.kr (Y.-H. Choi), devilee@postech.ac.kr (T.-K. Lee), syoh@postech.ac.kr (S.-Y. Oh)

1 Introduction

In most applications, a mobile robot must be able to self-localize and build a map to perform a variety of useful long-term tasks. Therefore, simultaneous localization and mapping (SLAM) has been one of the most crucial issues for mobile robot navigation. It is the process of building a map of the environment while concurrently generating an estimate of the location of the robot, with the objective of minimizing the localization and mapping errors simultaneously. The history and critical issues of SLAM are well discussed in Frese (2006).

In the last ten years, a great many SLAM algorithms have been proposed. The laser range finder based SLAM (Diosi and Kleeman 2005; Hähnel et al. 2003; Tomono 2004; Nguyen et al. 2006) has been the most popular in various applications owing to the sensor's superior characteristics, including long range, high angular resolution, and dense scanning. Diosi (2005) proposed a method for 2D laser scan matching called Polar Scan Matching (PSM), which belongs to the family of point-to-point matching strategies but avoids exhaustive searching over all point associations by simply matching against points with the same bearing, which makes it faster than the popular iterative closest point (ICP) method. Hähnel et al. (2003) presented a SLAM algorithm that combines Rao-Blackwellized particle filtering and scan matching. In this approach, scan matching is used for minimizing odometric errors during mapping, and a probabilistic model of the residual errors from scan matching is then used for the resampling steps. This way, the number of samples required is considerably reduced and the particle depletion problem that typically prevents the robot from closing large loops is mitigated. Tomono (2004) proposed a new scan matching method based on geometric hashing, which utilizes Euclidean invariant features in order to match an input scan with a reference scan without an initial alignment.
Nguyen et al. (2006) proposed the orthogonal SLAM using geometric constraints between the lines. They reduced the complexity of the SLAM algorithm by mapping only the lines that are parallel or perpendicular to each other, which represent the main structure of most indoor environments. Despite their relative successes, these active sensor based systems have had limited utility in practical commercial applications due to their high cost, processing requirements and safety issues.

Recently, a number of researchers have proposed vision based SLAM solutions using relatively low cost cameras (Davison 2003; Lowe 2004; Murray and Jennings 1998). Davison (2003) proposed a vision-based real-time SLAM, called Mono-SLAM, which uses only a single camera without odometry information. Although it improved localization accuracy by incorporating the camera velocity into the object variables, it still requires an initial manual calibration process to obtain the scale information. Lowe (2004) introduced a triclops-vision system using their own wide baseline matching technique called SIFT (Scale Invariant Feature Transform). It maintains the robot pose and landmark positions separately. Murray (1998) proposed a stereo vision based mapping technique which is able to build a grid map using a trinocular system.

These laser range finder or vision based SLAM techniques have been developed and are used in many real applications. However, these methods are only useful on more expensive platforms such as guidance or security robots and are not applicable to low cost platforms such as vacuum cleaning robots, which have been the most successful commercially. Most low cost robots use only sparse and noisy range sensors, such as sonar or PSD range sensors, due to the cost limit. Therefore, it is necessary to develop a new efficient SLAM algorithm that can work with sparse and noisy sensors. Along this line, some researchers have recently proposed sonar range sensor based localization (Großmann and Poli 1999) and SLAM (Burguera et al. 2005; Tardos et al. 2002; Kleeman 2003; Choi and Oh 2006) techniques. Although their performance is in general lower than that of the laser range finders, primarily due to their short range and high angular uncertainty (up to 30 degrees), it is still significant in view of their cost and availability. Details of these systems are reviewed in Sect. 2. Unfortunately, research on low grade sensor based SLAM has been quite scarce.

This paper has four main contributions. First, we develop an efficient line extraction algorithm which remarkably reduces the search space by combining the Iterative End Point Fit (IEPF) clustering algorithm and LS line fitting with the original Hough transform. Second, we demonstrate that it is feasible to model the general indoor environment as a combination of line features. Third, we enhance the performance of the EKF framework for our line feature based SLAM by using geometric constraints between the lines. Thanks to this, it is now possible to implement a reliable SLAM algorithm for low cost robots equipped with sparse and noisy range sensors. Finally, we propose an efficient active exploration control scheme for mapping which incrementally estimates the locations of the invisible line features from the line feature based environment model and then moves the robot around that vicinity.

The rest of this paper is organized as follows. Section 2 reviews the previous work. Section 3 describes a new efficient line feature extraction scheme. Section 4 then develops our geometrically constrained EKF formulation, whereas Sect. 5 describes the line feature based environment modeling and the active exploration control scheme. Section 6 finally shows the experimental results.

2 Previous work based on sparse and noisy range sensors

In the last decade, several researchers addressed the sparse and noisy sensor based localization and SLAM problems. Großmann (1999) proposed a position tracking method for a mobile robot with seven sonar sensors, consisting of three steps. It first computes a two-dimensional feature space by applying a straight-line Hough transform to the sonar readings. The detected features are then matched with those in the world map. The correlation counts obtained in the previous step are used for updating the position probability grid. Burguera (2005) proposed a scan matching localization strategy which extends the ICP based algorithm to sonar range sensors. In this approach, the sensor readings are grouped to overcome their sparseness, and the whole robot trajectory involved in the grouping process is corrected in a post process. Tardos (2002) proposed a perceptual grouping process that permits robust identification and localization of specific environmental features, such as straight segments and corners, from sparse and noisy sonar data. He also proposed a map joining technique that allows the system to build a sequence of independent limited-size stochastic maps and join them in a globally consistent way. Kleeman (2003) proposed a SLAM algorithm for indoor environments using two advanced sonar range sensors which can produce accurate range and bearing measurements, classify targets, and reject interferences within one sensing cycle. He also presented techniques for detecting sonar feature clutter and selecting strong candidates for sonar landmarks. Drumheller (2006) proposed a feature-based localization algorithm for mobile robots equipped with a sonar range finder. He introduced the idea of sonar segments, which are straight-line segments extracted from the measurements obtained in a 360-degree sweep of the sonar range finder. Moreover, he used a sonar barrier test, which eliminates implausible robot positions by exploiting the physical constraint that sonar beams do not penetrate solid objects. However, the algorithm is not able to exploit readings from different positions.
Crowley (1989) proposed a system for dynamically maintaining a description of the limits of the free space for a mobile robot using a belt of ultrasonic range devices. These techniques are based on the principle of explicitly representing the uncertainty of the vehicle position as well as the uncertainty inherent in the sensing process. However, he used only three points to extract a line segment and only a simplified measurement uncertainty model which disregards the correlation of the line parameters.

As outlined above, there have been a variety of approaches to sparse and noisy data based SLAM. However, these solutions are still not adequate for a low cost vacuum cleaning robot, whose goal is to cover the entire floor space, for the following reasons. In the case of using a corner as the feature, its observation error is in general larger than that of a line feature, because a line feature is extracted using several tens of range readings while the location of a corner feature is generally calculated using only two range readings. Therefore, the calculated location of a corner feature is more heavily contaminated by sensor noise than that of a line feature. Especially when using low cost noisy range sensors, this problem manifests itself significantly, because the range reading error is largely affected by the orientation of the sensor with respect to the reflecting surface. This is not a significant problem in the case of usual start-goal navigation, but it can increase the possibility of leaving uncovered areas due to trajectory discontinuities in the case of coverage path planning. In the case of scan matching on raw scan data, the feature extraction step can be eliminated, which makes the approach applicable to a feature-less environment; however, its accuracy should be lower than that of a feature based method, because a scan constructed by accumulating sensor readings observed from different viewpoints is much noisier than a scan from a laser range finder, as shown in Fig. 2. Therefore, it is difficult to expect scan matching accuracy as high as that obtained with a laser range finder. However, using line features as in our approach, this noise can be greatly reduced in the feature extraction process. For these reasons, in this paper we use only line features, whose observed locations are less significantly affected by the robot's viewpoint than other feature types (e.g. corners).

However, the original Hough transform based line extraction used in earlier works is very time consuming, because every point must be voted extensively over the entire Hough space, which is not suitable for a low-end computing environment. For a low cost system, we need to develop a much more efficient line extraction algorithm. Furthermore, when using the EKF framework for SLAM, an incorrect initial feature vector estimate, due to sensor noise or directional bias incurred in sensor data accumulation, can increase the possibility of slow convergence or even divergence of the feature locations. Therefore, we need to eliminate these incorrect line features in the association step by using a predefined set of geometric constraints between the lines. The following two sections show how these problems are solved.

3 Line feature extraction and filtering

We develop our own efficient algorithm using the multi-level filtering scheme shown in Fig. 1.

Fig. 1 Overall process of line extraction and filtering

3.1 PDS building and resampling

Most SLAM algorithms based on sparse sensors have their own sensor data grouping method to overcome sparseness. We use our own method, the Pseudo Dense Scan (PDS) reported in Choi and Oh (2006), to overcome sparseness and reduce noise at the same time. A PDS is basically an odometry-based temporal accumulation of sensor readings, each sampled after a certain amount of pose change, and this raw PDS is resampled using a gravity based resampling scheme to reduce noise as well as the number of data points (details in Choi and Oh 2006). In order to resample the scan points, we first sort the PDS points with the Quicksort algorithm, based on the θ of each point after transformation to polar coordinates. Then we calculate the mean of each point group, where the groups have been clustered based on the distance between points and the number of points within a group. Figure 2 shows an example process of PDS building and resampling.
Fig. 2 Example of PDS building and resampling
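As a concrete illustration of this resampling step, the following C++ sketch sorts a PDS by bearing, groups consecutive points that lie close together, and replaces each sufficiently large group by its mean point. It is a minimal sketch of our reading of the scheme above: the parameter names distThresh and minGroupSize and the simple mean-per-group rule are illustrative assumptions, not the exact implementation of Choi and Oh (2006).

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct PolarPoint { double rho, theta; };  // range and bearing of a PDS point

// Gravity based resampling: sort by theta, cluster consecutive points by
// Euclidean distance, and emit the mean of each sufficiently large cluster.
// distThresh and minGroupSize are hypothetical tuning parameters.
std::vector<PolarPoint> resamplePDS(std::vector<PolarPoint> pds,
                                    double distThresh,
                                    std::size_t minGroupSize) {
    std::sort(pds.begin(), pds.end(),              // the paper sorts on theta
              [](const PolarPoint& a, const PolarPoint& b) {
                  return a.theta < b.theta;
              });
    std::vector<PolarPoint> out;
    std::size_t start = 0;
    for (std::size_t i = 1; i <= pds.size(); ++i) {
        bool split = (i == pds.size());
        if (!split) {
            // Cartesian distance between consecutive points.
            double ax = pds[i - 1].rho * std::cos(pds[i - 1].theta);
            double ay = pds[i - 1].rho * std::sin(pds[i - 1].theta);
            double bx = pds[i].rho * std::cos(pds[i].theta);
            double by = pds[i].rho * std::sin(pds[i].theta);
            split = std::hypot(bx - ax, by - ay) > distThresh;
        }
        if (split) {
            if (i - start >= minGroupSize) {       // drop undersized groups
                PolarPoint mean{0.0, 0.0};
                for (std::size_t j = start; j < i; ++j) {
                    mean.rho += pds[j].rho;
                    mean.theta += pds[j].theta;
                }
                mean.rho /= double(i - start);
                mean.theta /= double(i - start);
                out.push_back(mean);
            }
            start = i;
        }
    }
    return out;
}
```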
3.2 IEPF based clustering and 1st filtering

In this step, the points of the resampled PDS are grouped into several line segments through the IEPF algorithm, which is the most popular and effective line extraction method based on Split-and-Merge (Nguyen et al. 2005). The initial grouping is carried out by checking the distance between two consecutive points. Starting from j = 0, the distance between two adjacent points (x_j, y_j) and (x_{j+1}, y_{j+1}) is checked. If it is smaller than a preset threshold value D, then (x_j, y_j) and (x_{j+1}, y_{j+1}) are considered to be in the same cluster, and otherwise to belong to a new cluster. This procedure continues until all the points are tested. If the number of points in a certain cluster is smaller than a preset threshold value N, or the length of a cluster is shorter than a preset threshold L, then the cluster is considered negligible and not counted as a cluster. In addition, a cluster made up of multiple line segments is divided into several cluster regions by the iterative end point fit (IEPF) method (Duda and Hart 1973). This algorithm recursively splits a set of points C = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} into C_1 = {(x_1, y_1), ..., (x_a, y_a)} and C_2 = {(x_a, y_a), ..., (x_N, y_N)} if T_max, the maximum distance from a point of C to the line joining the two extreme points of C, is longer than a preset threshold T. The same procedure is repeated for C_1 and C_2 until T_max becomes shorter than T. After splitting, two consecutive regions are merged if the maximum distance to the line formed by the extreme points of these regions is smaller than T, as shown in Fig. 3(d). The figure shows an example of the IEPF based clustering of points lying on 3 line segments. As a final step, clusters with a smaller number of points than N or a shorter length than L are pruned.

Fig. 3 Example of an IEPF based clustering

3.3 Least square line fitting and 2nd filtering

In this step, we calculate an initial estimate of the line parameters in (1) for each cluster found in the previous subsection using a least square line fitting scheme as follows:

ax + by + c = 0,  (1)

\hat{a} = \sum_{i=1}^{N} x_i \sum_{i=1}^{N} y_i^2 - \sum_{i=1}^{N} y_i \sum_{i=1}^{N} x_i y_i,  (2)

\hat{b} = \sum_{i=1}^{N} y_i \sum_{i=1}^{N} x_i^2 - \sum_{i=1}^{N} x_i \sum_{i=1}^{N} x_i y_i,  (3)

\hat{c} = \left( \sum_{i=1}^{N} x_i y_i \right)^2 - \sum_{i=1}^{N} x_i^2 \sum_{i=1}^{N} y_i^2.  (4)

After finding the approximate parameters â, b̂ and ĉ, clusters with line fitting errors larger than δ are eliminated.
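The recursive split at the heart of IEPF translates compactly into code. The following C++ sketch assumes the initial distance-based grouping has already produced an ordered cluster of points; it finds the point farthest from the chord joining the cluster's end points and splits there whenever that distance T_max exceeds T. The merge pass and the N/L pruning described above are omitted for brevity, and all names are our own.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double pointLineDist(const Pt& p, const Pt& a, const Pt& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
}

// Recursive IEPF split on pts[first..last]: if the farthest point from the
// chord joining the end points is farther than T, split there and recurse;
// otherwise the whole index range is accepted as one cluster region.
void iepfSplit(const std::vector<Pt>& pts, std::size_t first, std::size_t last,
               double T, std::vector<std::pair<std::size_t, std::size_t>>& regions) {
    double tMax = 0.0;
    std::size_t splitAt = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = pointLineDist(pts[i], pts[first], pts[last]);
        if (d > tMax) { tMax = d; splitAt = i; }
    }
    if (tMax > T) {
        iepfSplit(pts, first, splitAt, T, regions);  // C1 = {p_first, ..., p_a}
        iepfSplit(pts, splitAt, last, T, regions);   // C2 = {p_a, ..., p_last}
    } else {
        regions.emplace_back(first, last);
    }
}
```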
3.4 Constrained Hough transform

In the previous step, we calculated an initial estimate of the line parameters. However, it is well known that LS line fitting is significantly affected by outliers. Therefore, we use the Hough transform both to remove the effect of outliers and to estimate the uncertainties of the line parameters. Figure 4 shows the geometric relation between the line equation x cos θ0 + y sin θ0 = ρ0 in Hough space and âx + b̂y + ĉ = 0 in the x–y plane. Here, we refine the initial line parameters âx + b̂y + ĉ = 0 of each cluster using the Hough transform (HT). To save time, we restrict the Hough voting space to the vicinity of the initial values ρ0, θ0, as shown in Fig. 5, calculated from the initial line parameters â, b̂ and ĉ as follows:

\rho_0 = \frac{|\hat{c}|}{\sqrt{\hat{a}^2 + \hat{b}^2}}, \qquad \theta_0 = \tan^{-1}\!\left(-\frac{\hat{b}}{\hat{a}}\right).  (5)

Fig. 4 Geometric relationship between the two line models in two different spaces

Fig. 5 Example of a constrained Hough space setting

Table 1 shows a comparison of the needed voting space and the total voting counts between the original Hough transform and our constrained Hough transform when applying each algorithm to the same PDS with 100 points. Here, we quantize the Hough space with a resolution of 1 cm on the ρ axis and 1° on the θ axis, limiting the range of ρ to 1.8 m in the case of the original Hough transform and constraining the voting space to ±5 cm on the ρ axis and ±5° on the θ axis in the case of the constrained Hough transform. After voting every point of a given cluster into the Hough space, we find the cell (ρMax, θMax) acquiring the maximum voting count, which is used as an element of the line feature vector describing the given cluster.

Table 1 Comparison of the constrained HT with the original HT

                      Original HT        Constrained HT
Needed voting space   180 × 360          11 × 11
Total voting counts   180 × 360 × 100    11 × 11 × 100
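A minimal C++ sketch of the constrained voting is given below, using the window and resolution of Table 1 (±5 cm and ±5° at 1 cm / 1° resolution). The initial (ρ0, θ0) comes from the LS parameters via (5); the rule that a point votes for a cell when its residual against the cell's candidate line is below half the ρ resolution is our own assumption about the voting criterion, as are all names.

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

constexpr double kDeg = 3.14159265358979323846 / 180.0;

// Initial (rho0, theta0) from the LS line a*x + b*y + c = 0, per (5).
std::pair<double, double> initialPolar(double a, double b, double c) {
    return { std::fabs(c) / std::hypot(a, b), std::atan(-b / a) };
}

// Refine (rho0, theta0) by voting over an 11 x 11 window only.
std::pair<double, double> constrainedHough(const std::vector<Pt>& cluster,
                                           double rho0, double theta0) {
    constexpr int NR = 11, NT = 11;          // +/-5 cells around the center
    constexpr double dRho = 0.01;            // 1 cm
    constexpr double dTheta = 1.0 * kDeg;    // 1 degree
    std::array<std::array<int, NT>, NR> votes{};

    for (const Pt& p : cluster)
        for (int i = 0; i < NR; ++i)
            for (int j = 0; j < NT; ++j) {
                double rho = rho0 + (i - NR / 2) * dRho;
                double theta = theta0 + (j - NT / 2) * dTheta;
                // Residual of p against the line x cos(theta) + y sin(theta) = rho.
                double r = p.x * std::cos(theta) + p.y * std::sin(theta) - rho;
                if (std::fabs(r) < 0.5 * dRho) ++votes[i][j];
            }

    int bi = 0, bj = 0;                      // cell with the maximum count
    for (int i = 0; i < NR; ++i)
        for (int j = 0; j < NT; ++j)
            if (votes[i][j] > votes[bi][bj]) { bi = i; bj = j; }

    return { rho0 + (bi - NR / 2) * dRho, theta0 + (bj - NT / 2) * dTheta };
}
```

The same vote counts can then be reused to evaluate the covariance of (7) below, since every cell already stores its count C(ρk, θl).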
3.5 Measurement uncertainty estimation and last filtering using the combined uncertainty measure

To properly estimate the location of the features within the EKF framework, we need a measurement uncertainty model for the constrained HT:

z = h(x) + v, \qquad C(v) = \begin{bmatrix} c_{\rho\rho} & c_{\rho\theta} \\ c_{\theta\rho} & c_{\theta\theta} \end{bmatrix},  (6)

where z is the measurement model and v is the sensor measurement noise with zero mean and covariance C(v). In the general case, C(v) is modeled as a combination of constant elements due to the difficulty of estimating it. However, we developed our own measurement noise model based on the constrained Hough space voting result as follows:

C(v) = \begin{bmatrix} c_{\rho\rho} & c_{\rho\theta} \\ c_{\theta\rho} & c_{\theta\theta} \end{bmatrix} = \frac{1}{C_{total}} \sum_k \sum_l C_{(\rho_k,\theta_l)} \begin{bmatrix} \rho_k - \rho_{Max} \\ \theta_l - \theta_{Max} \end{bmatrix} \begin{bmatrix} \rho_k - \rho_{Max} \\ \theta_l - \theta_{Max} \end{bmatrix}^T,  (7)

where C_total is the total voting count and C(ρk, θl) is the voting count of the Hough space cell whose ρ is ρk and θ is θl. From cρρ and cθθ in (7), we calculate the combined uncertainty measure U of each cluster as follows:

U = w_l / l + w_\rho c_{\rho\rho} + w_\theta c_{\theta\theta},  (8)

where l is the length of the estimated line as shown in Fig. 6 and wl, wρ and wθ are empirical weights which can be determined statistically (details in Appendix 1). Using this combined uncertainty measure U, we select a set of reliable clusters whose U is lower than a threshold and encode each of these clusters as a line feature vector, l = {ρ, θ, l, (xs, ys), (xt, yt)}, as depicted in Fig. 6.

Fig. 6 Parameters of a line feature vector

4 Geometrically constrained EKF framework and line feature association

In this paper, we employ a constrained EKF framework for SLAM using line features. The constrained EKF imposes a certain relationship between the lines of interest. These relationships between the lines have also been used in the orthogonal SLAM (Nguyen et al. 2006), which is similar to our geometrically constrained EKF framework in its use of parallelism and orthogonality between lines. However, big differences exist, as follows:

1. They assume a constant uncertainty for all the observed lines. However, we build our own measurement uncertainty model.
2. They separate the orthogonalization of the observed lines from the estimation of their positions using the EKF. Therefore, the uncertainty of the map feature matched with the observed feature is not considered in the orthogonalization. In contrast, we carry out orthogonalization in conjunction with the estimation of the observed line feature location through a single measurement update step.

In short, the orthogonal SLAM is an algorithm which does not fully consider the influence of uncertainties in the estimation process.

4.1 State vector and covariance matrix

In an EKF based framework, the state vector and its covariance matrix are partitioned into the robot pose and the landmark positions. In our constrained EKF, they are formulated as follows:

x = \begin{bmatrix} r \\ l^1 \\ \vdots \\ l^N \end{bmatrix}, \qquad P = \begin{bmatrix} P_{rr} & P_{r l^1} & \cdots & P_{r l^N} \\ P_{l^1 r} & P_{l^1 l^1} & \cdots & P_{l^1 l^N} \\ \vdots & \vdots & \ddots & \vdots \\ P_{l^N r} & P_{l^N l^1} & \cdots & P_{l^N l^N} \end{bmatrix},  (9)

where r = [rx, ry, rθ]^T is the robot pose and lk = [ρk, θk]^T represents the kth line feature position in Hough space. Note that although l, (xs, ys) and (xe, ye) of the line feature vector are not used in our EKF formulation, they will be used to associate line features in Sect. 4.4.

4.2 Measurement model for the line feature

Fig. 7 Measurement value of line feature

Figure 7 shows the measurement value of a line feature lk = [ρk, θk]^T observed by a robot whose pose is r = [rx, ry, rθ]^T. The measurement model in the EKF is the projection function that maps the line features in the state vector from the world to the robot centered coordinate frame, described by:

\begin{bmatrix} z_\rho \\ z_\theta \end{bmatrix} = \begin{bmatrix} \rho_r^k \\ \theta_r^k \end{bmatrix} = \begin{bmatrix} \sqrt{dx^2 + dy^2} \\ \tan^{-1}(dy/dx) - r_\theta \end{bmatrix},
dx = (-r_x \cos\theta^k - r_y \sin\theta^k + \rho^k)\cos\theta^k,
dy = (-r_x \cos\theta^k - r_y \sin\theta^k + \rho^k)\sin\theta^k,  (10)

where [zρ, zθ]^T denotes the projected location of lk in the robot centered coordinate frame and tan^{-1}(dy/dx) returns the angle from the x_w axis to the point (dx, dy).
Fig. 8 Geometric constraints imposed on the lines

4.3 Measurement model with geometric constraints

We consider three kinds of geometric constraints between the lines, as depicted in Fig. 8. When the first line is extracted, we align the first extracted line l1 parallel with the x axis, i.e., l1's angle θ1 is fixed to π/2. In this aligning process, we assume that the robot always starts SLAM with wall following, after finding and approaching a wall. Therefore, the first extracted line l1 is believed to be a line from one of the walls. Figure 8(a) shows the parallel (l∥) and the orthogonal (l⊥) relationships between the first line l1 and the others. The kth line lk is parallel with l1 if |θ1 − θk| < εθ and orthogonal if |θ1 − θk − π/2| < εθ, where εθ is an empirically determined threshold. Figure 8(b) shows the collinear relationship between two lines, lw and lw′, extracted from the same wall but separated by furniture contacting that wall, which satisfies the conditions |θw − θw′| < εθ and |ρw − ρw′| < ερ. These constraints are formulated as follows:

z_{ang} = [\theta^1 - \theta^k + \alpha], \qquad \alpha = \begin{cases} 0 & \text{if } |\theta^1 - \theta^k| < \varepsilon_\theta, \\ -\pi/2 & \text{if } |\theta^1 - \theta^k - \pi/2| < \varepsilon_\theta, \end{cases}  (11)

z_{col} = \begin{bmatrix} \rho^w - \rho^{w'} \\ \theta^1 - \theta^k \end{bmatrix},  (12)

where z_ang is a measurement model describing the parallel and orthogonal relationships between lines l1 and lk, and z_col describes the collinearity between lines lw and lw′. These two measurements z_ang and z_col become zero when the corresponding constraints are satisfied.

4.4 Line feature association

Data association has been known to be a crucial issue for successful SLAM. We associate line features through a weighted Euclidean distance measure in Hough space (D_H) and the length of overlap (L_ov) between lines, as follows:

D_{H_i}^{\,j} = \sqrt{\bar{w}_\rho (\rho^j - \rho^i)^2 + \bar{w}_\theta (\theta^j - \theta^i)^2}, \qquad \bar{w}_\rho + \bar{w}_\theta = 1,  (13)

L_{ov_i}^{\,j} = l^i + l^j - \sqrt{(x_e^j - x_s^i)^2 + (y_e^j - y_s^i)^2} \quad \begin{cases} \text{positive} & \text{if there exists overlap,} \\ \text{negative} & \text{otherwise,} \end{cases}  (14)

where D_H_i^j is the weighted Euclidean distance, w̄ρ and w̄θ are the normalized values of wρ and wθ in (8), and L_ov_i^j is the length of overlap between li and lj. Using these two measures, we can associate an observed line feature l_obv with a line feature in the state vector as follows:

l_{mat} = \arg\min_k D_{H_{obv}}^{\,k} \quad \text{such that } D_{H_{obv}}^{\,k} < \varepsilon_{D_H} \text{ and } L_{ov_{obv}}^{\,k} > 0,  (15)

l_{col} = \arg\min_k D_{H_{obv}}^{\,k} \quad \text{such that } D_{H_{obv}}^{\,k} < \varepsilon_{D_H} \text{ and } L_{ov_{obv}}^{\,k} < 0,  (16)

where l_mat is the line matched with l_obv. When two line features are associated, we first estimate the new line parameters of l_mat through a measurement update using (10). We then estimate the new end-points by projecting the outermost points of the observed cluster onto the updated line feature, as shown in Fig. 6, if L_ov > 0 and L_ov < l of l_obv; otherwise, we estimate the new end-point by projecting the previous end-point onto the updated line feature instead. After updating the end-point, we update the length l of l_mat by calculating the distance between the start-point and the new end-point. l_col is collinear with l_obv without overlap, as shown in Fig. 8(b); in this case we use the measurement model (12) and insert l_obv into the state vector. Finally, the line features which do not satisfy conditions (15, 16) but satisfy the condition in (11) are inserted into the state vector as new features; we then apply measurement model (11) according to their geometrical relationship with l1. All other lines are not mapped to the feature space for localization.
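To illustrate (13)–(16), the sketch below scans the map for the nearest line in the weighted Hough distance, separating overlapping candidates (15) from collinear, non-overlapping ones (16). The LineFeature layout mirrors the feature vector of Sect. 3.5; the single-pass search and the boolean selector are our own simplifications of the association logic.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct LineFeature {
    double rho, theta, len;
    double xs, ys, xe, ye;  // start- and end-points
};

// Weighted Hough-space distance (13); wRho + wTheta = 1.
double houghDist(const LineFeature& a, const LineFeature& b,
                 double wRho, double wTheta) {
    return std::sqrt(wRho * (a.rho - b.rho) * (a.rho - b.rho) +
                     wTheta * (a.theta - b.theta) * (a.theta - b.theta));
}

// Overlap length (14): positive when the two segments overlap.
double overlap(const LineFeature& a, const LineFeature& b) {
    return a.len + b.len - std::hypot(b.xe - a.xs, b.ye - a.ys);
}

// Association per (15)/(16): the nearest map line within epsDH is the
// match if it overlaps the observation, or the collinear partner if not.
// Returns an index into 'map', or -1. 'wantOverlap' selects (15) vs (16).
int associate(const LineFeature& obs, const std::vector<LineFeature>& map,
              double wRho, double wTheta, double epsDH, bool wantOverlap) {
    int best = -1;
    double bestD = std::numeric_limits<double>::max();
    for (std::size_t k = 0; k < map.size(); ++k) {
        double d = houghDist(obs, map[k], wRho, wTheta);
        bool ov = overlap(obs, map[k]) > 0.0;
        if (d < epsDH && ov == wantOverlap && d < bestD) {
            bestD = d;
            best = int(k);
        }
    }
    return best;
}
```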
4.5 Line merging

In the loop closing phase shown in Fig. 9, the last extracted line feature lN, which is collinear with l1 but has no overlap, is enlarged as the robot approaches the starting position. When lN is enlarged enough to satisfy condition (15) with l1, these two lines must be merged into a single line.

Fig. 9 The loop closing phase

This merging process is completed by updating the elements of l1 and deleting lN and its related covariance elements (the corresponding row and column blocks in (9)) from the state vector and covariance matrix. Figure 10 shows the merging process of lines l1 and lN. First, the line parameters (ρ1, θ1) of l1 are updated using the measurement model (10); then the end-point (xt1, yt1) of l1 and the start-point (xsN, ysN) of lN are projected onto the updated line, l1′. These projected points, (xt1′, yt1′) and (xs1′, ys1′), are used as the end-point and the start-point of l1′, respectively, and the distance l1′ between (xs1′, ys1′) and (xt1′, yt1′) is used as the length of l1′. Consequently, the elements of the merged line l1′ become {ρ1′, θ1′, l1′, (xs1′, ys1′), (xt1′, yt1′)}.

Fig. 10 The line merging process
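The end-point bookkeeping above reduces to orthogonally projecting the stored end-points onto the updated line x cos θ + y sin θ = ρ. A minimal C++ sketch (names ours):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Orthogonal projection of a stored end-point onto the updated line
// x cos(theta) + y sin(theta) = rho; the projections of the old end-points
// become the end-points of the merged line, and their distance its length.
Pt projectOntoLine(const Pt& p, double rho, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    double r = p.x * c + p.y * s - rho;   // signed distance from p to the line
    return { p.x - r * c, p.y - r * s };  // remove the normal component
}
```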
5 Line feature based environment modeling and active exploration

Because we use wall following to explore a given environment, there may be invisible lines which cannot be observed during wall following, as shown in Fig. 11. In this section, we develop a line feature based environment modeling scheme to first reconstruct the environment with predefined objects consisting of connected line segments and then judge whether there is any invisible line. If there is, we move the robot to the vicinity of the estimated position of the invisible line and explore its circumference by contour-following.

Fig. 11 Wall-following and initial environment modeling

5.1 Line feature based environment modeling

In a general indoor environment, most long line segments come from walls and furniture in a room. From this assumption, we define three kinds of objects consisting of line segments to model an indoor environment.

5.1.1 Definition of a room

A room R is an object consisting of the 4 walls, where each wall consists of lines. We define the room object in Fig. 11 as follows:

R = {Wb, Wr, Wt, Wl} = {{l1}, {l3}, {l5}, {l7, l11}},

where Wb, Wr, Wt and Wl are the bottom, right, top and left walls, respectively.

5.1.2 Definition of a wall-contact object

A wall-contact object, Ow, is a furniture object represented by 2 vertical and 1 horizontal lines, because one side of the furniture in contact with the wall cannot be observed by the robot. The object O1w in Fig. 11 is thus represented by:

O1w = {l8, l9, l10},

where l8, l9 and l10 are the first vertical line, a horizontal line and the second vertical line, respectively.

5.1.3 Definition of a free object

A free object, Of, is a furniture object which is located apart from all the walls and thus consists of 4 lines, because all its sides can be observed. The two free objects, O1f and O2f, in Fig. 11 are:

O1f = {l2, l14, l13, l12},
O2f = {l15, l4, l6, l16}.

5.2 Active exploration to fill in the invisible lines

The invisible lines in the wall-following process should belong to free objects, as shown in Fig. 11. Therefore, we can check the existence of invisible lines by checking the number of lines belonging to each free object. If there is any free object which is not complete, i.e., short of 4 lines, then the robot needs to explore around that free object. To this end, we need to decide where to start contour following among the set of lines belonging to the incomplete free object. We choose the starting location in the vicinity of the center of the nearest line from the robot's current position, as depicted in Fig. 12. Here, we need to complete two incomplete free objects after the wall-following process. To do this, we find the nearest free object, O1f, from the set of incomplete free objects, {O1f, O2f}, then find the nearest line l2 of {l2, l12} belonging to O1f, and then move to P1, which is in the vicinity of the center of l2. Here, the distance between P1 and l2 depends on the marginal distance for contour following control. After arriving at P1, the robot follows the contour of O1f using the right-hand rule, finds the invisible lines l13 and l14 in sequence, and completes O1f. Then, we repeat the same to complete O2f.

Fig. 12 Active exploration and environment modeling
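One plausible data layout for these three object types, together with the completeness test that drives the exploration loop, is sketched below in C++. The representation by indices into the line feature map and all names are our illustrative choices, not the authors' data structures.

```cpp
#include <cstddef>
#include <vector>

// Objects hold indices into the global line feature map (Sect. 5.1).
struct Room        { std::vector<int> wb, wr, wt, wl; };  // lines of the 4 walls
struct WallContact { int v1, h, v2; };                    // 2 vertical + 1 horizontal
struct FreeObject  { std::vector<int> lines; };           // complete with 4 lines

// A free object seen only from wall following is incomplete and becomes
// a target for contour-following exploration.
bool isComplete(const FreeObject& o) { return o.lines.size() == 4; }

// Index of the nearest incomplete free object, given precomputed distances
// from the robot to each object's nearest known line; -1 if all complete.
int nextExplorationTarget(const std::vector<FreeObject>& objs,
                          const std::vector<double>& distToNearestLine) {
    int best = -1;
    for (std::size_t i = 0; i < objs.size(); ++i)
        if (!isComplete(objs[i]) &&
            (best < 0 || distToNearestLine[i] < distToNearestLine[best]))
            best = int(i);
    return best;
}
```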
6 Experimental results

We have conducted simulated and real experiments to verify the performance of our approach. Figure 13 shows the sensor configuration of the robot used as an experimental testbed. It is a differential-drive robot equipped with seven IR sensors with a range from 20 cm to 150 cm. However, we had to limit the detection range to 120 cm due to the large sensor noise.

Fig. 13 Sensor configuration of our robot platform

6.1 Simulation results

The proposed algorithm has been tested in a simulated environment coded with Visual C++ on a Pentium 4 3.0 GHz PC with 1 GB of RAM. Three types of simulation were carried out.

6.1.1 Modeling of simple rectangular objects

The environment pictured in Fig. 14 is a 4 m × 5.5 m room containing one wall-contact object and two free objects.

Fig. 14 Simulation result in the indoor environment with simple rectangular objects. The red ellipse in (g) represents a Gaussian distribution of uncertainty of each line feature and its horizontal and vertical axes stand for cρρ and cθθ, respectively
In Fig. 14(a), the robot explores the room by wall following and builds the initial incomplete model in Fig. 14(d), which consists of one wall-contact object and two incomplete free objects. After building this initial model, P1, the starting position of a contour following trajectory around the nearest incomplete free object, O1f, is estimated. After the robot reaches P1, it follows the contour of O1f and completes O1f by filling in the two remaining lines, as shown in Fig. 14(b), (e). After completing O1f, the same process is applied to O2f, as shown in Fig. 14(c), (f). Figure 14(g) shows the accuracy of the resulting map. Figure 15 shows the localization errors during the entire mapping process depicted in Fig. 14 and demonstrates that each time a correction step is performed, the absolute error in position or orientation is drastically reduced during a 340-second simulation.

Fig. 15 Localization errors during the environment modeling process

6.1.2 Modeling of compound rectangular objects

The environment shown in Fig. 16 is a 4 m × 4 m room containing two wall-contact objects and two free objects.
In Fig. 16(a), the robot explores the room by wall following and builds the initial incomplete model in Fig. 16(d), which consists of four wall-contact objects and two incomplete free objects. Here, the top of O1w and the bottom of O2w are calculated as the mean of the y values of the start-point of l2 and the end-point of l3, and the top of O4w becomes the bottom of O3w rather than the top of the room model. After building this initial model, P1, the starting position of a contour following trajectory around the nearest incomplete free object, O2f, is estimated. After the robot reaches P1, it follows the contour of O2f and tries to complete O2f by filling in the remaining lines, as shown in Fig. 16(b), (e). In this filling step, two lines, l15 and l16, which are candidates for the right side of O2f, are extracted, and the leftmost line, l15, is used as the right side of O2f. Therefore, O2f is modeled using the initially extracted three lines, l11, l13 and l14, and the additional line, l15. After completing O2f, O3f is built using the remaining line, l16, and the lines of O2f, as shown in Fig. 16(e). After completing O2f and O3f, the same process is applied to O1f. The completed result for O1f and O4f differs from that for O2f and O3f in the separating process, because the initially extracted lines are used as elements of O2f with higher priority than the additionally extracted lines. This simulation result shows that a set of contacting furniture which cannot be modeled with a single rectangular model can still be modeled using the compound rectangular model.

Fig. 16 Simulation result in the indoor environment with compound rectangular objects

6.1.3 Comparison of the convergence performance

In this simulation, we compare the EKF convergence performance with and without the geometric constraint. Figure 17(a) shows a situation where the robot meets a vertical wall while following a horizontal wall. In this phase, the robot gathers the sensor readings and builds a PDS while conducting the turning motion. Figure 17(b) shows that a PDS built during the turning motion is biased (i.e. not orthogonal to the horizontal wall) due to the beam width (5 degrees in our simulation) and wheel slippage. Figure 17(c) shows that this bias can be overcome efficiently by inserting a geometric constraint in the measurement update step. This bias can affect the entire mapping result and make the covariance matrix of the line feature converge more slowly when the geometric constraint is not used.

6.2 Real-world results

To verify our algorithm in a real environment, we carried out two kinds of experiments. In the first one, the robot autonomously repeated the same trajectory five times by wall-following to evaluate the accuracy of localization, as shown in Fig. 18. Although the ground truth is not shown, the result shows that the error of the corrected pose is bounded while the odometry error gradually increases without bound. In the second experiment, we tested the performance of our line feature based environment modeling under the two experimental setups shown in Fig. 19(a) and Fig. 20(a) in a 4 m × 4 m room. Setup 1 in Fig. 19(a) consists of 4 free objects. After wall-following, 4 incomplete free objects show up in the initial model. Note that the robot does not follow the contour of the fourth free object, because some invisible lines of this object are already observed when the robot follows the contours of the first and the third free objects. The robot trajectory is plotted with a solid blue line using the corrected pose. Setup 2 in Fig. 20(a) consists of two wall-contact objects and two free objects. In the initial model, two wall-contact objects and two incomplete free objects are shown in Fig. 20(b). Later on, these two incomplete objects are properly completed through active exploration, as shown in Fig. 20(c).
Fig. 17 Comparison of the convergence performance with and without the geometric constraint

Fig. 18 Trajectory with and without correction

7 Conclusion

We have presented a novel line feature based SLAM based on active exploration that is very efficient for low cost service robots (such as cleaning robots) with sparse and noisy range sensors. The methodology is based on a geometrically constrained EKF framework which imposes geometric constraints between the lines in the measurement model. To our knowledge, this is the first effort to use geometric constraints between the lines for a line feature based SLAM using only sparse and noisy range sensors. By imposing geometric constraints, we can estimate the 2D line feature positions more accurately as well as allow their covariance matrices to converge more rapidly than without these constraints. In addition, we developed an efficient line feature extraction algorithm based on a constrained Hough transform whose voting space is restricted to the vicinity of the initial line parameters, which are estimated by applying the least square line fitting technique to each cluster obtained by an IEPF clustering algorithm. Moreover, we developed a line feature based modeling scheme which models the environment with three kinds of objects, Room, Wall-contact and Free objects, and also presented an active exploration control scheme which is able to observe invisible lines autonomously without human intervention. Future work will focus on how to efficiently cover the target environment by decomposing it into a set of regions based on our line feature based environment model.
Fig. 19 Modeling process of the first test configuration

Fig. 20 Modeling process of the second test configuration

Appendix 1: The choice of weighting parameters in (8)

We can determine these weighting parameters statistically using an accumulated histogram of each uncertainty. To do this, we gather a set of 100 scans from the test environment, apply our line extraction algorithm to each scan, and store l, cρρ and cθθ of the extracted lines whose length l is longer than 70% of L, the preset minimum length eligible for mapping. Using the stored data, we build an accumulated uncertainty histogram of cρρ from the N stored data, as shown in Fig. 21. In this accumulated histogram, we first find cρρ^d, where the accumulated number of lines is 50% of N, and then calculate wρ using (17) and (18):

U_{\rho\rho} = \frac{c_{\rho\rho} - c_{\rho\rho}^{d} + \mathrm{Max}(c_{\rho\rho}^{min}, c_{\rho\rho}^{max})}{\mathrm{Max}(c_{\rho\rho}^{min}, c_{\rho\rho}^{max})} = \frac{c_{\rho\rho}}{\mathrm{Max}(c_{\rho\rho}^{min}, c_{\rho\rho}^{max})} + \mathrm{const} \cong \frac{c_{\rho\rho}}{\mathrm{Max}(c_{\rho\rho}^{min}, c_{\rho\rho}^{max})},  (17)

w_\rho = \frac{1}{\mathrm{Max}(c_{\rho\rho}^{min}, c_{\rho\rho}^{max})},  (18)

where Uρρ is the weighted uncertainty of cρρ. The same procedure is repeated to calculate wl and wθ as well. These parameters are closely related to the stability of the distance measure of the range sensor, so the procedure is not always robust: if the range sensor model is changed, new weighting parameters must be found.

Fig. 21 wρ determination from the accumulated histogram of cρρ
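Read literally, (18) sets each weight to the reciprocal of the larger of the histogram's extreme values. A minimal C++ sketch of that rule, reusable for wl and wθ (function name ours; assumes a non-empty sample set):

```cpp
#include <algorithm>
#include <vector>

// Weight per (18): w = 1 / Max(c_min, c_max), where c_min and c_max are
// the extreme values among the stored uncertainty samples (here of c_rho_rho).
double weightFromSamples(const std::vector<double>& samples) {
    auto [mn, mx] = std::minmax_element(samples.begin(), samples.end());
    return 1.0 / std::max(*mn, *mx);
}
```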
Appendix 2: Possible failure modes of our algorithm

Our algorithm's main goal is to extract the main structure of a given rectangular environment, whose utility for localization is higher than that of other features. Therefore, we decided to use only line features which are long enough, because such long lines are likely to be extracted from static objects (e.g. furniture or walls) and therefore hardly change while the cleaning task is carried out. We certainly can have chairs and tables, but we simply do not consider these easily movable objects in feature mapping. However, these easily movable objects can still affect the result of our algorithm. The following results present some possible failure modes of our algorithm.

Figure 22(a) shows that our algorithm can fail to build the Room model when only small line segments of a wall are observable due to a wall-contact object. Here, the left side of the Room is incorrectly modeled using the lines extracted from the two wall-contact objects, because we assume that the leftmost line is the left side of the Room. Figure 22(b) shows that a wall-contact object cannot be correctly built due to a chair in front of it. In this case, we could complete this object by estimating the right side of the object using the rightmost points of the two horizontal lines, as a possible solution.

Fig. 22 Possible failure modes of our algorithm

References

Burguera, A., et al. (2005). Robust scan matching localization using sonar range finders. In Proceedings of IROS '05 (pp. 1367–1372), Alberta, Canada.
Choi, Y., & Oh, S. (2006). Grid-based visual SLAM in complex environment. In Proceedings of IROS '06 (pp. 2563–2569), Beijing, China.
Crowley, J. L. (1989). World modeling and position estimation for a mobile robot using ultrasonic ranging. In Proceedings of ICRA '89 (pp. 674–680), Scottsdale, AZ, USA.
Davison, A. J. (2003). Real-time simultaneous localization and mapping with a single camera. In Proceedings of ICCV '03 (pp. 1403–1410), Nice, France.
Diosi, A., & Kleeman, L. (2005). Laser scan matching in polar coordinates with application to SLAM. In Proceedings of IROS '05 (pp. 1439–1444), Canada.
Duda, R. O., & Hart, P. E. (1973). Pattern classification and scene analysis. New York: Wiley.
Frese, U. (2006). A discussion of simultaneous localization and mapping. Autonomous Robots, 20(1), 25–42.
Großmann, A., & Poli, R. (1999). Robust mobile robot localisation from sparse and noisy proximity readings. In Proceedings of the IJCAI '99 workshop on reasoning with uncertainty in robot navigation (RUR-99).
Hähnel, D., Burgard, W., Fox, D., & Thrun, S. (2003). An efficient FastSLAM algorithm for generating maps of large-scale cyclic environments from raw laser range measurements. In Proceedings of IROS '03 (pp. 206–211), Las Vegas, Nevada.
Kleeman, L. (2003). Advanced sonar and odometry error modeling for simultaneous localisation and map building. In Proceedings of IROS '03 (pp. 699–704), Las Vegas, Nevada.
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
Murray, D., & Jennings, C. (1998). Stereo vision based mapping and navigation for mobile robots. In Proceedings of ICRA '98 (pp. 1694–1699), Leuven, Belgium.
Nguyen, V., Martinelli, A., Tomatis, N., & Siegwart, R. (2005). A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics. In Proceedings of IROS '05 (pp. 1768–1773), Edmonton, Canada.
Nguyen, V., Harati, A., Martinelli, A., & Siegwart, R. (2006). Orthogonal SLAM: a step toward lightweight indoor autonomous navigation. In Proceedings of IROS '06 (pp. 5007–5012), Beijing, China.
Tardos, J. D., Neira, J., Newman, P. M., & Leonard, J. J. (2002). Robust mapping and localization in indoor environments using sonar data. The International Journal of Robotics Research, 21(4), 311–330.
Tomono, M. (2004). A scan matching method using Euclidean invariant signature for global localization and map building. In Proceedings of ICRA '04 (pp. 866–871), LA, USA.

Young-Ho Choi received the B.S. degree from Hanyang University, Seoul, Korea in 2003. He is currently working toward a Ph.D. degree in the Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea. His research interests include SLAM and autonomous navigation of mobile robots.

Tae-Kyeong Lee received the B.S. degree from Pohang University of Science and Technology (POSTECH), Pohang, Korea in 2006. He is currently working toward a Ph.D. degree in the Department of Electrical Engineering, POSTECH, Korea. His research interests include complete coverage path planning and autonomous navigation of mobile robots.

Se-Young Oh received the B.S. degree in Electronics from Seoul National University, Seoul, Korea in 1974, and the M.S. and Ph.D. degrees in Electrical Engineering from Case Western Reserve University, Cleveland, Ohio, USA in 1978 and 1981, respectively. He is currently a professor at POSTECH, Korea. His research interests include soft computing technology (neural networks, fuzzy logic, and evolutionary computation) and its application to robotics, face detection and recognition, and intelligent vehicles. He is a senior member of the IEEE.