Self-Calibration of an On-Board Stereo-Vision System
5 authors, including J.M. Armingol, University Carlos III de Madrid.
Intelligent Vehicles Symposium 2006, June 13-15, 2006, Tokyo, Japan
point projected onto the road (through the inverse perspective transformation) from the right camera should coincide with the projection from the left camera. The discrepancies between the two sets of lines can be evaluated by the sum of the squared distances between each point and its correspondent from the other camera:

    E_1 = \sum_i \| \vec{x}_{i,left} - \vec{x}_{i,right} \|^2        (3)

where \vec{x}_{i,left} and \vec{x}_{i,right} are the sets of points extracted from the left camera and the right camera, respectively.

The second term evaluates the parallelism between the right and left borders of the lane. In Eq. (4) and Eq. (5), \theta is the angle between the line and the horizontal axis. Subscripts 1 and 2 represent the first and second line in the image (i.e. right and left lane border), while subscripts "right" and "left" represent the right and left camera, respectively.

    E_2 = | \theta_{right,1} - \theta_{right,2} | + | \theta_{left,1} - \theta_{left,2} |        (4)

    E_2 = \min( | \theta_{right,1} - \theta_{right,2} |, | \theta_{left,1} - \theta_{left,2} | )        (5)

Equation (4) does not work: experiments using it show that it gives a very high error even when the parameters are close to the true ones. In contrast, Eq. (5) gives better results, and it expresses that at least one pair of lines is well aligned.

Both terms E_1 and E_2 should be zero (or nearly zero) in the perfect case. Now a fitness function must be created from them. Experience has proved that it is indispensable for the fitness function to give both terms the same importance, and to have a high resolution close to the true solution.

Next, E_1 and E_2 are normalized and transformed into F_1 and F_2, as in Eq. (6). K is a constant that helps to define an upper limit (1/K), which is the same for both terms, so that they have the same importance. The use of the inverse also gives the fitness a higher resolution near the global maximum.

    F_1 = 1 / (E_1 + K);    F_2 = 1 / (E_2 + K)        (6)

The global fitness function cannot be a simple sum of F_1 and F_2, because the genetic algorithm tends to maximize only one of them at the expense of the other. Thus the fitness function has been defined as the minimum of the two terms:

    Fitness = \min(F_1, F_2)        (7)

In the experiments, the constant K has been empirically set to 10^-4, so that the fitness is constrained to the interval [0, 10^4].

Fig. 3. Calibration pattern: (a) Hough parameterization; (b) calibration pattern, where (+) represents the pattern from the right camera and (o) represents the pattern from the left camera.

3 Results

Tests in both synthetic and real environments have been carried out. The behaviour of the algorithm in a real environment has been tested with the IvvI platform (figure 4). IvvI is a research platform for the implementation of systems based on computer vision, with the goal of building an Advanced Driver Assistance System (ADAS). However, a real environment does not allow evaluating the estimation, because it is difficult to measure the true parameters. Thus synthetic images (figure 5) have been used in order to evaluate the performance of the algorithm. These images have been generated through a perspective transformation with the same intrinsic parameters as the cameras mounted on the IvvI (Table I).

Table II shows the parameters of the genetic algorithm (GA), and Table III shows the output of the GA and the comparison with the true parameters. For the first set of

Fig. 4. (left) IvvI vehicle; (top-right) detail of the trinocular vision system; (bottom-right) detail of the processing system.
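As a concrete illustration of Eqs. (3) and (5)-(7), the fitness evaluation can be sketched as below. This is a reconstruction for illustration only, not the authors' implementation; the function name, the array shapes, and the use of NumPy are assumptions.

```python
import numpy as np

K = 1e-4  # constant from the text; sets the upper fitness bound 1/K = 1e4

def fitness(points_left, points_right, angles_left, angles_right):
    """Sketch of the two-term fitness of Eqs. (3), (5), (6), (7).

    points_*: (N, 2) arrays of lane points reprojected onto the road plane.
    angles_*: length-2 arrays with the angles of the two lane borders
              detected in each camera.
    """
    # Eq. (3): sum of squared distances between corresponding points
    e1 = np.sum(np.linalg.norm(points_left - points_right, axis=1) ** 2)
    # Eq. (5): keep only the best-aligned pair of lane borders
    e2 = min(abs(angles_right[0] - angles_right[1]),
             abs(angles_left[0] - angles_left[1]))
    # Eq. (6): inverse normalisation; both terms share the bound 1/K
    f1 = 1.0 / (e1 + K)
    f2 = 1.0 / (e2 + K)
    # Eq. (7): the minimum prevents the GA from sacrificing one term
    return min(f1, f2)
```

With perfectly aligned patterns (E_1 = E_2 = 0) the sketch returns the upper bound 1/K = 10^4, matching the interval stated in the text.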
parameters, it also includes the error of the estimation and the variance of the estimated parameters, calculated from ten executions of the algorithm. The variance of the result is about 10^2 in height, 10^-8 in pitch, and 10^-10 in roll; that is, the convergence of the algorithm is quite robust.

Fig. 6 shows an example of execution. The plot on the left represents the error of the solution and the convergence. The plot on the right shows the correspondence between the points projected from the right camera (crosses) and the left camera (circles). It also shows (with dots) the points projected onto the road using the true parameters. In other words, the dots represent the true position of the lane on the road, while the crosses and circles represent the supposed position using the estimated parameters. The error is about 1 meter at 35 meters ahead (3%), and about 5 centimeters at 4 meters (1.3%).

[Fig. 6 plots: left, log(best fitness) vs. generation (Best = 4118.3768, F1 = 5881.6232, F2 = 5881.7838; H = 1454.7319, 0.16846318, 0.090043347); right, lateral distance (m) vs. longitudinal distance (m).]
Fig. 6. GA execution example; (+) points projected from the right camera; (o) points projected from the left camera; (.) points projected using the true parameters.

In order to recreate a real environment, and to study the sensitivity ...

TABLE I
INTRINSIC PARAMETERS

TABLE II
PARAMETERS OF THE GA

Population representation     Real-valued
Population                    1000
Number of parents             90% of the population
Number of children            90% of the population
Crossover probability         70%
Mutation probability          1%
Max. number of generations    50

TABLE III
RESULTS OF THE GA

Parameters   Height (mm)   Pitch (rad / deg)   Roll (rad / deg)
True         1500          0.170 / 9.74        -0.090 / -5.15
Estimated    1458          0.169 / 9.69        -0.093 / -5.35
Variance     1.34          9x10^-11            2x10^-7
Error        42.0          0.001 / 0.05        0.003 / 0.20
True         1500          0.170 / 9.74        0.090 / 5.16
Estimated    1484          0.168 / 9.62        0.081 / 4.65
True         1100          0.170 / 9.74        -0.090 / -5.16
Estimated    1065          0.168 / 9.64        -0.100 / -5.74

TABLE IV
RESULTS WITH GAUSSIAN NOISE ADDED TO ORIGINAL IMAGES

Parameters   Height (mm)   Pitch (rad / deg)   Roll (rad / deg)
True         1500          0.170 / 9.74        -0.090 / -5.16
5% noise     1455          0.169 / 9.65        -0.090 / -5.16
10% noise    1677          0.170 / 9.72        -0.042 / -2.42
20% noise    1663          0.170 / 9.71        -0.040 / -2.28
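Under the settings of Table II, a minimal real-valued GA could look like the sketch below. The chromosome layout (height, pitch, roll), the search bounds, the arithmetic-crossover and uniform-mutation operators, and the toy `evaluate` stand-in are all assumptions for illustration; the paper does not specify its genetic operators, and in practice `evaluate` would be the stereo-reprojection fitness of Eqs. (3)-(7).

```python
import random

POP, GENS = 1000, 50          # population and max. generations (Table II)
P_CROSS, P_MUT = 0.70, 0.01   # crossover and per-gene mutation probability
# Assumed search ranges for (height_mm, pitch_rad, roll_rad):
BOUNDS = [(500.0, 2500.0), (-0.5, 0.5), (-0.5, 0.5)]

def evaluate(ind):
    # Toy stand-in peaking at "true" parameters (1500 mm, 0.17 rad, -0.09 rad)
    return -((ind[0] - 1500) ** 2 / 1e4
             + (ind[1] - 0.17) ** 2
             + (ind[2] + 0.09) ** 2)

def run_ga(seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: int(0.9 * POP)]        # 90% of the population breed
        children = []
        while len(children) < int(0.9 * POP):  # 90% replaced each generation
            a, b = rng.sample(parents, 2)
            if rng.random() < P_CROSS:         # arithmetic crossover
                w = rng.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            else:
                child = list(a)
            for i, (lo, hi) in enumerate(BOUNDS):
                if rng.random() < P_MUT:       # uniform re-sampling mutation
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = pop[: POP - len(children)] + children  # elitist replacement
    return max(pop, key=evaluate)

best = run_ga()
```

The top 10% of each generation is carried over unchanged, so the best solution found is never lost; this matches the robust convergence the paper reports across repeated executions.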
...meter at 35 meters ahead (3%), and about 5 centimeters at 4 meters (1.3%). With over 10% of noise, the error is about 40 centimeters at 4 meters ahead (10%), and 5 meters at 30 meters ahead (16%). It can be seen that the algorithm always converges before 20 generations.

[Figure panels for the noise experiments: log(best fitness) vs. generation and lateral distance (m) vs. longitudinal distance (m); panel (b), 10% noise: Best = 6702.5721, F1 = 3297.4279, F2 = 3298.1936; H = 1662.5203, 0.16953448, 0.039735898.]

Table V shows the results obtained from a sequence in a real environment, while figure 8 shows the first frame of the sequence and the output of the preprocessing steps. In this sequence the vehicle has just finished a curve to the left, so it is difficult to separate the variability that comes from the algorithm from the variability that comes from the inertial movements of the vehicle. The results are coherent with the location and orientation of the stereo system on board the vehicle.

Fig. 8. Lines detected and used to generate the calibration pattern; panels (e) and (f): lines detected.
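The noise experiments of Table IV depend on how "x% noise" is defined; one plausible reading, assumed here rather than taken from the paper, is zero-mean Gaussian noise whose standard deviation equals x% of the full 8-bit grey-level range:

```python
import numpy as np

def add_gaussian_noise(image, percent, seed=0):
    """Sketch (assumed interpretation): add zero-mean Gaussian noise with
    standard deviation = percent% of the 0-255 grey-level range, then clip
    back to valid 8-bit values."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, percent / 100.0 * 255.0, image.shape)
    noisy = image.astype(float) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Repeating the calibration on images perturbed this way at 5%, 10%, and 20% would reproduce the protocol behind Table IV under the stated assumption.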