sensors

Article
Simultaneous All-Parameters Calibration and
Assessment of a Stereo Camera Pair Using a Scale Bar
Peng Sun 1,2 , Nai-Guang Lu 1 , Ming-Li Dong 3, *, Bi-Xi Yan 3 and Jun Wang 3
1 Institute of Information Photonics and Optical Communications, Beijing University of Posts and
Telecommunications, Beijing 100876, China; sunpeng@bistu.edu.cn (P.S.); nglv2002@163.com (N.-G.L.)
2 Beijing Key Laboratory: Measurement and Control of Mechanical and Electrical System, Beijing Information
Science and Technology University, Beijing 100192, China
3 School of Instrument Science and Opto-Electronics Engineering, Beijing Information Science and Technology
University, Beijing 100192, China; yanbx@bistu.edu.cn (B.-X.Y.); wangjun@bistu.edu.cn (J.W.)
* Correspondence: dongml@bistu.edu.cn; Tel.: +86-010-8242-6915

Received: 26 September 2018; Accepted: 10 November 2018; Published: 15 November 2018 

Abstract: Highly accurate and easy-to-operate calibration (to determine the interior and distortion
parameters) and orientation (to determine the exterior parameters) methods for cameras in
large volumes are a very important topic for expanding the application scope of 3D vision and
photogrammetry techniques. This paper proposes a method for simultaneously calibrating, orienting
and assessing multi-camera 3D measurement systems in large measurement volume scenarios.
The primary idea is building 3D point and length arrays by moving a scale bar in the measurement
volume and then conducting a self-calibrating bundle adjustment that involves all the image points
and lengths of both cameras. Relative exterior parameters between the camera pair are estimated by
the five point relative orientation method. The interior, distortion parameters of each camera and
the relative exterior parameters are optimized through bundle adjustment of the network geometry
that is strengthened through applying the distance constraints. This method provides both internal
precision and external accuracy assessment of the calibration performance. Simulations and real data
experiments are designed and conducted to validate the effectiveness of the method and analyze its
performance under different network geometries. The RMSE of length measurement is less than
0.25 mm and the relative precision is higher than 1/25,000 for a two-camera system calibrated by
the proposed method in a volume of 12 m × 8 m × 4 m. Compared with the state-of-the-art point
array self-calibrating bundle adjustment method, the proposed method is easier to operate and can
significantly reduce systematic errors caused by wrong scaling.

Keywords: stereo photogrammetry; 3D vision; self-calibration; relative orientation; scale bar

1. Introduction
Photogrammetry is a technique for measuring spatial geometric quantities through obtaining,
measuring and analyzing images of targeted or featured points. Based on different sensor configurations,
photogrammetric systems are categorized into offline and online systems [1]. Generally, an offline
system uses a single camera to take multiple and sequential images from different positions and
orientations. The typical measurement accuracy lies between 1/50,000 and 1/100,000, which results
substantially from the 1/20–1/30 pixel target measurement accuracy and the self-calibrating bundle
adjustment algorithm [2].
Unlike offline systems, an online system uses two or more cameras to capture photos synchronously
and reconstruct space points at any moment. Online systems are generally applied to movement and
deformation inspection over a certain time period, and have been reported in industrial

Sensors 2018, 18, 3964; doi:10.3390/s18113964 www.mdpi.com/journal/sensors



applications such as foil material deformation investigation [3], concrete probe crack detection and
analysis [4], aircraft wing and rotor blade deformation measurement [5], wind turbine blade rotation
measurement and dynamic strain analyses [6,7], concrete beams deflection measurement [8], bridge
deformation measurement [9], vibration and acceleration measurement [10,11], membrane roof
structure deformation analysis [12], building structure vibration and collapse measurement [13,14],
and steel beam deformation measurement [15]. This noncontact and accurate technique is still finding
more potential applications.
Online systems can also be applied to medical positioning and navigation [16]. These systems are
generally composed of a rigid two camera tracking system, LED or sphere Retro Reflect Targets (RRT)
and hand held probes. NDI Polaris (NDI, Waterloo, ON, Canada) and Axios CamBar (AXIOS 3D,
Oldenburg, Lower Saxony, Germany) are the leading systems. Generally, the measurement distance
is less than 3 m while the coordinate measurement accuracy is about 0.3 mm and length accuracy
about 1.0 mm. To achieve this, these camera pairs are calibrated and oriented with the help of a
Coordinate Measuring Machine (CMM) [17] or special calibration frames in the laboratory, in an
independent process before measurement.
Online systems equipped with high frame rate cameras are also employed in 3D motion tracking,
such as the Vicon (Vicon, Denver, CO, USA) and Qualisys (Qualisys, Gothenburg, Sweden) systems.
In these systems, the number of cameras and their positions vary with the measurement volume,
which calls for calibration methods that can be operated on-site. Indeed, these 3D tracking systems
are oriented by moving a ‘T’ or ‘L’ shaped object in the measurement volume. Generally speaking,
motion tracking systems are characterized by real-time 3D data, flexibility and easy operation,
rather than high accuracy.
For multi-camera photogrammetric systems, interior, distortion and exterior parameters are
crucial for 3D reconstruction and need to be accurately and precisely calibrated before any
measurement. There are mainly two methods for calibrating and orienting multi-camera systems for
large volume applications: the point array self-calibrating bundle adjustment method and the moving
scale bar method. The point array self-calibrating bundle adjustment method is the most accurate
and widely used method. Using convergent and rotated images of well distributed stable 3D points,
bundle adjustment recovers parameters of the camera and coordinates of the point array. There is no
need to have any prior information about the 3D point coordinates. For example, a fixed station triple
camera system is calibrated in a rotatory test field composed of points and scale bars [18,19]. However,
sometimes in practice, it is difficult to build a well distributed and stable 3D point array, so the moving
scale bar method was developed.
Originally, the moving scale bar method was developed for exterior parameter determination.
Hadem [20] investigated the calibration precision of stereo and triple camera systems under different
geometric configurations. Specifically, he simulated and experimented with camera calibration in a
pure length test field, and he mentioned that a moving scale bar can be easily adapted to stereo camera
system calibration. Patterson [21,22] determined the relative orientation of a stereo camera light pen
system using a moving scale bar. For these attempts, interior orientations and distortion parameters of
the cameras are pre-calibrated before measurement. The accuracy is assessed by length measurement
performance according to VDI/VDE CMM norm. Additionally, the author mentioned that length
measurement of the scale bar provides on-site accuracy verification. Maas [23] investigated the moving
scale bar method using a triple camera system and proposed that the method is able to determine
interior orientations and is reliable enough to calibrate multi-camera systems. However, it is required
that the cameras be fully or partially calibrated in an offline process [24], and that the measurement
volume be small, for example 1.5 m × 1.7 m × 1.0 m [23].
This paper proposes a method for calibrating and orienting stereo camera systems. The method
uses just a scale bar and can be conducted on-site. Unlike previous moving scale bar methods,
this method simultaneously obtains interior, distortion and exterior parameters of the cameras without
any prior knowledge of the cameras’ characteristics. Additionally, compared with traditional point

array bundle adjustment methods, the proposed method does not require construction of 3D point
arrays, but achieves comparable accuracy.
The paper is organized as follows: Section 1 introduces the background of this research, including
the development of online photogrammetry, state-of-the-art techniques for calibrating and orienting
online photogrammetric cameras, and their limitations. Section 2 elaborates the mathematical models,
computational algorithms and the precision and accuracy assessment theory of the proposed method.
Sections 3 and 4 report the simulations and experiments designed to test the method. Advantages of
the method, advice for improving calibration performance and potential practical applications are
summarized in the conclusions.
2. Materials and Methods
A scale bar is an alloy or carbon fiber bar that has two photogrammetric RRTs fixed at each end.
The length between the two RRTs is measured or calibrated by instruments with high (several
micrometer) accuracy. One such measurement instrument is composed of an interferometer,
a microscope and a granite rail [25,26]. Generally, a scale bar serves in photogrammetry as a metric
for true scale, especially in multi-image offline systems. In this paper, the scale bar is used as a
calibrating tool for multi-camera online systems.
After being set up according to the measurement volume and surroundings, the camera pair is
calibrated and oriented following the proposed method outlined in Figure 1.

Figure 1. Key components and procedures of the method.

2.1. Construction of 3D Point and Length Array

The bar is moved to different locations that are uniformly distributed in the measurement volume.
At each location, the bar is rotated into different orientations. A position and orientation together is
called an attitude of the bar. After the moving and rotating process, a virtual 3D point array is built
by the RRTs of the bar in each attitude. Meanwhile, because the distance between the two RRTs is
the length of the scale bar, a virtual 3D length array is built by the bar length in each attitude.
The cameras synchronously capture images of the measurement volume and the bar in each attitude.

2.2. Locating and Matching the RRTs in Images

The 2D coordinates of the two RRTs in every image are determined by computing the grey value
centroid of the pixels in each RRT region. Correspondences of the two RRTs between each image pair
are determined by the relative positions of the points in the image. More specifically, the
right/left/up/down RRT in one image is matched to the right/left/up/down RRT in the other image.
The 2D coordinates of the matched RRTs in all image pairs are used for exterior parameter estimation
and all-parameter bundle adjustment.
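A minimal sketch of such a grey-value centroid computation follows; the windowing and thresholding that a real target detector would need are omitted.

```python
import numpy as np

def grey_value_centroid(patch, origin=(0, 0)):
    """Intensity-weighted centroid of an RRT image region.
    `patch` is a 2D grey-value array cut around the target;
    `origin` is the patch's top-left pixel (row, col) in image coordinates."""
    patch = patch.astype(float)
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    y = (rows * patch).sum() / total + origin[0]
    x = (cols * patch).sum() / total + origin[1]
    return float(x), float(y)

# a symmetric synthetic target: the centroid falls on the patch centre
patch = np.array([[0, 1, 0],
                  [1, 4, 1],
                  [0, 1, 0]])
print(grey_value_centroid(patch))  # → (1.0, 1.0)
```

Because the centroid is intensity-weighted, it locates the target to sub-pixel precision, which is what makes the 1/20–1/30 pixel accuracies cited in the introduction attainable.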

2.3. Estimating the Relative Exterior Parameters

The five-point method [27] is used to estimate the essential matrix between the two cameras.
At this stage, we only know a guess of the principal distance. The principal point offset coordinates
and distortion parameters of each camera are unknown and set to zeros.
Improperly selected five image point pairs may lead to computational degeneration and thus failure
of the method. To avoid this problem, an algorithm is designed for automatically selecting the most
suitable five point pairs, taking into account both maximizing distribution dispersion and avoiding
collinearity. The strategy is to find five point pairs that are located near the center and four corners of
the two camera images by minimizing the following five functions:

$$
x_l^i + y_l^i + x_r^i + y_r^i, \quad -x_l^i - y_l^i - x_r^i - y_r^i, \quad x_l^i - y_l^i + x_r^i - y_r^i, \quad -x_l^i + y_l^i - x_r^i + y_r^i \quad \text{and} \quad |x_l^i| + |y_l^i| + |x_r^i| + |y_r^i|. \tag{1}
$$

Figure 2 illustrates all the image RRT points and the selected five point pairs.

Figure 2. All the 2D RRT centroids (blue dots) and the selected five correspondences (red circles) for
relative orientation. (a) RRT points of the left camera; (b) RRT points of the right camera.

The computed essential matrices are globally optimized by the root polish algorithm [28] using
all the matched RRTs. After that, the essential matrices are decomposed into rotation matrices and
translation vectors, from which we can get the exterior angle and translation parameters. Generally,
at least two geometric network structures of the cameras can be obtained and only one is physically
correct. In this method, the equalization of the reconstructed lengths in the 3D length array is
employed as a spatial constraint to determine the true solution, which is more robust than the widely
used image error analysis.
At this stage, the relative exterior parameters are inaccurate, the principal distance is just a guess,
and the principal point offset as well as the distortions are not dealt with. All of the inaccurate and
unknown parameters need further refinement through bundle adjustment to achieve high accuracy
and precision.
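The decomposition step can be sketched with the standard SVD recipe below. It only produces the rotation candidates and the translation direction; picking the physically correct candidate by the equal-length constraint described above is left out of this sketch.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into two rotation candidates and a
    translation direction (up to sign and scale). The physically correct
    of the four (R, ±t) combinations is then selected, e.g. by checking
    which one reconstructs equal bar lengths."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:            # force proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                          # left null vector of E
    return R1, R2, t

# round trip: E built as [t]x R has t as its left null vector
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
tx = np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])
R1, R2, t_hat = decompose_essential(tx @ R)
assert np.allclose(t_hat @ (tx @ R), 0.0)
assert np.allclose(R1 @ R1.T, np.eye(3)) and np.isclose(np.linalg.det(R1), 1.0)
```

The four (R, ±t) combinations correspond to the "at least two geometric network structures" mentioned in the text; only one places the reconstructed bar endpoints at equal, physically plausible lengths.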
2.4. Self-Calibrating Bundle Adjustment and Precision Estimation
For traditional 3D point array self-calibrating bundle adjustment, a large number of convergent
images is essential to handle the severe correlations between unknown parameters and to achieve
reliable and precise results. In theory, therefore, calibrating cameras through bundle adjustment
using only one image pair of a pure 3D point array is impossible; but moving a scale bar gives not
only 3D points but also point-to-point distances which, as spatial constraints, greatly strengthen the
two-camera network and can be introduced into bundle adjustment to enable self-calibration.
The projecting model of a 3D point into the image pair is expressed by the following implicit
collinear equations [29,30]:

$$
\begin{aligned}
\underset{(2\times 1)}{xy_{li}} &= f(\underset{(8\times 1)}{I_l},\ \underset{(3\times 1)}{X_i}) \\
\underset{(2\times 1)}{xy_{ri}} &= f(\underset{(8\times 1)}{I_r},\ \underset{(6\times 1)}{E_r},\ \underset{(3\times 1)}{X_i})
\end{aligned} \tag{2}
$$

In Equation (2), the subscripts $l$ and $r$ denote the left and right camera; $xy$ is the image coordinate
vector; $I$ is the interior parameter vector, including the principal distance, the principal point offset,
the radial distortion and decentering distortion parameters; $E_r$ is the exterior parameter vector of the
right camera relative to the left, including three angles and three translations; and $X_i$ is the coordinate
vector of a 3D point. The linearized correction equations for an image point observation are:

$$
\begin{aligned}
\underset{(2\times 1)}{v_{li}} + \underset{(2\times 1)}{l_{li}} &= \underset{(2\times 8)}{A_{li}}\ \underset{(8\times 1)}{\delta_l} + \underset{(2\times 3)}{B_{li}}\ \underset{(3\times 1)}{\dot{\delta}_i} \\
\underset{(2\times 1)}{v_{ri}} + \underset{(2\times 1)}{l_{ri}} &= \underset{(2\times 14)}{A_{ri}}\ \underset{(14\times 1)}{\delta_r} + \underset{(2\times 3)}{B_{ri}}\ \underset{(3\times 1)}{\dot{\delta}_i}
\end{aligned} \tag{3}
$$

In Equation (3), $v$ is the residual vector of an image point, defined by the disparity vector between
the "true" (without error) coordinate $x_i$ and the measured image point coordinate $\bar{x}_i$; $l$ is the
reduced observation vector, defined by the disparity vector between the measured image point
coordinate $\bar{x}_i$ and the computed image coordinate $x_i^0$ using the approximate camera parameters.
Figure 3 illustrates the x axis components of $v$ and $l$ of an image point $i$. $A$ is the Jacobian matrix of
$f$ with respect to the camera interior, distortion and exterior parameters; $B$ is the Jacobian matrix of
$f$ with respect to the space coordinates; $\delta$ and $\dot{\delta}$ are the corrections of the camera parameters and
the spatial coordinates, respectively.

Figure 3. The residual $v$ and reduced observation $l$ of an image point $i$ along the x axis.

$n$ scale bars provide $2n$ 3D points and $n$ point-to-point distances. Considering the $m$-th bar length:

$$
s_m = g(X_{m1}, X_{m2}) = \sqrt{(X_{m1}-X_{m2})^2 + (Y_{m1}-Y_{m2})^2 + (Z_{m1}-Z_{m2})^2}, \tag{4}
$$

where $m1$ and $m2$ denote the two endpoints of the bar. Because Equation (4) is nonlinear, it needs
to be linearized before participating in the bundle adjustment. The linearized correction equation for a
spatial point-to-point distance constraint is:

$$
\underset{(1\times 1)}{v_{sm}} + \underset{(1\times 1)}{l_{sm}} = \underset{(1\times 3)}{C_{m1}}\ \underset{(3\times 1)}{\dot{\delta}_{m1}} + \underset{(1\times 3)}{C_{m2}}\ \underset{(3\times 1)}{\dot{\delta}_{m2}}, \tag{5}
$$

where $C$ is the Jacobian matrix of Equation (4) with respect to the coordinates of each endpoint.
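The length function of Equation (4) and its Jacobian rows $C_{m1}$, $C_{m2}$ in Equation (5) can be written down directly; the sketch below includes a finite-difference check of the analytic derivative.

```python
import numpy as np

def bar_length(X1, X2):
    """Equation (4): distance between the two endpoints of a bar."""
    return float(np.linalg.norm(np.asarray(X1) - np.asarray(X2)))

def length_jacobians(X1, X2):
    """The 1x3 rows C_m1 and C_m2 of Equation (5): derivatives of the
    bar length with respect to the coordinates of each endpoint."""
    d = np.asarray(X1) - np.asarray(X2)
    s = np.linalg.norm(d)
    return d / s, -d / s            # C_m1 = d/s, C_m2 = -d/s

X1 = np.array([1.0, 2.0, 3.0])
X2 = np.array([1.6, 2.8, 3.0])
s = bar_length(X1, X2)              # 0.6-0.8-0 triangle: length 1.0
C1, C2 = length_jacobians(X1, X2)
eps = 1e-7                          # finite-difference check of C_m1
num = (bar_length(X1 + np.array([eps, 0.0, 0.0]), X2) - s) / eps
assert abs(s - 1.0) < 1e-12
assert abs(num - C1[0]) < 1e-5
```

Each Jacobian row is simply the unit vector along the bar, which is why the distance constraint strengthens the network most in the bar's own direction; rotating the bar through many orientations spreads that constraint over all axes.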
Point-to-point distances are incorporated into bundle adjustment to avoid rank defect of the
normal equation and also to eliminate correlations between unknown parameters. For a two camera
system imaging n scale bars, the extended correction equation that involves all the image point
observations and point-to-point distance constraints can be written as:

$$v + l = A\delta$$

$$
\begin{bmatrix}
v_{1,11}\\ v_{1,12}\\ \vdots\\ v_{1,n1}\\ v_{1,n2}\\ v_{2,11}\\ v_{2,12}\\ \vdots\\ v_{2,n1}\\ v_{2,n2}\\ v_{s1}\\ \vdots\\ v_{sn}
\end{bmatrix}
+
\begin{bmatrix}
l_{1,11}\\ l_{1,12}\\ \vdots\\ l_{1,n1}\\ l_{1,n2}\\ l_{2,11}\\ l_{2,12}\\ \vdots\\ l_{2,n1}\\ l_{2,n2}\\ l_{s1}\\ \vdots\\ l_{sn}
\end{bmatrix}
=
\begin{bmatrix}
A_{1,11} & 0 & B_{1,11} & 0 & \cdots & 0 & 0\\
A_{1,12} & 0 & 0 & B_{1,12} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
A_{1,n1} & 0 & 0 & 0 & \cdots & B_{1,n1} & 0\\
A_{1,n2} & 0 & 0 & 0 & \cdots & 0 & B_{1,n2}\\
0 & A_{2,11} & B_{2,11} & 0 & \cdots & 0 & 0\\
0 & A_{2,12} & 0 & B_{2,12} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & A_{2,n1} & 0 & 0 & \cdots & B_{2,n1} & 0\\
0 & A_{2,n2} & 0 & 0 & \cdots & 0 & B_{2,n2}\\
0 & 0 & C_{11} & C_{12} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \cdots & C_{n1} & C_{n2}
\end{bmatrix}
\begin{bmatrix}
\delta_1\\ \delta_2\\ \dot{\delta}_{11}\\ \dot{\delta}_{12}\\ \vdots\\ \dot{\delta}_{n1}\\ \dot{\delta}_{n2}
\end{bmatrix},
\tag{6}
$$

where the subscripts (i, j, k) denote the k-th (k = 1, 2) endpoint of the j-th (j = 1, 2, . . . , n) distance in the
i-th (i = 1, 2) image. The normal equation is:

$$(A^T P A)\delta = A^T P l \;\Rightarrow\; N\delta = W$$

$$
\begin{bmatrix}
\underset{(22\times 22)}{N_{11}} & \underset{(22\times 6n)}{N_{12}}\\
\underset{(6n\times 22)}{N_{12}^T} & \underset{(6n\times 6n)}{N_{22}}
\end{bmatrix}
\begin{bmatrix}
\underset{(22\times 1)}{\delta}\\ \underset{(6n\times 1)}{\dot{\delta}}
\end{bmatrix}
=
\begin{bmatrix}
\underset{(22\times 1)}{W_1}\\ \underset{(6n\times 1)}{W_2}
\end{bmatrix}. \tag{7}
$$

In Equation (7), P is a diagonal weight matrix of all the image point coordinate and spatial distance
observations. Items in Equation (7) are determined by block computation:
 
$$
N_{11} =
\begin{bmatrix}
\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{2} A_{1,jk}^T P_p A_{1,jk} & 0\\
0 & \sum\limits_{j=1}^{n}\sum\limits_{k=1}^{2} A_{2,jk}^T P_p A_{2,jk}
\end{bmatrix},
$$

$$
N_{22} =
\begin{bmatrix}
\sum\limits_{i=1}^{2} B_{i,11}^T P_p B_{i,11} + C_{11}^T P_l C_{11} & C_{11}^T P_l C_{12} & \cdots & 0 & 0\\
C_{12}^T P_l C_{11} & \sum\limits_{i=1}^{2} B_{i,12}^T P_p B_{i,12} + C_{12}^T P_l C_{12} & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & \sum\limits_{i=1}^{2} B_{i,n1}^T P_p B_{i,n1} + C_{n1}^T P_l C_{n1} & C_{n1}^T P_l C_{n2}\\
0 & 0 & \cdots & C_{n2}^T P_l C_{n1} & \sum\limits_{i=1}^{2} B_{i,n2}^T P_p B_{i,n2} + C_{n2}^T P_l C_{n2}
\end{bmatrix},
$$

$$
N_{12} =
\begin{bmatrix}
A_{1,11}^T P_p B_{1,11} & A_{1,12}^T P_p B_{1,12} & \cdots & A_{1,n1}^T P_p B_{1,n1} & A_{1,n2}^T P_p B_{1,n2}\\
A_{2,11}^T P_p B_{2,11} & A_{2,12}^T P_p B_{2,12} & \cdots & A_{2,n1}^T P_p B_{2,n1} & A_{2,n2}^T P_p B_{2,n2}
\end{bmatrix},
$$

$$
W_1 =
\begin{bmatrix}
\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{2} A_{1,jk}^T P_p l_{1,jk}\\
\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{2} A_{2,jk}^T P_p l_{2,jk}
\end{bmatrix},
\qquad
W_2 =
\begin{bmatrix}
\sum\limits_{i=1}^{2} B_{i,11}^T P_p l_{i,11} + C_{11}^T P_l l_{s1}\\
\sum\limits_{i=1}^{2} B_{i,12}^T P_p l_{i,12} + C_{12}^T P_l l_{s1}\\
\vdots\\
\sum\limits_{i=1}^{2} B_{i,n1}^T P_p l_{i,n1} + C_{n1}^T P_l l_{sn}\\
\sum\limits_{i=1}^{2} B_{i,n2}^T P_p l_{i,n2} + C_{n2}^T P_l l_{sn}
\end{bmatrix}.
$$

Assuming that the a priori standard deviations of the image point observation and the spatial
distance observation are sp and sl , respectively, and the a priori standard deviation of unit weight is s0 ,
the weight matrices Pp and Pl are determined by:
 
$$
P_p =
\begin{bmatrix}
\dfrac{s_0^2}{s_p^2} & 0\\
0 & \dfrac{s_0^2}{s_p^2}
\end{bmatrix},
\qquad
P_l = \frac{s_0^2}{s_l^2}. \tag{8}
$$

Solving Equation (7), we obtain the corrections for camera parameters and endpoint coordinates:

$$
\begin{aligned}
\delta &= (N_{11} - N_{12} N_{22}^{-1} N_{21})^{-1} (W_1 - N_{12} N_{22}^{-1} W_2)\\
\dot{\delta} &= N_{22}^{-1} (W_2 - N_{21}\delta)
\end{aligned} \tag{9}
$$
Again, using the block diagonal character of $N_{22}$, $\delta$ can be computed camera by camera and $\dot{\delta}$
can be computed length by length. The estimated camera parameters and 3D point coordinates are
updated by the corrections iteratively until the bundle adjustment converges. The iteration converges
and is terminated when the maximum of the absolute coordinate corrections of all the 3D points is
smaller than 1 µm.
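The reduced solve of Equation (9) can be sketched numerically. The toy dimensions below (4 camera parameters, two 3×3 coordinate blocks) are made up for illustration; the block-by-block inverse of $N_{22}$ mirrors the per-length computation described above.

```python
import numpy as np

def schur_solve(N11, N12, N22_blocks, W1, W2):
    """Equation (9): solve the partitioned normal equations by reducing
    them with the Schur complement; N22 is inverted block by block."""
    p = N12.shape[1]
    N22_inv = np.zeros((p, p))
    k = 0
    for B in N22_blocks:                     # block-diagonal inverse
        m = B.shape[0]
        N22_inv[k:k + m, k:k + m] = np.linalg.inv(B)
        k += m
    S = N11 - N12 @ N22_inv @ N12.T          # reduced normal matrix
    delta = np.linalg.solve(S, W1 - N12 @ N22_inv @ W2)
    delta_dot = N22_inv @ (W2 - N12.T @ delta)
    return delta, delta_dot

# build a small well-conditioned system and compare with a full solve
rng = np.random.default_rng(1)
N12 = rng.normal(size=(4, 6))
M = rng.normal(size=(4, 4)); N11 = M @ M.T + 50.0 * np.eye(4)
blocks = []
for _ in range(2):
    B = rng.normal(size=(3, 3))
    blocks.append(B @ B.T + 50.0 * np.eye(3))
N = np.zeros((10, 10))
N[:4, :4] = N11; N[:4, 4:] = N12; N[4:, :4] = N12.T
N[4:7, 4:7] = blocks[0]; N[7:10, 7:10] = blocks[1]
W = rng.normal(size=10)
d, dd = schur_solve(N11, N12, blocks, W[:4], W[4:])
assert np.allclose(np.concatenate([d, dd]), np.linalg.solve(N, W))
```

Only small fixed-size blocks are ever inverted, which is the source of the time efficiency claimed for the algorithm: cost grows linearly with the number of bar attitudes rather than cubically with the full parameter count.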
The proposed algorithm is time efficient because block computations eliminate the need for
massive matrix inverse or pseudo inverse computation. In addition, the algorithm is unaffected
by invisible observations and allows for gross observation detection in the progress of adjustment
iteration. Additionally, this method allows both internal precision and external accuracy assessment of
the calibrating results. The internal precision is represented by the variance-covariance matrix of all
the adjusted unknowns:
$$
\underset{((22+6n)\times(22+6n))}{C} = \hat{s}_0^2\, \underset{((22+6n)\times(22+6n))}{N^{-1}}, \tag{10}
$$

where N is the normal matrix in Equation (7) and the a posteriori standard deviation of unit weight is
determined by:

$$
\hat{s}_0 = \sqrt{\frac{v^T P v}{8n - (22 + 6n) + n}}, \tag{11}
$$
where n is the number of point-to-point distances and the size of v equals 8n for a two camera system.

2.5. Global Scaling and Accuracy Assessment of the Calibration Results


After bundle adjustment, 3D endpoints can be triangulated using the parameters. Generally,
the adjusted results include systematic errors that are caused by wrong scaling and cannot be eliminated

through bundle adjustment. Again, the point-to-point distances are utilized to rescale the results.
Assuming that the triangulated bar lengths are:

$$L_1, L_2, \ldots, L_n. \tag{12}$$

The rescaling factor is calculated by:

$$
K_s = \frac{L}{\overline{L}}, \tag{13}
$$
where $L$ is the nominal length of the scale bar and $\overline{L}$ is the average of the triangulated bar lengths in
Equation (12). Then the final camera parameters, 3D coordinates and triangulated bar lengths are:
$$
I_l' = I_l, \quad I_r' = I_r, \quad E_r' = \begin{bmatrix} E_r(1\!:\!3)^T & K_s E_r(4\!:\!6)^T \end{bmatrix}^T, \quad X_i' = K_s X_i \quad \text{and} \quad L_i' = K_s L_i. \tag{14}
$$

Besides internal precision, this method provides on-site 3D evaluation of the calibration accuracy.
The triangulated lengths $L_i'$ ($i = 1, 2, \ldots, n$) provide a large number of length measurements that are
distributed in various positions and orientations in the measurement volume. As a result, an evaluation
procedure can be carried out following the guidelines of the VDI/VDE 2634 norm. Because all the lengths
are physically identical, calibration performance assessment through length measurement error is
much easier.
Since the length of the scale bar is calibrated by other instruments, the nominal length L has error.
Assuming that the true length of the scale bar is L0 , we introduce a factor K to describe the disparity
between L and L0 :
$$L = K L_0, \quad (K \neq 1), \tag{15}$$

Essentially, Equation (15) describes the calibration error of the scale bar length in another way.
The triangulated bar lengths $L_i'$ ($i = 1, 2, \ldots, n$) in Equation (14) can be rewritten as:

$$
L_i' = K L_0 \frac{L_i}{\overline{L}}, \quad (K \neq 1). \tag{16}
$$

The absolute error of $L_i'$ is:

$$
e_i' = K L_0 \left( \frac{L_i}{\overline{L}} - 1 \right), \quad (K \neq 1). \tag{17}
$$
It can be derived that the Average (AVG) and Root Mean Square (RMS) values of the error are:

$$
\mathrm{AVG}(e_i') = 0, \quad \text{and} \quad \mathrm{RMS}(e_i') = \frac{K L_0}{\overline{L}}\, \mathrm{RMSE}(L_i), \tag{18}
$$

where $\mathrm{RMSE}(L_i)$ is the Root Mean Square Error (RMSE) of the triangulated scale lengths.
Further, we define the relative precision of length measurement by:

$$
r(L_i') = \frac{\mathrm{RMSE}(L_i')}{L} = \frac{\mathrm{RMS}(e_i')}{L} = \frac{\mathrm{RMSE}(L_i)}{\overline{L}}. \tag{19}
$$
In Equation (19), the relative precision is independent of the factor $K$ and remains unchanged under
different $K$ values. For example, in a calibration process using a nominal $L$ = 1000 mm ($K$ = 1) bar, ten of
the triangulated bar lengths $L_i$ ($i = 1, 2, \ldots, 10$) are:

[999.997 999.986 1000.041 999.969 999.997 999.991 999.998 999.988 1000.024 1000.017]

whose RMSE is 0.020 mm and whose relative precision $r(L_i)$ is 1/50,068.



If K of the instrument is 1.000002 (the absolute calibration error is 0.002 mm), the nominal length
is 1000.002 mm. According to Equation (16), the rescaled lengths $L_i'$ are:

[999.999 999.988 1000.043 999.971 999.999 999.993 1000.000 999.990 1000.026 1000.019]

whose relative precision $r(L_i')$ is also 1/50,068.


Furthermore, if $K$ is amplified to 1.2 (the absolute calibration error is 200 mm, which is not possible
in practice), the rescaled lengths $L_i''$ are:

[1199.996 1199.983 1200.050 1199.963 1199.997 1199.989 1199.998 1199.985 1200.029 1200.021]

whose relative precision $r(L_i'')$ is again 1/50,068.
The above example proves Equation (19). The relative precision of length measurement is invariant
under different scale bar nominal lengths (different K values in Equation (15)), which makes it a good
assessment of the calibrating performance of the camera pair.
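The numerical example above can be checked directly; in this sketch the RMSE is computed about the rescaled nominal length, and the small deviation from the printed 1/50,068 comes only from rounding of the listed lengths.

```python
import numpy as np

L_nominal = 1000.0                      # mm, the K = 1 case above
Li = np.array([999.997, 999.986, 1000.041, 999.969, 999.997,
               999.991, 999.998, 999.988, 1000.024, 1000.017])

Ks = L_nominal / Li.mean()              # Equation (13)
rmse = float(np.sqrt(np.mean((Ks * Li - L_nominal) ** 2)))
rel = rmse / L_nominal                  # Equation (19)
print(round(rmse, 3), round(1.0 / rel))   # ≈ 0.020 mm, ≈ 1/50,000
```

Multiplying `Li` and `L_nominal` by any common factor $K$ leaves `rel` unchanged, which is exactly the invariance Equation (19) asserts.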
Additionally, interior, distortion parameters and the relative rotating angles between the two
cameras are not affected by the scale factor K. These parameters are calibrated with a uniform accuracy,
no matter how large the instrument measurement error is, even if we assign a wrong value to L.
The two cameras can be calibrated precisely without knowing the true length L0 .

3. Simulations and Results


A simulation system is developed to verify the effectiveness and evaluate the performance of
the proposed method. The system consists of the control length array generating module, the camera
projective imaging module, the self-calibrating bundle adjustment module and the 3D reconstruction
module. The generating module simulates scale bars that are evenly distributed over the measurement
volume. The length, positions and orientations of the bar and the scale of the volume can be modified.
The imaging module projects endpoints of the bars into the image pair utilizing assigned interior,
distortion and exterior parameters. The bundle adjustment module implements the proposed method
and calibrates all the unknown parameters of the camera pair. The reconstruction module triangulates
all the endpoints and lengths by forward intersection utilizing the calibrating results.
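As an illustration of the forward intersection step, triangulation of one endpoint from two camera rays can be sketched with the midpoint method (an illustrative Python/NumPy sketch under that assumption, not necessarily the reconstruction module's exact algorithm):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two rays with camera centers c1, c2 and
    unit direction vectors d1, d2: find the closest points on both rays
    and return their midpoint."""
    b = c2 - c1
    # Orthogonality conditions of the connecting segment give a 2x2 system
    # in the ray parameters (s, t).
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([b @ d1, b @ d2]))
    p1, p2 = c1 + s * d1, c2 + t * d2
    return (p1 + p2) / 2.0
```

For exactly intersecting rays the midpoint coincides with the intersection; with noisy image measurements it returns the point halfway across the shortest gap between the rays.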

3.1. Point and Length Array Construction and Camera Pair Configurations
The bar is set to be 1 m long. Bar positions are evenly distributed in the volume, one bar
length apart from each other. The scale bar is moved to each position and posed in different orientations.
It is worth noticing that, if the bar is moved and rotated in a single plane, self-calibration of the camera
parameters fails. That is because there is a strong correlation between the interior parameters (principal
distance, principal point coordinates) and the exterior translation parameters, and planar objects do
not provide sufficient information to resolve the correlation. As a result, after bundle adjustment,
these parameter determinations show very large standard deviations, which means that the calibration
results are neither precise nor reliable.
We use multi-plane motion and out-of-plane rotations to provide the bundle adjustment process
with length constraints in diverse orientations, and thus optimize the parameters to adapt to different
orientations. As a result, uniform 3D measurement accuracy can be achieved in different orientations.
Figure 4 shows the six orientations of the bar in one position. Figure 5 demonstrates the simulated
point and length array and the camera pair.
In this simulation, the measurement volume is 12 m (length) × 8 m (height) × 4 m (depth).
The resolution of the cameras is 4872 × 3248 pixels and the pixel size is 7.4 µm × 7.4 µm. The interior
and distortion parameters are set with the values of the real cameras.
The cameras are directed at the center of the measurement volume, and they are 8 m away from
the volume and 5 m apart from each other which thus forms a 34.708 degrees intersection angle.
Normally distributed image errors of σ = 0.2 µm are added to the projected image coordinates of each
endpoint. For the cameras simulated, σ = 0.2 µm corresponds to an image measurement precision of
1/37 pixel, which can be achieved by utilizing RRTs and an appropriate image measurement algorithm.

Figure 4. The six orientations of the bar in one position.

Figure 5. The simulated control length array and the camera pair.

3.2. Accuracy and Precision Analysis

In the simulation, only a guess value of 20 mm is assigned to the principal distance. The other
interior and distortion parameters are set to zero. The five-point and root polish methods give good
estimations of the relative exterior parameters, and thus the proposed self-calibrating bundle
adjustment generally converges within three iterations.

In the simulation, sp and sl are set to 0.0002 mm and 0.2 mm respectively, and s0 is set equal to sp.
The self-calibrating bundle adjustment refines and optimizes all of the camera parameters and spatial
coordinates. A posterior standard deviation of unit weight ŝ0 = 0.00018 mm is obtained, which indicates
good consistency between the a priori and a posteriori standard deviations.

The standard deviations of the interior and distortion parameters of the two cameras, the relative
exterior parameters and the 3D coordinates of the endpoints can be computed following Equation (10).
Table 1 lists the interior and distortion parameter determinations and their standard deviations from the
bundle adjustment. The undistortion equation of an image point (x, y) is:

xu = x − x0 + ∆x,
yu = y − y0 + ∆y, (20)

where xu and yu are the distortion-free (undistorted) coordinates of the point; x0 and y0 are the offset
coordinates of the principal point in the image; ∆x and ∆y are the distortions along the image x and y
axes respectively. ∆x and ∆y are calculated by:

∆x = x̄(K1r² + K2r⁴ + K3r⁶) + P1(2x̄² + r²) + 2P2x̄ȳ,
∆y = ȳ(K1r² + K2r⁴ + K3r⁶) + P2(2ȳ² + r²) + 2P1x̄ȳ, (21)

where x̄ = x − x0, ȳ = y − y0, r = √(x̄² + ȳ²); K1, K2 and K3 are the radial distortion parameters; P1
and P2 are the tangential distortion parameters.
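A direct transcription of Equations (20) and (21) in Python (an illustrative sketch; the parameter names mirror the symbols in Table 1 and this is not the authors' code):

```python
def undistort_point(x, y, x0, y0, K1, K2, K3, P1, P2):
    """Equations (20)-(21): shift to the principal point, then remove
    radial (K1, K2, K3) and tangential (P1, P2) distortion."""
    xb, yb = x - x0, y - y0          # x̄, ȳ: coordinates relative to the principal point
    r2 = xb * xb + yb * yb           # r²
    radial = K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3
    dx = xb * radial + P1 * (2 * xb * xb + r2) + 2 * P2 * xb * yb  # ∆x
    dy = yb * radial + P2 * (2 * yb * yb + r2) + 2 * P1 * xb * yb  # ∆y
    return x - x0 + dx, y - y0 + dy  # (xu, yu), Equation (20)
```

With all distortion parameters and the principal point offset set to zero, the function reduces to the identity, which is a quick sanity check of the transcription.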

Table 1. Self-calibrating bundle adjustment results of the camera parameters.

Interior             Left Camera                      Right Camera
Parameters     Calibration     Standard        Calibration     Standard
               Results         Deviations      Results         Deviations
c (mm)         20.325          1.542 × 10−3    20.320          1.493 × 10−3
x0 (mm)        −0.105          2.245 × 10−3    −0.135          2.409 × 10−3
y0 (mm)        0.168           7.423 × 10−4    0.247           7.857 × 10−4
K1 (mm−2)      2.788 × 10−4    2.727 × 10−7    2.795 × 10−4    2.859 × 10−7
K2 (mm−4)      −4.866 × 10−7   1.372 × 10−9    −5.034 × 10−7   1.575 × 10−9
K3 (mm−6)      −1.657 × 10−11  2.209 × 10−12   1.404 × 10−11   2.721 × 10−12
P1 (mm−1)      −7.030 × 10−6   1.551 × 10−6    −3.102 × 10−6   1.620 × 10−6
P2 (mm−1)      −8.630 × 10−6   4.916 × 10−7    −8.606 × 10−6   5.267 × 10−7

Table 2 lists the relative exterior parameter determinations and their standard deviations from the
bundle adjustment, and Table 3 lists the mean standard deviations of the 3D coordinates of all the end
points from the bundle adjustment.

Table 2. Bundle adjustment results of the relative exterior parameters.

Relative Exterior Parameters Bundle Adjustment Determinations Standard Deviations


Tx (mm) 4772.757 0.325
Ty (mm) −0.023 0.068
Tz (mm) −1491.147 0.410
ϕ (Deg.) −34.710 2.845 × 10−5
ω (Deg.) 0.915 × 10−3 8.871 × 10−5
κ (Deg.) −1.300 × 10−3 2.622 × 10−5

Table 3. Mean standard deviations of the 3D point coordinate determinations.

Axes Mean Standard Deviations


X (mm) 0.094
Y (mm) 0.076
Z (mm) 0.203

From the above results, it can be seen that the bundle adjustment successfully and precisely
determines the interior, distortion, and exterior parameters of both cameras as well as the spatial
coordinates of the endpoints. Besides internal standard deviations, the reconstructed lengths of the
scale bar provide an external evaluation of the calibration accuracy. Table 4 exhibits the results of the
triangulated distances versus the known bar length.

Table 4. Mean Error, RMSE and Maximum Error of the reconstructed distances.

Mean Error (mm) RMSE (mm) Maximum Error (mm)


0.005 0.204 1.157
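The three statistics reported in Table 4 can be computed from the signed length errors (reconstructed minus nominal length); a minimal sketch:

```python
import math

def length_error_stats(errors):
    """Mean Error, RMSE and Maximum Error (largest absolute error) of a
    list of signed length reconstruction errors, as reported in Table 4."""
    n = len(errors)
    mean = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mean, rmse, max(abs(e) for e in errors)
```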

Figure 6 shows the histogram of the errors of the reconstructed lengths. The errors demonstrate a
normal distribution which means that no systematic error components exist and that the functional
and the stochastic models of the method are correctly and completely built up.

Figure 6. Histogram of the length reconstructing errors.


3.3. Performances of the Method under Different Spatial Geometric Configurations
In this part, simulations are carried out to analyze the performance of the proposed method
when calibrating camera pairs in different measurement volume scales, using different bar lengths
and with different intersection angles.

For a stereo camera system with a specific intersection angle, the scale of the measurement volume
is dependent on the measuring distance. Figures 7–9 exhibit the calibration performances,
represented by bar length measurement errors, under different geometric configurations.

Figure 7. Calibration performances at different measuring distances.

Figure 8. Calibration performances under different camera intersection angles.

Figure 9. Calibration performances when using different calibrating bar lengths.

Calibration accuracy improves in smaller volumes and with larger intersection angles, which is
consistent with common knowledge. What is interesting is that the calibration accuracy remains almost
unchanged when using scale bars of different lengths. Thus, it can be deduced that the measuring
error for longer distances in the same volume, using the calibrated camera pair, will be similar to the
scale bar length measuring error. Further, the extended relative precision of length measurement in
this volume is:

re = (k/D)·RMSE(Li), (22)

where k is a confidence interval integer, D is the scale of the measurement volume, and RMSE(Li) is the
RMSE of bar length measurement as in Equation (18). For the calibrating results in Table 4, the
relative precision is nearly 1/25,000 when k equals 3.

3.4. Accuracy Comparison with the Point Array Self-Calibrating Bundle Adjustment Method

The point array self-calibration bundle adjustment method is widely used in camera calibration
and orientation. This method takes multiple photos of an arbitrary but stable 3D array of points,
and then conducts a bundle adjustment to solve the interior, distortion and extrinsic camera parameters
and the 3D point coordinates. Generally, only one camera is calibrated by the point array bundle
adjustment, while in our method the two cameras are calibrated simultaneously.

The scale bar endpoints in Figure 5 are used to calibrate each camera by the point array
self-calibrating bundle adjustment method. For each camera, seventeen convergent pictures of the
point array are taken at stations evenly distributed in front of the point array. One of the pictures is
taken at the station for stereo measurement and at least one picture is taken orthogonally. Figure 10
demonstrates the camera stations and the point array (the scene is rotated for better visualization of
the camera stations).

Image errors of σ = 0.2 µm are added to each simulated image point. Then, point array
self-calibration bundle adjustment is conducted using these image data to solve the parameters
of the two cameras respectively. Besides reconstruction errors of the bar lengths, measurement errors
of a 10 m length along the diagonal of the volume are also introduced to make the comparison between
these two methods. Table 5 lists the results of 200 simulations of each method. It can be seen that the
proposed method is more accurate and precise. The point array bundle adjustment method shows
larger systematic errors and Maximum Errors.

Figure 10. The point array and the camera stations.



Table 5. Length measurement errors of the two methods.

Two Bundle            Scale Bar Length Measurement Error (mm)    Spatial Length Measurement Error (mm)
Adjustment Methods    Mean Error    RMSE    Maximum Error        Mean Error    RMSE    Maximum Error
Length array          0.008         0.204   1.186                0.101         0.215   0.710
Point array           0.043         0.295   1.664                0.243         0.205   0.820
4. Real Data Experiments and Results
Two industrial cameras (GE4900, AVT, Stadtroda, Germany) equipped with two consumer-level
lenses (Nikkor 20 mm F/2.8D, Nikon, Tokyo, Japan) are used for the real experiments. The resolution of
the CCD is 4872 × 3248 pixels and its dimension is 36 mm × 24 mm. Two flashlights (YN 560 III,
Yongnuo, Shenzhen, China) are incorporated to provide illumination. A specially designed and
manufactured carbon fiber scale bar is employed for calibrating. Figure 11 shows the bar and the
spherical and planar RRTs. The bar has three bushing holes placed symmetrically at each end to brace
and fasten the plug-in shafted RRTs. This makes it convenient to substitute RRTs of different sizes and
types. Plugging a pair of RRTs symmetrically into different bushing holes yields three different bar
lengths. The lengths are measured on a granite linear rail by a laser interferometer and a microscopic
imaging camera. The length measurement accuracy is higher than 2.0 µm.

Three experiments were carried out to validate the proposed method and the simulation results.
The cameras are 4 m away from the measurement volume, which is 4 m × 3 m × 2 m. The centroid
method is employed to measure image RRTs. Target eccentricity is neglected because the computed
magnitude according to [31] is less than 0.2 µm across the entire image.
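The centroid measurement of an RRT image can be sketched as a grey-value weighted centroid over the target's image window (an illustrative Python/NumPy sketch of the general technique, not the authors' implementation):

```python
import numpy as np

def grey_centroid(window):
    """Grey-value weighted centroid of an image window containing one RRT.
    Returns the (row, col) intensity-weighted center of the window."""
    rows, cols = np.indices(window.shape)
    total = float(window.sum())
    return (float((rows * window).sum()) / total,
            float((cols * window).sum()) / total)
```

For a symmetric target image the weighted centroid recovers the target center with sub-pixel resolution, which is why centroiding of bright RRT blobs can reach the 1/37 pixel precision assumed in the simulations.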

Figure 11. The carbon fiber bar and RRTs. (a) The scale bar; (b) two of the bushing holes and a target
plugged in; (c) two spherical targets; (d) two planar targets.
plugged in; (c) two spherical targets; (d) two planar targets.
4.1. Calibration Performances Using Spherical and Planar Targets

Two types of RRTs are used for camera pair calibration: a 9 mm diameter planar circular target and
a 6 mm diameter spherical target. The length of the bar is set to 0.8 m. Bar positions and orientations
in the measurement volume are nearly the same for the two types of target. Table 6 lists the errors.
The spherical targets achieve better accuracy because they provide better visibility of the bar from
large viewing angles.

Table 6. Calibration accuracy under different target types.

Target Types     Bar Length Measurement Errors (mm)    RMS of Image Residuals (µm)
                 RMSE      Maximum Error               x Axis    y Axis
Planar Target    0.192     0.520                       0.21      0.33
Sphere Target    0.106     0.296                       0.12      0.19
4.2. Calibration Performances under Different Intersection Angles and Bar Lengths

By changing the baseline, we obtain five different intersection angle configurations, and the errors
are shown in Figure 12. The plot shows a decline tendency similar to that in Figure 8 as the intersection
angle increases.

Figure 12. Bar length measurement error varies with intersection angle.
The three
The three length
length configurations
configurations of the bar
of the bar are
are used
used respectively
respectively for
for calibrating
calibrating and
and the
the results
results
are listed
The in
threeTable 7.
length Almost unchanged
configurations of RMSE
the bar and
are Maximum
used Error
respectively verify
for the simulation
calibrating and
are listed in Table 7. Almost unchanged RMSE and Maximum Error verify the simulation results in results
the in
results
Figure
are listed
Figure 9. in Table 7. Almost unchanged RMSE and Maximum Error verify the simulation results in
9.
Figure 9.
Table 7. Bar length measurement errors under different scale bar lengths.

Bar Lengths (mm)    RMSE (mm)    Maximum Error (mm)
880.055             0.123        0.331
970.421             0.132        0.354
1078.405            0.136        0.368

4.3. Comparison
4.3. Comparison with the
the Ponit Array
Array Bundle Adjustment
Adjustment Method
4.3. Comparison with
with the Ponit
Ponit Array Bundle
Bundle Adjustment Method
Method
The
The comparison experiment is carried out to measure an object that
that is specially designed for
for
The comparison
comparison experiment
experiment isis carried
carried out
out to
to measure
measure an
an object
object that is
is specially
specially designed
designed for
photogrammetric
photogrammetric tests. The object is shown in Figure 13.
photogrammetric tests.
tests. The
The object
object is
is shown
shown inin Figure
Figure 13.
13.

Figure
Figure 13.
13. The
The 3D
3D object
object used
used for
for the
the comparison
comparison experiment.
experiment.

The size
The size of
of the
the object
the object is
object is 3.5
is 3.5 m
3.5 m×
m ×× 2.3 m ×××1.5
2.3 m 1.5m.
1.5 m.The
m. Theobject
The objectcontains
object contains58
contains 58RRTs
58 RRTs (the
RRTs (the bright dots) in
bright dots) in
which 56 are used for point array calibration. The other two RRTs locate
which 56 are used for point array calibration. The other two RRTs locate in the left-up and
56 are used for point array calibration. The other two RRTs in the left-up
locate in and
the right-down
left-up and
corner (indicated
right-down
right-down cornerby
corner red circles)
(indicated
(indicated by of the
by red
red object
circles)
circles) ofand
of the are measured
the object
object and are by
and are a Laserby
measured
measured Tracker
by aa Laser
Laser (API, LTS (API,
Tracker
Tracker 1100,
(API,
Jessup,
LTS
LTS 1100, MD,
1100, USA)MD,
Jessup,
Jessup, and measurement
MD, USA)
USA) and adaptors. The
and measurement
measurement distance
adaptors.
adaptors. Thebetween
The distancethem
distance is called
between
between them
them theis
istest length
called
called the
the
and
test is used for 3D measurement assessing of the calibration results just like
test length and is used for 3D measurement assessing of the calibration results just like the 10 m
length and is used for 3D measurement assessing of the calibration the
results 10 m diagonal
just like thelength
10 m
in simulation.
diagonal
diagonal length
length Thein test
in length isThe
simulation.
simulation. 4139.810
The test mm. The
test length
length is camerasmm.
is 4139.810
4139.810 are set
mm. Thewith
The the same
cameras
cameras are
are setintersection
set with
with thethe angle
same
same
as in the simulations
intersection
intersection angle
angle asas and
in aresimulations
in the
the calibrated by
simulations theare
and
and proposed
are method
calibrated
calibrated by
by the using
the the barmethod
proposed
proposed with planar
method using
using RRTs
the and
the bar
bar
1078.405
with planar RRTs and 1078.405 mm length configuration. All the bar lengths and the test length the
with mm
planar length
RRTs andconfiguration.
1078.405 mm All the
length bar lengths
configuration. and the
All test
the length
bar lengthsare triangulated
and the test using
length are
are
calibration
triangulated results.
using Figure
the 14 demonstrates
calibration results. rotation
Figure 14 of the bar
demonstrates in six orientations.
rotation of the bar
triangulated using the calibration results. Figure 14 demonstrates rotation of the bar in six orientations. in six orientations.

Figure
Figure 14.
14. Rotating
Rotating the bar in one position.
Figure 14. Rotating the
the bar
bar in
in one position.
one position.

Each camera is also calibrated by the point array method with a similar photo-taking style as in
Figure 10. Then the bar is moved to construct a length array while the system measures the bar lengths
and the test length. Figure 15 shows the network of the point array and the camera stations.

Figure 15. The point array and the photo-taking stations.

The Mean Error, RMSE and Maximum Error of bar length and test length measurement results in
10 different experiments are listed in Table 8. A comparison result very similar to Table 5 is achieved:
the proposed method gives better spatial length measurement results.
Table 8. Length measurement errors of the two methods.

                Scale Bar Length (mm)                 Test Length (mm)
Methods         Mean Error  RMSE   Maximum Error      Mean Error  RMSE   Maximum Error
Length array    0.005       0.150  0.352              0.038       0.151  0.227
Point array     0.031       0.152  0.507              0.134       0.179  0.312
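The three statistics reported in Table 8 can be computed as below. This is a generic sketch (the function name and array layout are our assumptions): the Mean Error captures signed bias, the RMSE the overall spread, and the Maximum Error the worst case of repeated length measurements against a reference length.

```python
import numpy as np

def length_error_stats(measured, reference):
    """Mean Error, RMSE and Maximum Error of repeated length measurements.

    measured  : sequence of reconstructed lengths (mm) over the experiments.
    reference : true (calibrated) length (mm) of the scale bar or test length.
    """
    errors = np.asarray(measured, dtype=float) - reference
    mean_error = errors.mean()                # signed bias
    rmse = np.sqrt(np.mean(errors ** 2))      # overall spread about zero
    max_error = np.abs(errors).max()          # worst-case deviation
    return mean_error, rmse, max_error
```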

5. Conclusions
This paper proposes a method for simultaneously calibrating and orienting the stereo cameras of 3D vision systems in large measurement volume scenarios. A scale bar is moved in the measurement volume to build a 3D point and length array. After imaging the 3D array, the two cameras are calibrated through a self-calibration bundle adjustment that is constrained by point-to-point distances. External accuracy can be obtained on-site by analyzing the bar length reconstruction errors. Simulations validate the effectiveness of the method in self-calibrating the interior, distortion and exterior camera parameters, and meanwhile test its accuracy and precision performance. Moreover, simulations and experiments are carried out to test the influence of the scale bar length, measurement volume, target type and intersection angle on calibration performance. The proposed method does not require a stable 3D point array in the measurement volume, and its accuracy is not affected by the scale bar length. Furthermore, the cameras can be accurately calibrated without knowing the true length of the bar. The method achieves better accuracy than the state-of-the-art point array self-calibration bundle adjustment method.
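The distance-constrained adjustment can be pictured as a stacked residual vector minimized by a nonlinear least-squares solver such as scipy.optimize.least_squares. The sketch below is purely conceptual: in the full self-calibration the interior, distortion and exterior parameters would also be part of the optimized parameter vector, and all names and the weighting scheme here are our own assumptions, not the authors' implementation.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of 3D points X (N, 3) with a 3x4 matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def residuals(points, P1, P2, obs1, obs2, pairs, bar_len, w=1.0):
    """Stacked residual vector for a distance-constrained adjustment.

    points  : (N, 3) current estimates of the bar end target coordinates.
    obs1/2  : (N, 2) observed image coordinates in each camera.
    pairs   : list of (i, j) index pairs that are the two ends of the bar.
    bar_len : known (or nominal) scale bar length used as the constraint.
    w       : weight balancing length residuals against pixel residuals.
    """
    # Reprojection residuals in both cameras.
    r_img = np.concatenate([(project(P1, points) - obs1).ravel(),
                            (project(P2, points) - obs2).ravel()])
    # Distance-constraint residuals: deviation of each reconstructed
    # bar length from the reference length.
    r_len = np.array([w * (np.linalg.norm(points[i] - points[j]) - bar_len)
                      for i, j in pairs])
    return np.concatenate([r_img, r_len])
```

With perfect observations and correct point estimates the residual vector is zero; the solver drives it toward zero by adjusting the parameters.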
In order to accurately calibrate the interior and distortion parameters, a large number of evenly distributed image points is needed, so the bar needs to be moved uniformly to as many positions as possible within the measurement volume. In order to handle the correlation between interior and exterior parameters in the bundle adjustment, and thus to guarantee the reliability of the calibration results, the bar needs to be moved in a 3D manner, such as in multiple planes and with out-of-plane rotation. Additionally, to achieve uniform triangulation accuracy in different orientations, the bar needs to be rotated uniformly through diverse orientations.
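One hypothetical way to script such a motion pattern, a uniform grid in several depth planes with evenly spaced in-plane angles and alternating out-of-plane tilt, is sketched below; the grid densities and angle choices are illustrative assumptions, not a prescription from the method.

```python
import numpy as np

def bar_poses(volume, n_grid=3, n_depth=3, n_rot=6, bar_len=1.0):
    """Generate a uniform set of bar poses covering a box-shaped volume.

    volume : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in metres.
    Returns a list of (end_a, end_b) 3D endpoint pairs: the bar centre is
    placed on a regular grid in several depth planes, and at each grid
    point the bar is rotated through evenly spaced in-plane angles with
    an alternating out-of-plane tilt.
    """
    (x0, x1), (y0, y1), (z0, z1) = volume
    poses = []
    for z in np.linspace(z0, z1, n_depth):          # multiple depth planes
        for x in np.linspace(x0, x1, n_grid):
            for y in np.linspace(y0, y1, n_grid):
                c = np.array([x, y, z])             # bar centre
                for k in range(n_rot):              # diverse orientations
                    theta = np.pi * k / n_rot       # in-plane angle
                    phi = np.pi / 4 * (k % 2)       # out-of-plane tilt
                    d = 0.5 * bar_len * np.array([
                        np.cos(theta) * np.cos(phi),
                        np.sin(theta) * np.cos(phi),
                        np.sin(phi)])
                    poses.append((c - d, c + d))
    return poses
```

Every generated pose preserves the bar length exactly, so the same list can drive both a simulation and a physical motion plan.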
This method can be easily conducted in medium scale volumes within human arm reach, and can be extended to large scale measurement applications with the help of UAVs to carry and operate the scale bar. It can also be used to calibrate small or even micro scale stereo vision systems such as structured light scanners. Compared with planar calibration patterns, scale bars are easier to calibrate, are less restricted by the camera viewing angle, and offer higher image measurement accuracy, which improves calibration accuracy and convenience. Our future work includes studies of a rigorous relationship between the motion of the bar and the measurement volume, of the relationship between calibration performance and the number and distribution of bar motion positions in the volume, and of the application of this method in practice.

Author Contributions: Conceptualization, P.S. and N.-G.L.; methodology, P.S.; software, B.-X.Y. and J.W.; validation, N.-G.L. and M.-L.D.; writing—original draft preparation, P.S.; writing—review and editing, B.-X.Y. and J.W.; visualization, P.S.; supervision, N.-G.L.; project administration, M.-L.D.; funding acquisition, M.-L.D.
Funding: This research was funded by the National Natural Science Foundation of China, grant numbers 51175047 and 51475046, and by the Beijing Municipal Education Commission, grant number KM201511232020.
Conflicts of Interest: The authors declare no conflict of interest.


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
