Article
Depth Errors Analysis and Correction for
Time-of-Flight (ToF) Cameras
Ying He 1, *, Bin Liang 1,2 , Yu Zou 2 , Jin He 2 and Jun Yang 3
1 Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China;
bliang@tsinghua.edu.cn
2 Department of Automation, Tsinghua University, Beijing 100084, China;
y-zou10@mails.tsinghua.edu.cn (Y.Z.); he-j15@mails.tsinghua.edu.cn (J.H.)
3 Shenzhen Graduate School, Tsinghua University, Shenzhen 518055, China;
yangjun603@mails.tsinghua.edu.cn
* Correspondence: heying@hitsz.edu.cn; Tel.: +86-755-6279-7036

Academic Editor: Gonzalo Pajares Martinsanz


Received: 2 September 2016; Accepted: 9 December 2016; Published: 5 January 2017

Abstract: Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years,
are 3D imaging sensors providing a depth image as well as an amplitude image with a high frame rate.
As a ToF camera is limited by the imaging conditions and external environment, its captured data
are always subject to certain errors. This paper analyzes the influence of typical external distractions
including material, color, distance and lighting, on the depth error of ToF cameras. Our experiments
indicate that factors such as lighting, color, material and distance influence the depth error of ToF cameras
in different ways. However, since the forms of these errors are uncertain, it is difficult to
summarize them in a unified law. To further improve the measurement accuracy, this paper proposes
an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM). Moreover,
the experimental results show that this method can effectively reduce the depth error of ToF cameras
to 4.6 mm within the full measurement range (0.5–5 m).

Keywords: ToF camera; depth error; error modeling; error correction; particle filter; SVM

1. Introduction
ToF cameras, which have developed rapidly in recent years, are a kind of 3D imaging sensor
providing a depth image as well as an amplitude image at a high frame rate. With their advantages of
small size, light weight, compact structure and low power consumption, these devices have shown
great application potential in fields such as navigation of ground robots [1], pose estimation [2],
3D object reconstruction [3], and identification and tracking of human organs [4]. However,
limited by its imaging conditions and influenced by interference from the external environment,
the data acquired by a ToF camera contain certain errors, and there is no unified correction
method for the non-systematic errors caused by the external environment. Therefore, different depth
errors must be analyzed, modeled and corrected case by case according to their causes.
ToF camera errors can be divided into two categories: systematic errors and non-systematic
errors. A systematic error is triggered not only by the camera's intrinsic properties, but also by the imaging
conditions of the camera system. The main characteristic of this kind of error is that its form is
relatively fixed. These errors can be evaluated in advance, and the correction process is relatively
convenient. Systematic errors can usually be reduced by calibration [5]
and can be divided into five categories.
A non-systematic error is caused by the external environment and noise. The characteristic
of this kind of error is that its form is random rather than fixed, and it is difficult to establish a unified


model to describe and correct such errors. Non-systematic errors are mainly divided into four
categories: signal-to-noise ratio, multiple light reception, light scattering and motion blurring [5].
Signal-to-noise ratio errors can be removed by the low-amplitude filtering method [6], or an
optimized integration time can be determined with an algorithm adapted to the area to be
optimized [7]. Other approaches generally reduce the impact of noise by averaging the data and
determining whether it exceeds a fixed threshold [8–10].
Multiple light reception errors mainly exist at surface edges and depressions of the target object.
Usually, the errors in surface edges of the target object can be removed by comparing the incidence
angle of the adjacent pixels [7,11,12], but there is no efficient solution to remove the errors of depressions
in the target object.
Light scattering errors are related only to the position of a target object in the scene; the closer
the target object is, the stronger the interference will be [13]. In [14], a filter approach based
on amplitude and intensity, built on the choice of an optimum integration time, was proposed.
Measurements based on multiple frequencies [15,16] and the ToF encoding method [17] both belong
to the modeling category, which can mitigate the impact of sparse scattering. A direct and global light
separation method [18] can address mutual scattering and sub-surface scattering among the target objects.
In [19], the authors proposed detecting transverse moving objects by the combination of a color
camera and a ToF camera. In [20], transverse and axial motion blurring were solved by an optical flow
method and axial motion estimation. In [21], the authors proposed a fuzzy detection method by using
a charge quantity relation so as to eliminate motion blurring.
In addition, some error correction methods cannot distinguish among error types, and uniformly
correct the depth errors of ToF cameras. In order to correct the depth error of ToF cameras, a fusion
method with a ToF camera and a color camera was also proposed in [22,23]. In [24], a 3D depth frame
interpolation and interpolative temporal filtering method was proposed to increase the accuracy of
ToF cameras.
Focusing on the non-systematic errors of ToF cameras, this paper starts with the analysis of the
impacts of varying external distractions on the depth errors of ToF cameras, such as materials, colors,
distances, and lighting. Moreover, using a particle filter to select the parameters of an SVM error
model, an error modeling method based on PF-SVM is proposed, and the depth error correction of ToF
cameras is realized as well.
The remainder of the paper is organized as follows: Section 2 introduces the principle and
development of ToF cameras. Section 3 analyzes the influence of lighting, material properties, color and
distance on the depth errors of ToF cameras through four groups of experiments. In Section 4, a PF-SVM
method is adopted to model and correct the depth errors. In Section 5, we present our conclusions and
discuss possible future work.

2. Development and Principle of ToF Cameras


In a broad sense, ToF technology is a general term for determining distance by measuring the flight
time of light between sensors and the target object surface. According to the different measurement
methods of flight time, ToF technology can be classified into pulse/flash, continuous wave,
pseudo-random number and compressed sensing [25]. The continuous wave flight time system
is also called ToF camera.
ToF cameras were first invented at the Stanford Research Institute (SRI) in 1977 [26]. Limited by
the detector technology at that time, the technique was not widely used. Fast sampling of the received
light did not become feasible until the lock-in CCD technique was invented in the 1990s [27]. Then, in 1997
Schwarte, who was at the University of Siegen (Germany), put forward a method of measuring
the phases and/or magnitudes of electromagnetic waves based on the lock-in CCD technique [28].
With this technique, his team invented the first CCD-based ToF camera prototype [29]. Afterwards,
ToF cameras began to develop rapidly. A brief development history is shown in Figure 1.
Figure 1. Development history of ToF cameras.
In Figure 2, the working principle of ToF cameras is illustrated. The signal is modulated on the
light source (usually an LED) and emitted to the surface of the target object. Then, the phase shift
between the emitted and received signals is calculated by measuring the accumulated charge
numbers of each pixel on the sensor. Thereby, we can obtain the distance from the ToF camera to
the target object.

Figure 2. Principle of ToF cameras.
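To make this demodulation step concrete, the following minimal Python sketch computes the phase, offset,
amplitude and distance for a single pixel from the four correlation samples, following Equations (1)–(4)
given below. It assumes ideal, noise-free samples, and the function and variable names are illustrative
rather than taken from any camera SDK.

```python
import numpy as np

def tof_depth_from_samples(q0, q1, q2, q3, mod_freq_hz):
    """Recover phase, offset, amplitude and distance from the four
    equally spaced samples of the correlation signal (Equations (1)-(4))."""
    c = 299792458.0                                   # speed of light, m/s
    phase = np.arctan2(q0 - q2, q1 - q3)              # Eq. (1), quadrant-aware arctan
    phase = np.mod(phase, 2 * np.pi)                  # keep the phase in [0, 2*pi)
    offset = (q0 + q1 + q2 + q3) / 4.0                # Eq. (2)
    amplitude = np.sqrt((q0 - q2) ** 2 + (q1 - q3) ** 2) / 2.0   # Eq. (3)
    distance = 0.5 * c * phase / (2 * np.pi * mod_freq_hz)       # Eq. (4)
    return phase, offset, amplitude, distance

# Example: a 30 MHz modulation frequency and synthetic samples for one pixel.
print(tof_depth_from_samples(1.2, 0.8, 0.4, 0.8, 30e6))
```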
The received signal is sampled four times at equal intervals within every period (at 1/4 period).
From the four samples (ϕ0, ϕ1, ϕ2, ϕ3) of phase ϕ, offset B and amplitude A can be calculated as follows:

ϕ = arctan((ϕ0 − ϕ2)/(ϕ1 − ϕ3)), (1)

B = (ϕ0 + ϕ1 + ϕ2 + ϕ3)/4, (2)

A = sqrt((ϕ0 − ϕ2)² + (ϕ1 − ϕ3)²)/2, (3)

Distance D can be derived:

D = (1/2) · cΔϕ/(2πf), (4)

where D is the distance from the ToF camera to the target object, c is the speed of light, f is the
modulation frequency of the signal, and Δϕ is the phase difference. More details on the principle of
ToF cameras can be found in [5].
We list the appearance and parameters of several typical commercial ToF cameras on the market in Table 1.

Table 1. Parameters of typical commercial ToF cameras.

ToF Camera | Resolution of Depth Images | Maximum Frame Rate/fps | Maximum Measurement Range/m | Field of View/° | Accuracy | Weight/g | Power/W (Typical/Maximum)
MESA-SR4000 | 176 × 144 | 50 | 0.1–5 | 69 × 55 | ±1 cm | 470 | 9.6/24
Microsoft-Kinect II | 512 × 424 | 30 | 0.5–4.5 | 70 × 60 | ±3 cm @ 2 m | 550 | 16/32
PMD-Camcube 3.0 | 200 × 200 | 15 | 0.3–7.5 | 40 × 40 | ±3 mm @ 4 m | 1438 | -

3. Analysis of Depth Errors of ToF Cameras

The external environment usually has a random and uncertain influence on ToF cameras; therefore, it is
difficult to establish a unified model to describe and correct such errors. In this section, we take the
MESA SR4000 camera (Zurich, Switzerland; a camera with good performance [30], which has been used in
error analysis [31–33] and position estimation [34–36]) as an example to analyze the influence of changes
in the external environment on the depth error of ToF cameras. The data we obtain from the experiments
provide references for the correction of depth errors in the next step.

3.1. Influence of Lighting, Color and Distance on Depth Errors

During the measurement process of ToF cameras, the measured objects tend to have different colors, lie at
different distances and may be under different lighting conditions. The following question then arises:
will the differences in lighting, distance and color affect the measurement results? To answer this
question, we conduct the following experiments.
As we know, there are several natural indoor lighting conditions, such as sunlight, indoor lamp light and
no light. This experiment mainly considers the influence of these three lighting conditions on the depth
errors of the SR4000. Red, green and blue are the three primary colors that can be superimposed into any
color, white is the color commonly used for measuring error [32,37,38], and reflective paper (tin foil)
reflects all light. Therefore, this experiment mainly considers the influence of these five conditions on
the depth errors of the SR4000.
As the measurement target, the white wall is covered by red, blue, green, white and reflective papers,
respectively, as examples of backgrounds with different colors. Since the wall is not completely flat,
a laser scanner is used to build a wall model. We used a 25HSX laser scanner from Surphaser
(Redmond, WA, USA) to provide the reference values, because its accuracy is relatively high (0.3 mm).
The SR4000 camera is set on the right side of the bracket, while the 3D laser scanner is on the left.
The bracket is mounted in the middle of two tripods and the tripods are placed parallel to the white wall.
The distances between the tripods and the wall are measured with two parallel tapes. The experimental
scene is arranged as shown in Figure 3 below.

Figure 3. Experimental scene. (a) Experimental scene; (b) Camera bracket.

The distances from the tripods to the wall are set to 5, 4, 3, 2.5, 2, 1.5, 1 and 0.5 m, respectively.
At each position, we change the lighting conditions and obtain one frame with the laser scanner and
30 frames with the SR4000 camera. To exclude the influence of the integration time, the SR_3D_View
software of the SR4000 camera is set to "Auto".
In order to analyze the depth error, the acquired data are processed in MATLAB. Since the target object
cannot fill the image, we select the central region of 90 × 90 pixels of the SR4000 to be analyzed for
depth errors. The distance error is defined as:

h_{i,j} = (1/n) ∑_{f=1}^{n} m_{i,j,f} − r_{i,j}, (5)

g = (1/s) ∑_{i=1}^{a} ∑_{j=1}^{b} h_{i,j}, (6)

where h_{i,j} is the mean error of pixel (i,j), f is the frame number of the camera, m_{i,j,f} is the
distance measured at pixel (i,j) in frame f, n = 30, r_{i,j} is the real distance, a and b are the row and
column numbers of the selected region respectively, and s is the total number of pixels. The real distance
r_{i,j} is provided by the laser scanner.
Figure 4 shows the effects of different lighting conditions on the depth error of the SR4000. As shown in
Figure 4, the depth error of the SR4000 is only slightly affected by the lighting conditions (the maximum
effect is 2 mm). The depth error increases approximately linearly with distance, and the measurement error
values comply with the error tests of other Swiss Ranger cameras in [37–40]. Besides, as seen in the
figure, the SR4000 is very robust against light changes and can adapt to various indoor lighting
conditions when the accuracy requirements are low.

Figure 4. Influence of lighting on depth errors.

Figure 5 shows the effects of various colors on the depth errors of the SR4000 camera. As shown in
Figure 5, the depth error of the SR4000 is affected by the color of the target object, and it increases
linearly with distance. The depth error curve under reflective conditions is quite different from the
others: when the distance is 1.5–2 m, the depth error is too large, while at 3–5 m it is small. When the
distance is 5 m, the depth error is 15 mm less than when the color is blue. When the distance is 1.5 m,
the depth error when the color is white is 5 mm higher than when the color is green.

Figure 5. Influence of color on depth errors.

3.2. Influence of Material on Depth Errors

During the measurement process of ToF cameras, the measured objects tend to be of different materials.
Will this affect the measurement results? For this question, we conducted the following experiments:
to analyze the effects of different materials on the depth errors of the SR4000, we chose four common
materials: ABS plastic, stainless steel, wood and glass. The tripods are arranged as shown in Figure 3 of
Section 3.1, and the targets are four 5-cm-thick boards of the different materials, as shown in Figure 6.
The tripods are placed parallel to the target, the distance is set to about 1 m, and the experiment is
operated under natural light conditions. To differentiate the boards on the depth image, we leave a
certain distance between them. Then we acquire one frame with the laser scanner and 30 consecutive frames
with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set
to "Auto".

Figure 6. Four boards made of different materials.

For the SR4000 and the laser scanner, we select the central regions of 120 × 100 pixels and
750 × 750 pixels, respectively. To calculate the mean thickness of the four boards, we need to measure
the distance between the wall and the tripods as well. Section 3.1 described the data processing method,
and Figure 7 shows the mean errors of the four boards.

Figure 7. Depth data of two sensors.

As shown in Figure 7, the material affects the depth errors of both the SR4000 and the laser scanner.
When the material is wood, the absolute error of the ToF camera is minimal, at only 1.5 mm. When the
target is the stainless steel board, the absolute error reaches its maximum value and the depth error is
13.4 mm, because, as the reflectivity of the target surface increases, the number of photons received by
the light receiver decreases, which leads to a higher measurement error.

3.3. Influence of a Single Scene on Depth Errors

The following experiments were conducted to determine the influence of a single scene on depth errors.
The tripods are placed as shown in Figure 3 of Section 3.1 and, as shown in Figure 8, the measuring target
is a cone, 10 cm in diameter and 15 cm in height. The tripods are placed parallel to the axis of the cone
and the distance is set to 1 m. The experiment is operated under natural light conditions. We acquire one
frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in
the SR_3D_View software of the SR4000 camera is set to "Auto".

Figure 8. The measured cone.

As shown in Figure 9, we choose one of the 30 consecutive frames to analyze the errors, extract the point
cloud data from the selected frame and compare it with the standard cone to calculate the error. The right
side of Figure 9 is a color bar of the error distribution, of which the unit is m. As shown in Figure 9,
the measurement accuracy of the SR4000 is also high, with a maximal depth error of 0.06 m. The depth
errors of the SR4000 are mainly located in the rear profile of the cone. The deformation of the measured
object is small but, compared with the laser scanner, its point cloud data are sparser.

Figure 9. Measurement errors of the cone.

3.4. Influence of a Complex Scene on Depth Errors

The following experiments were conducted in order to determine the influence of a complex scene on depth
errors. The tripods are placed as shown in Figure 3 of Section 3.1 and the measurement target is a complex
scene, as shown in Figure 10. The tripods are placed parallel to the wall, and the distance is set to
about 1 m. The experiment is operated under natural light conditions. We acquire one frame with the laser
scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software
of the SR4000 camera is set to "Auto".

Figure 10. Complex scene.

We then choose one of the 30 consecutive frames for analysis and, as shown in Figure 11, obtain the point
cloud data of the SR4000 and the laser scanner. As shown in Figure 11, there is a small amount of
deformation in the shape of the target object measured by the SR4000 compared to the laser scanner,
especially on the edge of the sensor where the measured object is clearly curved. However, distortion
exists on the border of the point cloud data and artifacts appear on the plant.

Figure 11. Depth images based on the point cloud of depth sensors.

3.5. Analysis of Depth Errors

From the above four groups of experiments, the depth errors of the SR4000 are weakly affected by lighting
conditions (2 mm maximum under the same conditions). The second factor is the target object color; under
the same conditions, this affects the depth error by a maximum of 5 mm. On the other hand, the material
has a great influence on the depth errors of ToF cameras: the greater the reflectivity of the measured
object material, the greater the depth error, which increases approximately linearly with the distance
between the measured object and the ToF camera. In a more complex scene, the depth error of a ToF camera
is greater. Above all, lighting, object color, material, distance and complex backgrounds influence the
depth errors of ToF cameras in different ways, but it is difficult to summarize this in an error law,
because the forms of these errors are uncertain.

4. Depth Error Correction for ToF Cameras

In the last section, four groups of experiments were conducted to analyze the influence of several
external factors on the depth errors of ToF cameras. The results of our experiments indicate that
different factors have different effects on the measurement results, and it is difficult to establish a
unified model to describe and correct such errors.
For a complex process that is difficult to model mechanically, an inevitable choice is to use actual
measurable input and output data for modeling. Machine learning has proved to be an effective method for
establishing non-linear process models. It maps the input space to the output space through a connection
model, and the model can approximate a non-linear function with any precision. SVM is a generic learning
method developed on the basis of the statistical learning theory framework. It seeks the best compromise
between the complexity of the model and its learning ability according to limited sample information, so
as to obtain the best generalization performance [41,42]. In the remainder of this paper, we learn and
model the depth errors of ToF cameras by using the LS-SVM [43] algorithm.
Better parameters yield better SVM recognition performance when building the LS-SVM model. We need to
determine the penalty parameter C and the Gaussian kernel parameter γ. Cross-validation [44] is a common
method, but it suffers from large computational demands and long running times. A particle filter [45]
can be used to approximate the probability distribution of parameters in the parameter state space by
spreading a large number of weighted discrete random variables. Based on this, this paper puts forward a
parameter selection algorithm which can fit the depth errors of ToF cameras quickly and meet the
requirements of correcting the errors. The process of the PF-SVM algorithm is shown in Figure 12 below.

Figure 12. Process of PF-SVM algorithm.

4.1. PF-SVM Algorithm

4.1.1. LS-SVM Algorithm

According to statistical learning theory, in black-box modeling of non-linear systems a training set
{xi, yi}, i = 1, 2, . . . , n is generally given and a non-linear function f is established to minimize
Equation (8):

f(x) = w^T ϕ(x) + b, (7)

min_{w,b,δ} J(w, δ) = (1/2) w^T w + (C/2) ∑_{i=1}^{n} δ_i², (8)

where ϕ(x) is a nonlinear function, and w is the weight. Moreover, Equation (8) satisfies the constraint:

yi = w T ϕ( xi ) + b + δi , i = 1, 2 · · · n, (9)

where δi ≥ 0 is the relaxation factor, and C > 0 is the penalty parameter.


The following equation introduces the Lagrange function L to solve the optimization problem in
Equation (8):

L = (1/2)‖w‖² + (C/2) ∑_{i=1}^{n} δ_i² − ∑_{i=1}^{n} α_i (ϕ(x_i) · w + b + δ_i − y_i), (10)

where αi is a Lagrange multiplier.


For i = 1, 2, . . . , n, by elimination of w and δ, a linear equation can be obtained:

    [ 0    e^T            ] [ b ]   [ 0 ]
    [ e    GG^T + C^-1 I  ] [ α ] = [ y ] ,   (11)

where the coefficient matrix has dimension (n+1) × (n+1).

where e is an n-dimensional column vector of ones, and I is the n × n unit matrix:

G = [ϕ(x_1)^T  ϕ(x_2)^T  · · ·  ϕ(x_n)^T]^T, (12)

According to the Mercer conditions, the kernel function is defined as follows:

K(x_i, x_j) = ϕ(x_i) · ϕ(x_j), (13)

We substitute Equations (12) and (13) into Equation (11) to get a linear equation from which α
and b can be determined by the least squares method. Then we can obtain the non-linear function
approximation of the training data set:

y(x) = ∑_{i=1}^{n} α_i K(x, x_i) + b, (14)
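As a concrete and deliberately minimal illustration of this procedure, the Python sketch below builds the
kernel matrix, solves the linear system of Equation (11) and evaluates the model of Equation (14). It is a
sketch under stated assumptions (numpy, a Gaussian kernel as in Equation (15), regularization added as
K + I/C), not the authors' implementation; the helper names are invented here.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian kernel of Equation (15): exp(-||a - b||^2 / (2 * gamma^2)) for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2.0 * gamma ** 2))

def lssvm_fit(x, y, C, gamma):
    """Solve the LS-SVM linear system (Equation (11)) for alpha and b."""
    n = len(x)
    K = rbf_kernel(x, x, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                  # e^T
    A[1:, 0] = 1.0                  # e
    A[1:, 1:] = K + np.eye(n) / C   # kernel matrix plus C^-1 * I
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(x_train, alpha, b, gamma, x_new):
    """Evaluate Equation (14): y(x) = sum_i alpha_i * K(x, x_i) + b."""
    return rbf_kernel(x_new, x_train, gamma) @ alpha + b
```

In practice a dedicated LS-SVM toolbox could replace the plain linear solver, but the structure of the
computation is the same.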

4.1.2. PF-SVM Algorithm


The depth errors of ToF cameras mentioned above are used as training sample sets
{xi, yi }, i = 1,2, . . . n, where xi is the camera measurement distance, and yi is the camera measurement
error. Then the error correction becomes a black-box modeling problem of a nonlinear system. Our goal
is to determine the nonlinear model f and correct the measurement error with it.
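For illustration only, such a training set can be assembled as follows; the numbers below are hypothetical
and in the experiments the reference distances come from the laser scanner of Section 3.1.

```python
import numpy as np

# Hypothetical measurements: ToF readings and laser-scanner references, in metres.
tof_measured = np.array([0.52, 1.01, 1.52, 2.03, 2.51, 3.02, 4.01, 5.02])
laser_reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00, 4.00, 5.00])

x_train = tof_measured                    # x_i: camera measurement distance
y_train = tof_measured - laser_reference  # y_i: camera measurement (depth) error
```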
The error model of ToF cameras obtained via the LS-SVM method is expressed in Equation (14).
In order to seek a group of optimal parameters for the SVM model to approximate the depth errors in
the training sample space, we put this model into a Particle Filter algorithm.
In this paper, the kernel function is:

k(x, y) = exp(−‖x − y‖² / (2γ²)), (15)

(1) Estimation state.


The estimated parameter state x at time k is represented as:

x_0^j = [C_0^j, γ_0^j]^T, (16)

where x_0^j is the j-th particle when k = 0, C is the penalty parameter and γ is the Gaussian kernel parameter.
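A minimal initialization of such a particle set might look as follows. The paper does not specify the
sampling ranges or the prior over (C, γ), so the log-uniform draw and the bounds below are assumptions made
purely for illustration.

```python
import numpy as np

def init_particles(m, c_range=(1.0, 1e4), gamma_range=(1e-4, 1.0), seed=0):
    """Draw m particles x_0^j = [C_0^j, gamma_0^j]^T (Equation (16)).
    The log-uniform prior and the ranges are assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    C = np.exp(rng.uniform(np.log(c_range[0]), np.log(c_range[1]), m))
    gamma = np.exp(rng.uniform(np.log(gamma_range[0]), np.log(gamma_range[1]), m))
    return np.stack([C, gamma], axis=1)   # shape (m, 2)
```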
Sensors 2017, 17, 92 12 of 18

(2) Estimation Model.


The relationship between the parameter state x and the parameters α, b in the non-linear model y(x) can be
expressed by the state equation z(α, b):

z(α, b) = F(γ, C), (17)

    [ b  ]   [ 0   1            · · ·  1                ]^-1 [ 0  ]
    [ α1 ]   [ 1   K(x1, x1)    · · ·  K(x1, xn)        ]    [ y1 ]
    [ ⋮  ] = [ ⋮   ⋮                   ⋮                ]    [ ⋮  ] ,   (18)
    [ αn ]   [ 1   K(xn, x1)    · · ·  K(xn, xn) + 1/C  ]    [ yn ]

where Equation (17) is a rearranged form of Equation (11).


The relationship between parameter α,b and ToF camera error y(x) can be expressed by observation
equation f :

y( x ) = f (α, b), (19)


y(x) = ∑_{i=1}^{n} α_i K(x, x_i) + b, (20)

where Equation (20) is the non-linear model derived from LS-SVM algorithm.

(3) Description of observation target.


In this paper, we use yi of training set {xi, yi } as the real description of the observation target,
namely the real value of the observation:

z = { y i }, (21)

(4) The calculation of the characteristic and the weight of the particle observation.
This process is executed when each particle is under characteristic observation. Hence, the error
values of the ToF camera are calculated according to the sampling of each particle in the parameter
state x:

z_j(α_j, b_j) = F(γ_j, C_j), (22)

y_j(x) = f(α_j, b_j), (23)

Here we compute the similarity between the ToF camera error values predicted by each particle and the
observed target values. The similarity measure RMS is defined as follows:

RMS = sqrt( (1/n) ∑_{i=1}^{n} (y_j − y_i)² ), (24)

where y_j is the observation value of particle j and y_i is the real error value. The weight of each
particle is calculated according to Equation (24):

w(j) = (1/(√(2π) σ)) · exp(−RMS²/(2σ)), (25)

Then the weight values are normalized:

w̄_j = w_j / ∑_{j=1}^{m} w_j, (26)
(5) Resampling.
Resampling of the particles is conducted according to the normalized weights. In this process, not only
the particles with large weights but also a small portion of the particles with small weights should
be kept.
(6) Outputting the particle set x_0^j = [C_0^j, γ_0^j]^T. This particle set gives the optimal LS-SVM
parameters.
(7) The measurement error model of ToF cameras can be obtained by introducing these parameters into
the LS-SVM model.
4.2. Experimental Resultsthree groups of experiments to verify the effectiveness of the algorithm. In
We’ve performed
Experiment 1, the depththree
We’ve performed errorgroups
model ofof experiments
ToF cameras to was modeled
verify with the experimental
the effectiveness data in
of the algorithm.
[32], and the results were compared with the error correction results in the original
In Experiment 1, the depth error model of ToF cameras was modeled with the experimental data in [32], text. In
Experiment 2, the depth error model of ToF cameras was modeled with the data in Section
and the results were compared with the error correction results in the original text. In Experiment 2, 3.1, and
the error error
the depth correction
modelresults under different
of ToF cameras test conditions
was modeled were
with the data compared.
in Section In the
3.1, and Experiment 3, the
error correction
error correction results under different reflectivity and different texture conditions were
results under different test conditions were compared. In Experiment 3, the error correction results compared.
under different reflectivity and different texture conditions were compared.
4.2.1. Experiment 1

In this experiment, we used the depth error data of the ToF camera obtained from Section 3.2 of [32] as the training sample set. The training set consists of 81 sets of data, where x is the distance measurement of the ToF camera and y is the depth error of the ToF camera, as shown by the blue dots in Figure 13. In the figure, the solid green line represents the error modeling results obtained with the polynomial given in [32]. The fit is good when the distance is 1.5–4 m, and the maximum absolute error is 8 mm. However, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values. Using our algorithm, we obtain C = 736 and γ = 0.003. Substituting these two parameters into the abovementioned algorithm yields the depth error model of the ToF camera, shown by the red solid line in the figure. It can be seen that this error model matches the real errors well.
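To make the construction of such an error model concrete, the following is a minimal least-squares SVM sketch in Python, written under our reading that C acts as the regularization term and γ as the RBF kernel parameter; the function names, the kernel form and the correction step at the end are our own illustration, not code from the paper.

```python
import numpy as np

def lssvm_fit(x, y, C=736.0, gamma=0.003):
    """Least-squares SVM regression with an RBF kernel: solve the LS-SVM
    linear system for the dual coefficients alpha and the bias b
    (a sketch of Equations (22)-(23) under the stated assumptions)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)   # RBF Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C                         # kernel + regularization
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return alpha, b, x

def lssvm_predict(x_new, alpha, b, x_train, gamma=0.003):
    """Predicted depth error at new measured distances."""
    x_new = np.asarray(x_new, dtype=float)
    k = np.exp(-gamma * (x_new[:, None] - x_train[None, :]) ** 2)
    return k @ alpha + b

# Hypothetical usage: fit on (distance, depth-error) training pairs, then
# correct a raw measurement by subtracting the predicted error.
# alpha, b, x_tr = lssvm_fit(train_distances, train_errors, C=736.0, gamma=0.003)
# corrected = raw_distance - lssvm_predict([raw_distance], alpha, b, x_tr)[0]
```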
[Figure 13: mean discrepancies [m] versus mean measured distance [m], showing the measured depth errors, the distance error model of this paper, and the distance error model in [32].]
Figure 13. Depth error and error model.

In order to verify the validity of the error model, we use the ToF camera depth error data obtained from Section 3.3 of [32] as the test sample set (the measurement conditions are the same as in Section 3.2 of [32]). The test sample set consists of 16 sets of data, as shown by the blue line in Figure 14. In the figure, the solid green line represents the error modeling results obtained with the polynomial in [32]. The fit is good when the distance is 1.5–4 m, and the maximum absolute error is 8.6 mm. However, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values. These results agree with the fitting behavior of the aforementioned error model. The correction results obtained with our algorithm are shown by the red solid line in the figure. The error correction results are better when the distance is 0.5–4.5 m, and the maximum absolute error is 4.6 mm. Table 2 gives a detailed performance comparison of these two error corrections. From Table 2, we can see that, while expanding the range of error correction, this method also improves the accuracy of the error correction.

[Figure 14: mean discrepancies [m] versus mean measured distance [m] before correction, after correction in [32], and after correction with the proposed method.]
Figure 14. Depth error correction results.

Table 2. Analysis of depth error correction results.

                           Maximal Error/mm     Average Error/mm     Variance/mm          Optimal    Running
Comparison Items           1.5–4 m   0.5–4.5 m  1.5–4 m   0.5–4.5 m  1.5–4 m   0.5–4.5 m  Range/m    Time/s
This paper's algorithm     4.6       4.6        1.99      2.19       2.92      2.4518     0.5–4.5    2
Reference [32] algorithm   4.6       8.6        2.14      4.375      5.34      29.414     1.5–4      -

4.2.2. Experiment 2

The ToF depth error data of Section 3.1 obtained with a blue background is selected as the training sample set. As shown by the blue asterisks in Figure 15, the training set consists of eight sets of data. The error model established by our algorithm is shown by the blue line in Figure 15. The model fits the error data well, but the training sample set should be as rich as possible in order to ensure the accuracy of the model. To verify the applicability of the error model, we use the white, green and red background ToF depth error data as test samples, and the data after correction are shown in the figure by the black, green and red lines. It can be seen from the figure that the absolute values of the three groups of residual errors are smaller than those of the uncorrected error data after the application of the blue-background distance error model. The figure also illustrates that this error model is applicable to the error correction of ToF cameras for different color backgrounds.
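The comparisons in this section reduce to simple summary statistics of the residual errors over a distance range. The minimal sketch below shows one way such statistics (in the spirit of Table 2) could be computed; the function name and the exact definitions are assumptions of ours, since the paper does not spell them out.

```python
import numpy as np

def residual_stats(distances_m, residuals_m, lo=0.5, hi=4.5):
    """Summary statistics of correction residuals over a distance range,
    reported in millimetres (assumed definitions, in the spirit of Table 2)."""
    d = np.asarray(distances_m, dtype=float)
    r = 1000.0 * np.asarray(residuals_m, dtype=float)   # metres -> millimetres
    r = r[(d >= lo) & (d <= hi)]                        # restrict to the range
    return {
        "maximal_error_mm": float(np.max(np.abs(r))),
        "average_error_mm": float(np.mean(np.abs(r))),
        "variance_mm2": float(np.var(r)),
    }
```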
[Figure 15: deviation [m] versus measured distance [m], showing the blue-background training data and error model, and the white, green and red test data before and after correction.]
Figure 15. Depth error correction results of various colors and error model.
4.2.3. Experiment 3

An experimental process similar to that of Section 3.1 was adopted in order to verify the validity of the error modeling method under different reflectivity and different texture conditions. The sample set, including 91 groups of data, consists of the depth errors obtained from a white wall photographed with a ToF camera at different distances, as shown by the blue solid line in Figure 16. The error model established with the algorithm proposed here is shown by the red solid line in Figure 16. The figure indicates that this model fits the error data well. With a newspaper fixed on the wall as the test target, the depth errors obtained with the ToF camera at different distances are taken as the test data, as shown by the black solid line in Figure 16, while the data corrected through the error model created here are shown by the green solid line in the same figure. It can be seen from the figure that the absolute values of the residual errors are smaller than those of the uncorrected error data after the application of the distance error model. The figure also illustrates that this error model is applicable over the full measurement range of ToF cameras.

[Figure 16: deviation [m] versus measured distance [m], showing the white-wall training data and error model, and the newspaper test data before and after correction.]
Figure 16. Depth error correction results and error model.

5. Conclusions
In this paper, we analyzed the influence of typical external distractions, such as the material properties and color of the target object, distance and lighting, on the depth errors of ToF cameras. Our experiments indicate that lighting, color, material and distance exert different influences on the depth errors of ToF cameras. As the distance increases, the depth errors of ToF cameras grow roughly linearly. To further improve the measurement accuracy of ToF cameras, this paper puts forward an error correction method based on Particle Filter-Support Vector Machine (PF-SVM), in which the best model parameters are selected with a particle filter algorithm on the basis of learning the depth errors of ToF cameras. The experimental results indicate that this method can reduce the depth error from 8.6 mm to 4.6 mm within the full measurement range (0.5–5 m).

Acknowledgments: This research was supported by National Natural Science Foundation of China
(No. 61305112).
Author Contributions: Ying He proposed the idea; Ying He, Bin Liang and Yu Zou conceived and designed the
experiments; Ying He, Yu Zou, Jin He and Jun Yang performed the experiments; Ying He, Yu Zou, Jin He and
Jun Yang analyzed the data; Ying He wrote the manuscript; and Bin Liang provided the guidance for data analysis
and paper writing.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using kinect-style depth cameras for
dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663. [CrossRef]
2. Brachmann, E.; Krull, A.; Michel, F.; Gumhold, S.; Shotton, J.; Rother, C. Learning 6D Object Pose Estimation
Using 3D Object Coordinates; Springer: Heidelberg, Germany, 2014; Volume 53, pp. 151–173.
3. Tong, J.; Zhou, J.; Liu, L.; Pan, Z.; Yan, H. Scanning 3D full human bodies using kinects. IEEE Trans. Vis.
Comput. Graph. 2012, 18, 643–650. [CrossRef] [PubMed]
4. Liu, X.; Fujimura, K. Hand gesture recognition using depth data. In Proceedings of the IEEE International
Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 17–19 May 2004; pp. 529–534.
5. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11,
1917–1926. [CrossRef]
6. Wiedemann, M.; Sauer, M.; Driewer, F.; Schilling, K. Analysis and characterization of the PMD camera for
application in mobile robotics. IFAC Proc. Vol. 2008, 41, 13689–13694. [CrossRef]
7. Fuchs, S.; May, S. Calibration and registration for precise surface reconstruction with time of flight cameras.
Int. J. Int. Syst. Technol. App. 2008, 5, 274–284. [CrossRef]
8. Guomundsson, S.A.; Aanæs, H.; Larsen, R. Environmental effects on measurement uncertainties of
time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and
Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2, pp. 113–116.
9. Rapp, H. Experimental and Theoretical Investigation of Correlating ToF-Camera Systems. Master’s Thesis,
University of Heidelberg, Heidelberg, Germany, September 2007.
10. Falie, D.; Buzuloiu, V. Noise characteristics of 3D time-of-flight cameras. In Proceedings of the 2007
International Symposium on Signals, Circuits and Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2,
pp. 229–232.
11. Karel, W.; Dorninger, P.; Pfeifer, N. In situ determination of range camera quality parameters by segmentation.
In Proceedings of the VIII International Conference on Optical 3-D Measurement Techniques, Zurich,
Switzerland, 9–12 July 2007; pp. 109–116.
12. Kahlmann, T.; Ingensand, H. Calibration and development for increased accuracy of 3D range imaging
cameras. J. Appl. Geodesy 2008, 2, 1–11. [CrossRef]
13. Karel, W. Integrated range camera calibration using image sequences from hand-held operation. Int. Arch.
Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 945–952.
14. May, S.; Werner, B.; Surmann, H.; Pervolz, K. 3D time-of-flight cameras for mobile robotics. In Proceedings of
the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October
2006; Volumes 1–12, pp. 790–795.
Sensors 2017, 17, 92 17 of 18

15. Kirmani, A.; Benedetti, A.; Chou, P.A. Spumic: Simultaneous phase unwrapping and multipath interference
cancellation in time-of-flight cameras using spectral methods. In Proceedings of the IEEE International
Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–6.
16. Freedman, D.; Krupka, E.; Smolin, Y.; Leichter, I.; Schmidt, M. Sra: Fast removal of general multipath
for ToF sensors. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland,
6–12 September 2014.
17. Kadambi, A.; Whyte, R.; Bhandari, A.; Streeter, L.; Barsi, C.; Dorrington, A.; Raskar, R. Coded time
of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles.
ACM Trans. Graph. 2013, 32, 167. [CrossRef]
18. Whyte, R.; Streeter, L.; Gree, M.J.; Dorrington, A.A. Resolving multiple propagation paths in time of flight
range cameras using direct and global separation methods. Opt. Eng. 2015, 54, 113109. [CrossRef]
19. Lottner, O.; Sluiter, A.; Hartmann, K.; Weihs, W. Movement artefacts in range images of time-of-flight
cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi,
Romania, 13–14 July 2007; Volumes 1–2, pp. 117–120.
20. Lindner, M.; Kolb, A. Compensation of motion artifacts for time-of flight cameras. In Dynamic 3D Imaging;
Springer: Heidelberg, Germany, 2009; Volume 5742, pp. 16–27.
21. Lee, S.; Kang, B.; Kim, J.D.K.; Kim, C.Y. Motion Blur-free time-of-flight range sensor. Proc. SPIE 2012, 8298,
105–118.
22. Lee, C.; Kim, S.Y.; Kwon, Y.M. Depth error compensation for camera fusion system. Opt. Eng. 2013, 52, 55–68.
[CrossRef]
23. Kuznetsova, A.; Rosenhahn, B. On calibration of a low-cost time-of-flight camera. In Proceedings of the
Workshop at the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014;
Lecture Notes in Computer Science. Volume 8925, pp. 415–427.
24. Lee, S. Time-of-flight depth camera accuracy enhancement. Opt. Eng. 2012, 51, 527–529. [CrossRef]
25. Christian, J.A.; Cryan, S. A survey of LIDAR technology and its use in spacecraft relative navigation. In
Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 19–22 August
2013; pp. 1–7.
26. Nitzan, D.; Brain, A.E.; Duda, R.O. Measurement and use of registered reflectance and range data in scene
analysis. Proc. IEEE 1977, 65, 206–220. [CrossRef]
27. Spirig, T.; Seitz, P.; Vietze, O. The lock-in CCD 2-dimensional synchronous detection of light. IEEE J.
Quantum Electron. 1995, 31, 1705–1708. [CrossRef]
28. Schwarte, R. Verfahren und vorrichtung zur bestimmung der phasen-und/oder amplitude information einer
elektromagnetischen Welle. DE Patent 19,704,496, 12 March 1998.
29. Lange, R.; Seitz, P.; Biber, A.; Schwarte, R. Time-of-flight range imaging with a custom solid-state image
sensor. Laser Metrol. Inspect. 1999, 3823, 180–191.
30. Piatti, D.; Rinaudo, F. SR-4000 and CamCube3.0 time of flight (ToF) cameras: Tests and comparison.
Remote Sens. 2012, 4, 1069–1089. [CrossRef]
31. Chiabrando, F.; Piatti, D.; Rinaudo, F. SR-4000 ToF camera: Further experimental tests and first applications
to metric surveys. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 149–154.
32. Chiabrando, F.; Chiabrando, R.; Piatti, D. Sensors for 3D imaging: Metric evaluation and calibration of a
CCD/CMOS time-of-flight camera. Sensors 2009, 9, 10080–10096. [CrossRef] [PubMed]
33. Charleston, S.A.; Dorrington, A.A.; Streeter, L.; Cree, M.J. Extracting the MESA SR4000 calibrations.
In Proceedings of the Videometrics, Range Imaging, and Applications XIII, Munich, Germany, 22–25 June
2015; Volume 9528.
34. Ye, C.; Bruch, M. A visual odometry method based on the SwissRanger SR4000. Proc. SPIE 2010, 7692, 76921I.
35. Hong, S.; Ye, C.; Bruch, M.; Halterman, R. Performance evaluation of a pose estimation method based on the
SwissRanger SR4000. In Proceedings of the IEEE International Conference on Mechatronics and Automation,
Chengdu, China, 5–8 August 2012; pp. 499–504.
36. Lahamy, H.; Lichti, D.; Ahmed, T.; Ferber, R.; Hettinga, B.; Chan, T. Marker-less human motion analysis
using multiple Sr4000 range cameras. In Proceedings of the 13th International Symposium on 3D Analysis
of Human Movement, Lausanne, Switzerland, 14–17 July 2014.
Sensors 2017, 17, 92 18 of 18

37. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera
SwissrangerTM . In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision
Metrology, Dresden, Germany, 25–27 September 2006; pp. 136–141.
38. Weyer, C.A.; Bae, K.; Lim, K.; Lichti, D. Extensive metric performance evaluation of a 3D range camera.
Int. Soc. Photogramm. Remote Sens. 2008, 37, 939–944.
39. Mure-Dubois, J.; Hugli, H. Real-Time scattering compensation for time-of-flight camera. In Proceedings
of the ICVS Workshop on Camera Calibration Methods for Computer Vision Systems, Bielefeld, Germany,
21–24 March 2007.
40. Kavli, T.; Kirkhus, T.; Thielmann, J.; Jagielski, B. Modeling and compensating measurement errors caused
by scattering time-of-flight cameras. In Proceedings of the SPIE, Two-and Three-Dimensional Methods for
Inspection and Metrology VI, San Diego, CA, USA, 10 August 2008.
41. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995.
42. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
43. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9,
293–300. [CrossRef]
44. Zhang, J.; Wang, S. A fast leave-one-out cross-validation for SVM-like family. Neural Comput. Appl. 2016, 27,
1717–1730. [CrossRef]
45. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag.
2010, 25, 53–82. [CrossRef]

© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
