
sensors

Article
Modular Approach for Odometry Localization Method for
Vehicles with Increased Maneuverability
Chenlei Han * , Michael Frey and Frank Gauterin

Institute of Vehicle System Technology, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany;
michael.frey@kit.edu (M.F.); frank.gauterin@kit.edu (F.G.)
* Correspondence: chenlei.han@kit.edu

Abstract: Localization and navigation not only serve to provide positioning and route guidance
information for users, but also are important inputs for vehicle control. This paper investigates the
possibility of using odometry to estimate the position and orientation of a vehicle with a wheel
individual steering system in omnidirectional parking maneuvers. Vehicle models and sensors
have been identified for this application. Several odometry versions are designed using a modular
approach, which was developed in this paper to help users to design state estimators. Different
odometry versions have been implemented and validated both in the simulation environment and in
real driving tests. The evaluation results show that the versions using more models and using state
variables in the models provide both more accurate and more robust estimation.

Keywords: localization; odometry; unscented Kalman filter (UKF); vehicle models; omnidirectional;
modular approach

Citation: Han, C.; Frey, M.; Gauterin, F. Modular Approach for Odometry Localization Method for Vehicles with Increased Maneuverability. Sensors 2021, 21, 79. https://dx.doi.org/10.3390/s21010079

Received: 2 December 2020; Accepted: 22 December 2020; Published: 25 December 2020

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

1.1. Background
The development towards more assistance and automation in vehicles has accelerated significantly in recent years. As a preliminary stage to fully automated driving, the combined use of a lane-keeping assistance system (LKAS) and a Stop&Go adaptive cruise control (ACC) is already possible with series components. Parking assistance and highway pilot have been legally permitted in Austria since January 2019 [1]. One of the most important prerequisites is a highly accurate and robust localization. With increasing levels of driving automation, the demands on localization and navigation also increase. Localization and navigation not only serve to provide positioning and route guidance information for users, but also are important inputs for vehicle control.

In addition to automation, the electrification of the automotive industry is continuously progressing. The added value of electric drives over combustion engines is clearly evident, as electric drives can be integrated directly into every wheel. Together with novel suspensions, which allow greater steering angles, maneuverability can be significantly improved [2,3]. In the project "OmniSteer", which was funded by the German Federal Ministry of Education and Research (BMBF), a demonstrator vehicle with wheel individual steering and a ±90° wheel steering angle was developed [4]. The driving maneuvers in Figure 1 can be realized. Such a novel vehicle configuration and omnidirectional maneuvers also place new requirements on localization.

1.2. State of the Art
Localization methods in the automotive industry can generally be divided into two categories, according to Ref. [5]: global and relative localization (Figure 2).

Sensors 2021, 21, 79. https://dx.doi.org/10.3390/s21010079 https://www.mdpi.com/journal/sensors


Figure 1. (a) Possible driving modes using novel suspensions; (b) possible continuously driven parking maneuver using novel suspensions.

[Figure 2 shows a tree: localization in the automotive industry divides into global localization (satellite navigation, landmark-based navigation) and relative localization (inertial navigation, visual odometry, wheel odometry).]

Figure 2. Classification of localization methods according to Ref. [5].

Global navigation satellite systems (GNSS), such as GPS, are a recognized approach to obtaining an absolute position [6]. Real-time kinematic (RTK-GPS) can even provide a position with centimeter-level precision [7]. However, GNSS suffers from poor signal conditions in urban areas, and its stability degrades strongly under poor sky view, building obstructions or multi-path reflections [8].

With landmark-based navigation, a LiDAR or other vision sensor detects an object, which serves as a landmark in the scenery, and tries to assign it to a previously saved map. Landmark-based navigation provides higher localization accuracy in a known environment. However, it is a relatively expensive solution (especially with LiDAR), and it strongly depends on the environmental conditions [5].

Inertial navigation uses an inertial measurement unit (IMU), which records the three-dimensional values of vehicle acceleration and rotational speed [6]. The measurement is integrated over time to determine the position. Its advantage is the availability of an IMU, which is generally independent of external factors. Typical for inertial navigation is the accumulation error. An IMU that can limit the position error drift to less than 1500 m in the first hour costs around EUR 80,000.

Visual odometry (VO) calculates the relative transformation from one image to the next and finds the complete trajectory of the camera based on the images [9]. VO is free of errors that depend on terrain or vehicle parameters, but it does depend on the environment.

Wheel odometry (abbreviated below as odometry) is a method of estimating the position and orientation of a mobile system using the data from its propulsion system [10]. In the area of automotive engineering, measured variables from the chassis (wheel rotation and direction), the steering system (steering wheel angle or wheel angle) and the yaw rate sensor are commonly used. Since these sensors are already available in vehicles, there is no additional cost. Odometry is the most widely used localization method. It offers good short-term accuracy, is inexpensive, and allows high sampling rates [11–13].

As with inertial navigation, the basic idea of odometry is the integration of incremental motion information over time, which inevitably leads to accumulation errors. Orientation errors lead to large lateral position errors, which increase proportionally with the distance travelled by the vehicle. Besides this, odometry is sensitive to unsystematic errors, such as slip, road unevenness, side wind, etc. Therefore, odometry is always fused

with other localization methods to increase the accuracy and ensure the robustness of the
fused localization results. Despite these limitations, most researchers [13–18] still agree
that odometry is an important part of the localization system, and the navigation tasks will
be simplified if the accuracy of the odometry can be improved [10].
In most papers, odometry is designed for vehicles with conventional front axle steering.
The most popular vehicle models to calculate the vehicle position are the kinematic two-
track model [17,19,20], the linear single-track model [21] and the kinematic yaw rate
model [22,23]. However, kinematic models have their own limitations during maneuvers
with large steering angles and high lateral acceleration. Besides this, the conventional models
cannot fulfill the requirement of vehicles with wheel individual steering.
This paper introduces an odometry localization method using unscented Kalman filter
(UKF) for vehicles with wheel individual steering systems and an increased steering angle
(up to ±90°). Three vehicle models are designed to meet the new requirements of a wheel
individual steering system. In addition, a novel modular approach is introduced and imple-
mented to design a state estimator for odometry. Using this approach, different odometry
versions with a combination of the three vehicle models are designed. Different odometry
versions have been validated both in the simulation environment, with a validated vehicle
model, and in real driving tests using a demonstration vehicle. Different kinds of omnidi-
rectional parking maneuvers have been used for the validation. Three evaluation criteria
are used to evaluate the results.
The remainder of this work is organized as follows: In Section 2, the vehicle models
and sensors are shown. The process to design the state estimator for odometry using the
modular approach is introduced. The descriptions of the test vehicle, test maneuvers and
validation environment follow in Section 3. The evaluation results for different odometry
versions are shown and compared in Section 4. Section 5 concludes this paper.

2. Odometry Localization Method


2.1. Functionality of a State Estimator
Odometry can be classified as state estimation. The complete process of a state
estimator is illustrated in Figure 3. Model, sensor and estimation method are the three
important components of a state estimation.

[Figure 3 shows the functional sequence: the state-transition and observation models feed the estimation method; the prediction step computes the predicted states from the initial or previous states; the correction step weights the virtual measurements from the observation models against the real measurements from the sensors (observations) to obtain the corrected state; sensor values can also enter the models directly as inputs.]

Figure 3. Functional sequence of a state estimator.

The estimation method is central to a state estimator. For linear systems, the most popular and effective state estimator is the Kalman filter (KF) [24]. For nonlinear systems, there are many variations based on the KF, two of the most common ones being the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). Both variations operate within the existing KF framework but use different approaches to handle nonlinearity. EKF uses an analytical linearization approach involving Jacobian matrices, while UKF uses a statistical approach called the unscented transformation (UT) [25]. Due to the nonlinearity of the system in this paper and its simple usability, the UKF has been used as the estimator.

The estimation process consists of two steps: the prediction step and the correction step [26]. In the prediction step, the predicted state X̂k|k−1 and the estimate covariance Pk|k−1 are calculated from the state transition models, which describe the transitions between the temporally successive states Xk−1 and Xk. The indexing notation k|k−1 expresses the period from k−1 to k. The correction step follows the prediction step. Based on the predicted state X̂k|k−1, a virtual measurement Ŷk|k−1 can be calculated using the observation models, which describe the relationships between the state variable Xk and the available measurements from the sensors Yk. At the same time, the real measurements Yk are acquired by the sensors. The Kalman gain Kk is calculated, by means of which a weighting of the virtual measurement and the real measurement takes place, so that a corrected state X̂k is obtained. The sensor values used for the weighting are called observations. The sensor values can also be used as inputs in models directly.
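The two-step cycle described above can be sketched in code. The following scalar Kalman filter is a deliberately simplified, linear stand-in for the UKF used in this paper; the constant state-transition and observation models (a, h) and the noise values (q, r) are hypothetical placeholders, not values from the paper:

```python
# Minimal scalar Kalman filter illustrating the prediction/correction
# structure described above. The models are hypothetical placeholders:
# a constant state transition and a direct observation of the state.

def kf_step(x_prev, p_prev, y_real, a=1.0, h=1.0, q=0.01, r=0.1):
    """One estimator cycle: predict, then correct with a real measurement."""
    # Prediction step: predicted state x_{k|k-1} and covariance P_{k|k-1}
    # from the state transition model x_k = a * x_{k-1}.
    x_pred = a * x_prev
    p_pred = a * p_prev * a + q

    # Correction step: virtual measurement from the observation model
    # y = h * x, then weighting via the Kalman gain K_k.
    y_virtual = h * x_pred
    k_gain = p_pred * h / (h * p_pred * h + r)
    x_corr = x_pred + k_gain * (y_real - y_virtual)
    p_corr = (1.0 - k_gain * h) * p_pred
    return x_corr, p_corr

# Usage: the corrected state moves from the prediction toward the measurement,
# and the corrected covariance shrinks.
x, p = kf_step(x_prev=0.0, p_prev=1.0, y_real=1.0)
```

In the vector-valued UKF case the same structure holds, with the unscented transformation replacing the linear propagation of mean and covariance.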
2.2. Sensors
The sensors available on the demonstrator vehicle were used, namely the speed sensors and the wheel steering angle sensors of the four wheels, as well as the yaw rate sensor in the geometric center of the vehicle (see Table 1 and Figure 4).

Table 1. List of the sensors on the demonstration vehicle.

Position                     | Sensor                                        | Price/Unit
Wheel steering angle sensors | Bosch LWS 5.6.3                               | 80 EUR
Wheel speed sensors          | Integrated Speed Sensor of Traction Motor 1   | -
Yaw rate sensor              | UM7 IMU                                       | 150 EUR
1 Heinzmann PMS 080.

Figure 4. Position of the sensors (blue points: wheel steering angle sensors; red points: wheel speed sensors; yellow point: yaw rate sensor).

2.3. Vehicle Models


The aim of the vehicle models is to describe the vehicle position and orientation by
means of the measurable driving variables (wheel speed, yaw rate and steering angle).
Since the odometry method should be real-time-capable, the vehicle models should not be
complex. The meanings of the symbols are listed in Tables A1 and A2 in Appendix B.

2.3.1. Motion Model


The motion model describes the relationship between the vehicle position and the
orientation between time step k − 1 and k, which is shown in Figure 5a.
Figure 5. (a) Movement of a vehicle from time step k − 1 to k; (b) kinematic relationship between wheel velocity and vehicle velocity.

The motion model can then be described at the geometry center of the vehicle by

xk = xk−1 + vk−1 ·∆t· cos(β k−1 + θ k−1 + ω k−1 ·∆t/2) = f x (xk−1 , θ k−1 , vk−1 , β k−1 , ω k−1 )
yk = yk−1 + vk−1 ·∆t· sin(β k−1 + θ k−1 + ω k−1 ·∆t/2) = f y (yk−1 , θ k−1 , vk−1 , β k−1 , ω k−1 )   (1)
θ k = θ k−1 + ω k−1 ·∆t = f θ (θ k−1 , ω k−1 )

where (x, y, θ) are the vehicle position and orientation in the global coordinate frame. ∆t is the sample time. v, β and ω represent the vehicle velocity, the side slip angle and the yaw rate, respectively. They are unknown vehicle states and should be determined through a suitable sensor. For non-measurable states, mathematical correlations to the available sensors must be found, so that they can be estimated. The yaw rate can be measured by the yaw rate sensor. The vehicle velocity and the side slip angle are only measurable by expensive devices.
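A minimal sketch of the dead-reckoning update in Equation (1) (function and variable names are illustrative, not taken from the paper's implementation):

```python
import math

def motion_model(x, y, theta, v, beta, omega, dt):
    """One step of the motion model, Equation (1): the position is advanced
    along the direction beta + theta + omega*dt/2, and the orientation is
    advanced by the integrated yaw rate."""
    heading = beta + theta + omega * dt / 2.0
    x_new = x + v * dt * math.cos(heading)
    y_new = y + v * dt * math.sin(heading)
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# Straight driving (beta = 0, omega = 0) moves the vehicle purely along
# its current heading.
x, y, theta = motion_model(0.0, 0.0, 0.0, v=1.0, beta=0.0, omega=0.0, dt=0.1)
```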
2.3.2. Complementary Models

Model-1: Model for wheel velocity vi

Normally, the vehicle velocity is calculated as the average of the wheel speeds from the non-driven and non-steered axle (usually the rear axle). However, due to the wheel individual steering and the increased steering angle, this calculation is no longer valid. Therefore, each wheel must be considered individually. The kinematic relationship between each wheel velocity and the vehicle velocity is shown in Figure 5b, and can be described by

(vi,x , vi,y , 0)T = (v· cos(β), v· sin(β), 0)T + (0, 0, ω)T × (ri,x , ri,y , 0)T = (v· cos(β) − ω ·ri,y , v· sin(β) + ω ·ri,x , 0)T   (2)

where vi,x and vi,y are the wheel velocities decomposed in the x and y direction in the vehicle-fixed coordinate frame. ri,x and ri,y represent the wheel contact points. i indicates the wheel position (i = fl, fr, rl, rr).

Equation (2) can be used to derive the relationship between each wheel’s velocity and
the vehicle’s velocity:

vi = vi,x · cos(ε i ) + vi,y · sin(ε i )
   = v· cos(ε i − β) + ω ·(ri,x · sin(ε i ) − ri,y · cos(ε i )) = f vi (v, β, ω, ε i )   (3)

where ε i is the angle between the vehicle longitudinal axis and the vector of wheel velocity,
which describes the actual moving direction of each wheel. This angle will be called wheel
velocity angle in this paper, and it can be derived using the wheel steering angle δi and the
tire slip angle αi (tire slip angle is the angle between the tire main level and the vector of
wheel velocity):
ε i = δi + αi (4)
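Equations (2) and (3) can be checked numerically. The sketch below uses made-up geometry and state values; it computes the wheel velocity components from Equation (2), derives the wheel velocity angle, and verifies that Equation (3) reproduces the magnitude of the wheel velocity vector:

```python
import math

def wheel_velocity_components(v, beta, omega, r_ix, r_iy):
    """Equation (2): wheel velocity in the vehicle-fixed frame."""
    v_ix = v * math.cos(beta) - omega * r_iy
    v_iy = v * math.sin(beta) + omega * r_ix
    return v_ix, v_iy

def wheel_speed(v, beta, omega, eps_i, r_ix, r_iy):
    """Equation (3): wheel speed from the vehicle states and the
    wheel velocity angle eps_i."""
    return (v * math.cos(eps_i - beta)
            + omega * (r_ix * math.sin(eps_i) - r_iy * math.cos(eps_i)))

# Hypothetical wheel contact point and driving state (made-up values).
v, beta, omega = 2.0, 0.1, 0.3
r_ix, r_iy = 1.2, 0.8
v_ix, v_iy = wheel_velocity_components(v, beta, omega, r_ix, r_iy)
eps_i = math.atan2(v_iy, v_ix)   # wheel velocity angle, cf. Equation (7)
v_i = wheel_speed(v, beta, omega, eps_i, r_ix, r_iy)
# v_i equals the magnitude of the wheel velocity vector (v_ix, v_iy).
```

The agreement of v_i with the vector magnitude holds algebraically for any choice of v, β, ω and wheel position, since Equation (3) is the projection of Equation (2) onto the wheel's moving direction.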
Model-2: Model for side slip angle β
The side slip angle is the angle between the vehicle’s movement direction and the
vehicle’s longitudinal axis [27]. Equation (2) can be expressed as:
β = ε i − arccos((vi − ω ·(ri,x · sin ε i − ri,y · cos ε i ))/v)   (5)

This means that four side slip angles will be calculated from four wheel velocities vi .
Using the four β from each wheel, the following equation can be constructed:
β = (1/4)·∑i=1..4 ε i − (1/4)·∑i=1..4 arccos((vi − ω ·(ri,x · sin ε i − ri,y · cos ε i ))/v) = f β (ε i , vi , v, ω)   (6)

Model-3: Model for wheel velocity angle ε i


According to Equation (4), the wheel velocity angle ε i can be determined with knowl-
edge of the tire slip angle αi . However, the indirect measurement of the tire slip angle is
highly dependent on the accuracy of the tire model and the accelerometer. Therefore, it is
not suitable for real-time applications. Based on Equation (2), the wheel velocity angle can
also be described as
ε i = arctan(vi,y /vi,x ) = arctan((v· sin β + ri,x ·ω)/(v· cos β − ri,y ·ω)) = f ε i (v, β, ω)   (7)
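As a round-trip sanity check of Models 2 and 3 (with made-up numbers), Equation (7) yields the wheel velocity angles, Equation (3) the wheel speeds, and Equations (5) and (6) recover the side slip angle. Note that the arccos in Equation (5) is sign-ambiguous for a yawing vehicle, so this sketch restricts itself to ω = 0:

```python
import math

# Hypothetical wheel contact points (fl, fr, rl, rr) in the vehicle frame.
wheels = [(1.2, 0.8), (1.2, -0.8), (-1.2, 0.8), (-1.2, -0.8)]
v, beta, omega = 2.0, 0.3, 0.0   # straight driving with side slip

betas = []
for r_ix, r_iy in wheels:
    # Equation (7): wheel velocity angle from the vehicle states.
    eps = math.atan2(v * math.sin(beta) + r_ix * omega,
                     v * math.cos(beta) - r_iy * omega)
    # Equation (3): wheel speed.
    v_i = (v * math.cos(eps - beta)
           + omega * (r_ix * math.sin(eps) - r_iy * math.cos(eps)))
    # Equation (5): per-wheel side slip angle; the ratio is clamped
    # against floating-point rounding just above 1.
    ratio = (v_i - omega * (r_ix * math.sin(eps) - r_iy * math.cos(eps))) / v
    betas.append(eps - math.acos(min(1.0, ratio)))

# Equation (6): the average of the four per-wheel values recovers beta.
beta_est = sum(betas) / 4.0
```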

2.4. Design of Odometry Localization Method Using Modular Approach


The following elements can be derived from Figure 3 to design a state estimator.
They are state variables, sensor values (as observations or inputs), state transition models
and observation models with their parameters. A modular approach to designing a state
estimator is developed in this paper, which helps users to determine the essential elements
mentioned above. The process of this approach is illustrated by the flow chart in Figure 6.
The five elements to be determined are listed on the left side. This approach consists of
three steps, including the preparation step 0. In every step, the actions for a certain element
(e.g., for state variables) are described in the relevant row. Steps 1 and 2 should be repeated
until the termination condition after step 2 is fulfilled. The black arrows indicate the process
flow, the green arrows the signal flow.

[Figure 6 shows the flow chart: step 0 prepares the models and the available signals; step 1 defines state variables; step 2.1 selects or designs state transition models; step 2.2 selects or designs observation models and defines observations and inputs; if not all variables in the models are known, a next round follows, otherwise the process stops. Black arrows: process flow; green arrows: signal flow.]

Figure 6. Flow chart of modular approach to designing state estimator.

Step 0: In this step, the resources for a state estimator will be prepared. The resources here include, on the one hand, the models, which may be used as the state transition model or the observation model, and, on the other hand, the available signals on board, which may be used as inputs of the models or as observations that take part in the correction step in Figure 3.

During this step, users need to be familiar with the characteristics of the target system. The models prepared here should contain the relations between the variables to be estimated and the available signals. The available signals here can be sensor measurements or outputs of a controller.
• Round 1

Step 1: In this step, the unknown variables will be defined as state variables. Usually, the unknown variables in Round 1 are the variables to be estimated.
be estimated.
will be se-
lected among theStep 2.1: After
prepared modelsdefining
in stepthe0 tofirst state variables,
describe state transition
the state variables. Modelsmodels
with will be selected
among
outputs of the the prepared
defined models
state variables caninbestep
used.0 toThese
describe the are
models stateoften
variables. Models with outputs
differential
equations. Ifofthere
the defined state one
is no suitable variables
amongcan the be used. These
prepared models, models are often
users have differential
to design a equations.
If there
suitable model or useis ano suitable
random walkonemodel
among the prepared
instead. A randommodels,
walk modelusersmeans
have that
to design a suitable
modelwill
the state variable or inherit
use a random
its valuewalk
from model
the lastinstead.
time step.A random walk model means that the state
Subsequently,
variablethe willunknown
inherit itsvariables
value from in the
the state transition
last time step. models need to be
known. If the unknown variables can be measured
Subsequently, the unknown variables in the state by sensors, then transition
these sensor signalsneed to be known.
models
will be defined as inputs. It is also possible to use the defined state variables
If the unknown variables can be measured by sensors, then these sensor signals will beas the sources
of the unknown
definedvariables in theItmodels.
as inputs. is also possible to use the defined state variables as the sources of the
Step 2.2: The second
unknown variables part ofin step
the2 models.
is to select or design observation models. Obser-
vation models transform the state variables
Step 2.2: The second part of into a form
step ofselect
2 is to sensorormeasurements.
design observation The out-
models. Observation
put of the observation models must be a variable, which can be
models transform the state variables into a form of sensor measurements. measured by a sensor. The output of
This sensor measurement will be defined as an observation. In addition, observation mod-
the observation models must be a variable, which can be measured by a sensor. This sensor
els cannot be differential equations.
measurement will be defined as an observation. In addition, observation models cannot be
The sources of the unknown variables in observation models can be sensor signals or
differential equations.
the defined state variables. It should be noted that at least one state variable must be used
The sources of the unknown variables in observation models can be sensor signals
as the source of the unknown variable in an observation model. Otherwise, such an ob-
or the
servation model does defined state variables.
not correct It should
the state variables beloses
and noted itsthat at least
function. one state
If there is novariable must be
used as the source of
suitable model, this step can be skipped. the unknown variable in an observation model. Otherwise, such an
observation model does not correct the state variables and loses its function. If there is no
suitable model, this step can be skipped.
After steps 2.1 and 2.2, if some unknown variables in the state transition models and
observation models still have no sources, a second round is necessary.
Sensors 2021, 21, 79 8 of 26
• Round 2:
Step 1: Similar to step 1 in the first round, the unknown variables in the state transition models and the observation models, which have not had their sources identified in Round 1, will be defined as new state variables.
Step 2: It is the same as step 2 in Round 1.
The whole process can stop after all unknown variables have found their sources.
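One round of this procedure can be sketched as simple bookkeeping. This is a hedged illustration with invented names: a sensor-backed unknown becomes an input, anything else becomes a new state variable, and the procedure repeats for unknowns introduced by the newly designed models.

```python
# Hedged sketch (illustrative names, not from the paper): one round of the
# modular approach classifies each unknown model variable as an input, if a
# sensor measures it directly, or as a new state variable otherwise. The same
# classification is repeated for unknowns of the newly designed models.
def classify_unknowns(unknowns, measured_by_sensors):
    inputs = [u for u in unknowns if u in measured_by_sensors]
    new_states = [u for u in unknowns if u not in measured_by_sensors]
    return inputs, new_states

# Round 1 of the odometry example: omega has a yaw rate sensor, v and beta do not.
inputs, new_states = classify_unknowns(["omega", "v", "beta"], {"omega"})
```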
2.4.1. Odometry Basic Version
Using this modular approach, several versions for odometry localization are designed. Figure 7 shows the basic version of odometry localization.

Figure 7. Modular approach to designing a state estimator for odometry localization (basic version).

Step 0: The models and the available signals have already been introduced in Sections 2.2 and 2.3.
• Round 1:
Step 1: In the case of odometry localization, the variables to be estimated are the vehicle position and orientation. The first state variables are X = [x y θ]^T.
Step 2.1: After defining the first state variables, state transition models have to be designed to describe the state variables. Equation (1)'s motion model can be used to describe the vehicle position and orientation. The motion model is divided into two blocks, respectively, for the position and orientation. The blocks of the motion model and the state variables are connected by a double arrow line, because the motion model is a discrete differential equation and needs the value from the last time step to determine the actual time step. There are three unknown variables, v, β and ω, in Equation (1)'s motion model, in which the variable ω can use the yaw rate sensor signal. Therefore, the yaw rate sensor signal is defined as an input ωI (the subscript I stands for the input). However, there are no accessible sensors to measure the vehicle velocity v and side slip angle β directly. These two variables remain temporarily unknown.
Step 2.2: The goal of this step is to design the observation models to describe the sensor signals through at least one state variable. However, this step can be skipped if no suitable model exists. Because GPS data are not planned to be used, there is no model to describe the sensor signals by the defined state variables [x y θ]^T.
After steps 2.1 and 2.2, there are still two unknown variables in the motion model that have no source to provide them signals. A second round is necessary.
• Round 2:
Step 1: The two remaining unknown variables v and β from the last round are defined as new state variables. The state variable vector becomes X = [x y θ v β]^T.
Step 2.1: After defining the new state variable, the existing motion model can be
completed by the newly defined v and β, which means the unknown variables in the motion
model use the newly defined state variables v and β as sources. After that, state transition
models for v and β will be designed. If the users do not want to have a complex system,
it is recommended in this round to use the defined state variables and the available sensor
signals as inputs for the state transition models, so there will not be too many new state
variables. If it is difficult to find a suitable model, a random walk model can be used instead
of a model with physical meaning. In Figure 7, the random walk model is represented by
a diamond with the letter “R”. The connection between the random walk model and the
state variable is also a double arrow line.
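The completed motion model over the extended state can be sketched as follows. Equation (1) itself is not reproduced in this excerpt, so a standard discrete kinematic form using v, β and the yaw rate input ω is assumed here; v and β keep their random walk transitions.

```python
import numpy as np

# Hedged sketch: a discrete motion model over the extended state
# X = [x, y, theta, v, beta]. The exact Equation (1) is not shown in this
# excerpt; a common kinematic form with velocity, side slip angle and the
# yaw rate input omega is assumed for illustration.
def motion_model(X, omega, dt):
    x, y, theta, v, beta = X
    x_new = x + v * np.cos(theta + beta) * dt
    y_new = y + v * np.sin(theta + beta) * dt
    theta_new = theta + omega * dt
    return np.array([x_new, y_new, theta_new, v, beta])  # v, beta: random walk

# One 0.1 s step of straight driving at 1 m/s:
X1 = motion_model(np.array([0.0, 0.0, 0.0, 1.0, 0.0]), omega=0.0, dt=0.1)
```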
Step 2.2: According to model-1 (Equation (3)), wheel velocities can be expressed by the
state variables v and β. Thus, the wheel speed sensor signals are defined as an observation
vO,i (the subscript O stands for observation). Model-1 also requires the yaw rate and wheel
velocity angle as inputs. Assuming that the tire side slip angle can be ignored here, then the
wheel velocity angles are equal to the wheel steering angles. Just like the yaw rate signal
that has been already defined as an input ω I in Round 1, the wheel steering angle signals
are also defined as inputs δI,i .
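A model-1-style wheel speed observation can be sketched from rigid body kinematics. Equation (3) is not reproduced in this excerpt, so the relation v_i = v_vehicle + ω × r_i is assumed, with the wheel position r_i as an assumed parameter:

```python
import numpy as np

# Hedged sketch of a model-1-style observation: the speed of wheel i follows
# from rigid body kinematics, v_i = v_vehicle + omega x r_i (the paper's
# Equation (3) is not reproduced here; r_i is an assumed wheel position).
def wheel_speed(v, beta, omega, r_i):
    v_vec = v * np.array([np.cos(beta), np.sin(beta)])  # vehicle velocity
    rot = omega * np.array([-r_i[1], r_i[0]])           # omega x r_i in 2D
    return np.linalg.norm(v_vec + rot)

# Straight driving without yawing: every wheel observes the vehicle speed.
v_fl = wheel_speed(v=2.0, beta=0.0, omega=0.0, r_i=np.array([1.3, 0.8]))
```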
This process can be finished after Round 2, because all formerly unknown variables
are now known. Based on the blocks in Figure 7, the five essential elements that make up
the state estimator can be formulated and written down in the right column.
Based on the basic version, some modifications have been made to optimize the state
estimator in the following sections.

2.4.2. Odometry Version 111


In the basic version, the yaw rate sensor was directly used in the motion model and
model-1. However, the raw signal from the yaw rate sensor always contains noise and
error. If the signal is used in models without processing, the error will be inherited by the
output of the models. Although real measurements from sensors can be directly used for
unknown variables in models, it is also possible to define the known variables (here the
yaw rate ω) as state variables. A better result for the yaw rate can be reached through a
suitable state transition model and correction by the yaw rate sensor. This also benefits
the position and orientation estimation. Figure 8 shows the modified Version 111 based on the basic version. Dashed lines and dashed blocks denote the differences from the basic version.
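The benefit of keeping ω as a state instead of using the raw sensor can be illustrated with a scalar Kalman-style correction. This is a hedged sketch (the numbers and names are illustrative): the models then receive a filtered yaw rate rather than the raw, noisy reading.

```python
# Hedged sketch: instead of feeding the raw yaw rate into the models, omega
# is kept as a state and corrected by the yaw rate sensor, so the models
# receive a filtered value. A scalar Kalman-style update illustrates this:
def correct_omega(omega_pred, P, z_sensor, R):
    K = P / (P + R)                                  # Kalman gain
    omega_new = omega_pred + K * (z_sensor - omega_pred)
    P_new = (1.0 - K) * P                            # reduced uncertainty
    return omega_new, P_new

omega, P = correct_omega(omega_pred=0.20, P=0.01, z_sensor=0.30, R=0.01)
# equal confidence in prediction and sensor -> estimate lands halfway
```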

Figure 8. Modular approach to designing a state estimator for odometry localization (Version 111 based on basic version, changes marked with dashed lines or dashed blocks and blue letters, respectively).

With the newly defined state variable ω, the motion model and model-1 will be updated. Steps 2.1 and 2.2 should be executed again. To simplify the system, the random walk model is used again in step 2.1 as the state transition model for ω. In step 2.2, the measurement from the yaw rate sensor can be directly expressed by the state variable ω. A passage model is inserted here and represented by a diamond with the number "1".
The right column of Figure 8 was modified according to the change. The differences from Figure 7 are marked in blue.
Before introducing additional versions, the meaning of the version numbers is explained in Figure 9. There are three complementary models in total. The three numbers of the version number represent how the three complementary models have been used in the version, respectively. There are two states to describe the usage of model-1 and three states each for model-2 and model-3. Model-1 describes the relation between the vehicle states and each wheel's speed, which is important for vehicles with a wheel individual steering system. Thus, model-1 must be used, and the number for model-1 only contains the information of which input signal or state has been used.
For example, Version 111 means that model-1 has used the wheel steering angles, signal δI,i, as input, while model-2 and model-3 are not used. The differences between δI,i and εi will be explained in Version 212.
The basic version is not numbered in this paper, because the yaw rate signal from an inertial measurement unit usually is not used directly without passing a filter.

[Figure 9 legend: first digit (model-1): 1: δI,i used, 2: εi used; second digit (model-2): 1: model not used, 2: model used with δI,i, 3: model used with εi; third digit (model-3): 1: impossible to use, 2: possible to use but not used, 3: possible to use and used.]

Figure 9. Explanation of version numbers.

2.4.3. Odometry Version 212
Usually, assumptions are made to simplify model complexity. While using model-1 (Equation (2)) in the previous versions, we assumed that the wheel velocity angle equals the wheel steering angle, which does not meet reality, especially when the lateral acceleration is high. Thus, the wheel velocity angles εi should also be treated as unknown variables and defined as state variables, which is realized in Version 212 (see Figure 10). In this version, a random walk model is used as the state transition model at first. For the observation model, a passage model is used; although the wheel velocity angle does not equal the wheel steering angle, the difference between the two variables can be modeled as measurement noise.
After defining the wheel velocity angles as state variables, model-1 can use either the state variables εi or the inputs δI,i. In Version 212, the new state variables εi are used in model-1.
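The Version 212 idea can be sketched as follows; names are illustrative. The εi states pass through an identity observation model and are compared with the measured steering angles δI,i, with the tire side slip between the two absorbed into the measurement noise covariance:

```python
import numpy as np

# Hedged sketch of Version 212: the wheel velocity angles eps_i become
# states (random walk transition) and are observed through a passage model
# by the measured wheel steering angles; the side slip they differ by is
# absorbed into the measurement noise covariance R.
def predict_steering_observation(eps_states):
    return np.asarray(eps_states, dtype=float)  # passage model: h(eps) = eps

eps = np.array([0.10, 0.10, -0.05, -0.05])      # four wheel velocity angles
z_pred = predict_steering_observation(eps)
delta_measured = np.array([0.12, 0.11, -0.06, -0.04])
residual = delta_measured - z_pred              # innovation used for correction
```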
Figure 10. Modular approach to designing a state estimator for odometry localization (Version 212 based on Version 111, changes marked with dashed lines or dashed blocks and blue letters, respectively).
2.4.4. Odometry Version 232
There are altogether four random walk models in Version 212. The more random walk models there are, the worse the stability of a state estimation, because the state variables cannot be constrained in a reasonable range by physical models. Thus, model-2 (Equation (6)) was used in Version 232 to replace the random walk model for β (see Figure 11). Because the second term of Equation (6) has less effect compared to the first, the second term is not implemented. This model can use the inputs δI,i or the state variables εi. The state variables εi are used in this version.
Figure 11. Modular approach to designing a state estimator for odometry localization (Version 232 based on Version 212, changes marked with dashed lines or dashed blocks and blue letters, respectively).
tively).Odometry Version 213
2.4.5.
Similar to the idea in Version 232, model-3 (Equation (7)) is used in Version 213
(see Figure 12) in order to keep the state variables within a reasonable range. Unlike
Version 232, which limits β directly through model-2, model-3 uses ω, v and β to calculate
ε i . ε i will be corrected by the observations δO,i , so that ω, v and β will also be corrected.
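A model-3-style observation can be sketched from the same rigid body kinematics. Equation (7) is not reproduced in this excerpt, so the following form is an assumption: εi is computed from ω, v and β (with an assumed wheel position r_i) and compared against the observed steering angle δO,i, so the correction propagates back to ω, v and β.

```python
import numpy as np

# Hedged sketch of a model-3-style observation (the paper's Equation (7) is
# not reproduced here): the wheel velocity angle eps_i follows from omega, v
# and beta via rigid body kinematics; its innovation against the measured
# steering angle delta_O_i corrects omega, v and beta.
def wheel_velocity_angle(v, beta, omega, r_i):
    vx = v * np.cos(beta) - omega * r_i[1]   # longitudinal wheel velocity
    vy = v * np.sin(beta) + omega * r_i[0]   # lateral wheel velocity
    return np.arctan2(vy, vx)

# Straight driving without yawing: the wheel velocity angle is zero.
eps_fl = wheel_velocity_angle(v=2.0, beta=0.0, omega=0.0, r_i=np.array([1.3, 0.8]))
```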
Figure 12. Modular approach to designing a state estimator for odometry localization (Version 213 based on Version 212, changes marked with dashed lines or dashed blocks and blue letters, respectively).

2.4.6. Odometry Version 233
Version 233 (see Figure 13) is a combination of Versions 232 and 213. All three models are used in this version.

Figure 13. Modular approach to designing a state estimator for odometry localization (Version 233 based on Version 212, changes marked with dashed lines or dashed blocks and blue letters, respectively).
2.4.7. Comparison of Odometry Versions
A total of 14 versions have been designed and are listed in Table 2. The different versions are categorized according to the model they used.

Table 2. Different odometry versions and the models they used.

Version | Model-1: εi or δI,i | Model-2: used? | Model-2: εi or δI,i | Model-3: used? | Models used
111 | δI,i | No  | -    | -   | Model 1
112 | δI,i | No  | -    | No  | Model 1
212 | εi   | No  | -    | No  | Model 1
121 | δI,i | Yes | δI,i | -   | Model 1 and 2
122 | δI,i | Yes | δI,i | No  | Model 1 and 2
132 | δI,i | Yes | εi   | No  | Model 1 and 2
222 | εi   | Yes | δI,i | No  | Model 1 and 2
232 | εi   | Yes | εi   | No  | Model 1 and 2
113 | δI,i | No  | -    | Yes | Model 1 and 3
213 | εi   | No  | -    | Yes | Model 1 and 3
123 | δI,i | Yes | δI,i | Yes | Model 1, 2 and 3
133 | δI,i | Yes | εi   | Yes | Model 1, 2 and 3
223 | εi   | Yes | δI,i | Yes | Model 1, 2 and 3
233 | εi   | Yes | εi   | Yes | Model 1, 2 and 3
The covariance matrices of system noise and measurement noise for the different versions are determined empirically.
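Such an empirical tuning can be sketched as follows. All values are illustrative assumptions, not the paper's actual settings; the point is that random-walk states typically get larger process noise than states backed by a physical model.

```python
import numpy as np

# Hedged sketch: empirically chosen noise covariances (values illustrative,
# not from the paper). Random-walk states (v, beta) are allowed more drift
# per step than states constrained by the motion model (x, y, theta).
Q = np.diag([1e-4, 1e-4, 1e-5,   # x, y, theta: motion model states
             1e-2, 1e-3])        # v, beta: random walk states
R = np.diag([0.05**2] * 4)       # four wheel speed observations
```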
3. Validation
3.1. Test Vehicle
In order to test the performances of the different versions, test drives were carried out with the demonstration vehicle shown in Figure 14a. The demonstration vehicle was developed in project "OmniSteer" and uses a novel chassis system. This novel chassis system [4] in Figure 14b,c applies a concept of steering wishbones, so that a 90° steering angle can be reached in both directions.

Figure 14. (a) Demonstration vehicle of project "OmniSteer"; (b) novel chassis system at increased steering angle 90°; (c) novel chassis system at increased steering angle −90°.

3.2. Driving Maneuvers
Omnidirectional maneuvers requiring increased steering angles usually appear during parking, especially when the leeway is not sufficient. Odometry is also an efficient and suitable method to estimate the vehicle position and orientation in parking maneuvers [21]. For these reasons, the different odometry versions in this paper have been evaluated through omnidirectional parking maneuvers. Figure 15 shows the eight driving maneuvers considered. The red vehicles or bicycle are the obstacles.


Figure 15. Driving maneuvers considered. (a) DM1: Vehicle parks in the parking lot without turning or stopping, 90° steering angle on each wheel can be realized; (b) DM2: Vehicle moves forward into the parking lot and the rear part of the vehicle is then pulled out. (c) DM3: Vehicle parks directly on the opposite side of the street without stopover; (d) DM4: Conventional reversing parking method; (e) DM5: Vehicle parks in a parking lot by steering the rear wheels in opposite directions to the front wheels; (f) DM6: Similar to DM5, but with correction at the end position; (g) DM7: Vehicle moves forward into the parking lot and the rear part of the vehicle is then pulled out, to avoid the obstacle; (h) DM8: Vehicle turns on its own vertical axis.
3.3. Validation Environments
The validation of the odometry consists of two steps. Firstly, the 14 odometry versions have been tested in a simulation environment. Figure 16a shows the validation process in the simulation environment. A validated multibody dynamic model [28], including the novel suspension and steering system, has been used to generate the reference values. Noises and errors determined from the real sensors have been added into the signals of the virtual sensors. The artificial sensor signals serve as inputs for the odometry algorithm. The vehicle position and orientation estimated by the odometry are to be compared with the reference from the vehicle model. Figure 16b shows one of the omnidirectional parking maneuvers.

Figure 16. (a) Validation process in simulation environment; (b) virtualization of omnidirectional parking maneuver in simulation environment.

The second step is validation by real driving tests. To evaluate and validate the estimated localization results, the exact determination of the reference position of the vehicle is crucial. Since there is no centimeter-level GNSS on the test vehicle, two laser pointers have been mounted on the longitudinal axis of the vehicle. Before and after a test drive, the position was marked on the ground by use of the two laser pointers. With the help of the marked points, the real position and orientation are calculated (detail in Appendix A).
3.4. Robustness Analysis
A localization method must not only be accurate, but also robust. The 14 odometry versions have been examined to see if they are robust against the failure of one or more sensors. The sensor failures were generated artificially. The corresponding sensor signals were set to zero. A total of 14 cases have been defined (see Table 3). The robustness analysis was first carried out without detection of the failed sensor, and then with detection by switching off the corresponding observation models.
switching off the corresponding observation models.
Table 3. Cases to analyze the robustness of odometry.
Table 3. Cases to analyze the robustness of odometry.
Case No.    Failed Sensors
1           No sensor failed
2           Wheel speed sensor FL
3           Wheel speed sensor FR
4           Wheel speed sensor RL
5           Wheel speed sensor RR
6           Yaw rate sensor
7           Wheel steering sensor FL
8           Wheel steering sensor FR
9           Wheel steering sensor RL
10          Wheel steering sensor RR
11          Wheel speed sensor FL + Wheel steering sensor FL
12          Wheel speed sensor FR + Wheel steering sensor FR
13          Wheel speed sensor RL + Wheel steering sensor RL
14          Wheel speed sensor RR + Wheel steering sensor RR
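The failure cases of Table 3 can be reproduced programmatically. The following is a minimal sketch (the data structure and function names are hypothetical, not from the paper) that sets the failed signals to zero and, when failure detection is assumed, reports which observation models should be switched off:

```python
# Hypothetical sketch of the artificial sensor-failure injection used for the
# robustness analysis: failed signals are set to zero; with detection enabled,
# the corresponding observation models are flagged for deactivation.
WHEELS = ("FL", "FR", "RL", "RR")

# Table 3: case number -> list of failed sensor names
FAILURE_CASES = {1: []}
FAILURE_CASES.update({i + 2: [f"wheel_speed_{w}"] for i, w in enumerate(WHEELS)})
FAILURE_CASES[6] = ["yaw_rate"]
FAILURE_CASES.update({i + 7: [f"wheel_steering_{w}"] for i, w in enumerate(WHEELS)})
FAILURE_CASES.update(
    {i + 11: [f"wheel_speed_{w}", f"wheel_steering_{w}"] for i, w in enumerate(WHEELS)}
)

def inject_failures(signals, case_no, detect=False):
    """Zero the failed signals of a case; with detection, also report the
    observation models to disable in the estimator."""
    failed = FAILURE_CASES[case_no]
    out = {name: (0.0 if name in failed else value) for name, value in signals.items()}
    disabled = set(failed) if detect else set()
    return out, disabled
```

Running the 14 cases twice, once with `detect=False` and once with `detect=True`, mirrors the two robustness-analysis passes described above.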

Sensors 2021, 21, 79 16 of 26


3.5. Evaluation Criteria (EC)

To evaluate the performances of the 14 odometry versions, the following errors are considered:

• EC-1: End position and orientation error in the global coordinate system (with superscript "E"):

$e^{E}_{pos,end} = e^{E}_{pos,t} = \sqrt{\left(x^{E}_{t}-\hat{x}^{E}_{t}\right)^{2}+\left(y^{E}_{t}-\hat{y}^{E}_{t}\right)^{2}}$ (8)

$e^{E}_{ang,end} = e^{E}_{ang,t} = \left|\theta^{E}_{t}-\hat{\theta}^{E}_{t}\right|$ (9)

• EC-2: Maximum position and orientation error in the global coordinate system while driving:

$e^{E}_{pos,max} = \max_{n} e^{E}_{pos,n} = \max_{n}\sqrt{\left(x^{E}_{n}-\hat{x}^{E}_{n}\right)^{2}+\left(y^{E}_{n}-\hat{y}^{E}_{n}\right)^{2}}$ (10)

$e^{E}_{ang,max} = \max_{n}\left|\theta^{E}_{n}-\hat{\theta}^{E}_{n}\right|$ (11)

• EC-3: Average error of position change and orientation change:

$\bar{e}_{dp} = \dfrac{\sum_{n=1}^{t}\left|dp_{n}-d\hat{p}_{n}\right|}{\sum_{n=1}^{t}dp_{n}}$ (12)

$\bar{e}_{d\theta} = \dfrac{\sum_{n=1}^{t}\left|d\theta_{n}-d\hat{\theta}_{n}\right|}{\sum_{n=1}^{t}d\theta_{n}}$ (13)

The meaning of the symbols in Equations (8) to (13) can be found in Figure 17.

Figure 17. Schematic illustration of evaluation criteria.
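Assuming the true and estimated trajectories are available as sampled arrays, Equations (8) to (13) can be sketched in Python (the array and function names are illustrative, not from the paper):

```python
import numpy as np

def ec1_end_errors(x, y, theta, x_hat, y_hat, theta_hat):
    """EC-1, Eqs. (8)-(9): position and orientation error at the final step t."""
    e_pos = np.hypot(x[-1] - x_hat[-1], y[-1] - y_hat[-1])
    e_ang = abs(theta[-1] - theta_hat[-1])
    return e_pos, e_ang

def ec2_max_errors(x, y, theta, x_hat, y_hat, theta_hat):
    """EC-2, Eqs. (10)-(11): maximum errors over all steps while driving."""
    e_pos = np.hypot(x - x_hat, y - y_hat).max()
    e_ang = np.abs(theta - theta_hat).max()
    return e_pos, e_ang

def ec3_average_change_errors(dp, dp_hat, dth, dth_hat):
    """EC-3, Eqs. (12)-(13): summed change errors normalized by the total
    driven position change and orientation change."""
    e_dp = np.sum(np.abs(dp - dp_hat)) / np.sum(dp)
    e_dth = np.sum(np.abs(dth - dth_hat)) / np.sum(dth)
    return e_dp, e_dth
```

EC-3 is the criterion behind the "m/m" and "deg/rotation" units quoted in the results: it relates the accumulated estimation error to the distance and rotation actually driven.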

4. Results and4.Discussion
Results and Discussion
4.1. Simulation4.1. Simulation Results
Results
Figure
Figure 18 shows the18trajectories
shows the trajectories of different
of different odometry odometry
versionsversions
in twoinomnidirec-
two omnidirectional
tional parking maneuvers (DM1 and DM3 of Figure 15) in the simulation environment. Most of
parking maneuvers (DM1 and DM3 of Figure 15) in the simulation environment.
the proposed
Most of the proposed odometry
odometry versions
versions (except
(except versions
versions withwith only
only model-1,
model-1, namely111,
namely 111, 112 and
212) are able to estimate the vehicle position and orientation in the omnidirectional parking
112 and 212) are able to estimate the vehicle position and orientation in the omnidirec-
maneuvers.
tional parking maneuvers.
Figure 18. Simulation results of different odometry versions: (a) Trajectories in DM1; (b) Trajectories in DM3; (c) Legend.

To compare the accuracy of different odometry versions, the simulation results have been evaluated according to EC-2 and EC-3. The evaluation results are shown as a boxplot in Figure 19. In addition to the boxplot, the data used for the boxplot have been plotted in grey. Different odometry versions have been grouped according to the models used (the motion model is not included here, because the motion model is the basic model and necessary for every version). It should be noted that the ordinates (y-axis) are all logarithmic.

Figure 19a shows that the odometry versions equipped with more models provide better position estimations. The improvement is significant after the odometry is equipped with two models. The improvement does not look so obvious after using all three models. This can be explained through the diagrams created by the modular approach.

Figure 19. Simulation results of different odometry versions: (a) Maximum position errors; (b) Maximum orientation errors; (c) Average errors of position change; (d) Average errors of orientation change.

According to the motion model in Equation (1), v, β and ω are used to estimate the vehicle position. For the versions with only model-1, e.g., Version 111 in Figure 8, these three variables are not constrained by the state transition models with physical meaning. Thus, they can only be corrected by the sensors through the observation models. Figure 20a shows the correction paths of ω, v and β of Version 111. ω will be corrected by the yaw rate sensor and by the four wheel speed sensors through model-1, while v and β can only be corrected by the four wheel speed sensors through model-1. Although it is enough to determine two variables using model-1 (Equation (3)) four times, if the vehicle moves straight with the same wheel steering angles, the four equations will be the same. v and β cannot be determined uniquely. This leads to an unstable position estimation.

Figure 20. (a) Version 111: Correction paths for ω (blue), v and β (red). (b) Version 232: Correction paths for ω (blue), v (red) and β (green). (c) Version 213: Correction paths for ω (blue), v and β (red).
With the introduction of model-2, the state variable β can be limited in a reasonable range, and the position estimation becomes stable. Figure 20b illustrates the correction step of Version 232. Similarly, v and β in the versions with model-3 (e.g., Version 213 in Figure 12) can be corrected by the wheel speed sensors through model-1 and by the wheel steering angles through model-3. The correction paths are shown in Figure 20c. The versions with model-3 have a more accurate estimation of position than the versions with model-2, because the second term of Equation (6) in model-2 is ignored. The accuracy of this model is reduced. For this reason, the accuracy of the versions with three models has not been improved significantly.
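The motion model of Equation (1) is not reproduced in this excerpt; assuming the common kinematic form in which the position is propagated along the course angle θ + β and the orientation by the yaw rate ω, a one-step discrete Euler propagation can be sketched in Python (the discrete form and function name are assumptions; the symbols follow Appendix B):

```python
import math

def motion_step(x, y, theta, v, beta, omega, dt):
    """One Euler step of an assumed kinematic motion model: the position is
    propagated with velocity v along the course angle theta + beta, and the
    orientation with the yaw rate omega over the sample time dt."""
    x_new = x + v * math.cos(theta + beta) * dt
    y_new = y + v * math.sin(theta + beta) * dt
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new
```

The sketch makes the dependency structure of the discussion visible: the position estimate is driven by v, β and ω, while the orientation estimate depends on ω alone, which is why model-2 (constraining β) improves only the position estimation.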
Compared to the position estimation, Figure 19b shows that the versions with fewer models have a better orientation estimation. According to the motion model in Equation (1),
vehicle orientation is only determined by the state variable ω. In versions with model-1
and with model-1 and 2, ω is corrected by the yaw rate sensor and the wheel speed sensors
through model-1. Because model-2 does not affect the orientation estimation, they have the
same quality in orientation estimation. With the introduction of model-3, the estimation
accuracy decreases slightly. The reason may be that model-3 makes no positive contribution,
or that the covariance matrices are not optimal.
EC-3 in Figure 19c,d shows the same tendency as EC-2 in Figure 19a,b.
After the accuracy assessment, the robustness of different versions was examined.
This analysis was carried out according to Section 3.4. The results are shown in Figure 21.
It can be seen that failure detection is necessary if sensor failures occur. The most robust
Versions (213 and 233) can provide an absolute position error less than 0.3 m and an average
position change error less than 0.2 m/m. Their orientation errors are all less than 3 deg and
20 deg/rotation.
Figure 21. Simulation results of different odometry versions under sensor failures: (a) Maximum position errors; (b) Maximum orientation errors; (c) Maximum position errors; (d) Maximum orientation errors.

Focusing on the results with error detection, the more models that are included, the more robust the results will be. Even in orientation estimation, although the versions with model-1 and with model-1 and 2 are better than the versions with model-1 and 3 and model-1, 2 and 3 in most cases, there are outliers, which are less accurate.

Figure 22 describes the robustness in other aspects. The influences of failures on four versions are shown individually. These four versions use the state variables ε_i among the versions using the same models. They also have better robustness in their group. With regard to the position estimation, the versions with model-3 (213 and 233) can provide good estimation under all failures, while Version 212 (only with model-1) cannot withstand even one failure. Version 232 (with model-1 and 2) can still provide good estimation if one of the wheel speed sensors or the yaw rate sensor fails. However, the failure of the wheel steering angle sensors lets Version 232 give no more reasonable position estimation, because the error in the wheel steering signals will influence model-1 and model-2 directly.

Figure 22. Simulation results of four odometry versions under sensor failures: (a) Maximum position errors; (b) Maximum orientation errors; (c) Average position errors; (d) Average orientation errors.

Regarding orientation estimation, although the versions with model-1 (212 and 232) can suppress the vehicle orientation error by less than 1 deg and 5 deg/rotation in most cases, these two versions cannot withstand the yaw rate sensor failure. In contrast, the versions with model-3 (213 and 233) can provide stable orientation estimations even if the yaw rate sensor fails, because model-3 also helps to correct ω.

4.2. Real Driving Results

Figure 23 shows the trajectories of different odometry versions in two omnidirectional parking maneuvers in the real driving tests.
The figures for evaluation are structured as in Section 4.1. Instead of EC-2 and EC-3,
EC-1 has been used because a reference system is not available. Start and end positions
have been measured and used for evaluation.

Figure 23. Real driving results of different odometry versions: (a) Trajectories in DM1; (b) Trajectories in DM3; (c) Legend.
The results of position estimation from the real driving tests are basically consistent with the simulation results. As can be seen in Figure 24a, with the use of model-2 and model-3, the position estimation has improved significantly. However, the improvement of using model-3 is not reflected in the real driving tests. There are two possible reasons for this. Firstly, since EC-1 replaces EC-2 and EC-3 to evaluate the odometry, the error at the end position cannot reflect the estimation accuracy truly. Another reason may be the calibration error of the wheel steering angle, which affects the contribution of model-3 in calculating ε_i.

A reduction in the orientation accuracy with model-3 cannot be seen in the real driving tests; see Figure 24b. All the versions have almost the same accuracy. Because the first-used yaw rate sensor has a poor resolution of 0.1°/s, the negative contribution of model-3 to orientation estimation cannot be noticed.

Figure 24. Real driving results of different odometry versions using a yaw rate sensor with a resolution of 0.1°/s: (a) End position errors; (b) End orientation errors.
For this reason, a yaw rate sensor with a resolution of 0.01◦ /s has been implemented.
Figure 25 shows the results after the implementation of the new yaw rate sensor. We can
see a reduction in orientation accuracy in the versions with model-3.
Figures 26 and 27 show the results of robustness analysis in real driving tests, which match
the conclusions made in the simulation section.
Figure 25. Real driving results of different odometry versions using a yaw rate sensor with a resolution of 0.01°/s: (a) End position errors; (b) End orientation errors.

Figure 26. Real driving results of different odometry versions under sensor failures: (a) End position errors; (b) End orientation errors.

Figure 27. Real driving results of four odometry versions under sensor failures: (a) End position errors; (b) End orientation errors.
5. Conclusions
This paper investigates the possibility of using an odometry method to estimate the position and orientation of vehicles with increased maneuverability in omnidirectional parking maneuvers. Specific vehicle models were designed, and the sensors have been identified. A modular approach has been developed and implemented to help determine the essential elements for a state estimator. Different odometry versions were designed with
this approach, and validated both in the simulation environment and in real driving tests.
During the real driving tests, the position errors of Versions 213 and 233 are always
under 20 cm, and the angular error is under 0.3◦ . Since there are no comparable results for
omnidirectional parking maneuvers in the literature, we have compared the results with
the results in Ref. [13] for normal parking maneuvers. Compared to the average position
error of 47 cm and to the angular error of 1.9◦ in Ref. [13], the error level of odometry in
this paper is acceptable for parking maneuvers.
In addition, the following conclusions were made regarding different odometry versions:
• The greater the number of models used to constrain state variables, the higher the
estimation accuracy, theoretically. In reality, it also depends on the quality and effective
range of a model. Sensor quality is also a factor;
• The use of the state variables ε_i or the sensor signals δ_i as inputs in model-1 and model-2 plays no role in the estimation accuracy. If the state variables ε_i have the state transition model-3, the robustness of the odometry will be ensured even if the sensors are defective;
• The versions with model-1 and 2, model-1 and 3 and model-1, 2 and 3 have almost
the same accuracy. Versions 213 and 233 are the two most robust versions.
The advantages of the modular approach can be summarized as follows:
• It simplifies the process of designing a state estimator. The elements for a state estimator can be easily derived from the diagram of this method;
• The contribution of a certain model to accuracy and robustness can be predicted using
this approach;
• New models or new sensors can be easily implemented, and possible effects can
be analyzed;
• It provides a clear overview of all used models and sensors. It helps users to manage
different estimators.

6. Future Works
As mentioned in Section 1.2, odometry is not suitable to be used alone. Part of the future work is to investigate the contribution of the odometry in this paper during the fusion with GPS. Another work in progress is the sensitivity analysis for the parameters of the vehicle. For example, a larger wheel diameter may reduce the accuracy of odometry, because the wheel rotation speed will be reduced at the same vehicle speed, so that the wheel speed sensor provides a speed signal with low resolution.

Author Contributions: Conceptualization, C.H. and M.F.; methodology, C.H.; software, C.H.; validation,
C.H.; writing—original draft preparation, C.H.; writing—review and editing, M.F. and F.G.; project
administration, M.F.; All authors have read and agreed to the published version of the manuscript.
Funding: The authors are grateful to the Federal Ministry for Education and Research (FKZ,
16EMO0140) for funding the “Automated multi-directional chassis system based on wheel-selective
wheel drives—OmniSteer” joint project. The authors acknowledge support by the KIT-Publication
Fund of the Karlsruhe Institute of Technology.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author.

Acknowledgments: We would like to thank the Schaeffler Technologies AG & Co., KG, who was among the "OmniSteer" research project partners, for the provision of vehicle data and support in conducting test drives. Furthermore, we appreciate the valuable comments and suggestions of the anonymous reviewers for the improvement of this paper.

Conflicts of Interest: The authors declare no conflict of interest.
Appendix A. Reference Position of Real Driving Tests

To measure the marked points on the ground, a coordinate system can be drawn on the ground. However, it is difficult to guarantee an exact right angle between the X- and Y-axes. In order to get the marked points on the ground accurately, the measurement was carried out using the method in Figure A1. The principle of this method is to measure the distance between two points instead of measuring the distance between one point and one line, to avoid angular error.

Figure A1. Method to measure the reference position of the vehicle.

Before starting the measurement, two reference points (Ref 1 and Ref 2) have to be defined. The blue points (P_F and P_R) are the points to be measured. Only the distances between the blue point and the reference points have to be determined; for point P_F they are d_F1 and d_F2. The coordinates of P_F can be obtained by solving the following equations:

$\left(P_{Fx}-0\right)^{2}+\left(P_{Fy}-0\right)^{2}=d_{F1}^{2}, \qquad \left(P_{Fx}-d_{R}\right)^{2}+\left(P_{Fy}-0\right)^{2}=d_{F2}^{2}$ (A1)

Please note that all points to be measured must lie on one side of the connecting line of both reference points.
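With Ref 1 placed at the origin and Ref 2 at (d_R, 0), Equation (A1) has a closed-form solution; a minimal sketch (the function name is illustrative, and the point is assumed to lie on the positive-y side of the line through the reference points, as noted above):

```python
import math

def point_from_distances(d1, d2, d_ref):
    """Solve Equation (A1): Ref 1 at (0, 0), Ref 2 at (d_ref, 0).
    d1 and d2 are the measured distances from the marked point to Ref 1
    and Ref 2. Subtracting the two circle equations gives x in closed form;
    y follows from the first circle (positive branch assumed)."""
    x = (d1**2 - d2**2 + d_ref**2) / (2.0 * d_ref)
    y = math.sqrt(max(d1**2 - x**2, 0.0))
    return x, y
```

The one-sidedness restriction in the text corresponds to always taking the positive square root for y.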
Appendix B. Nomenclature

Table A1. Nomenclature for symbols.

Symbol      Description
(x, y)      Vehicle position in world coordinate system
θ           Yaw angle of vehicle
v           Vehicle velocity
v_i         Wheel velocity
v_i,x       Wheel velocity along vehicle longitudinal axis
v_i,y       Wheel velocity along vehicle lateral direction
β           Side slip angle
ω           Yaw rate
δ_i         Wheel steering angle
ε_i         Wheel velocity angle
r_i,x       Distance between tire–road contact point and vehicle center of gravity in vehicle longitudinal direction
r_i,y       Distance between tire–road contact point and vehicle center of gravity in vehicle lateral direction
Δt          Sample time

Table A2. Nomenclature for subscripts.

Subscript Description
k−1 Last time step
k Actual time step
i Position of wheel, i = fl, fr, rl, rr
O Sensor signal used as observation
I Sensor signal used as input value

References
1. Verordnung des Bundesministers für Verkehr, Innovation und Technologie, Mit der die Automatisiertes Fahren Verordnung
Geändert Wird (1.Novelle zur AutomatFahrV). Available online: https://rdb.manz.at/document/ris.c.BGBl__II_Nr__66_2019
(accessed on 23 December 2020).
2. Klein, M.; Mihailescu, A.; Hesse, L.; Eckstein, L. Einzelradlenkung des Forschungsfahrzeugs Speed E. ATZ Automob. Z. 2013, 115,
782–787. [CrossRef]
3. Bünte, T.; Ho, L.M.; Satzger, C.; Brembeck, J. Zentrale Fahrdynamikregelung der Robotischen Forschungsplattform RoboMobil.
ATZ Elektron. 2014, 9, 72–79. [CrossRef]
4. Nees, D.; Altherr, J.; Mayer, M.P.; Frey, M.; Buchwald, S.; Kautzmann, P. OmniSteer—Multidirectional chassis system based on
wheel-individual steering. In 10th International Munich Chassis Symposium 2019: Chassis. Tech Plus; Pfeffer, P.E., Ed.; Springer:
Wiesbaden, Germany, 2020; pp. 531–547, ISBN 978-3-658-26434-5.
5. Zekavat, R.; Buehrer, R.M. Handbook of Position Location. Theory, Practice, and Advances, 2nd ed.; John Wiley & Sons Incorporated:
Newark, NJ, USA, 2018; ISBN 9781119434580.
6. Winner, H.; Hakuli, S.; Lotz, F.; Singer, C. Handbuch Fahrerassistenzsysteme; Springer Fachmedien Wiesbaden: Wiesbaden, Germany,
2015; ISBN 978-3-658-05733-6.
7. Scherzinger, B. Precise robust positioning with inertially aided RTK. Navigation 2006, 53, 73–83. [CrossRef]
8. Vivacqua, R.; Vassallo, R.; Martins, F. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving
Application. Sensors 2017, 17, 2359. [CrossRef] [PubMed]
9. Aqel, M.O.A.; Marhaban, M.H.; Saripan, M.I.; Ismail, N.B. Review of visual odometry: Types, approaches, challenges, and
applications. SpringerPlus 2016, 5, 1897. [CrossRef] [PubMed]
10. Hertzberg, J.; Lingemann, K.; Nüchter, A. Mobile Roboter. In Eine Einführung aus Sicht der Informatik; Springer: Berlin/Heidelberg,
Germany, 2012; ISBN 978-3-642-01726-1.
11. Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration; Springer:
Berlin/Heidelberg, Germany, 2013; ISBN 978-3-642-30466-8.
12. Groves, P.D. Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition [Book review]. IEEE Aerosp.
Electron. Syst. Mag. 2015, 30, 26–27. [CrossRef]
13. Borenstein, J.; Everett, H.R.; Feng, L.; Wehe, D. Mobile robot positioning: Sensors and techniques. J. Robot. Syst. 1997, 14, 231–249.
[CrossRef]
14. Berntorp, K. Particle filter for combined wheel-slip and vehicle-motion estimation. In Proceedings of the American Control
Conference (ACC), Chicago, IL, USA, 1–3 July 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5414–5419, ISBN 978-1-4799-8684-2.
15. Brunker, A.; Wohlgemuth, T.; Frey, M.; Gauterin, F. GNSS-shortages-resistant and self-adaptive rear axle kinematic parameter
estimator (SA-RAKPE). In Proceedings of the 28th IEEE Intelligent Vehicles Symposium, Redondo Beach, CA, USA, 11–14 June 2017;
IEEE: Piscataway, NJ, USA, 2017; pp. 456–461, ISBN 978-1-5090-4804-5.
16. Brunker, A.; Wohlgemuth, T.; Frey, M.; Gauterin, F. Dual-Bayes Localization Filter Extension for Safeguarding in the Case of
Uncertain Direction Signals. Sensors 2018, 18, 3539. [CrossRef] [PubMed]
17. Larsen, T.D.; Hansen, K.L.; Andersen, N.A.; Ravn, O. Design of Kalman filters for mobile robots; evaluation of the kinematic
and odometric approach. In Proceedings of the 1999 IEEE International Conference on Control Applications Held Together
with IEEE International Symposium on Computer Aided Control System Design, Kohala Coast, HI, USA, 22–27 August 1999;
IEEE: Piscataway, NJ, USA, 1999; pp. 1021–1026, ISBN 0-7803-5446-X.
18. Lee, K.; Chung, W. Calibration of kinematic parameters of a Car-Like Mobile Robot to improve odometry accuracy. In Proceedings of
the 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 19–23 May 2008; pp. 2546–2551,
ISBN 978-1-4244-1646-2.
19. Kochem, M.; Neddenriep, R.; Isermann, R.; Wagner, N.; Hamann, C.D. Accurate local vehicle dead-reckoning for a parking
assistance system. In Proceedings of the 2002 American Control Conference, Anchorage, AK, USA, 8–10 May 2002; IEEE:
Piscataway, NJ, USA, 2002; Volume 5, pp. 4297–4302, ISBN 0-7803-7298-0.
20. Tham, Y.K.; Wang, H.; Teoh, E.K. Adaptive state estimation for 4-wheel steerable industrial vehicles. In Proceedings of the
1998 37th IEEE Conference on Decision and Control, Tampa, FL, USA, 16–18 December 1998; IEEE: Piscataway, NJ, USA, 1998;
pp. 4509–4514, ISBN 0-7803-4394-8.
21. Brunker, A.; Wohlgemuth, T.; Frey, M.; Gauterin, F. Odometry 2.0: A Slip-Adaptive EIF-Based Four-Wheel-Odometry Model for
Parking. IEEE Trans. Intell. Veh. 2019, 4, 114–126. [CrossRef]
22. Chung, H.; Ojeda, L.; Borenstein, J. Accurate mobile robot dead-reckoning with a precision-calibrated fiber-optic gyroscope.
IEEE Trans. Robot. Automat. 2001, 17, 80–84. [CrossRef]
23. Marco, V.R.; Kalkkuhl, J.; Seel, T. Nonlinear observer with observability-based parameter adaptation for vehicle motion estimation.
IFAC-PapersOnLine 2018, 51, 60–65. [CrossRef]
24. Rhudy, M.; Gu, Y. Understanding Nonlinear Kalman Filters Part 1: Selection of EKF or UKF. Interact. Robot. Lett. 2013.
Available online: https://yugu.faculty.wvu.edu/research/interactive-robotics-letters/understanding-nonlinear-kalman-filters-
part-i (accessed on 23 December 2020).
25. Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of the Signal Processing Sensor
Fusion, and Target Recognition VI, Orlando, FL, USA, 28 July 1997; pp. 182–193.
26. Welch, G.; Bishop, G. An Introduction to the Kalman Filter. In Proceedings of the ACM SIGGRAPH, Los Angeles, CA, USA,
12–17 August 2001; ACM Press, Addison-Wesley: New York, NY, USA, 2001.
27. Mitschke, M.; Wallentowitz, H. Dynamik der Kraftfahrzeuge, Vierte, Neubearbeitete Auflage; Springer: Berlin/Heidelberg, Germany,
2004; ISBN 978-3-662-06802-1.
28. Frey, M.; Han, C.; Rügamer, D.; Schneider, D. Automatisiertes Mehrdirektionales Fahrwerksystem auf Basis Radselektiver
Radantriebe (OmniSteer). Schlussbericht zum Forschungsvorhaben: Projektlaufzeit: 01.01.2016-31.03.2019. Available online:
https://www.tib.eu/suchen/id/TIBKAT:1687318824/ (accessed on 23 December 2020).