Copyright © IFAC Control in Transportation Systems,
Braunschweig, Germany, 2000

NAVIGATION AND CONTROL OF AN AUTONOMOUS VEHICLE

Andreas Simon * Ina Sohnitz * Jan C. Becker * Walter Schumacher *

* Institute of Control Engineering
Technical University Braunschweig
Hans-Sommer-Str. 66
38106 Braunschweig, Germany

Abstract: This paper describes the fusion of sensor data for the navigation of an autonomous vehicle as well as two lateral control concepts to track the vehicle along a desired path. The fusion of navigation data is based on information provided by multiple object-detecting sensors. The object data is fused to increase the accuracy and to obtain the vehicle's state from the relative movement w.r.t. the objects. The presented lateral control methods are an LQG/H2-design and an input-output linearizing algorithm. Both control schemes were implemented on a test vehicle.
Copyright © 2000 IFAC

Keywords: Autonomous vehicles, multisensor integration, sensor fusion, robust control, LQG control, least squares estimation, navigation systems, state estimation, tracking

1. INTRODUCTION

Autonomous vehicles have been under research for the last thirty years, following a number of different objectives. The project partly described in this paper aimed at the automation of the durability tests of commercial road vehicles (Weisser et al., 1998).

The purpose of the decentralized control system of the autonomous vehicle (Becker et al., 1998) is to manage all guidance and control tasks. It uses information from a number of different sensors to determine a feasible motion of the vehicle (fig. 1).

Fig. 1. Structure of the vehicle guidance system

These sensors can be classified into ego-position-detecting sensors and object-detecting sensors. A stereo vision sensor (Goldbeck et al., 1998) detects the road lane markings and outputs the lane boundaries as a third-order polynomial. Objects in the vehicle environment are detected by the stereo vision sensor, four laserscanners and a radar sensor, and are combined by a sensor fusion module (Becker, 1999).
A navigation data fusion module combines sensor data to estimate the ego-position of the vehicle. Input to this module is the data of a DGPS based inertial measurement system and information about objects and lane markings in the vehicle environment. The subsequent motion planning module generates the desired motion, containing the desired path and velocity of the vehicle. The task of the vehicle control system is to generate a steering angle signal which ensures the robust tracking of the vehicle along the desired path. This signal is transmitted to a robot driver, which is able to turn the steering wheel, operate the accelerator, brake and clutch pedals and to change gears. The desired velocity is also sent to the robot, as the longitudinal control to adjust the velocity is implemented in the control unit of the robot.

2. MULTISENSOR CONCEPT AND SENSOR FUSION

2.1 Multisensor Concept

In this section, the multisensor fusion system of the autonomous vehicle is presented. The objectives of this system are to create a unified description for the target data in this heterogeneous sensor environment, to accomplish an overall reduction of the large amount of sensor data for the subsequent systems of the autonomous vehicle, and to improve the accuracy of the estimated target states by combining information delivered by different sensors.

The autonomous vehicle is equipped with a stereo vision sensor, four laserscanners and a radar sensor for the detection and the tracking of obstacles that could potentially collide with the vehicle. The detection ranges of these sensors overlap in a large area (fig. 2) in front of the vehicle, which is obviously the most important area for the vehicle path planning. For safety reasons this task is performed by a multitude of heterogeneous sensors.

Fig. 2. Range of the object-detecting sensors

This multisensor concept includes the following sensors for object detection:

• A stereo vision sensor is mounted behind the front screen. It is directed forwards with a viewing angle of 30° horizontally and 23° vertically and is aimed at mid and long range obstacles. This sensor simultaneously performs the tasks of object detection and lane recognition.
• Two laserscanners are mounted at the left and right ends of the front bumper. Each sensor covers a range of 270°. The sensors overlap in the most important front viewing area. Further, a laserscanner with three laser beams at different angles is mounted in the center of the front and the rear bumper, respectively. These laserscanners are more robust to dynamic disturbances than single-beam sensors. Thus the laserscanner system by itself provides a total look-around. The laserscanners are designed for short, mid and long range.
• A long range radar sensor with a small viewing angle is mounted on the front bumper.

It can be seen that the degree of redundancy increases with the importance of the observed area. Different wavelengths and signal analysis principles are used by the sensors.

2.2 Sensor Data Fusion

Since the dissimilar sensors work independently and unsynchronized from each other, and target tracking is performed by each sensor individually, the data received from the sensors has to be time-synchronized first in order to be ready for further processing. This is called data alignment. The prediction equations of a Kalman filter are implemented to predict sensor and covariance data from the sensor measurement time to the computing time of the fusion module.

The data set of all sensors consists of position and kinematic variables. Each sensor uses a target description that fits best to its measurement principles. The vision system describes an object by a cuboid that totally surrounds the detected target, with its sides parallel to the axes of the sensor. In contrast, the laserscanners used in this project describe an object by three characteristic points. These points consist of the point of the detected object closest to the sensor as well as of the outermost points to the left and right of the object. Such a compact object representation is necessary to guarantee real-time communication of the sensors with the fusion system.

The next task in this multisensor environment is to decide whether observations from different sensors actually represent the same target. This problem is called data association. Since the target tracking is done by each sensor individually, the task of the data association unit in this sensor fusion module is to decide whether tracks from the different sensors actually represent the same target.
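The data alignment step can be sketched as follows. A one-dimensional constant-velocity target model and the process-noise handling are illustrative assumptions for this sketch, not the actual state layout of the sensors:

```python
# Kalman filter prediction used for data alignment: propagate a
# sensor's state estimate and covariance from the sensor measurement
# time to the computing time of the fusion module. A 1-D
# constant-velocity target model is assumed for illustration.

def predict(state, cov, dt, q=0.1):
    """Propagate [position, velocity] and its 2x2 covariance by dt."""
    p, v = state
    state_pred = [p + v * dt, v]
    # P' = F P F^T + Q  with  F = [[1, dt], [0, 1]] and Q ~ q*dt on the diagonal
    p00, p01, p10, p11 = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    cov_pred = [
        [p00 + dt * (p10 + p01) + dt * dt * p11 + q * dt, p01 + dt * p11],
        [p10 + dt * p11, p11 + q * dt],
    ]
    return state_pred, cov_pred

# A track measured 40 ms before the fusion cycle is aligned to it:
state, cov = predict([12.0, 3.0], [[0.5, 0.0], [0.0, 0.2]], dt=0.04)
```

Note that the covariance grows during the prediction, so the fusion module can weight stale sensor data accordingly.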
The sensors are also mounted at different locations on the vehicle and thus have different viewing angles and perspectives onto the road and the observed obstacles. Therefore, further calculations in addition to the gating and target association algorithms are implemented.

The final task is the target state estimation or track data fusion. Here the corresponding targets are combined, after it has been decided that two tracks actually represent the same target, using Kalman filter based algorithms such as measurement fusion or the information filter.

Since the combined sensors may detect up to 90 objects during one sample period, a major constraint for the implementation of the fusion algorithms is that the system has to work online and in real time. Therefore good performance is just as important as a computationally efficient implementation of the algorithms.

The sensor fusion achieves a large reduction of the vast amount of data from the object-detecting sensors of the vehicle as well as an improvement of the accuracy by using the synergy of combining information. The output of the object sensor fusion is therefore well suited to be fused with the position sensors of the vehicle to improve the estimation of the vehicle ego-position.

3. NAVIGATION DATA FUSION

3.1 Overview

The objective of the fusion of navigation data is the estimation of the absolute vehicle position (x, y) and orientation ψ in world coordinates (fig. 3). This task is managed by combining information of a number of different sensors such as a DGPS based inertial measurement unit, object-detecting sensors and a lane marking sensor based on stereo vision. The output of the object-detecting sensors is preprocessed by the sensor fusion module as depicted in fig. 1. In addition, the current steering angle δ and the velocity of the rear axle v^F are used.

Fig. 3. Relevant state quantities for the navigation fusion

In fig. 4 the structure of the navigation data fusion is presented. The availability of the navigation sensor data may be limited, e.g. due to missing lane markings and laserscanner objects or due to an occlusion of GPS satellites. The only sensors with a constant data flow used in this project are the sensors of the robot driver, which measure δ and v^F. Therefore, the core part of this structure is the dead reckoning navigation supported by a simple vehicle model, which is fed with the output of the robot driver, v^F and δ. The main disadvantage of a pure dead reckoning navigation would be a rapid drift of the estimated vehicle position. To reduce this effect, the input of this navigation is supported by evaluating the relative movement of objects in the vehicle environment as described below.

Fig. 4. Structure of the navigation data fusion

The vehicle position estimated by the dead reckoning navigation is also supported by considering the trajectories of detected objects and further information of lane boundaries. This information is provided by the sensors described above, e.g. by a vision sensor and laserscanners. However, the remaining drift of this estimation can only be stabilized over a long period by using an absolute position sensor such as a DGPS sensor. In this paper the module to evaluate the object movements is described in detail.

3.2 Object Movement Evaluation

The objective of this module is to increase the confidence of the velocity v^F and the angular yaw velocity ψ̇ of the vehicle by considering information about the relative motion of objects in the vehicle environment.
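The dead reckoning core described above can be sketched with a kinematic single-track model driven by v^F and δ; the wheelbase value and the simple Euler integration are assumptions made for this illustration, not the authors' implementation:

```python
# Dead-reckoning sketch: the pose (x, y, psi) is integrated from the
# robot driver's measurements v_F (rear-axle speed) and delta
# (steering angle) using a kinematic single-track model.
import math

def dead_reckon(pose, v_f, delta, dt, wheelbase=3.0):
    """One Euler step of the pose from speed and steering angle."""
    x, y, psi = pose
    psi_dot = v_f / wheelbase * math.tan(delta)   # yaw rate of the model
    return (x + v_f * math.cos(psi) * dt,
            y + v_f * math.sin(psi) * dt,
            psi + psi_dot * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):            # 1 s of driving straight at 10 m/s
    pose = dead_reckon(pose, 10.0, 0.0, 0.01)
```

Without the corrections described below, any bias in v^F or δ accumulates directly in the integrated pose, which is the drift the object movement evaluation is meant to reduce.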
According to fig. 5 the relative movement of an obstacle v^O is a superposition of

• the motion of the obstacle over ground v^WO
• and the vehicle motion (v^F, ψ̇).

Fig. 5. Relative object motion

The relative velocity of each object in the vehicle environment can be expressed by

v^O_x = p^O_y · ψ̇ + v^WO_x − v^F   (1)
v^O_y = −p^O_x · ψ̇ + v^WO_y   (2)

with

p^O_y · ψ̇ = a ψ̇ cos θ   (3)
p^O_x · ψ̇ = a ψ̇ sin θ.   (4)

In the following the movements of the objects are assumed to be uncorrelated. If n objects are observed, a system of m = 2n linear equations for the quantities v^F and ψ̇ is given by

v^O_1x = p^O_1y · ψ̇ + v^WO_1x − v^F
v^O_2x = p^O_2y · ψ̇ + v^WO_2x − v^F
   ⋮                                  (5)
v^O_(n−1)x = p^O_(n−1)y · ψ̇ + v^WO_(n−1)x − v^F
v^O_nx = p^O_ny · ψ̇ + v^WO_nx − v^F

and

v^O_1y = −p^O_1x · ψ̇ + v^WO_1y
v^O_2y = −p^O_2x · ψ̇ + v^WO_2y
   ⋮                                  (6)
v^O_(n−1)y = −p^O_(n−1)x · ψ̇ + v^WO_(n−1)y
v^O_ny = −p^O_nx · ψ̇ + v^WO_ny

or in vector notation by

v^O_x = p^O_y · ψ̇ + v^WO_x − v^F   (7)
v^O_y = −p^O_x · ψ̇ + v^WO_y.   (8)

It has to be taken into account that all quantities used are uncertain. Their distribution is considered to be Gaussian and can be characterized by their mean and variance. Since the object motion over ground v^WO is unknown, it has to be estimated for objects which are detected again. However, for new objects v^WO is assumed to be zero. To accomplish this, the remaining equation error of the solution of eq. 5 and 6 in the previous measurement interval is interpreted as the object motion v^WO. The approach to solve eq. 5 and 6 for v^F and ψ̇ in the measurement interval k is divided into three steps:

• Estimating the absolute object motion v^WO based on the previous estimation interval k−1
• Determining the new angular yaw velocity ψ̇(k) using eq. 5 and 6 and considering the last velocity v^F(k−1)
• Determining the new vehicle velocity v^F(k) using eq. 5 and considering ψ̇(k)

To determine ψ̇ and v^F, the equation system consisting of eq. 5 and 6 is solved by applying the least squares method. Before using this method the equation system has to be expressed in the following representation

y = C x + e,   (9)

where e is the equation error to be minimized. The error vector e can be interpreted as an observer error corresponding to the observer vector y. All known uncertainty in eq. 9 is represented by e. In general, if e is correlated and unbiased, the solution of eq. 9 is

x̂ = (C^T Σ^−1 C)^−1 C^T Σ^−1 y,   (10)

where Σ is the covariance matrix Cov(e) of the observer error e. The quality of this estimation of x can be expressed as the covariance of the estimation error x̃ = x − x̂. For deriving the estimation error the measurement vector y is split into the true observer value and its measurement error: y = y_0 + e. With these considerations, the estimation error is given by

x̃ = −(C^T Σ^−1 C)^−1 C^T Σ^−1 e   (11)
Cov(x̃) = (C^T Σ^−1 C)^−1.   (12)

Since all uncertainty is represented by e, the matrix C and the observer vector y contain only measurement data without any information about uncertainty. For the determination of ψ̇ the observer vector y is a composition of the measured object velocities v^O and the vehicle velocity v^F. The matrix C degenerates to a column vector containing the object positions:

y = [v^O_1x + v^F, v^O_2x + v^F, …, v^O_(n−1)x + v^F, v^O_nx + v^F, v^O_1y, v^O_2y, …, v^O_(n−1)y, v^O_ny]^T
C = [p^O_1y, p^O_2y, …, p^O_(n−1)y, p^O_ny, −p^O_1x, −p^O_2x, …, −p^O_(n−1)x, −p^O_nx]^T   (13)
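For the estimation problems used here, C degenerates to a column vector (eq. 13), so that eq. 10 with a diagonal covariance matrix reduces to a variance-weighted average. A minimal sketch, assuming a diagonal Σ:

```python
# Weighted least-squares solution of y = C x + e (eq. 10) for the
# scalar case, i.e. C is a column vector and Sigma is assumed
# diagonal so the matrix inverse collapses to scalar sums.

def gls_scalar(y, c, var):
    """Estimate scalar x and its variance from y_i = c_i * x + e_i."""
    num = sum(ci * yi / vi for ci, yi, vi in zip(c, y, var))
    den = sum(ci * ci / vi for ci, vi in zip(c, var))
    x_hat = num / den
    cov_x = 1.0 / den          # eq. 12 for the scalar case
    return x_hat, cov_x

# Noise-free check: y_i = p_i * psi_dot with psi_dot = 0.2 rad/s
p = [5.0, -3.0, 8.0]
y = [pi * 0.2 for pi in p]
x_hat, cov_x = gls_scalar(y, p, [0.01, 0.02, 0.01])
```

The returned variance (eq. 12) shrinks as more objects contribute, which is exactly the confidence gain the module is designed to provide.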
In addition, for solving eq. 9, y has to be completed with the absolute object motion v^WO from the previous interval, which is subtracted elementwise:

y_c = y − v^WO   (14)

The equation error e of eq. 9 represents the uncertainty of all quantities, which can be described by their variances. e is characterized by

E{e} = 0   (15)
Σ = Cov{e} = Cov{y − Cx}
  = Cov{y} + Cov{Cx} + Cov{v^WO},   (16)

where v^WO is the absolute object movement derived from the previous estimation interval k−1.

If the measurements of the object positions are considered to be uncorrelated, only the primary diagonal of Cov{Cx} is filled:

Cov{Cx}_ii = Var{p^O_iy · ψ̇} for 0 < i ≤ n
Cov{Cx}_ii = Var{p^O_ix · ψ̇} for n < i ≤ 2n   (17)

The products Var{p^O_i · ψ̇} can be expanded to:

Var{p^O_i · ψ̇} = σ²_pi · σ²_ψ̇ + (p^O_i)² · σ²_ψ̇ + ψ̇² · σ²_pi   (18)

In general, the covariance matrix Cov{y} can be expressed by its submatrices Y_1, Y_2 and Y_3,   (19)

where Y_1 is the covariance matrix of v^O_x + v^F and Y_3 is the covariance matrix of v^O_y. Since the measurement errors of v^O_x and v^O_y can approximately be considered to be uncorrelated, the submatrix Y_2 disappears. Thus, the submatrices of Cov{y} are the diagonal blocks:

Cov{y} = diag(Y_1, Y_3)   (20)

Referring to eq. 16, 17 and 20 the covariance matrix Σ of the observer error e is obtained by:

Σ = Cov{e} = diag(Σ_1, Σ_2) + Cov{v^WO}   (21)

where the submatrices are

Σ_1 = Y_1 + Cov{Cx}, 0 < i ≤ n
Σ_2 = Y_3 + Cov{Cx}, n < i ≤ 2n   (22)

Finally, the new estimation of the angular yaw velocity ψ̇(k) can be determined using eq. 10 and 12. For this purpose the previous estimation of the vehicle velocity v^F(k−1) and the estimated absolute object movement v^WO are used. For deriving the new estimation of the vehicle velocity v^F(k), eq. 5 must be used together with ψ̇(k). Eq. 13 is transformed to:

y_v = [v^O_1x − p^O_1y ψ̇ − v^WO_1x, …, v^O_nx − p^O_ny ψ̇ − v^WO_nx]^T
C_v = [−1 −1 … −1 −1]^T   (23)

The determination of the covariance matrix of the observer error has to be carried out similarly to eq. 16:

Σ_v = Cov{e_v} = Cov{y_v}
    = Cov{v^O_x} + Cov{p^O_y ψ̇} + Cov{v^WO}
    = Y_1 − σ²_vF · 1 + Cov{p^O_y ψ̇} + Cov{v^WO}   (24)

with the n×n matrix of ones

1 = [1 ⋯ 1; ⋮ ⋱ ⋮; 1 ⋯ 1].   (25)

The estimation of v^F(k) using ψ̇(k) and the current measurement data is divided into four steps:

• update of Cov{Cx} in eq. 17 with the new estimation ψ̇(k)
• determining y_v using v^WO (eq. 23)
• forming the corresponding covariance matrix Σ_v (eq. 24)
• applying eq. 10 and eq. 12 for estimating v^F(k) and σ²_vF(k)

If necessary, the estimation of v^F and ψ̇ can be repeated iteratively until the changes of these quantities fall below a certain lower bound.

The estimation described above is performed in every measurement cycle and is currently being tested.
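The estimation steps can be sketched as follows, assuming static objects (v^WO = 0) and plain, unweighted least squares in place of the covariance weighting of eq. 10, to keep the example short:

```python
# Sketch of one measurement interval k of the object movement
# evaluation: first the yaw rate from eqs. 5 and 6 with the last
# velocity, then the vehicle velocity from eq. 5 with the new yaw
# rate. Static objects (v_WO = 0) are assumed here.

def estimate_motion(objs, v_f_prev):
    """objs: list of (p_x, p_y, v_x, v_y) per-object relative data."""
    # Step 2: yaw rate, stacking the x-equations (c_i = p_y) and the
    # y-equations (c_i = -p_x) into one least-squares problem.
    num = sum(py * (vx + v_f_prev) for px, py, vx, vy in objs)
    num += sum(-px * vy for px, py, vx, vy in objs)
    den = sum(px * px + py * py for px, py, vx, vy in objs)
    psi_dot = num / den
    # Step 3: vehicle velocity from eq. 5, using the new yaw rate.
    v_f = sum(py * psi_dot - vx for px, py, vx, vy in objs) / len(objs)
    return psi_dot, v_f

# Synthetic data: v_F = 8 m/s, psi_dot = 0.1 rad/s, static objects.
truth = [(10.0, 2.0), (15.0, -4.0), (6.0, 5.0)]
objs = [(px, py, py * 0.1 - 8.0, -px * 0.1) for px, py in truth]
psi_dot, v_f = estimate_motion(objs, v_f_prev=8.0)
```

In the paper's scheme these two steps would additionally use the covariance matrices of eqs. 21 and 24 as weights and could be iterated until ψ̇ and v^F converge.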
The module uses sensor information about the vehicle environment which is primarily used for planning driving maneuvers, in order to increase the confidence of the velocity and the angular yaw velocity of the vehicle. The vehicle environment is not observed over a period of time, i.e. no object tracking information is used. This is the task of another module (fig. 4).

4. LATERAL CONTROL OF AN AUTONOMOUS VEHICLE

4.1 Overview

The control of the lateral movement of autonomous road vehicles has been subject to intense research over the last decades. Most of the studies were focused on the steering of highway vehicles, where a smooth road surface and curves with large radii are to be expected. The vehicles were supposed to drive at speeds up to 100 kph. In this section the lateral control module of an automated vehicle is described, which was developed to drive on a special durability test track. There, the road surface varies from rough asphalt over humps to deep potholes, the radii of the curves reach minimum values of 10 meters and the road is only about 4 meters wide. The vehicles drive at changing velocities between 15 and 60 kph.

The objective of the lateral control module is to ensure an accurate tracking performance along the desired path despite the narrow curves, the different adhesion conditions of the test track and the varying velocity.

The desired motion of the vehicle, consisting of the desired path and velocity, is provided by the motion planning module described in (Simon and Becker, 1999). This concept differs from most of the other projects dealing with automated vehicle guidance, where the vehicles are driven based on relative information. The relative information is usually derived by a vision or magnetic sensor which follows the lane markers. In this project, the absolute position and orientation of the vehicle are measured by a very precise navigation system, fusing information from a DGPS sensor and an inertial platform. The lateral tracking error is then calculated as the distance between the desired path and the measured position, and the control task is to minimize this error. In the following subsections a linear and a nonlinear control scheme will be introduced and simulation and practical results will be presented.

4.2 LQG/H2-Control

The H2-design was chosen, as it proved in simulations to reject disturbances better than a PIDT2-controller and to react faster than an H∞-optimal controller. The control design concept presented here differs in details from the concept in (Sohnitz and Schwarze, 1999). In addition to the lateral error between desired and driven path, the yaw rate was used as a second measurement input to improve the accuracy of the estimation of the state variables. Furthermore, the weighting functions were modified to achieve integral control behaviour directly from the 2-norm optimal design process, so that a second, suboptimal design could be neglected.

The vehicle's lateral dynamics G can be expressed in state space form as a 3-dof fifth order linear model (see eq. (26)), where the input u is the steering angle, the external disturbance z is the curvature of the desired path and the output y contains the lateral tracking error e_lat and the yaw rate ψ̇. The state variables are

x = [β, ψ̇, δ, e_lt, e_lat]^T,   (27)

where β is the side slip angle, ψ̇ is the yaw rate, δ is the angle at the front wheels and e_lt describes the error between the vehicle's course angle and the course angle of the road. The velocity v is not included in the state variables, but is regarded as one of the model parameters instead. This is possible because the velocity changes only slightly compared to the lateral dynamics. The other model parameters are the mass m, the yaw moment of inertia J_zz, the cornering stiffnesses of the front and rear wheels c_f and c_r, the distances between the center of gravity and the front and rear axle l_f and l_r, the gain V_st and the time constant T_st of the steering actuator, and the air resistance factor L.

Eq. (26) can be separated into the system G_u

ẋ = A x + b_u u
y = C x,   (28)

to design the feedback control law, and the system G_z

ẋ = A x + b_z z
y = C x,   (29)

to find a feedforward control to improve the tracking performance. Based on this model a 2-norm optimal controller K_H2 guaranteeing steady-state accuracy can be designed. Therefore, the function h2syn() from MATLAB's μ-Analysis and Synthesis Toolbox is applied to the interconnected system G_H2 shown in fig. 6. The system consists of the plant G_u and the weighting functions W_u and W_y.
The linear model (26) is, written out row by row,

dβ/dt = ((L v² − (c_f + c_r))/(m v)) β + ((c_r l_r − c_f l_f)/(m v²) − 1) ψ̇ + (c_f/(m v)) δ
dψ̇/dt = ((c_r l_r − c_f l_f)/J_zz) β − ((c_r l_r² + c_f l_f²)/(J_zz v)) ψ̇ + (c_f l_f/J_zz) δ
dδ/dt = −(1/T_st) δ + (V_st/T_st) u   (26)
de_lt/dt = −((L v² − (c_f + c_r))/(m v)) β − ((c_r l_r − c_f l_f)/(m v²)) ψ̇ − (c_f/(m v)) δ + v z
de_lat/dt = v e_lt

y = [e_lat, ψ̇]^T = [0 0 0 0 1; 0 1 0 0 0] x

The function h2syn() calculates a controller K_H2 which stabilizes the system G_H2 and minimizes the 2-norm of the closed-loop transfer function from input [d_u, d_m] to output [v_u, v_y]. The 2-norm optimal control law is (Doyle et al., 1989)

d x̂/dt = (A + BF + LC) x̂ − L y   (30)
u = F x̂,   (31)

where x̂ is the estimated state vector and where F and L correspond to the solutions of the Controller and Filter Algebraic Riccati Equation, respectively. The resulting closed loop system consisting of K_H2 and G_H2 can be presented in linear fractional transformation (LFT) form, as shown in fig. 6.

Fig. 6. LFT on K_H2

The first order weighting function W_y is used to achieve the integral behaviour of the controller:

W_y = w_y/(s + p_w)   (32)

This is done by placing the real pole −p_w of W_y very close to the imaginary axis, demanding a very high gain of the controller at low frequencies. (Choosing a pole directly at zero would not yield any result, as no controller could stabilize this pole outside of the closed loop.) The gain w_y and the scalar weighting function W_u can be used to modify the overall performance of the controller, where a compromise between fast tracking and effective disturbance rejection has to be made. The results of the design iterations for a Volkswagen Caravelle at an operating velocity of 10 kph can be seen in figs. 7 to 9. The controller derived from the last design process shows good tracking dynamics (fig. 9) while having the best phase margin of about 30° (fig. 7) and a satisfactory disturbance rejection (fig. 8). The step and the frequency responses were all calculated from the reference input of e_lat to the according outputs.

Fig. 7. Bode plot of K_H2 G_u
Fig. 8. Bode plot of K_H2 (1 + K_H2 G_u)^−1
Fig. 9. Closed loop step response

After designing K_H2, the controller's robust stability and performance can be ensured by investigating the Structured Singular Value (SSV) of the system M (see fig. 10).
455
The nominal system G_u is perturbed by parameter uncertainties, resulting in the extended system G_up. The change in the parameters can be introduced into the closed loop by the bounded perturbation matrix Δ_u. Additionally, the performance criteria can be expressed by the weighting functions W_k and W_m. The inverse of W_k is used to normalize the maximum amplitude of the steering angle to 1. The Bode plot of the gain of W_k^−1 is shown in fig. 11. The measurement noise m was normalized to a maximum amplitude of 10 cm by choosing W_m = 0.1.

Fig. 10. Block structure to analyze the performance robustness
Fig. 11. Bode plot of W_k^−1

The uncertain parameters of the linear model are the cornering stiffnesses of the front and rear wheels, the vehicle's mass and its yaw moment of inertia. The other parameters can be measured exactly and do not change. As a change in the vehicle's mass results in simultaneous changes of all the other uncertain parameters, their corresponding perturbations are coupled. The uncertain parameters are supposed to change in a range of ±30% around their nominal values, which is a very pessimistic assumption. Fig. 12 shows the result of a robust stability analysis of the system M, i.e. the robust performance analysis of the closed loop control system. What can be seen are the upper and lower bounds of the SSV of M. As the SSV is far smaller than the margin of 1, the robust stability and performance of the closed loop system are ensured for all possible parameter changes.

Fig. 12. Bounds of the SSV for robust performance

To adjust the controller to the changing velocity, the gain scheduling method was chosen. Therefore, the design process had to be repeated for different operating points between standstill and 60 kph.

The curvature of the desired path z is regarded as an external disturbance, which can be seen from eq. (26). A curvature compensation filter F_z can be formed by inverting the transfer function between the steering angle u and the curvature of the vehicle's path, ψ̇/v. As the filter generates dynamic steering signals, better tracking results are expected than attained by the commonly used steady state feedforward loops. The curvature compensation filter is also adapted to the velocity using gain scheduling. Fig. 13 shows the resulting lateral control scheme, which is based on the linear model of the lateral vehicle dynamics.

Fig. 13. Block diagram of the linear lateral control scheme

4.3 Input-Output Linearization

The second control concept, which was expected to improve the tracking performance compared to the linear controller, is based on an affine, nonlinear model of the longitudinal and lateral vehicle dynamics (see eq. (33) below). A newly introduced magnitude is the force in longitudinal direction at the front wheels, F_xf, which accelerates or decelerates the vehicle. The force at the rear wheels was assumed to be zero, to simplify the differential equations. F_xf is regarded as a second disturbance input, in addition to z. Model (33) can be input-output linearized by applying the nonlinear state-feedback algorithm described in (Isidori, 1995). The model equations can be written in the general form

ẋ = a(x) + b(x) u
y = c(x) = e_lat   (34)

The linearizing control law is

u = (u* − L_a^3 c(x)) / (L_b L_a^2 c(x)),   (35)

which implies that the system is of relative degree 3. The L_a^i are the i-th Lie derivatives along a and b.
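The effect of a linearizing law of the form of eq. (35) can be illustrated on a toy system of relative degree 2; the plant below is invented for the example and is not the vehicle model of eq. (33):

```python
# Input-output linearization sketch: for x1'' = -x1**3 + u the Lie
# derivatives are La^2 c = -x1**3 and Lb La c = 1, so
# u = (u_star - La^2 c) / (Lb La c) cancels the nonlinearity and
# leaves a chain of integrators that a linear law stabilizes.

def control(x1, x2):
    u_star = -2.0 * x1 - 3.0 * x2     # linear law for the integrator chain
    return u_star + x1 ** 3           # (u_star - La^2 c) / (Lb La c)

x1, x2, dt = 1.0, 0.0, 0.001
for _ in range(8000):                 # 8 s of Euler simulation
    u = control(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * (-x1 ** 3 + u)
```

The closed loop behaves exactly like the linear system x1'' = −2 x1 − 3 x2, which is why, as noted above, the influence of the nonlinear terms is automatically compensated in the steering command.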
The affine nonlinear model (33) extends the linear single-track model by the velocity as a sixth state:

ẋ = a(x) + b(x) u,   x = [β, ψ̇, δ, e_lt, e_lat, v]^T   (33)

Here a(x) contains the tire side forces with the cornering stiffnesses c_f and c_r, the longitudinal front-wheel force F_xf, the air resistance term L v², the steering actuator dynamics −δ/T_st, and the error dynamics de_lt/dt = v z − ψ̇ − dβ/dt and de_lat/dt = v e_lt, while b(x) = [0, 0, V_st/T_st, 0, 0, 0]^T acts on the steering angle δ.

u* = d³e_lat/dt³ is the input to the linearized system, which behaves like three chained integrators (see fig. 14). The input u* can now be generated by a linear control law which can be found easily for the integrator system. To ensure steady-state accuracy a fourth state is introduced, as can be seen in fig. 14. The inner states of the linearized system are

de_lat/dt = L_a c(x) and d²e_lat/dt² = L_a^2 c(x).

Fig. 14. Block diagrams of the nonlinear lateral control scheme and its linear equivalent

It can be noticed that, in contrast to the linear controller, the influences of z and v on the lateral dynamics are automatically taken into account for the steering command. To apply the input-output linearization the full state vector x = [β, ψ̇, δ, e_lt, e_lat, v]^T and the disturbances z and F_xf need to be determined. The following magnitudes are measured in the test vehicle: ψ̇, e_lat and v. Apart from that, the acceleration v̇_x in the vehicle's longitudinal direction is provided by the inertial sensor system, so that F_xf can be calculated by the simplified relation

F_xf = m v̇_x.

The curvature of the desired path z is provided by the motion planning module, as described before. The error of the course angles e_lt can be determined by integrating

de_lt/dt = v z − ψ̇ − dβ/dt.

Hence, the only magnitudes which cannot be derived directly are the side slip angle β and the angle at the front wheels δ. They are estimated by an Extended Kalman Filter.

4.4 Simulation and Practical Results

Figs. 15 and 16 show the simulated and the practical results for a worst case driving manoeuvre with a Volkswagen Caravelle on a test track. The curves reach minimum radii of 10 m, so that steering angles of more than one revolution (6π) are necessary to keep the track.

Fig. 15. Results for the LQG/H2-controller
Fig. 16. Results for the I/O-linearizing controller

The resulting courses of the desired and the driven path from the practical tests can be seen in figs. 17 and 18. Fig. 19 shows the curvature of the desired path. The control concepts obviously have both advantages and disadvantages. While the I/O-linearizing controller achieves a better tracking accuracy, it produces a more nervous steering signal, which results in increased yaw movements of the vehicle.
The disturbance rejection of the I/O-linearizing controller is quite poor, while the LQG/H2-controller can cope with very noisy sensor signals. The passenger comfort is definitely higher with the LQG/H2-controller. In both cases, the lateral errors e_lat seem to have undesirably high values of up to 0.8 m. But it needs to be taken into account that the curves are extraordinarily narrow and that the curvature of the desired path does not exactly correspond to the path itself. The path is much smoother than one might expect from the curvature in fig. 19, but the curvature is used as an input to the controller and causes most of the steering activities. Apart from that, the signal delays caused by the data transmission between the sensors, the motion planning module, the control unit and the robot driver were not considered when designing the controllers. This should be done in the future to increase the accuracy. To improve the disturbance rejection of the I/O-linearizing control law it can be useful to replace the Extended Kalman Filter by a Linear Kalman Filter, which would improve the filter robustness. In summary, one can say that both control concepts are suited for the practical application. However, the LQG/H2-controller is preferable to the I/O-linearizing control concept as long as no robust behaviour can be ensured for the latter.

Fig. 17. Path in absolute coordinates (LQG/H2)
Fig. 18. Path in absolute coordinates (I/O-lin.)
Fig. 19. Curvature of the desired path z

5. CONCLUSIONS

In this paper the navigation and control systems of an autonomous vehicle have been presented. A sensor fusion module processes the data of the heterogeneous object-detecting sensors, which were developed for collision avoidance purposes. The proposed navigation concept uses this data for the enhancement of the estimation of the vehicle ego-position. This position is one of the measurement inputs to the lateral control module, which is designed as a tracking control system. Two control concepts have been introduced. The linear system proved to be more robust and achieved a better disturbance rejection, while the nonlinear controller reacted faster and more accurately.

6. ACKNOWLEDGMENT

The authors would like to thank the Ministry of Economics, Technology and Transport of the State of Lower Saxony, the Volkswagen AG and the Robert Bosch GmbH for supporting this work.

7. REFERENCES

Becker, J. (1999). Fusion of data from the object-detecting sensors of an autonomous vehicle. In: 1999 IEEE Int. Conf. on Intelligent Transportation Systems. pp. 362-367.
Becker, J., A. Simon, I. Sohnitz, H. Gollinger and W. Schumacher (1998). A path planning and control structure of an autonomous vehicle. In: 1998 IEEE Int. Conf. on Intelligent Vehicles. pp. 457-462.
Doyle, J.C., K. Glover, P.P. Khargonekar and B.A. Francis (1989). State-space solutions to standard H2 and H∞ control problems. IEEE Trans. on Automatic Control 34(8), 831-847.
Goldbeck, J., G. Drager, B. Huertgen, S. Ernst and F. Wilms (1998). Lane following combining vision and DGPS. In: 1998 IEEE Int. Conf. on Intelligent Vehicles. pp. 445-450.
Isidori, A. (1995). Nonlinear Control Systems. Springer.
Simon, A. and J. Becker (1999). Vehicle guidance for an autonomous vehicle. In: 1999 IEEE Int. Conf. on Intelligent Transportation Systems. pp. 429-434.
Sohnitz, I. and K. Schwarze (1999). Lateral control of an autonomous vehicle: Design and first practical results. In: 1999 IEEE Int. Conf. on Intelligent Transportation Systems. pp. 448-452.
Weisser, H., P. Schulenberg, R. Bergholz and U. Lages (1998). Autonomous driving on vehicle test tracks. In: 1998 IEEE Int. Conf. on Intelligent Vehicles. pp. 439-443.