Download as pdf or txt
Download as pdf or txt
You are on page 1of 6

2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

Self-Driving Car Lane-keeping Assist using PID


and Pure Pursuit Control
Mohamed Khaled Diab Hossam Hassan Ammar
Smart Engineering Systems Research Center (SESC) Smart Engineering Systems Research Center (SESC)
School of Engineering and Applied Science ,Nile University School of Engineering and Applied Science ,Nile University
Giza, Egypt Giza, Egypt
Email: mkhaled@nu.edu.eg Email: hhassan@nu.edu.eg

Raafat Elsayed Shalaby


Faculty of Electronic Engineering Menofia University
Menouf, Egypt
Smart Engineering Systems Research Center (SESC)
School of Engineering and Applied Science ,Nile University
Giza, Egypt
Email: rshalaby@nu.edu.eg

Abstract—Detection of lane boundaries is the primary role for L Total wheel base of vehicle
monitoring an autonomous car’s trajectory. Three lane identifica- R Radius of curvature
tion methodologies are explored in this paper with experimental
illustration: Edge detection, Hough transformation, and Birds
eye view. The next step after obtaining the boundary points is I. I NTRODUCTION
to add a regulation rule to effectively trigger the regulation of
steering and velocity to the motors. A comparative analysis is It’s clear human driver substitution isn’t far off. Following
made between different steering controllers like PID or by using the latest self-driving vehicle milestone that drove more than
PID with a pure pursuit controller for the Lane Keeping Assist (511,638) total reported miles [1]. And to take advantage of
(LKA) system. A camera that sends wireless data to ROS via
Nvidia Jetson Nano is used to obtain environmental information. those outcome makers, the Technology Business and Research
The data is interpreted by the processor, which transmits the Center has put a lot of work into two aspects in particular
desired output control via rosserial communication to Arduino. improving the deployment of subsystems and autonomous
car algorithms and reducing the cost of autonomy system.
Index Terms—Lane-keeping Assist; Lane boundary detection; Since driving is an attention-consuming task and replacing the
Edge detection; Birds eye view; Self-driving; Computer vision; driver with an automated device that handles both analysis
ROS; PID; Tuning methods; Pure pursuit; CARLA; Jetson Nano and decision-making is not a simple task automation tech-
nology that has been categorized into three types of devices:
first sensing equipment such as LIDAR, ultrasonic sensors,
N OMENCLATURE odometry sensors, Global Navigation Satellite Device (GNSS)
Fyf Lateral tire force on front tire and 2nd video cameras devices [2]–[5] . Nevertheless, it is
Fyr Lateral tire force on rear tire very expensive to implement all this sensor in one vehicle.
Kp Proportional gain Despite the fact that a human driver can function safely based
Ki Integral gain on visual senses alone. It shows that there is enough signal in
Kd Derivative gain photos alone to drive a vehicle safely autonomously. Due to
ẋ Longitudinal velocity at c.g. of vehicle the low sensor price, the vision-only goal in the ego-vehicle
ẏ Lateral velocity at c.g. of vehicle culture stands as a golden norm [6]. Thus, data from the
ψ Yaw angle of vehicle in global axes camera or vision-based sensors can be loaded into the car’s
β Slip angle at vehicle c.g. high-performance computational process unit (CPU) due to the
v Total velocity at c.g. of vehicle ability to identify the target and train the camera algorithm,
ψ̇ Yaw rate of vehicle and from that point on it determines on the basis of its control
lr Longitudinal distance from c.g. to rear tires algorithm and then provides lateral and longitude drivers with
lf Longitudinal distance from c.g. to front tires output data [7]. The challenge when developing the system’s
m Total mass of vehicle planning layer is to make the autonomous vehicle completely
Iz Yaw moment of inertia of vehicle aware of its environment and to create a framework for param-
δ Steering angle eter analysis in which the vehicle performs concrete actions
ld Look-ahead distance inside it. The Lane identification and monitoring methods are

978-1-7281-9673-2/20/$31.00
Authorized ©2020
licensed use limited to: University IEEE Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.
of Technology
2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

separated into smaller ones in order to minimize the processing Ẋ = ẋcos(ψ) − ψ̇sin(ψ) (8)
time and meet the rapid response that the control system
Ẏ = ẋsin(ψ) + ẏcos(ψ) (9)
targets [8]. However, various approaches are used recently
to identify the lanes on the basis of Hough lines or bird’s Where longitudinal and lateral speeds in the body frame is
eye view [9]. In this study, a distinction of lane detection denoted by ẋ and ẏ respectively, while yaw is described by
methodologies will be applied using advanced hardware in ψ̇. m is vehicle mass, Iz is yaw inertia, and Fyf and Fyr are
addition to an electrical RC car on a 1/10 scale. The remainder tire forces for front and rear wheels respectively.
of the articles as follows Section 2 we describe the model
of the device we used in our experimental studies. The
overview of our approach is presented in Section 3 This section
addresses the PID controller and simulation performance. At
last, deductions and future works are provided in Section 5.
II. S YSTEM M ODEL
The basic bicycle kinematic and dynamic model describes
the device architecture, and more hardware and computer
modifications are shown. There is an example of the laboratory
area where the experimental experiments were tested.
A. Kinematics and Dynamics Fig. 1. Bicycle kinematic model [5]
Under any presumption, autonomous self-driving vehicles
can be interpreted by bicycle kinematics and dynamics. As- B. Experimental setup Description
suming that only the front wheel can be steered and that
The hardware and electronic schematics as defined in Figure
both the front and the rear wheels are lumped together into
2 provide the required components which have been used as
separate single wheels at the middle of the front and rear axles,
our model for autonomous car operation. The laptop has been
as seen in figure 1. The following nonlinear continuous-time
used in loop control for real-time hardware and to upload the
equations[6] define the vehicle’s kinematic model in inertial
code to Nvidia Jetson Nano and the communication between
frame:
laptop ”Master” and Jetson Nano will be by ROS [10]. The
USB camera was used to display an image in real-time.
ẋ = vcos(ψ + β) (1)
The system is operated by two motors, for linear movement
ẏ = vsin(ψ + β) (2) DC motor with a motor driver which generates a 4-wheel
v drive (4WD) system and using the servo motor for steering
ψ̇ = sin(β) (3) mechanism. Arduino was used to power the engines, to ensure
lr
that the machine was still operating in real-time. Thus, because
lr
β = tan−1 ( tan(σf )) (4) of its high computational performance, Nvidia Jetson Nano
lf + lr performs the computational operation on the vehicle, and
In the inertial frame (x, y), x, y are the coordinates of Arduino is used for actuator hardware power.
the center of mass. Inertial heading is described by ψ and
speed is denoted by v. The distance from the center of mass
of vehicle to the front and rear axles are represented by lf
and lr respectively. While β is the slip angle at the center
of gravity. In the same direction of velocity, a represents
the acceleration of the center of mass/gravity. The control
inputs the fronts and rear steering angle σf and acceleration.
As only two parameters lf and lr to identify, this makes
system identification easier for above mentioned kinematic
model. The same definition of inertial position coordinates
and heading angle is valid for the dynamic vehicle model as
of the kinematic model. The following differential equations Fig. 2. Hardware and electronics schematic
describes the dynamic model:
The changed configuration is illustrated in Figure 3, where
ẍ = ψ̇ ẏ + ax (5) the camera is mounted on top of the car to be able to
2 control the view of the road ahead within the lane limits and
ÿ = −ψ̇ ẋ + (Fyf cosσf + Fyr ) (6) long enough for computing time. Inside the system, all the
m
2 electronics are combined to make it compact and autonomous
ψ̈ = (lf Fyf − lr Fyr ) (7) without external connection.
Iz

Authorized licensed use limited to: University of Technology Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.
2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

the features within an image of a particular shape. Since it


involves defining the desired features in any parametric form,
the classical Hough transformation is most widely used to
identify normal curves such as lines, circles, ellipses, etc. But
in Hough transform process parallel lines appear as seen in
Figure 6 to converge on images from the front-facing camera
due to camera perspective, so this was the key challenge when
we used a transformation of Hough lines. So, the eye view of
the bird solves this problem [14], which is seen in Figure 7.
Fig. 3. Modified RC car integrating Nvidia Jetson Nano and USB camera Will used the Bird’s eye view algorithm in this paper to detect
the lanes along the road. So, first, Lane-keeping assist divided
into two parts: lane recognition and second, PID controller for
For the experimental research, a noise-free test area has been changing the steering angle.
used. The Nile University project lab was used to execute the
tests where a simple lane model was developed using black
electric tape on the field. To measure the simple driving modes
i.e. straight drive and steering control, the road was rendered
exactly according to the simulation with a single curve.
III. M ETHODOLOGY
The self-driving car’s overall workflow may be illustrated
in Figure 4. The programming and visualization user inter- Fig. 5. Edge detection algorithm on CARLA simulation
face is linked to the vehicle interface through ROS wireless
communication as shown in Figure 2.

Fig. 6. Hough transformation on CARLA simulation

Fig. 4. Processing pipeline

A lane-keeping assist (LKA) system is one of a major


system in a self-driving car [11]. The (LKA) system is a
control system that aids a driver in maintaining safe travel
within a marked lane of a highway. This system work when
Fig. 7. Bird’s eye algorithm on CARLA simulation
the car swerved from a lane and the LKA automatically returns
the vehicle between the lane by adjusts the steering without
additional input from the driver. So, for lane detection, there Our lane-keeping assist was done according to the algorithm
are many techniques used to detect-it lane like Hough lines, 1. The first research on the CARLA [15] simulation platform
Edge Detection, and Bird’s Eye view. Edge Detection [12] of is our lane recognition algorithm. It begins by taking a raw
this method is based on the concept of finding points within an image from CARLA and then applying a bird’s eye view after
image where the brightness of the image varies dramatically. that add some image processing to the images to eliminate
Edge is characterized as an ordered collection of segments noise and smooth the image to fit it for lane search algorithms.
on a curved surface. This collection consists of points where The output of lane search algorithms will have many lanes are
image brightness varies sharply. Edge Detection is a method intersection at the same points this means their many equations
used in image processing to identify and retrieve features. represent the same lane to avoid this problem by apply hough
This technique greatly decreases the data to be processed and transform [16]. The straight-line y = mx + c can be expressed
thus can delete less meaningful information while maintaining in polar coordinates as
valuable image properties is shown in figure 5. The Hough
transform [13] is a method that can be used to distinguish ρ = x cos(θ) + y sin(θ) (10)

Authorized licensed use limited to: University of Technology Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.
2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

Where (rho, theta) is the description of a vector from the


origin to the nearest line point. This vector would be linearly
perpendicular. The formula for the curvature radius for curve
y = f (x) at any point x is given by:

dy 2 2 3
(1 + ( dx ) )
Radius of curvature =
d2 y (11)
dx2
Fig. 9. Bird’s eye algorithm on CARLA simulation

Now we need to figure out how far the car is from the
center-line to schedule the performance of the bird’s eye view IV. PID AND P URE P URSUIT C ONTROLLER FOR LKA
for our controller. Two errors, lateral error and yaw error, SYSTEM
are measured as such. Lateral error (e) is the distance error Our controller to be included in this paper is a pure pursuit
between the vehicle’s rear axis center and the road’s mid-line controller [18] for minimized lateral error and a PID controller
[17]. The yaw error (θ) is the angle shown in Figure 8 between for minimized yaw error as shown in figure 10. The aim of
the middle line and the normal of the vehicle. Finally, the the ”LKA” lane-keeping help scheme is to stay the vehicle
bird’s eye algorithm on CARLA simulation is seen in figure between two lanes and if the vehicle swept off a lane, LKA
9. would have to return the vehicle between the two lanes. This
is the PID controller’s goal to minimize the yaw error of this
Algorithm 1: Lane Detection algorithm mistake between the vehicle going and the lane middle line.
The PID controller Mathematical Model given by:
CARLA Start; Z t
Camera initialization; de(t)
c(t) = Kp e(t) + Ki e(t)dt + Kd (12)
while Camera streaming is active do 0 dt
for Every Frame coming from the Camera do where c(t) is steering control signal Kp , Ki and Kd are the
Apply perspective transform (Bird Eye view).
proportional, integral, and derivative gains, respectively, e(t) is
Remove noise by apply image processing and
the error between the vehicle and the center of the two lanes.
some filters.
The time step d(t) is identified according to the update rate
Detect lanes by apply lane search algorithms.
of the state of the vehicle and the computational speed. The
for d=1 to num of lanes do
proportional, integral, and derivative gains Kp , Ki and Kd are
Apply Hough transform.
designing based on time domain, controllers aim to minimize
d++
end different integral performance indices namely [19]:
Apply 2nd order polynomial fit on left and Z ∞
right lanes. ISE = e2 dt (13)
Calculate center-lane by take the mean of right 0
Z ∞
and left lanes.
IAE = |e| dt (14)
Calculate the radius of curvature of the road. 0
Draw virtual lane form the center of the car. Z ∞
Calculate the error between virtual lane and IT SE = t2 |e| dt (15)
center-lane. 0
end Z ∞
end IT AE = t |e| dt (16)
0

Fig. 10. Steering control system diagram

ISE, IAE, ITSE, and ITAE are Integral square error, Integral
Fig. 8. Virtual and Deviation Module absolute error, Integral time square error, and Integral time
absolute error, respectively, so this is a four different methods

Authorized licensed use limited to: University of Technology Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.
2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

TABLE I V. S IMULATION R ESULTS AND A NALYSIS


PID CONTROLLER GAINS USING DIFFERENT TUNING CRITERIA
A PID controller with various tuning parameters (ISE, IAE,
Type of controller Tuning criteria Kp Ki Kd ITSE, and ITAE) was introduced as a yaw controller. Pure
PID ISE 0.2 0.051 0.0025
PID IAE 0.2 0.01 0.0061
pursuit was introduced as a lateral controller. We test our Lane
PID ITSE 0.2 0.0013 0.005 Keeping Assists (LKA) at a constant velocity 50 km/h and we
PID ITAE 0.2 0.01 0.005 test our LKA with two different controllers first we tested by
PID controller only and neglect the lateral error for our vehicle
and the second test apply both PID and Pure pursuit controller
to tune a PID controller. The tuning gains of PID for steering to minimized lateral and yaw errors. Figure 12 shows the effect
angle using different tuning methods are summarized in Table of pure pursuit on our LKA system and figure 13 also displays
1. the study of our Lane Keeping Assist with PID and Pure
One of the most common approaches to the route tracking pursuit controller this figure shows the (mean square error,
problem for self-driving cars is the pure pursuit strategy standard deviation, and range of steering).
and variants thereof. The pure pursuit method consists of
geometrically measuring the curvature of a circular arc that
links the direction of the rear axle to a destination point on
the vehicle’s track ahead. The aim point is determined from
a look-ahead distance ld to the expected direction from the
current rear axle position. The (gx , gy ) destination point is
shown in Figure 11. The steering angle δ of the vehicle can
be calculated using just the direction of the target point and
the angle α between the heading vector of the vehicle and the
look-ahead vector using eq. (11) to calculate the Radius of
curvature (R) of the road .

ld R
= (17)
sin 2α sin π2 − α
Fig. 13. Statistical analysis at 50 km/h
ld R
= (18)
2 sin(α) cos(α) cos(α) The results review shows that when we used the PID
controller only to control the steering the lateral error will
ld be large but the car still between two-lane. To make LKA
= 2R (19)
sin(α) safer we apply a pure pursuit controller with PID to make a
vehicle stay in the middle of the lanes, figure 14 shows the
2 sin(α) difference between the PID controller and PID with a pure
κ= (20)
ld pursuit controller.
Using eq. (19) and (20) to calculate the steering angle of
vehicle δ. Where κ is the curvature of the circular arc and L
is the vehicle wheelbase. The steering angle can be written as:

δ = tan−1 (κL) (21)

Fig. 14. Output Trajectory between PID controller and PID with pure pursuit
controller

VI. C ONCLUSION AND F UTURE W ORK


Fig. 11. Pure Pursuit geometry [20] In conclusion, this paper demonstrated the introduction of
different ego-vehicle lane identification and follow algorithms

Authorized licensed use limited to: University of Technology Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.
2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT)

Fig. 12. Yaw and Lateral Errors at 50 km/h

such as Edge Detection, Hough transformation, and Bird’s [8] A. B. Hillel, R. Lerner, D. Levi, and G. Raz, “Recent progress in road
eye view. However, this paper evaluated only a closed-loop and lane detection: a survey,” Machine vision and applications, vol. 25,
no. 3, pp. 727–745, 2014.
framework with PID and Pure pursuit specified as the control [9] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and
systems for autonomous trajectory monitoring of the mod- L. Van Gool, “Towards end-to-end lane detection: an instance segmen-
ule’s hardware implementation. Deflection from the middle tation approach,” in 2018 IEEE Intelligent Vehicles Symposium (IV).
IEEE, 2018, pp. 286–291.
of the lane and corresponding steering order is processed [10] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs,
and graphically displayed. All the answers for quantifying R. Wheeler, and A. Y. Ng, “Ros: an open-source robot operating system,”
the findings were statistically related. The findings for each in ICRA workshop on open source software, vol. 3, no. 3.2. Kobe, Japan,
2009, p. 5.
graphical outcome are more positive for PID with pure pursuit [11] C. Badue, R. Guidolini, R. V. Carneiro, P. Azevedo, V. B. Cardoso,
for steering controller, depending on the standard deviation. A. Forechi, L. Jesus, R. Berriel, T. Paixao, F. Mutz et al., “Self-driving
However, owing to the dominance it demonstrated over other cars: A survey,” arXiv preprint arXiv:1901.04407, 2019.
[12] R. Muthalagu, A. Bolimera, and V. Kalaichelvi, “Lane detection tech-
control methods, potential analysis would be linked with nique based on perspective transformation and histogram analysis for
further evaluating the model with the two other modules using self-driving cars,” Computers & Electrical Engineering, vol. 85, p.
the Neural Network as the main controller. 106653, 2020.
[13] I. El Hajjouji, S. Mars, Z. Asrih, and A. El Mourabit, “A novel
fpga implementation of hough transform for straight lane detection,”
R EFERENCES Engineering Science and Technology, an International Journal, vol. 23,
no. 2, pp. 274–280, 2020.
[1] L. Fridman, D. E. Brown, M. Glazer, W. Angell, S. Dodd, B. Jenik,
[14] M. Martı́nez-Dı́az, F. Soriguera, and I. Pérez, “Autonomous driving: a
J. Terwilliger, A. Patsekin, J. Kindelsberger, L. Ding et al., “Mit ad-
bird’s eye view,” IET Intelligent Transport Systems, vol. 13, no. 4, pp.
vanced vehicle technology study: Large-scale naturalistic driving study
563–579, 2019.
of driver behavior and interaction with automation,” IEEE Access, vol. 7,
[15] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “Carla:
pp. 102 021–102 038, 2019.
An open urban driving simulator,” arXiv preprint arXiv:1711.03938,
[2] S. Kato, E. Takeuchi, Y. Ishiguro, Y. Ninomiya, K. Takeda, and
2017.
T. Hamada, “An open approach to autonomous vehicles,” IEEE Micro,
[16] Y. Jiang, F. Gao, and G. Xu, “Computer vision-based multiple-lane
vol. 35, no. 6, pp. 60–68, 2015.
detection on straight road and in a curve,” in 2010 International
[3] S. Campbell, N. O’Mahony, L. Krpalcova, D. Riordan, J. Walsh,
Conference on Image Analysis and Signal Processing. IEEE, 2010,
A. Murphy, and C. Ryan, “Sensor technology in autonomous vehicles:
pp. 114–117.
a review,” in 2018 29th Irish Signals and Systems Conference (ISSC).
[17] C. Sun, X. Zhang, Q. Zhou, and Y. Tian, “A model predictive controller
IEEE, 2018, pp. 1–4.
with switched tracking error for autonomous vehicle path tracking,”
[4] B. S. Jahromi, T. Tulabandhula, and S. Cetin, “Real-time hybrid multi-
IEEE Access, vol. 7, pp. 53 103–53 114, 2019.
sensor fusion framework for perception in autonomous vehicles,” Sen-
[18] M.-W. Park, S.-W. Lee, and W.-Y. Han, “Development of lateral control
sors, vol. 19, no. 20, p. 4357, oct 2019.
system for autonomous vehicle based on adaptive pure pursuit algo-
[5] H. Kim and I. Lee, “Localization of a car based on multi-sensor
rithm,” in 2014 14th International Conference on Control, Automation
fusion.” International Archives of the Photogrammetry, Remote Sensing
and Systems (ICCAS 2014). IEEE, 2014, pp. 1443–1447.
& Spatial Information Sciences, vol. 42, no. 1, 2018.
[19] H. H. Ammar and A. T. Azar, “Robust path tracking of mobile robot
[6] M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen,
using fractional order pid controller,” in International Conference on
and R. Vasudevan, “Driving in the matrix: Can virtual worlds replace
Advanced Machine Learning Technologies and Applications. Springer,
human-generated annotations for real world tasks?” arXiv preprint
2019, pp. 370–381.
arXiv:1610.01983, 2016.
[20] J. M. Snider et al., “Automatic steering methods for autonomous
[7] L. Xu, Y. Wang, H. Sun, J. Xin, and N. Zheng, “Integrated longitudinal
automobile path tracking,” Robotics Institute, Pittsburgh, PA, Tech. Rep.
and lateral control for kuafu-ii autonomous vehicle,” IEEE Transactions
CMU-RITR-09-08, 2009.
on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2032–2041,
2015.

Authorized licensed use limited to: University of Technology Sydney. Downloaded on March 03,2023 at 14:00:04 UTC from IEEE Xplore. Restrictions apply.

You might also like