2019 International Conference on Field-Programmable Technology (ICFPT)

Autonomous Driving Developed with an FPGA Design
Euan Jones∗ , Keegan Pepper∗ , Aimei Li† , Shiyue Li† , Yuteng Zhang† , and Donald Bailey∗
∗Department of Mechanical and Electrical Engineering
Massey University
Palmerston North, New Zealand
epj.toko1@gmail.com, 16399851@massey.ac.nz, D.G.Bailey@massey.ac.nz
† School of Artificial Intelligence
Hebei University of Technology
Tianjin, China
m15677340378@163.com, lishiyue111111@126.com, 15122255850@163.com

Abstract—For this project, the task is to develop algorithms to program an FPGA-controlled vehicle for the FPT’19 design competition. The competition encourages the development of level 5 self-driving cars. Achieving level 5 self-driving requires image processing for object detection, lane detection, traffic light detection and pedestrian detection. With this detection, the vehicle can be guided safely around the track provided. This paper summarises the algorithms developed to achieve autonomous driving, following the regulations of the competition.

I. INTRODUCTION

Fig. 1. Testing FPGA car

In recent times, industry has been encouraging the development of autonomous cars so that one day level 5 autonomous driving can be achieved. Current autonomous driving development relies on GPS, maps, and sensors, but for a vehicle to achieve level 5 autonomy, the problem of protecting human life must be solved. The vehicle controller must have the capability to make decisions and perform image recognition similar to a safe driver in order to avoid pedestrians. Real-time image recognition is difficult to achieve on an embedded system using existing microprocessors; it can, however, be achieved on FPGAs, so this platform is explored in this paper.

This paper explains how an FPGA-controlled car is developed following the FPT’19 design competition regulations [1]. At the top level, methods of path following and localisation need to be developed to follow the prescribed course. This requires developing object, pedestrian, lane and traffic light detection algorithms to achieve full autonomy. These detection methods aid the decision making of the car, so that it moves correctly around the course. Section II briefly describes the hardware and car used for testing our algorithms, and Section III outlines our control architecture and the algorithms developed. The paper ends with a conclusion of the project.

II. CAR HARDWARE

A. Car Used for Testing

The car used to test our algorithms (see Fig. 1) is built on an MDF base cut using a laser cutter. The car has two DC motors driving the wheels and a ball point at the front. The controller is based on a Terasic DE10-Nano FPGA board. A custom circuit board has been made to connect the motors and power to the FPGA board; it also allows additional hardware peripherals to be connected if required. During testing, an HDMI connection to a monitor is used to see the output of the car’s vision and image processing.

B. Camera Used

We are using a Terasic D5M camera module, with a replacement 3.58 mm focal length low-distortion wide-angle lens. The wider angle enables objects right below the camera in front of the car, and off to the side of the road, to be viewed. The low-distortion lens reduces the distortion in the periphery normally associated with wide-angle lenses, and avoids significant curvature in images of straight lines. This simplifies the image processing associated with line detection.

III. CONTROL ARCHITECTURE AND ALGORITHMS

This section gives an overview of how our different algorithms and methods of detection interact and flow through
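As a quick sanity check on the wide-angle claim, the horizontal field of view of a pinhole-model lens is 2·atan(w/2f). The sketch below is illustrative only: the 5.7 mm active sensor width is an assumed value, not one taken from the paper, and the true D5M sensor dimensions should be read from its datasheet.

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view of an ideal (pinhole-model) lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# 3.58 mm lens from the paper; the 5.7 mm sensor width is an ASSUMED figure.
fov = horizontal_fov_deg(3.58, 5.7)   # roughly 77 degrees under this assumption
```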

978-1-7281-2943-3/19/$31.00 ©2019 IEEE 431


DOI 10.1109/ICFPT47387.2019.00085

Authorized licensed use limited to: BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE. Downloaded on January 30,2021 at 04:51:06 UTC from IEEE Xplore. Restrictions apply.
our architecture. The control algorithms are implemented as a hierarchical series of state machines to handle the car’s decision making, selecting the corresponding operations and actions needed at each time step.

Data for the control algorithms are derived from the images captured of the scene in front of the car. These images are processed in real time (60 frames per second) as the pixels are streamed from the camera.

Debugging FPGAs can be a very time-consuming process. To save time, we integrated our methods and algorithms into MATLAB to test and confirm that the image processing algorithms work. Image processing in MATLAB helped us to understand the detailed image processing techniques, allowing us to modify them to our needs. Once the algorithms had been verified within MATLAB, the MATLAB code was ported to VHDL for programming the FPGA.

For testing and manual control, we have connected an HC-06 Bluetooth module to our car. This enables us to send serial commands wirelessly to control the car’s movement and to toggle the auto exposure on the camera. While running image processing commands in MATLAB, we can see how the robot reacts to certain images through the serial commands it sends to the car. This gives us an understanding of how well our algorithm works.

Fig. 2. Main control architecture

A. Main Control Architecture

Fig. 2 shows the main control architecture of our design. The top-level state machine within the main controller manages the main navigation task, and receives inputs from the various detection modules. These provide control signals indicating what type of lines have been detected, the presence of any obstacles or pedestrians, and the state of traffic lights.

One part of the control architecture is the algorithm for maintaining the car within the lane, as illustrated in Fig. 3.

Fig. 3. Lane following algorithm, based on detecting lane edges

B. Line Detection Algorithms

Lane following is based on detecting the lines on the side and centre of the road.

Firstly, in order to reduce the effects of noise without blurring edges, the image is preprocessed by guided filtering [2]. Then, the image is globally binarized using the Otsu method. As can be seen from the top panel in Fig. 4, there are many small areas of interference, which may cause errors in detection. Morphological direction filtering [3] is used to eliminate this interference. The edges of the road lanes are then detected using the Sobel edge detector. The region of interest is the lines on both sides of the lane and the stop lines in front of the zebra crossing and at intersections. We first perform the standard Hough transform. According to the length of the lines in the image, 30 extreme points in the Hough transform matrix are found using the Hough peaks function. Then the Hough lines function is used to extract the white lines on both sides of the road, and the stop line, from the Hough transform matrix. The results are shown in Fig. 4.

Fig. 4. Image processing for line detection. Top: binary image from the Otsu algorithm; Middle: morphological direction filtered image; Bottom: Hough transform result image

Fig. 5 shows all the decisions the car will make while in the process of detecting lines. These decisions are determined by the type of line that is detected, i.e. a zebra crossing, thick stop line or corner line.
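The global binarization step can be illustrated with a minimal NumPy implementation of the Otsu method. This is a sketch of the technique only; the actual design uses MATLAB’s built-in functions and, ultimately, a VHDL implementation on the FPGA.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the 0-255 threshold that maximises between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # probability of class 0 (<= k)
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean of class 0
    mu_t = mu[-1]                            # global mean
    # Between-class variance for every candidate threshold k
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic scene: dark road surface with a bright lane marking
img = np.full((100, 100), 40, dtype=np.uint8)
img[:, 48:52] = 220                          # white line
t = otsu_threshold(img)
binary = img > t                             # line pixels become True
```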

Fig. 5. Line detection decision processing
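The decision flow of Fig. 5 can be sketched as a simple dispatch function. The function and signal names below are illustrative placeholders, not the actual VHDL state machine.

```python
def line_decision(line_type: str, need_to_turn: bool,
                  lights_present: bool, pedestrians_waiting: bool,
                  turn_direction: str = "left") -> str:
    """Illustrative dispatch mirroring the line-detection decisions of Fig. 5."""
    if line_type == "corner":
        # A corner line means the road turns left or right
        return f"turn_{turn_direction}"
    if line_type == "intersection":
        if lights_present:
            return "handle_traffic_lights"   # hand over to Fig. 6 processing
        if need_to_turn:
            return "turn_into_intersection"
        return "continue"
    if line_type == "crossing":
        if pedestrians_waiting:
            return "slow_down_and_stop"
        return "continue"
    return "continue"
```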

Fig. 6. Traffic light decision processing

C. Traffic Light Detection Algorithm

When our algorithm detects that the vehicle is approaching an intersection, traffic light detection is triggered. The traffic light decision processing (Fig. 6) determines whether the car stops or goes, depending on the detected colour of the light.

Since the position of the lights within the image is known, it is only necessary to process the top part of the image to detect the traffic lights. Colour thresholding is used to determine whether the light is green (go), yellow (slow down) or red (stop).

The first step is to convert the image into HSV (hue, saturation, and value) colour space to make the algorithm less sensitive to changes in lighting. In RGB colour space, variations in lighting and shading can have a significant effect on all of the RGB values, making it difficult to perform reliable segmentation. HSV separates the colour components from the strength and intensity components: hue is the underlying colour, given by the angle around a colour wheel; saturation is the strength of the colour (the radius within the colour wheel); and value is the intensity or brightness.

Within HSV colour space, a simple thresholding band is used to select hue values corresponding to the red, yellow and green colours, and on the saturation to avoid detecting bright white regions. The detected region with the brightest value indicates which light is currently lit.

D. Object Detection Algorithm

A vital task for our autonomous car is to detect objects and pedestrians. While detecting objects, the car makes the decisions shown in the flow chart of Fig. 7.

First, the captured image is transformed into the HSV colour space to enable coloured objects within the road area to be detected. Morphological opening is performed on the image, and holes within the detected regions are filled. This uses both dilation and erosion together. As shown in Fig. 8, the objects are extracted from the background and detected within the image.
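The HSV classification used for the traffic lights can be sketched per pixel as follows. The hue, saturation and value band limits are illustrative assumptions; in practice they would be tuned on images of the actual competition lights.

```python
import colorsys

def classify_light(r: int, g: int, b: int):
    """Classify an 8-bit RGB pixel as a red/yellow/green light, or None."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    if s < 0.4 or v < 0.3:
        # Low saturation (bright white) or low value (dark) regions are ignored
        return None
    if hue_deg < 20 or hue_deg > 340:    # red wraps around 0 degrees
        return "red"
    if 40 <= hue_deg <= 70:
        return "yellow"
    if 90 <= hue_deg <= 150:
        return "green"
    return None
```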

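The morphological opening stage can be illustrated with a minimal 3×3 binary implementation. This is a sketch of the erosion-then-dilation idea only, not the streamed FPGA implementation.

```python
import numpy as np

def erode(binary: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood is set; image borders are treated as background."""
    padded = np.pad(binary, 1, constant_values=False)
    out = np.ones_like(binary, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + binary.shape[0],
                          1 + dx:1 + dx + binary.shape[1]]
    return out

def dilate(binary: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    padded = np.pad(binary, 1, constant_values=False)
    out = np.zeros_like(binary, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + binary.shape[0],
                          1 + dx:1 + dx + binary.shape[1]]
    return out

def opening(binary: np.ndarray) -> np.ndarray:
    """Opening = erosion then dilation: removes isolated specks
    smaller than the structuring element, keeping larger objects."""
    return dilate(erode(binary))

# A 6x6 object plus a single-pixel noise speck
mask = np.zeros((20, 20), dtype=bool)
mask[5:11, 5:11] = True       # object
mask[15, 15] = True           # speck of interference
cleaned = opening(mask)       # speck removed, object preserved
```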
Fig. 7. Object detection decision processing

Fig. 8. Left: obstacles within the image; Right: masked image after morphological filtering

E. Pedestrian Detection

Pedestrians (in this case the mannequin, seen in Fig. 9) can come in a range of shapes (as the arms and legs are positioned) and sizes (depending on distance from the car). To solve this, we detect the torso of the mannequin and do not worry about the position of the arms and legs. The same morphological processing described for our object detection is used, then the proportions of the height and width are found by fitting a bounding box to the torso to confirm it is a pedestrian.

Fig. 9. Mannequin on pedestrian crossing

F. Motor Control

The car uses DC motors, and each motor has a rotary encoder which provides a feedback signal. The motor driver consists of an H-bridge, with the speed of the motor controlled by a digital pulse width modulation (PWM) waveform. The encoder provides a series of pulses, from which the angle of rotation and motor speed are derived. To give stable operation at a desired angular velocity, a feedback control loop is used with a PID (proportional, integral, and derivative) controller to output the motor PWM value based on the error between the actual motor angular velocity and the desired value. By adjusting the target values between the two motors, one side of the car will travel faster than the other, effectively steering the vehicle.

Encapsulating the motion control in this way enables the main control architecture to send high-level commands to the motor controller to realise the required autonomous motion.

IV. CONCLUSION

At this stage, the overall control architecture for the autonomous vehicle has been defined, and initial image processing algorithms and control architectures for lane detection, traffic light detection, obstacle detection and pedestrian detection have been developed. We are currently converting these control architectures to finite state machines, and the image processing algorithms into stream-based image processing on the FPGA, all implemented in VHDL. This will enable real-time autonomous control of our car, capable of participating in the FPT’19 design challenge.

To conclude, the car used to test our algorithms has been specified, along with the camera. How the car will be controlled has been specified, and the decision architecture has been supplied in support of this. An overview has been provided of the following image processing algorithms: lane detection, object detection, traffic light detection and pedestrian detection. The project is at the point where algorithm development is finishing and the algorithms can be implemented in VHDL on the FPGA for testing. Once testing begins, we can determine whether our algorithms are successful in creating a fully autonomous car.

REFERENCES

[1] “FPT2019 FPGA Design Competition,” http://fpt19.tju.edu.cn/Contest/FPT2019 FPGA Design Competition/Contents and Conditions.htm
[2] K. He, J. Sun, and X. Tang, “Guided image filtering,” European Conference on Computer Vision, pp. 1–14, 2010.
[3] P. Soille and H. Talbot, “Directional morphological filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1313–1329, 2001.
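Finally, the motor feedback loop described in Section III-F can be sketched in software. This is a minimal discrete PID sketch with placeholder gains and a toy first-order motor model; the actual controller runs in VHDL on the FPGA.

```python
class PID:
    """Discrete PID controller producing a clamped PWM duty cycle."""

    def __init__(self, kp, ki, kd, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = (self.kp * error + self.ki * self.integral
               + self.kd * derivative)
        return max(self.out_min, min(self.out_max, out))

# Closed loop against a toy motor: speed decays, PWM drives it.
# Gains and sample period are placeholders, not the tuned values.
pid = PID(kp=0.2, ki=0.5, kd=0.0, dt=0.01)
speed = 0.0
for _ in range(1000):
    pwm = pid.update(target=5.0, measured=speed)
    speed += (10.0 * pwm - speed) * 0.01   # simple first-order dynamics
```

Because steering is achieved by giving the two wheel controllers different target speeds, one such loop would run per motor.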

