Autonomous Driving Developed With An FPGA Design
I. INTRODUCTION

In recent times, industry has been encouraging the development of autonomous driving cars so that one day level 5 autonomous driving can be achieved. Autonomous driving development is currently at a stage where it uses GPS, maps, and sensors, but for a vehicle to achieve level 5 autonomy, the issue of protecting human life must be solved. The vehicle controller must be capable of making decisions and performing image recognition like a safe driver in order to avoid pedestrians. Real-time image recognition is difficult to achieve on an embedded system using existing microprocessors. It can, however, be achieved on FPGAs, so this platform is explored in this paper.

This paper explains how an FPGA-controlled car is being developed following the FPT'19 design competition regulations [1]. At the top level, methods of path following and localisation need to be developed to follow the prescribed course. This requires developing object, pedestrian, lane and traffic light detection algorithms to achieve full autonomy. These detection methods aid the decision making of the car so that it moves correctly around the course. Section II briefly describes the hardware and car used for testing our algorithms, and Section III outlines our control architecture and the algorithms developed. The paper then ends with a conclusion of the project.

Fig. 1. Testing FPGA car

II. CAR HARDWARE

A. Car Used for Testing

The car used to test our algorithms (see Fig. 1) is built on an MDF base cut using a laser cutter. The car has two DC motors driving the wheels and a ball point at the front. The controller is based on a Terasic DE10-Nano FPGA board. A custom circuit board connects the motors and power to the FPGA board; it also allows additional hardware peripherals to be connected if required. During testing, an HDMI connection to a monitor is used to see the output of the car's vision and image processing.

B. Camera Used

We are using a Terasic D5M camera module with a replacement 3.58 mm focal length, low-distortion, wide-angle lens. The wider angle enables objects directly below the camera in front of the car, and off to the side of the road, to be viewed. The low-distortion lens reduces the peripheral distortion normally associated with wide-angle lenses and avoids significant curvature in images of straight lines. This simplifies the image processing associated with line detection.

III. CONTROL ARCHITECTURE AND ALGORITHMS

In this section, we give an overview of how our different algorithms and methods of detection interact and flow through
Authorized licensed use limited to: BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE. Downloaded on January 30,2021 at 04:51:06 UTC from IEEE Xplore. Restrictions apply.
our architecture. The control algorithms are implemented as a hierarchical series of state machines that handle the car's decision making, selecting the corresponding operations and actions needed at each time step.

Data for the control algorithms are derived from images captured of the scene in front of the car. These images are processed in real time (60 frames per second) as the pixels are streamed from the camera.

Debugging FPGAs can be a very time-consuming process. To save time, we integrated our methods and algorithms into MATLAB to test and confirm that the image processing algorithms work. Image processing in MATLAB helped us to understand the detailed image processing techniques, allowing us to modify them to our needs. Once the algorithms had been verified within MATLAB, the MATLAB code was ported to VHDL for programming the FPGA.

For testing and manual control, we have connected an HC-06 Bluetooth module to our car. This enables us to send serial commands wirelessly to control the car's movement and to toggle the auto exposure on the camera. While running image processing commands in MATLAB, we can see how the robot reacts to certain images through the serial commands it sends to the car. This gives us an understanding of how well our algorithm works.

Fig. 2. Control architecture block diagram: the image stream feeds the line detection, traffic light detection, obstacle detection and pedestrian detection blocks, which inform the main controller; the main controller in turn drives the motor controller.

B. Line Detection Algorithms

Lane following is based on detecting the lines on the side and centre of the road.

Fig. 3. Lane following algorithm, based on detecting lane edges (flow chart: if the car is not in the centre of the lane, adjust the steering; otherwise, continue).

Firstly, in order to reduce the effects of noise without blurring edges, the image is preprocessed by guided filtering [2]. Then, the image is globally binarized using the Otsu method. As can be seen from the top panel in Fig. 4, there are many small areas of interference, which may cause errors in detection. Morphological directional filtering [3] is used to eliminate these small areas. The edges of the road lanes are then detected using the Sobel edge detector. The region of interest covers the lines on both sides of the lane and the stop lines in front of the zebra crossing and at intersections. We first perform the standard Hough transform. Based on the lengths of the lines in the image, the 30 strongest peaks in the Hough transform matrix are found using the Hough peaks function. We then use the Hough lines function to extract the white lines on both sides of the road and the stop line from the Hough transform matrix. The results are shown in Fig. 4.

Fig. 5 shows all the decisions the car will make while in the process of detecting lines. These decisions are determined by the type of line that is detected, i.e. zebra crossing, thick stop line or corner line.
Fig. 5. Line detection decision processing (flow chart: a detected line is classified as a traffic-light intersection, a zebra crossing, or a corner, with corners handled by determining whether the road turns left or right).

C. Traffic Light Detection Algorithm

When our algorithm detects that the vehicle is approaching an intersection, traffic light detection is triggered. The traffic light decision processing (Fig. 6) determines whether the car stops or goes, depending on the detected colour of the light.

Since the position of the lights within the image is known, it is only necessary to process the top part of the image to detect the traffic lights. Colour thresholding is used to determine whether the light is green (go), yellow (slow down) or red (stop).

Fig. 6. Traffic light decision processing (flow chart: on green, continue; on orange or red, slow down and stop at the line, checking whether the line is more or less than 20 cm away).

The first step is to convert the image into HSV (hue, saturation, and value) colour space to make the algorithm less sensitive to changes in lighting. In RGB colour space, variations in lighting and shading can have a significant effect on all three RGB values, making it difficult to perform reliable segmentation. HSV separates the colour component from the strength and intensity components: hue is the underlying colour, given by the angle around a colour wheel; saturation is the strength of the colour (the radius within the colour wheel); and value is the intensity or brightness.

Within HSV colour space, a simple thresholding band is used to select hue values corresponding to the red, yellow and green colours, with a further threshold on the saturation to avoid detecting bright white regions. The detected region with the brightest value indicates which light is currently lit.

D. Object Detection

A vital task for our autonomous car is to detect objects and pedestrians. While detecting objects, the car makes the decisions seen in the flow chart in Fig. 7.

First, the captured image is transformed into the HSV colour space to enable coloured objects within the road area to be detected. Morphological opening is then performed on the image, and holes in the detected regions are filled; this uses both dilation and erosion together. As shown in Fig. 8, the objects are extracted from the background and detected within the image.
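The hue-band thresholding described above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation: the hue bands and the saturation cut-off are assumed values for demonstration, and the real system operates on the pixel stream in VHDL.

```python
import colorsys

# Illustrative hue bands as fractions of the colour wheel; the paper does
# not give numeric thresholds, so these values are assumptions.
HUE_BANDS = {
    "red":    [(0.00, 0.05), (0.95, 1.00)],  # red wraps around the hue wheel
    "yellow": [(0.10, 0.20)],
    "green":  [(0.25, 0.45)],
}
MIN_SATURATION = 0.5  # reject washed-out / bright white regions

def classify_light(pixels_rgb):
    """Classify (r, g, b) tuples (0-255) from the traffic-light region.

    Keeps sufficiently saturated pixels whose hue falls in one of the
    bands, and returns the band of the brightest such pixel.
    """
    best_colour, best_value = None, -1.0
    for r, g, b in pixels_rgb:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if s < MIN_SATURATION:
            continue  # too close to white/grey to be a lit lamp colour
        for colour, bands in HUE_BANDS.items():
            if any(lo <= h <= hi for lo, hi in bands) and v > best_value:
                best_colour, best_value = colour, v
    return best_colour

# A bright, saturated red lamp among dim background pixels:
sample = [(200, 20, 20), (40, 40, 40), (30, 80, 30)]
```

Taking the brightest sufficiently saturated pixel mirrors the rule above that the detected region with the brightest value indicates which light is currently lit.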
Fig. 7. Object detection decision processing (flow chart: when avoiding an obstacle, check whether the other lane is clear; if it is, change lanes, then change back to the driving lane once the obstacle has been passed; otherwise continue).

Fig. 8. Left: obstacles within the image; Right: masked image after morphological filtering

E. Pedestrian Detection

Pedestrians (in this case the mannequin seen in Fig. 9) can come in a range of shapes (as the arms and legs are positioned) and sizes (depending on distance from the car). To handle this, we detect the torso of the mannequin and ignore the positions of the arms and legs. The same morphological processing described for our object detection is used; the proportions of the height and width are then found by fitting a bounding box to the torso to confirm it is a pedestrian.

F. Motor Control

The car uses DC motors, and each motor has a rotary encoder which provides a feedback signal. The motor driver consists of an H-bridge, with the speed of the motor controlled by a digital pulse width modulation (PWM) waveform. The encoder provides a series of pulses, from which the angle of rotation and motor speed are derived. To give stable operation at a desired angular velocity, a feedback control loop is used with a PID (proportional, integral, and derivative) controller to output the motor PWM value based on the error between the actual motor angular velocity and the desired value. By adjusting the target values of the two motors, one side of the car travels faster than the other, effectively steering the vehicle.

Encapsulating the motion control in this way enables the main control architecture to send high-level commands to the motor controller to realise the required autonomous motion.

IV. CONCLUSION

At this stage, the overall control architecture for the autonomous vehicle has been defined, and initial image processing algorithms and control architectures for lane detection, traffic light detection, obstacle detection and pedestrian detection have been developed. We are currently converting these control architectures to finite state machines, and the image processing algorithms into stream-based image processing on the FPGA, all implemented in VHDL. This will enable real-time autonomous control of our car, capable of participating in the FPT'19 design challenge.

To conclude, the car used to test our algorithms has been specified, along with its camera. How the car will be controlled has been specified, and a decision architecture has been supplied in support of this. An overview has been provided of the following image processing algorithms: lane detection, object detection, traffic light detection and pedestrian detection. The project is at the point where algorithm development is finishing and the algorithms can be implemented in VHDL on the FPGA for testing. Once testing begins, we can determine whether our algorithms are successful in creating a fully autonomous car.

REFERENCES

[1] "FPT2019 FPGA Design Competition," http://fpt19.tju.edu.cn/Contest/FPT2019 FPGA Design Competition/Contents and Conditions.htm
[2] K. He, J. Sun, and X. Tang, "Guided image filtering," European Conference on Computer Vision, pp. 1-14, 2010.
[3] P. Soille and H. Talbot, "Directional morphological filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1313-1329, 2001.