
Implementation of a Self-Driving Vehicle…

Abstract- This project presents the construction of a
self-driving vehicle model that uses a Raspberry Pi for
autonomous control. The vehicle is based on an RC toy
car, which was modified to mount the sensors and the
Raspberry Pi. A camera on top and ultrasonic sensors
gather information about the vehicle's surroundings;
this information is processed into steering and throttle
commands that guide the vehicle along a predetermined path.
I. INTRODUCTION

The following document details the execution of a project that consists of programming a toy car to drive along a predetermined path autonomously and to avoid certain obstacles located on that path. The project simulates the functioning of the autonomous cars currently under development. Thanks to this technology, automobiles will no longer need drivers, since they will rely on sensors and artificial vision that help the car recognize its surroundings, in addition to localization and path planning. These features have the potential to make trips safer and quicker. The following sections explain the hardware and software of the project and present the performance tests and results of the toy car. Finally, the document addresses the conclusions about the development of this technology that we reached while executing the project.

II. HARDWARE DESCRIPTION

For convenience, this vehicle uses an RC toy car as a base, which provides the chassis and the steering and throttle systems. In addition, the vehicle requires a camera and three ultrasonic sensors to obtain data about its surroundings. A Raspberry Pi was chosen for the processing and control tasks.

A. Chassis
As specified before, the chassis we decided to use comes from an RC car. It needs to be able to support all the other elements required to build the self-driving vehicle. Fig. 1 shows the chassis used for this specific project.

Fig. 1 Chassis of the RC car used for the project

B. Steering System
The RC car we chose features front-wheel steering with only three positions: left, right, and straight. A DC motor and a gearbox determine the direction of the wheels.

Fig. 2 Steering System of the RC car

C. Throttle System
For the throttle system, the car uses a DC motor and a driver that allows us to control the speed and direction of rotation. The DC motor is the one that comes with the RC car, but a driver is required to control it. For this reason, we decided to use the TB6612FNG driver, which has a power supply voltage of 5 V and four function modes: CW, CCW, short brake, and stop [1].

Fig. 3 TB6612FNG motor driver
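The driver's four function modes are selected through its IN1, IN2, and PWM inputs. The sketch below encodes the levels as given in the driver's published truth table; the dict-lookup structure and the `drive` helper are illustrative, not the project's actual code.

```python
# Sketch of the TB6612FNG function modes via its IN1/IN2/PWM inputs.
# The (IN1, IN2, PWM) levels follow the driver's truth table; driving
# them through a plain dict lookup is illustrative only.

MODES = {
    "CW":          (1, 0, "pwm"),  # clockwise; speed set by PWM duty cycle
    "CCW":         (0, 1, "pwm"),  # counter-clockwise
    "short_brake": (1, 1, 0),      # both inputs high: motor terminals shorted
    "stop":        (0, 0, 1),      # both inputs low, PWM high: output floats
}

def drive(mode, duty=0):
    """Return the (IN1, IN2, PWM duty) levels for a requested mode."""
    in1, in2, pwm = MODES[mode]
    return (in1, in2, duty if pwm == "pwm" else pwm)
```

On real hardware these levels would be written to GPIO pins; here the function only returns them so the mode logic can be checked in isolation.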


D. Camera
The camera chosen for this application is the LifeCam HD 3000 from Microsoft. It provides images at 720p resolution and measures 109 mm in length and 44.5 mm in width [2].

Fig. 4 LifeCam HD 3000 camera

E. Ultrasonic Sensors
Ultrasonic sensors measure distance by sending out a signal; the time the signal takes to return is proportional to the distance. This application requires ultrasonic sensors to detect the distance of obstacles from the car. For this reason, we decided to use three sensors in the following arrangement: one on the front of the vehicle and one on each side. The sensors used are the HC-SR04, with a working voltage of 5 V and a measuring range from 2 cm to 400 cm [3].

Fig. 5 Ultrasonic sensor HC-SR04

F. Raspberry Pi
The Raspberry Pi is a small single-board computer. The model used for this self-driving vehicle is the Raspberry Pi 3B, which offers 4 USB ports, an HDMI port, 1 GB of RAM, and 40 GPIO pins. The Raspberry Pi allows us to process the images captured by the camera and the information from the ultrasonic sensors, and to apply algorithms that produce the signals that control the car [4].

Fig. 6 Raspberry Pi 3B

G. Power Supply
To power the Raspberry Pi we decided to use a power bank; to power the sensors and the DC motors we use eight 1.5 V batteries in series.

H. Assembly
Fig. 7 shows how the vehicle was assembled. The camera is attached to an acrylic bracket that keeps it on top of the vehicle. This location helps the camera capture images of the path right in front of the car, so that we can determine whether the car needs to turn or keep going straight. The ultrasonic sensors are mounted using acrylic holders. The Raspberry Pi is mounted on an acrylic platform, and the connections are made on a protoboard located on top of the chassis. Lastly, the batteries and the power bank are attached to the back of the chassis.

Fig. 7 Assembly of the self-driving vehicle

III. SOFTWARE DESCRIPTION

The Raspberry Pi was programmed in the Python programming language. Libraries such as OpenCV, NumPy, and Matplotlib were used to develop the code.
Initially, we considered using the Canny method for the line detection algorithm; however, this method carries a high computational cost.
Since the test track contained only black and white, we decided instead to use simple masking by intensity. In this way, the black lines could be separated from the white background.
After obtaining that mask, the Hough transform was applied to detect lines, which yields the starting and ending point of each line. This is important because these points carry a lot of information about the position and orientation of the lines on the track.
In this project, two main characteristics were used: distance and slope.
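The masking and feature-extraction steps above can be sketched as follows. This is a minimal NumPy-only sketch: the real pipeline ran OpenCV's Hough transform on the mask, and the threshold value 128 is an assumption, not the project's tuned value.

```python
import numpy as np

def line_mask(gray, thresh=128):
    """Intensity masking for a black-on-white track: 255 where the
    image is dark (a line pixel), 0 elsewhere. thresh is illustrative."""
    return np.where(gray < thresh, 255, 0).astype(np.uint8)

def line_features(x1, y1, x2, y2):
    """The two per-line characteristics used by the project, computed
    from a detected line's endpoints: slope and length (distance)."""
    slope = (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")
    length = float(np.hypot(x2 - x1, y2 - y1))
    return slope, length

# Synthetic 5x5 "image": white background with one dark diagonal line.
img = np.full((5, 5), 255, dtype=np.uint8)
for i in range(5):
    img[i, i] = 0          # dark pixel on the diagonal
mask = line_mask(img)      # only the diagonal survives the mask
```

In the actual pipeline, `mask` would be passed to the Hough transform (e.g. OpenCV's probabilistic variant), whose output endpoints feed `line_features`.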
Lane recognition was done as follows: since the center lines are segmented, the longest detected line will belong to one of the extreme (boundary) lines. Consequently, the line found at the maximum distance is the one that determines the lane in which the vehicle is located.
Movement control was performed using the slope of that longest line. When the line has a positive slope greater than a threshold delta, the vehicle should turn right; when the slope is more negative than -delta, it should turn left. This ensures that, while the detected lines stay within an acceptable range of slope values (-d < m < +d), the vehicle only moves forward.

IV. TESTING AND RESULTS

The first step was the assembly of the self-driving vehicle, with all the components mentioned in Section II, Hardware Description, to ensure its correct operation.

Once the vehicle was assembled, we proceeded to test the traction and direction by executing a program that controls the rotation of the motors with PWM, accelerating, braking, or turning right or left according to a number sent to the Raspberry Pi from the keyboard:

1. Forward
2. Back
3. Right
4. Left
5. Stop

Then, the most appropriate edge detection algorithm for identifying the lanes had to be chosen; for this, the Canny algorithm and the Hough transform were tested. The latter gave the best results, owing to its rapid response and because it displayed the discontinuous and continuous lane lines in a more natural way (in blue).

The next step was to test the ultrasonic sensors, since the obstacles that arise while the vehicle is driving are identified by these sensors. We wanted to determine the optimal and necessary detection distance, considering the geometry of the cart, and it turned out to be 30 cm.
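The slope-thresholded steering logic, combined with the HC-SR04 time-of-flight principle and the 30 cm obstacle threshold found during testing, can be sketched as follows. The function names and the delta value are illustrative assumptions, not the project's actual code.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_cm(echo_time_s):
    """HC-SR04 principle: echo round-trip time is proportional to
    distance, so halve it before converting meters to centimeters."""
    return echo_time_s * SPEED_OF_SOUND / 2 * 100

def decide(slope, front_cm, delta=0.5, stop_cm=30.0):
    """Steering decision from the slope of the longest detected line,
    with the 30 cm obstacle threshold. delta=0.5 is an assumption."""
    if front_cm < stop_cm:
        return "stop"       # obstacle inside the tested safe distance
    if slope > delta:
        return "right"      # positive slope beyond delta: turn right
    if slope < -delta:
        return "left"       # negative slope beyond -delta: turn left
    return "forward"        # slope within (-delta, +delta): keep straight
```

For example, an echo time of 1.75 ms corresponds to roughly 30 cm, i.e. exactly at the threshold where the vehicle would stop.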
Finally, after verifying the operation of each of the systems that make up our vehicle independently, it was time to unite them. To do this, a class called Car was created, which had as attributes the directions forward, back, left, right, and stop, in addition to the function "sense" to detect obstacles. We tested this logic several times in order to determine the best parameters for the direction and traction, since they are controlled with PWM, resulting in the following:

          Traction   Steering
Forward     100%        0%
Right        60%      100%
Left         50%      100%
Back        100%        0%

The PWM percentage for the right is greater than for the left because the geometry of the vehicle requires it in order to turn.

V. CONCLUSIONS

- The autonomous driving of a vehicle can be achieved by coupling a camera and a controller that, by detecting and analyzing lines, determines which lane the vehicle is in and the appropriate decision for that position, which may be to turn or to move on.

- The rigidity of the tires must be taken into account, since, if the vehicle does not have strong traction, its turning will depend on the surfaces it passes over.

- The effectiveness of the vehicle's location control within the lane is primarily due to the location of the camera, the vehicle speed, and the processing speed of the controller.

REFERENCES

[1] Pololu, "Pololu Robotics & Electronics," 9 July 2013. [Online]. Available: https://www.pololu.com/file/0J86/TB6612FNG.pdf.

[2] Microsoft, "LifeCam HD," [Online]. Available: https://www.microsoft.com/accessories/es-es/products/webcams/lifecam-hd-3000/t3h-00002#specsColumns-testCarousel. [Accessed 3 December 2018].

[3] Mouser, "HC-SR04," [Online]. Available: https://www.mouser.com/ds/2/813/HCSR04-1022824.pdf. [Accessed 3 December 2018].

[4] RaspberryPi, "RaspberryPi 3," [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/. [Accessed 3 December 2018].