
PROJECT REPORT

ON
Autonomous Car System using IOT

In partial fulfillment of the requirements for the award of the degree of Bachelor of
Technology in Electronics & Communication Engineering, Maulana Abul Kalam Azad
University of Technology.

Submitted by

Rahul Biswas (Roll. No. - 11900317019)


Pratik Goutam (Roll. No. - 11900317021)
Gargi Karmakar (Roll. No. - 11900317034)

Under the guidance of

Debajyoti Misra
HOD, Dept. of ECE, SIT

DEPARTMENT OF
ELECTRONICS & COMMUNICATION ENGINEERING
SILIGURI INSTITUTE OF TECHNOLOGY
PO: SUKNA, SILIGURI, PIN: 734 009,
WEST BENGAL, 2020-2021

SILIGURI INSTITUTE OF TECHNOLOGY
PO: SUKNA, SILIGURI, PIN: 734009, WEST BENGAL
2020-2021
DEPARTMENT OF ELECTRONICS &
COMMUNICATION ENGINEERING

CERTIFICATE

Certified that the project work entitled “Autonomous Car System using IOT” is a bonafide
work carried out by

Rahul Biswas (Roll. No. - 11900317019)
Pratik Goutam (Roll. No. - 11900317021)
Gargi Karmakar (Roll. No. - 11900317034)

in partial fulfillment for the award of the degree of BACHELOR OF TECHNOLOGY in
ELECTRONICS & COMMUNICATION ENGINEERING of the MAULANA ABUL
KALAM AZAD UNIVERSITY OF TECHNOLOGY, KOLKATA during the year 2020-2021.
It is certified that all corrections/suggestions indicated for Internal Assessment have been
incorporated in the report deposited in the Department. The project report has been approved
as it satisfies the academic requirements in respect of the Project Work prescribed for the
Bachelor of Technology degree.

-------------------------------                         ----------------------------
Mr. Debajyoti Mishra                                     Mr. Debajyoti Mishra
HOD & Project Guide                                      HOD, ECE Department, SIT

ACKNOWLEDGEMENT

The outcome of this project required a lot of guidance and assistance from many people, and
we are extremely privileged to have received it throughout the work on our project. All that
we have accomplished was possible only because of such supervision and assistance, and we
will not forget to thank them.

We would like to sincerely thank our project guide, Mr. Debajyoti Misra, H.O.D., Department
of Electronics and Communication Engineering, Siliguri Institute of Technology, whose
guidance and supervision enabled us to explore different techniques and apply innovative
ideas. We are thankful to him for the time and valuable advice he has given us, and for
providing the necessary facilities in the department.

We would like to thank all the faculty of the Department of Electronics and Communication
Engineering for their encouragement, support and appreciation.

We are thankful to our parents for their motivation and support, and we offer our thanks to all
our friends who helped us whenever we needed it and have been a source of comfort.

Last but not least, we express our hearty thanks to God for his blessings, and to all those who
supported us directly or indirectly in completing this work.

ABSTRACT

An autonomous vehicle system with lane detection using a Raspberry Pi is presented in this
paper. The autonomous car, or driverless car, can in simple language be called a robotic car.
It is capable of sensing the environment, navigating, and fulfilling human transportation
tasks without any human input, a big step in advancing future technology. This paper
presents the idea of developing an automated car which can be driven from anywhere over
the Internet through a secured server. The car also has limited automation features, namely
traffic light detection, an obstacle avoidance system and a lane detection system, so that it
can drive itself safely in case of connectivity failure. The main goal is to minimize the risk to
human life and ensure the highest safety during driving, while the car assures comfort and
convenience to the controller. A miniature car including the above features has been
developed and showed optimum performance in a simulated environment. The system
mainly consists of a Raspberry Pi, an Arduino, a Picamera, a sonar module, a web interface
and an Internet modem. The Raspberry Pi is mainly used for the Computer Vision algorithms
and for streaming video over the Internet. The proposed system is very cheap and very
efficient in terms of automation.

CONTENTS

CHAPTER NO.   TITLE

1    Introduction
2    System Description
     2.1 Functional Block Diagram
     2.2 Hardware Requirements
     2.3 Software Requirements
     2.4 Hardware Description
     2.5 Software Description
3    Proposed Architecture
     3.1 Different Layers of Architecture
4    Operations
     4.1 Streaming Video and Remote Access
     4.2 Obstacle Avoidance
     4.3 Traffic Light Detection
     4.4 Lane Detection
5    Results
6    Merits and Demerits
     6.1 Merits
     6.2 Demerits
7    Source Code
     7.1 For Lane Detection
     7.2 For Controlling Motors
     7.3 Initialize Ultrasonic Sensor
     7.4 Initialize Pi Cam
8    Appendix - Components Required
9    Conclusion
10   References

LIST OF FIGURES

Figure No.   Title

1.1    Block Diagram of IoT
1.2    Basic IoT Architecture
2.1    Proposed Prototype of Car
2.2    Block Diagram of Project
2.3    Arduino Uno
2.4    Raspberry Pi 4 Model B
2.5    L298N Motor Driver
2.6    Pi Camera Module V2
2.7    Ultrasonic Sensor
2.8    OpenCV Logo
2.9    Python Logo
2.10   Arduino IDE Logo
2.11   VS Code Logo
3.1    Unassembled Chassis Kit
3.2    Assembled Chassis Kit
3.3    L298N Motor Driver Connection
3.4    Raspberry Pi Pin Diagram
3.5    Ultrasonic Sensor on Breadboard
3.6    Proposed Architecture
3.7    CNN Architecture
4.1    Timing Diagram of Ultrasonic Sensor
4.2    Traffic Light Detection Process
4.3    Lane Detection Process
4.4    Lane Detection Example
5.1    Original Image
5.2    Canny Image
5.3    Cropped Canny Image
5.4    Line Image
5.5    Final Resulting Image

CHAPTER 1

INTRODUCTION

The Internet of Things (IoT) refers to a network of billions of devices such as electronics,
sensors, gateways, actuators, and platform hubs. These tangible devices connect and interact
with each other over a wireless network. Connected objects (or "things") share data with each
other and operate without any human intervention.
The IoT has evolved through the convergence of multiple technologies: real-time analytics,
machine learning, ubiquitous computing, commodity sensors, and embedded systems.
Traditional fields of embedded systems, wireless sensor networks, control systems, and
automation (including home and building automation) all contribute to enabling the Internet
of Things. In the consumer market, IoT technology is most synonymous with products
pertaining to the concept of the "smart home", including devices and appliances (such as
lighting fixtures, thermostats, home security systems and cameras, and other home
appliances) that support one or more common ecosystems and can be controlled via devices
associated with that ecosystem, such as smartphones and smart speakers. The IoT can also be
used in healthcare systems.

Fig 1.1 : Block diagram of IoT

With ever-growing technological advancement, human civilization is looking for automation
in every sphere of life. Automated cars are one of the latest trends and have been widely
embraced by people all around the world, as they want maximum security and comfort during
driving. Nowadays, road accidents are one of the prime concerns: they have become frequent
and unpredictable, and most occur because traffic rules are not followed. Often, drivers
become drowsy or distracted while driving and eventually hit objects ahead of them. If the
driving process can be handled with the aid of Computer Vision and efficient sensors, the risk
of human mistakes can be greatly reduced. Besides, it sometimes becomes necessary to
access the car from a remote location in order to reduce hassle. In that case, it would be far
more convenient if the car could be viewed from a remote computer and driven by interacting
through the computer keyboard, as easily as playing a computer game. Our work is based on
Internet of Things technology and Computer Vision to remotely control the vehicle and to
provide automation features.
Various lane detection techniques have been studied. Lane detection using OpenCV based on
the Receiver Operating Characteristic curve and the Detection Error Trade-off curve, and
techniques using perspective images, have already been worked on. In this paper, lane
detection is done using the Canny edge algorithm and the Hough line transform, which
showed a good rate of success under the working conditions. Many related works involve
remotely controlling an autonomous car over Bluetooth with an Android phone or iPhone.
Several published papers on autonomous cars and obstacle avoidance systems lack either
versatile control over the Internet or live video streaming. Concepts from papers on home
surveillance systems, automatic toll collection, and obstacle avoidance systems are combined
here to further develop the idea. A sample car was designed for the purpose of testing in a
created environment.

Fig 1.2: Basic IoT Architecture

CHAPTER 2

SYSTEM DESCRIPTION

The overall system can be divided into different categories.

Fig 2.1: Proposed prototype of the automated self-driving car

Firstly, the car can be remotely controlled through the Internet using a web browser. In case
of connectivity failure it can act autonomously in good weather conditions. The proposal
combines complex Computer Vision algorithms with video transmission over the Internet.
A Raspberry Pi and an Arduino are the main devices used to implement the prototype. The
Raspberry Pi streams the video to the Internet, and a user can access the stream using a web
browser. Working simultaneously on video streaming and Computer Vision takes a lot of
processing power. The Raspberry Pi 2 Model B is a single-board computer with a powerful
processing unit, a serial interface, and a camera serial interface (CSI). The Raspberry Pi
camera module can be used to take high-definition video. It can be accessed through the V4L
(Video for Linux) APIs, and there are numerous third-party libraries built for it, including the
Picamera Python library, which is beneficial for live streaming. Apache, a popular web server
application, was installed on the Raspberry Pi to allow it to serve web pages. Apache can
serve HTML files over HTTP, and with additional modules can serve dynamic web pages
using scripting languages such as Python. A web page was hosted to show the video streamed
from the Picamera. To access the web page, one only needs to know the IP address of the
Raspberry Pi and a username and password to log in. From the web page the car can be fully
driven.

In case of connectivity failure the car needs to work on its own: it must keep itself safe from
collisions and abide by the traffic rules. The Arduino controls the motor driver circuit. It is
connected with a sonar module, an ultrasonic sensor that evaluates the attributes of a target
by interpreting the echoes of sound waves. It is used to measure the distance of obstacles
from the car; if an obstacle is detected, the Arduino stops the motors. Meanwhile, the
Raspberry Pi uses Computer Vision algorithms to detect the lane and traffic light signals.
Open Source Computer Vision (OpenCV) for Python is a library of programming functions
mainly aimed at real-time computer vision. It has over 2500 optimized algorithms for image
processing, detection, object identification, classification of actions, tracking and other
functions. The Raspberry Pi is interfaced with the Arduino over serial communication and
directs the Arduino to run the car accordingly.
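
The serial link itself can be as simple as writing single-character commands from the Pi to
the Arduino. The sketch below is an illustration only, assuming the pyserial library, a typical
Uno port name, and a hypothetical one-byte command protocol; it is not the project's own
code.

import serial  # pyserial

# "/dev/ttyACM0" is a typical port name for an Uno on Raspberry Pi OS;
# the actual port used is an assumption here.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_command(cmd):
    # The Arduino sketch would map each byte to a motor action,
    # e.g. b"F" -> forward, b"S" -> stop (a hypothetical protocol).
    arduino.write(cmd)

send_command(b"F")  # drive forward
send_command(b"S")  # stop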

2.1 FUNCTIONAL BLOCK DIAGRAM:

Given below is the block diagram of the overall connections between the different
components used in making the autonomous self-driving car.

Fig 2.2: Block Diagram of our project

2.2 HARDWARE REQUIREMENTS:

● Pi Camera: Captures images of the surrounding environment, first to build the dataset on
which the CNN is trained, and then, in the actual implementation, to guide the car.
● Raspberry Pi: Interfaced with the Pi Camera to provide the images (video) the car sees.
The CNN is coded here, and its output for the direction and movement of the car is sent
as input to the controller.
● Arduino Microcontroller: Takes the output of the CNN as input and is connected to the
DC motors and sensors for obstacle detection and the actual moving and stopping of the
car.
● Ultrasonic Sensor: Integrated with the controller to detect obstacles on the course from
their echoes.
● DC Motor (4): Converts the electrical energy supplied through the Arduino
Microcontroller into mechanical energy, which moves the wheels.
● 1 L298N Motor Driver: The L298N is a dual H-bridge motor driver which allows speed
and direction control of two DC motors at the same time.
● 1 Chassis: The load-bearing framework of our automated car, which structurally supports
all the components used in our project.
● 4 Wheels: Used for the movement of our vehicle.
● Breadboard: The board on which our circuit is mounted.
● Jumper Wires: Used for connections between the different components as required.
● 1 12V DC Power Source: Powers our self-driving car and its various components.

2.3 SOFTWARE REQUIREMENTS:

● Arduino IDE: The platform where the programs for the Arduino board are written, which
drive the physical movement of the car.
● OpenCV: Takes the image from the Pi Camera, converts it to grayscale, resizes it, and
passes it to the neural network.
● Raspberry Pi Camera Interface: Captures the live feed at a high frame rate as input to the
CNN on the Raspberry Pi.
● Python 3.9: Used for the development and execution of the code required in our project.
● Visual Studio Code: A source-code editor made by Microsoft for Windows, Linux and
macOS. Features include support for debugging, syntax highlighting, intelligent code
completion, snippets, code refactoring, and embedded Git.

2.4 HARDWARE DESCRIPTION:

2.4.1. Arduino Microcontroller:

Fig 2.3: Arduino Uno

Arduino is an open-source hardware and software company, project and user community that
designs and manufactures single-board microcontrollers and microcontroller kits for building
digital devices. Its hardware products are licensed under a CC-BY-SA license, while software
is licensed under the GNU Lesser General Public License (LGPL) or the GNU General
Public License (GPL), permitting the manufacture of Arduino boards and software
distribution by anyone. Arduino boards are available commercially from the official website
or through authorized distributors.

Arduino board designs use a variety of microprocessors and controllers. The boards are
equipped with sets of digital and analog input/output (I/O) pins that may be interfaced to
various expansion boards ('shields') or breadboards (for prototyping) and other circuits. The
boards feature serial communications interfaces, including Universal Serial Bus (USB) on
some models, which are also used for loading programs. The microcontrollers can be
programmed using the C and C++ programming languages, using a standard API which is
also known as the "Arduino language". In addition to using traditional compiler toolchains,
the Arduino project provides an integrated development environment (IDE) and a command
line tool (arduino-cli) developed in Go.

2.4.2 Raspberry Pi:

Fig 2.4: Raspberry Pi 4 Model B

Raspberry Pi is a series of small single-board computers (SBCs) developed in the United
Kingdom by the Raspberry Pi Foundation in association with Broadcom. The Raspberry Pi
project originally leaned towards the promotion of teaching basic computer science in schools
and in developing countries. The original model became more popular than anticipated,
selling outside its target market for uses such as robotics. It is widely used in many areas,
such as weather monitoring, because of its low cost, modularity, and open design. It is
typically used by computer and electronics hobbyists, due to its adoption of HDMI and USB
devices.
After the release of the second board type, the Raspberry Pi Foundation set up a new entity,
named Raspberry Pi Trading, and installed Eben Upton as CEO, with the responsibility of
developing technology. The Foundation was rededicated as an educational charity for
promoting the teaching of basic computer science in schools and developing countries.

2.4.3 L298N Motor Driver:

Fig 2.5: L298N Motor Driver

This L298N Motor Driver Module is a high power motor driver module for driving DC and
Stepper Motors. This module consists of an L298 motor driver IC and a 78M05 5V regulator.
L298N Module can control up to 4 DC motors, or 2 DC motors with directional and speed
control.
L298 Module Features & Specifications:
● Driver Model: L298N 2A
● Driver Chip: Double H-Bridge L298N
● Motor Supply Voltage (Maximum): 46V
● Motor Supply Current (Maximum): 2A
● Logic Voltage: 5V
● Driver Voltage: 5-35V
● Driver Current: 2A
● Logic Current: 0-36mA
● Maximum Power: 25W
● Current sense for each motor
● Heatsink for better performance
● Power-on LED indicator
2.4.4 Pi Camera:

Fig 2.6: Pi Camera Module V2

The Raspberry Pi Camera Module v2 replaced the original Camera Module in April 2016.
The v2 Camera Module has an 8-megapixel Sony IMX219 sensor (compared to the
5-megapixel OmniVision OV5647 sensor of the original camera).

The Camera Module can be used to take high-definition video as well as still photographs. It
is easy for beginners to use, but has plenty to offer advanced users looking to expand their
knowledge. There are many examples online of people using it for time-lapse, slow-motion,
and other video work. The libraries bundled with the camera can also be used to create
effects.

2.4.5 Ultrasonic Sensor:

Fig 2.7 Ultrasonic Sensor(HC-SR 04)

An ultrasonic sensor is an electronic device that measures the distance to a target object by
emitting ultrasonic sound waves and converting the reflected sound into an electrical signal.
Ultrasonic waves have frequencies above the range of audible sound (i.e. the sound that
humans can hear). Ultrasonic sensors have two main components: the transmitter (which
emits the sound using piezoelectric crystals) and the receiver (which receives the sound after
it has travelled to and from the target).

To calculate the distance between the sensor and the object, the sensor measures the time
between the emission of the sound by the transmitter and its arrival at the receiver. The
formula for this calculation is D = ½ × T × C, where D is the distance, T is the time, and C is
the speed of sound (~343 m/s). For example, if the sensor were aimed at a box and it took
0.025 seconds for the sound to bounce back, the distance between the ultrasonic sensor and
the box would be:

D = 0.5 × 0.025 × 343

or about 4.2875 meters.
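
The same calculation can be written as a small Python helper (an illustration only; the
project's actual sensor routine appears in Section 7.3):

def echo_distance(t_seconds, c=343.0):
    # D = 1/2 x T x C: halve the round-trip time, multiply by the speed of sound.
    return 0.5 * t_seconds * c

print(echo_distance(0.025))  # prints 4.2875, matching the example above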

2.5 SOFTWARE DESCRIPTION:

2.5.1 OpenCV:

Fig 2.8: OpenCV logo

OpenCV (Open Source Computer Vision Library) is a library of programming functions
mainly aimed at real-time computer vision. Originally developed by Intel, it was later
supported by Willow Garage and then Itseez (which was itself later acquired by Intel). The
library is cross-platform and free for use under the open-source Apache 2 License. Since
2011, OpenCV has featured GPU acceleration for real-time operations.

2.5.2 Python 3.9:

Fig 2.9: Python Logo

Python is an interpreted, high-level, general-purpose programming language. Python's design
philosophy emphasizes code readability, notably through its use of significant indentation. Its
language constructs and object-oriented approach aim to help programmers write clear,
logical code for small and large-scale projects.
Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly procedural), object-oriented and functional
programming. Python is often described as a "batteries included" language due to its
comprehensive standard library.

2.5.3 Arduino IDE:

Fig 2.10: Arduino IDE Logo

The Arduino Integrated Development Environment, or Arduino Software (IDE), contains a
text editor for writing code, a message area, a text console, a toolbar with buttons for
common functions and a series of menus. It connects to the Arduino hardware to upload
programs and communicate with them.

2.5.4 Visual Studio Code:

Fig 2.11: Visual Studio Code Logo

Visual Studio Code is a source-code editor made by Microsoft for Windows, Linux and
macOS. Features include support for debugging, syntax highlighting, intelligent code
completion, snippets, code refactoring, and embedded Git. Users can change the theme,
keyboard shortcuts, preferences, and install extensions that add additional functionality.

CHAPTER 3

PROPOSED ARCHITECTURE

For building the structure of our automated car, the following steps are to be followed:
1. Build the Chassis:
Assemble the chassis kit as shown, with the help of screws and a screwdriver.

Fig 3.1: Unassembled Chassis Kit

Fig 3.2: Assembled Chassis Kit

2. Connect the Motors to the L298N Motor Driver:
Attach the 4 motors to the motor driver as shown in the circuit below.

Fig 3.3 : Connection of 4 wires to the L298N Motor Driver

3. Connect the Raspberry Pi to the L298N Motor Driver:
Before adding the Raspberry Pi we need to prepare it, as follows:
3.1 Preparing the Pi:
1. Download the Raspbian Jessie Operating System (OS).
2. Format the SD card and install Raspbian on it.
3. Insert the wireless (Wi-Fi) adapter into one of the Pi's USB slots.
3.2 Connecting to the L298N Motor Driver:
Once complete, position it at the front of the Pi-Car. Next, use jumper wires to connect
the Pi's GPIO pins to the motor driver.
Connect the pins as listed below:
GPIO Pin 2 to 5V (use a Female-to-Male jumper wire)
GPIO Pin 6 to GND (use a Female-to-Male jumper wire)
GPIO Pin 7 to IN4 (ENA)
GPIO Pin 11 to IN3 (5V)
GPIO Pin 13 to IN2 (5V)
GPIO Pin 15 to IN1 (ENB)
To identify the pins of the Pi, see the picture below.

Fig 3.4: Raspberry Pi Model Pin Diagram

4. Add the HC-SR04 Ultrasonic Distance Measuring Sensor Module:
Attach the sensor module and wires to a breadboard as shown in the picture below.
Ensure the series resistor is included, as it protects the Pi's GPIO pin from the sensor's
5V echo signal.

Fig 3.5: Ultrasonic sensor placed on a breadboard

Once completed, position the breadboard at the front of the car, on the bottom layer.
4.1 Wiring the sensor to the Pi:
Connect the wires on the breadboard to the Pi and motor driver as follows:
Echo cable (yellow in the picture above) to Pin 16 on the Pi (using a Female-to-Male
jumper wire)
Trigger cable (orange in the picture above) to Pin 12 on the Pi (using a Female-to-Male
jumper wire)
Ground cable (black in the picture above) to GND on the motor driver (using a
Female-to-Male jumper wire)
Power cable (red in the picture above) to 5V on the motor driver (using a
Female-to-Male jumper wire)

5. Add the camera to the Pi:
In the model, the Pi Camera is mounted on top of the car and takes input images at a
high frame rate to feed in live environment data. These images are processed in
grayscale to reduce the dimensionality of the matrix required for RGB images. Using a
Convolutional Neural Network for image classification, the convolution layers find
patterns in the images which characterize the various features of the road. The images
are classified into Left, Right, Forward and Reverse for the movement of the car. These
images are first used to train the system under various circumstances; the training makes
the system capable of predicting or classifying the actions to be taken while driving. The
system uses a combination of sensors to detect objects on the road and their speed.
After a decision is made, the output is given as input to the Arduino Microcontroller,
which controls the DC motors, directing them or controlling their speed. The software
used in this project comprises the Arduino IDE, for writing the code for the Arduino
board; OpenCV, which crops the relevant section of the video from the Raspberry Pi
camera interface, converts it to grayscale, resizes it, and passes it to the Convolutional
Neural Network; the Spyder environment; and the Raspberry Pi camera interface, which
remotely captures the live feed using just the IP address of the Raspberry Pi.

Fig 3.6: Proposed Architecture of our system

3.1 DIFFERENT LAYERS OF THE CNN ARCHITECTURE

Fig 3.7: CNN Architecture

1. Input Layer: The grayscale image from the Pi Camera is the input to the CNN, with
width 28, height 28 and depth 1.
2. Convolution Layer: Filters are applied to the image; it is convolved and feature maps
are extracted.
3. ReLU Layer: This activation layer applies the element-wise function R(z) = max(0, z)
to the output of the convolution layer. This introduces non-linearity into the network,
which lets it handle complexity, and produces rectified feature maps as output.
4. Pooling Layer: This layer is periodically inserted in the convnet to reduce the
dimensionality of the rectified feature maps. Using max pooling with 2 x 2 filters and
stride 2, the resulting volume has dimension 14 x 14 x 12.
5. Fully-Connected Layer: This output layer classifies the image. A minimal sketch of
this network is given below.
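
As a rough illustration, the sketch below builds the same layer sequence in Keras. The
framework, the 3 x 3 filter size and the training settings are assumptions; the report only
specifies the 28 x 28 x 1 input, 12 feature maps, 2 x 2 max pooling with stride 2, and a
fully-connected output over the four driving classes.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                                # grayscale input
    layers.Conv2D(12, (3, 3), padding="same", activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),              # 28x28x12 -> 14x14x12
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),  # Left / Right / Forward / Reverse
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()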

CHAPTER 4

OPERATIONS

4.1 STREAMING VIDEO AND REMOTE ACCESS

The Raspberry Pi runs the Linux operating system and can host web pages through the
Apache server, responding to requests for pages that can be simple HTML or sophisticated
web-based apps. To make the Pi capable of hosting websites, Apache was installed on it.
Apache is a free, open-source HTTP (Hypertext Transfer Protocol) web server application. A
website was built for hosting the streamed video and for controlling the car remotely.

MJPG-streamer was used to stream video from the Raspberry Pi; the easiest way to install it
is using Subversion. Linux provides a facility, the daemon, which runs selected programs
automatically during system boot-up. The scripts for MJPG-streamer, the traffic light detector
and the lane detector are all run as daemons, so whenever the Raspberry Pi is powered up it
automatically streams the video from its camera to its web server. Typing
http://(Raspberry Pi's IP address):(port number) into any web browser then shows the stream.

The web page hosting the video stream was developed using the Python Flask framework.
On the web page, a Python script handles keyboard input from the user. This keyboard input
is processed and sent through the Internet to the remote Raspberry Pi located inside the car.
The Pi in turn sends a signal to the Arduino through serial communication to control the
motors.
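
A minimal sketch of such a control page is given below, assuming Flask and an assumed
motor module wrapping the drive functions of Section 7.2; the route layout and the embedded
stream URL are illustrative, not the project's exact code.

from flask import Flask

import motor  # assumed wrapper around the drive functions of Section 7.2

app = Flask(__name__)

@app.route("/")
def index():
    # The real page also captures key presses in JavaScript and sends them
    # to /key/<key>; here only the embedded MJPG stream is shown.
    return '<img src="http://raspberrypi.local:8080/?action=stream">'

@app.route("/key/<key>")
def key(key):
    actions = {"w": motor.forward, "s": motor.reverse,
               "a": motor.turn_left, "d": motor.turn_right,
               "p": motor.stop}
    if key in actions:
        motor.init()          # re-arm the GPIO pins (Section 7.2 pattern)
        actions[key](0.06)    # run the action for 60 ms, as in Section 7.2
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)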

4.2 OBSTACLE AVOIDANCE

A sonar sensor (HC-SR04) is used for this purpose. It emits high-frequency (40 kHz) sound
and has two transducers, a transmitter and a receiver. The "transmit" transducer sends out a
short burst (8 cycles) of ultrasonic pulses. The sonar module timing diagram is shown in
Fig. 4.1.

Fig 4.1: Timing Diagram of Ultrasonic Sensor

The "receive" transducer in turn waits for an echo. If an object is within range, an echo
bounces back to the "receive" transducer. The distance of the object is calculated from the
following equation:

d = v × (t/2)     .....(1)

Here d is the distance to the object, t is the total time from transmission to reception, and v is
the velocity of sound, typically 340 m/s at room temperature.

The sonar is connected to the Arduino, which calculates the distance and controls the motor
rotation accordingly. The sonar is placed at the front of the vehicle, mounted on a servo
motor which can rotate up to 180 degrees. This way, if an obstacle comes ahead, the system
rotates the sonar and checks whether the road is clear around. If no obstacle is found, it turns
the car and picks an alternative way.
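
The sketch below illustrates this scan-and-avoid behaviour in Python, reusing the distance()
helper of Section 7.3 and an assumed motor module wrapping the drive functions of Section
7.2. The servo positioning call is hardware-specific and left as a placeholder; in the prototype
this logic runs on the Arduino.

import time

from sensor import distance   # ultrasonic helper from Section 7.3
import motor                   # assumed module with the drive functions of Section 7.2

SAFE_CM = 15                   # stop threshold, as in Section 7.2

def read_at(angle):
    # Placeholder: point the servo-mounted sonar at `angle` degrees, then read.
    # The servo call itself is hypothetical; this stub always reads straight ahead.
    return distance("cm")

while True:
    motor.init()
    if read_at(90) < SAFE_CM:                     # obstacle ahead
        motor.stop(0.1)
        left, right = read_at(45), read_at(135)   # sweep the sonar around
        motor.init()                              # stop() cleaned up the pins
        if left > right:
            motor.turn_left(0.5)                  # take the clearer side
        else:
            motor.turn_right(0.5)
    else:
        motor.forward(0.06)
    time.sleep(0.05)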
4.3 TRAFFIC LIGHT DETECTION

The traffic light detection procedure is summarized in Fig. 4.2.

Fig 4.2: Traffic Light Detection Process

1) Preprocessing: The image frames captured from the video were converted into grayscale
images.
2) Haar Feature-Based Cascade Classifier: A Haar feature-based cascade classifier was
chosen for traffic light detection. This technique is highly popular for its success in face
detection. It has two parts, training and detection, and both can be done with OpenCV. To
build a Haar cascade, a set of positive images and negative images must be generated; the
result improves as more sample images are used. However, owing to the processing power
limitation of the Raspberry Pi, only 1000 positive and 1000 negative samples were taken.
The positive samples were images of different traffic light signals at different angles. The
negative samples were collected from the related environment with no traffic signals present.
Using the opencv_createsamples command, many additional positive samples were generated
by randomly superimposing the positives on the negatives, and a vector file was created
merging all the positive samples. The training was done using the opencv_traincascade
command. The resulting cascade classifier is used to detect the traffic light post, i.e. the
region of interest.
3) Gaussian Filter: A Gaussian blur filter was used to reduce image noise.
4) Color Detection: The BGR image was converted to HSV, because colors are much easier
to represent in HSV. Threshold values for green and red were then selected individually for
the image, and finally the green or red part was extracted.
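
A condensed sketch of this pipeline is given below; the cascade filename is hypothetical, and
the HSV ranges are typical textbook values rather than the thresholds tuned for this project.

import cv2

cascade = cv2.CascadeClassifier("traffic_light.xml")  # trained via opencv_traincascade

def detect_signal(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # 1) preprocessing
    lights = cascade.detectMultiScale(gray, 1.1, 5)     # 2) cascade detection
    for (x, y, w, h) in lights:
        roi = frame[y:y + h, x:x + w]
        roi = cv2.GaussianBlur(roi, (5, 5), 0)          # 3) noise reduction
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)      # 4) colour detection
        red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
        green = cv2.inRange(hsv, (40, 100, 100), (90, 255, 255))
        return "red" if cv2.countNonZero(red) > cv2.countNonZero(green) else "green"
    return None                                          # no traffic light found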

4.4 LANE DETECTION

For lane detection the popular Canny edge detection and Hough line transform were used.
This algorithm is highly efficient for a road with clearly visible lane markers. Edge detection
algorithms detect the boundaries in an image. The Canny algorithm was selected for its very
low error rate, good localization and minimal response, that is, only one detector response
per edge. For good efficiency several steps need to be followed; Fig. 4.3 shows the process in
a nutshell.

Fig 4.3: Lane Detection Process
1) Gaussian Filter: Suitable masking was done to filter out noise from the original image
using a Gaussian filter.
2) Finding the Intensity Gradient: After smoothing, a Sobel kernel was applied in both the
horizontal and vertical directions to get the first derivative in each direction. The gradient is
the change of brightness across a series of pixels.
3) Removing Non-Edges: Pixels that were not part of an edge were removed, leaving an
image with thin edges.
4) Hysteresis: Canny uses two thresholds. A pixel gradient higher than the upper threshold is
accepted as an edge; one below the lower threshold is rejected. A pixel gradient between the
thresholds is accepted only if the pixel is connected to a pixel above the upper threshold.
5) Hough Line Transformation: After Canny edge detection, the Hough line transform is
applied. The Hough transform is very efficient at detecting any shape that can be expressed
mathematically, even if it is a little distorted. A Hough line is determined by two parameters:
⍴, the perpendicular distance from the origin to the line, and 𝜽, the angle between this
perpendicular and the horizontal axis, measured counter-clockwise. These form the
parametric line equation

⍴ = x cos 𝜽 + y sin 𝜽

OpenCV provides a function for this, cv2.HoughLines(). It takes the ⍴ and 𝜽 resolutions as
arguments, along with a threshold that sets the minimum number of votes needed to accept a
line.
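
For illustration, a short standalone use of cv2.HoughLines() on a Canny edge image follows;
the input filename and the resolution and threshold values are example choices. The full
pipeline in Section 7.1 uses the probabilistic variant, cv2.HoughLinesP, which returns line
segments directly.

import cv2
import numpy as np

edges = cv2.Canny(cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE), 50, 150)
# 1-pixel rho resolution, 1-degree theta resolution, 100-vote threshold.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)

if lines is not None:
    for rho, theta in lines[:, 0]:
        # Each detected line comes back as its (rho, theta) parameters.
        print("rho = %.1f px, theta = %.1f deg" % (rho, np.degrees(theta)))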

Fig 4.4 Lane Detection Example

CHAPTER 5

RESULTS

A miniature car including the above features has been developed, and it showed optimum
performance in a simulated environment. The sonar sensor is set up in front of the car; the
camera is fixed at the front and the processing units are set within.

Whenever an obstacle was placed in front of the car, it reduced its speed and stopped. For
better performance multiple sonars can be used, and for real-life applications a more efficient
and powerful sensor can certainly minimize the hassle.

The Python Flask framework was used to host the video sent from the Raspberry Pi. A web
page was developed to show the video stream and to provide a user interface for remote
control. The MJPG streamer streamed data flawlessly, apart from a delay of some
milliseconds.

The traffic light detection system was very accurate in the given environment, although much
more training data will be needed to make it work in a real-life environment. The car could
successfully detect the location of traffic signals and interpret them.

The lane detection algorithm worked flawlessly and detected its lane without much error; the
car successfully drove itself on a printed road. Fig. 5.1, Fig. 5.2, Fig. 5.3, Fig. 5.4 and
Fig. 5.5 show the results of the image processing for detecting lanes.

Fig 5.1 Original Image

Fig 5.2 Canny Image formed after Canny Edge Detection

Fig 5.3 Cropped Canny Image

Fig 5.4 Line Image formed after applying Hough Line Transform on Cropped Canny Image

Fig 5.5 Result after combining Line Image with Original Image

CHAPTER 6

MERITS AND DEMERITS

6.1 MERITS:

1. Decreased number of accidents:
Autonomous cars prevent human errors, since the system controls the vehicle. Unlike human
drivers, who are prone to interruptions, it leaves no opportunity for distraction. It also uses
complex algorithms to determine the correct stopping distance from one vehicle to another,
thereby dramatically lessening the chances of accidents.

2. Fewer traffic jams:
Driverless cars travelling in a group can participate in platooning, which allows the vehicles
to brake or accelerate simultaneously. Platoon systems enable automated highway systems,
which may significantly reduce congestion and improve traffic flow by increasing lane
capacity. Autonomous cars also communicate well with one another, helping to identify
traffic problems early on: they detect roadworks and detours instantly, and can pick up hand
signals from motorists and react accordingly.

3. Stress-free parking:
Autonomous cars can drop you off at your destination and head directly to a detected vacant
parking spot, eliminating the time and fuel wasted looking for one.

4. Time-saving vehicle:
As the system takes over control, the driver has spare time to continue working or to catch up
with loved ones, without fearing for road safety.

5. Accessibility of transportation:
Senior citizens and people with disabilities often have difficulty driving. Autonomous
vehicles give them safe and accessible transportation.

6.2 DEMERITS:

1. Expensive:
High-technology vehicles and equipment are expensive. A large amount of money goes into
research and development and into choosing the finest and most functional materials, such as
the software, modified vehicle parts, and sensors. The cost of owning an autonomous car is
therefore initially high, although it may come down within a decade, giving the average
earner a way to own one.

2. Safety and security concerns:
Even a successfully programmed system can suffer unexpected glitches. Technologies are
continuously updated, and equipment can be left with faulty code when an update does not
complete properly.

3. Prone to hacking:
Autonomous vehicles could be the next major target of hackers, as the vehicle continuously
tracks and monitors details about the owner. This may lead to the collection of personal data.

4. Fewer job opportunities for others:
As artificial intelligence takes over human roles and responsibilities, taxi and truck drivers, or
even co-pilots, may be laid off as their services are no longer needed. This could significantly
impact the employment rate and economic growth of a country.

5. Non-functional sensors:
Sensors often fail in drastic weather conditions; the system may not work during a blizzard or
heavy snowfall.

CHAPTER 7

SOURCE CODE

7.1 For Lane Detection:

import cv2
import numpy as np

def canny(img):
    # Release resources and exit cleanly when the video ends.
    if img is None:
        cap.release()
        cv2.destroyAllWindows()
        exit()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kernel = 5
    blur = cv2.GaussianBlur(gray, (kernel, kernel), 0)
    # Edge detection runs on the blurred image (the original passed the
    # un-blurred grayscale here, leaving the Gaussian filter unused).
    canny = cv2.Canny(blur, 50, 150)
    return canny

def region_of_interest(canny):
    # Keep only a triangular region ahead of the car.
    height = canny.shape[0]
    mask = np.zeros_like(canny)
    triangle = np.array([[
        (200, height),
        (800, 350),
        (1200, height),
    ]], np.int32)
    cv2.fillPoly(mask, triangle, 255)
    masked_image = cv2.bitwise_and(canny, mask)
    return masked_image

def houghLines(cropped_canny):
    # Probabilistic Hough transform: 2 px / 1 degree resolution, 100 votes.
    return cv2.HoughLinesP(cropped_canny, 2, np.pi / 180, 100,
                           np.array([]), minLineLength=40, maxLineGap=5)

def addWeighted(frame, line_image):
    # Overlay the detected lane lines on the original frame.
    return cv2.addWeighted(frame, 0.8, line_image, 1, 1)

def display_lines(img, lines):
    line_image = np.zeros_like(img)
    if lines is not None:
        for line in lines:
            for x1, y1, x2, y2 in line:
                cv2.line(line_image, (x1, y1), (x2, y2), (0, 0, 255), 10)
    return line_image

def make_points(image, line):
    # Convert a (slope, intercept) pair into two end points spanning
    # from the bottom of the frame up to 3/5 of its height.
    slope, intercept = line
    y1 = int(image.shape[0])
    y2 = int(y1 * 3.0 / 5)
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return [[x1, y1, x2, y2]]

def average_slope_intercept(image, lines):
    # Average all detected segments into one left and one right lane line.
    left_fit = []
    right_fit = []
    if lines is None:
        return None
    for line in lines:
        for x1, y1, x2, y2 in line:
            fit = np.polyfit((x1, x2), (y1, y2), 1)
            slope = fit[0]
            intercept = fit[1]
            if slope < 0:
                left_fit.append((slope, intercept))
            else:
                right_fit.append((slope, intercept))
    if not left_fit or not right_fit:
        # Avoid averaging an empty list when only one lane side was detected.
        return None
    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    left_line = make_points(image, left_fit_average)
    right_line = make_points(image, right_fit_average)
    averaged_lines = [left_line, right_line]
    return averaged_lines

cap = cv2.VideoCapture("test1.mp4")
while cap.isOpened():
    _, frame = cap.read()
    canny_image = canny(frame)
    cropped_canny = region_of_interest(canny_image)
    lines = houghLines(cropped_canny)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_lines(frame, averaged_lines)
    combo_image = addWeighted(frame, line_image)
    cv2.imshow("result", combo_image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

7.2 For controlling the motors (including input from sensors):

import RPi.GPIO as gpio
import time
import tkinter as tk          # Python 3 name (the original used Python 2's Tkinter)
from sensor import distance

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(7, gpio.OUT)
    gpio.setup(11, gpio.OUT)
    gpio.setup(13, gpio.OUT)
    gpio.setup(15, gpio.OUT)

def reverse(tf):
    gpio.output(7, False)
    gpio.output(11, True)
    gpio.output(13, False)
    gpio.output(15, True)
    time.sleep(tf)

def forward(tf):
    gpio.output(7, True)
    gpio.output(11, False)
    gpio.output(13, True)
    gpio.output(15, False)
    time.sleep(tf)

def turn_right(tf):
    gpio.output(7, True)
    gpio.output(11, False)
    gpio.output(13, False)
    gpio.output(15, True)
    time.sleep(tf)

def turn_left(tf):
    gpio.output(7, False)
    gpio.output(11, True)
    gpio.output(13, True)
    gpio.output(15, False)
    time.sleep(tf)

def stop(tf):
    gpio.output(7, False)
    gpio.output(11, False)
    gpio.output(13, False)
    gpio.output(15, False)
    time.sleep(tf)
    gpio.cleanup()

def key_input(event):
    # Re-initialise the pins on every key press, since stop() cleans them up.
    init()
    print("Key:", event.char)
    key_press = event.char
    sleep_time = 0.060

    if key_press.lower() == "w":
        forward(sleep_time)
    elif key_press.lower() == "s":
        reverse(sleep_time)
    elif key_press.lower() == "a":
        turn_left(sleep_time)
    elif key_press.lower() == "d":
        turn_right(sleep_time)
    elif key_press.lower() == "p":
        stop(sleep_time)
    else:
        pass

    # Back away if the ultrasonic sensor reports an obstacle closer than 15 cm.
    curDis = distance("cm")
    print("Distance:", curDis)
    if curDis < 15:
        init()
        reverse(0.5)

command = tk.Tk()                        # the original's tk.TK() is a typo
command.bind('<KeyPress>', key_input)    # Tkinter event names are case-sensitive
command.mainloop()

7.3 To initialize Ultrasonic Sensor:

import RPi.GPIO as gpio
import time

def distance(measure='cm'):
    gpio.setmode(gpio.BOARD)
    gpio.setup(12, gpio.OUT)   # trigger pin
    gpio.setup(16, gpio.IN)    # echo pin

    # Send a 10-microsecond trigger pulse.
    time.sleep(0.3)
    gpio.output(12, True)      # the original's gpio.ouput() is a typo
    time.sleep(0.00001)
    gpio.output(12, False)

    # Time the echo pulse: nosig marks its start, sig its end.
    while gpio.input(16) == 0:
        nosig = time.time()
    while gpio.input(16) == 1:
        sig = time.time()

    tl = sig - nosig

    # 58 us per cm and 148 us per inch of round-trip travel (HC-SR04 datasheet).
    if measure == 'cm':
        distance = tl / 0.000058
    elif measure == 'in':
        distance = tl / 0.000148
    else:
        print('Improper choice of measurement: in or cm')
        distance = None

    gpio.cleanup()
    return distance

print(distance('cm'))

7.4 To initialise and capture video using the Pi Cam (command):

raspivid is used to capture the video:
"-o -" causes the output to be written to stdout
"-t 0" sets the timeout to disabled
"-n" stops the video being previewed (remove it if you want to see the video on the HDMI
output)
cvlc is the console VLC player:
"-vvv" and its argument specify where to get the stream from
"--sout" and its argument specify where to output it to

#!/bin/bash

raspivid -o - -t 0 -hf -w 600 -h 400 -fps 30 | cvlc -v stream:///dev/stdin --sout
'#standard{access=http,mux=ts,dst=:8554}' :demux=h264

CHAPTER 8

APPENDIX - COMPONENTS REQUIRED TO IMPLEMENT THIS PROJECT

Serial No.   Name of Component     Specification               Quantity
1.           Raspberry Pi          Raspberry Pi 3 Model B+     1
2.           Pi Camera             5MP Camera Board            1
3.           Ultrasonic Sensor     HC-SR04                     1
4.           Motor Driver          L298N 2A                    1
5.           Arduino Uno           ESP8266                     1
6.           DC Motor              100rpm                      4
7.           DC Power Supply       12V DC                      1
8.           Laptop/Desktop        --                          1
9.           Wheels                --                          4
10.          Chassis               --                          1
11.          Breadboard            --                          1
12.          Jumper Wires          --                          As required

CHAPTER 9

CONCLUSION

In this paper a method to implement some automation features in a regular car is described:
video streaming, obstacle avoidance, traffic light detection and lane detection. Using these, a
small prototype was designed and built, and it successfully achieved the goals. However,
powerful as the Raspberry Pi is, a much more powerful computing machine would be needed
to implement this on a real car; alternatively, multiple Raspberry Pis could be cascaded to
perform different tasks.

As for the cascade classifier created for traffic light detection, many more positive and
negative samples from different streets and different weather and light conditions would be
needed to make it work in real life.

As the whole system relies heavily on the Internet, it is practical only in regions where fast
(e.g. 4G) data is available; the need for a fast Internet connection is one of the limitations of
this project.

The implemented model can drive itself on a specified lane, but it cannot navigate its way to
a given location, so a navigation system could be built on top of it. A much better classifier
for traffic light detection could also be designed for better performance in real-life scenarios.

The experimental results showed that the system achieves a standard requirement: providing
valuable information to the driver to ensure safety. The technology continues to develop and
to be tested. Autonomous cars may provide the significant comfort we need; however, we
need to bear in mind that there are still disadvantages affiliated with them.

CHAPTER 10

REFERENCES

1. Setting up the Pi Car: https://www.hackster.io/bestd25/pi-car-016e66

2. OpenCV lane detection source code: https://github.com/misbah4064/lane_detection

3. Working Video on the model: Click here to view

4. Wikipedia: https://www.wikipedia.org/

5. IJERT: http://www.ijert.org/

6. Demonstration video series: Click here to view playlist

7. Geeks for Geeks: https://www.geeksforgeeks.org/
