
International Conference on Automation, Computing and Renewable Systems ICACRS 2022

Artificial Intelligence Based Self-Driving Car Using Robotic Model

Dr. S Sujitha, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (dr.s.sujitha@gmail.com)
Prof. Surat Pyari, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (suratpyari271184@gmail.com)
W Y Jhansipriya, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (deeksha380@gmail.com)
Yannam Roopeswar Reddy, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (roopeswarreddy18751@gmail.com)
Vinod Kumar R, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (vk6682926@gmail.com)
P Ravi Nandan, Department of Electrical and Electronics Engineering, New Horizon College of Engineering, Bangalore, India (ravinandan516@gmail.com)

Abstract—The vehicle of the future is fully connected and always online; it is all-electric and autonomous, and we believe it takes both to realize it. The increased need for safe electronic systems in vehicles, which drivers and passengers can depend on, is the foundation of trust and shapes the path towards higher levels of automated driving. The effective commercial use of self-driving (driverless, automated) vehicles will make human life easier, and this paper aims to discuss this topic. This paper surveys the key technology of a self-driving vehicle. The four essential capabilities of a self-driving vehicle, namely red and green light detection, object detection, lane end detection and stop sign detection, are addressed and reviewed. The principal research institutions and groups in various countries are summarized. Finally, open questions around self-driving vehicles are discussed and their development trend is projected.

Keywords—Object detection, Vehicle route, Lane detection, Self-driving vehicle, Robotic Kit, Raspberry Pi, OpenCV.

I. INTRODUCTION

As the world advances, researchers and analysts are striving to make human life safer. People around the world are now very excited about the launch of autonomous vehicles. These vehicles are equipped with special sensors, processors and a knowledge base that together are responsible for operating the vehicle without a driver; the vehicle navigates itself to the destination requested by the user. This is indeed a major transformation in the field of robotics, and it contributes greatly to making the planet a safer place. Modern camera systems can produce high-quality images at high frame rates and very low cost, allowing them to be fitted in many systems and road vehicles. This raises the demand for systems that can understand this information. The advent of deep learning enabled great strides towards visual understanding: deep neural networks have achieved human-like performance on tasks such as image classification and traffic sign recognition. However, this performance boost was achieved by increasing network size, which means that deep networks carry large computational costs. Such large networks are infeasible, or at least difficult to use, on the small embedded devices found in self-driving vehicles.

Fig. 1 Sample lane detection of autonomous vehicles

II. BACKGROUND

Most people in the deep learning and computer vision communities understand image classification: given an image such as Fig. 1, the model should identify what single object or scene is present. Classification is very coarse and high-level. Many are also familiar with object detection, which attempts to locate and classify multiple objects within an image by drawing bounding boxes around them and then classifying what is in each box. Detection is mid-level, providing more detailed information, but it is still somewhat rough since it only draws bounding boxes and learns nothing about object shape. Semantic segmentation is the most informative of these three, and it is where the state-of-the-art methods compete.

The first step in many autonomous driving systems based on visual inputs is object recognition, object localization, and semantic segmentation. A semantic segmentation network classifies every pixel in an image, producing an image that is partitioned by class. The goal of semantic segmentation is to label each pixel as belonging to a given semantic class. In typical urban scenes, these classes could be road, traffic signs, road markings, vehicles, pedestrians, or sidewalks. When deep learning is used for semantic segmentation, the recognition of significant objects in the image, for example people or vehicles, takes place at the higher levels of a neural network. By design, these layers operate on a coarser scale and are translation invariant, so that small positional variations only affect the fine features that are typically found in the lower layers of a network.

III. PROPOSED SYSTEM

The proposed system of our project, "ARTIFICIAL INTELLIGENCE BASED SELF DRIVEN CAR USING ROBOTIC MODEL", combines Machine Learning, Artificial Intelligence and IoT. In this project we use the RaspiCam2 to identify a special pattern printed on the road. The camera captures this pattern, the Raspberry Pi processes it, and the vehicle is trained to move along the predefined path.

The camera also captures the surrounding scene to detect the various obstacles near the vehicle; if an obstacle comes too close to the vehicle, the vehicle stops until the obstacle moves away.

IV. KEY PARAMETERS OF CARS WITH SELF-DRIVING CONTROL

The key build layers of the self-driving car are :
 Layer 1 : The base of the project: a motoring structure of four motors, connected in parallel pairs to the Motor Driver L298.
 Layer 2 : The Arduino set-up with a PCB layout containing an electrolytic capacitor, an LED and a 555 Timer.
 Layer 3 : The Raspberry Pi set-up.
 Layer 4 : The OpenCV set-up on the Raspberry Pi.
 Layer 5 : Adding an exhaust fan to the Raspberry Pi.
 Layer 6 : Installation of the RaspiCam onto the model.

Fig. 2 Parameters in a self-driving car

V. HARDWARE DESIGN

1. LIST OF HARDWARE COMPONENTS
 Robo Chassis Kit, which includes four motors and four wheels on a plastic base.
 Motor Driver L298, to control two motors at a time from a 5 V DC supply.
 Arduino Uno; this microcontroller acts as a slave device, receiving instructions from the main controller (Raspberry Pi).
 Power Bank, 10000 mAh, 3 A.
 32-bit Raspberry Pi 3 B+ (2017) model, used for image processing, with 480x360-pixel video streaming.
 RaspiCam2 camera module (8 MP).
 USB cables
 Jumper wires and headers
 Electrolytic capacitor
 LED
 PCB board
 555 Timer
 16 GB SD card
 Adapter
 Ethernet cable

2. C++ :
C++ is a widely used general-purpose, intermediate-level language that allows programmers to express concepts in an easy and understandable way. In this project we use C++ because it is a good choice when working with the Raspberry Pi and the Arduino Uno, and it keeps the project simple.
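The per-pixel labelling that semantic segmentation produces can be sketched with NumPy: given one score map per class, the predicted class of each pixel is simply the argmax across the class axis. The network that produces the scores is omitted and the class names are illustrative urban-scene classes, not the authors' actual label set.

```python
import numpy as np

# Minimal sketch of turning per-class score maps into a segmentation:
# each pixel is assigned the class with the highest score.
CLASSES = ["road", "traffic_sign", "vehicle", "pedestrian", "sidewalk"]

def label_pixels(scores):
    """scores: array of shape (num_classes, H, W) -> per-pixel class ids."""
    return np.argmax(scores, axis=0)          # result has shape (H, W)

# Toy example: 2x2 image where "road" wins everywhere except one pixel.
scores = np.zeros((len(CLASSES), 2, 2))
scores[0] = 1.0                               # road score everywhere
scores[2, 0, 1] = 2.0                         # vehicle wins at pixel (0, 1)
labels = label_pixels(scores)
```

The class-indexed output is what a downstream planner would consume, e.g. by treating all pixels labelled "road" as drivable.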
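The master-slave split between the Raspberry Pi and the Arduino can be sketched as follows. The single-character command protocol and the 20-pixel steering threshold are assumptions made for illustration only, not the authors' actual firmware interface; on the Pi, the returned byte would be written to the Arduino over USB serial (e.g. with pyserial).

```python
# Sketch of the decision logic the Raspberry Pi (master) could use to
# command the Arduino Uno (slave). The one-byte protocol and the
# 20-pixel threshold are hypothetical values for illustration.
def drive_command(lane_offset_px, obstacle_near):
    """Map a lane-centre offset (in pixels) to a one-byte motor command."""
    if obstacle_near:
        return b"S"          # stop until the obstacle moves away
    if lane_offset_px < -20:
        return b"L"          # steer left to re-centre in the lane
    if lane_offset_px > 20:
        return b"R"          # steer right to re-centre in the lane
    return b"F"              # drive forward

# On the Pi this byte would be sent over serial, for example:
#   serial.Serial("/dev/ttyACM0", 9600).write(drive_command(offset, near))
```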
3. OpenCV :
OpenCV is a huge open-source library for computer vision, machine learning and image processing, and it now plays a significant role in real-time operation, which is vital in today's systems. Using it, one can process images and videos to identify objects, faces, or even human handwriting. When it is integrated with other libraries such as NumPy, Python can process the OpenCV array structure for analysis. To identify an image pattern and its various features we use vector spaces and perform mathematical operations on these features.

The first OpenCV version was 1.0. OpenCV is released under a BSD license and is therefore free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. When OpenCV was designed, the main focus was computational efficiency for real-time applications; everything is written in optimized C/C++ to exploit multi-core processing.

VI. OUTPUT PARAMETERS

OBJECT DETECTION
This project adopted the shape-based approach and used Haar feature-based cascade classifiers for object recognition. Since every object requires its own classifier and follows a similar process in training and identification, this project concentrated only on stop sign and traffic light detection.

OpenCV provides a trainer as well as a detector. Positive samples (containing the target object) were obtained using a PDA and cropped so that only the desired object is visible. Negative samples (without the target object), on the other hand, were gathered randomly. In particular, the traffic light positive samples contain an equal number of red and green traffic lights. The same negative sample dataset was used for both stop sign and traffic light training.

IMAGE PROCESSING
In image processing, the image is converted into numerical values. The dimensions of the image are then calculated, giving a definite number of pixels. As the number of pixels increases, the resolution of the image increases, leading to greater clarity. These pixels are converted into a matrix structure with values ranging between 0-255.

There are two types of images :
 Grayscale image :
In the matrix formed for this type of image, 0 indicates black and 255 indicates complete white.
 RGB image :
For this type of image three matrix structures are formed, which include :
MATRIX 1 – RED
MATRIX 2 – GREEN
MATRIX 3 – BLUE

LANE DETECTION :
Lane detection is built on the concepts of computer vision.

Step 1 : Canny Edge Detection :
As the name suggests, this detector recognizes edges in an image. The edges identified by the process are white, while everything else is black. The Canny edge detection algorithm does this in 5 steps :
Noise reduction, Gradient calculation, Non-maximum suppression, Double threshold, Edge Tracking by Hysteresis.

Fig. 4 Sample lane detection of autonomous vehicles

Step 2 : Define ROI (Region of Interest) :
To keep a car in its lane when driving, you concentrate only on the next 100 metres of the road ahead; you do not care about the road on the opposite side of the fence either. This is the region of interest. We remove all of the extraneous information from the image and only keep the area that will aid in locating the lane.

Fig. 5 : Image of ROI    Fig. 6 : Image after lines grouped
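The three-matrix structure of an RGB image can be shown directly with NumPy. The sketch below builds a tiny synthetic image rather than loading a real file; note as an aside that cv2.imread returns channels in B, G, R order, so the split indices would be reversed there.

```python
import numpy as np

# An RGB image is three stacked matrices of values 0-255, one per channel.
# Build a tiny 2x2 pure-red image and split it back into its
# RED, GREEN and BLUE matrices.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, :, 0] = 255                     # fill the red channel with 255

red, green, blue = img[:, :, 0], img[:, :, 1], img[:, :, 2]
```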

Step 3 : Get Lines :
The ROI is then passed on to obtain all of the straight lines in the image. You can do this with the aid of cv2.HoughLinesP(). This function returns a list of all the straight lines it was able to identify in the input image; each line is encoded as [x1, y1, x2, y2]. While this may seem fairly straightforward, the fundamental idea behind Hough line detection is a voting scheme in which every edge pixel votes for the line parameters that could pass through it.

Fig. 5 : Detecting the lines on a road

Step 4 : Getting a smooth line :
Once we receive the lines from Step 3, we divide them into two groups (left and right). If you look at the Step 3 output image, Lines 1 and 2 fall in the left group, while Line 3 falls in the right group. Once the data has been grouped, we determine the average slope (m) and intercept (c) for each group and then draw a single line through each group by executing getLineCoordinatesFromParameters() with the average m and average c.

VII. OUTPUT/RESULT
• An "Autonomous Vehicle" is constructed at a low cost using the latest technology such as Machine Learning, IoT etc.
• It looks for the targets/methodology mentioned below and performs the required operation as needed.

The simulation and the robotic model satisfy the following conditions :

Red & Green Light Detection

Object Detection

Lane End Detection

Stop Sign Detection

CONCLUSION
As technology develops throughout the world, self-driving vehicles will become the future method of transportation everywhere. The legal, moral, and social implications of self-driving vehicles involve considerations of risk, liability, and capability. Autonomous vehicles will help the economy through eco-friendliness, the environment through reduced fossil fuel emissions, society through more cooperation, and the system of regulations through a simpler assignment of liability. However, these considerations turn on two central aspects of autonomous vehicles: how they work and how they are kept secure. As technology advances, the security measures for self-driving vehicles will likewise continue to evolve to fight hackers, improve the accuracy of internal systems, and prevent accidents. When all of these advances are in place, society will be one step closer to the ideal world of vehicles capable of flying that most of us longed for as children.
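The grouping and averaging in this step can be sketched in plain Python. getLineCoordinatesFromParameters() is the authors' own helper, so the sketch below only computes the average m and c per group; the segment endpoint values are made up for illustration. Note that in image coordinates y grows downward, so left-lane segments have negative slope.

```python
# Sketch of Step 4: split Hough segments into left/right groups by slope
# sign, then average slope m and intercept c within each group.
def average_line(segments):
    """segments: list of (x1, y1, x2, y2); returns (mean m, mean c)."""
    ms, cs = [], []
    for x1, y1, x2, y2 in segments:
        m = (y2 - y1) / (x2 - x1)        # slope (vertical lines ignored)
        ms.append(m)
        cs.append(y1 - m * x1)           # intercept from y = m*x + c
    return sum(ms) / len(ms), sum(cs) / len(cs)

# Hypothetical segments: two on the left lane line, one on the right.
segments = [(10, 80, 40, 20), (12, 82, 42, 22), (60, 20, 90, 80)]
left = [s for s in segments if (s[3] - s[1]) / (s[2] - s[0]) < 0]
right = [s for s in segments if (s[3] - s[1]) / (s[2] - s[0]) > 0]
m_left, c_left = average_line(left)
m_right, c_right = average_line(right)
```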
APPLICATIONS
1. Geo Hearing - Land use
2. Independent Driving
3. Precision Agriculture
4. Trucking
5. Logistics
6. Heavy Machinery
7. Automation in Farming and Agriculture

ADVANTAGES
1. Greatly Improved Safety
2. Improved Transport Interconnectivity
3. Reduced Pollution and Emission

DISADVANTAGES
1. More Infrastructure
2. High Material Cost
3. Lost Jobs (Increase in Unemployment)
4. Security Issues
