
International Journal of Advance and Innovative Research

ISSN 2394 - 7780


Volume 7, Issue 1 (III): January - March, 2020 Part - 3

SELF DRIVING CAR SIMULATOR USING NEURAL NETWORKS

Jeetesh Kumar Tiwari, Cedric Thanikkal, Deep Vora and Prof. Dipali Bhole
U.G. Student, Computer Engineering Department Shree L. R. Tiwari College of Engineering, Maharashtra

ABSTRACT
Autonomous driving is a topic that has gathered huge attention from both the research community and industry, due to its potential to radically change mobility and transport. In this project we employ Machine Learning algorithms that automatically learn to control a software-simulated vehicle based on its own experience of driving. More specifically, we use two Reinforcement Learning (RL) algorithms: Deep Deterministic Policy Gradient (DDPG) and Actor-Critic with Experience Replay (ACER). Reinforcement learning is a broad class of algorithms for solving Markov Decision Problems. The algorithms were trained and evaluated in a synthetic, software-simulated environment. The input to both models consists of images captured by a front-facing virtual camera and internal states of the vehicle, i.e., velocity, acceleration, braking state, etc. The RL methods are capable of controlling the vehicle (changing lanes, in this case) using only images, processed by a convolutional neural network (CNN) that provides information about the position of the vehicle.

PROBLEM STATEMENT
The traditional way to control a simulated self-driving car is to implement a large amount of code for perception techniques that distinguish between different classes of objects sensed by diverse technologies such as RGB cameras, radar, etc. This information feeds traditional navigation algorithms based on mapping and planning. These methods require large processing capacity and expensive hardware, which increases costs. The approach studied in this project is much simpler: it feeds simple images to a deep convolutional neural network that controls the car's movements, applying an end-to-end strategy.
SCOPE AND MOTIVATION
Each year, in the US alone, around 37,000 people lose their lives in car accidents, a 5.6% increase from 2015. Human error causes up to 90% of car accidents. Autonomous vehicles may help reduce this huge number of fatalities. One of the first, most popular, and most useful technologies is lane detection and lane keeping. It started developing early and is still being improved to this day. The aim of increasing the safety of vehicles on the road has led to the development of different systems that can be applied to real-life models. Different approaches to developing self-driving systems exist, and almost all of them are very complex, with very high hardware requirements. The solution presented in this paper proposes a machine-learning-based system that is as simple as possible, with a software-only implementation. Based on an input image, the neural network should choose one of four available commands (forward, left, right, or stop). With the help of the training data the system learns to follow the road ahead and stay in its lane, handling the traffic or obstacles in its lane. The system automatically learns the necessary road features with only the steering angle as the input from the virtual driver.
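The four-command decision described above can be sketched as a simple argmax over the network's output scores. This is an illustrative sketch, not the paper's actual model: the function name and score values are assumptions.

```python
import numpy as np

# The trained network is assumed to output one score per command for a
# given input image; the command with the highest score is executed.
COMMANDS = ["forward", "left", "right", "stop"]

def choose_command(scores):
    """Pick the driving command with the highest network output score."""
    return COMMANDS[int(np.argmax(scores))]

# Illustrative scores favouring "left"
print(choose_command([0.10, 0.70, 0.15, 0.05]))  # left
```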
FUNCTIONAL REQUIREMENT
Artificial Neural Networks – Neural Networks for artificial intelligence are mathematical models inspired by natural structures in the human brain and implemented on modern computers. A typical application is to use the models as complex function approximators. This section gives brief explanations and motivations for these building blocks of Neural Networks; the mathematical models, algorithms, and concepts presented here are kept relatively short.
Feed Forward Neural Networks – A Neural Network is simply a computational graph whose objective is to approximate some function f*(x). The Neural Network models the function through f(x, θ) by adjusting its parameters θ. The input x flows through layers of artificial neurons, where the activation of an artificial neuron i in layer j is defined as a_i^(j) = φ(Σ_k w_ik^(j) a_k^(j−1) + b_i^(j)), where the sum runs over the n nodes of the previous layer, w and b are the neuron's weights and bias, and φ is the activation function. In a feed forward neural network all the hidden activations a^(j) from one layer are passed through the network as input to the next layer.
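The layer-by-layer flow of activations can be sketched in a few lines of NumPy. This is a minimal illustration with one hidden layer and a tanh activation; the weights are random placeholders, not trained parameters.

```python
import numpy as np

# Illustrative weights for a 3 -> 4 -> 2 feed forward network.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input (3 features) -> hidden (4 neurons)
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))   # hidden (4 neurons) -> output (2 neurons)
b2 = np.zeros(2)

def forward(x):
    a1 = np.tanh(W1 @ x + b1)      # hidden activations a^(1)
    return W2 @ a1 + b2            # linear output layer

y = forward(np.array([0.5, -1.0, 2.0]))
print(y.shape)  # (2,)
```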
Reinforcement Learning – The premises for RL are that there exist an environment and a controllable Actor. The Actor is capable of changing the state of the environment with actions, e.g., move left or right. In a broad sense, the environment rewards the Actor based on its current behavior, and in reinforcement learning the objective is to find the behavior/policy that maximizes the cumulative reward. This section introduces the reader to the basic theory of RL and the recent algorithms that are relevant.
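The Actor/environment interaction described above can be sketched with a toy lane-keeping environment. The class, its `step()` signature, and the reward values are all illustrative assumptions, not the simulator used in this project.

```python
import random

class ToyLaneEnv:
    """Toy environment: the Actor's lane offset, 0 = centre of the lane."""

    def __init__(self):
        self.position = 0

    def step(self, action):
        # Apply the action ("left", "right", or "stay") to the state.
        self.position += {"left": -1, "right": 1, "stay": 0}[action]
        reward = 1 if self.position == 0 else -1   # reward staying centred
        done = abs(self.position) > 2              # Actor has left the road
        return self.position, reward, done

# Random-policy rollout: act, observe reward, accumulate the return.
env = ToyLaneEnv()
total_reward = 0
for _ in range(10):
    state, reward, done = env.step(random.choice(["left", "right", "stay"]))
    total_reward += reward
    if done:
        break
```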


Reward and objective function – The simulator is based on a reward system: if the Actor performs a positive action with respect to the simulation, it receives a reward, whereas if a wrong action is performed, a percentage of the accumulated reward is deducted from the sum.
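The reward bookkeeping just described can be sketched as follows. The fixed reward of 1.0 and the 10% deduction are assumed values for illustration; the paper does not state the actual figures.

```python
def update_reward(total, action_ok, reward=1.0, penalty_frac=0.10):
    """Add a fixed reward for a positive action; deduct a percentage
    of the accumulated sum for a wrong action (values are assumptions)."""
    if action_ok:
        return total + reward
    return total * (1.0 - penalty_frac)

total = 0.0
total = update_reward(total, True)    # positive action: 1.0
total = update_reward(total, True)    # positive action: 2.0
total = update_reward(total, False)   # wrong action: 10% deducted -> 1.8
print(total)
```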
SYSTEM REQUIREMENTS
Hardware Requirements
• Computing device (laptop/PC)
• Intel Core i5 (3rd generation) processor
• 4 GB RAM
• 128 MB video memory
SOFTWARE REQUIREMENTS
NumPy
It is a library for the Python programming language, adding support for large, multidimensional arrays and
matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
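As a quick illustration of the array operations used throughout the project, a grayscale camera frame can be treated as a 2-D array and normalised in one vectorised step:

```python
import numpy as np

# A tiny stand-in for a grayscale camera frame (pixel values 0-255).
frame = np.array([[0, 128], [255, 64]], dtype=np.uint8)

# Scale pixel intensities to [0, 1] for input to a neural network.
normalised = frame.astype(np.float32) / 255.0
print(normalised.max())  # 1.0
```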
TensorFlow
It is a free and open-source software library for dataflow and differentiable programming across a range of
tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.
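TensorFlow performs convolutions through optimised kernels (e.g. `tf.nn.conv2d`), but the core 2-D convolution that a CNN layer computes can be sketched in plain NumPy for illustration; the kernel and image below are made-up examples.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution: slide the kernel over the image
    and take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[1, -1]])              # simple horizontal edge detector
img = np.array([[0, 0, 255, 255]], dtype=float)
print(conv2d(img, edge_kernel))                # responds at the 0 -> 255 edge
```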
Matplotlib
It is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits such as Tkinter and wxPython.
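As an illustration, a made-up per-episode reward curve of the kind this project's training would produce can be plotted and saved without a display:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, no GUI toolkit needed
import matplotlib.pyplot as plt

# Illustrative (not measured) total rewards over five training episodes.
rewards = [2, 5, 4, 8, 9]

fig, ax = plt.subplots()
ax.plot(range(len(rewards)), rewards, marker="o")
ax.set_xlabel("Episode")
ax.set_ylabel("Total reward")
fig.savefig("rewards.png")
```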
OpenCV
It is a library of programming functions mainly aimed at real-time computer vision. Originally developed by
Intel, it was later supported by Willow Garage then Itseez. The library is cross-platform and free for use under
the open-source BSD license.
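In this pipeline, converting a colour camera frame to grayscale (OpenCV's `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)`) is a typical first preprocessing step. Its weighted-sum formula can be sketched in NumPy; `cv2` itself is not imported here, and the function below only illustrates the standard luma weights.

```python
import numpy as np

def bgr_to_gray(frame):
    """Grayscale conversion using the standard luma weights
    (the same formula cv2.COLOR_BGR2GRAY applies)."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

pixel = np.array([[[255, 255, 255]]], dtype=np.float32)  # one white BGR pixel
print(bgr_to_gray(pixel))  # white stays at full intensity
```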
Use Case Diagram
The Use Case Diagram of the automated car system shows the actor "car" with a total of three use cases: as soon as the project is opened, the car starts running; the car can also change speed and change lanes.

Figure: Use Case Diagram



OBJECT DIAGRAM
Figure shows a static view of the structure of the lane and car identification model at a specific time. As can be seen, there are various objects; Front Camera, Right Door Camera, and System are a few examples.

Figure: Object Diagram


There are various attributes associated with each object. For example, the object FrontCamera has attributes such as RelativeDistance() and RelativeSpeed(). Similarly, there are attributes for the other objects as well. Here the object diagram renders a set of objects and their relationships as an instance.
STATE CHART DIAGRAM
Figure represents the various states the system passes through while processing. The first state corresponds to the initial observation, in which image detection is performed. After the image is recorded, the entire processing of the image takes place, leading to a state where the system applies the convolutional network algorithm and decides whether or not to change lane; this is an intermediate state. The final state is the terminal state, in which the actor-critic starts again and rewards the agent, until the simulation ends.

Figure: State Chart Diagram


SEQUENCE DIAGRAM
The objects used in the Sequence Diagram are the Environment, the Neural Network functions, image detection, and applying the changes back to the screen.

Figure: Sequence Diagram



CONCLUSION
Using the self-driving car simulator, the model can traverse a path within a feasible time, provided the input is well defined and highly accurate. With some modifications, this model could be implemented as a full-fledged real-world self-driving car application.

