21AIE401DRL TeamNo4 AIE19005 20 36 Report
C. Environment
CARLA is an open-source simulation platform designed to ease the development, training, and testing of autonomous driving systems. It provides open digital assets such as urban layouts, buildings, and vehicles, as well as tools and protocols for gathering and sharing data. CARLA offers many customizable features, including varied weather conditions, control over all static and dynamic actors, and the ability to create maps, making it a comprehensive platform for simulating autonomous driving scenarios.
There are two obstacles surrounding the parking space, and the project simulates a vertical (perpendicular) parking environment. Figure 1 depicts the construction of the parking-environment reference system, using the base of the parking spot as the origin.
Fig. 4. Angular Penalty Structure
Critic Model
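The reference system described above can be sketched in code. The following is a minimal illustration (the function name and pose convention are our own assumptions, not the original implementation) of expressing a vehicle pose in the parking-spot frame whose origin is the base of the spot:

```python
import math

def to_spot_frame(x, y, yaw, spot_x, spot_y, spot_yaw):
    """Express a vehicle pose in the parking-spot frame.

    The spot's base is the origin; all angles are in radians.
    Returns the local offset, distance to the spot centre, and
    the heading error wrapped to [-pi, pi).
    """
    dx, dy = x - spot_x, y - spot_y
    # Rotate the world-frame offset by -spot_yaw into the spot frame.
    c, s = math.cos(-spot_yaw), math.sin(-spot_yaw)
    local_x = c * dx - s * dy
    local_y = s * dx + c * dy
    # Heading error relative to the spot orientation, wrapped to [-pi, pi).
    err = (yaw - spot_yaw + math.pi) % (2 * math.pi) - math.pi
    dist = math.hypot(local_x, local_y)
    return local_x, local_y, dist, err
```

Quantities such as `dist` and `err` are exactly what a distance- and angle-based reward (described in the conclusion) would consume.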
VII. CONCLUSION
We generated training data for the neural network using the deep reinforcement learning method DDPG. A simple and efficient reward system is then built to guarantee that the car always approaches the centre of the parking spot and reaches the ideal inclination during the parking process. The reward design additionally accounts for the safety of the parking process and the convergence of the policy within the parking lot.
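A minimal sketch of such a shaped reward, assuming a weighted penalty on the distance to the spot centre and on the heading error, with a large collision penalty for safety (the weights, defaults, and function name are illustrative assumptions, not the exact design used here):

```python
def parking_reward(dist, heading_err, collided,
                   w_dist=1.0, w_ang=0.5, collision_penalty=100.0):
    """Hypothetical shaped reward for the parking task.

    The reward grows (towards 0) as the car nears the spot centre and
    as its heading error shrinks; a collision ends with a large penalty,
    which encodes the safety requirement.
    """
    if collided:
        return -collision_penalty
    # Penalise distance to the spot centre and deviation from the
    # ideal inclination; perfect parking yields a reward of 0.
    return -(w_dist * dist + w_ang * abs(heading_err))
```

Because both terms are penalties, the agent maximises return by simultaneously closing the distance and aligning its inclination, which is the behaviour the reward system above is meant to guarantee.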
Good parking actions and states from the experience replay pool are sampled with a step size of 100 rounds, and the overall convergence of the reward value over 500 separate episodes across the full DDPG algorithm training procedure is as follows:
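The experience replay pool itself can be sketched as a fixed-capacity buffer of (state, action, reward, next state, done) transitions from which minibatches are drawn uniformly at random (a generic sketch under our own assumptions; the capacity and sampling details here are illustrative, not the exact hyper-parameters used in training):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay pool for DDPG training.

    Old transitions are evicted automatically once the deque is full,
    so the pool always holds the most recent experiences.
    """

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Store one transition tuple."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a uniform random minibatch and unzip it into columns."""
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

Sampling minibatches of past transitions in this way breaks the temporal correlation between consecutive steps, which is what allows the DDPG critic update to converge stably.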