
SELF DRIVING CAR

AUTOMATED VEHICLE:

 Self-driving cars are capable of sensing their environment and can operate without human input.
 Self-driving cars have been receiving tremendous attention recently, due to the technological growth in Artificial Intelligence.
WHY SELF DRIVING:

 To ensure the safety of pedestrians.

 To reduce traffic deaths.

 To make long-distance driving simpler.

 To improve fuel economy.

 To demonstrate and test the safety, efficiency and effectiveness of this project in real-life conditions, and to consider security, legal and societal issues.
CHALLENGES

 Knowing how to drive
 Navigation
 Lane detection and tracking
 Traffic light detection
 Traffic sign detection
 Pedestrian detection
 Object recognition
 Pothole detection
 Nearby vehicle detection
COMPANIES FOCUSED ON SELF-DRIVING CARS

GOOGLE-WAYMO
TESLA-AUTOPILOT
AUDI
FORD
PROJECT FOCUS
 This project is mainly focused on Level 3 Automation.
 Level 3 Automation: Level 3 vehicles, also known as conditional driving automation
vehicles, have “environmental detection” capabilities.
 As a result, drivers are able to engage in activities behind the wheel, such as using
smartphones, under limited conditions while their car accelerates past a slow-moving
vehicle or otherwise navigates expressway traffic.
Sensors:

 Lidar (360 degree, 18 m): provides a 3-D scan to map the environment.

 GPS: allows the car to navigate without human assistance using satellite signals, a tachometer, altimeter, and gyroscope.

 Camera: tracks detailed information about the car's surroundings, including other cars, lane markings, pedestrians, obstacles and traffic signs.

 Radar: emits radio waves to determine the distance between obstacles and the sensor.

 Odometer: measures the distance traveled by the vehicle and improves GPS information.
Technology stack:

Hardware:
 Nvidia Jetson Xavier
 Intel RealSense Depth Camera
 Lidar 360 A2M6 (18 meter)
 TF02-Pro LIDAR (40 m)
 GPS System
 Odometer

Software:
 ROS
 Python
 OpenCV
 TensorFlow 2.x
 YOLO v2 or SSD MobileNet-v2
 Google Colab Pro
Working:
 The system's first task is to identify the lanes of the road, which is essential for keeping the vehicle within the constraints of its lane. We have used OpenCV, an open-source library of computer vision algorithms. Frames from the camera placed on the dashboard are passed through the following steps: convert the original image to grayscale, apply a slight Gaussian blur, run an edge detector, and define a region of interest (sketched below).
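A minimal sketch of that pre-processing chain using standard OpenCV calls; the input file name, the Canny thresholds, and the region-of-interest polygon are illustrative placeholders, not values from the project.

import cv2
import numpy as np

def region_of_interest(edges):
    # Keep only a trapezoid ahead of the car; mask out everything else.
    h, w = edges.shape
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, polygon, 255)
    return cv2.bitwise_and(edges, mask)

def detect_lane_segments(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # convert to grayscale
    blur = cv2.GaussianBlur(gray, (5, 5), 0)          # slight Gaussian blur
    edges = cv2.Canny(blur, 50, 150)                  # edge detector
    roi = region_of_interest(edges)                   # define region of interest
    # Fit candidate lane segments with a probabilistic Hough transform.
    return cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=100)

if __name__ == "__main__":
    frame = cv2.imread("road.jpg")                    # placeholder dashboard frame
    lines = detect_lane_segments(frame)
    print(0 if lines is None else len(lines), "lane segments found")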

 With the modern trend towards Artificial Intelligence, we implement deep learning-based computer vision to identify objects. Here, we have used SSD MobileNet to detect cars and humans on the road and to measure the distance between the ego vehicle and the closest vehicle in front. The model returns the distance in pixels, which can be converted to meters based on the camera parameters (sketched below). The model provides traffic light detection, vehicle detection, pedestrian detection and road sign detection to make the car autonomous. SSD MobileNet-V2 does quite well at detecting other cars on the road, and its inference time is also very fast. The model was trained with more than 50,000 images.
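A hedged sketch of the pixel-to-meter conversion mentioned above, using the standard pinhole-camera relation. The focal length and the assumed real-world vehicle height are placeholder calibration values, not the project's actual parameters.

# distance ~ real_height * focal_length / apparent_height_in_pixels
FOCAL_LENGTH_PX = 700.0   # assumed camera focal length in pixels (from calibration)
CAR_HEIGHT_M = 1.5        # assumed real-world height of a typical car

def pixel_box_to_distance(box_height_px, real_height_m=CAR_HEIGHT_M,
                          focal_px=FOCAL_LENGTH_PX):
    # Convert the detector's bounding-box height (pixels) to an approximate range (meters).
    return real_height_m * focal_px / box_height_px

# e.g. a car whose bounding box is 70 px tall is roughly 15 m ahead
print(round(pixel_box_to_distance(70.0), 1), "m")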
 By using ROS we are able to build a map, localize the vehicle using lidars or GPS, plan paths along maps, avoid obstacles, and process point clouds or camera data to extract information. It provides the tools required to easily access sensor data, process it, and generate an appropriate response for the motors and other actuators of the car. ROS controls the vehicle by processing all of the data from the lane detector, the AI model and the lidar (a minimal node is sketched below).
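A minimal ROS node sketch of that idea: subscribe to a laser scan, brake if an obstacle is too close, otherwise cruise. The topic names (/scan, /cmd_vel), the 2 m threshold and the speeds are illustrative assumptions, not the project's actual configuration.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def on_scan(scan, cmd_pub):
    # Keep only returns inside the sensor's valid range.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    nearest = min(valid) if valid else float("inf")
    cmd = Twist()
    cmd.linear.x = 0.0 if nearest < 2.0 else 0.5   # brake if something is within 2 m, else cruise
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_obstacle_stop")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, on_scan, callback_args=cmd_pub)
    rospy.spin()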
Workflow:
Lane Detection:
 Lane detection is a critical component of self-driving cars and autonomous vehicles. It is one
of the most important research topics for driving scene understanding.

 Once lane positions are obtained, the vehicle will know where to go and avoid the risk of
running into other lanes or getting off the road. This can prevent the driver/car system from
drifting off the driving lane.

 By finding the curvature of the lane, we can predict whether the vehicle should turn left or right (sketched below).
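A small sketch of that curvature idea, assuming lane pixels have already been extracted: fit a second-order polynomial x = a*y^2 + b*y + c in image space and read the bend direction from the sign of the quadratic term. The sample points and the sign convention (x grows to the right, y downward) are assumptions for illustration.

import numpy as np

def turn_direction(lane_y, lane_x):
    a, b, _ = np.polyfit(lane_y, lane_x, 2)   # quadratic fit in image space
    y_eval = max(lane_y)                       # evaluate curvature near the vehicle
    radius = ((1 + (2 * a * y_eval + b) ** 2) ** 1.5) / abs(2 * a)
    # With x to the right, a positive quadratic term bends the lane to the right.
    return ("right" if a > 0 else "left"), radius

ys = np.linspace(0, 700, 20)
xs = 0.0005 * ys ** 2 + 0.1 * ys + 300         # synthetic, gently curving lane
direction, radius = turn_direction(ys, xs)
print(direction, "curve, radius ~", int(radius), "px")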
1. The perception system, with its sensors and cameras, keeps watching the environment.
2. Object detection identifies what obstacles are present.
3. The decision-making system uses its knowledge of the objects and other variables to determine a response from the vehicle.
4. The path-planning system constantly recalibrates the path the vehicle must take (a skeleton of this loop is sketched below).
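A bare skeleton of that four-stage loop. Every function is a stub: the names, arguments and return values are illustrative only, not the project's actual modules.

def perceive(sensors):
    # 1. The perception system gathers the latest camera frame and LiDAR sweep.
    return {"image": sensors.camera(), "cloud": sensors.lidar()}

def detect_objects(image):
    # 2. Object detection identifies what obstacles are present (e.g. SSD MobileNet).
    return []

def decide(objects):
    # 3. The decision-making system chooses a response from what it sees.
    return "brake" if objects else "cruise"

def plan_path(decision):
    # 4. The path-planning system recalibrates the path the vehicle must take.
    return [decision]

def control_step(sensors):
    frame = perceive(sensors)
    objects = detect_objects(frame["image"])
    return plan_path(decide(objects))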
Lane Detection
OBJECT DETECTION:
 To perform object detection, this work uses datasets that provide information about the environment through the LiDAR and camera.
 Using the information from these sensors, objects are detected and classified, and the distance and direction of each object relative to the car are measured.
 Usually object detection is achieved using a combination of feature-based modelling and appearance-based modelling.
 The image contains more information for identifying objects than a laser scan, and allows both feature-based and appearance-based modelling.
 The image is primarily used to detect objects and classify them, and the LiDAR is used to measure the location of the object relative to the vehicle (a fusion sketch follows below).
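A hedged sketch of that camera/LiDAR fusion: the detector supplies a bounding box, and LiDAR points projected into the image supply the object's range. The intrinsic matrix and the sample points are placeholders; a real system would use its own calibration and the LiDAR-to-camera extrinsics.

import numpy as np

K = np.array([[700.0, 0.0, 640.0],     # assumed camera intrinsics (placeholder)
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

def distance_in_box(points_cam, box):
    # points_cam: Nx3 LiDAR points already transformed into the camera frame.
    # box: (x1, y1, x2, y2) bounding box from the object detector.
    x1, y1, x2, y2 = box
    uvw = (K @ points_cam.T).T                         # project to pixel coordinates
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2) & (points_cam[:, 2] > 0)
    # Median depth of the LiDAR points that fall inside the detection box.
    return float(np.median(points_cam[inside, 2])) if inside.any() else None

pts = np.array([[0.5, 0.0, 12.0], [0.6, 0.1, 12.4], [5.0, 0.0, 30.0]])
print(distance_in_box(pts, (600, 300, 700, 420)), "m to the detected car")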
SSD MobileNet-V2 vs. YOLO v2 (detection examples)
Lidar - the eye of the autonomous vehicle:

 LiDAR acts as the eye of the self-driving vehicle. It provides a 360-degree view of the surroundings, helping the car drive itself safely.

 The continuously rotating LiDAR system sends out thousands of laser pulses every second. These pulses hit surrounding objects and reflect back.

 The resulting light reflections are then used to create a 3D point cloud.

 An onboard computer records each laser's reflection point and translates this rapidly updating point cloud into an animated 3D representation (a small conversion sketch follows below).
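A small sketch of how those reflections become a point cloud, assuming each return is a range plus the beam's azimuth and elevation angles; the sample sweep is synthetic.

import numpy as np

def returns_to_points(ranges, azimuths, elevations):
    # Spherical-to-Cartesian conversion for one revolution of the sensor.
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)              # N x 3 point cloud

az = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # one sweep, 8 beams
cloud = returns_to_points(np.full(8, 10.0), az, np.zeros(8))
print(cloud.round(2))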
3D LIDAR vs. 2D LIDAR
