03 ML and DL in ADAS - Sensors & Sensor Fusion


ML and DL in ADAS

Environment Sensors (outside or inside the vehicle) → Perception (information extraction from sensor data) → Processing and decision making → Actuation

➢ High amount of data from one sensor
➢ More sensors on one vehicle
➢ The perception process of sensors like Radar, Camera, Lidar is using more ML / DL algorithms for environment perception (object detection, classification, tracking)
➢ Advanced automotive platforms with AI computing features
➢ Sensor data fusion is using a lot of ML / DL algorithms
➢ AI algorithms are getting their place in decision making also
  ➢ Directly using them
  ➢ Parallel to rule-based algorithms to increase the confidence of the output

© Copyright content
Use of Deep Learning in Automotive Radar Sensor
➢ Object Classification:
  ➢ Traditional approach: use of object dimensions, speed and other parameters to estimate the type of object.
    ➢ Efficiency: very limited
  ➢ Deep Learning approach: use of CNN models with the radar spectrum (RD map) directly as input.
    ➢ More efficient
  ➢ Reference technical paper: Deep Learning-based Object Classification on Automotive Radar Spectra.

https://www.iss.uni-stuttgart.de/forschung/publikationen/kpatel_RadarClassification.pdf
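The data flow of the CNN approach can be sketched numerically: the classifier consumes the range-Doppler (RD) map as a 2D array instead of hand-crafted features. The sketch below is illustrative only — random, untrained weights, toy dimensions, and hypothetical classes — just to show the path from RD spectrum to class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy range-Doppler (RD) map: 32 range bins x 16 Doppler bins.
# The referenced paper feeds such spectra directly to a CNN instead of
# hand-crafted features (dimensions, speed, ...).
rd_map = rng.standard_normal((32, 16))

def conv2d_valid(x, k):
    """Naive 'valid' 2D convolution (cross-correlation), for the sketch only."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# One conv layer (random weights), ReLU, global average pooling, then a
# linear classifier over hypothetical classes {car, pedestrian, cyclist, clutter}.
kernels = rng.standard_normal((4, 3, 3))
features = np.array([conv2d_valid(rd_map, k).clip(min=0).mean() for k in kernels])
w = rng.standard_normal((4, 4))                 # 4 classes x 4 features
logits = w @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax -> class probabilities
print(probs.shape)                              # (4,)
```

A trained network would learn the kernels and weights from labeled RD spectra; the point here is only that the spectrum itself, not derived object parameters, is the input.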
Use of Deep Learning in Automotive Radar Sensor
➢ Vehicle Detection:
  ➢ Traditional approach: use of clustering algorithms to group detections from one object, then manual feature extraction to detect the vehicles.
  ➢ Deep Learning based approach: using the range-azimuth-Doppler tensor to generate RA, RD and AD tensors (2D) as input to the deep learning network to detect vehicles.
  ➢ Reference technical paper: Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors

© Copyright content
https://openaccess.thecvf.com/content_ICCVW_2019/papers/CVRSUAD/Major_Vehicle_Detection_With_Automotive_Radar_Using_Deep_Learning_on_Range-Azimuth-Doppler_ICCVW_2019_paper.pdf
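The 2D-view idea can be illustrated with plain NumPy: starting from a 3D range-azimuth-Doppler tensor, the 2D RA, RD and AD views are obtained by reducing over the remaining axis. Max-reduction is an assumption made here only to show the shapes involved; the paper's exact projection may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3D radar tensor of power values: (range, azimuth, Doppler) bins.
rad = rng.random((64, 32, 16))

# Collapse one axis at a time to get the three 2D views fed to the network
# (max over the collapsed axis is an illustrative choice).
ra = rad.max(axis=2)   # range-azimuth view
rd = rad.max(axis=1)   # range-Doppler view
ad = rad.max(axis=0)   # azimuth-Doppler view

print(ra.shape, rd.shape, ad.shape)   # (64, 32) (64, 16) (32, 16)
```

Working on three 2D views keeps standard image-style convolutions applicable while retaining range, azimuth and Doppler information from the full tensor.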
Use of Deep Learning in Automotive Lidar
➢ Deep Learning based: Lidar 3D point cloud semantic segmentation, object detection and classification
➢ Reference paper: Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review
  ➢ It contains references to more than 140 papers in this field.
https://arxiv.org/pdf/2005.09830.pdf
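As a minimal, non-learned stand-in for the point-cloud processing the survey covers, the sketch below labels a toy lidar cloud by a simple height threshold and voxelizes the obstacle points into a 2D occupancy grid — a common preprocessing step before a DL detector. All sizes, thresholds and the two toy classes are illustrative assumptions, not from any of the surveyed methods.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy lidar point cloud: N x 4 rows of (x, y, z, intensity),
# with the sensor assumed ~1.8 m above the ground plane.
n = 1000
cloud = np.column_stack([
    rng.uniform(-50, 50, n),     # x [m]
    rng.uniform(-50, 50, n),     # y [m]
    rng.uniform(-2.0, 3.0, n),   # z [m]
    rng.random(n),               # intensity
])

# Crude per-point "segmentation" by height threshold -- a placeholder for
# the learned semantic segmentation discussed in the review.
labels = np.where(cloud[:, 2] < -1.5, 0, 1)   # 0 = ground, 1 = obstacle (toy)

# Voxelize obstacle points into a 100 x 100 occupancy grid with 1 m cells.
obstacles = cloud[labels == 1]
ix = ((obstacles[:, 0] + 50.0) // 1.0).astype(int).clip(0, 99)
iy = ((obstacles[:, 1] + 50.0) // 1.0).astype(int).clip(0, 99)
grid = np.zeros((100, 100), dtype=bool)
grid[ix, iy] = True
print(labels.shape, grid.shape)
```

Learned approaches replace the threshold with per-point networks (e.g. on raw points or on such voxel/grid representations), but the input and output shapes are of this kind.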
Deep Learning enabled Automotive Thermal Camera – from Adasky

➢ VIPER is an intelligent, high-resolution LWIR thermal camera, designed for vehicle safety and perception systems to enhance ADAS and enable all levels of automation.

➢ It is a shutterless thermal camera that can see and detect day or night, without being compromised by complete darkness, blinding lights or harsh weather.

➢ VIPER can detect objects up to 300 meters (984 ft) and classify living beings at over 200 m (656 ft) in all visibility conditions.

➢ Its state-of-the-art computer vision algorithms also include object detection and classification of multiple types of objects, as well as free-space detection and TTC (Time To Collision) calculation to support Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB) features, all made possible using only a single thermal camera.
https://www.adasky.com/viper/
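The TTC calculation mentioned above reduces, under a constant-velocity assumption, to range divided by closing speed. The warning thresholds in this sketch are illustrative placeholders, not Adasky's actual parameters:

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Constant-velocity TTC: seconds until contact if nothing changes.

    Returns float('inf') when the gap is opening or constant
    (no collision course under this simple model).
    """
    if closing_speed_mps <= 0.0:
        return float('inf')
    return range_m / closing_speed_mps

# Ego closing at 20 m/s on a stopped vehicle 60 m ahead:
ttc = time_to_collision(60.0, 20.0)
print(ttc)   # 3.0

# A hypothetical FCW might warn below ~2.5 s and request AEB below ~1 s.
warn = ttc < 2.5
brake = ttc < 1.0
```

Real systems refine this with relative acceleration and track filtering, but the range/closing-speed ratio is the core quantity a single forward sensor must estimate.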
ML & DL enabled Automotive Multi Function Mono Camera – from Continental

https://www.continental-automotive.com/en-gl/Passenger-Cars/Autonomous-Mobility/Enablers/Cameras/Mono-Camera
Use of Deep Learning in Automotive Sensor Data Fusion
➢ Object Detection and Classification using Sensor Data Fusion
  ➢ Automotive radar detections and camera images are fused together using a deep learning network.
  ➢ Radar points are mapped onto the camera images and then the data fusion is carried out.
  ➢ The dataset from nuScenes and the dataset from TUM are used for this work.
  ➢ Reference paper: A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection

[Figure: dataset samples and detection results]

https://arxiv.org/pdf/2005.07431v1.pdf
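Mapping radar points onto the camera image is a standard pinhole projection once the detections are expressed in the camera frame. The intrinsic matrix below is hypothetical, chosen only to demonstrate the u = fx·x/z + cx, v = fy·y/z + cy arithmetic the mapping step relies on:

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy in px; principal point cx, cy).
K = np.array([[1266.0,    0.0, 800.0],
              [   0.0, 1266.0, 450.0],
              [   0.0,    0.0,   1.0]])

# Radar detections already transformed into the camera frame
# (x right, y down, z forward), one (x, y, z) row per point, in metres.
pts_cam = np.array([[ 2.0, 0.5, 20.0],
                    [-3.0, 0.4, 35.0],
                    [ 0.0, 0.6, 50.0]])

# Perspective projection: scale by K, then divide by depth z.
uvw = (K @ pts_cam.T).T
uv = uvw[:, :2] / uvw[:, 2:3]      # pixel coordinates (u, v) per radar point
print(uv.round(1))
```

In the fusion network these projected radar points are rendered into extra image channels (e.g. depth or velocity at the projected pixels) so that a convolutional backbone can consume both modalities jointly; the extrinsic radar-to-camera transform, omitted here, must be applied first.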
Use of Deep Learning in Automotive Sensor Data Fusion
➢ 3D Object Detection using Sensor Data Fusion
  ➢ For training and evaluation, a dataset containing 455 frames of synchronized camera, lidar and radar data is used.
  ➢ Each radar point cloud contains approximately 1000 - 10000 points, obtained using the Astyx 6455 HiRes sensor.
  ➢ Each point contains x, y, z position, magnitude, and Doppler information (radial velocity).
  ➢ The camera images have a size of 2048 × 618 pixels and were captured with a Point Grey Blackfly camera.
  ➢ The lidar point cloud was obtained with a Velodyne VLP-16.
  ➢ Reference paper: Deep Learning Based 3D Object Detection for Automotive Radar and Camera
https://www.astyx.com/fileadmin/redakteur/dokumente/Deep_Learning_Based_3D_Object_Detection_for_Automotive_Radar_and_Camera.PDF
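The per-point fields listed above (x, y, z position, magnitude, Doppler) can be held in a plain array, and quantities such detectors commonly derive — range and azimuth per detection — follow directly. The field order and the values below are assumed purely for illustration, not taken from the Astyx dataset:

```python
import numpy as np

# One radar point per row, assumed field order:
# x, y, z [m], magnitude [dB], Doppler / radial velocity [m/s].
points = np.array([
    [12.0,  1.5, 0.2, 18.0, -5.4],
    [30.5, -4.0, 0.5, 11.0,  0.1],
])

x, y, z = points[:, 0], points[:, 1], points[:, 2]
rng_m = np.sqrt(x**2 + y**2 + z**2)       # range to each detection [m]
azimuth = np.degrees(np.arctan2(y, x))    # bearing in the sensor plane [deg]
print(rng_m.round(2), azimuth.round(1))
```

A 3D detection network consumes such N×5 arrays (optionally normalized or voxelized) together with the camera image to regress oriented 3D boxes.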
Thank you
