LIDAR Based Object Detection

Technical Report · October 2017


DOI: 10.13140/RG.2.2.16245.22248

Sarthak Batra
Georgia Institute of Technology



Object detection and tracking using LiDAR

Introduction

Detection, classification, and tracking of dynamic objects is a crucial step in many
applications (e.g., robot navigation, drone surveillance, and autonomous driving). Moving objects such as
humans, animals, and vehicles must be detected accurately and quickly; detection is usually the first stage
of more complex processing such as classification and tracking [1]. This paper reviews LiDAR (Light
Detection and Ranging) technology and its use in object detection and tracking, with a focus on
autonomous driving and robot navigation.

Current State-of-the-Art and Commercial Products

The state-of-the-art product for ground and marine vehicles is the Velodyne HDL-64E. It provides a 360-
degree field of view and a data rate of 2.2 million points per second, and costs $75,000 [2]. Sixty-four
lasers paired with 64 photodiodes produce 64 lines of data output [3].

The Velodyne Puck is a cheaper option: it costs $8,000 but provides only 16 channels, compared to 64 on
the HDL-64E. In addition to the reduced detail, there is a significant decrease in accuracy (~1 cm worse),
range (~20 m shorter), and data rate (~75% fewer points per second) [4]. The Puck has the advantage of
being lighter, weighing only 830 g, while the HDL-64E weighs 12.7 kg.

A commercial product in this category is the laser distance sensor (LDS) used in robot vacuums such as
the Neato XV11. The assembly costs under $100 and includes a laser and an image sensor under the
LiDAR hood that triangulate the region around the robot [5].

Underlying technology

Unlike most sensors, which detect energy emitted by objects, LiDAR is an active sensor that emits its
own source of energy. A LiDAR assembly usually consists of a laser, scanner and optics, a photodetector,
and a position and navigation system (GPS, IMU). In most implementations, a beam of light is fired from
a known elevation and angle. The light bounces off an object and returns to the sensors in the LiDAR
assembly. The "time of flight" of the laser is half the measured round-trip time. Knowing the speed of
light (~2.99*10^8 m/s), the distance to the object can be calculated.
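
A minimal sketch of this calculation follows; the round-trip time used in the example is an illustrative value, not a measurement from any particular sensor.

```python
# Minimal sketch: distance from a LiDAR time-of-flight measurement.

SPEED_OF_LIGHT = 2.99e8  # m/s, approximate

def tof_distance(round_trip_time_s: float) -> float:
    """Return the distance to a target given the measured round-trip time.

    The pulse travels to the object and back, so the one-way
    time of flight is half the measured round-trip time.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a pulse returning after 667 ns corresponds to roughly 100 m.
print(tof_distance(667e-9))  # ~99.7 m
```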

When multiple light pulses are emitted in varying directions, each distance measurement can correspond
to a pixel. This stream of data points, referred to as a "point cloud", is returned by the LiDAR sensor.
The X-Y-Z coordinates can be rendered as a three-dimensional image.
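
A hedged sketch of how per-pulse measurements become X-Y-Z coordinates, assuming each return is tagged with the beam's azimuth and elevation (the standard spherical-to-Cartesian conversion; the ranges and angles below are synthetic):

```python
import numpy as np

def polar_to_xyz(ranges, azimuths, elevations):
    """Convert per-pulse (range, azimuth, elevation) measurements
    into X-Y-Z point-cloud coordinates.

    ranges are in metres; azimuths and elevations in radians.
    """
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=-1)  # shape (N, 3)

# One revolution of a single laser at 0 degrees elevation, 10 m range:
az = np.linspace(0, 2 * np.pi, 360, endpoint=False)
cloud = polar_to_xyz(np.full(360, 10.0), az, np.zeros(360))
```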

The LDS introduced in the discussion of the Neato XV11 is capable of fine angular and distance
resolution. It measures the angle at which the returning laser strikes its image sensor and from that
determines how far away the nearest object is. Real-time behavior (>100,000 data point measurements
per second), combined with low false-positive rates and efficient mapping algorithms, makes this an
effective and cheaper alternative [6].
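
The sketch below illustrates the similar-triangles triangulation principle such a sensor relies on. The focal length, baseline, and pixel offset are assumed illustrative values, not the XV11's actual calibration.

```python
# Hedged sketch of laser triangulation: a laser and an image sensor
# sit a fixed baseline apart, and the pixel offset of the reflected
# spot on the sensor encodes distance via similar triangles.

FOCAL_LENGTH_PX = 700.0   # image-sensor focal length in pixels (assumed)
BASELINE_M = 0.05         # laser-to-sensor separation in metres (assumed)

def triangulated_distance(pixel_offset: float) -> float:
    """Distance to the object from the reflected spot's pixel offset.

    Nearby objects produce a large offset and distant objects a small
    one, so range resolution degrades with distance.
    """
    return FOCAL_LENGTH_PX * BASELINE_M / pixel_offset

print(triangulated_distance(10.0))  # 3.5 m for a 10-pixel offset
```

Because no round-trip time is measured, this approach avoids the high-speed timing electronics that time-of-flight sensors require.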

Implementation of the technology

LiDAR assemblies are used in the agriculture industry to spray fertilizers in appropriate areas, in
archaeological surveys for mapping, in wind and pollution measurements, and in several other
applications. The implementation of the assembly varies with the primary task to be performed. For
self-driving cars, real-time detection, classification and tracking of dynamic objects is necessary.

A combination of 2-D and 3-D processing techniques builds an occupancy grid of cells, where each cell
is assigned a probability of occupancy. For detection, the point cloud is segmented, and the point
measurements corresponding to each segmented object are determined.

In the classification step, the distribution of local spatial and reflectivity properties is extracted for each
object. A support vector machine (SVM) classifier is then trained in a supervised learning framework to
distinguish the classes of interest [7].
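
A minimal sketch of this supervised step using scikit-learn, with synthetic stand-in data. The specific feature set (bounding-box dimensions plus mean reflectivity) is an assumption; the text only states that spatial and reflectivity properties are used.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic features per segmented object: (length, width, height,
# mean reflectivity). Values are illustrative stand-ins.
pedestrians = rng.normal([0.6, 0.6, 1.7, 0.3], 0.1, size=(50, 4))
cars = rng.normal([4.5, 1.8, 1.5, 0.6], 0.2, size=(50, 4))
X = np.vstack([pedestrians, cars])
y = np.array(["pedestrian"] * 50 + ["car"] * 50)

# Standardise the features, then train a kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[0.5, 0.7, 1.8, 0.25]]))  # -> ['pedestrian']
```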

These operations can be performed in the cloud instead of on-board in real time, keeping the on-board
hardware lighter and simpler. However, transferring data to and from the cloud introduces delays. The
purpose of the robot or machine dictates the approach by weighing these trade-offs.

Conclusion

The high-definition LiDAR system patent belonging to Velodyne Acoustics [8] has kept LiDAR prices
high, creating a need for cheaper, lower-resolution LiDAR. Current appliances and projects use these
cheaper LiDARs. At the other end, the detail captured by state-of-the-art products is such that not only
can objects like pedestrians be detected, but the direction they are facing can also be determined.

Since the accuracy, detail, and range of the cheaper LiDAR assemblies are not optimal, current robots are
severely limited. Velodyne is working on even cheaper "solid state" LiDAR solutions that do not offer a
360-degree view.

Problems also arise because light is extremely fast: measuring its time of flight requires precise and
expensive electronics. Some LiDAR assemblies, like the Neato XV11 LDS, do not rely on time-of-flight
measurements. This removes the need for the hyper-sensitive electronics and reduces the price of the
ranging sensor. However, cutting costs on these LiDAR devices also cuts down on their capabilities.

New techniques use visual and IR photography to deliver 3D images and maps, and can compete with, or
be combined with, LiDAR for more sophisticated and more accurate object classification and tracking.

References

1. G. Postica, A. Romanoni, and M. Matteucci, "Robust moving objects detection in lidar data
exploiting visual cues," in Proc. IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2016.

2. Velodyne LiDAR, "HDL-64E: High Definition Real-Time 3D LiDAR," HDL-64E datasheet, Oct.
2017.

3. R. Amadeo, "Google's Waymo invests in LIDAR technology, cuts costs by 90 percent,"
arstechnica.com, para. 7, Jan. 1, 2017. [Online]. Available:
https://arstechnica.com/cars/2017/01/googles-waymo-invests-in-lidar-technology-cuts-costs-by-90-percent/
[Accessed: Oct. 24, 2017].

4. Velodyne LiDAR, "Puck Hi-Res: High Resolution Real-Time 3D LiDAR Sensor," Puck Hi-Res
datasheet, Oct. 2017.

5. K. Konolige, J. Augenbraun, N. Donaldson, C. Fiebig and P. Shah, "A low-cost laser distance
sensor," 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, 2008,
pp. 3002-3008.

6. A. Azim and O. Aycard, "Detection, classification and tracking of moving objects in a 3D
environment," in Proc. IEEE Intelligent Vehicles Symposium (IV), 2012.

7. D. V. Prokhorov, "Object recognition in 3D lidar data with recurrent neural network," 2009 IEEE
Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami,
FL, 2009, pp. 9-15.

8. D. S. Hall, "High definition lidar system," U.S. Patent 7,969,558 B2, issued Jun. 28, 2011.
