
An Open Approach to Autonomous Vehicles

Autonomous vehicles are an emerging application of automotive technology, but their components are often proprietary. This article introduces an open platform using commodity vehicles and sensors. The authors present algorithms, software libraries, and datasets required for scene recognition, path planning, and vehicle control. Researchers and developers can use the common interface to study the basis of autonomous vehicles, design new algorithms, and test their performance.

Shinpei Kato, Eijiro Takeuchi, Yoshio Ishiguro, Yoshiki Ninomiya, and Kazuya Takeda, Nagoya University
Tsuyoshi Hamada, Nagasaki University

Autonomous vehicles are becoming a new piece of infrastructure. Automotive makers, electronics makers, and IT service providers are interested in this technology, and academic research has contributed significantly to producing their prototype systems. For example, one notable work was published by Carnegie Mellon University.1 Despite this trend, autonomous vehicles are not systematically organized. Given that commercial vehicles protect their in-vehicle system interface from users, third-party vendors cannot easily test new components of autonomous vehicles. In addition, sensors are not identical. Some vehicles might use only cameras, whereas others might use a combination of cameras, laser scanners, GPS receivers, and milliwave radars.

In addition to hardware issues, autonomous vehicles must address software issues. Because an autonomous-vehicles platform is large-scale, it is inefficient to build it up from scratch, especially for prototypes. Open-source software libraries are preferred for that purpose, but they have not yet been integrated to develop autonomous vehicles. The design and implementation of algorithms for scene recognition, path planning, and vehicle control also require multidisciplinary skills and knowledge, often incurring a significant engineering effort. Finally, a basic dataset, such as a map for localization, must be provided to drive on a public road.

Overall, autonomous vehicles are composed of diverse technologies. Building their platform requires multidisciplinary collaboration in research and development. To facilitate this collaboration, we introduce an open platform for autonomous vehicles that many researchers and developers can study to obtain a baseline for autonomous vehicles, design new algorithms, and test their performance, using a common interface.
Figure 1. ZMP Robocar "HV" and examples of sensors and plug-in computers (Velodyne HDL-64e and HDL-32e 3D Lidars, Ibeo LUX 8L 3D Lidar, Hokuyo UTM-30LX Lidar, Point Grey Ladybug 5 and Grasshopper 3 cameras, Javad RTK-GNSS receiver, and workstation and laptop computers). Omnidirectional cameras, Lidar sensors, and GNSS receivers are mounted outside the vehicle, while single-view cameras and computers are set inside.

Vehicles and sensors
We introduce an autonomous-vehicles platform with a set of sensors that can be purchased on the market. We assume that the intelligent modules of autonomous driving, such as scene recognition, path planning, and vehicle control, are located in a plug-in computer connected to vehicular Controller Area Network (CAN) buses. Given that commercial vehicles are not designed to employ such a plug-in computer, we need to add a secure control gateway to the CAN buses. This is often a complex undertaking that prevents researchers and developers from using real-world vehicles.

The ZMP Robocar product provides a vehicle platform containing a control gateway through which a plug-in computer can send operational commands (such as pedal strokes and steering angles) to the vehicle. Figure 1 shows one of their product lines, the ZMP Robocar "HV," which is based on the Toyota Prius. (Details about the product are available at www.zmp.co.jp.) We added a plug-in computer and a set of sensors, such as cameras and light detection and ranging (Lidar) sensors, to this basic platform so that the results of scene recognition, path planning, and vehicle control could apply to autonomous driving.

The specification of sensors and computers depends highly on the functional requirements of autonomous driving. Our prototype system uses various sensors and computers (see Figure 1):

• Velodyne Lidar sensors produce 3D point-cloud data, which can be used for localization and mapping as well as to measure the distance to surrounding objects.
• Ibeo Lidar sensors produce long-range 3D point-cloud data, although their vertical resolution is lower than that of the Velodyne Lidar sensors.
• Hokuyo Lidar sensors produce short-range 2D laser scan data and are useful for emergency stopping rather than for localization and mapping.
• Point Grey Ladybug 5 and Grasshopper 3 cameras can detect objects. The Ladybug 5 camera is omnidirectional, covering a 360-degree view, whereas the Grasshopper 3 camera is single-view, operating at a high frame rate. The former can be used to detect moving objects, and the latter can be used to recognize traffic lights.
• Javad RTK-GNSS sensors receive global positioning information from satellites. They are often coupled with gyro sensors and odometers to correct the positioning information.

Figure 2. Basic control and data flow of algorithms. Point-cloud, 3D map, and image data feed localization and object detection; the current position and detected object positions drive mission planning, object tracking, and reprojection; global and local waypoints connect mission planning, motion planning, and path following. The output velocity and angle are sent as commands to the vehicle controller.

These sensors can be connected to commodity network interfaces, such as Ethernet and USB 3.0. Note that sensors for autonomous vehicles are not limited to them. For example, milliwave radars and inertial measurement units are often preferred for autonomous vehicles.

Regarding a plug-in computer, because commercial vehicles support only a DC 12-V power supply, we add an inverter and battery to provide an AC 100-V power supply for the prototype vehicle in Figure 1. Note that the choice of a plug-in computer depends on the performance requirements for autonomous vehicles. In our project, we started with a high-performance workstation to test algorithms. Once the algorithms were developed, we switched to a mobile laptop to downsize the system. Now, we are aiming to use an embedded system on a chip (SoC), with production vehicles in mind.
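To give a flavor of this commodity connectivity, the following is a minimal POSIX C++ sketch that listens for raw Velodyne data packets over Ethernet. Velodyne Lidars broadcast fixed-size 1,206-byte UDP data packets (port 2368 by factory default); a real driver parses each packet into timestamped 3D points, which this sketch omits.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Minimal sketch: receive raw Lidar data over commodity Ethernet.
int main() {
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(2368);  // Velodyne's factory-default data port
  if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("bind");
    return 1;
  }
  unsigned char packet[1206];     // one fixed-size Velodyne data packet
  for (int i = 0; i < 10; ++i) {  // grab a few packets as a smoke test
    ssize_t n = recv(sock, packet, sizeof(packet), 0);
    if (n < 0) { perror("recv"); break; }
    std::printf("received %zd-byte packet\n", n);
  }
  close(sock);
  return 0;
}
```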
Algorithms
We can roughly classify autonomous-driving components into scene recognition, path planning, and vehicle control. Each class comprises a set of algorithms. For instance, scene recognition requires localization, object-detection, and object-tracking algorithms. Path planning often falls into mission and motion planning, whereas vehicle control corresponds to path following. Figure 2 shows the algorithms' basic control and data flow. Here, we introduce examples of the algorithms used in our autonomous-vehicles platform.

Localization
Localization is one of the most basic and important problems in autonomous driving. Particularly in urban areas, localization precision dominates the reliability of autonomous driving. We use the Normal Distributions Transform (NDT) algorithm to solve this localization problem.2 To be precise, we use the 3D version of NDT to perform scan matching between 3D point-cloud data and 3D map data.3 As a result, localization can perform on the order of centimeters, leveraging a high-quality 3D Lidar sensor and a high-precision 3D map. We chose the NDT algorithms because they can be used in 3D forms and their computation cost does not suffer from map size (the number of points).

Localization is also a key technique for building a 3D map. Because 3D Lidar sensors produce 3D point-cloud data in real time, if our autonomous vehicle is localized correctly, a 3D map can be created and updated by registering the 3D point-cloud data at every scan. This is often referred to as simultaneous localization and mapping.
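As a concrete illustration, the following is a minimal sketch of 3D NDT scan matching using the Point-Cloud Library (PCL), which we describe later in the software stack discussion. The file names and parameter values are placeholders, not Autoware's actual settings.

```cpp
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

// Minimal sketch: estimate the vehicle pose by aligning one Lidar scan
// against a 3D map with PCL's 3D NDT implementation.
int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *map);    // placeholder file names
  pcl::io::loadPCDFile("scan.pcd", *scan);

  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);              // voxel size of the ND cells (m)
  ndt.setStepSize(0.1);                // line-search step for the optimizer
  ndt.setTransformationEpsilon(0.01);  // convergence threshold
  ndt.setMaximumIterations(30);
  ndt.setInputTarget(map);             // the high-precision 3D map
  ndt.setInputSource(scan);            // the latest Lidar scan

  // Seed with the previous pose estimate (identity here for brevity).
  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned, Eigen::Matrix4f::Identity());

  std::cout << "converged: " << ndt.hasConverged()
            << ", fitness: " << ndt.getFitnessScore() << "\n"
            << ndt.getFinalTransformation() << std::endl;  // pose in map frame
  return 0;
}
```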
............................................................

62 IEEE MICRO

Authorized licensed use limited to: Indian Institute of Information Technology Kottayam. Downloaded on July 30,2022 at 12:12:59 UTC from IEEE Xplore. Restrictions apply.
Object detection
Once we localize our autonomous vehicle, we next detect objects, such as vehicles, pedestrians, and traffic signals, to avoid accidents and violations of traffic rules. We focus on moving objects (vehicles and pedestrians), although our platform can also recognize traffic signals and lights. We use the Deformable Part Models (DPM) algorithm to detect vehicles and pedestrians.4 DPM searches and scores the Histogram of Oriented Gradients (HOG) features of target objects on the image captured from a camera.5 We chose these algorithms because they scored the best numbers in past Pattern Analysis, Statistical Modeling, and Computational Learning (Pascal) Visual Object Classes challenges.

Apart from image processing, we also use point-cloud data scanned from a 3D Lidar sensor to detect objects by Euclidean clustering. Point-cloud clustering aims to obtain the distance to objects rather than to classify them. The distance information can be used to range and track the objects classified by image processing. This combined approach with multiple sensors is often referred to as sensor fusion.

Operating under the assumption that our autonomous vehicle drives on a public road, we can improve the detection rate using a 3D map and the current position information. By projecting the 3D map onto the image from the current position, we know the exact road area on the image. We can therefore constrain the region of interest for image processing to this road area so that we save execution time and reduce false positives.
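To illustrate the image-processing side, here is a minimal sketch using OpenCV's bundled HOG-based people detector. This stands in for the HOG feature stage only; our actual detector is DPM, which adds a part-based model on top of such features. The image name is a placeholder.

```cpp
#include <opencv2/opencv.hpp>

// Minimal sketch: HOG pedestrian detection with OpenCV's stock detector.
int main() {
  cv::Mat frame = cv::imread("frame.png");
  if (frame.empty()) return 1;

  cv::HOGDescriptor hog;
  hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

  std::vector<cv::Rect> people;
  hog.detectMultiScale(frame, people);  // sliding-window search over scales

  for (const cv::Rect& r : people)
    cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);  // draw detections
  cv::imwrite("detections.png", frame);
  return 0;
}
```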
Object tracking
Because we perform the object-detection algorithm on each frame of the image and point-cloud data, we must associate its results with other frames on a time basis so that we can predict the trajectories of moving objects for mission and motion planning. We use two algorithms to solve this tracking problem. The Kalman Filter is used under the linear assumption that our autonomous vehicle is driving at constant velocity while tracking moving objects.6 Its computational cost is lightweight and suited to real-time processing. Particle Filters, on the other hand, can work for nonlinear tracking scenarios, in which both our autonomous vehicle and the tracked vehicles are moving.7

In our platform, we use both Kalman Filters and Particle Filters, depending on the given scenario. We also apply them for tracking both on the 2D (image) plane and in the 3D (point-cloud) space.
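As a minimal sketch of the linear case, the following tracks one object on the image plane with OpenCV's Kalman filter under a constant-velocity model. The noise covariances and detections are illustrative, not values from our platform.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>
#include <cstdio>

// Minimal sketch: constant-velocity Kalman filter for one tracked object.
// State: (x, y, vx, vy); measurement: (x, y) from the per-frame detector.
int main() {
  const float dt = 0.1f;  // illustrative 10-Hz sensor cycle
  cv::KalmanFilter kf(4, 2, 0);
  kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
      1, 0, dt, 0,
      0, 1, 0, dt,
      0, 0, 1, 0,
      0, 0, 0, 1);
  cv::setIdentity(kf.measurementMatrix);                          // observe x, y
  cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));     // model noise
  cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1)); // sensor noise
  cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));

  float detections[][2] = {{100, 50}, {104, 52}, {108, 54}};  // made-up centroids
  for (auto& d : detections) {
    cv::Mat prediction = kf.predict();  // a-priori state, used for association
    cv::Mat meas = (cv::Mat_<float>(2, 1) << d[0], d[1]);
    cv::Mat est = kf.correct(meas);     // a-posteriori state
    std::printf("pred (%.1f, %.1f)  est (%.1f, %.1f)\n",
                prediction.at<float>(0), prediction.at<float>(1),
                est.at<float>(0), est.at<float>(1));
  }
  return 0;
}
```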
Projection and reprojection
We augment the scene recognition supported by our platform with sensor fusion of a camera and a 3D Lidar sensor. To calculate the extrinsic parameters required for this sensor fusion, we calibrate the camera and the 3D Lidar sensor. We can then project the 3D point-cloud information obtained by the 3D Lidar sensor onto the image captured by the camera so that we can add depth information to the image and filter the region of interest for object detection.

The result of object detection on the image can also be reprojected onto the 3D point-cloud coordinates using the same extrinsic parameters. We use the reprojected object positions to determine the motion plan and, in part, the mission plan.
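A minimal sketch of the forward projection step with OpenCV follows. The numeric values are placeholders for the extrinsic (rotation, translation) and intrinsic (camera matrix, distortion) parameters obtained from the camera-Lidar calibration described above.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cstdio>
#include <vector>

// Minimal sketch: project 3D Lidar points onto the camera image plane.
int main() {
  std::vector<cv::Point3f> lidar_points = {{1.0f, 0.2f, 5.0f},
                                           {-0.5f, 0.1f, 8.0f}};
  cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);  // rotation (Rodrigues vector)
  cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);  // translation, Lidar -> camera
  cv::Mat K = (cv::Mat_<double>(3, 3) <<        // placeholder intrinsics
               800, 0, 320,
               0, 800, 240,
               0, 0, 1);
  cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);  // distortion coefficients

  std::vector<cv::Point2f> pixels;
  cv::projectPoints(lidar_points, rvec, tvec, K, dist, pixels);
  for (size_t i = 0; i < pixels.size(); ++i)
    std::printf("point %zu -> pixel (%.1f, %.1f), depth %.1f m\n",
                i, pixels[i].x, pixels[i].y, lidar_points[i].z);
  return 0;
}
```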

Mission planning
Our mission planner is semiautonomous. Under the traffic rules, we use a rule-based mechanism to autonomously assign the path trajectory, such as for lane changes, merges, and passing. In more complex scenarios, such as parking and recovering from operational mistakes, the driver can supervise the path. In either case, once the path is assigned, the local motion planner is launched.

Our mission planner's basic policy is that we drive in the cruising lane throughout the route provided by a commodity navigation application. The lane is changed only when our autonomous vehicle is passing the preceding vehicle or approaching an intersection followed by a turn.
Motion planning
The motion planner is a design knob for autonomous driving: it determines the driving behavior, which is not identical among users and environments. Hence, our platform provides only a basic motion-planning strategy. Higher-level intelligence must be added on top of this platform, depending on the target scenarios.

In unstructured environments, such as parking lots, we provide graph-search algorithms, such as A*,8 to find a minimum-cost path to the goal in space lattices.9 In structured environments, such as roads and traffic lanes, on the other hand, the density of vertices and edges is likely high and not uniform, which constrains the selection of feasible headings. We therefore use conformal spatiotemporal lattices to adapt the motion plan to the environment.10 State-of-the-art research encourages the implementation of these algorithms.1
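For the unstructured case, the following is a minimal sketch of A* search on a small occupancy grid. A real planner searches a lattice of vehicle poses whose edges are feasible motion primitives,9 but the priority-queue search loop has the same shape; the grid, start, and goal here are made up.

```cpp
#include <cmath>
#include <cstdio>
#include <queue>
#include <vector>

// Minimal sketch: A* over a 2D occupancy grid with unit move costs.
struct Node { int x, y; double f; };
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

double heuristic(int x, int y, int gx, int gy) {
  return std::hypot(gx - x, gy - y);  // admissible Euclidean heuristic
}

int main() {
  const int W = 8, H = 8;
  int grid[H][W] = {};                         // 0 = free, 1 = obstacle
  for (int y = 1; y < 7; ++y) grid[y][4] = 1;  // a wall to route around
  const int sx = 0, sy = 0, gx = 7, gy = 7;

  std::vector<double> gcost(W * H, 1e18);
  std::priority_queue<Node, std::vector<Node>, Cmp> open;
  gcost[sy * W + sx] = 0;
  open.push({sx, sy, heuristic(sx, sy, gx, gy)});
  const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};

  while (!open.empty()) {
    Node n = open.top(); open.pop();
    if (n.x == gx && n.y == gy) {  // goal popped: minimum-cost path found
      std::printf("reached goal, cost %.0f\n", gcost[gy * W + gx]);
      return 0;
    }
    for (int k = 0; k < 4; ++k) {  // expand 4-connected neighbors
      int nx = n.x + dx[k], ny = n.y + dy[k];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
      double g = gcost[n.y * W + n.x] + 1.0;
      if (g < gcost[ny * W + nx]) {
        gcost[ny * W + nx] = g;
        open.push({nx, ny, g + heuristic(nx, ny, gx, gy)});
      }
    }
  }
  std::printf("no path\n");
  return 0;
}
```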
Path following
We control our autonomous vehicle to follow the path generated by the motion planner. We use the Pure Pursuit algorithm to solve this path-following problem.11 Following the Pure Pursuit algorithm, we break down the path into multiple waypoints, which are discrete representations of the path. At every control cycle, we search for the closest waypoint in the heading direction, limiting the search to beyond a specified threshold distance so that we can relax the change in angle when returning onto the path from a deviated position. The velocity and angle of the next motion are set to values that bring the vehicle to the selected waypoint with a predefined curvature.

We update the target waypoint accordingly until the goal is reached. The vehicle keeps following updated waypoints and finally reaches the goal. If the control of the gas and brake stroking and the steering is not aligned with the velocity and angle output of the Pure Pursuit algorithm because of noise, the vehicle could temporarily leave the path generated by the motion planner. Localization errors could also pose this problem. As a result, the vehicle could come across unexpected obstacles. To cope with this scenario, our path follower ensures a minimum distance to obstacles, overriding the given plan.
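The geometric core of Pure Pursuit is compact: with the target waypoint expressed in the vehicle frame at lookahead distance L, the commanded arc has curvature 2y/L².11 The following sketch illustrates this with made-up numbers; the wheelbase conversion to a steering angle assumes a simple bicycle model, which is our illustration, not necessarily the platform's controller.

```cpp
#include <cmath>
#include <cstdio>

// Minimal sketch of the Pure Pursuit steering law. The target waypoint is
// given in the vehicle frame (x forward, y left).
struct Waypoint { double x, y, velocity; };

double purePursuitCurvature(const Waypoint& target) {
  double L2 = target.x * target.x + target.y * target.y;  // squared lookahead
  return 2.0 * target.y / L2;  // curvature of the arc through the waypoint
}

int main() {
  Waypoint target = {9.5, 1.2, 5.0};  // illustrative waypoint about 9.6 m away
  double kappa = purePursuitCurvature(target);
  const double wheelbase = 2.7;  // roughly a Prius-class wheelbase (m)
  double steer = std::atan(wheelbase * kappa);  // bicycle-model steering angle
  std::printf("curvature %.4f 1/m, steering %.3f rad, velocity %.1f m/s\n",
              kappa, steer, target.velocity);
  return 0;
}
```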
Software stack
We build the software stack of autonomous driving on the basis of open-source software. As a result, even the code implementing the algorithms we have discussed is accessible to the public. The software-stack framework is called Autoware, and it can be downloaded from the project repository (https://github.com/cpfl/autoware).

In this section, we introduce the core components of open-source software used in Autoware. Our autonomous vehicle is entirely operated by Autoware. It has already driven a few hundred miles in Nagoya, Japan.

Robot Operating System
Autoware is based on the Robot Operating System (ROS; www.ros.org), a component-based middleware framework developed for robot applications. In ROS, the system is abstracted by nodes and topics. Nodes represent individual component modules, whereas topics hold the input and output data exchanged between nodes. This is a strong abstraction model for compositional development.

ROS nodes are usually standard C++ programs. They can use any other software libraries installed in the system. Meanwhile, ROS nodes can launch several threads implicitly, and topics are managed by first-in, first-out queues when accessed by multiple nodes simultaneously, so real-time issues must be addressed.

ROS also provides an integrated visualization tool called RViz. Figure 3 shows an example of visualization for perception tasks in Autoware. The RViz viewer is useful for checking the status of tasks.
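The following is a minimal sketch of a ROS node in this style: it subscribes to a point-cloud topic and logs each message. The topic name /points_raw is our assumption for illustration, not a documented interface.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Minimal sketch: a ROS node that consumes Lidar point clouds from a topic.
void callback(const sensor_msgs::PointCloud2::ConstPtr& msg) {
  ROS_INFO("received cloud: %u x %u points", msg->width, msg->height);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_listener");  // register the node with ROS
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/points_raw", 1, callback);
  ros::spin();                              // process callbacks until shutdown
  return 0;
}
```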
Figure 3. Visualization tools for developers. Both a 3D map plane and a 2D image plane can
be displayed. The results of traffic light recognition and path generation are also displayed.

Point-Cloud Library
The Point-Cloud Library (PCL; http://pointclouds.org) was developed to manage point-cloud data. It supports many algorithms in its library packages, including the 3D NDT algorithm.3 Autoware uses this PCL version of 3D NDT for localization and mapping. In our experience, the localization or mapping error can be capped at 10 to 20 cm.

Another use of PCL in Autoware is to implement the Euclidean clustering algorithm for object detection. In addition, ROS uses PCL internally in many places. For example, the RViz viewer provided by ROS is a PCL application.
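A minimal sketch of that clustering step with PCL follows. The tolerance and size thresholds are illustrative, and we assume ground points have already been removed from the scan.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <cstdio>

// Minimal sketch: Euclidean clustering of a Lidar scan for object ranging.
int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("scan.pcd", *scan);  // placeholder; ground removed

  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(scan);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.5);   // points closer than 0.5 m join a cluster
  ec.setMinClusterSize(20);      // reject specks of noise
  ec.setMaxClusterSize(10000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(scan);
  ec.extract(clusters);          // each cluster is one candidate object

  std::printf("found %zu clusters\n", clusters.size());
  return 0;
}
```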
OpenCV
OpenCV (http://opencv.org) is a popular computer vision library for image processing. It supports the DPM algorithm4 and the Histogram of Oriented Gradients features.5 Apart from algorithm implementations, OpenCV provides API library functions (such as for loading, converting, and drawing images), which are useful for constructing a framework of image-processing programs. Autoware combines OpenCV with the RViz viewer of ROS for visualization.

CUDA
CUDA (https://developer.nvidia.com/cuda-zone) is a framework for general-purpose computing on GPUs (GPGPU). Because complex algorithms of autonomous driving, such as NDT, DPM, and A*, are often compute intensive and data parallel, their execution speeds can be improved significantly using CUDA. For example, the DPM algorithm contains a nontrivial number of compute-intensive and data-parallel blocks that can be significantly accelerated by CUDA.12 Because autonomous vehicles need real-time performance, CUDA is a strong way to speed up the algorithms.
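The offload pattern itself is simple. The following minimal CUDA sketch runs an elementwise scaling kernel, standing in for the kind of per-cell work that dominates DPM and NDT; it shows the copy-launch-copy structure, not an actual Autoware kernel.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: a data-parallel kernel scaling an array of scores.
__global__ void scale(float* data, float a, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
  if (i < n) data[i] *= a;
}

int main() {
  const int n = 1 << 20;
  float* h = new float[n];
  for (int i = 0; i < n; ++i) h[i] = 1.0f;

  float* d;
  cudaMalloc(&d, n * sizeof(float));
  cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
  scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // ~4,096 blocks of 256 threads
  cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

  std::printf("h[0] = %f\n", h[0]);  // expect 2.0
  cudaFree(d);
  delete[] h;
  return 0;
}
```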

Figure 4. User interface tools for drivers. Smart tablets are used to input the destination of the trip, displaying a route
candidate. To present more valuable information to drivers, smart glasses and displays can be used together.

Android
Autoware uses Android (www.android.com) applications for the human-driver interface. The driver can search for a route using a navigation map on an Android tablet (see Figure 4). The searched route is reflected onto the 3D map used in Autoware.

openFrameworks
Another piece of software used in Autoware for the driver interface is openFrameworks (www.openframeworks.cc). Although Android applications are functional for receiving input from the driver, openFrameworks provides creative displays for visualizing the status of autonomous driving to the driver. For example, it can be used with smart glasses, as shown in Figure 4.

Datasets
In our platform, a 3D map is required to localize the autonomous vehicle. As we mentioned earlier, we can generate a 3D map using Autoware. However, this turns into a time-consuming routine; maps should instead be provided by mapmakers as part of general datasets for autonomous driving. We are collaborating with Aisan Technology, a mapmaker in Japan, for field-operational tests of autonomous driving in Nagoya. Autoware is designed to be able to use both their 3D maps and those generated by Autoware itself. This compatibility with the third-party format allows Autoware to be deployed widely in industry and academia.

Simulation of autonomous driving also needs datasets. Field-operational tests always carry some risk, and they are not always time efficient, so simulation is desired in most cases. Fortunately, ROS provides an excellent record-based simulation framework called ROSBAG. We suggest that ROSBAG be used primarily to test the functions of autonomous vehicles, with field-operational tests performed for real verification.

Autoware lets researchers and developers use such structured 3D maps and recorded data. Simulation of autonomous driving on a 3D map can be easily conducted using Autoware. Samples of these datasets can also be downloaded through the Autoware website. Given that vehicles and sensors are often expensive, simulation with the sample datasets might be preferred to on-site experiments in some cases.
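A ROSBAG recording can be replayed offline against the same nodes that run on the vehicle. The following minimal sketch uses the rosbag C++ API to read point clouds back from a recorded drive; the bag file and topic names are placeholders.

```cpp
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <sensor_msgs/PointCloud2.h>
#include <cstdio>
#include <string>
#include <vector>

// Minimal sketch: iterate over the point clouds stored in a ROSBAG file.
int main() {
  rosbag::Bag bag;
  bag.open("drive.bag", rosbag::bagmode::Read);  // placeholder recording

  std::vector<std::string> topics = {"/points_raw"};  // assumed topic name
  rosbag::View view(bag, rosbag::TopicQuery(topics));

  size_t count = 0;
  for (const rosbag::MessageInstance& m : view) {
    sensor_msgs::PointCloud2::ConstPtr cloud =
        m.instantiate<sensor_msgs::PointCloud2>();
    if (cloud) ++count;  // feed clouds to the perception stack under test
  }
  std::printf("replayed %zu point clouds\n", count);
  bag.close();
  return 0;
}
```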
Performance requirements
Considering that cameras and Lidar sensors often operate at 10 to 100 Hz, each task of autonomous driving must run under certain timing constraints.

We provide a case study of our autonomous driving system on several computers employing Intel CPUs and Nvidia GPUs. On Intel CPUs, such as the Xeon and Core i7 series, the A* search algorithm consumes the most time, requiring seconds or more if the search area is large. Although faster is better, this might not be a significant problem, given that A* search is often used for mission planning, which launches only when the path needs to change. A more significant problem resides in real-time tasks, such as motion planning, localization, and object detection. The DPM algorithm spends more than 1,000 ms on VGA-size images with the default parameter setting in OpenCV, whereas the NDT and State Lattice algorithms can run in several tens of milliseconds. These execution times must be reduced to drive autonomous vehicles in the real world.

We implemented these algorithms using Nvidia GPUs. As reported elsewhere,12 Nvidia GPUs bring 5 to 10 times performance improvements over Intel CPUs for the DPM algorithm. In fact, we also observed equivalent effects on the A*, NDT, and State Lattice algorithms, although the magnitude of improvement depends on the parameters.

One particular case study of our autonomous driving system used a laptop composed of an Intel Core i7-4710MQ CPU and an Nvidia GTX980M GPU. We demonstrated that most real-time tasks, including the NDT and State Lattice algorithms, can run within 50 ms, whereas the DPM algorithm still consumes 100 to 200 ms or more on the GPU. This implies that our system currently exhibits a performance bottleneck in object detection. Assuming that the vehicle is self-driving at 40 km/hour in urban areas and that autonomous functions should take effect every 1 m, the execution time of each real-time task must be less than 100 ms (at 40 km/hour, or about 11 m/s, the vehicle covers 1 m in roughly 90 ms), although a multitasking environment could make the problem more complicated. If our autonomous driving system remains as is, the performance of GPUs must improve by at least a factor of two to meet our assumption. Application-specific integrated circuit and field-programmable gate array solutions could also be considered.

Real-time tasks other than planning, localization, and detection are negligible in terms of execution time, but some tasks are very sensitive to latency. For example, a vehicle-control task in charge of steering and of acceleration and brake stroking should run with a latency of a few milliseconds.

Finally, because such throughput- and latency-aware tasks are coscheduled in the same system, operating systems must reliably multitask real-time processing. This leads to the conclusion that hardware-software codesign is an important consideration for autonomous driving.

This article has introduced an open platform for commodity autonomous vehicles. The hardware components, including ZMP Robocars, sensors, and computers, can be purchased on the market. The autonomous-driving algorithms are widely recognized, and Autoware can be used as a basic software platform comprising ROS, PCL, OpenCV, CUDA, Android, and openFrameworks. Sample datasets, including 3D maps and simulation data, are also packaged as part of Autoware. To the best of our knowledge, this is the first autonomous-vehicles platform accessible to the public.

References
1. C. Urmson et al., "Autonomous Driving in Urban Environments: Boss and the Urban Challenge," J. Field Robotics, vol. 25, no. 8, 2008, pp. 425-466.

2. P. Biber et al., "The Normal Distributions Transform: A New Approach to Laser Scan Matching," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems, 2003, pp. 2743-2748.
3. M. Magnusson et al., "Scan Registration for Autonomous Mining Vehicles Using 3D-NDT," J. Field Robotics, vol. 24, no. 10, 2007, pp. 803-827.
4. P. Felzenszwalb et al., "Object Detection with Discriminatively Trained Part-Based Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 9, 2010, pp. 1627-1645.
5. N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, 2005, pp. 886-893.
6. R. Kalman, "A New Approach to Linear Filtering and Prediction Problems," ASME J. Basic Eng., vol. 82, no. 1, 1960, pp. 35-45.
7. M. Arulampalam et al., "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Trans. Signal Processing, vol. 50, no. 2, 2002, pp. 174-188.
8. P. Hart et al., "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," IEEE Trans. Systems Science and Cybernetics, July 1968, pp. 100-107.
9. M. Pivtoraiko et al., "Differentially Constrained Mobile Robot Motion Planning in State Lattices," J. Field Robotics, vol. 26, no. 3, 2009, pp. 308-333.
10. M. McNaughton et al., "Motion Planning for Autonomous Driving with a Conformal Spatiotemporal Lattice," Proc. IEEE Int'l Conf. Robotics and Automation, 2011, pp. 4889-4895.
11. R. Coulter, Implementation of the Pure Pursuit Path Tracking Algorithm, tech. report CMU-RI-TR-92-01, Robotics Institute, Carnegie Mellon Univ., 1992.
12. M. Hirabayashi et al., "Accelerated Deformable Part Models on GPUs," to be published in IEEE Trans. Parallel and Distributed Systems, 2015.

Shinpei Kato is an associate professor in the Graduate School of Information Science at Nagoya University. His research interests include operating systems, cyber-physical systems, and parallel and distributed systems. Kato has a PhD in engineering from Keio University. Contact him at shinpei@is.nagoya-u.ac.jp.

Eijiro Takeuchi is a designated associate professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include autonomous mobile robots and intelligent vehicles. Takeuchi has a PhD in engineering from the University of Tsukuba. Contact him at takeuchi@coi.nagoya-u.ac.jp.

Yoshio Ishiguro is an assistant professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include mixed reality, augmented reality, and human-computer integration. Ishiguro has a PhD in information science and studies from the University of Tokyo. Contact him at ishiy@acm.org.

Yoshiki Ninomiya is a designated professor in the Institute of Innovation for Future Society at Nagoya University. His research interests include machine intelligence and image processing and their applications. Ninomiya has a PhD in engineering from Nagoya University. Contact him at ninomiya@coi.nagoya-u.ac.jp.

Kazuya Takeda is a professor in the School of Information Science at Nagoya University. His research interests include media signal processing and its applications. Takeda has a PhD in engineering from Nagoya University. Contact him at kazuya.takeda@nagoya-u.jp.

Tsuyoshi Hamada is an associate professor in the Nagasaki Advanced Computing Center at Nagasaki University. His research interests include massively parallel architectures based on FPGAs and GPUs. Hamada has a PhD in engineering from the University of Tokyo. Contact him at hamada@nacc.nagasaki-u.ac.jp.