
Review

Autonomous Navigation Technology for Low-Speed Small Unmanned Vehicle: An Overview
Xiaowei Li 1,2, * , Qing Li 1 , Chengqiang Yin 3 and Junhui Zhang 1

1 Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
2 School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing 100049, China
3 School of Machinery and Automation, Weifang University, Weifang 261061, China
* Correspondence: lixiaowei@sdjtu.edu.cn

Abstract: In special locations (scenes) such as campuses and closed parks, small unmanned vehicles have gained increasing attention and application. Autonomous navigation is one of the key technologies of low-speed small unmanned vehicles. It has become a research hotspot, but many problems remain, such as perception sensitivity, navigation and positioning accuracy, motion planning accuracy, and tracking control accuracy. In order to sort out the research status of the key technologies of autonomous navigation for small unmanned vehicles more clearly, this paper first reviews those key technologies and presents an analysis and summary. Finally, future research trends of low-speed small unmanned vehicles are given.

Keywords: autonomous navigation; environment awareness; integrated navigation; motion planning

Citation: Li, X.; Li, Q.; Yin, C.; Zhang, J. Autonomous Navigation Technology for Low-Speed Small Unmanned Vehicle: An Overview. World Electr. Veh. J. 2022, 13, 165. https://doi.org/10.3390/wevj13090165

Academic Editor: Joeri Van Mierlo
Received: 8 August 2022; Accepted: 28 August 2022; Published: 30 August 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

An unmanned vehicle is a vehicle that can sense the environment and navigate autonomously without human operation. It uses millimeter wave radar, lidar, GPS, and a camera to detect the surrounding environment, understands the sensing information, identifies obstacles and traffic signs through the scene perception system, and makes appropriate path planning decisions [1]. Unmanned vehicles break through the traditional driving mode, can effectively improve the driving safety and stability of vehicles, and reduce the incidence of traffic accidents; they are an important form of intelligent travel in the future. At present, with the development of unmanned technology, unmanned vehicles have been gradually applied in cities [2], agriculture [3], industry [4,5], special operation environments [6], and other fields. Compared with complex urban settings, closed environments (such as airports) present a great deal of demand and are easier scenarios in which to apply them [7].

Low-speed self-driving vehicles have gradually matured with the progress of driverless vehicles. Since 2010, the demand for low-speed driverless vehicles in closed environments, in fields such as sanitation, logistics, and ports, has increased rapidly. Major countries around the world have actively deployed in the field of low-speed driverless vehicles and vigorously promoted application solutions for specific scenarios; among these, the application of low-speed small driverless vehicles is becoming more and more common. Specifically, low-speed small driverless vehicles differ from those used in urban environments: the driving environment is relatively closed and the speed is less than 45 km/h, with typical application scenarios including airports, ports, mining areas, sanitation, terminal distribution, campuses, scenic spots, and closed parks. They can be divided into three categories: manned, cargo, and special vehicles, as shown in Table 1.




Table 1. Types of low-speed small unmanned vehicles.

Category                | Species
Carry human beings      | Campus bus [8]; scenic spot sightseeing bus; park tour bus
Cargo type              | Express delivery vehicle [9]; workshop material transfer truck [10]; wharf transport vehicle [11]
Special purpose vehicle | Sanitation vehicles (sweepers [12], snow removal vehicles [8], high-speed marking vehicles, etc.); patrol car (community, airport, etc.)

(1) Low-speed small unmanned vehicles carrying human beings


As the driverless cars of General Motors, Waymo, Uber, Google, and Tesla gradually matured, driverless technology turned its attention to public scenes such as school campuses, residential communities, and office parks. After 2007, a number of low-speed driverless buses represented by EasyMile, May Mobility, and Navya appeared one after another, and unmanned passenger bus services were subsequently launched in China, Japan, the United States, Germany, and Australia. The most likely use for unmanned passenger buses is to supplement or expand the public transportation system; they are especially suitable for meeting the travel needs of areas such as university campuses, retirement communities, and entertainment or business districts [7]. Navya's unmanned low-speed bus is 4.7 m long and 2.1 m wide, can carry 15 passengers, has a maximum speed of 45 km/h, and has been tested at 25 km/h on public roads [7]. On 1 September 2016, the world's first unmanned bus line, operated with the small electric driverless bus EZ10, was put into trial operation in Dubai, suggesting that driverless buses may reach practical deployment earlier than smart passenger cars [13].
(2) Low-speed small unmanned vehicles carrying cargo
Logistics distribution field: In the development process of the modern logistics indus-
try, unmanned distribution logistics service based on unmanned driving technology [14]
has become the development trend [15]. Although there are still many limitations in the
current development process, unmanned distribution logistics services will become the
main form of logistics industry operations in the future. From the global application
situation, unmanned distribution is the most widely used field of low-speed automatic
driving. Starship Technologies, a British company, was an early maker of unmanned delivery vehicles. Its vehicle is equipped with nine cameras and a complete obstacle avoidance system, can perform tasks fully automatically, drives at a speed of four miles per hour, and can transport 20 pounds (about 9 kg) of goods at a time. Unmanned distribution is currently most developed in the United States: Nuro launched the low-speed unmanned distribution vehicle R1 in December 2018, and the Robby robot of Robby Technologies in the United States, as well as CarriRo Delivery released by ZMP, a Japanese robot company, have also targeted low-speed small unmanned delivery vehicles. In China, Jingdong (JD) Logistics released its fifth-generation intelligent express vehicle, and Meituan launched an L4-level unmanned delivery vehicle. With the outbreak of the COVID-19
epidemic, the demand for “unmanned economy” scenes has been continuously expanded,
and unmanned vehicles can replace manual labor to complete the “last mile” distribution
work to a certain extent. Many enterprises are using unmanned vehicles to distribute
materials and support the anti-epidemic.
(3) Special low-speed small unmanned vehicles
Sanitation cleaning field: With the development of artificial intelligence, an unmanned
sweeper is proposed, which can realize automatic driving cleaning and overcome obstacles

independently [16]. For large-scale cleaning operations and cleaning work in severe rain and snow, unmanned driving can not only improve the working environment and work efficiency of sanitation personnel but also greatly reduce the operating cost of sanitation cleaning. In addition, indoor cleaning robots were developed earlier; after the development of unmanned technology, unmanned small sweepers have also been applied to outdoor scenes. For example, Envay, a German driverless company, and FYBOTS, a French company, have successively launched driverless sanitation vehicles, while well-known products in China include Wo Xiaobai, an unmanned sweeper released by Smart Walker.
Security field [17]: For airports, closed parks, campuses, and communities, low-speed small unmanned vehicles can effectively replace patrol personnel, realize 24-hour patrol inspection, and effectively ensure the safety of public places. For example, the unmanned patrol car launched by Singapore's Otsaw Company and the one launched by UISEE were applied at Hong Kong International Airport in November 2021.
To sum up, due to its unique system structure, a low-speed unmanned vehicle can
not only realize conventional driving behavior but also realize special driving abilities
such as autonomous environment perception, motion planning, and navigation control. It
can effectively improve the safety and reliability of vehicles, reduce the fatigue and labor
intensity of drivers, and make low-speed unmanned vehicles such as small sweepers, small
scenic sightseeing buses, and small logistics distribution vehicles become the research
hotspot of researchers. Its development also has broad application prospects and scientific
research value. Unmanned vehicles integrate image processing, automatic control, artificial intelligence, and many other technologies to realize autonomous navigation. In [18], a neural network model is developed to solve the problem of autonomous ground navigation and to improve navigation ability in complex terrain environments. Whether an unmanned vehicle can be utilized in a complex environment is judged above all by its autonomous navigation capability, which is the core criterion marking its degree of intelligence.
The so-called autonomous navigation means that unmanned vehicles can sense the external environment through their own sensors, plan their routes autonomously, and avoid dynamic and static obstacles when encountering them [19]. Autonomous navigation therefore mainly solves three problems: first, where am I? Second, where am I going? Third, how should I get there? Solving them allows the unmanned vehicle to reach its destination autonomously and safely, and doing so requires studying the autonomous navigation technology of unmanned vehicles. Generally speaking, the current mainstream autonomous navigation schemes are divided into visual navigation [20,21] and laser navigation. In [22], a new PID autonomous navigation system for intelligent vehicles based on ROS and lidar is proposed, which uses lidar to scan the environment and build a map, and realizes visualization through Rviz. The PID algorithm is integrated into ROS, and the deviation correction accuracy is improved by filtering. It has a good effect on map construction and autonomous navigation and is superior to traditional AMCL navigation in autonomy and efficiency. In [23], a light autonomous vehicle system
VOLWA based on pure vision is proposed, which adopts a visual place recognition algorithm that removes dynamic targets through channel selection, effectively improving place recognition performance; the resulting autonomous navigation system runs well.
In addition to the above two independent sensor autonomous navigation systems, there
are also multi-sensor fusion methods such as lidar and vision fusion methods to realize
autonomous navigation systems [24,25].
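The PID-based deviation correction described in [22] can be illustrated with a minimal discrete PID loop. The class, the gains, and the simulated single-integrator plant below are illustrative assumptions, not details of the cited system.

```python
# Minimal discrete PID controller for heading correction, in the spirit of
# the lidar/ROS navigation system of [22]. Gains and the simulated plant
# are illustrative values only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate the integral term; estimate the derivative by finite difference.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(pid, target=1.0, steps=400):
    """Drive a first-order (integrator) plant toward `target` heading (rad)."""
    heading = 0.0
    for _ in range(steps):
        u = pid.step(target - heading)
        heading += u * pid.dt  # simple integrator plant
    return heading

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
final = simulate(pid)
```

In a real stack the control output would feed a steering actuator rather than an idealized integrator, and the gains would be tuned for the vehicle dynamics.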
Generally speaking, autonomous navigation technology mainly includes environment
awareness technology, map building technology, motion planning technology, and tracking
control technology. At present, there are few overview papers on autonomous navigation
technology of low-speed small unmanned road vehicles. This paper mainly reviews and
analyzes the key technologies of autonomous navigation of road vehicles, and puts forward
the prospect of future development.

The rest of this article is organized as follows: Section 2 reviews environment awareness technology, Section 3 reviews map building technology, Section 4 reviews navigation and positioning technology, Section 5 reviews motion planning technology, Section 6 reviews tracking control technology, and Section 7 summarizes this article.

2. Review of Environmental Perception


At present, most teams engaged in driverless vehicle research focus on perception or control for medium- and high-speed driving, while relatively little research addresses low-speed driving in specific environments. In order to achieve the goal of autonomous navigation, low-speed
small unmanned vehicles first need to be able to understand the state information of the
vehicle itself and the environmental information around the vehicle. The acquisition of this
information requires the vehicle sensing system to provide the basis for system decision-
making, which is also one of the key technologies of autonomous navigation systems [26].
The environmental sensing system includes internal state acquisition sensors and external
sensors for environmental sensing [27].
The state information of unmanned vehicles mainly includes the speed, acceleration,
inclination angle, position, and other information of the vehicles [28]. This kind of informa-
tion is mainly measured by inclination sensors, gyroscopes, and other sensors. In Table 2,
the vehicle’s own attitude sensor characteristics are shown [29].

Table 2. Sensor characteristics of vehicle itself.

Sensor   | Function                          | Characteristic                   | Precision        | Disadvantages
GPS      | Navigation and positioning        | Geolocation and time information | Centimeter scale | Signals are blocked by high-rise buildings, and so on
IMU      | Control and navigation            | Rate and magnetic field          | Centimeter scale | Affected by violent force; accumulated errors result in drift
Encoders | Position, direction, and velocity | An analog or digital signal      | Meter level      | Errors often occur in measurement
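The drift listed for the IMU in Table 2 comes from integrating sensor error over time. A small simulation, assuming a constant accelerometer bias as the only error source (an illustrative value), shows how position error grows quadratically:

```python
# Illustration of IMU drift (Table 2): a constant accelerometer bias,
# integrated twice, yields position error growing with the square of time.
# Bias and time step below are illustrative assumptions.

def dead_reckon(bias=0.05, dt=0.01, steps=1000):
    """Integrate a biased, otherwise-zero acceleration into position."""
    v = 0.0
    x = 0.0
    for _ in range(steps):
        a = 0.0 + bias  # true acceleration is zero; the sensor reads the bias
        v += a * dt
        x += v * dt
    return x

# After 10 s, a 0.05 m/s^2 bias alone produces roughly bias * t^2 / 2 = 2.5 m of drift.
drift = dead_reckon()
```

This is why IMU output is normally corrected by an absolute reference such as GPS, as discussed in the fusion methods later in this section.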

The external environment sensing system requires external sensing sensors, including
lidar, radar, and ultrasonic sensors, as well as monocular, stereo, omni-directional, infrared,
and event cameras [30].
In Table 3, external sensors are briefly shown [30].

Table 3. Vehicle external sensor characteristics.

Sensor | Detection Distance (m) | Accuracy | Function | Advantages | Disadvantages | Applicable Types (number of the four: bus, delivery vehicle, sanitation vehicle, patrol car)
Millimeter wave radar [31] | <250 | Medium | Detect the position and speed of the target | Unaffected by environment; long detection distance | Small detection angle | 3
Lidar | <200 | High | Detect the position and speed of the target | Long detection distance; wide field of view; high data acquisition accuracy; unaffected by lighting conditions | Performance declines in bad weather | 1
Ultrasonic | <5 | Low | Target detection | Low cost and small volume | Low precision; narrow field of view; blind spots | 1
Monocular camera | - | High | Target detection | Image has color, texture, and high resolution | Affected by weather and lighting conditions | 4
Stereo camera | <100 | High | Distance estimation | Obtains color and motion information | Vulnerable to weather and lighting conditions; narrow vision | 4
Omnidirectional camera | - | High | SLAM and 3D reconstruction | Wide field of view | Vulnerable to weather and lighting conditions; high computing power | 1
Infrared camera | - | Low | Object detection | Good performance at night | No color or texture information; low accuracy | 2
Event camera | - | Low | Object detection | Dynamic imaging | Affected by weather and lighting conditions | 1

To sum up, low-speed driverless vehicles need to choose different sensor configuration schemes according to different application scenarios. Although most sensor configurations remain similar to those of driverless vehicles used on expressways, given the particularity of low-speed driverless vehicles, their onboard sensors differ in some respects from those of high-speed driverless vehicles.
Low-speed small unmanned vehicles carrying human beings: In the configuration of sensors, IMU, GNSS, lidar, encoder, and camera are routinely selected. However, because the operating environment lacks the complexity of urban roads and does not involve densely populated areas or long-distance highways, the cost need not be very high. The sensor configuration parameters are chosen for economy, and the performance and quantity of sensors differ markedly from those of high-speed unmanned vehicles [32].
Low-speed small unmanned vehicles carrying cargo: Given the use scenarios, the on-board sensor configuration of cargo driverless vehicles also emphasizes economy, and since they are unlikely to work at night, the implementation complexity is lower; however, to support the human-computer interaction between cargo unmanned vehicles and recipients, additional sensors must be added to the vehicle-mounted electronic control system.
Special low-speed small unmanned vehicles: Considering that unmanned sweepers need to collect garbage and unmanned inspection vehicles need to detect and identify targets, a sensing scheme based on visual sensors can be adopted, and expensive sensors such as lidar can be omitted in campus and closed-park environments. For example, an infrared camera can be used. As shown in Table 3, the infrared camera has some disadvantages. To address its low resolution, [33] mentions that enhancing the contrast of infrared images, adopting the YOLOv3 neural network, and building a stereoscopic infrared vision system are effective methods for the detection problem. In [34], it is mentioned that a visible light camera can be fused with an infrared camera to obtain information-rich images, and infrared images can be combined with a lidar point cloud to obtain the depth and position of the target; these measures may overcome the shortcomings of infrared cameras.
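The category-dependent sensor configurations discussed above can be captured as a simple lookup, useful for checking a vehicle's fit-out at integration time. The suites below are assembled from the trade-offs in the text but are illustrative examples, not definitive configurations.

```python
# Illustrative sensor suites for the three vehicle categories discussed
# above. The exact selections are example assumptions reflecting the
# cost/function trade-offs in the text, not a prescribed configuration.

SENSOR_SUITES = {
    "passenger": {"imu", "gnss", "lidar", "encoder", "camera"},
    "cargo":     {"imu", "gnss", "camera", "ultrasonic"},
    "special":   {"imu", "camera", "infrared_camera", "encoder"},
}

def validate_suite(vehicle_type, available_sensors):
    """Return the sensors still missing for the requested vehicle type."""
    required = SENSOR_SUITES[vehicle_type]
    return required - set(available_sensors)

# A special-purpose vehicle missing its night-capable infrared camera:
missing = validate_suite("special", ["imu", "camera", "encoder"])
```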
The environmental perception of unmanned vehicles mainly includes: (1) road state
information, and (2) pedestrian, vehicle, and other obstacles detection.
(1) Road state information perception
This mainly concerns identifying the road environment of low-speed unmanned vehicles. The current urban road environment is complex: for unmanned vehicles the perception requirements are high, the potential safety hazards are numerous, and the expensive equipment required leads to high costs. For low-speed driverless vehicles, by contrast, the roads to be identified are mainly closed or semi-closed park roads, campus roads, scenic roads, and community roads. These roads may present complex pavement conditions such as uneven surfaces and accumulated water, and, given the characteristics of low-speed scenarios, there are few lane lines, ground traffic signs, or traffic lights. The perception of road state information therefore mainly depends on the camera and adopts visual recognition.
At present, the perception of vehicle road state in urban scenes mainly focuses on lane detection, traffic signs, and traffic signals, while research on road surface state is limited; the related data sets mainly cover test sections or urban roads, and data sets for complex road conditions are scarce.
Specifically, for the low-speed small unmanned vehicles studied in this paper, ac-
cording to different application scenarios, different vehicles have different perception
requirements for road state information. The details are as follows:
Low-speed small unmanned vehicles carrying human beings: These vehicles are mainly used as campus buses or scenic sightseeing buses on campuses or in closed parks. Different from unmanned vehicles used on conventional urban roads, they mainly face unstructured or semi-structured roads, so it is necessary to consider identifying and analyzing road infrastructure conditions [35]; facing rugged road surfaces, it is also necessary to consider configuring high-efficiency energy-regenerative shock absorbers [36].
Low-speed small unmanned vehicles carrying cargo: For small unmanned vehicles
in the logistics field, because the working environment is mostly unstructured or semi-
structured roads such as communities, it is important to consider identifying the road
status of communities to avoid vehicle failure caused by potholes and protrusions [37].
Special low-speed small unmanned vehicles: For unmanned vehicles in the field of sanitation, because most use scenarios involve unstructured or semi-structured road conditions, it is necessary to consider potholes, protrusions, accumulated water, and ice on the road surface in the process of garbage cleaning, so as to avoid incomplete garbage collection or misidentification (for example, treating accumulated water or glass bottles as collectable garbage) during the operation of the sweeper. For unmanned inspection vehicles in the field of security, it is likewise necessary to focus on potholes, protrusions, accumulated water, and ice on the road surface, so as to avoid skidding and vehicle failure.
(2) Detection of surrounding pedestrians, vehicles, and other obstacles
Due to the characteristics of low-speed unmanned vehicles’ application scenarios,
target detection tasks include vehicle [38], pedestrian, non-motor vehicle, and general
obstacle detection.
There are two main means of environmental perception: visual perception and radar
perception [39]. For low-speed unmanned vehicles, visual perception has become a widely
used environmental perception method because of its low input cost, simple operation,
and large amount of information. Image segmentation and object detection are two key
technologies in visual perception systems [40]. The main form of image segmentation is
semantic segmentation [41]. In visual perception technology, a semantic segmentation
algorithm is usually used to segment background categories such as sidewalks, sky, and
buildings because of its richer image understanding, so as to extract the workable area
of unmanned vehicles. Target detection technology is widely used to detect pedestrians,
vehicles, and other targets in the perceptual picture because of its higher detection speed
and the characteristics of obtaining the position and number of perceptual objects in the
scene. Recently, Deep Neural Networks (DNNs) have been used to solve object detection and classification, road detection, and image semantic segmentation in automatic driving perception [42–44]. At present, frequently used classic networks include Fast R-CNN, R-FCN, Mask R-CNN, YOLOv5, SSD, etc., all of which have achieved good results on KITTI and other automatic driving road data sets.
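The drivable-area extraction via semantic segmentation described above can be sketched as a simple post-processing step on a network's per-pixel labels. The class IDs here are hypothetical; real networks define their own label maps.

```python
# Extracting the drivable ("workable") area from a semantic segmentation
# output, as described above. Class IDs are hypothetical assumptions;
# real label maps depend on the network and training data set.

ROAD, SIDEWALK, SKY, BUILDING = 0, 1, 2, 3
DRIVABLE_CLASSES = {ROAD}

def drivable_mask(label_grid):
    """Turn a 2D grid of per-pixel class IDs into a boolean drivable mask."""
    return [[cls in DRIVABLE_CLASSES for cls in row] for row in label_grid]

# A tiny 3x3 "image": sky/buildings at the top, road at the bottom.
labels = [
    [SKY,      SKY,  BUILDING],
    [SIDEWALK, ROAD, ROAD],
    [ROAD,     ROAD, ROAD],
]
mask = drivable_mask(labels)
```

In practice the mask would be produced from a full-resolution network output and post-processed (e.g., connected-component filtering) before being handed to the planner.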
In [45], a YOLOv5 neural network model is adopted, which features fast training, easy installation and use, strong robustness, and high recognition accuracy under severe weather conditions, helping to improve the accuracy of unmanned vehicle target detection. Another commonly used model is SSD, used in [46], which has fast detection speed and good real-time performance; however, the model is difficult to converge, which limits its effectiveness on small targets.
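Detectors such as SSD and YOLOv5 mentioned above share a common post-processing step, non-maximum suppression (NMS), which prunes overlapping candidate boxes. A minimal sketch, with an illustrative IoU threshold:

```python
# Minimal non-maximum suppression (NMS), the post-processing step shared
# by single-shot detectors such as SSD and YOLOv5. Boxes are tuples
# (x1, y1, x2, y2, score); the 0.5 IoU threshold is an illustrative value.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep highest-scoring boxes, dropping overlaps above iou_thresh."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) <= iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [
    (0, 0, 10, 10, 0.9),    # pedestrian, high confidence
    (1, 1, 11, 11, 0.8),    # overlapping duplicate of the same pedestrian
    (50, 50, 60, 60, 0.7),  # separate vehicle detection
]
survivors = nms(detections)
```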
Radar sensing mainly involves active ranging sensors such as millimeter wave radar and lidar, often combined through multi-sensor fusion. Fusing lidar, millimeter wave radar, ultrasonic, and other sensors better meets the sensing needs of low-speed unmanned vehicles on complex roads and in bad weather; compared with single laser point cloud data, fusion processing can reduce the data volume to be handled and improve real-time performance. At the same time, the fused data can indicate whether obstacles are present during trajectory planning without requiring their specific details. For
lidar, it collects the point cloud of the environment, which contains accurate three-dimensional coordinates and surface reflectivity of objects, greatly helping to determine the accurate position of the target in three-dimensional space. However, the point cloud data itself is sparse and disordered, which makes it more difficult to process

and store the point cloud compared with the image. Moreover, the point cloud lacks color
information, which makes it difficult to judge the category of the target and identify the
dynamic target with color.
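One common way to tame the sparse, unordered point clouds discussed above is voxel-grid downsampling: all points falling in the same voxel are averaged into one. A pure-Python sketch, with an illustrative voxel size:

```python
# Voxel-grid downsampling of a lidar point cloud: points sharing a voxel
# cell are averaged into a single representative point, reducing data
# volume before further processing. The 0.5 m voxel size is illustrative.

from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    """Average all (x, y, z) points sharing a voxel cell into one point."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells[key].append((x, y, z))
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in cells.values()
    ]

# Two nearby returns collapse into one point; the distant return survives.
cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 1.0)]
reduced = voxel_downsample(cloud)
```

Point cloud libraries provide optimized versions of this operation; the sketch only illustrates the idea.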
At present, most visual detection algorithms are based on deep convolutional neural networks, whose core operation is convolution; more convolution operations may run into insufficient computing power. In fact, most visual data in the field of autonomous driving is useless background: traditional visual processing algorithms still consider the whole frame even when most of the background has not changed, wasting a great deal of computing power and time. Event cameras, which report only per-pixel brightness changes, have strong advantages in target tracking, motion recognition, and related fields, and are particularly suitable for automatic driving; their emergence helps address this visual processing problem.
Visual perception is an important means to perceive the external environment of
low-speed small unmanned vehicles. Facing complex weather conditions, it is difficult to
realize complex weather environment perception only by a single sensor. Other sensors
can be integrated through visual sensors. The typical scene is to integrate sensors such
as radar and camera on foggy and snowy days [47], as this overcomes the shortcomings
of a single sensor in complex weather, and can solve the problem of complex weather
environment perception to a certain extent. Vision sensors play an irreplaceable role in road target recognition and scene understanding, and vision is also an essential means of detecting road damage and water accumulation in complex road environments. Therefore, the fusion of vision and lidar point cloud information offers higher detection accuracy and good real-time performance compared with a single recognition method in obstacle recognition [48,49], and
various sensors often need to work together and complement each other’s advantages to
create an environment perception solution for automatic driving [50,51]. In Table 4, the
main Sensor Fusion Technologies are briefly shown [30,52].

Table 4. Sensor fusion technologies.

Sensor Combination | Realization Function | Characteristic
Vision-LiDAR/Radar | Localization, object detection, and environment modeling | High calculation efficiency and modeling accuracy
Vision-LiDAR | Dynamic object tracking | Avoids trajectory loss caused by LiDAR
GPS-IMU | Absolute localization system | Cumulative error is reduced and the calculation amount is small
Vision-Odometry | Localization | Frequency: 35 Hz; average error: 34 cm; angle error: 1-3 degrees
Odometry-Magnetic sensor | Localization | Better results in attitude estimation
Stereo vision-IMU [53] | Localization | Improves the accuracy of state estimation
GPS, odometry, inertial, laser sensor [54] | Location estimation | Small estimation error
LiDAR, IMU, wheel odometry [55] | Localization | Error increases when the vehicle moves rapidly
Radar and infrared [56] | Multi-object tracking | Phase delay of target measurement data
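The GPS-IMU entry in Table 4 reduces cumulative error by correcting fast but drifting inertial dead reckoning with slow, absolute GPS fixes. A one-dimensional complementary-filter sketch illustrates the idea; the bias, blend factor, and scenario are illustrative assumptions, not a specific cited system.

```python
# 1-D sketch of GPS-IMU fusion (Table 4): integrated IMU motion is pulled
# toward absolute GPS fixes by a complementary filter. Bias value, blend
# factor alpha, and the stationary scenario are illustrative assumptions.

def fuse(gps_fixes, imu_accels, dt=0.1, alpha=0.9):
    """Blend integrated IMU position with GPS; alpha weights the IMU path."""
    x, v = 0.0, 0.0
    for a, gps in zip(imu_accels, gps_fixes):
        v += a * dt
        x_pred = x + v * dt                      # IMU-only prediction (drifts)
        x = alpha * x_pred + (1 - alpha) * gps   # pull toward the absolute fix
    return x

# Scenario: the vehicle stands still at x = 0. The IMU has a constant
# +0.2 m/s^2 bias; GPS reports the true position.
n = 200
biased_accel = [0.2] * n
gps = [0.0] * n

fused = fuse(gps, biased_accel)
imu_only = fuse(gps, biased_accel, alpha=1.0)  # alpha = 1 ignores GPS entirely
# Fusion bounds the drift that pure inertial integration accumulates.
```

A production system would use a Kalman filter with modeled noise covariances instead of a fixed blend factor, but the complementary structure is the same.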

To sum up, the target detection technology around low-speed small unmanned vehicles is largely the same as that of ordinary unmanned vehicles such as urban high-speed ones. The differences are that the types of obstacles and targets to identify vary slightly, and the focus of detection technology may differ to some extent, which is determined by the simpler application scenarios of low-speed unmanned vehicles.
Low-speed small unmanned vehicles carrying human beings: The application scenarios are mainly non-urban environments such as campuses and parks, with few traffic lights on the route, fixed driving routes, daytime-only operation, a stable environment, and no complex urban buildings [57]. Perception is mainly visual, supplemented by radar. In special weather such as rain, snow, and fog, a vision-plus-radar fusion perception scheme can be adopted to ensure the safety and reliability of vehicle operation.
Low-speed small unmanned vehicles carrying cargo: Unmanned distribution vehicles must consider the complexity of traffic conditions along the driving route. At present, the Zhongtong unmanned distribution vehicles developed by China's Matrix Data Technology adopt four 32-line lidars, realizing long-distance ranging and high-precision environment perception. The driving scene of JD.COM's unmanned delivery vehicle in China is a non-motor-vehicle lane at a speed of 15 km/h; the radar-image pre-fusion algorithm PAI3D is adopted to realize 3D target detection, which overcomes the inaccurate depth estimation of monocular-only 3D detection and reduces false and missed detection of obstacles [58]. China's SF unmanned vehicles are applied on campuses to carry out contactless distribution services; they adopt vision and radar fusion perception, have a 120 m perception range, and can operate normally under complex weather conditions such as fog, rain, and at night.
Special low-speed small unmanned vehicles: In the field of sanitation, unmanned sweepers must consider the identification and detection of road garbage categories, with emphasis on early warning of garbage that cannot be collected to avoid equipment failure. In [59], two Mask R-CNN neural network models are constructed to automatically identify road garbage. In [60], a garbage type identification algorithm and a road garbage coverage algorithm based on the Faster R-CNN model are proposed to identify road garbage and realize high-efficiency, energy-saving automatic cleaning. In [61], an unmanned sweeper adopts a centralized multi-sensor fusion algorithm based on vision and radar sensors to detect pedestrians, vehicles, and other targets. In the field of security, unmanned patrol cars need to focus on pedestrians and vehicles; [62] uses thermal images, fast contour models, and particle filters to detect, track, and identify pedestrians in real time.
For the construction of an environment awareness system for low-speed small unmanned vehicles, which are still at the demonstration application stage (a stage whose value was highlighted by their importance during the COVID-19 epidemic), price is not yet the main obstacle to scene application. In the future, with the popularization of large-scale applications, the cost of vehicle-mounted lidar, millimeter wave radar, and cameras will gradually decrease; sensors usually applied to high-speed unmanned vehicles on urban roads will then be applied to low-speed small unmanned vehicles, which will also improve the safety and reliability of their sensing systems.

3. Review of Map Building


Mapping is the description of the environment built up during vehicle movement; maps take
two forms, metric and topological. A metric map accurately represents the positional
relationships of objects and is further divided into sparse and dense maps.
Among them, sparse maps express the environment abstractly and cannot capture all the
information of the surroundings, while dense maps are usually composed of many small
blocks at a certain resolution but consume a large amount of storage space and may fail
at large scale. Topological maps are composed of nodes and edges; they mainly express the
connectivity between map elements and are not good at expressing maps with complex
structures [63]. In recent years, mapping methods based on multi-sensor fusion, such as
lidar, camera, and IMU fusion, have gradually become a focus of development. Fusing the
information of multiple sensors increases the robustness of mapping for low-speed small
unmanned vehicles and yields more accurate, information-rich maps. In addition, because
loss of GPS lock causes serious positioning errors, SLAM technology, which can build maps
for unmanned vehicles in unknown environments, is the basis of autonomous navigation.
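The dense metric map described above can be illustrated with a minimal occupancy grid. The sketch below is an illustrative toy, not any particular mapping library: each cell stores the log-odds of being occupied, updated from simulated lidar hits and pass-through rays.

```python
import math

class OccupancyGrid:
    """Toy dense metric map: each cell stores log-odds of being occupied."""

    def __init__(self, size_m=20.0, resolution=0.1):
        self.res = resolution
        n = int(size_m / resolution)
        self.logodds = [[0.0] * n for _ in range(n)]  # 0.0 -> p = 0.5 (unknown)

    def _cell(self, x, y):
        return int(x / self.res), int(y / self.res)

    def update(self, x, y, occupied, l_occ=0.85, l_free=-0.4):
        """Add log-odds evidence for a lidar hit (occupied) or a pass-through ray."""
        i, j = self._cell(x, y)
        self.logodds[i][j] += l_occ if occupied else l_free

    def probability(self, x, y):
        i, j = self._cell(x, y)
        return 1.0 / (1.0 + math.exp(-self.logodds[i][j]))

grid = OccupancyGrid()
for _ in range(3):                       # three lidar hits on the same cell
    grid.update(5.0, 5.0, occupied=True)
grid.update(2.0, 2.0, occupied=False)    # one ray passed through this cell
print(round(grid.probability(5.0, 5.0), 2))  # → 0.93 (likely occupied)
print(round(grid.probability(2.0, 2.0), 2))  # → 0.4 (likely free)
```

The resolution parameter makes the storage trade-off explicit: halving the cell size quadruples memory, which is exactly why dense maps become expensive at scale.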
(1) Multi-sensor fusion technology to build high-precision maps. Lidar is used as the
main sensor to scan a point cloud; the images collected by an auxiliary camera are fused
with it, the data (traffic signs, street signs, etc.) are analyzed, and finally the scene
map is synthesized. Based on multi-sensor fusion technology, outdoor high-precision map
scenes can be constructed for typical low-speed application scenarios such as factories,
communities, and schools.
(2) SLAM map construction method. As a core capability for autonomous navigation of
low-speed small unmanned vehicles, SLAM can gradually build globally consistent
high-precision maps in an unknown environment without prior information. At present, the
sensors commonly used for SLAM map construction are lidar and visual cameras,
corresponding to laser SLAM and visual SLAM algorithms, respectively.
Visual SLAM can be divided into monocular, binocular (stereo), and RGB-D types according
to the camera used. The biggest problem with a monocular camera is that it cannot obtain
accurate depth information; a binocular camera is difficult to calibrate and consumes
considerable computing power to calculate depth from pixel disparities. An RGB-D camera
can obtain depth directly through its own laser transmitter without heavy computation,
but its small measurement range and large noise make it unsuitable for large-scale SLAM
on unmanned vehicles, and visual SLAM in general is easily affected by illumination.
Laser SLAM offers high measurement accuracy, strong anti-interference ability, a wide
measurement range, and fast ranging, and it is unaffected by illumination, which makes it
well suited for obtaining the scale information of the environment. However, lidar data
lack texture and color, which makes loop closure detection relatively difficult. It is
therefore necessary to fuse laser point cloud information with visual information and use
the fused information to realize SLAM.
Mapping technology is one of the core technologies of unmanned vehicles. For low-speed
unmanned vehicles, mapping technology may be different in different application scenarios.
Low-speed small unmanned vehicles carrying human beings: in [64], medium- and low-speed
unmanned buses are mapped by laser SLAM; the collected data are inherently robust to
environmental conditions, which improves mapping accuracy.
Low-speed small unmanned vehicles carrying cargo: Version 4.0 of the unmanned
distribution vehicles of JD.COM, China uses a vision-radar fusion scheme and adopts IMU
and SLAM technology to realize point cloud mapping of the full scene. Unmanned forklifts
adopt lidar 3D SLAM to realize real-time point cloud mapping, which is simple to deploy
and can be put into use quickly [65].
Special low-speed small unmanned vehicles: in [66], laser SLAM is used to scan
and model the surrounding environment and generate point cloud images. However, for
unmanned sweepers and patrol cars used in outdoor places such as campuses, communities,
and parks, considering the economic requirements, it is necessary to focus on using visual
SLAM to build maps in the future.
The application scenarios of low-speed small unmanned vehicles are mostly non-urban
roads. The routes of vehicles such as unmanned sweepers, distribution vehicles, and
patrol cars are relatively fixed, so maps can be updated offline. In the future, a
regional cloud server scheme will be adopted: multiple vehicles will work together, and
maps will be built and updated in real time.

4. Review of Navigation and Positioning Technology


Low-speed small unmanned vehicles need to rely on their autonomous navigation
system to achieve their own high-precision positioning, which is also the basis for vehicle
motion planning and decision control.
Autonomous navigation and positioning technologies for low-speed small unmanned
vehicles can be divided into three categories:
Relative navigation and positioning: The inertial navigation system (INS) and dead
reckoning system (DRS) are commonly used for relative positioning; their outputs are
highly autonomous, almost free from external interference, and offer good smoothness and
short-term accuracy [67]. The main disadvantage of relative positioning is that
measurement errors accumulate over time and tend to be unbounded.
In [68], an algorithm for relative positioning and vehicle odometry using GPS
carrier-phase measurements is proposed. Single- and dual-frequency GPS measurements are
used to evaluate the accuracy of the relative positioning algorithms; the relative
position vector is estimated with centimeter-level accuracy.
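The unbounded drift of relative positioning is easy to reproduce numerically. The sketch below (illustrative values, not data from [68]) integrates wheel speed and yaw rate, with a small uncalibrated gyro bias standing in for sensor error:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Propagate (x, y, heading) from speed v and yaw rate omega."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

dt = 0.1
true_pose = (0.0, 0.0, 0.0)
est_pose = (0.0, 0.0, 0.0)
for _ in range(600):  # 60 s of driving straight east at 1 m/s
    true_pose = dead_reckon(true_pose, 1.0, 0.0, dt)
    est_pose = dead_reckon(est_pose, 1.0, 0.01, dt)  # 0.01 rad/s gyro bias

err = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print(f"position error after 60 s: {err:.1f} m")
```

Even a 0.01 rad/s heading bias produces tens of meters of position error within a minute, which is why relative positioning must eventually be corrected by an absolute source.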
Absolute navigation and positioning: GPS computes position from the pseudo-ranges to
different satellites and is one of the most widely used absolute positioning systems. It
can provide three-dimensional position and velocity without accumulated error on a global
scale and maintains consistent accuracy over time [67]. However, GPS is easily occluded,
which degrades positioning accuracy; its output frequency is low; and its position data
usually contain random noise.
Integrated navigation positioning: Relative and absolute positioning have complementary
advantages and disadvantages, so integrated navigation schemes such as GPS + IMU and
GPS + odometer are usually adopted to make up for the shortcomings of any single
positioning method.
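A loosely coupled GPS + IMU scheme of the kind listed above can be sketched in one dimension with a scalar Kalman filter: the INS-integrated displacement drives the prediction, and the GPS fix drives the correction. All noise values below are illustrative assumptions:

```python
import random

def fuse(x_est, p_est, u, z, q=0.05, r=4.0):
    """One predict/update cycle of a scalar Kalman filter.
    u: INS-integrated displacement (relative), z: GPS fix (absolute),
    q: process noise variance, r: GPS measurement noise variance."""
    x_pred = x_est + u           # predict with the relative measurement
    p_pred = p_est + q
    k = p_pred / (p_pred + r)    # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # correct with the absolute fix
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

random.seed(0)
x_true, x_est, p_est = 0.0, 0.0, 1.0
for _ in range(100):
    x_true += 1.0                          # vehicle advances 1 m per step
    u = 1.0 + random.gauss(0.0, 0.05)      # noisy INS increment
    z = x_true + random.gauss(0.0, 2.0)    # noisy GPS fix (std 2 m)
    x_est, p_est = fuse(x_est, p_est, u, z)

print(f"final error: {abs(x_est - x_true):.2f} m")
```

In a real system this runs per axis (or as a full EKF over pose and velocity), with q and r identified from the actual IMU and receiver; the fused estimate is smoother than raw GPS and, unlike raw INS, does not drift without bound.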
For low-speed unmanned vehicles, GPS navigation is suitable for vehicles moving on open
outdoor roads [69]. Where buildings and trees block the sky, GPS signals are weak and
lock is lost, so GPS cannot be used for positioning and navigation. To realize accurate
positioning in unknown road environments, wheel odometry, visual odometry, laser
odometry, and radar odometry, as well as combinations such as visual-laser odometry and
visual-inertial odometry, are adopted [28]. With the development of computer hardware and
software, artificial intelligence (AI) solutions have attracted increasing attention.
While GPS works well, an AI model learns the GPS/INS behavior patterns, mapping vehicle
dynamics (attitude, speed, or position) to the corresponding errors; the errors predicted
by the neural network model are then used to compensate for INS drift during GPS outages,
mitigating the positioning accuracy degradation caused by loss of GPS lock [70].
To sum up, the application scenarios of low-speed small unmanned vehicles are simpler
than the urban environment, with few high-rise buildings; high-precision positioning
problems are mostly caused by occlusion from trees and similar environmental features.
Low-speed small unmanned vehicles carrying human beings: Navya's unmanned buses rely on
satellite-based positioning, while other manufacturers such as EasyMile rely more on SLAM
for positioning [32].
Low-speed small unmanned vehicles carrying cargo: China Neolithic unmanned distribution
vehicles adopt RTK positioning and lidar together with a high-precision map scheme; where
trees weaken the GPS signal, they rely on lidar matched against the high-precision map
for positioning. The China Smart Walker unmanned distribution vehicle adopts a
differential GPS positioning system, and the China Meituan unmanned distribution vehicle
adopts multiple positioning technologies, including laser, visual, and semantic
positioning, to realize a multi-
scene positioning system at the vehicle end. The unmanned delivery vehicle of JD.COM,
China combines high-precision stereo image data with the GNSS satellite positioning
system to achieve high-precision positioning.
Special low-speed small unmanned vehicles: The unmanned sweeper developed by China Smart
Walker adopts an integrated navigation scheme fusing laser and visual SLAM positioning,
which realizes high-precision positioning on unstructured roads. In [71], the unmanned
sweeper combines laser point clouds, encoder data, and IMU data to locate the vehicle. In [72], the
unmanned patrol car adopts a GNSS-RTK positioning system and receives GPS and BDS
observation data at the same time. The number of available satellites is more than twice that
of a single satellite system, so the positioning accuracy is significantly improved. In [73], a
cost-effective GNSS/SINS integrated positioning system based on a fiber optic gyroscope
(FOG) is designed. When satellite signals are blocked in a complex terrain environment,
the test accuracy can meet the actual accuracy requirements of an integrated navigation
system. When GNSS/RTK is used as the positioning system of an unmanned patrol car,
the heading angle of GNSS may be affected due to the interference of the working scene.
In [74], the electronic compass compensation method is adopted, and the sensor cost is
reduced by more than 99%, which is of great significance for market application.
At present, one obstacle to the popularization of low-speed small unmanned vehicles is
the price of high-precision positioning schemes. Low-speed unmanned sweepers,
distribution vehicles, and patrol cars operating on campuses, in scenic spots, in
communities, and in similar places demand higher economy and cannot bear the higher
prices accepted for urban-road unmanned vehicles. With the large-scale application of
high-precision GPS and other positioning sensors, prices will gradually decrease, and
high-precision integrated navigation sensors will gradually be fitted to low-speed small
unmanned vehicles to realize high-precision positioning.

5. Review of Motion Planning Techniques


Motion planning finds a constrained path for a moving vehicle between two given points.
It includes path planning (space) and trajectory planning (time).
(1) Path planning
The path planning algorithm of a low-speed unmanned vehicle needs to consider factors
such as vehicle speed, road adhesion, steering angle, and external weather such as rain,
snow, and fog, which differs from conventional indoor robots. Path planning is the link
between the perception system and the decision-control system of low-speed small
unmanned vehicles and is the basis of autonomous navigation. The task of vehicle path
planning is to find a collision-free path from the initial state to the target state in a
complex outdoor environment with obstacles.
Path planning can be divided into global path planning and local path planning. Global
path planning determines a feasible, optimal path from known road information, such as
obstacle positions and road boundaries, once the surroundings are known through the
perception system and the map has been built. Under the guidance of the drivable area
generated by global path planning, local path planning determines the vehicle's
trajectory immediately ahead according to the local environmental information perceived
by the sensors. Global path planning mainly targets scenes where the vehicle's
environment is known, while local path planning suits scenes where it is unknown.
Global path planning algorithms can be divided into four types: graph-based search
methods, sampling-based methods, curve interpolation methods, and intelligent optimiza-
tion methods.
1) Method based on graph search
In this method, the state space is discretized into a search graph, and the optimal
solution is found by heuristic search algorithms. The Dijkstra algorithm is one of the
classical graph search algorithms: it finds the shortest path but expands many unrelated
nodes. Improvements such as the A* and LPA* algorithms make the search more adaptive.
Their advantages are good real-time performance, fast computation of the globally optimal
(or sub-optimal) path, and strong adaptability to dynamic and static environments in
low-dimensional spaces, but search performance is unreliable in high-dimensional spaces.
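The graph search idea can be made concrete with a compact A* over a 4-connected occupancy grid. This is an illustrative toy; the grid and Manhattan heuristic are assumptions, not drawn from the cited works:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],   # wall with an opening in the last column
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # → 8 moves around the wall
```

Setting the heuristic to zero recovers plain Dijkstra (and its over-expansion of unrelated nodes); LPA* additionally reuses earlier search results when the grid changes.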
2) Sampling-based path planning algorithm
Unlike search-based approaches, these methods do not need a complete model of the
environment. They sample randomly, search quickly, and plan efficiently, and they can
handle complex high-dimensional problems, although at low overall computational
efficiency. In [75], a dynamic obstacle-avoidance control system for a small unmanned
sweeper is built with a lattice planning algorithm, and an obstacle-avoidance method that
conforms to the driving characteristics of the sweeper is proposed.
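A minimal RRT illustrates the sampling idea; the workspace, the circular obstacle, the step size, and the point-only collision check are simplifying assumptions for brevity:

```python
import math
import random

def rrt(start, goal, obstacles, n_iter=5000, step=0.5, goal_tol=0.5):
    """Rapidly-exploring random tree in a 10 x 10 m workspace.
    obstacles: (cx, cy, radius) circles; collision is checked only at the
    new node (no edge check), which keeps the sketch short."""
    random.seed(1)  # deterministic for the example
    nodes, parent = [start], {start: None}
    in_collision = lambda p: any(
        math.hypot(p[0] - cx, p[1] - cy) < r for cx, cy, r in obstacles)
    for _ in range(n_iter):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.hypot(n[0] - sample[0], n[1] - sample[1]))
        d = math.hypot(sample[0] - near[0], sample[1] - near[1])
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,   # steer toward sample
               near[1] + step * (sample[1] - near[1]) / d)
        if in_collision(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path, n = [], new
            while n is not None:           # walk back to the root
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((0.5, 0.5), (9.5, 9.5), obstacles=[(5.0, 5.0, 1.5)])
print(path is not None)
```

The resulting path is feasible but jagged; practical planners post-smooth it or use optimizing variants such as RRT*.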
3) Method of curve interpolation
This method fits the route of a low-speed small unmanned vehicle with curves satisfying
safety and efficiency conditions, such as polynomial curves, double circular-arc curves,
sine curves, Bezier curves, and B-spline curves. Generally, a polynomial algorithm
determines the curve parameters mainly from geometric constraints. Describing the
trajectory with parametric curves is intuitive, can accurately express the road
conditions the vehicle must satisfy, and yields a smooth planned trajectory whose
curvature varies continuously and can be constrained. The disadvantages are a large
amount of computation, limited real-time performance, and evaluation functions whose best
parameters are hard to find; future research mainly focuses on simplified algorithms and
better evaluation functions. At present, curve-fitting algorithms are among the most
widely adopted planning methods.
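For a concrete instance of curve interpolation, a cubic polynomial local path can be solved in closed form from lateral position and heading boundary conditions; the lane-change numbers below are illustrative:

```python
def cubic_lane_path(y0, dy0, y1, dy1, L):
    """Coefficients of y(x) = a + b*x + c*x**2 + d*x**3 that meet lateral
    offset/heading boundary conditions at x = 0 and x = L."""
    a, b = y0, dy0
    # Remaining 2x2 linear system solved in closed form:
    c = (3.0 * (y1 - y0) - (2.0 * dy0 + dy1) * L) / L**2
    d = (2.0 * (y0 - y1) + (dy0 + dy1) * L) / L**3
    return a, b, c, d

# Lane change: shift 3 m laterally over 20 m, entering and leaving parallel.
a, b, c, d = cubic_lane_path(0.0, 0.0, 3.0, 0.0, 20.0)
y = lambda x: a + b * x + c * x**2 + d * x**3        # lateral offset
dy = lambda x: b + 2.0 * c * x + 3.0 * d * x**2      # lateral slope (heading)
print(round(y(20.0), 6))  # → 3.0
```

The fitted curve hits both endpoints with zero slope, so heading is continuous where the local path rejoins the lane; constraining curvature as well requires a quintic instead of a cubic.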
4) Swarm intelligence optimization algorithm
With the application of intelligent optimization techniques, various swarm intelligence
algorithms with self-learning, self-determination, and adaptive search capabilities have
been applied to path planning, including the genetic algorithm (GA), particle swarm
optimization (PSO), the firefly algorithm, the artificial fish swarm algorithm, the
artificial bee colony (ABC) algorithm, and the ant colony optimization (ACO) algorithm.
Local path planning refers to real-time planning to avoid obstacles as they are
encountered, that is, autonomous obstacle avoidance. Typical local path planning
algorithms include the artificial potential field method and its variants, the Bug
algorithm, etc.
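The artificial potential field method mentioned above can be sketched as gradient descent on an attractive goal potential plus repulsive obstacle potentials; the gains and geometry below are illustrative assumptions:

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=2.0, step=0.1):
    """Move one fixed-length step along the combined force direction.
    obstacles: list of (x, y) point obstacles; d0 is the repulsion range."""
    fx = k_att * (goal[0] - pos[0])           # attractive force toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:                  # repulsive forces inside range d0
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos, goal = (0.0, 0.0), (8.0, 0.0)
obstacles = [(4.0, 0.3)]                      # slightly off the direct line
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)
    if math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < 0.2:
        break
print(f"distance to goal: {math.hypot(pos[0] - goal[0], pos[1] - goal[1]):.2f} m")
```

The vehicle skirts the obstacle and reaches the goal; the method's well-known weakness, local minima when attraction and repulsion cancel, is what the variants cited in the literature address.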
5) Algorithm based on deep reinforcement learning [76]
Without prior knowledge of the environment, low-speed small unmanned vehicles can use the
deep deterministic policy gradient (DDPG), a reinforcement learning (RL) algorithm for
continuous action spaces, to navigate out of an unknown environment full of obstacles
[28,77]. That is, through interaction and trial and error between the onboard sensors and
the surroundings, the vehicle learns online and continuously acquires knowledge of the
unknown environment around it. The vehicle's dynamic path driving strategy improves as it
adapts to the environment and seeks the best decision-making action, but on the other
hand, training cost is high.
In addition, for low-speed small unmanned vehicles, the current mainstream algorithms
still have problems such as high marginal computing cost, weak constraint-handling
ability, and no measure of how far the optimization objectives have been optimized. In
[78], a multi-objective programming model is proposed for large-scale customer
distribution that minimizes the overall resource consumption of distribution and
optimizes customer satisfaction, establishing path planning for unmanned logistics
distribution under an intelligent network.
(2) Trajectory planning
When the path planning process must also satisfy the lateral and longitudinal dynamics
constraints of a low-speed small unmanned vehicle, it becomes trajectory planning. The
goal of trajectory planning is to plan a safe, reliable, and energy-saving trajectory
with which the unmanned vehicle completes its driving tasks, keeping a proper distance
from obstacles to avoid collision.
A typical approach is the optimal control method, which also yields timing information
for each path point [79]. In [80], a trajectory planning method is proposed that
considers boundary constraints, shortens the calculation time, and realizes fast
trajectory planning. However, when there are many obstacles in the driving environment,
this method may be very time-consuming because it must solve complex optimization
problems, and its inherent non-convexity makes it difficult for the problem to converge
without a proper initial guess. For low-speed unmanned vehicles, which do not move on
regular urban roads, conditions such as uneven or water-covered surfaces may occur, so
the road surface state must be considered in optimal motion planning.
Low-speed small unmanned vehicles carrying human beings: for low-speed unmanned buses,
Ref. [81] adopts global path planning based on the Dijkstra algorithm to generate the
optimal path trajectory, which improves trajectory control accuracy, reduces the
computational complexity of the controller, and improves driving efficiency. For
driverless buses, Ref. [82] proposes a framework successfully applied to autonomous
electric buses: ArcGIS is used to model the road network and plan the global optimal
path, and a cubic polynomial curve generates the local trajectory, reducing calculation
time. To sum up, motion planning for low-speed unmanned vehicles carrying human beings
must choose the best trajectory around obstacles such as pedestrians and vehicles on the
road, with safety and reliability as the first requirements.
Low-speed small unmanned vehicles carrying cargo: for unmanned delivery vehicles,
Ref. [83] adopts a strategy combining ant colony optimization (ACO) with a genetic
algorithm (GA) to improve the accuracy of vehicle routing planning. As for the economy of
the last mile express delivery, Ref. [84] proposes a new route planning method based on
clonal ant colony adaptive optimization (CAACO). The cost and time of fast delivery are
lower than those of ACO, SA, and GA. To sum up, economy and reliability should be
considered in the motion planning of unmanned distribution vehicles, and the surface
conditions of community and campus roads should be taken into account in path planning so
as to avoid potholes, snow-covered or icy surfaces, vehicle skidding, and other
conditions that easily lead to vehicle failure.
Special low-speed small unmanned vehicles: In the sanitation field, because low-speed
unmanned sweepers perform road cleaning operations, road conditions should enter both
path and trajectory planning. Ref. [85] proposes a road surface state function that
comprehensively considers the road conditions a sweeper may encounter and realizes an
optimal path avoiding poor road conditions rather than the usual shortest path. In the
security field, path planning for unmanned patrol cars weighs the shortest path more
heavily; Ref. [86] uses a rapidly-exploring random tree (RRT) for path planning,
achieving results close to the optimal solution obtained by numerical optimization based
on nonlinear programming.

6. Review of Tracking Control Technology


Trajectory tracking control is one of the core technologies for the safe and stable
operation of low-speed unmanned vehicles. Given a planned path, the target path points
are tracked in real time and the vehicle is controlled to move along the path with
minimal trajectory deviation. Through research on PID control, sliding mode control, the
pure pursuit algorithm, model predictive control (MPC), robust control, and other
algorithms [87-92], tracking accuracy can be improved and the expected target performance
achieved.
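As a concrete instance of the pure pursuit algorithm listed above, the sketch below computes the bicycle-model steering angle toward a lookahead point; the lookahead distance and wheelbase are illustrative values for a small low-speed vehicle:

```python
import math

def pure_pursuit_steer(pose, path, lookahead=2.0, wheelbase=1.2):
    """Pure pursuit steering angle for a kinematic bicycle model.
    pose = (x, y, heading in rad); path = ordered (x, y) waypoints."""
    x, y, th = pose
    target = path[-1]
    for wx, wy in path:                       # first point >= lookahead away
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    dx, dy = target[0] - x, target[1] - y
    lateral = -math.sin(th) * dx + math.cos(th) * dy  # target y in vehicle frame
    ld = math.hypot(dx, dy)
    curvature = 2.0 * lateral / ld**2         # arc through the target point
    return math.atan(wheelbase * curvature)   # steering angle

path = [(i * 0.5, 0.0) for i in range(40)]    # straight path along y = 0
steer = pure_pursuit_steer((0.0, 1.0, 0.0), path)  # vehicle 1 m off the path
print(f"steer: {math.degrees(steer):.1f} deg")     # negative: back toward path
```

A shorter lookahead tracks more tightly but oscillates; a longer one cuts corners. This speed/lookahead trade-off is one reason the section below emphasizes adaptive and predictive controllers.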
Trajectory tracking is an important research direction in low-speed small unmanned
vehicle control, and high-precision tracking control is particularly important for safe
driving. Researchers have therefore gradually turned to trajectory tracking technology
for low-speed unmanned vehicles. Because these vehicles work under complex road
conditions such as campuses and communities, their operating speed is much lower than
that of existing urban traffic vehicles, and the challenge is to reduce tracking error
and maintain vehicle stability at different speeds under unpredictable and uncertain
working conditions. A low-speed driverless vehicle is a complex, highly coupled nonlinear
system [93], so it is difficult to establish an accurate vehicle dynamics model [94];
two-wheeled bicycle or four-wheeled robot models are often used instead. Moreover, under
different road and climatic conditions, parameters such as the ground adhesion
coefficient and road surface state are time-varying, which affects stable operation. At
the same time, the mass of a low-speed unmanned vehicle changes unpredictably with the
cargo or passengers it carries. For low-speed small unmanned vehicles, it is therefore
very important to study high-precision tracking control and overcome these modeling
difficulties. To overcome the random network delay caused by steering angle oscillation,
an uncertain-model adaptive model predictive control (UM-AMPC) algorithm is proposed in
[95], which improves the tracking accuracy of unmanned vehicles under random network
delays. In [96], a reinforcement learning method improves tracking accuracy and stability
and completes trajectory tracking tasks. In [97], combining reinforcement learning with a
PID control algorithm for trajectory tracking of mobile vehicles reduces the
computational complexity of the reinforcement learning reward function and improves
tracking accuracy.
The following introduces the trajectory tracking control technology of low-speed small
unmanned vehicles by application scenario:
Low-speed small unmanned vehicles carrying human beings: Because the model of a low-speed
unmanned bus is unclear, traditional trajectory tracking methods struggle to balance
accuracy and stability, and because additional passengers change the bus's mass, its
model also changes significantly. To solve this problem, Ref. [98] proposed a path
tracking controller based on front-axle-referenced fuzzy pure pursuit control and
designed a feedback-feedforward algorithm for speed control, improving speed tracking
efficiency and pure pursuit stability. Because a driverless bus is a lagging, strongly
coupled system, trajectory tracking control needs stronger robustness under different
conditions; Ref. [13] designed an adaptive rolling-window trajectory tracking controller
and, in addition, established a heading error compensation model and an adaptive strategy
for the controller parameters to improve control accuracy.
Low-speed small unmanned vehicles carrying cargo: For unmanned delivery vehicles, whose
cargo weight changes and which are affected by nonlinear, time-varying characteristics,
Ref. [99] proposes an improved predictive control scheme based on MTN, with BP as its
learning algorithm for tracking control, offering good real-time performance, robustness,
and convergence.
Special low-speed small unmanned vehicles: For special vehicles such as unmanned
sweepers, accurate trajectory tracking is also very important. In [100], an error band control
strategy is proposed by using the unique structure of low-speed small road sweepers, and
the segmented PID control algorithm is applied to the path tracking of road sweepers.
However, the PID controller has the problem of poor universality: when the working
conditions change significantly, the control parameters are no longer optimal [101]. When
the vehicle operates under large-curvature, complex working conditions, in order
to overcome the influence of uncertain road surface and road surface curvature on the
stability of vehicle trajectory tracking under difficult driving conditions, Ref. [102] designed
a trajectory tracking controller based on model predictive control, which is used to deal
with system state constraints and actuator drive constraints, and solved how to realize
effective trajectory tracking control without lateral velocity. In view of the strong coupling,
nonlinearity, and parameter uncertainty of vehicles, in [103], an improved sliding controller
is used to track the trajectory of autonomous vehicles, so that the coupled longitudinal and
lateral speeds, lateral deviations, and yaw angles converge asymptotically to a given equi-
librium position, thus eliminating the influence of model disturbances and uncertainties. In
order to get rid of the dependence on the mathematical model of the controlled system, the
data-driven control method is studied. For the unmanned sweeper, in [94], an improved
model-free adaptive controller (MFAC) based on the preview deviation angle is designed,
which achieves high tracking accuracy and performance for the unmanned sweeper in
different driving environments.
To sum up, trajectory tracking for low-speed small unmanned vehicles must account for
internal factors that change the vehicle's total weight, such as the garbage collected by
a sweeper, the number of passengers on a manned bus, or the parcels loaded on a delivery
vehicle. It must also account for the road surface: complex conditions such as uneven
roads and accumulated water can cause skidding and destabilize trajectory tracking [104].
When the parameters of the tracking model cannot be described accurately, designing a
precise trajectory tracking controller is difficult. Introducing a model-free adaptive
control algorithm into the trajectory tracking system of low-speed small unmanned
vehicles, and combining iterative learning control with other observers [105,106], could
improve the accuracy and stability of trajectory tracking control.

7. Conclusions
By reviewing the key technologies of autonomous navigation for low-speed unmanned
vehicles, this paper has summarized their key technical characteristics and outstanding
problems in low-speed application scenarios and expounded on the importance of low-speed
unmanned vehicles under the current COVID-19 epidemic, supporting continued research on
such vehicles and indicating directions for technical upgrading and transformation.
Low-speed driverless vehicles play an increasingly important role in many fields of human
society and are increasingly headed toward large-scale application. As the foundation and
core of unmanned driving, autonomous navigation technology has developed continuously,
but some technical problems remain. The following are some prospects for the future
development of low-speed unmanned autonomous navigation technology.
(1) In complex environments, such as rain, snow, fog, dust, uneven or water-covered
roads, and unstructured roads in the wild [107,108], current autonomous navigation
technology cannot yet solve key problems such as positioning and scene perception [109];
this matters especially for low-speed small unmanned vehicles. Next, the autonomous
navigation technology of low-speed unmanned vehicles will develop toward suitability for
diverse, complex, and changeable environments.
(2) At present, the autonomous navigation method of low-speed unmanned vehicles
still needs to be equipped with more sensors, such as lidar, which is expensive. It is still
necessary to further optimize the software algorithm, strengthen the fusion perception
ability, and improve the autonomous navigation performance and adaptability to the
environment in combination with artificial intelligence technology.
(3) At present, most low-speed unmanned driving technologies adopt single-vehicle
intelligent schemes. In the future, multi-vehicle cooperative autonomous navigation can
be carried out by combining technologies such as vehicle networking and vehicle-road
coordination, which can reduce system costs, improve environmental awareness, and
realize autonomous navigation networking and man-machine integration.

Author Contributions: Formal analysis, C.Y.; investigation, X.L.; resources, Q.L., J.Z.; writing—
original draft preparation, X.L.; writing—review and editing, Q.L.; funding acquisition, J.Z. All
authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by Jiangsu Planned Projects for Postdoctoral Research Funds
(2020Z411).
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Chen, Q.; Xie, Y.; Guo, S.; Shu, Q. Sensing system of environmental perception technologies for driverless vehicle: A review of
state of the art and challenges. Sens. Actuators A Phys. 2021, 319, 112566. [CrossRef]
2. Aubert, D.; Kluge, K.C.; Thorpe, C.E. Autonomous navigation of structured city roads, Mobile Robots V. SPIE 1991, 1388, 141–151.
3. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A review of autonomous navigation systems in agricultural environments. In
Proceedings of the SEAg 2013: Innovative Agricultural Technologies for a Sustainable Future, Barton, Australia, 22–25 September
2013; pp. 10–25.
4. Fusic, S.J.; Kanagaraj, G.; Hariharan, K.; Karthikeyan, S. Optimal path planning of autonomous navigation in outdoor environment
via heuristic technique. TRIP 2021, 12, 100473.
5. Ju, W. Application of autonomous navigation in robotics. J. Phys. Conf. Ser. 2021, 1906, 012018. [CrossRef]
6. Abishega, R.; Saranya, S.; Devayani, M.; Subiksha, C.P.; Sudha, G. Driverless vehicle for garbage collection. Mater. Today Proc. 2021.
[CrossRef]
7. Jones, R.; Sadowski, J.; Dowling, R.; Worrall, S.; Tomitsch, M.; Nebot, E. Beyond the Driverless Car: A Typology of Forms and
Functions for Autonomous Mobility. Appl. Mobilities 2021, 1–21. [CrossRef]
8. Kerimzhanova, A. Autonomous Public Area Maintenance Machines and Best Practices of Them Globally. 2021. Available online:
https://www.theseus.fi/handle/10024/494474 (accessed on 16 April 2019).
9. Jennings, D.; Figliozzi, M. Study of sidewalk autonomous delivery robots and their potential impacts on freight efficiency and
travel. TRR 2019, 2673, 317–326. [CrossRef]
10. Sankari, J.; Imtiaz, R. Automated guided vehicle (AGV) for industrial sector. In Proceedings of the 2016 10th International
Conference on Intelligent Systems and Control (ISCO), Coimbatore, India, 7–8 January 2016; pp. 1–5.
11. Zhiwei, S.; Weiwei, H.; Ning, W.; Xiaojun, W.; Anthony, W.C.Y.; Saputra, V.B.; Quan, B.C.H.; Simon, C.J.; Qun, Z.; Susu, Y.; et al.
Map free lane following based on low-cost laser scanner for near future autonomous service vehicle. In Proceedings of the 2015
IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 706–711.
12. Bao, Y.; Zhang, Z. Development and performance analysis of a small road sweeper and dust collector. In Proceedings of the
2021 3rd International Symposium on Robotics & Intelligent Manufacturing Technology (ISRIMT), Changzhou, China, 24–26
September 2021; pp. 415–418.
13. Yu, L.; Kong, D.; Shao, X.; Yan, X. A path planning and navigation control system design for driverless electric bus. IEEE Access
2018, 6, 53960–53975. [CrossRef]
14. Tetouani, S.; Chouar, A.; Lmariouh, J.; Soulhi, A.; Elalami, J. A “Push-Pull” rearrangement while routing for a driverless delivery
vehicle. Cogent Eng. 2019, 6, 1567662. [CrossRef]
15. Abril, N.; Dias, M.G.; Junges, N.A.; Dias, A. DLVR–A miniature driverless delivery system. Integration 2019, 2.
16. Xu, H.; Xiao, J.; Feng, Y. Development and research status of road cleaning vehicle. J. Phys. Conf. Ser. 2020, 1626, 012153.
[CrossRef]
17. Chen, B.M. On the Trends of Autonomous Unmanned Systems Research. Engineering 2021, 12, 20–23. [CrossRef]
18. Demidova, K.; Logichev, M.; Zhilenkova, E.; Dang, B. Autonomous navigation algorithms based on cognitive technologies
and machine learning. In Proceedings of the 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic
Engineering (EIConRus), St. Petersburg and Moscow, Russia, 27–30 January 2020; pp. 280–283.
19. U.S. Department of Transportation. Preparing for the Future of Transportation: Automated Vehicles 3.0. Available online:
https://www.transportation.gov/av.2018-10-4 (accessed on 16 April 2019).
20. Zhan, W.; Xiao, C.; Wen, Y.; Zhou, C.; Yuan, H.; Xiu, S.; Zhang, Y.; Zou, X.; Liu, X.; Li, Q. Autonomous visual perception for
unmanned surface vehicle navigation in an unknown environment. Sensors 2019, 19, 2216. [CrossRef] [PubMed]
21. O’Mahony, N.; Campbell, S.; Krpalkova, L.; Riordan, D.; Walsh, J.; Murphy, A.; Ryan, C. Deep Learning for Visual Navigation of
Unmanned Ground Vehicles: A review. In Proceedings of the 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK,
21–22 June 2018; pp. 1–6.
22. Hu, H.; Li, Z. Research on Intelligent Car PID Autonomous Navigation System Based on ROS and Lidar. J. SIMUL 2022, 10, 31.
23. Fu, Z.; Hou, Y.; Liu, C.; Zhang, Y.; Zhou, S. A Lightweight Autonomous Vehicle System Based On Pure Visual Navigation. In
Proceedings of the 2021 International Symposium on Computer Science and Intelligent Controls (ISCSIC), Rome, Italy, 12–14
November 2021; pp. 312–317.
24. Debeunne, C.; Vivet, D. A review of visual-LiDAR fusion based simultaneous localization and mapping. Sensors 2020, 20, 2068.
[CrossRef] [PubMed]
25. He, Y.; Csiszár, C. Model for Crowdsourced Parcel Delivery Embedded into Mobility as a Service Based on Autonomous Electric
Vehicles. Energies 2021, 14, 3042. [CrossRef]
26. Wang, B.; Han, Y.; Tian, D.; Guan, T. Sensor-based environmental perception Technology for Intelligent Vehicles. J. Sensors 2021,
2021, 8199361. [CrossRef]
27. Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow.
Transp. Res. C-Emerg. Technol. 2018, 89, 384–406. [CrossRef]
28. Mohamed, S.A.; Haghbayan, M.H.; Westerlund, T.; Heikkonen, J.; Tenhunen, H.; Plosila, J. A survey on odometry for autonomous
navigation systems. IEEE Access 2019, 7, 97466–97486. [CrossRef]
29. Campbell, S.; O’Mahony, N.; Krpalcova, L.; Riordan, D.; Walsh, J.; Murphy, A.; Ryan, C. Sensor technology in autonomous
vehicles: A review. In Proceedings of the 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK, 21–22 June
2018; pp. 1–4.
30. Balestrieri, E.; Daponte, P.; De Vito, L.; Lamonaca, F. Sensors and measurements for unmanned systems: An overview. Sensors
2021, 21, 1518. [CrossRef]
31. Zhou, T.; Yang, M.; Jiang, K.; Wong, H.; Yang, D. Mmw radar-based technologies in autonomous driving: A review. Sensors 2020,
20, 7283.
32. Ainsalu, J.; Arffman, V.; Bellone, M.; Ellner, M.; Haapamäki, T.; Haavisto, N.; Josefson, E.; Ismailogullari, A.; Lee, B.; Madland,
O.; et al. State of the art of automated buses. Sustainability 2018, 10, 3118. [CrossRef]
33. Liu, O.; Yuan, S.; Li, Z. A Survey on Sensor Technologies for Unmanned Ground Vehicles. In Proceedings of the 2020 3rd
International Conference on Unmanned Systems (ICUS), Harbin, China, 27–28 November 2020; pp. 638–645.
34. Yang, Z.; Wang, J.; Li, J.; Yan, M. Multiview infrared target detection and localization. Opt. Eng. 2019, 58, 113104. [CrossRef]
35. López-Lambas, M.E.; Alonso, A. The driverless bus: An analysis of public perceptions and acceptance. Sustainability 2019, 11, 4986. [CrossRef]
36. Li, H.; Zheng, P.; Zhang, T.; Zou, Y.; Pan, Y.; Zhang, Z.; Azam, A. A high-efficiency energy regenerative shock absorber for power
auxiliary devices of new energy driverless buses. Appl. Energy 2021, 295, 117020. [CrossRef]
37. Rateke, T.; Von Wangenheim, A. Road surface detection and differentiation considering surface damages. Auton. Robot. 2021,
45, 299–312. [CrossRef]
38. Liu, Q.; Li, Z.; Yuan, S.; Zhu, Y.; Li, X. Review on vehicle detection technology for unmanned ground vehicles. Sensors 2021,
21, 1354. [CrossRef]
39. Reina, G.; Underwood, J.; Brooker, G.; Durrant-Whyte, H. Radar-based perception for autonomous outdoor vehicles. JFR 2011,
28, 894–913. [CrossRef]
40. Dvornik, N.; Shmelkov, K.; Mairal, J.; Schmid, C. Blitznet: A real-time deep network for scene understanding. In Proceedings of
the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4154–4162.
41. Liu, Y.; Miura, J. RDS-SLAM: Real-time dynamic SLAM using semantic segmentation methods. IEEE Access 2021, 9, 23772–23785.
[CrossRef]
42. Mallozzi, P.; Pelliccione, P.; Knauss, A.; Berger, C.; Mohammadiha, N. Autonomous vehicles: State of the art, future trends, and
challenges. Automot. Syst. Softw. Eng. 2019, 347–367.
43. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep learning sensor fusion for autonomous vehicle perception and localization:
A review. Sensors 2020, 20, 4220. [CrossRef] [PubMed]
44. Tseng, K.K.; Sun, H.; Liu, J.; Li, J.; Yung, K.L. Image semantic segmentation with an improved fully convolutional network. Soft
Comput. 2020, 24, 8253–8273. [CrossRef]
45. Li, C.; Cao, Y.; Peng, Y. Research on Automatic Driving Target Detection Based on Yolov5s. J. Phys. Conf. Ser. 2022, 2171, 012047.
[CrossRef]
46. Hu, B. Object Detection for Automatic Driving Based on Deep Learning. In Proceedings of the 2020 International Conference on
Computing and Data Science (CDS), Stanford, CA, USA, 1–2 August 2020; pp. 1–8.
47. Liu, Z.; Cai, Y.; Wang, H.; Chen, L.; Gao, H.; Jia, Y.; Li, Y. Robust target recognition and tracking of self-driving cars with radar and
camera information fusion under severe weather conditions. In Proceedings of the IEEE Transactions on Intelligent Transportation
Systems, Indianapolis, IN, USA, 19–22 September 2021; pp. 1–14.
48. Wang, H.; Lou, X.; Cai, Y.; Li, Y.; Chen, L. Real-time vehicle detection algorithm based on vision and lidar point cloud fusion. J.
Sens. 2019, 2019, 8473980. [CrossRef]
49. Nguyen, A.; Nguyen, N.; Tran, K.; Tjiputra, E.; Tran, Q. Autonomous navigation in complex environments with deep multimodal
fusion network. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las
Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5824–5830.
50. Zhao, Y.; Li, J.; Li, L.; Zhang, M.; Guo, L. Environmental perception and sensor data fusion for unmanned ground vehicle.
MPE 2013. [CrossRef]
51. Hashim, H.A.; Abouheaf, M.; Abido, M.A. Geometric stochastic filter with guaranteed performance for autonomous navigation
based on imu and feature sensor fusion. CEP 2021, 116, 104926. [CrossRef]
52. Shit, R.C. Precise localization for achieving next-generation autonomous navigation: State-of-the-art, taxonomy and future
prospects. CCN 2020, 160, 351–374. [CrossRef]
53. Juan-Rou, H.; Zhan-Qing, W. The Implementation of IMU/Stereo Vision SLAM System for Mobile Robot. In Proceedings of the
2020 27th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), St. Petersburg, Russia, 25–27
May 2020; pp. 1–4.
54. Perea-Strom, D.; Morell, A.; Toledo, J.; Acosta, L. GNSS integration in the localization system of an autonomous vehicle based on
particle weighting. IEEE Sens. J. 2019, 20, 3314–3323. [CrossRef]
55. Kim, H.; Liu, B.; Goh, C.Y.; Lee, S.; Myung, H. Robust vehicle localization using entropy-weighted particle filter-based data fusion
of vertical and road intensity information for a large scale urban area. IEEE Robot. Autom. Lett. 2017, 2, 1518–1524. [CrossRef]
56. Mobus, R.; Kolbe, U. Multi-target multi-object tracking, sensor fusion of radar and infrared. In Proceedings of the IEEE Intelligent
Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 732–737.
57. Yang, B.; Ando, T.; Nakano, K. Pilot Tests of Automated Bus Aiming for Campus Transportation Service. In Proceedings of the
2020 5th International Conference on Universal Village (UV), Boston, MA, USA, 24–27 October 2020; pp. 1–5.
58. Xia, H.; Yang, H. Is last-mile delivery a 'killer app' for self-driving vehicles? Commun. ACM 2018, 61, 70–75. [CrossRef]
59. Liu, T.; Guo, X.; Pei, X. Research on Recognition of Working Area and Road Garbage for Road Sweeper Based on Mask R-CNN
Neural Network. In Proceedings of the 2021 4th International Conference on Control and Computer Vision, Macau, China, 13–15
August 2021; pp. 76–82.
60. Min, H.; Zhu, X.; Yan, B.; Yu, Y. Research on visual algorithm of road garbage based on intelligent control of road sweeper. J. Phys.
Conf. Ser. 2019, 1302, 032024. [CrossRef]
61. Wu, X.; Ren, J.; Wu, Y.; Shao, J. Study on Target Tracking Based on Vision and Radar Sensor Fusion. SAE Tech. Pap. 2018, 1, 613.
62. Treptow, A.; Cielniak, G.; Duckett, T. Active people recognition using thermal and grey images on a mobile security robot. In
Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6
August 2005; pp. 2103–2108.
63. Cheng, H.; Chen, H.; Liu, Y. Topological indoor localization and navigation for autonomous mobile robot. IEEE Trans. Autom. Sci.
Eng. 2014, 12, 729–738. [CrossRef]
64. Ibrahim, M.; Akhtar, N.; Jalwana, M.A.A.K.; Wise, M.; Mian, A. High Definition LiDAR mapping of Perth CBD. In Proceedings of
the 2021 Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 29 November 2021–1 December
2021; pp. 1–8.
65. Pfrunder, A.; Borges, P.V.K.; Romero, A.R.; Catt, G.; Elfes, A. Real-time autonomous ground vehicle navigation in heterogeneous
environments using a 3D LiDAR. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2601–2608.
66. Chen, G.; Lv, P.; Li, H.; Yang, G. Robo-Sweeper: Bionics based Unmanned Sweeper Platform. In Proceedings of the 2021 IEEE 23rd Int Conf on High Performance Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Haikou, China, 20–22 December 2021; pp. 1381–1388.
67. Zhang, M.; Liu, K.; Li, C. Unmanned ground vehicle positioning system by GPS/dead-reckoning/IMU sensor fusion. In
Proceedings of the 2nd Annual International Conference on Electronics, Electrical Engineering and Information Science (EEEIS
2016), Xi’an, China, 2–4 December 2016; pp. 737–747.
68. Martin, S.; Bevly, D.M. Comparison of GPS-based autonomous vehicle following using global and relative positioning. Int. J. Veh.
Auton. Syst. 2012, 10, 229–255. [CrossRef]
69. Rahiman, W.; Zainal, Z. An overview of development GPS navigation for autonomous car. In Proceedings of the 2013 IEEE 8th
Conference on Industrial Electronics and Applications (ICIEA), Melbourne, VIC, Australia, 19–21 June 2013; pp. 1112–1118.
70. Wang, G.; Xu, X.; Yao, Y.; Tong, J. A novel BPNN-based method to overcome the GPS outages for INS/GPS system. IEEE Access
2019, 7, 82134–82143. [CrossRef]
71. Do, H.; Le, A.V.; Yi, L.; Hoong, J.; Tran, M.; Duc, P.; Vu, M.; Wegger, O.; Mohan, R. Heat conduction combined grid-based optimization method for reconfigurable pavement sweeping robot path planning. Robot. Auton. Syst. 2022, 152, 104063. [CrossRef]
72. Navarro, S.; Lerma, J.L. Accuracy analysis of a mobile mapping system for close range photogrammetric projects. Measurement
2016, 93, 148–156. [CrossRef]
73. Wang, H.; Wang, M.; Wen, K.; Wu, W. Design and algorithm research of a gnss/fog-sins integrated navigation system
for unmanned vehicles. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July
2017; pp. 6157–6162.
74. Han, X.; Zhang, X.; Liu, Y. Improving GNSS Navigation and Control with Electronic Compass in Unmanned System. J. Adv.
Comput. Intell. 2019, 23, 427–436. [CrossRef]
75. Yu-Fan, L.; Quan, S.; Hai-Long, C.; Peng, P.; Yue, Z. Design of dynamic obstacle avoidance system for self-driving sweeper
based on lattice-planner. In Proceedings of the 2021 IEEE International Conference on Recent Advances in Systems Science and
Engineering (RASSE), Shanghai, China, 12–14 December 2021; pp. 1–5.
76. Dooraki, A.; Lee, D.J. A Multi-Objective Reinforcement Learning Based Controller for Autonomous Navigation in Challenging
Environments. Machines 2022, 10, 500. [CrossRef]
77. Sivashangaran, S.; Zheng, M. Intelligent Autonomous Navigation of Car-Like Unmanned Ground Vehicle via Deep Reinforcement
Learning. IFAC-PapersOnLine 2021, 54, 218–225. [CrossRef]
78. Yuzhe, J.; Hanqing, Y.; Yuan, Q. Study on Path Planning Model and Algorithm of Driverless Logistics Distribution under Intelligent
Network. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence and Computer Applications
(ICAICA), Dalian, China, 28–30 June 2021; pp. 1245–1251.
79. Wang, X.; Peng, H.; Liu, J.; Dong, X. Optimal control based coordinated taxiing path planning and tracking for multiple carrier
aircraft on flight deck. Def. Technol. 2020, 18, 238–248. [CrossRef]
80. Wang, X.; Liu, J.; Su, X.; Haijun, P. A review on carrier aircraft dispatch path planning and control on deck. Chin. J. Aeronaut. 2020,
33, 3039–3057. [CrossRef]
81. Luo, M.; Hou, X.; Yang, J. Surface optimal path planning using an extended Dijkstra algorithm. IEEE Access 2020, 8, 147827–147838.
[CrossRef]
82. Yu, L.; Kong, D.; Yan, X. A driving behavior planning and trajectory generation method for autonomous electrical bus. Future
Internet 2018, 10, 51. [CrossRef]
83. Chen, Y.; Chen, M.; Chen, Z.; Cheng, L.; Yang, Y.; Li, H. Delivery path planning of heterogeneous robot system under road
network constraints. Comput. Electr. Eng. 2021, 92, 107197. [CrossRef]
84. Zhang, Y.; Liu, Y.; Li, C.; Liu, Y.; Zhou, J. The Optimization of Path Planning for Express Delivery Based on Clone Adaptive Ant
Colony Optimization. J. Adv. Transport. 2022, 2022, 4825018. [CrossRef]
85. Li, X.; Li, Q.; Zhang, J. Research on global path planning of unmanned vehicles based on improved ant colony algorithm in the
complex road environment. Meas. Control 2022, 00202940221118132. [CrossRef]
86. Lau, G.; Liu, H.H. Real-time path planning algorithm for autonomous border patrol: Design, simulation, and experimentation. J. Intell. Robot. Syst. 2014, 75, 517–539. [CrossRef]
87. Wang, S.; Yin, C.; Gao, J.; Sun, Q. Lateral Displacement Control for Agricultural Tractor Based on Cascade Control Structure. Int.
J. Control Autom. Syst. 2020, 18, 2375–2385. [CrossRef]
88. Park, M.; Kang, M. Experimental verification of a drift controller for autonomous vehicle tracking: A circular trajectory using
LQR method. Int. J. Control Autom. Syst. 2021, 19, 404–416. [CrossRef]
89. Xu, S.; Peng, H. Design, analysis, and experiments of preview path tracking control for autonomous vehicles. IEEE Trans. Intell.
Transp. Syst. 2020, 21, 48–58. [CrossRef]
90. Hu, C.; Zhao, L.; Cao, L. Steering control based on model predictive control for obstacle avoidance of unmanned ground vehicle.
Meas. Control 2020, 53, 501–518. [CrossRef]
91. Zhang, X.; Zhu, X. Autonomous path tracking control of intelligent electric vehicles based on lane detection and optimal preview
method. Expert. Syst. Appl. 2019, 121, 38–48. [CrossRef]
92. Raffo, G.V.; Gomes, G.K.; Normey-Rico, J.E. A predictive controller for autonomous vehicle path tracking. IEEE Trans. Intell.
Transp. Syst. 2009, 10, 92–102. [CrossRef]
93. Liu, S.; Hou, Z.; Tian, T.; Deng, Z.; Li, Z. A novel dual successive projection-based model-free adaptive control method and
application to an autonomous car. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3444–3457. [CrossRef]
94. Yao, W.; Pang, Z.; Chi, R.; Shao, W. Track Tracking Control of Unmanned Intelligent Sweeping Vehicles Based on Improved
MFAC. In Proceedings of the Chinese Intelligent Systems Conference, Shenzhen, China, 24–25 October 2020; pp. 506–515.
95. Luan, Z.; Zhang, J.; Zhao, W.; Wang, C. Trajectory tracking control of autonomous vehicle with random network delay. IEEE
Trans. Veh. Technol. 2020, 69, 8140–8150. [CrossRef]
96. Liu, M.; Zhao, F.; Yin, J.; Niu, J.; Liu, Y. Reinforcing-tracking: An effective trajectory tracking and navigation method for
autonomous urban driving. IEEE Intell. Transp. 2021.
97. Wang, S.; Yin, X.; Li, P.; Zhang, M.; Wang, X. Trajectory tracking control for mobile robots using reinforcement learning and PID.
Iran. J. Sci. Technol. Trans. Electr. Eng. 2020, 44, 1059–1068. [CrossRef]
98. Yu, L.; Yan, X.; Kuang, Z.; Chen, B.; Zhao, Y. Driverless bus path tracking based on fuzzy pure pursuit control with a front axle
reference. Appl. Sci. 2019, 10, 230. [CrossRef]
99. Wu, Y.; Li, C.; Yuan, C.; Li, M.; Li, H. Predictive Control for Small Unmanned Ground Vehicles via a Multi-Dimensional Taylor
Network. Appl. Sci. 2022, 12, 682. [CrossRef]
100. Wang, X.; Xu, K.; Xu, L.; Miao, Z.; Zhou, J. Hinged Sweeper Kinematic Modeling and Path Tracking Control. In Proceedings of
the 2019 Chinese Intelligent Systems Conference, Haikou, China, 26–27 October 2019; pp. 26–27.
101. Amer, N.H.; Zamzuri, H.; Hudha, K.; Kadir, Z.A. Modelling and control strategies in path tracking control for autonomous
ground vehicles: A review of state of the art and challenges. J. Intell. Robot. Syst. 2017, 86, 225–254. [CrossRef]
102. Yang, L.; Yue, M.; Ma, T. Path following predictive control for autonomous vehicles subject to unknown tire-ground adhesion and
varied road curvature. Int. J. Control Autom. Syst. 2019, 17, 193–202. [CrossRef]
103. Zhang, J.; Xu, S.; Rachid, A. Sliding mode controller for automatic steering of vehicles. In Proceedings of the IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No. 37243), Denver, CO, USA, 29 November–2 December 2001; pp. 2149–2153.
104. Wang, S.; Zhao, J. A trajectory tracking method for wheeled mobile robots based on diversity observer. Int. J. Control Autom. 2020,
18, 2165–2169. [CrossRef]
105. Wang, Y.; Li, H.; Qiu, X.; Xie, X. Consensus tracking for non-linear multi-agent systems with unknown disturbance by using model free adaptive iterative learning control. Appl. Math. Comput. 2020, 365, 124701.
106. Hou, Z.; Yu, X.; Yin, C. A data-driven iterative learning control framework based on controller dynamic linearization. In
Proceedings of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, USA, 27–29 June 2018; pp. 5588–5593.
107. Chen, D.; Zhuang, M.; Zhong, X.; Wu, W. RSPMP: Real-time semantic perception and motion planning for autonomous navigation
of unmanned ground vehicle in off-road environments. Appl. Intell. 2022, 1–17. [CrossRef]
108. Andrew Bagnell, J.; Bradley, D.; Silver, D.; Sofman, B. Learning for Autonomous Navigation-Advances in machine learning for
rough terrain mobility. RA-M 2010, 17, 74.
109. Wigness, M.; Eum, S.; Rogers, J.G.; Han, D.; Kwon, H. A rugd dataset for autonomous navigation and visual perception in
unstructured outdoor environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and
Systems IROS, Macau, China, 3–8 November 2019; pp. 5000–5007.