SoK: Adversarial Analysis of Autonomous Vehicular Perception Systems

Arkajyoti Mitra
Dept. of Computer Science and Engineering
University of Texas at Arlington
email address or ORCID

Habeeb Olufowobi
Dept. of Computer Science and Engineering
University of Texas at Arlington
email address or ORCID

Abstract—Autonomous driving has made significant strides, marking a transformative shift in the automotive industry. From the conceptual stages to tangible reality, autonomous vehicles are now commercially available to the global population, promising safer, more efficient transportation. The main driving force behind this progress has been a well-built perception system. With the aid of sensors such as LiDAR, radar, and cameras, autonomous driving has become a reality on roads, fundamentally reshaping our perception of transportation. However, with these technological advances, the attack surfaces have also inevitably grown, raising concerns about the security of these vehicles and prompting ongoing research efforts. While numerous works have focused on revealing the vulnerabilities of autonomous driving, the literature regarding these vulnerabilities lacks systematic organization. This paper aims to systematically organize knowledge surrounding the perception systems of autonomous vehicles, delving into intricate details to understand their functioning and limitations. In this taxonomy, we examine several proposed attacks on these perception systems and categorize them based on their feasibility, impact, and limitations. We further discuss existing defenses against such adversarial attacks on the perception systems. Finally, through meticulous analysis, we propose new research directions for future work, emphasizing the need to enhance security measures, refine decision-making algorithms, and explore novel methods for robust perception in varied environmental conditions.

I. INTRODUCTION

Autonomous vehicular perception stands as the cornerstone of self-driving technology; the driving force behind the commercialization of self-driving cars lies in the promise of enhanced safety, efficiency, and convenience. Central to this transformative vision is the development of perception techniques that perceive, interpret, and respond to a dynamically changing environment in real time.

With sophisticated machine learning algorithms such as neural networks, autonomous vehicular perception enables vehicles to navigate complex traffic scenarios, anticipate potential hazards, and make split-second decisions with accuracy that surpasses human capabilities. Moreover, these perception systems hold the potential to mitigate the human errors responsible for the majority of traffic accidents, thereby saving countless lives and reducing the economic costs associated with road accidents. Autonomous vehicles construct detailed and dynamic representations of their environment in real time by integrating data from sensors such as cameras, LiDAR, radar, and ultrasonic sensors.

Autonomous vehicular perception also opens up new frontiers in mobility and urban planning. Self-driving cars can reduce congestion and pollution while optimizing the use of existing infrastructure. Moreover, they promise to enhance accessibility for individuals with disabilities and elderly populations, unlocking newfound freedom and independence.

However, these tremendous advances are not without their challenges. One of the most pressing concerns is the vulnerability of perception systems to adversarial attacks. Adversarial attacks involve deliberately manipulating input data to deceive or confuse perception systems, leading to potentially dangerous outcomes on the road. These attacks can take various forms, such as adding imperceptible perturbations to images or videos, strategically placing objects in the environment, or exploiting the algorithmic behavior of perception systems. The consequences of successful adversarial attacks on perception systems can be severe, ranging from misclassification of objects to complete system failure.

Adversarial attacks threaten the safety and reliability of self-driving cars, undermining the trust and confidence of both passengers and regulators in autonomous vehicle technology. Addressing this challenge requires robust defense mechanisms and rigorous testing protocols to detect and mitigate potential vulnerabilities in perception systems.

Significant advancements in perception algorithms have spurred extensive research into adversarial analysis of vehicular perception systems in recent years. However, this body of research lacks organization, making it challenging for scholars and practitioners to navigate the literature on this subject. The existing literature on adversarial analysis in perception is scattered, hindering efficient retrieval and synthesis of relevant information. This lack of cohesion and centralization in the research landscape poses a significant obstacle to the advancement of knowledge in this domain. Our work bridges those gaps and provides detailed insights into existing research. Furthermore, we explore and provide suggestions for future research directions toward improved mitigation strategies and possible attack vectors that require further exploration.

The contributions of this paper are as follows:

• We provide a systematic organization of the knowledge around vehicular perception systems, categorizing known adversarial attacks into three broad domains: sensor-based, model-based, and miscellaneous attacks.
• We provide a detailed evaluative study of the feasibility, stealthiness, transferability, and robustness of these attacks.
• Finally, we provide a detailed outlook on potential future research directions, exploring new attack vectors and improved defenses for vehicular perception.

II. RELATED WORKS

In this section, we review previous studies on adversarial attacks against perception systems. The seminal work by Szegedy et al. [] introduced the concept of adversarial examples, demonstrating that imperceptible perturbations could cause deep neural networks (DNNs) to misclassify inputs. Since then, researchers have extensively studied adversarial attacks across various perception tasks, including image classification, object detection, and natural language processing.

III. SENSOR-BASED PERCEPTION ATTACKS

In this section, we categorize attacks that primarily introduce errors into sensor readings as these sensors gather data from the environment. Given the immense scope of sensor attacks, we undertake the substantial task of organizing them into broad domains. This categorization helps readers navigate the attacks targeting specific types of sensors.

A. LiDAR

LiDAR sensors are among the most expensive sensors on board an autonomous vehicle, providing detailed information about the surrounding environment. LiDARs provide a 3D overview of the surroundings by periodically projecting laser pulses and measuring the time taken for the reflected signals to return. Compared to the information captured by a camera sensor, LiDARs provide accurate depth information about their surroundings. However, LiDAR-based perception is vulnerable to spoofing attacks. An attacker can spoof the coordinates of an imaginary vehicle in front of the victim autonomous vehicle by directing strategically timed laser signals at the victim's LiDAR sensor [1]. The spoofed signals take advantage of the occlusion patterns ignored by the object detection model in cars.
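To make this threat model concrete, the following sketch (our own illustration, not code from [1]) shows the effect of such a spoofing attack at the point-cloud level: a small cluster of attacker-controlled points, appended to an otherwise benign scan, represents a phantom vehicle a few meters ahead of the victim. The number of injected points, their placement, and the coordinate convention are assumptions made for illustration only.

```python
import numpy as np

def inject_spoofed_vehicle(point_cloud: np.ndarray,
                           distance_m: float = 8.0,
                           num_points: int = 60) -> np.ndarray:
    """Append fake points shaped like a car's rear face directly ahead of the
    victim (x forward, y left, z up, sensor at origin). Real spoofing hardware
    achieves this effect with strategically timed laser pulses [1]; here we
    only model the resulting corrupted point cloud.
    """
    rng = np.random.default_rng(0)
    # A rough 1.8 m x 1.5 m rectangle of points at the chosen distance.
    y = rng.uniform(-0.9, 0.9, num_points)
    z = rng.uniform(0.3, 1.8, num_points)
    x = np.full(num_points, distance_m) + rng.normal(0.0, 0.02, num_points)
    fake_points = np.stack([x, y, z], axis=1)
    return np.concatenate([point_cloud, fake_points], axis=0)

# Usage: a benign (N x 3) scan gains a phantom obstacle roughly 8 m ahead.
scan = np.zeros((0, 3))
spoofed_scan = inject_spoofed_vehicle(scan)
```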
B. Camera

Cameras are crucial in providing accurate environmental perception to an autonomous driving system (ADS) by aiding in the detection of various objects in a scene, such as lanes, road signs, and pedestrians. Furthermore, cameras closely approximate human vision, enhancing the system's adaptability to different environments by observing road textures, lighting variations, and weather conditions. However, despite these advantages, cameras are susceptible to physical attacks which can compromise the safety of the autonomous vehicle. For example, road signs can be covered or manipulated, or additional lines can be painted to confuse the lane detection system [2]. Based on the existing literature, physical camera attacks can be broadly categorized into two classes: proximal attacks, where the attacker is in close range of the target vehicle, which may be moving or stationary, and stationary vector attacks, where the attacker is stationary and road objects are manipulated to confuse the perception layer, such as by tampering with road signs or road lanes.

1) Blinding Attack: Blinding attacks are a form of proximal attack in which an attacker uses light-emitting devices such as lasers in close proximity to the target vehicle. This form of attack exploits the auto-adjustment the camera performs for varying light intensities, a key vulnerability in any camera. The disruption caused by the blinding light can hinder the perception of oncoming traffic, which can lead to a life-threatening outcome. Petit et al. [2] illustrated this in their experiments with numerous light sources activated at varying distances, resulting in momentary blindness of the C2-270 camera sensor, a camera specifically designed for collision mitigation.

C. RADAR

Radar systems are crucial for providing accurate spatial awareness to an autonomous driving system (ADS), aiding in the detection of obstacles such as other vehicles and pedestrians. They are particularly valuable for their ability to operate effectively under various environmental conditions, including poor visibility. However, radar systems are not immune to manipulation and can be compromised through specific types of attacks. For instance, attackers can employ proximal radar jamming or spoofing tactics. Proximal jamming involves using devices that emit radio waves matching the frequencies used by the radar, overwhelming it with false signals and disrupting its ability to accurately measure distances and velocities. Spoofing, on the other hand, involves creating fake radar signals that trick the system into perceiving non-existent objects or incorrectly positioning actual ones. These attacks can severely impair the ADS's functionality, leading to unsafe driving decisions. The existing literature categorizes these as either proximal attacks, where the attacker operates close to the moving or stationary vehicle, or more distanced attacks, where the interference is indirect yet specifically targets the radar's functionality to degrade system performance.

D. GPS

GPS systems are vital for navigation in autonomous driving systems (ADS), offering precise location tracking to facilitate route planning and vehicle positioning. However, GPS is susceptible to certain types of attacks that can undermine its accuracy and reliability. These include GPS spoofing and signal jamming. In GPS spoofing, an attacker transmits fake GPS signals that the vehicle's receiver mistakes for authentic satellite signals, leading the ADS to calculate an incorrect position. This can misdirect the vehicle to unintended locations or disrupt its geographical awareness.
GPS jamming, conversely, involves the use of devices that emit radio noise or signals at the same frequency as GPS satellites, effectively drowning out the real signals and causing the receiver to lose track of its location. These disruptions can compromise navigation safety, leading to potential accidents or deviations from intended paths. The literature classifies these threats into two broad categories: proximal attacks, where the attacker is in close proximity to the target vehicle, whether moving or stationary, and more sophisticated, distanced attacks that manipulate the GPS signal itself, misleading the ADS's perception of its environment.
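Although the surveyed works do not prescribe a specific countermeasure here, a common engineering safeguard is to cross-check each GPS fix against dead reckoning from wheel odometry or the IMU. The sketch below is our own minimal illustration of such a plausibility check, not a mechanism from any cited paper; the local metric frame and the 15 m tolerance are arbitrary assumptions.

```python
import math

def gps_fix_is_plausible(prev_fix, new_fix, speed_mps, dt_s,
                         margin_m: float = 15.0) -> bool:
    """Flag GPS fixes that move farther than the vehicle could have driven.

    prev_fix and new_fix are (x, y) positions in a local metric frame;
    speed_mps comes from wheel odometry or the IMU. A spoofed fix that
    teleports the vehicle fails this check; a jammed receiver simply stops
    producing fixes, which is detected separately as a timeout.
    """
    dx = new_fix[0] - prev_fix[0]
    dy = new_fix[1] - prev_fix[1]
    max_travel = speed_mps * dt_s + margin_m  # allow for noise and drift
    return math.hypot(dx, dy) <= max_travel

# Usage: a fix 120 m away after 1 s at 15 m/s is rejected as implausible.
print(gps_fix_is_plausible((0.0, 0.0), (120.0, 0.0), 15.0, 1.0))  # False
```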
IV. MODEL-BASED PERCEPTION ATTACKS

In this section, we systematically organize a comprehensive review of research dedicated to adversarially attacking the perception systems of autonomous vehicles (AVs). We examine a spectrum of adversarial tactics, ranging from intricately crafted examples tailored to exploit vulnerabilities in white-box models to robustly effective attacks capable of deceiving multiple black-box perception models. We categorize these adversarial attacks based on the particular perception tasks they aim to compromise. The following categorization provides a lucid and structured exposition of our analysis.

A. LiDAR-based Perception

Model-based perception attacks on LiDAR systems are a sophisticated threat to autonomous driving systems (ADS), targeting the core algorithms that interpret environmental data. These attacks manipulate the LiDAR input, causing the perception models to misidentify or overlook objects, thereby compromising navigational decisions. For instance, attackers might inject false objects into the LiDAR's point cloud or alter the representation of actual objects. This results in the ADS either evading non-existent obstacles or ignoring real hazards, posing severe safety risks.

The impact of such attacks is severe, potentially leading to accidents or erroneous vehicle behaviour in traffic. Defending against these threats involves enhancing the algorithms' resilience to abnormal inputs, employing comprehensive validation of sensor data, and integrating redundancy across multiple sensing technologies to verify data consistency. These measures are crucial for maintaining the integrity and safety of ADS operations in the face of increasingly sophisticated cyber threats.

B. Camera-based Perception

Camera-based attacks on autonomous driving systems (ADS) pose a significant threat by targeting the cameras that are crucial for the vehicle's ability to see and understand its surroundings. These attacks typically involve altering visual elements within the vehicle's environment, such as distorting or covering road signs, or creating fake lane markers. For example, attackers might place stickers or paint on a stop sign to make it appear as a yield sign, or project images onto the road to simulate non-existent obstacles. This can cause the vehicle to make dangerous decisions, such as ignoring real stop signals or swerving to avoid imaginary objects, potentially leading to traffic accidents or major disruptions.

To counter such threats, it is essential to develop camera systems with advanced image processing capabilities that can identify and ignore these deceptive inputs. Furthermore, combining multiple types of sensors, such as radar and LiDAR, with cameras helps verify the information each sensor provides, enhancing the overall reliability and safety of the ADS. Implementing such multi-layered defense strategies is crucial to protect autonomous vehicles from these visual perception attacks and ensure they can navigate safely through complex environments.

1) Adversarial Attacks: Adversarial attacks intend to mislead classification results by introducing imperceptible perturbations that cause the models to produce wrong results with significant confidence. Adversarial attacks can be broadly classified into three categories: white-box, black-box, and grey-box attacks. In white-box attacks, the adversary has complete knowledge of the target model, such as its parameters, the loss function, and the dataset involved in training the model. In black-box attacks, the adversary can query the model but knows nothing else about it; the adversary can only infer from the inputs and outputs of the target model. In contrast, grey-box attacks involve partial knowledge of the target model, which can significantly lower the barrier to interpreting the model's mechanism. Within these attack frameworks, numerous attack algorithms have been introduced for adversarial sample generation. We further discuss how these attacks can be defended against, and how effective and efficient these defenses are, in Section VI.

C. RADAR-based Perception

Radar-based attacks on autonomous driving systems (ADS) are significant concerns due to their potential to disrupt the radar sensors that help vehicles detect and respond to objects in their path. These attacks might involve jamming the radar with competing signals or spoofing, where false signals are sent to trick the radar into seeing objects that are not there. For instance, an attacker could use a jamming device to flood the radar with noise, making it difficult for the system to discern real objects like cars or pedestrians. Alternatively, spoofing could make the radar perceive phantom obstacles, leading the vehicle to make unnecessary maneuvers, such as abrupt braking or swerving, creating unsafe driving conditions.

To safeguard against these types of threats, it is critical to equip radar systems with filters and algorithms designed to distinguish between legitimate and fraudulent signals. Additionally, incorporating a fusion of different sensor technologies, such as cameras and LiDAR, alongside radar can help cross-validate data, enhancing detection accuracy and system resilience. This multi-sensor approach is essential in building robust defenses for ADS, ensuring they can operate safely and effectively even in the presence of potential radar interference.
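The cross-validation idea above can be made concrete with a simple association gate: before a radar track is allowed to trigger an evasive maneuver, it must be matched by a detection from another modality. The sketch below is our own minimal illustration under an idealized common coordinate frame; the `Detection` type and the 2 m gate are assumptions, not an interface from any surveyed system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    sensor: str   # "radar", "camera", or "lidar"
    x: float      # longitudinal position in metres
    y: float      # lateral position in metres

def corroborated(radar_det: Detection, other_dets: List[Detection],
                 gate_m: float = 2.0) -> bool:
    """Accept a radar detection only if a camera or LiDAR detection falls
    within a small association gate around it; uncorroborated returns
    (e.g. spoofed phantoms) are flagged instead of triggering braking."""
    for det in other_dets:
        if (det.sensor in ("camera", "lidar")
                and abs(det.x - radar_det.x) <= gate_m
                and abs(det.y - radar_det.y) <= gate_m):
            return True
    return False

# Usage: a phantom radar object with no camera/LiDAR counterpart is rejected.
phantom = Detection("radar", x=20.0, y=0.0)
print(corroborated(phantom, [Detection("camera", x=55.0, y=3.0)]))  # False
```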
V. MISCELLANEOUS ATTACKS ON PERCEPTION SYSTEMS

This section categorizes attacks that are not focused on one particular kind of perception but can still severely affect the navigation of an AV. These adversarial attacks mainly challenge the fundamental concepts of perception that fuel the autonomous navigation system. They range from novel perspectives to downstream tasks that follow after the perception algorithms have identified the regions of interest in the scene.

A. Motion-planning/Trajectory Planning

B. End-to-End Driving Stack

C. Collaborative Perception on Connected AV

VI. DEFENSES AGAINST PERCEPTION ATTACKS

A. Defense against adversarial model-based attacks

1) Adversarial Training: In this section, we discuss the importance of adversarial training for deep neural networks and machine learning models, and the various methods of implementing it. To increase the robustness and efficiency of models against adversarial attacks, adversarial training methods are introduced during the learning stage. Perturbed adversarial samples can deceive the model, leading to inaccurate predictions; however, gradually introducing them during the training stage makes the model robust and able to make better predictions. We further discuss these samples and the defense mechanisms against them.

Over recent years, numerous methods have been proposed to improve defenses against adversarial attacks. Adversarial training is the first and foremost, where perturbed adversarial samples are introduced during the training stage, which allows the model to effectively identify these adversarial samples and improve the accuracy of prediction. Numerous algorithms and methods [3] have been researched for generating the samples introduced into the training stage.

The fast gradient sign method (FGSM) [4] is an efficient white-box attack that creates adversarial examples by calculating the gradient of the loss function with respect to the input data and adding noise along its sign, causing the model to predict wrong results with high confidence. The basic iterative method (BIM) and projected gradient descent (PGD) [3] take repeated steps along the gradient direction to produce perturbed data that yields wrong predictions within feasible constraints; training on PGD examples enhances resistance against a wide range of first-order attacks and has been evaluated extensively on the MNIST and CIFAR-10 datasets [5]. Furthermore, to mitigate computational constraints, ensemble adversarial training [3] trains models on a combination of adversarial examples generated from multiple models. These adversarial training algorithms were tested on datasets such as MNIST, CIFAR-10, and ImageNet to evaluate their robustness, efficiency, and effectiveness.
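To make these formulations concrete, the following PyTorch sketch (our own illustration, assuming a differentiable image classifier with inputs scaled to [0, 1]; the epsilon and step sizes are typical but arbitrary choices) generates FGSM [4] and PGD [3], [5] examples and shows how they can be mixed into a training step for adversarial training.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """One-step FGSM [4]: move the input along the sign of its loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_example(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """PGD [3], [5]: iterated gradient-sign steps, projected back into the
    epsilon-ball around the clean input after every step."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Mix adversarial examples into the batch so the model learns them too."""
    x_adv = pgd_example(model, x, y)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```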
Another class of defenses against adversarial attacks introduces randomization schemes [6]. For example, random transformations applied to the input images, such as random resizing and random padding, increased the robustness of CNNs, resisting 60% of strong grey-box attacks and 90% of strong black-box attacks when tested on a subset of images from the ImageNet dataset. Other randomization techniques include random noising and random feature pruning.
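A minimal sketch of such an input-randomization layer is shown below (our own illustration in the spirit of [6], not the authors' code); the 299-331 pixel range mirrors an ImageNet-scale setting and, like the interpolation mode, is an assumption here.

```python
import random
import torch.nn.functional as F

def random_resize_and_pad(x, out_size=331):
    """Inference-time input randomization: resize the batch to a random
    resolution, then zero-pad it at a random offset, so the network never
    sees an adversarial perturbation at a fixed scale and position."""
    n, c, h, w = x.shape
    new_size = random.randint(299, out_size)
    x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    pad_total = out_size - new_size
    left = random.randint(0, pad_total)
    top = random.randint(0, pad_total)
    # F.pad order for 4D tensors: (left, right, top, bottom)
    return F.pad(x, (left, pad_total - left, top, pad_total - top), value=0.0)

# Usage: logits = model(random_resize_and_pad(images))
```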
B. Defense against sensor-based attacks

1) Redundancy: Redundancy is a crucial factor in enhancing the reliability and security of autonomous driving (AD) systems. Extensive physical tests were conducted, involving the placement of lasers with varying intensities at different distances from a camera. The results indicated that the lasers could effectively blind the camera by preventing it from tuning its exposure, which led to overexposed images. The main parameters varied during these tests were the distance of the laser from the camera and the intensity of the laser.

To defend against such real-world attacks, Petit et al. [2] proposed a solution involving redundancy in the sensor layer. It was observed that placing multiple cameras on the AD vehicle, at different observation angles and distances, significantly increased the effort required by an attacker to blind all cameras simultaneously. For instance, positioning cameras on the front bumper and windshield introduced varying perspectives that made it more difficult for a single laser attack to compromise the system. The data from these multiple cameras is later combined to form a single cohesive image. However, aligning the cameras correctly so the images overlap properly, though seemingly trivial, presents challenges. One of the primary challenges is capturing synchronized images with the same exposure settings to maintain the integrity of the merged image. This redundancy strategy not only enhances the robustness of AD systems but also significantly mitigates the risk of successful attacks, thereby improving overall safety.
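As a minimal illustration of how such redundancy might be exploited in software (our own sketch, not an implementation from [2]; the saturation threshold and the 50% cutoff are arbitrary assumptions), a fusion stage can simply exclude frames that appear blinded before merging the remaining views:

```python
import numpy as np

def usable_views(frames, overexposed_frac=0.5, saturation=250):
    """Drop camera frames that look blinded before fusion.

    `frames` maps camera name -> 8-bit grayscale image. A frame where more
    than `overexposed_frac` of the pixels are near saturation is treated as
    blinded (e.g. by a laser) and excluded, so the merged view is built only
    from the remaining, differently placed cameras.
    """
    good = {}
    for name, img in frames.items():
        frac_saturated = np.mean(img >= saturation)
        if frac_saturated <= overexposed_frac:
            good[name] = img
    return good

# Usage: a washed-out windshield camera is dropped, the bumper camera kept.
frames = {
    "windshield": np.full((480, 640), 255, dtype=np.uint8),  # blinded
    "bumper": np.full((480, 640), 90, dtype=np.uint8),        # normal
}
print(list(usable_views(frames)))  # ['bumper']
```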
VII. DIRECTIONS UNDEREXPLORED

REFERENCES

[1] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao, “Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures,” in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 877–894.
[2] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, “Remote attacks on automated vehicles sensors: Experiments on camera and lidar,” Black Hat Europe, vol. 11, no. 2015, p. 995, 2015.

[3] K. Ren, T. Zheng, Z. Qin, and X. Liu, “Adversarial attacks and defenses in deep learning,” Engineering, vol. 6, no. 3, pp. 346–360, 2020.
[4] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[5] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
[6] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, “Mitigating adversarial effects through randomization,” arXiv preprint arXiv:1711.01991, 2017.
