A Dissertation entitled

Cybersecurity Modeling of Autonomous Systems - A Game-Based Approach

by

Farha Jahan
Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Doctor of Philosophy Degree in Engineering
The University of Toledo
May 2022
Autonomous Systems are soon expected to integrate into our lives as home assis-
tants, delivery drones, and driverless cars. The level of automation in these systems,
from being manually controlled to fully autonomous, would depend upon the autonomy
approach chosen to design these systems. This selection would also affect other design
considerations. The emerging areas of human-machine teams (HMT) and cyber-physical human
systems (CPHS) have attempted to address the human trust in autonomy while tra-
ditional domains of security, along with these new domains, continue to attempt to
address the security concerns. This dissertation revolves around these general ideas
and attempts to answer many open questions. How did we get here? Where is the
future? How do we ensure that the autonomous systems are secure enough so that
we may trust their autonomous operation? Can we model the attacker and defender
behavior based on the strategies for defense or attack? These questions motivate the
phases of this research.
The first phase of this research reviews the historical evolution of autonomy,
its approaches, and the current trends in related fields to build robust autonomous
systems. Towards such a goal and with the increased number of cyberattacks, the
security of these systems needs special attention from the research community. To
gauge the state of the art in this area, we study the works that attempt to
model the system architecture from a security perspective, identify the threats and
vulnerabilities, and then model the cyberattacks. A survey in this direction explores
the various attack models that have been proposed over the years and identifies the
gaps that remain.
The second phase of this work focuses on developing a generic autonomous system
architecture, both theoretical and analytical, to enable the next step of security
modeling. It was construed that any autonomous system can be represented using
three major modules: perception, cognition, and control. Using the proposed
autonomous system model, detailed threat, vulnerability, and attack modeling was
performed. The next step involved exploring various theories and methods to gauge
their suitability; theories such as game theory work best for profit/loss-oriented
cybersecurity problems.
The final phase developed the strategic game formulation and established the method to calculate the
decision payoff and associated Nash equilibrium. Twenty-one different scenarios were
simulated using an OMNeT++-based simulator known as VEINS to obtain
the optimal strategy to maintain a secure system state. A Distributed Denial of Service
(DDoS) attack was chosen because there have been many real-world instances of this
attack, launched simply with a handful of phones or IoT devices. The simulation results
also give a perspective on the attacker's strategies that could maximize the impact
of a DDoS attack in a Vehicular Ad Hoc Network (VANET). Another tool, called Gambit, was
utilized to calculate the Nash equilibrium for each scenario.
After a detailed discussion of the results and their analysis, the work concludes by
summarizing the various actions that the attacker and defender can independently take.
Each party's best and worst scenarios were identified among the different scenarios.
Accurately estimating the attacker's capabilities is key to the success of this model:
underestimating attacker capabilities may lead to the success of the
attack. Therefore, vulnerability analysis and modeling of the target system serve as
an essential foundation for this approach.
[To my daughters Aaiza and Elhaam.]
Acknowledgments
It has been a long journey made easy and fulfilling with the strong mentorship of Dr.
in my endeavors throughout the Ph.D. program. Thank you for your generosity and support.
I would like to thank all committee members, Drs. Niyaz, Niamat, Kim and Kaur,
for taking their time out of their busy schedules to serve on my dissertation committee
and provide valuable comments and feedback. I would like to thank the College of
Graduate Studies for supporting my research with the University Fellowship. Terri
and Mary have always been supportive and prompt in answering any questions I had.
My spouse has been my strongest pillar of support in myriad ways, giving me the room
to accomplish my dreams. Thank you! My family and in-laws never doubted me. I
am thankful to them for their unwavering belief in my efforts. I would also like to
thank my family away from home, Cheryl, Eric, Christy, John, and Michelle for their
blessings, prayers, and the time we spent together. I would like to thank the Little
Sprouts Academy for providing care for my daughter; they put me at ease with their
loving care. Last but not least, I thank my friends, without whom the world would be empty.
Contents
Abstract iii
Acknowledgments vii
Contents viii
List of Figures xv
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Background 6
2.1.2.4 Sliding Scale Approach . . . . . . . . . . . . . . . . . 19
2.2.1 UxVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.3 Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.4 Artificial Intelligence of Things (AIoT) . . . . . . . . . . . . . 31
2.2.5 Swarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.1 Autonomous System Security . . . . . . . . . . . . . . . . . . 46
ory 56
5.1.3 Gambit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.3.1 Scenario AD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.3.1.3 Message Payload vs Bit Rate . . . . . . . . . . . . . 96
Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Density . . . . . . . . . . . . . . . . . . . . . . . . . 107
Density . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.3.6 Scenario AAADD . . . . . . . . . . . . . . . . . . . . . . . . . 108
References 117
List of Tables
5.11 Payoff Matrix: Message Payload vs Vehicle Density . . . . . . . . . . . . 98
5.15 Payoff Matrix: Message Payload, Beacon Interval vs Vehicle Density . . . 101
5.16 Payoff Matrix: Number of RSUs, Beacon Interval vs Bit Rate . . . . . . 101
5.17 Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Density . . . 102
5.18 Payoff Matrix: Number of RSUs, Message Payload vs Bit Rate . . . . . 103
5.19 Payoff Matrix: Number of RSUs, Message Payload vs Vehicle Density . 103
5.22 Payoff Matrix: Beacon Interval vs Bit Rate, Vehicle Density . . . . . . . 105
5.25 Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate . . . . . . 106
5.26 Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Density . . . 107
5.27 Payoff Matrix: Number of RSUs, Message Payload vs Vehicle Density . 108
5.28 Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate
List of Figures
2-4 Statistics of number of devices connected worldwide from 2015 to 2025 (in billions)
attack (qk ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5-10 Packet Loss Distribution with respect to NumRSU . . . . . . . . . . . . 88
5-12 Packet Loss with respect to Bit Rate and Number of RSUs . . . . . . . . 89
List of Abbreviations
BI . . . . . . . . . . . . . . . . . . . . . . . . Beacon Interval
BPNN . . . . . . . . . . . . . . . . . . . . Backpropagation Neural Network
BR . . . . . . . . . . . . . . . . . . . . . . . Bit Rate
BSM . . . . . . . . . . . . . . . . . . . . . . Basic Safety Message
CB . . . . . . . . . . . . . . . . . . . . . . . Colonel Blotto
CCH . . . . . . . . . . . . . . . . . . . . . . Control Channel
CIA . . . . . . . . . . . . . . . . . . . . . . . Confidentiality, Integrity, Availability
COVID-19 . . . . . . . . . . . . . . Coronavirus Disease of 2019
CPS . . . . . . . . . . . . . . . . . . . . . . Cyber-Physical Systems
ECU . . . . . . . . . . . . . . . . . . . . . . Electronic Control Units
GA . . . . . . . . . . . . . . . . . . . . . . . Genetic Algorithm
GE . . . . . . . . . . . . . . . . . . . . . . . General Electric
GM . . . . . . . . . . . . . . . . . . . . . . . General Motors
GPS . . . . . . . . . . . . . . . . . . . . . . Global Positioning System
GUI . . . . . . . . . . . . . . . . . . . . . . Graphical User Interface
NE . . . . . . . . . . . . . . . . . . . . . . . Nash Equilibrium
NFV . . . . . . . . . . . . . . . . . . . . . . Network Function Virtualization
NIST . . . . . . . . . . . . . . . . . . . . . National Institute of Standards and Technology
NS2 . . . . . . . . . . . . . . . . . . . . . . . Network Simulator Version 2
OBU . . . . . . . . . . . . . . . . . . . . . . On-Board Unit
OMNET++ . . . . . . . . . . . . . . Objective Modular Network Testbed in C++
OS . . . . . . . . . . . . . . . . . . . . . . . . Operating System
PITM . . . . . . . . . . . . . . . . . . . . . Person-in-the-middle
PNT . . . . . . . . . . . . . . . . . . . . . . Position, Navigation, and Time
PSNE . . . . . . . . . . . . . . . . . . . . . Pure Strategy Nash Equilibrium
SA . . . . . . . . . . . . . . . . . . . . . . . . Situation Awareness
SCH . . . . . . . . . . . . . . . . . . . . . . Service Channel
SDN . . . . . . . . . . . . . . . . . . . . . . Software Defined Network
SDR . . . . . . . . . . . . . . . . . . . . . . Software Defined Radio
SNR . . . . . . . . . . . . . . . . . . . . . . Signal-to-Noise Ratio
SNIR . . . . . . . . . . . . . . . . . . . . . Signal-to-Noise Plus Interference Ratio
STRIDE . . . . . . . . . . . . . . . . . . Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
SUMO . . . . . . . . . . . . . . . . . . . . Simulation of Urban MObility
WSM . . . . . . . . . . . . . . . . . . . . . WAVE Short Messages
WSMP . . . . . . . . . . . . . . . . . . . WAVE Short Message Protocol
WSN . . . . . . . . . . . . . . . . . . . . . Wireless Sensor Network
Chapter 1
Introduction
The history of remotely-controlled machines goes back to the 1950s, and today they
are evolving into autonomous systems that can make rational decisions, be it a
driverless car stopping for a pedestrian crossing a road or a caretaker robot calling
911 in case of an emergency. Innovation is not the only reason to work closely with
autonomous systems in the future. Instead, necessity might compel us to trust these
systems with our lives. The ongoing COVID-19 pandemic has accelerated the use of
automation in many industries. Even after the pandemic subsides, these industries
would likely continue with automated operations.
1.1 Motivation
Autonomous systems are expected to work closely with humans and assist them in
various tasks. These could be tasks in environments dangerous to humans, such as
space exploration, search and rescue missions, or nuclear power plants, or tasks
where a failure may lead to disasters. An incident was reported in a San Francisco mall where
a patrolling robot failed to recognize a toddler and accidentally ran him over [4].
Recently, a self-driving Uber vehicle was involved in a fatal accident [5]. Armed
autonomous weapons are being developed by countries such as
the USA, China, and South Korea. Sci-Fi novels and movies like ‘I, Robot’ and
‘Terminator’ have created a negative image of these systems and a fear that they may
go against humans and harm them. There is an ongoing campaign against killer
robots (fully autonomous weapons) that would have complete decision control over
their targets [6]. If malicious users hack or control such systems by exploiting their
vulnerabilities, the consequences could resemble scenes from movies. These
autonomous systems are still evolving. It is
essential to analyze the security and safety issues associated with these machines and
thoroughly test them before they are made part of our lives [7, 8].
Cyberattacks will surge as more of these systems connect to the grid, driverless
cars take to the road, and unmanned aerial vehicles (UAVs) take to the skies.
Cybersecurity is one of the significant challenges that businesses and new
technologies will continue to face, with worsening statistics every year. We need
strategic and predictive defense mechanisms that optimally utilize available
resources. Game theory is the field of mathematical modeling
based on strategies among rational decision-makers who choose their actions such that
their payoffs/objectives are maximized. Game theory is now widely applied in various
fields of science after gaining popularity in economics. Game-theoretic
techniques can be used to examine a large number of possible threat scenarios. Specific
actions can be taken, eliminated, or improved using the strategic outcomes, thereby
more efficiently controlling future attacks. These techniques are already being utilized
in real-world scenarios.
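A concrete sketch of this kind of analysis: the code below finds the pure-strategy Nash equilibria of a small attacker-defender game by exhaustive best-response checking. The actions and payoff numbers are hypothetical placeholders, not values from the case study in this work.

```python
# Hypothetical 2x2 attacker-defender game. payoffs[(a, d)] is the pair
# (attacker payoff, defender payoff) for attacker action a and defender
# action d. The numbers below are illustrative only.
payoffs = {
    ("attack", "defend"):    (1, -1),   # defense limits, but does not erase, damage
    ("attack", "idle"):      (3, -3),   # undefended attack succeeds fully
    ("no-attack", "defend"): (0, -1),   # defending has a standing cost
    ("no-attack", "idle"):   (0,  0),
}
A = ["attack", "no-attack"]   # attacker's strategy set
D = ["defend", "idle"]        # defender's strategy set

def pure_nash(payoffs, A, D):
    """Profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for a in A:
        for d in D:
            ua, ud = payoffs[(a, d)]
            best_a = all(payoffs[(a2, d)][0] <= ua for a2 in A)
            best_d = all(payoffs[(a, d2)][1] <= ud for d2 in D)
            if best_a and best_d:
                equilibria.append((a, d))
    return equilibria

print(pure_nash(payoffs, A, D))   # [('attack', 'defend')]
```

When no pure-strategy equilibrium exists, the players must randomize and mixed-strategy equilibria are computed instead; tools such as Gambit automate this for larger games.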
1.3 Contributions
During my earlier work on cyberattacks in UAVs [9, 10], I realized the need for a
generalized analytical model for autonomous systems to represent and evaluate attack
scenarios. Such a model would enable the application of novel mathematical and
economic theories to this area. Therefore, this dissertation
focused on addressing the lack of any major success in this direction by making the
following contributions:
• Autonomous system modeling study: The first phase of this work involved
a survey of the area of cybersecurity and system modeling of these systems. This
resulted in the observation that no generic system architecture was found in the
literature. Therefore, the next phase focused on a generic model applicable to
drones, robots, and driverless cars. This work was also published in the journal.

• Game-theoretic security modeling: The survey also revealed the
lack of use of game-theoretical models in the area of cybersecurity of AS. This
motivated the modeling of cyberattacks on these systems. The game-based model provided insights into
attacker and defender strategies and reached equilibrium when both players op-
erate on their utility function with no incentive to deviate, as they have the best
response based on the other player’s payoff. This will also help strategize the
defense of such systems. This work has not been published yet.
Chapter 1 introduces the dissertation topic and the motivation behind it. It
also discusses the problem statement, describes the various research objectives, and
outlines the organization of the dissertation. Chapter 2 traces
the evolution of autonomous systems and the levels of autonomy. It also explores
different autonomy approaches, which answer questions such as who should have
control of the system, and in what circumstances: the human, the machine, or both.
We then delve into cybersecurity modeling of some autonomous systems such as UAVs
and robots. It also introduces game theory, related concepts, and applications to
cybersecurity.
Chapter 3 describes the system modeling along with threats, vulnerabilities, and
attack modeling. It discusses the insights of the dissertation so far and the research gaps identified.
Based on the insights gained from Chapters 2 and 3, in Chapter 4 we model a generic
autonomous system applicable to a driverless car, robot, and drones. We then propose
a strategic non-cooperative, non-zero-sum game between the attacker and the defender
and derive the strategies that achieve the Nash Equilibrium (NE) and the expected
payoffs of the players.
Chapter 5 is the case study for the proposed strategic game through simulation.
This chapter introduces the simulation tools and the setup requirements. It then
simulates a Denial of Service/Distributed Denial of Service
(DoS/DDoS) attack on a driverless car. Different attack strategies are
considered, and the results are presented as payoff matrices and Nash equilibrium.
This chapter also plots trends and surface plots of packet loss and payoffs for various
game scenarios.
The final chapter summarizes the results obtained in this research. It discusses the
constraints and limitations encountered and the directions for future work.
Chapter 2
Background
Researchers from different focus groups have put a lot of effort into bringing
us to the current level of understanding regarding autonomy. In the last few years,
excellent surveys on autonomy have been published [13–15]. Goodrich et al. presented
the related terminologies and jargon [16]. The level of trust could be a significant factor in
deciding the autonomy levels in autonomous systems [17]. However, to the best of our
knowledge, no existing survey approaches autonomy from a security-modeling
perspective. This work can help carry out further research on the security modeling
of generalized autonomous systems. We first discuss the concepts and factors that
define autonomy and its levels. In general, the term is characterized by the factors
and functionality of the system, from being manual to fully autonomous and
in-between. Having a perspective on the application domain and the scope of
implementation may provide a different look at the security modeling of these systems.
Analytical models explain the state and functioning of a system and its relation
with different components. A mathematical model has a governing equation with
subequations based on its relationship with different components and variables. It
has some initial parameters and boundary conditions, followed by limitations and
constraints. Such models are classified based on the nature of these equations and
variables.
In cybersecurity, attackers are rational decision-makers with some incentives to
attack, and the system administrator, network, or end user needs to protect their
system from such malicious activities. Game theory provides a strategic model to
represent the relationship between the attacker and the defender with the incentives
for their actions. More details on game theory are presented later in this chapter.
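The incentive structure can be made concrete with a small calculation: the sketch below computes the attacker's expected payoff when both players randomize over their actions (a mixed strategy). The 2x2 payoff matrix and the probabilities are invented for illustration.

```python
# Attacker's payoff matrix for a hypothetical 2x2 game.
# Rows: attacker actions (attack, no-attack); columns: defender actions
# (defend, idle). The numbers are illustrative placeholders.
atk_payoff = [[1, 3],
              [0, 0]]

def expected_payoff(matrix, p_attack, q_defend):
    """Attacker's expected payoff when the attacker attacks with
    probability p_attack and the defender defends with probability q_defend."""
    p = [p_attack, 1 - p_attack]   # attacker's mixed strategy
    q = [q_defend, 1 - q_defend]   # defender's mixed strategy
    return sum(p[i] * q[j] * matrix[i][j]
               for i in range(2) for j in range(2))

# Attacking half the time against a defender who defends 80% of the time:
print(expected_payoff(atk_payoff, 0.5, 0.8))
```

At a Nash equilibrium, each player's mixed strategy leaves the opponent with no profitable deviation, so neither side can raise this expectation unilaterally.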
Following are the key contributions of this chapter toward the body of knowledge
on autonomous systems:
• a review of the work done in this area,
2.1 Autonomy: History, Approaches, and Trends
In the last few decades, concepts and modeling of automation have evolved
considerably. Fields such as Human-Machine Teaming (HMT), Artificial Intelligence
(AI), and Unmanned Systems (UMS) share the concept of autonomy. The ideation of such
systems can be dated back to religious myths [18], poets [19, 20], artists [21], and
storytellers [22], thereby materializing into remote-controlled inventions [23, 24],
movies [25, 26], and science fiction literature [27]. With the advancement of
technologies such as sensors and actuators, processors, navigation, and
communication, the definition and
levels of autonomy (LoA) have gone through multiple revisions and modifications to
be up-to-date with the other emerging and advancing technologies so that it can be
The word “robotics” was first coined by Isaac Asimov in the story “Liar!” in 1941,
while “automation” was first coined by Del Harder, a Ford executive, in 1947. As
recounted in [28], in the early history (1954), vehicles remotely controlled by human
operators were called master-slave manipulators. In the 1950s, when industries like
General Electric (GE) were working to build industrial robots such as “Yes Man” and
“Handyman”, the US Army was exploring the ideas of teleoperated rovers such as MOBOT.
In 1978, Sheridan and Verplank [28] discussed the idea of supervisory control.
They explained how it differs from teleoperators and manipulators, listing the
10-level scale of autonomy (LoA), which has served as background work for further
research in this area to this day [?, 30–34]. As more systems were gaining autonomy,
operators’ roles were getting reduced to a supervisor or passive monitor of these sys-
tems, and performance problems on human out-of-the-loop emerged because of many
failure incidents of these systems [35–37]. Norman realized the lack of feedback, poor
communication interfaces, and lower levels of situation awareness [36]. A new
approach to automation was proposed as a solution to these problems: new roles for
automation, changes to the design process, and revisiting the LoA [38]. Endsley et
al. focused their research on improving these aspects [40], moving towards adaptive
autonomy and enhancing the LoA [41]. Adaptive autonomy allows the LoA to change,
initiated by particular events in the task environment, physiological methods, task
load, or changes in operator performance [43]. An initial survey [44]
of work in those fields at that time provided the application history of such systems
in the fields of aircraft, nuclear power, and highway efficiency, and brought forth
the areas that needed further attention.
Parasuraman et al. [43] took a step back and did extensive research on the is-
sues related to adaptive autonomy and how it should be approached in design. They
outlined what, when, and how the adaptation would be invoked; for example, would it
be triggered by the operator or by the system implementation? The idea was that
adaptive automation would aid in solving human out-of-the-loop problems by shifting
control between the operator and the system, as and when needed. It could also
increase workload and affect the operator’s situation awareness [38]. As recounted
in [14], Endsley and
Kaber proposed a revised 10-scale taxonomy based on input functions rather than
output, covering monitoring the environment, generating options, selecting a course
of action, and then implementing the selected one [45]. A year later, Parasuraman,
Sheridan, and Wickens proposed a similar model to Endsley and Kaber [45], adopting a
four-stage view of processing: Information acquisition, Information analysis,
Decision selection, and Action implementation. The automation can be applied to each of these
By the beginning of the 21st century, the Department of Defense (DoD) actively
employed UAVs on different “dull, dirty, and dangerous” missions and had long-term
innovative programs for deploying these UAVs in various areas. One of the focus
areas, along with the development of several other technological requirements for
the enhancement of UAVs’ reliability and survivability, was autonomy [46]. Within
the next decade, ongoing research and goals of the Air Force Research Laboratory
(AFRL) resulted in work on autonomous capability level (ACL) metrics [?]. The
National Institute of Standards and Technology (NIST) realized the need for standard
definitions of autonomy and assembled an ad hoc group for the generic framework
development of unmanned systems’ autonomy level specification, called Autonomy Levels
for Unmanned Systems
(ALFUS), whose contributions included:
• defining the terms and LoA [48–50], and
• illustrating applications of ALFUS in the military, homeland security, and man-
ufacturing [52].
Adjustable autonomy is an approach where the human user, autonomous system, or
another system can adjust the LoA during the operation. After the comeback from a
severe “AI winter” [53], AI created a boom in intelligent and smart systems such as
smartphones and smart home appliances. These advances facilitated the easy deployment
and operations of adjustable autonomous agents, providing multiple channels of
communication between the human user and the system,
such as gestures, voice, and touch. Social acceptability, trust, reliability, mutual situ-
ation awareness, coordination of tasks among users and agents, and transfer of control
strategies formed the elements of the new set of concerns for the researchers, along
with safety and robustness. Efforts were made to include these variables in the LoA
framework [14].
In summary, an autonomous system is any machine that can sense and perceive its
environment, evaluate different plans of action, and, based on instructions from the
user, decide to act, as shown in Figure 2-1. These decisions depend on the autonomy
level at which the system operates. Industrial and academic research on robotics and
automation started in the 1950s. However,
the progress was slow for more than a decade due to the lack of a proper
understanding of autonomy. A summary of the various taxonomies for LoA that
researchers proposed over the years, listed in Table 2.1, shows the work done to
overcome the challenges raised along the way.
Figure 2-1: Autonomous System Functions
This progress also paved the way for more trust and social acceptance of these systems.
Table 2.1: Level of Automation (LoA) frameworks summary

• Level 1 (Low/Remote Control): LoA [28]: no assistance from system, humans decide; LoA [45]: Manual Control; ACL (AFRL) [?]: remotely guided; ALFUS (NIST) [51]: high-level HRI, low-level tactical behaviour.
• Level 2: LoA [28]: system offers a set of decision alternatives; LoA [45]: Action Support; ACL: real-time health/diagnosis; ALFUS: simple environment.
• Level 3: LoA [28]: narrows the selection down to a few; LoA [45]: Batch Processing; ACL: adapt to failure and flight conditions.
• Level 4: LoA [28]: suggests one alternative; LoA [45]: Shared Control; ACL: onboard route replan; ALFUS: mid-level HRI.
• Level 7: LoA [28]: executes automatically, informs human; LoA [45]: Rigid System; ACL: group tactical goals; ALFUS: low-level HRI, collaborative high-complexity missions.
• Level 8: LoA [28]: informs human if asked; LoA [45]: Automated Decision Making; ACL: distributed control; ALFUS: difficult environment.
• Level 9: LoA [28]: informs human if the system decides to; LoA [45]: Supervisory Control; ACL: group strategic goals.
• Level 10 (High/Fully Autonomous): LoA [28]: system acts autonomously, ignores human; LoA [45]: Full Automation; ACL: fully autonomous swarms; ALFUS: near-zero HRI, high complexity, extreme environment.
2.1.2 Approaches of Autonomy
Choosing an approach to autonomy means deciding who would have the control to adjust
it and in what scenarios. It would
fundamentally mean what, when, who, and how the necessary actions need to be
taken. Various works have been done to recognize the balance between the flexibility
of control over the autonomy levels such that HMT outperforms either the human
or machine working alone. Comparative studies were also done to test the efficiency
of these approaches. The supervisory approach can very well be used for robots that
would need supervision at some point through the task completion process. With
goal-driven autonomy, a driverless car could alter its route to the destination. It
could decide to pick up a fellow rider in a
‘Share-a-Ride’ business model while on its way to drop off its customer and update its
goal thereby. The mixed-initiative approach is also one of the promising approaches,
in which the transfer of control between the vehicle and the driver is a challenge
and a topic of further research: each can take control if one feels that the other is
not in a situation to make a better decision. For example, if the driver is sleepy or
in a drunken state, the car can take control. Now consider driving conditions like
heavy rain or a blizzard. In such a scenario, the car will not be in
a good state to be driven autonomously, and the driver should take control of the
vehicle to avoid mishaps. In the same scenario, an autonomous vehicle with sliding
scale autonomy would lower its autonomy and give more control to the user. As
the weather clears, it could take back the full control of the vehicle. A systematic
literature review on adjustable autonomy has been done [56], listing the approaches
proposed over the years. A comparison of the different approaches is provided in
Table 2.2.
2.1.2.1 Supervisory Control/Task-based Approach

The user acts as a system supervisor who monitors the activities of the system and
has the privilege to modify the system behavior dynamically without taking over the
complete control of the system, which could be necessary to avoid task failure [57]. In
this approach, both the user and the system may have individual subtasks to perform
in the entire mission, and the system passes control when it is done with its subtask
[58]. In these situations, the human powers of judgment, experience, and intuition
exceed intelligent algorithms, while the system is better at controlling the
navigation.
2.1.2.2 System-Initiative Approach

Some research has been done in the area where an autonomous robotic team requests
help from the human operator. The system only passes control when it is stuck
and no longer can perform its assigned task and needs human intervention to take
further action. The efficiency of such type of adjustable autonomy depends on how
rapidly and accurately the human operator responds to the situation while engaged
in different unrelated tasks [60]. For example, the Roomba vacuum cleaner is such a
system: it needs human intervention when it gets stuck. Modeling the human-system
team and learning from “interactions at different levels of granularity” would
increase the situational awareness of the team and, in turn, increase the overall
productivity [62].
2.1.2.3 Mixed-Initiative/Teamwork-centered Approach
In this approach, the user and the system smoothly exchange controls throughout
the mission. The idea of such a system where humans and robots complement each
other and collaborate in a safe, productive, and cost-efficient environment is not novel.
The goal of NASA’s Astronaut-Rover (ASRO) project, first tested in 1999 [?], was to
bring humans and planetary rovers together to work seamlessly, communicating
throughout the mission, with the rover acting as a scout, technical field assistant,
infrastructure assistant, and more to the crew [63], with adjustable LoA during
system operation [64]. In such systems, the back-and-forth transfer of control
between the system
and the human should be smooth and quick, along with the guarantee that each
entity would be able to handle its part competently [65]. This approach would be
able to address several challenges, including maintaining consistent and stable oper-
ation, user trust, and situation awareness during the transfer of control at different
LoA [66,67]. Research in the area of urban search and rescue missions utilizing mixed-
initiative control autonomy shows that a robot was able to make better navigation
decisions [68]. This holds for a large-scale team of robots as well, indicating that
the theoretical benefits of this approach can be met if the system and the operators
have complementary abilities, with the systems able to make competent decisions on
their part.
Table 2.2: Comparison of different approaches of autonomy

• Supervisory Control/Task-based Approach. Control: user. Pros: easier to model and implement. Cons: user role reduced to a monitor, causing boredom; recognizing and responding to cyberattacks is difficult. Situation awareness: low SA for a new or inattentive monitor/user. Notes: dependent on user skill/expertise level. Refs: [57], [58], [59].
• System-Initiative Approach. Control: system. Pros: relatively easy to model and implement. Cons: waits for the user to take actions that need user authorization. Situation awareness: user may be distracted with unrelated tasks. Notes: dependent on user skill/expertise level and response time. Refs: [60], [62], [59].
• Mixed-Initiative/Teamwork-centered Approach. Control: decision making shared between the user and the system. Pros: shorter reaction time. Cons: difficult to realize a certain level of autonomy in terms of task assignment; smooth transfer of control is a challenge. Situation awareness: depends on the interface, which would remind the user of the system’s state, time-off period, and the user’s expertise. Notes: the user’s skill level and the system should complement each other. Refs: [?], [65], [66], [67], [69].
• Sliding Scale Approach. Control: user’s control inversely proportional to the system’s. Pros: autonomy levels between the pre-programmed levels can be achieved; increases robustness and adaptability. Cons: swiftly attaining situational awareness is a challenge for the user. Situation awareness: depends on the autonomy level the system is working at and how engaged the user is with the system. Notes: combines the forte of user and autonomy; each does what they are good at. Refs: [70], [71], [?], [72], [62].
• Hierarchical Approach. Control: user/the highest level in the hierarchical architecture determines objectives and control criteria. Pros: easier to manage and coordinate multiple systems; when lower-level systems are compromised, higher levels can disable them until issue resolution. Cons: higher levels rely on lower-level outputs; when a higher level is compromised, the whole system would be down. Situation awareness: a single operator with multiple autonomous systems would increase workload and hence decrease situation awareness. Notes: group coordination is necessary for the completion of the task. Refs: [73], [58], [74].
• Policy-based Approach. Control: user/policies. Pros: increased trust in the system, as the user can set bounds. Cons: making policies for all situations is a challenge. Situation awareness: assessed on a policy-by-policy basis. Notes: completion of a task depends on the possible actions a system can take, as defined by the policy. Refs: [75], [67], [76], [77].
• Goal-driven Approach. Control: user/goals. Pros: more self-sufficient. Cons: high degree of difficulty. Situation awareness: shouldn’t require human intervention. Refs: [78], [79], [80].
• Collaborative Approach. Control: multiple individual systems. Pros: multiple systems serve towards completing a higher goal. Cons: collaborating multiple systems without human intervention is a challenge. Situation awareness: shouldn’t require human intervention. Notes: individual systems must complete their respective goals to achieve a higher goal. Refs: [81], [82].
2.1.2.4 Sliding Scale Approach
The intermediate LoA between the discrete modes (teleoperation, safe, shared, and
autonomous) is realized as a continuous mode of autonomy where the system’s autonomy
increases with a proportional decrease in the user’s control of the system. It is
achieved by blending desired human and system characteristics or variables. The user
can guide the actions/operations of the system, which provides
flexibility to the user for better management [70, 71]. In these works, variables that
characterize the systems are provided on a sliding scale, which would influence the
autonomy levels. In [?, 72], the authors designed a trust scale to adjust the autonomy
level [62]. Another work implements sliding autonomy to develop a coordinated team of
robots that interact with a human operator in case they need help, are stuck, or want
to improve efficiency [83].
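The core mechanic of the sliding scale, blending human and system commands according to the current autonomy level, can be sketched in a few lines. The function name, the scalar command representation, and the blending rule are assumptions made for illustration, not details from the cited works.

```python
def blended_command(human_cmd, system_cmd, alpha):
    """Sliding-scale blend: alpha = 0 is pure teleoperation,
    alpha = 1 is fully autonomous operation."""
    alpha = min(max(alpha, 0.0), 1.0)   # clamp the sliding scale to [0, 1]
    return (1 - alpha) * human_cmd + alpha * system_cmd

# As conditions worsen (e.g., heavy rain), the vehicle slides alpha down
# and returns steering authority to the driver.
print(blended_command(human_cmd=10.0, system_cmd=20.0, alpha=0.25))   # 12.5
```

In a real system, alpha would itself be adjusted at run time, for example by a trust scale as in the works cited above.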
2.1.2.5 Hierarchical Approach

The hierarchical approach organizes systems into levels through which a global problem can be solved based on the knowledge of lower-level systems [73]. It helps to localize specific tasks to systems based on goals, control, duration of execution, the complexity of tasks, and the amount of interaction or supervision needed by the operator, hence defining the autonomy levels. One work models the control of a group of UAVs as three loops (a "motion control inner loop", "navigation", and a "mission management outer loop") [58]. Its case studies conclude that an operator can control an increased number of UAVs if the automation in the control and navigation loops is increased and good user-system collaborative decision making is provided at the mission management level [74].
2.1.2.6 Policy-based Approach

Policies are a set of guidelines, defined by the designer, that an autonomous system must abide by in any given situation. They are permissions given to the autonomous system without changes to its code. Such an approach increases the users' trust in the system, as they can set bounds on its behavior, and supports security, protection from malware and from poorly-designed or buggy agents, and reasoning about an agent's behavior [75]. One such work based on policies is the driving mission for a human-robot team [67], in which one of the policies could be "if the road is slippery, the human should drive". Another example is the Electric Elves, a multi-agent system acting as a personal assistant to a group of researchers for daily activities, such as ordering, scheduling meetings, selecting presenters in the research group, and organizing lunch meetings, based on policies of the strategic transfer of control [76].
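A policy such as "if the road is slippery, the human should drive" can be realized as an ordered rule table consulted before each control decision. The sketch below is purely illustrative; the state keys and rule set are hypothetical, not taken from [67] or [76].

```python
# Hypothetical policy table: each rule maps a condition on the world state
# to the party that should hold control. Rules are checked in order.
POLICIES = [
    (lambda s: s["road_slippery"], "human"),   # slippery road -> human drives
    (lambda s: s["gps_lost"], "human"),        # degraded sensing -> human drives
    (lambda s: True, "system"),                # default: system keeps control
]

def who_controls(state):
    """Return 'human' or 'system' according to the first matching policy."""
    for condition, controller in POLICIES:
        if condition(state):
            return controller

assert who_controls({"road_slippery": True, "gps_lost": False}) == "human"
assert who_controls({"road_slippery": False, "gps_lost": False}) == "system"
```

Because the bounds live in the policy table rather than in the control code, the user can tighten or relax the system's autonomy without touching its implementation, which is the trust argument made above.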
Other works present policy services [84] used in areas such as modeling human-machine teams and the military, with policies for agents that govern their autonomy and adjust it dynamically.
2.1.2.7 Goal-driven Approach
A goal-driven approach monitors expectations during the execution of a plan, detects discrepancies if they occur, determines the reasons for failures, and creates new goals to pursue if the execution of the current plan fails [78]. It incorporates a model for goal reasoning and has been applied in various domains, such as strategic planning in the StarCraft game. A group of researchers demonstrated Autonomous Response To Unexpected Events (ARTUE) in a navy training simulation, the Tactical Action Officer (TAO) Sandbox, and showed that it could perform well in a complex dynamic environment [86]. Related work shows that goal-driven vehicles can successfully detect a potentially hostile surface vehicle while pursuing a mission.
2.1.2.8 Collaborative Approach

In collaborative autonomy, multiple individual agents, each having their own specific goals, work together toward a collective goal. The multiple individual agents form a complex system whose autonomy is decided based on the autonomous actions of the collective individual agents, depending on shared information and goals [81]. A conceptual example of this approach is a system design for Mars exploration with a UAV and a ground vehicle in collaboration. The goal is for the UAV to dock to a charging station on the ground rover without human intervention; the ground rover would serve as a mobile base providing charging, communication, and other support.
2.1.3 Current Trends
In recent years, the research focus has moved to implementing adjustable autonomy in the real world and in more challenging areas such as transfer-of-control and goal reasoning. Companies like Ford and Nissan were expected to launch self-driving cars by 2021, while GM's Cruise and Google's Waymo were not far behind. However, even in 2022, we don't have fully driverless cars on the road, and most companies are still testing their vehicles in states with little to no snowfall. Although cars such as Tesla, Navya, and others have advanced autonomous features, there are still open challenges.
A transfer-of-control execution model would facilitate a smooth transfer of control between the user and the system. To achieve this, a user model, user state monitoring, intent recognition techniques, and efficient interfaces that facilitate collaboration between the user and the system are required. The emerging fields of AI and HMT are gaining momentum: work is being done in facial and emotion recognition [87, 88] that would be helpful in monitoring the user's state, while augmented reality devices and gesture and voice user interfaces would facilitate easy communication.
Autonomous systems must promptly make informed decisions to react safely and reliably in complex dynamic situations, based on an accurate perception of their surroundings. More than one sensor is typically needed for such perception. The fusion of data from multiple sensors and multiple modalities has become crucial; it can be used in image registration [89] along with the detection and mapping of static
and dynamic obstacles along the trajectory [90]. A recent survey discusses the current trends in this area. It is important that the different aspects of these systems have a benchmark and that a proper metric is designated for each of them. This would not only set a standard for the development of such systems but also help to recognize appropriate systems for particular scenarios that require a certain level of
autonomy [92]. Some earlier works were done to establish a common metric for autonomous systems by government agencies [?, 50]. In the field of HRI, common metrics for tasks such as "management, manipulation, and social" interaction have been presented in [93], while other works address areas such as social interaction and assistive technology [94]. A review of these works is presented in [95] to identify common metrics and set a benchmark in the field of HMT. Most recent works reviewed autonomy measures to compare and contrast existing systems.
Autonomous systems are cyber-physical systems that have embedded computers and physical elements connected and controlled by computational processes, often in communication with the outside world. Most of these systems are deployed in critical areas such as nuclear power plants, automatic pilot avionics, and war zones. Some of these systems are highly vulnerable to cyberattacks, and hence, the security of these systems poses significant concerns if they are entrusted with our lives.
Figure 2-2: Taxonomy of Attacks on Autonomous Systems
Complete mission failures, or even a slight change in the desired output data, can leave the operator in a confused or ignorant state. A few research works have been done on the common attacks on these systems [47, 97–101]. Extrapolating from these works, we present a taxonomy of common attacks on autonomous systems in Figure 2-2 and a brief description of some of these attacks and their effects in Table 2.3. A comprehensive list of cyberattacks and
Figure 2-3: Popular Autonomous Systems
its detailed survey on autonomous systems is beyond the scope of this work. In this section, we discuss some of the highly researched autonomous systems (Figure 2-3) and the security work surrounding them.
2.2.1 UxVs
Many companies plan to use UxVs for their business functions. Door-to-door delivery and hauling cargo as far as 300 miles with a weight of up to 200 pounds is not a distant dream. Companies like Bell are working on prototypes that can use gas or electric power and transition from a helicopter to a plane mid-flight, addressing some aerodynamic concerns [112]. These systems are at higher risk of becoming targets of cyberattacks. One of the earliest works to identify vulnerabilities in a UAV autopilot system was done in [2]. A group of researchers analyzed system safety [109] and developed an attack model for stealthy deception attacks.
Table 2.3: Effects of cyberattacks on Autonomous Systems
Attack Type | Description | Effects on Autonomous Systems | Works
Jamming | Caused by intentional interference, e.g., GPS jamming | Loss or corruption of packets, disrupting communication | [102, 103]
Spoofing | Masquerading as a legitimate source, e.g., GPS spoofing | Gain access to the system, information, etc. | [10, 104, 105]
Flooding | Flooding of packets, thereby overloading the host, e.g., DoS, DDoS | Loss of communication through network congestion | [102, 106]
Side-channel attack | Based on extra information gained through physical analysis | Leakage of sensitive information without exploiting any flaw or weakness in the components | [107, 108]
Stealthy deception attack | Tampering with a system component or data | Misleads the system into taking undesirable actions | [109]
Sensor input spoofing | Manipulating the environment to form an implicit control channel | Exercises direct control over the system's actions | [110]
A review of all recent significant attacks on UAVs has been presented in [114]. GPS jamming and spoofing are the two most common attacks on navigation signals. In a GPS spoofing attack, a false GPS signal is transmitted to the GPS receiver, corrupting its position, navigation, and time (PNT) calculation, which results in the deviation of the system from its intended path. Media reports described a recent incident of GPS spoofing in which around 20 ships off the Russian port of Novorossiysk found themselves in the wrong spot: more than 32 kilometers inland, at Gelendzhik Airport [115]. While military GPS signals are encrypted, civilian GPS signals are publicly known. Hence, the GPS spoofing attack poses a significant threat to critical infrastructure and public lives if spoofed systems are used maliciously. Researchers have demonstrated these attacks on a small but sophisticated UAV [105].
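The effect of such an attack can be illustrated with a toy one-dimensional simulation: a controller regulates the estimated position toward a waypoint while a spoofer slowly biases the GPS fix. The model, gains, and numbers below are invented for illustration and do not reproduce the attack of [105].

```python
def simulate(steps, spoof_bias_per_step=0.0):
    """1-D waypoint tracking on a spoofed GPS fix.

    The controller steers the *estimated* position toward waypoint 0.
    A spoofer adds a slowly growing bias to the GPS reading, so the true
    position drifts away while the estimate appears on target.
    """
    true_pos, bias = 0.0, 0.0
    for _ in range(steps):
        bias += spoof_bias_per_step
        measured = true_pos + bias      # spoofed GPS fix seen by the autopilot
        u = -0.5 * measured             # proportional control on the estimate
        true_pos += u                   # the vehicle actually moves
    return true_pos

print(simulate(50))                            # holds the waypoint exactly (0.0)
print(simulate(50, spoof_bias_per_step=0.1))   # drifts several units off course
```

Because the controller sees only the spoofed estimate, the true position deviates steadily from the waypoint while the estimate looks correct, mirroring how the ships' reported fixes ended up kilometers from their true location.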
2.2.2 Driverless Cars

Major car manufacturing companies like Audi, BMW, Ford, GM, Google, and Uber have envisioned a future of autonomous vehicles on the road. Table 2.4 lists some of the major autonomous car manufacturers and their pilot projects as of 2019.
In 2021, Waymo drove about 2.3 million test miles nationwide, while Cruise took second place with 876,104 [116]. It was expected that 2019 might be the year of driverless cars, as GM prepared to launch its fleet [117] while Waymo was already
on the streets of Phoenix, Arizona, opening initially to early riders [118]. Another
startup, Drive.ai, launched its self-driving shuttle service around a geofenced area
of Frisco, Texas [119]. In the effort to envision a ‘smart city’, Lake Nona, Florida,
would soon see AUTONOM, a self-driving bus deployed by Beep in partnership with
French company Navya [120]. Companies like Nissan planned to put driverless cars
on the street of Tokyo by 2020 [121]. Achieving self-driving is not easy, and most
automakers are behind schedule due to various difficulties. Yet, as of 2022, many new cars offer features such as self-parking and summoning, blind-spot monitoring, and lane-monitoring systems. Future autonomous cars would communicate over networks such as Vehicular Ad hoc Networks (VANETs) so that they may exchange traffic and road information. Researchers demonstrated an experiment on a Jeep Cherokee in which an intruder took control of the car from the driver
on the highway. In the beginning, the hackers toyed with air conditioning, radio, and
windshield wipers. The situation became alarming when the accelerator stopped working on a long overpass with no shoulder to escape to. It would have been far worse had the intruder abruptly engaged the brakes or disabled them altogether [122].
Communication over a VANET would help vehicles plan better driving routes. At the same time, data on a user's commuting habits can reveal a lot of valuable information about the user, and attacks could be executed in the middle of a commute. A secure network would throttle all
the possible attacks at the point of entry. Many researchers have reviewed the con-
tribution of others in this area by identifying and classifying the vulnerabilities and
security challenges in VANETs [?, 122, 128–135]. As these vehicles will communicate with each other, other works study the main characteristics of these systems and analyze the corresponding countermeasures against possible attacks [136, 137]. Authenticating vehicles entering the network would be the first line of defense [138]. Authors have dedicated their research to the detection of individual attacks such as DoS [139, 140], jamming [?, 141], and greedy behavior [142]. Compromising the routing mechanisms [143, 144] in a VANET would be an easy way for attackers to re-route and intercept traffic.
2.2.3 Robotics
Robots are marking their place in our lives, with significant investments in robotics, advances in Augmented Reality (AR) and IoT [147], the reduced cost of electronic devices, and people's need
Table 2.4: Driverless Car Technology Trend

Proposed | Launch | Manufacturer | Pilot Project | Features | Works
2005 | 2021 | Ford | – | Level 4 automation, no gas pedal, no steering wheel | [123]
2009 | 2020 | Google | Waymo | Fully self-driving, no steering wheel, no accelerator or brake pedal | [123]
2014 | 2015 | Tesla | Model S | Autopilot, 360-degree view, real-time traffic updates, automatic parking | [124]
2014 | 2025 | Mercedes-Benz | Future Truck 2025 | Autonomous driving | [123]
2015 | 2021 | Volvo | Drive Me | Level 4 automation, IntelliSafe Auto Pilot lets user activate and deactivate autonomous mode | [125]
2015 | 2020 | Nissan | ProPILOT | Automatic lane change on highways, autonomous driving on urban roads and intersections | [123]
2016 | 2019 | General Motors | Super Cruise | Hands-off lane following, brake and speed control | [117]
2014 | 2016 | Induct Technology | Navya | Autonomous shuttle, deployed in specific loops, closed environment & half-mile radius | [126]
2016 | 2018 | Drive.ai | – | Autonomous driving, remote monitoring, LED screens display car's next action to pedestrians | [119]
2016 | 2021 | BMW-Intel-Mobileye | BMW iNEXT | Develop open standard platform for highly and fully automated driving | [127]
for assistance in mundane activities. Being an evolutionary industry, they are de-
signed based on the environment they would be deployed in and work side-by-side
with humans. Their reach varies from space to manufacturing, from home to war
front. Surgical or industrial robots need to have an extremely high level of accuracy,
while rescue robots should be fast and efficient in locating survivors in inaccessible areas. Assistive robots can help with daily tasks, serve as a mobility aid for the elderly and needy, and act as a medium for communication. At the same time, these systems carry severe threats that malicious users can easily exploit. Industrial and academic researchers have demonstrated attacks on industrial robots, showing they can be hacked, which could result in a catastrophic failure of the system [149]. They analyzed the
standard architecture of an industry robot from a security point of view. They devel-
oped an attack model based on the attacker’s goal, an access level to the system, and
their capabilities [150]. An independent security firm took the initiative to evaluate
currently available robots in the market from different vendors. Their initial search
report reflects several cybersecurity vulnerabilities in robot technology [151]. Vendors should implement security measures and address such vulnerabilities from the first phase of the software development process.
Surgical robots are a new trend in the medical industry, reaching even third-world countries such as India, which reported 26 da Vinci systems in 2015 [152]. These
robotic surgeries result in small incisions, minimal blood loss, and faster recovery of
patients with less post-operative pain. These robots work very closely with humans,
and it is essential to ensure the security and safety of robots that operate around
people and animals in homes and organizations alike. In the last few years, various attacks were reported that compromised potential entry points into hospital networks [153, 154]. Such network vulnerabilities could easily be exploited to access surgical robots.
The same goes for household robots on a home network. Since such robots operate near the children and adults of the house, it is more likely that they could be misused by pedophiles and online sexual offenders [156]. Besides privacy issues, a home robot's sensors can be used to collect sensitive data that enables different types of attacks; for example, a compromised scheduler would reveal when the owners would be away from home. Other works have surveyed attacks on service robots, analyzed the threats, and listed the different available defense mechanisms.
2.2.4 AIoT/IoAT

Two future concepts are, in researchers' opinion, more or less intertwined. One is the Internet of Autonomous Things (IoAT), where smart autonomous devices would be connected through the network and would be able to solve problems or adapt themselves through information exchange with their peers. The other is the Artificial Intelligence of Things (AIoT), where devices "actively manage data and decisions on behalf of users" [157]. These two concepts overlap in the sense that smart devices would have some autonomous decision-making ability and would be connected through a network. A future in which IoT devices are information generators and edge devices are decision makers is not far off.
Figure 2-4: Statistics of number of devices connected worldwide from 2015
to 2025 (in billions) published in Statista, 2016 [1].
Note: Data is a forecast from 2017-2025.
The statistics in Figure 2-4 predict that the number of connected devices will reach 75 billion by 2025 [1]. Keeley discussed the market for IoAT and how such devices will be able to solve newer problems through self-organization and team operation. Markets like personal security, home automation, and healthcare will lead the way with IoAT, with intelligent actuators playing a central role.
As big data, machine learning, and Blockchain technologies advance alongside the
innovation of IoT devices, it would be sooner than expected that these devices would
achieve autonomy at the level of human actors [159]. To build smart and reliable autonomous IoT infrastructures, the architecture should be easily scalable depending on the context [160]. Moreover, it should support confidentiality and prevent personal information infringement, allowing users to keep their confidential data "in-house".
For example, AIoT applications would range from a smart pantry for automatic inventory tracking to washing machines that order detergent once the supply is about to run out [157]. Since this area calls for further research, cybersecurity and data privacy should be among the primary goals in the architectural design of such a network; a list of cyberattacks that could be launched on IoT devices is given in [100].
2.2.5 Swarms
Swarm robotics takes inspiration from nature, where, for example, a swarm of insects or a flock of birds performs tasks beyond individual capabilities. It has found applications in varied areas [161], such as monitoring agricultural fields [163] and detecting undesired environmental events. Swarm robotics is much researched for military applications as well. Since the individual entities that make up the swarm are dispensable and redundant, they can be applied in hazardous tasks such as mining. A swarm can complete a particular task faster than an individual robot, as its members are self-organized and work in parallel. Research in swarm robotics covers flocking, foraging [164], and navigating and searching applications [165, 166].
Some early works considered security challenges in swarm robotics and analyzed pos-
sible threats to the swarms [167, 168]. They also compare the specific characteristics
of the swarms with other similar systems such as multi-robots, multi-agent systems,
mobile sensor networks, and MANET. This area of research is still raw, and not much
work has been done on individual attacks on the swarm robotic network.
2.3 Game Theory
We make decisions all the time in our everyday life: as simple as waking up early
or playing chess. Our choices and the decisions of others around us impact our results.
We strive toward optimal decision-making to have optimal results for everyone. Game theory, the formal study of such strategic interaction, draws on Economics, Psychology, Sociology, Control Theory, and many more fields. Games can be static or dynamic, symmetric or asymmetric. There are a wide variety of games, but all of them have these common elements:
Players, Strategies, Outcomes, and Preferences. It is also important to note that the
outcomes result from everyone’s actions, but the players can have their preferences.
The game does not specify what the players do. It specifies only their options and the
consequence of each of them. A game can also be finite or infinite, among other features. The purpose of a game's solution is not to decide who wins and who loses; instead, it is a way to think about what players might do in a given scenario and what the consequences would be.
A game can be represented as a tree that contains all the information. It is called
the extensive form. Each tree node indicates which player is to move and which
moves are available to the player, and the leaf node indicates the outcome of the
player’s moves. Extensive form games are sequential, i.e., each player takes a turn
to play, and the players know about the moves of players who have already played.
The normal form is a game representation in matrix form. Simultaneous games are represented in normal form: the rows and columns indicate the players' strategies, and each cell contains the corresponding payoffs.
Each player employs different independent strategies to optimize their decision-making and beat their opponent in a game. Some strategies lead to better outcomes no matter how good the other player's strategy is; such strategies are called dominant strategies. If X and Y are two strategies and X dominates Y, then Y is said to be a dominated strategy, and a rational player would never play it. We can iteratively eliminate strictly dominated strategies to reduce the game until no more can be found. In a game with no dominant strategy, we look for a strategy that provides the best response, i.e., the maximum payoff, against the other players' strategies. When every player's pure strategy is a best response to the others', the game reaches an equilibrium state referred to as the Nash Equilibrium. Hence, a Nash Equilibrium can be defined as the optimal state of the game in which no player has an incentive to change from their chosen strategy. A game can have one or more pure-strategy equilibria, or none. A player may also decide to randomize between strategies with some probability. A probability distribution from which no player can improve their expected payoff by deviating to another mixed strategy is called a mixed-strategy Nash Equilibrium. A pure strategy is a special case of a mixed strategy in which one strategy is played with probability one.
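These definitions can be made concrete with a short script that enumerates pure-strategy Nash equilibria by checking mutual best responses. The Prisoner's Dilemma payoffs used below are the standard textbook values, not a game taken from this dissertation.

```python
import itertools

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoff_a / payoff_b are the players' payoff matrices
    (rows: player A's strategies, columns: player B's strategies).
    """
    n_rows, n_cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i, j in itertools.product(range(n_rows), range(n_cols)):
        # (i, j) is an equilibrium iff neither player gains by deviating alone
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(n_rows))
        b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(n_cols))
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
print(pure_nash(A, B))   # [(1, 1)] -- mutual defection
```

Here defection also strictly dominates cooperation for both players, so the unique equilibrium survives iterated elimination of dominated strategies; a game like matching pennies, by contrast, returns an empty list and admits only a mixed-strategy equilibrium.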
The world is progressing towards smart gadgets, smart homes, smart vehicles, and
smart cities. There have been some real-world attacks exploiting the vulnerabilities
of the current cyber-physical systems and networks. With the advancement of technologies toward autonomous systems, attacks are bound to increase. It has become more important than ever that the security challenges and concerns related to these systems be addressed.
This chapter started with a glimpse into the historical evolution of autonomy
and the progressive work done in this broad field. Knowing the background of a complex topic answers questions of 'how and when' and provides a comprehensive view. Understanding the autonomy approaches of intelligent systems is a good start to thinking about how a system's autonomy will work. We also explored some of the trending areas in the development of these systems and reported on the work done in industry and academia on the latest autonomous systems as related to security. UxVs, driverless cars, and robots are active areas of research and are gaining presence in our lives. AIoT/IoAT and swarms are newer research areas in which little cybersecurity work has been done. This chapter also introduced game theory and its basic concepts.
Chapter 3
Modeling of Autonomous Systems, Threats, and Attacks
To trust an autonomous system, the user needs to understand what it can do. In what scenarios will it alert the user? How will it transfer control? Can it defend itself under attack? At the same time, the operator should also be aware of the cues the system is sending. Researchers have tried to model different aspects
of the system using different techniques. From a security point of view, it is essential
to analyze the system model to find the vulnerabilities and threats to each part of the
system, which can be further utilized to create an attack model. These areas have
been studied, and various theoretical and analytical models have been proposed. We
have tried to capture these works concerning UAVs, robots, and driverless cars as
these autonomous systems are widely being researched and incorporated into the real
world.
3.1.1 System Modeling
System modeling describes an abstract view of a system while ignoring its details; the model can then be refined to go into further detail. It reflects how the system reacts to certain events or communicates with its environment.
• UAVs: Significant parts of a UAV architecture have been described in [2], which make up the guidance, navigation, and control systems. Modeling a UAV from this perspective captures the communication among its various modules. The model described in the paper has
six modules starting from the data acquisition module, which collects data from
the sensors and sends the required information to respective modules, such as
altitude data to Altitude and Heading Reference System (AHRS) module and
camera data to the telemetry module. The navigation module provides Position,
Navigation, and Timing (PNT) information, while the control module sends
speed and orientation control signals to the system’s actuators. These systems
also have a data logging module that logs flight details such as PNT data to
keep track of the missions and for further analysis in case of failure. Another work modeled UAV behavior and used differential game theory for collision avoidance and formation control.
• Driverless Cars: The DARPA Urban Challenge pitted full-sized autonomous vehicles against one another; a team from MIT was one of those that completed the race. They discussed their autonomous vehicle architecture in [171], where the requirement was to perceive and navigate a road network segment
in a GPS-denied and highly dynamic environment. Another team described the design of their autonomous vehicle system [172]. Based on these works, Guo et al. modeled a mobile robot system
that consists of a robotic platform and a planner to detect sensor and actuator
attacks. Sensors are the eyes and ears of an autonomous system to the outside world, while actuators can be compared to the limbs that execute control commands.
• Robots: [174] was the first work to present a generic model of an autonomous robot. Later works presented the architecture of an industrial robot for security and threat modeling, describing how signals travel over the network to the sensors and actuators of the robot.
Each autonomous system has unique dynamics, and the generalized study of such
systems could be limited to standard parts. Goppert et al. [175] modeled the
dynamics of a UAV system using JSBSim and Scios for the control, guidance, and
navigation system of the UAV. It was used to simulate the response of the UAVs to
several identified cyberattacks, such as fuzzing attacks and digital update rate attacks; in a digital update rate attack, an increased sampling rate would make the system unstable. Other works model the UAV as a dynamic system with zero-mean Gaussian white noise and a constant covariance matrix [113, 176]. Authors have also used game theory to describe the kinematic
model of a UAV using variables to express the 3-D coordinate frame [103]. Guo et
al. [173] modeled a mobile robot as a nonlinear discrete-time dynamic system in which a transition function takes the robot from one state to the next after the robot's actuators execute the command
generated by the planner.
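As a minimal sketch of such a discrete-time model (our own simplified unicycle, not the exact system of [173]), the state (x, y, θ) can be propagated under a velocity command with zero-mean Gaussian process noise:

```python
import math
import random

def step(state, u, dt=0.1, noise_std=0.01):
    """One step of a noisy unicycle model.

    state = (x, y, theta); u = (v, omega) is the planner's command.
    Zero-mean Gaussian noise models the process disturbance.
    """
    x, y, theta = state
    v, omega = u
    x += v * math.cos(theta) * dt + random.gauss(0.0, noise_std)
    y += v * math.sin(theta) * dt + random.gauss(0.0, noise_std)
    theta += omega * dt
    return (x, y, theta)

# drive straight at 1 m/s for one second
state = (0.0, 0.0, 0.0)
for _ in range(10):
    state = step(state, (1.0, 0.0))
```

In this formulation a sensor attack would corrupt the state fed back to the planner, while an actuator attack would alter the command `u` before `step` executes it, which is exactly the distinction drawn in the attacker models discussed later.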
3.1.2 Threat Modeling

Threat modeling identifies and builds an understanding of the threats to a system and then defines countermeasures to mitigate them. Not only does it help to visualize the system model through potential adversaries' eyes, but it also helps to evaluate security risks and countermeasures in the case of possible attacks [177]. Modeling with STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) can help to analyze the data flow through the system [178].
Analyzing each part of the system for a different aspect of security by following the CIA model (Confidentiality, Integrity, Availability) lays the foundation for identifying vulnerabilities, attackers' strategies, and attack vectors [180]. Based on the vulnerabilities of the UAV's
auto-pilot, threats in different components and their effect on the proper functioning
of the UAV were analyzed by Kim et al. [2]. These were preliminary works in this field
Another group of researchers developed a threat model for a smart device ground control station (a portable hand-held ground control station for UAVs) that allows soldiers to pilot UAVs on the battlefield. The work addresses the critical components of these devices and mitigation steps in case of attacks on them [181]. They presented a risk assessment covering areas such as the network and human errors. A similar discussion based on policies to defend against CIA threats in the context of unmanned autonomous systems has been done in [178]
along with threat modeling and risk analysis using the STRIDE approach. Some
software products like Microsoft Threat Modeling Tool and ThreatModeler automate
threat modeling, with the latter offering more sophisticated features [182].
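In its simplest form, a STRIDE pass over a data-flow element is just a checklist of the six threat classes. The sketch below applies it to a hypothetical UAV command-and-control link; the example threats are ours, not drawn from [178] or [181].

```python
# Illustrative STRIDE walk-through for one data-flow element of a UAV.
# Every entry here is an invented example for demonstration purposes.
STRIDE = {
    "Spoofing":               "forged GPS or telemetry source identity",
    "Tampering":              "modified waypoints in the mission upload",
    "Repudiation":            "flight-log entries deleted after an incident",
    "Information disclosure": "video feed intercepted on the C2 link",
    "Denial of service":      "C2 link flooded, operator loses control",
    "Elevation of privilege": "ground-station user gains autopilot debug access",
}

def checklist(component):
    """Enumerate the six STRIDE questions for a named component."""
    return [f"{component}: {threat} -> {example}"
            for threat, example in STRIDE.items()]

for line in checklist("UAV command-and-control link"):
    print(line)
```

Walking each component of the system model through such a table is essentially what the automated tools mentioned above do at scale, with the added ability to infer which threat classes apply to which data-flow element.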
Robots are being adopted in both homes and industries. Most preliminary work on the identification of direct and indirect threats to robotic systems can be found in [174], where Gage discussed both kinds of threats. [108] identified four threat vectors in a mobile service robot: attacks on sensor data, hardware attacks, software attacks, and attacks on infrastructure, while further threats are considered in [183].
[184] models cybersecurity threats, risks, and safety issues of using robots. They
grouped the threats based on the origin of attack (natural, accidental, or intentional),
target (physical, cyber, or both), impact on robot and external entities, and risk.
Threats to an industrial robot have been discussed in [150]. An attacker could alter the production
outcome, introduce defects in the products, cause physical damage to the robot, or
cause harm to coworkers. The robots can also be used as an entry point to extract company secrets.
The research done by Kim et al. identified control system security and application
logic security as the two vulnerability classes of an autopilot system in UAVs and categorized the identified threats under them, as shown in Figure 3-1 [2]. Control system security concerns attacks on the control programs, such as buffer overflow attacks and malware installation. Attacks in which manipulated input data are fed into the control systems exploit the application logic
Figure 3-1: Vulnerabilities of an autopilot system reproduced from [2]
security, for example through GPS spoofing and Automatic Dependent Surveillance-Broadcast (ADS-B) attacks. The communication among UAVs and with ground control stations is also vulnerable, with attacks ranging from disruption of communication links to capturing and using one of the UAVs. Other works analyze the different layers (physical layer, link layer, network layer) of a communication network of UAVs [111]. Krishna et al. reviewed the cyber vulnerabilities of UAVs in a similar manner.
There are more than 100 built-in or installed Electronic Control Units (ECUs)
within a modern car to control and regulate various functions of the vehicle [185].
communications with other vehicles and infrastructure. Each system in the automotive architecture presents an attack surface, based on its sensors and control modules, along with behavioral and privacy risks to the humans involved. In robotics, researchers have evaluated robots currently available in the market from different vendors to show how insecure robot technology is, and the authors of [150] were able to identify several weaknesses of an industrial robot. Lack of
mandatory user authentication, an unsecured network, and naive cryptography are vulnerabilities identified in the computer interface used to interact with the robot. An attacker could easily bypass or disable user authentication and tamper with existing accounts.
3.1.3 Attack Modeling

With the increase in cyberattacks, it has become the need of the hour for governments, organizations, and researchers to plan ahead so that future attacks
can be handled rapidly and efficiently. Attack modeling helps realize the attacks
before they happen and prepares the organization with mitigation steps to take if
an attack occurs. There are various attack modeling techniques to analyze the cy-
berattacks, such as attack graph or tree, diamond model, attack vectors, and attack
surfaces [187].
Guo et al. gave the attacker model for a mobile robot where the attacker can
launch actuator or sensor attacks [173]. Potential cyber attacks on automated vehi-
cles have been discussed in [188] in which the authors have identified attack surfaces
and the possible attacks on automated and connected vehicles. They extend their
research by performing real and effective blinding, jamming, relaying, and spoofing attacks on the camera and LiDAR sensors of automated vehicles [104]. One of the works in this area classifies cyberattacks as passive and active attacks. A passive
attack extracts information from the system without affecting the system resources
like eavesdropping. An active attack's objective is to harm the system in some way, like a DoS attack that compromises the availability of the communication channel [189]. Another work categorized attacks based on the attacker, attack vector, target, motive, and potential consequences. This taxonomy was adapted to UAVs in [114] with minor modifications, such as the addition of new subcategories.
As stated earlier, robots have already entered our lives and are making a place around humans. These robots assist us daily, including with medical services in hospitals, and the consequences could be severe if such robots are attacked. Various works have been done to analyze the possible attacks; one such work targets a teleoperated surgical robot [191]. The authors identified possible attacks and classified them. The robot connects to a network through an interface for operator interaction, such as a joystick and I/O devices. Another work [150] profiles an attacker based on access level, technical capabilities, and access to resources.
Guo et al. also presented a kinematic model for sensor and actuator attacks where
sensor attacks result in wrong sensor readings that might generate erroneous control
commands, and actuator attacks could directly alter the control commands [173]. A
group of researchers modeled the stealthy deception attack, analyzing attacks on the IMU, the GPS, or both, which cause unbounded estimation error without being detected by a monitoring system that uses a steady-state Kalman Filter [109]. They extended their work to
model direct control acquisition and onboard navigation attacks, including individual
and combined stealthy deception attacks on the Inertial Measurement Unit (IMU)
and GPS. They further proposed a real-time safety assessment algorithm to verify the
safety of the UAV subject to cyberattacks based on reachability analysis [113]. Another group defines the UAV model under a GPS spoofing attack where falsified data could be injected into the navigation component, for instance through a GPS signal simulator [176]. They formulated a real-time manipulation method for the UAV that evades the fault detector and computed the attainable location set of the UAV under such attacks.
Many researchers have also used decision-making theories to model attack strategies as games [103, 192–194]. Bhattacharya et al. analyzed the coordination of multiple agents in the presence of a jammer and modeled this scenario as a zero-sum pursuit-evasion game [103]. A zero-sum
network interdiction game between a vendor of a delivery drone and an attacker was
modeled using prospect theory [192]. Here, the attacker's objective was to prevent the goods delivery drone from taking the optimal path from the warehouse to the destination.

3.2 Game Theory in Cybersecurity
The application of game theory to physical and cyber security is gaining ground. Game theory has been applied to network security to model cyberattacks and to design defense systems based on game theory and behavioral dynamics. However, few works attempt to address the cybersecurity issue of autonomous systems. This section briefly reviews these works.
One of the early works, from 2015, focused on developing a game-based security framework formulated as a model-based predictive control (MPC) problem, which the authors solved using a dynamic signaling game model. For resilience enhancement, the work focuses only on perception through the use of a limited-range sensor. The authors used MATLAB-Simulink for the simulation. Another work
combined the robust deep reinforcement learning (DRL) model and game theory to
address the security and safety in autonomous vehicle systems [196]. The scenario
involves an attacker injecting false data to the AV to reduce the distance between
AVs to cause a crash or reduce traffic speed/movement. The possible values for such
a data injection attack are countless. Therefore, the authors propose using an LSTM
block for both attacker and the AV to learn the deviation in space between vehicles
due to their own or other players’ actions. This data is then fed to the DRL model,
which plays the game for each player and attempts to minimize (for AV) or maxi-
mize (for the attacker) such deviations. This work focuses on data injection attacks,
and the authors have shown that the algorithm converges to a mixed strategy Nash
equilibrium point. A similar work from the same authors proposed the use of the
Colonel Blotto (CB) game for secure state estimation between interdependent critical
infrastructure (ICI) such as power, gas, and water supply systems [197]. Although not autonomous, most of these systems are heavily automated and only monitored for faults and errors. This work proposes a model for ICIs, uses Kalman Filtering for state
estimation, and explores maximum state estimation errors due to compromised sensors. Finally, the authors use pure-strategy and mixed-strategy non-cooperative game analysis. The results verify that the derived MSNE is the best strategy, and that the ICIs must be considered jointly.
Various other works have attempted to apply game theory in cyber-physical sys-
tems such as train control, drone delivery, smart grid, etc. One work secures train control systems against a jamming attack on the wireless communication sub-system [198]. The authors applied dynamic
programming and used analytical results to find the equilibrium. The scenario de-
picted is similar to [196] in the sense that trains are expected to be communicating
the headway, i.e., distance from one another, which may get affected due to the jam-
ming attack and result in a crash or equipment failure if the trains need to stop at a
short distance. Another 2017 work attempts to solve a zero-sum network interdiction game targeted at a drone delivery system where the two players try, respectively, to maximize and minimize the success of the delivery. Notably, this work incorporates a secondary concept of prospect theory in the game that enables the subjective perception of attack success probabilities. The authors analyze the equilibrium of the game, and the simulation results show that subjective decision-making leads to the victim losing the game.
the game. Another 2017 work focuses on a DDoS attack on the advanced metering
infrastructure (AMI) of a smart grid and employs honeypots to detect and gather
attack information [200]. The authors prove the existence of several Bayesian-Nash equilibria for the proposed Bayesian honeypot game and present simulated results
using the OPNET simulator. A notable recent work proposes a generic game-theoretic approach for modeling cyber-attack and defense strategies; a game is presented that allows players to choose between three levels of actions (no action, low-intensity action, high-intensity action) [201]. However, the design of the various stages of this proposed work extracts parameters under the assumption that the game depends only on the network component of the system, ignoring other essential system components.
Traditional network security concerned the traffic flowing through routers or switches; now we have cloud and IoT environments, wireless ad hoc networks (e.g., MANET, VANET), etc. The large scope of this domain has produced many game-theoretic defenses against attacks. A wide variety of games has been utilized in these works [210, 211], including the signaling game [212], the maximin game [213], the non-cooperative Bayesian game, and the stochastic game [215]. Several works have also combined game theory with other bio-inspired or learning mechanisms, such as Holt-Winters forecasting and a genetic algorithm (GA) with fuzzy logic [216], a backpropagation neural network (BPNN) [217], and, as previously mentioned, LSTM/DRL [196] and Kalman Filtering with linear programming [197].
In one of the research works, the authors proposed a DoS mitigation framework based on a penalty-incentive mechanism to punish the attacker by lowering their priority and postponing their requests so that other requests can be processed normally. Another work modeled the attacker-defender interaction game in a VANET during a denial-of-service attack. The strategies of the attacker are to attack or stop attacking, while the defender's strategies are to sustain, move away, or stop the vehicle. They used NS2 as the network simulator, SUMO for the mobility model, and MOVE to create a traffic model for their simulation [219].
Another work published in April 2022 (a month before the writing of this disserta-
tion) provides a mathematical representation using game theory for the prevention of
DDoS attacks on drone networks using UAVSim, a UAV simulation testbed built on
OMNET++ [220]. However, this work formulates a zero-sum game for UAVs while
our work focuses on the formulation of a non-zero-sum game that resembles real-world
losses. Also, our proposed payoff function can be applied to any autonomous system
based on the quantification of cost while the work mentioned above focuses strictly on
UAVs. Our work is also more extensive in evaluating attacker and defender strategies.
To ensure and assure that these systems are safe and secure to be used by humans, a new approach towards cybersecurity and autonomy is needed. The research must address, among other issues, uncertainties in modeling and ensuring that a system performs well over its lifetime. Rigorous mathematical modeling could provide a basis for a framework that would help in the early development study of various capabilities, factors, and trade-offs between human interaction and machine automation [221]. Many current systems are not designed with factors like security in mind. Though we are moving ahead towards an autonomous future, there are many research challenges that researchers have to face. We highlight some of them below.
• Building Human Trust: After Uber's self-driving car crash in 2018, a survey performed by Statista showed that trust in self-driving cars dropped to 27% [222], highlighting the challenge these systems are about to face. While these systems promise a more comfortable and efficient life, safety and security measures need to be taken before deployment among the public. The world should be ready in legal, social, economic, and ethical contexts before these systems are incorporated into our lives, as failures of these systems are inevitable at some point in time, either through known or unknown causes. The trust in these systems could only be built by thorough analysis and testing. The service providers and manufacturers of these systems should ensure that user safety and data are not compromised.
• Machine Learning for Security: Machine learning-based protections require a large, diverse, and complete dataset to be secure and safe. A simulated dataset is incomplete as it fails to capture critical conditions in the real world; its relevancy and integrity are questionable as it lacks the human factor. If the activities of a system are monitored and analyzed by a machine learning-based system, it can detect malicious activity early and alert the driver or take preventive measures to avoid fatal accidents. With the collected data, machine learning models can learn to distinguish malicious commands from usual commands. They can also be used to establish behavioral profiles of any potential attacker [224]. This would also improve the effectiveness of the security measures in place.
• Data Security: A lot more research needs to be done in the area of securing the data of these systems. Blockchain is an emerging technology that can be used to provide a more secure and robust solution for these systems. A blockchain is a chain of blocks linked through a cryptographic hash of the previous block with a timestamp and distributed over the network, which makes it resistant to modification of the data. The work in [225] shows that Blockchain has the necessary capabilities for swarm robotics operations to be more secure, autonomous, and flexible. This technology would not only provide private and reliable communication among swarm agents, but it would also overcome the vulnerabilities, potential threats, and attacks associated with them [226]. The decentralized storage of Blockchain would guarantee the confidentiality and integrity of the driver's data shared securely over the network. In many cases, the end controller of these systems would be handheld devices such as mobile phones and controllers that lack strong protection; one solution is to embed security into the hardware design. Another solution is to secure the network using Software-Defined Networking (SDN), which provides a view of the entire network topology and helps to block specific attacks. One work proposes a secure mobility model between UAVs and ground Wireless Sensor Network (WSN) nodes where communication would pass through the SDN controller, integrating security functions into the SDN controller. Along with the holistic view of security, SDN simplifies the management of the network.
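The hash-chained block structure described above can be illustrated with a minimal sketch (the function and field names are invented for this example, not taken from the cited works): each block stores a timestamp, its payload, and the hash of the previous block, so modifying any block invalidates the chain.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (excluding the stored hash itself).
    payload = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_valid(chain):
    # Every block must match its own hash and link to its predecessor's hash.
    return all(
        blk["hash"] == block_hash(blk)
        and (i == 0 or blk["prev"] == chain[i - 1]["hash"])
        for i, blk in enumerate(chain)
    )

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("sensor reading", genesis["hash"])]
assert chain_valid(chain)

chain[1]["data"] = "tampered"   # any modification breaks the hash linkage
assert not chain_valid(chain)
```

Tampering with any field changes the recomputed hash, which is exactly the modification resistance described above.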
In this chapter, the discussion began with the cybersecurity of autonomous systems, reviewing work done in industry as well as academia on the latest autonomous systems, with the security-related studies listed in Table 3.1. This study focuses on the system, threat, vulnerability, and attack modeling of a few widely researched autonomous systems, which shows how much work has been done in these areas.
Our insight into these discussions is that cybersecurity and modeling of individual
autonomous systems are at a very early stage. As per our findings and to the best
of our knowledge, UAV is the most active field of research concerning system modeling and attack scenarios. System modeling of autonomous vehicles started with the VANET network; while modeling of the vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) VANET network has been done, modeling of the threats and attack vectors of the autonomous vehicle itself needs much more attention. Robot security has received less attention, which can be inferred from Table 3.1, and could be an area of research. New modeling techniques like game theory and machine learning could be applied in these areas as well. Also, the autonomy approach
should be taken into account while modeling threats and attack scenarios for these
autonomous systems.
Next, we surveyed the applications of game theory in cybersecurity. Our research shows that game models have been previously used in network security to model attacks. Game theory has also been applied to cyber-physical systems such as train control systems, drone delivery systems, and smart grids. Several works target securing Cloud or IoT environments and wireless sensor networks against attacks and intrusions. This study shows that little work has been done to address the cybersecurity
issue in autonomous systems. Finally, we discussed the future research directions and
challenges in enhancing the automation and security of these systems. Adopting au-
tonomous systems discussed above will usher in a new era of technological advances
and economic growth. Driverless cars are expected to reduce road accidents, fuel
consumption, traffic congestion, and air pollution [233]. Robots would be deployed
in homes and various industrial sectors to provide assistance and efficiency in routine
and high-precision jobs. The question is: how secure and safe are these technologies to be adopted into society? With every new revolution in transportation, there are
risk factors involved. These systems should not be prematurely deployed to the public. Researchers in industry and academia alike have to shoulder the responsibility of
addressing security flaws. The manufacturers have to ensure the safety of their users,
considering even the remote scenarios of mishaps and attacks. Also, the government
has to have proper legal policies and support infrastructure in place [234].
Table 3.1: List of System, Vulnerabilities, Threat, and Attack Modeling
Studies
Chapter 4

Cybersecurity Modeling of Autonomous Systems Using Game Theory
The world is progressing towards an era of AS. Autonomous operations with voluminous data processing, integrated AI, and high-definition imaging would develop capabilities that would change the outlook of this booming industry. These AS would increase
efficiency and task productivity with improved safety in work environments. For
example, any accident investigation that could manually take three hours to collect
information could be done in less than an hour using a drone, reducing the traffic
delays and saving time and money [235]. Driverless cars are estimated to save mil-
lions of lives worldwide by avoiding accidents caused by human errors [236]. As the
level of autonomy of these systems moves towards full automation, attack vectors and
their impact would increase as well, which may result in deadly consequences [237].
Attacks with increased complexity have been on the rise in recent years. It is critical to consider the security of these systems and explore solutions. One promising approach could be to apply game theory in this regard [11].
The main contributions of this chapter are multi-fold. First, we model a generalized autonomous system architecture whose modules can be attacked individually and, based on the defensive measures, the impact would vary accordingly. Second, we formulate a game between an attacker and an AS to numerically compute the mixed strategies that achieve the Nash Equilibrium (NE) and the expected payoffs of the players. The AS would act as a defender against the attacker. A game-theoretic framework can analyze the system's response and payoffs for both the players in an attack situation when specific measures are in action. Third, we have considered the probability of a successful attack in defense and no-defense scenarios and the cost of damage in our computation. In addition, we consider the game as 'non-zero-sum,' which maps to the real world more realistically than the works of [?, 197, 220]. Fourth, we extend the work in [238] to an n × n bimatrix game represented in normal form. This method is more accessible than algebraic approaches for n > 2. Although various works have analyzed these systems' threat and attack models, and Section 3.2 discussed various cyber attack-defense games, to the best of our knowledge, none has proposed a game related to the security of an autonomous system at the module level. We first discuss the role of each module [239] before we move to the design and instructions of the game.
Figure 4-1: High-level Autonomous System Architecture.
One proposed architecture includes motion planning, vehicle control, and actuator control, along with sensing and mapping as significant blocks [240]. Petnga et al. discuss a high-level architecture comprising the cyber (computation and communication) and physical (sensors and actuators) components of these systems [241]. Based on these works, we divide the architecture into three modules: i) perception, ii) cognition, and iii) control. Fig. 4-1 shows a high-level architecture of an AS.
An AS senses the environment through sensors that act as eyes/ears for the AS.
The perception module combines data from various sensors to create a picture of the
environment through a sophisticated algorithm. There are two types of sensors: exteroceptive and proprioceptive. Exteroceptive sensors gather information about robot workspaces, like LASERs, LiDAR, and cameras. Proprioceptive sensors measure parameters for subsystems internal to the AS, such as the compass, gyroscope, and potentiometers. These sensors carry private information about the owner or the machine's status, posing high security risks if they are compromised. Furthermore, the data from multiple sensors are fused, which not only helps in localization and grid mapping for navigation but also in detecting dynamic objects and recognizing them, such as pedestrians and traffic signs [242]. An attack on the perception module can therefore corrupt the inputs to the system's decision-making [243].
Cognition is the ability of a system to make complex decisions based on the sys-
tem’s intelligence algorithms on the data it receives from the perception module and
the hardware. An AS with a high level of autonomy would have to make a more com-
plex analysis of the information for mission planning with many unknown factors.
It has to assess the complexity of the given task and the environment, level of au-
tonomy, risks, costs, and the broader mission before making effective decisions [239].
For example, an autonomous vehicle needs to make judgments about the best route,
be aware of its surroundings, and avoid collisions to reach its destination. Also,
the cognition module should perform a threat assessment to ensure the system’s se-
curity and detect any malicious activities. Application layer attacks such as GPS
jamming/spoofing and Sybil attacks may cause the system to make erroneous deci-
sions [244].
Control can be described as the ability of the AS to execute the decisions made
by the cognition module through physical or digital means [239]. In 2015, a remote attack on the actuators of a Jeep Cherokee was launched that took over the controls of the steering wheel and brake systems [121]. Guo et al. proposed a mobile robot
intrusion detection system for the detection of sensor and actuator attacks [245].
Hwang et al. modeled the attack and analyzed the security of the system for deception attacks. The impact of an attack on the control module will vary with the system (driverless car, robot, UxVs, or a network) under attack, the criticality of the mission, and the operating environment.
Figure 4-2: Process Flow of a Game

For instance, an attack on a system designed for operation in a highly critical environment will have more impact on the surroundings than one operating in an isolated environment. Attacking the control module of a Roomba vacuum cleaner would yield less incentive to the attacker than attacking a UAV or a driverless car. However, it still may cause inconvenience to the owner, like cleaning the same area again and again or going in circles. The degree of
autonomy may also vary based on the severity or motive of the cyber attack. The
attack may even disconnect the user from the system or deny requests for support.
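As a toy illustration of the three-module division above (the functions, values, and threshold are invented for this example and are not part of any cited architecture), the perception-cognition-control flow can be sketched as:

```python
def perceive(sensor_readings):
    """Perception: fuse raw sensor readings into a single estimate."""
    return sum(sensor_readings) / len(sensor_readings)

def decide(estimate, threshold=0.5):
    """Cognition: turn the fused estimate into a decision."""
    return "brake" if estimate > threshold else "cruise"

def actuate(command):
    """Control: execute the decision as an actuator setting (throttle level)."""
    return {"brake": 0.0, "cruise": 1.0}[command]

# Three sensors reporting a high obstacle likelihood lead to braking.
throttle = actuate(decide(perceive([0.9, 0.8, 0.7])))
```

Spoofing the readings (e.g., `perceive([0.1, 0.1, 0.1])`) flips the decision to "cruise", illustrating how an attack on perception propagates through cognition to control.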
In this section, we introduce the autonomous system security game model, define the payoff functions based on the optimal actions of rational players for a given set of conditions, and then reach a state of equilibrium. Fig. 4-2 shows the process flow of the game.
4.2.1 Autonomous System (AS) Security Game Representation
A non-cooperative game is one in which the players do not coordinate their strategies; each tries to bring down the other player's payoff. Ours is a non-zero-sum game, as there would always be some loss to the defender. We represent the game in normal form and solve it for a Nash equilibrium. The security game is defined over the set of players N, where S_j is the strategy space and U_j is the utility for j ∈ N. In an attack scenario, the game elements are as follows.
1. Players: There are two players involved in this game; the attacker and the
defender.
Attacker - The attacker could be a malicious entity, a compromised network, or another AS(s) who would benefit from the maximum damage caused by the attack to the target AS. There is a possibility that the attacker plans to attack more than one module simultaneously. The attacker action set would include no attack, attack on one module, or attack on multiple modules.
Defender - The other player is called the defender, whose actions would minimize the vulnerability of the system and take security measures in case of an attack. Such measures could be taken by the operator or by the system itself. The defender action set would consist of no defense, defend one module, or defend multiple modules.
2. Strategies: Let S denote the set of all the possible strategies of the players, where z is the number of all possible combinations of modules to attack or defend. These
Table 4.1: Enumeration of Attack/Defense Strategies
Strategies Perception (P) Cognition (Cg) Control (Cn)
S1t (P) 1 0 0
S2t (Cg) 0 1 0
S3t (Cn) 0 0 1
S4t (CgCn) 0 1 1
S5t (PCn) 1 0 1
S6t (PCg) 1 1 0
S7t (PCgCn) 1 1 1
strategies are enumerated in Table 4.1. Each module is represented by 0s and 1s. 0
means no attack for the attacker, and 1 means the system is under attack. Similarly,
0 for defender means no defense, and 1 means the module under consideration is
being defended. For example, from an attacker’s perspective, S7t indicates all the
three modules are under attack, and from the defender's perspective, all three modules are being defended.
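The strategies of Table 4.1 are exactly the nonzero 0/1 vectors over the three modules and can be enumerated programmatically; a small sketch (names are illustrative):

```python
from itertools import product

MODULES = ("Perception", "Cognition", "Control")

# All nonzero attack/defense vectors over the three modules: z = 2^3 - 1 = 7.
strategies = [s for s in product((0, 1), repeat=len(MODULES)) if any(s)]

def label(s):
    # e.g. (1, 1, 0) -> "PCg", matching the naming used in Table 4.1
    abbrev = ("P", "Cg", "Cn")
    return "".join(a for a, bit in zip(abbrev, s) if bit)
```

For n modules this generalizes to z = 2^n − 1 strategies per player.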
In game theory, each strategy results in a payoff to the players. A security breach can result in the loss of data, communication, or the system itself. The attacker would incur the cost of attacking; we denote the cost associated with implementing these attacks as CA. The defender would employ strategies to block or mitigate the attacks. For example, an AS would switch to an Inertial Navigation System (INS) and other sensors if the navigation system is down. The costs incurred by the defender, denoted CD, are a monetary measure of the time, effort, or resources used. W represents the economic value of the damage caused by a successful attack, which could be high enough to cause a cascading failure effect, from a few crashes to traffic jams to
Table 4.2: Enumeration of Possible Cases of Attack and Defense

Case | Attack Status | Condition | Probability
0 | no attack | — | —
1 | successful | mk ≠ ml | pk
2 | unsuccessful | mk ≠ ml | 1 − pk
3 | successful | mk = ml | qk
4 | unsuccessful | mk = ml | 1 − qk
loss of business and trust of the end-users. Such political, social, and environmental
impacts of the attack are difficult to quantify and are beyond the scope of our work.
For simplicity, we consider the economic value of the damage directly related to the
defender.
Let pk be the probability of a successful attack on module k when no defense has been applied to it, i.e., when mk ≠ ml, where mk and ml denote the modules that the attacker decides to attack and the defender decides to defend, respectively. Let qk be the probability of a successful attack when defense measures are active, i.e., mk = ml. Table 4.2 enumerates all the possible scenarios of an attack that should be considered when calculating the damage caused by the attack. Case 1 indicates that the attacked module was not the one that was defended. This leaves the module vulnerable, so there is a probability pk that the attack was successful, and a probability 1 − pk that the attacker was not successful in exploiting the vulnerability of the module. For case 3, the attacked module had defenses, but they failed to stop the attack, which succeeds with probability qk.
Therefore, for each module k, the probability bk of each case is given by:

$$b_k = \begin{cases} p_k, & \text{if } m_k \neq m_l,\ a = 1 \\ 1 - p_k, & \text{if } m_k \neq m_l,\ a = 0 \\ q_k, & \text{if } m_k = m_l,\ a = 1 \\ 1 - q_k, & \text{if } m_k = m_l,\ a = 0 \end{cases} \qquad (4.1)$$
Suppose the attacker plans strategy $S_4^a = \{0, 1, 1\}$ and the defender plans $S_2^d = \{0, 1, 0\}$. Let $s$ be an element of the set of all possible outcomes, $S = \{(H_0, H_3, H_1), (H_0, H_3, H_2), \ldots\}$, where $H_i$ denotes the specific case listed in Table 4.2. The total economic loss for the defender can be calculated as the summation over all possible outcomes of the product of the probabilities of each attacked module and the total cost of damage [246]:

$$W_i = \sum_{s \in S} \left( \prod_{\substack{k=1 \\ S_i^a(k)=1}}^{n} b_k \right) \left( \sum_{\substack{k=1 \\ a=1}}^{n} C_k \right) \qquad (4.2)$$
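Equation (4.2) can be evaluated by enumerating the success/failure outcome of every attacked module; the sketch below does this directly (the probability and cost values are made up for illustration):

```python
from itertools import product

def expected_loss(attack, defend, p, q, C):
    """W_i of Eq. (4.2): sum over all outcomes of P(outcome) * damage(outcome)."""
    attacked = [k for k, a in enumerate(attack) if a == 1]
    total = 0.0
    for outcome in product((0, 1), repeat=len(attacked)):  # 0 = fail, 1 = succeed
        prob, damage = 1.0, 0.0
        for k, success in zip(attacked, outcome):
            # Success probability is q_k on a defended module, p_k otherwise.
            s = q[k] if defend[k] == 1 else p[k]
            prob *= s if success else 1.0 - s
            if success:
                damage += C[k]
        total += prob * damage
    return total

# Attacker plays S4 = {0,1,1}, defender plays S2 = {0,1,0} (illustrative numbers).
W = expected_loss([0, 1, 1], [0, 1, 0],
                  p=[0.9, 0.9, 0.9], q=[0.3, 0.3, 0.3], C=[10, 20, 30])
# By linearity of expectation this equals 0.3*20 + 0.9*30 = 33.0
```

The enumeration mirrors the outcome set S above: each attacked module contributes either its success or failure probability to each outcome's weight.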
We have not considered the situation of no attack and no defense, as this will yield zero payoff to the attacker. If the attacker succeeds in the attack, they will cause damage W to the defender; the attacker's payoff would be the benefit minus the cost of attack. If the defender has defensive measures, a successful attack would cost the defender the amount of damage plus the amount spent on defending the system. The payoffs of both the players corresponding to the possible strategies of the attacker ($S_i^a$) and the defender ($S_j^d$) are given by equation (4.3).
Table 4.3: Payoff Matrices for the AS Security Game
where $R_D$ is the total cost of the modules, $C_{A_i}$ is the sum of the costs of attack on individual modules ($\sum_{k=1}^{n} C_{A_k}$), and $C_{D_j}$ is the sum of the costs of defense on individual modules ($\sum_{k=1}^{n} C_{D_k}$). In case the attack is unsuccessful, from equation (4.2), $W_i = 0$.
Based on eqn (4.3), Table 4.3 shows the 3×3 ordered pairs of the payoff matrices [A, D].
Let X be the set of all mixed strategies of the attacker, where a mixed strategy is a vector $x = (x_1, \ldots, x_z)$ with

$$x_i > 0 \quad \text{and} \quad \sum_{i=1}^{z} x_i = 1 \qquad (4.4)$$
Similarly, let Y represent the set of defender’s mixed strategies. For a bimatrix game
[A, D] where A = [aij ] and D = [dij ], if the attacker chooses the mixed strategy x and
the defender chooses y, the expected payoff of the attacker and the defender would
be
$$A(x, y) = \sum_{i=1}^{z} \sum_{j=1}^{z} x_i y_j a_{ij}, \qquad D(x, y) = \sum_{i=1}^{z} \sum_{j=1}^{z} x_i y_j d_{ij} \qquad (4.5)$$
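Under mixed strategies x and y, equation (4.5) is simply the bilinear form $x A y^T$ (and $x D y^T$); a quick sketch with made-up 2×2 matrices:

```python
import numpy as np

def expected_payoffs(x, y, A, D):
    """Eq. (4.5): attacker payoff x A y^T and defender payoff x D y^T."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(x @ A @ y), float(x @ D @ y)

# With pure strategies (degenerate mixed strategies) the form picks one cell.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.array([[5.0, 6.0], [7.0, 8.0]])
payoffs = expected_payoffs([1, 0], [0, 1], A, D)  # cell (0, 1): (2.0, 6.0)
```

Mixed strategies simply weight every cell by $x_i y_j$, which is what the double sum in (4.5) expresses.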
As discussed in [238], if the expected payoff value of the attacker is v(a), then against every pure strategy of the defender the attacker's expected payoff must equal v(a). For this and equation (4.4) to hold simultaneously, the coefficients of $x_i$ must be at most v(a); since $x_i > 0$, these coefficients must be equal to v(a), with

$$x_1 + x_2 + \cdots + x_z = 1$$

Hence, in matrix form, where J is the z-vector $(1, 1, \ldots, 1)$:

$$A y^T = \begin{pmatrix} v(a) \\ \vdots \\ v(a) \end{pmatrix} = v(a) \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} = v(a) J^T$$

We will have

$$y^T = v(a) A^{-1} J^T \qquad (4.6)$$

Since the sum of the components of y, i.e., $y J^T$, must be 1 (or $J y^T = 1$), we can write

$$v(a) J A^{-1} J^T = 1 \implies v(a) = \frac{1}{J A^{-1} J^T}$$

$$y^T = \frac{A^{-1} J^T}{J A^{-1} J^T} \qquad (4.7)$$

Since $A^{-1} = \frac{A^*}{|A|}$, y can be written as

$$y = \left( \frac{A^* J^T}{J A^* J^T} \right)^T \qquad (4.8)$$

Similarly, if the expected payoff value of the defender is v(d), then since $y_i > 0$ and $y_1 + y_2 + \cdots + y_z = 1$, we will have

$$x = v(d) J D^{-1} \qquad (4.9)$$

$$x = \frac{J D^*}{J D^* J^T} \qquad (4.10)$$

Hence, for an $n \times n$ bimatrix game, the unique equilibrium strategies for the defender and the attacker are given by equations (4.8) and (4.10), respectively, and the expected payoffs are

$$v(a) = \frac{|A|}{J A^* J^T}, \qquad v(d) = \frac{|D|}{J D^* J^T} \qquad (4.11)$$
Table 4.4: Quantification of actions for the AS Security Game
This section presents a case study to validate the applicability of the game pro-
posed. We consider an autonomous system with three modules and quantify the cost
of the attacker and the defending action taken by the system, as shown in Table 4.4.
Table 4.5 shows the payoff matrix of the game, taking the best-case scenario for the
attacker where all the attacks are successful. Equation (4.3) calculates the payoffs of
the players.
For both the attacker's and the defender's strategy $S_1^{a/d} = \{1, 0, 0\}$, the possible outcomes are $\{(H_3, H_0, H_0), (H_4, H_0, H_0)\}$. For this particular example, using the probabilities in Table 4.4, the attacker's payoff is −5. If the defender's strategy is $S_2^d$ against the attacker's strategy $S_1^a$, the probability of a successful attack on the undefended module is $p_k = 1$. The rest of the cells of the bimatrix are calculated likewise. From the payoff matrix [A, D] (Table 4.5),
$$A = \begin{pmatrix} -5 & 5 & 5 \\ 10 & -10 & 10 \\ 15 & 15 & -15 \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} 54 & 40 & 38 \\ 34 & 50 & 28 \\ 24 & 20 & 48 \end{pmatrix}$$

$$J D^* = (360, 400, 340), \qquad (A^* J^T)^T = (250, 400, 450)$$

And the expected payoffs of the attacker and the defender are $v(a) = 30/11 \approx 2.73$ and $v(d) = 412/11 \approx 37.45$, respectively.
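The closed forms (4.8)-(4.11) can be checked numerically for this example; a sketch using NumPy, forming the adjugate as $\mathrm{adj}(A) = |A| \cdot A^{-1}$:

```python
import numpy as np

# Payoff matrices of the case study (Table 4.5).
A = np.array([[-5, 5, 5], [10, -10, 10], [15, 15, -15]], dtype=float)
D = np.array([[54, 40, 38], [34, 50, 28], [24, 20, 48]], dtype=float)
J = np.ones(3)

def adjugate(M):
    # adj(M) = det(M) * inv(M) for an invertible matrix M
    return np.linalg.det(M) * np.linalg.inv(M)

A_star, D_star = adjugate(A), adjugate(D)

y = (A_star @ J) / (J @ A_star @ J)        # Eq. (4.8): defender's equilibrium mix
x = (J @ D_star) / (J @ D_star @ J)        # Eq. (4.10): attacker's equilibrium mix
v_a = np.linalg.det(A) / (J @ A_star @ J)  # Eq. (4.11): v(a) = |A| / (J A* J^T)
v_d = np.linalg.det(D) / (J @ D_star @ J)  #             v(d) = |D| / (J D* J^T)

# x = (360, 400, 340)/1100, y = (250, 400, 450)/1100,
# v(a) = 30/11 ≈ 2.73, v(d) = 412/11 ≈ 37.45
```

Running this reproduces the vectors $J D^* = (360, 400, 340)$ and $(A^* J^T)^T = (250, 400, 450)$ quoted above, with both mixed strategies summing to one.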
Figure 4-3 shows the variation of expected payoffs of the players with qk . If the
value of qk is changed to 0.5 for the Perception module (module #1) - this will change
the value of the first cell of the payoff matrix (-5, 54) to (0, 49). The expected payoff
of the attacker would increase to 3.5, with minimal change for the defender (payoff =
37.64). When the value of qk for the Control module (module #2) is changed to 0.5,
the attacker’s payoff is 4.28, and the defender’s payoff decreases to 34.85. This shows
that the control module needs to be better defended than the perception or cognition
modules. Subsequently, the defender could analyze the payoffs for various scenarios
and decide to distribute and prioritize the resources among the modules accordingly.
Finally, the attack cost and probability of a successful attack are estimated.
Figure 4-3: Variation in Expected Payoffs of the players with probability of
successful attack (qk ).
In summary, this chapter modeled the strategies of the attacker and the defender, evaluated the cost of damage, and presented a matrix method for calculating the Nash equilibrium for an n × n bimatrix game. For simplicity, we analyzed a game where both players have three strategies; the game considers attack/defense on only one module at a time. Future work would include combined attacks and defenses across multiple modules.
Chapter 5
Simulation
In this chapter, we evaluate the game model using several simulation experiments involving the strategies of the attacker and the defender. These strategies represent a variation in specific variables in different scenarios and show the feasibility of applying our proposed game theory-based approach. An existing simulation environment, 'Veins', was used to implement the attack. Veins is a vehicular network simulation framework that couples a road traffic simulator called SUMO (Simulation of Urban Mobility) with OMNeT++. The
demo simulation scenario (RSUExampleScenario) was used, which simulates a stream of vehicles originating at one network section and shows how the network responds. This chapter also covers the simulation setup and the Gambit tool used for NE identification. The VANET Communication subsec-
tion talks about the communication services, protocols, and terms used in the Veins
Simulator. The Veins Simulator subsection scratches the surface of the simulator to
get a basic idea of how it works and explains the working of the TracIDemoRSU11p
VANETs enable communication between cars and roadside infrastructure for safe driving in various conditions such as high-speed traffic. VANET applications are divided into safety and non-safety applications. Safety applications broadcast a beacon, usu-
ally at a frequency of 3-10 times per second, to vehicles in the direct communication
range of approximately 300 meters. The beacon contains vehicle status informa-
tion such as velocity, location, etc. A safety application beacon sent from an RSU
may warn about icy road conditions, an accident, a stop sign/vehicle ahead, etc.
Non-safety applications are used for commercial and entertainment purposes such as infotainment. Dedicated Short-Range Communication (DSRC) provides communication services based on the IEEE 802.11p standard. The transceivers using DSRC services would be On-Board Units (OBUs) mounted on the vehicle or portable units, and Road-Side Units (RSUs) mounted on roadside infrastructure or a stationary vehicle [247]. According to the DSRC communication architecture, the PHY and
MAC layers use IEEE 802.11p Wireless Access for Vehicular Environments (WAVE).
Most of the communication happens over the air with no routing in a vehicular net-
work. So, IEEE 1609 Working Group (WG) defined a new Layer 3 protocol called the
WAVE Short Message Protocol (WSMP) for Network and Transport Layer along with
the support of Internet Protocol IPv6, Transfer Control Protocol (TCP), and User
Datagram Protocol (UDP). Hence, single-hop messages that don’t require routing use
WSMP, while multiple-hop packets use IPv6 + TCP/UDP. WSMP packets come in several formats, the most important being the Basic Safety Message (BSM). A BSM carries core state
information about the vehicle needed for collision avoidance, such as speed and location. There are currently seven 10 MHz channels in the 5.850-5.925 GHz band, from Ch 172 to Ch 184 with even numbering. Low-bandwidth safety-critical messages are exchanged on a Control Channel (CCH - Ch 178) that devices tune to regularly. The other six channels are Service Channels (SCH) used to access any advertised services. BSMs are sent on Ch 172. Devices can contact each other during a CCHInterval, exchanging information about the services being offered in the nearby area and indicating the SCH to use.
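The channel plan above follows the usual IEEE 802.11 numbering, where the center frequency in MHz is 5000 + 5 × channel number; a quick sketch:

```python
# DSRC channels 172-184 (even numbers); center MHz = 5000 + 5 * channel.
CCH = 178                       # control channel
BSM_CHANNEL = 172               # channel carrying Basic Safety Messages
channels = {ch: 5000 + 5 * ch for ch in range(172, 185, 2)}
# e.g. channels[178] == 5890 (MHz), channels[172] == 5860 (MHz)
```

This yields the seven 10 MHz channels spanning the 5.850-5.925 GHz band described above.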
and SUMO. OMNeT++ is a network simulator, while SUMO is a road traffic simula-
tor. Traffic Control Interface (TraCI) enables the communication between Omnet++
and SUMO using TCP-based client/server architecture where SUMO acts as a server.
TraCI allows creating and adding a new vehicle to the queue and retrieving information
about the vehicles to execute the next step. The coordinates obtained by TraCI
are used to update the locations of the vehicles in OMNeT++. This suite provides a
good tool for simulating interactive vehicle simulations. Figure 5-1 shows a snapshot
of the Veins simulation environment for an example scenario.

Figure 5-1: Veins (OMNeT++) Environment

A Veins scenario includes, among other modules, a controller for obstacles and a
TraCI scenario manager. The figure shows five vehicles
represented as nodes and an RSU denoted by a yellow diamond. The circle around
each node and RSU is the range within which they can communicate, called the maximum
interference distance (maxInterfDist).

The Veins demo scenario uses the map of the University of Erlangen-Nuremberg
that simulates 194 vehicles leaving the Computer Science building. The number of
vehicles can be modified in the erlangen.rou.xml file. The vehicles are annotated as
nodes in the OMNeT++ simulation environment. The communication between the vehicles
and RSUs is based on the DSRC communication protocol. The TraCIScenarioManager
handles the creation and insertion of a vehicle into the queue and the position
update of the vehicle. It keeps track of the number of vehicles in the simulation en-
vironment, network boundaries, and entry and exit of vehicles. TraCIDemoRSU11p
is the application layer module. When the simulation starts, the TraCIDemoRSU11p
module initializes RSUs and nodes. If the service channel is not enabled, all the
data are sent to the control channel (CCH). This is stage zero, the initialize stage. In
the next stage (stage 1), the TraCIScenarioManager connects to the TraCIServer, and each
RSU sends one beacon per second if the sendBeacons flag is set to
true. So, for a simulation time of 1000s, the total number of BSMs an RSU sends
is 1000. If any node is within the maxInterfDist, it will receive the beacon from the
RSU and other nodes. The default maxInterfDist is 1000m. For our simulation, we
set it to 250m. The first node starts after a few seconds for proper synchronization.
The handlePositionUpdate() function checks the speed of the vehicle. If the vehicle's
speed is less than 1 m/s for more than 10s, an accident is assumed. The accident for
node[0] happens at 73s of the simulation. According to the map used, the nodes
are on a single lane and about to enter a two-lane road. If the accident occurs 2s
later, the nodes will be on a two-lane road, and the other cars will easily use the
other lane to bypass the accident. Also, the accident location is within the range
of the RSU, which broadcasts the accident information. In that case, an alternative
route is calculated using the Dijkstra shortest path algorithm [249]. Otherwise, the
knowledge base is updated. If channel switching is on, it starts a WSA event (Traffic
Information Service).
The packets generated by the vehicles and RSU(s) (beacons, WSM, WSA) are sent
to other vehicles and RSU as AirFrames. The received AirFrame signal is processed
by a Decider (Decider80211p). The decider checks if the network interface card (nic)
detected the frame. It will be detected only if its power is greater than the minPower
level. It also checks if the signal was received while transmitting. If that’s the case,
the packet will not be accepted and will be considered as a TxRxLostPacket. If the
channel is idle, it locks the frame and decodes it. Otherwise, if the channel is BUSY,
the frame is treated as interference. Once the Decider entirely receives the signal, it
checks if there was any bit error considering the bitrate, bandwidth, minimum signal
interference, and payload bitrate. If the header or the payload of the packet has a
bit error and could not be decoded even without interference, it is declared a collision
and counted by the variable nCollision. If the packet has no bit errors, it is forwarded to the MAC layer
for further processing. These packets are counted by the variable ReceivedBroadcasts
which is the sum of all the three types of packets: beacons, WSM, and WSA.
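The reception logic above can be outlined as follows; this is an illustrative Python sketch, not the actual C++ Decider80211p code, and the function and counter names are assumptions for exposition:

```python
# Hedged sketch of the AirFrame classification described in the text
# (the real Decider80211p lives in Veins/C++; names here are illustrative).
def process_airframe(power, min_power, transmitting, channel_idle,
                     decodable, counters):
    """Classify one received AirFrame and update the statistics counters."""
    if power <= min_power:
        return "not detected"             # below the nic's detection threshold
    if transmitting:
        counters["TxRxLostPackets"] += 1  # signal arrived while we were sending
        return "tx/rx lost"
    if not channel_idle:
        return "interference"             # channel BUSY: frame treated as noise
    if not decodable:
        counters["nCollision"] += 1       # header/payload bit error
        return "collision"
    counters["ReceivedBroadcasts"] += 1   # sum over beacons + WSM + WSA
    return "forwarded to MAC"

counters = {"TxRxLostPackets": 0, "nCollision": 0, "ReceivedBroadcasts": 0}
process_airframe(power=-80, min_power=-90, transmitting=False,
                 channel_idle=True, decodable=True, counters=counters)
```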
5.1.3 Gambit
Gambit is a software tool that constructs and analyzes finite, non-cooperative
games. It has a graphical user interface (GUI) for building games in extensive or
normal forms. Gambit was implemented initially in BASIC in the mid-1980s and
was later rewritten in C++. Over the years, significant contributions have been
made to the GUI, portability across platforms, and the addition of algorithms for the
computation of equilibria. Gambit is cross-platform and can run under multiple operating
systems: Windows, Mac, and Unix. It is also available as a Python extension. The current version is
Gambit 16 [250]. The GUI provides a development environment for small to medium
games. It has the feature of calculating the dominant strategy, Nash Equilibrium, and
Quantal Response Equilibria (QRE) of a game. As the size of the game increases,
the computation time required for the analysis also increases rapidly.
The payoffs in the game can be entered as decimal numbers or as rational numbers.
Gambit supports two file formats for representing games. Extensive-form games are
represented as a tree and stored with the .efg extension, while strategic-form games are
displayed as tables and use the .nfg extension. Since Gambit 12, the graphical interface
also supports a new file format, the "Gambit workbook" with .gbt extension, which stores
additional information such as the layout of the game tree and the colors assigned to the players.
The game is played between an attacker and a defender, $\mathcal{N} = \{a, d\}$; $S^j$ is the strategy space and $U^j$ the utility for player $j \in \mathcal{N}$. The strategy
space $S^t = \{S_i^t \mid t \in \mathcal{N},\ i \in 1 \ldots z\}$ is the action set of all the possible strategies of the players.
While we are motivated to model the game based on major architectural modules,
this raises two challenges:

• The attacker/defender has to reason over different attacks and their countermeasures.

• The attacker/defender must consider the various parameters affecting its attack/defensive measures.
We need a more elaborate model to address this challenge, or we can model the game at
a more granular level and then level up. So, for now, we model the game at the lowest
level of granularity. Figure 5-2 shows the process flow of selecting the strategy and
then representing the game through the payoff matrix. The attacker has to reason about
which module to attack and the type of attack it could execute against the defender
strategy, which can cause maximal damage with minimum cost.

Figure 5-2: Game Modeling

The strategy is
selected based on Table 4.1 [251] and the attack model of Figure 2-2 [252]. There are some
parameters associated with any attacks or defense measures. The result of the attack
can vary by changing the values of such parameters. The same goes for the defender,
who has to choose the parameters to counter or minimize the damage. The strategy
space ($X_m$, $Y_n$ for the defender and the attacker, respectively) grows combinatorially
with m and n, the number of defender and attacker parameters, respectively. Pure-strategy/mixed-
strategy Nash Equilibrium (NE) from each payoff matrix is the payoff for the final
matrix. The NE from the final payoff matrix gives the best strategy for the attacker
and the defender. The process becomes cumbersome as the variables increase. We
can automate the process after selecting the parameters to reduce the manual effort.
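Once the parameters are selected, the payoff-matrix construction can be automated; a minimal sketch, where run_simulation is a hypothetical stand-in for a Veins run that returns the (attacker, defender) payoff of one parameter combination:

```python
# Sketch of automated payoff-matrix construction; run_simulation is a
# hypothetical callback (the real payoffs come from Veins simulation runs).
def build_payoff_matrix(attacker_values, defender_values, run_simulation):
    """One simulation per (attacker, defender) parameter combination."""
    return [[run_simulation(a, d) for d in defender_values]
            for a in attacker_values]

# Example with a stub in place of the simulator:
stub = lambda a, d: (-a * 10, d * 100)
matrix = build_payoff_matrix([1, 2], [20, 40], stub)
print(matrix)   # [[(-10, 2000), (-10, 4000)], [(-20, 2000), (-20, 4000)]]
```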
5.2.1 Strategy Selection
an attacker’s vehicle. The adversary can launch several attacks on VANET to com-
promise the network’s availability, confidentiality, integrity, and resources. The most
common attack that can be launched is a DoS/DDoS attack, where the communication
channel is overwhelmed so that authentic messages do not reach their destinations. Such an attack intends
to jam the transmissions with a high-power interference signal. Traffic disruption, data loss, and
high battery consumption will also fulfill the attacker's goal. The DoS attack could be
distributed: multiple attackers may focus on the same network, with an increased chance of success.
The device that sent the message would never know that the real message is lost. Such an
attack can be launched from either an RSU or a vehicle(s) or, in an extreme case,
Table 5.2: Parameter selection
Attack Parameters Defend Parameters
Beacon Interval(BI) Bit Rate (BR)
Message Payload (MP) Vehicle Density (VD)
No. of RSUs (NumRSU)
both. We have to find the parameters in the Veins network that would impact the
transmission of messages.
network congestion, resulting in packet delay and loss from genuine nodes. The
and route by rsu[0]. The attacker can cause a fake accident on single lanes and
We considered the scenario of an actual accident where the location of multiple RSUs
is the same as the real one. We varied different RSU parameters to find the ones that
would favor the attacker, and the nodes' (vehicles') parameters for the defender strategies.
Vehicles receive packets from RSUs and other vehicles within their maxInterfDist range.
These packets are sent as broadcast messages which can potentially be received by
many other vehicles. The number of vehicles receiving the messages would vary
depending on the location of the vehicles in that range. Given the scenario depicted
in Figure 5-3, vehicle A is out of the range of the RSU. So a broadcast sent by
an RSU might be successfully received by all the other four vehicles (B, C, D, E).
Vehicle C can send and receive messages to/from vehicles B, D, E, and RSU. But
for A, only B is within its range. The number of other vehicles within range of the
RSU or a vehicle will constantly change as the vehicles move. A simulation where
the RSU sends only 1 packet might record 3 successful receptions and 1 packet loss.
The standard equation that calculates the packet loss rate as Packets Lost/Packets Sent (here
1/1 = 100%) may not be reasonable. It makes more sense to calculate the rate as
Packets Lost/(Packets Lost + Received Broadcasts), the loss relative to all broadcasts a node could have received.
The most straightforward attempt for the attacker to create traffic congestion is to
increase the traffic data exchange between vehicles and the infrastructure. Table 5.3
Table 5.3: Percentage Packet Loss with Beacon Interval
BI Avg Packet Loss Avg Received Broadcasts %PacketLoss
1s 146 4995 2.8
0.1s 124 5132 2.4
0.01s 220 8726 2.5
0.001s 554 48140 1.1
0.0001s 1022 113783 0.9
and the corresponding plot 5-4 show the behavior of decreasing the interval between
two messages (BI) on packet loss rate. An incident on the road triggers additional
safety messages (WSM and WSA) from the vehicles and the RSU within a circular
area, resulting in a high amount of messages received, decreasing the packet loss
rate. However, the graph for minimum speed at BI of 1s, Figure 5-6, shows that
the accident happened for node[0], and some of the cars (around 30 cars) had to
stop for a while till the accident duration was over. On plotting the minimum speed
reached for the vehicles with respect to varied BI (Figure 5-5), we see that some of
Figure 5-6: Packet Loss variation analysis with respect to Beacon Interval
the vehicles (nodes 21-24) didn’t stop for other BIs compared with the plot for BI
1s. This could cause more accidents. Also, the number of stopped/slowed vehicles
increases (speed ≈ 0) when the traffic jam should have been resolved on the route.
Also, plotting the packet loss for individual vehicle nodes (plot 5-6) shows that
the packet loss for vehicle nodes stuck in the traffic jam increases with a decrease in
beacon interval. This abrupt behavior of the traffic works in the attacker’s favor.
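The %PacketLoss column of Table 5.3 can be recomputed from the averages above with the rate definition chosen earlier:

```python
# Recomputing the %PacketLoss column of Table 5.3 as
# lost / (lost + received broadcasts).
def pct_packet_loss(lost, received):
    return 100.0 * lost / (lost + received)

table_5_3 = {            # BI: (avg packet loss, avg received broadcasts)
    "1s":      (146, 4995),
    "0.1s":    (124, 5132),
    "0.01s":   (220, 8726),
    "0.001s":  (554, 48140),
    "0.0001s": (1022, 113783),
}
for bi, (lost, rx) in table_5_3.items():
    print(bi, round(pct_packet_loss(lost, rx), 1))  # 2.8, 2.4, 2.5, 1.1, 0.9
```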
the headers, which may carry various types of data, including velocity, position,
and hazard information. An increase in beacon size may contribute toward channel
Figure 5-7: Message Payload vs Packet Loss
saturation and more processing time for the vehicles [254]. For this reason, we select
the beacon payload size as a parameter and vary it to see the impact on the packet loss
rate. We refer to it as Message Payload (MP) to easily
relate to further simulation and results analysis. Keeping all the other parameters
to the default value, the increment in % packet loss with respect to message payload
is small, as shown in Table 5.4 and the corresponding plot 5-7. The packet loss
distribution for each vehicle node, as shown in plot 5-8, has a similar trend.
Figure 5-8: Packet Loss Distribution with respect to MP
An artist in Berlin tricked Google Maps into creating a virtual traffic jam by
using a handcart full of smartphones [255]. On the same notion, we assumed that the
attacker could put portable RSUs at the same location to hide the RSU location data
from the vehicles. Different RSU location data within a certain range can be used to
alert the vehicles of fake messages. There is an increase in the % packet loss with an
increase in the number of RSU as shown in Table 5.5 and the corresponding plot 5-10,
keeping all the other parameters at their default values. A DSRC RSU's capital cost per unit
includes the cost of the device ($1300), installation ($850), and configuration ($2000). A portable
RSU would cost less as it won't need any installation. If the budget of the attacker
permits, they would prefer to have a higher number of RSUs for a successful attack.
The following parameter, Bit Rate (BR), was challenging. Veins follows the IEEE
802.11p standard, which supports transmission rates from 3 to 27 Mbps (data rate) over a
Figure 5-9: No. of RSU vs Packet Loss
the valid data rates supported by Veins are 3Mbps, 4.5 Mbps, 6Mbps, 9Mbps, 12Mbps,
18Mbps, 24Mbps, and 27Mbps. Advanced WiFi protocols offer data rates much
greater than 27Mbps. Assuming an adaptive bit rate, we evaluated packet loss by
keeping the bit rate of RSUs to 3Mbps while varying the bit rate of the vehicles to
(6Mbps, 9Mbps, 12Mbps, 18Mbps, 24Mbps, and 27Mbps). The plot 5-11 shows an
increase in packet loss rate with an increase in bit rate. According to the plot 5-12,
the higher the bit rate of the vehicles, the higher the % packet loss for a low count of
RSUs. But, as the number of RSUs increases, the % packet loss increases even for
the low bit rate. One more thing to notice here is that the % packet loss converges at a
bit rate of 24Mbps and then follows a similar trend for any number of RSUs. It will
be the best advantage for the defender if the bit rate of the vehicles is low. A low bit
rate will favor the attacker if they have a higher number of RSUs.
The attacker aims to disrupt the maximum number of vehicles for maximum harm. When the traffic is reduced or diverted to a different route, it can
Figure 5-10: Packet Loss Distribution with respect to NumRSU
Figure 5-12: Packet Loss with respect to Bit Rate and Number of RSUs
reduce the attack’s impact. Assuming that the vehicles identified the malicious RSUs
and alerted the other vehicles to change route could also help reduce the damage
caused by the DDoS attack. So vehicle density is selected as the next parameter.
We varied transmission power and min power level without any significant differ-
ence in the packet loss at default settings. We selected beacon interval (BI), message
payload (MP), and number of RSUs (NumRSU) as attacker parameters that the at-
tacker can tune. In contrast, bit rate (BR) and vehicle density (VD) were selected as
defender parameters that could be controlled by the vehicles or the VANET network infrastructure.
In game theory, each strategy results in a payoff to the players. According to our
previous work [251], Table 4.2 enumerates the possible scenarios of a successful
attack and the probabilities associated with each scenario, where $m_k$, $m_l$ represent
the modules the attacker decides to attack and the defender decides to defend,
respectively. The equations from Chapter 4 are repeated here for the reader's convenience.
Therefore, for each module k, the probability of each case is given by:

$$
b_k = \begin{cases}
p_k, & \text{if } m_k \neq m_l,\ a = 1 \\
1 - p_k, & \text{if } m_k \neq m_l,\ a = 0 \\
q_k, & \text{if } m_k = m_l,\ a = 1 \\
1 - q_k, & \text{if } m_k = m_l,\ a = 0
\end{cases} \tag{5.1}
$$
Let $C_k$ be the cost of damage incurred by the successfully attacked module.
Suppose the attacker plans strategy $S_4^a = \{0, 1, 1\}$ and the defender plans $S_2^d = \{0, 1, 0\}$. Several cases can then arise
if the game of attack and defend is played, where H'X' indicates the cases from Table
4.2. The total economic loss for the defender can be calculated as the summation,
over all possible outcomes, of the product of the probabilities of each attacked module
and the total cost of the compromised modules:

$$
W_i = \sum_{s \in S} \left( \prod_{k=1 \,|\, S_i^a(k)=1}^{n} b_k \right) \cdot \left( \sum_{k=1 \,|\, a=1}^{n} C_k \right) \tag{5.2}
$$
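Under the simplifying assumption that module outcomes are independent, the expected loss implied by equations (5.1)-(5.2) reduces to a per-module sum; a sketch with illustrative probabilities and costs (not values from the dissertation):

```python
# Expected-loss sketch for eqs. (5.1)-(5.2), assuming independent modules.
# p[k]: success probability when module k is undefended (m_k != m_l);
# q[k]: success probability when module k is defended   (m_k == m_l);
# cost[k]: damage cost C_k of a compromised module.
def expected_loss(attacked, defended, p, q, cost):
    total = 0.0
    for k in attacked:
        success_prob = q[k] if k in defended else p[k]
        total += success_prob * cost[k]
    return total

# e.g. attack modules {1, 2}, defend only module 1:
loss = expected_loss({1, 2}, {1},
                     p={1: 0.5, 2: 0.8}, q={1: 0.2, 2: 0.6},
                     cost={1: 100.0, 2: 50.0})
print(loss)   # 0.2 * 100 + 0.8 * 50 = 60.0
```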
The payoffs of both the players corresponding to the possible strategies of the
attacker (Sia ) and the defender(Sjd ) (refer to the Table 4.1) for a non-zero sum is
given by:
where $R_D$ is the total cost of the modules, $CA_i$ is the sum of the costs of attack on the
individual modules ($\sum_{k=1}^{n} CA_k$), and $CD_j$ is the sum of the costs of defense on the
individual modules ($\sum_{k=1}^{n} CD_k$). In case the attack is unsuccessful, from equation (4.2), $W_i = 0$.
Playing with different parameters, we found that some could serve as attacker strategies
and some as defender strategies, as shown in Table 5.2. According to equation 5.3, the
game's payoff (attacker's payoff, defender's payoff) or utility function is formulated
as,
delivery/loss ratio. Since each metric involves packets, we quantify the cost of the
players based on the number of packets sent, received, or lost in the network. Since
we are dealing with only the cognition module ($S_2^a, S_2^d = \{0, 1, 0\}$), it is safe to assume
that the module being attacked is the one being defended.
When there is no attack, all the terms in the above equation except for the
SNIRLostPackets are 0.
Therefore, the payoff function is
u = ((-Avg. packets sent by RSU(s)+ Vehicle’s avg. packet loss), (Vehicles’ Avg. Received Broadcasts))
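This payoff tuple can be computed directly from per-run statistics; the argument names below are illustrative, not Veins identifiers:

```python
# The payoff tuple u above, computed from per-run packet statistics
# (a sketch; argument names are assumptions, not Veins output fields).
def utility(avg_pkts_sent_by_rsus, veh_avg_packet_loss, veh_avg_received):
    attacker = -avg_pkts_sent_by_rsus + veh_avg_packet_loss
    defender = veh_avg_received
    return attacker, defender

# e.g. with a nominal 1000 sent BSMs and the BI 1s averages of Table 5.3:
print(utility(1000, 146, 4995))   # (-854, 4995)
```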
Based on the Parameter Selection Table 5.2, there are seven parameter combinations
or strategies for the attacker and three strategies for the defender, which have
been enumerated in Table 5.6. Table 5.6 also indicates the size of each matrix and the structure of the final
payoff matrix. Table 5.7 shows the range of values that we selected for varying each
parameter. So, for BI vs. BR, we calculate payoffs for three values of BI (0.1s, 0.01s,
0.001s) against six BR (6, 9, 12, 18, 24, 27). The ‘+’ sign between the parameters
means that we vary both operands. The size of the payoff matrix increases as we
increase the number of varying parameters. Since the number and size of matrices are
pretty large, we used an open-source graphical user interface called Gambit for game
theory computation. Hence, it is not feasible to include all the 21 payoff matrices
with all the payoff values in the dissertation; the larger tables are represented in a
reduced, compact form.
This section is organized by the number of attacker parameters 'A' that are varied
with respect to the number of defender parameters 'D', as listed below. Each
scenario is labeled with 'A's and 'D's depending upon the number of parameters
being varied for the respective players. The number of scenarios will differ depending on
the number of strategies considered for attack analysis in the real world.
• Scenario AD: both players can manipulate 1 parameter each.

• Scenario AAD: the attacker can manipulate 2 parameters while the defender can defend with only 1.

• Scenario AAAD: the attacker can manipulate 3 parameters while the defender can defend with only 1.

• Scenario ADD: the attacker can manipulate 1 parameter while the defender can defend with 2.

• Scenario AADD: both players can manipulate 2 parameters each.

• Scenario AAADD: the attacker can manipulate 3 parameters while the defender defends with 2, bit rate and vehicle density.
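The seven attacker and three defender strategies are exactly the non-empty subsets of the parameter sets in Table 5.2, which yields the 21 payoff matrices:

```python
# Enumerating the 7 x 3 = 21 parameter combinations of Table 5.2.
from itertools import combinations

def param_subsets(params):
    """All non-empty subsets of a parameter list."""
    return [c for r in range(1, len(params) + 1)
              for c in combinations(params, r)]

attacker = param_subsets(["BI", "MP", "NumRSU"])
defender = param_subsets(["BR", "VD"])
scenarios = [(a, d) for a in attacker for d in defender]
print(len(attacker), len(defender), len(scenarios))   # 7 3 21
```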
5.3.1 Scenario AD
Table 5.8 shows the payoff matrix between the beacon interval and the Bit Rate.
As the beacon interval decreases, the attacker’s payoff is relatively low, and they
would never go for those strategies. And as the bit rate increases, the defender’s
payoff decreases. So, the optimal strategy for both players is (BI 0.1s, BR 12Mbps).
Using the Gambit tool to calculate the Nash equilibrium, the highlighted cell in
the Table 5.8 indicates the pure Nash Equilibrium. For a single RSU with a bit rate
of 3Mbps, the attacker would prefer to set a beacon Interval of 0.1s. It would be
in the best interest of the vehicles to adapt the bit rate to 12Mbps to receive the
maximum number of packets. The Nash Equilibrium and the value of the payoffs for this game are shown in Figure 5-13.
Table 5.9 shows the payoff matrix between beacon interval and vehicle density.
Again, the payoffs of the attacker for BI 0.01s and 0.001s are relatively low with
Figure 5-13: Beacon Interval vs Bit Rate
respect to BI 0.1s. This implies that the attacker would never prefer those strategies,
and we can eliminate rows BI 0.01s and BI 0.001s. For the defender, the payoff for vehicle
density 80 is the highest. Hence, the optimal strategy for the players is (BI 0.1s, VD 80).
Table 5.10 shows the payoff matrix between message payload and bit rate. A
higher bit rate yields a higher payoff for the attacker but reduces the payoffs of the
defender. Ideally, an attacker would prefer to play (MP 800bit, BR 27Mbps), but the
defender has the lowest payoffs for BR 27Mbps. Column BR 9, BR 18, BR 24, and
BR 27 will be eliminated as they are strictly dominated by BR 12. For the attacker,
Figure 5-14: Beacon Interval vs Vehicle Density
MP 80000bit dominates the other strategies. The best response (NE) for the players then follows from these dominant strategies.
We see a similar trend in the payoffs for Message Payload with respect to vehicle
density, Table 5.11, following the same process of eliminating strictly dominated strategies.
It wouldn’t be worth it for the attacker to deploy more than one spoofed RSU,
keeping all other parameters unaltered, especially for the lower bit rate. As seen
from Table 5.12, the defender would never prefer a higher bit rate, and the dominated strategies are eliminated accordingly.
Following a similar trend in the payoffs for the number of RSUs with respect
to vehicle density, Table 5.13, we find that the attacker's payoff is maximum with 2
RSUs. In other words, they are still incurring some loss, which is minimum when the
traffic is either at its full potential or around 50%. For the defender, a VD of 100
works better.
Table 5.13: Payoff Matrix: Number of RSUs vs Vehicle Density
NumRSU VD 20 VD 40 VD 60 VD 80 VD 100
2 -1222,4281 -1020,5101 -1177,5138 -1274,5160 -1020,6804
3 -132601,6056 -142901,4379 -1647349,3825 -148993,3827 -151241,3392
4 -158233,-3167 -185084,-1458 -194815,-689 -196974,-563 -96842,-342
5 -166844,47859 -201764,-6316 -220611,21198 -220341,-3699 -227415,-2884
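The equilibrium of Table 5.13 can be checked mechanically with a brute-force pure-strategy Nash search (the attacker maximizes the first entry of each cell, the defender the second):

```python
# Checking the pure-strategy NE of Table 5.13: rows are NumRSU 2-5,
# columns are VD 20-100, cells are (attacker payoff, defender payoff).
def pure_nash(payoffs):
    rows, cols = len(payoffs), len(payoffs[0])
    return [(i, j) for i in range(rows) for j in range(cols)
            if all(payoffs[k][j][0] <= payoffs[i][j][0] for k in range(rows))
            and all(payoffs[i][k][1] <= payoffs[i][j][1] for k in range(cols))]

table_5_13 = [
    [(-1222, 4281), (-1020, 5101), (-1177, 5138), (-1274, 5160), (-1020, 6804)],
    [(-132601, 6056), (-142901, 4379), (-1647349, 3825), (-148993, 3827), (-151241, 3392)],
    [(-158233, -3167), (-185084, -1458), (-194815, -689), (-196974, -563), (-96842, -342)],
    [(-166844, 47859), (-201764, -6316), (-220611, 21198), (-220341, -3699), (-227415, -2884)],
]
print(pure_nash(table_5_13))   # [(0, 4)] -> (2 RSUs, VD 100)
```

This agrees with the discussion above: 2 RSUs is the attacker's best row, and VD 100 the defender's best column.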
Table 5.14: Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate
MP BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1s -5825,5361 -5879,5071 -5673,6733 -4883,4266 -2399,4028 -3023,2924
800 0.01s -59759,8408 -59544,7940 -59395,11019 -58837,8093 -57390,6949 -57231,6282
0.001s -598611,47364 -598662,47347 -598787,41355 -598129,43060 -596610,43552 -596493,43707
0.1s -5764,5255 -5852,5112 -5613,6566 -4877,4412 -2591,3836 -3287,2556
8000 0.01s -59247,8945 -59358,7834 -59339,8750 -58350,7987 -57096,6779 -1256,7097
0.001s -94819,11183 -94766,10913 -94813,10391 -94095,13092 -91457,9409 -91142,9042
0.1s -5501,5201 -5573,4913 -5467,5514 -4071,6005 -3320,3129 -3590,2099
80000 0.01s -5472,5124 -5544,4600 -5430,5214 -4575,4248 -3209,3098 -3308,2413
0.001s -5435,5174 -5559,4647 -5437,5520 -4062,9901 -3167,3052 -3116,2505
Table 5.14 is a payoff matrix between message payload, beacon interval, and bit
rate. The table clearly shows that the rows BI 0.01s and BI 0.001s for message
payload 800bits and 8000 bits will be eliminated. If we follow the strict dominance
rule on the remaining rows and columns, we are left with a square matrix of rows (8,9) and columns (3,4). The square matrix
has two pure Nash equilibria, (-5430,5214) and (-4062,9901), and one mixed-strategy
equilibrium with an expected payoff of (-5418, 5269). Out of the three, the best response is chosen for the players.
Now, as we generate payoff matrices for other combinations, the size and dimen-
sions of the table start increasing. The most extensive payoff matrix is of size 36x30
for the parameter combination (NumRSU + BI + MP, BR + VD) with 1080 cells.
Figure 5-15: Beacon Interval (80000bit) vs Bit Rate
Scaling this analysis is harder in the real world owing to the diversity of potential parameters that could be used to
attack. Listing such a large number of potential parameters would lead to extremely
large matrices, so, based on the previous data, we reduce the number of strategies that must be
considered when finding an optimal solution [256]. These strategies, when included
in the complete payoff matrix, must get eliminated by the strict dominance rule.
Table 5.15 is a payoff matrix between message payload and beacon interval with
respect to vehicle density. Drawing on the knowledge acquired from Table 5.14 that
BI 0.01s and BI 0.001s for message payloads 800bit and 8000bit yield high losses for
the attacker, we can eliminate those strategies from their plan of attack. The payoff
matrix 5.15 shows the NE at MP 80000bit and BI of 0.1s for 40 vehicles.
The compact representation of the payoff matrix for the number of RSUs and
Beacon Interval with respect to the bit rate, Table 5.16, shows the attacker can
launch an attack with just one spoofed RSU. The 2 RSUs in the simulation indicate
that 1 is the real RSU and the other is the spoofed one with the same parameter
settings so that the intrusion detection algorithms can’t differentiate it. Similarly, for
multiple RSUs, one is the original one, while the rest are spoofed. Again, the payoff
matrix also confirms the previous analysis that a higher bit rate is not a good strategy for the defender.
Table 5.16: Payoff Matrix: Number of RSUs, Beacon Interval vs Bit Rate
RSUs BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1s -7701,5732 -7749,4847 -7020,6450 -6145,5205 -3599,3732 -3134,3211
2 0.01s -59289,10766 -59790,9125 -59534,8687 -57809,8399 -53784,9431 -53904,9491
0.001s -563772,42599 -567723,39843 -565641,40480 -557739,132092 -540864,58878 -537708,60538
0.1s -120728,17839 -72587,24538 -103386,18275 -104748,17423 -3359,4776 -3026,4551
5 0.01s -144567,23727 -147147,22288 -143938,24524 -145981,22786 -38328,13707 -36768,13923
0.001s -432851,31821 -424796,31249 -437122,29786 -415360,31610 -211238,27471 -207017,27298
5.3.2.4 NumRSU + Beacon Interval vs Vehicle Density
From Figure 5-9, we expect a higher number of RSUs to give a higher %packet
loss. We see from previous tables that 2 RSUs resulted in better payoffs for the
attacker. Table 5.17, for the number of RSUs and beacon interval with respect to
vehicle density, agrees with Figure 5-9. It has two pure strategies that yield expected
payoffs (-7416, 6065) and (51180, 147776). If the attacker wants to maximize their
payoff, they will choose to play with 5 RSUs and BI 0.001s. In that case, the best
response for the defender is VD 20.
Table 5.17: Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Den-
sity
RSUs BI VD 20 VD 40 VD 60 VD 80 VD 100
0.1s -7191,5601 -7416,6065 -7845,5828 -7821,5765 -7948,5786
2 0.01s -54815,17336 -57226,13003 -58597,10189 -59264,10459 -568737,46103
0.001s -506574,120054 -542108,77624 -559862,56950 -568737,46105 -568737,46105
0.1s 20066,56571 -85485,37801 -141598,29264 -141739,29196 -162510,25148
5 0.01s 1783,75617 -115647,50047 -178499,35959 -198552,31263 -205593,31184
0.001s 51180,147776 -289407,95388 -465762,68248 -473244,67046 -542937,56054
Tables 5.10 and 5.11 show that the best strategy for the attacker is to set the
Message Payload to 80000bit. From the previous tables, we expect a lower bit
rate to be a better strategy for the defender. Eliminating dominated strategies for the
attacker and the defender in Table 5.18 results in the NE of (2 RSUs, MP 80000bit,
BR 9Mbps).
Table 5.19 has 3 pure-strategy and 2 mixed-strategy NE. The expected
payoffs for the pure strategies are (-1751, 5847), (-1444, 6375), and (24576, 52750), and
Table 5.18: Payoff Matrix: Number of RSUs, Message Payload vs Bit Rate
NumRSU MP BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
800 2115,2189 -2249,5955 -2035,4984 -636,4547 3024,3748 2358,2598
2 8000 -1812,6282 -2609,4715 -2453,4426 -498,3934 3324,4020 2877,3911
80000 -4931,5529 -2034,6128 -3112,4616 -383,4584 1911,2749 3804,3516
800 -98091,17126 -143133,-1557 -142218,-1285 -142808,-1578 604,650 304,369
5 8000 -94588,17825 -140175,-1815 -143832,-1504 -142466,-1523 309,575 363,445
80000 -92850,16520 -145891,-1066 -143237,-1572 -142203,-1639 255,499 469,443
for mixed strategies are (-1644, 6136) and (-1407, 7709). The attacker can randomize
among strategies including (5 RSUs, MP 8000bit), while the defender can randomize among the strategies 20
vehicles, 60 vehicles, or 100 vehicles. Out of the five equilibria, the best response for the players is selected.
The payoff matrix for Number of RSUs, Message Payload and Beacon Interval with
respect to Bit Rate, Table 5.20, has only 1 mixed strategy and no pure strategies. The
expected payoff for the mixed strategy is (-4003, 5831). The attacker can randomize
between their strategies 0.01s and 0.001s for 2 RSUs and 80000bit with probabilities
(1399/1937) and (538/1937), respectively. The defender can also randomize between their strategies.

NumRSU + Message Payload + Beacon Interval vs Vehicle Density
Table 5.21 shows the payoff matrix for changes in the number of RSUs, message
payload, and beacon interval with respect to vehicle density. The payoffs for RSU
count 2 and BI of 0.01s and 0.001s are eliminated as the payoffs are quite low.
Similarly, the payoffs for RSU counts 3 and 4 are low for the attacker and hence
eliminated. The Nash Equilibrium of this game is (5 RSUs, 800bit, 0.001s, VD 20),
with payoffs (112993, 94699).
Table 5.21: Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Vehicle Density
RSUs MP BI VD 20 VD 40 VD 60 VD 80 VD 100
800 0.1 -6917,5531 -7605,5971 -7076,5710 -7571,5697 -7612,5716
2 8000 0.1 -7199,5444 -7019,6206 -7359,5867 -7742,5212 -7698,6047
80000 0.1 -6273,5675 -6469,6874 -7027,5835 -7177,5296 -6521,5346
0.1 20066,56571 -85304,37805 -141441,29347 -141618,29287 -162055,25228
800 0.01 4700,74953 -120151,48142 -180880,34801 -191090,32361 -204476,29444
0.001 112993,94699 -202795,62242 -367484,45400 -370107,45146 -439694,37686
5
0.1 23012,53301 -76617,37623 -132519,27546 -158607,22785 -160121,23845
80000 0.01 27053,53788 -82930,36312 -142563,25765 -154906,23502 -162653,22285
0.001 25388,53399 -76590,37261 -132960,27347 -157105,22949 -167500,21087
5.3.4 Scenario ADD
From Table 5.8, we analyzed that BI 0.01s and 0.001s are dominated strategies
for the attacker that yield very low payoffs. Table 5.22 shows a section of the payoff
matrix for beacon interval with respect to bit rate and vehicle density. The NE is (BI
0.1s, BR 12Mbps, VD 100).
Table 5.22: Payoff Matrix: Beacon Interval vs Bit Rate, Vehicle Density
BR 12
BI VD 20 VD 40 VD 60 VD 80 VD 100
0.1s -5968,1176 -5922,1531 -5917,1405 -5903,1441 -5634,6714
Table 5.23 shows a section of the payoff matrix for BR 6Mbps and BR 12Mbps
for message payload with respect to bit rate and vehicle density. It has 2 pure
strategies and 3 mixed strategies, out of which (-279, 6854) is the best response of
the game.
Table 5.24 shows the reduced payoff matrix for the number of RSUs with respect to
bit rate and vehicle density after eliminating all the dominated strategies. It has two
pure strategies that result in an expected payoff of (-2159,5953) and (16346, 35006)
with % packet loss of 4.7% and 83%, respectively. It also has one mixed strategy with
an expected payoff (-1449, 8461). We select (16346, 35006) as the best response for
the players. Here, we also see that attackers can have a greater attack impact with 5
RSUs, but the number of vehicles they can disrupt is only 20.
The reduced payoff matrix for beacon interval and message payload with respect
to bit rate and vehicle density is shown in Table 5.25. This matrix can be reduced
further by eliminating the dominated rows. The last row (80000bit, 0.001s) has the
smallest negative value (minimum loss) for the attacker against all the defender strategies,
so the other 4 rows can be eliminated. The remaining payoffs show that the defender
has the highest payoff of 5765 when the vehicle density is 100. Hence, (MP 80000bit,
BI 0.001s, VD 100) is the NE.
Table 5.25: Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate, Vehicle Density
BR 9
MP BI VD 20 VD 40 VD 60 VD 80 VD 100
800 0.1s -5963,4303 -5900,4976 -5846,5462 -5867,5271 -5879,5071
8000 0.1s -5925,4642 -5799,5825 -5808,5338 -5795,5302 -5800,5195
0.1s -5630,4248 -5489,6203 -5552,5806 -5537,5523 -5561,5327
80000 0.01s -5783,4019 -5561,5472 -5504,4991 -5522,5093 -5525,4988
0.001s -5337,4340 -5447,5590 -5363,5634 -5447,5590 -5438,5765
Table 5.26: Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Density
BR 12 BR 18
NumRSU BI VD 20 VD 40 VD 60 VD 20
2 0.1s -7389,6014 -6872,6713 -6681,6873 -6635,5410
5 0.1s 11129,38685 -52013,27818 -88069,21495 24507,40613
5 0.01s -4784,494423 -81530,323858 -123716,229042 13929,536381
5 0.001s 127213,83326 -215531,51588 -309610,42393 186332,88672
The reduced payoff matrix for the number of RSUs and beacon interval with respect
to bit rate and vehicle density is shown in Table 5.26. It has two pure strategies with
expected payoffs of (-6081, 6873) and (186332, 88672) with packet loss of 4.7% and
the attacker could go with the strategy of 2 RSUs and 0.1s BI. If the attack goal is
maximum packet loss irrespective of the number of vehicles, they could go with the
other strategy (5 RSUs, 0.001s BI, BR15). Here, the best response for the defender
The NE for the payoff matrix for the number of RSUs and message payload with
respect to bit rate and vehicle density, which yields a packet loss of 84%, is (5 RSUs,
8000 bits, BR 18 Mbps, 20 vehicles), as shown in Table 5.27. This table shows the payoff
Table 5.27: Payoff Matrix: Number of RSUs, Message Payload vs Vehicle Density
VD 20
NumRSU MP BR 6 BR 9 BR 12 BR 18
5 800 15567,34520 15328,34420 16345,35006 17040,34297
5 8000 15482,34918 15786,34474 15493,34425 31008,36759
5 80000 16432,34730 14676,34420 17607,34674 16054,34181
Table 5.28 shows a section of the payoff matrix for the number of RSUs, message
payload, and beacon interval with respect to bit rate and vehicle density. The full
payoff matrix is quite large; the game has two pure-strategy and 13 mixed-strategy NE.
Out of the 15 equilibria, the NE (5 RSU, 800 bit MP, BI 0.001s, BR15, 20 VD) yields
Table 5.28: Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate, Vehicle Density
VD 20
NumRSU MP BI BR 6 BR 9 BR 12 BR 18
5 800 0.1s 10053,38148 10968,38417 11129,38685 24507,40613
5 800 0.01s 12255,60494 -4030,56151 -3709,55876 14864,59969
5 800 0.001s 72841,60327 -2432,52082 77770,60939 117956,64819
The final payoff matrix is shown in Table 5.29. The Nash equilibrium is selected
from the respective payoff matrix based on each strategy. In general, there could be
any number of equilibria. The best-case scenario is to have a single Nash equilibrium.
In cases where there is more than one equilibrium, we choose the one that is better for
both players. In a payoff matrix that has a mixed strategy, we evaluate the
expected payoffs of the players and compare them with the other equilibria, if any. The
equilibrium with the best response for each player is selected for the final payoff matrix. For
example, for the payoffs of BI vs. BR and BI vs. VD, we look at Table 5.8 and Table 5.9.
Both tables have one equilibrium, which is selected to populate the final matrix.
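The selection rule described above, comparing expected payoffs and keeping the equilibrium that is better for both players, can be sketched as follows. The game and its equilibria are illustrative, and the "better for both" rule is approximated here by the highest combined payoff:

```python
def expected_payoffs(A, B, p, q):
    """Expected (attacker, defender) payoffs under mixed strategies p, q."""
    ea = sum(p[i] * A[i][j] * q[j] for i in range(len(p)) for j in range(len(q)))
    eb = sum(p[i] * B[i][j] * q[j] for i in range(len(p)) for j in range(len(q)))
    return ea, eb

def pick_equilibrium(A, B, equilibria):
    """Keep the equilibrium whose payoff pair is best for both players;
    the 'better for both' rule is approximated by the highest combined
    expected payoff (ties resolved in favor of the first candidate)."""
    scored = [expected_payoffs(A, B, p, q) for p, q in equilibria]
    best = max(range(len(scored)), key=lambda i: scored[i][0] + scored[i][1])
    return equilibria[best], scored[best]

# Hypothetical 2x2 game with two pure equilibria and one mixed one.
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
eqs = [([1, 0], [1, 0]),               # pure: (row 0, col 0)
       ([0, 1], [0, 1]),               # pure: (row 1, col 1)
       ([2/3, 1/3], [1/3, 2/3])]       # mixed equilibrium of this game
eq, payoffs = pick_equilibrium(A, B, eqs)
print(payoffs)  # -> (2, 1)
```

The mixed equilibrium here yields only (2/3, 2/3) for the two players, so one of the pure equilibria is selected, mirroring how the expected payoff of a mixed strategy is compared against the other equilibria in the text.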
Now, let’s have a look at the attacker’s strategy BI+MP with BR as the defender’s
strategy in Table 5.29. The payoff matrix (Table 5.14) for this strategy has two pure
Nash equilibria. The attacker can also randomize their strategy with probabilities
(4381/5347, 966/5347) over MP of 80000 bit with BI of 0.01s and 0.001s. In response,
the defender can randomize between their strategies (BR 12 Mbps, BR 18 Mbps) with
probabilities (513/520, 7/520). The expected payoff for the mixed strategy is
(-5418, 5269). The best one out of the three equilibria is (-4062, 9901). Once the
payoffs for all the strategies are populated in the final matrix, the Nash equilibrium
is calculated as (112993, 94699).
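The mixing probabilities above follow from the standard indifference conditions of a 2x2 game: each player mixes so that the opponent is indifferent between their two pure strategies. A minimal sketch with hypothetical payoffs (not the Table 5.14 values):

```python
from fractions import Fraction

def mix_2x2(A, B):
    """Mixed equilibrium of a 2x2 bimatrix game from the indifference
    conditions: the row player's mix p makes the column player
    indifferent between columns (so it is computed from B), and the
    column player's mix q makes the row player indifferent (from A)."""
    # p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
    p = Fraction(B[1][1] - B[1][0], (B[0][0] - B[1][0]) - (B[0][1] - B[1][1]))
    # q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
    q = Fraction(A[1][1] - A[0][1], (A[0][0] - A[0][1]) - (A[1][0] - A[1][1]))
    return (p, 1 - p), (q, 1 - q)

# Hypothetical payoffs: the equilibrium is p = (3/4, 1/4), q = (1/2, 1/2).
p, q = mix_2x2([[3, 0], [1, 2]], [[2, 1], [0, 3]])
print(p, q)
```

Applying the same conditions to the 2x2 sub-matrix behind Table 5.14 should recover probabilities of the same form as the 4381/5347 and 513/520 values quoted above.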
The payoffs for attacker strategies (rows 1, 3, and 4) are strictly dominated by row 2, so
these rows are eliminated. In the defender’s turn, they eliminate column 1, as
it is strictly dominated by columns 2 and 3. The attacker then eliminates rows
2 and 6. The remaining defender payoff for strategy BR+VD (88672, 64819) is
lower than their payoff for strategy VD (147776, 52750). This eliminates strategy
BR+VD. Now, the attacker is left with two payoffs (51180, 112993). The payoff for
this game. The surface plot for the final payoff matrix shown in figure 5-16 shows the
Table 5.29: Final Payoff Matrix for Attack/Defense Strategies
Attacker/ Defender BR VD BR + VD
BI -5634,6514 -5820,5293 -5634,6714
MP -388,4944 -419,5800 -279,6854
NumRSU -2159,5953 -1020,6804 16346,35006
MP + BI -4062,9901 -5653,5574 -5438,5765
NumRSU + BI -7020,6450 51180,147776 186332,88672
NumRSU + MP -2034,6128 24576,52750 31008,36759
NumRSU + MP + BI -4002,5831 112993,94699 117956,64819
effectiveness of defense mechanisms against those attacks. Many potential threats can
be generated by tweaking just a few parameters in the real world, and simulating such
a scenario makes it easy to visualize and experiment with those parameters. Our work
explored the idea of applying game theory to the simulation of the DoS/DDoS attack:
we varied different parameters for the attacker and the defender, evaluated the payoffs,
and then analyzed the results to find the optimal strategy for both players. Table 5.30
summarizes the players’ strategies with the respective equilibria. It also shows the
percentage packet loss in each scenario. The ‘Attack Strategy’ and ‘Defense Strategy’
columns show the values of the parameters that yield the best response
when those parameters are varied. The rest of the parameters are at their default
values. There are four cases in which packet loss is above 85%. The highest packet
18Mbps. We would expect the attacker to choose this strategy. The defender would
strive to receive as many packets as possible to lower the impact of the DoS attack,
hence going for a higher payoff, which could be achieved by reducing the traffic to
20 vehicles. In turn, the attacker would want to achieve the maximum payoff by varying
the message payload along with RSUs and BI, which could be achieved with 5 RSUs,
800 bit MP, and BI 0.001s when the vehicle density is 20. The average number of packets
lost under this strategy is 848577, which yields a packet loss of approximately 90%. This shows that an
Figure 5-16: Attacker Strategies vs Defender Strategies (Final Payoff)
attacker would need 5 RSUs to launch a successful DoS/DDoS attack on the vehicular
network. With the defender’s strategy in action, the impact of the attack could be reduced
to only 20 vehicles instead of 100. If a vehicle receives a large number of packets, it
can alert other vehicles to change routes and minimize the traffic as much as possible.
The attacks where packet loss is low can be used to disrupt the traffic, as shown in
Figure 5-5. A low packet loss percentage also indicates that the impact of a DoS/DDoS attack
can be maintained without serious damage. The packet loss percentage for the number of RSUs,
message payload, and beacon interval with respect to vehicle density is not feasible to
calculate, as the expected payoff from the mixed strategy lacks the information about
Table 5.30: Attack/Defend Strategy and Packet Loss at Equilibrium with
respect to Payoff Matrices
This chapter explores the interaction between an attacker and driverless vehicles
attacker strategies, we can assess the security of a system, i.e., the extent to which
defenses will protect the system against a specific set of threats. Modeling a game for a
the game further. The application of the bottom-up approach in solving the game
this could mean that they can avoid the deeper technicality of the game and efficiently
Chapter 6
This chapter concludes the dissertation with a summary of the contributions and
discusses the limitations of this work, which could be addressed in future work.
6.1 Conclusion
With the advancement in artificial intelligence (AI) and machine learning (ML) in
the last few decades, the increased usage of autonomous systems is evident in almost
agriculture, healthcare, military, and space exploration have found many use cases for
these systems. The driverless car revolution has already begun, and self-driving
technology has been embraced by Tesla, Uber, and Waymo [257]. Robots will soon
be integrated into our lives as home assistants, pets [258], or friends like Sophia
(the AI) [7]. There was a need to explore the area of autonomous systems to evaluate
the following criteria gave clarity to the subsequent contributions of the dissertation:
• cybersecurity of an autonomous system
Experimental modeling and evaluation of these systems were beyond the scope of
this research. The analysis of the selected articles reflects that UAVs are the most
attractive field of study, while driverless cars and robots are still in their early stage of
game model to represent the system in any attack scenario. The game model can be
applied at any level of granularity with the right attack and defense parameters and
quantification of cost. Security game modeling can be used to reason over diverse
potential threats and attack scenarios. The game specifies only the players’ options and the
consequence of each of them. The solution of the game does not decide who wins
and who loses. Rather, it is a way to think about what players might do in a given
scenario and what the consequences would be. Hence, there can be different solutions
simulation results show how the attacker can use different parameters to increase
their success in launching a DDoS attack and its impact on the traffic. Similarly,
the defender can minimize the attack impact by changing other parameters. Such
6.2 Limitations and Future Work
A fundamental constraint that we found during the review process is the lack of
systems. The study of cybersecurity is new in this domain, and so is its implementation.
This resulted in a limited number of primary articles reviewed for the
security and modeling sections of the survey. The discussion focuses on the security
of three autonomous systems (UAVs, robots, and driverless cars) considered worth a detailed
review because of their popularity. Swarms are an extension of these systems, and
believe in having studied the most applicable works available in this area of research.
We will address the challenges and limitations associated with the current simulation
methodology in our future work. The simulation work can be extended to
many other attack scenarios for different types of attacks. It can be ex-
node(s). The same or different parameters can be evaluated to see the attack’s impact.
In our future work, we can also assess the impact these parameters have on time
delay, distance covered by the vehicles, and battery consumption. If the vehicle has
to process a higher number of received packets, more battery charge will be consumed.
Since autonomous vehicles will be battery operated, this will reduce the distance
they would typically cover at a given battery level. In addition, the learning
of attacker/defender over time also needs to be taken into account. For example,
the attacker/defender must reason over different types of attacks and their counter-
measures on each module. We need a more elaborate model for both attacker and
defender to address these and other challenges. A framework or GUI could be de-
veloped to evaluate the Nash/QR equilibrium based on the user’s data of players,
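As a sketch of the quantal response (QR) side of such a framework, a logit QRE can be approximated by damped fixed-point iteration, where each player plays a softmax response to the other player’s current mix; the payoffs below are hypothetical:

```python
import math

def logit_response(payoffs, lam):
    """Softmax (logit) response: the probability of each pure strategy is
    proportional to exp(lam * expected payoff)."""
    m = max(payoffs)
    z = [math.exp(lam * (v - m)) for v in payoffs]
    s = sum(z)
    return [v / s for v in z]

def logit_qre(A, B, lam=1.0, iters=2000):
    """Logit quantal response equilibrium of a bimatrix game by damped
    fixed-point iteration; lam is the rationality parameter (as lam
    grows, the QRE approaches a Nash equilibrium)."""
    m, n = len(A), len(A[0])
    p = [1 / m] * m
    q = [1 / n] * n
    for _ in range(iters):
        pay_rows = [sum(A[i][j] * q[j] for j in range(n)) for i in range(m)]
        pay_cols = [sum(B[i][j] * p[i] for i in range(m)) for j in range(n)]
        p_new = logit_response(pay_rows, lam)
        q_new = logit_response(pay_cols, lam)
        p = [0.5 * a + 0.5 * b for a, b in zip(p, p_new)]  # damping
        q = [0.5 * a + 0.5 * b for a, b in zip(q, q_new)]
    return p, q

# Hypothetical 2x2 attacker/defender payoffs; with lam = 5 both mixes
# concentrate on the dominant strategies.
p, q = logit_qre([[3, 1], [5, 2]], [[2, 4], [1, 3]], lam=5.0)
print(p, q)
```

Sweeping lam from 0 toward infinity traces the path from uniform play to the Nash equilibrium, which is how a QRE-based framework could present bounded-rationality predictions alongside the Nash solution.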
References
[1] Statista. (2018) Internet of Things (IoT) connected devices installed
http://tinyurl.com/j3t9t2w
//tinyurl.com/s42w7wl
[4] K. Hill. (2016) Security robot accidentally attacks child. [Online]. Available:
https://tinyurl.com/yynv5se3
[6] CCW. (2018) Five years of campaigning, CCW continues. [Online]. Available:
https://www.stopkillerrobots.org/2018/03/fiveyears/
[7] D. Galeon. (2017) World’s First AI Citizen in Saudi Arabia Is Now Calling For
[9] F. Jahan, A. Y. Javaid, W. Sun, and M. Alam, “Gnssim: An open source
systems: A survey,” ACM Computing Surveys (CSUR), vol. 52, no. 3, p. 50,
2019.
[12] F. Jahan, W. Sun, and Q. Niyaz, “A non-cooperative game based model for
[16] H.-M. Huang, “Autonomy levels for unmanned systems (alfus) framework vol-
[17] S. Shahrdar, L. Menezes, and M. Nojoumian, “A Survey on Trust in Au-
net/myths-legends/talos-crete-00157
[19] W. B. Yeats, “The Winding Stairs and Other Poems.” Reprinted by Kessinger
Publishing, 1933.
[23] J. Turi. (2014) Tesla’s toy boat: A drone before its time. [Online]. Available:
https://www.engadget.com/2014/01/19/nikola-teslas-remote-control-boat/
[24] J. Turi. (2014) GE’s bringing good things, and massive robots,
ge-man-amplifying-robots/
[25] ZDNet. (2018) 15 of the best movies about AI, ranked. [Online]. Available:
https://tinyurl.com/yculsjxp
[26] S. Wold and P. Staff. (2015) The 100 Greatest Movie Robots of All Time.
[27] A. Newitz. (2013) 15 Books That Will Change the Way You Look at Robots.
an-army-base-on-the-moon/
[31] R. van der Kleij, T. Hueting, and J. M. Schraagen, “Change detection sup-
2018.
//casmodeling.springeropen.com/articles/10.1186/s40294-016-0026-7
Man, and Cybernetics - Part A: Systems and Humans, vol. 30, no. 3, pp. 286–
interaction, not over-automation,” Phil. Trans. R. Soc. Lond. B, vol. 327, no.
for Situation Awareness,” IFAC Proceedings Volumes, vol. 28, no. 15, pp.
pii/S1474667017452591
Proceedings of the Human Factors Society annual meeting, vol. 32, no. 2. SAGE
systems for advanced cockpits,” vol. 31, no. 12, pp. 1388–1392, 1987.
[41] J. W. McDaniel, “Rules for fighter cockpit automation,” in Aerospace and Elec-
tronics Conference, 1988. NAECON 1988., Proceedings of the IEEE 1988 Na-
[45] M. R. Endsley and D. B. Kaber, “Level of automation effects on performance,
[47] H.-M. Huang, E. Messina, and J. Albus, “Toward a generic model for autonomy
[48] H. M. Huang, E. Messina, and J. Albus, “Autonomy level specification for in-
Report,” in SPIE Defense and Security Symposium 2004, 2004, pp. 386–397.
articleid=844165
[50] H.-M. Huang, E. Messina, R. Wade, R. English, B. Novak, and J. Albus, “Au-
vol. 5804, no. June, pp. 439–448, 2005. [Online]. Available: http:
//dx.doi.org/10.1117/12.603725
[52] H.-M. Huang, “Autonomy levels for unmanned systems (ALFUS) framework:
https://www.cognilytica.com/2018/02/22/will-another-ai-winter/
in sliding autonomy for multi-robot spatial assembly,” no. 603, pp. 489–496,
September 2005.
[55] F. Heger and S. Singh, “Sliding autonomy for complex coordinated multi-robot
download?doi=10.1.1.148.9892&rep=rep1&type=pdf
systematic literature review,” Artificial Intelligence Review, vol. 51, no. 2, pp.
149–186, 2019.
tecture for single operator, multiple uav command and control,” Massachusetts
[60] B. P. Sellner, L. M. Hiatt, R. Simmons, and S. Singh, “Attaining sit-
80–87.
teams,” Intelligent Autonomous Systems 10, IAS 2008, pp. 332–341, 2008.
[63] T. Fong, N. Cabrol, C. Thorpe, and C. Baur, “A personal user interface for col-
2001.
S. Fouse, and D. Housten, “Mixed-initiative adjustable autonomy for hu-
conference, 2008.
for urban search and rescue.” in AAAI mobile robot competition, 2002, pp. 33–
37.
pp. 165–172.
[70] L. Lin, M. a. Goodrich, and S. Clark, “Sliding Autonomy for UAV Path
Available: http://www.humanrobotinteraction.org/journal/index.php/HRI/
article/view/12
[71] M. Desai and H. A. Yanco, “Blending human and robot inputs for sliding scale
[72] M. Desai, “Sliding scale autonomy and trust in human-robot interaction,” Mas-
[76] P. Scerri, D. Pynadath, and M. Tambe, “Adjustable autonomy for the real
ground rover configuration. part i: System design,” in 2018 Aviation Technol-
rgb, 3d, thermal, and multimodal approaches for facial expression recognition:
analysis and machine intelligence, vol. 38, no. 8, pp. 1548–1568, 2016.
survey and real-world user experiences in mixed reality,” Sensors, vol. 18, no. 2,
p. 416, 2018.
[89] M. Giering, V. Venugopalan, and K. Reddy, “Multi-modal sensor registration
for vehicle perception via deep neural networks,” in High Performance Extreme
cles in agriculture for process evaluation,” Frontiers in Robotics and AI, vol. 5,
p. 28, 2018.
[92] C.-E. Hrabia, N. Masuch, and S. Albayrak, “A metrics framework for quan-
cially assistive robotics,” Interaction Studies, vol. 8, no. 3, pp. 423–439, 2007.
for Space Applications XI, vol. 10641. International Society for Optics and
the Symposium and Bootcamp on the Science of Security. ACM, 2016, pp.
82–89.
[98] D. Ding, Q.-L. Han, Y. Xiang, X. Ge, and X.-M. Zhang, “A survey on security
their security issues,” Computers in Industry, vol. 100, pp. 212–223, 2018.
2018.
[101] G. Wu, J. Sun, and J. Chen, “A survey on the security of cyber-physical sys-
tems,” Control Theory and Technology, vol. 14, no. 1, pp. 2–10, 2016.
[102] A. Y. Javaid, W. Sun, and M. Alam, “Single and multiple uav cyber-attack sim-
[104] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, “Remote attacks on automated
vehicles sensors: Experiments on camera and lidar,” Black Hat Europe, vol. 11,
p. 2015, 2015.
//tinyurl.com/yydz75x5
of dos attacks on the ar. drone 2.0,” in 2016 XIII Latin American Robotics
pp. 127–132.
pp. 606–614.
978-3-319-70833-1_8
[109] C. Kwon, W. Liu, and I. Hwang, “Security analysis for cyber-physical systems
uavs with sensor input spoofing attacks,” in Proceedings of the 10th USENIX
1702.01251
[112] J. Pappalardo. (2018) The Dream of Drone Delivery Just Became Much More
Aerospace Information Systems, vol. 13, no. 1, pp. 27–45, 2016. [Online].
Available: http://arc.aiaa.org/doi/10.2514/1.I010388
[115] D. Hambling. (2017) Ships fooled in GPS spoofing attack suggest Russian
[116] D. Welch and K. Naughton. (2019) GM Falls Millions of Miles Short on Cruise
[117] J. Williams. (2018) 2019 may be year of the driverless car: Here’s where top
http://tinyurl.com/y25d4g9x
driveai-self-driving-design-frisco-texas/
[120] J. Roulette. (2019) Self-driving buses roll into Orlando’s Lake Nona, a
//tinyurl.com/yy3kc8kh
[121] A. Greenberg. (2015) Hackers Remotely kill a Jeep on the Highway- With me
Available: https://tinyurl.com/y4wb3qwt
[124] F. Lambert. (2018) Watch what Tesla Autopilot can see in incredible 360º video.
[125] T. H. Jr. (2018) Move over Tesla, this self-driving car will let you
http://tinyurl.com/yad58cgj
com/index.php/fr/navya/histoire
[127] Sarah Sloat. (2016) BMW, Intel, Mobileye Link Up in Self-Driving Tech
2013.
survey,” Indian Journal of Science and Technology, vol. 9, no. 28, 2016.
[130] M. Azees, P. Vijayakumar, and L. J. Deborah, “Comprehensive survey on secu-
//ieeexplore.ieee.org/abstract/document/7238573/
transport systems: Standards, threats analysis and cryptographic countermea-
Systems, vol. 17, no. 4, pp. 1015 – 1028, 2015. [Online]. Available:
https://ieeexplore.ieee.org/abstract/document/7327222/
[139] M. N. Mejri and J. Ben-Othman, “Entropy as a new metric for denial of service
document/7037603/
document/5735780/
Conference (VNC), 2010 IEEE. Jersey City, NJ, USA: IEEE, 2010. [Online].
Available: https://ieeexplore.ieee.org/abstract/document/5698247/
//tinyurl.com/yymnbxrt
[147] N. Fearn. (2018, jan) The Cutting-Edge Tech set to Define 2018.
doc id=739321
[148] M. Hans, B. Graf, and R. D. Schraft, “Robotic home assistant care-o-bot: Past-
38th IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 268–286.
1–17, 2017.
[152] Ians. (2015) Surgical robots to become ubiquitous in Indian hospitals. [Online].
Available: http://tinyurl.com/y2rfp2es
[153] E. Snell. (2015) Phishing Attack Affects 3,300 Partners HealthCare Patients.
[154] K. Zetter, “Hospital networks are leaking data, leaving critical devices
y5kxsfol
for mobile wi-fi robot toys from online pedophiles,” in Privacy, Security, Risk
and Trust (PASSAT) and 2011 IEEE Third Inernational Conference on Social
Available: https://tinyurl.com/yylnuarh
[159] S. Industry. (2017) Blockchain makes IoT devices autonomous. [Online].
Available: https://tinyurl.com/y5w784re
IEEE Robotics & Automation Magazine, vol. 16, no. 1, pp. 103–112, 2009.
with robot swarms for agricultural applications,” in Advanced Video and Signal
Research, vol. 29, no. 14, pp. 1743–1760, 2010. [Online]. Available:
https://doi.org/10.1177/0278364910375139
[166] Z.-y. Tan, Ying and Zheng, “Research advance in swarm robotics,”
https://www.sciencedirect.com/science/article/pii/S221491471300024X
[167] F. Higgins, A. Tomlinson, and K. M. Martin, “Threats to the swarm: Secu-
[168] Y. K. Sharma and A. Bagla, “Security challenges for swarm robotics,” SECU-
pp. 585–590.
urban vehicle,” Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008.
[173] P. Guo, H. Kim, N. Virani, J. Xu, M. Zhu, and P. Liu, “Exploiting physical dy-
namics to detect actuator and sensor attacks in mobile robots,” arXiv preprint
arXiv:1708.01834, 2017.
[174] D. M. Gage, “Security considerations for autonomous robots,” in Security and
[176] J. Su, J. He, P. Cheng, and J. Chen, “A stealthy gps spoofing strategy for ma-
from cyber threats,” The Journal of Defense Modeling and Simulation, vol. 16,
http://tinyurl.com/yxeph2cb
vehicle smart device ground control station cyber security threat model,” in
[182] B. Beyst. (2016) Comparing ThreatModeler to Microsoft Threat Modeling
Available: http://dx.doi.org/10.5772/intechopen.69796
ysis of a modern automobile,” in Security and Privacy (SP), 2010 IEEE Sym-
[186] S. Parkinson, P. Ward, K. Wilson, and J. Miller, “Cyber threats facing au-
Intelligent Transportation Systems, vol. 18, no. 11, pp. 2898–2915, 2017.
546–556, 2015.
tonomous and unmanned vehicles,” The Journal of Defense Modeling and Sim-
and defences,” in Internet of Things (iThings) and IEEE Green Computing and
ing (CPSCom) and IEEE Smart Data (SmartData), 2016 IEEE International
[192] A. Sanjab, W. Saad, and T. Başar, “Prospect theory for enhanced cyber-
pp. 1–6.
sided jamming games among teams of mobile agents,” Proceedings of the IEEE
[195] Z. Xu and Q. Zhu, “A cyber-physical game framework for secure and resilient
[197] A. Ferdowsi, W. Saad, and N. B. Mandayam, “Colonel blotto game for se-
arXiv:1709.09768, 2017.
[199] A. Sanjab, W. Saad, and T. Başar, “Prospect theory for enhanced cyber-
pp. 1–6.
[200] K. Wang, M. Du, S. Maharjan, and Y. Sun, “Strategic honeypot game model
for distributed denial of service attacks in the smart grid,” IEEE Transactions
[202] Q. Wu, S. Shiva, S. Roy, C. Ellis, and V. Datla, “On modeling and simulation of
game theory-based defense mechanisms against dos and ddos attacks,” in Pro-
against ddos attacks on tcp/tcp-friendly flows,” in 2011 IEEE symposium on
[205] Y. Wang, Y. Zhang, L. Zhang, L. Zhu, and Y. Liu, “Game based ddos attack
pp. 1–5.
fined Networks & Network Function Virtualization. ACM, 2017, pp. 53–58.
[209] G. Yan, R. Lee, A. Kent, and D. Wolpert, “Towards a bayesian network game
framework for evaluating ddos attacks and defense,” in Proceedings of the 2012
553–566.
[210] M. Wright, S. Venkatesan, M. Albanese, and M. P. Wellman, “Moving target
ceedings of the 2016 ACM Workshop on Moving Target Defense. ACM, 2016,
pp. 93–104.
tion for the detection of vm-based ddos attacks in the cloud,” IEEE Transac-
[214] B. Subba, S. Biswas, and S. Karmakar, “A game theory based multi layered
nal of Wireless Information Networks, vol. 25, no. 4, pp. 399–421, 2018.
theoretical based system using holt-winters and genetic algorithm with fuzzy
logic for dos/ddos mitigation on sdn networks,” IEEE Access, vol. 5, pp. 9485–
9496, 2017.
[217] L. Gao, Y. Li, L. Zhang, F. Lin, and M. Ma, “Research on detection and defense
mechanisms of dos attacks based on bp neural network and game theory,” IEEE
[218] G. Wu, Z. Li, and L. Yao, “Dos mitigation mechanism based on non-cooperative
repeated game for sdn,” in 2018 IEEE 24th International Conference on Parallel
[220] A. Mairaj and A. Y. Javaid, “Game theoretic solution for an unmanned aerial
vehicle network host under ddos attack,” Computer Networks, p. 108962, 2022.
[221] R. M. Husar and J. Stracener, “System autonomy modeling during early concept
[223] D. Causevic. (2018) How Machine Learning Can Enhance Cybersecurity for
ArticleID=15659
[225] E. C. Ferrer, “The blockchain: a new framework for robotic swarm systems,”
[227] S. Williamson. (2018) Blockchain May Be the Answer to Making Self Driving
solutions,” IEEE Communications Magazine, vol. 55, no. 7, pp. 101–109, 2017.
[229] Q. Niyaz, W. Sun, and A. Y. Javaid, “A deep learning based ddos detection
2016.
[230] A. Ydenberg, N. Heir, and B. Gill, “Security, sdn, and vanet technology of
[231] R. Kumar, M. A. Sayeed, V. Sharma, and I. You, “An sdn-based secure mobility
[233] D. M. West. (2017) Securing the future of driverless cars. [Online]. Available:
https://www.brookings.edu/research/securing-the-future-of-driverless-cars/
[234] K. Kaur and G. Rampersad, “Trust in driverless cars: Investigating key fac-
[235] J. Walker, “The Self-Driving Car Timeline – Predictions from the Top 11
[236] T. Maddox, “How autonomous vehicles could save over 350K lives
//tinyurl.com/y6me578b
2015.
[238] G. Owen, Game theory, 4th ed. Bingley, England: Emerald Group Publishing,
2013.
[241] L. Petnga and H. Xu, “Security of unmanned aerial vehicles: Dynamic state
[242] J. Kocić, N. Jovičić, and V. Drndarević, “Sensors and sensor fusion in au-
Deceiving autonomous cars with toxic signs,” arXiv preprint arXiv:1802.06430,
2018.
[245] P. Guo, H. Kim, N. Virani, J. Xu, M. Zhu, and P. Liu, “Exploiting physical dy-
namics to detect actuator and sensor attacks in mobile robots,” arXiv preprint
arXiv:1708.01834, 2017.
Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 39, no. 5,
united states,” Proceedings of the IEEE, vol. 99, no. 7, pp. 1162–1182, 2011.
[250] R. D. McKelvey, A. M. McLennan, and T. L. Turocy,
Available: http://www.gambit-project.org.
Symposium on Security and Privacy. Los Alamitos, CA, USA: IEEE
//doi.ieeecomputersociety.org/10.1109/PADSW.2018.8644627
systems: A survey,” ACM Computing Surveys (CSUR), vol. 52, no. 5, pp. 1–34,
2019.
[253] S. Biswas, J. Mišić, and V. Mišić, “Ddos attack on wave-enabled vanet through
[255] ABC News, “Berlin artist uses handcart full of smartphones to trick Google
Maps’ traffic algorithm into thinking there is traffic jam,” 2020. [Online].
Available: https://tinyurl.com/3mjeu84m
37–44.
[257] S. McBride. (2018) The Driverless Car Revolution Has Begun – Here’s How
[258] L. Dormehl. (2017, April) MiRo is the robot dog that promises to be a geek’s