Cybersecurity Modeling of Autonomous Systems - A Game-Based Approach


A Dissertation

entitled

Cybersecurity Modeling of Autonomous Systems: A Game-Based Approach

by
Farha Jahan

Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Doctor of Philosophy Degree in Engineering

Dr. Weiqing Sun, Committee Chair

Dr. Devinder Kaur, Committee Member

Dr. Mohammed Niamat, Committee Member

Dr. Junghwan Kim, Committee Member

Dr. Quamar Niyaz, Committee Member

Dr. Amy Thompson, Acting Dean


College of Graduate Studies

The University of Toledo


May 2022
Copyright 2022, Farha Jahan

This document is copyrighted material. Under copyright law, no parts of this document may be reproduced without the expressed permission of the author.

An Abstract of
Cybersecurity Modeling of Autonomous Systems: A Game-Based Approach
by
Farha Jahan

Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Doctor of Philosophy Degree in Engineering
The University of Toledo
May 2022

Autonomous systems are soon expected to integrate into our lives as home assistants, delivery drones, and driverless cars. The level of automation in these systems, from manually controlled to fully autonomous, depends on the autonomy approach chosen to design them. This selection also affects other operational and essential aspects such as cybersecurity and trust. Consequently, the emerging areas of human-machine teams (HMT) and cyber-physical human systems (CPHS) have attempted to address human trust in autonomy, while the traditional security domains, along with these new domains, continue to address the associated security concerns. This dissertation revolves around these general ideas and attempts to answer several open questions. How did we get here? Where are we headed? How do we ensure that autonomous systems are secure enough that we may trust their autonomous operation? Can we model attacker and defender behavior based on their strategies for attack or defense? Given the importance of the cybersecurity of these systems, we propose that simulating and modeling these interactions to predict or select appropriate behavior will lead to greater trust in autonomous systems through explainable cause and action sequences.

The first phase of this research reviews the historical evolution of autonomy, its approaches, and the current trends in related fields to build robust autonomous systems. Toward such a goal, and with the increased number of cyberattacks, the security of these systems needs special attention from the research community. To gauge the state of the art in this area, we study the works that attempt to improve the cybersecurity of these systems. We found that it is essential to model the system architecture from a security perspective, identify the threats and vulnerabilities, and then model the cyberattacks. A survey in this direction explores the various attack models that have been proposed over the years and identifies the research gaps that need to be addressed by the research community.

The second phase of this work focuses on developing a generic autonomous system architecture, both theoretical and analytical, to enable the next step of security modeling. It was established that any autonomous system can be represented using three major modules: perception, cognition, and control. Detailed threat, vulnerability, and attack modeling was then performed on the proposed autonomous system model. The next step involved exploring various theories and methods to gauge their appropriateness for security modeling. It was found that economic theories such as game theory work best for profit/loss-oriented cybersecurity problems.

We modeled the attack and defense mechanisms applied to the above-mentioned autonomous system modules using a non-cooperative, non-zero-sum game. We developed the strategic game formulation and established the method to calculate the decision payoffs and the associated Nash equilibrium. Twenty-one different scenarios were identified using combinations of three attacker and two defender strategies (every non-empty subset of the attacker strategies paired with every non-empty subset of the defender strategies: 7 × 3 = 21). Finally, we simulated the game using Veins, an OMNeT++-based simulator, to obtain the optimal strategy for maintaining a secure system state. A Distributed Denial of Service (DDoS) attack was chosen because there have been many real-world instances of such attacks, mounted simply with a collection of phones or IoT devices. The simulation results also give a perspective on the attacker strategies that could maximize the impact of a DDoS attack in a Vehicular Ad Hoc Network (VANET). Another tool, Gambit, was utilized to calculate the Nash equilibrium for each scenario.

After a detailed discussion of the results and their analysis, the work concludes by summarizing the various actions that the attacker and the defender can independently take. Each party's best and worst scenarios were identified among the different scenarios. In essence, this work provides a predictive all-scenario evaluation for a network of autonomous systems, allowing system designers to be prepared to act in real time in case of a cyberattack. Knowing or estimating attacker capability is critical to the success of this model: underestimating attacker capabilities may leave the system unprepared and result in catastrophic damage in case of a cyberattack. Therefore, vulnerability analysis and modeling of the target system serve as prerequisites for applying this game theory-based security modeling.

To my daughters Aaiza and Elhaam.
Acknowledgments

It has been a long journey, made easy and fulfilling by the strong mentorship of Dr. Weiqing Sun. I am short of words to acknowledge his support and encouragement in my endeavors throughout the Ph.D. program. Thank you for your generosity and consideration, which enabled me to maintain a work-life balance.

I would like to thank all my committee members, Drs. Niyaz, Niamat, Kim, and Kaur, for taking time out of their busy schedules to serve on my dissertation committee and provide valuable comments and feedback. I would like to thank the College of Graduate Studies for supporting my research with the University Fellowship. Terri and Mary have always been supportive and prompt in answering any questions I had.

This acknowledgment would be incomplete without thanking my family. My spouse has been my strongest pillar of support in myriad ways. Along with being a caregiver, he is a friend and a teacher. He has guided, supported, and motivated me to accomplish my dreams. Thank you! My family and in-laws never doubted me; I am thankful for their unwavering belief in my efforts. I would also like to thank my family away from home, Cheryl, Eric, Christy, John, and Michelle, for their blessings, prayers, and the time we spent together. I would like to thank the Little Sprouts Academy for providing care for my daughter. They put me at ease with their love and kindness for my daughter so that I could concentrate on my research.

Last but not least, my friends, without whom the world would be empty. I would like to thank my friends here and back in India.

Contents

Abstract iii

Acknowledgments vii

Contents viii

List of Tables xiii

List of Figures xv

List of Abbreviations xvii

1 Introduction 1

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.4 Dissertation Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Background 6

2.1 Autonomy: History, Approaches, and Trends . . . . . . . . . . . . . . 8

2.1.1 Historical Evolution . . . . . . . . . . . . . . . . . . . . . . . 8

2.1.2 Approaches of Autonomy . . . . . . . . . . . . . . . . . . . . . 14

2.1.2.1 Supervisory Control/Task-based Approach . . . . . . 15

2.1.2.2 System-Initiative Approach . . . . . . . . . . . . . . 15

2.1.2.3 Mixed-Initiative/Teamwork-centered Approach . . . 16

2.1.2.4 Sliding Scale Approach . . . . . . . . . . . . . . . . . 19

2.1.2.5 Hierarchical Approach . . . . . . . . . . . . . . . . . 19

2.1.2.6 Policy-based Approach . . . . . . . . . . . . . . . . . 20

2.1.2.7 Goal-driven Approach . . . . . . . . . . . . . . . . . 21

2.1.2.8 Collaborative Approach . . . . . . . . . . . . . . . . 21

2.1.3 Current Trends . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.2 Cybersecurity of Autonomous Systems . . . . . . . . . . . . . . . . . 23

2.2.1 UxVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.2.2 Driverless Cars . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.2.3 Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

2.2.4 Internet of Autonomous Things (IoAT)/Autonomous Internet of Things (AIoT) . . . . . . . . . . . . . . . . . . . . . 31

2.2.5 Swarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.3 Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

2.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3 Security Modeling of Autonomous Systems 37

3.1 Modeling System, Vulnerabilities, Threats, and Attacks . . . . . . . . 37

3.1.1 System Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 38

3.1.1.1 Theoretical Model . . . . . . . . . . . . . . . . . . . 38

3.1.1.2 Analytical Model . . . . . . . . . . . . . . . . . . . . 39

3.1.2 Threat Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.3 Vulnerability Modeling . . . . . . . . . . . . . . . . . . . . . . 41

3.1.4 Attack Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.1.4.1 Theoretical Model . . . . . . . . . . . . . . . . . . . 43

3.1.4.2 Analytical Model . . . . . . . . . . . . . . . . . . . . 44

3.2 Applications of Game Theory to Cybersecurity . . . . . . . . . . . . . 45

3.2.1 Autonomous System Security . . . . . . . . . . . . . . . . . . 46

3.2.2 Cyber-Physical System Security . . . . . . . . . . . . . . . . . 47

3.2.3 Network Security . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.3 Research Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4 Cybersecurity Modeling of Autonomous System Using Game Theory 56

4.1 Autonomous System Architecture . . . . . . . . . . . . . . . . . . . . 57

4.2 Strategic Game Model . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2.1 Autonomous System (AS) Security Game Representation . . 61

4.2.2 Payoff Calculation . . . . . . . . . . . . . . . . . . . . . . . . 62

4.2.3 Nash Equilibrium Calculation . . . . . . . . . . . . . . . . . . 65

4.3 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

5 Non-Cooperative Game Modeling Simulation 72

5.1 Simulation Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

5.1.1 VANET Communication . . . . . . . . . . . . . . . . . . . . . 73

5.1.2 Veins Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . 74

5.1.3 Gambit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

5.2 Game Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

5.2.1 Strategy Selection . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.2.2 Payoff Quantification and Calculation . . . . . . . . . . . . . . 89

5.3 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5.3.1 Scenario AD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

5.3.1.1 Beacon Interval vs Bit Rate . . . . . . . . . . . . . . 95

5.3.1.2 Beacon Interval vs Vehicle Density . . . . . . . . . . 95

5.3.1.3 Message Payload vs Bit Rate . . . . . . . . . . . . . 96

5.3.1.4 Message Payload vs Vehicle Density . . . . . . . . . 97

5.3.1.5 Number of RSUs vs Bit Rate . . . . . . . . . . . . . 98

5.3.1.6 Number of RSUs vs Vehicle Density . . . . . . . . . 98

5.3.2 Scenario AAD . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

5.3.2.1 Message Payload + Beacon Interval vs Bit Rate . . 99

5.3.2.2 Message Payload + Beacon Interval vs Vehicle Density 100

5.3.2.3 NumRSU + Beacon Interval vs Bit Rate . . . . . . 101

5.3.2.4 NumRSU + Beacon Interval vs Vehicle Density . . . 102

5.3.2.5 NumRSU + Message Payload vs Bit Rate . . . . . . 102

5.3.2.6 NumRSU + Message Payload vs Vehicle Density . . 102

5.3.3 Scenario AAAD . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.3.3.1 NumRSU + Message Payload + Beacon Interval vs Bit Rate . . . . . . . . . . . . . . . . . . 103

5.3.3.2 NumRSU + Message Payload + Beacon Interval vs Vehicle Density . . . . . . . . . . . . . . 104

5.3.4 Scenario ADD . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

5.3.4.1 Beacon Interval vs Bit Rate + Vehicle Density . . . . 105

5.3.4.2 Message Payload vs Bit Rate + Vehicle Density . . . 105

5.3.4.3 NumRSU vs Bit Rate + Vehicle Density . . . . . . . 105

5.3.5 Scenario AADD . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.3.5.1 Beacon Interval + Message Payload vs Bit Rate + Vehicle Density . . . . . . . . . . . . . . 106

5.3.5.2 NumRSU + Beacon Interval vs Bit Rate + Vehicle Density . . . . . . . . . . . . . . . . . . 107

5.3.5.3 NumRSU + Message Payload vs Bit Rate + Vehicle Density . . . . . . . . . . . . . . . . . 107

5.3.6 Scenario AAADD . . . . . . . . . . . . . . . . . . . . . . . . . 108

5.3.6.1 NumRSU + Message Payload + Beacon Interval vs Bit Rate + Vehicle Density . . . . . . . 108

5.3.7 Final Payoff Matrix . . . . . . . . . . . . . . . . . . . . . . . . 108

5.4 Analysis and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 109

5.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

6 Conclusion and Future Work 114

6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

6.2 Limitations and Future Work . . . . . . . . . . . . . . . . . . . . . . 116

References 117

List of Tables

2.1 Level of Automation (LoA) frameworks summary . . . . . . . . . . . . . 13

2.2 Comparison of different approaches of autonomy . . . . . . . . . . . . . . 17

2.3 Effects of cyberattacks on Autonomous Systems . . . . . . . . . . . . . . 26

2.4 Driverless Car Technology Trend . . . . . . . . . . . . . . . . . . . . . . 29

3.1 List of System, Vulnerabilities, Threat, and Attack Modeling Studies . . 55

4.1 Enumeration of Attack/Defense Strategies . . . . . . . . . . . . . . . . . 62

4.2 Enumeration of Possible Cases of Attack and Defense . . . . . . . . . . . 63

4.3 Payoff Matrices for the AS Security Game . . . . . . . . . . . . . . . . . 65

4.4 Quantification of actions for the AS Security Game . . . . . . . . . . . . 69

4.5 Payoff Matrices for the AS Security Game . . . . . . . . . . . . . . . . . 69

5.1 Default Simulation Parameters . . . . . . . . . . . . . . . . . . . . . . . 80

5.2 Parameter selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.3 Percentage Packet Loss with Beacon Interval . . . . . . . . . . . . . . . . 83

5.4 Percentage Packet Loss with MP . . . . . . . . . . . . . . . . . . . . . . 84

5.5 Percentage Packet Loss with increase in No. of RSUs . . . . . . . . . . . 85

5.6 Parameter Combinations and Payoff Matrices Size . . . . . . . . . . . . . 92

5.7 Parameter Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5.8 Payoff Matrix: Beacon Interval vs Bit Rate . . . . . . . . . . . . . . . . 95

5.9 Payoff Matrix: Beacon Interval vs Vehicle Density . . . . . . . . . . . . . 95

5.10 Payoff Matrix: Message Payload vs Bit Rate . . . . . . . . . . . . . . . . 97

5.11 Payoff Matrix: Message Payload vs Vehicle Density . . . . . . . . . . . . 98

5.12 Payoff Matrix: Number of RSUs vs Bit Rate . . . . . . . . . . . . . . . 98

5.13 Payoff Matrix: Number of RSUs vs Vehicle Density . . . . . . . . . . . . 99

5.14 Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate . . . . . . 99

5.15 Payoff Matrix: Message Payload, Beacon Interval vs Vehicle Density . . . 101

5.16 Payoff Matrix: Number of RSUs, Beacon Interval vs Bit Rate . . . . . . 101

5.17 Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Density . . . 102

5.18 Payoff Matrix: Number of RSUs, Message Payload vs Bit Rate . . . . . 103

5.19 Payoff Matrix: Number of RSUs, Message Payload vs Vehicle Density . 103

5.20 Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate . . . . . . . . . . . . . . . . . 104

5.21 Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Vehicle Density . . . . . . . . . . . . . 104

5.22 Payoff Matrix: Beacon Interval vs Bit Rate, Vehicle Density . . . . . . . 105

5.23 Payoff Matrix: Message Payload vs Bit Rate . . . . . . . . . . . . . . . 105

5.24 Payoff Matrix: Number of RSUs vs Bit Rate . . . . . . . . . . . . . . . 106

5.25 Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate . . . . . . 106

5.26 Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Density . . . 107

5.27 Payoff Matrix: Number of RSUs, Message Payload vs Vehicle Density . 108

5.28 Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate, Vehicle Density . . . . . . . . 108

5.29 Final Payoff Matrix for Attack/Defense Strategies . . . . . . . . . . . . . 110

5.30 Attack/Defend Strategy and Packet Loss at Equilibrium with respect to Payoff Matrices . . . . . . . . . . . . . 112

List of Figures

2-1 Autonomous System Functions . . . . . . . . . . . . . . . . . . . . . . . 12

2-2 Taxonomy of Attacks on Autonomous Systems . . . . . . . . . . . . . . . 24

2-3 Popular Autonomous Systems . . . . . . . . . . . . . . . . . . . . . . . . 25

2-4 Statistics of the number of devices connected worldwide from 2015 to 2025 (in billions), published in Statista, 2016 [1]. Note: Data is a forecast from 2017-2025. . . . . . . . . . . . . . . . . . . . . . . . . 32

3-1 Vulnerabilities of an autopilot system reproduced from [2] . . . . . . . . 42

4-1 High-level Autonomous System Architecture. . . . . . . . . . . . . . . . . 58

4-2 Process Flow of a Game . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4-3 Variation in Expected Payoffs of the players with probability of successful attack (q_k). . . . . . . . . . . . . . . 71

5-1 Veins (Omnet++) Environment . . . . . . . . . . . . . . . . . . . . . . . 75

5-2 Game Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

5-3 Interference Range for Sending/Receiving Messages . . . . . . . . . . . . 81

5-4 Beacon Interval vs Packet Loss . . . . . . . . . . . . . . . . . . . . . . . 82

5-5 Minimum Speed Distribution with respect to Beacon Interval . . . . . . 83

5-6 Packet Loss variation analysis with respect to Beacon Interval . . . . . . 84

5-7 Message Payload vs Packet Loss . . . . . . . . . . . . . . . . . . . . . . . 85

5-8 Packet Loss Distribution with respect to MP . . . . . . . . . . . . . . . 86

5-9 No. of RSU vs Packet Loss . . . . . . . . . . . . . . . . . . . . . . . . . . 87

5-10 Packet Loss Distribution with respect to NumRSU . . . . . . . . . . . . 88

5-11 Bit Rate vs Packet Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

5-12 Packet Loss with respect to Bit Rate and Number of RSUs . . . . . . . . 89

5-13 Beacon Interval vs Bit Rate . . . . . . . . . . . . . . . . . . . . . . . . . 96

5-14 Beacon Interval vs Vehicle Density . . . . . . . . . . . . . . . . . . . . . 97

5-15 Beacon Interval (80000bit) vs Bit Rate . . . . . . . . . . . . . . . . . . . 100

5-16 Attacker Strategies vs Defender Strategies (Final Payoff) . . . . . . . . . 111

List of Abbreviations

ACL . . . . . . . . . . . . . . . . . . . . . . Autonomous Capability Level


ADFA-LD . . . . . . . . . . . . . . . . Australian Defence Force Academy Linux Dataset
ADS-B . . . . . . . . . . . . . . . . . . . . Automatic Dependent Surveillance - Broadcast
AFRL . . . . . . . . . . . . . . . . . . . . Air Force Research Laboratory
AI . . . . . . . . . . . . . . . . . . . . . . . . Artificial Intelligence
AIoT . . . . . . . . . . . . . . . . . . . . . Autonomous Internet of Things
AHRS . . . . . . . . . . . . . . . . . . . Attitude and Heading Reference Systems
ALFUS . . . . . . . . . . . . . . . . . . . Autonomy Levels for Unmanned Systems
AMI . . . . . . . . . . . . . . . . . . . . . . Advanced Metering Infrastructure
ARTUE . . . . . . . . . . . . . . . . . . . Autonomous Response to Unexpected Events
AS . . . . . . . . . . . . . . . . . . . . . . . . Autonomous System
ASRO . . . . . . . . . . . . . . . . . . . . Astronaut Rover

BI . . . . . . . . . . . . . . . . . . . . . . . . Beacon Interval
BPNN . . . . . . . . . . . . . . . . . . . . Backpropagation Neural Network
BR . . . . . . . . . . . . . . . . . . . . . . . Bit Rate
BSM . . . . . . . . . . . . . . . . . . . . . . Basic Safety Message

CB . . . . . . . . . . . . . . . . . . . . . . . Colonel Blotto
CCH . . . . . . . . . . . . . . . . . . . . . . Control Channel
CIA . . . . . . . . . . . . . . . . . . . . . . . Confidentiality, Integrity, Availability
COVID-19 . . . . . . . . . . . . . . . . Coronavirus Disease of 2019
CPS . . . . . . . . . . . . . . . . . . . . . . Cyber-Physical Systems

DARPA . . . . . . . . . . . . . . . . . . . Defense Advanced Research Projects Agency


DDoS . . . . . . . . . . . . . . . . . . . . . Distributed Denial of Service
DoD . . . . . . . . . . . . . . . . . . . . . . Department of Defense
DoS . . . . . . . . . . . . . . . . . . . . . . . Denial of Service
DoT . . . . . . . . . . . . . . . . . . . . . . Department of Transportation
DRL . . . . . . . . . . . . . . . . . . . . . . Deep Reinforcement Learning
DSRC . . . . . . . . . . . . . . . . . . . . Dedicated Short Range Communication

ECU . . . . . . . . . . . . . . . . . . . . . . Electronic Control Units

GA . . . . . . . . . . . . . . . . . . . . . . . Genetic Algorithm
GE . . . . . . . . . . . . . . . . . . . . . . . General Electric
GM . . . . . . . . . . . . . . . . . . . . . . . General Motors
GPS . . . . . . . . . . . . . . . . . . . . . . Global Positioning System
GUI . . . . . . . . . . . . . . . . . . . . . . Graphical User Interface

HIDS . . . . . . . . . . . . . . . . . . . . . Host-Based Intrusion Detection System


HMT . . . . . . . . . . . . . . . . . . . . . Human Machine Teaming
HRI . . . . . . . . . . . . . . . . . . . . . . . Human Robot Interaction

ICI . . . . . . . . . . . . . . . . . . . . . . . Interdependent Critical Infrastructure


ICMP . . . . . . . . . . . . . . . . . . . . . Internet Control Message Protocol
IDE . . . . . . . . . . . . . . . . . . . . . . . Integrated Development Environment
IMU . . . . . . . . . . . . . . . . . . . . Inertial Measurement Unit
INS . . . . . . . . . . . . . . . . . . . . . . . Inertial Navigation System
IoAT . . . . . . . . . . . . . . . . . . . . . . Internet of Autonomous Things
IoT . . . . . . . . . . . . . . . . . . . . . . . Internet of Things
I/O . . . . . . . . . . . . . . . . . . . . . . . Input Output
IPv6 . . . . . . . . . . . . . . . . . . . . . . Internet Protocol version 6

LASER . . . . . . . . . . . . . . . . . . Light Amplification by Stimulated Emission of Radiation
LiDAR . . . . . . . . . . . . . . . . . . . . Light Detection and Ranging
LoA . . . . . . . . . . . . . . . . . . . . . . Level of Automation
LSTM . . . . . . . . . . . . . . . . . . . . Long-Short Term Memory

MANET . . . . . . . . . . . . . . . . . . Mobile Ad hoc Network


MATLAB . . . . . . . . . . . . . . . . . Matrix Laboratory
MIT . . . . . . . . . . . . . . . . . . . . . . Massachusetts Institute of Technology
ML . . . . . . . . . . . . . . . . . . . . . . . Machine Learning
MOBOT . . . . . . . . . . . . . . . . . . Mobile Robot
MP . . . . . . . . . . . . . . . . . . . . . . . Message Payload
MPC . . . . . . . . . . . . . . . . . . . . . Model-based Predictive Control
MSNE . . . . . . . . . . . . . . . . . . . . Mixed Strategy Nash Equilibrium

NE . . . . . . . . . . . . . . . . . . . . . . . Nash Equilibrium
NFV . . . . . . . . . . . . . . . . . . . . Network Function Virtualization
NIST . . . . . . . . . . . . . . . . . . . . . National Institute of Standards and Technology
NS2 . . . . . . . . . . . . . . . . . . . . . . . Network Simulator Version 2

OBU . . . . . . . . . . . . . . . . . . . . . . On-Board Unit
OMNET++ . . . . . . . . . . . . . . Objective Modular Network Testbed in C++
OS . . . . . . . . . . . . . . . . . . . . . . . . Operating System

PITM . . . . . . . . . . . . . . . . . . . . . Person-in-the-middle
PNT . . . . . . . . . . . . . . . . . . . . . . Position, Navigation, and Time
PSNE . . . . . . . . . . . . . . . . . . . . . Pure Strategy Nash Equilibrium

QRE . . . . . . . . . . . . . . . . . . . . . . Quantal Response Equilibria

RSU . . . . . . . . . . . . . . . . . . . . . . Road Side Unit

SA . . . . . . . . . . . . . . . . . . . . . Situation Awareness
SCH . . . . . . . . . . . . . . . . . . . . Service Channel
SDN . . . . . . . . . . . . . . . . . . . . Software Defined Network
SDR . . . . . . . . . . . . . . . . . . . . Software Defined Radio
SNR . . . . . . . . . . . . . . . . . . . . Signal-to-Noise Ratio
SNIR . . . . . . . . . . . . . . . . . . . Signal-to-Noise Plus Interference Ratio
STRIDE . . . . . . . . . . . . . . . . . Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
SUMO . . . . . . . . . . . . . . . . . . . . Simulation of Urban MObility

TAO . . . . . . . . . . . . . . . . . . . . . . Tactical Action Officer


TCP . . . . . . . . . . . . . . . . . . . . Transmission Control Protocol
TraCI . . . . . . . . . . . . . . . . . . . . . Traffic Control Interface

UAV . . . . . . . . . . . . . . . . . . . . Unmanned Aerial Vehicle
UDP . . . . . . . . . . . . . . . . . . . . User Datagram Protocol
UGV . . . . . . . . . . . . . . . . . . . . Unmanned Ground Vehicle
UMS . . . . . . . . . . . . . . . . . . . . Unmanned Systems
UUV . . . . . . . . . . . . . . . . . . . . Unmanned Underwater Vehicle

VANET . . . . . . . . . . . . . . . . . . . Vehicular Ad hoc Network


VD . . . . . . . . . . . . . . . . . . . . . . . Vehicle Density
V2I . . . . . . . . . . . . . . . . . . . . . . . Vehicle-to-Infrastructure
V2V . . . . . . . . . . . . . . . . . . . . . . Vehicle-to-Vehicle

WAVE . . . . . . . . . . . . . . . . . . . . Wireless Access for Vehicular Environments


WG . . . . . . . . . . . . . . . . . . . . . . . Working Group
WiFi . . . . . . . . . . . . . . . . . . . . . . Wireless Fidelity
WSA . . . . . . . . . . . . . . . . . . . . . WAVE Service Advertisement

WSM . . . . . . . . . . . . . . . . . . . . . WAVE Short Messages
WSMP . . . . . . . . . . . . . . . . . . . WAVE Short Message Protocol
WSN . . . . . . . . . . . . . . . . . . . . . Wireless Sensor Network

Chapter 1

Introduction

The history of remotely controlled machines goes back to the 1950s, when they were called master-slave manipulators. With technological advancement, machines are evolving into autonomous systems that can make rational decisions, be it a driverless car stopping for a pedestrian crossing a road or a caretaker robot calling 911 in case of an emergency. Innovation is not the only reason we will work closely with autonomous systems in the future; necessity might compel us to trust these systems with our lives. The ongoing COVID-19 pandemic has accelerated the use of automation in different industries, such as warehouses, to minimize human contact. Even after the pandemic subsides, these industries will likely continue with automated systems for reduced labor costs and increased profits [3].

1.1 Motivation

The primary objective of developing an autonomous system is to collaborate with humans and assist them in various tasks. These tasks may be ones that require precision, such as surgery; operation in challenging and life-threatening situations, such as space exploration, search and rescue missions, or nuclear power plants; or assistance to elders at home. However, improper implementation or malicious intent may lead to disasters. An incident was reported in a San Francisco mall where a patrolling robot failed to recognize a toddler and accidentally knocked him down [4]. Recently, a self-driving Uber vehicle was involved in a fatal accident [5]. Armed and autonomous weapons are manufactured by high-tech military organizations in the USA, China, and South Korea. Sci-fi novels and movies like ‘I, Robot’ and ‘Terminator’ have created a negative image of these systems and a fear that they may turn against humans and harm them. There is an ongoing campaign against killer robots (fully autonomous weapons) that would have complete decision control over their targets [6]. If malicious users hack or control such systems by exploiting their vulnerabilities, machines turning against humans would no longer be just a concept or a scene from the movies. These autonomous systems are still evolving. It is essential to analyze the security and safety issues associated with these machines and thoroughly test them before they are made part of our lives [7, 8].

1.2 Problem Statement

Cyberattacks will surge as more of these systems connect to the grid, more driverless cars drive on the roads, and more unmanned aerial vehicles (UAVs) hover overhead. Cybersecurity is one of the most significant challenges that businesses and new technologies will continue to face, with attack statistics increasing every year. We need strategic and predictive mathematical models to thwart cyberattacks or minimize attack impact while optimally utilizing available resources. Game theory is the field of mathematical modeling of strategies among rational decision-makers who choose their actions so that their payoffs/objectives are maximized. Having gained popularity in economics, game theory is now widely applied in various fields of science. Game-theoretic techniques can be used to examine a large number of possible threat scenarios. Specific actions can be taken, eliminated, or improved using the strategic outcomes, thereby controlling future attacks more efficiently. These techniques are already being utilized in real-world scenarios.
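To make the strategic-modeling idea concrete, the sketch below enumerates the pure-strategy Nash equilibria of a toy two-player attacker-defender game by checking mutual best responses. The 2x2 size and all payoff numbers are illustrative assumptions for exposition only, not values derived in this dissertation (the actual game formulation and payoff quantification appear in Chapters 4 and 5).

```python
# Minimal sketch: pure-strategy Nash equilibria of a toy 2x2
# attacker-defender game. All payoff numbers are illustrative only.
import numpy as np

# Rows: attacker strategies; columns: defender strategies.
# attacker[i, j] / defender[i, j] = each player's payoff for that pair.
attacker = np.array([[3, -1],
                     [0,  2]])
defender = np.array([[-3, 1],
                     [0, -2]])

def pure_nash(A, D):
    """Return (row, col) pairs where neither player gains by deviating."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            attacker_best = A[i, j] >= A[:, j].max()  # best response to column j
            defender_best = D[i, j] >= D[i, :].max()  # best response to row i
            if attacker_best and defender_best:
                equilibria.append((i, j))
    return equilibria

# This particular game has no pure-strategy equilibrium, which is why
# mixed strategies (probability distributions over actions) are needed.
print(pure_nash(attacker, defender))  # -> []
```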

1.3 Contributions

Through my master's research on implementing GPS spoofing and jamming attacks on UAVs [9, 10], I realized the need for a generalized analytical model of autonomous systems to represent and evaluate attack scenarios. Such a model would also allow system designers to be prepared with a strategic solution to thwart or minimize the impact of cyberattacks. In addition, there were opportunities to apply novel mathematical and economic theories to this area. Therefore, this dissertation focused on addressing the lack of any major success in this direction by making the following contributions:

• Autonomous system modeling study: The first phase of this work involved a study of autonomous systems and a detailed literature survey of research on the cybersecurity and system modeling of these systems. This resulted in a journal publication in ACM Computing Surveys [11].

• Autonomous system architecture: The first-phase study revealed that, to the best of our knowledge, the literature lacks a generalized Autonomous System (AS) architecture. Therefore, the next phase focused on proposing a generalized AS architecture based on the common modules of drones, robots, and driverless cars. This work was also published in the journal article mentioned above.

• Non-Cooperative Game Model: The preliminary work also confirmed the lack of game-theoretic models in the area of AS cybersecurity. This allowed us to propose a non-cooperative game model for the protection of AS against cyberattacks. We applied game theory to propose a mathematical model of cyberattacks on these systems. The game-based model provided insights into attacker and defender strategies and reached equilibrium when both players operate on their utility functions with no incentive to deviate, as each has the best response to the other player's payoff. This also helps strategize the distribution of resources to minimize attack impact. This work was published in the IEEE Security and Privacy Workshops on Autonomous Systems [12].

• Simulation-based case study: Finally, the model proposed in the previous phase was verified by simulating an attack scenario for an autonomous system utilizing several open-source simulation and analysis tools. As of the writing of this dissertation, this work is ready to be submitted to a journal and is not published yet.

1.4 Dissertation Outline

Chapter 1 introduces the dissertation topic and the motivation behind it. It also presents the problem statement, describes the research objectives, and summarizes the significant contributions of this work to cybersecurity.

Chapter 2 presents the background of autonomous systems, investigating their evolution and the levels of autonomy. It also explores different autonomy approaches, which answer questions such as who should have control of the system, and under what circumstances: the human, the machine, or both. We then delve into the cybersecurity modeling of some autonomous systems, such as UAVs and robots. The chapter also introduces game theory, related concepts, and its application to cybersecurity.

Chapter 3 describes system modeling along with threat, vulnerability, and attack modeling. It discusses the insights gained in the dissertation so far, the research challenges, and the future directions this study can lead to.

Based on the insights gained from Chapters 2 and 3, in Chapter 4 we model a generalized AS architecture based on the common modules of an AS, such as a driverless car, a robot, and a drone. We then propose a strategic non-cooperative, non-zero-sum game for modeling attacks on an AS to numerically compute the mixed strategies that achieve the Nash Equilibrium (NE) and the expected payoffs of the players.

Chapter 5 is the case study for the proposed strategic game through simulation. This chapter introduces the simulation tools and the setup requirements. It then elaborates on the simulation of a Denial of Service/Distributed Denial of Service (DoS/DDoS) attack on a driverless car. Different attack strategies are considered, and the results are presented as payoff matrices and Nash equilibria. This chapter also plots trends and surface plots of packet loss and payoffs for various game scenarios.

Chapter 6 concludes the dissertation by summarizing the major results and findings obtained in this research. It discusses the constraints and limitations that lead to recommendations for future work.

Chapter 2

Background

Researchers from different focus groups have put a lot of effort into bringing us to the current level of understanding regarding autonomy. In the last few years, excellent surveys on autonomy have been published [13–15]. Goodrich et al. presented an overview of human-robot interaction (HRI) [?]. Huang has described autonomy-related terminologies and jargon [16]. The level of trust could be a significant factor in deciding the autonomy levels in autonomous systems [17]. However, to the best of our knowledge, the research community lacks literature discussing autonomous systems' cybersecurity in general. The primary objective of this chapter is to provide an in-depth study of the cybersecurity modeling of different classes of autonomous systems. The work can help carry out further research on the security modeling of a generalized autonomous system architecture. In addition, we discuss the historical evolution of the concepts and factors that define autonomy and its levels. In general, the term 'autonomy' concerns the representation and implementation of autonomy levels, determined by the factors and functionality of the system, from fully manual to fully autonomous and everything in between. Having a perspective on the application domain and the scope of implementation may provide a different view of the security modeling of these systems.

When we represent a system using mathematical language and concepts, the result is called a mathematical model, and the process is called mathematical modeling. It helps to explain the state and functioning of a system, its relation to different components, and its response to any external or internal changes in the environment. A mathematical model has a governing equation with subequations based on its relationships with different components and variables. It has initial parameters and boundary conditions, followed by limitations and constraints. Models are classified based on the abstract structure of their variables: linear or non-linear, static or dynamic, discrete or continuous, statistical, logical, and so on. In cybersecurity, attackers are rational decision-makers with incentives to attack, and the system administrator, network operator, or end-user needs to protect their system from such malicious activities. Game theory provides a strategic model to represent the relationship between the attacker and the defender, with incentives for their actions. More details on game theory appear later in this chapter.
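As a minimal illustration of these ingredients (the model below is an assumption for exposition, not one proposed in this dissertation), a linear, continuous, dynamic model of a system state $x(t)$ subject to a defensive control $u(t)$ and an attack disturbance $w(t)$ could take the governing form

$$\dot{x}(t) = A\,x(t) + B\,u(t) + E\,w(t), \qquad x(0) = x_0,$$

where the matrices $A$, $B$, and $E$ encode the system's internal dynamics and its sensitivity to defense and attack actions, the initial condition $x_0$ supplies the initial parameters, and any bounds on $u$ and $w$ play the role of the constraints mentioned above.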

Following are the key contributions of this chapter toward the body of knowledge on autonomous systems:

• A discussion on the historical evolution of autonomy and the current trends in this area,

• A discussion on the cybersecurity of these systems, as explored in industry and academia alike,

• An overview of game theory, and

• A discussion on the application of game theory to cybersecurity.

2.1 Autonomy: History, Approaches, and Trends

2.1.1 Historical Evolution

In the last few decades, the concepts and modeling of automation have evolved considerably. Fields such as Human-Robot Interaction (HRI), Human Machine Teaming (HMT), Artificial Intelligence (AI), and Unmanned Systems (UMS) share the concepts of autonomy. As with many once-inconceivable technologies of the current era, the ideation of such systems can be traced back to religious myths [18], poets [19, 20], artists [21], and storytellers [22], later materializing into remote-controlled inventions [23, 24], movies [25, 26], and science fiction literature [27]. With the advancement of associated areas and state-of-the-art technologies like robot mechanics, sensors and actuators, processors, navigation, and communication, the definition and levels of autonomy (LoA) have gone through multiple revisions and modifications to keep up with other emerging and advancing technologies so that they can be adapted for more robust use.

The word “robotics” was coined by Isaac Asimov in the story “Liar!” in 1941, while “automation” was first coined by Del Harder, a Ford executive, in 1947. As recounted in [28], in the early history of the field (1954), vehicles remotely controlled by human operators were called master-slave manipulators. In the 1950s, when companies like General Electric (GE) were working to build industrial robots such as “Yes Man” and “Handyman”, the US Army was exploring the ideas of teleoperated rovers (MOBOT) and Project Horizon [29].

In 1978, Sheridan and Verplank [28] discussed the idea of supervisory control. They explained how it differs from teleoperators and manipulators, listing the 10-level scale of autonomy (LoA) that has served as background for further research in this area to this day [?, 30–34]. As more systems gained autonomy, operators' roles were reduced to that of a supervisor or passive monitor of these systems, and human out-of-the-loop performance problems emerged, evidenced by many failure incidents of these systems [35–37]. Norman identified the lack of feedback, poor communication interfaces, and low levels of situation awareness as causes [36]. A new approach to automation was proposed as a solution to these problems: new roles for automation, opting for adaptive automation, incorporating situational awareness in the design process, and revisiting the LoA [38]. Endsley et al. focused their research on improving situational awareness [39, 40]. They worked on bringing the human in the loop [40], moving toward adaptive autonomy, and enhancing the LoA [41]. Adaptive autonomy, proposed back in 1976 by Rouse [42], is an approach in which changes in the LoA are initiated by particular events in the task environment, physiological measures, task load, or changes in operator performance [43]. An early survey [44] on telerobotics, supervisory control, and automation tried to address the problems arising in those fields at the time, provided an application history of such systems in aircraft, nuclear power, and highway efficiency, and brought forth the areas that were still immature as well as their prospects.

Parasuraman et al. [43] took a step back and did extensive research on the issues related to adaptive autonomy and how it should be approached in design. They outlined what, when, and how the adaptation would be invoked. For example, would it be a measurement-based or modeling-based adaptive system? What would be the logic of implementation? The idea was that adaptive automation would aid in solving human out-of-the-loop performance problems and provide a dynamic allocation of tasks between the operator and the system as and when needed. It could increase the operator's performance but thereby increase management load, which would, in turn, affect the operator's situation awareness [38]. As recounted in [14], Endsley and Kaber proposed a revised 10-level taxonomy based on input functions rather than fixed task allocation, organized according to four generic functions: monitoring displays, generating various courses of action or "strategies to meet goals", deciding on a course of action, and then implementing the selected one [45]. A year later, Parasuraman et al. proposed an extension of their previous work [43], suggesting a model similar to Endsley and Kaber's [45] and adopting a four-stage view of information processing: information acquisition, information analysis, decision and action selection, and action implementation. Automation can be applied to each of these stages to different degrees and levels [33].
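As a minimal sketch of this four-stage idea (the stage names follow [33], but the 1-10 numeric scale and the example profile are illustrative assumptions), a system's automation can be represented as an independent level per processing stage:

```python
# Minimal sketch: Parasuraman et al.'s four-stage view, with automation
# applied to each stage at an independent level (1 = fully manual,
# 10 = fully autonomous). The example profile is hypothetical.
from dataclasses import dataclass

@dataclass
class AutomationProfile:
    information_acquisition: int
    information_analysis: int
    decision_selection: int
    action_implementation: int

    def validate(self) -> None:
        for stage, level in vars(self).items():
            assert 1 <= level <= 10, f"{stage} level out of range"

# e.g., a UAV that automates sensing and analysis highly but leaves
# the final decision largely to the operator.
profile = AutomationProfile(information_acquisition=9,
                            information_analysis=8,
                            decision_selection=3,
                            action_implementation=7)
profile.validate()
```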

By the beginning of the 21st century, the Department of Defense (DoD) actively employed UAVs on various "dull, dirty, and dangerous" missions and had long-term innovation programs for deploying these UAVs in various areas. One of the focus areas, along with the development of several other technological requirements for enhancing UAVs' reliability and survivability, was autonomy [46]. Within the next decade, an ongoing research goal of the Air Force Research Laboratory (AFRL) was to demonstrate autonomous capability at level 8 out of 10 on the autonomous capability level (ACL) metrics [?]. The National Institute of Standards and Technology (NIST) recognized the need for standard definitions of autonomy levels based on system specifications and performance measurements. It assembled an ad hoc group to develop a generic framework for unmanned systems' autonomy level specification, called Autonomy Levels for Unmanned Systems (ALFUS) [47]. The results of their various workshops include:
(ALFUS) [47]. The results of their various workshops include:

• a complete list of terminologies and definitions [16],

• a set of metrics for ALFUS detailed model identifying “mission complexity,

environmental difficulty, and HRI” as the combination of factors indicating

LoA [48–50],

• an executive model showing general trends in the transitions of levels of factors

mentioned above [51], and

10
• illustrating applications of ALFUS in the military, homeland security, and man-

ufacturing [52].

Meanwhile, research communities were moving toward "adjustable autonomy", where the human user, the autonomous system, or another system can adjust the LoA during operation. After the comeback from a severe "AI winter" [53], AI created a boom in intelligent and smart systems such as smartphones, smart home appliances, and assistive technologies. In the AI research domain, these intelligent autonomous systems are referred to as "agents". Research on adaptive and advanced interfaces facilitated the easy deployment and operation of adjustable autonomous agents, providing multiple channels of communication between the human user and the system, such as gestures, voice, and touch. Social acceptability, trust, reliability, mutual situation awareness, coordination of tasks among users and agents, and transfer-of-control strategies formed the new set of concerns for researchers, along with safety and robustness. Efforts were made to include these variables in the LoA framework [14].

In summary, an autonomous system is any machine that can sense and perceive its environment, analyze the situation, do the necessary computation to formulate different plans of action, and, based on instructions from the user, decide how to act, as shown in Figure 2-1. These decisions depend on the autonomy level, which is determined by environmental difficulty, mission complexity, and human involvement. Industrial and academic research on robotics and automation started in the 1950s. However, progress was slow for more than a decade due to the lack of a proper understanding and implementation of autonomy, of trained operators with good situational awareness in case of failures, and of resources such as high-computing processors, cameras, and sensors. Supervisory control advanced to adaptive and adjustable autonomy. A summary of the various LoA taxonomies that researchers have proposed over the years, listed in Table 2.1, shows the work done to overcome the challenges raised by the design approaches of autonomous systems. Research advancements in AI and HMT also paved the way for more trust in and social acceptance of these systems.

Figure 2-1: Autonomous System Functions

Table 2.1: Level of Automation (LoA) frameworks summary

Level 1 (Low/Remote Control). LoA [28]: no assistance from the system; humans decide. LoA [45]: Manual Control. ACL (AFRL) [?]: remotely guided. ALFUS (NIST) [51], levels 1-3: high-level HRI; low-level tactical behaviour; simple environment.

Level 2. LoA [28]: system offers a set of decision alternatives. LoA [45]: Action Support. ACL: real-time health/diagnosis.

Level 3. LoA [28]: narrows the selection down to a few. LoA [45]: Batch Processing. ACL: adapts to failures and flight conditions.

Level 4. LoA [28]: suggests one alternative. LoA [45]: Shared Control. ACL: onboard route replan. ALFUS, levels 4-6: mid-level HRI; mid-complexity, multi-functional missions; moderate environment.

Level 5. LoA [28]: executes the suggestion if the human approves. LoA [45]: Decision Support. ACL: group coordination.

Level 6. LoA [28]: allows the human a restricted time to veto before automatic execution. LoA [45]: Blended Decision Making. ACL: group tactical replan.

Level 7. LoA [28]: executes automatically, informs the human. LoA [45]: Rigid System. ACL: group tactical goals. ALFUS, levels 7-9: low-level HRI; collaborative, high-complexity missions; difficult environment.

Level 8. LoA [28]: informs the human if asked. LoA [45]: Automated Decision Making. ACL: distributed control.

Level 9. LoA [28]: informs the human if the system decides to. LoA [45]: Supervisory Control. ACL: group strategic goals.

Level 10 (High/Fully Autonomous). LoA [28]: system acts autonomously, ignores the human. LoA [45]: Full Automation. ACL: fully autonomous swarms. ALFUS, level 10: near-zero HRI; high complexity; extreme environment.
2.1.2 Approaches of Autonomy

An important challenge in an autonomous system is implementing the LoA and deciding who would have the control to adjust it, and in what scenarios. Fundamentally, this means deciding what actions need to be taken, when, by whom, and how. Various works have sought to balance the flexibility of control over the autonomy levels such that the HMT outperforms either the human or the machine working alone. Comparative studies have also been done to test the efficiency, robustness, and workload in various modes of an autonomy spectrum [54, 55]. A supervisory approach can be used for robots that need supervision at some point during the task completion process. Goal-driven autonomy could be applied to driverless cars: depending on the traffic situation or roadblocks, a car could alter its route to the destination. It could decide to pick up a fellow rider in a 'Share-a-Ride' business model while on its way to drop off its customer, and update its goal accordingly. The mixed-initiative approach is also one of the promising approaches to autonomy in a semi-autonomous car. The smooth transfer of control between the vehicle and the driver, where each can take control if one finds that the other is not in a position to make a better decision, is a challenge and a topic of further research. For example, if the driver is sleepy or drunk, the car can take control of the driving. Conversely, it would be difficult to rely on optical sensors in weather conditions like heavy rain or a blizzard; in such a scenario, the car would not be in a good state to drive autonomously, and the driver should take control of the vehicle to avoid mishaps. In the same scenario, an autonomous vehicle with sliding-scale autonomy would lower its autonomy and give more control to the user. As the weather cleared, it could take back full control of the vehicle. A systematic literature review on adjustable autonomy [56] listed the approaches to autonomy. An updated list with additional references and a comparative study of the different approaches is provided in Table 2.2.

2.1.2.1 Supervisory Control/Task-based Approach

The user acts as a system supervisor who monitors the activities of the system and has the privilege of dynamically modifying the system's behavior, without taking over complete control of the system, when necessary to avoid task failure [57]. In this approach, both the user and the system may have individual subtasks to perform in the overall mission, and the system passes control when it is done with its subtask. For example, mission and payload management in a UAV requires monitoring sensors and making knowledge-based decisions to meet overall mission requirements [58]. In these situations, the human powers of judgment, experience, and intuition exceed intelligent algorithms, while the system is better at navigation and motion control [59].

2.1.2.2 System-Initiative Approach

Some research has been done in the area where an autonomous robotic team requests help from the human operator. The system only passes control when it is stuck, can no longer perform its assigned task, and needs human intervention to take further action. The efficiency of this type of adjustable autonomy depends on how rapidly and accurately the human operator responds to the situation while engaged in different, unrelated tasks [60]. The Roomba vacuum cleaner serves as an excellent example of such a system, needing human intervention when stuck in a corner [61]. In peer-to-peer human-robot teams, maintaining coordination and learning from "interactions at different levels of granularity" would increase the situational awareness of the team and, in turn, its overall productivity [62].

2.1.2.3 Mixed-Initiative/Teamwork-centered Approach

In this approach, the user and the system smoothly exchange control throughout the mission. The idea of a system in which humans and robots complement each other and collaborate in a safe, productive, and cost-efficient environment is not novel. The goal of NASA's Astronaut-Rover (ASRO) project, first tested in 1999 [?], was to bring humans and planetary rovers together to work seamlessly, communicating throughout the mission, with the rover serving the crew as a scout, technical field assistant, infrastructure assistant, and more [63], with adjustable LoA during system operation [64]. In such systems, the back-and-forth transfer of control between the system and the human should be smooth and quick, with the guarantee that each entity can handle its part competently [65]. This approach can address several challenges, including maintaining consistent and stable operation, user trust, and situation awareness during the transfer of control at different LoA [66, 67]. Research in the area of urban search and rescue missions utilizing mixed-initiative control autonomy shows that a robot was able to make better navigation decisions [68]. This holds for large-scale teams of robots as well, indicating that the theoretical benefits of this approach can be met if the system and the operators have complementary abilities, such that the systems can make progress without waiting for human intervention [69].

Table 2.2: Comparison of different approaches of autonomy

Supervisory Control/Task-based Approach [57], [58], [59]. Authority: user. Pros: easier to model and implement. Cons: user role reduced to a monitor, causing boredom; recognizing and responding to cyberattacks is difficult. Situation awareness (SA): low SA for a new or inattentive monitor/user. Goal achievement: dependent on the user's skill/expertise level.

System-Initiative Approach [60], [62], [59]. Authority: system. Pros: relatively easy to model and implement. Cons: must wait for the user to take actions that need user authorization. SA: user may be distracted by unrelated tasks. Goal achievement: dependent on the user's skill/expertise level and response time.

Mixed-Initiative/Teamwork-centered Approach [?], [65], [66], [67], [69]. Authority: decision making is shared between the user and the system. Pros: shorter reaction time. Cons: difficult to realize a given level of autonomy in terms of task assignment; smooth transfer of control is a challenge. SA: depends on the interface, which should remind the user of the system's state, the time-off period, and the user's expertise. Goal achievement: the user's skill level and the system should complement each other.

Sliding Scale Approach [70], [71], [?], [72], [62]. Authority: the user's control is inversely proportional to the system's. Pros: autonomy levels between the pre-programmed levels can be achieved; increases robustness and adaptability. Cons: swiftly attaining situational awareness is a challenge for the user. SA: depends on the autonomy level the system is working at and how engaged the user is with the system. Goal achievement: combines the fortes of the user and of autonomy; each does what it is good at.

Hierarchical Approach [73], [58], [74]. Authority: the user/the highest level in the hierarchical architecture determines objectives and control criteria. Pros: easier to manage and coordinate multiple systems; when lower-level systems are compromised, higher levels can disable them until issue resolution. Cons: higher levels rely on lower-level outputs; when a higher level is compromised, the whole system goes down. SA: a single operator with multiple autonomous systems faces increased workload and hence decreased situation awareness. Goal achievement: group coordination is necessary for task completion.

Policy-based Approach [75], [67], [76], [77]. Authority: user/policies. Pros: increased trust in the system, as the user can set bounds. Cons: making policies for all situations is a challenge. SA: determined on a policy-by-policy basis. Goal achievement: task completion depends on the possible actions a system can take, as defined by the policy.

Goal-driven Approach [78], [79], [80]. Authority: user/goals. Pros: more self-sufficient. Cons: high degree of difficulty; hackers can manipulate the goal itself. SA: should not require human intervention. Goal achievement: goal-driven AI algorithms should interact with the environment.

Collaborative Approach [81], [82]. Authority: multiple individual systems. Pros: multiple systems work toward completing a higher goal. Cons: coordinating multiple systems without human intervention is a challenge. SA: should not require human intervention. Goal achievement: individual systems must complete their respective goals to achieve the higher goal.
2.1.2.4 Sliding Scale Approach

The intermediate LoA between the discrete modes (teleoperation, safe, shared, and

fully autonomous) can be achieved in sliding scale autonomy. It is more of a continu-

ous mode of autonomy where the system’s autonomy increases with the proportional

decrease of the user’s control of the system. It is achieved by blending human and sys-

tem desired characteristics or variables. The user can guide the actions/operations of

autonomous systems in different modes or dimensions, which provides authority and

flexibility to the user for better management [70, 71]. In these works, variables that

characterize the systems are provided on a sliding scale, which would influence the

autonomy levels. In [?,72], the authors designed a trust scale to adjust the autonomy

level [62]. Another work implements sliding autonomy to develop a coordinated team

of robots to dock both ends of a suspended beam in assembling structures. These

robots interact with a human operator when they get stuck and need help, or to improve efficiency [83].
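To make the notion of blending concrete, one simplified formalization (an illustrative sketch of ours; the symbols are not taken from [70, 71]) expresses the executed command as a convex combination of the human and system commands, with a sliding parameter setting the level of autonomy:

u(t) = \alpha(t)\, u_{\text{auto}}(t) + \bigl(1 - \alpha(t)\bigr)\, u_{\text{human}}(t), \qquad \alpha(t) \in [0, 1],

where \alpha(t) = 0 corresponds to pure teleoperation, \alpha(t) = 1 to full autonomy, and intermediate values realize the continuous levels of autonomy described above.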

2.1.2.5 Hierarchical Approach

It is comparatively easy for two systems, a human and a machine, to cooperate

and coordinate. As the number of systems or agents increases, coordination and

management conflicts arise. In this approach, systems are structured in a hierarchy

through which a global problem can be solved based on the knowledge of lower-level

systems [73]. It helps to localize specific tasks to systems based on goals, control,

duration of execution, the complexity of tasks, and the amount of interaction or su-

pervision needed by the operator, hence defining the autonomy levels. A group of

researchers proposed a “hierarchical control loop” architecture for single user-multiple

UAVs as three loops (“Motion control inner loop”, “navigation”, and “mission man-

agement outer loop”) [58]. Their case studies conclude that an operator can control

an increased number of UAVs if the automation is increased in the “control and nav-
igation loops” with a good user-system collaborative decision making in the mission

management loop. Similarly, Proscevicius suggests a five-level control hierarchy for

autonomous mobile robots to speed up communications and control in the hierarchy

levels [74].

2.1.2.6 Policy-based Approach

Policies are a set of guidelines defined by the designer that an autonomous system

must abide by in any given situation. They are permissions given to the autonomous

system to adjust autonomy in changing operational environments without changing

the code. Such an approach increases the users’ trust in the system as they can set

bounds on the system based on their competency level. Moreover, a policy-based

approach provides re-usability, efficiency, extensibility, context-sensitivity, verifiabil-

ity, protection from malware, poorly-designed or buggy agents, and reasoning about

agent’s behavior [75]. One such work based on policies is the driving mission for the

human-robot team [67] in which one of the policies could be “if the road is slippery,

the human should drive". Another example is Electric Elves, a multi-agent system based on policies of strategic transfer of control, which acts as a personal assistant to a group of researchers for their daily activities such as ordering, scheduling meetings, selecting presenters in the research group, and organizing lunch meetings [76].

Knowledgeable Agent-oriented System (KAoS) is an example of platform-independent

policy services [84] used in areas like modeling human-machine team, military, and

space applications [85]. It enables users to define policies of autonomous systems or

agents that govern the autonomy and adjust them dynamically so that the system

can adapt to the changing situation [77].
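As a toy illustration of how such declarative policies can bound autonomy without code changes, consider the following Python sketch, in which an ordered policy list decides which authority must act. The rule names and conditions are hypothetical and are not drawn from KAoS or [67].

# Minimal policy-evaluation sketch: policies are data, so the bounds on the
# system's autonomy can be changed without modifying the control code.
# Rule names and conditions are illustrative only.
from typing import Callable, Dict, List, Tuple

# A policy pairs a name and a world-state predicate with the required authority.
Policy = Tuple[str, Callable[[Dict], bool], str]

POLICIES: List[Policy] = [
    ("slippery-road", lambda s: s.get("road") == "slippery", "human"),
    ("gps-degraded",  lambda s: s.get("gps_error_m", 0.0) > 10.0, "human"),
    ("nominal",       lambda s: True, "system"),  # default rule
]

def decide_authority(state: Dict) -> str:
    """Return the controlling authority dictated by the first matching policy."""
    for _name, predicate, authority in POLICIES:
        if predicate(state):
            return authority
    return "human"  # fail safe: defer to the operator

print(decide_authority({"road": "slippery"}))   # -> human
print(decide_authority({"gps_error_m": 2.0}))   # -> system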

2.1.2.7 Goal-driven Approach

It is a conceptual model to build an autonomous agent that observes a set of ex-

pectations during the execution of a plan, detects discrepancies if they occur, details

the reasons for failures, and creates new goals to pursue if the execution of the current

plan fails [78]. It incorporates a model for goal reasoning and has been applied in var-

ious domains such as responding to unexpected events in strategy simulations [79],

strategic planning in the StarCraft game, and so on. A group of researchers demon-

strated this conceptual framework through Autonomous Response to Unexpected

Events (ARTUE) in a navy training simulation- Tactical Action Officer (TAO) Sand-

box, and showed that it could perform well in a complex dynamic environment [86].

Preliminary works by Wilson et al. show that goal-driven autonomous underwater

vehicles can successfully detect a potentially hostile surface vehicle when pursuing a

different goal of surveying a bay area [80].
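The goal-driven control flow described above can be summarized as a monitor-explain-replan loop. The following Python sketch paraphrases these conceptual steps with illustrative stubs; it is not the ARTUE implementation, and all names and behaviors are invented for this example, loosely echoing the bay-survey scenario of [80].

# Skeleton of a goal-driven autonomy loop: monitor expectations, detect
# discrepancies, explain failures, and formulate new goals. All classes and
# behaviors below are illustrative stubs, not a published implementation.
class Agent:
    def plan_for(self, goal):           # trivial one-step "plan"
        return [goal]
    def execute(self, step):            # pretend a hostile contact appears
        return "hostile-contact" if step == "survey-bay" else "ok"
    def expectation(self, step):
        return "ok"
    def explain(self, observed, expected):
        return f"expected {expected!r} but observed {observed!r}"
    def formulate_goal(self, cause):    # respond to the unexpected event
        return "track-hostile" if "hostile" in cause else None

def gda_loop(agent, goals):
    while goals:
        goal = goals.pop(0)
        for step in agent.plan_for(goal):
            observed, expected = agent.execute(step), agent.expectation(step)
            if observed != expected:                        # 1. detect discrepancy
                cause = agent.explain(observed, expected)   # 2. explain failure
                new_goal = agent.formulate_goal(cause)      # 3. formulate new goal
                if new_goal:
                    goals.append(new_goal)                  # 4. pursue the new goal
                break  # abandon the current plan and re-deliberate
        print(f"finished deliberating on goal: {goal}")

gda_loop(Agent(), ["survey-bay"])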

2.1.2.8 Collaborative Approach

In collaborative autonomy, multiple individual agents, each having their own spe-

cific individual task to complete, collaborate to serve towards completing a higher

collective goal. The multiple individual agents form a complex system whose auton-

omy is decided based on the autonomous actions of the collective individual agents

depending on shared information, and goals [81]. A conceptual example of this ap-

proach could be seen in the system design for Mars exploration with a UAV and a

ground vehicle in collaboration. The goal is that the UAV would be able to dock

to a charging station on the ground rover without human intervention. The ground

rover would serve as a mobile base that would provide charging, communication, and

docking capabilities to the UAV [82].

2.1.3 Current Trends

In recent years, the research focus has moved to implementing adjustable autonomy

in the real world and other more challenging areas such as transfer-of-control, goal

reasoning, deliberation, and collaboration with multiple heterogeneous agents and

individuals. Companies like Ford and Nissan were expected to launch self-driving

cars by 2021, while GM’s Cruise and Google’s Waymo were not far behind. However,

even in 2022, we don’t have fully driverless cars on the road, and most companies

are still testing their vehicles in states with little to no snowfall. Although cars

such as Tesla, Navya, and others have advanced autonomous features, there are still

significant challenges to making them completely driverless. Automated path and

motion planning with an efficient computational paradigm and a well-planned exe-

cution model would facilitate a smooth transfer of control between the user and the

system. As discussed in [61], to build robust autonomous systems, a human action

model, user state monitoring, intent recognition techniques, and efficient interfaces to

facilitate collaboration between the user and the system are required. These emerging

fields of AI and HMT are gaining momentum, and works are being done in facial

and emotion recognition [87, 88] that would be helpful in monitoring the user's state, while augmented reality devices and gesture and voice user interfaces would facilitate easy communication.

Autonomous systems operating in complex, dynamic environments need to promptly make informed decisions to react safely and reliably, based on an accurate perception of their surroundings. More than one sen-

sor is employed in these systems to assess the surrounding environment accurately.

The fusion of data from multiple sensors and multiple modalities has become crucial

in the perception process of autonomous systems in various application fields and

can be used in image registration [89] along with the detection and mapping of static

and dynamic obstacles along the trajectory [90]. A recent survey discusses the cur-

rent developments in the areas of perception, planning, coordination, and control of

autonomous systems [91].

The fast pace of research advancements in autonomous systems requires that different aspects of these systems be benchmarked, with a proper metric designated for each of them. This would not only set a standard for the development of autonomous systems with a certain degree of autonomy but would also help identify appropriate systems for particular scenarios that require a certain level of

autonomy [92]. Some earlier works were done to establish a common metric for au-

tonomous systems by government agencies [?,50]. In the field of HRI, common metrics

for autonomous systems concerning five categories of “navigation, perception, man-

agement, manipulation, and social” have been presented in [93]. At the same time,

another work addresses benchmarking of socially assistive robots on aspects of robot

technology, social interaction, and assistive technology [94]. A review of these works

is presented in [95] to identify common metrics and set a benchmark in the field

of HMT. Most recent works reviewed autonomy measures to compare and contrast

autonomy approaches and discussed the capabilities of autonomous systems [96].

2.2 Cybersecurity of Autonomous Systems

Autonomous systems can be broadly seen as a type of cyber-physical system

which has embedded computers and physical elements connected and controlled by

sophisticated software, exhibiting distinct behaviors through multiple means of com-

munication with the outside world. Most of these systems are deployed in critical

areas such as nuclear power plants, automatic pilot avionics, and war zones. Some of

these systems are highly vulnerable to cyberattacks, and hence, the security of these

systems poses significant concerns if they are entrusted with our lives. Complete

Figure 2-2: Taxonomy of Attacks on Autonomous Systems

failure of these systems through cyberattacks, failure to correctly respond to critical

missions, or even a slight change in the desired output data can leave the operator

in a confused or ignorant state. A few research works have been done on the com-

prehensive review of threats, vulnerabilities, attacks, and control on cyber-physical

systems [47, 97–101]. Extrapolated from these works, we present a taxonomy of com-

mon attacks on autonomous systems in Figure 2-2 and a brief description of some of

these attacks and their effects in Table 2.3. A comprehensive list of cyberattacks and

Figure 2-3: Popular Autonomous Systems

their detailed survey on autonomous systems are beyond the scope of this dissertation. In this

section, we discuss some of the highly researched autonomous systems (Figure 2-3)

and work done on their cybersecurity.

2.2.1 UxVs

The increasing popularity of unmanned systems (UAVs, UGVs, UUVs) in various

mission-critical tasks has forced researchers to work on their autonomy-level related to

task complexity, human interaction, environmental difficulty, and so forth [?,52,111].

Commercialization of UAVs, a.k.a drones, is gaining momentum as multiple industries

plan to use them for their business functions. Door-to-door delivery and hauling

cargo as far as 300 miles with a weight of up to 200 pounds is not a distant dream.

Companies like Bell Labs are working on prototypes that can use gas or electric power and transition from a helicopter to a plane mid-flight, addressing some aerodynamic

concerns [112]. These systems are at higher risk of becoming targets of cyberattacks.

One of the earliest works to identify vulnerabilities in a UAV autopilot system was

done in [2]. A group of researchers analyzed system safety [109] and developed a

Table 2.3: Effects of cyberattacks on Autonomous Systems

Jamming
  Description: Caused by intentional interference, e.g., GPS jamming
  Effects on Autonomous Systems: Loss or corruption of packets, disrupting communication
  Works: [102, 103]

Spoofing
  Description: Masquerading as a legitimate source, e.g., GPS spoofing
  Effects on Autonomous Systems: Gain access to the system, information, etc.
  Works: [10, 104, 105]

Flooding
  Description: Flooding of packets, thereby overloading the host, e.g., DoS, DDoS
  Effects on Autonomous Systems: Loss of communication through network congestion
  Works: [102, 106]

Side-channel Attack
  Description: Attack based on the extra information gained by physical analysis
  Effects on Autonomous Systems: Leakage of sensitive information without exploiting any flaw or weakness in the components
  Works: [107, 108]

Stealthy Deception Attack
  Description: Tampering with a system component or data
  Effects on Autonomous Systems: Misleads the system into taking undesirable action
  Works: [109]

Sensor Input Spoofing
  Description: Manipulating the environment to form an implicit control channel
  Effects on Autonomous Systems: Exercises direct control over the system's actions
  Works: [110]

real-time safety assessment algorithm [113] to investigate the performance of such

systems against stealthy deception attacks.

A review of all recent significant attacks on UAVs has been presented in [114].

GPS Jamming and Spoofing are the two most common attacks on the navigation

of UAVs. GPS jamming is a type of interference that specifically restricts GPS

signals. In a GPS spoofing attack, a false GPS signal is transmitted to the GPS

receiver of the autonomous system to introduce an unnoticeable error in the position,

navigation, and time (PNT) calculation which results in the deviation of the system

from its original path towards a malicious destination. US Maritime Administration

reported a GPS spoofing incident in which around 20 ships off the Russian port of

Novorossiysk found themselves in the wrong spot - more than 32 kilometers inland, at

Gelendzhik Airport [115]. While the military GPS signals are encrypted, civilian GPS

signals are publicly known. Hence, the GPS spoofing attack poses a significant threat

to critical infrastructure and public lives if spoofed systems are used maliciously.

Some researchers demonstrated these attacks on a small but sophisticated UAV [105].

Others simulated these attacks on academic testbeds [9, 10, 102].

2.2.2 Driverless Cars

Major car manufacturing companies like Audi, BMW, Ford, GM, Google, and

Uber have envisioned a future of autonomous vehicles on the road. Table 2.4 lists

some of the major autonomous car manufacturers and their pilot projects as of 2019.

In 2021, Waymo tested about 2.3 million miles nationwide while Cruise racked up

the second place with 876,104 miles [116]. It was expected that 2019 might be the year of

the driverless cars as GM prepared to launch its fleet [117] while Waymo was already

on the streets of Phoenix, Arizona, opening initially to early riders [118]. Another

startup, Drive.ai, launched its self-driving shuttle service around a geofenced area

of Frisco, Texas [119]. In the effort to envision a ‘smart city’, Lake Nona, Florida,

would soon see AUTONOM, a self-driving bus deployed by Beep in partnership with

French company Navya [120]. Companies like Nissan planned to put driverless cars

on the street of Tokyo by 2020 [121]. Achieving self-driving is not easy, and most

automakers are behind schedule due to various difficulties. Yet, as of 2022, many new

cars already incorporate state-of-the-art driver-assistance features such as autopilot,

self-parking and summoning, blind spot, and lane-monitoring systems. Future au-

tonomous vehicles need to be connected through sophisticated vehicular networks

such as Vehicular Ad hoc Network (VANET) so that they may exchange traffic and

routine information such as speed, location, or notification of any traffic collision.

It increases the potential for cyberattacks. In 2015, Wired documented a hacking

experiment on Jeep Cherokee where an intruder took the car control from the driver

on the highway. In the beginning, the hackers toyed with air conditioning, radio, and

windshield wipers. It became scary when the accelerator stopped working on the long

overpass with no shoulder to escape. It would have been a lot worse if the intruder

had abruptly engaged the brakes or disabled them altogether [122].

Communication over VANET would help the vehicles make better driving decisions and improve performance. However, a significant amount of confidential data

would also be circulating on the network. A simple attack such as eavesdropping

on a user’s habit of commuting can reveal a lot of valuable information about the

user to the attackers. Also, next-level cyberattacks, such as ransomware attacks,

could be executed in the middle of the commute. A secure network would throttle all

the possible attacks at the point of entry. Many researchers have reviewed the con-

tribution of others in this area by identifying and classifying the vulnerabilities and

security challenges in VANETs [?, 122, 128–135]. As these vehicles will communicate

with nearby devices and infrastructure, it is important to explore the architecture

and main characteristics of these systems and analyze the corresponding countermea-

sures against the possible attacks [136, 137]. Authenticating vehicles entering into

VANET through dual authentication methods or highly secure mechanisms should

be the first line of defense [138]. Authors have dedicated their research to individual

attack detection such as DoS [139, 140], jamming [?, 141] and greedy behavior [142].

Sending false congestion messages to the vehicles by exploiting congestion avoidance

mechanisms [143, 144] in VANET would be an easy way for attackers to re-route and

position them in zones of interest [145].

2.2.3 Robotics

Robots are marking their place in our lives with the significant investments in

robotics technology [146], advancements in cutting-edge technologies like 5G, Aug-

mented Reality (AR), IoT [147], reduced cost of electronic devices, and people’s need

Table 2.4: Driverless Car Technology Trend

Ford (proposed 2005, launch 2021)
  Pilot Project: –
  Features: Level 4 automation, no gas pedal, no steering wheel
  Works: [123]

Google (proposed 2009, launch 2020)
  Pilot Project: Waymo
  Features: Fully self-driving, no steering wheel, no accelerator or brake pedal
  Works: [123]

Tesla (proposed 2014, launch 2015)
  Pilot Project: Model S
  Features: Autopilot, 360-degree view, real-time traffic updates, automatic parking
  Works: [124]

Mercedes-Benz (proposed 2014, launch 2025)
  Pilot Project: Future Truck 2025
  Features: Autonomous driving
  Works: [123]

Volvo (proposed 2015, launch 2021)
  Pilot Project: Drive Me
  Features: Level 4 automation, IntelliSafe Auto Pilot lets the user activate and deactivate autonomous mode
  Works: [125]

Nissan (proposed 2015, launch 2020)
  Pilot Project: ProPILOT
  Features: Automatic lane change on highways, autonomous driving on urban roads and intersections
  Works: [123]

General Motors (proposed 2016, launch 2019)
  Pilot Project: Super Cruise
  Features: Hands-off lane following, brake and speed control
  Works: [117]

Induct Technology (proposed 2014, launch 2016)
  Pilot Project: Navya
  Features: Autonomous shuttle, deployed in specific loops, closed environment & half-mile radius
  Works: [126]

Drive.ai (proposed 2016, launch 2018)
  Pilot Project: –
  Features: Autonomous driving, remote monitoring, LED screens display the car's next action to pedestrians
  Works: [119]

BMW-Intel-Mobileye (proposed 2016, launch 2021)
  Pilot Project: BMW iNEXT
  Features: Develop an open standard platform for highly and fully automated driving
  Works: [127]

for assistance in mundane activities. Being part of an evolving industry, robots are designed based on the environment they would be deployed in and work side-by-side

with humans. Their reach varies from space to manufacturing, from home to war

front. Surgical or industrial robots need to have an extremely high level of accuracy,

while rescue robots should be fast and efficient in locating survivors in inaccessible

areas. Home-assistive robots such as Care-O-Bot should be able to help in household

tasks, be a mobility aid for the elderly and needy, and a medium for communication

and social integration [148].

Several cybersecurity issues and vulnerabilities identified by researchers pose se-

vere threats that malicious users can easily exploit. Industrial and academic re-

searchers have demonstrated attacks on industrial robots that can be hacked and

manipulated to introduce a few millimeters of a defect in a manufactured product

which could result in a catastrophic failure of that system [149]. They analyzed the

standard architecture of an industry robot from a security point of view. They devel-

oped an attack model based on the attacker’s goal, an access level to the system, and

their capabilities [150]. An independent security firm took the initiative to evaluate

currently available robots in the market from different vendors. Their initial search

report reflects several cybersecurity vulnerabilities in the robot technology [151]. Al-

though some of these vulnerabilities are common cybersecurity problems, vendors

should address them from the first phase of the software development

process.

Surgical robots are a new trend in the medical industry, even in developing

countries such as India, which reported 26 da Vinci systems in 2015 [152]. These

robotic surgeries result in small incisions, minimal blood loss, and faster recovery of

patients with less post-operative pain. These robots work very closely with humans,

and it is essential to ensure the security and safety of robots that operate around

people and animals in homes and organizations alike. Various attacks were reported

that exploited potential entry points into hospital networks in the last few

years [153, 154]. Network vulnerabilities could easily be exploited to access surgical

systems, bypassing intrusion detection systems and firewalls [?].

The same goes for household robots on a home network as well. Since such

robots operate near children and adults in the house, it is more likely that the

onboard camera can be exploited to capture inappropriate audio/video streams [155]

by pedophiles and online sexual offenders [156]. Besides privacy issues, the home

robot’s sensors can be used to collect sensitive data, which can launch different types

of cyber or physical environment attacks. For example, a home robot acting as

a scheduler would know when the owners would be away from home. A planned

burglary could be attempted, or a family member could be physically harmed in the

worst-case scenario. A group of researchers investigated possible attacks on these

service robots, analyzed the threat, and listed different available defense mechanisms

against such attacks [108].

2.2.4 Internet of Autonomous Things (IoAT)/ Autonomous

Internet of Things (AIoT)

Two future concepts are more or less intertwined, in researchers’ opinion. One is

the Internet of Autonomous Things (IoAT), where smart autonomous devices would

be connected through the network and would be able to solve the problems or adapt

themselves through information exchange with their peers. The other concept is

an Autonomous Internet of Things (AIoT) connecting smart devices which would

“actively manage data and decisions on behalf of users” [157]. These two concepts

overlap each other in the sense that smart devices would have some autonomous

decision-making and would be connected through a network. The future is not far off when IoT devices will be information generators, and the edge devices will be

Figure 2-4: Statistics of number of devices connected worldwide from 2015
to 2025 (in billions) published in Statista, 2016 [1].
Note: Data is a forecast from 2017-2025.

consumers with cloud-based control. The statistics generated by Statista shown in

Figure 2-4 predict that the number of connected devices will be 75 billion by 2025 [1].

Most of these devices would be autonomous (or semi-autonomous). In [158], Tom

Keeley discussed the market of IoAT and how these devices will be able to solve newer problems

through self-organization and team operation. Markets like personal security, home

automation, and healthcare will lead the way with IoAT. Intelligent actuators will be

the tools, arms, and hands for IoAT devices.

As big data, machine learning, and Blockchain technologies advance alongside the

innovation of IoT devices, it would be sooner than expected that these devices would

achieve autonomy at the level of human actors [159]. Future IoT infrastructures

should be able to support heterogeneous platforms, locations, and environments. A smart and reliable autonomous IoT infrastructure should be easily scalable via decentralized management mechanisms and self-adapt to changes in the environment

context [160]. Moreover, it should support confidentiality and prevent personal in-

formation infringement, allowing users to keep their confidential data "in-house".

For example, AIoT applications would range from smart pantries for automatic inventory tracking to washing machines that would order detergent once the supply is about

to finish [157]. Again, since this area calls for further research, cybersecurity and

data privacy should be one of the primary goals in the architectural design of such a

network. Naturally, the cyberattacks that could be launched on IoT devices [100] would be applicable to AIoT devices as well.

2.2.5 Swarms

Another emerging area of research is swarm robotics. It is directly inspired by

nature where, for example, a swarm of insects or a flock of birds perform tasks beyond

individual capabilities. It has found applications in varied areas [161] such as detecting

survivors in disaster rescue missions, inspection of industrial machinery [162], mapping

agricultural fields [163], and monitoring for undesired environmental events. Swarm

robotics is much researched for military applications as well. Since individual entities

that make the swarm are dispensable and redundant, they can be applied in mining

or places dangerous/inaccessible to humans. Also, swarms of robots can complete a

particular task faster than an individual robot as they are self-organized and work in

parallel with distributed command and control structure of communication.

Currently, researchers mainly focus on modeling methods and algorithms of swarm

robotics of flocking, foraging [164], navigating and searching applications [165, 166].

Some early works considered security challenges in swarm robotics and analyzed pos-

sible threats to the swarms [167, 168]. They also compare the specific characteristics

of the swarms with other similar systems such as multi-robots, multi-agent systems,

mobile sensor networks, and MANET. This area of research is still young, and not much

work has been done on individual attacks on the swarm robotic network.

2.3 Game Theory

We make decisions all the time in our everyday life: as simple as waking up early

or playing chess. Our choices and the decisions of others around us impact our results.

We strive toward optimal decision-making to have optimal results for everyone. When

we formally study the art of decision-making between rational agents in a strategic

environment, it is called Game Theory, and the setting is represented as a game.

Game Theory is a broad discipline applied in Economics, Computer Science, Bi-

ology, Sociology, Control Theory, and many more. Games can be static or dynamic,

cooperative or non-cooperative, zero-sum or non-zero-sum, symmetric or asymmet-

ric. There are a wide variety of games, but all of them have these common elements:

Players, Strategies, Outcomes, and Preferences. It is also important to note that the

outcomes result from everyone’s actions, but the players can have their preferences.

The game does not specify what the players do. It specifies only their options and the

consequence of each of them. A few other features of a game: it can be finite or infinite, have complete or partial information, and run simultaneously or sequentially. The

game’s solution is not to decide who wins the game and who loses. Instead, it is a

way to think about what players might do in a given scenario and what would be the

consequence. Hence, there can be different solutions to the same game.

A game can be represented as a tree that contains all the information. It is called

the extensive form. Each tree node indicates which player is to move and which

moves are available to the player, and the leaf node indicates the outcome of the

player’s moves. Extensive form games are sequential, i.e., each player takes a turn

to play, and the players know about the moves of players who have already played.

The normal form is a game representation in a matrix form. Simultaneous games are

represented in normal form. The rows and columns indicate the player’s strategy,

while the matrix cells show the players’ payoffs or outcomes.

Each player employs different independent strategies to optimize their decision-

making to beat their opponent in a game. Some strategies could lead to better

outcomes no matter how good the other player’s strategy is. Such strategies are

called dominant strategies. If X and Y are two strategies and X dominates over Y, Y

is said to be the dominated strategy. A rational player would never play a dominated

strategy as it leads to a worse outcome and can be eliminated. We iteratively eliminate

strictly dominated strategies to reduce the game until we no longer find one. In a game

with no dominant strategy, each player looks for the strategy that provides the best response, i.e., the maximum payoff, given the other players' strategies. When every player's strategy is a best response to the others', the game reaches an equilibrium state, referred to as the Nash Equilibrium.

Hence, Nash Equilibrium can be defined as the optimal state of the game where no

player has an incentive to deviate from their chosen strategy. A game can have one or more pure-strategy Nash Equilibria, or none at all. A player may decide to randomize between the strategies

with some probability. The probability distribution when no player can improve their

expected payoff by deviating to another mixed strategy is called the mixed strategy

Nash Equilibrium. A pure strategy is a special case of a mixed strategy in which a single strategy is played with probability one.
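To make these definitions concrete, the short Python sketch below enumerates the pure-strategy Nash Equilibria of a bimatrix game by checking mutual best responses and, for the 2x2 case, derives a mixed-strategy equilibrium from the standard indifference conditions. The attacker/defender payoffs shown are arbitrary numbers chosen only for illustration.

import numpy as np

def pure_nash(A, B):
    """Pure-strategy Nash Equilibria of a bimatrix game, where A[i, j] is the
    row player's payoff and B[i, j] is the column player's payoff."""
    return [(i, j)
            for i in range(A.shape[0]) for j in range(A.shape[1])
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

def mixed_nash_2x2(A, B):
    """Mixed equilibrium of a 2x2 game: each player randomizes so that the
    opponent is indifferent between their two strategies. Assumes the
    denominators are nonzero, i.e., a fully mixed equilibrium exists."""
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])  # P(row 0)
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])  # P(col 0)
    return p, q

# Toy attacker/defender game: the attacker (rows) strikes asset 0 or 1, the
# defender (columns) guards asset 0 or 1; an undefended strike pays off 1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # attacker payoffs
B = -A                       # defender payoffs (a non-zero-sum game would differ here)
print(pure_nash(A, B))       # -> [] : no pure-strategy equilibrium
print(mixed_nash_2x2(A, B))  # -> (0.5, 0.5) : both players randomize evenly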

2.4 Chapter Summary

The world is progressing towards smart gadgets, smart homes, smart vehicles, and

smart cities. There have been some real-world attacks exploiting the vulnerabilities

of the current cyber-physical systems and networks. With the advancement in tech-

nologies toward autonomous systems, attacks are bound to increase. It has become

more important than ever that the security challenges and concerns related to

autonomous systems are addressed in the development phase itself.

This chapter started with a glimpse into the historical evolution of autonomy

and the progressive work done in this broad field. Knowing the background of a

complex topic answers questions like 'how and when' and provides a comprehensive

understanding of the subject. The different approaches to implementing autonomy in

intelligent systems could be a good start to thinking about how the system’s autonomy

will work. We also explored some of the trending areas in the development and

enhancement of autonomous systems in the ‘current trends’ section. Our subsequent

discussion was on the cybersecurity of autonomous systems, where we shed some light on the work done in industry and academia on the security of some of the latest autonomous systems. UxVs, driverless cars, and robots are active areas of research.

They are gaining presence in our lives. AIoT/IoAT and Swarms are new research

areas, and not much has been done on their cybersecurity. This chapter also introduces

Game Theory and some of the related concepts.

Chapter 3

Security Modeling of Autonomous

Systems

3.1 Modeling System, Vulnerabilities, Threats, and

Attacks

It is of utmost importance to have a clear picture of what an autonomous system

can do. In what scenarios will it alert the user? How will it transfer control?

Or, can it defend itself under attack? At the same time, the operator should also be

aware of the cues the system is sending. Researchers have tried to model different aspects

of the system using different techniques. From a security point of view, it is essential

to analyze the system model to find the vulnerabilities and threats to each part of the

system, which can be further utilized to create an attack model. These areas have

been studied, and various theoretical and analytical models have been proposed. We

have tried to capture these works concerning UAVs, robots, and driverless cars as

these autonomous systems are widely being researched and incorporated into the real

world.

3.1.1 System Modeling

System modeling describes an abstract view of a system, ignoring its details. It can

be used to illustrate the system’s functions, behavior, or architecture without going

into other details. It reflects how the system reacts to certain events or communicates

with sub-parts and the environment.

3.1.1.1 Theoretical Model

• UAVs: Significant parts of a UAV architecture have been described in [2], which

make up the guidance, navigation, and control systems. Modeling a UAV from

the communication network perspective has been discussed in [169], detailing

communication among various modules. The model described in the paper has

six modules starting from the data acquisition module, which collects data from

the sensors and sends the required information to respective modules, such as

altitude data to Altitude and Heading Reference System (AHRS) module and

camera data to the telemetry module. The navigation module provides Position,

Navigation, and Timing (PNT) information, while the control module sends

speed and orientation control signals to the system’s actuators. These systems

also have a data logging module that logs flight details such as PNT data to

keep track of the missions and for further analysis in case of failure. Another

work proposed a hierarchical model for coordinated control of multiple UAVs

and used differential game theory for collision avoidance and formation control

as a pursuit-evasion game [170].

• Autonomous Vehicles: DARPA Urban Challenge accelerated the research for

full-sized autonomous vehicles. A team from MIT was among those that com-

pleted the race. They discussed their autonomous vehicle architecture in [171]

where the requirement was to perceive and navigate a road network segment

in a GPS-denied and highly dynamic environment. Another team designed the

autonomous vehicle based on the ‘Sense-Plan-Act’ model of an autonomous sys-

tem [172]. Based on these works, Guo et al. modeled a mobile robot system

that consists of a robotic platform and a planner to detect sensor and actuator

attacks. Sensors are the eyes and ears of an autonomous system to the out-

side world, while actuators can be compared to the limbs that execute control

commands issued by the planner [173].

• Robots: [174] was the first work to present a generic model of an autonomous

robot related to security modeling. Recently, some industrial researchers pre-

sented the architecture of an industrial robot for security and threat modeling.

In this architecture, a programmer sends commands to the robot controller over

the network, which in turn sends these signals to the sensors and actuators of

the robot to accomplish the assigned task [150].

3.1.1.2 Analytical Model

Each autonomous system has unique dynamics, and the generalized study of such

systems could be limited to standard parts. Goppert et al. [175] modeled the

dynamics of a UAV system using JSBSim and Scios for the control, guidance, and

navigation system of the UAV. It was used to simulate the response of the UAVs to

several identified cyberattacks, such as fuzzing attacks and digital update rate attacks.

An increased sampling rate would make the system unstable in a digital update rate

attack. Some of the works describe a UAV as a linear time-invariant

dynamic system with zero-mean Gaussian white noise and a constant covariance

matrix [113, 176]. Authors have also used game theory to describe the kinematic

model of a UAV using variables to express the 3-D coordinate frame [103]. Guo et

al. [173] modeled a mobile robot as a nonlinear discrete-time dynamic system where

the robot transitions from one state to another after the robot actuators execute the command
generated by the planner.
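For reference, the linear time-invariant formulation used in these works is commonly written in the standard discrete-time state-space form (the matrices below are generic placeholders; the cited papers instantiate them with vehicle-specific dynamics):

x_{k+1} = A x_k + B u_k + w_k, \qquad y_k = C x_k + v_k,

where x_k is the state (e.g., position and velocity), u_k the control input, y_k the sensor measurement, and w_k \sim \mathcal{N}(0, Q) and v_k \sim \mathcal{N}(0, R) are zero-mean Gaussian white noise terms with constant covariance matrices Q and R.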

3.1.2 Threat Modeling

Threat modeling identifies and understands the threats to a system and then

defines countermeasures to mitigate the threats. Not only does it help to visualize

the system model through potential adversaries’ eyes, but it also helps to evaluate

security risks and countermeasures in the case of possible attacks [177]. Envisioning

potential threats is a daunting task. Modeling strategies like STRIDE (Spoofing,

Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of

Privilege) can help to analyze the data flow through the system [178]. Another threat

modeling approach is Persona Non-Grata, which focuses on attackers, motivations,

and abilities [179].

Analyzing each part of the system for a different aspect of security by follow-

ing the CIA model (Confidentiality, Integrity, Availability) lays the foundation for

researchers to identify certain attacks [169]. As technology advances, so do the at-

tackers’ strategies, and attack vectors [180]. Based on the vulnerabilities of the UAV’s

auto-pilot, threats in different components and their effect on the proper functioning

of the UAV were analyzed by Kim et al. [2]. These were preliminary works in this field

that lacked identification of hardware vulnerabilities or insider threats to the system.

Another group of researchers developed a threat model for a smart device ground

control station (a portable hand-held ground control station for UAVs) that allows

soldiers to pilot UAVs on the battlefield. The critical components addressed in the

model are attack motivation, vulnerabilities in these systems, cybersecurity threats,

and mitigation steps in case of attacks on these devices [181]. They presented a risk

analysis summary of threats and their impact on hardware, software, communication

network, and human errors. A similar discussion based on policies to defend against CIA threats in the context of unmanned autonomous systems is presented in [178]

along with threat modeling and risk analysis using the STRIDE approach. Some

software products like Microsoft Threat Modeling Tool and ThreatModeler automate

threat modeling, with the latter offering more sophisticated features [182].

The study of cybersecurity issues in robotics is also gaining pace in academia

and industries. The earliest identification of direct and indirect threats to robotic systems can be found in [174]. Gage discussed direct threats

on the sensor, actuator, communication network, processing elements, and derived

threats. [108] identified four threat vectors in a mobile service robot: attacks on sensor

data, hardware attacks, software attacks, and attacks on infrastructure. While [183]

discusses cyber threats to robots at “hardware, firmware/OS, and application levels”,

[184] models cybersecurity threats, risks, and safety issues of using robots. They

grouped the threats based on the origin of attack (natural, accidental, or intentional),

target (physical, cyber, or both), impact on robot and external entities, and risk.

Threat scenarios based on the vulnerabilities found in the security analysis of an

industrial robot have been discussed in [150]. An attacker could alter the production

outcome, introduce defects in the products, cause physical damage to the robot, or

cause harm to coworkers. They can also be used as an entry point to extract company

sensitive data or hacked to perform ransomware attacks.

3.1.3 Vulnerability Modeling

The research done by Kim et al. identified control system security and application

logic security as the two vulnerabilities of an autopilot system in UAVs and catego-

rized the identified threats under them, as shown in Figure 3-1 [2]. In control system

vulnerability, the attackers exploit the vulnerabilities in the hardware or software

programs, such as buffer overflow attacks and malware installation. Attacks in which

manipulated input data are fed into the control systems exploit the application logic

vulnerability, such as GPS spoofing and automatic dependent surveillance-broadcast

Figure 3-1: Vulnerabilities of an autopilot system reproduced from [2]

(ADS-B) attacks. The communication mode among UAVs or ground control stations

is over wireless communication networks, which widens the grounds for cyber-physical

attacks ranging from disruption of communication links to capturing and using one

of the UAVs as an adversary. Another work emphasized the vulnerabilities of dif-

ferent layers (physical layer, link layer, network layer) of a communication network

of UAVs [111]. Krishna et al. reviewed the cyber vulnerabilities of UAVs based on

recent actual and simulated attacks [114].

There are more than 100 built-in or installed Electronic Control Units (ECUs)

within a modern car to control and regulate various functions of the vehicle [185].

These ECUs are connected through an in-vehicular network of sensors, processors,

control systems, communication applications, and a wireless gateway for external

communications with other vehicles and infrastructure. Each system in the auto-

mated vehicle has some vulnerabilities. A group of researchers extensively reviewed

the vulnerabilities in autonomous vehicles. They highlighted the vulnerabilities

based on sensors and control modules, behavioral and privacy aspects of humans, and

connection infrastructure [186]. An independent security firm evaluated the robots

currently available in the market from different vendors to show how insecure robot

technology is and reported nearly 50 cybersecurity vulnerabilities [151]. Maggi et

al. [150] were able to identify several weaknesses of an industrial robot. Lack of

mandatory user authentication, unsecured network, and naive cryptography are vul-

nerabilities identified in the computer interface used to interact with the robot. An

attacker could easily bypass or disable user authentication, tamper with existing ac-

counts, or exploit buffer overflow memory errors.

3.1.4 Attack Modeling

With the increase in cyberattacks, it has become the need of the hour for govern-

ments, organizations, and researchers to be ready with planning so that future attacks

can be handled rapidly and efficiently. Attack modeling helps realize the attacks

before they happen and prepares the organization with mitigation steps to take if

an attack occurs. There are various attack modeling techniques to analyze the cy-

berattacks, such as attack graphs or trees, the diamond model, attack vectors, and attack

surfaces [187].

3.1.4.1 Theoretical Model

Guo et al. presented the attacker model for a mobile robot where the attacker can

launch actuator or sensor attacks [173]. Potential cyber attacks on automated vehi-

cles have been discussed in [188] in which the authors have identified attack surfaces

and the possible attacks on automated and connected vehicles. They extend their

research by performing real and effective blinding, jamming, relaying, and spoofing

attacks on the camera and LiDAR sensors of an automated vehicle [104]. One of the

works in this area classifies cyberattacks as passive and active attacks. A passive

attack, such as eavesdropping, extracts information from the system without affecting the system resources. An active attack aims to harm the system in some way, like a DoS attack that compromises the availability of the communication channel [189].

A taxonomy of attacks on autonomous vehicles has been presented in [190], which

categorized the study based on attacker, attack vector, target, motive, and poten-

tial consequences. This taxonomy was modified to reflect the attack taxonomy of a

UAV in [114] with minor modifications such as the addition of a new subcategory of

‘communication stream’ and listing references of actual instances of attacks.

As stated earlier, robots have already entered our lives and are making a place

around humans. These robots assist us daily, providing medical services in hospitals and serving on battlefields, in emergency response, in factories, and at home. Lives could be in danger

if such robots are attacked. Various works have been done to analyze the possible

cybersecurity attacks against them. Bonaci et al. presented an attacker model of a

teleoperated surgical robot [191]. They identified possible attacks and classified them

as intention modification, manipulation, and hijacking attacks. Robots are connected

to a network through an interface for operator interaction, such as a joystick and I/O

diagnostic ports. The attacker model of an industrial robot discussed in Maggie’s

work [150] profiles an attacker based on access level, technical capabilities, access to

equipment, attacker’s budget, and type of attacks that could be performed.

3.1.4.2 Analytical Model

Guo et al. also presented a kinematic model for sensor and actuator attacks where

sensor attacks result in wrong sensor readings that might generate erroneous control

commands, and actuator attacks could directly alter the control commands [173]. A

group of researchers modeled the stealthy deception attack. They analyzed the se-

curity of a cyber-physical system in case of a deception attack on sensors, actuators,

or both, which causes unbounded estimation error without being detected by the mon-

itoring system using steady-state Kalman Filter [109]. They extended their work to

model direct control acquisition and onboard navigation attacks, including individual

and combined stealthy deception attacks on the Inertial Measurement Unit (IMU)

and GPS. They further proposed a real-time safety assessment algorithm to verify the

safety of the UAV subject to cyberattacks based on reachability analysis [113]. An-

other group defines the UAV model under a GPS spoofing attack where falsified data

could be injected into the navigation component either through a GPS signal simula-

tor or by injecting a data-level GPS spoofing attack in a hacked onboard navigation

system [176]. They formulated a real-time manipulation method for the UAV under

a GPS spoofing attack to drive it towards a malicious destination without triggering

the fault detector. They computed the attainable location set of the UAV under such

attacks.
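A generic way to express such attacks analytically (a sketch consistent with the state-space form given in Section 3.1.1.2, not the exact formulation of the works above) is to inject additive attack vectors into the input and measurement channels:

x_{k+1} = A x_k + B\,(u_k + a_k^{u}) + w_k, \qquad y_k = C x_k + a_k^{y} + v_k,

where a_k^{u} and a_k^{y} denote the actuator and sensor attack signals, respectively; a GPS spoofing offset, for instance, enters through a_k^{y}, and a deception attack is stealthy when the injected signals keep the estimator's residuals within the detector's thresholds.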

Many researchers have also used decision-making theories to model attack strate-

gies as games [103, 192–194]. Bhattacharya et al. analyzed the coordination of multiple UAVs in case of a jamming attack on the communication channel by an aerial jammer and modeled this scenario as a zero-sum pursuit-evasion game [103]. A zero-sum

network interdiction game between a vendor of a delivery drone and an attacker was

modeled using prospect theory [192]. Here, the attacker’s objective was to prevent

the goods delivery drone from taking the optimal path from the warehouse to the

customer’s location chosen by the vendor.

3.2 Applications of Game Theory to Cybersecu-

rity

The application of game theory to physical and cyber security is gaining ground.

Indeed, game theory has been applied to network security and to model cyberattacks.

Researchers have proposed the secure design or operation of specific cyber-physical

systems based on game theory and behavioral dynamics. However, few works at-

tempt to address the cybersecurity issue of autonomous systems. This section briefly

introduces some of these recent works. It is divided into three subsections.

3.2.1 Autonomous System Security

One of the early works from 2015 focused on developing a game-based security

framework for multi-agent autonomous systems [195] to formulate a min-max model-

based predictive control (MPC) problem. They solved it using a dynamic signaling

game model. For resilience enhancement, the work focuses only on perception through

the use of a limited-range sensor. The authors used MATLAB-Simulink for the simu-

lations and concentrated only on Person-in-the-middle (PITM) attacks. Another work

combined the robust deep reinforcement learning (DRL) model and game theory to

address the security and safety in autonomous vehicle systems [196]. The scenario

involves an attacker injecting false data into the AV to reduce the distance between AVs to cause a crash or to reduce traffic speed/movement. The possible values for such

a data injection attack are countless. Therefore, the authors propose using an LSTM

block for both attacker and the AV to learn the deviation in space between vehicles

due to their own or other players’ actions. This data is then fed to the DRL model,

which plays the game for each player and attempts to minimize (for AV) or maxi-

mize (for the attacker) such deviations. This work focuses on data injection attacks,

and the authors have shown that the algorithm converges to a mixed strategy Nash

equilibrium point. A similar work from the same authors proposed the use of the

Colonel Blotto (CB) game for secure state estimation between interdependent critical

infrastructure (ICI) such as power, gas, and water supply systems [197]. Although

not fully autonomous, most of these systems are heavily automated and only monitored for

faults and errors. This work proposes a model for ICIs, uses Kalman Filtering for state

estimation and explores maximum state estimation errors due to compromised sen-

sors. Finally, the authors use the pure strategy and mixed strategy non-cooperative

CB game to calculate Nash equilibrium (PSNE/MSNE). Matlab-based simulation re-

sults verify that the derived MSNE is the best strategy, and ICI must be considered

a unified system to protect individual CIs.
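As an illustration of how Kalman-filter-based state estimation supports such defenses, the Python sketch below runs a scalar filter on a random-walk state and raises a residual alarm when the normalized innovation exceeds a three-sigma threshold; all constants, including the injected sensor bias, are arbitrary didactic choices rather than the detector of [197].

import numpy as np

# Scalar random-walk Kalman filter with a residual (innovation) test,
# a crude stand-in for the estimation-based defenses discussed above.
a, q, r = 1.0, 0.01, 0.25          # dynamics, process/measurement noise
x_hat, p = 0.0, 1.0                # initial estimate and covariance
threshold = 3.0                    # flag |innovation| > 3 sigma

rng = np.random.default_rng(0)
x = 0.0
for k in range(50):
    x = a * x + rng.normal(0, np.sqrt(q))   # true state evolves
    y = x + rng.normal(0, np.sqrt(r))       # sensor measurement
    if 20 <= k < 25:
        y += 2.0                            # injected sensor attack (bias)
    # Predict.
    x_hat, p = a * x_hat, a * p * a + q
    # Innovation and its variance.
    nu, s = y - x_hat, p + r
    if abs(nu) / np.sqrt(s) > threshold:
        print(f"k={k}: residual alarm (innovation {nu:+.2f})")
    # Update.
    kgain = p / s
    x_hat, p = x_hat + kgain * nu, (1 - kgain) * p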

3.2.2 Cyber-Physical System Security

Various other works have attempted to apply game theory in cyber-physical sys-

tems such as train control, drone delivery, smart grid, etc. The work on securing

control of communication-based train control systems focuses on jamming attacks.

It leverages a mixed-strategy zero-sum stochastic game to model the jamming at-

tack on the wireless communication sub-system [198]. The authors applied dynamic

programming and used analytical results to find the equilibrium. The scenario de-

picted is similar to [196] in the sense that trains are expected to communicate their headway, i.e., the distance from one another, which may be corrupted by the jamming attack and result in a crash or equipment failure if the trains need to stop at a

short distance. Another 2017 work attempts to solve a zero-sum network interdiction

game targeted at a drone delivery system where the two players are trying to max-

imize/minimize the delivery time of packages [199]. Similar to popular approaches,

this work incorporates a secondary concept of prospect theory in the game that en-

ables the subjective perception of attack success probabilities. The authors prove that

Nash equilibrium can be obtained by solving standard linear programming problems.

The simulation results show that subjective decision-making leads to the victim losing

the game. Another 2017 work focuses on a DDoS attack on the advanced metering

infrastructure (AMI) of a smart grid and employs honeypots to detect and gather

attack information [200]. The authors prove the existence of several Bayesian-Nash

equilibriums for the proposed Bayesian honeypot game and present simulated results

using the OPNET simulator. A notable recent work attempts to propose a generic

game-theoretic approach for modeling cyberattack and defense strategies applicable to

any cyber-system. A non-cooperative zero-sum attacker-defender dynamic game is

presented that allows players to choose between 3 levels of actions (No action, low-

intensity action, high-intensity action) [201]. However, the design of various stages of

this proposed work extracts parameters under the assumption that the game depends only on the network component of the system, ignoring other essential system components.

3.2.3 Network Security

In this age, networking is more than the traditional approach to controlling

network traffic through routers or switches. Now, we have cloud and IoT envi-

ronments, software-defined networks (SDN), wireless sensor networks, wireless ad

hoc networks (e.g., MANET, VANET), etc. The large scope of this domain has

attracted researchers to apply game theory to modeling, design, simulation, and

defense mechanisms to protect these networks against different types of cyberat-

tacks. A wide variety of games have been utilized in these popular works includ-

ing the use of static/dynamic non-cooperative games [202–205], multi-player dy-

namic games [206–208], semi-network form game [209], competitive normal-form

game [210, 211], signal game [212], maximin game [213], non-cooperative Bayesian

game [214], non-cooperative and repeated games of incomplete information [205],

and stochastic game [215]. Several works have also combined other bio-inspired or

AI-based approaches with game theory to enhance the quality of detection/protection

mechanisms, such as the use of holt-winters and genetic algorithm (GA) with fuzzy

logic [216], Backpropagation neural network (BPNN) [217], and as previously men-

tioned, the use of LSTM/DRL [196], and Kalman Filtering & Linear Programming [197].

In one of the research works, the authors proposed a DoS mitigation framework

based on a non-cooperative repeated game called PrioGuard in an SDN [218]. It is

based on the penalty-incentive mechanism to punish the attacker by lowering their pri-

ority and postponing their requests so that other requests can be processed normally.

Recent work proposed a zero-sum security algorithm to solve an attacker/defender

interaction game in a VANET during a denial of service attack. The strategies of the

attacker are to attack or stop attacking, while the defender’s strategy is to sustain,

move away or stop the vehicle. They used NS2 as a network simulator, SUMO for

the mobility model, and MOVE to create a traffic model for their simulation [219].

Another work published in April 2022 (a month before the writing of this disserta-

tion) provides a mathematical representation using game theory for the prevention of

DDoS attacks on drone networks using UAVSim, a UAV simulation testbed built on

OMNET++ [220]. However, this work formulates a zero-sum game for UAVs while

our work focuses on the formulation of a non-zero-sum game that resembles real-world

losses. Also, our proposed payoff function can be applied to any autonomous system

based on the quantification of cost while the work mentioned above focuses strictly on

UAVs. Our work is also more extensive in evaluating attacker and defender strategies

as we have considered multiple parameters that could be changed simultaneously and

considered numerous scenarios.

3.3 Research Challenges

To ensure that these systems are safe and secure to be used by hu-

mans, a new approach towards cybersecurity and autonomy is needed. The research

community in this area lacks algorithmic solutions to address:

• uncertainties in modeling

• security of autonomous systems from malicious attacks

• accomplishing higher goals through cooperation and collaboration

Autonomy is a dynamic property that needs to adapt to varying unknown situations,


depending on the mission complexity. We need a resilient system that performs

well over its lifetime. Rigorous mathematical modeling could provide a basis for a

framework that would help in the early development study of various capabilities,

factors, and trade-offs between human interaction and machine automation [221].

It could further be used in the development of autonomy assessment tools, keeping

factors like security in mind. Though we are moving ahead towards an autonomous

future, there are many research challenges that researchers have to face. We have

tried to summarize a few of them as follows:

• Building Human Trust: After Uber’s self-driving car crash in 2018, a survey

performed by Statista shows that trust in self-driving cars dropped to 27% [222].

This is the most intimidating challenge that autonomous systems developers are

about to face. While these systems promise a more comfortable and efficient

life, safety and security measures need to be taken before deployment among

the public. The world should be ready in legal, social, economic, and ethical

contexts before these systems are incorporated into our lives, as failures of these

systems are inevitable at some point in time, either through known or unknown

causes. The trust in these systems could only be built by thorough analysis

and testing. The service providers and manufacturers of these systems should

be stringent when discussing security and be ready with countermeasures when

compromised.

• Diverse Training Dataset: Autonomous systems need to be trained on a

large, diverse, and complete dataset to be secure and safe. A simulated dataset

is incomplete as it fails to capture critical conditions in the real world. Its rel-

evancy and integrity are questionable as it lacks the human factor. Machine

learning is an area that could be applied to enhance the cybersecurity of au-

tonomous vehicles. As discussed in [223], if a car’s in-vehicular network logs

are monitored and analyzed by a machine learning-based system, it can detect

malicious activity early and alert the driver or take some preventive measures

to save itself from fatal accidents. With the collected data, machine learning

algorithms can be used to detect malware activities, network attacks, or un-

usual commands. They can also be used to establish behavioral profiles of any

potential attacker [224]. It would also improve the effectiveness of the security

algorithms, as the data would be continuously updated and would be unbiased
by any technical or human intentions.

• Data Security: A lot more research needs to be done in the area of security of

communication data and over-the-air updates. Blockchain is an emerging tech-

nology that can be used to provide a more secure and robust solution for these

autonomous systems. Blockchain devised initially for digital currency (Bitcoin)

is a chain of blocks linked through a cryptographic hash from the previous block

with a timestamp and distributed over the network, which makes it resistant to

modification of the data. [225] shows that Blockchain has the necessary capabili-

ties for swarm robotics operations to be more secure, autonomous, and flexible.

This technology would not only provide private and reliable communication

among swarm agents, but it would also overcome the vulnerabilities, potential

threats, and attacks associated with them [226]. The decentralized storage of

Blockchain would guarantee the confidentiality and integrity of the driver’s data

with no downtime in the network connectivity of autonomous vehicles. It would

also ensure the accuracy of data in a V2V or V2I communication [227].

• Computing Power and Network Management: Current autonomous
systems lack the computational capability to perform computation-

ally intensive tasks on large datasets, such as encryption of collected data to be

shared securely over the network. In many cases, the end controller of these sys-

tems would be handheld devices such as mobile phones and controllers that lack

the computational power to run advanced security algorithms. One solution is

to embed security into the hardware design. Another solution is to secure the

network against cyberattacks by network softwarization such as Network Func-

tion Virtualization (NFV) and Software-Defined Networking (SDN) [?]. SDN

provides efficient network management, programmability, and ease of recon-

figuration [228]. It provides a flexible and dynamic environment with a view

of the entire network topology, which helps to block specific attacks such as

DDoS based on the network’s policy [229,230]. A group of researchers proposed

a secure mobility model between UAVs and ground Wireless Sensor Network

(WSN) nodes where communication would be through the SDN controller for

authentication and coordination [231]. SDN provides a virtual, centralized,

software-based control that allows easy integration of security-relevant func-

tions into the SDN controller. Along with the holistic view of security, SDN

provides an improved response to security incidents that would
otherwise take a long time to address in a traditional network [232].

3.4 Chapter Summary

In this chapter, the discussion began with the cybersecurity of autonomous sys-
tems, covering security-related work done in industry as well as academia on some
of the latest autonomous systems, as listed in Table 3.1. This study focuses on

the system, threats, vulnerabilities, and attack modeling of a few widely researched

autonomous systems, which shows how much work has been done in these areas.

Our insight into these discussions is that cybersecurity and modeling of individual

autonomous systems are at a very early stage. As per our findings and to the best

of our knowledge, UAV is the most active field of research concerning system model-

ing and attack scenarios. System modeling of autonomous vehicles started with the

DARPA challenge. While much work on the network security of vehicle-to-vehicle
(V2V) and vehicle-to-infrastructure (V2I) VANET communication has been done, modeling
of the threat and attack vectors of the autonomous vehicle itself needs far more attention.

Identification of vulnerabilities and threats to robot security started very recently,

which can be inferred from Table 3.1 as most of the works on robot security were

published in 2017. There is a vast scope of research and simulation/implementation

in these areas. A generalized study on the cybersecurity of these individual systems

could be an area of research. New modeling techniques like game theory and ma-

chine learning could be applied in these areas as well. Also, the autonomy approach

should be taken into account while modeling threats and attack scenarios for these

autonomous systems.

Based on the study of security modeling of autonomous systems, we further re-

searched the applications of game theory in cybersecurity. Our research shows that

game models have been previously used in network security to model attacks. It has

been used in cyber-physical systems (CPS) to propose secure design or operation in

train control systems, drone delivery systems, and smart grids. Several works on

modeling, design, simulation, and analysis of game theory-based defense mechanisms

have been proposed to protect computer networks, software-defined networks (SDN),

Cloud or IoT environments, and wireless sensor networks against attacks and intru-

sions. This study shows that little work has been done to address the cybersecurity

issue in autonomous systems. Finally, we discussed the future research directions and

challenges in enhancing the automation and security of these systems. Adopting au-

tonomous systems discussed above will usher in a new era of technological advances

and economic growth. Driverless cars are expected to reduce road accidents, fuel

consumption, traffic congestion, and air pollution [233]. Robots would be deployed

in homes and various industrial sectors to provide assistance and efficiency in routine

and high-precision jobs. The question is, how secure and safe are these technologies
to be adopted into society? With every new revolution in transportation, there are

risk factors involved. These systems should not be immaturely deployed to the pub-

lic. Researchers in industry and academia alike have to shoulder the responsibility of

addressing security flaws. The manufacturers have to ensure the safety of their users,

considering even the remote scenarios of mishaps and attacks. Also, the government

has to have proper legal policies and support infrastructure in place [234].

Table 3.1: List of System, Vulnerabilities, Threat, and Attack Modeling
Studies

Chapter 4

Cybersecurity Modeling of

Autonomous System Using Game

Theory

The world is progressing towards an era of AS. Autonomous operations with vo-

luminous data processing, integrated AI, and high definition imaging would develop

new applications for UXVs (Unmanned aerial/underwater/land Vehicles) and robots

that would change the outlook of this booming industry. These AS would increase

efficiency and task productivity with improved safety in work environments. For

example, any accident investigation that could manually take three hours to collect

information could be done in less than an hour using a drone, reducing the traffic

delays and saving time and money [235]. Driverless cars are estimated to save mil-

lions of lives worldwide by avoiding accidents caused by human errors [236]. As the

level of autonomy of these systems moves towards full automation, attack vectors and

their impact would increase as well, which may result in deadly consequences [237].

Attacks with increased complexity have been on the rise in recent days. It is critical

to consider the security of these systems and explore solutions accordingly. Also, the

research community lacks generalized modeling of cyberattacks on AS. One approach

could be to apply game theory in this regard [11].

The main contributions of this chapter are multi-fold. First, we model a general-

ized AS architecture based on standard modules of AS such as driverless cars, robots,

and drones. An attack on an autonomous system can be on any of its modules,

and, based on the defensive measures, the impact would vary accordingly. Second,

we propose a strategic non-cooperative non-zero sum game for modeling attacks on

an AS to numerically compute the mixed strategies that achieve the Nash Equilib-

rium (NE) and the expected payoffs of the players. The AS would act as a defender,

while an adversary could be an individual attacker, a network node, or another AS.

A game-theoretic framework can analyze the system’s response and payoffs for both

the players in an attack situation when specific measures are in action. Third, we

have considered the probability of a successful attack in defense and no defense sce-

narios and the cost of damage in our computation. In addition, we consider the

game as ‘non-zero-sum’, which maps to the real world more realistically than the
works of [?, 197, 220]. Fourth, we extend the work in [238] to an n × n bimatrix

game represented in a normal form. This method is more accessible than the al-

gebraic/differential method to calculate the mixed strategies of n × n games where

n > 2. Although various works have analyzed these systems’ threat and attack mod-

eling individually, the research community lacks a generalized security modeling of

these systems. Also, in Section 3.2 we discuss the various cyber attack-defense games.

Still, to the best of our knowledge, none has proposed a game related to the security

of the autonomous system.

4.1 Autonomous System Architecture

It is essential to understand the high-level architecture of an AS and the functions

of each module [239] before we move to the design and instructions of the game.

Figure 4-1: High-level Autonomous System Architecture.

Berntorp et al. gave a high-level control architecture of autonomous vehicles, which

includes motion planning, vehicle control, and actuator control, along with sensing

and mapping as significant blocks [240]. Petnga et al. discuss a high-level architecture

of a UAV reflecting the interactions between cyber (command, control, communica-

tion) and physical (sensors and actuators) components of these systems [241]. Based

on these studies, we identify three significant modules common to popular AS: i) per-

ception, ii) cognition, and iii) control. Fig. 4-1 shows a high-level architecture of an

AS.

An AS senses the environment through sensors that act as eyes/ears for the AS.

The perception module combines data from various sensors to create a picture of the

environment through a sophisticated algorithm. There are two types of sensors: Exte-

roceptive and Proprioceptive sensors [184]. Exteroceptive sensors, such as LASERs,
LiDAR, and cameras, provide information about the robot's workspace. Proprioceptive sensors

measure parameters for subsystems internal to the AS, such as compass, gyroscope,

and potentiometers. These sensors have private information about the owner or the

machine’s status, posing high security risks if they are compromised. In addition,
the data from multiple sensors are fused, which not only helps in localization and

grid mapping for navigation but also in detecting dynamic objects and recognizing

them, such as pedestrians and traffic signs [242]. An attack on the perception of the

AS would disrupt its understanding of the environment, leading to wrong decision-
making [243].

Cognition is the ability of a system to make complex decisions based on the sys-

tem’s intelligence algorithms on the data it receives from the perception module and

the hardware. An AS with a high level of autonomy would have to make a more com-

plex analysis of the information for mission planning with many unknown factors.

It has to assess the complexity of the given task and the environment, level of au-

tonomy, risks, costs, and the broader mission before making effective decisions [239].

For example, an autonomous vehicle needs to make judgments about the best route,

be aware of its surroundings, and avoid collisions to reach its destination. Also,

the cognition module should perform a threat assessment to ensure the system’s se-

curity and detect any malicious activities. Application layer attacks such as GPS

jamming/spoofing and Sybil attacks may cause the system to make erroneous deci-

sions [244].

Control can be described as the ability of the AS to execute the decisions made

by the cognition module through physical or digital means [239]. In 2015, a remote
attack on the actuators of a Jeep Cherokee was launched that took over the controls

of the steering wheel and brake systems [121]. Guo et al. proposed a mobile robot

intrusion detection system for the detection of sensor and actuator attacks [245].

Hwang et al. modeled the attack and analyzed the security of the system for deception

attack on sensors, actuators, or both [109].

The effects and consequences of cyberattacks on perception, cognition, or/and

control module will vary with the system (driverless car, robot, UxVs where x could
be air, underwater, or ground) and subsystem(s) (e.g., navigation, communication,

network) under attack, the criticality of the mission, and the operating environment.

For instance, an attack on a system designed for operation in a highly critical en-


vironment will have more impact on the surroundings than on the one working in

a relatively less critical environment. In other words, an attack on the navigation

module of a Roomba vacuum cleaner would not yield as much incentive to the attacker
as one on a UAV or a driverless car. However, it may still cause inconvenience to the

owner, like cleaning the same area again and again or going in circles. The degree of

autonomy may also vary based on the severity or motive of the cyber attack. The

attack may even disconnect the user from the system or deny requests for support.

Figure 4-2: Process Flow of a Game

4.2 Strategic Game Model

In this section, we introduce the autonomous system security game model, define

the payoff functions based on the optimal actions for a given set of conditions of

rational players, and then reach a state of equilibrium. Fig. 4-1 shows the game model,
and Fig. 4-2 shows the process flow.

4.2.1 Autonomous System (AS) Security Game Represen-

tation

A non-cooperative game is one in which the players do not coordinate their
strategies; each tries to bring down the other player’s payoff. It is a non-zero-sum
game, as there would always be some loss to the defender. We represent the game
in normal form, and a Nash equilibrium is reached. The security game model is
represented by G = ⟨N, {S^j | j ∈ N}, {U^j | j ∈ N}⟩, where N is the set of players {a, d},
S^j is the strategy space, and U^j is the utility for j ∈ N. In an attack scenario, the
players, their actions, and the payoffs are discussed as follows.

1. Players: There are two players involved in this game: the attacker and the
defender.

Attacker - An attacker could be a malicious individual/party, attack node(s) in the

network, or another AS(s) who would benefit from the maximum damage caused by

the attack to the target AS. There is a possibility that the attacker plans to attack

more than one module simultaneously. The attacker action set would include no

attack, attack one module, or attack multiple modules.

Defender - The other player is called the defender, whose actions would minimize

the vulnerability of the system and take security measures in case of an attack. Such

an entity would include a system administrator, developer, or the autonomous system

itself. Defender action set would consist of no defense, defend one module, or defend

multiple modules.

2. Strategy Space: The strategy space S^t = {S_i^t | t ∈ N, i ∈ 1..z} is the action set of
all the possible strategies of the players, where z is the number of all possible combinations of
attack/defense. For an autonomous system with three major modules, n = 3, there
will be z = (³C₁ + ³C₂ + ³C₃) = 7 action strategies. The possible attack and defense

Table 4.1: Enumeration of Attack/Defense Strategies
Strategies Perception (P) Cognition (Cg) Control (Cn)
S1t (P) 1 0 0
S2t (Cg) 0 1 0
S3t (Cn) 0 0 1
S4t (CgCn) 0 1 1
S5t (PCn) 1 0 1
S6t (PCg) 1 1 0
S7t (PCgCn) 1 1 1

strategies are enumerated in Table 4.1. Each module is represented by 0s and 1s: for the
attacker, 0 means no attack and 1 means the module is under attack. Similarly,

0 for defender means no defense, and 1 means the module under consideration is

being defended. For example, from an attacker’s perspective, S7t indicates all the

three modules are under attack, and from the defender’s perspective, all the three

modules have a defense mechanism.
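To make the enumeration concrete, the small Python sketch below (our own illustration; the module names follow Table 4.1, though the order in which the combinations are generated differs slightly from the table) produces the z = 7 strategy vectors:

```python
from itertools import combinations

MODULES = ["Perception", "Cognition", "Control"]  # the n = 3 major modules

def enumerate_strategies(modules):
    """Yield every non-empty attack/defense combination as a 0/1 vector."""
    n = len(modules)
    for size in range(1, n + 1):                  # 3C1 + 3C2 + 3C3 subsets
        for chosen in combinations(range(n), size):
            yield tuple(1 if k in chosen else 0 for k in range(n))

strategies = list(enumerate_strategies(MODULES))
print(len(strategies))            # 7, matching z for n = 3
for i, s in enumerate(strategies, start=1):
    print(f"S{i}", s)             # e.g., S1 (1, 0, 0): Perception only
```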

4.2.2 Payoff Calculation

In game theory, each strategy results in a payoff to the players. A security
breach can result in the loss of data, communication, or the system itself. The attacker

would incur the cost of attacking. We denote the cost associated with implementing

these attacks as CA . The defender would employ strategies to block or mitigate the

attacks. For example, AS would switch to an Inertial Navigation System (INS) and

other sensors if the navigation system is down. The cost incurred by the defender

for implementing defending measures is denoted by CD . The costs considered here

are the monetary measure of the time, effort, or resources used. W represents the

damage or the impact incurred by the attack.

The impact of a simple attack on a single module of an autonomous system could

be high enough to cause a cascading failure effect, from a few crashes to traffic jams to

Table 4.2: Enumeration of Possible Cases of Attack and Defense

Case   Attack Status   Condition    Probability
0      no attack       -            -
1      successful      m_k ≠ m_l    p_k
2      unsuccessful    m_k ≠ m_l    1 − p_k
3      successful      m_k = m_l    q_k
4      unsuccessful    m_k = m_l    1 − q_k

loss of business and trust of the end-users. Such political, social, and environmental

impacts of the attack are difficult to quantify and are beyond the scope of our work.

For simplicity, we consider the economic value of the damage directly related to the

defender.

Let p_k be the probability of a successful attack when no defense has been applied
on that module, i.e., m_k ≠ m_l where 0 < k ≤ n, 0 < l ≤ n, and m_k, m_l represent
the modules that the attacker decides to attack and the defender decides to defend,
respectively. Let q_k be the probability of a successful attack when the defensive
measures are active, i.e., m_k = m_l. Table 4.2 enumerates all the possible scenarios

of an attack that should be considered when calculating the damage caused by the

attack. Case 1 indicates that the attacked module was not the one that was defended.

This leaves the module vulnerable, so there is a probability pk that the attack was

successful. And probability 1 − pk the attacker was not successful in exploiting the

vulnerability of the module. For case 3, the attacked module had defenses but failed

to counter the attack with a probability of qk .

Let a = 1 represent ‘attack successful’ and a = 0 represent ‘attack unsuccessful’.

Therefore, for each module k, the probability of each case is given by:

$$
b_k =
\begin{cases}
p_k, & \text{if } m_k \neq m_l,\ a = 1 \\
1 - p_k, & \text{if } m_k \neq m_l,\ a = 0 \\
q_k, & \text{if } m_k = m_l,\ a = 1 \\
1 - q_k, & \text{if } m_k = m_l,\ a = 0
\end{cases}
\tag{4.1}
$$
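Equation (4.1) translates directly into a small helper; the sketch below (illustrative Python, with names of our own choosing) returns the probability of each case:

```python
def case_probability(attacked, defended, success, p_k, q_k):
    """Probability b_k of one outcome on module k, per equation (4.1).

    p_k: success probability when the module is undefended (m_k != m_l)
    q_k: success probability when the module is defended   (m_k == m_l)
    """
    if not attacked:
        return 1.0                    # case 0: module not attacked; neutral factor
    base = q_k if defended else p_k   # defended vs. undefended module
    return base if success else 1.0 - base
```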

Let C_k be the cost of damage incurred by a successfully attacked module. When
the attack is unsuccessful (a = 0), C_k = 0 as there is no loss or damage to the
property. Suppose the attacker plans strategy S_4^a = {0, 1, 1} and the defender plans S_2^d =
{0, 1, 0}. Let s be an element of the set of all possible outcomes, S = {(H_0, H_3, H_1), (H_0, H_3, H_2),
(H_0, H_4, H_1), (H_0, H_4, H_2)} if the game of attack and defense is played, where x in H_x

denotes the specific case listed in Table 4.2. The total economic loss for the defender

can be calculated as the summation of all possible outcomes, i.e., the product of the

probabilities of each attacked module, and the total cost of damage [246]:

   
$$
W_i = \sum_{s \in S} \Bigg( \prod_{\substack{k=1 \\ S_i^a(k)=1}}^{n} b_k \Bigg) \cdot \Bigg( \sum_{\substack{k=1 \\ a=1}}^{n} C_k \Bigg)
\tag{4.2}
$$

We have not considered the situation of no attack and no defense, as this will yield
zero payoff to the attacker. If the attacker succeeds in the attack, they will cause
damage W to the defender; the attacker's payoff would be this benefit minus the cost of attack. If
the defender has defending measures, the attack would cost the defender the amount of damage
plus the amount spent on defending the system. The payoffs of both players
corresponding to the possible strategies of the attacker (S_i^a) and the defender (S_j^d)
(refer to Table 4.1) for a non-zero-sum game are given by:

$$
u_{ij} = \big( -C_{A_i} + W_i,\; R_D - W_i - C_{D_j} \big), \quad 1 \le i, j \le z
\tag{4.3}
$$

Table 4.3: Payoff Matrices for the AS Security Game

Attacker/Defender   S_1^d                           S_2^d                           S_3^d
S_1^a               (−C_A1 + W_1,                   (−C_A1 + W_1,                   (−C_A1 + W_1,
                     R_D − W_1 − C_D1)               R_D − W_1 − C_D2)               R_D − W_1 − C_D3)
S_2^a               (−C_A2 + W_2,                   (−C_A2 + W_2,                   (−C_A2 + W_2,
                     R_D − W_2 − C_D1)               R_D − W_2 − C_D2)               R_D − W_2 − C_D3)
S_3^a               (−C_A3 + W_3,                   (−C_A3 + W_3,                   (−C_A3 + W_3,
                     R_D − W_3 − C_D1)               R_D − W_3 − C_D2)               R_D − W_3 − C_D3)

where R_D is the total cost of the modules, C_Ai is the sum of the costs of attack on the
individual modules (∑_{k=1}^{n} C_Ak), and C_Dj is the sum of the costs of defense on the individual
modules (∑_{k=1}^{n} C_Dk). In case the attack is unsuccessful, from equation (4.2), W_i = 0.

Based on equation (4.3), Table 4.3 shows the 3×3 ordered pairs of payoff matrices [A,
D] for a non-cooperative non-zero-sum bimatrix game for an autonomous system in the case
where the attacker attacks only one module at a time.

4.2.3 Nash Equilibrium Calculation

Let X be a set of all mixed strategies of the attacker which is reduced to a vector

x = (x_1, x_2, ..., x_z), satisfying

$$
x_i > 0 \quad \text{and} \quad \sum_{i=1}^{z} x_i = 1
\tag{4.4}
$$

Similarly, let Y represent the set of defender’s mixed strategies. For a bimatrix game

[A, D], where A = [a_ij] and D = [d_ij], if the attacker chooses the mixed strategy x and
the defender chooses y, the expected payoffs of the attacker and the defender would be

$$
A(x, y) = \sum_{i=1}^{z} \sum_{j=1}^{z} x_i y_j a_{ij}, \qquad
D(x, y) = \sum_{i=1}^{z} \sum_{j=1}^{z} x_i y_j d_{ij}
\tag{4.5}
$$

As discussed in [238], if the expected payoff value of the attacker is v(a), we have

$$
x_1 y_1 a_{11} + x_1 y_2 a_{12} + \ldots + x_z y_z a_{zz} = v(a)
$$

or,

$$
x_1 (y_1 a_{11} + y_2 a_{12} + \ldots + y_z a_{1z}) + x_2 (y_1 a_{21} + y_2 a_{22} + \ldots + y_z a_{2z}) + \ldots + x_z (y_1 a_{z1} + y_2 a_{z2} + \ldots + y_z a_{zz}) = v(a)
$$

For the above and equation (4.4) to hold simultaneously, the coefficients of x_i in the
above equation must be ≤ v(a). Since x_i > 0, these coefficients must be equal to v(a)
for equation (4.4) to hold, as shown below:

$$
x_1 v(a) + x_2 v(a) + \ldots + x_z v(a) = v(a)
\implies v(a)(x_1 + x_2 + \ldots + x_z) = v(a)
\implies x_1 + x_2 + \ldots + x_z = 1
$$

Hence,

$$
\begin{aligned}
y_1 a_{11} + y_2 a_{12} + \ldots + y_z a_{1z} &= v(a) \\
y_1 a_{21} + y_2 a_{22} + \ldots + y_z a_{2z} &= v(a) \\
&\;\;\vdots \\
y_1 a_{z1} + y_2 a_{z2} + \ldots + y_z a_{zz} &= v(a)
\end{aligned}
$$

In matrix form, the above can be written as below, where J is the z-vector (1, 1, ..., 1):

$$
A y^T = (v(a), v(a), \ldots, v(a))^T = v(a) \cdot J^T
$$

We will have

$$
y^T = v(a) A^{-1} J^T
\tag{4.6}
$$

Since the sum of the components of y, i.e., yJ^T, must be 1 (or Jy^T = 1), we can write

$$
v(a) J A^{-1} J^T = 1 \implies v(a) = \frac{1}{J A^{-1} J^T}
$$

Therefore, substituting for v(a) in equation (4.6),

$$
y^T = \frac{A^{-1} J^T}{J A^{-1} J^T}
\tag{4.7}
$$

Since A^{-1} = A^*/|A|, y can be written as

$$
y = \left( \frac{A^* J^T}{J A^* J^T} \right)^T
\tag{4.8}
$$

Similarly, if the expected payoff value of the defender is v(d), we can see that

$$
y_1 (x_1 d_{11} + x_2 d_{21} + \ldots + x_z d_{z1}) + y_2 (x_1 d_{12} + x_2 d_{22} + \ldots + x_z d_{z2}) + \ldots + y_z (x_1 d_{1z} + x_2 d_{2z} + \ldots + x_z d_{zz}) = v(d)
$$

And since y_j > 0 and y_1 + y_2 + ... + y_z = 1,

$$
\begin{aligned}
x_1 d_{11} + x_2 d_{21} + \ldots + x_z d_{z1} &= v(d) \\
x_1 d_{12} + x_2 d_{22} + \ldots + x_z d_{z2} &= v(d) \\
&\;\;\vdots \\
x_1 d_{1z} + x_2 d_{2z} + \ldots + x_z d_{zz} &= v(d)
\end{aligned}
$$

In matrix form, this can be written as

$$
x D = (v(d), v(d), \ldots, v(d)) = v(d) J
$$

We will have

$$
x = v(d) J D^{-1}
\tag{4.9}
$$

On solving similarly, we get

$$
x = \frac{J D^*}{J D^* J^T}
\tag{4.10}
$$

Hence, for an n × n bimatrix game, the unique equilibrium strategies for the defender
and the attacker are given by equations (4.8) and (4.10), respectively, and the
expected payoffs of the players can be given by [238]

$$
v(a) = \frac{|A|}{J A^* J^T}, \qquad v(d) = \frac{|D|}{J D^* J^T}
\tag{4.11}
$$

where A^*, D^* are the adjoints of A and D, |A|, |D| are the determinants of A and D,
respectively, and J is a z-vector (1, 1, ..., 1).

Table 4.4: Quantification of actions for the AS Security Game

Perception Cognition Control


(module 1) (module 2) (module 3)
Attack Cost (CAi ) 5 10 15
Defend Cost (CDj ) 6 10 12
Module Cost (Ck ) 10 20 30
Attack success prob, no defense (pk ) 1 1 1
Attack success prob, defended (qk ) 0 0 0

Table 4.5: Payoff Matrices for the AS Security Game

Attacker/ S1d S2d S3d


Defender
S1a -5, 54 5, 40 5, 38
S2a 10, 34 -10, 50 10, 28
S3a 15, 24 15, 20 -15, 48

4.3 Case Study

This section presents a case study to validate the applicability of the game pro-

posed. We consider an autonomous system with three modules and quantify the costs
of the attacker’s and the defender’s actions, as shown in Table 4.4.

Table 4.5 shows the payoff matrix of the game, taking the best-case scenario for the

attacker where all the attacks are successful. Equation (4.3) calculates the payoffs of

the players.
For both the attacker’s and the defender’s strategy S_1^{a/d} = {1, 0, 0}, the possible outcomes are
{(H_3, H_0, H_0), (H_4, H_0, H_0)}. For this particular example, from Table 4.4, the prob-
ability of a successful attack with defense is q_k = 0. Using equation (4.2), W_1 = 0.
The attacker’s payoff will be −5. If the defender’s strategy is S_2^d against the attacker’s
strategy S_1^a, the probability of a successful attack on the now-undefended module is
p_k = 1. The rest of the cells of the bimatrix are calculated likewise.
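As an illustration of how these payoffs are produced, the short Python sketch below (our own, for illustration; it assumes the quantities of Table 4.4 and the single-module strategies used in this case study) reproduces the bimatrix of Table 4.5:

```python
# Quantities from Table 4.4 for (Perception, Cognition, Control)
C_A = [5, 10, 15]        # attack costs C_Ai
C_D = [6, 10, 12]        # defense costs C_Dj
C   = [10, 20, 30]       # module (damage) costs C_k
p   = [1, 1, 1]          # success probability when undefended
q   = [0, 0, 0]          # success probability when defended
R_D = sum(C)             # defender's total module cost, 60

def payoff(i, j):
    """(attacker, defender) payoff when module i is attacked and
    module j is defended, per equations (4.1)-(4.3)."""
    W = (q[i] if i == j else p[i]) * C[i]   # expected damage W_i
    return (-C_A[i] + W, R_D - W - C_D[j])

for i in range(3):
    print([payoff(i, j) for j in range(3)])
# First row: (-5, 54) (5, 40) (5, 38), matching Table 4.5
```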

From the payoff matrix [A, D] (Table 4.5),

$$
A = \begin{pmatrix} -5 & 5 & 5 \\ 10 & -10 & 10 \\ 15 & 15 & -15 \end{pmatrix}
\quad \text{and} \quad
D = \begin{pmatrix} 54 & 40 & 38 \\ 34 & 50 & 28 \\ 24 & 20 & 48 \end{pmatrix}
$$

$$
J D^* = (360, 400, 340), \qquad (A^* J^T)^T = (250, 400, 450),
$$

$$
J A^* J^T = J D^* J^T = 1100, \qquad |A| = 3000, \qquad |D| = 41200
$$

Therefore, the Nash equilibrium of the game is

x = (18/55, 4/11, 17/55), and y = (5/22, 4/11, 9/22).

And, the expected payoffs of the attacker and the defender are v(a) = 30/11 =

2.72, and v(d) = 412/11 = 37.45, respectively.
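These values can be verified numerically; the minimal NumPy sketch below (a verification aid, not part of the original derivation) applies equations (4.8), (4.10), and (4.11) directly to the matrices above:

```python
import numpy as np

def solve_bimatrix(A, D):
    """Completely mixed equilibrium of an n x n bimatrix game [A, D]
    via the adjoint formulas in equations (4.8), (4.10), and (4.11)."""
    J = np.ones(len(A))
    adjA = np.linalg.det(A) * np.linalg.inv(A)   # adjoint A* = |A| A^-1
    adjD = np.linalg.det(D) * np.linalg.inv(D)   # adjoint D*
    y = (adjA @ J) / (J @ adjA @ J)              # defender's mixed strategy
    x = (J @ adjD) / (J @ adjD @ J)              # attacker's mixed strategy
    va = np.linalg.det(A) / (J @ adjA @ J)       # attacker's expected payoff
    vd = np.linalg.det(D) / (J @ adjD @ J)       # defender's expected payoff
    return x, y, va, vd

A = np.array([[-5, 5, 5], [10, -10, 10], [15, 15, -15]], dtype=float)
D = np.array([[54, 40, 38], [34, 50, 28], [24, 20, 48]], dtype=float)
x, y, va, vd = solve_bimatrix(A, D)
print(x)          # [0.327 0.364 0.309] = (18/55, 4/11, 17/55)
print(y)          # [0.227 0.364 0.409] = (5/22, 4/11, 9/22)
print(va, vd)     # 2.727... and 37.454...
```

Running this reproduces the equilibrium strategies and the expected payoffs quoted above.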

Figure 4-3 shows the variation of the expected payoffs of the players with q_k. If the
value of q_k is changed to 0.5 for the Perception module (module #1), this will change
the value of the first cell of the payoff matrix from (−5, 54) to (0, 49). The expected payoff
of the attacker would increase to 3.5, with minimal change for the defender (payoff =
37.64). When the value of q_k for the Control module (module #3) is changed to 0.5,

the attacker’s payoff is 4.28, and the defender’s payoff decreases to 34.85. This shows

that the control module needs to be better defended than the perception or cognition

modules. Subsequently, the defender could analyze the payoffs for various scenarios

and decide to distribute and prioritize the resources among the modules accordingly.

Finally, note that the attack cost and the probability of a successful attack used here are estimated values.

Figure 4-3: Variation in Expected Payoffs of the players with probability of
successful attack (qk ).
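The same check can be repeated for the sensitivity analysis above; a compact self-contained sketch (assuming, as stated, that q_k = 0.5 on the Perception module turns the first cell into (0, 49)):

```python
import numpy as np

def game_value(M):
    """Expected payoff |M| / (J M* J^T) from equation (4.11)."""
    J = np.ones(len(M))
    adj = np.linalg.det(M) * np.linalg.inv(M)    # adjoint M*
    return np.linalg.det(M) / (J @ adj @ J)

A = np.array([[-5, 5, 5], [10, -10, 10], [15, 15, -15]], dtype=float)
D = np.array([[54, 40, 38], [34, 50, 28], [24, 20, 48]], dtype=float)
A[0, 0], D[0, 0] = 0, 49   # q_1 = 0.5 on Perception: cell (-5, 54) -> (0, 49)
print(game_value(A), game_value(D))
# ~3.53 and ~37.6, consistent with the 3.5 and 37.64 quoted above
```

Modifying the corresponding cells for the other modules in the same way reproduces the rest of the curves in Figure 4-3.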

4.4 Chapter Summary

In this chapter, a game-theory-based framework has been proposed to model an

attack on an autonomous system. The proposed framework can be used to analyze

the strategies of the attacker and the defender. We evaluate the cost of damage

or loss of resources based on the probability of a successful attack. We propose a

matrix method for calculating the Nash equilibrium for a n × n bimatrix game. For

simplicity, we analyze a game where both players have three strategies. The game

considers attack/defense on only one module at a time. Future work would include

the analysis of attacks on multiple modules.

71
Chapter 5

Non-Cooperative Game Modeling

Simulation

In this chapter, we evaluate the game model using several simulation experiments

to investigate the influence of employing different strategies on the payoffs of the

attacker and the defender. These strategies represent a variation in specific variables

of various systems in the Veins framework. We compare the equilibria obtained

in different scenarios and show the feasibility of applying our proposed game theory-

based strategic model in a real-world system. We outline the simulation environment,

describe the experiments, and discuss the results of the evaluations.

5.1 Simulation Setup

To validate the application of the proposed game model on an autonomous system,

an attack/defense game was implemented on driverless cars in a VANET network.

An existing simulation environment, ‘Veins’, was used to implement the attack. Veins
is based on OMNeT++, a network simulator, and an Eclipse-based road traffic
simulator called SUMO (Simulation of Urban MObility) that works with OMNeT++. The
simulation runs on an Ubuntu/Linux virtual machine. For this implementation, the Veins

demo simulation scenario (RSUExampleScenario) was used, which simulates a stream of

vehicles originating at one network section. It also shows how the network responds to

an accident. Here is a brief background of VANET communication, Veins Simulator,

and the Gambit tool used for NE identification. The VANET Communication subsec-

tion talks about the communication services, protocols, and terms used in the Veins

Simulator. The Veins Simulator subsection scratches the surface of the simulator to

get a basic idea of how it works and explains the working of the TraCIDemoRSU11p

example application that RSUExampleScenario invokes.

5.1.1 VANET Communication

Vehicular Ad Hoc Networks (VANETs) facilitate communication among driverless

cars and roadside infrastructure for safe driving in various conditions such as high-

speed driving, accident, or severe weather. VANET applications can be categorized

into safety and non-safety applications. Safety applications broadcast a beacon, usu-

ally at a frequency of 3-10 times per second, to vehicles in the direct communication

range of approximately 300 meters. The beacon contains vehicle status informa-

tion such as velocity, location, etc. A safety application beacon sent from an RSU

may warn about icy road conditions, an accident, a stop sign/vehicle ahead, etc.

Non-safety applications are used for commercial and entertainment purposes such as

tolling, navigation, restaurants, gas stations, etc.

Safety and non-safety applications use Dedicated short-range communication (DSRC)

services based on the IEEE 802.11p standard. The transceiver using DSRC services

would be On-board Units (OBUs) mounted on the vehicle or some portable units and

RSUs. An RSU can be a roadside infrastructure or a unit mounted on a station-

ary vehicle [247]. According to the DSRC communication architecture, the PHY and

MAC layers use IEEE 802.11p Wireless Access for Vehicular Environments (WAVE).

Most of the communication happens over the air with no routing in a vehicular net-

73
work. So, IEEE 1609 Working Group (WG) defined a new Layer 3 protocol called the

WAVE Short Message Protocol (WSMP) for Network and Transport Layer along with

the support of Internet Protocol IPv6, Transmission Control Protocol (TCP), and User

Datagram Protocol (UDP). Hence, single-hop messages that don’t require routing use

WSMP, while multiple hop packets use IPv6 + TCP/UDP. The WSMP packets are

referred to as WAVE Short Messages (WSM) [248].

Most vehicle-based applications follow an SAE J2735 standard-based message for-

mat, the most important being Basic Safety Message (BSM). A BSM carries core state

information about the vehicle needed for collision avoidance, such as speed and lo-

cation. There are currently seven 10 MHz channels in a 5.8 - 5.9 GHz band from

Ch 172 to 184 with even numbering. Low bandwidth safety-critical messages are

exchanged on a Control Channel (CCH - Ch 178) that devices will tune to regularly.

The other six channels are Service Channels (SCH) to access any advertised services.

BSMs are sent on Ch 172. Devices can contact each other during a CCHInterval

or hear WAVE Service Advertisement (WSA) announcements. WSA messages carry

information about the services being offered in the nearby area and indicate the SCH

on which each is provided [248].

5.1.2 Veins Simulator

Veins is an open-source framework for network simulations built on OMNeT++

and SUMO. OMNeT++ is a network simulator, while SUMO is a road traffic simula-

tor. The Traffic Control Interface (TraCI) enables the communication between OMNeT++

and SUMO using TCP-based client/server architecture where SUMO acts as a server.

TraCI allows creating and adding a new vehicle to the queue and retrieving informa-

tion about the vehicles to execute the next step. The coordinates obtained by TraCI
are used to update the locations of the vehicles in OMNeT++. This suite provides a

good tool for simulating interactive vehicle simulations.

Figure 5-1: Veins (OMNeT++) Environment

Figure 5-1 shows the snapshot of the Veins simulation environment for an example scenario. A Veins scenario

in OMNeT++ has a world utility module, connection manager, annotation manager,

controller for obstacles, and a TraCI scenario manager. The figure shows five vehicles

represented as nodes and an RSU denoted by a yellow diamond. The circle around

each node and RSU is the range within which they can communicate, called the maximum
interference distance, maxInterfDist.

The Veins demo scenario uses the map of the University of Erlangen-Nuremberg

that simulates 194 vehicles leaving the Computer Science building. The number of

vehicles can be modified in the erlangen.rou.xml file. The vehicles are annotated as

nodes in the OMNeT++ simulation environment. The communication between the ve-

hicles and RSUs is based on the DSRC Communication protocol. The TraCIScenari-

oManager handles the creation and insertion of a vehicle to the queue and the position

update of the vehicle. It keeps track of the number of vehicles in the simulation en-

vironment, network boundaries, and entry and exit of vehicles. TraCIDemoRSU11p

is the application layer module. When the simulation starts, the TraCIDemoRSU11p

module initializes RSUs and nodes. If the service channel is not enabled, all the

data are sent to the control channel (CCH). This is stage zero: initialize stage. In

the next stage (stage 1), the TraCIScenarioManager connects to the TraCIServer and sends
down a BSM once every beaconInterval if beaconing (the sendBeacons parameter) is set to
true. So, for a simulation time of 1000s, the total number of BSMs an RSU sends

is 1000. If any node is within the maxInterfDist, it will receive the beacon from the

RSU and other nodes. The default maxInterfDist is 1000m. For our simulation, we

set it to 250m. The first node starts after a few seconds for proper synchronization.

The handlePositionUpdate() function checks the speed of the vehicle. If the vehicle’s
speed stays below 1 m/s for more than 10 s, it is treated as an accident. The accident for

node[0] happens at 73s of the simulation. According to the map used, the nodes

are on a single lane and about to enter a two-lane road. If the accident occurs 2s

later, the nodes will be on a two-lane road, and the other cars will easily use the

other lane to bypass the accident. Also, the accident location is within the range

of the RSU, which broadcasts the accident information. In that case, an alternative

route is calculated using the Dijkstra shortest path algorithm [249]. Otherwise, the

knowledge base is updated. If channel switching is on, it starts a WSA event (Traffic

Information Service).

The packets generated by the vehicles and RSU(s) (beacons, WSM, WSA) are sent

to other vehicles and RSU as AirFrames. The received AirFrame signal is processed

by a Decider (Decider80211p). The decider checks if the network interface card (nic)

detected the frame. It will be detected only if its power is greater than the minPower

level. It also checks if the signal was received while transmitting. If that’s the case,

the packet will not be accepted and will be considered as a TxRxLostPacket. If the

channel is idle, it locks the frame and decodes it. Otherwise, if the channel is BUSY,

the frame is treated as interference. Once the Decider entirely receives the signal, it

checks if there was any bit error considering the bitrate, bandwidth, minimum signal

interference, and payload bitrate. If the header or the payload of the packet has a

bit error and it could not be decoded even without interference, then it is declared as

NOTDECODED and counted as SNIRLostPackets. If the bit error in the packet

is due to interference, it is considered as COLLISION, which increments the output

variable nCollision. If the packet has no bit errors, it is forwarded to the mac layer

for further processing. These packets are counted by the variable ReceivedBroadcasts

which is the sum of all the three types of packets: beacons, WSM, and WSA.

5.1.3 Gambit

Gambit is a software tool that constructs and analyses finite and non-cooperative

games. It has a graphical user interface (GUI) for building games in extensive or

normal forms. Gambit was implemented initially in BASIC in the mid-1980s and

was later rewritten in C++. Over the years, significant contributions have been

made to the GUI, portability across platforms, and the addition of algorithms for the

identification of equilibrium in two-player games.

Gambit is cross-platform and can run under multiple operating systems: Windows,

Mac, and Unix. It is also available as a Python extension. The current version is

Gambit 16 [250]. The GUI provides a development environment for small to medium

games. It has the feature of calculating the dominant strategy, Nash Equilibrium, and

Quantal Response Equilibria (QRE) of a game. As the size of the game increases,

the computation time required for the analysis also increases rapidly.

The payoffs in the game can be entered as decimal numbers or as rational numbers.

Gambit supports two file formats for representing games. Extensive form games are

represented as a tree and stored as .efg extension, while strategic form games are

displayed as tables and use .nfg extension. Since Gambit 12, the graphical interface

also supports a new file format, “Gambit workbook” with the .gbt extension, which stores

additional information such as the layout of the game tree, colors assigned to the

players, and computed equilibria.
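For reference, a bimatrix game such as the one in Table 4.5 can also be built and solved programmatically through Gambit's Python extension. The sketch below is illustrative and assumes a recent pygambit release (16.1 or later) in which Game.from_arrays and nash.enummixed_solve are available; older versions expose slightly different names:

```python
import numpy as np
import pygambit as gbt

# Attacker (row) and defender (column) payoffs from Table 4.5
A = np.array([[-5, 5, 5], [10, -10, 10], [15, 15, -15]])
D = np.array([[54, 40, 38], [34, 50, 28], [24, 20, 48]])

game = gbt.Game.from_arrays(A, D, title="AS security game")
result = gbt.nash.enummixed_solve(game)   # enumerate all mixed equilibria
for eq in result.equilibria:
    print(eq)   # should recover x = (18/55, 4/11, 17/55), y = (5/22, 4/11, 9/22)
```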

5.2 Game Modeling

Our non-cooperative security game model is represented by
G = ⟨N, {S^j | j ∈ N}, {U^j | j ∈ N}⟩, where N is the set of players, attacker and defender
{a, d}, S^j is the strategy space, and U^j is the utility for j ∈ N. The strategy space
S^t = {S_i^t | t ∈ N, i ∈ 1..z} is the action set of all the possible strategies of the players,
where z is the number of all possible combinations of attack/defense.

While we are motivated to model the game based on major architectural modules

of an AS (perception, cognition, and control), there are a few challenges:

• The attacker/defender has to reason over different attacks and their counter-

measures on each module. For example, communication attacks on cognition

modules can be connectivity-based (cellular, WiFi, Bluetooth) or protocol-based

(Spoofing, Flooding, etc.)

• Each attack can be executed using different mechanisms—for example, MAC

flooding, ping-of-death, ICMP flooding, etc.

• The attacker/defender must consider the various parameters affecting its at-

tack/defensive measures.

We need a more elaborate model to address these challenges, or we can model the game at
a more granular level and then level up. So, for now, we model the game at the lowest

level of granularity. Figure 5-2 shows the process flow of selecting the strategy and

then representing the game through the payoff matrix.

Figure 5-2: Game Modeling

The attacker has to reason about
which module to attack and the type of attack it could execute against the defender
strategy, which can cause maximal damage with minimum cost. The strategy is

selected based on Table 4.1 [251] and the attack model (Figure 2-2) [252]. There are some
parameters associated with any attack or defense measure. The result of the attack
can vary by changing the values of such parameters. The same goes for the defender,
who has to choose the parameters to counter or minimize the damage. The strategy
space (X_m, Y_n for the defender and the attacker, respectively) grows combinatorially
as the number of parameters increases and has a many-to-many relationship. Each
relationship is mapped to a payoff matrix. There will be a total of m·n relationship
mappings resulting in m·n payoff matrices, where m = 2^i − 1, n = 2^j − 1, and i, j are
the number of defender and attacker parameters, respectively. The pure-strategy or
mixed-strategy Nash Equilibrium (NE) from each payoff matrix is the payoff for the final
matrix. The NE from the final payoff matrix gives the best strategy for the attacker
and the defender. The process becomes cumbersome as the variables increase; we
can automate it after selecting the parameters to reduce the manual work.
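The growth is easy to quantify; for the parameters selected in the next subsection (Table 5.2: i = 2 defender parameters, j = 3 attacker parameters), a tiny sketch gives:

```python
i, j = 2, 3            # defender (BR, VD) and attacker (BI, MP, NumRSU) counts
m = 2**i - 1           # defender strategy combinations: 3
n = 2**j - 1           # attacker strategy combinations: 7
print(m, n, m * n)     # 3 7 21 -> the 21 payoff matrices used in Section 5.3
```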

5.2.1 Strategy Selection

In a VANET, an attacker can enter the network through a compromised RSU or

an attacker’s vehicle. The adversary can launch several attacks on VANET to com-

promise the network’s availability, confidentiality, integrity, and resources. The most

common attack that can be launched is a DoS/DDoS attack where the communica-

tion channel is overwhelmed by a large number of genuine or corrupt/fake messages

so that authentic messages do not reach their destinations. Such an attack intends

to disable the whole network by continuously or selectively jamming the important

transmissions with a high-power interference signal. Traffic disruption, data loss, and

high battery consumption will also fulfill the attacker’s goal. The DoS attack would

be on the cognition module of the autonomous vehicle. Since VANET is a real-time

communication system, losing regular transmissions could be potentially disastrous.

Multiple attackers may focus on the same network, with an increased chance of suc-

cess. Worse yet, since there is no acknowledgment in broadcast communications [253],

the device that sent the message would never know that the real message was lost. So,
for our simulation, we chose to implement a DoS/DDoS attack.

Table 5.1: Default Simulation Parameters


Parameters Default Value
Beacon Interval 1s
Bit Rate 6 Mbps
Message Payload 80 bits
RSU Count 1
Vehicle Count 194
Transmission Power 20mW
Min Power level -110 dBm
Accident Start 73s
Accident Duration 50s

Table 5.1 shows the default parameters for RSUExampleScenario in Veins. An

attack can be launched from either an RSU or a vehicle(s) or, in an extreme case,

both. We have to find the parameters in the Veins network that would impact the
transmission of messages.

Table 5.2: Parameter selection

Attack Parameters        Defend Parameters
Beacon Interval (BI)     Bit Rate (BR)
Message Payload (MP)     Vehicle Density (VD)
No. of RSUs (NumRSU)

An attacker can compromise an RSU or set multiple portable RSUs to cause

network congestion, resulting in packet delay and loss from genuine nodes. The

vehicles are introduced in the simulation at an interval of 3s by the TraCI application

and routed by rsu[0]. The attacker can cause a fake accident on a single lane and
cause a traffic jam, or take advantage of an actual accident by modifying RSU

parameters and broadcasting surplus messages, creating network traffic congestion.

We considered the scenario of an actual accident where multiple (malicious) RSUs are placed
at the same location as the real one. We varied different RSU parameters to find the ones that
would favor the attacker, and the nodes' (vehicles') parameters for the defender's strategies,
as shown in Table 5.2.

Figure 5-3: Interference Range for Sending/Receiving Messages

Vehicles receive packets from RSUs and other vehicles within their maxInterfDist range.
These packets are sent as broadcast messages which can potentially be received by

many other vehicles. The number of vehicles receiving the messages would vary

depending on the location of the vehicles in that range. Given the scenario depicted

in the figure 5-3, vehicle A is out of the range of the RSU. So a broadcast sent by

an RSU might be successfully received by all the other four vehicles (B, C, D, E).

Vehicle C can send and receive messages to/from vehicles B, D, E, and RSU. But

for A, only B is within its range. The number of other vehicles within range of the

RSU or a vehicle will constantly change as the vehicles move. A simulation where

the RSU sends only 1 packet might record 3 successful receptions and 1 packet loss.

The standard equation for packet loss rate, Packets Lost/Packets Sent (here,
1/1 = 100%), may not be reasonable. It makes more sense to calculate the rate as
Packets Lost/(Packets Lost + Packets Received) (here, 1/4 = 25%).
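A small helper capturing this choice (illustrative Python; the counters correspond to the Veins statistics described in Section 5.1.2):

```python
def packet_loss_rate(lost, received):
    """Loss rate over what a node could actually have received:
    lost / (lost + received), rather than lost / sent."""
    total = lost + received
    return lost / total if total else 0.0

print(packet_loss_rate(1, 3))   # 0.25, the 25% from the example above
```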

Figure 5-4: Beacon Interval vs Packet Loss

The most straightforward attempt for the attacker to create traffic congestion is to

increase the traffic data exchange between vehicles and the infrastructure. Table 5.3
Table 5.3: Percentage Packet Loss with Beacon Interval
BI Avg Packet Loss Avg Received Broadcasts %PacketLoss
1s 146 4995 2.8
0.1s 124 5132 2.4
0.01s 220 8726 2.5
0.001s 554 48140 1.1
0.0001s 1022 113783 0.9

Figure 5-5: Minimum Speed Distribution with respect to Beacon Interval

and the corresponding plot (Figure 5-4) show the effect of decreasing the interval between
two messages (BI) on the packet loss rate. An incident on the road triggers additional

safety messages (WSM and WSA) from the vehicles and the RSU within a circular

area, resulting in a high amount of messages received, decreasing the packet loss

rate. However, the graph for minimum speed at BI of 1s, Figure 5-6, shows that

the accident happened for node[0], and some of the cars (around 30 cars) had to

stop for a while till the accident duration was over. On plotting the minimum speed

reached for the vehicles with respect to varied BI (Figure 5-5), we see that some of
the vehicles (nodes 21-24) did not stop for other BIs, compared with the plot for BI
1s. This could cause more accidents. Also, the number of stopped/slowed vehicles
increases (speed ≈ 0) past the point when the traffic jam should have been resolved on the route.
Also, plotting the packet loss for individual vehicle nodes (Figure 5-6) shows that
the packet loss for vehicle nodes stuck in the traffic jam increases with a decrease in
beacon interval. This abrupt traffic behavior works in the attacker’s favor.

Figure 5-6: Packet Loss variation analysis with respect to Beacon Interval

Table 5.4: Percentage Packet Loss with MP


MP Avg Packet Loss Avg Received Broadcasts % Packet Loss
80 bits 146 4995 2.8
800 bits 147 4467 3.2
8000 bits 197 5100 3.7
80000 bits 272 5070 5.1

Beacon payload size is the amount of actual information in a beacon, excluding

the headers, which may carry various types of data, including velocity, position,

and hazard information. An increase in beacon size may contribute toward channel
saturation and more processing time for the vehicles [254]. For this reason, we select
BeaconLengthBits, the message payload of a packet, as the next tunable parameter
to see its impact on the packet loss rate. We refer to it as Message Payload (MP) to easily
relate it to the further simulation and results analysis. Keeping all the other parameters
at their default values, the increase in % packet loss with respect to message payload
is small, as shown in Table 5.4 and the corresponding plot (Figure 5-7). The packet loss
distribution for each vehicle node, shown in Figure 5-8, has a similar trend. The only
noticeable difference in % packet loss appears at 80000 bits.

Figure 5-7: Message Payload vs Packet Loss

Table 5.5: Percentage Packet Loss with increase in No. of RSUs


NumRSU Avg Packet Loss Avg Received Broadcasts % Packet Loss
1 146 4995 2.8
2 239 6803 3.4
3 42929 27880 60.6
4 80418 25437 76.0
5 105719 23703 81.7

Figure 5-8: Packet Loss Distribution with respect to MP

An artist in Berlin tricked Google Maps into creating a virtual traffic jam by

using a handcart full of smartphones [255]. On the same notion, we assumed that the

attacker could put portable RSUs at the same location to hide the RSU location data

from the vehicles. Different RSU location data within a certain range can be used to

alert the vehicles of fake messages. There is an increase in the % packet loss with an
increase in the number of RSUs, as shown in Table 5.5 and the corresponding plot (Figure 5-9),
keeping all the other parameters at their default values. The DSRC RSU capital cost per unit

as calculated by the US Department of Transportation (DoT) is $4150, including the

cost of the device ($1300), installation ($850), and configuration ($2000). A portable

RSU would cost less as it won’t need any installation. If the budget of the attacker

permits, they would prefer to have a higher number of RSUs for a successful attack.

The next parameter, Bit Rate (BR), was challenging. Veins follows the IEEE
802.11p standard, which supports transmission rates from 3 to 27 Mbps (data rate) over a
bandwidth of 10 MHz.

Figure 5-9: No. of RSU vs Packet Loss

Depending on the modulation scheme (BPSK, QPSK, or QAM),
the valid data rates supported by Veins are 3 Mbps, 4.5 Mbps, 6 Mbps, 9 Mbps, 12 Mbps,

18Mbps, 24Mbps, and 27Mbps. The advanced WiFi protocols offer a data rate much

greater than 27Mbps. Assuming an adaptive bit rate, we evaluated packet loss by

keeping the bit rate of the RSUs at 3 Mbps while varying the bit rate of the vehicles
(6 Mbps, 9 Mbps, 12 Mbps, 18 Mbps, 24 Mbps, and 27 Mbps). The plot in Figure 5-11 shows an
increase in packet loss rate with an increase in bit rate. According to Figure 5-12,
for a low count of RSUs the % packet loss increases as the bit rate of the vehicles increases.
But, as the number of RSUs increases, the % packet loss increases even for

the low bit rate. One more thing to notice here is that % packet loss converges at a

bit rate of 24Mbps and then follows a similar trend for any number of RSUs. It will

be the best advantage for the defender if the bit rate of the vehicles is low. A low bit

rate will favor the attacker if they have a higher number of RSUs.

In a real-world scenario, an attacker would choose a peak time to attack to cause

maximum harm. When the traffic is reduced or diverted to a different route, it can
reduce the attack’s impact.

Figure 5-10: Packet Loss Distribution with respect to NumRSU

Figure 5-11: Bit Rate vs Packet Loss

Figure 5-12: Packet Loss with respect to Bit Rate and Number of RSUs

Assuming that the vehicles identified the malicious RSUs

and alerted the other vehicles to change route could also help reduce the damage

caused by the DDoS attack. So vehicle density is selected as the next parameter.

We varied transmission power and min power level without any significant differ-

ence in the packet loss at default settings. We selected beacon interval (BI), message

payload (MP), and number of RSUs (NumRSU) as attacker parameters that the at-

tacker can tune. In contrast, bit rate (BR) and vehicle density (VD) were selected as

defender parameters that could be controlled by the vehicles or the VANET network

as the system administrator.

5.2.2 Payoff Quantification and Calculation

In game theory, each strategy results in a payoff to the players. According to our

previous work [251], Table 4.2 enumerates the possible scenarios of a successful
attack and the probabilities associated with each scenario, where m_k, m_l represent
the modules the attacker decides to attack and the defender decides to defend, re-
spectively. The equations from Chapter 4 are restated here for the ease of the
readers.

Let a = 1 represent ‘attack successful’ and a = 0 represent ‘attack unsuccessful’.
Therefore, for each module k, the probability of each case is given by:

$$
b_k =
\begin{cases}
p_k, & \text{if } m_k \neq m_l,\ a = 1 \\
1 - p_k, & \text{if } m_k \neq m_l,\ a = 0 \\
q_k, & \text{if } m_k = m_l,\ a = 1 \\
1 - q_k, & \text{if } m_k = m_l,\ a = 0
\end{cases}
\tag{5.1}
$$

Let Ck be the cost of damage incurred by the successfully attacked module. When

the attack was unsuccessful (a=0), Ck = 0 as there is no property loss or damage.

Suppose the attacker plans strategy S_4^a = {0, 1, 1} and the defender plans S_2^d = {0, 1, 0}. Let
s be an element of the set of all possible outcomes,
S = {(H_0, H_3, H_1), (H_0, H_3, H_2), (H_0, H_4, H_1), (H_0, H_4, H_2)}
if the game of attack and defense is played, where H_x indicates the cases from Table

4.2. The total economic loss for the defender can be calculated as the summation of

all possible outcomes, the product of the probabilities of each attacked module, and

the total cost of damage [246]:

   
$$
W_i = \sum_{s \in S} \Bigg( \prod_{\substack{k=1 \\ S_i^a(k)=1}}^{n} b_k \Bigg) \cdot \Bigg( \sum_{\substack{k=1 \\ a=1}}^{n} C_k \Bigg)
\tag{5.2}
$$

The payoffs of both the players corresponding to the possible strategies of the

attacker (S_i^a) and the defender (S_j^d) (refer to Table 4.1) for a non-zero-sum game are
given by:

$$
u_{ij} = \big( -C_{A_i} + W_i,\; R_D - W_i - C_{D_j} \big), \quad 1 \le i, j \le z
\tag{5.3}
$$

where R_D is the total cost of the modules, C_Ai is the sum of the costs of attack on the
individual modules (∑_{k=1}^{n} C_Ak), and C_Dj is the sum of the costs of defense on the individual
modules (∑_{k=1}^{n} C_Dk). In case the attack is unsuccessful, from equation (5.2), W_i = 0.

We found, by playing with different parameters, that some suit the attacker's strategy
and some the defender's, as shown in Table 5.2. According to equation (5.3), the
game’s payoff (attacker’s payoff, defender’s payoff) or utility function is formulated as

u = ((−Attacker’s Cost + Defender’s Loss), (Defender’s Cost − Defender’s Loss))   (5.4)

VANET performance is evaluated on throughput, end-to-end delay, and packet
delivery/loss ratio. Since each metric involves packets, we quantify the cost of the
players based on the number of packets sent, received, or lost in the network. Since
we are dealing with only the cognition module (S_2^a, S_2^d = {0, 1, 0}), it is safe to assume
that the module being attacked is the one being defended and that the attacks are always
successful, i.e., b_k = q_k = 1; hence, the outcome set is S = {(H_0, H_3, H_0)}.


In terms of packets, we can write the function as,

u = ((-Avg. packets sent by RSU(s)+ Vehicle’s Avg. Packet Loss),

(Avg. packets a vehicle should receive - Vehicle’s Avg. Packet Loss))

Packet Loss = TxRxLostPackets + SNIRLostPackets + no of collisions + TotalInterference.

When there is no attack, all the terms in the above equation except SNIRLostPackets are 0.

Avg. packets sent by RSU(s) = Total packets sent by RSU(s) / No. of RSUs

Vehicle’s Avg. Packet Loss = Vehicles’ Total Packet Loss / No. of vehicles

Avg. packets a vehicle should receive = Vehicles’ (Total Received Broadcasts + Total Packet Loss) / No. of vehicles

Defender’s Payoff = Avg. packets a vehicle should receive − Vehicle’s Avg. Packet Loss
                  = Vehicles’ Avg. Received Broadcasts

91
Therefore, the payoff function is

u = ((−Avg. packets sent by RSU(s) + Vehicle’s Avg. Packet Loss), (Vehicles’ Avg. Received Broadcasts))
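Putting these pieces together, the utility can be computed directly from the per-run counters (a sketch with variable names of our own choosing, mapped onto the statistics named above):

```python
def vanet_payoffs(total_sent_by_rsus, num_rsus,
                  total_packet_loss, total_received, num_vehicles):
    """(attacker, defender) payoff pair from per-run Veins counters.

    total_packet_loss is the sum of TxRxLostPackets, SNIRLostPackets,
    nCollision, and TotalInterference over all vehicles.
    """
    avg_sent_by_rsu = total_sent_by_rsus / num_rsus
    avg_loss = total_packet_loss / num_vehicles
    avg_received = total_received / num_vehicles     # defender's payoff
    return (-avg_sent_by_rsu + avg_loss, avg_received)
```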

5.3 Simulation Results

Based on the Parameter Selection Table 5.2, there are seven parameter combina-

tions or strategies for the attacker and three strategies for the defender, which have

a many-to-many relationship, as shown in Table 5.6. There are a total of 21 payoff
matrices, one mapped to each relationship.

Table 5.6: Parameter Combinations and Payoff Matrices Size


Attack/Defend BR VD BR+VD
BI 3x6 3x5 3x(6x5)
MP 3x6 3x5 3x(6x5)
NumRSU 4x6 4x5 4x(6x5)
MP+BI (3x3)x6 (3x3)x5 (3x3)x(6x5)
NumRSU+BI (4x3)x6 (4x3)x5 (4x3)x(6x5)
NumRSU+MP (4x3)x6 (4x3)x5 (4x3)x(6x5)
NumRSU+MP+BI (4x3x3)x6 (4x3x3)x5 (4x3x3)x(6x5)

Table 5.7: Parameter Values


Parameters Selected Values
BI(s) 0.1, 0.01, 0.001
MP(bit) 800, 8000, 80000
NumRSU 2, 3, 4, 5
BR (Mbps) 6, 9, 12, 18, 24, 27
VD 20, 40, 60, 80, 100

Table 5.6 also indicates the size of each matrix and the structure of the final

payoff matrix. Table 5.7 shows the range of values that we selected for varying each

parameter. So, for BI vs. BR, we calculate payoffs for three values of BI (0.1s, 0.01s,

0.001s) against six BR (6, 9, 12, 18, 24, 27). The ‘+’ sign between the parameters

means that we vary both operands. The size of the payoff matrix increases as we
increase the number of varying parameters. Since the matrices are numerous and large, we used Gambit, an open-source graphical tool, for the game-theoretic computation. Hence, it is not feasible to include all 21 payoff matrices with all their payoff values in the dissertation. The larger tables are represented in compact form by applying Iterative Elimination of Strictly Dominated Strategies (IESDS).
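For readers who wish to reproduce this reduction, the sketch below is a minimal NumPy implementation of iterative elimination of strictly dominated pure strategies for a bimatrix game. It is illustrative only and is not Gambit's implementation; A and B are assumed to hold the attacker's and defender's payoffs, respectively.

```python
import numpy as np

def iesds(A, B):
    """Iteratively eliminate strictly dominated pure strategies.

    A, B: attacker/defender payoff matrices of the same shape.
    Returns the indices of the surviving rows and columns."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        # A row is strictly dominated if another surviving row beats it
        # in every surviving column.
        for r in rows[:]:
            if any(np.all(A[r2, cols] > A[r, cols]) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Symmetrically for the defender's columns.
        for c in cols[:]:
            if any(np.all(B[rows, c2] > B[rows, c]) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols
```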

This section is organized by the number of attacker parameters ‘A’ varied against the number of defender parameters ‘D’, as listed below. Each scenario is labeled with ‘A’s and ‘D’s according to the number of parameters varied by the respective players. The number of scenarios will differ depending on the number of strategies considered for attack analysis in the real world.

• Scenario AD: These include those scenarios where both the players can manip-

ulate 1 parameter as their strategy.

– Beacon Interval vs Bit Rate

– Beacon Interval vs Vehicle Density

– Message Payload vs Bit Rate

– Message Payload vs Vehicle Density

– Number of RSUs vs Bit Rate

– Number of RSUs vs Vehicle Density

• Scenario AAD: These include the scenarios where the attacker can manipulate 2 parameters as their strategy while the defender can defend with only 1 parameter.

– Message Payload + Beacon Interval vs Bit Rate

– Message Payload + Beacon Interval vs Vehicle Density

– NumRSU + Beacon Interval vs Bit Rate

– NumRSU + Beacon Interval vs Vehicle Density


– NumRSU+Message Payload vs Bit Rate

– NumRSU+Message Payload vs Vehicle Density

• Scenario AAAD: These include the scenarios where the attacker can manipulate

3 parameters while the defender can defend with only 1 strategy.

– NumRSU+Message Payload + Beacon Interval vs Bit Rate

– NumRSU+Message Payload + Beacon Interval vs Vehicle Density

• Scenario ADD: These include the scenarios where the attacker can manipulate

1 parameter while the defender can defend with 2 strategies.

– Beacon Interval vs Bit Rate + Vehicle Density

– Message Payload vs Bit Rate + Vehicle Density

– NumRSU vs Bit Rate + Vehicle Density

• Scenario AADD: These include the scenarios where both the players can ma-

nipulate 2 parameters as their strategy.

– Message Payload + Beacon Interval vs Bit Rate + Vehicle Density

– NumRSU + Beacon Interval vs Bit Rate + Vehicle Density

– NumRSU+Message Payload vs Bit Rate + Vehicle Density

• Scenario AAADD: These include the scenarios where the attacker can manipu-

late 3 parameters while the defender can defend by varying 2 parameters.

– NumRSU+Message Payload + Beacon Interval vs Bit Rate + Vehicle Den-

sity

5.3.1 Scenario AD

5.3.1.1 Beacon Interval vs Bit Rate

Table 5.8 shows the payoff matrix between the beacon interval and the bit rate. As the beacon interval decreases, the attacker's payoffs become relatively low, so the attacker would never choose those strategies. As the bit rate increases, the defender's payoff decreases. The optimal strategy pair is therefore (BI 0.1s, BR 12Mbps), which is the Nash Equilibrium of this matrix.

Table 5.8: Payoff Matrix: Beacon Interval vs Bit Rate


BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1s -5821,5302 -5880,5130 -5634,6514 -4905,4271 -3452,3165 -1901,3642
0.01s -59805,9274 -59757,9938 -59545,9723 -58804,8048 -56271,7694 -57189,6260
0.001s -599139,47632 -599253,40824 -598953,47326 -598334,46391 -597006,44556 -595659,44533

Using the Gambit tool to calculate the Nash equilibrium, the highlighted cell in Table 5.8 indicates the pure Nash Equilibrium. For a single RSU with a bit rate of 3Mbps, the attacker would prefer to set a beacon interval of 0.1s. It would be in the best interest of the vehicles to adapt the bit rate to 12Mbps to receive the maximum number of packets. The Nash Equilibrium and the players' payoffs are shown in Figure 5-13.
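The equilibria reported by Gambit can be cross-checked with a simple best-response test. The following sketch is a stand-alone illustration (not Gambit's algorithm), applied here to the payoffs of Table 5.8; row and column indices correspond to the BI and BR values in the table.

```python
import numpy as np

def pure_nash(A, B):
    """All pure-strategy Nash equilibria of a bimatrix game: (r, c) is an
    equilibrium when A[r, c] is maximal in column c (the attacker cannot
    gain by deviating) and B[r, c] is maximal in row r (the defender
    cannot gain by deviating)."""
    return [(r, c)
            for r in range(A.shape[0]) for c in range(A.shape[1])
            if A[r, c] >= A[:, c].max() and B[r, c] >= B[r, :].max()]

# Attacker (A) and defender (B) payoffs from Table 5.8.
A = np.array([[-5821, -5880, -5634, -4905, -3452, -1901],
              [-59805, -59757, -59545, -58804, -56271, -57189],
              [-599139, -599253, -598953, -598334, -597006, -595659]])
B = np.array([[5302, 5130, 6514, 4271, 3165, 3642],
              [9274, 9938, 9723, 8048, 7694, 6260],
              [47632, 40824, 47326, 46391, 44556, 44533]])
print(pure_nash(A, B))  # [(0, 2)], i.e., (BI 0.1s, BR 12Mbps)
```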

Table 5.9: Payoff Matrix: Beacon Interval vs Vehicle Density


BI VD 20 VD 40 VD 60 VD 80 VD 100
0.1s -5949,4351 -5871,5176 -5871,5212 -5820,5293 -5805,5290
0.01s -59750,11681 -59695,9902 -59746,8477 -59716,8687 -59617,9480
0.001s -599439,85752 -599445,53830 -599238,50959 -599301,48770 -599322,47115

5.3.1.2 Beacon Interval vs Vehicle Density

Table 5.9 shows the payoff matrix between beacon interval and vehicle density.

Again, the payoffs of the attacker for BI 0.01s and 0.001s are relatively low with
Figure 5-13: Beacon Interval vs Bit Rate

respect to BI 0.1s. This implies that the attacker would never prefer those strategies, so rows BI 0.01s and BI 0.001s are eliminated. For the defender, the payoff at vehicle density 80 is the highest. Hence, the optimal strategy for the players is (BI 0.1s, VD 80), as shown in Figure 5-14.

5.3.1.3 Message Payload vs Bit Rate

Table 5.10 shows the payoff matrix between message payload and bit rate. A

higher bit rate yields a higher payoff for the attacker but reduces the payoffs of the

defender. Ideally, an attacker would prefer to play (MP 800bit, BR 27Mbps), but the

defender has the lowest payoffs for BR 27Mbps. Columns BR 9, BR 18, BR 24, and BR 27 will be eliminated as they are strictly dominated by BR 12. For the attacker,

Figure 5-14: Beacon Interval vs Vehicle Density

MP 80000bit dominates the other strategies. The best response (NE) for the players is then the strategy profile (MP 80000bit, BR 6Mbps).

Table 5.10: Payoff Matrix: Message Payload vs Bit Rate


MP BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
800 -462,4993 -469,4468 -294,6290 465,3965 1917,2847 3156,2884
8000 -462,4993 -469,4468 -294,6290 471,3961 1917,2847 2310,2498
80000 -388,4944 -418,4566 -228,4888 585,3969 1968,2764 2115,2189

5.3.1.4 Message Payload vs Vehicle Density

We see a similar trend in the payoffs for message payload with respect to vehicle density (Table 5.11). Following the process of elimination of strictly dominated strategies, we get the NE at (MP 80000bit, VD 40).


Table 5.11: Payoff Matrix: Message Payload vs Vehicle Density
MP VD 20 VD 40 VD 60 VD 80 VD 100
800 -541,3614 -501,4644 -429,5456 -454,4476 -462,4993
8000 -588,3579 -476,4757 -486,4270 -441,5296 -466,4469
80000 -521,4033 -419,5800 -421,5455 -385,5115 -382,5056

5.3.1.5 Number of RSUs vs Bit Rate

It wouldn’t be worth it for the attacker to deploy more than one spoofed RSU,

keeping all other parameters unaltered, especially for the lower bit rate. As seen

from Table 5.12, the defender would never prefer a higher bit rate. Eliminating the

dominated strategies results in (2 RSUs, BR 9Mbps) as the NE of the game, and

accordingly, the expected payoff of the players is (-2159, 5953).

Table 5.12: Payoff Matrix: Number of RSUs vs Bit Rate


NumRSU BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
2 -2332,5203 -2159,5953 -2035,4984 -606,4669 1488,3748 2358,2598
3 -151277,20298 -151072,19904 -152438,18276 -152096,17193 1926,2898 2364,2624
4 -128667,17944 -128905,17110 -130602,16983 -121418,18642 1971,2981 3534,3390
5 -98090,17126 -100691,14993 -97840,17049 -99448,15368 3015,3801 2091,2329

5.3.1.6 Number of RSUs vs Vehicle Density

Following a similar trend in the payoffs for the number of RSUs with respect to vehicle density (Table 5.13), we find that the attacker's payoff is maximum with 2 RSUs. In other words, the attacker still incurs some loss, which is smallest when the traffic is either at its full potential or around 50%. For the defender, a VD of 100 works better.

Table 5.13: Payoff Matrix: Number of RSUs vs Vehicle Density
NumRSU VD 20 VD 40 VD 60 VD 80 VD 100
2 -1222,4281 -1020,5101 -1177,5138 -1274,5160 -1020,6804
3 -132601,6056 -142901,4379 -1647349,3825 -148993,3827 -151241,3392
4 -158233,-3167 -185084,-1458 -194815,-689 -196974,-563 -96842,-342
5 -166844,47859 -201764,-6316 -220611,21198 -220341,-3699 -227415,-2884

Table 5.14: Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate
MP BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1s -5825,5361 -5879,5071 -5673,6733 -4883,4266 -2399,4028 -3023,2924
800 0.01s -59759,8408 -59544,7940 -59395,11019 -58837,8093 -57390,6949 -57231,6282
0.001s -598611,47364 -598662,47347 -598787,41355 -598129,43060 -596610,43552 -596493,43707
0.1s -5764,5255 -5852,5112 -5613,6566 -4877,4412 -2591,3836 -3287,2556
8000 0.01s -59247,8945 -59358,7834 -59339,8750 -58350,7987 -57096,6779 -1256,7097
0.001s -94819,11183 -94766,10913 -94813,10391 -94095,13092 -91457,9409 -91142,9042
0.1s -5501,5201 -5573,4913 -5467,5514 -4071,6005 -3320,3129 -3590,2099
80000 0.01s -5472,5124 -5544,4600 -5430,5214 -4575,4248 -3209,3098 -3308,2413
0.001s -5435,5174 -5559,4647 -5437,5520 -4062,9901 -3167,3052 -3116,2505

5.3.2 Scenario AAD

5.3.2.1 Message Payload + Beacon Interval vs Bit Rate

Table 5.14 is a payoff matrix between message payload, beacon interval, and bit rate. The table clearly shows that the rows BI 0.01s and BI 0.001s for message payloads 800bit and 8000bit will be eliminated. If we follow the strict dominance rule, columns 2, 5, and 6 will be eliminated, followed by rows 1, 4, and 7. We are left with a square matrix of rows (8, 9) and columns (3, 4). The square matrix has two pure Nash equilibria, (-5430, 5214) and (-4062, 9901), and one mixed-strategy equilibrium with an expected payoff of (-5418, 5269). Out of the three, the best response for the players is (-4062, 9901), as shown in Figure 5-15.

Now, as we generate payoff matrices for other combinations, the size and dimen-

sions of the table start increasing. The most extensive payoff matrix is of size 36x30

for the parameter combination (NumRSU + BI + MP, BR + VD) with 1080 cells.

Applying game theory to defend against cyberattacks would be more complicated in

Figure 5-15: Beacon Interval (80000bit) vs Bit Rate

the real world owing to the diversity of potential parameters that could be used to

attack. Listing such a large number of potential parameters would lead to extreme

memory and run-time inefficiencies. We need a compact representation of the game,

which, based on the previous data, reduces the number of strategies that must be

considered when finding an optimal solution [256]. These strategies, when included

in the complete payoff matrix, must get eliminated by the strict dominance rule.

5.3.2.2 Message Payload + Beacon Interval vs Vehicle Density

Table 5.15 is a payoff matrix between message payload and beacon interval with respect to vehicle density. Drawing on the knowledge acquired from Table 5.14 that BI 0.01s and BI 0.001s for message payloads 800bit and 8000bit yield high losses for the attacker, we can eliminate those strategies from the attacker's plan. Payoff matrix 5.15 shows the NE at MP 80000bit and BI 0.1s for 40 vehicles.

Table 5.15: Payoff Matrix: Message Payload, Beacon Interval vs Vehicle Density
MP BI VD 20 VD 40 VD 60 VD 80 VD 100
800 0.1s -5949,4355 -5871,5176 -5868,5205 -5829,5264 -5805,5290
8000 0.1s -5960,4176 -5859,5351 -5847,5371 -5796,5384 -5796,5504
0.1s -5685,4683 -5653,5574 -5641,5478 -5656,5449 -5653,5346
80000 0.01s -17670,1300 -17747,1478 -17734,1358 -17726,1388 -17733,1343
0.001s -16784,1127 -17588,1439 -17675,1274 -17706,1884 -17708,1282

5.3.2.3 NumRSU + Beacon Interval vs Bit Rate

The compact representation of the payoff matrix for the number of RSUs and beacon interval with respect to the bit rate, Table 5.16, shows that the attacker can launch an attack with just one spoofed RSU. The 2 RSUs in the simulation indicate that one is the real RSU and the other is a spoofed one with the same parameter settings, so that intrusion detection algorithms cannot differentiate it. Similarly, for multiple RSUs, one is the original, while the rest are spoofed. Again, the payoff matrix confirms the previous analysis that a higher bit rate is not a good strategy for the defender.

Table 5.16: Payoff Matrix: Number of RSUs, Beacon Interval vs Bit Rate
RSUs BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1s -7701,5732 -7749,4847 -7020,6450 -6145,5205 -3599,3732 -3134,3211
2 0.01s -59289,10766 -59790,9125 -59534,8687 -57809,8399 -53784,9431 -53904,9491
0.001s -563772,42599 -567723,39843 -565641,40480 -557739,132092 -540864,58878 -537708,60538
0.1s -120728,17839 -72587,24538 -103386,18275 -104748,17423 -3359,4776 -3026,4551
5 0.01s -144567,23727 -147147,22288 -143938,24524 -145981,22786 -38328,13707 -36768,13923
0.001s -432851,31821 -424796,31249 -437122,29786 -415360,31610 -211238,27471 -207017,27298

5.3.2.4 NumRSU + Beacon Interval vs Vehicle Density

From Figure 5-9, we expect a higher number of RSUs to give a higher % packet loss. We see from previous tables that 2 RSUs resulted in better payoffs for the attacker. Table 5.17, for the number of RSUs and beacon interval with respect to vehicle density, agrees with Figure 5-9. It has two pure-strategy equilibria that yield expected payoffs of (-7416, 6065) and (51180, 147776). If the attacker wants to maximize their payoff, they will choose to play with 5 RSUs and BI 0.001s. In that case, the best strategy for the defender is to reduce the traffic to 20 vehicles.

Table 5.17: Payoff Matrix: Number of RSUs, Beacon Interval vs Vehicle Den-
sity
RSUs BI VD 20 VD 40 VD 60 VD 80 VD 100
0.1s -7191,5601 -7416,6065 -7845,5828 -7821,5765 -7948,5786
2 0.01s -54815,17336 -57226,13003 -58597,10189 -59264,10459 -568737,46103
0.001s -506574,120054 -542108,77624 -559862,56950 -568737,46105 -568737,46105
0.1s 20066,56571 -85485,37801 -141598,29264 -141739,29196 -162510,25148
5 0.01s 1783,75617 -115647,50047 -178499,35959 -198552,31263 -205593,31184
0.001s 51180,147776 -289407,95388 -465762,68248 -473244,67046 -542937,56054

5.3.2.5 NumRSU + Message Payload vs Bit Rate

Tables 5.10 and 5.11 show that the best strategy for the attacker is to set the message payload to 80000bit. From the previous tables, we expect a lower bit rate to be a better strategy for the defender. Eliminating dominated strategies for the attacker and the defender in Table 5.18 results in the NE (2 RSUs, MP 80000bit, BR 9Mbps).

5.3.2.6 NumRSU + Message Payload vs Vehicle Density

Table 5.19 has 3 pure-strategy and 2 mixed-strategy NE. The expected payoffs for the pure strategies are (-1751, 5847), (-1444, 6375), and (24576, 52750), and

Table 5.18: Payoff Matrix: Number of RSUs, Message Payload vs Bit Rate
NumRSU MP BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
800 2115,2189 -2249,5955 -2035,4984 -636,4547 3024,3748 2358,2598
2 8000 -1812,6282 -2609,4715 -2453,4426 -498,3934 3324,4020 2877,3911
80000 -4931,5529 -2034,6128 -3112,4616 -383,4584 1911,2749 3804,3516
800 -98091,17126 -143133,-1557 -142218,-1285 -142808,-1578 604,650 304,369
5 8000 -94588,17825 -140175,-1815 -143832,-1504 -142466,-1523 309,575 363,445
80000 -92850,16520 -145891,-1066 -143237,-1572 -142203,-1639 255,499 469,443

for the mixed strategies are (-1644, 6136) and (-1407, 7709). The attacker can randomize their strategies among (2 RSUs, MP 8000bit), (2 RSUs, MP 80000bit), and (5 RSUs, MP 8000bit), while the defender can randomize among the strategies 20 vehicles, 60 vehicles, or 100 vehicles. Out of the five equilibria, the best response for the players is (24576, 52750).

Table 5.19: Payoff Matrix: Number of RSUs, Message Payload vs Vehicle


Density
NumRSU MP VD 20 VD 40 VD 60 VD 80 VD 100
800 -2251,4281 -1870,5097 -2285,5138 -2488,5160 -2011,6804
2 8000 -2116,4193 -1778,4803 -1590,4481 -1901,4265 -1751,5847
80000 -1400,4559 -3479,5721 -1444,6375 -1933,6363 -2041,6178
800 24482,52695 -80111,35561 -136207,27480 -135952,27396 -159062,23599
5 8000 24576,52750 -77437,36094 -132124,26783 -152163,22991 -161420,21363
80000 24161,53173 -76954,37546 -136323,27874 -134689,27960 -158890,23599

5.3.3 Scenario AAAD

5.3.3.1 NumRSU+Message Payload + Beacon Interval vs Bit Rate

The payoff matrix for the number of RSUs, message payload, and beacon interval with respect to bit rate, Table 5.20, has only one mixed-strategy NE and no pure-strategy NE. The expected payoff for the mixed strategy is (-4003, 5831). The attacker can randomize between their strategies 0.01s and 0.001s for 2 RSUs and 80000bit with probabilities (1399/1937) and (538/1937), respectively. The defender can randomize between their strategies BR 6Mbps and BR 18Mbps with probabilities (259/310) and (51/310), respectively.
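For a game reduced to two strategies per player, mixing probabilities such as these follow from the standard indifference conditions. The sketch below is a generic illustration assuming an interior 2x2 equilibrium exists; it is not tied to the exact reduced matrices produced by Gambit for this game.

```python
from fractions import Fraction as F

def mixed_2x2(A, B):
    """Interior mixed equilibrium of a 2x2 bimatrix game via indifference.

    p: attacker's probability on row 0, chosen so the defender is
       indifferent between the two columns.
    q: defender's probability on column 0, chosen so the attacker is
       indifferent between the two rows."""
    p = F(B[1][1] - B[1][0], B[0][0] - B[0][1] - B[1][0] + B[1][1])
    q = F(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Expected payoffs at the mixed profile; with indifference, either
    # pure option of the other player gives the same value.
    exp_attacker = q * A[0][0] + (1 - q) * A[0][1]
    exp_defender = p * B[0][0] + (1 - p) * B[1][0]
    return p, q, (exp_attacker, exp_defender)
```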
Table 5.20: Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate
RSUs MP BI BR 6 BR 9 BR 12 BR 18 BR 24 BR 27
0.1 -7756,6768 -7446,4948 -6882,6516 -6028,6072 -3428,3593 -2048,3825
800 0.01 -61003,2473 -60730,2253 -61149,2042 -59935,1829 -57582,2065 -57939,1800
0.001 -409357,4448 -410789,3858 -408556,4618 -407716,26444 -400651,5242 -399108,5275
2
0.1 -4710,5329 -4004,5150 -4115,5022 -2410,5008 -149,3053 810,3048
80k 0.01 -4335,5477 -4154,5759 -4292,4789 -2316,4939 930,3760 36,2601
0.001 -4233,6752 -3930,5934 -4717,4636 -2834,8151 823,3677 1244,3196
0.1 -103334,18598 -106029,16201 -103482,18267 -105087,17129 -3422,4747 -3263,4212
800 0.01 -149150,23221 -145759,22770 -149164,21643 -142261,22535 -37440,14087 -36624,13947
0.001 -275703,24725 -270629,24988 -279093,23248 -267882,24607 -109759,16925 -117769,15863
5
0.1 -98459,17063 -100998,15014 -102770,15447 -103326,14237 1586,3347 1747,2435
80k 0.01 -98339,17209 -99999,15522 -99738,11002 -98040,15117 1806,3301 1773,2616
0.001 -94035,17138 -107712,15808 -103244,14841 -97584,15698 1469,2832 1799,2646

5.3.3.2 NumRSU+Message Payload + Beacon Interval vs Vehicle Den-

sity

Table 5.21 shows the payoff matrix for changes in the number of RSUs, message payload, and beacon interval with respect to vehicle density. The strategies with an RSU count of 2 and BI of 0.01s or 0.001s are eliminated, as their payoffs are quite low. Similarly, all strategies with 3 and 4 RSUs yield low payoffs for the attacker and are hence eliminated. The Nash Equilibrium of this game is (5 RSUs, 800bit, 0.001s, VD 20) with an expected payoff of (112993, 94699).

Table 5.21: Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Vehicle Density
RSUs MP BI VD 20 VD 40 VD 60 VD 80 VD 100
800 0.1 -6917,5531 -7605,5971 -7076,5710 -7571,5697 -7612,5716
2 8000 0.1 -7199,5444 -7019,6206 -7359,5867 -7742,5212 -7698,6047
80000 0.1 -6273,5675 -6469,6874 -7027,5835 -7177,5296 -6521,5346
0.1 20066,56571 -85304,37805 -141441,29347 -141618,29287 -162055,25228
800 0.01 4700,74953 -120151,48142 -180880,34801 -191090,32361 -204476,29444
0.001 112993,94699 -202795,62242 -367484,45400 -370107,45146 -439694,37686
5
0.1 23012,53301 -76617,37623 -132519,27546 -158607,22785 -160121,23845
80000 0.01 27053,53788 -82930,36312 -142563,25765 -154906,23502 -162653,22285
0.001 25388,53399 -76590,37261 -132960,27347 -157105,22949 -167500,21087

5.3.4 Scenario ADD

5.3.4.1 Beacon Interval vs Bit Rate + Vehicle Density

From Table 5.8, we established that BI 0.01s and 0.001s are dominated strategies for the attacker, yielding very low payoffs. Table 5.22 shows a section of the payoff matrix for beacon interval with respect to bit rate and vehicle density. The NE is (BI 0.1s, BR 12Mbps, VD 100) with an expected payoff of (-5634, 6714).

Table 5.22: Payoff Matrix: Beacon Interval vs Bit Rate + Vehicle Density
BR 12
BI VD 20 VD 40 VD 60 VD 80 VD 100
0.1s -5968,1176 -5922,1531 -5917,1405 -5903,1441 -5634,6714

5.3.4.2 Message Payload vs Bit Rate + Vehicle Density

Table 5.23 shows the section of the payoff matrix for BR 6Mbps and BR 12Mbps for message payload with respect to bit rate and vehicle density. It has 2 pure-strategy and 3 mixed-strategy equilibria, out of which (-279, 6854) is the best response of the game.

Table 5.23: Payoff Matrix: Message Payload vs Bit Rate + Vehicle Density


BR 6 BR 12
MP VD 20 VD 40 VD 60 VD 80 VD 100 VD 20 VD 40 VD 60 VD 80 VD 100
800 -544,3617 -474,4757 -429,5456 -441,5296 -462,4993 -491,3995 -371,5837 -373,5476 -334,5537 -291,6287
8000 -515,4059 -410,5406 -389,5309 -398,5266 -410,5142 -559,3524 -424,4828 -315,5854 -261,6656 -279,6854
80000 -515,3723 -418,4798 -294,5613 -406,4910 -303,6878 -477,3864 -305,5662 -345,4738 -180,4936 -354,4611

5.3.4.3 NumRSU vs Bit Rate + Vehicle Density

Table 5.24 shows the reduced payoff matrix for the number of RSUs with respect to bit rate and vehicle density after eliminating all the dominated strategies. It has two pure-strategy equilibria with expected payoffs of (-2159, 5953) and (16346, 35006) and packet losses of 4.7% and 83%, respectively. It also has one mixed-strategy equilibrium with an expected payoff of (-1449, 8461). We select (16346, 35006) as the best response for the players. Here, we also see that the attacker can have a greater attack impact with 5 RSUs, but the number of vehicles they can disrupt is only 20.

Table 5.24: Payoff Matrix: Number of RSUs vs Bit Rate + Vehicle Density


BR 9 BR 12
NumRSU VD 100 VD 20 VD 40
2 -2159,5953 -1449,4377 -1450,5830
5 -100691,14993 16346,35006 -46828,25561

5.3.5 Scenario AADD

5.3.5.1 Beacon Interval + Message Payload vs Bit Rate + Vehicle Density

The reduced payoff matrix for beacon interval and message payload with respect to bit rate and vehicle density is shown in Table 5.25. This matrix can be reduced further by eliminating the dominated rows. The last row (80000bit, 0.001s) has the smallest negative value (minimum loss) for the attacker against all the defender strategies, so the other 4 rows can be eliminated. The remaining payoffs show that the defender has the highest payoff of 5765 when the vehicle density is 100. Hence, (80000bit, 0.001s, VD 100) is the NE for this game.

Table 5.25: Payoff Matrix: Message Payload, Beacon Interval vs Bit Rate + Vehicle Density
BR 9
MP BI VD 20 VD 40 VD 60 VD 80 VD 100
800 0.1s -5963,4303 -5900,4976 -5846,5462 -5867,5271 -5879,5071
8000 0.1s -5925,4642 -5799,5825 -5808,5338 -5795,5302 -5800,5195
0.1s -5630,4248 -5489,6203 -5552,5806 -5537,5523 -5561,5327
80000 0.01s -5783,4019 -5561,5472 -5504,4991 -5522,5093 -5525,4988
0.001s -5337,4340 -5447,5590 -5363,5634 -5447,5590 -5438,5765

Table 5.26: Payoff Matrix: Number of RSUs, Beacon Interval vs Bit Rate + Vehicle Density
BR 12 BR 18
NumRSU BI VD 20 VD 40 VD 60 VD 20
2 0.1s -7389,6014 -6872,6713 -6681,6873 -6635,5410
0.1s 11129,38685 -52013,27818 -88069,21495 24507,40613
5 0.01s -4784,494423 -81530,323858 -123716,229042 13929,536381
0.001s 127213,83326 -215531,51588 -309610,42393 186332,88672

5.3.5.2 NumRSU + Beacon Interval vs Bit Rate + Vehicle Density

The reduced payoff matrix for the number of RSUs and beacon interval with respect to bit rate and vehicle density is shown in Table 5.26. It has two pure-strategy equilibria with expected payoffs of (-6681, 6873) and (186332, 88672) and packet losses of 4.7% and 90.9%, respectively. If the attack goal is to disrupt as many vehicles as possible, the attacker could go with the strategy 2 RSUs and 0.1s BI. If the attack goal is maximum packet loss irrespective of the number of vehicles, they could go with the other strategy (5 RSUs, 0.001s BI, BR 18Mbps). Here, the best response for the defender is to reduce the traffic to 20 vehicles.

5.3.5.3 NumRSU+Message Payload vs Bit Rate + Vehicle Density

The NE of the payoff matrix for the number of RSUs and message payload with respect to bit rate and vehicle density, which yields a packet loss of 84%, is (5 RSUs, 8000bit, BR 18Mbps, 20 vehicles), as shown in Table 5.27. This table shows the payoffs for 20 vehicles at bit rates of 6, 9, 12, and 18 Mbps.

Table 5.27: Payoff Matrix: Number of RSUs, Message Payload vs Bit Rate + Vehicle Density
VD 20
NumRSU MP BR 6 BR 9 BR 12 BR 18
800 15567,34520 15328,34420 16345,35006 17040,34297
5 8000 15482,34918 15786,34474 15493,34425 31008,36759
80000 16432,34730 14676,34420 17607,34674 16054,34181

5.3.6 Scenario AAADD

5.3.6.1 NumRSU+Message Payload + Beacon Interval vs Bit Rate +

Vehicle Density

Table 5.28 shows a section of the payoff matrix for the number of RSUs, message payload, and beacon interval with respect to bit rate and vehicle density. Since the payoff matrix is quite large, there are 2 pure-strategy and 13 mixed-strategy NE. Out of the 15 equilibria, the NE (5 RSUs, 800bit MP, BI 0.001s, BR 18Mbps, VD 20) yields the maximum packet loss of 89.89%.

Table 5.28: Payoff Matrix: Number of RSUs, Message Payload, Beacon Interval vs Bit Rate + Vehicle Density
VD 20
NumRSU MP BI BR 6 BR 9 BR 12 BR 18
0.1s 10053,38148 10968,38417 11129,38685 24507,40613
5 800 0.01s 12255,60494 -4030,56151 -3709,55876 14864,59969
0.001s 72841,60327 -2432,52082 77770,60939 117956,64819

5.3.7 Final Payoff Matrix

The final payoff matrix is shown in Table 5.29. For each strategy combination, the Nash equilibrium is selected from the respective payoff matrix. In general, there could be any number of equilibria; the best case is to have a single Nash equilibrium. Where there is more than one equilibrium, we choose the better one for both players. For a payoff matrix with a mixed strategy, we evaluate the players' expected payoffs and compare them with the other equilibria, if any. The one that is the best response for each player is selected for the final payoff matrix. For example, for the payoffs of BI vs. BR and BI vs. VD, we look at Table 5.8 and Table 5.9; both tables have one equilibrium each, which is selected to populate the final matrix.

Now, let’s have a look at the attacker’s strategy BI+MP with BR as the defender’s

strategy in Table 5.29. The payoff matrix (Table 5.14) for this strategy has two pure

Nash equilibria. The attacker can also randomize their strategy with a probability of

( 4381 , 966 ) for MP of 80000bit and BI of 0.01s and 0.001s. In response, the defender
5347 5347

can randomize between their strategies (BR 12Mbps, BR 18Mbps) with probabilities
513 7
( 520 , 520
). The expected payoff for the mixed strategy is (-5418, 5269). The best one

out of the three equilibria is (-4062, 9901). Once the payoffs for all the strategies are

populated in the final matrix, the Nash equilibrium is calculated as (112993, 94699).

The payoffs for attacker strategies (rows 1, 3, and 4) are strictly dominated by row 2, so these rows will be eliminated. In the defender's turn, they will eliminate column 1, as it is strictly dominated by columns 2 and 3. The attacker will then eliminate rows 2 and 6. The remaining defender payoffs for strategy BR+VD (88672, 64819) are lower than their payoffs for strategy VD (147776, 52750). This eliminates strategy BR+VD. Now, the attacker is left with two payoffs (51180, 112993). The payoff for strategy NumRSU+MP+BI with respect to VD is higher, giving us the Nash Equilibrium for this game. The surface plot of the final payoff matrix in Figure 5-16 shows the Nash Equilibrium and the payoffs of the game.
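As a consistency check on this elimination argument, the final matrix of Table 5.29 can also be passed through the best-response test sketched in Section 5.3.1.1; under a pure best-response check it yields the same equilibrium. The snippet below re-states that pure_nash sketch so it is self-contained.

```python
import numpy as np

def pure_nash(A, B):  # as sketched in Section 5.3.1.1
    return [(r, c)
            for r in range(A.shape[0]) for c in range(A.shape[1])
            if A[r, c] >= A[:, c].max() and B[r, c] >= B[r, :].max()]

# Table 5.29; rows: BI, MP, NumRSU, MP+BI, NumRSU+BI, NumRSU+MP,
# NumRSU+MP+BI; columns: BR, VD, BR+VD.
A = np.array([[-5634, -5820, -5634], [-388, -419, -279],
              [-2159, -1020, 16346], [-4062, -5653, -5438],
              [-7020, 51180, 186332], [-2034, 24576, 31008],
              [-4002, 112993, 117956]])
B = np.array([[6514, 5293, 6714], [4944, 5800, 6854],
              [5953, 6804, 35006], [9901, 5574, 5765],
              [6450, 147776, 88672], [6128, 52750, 64819],
              [5831, 94699, 64819]])
print(pure_nash(A, B))  # [(6, 1)] -> (NumRSU+MP+BI, VD): (112993, 94699)
```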

5.4 Analysis and Discussion

The objective of implementing game theory in the cybersecurity of autonomous systems is to provide a mathematical representation of attack scenarios and learn the

Table 5.29: Final Payoff Matrix for Attack/Defense Strategies
Attacker/ Defender BR VD BR + VD
BI -5634,6514 -5820,5293 -5634,6714
MP -388,4944 -419,5800 -279,6854
NumRSU -2159,5953 -1020,6804 16346,35006
MP + BI -4062,9901 -5653,5574 -5438,5765
NumRSU + BI -7020,6450 51180,147776 186332,88672
NumRSU + MP -2034,6128 24576,52750 31008,36759
NumRSU + MP + BI -4002,5831 112993,94699 117956,64819

effectiveness of defense mechanisms against those attacks. Many potential threats can

be generated by tweaking just a few parameters in the real world, and simulating such

a scenario helps visualize and play around with parameters easily. Our work explores

the idea of implementing game theory on the simulation of the DoS/DDoS attack on

driverless cars in the VANET network. As discussed in our dissertation, we changed

different parameters for the attacker and the defender, evaluated the payoffs, and

then analyzed the results to find the optimal strategy for both players. Table 5.30 summarizes the players' strategies with the respective equilibria. It also shows the

percentage packet loss in each scenario. The ‘Attack Strategy’ and ‘Defence Strat-

egy’ columns show the values of the parameters that would yield the best response

when those parameters are varied. The rest of the parameters are at their default

values. There are 4 cases in which packet loss is above 85%. The highest packet

loss of approx. 91% is achieved at 5 RSUs, a BI 0.001s with 20 vehicles, and a BR

18Mbps. We would expect the attacker to choose this strategy. The defender would

strive to receive as many packets as possible to lower the impact of the DoS attack,

hence going for a higher payoff, which could be achieved by reducing the traffic to

20 vehicles. In turn, the attacker would want to achieve maximum payoff by varying

the message payload along with RSUs and BI, which could be achieved with 5 RSUs,

800bit MP, and BI 0.001s when the vehicle density is 20. The average packet loss for

this strategy is 848577, which yields a packet loss of approx. 90%. This shows that an

Figure 5-16: Attacker Strategies vs Defender Strategies (Final Payoff)

attacker would need 5 RSUs to mount a successful DoS/DDoS attack on the vehicular network. With the defender's strategy in action, the impact of the attack could be reduced to only 20 vehicles instead of 100. If a vehicle receives a large number of packets, it can alert other vehicles to change routes and minimize the traffic as much as possible. The attacks where packet loss is low can still be used to disrupt the traffic, as shown in Figure 5-5. A low % packet loss also indicates that the impact of a DoS/DDoS attack can be sustained without serious damage. The % packet loss for the number of RSUs, message payload, and beacon interval with respect to bit rate is not feasible to calculate, as the expected payoff from the mixed strategy lacks information about the number of packets lost.

Table 5.30: Attack/Defend Strategy and Packet Loss at Equilibrium with
respect to Payoff Matrices

Payoff Matrices Equilibrium Attack Strategy Defence Strategy PL%


BI vs BR -5634,6514 0.1s 12Mbps 5.48
BI vs VD -5820,5293 0.1s VD 80 3.55
BI vs BR+VD -5634,6714 0.1s 12Mbps, VD 100 5.33
MP vs BR -388,4944 80000bit 6Mbps 4.41
MP vs VD -419,5800 80000bit VD 40 3.20
MP vs BR+VD -279,6854 8000bit 12Mbps, VD 100 4.59
NumRSU vs BR -2159,5953 2 9Mbps 4.11
NumRSU vs VD -1020,6804 2 VD 100 3.41
NumRSU vs BR+VD 16346,35006 5 12Mbps, VD 20 83.77
MP+BI vs BR -4062,9901 80000bit, 0.001s 18Mbps 16.39
MP+BI vs VD -5653,5574 80000bit, 0.1s VD 40 5.97
MP+BI vs BR+VD -5438,5765 80000bit, 0.001s 9Mbps, VD 100 8.95
NumRSU+BI vs BR -7020,6450 2, 0.1s 12Mbps 4.70
NumRSU+BI vs VD 51180,147776 5, 0.001s VD 20 86.07
NumRSU+BI vs BR+VD 186332,88672 5, 0.001s 18Mbps, 20 90.95
NumRSU+MP vs BR -2034,6128 2, 80000bit 9Mbps 5.55
NumRSU+MP vs VD 24576,52750 5, 8000bit VD 20 84.48
NumRSU+MP vs BR+VD 31008,36759 5, 8000bit 18Mbps, VD 20 84.18
NumRSU+MP+BI vs BR -4002,5831 2, 80000bit, 0.01s (1399/1937); 2, 80000bit, 0.001s (538/1937) 6Mbps (259/310); 18Mbps (51/310) -
NumRSU+MP+BI vs VD 112993,94699 5, 800bit, 0.001s VD 20 89.96
NumRSU+MP+BI vs BR+VD 117956,64819 5, 800bit, 0.001s 18Mbps, VD 20 89.89

5.5 Chapter Summary

This chapter explores the interaction between an attacker and driverless vehicles

in a VANET network using a game-theory model, which assumes specific attacker

capabilities and incentives in a DoS/DDoS attack. By analyzing data related to

attacker strategies, we can assess the security of a system, i.e., the extent to which

defenses will protect the system against a specific set of threats. Modeling a game for a

DDoS attack in a VANET is complex because of the broadcast communication with no

acknowledgment. Moreover, applying multiple strategies simultaneously complicates the game further. The application of the bottom-up approach to solving the game

proved helpful, resulting in a compact representation of the game. For an end-user,

this could mean that they can avoid the deeper technicality of the game and efficiently

find the optimal solution.

Chapter 6

Conclusion and Future Work

This chapter concludes the dissertation with a summary of the contributions and

discusses the limitations of this work which could be addressed in future works.

6.1 Conclusion

With the advancement in artificial intelligence (AI) and machine learning (ML) in

the last few decades, the increased usage of autonomous systems is evident in almost

every domain. Designing and developing autonomous vehicles in several developed

countries to ease transportation is no secret. On the other hand, domains such as

agriculture, healthcare, military, and space exploration have found many use cases for

these systems. The driverless car revolution has already begun, and the self-driving

technology has been embraced by Tesla, Uber, and Waymo [257]. Robots would soon

be integrated into our lives like a home assistant, a pet [258] or a friend like Sophia

(the AI) [7]. There was a need to explore the area of autonomous systems to evaluate

the extent of work done in terms of cybersecurity. A detailed search, assessed on the following criteria, gave clarity to the subsequent contributions of the dissertation:

• the history of automation and its levels

• approaches of autonomy and current trends

• cybersecurity of an autonomous system

• modeling of an autonomous system, vulnerabilities, threats, and attacks

Experimental modeling and evaluation of these systems were beyond our scope

of research. The analysis of the selected articles reflects that UAVs are the most attractive field of study, while driverless cars and robots are still in the early stages of cybersecurity modeling and implementation.

Further research in this field established the lack of a generalized autonomous system architecture. We proposed a generalized architecture of the system along with a strategic

game model to represent the system in any attack scenario. The game model can be

applied at any level of granularity with the right attack and defence parameters and

quantification of cost. Security game modeling can be used to reason over diverse

potential threats and attack scenarios. The game specifies only the players' options and the consequence of each of them. The solution of the game does not decide who wins and who loses; rather, it is a way to reason about what players might do in a given scenario and what the consequences would be. Hence, the same game can have different solutions.

Finally, we applied our game model to a real-world scenario through simulation.

We implemented an attack/defense game on driverless cars in a VANET network. The

simulation results show how the attacker can use different parameters to increase

their success in launching a DDoS attack and its impact on the traffic. Similarly,

the defender can minimize the attack impact by changing other parameters. Such

cybersecurity modeling would allow system administrators and developers to eval-

uate different attack scenarios and implement countermeasures accordingly so that

autonomous systems can make those decisions when required.

6.2 Limitations and Future Work

A fundamental constraint that we found during the review process is the lack of

literature that discusses the security of individual as well as generalized autonomous

systems. The study of cybersecurity is new in this domain, and so is its imple-

mentation. This resulted in a limited number of primary articles reviewed for the

security and modeling sections of the survey. The discussion focuses on the security

of three autonomous systems (UAVs, robots, and driverless cars) worth a detailed

review because of their popularity. Swarms are an extension of these systems, and

the Autonomous Internet of Things (AIoT) is a new concept yet to be explored. We

believe in having studied the most applicable works available in this area of research.

We will address the challenges and limitations associated with the current simulation methodology in our future work. The simulation work can be extended to many other attack scenarios and types of attacks, such as one or more vehicles acting as malicious nodes. The same or different parameters can be evaluated to see the attack's impact.

In our future work, we can also assess what impact these parameters have on time

delay, distance covered by the vehicles, and battery consumption. If the vehicle has

to process a higher number of received packets, its battery charge will be consumed faster. Since autonomous vehicles will be battery operated, this will reduce the distance they would typically cover at a specific battery level. In addition, the learning

of attacker/defender over time also needs to be taken into account. For example,

the attacker/defender must reason over different types of attacks and their counter-

measures on each module. We need a more elaborate model for both attacker and

defender to address these and other challenges. A framework or GUI could be de-

veloped to evaluate the Nash/QR equilibrium based on the user’s data of players,

strategies, utility function, and scenarios.

References
[1] Statista. (2018) Internet of Things (IoT) connected devices installed

base worldwide from 2015 to 2025 (in billions) . [Online]. Available:

http://tinyurl.com/j3t9t2w

[2] A. Kim, B. Wampler, J. Goppert, I. Hwang, and H. Aldridge, “Cyber attack

vulnerabilities analysis for unmanned aerial vehicles,” in Infotech@ Aerospace

2012, 2012, p. 2438.

[3] Michael Corkery, David Gelles, “Robots Welcome to Take Over, as

Pandemic Accelerates Automation ,” 2020. [Online]. Available: https:

//tinyurl.com/s42w7wl

[4] K. Hill. (2016) Security robot accidentally attacks child. [Online]. Available:

https://tinyurl.com/yynv5se3

[5] D. Wakabayashi. (2018) Uber’s Self-Driving Cars Were Struggling Before

Arizona Crash. [Online]. Available: http://tinyurl.com/y83sc39o

[6] CCW. (2018) Five years of campaigning, CCW continues. [Online]. Available:

https://www.stopkillerrobots.org/2018/03/fiveyears/

[7] D. Galeon. (2017) World’s First AI Citizen in Saudi Arabia Is Now Calling For

Women’s Rights. [Online]. Available: http://tinyurl.com/yyqcksxq

[8] P. Caughill. (2017) An Artificial Intelligence Has Officially Been Granted

Residency. [Online]. Available: http://tinyurl.com/y9q3zpz4

[9] F. Jahan, A. Y. Javaid, W. Sun, and M. Alam, “Gnssim: An open source

gnss/gps framework for unmanned aerial vehicular network simulation,” EAI

Endorsed Transactions on Mobile Communications and Applications, vol. 2,

no. 6, pp. 1–13, 2015.

[10] A. Y. Javaid, F. Jahan, and W. Sun, “Analysis of global positioning

system-based attacks and a novel global positioning system spoofing detec-

tion/mitigation algorithm for unmanned aerial vehicle simulation,” Simulation,

vol. 93, no. 5, pp. 427–441, 2017.

[11] F. Jahan, W. Sun, Q. Niyaz, and M. Alam, “Security modeling of autonomous

systems: A survey,” ACM Computing Surveys (CSUR), vol. 52, no. 3, p. 50,

2019.

[12] F. Jahan, W. Sun, and Q. Niyaz, “A non-cooperative game based model for

the cybersecurity of autonomous systems,” in 2020 IEEE Security and Privacy

Workshops (SPW). IEEE, 2020, pp. 202–207.

[13] G. D. Baxter, J. Rooksby, Y. Wang, and A. Khajeh-Hosseini, “The ironies of

automation: still going strong at 30?” in ECCE, 2012, pp. 65–71.

[14] J. M. Beer, A. D. Fisk, and W. A. Rogers, “Toward a framework for levels of

robot autonomy in human-robot interaction,” Journal of human-robot interac-

tion, vol. 3, no. 2, pp. 74–99, 2014.

[15] S. A. Mostafa, M. S. Ahmad, and A. Mustapha, “Adjustable autonomy: a

systematic literature review,” Artificial Intelligence Review, pp. 1–38, 2017.

[16] H.-M. Huang, “Autonomy levels for unmanned systems (alfus) framework vol-

ume i: Terminology version 2.0,” Tech. Rep., 2004.

[17] S. Shahrdar, L. Menezes, and M. Nojoumian, “A Survey on Trust in Au-

tonomous Systems,” in Intelligent Computing, 2017.

[18] A. Origins. (2013) Talos Crete. [Online]. Available: http://www.ancient-origins.

net/myths-legends/talos-crete-00157

[19] W. B. Yeats, “The Winding Stairs and Other Poems.” Reprinted by Kessinger

Publishing, 1933.

[20] Homer, “The Iliad,” in Vol. Book XVIII. circa.

[21] M. E. Rosheim, “Leonardo’s Lost Robot.” Berlin: Springer Verlag, 2006.

[22] N. Wiener and Others, God and Golem, inc, 1964.

[23] J. Turi. (2014) Tesla’s toy boat: A drone before its time. [Online]. Available:

https://www.engadget.com/2014/01/19/nikola-teslas-remote-control-boat/

[24] J. Turi. (2014) GE’s bringing good things, and massive robots,

to life. [Online]. Available: https://www.engadget.com/2014/01/26/

ge-man-amplifying-robots/

[25] ZDNet. (2018) 15 of the best movies about AI, ranked. [Online]. Available:

https://tinyurl.com/yculsjxp

[26] S. Wold and P. Staff. (2015) The 100 Greatest Movie Robots of All Time.

[Online]. Available: http://tinyurl.com/y3zbsxyk

[27] A. Newitz. (2013) 15 Books That Will Change the Way You Look at Robots.

[Online]. Available: http://tinyurl.com/yy2mkkwg

[28] T. B. Sheridan and W. L. Verplank, “Human and computer control of undersea

teleoperators,” Massachusetts Inst of Tech Cambridge Man-Machine Systems

Lab, Tech. Rep., 1978.


[29] F. L. BORCH. (2015) Project Horizon: Army Base on the

Moon. [Online]. Available: https://www.defensemedianetwork.com/stories/

an-army-base-on-the-moon/

[30] J. Navarro, “A state of science on highly automated driving,” Theoretical Issues

in Ergonomics Science, vol. 20, no. 3, pp. 366–396, 2019.

[31] R. van der Kleij, T. Hueting, and J. M. Schraagen, “Change detection sup-

port for supervisory controllers of highly automated systems: Effects on per-

formance, mental workload, and recovery of situation awareness following in-

terruptions,” International journal of industrial ergonomics, vol. 66, pp. 75–84,

2018.

[32] A. D. Farooqui and M. A. Niazi, Game theory models for communication

between agents: a review, 2016, vol. 4, no. 1. [Online]. Available: http:

//casmodeling.springeropen.com/articles/10.1186/s40294-016-0026-7

[33] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and

levels of human interaction with automation,” IEEE Transactions on Systems,

Man, and Cybernetics - Part A: Systems and Humans, vol. 30, no. 3, pp. 286–

297, may 2000.

[34] M. Hussain, J. Dewey, and N. Weibel, “Reducing alarm fatigue: exploring

decision structures, risks, and design,” EAI Endorsed Transactions on Pervasive

Health and Technology, vol. 3, no. 10, 7 2017.

[35] L. Bainbridge, “Ironies of automation,” in Analysis, Design and Evaluation of

Man-Machine Systems 1982. Elsevier, 1983, pp. 129–135.

[36] D. A. Norman, “The problem with automation: inappropriate feedback and

interaction, not over-automation,” Phil. Trans. R. Soc. Lond. B, vol. 327, no.

1241, pp. 585–593, 1990.


[37] E. L. Wiener and R. E. Curry, “Flight-deck automation: Promises and prob-

lems,” Ergonomics, vol. 23, no. 10, pp. 995–1011, 1980.

[38] M. R. Endsley, “Towards a New Paradigm for Automation: Designing

for Situation Awareness,” IFAC Proceedings Volumes, vol. 28, no. 15, pp.

365–370, 1995. [Online]. Available: http://linkinghub.elsevier.com/retrieve/

pii/S1474667017452591

[39] M. Endsley, “Design and evaluation for situation awareness enhancement,” in

Proceedings of the Human Factors Society annual meeting, vol. 32, no. 2. SAGE

Publications Sage CA: Los Angeles, CA, 1988, pp. 97–101.

[40] M. Endslay, “The application of human factors to the development of expert

systems for advanced cockpits,” vol. 31, no. 12, pp. 1388–1392, 1987.

[41] J. W. McDaniel, “Rules for fighter cockpit automation,” in Aerospace and Elec-

tronics Conference, 1988. NAECON 1988., Proceedings of the IEEE 1988 Na-

tional. IEEE, 1988, pp. 831–838.

[42] W. B. Rouse, “Adaptive allocation of decision making responsibility between

supervisor and computer,” in Monitoring behavior and supervisory control.

Springer, 1976, pp. 295–306.

[43] R. Parasuraman, T. Bahri, J. E. Deaton, J. G. Morrison, and M. Barnes,

“Theory and design of adaptive automation in aviation systems,” Catholic

Univ of America Washington DC Cognitive Science Lab, Tech. Rep., 1992.

[Online]. Available: http://books.google.com.my/books?id=DEtSOwAACAAJ

[44] T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control.

Cambridge, MA, USA: MIT Press, 1992.

[45] M. R. Endsley and D. B. Kaber, “Level of automation effects on performance,

situation awareness and workload in a dynamic control task,” Ergonomics,

vol. 42, no. 3, pp. 462–492, 1999.

[46] Office of the Secretary of Defense, “Unmanned Aerial Vehicles Roadmap 2000-2025,” Defense Pentagon Washington, DC, USA, Tech. Rep., 2001.

[47] H.-M. Huang, E. Messina, and J. Albus, “Toward a generic model for autonomy

levels for unmanned systems (alfus),” NATIONAL INST OF STANDARDS

AND TECHNOLOGY GAITHERSBURG MD, Tech. Rep., 2003.

[48] H. M. Huang, E. Messina, and J. Albus, “Autonomy level specification for in-

telligent autonomous vehicles: Interim progress report,” in 2003 PerMIS Work-

shop, Gaithersburg, MD. Citeseer, 2003, pp. 1–7.

[49] H.-M. Huang, J. S. Albus, E. R. Messina, R. L. Wade, and R. W.

English, “Specifying Autonomy Levels for Unmanned Systems: Interim

Report,” in SPIE Defense and Security Symposium 2004, 2004, pp. 386–397.

[Online]. Available: http://proceedings.spiedigitallibrary.org/proceeding.aspx?

articleid=844165

[50] H.-M. Huang, E. Messina, R. Wade, R. English, B. Novak, and J. Albus, “Au-

tonomy measures for robots,” in ASME 2004 International Mechanical Engi-

neering Congress and Exposition. American Society of Mechanical Engineers,

2004, pp. 1241–1247.

[51] H.-M. Huang, K. Pavek, J. Albus, and E. Messina, “Autonomy levels

for unmanned systems (ALFUS) framework: an update,” Proc. SPIE,

vol. 5804, no. June, pp. 439–448, 2005. [Online]. Available: http:

//dx.doi.org/10.1117/12.603725

[52] H.-M. Huang, “Autonomy levels for unmanned systems (ALFUS) framework:

safety and application issues,” in 2007 Workshop on Performance Metrics for

Intelligent Systems (PerMIS ’07), 2007, pp. 48–53.

[53] Cognilytica. (2018) Will There Be Another AI Winter? [Online]. Available:

https://www.cognilytica.com/2018/02/22/will-another-ai-winter/

[54] F. W. Heger, L. M. Hiatt, B. Sellner, R. Simmons, and S. Singh, “Results

in sliding autonomy for multi-robot spatial assembly,” no. 603, pp. 489–496,

September 2005.

[55] F. Heger and S. Singh, “Sliding autonomy for complex coordinated multi-robot

tasks: Analysis & experiments,” in Proceedings, Robotics: Systems and Science,

Philadelphia, 2006. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/

download?doi=10.1.1.148.9892{&}rep=rep1{&}type=pdf

[56] S. A. Mostafa, M. S. Ahmad, and A. Mustapha, “Adjustable autonomy: a

systematic literature review,” Artificial Intelligence Review, vol. 51, no. 2, pp.

149–186, 2019.

[57] N. E. Reed, “A user controlled approach to adjustable autonomy,” in Proceed-

ings of the 38th Annual Hawaii International Conference on System Sciences,

Jan 2005, pp. 295b–295b.

[58] M. L. Cummings, S. Bruni, S. Mercier, and P. Mitchell, “Automation archi-

tecture for single operator, multiple uav command and control,” Massachusetts

Inst Of Tech Cambridge, Tech. Rep., 2007.

[59] M. M. Cummings, “Man versus machine or man+ machine?” IEEE Intelligent

Systems, vol. 29, no. 5, pp. 62–69, 2014.

[60] B. P. Sellner, L. M. Hiatt, R. Simmons, and S. Singh, “Attaining sit-

uational awareness for sliding autonomy,” in Proceedings of the 1st ACM

SIGCHI/SIGART conference on Human-robot interaction. ACM, 2006, pp.

80–87.

[61] S. Zilberstein, “Building strong Semi-Autonomous Systems,” no. Chaffins 2008,

pp. 1–20, 2010.

[62] M. B. Dias, B. Kannan, B. Browning, G. Jones, B. Argall, M. F. Dias, M. Zinck,

M. M. Veloso, and A. Stentz, “Sliding autonomy for peer-to-peer human-robot

teams,” Intelligent Autonomous Systems 10, IAS 2008, pp. 332–341, 2008.

[63] T. Fong, N. Cabrol, C. Thorpe, and C. Baur, “A personal user interface for col-

laborative human-robot exploration,” in 6th International Symposium on Arti-

ficial Intelligence, Robotics, and Automation in Space (iSAIRAS), no. CONF,

2001.

[64] G. Dorais, R. P. Bonasso, D. Kortenkamp, B. Pell, and D. Schreckenghost,

“Adjustable autonomy for human-centered autonomous systems,” in Working

notes of the Sixteenth International Joint Conference on Artificial Intelligence

Workshop on Adjustable Autonomy Systems, 1999, pp. 16–35.

[65] K. H. Wray, L. Pineda, and S. Zilberstein, “Hierarchical Approach to Transfer

Control in Semi-Autonomous Systems,” in Proceedings of the Twenty-Fifth,

International Joint Conference on Artificial Intelligence, 2016, pp. 517–523.

[Online]. Available: http://www.ijcai.org/Proceedings/16/Papers/080.pdf

[66] V. Z. Moffitt, J. L. Franke, and M. Lomas, “Mixed-initiative adjustable auton-

omy in multi-vehicle operations,” 2006.

[67] M. L. De Brun, V. Z. Moffitt, J. L. Franke, D. Yiantsios, T. Housten, A. Hughes,

S. Fouse, and D. Housten, “Mixed-initiative adjustable autonomy for hu-

man/unmanned system teaming,” in AUVSI unmanned systems North America

conference, 2008.

[68] D. J. Bruemmer, D. D. Dudenhoeffer, and J. L. Marble, “Dynamic-autonomy

for urban search and rescue.” in AAAI mobile robot competition, 2002, pp. 33–

37.

[69] B. Hardin and M. A. Goodrich, “On using mixed-initiative control: a per-

spective for managing large-scale robotic teams,” in Proceedings of the 4th

ACM/IEEE international conference on Human robot interaction. ACM, 2009,

pp. 165–172.

[70] L. Lin, M. a. Goodrich, and S. Clark, “Sliding Autonomy for UAV Path

Planning: Adding New Dimensions to Autonomy Management,” Journal

of Human-Robot Interaction, vol. 1, no. 1, pp. 78–95, 2012. [Online].

Available: http://www.humanrobotinteraction.org/journal/index.php/HRI/

article/view/12

[71] M. Desai and H. A. Yanco, “Blending human and robot inputs for sliding scale

autonomy,” in Robot and Human Interactive Communication, 2005. ROMAN

2005. IEEE International Workshop on. IEEE, 2005, pp. 537–542.

[72] M. Desai, “Sliding scale autonomy and trust in human-robot interaction,” Mas-

ter’s thesis, 2007, 1442064.

[73] A. Wakulicz-Deja and M. Przybyla-Kasperek, “Hierarchical multi-agent sys-

tem,” Studia Informatica, vol. 28, no. 4, pp. 63–80, 2007.

[74] T. Proscevicius, A. Bukis, V. Raudonis, and M. Eidukeviciute, “Hierarchical

control approach for autonomous mobile robots,” Elektronika ir Elektrotechnika,

vol. 110, no. 4, pp. 101–104, 2011.


[75] J. M. Bradshaw, P. Beautement, M. R. Breedy, L. Bunch, S. V. Drakunov, P. J.

Feltovich, R. R. Hoffman, R. Jeffers, M. Johnson, S. Kulkarni et al., “Making

agents acceptable to people,” in Intelligent Technologies for Information Anal-

ysis. Springer, 2004, pp. 361–406.

[76] P. Scerri, D. Pynadath, and M. Tambe, “Adjustable autonomy for the real

world,” in Agent Autonomy. Springer, 2003, pp. 211–241.

[77] J. M. Bradshaw, H. Jung, S. Kulkarni, M. Johnson, P. Feltovich, J. Allen,

L. Bunch, N. Chambers, L. Galescu, R. Jeffers et al., “Kaa: policy-based explo-

rations of a richer model for adjustable autonomy,” in Proceedings of the fourth

international joint conference on Autonomous agents and multiagent systems.

ACM, 2005, pp. 214–221.

[78] B. G. Weber, M. Mateas, and A. Jhala, “Learning from demonstration for

goal-driven autonomy.” in AAAI, 2012.

[79] M. Klenk, M. Molineaux, and D. W. Aha, “Goal-driven autonomy for respond-

ing to unexpected events in strategy simulations,” Computational Intelligence,

vol. 29, no. 2, pp. 187–206, 2013.

[80] M. A. Wilson, J. McMahon, A. Wolek, D. W. Aha, and B. H. Houston, “Toward

goal reasoning for autonomous underwater vehicles: Responding to unexpected

agents,” in Goal Reasoning: Papers from the IJCAI Workshop, 2016.

[81] W. R. Dufrene, “An approach for autonomy: a collaborative communication

framework for multi-agent systems,” in Workshop on Radical Agent Concepts.

Springer, 2005, pp. 147–159.

[82] M. Lacerda, D. Park, S. Patel, and D. Schrage, “A mars exploration concept

systems design with an innovative unmanned autonomous vehicle and “carrier”

ground rover configuration. part i: System design,” in 2018 Aviation Technol-

ogy, Integration, and Operations Conference, 2018, p. 3262.

[83] J. Brookshire, S. Singh, and R. Simmons, “Preliminary results in sliding auton-

omy for assembly by coordinated teams,” in 2004 IEEE/RSJ International Con-

ference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566),

vol. 1. IEEE, 2004, pp. 706–711.

[84] J. Bradshaw, A. Uszok, R. Jeffers, N. Suri, P. Hayes, M. Burstein, A. Acquisti,

B. Benyo, M. Breedy, M. Carvalho et al., “Representation and reasoning for

daml-based policy and domain services in kaos and nomads,” in Proceedings of

the second international joint conference on Autonomous agents and multiagent

systems. ACM, 2003, pp. 835–842.

[85] J. M. Bradshaw, A. Acquisti, J. Allen, M. R. Breedy, L. Bunch, N. Chambers,

P. Feltovich, L. Galescu, M. A. Goodrich, R. Jeffers et al., “Teamwork-centered

autonomy for extended human-agent interaction in space applications,” in

AAAI 2004 Spring Symposium, 2004, pp. 22–24.

[86] M. Molineaux, M. Klenk, and D. W. Aha, “Goal-driven autonomy in a navy

strategy simulation.” in AAAI, 2010, pp. 1548–1554.

[87] C. A. Corneanu, M. O. Simón, J. F. Cohn, and S. E. Guerrero, “Survey on

rgb, 3d, thermal, and multimodal approaches for facial expression recognition:

History, trends, and affect-related applications,” IEEE transactions on pattern

analysis and machine intelligence, vol. 38, no. 8, pp. 1548–1568, 2016.

[88] D. Mehta, M. F. H. Siddiqui, and A. Y. Javaid, “Facial emotion recognition: A

survey and real-world user experiences in mixed reality,” Sensors, vol. 18, no. 2,

p. 416, 2018.

[89] M. Giering, V. Venugopalan, and K. Reddy, “Multi-modal sensor registration

for vehicle perception via deep neural networks,” in High Performance Extreme

Computing Conference (HPEC), 2015 IEEE. IEEE, 2015, pp. 1–6.

[90] T. Korthals, M. Kragh, P. Christiansen, H. Karstoft, R. N. Jørgensen, and

U. Rückert, “Multi-modal detection and mapping of static and dynamic obsta-

cles in agriculture for process evaluation,” Frontiers in Robotics and AI, vol. 5,

p. 28, 2018.

[91] S. D. Pendleton, H. Andersen, X. Du, X. Shen, M. Meghjani, Y. H. Eng,

D. Rus, and M. H. Ang, “Perception, planning, control, and coordination for

autonomous vehicles,” Machines, vol. 5, no. 1, p. 6, 2017.

[92] C.-E. Hrabia, N. Masuch, and S. Albayrak, “A metrics framework for quan-

tifying autonomy in complex systems,” in German Conference on Multiagent

System Technologies. Springer, 2015, pp. 22–41.

[93] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and

M. Goodrich, “Common metrics for human-robot interaction,” in Proceedings of

the 1st ACM SIGCHI/SIGART conference on Human-robot interaction. ACM,

2006, pp. 33–40.

[94] D. Feil-Seifer, K. Skinner, and M. J. Matarić, “Benchmarks for evaluating so-

cially assistive robotics,” Interaction Studies, vol. 8, no. 3, pp. 423–439, 2007.

[95] P. Damacharla, A. Y. Javaid, J. J. Gallimore, and V. K. Devabhaktuni, “Com-

mon metrics to benchmark human-machine teams (hmt): A review,” IEEE

Access, vol. 6, pp. 38 637–38 655, 2018.

[96] T. J. Bihl, C. Cox, and T. Jenkins, “Finding common ground by unifying

autonomy indices to understand needed capabilities,” in Sensors and Systems

for Space Applications XI, vol. 10641. International Society for Optics and

Photonics, 2018, p. 106410G.

[97] H. Lin, H. Alemzadeh, D. Chen, Z. Kalbarczyk, and R. K. Iyer, “Safety-critical cyber-physical attacks: Analysis, detection, and mitigation,” in Proceedings of the Symposium and Bootcamp on the Science of Security. ACM, 2016, pp. 82–89.

[98] D. Ding, Q.-L. Han, Y. Xiang, X. Ge, and X.-M. Zhang, “A survey on security control and attack detection for industrial cyber-physical systems,” Neurocomputing, vol. 275, pp. 1674–1683, 2018.

[99] R. Alguliyev, Y. Imamverdiyev, and L. Sukhostat, “Cyber-physical systems and their security issues,” Computers in Industry, vol. 100, pp. 212–223, 2018.

[100] H. A. Abdul-Ghani, D. Konstantas, and M. Mahyoub, “A comprehensive IoT attacks survey based on a building-blocked reference model,” International Journal of Advanced Computer Science and Applications (IJACSA), vol. 9, no. 3, 2018.

[101] G. Wu, J. Sun, and J. Chen, “A survey on the security of cyber-physical systems,” Control Theory and Technology, vol. 14, no. 1, pp. 2–10, 2016.

[102] A. Y. Javaid, W. Sun, and M. Alam, “Single and multiple UAV cyber-attack simulation and performance evaluation,” EAI Endorsed Transactions on Scalable Information Systems, vol. 2, no. 4, pp. 1–11, 2015.

[103] S. Bhattacharya and T. Başar, “Game-theoretic analysis of an aerial jamming attack on a UAV communication network,” in Proceedings of the 2010 American Control Conference. IEEE, 2010, pp. 818–823.

[104] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, “Remote attacks on automated vehicles sensors: Experiments on camera and LiDAR,” Black Hat Europe, 2015.

[105] T. Humphreys. (2012) Todd Humphreys’ Research Team Demonstrates First Successful GPS Spoofing of UAV. [Online]. Available: http://tinyurl.com/yydz75x5

[106] G. Vasconcelos, G. Carrijo, R. Miani, J. Souza, and V. Guizilini, “The impact of DoS attacks on the AR.Drone 2.0,” in 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR). IEEE, 2016, pp. 127–132.

[107] R. N. Akram, P.-F. Bonnefoi, S. Chaumette, K. Markantonakis, and D. Sauveron, “Secure autonomous UAVs fleets by using new specific embedded secure elements,” in 2016 IEEE Trustcom/BigDataSE/ISPA. IEEE, 2016, pp. 606–614.

[108] G. Cornelius, P. Caire, N. Hochgeschwender, M. A. Olivares-Mendez, P. Esteves-Verissimo, M. Völp, and H. Voos, “A perspective of security for mobile service robots,” in ROBOT 2017: Third Iberian Robotics Conference, 2017, pp. 88–100. [Online]. Available: http://link.springer.com/10.1007/978-3-319-70833-1_8

[109] C. Kwon, W. Liu, and I. Hwang, “Security analysis for cyber-physical systems against stealthy deception attacks,” in American Control Conference (ACC), 2013. IEEE, 2013, pp. 3344–3349.

[110] D. Davidson, H. Wu, R. Jellinek, T. Ristenpart, and V. Singh, “Controlling UAVs with sensor input spoofing attacks,” in Proceedings of the 10th USENIX Conference on Offensive Technologies, ser. WOOT’16, 2016, pp. 221–231.


[111] V. Behzadan, “Cyber-physical attacks on UAS networks: Challenges and open research problems,” 2017. [Online]. Available: http://arxiv.org/abs/1702.01251

[112] J. Pappalardo. (2018) The Dream of Drone Delivery Just Became Much More Real. [Online]. Available: http://tinyurl.com/yylzkvqs

[113] C. Kwon, S. Yantek, and I. Hwang, “Real-time safety assessment of unmanned aircraft systems against stealthy cyber attacks,” Journal of Aerospace Information Systems, vol. 13, no. 1, pp. 27–45, 2016. [Online]. Available: http://arc.aiaa.org/doi/10.2514/1.I010388

[114] C. L. Krishna and R. R. Murphy, “A review on cybersecurity vulnerabilities for unmanned aerial vehicles,” in 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). IEEE, 2017, pp. 194–199.

[115] D. Hambling. (2017) Ships fooled in GPS spoofing attack suggest Russian cyberweapon. [Online]. Available: http://tinyurl.com/ycuzl3pz

[116] D. Welch and K. Naughton. (2019) GM Falls Millions of Miles Short on Cruise Driving Projection. [Online]. Available: http://tinyurl.com/y5w4zpm7

[117] J. Williams. (2018) 2019 may be year of the driverless car: Here’s where top automakers stand. [Online]. Available: http://tinyurl.com/yxgjhhrh

[118] M. DeBord. (2018) Waymo has launched its commercial self-driving service in Phoenix and it’s called ‘Waymo One’. [Online]. Available: http://tinyurl.com/y25d4g9x

[119] A. Davies. (2018) The Self-Driving Startup Teaching Cars to Talk. [Online]. Available: https://www.wired.com/story/driveai-self-driving-design-frisco-texas/
[120] J. Roulette. (2019) Self-driving buses roll into Orlando’s Lake Nona, a growing testbed for ‘smart city’ technology. [Online]. Available: http://tinyurl.com/yy3kc8kh

[121] A. Greenberg. (2015) Hackers Remotely Kill a Jeep on the Highway - With Me in It. [Online]. Available: http://tinyurl.com/o9coyn4

[122] M. N. Mejri, J. Ben-Othman, and M. Hamdi, “Survey on VANET security challenges and possible cryptographic solutions,” Vehicular Communications, vol. 1, no. 2, pp. 53–66, 2014.

[123] C. V. Veen. (2015) 10 Years, 10 Milestones for Driverless Cars. [Online]. Available: https://tinyurl.com/y4wb3qwt

[124] F. Lambert. (2018) Watch what Tesla Autopilot can see in incredible 360º video. [Online]. Available: https://electrek.co/2018/11/26/tesla-autopilot-360-video/

[125] T. H. Jr. (2018) Move over Tesla, this self-driving car will let you sleep or watch a movie during your highway commute. [Online]. Available: http://tinyurl.com/yad58cgj

[126] Navya. (2018) Navya History. [Online]. Available: https://www.navya-corp.com/index.php/fr/navya/histoire

[127] S. Sloat. (2016) BMW, Intel, Mobileye Link Up in Self-Driving Tech Alliance. [Online]. Available: http://tinyurl.com/y56bwcd2

[128] A. Dhamgaye and N. Chavhan, “Survey on security challenges in VANET,” International Journal of Computer Science and Network, vol. 2, no. 1, p. 1, 2013.

[129] M. A. Elsadig and Y. A. Fadlalla, “VANETs security issues and challenges: A survey,” Indian Journal of Science and Technology, vol. 9, no. 28, 2016.
[130] M. Azees, P. Vijayakumar, and L. J. Deborah, “Comprehensive survey on security services in vehicular ad-hoc networks,” IET Intelligent Transport Systems, vol. 10, no. 6, pp. 379–388, 2016.

[131] M. Saini, A. Alelaiwi, and A. E. Saddik, “How close are we to realizing a pragmatic VANET solution? A meta-survey,” ACM Computing Surveys (CSUR), vol. 48, no. 2, p. 29, 2015.

[132] H. La Vinh and A. R. Cavalli, “Security attacks and solutions in vehicular ad hoc networks: A survey,” International Journal on AdHoc Networking Systems (IJANS), vol. 4, no. 2, pp. 1–20, 2014.

[133] I. A. Sumra, I. Ahmad, H. Hasbullah, and J.-l. b. A. Manan, “Classes of attacks in VANET,” in 2011 Saudi International Electronics, Communications and Photonics Conference (SIECPC). Riyadh, Saudi Arabia: IEEE, 2011. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/5876939/

[134] M. S. Al-Kahtani, “Survey on security attacks in vehicular ad hoc networks (VANETs),” in 2012 6th International Conference on Signal Processing and Communication Systems (ICSPCS). IEEE, 2012, pp. 1–9.

[135] S. Gillani, F. Shahzad, A. Qayyum, and R. Mehmood, “A survey on security in vehicular ad hoc networks,” in International Workshop on Communication Technologies for Vehicles. Springer, 2013, pp. 59–74.

[136] M. N. Mejri and M. Hamdi, “Recent advances in cryptographic solutions for vehicular networks,” in 2015 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2015. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7238573/

[137] E. B. Hamida, H. Noura, and W. Znaidi, “Security of cooperative intelligent transport systems: Standards, threats analysis and cryptographic countermeasures,” Electronics, vol. 4, no. 3, pp. 380–423, 2015.

[138] P. Vijayakumar, M. Azees, and A. Kannan, “Dual authentication and key management techniques for secure data transmission in vehicular ad hoc networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1015–1028, 2015. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7327222/

[139] M. N. Mejri and J. Ben-Othman, “Entropy as a new metric for denial of service attack detection in vehicular ad-hoc networks,” in Proceedings of the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. ACM, 2014, pp. 73–79.

[140] N. Lyamin, A. Vinel, M. Jonsson, and J. Loo, “Real-time detection of denial-of-service attacks in IEEE 802.11p vehicular networks,” IEEE Communications Letters, vol. 18, no. 1, pp. 110–113, 2014.

[141] O. Puñal, A. Aguiar, and J. Gross, “In VANETs we trust?: Characterizing RF jamming in vehicular networks,” in Proceedings of the Ninth ACM International Workshop on Vehicular Inter-Networking, Systems, and Applications. ACM, 2012, pp. 83–92.

[142] M. N. Mejri and J. Ben-Othman, “Detecting greedy behavior by linear regression and watchdog in vehicular ad hoc networks,” in 2014 IEEE Global Communications Conference (GLOBECOM). Austin, TX, USA: IEEE, 2014. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7037603/

[143] R. Bauza, J. Gozalvez, and J. Sanchez-Soriano, “Road traffic congestion detection through cooperative vehicle-to-vehicle communications,” in 2010 IEEE 35th Conference on Local Computer Networks (LCN). Denver, CO, USA: IEEE, 2010. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/5735780/

[144] S. Fontanelli, E. Bini, and P. Santi, “Dynamic route planning in vehicular networks based on future travel estimation,” in 2010 IEEE Vehicular Networking Conference (VNC). Jersey City, NJ, USA: IEEE, 2010. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/5698247/

[145] M. T. Garip, M. E. Gursoy, P. Reiher, and M. Gerla, “Congestion attacks to autonomous cars using vehicular botnets,” in NDSS Workshop on Security of Emerging Networking Technologies (SENT), San Diego, CA, 2015.

[146] R. Waters and T. Bradshaw. (2016, May) Rise of the robots is sparking an investment boom. San Francisco. [Online]. Available: http://tinyurl.com/yymnbxrt

[147] N. Fearn. (2018, Jan.) The Cutting-Edge Tech set to Define 2018. [Online]. Available: http://www.techx365.com/author.asp?section_id=686&doc_id=739321

[148] M. Hans, B. Graf, and R. D. Schraft, “Robotic home assistant Care-O-bot: Past-present-future,” in Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication, 2002, pp. 380–385.

[149] D. Quarta, M. Pogliani, M. Polino, F. Maggi, A. M. Zanchettin, and S. Zanero, “An experimental security analysis of an industrial robot controller,” in 2017 38th IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 268–286.

[150] F. Maggi, D. Quarta, M. Pogliani, M. Polino, A. M. Zanchettin, and S. Zanero, “Rogue robots: Testing the limits of an industrial robot’s security,” Trend Micro, Politecnico di Milano, Tech. Rep., 2017.


[151] C. Cerrudo and L. Apa, “Hacking robots before Skynet,” IOActive Website, pp. 1–17, 2017.

[152] IANS. (2015) Surgical robots to become ubiquitous in Indian hospitals. [Online]. Available: http://tinyurl.com/y2rfp2es

[153] E. Snell. (2015) Phishing Attack Affects 3,300 Partners HealthCare Patients. [Online]. Available: http://tinyurl.com/y3v8on9f

[154] K. Zetter, “Hospital networks are leaking data, leaving critical devices vulnerable,” WIRED, Jun. 2014. [Online]. Available: http://tinyurl.com/y5kxsfol

[155] T. Denning, C. Matuszek, K. Koscher, J. R. Smith, and T. Kohno, “A spotlight on security and privacy risks with future household robots,” in Proceedings of the 11th International Conference on Ubiquitous Computing (UbiComp ’09). ACM, 2009, p. 105.

[156] S. Yong, D. Lindskog, R. Ruhl, and P. Zavarsky, “Risk mitigation strategies for mobile Wi-Fi robot toys from online pedophiles,” in 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third International Conference on Social Computing (SocialCom). IEEE, 2011, pp. 1220–1223.

[157] University of Nottingham, Mixed Reality Laboratory. (2016) Future Everyday Interaction with the Autonomous Internet of Things (A-IoT). [Online]. Available: https://tinyurl.com/yylnuarh

[158] C. Engineering. (2016) IoT to IoAT: Internet of Autonomous Things devices provides solutions. [Online]. Available: https://tinyurl.com/y5nlonp8

[159] S. Industry. (2017) Blockchain makes IoT devices autonomous. [Online]. Available: https://tinyurl.com/y5w784re

[160] D. Kyriazis and T. Varvarigou, “Smart, autonomous and reliable internet of things,” Procedia Computer Science, vol. 21, pp. 442–448, 2013.

[161] E. Şahin, “Swarm robotics: From sources of inspiration to domains of application,” in Swarm Robotics, E. Şahin and W. M. Spears, Eds. Berlin: Springer Berlin Heidelberg, 2005, pp. 10–20.

[162] N. Correll and A. Martinoli, “Multirobot inspection of industrial machinery,” IEEE Robotics & Automation Magazine, vol. 16, no. 1, pp. 103–112, 2009. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/4799452/

[163] D. Albani, J. IJsselmuiden, R. Haken, and V. Trianni, “Monitoring and mapping with robot swarms for agricultural applications,” in 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2017, pp. 1–6.

[164] W. Liu and A. F. T. Winfield, “Modeling and optimization of adaptive foraging in swarm robotic systems,” The International Journal of Robotics Research, vol. 29, no. 14, pp. 1743–1760, 2010. [Online]. Available: https://doi.org/10.1177/0278364910375139

[165] M. Haque, E. Baker, C. Ren, D. Kirkpatrick, and J. A. Adams, “Analysis of biologically inspired swarm communication models,” in Advances in Hybridization of Intelligent Methods. Springer, 2018, pp. 17–38.

[166] Y. Tan and Z.-y. Zheng, “Research advance in swarm robotics,” Defence Technology, vol. 9, no. 1, pp. 18–39, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S221491471300024X

[167] F. Higgins, A. Tomlinson, and K. M. Martin, “Threats to the swarm: Security considerations for swarm robotics,” International Journal on Advances in Security, vol. 2, no. 2&3, pp. 288–297, 2009.

[168] Y. K. Sharma and A. Bagla, “Security challenges for swarm robotics,” Security Challenges, vol. 2, no. 1, pp. 45–48, 2009.

[169] A. Y. Javaid, W. Sun, V. K. Devabhaktuni, and M. Alam, “Cyber security threat analysis and modeling of an unmanned aerial vehicle system,” in 2012 IEEE Conference on Technologies for Homeland Security (HST). IEEE, 2012, pp. 585–590.

[170] G. Vachtsevanos, L. Tang, and J. Reimann, “An intelligent approach to coordinated control of multiple unmanned aerial vehicles,” in Proceedings of the American Helicopter Society 60th Annual Forum, Baltimore, MD, 2004.

[171] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman et al., “A perception-driven autonomous urban vehicle,” Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008.

[172] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, J. Dolan, D. Duggins, D. Ferguson, T. Galatali, C. Geyer, M. Gittleman, S. Harbaugh, M. Hebert, T. Howard, A. Kelly, D. Kohanbash, M. Likhachev, N. Miller, K. Peterson, R. Rajkumar, P. Rybski, B. Salesky, S. Scherer, Y.-W. Seo, R. Simmons, S. Singh, J. Snider, A. Stentz, W. R. Whittaker, J. Ziglar, J. Struble, and M. Taylor, “Tartan Racing: A multi-modal approach to the DARPA Urban Challenge,” Defense, vol. 94, no. 4, pp. 386–387, 2007.

[173] P. Guo, H. Kim, N. Virani, J. Xu, M. Zhu, and P. Liu, “Exploiting physical dynamics to detect actuator and sensor attacks in mobile robots,” arXiv preprint arXiv:1708.01834, 2017.
[174] D. M. Gage, “Security considerations for autonomous robots,” in 1985 IEEE Symposium on Security and Privacy. IEEE, 1985, pp. 224–224.

[175] J. Goppert, W. Liu, A. Shull, V. Sciandra, I. Hwang, and H. Aldridge, “Numerical analysis of cyberattacks on unmanned aerial systems,” Jun. 2012.

[176] J. Su, J. He, P. Cheng, and J. Chen, “A stealthy GPS spoofing strategy for manipulating the trajectory of an unmanned aerial vehicle,” IFAC-PapersOnLine, vol. 49, no. 22, pp. 291–296, 2016.

[177] E. A. Oladimeji, S. Supakkul, and L. Chung, “Security threat modeling and analysis: A goal-oriented approach,” in Proc. of the 10th IASTED International Conference on Software Engineering and Applications (SEA 2006). Citeseer, 2006, pp. 13–15.

[178] B. B. Madan, M. Banik, and D. Bein, “Securing unmanned autonomous systems from cyber threats,” The Journal of Defense Modeling and Simulation, vol. 16, no. 2, pp. 119–136, 2019.

[179] F. Shull. (2016) Cyber Threat Modeling: An Evaluation of Three Methods. [Online]. Available: https://tinyurl.com/y3j56guz

[180] T. Foltyn, “Cybersecurity trends 2019: Privacy and intrusion in the global village,” Check Point Research, Dec. 2018. [Online]. Available: http://tinyurl.com/yxeph2cb

[181] K. Mansfield, T. Eveleigh, T. H. Holzer, and S. Sarkani, “Unmanned aerial vehicle smart device ground control station cyber security threat model,” in 2013 IEEE International Conference on Technologies for Homeland Security (HST). IEEE, 2013, pp. 722–728.

[182] B. Beyst. (2016) Comparing ThreatModeler to Microsoft Threat Modeling Tool (TMT). [Online]. Available: https://tinyurl.com/y379hnrb

[183] G. W. Clark, M. V. Doran, and T. R. Andel, “Cybersecurity issues in robotics,” in 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). IEEE, 2017, pp. 1–5.

[184] F. J. R. Lera, C. F. Llamas, Á. M. Guerrero, and V. M. Olivera, “Cybersecurity of robotics and autonomous systems: Privacy and safety,” in Robotics: Legal, Ethical and Socioeconomic Impacts. InTech, 2017. [Online]. Available: http://dx.doi.org/10.5772/intechopen.69796

[185] K. Koscher, A. Czeskis, F. Roesner, S. Patel, T. Kohno, S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham et al., “Experimental security analysis of a modern automobile,” in 2010 IEEE Symposium on Security and Privacy (SP). IEEE, 2010, pp. 447–462.

[186] S. Parkinson, P. Ward, K. Wilson, and J. Miller, “Cyber threats facing autonomous and connected vehicles: Future challenges,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2898–2915, 2017.

[187] H. Al-Mohannadi, Q. Mirza, A. Namanya, I. Awan, A. Cullen, and J. Disso, “Cyber-attack modeling analysis techniques: An overview,” in 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW). IEEE, 2016, pp. 69–76.

[188] J. Petit and S. E. Shladover, “Potential cyberattacks on automated vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 546–556, 2015.

[189] E. Yağdereli, C. Gemci, and A. Z. Aktaş, “A study on cyber-security of autonomous and unmanned vehicles,” The Journal of Defense Modeling and Simulation, vol. 12, no. 4, pp. 369–381, 2015.

[190] V. L. Thing and J. Wu, “Autonomous vehicle security: A taxonomy of attacks and defences,” in 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). IEEE, 2016, pp. 164–170.

[191] T. Bonaci, J. Herron, T. Yusuf, J. Yan, T. Kohno, and H. J. Chizeck, “To make a robot secure: An experimental analysis of cyber security threats against teleoperated surgical robots,” arXiv preprint arXiv:1504.04339, 2015.

[192] A. Sanjab, W. Saad, and T. Başar, “Prospect theory for enhanced cyber-physical security of drone delivery systems: A network interdiction game,” in 2017 IEEE International Conference on Communications (ICC). IEEE, 2017, pp. 1–6.

[193] S. Bhattacharya and T. Başar, “Multi-layer hierarchical approach to double sided jamming games among teams of mobile agents,” Proceedings of the IEEE Conference on Decision and Control, pp. 5774–5779, 2012.

[194] K. Merrick, M. Hardhienata, K. Shafi, and J. Hu, “A survey of game theoretic approaches to modelling decision-making in information warfare scenarios,” Future Internet, vol. 8, no. 3, p. 34, 2016.

[195] Z. Xu and Q. Zhu, “A cyber-physical game framework for secure and resilient multi-agent autonomous systems,” in 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015, pp. 5156–5161.

[196] A. Ferdowsi, U. Challita, W. Saad, and N. B. Mandayam, “Robust deep reinforcement learning for security and safety in autonomous vehicle systems,” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2018, pp. 307–312.

[197] A. Ferdowsi, W. Saad, and N. B. Mandayam, “Colonel Blotto game for secure state estimation in interdependent critical infrastructure,” arXiv preprint arXiv:1709.09768, 2017.

[198] Z. Xu and Q. Zhu, “A game-theoretic approach to secure control of communication-based train control systems under jamming attacks,” in Proceedings of the 1st International Workshop on Safe Control of Connected and Autonomous Vehicles. ACM, 2017, pp. 27–34.

[199] A. Sanjab, W. Saad, and T. Başar, “Prospect theory for enhanced cyber-physical security of drone delivery systems: A network interdiction game,” in 2017 IEEE International Conference on Communications (ICC). IEEE, 2017, pp. 1–6.

[200] K. Wang, M. Du, S. Maharjan, and Y. Sun, “Strategic honeypot game model for distributed denial of service attacks in the smart grid,” IEEE Transactions on Smart Grid, vol. 8, no. 5, pp. 2474–2482, 2017.

[201] A. Attiah, M. Chatterjee, and C. C. Zou, “A game theoretic approach to model cyber attack and defense strategies,” in 2018 IEEE International Conference on Communications (ICC). IEEE, 2018, pp. 1–7.

[202] Q. Wu, S. Shiva, S. Roy, C. Ellis, and V. Datla, “On modeling and simulation of game theory-based defense mechanisms against DoS and DDoS attacks,” in Proceedings of the 2010 Spring Simulation Multiconference. Society for Computer Simulation International, 2010, p. 159.

[203] H. S. Bedi, S. Roy, and S. Shiva, “Game theory-based defense mechanisms against DDoS attacks on TCP/TCP-friendly flows,” in 2011 IEEE Symposium on Computational Intelligence in Cyber Security (CICS). IEEE, 2011, pp. 129–136.

[204] T. Spyridopoulos, G. Karanikas, T. Tryfonas, and G. Oikonomou, “A game theoretic defence framework against DoS/DDoS cyber attacks,” Computers & Security, vol. 38, pp. 39–50, 2013.

[205] Y. Wang, Y. Zhang, L. Zhang, L. Zhu, and Y. Liu, “Game based DDoS attack strategies in cloud of things,” in 6th International Conference on Information Engineering for Mechanics and Materials. Atlantis Press, 2016.

[206] A. Michalas, N. Komninos, and N. R. Prasad, “Multiplayer game for DDoS attacks resilience in ad hoc networks,” in 2011 2nd International Conference on Wireless Communication, Vehicular Technology, Information Theory and Aerospace & Electronic Systems Technology (Wireless VITAE). IEEE, 2011, pp. 1–5.

[207] A. Michalas, N. Komninos, and N. Prasad, “Cryptographic puzzles and game theory against DoS and DDoS attacks in networks,” International Journal of Computer Research, vol. 19, no. 1, p. 79, 2012.

[208] A. Chowdhary, S. Pisharody, A. Alshamrani, and D. Huang, “Dynamic game based security framework in SDN-enabled cloud networking environments,” in Proceedings of the ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. ACM, 2017, pp. 53–58.

[209] G. Yan, R. Lee, A. Kent, and D. Wolpert, “Towards a Bayesian network game framework for evaluating DDoS attacks and defense,” in Proceedings of the 2012 ACM Conference on Computer and Communications Security. ACM, 2012, pp. 553–566.

[210] M. Wright, S. Venkatesan, M. Albanese, and M. P. Wellman, “Moving target defense against DDoS attacks: An empirical game-theoretic analysis,” in Proceedings of the 2016 ACM Workshop on Moving Target Defense. ACM, 2016, pp. 93–104.

[211] E. M. Kandoussi, I. El Mir, M. Hanini, and A. Haqiq, “Modeling an anomaly-based intrusion prevention system using game theory,” in International Conference on Innovations in Bio-Inspired Computing and Applications. Springer, 2017, pp. 266–276.

[212] E. Bukharov, D. Zybin, A. Soloviev, and A. Kalach, “Mathematical simulation of countermeasures to attacks of ‘denial of service’ type with the use of game theory approach,” in Journal of Physics: Conference Series, vol. 1203, no. 1. IOP Publishing, 2019, p. 012076.

[213] O. A. Wahab, J. Bentahar, H. Otrok, and A. Mourad, “Optimal load distribution for the detection of VM-based DDoS attacks in the cloud,” IEEE Transactions on Services Computing, 2017.

[214] B. Subba, S. Biswas, and S. Karmakar, “A game theory based multi layered intrusion detection framework for wireless sensor networks,” International Journal of Wireless Information Networks, vol. 25, no. 4, pp. 399–421, 2018.

[215] P. Narwal, S. N. Singh, and D. Kumar, “Game-theory based detection and prevention of DoS attacks on networking node in OpenStack private cloud,” in 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). IEEE, 2017, pp. 481–486.

[216] M. V. De Assis, A. H. Hamamoto, T. Abrão, and M. L. Proença, “A game theoretical based system using Holt-Winters and genetic algorithm with fuzzy logic for DoS/DDoS mitigation on SDN networks,” IEEE Access, vol. 5, pp. 9485–9496, 2017.

[217] L. Gao, Y. Li, L. Zhang, F. Lin, and M. Ma, “Research on detection and defense mechanisms of DoS attacks based on BP neural network and game theory,” IEEE Access, vol. 7, pp. 43018–43030, 2019.

[218] G. Wu, Z. Li, and L. Yao, “DoS mitigation mechanism based on non-cooperative repeated game for SDN,” in 2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2018, pp. 612–619.

[219] A. Ilavendhan and K. Saruladha, “A zero-sum game-based security algorithm against DoS attack in VANETs,” International Journal of Scientific & Technology Research, vol. 9, no. 3, pp. 4830–4834, 2020.

[220] A. Mairaj and A. Y. Javaid, “Game theoretic solution for an unmanned aerial vehicle network host under DDoS attack,” Computer Networks, p. 108962, 2022.

[221] R. M. Husar and J. Stracener, “System autonomy modeling during early concept definition,” Editorial Preface, vol. 5, no. 6, 2014.

[222] F. Richter. (2018) Fatal Accidents Damage Trust in Autonomous Driving. [Online]. Available: https://tinyurl.com/y3lbkx8a

[223] D. Causevic. (2018) How Machine Learning Can Enhance Cybersecurity for Autonomous Cars. [Online]. Available: http://tinyurl.com/yyrhhsg7

[224] L. Critchley. (2018) How Machine-Based Learning Will Protect Automobiles from Cyber Attacks. [Online]. Available: https://www.azom.com/article.aspx?ArticleID=15659

[225] E. C. Ferrer, “The blockchain: A new framework for robotic swarm systems,” arXiv preprint arXiv:1608.00695, 2016.


[226] M. Aggarwal. (2017) Blockchain In Robotics - A Sneak Peek Into The Future. [Online]. Available: https://tinyurl.com/y5flry5n

[227] S. Williamson. (2018) Blockchain May Be the Answer to Making Self Driving Cars Safer. [Online]. Available: http://tinyurl.com/y3xv467u

[228] N. Zhang, S. Zhang, P. Yang, O. Alhussein, W. Zhuang, and X. S. Shen, “Software defined space-air-ground integrated vehicular networks: Challenges and solutions,” IEEE Communications Magazine, vol. 55, no. 7, pp. 101–109, 2017.

[229] Q. Niyaz, W. Sun, and A. Y. Javaid, “A deep learning based DDoS detection system in software-defined networking (SDN),” arXiv preprint arXiv:1611.07400, 2016.

[230] A. Ydenberg, N. Heir, and B. Gill, “Security, SDN, and VANET technology of driver-less cars,” in 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2018, pp. 313–316.

[231] R. Kumar, M. A. Sayeed, V. Sharma, and I. You, “An SDN-based secure mobility model for UAV-ground communications,” in International Symposium on Mobile Internet Security. Springer, 2017, pp. 169–179.

[232] E. Amoroso. (2019) Security Advantages of Software Defined Networking (SDN). [Online]. Available: http://tinyurl.com/yynq6gpg

[233] D. M. West. (2017) Securing the future of driverless cars. [Online]. Available: https://www.brookings.edu/research/securing-the-future-of-driverless-cars/

[234] K. Kaur and G. Rampersad, “Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars,” Journal of Engineering and Technology Management, vol. 48, pp. 87–96, 2018.

[235] J. Walker, “The Self-Driving Car Timeline – Predictions from the Top 11 Global Automakers,” 2019. [Online]. Available: https://tinyurl.com/y9uaosuu

[236] T. Maddox, “How autonomous vehicles could save over 350K lives in the US and millions worldwide,” 2018. [Online]. Available: https://tinyurl.com/y6me578b

[237] D. J. Atkinson, “Emerging cyber-security issues of autonomy and the psychopathology of intelligent machines,” in 2015 AAAI Spring Symposium Series, 2015.

[238] G. Owen, Game Theory, 4th ed. Bingley, England: Emerald Group Publishing, 2013.

[239] Department of Defense Research and Engineering, “Technical Assessment: Autonomy,” Office of Technical Intelligence, Office of the Assistant Secretary of Defense for Research and Engineering, Tech. Rep., 2015.

[240] K. Berntorp, T. Hoang, R. Quirynen, and S. Di Cairano, “Control architecture design for autonomous vehicles,” in 2018 IEEE Conference on Control Technology and Applications (CCTA). IEEE, 2018, pp. 404–411.

[241] L. Petnga and H. Xu, “Security of unmanned aerial vehicles: Dynamic state estimation under cyber-physical attacks,” in 2016 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2016, pp. 811–819.

[242] J. Kocić, N. Jovičić, and V. Drndarević, “Sensors and sensor fusion in autonomous vehicles,” in 2018 26th Telecommunications Forum (TELFOR). IEEE, 2018, pp. 420–425.

[243] C. Sitawarin, A. N. Bhagoji, A. Mosenia, M. Chiang, and P. Mittal, “DARTS: Deceiving autonomous cars with toxic signs,” arXiv preprint arXiv:1802.06430, 2018.

[244] A. Qayyum, M. Usama, J. Qadir, and A. Al-Fuqaha, “Securing connected & autonomous vehicles: Challenges posed by adversarial machine learning and the way forward,” arXiv preprint arXiv:1905.12762, 2019.

[245] P. Guo, H. Kim, N. Virani, J. Xu, M. Zhu, and P. Liu, “Exploiting physical dynamics to detect actuator and sensor attacks in mobile robots,” arXiv preprint arXiv:1708.01834, 2017.

[246] E. Bompard, C. Gao, R. Napoli, A. Russo, M. Masera, and A. Stefanini, “Risk assessment of malicious attacks against power systems,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 39, no. 5, pp. 1074–1085, 2009.

[247] FCC Wireless Telecommunications, “Dedicated Short Range Communications (DSRC) Service,” 2019. [Online]. Available: https://tinyurl.com/yb72km89

[248] J. B. Kenney, “Dedicated short-range communications (DSRC) standards in the United States,” Proceedings of the IEEE, vol. 99, no. 7, pp. 1162–1182, 2011.

[249] C. Sommer, O. K. Tonguz, and F. Dressler, “Adaptive beaconing for delay-sensitive and congestion-aware traffic information systems,” in 2010 IEEE Vehicular Networking Conference. IEEE, 2010, pp. 1–8.

[250] R. D. McKelvey, A. M. McLennan, and T. L. Turocy, “Gambit: Software Tools for Game Theory, Version 16.0.1,” 2016. [Online]. Available: http://www.gambit-project.org

[251] F. Jahan, W. Sun, and Q. Niyaz, “A non-cooperative game based model for the cybersecurity of autonomous systems,” in 41st IEEE Symposium on Security and Privacy. Los Alamitos, CA, USA: IEEE Computer Society, Dec. 2020, pp. 612–619. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/PADSW.2018.8644627

[252] F. Jahan, W. Sun, Q. Niyaz, and M. Alam, “Security modeling of autonomous systems: A survey,” ACM Computing Surveys (CSUR), vol. 52, no. 5, pp. 1–34, 2019.

[253] S. Biswas, J. Mišić, and V. Mišić, “DDoS attack on WAVE-enabled VANET through synchronization,” in 2012 IEEE Global Communications Conference (GLOBECOM). IEEE, 2012, pp. 1079–1084.

[254] B. M. Mughal, A. A. Wagan, and H. Hasbullah, “Impact of safety beacons on the performance of vehicular ad hoc networks,” in International Conference on Software Engineering and Computer Systems. Springer, 2011, pp. 368–383.

[255] ABC News, “Berlin artist uses handcart full of smartphones to trick Google Maps’ traffic algorithm into thinking there is traffic jam,” 2020. [Online]. Available: https://tinyurl.com/3mjeu84m

[256] J. Pita, M. Tambe, C. Kiekintveld, S. Cullen, and E. Steigerwald, “GUARDS: Game theoretic security allocation on a national scale,” in The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, 2011, pp. 37–44.

[257] S. McBride. (2018) The Driverless Car Revolution Has Begun – Here’s How To Profit. [Online]. Available: http://tinyurl.com/ydzby4yr

[258] L. Dormehl. (2017, Apr.) MiRo is the robot dog that promises to be a geek’s best friend. [Online]. Available: https://tinyurl.com/yxz74w93
