
USABLE SECURITY AND PRIVACY FOR SECURITY AND PRIVACY WORKERS

Supporting Artificial Intelligence/Machine Learning Security Workers Through an Adversarial Techniques, Tools, and Common Knowledge Framework

Mohamad Fazelnia, Ahmet Okutan, and Mehdi Mirakhorli | Rochester Institute of Technology

This article focuses on supporting artificial intelligence (AI)/machine learning (ML) security workers. It presents an AI/ML adversarial techniques, tools, and common knowledge (AI/ML ATT&CK) framework that enables AI/ML security workers to intuitively explore offensive and defensive tactics.

In recent years, there has been a significant increase in the use of artificial intelligence (AI) and machine learning (ML) in a variety of application domains, ranging from autonomous cars to medical diagnosis tools. Such increased usage of AI/ML techniques in real-world applications has attracted adversaries and cyberattackers who exploit the weaknesses in these techniques to achieve malicious goals. AI/ML capability builders need to employ appropriate mitigation techniques during the development and maintenance stages to minimize the attack surface of AI software products.1 This is a challenging task, considering the variety of AI/ML techniques, their unique vulnerabilities, and the emergence of new attack vectors.

With the shortage of skilled information security workers continuing to grow, this problem is exacerbated when engineering AI-enabled software systems, for the simple reason that it requires security workers who are AI/ML subject matter experts (SMEs), and the demand for such expertise continues to exceed the supply. This article aims to provide a framework that can be used by AI engineers to transform them into AI/ML security workers.

To develop AI/ML-enabled systems resilient against cyberattacks, AI engineers must be aware of the existing attacks and their distinctive characteristics, such as their underlying techniques, evolving scenarios, and goals. For classical software systems, there are extensive resources, such as MITRE's Common Weakness Enumeration, Common Attack Pattern Enumeration, and Adversarial Tactics, Techniques, and Common Knowledge frameworks, that enumerate software weaknesses and common attacks.2,3 However, these frameworks do not cover the security issues of AI/ML systems.

Digital Object Identifier 10.1109/MSEC.2022.3221058
Date of current version: 19 December 2022


Having similar resources for AI engineers would enable them to become AI/ML security workers by guiding them in investigating potential attack scenarios against AI products and choosing appropriate defensive measures.

While existing resources, such as the U.S. National Institute of Standards and Technology (NIST) Taxonomy and Terminology of Adversarial Machine Learning1 and other similar work4,5 provide a common language to discuss various types of attacks by using an upper taxonomy, they often lack the details and concrete attack/defense techniques necessary to make actionable decisions for a given AI system.6 To fill this gap, we need to support the training of the next generation of AI/ML security workers and ease their day-to-day tasks. To this end, we argue that it is necessary to have an intuitive framework that guides AI/ML security workers, particularly novices, to ensure that cybersecurity thinking is explicitly integrated into their daily AI engineering and decision-making activities.

This article, therefore, discusses an AI/ML adversarial techniques, tools, and common knowledge (ATT&CK) framework, an intuitive and comprehensive framework based on a systematic literature review (SLR) that characterizes offensive and defensive techniques, tactics, and tools related to the security of AI/ML-enabled systems. AI engineers can employ the AI/ML ATT&CK framework during software design and implementation as an easy-to-use decision support system to identify weaknesses associated with AI models, explore viable attacks, and consider appropriate mitigation techniques and security measures. Our contributions are fourfold:

■ We provide a definition of AI/ML security workers, describing their role as AI cybersecurity professionals and advocates who take both offensive and defensive measures.
■ We provide an extensive knowledge base of 102 attacks, 65 mitigation techniques, and 105 tools related to the security of AI/ML-enabled software systems. This catalog is based on an SLR to categorize and characterize currently available attack scenarios, mitigation techniques, and tools discussed in 860 papers.
■ We define an AI/ML ATT&CK framework that aims to support both offensive and defensive AI/ML security workers. The offensive view is organized as a decision tree, which enumerates 26 attack scenarios and is mapped to 102 concrete attack techniques and existing tools capable of implementing the attacks. The defensive view enumerates 25 defense scenarios and is mapped to 65 mitigation techniques.
■ An online interactive version of the framework is provided to support AI/ML security workers, especially less-experienced ones. A limited-scope user study has demonstrated promising results for the effectiveness of such a framework.

All research materials, the generated AI/ML ATT&CK framework, developed attack mitigations, data sets, and investigated tools are shared with the community at www.aimlattack.com/.

AI/ML Security Workers
Who are AI/ML security workers? We define them as cybersecurity professionals7,8,9 who contribute to the development of secure AI-enabled software systems through monitoring the training and deployment of AI systems as well as formulating new adversarial attack scenarios, measuring their severity, and devising novel defense strategies for the robustness and integrity of AI-enabled systems in different application domains. AI/ML security workers are responsible for implementing, validating, justifying, and advocating the required defenses and mitigation strategies. While this role is not yet defined in standardized work roles, such as NIST's Workforce Framework for Cybersecurity,10 recent job posts demonstrate growth in the demand for AI security engineers, adversarial AI experts, and many other roles with similar responsibilities. According to a recent publication,11 just about every industry needs workers with AI skills as companies focus on technologies that give computers the capability to think, learn, and adapt.

To utilize AI at its optimal potential, AI engineers need to have programming skills and knowledge in statistical learning, data modeling, data evaluation, software engineering and system design, and distributed computing and, in general, the ability for conceptual thinking to understand how a product is used and how it can be used more effectively. We argue that AI/ML security workers, in addition to the preceding skills, need to be equipped with offensive and defensive thinking so that they can identify potential attack scenarios, abuse cases, and mitigation techniques as they engineer AI software. AI/ML security workers deal with complex tasks that require creativity, critical thinking, complex information processing, and decision making. It has been demonstrated that such tasks need higher cognitive skills.11

Unlike software security workers, who have access to many resources, such as the Open Web Application Security Project Top 10 (https://owasp.org/Top10/), security checklists, secure coding practices, the IEEE Top 10 Design Flaws (https://cybersecurity.ieee.org/blog/2015/11/13/avoiding-the-top-10-security-flaws/), and more, AI/ML security workers currently lack comprehensive resources to support them in decision making. In particular, there are no checklists or other frameworks that enumerate AI/ML weaknesses and attack scenarios and that can support both novices and experts in systematically identifying known flaws in AI software. Despite the significance of security in AI software, there is a lack of academic research, especially regarding any holistic guidelines to support AI/ML security workers.



Our work aims to leverage an SLR with the objective of integrating and synthesizing existing knowledge to provide new insights for AI/ML security workers. This work aims to characterize AI/ML cybersecurity techniques, tactics, and tools in a way that can help AI/ML security workers better understand the attack surfaces of AI-powered systems, reason about AI-specific vulnerabilities in a system, and employ the best defense techniques to mitigate those vulnerabilities. In particular, we aim to support two groups of AI/ML security workers who investigate the following questions in their day-to-day activities:

1. Offensive AI/ML security workers: What are the AI/ML-specific weaknesses and vulnerabilities, and how do attackers use these to exploit the system and execute organized attacks?
2. Defensive AI/ML security workers: What are the potential mitigation techniques that can be leveraged to prevent and mitigate the vulnerabilities and improve the system's robustness?

Methodology
To create a comprehensive AI/ML ATT&CK framework that can serve as a guideline, we performed an SLR following the guidelines provided by Kitchenham.12

Review Protocol
Table 1 summarizes our SLR protocol. We performed the SLR in two of the most popular computer science archives: IEEE Xplore and the ACM Digital Library. First, to find appropriate search keywords, we collected an initial set of relevant works by manual exploration and then searched for common and relevant words in their titles and abstracts. Then, as we found new relevant papers, we added them to the collection and refined the set of keywords. Based on this procedure, we formed the following search query: {(Artificial Intelligence OR Machine Learning OR Deep Learning OR Neural Networks) AND (threat OR mitigation OR adversary)}. We also used other queries that relied on variations of these terms; the variations consisted of words with and without possible prefixes and suffixes for each keyword (e.g., adverse, adversarial, adversary, and so on). This review was conducted on peer-reviewed papers published between 2000 and 2021.
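For illustration, the following minimal sketch (not part of the published protocol) shows how such query variants can be assembled programmatically; the variant lists are assumptions chosen to mirror the example keywords above:

    from itertools import product

    ai_terms = ["Artificial Intelligence", "Machine Learning", "Deep Learning", "Neural Networks"]
    threat_variants = {
        "threat": ["threat", "threats"],
        "mitigation": ["mitigation", "mitigate", "mitigating"],
        "adversary": ["adversary", "adversarial", "adverse"],
    }

    # One Boolean query string per combination of variant spellings.
    for combo in product(*threat_variants.values()):
        print(f"({' OR '.join(ai_terms)}) AND ({' OR '.join(combo)})")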
After extracting the papers from IEEE Xplore and the ACM Digital Library, we documented the results to prepare them for the review process. From the initial search, we extracted 20,321 papers. Then, we checked the papers' titles and filtered out papers on the basis of the inclusion and exclusion criteria presented in Table 1. After this first filtration step, we had 3,329 papers. Then, we applied the inclusion and exclusion criteria to the abstracts of the remaining papers. At the end of this step, we had 860 papers. Finally, we went through the papers and started the detailed literature review.

After randomly reviewing the first 10% of the selected papers, the initial skeleton of the AI/ML ATT&CK framework was formulated, consisting of the framework structure and the proposed attributes to model cyberadversaries and security techniques13 as well as related tools and toolchains. While reviewing the papers and adding new techniques to the framework, we updated the framework skeleton until all the papers were reviewed. Moreover, during the review of each paper, we performed a snowballing process to review other related works cited in the paper. As a result, we present a unifying taxonomy of AI/ML-specific cyberattacks and cyberdefenses that provides information that can help AI/ML capability builders deploy appropriate cybersecurity tactics during system design to deliver a reliable and secure AI-based system.

Table 1. The review protocol.
Research questions: What are the AI/ML-specific weaknesses and vulnerabilities, and how do attackers use them to exploit the system and execute organized attacks? What are the potential mitigation techniques that can be leveraged to prevent and mitigate vulnerabilities and improve the system's robustness?
Dates: 2000-2021
Databases: IEEE Xplore and ACM Digital Library
Search criteria: English; search over title, abstract, and keywords
Search keywords: {(Artificial Intelligence OR Machine Learning OR Deep Learning OR Neural Networks) AND (threat OR mitigation OR adversary)}
Inclusion/exclusion criteria: Inclusion: full papers that focus on attacks and weaknesses of AI/ML models or on improving robustness against attacks. Exclusion: papers not written in English, reports, abstracts, idea papers, summaries and discussions, and duplicate studies.
Number of papers: Initial results: 20,321; after the first review step: 3,329; fully synthesized: 860

AI/ML ATT&CK Framework
Based on the conducted literature review and analysis of the papers described in the previous section, we developed our AI/ML ATT&CK framework. We organized the framework around three main components (attacks, mitigation, and tools) and two views: offensive and defensive.


The three components of our framework and their intended usage by AI/ML security workers are provided in Figure 1. The offensive and defensive views provide AI/ML security workers with scenario-based decision trees to identify appropriate techniques and tools for their offensive/defensive tasks.

The three main components of the framework are explained as follows:

1. Attacks: This represents a comprehensive analysis of offensive techniques against AI/ML-enabled systems as well as an adversary's goals, assumptions, and capabilities.
2. Mitigation: This provides a detailed analysis of defensive techniques, their effects, and their approaches as well as the applicability of offensive techniques.
3. Tools: This illustrates tools and toolchains capable of providing offensive and defensive techniques for AI/ML systems.

The following section explores each component and describes its features in detail. The current version of the AI/ML ATT&CK framework contains more than 100 attack techniques, 65 mitigation techniques, and 105 tools related to the security of AI/ML systems. Each entry in this framework provides a descriptive analysis of a technique as well as corresponding offensive and defensive techniques. Moreover, the online framework provides information about the data set used in each technique and its characteristics.

AI/ML ATT&CK Components
This section provides a description of the components, their features, and their relationships.

Attacks
This component represents a detailed analysis of offensive techniques against AI/ML-powered systems, as shown in Figure 2. Each part of this component is described as follows.

Awareness. Some cyberattacks require information, such as training data, models, and architecture, while other attacks can be executed without such knowledge. This feature represents the level of information that is accessible to an adversary, as follows:

■ White box: The attacker has full knowledge of the target model and training data.
■ Gray box: The attacker has partial knowledge about the model and training data.
■ Black box: The adversary has no knowledge about the model and data.

[Figure 1: through the offensive view, the AI/ML security worker investigates attack scenarios and finds tools to implement them; through the defensive view, the AI/ML security worker investigates defense scenarios and finds tools/methods to mitigate attacks. Both views draw on the attacks, mitigation, and tools components of AI/ML ATT&CK.]

Figure 1. The framework consists of three main components: attacks, mitigation, and tools. The framework connects and
unites these three components to help AI/ML security workers identify related cybersecurity techniques. The offensive
and defensive scenarios help security workers identify and select appropriate cybersecurity techniques.



[Figure 2 (tree): Attack features: Awareness (white box, gray box, black box); Tactic (evasion: source misclassification, targeted misclassification, source-targeted misclassification, confidence reduction; exploratory: membership inference, model inversion (pattern in training data, regularization coefficient), attribute inference, model extraction (decision boundaries, model architecture, model functionality); poisoning: data manipulation, label flipping (random, intelligent), data injection, data reordering); Type of Violation (confidentiality, integrity, availability); Target (affected models, affected data set, affected application; domain: text, visual, audio, graph, nominal).]

Figure 2. A characterization of AI/ML-specific offensive techniques. Each leaf node of this tree in the online framework
expands to concrete offensive techniques for further investigation.


Tactic. This component represents an adversary's approach, which is categorized as follows (illustrative sketches for several of these tactics follow the list):

■ Poisoning: The adversary tries to undermine the learning process during model training to manipulate the system's behavior during inference time. In this attack, the attacker poisons the data set through the following approaches:
  – Data manipulation: The attacker manipulates the existing training samples, resulting in wrong outputs during inference time.
  – Label flipping: By adjusting the labels of training samples, the attacker aims to interrupt the learning process in two ways: random and intelligent.
  – Data injection: The attacker injects malicious samples into the training data set, which corrupts the learning process.
  – Data reordering: The attacker changes the order in which the data are fed during training to corrupt the learning process.
■ Evasion: The attacker aims to fool the ML system into misbehaving by modifying a legitimate input so that it results in wrong predictions during inference time. Depending on the input's original label and the target class, this attack is categorized as follows:14
  – Confidence reduction: The attacker tries to reduce the confidence scores of the predicted classes.
  – Source misclassification: The adversary tries to change the prediction to any label other than the correct one.
  – Targeted misclassification: The attacker aims to force the model to misclassify any given input to the specified class.
  – Source-targeted misclassification: The attacker forces the model to misclassify a certain set of inputs to a specific class.
■ Exploratory: The adversary aims to explore the system's private information, such as training data, learning algorithms, and decision boundaries. There are three types of attacks based on the stolen information:
  – Membership inference: The adversary aims to determine whether a specific sample has contributed to the training process.
  – Model inversion: The adversary aims to reconstruct the training data and the samples representing each class of the training data set.
  – Model extraction: The adversary aims to extract the model and decision boundaries or build a functionally similar model to the target model.
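To make these tactics more concrete, three minimal sketches follow. They are our own illustrations, written against synthetic data and stand-in models; they are not drawn from specific techniques in the surveyed literature, and all names, data sets, and parameters are assumptions.

A random and a targeted ("intelligent") label flip, as used in poisoning:

    import numpy as np

    rng = np.random.default_rng(0)
    y_train = rng.integers(0, 10, size=1000)      # clean labels for a ten-class task

    def random_flip(y, fraction=0.1, n_classes=10):
        # Relabel a random fraction of samples with arbitrary wrong classes.
        y = y.copy()
        idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
        offsets = rng.integers(1, n_classes, size=len(idx))   # never the original label
        y[idx] = (y[idx] + offsets) % n_classes
        return y

    def targeted_flip(y, source_class=3, target_class=7):
        # Relabel every sample of a chosen source class as an attacker-chosen class.
        y = y.copy()
        y[y == source_class] = target_class
        return y

    y_poisoned_random = random_flip(y_train)
    y_poisoned_targeted = targeted_flip(y_train)

A gradient-sign evasion perturbation, in the spirit of the fast gradient sign method, against a hand-coded logistic-regression classifier:

    import numpy as np

    w, b = np.array([0.8, -1.2, 0.5]), 0.1        # pretend "trained" parameters

    def predict_proba(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    def fgsm_perturb(x, y_true, eps=0.1):
        # Shift x in the direction that increases the loss for its true label.
        grad_x = (predict_proba(x) - y_true) * w  # d(binary cross-entropy)/dx for this model
        return x + eps * np.sign(grad_x)

    x = np.array([1.0, 0.5, -0.3])
    x_adv = fgsm_perturb(x, y_true=1.0)
    print(predict_proba(x), predict_proba(x_adv))  # confidence drops on the perturbed input

A model extraction (functionality-stealing) probe that queries a victim model and trains a surrogate on the stolen labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X_private = rng.normal(size=(500, 4))                      # data the attacker never sees
    y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
    victim = LogisticRegression().fit(X_private, y_private)

    X_queries = rng.normal(size=(2000, 4))                     # attacker-generated queries
    surrogate = DecisionTreeClassifier(max_depth=5).fit(X_queries, victim.predict(X_queries))

    X_test = rng.normal(size=(500, 4))
    agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
    print(f"surrogate agrees with the victim on {agreement:.0%} of fresh queries")

Each sketch only illustrates the shape of the corresponding tactic; the concrete techniques and tools catalogued in the framework implement far stronger variants.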
Each of the preceding tactics is further classified into more granular classes, as demonstrated in Figure 2, and is accessible to AI/ML security workers for deeper investigations through the online AI/ML ATT&CK framework.

Type of violation. This feature represents the type of violation caused by an adversary, which can be one or a combination of the following groups:

■ Confidentiality: The adversary aims to extract the system's private information.
■ Integrity: The adversary forces the system to generate inaccurate results by processing malicious samples.
■ Availability: The adversary aims to corrupt the normal behavior of the system for legitimate inputs.

Target. This represents the entities that are vulnerable to the attack, as follows:

■ Affected models: This represents the ML and deep learning models that are vulnerable to the adversary.
■ Affected applications: This illustrates real-world applications, services, software, and tools that can be affected by the adversary.
■ Affected data set: This represents the data set on which the attack is carried out.
■ Domain: This represents the type of data on which the attack is carried out, which can be in the form of text, visual, audio, graph, or nominal data.

Mitigation
This represents a detailed analysis of the AI/ML-specific defensive techniques, as detailed in Figure 3. Each component is described as follows.

Target. This represents the attack and the affected data and model entities.

Tactic. This component classifies the mitigation technique based on its approach, as follows (a sketch of one such training modification follows the list):

■ Adversarial detection: This detects malicious samples to prevent them from entering the system.
■ Adversarial removal: This removes perturbations from the data and generates clean data.
■ Training modification: This modifies the training process to make the model more robust against attacks.
■ Differential privacy: This protects the privacy of the system by sharing patterns of the training data set while withholding information about the individuals in the training data set.
■ Homomorphic encryption: This provides an encrypted form of the data for training the model.
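As one concrete illustration of training modification, the following minimal adversarial-training sketch crafts gradient-sign perturbations of the training data and refits the model on the union of clean and perturbed samples. The scikit-learn model, step size, and synthetic data are assumptions for illustration, not a technique prescribed by the framework:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    def perturb(model, X, y, eps=0.2):
        # Gradient-sign perturbation of each sample against the current model.
        p = model.predict_proba(X)[:, 1]
        grad = (p - y)[:, None] * model.coef_[0]   # d(binary cross-entropy)/dx
        return X + eps * np.sign(grad)

    # Adversarial training: refit on clean plus perturbed samples with their original labels.
    X_aug = np.vstack([X, perturb(model, X, y)])
    y_aug = np.concatenate([y, y])
    robust_model = LogisticRegression().fit(X_aug, y_aug)

In practice, this proactive step is repeated as the model changes, and the simple perturbation routine is replaced by the stronger attack generators catalogued under the offensive view.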



Each of the preceding tactics is further categorized into more granular classes, as in Figure 3, and is accessible to AI/ML security workers for deeper investigations through the online AI/ML ATT&CK framework.

Timing. There are two stages at which mitigation techniques can be deployed:

1. Proactive: The mitigation technique is deployed before or during model training.
2. Reactive: The mitigation technique is deployed after model training.

[Figure 3 (tree): Mitigation features: Target (attack's type: poisoning, evasion, exploratory; entity: data, model); Tactic (adversarial detection: statistical based, clustering based, reject on negative impact, inconsistent prediction, neuron activation pattern; adversarial removal: data reconstruction, relabeling, feature squeezing; training modification: adversarial training, ensemble of methods, feature masking, distillation, gradient masking, randomization; differential privacy; homomorphic encryption); Timing (proactive, reactive).]

Figure 3. A categorization of the mitigation techniques. Each leaf node of this tree on the website expands to concrete
techniques identified in the literature. It can be used to explore defensive techniques and understand their characteristics.


Tools
This characterizes the tools and toolchains capable of deploying offensive and defensive techniques related to the security of AI/ML systems, as demonstrated in Figure 4. Each part of this component is explained as follows:

■ Type: This represents whether the tool is offensive or defensive.
■ Availability: This illustrates whether the tool is publicly available or private.
■ Target: This explains the data and model entities that are required to deploy the tool.

The AI/ML ATT&CK framework provides features to characterize the offensive and defensive methodologies related to the security of AI/ML techniques. The framework enables AI/ML security workers to understand the attack surfaces of different AI/ML techniques. Moreover, it represents appropriate cyberdefense techniques to build robust systems that withstand various threats. To our knowledge, this is the first study that thoroughly characterizes tools and toolchains related to AI/ML security and can practically guide AI/ML security workers in choosing the best tools capable of implementing offensive and defensive tactics.

[Figure 4 (tree): Tool features: Type (offensive: confidentiality, integrity, availability; defensive: detect, prevent, respond); Availability (public, private); Target (domain: text, visual, audio, graph, nominal; model: classical ML, deep neural networks).]
Figure 4. The tool component. It characterizes tools and toolchains capable of implementing offensive and defensive techniques. On the website, each leaf node of this tree expands to show concrete tools and their code repositories, as identified in the literature.

Supporting AI/ML Security Workers
Simply providing access to a large number of AI/ML robustness techniques cannot fully support users in choosing the best techniques to deploy. Users need easy-to-use guidance to explore various techniques, identify the best approach, and reason about their choices. To this end, two additional views, offensive and defensive, have been compiled in the AI/ML ATT&CK framework. These views provide decision trees representing offensive and defensive scenarios that can guide AI/ML security workers in investigating a system's robustness from attacker and defender perspectives.

How to Use the AI/ML ATT&CK Framework: Walk-Through Examples
To facilitate using the guidelines of the AI/ML ATT&CK framework, it is delivered as a publicly accessible web solution at http://www.aimlattack.com. Using this web solution, users can explore the framework from various perspectives. The following sections describe how it can be used to support offensive and defensive AI/ML security workers.

Use Case 1: AI/ML ATT&CK for Offensive Security Workers
An offensive AI/ML security worker can use the AI/ML ATT&CK framework to determine the attack surface of an AI-enabled system and deploy offensive techniques to assess the system's robustness. The framework represents attack scenarios that have been developed with respect to the goals of the cyberattack, the level of access to the target system, and the application domain of the target system. By following the provided comprehensive, step-by-step, decision tree-style guidance, as presented in Figure 5, AI/ML security workers can identify related offensive techniques. Furthermore, on the website, by selecting each offensive technique, the user is directed to a new page that thoroughly explains the technique's details and represents tools capable of implementing the technique to deploy and observe the system's performance under such an attack.

Example offensive scenario. For a given system that utilizes AI algorithms and has been trained on sensitive data, an offensive AI/ML security worker wants to determine to what extent the system is robust against data breaches. Thus, as the first step, by following the offensive view of the framework (Figure 5), the offensive AI/ML security worker selects the second box, "Steal System's Private Information." Then, in the next layer, the AI/ML security worker chooses "Stealing Private Information About the Training Data."



[Figure 5 (decision tree), abbreviated: As an attacker, I want to: corrupt the system's availability (exhaust resources during training, exceed the decision time to generate invalid results); steal the system's private information (identify the participation of a sample in model training, steal private information about the training data, steal hyperparameters, steal the model architecture, steal the ML algorithm, steal model functionality to build a functionally similar system, exploit hardware vulnerabilities); corrupt the learning process to change its decision boundaries and generate illegitimate results (inject perturbed data into the training set, flip training labels randomly or intelligently, insert backdoors via the supply chain or planted web data); mislead the system to generate inaccurate results (manipulate training samples, generate adversarial examples at inference time with or without access to the training data or model).]

Figure 5. The offensive view of the AI/ML ATT&CK framework. It provides an overview of attack scenarios from an attacker’s perspective.
Offensive AI/ML security workers follow the provided step-by-step guidance to identify related attacks and tools. On the website, the leaf
nodes of this view are connected to concrete attack techniques for further investigation.


goal, “Identify the Participation of a Sample in Training had two parts: each user was asked to play the role of an
Model” can potentially be chosen. Finally, consider- offensive AI/ML security worker and then a defensive
ing the attack’s setting, the AI/ML security worker can AI/ML security worker. Group 1 had access to existing
choose any of the nodes of interest (e.g., “With Access taxonomies in the literature1,4,5 as well as access to the
to the Model and Training Data”). When the leaf node Internet, while group 2 was requested to use only the
is selected, the framework will expand all the related AI/ML ATT&CK framework. For evaluation, we calcu-
concrete techniques and tools. lated the number of correct answers reported by each
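For instance, if "Identify the Participation of a Sample in Model Training" is chosen, one of the simplest probes an offensive worker can run is a confidence-thresholding membership inference check. The sketch below is only an intuition builder: the victim model, data, and threshold are illustrative assumptions, not one of the framework's catalogued techniques:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    X_members = rng.normal(size=(200, 6))            # sensitive training records
    y_members = (X_members.sum(axis=1) > 0).astype(int)
    X_nonmembers = rng.normal(size=(200, 6))         # records never used for training

    # A deliberately overfit victim model leaks membership through its confidence.
    victim = RandomForestClassifier(n_estimators=50).fit(X_members, y_members)

    def guess_member(model, X, threshold=0.9):
        # Guess "member" whenever the model is unusually confident about a record.
        return model.predict_proba(X).max(axis=1) >= threshold

    print("flagged among true members:", guess_member(victim, X_members).mean())
    print("flagged among non-members: ", guess_member(victim, X_nonmembers).mean())

A large gap between the two rates suggests that the system leaks information about which records were used for training, which is exactly the weakness this walk-through set out to assess.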
Use Case 2: AI/ML ATT&CK for Defensive AI/ML Security Workers
The AI/ML ATT&CK framework empowers defensive AI/ML security workers to identify the system's attack surface, reason about the system's vulnerabilities, and select the best mitigation techniques to improve the system's robustness. Considering the attacks on the system that AI/ML security workers want to mitigate, they can select the best potential mitigation techniques and utilize the provided defensive tools to deploy them and improve the system's robustness.

Example defensive scenario. For a given system that utilizes AI algorithms and has been trained on sensitive data, it has been shown that the system is vulnerable to attacks that attempt to identify the participation of individual samples in the training process. Thus, it is necessary to deploy appropriate mitigation techniques to improve the robustness against such attacks. To this end, by following the defensive view represented in the framework, as given in Figure 6, the user can choose the last box, "Apply Configurations to the Data and Model to Improve the Privacy." In the next layer, the user can select the first box, "Protect the Privacy of the Individual Samples in the Training Data Set by Modifying the Data While It Provides Useful Information to Train the Model." Finally, when the leaf node on the website is selected, the framework expands all the appropriate mitigation techniques. Also, by selecting each technique, the user is directed to a new page that thoroughly explains the details of the mitigation technique and represents tools capable of implementing it.

User Evaluation
A small user study was conducted to assess our framework's effectiveness. First, six SMEs with prior work experience as AI engineers, none of whom were cybersecurity professionals, were recruited. Then, these SMEs were divided into two groups, where the AI experience level of the groups was balanced based on years of experience. The groups were provided with the same case study of an object detection and recognition system for an autonomous vehicle. Both groups were asked to identify potential attacks and mitigation techniques across the development cycle of the project. This study had two parts: each user was asked to play the role of an offensive AI/ML security worker and then a defensive AI/ML security worker. Group 1 had access to existing taxonomies in the literature1,4,5 as well as access to the Internet, while group 2 was requested to use only the AI/ML ATT&CK framework. For evaluation, we calculated the number of correct answers reported by each user divided by the total number of correct answers (the ground truth).

The results show that group 2 identified not only abstract attack classes (types of attacks) but also concrete techniques. The responses of group 2 were specific, enumerating attack classes and subclasses and then listing specific methods from a paper. In contrast, group 1 identified only very high-level types of attacks (e.g., poisoning attacks) and mitigation techniques and could not be specific about subtypes and concrete methods; its responses also lacked information about a concrete method to implement or mitigate an attack. Group 1 identified only 3% of the concrete attack methods and 5% of the mitigation techniques, while group 2 identified 39% of all correct and concrete attack methods and 50% of the correct mitigation techniques. Even for abstract, high-level attack classes (types of attacks), group 1 identified 75% of the attack types and 69% of the mitigation types correctly, while group 2 identified 100% of both. Overall, group 2 outperformed group 1 in terms of retrieved correct attacks and mitigations. While the current results are promising, given the sample size, further studies are needed to understand the effectiveness of each part of this framework.

Limitations and Future Work
All information represented in this framework has been extracted and characterized through an SLR of the related techniques and tools. However, despite extensive effort to curate and develop the framework, it is not guaranteed that these techniques and tactics are the only viable ones that can be explored by AI/ML security workers, especially with the emergence of new AI/ML cyberattacks. We have carried out only a limited-scope study involving six AI engineers. Further work is required to evaluate claims related to the framework's effectiveness in supporting AI engineers with limited security experience to improve AI systems' security.

Ethics
Offensive cyberwarfare raises serious ethical problems for societies. This topic is not fully addressed by policies and regulations.


[Figure 6 (decision tree), abbreviated: As a defender, I want to: identify malicious samples to prevent them from getting into the system (statistical comparison, before/after influence comparison, clustering, distance-based anomaly detection, comparison against a model trained on trusted data, neuron activation trends); remove noise and perturbation from the data to retrieve the original data (input reconstruction/denoising, relabeling by comparing neighbor labels, feature compression, dimensionality reduction, local smoothing); modify the learning process to make the model more robust against adversarial perturbations (adversarial training, ensemble-based augmentation, random label changes, dropout tuning, feature masking and added layers, pruning dormant neurons, NULL labeling of adversarial examples, random transformations and filters, random model modifications, distillation, modified loss functions); apply configurations to the data and model to improve privacy (modify individual samples while preserving their utility for training, obfuscate sensitive information about individuals, transfer data in encrypted form between client and server).]

Figure 6. The defensive view of the AI/ML ATT&CK framework. Defensive AI/ML security workers can explore mitigation
scenarios to find appropriate techniques to practice secure AI/ML design. On the website, the leaf nodes of this view are
connected to concrete mitigation techniques.
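As an intuition for the privacy branch chosen in the example defensive scenario above, a differential-privacy-style safeguard in the spirit of DP-SGD clips each record's gradient and adds noise before every update, so no single training sample dominates what the model learns. The toy gradient function, clipping norm, and noise scale below are illustrative assumptions, not parameters recommended by the framework:

    import numpy as np

    rng = np.random.default_rng(3)

    def per_example_grads(w, X, y):
        # Per-example logistic-regression gradients, one row per training record.
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        return (p - y)[:, None] * X

    def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_std=0.5):
        grads = per_example_grads(w, X, y)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)     # bound each record's influence
        noise = rng.normal(0.0, noise_std * clip_norm / len(X), size=w.shape)
        return w - lr * (grads.mean(axis=0) + noise)

    X = rng.normal(size=(256, 5))
    y = (X[:, 0] > 0).astype(float)
    w = np.zeros(5)
    for _ in range(100):
        w = dp_sgd_step(w, X, y)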


Our primary intention in formulating the AI/ML ATT&CK framework and the associated offensive and defensive tools is purely educational and for enhancing the robustness of AI/ML systems. We do not endorse the unethical use of this framework and the curated tools/software. Using offensive techniques requires extreme legal and ethical consideration, careful determination of the perpetrators and victims, and the reduction of collateral damage.

References
1. E. Tabassi, K. J. Burns, M. Hadjimichael, A. D. Molina-Markham, and J. T. Sexton, "A taxonomy and terminology of adversarial machine learning," Nat. Inst. Standards Technol., Gaithersburg, MD, USA, Draft NISTIR 8269, 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8269-draft.pdf
2. "ATT&CK." MITRE. Accessed: Jun. 16, 2022. [Online]. Available: https://attack.mitre.org/
3. "Common weakness enumeration." MITRE. Accessed: Jun. 16, 2022. [Online]. Available: https://cwe.mitre.org/
4. L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar, "Adversarial machine learning," in Proc. 4th ACM Workshop Secur. Artif. Intell. (AISec), New York, NY, USA: Association for Computing Machinery, 2011, pp. 43–58, doi: 10.1145/2046684.2046692.
5. K. Sadeghi, A. Banerjee, and S. K. S. Gupta, "A system-driven taxonomy of attacks and defenses in adversarial machine learning," IEEE Trans. Emerg. Topics Comput. Intell., vol. 4, no. 4, pp. 450–467, Aug. 2020, doi: 10.1109/TETCI.2020.2968933.
6. J. M. Spring, A. Galyardt, A. D. Householder, and N. VanHoudnos, "On managing vulnerabilities in AI/ML systems," in Proc. 2020 New Secur. Paradigms Workshop (NSPW), New York, NY, USA: Association for Computing Machinery, pp. 111–126, doi: 10.1145/3442167.3442177.
7. J. Haney and W. Lutters, "Cybersecurity advocates: Discovering the characteristics and skills of an emergent role," Inf. Comput. Secur., vol. 29, no. 3, pp. 485–499, Mar. 2021, doi: 10.1108/ICS-08-2020-0131.
8. S. Jia, X. Liu, P. Zhao, C. Liu, L. Sun, and T. Peng, "Representation of job-skill in artificial intelligence with knowledge graph analysis," in Proc. 2018 IEEE Symp. Product Compliance Eng. – Asia (ISPCE-CN), pp. 1–6, doi: 10.1109/ISPCE-CN.2018.8805749.
9. S. Amershi et al., "Software engineering for machine learning: A case study," in Proc. 2019 IEEE/ACM 41st Int. Conf. Softw. Eng., Softw. Eng. Pract. (ICSE-SEIP), pp. 291–300, doi: 10.1109/ICSE-SEIP.2019.00042.
10. R. Petersen, D. Santos, M. Smith, K. Wetzel, and G. Witte, "Workforce framework for cybersecurity (NICE framework)," Nat. Inst. Standards Technol., Gaithersburg, MD, USA, NIST SP 800-181 Rev. 1, 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-181r1.pdf
11. J. Bughin, E. Hazan, S. Lund, P. Dahlström, A. Wiesinger, and A. Subramaniam, "Skill shift: Automation and the future of the workforce," McKinsey Global Inst., vol. 1, pp. 3–84, May 2018. [Online]. Available: https://www.mckinsey.com/~/media/mckinsey/featured%20insights/future%20of%20organizations/skill%20shift%20automation%20and%20the%20future%20of%20the%20workforce/mgi-skill-shift-automation-and-future-of-the-workforce-may-2018.pdf
12. B. Kitchenham, Procedures for Performing Systematic Reviews, vol. 33. Keele, U.K.: Keele University, Jul. 2004, pp. 1–26.
13. M. Fazelnia, I. Khokhlov, and M. Mirakhorli, "Attacks, defenses, and tools: A framework to facilitate robust AI/ML systems," in Proc. 9th Int. Conf. Learn. Representations (ICLR), RobustML Workshop, Vienna, Austria, 2021. [Online]. Available: https://arxiv.org/abs/2202.09465
14. N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, "The limitations of deep learning in adversarial settings," in Proc. 2016 IEEE Eur. Symp. Secur. Privacy (EuroS&P), pp. 372–387, doi: 10.1109/EuroSP.2016.36.

Mohamad Fazelnia is currently a Ph.D. student at the Global Cybersecurity Institute, Rochester Institute of Technology, Rochester, NY 14623 USA. His research interests include artificial intelligence security, human aspects of cybersecurity, and software development. He received a B.S. in industrial engineering from Sharif University of Technology, Tehran, Iran. He is a Member of IEEE and the IEEE Computer Society. Contact him at mf8754@rit.edu.

Ahmet Okutan is a senior artificial intelligence engineer at the Global Cybersecurity Institute, Rochester Institute of Technology, Rochester, NY 14623 USA. His research interests include the overlap of artificial intelligence/machine learning and cybersecurity. He received a Ph.D. in computer science from Işık Üniversitesi, Turkey. He is a Member of IEEE. Contact him at axoeec@rit.edu.

Mehdi Mirakhorli is currently an associate professor and Kodak Endowed Chair at the Global Cybersecurity Institute and in the Department of Software Engineering, Rochester Institute of Technology, Rochester, NY 14623 USA. His research interests include cybersecurity, software assurance, and artificial intelligence. He received a Ph.D. in computer science from DePaul University, Chicago, IL, USA. He is an associate editor of IEEE Transactions on Software Engineering and the International Journal on Empirical Software Engineering. He is a Member of IEEE. Contact him at mehdi.mirakhorli@rit.edu.

