2023 IEEE International Conference on Communications (ICC): IoT and Sensor Networks Symposium

Adversarial Attack with Genetic Algorithm against IoT Malware Detectors

Peng Yuan∗†, Shanshan Wang∗†, Chuan Zhao∗†‡, Wenyue Wang∗†, Daokuan Bai∗†, Lizhi Peng∗†, Zhenxiang Chen∗†§
∗ Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan, China
† School of Information Science and Engineering, University of Jinan, Jinan, China
‡ Quan Cheng Laboratory, Jinan, China
§ Corresponding author, Email: czx.ujn@gmail.com

Abstract—The exponential growth and sophistication of Internet of Things (IoT) malware behavior have resulted in new detection technologies capable of defending IoT devices against some threats. However, their success has stimulated the interest of attackers attempting to circumvent current IoT malware detectors. Among detection technologies, detectors trained on Uniform Resource Locator (URL) requests have become popular. To draw attention to the safety of these detectors, we propose a grey-box method to attack detectors based on URL requests without breaking the malicious functions of the URL requests. The key idea is to add perturbations to the tail of URLs. Specifically, the method uses a Genetic Algorithm (GA) to find suitable perturbations and optimizes the adversarial attack process through a dynamic number of evolution directions and a maximum generation limit. The effectiveness of our adversarial attack is demonstrated by experimental results on the widely used public dataset CSIC2010 and several representative detectors. As far as we know, this is the first adversarial attack against IoT detectors based on URL requests. The method achieves an attack success rate of more than 92%. Furthermore, the experimental results show that the method can reduce the number of queries while maintaining the attack success rate.

Index Terms—adversarial attacks, genetic algorithms, IoT malware detectors, URL requests

I. INTRODUCTION

IoT is an interactive network that connects and operates physical devices, lets them exchange data with each other, and remotely controls smart objects at any time, anywhere. Numerous IoT applications are being implemented to benefit not just individuals, but also corporations and government activities [1]. Gartner, an authoritative IT research and consulting firm, predicts that by 2023 there will be 48.6 billion IoT devices connected using 5G [2]. However, IoT devices suffer from a variety of dangers due to their exposure to the network environment. In recent years, machine learning techniques have been widely used in many network security fields, including Internet of Things (IoT) malware detection [3].

The purpose of IoT malware detectors based on URL requests is to distinguish between normal IoT behaviors and malicious ones. In our research, the literature [4] [5] [6] [7] shows that text features extracted from URL requests can be used to train IoT malware detectors and achieve excellent detection results.

However, supervised classification tasks are vulnerable to adversarial attacks [8] [9]. Adversarial attacks create Adversarial Examples (AEs), i.e., inputs with minor perturbations that result in substantially different model predictions, which severely hinders the application of machine learning in this field. For example, work [10] proves that small adversarial perturbations to the input samples can cause a model to make incorrect predictions. So far, however, there has been little research on adversarial attacks against IoT malware detection models based on URL requests. This indicates a need to explore the vulnerability of IoT malware detectors based on URL requests under adversarial attacks.

There is a challenge in attacking detectors based on URL requests: the attack must deceive the target detection models without hurting the functions of the malicious URL requests. To address this challenge, we only append perturbations to the tail of the URL using GA, and make no changes at any other location. Under this strict condition, new challenges arise. Another challenge is that our method must balance the possibility of finding an AE against the query time when a failure occurs. When a malicious sample has no matching AE, the search takes more time, because the GA cannot predict whether it will be able to find an AE before the search begins.

In this paper, we are the first to propose a grey-box adversarial attack method to explore the robustness of IoT malware detection models based on URL requests. Specifically, we propose an AE generation method based on GA. Our method can transform malicious URL requests without affecting their malicious functions, and it speeds up the convergence of queries. We demonstrate the effectiveness of the proposed method on representative detectors on the public dataset CSIC2010 [11]. In summary, the main contributions of this work are as follows:

• We are the first to use a grey-box adversarial attack method against IoT malware detectors based on URL requests, without breaking the malicious functions of the IoT malware.
• We design a method based on GA that reduces the number of queries and increases convergence speed when no AE can be found.
• We analyze the attack success rate and query cost of our method by attacking representative IoT malware detectors. The detectors include two traditional machine learning detectors called one-class HYBIRD [4] [5], one ensemble deep learning detector called EDL-WADS [6], and one single-model detector M-ResNet (MRN) from [7].
We describe the target detectors in Section II. Section III introduces the framework of our method in detail, while experimental results are presented in Section IV. Section V describes related work. Finally, conclusions are presented in Section VI.

Fig. 1. The overview process of constructing detectors.
II. IOT MALWARE DETECTOR
We select four representative detectors as target detectors. The detectors include [5] and [4], which are based on a composite traditional machine learning model named one-class HYBIRD, an ensemble deep learning detector EDL-WADS [6], and one sub-model picked from [7].

The construction of all four detectors can be divided into the three stages shown in Fig. 1: the tokenization stage, the vectorization stage, and the training stage. As shown in Table I, original URLs are converted to standard URLs at the tokenization stage. The URL requests are tokenized by all punctuation and become more readable and easier to handle. For example, the word UniString represents strings consisting of the characters a to z, '_', '-' and digits, and Numbers represents numbers. Words occurring in a transformation dictionary are not replaced.

The standard URLs are input into the vectorization stage, which outputs vectors. For example, in one-class HYBIRD, standard URLs are processed into statistical feature vectors based on word frequency in the clustering step, and they are input into doc2vec [12] to obtain the vectors in the classification step.

The vectors are used to train models in the training stage. For example, in EDL-WADS, three deep-learning classification models and a comprehension decision model are trained. The first model is based on MRN, which is improved on the basis of the Residual Network (ResNet); the second model is trained with LSTM and MLP; the third model is based on a CNN model and word embedding technology. The outputs of the intermediate layers of the three models are concatenated to train the comprehension decision model, which gives the final prediction.

The authors of the detectors do not give enough detail, so we construct the detectors as faithfully as we can. Because our work focuses on adversarial attacks, a few differences do not influence our attack experiments.
Table I: The examples in different tokenizations

  URL                               Example
  Original URL                      /publico/autenticar.jspmodo=entrar&login=grimshaw&pwd=84m3 ri156&remembera=on&b1=12134
  Tokenization in EDL-WADS          /SenString/SenString.jsp?SenString=SenString&login=SenString&pwd=MixString&SenString=on&b1=Numbers
  Tokenization in one-class HYBIRD  /A/A.A?A=A&A=A&A=A&A=A&A=A
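The vectorization and clustering steps can be pictured with off-the-shelf tools. The sketch below uses gensim's doc2vec [12] and scikit-learn's KMeans; the libraries, hyperparameters, and the toy corpus are assumptions made for illustration, not the detectors' published configuration. The resulting benign cluster centers are what our attack later uses as evolution orientations.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

# A few standard (tokenized) URLs standing in for the benign corpus; purely illustrative.
standard_urls = [
    "/ SenString / SenString . jsp ? SenString = SenString & login = SenString",
    "/ SenString / index . jsp ? id = Numbers",
    "/ SenString / SenString . jsp ? page = Numbers & login = SenString",
]
docs = [TaggedDocument(words=u.split(), tags=[i]) for i, u in enumerate(standard_urls)]

# doc2vec turns each standard URL into a fixed-length vector.
d2v = Doc2Vec(docs, vector_size=16, min_count=1, epochs=50)
vectors = [d2v.infer_vector(u.split()) for u in standard_urls]

# Cluster centers of the benign vectors; our method treats them as evolution orientations.
c = 2
centers = KMeans(n_clusters=c, n_init=10).fit(vectors).cluster_centers_
print(centers.shape)  # (c, vector_size) -> (2, 16)
```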
III. EVADING MALWARE DETECTORS

A. Threat Model

Attacker's target. Assuming that a malicious sample tries to evade detection by the models, the attacker's objective is to create a strategy that transforms the malicious samples into AEs that can fool the detectors.

Attacker's capabilities. The attacker can intercept Hyper Text Transfer Protocol (HTTP) requests during transmission. Additionally, the attacker can control HTTP header generation programmatically and append new content to the URL field of the HTTP header [13].

Attacker's information. The attacker understands that IoT malware detectors recognize malware using the URLs of HTTP requests. We assume that the attackers can easily learn the tokenization technology of the target detectors, because URL tokenization is popular and the above detectors all use it to process URL requests. The attackers are unaware of the specific vectorization method of the detector and are unable to obtain information about the models. The attackers can input their own URL requests into the detector and get the labels (normal or malicious), which can be used to judge whether an attack is successful or not.
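Concretely, under this threat model the attacker only ever appends content after the existing query string, so the original path and parameters keep working. A minimal sketch of this step is given below; the appended parameter name junk is a hypothetical placeholder, not taken from the paper.

```python
def append_perturbation(url: str, tokens: list[str]) -> str:
    """Append perturbation tokens after the existing query string.

    The original path and parameters are left untouched, so the
    malicious payload keeps its function; only the tail grows.
    """
    sep = "&" if "?" in url else "?"
    return url + sep + "junk=" + "".join(tokens)

malicious = "/publico/autenticar.jsp?modo=entrar&pwd=84m3_ri156"
print(append_perturbation(malicious, ["login", "2021", "index"]))
# -> /publico/autenticar.jsp?modo=entrar&pwd=84m3_ri156&junk=login2021index
```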

B. Framework Description

Fig. 2. The framework of our method. The Process URL part processes URL requests into vectors and standard URLs. The vectors participate in the clustering, and the standard URLs are used to build a gene pool. The Generate AE part owns the HTTP modifier, which integrates the GA to transform malicious URL requests into adversarial URL requests. The Adjust Parameter part collects failure reasons, selects new c and y, and triggers the Generate AE part again.

Fig. 2 shows the process of our method. The method's goal is to find an AE for each malicious URL request while maintaining the malicious functions.

Process URL. In the Process URL part, original URL requests are processed, and the results are provided for the next stage. The normal URL requests are tokenized and vectorized into vectors. The malicious URL requests are likewise tokenized. The tokenization outcomes of the malicious and normal requests are the chromosomes of the GA. Every word and punctuation mark is a chromosomal gene, and together they form the attacker's gene pool.

Generate AE. To make our method easy to understand, we first describe the Generate AE part, in which the parameters y (y determines the maximum generation limit of the GA) and c (c determines the number of cluster center points; the cluster center points represent the evolutionary orientations of the GA) are constant. A standard malicious URL request is transformed into an adversarial URL request by passing it through the HTTP modifier. The GA module selects an orientation from the clustering center points and then evolves toward the selected orientation. If the GA does not create any adversarial URL requests in this orientation, it selects a new orientation and begins evolution again, until all clustering center points are exhausted. The HTTP modifier module converts the GA outputs to realistic URL requests and transmits the requests to the detector.
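A minimal sketch of this Generate AE loop is shown below. It reuses append_perturbation from the earlier sketch; tokenize, vectorize, and query_detector stand for the detector-side processing that the attacker can only query; the selection rule is a simplified form of the fitness criterion formalized in Eq. (1) below. All names, the population size, and the perturbation length are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def mutate(genes, gene_pool):
    """Swap one random gene for another token from the gene pool."""
    genes = list(genes)
    genes[random.randrange(len(genes))] = random.choice(gene_pool)
    return genes

def generate_ae(mal_url, cluster_centers, y, gene_pool,
                tokenize, vectorize, query_detector, pop_size=20):
    """Try each cluster center (evolution orientation) in turn for at most y generations."""
    n_queries = 0
    for center in cluster_centers:
        population = [[random.choice(gene_pool) for _ in range(3)] for _ in range(pop_size)]
        best_prev = float("inf")
        for _ in range(y):                                       # stops when the generation limit y is hit
            scored = []
            for genes in population:
                candidate = append_perturbation(mal_url, genes)  # HTTP modifier step
                n_queries += 1
                if query_detector(candidate) == "normal":        # detector deceived: an AE is found
                    return candidate, n_queries
                scored.append((euclidean(vectorize(tokenize(candidate)), center), genes))
            survivors = [g for d, g in scored if d < best_prev]  # cf. Eq. (1): keep individuals that got closer
            if not survivors:                                    # no offspring passed the fitness test
                break
            best_prev = min(d for d, _ in scored)
            population = survivors + [mutate(random.choice(survivors), gene_pool)
                                      for _ in range(pop_size - len(survivors))]
    return None, n_queries                                       # every orientation exhausted: failure
```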
Adjust Parameter. Based on the Generate AE part described above, we now describe the parameter adjustment strategy of our method. If the evolution toward one cluster center orientation fails, the Failure Record module records the reasons. The failure reasons include two cases: 1) no offspring pass the fitness function; 2) the maximum generation limit is exceeded. After one round of Generate AE ends, c and y are adjusted according to the failure reasons. Our method picks a new value for c from a list L when the first or second failure reason occurs; L is set by the attacker, for instance L = [1, 3, 5]. y is also reset when a failure reason occurs: the stride s is added to y to obtain the new y. If y + s exceeds ymax, our method stops. ymax and s are set by the attacker.
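The text leaves the exact interleaving of the two adjustments open, so the following is one plausible reading rather than the authors' implementation: on each failed round, the next c is taken from L and y grows by the stride s until ymax would be exceeded.

```python
def attack_with_adjustment(mal_url, L, y0, s, y_max, run_generate_ae):
    """Outer Adjust Parameter loop around Generate AE (see the sketch above)."""
    y = y0
    for c in L:                          # e.g. L = [1, 3, 5]: next number of cluster centers on failure
        ae = run_generate_ae(mal_url, c, y)
        if ae is not None:
            return ae                    # success: adversarial URL request found
        if y + s > y_max:
            return None                  # stop: the generation budget is exhausted
        y += s                           # failure: enlarge the generation limit by the stride
    return None                          # every value of c in L has been tried
```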
Fitness Function. During evolution, malicious URL requests evolve towards the cluster center point C_i according to Eq. (1):

\[
\text{fitness}(e_g) =
\begin{cases}
1, & \text{ED}(e_g, C_i) < \text{ED}(e_{g-1}, C_i) \\
0, & \text{ED}(e_g, C_i) \geq \text{ED}(e_{g-1}, C_i)
\end{cases}
\quad (1)
\]

We use C to represent the cluster centers, g to denote the generation, and e_g to denote a sample in the g-th generation. ED denotes the Euclidean Distance. A fitness of 0 means that e_g is discarded, and a fitness of 1 means that e_g is permitted to keep participating in the evolution. The equation means that if the Euclidean Distance of a g-th generation individual is less than that of the (g-1)-th generation sample, the individual is reserved.
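In code, Eq. (1) is just a comparison of Euclidean distances between successive generations. A minimal sketch, assuming vector representations of the samples and the euclidean helper from the earlier GA sketch:

```python
def fitness(e_g, e_prev, center):
    """Binary fitness of Eq. (1): 1 keeps the individual, 0 discards it."""
    return 1 if euclidean(e_g, center) < euclidean(e_prev, center) else 0

# Individuals returning 1 stay in the population for the next generation;
# if every individual returns 0, the GA terminates for this orientation.
```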
Terminal Condition. We set three cases for terminating the GA. In the first case, the GA is terminated if the detector is deceived by an adversarial URL request created by the GA. In the second case, the GA stops when the number of generations surpasses y during the evolution. In the third case, if no individual in the population passes the fitness function, the GA is terminated.

IV. EXPERIMENT

In this part, we demonstrate the effectiveness of our approach by comparative experiments and analytical experiments.

A. Experiment Setting

Target detectors. To evaluate the attack method's influence on the detectors, we implement the detectors described in Section II.

Dataset. The target detectors all use CSIC2010 [11] as an experimental dataset, so we choose it as the dataset in our experiments. The dataset contains 36,000 normal URL requests and more than 25,000 malicious URL requests.
Methods for comparison. We are the first to execute grey-box adversarial attacks against IoT detectors based on URL requests. To improve the credibility of our method, we design comparative experiments following the article [14]. We compare our method (L = [2, 1, 4], ymax = 15, and s = 5) with GA-classical (the classical GA), GA-transform (GA-transform can be seen as the version of our method without the Adjust Parameter part, with c = 6 and y = 15; we discuss why we choose this parameter combination in the analysis experiments), and a random method (random perturbation length, random gene selection).
Evaluation metrics. The first metric is the Attack Success Rate (ASR), which is the most commonly used evaluation metric. The second metric is the median number of queries needed to find a successful AE. The median is not affected by the maximum or minimum values of the distribution, which improves its representativeness for the distribution of queries to a certain extent.
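A small sketch of how these two metrics could be computed from per-sample attack records; the record format is an assumption made for illustration.

```python
import statistics

def evaluate(results):
    """results: list of (succeeded, n_queries) tuples, one per malicious sample."""
    asr = sum(ok for ok, _ in results) / len(results)
    median_queries = statistics.median(q for ok, q in results if ok)
    return asr, median_queries

asr, med = evaluate([(True, 21), (True, 3), (False, 4850), (True, 26)])
print(f"ASR = {asr:.0%}, median queries over successful attacks = {med}")
```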
Devices. We used one typical desktop PC in the experiment
(Intel Core i7-8700 CPU @ 3.20GHz and 32GB of physical
memory running 64-bit Ubuntu 20.04.3).

B. Comparative Experiments
We apply the four methods to attack the four detectors and record their ASR. We choose the average ASR of each method with its best parameters and plot them in Fig. 3. We run each method on each detector ten times and average the values. Our method achieves high attack success rates; in particular, it achieves 100% against EDL-WADS and MRN. We can see that the detectors EDL-WADS and MRN are more vulnerable than one-class HYBIRD [4] and one-class HYBIRD [5] under the four adversarial attacks. This suggests that models trained on two-class data are more affected by adversarial attacks than one-class classification models.

In Fig. 3, the performance of our method and GA-transform is similar. However, the query cost of our method is clearly lower than that of GA-transform, as shown in Table II. When the failure reasons include Failure1 or Failure2 shown in Fig. 2, the query cost (median) of our method is only about half that of GA-transform. What is more, this does not affect the query cost in the success case. This is because our method can find AEs within few generations and few cluster center points for most malicious URL requests. It skips some parameter combinations according to the guidance of the failure reasons, and it finds AEs for malicious URL requests that are difficult to transform with few generations and few cluster center points. The experiments show that our method can reduce the query cost while keeping the ASR.

Fig. 3. The success rate of four adversarial methods against four detectors.

Table II: The median number of queries of our method and of GA-transform against the detectors.

  Suc. or Fail  Detector               Our Method   GA-transform
  success       one-class HYBIRD [4]   21.5         23
                one-class HYBIRD [5]   22.5         26.5
                EDL-WADS               3            3
                MRN                    4            3
  failure1      one-class HYBIRD [4]   4845 ↓       11799
                one-class HYBIRD [5]   4940 ↓       10678
                EDL-WADS               -            -
                MRN                    -            -
  failure2      one-class HYBIRD [4]   5519.5 ↓     12453
                one-class HYBIRD [5]   5312 ↓       13053
                EDL-WADS               -            -
                MRN                    -            -
C. The Analysis for ASR in Different Parameter Combinations

In this analytic experiment, we remove the Adjust Parameter part of the framework and keep c and y unchanged during an attack. The target detector is one-class HYBIRD [4]. Fig. 4 illustrates the ASR of various combinations of c and y in our method. We note a progressive increase in the ASR of the adversarial attacks as the number of cluster center points grows. When c = 6, the ASR reaches its maximum value. This is owing to the difference between the distribution of the dataset the attacker uses and that of the realistic dataset. If c is too large or too small, samples generated by the GA will miss the normal area defined by the realistic detector during the deformation phase. Additionally, we see that as y grows, the ASR increases significantly. For example, when c = 6, as y is adjusted from 5 to 15, the ASR increases to 92.8%. This is because the termination condition is triggered easily with a smaller y, while this problem is weakened with a larger y.

Fig. 4. The ASR of our method without the Adjust Parameter part against one-class HYBIRD [4] for different numbers of cluster center points c and generation limits y.

D. The Analysis for the Number of Queries

In this subsection, we study the relationship between the number of cluster center points and the number of queries in the adversarial attack. The target detector is one-class HYBIRD [4], and the result is given in Table III. We note that as the number of cluster center points grows, so does the number of queries. This is because our method chooses a new cluster center point to re-evolve towards when the malicious sample fails to develop towards one of the cluster center points. Additionally, we discovered an interesting phenomenon: bad news travels slowly. As c grows, the cost of time wasted on failure increases proportionately. In Table III, the number of queries for both failure1 and failure2 grows as c rises. Especially when c = 8 and failure1 occurs, the upper quartile increases to 19437. This is because our method cannot predict at the beginning whether it will be successful or not. If a deformation is doomed to fail, the larger c is, the more time it takes. To obtain the result of a deformation (success or failure), we must wait longer than in the success case. The deformation cost is large when a malicious sample is unable to deform successfully.

Table III: This table records the lower quartile, upper quartile, and median of the number of queries. We compare different conditions of our method with and without the Adjust Parameter part. The performance of our method without the Adjust Parameter part is equivalent to GA-transform when c = 6. The result of L = [2, 1, 4] is the best performance of our method. In this experiment, y is 15.

                                 Without Adjust Parameter Part                                       With Adjust Parameter Part
  Suc. or Fail  Index            c=1    c=2    c=3    c=4    c=5    c=6    c=7    c=8      L=[1,2,4]  L=[2,1,4]  L=[4,2,1]
  success       lower quartile   6      4      2      5      4      4      4      6        5          2          3
                median           24.5   17     18     42     22     23.5   23.5   25       24.5       21.5       20
                upper quartile   132    112    147    163    155    156    185    219      216        139        144
  failure1      lower quartile   19     2432   1995   266    5795   2128   8873   14630    4294       2660       2185
                median           28.5   2983   4683   4246   6992   11799  15219  17746    4674       4940       2736
                upper quartile   285    4503   5054   7277   10013  13053  15428  19437    5168       5320       2888
  failure2      lower quartile   2584   2660   4655   5681   6935   13034  8873   14630    4294       4940       2888
                median           2584   5139   4826   8759   7866   13053  15219  18591    4835.5     5519.5     5244
                upper quartile   2584   5168   5225   10336  10013  15504  15428  19722    5244       6023       5244
E. The Relationship Between Length and Evolutionary Generation

In Fig. 5, we set c = 6, adjust y to 100, and discover that the malicious sample's length is unrelated to the number of generations required for deformation. This is because malicious samples contain not only malicious features, but also an uncertain number of benign features. We also find that the number of generations needed to generate an AE never surpasses 15, which explains why ymax = 15 makes sense for our method.

Fig. 5. The relationship between the length of a standard malicious URL request and the evolutionary generation when finding an AE for the malicious URL request.

V. RELATED WORK

The related work consists of three parts: adversarial attacks against IoT; adversarial attacks against the HTTP header; and adversarial attacks based on GA.

Adversarial Attack Against IoT. In recent years, IoT malware detectors have suffered from various adversarial attacks. For example, the work [15] uses a saliency map to disclose the impact of each packet attribute on the detection results and successfully compromises some state-of-the-art popular IoT malware detectors, such as NIDS and Kitsune. The work [16] targets graph neural network (GNN)-based intrusion detection in IoT systems with a tight budget, introducing a novel hierarchical adversarial attack (HAA) generation approach to implement a level-aware black-box adversarial attack strategy. The work [17] demonstrates AE generation in IoT healthcare with the aim of raising awareness of adversarial attacks and encouraging others to safeguard deep learning models from attacks on healthcare systems. Different from the above works, our work attacks IoT malware detectors based on HTTP header text features.

Adversarial Attack Against HTTP Header. In recent years, several works on adversarial attacks against phishing websites have been reported, which are implemented by modifying any character of the URL. Another work [18] declares that it can generate AEs with minimal changes (one byte in the URL) in the inputs to bypass a deep-learning-based URL classification model with a high success rate. While those works obtain favorable outcomes, they launch adversarial attacks by modifying the URL requests.
However, the modification to the URL may result in the loss of malicious functionality inside the URL. Instead of altering the URL, our method can ensure the functional integrity of malicious URLs.

Adversarial Attack Based on GA. GA is often utilized as a tool to launch adversarial attacks. For example, the work [19] demonstrates that instances and high-performing models can be perturbed by minimizing semantic and syntactic dissimilarity using GA. Besides, the work [20] develops a prototype tool called IagoDroid, which takes a malware sample and a target family as input and alters the sample to cause it to be classed as a member of the target family while retaining its original semantics. Inspired by these works, we utilize GA to launch an adversarial black-box attack on IoT malware detectors based on HTTP header text features.

VI. CONCLUSION

We propose a genetic algorithm-based adversarial attack method to attack IoT malware detectors based on URL requests, aiming to reveal the vulnerability of such detectors. Our method uses the cluster center points of benign samples mastered by the attacker as the evolution directions, and it dynamically adjusts the number of evolution directions and the generation limit according to the cause of the search failure, so as to alleviate the slow convergence when the evolution fails without affecting the success rate of the attack. We adopt the approach of appending adversarial perturbations instead of modifying the samples, so as to retain the malicious functions of the malware samples to the greatest extent. The output of the genetic algorithm is reversibly mapped to real-world Adversarial Examples (AEs). We design a set of experiments to verify the effectiveness of our method and compare our technique with several other methods. Experiments show that our method outperforms the other methods in adversarial attack success rate.

VII. ACKNOWLEDGEMENTS

This work was supported by the Shandong Provincial Key R&D Program of China under Grant No. 2021SFGC0401, the National Natural Science Foundation of China under Grant No. 61972176, the Project of Shandong Province Higher Educational Youth Innovation Science and Technology Program under Grant No. 2019KJN028, the Natural Science Foundation of Shandong Province under Grant No. ZR2019LZH015, the Taishan Scholars Program, and the Shandong Provincial Natural Science Foundation under Grant No. ZR2021LZH007.

REFERENCES

[1] D. Fang, Y. Qian, and R. Q. Hu, "A flexible and efficient authentication and secure data transmission scheme for iot applications," IEEE Internet of Things Journal, vol. 7, no. 4, pp. 3474–3484, 2020.
[2] P. Papadopoulos, O. Thornewill von Essen, N. Pitropakis, C. Chrysoulas, A. Mylonas, and W. J. Buchanan, "Launching adversarial attacks against network intrusion detection systems for iot," Journal of Cybersecurity and Privacy, vol. 1, no. 2, pp. 252–273, 2021.
[3] T. Sharma and D. Rattan, "Malicious application detection in android—a systematic literature review," Computer Science Review, vol. 40, p. 100373, 2021.
[4] Y. An, F. R. Yu, J. Li, J. Chen, and V. C. Leung, "Edge intelligence (ei)-enabled http anomaly detection framework for the internet of things (iot)," IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3554–3566, 2020.
[5] Y. An, J. Li, F. R. Yu, J. Chen, and V. C. Leung, "A novel http anomaly detection framework based on edge intelligence for the internet of things (iot)," IEEE Wireless Communications, vol. 28, no. 2, pp. 159–165, 2020.
[6] C. Luo, Z. Tan, G. Min, J. Gan, W. Shi, and Z. Tian, "A novel web attack detection system for internet of things via ensemble classification," IEEE Transactions on Industrial Informatics, vol. 17, no. 8, pp. 5810–5818, 2020.
[7] Z. Tian, C. Luo, J. Qiu, X. Du, and M. Guizani, "A distributed deep learning system for web attack detection on edge devices," IEEE Transactions on Industrial Informatics, vol. PP, no. 99, pp. 1–1, 2019.
[8] O. Ibitoye, R. Abou-Khamis, A. Matrawy, and M. O. Shafiq, "The threat of adversarial attacks on machine learning in network security–a survey," arXiv preprint arXiv:1911.02621, 2019.
[9] I. Rosenberg, A. Shabtai, Y. Elovici, and L. Rokach, "Adversarial machine learning attacks and defense methods in the cyber security domain," ACM Computing Surveys (CSUR), vol. 54, no. 5, pp. 1–36, 2021.
[10] M. Nasr, A. Bahramali, and A. Houmansadr, "Defeating dnn-based traffic analysis systems in real-time with blind adversarial perturbations," in USENIX Security Symposium, 2021.
[11] C. T. Giménez, A. P. Villegas, and G. Á. Marañón, "Http data set csic 2010," Information Security Institute of CSIC (Spanish Research National Council), 2010.
[12] Q. V. Le and T. Mikolov, "Distributed representations of sentences and documents," CoRR, vol. abs/1405.4053, 2014. [Online]. Available: http://arxiv.org/abs/1405.4053
[13] A. Arcuri, "Restful api automated test case generation," 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), pp. 9–20, 2017.
[14] S. Hou, Y. Fan, Y. Zhang, Y. Ye, J. Lei, W. Wan, J. Wang, Q. Xiong, and F. Shao, "αcyber: Enhancing robustness of android malware detection system against adversarial attacks on heterogeneous graph based model," Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019.
[15] H. Qiu, T. Dong, T. Zhang, J. Lu, G. Memmi, and M. Qiu, "Adversarial attacks against network intrusion detection in iot systems," IEEE Internet of Things Journal, vol. 8, no. 13, pp. 10327–10335, 2021.
[16] X. Zhou, W. Liang, W. Li, K. Yan, S. Shimizu, and K. I.-K. Wang, "Hierarchical adversarial attacks against graph-neural-network-based iot network intrusion detection system," IEEE Internet of Things Journal, vol. 9, no. 12, pp. 9310–9319, 2022.
[17] A. Rahman, M. S. Hossain, N. A. Alrajeh, and F. Alsolami, "Adversarial examples—security threats to covid-19 deep learning systems in medical iot devices," IEEE Internet of Things Journal, vol. 8, no. 12, pp. 9603–9610, 2021.
[18] W. Chen, Y. Zeng, and M. Qiu, "Using adversarial examples to bypass deep learning based url detection system," 2019 IEEE International Conference on Smart Cloud (SmartCloud), pp. 128–130, 2019.
[19] M. Alzantot, Y. Sharma, A. Elgohary, B.-J. Ho, M. B. Srivastava, and K.-W. Chang, "Generating natural language adversarial examples," in EMNLP, 2018.
[20] A. Calleja, A. Martín, H. D. Menéndez, J. E. Tapiador, and D. Clark, "Picking on the family: Disrupting android malware triage by forcing misclassification," Expert Syst. Appl., vol. 95, pp. 113–126, 2018.
