
TOPIC: ADVERSARIAL ATTACKS IN MACHINE LEARNING IN IOT

Related Work:
A Federated Learning (FL) based system offers many opportunities for distributed processing, since clients/agents share only encrypted parameters rather than raw data. The server is responsible for initializing the global model, distributing it to the clients, collecting the parameters from the clients, aggregating them with some rule or algorithm, and applying the aggregated result to the global model.

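As a minimal sketch of this server-side round, the weighted averaging shown below is the common FedAvg rule, used here only for illustration; the toy parameter vector, local dataset sizes, and simulated client updates are hypothetical placeholders:

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Aggregate client parameter vectors by a weighted average (FedAvg-style rule)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# One illustrative round: the server distributes the global model, the clients
# return locally trained parameters, and the server aggregates them.
global_model = np.zeros(10)                               # toy parameter vector
client_sizes = [100, 50, 150]                             # hypothetical local dataset sizes
client_updates = [global_model + 0.1 * np.random.randn(10)
                  for _ in client_sizes]                  # stand-ins for local training
global_model = fed_avg(client_updates, client_sizes)
```
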
Within this FL paradigm, certain adversarial attacks pose a serious threat to the model. One of the stealthiest, model poisoning, has caused considerable damage to FL models [1]. In model poisoning, a local client tampers with the training of the FL global model by submitting altered model parameters in each training iteration while attempting to remain undetected. To protect the global model from malicious contributions, Byzantine-Resilient Aggregation Rules (BRARs) [2] were introduced for failure scenarios in which a client does not follow the predefined protocol and sends malicious parameters to the server. One of the simplest BRARs selects, from the n received updates (1, 2, 3, ..., n), the update whose distance to the remaining updates is smallest.

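A minimal sketch of this "closest to the rest" selection rule follows; it is a simplified, Krum-like criterion written only for illustration, and the Euclidean distance and flat update vectors are assumptions:

```python
import numpy as np

def select_closest_update(updates):
    """Return the update whose total Euclidean distance to all other updates is smallest."""
    U = np.stack(updates)                                             # (n_clients, n_params)
    dists = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)    # pairwise distances
    scores = dists.sum(axis=1)                                        # distance to the rest
    return updates[int(np.argmin(scores))]

# Example: two honest updates that agree and one outlier that is ignored.
honest_a = np.array([1.0, 1.0])
honest_b = np.array([1.1, 0.9])
malicious = np.array([10.0, -10.0])
chosen = select_closest_update([honest_a, malicious, honest_b])   # picks an honest update
```
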
Poisoning can also be used for a targeted attack by inserting training data so that the model learns to produce the intended adversarial response [3]. These attacks can pursue difficult and complex goals such as backdoor attacks by explicitly boosting the malicious agent's updates to overcome the effect of the other agents' updates. Adversarial defense mechanisms such as BRARs have little effect on this targeted model poisoning. The authors introduce notions of stealth for the adversary, based on accuracy checks on test/validation data and on statistics of the update weights, and empirically demonstrate that targeted model poisoning with explicit boosting can be detected with these stealth metrics in all rounds.

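A minimal sketch of explicit boosting follows: the malicious client scales its poisoned delta by roughly the number of clients so that it survives averaging with the honest updates. The boost factor, toy vectors, and benign update noise are illustrative assumptions, not values from [3]:

```python
import numpy as np

n_clients = 10
global_model = np.zeros(5)
poison_target = np.array([2.0, -1.0, 0.5, 0.0, 1.0])    # model state the attacker wants

# Honest clients send small benign deltas; the attacker boosts its malicious
# delta so that the average still moves the global model toward the target.
honest_deltas = [0.01 * np.random.randn(5) for _ in range(n_clients - 1)]
malicious_delta = n_clients * (poison_target - global_model)

aggregated = (sum(honest_deltas) + malicious_delta) / n_clients
global_model = global_model + aggregated                 # ends up close to poison_target
```
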
Another type of attack, clean-label backdoor poisoning [4], is used to insert a backdoor into the model that triggers specific functionality. The attacker changes certain features or tiny chunks of the training dataset. This type of backdoor is difficult to detect because it has no effect on normal operation. Since this type of attack tends to fail on video data, the authors use a universal adversarial trigger together with two different types of adversarial perturbation, making the attack more resilient against state-of-the-art adversarial defense models and systems.

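As a minimal sketch of the poisoning step, the snippet below stamps a small trigger patch onto every frame of a video clip while leaving the label untouched; the patch size, location, and pixel value are illustrative assumptions, and the actual attack in [4] uses an optimized universal adversarial trigger rather than a fixed patch:

```python
import numpy as np

def stamp_trigger(video, patch_size=8, value=1.0):
    """Overwrite a small bottom-right patch in every frame of a (T, H, W, C) clip."""
    poisoned = video.copy()
    poisoned[:, -patch_size:, -patch_size:, :] = value
    return poisoned

clip = np.random.rand(16, 112, 112, 3)       # toy clip: 16 frames of 112x112 RGB
poisoned_clip = stamp_trigger(clip)          # label stays unchanged (clean-label poisoning)
```
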
Network Intrusion Detection Systems trained with deep learning reach a high level of accuracy, but one hurdle is that deep learning requires large amounts of data, and sending that data to a centralized system for training raises serious privacy concerns. For this reason, [5] proposes a federated learning system that allows multiple clients to train jointly while retaining their local data, using a GRU deep learning model on the CICIDS2017 dataset. This method detects intrusions better than a centralized system, but one remaining hurdle is the communication overhead.

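A minimal sketch of a GRU-based flow classifier of the kind each client might train locally is shown below; the layer sizes, the 78-feature input, and the two-class output are assumptions for illustration rather than the exact architecture of [5]:

```python
import torch
import torch.nn as nn

class GRUIntrusionDetector(nn.Module):
    """Toy GRU classifier: a sequence of flow features -> benign/attack logits."""
    def __init__(self, n_features=78, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, seq_len, n_features)
        _, h = self.gru(x)                   # h: (1, batch, hidden) final hidden state
        return self.head(h.squeeze(0))       # (batch, n_classes)

model = GRUIntrusionDetector()
logits = model(torch.randn(4, 10, 78))       # 4 toy sequences of 10 time steps each
```
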
Federated Rank Learning (FRL) [6] is a novel approach to reducing the space of client updates in FL. In conventional FL, client updates are model parameter updates that live in a continuous space of floating-point numbers. FRL, on the other hand, operates in a discrete space of integer values known as parameter rankings. By borrowing ideas from supermask training methods, FRL trains the global model using parameter ranks rather than parameter weights. Employing parameter rankings instead of weights gives the client updates a much smaller space, enabling more efficient communication between the clients and the server. In federated learning applications, this reduction of the update space can help ease privacy concerns and lower communication costs.

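A minimal sketch of the ranking idea follows: each client uploads only the rank order of its parameter scores, and the server aggregates the discrete ranks. The random score vectors and the simple rank-sum aggregation are illustrative assumptions, not the exact FRL protocol of [6]:

```python
import numpy as np

def to_ranks(scores):
    """Convert a continuous score vector to integer ranks (0 = lowest score)."""
    return np.argsort(np.argsort(scores))

# Three clients score five parameters locally; only the discrete ranks are uploaded.
client_scores = [np.random.rand(5) for _ in range(3)]
client_ranks = [to_ranks(s) for s in client_scores]

# The server aggregates by summing ranks; the top-ranked parameters could then be
# kept active in a supermask-style global model.
aggregate_rank = np.sum(client_ranks, axis=0)
top_k = np.argsort(aggregate_rank)[-2:]      # indices of the 2 highest-ranked parameters
```
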
An attacker can also act as a trusted entity in federated learning and upload poisoned data to the server [7]. This poisoned data degrades the performance of the global model. The supposedly trusted entity trains generative adversarial nets (GANs) to mimic the behavior of the other participants, and these GANs are then used to craft poisoned updates. This type of attack achieves around 80% accuracy on both the main task and the poisoning task.

IoT devices produce large amounts of data, and sending this data to a central server raises privacy concerns [8]. By using a multi-view approach, the model learns from many input data and record perspectives and provides effective performance with more distinct predictions. The proposed MV-FLID scheme trains on multiple views of IoT network data in a decentralized way for the identification, categorization, and mitigation of threats and attacks, using an FL-based model that efficiently aggregates profiles while utilizing peer learning.

Adversarial attacks pose a great threat to neural networks, as adversarial inputs are used to fool the trained model [9]. Because adversarial attacks are usually crafted against a specific model or architecture, their effectiveness is limited to that model. To overcome this, the Intermediate Level Attack (ILA) fine-tunes an existing adversarial example to increase the magnitude of its perturbation at a specific intermediate layer of the source model, improving transferability to target models without any knowledge of the targeted system and achieving high transfer rates. Evaluated on the CIFAR-10 image dataset, the transferability results for different models are DenseNet121 95.6%, GoogLeNet 94.9%, ResNet18 94.8%, and SENet18 94.6%.

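A minimal sketch of the ILA idea is given below: starting from a baseline adversarial example, a few gradient steps on the input push the intermediate-layer feature difference further along the direction established by the baseline attack. The tiny untrained convolutional feature extractor, the layer choice, the step size, and the perturbation bound are all illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy feature extractor standing in for the intermediate layers of a trained source model.
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())

def ila_refine(x_clean, x_adv, eps=8 / 255, steps=10, lr=0.01):
    """Push the mid-layer feature shift further along the baseline attack's direction."""
    with torch.no_grad():
        f_clean = features(x_clean)
        direction = features(x_adv) - f_clean            # direction set by the baseline attack
    x = x_adv.clone().requires_grad_(True)
    for _ in range(steps):
        gain = ((features(x) - f_clean) * direction).sum()   # projection onto that direction
        grad = torch.autograd.grad(gain, x)[0]
        with torch.no_grad():
            x += lr * grad.sign()                            # maximize the projection
            x.copy_(x_clean + (x - x_clean).clamp(-eps, eps))  # keep the perturbation bounded
            x.clamp_(0, 1)
    return x.detach()

x_clean = torch.rand(1, 3, 32, 32)
x_adv = (x_clean + 0.02 * torch.randn_like(x_clean)).clamp(0, 1)  # stand-in baseline attack
x_ila = ila_refine(x_clean, x_adv)
```
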
Malicious clients in federated learning can cause performance degradation by corrupting the whole aggregation process [10]. A hybrid learning-based model is used to detect malicious clients and malicious updates that do not fit the learned association between the model parameters and the data. The suggested method consists of two components: (1) a data association term that motivates the model to discover the relationship between the data and the model parameters, and (2) a model regularization term that motivates the model to sustain its discovered relationship over time. Well-known attacks such as data poisoning and model extraction are mitigated with this model, and the technique successfully identifies and removes poisoned parameter updates without significantly affecting the FL paradigm's overall learning performance, achieving accuracy of up to 97.5%.

Federated learning is being adopted as an ML method for intrusion detection systems for IoT devices because of its distributed nature and privacy advantages [11]. However, it is prone to poisoning attacks, chiefly backdoor attacks: attackers can use a vulnerable device to inject a small amount of data into the training process and remain undetected. The goal is to manipulate the machine learning model into classifying benign data as malicious, or malicious data as benign. Experiments with 46 real IoT devices show that the effect of the attack is severe. Using a secure gateway to check for malicious updates and discard them, this poisoning attack can be mitigated.

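As a minimal sketch of the kind of check such a gateway could apply, the snippet below discards any client update whose direction disagrees with a robust reference built from the other updates; the cosine-similarity measure, coordinate-wise median reference, and threshold are illustrative assumptions rather than the exact mechanism of [11]:

```python
import numpy as np

def filter_updates(updates, threshold=0.0):
    """Keep only updates that roughly agree in direction with the other updates."""
    kept = []
    for i, u in enumerate(updates):
        reference = np.median([v for j, v in enumerate(updates) if j != i], axis=0)
        cosine = np.dot(u, reference) / (np.linalg.norm(u) * np.linalg.norm(reference) + 1e-12)
        if cosine >= threshold:
            kept.append(u)
    return kept

benign = [np.array([1.0, 0.5]), np.array([0.9, 0.6]), np.array([1.1, 0.4])]
poisoned = np.array([-5.0, -3.0])                 # points the opposite way, gets discarded
accepted = filter_updates(benign + [poisoned])    # keeps only the three benign updates
```
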
Deep learning-based NIDS use neural networks to analyze network traffic and find anomalies that can point to a potential attack [12]. These systems are, however, susceptible to adversarial attacks that craft malicious inputs which are categorized as harmless. Adversarial examples can be created with different strategies, including gradient-based attacks, black-box attacks, and transfer attacks. A DNN-based NIDS is trained on the KDD dataset, and after the adversarial attack is applied, accuracy decreases by 50%, precision by 20%, and recall by 80%. One of the major problems in mounting this type of attack is that it requires heavy computational resources, and it may not be stable enough to give a precise attack.

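A minimal sketch of the simplest gradient-based strategy, the fast gradient sign method (FGSM), applied to a toy NIDS classifier is shown below; the small MLP, the 41-feature input, and the epsilon value are illustrative assumptions:

```python
import torch
import torch.nn as nn

nids = nn.Sequential(nn.Linear(41, 32), nn.ReLU(), nn.Linear(32, 2))  # toy 41-feature NIDS
loss_fn = nn.CrossEntropyLoss()

def fgsm(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: one gradient step on the input to evade the classifier."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.rand(8, 41)                         # 8 toy attack flows with normalized features
y = torch.ones(8, dtype=torch.long)           # true label: 1 = malicious
x_adv = fgsm(nids, x, y)                      # perturbed flows aimed at being classed benign
```
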
Network intrusion systems (NIS) are used to detect abnormalities in a system [13]. With the development of machine learning (ML), intelligent systems are now built, and ML-based NIS increase the efficiency and performance of intrusion detection. Adversarial samples, however, can decrease the efficiency of the ML system; to overcome this, a bidirectional GAN (Generative Adversarial Network) is used. The GAN's generator network learns to produce synthetic data that mimics real network traffic during training, while the classification network's goal is to correctly categorize the data as either real or fake. This increases the robustness of the system against known and unknown attacks and improves the accuracy against different types of adversarial attacks.

Deep learning (DL) has been used in Network Intrusion Detection Systems (NIDS), but DL models are vulnerable to adversarial attacks [14]. There are many techniques for checking the robustness of DL against adversarial attacks, but they all assume white-box attacks, whereas real attackers usually do not have knowledge of the system, so black-box and grey-box attacks are what hackers mostly use. To address this, an adversarial feature reduction (AFR) strategy is proposed, which decreases the attack's effectiveness by limiting the formation of adversarial features. The black-box attack has a success rate of up to 97%; the proposed AFR defense cannot stop the attack, but it can substantially weaken it.

| S. No. | Ref. | Problem | Technique | Datasets | Strength | Weakness |
|---|---|---|---|---|---|---|
| 1 | [1] | Build a defense against model poisoning in FL. | Krum-based attack. | MNIST, Fashion-MNIST, CH-MNIST | First breaks four Byzantine-robust federated defenses and then proposes a new defense. | Does not provide a defense for all federated learning scenarios. |
| 2 | [3] | Adversarial attack in federated learning. | Making model poisoning work by boosting updates. | Fashion-MNIST | Decreases the efficiency of the model by poisoning. | Less accurate in error prediction; insider attacks are possible. |
| 3 | [4] | Making a backdoor attack robust against video recognition models. | ResNet50 trained on ImageNet to extract features, then an LSTM is trained. | VOC2012, UCF-101 | Makes the backdoor attack more effective using adversarial perturbation. | Training data can be manipulated or corrupted. |
| 4 | [8] | Improving accuracy and privacy of IDS using FL. | Federated machine learning-based intrusion detection approach. | MQTT protocol dataset | Increases training efficiency with less data and prevents end-device data from being exposed. | Not trained on unseen attack types, so it remains vulnerable to them. |
| 5 | [9] | Improve the transferability of adversarial examples to unknown target models. | Using an intermediate level attack to fine-tune the adversarial example. | CIFAR-10 | Improves transferability to target models. | The attack may not work on systems with new and unique defenses. |
| 6 | [10] | Improve the robustness of federated learning-based speech emotion recognition systems against adversarial attacks. | Hybrid learning-based approach that associates data and model. | CIFAR, MNIST, APTOS | Improves the robustness of federated systems. | Requires a large-scale model and a large dataset. |
| 7 | [11] | Detect and prevent poisoning attacks in federated learning-based intrusion detection systems (IDSs) for IoT networks. | Inserting malicious data through a malicious client. | UNSW-NB15 | Improves attack efficiency by attacking federated learning-based intrusion detection for IoT. | The impact of a single malicious client is very small; no defense is provided. |
| 8 | [12] | Effect of adversarial attacks on DNN-based IDS. | Injecting adversarial inputs against the model. | KDD-99 | Reduces the performance and accuracy of the DNN by up to 60%. | Requires heavy computational resources. |
| 9 | [13] | Increase the accuracy and robustness of network intrusion systems. | A GAN generates synthetic data that mimics the behavior of the original data. | IDS2018 | Improves accuracy by 26%. | Increases false positives and false negatives due to the nature of the GAN. |
| 10 | [14] | Improving the effectiveness of NIDS against adversarial attacks. | Conducts an attack on NIDS and proposes adversarial feature reduction (AFR), which lessens the attack's effectiveness by limiting the generation of adversarial features. | CICIDS2017 | AFR decreases the effect of the black-box attack by 50%. | Cannot stop the attack; attackers can still exploit the vulnerable features. |

REFERENCES

[1] M. Fang, X. Cao, J. Jia, and N. Z. Gong, "Local model poisoning attacks to byzantine-robust
federated learning," in Proceedings of the 29th USENIX Conference on Security Symposium, 2020,
pp. 1623-1640.
[2] Y. Chen, L. Su, and J. Xu, "Distributed statistical machine learning in adversarial settings:
Byzantine gradient descent," in Abstracts of the 2018 ACM International Conference on
Measurement and Modeling of Computer Systems, 2018, pp. 96-96.
[3] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, "Analyzing federated learning through an
adversarial lens," in International Conference on Machine Learning, 2019: PMLR, pp. 634-643.
[4] S. Zhao, X. Ma, X. Zheng, J. Bailey, J. Chen, and Y.-G. Jiang, "Clean-label backdoor attacks on video
recognition models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, 2020, pp. 14443-14452.
[5] Z. Tang, H. Hu, and C. Xu, "A federated learning method for network intrusion detection,"
Concurrency and Computation: Practice and Experience, vol. 34, no. 10, p. e6812, 2022.
[6] H. Mozaffari, V. Shejwalkar, and A. Houmansadr, "Every Vote Counts: Ranking-Based Training of
Federated Learning to Resist Poisoning Attacks," in 32nd USENIX Security Symposium (USENIX
Security 23), 2023.
[7] J. Zhang, J. Chen, D. Wu, B. Chen, and S. Yu, "Poisoning attack in federated learning using
generative adversarial nets," in 2019 18th IEEE International Conference On Trust, Security And
Privacy In Computing And Communications/13th IEEE International Conference On Big Data
Science And Engineering (TrustCom/BigDataSE), 2019: IEEE, pp. 374-380.
[8] D. C. Attota, V. Mothukuri, R. M. Parizi, and S. Pouriyeh, "An ensemble multi-view federated
learning intrusion detection for IoT," IEEE Access, vol. 9, pp. 117734-117745, 2021.
[9] Q. Huang, I. Katsman, H. He, Z. Gu, S. Belongie, and S.-N. Lim, "Enhancing adversarial example
transferability with an intermediate level attack," in Proceedings of the IEEE/CVF International
Conference on Computer Vision, 2019, pp. 4733-4742.
[10] Y. Chang, S. Laridi, Z. Ren, G. Palmer, B. W. Schuller, and M. Fisichella, "Robust federated learning
against adversarial attacks for speech emotion recognition," arXiv preprint arXiv:2203.04696,
2022.
[11] T. D. Nguyen, P. Rieger, M. Miettinen, and A.-R. Sadeghi, "Poisoning attacks on federated
learning-based IoT intrusion detection system," in Proc. Workshop Decentralized IoT Syst. Secur.
(DISS), 2020, pp. 1-7.
[12] K. Yang, J. Liu, C. Zhang, and Y. Fang, "Adversarial examples against the deep learning based
network intrusion detection systems," in MILCOM 2018 - 2018 IEEE Military Communications
Conference (MILCOM), 2018: IEEE, pp. 559-564.
[13] Y. Peng, G. Fu, Y. Luo, J. Hu, B. Li, and Q. Yan, "Detecting adversarial examples for network
intrusion detection system with GAN," in 2020 IEEE 11th International Conference on Software
Engineering and Service Science (ICSESS), 2020: IEEE, pp. 6-10.
[14] D. Han et al., "Evaluating and improving adversarial robustness of machine learning-based
network intrusion detectors," IEEE Journal on Selected Areas in Communications, vol. 39, no. 8,
pp. 2632-2647, 2021.
