
BATTLE OF DEEP FAKES: ARTIFICIAL INTELLIGENCE SET TO BECOME A MAJOR THREAT TO THE INDIVIDUAL AND NATIONAL SECURITY
Atif Ali
UIIT, PMAS Arid Agriculture University
Rawalpindi, Pakistan
dralexaly@gmail.com

Khushboo Farid Khan Ghouri
University of Karachi
Karachi, Pakistan
Khushboo.ghori@hotmail.com

Hina Naseem
Allama Iqbal Open University
Islamabad, Pakistan
Henoharem@gmail.com

Tariq Rahim Soomro
CCSIS, Institute of Business Management
Karachi, 75190, Sindh, Pakistan
tariq.soomro@iobm.edu.pk

Alaa M. Momani
School of Information Technology, Skyline University College
University City Sharjah, 1797, Sharjah, UAE
alaa.momani@skylineuniversity.ac.ae

Wathiq Mansoor
College of Engineering and IT, University of Dubai
Dubai, UAE
wmansoor@ud.ac.ae
2022 International Conference on Cyber Resilience (ICCR) | 978-1-6654-6122-1/22/$31.00 ©2022 IEEE | DOI: 10.1109/ICCR56254.2022.9995821

Abstract—The article discusses the possibility of political organizations utilizing deepfake technologies. It is observed that deepfakes can affect all levels of public and political life and contribute to the development of several problems, including reputational risks for celebrities and ordinary citizens, the growth of organized crime, and social stability and national security concerns. The sophistication of deepfake technology (DT) has increased significantly: cybercriminals can now modify sounds, images, and video to scam individuals and organizations. This growing threat to international institutions and individuals demands attention. This article discusses deepfakes, their societal benefits, and how the deception technology works. The hazards deepfakes pose to enterprises, governments, and legal systems worldwide are highlighted. In addition, the paper examines potential solutions for deepfakes and ends with future research goals. The authors conclude by discussing potential threats, prospects, and key paths for state regulation of such content within the framework of broader political and legal instruments to combat the spread of disinformation and fake news.

Keywords—political technology, deepfake, information security, AI, public policy

I. INTRODUCTION

Deepfake technologies (DT) have emerged due to advances in artificial intelligence (AI), posing a potential danger to global institutions. A deepfake is an artificial-intelligence-based technique that can edit images, sounds, and video content to depict a nonexistent event. For example, it is becoming typical for the faces of politicians to be spliced onto the bodies of other individuals, who then appear to say things they never said [1]. At the beginning of February 2020, Indian politician Manoj Tiwari effectively used specialized face-swapping (deepfake) software to create versions of his campaign ad in different languages and so attract more voters. This event became yet another convincing illustration of both the significant political and marketing potential of deepfakes and the possible threats emanating from their use, including manipulating public opinion, interfering in citizens' privacy, and mobilizing ethnic or protest groups.

The avalanche of fake news in politics has long been a matter of concern and of attempts at government regulation in many countries. The spread of deepfakes and improved algorithms for their generation can further decrease trust in the mass media in different countries [2]. A 2020 Reuters report shows that the world has seen a drop in trust in online news, an increase in the spread of fake news, and a decline in trust in media content. Likewise, the Edelman Trust Barometer concludes that the world's media is the least trusted institution. Such a global decline in confidence in the "fourth estate" is primarily due to a significant drop in trust in Internet platforms, especially search engines and social networks. Notably, the decline in trust in the media is accompanied by increased confidence in information, recommendations, and comments posted by online users. Let us recall the background of the scandalous precedent in India: on February 7, the day before the elections, two videos gained popularity on the WhatsApp messenger in which the head of the Indian People's Party (Bharatiya Janata Party), Manoj Tiwari, campaigned to vote for himself. In one of the videos, the politician spoke in Hindi, the most widely spoken language; in the other, in the Haryanvi dialect. In reality, however, there was only one source video, from which the deepfakes of the politician's speech in the other languages were made [3].

Issues of forensic image examination are closely related to the provisions of forensic photography and video recording. The current level of development of this field, whose scientific foundations are still taking shape, is characterized by several problems, primarily of a theoretical nature.

Creating convincing deepfake media once demanded skill as well as specialized software and hardware. However, freely available applications such as "FaceSwap" and "Reface" now make it possible to create deepfakes without either, enabling even inexperienced persons to manipulate media for malevolent or amusing ends [3].
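The splicing mechanism behind such face swaps — a shared encoding of pose and expression, decoded in another identity's likeness — can be illustrated with a deliberately simplified linear stand-in. The sketch below is not the DeepFaceLab or FaceSwap implementation (those train deep convolutional autoencoders on aligned face crops); here "faces" are just 64-dimensional random vectors, the shared encoder is a PCA projection, and each identity's decoder is a least-squares fit. The swap step itself, however — encode identity A, decode with identity B's decoder — is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 64-"pixel" face = pose code (shared structure) + identity mean.
n, pose_dim, pix = 200, 8, 64
pose_basis = rng.normal(size=(pose_dim, pix))            # how pose moves pixels
mean_a, mean_b = 3 * rng.normal(size=pix), 3 * rng.normal(size=pix)
faces_a = rng.normal(size=(n, pose_dim)) @ pose_basis + mean_a
faces_b = rng.normal(size=(n, pose_dim)) @ pose_basis + mean_b

# Shared encoder: principal directions of pose variation, fitted on BOTH
# identities after removing each identity's own mean (its "appearance").
centered = np.vstack([faces_a - faces_a.mean(0), faces_b - faces_b.mean(0)])
_, _, vt = np.linalg.svd(centered, full_matrices=False)
enc = vt[:pose_dim].T                                    # 64 x 8, orthonormal

def encode(faces):
    return faces @ enc                                   # shared pose code

# Per-identity decoders: affine least-squares from the shared code back to
# that identity's pixels. Each decoder "paints" its own appearance.
def fit_decoder(faces):
    z1 = np.hstack([encode(faces), np.ones((len(faces), 1))])
    w, *_ = np.linalg.lstsq(z1, faces, rcond=None)
    return lambda f: np.hstack([encode(f), np.ones((len(f), 1))]) @ w

decode_a, decode_b = fit_decoder(faces_a), fit_decoder(faces_b)

# The swap: encode A's faces, decode with B's decoder. The output keeps
# A's pose codes but lands near B's appearance (mean face).
swapped = decode_b(faces_a)
to_b = np.linalg.norm(swapped.mean(0) - mean_b)
to_a = np.linalg.norm(swapped.mean(0) - mean_a)
print(to_b < to_a)  # True: the swapped output wears B's appearance
```

Because each decoder is fitted only on its own identity, it reproduces that identity's appearance for whatever pose code it receives; this asymmetry between the decoders, not the shared encoder, is what carries the "swap."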



Authorized licensed use limited to: Universidad de Sevilla. Downloaded on December 11,2023 at 17:42:44 UTC from IEEE Xplore. Restrictions apply.
Already today, deepfake technology is actively used for criminal purposes. According to the cybersecurity company Deeptrace, in 2019 there were over 14,600 deepfake videos on the internet, the overwhelming majority of which were pornographic videos depicting famous actors.

II. RELATED WORK

The development of information technologies has also created the conditions for committing crimes in cyberspace. Consider the very concept of "deepfake": created by combining the terms "deep learning" and "fake," it denotes a computer synthesis technique that uses artificial-intelligence-based image synthesis to integrate and superimpose existing photographs and videos onto original images or films [4]. A deepfake system combines many images of a person, shot from different angles and with varying facial expressions, to create a video; by analyzing the photographs, the algorithm "learns" how the person appears and moves. By itself, the synthesis of images, video, or audio may not have obviously socially dangerous goals, but the manipulative use of images, videos, or the voices of real people in the media generates an entire complex of moral, legal, and managerial problems.

Deepfakes can fairly faithfully depict people doing things they never really did or saying things they never said. Deepfake algorithms "recognize" how someone's face looks from various angles and expressions by constructing models from hundreds to thousands of target images. The algorithm can then predict what the target individual's face (or that of a victim of information sabotage) will look like while imitating the expression on another person's face. A similar procedure is used to train the algorithm to replicate an individual's accent, intonation, and tone of voice [5].

Fig 1: Transformation of face

The qualification and technical requirements for creating quality deepfakes are low: any motivated person with a mid-range PC can create them, and several open-source programs, such as DeepFaceLab and FaceSwap, are freely available on the internet. The virtual sitter model we constructed contained a partial overlay of images of two female faces extracted from video interviews. The corresponding part of Model 2's face (original video) was imported onto the left half of Model 1's face (target video) (Fig. 1). This required over 45 hours of continuous machine learning and 200,000 iterations. Despite the relatively few iterations, the quality of the applied mask looks much better; the comparative result is shown in Fig. 2, with a smoother transition around the superimposed face and greater clarity and detail of wrinkles and teeth [6].

Fig 2: A virtual model whose face's right side belongs to another girl

A. Potential dangers of Deepfakes

Deepfakes can shortly impact various spheres of public and political life, contributing to the spread of a wide range of threats, including reputational risks for celebrities and ordinary citizens, the growth of organized crime, and concerns regarding social stability and national security.

To begin, there are significant individual threats of abuse. Deepfakes can be used to harass, slander, and blackmail individuals, including journalists and elected officials. In 2017, the network was swept by a wave of pornography with deepfake faces of celebrities superimposed on the bodies of porn actresses. Internet bullying of certain public figures (primarily women) with obscene and pornographic deepfake content has the most destructive effect in Asian countries. Former Pakistani prime minister Imran Khan, for example, has been targeted with deepfake videos intended to destroy his reputation.

Secondly, organized crime can take advantage of deepfake technology. Deepfakes could be a gold mine for criminal organizations and virtual fraudsters. While people are predominantly aware of modifying images with image editors (such as Photoshop), deepfake technology, particularly voice mimicry, is relatively unknown outside data science and machine learning. Perhaps a more serious issue than manipulated images and videos is the technology's ability to simulate accent, intonation, and speech patterns. Synthesized speech can be used to perpetrate fraud, including tampering with bank accounts or staging bogus kidnappings: victims are identified via social media and then confronted with phone calls demanding ransom for a missing loved one. In a digital age when many parents openly share videos of their children on the internet, this scam can become significantly more dangerous with deepfake audio.

Deepfakes can also be used as a form of black public relations. Companies or entrepreneurs will be able to order the creation of deepfake videos in which, for example, the CEO of a competing company makes defamatory or offensive statements, and then "leak" the video on social networks. Corporate sabotage through deepfakes can potentially be used to manipulate the stock market. In the same way, the technology can be used to discredit political opponents, parties, and social movements.

Of course, the most serious concern is the potential use of deepfake technologies to fuel conflicts and civil unrest and undermine national security. For example, in many countries it is possible to provoke interethnic or sectarian clashes by posting fake videos on social networks in which a representative of a certain group speaks out or performs other actions that

others may perceive as an insult. Any such fake video with provocative content can ascribe an extremist message to any politician or representative of any ethnic group, and any attempt by the authorities to react and explain the technology of deepfakes after the fact will come too late, particularly if the general population is not aware of the phenomenon of deepfakes and its capabilities. As media technology develops, its audience becomes more and more involved, and it becomes difficult to refute a falsified video with high online engagement after it has been viewed by many people. Deepfake technology is already being used for nefarious purposes such as those listed below [7]:

• Celebrity pornography.
• Hoaxes and scams.
• Manipulation of elections.
• Social engineering.
• Theft of identity and financial fraud.
• Automated disinformation attacks.

B. Prospects for countering the proliferation of deepfakes

Despite the real danger of deepfakes, it should be recognized that this technology is primarily just a technical tool, with more positive uses than negative ones. Nevertheless, governments today face the need to develop certain actions and precautions to minimize the possibility of damage from deepfakes created with negative and criminal intentions. This issue is already on the political agenda of several countries. For example, in the summer of 2019, the Intelligence Committee of the US House of Representatives held open hearings on threats to national security posed by artificial intelligence, primarily deepfakes. It was unanimously determined that deepfakes significantly threaten various facets of American society. Legislation prohibiting US officials and agencies from creating and disseminating such content is currently being debated, and the United States is completing the preparation of a draft federal law regulating this area. Russia is likewise analyzing the possibilities of limiting the uncontrolled distribution of deepfakes within the framework of already adopted laws that combat inaccurate information published under the guise of socially significant, reliable messages [8].

A number of approaches may assist in designing solutions to the rising challenges of building smart and autonomous management systems, including fuzzy logic design [9], machine learning [10-12, 14, 17-21, 23, 25, 27-32], soft computing [9, 16], particle swarm optimization (PSO) [13, 22], computational intelligence [15], round robin scheduling [24], and explainable artificial intelligence [26].

Of course, the proliferation of dangerous deepfakes should be contained by introducing and improving fact-checking mechanisms (checking messages and detecting fakes). Individuals, social media platforms, and especially the media should have the tools to quickly and efficiently test information messages, audio, and video recordings that they suspect are forgeries. It is also desirable that end-users determine whether the information they view and share with others is genuine. Thus, the priority task is to develop services and tools for fact-checking. Ideally, they should be simple (requiring no serious IT skills or special education) and free, which is difficult to achieve and requires appropriate investment. State control and pressure on social networking services toward more serious moderation of their content and the introduction of fact-checking tools are considered priority areas today; websites and platforms that carry potentially dangerous fake information must be held accountable. Legal and informational mechanisms are being analyzed that would encourage social networks and messengers to label "synthetic media" more carefully in order to raise public awareness of such materials. It is a matter of time before laws are enacted to prohibit certain inappropriate deepfake content; the drafting of such laws is already underway in several countries worldwide.

In parallel with the development of deepfake technologies, detection and verification technologies are also being improved. At the moment, deepfake generation has not yet been able to completely overcome the famous "uncanny valley" effect, according to which a detailed virtual character very similar to a person causes a sharp rejection in the audience if it shows minor inconsistencies with reality or even unnaturally robotic movement. Deepfake videos look convincing only in the first seconds; the longer the duration, the more strongly the uncanny valley effect can manifest itself, scaring away the audience and frustrating the manipulator's intentions. However, for professional provocateurs, even a few seconds may be enough. Experts note that "painted-on" faces in a fake video usually do not blink, so it will be possible to recognize deepfakes by analyzing eye movements and blinking frequency in the future.

While human content analysis is still required, developing automated tools to detect deepfakes is justified. Rather than reacting to potentially dangerous deepfakes post factum, automatic detection can prevent them from being posted. Along with detecting deepfakes, it is possible to develop methods for determining the date, time, and physical origin of deepfake content. These methods must be straightforward, practical, and accessible to a broad audience.

Thus, information security specialists in different countries agree that to combat the spread of socially dangerous deepfakes, it is necessary to raise public awareness, develop detection technologies, and introduce new laws regulating this area. In addition, new legal instruments should be developed and applied to streamline this "gray area." However, activities aimed at combating the dissemination of dangerous content should not, at the same time, undermine freedom of expression.

Steps such as mandatory labeling of synthetic media, increased content moderation, delays in posting on social media, and government pressure on online platforms to censor posted content are inevitably controversial and resented by the public. In addition, the business models of several IT companies and information resources come under threat. By their very nature, social media platforms imply freedom and rapid information exchange. While adding post

delays for machine or manual content analysis can weed out some potentially dangerous misinformation, losing the effect of instantaneousness, even if only for a few minutes, would represent a fundamental shift in the nature of social media as a whole. As a result, it is doubtful that the companies behind popular social networks, services, and messengers will take drastic measures, such as initiatives to remove deepfake algorithms from the public domain. Besides the fact that the corresponding software has already been installed on millions of computers around the world, suppressing and ignoring this technology would lead to the opposite effect: it would become much more difficult to counteract aggressive disinformation that uses deepfakes, and the development of media literacy and, accordingly, of the informational stability of society would be artificially slowed. Today there is a gradual process of adaptation of society and network culture to new media opportunities: deepfakes are entering mass culture and being aestheticized, and their capabilities are used to create entertaining content. In the coming years, we will be able to assess the political potential of deepfakes, and governments need to prepare for this.

C. Factors to identify deepfake technology

It is possible to identify deepfakes using the following cues. Continuous development makes detection ever more difficult, but at this moment deepfakes can be recognized by the following [9]:

● Abnormal eye movement.
● Unnatural facial expressions.
● An absence of blinking.
● An unnaturally shaped body.
● Facial morphing: a simple stitch of one image over another.
● Inconsistent head positions.
● Abnormal skin colors.
● Unnatural hair.
● Awkward head and body positioning.
● Voices that sound robotic.
● Odd lighting or discoloration.
● Bad lip-syncing.
● Blurry or misaligned visuals.
● Digital background noise.

III. DISCUSSION

In the never-ending cybersecurity race, attackers use AI-based deepfake audio impersonations to enhance the punch of their threats. Advances in deepfake technology mean that cloning is easy, which can lead to large cyber attacks. Mostly, it is used to manipulate others and commit crimes against companies; for example, cybercriminals use deepfake technology to impersonate CEOs and request funds transfers. Software exists for identifying fake videos and audio, and it is important to note that AI fights AI in deepfake scenarios. Using AI with machine learning, together with educating staff, is an utmost requirement for companies. Software development also faces the challenge of identifying deepfake scenarios.

IV. CONCLUSION

AI-driven hacking was one of the major challenges of 2021. Using deepfake software, images and videos are created to trap companies. This causes trouble, especially in global politics, where deepfakes are very hard to detect, and governments all over the world have major concerns about the technology. Even though it is not yet very advanced, yearly losses of $2.4M are already attributable to it. It is time to fight AI with AI: companies are developing software that identifies deepfakes, along with policies for handling this software. Reliability is one of the major concerns with this technology: if a company's software can recognize a deepfake, that same capability can be used to generate better deepfake audio or video for future malicious purposes. It is a long journey of identification and of making rules for the governance and usage of AI.

REFERENCES

[1] Ali, A., Said, R. A., Rizwan, H. M. A., Shehzad, K., & Naz, I. (2022, February). Application of computational intelligence and machine learning to conventional operational research methods. In 2022 International Conference on Business Analytics for Technology and Security (ICBATS) (pp. 1-6). IEEE.
[2] Garden, A. (2020). "He could tell you things! I've tried to forget; things I never did know." The Literary Afterlives of Roger Casement, 1899-2016. https://doi.org/10.3828/liverpool/9781789621815.003.0002
[3] Ali, A., Septyanto, A. W., Chaudhary, I., Al Hamadi, H., Alzoubi, H. M., & Khan, Z. F. (2022, February). Applied artificial intelligence as event horizon of cyber security. In 2022 International Conference on Business Analytics for Technology and Security (ICBATS) (pp. 1-7). IEEE.
[4] Khan, K. F., Ali, A., Khan, Z. F., & Siddiqua, H. (2021, November). Artificial intelligence and criminal culpability. In 2021 International Conference on Innovative Computing (ICIC) (pp. 1-7). IEEE.
[5] Wegner, E., Burkhart, C., Weinhuber, M., & Nückles, M. (2020). What metaphors of learning can (and cannot) tell us about students' learning. Learning and Individual Differences, 80, 101884. https://doi.org/10.1016/j.lindif.2020.101884
[6] Wong, L. P. (n.d.). Hierarchical clustering using K-iterations fast-learning artificial neural networks (KFLANN). https://doi.org/10.32657/10356/2530
[7] The deepfake challenges and deepfake video detection. (2020). International Journal of Innovative Technology and Exploring Engineering. https://doi.org/10.35940/ijitee.e2779.049620
[8] AsadUllah, M., Khan, M. A., Abbas, S., Athar, A., Raza, S. S., & Ahmad, G. (2018). Blind channel and data estimation using fuzzy logic-empowered opposite learning-based mutant particle swarm optimization. Computational Intelligence and Neuroscience, 2018.
[9] Khan, F., Khan, M. A., Abbas, S., Athar, A., Siddiqui, S. Y., Khan, A. H., ... & Hussain, M. (2020). Cloud-based breast cancer prediction empowered with soft computing approaches. Journal of Healthcare Engineering, 2020.
[10] Khan, M. A., Kanwal, A., Abbas, S., Khan, F., & Whangbo, T. (2022). Intelligent model for predicting the quality of services violation using machine learning. CMC-Computers, Materials & Continua, 71(2), 3607-3619.
[11] Iqbal, M. W., Naqvi, M. R., Khan, M. A., Khan, F., & Whangbo, T. (2022). Mobile devices interface adaptivity using ontologies. CMC-Computers, Materials & Continua, 71(3), 4767-4784.
[12] Ayvaz, U., Gürüler, H., Khan, F., Ahmed, N., Whangbo, T., & Bobomirzaevich, A. A. (2022). Automatic speaker recognition using mel-frequency cepstral coefficients through machine learning. CMC-Computers, Materials & Continua, 71(3), 5511-5521.

[13] Mehmood, S., Ahmad, I., Khan, M. A., Khan, F., & Whangbo, T. (2022). Sentiment analysis in social media for competitive environment using content analysis. CMC-Computers, Materials & Continua, 71(3), 5603-5618.
[14] Khan, M. A., Abbas, S., Raza, A., Khan, F., & Whangbo, T. (2022). Emotion based signal enhancement through multisensory integration using machine learning. CMC-Computers, Materials & Continua, 71(3), 5911-5931.
[15] Rahmani, A. M., Ali, S., Malik, M. H., Yousefpoor, E., Yousefpoor, M. S., Mousavi, A., & Hosseinzadeh, M. (2022). An energy-aware and Q-learning-based area coverage for oil pipeline monitoring systems using sensors and Internet of Things. Scientific Reports, 12(1), 1-17.
[16] Fatima, A., Khan, M. A., Abbas, S., Waqas, M., Anum, L., & Asif, M. (2019). Evaluation of planet factors of smart city through multi-layer fuzzy logic (MFL). The ISC International Journal of Information Security, 11(3), 51-58.
[17] Fatima, S. A., Hussain, N., Balouch, A., Rustam, I., Saleem, M., & Asif, M. (2020). IoT enabled smart monitoring of coronavirus empowered with fuzzy inference system. International Journal of Advance Research, Ideas and Innovations in Technology, 6(1), 188-194.
[18] Saleem, M., Khan, M. A., Abbas, S., Asif, M., Hassan, M., & Malik, J. A. (2019, July). Intelligent FSO link for communication in natural disasters empowered with fuzzy inference system. In 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1-6). IEEE.
[19] Ghazal, T. M., Rehman, A. U., Saleem, M., Ahmad, M., Ahmad, S., & Mehmood, F. (2022, February). Intelligent model to predict early liver disease using machine learning technique. In 2022 International Conference on Business Analytics for Technology and Security (ICBATS) (pp. 1-5). IEEE.
[20] Saleem, M., Abbas, S., Ghazal, T. M., Khan, M. A., Sahawneh, N., & Ahmad, M. (2022). Smart cities: Fusion-based intelligent traffic congestion control system for vehicular networks using machine learning techniques. Egyptian Informatics Journal, in press.
[21] Asif, M., Abbas, S., Khan, M. A., Fatima, A., Khan, M. A., & Lee, S. W. (2021). MapReduce based intelligent model for intrusion detection using machine learning technique. Journal of King Saud University - Computer and Information Sciences, in press.
[22] Asif, M., Khan, M. A., Abbas, S., & Saleem, M. (2019, January). Analysis of space & time complexity with PSO based synchronous MC-CDMA system. In 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET) (pp. 1-5). IEEE.
[23] Muhammad, M. U. U. A. H., & Saleem, A. M. S. F. M. (2022). Intelligent intrusion detection system for Apache web server empowered with machine learning approaches. International Journal of Computational and Innovative Sciences, 1(1), 1-8.
[24] Akmal, R., & Saleem, M. (2022). A novel method to improve the round robin CPU scheduling quantum time using arithmetic mean. International Journal of Computational and Innovative Sciences, 1(2), 69-82.
[25] Khan, Z. (2022). Used car price evaluation using three different variants of linear regression. International Journal of Computational and Innovative Sciences, 1(1).
[26] Muneer, S., & Rasool, M. A. (2022). A systematic review: Explainable artificial intelligence (XAI) based disease prediction. International Journal of Advanced Sciences and Computing, 1(1), 1-6.
[27] Muhammad, N. A., & Fatima, R. (2022). Role of image processing in digital forensics and cybercrime detection. International Journal of Computational and Innovative Sciences, 1(1), 4-4.
[28] Kanwal, A., & Javaid, M. (2022). Emotion based regulatory system for metacognition. International Journal of Computational and Innovative Sciences, 1(2), 8-20.
[29] Ghazal, T. M., & Alzoubi, H. M. (2022). Fusion-based supply chain collaboration using machine learning techniques. Intelligent Automation & Soft Computing, 31(3), 1671-1687.
[30] Ghazal, T. M. (2022). Data fusion-based machine learning architecture for intrusion detection. Computers, Materials & Continua, 70(2), 3399-3413.
[31] Ghazal, T. M., Afifi, M. A. M., & Kalra, D. (2020). Security vulnerabilities, attacks, threats and the proposed countermeasures for the Internet of Things applications. Solid State Technology, 63(1s).
[32] Ghazal, T. M., Abbas, S., Ahmad, M., & Aftab, S. (2022, February). An IoMT based ensemble classification framework to predict treatment response in hepatitis C patients. In 2022 International Conference on Business Analytics for Technology and Security (ICBATS) (pp. 1-4). IEEE.

