Facing reality?
Law enforcement and the challenge of deepfakes
Neither the European Union Agency for Law Enforcement Cooperation nor any person acting
on behalf of the agency is responsible for the use that might be made of the following information.
For any use or reproduction of photos or other material that is not under the copyright of
the European Union Agency for Law Enforcement Cooperation, permission must be sought
directly from the copyright holders.
While best efforts have been made to trace and acknowledge all copyright holders, Europol
would like to apologise should there have been any errors or omissions. Please do contact us
if you possess any further information relating to the images published or their rights holder.
Cite this publication: Europol (2022), Facing reality? Law enforcement and the challenge
of deepfakes, an observatory report from the Europol Innovation Lab, Publications Office
of the European Union, Luxembourg.
This version published in January 2024 replaces the previous one. Updates were made
in the chapter Understanding deepfakes.
This publication and more information on Europol are available on the Internet.
www.europol.europa.eu
Contents
Introduction
Understanding deepfakes
Deepfake technology’s impact on crime
  Disinformation
  Document fraud
  Deepfake as a service
Deepfake detection
  Manual detection
  Automated detection
  Preventive measures
Conclusion
Introduction

Today, threat actors are using disinformation campaigns and deepfake content to misinform the public about events, to influence politics and elections, to contribute to fraud, and to manipulate shareholders in a corporate context. Many organisations have now begun to see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially now that most interactions have moved online since the COVID-19 pandemic. This concern is echoed by a recent report by University College London (UCL) that ranks deepfake technology as one of the biggest threats faced by society today.1
1 UCL – London’s Global University, ‘‘Deepfakes’ ranked as most serious AI crime threat’,
https://www.ucl.ac.uk/news/2020/aug/deepfakes-ranked-most-serious-ai-crime-threat.
Understanding deepfakes

Disinformation is being spread with the intention to deceive. Tools of disinformation campaigns can include deepfakes, falsified photos, counterfeit websites and other information taken out of context to deceive the audience.2
Ukraine had warned that Russia might deploy deepfakes showing President Zelenskyy surrendering.5 This fear appears to have become reality after hackers made a Ukrainian news website show a video of President Zelenskyy telling his soldiers to surrender.6 At the time of writing, much is still unclear about the video, and it has not been verified to be a genuine deepfake or some other kind of fake, but it does show how (deep)fakes are being used for disinformation purposes.
Examples like the one above show that this type of disinformation
can be dangerous. Its aim is to intensify existing conflicts and
debates, undermine trust in state-run institutions and stir up anger
and emotions in general. The erosion of trust is likely to make the
business of policing harder.
5 Metro, ‘Ukraine warns Russia may deploy deepfakes of Volodymyr Zelensky surrendering’, 2022, accessed on 15 March 2022, https://metro.co.uk/2022/03/04/ukraine-warns-russia-may-deploy-deepfakes-of-zelensky-surrendering-16217350.
6 National Public Radio, ‘Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn’, 2022, accessed on 17 March 2022, https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.
7 iProov, ‘Almost Three-Quarters of UK Public Unaware of Deepfake Threat, New Research’,
2019, accessed 15 March 2022, https://www.iproov.com/press/uk-public-deepfake-threat.
8 Köbis, N.C. et al., ‘Fooled twice: People cannot detect deepfakes but think they can’, iScience,
24(11), 2021, accessed 15 March 2022, https://doi.org/10.1016/j.isci.2021.103364.
9 Recorded Future, Insikt Group, ‘The Business of Fraud: Deepfakes, Fraud’s Next Frontier’,
2021.
The technology behind deepfakes

Deepfake technology applies the power of deep learning to audio and audio-visual content. Employed properly, these models can produce content that convincingly shows people saying or doing things they never did, or create people that never existed in the first place. The rise of the application of AI to generate deepfakes is already having, and will continue to have, implications for the way people treat recorded media. Here we discuss two core advancements behind deepfake technology, namely deep learning and generative adversarial networks, and how 5G technology may further enable the use of deepfakes.
Deep learning

Machine learning is an application of AI in which computers automatically improve through the use of data. Deep learning is a kind of machine learning that applies neural networks, which mimic the way our brains work to learn more effectively from the data provided.10 Deep learning technology, paired with the availability of large databases of material to train the generative models on, has allowed for rapid improvement of deepfake technology.

The availability of data is therefore essential for a good deepfake system; it needs examples to learn what the result has to look like. It will try to discover patterns in the available data, extracting which features are important and how they relate to each other. That allows it to construct a complete and convincing picture. Depending on the quality of the available data and the features the algorithm uses, the result may be more or less realistic.
10 Code Academy, ‘What Is Deep Learning?’, 2021, accessed on 10 March 2022, https://www.
codecademy.com/resources/blog/what-is-deep-learning/.
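The pattern-extraction idea above can be illustrated with a deliberately tiny model: a hypothetical one-"weight" learner, far simpler than a real neural network, that adjusts itself from examples until it has recovered the hidden pattern in the data.

```python
# Toy illustration of "learning from data" (illustrative only, not real
# deepfake training): a single-weight model adjusts itself by gradient
# descent until its output matches the examples it was shown.
def train(examples, lr=0.05, epochs=200):
    w = 0.0  # the model starts knowing nothing about the data
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of the squared error
            w -= lr * grad             # nudge the weight towards the pattern
    return w

examples = [(1, 2.0), (2, 4.0), (3, 6.0)]  # the hidden pattern is y = 2x
print(round(train(examples), 2))  # converges to 2.0
```

More and better examples pull the weight closer to the true pattern, which mirrors why data availability matters so much for deepfake quality.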
In one example from 2018, filmmaker Jordan Peele and BuzzFeed CEO Jonah Peretti created a deepfake video to warn the public about disinformation, specifically regarding the public’s perception of political leaders. Peele and Peretti used free tools, with the help of editing experts, to overlay Peele’s voice and mouth over a pre-existing video of Barack Obama. In the video, the fake Obama says, “We are entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.”11
In a generative adversarial network, a generative model and a discriminative model are pitted against each other, with the discriminator testing whether content comes from the generator or from real data.12 With the results from these tests, the models continuously improve until the generated content is just as likely to have come from the generative model as from the training data. This powerful method both simplifies the learning process, making it more accessible, and improves the outcome by incorporating a mechanism designed to minimise the chance that its product can be distinguished from authentic content.
11 Ars Electronica, ‘Obama Deep Fake’, 2018, accessed on 10 March 2022, https://ars.
electronica.art/center/en/obama-deep-fake/.
12 Goodfellow, I. et al, (2014), Generative Adversarial Nets (PDF). Proceedings of the
International Conference on Neural Information Processing Systems (NIPS 2014). pp.
2672–2680.
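The adversarial loop can be caricatured in a few lines of Python. This is a hypothetical toy, not a real GAN: real systems train two neural networks against each other, whereas here the "discriminator" merely learns the mean of the real data and the "generator" is a single number that keeps moving until the discriminator can no longer tell it apart from the real samples.

```python
import random

# Caricature of the adversarial loop (assumed simplification, not a real
# GAN): the "discriminator" learns what real data looks like, and the
# "generator" adjusts its output to reduce how easily it is spotted.
random.seed(0)
real = [random.gauss(5.0, 0.1) for _ in range(500)]  # "authentic" samples

d_mean = 0.0     # discriminator's running estimate of the real data
g_value = -3.0   # generator's current fake sample, far from realistic
for step, x in enumerate(real, 1):
    d_mean += (x - d_mean) / step        # discriminator trains on real data
    g_value -= 0.1 * (g_value - d_mean)  # generator moves to fool it

print(round(abs(g_value - d_mean), 2))  # prints 0.0: fake now matches real
```

The same dynamic, at vastly greater scale, is what drives generated faces to become indistinguishable from authentic ones.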
A feature that is missing from the training data will also be missing from the output, regardless of the incorporation of that feature elsewhere. For example, people’s eyes would not blink in early deepfake videos, making them relatively easy to detect.13 Even though the training data for deepfake models included many pictures of people, these people generally did not blink in pictures. Adding more videos with people blinking to the database allowed both models to work together to produce people with blinking eyes, making the result more realistic and consequently harder to differentiate from authentic content.
Face swap
Replacing the face of the person in the video with that of another person;
Attribute editing
Changing characteristics of the person in the video, e.g. the style or colour of the hair;
Face re-enactment
Transferring the facial expressions from the face of one person onto the person in the target video;
13 GIZMODO, ‘Most Deepfake Videos Have One Glaring Flaw’, 2018, accessed on 10 March
2022, https://gizmodo.com/most-deepfake-videos-have-one-glaring-flaw-1826869949.
Deepfake technology’s impact on crime

Participants of the foresight activities cited several trends that European LEAs should be sensitive to. Of note is crime as a service (CaaS), with criminals selling access to the tools, technologies and knowledge needed to facilitate cyber and cyber-enabled crime. CaaS is expected to evolve in parallel with current technologies, resulting in the automation of crimes such as hacking, adversarial machine learning and deepfakes. Indeed, participants flagged the tendency of criminal actors to become early adopters of new technologies. As a result, they are always one step ahead of law enforcement in their implementation, use and adaptation of these technologies.
14 The Guardian, ‘An information apocalypse is coming. How can we protect ourselves?’, 2018, accessed on 10 March 2022, https://www.theguardian.com/commentisfree/2018/mar/16/an-information-apocalypse-is-coming-how-can-we-protect-ourselves.
15 Europol, ‘Malicious Uses and Abuses of Artificial Intelligence’, 2020, accessed on 10 March
2022, https://www.europol.europa.eu/publications-events/publications/malicious-uses-
and-abuses-of-artificial-intelligence.
16 KYC stands for Know Your Customer and refers to the processes for identity verification and
fraud risk assessment used by institutions.
Disinformation
Disinformation campaigns are operations to deliberately spread
false information in order to deceive.17 One major concern about
this use is the ease of creating a fake emergency alert that warns of
an impending attack. Another concern is the disruption of elections
or other aspects of politics by releasing a fake audio or video
recording of a candidate or other political figure. To illustrate this,
the BBC created a video for the 2019 general election in the UK in
which the candidates Boris Johnson and Jeremy Corbyn endorsed
each other.18 If this kind of manipulation successfully deceives a
large enough part of the populace, this could have a serious impact
on the outcome of an election.
Non-consensual pornography
In a December 2020 study, Sensity, an Amsterdam-based company
that detects and tracks deepfakes online, found 85 047 deepfake
videos on popular streaming websites, with the number doubling
every 6 months.20 In an earlier study from September 2019, Sensity discovered that 96 % of the fake videos involved non-consensual pornography. To create such material, a perpetrator overlays the victim’s face onto the body of a pornographic actor, making it appear that the victim is engaging in the act. In many cases, the victims of pornographic deepfakes are celebrities or other high-profile individuals.
These videos are popular, having received approximately 134 million views at the time21, and there are several pornographic sites that specifically produce pornographic celebrity deepfakes. Perpetrators often act anonymously, making attribution of the crime more difficult.
Document fraud
Passports are becoming increasingly hard to forge with modern
fraud prevention measures. Synthetic media and digitally
manipulated facial images present a new approach for document
fraud. Using different methods and tools, it is possible to combine,
or morph, the faces of the person the passport actually belongs
to and the person(s) wanting to obtain a passport illegally. This
method may increase the chance that the photo in a forged
document passes any identity checks including those using
automated means (facial recognition systems).22
The face in the middle of the image above is an example of a digitally manipulated
facial image made using this ‘morphing’ method from the other two images.
The images on the left and right are from The SiblingsDB, which contains different
datasets depicting images of individuals related by sibling relationships.
The subjects are voluntary students and employees of the Politecnico di Torino
and their siblings, in the age range between 13 and 50.23
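The core of the ‘morphing’ step, stripped of the facial-landmark alignment that real morphing tools perform, is a weighted blend of two images. The sketch below (illustrative values only, using tiny greyscale "faces" as nested lists) shows how a pixel-wise blend produces an image that shares features of both inputs.

```python
# Naive sketch of the 'morphing' idea (assumed simplification: real tools
# also warp facial landmarks into alignment; here we only blend pixel
# values of two equally sized greyscale images).
def morph(img_a, img_b, alpha=0.5):
    """Blend two images: alpha=0 returns img_a, alpha=1 returns img_b."""
    return [
        [round((1 - alpha) * a + alpha * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

face_a = [[0, 100], [200, 50]]    # toy 2x2 "photo" of person A
face_b = [[100, 100], [0, 150]]   # toy 2x2 "photo" of person B
print(morph(face_a, face_b))  # [[50, 100], [100, 100]]
```

A morph produced this way resembles both contributors, which is precisely what lets it pass identity checks for more than one person.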
21 Government Technology, ‘Deepfakes Are on the Rise — How Should Government Respond?’,
2020, accessed on 10 March 2022, https://www.govtech.com/policy/deepfakes-are-on-the-
rise-how-should-government-respond.html.
22 Robertson, D.J., Mungall, A., Watson, D.G. et al, ‘Detecting morphed passport photos: a
training and individual differences approach,’ Cogn. Research 3, 27, 2018, accessed on 16
August 2021, https://doi.org/10.1186/s41235-018-0113-8.
MIT Technology Review, ‘The hack that could make face recognition think someone
else is you’, 2020, accessed on 10 March 2022, https://www.technologyreview.
com/2020/08/05/1006008/ai-face-recognition-hack-misidentifies-person.
Pikoulis, E.-V. et al., ‘Face Morphing, a Modern Threat to Border Security: Recent Advances
and Open Challenges’, Applied Sciences, 2021, accessed 17 February 2022, at https://www.
mdpi.com/2076-3417/11/7/3207.
23 T.F. Vieira, A. Bottino, A. Laurentini, M. De Simone, ‘Detecting Siblings in Image Pairs’, The
Visual Computer, 2014, vol 30, issue 12, p. 1333-1345, doi: 10.1007/s00371-013-0884-3
This kind of approach to fraud can be applied to any other type of
digital identity check that requires visual authentication. It greatly
undermines identity verification procedures since there is no reliable
way to detect this kind of attack.24, 25, 26, 27
Document fraud is a facilitator of other crimes like illegal
immigration, trafficking in human beings, selling of various illegal
goods, and terrorism, as perpetrators often use fake IDs to travel to
their target locations. Deepfake technology might amplify the risk
for advanced document fraud by organised crime groups.
Deepfake as a service
Just like many other new technologies, deepfakes are still mainly used by proficient engineers and researchers. However, deepfake capabilities are becoming more accessible to the masses through deepfake apps and websites. There are special
marketplaces on which users or potential buyers can post requests
for deepfake videos (for example, requests for non-consensual
pornography). The increased demand for deepfakes has also
led to the creation of several companies that deliver deepfakes
as a product or even online service. Recorded Future has reported
a threat actor’s willingness to pay USD 16 000 for this kind
of service.28
24 University of Lincoln ScienceDaily, ‘Two fraudsters, one passport: Computers more accurate
than humans at detecting fraudulent identity photos,’ 2019, accessed on 20 July, 2020, at
www.sciencedaily.com/releases/2019/08/190801104038.htm.
25 Naser Damer, PhD. (n.d.). Fraunhofer IGD, ‘Face morphing: a new threat?’ accessed on 20
July 2020, at https://www.igd.fraunhofer.de/en/press/annual-reports/2018/face-morphing-
a-new-threat.
26 David J. Robertson, et al. ‘Detecting morphed passport photos: a training and individual
differences approach,’ Springer Nature, 2018, accessed on 20 July 2020, at https://
cognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-018-0113-8 .
27 Robin S.S. Kramer, et al., ‘Face morphing attacks: Investigating detection with humans and
Computers, Springer Nature, 2019, accessed on 20 July 2020, at https://link.springer.com/
article/10.1186/s41235-019-0181-4.
28 Biometric update, ‘Dark news from dark web: deepfakers are getting their act together’, 2021,
accessed on 16 March 2022, https://www.biometricupdate.com/202105/dark-news-from-
dark-web-deepfakers-are-getting-their-act-together.
There is a wide range of threat actors who would be interested in deepfakes as a service. Those who know how to leverage sophisticated AI can perform the service for others, enabling threat actors to manipulate a person’s face and/or voice without understanding the intricacies behind how it works. They can then conduct advanced social engineering attacks on unsuspecting victims, with the aim of making a sizable profit. Platforms offering these kinds of services have already started to emerge.29
29 Europol, ‘Malicious Uses and Abuses of Artificial Intelligence’, 2020, accessed on 10 March
2022, https://www.europol.europa.eu/publications-events/publications/malicious-uses-
and-abuses-of-artificial-intelligence.
Law enforcement officers will increasingly need to scrutinise such content and verify whether it is real or somehow artificially manipulated or generated.
Preventive measures can also focus on audio-visual material itself. For example, technical solutions could be implemented to make deepfakes easier to spot or to increase markers of authenticity. The Content Authenticity Initiative31 is an example of efforts to provide a standard for content authenticity and provenance.
Deepfake detection

Law enforcement has always had to deal with fake evidence and therefore is in a good position to adapt to the presence of deepfakes. In order to handle the material LEAs encounter appropriately, it is important to account for the possibility of synthetic content created with malicious intent. Here we discuss some of the ways this synthetic content can be uncovered, and preventive measures that can be taken against this threat.
Manual detection
It is still possible for the vast majority of deepfake content to be detected manually by looking for inconsistencies. This is a labour-intensive task, which can only be done for a very limited number of files, and it requires appropriate training to become familiar with all the relevant signs. Moreover, the process is further complicated by the human predisposition to believe audio-visual content and to work from a truth-default perspective.32 That introduces the possibility of mistakes, both in selecting the files that need to be inspected and in the inspection itself.
The models generating deepfakes might produce believable images,
but these may still contain imperfections upon closer examination.
A few examples include:
• lack of blinking;
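The blinking cue can be turned into a toy automated check. The sketch below is illustrative only: the per-frame "eye openness" scores and the threshold are hypothetical, standing in for what a landmark-based eye-aspect-ratio detector would produce.

```python
# Toy blink check (hypothetical scores and threshold): early deepfakes
# rarely blinked, so a very low blink count over a clip is a red flag.
# We count the dips of a per-frame eye-openness score below a threshold.
def blink_count(eye_openness, threshold=0.2):
    blinks, closed = 0, False
    for value in eye_openness:
        if value < threshold and not closed:
            blinks += 1                # a new dip below threshold = one blink
        closed = value < threshold     # remember state to avoid double counts
    return blinks

real = [0.8, 0.7, 0.1, 0.8, 0.9, 0.15, 0.8]  # eyes close twice
fake = [0.8, 0.8, 0.9, 0.8, 0.85, 0.9, 0.8]  # eyes never close

print(blink_count(real), blink_count(fake))  # 2 0
```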
Automated detection
Ideally, a system would scan any digital content and automatically report on its authenticity. Such a system will most likely never be perfect, but with the increasing sophistication of deepfake technology, a high degree of certainty from such a system could become worth more than manual inspection. There have already been efforts to create this kind of software by organisations such as Facebook34 and the security firm McAfee.35 Detection software looks for signs of manipulation and helps the reviewer decide on authenticity, ideally with an explainable AI report on these signs.
Biological signals
This approach tries to detect deepfakes based on imperfections in
the natural changes in skin colour that arise from the flow of blood
through the face.37
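The biological-signals idea can be sketched in a hedged, simplified form. All values below are illustrative: a real face shows a faint periodic change in skin colour driven by the pulse, so we simulate a per-frame mean green-channel trace and test for that periodicity with a simple autocorrelation at the expected pulse lag; a deepfake trace is modelled as the same brightness without the rhythm.

```python
import math
import random

# Hedged sketch of pulse-based detection (illustrative signals, not a real
# detector): check whether a face's colour trace repeats at the pulse rate.
def autocorr(signal, lag):
    """Normalised autocorrelation of the signal at the given lag."""
    mean = sum(signal) / len(signal)
    centred = [s - mean for s in signal]
    num = sum(centred[i] * centred[i + lag] for i in range(len(signal) - lag))
    den = sum(c * c for c in centred)
    return num / den

random.seed(1)
fps = 30  # at 60 bpm, one heartbeat spans 30 frames
pulse = [128 + 2 * math.sin(2 * math.pi * i / fps) + random.gauss(0, 0.2)
         for i in range(300)]                             # real skin: rhythmic
flat = [128 + random.gauss(0, 0.2) for _ in range(300)]   # fake: no rhythm

print(autocorr(pulse, fps) > 0.5, autocorr(flat, fps) > 0.5)  # True False
```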
33 Venema, A. E., & Geradts, Z. J., ‘Digital Forensics Deepfakes and the Legal Process,’ 2020,
TheSciTechLawyer, 16(4), pp. 14-23.
34 Michigan State University, MSU, ‘Facebook develop research model to fight deepfakes’, 2021,
accessed on 10 March 2022, https://msutoday.msu.edu/news/2021/deepfake-detection.
35 McAfee, ‘The Deepfakes Lab: Detecting & Defending Against Deepfakes with Advanced AI’,
2020, accessed on 10 March 2022, https://www.mcafee.com/blogs/enterprise/security-
operations/the-deepfakes-lab-detecting-defending-against-deepfakes-with-advanced-ai.
36 AIM, ‘Top AI-Based Tools & Techniques For Deepfake Detection’, 2020, accessed on 24
September 2021, https://analyticsindiamag.com/top-ai-based-tools-techniques-for-
deepfake-detection.
37 U. A. Ciftci, I. Demir and L. Yin, “FakeCatcher: Detection of Synthetic Portrait Videos using
Biological Signals,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, doi:
10.1109/TPAMI.2020.3009287.
Phoneme-viseme mismatches
For some words, the dynamics of the mouth (the visemes) in a deepfake are inconsistent with the pronunciation of the corresponding phonemes. Deepfake models may not combine visemes and phonemes correctly in these cases.38
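A toy version of the phoneme-viseme check can be written down directly (the frame data is hypothetical): bilabial phonemes such as ‘p’, ‘b’ and ‘m’ require the lips to close, so frames where the audio says ‘m’ while the mouth stays open count as mismatches.

```python
# Toy phoneme-viseme consistency check (illustrative frame data): bilabial
# phonemes require closed lips, so an open mouth on those frames is suspect.
BILABIAL = {"p", "b", "m"}

def mismatches(phonemes, mouth_open):
    """Count frames where a bilabial phoneme co-occurs with an open mouth."""
    return sum(
        1 for ph, is_open in zip(phonemes, mouth_open)
        if ph in BILABIAL and is_open
    )

phonemes   = ["m", "a", "m", "a"]       # audio track, one phoneme per frame
real_video = [False, True, False, True]  # lips close on every 'm'
fake_video = [True, True, True, True]    # lips never close

print(mismatches(phonemes, real_video), mismatches(phonemes, fake_video))  # 0 2
```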
Facial movements
This approach uses correlations between facial movements and head movements to extract an individual’s characteristic movement pattern, which can then be used to distinguish real content from manipulated or impersonated content.39
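The correlation idea can be sketched with toy numbers (all series below are hypothetical motion magnitudes): a person's face and head tend to move together with a characteristic Pearson correlation, and a series that is decoupled from the head motion hints at manipulation.

```python
# Toy facial/head-movement correlation check (illustrative motion values):
# compare per-frame motion series with the Pearson correlation coefficient.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

head = [0.1, 0.4, 0.2, 0.8, 0.5]   # per-frame head motion
face = [0.2, 0.5, 0.3, 0.9, 0.6]   # moves in step with the head
fake = [0.9, 0.1, 0.8, 0.2, 0.7]   # decoupled from the head

print(pearson(head, face) > 0.9, pearson(head, fake) < 0)  # True True
```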
Preventive measures
Organisations that rely on some kind of authorisation by face or voice biometrics should assess the authorisation process as a whole. Increasing the robustness of this process is currently considered a better course of action than solely implementing specific deepfake detection systems. Common checks are:
How are other actors responding to deepfakes?

In order to address the challenges posed by deepfake technology, it is important to look into how other actors, including the online platforms where most deepfakes can and might be shared, are addressing this threat. This is also influenced by the current legislative framework, which can impose mandatory or encourage voluntary measures. In this section, this report shows some examples of key online service providers and companies and their anti-deepfake measures. It then examines the EU regulatory framework in this area.
Technology companies
Early in 2020, Meta (formerly Facebook) announced a new policy
banning deepfakes from their platforms.42 Meta said it would
remove AI-edited content that would likely mislead people, but made
it clear that satire or parodies using the same technology would still
be permissible on the platforms. In order for law enforcement to
assess and address the impact of deepfakes on its work, it needs to
be aware of the policies technology companies have put in place, as
it is likely that potential evidence or malicious content will be shared
via these platforms. How technology companies such as Twitter
and Meta regulate deepfake technology will have an extensive
impact on how people will engage with and react to deepfakes.
and cause significant harm to the subject of the video, other
persons, or society.” 44
European Union
Regarding legal trends, participants of the foresight activities
noted that at both the national and regional level, European law is
struggling to keep pace with the evolution of technology and
the changing definitions of crime. Participants flagged the need
to establish new regulatory frameworks. These should be sensitive
to contemporary law enforcement challenges (particularly in
the digital realm), as well as to changing ethical norms. Some
participants anticipated greater regulation of the digital sphere
in the coming decade.
Conclusion

As this report shows, in order to effectively address the threats posed by deepfake technology, legislation and regulation need to take into account law enforcement needs. Within the regulatory framework, law enforcement, online service providers and other organisations need to develop their policies and invest in detection as well as prevention technology. Policymakers and law enforcement agencies need to evaluate their current policies and practices, and adapt them to be prepared for the new reality of deepfakes.
In the months and years ahead, it is highly likely that threat actors
will make increasing use of deepfake technology to facilitate various
criminal acts and conduct disinformation campaigns to influence or
distort public opinion. Advances in machine learning and artificial
intelligence will continue enhancing the capabilities of the software
used to create deepfakes. According to experts, GANs, availability
of public datasets and increased computing power will be the main
drivers of deepfake development in the future and make them more
difficult to distinguish from authentic content.
About the Europol Innovation Lab
Technology has a major impact on the nature of crime. Criminals quickly integrate
new technologies into their modus operandi, or build brand-new business models
around them. At the same time, emerging technologies create opportunities for
law enforcement to counter these new criminal threats. Thanks to technological
innovation, law enforcement authorities can now access an increased number
of suitable tools to fight crime. When exploring these new tools, respect for
fundamental rights must remain a key consideration.
In October 2019, the Ministers of the Justice and Home Affairs Council called
for the creation of an Innovation Lab within Europol, which would develop a
centralised capability for strategic foresight on disruptive technologies to inform
EU policing strategies.
Strategic foresight and scenario methods offer a way to understand and prepare
for the potential impact of new technologies on law enforcement. The Europol
Innovation Lab’s Observatory function monitors technological developments
that are relevant for law enforcement and reports on the risks, threats and
opportunities of these emerging technologies. To date, the Europol Innovation
Lab has organised three strategic foresight activities with EU Member State law
enforcement agencies and other experts.
www.europol.europa.eu