
computer law & security review 46 (2022) 105716

Available online at www.sciencedirect.com
journal homepage: www.elsevier.com/locate/CLSR

Deepfakes: regulatory challenges for the synthetic society

Bart van der Sloot∗, Yvette Wagensveld
Tilburg Institute for Law, Technology & Society, Tilburg University

Keywords: Deepfake; synthetic media; post-truth era; privacy; freedom of expression; rule of law; democracy; social equality; fake news; non-consensual fake porn

Abstract: With the rise of deepfakes and synthetic media, the question as to what is real and what is not will become increasingly important and politicized. Deepfakes can be used to spread fake news, influence elections, introduce highly realistic fake evidence in courts and make fake porn movies. Each of these applications potentially has a big impact on society, social relationships, democracy and the rule of law. The question this article shall assess is whether the current regulatory regime suffices to address these potential harms and, if not, which additional rules and principles should be adopted. It will discuss several potential amendments to the privacy and data protection regime, limitations to the freedom of expression and ex ante rules on the distribution and use of deepfake technologies.

© 2022 Bart van der Sloot and Yvette Wagensveld. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)

1. Introduction

A deepfake is content (video, audio or otherwise) that is wholly or partially fabricated, or existing content (video, audio or otherwise) that has been manipulated. There are currently dozens of apps on the market that citizens can download, for example, to portray themselves in a well-known Hollywood film, to put words in the mouths of politicians or friends or to generate their own version of the CNN news bulletin.1 Facial expressions can be manipulated per frame; pitch, timbre and language can be adjusted; identities of two or more people can be merged, for example by fusing the faces of two people or by giving a figure the face of a well-known person and the voice of another. Deepfake technology has only been around for a few years; its quality has improved dramatically since then.2

Using Artificial Neural Networks, which are based on 'real' or biological neural networks, systems can learn how to
∗ Corresponding author. E-mail address: b.vdrsloot@uvt.nl (B. van der Sloot).
1 <https://beebom.com/best-deepfake-apps-websites/>. Last accessed 14 March 2022.
2 Feng, J. et al. (2020). Generative adversarial networks based on collaborative learning and attention mechanism for hyperspectral image classification. Remote Sensing, 12(7), 1149. Goyal, S. (2019). GANs — A Brief Introduction to Generative Adversarial Networks. Medium. <https://medium.com/analytics-vidhya/gans-a-brief-introduction-to-generative-adversarial-networks-f06216c7200e>. Last accessed 14 March 2022. <https://meraju.com/fakelab-a-deepfake-audio-detection-tool/>. Last accessed 14 March 2022.

https://doi.org/10.1016/j.clsr.2022.105716
0267-3649/© 2022 Bart van der Sloot and Yvette Wagensveld. Published by Elsevier Ltd. This is an open access article under the CC BY
license (http://creativecommons.org/licenses/by/4.0/)

perform tasks by looking at examples.3 Several technologies can be used for this, but the most popular is based on what is known as Generative Adversarial Networks (GANs)4 and Variational AutoEncoders.5 GANs consist of two competing networks, a generator G(x) and a discriminator D(x). The two competing models are trained simultaneously to synthesise images. The algorithms reflect the content and produce realistic fake images.6 The goal is to assign random noise to samples and to distinguish real and generated samples.7 The two competing networks G and D play a hostile game in which the generator tries to mislead the discriminator by generating data that is comparable to that in the existing training set. The discriminator tries not to be fooled, by identifying fake data based on real data. The generator and the discriminator work simultaneously to learn to generate and recognise complex tasks and outputs.8

With this technique, by looking at, say, a thousand photographs of Donald Trump, a new photograph can be produced that is not an exact copy of any of them, making it appear to be a completely new photograph.9 This also applies to audio and video.10 Although the technique was initially complex, involved significant costs and required specialised equipment and programmes, it is now possible for anyone to create a deepfake within seconds.11 Deepfake videos can be manufactured from a single still image12 and a Japanese company, DataGrid, has even managed to create high-resolution whole-body images of non-existent people.13

Since the inception of the technology, automatic deepfake detection systems have been developed.14 Initially, detection was fairly simple, for example because deepfake technology did not make people's eyes blink.15 Although this glitch has since been remedied, deepfakes are still not perfect, partly because they often contain artefacts, i.e. traces that can reveal image or audio manipulation.16 At the same time, deepfake detection techniques are only able to detect 65% of deepfakes, and often are not able to indicate precisely what has been manipulated and how, let alone assess whether the manipulation was relevant or significant.17 Experts expect this figure to go down rather than up over time.

'The real problem to come will not be individual deep- or cheapfakes, but the ease with which a whole ecosystem of false information can be created. A fake video, but also fake Twitter accounts that link to the video, fake accounts on discussion forums that discuss the content of the video, fake websites that host the video and produce fake news reports on what is shown in the video, fake Instagram accounts that produce memes of the fake video, etc. It will be very hard to pierce through a multi-layered environment of deception.
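The adversarial game between generator and discriminator described above can be sketched in a few lines of code. The following is a deliberately minimal, hypothetical illustration (our own, not the method of any paper cited here): on a one-dimensional toy problem, a two-parameter "generator" learns to shift standard-normal noise toward a target distribution, while a logistic-regression "discriminator" is simultaneously trained to tell real from generated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = w_g*z + b_g maps standard-normal noise z to samples.
w_g, b_g = 1.0, 0.0
# Discriminator D(x) = sigmoid(w_d*x + b_d): probability that x is real.
w_d, b_d = 0.0, 0.0

lr, n = 0.01, 64
for _ in range(4000):
    # Discriminator step: maximise E[log D(real)] + E[log(1 - D(fake))].
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b_d += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: maximise E[log D(fake)] (the "non-saturating" loss).
    z = rng.normal(0.0, 1.0, n)
    d_fake = sigmoid(w_d * (w_g * z + b_g) + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)  # chain rule through G(z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

fake = w_g * rng.normal(0.0, 1.0, 10000) + b_g
# The mean of the generated samples should have drifted from 0 toward 4;
# matching the variance is less reliable, a known weakness of plain GANs.
print(round(float(np.mean(fake)), 2))
```

Real deepfake generators play exactly this game with deep convolutional networks over millions of pixels instead of two scalar parameters, but the training dynamic is the same.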
3 Maras, M. H., & Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos. The International Journal of Evidence & Proof, 23(3), 255-262. <https://www.allaboutlaw.co.uk/commercial-awareness/legal-spotlight/how-can-the-law-deal-with-deepfake->. Last accessed 14 March 2022.
4 Mann, A. (2019). Deepfake AI: Our Dystopian Present. Livescience. <https://www.livescience.com/deepfake-ai.html>. Last accessed 14 March 2022.
5 Durall, R., Keuper, M., Pfreundt, F. J., & Keuper, J. (2019). Unmasking deepfakes with simple features. arXiv preprint arXiv:1911.00686.
6 Goodfellow, I. et al. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661.
7 Brock, A., Donahue, J., & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, p. 2.
8 Goyal, S. (2019). GANs — A Brief Introduction to Generative Adversarial Networks. Medium. <https://medium.com/analytics-vidhya/gans-a-brief-introduction-to-generative-adversarial-networks-f06216c7200e>. Last accessed 14 March 2022.
9 McDonald, G. (2018). Seeing Isn't Believing: This New AI System Can Create "Deep Fake" Videos. Seeker. <https://www.seeker.com/artificial-intelligence/this-new-ai-system-can-create-convincing-deep-fake-videos>. Last accessed 14 March 2022.
10 Bansal, A., Ma, S., Ramanan, D., & Sheikh, Y. (2018). Recycle-GAN: Unsupervised video retargeting. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 119-135), introduction. <https://www.youtube.com/watch?v=ehD3C60i6lw>. Last accessed 14 March 2022.
11 Hurst, E. (2019). How can the law deal with Deepfake?. Allaboutlaw. <https://www.allaboutlaw.co.uk/commercial-awareness/legal-spotlight/how-can-the-law-deal-with-deepfake->. Last accessed 14 March 2022.
12 Zakharov, E., Shysheya, A., Burkov, E., & Lempitsky, V. (2019). Few-shot adversarial learning of realistic neural talking head models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9459-9468), par. 5. Chan, C., Ginosar, S., Zhou, T., & Efros, A. A. (2019). Everybody dance now. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 5933-5942).
13 Burt, C. (2019). DataGrid develops AI to generate whole-body images of nonexistent people. Biometricupdate.com. <https://www.biometricupdate.com/201905/datagrid-develops-ai-to-generate-whole-body-images-of-nonexistent-people>. Last accessed 14 March 2022.
14 Durall, R., Keuper, M., Pfreundt, F. J., & Keuper, J. (2019). Unmasking deepfakes with simple features. arXiv preprint arXiv:1911.00686.
15 Li, Y., Chang, M.-C., & Lyu, S. (2018). In ictu oculi: Exposing AI generated fake face videos by detecting eye blinking. arXiv preprint arXiv:1806.02877.
16 <https://www.forensischinstituut.nl/forensisch-onderzoek/prnu-compare-professional>. Last accessed 14 March 2022. Marra, F., Gragnaniello, D., Verdoliva, L., & Poggi, G. (2019, March). Do GANs leave artificial fingerprints?. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 506-511). IEEE, introduction. Cozzolino, D., & Verdoliva, L. (2018). Noiseprint: a CNN-based camera model fingerprint. arXiv preprint arXiv:1808.08396, par. 2. <https://www.albany.edu/news/92306.php>. Last accessed 14 March 2022. Brock, A., Donahue, J., & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Amerini, I., & Caldelli, R. (2020, June). Exploiting prediction error inconsistencies through LSTM-based classifiers to detect deepfake videos. In Proceedings of the 2020 ACM Workshop on Information Hiding and Multimedia Security (pp. 97-102). Korshunov, P., & Marcel, S. (2019). Vulnerability assessment and detection of deepfake videos. In The 12th IAPR International Conference on Biometrics (ICB) (pp. 1-6). Yang, X., Li, Y., & Lyu, S. (2019). Exposing deep fakes using inconsistent head poses. In ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8261-8265). IEEE. Durall, R., Keuper, M., Pfreundt, F. J., & Keuper, J. (2019). Unmasking deepfakes with simple features. arXiv preprint arXiv:1911.00686, par. 2. Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2019). Deep Learning for Deepfakes Creation and Detection: A Survey. arXiv preprint arXiv:1909.11573.
17 Canton Ferrer, C. et al. (2020). Deepfake Detection Challenge Results: An open initiative to advance AI. Ai.Facebook. <https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/>. Last accessed 14

Technology may be part of the solution. Digital signatures, public-private key encryption, authentication systems, etc. In addition, there is deepfake detection technology. However, this is most likely only part of the solution. Detection programs are unable to detect all deepfakes and it will be difficult to impossible to ensure that all online content is authenticated via a secure and trustworthy system.'18

Deepfake technology has a lot to offer. It can be used for satire, for example by putting Nick Cage in even more films than the ones he has already played in19 or creating an alternative Christmas speech for the head of state.20 Historical figures, such as Salvador Dalí,21 Mona Lisa22 or Napoleon, may be brought back to life in order to give a virtual tour around a museum or teach a history lesson in high school.23 Deceased family members can also be brought back to life.24 Deepfakes are also increasingly being used in the film industry, either by having deceased actors finish a movie25 or by having an actor's deepfake perform stunts.26 Deepfake influencers exist and get sponsorship contracts.27 Deepfake technology can be used when people speak in a different language over Zoom or Skype, in order for their words to be translated live and have their lips synchronised to match the translation.28

Deepfakes can also be used to safeguard the privacy of individuals.29 They can be used for medical applications,30 for forensic research31 and for creating a virtual fitting room, allowing customers to try on clothes based on data about their gender, height and weight.32 Charities use deepfakes, for ex-

17 (cont.) March 2022. Schroepfer, M. (2019). Creating a dataset and a challenge for deepfakes. Ai.Facebook. <https://ai.facebook.com/blog/deepfake-detection-challenge/>. Last accessed 14 March 2022. Burt, T. (2020). New Steps to Combat Disinformation. Microsoft. <https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/>. Last accessed 14 March 2022. Rossler, A. et al. (2019). Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1-11). Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., & Ferrer, C. C. (2020). The deepfake detection challenge dataset. arXiv preprint arXiv:2006.07397. SentinelOne (2019). What is a Hash? (And How Does It Work?). SentinelOne. <https://www.sentinelone.com/blog/what-is-hash-how-does-it-work/>. Last accessed 14 March 2022. Ozdemir, D. (2021). Teenager's AI project for detecting deepfake videos wins Award. Interesting Engineering. <https://interestingengineering.com/teenagers-ai-system-for-detecting-deepfake-videos-wins-award>. Last accessed 14 March 2022. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1).
18 Van der Sloot, B., Wagensveld, Y., & Koops, B-J., 'Deepfakes: de juridische uitdagingen van een synthetische samenleving', WODC 2022, p. 390.
19 <https://www.youtube.com/watch?v=BU9YAHigNx8>. Last accessed 14 March 2022.
20 <https://www.youtube.com/watch?v=IvY-Abd2FfM>. Last accessed 14 March 2022.
21 Lee, D. (2019). Deepfake Salvador Dalí takes selfies with museum visitors. The Verge. <https://www.theverge.com/2019/5/10/18540953/salvador-dali-lives-deepfake-museum>. Last accessed 14 March 2022.
22 <https://www.youtube.com/watch?v=P2uZF-5F1wI>. Last accessed 14 March 2022. BBC (2019). Mona Lisa 'brought to life' with deepfake AI. BBC News. <https://www.bbc.com/news/technology-48395521>. Last accessed 14 March 2022.
23 <https://www.cereproc.com/en/jfkunsilenced>. Last accessed 14 March 2022.
24 <https://blog.myheritage.com/2021/02/deep-nostalgia-goes-viral/>. Last accessed 14 March 2022. Berthelot, D., Milanfar, P., & Goodfellow, I. (2020). Creating high resolution images with a latent adversarial generator. arXiv preprint arXiv:2003.02365. Wan, Z. et al. (2020). Bringing old photos back to life. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2747-2757).
25 <https://www.hollywoodreporter.com/behind-screen/how-furious-7-brought-late-845763>. Last accessed 14 March 2022.
26 <https://www.imdb.com/title/tt1821641/>. Last accessed 14 March 2022.
27 <https://influencermatchmaker.co.uk/blog/virtual-influencers-what-are-they-how-do-they-work>. Last accessed 14 March 2022.
28 KR, P., et al. (2019, October). Towards automatic face-to-face translation. In Proceedings of the 27th ACM International Conference on Multimedia (pp. 1428-1436).
29 Zhu, B., Fang, H., Sui, Y., & Li, L. (2020, February). Deepfakes for Medical Video De-Identification: Privacy Protection and Diagnostic Information Preservation. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. <https://github.com/iperov/DeepFaceLab/issues/892>. Last accessed 14 March 2022. <https://mrdeepfakes.com/forums/thread-legacy-guide-deepfacelab-1-0-guide>. Last accessed 14 March 2022. <https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/>. Last accessed 14 March 2022. <https://www.storypick.com/mr-bean-and-trump-funny-deepfake-video/>. Last accessed 14 March 2022.
30 Snow, J. (2018). Deepfakes for good: Why researchers are using AI to fake health data. Fast Company. <https://www.fastcompany.com/90240746/deepfakes-for-good-why-researchers-are-using-ai-for-synthetic-health-data>. Last accessed 14 March 2022. Baur, C., Albarqouni, S., & Navab, N. (2018). Generating highly realistic images of skin lesions with GANs. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis. Springer, Cham. Frid-Adar, M., et al. (2018). GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing, 321, 321-331. <https://teamgleason.org>. Last accessed 14 March 2022. <https://www.logopedie.nl/kennis/dysartrie/>. Last accessed 14 March 2022. Creer, S. et al. (2013). Building personalised synthetic voices for individuals with severe speech impairment. Science Direct. <https://www.sciencedirect.com/science/article/abs/pii/S0885230812000836>. Last accessed 14 March 2022. <https://www.filmacademie.ahk.nl/lichting/2020/projecten/deepfake-therapy/>. Last accessed 14 March 2022. Zhao, Y. (2018). Enabling people with visual impairments to navigate virtual reality with a haptic and auditory cane simulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Eng, K. et al. (2007). Cognitive virtual-reality based stroke rehabilitation. In World Congress on Medical Physics and Biomedical Engineering 2006. Springer, Berlin, Heidelberg.
31 Schwartz, M. (2018). Who Killed the Kiev Protesters? A 3-D Model Holds the Clues. New York Times. <https://www.nytimes.com/2018/05/30/magazine/ukraine-protest-video.html>. Last accessed 14 March 2022. <https://www.terredeshommes.nl/nl/programmas/sweetie>. Last accessed 14 March 2022. Van der Hof, S., Georgieva, I., Schermer, B., & Koops, B. J. (Eds.). (2019). Sweetie 2.0: Using artificial intelligence to fight webcam child sex tourism. TMC Asser Press.
32 Baron, K. (2019). Digital Doubles: The Deepfake Tech Nourishing New Wave Retail. Forbes. <https://www.forbes.com/sites/katiebaron/2019/07/29/digital-doubles-the-deepfake-tech-nourishing-new-wave-retail/?sh=c4ce31f4cc7b>. Last ac-

ample, by having a celebrity seemingly call out in all the languages of the world to support their cause,33 or by giving people a general idea of what a destroyed city looks like, by giving famous cities a 'remake', in order to increase support for taking in refugees.34 As a final example, deepfake technology allows politicians to deliver messages in any language to reach out to domestic minorities or get their message across more effectively abroad.35

The examples, illustrations and sources provided throughout the text were compiled for a governmental study, on which this article is based.36 What becomes clear from the examples provided above is that deepfake technology is likely to grow in speed and quality, it will be democratised, and deepfake-detection technologies will only be able to detect a percentage of the deepfakes. Deepfake technology also holds several promises, such as its use for satire, for privacy enhancement, for gaming and the entertainment industry, for retail and for law enforcement agencies when infiltrating networks via a fake persona or fake child pornography, and for bringing back historical figures or deceased loved ones. At the same time, it is important to stress that although the examples provided above are real and will have a concrete and considerable impact on society, at the moment deepfake technology is by and large used for but one application: generating non-consensual porn.

This article will explain, however, that there are several risks attached as well (section 2) and will analyse whether the current regulatory regime is sufficiently equipped to tackle those problems. It will also discuss potential solutions for three areas of law: privacy and data protection law (section 3), freedom of expression (section 4) and ex ante forms of regulation (section 5).37 Finally, a conclusion is offered (section 6), in which it will be stressed that the main problem with respect to the regulation of deepfakes is not so much the absence of rules, but the lack of proper enforcement of the existing and potential additional rules.

There is no generally accepted definition of deepfake. Both from the literature reviewed and the interviews conducted for this study a disunified picture emerges. Some define the concept very narrowly, for example: mixing two existing images through Generative Adversarial Networks. Others opt for a broader definition, seeing a deepfake as any form of manipulation or fabrication of audio, video or other signals by means of Artificial Intelligence or Machine Learning. This article opts for a broad definition, both because this is in line with the popular understanding of deepfakes and because the technical means with which an image, video or audio file is manipulated is less relevant for the legal and policy context than the result. What matters is the extent to which the manufactured material appears real and/or is taken to be true.

It is best to take 'deepfake' as an ideal type. The most typical deepfake is a video that has been created with advanced technical means in which an existing person appears to be doing or saying something that she did not actually do or say, where it is hardly possible for either a natural person or an AI deepfake detection system to discover the manipulation. Around this archetypical deepfake, numerous peripheral applications could be placed. It can be videos that look real, but are not generated with high-tech means; it can be high-quality videos of non-existent people; it may concern a fake audio or text fragment or a manipulated satellite signal. It may involve minor manipulations, which are sometimes inherent in the technique, such as smoothing a person's skin or compression on audio messages; etc. Consequently, at least six factors could be relevant when determining the extent to which material should be qualified as a deepfake:

• The type of data carrier
• How advanced the technology used for the creation of the fake material is
• The degree of manipulation
• The extent to which the manipulation is material to the information being conveyed
• The question of whether the deepfake concerns an existing or non-existent person
• The extent to which the consumer takes the content to be true

The working definition of deepfakes used in this article is: Image, sound or other material that is wholly or partly fabricated, or existing image, sound or other material that has been manipulated with the help of advanced technical means and that is impossible or difficult to distinguish from the authentic material.

It should finally be underlined that deepfakes are the most advanced and realistic form of a more general move towards what is called synthetic media. Synthetic media concern content that is wholly or partially created or manipulated through the use of AI or other technologies. Synthetic media is the family of which deepfakes are a subbranch. Many of the problems described in this article with respect to deepfakes also apply to the substantially bigger amount of synthetic media. Although a fake video may not formally be a deepfake, has not been produced with highly advanced techniques and can be distinguished from authentic media when assessed carefully, most consumers do not scrutinise videos they see for authenticity in detail. Similarly, even though an AI detection

32 (cont.) cessed 14 March 2022. <https://www.charlietemple.com/nl-nl/virtuele-paskamer>. Last accessed 14 March 2022. Apparel Resources News-Desk (2020). Virtual fitting room market forecast to double by 2025: Report. Apparel Resources. <https://in.apparelresources.com/technology-news/retail-tech/virtual-fitting-room-market-forecast-double-2025-report/>. Last accessed 14 March 2022. <https://www.cereproc.com/en/jfkunsilenced>. Last accessed 14 March 2022.
33 Chandler, S. (2020). Why Deepfakes Are A Net Positive For Humanity. Forbes. <https://www.forbes.com/sites/simonchandler/2020/03/09/why-deepfakes-are-a-net-positive-for-humanity/?sh=61c55ef02f84>. Last accessed 14 March 2022. Also deepfakes can be used to incite anger: <https://www.youtube.com/watch?v=8o0iOm-2sLw>. Last accessed 14 March 2022.
34 <Deepempathy.mit.edu>. Last accessed 14 March 2022.
35 Christopher, N. (2020). We've Just Seen the First Use of Deepfakes in an Indian Election Campaign. Vice. <https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp>. Last accessed 14 March 2022.
36 Van der Sloot, B., Wagensveld, Y., & Koops, B-J., 'Deepfakes: de juridische uitdagingen van een synthetische samenleving', WODC 2022.
37 Other legal regimes, such as tort law and criminal law, will only be discussed in the margin.
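The six qualification factors listed above can be read as a scoring rubric. As a purely hypothetical illustration of how they might be operationalised (the field names and the "ideal type" check below are our own, not part of the article or of any legal instrument), consider:

```python
from dataclasses import dataclass

@dataclass
class DeepfakeAssessment:
    """One record per piece of material, mirroring the article's six factors."""
    data_carrier: str               # factor 1: e.g. "video", "audio", "text"
    advanced_technology: bool       # factor 2: created with advanced means?
    degree_of_manipulation: float   # factor 3: 0.0 (untouched) .. 1.0 (fully synthetic)
    manipulation_is_material: bool  # factor 4: does it change the information conveyed?
    concerns_existing_person: bool  # factor 5: existing vs non-existent person
    taken_to_be_true: bool          # factor 6: do consumers believe it is real?

    def matches_ideal_type(self) -> bool:
        """Rough check against the 'archetypical' deepfake described above:
        an advanced, material manipulation involving an existing person
        that viewers take to be authentic."""
        return (self.advanced_technology
                and self.manipulation_is_material
                and self.concerns_existing_person
                and self.taken_to_be_true)

face_swap = DeepfakeAssessment("video", True, 0.8, True, True, True)
skin_smoothing = DeepfakeAssessment("video", False, 0.1, False, True, True)
print(face_swap.matches_ideal_type(), skin_smoothing.matches_ideal_type())
# → True False
```

The point of the exercise is the article's own: qualification is a matter of degree along several axes, so minor inherent manipulations (the `skin_smoothing` case) fall outside the ideal type even though they involve real people and are taken to be true.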

technique is able to filter out much non-deepfake (cheapfake) material, the prediction that in a couple of years more than 90% of all digital material may concern synthetic media may mean that the post-truth era has in fact materialised. Deepfakes could and perhaps should be treated as the clearest example and most advanced form of synthetic media, as the canary in the coal mine.

2. Risk to personal and societal interests

The term deepfake was first used by citizens in 2017 on the platform Reddit, where faces of celebrities, such as Taylor Swift, were put on the bodies of porn actresses. Subsequently, the popularity of this practice exploded, with thousands of users sharing creations on the platform. As of February 2018, major platforms responded by restricting applications of deepfakes. Reddit banned the deepfakes subreddit, a forum dedicated to this particular topic on Reddit's website. 'Reddit does not allow content that impersonates individuals or entities in a misleading or deceptive manner. This not only includes using a Reddit account to impersonate someone, but also encompasses things such as domains that mimic others, as well as deepfakes or other manipulated content presented to mislead, or falsely attributed to an individual or entity. While we permit satire and parody, we will always take into account the context of any particular content.'38 This was followed by several websites, including gaming site Discord and Gfycat, because the posting of pornographic deepfakes was against the website's policy,39 a choice which was honoured by Pornhub, Twitter40 and Facebook.41 Still, research showed that 96 percent of all deepfake videos are currently pornographic in nature and made without mutual consent.42

A second problem may arise when a deceased person does not want to be brought back to life, either by relatives or by institutions.43 'What does it mean to resurrect the dead, and not only bring them back to life to communicate with them, but to speak through them? Digital communications are often meant to be short-term, real-time, and immediate, but through what I think of as "platform temporality," they can be collected and preserved, and potentially passed on to loved ones and future descendants. There is also some amount of anxiety attached to the capacity for data to live on past people's physiological selves; it is hard to control data during life and it is nearly impossible to do so after one's death. Living on through social media accounts is one thing, but actually using A.I. to replicate a person's personality in perpetuity is something different entirely, especially when it comes to deepfakes.'44 In addition, deepfakes can be used for commercial exploitation and may so undermine a person's exploitation rights.45

Deepfakes can also be used for financial gain, such as manipulating markets. After a fake message circulated on WhatsApp in 2019, stating that Metro Bank was out of liquidity, people flocked to Metro Bank to claim all their money and jewellery. This eventually led to the share falling by 9%. Criminals can use deepfakes to impersonate the CEO of a listed company, for example, by making statements that are harmful to the company, causing the share price to fall, or to have an employee transfer a large sum of money because she thinks her boss is ordering her to do so.46 Europol therefore considers deepfakes to be significant to 'perpetrating extortion and fraud, facilitating document fraud, falsifying online identities and fooling KYC [Know Your Customer] mechanisms, falsifying or manipulating electronic evidence for criminal justice investigations, disrupting financial markets' and, for example, the theft of trade secrets.47 Deepfakes may obviously also be used for these types of reasons on a smaller scale, for example to have a parent or friend transfer money quickly, having them believe that their child or friend is asking them to do so via Skype.

Deepfakes can and are applied in the political realm as well.48 Trump supposedly called on Belgium to withdraw from the Paris Agreement,49 Obama seemingly called Trump a "to-

38 <https://www.reddithelp.com/hc/en-us/articles/360043075032>. Last accessed 14 March 2022.
39 Fink, D., & Diamond, S. (2020). Deepfakes: 2020 and Beyond. Law.com. <https://www.law.com/therecorder/2020/09/03/deepfakes-2020-and-beyond/?slreturn=20210014101012>. Last accessed 14 March 2022.
40 Hern, A. (2018). 'Deepfake' face-swap porn videos banned by Pornhub and Twitter. The Guardian. <https://www.theguardian.com/technology/2018/feb/07/twitter-pornhub-ban-deepfake-ai-face-swap-porn-videos-celebrities-gfycat-reddit>. Last accessed 14 March 2022.
41 Bickert, M. (2020). Enforcing Against Manipulated Media. Facebook. <https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/>. Last accessed 14 March 2022.
42 Sensity (2019). The State of Deepfakes: Landscape, Threats, and Impact. Medium. <https://medium.com/sensity/mapping-the-deepfake-landscape-27cb809e98bc>. Last accessed 14 March 2022. Melville, K. (2019). The insidious rise of deepfake porn videos – and one woman who won't be silenced. ABC News. <https://www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774>. Last accessed 14 March 2022.
43 Harbinja, E. (2017). Post-mortem privacy 2.0: theory, law, and technology. International Review of Law, Computers & Technology, 31(1).
44 <https://slate.com/technology/2020/11/robert-kardashian-joaquin-oliver-deepfakes-death.html>. Last accessed 14 March 2022.
45 <https://www.youtube.com/watch?v=m7u-y9oqUSw>. Last accessed 14 March 2022. <https://www.theverge.com/2020/4/28/21240488/jay-z-deepfakes-roc-nation-youtube-removed-ai-copyright-impersonation>. Last accessed 14 March 2022.
46 Stupp, C. (2019). Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case. The Wall Street Journal. <https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402>. Last accessed 14 March 2022.
47 <https://www.europol.europa.eu/cms/sites/default/files/documents/malicious_uses_and_abuses_of_artificial_intelligence_europol.pdf>, p. 56. Last accessed 14 March 2022.
48 See however: Thomas, D. (2020). Deepfakes: a threat to democracy or just a bit of fun?. BBC News. <https://www.bbc.com/news/business-51204954>. Last accessed 14 March 2022.
49 Schwartz, O. (2018). You thought fake news was bad? Deep fakes are where truth goes to die. The Guardian. <https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth>. Last accessed 14 March 2022.
6 computer law & security review 46 (2022) 105716
tal and complete dipshit"50 and former Italian Prime Minister Matteo Renzi seemed to insult just about any colleague.51 'The deepfake video refers to Renzi's decision Sept. 17 to leave the Democratic Party and form his own party. In the parody, the supposed Renzi is seen talking when he thinks he is off air. He discusses the reaction of various politicians, including Prime Minister Giuseppe Conte; Luigi Di Maio, leader of the Five Star Movement; and Italy's president, Sergio Mattarella. The video is so outrageous that it is clearly a parody, but deepfake technology makes it look incredibly realistic. So when people started sharing it online, claiming that it was a real video, quite a few social media users fell for it and were outraged by what they saw as Renzi's bad behaviour.'52 Public unrest may arise when a video, showing a politician to be alive and kicking, is believed to be fake,53 and deepfakes may lead to tensions between countries.54

This also means that besides personal interests, societal interests may be at stake with the rise of deepfakes.

• First, deepfakes may be used for spreading misinformation. Deepfakes may accelerate the move to a post-truth era55 and increase segregation and stratification, because different deepfakes will circulate in different echo-chambers or filter bubbles on the Internet.56 Europol also fears that deepfakes will be used for 'distributing disinformation and manipulating public opinion, inciting acts of violence toward minority groups, supporting the narratives of extremist or even terrorist groups, and, stoking social unrest and political polarization'.57 Due to the expected considerable increase in the number of fake messages and deepfakes, the truth and authentic reporting may eventually become snowed under, certainly among groups that have difficulty with reporting by the so-called Mainstream Media. In a world where any citizen has access to deepfake technology and can disseminate a fake video, photo or audio file on the Internet within seconds, the question is how, in practical terms, the media can ensure that the news they report is accurate. Quality media that invest in procedures not only run the risk of being less profitable because of the costs concerned, but also of becoming 'outdated', because other media, with fewer due diligence requirements, will always be quicker to report sensational news.
• Second, the spread of fake news, the expected increase in echo chambers on the internet and the possibility of interfering in foreign elections through trolls and fake news may all have an impact on democratic elections. Importantly, Western democracies are now aware of this risk and some are adopting laws and non-legal measures to counter such activities. But another problem is that countries such as Russia, China and Iran would not so much target Western democracies, but the Global South. This could be done for all kinds of purposes, such as influencing concrete decision-making (a Russian state-owned company getting a contract instead of a British one), influencing elections there (for example to put a Xi-friendly regime in power) or influencing decisions at the international level (for example by getting them to vote in favour of lifting sanctions against Iran). Obviously, deepfakes will not only be used by foreign powers, but also by domestic political opponents.
• Third, deepfakes are and increasingly will be introduced in courts as fake evidence.58 This may have an impact on the rule of law. (1) Trials will last longer because parties can always claim that the evidence against them is fake and fabricated. (2) Deepfakes increase the risk that a court will falsely assume evidence to be true. (3) A convicted person can always publicly maintain her innocence, stating that the court took fake evidence to be real. (4) With respect to certain offences, the suggestion or prosecution alone may have a big impact on a person's life and professional career. Even if it becomes abundantly clear that a person was wrongfully prosecuted for sexual acts with children because the videos were fake, the harm may already have been done. Importantly, here as with other sensational news, even if a message is later debunked, groups may claim that the specific message was fake, but that the underlying truth (there is a worldwide network of pederasts) is true. In the case of fake news that is distributed to a wider public, but later debunked, the initial (often sensational) fake message will generate significantly more attention than the subsequent rectification. And, even if the story debunking the fake news reaches a person, she is often left with a 'wasn't there something with…' feeling.59
• Fourth and finally, there is already a practice, offline and certainly online, to share disrespectful statements towards

50 Fagan K. (2018). A viral video that appeared to show Obama calling Trump a 'dips—' shows a disturbing new trend called 'deepfakes'. Business Insider. <https://www.businessinsider.nl/obama-deepfake-video-insulting-trump-2018-4?international=true&r=US>. Last accessed 14 March 2022.
51 The Observers (2019). Deepfake video of former Italian PM Matteo Renzi sparks debate in Italy. The Observers. <https://observers.france24.com/en/20191008-deepfake-video-former-italian-pm-matteo-renzi-sparks-debate-italy>. Last accessed 14 March 2022.
52 <https://observers.france24.com/en/20191008-deepfake-video-former-italian-pm-matteo-renzi-sparks-debate-italy>. Last accessed 14 March 2022.
53 <https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/>. Last accessed 14 March 2022.
54 BBC (2020). Australia demands China apologise for posting 'repugnant' fake image. BBC News. <https://www.bbc.com/news/world-australia-55126569>. Last accessed 14 March 2022.
55 McIntyre, Lee. Post-truth. MIT Press, 2018. Higgins, K. (2016). Post-truth: a guide for the perplexed. Nature News, 540(7631), 9. Suiter, J. (2016). Post-truth politics. Political Insight, 7(3), 25-27.
56 Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408. Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. Penguin.
57 <https://www.europol.europa.eu/cms/sites/default/files/documents/malicious_uses_and_abuses_of_artificial_intelligence_europol.pdf>, p. 52. Last accessed 14 March 2022.
58 Swerling G. (2020). Doctored audio evidence used to damn father in custody battle. The Telegraph. <https://www.telegraph.co.uk/news/2020/01/31/deepfake-audio-used-custody-battle-lawyer-reveals-doctored-evidence/>. Last accessed 14 March 2022.
59 Pfefferkorn, R. (2020). "Deepfakes" in the courtroom. Boston University Public Interest Law Journal, 29(2).
women and young girls,60 with either a misogynistic or sexual undertone. Slut-shaming and revenge porn can have very serious consequences, especially for young adult girls but also for female politicians.61 Deepfakes will increase this problem, not only because of their quantity and quality, but also because deepfake technology allows a person to create a nude picture or porn video of someone based on a photo of her where she is fully dressed. Importantly, when a deepfake porn video is shared on WhatsApp, depicting a classmate from high school, even if her whole class knows the video to be fake, the social consequences may be no less. In addition, even if she herself knows the video to be false, her self-image may nevertheless be distorted.

3. Privacy and data protection law

In the EU, there is no law specifically addressing the regulation of deepfakes, although a proposal for an AI regulation is currently pending.62 Article 52 of that proposal lays down transparency obligations for certain AI systems. Paragraph 3 specifies that users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated. Recital 70 further explains that users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Given the significant risks to both personal and societal interests, the question is whether a transparency obligation is sufficient, even if it would be respected by users and platforms. In addition, the provision leaves open a number of questions, such as, but not limited to: Does this provision also apply to a deepfake smiley-faced sun? Does this provision apply to fully AI-generated deepfake persons? To whom should information be disclosed: the general public, the person depicted, the platform on which the deepfake is posted? How and in what form should the information be disclosed? Should the information merely state that it concerns a deepfake or specify which part of the video, picture or audio fragment has been manipulated and how?

In addition, the General Data Protection Regulation might apply to deepfakes. The fact that the information conveyed through a deepfake is incorrect does not bear on the GDPR's applicability, as long as a person can be recognised through the deepfake.63 Thus, the GDPR applies both to the data used to render the deepfake and, in most cases, to the deepfake itself. If a famous person appears to do or say things she had never done or said, such is still treated as personal data because the data concern that person. If a person writes on a blog 'Boris Johnson has brown eyes and a big beard', such is clearly untrue, but writing the blog will still be qualified as entailing the processing of personal data, because what matters for the applicability of the GDPR is whether data relate to an identifiable individual, not whether those data are correct. Mutatis mutandis, this means that whether video, audio or other material about a person is faked or not through GAN or similar technologies is not relevant; what is relevant is whether a person can be reasonably identified.

Consequently, the vast majority of deepfakes will involve the processing of personal data. Nevertheless, there are five borderline areas:

• First, deepfakes can be used to conduct live anonymous conversations, e.g. an anonymous witness, where a person assumes the voice and/or face of a non-existent person.
• Second, the GDPR does, in principle, not apply to deceased persons. Thus, when a porno is made of two long-gone figures or when a deepfake is created of a deceased grandparent holding a funeral speech, such falls outside the data protection framework.64
• Third, the GDPR does, in principle, not apply to deepfakes about organisations and states.65 Thus, when a deepfake paints a very bleak and deceptive picture of the state of Monaco or Shell, the GDPR does not apply.
• Fourth, deepfakes can be used to merge images or voices of two or more persons.66 Although the production of such deepfakes involves the use of personal data of several people and will be subject to the GDPR (provided that the other conditions have been met), it is unclear whether the final result also is. In the case of two faces being merged into one, this may be the case when both are still clearly recognisable. The more data from the more people are used, the more unlikely it is that the result will be a processing of personal data of one or more of them. However, even then it may involve a recognisable element of one of them, for example someone with a large birthmark on the nose.
• Fifth, through the use of deepfake technology, an entirely new but fictitious person can be created, for example through the website 'This Person Does not Exist'. The GDPR,

60 <https://www.groene.nl/artikel/misogynie-als-politiek-wapen>. Last accessed 14 March 2022.
61 <https://www.youtube.com/watch?v=B5eMz4JpYu0>. Last accessed 14 March 2022. <https://www.youtube.com/watch?v=4b79yBzyRHs>. Last accessed 14 March 2022.
62 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts {SEC(2021) 167 final} - {SWD(2021) 84 final} - {SWD(2021) 85 final}.
63 <https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf>. Last accessed 14 March 2022.
64 Recital 27 GDPR.
65 Recital 14 GDPR.
66 Albahar, M., & Almalki, J. (2019). Deepfakes: Threats and countermeasures systematic review. Journal of Theoretical and Applied Information Technology, 97(22), 3242-3250.
in principle, does not apply to data processing about fictitious persons.67

When the Regulation does apply, it sets strict limits on the creation and sharing of deepfakes. For example, because information about a person's sexual life and political activities is qualified as sensitive data, deepfakes seemingly containing data about such aspects will be prohibited, unless she explicitly consents,68 which will generally not be the case with deepfakes. When non-sensitive information is processed, e.g. a funny deepfake of a friend or a public figure, such will be deemed legitimate only when she consents or when the interest served by producing and sharing the deepfake is higher than the interest of the person concerned not to.69 Though satire is deemed a legitimate interest, when the data subject's interests are significantly affected, it is clear that no legitimate ground for processing the personal data will exist. In addition, the GDPR contains a number of restrictive principles. Many deepfakes of public figures are based on data found on the internet which were collected and processed for different purposes, which may conflict with the purpose limitation principle.70 And there is the obligation to inform the data subject of the fact that a deepfake about her has been produced and shared.71

Special mention should be made of the obligation to only process correct personal data; the question is what this means for purposeful deceit through deepfakes.72 The GDPR emphasises the processing of data that is correct and complete. Thus, the data subject has the right to 'obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her. Taking into account the purposes of the processing, the data subject shall have the right to have incomplete personal data completed, including by means of providing a supplementary statement.'73 If data are incorrect, the data subject may ask for them to be adjusted. This could concern her age, for example, or an address that has been incorrectly entered into the system or has become outdated. The right to rectify incorrect data should in principle always be honoured, unless the data subject cannot be identified, or the data subject asks the controller to correct a small or meaningless detail each and every week.

Because deepfakes are by definition fake, it seems plausible that data subjects always have the right to correct the material, if the freedom of expression (including satire) does not prevail (see section 4). The question is who ultimately has the last word on what is correct and what is not. In some cases data subjects may have an interest in providing incorrect data, or there is debate on what is correct and what is not. The GDPR is silent on where the burden of proof lies for proving that personal data is correct or incorrect, and what standard of proof should be used. Ultimately, in matters of conflict between the data subject and the controller, the DPA or a judge may need to resolve the matter.

If the controller honours a request to rectify or supplement data, an additional obligation applies: 'The controller shall communicate any rectification [] to each recipient to whom the personal data have been disclosed, unless this proves impossible or involves disproportionate effort. The controller shall inform the data subject about those recipients if the data subject requests it.'74 Consequently, if a data controller receives a request to rectify a deepfake, i.e. to remove the fake element, she will have to comply in almost all cases, and if she has distributed the deepfake, there are two scenarios. If she knows to whom she has forwarded the deepfake or can easily find out who has a copy, she should reach out, informing them that they too must rectify the deepfake. This also applies to situations in which she has made the deepfake public, but can easily find out who has made a copy or downloaded the material. Only when it is not reasonably possible to find out who possesses the deepfake is she exempted from this obligation. What is reasonable, in this sense, is contextual. An exception to the obligation to inform will be accepted quicker with relatively innocent deepfakes than with fake messages that contain pornographic content, for example; what matters is the 'proportionality' of the effort. Proportionality is related, among other things, to the damage that the data subject experiences from the continued existence of the processing of incorrect data.

Perhaps more importantly, a data controller does not have a duty merely to rectify data at the request of the data subject, but has an independent duty to ensure the quality of the data it processes. Personal data must be 'accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay.'75 A strict reading of this duty would imply that all deepfakes are prohibited by definition, since the controller knows that it is processing incorrect data. The data quality principle does not refer to the correctness or suitability of the data in relation to the purpose (the creation of a deepfake), but to the accuracy of the data themselves. Also, the GDPR does not mention a possibility for the data subject to exempt the controller from this duty, which may partly have to do with the fact that data protection law does not only serve the interests of the data subject, but also general interests that are related, inter alia, to processing data only when they are correct and accurate.

67 <www.thispersondoesnotexist.com>. Last accessed 14 March 2022.
68 Article 9 GDPR.
69 Article 6 GDPR.
70 Article 5 para 1 sub b GDPR.
71 Articles 13 and 14 GDPR.
72 Article 5 para 1 sub d GDPR.
73 Article 16 GDPR.
74 Article 19 GDPR.
75 Article 5 para 1 sub d GDPR.
This means that many of the deepfakes that are currently produced violate the GDPR and should be deemed unlawful, provided that the GDPR applies and the data controller cannot successfully invoke the freedom of speech (section 4). Still, there are two issues that may become problematic with respect to deepfakes: the scope of personal data and the household exemption. Especially with respect to two legal lacunas, namely deepfakes about deceased and virtual persons, additional regulation might be considered.

Post-mortem privacy has been discussed for decades,76 but deepfakes take this discussion to a new dimension, both morally and commercially. The legal regime in most countries does offer protection to the rights/interests of the dead, for instance, by regulating in detail what can and cannot be done with a body. Such rules concern the physical body, but there are no rules on the virtual body or the realistic reproduction of a person's psyche. There are numerous applications that could be the subject of legal and political debate. For example, to what extent is it desirable and permissible to have long-gone historical figures teach in schools? To what extent is it desirable and permissible to have deceased artists give a tour of a museum? To what extent is it desirable and permissible to have deceased actors feature in films? To what extent is it desirable and permissible to have a deceased person star in a porno film? To what extent is it desirable and permissible to have deceased artists give concerts? To what extent can family members communicate with a deceased loved one even against the will of that person; what does such communication do to trauma processing (is it actually helpful or detrimental to trauma processing)? Answering these questions might require the introduction of a post-mortem privacy regime or an extension of the substance and scope of application of personality rights.

Similar questions could be asked with respect to the creation of virtual persons. There are very few legal limitations to creating such persons and having them perform certain activities. But the creation of non-existent persons through AI raises numerous ethical and legal dilemmas. The police may infiltrate a criminal network using a fake person, pederasts can be traced using fake child pornography and traffickers in women can be identified using fake customer profiles. For these applications as well as for others, more clarity is needed as to what is or is not permitted. In addition, the question is to what extent virtual but highly realistic persons can be used by the entertainment industry, by the porn industry or for medical applications. For example, there are therapies for the treatment of paedophiles through the display of fake child pornography, but is that desirable? What are the moral boundaries to producing fake personalities and the activities that they can perform? On both points, the EU could consider adopting new regulation.

A second important point for legislative reconsideration is the household exemption. The GDPR does not apply to processing of personal data 'by a natural person in the course of a purely personal or household activity'.77 'This Regulation does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity. Personal or household activities could include correspondence and the holding of addresses, or social networking and online activity undertaken within the context of such activities. However, this Regulation applies to controllers or processors which provide the means for processing personal data for such personal or household activities.'78 The European Court of Justice,79 as well as the Working Party 29,80 has interpreted the exemption in a narrow manner, meaning that it is only said to apply to a limited number of cases.

When the GDPR was under discussion, the Working Party argued for a revised version of the household exemption. 'WP 29 urges the legislature to use the process of introducing new data protection law as an opportunity to reduce as far as possible the legal uncertainty that currently surrounds various aspects of individuals' personal or household use of the internet. Access to the internet and more functional ICT has brought many positive new possibilities to individuals – for example instant access to knowledge, services and the possibility of contact with other people worldwide. However, data protection authorities are also experiencing an increasing number of complaints emanating from individuals' personal use of the internet. A typical complaint might be that a pupil has used a social networking site to say post a derogatory, inaccurate or hurtful message about a teacher. Currently some data protection authorities would reject any complaints about the pupil on the grounds that the processing of personal data involved would fall within the personal or household processing exemption. Some data protection authorities also take the view that other elements of the law – for example those relating to libel or harassment – are more appropriate instruments for dealing with issues such as 'cyber-bullying'. It is the case though that some DPAs [Data Protection Authorities] do – increasingly – take on the role of mediating individuals' internet postings.'81

The exemption, which dates back to the 1995 EU Data Protection Directive, was left virtually unchanged in the GDPR.

76 Edwards, L., & Harbinja, E. (2013). Protecting post-mortem privacy: Reconsidering the privacy interests of the deceased in a digital world. Cardozo Arts & Ent. LJ, 32, 83. Harbinja, E. (2017). Post-mortem privacy 2.0: theory, law, and technology. International Review of Law, Computers & Technology, 31(1), 26-42. Buitelaar, J. C. (2017). Post-mortem privacy and informational self-determination. Ethics and Information Technology, 19(2), 129-142. Lopez, A. B. (2016). Posthumous privacy, decedent intent, and post-mortem access to digital assets. Geo. Mason L. Rev., 24, 183.
77 Article 2 para 2 sub c GDPR.
78 Recital 18 GDPR.
79 EUCJ 6 November 2003, C-101/01, ECLI:EU:C:2003:596 (Bodil Lindqvist). EUCJ 11 December 2014, C-212/13, ECLI:EU:C:2014:2428 (František Ryneš v Úřad pro ochranu osobních údajů). EUCJ Sergejs Buivids, Case C-345/17, 14 February 2019.
80 Article 29 Data Protection Working Party, Opinion 5/2009 on online social networking, 01189/09/EN WP 163, 12 June 2009, Brussels, Ec.europa.eu 9 April 2020.
81 Annex 2 Proposals for Amendments regarding exemption for personal or household activities. The situation under Directive 95/46/EC, p. 3. <https://ec.europa.eu/justice/article-29/documentation/other-document/files/2013/20130227_statement_dp_annex2_en.pdf>.
But, as the Working Party 29 suggests, there are good reasons to reconsider that exemption. When the 1995 Directive was adopted, the primary reason given for the need of a household exemption was keeping an address book. Such does concern processing personal data of third parties, but only involves their name, address, and telephone number. Keeping such data is socially accepted and usually desired by the third parties in question. Currently, however, citizens have access to a wealth of information and can use various advanced data processing technologies. This means that the type of data that can be processed about others with the use of a private computer is incomparable to those thought of when drafting the 1995 Directive. In addition, the household exemption made sense in a world in which the private sphere was more or less closed off from the public sphere. In the current data-driven environment, the boundary between the two spheres has become increasingly blurry. A click of a button is enough to disseminate thousands of photos or videos stored on a private computer online through social media or digital platforms.

The household exemption raises the following problem. Suppose an ex-partner stores private photographs of his ex-girlfriend on his computer, with which he then produces a deepfake video in which she performs all kinds of perverse sexual acts. He tells his friends about it, who also communicate this to her. This is just one of the many possible examples of deepfake applications that cannot be addressed under the GDPR. The production of compromising material and the possession of it is not covered by the GDPR. Once the material is on the internet or distributed to large groups of friends it is, but by then it is too late. The damage has already been done; compromising videos can attract thousands or millions (in the case of celebrities) of viewers within a few hours. It may often be impossible to take that video down permanently, because of the ease with which a copy of the video can be produced. Consequently, it could be considered to limit the household exemption.

It might be argued that this legal gap could also be closed with reference to tort law. Most countries have regimes in place that allow citizens to go to court when they are harmed by others and some have specific tort grounds that could directly apply to the type of cases discussed. An example might be the American 'intrusion upon seclusion' tort.82 This is certainly true, but at least three obstacles exist in this respect. First, these types of regimes generally require harm to be inflicted on the claimant. Though with some of the more extreme (pornographic) deepfakes, such harm will be obvious, for others such will be less evident (see for more on this point section 4). The question is precisely whether the generation of a deepfake porno, for example, without the person distributing that fake, does harm, and how such harm is distinguishable from person A privately fantasising about person B or making sexual drawings of B in the privacy of her home. Second, citizens are often unequipped to engage in long and complicated legal battles and, even if they win, the compensation granted, at least under European legal regimes, is often negligible (see for more on this point section 5). Third, obviously, in order to start a tort claim, person B has to be aware of the fact that A made a deepfake about her, while this will often not be the case.

4. Freedom of expression

Within the framework of the European Convention on Human Rights, deepfakes should be discussed in the interplay between Article 8 ECHR, the right to privacy, and Article 10 ECHR, the right to freedom of expression. The European Court of Human Rights has ruled that the right to privacy also includes the right to the protection of one's honour and reputation.83 On the other hand, the Court has ruled that the freedom of expression must be understood very broadly and also includes the right to shock, offend and disturb.84 This includes sharing personal opinions and characterisations85 and satire.86 Freedom of expression may also cover publishing a fictitious interview.87 Thus, in the case of deepfakes of a potentially unlawful nature, two parties will often be able to invoke two different human rights. Because the Court sets few general rules and assesses each individual case on its own merits, taking account of the circumstances of the case, it is not possible to say in general terms which one of these rights shall prevail when deepfakes are published.88 Therefore, the framework of the freedom of expression is in itself flexible enough to address problematic deepfakes. Nevertheless, three issues deserve attention.

• First, under the European Convention on Human Rights, public figures can invoke their right to privacy in order to protect their name, honour and reputation, even when they actively seek the limelight. Yet the European Court of Human Rights has also ruled that public figures must tolerate greater intrusion into their private lives than ordinary citizens and must accept that they will be mocked and ridiculed. This principle is widely supported and there is no reason to alter it, yet with deepfakes, it might be considered to set out which types of deepfakes in general will be deemed legitimate and which ones would not. This provides legal certainty to both citizens that want to ridicule

82 Tutaj, A. J. (1998). Intrusion Upon Seclusion: Bringing an Otherwise Valid Cause of Action into the 21st Century. Marq. L. Rev., 82, 665. Meltz, E. A. (2014). No Harm, No Foul: Attempted Invasion of Privacy and the Tort of Intrusion upon Seclusion. Fordham L. Rev., 83, 3431. Zhu, B. (2014). A traditional tort for a modern threat: applying intrusion upon seclusion to dataveillance observations. NYUL Rev., 89, 2381. Palyan, T. (2008). Common Law Privacy in a Not So Common World: Prospects for the Tort of Intrusion Upon Seclusion in Virtual Worlds. Sw. L. Rev., 38, 167.
83 ECtHR, Pfeifer v. Austria, appl.no. 12556/03, 15 November 2007. ECtHR, A. v. Norway, appl.no. 28070/06, 09 April 2009. ECtHR, Bogomolova v. Russia, appl.no. 13812/09, 20 June 2017.
84 ECtHR, Handyside v. the UK, appl.no. 5493/72, 07 December 1976.
85 ECtHR, Dink v. Turkey, appl.nos. 2668/07, 6102/08, 30079/08, 7072/09 and 7124/09, 14 September 2010.
86 ECtHR, Leroy v. France, appl.no. 36109/03, 02 October 2008.
87 ECtHR, Nikowitz and Verlagsgruppe News GMBH v. Austria, appl.no. 5266/03, 22 February 2007.
88 Van der Sloot, B., 'Expectations of Privacy: The Three Tests Deployed by the European Court of Human Rights', In: Hallinan, D., Leenes, R. & De Hert, P., 'Data Protection and Privacy: Enforcing Rights in a Changing World', Hart, Oxford, 2022.
politicians and to public figures that want to know whether they stand a chance when going to court. In addition, it should be borne in mind that although satire and ridicule of public figures have been around for ages, the fact that deepfakes can be highly realistic might have an impact beyond what was common before. It is already known that many women avoid public office because they do not want to be subject to misogynist and sexual statements on the internet; the rise of deepfakes might aggravate this, which might have the effect that qualified women relinquish opportunities to their male peers.
• Second, a fear is that deepfakes will take the post-truth era to the next level, as it will be so easy for any citizen around the world to create and disseminate a highly realistic deepfake within a minute or two. An untrue, inaccurate or misleading statement can be addressed under the current legal regime, but only if damage has been caused, for example to personal interests (under tort law) or to certain social interests (under criminal law). Deepfakes challenge this regime on at least three points. First, it can be difficult to substantiate the causal relationship between an untrue, false, or misleading statement and the (foreseeable) harm it causes (e.g. the hatred a deepfake has incited against minority groups). Second, untrue, inaccurate, or misleading expressions can be problematic per se because they blur the line between fact and fiction, even if they do no concrete harm. Even if, say, a million deepfakes spreading through social networks do no harm to personal or societal interests, they certainly do deepen the post-truth era, which may be deemed problematic in and by itself. Third, there are untrue, inaccurate, or misleading expressions that do cause harm, but that are very difficult to link to specific legal provisions. For example, fake satellite images may be produced in which Russia appears to move its nuclear missiles near the Latvian border. Or fake news may be distributed on Covid-19, leading to a decline in the number of people that want to get vaccinated. Or a political leader may distribute a video, making it look like there are thousands of supporters at her rallies, while there were only a handful. These developments may force the European legislator to choose between Scylla and Charybdis: between staying clear of these complicated issues, which may mean that the problem of misinformation will grow, and adopting regulation on untrue, false or misleading statements.
• Third, several states of the U.S.A. have passed laws on the dissemination of misinformation during elections.89 Texas was the first State to pass legislation on this point. It was accepted that this 'technology likely cannot be constitutionally banned altogether, but it can be narrowly limited to avoid what may be its greatest potential threat: the electoral process.'90 That is why a new criminal offence was created for creating and publishing a deepfake video within 30 days of an election, where that person has the intention of injuring a candidate or of influencing the result of an election. California allows politicians to file action against a person who has created and disseminated a deepfake within 60 days of an election with intent to injure the candidate's reputation, or to deceive a voter into voting against the candidate.91 Because European countries are just as vulnerable to foreign interference in elections through the use of deepfakes, either national parliaments or the EU should consider similar rules.

5. Ex ante regulation

Although specific additional provisions should be considered in light of deepfakes, the main challenge lies in the oversight and enforcement of the prevailing rules. Experts predict that in four or five years' time, more than 90% of all online content will be manipulated in whole or in part.92 Such manipulation often concerns relatively minor interventions: video call services that equalise a person's skin tones, audio that loses some of its higher sound registers through compression, photo cameras that filter out red tones when burning forests are captured, because they 'know' that forests are green. Yet even these smaller manipulations can be of great importance, for example in the identification of a suspect or in an online medical consultation with a dermatologist. In addition, there are deepfakes that create a truly false picture of the world, where the perpetrator has malignant intent.

Deepfake technology is expected to get cheaper, faster and better over time. This will make it even more difficult for automated deepfake-detection programmes to filter out deepfakes. Currently, such systems have a hit score of about 65%; experts expect this number to go down rather than up, not only because the deepfakes themselves get increasingly more realistic, but also because of the ease with which whole fake media environments can be generated. Not only can a deepfake video be created, but also fake Twitter accounts that link to the video, fake accounts on discussion forums that discuss the content of the video, fake websites that host the video and produce fake news reports on what is shown in the video, fake Instagram accounts that produce memes of the fake video, etc. This makes it more difficult for both humans and AI to detect deepfakes through contextual information.

Importantly, detection technologies will often only give an 'authenticity percentage': e.g. the chance that this video is authentic/not manipulated is 73%. This may create additional problems, such as, but not limited to: at what 'truth percentage' can a newspaper or judge accept the reliability of the source? Should there be different thresholds for public broadcasters than for private ones, and should there be different standards for criminal law and civil law courts? Can internet providers automatically block content that is presumably fake and, if so, at what percentage, or should personnel manually check content, and if so at what percentage? It should also be kept in mind that these detection programmes are not always able to indicate what has been manipulated and how, nor assess whether that manipulation is relevant to the context in which it is shared. It may well be, for example, that a photo shared online of a politician clearly drunk has been manipulated, but it matters whether the fact that she appears clearly drunk is due to manipulation or whether the manipulation concerns a detail in the picture. It will be difficult for anyone to assess what has been manipulated, how and why, when the authentic photo, video, audio or other signal is absent.

Finally, it is clear that deepfake technology will be democratised over the years. Currently, advanced deepfake technology is mostly in the hands of commercial parties, such as the entertainment industry, the retail sector and the porn business, and of some state actors, for example to cause confusion in other countries. However, already, consumers can download deepfake apps and tools that are freely available. Given the ease with which a deepfake can be produced, experts expect that in time, a deepfake app will be a standard application on anyone's phone, giving millions and millions of people the opportunity to produce and spread false information with the click of a button.

Although most problematic deepfakes will be prohibited, either through the GDPR or through national criminal and tort law, the question is how these rules can actually be enforced. Citizens have rights through which they can block or remove unlawful content, but in practice, it is hard to get platforms to remove content, let alone prevent copies from being shared on other parts of the internet.93 Citizens often do not even know that a malicious deepfake of them exists and is available, for example on a dodgy porn website, and if they know, legal proceedings often cost money and time they don't have, and may even attract more attention to the content they want removed.94 The same holds true for law enforcement authorities, such as the Public Prosecutor and the Data Protection Authority, who are already overburdened and often focus on data processing operations by governmental organisations and commercial parties, and not on data processing by citizens.95 It is also questionable to what extent it is desirable for such an authority to assess the veracity of content produced by citizens, as it would significantly extend their power and require them to determine what is true and what is not.

In addition, both for these authorities and for internet intermediaries, there are a number of complex legal questions with deepfakes, such as, but not limited to: Are personal data processed? Is there a legitimate basis for processing of the data? Does an exception to the privacy and data protection regime apply in the context of freedom of expression? How should conflicting interests of citizens be assessed and are there general, societal interests at stake with the free flow of information? Which legal regime applies to the publication of a deepfake when multiple jurisdictions may play a role (e.g. data controller in the EU, internet site based in the U.S.A. and the data subject based in China) and how should such a party deal with conflicting rules and requirements?

It should be kept in mind that under the current regulatory regime for internet intermediary liability under the e-Commerce Directive,96 there is no filtering obligation and there is even a prohibition for countries to apply monitoring duties to these intermediaries. This is confirmed by the proposal for the Digital Services Act,97 which will replace the e-Commerce regime in time. Though there are additional rules, standards and requirements for hosting providers, the DSA is still primarily based on a notice and action (formerly known as notice and take down) regime. In addition, it reaffirms the prohibition on monitoring obligations: 'No general obligation to monitor the information which providers of intermediary services transmit or store, nor actively to seek facts or circumstances indicating illegal activity shall be imposed on those providers.'98 This means that although some internet platforms do scan for deepfakes, states cannot oblige them to. In addition, providers only have to take action when there are signs that content is unlawful, which again places a large burden on citizens or trusted flaggers.99

This means that it cannot be excluded that indeed the prediction that in a few years' time more than 90% of all online content will be wholly or partly manipulated may well come true. Given not only the significant personal interests at stake, but the risks for democracy, the rule of law, the well-functioning of the press and an inclusive society (section 2), such may be found undesirable. But in order to tackle this issue, a core approach to the regulation of technology should be reconsidered. In general, the choice is for ex post regulation in a dual way. Technology, in general, is not prohibited or banned from the market, and the use of technology is generally not assessed before or when it is used, but after. This means that there are very few hurdles before producing a deepfake and sharing it online; only then may the deepfake be assessed on legitimacy and legal action be taken. This means not only that the

89 See also: <https://www.congress.gov/bill/117th-congress/house-bill/1/actions>.
90 <https://capitol.texas.gov/tlodocs/86R/analysis/html/SB00751F.htm>.
91 California Elections Code – Division 20 Election Campaigns – Chapter 1 Endorsement of Candidates - § 20010.
92 Schick, N. (2020). Deep Fakes and the Infocalypse: What You Urgently Need To Know. Hachette UK.
93 Markou, C. (2015). The 'Right to Be Forgotten': Ten reasons why it should be forgotten. In Reforming European Data Protection Law (pp. 203-226). Springer, Dordrecht. Urban, J. M., Schofield, B. L., & Karaganis, J. (2017). Takedown in Two Worlds: An Empirical Analysis. J. Copyright Soc'y USA, 64, 483. Jhaver, S., Bruckman, A., & Gilbert, E. (2019). Does transparency in moderation really matter? User behavior after content removal explanations on reddit. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-27.
94 Van der Sloot, B. & Van Schendel, S., 'Procedural law for the data-driven society', Information & Communications Technology Law, 2021. Schwartz, P., 'The Computer in German and American Constitutional Law: Towards an American right of informational self-determination', The American Journal of Comparative Law, 37(4), 1989, 675-701. Cate, F. H. & Mayer-Schönberger, V., 'Notice and consent in a world of Big Data', International Data Privacy Law, 3(2), 2013, 67-73. Barocas, S. & Nissenbaum, H., 'On notice: The trouble with notice and consent', in Proceedings of the engaging data forum: The first international forum on the application and management of personal electronic information, 2009. Reidenberg, J. R., Russell, N. C., Callen, A. J., Qasir, S., & Norton, T. B. (2015). Privacy harms and the effectiveness of the notice and choice framework. ISJLP, 11, 485.
95 Graham, G., & Hurst, A. (2019). GDPR enforcement: How are EU regulators flexing their muscles? IQ: The RIM Quarterly, 35(3), 20-24. Koops, B. J. (2014). The trouble with European data protection law. International Data Privacy Law, 4(4), 250-261.
96 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.
97 Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM/2020/825 final.
98 Article 7 DSA.
99 Article 19 DSA.
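The open questions about context-dependent 'truth percentages' raised above can be made concrete with a short sketch. The threshold values and the three-way triage policy below are purely hypothetical assumptions chosen for illustration; no such figures exist in legislation or case law.

```python
# Hypothetical sketch of context-dependent authenticity thresholds.
# All threshold values are illustrative assumptions, not legal standards.

# Minimum authenticity score (0.0-1.0) a detector must report before
# content is accepted in a given context.
THRESHOLDS = {
    "public_broadcaster": 0.95,   # strictest: wide reach, public trust
    "private_broadcaster": 0.90,
    "civil_court": 0.90,
    "criminal_court": 0.99,       # near-certainty before use as evidence
}

def triage(context: str, authenticity_score: float) -> str:
    """Accept, reject, or route to a human, given a detector's score."""
    threshold = THRESHOLDS[context]
    if authenticity_score >= threshold:
        return "accept"
    # A score below the threshold does not prove manipulation, so a band
    # just under it is routed to manual review rather than auto-blocked.
    if authenticity_score >= threshold - 0.20:
        return "manual_review"
    return "reject"

# The 73%-authentic video from the text lands differently per context:
print(triage("civil_court", 0.73))     # manual_review
print(triage("criminal_court", 0.73))  # reject
```

The point of the sketch is not the numbers but the design question the text raises: any automated pipeline must encode such thresholds somewhere, and choosing them is a normative rather than a technical decision.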
harm has already materialised; it is precisely the choice for ex post regulation that allows the quantity of deepfakes that are produced and shared on the internet to rise to a point where no one is able to adequately assess any but the extreme and clearly unlawful ones.

Though it is clear that the current regulatory approach is unable to adequately address the problems concerned with deepfakes specifically and synthetic media in general, and in a certain sense is the cause of those problems, a solution would mean the introduction of ex ante regulation, which is politically sensitive. Yet, three regulatory alternatives should be seriously considered.

• First, banning technologies and products is a sensitive issue. Every technology has positive use cases; it is precisely through the frequent use of a technology that previously unforeseen possibilities can be explored, and both citizens and companies usually want technologies to be available to them. This will not be different with deepfakes. Yet, there may be an argument for prohibiting deepfake technologies and applications from the consumer market. This would allow organisations and professionals to use the technology for specific means, such as the retail sector, the entertainment industry and video-call applications, but prevent billions of citizens from the possibility of creating deepfakes. This would prevent the digital environment from becoming synthetic and limit the use of deepfakes to specific controlled environments, such as a movie, a business call and a webstore. It would allow the commercial value of the technology to be utilised in full. Though it might be regarded as stifling, there is only one type of use of deepfakes by citizens that is valuable and positive, and that is its use for satire and humour. Though it is a loss that they cannot produce funny fake videos, given the societal interests at stake, this may be considered a minor loss. An additional reason for such a ban is that 96% of all deepfakes currently concern non-consensual porn, which is clearly unlawful, in addition to other malignant uses for fraud, identity theft and slander. Technologies are not neutral; they enable certain practices, and when a tool such as deepfake technology is almost certain to be used for unlawful practices, such may be considered a reason for limiting access to it.100
• Second, instead of or in addition to this option, an ex ante legitimacy test could be introduced. Such a test could be imposed on citizens, for example by requiring them to perform a Data Protection Impact Assessment when producing a deepfake.101 If they found a high risk to be at stake, they would then have to inform their national Data Protection Authority, which would then have to assess whether the deepfake could be published.102 Although this is a theoretical solution, it is questionable whether this would work in practice, both because it is unlikely that citizens will actually adhere to such a duty and because the DPA would not have the resources necessary to assess every deepfake on its legitimacy. An alternative could be imposing an obligation on internet intermediaries to monitor their services for deepfakes and to block (harmful) deepfakes. This would require an amendment to the Digital Services Act. For example, internet intermediaries could be made to deploy deepfake detection techniques and block any content that is likely fake. Though this would not filter out all fake content, it would certainly reduce the number of deepfakes significantly.
• Thirdly, specific rules could be considered for legal proceedings. Especially in criminal cases, the fact alone that someone is prosecuted may have a significant impact on her personal and professional life. In addition, citizens are already producing deepfakes in civil law proceedings. Most legal systems have no systems in place for checking evidence on authenticity before it is introduced in court. Judges mostly presume evidence, such as a video, to be authentic, unless there are contra-indications. This means that it is mostly up to the defendant or, in civil law proceedings, the opposing party to claim and possibly prove that evidence has been manipulated or fabricated. The question is whether this is desirable, on the one hand because it entails the privatisation of a general problem and on the other because a citizen will not always be able to dispute the veracity and accuracy of evidence. This may be so not only when suspects are convicted in absentia, but also when people suffer from mental disorders. In addition, it can be costly to obtain the technical expertise necessary to demonstrate that evidence is or may be inauthentic, which might have the effect that weaker parties will be disadvantaged even more. That is why a rule could be introduced that evidence may only be introduced in court after it has been assessed for authenticity, for example by the National Forensic Service or another organ. This would involve economic costs, but would prevent significant personal and societal harm.

6. Conclusion

It is clear that deepfake technology will become more popular in the coming years. Although there are positive use cases for this technology, deepfakes can have a major negative impact, not only on personal interests, but also on social institutions. These involve fraud, identity theft and reputational harm, and undermining the media, the rule of law and democracy. The legal regime already regulates and prohibits most harmful deepfakes. The main problem regarding deepfakes is an enforcement problem. Therefore, in the context of deepfakes, consideration could be given to ex ante regulation, either by prohibiting the production, offering, use or possession of deepfake technology for the consumer market or by introducing a mandatory ex ante legitimacy test, which should be carried out before any material is published and/or distributed by citizens. An ex ante test could also be required before material could be introduced in court.

100 See e.g. Latour, B. (1993). We Have Never Been Modern. Harvard University Press. Borgmann, A. (1984). Technology and the Character of Contemporary Life. University of Chicago Press.
101 Article 35 GDPR.
102 Article 36 GDPR.
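The shift contemplated in the second option, from the current notice-and-action model to a proactive monitoring duty for intermediaries, can be sketched as follows. The `Platform` class, its methods and the toy detector are invented for illustration and do not correspond to any provision of the e-Commerce Directive or the DSA.

```python
# Illustrative sketch contrasting notice-and-action (ex post) with a
# hypothetical proactive-monitoring duty (ex ante). All names are invented.

class Platform:
    def __init__(self, monitoring_duty: bool = False, detector=None):
        self.monitoring_duty = monitoring_duty
        self.detector = detector          # e.g. a deepfake-detection model
        self.content, self.blocked = {}, set()

    def upload(self, cid, item):
        self.content[cid] = item
        # Ex ante variant: screen at upload time (not required under current law).
        if self.monitoring_duty and self.detector and self.detector(item) < 0.5:
            self.blocked.add(cid)

    def notify(self, cid, assessed_unlawful: bool):
        # Ex post variant: act only once a user or trusted flagger files a notice.
        if assessed_unlawful:
            self.blocked.add(cid)

    def available(self, cid):
        return cid in self.content and cid not in self.blocked

# Under notice-and-action, a deepfake stays online until someone complains:
p = Platform()
p.upload("v1", "deepfake video")
assert p.available("v1")                  # hosted unchecked
p.notify("v1", assessed_unlawful=True)
assert not p.available("v1")              # removed only after a notice

# With a (hypothetical) monitoring duty, the same upload is screened directly:
q = Platform(monitoring_duty=True, detector=lambda item: 0.3)  # toy score
q.upload("v1", "deepfake video")
assert not q.available("v1")
```

The contrast makes the regulatory trade-off visible: the first model places the burden on victims and flaggers after publication, the second forces an (error-prone) automated judgement before anything appears online.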
Yet the question is whether these regulatory options are desirable.

• First, there is the question of the enforceability of such rules. Deepfake technology will always be accessible to citizens, for example through websites offered from third countries. How effective a ban would be will have to be assessed before it is introduced.
• Second, there is the question of the desirability of such a ban. Do we want a society in which technologies are banned; would such a choice not be an expression of mistrust in citizens and deprive them of a valuable means of expression?
• Third, the question is whether the underlying reason for introducing such a ban would remain valid. While it is currently the case that the vast majority of deepfakes are unlawful in nature, this need not be the case in the future. Social and ethical codes for the use of a technology are usually developed gradually after it is introduced. The fact that such norms are not yet present or dominant with regard to deepfakes does not mean much, as the technology has only been in use for a few years. Thus even if left legally unregulated, social and ethical norms may shift the use of deepfake technology over time.
• Fourth and finally, ex ante rules would not remove the underlying problem. There is a social problem with misogyny, especially online; restricting deepfake technology does not take away that problem, but only one of the means to advance misogynous expressions. There is a social problem with fake news and the blurring of fact and fiction; this trend is likely to continue even without deepfakes. Foreign powers seek to influence the outcome of democratic elections; deepfakes are only a small part of their extensive arsenal. Etc.

Consequently, there are both very strong reasons to seriously consider ex ante rules and strong reasons not to. The final decision will have to be made on the basis of an extensive social and political debate. The same applies to the other regulatory options, such as limiting the household exemption. Regulating what people do in their private lives may be desirable in order to curb the production of intrinsically punishable or undesirable material, and it may also help to prevent such material from being produced and then distributed to a wide audience and/or on various websites by addressing the problem at the source. Yet the following questions could be asked:

• Do we want to move towards a society in which what citizens do in the privacy of their homes is regulated?
• Do we want DPAs to scrutinize citizens' private behaviour? Isn't the cure worse than the disease?
• Isn't the enforcement problem actually increased because the government (DPA, police, etc.) has insufficient means and manpower already, let alone the means and manpower to also supervise the private lives of citizens?

The same hesitations apply to the potential rules restricting online expression. It is clear that deepfakes can have a detrimental effect on the functioning of the media, that fake news can influence democratic elections and that deepfakes about public figures can contribute to qualified people giving up public office or not running for office. Therefore, it could be examined to what extent additional rules can be set for statements about public persons, who now have to tolerate more than private persons, in order to offer them more protection against unwanted and untrue statements by citizens. It can also be considered to what extent the spread of fake news and the influencing of democratic elections or political decision-making by means of fake news can be addressed more forcefully. An additional argument for the introduction of further rules on this point is that not all malicious statements can be tackled at present, since the making of untrue statements is not prohibited per se. An untrue, incorrect or misleading expression can be addressed under the current regime, but only if damage is caused, for example to personal interests (under tort law) or to certain social interests, such as when inciting hatred against minorities (under criminal law). New rules can create more clarity regarding the lawfulness of those expressions that cannot be directly related to such interests. Yet, here too, arguments can be put forward against more and stricter rules:

• Additional rules on statements concerning public persons or untrue statements during democratic elections may be considered undesirable because they may cripple the free and open debate that is so essential to a democratic constitutional state.
• On a related point, satire with respect to politicians and public officials, also during elections, can be viewed as an important asset and essential part of the free democratic debate.
• It is also highly questionable to what extent it is desirable for the government to interfere with and ultimately determine what is true and what is not; this was precisely Orwell's fear.

Finally, there are two new(er) issues for which more detailed rules may be considered, but for which the answer to what should be allowed and what not depends on political preference or the ethical point of view. This concerns deepfakes of deceased persons and deepfakes of fictitious persons. Some examples where different views may arise include, but are not limited to:

• Some perceive the idea that Napoleon would give a history lesson in secondary schools as a nice prospect, others find it problematic because children would be confronted at an early age with blurred lines between fact and fiction.
• Some find it positive if the police were to infiltrate child porn networks by means of deepfakes or if paedophiles could be treated by means of fake child porn, others find it undesirable if the government deploys such means and contributes to a post-truth world.
• Some think that a partner of a deceased person can decide for herself whether she wants to continue communicating with
the deceased, others think that there should be legal limits and strong protection of post-mortem privacy.
• Some think that a deepfake of a politician who appears to be giving a speech in the language of a country's minority is desirable with a view to inclusion, others find it voter deception.

In short, there are enough complex issues for a societal and political debate on future regulation, which should preferably take place sooner rather than later, given the speed at which deepfake technology is developing and its potential impact on society and democratic institutions.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data Availability

No data was used for the research described in the article.