Deepfake Ethics
As deepfakes stand to change media, privacy, and accountability as we know it, the need for frameworks that safeguard
against their abuse is clear. With the rise of social media, there has already been considerable
discussion about the role of social media companies in regulating content, and the situations in
which they should do so. One argument holds that, while newspaper publishers can review editorials before they are published, social media companies cannot moderate content before it is posted because material is uploaded to their platforms continuously and at enormous volume. On this view, social media companies should not be held immediately responsible for content that is posted, but they should be required to remove posts that are harmful or violent. In PHIL-401,
we learned about the theories of well-known philosophers and how their thoughts can be of value
when analyzing new technological developments and their effects on society. John Stuart Mill
was an early proponent of free speech, arguing that open debate keeps people from being trapped in what we would now call echo chambers and filter bubbles, in which only their own beliefs are reinforced. However, he did not defend inherently harmful speech; his harm principle holds that speech which directly harms others may justifiably be restricted. If
we view deepfakes through Mill's eyes, we might predict that satirical deepfakes, which aim to mock a person or find fault in their argument, would be acceptable. Deepfake pornography (the most prevalent type) and deepfakes that rewrite people's speech into hateful comments, by contrast, would not be acceptable. Although this framework is useful, it is important to address the unique
challenges that accompany deepfake regulation. Deepfakes are meant to fool people into
thinking that they are real, which means that even in instances where they are used for satire,
they may not be perceived as satirical by all. Because of this, it is important to develop
regulations that address the technological challenges of deepfake detection alongside the legal challenges of content moderation. This could take the form of detection technology that screens media as it is posted to a platform and automatically flags suspected deepfakes, preventing confusion about what is real.
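To make that proposal concrete, here is a minimal sketch (not from the original essay) of what an upload-time flagging pipeline could look like in Python. Everything in it is hypothetical: the `Post` structure, the `predict_deepfake_probability` stub, and the 0.8 threshold are illustrative placeholders, not a real platform's API or a real detection model.

```python
from dataclasses import dataclass

# Hypothetical threshold: scores at or above this get a visible deepfake label.
FLAG_THRESHOLD = 0.8

@dataclass
class Post:
    """A media upload awaiting moderation (illustrative structure only)."""
    media_bytes: bytes
    detector_score: float = 0.0
    flagged_as_deepfake: bool = False

def predict_deepfake_probability(media: bytes) -> float:
    """Stand-in for a trained detector. A real platform would run a
    forensic model over the media; here we return a fixed dummy score
    so the sketch runs end to end."""
    return 0.9

def moderate_upload(post: Post) -> Post:
    """Score a new upload and attach a label instead of removing it,
    so satirical deepfakes can stay up while viewers are warned."""
    post.detector_score = predict_deepfake_probability(post.media_bytes)
    post.flagged_as_deepfake = post.detector_score >= FLAG_THRESHOLD
    return post

if __name__ == "__main__":
    post = moderate_upload(Post(media_bytes=b"...uploaded video..."))
    print(f"score={post.detector_score:.2f} flagged={post.flagged_as_deepfake}")
```

Note the design choice in this sketch: flagged posts are labeled rather than removed, which fits the essay's point that satirical deepfakes can remain visible so long as viewers are not misled into taking them as real.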
In addition, there should be laws that identify the people who create malicious deepfake content, to protect the rights of those whose likenesses and work are reproduced. Lastly, frameworks must be in place to ensure that the original creators of a work receive the credit they deserve and the assurance that others cannot modify their work without consent. If these three actions are taken together, we can protect the rights of creators and limit the harm that this technology can cause.