Deepfake
file:///C:/Users/hp/Downloads/
Deep_Insights_of_Deepfake_Technology_A_Review.pdf
https://www.irjet.net/archives/V7/i12/IRJET-V7I1265.pdf
The underlying technology, the generative adversarial network (GAN), was invented in 2014 by Ian Goodfellow, who later worked at Apple. Deepfakes are products of artificial intelligence (AI) applications that merge, combine, replace, and superimpose images and video clips to make fake videos that appear authentic. Deepfake technology can generate, for instance, a humorous, pornographic, or political video of an individual saying anything, without the consent of the person whose image and voice are involved. Deepfakes target social media platforms, where conspiracies, rumors, and misinformation spread easily.
The term deepfake, a blend of "deep learning" and "fake", refers to synthetic content in which the target subject's face is swapped with a source person's to produce fabricated videos or images of the target.
Deepfakes themselves were born in 2017, when a Reddit user of the same name posted doctored clips on the site.
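The adversarial idea behind GANs can be illustrated at toy scale. The sketch below is a deliberately simplified assumption, not any production deepfake system: a two-parameter linear generator tries to imitate a target distribution (here, a normal distribution with mean 3) while a logistic-regression discriminator learns to tell real samples from generated ones, each side following its own gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data the generator must learn to imitate (toy target: N(3, 1)).
def sample_real(n):
    return rng.normal(3.0, 1.0, n)

# Generator: linear map of noise, g(z) = a*z + b.
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    real, fake = sample_real(batch), a * z + b

    # Discriminator ascends on log D(real) + log(1 - D(fake)).
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) + np.mean(-p_fake * fake))
    c += lr * (np.mean(1 - p_real) + np.mean(-p_fake))

    # Generator ascends on log D(fake) (the "non-saturating" objective),
    # pushing its samples toward what the discriminator calls real.
    p_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

Real deepfake generators replace the linear map with deep convolutional networks over images, but the two-player training loop is the same shape.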
1. Textual deepfakes
In the early days of machine learning and natural language processing (NLP), it was thought that a machine could not do creative work such as drawing or writing. Fast forward to 2021: top-rated AI writing systems can now compose text with human-like pith and clarity, thanks to the robust language models and libraries developed over decades through the incremental labor of scholars and data science specialists.
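The statistical idea behind machine-written text can be sketched at toy scale. The following is a minimal bigram model, an illustrative assumption far simpler than the large language models the paragraph refers to: it learns which word tends to follow which in a tiny corpus, then samples plausible continuations.

```python
import random
from collections import defaultdict

random.seed(7)

# A tiny training corpus (made up for illustration).
corpus = ("deepfakes are fake videos that appear authentic and "
          "fake videos spread easily on social media platforms").split()

# Count which word follows which (a bigram table).
nxt = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    nxt[w1].append(w2)

# Generate text by repeatedly sampling a plausible next word.
word, out = "fake", ["fake"]
for _ in range(8):
    if word not in nxt:          # dead end: no observed successor
        break
    word = random.choice(nxt[word])
    out.append(word)
generated = " ".join(out)
```

Modern language models replace the bigram table with neural networks conditioned on long contexts, which is what makes their output read as human-written.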
2. Deepfake video
The creation of fake photographs and videos is deepfake criminals' primary weapon. Given that we live in a world of omnipresent social media, where videos and photos convey incidents and stories better than text, this is the most widely used type of deepfake.
Modern video-generating artificial intelligence is more capable than natural-language AI, and may be more hazardous. Hyperconnect, a Seoul-based software firm, released MarioNETte in 2020, a program that can create deepfake videos of historical figures, celebrities, and leaders. This is accomplished by having another individual reenact the desired person's facial gestures, which are then mapped onto a deepfake of the intended person.
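The final compositing step of such a face-swap pipeline, superimposing a rendered source face onto the target frame, commonly comes down to alpha blending with a soft mask. A minimal sketch with toy arrays (an illustrative assumption, not MarioNETte's actual method):

```python
import numpy as np

def alpha_blend(target, source_face, mask):
    """Composite a rendered source face onto a target frame.

    target, source_face: float arrays of shape (H, W, 3) in [0, 1].
    mask: float array of shape (H, W) in [0, 1]; 1 takes the source
    pixel, 0 keeps the target pixel, in-between values feather the seam.
    """
    m = mask[..., None]                    # broadcast mask over channels
    return m * source_face + (1.0 - m) * target

# Toy 4x4 frames: a dark target and a bright "face" patch.
target = np.zeros((4, 4, 3))
source = np.ones((4, 4, 3))

# Mask: full opacity in the centre, feathered along the top/left edges.
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
mask[0, :] = mask[:, 0] = 0.25

out = alpha_blend(target, source, mask)
```

Feathering the mask is what hides the seam between the swapped face and the surrounding frame; hard-edged masks are one of the visual artifacts detectors look for.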
3. Deepfake audio
Text, photos, and video are not the only things that neural networks and artificial intelligence can produce; they can also clone a human voice. All that is needed is a data repository containing audio recordings of the person whose voice is to be imitated. Deepfake algorithms can learn from this data collection and replicate the prosody of that specific person's voice.
Commercial software such as Lyrebird and Deep Voice has been introduced, where you only have to say a few phrases before the artificial intelligence becomes acclimated to your voice and accent. The program becomes capable enough to copy your voice as you feed in more recordings of yourself. After loading in a collection of your voice recordings, you may simply supply a phrase or a sentence, and the deepfake program will read the text out in your voice.
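Prosody modeling starts from features such as the pitch contour. As a minimal sketch (a synthetic sine-wave "voice" and a basic autocorrelation estimator, assumptions for illustration rather than any commercial product's algorithm), the fundamental frequency can be recovered from the lag at which a signal best matches a shifted copy of itself:

```python
import numpy as np

SR = 8000  # sample rate in Hz (assumed)

def estimate_pitch(signal, sr, fmin=80, fmax=400):
    """Estimate the fundamental frequency via autocorrelation: the lag
    with the strongest self-similarity corresponds to one pitch period."""
    signal = signal - signal.mean()
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-period lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

t = np.arange(0, 0.5, 1.0 / SR)
voice = np.sin(2 * np.pi * 220.0 * t)         # synthetic 220 Hz "voice"
pitch = estimate_pitch(voice, SR)
```

A voice-cloning system tracks features like this pitch contour over time, then trains a synthesizer to reproduce the target speaker's characteristic patterns on new text.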
LAWS
https://www.indiatoday.in/law/story/deepfake-videos-images-storming-internet-what-
laws-can-come-to-your-rescue-2459655-2023-11-07
https://www.scconline.com/blog/post/2023/03/17/emerging-technologies-and-law-
legal-status-of-tackling-crimes-relating-to-deepfakes-in-india/
https://www.livemint.com/news/deepfakes-major-violation-of-it-law-harm-
women-in-particular-rajeev-chandrasekhar-11699358904728.html
https://www.indiatoday.in/india/story/deepfakes-massive-threat-to-society-
centre-will-penalise-creators-platforms-hosting-them-it-minister-2466476-2023-
11-23
IMPLICATIONS
https://www.bu.edu/bulawreview/files/2021/04/LANGA.pdf pg 770
file:///C:/Users/hp/Downloads/
The_Emergence_of_Deepfake_Technology_A_Review.pdf
Deepfakes are a major threat to our society, political system, and business because they
1) put pressure on journalists struggling to filter real from fake news,
2) threaten national security by disseminating propaganda and interfering in elections,
3) hamper citizens' trust in information provided by authorities, and
4) raise cybersecurity issues for people and organizations.
It is highly probable that the journalism industry will face a massive consumer-trust issue due to deepfakes. Deepfakes pose a greater threat than "traditional" fake news because they are harder to spot and people are inclined to believe the fake is real.
Examples:
During the spike in tensions between India and Pakistan in 2019, Reuters found 30 fake videos about the incident, mostly old videos from other events posted with new captions.
While looking for eyewitness videos of the mass shooting in Christchurch, New Zealand, Reuters came across a video which claimed to show the moment a suspect was shot dead by police. However, they quickly discovered it was from a different incident in the U.S.A.; the suspect in the Christchurch shooting was not killed.
Deepfakes are likely to hamper digital literacy and citizens’ trust toward authority-
provided information, as fake videos showing government officials saying things that
never happened make people doubt authorities.
The corporate world has already expressed interest in protecting itself against viral fraud, as deepfakes could be used for market and stock manipulation, for example by showing a chief executive uttering racist or misogynistic slurs, announcing a fake merger, or making false statements about financial losses or bankruptcy.
Positive Impacts
• Educational benefits
Deepfake technology has positive educational potential. It could make history lessons interactive and engaging: deepfake recreations of historical figures could bring their stories to life and capture students' interest.
Since deepfakes can replicate voices and alter images, films can be dubbed into other languages while keeping the original actors. The voices sound like the originals, and, crucially, the lip movements match the words spoken.
Consider actors who have passed away. Deepfake technology can fill the role of CGI, recreating the likeness of unavailable past actors, so a character does not have to die with its actor. One example is the recreation of the late Peter Cushing, who died in 1994, for Rogue One: A Star Wars Story (2016).
• Art world
AI software could help us create virtual museums, giving people who would otherwise never encounter them face to face access to the world's masterpieces. The planet's most compelling, profound artwork deserves to be shared.
• Medical community
Deepfake technology can also boost data protection and assist the development of new diagnostic and tracking procedures. Hospitals can produce "deepfake patients" using the technology behind deepfakes: patient data that is realistic enough for research and experimentation but does not place actual patients at risk. Rather than actual patient data, researchers can use true-to-life deepfake patients, leaving room to examine new diagnostic and monitoring techniques, or even to train other AI systems to help with medical decisions.
• Training
Deepfakes can be used in customer-service training. It is possible that we could one day have realistic deepfake virtual humans serving as lifelike example customers.
CONCLUSIONS
https://www.irjet.net/archives/V7/i12/IRJET-V7I1265.pdf
According to the study, deepfakes are a serious threat to society, the political system, and businesses because they put pressure on journalists struggling to filter real from fake news, threaten national security by disseminating propaganda and interfering in elections, hamper citizens' trust in information provided by authorities, and raise cybersecurity issues for people and organizations.
On the other hand, there are at least four known ways to combat deepfakes, namely 1) legislation and regulation, 2) corporate policies and voluntary action, 3) education and training, and 4) anti-deepfake technology. While legislative action can be taken against some deepfake producers, it is not effective against foreign states. Rather, corporate policies and voluntary action (such as content-moderation policies addressing deepfakes and quick removal of user-flagged content on social media platforms), as well as education and training aimed at improving digital media literacy, better online behavior, and critical thinking, are likely to be more efficient, since they create cognitive and concrete safeguards around digital content consumption and misuse.