
DEEPFAKE

Deep Insights of Deepfake Technology: A Review (PDF)
https://www.irjet.net/archives/V7/i12/IRJET-V7I1265.pdf

The underlying technology, the generative adversarial network (GAN), was invented in 2014 by Ian Goodfellow, who now works at Apple. Deepfakes are products of artificial intelligence (AI) applications that merge, combine, replace, and superimpose images and video clips to create fake videos that appear authentic. Deepfake technology can generate, for instance, a humorous, pornographic, or political video of an individual saying anything, without the consent of the person whose image and voice are involved. Deepfakes target social media platforms, where conspiracies, rumors, and misinformation spread easily.
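The technology Goodfellow introduced in 2014, the GAN, pits two networks against each other: a generator that fabricates samples and a discriminator that tries to tell them from real data. As a minimal sketch of that adversarial objective (a toy 1-D example with made-up, untrained parameters, not a working deepfake model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN setup with made-up, untrained parameters -- purely to
# illustrate the adversarial objective, not a working deepfake model.
def generator(z, w, b):
    return w * z + b                               # tiny linear "generator"

def discriminator(x, v, c):
    return 1.0 / (1.0 + np.exp(-(v * x + c)))      # logistic "discriminator"

w, b = 1.0, 0.0      # generator parameters (fixed for the sketch)
v, c = 1.0, -4.0     # discriminator parameters (fixed for the sketch)

x_real = rng.normal(4.0, 1.0, size=256)            # "real" samples ~ N(4, 1)
z = rng.normal(0.0, 1.0, size=256)                 # noise fed to the generator
x_fake = generator(z, w, b)

d_real = discriminator(x_real, v, c)
d_fake = discriminator(x_fake, v, c)

eps = 1e-8
# The discriminator maximizes log D(x) + log(1 - D(G(z))), while the
# generator tries to fool it by maximizing log D(G(z)) -- a two-player game.
loss_d = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
loss_g = -np.mean(np.log(d_fake + eps))
```

Training alternates gradient steps on these two losses until the generator's output is hard to distinguish from real data; deepfake systems apply the same idea to faces and voices rather than 1-D numbers.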

Deepfake, a blend of "deep learning" and "fake," refers to imitated content in which a target subject's face is swapped with that of a source person to create videos or images of the target.

Deepfakes themselves were born in 2017, when a Reddit user of the same name posted doctored clips on the site.

Technology required to make Deepfakes


It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards, or better still with computing power in the cloud, which cuts the processing time from days or weeks to hours. But it takes expertise, too. Many tools are now available to help people make deepfakes, and several companies (e.g., deepfakesweb.com) will make them for you and do all the processing in the cloud.

Types of Deepfake Frauds


https://www.spiceworks.com/it-security/cyber-risk-management/articles/what-is-deepfake/
Now that we have established what a deepfake is, let us consider its
various types:


1. Textual deepfakes
In the early days of machine learning and natural language processing (NLP), it was thought that a machine could not handle creative work such as drawing or writing. Fast forward to 2021: top-rated AI writing tools can now compose text with human-like pith and clarity, thanks to the robust language models and libraries developed over decades through the incremental labor of scholars and data science specialists.
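At their core, the language models behind AI-generated text learn which words tend to follow which. A toy bigram sketch of that idea (the corpus and output here are made up for illustration; real systems use neural networks over vastly larger data):

```python
import random
from collections import defaultdict

# Toy bigram "language model" -- an illustrative stand-in for the large
# neural language models behind textual deepfakes; the corpus is made up.
corpus = ("the model writes text and the model learns text "
          "and the text looks human to the reader").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)          # record which word follows which

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    options = follows[word]
    if not options:               # dead end: word never seen mid-corpus
        break
    word = random.choice(options)
    out.append(word)
print(" ".join(out))
```

Every generated pair of adjacent words is one the model has seen before, which is why even this crude sketch produces locally plausible text; scaling the same statistical idea up is what makes modern generated prose hard to spot.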
2. Deepfake video
Deepfake criminals' primary weapon is the creation of fake photographs and videos. Given that we live in an omnipresent social media world, where videos and photos convey incidents and stories better than text, this is the most widely used type of deepfake.
Modern video-generating artificial intelligence is more capable than natural language AI, and may be more hazardous. Hyperconnect, a Seoul-based software firm, released MarioNETte in 2020, a program that can create deepfake videos of historical figures, celebrities, and leaders. This is accomplished by having another individual reenact the desired personality's facial gestures, which are then mapped onto the target personality's deepfake.
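The final "superimposing" step in a face-swap pipeline blends the generated face into the target frame so no hard seam is visible. A minimal sketch of that blending idea, using made-up grayscale arrays and fixed coordinates (real systems locate the face via detected landmarks):

```python
import numpy as np

# Toy stand-ins: a grayscale target frame and a generated "face" patch.
target = np.full((10, 10), 0.2)
face_patch = np.full((6, 6), 0.9)

# Feathered mask: 1 in the middle of the patch, fading to 0 at its edge,
# so the composite has no hard seam -- the core blending idea in swaps.
y, x = np.mgrid[0:6, 0:6]
edge_dist = np.minimum.reduce([y, 5 - y, x, 5 - x])
mask = np.clip(edge_dist / 2.0, 0.0, 1.0)

# Alpha-blend the patch over the corresponding region of the frame.
region = target[2:8, 2:8]
target[2:8, 2:8] = mask * face_patch + (1.0 - mask) * region
```

The patch's center fully replaces the frame while its border keeps the original pixels, with a smooth transition in between.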

3. Deepfake audio
Text, photos, and video are not the only things that neural networks and artificial intelligence can handle; they can also clone a human voice. All that is needed is a data repository containing audio recordings of the person whose voice is to be imitated. Deepfake algorithms learn from this data collection and replicate the prosody of that specific person's voice.
Commercial software such as Lyrebird and Deep Voice has been introduced; you only have to say a few phrases before the artificial intelligence becomes acclimated to your voice and accent. The program grows strong enough to copy your voice as you feed in more recordings of yourself. After loading in a collection of your voice recordings, you may simply supply a phrase or a sentence, and the deepfake program will read it out in your tone.
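The "prosody" such systems learn includes features like the speaker's fundamental frequency (pitch). A toy sketch of extracting pitch from a recording via autocorrelation, using a synthetic 180 Hz tone as a made-up stand-in for real speech:

```python
import numpy as np

# Synthetic 180 Hz tone as a stand-in for a real voice recording
# (the frequency and sample rate here are chosen for illustration).
sr = 16000                                   # sample rate in Hz
t = np.arange(0, 0.2, 1.0 / sr)
signal = np.sin(2 * np.pi * 180.0 * t)

# Autocorrelation peaks at multiples of the pitch period -- the kind of
# prosody feature a voice-cloning system extracts from its training data.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]

lo, hi = sr // 400, sr // 80                 # search lags for 80-400 Hz pitch
lag = lo + np.argmax(ac[lo:hi])
estimated_f0 = sr / lag                      # fundamental frequency estimate
```

The estimate lands within a fraction of a hertz of the true 180 Hz; a cloning model tracks such features over time so the synthesized voice carries the target speaker's intonation.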

4. Deepfakes on social media


One can use deepfake technology in conjunction with fabricated stories or blogs to build a fake internet profile that would be difficult for a regular user to detect. For instance, a deepfake persona named Maisy Kinsley on social media sites such as LinkedIn and Twitter convincingly posed as a (non-existent) Bloomberg reporter. Her profile photo looked odd, as if it had been generated by software, and the profile repeatedly attempted to connect with Tesla stock short-sellers, indicating that it was most likely fabricated for financial gain.

5. Real-time or live deepfakes


Deepfake technology is astonishingly advanced, allowing firms to generate advertising clones, governments to imitate political adversaries, and hackers to recreate user voices to defeat voice-based authentication.
YouTubers are already changing their faces in real time using novel deepfake programs. For example, DeepFaceLive is open-source artificial intelligence software that can convert your face into someone else's over videoconference and streaming platforms. Streamers have already started using the feature on platforms like Twitch, and both broadcasters and developers of other media output can use the program.

LAWS
https://www.indiatoday.in/law/story/deepfake-videos-images-storming-internet-what-laws-can-come-to-your-rescue-2459655-2023-11-07
https://www.scconline.com/blog/post/2023/03/17/emerging-technologies-and-law-legal-status-of-tackling-crimes-relating-to-deepfakes-in-india/
https://www.livemint.com/news/deepfakes-major-violation-of-it-law-harm-women-in-particular-rajeev-chandrasekhar-11699358904728.html
https://www.indiatoday.in/india/story/deepfakes-massive-threat-to-society-centre-will-penalise-creators-platforms-hosting-them-it-minister-2466476-2023-11-23
IMPLICATIONS
https://www.bu.edu/bulawreview/files/2021/04/LANGA.pdf (p. 770)
The Emergence of Deepfake Technology: A Review (PDF)

Deepfakes are a major threat to our society, political system, and business because they:
1) put pressure on journalists struggling to filter real from fake news,
2) threaten national security by disseminating propaganda and interfering in elections,
3) hamper citizens' trust toward information provided by authorities, and
4) raise cybersecurity issues for people and organizations.
It is highly probable that the journalism industry will face a massive consumer trust issue due to deepfakes. Deepfakes pose a greater threat than "traditional" fake news because they are harder to spot and people are inclined to believe the fake is real.
Examples:
During the 2019 spike in tensions between India and Pakistan, Reuters found 30 fake videos about the incident, mostly old videos from other events posted with new captions.
While looking for eyewitness videos of the mass shooting in Christchurch, New Zealand, Reuters came across a video claiming to show the moment a suspect was shot dead by police. They quickly discovered it was from a different incident in the U.S.A.; the suspect in the Christchurch shooting was not killed.

The intelligence community is concerned that deepfakes will be used to threaten


national security by disseminating political propaganda and disrupting election
campaigns. Putting words in someone's mouth on a video that goes viral is a
powerful weapon in today’s disinformation wars, as such altered videos can easily
skew voter opinion. Highly realistic deepfakes thus pose a unique threat to public
safety and national security because of their persuasive power, bolstered by “the
distribution powers of social media."
Deepfakes could also pose a threat to national security if they are used to engage in
wartime deception. The most catastrophic scenario stemming from a convincing
deepfake is nuclear war brought on by a forged video of a world leader declaring war
or threatening retaliation.

Deepfakes are likely to hamper digital literacy and citizens’ trust toward authority-
provided information, as fake videos showing government officials saying things that
never happened make people doubt authorities.

The corporate world has already expressed interest in protecting itself against viral frauds, as deepfakes could be used for market and stock manipulation, for example by showing a chief executive uttering racist or misogynistic slurs, announcing a fake merger, or making false statements about financial losses or bankruptcy.

Deepfaked porn or product announcements could be used for brand sabotage,


blackmail, or to embarrass management. This technology can be used to create fake
images or videos that depict people doing or saying things that never actually
happened, potentially damaging the reputation of individuals, or spreading false
information. It is also possible for deepfakes to be used for malicious purposes such as
non-consensual pornography, or for political propaganda or misinformation campaigns.
This can have serious implications for individuals whose images or likenesses are used
without their consent, as well as for society at large when deepfakes are used to spread
false information or manipulate public opinion.

Further, deepfake technology can create a fraudulent identity and, in live-stream


videos, convert an adult face into a child's or younger person's face, raising concerns about the use of the technology by child predators.

SCAMS DUE TO DEEPFAKE


https://www.livemint.com/news/deepfakes-major-violation-of-it-law-harm-women-in-particular-rajeev-chandrasekhar-11699358904728.html
https://timesofindia.indiatimes.com/city/ahmedabad/deepfakes-replace-women-on-sextortion-calls/articleshow/86020397.cms
https://www.irjet.net/archives/V7/i12/IRJET-V7I1265.pdf

Positive Impacts
• Educational benefits

Deepfake technology has positive educational potential. It could make history lessons interactive: deepfake renditions of historical figures could bring their stories to life and capture students' interest.

• Unifying the global audience

Since deepfakes can replicate voices and alter images, films can be dubbed into other languages while keeping the original actors. The voices sound like the originals and, crucially, the lip movements match the words spoken.

• The entertainment industry

Consider an actor who has passed away. Deepfake technology can fill the role of CGI, recreating the likeness of actors who are no longer available, so a character does not have to die with its actor. One example is the recreation of the late Peter Cushing, who died in 1994, in Star Wars: Rogue One (2016).

• Art world
AI software could help us create virtual museums, giving access to the world's masterpieces to people who would otherwise never be able to encounter them face to face. The planet's most compelling, profound artwork deserves to be shared.

• Medical community

Deepfake technology can also boost data protection and assist in developing new diagnostic and tracking procedures. Hospitals can use the technology behind deepfakes to produce deepfake patients: patient data that is practical for research and experimentation but does not place actual patients at risk. Rather than real patient records, researchers can use true-to-life deepfake patients, leaving room to examine new diagnostic and monitoring techniques, or even to train other AI systems to help with medical decisions.

• Training

Deepfakes can be used in customer service training. One day we may have realistic deepfake virtual humans that provide trainees with deepfake examples of real customers.

CONCLUSIONS
https://www.irjet.net/archives/V7/i12/IRJET-V7I1265.pdf
According to the study, deepfakes are a serious threat to society, the political system, and businesses because they put pressure on journalists struggling to filter real from fake news, threaten national security by disseminating propaganda that interferes in elections, hamper citizens' trust toward information provided by authorities, and raise cybersecurity issues for people and organizations.

On the other hand, there are at least four known ways to combat deepfakes: 1) legislation and regulation, 2) corporate policies and voluntary action, 3) education and training, and 4) anti-deepfake technology. While legislative action can be taken against some deepfake producers, it is not effective against foreign states. More efficient are corporate policies and voluntary action, such as deepfake-addressing content moderation policies and quick removal of user-flagged content on social media platforms, as well as education and training aimed at improving digital media literacy, better online behavior, and critical thinking, which create cognitive and concrete safeguards for digital content consumption.
