Deep Fake
I. INTRODUCTION
The human mind is the greatest threat we face now and will face in the near future. Social engineering
attacks that exploit human "zero days" with deepfakes are becoming harder and harder to discern. This
will surely make the coming years memorable, as we are still exploring what it is about humans that
makes us susceptible to being pwned, manipulated, and otherwise compromised. The main aim here is
to analyse how deepfakes and AI are being weaponized, and how this will forever change defence
strategies.
Deepfakes are artificial images and sounds put together with machine-learning algorithms. A deepfake
creator uses this technology to manipulate media, replacing a real person's image, voice, or both with
similar artificial likenesses. Think of deepfake technology as an advanced form of photo-editing
software that makes it easy to alter images. But it goes much further in how it manipulates visual and
audio content: it can create people who don't exist, or make it appear that real people said and did
things they never said or did. As a result, deepfake technology can be used as a tool to spread
misinformation.
In Gabon, a suspected deepfake video helped trigger an attempted military coup in the Central African nation.
In India, a candidate used deepfake videos in different languages to criticize the incumbent and reach
different constituencies.
Deepfake political videos have become so prevalent and damaging that California has outlawed them
during election season. The goal is to keep deepfakes from deceptively swaying voters. Deepfakes can
bolster fake news, and they were a major security concern in the 2020 U.S. presidential election year.
Deepfakes have the potential to undermine political systems. In mid-March, as the Russian invasion of
Ukraine crept into its third week, an unusual video started making the rounds on social media and was
even broadcast on the television channel Ukraine 24 due to the efforts of hackers.
The video appeared to show Ukrainian President Volodymyr Zelenskyy, stilted with his head moving
and his body largely motionless, calling on the citizens of his country to stop fighting Russian soldiers
and to surrender their weapons. He had already fled Kyiv, the video claimed.
Except, those weren't the words of the real Zelenskyy. "The video was a 'deepfake,' or content
constructed using artificial intelligence. In a deepfake, individuals train computers to mimic real people
to make what appears to be an authentic video. Shortly after the deepfake was broadcast, it
was debunked by Zelenskyy himself, removed from prominent online sources like Facebook and
YouTube, and ridiculed by Ukrainians for its poor quality," according to the Atlantic Council. However,
just because the video was quickly discredited doesn’t mean it didn’t cause harm. In a world
increasingly politically polarized, in which consumers of media may believe information that reinforces
their biases, regardless of the content’s apparent legitimacy, deepfakes pose a significant threat.
At present, deepfakes are easily detected even by an untrained eye, which limits their value as a
security threat. But the technology is improving constantly. For the moment, the greatest worry is its
use by state-sponsored actors that have the resources to create the most convincing videos possible. The real threat
begins when anyone with a modern computer can create highly realistic manipulated videos at the
push of a button. Countermeasures are already being developed in anticipation of this future state of
affairs. For example, DARPA believes that these videos can develop into a serious national security
problem and is developing new video forensics tools to detect them.
Deepfakes disproportionately affect public figures, because a large collection of shots of their faces
from various angles is needed to create the facial models that are swapped into the fake video. Should
flawless deepfakes become common, it is likely there will be a rise in pre-release authentication of
content created by these figures with some sort of digital watermark. That would not address every
type of deepfake, but it could be an effective countermeasure against the creation of false statements
by political figures and celebrities. There is also some concern about targeted phishing attacks that use
deepfakes to gain access to business networks. Management figures may have enough video and audio
data in circulation to serve as the basis for a deepfake, which may push companies to adopt strict
countermeasures.
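One way such pre-release authentication could work is a cryptographic tag computed over the released media. The sketch below is illustrative only: the key name and the sign/verify workflow are assumptions, not any real platform's scheme. It uses an HMAC so a publisher can tag content before release and anyone holding the tag can detect later alteration:

```python
import hmac
import hashlib

# Hypothetical workflow: a public figure's team signs content before release,
# and a verifier checks the tag before treating a clip as authentic.
SECRET_KEY = b"press-office-signing-key"  # kept private by the publisher

def sign_content(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce an authentication tag over a media file's raw bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Check the media against its published tag (constant-time compare)."""
    expected = sign_content(media_bytes, key)
    return hmac.compare_digest(expected, tag)

video = b"\x00\x01fake-video-bytes..."
tag = sign_content(video)
print(verify_content(video, tag))                 # True: untouched copy
print(verify_content(video + b"tampered", tag))   # False: altered copy
```

A visible watermark could be stripped or re-rendered; a keyed tag like this fails verification on any byte-level change, which is why signing-style schemes are discussed for this countermeasure.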
Generative Adversarial Networks (GANs) belong to the family of generative models, meaning they are
able to generate artificial content based on arbitrary input. The term GAN usually refers to the training
method rather than to the generative model itself, because GANs train not a single network but two
networks simultaneously.
The first network is usually called the Generator, and the second the Discriminator. The purpose of the
Generator is to produce images that look real; during training, it progressively becomes better at
creating them. The purpose of the Discriminator is to learn to tell real images apart from fakes; during
training, it progressively becomes better at doing so. The process reaches equilibrium when the
Discriminator can no longer distinguish real images from fakes.
A GAN used for face generation follows these steps:
1. The generator takes an array of random numbers, of size equal to the seed size, and generates an
image.
2. The generated images, together with an equal number of real face images (used for training), are
passed to the discriminator, which classifies each as real or fake.
3. The discriminator's result is passed back to the generator as a loss, making the generator more
accurate at producing fake images.
4. This process iterates a desired number of times, so the generator keeps improving its output to
fool the discriminator.
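The steps above can be sketched end-to-end with a toy one-dimensional GAN. This is an illustrative simplification, not how face-generation GANs are built: the generator is a linear map, the discriminator is logistic regression on a scalar, and the gradients are derived by hand so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "real" data is 1-D samples from N(4, 1.25); the generator must
# learn to map noise z ~ N(0, 1) into samples that fool the discriminator.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0          # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    # --- discriminator step: tell real from fake (generator frozen) ---
    z = rng.normal(0.0, 1.0, batch)
    x_real, x_fake = real_batch(batch), a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    # gradients of -log d(real) - log(1 - d(fake)) w.r.t. w and c
    gw = np.mean((d_real - 1) * x_real + d_fake * x_fake)
    gc = np.mean((d_real - 1) + d_fake)
    w, c = w - lr * gw, c - lr * gc

    # --- generator step: fool the discriminator (discriminator frozen) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # gradients of the non-saturating loss -log d(fake) w.r.t. a and b
    ga = np.mean((d_fake - 1) * w * z)
    gb = np.mean((d_fake - 1) * w)
    a, b = a - lr * ga, b - lr * gb

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))  # drifts toward the real mean of 4
```

The alternation in the loop mirrors steps 1-4: generate, classify against real data, feed the loss back, repeat until the fakes are statistically hard to separate from the real samples.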
The discriminator and the generator do not run simultaneously; they are trained in alternating steps.
As of now, these are a few telltale signs used to spot deepfakes; as the years go by, there will be many
more: unnatural eye movement, a lack of blinking, unnatural facial expressions, facial morphing (a
simple stitch of one image over another), unnatural body shape, unnatural hair, abnormal skin colors,
awkward head and body positioning, inconsistent head positions, odd lighting or discoloration, bad
lip-syncing, robotic-sounding voices, digital background noise, and blurry or misaligned visuals.
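One of these cues, blurry or misaligned visuals, can be screened automatically. The variance of a Laplacian-filtered frame is a common sharpness heuristic: blurry regions produce low variance. The threshold below is illustrative; a real detector would tune it per dataset and inspect regions (for example, the face boundary) rather than whole frames:

```python
import numpy as np

# 3x3 Laplacian kernel: responds to high-frequency detail (edges).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score of a 2-D grayscale image (higher = sharper)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # direct 3x3 valid convolution, no SciPy needed
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def looks_blurry(gray: np.ndarray, threshold: float = 50.0) -> bool:
    return laplacian_variance(gray) < threshold

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, (64, 64)).astype(float)   # high-frequency detail
blurry = np.full((64, 64), 128.0)                      # perfectly flat frame
print(looks_blurry(sharp), looks_blurry(blurry))       # False True
```

A score like this only flags one artifact class; in practice detectors combine many such weak signals with learned models.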
Beyond these cues, when a fake is too professional to spot by eye, there is software that can identify it.
Reality Defender is a user-friendly, no-code platform that lets companies scan media (audio, video,
and images) for fake content. Its API and web app provide real-time scanning, deepfake scoring, and
PDF report cards. The Reality Defender browser plug-in has yet to launch fully, but it promises a more
technologically advanced approach, using machine learning to verify whether or not an image has been
tinkered with. The plug-in also encourages users to help with this process by identifying pictures that
have been manipulated, or so-called "propaganda." The AI Foundation established this technology to
detect synthetic media, sometimes known as deepfakes.
It is a non-partisan, non-commercial effort to help reporters and campaigns uphold truth and ethical
standards, dedicated to monitoring media and fighting misinformation. It uses algorithms and humans
together to fight fake news, identifying manipulated content with synthetic-media detection algorithms
combined with human analysis. Reality Defender is continuously evolving to meet the changing threats
posed by AI fakes.
SurfSafe's approach is simpler. Once installed, users can click on pictures, and the software performs
something like a reverse-image search: it looks for the same content on trusted "source" sites and flags
well-known doctored images. SurfSafe leans heavily on the expertise of established media outlets; its
reverse-image search essentially sends readers to look at other sites' coverage in the hope that they
have already spotted the fake.
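SurfSafe's exact matching method is not public, but a common technique for this kind of "have we seen this image before?" lookup is a perceptual hash. The sketch below implements a simple average hash: shrink the image, threshold each block at the mean, and compare bit strings by Hamming distance. Near-duplicates (such as recompressed copies) hash close together, while crops or screenshots can hash far apart, which is one reason such lookups miss altered variants:

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """64-bit perceptual hash of a 2-D grayscale image."""
    h, w = gray.shape
    # crude box downsample to size x size blocks
    small = gray[:h - h % size, :w - w % size]
    small = small.reshape(size, small.shape[0] // size,
                          size, small.shape[1] // size).mean(axis=(1, 3))
    # one bit per block: brighter than the image mean or not
    return (small > small.mean()).flatten()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(2)
original = rng.integers(0, 256, (64, 64)).astype(float)
noisy = original + rng.normal(0, 2, original.shape)    # mild recompression-like noise
cropped = original[16:, 16:]                           # a cropped variant

print(hamming(average_hash(original), average_hash(noisy)))    # small distance
print(hamming(average_hash(original), average_hash(cropped)))  # usually large
```

In a lookup service, the database stores hashes of known doctored images, and an incoming picture is flagged when its hash falls within a small Hamming distance of a stored one.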
Still, fakes can be tricky to nail down. In tests, for example, the SurfSafe plug-in recognized the
most widely circulated version of the Seahawks picture as a fake, but it couldn’t spot variants shared
on Facebook where the image had been cropped or was a screenshot taken from a different platform.
The plug-in fared even worse with stills from the González video, failing to identify even a number of
screenshots hosted on Snopes.
Software for creating deepfakes has traditionally required large data sets, but newer technology may
make creating deepfake videos easier. For example, through an AI lab in Russia, Samsung has developed
an AI system that can create a deepfake video from only a handful of images, or even a single photo.
Faceswap is another leading tool: free, open-source, multi-platform deepfake software. Powered by
TensorFlow, Keras, and Python, Faceswap runs on Windows, macOS, and Linux. Note that Faceswap
creates deepfakes rather than detecting them; the tools described above are among the current
technologies for detecting a deepfake.
VI. CONCLUSION
Deepfakes offer a fantastic opportunity to make a positive difference in our lives; they can give people
a voice and a sense of purpose. But deepfakes also have serious consequences: they are widely used to
create misrepresentative content, including sexually explicit images or videos of celebrities. Deepfakes
are a real threat that can shape the sentiments and perceptions of the people around us, and we need
to be aware of it.