
EXPLORING DEEP FAKES

I. INTRODUCTION
The human mind is the greatest threat we are facing now and will face in the near future. Social engineering attacks that exploit human "zero-days" with deepfakes are becoming harder and harder to discern. We are still exploring what it is about humans that makes us susceptible to being pwned, manipulated, and otherwise compromised. The main intention here is to analyse how deepfakes and AI are being weaponized, and how this will forever change defence strategies.

II. WHAT IS A DEEPFAKE?


A deepfake is any form of artificial media created by deep learning neural networks. It can come in the form of video, still images, or audio. A surge in what's known as "fake news" shows how deepfake videos can trick audiences into believing made-up stories. The term deepfake melds two words, deep and fake: it combines the concept of deep learning with something that isn't real.

Deepfakes are artificial images and sounds put together with machine-learning algorithms. A deepfake creator uses this technology to manipulate media, replacing a real person's image, voice, or both with similar artificial likenesses or voices. You can think of deepfake technology as an advanced form of photo-editing software that makes it easy to alter images, but it goes a lot further in how it manipulates visual and audio content. For instance, it can create people who don't exist, or it can make real people appear to say and do things they never said or did. As a result, deepfake technology can be used as a tool to spread misinformation.

III. WHY IS IT DANGEROUS AND WHAT ARE THE POTENTIAL THREATS?


There have been several historic events in the political arena that give us a sense of how this technology can have serious repercussions. One example is a 2018 deepfake video of former U.S. President Barack Obama talking about deepfakes. It wasn't really Obama, but it looked and sounded like him. Other deepfakes have been used around the world in elections or to foment political controversy, including these:

In Gabon, a deepfake video led to an attempted military coup in the Central African nation.

In India, a candidate used deepfake videos in different languages to criticize the incumbent and reach
different constituencies.

Deepfake political videos have become so prevalent and damaging that California outlawed them during election season, with the goal of keeping deepfakes from deceptively swaying voters. Deepfakes can bolster fake news, and they were a major security concern in the 2020 U.S. presidential election year. Deepfakes have the potential to undermine political systems. In mid-March 2022, as the Russian invasion of Ukraine crept into its third week, an unusual video started making the rounds on social media and was even broadcast on the television channel Ukraine 24 through the efforts of hackers.

The video appeared to show Ukrainian President Volodymyr Zelenskyy, stilted, his head moving while his body remained largely motionless, calling on the citizens of his country to stop fighting Russian soldiers and to surrender their weapons. He had already fled Kyiv, the video claimed.

Except, those weren't the words of the real Zelenskyy. "The video was a 'deepfake,' or content constructed using artificial intelligence. In a deepfake, individuals train computers to mimic real people to make what appears to be an authentic video. Shortly after the deepfake was broadcast, it was debunked by Zelenskyy himself, removed from prominent online sources like Facebook and YouTube, and ridiculed by Ukrainians for its poor quality," according to the Atlantic Council. However, just because the video was quickly discredited doesn't mean it didn't cause harm. In a world that is increasingly politically polarized, in which consumers of media may believe information that reinforces their biases regardless of the content's apparent legitimacy, deepfakes pose a significant threat.

At present, deepfakes are detectable by even an untrained eye, so they are not yet a significant security threat, but the technology is always improving. At the moment, the greatest worry is their use by state-sponsored actors that have the resources to create the most convincing possible videos. The real threat begins when anyone with a modern computer can create highly realistic manipulated videos at the push of a button. Countermeasures are already being developed in anticipation of this future state of affairs. For example, DARPA believes that these videos can develop into a serious national security problem and is developing new video forensics tools to detect them.

Deepfakes disproportionately affect public figures, because a large collection of shots of their faces from various angles is needed to create the facial models that are swapped into the fake video. Should flawless deepfakes become common, it is likely there will be a rise in pre-release authentication of content created by these figures with some sort of digital watermark. That would not address every type of deepfake, but it could be an effective countermeasure against the fabrication of false statements by political figures and celebrities. There is also some concern about targeted phishing attacks that use deepfakes to gain access to business networks: management figures may have enough video and audio data in circulation to serve as the base for a deepfake convincing enough to deceive their own company.
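One way such pre-release authentication could work is cryptographic signing of the released content. The following is only a toy sketch of that idea, not a description of any deployed watermarking system; the key and file names are hypothetical, and a real scheme would use public-key signatures so that anyone, not just the key holder, could verify the content.

    import hashlib
    import hmac

    SECRET_KEY = b"creators-signing-key"  # hypothetical key held by the publisher

    def sign_content(path):
        # Fingerprint the exact bytes of the released video or image file.
        digest = hashlib.sha256(open(path, "rb").read()).digest()
        return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

    def verify_content(path, published_tag):
        # Any manipulated or re-encoded copy produces a different tag.
        return hmac.compare_digest(sign_content(path), published_tag)

    # The creator publishes sign_content("statement.mp4") alongside the
    # video; "statement.mp4" is a placeholder file name.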

IV. HOW ARE DEEP FAKES MADE?

Deepfakes can be created in more than one way:

Generative Adversarial Networks (GANs) belong to the family of generative models, meaning they are able to generate artificial content based on arbitrary input. Strictly speaking, "GAN" most often refers to the training method rather than to the generative model itself. The reason is that GANs don't train a single network, but two networks simultaneously.

The first network is usually called the Generator and the second the Discriminator. The purpose of the Generator is to produce images that look real; during training, it progressively becomes better at creating them. The purpose of the Discriminator is to learn to tell real images apart from fakes; during training, it progressively becomes better at doing so. The process reaches equilibrium when the Discriminator can no longer distinguish real images from fakes.

A GAN used for face generation follows these steps:

1. The Generator takes an array of random numbers of size equal to the seed size and generates an image.

2. The generated images, together with an equal number of real face images (used for training), are passed to the Discriminator for classification as real or fake.

3. The Discriminator passes its result back to the Generator as a loss, so the Generator becomes more accurate at producing fake images.

4. This process iterates for a desired number of steps, so the Generator keeps improving its output so as to fool the Discriminator.

The Discriminator and the Generator do not run simultaneously; they are updated in alternating steps, as the sketch below illustrates.
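To make the loop concrete, here is a minimal sketch of one alternating GAN training step in TensorFlow/Keras (the same stack the article later names for Faceswap). The network sizes, the 28x28 grayscale image shape, and the seed size of 100 are illustrative assumptions, not details taken from this article.

    import tensorflow as tf
    from tensorflow.keras import layers

    SEED_SIZE = 100  # length of the random array fed to the Generator (step 1)

    # Generator: maps a random latent vector to a fake image.
    generator = tf.keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(SEED_SIZE,)),
        layers.Dense(28 * 28, activation="sigmoid"),
        layers.Reshape((28, 28, 1)),
    ])

    # Discriminator: scores an image; a high logit means "looks real".
    discriminator = tf.keras.Sequential([
        layers.Flatten(input_shape=(28, 28, 1)),
        layers.Dense(128, activation="relu"),
        layers.Dense(1),
    ])

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)

    @tf.function
    def train_step(real_images):
        noise = tf.random.normal([tf.shape(real_images)[0], SEED_SIZE])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fakes = generator(noise, training=True)                  # step 1
            real_logits = discriminator(real_images, training=True)  # step 2
            fake_logits = discriminator(fakes, training=True)
            # Discriminator learns: real images -> 1, generated images -> 0.
            d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                      bce(tf.zeros_like(fake_logits), fake_logits))
            # Generator learns to make the Discriminator answer "real" (step 3).
            g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        # Each network gets its own separate update; repeated many times (step 4).
        d_opt.apply_gradients(zip(
            d_tape.gradient(d_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
        g_opt.apply_gradients(zip(
            g_tape.gradient(g_loss, generator.trainable_variables),
            generator.trainable_variables))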

Simple Autoencoder: A simple or "vanilla" autoencoder consists of two neural networks, an Encoder and a Decoder. The Encoder is responsible for converting an image into a compact lower-dimensional vector (a latent vector), which is a compressed representation of the image. The Encoder therefore maps an input from the higher-dimensional input space to the lower-dimensional latent space, much as a CNN classifier does. In a CNN classifier, this latent vector would subsequently be fed into a softmax layer to compute individual class probabilities; in an autoencoder, it is fed into the Decoder instead. The Decoder is a separate neural network that tries to reconstruct the image, mapping from the lower-dimensional latent space back to the higher-dimensional output space.
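As a concrete illustration, here is a minimal vanilla autoencoder sketch in Keras; the 28x28 input shape and the 64-dimensional latent vector are illustrative assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers

    LATENT_DIM = 64  # size of the compressed latent vector

    # Encoder: higher-dimensional image -> lower-dimensional latent vector.
    encoder = tf.keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(LATENT_DIM, activation="relu"),
    ])

    # Decoder: latent vector -> reconstruction with the original shape.
    decoder = tf.keras.Sequential([
        layers.Dense(28 * 28, activation="sigmoid"),
        layers.Reshape((28, 28)),
    ])

    # Chained end to end, the model is trained to reproduce its input; the
    # reconstruction loss forces the latent vector to summarize the image.
    autoencoder = tf.keras.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(images, images, epochs=10)  # the input is also the target

Face-swapping tools built on autoencoders typically train one shared Encoder with two Decoders, one per identity, and swap Decoders at generation time.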

These are a few of the techniques currently used in the process, and as the years go by there will be many more to come.

V. HOW TO DETECT A DEEP FAKE?


There are certain telltale characteristics that even a layperson can use to spot a deepfake video:

Unnatural eye movement
A lack of blinking
Unnatural facial expressions
Facial morphing (a simple stitch of one image over another)
Unnatural body shape
Unnatural hair
Abnormal skin colors
Awkward head and body positioning
Inconsistent head positions
Odd lighting or discoloration
Bad lip-syncing
Robotic-sounding voices
Digital background noise
Blurry or misaligned visuals
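One of these cues, the lack of blinking, can even be checked numerically. The sketch below shows the Eye Aspect Ratio (EAR) heuristic commonly used for blink detection; the six eye landmark points would come from a separate face-landmark detector, which is assumed here rather than shown.

    import math

    def eye_aspect_ratio(eye):
        # eye: six (x, y) landmark points p1..p6 around one eye, ordered
        # corner, upper lid (two points), corner, lower lid (two points).
        p1, p2, p3, p4, p5, p6 = eye
        dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
        # Two vertical openings divided by the horizontal width; the ratio
        # collapses toward zero whenever the eye closes in a blink.
        return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

    # A clip whose EAR never dips below roughly 0.2 over many seconds of
    # footage contains no blinks, which is one (weak) deepfake indicator.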

Apart from these, when a fake is too professional to spot by eye, there are software tools that can identify it.

Reality Defender is a user-friendly, no-code platform that lets companies scan for fake content within media (audio, video, and images). Its API and web app provide real-time scanning, deepfake scoring, and PDF report cards. Reality Defender promises to do this (its browser plug-in has yet to launch fully) in a technologically advanced manner, using machine learning to verify whether or not an image has been tampered with. Both plug-ins (Reality Defender and SurfSafe, discussed below) also encourage users to help out with this process by identifying pictures that have been manipulated or constitute so-called "propaganda." The AI Foundation established this technology to detect synthetic media, sometimes known as deepfakes.

It is a non-partisan, non-commercial effort to help reporters and campaigns uphold truth and ethical standards, dedicated to monitoring media and fighting misinformation. It uses algorithms and humans together to fight fake news, identifying manipulated content with synthetic-media detection algorithms combined with human analysis. Reality Defender is continuously evolving to meet the changing threats posed by AI fakes.

SurfSafe's approach is simpler. Once installed, users can click on pictures, and the software will perform something like a reverse-image search, looking for the same content on trusted "source" sites and flagging well-known doctored images. SurfSafe leans heavily on the expertise of established media outlets: its reverse-image search essentially sends readers to look at other sites' coverage in the hope that they have spotted the fake.
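To illustrate the general idea of matching a picture against a set of known fakes (this is not SurfSafe's actual implementation), here is a toy sketch using a simple "average hash" fingerprint; the file names are hypothetical, and the Pillow library is assumed.

    from PIL import Image  # assumes the Pillow library is installed

    def average_hash(path, size=8):
        # Shrink to an 8x8 grayscale thumbnail and threshold on the mean,
        # yielding a 64-bit fingerprint that survives mild recompression.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return "".join("1" if p > mean else "0" for p in pixels)

    def hamming(a, b):
        # Number of differing bits between two fingerprints.
        return sum(x != y for x, y in zip(a, b))

    # Fingerprints of known doctored images, computed ahead of time;
    # "seahawks_doctored.jpg" is a placeholder file name.
    known_fakes = {"seahawks": average_hash("seahawks_doctored.jpg")}

    def looks_like_known_fake(path, threshold=5):
        h = average_hash(path)
        return any(hamming(h, f) <= threshold for f in known_fakes.values())

A fingerprint like this tolerates recompression but not cropping or re-screenshotting, which is consistent with the failure cases described next.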

Still, fakes can be tricky to nail down. In one set of tests, for example, the SurfSafe plug-in recognized the most widely circulated version of the Seahawks picture as a fake, but it couldn't spot variants shared on Facebook where the image had been cropped or was a screenshot taken from a different platform. The plug-in was even worse at identifying stills from the González video, failing to identify a number of screenshots hosted on Snopes.

Software for creating deepfakes has traditionally required large data sets, but new technology may make creating deepfake videos easier. For example, through an AI lab in Russia, Samsung has developed an AI system that can create a deepfake video from only a handful of images, or even a single photo.

Faceswap, by contrast, is a leading free and open-source multi-platform application for creating deepfakes. Powered by TensorFlow, Keras, and Python, Faceswap runs on Windows, macOS, and Linux. These are a few of the technologies on both sides of the deepfake arms race, some for creating fakes and some for detecting them.

VI. CONCLUSION
Deepfakes offer a fantastic opportunity to make a positive difference in our lives; they can give people a voice and a sense of purpose. But there are also serious consequences: deepfakes are widely used to create misrepresentative content, including sexually explicit images or videos of celebrities. Deepfakes are a real threat that can shape the sentiments and perceptions of the people around us, and we need to be aware of them.
