Deepfake AI Market Analysis (Updated)


Deepfake AI: Asset or Adversary

In theory, deepfake technology can make it appear as though anyone is saying or
doing anything, which is why it is so controversial.

Celebrities and political figures are the most “deepfaked” – with all that implies.

The technology is legal, but some uses are forbidden by relatively new
regulations.

We’ve all heard about deepfakes. They’ve made Elon Musk sing a Soviet space
song and turned Barack Obama into Black Panther, among other parodies and
memes.

They’ve also been used to commit crimes and been the subject of several
controversies when their creators tried to make them pass as legitimate.

For example, during the 2020 US presidential campaign, several deepfake
videos circulated showing Joe Biden falling asleep, getting lost, and
misspeaking. These videos were aimed at bolstering rumours that he was in
cognitive decline due to his age.

What is a Deepfake?

The term “deepfake” is a portmanteau of “deep learning” and “fake”.

Deep learning is a type of machine learning based on artificial neural networks,
which are inspired by the human brain. The method is used to teach machines
how to learn from large amounts of data via multi-layered structures of
algorithms.
Deepfakes usually employ a deep-learning network called a variational
autoencoder, a type of artificial neural network that is commonly used for facial
recognition.

Autoencoders encode and compress input data, reducing it to a lower-dimensional
latent space, and then reconstruct it to deliver output data based on the latent
representation. In the case of deepfakes, the autoencoders are used to detect
facial features, suppressing visual noise and “non-face” elements in the process.
The latent representation contains the essential facial data that the autoencoder
uses to build a more versatile model enabling the “face swap”, leaning on
common features.
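The encode–compress–reconstruct pipeline can be sketched in a few lines. This is a toy illustration with random, untrained weights (the dimensions and the numpy setup are assumptions for illustration, not any production system); it only shows how an input is squeezed into a low-dimensional latent code and expanded back out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face image": a flattened 32x32 grayscale patch (1024 values).
x = rng.random(1024)

# Encoder: project the input down to a 16-dimensional latent space.
W_enc = rng.standard_normal((16, 1024)) * 0.01
latent = np.tanh(W_enc @ x)          # compact latent representation

# Decoder: reconstruct the full-resolution output from the latent code.
W_dec = rng.standard_normal((1024, 16)) * 0.01
reconstruction = W_dec @ latent

print(latent.shape)          # (16,)   -- compressed representation
print(reconstruction.shape)  # (1024,) -- same size as the input
```

A real deepfake pipeline trains a shared encoder with two decoders, one per face, and swaps decoders at inference time, but the shape of the data flow is the same.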

To make the results more realistic, deepfakes also use Generative Adversarial
Networks (GANs).

GANs train a “generator” to create new images from the latent representation of
the source image, and a “discriminator” to evaluate the realism of the generated
materials.

If the generator’s image does not pass the discriminator’s test, the generator is
pushed to produce new images until one “fools” the discriminator.
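That generator-versus-discriminator loop can be sketched as follows. Everything here is a stand-in: the “discriminator” is a fixed scoring heuristic rather than a trained network, and the dimensions and threshold are arbitrary assumptions, so this only illustrates the keep-trying-until-it-fools-the-judge control flow, not a real GAN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "generator": maps a latent code z to an image-like vector.
def generator(z, w):
    return np.tanh(w @ z)

# Stand-in "discriminator": scores realism in [0, 1]. A trained network
# would go here; this heuristic exists only to drive the loop.
def discriminator(img):
    return 1.0 / (1.0 + np.exp(-img.mean()))

w = rng.standard_normal((64, 8)) * 0.1
threshold = 0.5

# Adversarial refinement loop: keep proposing images until one
# "fools" the discriminator (score above the threshold).
for attempt in range(1, 101):
    z = rng.standard_normal(8)
    score = discriminator(generator(z, w))
    if score > threshold:
        break
```

In an actual GAN both networks are updated by gradient descent each round, so the generator improves rather than merely resampling, but the adversarial structure is the one shown.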

How long does it take to make a deepfake?

The process of making a deepfake might sound complicated, but pretty much
anyone can create one, as there are many tools available to do so, and not much
knowledge is required to use them. The complexity of the deepfake is also a
determining factor. High-quality deepfakes are usually made on powerful
computers that can render projects faster, but complex deepfake videos can
still take hours to render, while a simple face swap can be done in 30 minutes.
The simplest deepfakes can even be created in a few seconds using deepfake
apps for smartphones.

Who Invented Deepfake technology?


No single person can be credited for inventing deepfake technology as it is based
on several previous technologies, such as artificial neural networks (ANNs) and
artificial intelligence (AI).

In general, the development of this type of synthetic media can be traced back to
the 1990s. But deepfake technology as we know it today often relies on GANs,
and GANs didn’t exist until 2014 when they were invented by computer scientist
Ian Goodfellow.

How do you detect Deepfakes?

Some deepfakes are easier to detect than others. It all depends on the quality
and complexity of the falsified material. Moderately trained, or even untrained,
people can detect lower-quality deepfakes with the naked eye, by taking into
account subtle details.

Some deepfakes use filters that make the false faces look blurrier in some
areas. Others have slight inconsistencies in symmetry, color, lighting, sharpness,
or texture. Some deepfake videos might shimmer or flicker because these
inconsistencies “accumulate” from frame to frame.
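One crude way to quantify that shimmer is to measure how much pixel values change between consecutive frames. The sketch below uses synthetic 8x8 “frames” with invented noise levels, purely for illustration: a stable clip scores far lower than a re-randomized, flickering one.

```python
import numpy as np

rng = np.random.default_rng(2)

def flicker_score(frames):
    """Mean absolute pixel change between consecutive frames.

    High values can flag the frame-to-frame shimmer that accumulated
    deepfake inconsistencies sometimes produce.
    """
    frames = np.asarray(frames, dtype=float)
    return float(np.abs(np.diff(frames, axis=0)).mean())

# A stable clip: the same 8x8 "frame" repeated with tiny sensor noise.
base = rng.random((8, 8))
stable = [base + rng.normal(0, 0.001, (8, 8)) for _ in range(10)]

# A shimmering clip: each frame re-randomized, as if textures never settle.
shimmer = [rng.random((8, 8)) for _ in range(10)]

print(flicker_score(stable) < flicker_score(shimmer))  # True
```

Real detectors use far richer temporal features, but simple statistics like this are a common first screening pass.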

“What makes the deepfake research area more challenging is the competition
between the creation and the detection and prevention of deepfakes, which will
become increasingly fierce in the future,” says Amit Roy-Chowdhury, a
professor of electrical and computer engineering and head of the Video
Computing Group at UC Riverside. Roy-Chowdhury helped create the
Expression Manipulation Detection (EMD) method, a system that spots
specific areas within an image that have been altered. He adds that, “With
more advances in generative models, deepfakes will be easier to synthesize
and harder to distinguish from real.”

Advantages of using deepfakes for business

Influencer campaign

Marketers only need a bank of digital footage of an influencer instead of
requiring them to shoot video for hours. Artificial intelligence and machine
learning do the rest.

Alternatively, brands could use historical influencers, such as Marilyn Monroe:
since there is a plethora of video and voice recordings of them, marketers can
use their likeness in a deepfake to boost a campaign.

Experiential Campaigns
To stand out against the crowd, brands can use deepfakes to immerse the
consumer into a shopping experience. For example, ecommerce stores can
superimpose a shopper’s face onto a model’s body to see how the clothes look.

Nostalgic Ad campaigns

State Farm created one famous example of deepfake technology.

The insurance company created an ad for the series The Last Dance by
superimposing 1998 SportsCenter footage to make it look like Kenny Mayne
had predicted the documentary. This deepfake was made purely for
entertainment and created nostalgia for viewers who remember that iconic era
of the Chicago Bulls basketball team.

Product Demos

Product demos could become experiential for customers. Instead of using the
same b-roll for all clients, marketers could create personalized demos that show
the actual client using their product. It can’t get more personal than that, can it?

This technology is here to stay, and it will continue to evolve.

In the digital marketing space, deepfake technology has both advantages and
disadvantages.

While there are ethical implications, deepfake videos allow brands to stretch
their marketing budgets and reach new audiences.
As long as marketers avoid making campaigns with malicious intent, deepfakes
can help both the brand and the consumer by creating a more personalized,
immersive experience.
How does video intelligence need to improve to detect
deepfakes?

Deepfake detection is normally framed as a binary classification problem in
which classifiers are trained to distinguish authentic videos from tampered
ones. This kind of method requires a large database of real and fake videos to
train classification models. Detecting deepfakes, which are realistic synthetic
media created using AI, requires advancements in video intelligence. Here's a
detailed explanation of how video intelligence can be improved to detect
deepfakes:
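As a minimal illustration of that binary-classification framing, the sketch below trains a plain logistic-regression classifier on made-up two-dimensional feature vectors (a hypothetical “blur score” and “symmetry error”, both invented here). Real systems feed deep networks raw frames, but the authentic-versus-tampered setup is the same:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-video features: [blur score, symmetry error].
# Invented toy data: authentic videos cluster low, tampered ones high.
real = rng.normal(0.2, 0.05, (50, 2))
fake = rng.normal(0.8, 0.05, (50, 2))
X = np.vstack([real, fake])
y = np.array([0] * 50 + [1] * 50)  # 0 = authentic, 1 = tampered

# Logistic-regression classifier trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(tampered)
    w -= 1.0 * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= 1.0 * (p - y).mean()                 # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

On such cleanly separated toy clusters the classifier reaches essentially perfect training accuracy; the hard part in practice is building features (or learned representations) that separate real from fake this well.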

Increased complexity of deepfake generation: - As deepfake technology becomes
more advanced, so must the methods used to detect it. Video intelligence
algorithms need to be able to recognize increasingly subtle visual cues that
differentiate deepfakes from authentic videos.

Use of machine learning and AI: - Just as AI is used to create deepfakes, it can
also be used to detect them. Machine learning algorithms can be trained on large
datasets of both authentic and deepfake videos to learn patterns that distinguish
between the two.

Focus on facial and body movements: - Deepfakes often fail to accurately
replicate natural facial expressions and body movements. Video intelligence
algorithms can be designed to analyse these movements for inconsistencies that
may indicate a deepfake.

Attention to audio-visual discrepancies: - In addition to visual cues, audio can
also help detect deepfakes. Algorithms can analyse lip movements and speech
patterns to determine if they match the audio track.

Detection of manipulation artifacts: - Deepfake videos may contain artifacts that
are not present in authentic videos. Video intelligence algorithms can be
trained to identify these artifacts, such as unnatural blurring or distortion.

Real time detection: - As deepfakes become more prevalent, there is a need for
real-time detection methods that can quickly identify deepfakes as they are
being created or uploaded.

Collaboration and Research: - Collaboration between researchers, industry
experts, and organizations can help advance the field of deepfake detection.
Sharing knowledge and resources can lead to more effective detection methods.

Regulatory and Legal frameworks: - Establishing regulatory and legal frameworks
for deepfake detection and prevention can help incentivize the development of
better video intelligence technologies.
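The audio-visual-discrepancy idea above can be illustrated with a simple cross-correlation: if a mouth-movement signal and the audio-energy track are genuinely in sync, their correlation peaks near zero lag, while dubbed or generated footage may peak elsewhere. The signals below are synthetic stand-ins, not real lip-tracking output:

```python
import numpy as np

rng = np.random.default_rng(4)

def av_offset(lip_signal, audio_signal):
    """Estimate the lag (in frames) between mouth movement and audio energy.

    Cross-correlates the two zero-mean signals; genuine footage should
    peak near zero lag, while a mismatched track may not.
    """
    a = lip_signal - lip_signal.mean()
    b = audio_signal - audio_signal.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr) - (len(b) - 1))

audio = rng.random(200)              # per-frame audio energy (synthetic)
lips_synced = audio.copy()           # mouth tracks audio exactly
lips_shifted = np.roll(audio, 5)     # mouth lags audio by 5 frames

print(av_offset(lips_synced, audio))   # 0
print(av_offset(lips_shifted, audio))  # 5
```

Production lip-sync detectors learn joint audio-visual embeddings rather than correlating raw energy, but the underlying question, "do these two streams line up in time?", is the same.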

How does frame rate play a role in detecting
deepfakes?

Frame rate can be helpful in detecting deepfakes due to its impact on the
temporal consistency of a video. Here's how frame rate can play a role in
detecting deepfakes:

Temporal inconsistencies: - Deepfake algorithms may struggle to maintain
temporal consistency, especially when the frame rate of the generated video
differs from that of the original footage. An analysis of frame rates can help
detect such inconsistencies.
Artifacts and Blur: - Lower frame rates can lead to more noticeable artifacts and
blur in deepfake videos, particularly during fast motion or rapid changes in the
scene. These artifacts can be indicators of a deepfake.

Motion Blur: - Natural videos often exhibit motion blur during fast movements,
which can be challenging for deepfake algorithms to replicate accurately.
Analysing the motion blur in a video can help determine its authenticity.

Frame Duplication: - In some cases, deepfake videos may use frame duplication
to reduce the computational load. Analysing the pattern of frame duplication can
reveal the presence of a deepfake.

Audio-Visual synchronization: - Frame rate can also affect the synchronization
between audio and video. Inaccurate synchronization, such as mismatched lip
movements or unnatural pauses, can indicate a deepfake.
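The frame-duplication check described above is straightforward to sketch: hash each frame's bytes and count exact repeats, which are cheap for a generator to emit but rare in noisy camera footage. The "clips" here are synthetic arrays, purely illustrative:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(5)

def duplicated_frame_ratio(frames):
    """Fraction of frames that are exact repeats of an earlier frame.

    Exact duplication reduces a generator's computational load, but sensor
    noise makes byte-identical frames rare in genuine camera footage.
    """
    seen, dupes = set(), 0
    for f in frames:
        digest = hashlib.sha256(np.ascontiguousarray(f).tobytes()).hexdigest()
        if digest in seen:
            dupes += 1
        seen.add(digest)
    return dupes / len(frames)

# Camera-like clip: every frame differs at least slightly.
camera = [rng.random((4, 4)) for _ in range(20)]

# Suspect clip: every second frame is a byte-identical copy of the previous.
suspect, frame = [], None
for i in range(20):
    if i % 2 == 0:
        frame = rng.random((4, 4))
    suspect.append(frame)

print(duplicated_frame_ratio(camera))   # 0.0
print(duplicated_frame_ratio(suspect))  # 0.5
```

Hashing only catches exact duplicates; near-duplicate detection would instead threshold a frame-difference metric like the flicker score discussed earlier.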

Deepfake Market: -

The market for deepfake creation and detection is a rapidly evolving space
driven by advancements in AI, media, and cyber security. Deepfakes are
synthetic media created using deep learning techniques to manipulate or
generate content, often replacing someone's likeness in videos or images with
another person's. As the technology matures, the market for both creating and
detecting deepfakes is expected to grow significantly.

Deepfake Creation Market: -

The deepfake creation market is driven by various factors, including
entertainment, marketing, and malicious purposes.

Entertainment and Media: - In the entertainment industry, deepfake technology
offers new possibilities for filmmakers and content creators. It allows for the
resurrection of deceased actors, the creation of realistic visual effects, and the
adaptation of stories in innovative ways. Studios and production companies are
increasingly exploring the use of deepfake technology to enhance storytelling
and create visually stunning effects at a fraction of the cost of traditional
methods.

Marketing and Advertising: - In marketing and advertising, deepfake technology
offers the potential to create highly personalized and engaging content. Brands
can use deepfakes to create interactive experiences, celebrity endorsements,
and targeted advertisements. The ability to create hyper-realistic content that
resonates with consumers can drive brand engagement and increase sales.

Malicious Use: - Despite its potential for positive applications, deepfake
technology also poses significant risks, particularly in terms of misinformation
and fraud. Malicious actors can use deepfakes to create convincing fake videos
or images for political manipulation, spreading false information, or committing
fraud. This aspect of the market drives the need for robust detection and
mitigation solutions.
Deepfake Detection Market: -

The deepfake detection market is driven by the need to combat the negative
impacts of deepfake technology. Detecting deepfakes requires advanced AI
algorithms capable of distinguishing between real and manipulated content.

Social Media and content platforms: - Social media platforms and content-
sharing websites are under pressure to detect and remove deepfake content to
prevent the spread of misinformation and protect their users. These platforms
are increasingly investing in AI-based detection systems to identify and remove
deepfakes from their platforms.
News and Journalism: - The rise of deepfake technology poses a significant
challenge to news organizations and journalists. Detecting deepfakes in news
content is crucial to maintaining trust and credibility with audiences. News
organizations are investing in AI-powered tools to verify the authenticity of
media content and detect deepfakes.

Cyber security and Fraud Prevention: - In addition to social and political
implications, deepfake technology also presents challenges in terms of cyber
security and fraud prevention. Deepfakes can be used to impersonate individuals
or manipulate audio-visual evidence in legal proceedings. Cybersecurity
companies are developing tools to detect deepfakes and prevent their malicious
use.
Market Size: - The global deepfake market is expected to experience substantial
growth in the coming years. According to a report by Grand View Research, the
market size was valued at USD 1.4 billion in 2020 and is projected to reach USD
39.6 billion by 2028, growing at a CAGR of 43.8% from 2021 to 2028. This
significant growth is attributed to several factors, including advancements in AI
and machine learning technologies, increasing digitalization, and the growing
adoption of deepfake technology across various industries.

[Figure: Share of consumers who say they could detect a deepfake video
worldwide, as of 2023]
Companies working on deepfake detection: - Several companies are actively
working on deepfake detection technologies, offering a range of solutions for
different applications. These companies use various approaches, including AI
algorithms, machine learning models, and deep neural networks, to detect and
mitigate the impact of deepfake content. Here are some notable companies in
the deepfake detection space:
 Deeptrace: - Deeptrace, which later rebranded as Sensity, developed
technology to detect deepfakes and manipulated media, focusing on
solutions for social media platforms, newsrooms, and video verification
services.
 Sensity: - Sensity focuses on detecting and mitigating deepfake content
on social media platforms and the wider internet, using AI and machine
learning algorithms to identify deepfake videos and images.
 Truepic: - Truepic offers solutions for verifying the authenticity of photos
and videos. Their technology is used in various industries, including
insurance, banking, and media, to detect deepfakes and manipulated
media.
 Quantiphi: - Quantiphi provides AI-powered solutions for detecting
deepfakes and manipulated media. They offer a range of services,
including video and image analysis, to help businesses identify and
mitigate the impact of deepfake content.
 CyberNETIQ: - CyberNETIQ offers deepfake detection solutions for cyber
security and fraud prevention. Their technology uses AI algorithms to
analyse videos and images for signs of manipulation.

These companies typically charge for their services based on factors such as the
volume of content processed, the complexity of the analysis required, and the
level of customization needed.

Pricing models can vary, but they often include subscription-based plans or pay-
per-use models. Some companies may also offer customized solutions for
specific industries or use cases, which may be priced differently.
OARO – Unfakable Records

Deepfakes refer to fake videos and images that look very realistic and are
produced using artificial intelligence (AI) based algorithms. In recent times, the
phenomenon of deepfakes has gained popularity through memes and other
entertaining media. However, the potential for misuse is quite high, prompting
various companies to find solutions to counter such fake media. Particularly, the
insurance sector requires solutions to detect deepfakes and ensure the
authenticity of the claims.

Spanish startup OARO offers companies various tools to authenticate and verify
digital identity, compliance, and media. OARO Media creates an immutable data
trail that allows businesses, governing bodies, and individual users to
authenticate any photo or video. The startup’s mobile application further
generates reliable photos and videos embedded with records of user identity,
content, timestamp, and global positioning system (GPS) coordinates. For
example, the solution allows insurance companies to reduce operating costs.
Sentinel – Tackling Information Warfare

One reason for the sudden increase in deepfake content is the reduction in the
time and cost involved in creating it. Deepfakes usually take just a day to create
with the right resources, and they are increasingly difficult to spot with the
naked eye. This, in turn, drives emerging start-ups to work on solutions to tackle
deepfakes using AI techniques.

Sentinel is a startup from Estonia that helps governments and media companies
defend against fake media content. The startup automates the authentication of
digital media by checking if any media is generated by artificial intelligence. The
startup’s detection model is based on the Defence in Depth (DiD) approach. This
model utilizes a multi-layer defence comprising a large database of deepfakes
and neural network classifiers to accurately detect deepfakes.


Quantum Integrity – AI-Powered Deepfake Detection

The news and media
industry is also facing increased public scrutiny as people find it difficult to trust
news sources. Battling fake news and disinformation is essential to conduct
democratic discourse. Moreover, disinformation campaigns often divide
populations into broad partisan categories, making it difficult to have proper
discussions. Since fake media is increasingly being generated using AI, startups
are also responding with AI-based fake content detection solutions.
Swiss startup Quantum Integrity utilizes its patented deep learning technology
to detect deepfake image and video forgery. The startup’s algorithms are
customizable for various use cases and detect fakes made using different
software and methods. This solution enables companies to save time and
money, reduce fraud, and also enables streamlined decision-making. Some
applications include accident reporting, forged documents, and fake videos.

Group Cyber ID (GCID) – Digital Media Forensics

The police and judicial processes usually require authentication of images,
videos, and audio recordings presented as evidence in court. However, digital
forensics currently takes a
long time to complete analysis. With the increasing amount of digital media as
evidence, there is a pressing need for faster mechanisms to detect fake content.
To this end, AI startups are working on solutions to help the justice system better
process and verify digital content.

Group Cyber ID is an Indian startup that offers digital forensic solutions for
images, videos, and audio, among other digital components. The startup
administers a multi-layer security strategy of technologies, procedures &
practices in digital forensics, e-discovery, and cyber defense. The solution
authenticates the digital composition of an organization and identifies the
vulnerabilities and risks in internal and external data. The solution further
secures networks, devices, programs & data from a range of malicious attacks.

When an AI-generated image of an explosion outside the Pentagon proliferated
on social media earlier this week, it provided a brief preview of the digital
information disaster AI researchers have warned against for years. The image
was clearly fabricated, but that didn’t stop several prominent accounts, like
Russian state-controlled RT and Bloomberg news impersonator
@BloombergFeed, from running with it. Local police reportedly received frantic
communications from people believing another 9/11-style attack was underway.
The ensuing chaos sent a brief shockwave through the stock market.

The deepfaked Pentagon fiasco resolved itself in a few hours, but it could have
been much worse. Earlier this year, computer scientist Geoffrey Hinton, referred
to by some as the “Godfather of AI,” said he was concerned the increasingly
convincing quality of AI-generated images could lead the average person to “not
be able to know what is true anymore.”

Startups and established AI firms alike are racing to develop new AI deepfake
detection tools to prevent that reality from happening. Some of these efforts
have been underway for years, but the sudden explosion of generative AI into
the mainstream consciousness, driven by OpenAI’s DALL-E and ChatGPT, has led
to an increased sense of urgency, and larger amounts of investment, to develop
some way to easily detect AI falsehoods.

Companies racing to find detection solutions are doing so across all levels of
content. Some, like startup Optic and Intel’s FakeCatcher, are focusing on
sussing out AI involvement in audio and videos, while others, like Fictitious.AI,
are focusing their efforts more squarely on text generated by AI chatbots. In
some cases, these current detection systems seem to perform well, but tech
safety experts like former Google Trust and Safety lead Arjun Narayan fear the
tools are still playing catch-up.

“This is where the trust and safety industry needs to catch up on how we detect
synthetic media versus non-synthetic media,” Narayan told Gizmodo in an
interview. “I think detection technology will probably catch up as AI advances but
this is an area that requires more investment and more exploration.”

Here are some of the companies leading the race to detect deepfakes.
Future studies: -

Future studies should investigate alternative interventions to enhance deepfake
detection. For example, a video presentation on artefact detection with
examples may be a more engaging detection intervention than simply providing
written detection tips. We also suggest including a training element in which
participants are given immediate feedback after a set of practice videos before
doing a detection activity. Researchers should also consider ways in which they
can modify future deepfake studies to increase their ecological validity (for
example, by embedding videos among other content, as would be typical of a
website) to determine detection accuracy under more realistic conditions.
Finally, replication attempts of the current study should a) make attempts to
avoid participants sharing detection tips in comments sections (if recruiting via
social media), b) bring the measurement of per-video confidence and overall
confidence into alignment (to identify if differences between these variables
simply reflect response biases), and c) consider investigating additional
individual difference variables, such as race and prior exposure to deepfakes.

CONCLUSIONS: -

Our study demonstrates that the public's ability to detect deepfakes is
generally poor (although above chance levels), even in the idealized situation in
which individuals are explicitly informed that they will be presented with
deepfakes. The findings cast doubt on whether simply providing the public with
strategies for detecting deepfakes based on the observation of visual artifacts
can meaningfully improve detection, given the lack of an effect for the
experimental intervention. Worryingly, it appears that individuals may be
overly optimistic regarding their abilities to ascertain the authenticity of
individual videos. However, individuals appear to have a more realistic
understanding of their detection abilities in the long run.
