
BUFFETT BRIEF • JULY 2023

The Rise of Artificial Intelligence and Deepfakes
AI-generated deepfake media is a growing threat to international security, yet deepfakes may also hold promise for counterterrorism. Through smart policies, public awareness campaigns and technical countermeasures, the threat of deepfakes may be mitigated and their promise harnessed responsibly.

DEEPFAKES THREATEN INTERNATIONAL SECURITY, HOLD POTENTIAL FOR ANTI-TERRORISM

The advancement of artificial intelligence (AI) is a growing concern for the international community, governments and the public, with significant implications for national security and cybersecurity. It also raises ethical questions related to surveillance and transparency. In a world rife with misinformation and mistrust, AI provides ever-more sophisticated means of convincing people of the veracity of false information, with the potential to lead to greater political tension, violence or even war.

Deepfakes—media content created by AI technologies and generally meant to be deceptive—are a particularly significant and growing tool for misinformation and digital impersonation. Deepfakes are generated by machine-learning algorithms combined with facial-mapping software that can insert a person's likeness into digital content without permission. When execution is excellent, the result can be an extremely believable—but totally fabricated—text, video or audio clip of a person doing or saying something that they did not.

Researchers have identified several possible use cases of deepfake technology with ramifications for the security sector, including the falsification of military orders to sow confusion among rank-and-file soldiers, the discrediting of political leaders and the exploitation of national tensions to promote polarization and discord. In June 2019, for example, a controversial video began circulating in Malaysia. The video appeared to depict Malaysia's Economic Affairs Minister engaged in physical relations with an aide. Despite the aide's attestation to the authenticity of the video, the minister and his political allies claimed the video had been doctored using deepfake technology, demonstrating the nefarious political potential of deepfake technology as a new form of political slander—or as a new mechanism for obscuring truth.

One year later, in the spring of 2020, Extinction Rebellion Belgium released a fabricated video of then Belgian prime minister Sophie Wilmès appearing to connect the spread of COVID-19 to uncontrolled ecological crises. The group used footage taken from her recent address to the nation about the pandemic and generated a similar, fake speech with a script written by Extinction Rebellion. While transparently a deepfake, this instance demonstrates how bad actors could use deepfake technology to promote the spread of misinformation.

Then, in March 2022, shortly after Russia began its invasion of Ukraine, the Ukrainian public was surprised to see a video of their
president, Volodymyr Zelenskyy, urging the military to lay down their weapons and surrender to the invading forces. As the video spread on social media and gained traction in the news, Zelenskyy's office quickly disavowed its authenticity. Indeed, the video had been generated using deepfake technology by Russian propagandists—the first high-profile example of a deepfake being weaponized in an armed conflict.

RESEARCHERS INVESTIGATE SOLUTIONS AND GUARDRAILS FOR CYBERDECEPTION

The Northwestern Security and AI Lab (NSAIL)—jointly housed by Northwestern University's Roberta Buffett Institute for Global Affairs and McCormick School of Engineering—is dedicated to performing fundamental research on AI technology, including deepfakes, that is relevant to cybersecurity and international security, as well as to the question of when the use of AI or deepfakes is ethically or politically warranted.

NSAIL was launched in October 2022 as a partnership between the Northwestern Buffett Institute for Global Affairs and the Northwestern McCormick School of Engineering. Led by V.S. Subrahmanian, a Buffett Faculty Fellow at the Northwestern Buffett Institute and the Walter P. Murphy Professor of Computer Science, the new lab is currently working on over 20 distinct research projects related to AI, examining issues ranging from how to protect cities from drone attacks by terrorist organizations, to detecting deception in videos, to the implications of deepfakes for international conflicts.

The lab's Terrorism Reduction with AI Deepfakes (TREAD) project—developed by Northwestern University Ph.D. candidate Chongyang Gao, undergraduate Alex Feng and Subrahmanian—specifically investigates the implications of deepfakes for international conflict and terrorism mitigation while raising a central question for the global security community: Can deepfakes be used to counter terrorists and destabilize terror groups? NSAIL researchers have been at the leading edge of developing systems to generate realistic deepfake videos for countering terrorist groups, while also recommending extreme caution, the sparing use of deepfake technology and a deepfake code of conduct for governments.

[Figure: Screenshot of TREAD used to put words in the mouths of two dead terrorists, Anwar al Awlaki and Mohammad al Adnani, with trainers in English and Arabic, respectively.]

EARLY FINDINGS AND RECOMMENDATIONS

NSAIL head V.S. Subrahmanian, in collaboration with Dan Byman of Georgetown University and Chris Meserole of the Brookings Institution, recently published a paper on the implications of deepfakes for international conflict, including the threats of "falsifying orders from military leaders, sowing confusion among the public and armed forces and lending legitimacy to wars and uprisings."

In addition to highlighting the ways in which deepfakes can be used to foster dissent, confusion and distrust among an adversary's public, military and media, the report also outlines a series of policy recommendations for liberal democracies balancing a vested interest in the accuracy of public information with strong incentives to deploy deepfakes against their adversaries, particularly in the context of armed conflict. Considering those incentives, Subrahmanian and his collaborators suggest that the U.S. and its democratic allies develop
a code of conduct for deepfake use by governments, called a Deepfakes Equities Process, based on the federal government's existing Vulnerabilities Equities Process, which guides decisions on whether newly discovered cybersecurity vulnerabilities are publicly disclosed or kept secret for offensive use against government adversaries. An "inclusive, deliberative process is the best way to ensure deepfakes are used responsibly," they wrote, and a Deepfakes Equities Process would determine when the benefits of leveraging deepfake technology against high-profile targets outweigh the risks "by incorporating the viewpoints of stakeholders across a wide range of government offices and agencies."

DEVELOPMENTS TO WATCH

Deepfake technology has wide-ranging implications beyond international security, including the fabrication of criminal evidence, new forms of sexual harassment such as deepfake pornography, and privacy violations by employers trying to prevent deepfakes from happening in the first place. In the months and years ahead, governments, international organizations, institutions and businesses are likely to pay greater attention to the growing role of AI and the guardrails needed for risk mitigation through new and evolving technologies, policies and strategies. For example, in June 2023, the European Union took the first step towards regulating how businesses can use artificial intelligence. Undoubtedly, the EU will not be the only international body to take such steps in the near future and beyond.

If every piece of media content becomes suspect because of AI and deepfakes, Western democracies in particular will become ever more concerned about the potential erosion of the trust needed for democracy to function. In response, expect an increase in the number of organizations, think tanks and research labs like NSAIL devoting time and resources to studying the evolving capabilities and threats of cyber deception, developing technical countermeasures for detection and providing risk-mitigation guidance and solutions.

NSAIL researchers and founder Subrahmanian will continue to advance the world's knowledge about AI technology across many domains, while presenting their findings globally. For example, in February 2023, the Dutch government hosted the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM 2023). The summit convened stakeholders from around the world for dialogue on key opportunities, challenges and risks associated with military applications of AI. Subrahmanian was an invited speaker, arguing for bringing insight on deepfake technology to the fore of current debates on responsible uses of AI within militaries.

buffett.northwestern.edu