
As AI and Automation Grow, Protecting Data Is Key for Both Values and Interests
By James Lamond
Center for European Policy Analysis, December 6, 2023

Over the past year or so, the world has been shocked by the rapid leap forward in artificial
intelligence (AI) and automation. While AI, machine learning, and automation have all been
in use for some time, the easy-to-use interfaces that emerged late last year have made what
was once the stuff of science fiction a reality for the masses. AI may well be one of the
most consequential technological developments of our lifetimes, and it has already been put
to meaningful use in biological research, climate-change mitigation, protection against
financial fraud, and more.

The potential for innovation and AI to help tackle some of the most vexing issues facing
society today is immense. But just as the potential for progress is real, so too are the
challenges and threats that AI could bring about. A group of prominent AI experts, including
leaders from the very companies working on these emerging technologies, signed a one-
sentence letter earlier this year declaring that, “Mitigating the risk of extinction from AI should
be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Leaving aside the risk of extinction, there are also more immediate challenges, including
mass surveillance, lethal autonomous weapons, new and profound disruptions in the labor
market, and even just bad music.

This technological development also coincides with two interrelated global political
trends: 1) the rise of geopolitical competition and 2) growing efforts to erode democratic
norms and protections. AI will have a profound impact on both. The important questions are
how, toward what end, and what policymakers can do about it.

Geopolitics and AI

National power and technological leadership have always been intricately tied together.
Power derives not just from direct dominance in technology, but from the broader economic
and societal strength that technological innovation helps build. Today, this relationship is
even more pronounced and is evolving faster than ever. As Nathaniel Fick, the US
ambassador at large for cyberspace and digital policy, recently explained before the Senate
Foreign Relations Committee, “Many traditional measures of strength, such as GDP or
military capacity, are increasingly downstream from our ability to innovate in core areas. …
[I]n the realm of geopolitical competition, tech is the game.”

At the same time, governments and anti-democratic forces have already begun to use
AI/automation tools to manipulate the information space, interfere in democratic processes,
and repress populations. In its Freedom on the Net 2023 report, Freedom House found that
at least 47 governments deployed commentators to manipulate online discussions, and that
AI/automation tools were used to escalate disinformation, conduct mass surveillance
(particularly facial recognition), and enhance censorship.

Put simply, AI is a tool that anti-democratic forces, both at home and abroad, can use to
undermine the transatlantic community’s democracies.

The Task for Policymakers: Keep Data Secure and Private

AI is here and here to stay, along with all the risks and opportunities that come with it. The
task for policymakers in the transatlantic community is to help build an ecosystem that
promotes innovation in order to maintain a competitive edge, but also protects the
democratic values that undergird the community.

This will require a complex recipe, one that the world’s experts and policymakers are still
figuring out. But one essential ingredient in the recipe is protecting user data.

AI systems depend on massive amounts of data to learn and to improve their decision-
making. In turn, big data analytics use AI for better analysis. Part of the equation for
building an AI ecosystem that is both innovative and democratic is ensuring the integrity of
data and of its collection and protection. Specifically, data will need to be kept secure, with
privacy protected, and collected and analyzed in accordance with democratic values.

Ensuring that data is kept secure from illicit access by malign actors, whether state actors
or criminal organizations, has been a growing priority for the past two decades, as has
advancing privacy protections to ensure that personal data is not exploited commercially or
shared without the user’s consent. But progress on both fronts has been uneven within the
transatlantic community in recent years.

In the United States, national security policymakers have thought innovatively about how to
bring together all the stakeholders in cybersecurity to better ensure the defense,
availability, reliability, and resilience of the country’s cyber networks and infrastructure.
Meanwhile, in the European Union (EU), security concerns are growing over the current
implementation of the Digital Markets Act (DMA). The EU’s recent decision to commission a
technical study to assess and map the security risks stemming from certain DMA provisions
shows that those concerns are real. The study is a welcome step, but only if it involves a
thorough and genuine technical analysis of the security implications, with recommendations
for mitigating unintended consequences.

Meanwhile, on the privacy front, the prospects for a comprehensive digital privacy law in the
United States appear to be stalled, despite some serious progress last year. The EU has
been years ahead of the United States with the General Data Protection Regulation
(GDPR), but here too the DMA is creating complications. The DMA’s interoperability
mandates for encrypted messaging apps could undermine the privacy gains from the
GDPR. For example, Steve Bellovin, one of the world’s leading cryptographers and a former
chief technologist at the U.S. Federal Trade Commission, warns that end-to-end encryption
interoperability, as proposed in the DMA, is “somewhere between extraordinarily difficult
and impossible.” There are other proposals in the EU and the United Kingdom that could
have the unintended consequence of undermining end-to-end encryption, the technology
that underpins private and secure communication. Democracy advocates, including those at
the American Civil Liberties Union and the Electronic Frontier Foundation, have warned that
end-to-end encryption is critical to protecting human rights and democratic values. Several
companies, including makers of secure messaging apps used by civil society organizations
and democracy activists, have threatened to pull out of markets over these proposals.

AI/automation has opened up a whole new world. How this world takes shape will have
profound consequences for the future of technology, society, and democracy in the
transatlantic space and globally. As governments debate current and future policies, they
should carefully consider the impact on cybersecurity and privacy. The stakes are high.
