A World Divided Over Artificial Intelligence

Geopolitics Gets in the Way of Global Regulation of a Powerful Technology

By Aziz Huq
Foreign Affairs, March 11, 2024

An artificial-intelligence-powered assistant on display in Barcelona, February 2024. Bruna Casas / Reuters

In November 2023, a number of countries issued a joint communiqué promising strong international cooperation in reckoning with the challenges of artificial intelligence. Startlingly
for states often at odds on regulatory matters, China, the United States, and the European
Union all signed the document, which offered a sensible, wide-ranging view on how to
address the risks of “frontier” AI—the most advanced species of generative models
exemplified by ChatGPT. The communiqué identified the potential for the misuse of AI for
“disinformation” and for the kindling of “serious, even catastrophic” risks in cybersecurity
and biotechnology. The same month, U.S. and Chinese officials agreed to hold talks in the
spring on cooperation over AI regulation. These talks will also focus on how to handle the
risks of the new technology and ensure its safety.

Through multinational communiqués and bilateral talks, an international framework for regulating AI does seem to be coalescing. Take a close look at U.S. President Joe Biden’s
October 2023 executive order on AI; the EU’s AI Act, on which EU lawmakers reached a provisional agreement in December 2023 and which will likely be finalized later this year; or China’s slate of
recent regulations on the topic, and a surprising degree of convergence appears. These regimes broadly share the goal of preventing AI’s misuse without restraining innovation in the process. Optimists have floated proposals for closer international management of AI, such as the ideas the geopolitical analyst Ian Bremmer and the entrepreneur Mustafa Suleyman presented in Foreign Affairs, and the plan Suleyman and Eric Schmidt, the former CEO of Google, offered in the Financial Times, in which they called for the creation of an international panel akin to the UN’s Intergovernmental Panel on Climate Change to “inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming.”

But these ambitious plans to forge a new global governance regime for AI may collide with
an unfortunate obstacle: cold reality. The great powers, namely China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions
point toward a future of fragmentation and competition. Divergent legal regimes are
emerging that will frustrate any cooperation when it comes to access to semiconductors,
the setting of technical standards, and the regulation of data and algorithms. This path
doesn’t lead to a coherent, contiguous global space for uniform AI-related rules but to a
divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can
be harnessed for the common good is dashed on the rocks of geopolitical tensions.

Chips on Their Shoulders

The best-known area of conflict related to AI is the ongoing duel between China and the
United States over global semiconductor markets. In October 2022, the U.S. Commerce
Department issued its first comprehensive licensing regime for the export of advanced chips
and chip-making technology. These chips power the devices that run the cutting-edge AI models developed by OpenAI, Anthropic, and other firms on the
technological frontier. The export controls apply not just to U.S. companies but to any
manufacturer that uses such U.S. software or technology; in practice, Washington’s export-
control regulations have a global remit. In August 2023, China countered with its own export controls on the metals gallium and germanium, both key inputs for manufacturing chips. Two months later, the Biden administration toughened its earlier
regulations by expanding the range of covered semiconductor products.

Tit-for-tat competition over semiconductors is possible because international trade law under the World Trade Organization does not sufficiently constrain governments from
instituting export controls. The body has rarely addressed the issue in the past. And since
the Trump administration paralyzed the WTO’s appellate body by blocking the appointment of new members, leaving it without a quorum by the end of 2019, there has been little prospect of new formal rules that can be credibly enforced by an authoritative global institution. As a result, these
salvos in the chip war between China and the United States are eroding free trade and
setting destabilizing precedents in international trade law. In the near term, such unilateral measures will likely displace formal trade law altogether, guaranteeing lower levels of trade and greater geopolitical strain.

But the chip war is just the most high-profile front in the gathering contest over AI’s
necessary components. A second zone of conflict concerns technical standards. Such
standards have long undergirded the use of any major technology: imagine trying to build a
railroad across the United States if every state had a different legally mandated gauge for
train tracks. The rise of the digital era has seen the proliferation of various kinds of standards
to enable the production and purchase of complex products around the world. The iPhone
13, for example, has nearly 200 parts sourced from more than a dozen countries. If these
disparate elements are to work together—and make an object that can communicate with
cell towers, satellites, and the Internet of Things—they have to share a set of technical
specifications. The choice of such standards has profound effects. It determines whether
and how innovations can find commercial uses or achieve market share. As the German industrialist Werner von Siemens said in the late 1800s, “He who owns the standards, owns
the market.”

At present, a series of little-known bodies such as the International Telecommunication Union, the International Electrotechnical Commission, the International Organization for Standardization, and the Internet Engineering Task Force negotiate technical standards for digital technology in general. Largely based in Geneva and operating as nonprofits or as UN affiliates, these bodies play a major role in setting the terms of global digital trade and
competition. Members of these institutions vote on standards by majority rule. To date, those
forums have been dominated by U.S. and European officials and firms. But that is changing.

In the last two decades, China has increasingly taken on leadership roles in the technical
committees of several of these bodies, where it has unstintingly promoted its preferred
standards. Since 2015, it has integrated its own technical standards into the projects of its
Belt and Road Initiative, a vast global infrastructure investment program. As of 2019, it had
reached 89 standardization agreements with 39 countries and regions. In March 2018,
China launched yet another strategy, “China Standards 2035,” calling for an even stronger
Chinese role in international standard setting and demanding greater civil-military
coordination within China on the choice of standards. Predictably, some industry analysts
in the United States have responded by calling for Washington to combat “more proactively
. . . Chinese influence over standard-setting bodies.”

This is not the first time technical standards have become ensnared in geopolitical tensions.
In August 2019, U.S. sanctions on the Chinese telecommunications giant Huawei led China
to establish its own energy-efficiency standards that were incompatible with Western ones.
The result was a fracturing of technical standards for managing how large data centers,
which are central to the digital economy, work. In the AI context, markets separated by different technical standards would slow the diffusion of new tools. Such fragmentation would also make it more difficult to develop technical solutions that could be applied globally to problems such
as disinformation or deepfake pornography. In effect, the problems that great powers have
identified as important to jointly address would become harder to solve.

Divisions over AI-related technical standards have already emerged. The EU’s AI Act, for
example, mandates the use of “suitable risk management measures.” To define this term,
the act looks to three independent standard-setting organizations that would develop and
promulgate context-specific standards regarding AI safety risks. It is telling that the three
bodies specified in the legislation to date are European, not the international ones
mentioned above. This seems to be a conscious effort to distinguish European regulation
from its U.S. and Chinese counterparts. And it promises the Balkanization of standards
pertaining to AI.

Their Dark Materials

Geopolitical conflict is not just shaping a new international regulatory landscape for the physical components that go into AI. It is also sharpening divides over the intangible assets the technology needs. Again, the emerging legal regime entrenches a divided world order in which broad-based, collective solutions are likely to fail.

The first important intangible input of AI is data. AI tools such as ChatGPT are built on
massive pools of data. To succeed, however, they also need more targeted batches of data.
Generative AI tools, which are able to produce paragraphs of text or extended video based
on brief prompts, are incredibly powerful. But they are often unsuited to highly specific tasks.
They must be fine-tuned with smaller, context-specific data sets to do a particular job. A firm
using a generative AI tool for its customer-service bot, for example, might train such an
instrument on its own transcripts of consumer interactions. AI, in short, needs both large
reservoirs of data and smaller, more bespoke data pools.

Companies and countries will therefore invariably compete over access to different kinds of
data. International conflict over data flows is not new: the United States and the EU
repeatedly clashed over the terms under which data can cross the Atlantic after the EU’s
Court of Justice struck down, in 2015, a safe harbor agreement that had allowed companies
to move data between servers in the United States and Europe. But such disagreements are now growing in scale, shaping how data flows and making it harder for data to cross national borders.

Until recently, the United States promoted a model of free global data transfers out of a
commitment to open markets and as a national security imperative—a more integrated
world, officials believed, would be a safer one. Washington was aggressive in its use of
bilateral trade deals to promote this vision. In contrast, European law has long reflected
greater caution about data privacy. For their part, China and India have enacted domestic
legislation that mandates, in different ways, “data localization,” with greater restrictions on
the flow of data across borders.

Since AI swept to center stage, these positions have shifted. India recently relaxed its localization rules, suggesting that it will allow greater data flows to other countries and thus gain greater sway over the terms of global digital trade. China also seems to be easing its
localization rules as its economy sputters, allowing more companies to store data outside
China’s borders. But startlingly, the United States is moving in the opposite direction. U.S.
politicians who were worried about the social media app TikTok’s relationship with the
Chinese government pressured the company to commit to limiting data flows to China.
(TikTok, by its own admission, has honored this commitment somewhat inconsistently.) In
October 2023, the U.S. trade representative announced that the federal government was
dropping the country’s long-standing demands at the WTO for the protection of cross-border
data flows and prohibitions on the forced localization of data. If Washington continues down this path, the world will have lost its principal advocate of free data flows, and more data localization would likely ensue.

Finally, global competition is starting to emerge over whether and when states can demand
the disclosure of the algorithms that underlie AI instruments. The EU’s AI Act, for
instance, requires large firms to provide government agencies access to the inner workings
of certain models to ensure that they do not pose undue risks to individuals. Similarly,
recent Chinese regulations regarding AI used to create content (including generative AI)
require firms to register with authorities and limit the uses of their technology. The U.S.
approach is more complex—and not entirely coherent. On the one hand, Biden’s executive
order in October 2023 demands a catalog of disclosures about “dual-use foundation
models”—cutting-edge models that can have both commercial and security-related uses.
On the other hand, trade deals pursued by the Trump and Biden administrations have
included many provisions prohibiting other countries from mandating in their laws any
disclosure of “proprietary source code and algorithms.” In effect, the U.S. position seems to
demand disclosure at home while forbidding it overseas.

Even though this kind of regulation regarding algorithms is in its infancy, it is likely that
countries will follow the path carved by global data regulation toward fragmentation. As the
importance of technical design decisions, such as the precise metric an AI is tasked with
optimizing, becomes more widely understood, states are likely to try to force firms to
disclose them—but also to try to prohibit those firms from sharing this information with other
governments.

Things Fall Apart

In an era of faltering global resolve on other challenges, the great powers initially struck an optimistic note in grappling with AI. In Beijing, Brussels, and Washington, there seemed to be broad agreement that AI could cause potentially grave harms and that concerted transnational action was needed.

Countries are not, however, taking this path. Rather than encouraging a collective effort to
establish a clear legal framework to manage AI, states are already engaged in subtle,
shadowy conflicts over AI’s material and intangible foundations. The resulting legal order
will be characterized by fracture and distance, not entanglement. It will leave countries suspicious of one another, sapping the goodwill needed to advance proposals for better global governance of AI. At a minimum, the emerging regime will make it more difficult
to gather information and assess the risks of the new technology. More dangerously, the
technical obstacles raised by the growing legal Balkanization of AI regulation may make
certain global solutions, such as the establishment of an intergovernmental panel on AI,
impossible.

A fragmented legal order is one in which deeply dangerous AI models can be developed
and disseminated as instruments of geopolitical conflict. A country’s efforts to manage AI
could easily be undermined by those outside its borders. And autocracies may be free to
both manipulate their own publics using AI and exploit democracies’ free flow of information
to weaken them from within. There is much to be lost, then, if a global effort to regulate AI
never truly materializes.

AZIZ HUQ is Frank and Bernice J. Greenberg Professor of Law at the University of Chicago and the author of
The Collapse of Constitutional Remedies.
