
A Proposal for

Global AI Regulation
Professor William Webb | CTO Access Partnership

accesspartnership.com
Why regulate AI?

AI has the potential to deliver more benefits to humanity than any new technology in the last century. This benefit is matched by great potential harms.

The challenges of regulating AI are well-documented: it is a fast-moving field in which the long-term harms remain unclear. The global, far-reaching nature of this technology underlines this threat.

Unusually for such a fast-moving area, there is near-universal agreement that some sort of regulation is needed, and indeed is critical.

01
Current
regulatory
proposals
Regulators, policy bodies, and
companies have responded to this
clear need with a flurry of proposals to
either regulate AI or build the evidence
base to do so. However, the quality of
these proposals varies considerably.
Other bodies, meanwhile, are considering
alternative approaches, such as
voluntary ethical frameworks.

Europe

The most concrete is the EU's AI Act. This has been many years in development and is now in the advanced stages of approval. It is detailed (over 100 pages in total) and includes enforcement mechanisms and more. As the first comprehensive piece of legislation in the field, it has the potential, as with the GDPR, to become the de facto global AI regulation.

While other countries have varying degrees of regulation and frameworks, these are developing at such a pace that any policy is likely to be outdated soon after its completion. There has been much comment on the EU's AI Act, since there is much to criticise – not least from a distance. But the EU's Act was developed by well-informed teams using a well-developed process: it is not clear that anyone else following the same approach would come up with anything materially better, nor is it likely that tweaking will overcome the valid criticisms raised. Arguably, what the AI Act demonstrates is that the current regulatory process is not the right one for AI. Mostly, this is because it is too slow – AI changes far faster than regulators can draft the laws. But also, in a highly uncertain area, specific evidence-based regulation is unlikely to work well.

Any regulatory approach to AI, at least in its current rapidly changing and highly unpredictable state, should be:

• Global, since AI by its nature cannot be constrained to any one country.

• Responsive, such that it can change quickly as harms become apparent.

• Built on principles, or ethics, that provide guidance for those working on AI and enable even some degree of self-regulation.

Each of these is very different from the current approach, which tends to be national, slow-moving, and specific. It is unlikely that these objectives will be delivered through the existing bodies and processes (even the shortest international standards reconciliation process is a years-long affair). Just as AI has led to a dramatic change in many industries, it needs to lead to a dramatic change in regulators and regulatory processes.



02
Globality
AI is a global development.
Companies leading the way have
global reach and AI systems can be
accessed from almost anywhere in
the world. AI would be much better
regulated on a global basis.

Global regulation has often been unsuccessful, or very slow and compromised. But this need not be the case with AI regulation, primarily because there is no absolute need to have all countries involved. As long as the majority of countries where the major users of AI reside are part of the initiative, it is highly likely that the major suppliers will adhere to the regulation in all countries that they supply to. This is akin to global adherence to the EU's GDPR, even where it does not apply, primarily because it is simpler for companies to work to one regulatory standard.

Flexible membership
It may also be manageable to have a few countries, for example China, outside of the regulatory framework. Since the origin of AI tools can normally be ascertained, those using AI can avoid non-certified systems, and users can look for compliance markings on products. While not ideal, use of non-certified AI systems in countries that have not subscribed to the global regulation may be identifiable and manageable. The more countries involved the better, but this ability to move ahead with only a subset of countries should mean less blocking of progress from those with differing interests or, worse, a desire to subvert.

Use the force
In order to set up the new framework quickly, it may be best to work within the existing national regulatory structure. Rather than a national regulator or government ceding powers to the global AI regulator, they could instead commit to enact the global regulations into national law or regulation. This is how some EU regulation works – national governments agree to enact it as national law. Similarly, compliance verification and penalties could also work nationally but as part of an agreed global process – for example, one country might issue a fine to a transgressor that had been agreed at a global level.

A global regulator should not be required to have a representative from each country that signs up to it. This would be unwieldy and raise the risk of some countries blocking or compromising progress. Instead, it should be able to select its own panel of experts, chosen for their capabilities rather than nationality. It should be fully funded by the member countries.

Action first
Of course, none of this is easy. Countries will not want to write a "blank cheque" to a new body with no track record and little in the way of checks and safeguards. Selecting a panel of experts is likely to lead to nationalistic behaviours. And that is before even considering how to regulate AI, a topic of huge debate and contention. But while not easy, it is possible, and some solutions are discussed below. And it is critically needed.



03
Rapid reaction
Regulators generally move slowly. For example, a spectrum auction, which is a well-known and widely used piece of regulation, can often take many years to design and run. Regulators fear legal challenges, consult often, assemble large bodies of evidence to de-risk, and pause whenever issues emerge. Equally, regulators can move fast: in the case of the Covid vaccine, the benefits were so great that normal processes were modified and approaches that monitored and changed regulation rapidly were put in place.

In the case of AI regulation, the risks of moving too quickly are relatively low – chiefly some slowing of innovation in AI. Conversely, the risks of moving too slowly could be significant harms that are hard to address once widely embedded.

To be responsive on timescales of months, not years, AI regulators need:

1. The ability to spot trends or even predict harms.

2. A framework that allows regulation to be produced without evidence or consultation.

In respect of point 1, there are many ways to spot trends:

• Regulators could be much more tightly coupled to research activities, such as national AI laboratories, with staff circulating between them and regular briefings across the research-regulatory divide. Regulators could even shape research into areas of concern or uncertainty.

• Regulators could have a well-resourced monitoring function that closely monitors AI use and actively probes for possible harms; for example, working with organisations on beta versions of systems and using a form of "penetration testing" that seeks out possible issues. This is better done globally, with the cost shared across all regulators.

• Regulators could use the best forecasters in the world to predict what might happen, and tools such as backcasting, where the routes to possible predicted harms can be better understood. Again, this is better done globally.

While adopting these does not sound too difficult, it will require different types of staff than currently reside in most regulators, as well as cultural change in some cases. Recruitment is often a slow process, so contractors, consultants, or other flexible ways to rapidly gain access to the world's leading thinkers might be needed.

In respect of point 2 – call it flash rulemaking – another challenge is the framework that regulators work under, which is typically a legal Act that sets out the duties, objectives, and mode of operation of the regulator. Often, this framework requires evidence-based regulation and formal consultation. These are excellent approaches – but only in slow-moving areas. A change to the legal framework may be needed to allow the regulator to adopt a very different approach, where regulation is quickly drafted by a group of experts and then immediately put into force, but with frequent review and potential updates so that any errors or unexpected outcomes can be dealt with.

The problem is that changes to legal frameworks are themselves based on evidence-led assessment and consultation. Consequently, changing the framework might take too long. It may require some form of derogation, or similar, that allows the regulator to operate under temporary exceptions while the slower processes unfold. Quite what will work is likely to differ from country to country. If a global regulator is established, as discussed above, then countries may need to do relatively little other than be able to incorporate the global regulation into their national framework. But without this, they may need to make harder changes.



04
Principles-led
In a fast-moving area, it is rarely
possible to have highly specific
regulation. It tends to become
outdated, or stakeholders quickly
find ways around it.

A different approach is to set out, at a high level, what the regulation aims to achieve, such as fairness for all, societal benefits, inclusiveness, security, and the ability to control and modify. Many such frameworks for AI have already been developed and appear well thought out and appropriate.

The challenge is how to apply and enforce such frameworks. At present, they are often proposed as voluntary codes that companies developing AI might wish to adopt. That is laudable, but hardly sufficient for regulatory purposes.

As principles-led frameworks are, by their nature, high-level, they are very difficult to enforce. How can it be shown that a company carefully considered all aspects of fairness in designing its system? Or that the resulting system is fair, when fairness is such a subjective concept? How would you prosecute a company that appeared not to have complied with the ethical code? Any legal challenge would seem problematic when there is so much subjectivity involved. These problems have meant that ethical codes have rarely been part of a regulator's toolkit.

These issues are nearly insurmountable, but they do not mean that there is no role for such frameworks. They bring two potential benefits:

• They can be used to signal likely future regulation, reducing the uncertainty that comes from the responsive regulation discussed above. While stakeholders may not know exactly what regulation will be enacted, they can have a good guess at what might be coming and some expectation that, if they adhere to the framework, they will not be heavily impacted by new regulation.

• They may have some impact – many companies will take note and change their behaviours, so they can lessen harms that might otherwise result. On their own, they are insufficient. However, when coupled with responsive and more specific regulation, they can form part of a better regulatory approach.



05
Expert-led
Regulators tend to be generalists
who consult experts as needed
while developing regulation.
For example, they might talk to
mobile network designers when
considering spectrum auctions
to ensure that the auctions are
technically sound.

This approach has worked in the past because the sectors being regulated are relatively slow-moving and the regulators can often recruit a few individuals with expert knowledge to work on internal teams.

This is not the case with AI, which is moving much faster than sectors like mobile communications. Experts are concentrated in the large companies developing AI and are unlikely to join regulators. Prior regulation, or examples from other countries, hardly exists.

This leads to a huge asymmetry of knowledge between those developing AI and those trying to regulate it, making good regulation extremely challenging. The solution is to use panels of experts, ideally globally, who have an equal role in creating regulation, not just an input.

These experts need to be co-opted from leading companies, universities, research bodies, and elsewhere. They clearly need to understand AI, but experts on the impact of AI, such as sociologists, psychologists, and more, are also needed.

06
What will the
regulation
look like?
This paper has avoided saying
much about what the actual AI
regulation might look like. This
is broadly because it is too early
to do so in many areas. However,
some harms are already apparent,
and the form of likely regulation is
emerging, for example:

Disinformation and fake news:
This is a very challenging problem requiring multiple tools spread across many actors, such as regulation that makes generation of disinformation illegal and requires sites that host such information to actively search for and remove it. Fact-checking bodies could be set up to search for and flag such information.

Bias and discrimination:
Given that biases are often incorporated unwittingly, the approach here might require extensive testing before AI is used in particular applications. In this case, it would be the responsibility of those using the AI to ensure that it was tested for biases in their intended use. Guidance on selection of training data might also help those training models.

Copyright abuse:
This extends from AI using written material, music, and art to using people's voice or appearance and more. A global agreement on fair recompense for materials used in AI training is needed, coupled with a requirement for those using AI models to pay appropriate fees.

Privacy:
If personal data is incorporated into training models, it might emerge into public or commercial spheres in unanticipated ways. Here, the best approach is likely to require those training models to exclude personal data. However, there are many grey areas that will need careful consideration.



There are future potential harms that have been postulated but where
it is less clear what the optimal form of regulation is, such as:

• Anti-competitive behaviour, should the provision of key AI models, or other enablers, be concentrated in a small number of players.

• Risks in robotic or autonomous devices that use AI to decide on actions, such as AI-enabled autonomous cars.

• Aggressive loss of employment leading to social risks.

• Criminal behaviour or other highly dangerous outcomes.

• Risks that AI designed for military purposes "escapes" into the wider world and leads to enhanced criminal behaviour or other highly dangerous outcomes.

• Existential risks, should AI "get out of control" and pursue goals detrimental to humanity.

Work needs to start on these urgently to explore what best can be done.

These are not intended to be comprehensive lists and much remains to be determined – hence
the need for responsive action.



07
How to bring
about better AI
regulation
If the preferred approach is a new
global regulator, this should also
embrace responsiveness and
principles-led regulation.

To achieve this, someone needs to take the initiative to make it happen, ideally a high-ranking politician who can persuade others across multiple countries to consider a global regulator. This then allows suitable individuals (e.g., senior civil servants or regulators) from each country to convene for more detailed discussions leading to terms of reference and then the appointment of the key individuals needed to run the global regulator.

But, as noted above, persuading multiple countries to "write a blank cheque" to a new, unproven regulator is a very big ask, and perhaps too challenging for risk-averse regulators. Instead, incremental approaches could be adopted. Countries could give approval in principle but with opt-out options in the first year, with increasing commitment as the global regulator proves its worth (or the ability to pull out if the global regulator does not achieve its goals). The key will be to show a path towards becoming the global regulator, rather than yet another advisory body.

If establishing a global regulator fails, countries will need to go it alone or work in regional groups. This means each country addressing the challenges listed above of setting up a new way of delivering regulation. That likely requires leadership from the government and the regulator.

What about other stakeholders who will understandably want to influence and accelerate the process? Historically, stakeholders have often slowed down regulatory processes by their actions, so care is needed here. Work by industry bodies, research organisations, or other non-commercial entities can be helpful if it provides regulators with ready-made material to form regulations or delivers what can be regarded as evidence. This might be in the form of voluntary, principles-led frameworks. Major stakeholders all asking for global regulation can also provide governments with the signals that they need to move in that direction. Setting up forecasting panels and similar can provide a ready-made resource that regulators can then adopt more quickly than establishing their own. But demanding that regulators adopt certain materials is likely to be less than helpful, and regulators will, understandably, be suspicious of materials generated by stakeholders who stand to make commercial gain.

Above all, a clear, consistent message that responsive global AI regulation is needed is most likely to result in the governmental action essential to bringing it about.



Work with us
Founded in 1999, Access Partnership is a global policy consulting
firm with integrated expertise across technology, government affairs,
multilateral organisations, and sustainability.

Meet William

William Webb
Chief Technology Officer
william.webb@accesspartnership.com

As CTO, William oversees the development and dissemination of technology for our clients and vendors, bringing over thirty years' experience in technological communications. He manages a team of technical experts and provides technical support across the entire portfolio of Access's activities. He also provides direct consulting, including expert witness services, to mobile operators undergoing mergers or legal disputes around the world.

A previous Director of Technology at Ofcom for over seven years, William was also the Director of Corporate Strategy for Motorola, based in Chicago, USA, and moved on to become one of the founding directors of Neul, holding the role of CTO, where he was responsible for the overall technical design of an innovative new wireless technology before the company was sold to Huawei in 2014. Latterly, he became CEO at the Weightless SIG, which harmonised the technology as a global standard.



North America

Washington DC
Suite 512, 1730 Rhode Island Ave N.W.
Washington DC 20036, USA
Tel: +1 202 503 1570
washingtondc@accesspartnership.com

Europe

London
4th Floor, The Tower, 65 Buckingham Gate
London, SW1E 6AS, UK
Tel: +44 (0) 20 8600 0630
london@accesspartnership.com

Brussels
Square de Meeûs 37, 4th Floor
B-1000 Brussels, Belgium
Tel: +32 (0)2 791 79 50
brussels@accesspartnership.com

Middle East

Abu Dhabi
Al Wahda City Tower, 20th Floor
Hazaa Bin Zayed The First Street
PO Box 127432, Abu Dhabi, UAE
Tel: +971 2 815 7811
abudhabi@accesspartnership.com

Africa

Johannesburg
119 Witch-Hazel Avenue, Highveld Technopark
Johannesburg, Gauteng, South Africa
Tel: +27 72 324 8821

Asia-Pacific

Singapore
Asia Square, Tower 2, #11-01
12 Marina View, Singapore 018961
Tel: +65 8323 7855
singapore@accesspartnership.com
