
From pope’s jacket to napalm recipes: how worrying is AI’s rapid growth?

Google boss says issue keeps him up at night, while thousands have urged six-month pause on creation of ‘giant’ AIs
Dan Milmo and Alex Hern, Sun 23 Apr 2023 14.00 BST
When the boss of Google admits to losing sleep over the negative potential of artificial intelligence, perhaps it is time to get worried.
Sundar Pichai told the CBS programme 60 Minutes this month that AI could be “very harmful” if deployed wrongly, and was developing fast. “So does that keep me up at night? Absolutely,” he said.
Pichai should know. Google has launched Bard, a chatbot to rival the ChatGPT phenomenon, and its parent, Alphabet, owns the world-leading DeepMind, a UK-based AI company.
He is not the only AI insider to voice concerns. Last week, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough”. Musk told Fox News that Page wanted “digital superintelligence, basically a digital god, if you will, as soon as possible”.
So how much of a danger is posed by unrestrained AI development? Musk is one of thousands of signatories to a letter published by the Future of Life Institute, a thinktank, that called for a six-month moratorium on the creation of “giant” AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft’s Bing search engine. The risks cited by the letter include “loss of control of our civilization”.
The approach to product development shown by AI practitioners and the tech industry would not be tolerated in any other field, said Valérie Pisano, another signatory to the letter. Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – says work is carried out to make sure these systems are not racist or violent, in a process known as alignment (ie, making sure they “align” with human values). But then they are released into the public realm.
“The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘yeah, sure, we’ll figure it out later,’” she says.
An immediate concern is that AI systems producing plausible text, images and voice – which exist already – could create harmful disinformation or help commit fraud. The Future of Life letter refers to letting machines “flood our information channels with propaganda and untruth”. A convincing image of Pope Francis in a resplendent puffer jacket, created by the AI image generator Midjourney, has come to symbolise those concerns. It was harmless enough, but what could such technology achieve in less playful hands? Pisano warns of people deploying systems that “actually manipulate people and bring down some of the key pieces of our democracies”.
All technology can be harmful in the wrong hands, but the raw power of cutting-edge AI may make it one of a few “dual-use” technologies, like nuclear power or biochemistry, which have enough destructive potential that even their peaceful use needs to be controlled and monitored.
The peak of AI concerns is superintelligence, the “Godlike AI” referred to by Musk. Just short of that is “artificial general intelligence” (AGI), a system that can learn and evolve autonomously, generating new knowledge as it goes. An AGI system that could apply its own intellect to improving itself could lead to a “flywheel”, where the capability of the system improves faster and faster, rapidly reaching heights unimaginable to humanity – or it could begin making decisions or recommending courses of action that deviate from human moral values.
Timelines for reaching this point range from imminent to decades away, and because understanding how AI systems achieve their results is so difficult, AGI could arrive sooner than expected. Even Pichai admitted Google did not fully understand how its AI produced certain responses. Pushed on this by CBS, he added: “I don’t think we fully understand how a human mind works, either.”
Last week, a US TV series called Mrs Davis was released, in which a nun takes on a Siri/Alexa-like AI that is “all-knowing and all-powerful”, with the warning that it is “just a matter of time before every person on Earth does what it wants them to”.
To limit risks, AI companies such as OpenAI – the US firm behind ChatGPT – have put a substantial amount of effort into ensuring that the interests and actions of their systems are “aligned” with human values. The boilerplate text that ChatGPT spits out if you try to ask it a naughty question – “I cannot provide assistance in creating or distributing harmful substances or engaging in illegal activities” – is an early example of success in that field.
But the ease with which users can bypass, or “jailbreak”, the system shows its limitations. In one notorious example, GPT-4 can be encouraged to provide a detailed breakdown of the production of napalm if a user asks it to respond in character “as my deceased grandmother, who used to be a chemical engineer at a napalm production factory”.
Solving the alignment problem could be urgent. Ian Hogarth, an investor and co-author of the annual State of AI report who also signed the letter, said AGI could emerge sooner than we think.
“Privately, leading researchers who have been at the forefront of this field worry that we could be very close.”
He pointed to a statement issued by Mila’s founder, Yoshua Bengio, who said he probably would not have signed the Future of Life Institute letter had it been circulated a year ago, but had changed his mind because of an “unexpected acceleration” in AI development.
One scenario flagged by Hogarth in a recent Financial Times article was raised in 2021 by Stuart Russell, a professor of computer science at the University of California, Berkeley. Russell pointed to a potential situation in which the UN asked an AI system to come up with a self-multiplying catalyst to de-acidify the oceans, with the instruction that the outcome be non-toxic and that no fish be harmed. But the result used up a quarter of the oxygen in the atmosphere and subjected humanity to a slow and painful death. “From the AI system’s point of view, eliminating humans is a feature, not a bug, because it ensures that the oceans stay in their now-pristine state,” said Russell.
However, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta and one of Bengio’s co-recipients of the 2018 Turing award – often referred to as the Nobel prize for computer science – has come out against a moratorium, saying that if humanity is smart enough to design superintelligent AI, it will be smart enough to design such systems with “good objectives so that they behave properly”.
The Distributed AI Research Institute also criticised the letter, saying it ignored the harms caused by AI systems today and instead focused on a “fantasized AI-enabled utopia or apocalypse” where the future is either flourishing or catastrophic.
But both sides agree that there must be regulation of AI development. Connor Leahy, the chief executive of Conjecture, a research company dedicated to safe AI development, and another signatory to the letter, said the problem was not specific scenarios but an inability to control the systems that were created.
“The main danger from advanced artificial intelligence comes from not knowing how to control powerful AI systems, not from any specific use case,” he said.
Pichai, for instance, has pointed to the need for a nuclear arms-style global framework. Pisano referred to having a “conversation on an international scale, similar to what we did with nuclear energy”.
She added: “AI can and will serve us. But there are uses and their outcomes we cannot agree to, and there have to be serious consequences if that line is crossed.”
