
Risks posed by artificial intelligence

80000hours.org/problem-profiles/artificial-intelligence-risk/

Note: This is one of many ‘problem profiles’ we’ve written to help people find the most pressing problems they
can contribute to solving, and thereby have a larger social impact. Learn more about how we compare different
problems, see how we try to score them numerically, and check the list of all problems we’ve considered so far.

Summary
Many experts believe that there is a significant chance we’ll create artificially intelligent machines with abilities
surpassing those of humans – superintelligence – sometime during this century. These advances could lead to
extremely positive developments, but could also pose risks due to catastrophic accidents or misuse. The people
working on this problem aim to maximise the chance of a positive outcome, while reducing the chance of
catastrophe.

Work on the risks posed by superintelligent machines seems largely neglected, with total funding for this
research well under $10 million a year.

How to work on this problem

The primary opportunity to deal with the problem is to conduct research in philosophy, computer science and
mathematics aimed at keeping an AI’s actions and goals in alignment with human intentions, even if it were
much more capable than humans at all tasks. There are also indirect ways of approaching the problem, such as increasing the number of people concerned about the risks posed by artificial intelligence, and improving their capability to act on those concerns in the future.

Factor scores (using our rubric) and notes:

Scale: 14 – We estimate that the risk of extinction from AI within the next 200 years is in the order of 10%, and that a research program on the topic could reduce that by at least 1 percentage point. These estimates are necessarily highly uncertain.

Neglectedness: 11 – $1–$10m of annual funding.

Solvability: 2 – Solutions are believed to be several decades off, and it is quite unclear how to approach the problem. Some dispute whether it is possible to address the problem today.

Recommendation: Recommended – top tier. This is among the most pressing problems to work on.

Level of depth: Exploratory profile. We’ve made an initial evaluation of this problem by summarising existing research.
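As a rough illustration of what the Scale estimate implies, here is a back-of-the-envelope sketch using only the figures in the table above; the world-population figure is our own assumption, and future generations, which the argument below treats as the largest part of the stakes, are left out entirely:

\[
\underbrace{0.01}_{\text{1 percentage point reduction in extinction risk}} \times \underbrace{7\times10^{9}}_{\text{people alive today (assumed)}} \approx 7\times10^{7}\ \text{expected lives saved}
\]

Even on these deliberately narrow assumptions the expected benefit of a successful research program is very large, and counting future generations would make the stakes far larger still.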

What is the problem?


Many experts believe that there is a significant chance we’ll create artificially intelligent machines with abilities
surpassing those of humans – superintelligence – sometime during this century. These advances could lead to
extremely positive developments, but could also pose risks due to catastrophic accidents or misuse. The people
working on this problem aim to maximise the chance of a positive outcome, while reducing the chance of
catastrophe.

Why is this problem pressing?

What is our recommendation based on?

We think this problem is pressing because it’s prioritised by:

The Open Philanthropy Project – read their profile.


The Future of Humanity Institute at Oxford University, which thinks it’s among the most pressing problems
facing humanity from a long-run perspective. You can read their argument in the book, Superintelligence,
by Nick Bostrom.
The Global Priorities Project rates it as a top policy priority – read their report on the topic.

Having looked into these reports, we broadly agree with these groups’ assessments.

Notable individuals who have publicly expressed concern about the problem include AI researchers Stuart Russell and Steve Omohundro, physicist Stephen Hawking, and entrepreneurs Elon Musk, Bill Gates and Steve Wozniak.

Why think it’s pressing?

The reasons for believing that artificial intelligence poses real risks are unusually complex and counterintuitive,
and so cannot be fully explained in this profile. For those who are curious to know more, we suggest reading this
popular introduction first. As a next step you could read the book Superintelligence by Nick Bostrom for a more
detailed and careful explanation.

If a superintelligent machine were developed and humans were unable to keep its actions aligned with
human goals, it would pose a major risk to the survival of human civilization. A significant fraction of
experts in the area think the probability of a superintelligent machine creating outcomes we would not
want is high, unless we make a concerted effort to prevent that from happening.
Given the size of the risk, the area is highly neglected. The amount of funding directly spent on addressing these risks is under $10m per year, compared to the billions of dollars spent on speeding up AI development, or the billions spent on comparable threats such as nuclear war or biological risks.
In the worst-case scenario, not only do all people alive now die, but all future generations of people, and whatever value they would create, are prevented from existing.
If a superintelligent machine could be fully controlled, it could contribute to solving many other important
problems, such as curing disease or ending poverty.
There are unexplored avenues of research to better understand these risks and how to mitigate them.
Alternatively, we can build a community dedicated to mitigating these risks now, so that it is ready to act at a future time when it becomes easier to make progress.
The problem has become more urgent in the last year due to increased investment in developing machine
intelligence.

What are the major arguments against?

If human-level artificial intelligence will arrive a very long time in the future – say in more than 50 years’ time – then it may be better to focus on making society better in a broader way, and to deal with these specific risks once they are closer and better understood.
Some computer scientists do not believe that machine superintelligence is possible at all; others think it is
likely to be friendly or easy to control if created.

For any given individual there is a high probability that their skills will not be a natural fit for working on
this problem.

Key judgement calls made to prioritise this problem

AI does pose a real risk – Creating a machine superintelligence is probably possible, and it will not
necessarily do what humans intend it to do.
The ability to make research progress – We think it’s possible to do research today that will help with
the problem of controlling AI in the future. Many experts think such research is possible, but there’s wide
disagreement.
Importance of extinction risks – This problem has a particularly large scale if you place value on
mitigating a disaster that could occur several decades in the future and would prevent the existence of all
future generations.
Comfort with uncertainty – It’s not possible to have strong evidence that interventions to solve this
problem will succeed. We think it is sometimes appropriate to work on problems where the likelihood of
success is unknown.

What can you do about this problem?

What’s most needed to contribute to this problem?

Experts focused on this problem think what’s most needed right now is:

Technical AI safety research – Research into how to make AI aligned with human values (read more about what it involves).
Strategy research – Identifying and aiming to settle the key strategic issues that determine what we
should do about AI risk and other existential risks.
Community and capacity building – Creating a network of people who want to positively shape the
direction of AI, and have the expertise or credibility to influence the outcome.
Improving collective decision making and human intelligence – We cover this area in more detail in
another profile (forthcoming).

It’s widely thought that the most pressing need is for more talented AI risk researchers, as well as other staff for
the AI risk organisations.

For more detail, see Chapter 14 of the book Superintelligence.

What skill sets and resources are most needed?

People with a strong background in computer science or mathematics (e.g. a PhD from a top 10 school or
equivalent).
Competent all-rounders to perform management, operations or communications in those organisations.
Donors to fund the necessary research are important in the short term; in the medium term we expect this
problem to be constrained by available talent rather than funding.

Who is working on this problem?

See this list of organisations and funders for AI risk research.


Google DeepMind does research into how to produce intelligent machines, including on safety and control.

The Future of Humanity Institute at Oxford University carries out research on possible methods for
controlling an AI.
AI Impacts aims to improve our understanding of the likely impacts of human-level artificial intelligence.
The Machine Intelligence Research Institute conducts primarily technical research into solving the AI
control problem.
The Future of Life Institute raised $10 million from Elon Musk to make grants supporting AI safety research by other groups and individuals over several years.
The Centre for Applied Rationality and the effective altruism community work on growing the number of people willing and able to take action to deal with risks from AI.
We list the organisations working on improving collective decision making in another problem profile
(forthcoming).

What can you concretely do to help?

The most promising options are:

Become an AI risk researcher. Read more in our review of this career path.
Work at an AI risk research organisation doing management, communications or operations.
‘Earn to give’ and provide funding to one of the organisations above.
Take a job anywhere in the AI industry or study machine learning in academia, in order to accumulate
expertise and grow the informed AI risk community. Some companies to especially consider include:
Google DeepMind, OpenAI, Facebook, Vicarious, Baidu.

Less promising options that are available to a wider range of people include:

Work to promote effective altruism; promote the paths above to people who are better placed to act on
them; or improve collective decision making processes.
Increase your own career capital, for example by building your credibility with others or obtaining influence within important institutions, in the hope that this will put you in a better position to help reduce AI-related risks in the future.

These are all forms of ‘capacity building’ which put us in a better position to deal with the problem in the future.

Further reading
Our career profile on AI safety research.
A popular introduction to concerns about AI (with a few caveats and corrections here).
The book Superintelligence, by Nick Bostrom.
Tim Urban of WaitButWhy suggests Our Final Invention by James Barrat as a more accessible book
about AI risks, though we have not yet read it ourselves.
The Open Philanthropy Project’s report on the problem and options for addressing it.

Want to work on this problem?

Go and update your career plan to make sure you actually work on it.

Think you should work on something else?

Take our quiz to get more ideas on which global problem you should work on. It’s only six questions.


Or join our newsletter and get notified when we release new problem profiles.

