Philosophical issues: AI in Games (2017)
July 11, 2017 · andy · AI and Society, Feature, Philosophy, Philosophy-AI-Book
Before we go deeper into the discussion of the philosophical issues that are
created by Artificial Intelligence systems (see here for a list of posts), it will
be useful to get a rough idea of the kinds of systems we are talking about. This is
not an exhaustive list by any means, and it does not even represent every kind of
AI system. It is a selection of systems that, in my opinion, pose interesting
philosophical questions. We will use some of these systems as examples in later
discussions.

Contents

1 Deep Blue, IBM Watson, and AlphaGo
2 Computer winners, human losers
3 Why do humans play games?
4 Are computers actually playing?
Deep Blue, IBM Watson, and AlphaGo
It was only twenty years ago, but it feels like another universe. In 1997, Bill Clinton was president of the US, Nokia was the world’s largest mobile phone manufacturer, and the first iPhone was still ten years away. People still talked of the greenhouse effect (rather than global warming), and you would probably have read this over a dialup modem on a Pentium II PC with 64 MB of main memory, with Internet access through AOL (like roughly half of America’s online households). Oh, and forget about googling anything: Google would not be founded until the following year.

Still, even then computers were beginning to beat humans at games in which humans had been considered invincible. In May 1997, the IBM computer Deep Blue won its rematch against Garry Kasparov, then the strongest chess player in the world. It was a narrow win: going into the last game, the two opponents were tied at 2.5 points each. On May 11, Deep Blue won the sixth game of the rematch and officially became the first computer to beat a reigning human world champion at chess under tournament conditions and time controls. For a while it was debated whether IBM had cheated, or whether Kasparov lost because he played badly rather than because the computer had become invincible. But whatever the reason for Kasparov’s loss in 1997, computers have since become powerful enough to defeat any human opponent at chess with ease, and nobody disputes this today.

IBM earned a much more complex win in 2011, when its computer Watson beat two human champions at Jeopardy. The game, for those who don’t know it, presents the player with clues like: “Sakura cheese from Hokkaido is a soft cheese flavored with leaves from this fruit tree.” The player then has to supply the missing word, in this case the name of the fruit tree (and phrase the answer in the form of a question, but this is a purely formal requirement that does not affect the difficulty of the game). The clues were presented to Watson in text form, so the machine didn’t have to recognise spoken language, which would have been harder in 2011 than it is now. Still, as the sample above shows, the clues demand considerable natural language understanding: the machine has to work out what is actually being asked and then look up the answer in a very extensive database of facts. Watson beat the two human champions by a wide margin and has since been developed into a knowledge-processing platform for other, more serious applications. IBM still uses ‘Watson’ as a label for systems that can recognise objects in pictures, support medical diagnoses, and talk to companies’ customers in natural language over the phone.

The third important milestone in game AI (and, many say, the last) came in 2016 and 2017, when DeepMind’s AlphaGo decisively won nearly all of its games against two top human players of the ancient Chinese game of Go. In March 2016, AlphaGo won 4 out of 5 games against the top professional Lee Sedol. A year later, the program, now improved with more playing experience, won all three games against the strongest human player, Ke Jie (Future of Go Summit, 23-27 May 2017). At the same event there was also a team game in which five human professionals (Chen Yaoye, Zhou Ruiyang, Mi Yuting, Shi Yue, and Tang Weixing) played together against AlphaGo. They, too, lost to the program.

Computer winners, human losers


Looking at this history, one thing becomes obvious: the clear and final defeat of humans by AI players does not seem to deter humans from playing these games. One might have expected chess championships to lose some of their appeal after Kasparov’s defeat in 1997. More generally, one might expect games that humans cannot possibly win against a computer to become, in the long run, less attractive to human players. What’s the point of playing a game professionally, one might say, if even the strongest human player is no longer the strongest player? If there will always be a player far stronger than even the best human champion?

But, perhaps surprisingly, this hasn’t happened. Even after 1997, we still have chess championships, and the current world champion, Magnus Carlsen, was just six years old when Kasparov lost to Deep Blue. By the time he decided to pursue chess as a career, it was already clear that humans could not hope to beat the strongest computers. Still, this did not keep him from becoming a professional chess player, and neither did it deter thousands of others. So what happened here? Why are humans not as upset by losing to computers as one might expect? Why do they play on, even after computers have “solved” a game?

One way to look at the question is by comparison with other sports. Humans have always run slower than lions, slower than humans on horseback, and slower than cars. Still, this has not stopped human runners from running marathons or competing in the 100m sprint at the Olympic Games. We don’t usually question the point of the Olympics on the grounds that cars are faster.

This provides a clue. Human sports, despite appearances, are not only about the outcome, in the sense that some numerical measure of performance is the only thing that counts for the athlete. If that were so, then perhaps all sports would have been given up once people realised that other entities (animals, cars, computers) perform better on the relevant measure. But if athletes don’t focus only on the outcome, what is it that they pursue when they take up a sport?

What is the true purpose of a game, or a competitive sport, for those who
participate in it?
Why do humans play games?
Here are three obvious answers to the question why we, as humans, play games:

1. To enjoy the game, to be entertained.
2. To compete against others (a social function).
3. To challenge oneself to improve.
These are three distinct reasons. I might play chess or Go because I am bored and I want to spend an hour doing something that I find interesting and
entertaining (reason 1). Or I might want to compete against others in a match, to
see who is stronger, and to feel the thrill of winning against a strong opponent
(reason 2). This is different from the first reason. If I play for entertainment
against my little daughter (reason 1), I might not insist on winning. In fact, I
might try to lose in order to motivate my daughter to play against me again, and to
make sure that she doesn’t feel bad about the game. Reason 2, instead, focuses on
the built-in sense of competition that primates (humans included) have to such a high degree: wanting to be the strongest, fastest, cleverest, and most admired among all those one competes against, and thus to rise to the top of one’s social hierarchy. This is not a pretty trait, but human societies are built on it, and it’s hard to deny that competition motivates much of our everyday behaviour: from rankings in school, to promotions at work, to the way we display status and wealth through fashion, the cars we drive, and the houses we live in.
Competitive games are just another way to earn and display rank.

And finally, I might be playing in order to challenge myself to improve at a skill
that I find admirable and that I lack to some extent. We often learn things for that reason. People take drawing classes, learn a foreign language, or learn to swim, to garden, or to play chess because it gives them a measurable way to improve, to feel that they are getting better at something, and to experience the pleasure that comes from gaining competence in a new area of human activity. Improving at something is in itself already a rewarding experience, regardless of any other, external gains that may or may not come from it.

Are computers actually playing?


Having seen why we, as humans, play, we can even use this as a definition of a game: a game is an activity performed voluntarily by individuals in order to be entertained, to compete with others and gain rank, or to challenge themselves to improve at the activity itself. In all three cases, positive emotions are derived from achieving these goals through playing the particular game.

Now we can see two things:

First, the human goals of game playing are not endangered by computers playing
these games. Clearly, I can enjoy a game, I can be entertained by it, even if the
game is played (better) by computers. I can also enjoy a game that I know I will
lose (for example, if I play against a chess program). If my goal is to pleasantly
pass the time, I can still do so (either against a human or against a computer)
even knowing that computers can, in principle, always beat humans in that game.
This realisation simply does not affect my enjoyment of actually playing the game.
Similarly, if my aim is to compete socially for rank, then I will not play against
a computer (since computers don’t participate in human ranking schemes), and I will
not measure my performance against any computer’s. Instead, I will compete with a
human for a place on a human rank list. The fact that computers might be better at
some activity does not affect my pursuit of rank amongst my peers. Finally, if my
aim is to improve my own abilities, again it will be irrelevant to me whether
computers can play the same game better than humans. I’m simply not interested in
this fact, since my motivation is to become better by learning to play this game. A
computer can help me do this, but it cannot deter me from doing it. So there is
really no reason why humans should abandon competitive games just because computers
play them better. (The same goes for running: the enjoyment, the competition, and the self-improvement that come with running are not affected by the fact that cars are faster!)

Second, we can question whether computers can even be said to “play a game” in the same sense that we do. If we define “playing a game” as I suggested above, then computers cannot be said to “be playing” chess or Go at all. We use this kind of language as a shorthand, but it is not literally true:

Computers don’t play for their own enjoyment or entertainment. Playing is, for them, an activity they are programmed to perform, no different from calculating a column in a spreadsheet or steering a self-driving car. None of these activities causes the machine any enjoyment or annoyance. They are all just activities performed for no other reason than that the machine was instructed to perform them.

Competing against others (human or not) is also not a motivation a machine might entertain. Machines don’t even realise that they are competing, since they (at least for the moment) lack any awareness of their own identity. A game, to them, is just a series of complex calculations; they have no sense of competition, no desire to win, and no consciousness of any social hierarchy that might be affected by their winning a game. Deep Blue didn’t suddenly “feel great” because it beat Kasparov.

And finally, although (at least some) machines do improve by playing games (AlphaGo learned many of its Go skills by playing against itself), such improvement is not the goal of the machine itself. Rather, the programmer or operator orders the machine to play games. The machine itself is not motivated to play by a wish for self-improvement.

Thus, we might argue that what machines do when they “play” chess is, in fact, a
totally different activity from what humans do when they play chess. Machines don’t
actually “play” anything, since they lack the motivational infrastructure for
playing games. They perform a behaviour, but this, in itself, is not yet playing.
In the same way, a car that drives along a marathon track is not, in any valid
sense, “running” a marathon. It does something entirely different, namely driving
along a marathon track. And a computer that performs the sequence of commands that
implement legal chess moves is not actually “playing” chess. It is just performing
legal chess moves. This is an important distinction that is almost always
overlooked, since we are so fond of using phrases like “AlphaGo plays a game,”
which are fundamentally misleading as to what actually happens when AlphaGo decides
to perform a move on the Go board.
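
To make the distinction concrete, here is a minimal sketch in Python of the kind of procedure a game-playing program executes when we say it “plays”: a bare-bones minimax search that simply enumerates legal moves and scores the positions they lead to. The helpers legal_moves, apply_move, and evaluate are hypothetical placeholders for a real engine’s internals, and real systems like Deep Blue or AlphaGo use far more sophisticated search and learned evaluation; the sketch is only meant to illustrate that nothing in such a procedure corresponds to enjoyment, rank, or a wish to improve.

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Bare-bones minimax search over a generic two-player game.

    The whole "game" is reduced to enumerating legal moves and scoring
    the positions they lead to. There is no motivational state anywhere
    in this loop: no enjoyment, no sense of competition, no wish for
    self-improvement. (legal_moves, apply_move, and evaluate are
    hypothetical helpers a real engine would have to provide.)
    """
    moves = list(legal_moves(state))
    if depth == 0 or not moves:
        # Leaf of the search: just return a numerical score.
        return evaluate(state), None

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               legal_moves, apply_move, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               legal_moves, apply_move, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move


# "Playing" a move is then nothing more than:
#   _, move = minimax(current_position, 3, True,
#                     legal_moves, apply_move, evaluate)
# The program performs legal moves; whether that counts as "playing"
# is exactly the question raised above.
```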

Let’s leave it at that for the moment! In the next few posts we will briefly
discuss other areas of application for AI systems, and we will have a look at what
philosophical problems they pose! Stay tuned.
