SPRINGER BRIEFS IN ETHICS

Perihan Elif Ekmekci
Berna Arda

Artificial Intelligence and Bioethics
SpringerBriefs in Ethics

SpringerBriefs in Ethics envisions a series of short publications in areas such as business ethics, bioethics, science and engineering ethics, food and agricultural ethics, environmental ethics, human rights and the like. The intention is to present concise summaries of cutting-edge research and practical applications across a wide spectrum.
SpringerBriefs in Ethics are seen as complementing monographs and journal articles with compact volumes of 50 to 125 pages, covering a wide range of content from professional to academic. Typical topics might include:
• Timely reports on state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a
contextual literature review
• A snapshot of a hot or emerging topic
• In-depth case studies or clinical examples
• Presentations of core concepts that students must understand in order to make
independent contributions

More information about this series at http://www.springer.com/series/10184


Perihan Elif Ekmekci · Berna Arda

Artificial Intelligence and Bioethics

Perihan Elif Ekmekci
TOBB University of Economics and Technology
Ankara, Turkey

Berna Arda
Ankara University Medical School
Ankara, Turkey

ISSN 2211-8101 (print)   ISSN 2211-811X (electronic)


SpringerBriefs in Ethics
ISBN 978-3-030-52447-0 ISBN 978-3-030-52448-7 (eBook)
https://doi.org/10.1007/978-3-030-52448-7
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

When the effort to understand and explain the universe is combined with human creativity and visions of the future, we get works of science fiction. One of the pioneers of this combination was undoubtedly Jules Verne, with his brilliant works. His book “From the Earth to the Moon,” published in 1865, was one of the first examples of the science fiction novel. However, science fiction has always been ahead of its era: a real journey to the moon was accomplished only in 1969, about a century after the publication of the book.
The twentieth century was a period in which scientific developments progressed with giant steps. In the twenty-first century, we are now in an age in which scientific knowledge increases exponentially, and the half-life of knowledge is very short. The amount of information to be mastered has grown and diversified so much that it now lies well beyond the control of any individual.
The book you are holding is about Artificial Intelligence (AI) and bioethics. It is intended to draw attention to the value problems of an enormous phenomenon of human creativity whose limits are still uncertain. The book consists of the following chapters.
The first section is History of Artificial Intelligence, presenting a historical perspective on the development of technology and AI. We take a quick tour from the ancient philosophers to Rene Descartes, and from Lady Ada Lovelace to Alan Turing: the prominent pioneers who contributed to the philosophy and the actual creation of this new phenomenon, some of whom suffered many injustices during their lifetimes. This section ends with a short description of the state of the art of AI.
The second section is Definitions. It aims to explain some of the key terms used in the book to familiarize readers with them. We also seek an answer to the following question: what makes an entity, human or machine, intelligent? In the face of this fundamental question, relevant concepts such as strong AI, weak AI, heuristics, and the Turing test, which have become increasingly clear over time, are discussed separately in light of the literature.
The Personhood and Artificial Intelligence section addresses a fundamental question about the ethical agency of AI. Personhood is an important philosophical, psychological, and legal concept for AI because of its implications for moral responsibility. Didn't the law build all punishment on the premise that individuals with personhood should at the same time be responsible for what they do (and sometimes fail to do)? Until recently, we all lived in a world dominated by an anthropocentric
approach. Human beings have always been at the top of the hierarchy among all living and non-living entities, and always the most valuable. However, the emergence of AI, with its potential to develop human-level or above-human-level intelligence, challenges human beings' superior position by raising a claim to personhood. Many examples from daily life, such as autonomous vehicles, military drones, and early warning systems, are discussed in this section.
The following section is on bioethical inquiries about AI. The first question concerns the main differences between conventional technology and AI, and whether the current ethics of technology can be applied to the ethical issues of AI.
After discussing the differences between conventional technology and AI and
justifying our arguments about the need for a new ethical frame, we highlight the
bioethical problems arising from the current and future AI technologies. We address
the Asilomar Principles, the Montreal Declaration, and the Ethics Guidelines for
Trustworthy AI of the European Commission as examples of suggested frameworks
for ethics of AI. We discuss the strengths and weaknesses of these documents. This
section ends with our suggestion for the fundamentals of the new bioethical frame
for AI.
The final section focuses on the ethical implications of AI in health care. Medicine, one of the oldest professions in the world, is and will continue to be affected by AI. The moral atmosphere, shaped by the Hippocratic tradition of the Western world, was initially sufficient when all that was needed was a virtuous physician. However, by the mid-twentieth century, the impact of technology on medicine had become very evident. On the one hand, physicians were starting to lose their ancient techne-oriented professional identity. On the other hand, a new patient type emerged: a figure demanding her/his rights and beginning to question the physician's paternalism. In these circumstances, it was inevitable that different approaches would replace traditional medical ethics.
Currently, these new approaches are challenged once more by the emergence of AI in health care. The main question is how the roles of patients and caregivers will be shaped within the medical ethics framework in the AI world. This subsection, AI in health care and medical ethics, suggests answers and sheds light on possible areas of ethical concern.
Chess grandmaster Garry Kasparov, one of the most brilliant minds of the twentieth century, tried to defeat Deep Blue, but his defeat was inevitable. Lee Sedol, a young Go master, decided to retire in November 2019 after being defeated by the AI system AlphaGo; of course, AlphaGo represented a much more advanced level of AI than Deep Blue. Lee Sedol's reasoning for his early decision was that even if he became the number one, AI was invincible and "would always be at the top." We now accept that Lee Sedol's decision was based on a realistic prediction. The conclusion presents projections about AI in light of all the previous chapters, and solutions for the new situations that the scientific world will face.

This book contains what two female researchers can see and respond to regarding AI from their specialty field, bioethics. We want to continue asking questions and looking for answers together.
We wish you an enjoyable reading experience.

Ankara, Turkey
Perihan Elif Ekmekci
Berna Arda
Contents

1 History of Artificial Intelligence
  1.1 What Makes an Entity – Human or Machine – Intelligent?
  1.2 First Steps to Artificial Intelligence
  1.3 The Dartmouth Summer Research Project on Artificial Intelligence
  1.4 A New Partner in Professional Life: Expert Systems
  1.5 Novel Approaches: Neural Networks
  References
2 Definitions
  2.1 What is Artificial Intelligence?
    2.1.1 Good Old-Fashioned Artificial Intelligence
    2.1.2 Weak Artificial Intelligence
    2.1.3 Strong Artificial Intelligence
    2.1.4 Heuristics
    2.1.5 Turing Test
    2.1.6 Chinese Room Test
    2.1.7 Artificial Neural Network
    2.1.8 Machine Learning
    2.1.9 Deep Learning
  2.2 State of the Art
  References
3 Personhood and Artificial Intelligence
  3.1 What Makes Humans More Valuable Than Other Living or Non-living Entities?
  3.2 Can Machines Think?
  3.3 Moral Status
  3.4 Non-discrimination Principles
  3.5 Moral Status and Ethical Value: A Novel Perspective Needed for Artificial Intelligence Technology
  References
4 Bioethical Inquiries About Artificial Intelligence
  4.1 Ethics of Technology
  4.2 Does Present Ethics of Technology Apply to Artificial Intelligence?
  4.3 Looking for a New Frame of Ethics for Artificial Intelligence
  4.4 Common Bioethical Issues Arising from Current Use of Artificial Intelligence
    4.4.1 Vanity of the Human Workforce, Handing the Task Over to Expert Systems
    4.4.2 Annihilation of Real Interpersonal Interaction
    4.4.3 Depletion of Human Intelligence and Survival Ability
    4.4.4 Abolishing Privacy and Confidentiality
  4.5 Bioethical Issues on Strong Artificial Intelligence
    4.5.1 The Power and Responsibility of Acting
    4.5.2 Issues About Equity, Fairness, and Equality
    4.5.3 Changing Human Nature Irreversibly
  4.6 The Enhanced/New Ethical Framework for Artificial Intelligence Technology
    4.6.1 Two Main Aspects of the New Ethical Frame for Artificial Intelligence
    4.6.2 The Ethical Norms and Principles That Would Guide the Development and Production of Artificial Intelligence Technology
    4.6.3 Evolution of Ethical Guidelines and Declarations
    4.6.4 Who is the Interlocutor?
  4.7 The Ethical Frame for Utilization and Functioning of Artificial Intelligence Technology
    4.7.1 How to Specify and Balance Ethical Principles in Actual Cases in a Domain?
    4.7.2 Should We Consider Ethical Issues of Artificial Intelligence in the Ethical Realm of the Domain They Operate?
    4.7.3 Would Inserting Algorithms for Ethical Decision Making in Artificial Intelligence Entities Be a Solution?
  References
5 Artificial Intelligence in Healthcare and Medical Ethics
  5.1 Non-maleficence
  5.2 Change of Paradigm in Health Service
    5.2.1 Abolition of the Consultation Process
    5.2.2 Loss of Human Capacity
  5.3 Privacy and Confidentiality
  5.4 Using Human Beings as a Means to an End
  5.5 Data Bias, Risk of Harm and Justice
  5.6 Lack of Legislative Regulations
  References
6 Conclusion
Chapter 1
History of Artificial Intelligence

1.1 What Makes an Entity – Human or Machine – Intelligent?

When we discuss the ethical issues about artificial intelligence (AI), we focus on its human-like abilities. These human-like abilities fall under two main headings: doing and thinking. An entity that can think (understand the setting; consider the options, consequences, and implications; reason; and finally decide what to do) and then physically act accordingly in a given circumstance may be considered intelligent. An entity is thus intelligent if it can think and do. However, neither ability is easy to define. Looking for definitions, we trace back to the fourth century BC to Aristotle, who laid the foundations of epistemology and of formal deductive reasoning. For centuries, philosophical inquiries about body and mind and how the brain works accompanied advances in human efforts to build autonomous machines. The seventeenth century hosted two significant figures in this respect: Blaise Pascal, who invented the mechanical calculator, the Pascaline, and Rene Descartes, who codified the body-mind dichotomy in his book "Treatise on Man". The body-mind dichotomy, known as the Cartesian system, holds that the mind is an entity separate from the body. According to this perspective, the mind is intangible, and the way it works is so unique and metaphysical that it cannot be duplicated in an inorganic, human-made artifact. Descartes argued that the body, on the other hand, was an automatic machine, like the irrigation fountains in the elegant gardens of French chateaus or the clocks on church and town towers, which were popular in European towns at the time. It was the era when anatomical dissections of the human body became more frequent in Europe. The growing knowledge of human anatomy revealed facts about the body's pumping engine, the heart, and the circulation of blood through its tubes, the vessels, and enabled analogies between the human body and a working machine. Descartes stated that the pineal gland was the place where the integration between the mind and the material body occurred. The idea of the body-mind dichotomy was
developed further. It survived to our day and, as discussed in the forthcoming chapters, still constitutes one of the main arguments against the personification of AI
entities.
The efforts of philosophers to formulate thought, ethical reasoning, the ontology of humanness, and the nature of epistemology continued in the seventeenth and eighteenth centuries with Gottfried Wilhelm Leibniz, Baruch Spinoza, Thomas Hobbes, John Locke, Immanuel Kant, and David Hume. Spinoza, a contemporary of Descartes, studied the Cartesian system and disagreed with it. His rejection was based on his pantheist perspective, which viewed mind and body as two different aspects of the human being, both merely representations of God. Another highly influential philosopher and mathematician of the time, Gottfried Wilhelm Leibniz, imagined mind and body as two different monads exactly matching each other to form a corresponding system, similar to the cogwheels of a clock. In an era when communication among contemporaries was limited to direct contact or to access to one of the few published books, Leibniz travelled throughout Europe and talked to other scientists and philosophers, which enabled him to comprehend the existing state of the art in both theory and implementation. What he inferred was the need for a common language of science, so that thoughts and ideas could be discussed in the same terms. This common language required a formalism in which human thoughts are represented by symbols that can be manipulated by calculation. Leibniz's calculus ratiocinator, a calculus for reasoning, could not accomplish the task of symbolically expressing logical terms and reasoning on them, but it was undoubtedly an inspiration for the "Principia Mathematica" of Alfred North Whitehead and Bertrand Russell in the early twentieth century.
While each of these philosophers shaped contemporary philosophy, efforts to produce autonomous artifacts proceeded. Jacques de Vaucanson's mechanical duck symbolized the state of the art of automata in the eighteenth century. This automatic duck could beat its wings, drink, eat, and even seemingly digest what it ate, almost like a living being. In the nineteenth century, artifacts and humanoids took their place in literature. The best known were Ernst Theodor Wilhelm Hoffmann's "The Sandman", Johann Wolfgang von Goethe's "Faust" (Part II), and Mary Wollstonecraft Shelley's "Frankenstein". Hoffmann's work inspired the famous ballet "Coppélia", which featured a doll that comes to life and becomes an actual human being. These works may be considered the part played by art and literature in elaborating the idea of artifacts becoming more human-like and in suggesting the potential of AI to have human attributes. L. Frank Baum's mechanical man "Tik-Tok" was an excellent example of the intelligent non-human beings of the time, and Jules Verne and Isaac Asimov also deserve mention as pioneering writers who put AI in their works (Buchanan 2005). These pieces of art helped prepare the minds of ordinary people for the possibility of intelligent beings other than our human species.

1.2 First Steps to Artificial Intelligence

In the 1840s, Lady Ada Lovelace elaborated the vision of the "analytical engine" designed by Charles Babbage. Babbage had already been dreaming of a machine that could calculate logarithms. The idea of calculators had been introduced in 1642 by Blaise Pascal and further developed by Leibniz. Still, these were far simpler than the automatic table calculator, the "difference engine," that Babbage started to build in 1822. However, the difference engine was left behind when Lady Ada Lovelace introduced her perspectives on the analytical engine (McCorduck 2014).
The analytical engine was planned to perform arithmetical calculations as well as to analyse and tabulate functions, with a vast data storage capacity and a central processing unit controlled by algebraic patterns. Babbage could never finish building the analytical engine, but his efforts are still considered a cornerstone in the history of AI. This praise is mostly due to the partnership between him and Ada Lovelace, a brilliant woman who had been tutored in mathematics from the age of four, conceptualized a flying machine at the age of twelve, and became Babbage's partner at seventeen. Lovelace thought that the analytical engine had the potential to process symbols representing all subjects in the universe and could deal with massive data to reveal the facts of the real world. She imagined that this engine could go far beyond scientific calculation and would even be able to compose a piece of music (Boden 2018).
She elaborated on the original plans for the capabilities of the Analytical Engine and published her notes in an English journal in 1843. Her elaboration contained the first algorithm intended to be carried out by such a machine, in effect the first computer program. She had the vision, she knew what to expect, but she and her partner Babbage could not figure out how to realize it. The analytical engine did not answer the "how to" question. Ada Lovelace's vision, however, survived and materialized at the beginning of the twenty-first century.
In the nineteenth century, George Boole worked on constructing "the mathematics of the human intellect": finding the general principles of reasoning through symbolic logic, using only the symbols 0 and 1. Boole's binary system later constituted the basis for computer languages. Whitehead and Russell followed the same path when they wrote Principia Mathematica, a book which has been a cornerstone for philosophy, logic, and mathematics and an inspirational guide for scientists working on AI.
The twentieth century was the century in which AI flourished both in theory and implementation. In 1936 Alan Turing shed light on the "how to" question left unanswered by Lady Lovelace by proposing the "Turing Machine." In essence, his insight was very similar to Lovelace's. He suggested that any problem that can be represented symbolically may be solved by an algorithm, and that "it is possible to invent a single machine which can be used to compute any computable sequence" (Turing 1937). In his paper "On Computable Numbers, with an Application to the Entscheidungsproblem," he defined how this machine works. It is worth mentioning that he used terms like "remember, wish, and behave" while describing the abilities of the Turing Machine, words that
were used only for functions of the human mind before. The following sentences
show how he personalized the Turing Machine:
The behaviour of the computer at any moment is determined by the symbols which he is
observing and his state of mind at that moment.

It is always possible for the computer to break off from his work, to go away and forget all
about it, and later to come back and go on with it.
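
The machine Turing described can be sketched in a few lines of modern code. The following Python fragment is a minimal illustration, not Turing's own construction; the transition table, which simply inverts a binary string, and all names in it are invented for the example:

    # A minimal one-tape Turing machine simulator (illustrative sketch).
    def run_turing_machine(tape, transitions, state="start", blank="_"):
        tape, head = list(tape), 0
        while state != "halt":
            symbol = tape[head] if 0 <= head < len(tape) else blank
            # The next action depends only on (state, observed symbol),
            # echoing Turing's "state of mind" plus the symbol observed.
            state, write, move = transitions[(state, symbol)]
            if head == len(tape):
                tape.append(blank)
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    # Illustrative transition table: invert every bit, halt at the first blank.
    invert = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine("10110", invert))  # prints 01001

The point of the sketch is that the machine's behaviour at any moment is fully determined by its current state and the symbol it observes, exactly as in the passage quoted above.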

Alan Turing argued that a machine capable of doing anything that requires intelligence could be produced, and that the processes occurring in the human mind could be modelled. In 1950, he published another outstanding paper, "Computing Machinery and Intelligence" (Turing 1950). This paper opened with the question "Can machines think?" and proposed the "imitation game" to answer this fundamental question. Turing described the fundamentals of the imitation game as follows:
It is played with three people, a man (A), a woman (B), and an interrogator (C) who may
be of either sex. The interrogator stays in a room apart from the other two. The interrogator
aims to determine which of the other two is the man and which one is the woman. He knows
them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." … The object of the game for the third player (B) is to help the interrogator.
The best strategy for her is probably to give truthful answers. She can add such things as “I
am the woman, do not listen to him!” to her answers, but it will avail nothing as the man can
make similar remarks.

Communication between the rooms was performed by a teleprinter, or the answers were repeated by an intermediary, to avoid hints transferred by the tone of voice. After
this theoretical explanation of settings, Turing replaced the initial question about
the ability of machines to think by the following ones: “What will happen when a
machine takes the part of A in this game?” “Will the interrogator decide wrongly as
often when the game is played like this as he does when the game is played between
a man and a woman?”.
Turing discerned possible strong objections to the idea of thinking machines. The act of thinking has always been attributed to human beings, if not considered their sole unique property, distinguishing them from any other natural or artificial being and securing their superior hierarchical position in the world. Turing wrote in his paper, "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."
Turing was aware that arguments against thinking machines would arise from several main grounds. The first is the theological objection, which argues that God has provided man with an immortal soul so that he can think; God did not give part of his soul to any other creature, hence none of them possess the ability to think. The second objection holds that thinking machines would threaten the commanding, superior hierarchical position of human beings in the world, and that the consequences of this would be unacceptably dreadful. Turing named this argument the "heads in the sand" objection, which was entirely appropriate considering its implications. The third is the mathematical objection, which rests on the idea that machines can produce unsatisfactory results; Turing deflected it by pointing at the fallacious conclusions that also come from human minds. The fourth objection Turing considered was consciousness, or meta-cognition in contemporary terms. This objection came from Professor Jefferson's Lister Oration and stated that composing music or writing a sonnet was not enough to prove the existence of thinking: one must also know what one has written or produced, and feel the pleasure of success and the grief of failure. Turing overcame this objection by putting forth that proponents of the consciousness objection "could be persuaded to abandon it rather than be forced into the solipsist position," and that one should solve the mystery of consciousness before building an argument on it.
It is plausible to say that Turing’s perspective was ahead of Lady Lovelace’s at one
point. Lovelace stated that the analytical engine (or an artifact) could do whatever we
know how to order it to perform. With this, she cast out the possibility of a machine
that can learn and improve its abilities. Her vision complied with Gödel’s Theorem,
which indicated that computers are inherently incapable of solving any problem
which humans can overcome (McCorduck 2014). At this point, Turing proposed
to produce a program to simulate a child’s mind instead of an adult is so that the
possibility to enhance capabilities of the machine’s thought process would be high,
just like a child’s learning and training process. Reading through this inspirational
paper, we can say that Alan Turing had an extraordinary broad perspective about
what AI could be in the future.
While Turing was developing his ideas about thinking machines, other scientists in the USA were producing inspirational ideas on the subject. Two remarkable scientists, Warren McCulloch and Walter Pitts, are worth mentioning at this point. In 1943, they published a paper titled "A Logical Calculus of the Ideas Immanent in Nervous Activity" (McCulloch and Pitts 1943). The paper referred to the anatomy and physiology of neurons. It argued that the inhibitory and excitatory activities of neurons and neuron nets are grounded in the "all-or-none" law, and that "this law of nervous system was sufficient to ensure that propositions may represent the activity of any neuron." Thus, neural nets could compute logical propositions of the mind. This argument proposed that the physiological activities among neurons correspond to relations among logical propositions; in this respect, every activity of the neurons corresponded to a proposition. The paper had a significant impact on the design of the first digital computers by John von Neumann (Boden 1995). However, the authors' arguments inevitably inspired the proponents of the archaic inquiries into how the mind works, the unknowable object of knowledge, and the dichotomy of body and mind. The authors concluded that since all psychic activities of the mind work according to the "all-or-none" law of neural activities, "both the formal and final aspects of that (mind) activity which we are wont to call mental are rigorously deducible from present neurophysiology." The paper ended with the assertion that "mind no longer goes more ghostly than a ghost." There is no doubt that these assertions constructed the main arguments of cognitive science: the idea that cognition is computation over representations, and that it may take place in any computational system, whether neural or artificial.
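
The McCulloch-Pitts unit is simple enough to state directly. The sketch below is a modern illustration rather than the authors' own notation; it shows the all-or-none law at work: the unit outputs 1 only when the weighted sum of its excitatory inputs reaches a threshold and no inhibitory input fires, so logical propositions such as AND, OR, and NOT fall out as special cases:

    # A McCulloch-Pitts neuron: all-or-none output (illustrative sketch).
    def mcp_neuron(inputs, weights, threshold, inhibitory=()):
        if any(inputs[i] for i in inhibitory):       # absolute inhibition
            return 0
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0        # all-or-none law

    # Logical propositions realized as single neurons:
    AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mcp_neuron([a],    [0],    threshold=0, inhibitory=[0])

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1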
Seven years after this inspiring paper, in 1950, while Turing was publishing his previously mentioned paper on the possibility of thinking machines, Claude Shannon's article "Programming a Computer for Playing Chess" appeared, shedding light on some of the fundamental questions of the issue (Shannon 1950). In this article, he asked what features would differentiate a machine that could think from a calculator or a general-purpose computer. For him, the capability to play chess would imply that this new machine could think. He grounded this idea on the following arguments. First, a chess-playing device should be able to process not only numbers but also mathematical expressions and words, what we would today call representations and images. Second, the machine should be able to make proper judgments about future actions by a trial-and-error method based on previous results, meaning the new entity would have the capacity to operate beyond "strict, unalterable computing processes." The third argument depended on the nature of the decisions in the game. When a chess player decides to make a move, this move may be right, wrong, or tolerable, depending on what her rival does in response. A decision that any average chess player would consider faulty will be acceptable if the opponent makes worse decisions in response. Hence a chess player's choices are not 1 or 0, right or wrong, but "rather have a continuous range of quality from the best to the worst". Shannon described a chess player's strategy as "a process of choosing a move in a given position". In game theory, if the player always chooses the same move in the same position (a pure strategy), she becomes very predictable, since her rival would be able to figure out her strategy after a few games. Hence a good player has to have a mixed strategy, one involving a reasoning procedure that operates with statistical elements so that the player can make different moves in similar positions.
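
Shannon's two points, that move quality is a continuum and that a good player mixes her choices statistically, can be illustrated with a small sketch. The moves and evaluation scores below are invented for the example, and the softmax weighting is a modern device, not Shannon's own formulation:

    # Mixed strategy over moves scored on a continuous quality scale.
    import math, random

    def choose_move(scored_moves, temperature=1.0):
        # Higher-scoring moves are more probable, but never certain.
        weights = [math.exp(score / temperature) for _, score in scored_moves]
        return random.choices([move for move, _ in scored_moves], weights)[0]

    # Hypothetical evaluations, from good to poor:
    moves = [("e2e4", 0.60), ("d2d4", 0.55), ("a2a3", -0.40)]
    print(choose_move(moves))  # usually e2e4 or d2d4, occasionally a2a3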
Shannon thought that if we could build a chess-playing machine with these intelligent qualifications, it would be followed by machines "capable of logical deduction, orchestrating a melody or making strategic decisions for the military." Today we are far beyond Shannon's predictions of 1950.

1.3 The Dartmouth Summer Research Project on Artificial Intelligence

While Shannon was working on the essentials of a chess-playing machine, other remarkable scientists were working on human reasoning. John McCarthy was one of them, making the significant proposition to use first-order predicate calculus to simulate human reasoning and to use a formal, homogeneous representation for human knowledge. Norbert Wiener, a mathematician at the Massachusetts Institute of Technology (MIT), began to work with the physiologist Arturo Rosenblueth and the engineer Julian Bigelow, and this group published the influential paper "Behavior, Purpose and Teleology" in Philosophy of Science in 1943. Shortly before, in 1942, Rosenblueth had met Warren McCulloch, who was already working with Walter Pitts on the mathematical description of the neural behaviour of the human brain. Simultaneously, various scientists were working on similar problems at different institutions. In the summer of 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester,
and Claude Shannon came up with the idea of summoning all scientists working in the field of thinking machines to share what they had achieved and to develop perspectives for future work. They thought it was the right time for this conference, since studies on AI were to "proceed based on the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." The name of the gathering was "the Dartmouth Summer Research Project on AI". It was planned to last for two months, with the attendance of ten scientists working in the field of AI. At the Dartmouth conference, the term "AI" was officially used for thinking machines for the first time. Each of the four scientists who wrote the proposal offered his own research area. Shannon was interested in the application of information theory to computing machines and brain models, and in the matched environment-brain model approach to automata. Minsky's research area was learning machines: he was working on a machine that could sense changes in the environment and adapt its output to them. He described his proposal as a very tentative one and hoped to improve his work in the summer course. Rochester proposed to work on the originality and randomness of machines, and McCarthy's proposal was "to construct an artificial language which a computer can be programmed to use on problems requiring conjecture and self-reference" (McCarthy et al. 2006). However, the conference did not meet the
high expectations of the initiating group. McCarthy attributed this setback to the fact that most of the attendees did not come for the whole two months.
Some of them were there for two days while others came and left on various dates.
Moreover, most of the scientists were reluctant to share their research agenda and
collaborate. The conference did not yield a common agreement on general theory
and methodology for AI. McCarthy said they were also wrong to think that the
timing was perfect for such a big gathering. In his later evaluations, he stated that
the field was not ready for big groups to work together since there was no agreement
on the general plan and course of action (McCorduck 2014). On the other hand,
despite the negativities, the Dartmouth AI conference had significant importance in the history of AI because it fixed the term AI, solidified the problems and perspectives of the field, and determined the state of the art (Moor 2006). However, it was later realized that the conference had hosted a promising contribution, the Logic Theorist, developed by Allen Newell and Herbert A. Simon and the forerunner of their General Problem Solver (GPS). It did not get the attention it deserved at the conference, since its focus differed from that of most scientists' projects. Simon and Newell came from RAND, an institution whose people developed projects for the benefit of the air force. It was the Cold War era, and the military wanted any innovation that could give it superiority over the enemy. Simon was working on the decision processes of human beings, and he wrote a book titled
"Administrative Behavior," in which he explained his theory of decision making. His theory of the reasoning process concerned drawing conclusions from premises. He argued that people start from different premises in different settings, and that these premises are affected by their perspectives. From this point of view, he held that decisions depend on perspectives. The book was very influential in business and economics, but it also had practical implications for AI. When he started to work with Newell, Simon began to transfer the language Newell had developed for the air-defence set-up to his decision-making process, so that information-processing ideas could be used to comprehend the way air-defence personnel operated. The core idea of Simon was to develop an analogy between the reasoning processes of the human mind and the computer. He argued that the way human beings reach conclusions from premises could be imitated by computers. This imitation would provide two significant benefits: a machine that could think like human beings, and an understanding of how the human mind works, which had been a mystery for humanity for centuries. Newell's unique understanding of the non-numerical capabilities of
computers matched well with Simon’s perspectives to develop their Logic Machine
and present it at the Dartmouth Conference. Their matching perspectives constituted
the main reason why the Logic Machine did not receive sufficient attention at the
Dartmouth Conference. The contemporary scientists thought what Simon and Newell
were presenting was a model for the human mind, something that did not interest
the participants. The human mind-computer analogy and the question if machines
could do what the human mind is capable of in terms of reasoning, which were
discussed widely by Turing and other pioneers, were out of date during the time of
the conference. However, even it was not appreciated at the conference, the Logic
Theorist was working and was capable of intellectual and creative tasks that were
unique to human beings heretofore. Moreover, the decisions of Logic Theorist were
unpredictable. One would not know what the machine’s decision would be, which is
also a very human feature, indeed (McCorduck 2014).
Meanwhile, work on game-playing programs was developing. In 1947 Arthur Samuel set out to write a checkers-playing program for computers. Although the first versions of the program could play only at a beginner or average level, by 1961 it could play at the masters' level. Alex Bernstein, a devoted chess player and an IBM employee, invested long hours of hard work in building a computer program that could play chess. His program was capable of deciding on the best possible moves after evaluating probable positions in depth. Pamela McCorduck writes in her influential book "Machines Who Think" that Bernstein's team worked in shifts with another IBM team, which was developing the popular programming language FORTRAN. They had to work in shifts since there was only one computer in the lab, and one group had to be off for the other to work. Although Bernstein's work made him popular, the chess-playing computer could never be more than a mediocre player. 1967 was the year in which a chess-playing program developed by Richard Greenblatt achieved class C. In 1977 David Slate and Larry Atkin dared to pit their Chess 4.5 program against David Levy, a chess player with a 2375 rating. The result was a disappointment for Slate and Atkin: Levy beat Chess 4.5 and showed "who the boss was."
The 1950s also embraced the frustrations of scientists resulting from the lack of a common understanding of what AI was, what it would become in the future, and the nature and content of basic programming issues such as memory management and symbol-manipulation languages. In short, it is plausible to say that the AI community, excited by the general-purpose computer at the beginning of the 1950s, was busy with the machine simulation of complex non-numerical systems like games and picture transformations by the middle of the decade. Minsky describes this period as follows: "work on cybernetics had been restricted to theoretical attempts to formulate basic principles of behaviour and learning. The experimental work was at best 'paradigmatic'; small assemblies of simple 'analogue' hardware convinced the sceptic that mechanical structures could indeed exhibit 'conditioned' adaptive behaviours of various sorts… The most central idea of the pre-1962 period was that of finding heuristic devices to control the breadth of a trial-and-error search. A close second preoccupation was with finding effective techniques for learning" (Minsky 1968).
According to Minsky, AI work proceeded along three major paths. The first aimed to discover self-organizing systems; the second, led by Newell and Simon, focused on the simulation of human thought; and the third was to build intelligent artifacts, whether simple, biological, or humanoid. The programs built from the third perspective were called "heuristic programs." The book "Computers and Thought" by Edward Feigenbaum and Julian Feldman provides a satisfactory collection of the heuristic programs written by the end of 1961; the checkers player by Arthur Samuel and the Logic Theorist by Newell and Simon were the best known. After 1962 the main focus shifted from learning to the representation of knowledge and to overcoming the limitations of existing heuristic programs.
Minsky acknowledged that the main limitation of heuristic programs was the blind trial-and-error method that goes through all possible sequences of available actions. The first remedy was to replace this blind process with a smart one that pursues only the hypotheses with a high potential of relevance instead of all possible ones. This new approach would enable a mediocre checkers program to perform at the master level, not by increasing the speed of the process, but by decreasing the number of choices to be searched. The second limitation was the formality of the targeted problem areas, such as games or mathematical proofs; Minsky said the reason for this was the clarity and simplicity of these problems. Another limitation went back to Lady Lovelace's prediction about a machine that can only do what we program it to do, in other words, one that cannot learn. The programs of this era were static, as Lady Lovelace had said: they could solve a problem, but they could not learn from solving it. The final limitation emerged from the representation of relevant knowledge in the computer; that was the difference between problem-oriented factual information and general problem-solving heuristics (Minsky 1968).
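
The contrast between blind enumeration and heuristic pruning can be sketched in a few lines. The Python fragment below is illustrative only; the expand and score functions are placeholders standing in for whatever move generator and evaluation function a given program would supply:

    # Depth-limited search; beam_width=None means blind trial and error,
    # while a small beam_width keeps only the most promising candidates.
    def search(state, depth, expand, score, beam_width=None):
        if depth == 0:
            return score(state), state
        children = expand(state)
        if not children:
            return score(state), state
        if beam_width is not None:                    # heuristic pruning
            children = sorted(children, key=score, reverse=True)[:beam_width]
        return max(
            (search(c, depth - 1, expand, score, beam_width) for c in children),
            key=lambda result: result[0],
        )

The speed-up comes not from faster hardware but, as Minsky noted, from reducing the number of choices the program searches through.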
In 1958 Frank Rosenblatt came up with a novel approach, the "perceptron," an early artificial neural network. It was a paradigm-breaking idea, since it aimed to overcome one of the most significant limitations of symbolic AI through machine learning. Symbolic AI required all knowledge operated on by the AI system to be encoded by human programmers: it was knowledge-based, and this knowledge had to be provided by the code written for the AI to process. Rosenblatt's neural network, on the other hand, was developed to acquire knowledge without the intervention of code written to translate it into symbols. Although it was a brilliant idea, the technology was not ready to support it. The available hardware did not have adequate receptors to acquire knowledge: cameras had poor resolution, and audio receptors could not distinguish sounds such as speech. Another significant problem was the lack of big data; a neural network requires vast amounts of data to accomplish machine learning, and these were not available in the 1950s. Besides, Rosenblatt's network was quite limited in terms of layers. It had one input and one output layer, which enabled the system to learn only simple things, like recognizing that a shape is a circle and not a triangle. In 1969 Marvin Minsky and Seymour Papert wrote in their book "Perceptrons" that the primary dilemma of neural networks was that two layers are not enough to learn complicated things, while adding more layers to overcome this problem would lower the chance of accuracy. Therefore, Rosenblatt's neural network could not succeed and faded away amid the eye-catching improvements in symbolic AI, until Geoffrey E. Hinton and his colleagues introduced advanced deep-learning AI systems about five decades later (Marcus and Davis 2019).
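
A minimal sketch of Rosenblatt's learning rule shows both the promise and the limitation. The code below is an illustration rather than Rosenblatt's original formulation: it learns the linearly separable OR function, but no weights exist that let a single layer learn XOR, the kind of failure Minsky and Papert analysed:

    # A single-layer perceptron trained by the error-correction rule.
    def train_perceptron(samples, epochs=25, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out            # zero when the guess is right
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    OR_data  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    XOR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    print(train_perceptron(OR_data))   # converges to a working separator
    print(train_perceptron(XOR_data))  # never settles: XOR is not separable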
Patrick Henry Winston calls the whole period from Lady Lovelace to the end of the 1960s the prehistoric era of AI. The main achievements of this age were the development of symbol-manipulation languages such as Lisp, POP, and IPL, and hardware advances in processors and memory (Buchanan 2005). According to Winston, after the 1960s came the Dawn Age, in which the AI community was full of high expectations, such as building AI as smart as humans, which did not come true. However, two accomplishments of this time are worth mentioning because of their role in the creation of expert systems: the program for solving geometric analogy problems and the program that performed symbolic integration (Grimson and Patil 1987). Another characteristic of the 1960s was the institutionalization of significant organizations and laboratories: MIT and Carnegie Tech, working with the RAND Corporation, and the AI laboratories at Stanford, Edinburgh, and Bell Laboratories are some of these institutions. These were the major actors who took an active role in enhancing AI technology in the following decades (Buchanan 2005).

1.4 A New Partner in Professional Life: Expert Systems

From the 1970s, the perspective of Bruce Buchanan and Edward Feigenbaum, who suggested that knowledge was the primary element of intelligent behaviour, dominated the AI field. Changing the focus from logical inference and resolution theory, which were predominant until the early 1970s, to knowledge-based systems was a significant paradigm shift in the course of AI technology (Buchanan 2005). In 1973, Stanford University accomplished a very considerable advance in the AI field: the development of MYCIN, an expert system to diagnose and treat bacterial blood infections. MYCIN used AI to solve problems in a particular domain, medicine, which had always required human expertise. The first expert system, DENDRAL,
was developed by Edward Feigenbaum and Joshua Lederberg at Stanford University in 1965 for analysing chemical compounds. The inventors of MYCIN were also from Stanford University. Edward Shortliffe was a brilliant mind with degrees in applied mathematics, medical information sciences, and medicine. MYCIN had a knowledge base provided by physicians at the Stanford School of Medicine. MYCIN's static knowledge also involved rules for making inferences, and the system allowed the addition of new experience and knowledge without changing the decision-making process. When the physician user loaded data about a particular patient, the rule-interpreter system ran and produced conclusions about the status of the patient. MYCIN's expert system allowed the physician user to ask questions such as "how," "why," and "explain"; having the answers enabled the user to understand the reasoning process of the system. In 1979 a paper published in the Journal of the American Medical Association compared the performance of MYCIN and ten medical doctors on meningitis cases, and the result was satisfactory for MYCIN (Yu et al. 1979). According to Newell, MYCIN was "the original expert system that made it evident to all the rest of the world that a new niche had opened up."
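
The architecture described here, a static rule base, a rule interpreter applied to patient data, and a trace that answers "why," can be sketched compactly. The rules and facts below are invented placeholders for illustration, not MYCIN's actual knowledge base:

    # A toy forward-chaining rule interpreter in the expert-system style.
    RULES = [
        ({"gram_stain": "negative", "morphology": "rod"},
         "organism may be E. coli"),
        ({"site": "blood", "organism may be E. coli": True},
         "consider gentamicin"),
    ]

    def infer(facts):
        conclusions, trace = dict(facts), []
        changed = True
        while changed:                       # chain until nothing new fires
            changed = False
            for conditions, conclusion in RULES:
                if conclusion in conclusions:
                    continue
                if all(conclusions.get(k) == v for k, v in conditions.items()):
                    conclusions[conclusion] = True
                    trace.append((conditions, conclusion))  # answers "why?"
                    changed = True
        return conclusions, trace

    patient = {"gram_stain": "negative", "morphology": "rod", "site": "blood"}
    for conditions, conclusion in infer(patient)[1]:
        print(conclusion, "BECAUSE", conditions)

The trace is the key design point: because each conclusion records the rule that produced it, the system can explain its reasoning to the physician rather than act as a black box.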
Newell's foresight came true. By 1975 medicine had become a significant area of application for AI. PIP, CASNET, and INTERNIST were other expert systems, and MYCIN was further developed into EMYCIN, a shell on which new diagnostic expert systems could be built. Based on the EMYCIN model, a new expert system was produced: PUFF, an expert on pulmonary illnesses. PUFF was also written at Stanford University. Its main task was to interpret the measurements of respiratory tests and provide a diagnosis for the patient. The superiority of PUFF was that it did not require data input by physicians, which had been a handicap for using AI systems in hospitals because of being time-consuming; the data came directly from the respiratory test system. Also, PUFF's area of expertise was a practical choice, since diagnosing a pulmonary dysfunction did not require a tremendous amount of knowledge. Patient history and measurements from the respiratory tests, together with existing medical knowledge, were enough for diagnosis, and this early expert system could process them. Moreover, PUFF had substantial benefits for clinical use, such as saving time for physicians and standardizing the decision-making process (Aikins et al. 1983).
In 1982 a new expert system, CADUCEUS, was introduced to the medical community. It got its name from the symbol of the medical profession, with roots going back to Aesculapius, and the name implied that it was highly assertive in decision making across several medical conditions. CADUCEUS was based on the INTERNIST expert system, which had also proven highly effective, but it contained a considerable amount of medical knowledge and could link symptoms and diseases. Its performance was determined to be efficient and accurate in a wide range of medical specialties (Miller et al. 1982).
These were expert systems for medical practice, but expert systems were being developed in other areas too: XCON configured computers, and DIPMETER ADVISOR interpreted oil-well data. All these systems could solve problems, explain the rationale behind their decision-making processes, and were reliable, at least in the sense of standardizing their decisions. These features were important, since a technology product is successful only if it is practical and beneficial in daily use, and each of these expert systems came into everyday use to various extents (Grimson and Patil 1987). Despite these positive examples, there were still debates about the power of this technology to revolutionize industries and about the extent to which it would be possible to build expert systems that substitute for human beings in accomplishing their tasks. Moreover, there were conflicting ideas about which sectors would pioneer the development and use of AI technology. Although these were the burning questions of the 1980s, it is surprising (or not) to notice that these questions would still be relevant if we articulated them on any AI-related platform in 2019.
1977 is a year worth mentioning in the history of AI. It was the year when Steve Jobs and Stephen Wozniak built the first Apple personal computer and placed it on the market for personal use. It was also the year when the first Star Wars movie reached theatres, introducing us to robots with human-like emotions and motives, and when Voyagers 1 and 2 were sent into space to explore the mysterious unknown planets.
The signs of progress in the 1980s were dominated by, but not limited to, expert systems. The workstation computer system was also promising. A sound workstation system should enable the user to communicate in her own language, provide the flexibility to work with various methods (bottom-up, top-down, or back and forth), and provide a total environment composed of computational tools and previous records. LOGICIAN, GATE MASTER, and INTELLECT were examples of these systems. Such systems inevitably embodied questions about the possibility of producing a workstation computer with human-like intelligence, again a relevant issue today (Grimson and Patil 1987).
Robotics was another area of interest. The first initiatives in robotics took place
at the beginning of the 1970s at the Stanford Research Institute; during the 1980s,
news of improvements started to emerge from Japan and the USA. Second-generation
robots were in use for simple industrial tasks such as spray painting. Wabot-2 from
Waseda University in Tokyo could read music and play the organ with ten fingers.
By the mid-1980s, third-generation robots were available with tactile sensing, and a
Ph.D. student produced, as the outcome of his dissertation, a ping-pong-playing robot
that could beat human players. However, none of these were regarded as significant
inventions, and there were still doubts about the need for and significance of building
robots (Kurzweil 2000).

1.5 Novel Approaches: Neural Networks

By the beginning of the 1980s, neural networks, which had been introduced by
Frank Rosenblatt in 1958 and then set aside owing to the technological insufficiencies
of the time, began to gain importance once more. The 1981 Nobel Prize in Physiology
or Medicine went to David Hubel and Torsten Wiesel for their discoveries concerning
information processing in the visual system. Their study revealed the pattern of
organization of the brain cells that process visual stimuli, the method by which visual
information is transferred to the cerebral cortex, and the response of the cortex during
this process. Hubel and Wiesel's main argument was that different neurons respond
differently to visual images. Their findings transferred readily to the field of neural
networks: Kunihiko Fukushima was the first to build an artificial neural network
based on Hubel and Wiesel's work, and his design became a forerunner of deep
learning. The main idea was to assign greater weight to a connection between nodes
when it needed to exert a stronger influence on the next node.
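
To make this weighting idea concrete, the following minimal sketch (our own Python illustration, not code from Fukushima's network; the function name, weights, and threshold are invented for the example) shows a single artificial node whose output depends on the weights of its incoming connections:

def node_output(inputs, weights, threshold=0.5):
    """Fire (return 1) only if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# The weakly weighted first connection cannot activate the node on its own...
print(node_output([1, 0], [0.2, 0.9]))   # prints 0, since 0.2 < 0.5
# ...while the heavily weighted second connection can.
print(node_output([0, 1], [0.2, 0.9]))   # prints 1, since 0.9 >= 0.5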
Meanwhile, Geoffrey Hinton and his colleagues were working on developing
deeper neural networks. They found that, by using back-propagation, they might be
able to overcome the accuracy problem that Minsky and Papert had pointed out back
in 1969.
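
As a rough picture of what back-propagation does, the sketch below (a minimal Python/NumPy example of our own, not Hinton's code; the layer sizes and learning rate are arbitrary choices) trains a tiny two-layer network on XOR, a task that the single-layer perceptrons criticized by Minsky and Papert cannot solve:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back to earlier weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # typically approaches [[0], [1], [1], [0]]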
The driving factors behind the reappearance and eventual dominance of neural
networks were the introduction of the internet and of general-purpose graphics
processing units (GPGPUs) to the field. It did not take long to realize that the internet
could distribute code very efficiently and effectively, and that GPGPUs were well
suited to applications involving image processing. The first game-changing application
was "the Facebook." It used GPGPUs, high-level abstract languages, and the internet
together. As its name became shorter by dropping "the," it gained enormous weight
in the market in a short time. It did not take long for researchers to discover that the
structure of neural networks and that of GPGPUs are a good match. In 2012, Geoffrey
Hinton's team succeeded in using GPGPUs to enhance the power of neural networks.
Big data and deep learning were the two core ingredients of this success. Hinton and
his colleagues used the ImageNet database to train neural networks and achieved
98% accuracy in image recognition. GPGPUs enabled the researchers to add several
layers to the hidden part of the neural network, enhancing its learning capacity to
embrace more complex problems, including speech and object recognition, with
higher accuracy. In a short time, the practical use of deep learning became manifest
in a variety of areas: the capacity of online translation applications improved
significantly, and synthetic art and virtual games advanced remarkably.
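
"Adding layers" can be pictured quite simply: each extra hidden layer is one more weight matrix in a chain, and it is precisely this chain of large matrix multiplications that GPGPUs accelerate. The sketch below (an invented Python/NumPy illustration; the layer sizes are arbitrary) passes an input through three hidden layers instead of one:

import numpy as np

def forward(x, layers):
    """Pass an input through a stack of (weights, bias) pairs with ReLU activations."""
    for W, b in layers[:-1]:
        x = np.maximum(0.0, x @ W + b)   # hidden layers
    W, b = layers[-1]
    return x @ W + b                     # linear output layer

rng = np.random.default_rng(0)
sizes = [64, 128, 128, 128, 10]          # three hidden layers rather than one
layers = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]
print(forward(rng.normal(size=(1, 64)), layers).shape)   # prints (1, 10)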
In brief, we can say that deep learning has given us an entirely novel perspective on
AI systems. Since the first introduction of binary systems, it had been the programmers'
primary task to write efficient code for computers to operate. Deep learning looks like
a gateway from the limitations of good old-fashioned AI towards strong AI
(Marcus and Davis 2019).
In 1987, the market for AI technology products had reached 1.4 billion US dollars
in the USA. Since the beginning of the 1990s, improvements in AI technology have
been swift and beyond imagination. AI has disseminated into so many areas and has
become so handy to use that today we often do not even recognize it while using it.
Before finishing this section, two other developments are worth mentioning:
game-playing machines and driverless cars.
In July 2002, DARPA, the US defence research agency that had been working on
driverless cars since the mid-1980s, announced a challenge open to all parties
interested in this field. The challenge was to demonstrate a driverless automobile
that could ride autonomously from Barstow, California, to Primm, Nevada. One
hundred and six teams accepted the challenge; nevertheless, only 15 driverless
automobiles were ready to turn on their engines on the day of the challenge, 13
March 2004. None of the competitors could complete the course. After the event,
the deputy program manager of the Grand Challenge program said that some vehicles
were good at following GPS but failed to sense obstacles on the ground, while the
ones with sensitive ground-surface sensors hallucinated obstacles or were afraid of
their own shadows. It was striking to hear words indicating human features, such as
hallucinating or fearing, in his narration. DARPA announced a second challenge in
2005, and this time five driverless cars accomplished the course. It seems one year
was enough to teach driverless vehicles not to be afraid of shadows or to treat their
hallucinations. The winning driverless automobile of the 2005 challenge was called
Stanley and, after being named the "best robot of all time" by Wired, retired into
seclusion at the Smithsonian National Museum of American History. The third
challenge, in November 2007, was called the Urban Challenge and, as is evident
from the name, this time the driverless cars had to drive in a mock city where other
cars with human drivers were riding while the driverless vehicles competed. Moreover,
they had to visit specific checkpoints, park, and negotiate intersections without
violating traffic rules. Eleven teams qualified for the challenge, and six of them
accomplished the task. Completing the third challenge required more complex
systems than merely detecting obstacles on the ground: the finishers had to have a
complex system capable of perception, and they also had to plan, reason, and act
accordingly (Nilsson 2009).
Although driverless cars have been a popular area for the implementation of AI
in contemporary times, another area has been in the spotlight: games, particularly
chess. Since Claude Shannon's paper on programming a computer to play chess, the
game has been one of the main working areas for scientists studying AI. The reason
may be that chess has always been a game for intelligent people, and no one would
doubt the intelligence of a chess champion. Hence, an AI beating the world chess
champion, Garry Kasparov, was a significant victory for AI technology, one that
drew the attention of the entire world. In 1996, when Kasparov and Deep Blue played
chess for the first time, Deep Blue won the first game, but Kasparov was the winner
of the match. One year later, however, Deep Blue had become more capable and
defeated the world champion in a six-game match by two wins, one loss, and three
draws. Kasparov's remarks after the game were even more interesting than the defeat
itself. Right after the game, Kasparov left the room immediately; he later explained
that the reason for his behaviour was fear. He said, "I am a human being. When I see
something that is well beyond my understanding, I am afraid." While Kasparov was
impressed by the creativity and intelligence of Deep Blue's strategy, its creator, IBM,
did not agree that Deep Blue's victory implied that AI technology could mimic human
thought processes or creativity. On the contrary, Deep Blue was a very significant
example of computational power: it could calculate and evaluate 200,000,000 chess
positions per second, whereas Kasparov's capacity allowed him to examine up to
three chess positions per second. Hence it was not intuition or creativity that enabled
Deep Blue to defeat Kasparov; it was brute computational force. However, these
frank comments from the company had little impact on the romantic image in
people's minds of an AI struggling in its own mind to beat Kasparov (Nilsson 2009).
This image positions Deep Blue as a strong AI, with the capability to learn, process
information, and derive novel strategies through its own intelligent process.
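
The brute-force idea can be sketched in code. The fixed-depth minimax search below is our own simplified Python illustration of the generic approach behind chess programs; Deep Blue's actual search was vastly more elaborate, and every name and the toy game here are invented for the example:

def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Score a position by exhaustively searching a fixed number of plies ahead."""
    if depth == 0 or not moves(position):
        return evaluate(position)
    scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in moves(position)]
    return max(scores) if maximizing else min(scores)

# Toy usage with a made-up game: a "position" is a number, each move adds
# or subtracts 1, and the evaluation is simply the number itself.
score = minimax(0, depth=4, maximizing=True,
                moves=lambda p: [1, -1],
                apply_move=lambda p, m: p + m,
                evaluate=lambda p: p)
print(score)   # prints 0: the minimizing opponent cancels each gain

The strength of such a program lies not in any single clever step but in how many positions per second this loop can evaluate, which is exactly the point IBM made about Deep Blue.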

References

Aikins, J.S., J.C. Kunz, E.H. Shortliffe, and R.J. Fallat. 1983. PUFF: An expert system for
interpretation of pulmonary function data. Computers and Biomedical Research 16 (3): 199–208.
Boden, M.A. 1995. The AI's half century. AI Magazine 16 (4): 96.
Boden, M.A. 2018. Artificial intelligence: A very short introduction. Oxford, UK: Oxford University
Press.
Buchanan, B.G. 2005. A (very) brief history of artificial intelligence. AI Magazine 26 (4): 53.
Grimson, W.E.L., and R.S. Patil (eds.). 1987. AI in the 1980s and beyond: An MIT survey.
Cambridge, MA, and London: MIT Press.
Kurzweil, R. 2000. The age of spiritual machines: When computers exceed human intelligence.
New York, NY: Penguin Books.
Marcus, G., and E. Davis. 2019. Rebooting AI: Building artificial intelligence we can trust. New
York, NY: Pantheon Books.
McCarthy, J., M.L. Minsky, N. Rochester, and C.E. Shannon. 2006. A proposal for the Dartmouth
summer research project on artificial intelligence. AI Magazine 27 (4): 12.
McCorduck, P. 2014. Machines who think: A personal inquiry into the history and prospects of
artificial intelligence, 68. Massachusetts: A K Peters, Ltd. ISBN 1-56881-205-1.
McCulloch, W.S., and W. Pitts. 1943. A logical calculus of the ideas immanent in nervous activity.
Bulletin of Mathematical Biophysics 5: 115–133.
Miller, R.A., H.E. Pople Jr., and J.D. Myers. 1982. Internist-I, an experimental computer-based
diagnostic consultant for general internal medicine. New England Journal of Medicine 307 (8):
468–476.
Minsky, M. 1968. Semantic information processing. Cambridge, MA: MIT Press.
Moor, J. 2006. The Dartmouth College AI conference: The next fifty years. AI Magazine 27 (4): 87.
Nilsson, N.J. 2009. The quest for artificial intelligence: A history of ideas and achievements,
603–611. New York, NY: Cambridge University Press.
Shannon, C.E. 1950. Programming a computer for playing chess. Philosophical Magazine 41 (314):
256–275.
Turing, A.M. 1937. On computable numbers, with an application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society 42 (1): 230–265. (Correction in Turing, A.M.
1938. Proceedings of the London Mathematical Society 43 (6): 544–546.)
Turing, A.M. 1950. Computing machinery and intelligence. Mind LIX (236): 443–460.
Yu, V.L., L.M. Fagan, S.M. Wraith, et al. 1979. Antimicrobial selection by a computer: A blinded
evaluation by infectious diseases experts. JAMA 242 (12): 1279–1282.
Chapter 2
Definitions

2.1 What is Artificial Intelligence?

The word “artificial” indicates that an entity is a product of human beings and
cannot come into existence naturally, without human involvement. An artifact is an
object made by a human being, one that is not naturally present but occurs as a result
of a preparative or investigative procedure. Intelligence is generally defined as the
ability to acquire and apply knowledge. A more comprehensive definition refers to
the skilled use of reason, the act of understanding, and the ability to think abstractly,
as measured by objective criteria. An AI, then, refers to an entity created by human
beings that possesses the ability to understand and comprehend knowledge, to reason
using that knowledge, and even to act upon it.
The term artificial carries an allusion to the synthetic, the imitation, the not real. It
is used for things manufactured to resemble the real one, like artificial flowers, but
lacking the features innate to the natural one (Lucci and Kopec 2016). This may be
why McCarthy insisted on avoiding the term “artificial” in the title of the 1956 book
he co-edited with Shannon. McCarthy thought that “automata studies” was a much
more proper title for a book collecting papers on current AI studies, a name with
positive connotations: serious and scientific (McCorduck 2004). When the term AI
was introduced in the call for the Dartmouth Conference, some scientists disliked it,
saying that the term artificial implies that “there is something phony about it” or that
“it is artificial and there is nothing real about this work at all.” However, the majority
must have liked it, since the term was accepted and has been in use ever since.
Human beings have been capable of producing artifacts since the Stone Age,
about 2.6 million years ago. This ability has improved significantly, from creating
simple tools for maintaining individual or social viability to giant machines for
industrial production and computers for gruelling problems. However, one feature
common to every artifact has been sustained all through these ages: the absolute
control and determination of human beings over it. Every artifact, irrespective of its field and
purpose of use, is designed and created by human beings. Hence their abilities and
capacities are predictable and controllable by human beings. This feature had been
preserved in artifacts for a very long time.
The first signals of an approaching change in this paradigm became explicit in
the proposal for the Dartmouth Conference. Among the several issues proposed for
discussion, two had the potential to change this characteristic feature of artifacts:
self-improvement, and randomness and creativity. The proposal stated that “a truly
intelligent machine will carry out activities which may be best described as a
self–improvement. Some schemes for doing this have been proposed and are worth
further study” and that “a fairly attractive and yet clearly incomplete conjecture is
that the difference between creative thinking and unimaginative competent thinking
lies in the injection of some randomness. The randomness must be guided by intuition
to be efficient” (McCarthy et al. 2006). Self-improvement, creative thinking, and the
injection of randomness were the very words that enlarged the definition of AI from
a computer that can solve codes unsolvable by humans to something more complex.
The new idea was to create “a machine that is to behave in ways that would be called
intelligent if a human were so behaving” (McCarthy et al. 2006).
Another cornerstone for the definition of AI, alongside the Dartmouth College
summer course, was the highly influential paper “Computing Machinery and
Intelligence,” written by Alan Turing in 1950. His paper started with a provocative
question to which humankind is still seeking the answer:
I propose to consider the question, “Can machines think?” (Turing 1950).

Despite the question in the title, Turing did not argue over whether machines can
think. On the contrary, he took machines’ capability to think for granted and focused
on how to prove it. Turing suggested the well-known Turing test to detect this ability
in machines. However, he did not provide a definition and did not specify what he
meant by the act of “thinking.” After reading his paper, one can assume that Turing
conceptualized thinking as an act of reasoning and of providing appropriate answers
to questions on various subjects. Moreover, after studying his work, it is plausible
to say that he assumed thinking to be an indicator of intelligence. For centuries the
ability to think had been considered a qualification unique to the Homo sapiens
species, as characterized in Rodin’s Le Penseur: a man with visible muscles to prove
his liveliness and his hand on his chin to show that he is thinking. With his test, Alan
Turing not only implied that machines could think but also attempted to provide a
handy tool to prove that they can do so. This attempt challenged the thesis of the
superior hierarchical position of Homo sapiens over any other living or artificial
being, which had been taken for granted for a long time. The idea that machines can
think has flourished since Turing’s time and evolved into terms such as weak AI,
strong AI, and artificial general intelligence (AGI).
Stuart Russell and Peter Norvig suggested a relatively practical approach to
defining AI. They focused on two main aspects of AI: the first is the thought process,
and the second is behaviour. This approach makes a classification of definitions
possible and enables us to gather varying perspectives together to understand how
AI has been construed. Russell and Norvig classified definitions of AI in a four-cell
matrix composed of “thinking humanly,” “thinking rationally,” “acting humanly,”
and “acting rationally.” The distinction among these definitions is not that clear,
since acting humanly may require thinking and reasoning humanly, and thinking
rationally requires the ability to think in the first place (Russell and Norvig 2016).
In terms of thinking, AI is defined by attribution either to human-like thinking
or to rational thinking. “AI is an artifact with processors working like the human
mind” is the simplest definition of AI from the perspective of human-like thinking.
Rational thinking, on the other hand, has a more mechanistic connotation; in this
respect, a basic definition might be “AI is computation with the capacity to store and
process information in order to reason and decide.” In terms of behaviour, Kurzweil’s
and Raphael’s definitions are worth mentioning. Kurzweil said that “AI is the art of
creating machines that perform functions that require intelligence when performed
by people” (Kurzweil 1990). Raphael used different words to indicate a similar idea:
“AI is the science of making machines do things that would require intelligence if
done by man” (Raphael 1976). Although similar, these two definitions carry a
significant difference. The first sees AI development as a process of creation, a term
with a metaphysical connotation, while the latter sees it as a branch of science, which
is definitely outside the scope of metaphysics. Moreover, Kurzweil refers to
“performing functions,” which is very broad, since it includes brain functions such
as deep thinking, learning, dreaming, or revelation as well as motor functions, while
Raphael’s definition focuses on “doing things,” which evokes actions with concrete
results such as problem solving and calculation. Russell and Norvig thought that
these definitions reflect the essential differences between two significant entities:
weak AI and strong AI. The definitions centred on thinking and acting humanly
correspond to strong AI, while those attributing thinking and acting rationally go
with weak AI (Russell and Norvig 2016).
The difficulty in defining AI results from the involvement of several disciplines
in the development of AI technology products. Philosophy, mathematics, economics,
neuroscience, psychology, computer engineering, control theory and cybernetics,
linguistics, ethnography, and anthropology are some of the disciplines working in
the field of AI. All of these disciplines have their own paradigms, perspectives,
terminologies, methodologies, and priorities, which differ considerably from each
other; hence, it is not easy to form a common language and understanding. On the
other hand, all of these disciplines are essential for creating AI, especially strong AI,
which makes working together necessary for accomplishing this hard task. Having
said this, we will go through the disciplines involved in AI and examine their roles
and perspectives.
In this respect, the first discipline to address is philosophy. Ontology and
epistemology have served as the theoretical grounds for the development of AI. As
described in the History of AI section, theories about the nature of the human body
and mind have guided thought on the kind and nature of artifacts for centuries. On
the other hand, epistemology has a significant effect on