SPRINGER BRIEFS IN ETHICS

Artificial Intelligence and Bioethics
SpringerBriefs in Ethics
Springer Briefs in Ethics envisions a series of short publications in areas such as
business ethics, bioethics, science and engineering ethics, food and agricultural
ethics, environmental ethics, human rights and the like. The intention is to present
concise summaries of cutting-edge research and practical applications across a wide
spectrum.
Springer Briefs in Ethics are seen as complementing monographs and journal
articles with compact volumes of 50 to 125 pages, covering a wide range of content
from professional to academic. Typical topics might include:
• Timely reports on state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a
contextual literature review
• A snapshot of a hot or emerging topic
• In-depth case studies or clinical examples
• Presentations of core concepts that students must understand in order to make
independent contributions
Artificial Intelligence and Bioethics

Perihan Elif Ekmekci
TOBB University of Economics and Technology, Ankara, Turkey

Berna Arda
Ankara University Medical School, Ankara, Turkey
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
When the effort to understand and explain the universe is combined with human creativity and future design, we see works in the field of science fiction. One of the pioneers of this combination was undoubtedly Jules Verne, with his brilliant works. His book “From the Earth to the Moon,” published in 1865, was one of the first examples of the science fiction genre. However, science fiction has always been ahead of its era: a real journey to the moon was accomplished only in 1969, about a century after the publication of the book.
The twentieth century was a period in which scientific developments progressed with giant steps. In the twenty-first century, we are now in an age in which scientific knowledge increases exponentially and the half-life of knowledge is very short. The amount of information to be possessed has grown and diversified so much that it has reached a level well above human control.
The book you are holding is about Artificial Intelligence (AI) and bioethics. It is intended to draw attention to the value problems of an enormous phenomenon of human creativity whose limits are uncertain. The book consists of the following chapters.
The first section is History of Artificial Intelligence. In this section, a historical perspective on the development of technology and AI is presented. We take a quick tour from the ancient philosophers to Rene Descartes, and from Lady Ada to Alan Turing: the prominent pioneers of the AI concept who contributed to the philosophy and the actual creation of this new phenomenon, some of whom suffered many injustices during their lifetimes. This section ends with a short description of the state of the art of AI.
The second section is Definitions. This section aims to explain some of the key terms used in the book in order to familiarize readers with them. We also aim to seek an answer to the following question: “what makes an entity, human or machine, intelligent?” In the face of this fundamental question, relevant concepts such as strong AI, weak AI, heuristics, and the Turing test, which have become increasingly clear over time, are discussed separately in the light of the literature.
The Personhood and Artificial Intelligence section addresses a fundamental question about the ethical agency of AI. Personhood is an important philosophical, psychological, and legal concept for AI because of its implications for moral responsibility. Didn't the law build all punishment on the admission that individuals with
personhood should at the same time be responsible for what they do (and sometimes do not do)? Until recently, we all lived in a world dominated by an anthropocentric approach. Human beings have always been at the top of the hierarchy among all living and non-living entities, and ever the most valuable. However, the emergence of AI and its potential to develop into human-level or above-human-level intelligence challenges human beings' superior position by raising the claim that AI entities should be acknowledged to have personhood. Many examples from daily life, such as autonomous vehicles, military drones, and early warning systems, are discussed in this section.
The following section is on bioethical inquiries about AI. The first question concerns the main differences between conventional technology and AI, and whether the current ethics of technology can be applied to the ethical issues of AI.
After discussing the differences between conventional technology and AI and
justifying our arguments about the need for a new ethical frame, we highlight the
bioethical problems arising from the current and future AI technologies. We address
the Asilomar Principles, the Montreal Declaration, and the Ethics Guidelines for
Trustworthy AI of the European Commission as examples of suggested frameworks
for the ethics of AI. We discuss the strengths and weaknesses of these documents. This section ends with our suggestion for the fundamentals of a new bioethical framework for AI.
The final section focuses on the ethical implications of AI in health care. Medicine, one of the oldest professions in the world, is and will continue to be affected by AI. The moral atmosphere, shaped by the Hippocratic tradition of the Western world, was initially sufficient as long as the physician alone was virtuous. However, by the mid-twentieth century, the impact of technology on medicine had become very evident. On the one hand, physicians were starting to lose their ancient techne-oriented professional identity.
On the other hand, a new patient type emerged: a figure demanding her/his rights and beginning to question the physician's paternalism. In these circumstances, it was inevitable that different approaches would replace traditional medical ethics. Currently, these new approaches are challenged once more by the emergence of AI in health care. The main question is how the relationship between patients and caregivers will be shaped within the medical ethics framework in the AI world. This subsection, AI in health care and medical ethics, suggests answers and sheds light on possible areas of ethical concern.
Chess grandmaster Kasparov, one of the most brilliant minds of the twentieth century, tried to defeat Deep Blue, but his defeat was inevitable. Lee Sedol, a young Go master, decided to retire in November 2019 after being defeated by the AI system AlphaGo. Of course, AlphaGo represented a much more advanced level of AI than Deep Blue. Lee Sedol's reasoning for his early decision was that even if he were the number one, AI was an invincible entity that would always be at the top. We now accept that Lee Sedol's decision was based on a realistic prediction. The concluding chapter presents projections about AI in light of all the previous chapters, and solutions for the new situations that the scientific world will face.
This book contains what two female researchers can see and respond to in terms
of AI from their specialty field, bioethics. We want to continue asking questions and
looking for answers together.
We wish you an enjoyable reading experience.
Chapter 1
History of Artificial Intelligence
When we discuss the ethical issues of artificial intelligence (AI), we focus on its human-like abilities. These human-like abilities fall under two main headings: doing and thinking. An entity with the capability to think, that is, to understand the setting, consider the options, consequences, and implications, reason, and finally decide what to do and physically act (do) accordingly in a given circumstance, may be considered intelligent. An entity is intelligent, then, if it can think and do. However, neither ability is easy to define. While looking for definitions, we trace back to the fourth century BC to Aristotle, who laid out the foundations of epistemology and of the first formal system of deductive reasoning. For centuries, philosophical inquiries about body and mind and how the brain works accompanied advancements in human efforts to build autonomous machines. The seventeenth century hosted two significant figures in this respect: Blaise Pascal, who invented the mechanical calculator, the Pascaline, and Rene Descartes, who codified the body-mind dichotomy in his book “Treatise on Man.” The body-mind dichotomy, known as the Cartesian system, indicates that the mind is an entity separate from the body. According to this perspective, the mind is intangible, and the way it works is so unique and metaphysical that it cannot be duplicated in an inorganic, human-made artifact. Descartes argued that the body, on the other hand, was an automatic machine, like the irrigation fountains in the elegant gardens of French chateaus or the clocks on church and municipal towers, which were popular at the time in European towns. It was the era when anatomical dissections of the human body became more frequent in Europe. The growing knowledge of human anatomy revealed the heart as a pumping engine and the vessels as tubes carrying the circulating blood, which enabled analogies between the human body and a working machine. Descartes stated that the pineal gland was the place where the integration between the mind and the material body occurred. The idea of the body-mind dichotomy was developed further. It survived to our day and, as discussed in forthcoming chapters, still constitutes one of the main arguments against the personification of AI entities.
The efforts of philosophers to formulate thought, ethical reasoning, the ontology of humanness, and the nature of epistemology continued in the seventeenth and eighteenth centuries with Gottfried Wilhelm Leibniz, Baruch Spinoza, Thomas Hobbes, John Locke, Immanuel Kant, and David Hume. Spinoza, a near contemporary of Descartes, studied the Cartesian system and disagreed with it. His rejection was based on his pantheist perspective, which viewed the mind and body as two different aspects of the human being, both merely representations of God. Another highly influential philosopher and mathematician of the time, Gottfried Wilhelm Leibniz, imagined mind and body as two different monads exactly matching each other to form a corresponding system, similar to the cogwheels of a clock. In an era when communication among contemporaries was limited to direct contact or access to one of the few published books, Leibniz travelled all through Europe and talked to other scientists and philosophers, which enabled him to comprehend the existing state of the art both in theory and in implementation. What he inferred was the need for a common language of science, so that thoughts and ideas could be discussed in the same terms. This common language required the development of an algorithm in which human thoughts are represented by symbols, so that humans and machines could eventually communicate. Leibniz's calculus ratiocinator, a calculus for reasoning, could not accomplish the task of symbolically expressing logical terms and reasoning on them, but it undoubtedly was an inspiration for the “Principia Mathematica” of Alfred North Whitehead and Bertrand Russell in the early twentieth century.
While each of these philosophers shaped contemporary philosophy, efforts to produce autonomous artifacts proceeded. Jacques de Vaucanson's mechanical duck symbolized the state of the art of automata and AI in the 18th century. This automatic duck could beat its wings, drink, eat, and even digest what it ate, almost like a living being. In the 19th century, artifacts and humanoids took their place in literature. The best known were Ernst Theodor Wilhelm Hoffmann's “The Sandman,” Johann Wolfgang von Goethe's “Faust” (part II), and Mary Wollstonecraft Shelley's “Frankenstein.” Hoffmann's work was an inspiration for the composition of the famous ballet “Coppelia,” which featured a doll that comes to life and becomes an actual human being. These works may be considered the part played by art and literature in elaborating the idea of artifacts becoming more human-like and of the potential of AI to have human attributes. L. Frank Baum's mechanical man “Tik-Tok” was an excellent example of the intelligent non-human beings of the time. Jules Verne and Isaac Asimov also deserve mentioning as pioneering writers who featured AI in their works (Buchanan 2005). These pieces of art helped prepare the minds of ordinary people for the possibility of the existence of intelligent beings other than our human species.
1.2 First Steps to Artificial Intelligence
In the 1840s, Lady Ada Lovelace elaborated her vision of the “analytical engine,” which Charles Babbage designed. Babbage was already dreaming about making a machine that could calculate logarithms. The idea of calculators had been introduced in 1642 by Blaise Pascal and was further developed by Leibniz. Still, these were far simpler than the automatic table calculator, the “difference engine,” that Babbage started to build in 1822. However, the difference engine was left behind when Lady Ada Lovelace introduced her perspectives on the analytical engine (McCorduck 2014).
The analytical engine was planned to perform arithmetical calculations as well as to analyse and tabulate functions, with a vast data storage capacity and a central processing unit controlled by algebraic patterns. Babbage could never finish building the analytical engine, but still his efforts are considered a cornerstone in the history of AI. This praise is mostly due to the partnership between him and Ada Lovelace, a brilliant woman who had been tutored in mathematics since the age of 4, conceptualized a flying machine at the age of 12, and became Babbage's partner at the age of 17. Lovelace thought that the analytical engine had the potential to process symbols representing all subjects in the universe and could deal with massive data to reveal facts about the real world. She imagined that this engine could go far beyond scientific calculation and would even be able to compose a piece of music (Boden 2018). She elaborated on the original plans for the capabilities of the analytical engine and published her paper in an English journal in 1843. Her elaboration contained the first algorithm intended to be carried out by such a machine. She had the vision, she knew what to expect, but she and her partner Babbage could not figure out how to realize it. The analytical engine did not answer the “how-to” question. On the other hand, Ada Lovelace's vision survived and materialized in the twentieth century.
In the 19th century, George Boole worked on the construction of “the mathematics of the human intellect”: finding the general principles of reasoning by symbolic logic, merely by using the symbols 0 and 1. This binary system of Boole later constituted the basis for the development of computer languages. Whitehead and Russell followed the same path when they wrote Principia Mathematica, a book which has been a cornerstone for philosophy, logic, and mathematics and an inspirational guide for scientists working on AI.
The twentieth century was the century in which AI flourished, both in theory and in implementation. In 1936, Alan Turing shed light on the “how-to” question left unanswered by Lady Lovelace by proposing the “Turing Machine.” In essence, his insight was very similar to Lovelace's. He suggested that an algorithm may solve any problem that can be represented by symbols. Also, “it is possible to invent a single machine which can be used to compute any computable sequence” (Turing 1937). In his paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” he defined how this machine works. It is worth mentioning that he used terms like “remember, wish, and behave” while describing the abilities of the Turing Machine, words that
were used only for functions of the human mind before. The following sentences
show how he personalized the Turing Machine:
The behaviour of the computer at any moment is determined by the symbols which he is
observing and his state of mind at that moment.
It is always possible for the computer to break off from his work, to go away and forget all
about it, and later to come back and go on with it.
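The mechanism Turing described can be illustrated with a short simulation. The sketch below is our illustration, not a machine from Turing's paper: a finite rule table that reads the observed symbol, writes a symbol, moves the head, and changes state, here wired to increment a binary number.

```python
# A minimal sketch of a Turing machine. Each rule maps (state, symbol)
# to (symbol to write, head movement, next state). The example machine
# increments a binary number: it scans right to the end of the input,
# then adds 1 with carries while moving left.

def run(tape, rules, state="start", head=0, steps=100):
    tape = dict(enumerate(tape))               # sparse, unbounded tape
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")           # blank squares read as " "
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip()

rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", " "): ("1", "L", "halt"),    # overflow: write a new leading 1
}

print(run("1011", rules))  # 1011 (eleven) + 1 -> 1100 (twelve)
```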
In “Computing Machinery and Intelligence” (Turing 1950), Turing also anticipated and answered a series of objections to the idea of thinking machines. The second of these was the “heads in the sand” objection, the hope that machines cannot think because the consequences would be too dreadful, which Turing considered to need consolation rather than refutation. The third one is the mathematical objection, which rests on the idea that machines can produce unsatisfactory results; Turing defeated it abstractly by pointing at the fallacious conclusions that also come from human minds. The fourth objection Turing considered was consciousness, or meta-cognition in contemporary terms. This objection came from Professor Jefferson's Lister Oration and stated that composing music or writing a sonnet was not enough to prove the existence of thinking: the machine must also be able to know what it had written or produced, to feel the pleasure of success and the grief of failure. Turing overcame this objection by putting forth that most holders of the consciousness argument “could be persuaded to abandon it rather than be forced into the solipsist position,” and that one should solve the mystery of consciousness before building an argument on it.
It is plausible to say that Turing's perspective was ahead of Lady Lovelace's on one point. Lovelace stated that the analytical engine (or an artifact) could only do whatever we know how to order it to perform. With this, she cast out the possibility of a machine that can learn and improve its abilities. Her vision complied with Gödel's theorem, which was taken to indicate that computers are inherently incapable of solving some problems that humans can overcome (McCorduck 2014). At this point, Turing proposed to produce a program simulating a child's mind instead of an adult's, so that the possibility of enhancing the capabilities of the machine's thought process would be high, just like a child's learning and training process. Reading through this inspirational paper, we can say that Alan Turing had an extraordinarily broad perspective on what AI could be in the future.
While Turing was developing his ideas about thinking machines, some other
scientists from the USA were producing inspirational ideas on the subject. Two
remarkable scientists, Warren McCulloch and Walter Pitts, are worth mentioning at
this point. In 1943, they published a paper titled “A Logical Calculus of the Ideas Immanent in Nervous Activity” (McCulloch and Pitts 1943). This paper referred to the anatomy and physiology of neurons. It argued that the inhibitory and excitatory activities of neurons and neuron nets were grounded basically on the “all-or-none” law, and that this law of nervous activity “is sufficient to insure that the activity of any neuron may be represented as a proposition.” Thus, neural nets could compute logical propositions of the mind. This argument proposed that the physiological relations among neurons correspond to relations among logical propositions; in this respect, every activity of the neurons corresponded to a proposition. The paper had a significant impact on the design of the first digital computers that John von Neumann worked on shortly afterwards (Boden 1995). However, the authors' arguments inevitably inspired the proponents of the archaic inquiries about how the mind worked, the unknowable object of knowledge, and the dichotomy of body and mind. The authors concluded that since all psychic activities of the mind work according to the “all-or-none” law of neural activities, “both the formal and final aspects of that activity which we are wont to call mental are rigorously deducible from present neurophysiology.” The paper ended with the assertion that mind no longer “goes more ghostly than a ghost.” There is no doubt that these assertions constructed the main arguments of cognitive science. The
idea at the root of cognitive science was that cognition is computation over representations, and that it may take place in any computational system, either neural or artificial.
Seven years after this inspiring paper, in 1950, while Turing was publishing his previously mentioned paper on the possibility of machines thinking, Claude Shannon's article titled “Programming a Computer for Playing Chess” appeared, shedding light on some of the fundamental questions of the issue (Shannon 1950). In this article, he questioned which features were needed to differentiate a machine that could think from a calculator or a general-purpose computer. For him, the capability to play chess would imply that this new machine could think. He grounded his idea on the following arguments. First, a chess-playing device should be able to process not only numbers but also mathematical expressions and words: what we call representations and images today. Second, the machine should be able to make proper judgments about future actions by trial and error based on previous results, meaning the new entity would have the necessary capacity to operate beyond “strict, unalterable computing processes.” The third argument depended on the nature of the decisions in the game. When a chess player decides to make a move, this move may be right, wrong, or tolerable, depending on what her rival does in response. A decision that any average chess player would consider faulty will be acceptable if the opponent makes worse decisions in response. Hence a chess player's choices are not 1 or 0, right or wrong, but “rather have a continuous range of quality from the best to the worst.” Shannon described a chess player's strategy as “a process of choosing a move in a given position.” In game theory, if the player always chooses the same move in the same position (a pure strategy), this makes the player very predictable, since her rival would be able to figure out her strategy after a few games. Hence a good player has to have a mixed strategy, which means that the plan should involve a reasoning procedure operating with statistical elements, so that the player can make different moves in similar positions.
Shannon thought that if we could build a chess-playing machine with these intelligent qualifications, it would be followed by machines “capable of logical deduction, orchestrating a melody or making strategic decisions for the military.” Today we are far beyond Shannon's predictions of 1950.
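The reasoning procedure behind Shannon's proposal is what is now called minimax search. The sketch below is our simplified illustration: a hand-made toy game tree stands in for chess, and a made-up score table stands in for Shannon's evaluation of material and mobility. Each side is assumed to pick the move best for itself.

```python
# A minimal sketch of minimax: explore moves a few plies deep, score
# the resulting positions with an evaluation function, and assume each
# side chooses the move best for itself.

def minimax(position, depth, maximizing, moves, evaluate):
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)

# A toy game: positions are nodes in a fixed tree; leaf scores express
# "a continuous range of quality from the best to the worst".
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 0.3, "a2": -0.8, "b1": 0.1, "b2": 0.6}

moves = lambda p: tree.get(p, [])
evaluate = lambda p: leaf_scores.get(p, 0.0)

best = max(tree["root"],
           key=lambda m: minimax(m, 1, False, moves, evaluate))
# 'b': the opponent can hold us to 0.1 there, versus -0.8 after 'a'.
print("best first move:", best)
```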
Meanwhile, Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow published the influential paper titled “Behavior, Purpose and Teleology” in Philosophy of Science in 1943. Shortly before, in 1942, Rosenblueth had met Warren McCulloch, who was already working together with Walter Pitts on the mathematical description of the neural behaviour of the human brain. Simultaneously, various scientists were working on similar problems in different institutions. In the summer of 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon came up with the idea of bringing together all scientists working in the field of thinking machines, to share what they had achieved and to develop perspectives for future work. They thought it was the right time for this conference, since studies on AI had to “proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The name of the gathering was “the Dartmouth Summer Research Project on AI.” It was planned to last for two months, with the attendance of 10 scientists working in the field of AI. At the Dartmouth conference, the term “AI” was officially used for thinking machines for the first time. Each of the four scientists who wrote the proposal offered research areas. Shannon was interested in the application of information theory to computing machines and brain models, and in the matched environment-brain model approach to automata. Minsky's research area was learning machines: he was working on a machine that could sense changes in the environment and adapt its output accordingly. He described his proposal as a very tentative one and hoped to improve his work during the summer course. Rochester proposed to work on the originality and randomness of machines, and McCarthy's proposal was “to construct an artificial language which a computer can be programmed to use on problems requiring conjecture and self-reference” (McCarthy et al. 2006). However, the conference did not meet the high expectations of the initiating group. McCarthy attributed this setback to the fact that most of the attendees did not come for the whole two months; some of them were there for two days, while others came and left on various dates. Moreover, most of the scientists were reluctant to share their research agendas and collaborate. The conference did not yield a common agreement on a general theory and methodology for AI. McCarthy said they were also wrong to think that the timing was perfect for such a big gathering. In his later evaluations, he stated that the field was not ready for big groups to work together, since there was no agreement on the general plan and course of action (McCorduck 2014). On the other hand, despite these negativities, the Dartmouth AI conference had significant importance in the history of AI because it nailed down the term AI, solidified the problems and perspectives about AI, and determined the state of the art (Moor 2006). However, it was later realized that the conference had hosted a promising intervention: the Logic Theorist, a forerunner of the General Problem Solver (GPS), developed by Allen Newell and Herbert A. Simon. It did not get the attention it deserved at the conference, since its focus was different from most scientists' projects. Simon and Newell came from RAND, an institution that had grown out of Second World War military research, where people worked to develop projects for the benefit of the air force; the military wanted any innovation that would give it superiority over the enemy. Simon
was working on the decision processes of human beings, and he wrote a book titled “Administrative Behavior.”
Rosenblatt's neural network was developed to acquire knowledge without the intervention of hand-written code translating it into symbols. Although that was a brilliant idea, the technology was not ready to support it. The available hardware did not have sufficient receptors to acquire knowledge: cameras had poor resolution, and audio receptors could not distinguish sounds such as speech. Another significant problem was the lack of big data. The neural network required vast amounts of data to accomplish machine learning, which were not available in the 1950s. Besides, Rosenblatt's neural network was quite limited in terms of layers. It had one input and one output layer, which enabled the system to learn only simple things, like recognizing that a shape is a circle rather than a triangle. In 1969, Marvin Minsky and Seymour Papert wrote in their book “Perceptrons” that the primary dilemma of neural networks was that two layers are not enough to learn complicated things, while adding more layers to overcome this problem would lower the chance of accuracy (Marcus and Davis 2019). Therefore, Rosenblatt's neural network could not succeed and faded away among the eye-catching improvements in symbolic AI, until Geoffrey E. Hinton and his colleagues introduced advanced deep learning AI systems about five decades later (Marcus and Davis 2019).
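Rosenblatt's learning rule, and the limitation Minsky and Papert identified, can both be demonstrated in a few lines. The sketch below is our illustration of a perceptron-style single layer: it learns the linearly separable AND function, but no setting of a single layer's weights can represent XOR, no matter how long it trains.

```python
# A minimal sketch of perceptron learning: a single layer of weights,
# nudged whenever the prediction is wrong.

def train(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            error = target - predicted           # -1, 0, or +1
            w0 += lr * error * x0                # move weights toward target
            w1 += lr * error * x1
            b += lr * error
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND_data), ("XOR", XOR_data)]:
    f = train(data)
    ok = all(f(*x) == t for x, t in data)
    print(name, "learned" if ok else "not learnable by a single layer")
```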
Patrick Henry Winston names the whole period from Lady Lovelace to the end of the 1960s the prehistoric era of AI. The main achievements of this age were the development of symbol manipulation languages such as Lisp, POP, and IPL, and hardware advances such as processors and memory (Buchanan 2005). According to Winston, after the 1960s came the Dawn Age, in which the AI community was full of high expectations, such as building AI as smart as humans, which did not come true. However, two accomplishments of this time are worth mentioning because of their role in the creation of expert systems: the program for solving geometric analogy problems and the program that did symbolic integration (Grimson and Patil 1987). Another characteristic of the 1960s was the institutionalization of significant organizations and laboratories. MIT and Carnegie Tech, working with the RAND Corporation, and the AI laboratories at Stanford, Edinburgh, and Bell are some of these institutions. These were the major actors who took an active role in enhancing AI technology in the following decades (Buchanan 2005).
1.4 A New Partner in Professional Life: Expert Systems
From the 1970s on, the perspective of Bruce Buchanan and Edward Feigenbaum, who suggested that knowledge was the primary element of intelligent behaviour, dominated the AI field. Changing the focus from logical inference and resolution theory, which were predominant until the early 1970s, to knowledge-based systems was a significant paradigm shift in the course of AI technology (Buchanan 2005). In 1973, Stanford University accomplished a very considerable improvement in the AI field: the development of MYCIN, an expert system to diagnose and treat bacterial blood infections. MYCIN used AI for solving problems in a particular domain, medicine, which had been requiring human expertise. The first expert system, DENDRAL, had been developed at Stanford in the 1960s to infer molecular structures from mass spectrometry data. Such systems were valued for supporting human experts, or at
least for the standardization of their decisions. These features were important, since a technology product will be successful only if it is practical and beneficial in daily use. Each of these expert systems had been involved in everyday use to various extents (Grimson and Patil 1987). Despite these positive examples, there were still debates about the power of this technology to revolutionize industries, and about the extent to which it would be possible to build expert systems that substitute for human beings in accomplishing their tasks. Moreover, there were conflicting ideas about which sectors would be pioneers in the development and use of AI technology. Although these were the burning questions of the 1980s, it is surprising (or not) to notice that these questions would still be relevant if articulated on any platform related to AI in 2019.
1977 is a year worth mentioning in the history of AI. It was the year when Steve Jobs and Stephen Wozniak built the first personal Apple computer and placed it on the market for personal use. Also, the first Star Wars movie was in theatres, introducing us to robots with human-like emotions and motives, and Voyagers 1 and 2 were sent into space to explore mysterious unknown planets.
Progress in the 1980s was dominated by, but not limited to, expert systems. Workstation computer systems were also promising. A sound workstation system should enable the user to communicate in her own language, provide the flexibility to work in various modes (bottom-up, top-down, or back and forth), and provide a total environment composed of computational tools and previous records. LOGICIAN, GATE MASTER, and INTELLECT were examples of these systems. These systems inevitably embodied questions about the possibility of producing a workstation computer with human-like intelligence, again a relevant issue today (Grimson and Patil 1987).
Robotics was another area of concern. While the first initiatives in robotics took place at the beginning of the 1970s at the Stanford Research Institute, during the 1980s news about improvements started to emerge from Japan and the USA. Second-generation robots were in use for simple industrial tasks such as spray painting. Wabot-2 from Waseda University, Tokyo, could read music and play the organ with ten fingers. By the mid-1980s, third-generation robots were available with tactile sensing, and a Ph.D. student produced, as the outcome of his dissertation, a ping-pong-playing robot that could beat human players. However, none of them were significant inventions, and there were still doubts about the need for and significance of building robots (Kurzweil 2000).
1.5 Novel Approaches: Neural Networks
By the beginning of the 1980s, neural networks, which had been introduced by Frank Rosenblatt in 1958 and had failed due to technological insufficiencies, began to gain importance once more. The winners of the 1981 Nobel Prize in Physiology or Medicine were David Hubel and Torsten Wiesel. The work that brought them the prize was their discovery concerning information processing in the visual system. Their study revealed the pattern of organization of the brain cells that process visual stimuli, the
transfer method of visual information to the cerebral cortex, and the response of the brain cortex during this process. Hubel and Wiesel's main argument was that different neurons responded differently to visual images. Their findings transferred efficiently to the area of neural networks. Kunihiko Fukushima was the first to build an artificial neural network based on the work of Hubel and Wiesel, which became a model for deep learning. The main idea was to give a larger weight to a connection between nodes when it needs to have a more substantial influence on the next one.
Meanwhile, Geoffrey Hinton and his colleagues were working on developing deeper neural networks. They found that by using back-propagation, they might be able to overcome the accuracy problem that Minsky and Papert had pointed out back in 1969.
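The idea can be sketched briefly. The following Python illustration is ours; the layer size, learning rate, and epoch count are arbitrary choices. It trains a small two-layer network with back-propagation on XOR, the very function a single layer cannot learn; with a reasonable random start, the outputs approach the targets.

```python
# A minimal sketch of back-propagation: hidden units with a
# differentiable activation let the output error flow backwards
# through the chain rule, so both layers of weights can be trained.
import math, random

random.seed(0)
rand = lambda: random.uniform(-1, 1)
H = 4                                                # hidden units
w_h = [[rand(), rand(), rand()] for _ in range(H)]   # [w_x0, w_x1, bias]
w_o = [rand() for _ in range(H + 1)]                 # output weights + bias
sigmoid = lambda z: 1 / (1 + math.exp(-z))
lr = 0.5
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x0, x1):
    h = [sigmoid(w[0] * x0 + w[1] * x1 + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, y

for epoch in range(20000):
    for (x0, x1), target in data:
        h, y = forward(x0, x1)
        d_y = (y - target) * y * (1 - y)          # error signal at the output
        for i in range(H):                        # propagate it backwards
            d_h = d_y * w_o[i] * h[i] * (1 - h[i])
            w_h[i][0] -= lr * d_h * x0
            w_h[i][1] -= lr * d_h * x1
            w_h[i][2] -= lr * d_h
        for i in range(H):
            w_o[i] -= lr * d_y * h[i]
        w_o[H] -= lr * d_y

for (x0, x1), target in data:
    print((x0, x1), "->", round(forward(x0, x1)[1], 2), "target", target)
```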
The driving factors behind the reappearance and eventual dominance of neural networks were the introduction of the internet and of general-purpose computing on graphics processing units (GPGPU) to the field. It did not take long to realize that the internet could distribute code very efficiently and effectively, and that GPGPU was well suited to applications using image processing. The first game-changer application was “the Facebook.” It used GPGPU, high-level abstract languages, and the internet together. As its name became shorter by dropping “the,” it gained enormous weight in the market in a short time. It did not take long for researchers to find out that the structure of neural networks and GPGPUs are a good match. In 2012, Geoffrey Hinton's team succeeded in using GPGPUs to enhance the power of neural networks. Big data and deep learning were the two core elements of this success. Hinton and his colleagues used the ImageNet database to train neural networks and achieved record-breaking accuracy in image recognition. GPGPUs enabled the researchers to add several hidden layers to the neural network, so that its learning capacity was enhanced to embrace more complex issues with higher accuracy, including speech and object recognition. In a short time, the practical use of deep learning became manifest in a variety of areas: the capacity of online translation applications was enhanced significantly, and synthetic art and virtual games improved remarkably.
In brief, we can say that deep learning has given us an entirely novel perspective on AI systems. Since the first introduction of binary systems, it had been the programmers' primary task to write efficient code for computers to operate. Deep learning looks like a way out of the limitations of good old-fashioned AI towards strong AI (Marcus and Davis 2019).
In 1987, the market for AI technology products had reached 1.4 billion US dollars in the USA. Since the beginning of the 1990s, improvements in AI technology have been swift and beyond imagination. AI has disseminated into so many areas and has become so handy that today we do not even recognize it while using it. Before finishing this section, two other instances are worth mentioning: game-playing machines and driverless cars.
In July 2002, DARPA, the US Defense Advanced Research Projects Agency, which had been working on driverless cars since the mid-1980s, announced a challenge for all parties interested in this field. The challenge was to demonstrate a driverless automobile that could ride autonomously from Barstow, California, to Primm, Nevada. One hundred six teams accepted the challenge.
Deep Blue's defeat of Kasparov, on the other hand, had a significant impact on the romantic thoughts in people's minds, busy with imagining an AI struggling in its mind to beat Kasparov (Nilsson 2009). This image positions Deep Blue as an entity of strong AI, with the capability to learn, process information, and derive novel strategies following its own intelligence process.
References
Aikins, J.S., J.C. Kunz, E.H. Shortliffe, and R.J. Fallat. 1983. PUFF: An expert system for interpretation of pulmonary function data. Computers and Biomedical Research 16 (3): 199–208.
Boden, M.A. 1995. AI's half century. AI Magazine 16 (4): 96.
Boden, M.A. 2018. Artificial intelligence: A very short introduction. Oxford, UK: Oxford University Press.
Buchanan, B.G. 2005. A (very) brief history of artificial intelligence. AI Magazine 26 (4): 53.
Grimson, W.E.L., and R.S. Patil (eds.). 1987. AI in the 1980s and beyond: An MIT survey. Cambridge, MA, and London, England: MIT Press.
Kurzweil, R. 2000. The age of spiritual machines: When computers exceed human intelligence. New York, NY, USA: Penguin Books.
Marcus, G., and E. Davis. 2019. Rebooting AI: Building artificial intelligence we can trust. New York, USA: Pantheon Books.
McCarthy, J., M.L. Minsky, N. Rochester, and C.E. Shannon. 2006. A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine 27 (4): 12.
McCorduck, P. 2014. Machines who think: A personal inquiry into the history and prospects of artificial intelligence, 68. Massachusetts: A K Peters, Ltd. ISBN 1-56881-205-1.
McCulloch, W.S., and W. Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115–133.
Miller, R.A., H.E. Pople Jr., and J.D. Myers. 1982. Internist-I, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine 307 (8): 468–476.
Minsky, M. 1968. Semantic information processing. Cambridge, MA, USA: MIT Press.
Moor, J. 2006. The Dartmouth College AI conference: The next fifty years. AI Magazine 27 (4): 87.
Nilsson, N.J. 2009. The quest for artificial intelligence: A history of ideas and achievements, 603–611. New York, NY, USA: Cambridge University Press.
Shannon, C.E. 1950. Programming a computer for playing chess. Philosophical Magazine 41 (314): 256–275.
Turing, A.M. 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42 (1): 230–265. (Turing, A.M. 1938. On computable numbers, with an application to the Entscheidungsproblem: A correction. Proceedings of the London Mathematical Society 43 (6): 544–546.)
Turing, A.M. 1950. Computing machinery and intelligence. Mind LIX (236): 443–460.
Yu, V.L., L.M. Fagan, S.M. Wraith, et al. 1979. Antimicrobial selection by a computer: A blinded evaluation by infectious diseases experts. JAMA 242 (12): 1279–1282.
Chapter 2
Definitions
2.1 What is Artificial Intelligence?
The word “artificial” indicates that the entity is a product of human beings and that it cannot come into existence in natural ways without the involvement of humans. An artifact is an object made by a human being: it is not naturally present but occurs as a result of a preparative or investigative procedure carried out by human beings. Intelligence is generally defined as the ability to acquire and apply knowledge. A more comprehensive definition refers to the skilled use of reason, the act of understanding, and the ability to think abstractly as measured by objective criteria. An AI, then, refers to an entity that is created by human beings and possesses the ability to understand and comprehend knowledge, to reason using this knowledge, and even to act accordingly.
The term artificial alludes to something synthetic, an imitation, or not real. It is used for things manufactured to resemble the real one, like artificial flowers, which lack the features innate to the natural one (Lucci and Kopec 2016). This may be the reason why McCarthy insisted on avoiding the term “artificial” in the title of the volume he co-edited with Shannon in 1956. McCarthy thought that “automata studies” was a much more proper title for a book collecting papers on current AI studies, a name with positive connotations: serious and scientific (McCorduck 2004). When the AI term was introduced in the call for the Dartmouth Conference, some scientists disliked it, saying that the term artificial implies that “there is something phony about it” or “it is artificial and there is nothing real about this work at all.” However, the majority must have liked it, as the term was accepted and has been in use ever since.
Human beings have been capable of producing artifacts since the Stone Age, about 2.6 million years ago. This ability has improved significantly, from creating simple tools for maintaining individual or social viability to giant machines for industrial production and computers for gruelling problems. However, one common feature of artifacts has been sustained all through these ages: the absolute control and determination of human beings over them. Every artifact, irrespective of its field and purpose of use, is designed and created by human beings. Hence their abilities and capacities are predictable and controllable by human beings. This feature has been preserved for artifacts for a very long time.
The first signals of an approaching change in this paradigm became explicit in the proposal for the Dartmouth Conference. There were several issues for discussion, of which two had the potential to change the characteristic feature of artifacts: self-improvement, and randomness and creativity. In the proposal, it was written that “a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study” and that “a fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient” (McCarthy et al. 2006). Self-improvement, creative thinking, and the injection of randomness were exactly the words representing ideas that enhanced the definition of AI from a computer that can solve problems unsolvable for humans to something more complex. The new idea was to create “a machine that is to behave in ways that would be called intelligent if a human were so behaving” (McCarthy et al. 2006).
Another cornerstone for the definition of AI, predating the Dartmouth College summer course, was the highly influential paper “Computing Machinery and Intelligence,” written by Alan Turing in 1950. His paper started with the provocative question to which humankind is still seeking the answer:
I propose to consider the question, “Can machines think?” (Turing 1950).
Despite posing this question, Turing did not argue about whether machines can think. On the contrary, he took machines' capability to think for granted and focused on how to prove it. Turing suggested the well-known Turing test to detect this ability of machines. However, he did not provide a definition and did not specify what he meant by the act of “thinking.” After reading his paper, one can assume that Turing conceptualized thinking as an act of reasoning and providing appropriate answers to questions on various subjects. Moreover, after studying his work, it is plausible to say that he assumed thinking to be an indicator of intelligence.
For centuries, the ability to think has been considered a qualification unique to the Homo sapiens species, as characterized in Rodin's Le Penseur: a man with visible muscles to prove his liveliness and his hand on his chin to show that he is thinking. With his test, Alan Turing not only implied that machines could think but also attempted to provide a handy tool to prove that they can do so. This attempt was a challenge to the thesis of the superior hierarchical position of Homo sapiens over any other living or artificial being, which had been taken for granted for a long time. The idea that machines can think has flourished since Turing's time and evolved into terms such as weak AI, strong AI, and artificial general intelligence (AGI).
Stuart Russell and Peter Norvig suggested a relatively practical approach to defining AI. They focused on two main aspects of AI: the first is the thought process, and the second is behaviour. This approach makes a classification of AI definitions possible along two dimensions: whether a system thinks or acts, and whether it does so in a human-like way or rationally.