Robotics, AI, and Humanity: Science, Ethics and Policy
Joachim von Braun · Margaret S. Archer · Gregory M. Reichberg · Marcelo Sánchez Sorondo, Editors

Editors
Joachim von Braun, Bonn University, Bonn, Germany
Margaret S. Archer, University of Warwick, Coventry, UK
© The Editor(s) (if applicable) and The Author(s) 2021. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License
(http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction
in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link
to the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative Commons license, unless
indicated otherwise in a credit line to the material. If material is not included in the book’s Creative Commons license
and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain
permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and
regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed
to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty,
expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been
made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Message from Pope Francis
Artificial intelligence is at the heart of the epochal change we are experiencing. Robotics can
make a better world possible if it is joined to the common good. Indeed, if technological
progress increases inequalities, it is not true progress. Future advances should be oriented
towards respecting the dignity of the person and of Creation. Let us pray that the progress
of robotics and artificial intelligence may always serve humankind . . . we could say, may it
“be human”.
Pope Francis, November Prayer Intention, 5 November 2020
Acknowledgements
This edited volume, including the suggestions for action, emerged from a Conference on
“Robotics, AI and Humanity, Science, Ethics and Policy”, organized jointly by the Pontifical
Academy of Sciences (PAS) and the Pontifical Academy of Social Sciences (PASS), 16–17
May 2019, Casina Pio IV, Vatican City. Two related conferences had previously been held at
Casina Pio IV, Vatican City: “Power and Limitations of Artificial Intelligence” (December
2016) and “Artificial Intelligence and Democracy” (March 2018). The presentations and
discussions from these conferences are accessible on the website of the Pontifical Academy
of Sciences www.pas.va/content/accademia/en.html. The contributions by all the participants
in these conferences are gratefully acknowledged. This publication has been supported by
the Center for Development Research (ZEF) at Bonn University and the Research Council
of Norway.
Contents
Introduction  2
Foundational Issues in AI and Robotics  2
Overview on Perspectives  2
Intelligent Agents  3
Consciousness  4
AI and Robotics Changing the Future of Society  4
Work  5
AI/Robotics: Poverty and Welfare  5
Food and Agriculture  6
Education  6
Finance, Insurance, and Other Services  7
Robotics/AI and Militarized Conflict  7
Implications for Ethics and Policies  9
AI/Robotics: Human and Social Relations  9
Regulating for Good National and International Governance  11
References  13
Keywords
Artificial intelligence · Robotics · Consciousness · Labor markets · Services · Poverty · Agriculture · Militarized conflicts · Regulation

Introduction¹

Advances in artificial intelligence (AI) and robotics are accelerating. They already significantly affect the functioning of societies and economies, and they have prompted widespread debate over the benefits and drawbacks for humanity. This fast-moving field of science and technology requires our careful attention. The emergent technologies have, for instance, implications for medicine and health care, employment, transport, manufacturing, agriculture, and armed conflict. Privacy rights and the intrusion of states into personal life are a major concern (Stanley 2019). While considerable attention has been devoted to AI/robotics applications in each of these domains, this volume aims to provide a fuller picture of their connections and the possible consequences for our shared humanity. In addition to examining the current research frontiers in AI/robotics, the contributors to this volume address the likely impacts on societal well-being, the risks for peace and sustainable development, as well as the attendant ethical and religious dimensions of these technologies. Attention to ethics is called for, especially as there are also long-term scenarios in AI/robotics with consequences that may ultimately challenge the place of humans in society.

AI/robotics hold much potential to address some of our most intractable social, economic, and environmental problems, thereby helping to achieve the UN's Sustainable Development Goals (SDGs), including the reduction of climate change. However, the implications of AI/robotics for equity, for poor and marginalized people, are unclear. Of growing concern are risks of AI/robotics for peace due to their enabling new forms of warfare such as cyber-attacks or autonomous weapons, thus calling for new international security regulations which serve the public good but also ensure proper data protection and personal privacy. Ethical and legal aspects of AI/robotics need clarification in order to inform regulatory policies on applications and the future development of these technologies.

The volume is structured in the following four sections:

• Foundational issues in AI and robotics, looking into AI's computational basis, brain–AI comparisons, as well as AI and consciousness.
• AI and robotics potentially changing the future of society in areas such as employment, education, industry, farming, mobility, and services like banking. This section also addresses the impacts of AI and robotics on poor people and inequality.
• Robotics and AI implications for militarized conflicts and related risks.
• AI/robot–human interactions and ethical and religious implications: here, approaches for managing the coexistence of humans and robots are evaluated, legal issues are addressed, and policies that can assure the regulation of AI/robotics for the good of humanity are discussed.

¹ The conclusions in this section partly draw on the Concluding Statement from a Conference on "Robotics, AI and Humanity, Science, Ethics and Policy", organized jointly by the Pontifical Academy of Sciences (PAS) and the Pontifical Academy of Social Sciences (PASS), 16–17 May 2019, Casina Pio IV, Vatican City. The statement is available at http://www.casinapioiv.va/content/accademia/en/events/2019/robotics/statementrobotics.html, including a list of participants provided via the same link. Their contributions to the statement are acknowledged.

Foundational Issues in AI and Robotics

Overview on Perspectives

The field of AI has developed a rich variety of theoretical approaches and frameworks on the one hand, and increasingly impressive practical applications on the other. AI has the potential to bring about advances in every area of science and society. It may help us overcome some of our cognitive limitations and solve complex problems.

In health, for instance, combinations of AI/robotics with brain–computer interfaces already bring unique support to patients with sensory or motor deficits and facilitate caretaking of patients with disabilities. By providing novel tools for knowledge acquisition, AI may bring about dramatic changes in education and facilitate access to knowledge. There may also be synergies arising from robot-to-robot interaction and possible synergies of humans and robots jointly working on tasks.

While vast amounts of data present a challenge to human cognitive abilities, Big Data presents unprecedented opportunities for science and the humanities. The translational potential of Big Data is considerable, for instance in medicine, public health, education, and the management of complex systems in general (biosphere, geosphere, economy). However, the science based on Big Data as such remains empiricist and challenges us to discover the underlying causal mechanisms for generating patterns. Moreover, questions remain whether the emphasis on AI's supra-human capacities for computation and compilation masks manifold limitations…
AI, Robotics, and Humanity: Opportunities, Risks, and Implications for Ethics and Policy
…unbiasedness. Until recently, basic mathematical science had few (if any) ethical issues on its agenda. However, given that mathematicians and software designers are central to the development of AI, it is essential that they consider the ethical implications of their work.⁴ In light of the questions that are increasingly raised about the trustworthiness of autonomous systems, AI developers have a responsibility—one that ideally should become a legal obligation—to create trustworthy and controllable robot systems.

Consciousness

Singer (Chap. 2) benchmarks robots against brains and points out that organisms and robots both need to possess an internal model of the restricted environment in which they act, and both need to adjust their actions to the conditions of the respective environment in order to accomplish their tasks. Thus, they may appear to face similar challenges, but—Singer stresses—the computational strategies to cope with these challenges are different for natural and artificial systems. He finds it premature to enter discussions as to whether artificial systems can acquire functions that we consider intentional and conscious, or whether artificial agents can be considered moral agents with responsibility for their actions (Singer, Chap. 2).

Dehaene et al. (Chap. 4) take a different position from Singer and argue that the controversial question of whether machines may ever be conscious must be based on considerations of how consciousness arises in the human brain. They suggest that the word "consciousness" conflates two different types of information-processing computations in the brain: first, the selection of information for global broadcasting (consciousness in the first sense), and second, the self-monitoring of those computations, leading to a subjective sense of certainty or error (consciousness in the second sense). They argue that current AI/robotics mostly implements computations similar to unconscious processing in the human brain. They contend, however, that a machine endowed with consciousness in the first and second sense as defined above would behave as if it were conscious. They acknowledge that such a functional definition of consciousness may leave some unsatisfied and note in closing, "Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations." (Dehaene et al., Chap. 4, pp. …).

It may actually be the diverse concepts and definitions of consciousness that make the position taken by Dehaene et al. appear different from the concepts outlined by Singer (Chap. 2) and controversial to others like Gabriel (Chap. 5), Sánchez Sorondo (Chap. 14), and Schröder (Chap. 16). At the same time, the long-run expectations regarding machines' causal learning abilities and cognition as considered by Zimmermann and Cremers (Chap. 3) and the differently based position of Archer (Chap. 15) both seem compatible with the functional consciousness definitions of Dehaene et al. (Chap. 4). This does not apply to Gabriel (Chap. 5), who is inclined to answer the question "could a robot be conscious?" with a clear "no," drawing his lessons selectively from philosophy. He argues that the human being is the indispensable locus of ethical discovery. "Questions concerning what we ought to do as morally equipped agents subject to normative guidance largely depend on our synchronically and diachronically varying answers to the question of 'who we are.'" He argues that robots are not conscious and could not be conscious "…if consciousness is what I take it to be: a systemic feature of the animal-environment relationship." (Gabriel, Chap. 5, pp. …).

AI and Robotics Changing the Future of Society

In the second section of this volume, AI applications (and related emergent technologies) in health, manufacturing, services, and agriculture are reviewed. Major opportunities for advances in productivity are noted for the applications of AI/robotics in each of these sectors. However, a sectorial perspective on AI and robotics has limitations. It seems necessary to obtain a more comprehensive picture of the connections between the applications and a focus on public policies that facilitate overall fairness, inclusivity, and equity enhancement through AI/robotics.

The growing role of robotics in industries and the consequences for employment are addressed (De Backer and DeStefano, Chap. 6). Von Braun and Baumüller (Chap. 7) explore the implications of AI/robotics for poverty and marginalization, including links to public health. Opportunities of AI/robotics for sustainable crop production and food security are reported by Torero (Chap. 8). The hopes and threats of including robotics in education are considered by Léna (Chap. 9), and the risks and opportunities of AI in financial services, wherein humans are increasingly replaced and even judged by machines, are critically reviewed by Pasquale (Chap. 10). The five chapters in this section of the volume are closely connected as they all draw on current and fast-emerging applications of AI/robotics, but the balance of opportunities and risks for society differs greatly among these domains of AI/robotics applications and penetrations.

⁴ The ethical impact of mathematics on technology was groundbreakingly presented by Wiener (1960).
…impacts of interventions. However, there is also the issue of pollution through electronic waste dumped by industrialized countries in low-income countries. This issue needs attention, as does the carbon footprint of AI/robotics.

Effects of robotics and AI on such structural changes in economies and on jobs will not be neutral for people suffering from poverty and marginalization. Extreme poverty is on the decline worldwide, and robotics and AI are potential game changers for accelerated or decelerated poverty reduction. Information on how AI/robotics may affect the poor is scarce. Von Braun and Baumüller (Chap. 7) address this gap. They establish a framework that depicts AI/robotics impact pathways on poverty and marginality conditions, health, education, public services, work, and farming, as well as on the voice and empowerment of the poor. The framework identifies points of entry of AI/robotics and is complemented by a more detailed discussion of the pathways in which changes through AI/robotics in these areas may relate positively or negatively to the livelihoods of the poor. They conclude that the context of countries and societies plays an important role in determining the consequences of AI/robotics for the diverse population groups at risk of falling into poverty. Without a clear focus on the characteristics and endowments of people, innovations in AI/robotics may not only bypass them but adversely impact them directly or indirectly through markets and services of relevance to their communities. Empirical scenario building and modelling is called for to better understand the components of AI/robotics innovations and to identify how they can best support the livelihoods of households and communities suffering from poverty. Von Braun and Baumüller (Chap. 7) note that outcomes much depend on the policies accompanying AI and robotics. Lee points to solutions with new government initiatives that finance care and creativity (Chap. 22).

Food and Agriculture

Closely related to poverty is the influence of AI/robotics on food security and agriculture. The global poor predominantly work in agriculture, and due to their low levels of income they spend a large share of their income on food. Torero (Chap. 8) addresses AI/robotics in food systems and points out that agricultural production—while under climate stress—must still increase while minimizing the negative impacts on ecosystems, such as the current decline in biodiversity. An interesting example is the case of autonomous robots for farm operations. Robotics are becoming increasingly scale-neutral, which could benefit small farmers via wage and price effects (Fabregas et al. 2019). AI and robotics play a growing role in all elements of food value chains, where automation is driven by labor costs as well as by demands for hygiene and food safety in processing.

Torero (Chap. 8) outlines the opportunities of new technologies for smallholder households. Small-scale mechanization offers possibilities for remote areas, steep slopes, or soft-soil areas. Previously marginal areas could become productive again. Precision farming could be introduced to farmers who have little capital, thus allowing them to adopt climate-smart practices. Farmers can be providers and consumers of data as they link to cloud technologies using their smartphones, connecting to risk management instruments and tracking crop damage in real time.

The economic context may change with these technologies. Buying new machinery may no longer mean getting oneself into debt, thanks to better access to credit and leasing options. The reduced scale of efficient production would mean higher profitability for smallholders. Robots in the field also represent opportunities for income diversification for farmers and their family members, as the need to use family labor for low-productivity tasks is reduced and time can be allocated to more profit-generating activities. Additionally, robots can operate 24/7, allowing more precision in the timing of harvest, especially for high-value commodities like grapes or strawberries.

Education

Besides health and caregiving, where innovations in AI/robotics have had a strong impact, this impact is also likely to increase in education and finance in the future. In education—be it in the classroom or in distance-learning systems, focused on children or on the training and retraining of adults—robotics is already having an impact (Léna, Chap. 9). With the addition of AI, robotics offers to expand the reach of teaching in exciting new ways. At the same time, there are also concerns about new dependencies and unknown effects of these technologies on minds. Léna sees child education as a special case, since it involves emotions as well as knowledge communicated between children and adults. He examines some of the modalities of teacher substitution by AI/robotic resources and discusses their ethical aspects. He emphasizes positive aspects of computer-aided education in contexts in which teachers are lacking. The technical possibilities of combining artificial intelligence and teaching may be large, but the costs need consideration too. The ethical questions raised by these developments need attention, since children are extremely vulnerable human beings. As the need to develop education worldwide is so pressing, any reasonable solution which benefits from these technological advances can become helpful, especially in the area of computer-aided education.
Finance, Insurance, and Other Services

Turning to important service domains like finance, insurance, and real estate, some opportunities but also worrisome trends in applications of AI-based algorithms relying on Big Data are quickly emerging. In these domains, humans are increasingly assessed and judged by machines. Pasquale (Chap. 10) looks into the financial technology (Fintech) landscape, which ranges from the automation of office procedures to new approaches to storing and transferring value and granting credit. For instance, new services—e.g., insurance sold by the hour—are emerging, and investments on stock exchanges are increasingly conducted by AI systems instead of by traders. These innovations in AI, unlike industrial robotics, are probably already changing and reducing employment in (former) high-skill/high-income segments, rather than in routine manufacturing tasks. A basis for some of the Fintech operations by established finance institutions and start-ups is the use of data sources from social media with algorithms to assess credit risk. Another area is financial institutions adopting distributed ledger technologies. Pasquale (Chap. 10) divides the Fintech landscape into two spheres, "incrementalist Fintech" and "futurist Fintech." Incrementalist Fintech uses new data, algorithms, and software to perform the traditional tasks of existing financial institutions. Emerging AI/robotics do not change the underlying nature of underwriting, payment processing, or lending in the financial sector. Regulators still cover these institutions, and their adherence to rules accordingly assures that long-standing principles of financial regulation persist. Futurist Fintech, by contrast, claims to disrupt financial markets in ways that supersede regulation or even render it obsolete. If blockchain memorializing of transactions is actually "immutable," regulatory interventions to promote security or prevent the modification of records may no longer be needed.

Pasquale (Chap. 10) sees serious issues with futurist Fintech, which engages in detailed surveillance as a condition of access to services. These services can become predatory, creepy, and objectionable on diverse grounds, including that they subordinate inclusion, when they allow persons to compete for advantage in financial markets in ways that undermine their financial health, dignity, and political power (Pasquale, Chap. 10). Algorithmic accountability has become an important concern for reasons of discrimination against women for lower-paying jobs, discrimination against the aged, and the stimulation of consumers into buying things through sophisticated social psychology and individualized advertising based on "Phishing."⁵ Pistor (2019) describes networks of obligation that even states find exceptionally difficult to break. Capital has imbricated itself into international legal orders that hide wealth and income from regulators and tax authorities. Cryptocurrency may become a tool for deflecting legal demands and serving the rich. Golumbia (2009) points to the potential destabilizing effects of cryptocurrencies for financial regulation and monetary policy. Pasquale (Chap. 10) stresses that both incrementalist and futurist Fintech expose the hidden costs of digital efforts to circumvent or co-opt state monetary authorities.

In some areas of innovation in AI/robotics, future trajectories already seem quite clear. For example, robotics are fast expanding in space exploration and satellite systems observing Earth,⁶ in surgery and other forms of medical technology,⁷ and in monitoring processes of change in the Anthropocene, for instance related to crop developments at small scales.⁸ Paradigmatic for many application scenarios, not just in industry but also in care and health, are robotic hand–arm systems, for which the challenges of precision, sensitivity, and robustness come along with safe-grasping requirements. Promising applications are evolving in telemanipulation systems in a variety of areas such as healthcare, factory production, and mobility. Depending on each of these areas, sound IP standards and/or open-source innovation systems should be explored systematically in order to shape optimal innovation pathways. This is a promising area of economic, technological, legal, and political science research.

Robotics/AI and Militarized Conflict

Robotics and AI in militarized conflicts raise new challenges for building and strengthening peace among nations and for the prevention of war and militarized conflict in general. New political and legal principles and arrangements are needed but are evolving too slowly.

Within militarized conflict, AI-based systems (including robots) can serve a variety of purposes, inter alia, extracting wounded personnel, monitoring compliance with laws of war/rules of engagement, improving situational awareness/battlefield planning, and making targeting decisions. While it is the last category that raises the most challenging moral issues, in all cases the implications of lowered barriers to warfare, escalatory dangers, as well as systemic risks must be carefully examined before AI is implemented in battlefield settings.

⁵ Relevant for insights into these issues are the analyses by Akerlof and Shiller (2015) in their book "Phishing for Phools: The Economics of Manipulation and Deception."
⁶ See for instance Martin Sweeting's (2020) review of the opportunities of small satellites for earth observation.
⁷ For a review of AI and robotics in health, see for instance Erwin Loh (2018).
⁸ On the assessment of fossil fuel and anthropogenic emission effects on public health and climate, see Jos Lelieveld et al. (2019). On new ways of crop monitoring using AI see, for instance, Burke and Lobell (2017).
Worries about falling behind in the race to develop new AI military applications must not become an excuse for short-circuiting safety research, testing, and adequate training. Because weapon design is trending away from large-scale infrastructure toward autonomous, decentralized, and miniaturized systems, the destructive effects may be magnified compared to most systems operative today (Danzig 2018). AI-based technologies should be designed so that they enhance (and do not detract from) the exercise of sound moral judgment by military personnel, who will need not only more but also very different types of training under the changed circumstances. Whatever military advantages might accrue from the use of AI, human agents—political and military—must continue to assume responsibility for actions carried out in wartime.

International standards are urgently needed. Ideally, these would regulate the use of AI with respect to military planning (where AI risks encouraging pre-emptive strategies), cyberattack/defense, as well as the kinetic battlefields of land, air, sea, undersea, and outer space. With respect to lethal autonomous weapon systems, given the present state of technical competence (and for the foreseeable future), no systems should be deployed that function in unsupervised mode. Whatever the battlefield—cyber or kinetic—human accountability must be maintained, so that adherence to internationally recognized laws of war can be assured and violations sanctioned.

Robots are increasingly utilized on the battlefield for a variety of tasks (Swett et al., Chap. 11). Human-piloted, remote-controlled fielded systems currently predominate. These include unmanned aerial vehicles (often called "drones"), unmanned ground, surface, and underwater vehicles, as well as integrated air-defense and smart weapons. The authors recognize, however, that an arms race is currently underway to operate these robotic platforms as AI-enabled weapon systems. Some of these systems are being designed to act autonomously, i.e., without the direct intervention of a human operator for making targeting decisions. Motivating this drive toward AI-based autonomous targeting systems (Lethal Autonomous Weapons, or LAWS) are several factors, such as increasing the speed of decision-making, expanding novel modes of communication and trust. The limitations of AI must be properly understood by system designers and military personnel if AI applications are to promote more, not less, adherence to norms of armed conflict.

It has long been recognized that the battlefield is an especially challenging domain for ethical assessment. It involves the infliction of the worst sorts of harm: killing, maiming, destruction of property, and devastation of the natural environment. Decision-making in war is carried out under conditions of urgency and disorder. This Clausewitz famously termed the "fog of war." Showing how ethics can realistically be applied in such a setting has long taxed philosophers, lawyers, and military ethicists. The advent of AI has added a new layer of complexity. Hopes have been kindled for smarter targeting on the battlefield, fewer combatants, and hence less bloodshed; simultaneously, warnings have been issued about the new arms race in "killer robots," as well as the risks associated with delegating lethal decisions to increasingly complex and autonomous machines. Because LAWS are designed to make targeting decisions without the direct intervention of human agents (who are "out of the killing loop"), considerable debate has arisen on whether this mode of autonomous targeting should be deemed morally permissible. Surveying the contours of this debate, Reichberg and Syse (Chap. 12) first present a prominent ethical argument that has been advanced in favor of LAWS, namely, that AI-directed robotic combatants would have an advantage over their human counterparts, insofar as the former would operate solely on the basis of rational assessment, while the latter are often swayed by emotions that conduce to poor judgment. Several counterarguments are then presented, inter alia: (i) that emotions have a positive influence on moral judgment and are indispensable to it; (ii) that it is a violation of human dignity to be killed by a machine, as opposed to being killed by a human being; and (iii) that the honor of the military profession hinges on maintaining an equality of risk between combatants, an equality that would be removed if one side delegates its fighting to robots. The chapter concludes with a reflection on the moral challenges posed by human–AI teaming in battlefield settings, and on how virtue ethics provides a valuable framework for addressing these challenges.
the volume of information necessary for complex decisions, Nuclear deterrence is an integral aspect of the current
or carrying out operations in settings where the segments security architecture and the question has arisen whether
of the electromagnetic spectrum needed for secure commu- adoption of AI will enhance the stability of this architecture
nications are contested. Significant developments are also or weaken it. The stakes are very high. Akiyama (Chap. 13)
underway within the field of human–machine interaction, examines the specific case of nuclear deterrence, namely, the
where the goal is to augment the abilities of military per- possession of nuclear weapons, not specifically for battle-
sonnel in battlefield settings, providing, for instance, en- field use but to dissuade others from mounting a nuclear or
hanced situational awareness or delegating to an AI-guided conventional attack. Stable deterrence depends on a complex
machine some aspect of a joint mission. This is the concept web of risk perceptions. All sorts of distortions and errors are
of human–AI “teaming” that is gaining ground in military possible, especially in moments of crisis. AI might contribute
planning. On this understanding, humans and AI function toward reinforcing the rationality of decision-making under
as tightly coordinated parts of a multi-agent team, requiring these conditions (easily affected by the emotional distur-
AI, Robotics, and Humanity: Opportunities, Risks, and Implications for Ethics and Policy 9
bances and fallacious inferences to which human beings are prone), thereby preventing an accidental launch or unintended escalation. Conversely, judgments about what does or does not fit the "national interest" are not well suited to AI (at least in its current state of development). A purely logical reasoning process based on the wrong values could have disastrous consequences, which would clearly be the case if an AI-based machine were allowed to make the launch decision (which virtually all experts would emphatically exclude), but grave problems could similarly arise if a human actor relied too heavily on AI input.

Implications for Ethics and Policies

Major research is underway in areas that define us as humans, such as language, symbol processing, one-shot learning, self-evaluation, confidence judgment, program induction, conceiving goals, and integrating existing modules into an overarching, multi-purpose intelligent architecture (Zimmermann and Cremers, Chap. 3). Computational agents trained by reinforcement learning and deep learning frameworks demonstrate outstanding performance in tasks previously thought intractable. While a thorough foundation for a general theory of computational cognitive agents is still missing, the conceptual and practical advance of AI has reached a state in which ethical and safety questions and the impact on society overall become pressing issues. For example, AI-based inferences of persons' feelings derived from face recognition data are such an issue.

AI/Robotics: Human and Social Relations

The spread of robotics profoundly modifies human and social relations in many spheres of society, in the family as well as in the workplace and in the public sphere. These modifications can take on the character of hybridization processes between the human characteristics of relationships and the artificial ones, hence between analogical and virtual reality. Therefore, it is necessary to increase scientific research on issues concerning the social effects that derive from delegating relevant aspects of social organization to AI and robots. An aim of such research should be to understand how it is possible to govern the relevant processes of change and produce those relational goods that realize a virtuous human fulfillment within a sustainable and fair societal development.

We noted above that fast progress in robotics engineering is transforming whole industries (industry 4.0). The evolution of the internet of things (IoT), with communication among machines and inter-connected machine learning, results in major changes for services such as banking and finance, as reviewed above. Robot–robot and human–robot interactions are increasingly intensive; yet AI systems are hard to test and validate. This raises issues of trust in AI and robots, and issues of regulation and ownership of data, assignment of responsibilities, and transparency of algorithms are arising and require legitimate institutional arrangements.

We can distinguish between mechanical robots, designed to accomplish routine tasks in production, and AI/robotics capacities to assist in social care, medical procedures, safe and energy-efficient mobility systems, educational tasks, and scientific research. While intelligent assistants may benefit adults and children alike, they also carry risks, because their impact on the developing brain is unknown, and because people may lose motivation in areas where AI appears superior.

In the perspective of Sánchez Sorondo (Chap. 14), robots are basically instruments, with the term "instrument" being used in various senses. "The primary sense is clearly that of not being a cause of itself or not existing by itself." Aristotle defines being free as the one that is a cause of himself or exists on its own and for himself, i.e., one who is cause of himself (causa sui or causa sui ipsius). From the Christian perspective, " . . . for a being to be free and a cause of himself, it is necessary that he/she be a person endowed with a spiritual soul, on which his or her cognitive and volitional activity is based" (Sánchez Sorondo, Chap. 14, p. 173). An artificially intelligent robotic entity does not meet this standard. As an artifact and not a natural reality, the AI/robotic entity is invented by human beings to fulfill a purpose imposed by human beings. It can become a perfect entity that performs operations in quantity and quality more precisely than a human being, but it cannot choose for itself a purpose different from the one programmed in it by a human being. As such, the artificially intelligent robot is a means at the service of humans.

The majority of social scientists have subscribed to a conclusion similar to the above. Philosophically, as distinct from theologically, this entails some version of "human essentialism" and "species-ism" that far from all would endorse in other contexts (e.g., social constructionists). The result is to reinforce Robophobia and the supposed need to protect humankind. Margaret S. Archer (Chap. 15) seeks to put the case for a potential Robophilia based upon the positive properties and powers deriving from humans and AI co-working together in synergy. Hence, Archer asks "Can Human Beings and AI Robots be Friends?" She stresses the need to foreground social change (given this is increasingly morphogenetic rather than morphostatic) for structure, culture, and agency. Because of the central role the social sciences assign to agents and their "agency," this is crucial, as we humans are continually "enhanced" and have long since increased our height and longevity. Human enhancement sped up with medical advances, from ear trumpets, to spectacles, to artificial insertions in the body, transplants, and genetic modification. In short, the constitution of most adult
10 J. von Braun et al.
human bodies is no longer wholly organic. In consequence, the definition of "being human" is carried further away from naturalism and human essentialism. The old bifurcation into the "wet" and the "dry" is no longer a simple binary one. If the classical distinguishing feature of humankind was held to be possession of a "soul," this was never considered to be a biological organ. Today, she argues, with the growing capacities of AI robots, the tables are turned, and they implicitly pose the question, "so are they not persons too?" The paradox is that the public admires the AI that defeated chess and Go world champions. They are content with AI roles in care of the elderly, with autistic children, and in surgical interventions, none of which are purely computational feats, but the fear of artificially intelligent robots "taking over" remains and repeats Asimov's (1950) protective laws. Perceiving this as a threat alone owes much to the influence of the arts, especially sci-fi; Robophobia dominates Robophilia in the popular imagination and academia. With AI capacities now including "error-detection," "self-elaboration of their pre-programming," and "adaptation to their environment," robots have the potential for active collaboration with humankind, in research, therapy, and care. This would entail synergy or co-working between humans and AI beings.

Wolfgang Schröder (Chap. 16) also addresses robot–human interaction issues, but from positions in legal philosophy and ethics. He asks what normative conditions should apply to the use of robots in human society, and ranks the controversies about the moral and legal status of robots, and of humanoid robots in particular, among the top debates in recent practical philosophy and legal theory. As robots become increasingly sophisticated, and engineers make them combine properties of tools with seemingly psychological capacities that were thought to be reserved for humans, such considerations become pressing. While some are inclined to view humanoid robots as more than just tools, discussions are dominated by a clear divide: what some find appealing, others deem appalling, i.e., "robot rights" and "legal personhood" for AI systems. Obviously, we need to organize human–robot interactions according to ethical and juridical principles that optimize benefit and minimize mutual harm. Schröder concludes, based on a careful consideration of legal and philosophical positions, that even the most human-like behaving robot will not lose its ontological machine character merely by being open to "humanizing" interpretations. However, even if robots do not present an anthropological challenge, they certainly present an ethical one, because both AI and ethical frameworks are artifacts of our societies—and therefore subject to human choice and human control, Schröder argues. The latter holds for the moral status of robots and other AI systems, too. This status remains a choice, not a necessity. Schröder suggests that there should be no context of action where a complete absence of human respect for the integrity of other beings (natural or artificial) would be morally allowed or even encouraged. Avoiding disrespectful treatment of robots is ultimately for the sake of humans, not for the sake of the robots. Maybe this insight can contribute to inspiring an "overlapping consensus," as conceptualized by John Rawls (1987), in further discussions on responsibly coordinating human–robot interactions.

Human–robot interactions and the ethical implications of affective computing are elaborated by Devillers (Chap. 17). The field of social robotics is fast developing and will have wide implications, especially within health care, where much progress has been made toward the development of "companion robots." Such robots provide therapeutic or monitoring assistance to patients with a range of disabilities over a long timeframe. Preliminary results show that such robots may be particularly beneficial for use with individuals who suffer from neurodegenerative pathologies. Treatment can be accorded around the clock and with a level of patience rarely found among human healthcare workers. Several elements are requisite for the effective deployment of companion robots: they must be able to detect human emotions and in turn mimic human emotional reactions, as well as having an outward appearance that corresponds to human expectations about their caregiving role. Devillers' chapter presents laboratory findings on AI systems that enable robots to recognize specific emotions and adapt their behavior accordingly. Emotional perception by humans (how language and gestures are interpreted by us to grasp the emotional states of others) is being studied as a guide to programming robots so they can simulate emotions in their interactions with humans. Some of the relevant ethical issues are examined, particularly the use of "nudges," whereby detection of a human subject's cognitive biases enables the robot to initiate, through verbal or nonverbal cues, remedial measures to affect the subject's behavior in a beneficial direction. Whether this constitutes manipulation and is open to potential abuse merits closer study.

Taking the encyclical Laudato si' and its call for an "integral ecology" as its starting point, Donati (Chap. 18) examines how the processes of human enhancement that have been brought about by the digital revolution (including AI and robotics) have given rise to new social relationships. A central question consists in asking how the Digital Technological Mix, a hybridization of the human and nonhuman that issues from AI and related technologies, can promote human dignity. Hybridization is defined here as entanglements and interchanges between digital machines, their ways of operating, and human elements in social practices. The issue is not whether AI or robots can assume human-like characteristics, but how they interact with humans and affect their social relationships, thereby generating a new kind of society.
Advocating for the positive coexistence of humans and AI, Lee (Chap. 22) shares Donati's vision of a system that provides for all members of society, but one that also uses the wealth generated by AI to build a society that is more compassionate, loving, and ultimately human. Lee believes it is incumbent on us to use the economic abundance of the AI age to foster the values of volunteers who devote their time and energy toward making their communities more caring. As a practical measure, he proposes to explore the creation not of a universal basic income to protect against AI/robotics' labor-saving and job-cutting effects, but of a "social investment stipend." The stipend would be given to those who invest their time and energy in activities that promote a kind, compassionate, and creative society, i.e., care work, community service, and education. It would put the economic bounty generated by AI to work in building a better society, rather than just numbing the pain of AI-induced job losses.

Joint action in the sphere of human–human interrelations may be a model for human–robot interactions. Human–human interrelations are only possible when several prerequisites are met (Clodic and Alami, Chap. 19), inter alia: (i) that each agent has a representation within itself of its distinction from the other so that their respective tasks can be coordinated; (ii) that each agent attends to the same object, is aware of that fact, and the two sets of "attentions" are causally connected; and (iii) that each agent understands the other's action as intentional, namely one where means are selected in view of a goal, so that each is able to make an action-to-goal prediction about the other. The authors explain how human–robot interaction must follow the same threefold pattern. In this context, two key problems emerge. First, how can a robot be programmed to recognize its distinction from a human subject in the same space, to detect when a human agent is attending to something, and to make judgments about the goal-directedness of the other's actions such that the appropriate predictions can be made? Second, what must humans learn about robots so they are able to interact reliably with them in view of a shared goal? This dual process (robot perception of its human counterpart and human perception of the robot) is here examined by reference to the laboratory case of a human and a robot who team up in building a stack with four blocks.

Robots are increasingly prevalent in human life, and their place is expected to grow exponentially in the coming years (van Wynsberghe, Chap. 20). Whether their impact is positive or negative will depend not only on how they are used, but also and especially on how they have been designed. If ethical use is to be made of robots, an ethical perspective must be made integral to their design and production. Today this approach goes by the name "responsible robotics," the parameters of which are laid out in the present chapter. Identifying lines of responsibility among the actors involved in a robot's development and implementation, as well as establishing procedures to track these responsibilities as they impact the robot's future use, constitutes the "responsibility attribution framework" for responsible robotics. Whereas Asimov's (1950) famous "three laws of robotics" focused on the behavior of the robot, current "responsible robotics" redirects our attention to the human actors, designers, and producers who are involved in the development chain of robots. The robotics sector has become highly complex, with a wide network of actors engaged in various phases of development and production of a multitude of applications. Understanding the different sorts of responsibility—moral, legal, backward- and forward-looking, individual and collective—that are relevant within this space enables the articulation of an adequate attribution framework of responsibility for the robotics industry.

Regulating for Good National and International Governance

An awareness that AI-based technologies have far outpaced the existing regulatory frameworks has raised challenging questions about how to set limits on the most dangerous developments (lethal autonomous weapons or surveillance bots, for instance). Under the assumption that the robotics industry cannot be relied on to regulate itself, calls for government intervention within the regulatory space—national and international—have multiplied (Kane, Chap. 21). The author recognizes that AI technologies pose a special difficulty for any regulatory authority, given their complexity (not easily understood by nonspecialists) and their rapid pace of development (a specific application will often be obsolete by the time regulations are finally established). The various approaches to regulating AI fall into two main categories. A sectoral approach looks to identify the societal risks posed by individual technologies, so that preventive or mitigating strategies can be implemented, on the assumption that the rules applicable to AI in, say, the financial industry would be very different from those relevant to health care providers. A cross-sectoral approach, by contrast, involves the formulation of rules (whether norms adopted by industrial consensus or laws set down by governmental authority) that, as the name implies, would have application to AI-based technologies in their generality. After surveying some domestic and international initiatives that typify the two approaches, the chapter concludes with a list of 15 recommendations to guide reflection on the promotion of societally beneficial AI.

Toward Global AI Frameworks

Over the past two decades, the field of AI/robotics has spurred a multitude of applications for novel services. A particularly fast and enthusiastic development of AI/robotics occurred in the first and second decades of the century around
industrial applications and financial services. Whether or not the current decade will see continued fast innovation and expansion of AI-based commercial and public services is an open question. An important issue is, and will become even more so, how the AI innovation fields are being dominated by national strategies, especially in the USA and China, or whether some global arrangement for standard setting and openness can be contemplated to serve the global common good, along with justifiable protection of intellectual property (IP) and fair competition in the private sector. This will require numerous rounds of negotiation concerning AI/robotics, comparable with the development of rules on trade and foreign direct investment. The United Nations could provide the framework. The European Union would have a strong interest in engaging in such a venture, too. Civil society may play key roles from the perspective of protection of privacy.

Whether AI serves good governance or bad governance depends, inter alia, on the corresponding regulatory environment. Risks of manipulative applications of AI for shaping public opinion and electoral interference need attention, and national and international controls are called for. AI may serve positively in the identification and prevention of illegal transactions, for instance money received from criminal activities such as drug trafficking, human trafficking, or illegal transplants; but when AI is in the hands of oppressive governments or unethically operating companies, AI/robotics may be used for political gain, exploitation, and the undermining of political freedom. The new technologies must not become instruments to enslave people or further marginalize those already suffering from poverty.

Efforts of publicly supported development of intelligent machines should be directed to the common good. The impact on public goods and services, as well as health, education, and sustainability, must be paramount. AI may have unexpected biases or inhuman consequences, including the segmentation of society and racial and gender bias. These need to be addressed within different regulatory instances—both governmental and nongovernmental—before they occur. These are national and global issues, and the latter need further attention from the United Nations.

The war-related risks of AI/robotics need to be addressed. States should agree on concrete steps to reduce the risk of AI-facilitated and possibly escalated wars, aim for mechanisms that heighten rather than lower the barriers to the development or use of autonomous weapons, and foster the understanding that war is to be prevented in general. With respect to lethal autonomous weapon systems, no systems should be deployed that function in an unsupervised mode. Human accountability must be maintained so that adherence to internationally recognized laws of war can be assured and violations sanctioned.

Protecting People's and Individual Human Rights and Privacy

AI/robotics offer great opportunities and entail risks; therefore, regulations should be appropriately designed by legitimate public institutions, not hampering opportunities, but also not stimulating excessive risk-taking and bias. This requires a framework in which an inclusive public societal discourse is informed by scientific inquiry within different disciplines. All segments of society should participate in the needed dialogue. New forms of regulating the digital economy are called for that ensure proper data protection and personal privacy. Moreover, deontic values such as "permitted," "obligatory," and "forbidden" need to be strengthened to navigate the web and interact with robots. Human rights need to be protected from intrusive AI.

Regarding privacy, access to new knowledge, and information rights, the poor are particularly threatened because of their current lack of power and voice. AI and robotics need to be accompanied by more empowerment of the poor through information, education, and investment in skills. Policies should aim for sharing the benefits of productivity growth through a combination of profit-sharing, not by subsidizing robots but through considering (digital) capital taxation, and a reduction of working time spent on routine tasks.

Developing Corporate Standards

The private sector generates many innovations in AI/robotics. It needs to establish sound rules and standards framed by public policy. Companies, including the large corporations developing and using AI, should create ethical and safety boards, and join with nonprofit organizations that aim to establish best practices and standards for the beneficial deployment of AI/robotics. Appropriate protocols for AI/robotics safety need to be developed, such as duplicated checking by independent design teams. The passing of ethical and safety tests, evaluating for instance the social impact or covert racial prejudice, should become a prerequisite for the release of new AI software. External civil boards performing recurrent and transparent evaluation of all technologies, including in the military, should be considered. Scientists and engineers, as the designers of AI and robot devices, have a responsibility to ensure that their inventions and innovations are safe and can be used for moral purposes (Gibney 2020). In this context, Pope Francis has called for the elaboration of ethical guidelines for the design of algorithms, namely an "algorethics." To this he adds that "it is not enough simply to trust in the moral sense of researchers and developers of devices and algorithms. There is a need to create intermediate social bodies that can incorporate and express the ethical sensibilities of users and educators" (Pope Francis 2020). Developing and setting such standards would help in mutual learning and innovation with international spillover effects. Standards for
protecting people’s rights for choices and privacy also apply Goodman, N. (1954). Fact, fiction, and forecast. London: University of
and may be viewed differently around the world. The general London Press.
Lelieveld, J., Klingmüller, K., Pozzer, A., Burnett, R. T., Haines, A.,
standards, however, are defined for human dignity in the UN & Ramanathan, V. (2019). Effects of fossil fuel and total anthrog-
Human Rights codex. pogenic emission removal on public health and climate. PNAS,
116(15), 7192–7197. https://doi.org/10.1073/pnas.1819989116.
Loh, E. (2018). Medicine and the rise of the robots: A qualitative review
of recent advances of artificial intelligence in health. BMJ Leader, 2,
References 59–63. https://doi.org/10.1136/leader-2018-000071.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks
Akerlof, G. A., & Shiller, R. J. (2015). Phishing for phools: The of plausible inference. San Francisco: Morgan Kaufmann.
economics of manipulation and deception. Princeton, NJ: Princeton Pistor, K. (2019). The code of capital: How the law creates wealth and
University Press. inequality. Princeton, NJ: Princeton University Press.
Asimov, I. (1950). Runaround. In I. Asimov (Ed.), I, Robot. Garden City: Pope Francis (2020). Discourse to the general assembly of the
Doubleday. Pontifical Academy for Life. Retrieved February 28, from http://
Baldwin, R. (2019). The globotics upheaval: Globalization, robotics, press.vatican.va/content/salastampa/it/bollettino/pubblico/2020/02/
and the future of work. New York: Oxford Umiversity Press. 28/0134/00291.html#eng.
Birhane, A. & van Dijk, J. (2020). Robot rights? Let’s talk about Rawls, J. (1987). The idea of an overlapping consensus. Oxford Journal
human welfare instead. Paper accepted to the AIES 2020 confer- of Legal Studies, 7(1), 1–25.
ence in New York, February 2020. Doi: https://doi.org/10.1145/ Russell, S. (2019). Human compatible: AI and the problem of control.
3375627.3375855. New York: Viking.
Burke, M., & Lobell, D. B. (2017). Satellite-based assessment of Stanley, J. (2019). The dawn of robot surveillance. Available via
yield variation and its determinants in smallholder African systems. American Civil Liberties Union. Retrieved March 11, 2019, from
PNAS, 114(9), 2189–2194; first published February 15, 2017.. https:/ https://www.aclu.org/sites/default/files/field_document/061119-
/doi.org/10.1073/pnas.1616919114. robot_surveillance.pdf.
Danzig, R. (2018). Technology roulette: Managing loss of control as Sweeting, M. (2020). Small satellites for earth observation—Bringing
many militaries pursue technological superiority. Washington, D.C.: space within reach. In J. von Braun & M. Sánchez Sorondo (Eds.),
Center for a New American Security. Burke M. Transformative roles of science in society: From emerging basic sci-
Fabregas, R., Kremer, M., & Schilbach, F. (2019). Realizing the poten- ence toward solutions for people’s wellbeing Acta Varia 25. Vatican
tial of digital development: The case of agricultural advice. Science, City: The Pontifical Academy of Sciences.
366, 1328. https://doi.org/10.1126/science.aay3038. Wiener, N. (1960). Some moral and technical consequences of
Gibney, E. (2020). The Battle to embed ethics in AI research. Nature, automation. Science, 131, 1355–1358. https://doi.org/10.1126/
577, 609. science.131.3410.1355.
Golumbia, D. (2009). The cultural logic of computation. Cambridge,
MA: Harvard University Press.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Part I
Foundational Issues in AI and Robotics
Differences Between Natural and Artificial Cognitive Systems
Wolf Singer
Contents

Introduction 17
Strategies for the Encoding of Relations: A Comparison Between Artificial and Natural Systems 19
Encoding of Relations in Feed-Forward Architectures 19
Encoding of Relations by Assemblies 20
A Comparison Between the Two Strategies 21
Assembly Coding and the Binding Problem 22
Computing in High-Dimensional State Space 22
Information Processing in Natural Recurrent Networks 24
Concluding Remarks 25
References 26
W. Singer (✉)
Max Planck Institute for Brain Research (MPI), Ernst Strüngmann Institute for Neuroscience (ESI) in Cooperation with Max Planck Society, Frankfurt, Germany
Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
e-mail: wolf.singer@brain.mpg.de

Introduction

Organisms and robots have to cope with very similar challenges. Both need to possess an internal model of the restricted environment in which they act, and both need to adjust their actions to the idiosyncratic conditions of the respective environment in order to accomplish particular tasks. However, the computational strategies for coping with these challenges exhibit marked differences between natural and artificial systems.

In natural systems the model of the world is to a large extent inherited, i.e. the relevant information has been acquired by selection and adaptation during evolution, is stored in the genes and is expressed in the functional anatomy of the organism and the architecture of its nervous system. This inborn model is subsequently complemented and refined during ontogeny by experience and practice. The same holds true for the specification of the tasks that the organism needs to accomplish and for the programs that control the execution of actions. Here, too, the necessary information is provided in part by evolution and in part by lifelong learning. In order to be able to evolve in an ever-changing environment, organisms have evolved cognitive systems that allow them to analyse the actual conditions of their embedding environment, to match them with the internal model, update the model, derive predictions and adapt future actions to the actual requirements.

In order to complement the inborn model of the world, organisms rely on two different learning strategies: unsupervised and supervised learning. The former serves to capture frequently occurring statistical contingencies in the environment and to adapt processing architectures to the efficient analysis of these contingencies. Babies apply this strategy for the acquisition of the basic building blocks of language. The unsupervised learning process is implemented by adaptive connections that change their gain (efficiency) as a function of the activity of the connected partners. If two interconnected neurons in a network are frequently coactivated, because the features to which they respond are often present simultaneously, the connections between these two neurons become more efficient. The neurons representing these correlated features become associated with one another. Thus, statistical contingencies between features get represented by the strength of neuronal interactions. “Neurons wire together if they fire together.” Conversely, connections among neurons weaken if they are rarely active together, i.e. if their activity is uncorrelated.

By contrast, supervised learning strategies are applied when the outcome of a cognitive or executive process needs to be evaluated. An example is the generation of categories. If the system were to learn that dogs, sharks and eagles belong to the category of animals, it needs to be told that such a category exists, and during the learning process it needs to receive feedback on the correctness of the various classification attempts. In the case of supervised learning, the decision as to whether a particular activity pattern induces a change in coupling depends not only on the local activity of the coupled neurons but on additional gating signals that have a “now print” function. Only if these signals are available in addition can local activity lead to synaptic changes. These gating signals are generated by a few specialized centres in the depth of the brain and conveyed through widely branching nerve fibres to the whole forebrain. The activity of these value-assigning systems is in turn controlled by widely distributed brain structures that evaluate the behavioural validity of ongoing or very recently accomplished cognitive or executive processes. If the outcome is positive, the network connections whose activity contributed to this outcome get strengthened, and if the outcome is negative they get weakened. This retrospective adjustment of synaptic modifications is possible because activity patterns that could potentially change a connection leave a molecular trace at the respective synaptic contacts that outlasts the activity itself. If the “now print” signal of the gating systems arrives while this trace is still present, the tagged synapse will undergo a lasting change (Redondo and Morris 2011; Frey and Morris 1997). In this way, the specific activity pattern of the network that led to the desired outcome will be reinforced. Therefore, this form of supervised learning is also referred to as reinforcement learning.

Comparing these basic features of natural systems with the organization of artificial “intelligent” systems already reveals a number of important differences.

Artificial systems have no evolutionary history but are the result of purposeful design, just as any other tool humans have designed to fulfil special functions. Hence, their internal model is installed by engineers and adapted to the specific conditions in which the machine is expected to operate. The same applies to the programs that translate signals from the robot’s sensors into action. Control theory is applied to assure effective coordination of the actuators. Although I am not a specialist in robotics, I assume that the large majority of useful robots are hard-wired in this way and lack most of the generative, creative and self-organizing capacities of natural agents.

However, there is a new generation of robots with enhanced autonomy that capitalize on recent progress in machine learning. Because of the astounding performance of these robots (autonomous cars are one example), and because of the demonstration that machines outperform humans in games such as Go and chess, it is necessary to examine in greater depth to what extent the computational principles realized in these machines resemble those of natural systems.

Over the last decades the field of artificial intelligence has been revolutionized by the implementation of computational strategies based on artificial neuronal networks. In the second half of the last century evidence accumulated that relatively simple neuronal networks, known as Perceptrons or Hopfield
nets, can be trained to recognize and classify patterns, and this fuelled intensive research in the domain of artificial intelligence. The growing availability of massive computing power and the design of ingenious training algorithms provided compelling evidence that this computational strategy is scalable. The early systems consisted of just three layers and a few dozen neuron-like nodes. The systems that have recently attracted considerable attention, because they outperform professional Go players, recognize and correctly classify huge numbers of objects, transform verbal commands into actions and steer cars, are all designed according to the same principles as the initial three-layered networks. However, the systems now comprise more than a hundred layers and millions of nodes, which has earned them the designation “deep learning networks”. Although the training of these networks requires millions of training trials with a very large number of samples, their amazing performance is often taken as evidence that they function according to the same principles as natural brains. However, as detailed in the following paragraph, a closer look at the organization of artificial and natural systems reveals that this is only true for a few aspects.

Strategies for the Encoding of Relations: A Comparison Between Artificial and Natural Systems

The world, animate and inanimate, is composed of a relatively small repertoire of elementary components that are combined at different scales and in ever different constellations to bring forth the virtually infinite diversity of objects. This is at least how the world appears to us. Whether we are caught in an epistemic circle and perceive the world as composite because our cognitive systems are tuned to divide wholes into parts, or because the world is composite and our cognitive systems have adapted to this fact, will not be discussed further. What matters is that the complexity of descriptions can be reduced by representing the components and their relations rather than the plethora of objects that result from different constellations of components. It is probably for this reason that evolution has optimized cognitive systems to exploit the power of combinatorial codes. A limited number of elementary features is extracted from the sensory environment and represented by the responses of feature-selective neurons. Subsequently, different but complementary strategies are applied to evaluate the relations between these features and to generate minimally overlapping representations of particular feature constellations for classification. In a sense this is the same strategy as utilized by human languages. In the Latin alphabet, 28 symbols suffice to compose the world literature.

Encoding of Relations in Feed-Forward Architectures

One strategy for the analysis and encoding of relations is based on convergent feed-forward circuits. This strategy is ubiquitous in natural systems. Nodes (neurons) of the input layer are tuned to respond to particular features of input patterns, and their output connections are made to converge on nodes of the next higher layer. By adjusting the gain of these converging connections and the threshold of the target node, it is assured that the latter responds preferentially to only a particular conjunction of features in the input pattern (Hubel and Wiesel 1968; Barlow 1972). In this way consistent relations among components become represented by the activity of conjunction-specific nodes (see Fig. 1). By iterating this strategy across multiple layers in hierarchically structured feed-forward architectures, complex relational constructs (cognitive objects) can be represented by conjunction-specific nodes of higher order. This basic strategy for the encoding of relations has been realized independently several times during evolution in the nervous systems of different phyla (molluscs, insects, vertebrates) and reached the highest degree of sophistication in the hierarchical arrangement of processing levels in the cerebral cortex of mammals (Felleman and van Essen 1991; Glasser et al. 2016; Gross et al. 1972; Tsao et al. 2006; Hirabayashi et al. 2013; Quian Quiroga et al. 2005). This strategy is also the hallmark of the numerous versions of artificial neuronal networks designed for the recognition and classification of patterns (Rosenblatt 1958; Hopfield 1987; DiCarlo and Cox 2007; LeCun et al. 2015). As mentioned above, the highly successful recent developments in the field of artificial intelligence, addressed as “deep learning networks” (LeCun et al.

Fig. 1 The encoding of relations by conjunction-specific neurons (red) in a three-layered neuronal network. A and B refer to neurons at the input layer whose responses represent the presence of features A and B. Arrows indicate the flow of activity and their thickness the efficiency of the respective connections. The threshold of the conjunction-specific neuron is adjusted so that it responds only when A and B are active simultaneously
2015; Silver et al. 2017, 2018), capitalize on the scaling of this principle in large multilayer architectures (see Fig. 2).

Encoding of Relations by Assemblies

In natural systems, a second strategy for the encoding of relations is implemented that differs in important aspects from the formation of individual, conjunction-specific neurons (nodes) and requires a very different architecture of connections. In this case, relations among components are encoded by the temporary association of neurons (nodes) representing individual components into cooperating assemblies that respond collectively to particular constellations of related features. In contrast to the formation of conjunction-specific neurons by convergence of feed-forward connections, this second strategy requires recurrent (reciprocal) connections between the nodes of the same layer as well as feed-back connections from higher to lower levels of the processing hierarchy. In natural systems, these recurrent connections outnumber by far the feed-forward connections. As proposed by Donald Hebb as early as 1949, components (features) of composite objects can not only be related to one another by the formation of conjunction-specific cells but also by the formation of functionally coherent assemblies of neurons. In this case, the neurons that encode the features that need to be bound together become associated into an assembly. Such assemblies, according to the original assumption, are distinguished as a coherent whole that represents a particular constellation of components (features) because of the jointly enhanced activation of the neurons constituting the assembly. The joint enhancement of the neurons’ activity is assumed to be caused by cooperative interactions that are mediated by the reciprocal connections between the nodes of the network. These connections are endowed with correlation-dependent synaptic plasticity mechanisms (Hebbian synapses, see below) and strengthen when the interconnected nodes are frequently co-activated. Thus, nodes that are often co-activated, because the features to which they respond often co-occur in the environment, enhance their mutual interactions. As a result of these cooperative interactions, the vigour and/or coherence of the responses of the respective nodes is enhanced when they are activated by the respective feature constellation. In this way, consistent relations among the components of cognitive
objects are translated into the weight distributions of the reciprocal connections between network nodes and become represented by the joint responses of a cooperating assembly of neurons. Accordingly, the information about the presence of a particular constellation of features is not represented by the activity of a single conjunction-specific neuron but by the amplified or more coherent or reverberating responses of a distributed assembly of neurons.

A Comparison Between the Two Strategies

Both relation-encoding strategies have advantages and disadvantages, and evolution has apparently opted for a combination of the two. Feed-forward architectures are well suited to evaluate relations between simultaneously present features, raise no stability problems and allow for fast processing. However, encoding relations exclusively with conjunction-specific neurons is exceedingly expensive in terms of hardware requirements. Because specific constellations of components (features) have to be represented explicitly by conjunction-specific neurons via the convergence of the respective feed-forward connections, and because the dynamic range of the nodes is limited, an astronomically large number of nodes and processing levels would be required to cope with the virtually infinite number of possible relations among the components (features) characterizing real-world objects, let alone the representation of nested relations required to capture complex scenes. This problem is addressed as the “combinatorial explosion”. Consequently, biological systems relying exclusively on feed-forward architectures are rare and can afford representation of only a limited number of behaviourally relevant relational constructs. Another serious disadvantage of networks consisting exclusively of feed-forward connections is that they have difficulties encoding relations among temporally segregated events (temporal relations), because they lack memory functions.

By contrast, assemblies of recurrently coupled, mutually interacting nodes (neurons) can cope very well with the encoding of temporal relations (sequences), because such networks exhibit fading memory due to reverberation and can integrate temporally segregated information. Assembly codes are also much less costly in terms of hardware requirements, because individual feature-specific nodes can be recombined in flexible combinations into a virtually infinite number of different assemblies, each representing a different cognitive content, just as the letters of the alphabet can be combined into syllables, words, sentences and complex descriptions (combinatorial code). In addition, coding space is dramatically widened because information about the statistical contingencies of features can be encoded not only in the synaptic weights of feed-forward connections but also in the weights of the recurrent and feed-back connections. Finally, the encoding of entirely new relational constructs, or the completion of incomplete ones (associativity), is facilitated by the cooperativity inherent in recurrently coupled networks, which allows for pattern completion and the generation of novel associations (generative creativity).

However, assembly coding and the required recurrent networks cannot easily be implemented in artificial systems, for a number of reasons. First and above all, it is extremely cumbersome to simulate the simultaneous reciprocal interactions between large numbers of interconnected nodes with conventional digital computers, which can perform only sequential operations. Second, recurrent networks exhibit highly non-linear dynamics that are difficult to control. They can fall dead if global excitation drops below a critical level, and they can engage in runaway dynamics and become epileptic if a critical level of excitation is reached. Theoretical analysis shows that such networks perform efficiently only if they operate in a dynamic regime close to criticality. Nature takes care of this problem with a number of self-regulating mechanisms involving normalization of synaptic strength (Turrigiano and Nelson 2004), inhibitory interactions (E/I balance) (Yizhar et al. 2011) and control of global excitability by modulatory systems that keep the network within a narrow working range just below criticality (Plenz and Thiagarajan 2007; Hahn et al. 2010).

The third problem for the technical implementation of biological principles is the lack of hardware solutions for Hebbian synapses that adjust their gain as a function of the correlation between the activity of interconnected nodes. Most artificial systems rely on some sort of supervised learning in which temporal relations play only a minor role, if any. In these systems the gain of the feed-forward connections is iteratively adjusted until the activity patterns at the output layer represent particular input patterns with minimal overlap. To this end, very large samples of input patterns are generated, deviations of the output patterns from the desired result are monitored as “errors” and backpropagated through the network in order to change the gain of those connections that contributed most to the error. In multilayer networks this is an extremely challenging procedure, and the breakthroughs of recent developments in deep learning networks were due mainly to the design of efficient backpropagation algorithms. However, these are biologically implausible. In natural systems, the learning mechanisms exploit the fundamental role of consistent temporal relations for the definition of semantic relations. Simultaneously occurring events usually have a common cause or are interdependent because of interactions. If one event consistently precedes the other, the first is likely the cause of the latter, and if there are no temporal correlations between the events, they are most likely unrelated. Likewise, components (features) that often occur together are likely to be related, e.g. because their particular constellation
is characteristic for a particular object or because they are part of a stereotyped sequence of events. Accordingly, the molecular mechanisms developed by evolution for the establishment of associations are exquisitely sensitive to temporal relations between the activity patterns of interconnected nodes. The crucial variable that determines the occurrence and polarity of gain changes of the connections is the temporal relation between discharges in converging presynaptic inputs and/or between the discharges of presynaptic afferents and the activity of the postsynaptic neuron. In natural systems most excitatory connections (feed-forward, feed-back and recurrent), as well as the connections between excitatory and inhibitory neurons, are adaptive and can change their gain as a function of the correlation between pre- and postsynaptic activity. The molecular mechanisms that translate electrical activity into lasting changes of synaptic gain evaluate correlation patterns with a precision in the range of tens of milliseconds and support both the experience-dependent generation of conjunction-specific neurons in feed-forward architectures and the formation of assemblies.

Assembly Coding and the Binding Problem

Synchronization is as effective in enhancing the efficiency of neuronal responses in down-stream targets as is enhancing discharge rate (Bruno and Sakmann 2006). Thus, activation of target cells at the subsequent processing stage can be assured by increasing either the rate or the synchronicity of discharges in the converging input connections. The advantage of increasing salience by synchronization is that integration intervals for synchronous inputs are very short, allowing for instantaneous detection of enhanced salience. Hence, information about the relatedness of responses can be read out very rapidly. In extremis, single discharges can be labelled as salient and identified as belonging to a particular assembly if synchronized with a precision in the millisecond range.

Again, however, it is not trivial to endow artificial recurrent networks with the dynamics necessary to solve the binding problem. It would require implementing oscillatory microcircuits and mechanisms ensuring selective synchronization of feature-selective nodes. The latter, in turn, have to rely on Hebbian learning mechanisms for which there are as yet no satisfactory hardware solutions. Hence, there are multiple reasons why the unique potential of recurrent networks is only marginally exploited by AI systems.
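The rate-versus-synchrony trade-off described above can be illustrated with a toy leaky-integrator sketch (the time constant, spike counts and threshold are illustrative assumptions, not values from the chapter): the same number of input spikes drives a target unit above threshold when they arrive near-synchronously, but not when they are spread out in time, because postsynaptic potentials decay before they can summate.

```python
import math

def membrane_peak(spike_times_ms, tau_ms=10.0, weight=1.0):
    """Peak depolarization of a leaky integrator receiving unit EPSPs that
    decay exponentially with the membrane time constant tau_ms."""
    events = sorted(spike_times_ms)
    v, t_prev, peak = 0.0, events[0], 0.0
    for t in events:
        v *= math.exp(-(t - t_prev) / tau_ms)  # passive decay since last spike
        v += weight                            # instantaneous EPSP
        peak = max(peak, v)
        t_prev = t
    return peak

# Ten spikes packed into 1 ms (a synchronous volley) ...
sync = [10.0 + 0.1 * i for i in range(10)]
# ... versus the same ten spikes spread over roughly 100 ms.
async_ = [10.0 * (i + 1) for i in range(10)]

threshold = 8.0  # assumed firing threshold, in units of one EPSP
print(membrane_peak(sync) > threshold)    # synchronous volley crosses threshold
print(membrane_peak(async_) > threshold)  # same spike count, spread out: it does not
```

Because the synchronous volley is evaluated within a single brief integration window, its enhanced salience is detectable essentially instantaneously, which is the rapid-readout advantage noted above.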
Computing in High-Dimensional State Space

Because the updating of network states has to be performed sequentially according to the clock cycle of the digital computer used to simulate the recurrent network, many of the analogue computations taking place in natural networks can only be approximated with iterations, if at all. Therefore, attempts are made to emulate the dynamics of recurrent networks with analogue technology. An original and hardware-efficient approach is based on optoelectronics. Laser diodes serve as oscillating nodes, and these are reciprocally coupled through glass fibres whose variable length introduces variations of coupling delays (Soriano et al. 2013). All these implementations have in common that they use the characteristic dynamics of recurrent networks as the medium for the execution of specific computations.

Because the dynamics of recurrent networks resemble to some extent the dynamics of liquids (hence the term “liquid computing”), the basic principle can be illustrated by considering the consequences of perturbing a liquid. If objects impact at different intervals and locations in a pond of water, they generate interference patterns of propagating waves whose parameters reflect the size, speed, location and time of impact of the objects. The wave patterns fade with a time constant determined by the viscosity of the liquid, interfere with one another and create a complex dynamic state. This state can be analysed by measuring at several locations in the pond the amplitude, frequency and phase of the respective oscillations, and from these variables a trained classifier can subsequently reconstruct the exact sequence and nature of the impacting “stimuli”. Similar effects occur in recurrent networks when subsets of nodes are perturbed by stimuli that have a particular spatial and temporal structure. The excitation of the stimulated nodes spreads across the network and creates a complex dynamic state whose spatio-temporal structure is determined by the constellation of initially excited nodes and the functional architecture of the coupling connections. This stimulus-specific pattern continues to evolve beyond the duration of the stimulus due to reverberation and then eventually fades. If the activity has not induced changes in the gain of the recurrent connections, the network returns to its initial state. This evolution of the network dynamics can be traced by assessing the activity changes of the nodes and is usually represented by time-varying, high-dimensional vectors or trajectories. As these trajectories differ for different stimulus patterns, segments exhibiting maximal distance in the high-dimensional state space can be selected to train classifiers for the identification of the respective stimuli.

This computational strategy has several remarkable advantages: (1) low-dimensional stimulus events are projected into a high-dimensional state space where nonlinearly separable stimuli become linearly separable; (2) the high dimensionality of the state space can allow for the mapping of more complicated output functions (like the XOR) by simple classifiers; (3) information about sequentially presented stimuli persists for some time in the medium (fading memory), so that information about multiple stimuli can be integrated over time, allowing for the representation of sequences; and (4) information about the statistics of natural environments (the internal model) can be stored in the weight distributions and architecture of the recurrent connections for instantaneous comparison with incoming sensory evidence. These properties make recurrent networks extremely effective for the classification of input patterns that have both spatial and temporal structure and share overlapping features in low-dimensional space. Moreover, because these networks self-organize and produce spatio-temporally structured activity patterns, they have generative properties and can be used for pattern completion, the formation of novel associations and the generation of patterns for the control of movements. Consequently, an increasing number of AI systems now complement the feed-forward strategy implemented in deep learning networks with algorithms inspired by recurrent networks. One of these powerful and now widely used algorithms is the Long Short-Term Memory (LSTM) algorithm, introduced decades ago by Hochreiter and Schmidhuber (1997) and used in systems such as AlphaGo, the network that outperforms professional Go players (Silver et al. 2017, 2018). The surprising efficiency of these systems, which in certain domains exceeds human performance, has nurtured the notion that brains operate in the same way. If one considers, however, how fast brains can solve certain tasks despite their comparatively extremely slow components, and how energy-efficient they are, one is led to suspect the implementation of additional and rather different strategies.

And indeed, natural recurrent networks differ from their artificial counterparts in several important features, which is the likely reason for their amazing performance. In sensory cortices the nodes are feature-selective, i.e. they can be activated only by specific spatio-temporal stimulus configurations. The reason is that they receive convergent input from selected nodes of the respective lower processing level and thus function as conjunction-specific units in very much the same way as the nodes in feed-forward multilayer networks. In low areas of the visual system, for example, the nodes are selective for elementary features such as the location and orientation of contour borders, while in higher areas of the processing hierarchy the nodes respond to increasingly complex constellations of elementary features. In addition, the nodes of natural systems, the neurons, possess an immensely larger spectrum of integrative and adaptive functions than the nodes currently used in artificial recurrent networks. And finally, the neurons and/or their embedding microcircuits are endowed with the propensity to oscillate.

The recurrent connections also differ in important respects from those implemented in most artificial networks. Because of the slow velocity of signals conveyed by neuronal axons
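The liquid-computing principle described above (a fixed, randomly coupled recurrent medium whose transient high-dimensional state is read out by a simple trained classifier) can be sketched as a minimal echo state network. Everything here, the reservoir size, the weight scaling, the pulse-timing task and the nearest-centroid readout, is an illustrative assumption rather than a model taken from the chapter.

```python
import math
import random

random.seed(0)
N = 30  # reservoir size (illustrative)

# Fixed random recurrent weights, scaled so the dynamics are contracting
# ("echo state" regime): past inputs fade instead of exploding.
scale = 1.5 / math.sqrt(N)
W = [[random.uniform(-scale, scale) for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def run(inputs):
    """Drive the reservoir with a scalar input sequence; return the final state."""
    x = [0.0] * N
    for u in inputs:
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + W_in[i] * u)
             for i in range(N)]
    return x  # carries a fading trace of the whole sequence

def pulse_at(t, length=6, noise=0.01):
    """A unit pulse at time step t, embedded in weak input noise."""
    return [(1.0 if i == t else 0.0) + random.gauss(0.0, noise)
            for i in range(length)]

def centroid(states):
    return [sum(s[i] for s in states) / len(states) for i in range(N)]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# "Training" fits only the readout (here a nearest-centroid rule);
# the recurrent medium itself stays fixed, as in liquid computing.
c_early = centroid([run(pulse_at(1)) for _ in range(20)])
c_late = centroid([run(pulse_at(3)) for _ in range(20)])

# Held-out noisy trials: both classes end in silence, so only the
# reservoir's fading memory can tell WHEN the pulse occurred.
correct = 0
for _ in range(20):
    correct += dist2(run(pulse_at(1)), c_early) < dist2(run(pulse_at(1)), c_late)
    correct += dist2(run(pulse_at(3)), c_late) < dist2(run(pulse_at(3)), c_early)
print(f"{correct}/40 held-out trials classified correctly")
```

The two stimulus classes are identical at the final time steps; the classifier succeeds only because the transient reservoir state integrates earlier inputs, which is exactly the fading-memory property (3) listed in the text.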
The fox snapped at him, but, fortunately missed his face; and
having snipped a little piece out of the boy’s ear, evidently came to
the conclusion that running away was better than revenge. He
therefore ran over Polargno’s prostrate body, and up his elevated
legs, and, making a tremendous spring from the quivering feet, he
darted away at his utmost speed.
The boys left Polargno to get out of his trap as best he could, and
immediately gave chase to the fox. But they knew it was useless.
They might as well try to catch the wind. If they had brought the dogs
the fox would probably have had the worst of it. But, as it was, he
escaped—hungry, but safe.
This was Polargno’s adventure with the fox.
The next summer, Polargno had a very surprising adventure with a
seal. He was in a canoe alone on the bay. He had paddled out a short
distance from the shore because he had nothing else to do just then.
He paddled up and down until he got tired, and then he rested on his
oars, and looked about him. The scene was very different from what
it had been when he and the fox had caught each other. Now the bay
was entirely free from ice, and the waves leaped and danced as if
rejoicing to be free once more. There was not a cloud in the sky,
where the sun shone brightly far above the horizon in the same
place, apparently, that it had been for several days and nights.
Flowers bloomed in the grassy fields, birds perched upon the rocks,
and the noise of insects could be faintly heard.
SUMMER-TIME.
But a Greenlander is never free from the sight of snow; and, even
now, in mid-summer, every high mountain peak had its white cap;
and on the tallest mountains the snow extended far down the sides.
Polargno took pleasure in the summer warmth and life, but I do not
suppose he thought much about the objects he saw around him. His
mind was busy with the prospect of the good time he would have
when two whaling ships that were cruising some miles below in the
bay, should come up as high as their settlement. There was a report,
too, that a large school of whales was making its way northward.
Thinking of these things while he idly looked about him, he
suddenly felt that he was being lifted into the air. Before he could
recover from his surprise at this rapid elevation he found that his
canoe was being borne swiftly over the surface of the water.
Instinctively he tightened his hold upon the paddle that he might not
lose it, and this action caused one end of it to strike an animal under
the boat, which immediately flapped itself free, and rolled off to a
little distance, where it remained, as motionless as a log, evidently
waiting to see what would happen next.
The thing that came near happening was the upsetting of
Polargno’s canoe, for the blow it received from the flap of the
creature’s tail sent it spinning around like a top. Polargno would not
have been much alarmed if it had upset, for he could swim like a
fish; but still he was very glad it remained right side up.
As soon as he could gather together his scattered wits he found
that the animal which had given him this unceremonious ride was not
a sea-lion, as he had at first supposed, but a large specimen of the
common seal. Its bouncing up under his boat was an
unpremeditated act on the part of the seal, who was quite as much
alarmed as the boy, and quite as glad to get away.
But should he get away? This question came into Polargno’s mind.
The Esquimaux boats at this season were kept prepared for whaling
expeditions, and in the bow of this one there lay a harpoon with a
nice long coil of rope. The boy glanced from this to the shining back
of the seal that lay so temptingly just above the surface of the water.
He knew all about seals. He had helped kill many a one. That was
very different from fighting one entirely alone, but then the glory
would be so much greater if he conquered.
A seal is a timid animal, but when brought to bay it can fight boldly
and fiercely enough, and Polargno knew well that there was a
chance of his coming to grief if he once began the combat.
He wished to wipe out the memory of his ridiculous adventure with
the Arctic fox, which had brought upon him the laughter of the whole
village, and was a joke against him to that very day.
These thoughts passed swiftly through his mind, and he made his
determination. He cautiously paddled towards the seal, but this act
alarmed the creature, and it sank into the water out of sight.
Polargno knew it would come up again to breathe, and he uncoiled
the harpoon line, and held the weapon all prepared to throw.
Meantime the canoe drifted down to the very spot where the seal
had sunk, and Polargno looked down into the deep green water,
thinking he might see it coming up. But it rose in an entirely different
place, on the other side of the boat, and at quite a distance.
Polargno was by no means sure of his aim in making such a long
throw; but, putting himself into the attitude he had seen experienced
harpooners assume, he sent the harpoon whizzing through the air
with a straight, steady motion that carried it with a wide sweeping
curved line into the back of the seal, just above the tail.
Down into the water went the animal with a rush that made
Polargno’s canoe reel and dance. If it had been a small whale, or
even a sea-lion, that the boy had undertaken to capture in this
fashion, it would have dragged down the canoe, harpoon, rope, and
all, leaving to Polargno the pleasant task of swimming home and
telling the news. But the seal was not quite strong enough for this,
though it did its best; and, each time that it rose to the surface after
“sounding,” Polargno wound the line tighter and tighter around the
strong supports to which it was fastened. In this way he brought the
seal nearer and nearer the canoe. By the time its strength was pretty
well spent it had so short a line that it could dive only a few feet
below the surface. And then Polargno began to wonder how he
should get it to the shore when it was dead. It would be too heavy a
body for him to manage alone, and there was no one in sight on the
shore to whom he could call for help. He did not wish to cut the body
adrift, for then he was not likely to get it again.
Suddenly there flashed into his mind a brilliant thought. The seal
should take itself to the shore, and take him too! He seated himself
firmly in the boat, and took up the paddle. With this he hit the seal a
whack on the side, and, in darting away in the opposite direction
from the blow, the animal headed for the shore. It could not dive, but
it made a grand rush through the water, drawing the boat swiftly
along. A few such rushes brought it to the shore. Whenever it made
a turn to the right or left, the paddle reminded it to keep the straight
path. Polargno had never heard of Neptune’s chariot with its dolphin
steeds, and was therefore unconscious that he was working out a
poetical idea, but he was very proud of the success of his stratagem,
especially as it possessed an element of danger. If his charger had
taken it into its head to back against the boat, and to give it a blow
with its tail, it would have stove it in, and if it had given Polargno a
whack at the same time it would probably have killed him. But the
seal was too weak from loss of blood, or too ignorant to think of any
such revenge, and rushed upon the beach at last, dragging
Polargno’s boat up with such violence that he was shot out of it in a
twinkling.
He fell upon the soft sand and was not hurt. When he stood upon
his feet he found that his father and one of the neighbors had come
to the shore to look after the boats, and had witnessed the last part
of his extraordinary journey. He was very glad of this, for he had
thought his story would not be believed in the village.
The seal was soon killed, and yielded a good deal of oil and
blubber.
After this, the people of the village looked upon Polargno as a very
clever and brave fellow, and they laughed at him no more about the
trick the fox had played him.
FROZEN UP.
By that time the whales were gone, and the vessel was full, and
they were really on the point of departure, when, unfortunately, there
came upon them a few days of excessively cold weather that was
very unusual so early in the season. In a short time the bay was
frozen, and the vessel tightly enclosed in the ice. The sailors now
began seriously to fear that they would have to winter in that dreadful
climate, when, to their joy, the weather moderated somewhat, and
the ice broke up. They soon found, however, that this condition of
things was worse than the other, for there was great danger of the
ship being crushed by the huge masses of loose ice that pressed
upon it on every side. The crew worked hard to save the ship, but it