Philosophical Studies Series

Thomas M. Powers, Editor

Philosophy and Computing
Essays in Epistemology, Philosophy of Mind, Logic, and Ethics
Philosophical Studies Series

Volume 128

Editor-in-Chief
Luciano Floridi, University of Oxford, Oxford Internet Institute, United Kingdom
Mariarosaria Taddeo, University of Oxford, Oxford Internet Institute, United Kingdom

Executive Editorial Board


Patrick Allo, Vrije Universiteit Brussel, Belgium
Massimo Durante, Università degli Studi di Torino, Italy
Phyllis Illari, University College London, United Kingdom
Shannon Vallor, Santa Clara University

Board of Consulting Editors


Lynne Rudder Baker, University of Massachusetts at Amherst
Stewart Cohen, Arizona State University, Tempe
Radu Bogdan, Tulane University
Marian David, University of Notre Dame
John M. Fischer, University of California at Riverside
Keith Lehrer, University of Arizona, Tucson
Denise Meyerson, Macquarie University
François Recanati, Institut Jean-Nicod, EHESS, Paris
Mark Sainsbury, University of Texas at Austin
Barry Smith, State University of New York at Buffalo
Nicholas D. Smith, Lewis & Clark College
Linda Zagzebski, University of Oklahoma
More information about this series at http://www.springer.com/series/6459
Thomas M. Powers, Editor

Philosophy and Computing
Essays in Epistemology, Philosophy of Mind, Logic, and Ethics
Editor
Thomas M. Powers
Department of Philosophy
University of Delaware
Newark, DE, USA

ISSN 0921-8599 ISSN 2542-8349 (electronic)


Philosophical Studies Series
ISBN 978-3-319-61042-9 ISBN 978-3-319-61043-6 (eBook)
DOI 10.1007/978-3-319-61043-6

Library of Congress Control Number: 2017952948

© Springer International Publishing AG 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

1 Introduction: Intersecting Traditions in the Philosophy of Computing
Thomas M. Powers

2 Levels of Computational Explanation
Michael Rescorla

3 On the Relation of Computing to the World
William J. Rapaport

4 Cognitive Computation sans Representation
Paul Schweizer

5 Software Error as a Limit to Inquiry for Finite Agents: Challenges for the Post-human Scientist
John F. Symons and Jack K. Horner

6 The Singularity Business
Selmer Bringsjord and Alexander Bringsjord

7 Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency
Don Howard and Ioan Muntean

8 AI and the Automation of Wisdom
Shannon Vallor

9 An Analysis of Machine Ethics from the Perspective of Autonomy
Mario Verdicchio

10 Beyond Informed Consent: Investigating Ethical Justifications for Disclosing, Donating or Sharing Personal Data in Research
Markus Christen, Josep Domingo-Ferrer, Dominik Herrmann, and Jeroen van den Hoven

11 Big Data, Digital Traces and the Metaphysics of the Self
Soraj Hongladarom

12 Why Big Data Needs the Virtues
Frances S. Grodzinsky

Index
Chapter 1
Introduction: Intersecting Traditions
in the Philosophy of Computing

Thomas M. Powers
Department of Philosophy, University of Delaware, Newark, DE, USA
e-mail: Tpowers@udel.edu

Keywords Philosophy of computing • Cognition • Representation • Autonomy • Big Data • Machine ethics

This volume consists of selected papers from the 2015 joint international
conference—the first-ever meeting of the Computer Ethics-Philosophical Enquiry
conference series of the International Society for Ethics and Information
Technology, and the International Association for Computing and Philosophy—held
at the University of Delaware from June 22–25 of 2015. The organizing themes of
the conference are well represented in the volume. They include theoretical topics at
the intersection of computing and philosophy, including essays that explore current
issues in epistemology, philosophy of mind, logic, and philosophy of science,
and also normative topics on matters of ethical, social, economic, and political
import. All of the essays provide views of their subject matter through the lens of
computation.
Two general types of question motivate the foregoing essays. First, how has
computation changed philosophical inquiry into the nature of mind and cognition?
Second, how can we come to terms with the ethical and political aspects of the
computer and information-technology revolution? It is worth noting that these
questions—though they have lingered on the surface and beneath many philosoph-
ical discussions for decades (and in some cases, for centuries)—are given a new
treatment by the authors precisely because of recent developments in the science
and technology of computation. Almost all philosophers know the general landscape
of these questions well—What is the nature of explanation? What is thought?
How does language represent the world? What is it to act in such a way as to be
responsible? Formerly, answers to these questions placed humans at the center of
such inquiries; philosophy was concerned with explanations given by humans and
for humans. We considered—as though it were tautological—only the possibility of
human (natural) languages, learning, thought, agency, and the like. But philosophy
cannot ignore the fact that computational machines are capable of joining us at the
center of this inquiry, and thus we can now ponder machine language, learning,
thought, agency, and other quite revolutionary concepts.
The impetus for the new treatments in these essays comes from startling
developments in contemporary technologies which allow pervasive information
gathering and analyses far beyond human capacities—the era of Big Data—and new
scientific insights into our own cognition. We are also beginning to see sophisticated,
autonomous, and deadly functionality in machines, and there is even talk of the
possibility of a “post-human” society due to the convergence of genetics and
genomics with cognitive neuroscience and information technologies. So indeed, we
are entering uncharted territory for science, ethics, politics, and civilization.
That philosophy would once again be forced to reconstitute itself because of
developments in science and technology would have been well appreciated by
Descartes, through his study of microscopy, as well as by Kant, who grounded a
critical metaphysics to acknowledge the contributions of Newton's mechanics. Indeed,
the essays in this volume also fall in the tradition of philosophy reconstituting itself,
as described in Floridi’s “fourth revolution.”
The virtue of a collection that ranges over philosophical questions, such as this
one does, lies in the prospects for a more integrated understanding of issues. These
are early days in the partnership between philosophy and information technology,
and many foundational issues are still being sorted out. It is to be expected that many
of the tools of philosophy will have to be deployed to establish this foundation, and
this volume admirably showcases those tools in the hands of scholars. Here briefly
is what the reader can expect.
In Michael Rescorla’s essay “Levels of Computational Explanation” he analyzes
three levels in describing computing systems (representational, syntactic, and
physical) and argues that syntactic description is key because it mediates between
the representational description and physical construction of artificial systems.
However, while this three-level view works well for artificial computing systems,
it is inadequate for natural computing systems (e.g., humans) on his view. By
focusing on explanatory practice in cognitive science, he brings to the foreground
the pragmatic advantages of syntactic description for natural computing systems.
In “On the Relation of Computing to the World” William J. Rapaport surveys a
multitude of ways of construing said relation. One would be hard-pressed to find
a stone unturned here: questions of semantic and syntactic computation (and even
syntactic semantics), the nature and teleology of algorithms, program verification
and the limits of computation, symbols and meanings in computation, and inputs
and outputs in a Turing machine are all considered.
Paul Schweizer’s “Cognitive Computation sans Representation” addresses what
he considers to be the deep incompatibility between the computational and the
representational theories of mind (CTM vs. RTM). Attempts to join them, he argues,
are destined to fail because the unique representational content claimed by RTM is
superfluous to the formal procedures of CTM. In a computing mechanism, after
syntactic encoding “it’s pure syntax all the way down to the level of physical
implementation.”

In “Software Error as a Limit to Inquiry for Finite Agents: Challenges for the
Post-human Scientist” John F. Symons and Jack K. Horner revisit C.S. Peirce’s
intuition about truth being the output of inquiry in the limit by considering the
limits for the future of software-driven science. They use their previously-developed
concept of a finite post-human agent, constrained “only by mathematics and the
temporal finiteness of the universe,” to argue that greater use of software or other
rule-governed procedures will lead to decreased ability to control for error in
scientific inquiry.
Concerns about a post-human world are also at the center of Selmer and
Alexander Bringsjord’s “The Singularity Business: Toward a Realistic, Fine-grained
Economics for an AI-Infused World.” Here they are interested literally in business
and economic questions that will arise in a future with “high-but-sub-human-level
artificial intelligence.” They investigate the implications of an artificial intelligence
that is impressive and useful, from the standpoint of contemporary science and
engineering, but falls well short of the awareness or self-consciousness promised
by The Singularity. They argue that even this “minimal” artificial intelligence will
have tremendous labor market and social implications, arising from the tasks to
which it can be set.
In “Artificial Moral Cognition: Moral Functionalism and Autonomous Moral
Agency” by Don Howard and Ioan Muntean, we find a model of an Artificial
Autonomous Moral Agent (AAMA) that will engage in moral cognition by learning
moral-behavioral patterns from data. Unlike a rule-following autonomous agent,
their AAMA will be based on “soft computing” methods of neural networks and
evolutionary computation. They conceive of the resulting agent as an example of
virtue ethics for machines, having a certain level of autonomy and complexity, and
being capable of particularistic moral judgments. They apply the concept of the
AAMA to Hardin’s metaphor of the “lifeboat” in the ethics of choice with limited
resources.
Similarly, Shannon Vallor takes a virtue ethics approach to questions of machine
and robotic ethics in her “AI and the Automation of Wisdom.” Like the Bringsjords,
Vallor is concerned with economic, political, and technological implications—here,
primarily, to the future development of human wisdom in a world that may seem
to eschew it. Knowledge and skills in the coming workforce will change to meet
societal needs, as they always have, but more troubling is the threat posed by an
“algorithmic displacement of human wisdom.” Drawing on sources from Aristotle
to the present, she forewarns of a weakened ability to rise to our environmental,
civic, and political responsibilities.
Mario Verdicchio presents “An Analysis of Machine Ethics from the Perspective
of Autonomy” in order to urge a return to what he calls classic ethics to guide
researchers in machine ethics. He rejects the call for a new discipline of ethics for
machines—one that would focus on embedding ethical principles into programs. He
argues instead that industry-driven standards for robotic safety are sufficient, and
that nothing in robotics presents a fundamental challenge to the ethics of design;
rather, new machine capabilities show us why we ought to focus on the traditional
(classic) ethics that guide human choices. While he acknowledges that cutting-edge
robots may have a higher degree of autonomy than those in the past, he does not
think that such autonomy is sufficient to require a new ethics discipline.
We turn to questions of research ethics in the era of Big Data with “Beyond
Informed Consent: Investigating Ethical Justifications for Disclosing, Donating or
Sharing Personal Data in Research” by Markus Christen, Josep Domingo-Ferrer,
Dominik Herrmann, & Jeroen van den Hoven. In this essay the authors consider
how the modern digital research ecosystem challenges notions of informed consent,
control of personal data, and protection of research subjects from unintended effects
of research in a rapidly changing social and scientific landscape. They develop
arguments around three core values—autonomy, fairness and responsibility—to
show how an active community of research participants can be educated through
and involved in research over time. Such a community would enable user-centric
management of personal data, including generation, publication, control, exploita-
tion, and self-protection.
Soraj Hongladarom turns to ontological concerns in “Big Data, Digital Traces
and the Metaphysics of the Self.” This investigation begins with a conception of
self and identity that depends on one’s Internet activity. Identity is constituted by
“how one leaves her traces digitally on the Internet.” This view borrows from
the well-known “extended mind” thesis of Chalmers and Clark and issues in
a conception of the (digitally) extended self. Hongladarom suggests that these
traces—the distributed parts of the ontologically-whole self—nonetheless belong
to the owner. Thus, they deserve protection and generate strong claims of privacy
and respect for individuals.
In the final essay of this volume, “Why Big Data Needs the Virtues” by Frances
S. Grodzinsky, we encounter an analysis of Big Data and its value, with an argument
on how it can be harnessed for good. Grodzinsky starts with an account of Big Data’s
volume, velocity, variety, and veracity. She goes on to critique claims that statistical
correlations in Big Data are free of theory, ready to be gleaned from data sets. Turning
to the ethics of the "Big Data Scientist," she sketches a virtuous epistemic agent
who incorporates both virtue ethics and virtue epistemology. Such an agent will be
well placed, she believes, to act responsibly in using Big Data for socially beneficial
ends.
Through analyses of the foregoing issues, the philosophical work in these
chapters promises to clarify important questions and help develop new lines of
research. It is hoped that readers will find much of value in these intersecting
traditions in philosophy and computing.
Chapter 2
Levels of Computational Explanation

Michael Rescorla
Department of Philosophy, University of California, Los Angeles, CA, USA
e-mail: rescorla@ucla.edu

Abstract It is widely agreed that one can fruitfully describe a computing system
at various levels. Discussion typically centers on three levels: the representational
level, the syntactic level, and the hardware level. I will argue that the three-
level picture works well for artificial computing systems (i.e. computing systems
designed and built by intelligent agents) but less well for natural computing systems
(i.e. computing systems that arise in nature without design or construction by
intelligent agents). Philosophers and cognitive scientists have been too hasty to
extrapolate lessons drawn from artificial computation to the much different case
of natural computation.

Keywords Levels of explanation • Representation • Syntax • The computational theory of mind • Intentionality • Functionalism • Abstraction • Bayesianism

2.1 Representation, Syntax, and Hardware

It is widely agreed that one can fruitfully describe a computing system at various
levels. Discussion typically centers on three levels that I will call the represen-
tational level, the syntactic level, and the hardware level. To illustrate, consider
a computer programmed to perform elementary arithmetical operations such as
addition, multiplication, and division:
– At the representational level, we individuate computational states through their
representational properties. For instance, we might say that our computer divides
the number 2 into the number 5 to yield remainder 1. This description implicitly
presupposes that the computer’s states represent specific numbers.
– At the syntactic level, we individuate computational states non-representationally.
We describe our computer as manipulating numerals, rather than performing
arithmetical operations over numbers. For example, we might say that the
computer performs certain syntactic operations over the numerals “2” and “5”

M. Rescorla ()
Department of Philosophy, University of California, Los Angeles, CA, USA
e-mail: rescorla@ucla.edu

© Springer International Publishing AG 2017 5


T.M. Powers (ed.), Philosophy and Computing, Philosophical Studies Series 128,
DOI 10.1007/978-3-319-61043-6_2
6 M. Rescorla

and then outputs the numeral “1.” When offering this description, we do not
presuppose that the computer’s states represent numbers.
– At the hardware level, we describe the physical realization of computational
states. We specify our computer’s components, how those components are
assembled, and how the computer’s physical state evolves according to well-
understood physical laws.
A three-level picture along these lines figures prominently in many philosophical
and scientific discussions (Chalmers 2011, 2012; Fodor 1981, 1987, 1994, 2008;
Haugeland 1985; Pylyshyn 1984).
I will argue that the three-level picture works well for artificial computing
systems (i.e. computing systems designed and built by intelligent agents) but
less well for natural computing systems (i.e. computing systems that arise in
nature without design or construction by intelligent agents). Philosophers and
cognitive scientists have been too hasty to extrapolate lessons drawn from artificial
computation to the much different case of natural computation. I discuss artificial
computation in Sects. 2 and 3 and natural computation in Sect. 4. I compare the two
cases in Sect. 5.

2.2 Representational Description of Artificial Computation

Hardware description figures indispensably within computing practice. Ultimately,
we must describe the materials from which a machine is to be built, the way those
materials are to be combined, the intended physical evolution of the machine, and
so on. Only then can we build the machine. A good hardware description serves as
a blueprint, specifying how to construct a physical system with desired properties.
Suitable hardware description is also needed for various modifications or repairs we
might make. These points are evident, so I will not discuss them further.
I focus on the representational and syntactic levels. I will argue that represen-
tational description illuminates a wide range of artificial computations (Sects. 2.1
and 2.2). I will then argue that syntactic description plays a key role in mediating
between representational description and physical construction of artificial systems
(Sect. 3).

2.2.1 Representation Elucidated

Researchers across philosophy, computer science (CS), and cognitive science use
the phrase “representation” in various ways. Following common philosophical
usage (e.g. Burge 2010, p. 9), I tie representation to veridicality-conditions. To
illustrate:

– Beliefs are evaluable as true or false. My belief that Barack Obama is president
is true if Barack Obama is president, false if he is not.
– Declarative sentences (e.g. “Barack Obama is president”) as uttered in specific
conversational contexts are likewise evaluable as true or false.
– Perceptual states are evaluable as accurate or inaccurate. A perceptual state that
represents presence of a red sphere is accurate only if a red sphere is before me.
– Intentions are evaluable as fulfilled or thwarted. My intention to eat chocolate is
fulfilled if I eat chocolate, thwarted if I do not eat chocolate.
Truth-conditions, accuracy-conditions, and fulfillment-conditions are species of
veridicality-conditions. Complex representations decompose into parts whose rep-
resentational properties contribute to veridicality-conditions. For example, the
truth-condition of “John loves Mary” is determined by the denotation of “John,” the
denotation of “Mary,” and the satisfaction-condition of “loves.” Representational
description invokes veridicality-conditions or representational properties that con-
tribute to veridicality-conditions.
I distinguish two ways that a system may come to have representational
properties: it may have representational properties at least partly by virtue of its
own activity; or it may have representational properties entirely because some
other system has imposed those properties upon it. For example, the human mind
has representational properties at least partly due to its own activity. In contrast,
words in a book represent entirely by virtue of their connection to our linguistic
conventions. The book does not contribute to representational properties of its
component words.
Philosophers commonly evoke this distinction using the labels original ver-
sus derived intentionality (Haugeland 1985) or intrinsic versus observer relative
meanings (Searle 1980). To my ear, these labels suggest that the main con-
trast concerns whether a system is solely responsible for generating its own
representational properties. Yet Burge (1982) and Putnam (1975) have argued
convincingly that the external physical and social environment plays a large role
in determining representational properties of mental states, so that not even the
mind is solely responsible for generating its own representational properties. I
prefer the labels indigenous versus inherited, which seem to me to carry fewer
misleading connotations. Representational properties of human mental states are
indigenous, because human mental activity plays at least some role in generating
representational properties of mental states. Representational properties of words
in a book are inherited, because the book plays no role in generating those
properties.
Are representational properties of artificial computing systems inherited or
indigenous? For the artificial computing systems employed in our own society, the
answer is usually “inherited.” For example, a simple pocket calculator represents
numbers entirely by virtue of our linguistic conventions regarding numerals. A sim-
ilar diagnosis applies to many far more sophisticated systems. Some philosophers
maintain that artificial computing machines in principle cannot have indigenous
representational properties (Searle 1980). I think that this position is implausible
and that existing arguments for it are flawed. I see no reason why a sufficiently
sophisticated robot could not confer representational properties upon its own inter-
nal states. We could equip the robot with sensors or motor organs, so that it causally
interacts with the external world in a suitably sophisticated way. So equipped, I
see no reason why the robot could not achieve indigenous representation of its
external environment. Whether any actual existing artificial computing systems have
indigenous representational properties is a trickier question that I set aside.

2.2.2 The Value of Representational Description

To what extent do we illuminate an artificial computing system by citing its
representational properties (whether those properties are inherited or indigenous)?
We often want to compute over a non-linguistic domain. Consider the Euclidean
algorithm for computing greatest common divisors (gcds). The algorithm features
arithmetical operations over natural numbers (Rescorla 2013a). Numbers cannot
be directly instantiated inside a computing system. Rather, the computing system
must instantiate numerals that denote natural numbers. More generally, a system
can compute over non-linguistic items only if the system represents those items
(Rescorla 2015c). When we describe a machine as computing over a non-linguistic
domain, we presuppose (at least implicitly) that the machine’s states represent
elements of the domain.
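The point can be made concrete with a minimal illustrative sketch (Python is used here purely for concreteness; the chapter itself mentions LISP and register-machine formulations instead). Described at the representational level, the Euclidean algorithm individuates its states as natural numbers and its basic step as arithmetical division with remainder:

    def gcd(m: int, n: int) -> int:
        """Euclidean algorithm, described representationally: the states are numbers,
        and the basic step is division with remainder."""
        while n != 0:
            m, n = n, m % n   # replace (m, n) by (n, remainder of m divided by n)
        return m

    gcd(12, 8)   # evaluates to 4

Nothing in this description yet says how a physical machine is to instantiate the numbers it mentions; that is the gap which syntactic description, discussed below, is meant to fill.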
Non-linguistic domains studied within computability theory typically contain
mathematical entities (e.g. natural numbers). Real-world applications just as often
involve computation over a non-linguistic, non-mathematical domain. We might
want a smartphone that computes the fastest route from one location to another;
or a library catalogue system that allows users to recall library books; or a robot that
estimates current position based on sonar and odometer readings; or a program that
retrieves an individual’s city of birth from a database; and so on. Each computation
is defined at least partly over a non-linguistic, non-mathematical domain. To
understand the computation, we must describe it as representing locations, books,
people, cities, and so on. Representational description helps us articulate why we
built the machine and what function it serves in our broader practices.
Some philosophers assert or intimate that representational description is rela-
tively unimportant to computing practice (Chalmers 2011; Piccinini 2008, 2009).
Why not instead employ syntactic descriptions? Rather than say that a machine
computes the gcd of two numbers, why not say that it executes a syntactic operation
over numerals? Rather than say that a machine retrieves an individual’s city of
birth, why not say that the machine executes appropriate operations over names?
Doesn’t the representational level become superfluous once we isolate an underlying
syntactic description?
I find such proposals jarringly divergent from actual practice within
computability theory, CS, robotics, industry, and everyday life. In all these spheres,
we are often primarily concerned with representational aspects of computation. The
representational level is not superfluous, because it captures the primary purpose
served by underlying syntactic operations. Representational description is needed to
articulate the main reason why we built the machine in the first place: computation
over a non-linguistic domain.
Accordingly, representational relations between computing systems and repre-
sented domains figure crucially within scientific inquiry. Three examples:
– Computability theory studies computational properties of notations for various
non-linguistic domains (Rescorla 2015c). Different notations embody different
ways of representing the domain. For instance, computable analysis studies
which operations over real numbers are computable relative to decimal notation
and which are computable relative to alternative notations (Weihrauch 2000).
– Computer science offers rigorous computational models that describe computa-
tions in representational terms (Rescorla 2013a, 2014c). For example, one can
codify the Euclidean algorithm as a LISP program or as a register machine
program (Abelson et al. 1996, p. 49, p. 497). The resulting programs describe
computation over natural numbers, just as the pre-theoretic Euclidean algorithm
does. So the programs individuate computational states partly through their
representational properties.
– Probabilistic robotics delineates Bayesian algorithms describing how a robot
navigates through its environment (Thrun et al. 2005). These algorithms presup-
pose that the robot maintains an internal map that represents spatial aspects of the
environment (Rescorla 2009). The algorithms dictate how to update probabilities
over maps in light of sensor measurements and odometer readings.
We could not retain computability theory, computer science, or probabilistic
robotics in anything resembling their current forms if we were to jettison repre-
sentational description in favor of syntactic description.
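To give a schematic sense of the robotics example, a Bayesian update over a small discrete set of candidate maps looks like the following toy sketch (the hypothesis names, prior probabilities, and likelihood values are hypothetical, chosen only for illustration):

    # Hypothetical prior probabilities over three candidate maps of the environment.
    priors = {"map_A": 0.5, "map_B": 0.3, "map_C": 0.2}

    # Assumed likelihood of the latest sensor reading under each candidate map.
    likelihoods = {"map_A": 0.1, "map_B": 0.6, "map_C": 0.3}

    def bayes_update(priors, likelihoods):
        """Posterior(h) is proportional to likelihood(h) * prior(h), renormalized."""
        unnormalized = {h: likelihoods[h] * p for h, p in priors.items()}
        total = sum(unnormalized.values())
        return {h: weight / total for h, weight in unnormalized.items()}

    posterior = bayes_update(priors, likelihoods)   # "map_B" is now most probable

Real systems of the kind Thrun et al. (2005) describe work with vastly larger hypothesis spaces and continuous densities, but the point stands: the hypotheses being updated are individuated as maps of the environment, i.e. representationally.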
Representational description usefully characterizes not just inputs and outputs to
computation but also internal states. Suppose we describe a machine as executing
the Euclidean algorithm. In doing so, we describe the machine as repeatedly
executing the division operation and hence as representing a series of numbers
intermediate between input and output (Rescorla 2013a). Or suppose we describe
a robot as updating probabilities over maps. The robot’s updates involve internal
representational states—probability assignments to maps that represent the envi-
ronment. In these examples, and many others, we want our machine to transit
appropriately between internal computational states with suitable representational
properties.
I do not say that representational description illuminates all artificial computing
systems. I say only that it illuminates many artificial computing systems that figure
prominently in science and everyday life.

2.3 Syntactic Description of Artificial Computation

There has been considerable philosophical discussion surrounding the notion of
"syntax." I will assume that syntactic description has at least two properties:
– Syntactic description is non-representational, i.e. it does not individuate compu-
tational states through their representational properties. If we say that a computer
stores the numeral “13” in some memory register, then we have offered a non-
representational description, because the numeral “13” might have had different
representational properties (or no representational properties at all) depending on
the surrounding computational, physical, or social environment.
– Syntactic description is multiply realizable in Putnam’s (1975) sense: physical
systems with wildly heterogeneous physical properties may satisfy a given
syntactic description. Because syntactic description is multiply realizable, it is
much more abstract than hardware description.
Syntactic descriptions with these two properties figure prominently in computing
practice. Consider a Turing machine that manipulates stroke marks on a machine
tape; or a system of logic gates that manipulate “0”s and “1”s; or a desktop
computer that compiles a high-level programming language into machine code.
In each case, we can describe how the machine manipulates syntactic items while
ignoring representational and physical details.
Philosophers commonly gloss syntax in functionalist terms (Chalmers 2011;
Field 2001, pp. 56–57; Fodor 1994, pp. 108–110; Stich 1983, pp. 149–151):
syntactic states are individuated through their characteristic relations to one another
and to computational inputs and outputs. Chalmers develops the functionalist
conception using the notion of causal topology: “the pattern of interaction among
parts of the system, abstracted away from the make-up of individual parts and from
the way the causal connections are implemented” (Chalmers 2011, p. 337). On
Chalmers’s view, syntactic description specifies a causal topology. It thereby con-
strains abstract causal structure but not physical details. A variant functionalist view
allows syntactic description to constrain both abstract causal structure and physical
aspects of inputs and outputs (e.g. geometric shapes of a desktop computer’s inputs
and outputs; physical properties of a robot’s sensor inputs and motor outputs).
In everyday computing practice, we are often primarily concerned with compu-
tation over syntactic entities. We describe a computation by decomposing it into
elementary syntactic operations (e.g. moving a word from one memory register
to another) that transform syntactic inputs into syntactic outputs. For example, a
compiler carries syntactic inputs (source code in a high-level programming lan-
guage) into syntactic outputs (machine language code) through a series of syntactic
manipulations. Typically, an artificial computing system falls under multiple levels
of syntactic description. Syntactic description of a desktop computer is organized in
a hierarchy, ranging from logic gates to machine code to assembly code to a high-
level programming language. As we ascend the hierarchy, we describe progressively
more abstract aspects of syntactic processing.
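As a toy illustration (the expression format and instruction names below are hypothetical, not drawn from any particular compiler), the kind of purely syntactic transformation a compiler effects can be sketched in a few lines: a nested expression goes in, a list of stack-machine instructions comes out, and nothing in the description appeals to what the symbols denote.

    def compile_expr(expr):
        """Carry one syntactic item (a nested tuple for an arithmetic expression)
        into another syntactic item (a list of postfix stack-machine instructions)."""
        if isinstance(expr, str):                  # a variable name: emit a load
            return [("LOAD", expr)]
        op, left, right = expr                     # e.g. ("+", "x", ("*", "y", "z"))
        return compile_expr(left) + compile_expr(right) + [(op,)]

    compile_expr(("+", "x", ("*", "y", "z")))
    # [("LOAD", "x"), ("LOAD", "y"), ("LOAD", "z"), ("*",), ("+",)]

Each stage in the hierarchy just described can be viewed this way: syntactic items in, syntactic items out, with representational and hardware questions bracketed.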
Even when computer scientists are most fundamentally interested in
representational aspects of computation, syntactic description plays a pivotal role.
Suppose we want to build a machine that executes some computation as described
in representational terms. Representational description, taken on its own, does not
specify how to build such a machine. Even if we know how we want our machine
to transit between representational states, we may have little idea how to build
a machine that so transits. As Chalmers (2012, p. 245) puts it, “[o]ne cannot go
straight from representational explanation to building a mechanism; one has some
hard working to do in figuring out the right mechanism.” How do we ensure that
our machine transits as desired between representational states? How do we build
a machine that reliably transits from a computational state that bears some relation
to the represented domain to a computational state that bears some other desired
relation to the represented domain? For example, suppose we want a machine that
executes the Euclidean algorithm. How do we ensure that our machine divides
numbers into one another as the algorithm requires? Representational description
by itself does not supply anything like a workable blueprint for a physical machine.
Turing’s (1936) brilliant solution: supplement representational description with
syntactic description. To build a machine that transits appropriately between rep-
resentational states, we build a machine that manipulates syntactic items. Suitable
manipulation of syntactic items endowed with suitable representational properties
ensures that the machine satisfies our desired representational description. To build
a machine that executes the Euclidean algorithm, we isolate an algorithm for
manipulating numerals. Assuming that the numerals denote suitable numbers, the
machine thereby computes gcds.
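A deliberately low-level sketch (illustrative only; the stroke-string encoding below is a stand-in for Turing-machine tape symbols) shows what such numeral manipulation can look like: a subtraction-form Euclidean procedure defined over unary numerals, i.e. strings of stroke marks. The description mentions only strings and string operations; yet if each string is interpreted as denoting the number of strokes it contains, the procedure computes gcds.

    def shorter_first(a: str, b: str):
        """Order two stroke-strings by length using only string operations."""
        x, y = a, b
        while x and y:                 # strike one stroke off each until one is exhausted
            x, y = x[1:], y[1:]
        return (a, b) if not x else (b, a)

    def gcd_strokes(a: str, b: str) -> str:
        """Subtraction-form Euclidean procedure over unary numerals ("111" for 3)."""
        while a and b:
            a, b = shorter_first(a, b)
            b = b.replace(a, "", 1)    # delete one copy of the shorter numeral from the longer
        return a or b

    gcd_strokes("111111", "1111")      # returns "11"

Whether these strings denote numbers in unary notation, or something else, or nothing at all, makes no difference to the procedure itself; that is what makes the description syntactic rather than representational.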
Syntactic description carries us much closer than representational description to
a workable blueprint for a physical machine. In principle, we know how to build
a machine that executes iterated elementary syntactic operations over syntactic
items. This is especially true for low-level syntactic descriptions, such as logic
gate descriptions or machine language descriptions. It is also true for more abstract
syntactic descriptions, such as a LISP program that specifies manipulation of list
structures. Syntactic description helps us design and construct machines in a way
that representational description does not.
Why exactly does syntactic description carry us closer than representational
description to a workable blueprint? Because we do not know helpful sufficient
conditions for a machine to instantiate desired representational properties:
– Indigenous representational properties typically depend upon complex causal
interactions between the physical system and its surrounding environment—
causal interactions that we are currently unable to specify in an informative way.
Even when the represented domain is mathematical, we do not have a good
theory describing what it takes for the system to bear appropriate relations to
the represented domain. We have nothing like a useful blueprint for ensuring that
a machine has suitable indigenous representational properties.
– Inherited representational properties may initially seem less problematic, since
it is easy enough to stipulate that a machine’s internal states have certain
representational properties. However, we cannot ensure that other users of some
machine will make the same stipulations. Once we release a machine into the
wild, we have little control over which representational properties other people
bestow upon it. Thus, we are not able to provide anything like a useful blueprint
for ensuring that a machine has suitable inherited representational properties.
Syntactic description avoids these problems. By focusing solely on “intrinsic”
aspects of computation, without seeking to ensure that computational states bear
appropriate relations to represented entities, syntactic description carries us much
closer to a workable blueprint for a physical system.
Hardware description likewise supplies a workable blueprint. As Chalmers
(2012) emphasizes, though, it includes numerous details that are irrelevant for many
purposes. When designing or modifying a computing machine, we often do not care
about the exact physical substrate that implements, say, memory registers. We would
like a workable blueprint that prescinds from irrelevant hardware details. Syntactic
description fulfills this desideratum. As Chalmers (2012, p. 245) puts it, syntactic
description “yields a sweet spot of being detailed enough that a fully specified
mechanism is provided, while at the same time providing the minimal level of detail
needed for such a mechanism.”
Chalmers’s analysis illuminates why syntactic description figures so centrally
within computing practice. Even when we are mainly concerned with representa-
tional aspects of computation, syntactic description helps us build a physical system
that transits appropriately between representational states. Syntactic description
helps because it is non-representational (so that it furnishes a workable blueprint)
and multiply realizable (so that it suppresses irrelevant hardware details).
Chalmers’s analysis is closely related to abstraction, a common technique in
computer science (Abelson et al. 1996, pp. 83–89). Abstraction is suppression of
low-level implementation detail. For example, one might model manipulation of
list structures without specifying how list structures are implemented by memory
registers. Abstraction has several virtues:
– Abstraction helps us manage the enormous complexity of typical computing
systems. Designing and modifying complex systems is much easier when we
ignore details that do not bear upon our current design goals.
– Abstraction increases flexibility, allowing us to remain non-committal about how
exactly we will implement our high-level description. Flexibility is important if
we are not sure which low-level implementation is best, or if we want to permit
different implementation details at some future date.
The advantages of syntactic description over hardware description are a special
case of the general pressure toward abstraction. Good computer design manages
complexity and promotes flexibility by suppressing irrelevant hardware details
whenever possible.
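A small sketch of the kind of abstraction at issue (the interface and both implementations below are hypothetical, chosen only for illustration): client code written against an abstract interface neither knows nor cares which low-level realization it is given.

    from abc import ABC, abstractmethod

    class Stack(ABC):
        """Abstract interface: clients rely only on push and pop, not on storage details."""
        @abstractmethod
        def push(self, item): ...
        @abstractmethod
        def pop(self): ...

    class ListStack(Stack):
        """One realization, backed by a Python list."""
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    class LinkedStack(Stack):
        """A different realization, backed by linked (head, rest) pairs."""
        def __init__(self):
            self._head = None
        def push(self, item):
            self._head = (item, self._head)
        def pop(self):
            item, self._head = self._head
            return item

    def reverse(items, stack: Stack):
        """Client code defined purely at the abstract level."""
        for item in items:
            stack.push(item)
        return [stack.pop() for _ in items]

    reverse([1, 2, 3], ListStack())     # [3, 2, 1]
    reverse([1, 2, 3], LinkedStack())   # [3, 2, 1]

Swapping one realization for the other changes nothing at the abstract level, which is precisely the complexity management and flexibility just described.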
I conclude that syntactic description advances our pragmatic computing goals
in a distinctive way that representational description and hardware description
do not. Syntactic description helps us design and build physical machines that
implement representationally-specified computations. It plays a crucial role in
mediating between representational description and physical construction of arti-
ficial computing machines.1

2.4 Natural Computing Systems

By a “natural system,” I mean one that arises in nature without design or oversight
by intelligent agents. Whether a system counts as “natural” is a matter of its etiology,
not its material constitution. A computing system constructed by humans from DNA
or other biochemical material is not “natural,” because it is an artifact. A silicon-
based creature that evolved through natural selection on another planet counts as
“natural,” even though it is not constructed from terrestrial biochemical materials.
According to the computational theory of mind (CTM), the mind is a computing
system. Classical CTM holds that the mind executes computations similar to those
executed by Turing machines (Fodor 1975, 1987, 1994, 2008; Gallistel and King
2009; Putnam 1975; Pylyshyn 1984). Connectionist CTM models mental activity
using neural networks (Horgan and Tienson 1996; Ramsey 2007; Rumelhart et al.
1986). Both classical and connectionist CTM trace back to seminal work of McCul-
loch and Pitts (1943). In (Rescorla 2015b), I surveyed classical, connectionist,
and other versions of CTM. For present purposes, I do not assume any particular
version of CTM. I simply assume that the mind in some sense computes. Under
that assumption, it makes sense to talk about “natural computing systems.” We may
therefore ask how Sect. 1’s levels of description apply to natural computation—
specifically, mental computation.2
Hardware description is vital to the study of mental computation. Ultimately, we
want to know how neural tissue physically realizes mental computations. Everyone
agrees that a complete cognitive science will include detailed hardware descriptions
that characterize how neural processes implement mental activity. Unfortunately,
satisfactory hardware descriptions are not yet available. Although we know quite a
lot about the brain, we still do not know how exactly neural processing physically
realizes basic mental activities such as perception, motor control, navigation,
reasoning, decision-making, and so on.

1. When representational properties are inherited rather than indigenous, syntactic description offers further advantages over representational description. I argue in (Rescorla 2014b) that inherited representational properties of computational states are causally irrelevant: one can freely vary inherited representational properties without altering the underlying syntactic manipulations, so representational properties do not make a difference to the computation. Representational description does not furnish genuinely causal explanations of a system whose representational properties are all inherited. No such analysis applies to a computational system whose representational properties are indigenous. In that case, I claim, representational properties can be causally relevant (Rescorla 2014b).
2. For purposes of this paper, "mental computation" indicates computation by a natural system with a mind. I leave open the possibility that an artificial system (such as a sophisticated robot) might also have a mind.
What about representational and syntactic description? Will these also figure in
any complete cognitive science? I discuss representation in Sect. 4.1 and syntax in
Sect. 4.2.

2.4.1 Representational Description of Mental Computation

Traditionally, philosophers have emphasized the mind's representational capacity as
one of its most important features. Perception, motor control, navigation, decision-
making, language acquisition, problem solving, and many other core mental
activities crucially involve representational mental states. For example:
– Perception. Perceptual states represent the environment as being a certain way.
They represent shapes, sizes, colors, locations, and other properties of distal
objects. They are evaluable as accurate or inaccurate, depending on whether
perceived objects actually have the represented distal properties (Burge 2010;
Peacocke 1992).
– Motor control. The motor system transforms intentions into motor commands.
When all goes well, the resulting motor commands promote fulfillment of the
operative intention. For example, if I form an intention to pick up a nearby
ball, then my motor system issues motor commands that hopefully result in my
picking up the ball.
– Navigation. We routinely navigate through the physical environment. In many
cases, we do so by estimating the environment’s spatial layout and by planning a
route to some destination (Evans 1982). Estimates of spatial layout are evaluable
as accurate or inaccurate. Representations of my desired destination are evaluable
as fulfilled or thwarted, depending on whether I reach the destination.
Perception, motor control, and navigation crucially involve mental states with
veridicality-conditions. So do numerous other core mental processes.
Cognitive scientists offer explanatorily successful theories that describe mental
activity in representational terms:
– Perceptual psychology studies how the perceptual system transits from proximal
sensory stimulations (e.g. retinal stimulations) to perceptual states that estimate
shapes, sizes, colors, locations, and other distal properties (Palmer 1999).
Perceptual modeling individuates perceptual states through their representational
properties—as estimates of specific distal shapes, sizes, locations, and so on
(Burge 2010; Rescorla 2015a).
– Sensorimotor psychology studies how the motor system converts intentions into
motor commands that promote fulfillment of those intentions (Rosenbaum 2002;
Shadmehr and Mussa-Ivaldi 2012). The science presupposes that individuals
form intentions with fulfillment-conditions (Jeannerod 2006; Pacherie 2006;
Rescorla 2016a).
– Beginning with Tolman (1948), many cognitive psychologists have postulated
that mammals navigate using cognitive maps (Gallistel 1990; O’Keefe and
Nadel 1978). Mammals update their cognitive maps based on sensory input and
self-motion cues. Cognitive maps represent spatial aspects of the environment,
including landmark locations as well as the individual’s own current location
(Rescorla 2009, in press).
In these areas, and many others, cognitive science describes how representational
mental states interact with one another, with sensory inputs, and with motor outputs.
A psychological theory that cites representational aspects of mentality is often called
intentional psychology.
Recently, Bayesian cognitive science has elevated intentional psychology to new
heights of mathematical rigor, precision, and explanatory power. The basic idea is
to model mental activity using tools of Bayesian decision theory:
– According to Bayesian perceptual psychology, the perceptual system executes
an unconscious Bayesian inference from proximal sensory stimulations to per-
ceptual states that estimate distal conditions (Feldman 2015; Knill and Richards
1996; Rescorla 2015a).
– According to Bayesian sensorimotor psychology, the sensorimotor system selects
motor commands through unconscious Bayesian inference and decision-making
(Bays and Wolpert 2007; Rescorla 2016a).
– Bayesian models of navigation posit Bayesian updating over cognitive maps that
represent the spatial environment (Cheng et al. 2007; Madl et al. 2014; Madl
2016; Penny et al. 2013; Rescorla 2009).
On a Bayesian approach, the individual (or her subsystems) assigns probabilities to
“hypotheses” drawn from a hypothesis space. Bayesian models typically individuate
hypotheses in representational terms—as representations of specific distal shapes,
sizes, colors, locations, and so on. Bayesian cognitive science describes how
representational mental states (probability assignments to hypotheses that represent
the world) interact with one another, with sensory inputs, and with motor outputs.
The past century has witnessed successive waves of anti-representationalist
sentiment. Advocates of behaviorism (Skinner 1938), connectionism (Ramsey
2007), eliminative materialism (Churchland 1981; Quine 1960; Stich 1983),
interpretivism (Davidson 1980; Dennett 1971), embodied cognition (van Gelder
1992), and dynamical systems theory (Chemero 2009) frequently reject intentional
psychology as unscientific, unconfirmed, unexplanatory, vacuous, or otherwise
problematic. Anti-representationalists recommend that scientific psychology
eschew representational discourse, relying instead upon stimulus-response
psychology, or neuroscience, or some other non-representational scientific
framework. In many cases, anti-representationalists launch highly abstract
philosophical critiques of intentional psychology (Dennett 1971; Quine 1960;
Stich 1983). I think that anti-representationalism has dramatically failed. Anti-
representational theories have repeatedly shown themselves unequipped to explain
even very basic mental phenomena that intentional psychology readily explains.
Abstract philosophical critiques of intentional psychology tend to be much less
convincing than the representationalist theorizing they purportedly undermine.
I henceforth assume that intentional psychology illuminates perception, motor
control, navigation, decision-making, and many other core mental phenomena. We
reap substantial explanatory benefits by describing these phenomena in representa-
tional terms.

2.4.2 Syntactic Description of Mental Computation

Should syntactic description of mental activity likewise play an important role in
cognitive science?
Fodor (1975, 1987, 1994, 2008) holds that mental computation manipulates items
drawn from the language of thought—an internal system of mental representations.
Mental representations have formal syntactic properties, i.e. properties individuated
without regard to representational import. Mental computation is sensitive to formal
syntactic properties but not representational properties. Fodor holds that a complete
scientific psychology should delineate intentional laws, which describe how mental
states as individuated representationally interact with one another, with sensory
inputs, and with motor outputs. Intentional laws are implemented by computations
describable in syntactic terms. On Fodor’s picture, syntactic manipulation of mental
representations ensures that mental computation transits appropriately between
representational mental states. Fodor also recognizes that any complete cognitive
science will assign a prominent role to neuroscientific description. In this way, he
applies Sect. 1’s three-level picture to mental computation. Representational mental
activity is implemented by syntactic manipulations, which are physically realized
by neurophysiological processes.
Chalmers (2011, 2012) espouses a similar three-level picture of mental activ-
ity, although he places less emphasis than Fodor on representational aspects of
psychological explanation. Field (2001) and Stich (1983) embrace the syntactic
and hardware levels while rejecting the representational level. They hold that
cognitive science should describe mental computation syntactically while ignoring
representational aspects of mentality. Fodor, Chalmers, Field, and Stich all agree
that syntactic description of mental computation should figure crucially within any
complete cognitive science.3

3. Piccinini (2015) assigns a central role to non-representational, multiply realizable descriptions of artificial and natural computation, including mental computation. He declines to call these descriptions "syntactic." Nevertheless, the worries developed below regarding syntactic description of mental computation also apply to Piccinini's approach. For further discussion, see (Rescorla 2016b).

Fodor (1981, 2008) maintains that cognitive science already prioritizes syntactic
description of mental activity. I disagree. Contrary to what Fodor suggests, formal
syntactic description does not figure in current scientific theorizing about numer-
ous mental phenomena, including perception, motor control, deductive inference,
decision-making, and so on (Rescorla 2012, 2014b; 2017). Bayesian percep-
tual psychology describes perceptual inference in representational terms rather
than formal syntactic terms (Rescorla 2015a). Bayesian sensorimotor psychology
describes motor control in representational terms rather than formal syntactic
terms (Rescorla 2016a). There may be some areas where cognitive science offers
syntactic explanations. For example, certain computational models of low-level
insect navigation look both non-neural and non-representational (Rescorla 2013b).
But formal syntactic description is entirely absent from many core areas of cognitive
science.
Plausibly, one always can describe mental activity in syntactic terms. The ques-
tion is whether one thereby gains any explanatory benefits. There are innumerable
possible ways of taxonomizing mental states. Most taxonomic schemes offer no
explanatory value. For instance, we can introduce a predicate true of precisely those
individuals who believe that snow is white or who want to drink water. However, it
seems unlikely that this disjunctive predicate will play any significant explanatory
role within a finished cognitive science. Why expect that syntactic taxonomization
will play any more significant a role?
To focus the discussion, consider Chalmers’s functionalist conception of syntax.
Given a true representational or neurophysiological theory of a mental process,
we can abstract away from representational and neural details to specify a causal
topology instantiated by the process. But why suspect that we thereby gain any
explanatory benefits? We can abstract away from a true scientific theory of any
phenomenon to specify a causal topology instantiated by the phenomenon. In most
cases, we do not thereby improve our explanations of the target phenomenon.
Here is a non-psychological example. The Lotka-Volterra equations are first-
order nonlinear differential equations used in ecology to model simple predator-prey
systems (Nowak 2006):

\[
\frac{dx}{dt} = x\,(a - b\,y), \qquad \frac{dy}{dt} = y\,(d\,x - c) \tag{LV}
\]

where x is prey population level, y is predator population level, t is time, ax is prey reproduction rate, bxy is the rate at which predators eat prey, cy is predator death rate, and dxy is predator reproduction rate. Lotka (1910) introduced LV in
order to model oscillating chemical reactions. Researchers have subsequently used
LV to model epidemics (Kermack and McKendrick 1927), economic interaction
(Goodwin 1967), combustion (Semenov 1935), and other unrelated phenomena.
So LV applies not just to ecological systems but also to diverse non-ecological
systems provided that we reinterpret x and y as suitable non-ecological variables.
These diverse systems instantiate the same causal topology. We can specify their
shared causal topology more explicitly by taking LV’s Ramsey sentence, thereby
suppressing all ecological details.
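
For readers unfamiliar with the device, here is a rough gloss (a simplification of my own, not a formulation drawn from the works cited): the Ramsey sentence of a theory is obtained by replacing its interpreted theoretical terms with variables and existentially quantifying over them,

\[
T(\tau_1, \dots, \tau_n) \;\longmapsto\; \exists X_1 \cdots \exists X_n \, T(X_1, \dots, X_n).
\]

So read, the Ramsey sentence of LV says only that there exists some pair of time-varying quantities related as the equations dictate, while remaining silent on whether those quantities are population levels, chemical concentrations, or economic variables. That existential claim amounts to a specification of the shared causal topology.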
Ecologists explain predator/prey population levels by using LV where x and
y are interpreted as prey and predator population levels. We do not improve
ecological explanation by noting that LV describes some chemical or economic
system when x and y are reinterpreted as chemical or economic variables, or
by supplementing LV with LV’s Ramsey sentence.4 What matters for ecological
explanation are the ecological interactions described by LV, not the causal topology
obtained by suppressing ecological details. That some ecological system shares
a causal topology with certain chemical or economic systems is an interesting
coincidence, not an explanatorily significant fact that illuminates population levels.
The causal topology determined by LV is not itself explanatory. It is just a byproduct
of underlying ecological interactions described by LV when x and y are interpreted
as prey and predator population levels.
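
The point can be made concrete with a minimal computational sketch (the function name, parameter values, and integration scheme below are my own illustrative choices, not taken from any cited model). The routine that integrates LV is indifferent to what its state variables stand for; running it under an ecological and under a chemical reading is literally the same computation, which is why the reinterpretation adds nothing to the ecological explanation.

    def simulate_lv(a, b, c, d, x0, y0, dt=0.01, steps=10_000):
        """Forward-Euler integration of dx/dt = x(a - b*y), dy/dt = y(d*x - c)."""
        x, y = x0, y0
        trajectory = [(0.0, x, y)]
        for i in range(1, steps + 1):
            dx = x * (a - b * y)
            dy = y * (d * x - c)
            x, y = x + dt * dx, y + dt * dy
            trajectory.append((i * dt, x, y))
        return trajectory

    # Ecological reading: x = prey population level, y = predator population level.
    ecology = simulate_lv(a=1.0, b=0.1, c=1.5, d=0.075, x0=10.0, y0=5.0)

    # Chemical or economic reading: the very same call, with x and y reinterpreted.
    # Nothing in the code fixes the interpretation; the structure is identical.
    chemistry = simulate_lv(a=1.0, b=0.1, c=1.5, d=0.075, x0=10.0, y0=5.0)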
Cognitive science describes causal interactions among representational mental
states. By suppressing representational and neural properties, we can specify a
causal topology instantiated by mental computation. But this causal topology looks
like a mere byproduct of causal interactions among representational mental states.
In itself, it does not seem explanatory. Certainly, actual cognitive science practice
does not assign an explanatorily significant role to abstract descriptions of the
causal topology instantiated by perception, motor control, or numerous other mental
phenomena.
Philosophers have offered various arguments why cognitive science requires
syntactic description of mental activity. I will quickly address a few prominent
arguments. I critique these and other arguments more thoroughly in (Rescorla 2017).
Argument from Computational Formalism (Fodor 1981, p. 241; Gallistel
and King 2009, p. 107; Haugeland 1985, p. 106) Standard computational for-
malisms found in computability theory operate at the syntactic level. We can model
the mind as a computational system only if we postulate formal syntactic items
manipulated during mental computation.
Reply The argument misdescribes standard computational formalisms. Contrary to
what the argument maintains, many standard formalisms are couched at an abstract
level that remains neutral regarding the existence of formal syntactic items. We
can construe many standard computational models as defined over states that are
individuated representationally rather than syntactically. Computational modeling
per se does not require syntactic description. My previous writings have expounded
this viewpoint as applied to Turing machines (Rescorla 2017), the lambda calculus
(Rescorla 2012), and register machines (Rescorla 2013a).

4. See (Morrison 2000) for further examples along similar lines.

Argument from Causation (Egan 2003; Haugeland 1985, pp. 39–44) Repre-
sentational properties are causally irrelevant to mental activity. Thus, intentional
psychology cannot furnish causal explanations. We should replace or supplement
intentional psychology with suitable non-representational descriptions, thereby
attaining genuinely causal explanations of mental and behavioral outcomes.
Reply The argument assumes that representational properties are causally irrele-
vant to mental activity. This assumption conflicts with pre-theoretic intuition and
with scientific psychology (Burge 2007, pp. 344–362), which both assign represen-
tational aspects of mentality a crucial causal role in mental activity. We have no
good reason to doubt that representational properties are causally relevant to mental
activity. In (Rescorla 2014a), I argue that indigenous representational properties
of a computing system can be causally relevant to the system’s computations.
Since mental states have indigenous representational properties, it follows that
representational properties can be causally relevant to mental computation.
Argument from Implementation Mechanisms (Chalmers 2012; Fodor 1987,
pp. 18–19) We would like to describe in non-representational terms how the
mind reliably transits between representational mental states. In other words, we
would like to isolate non-intentional implementation mechanisms for intentional
psychology. We should delineate a syntactic theory of mental computation, thereby
specifying non-intentional mechanisms that implement transitions among represen-
tational mental states.
Reply I agree that we should isolate non-intentional implementation mechanisms
for intentional psychology. However, we can take the implementation mechanisms
to be neural rather than syntactic (Rescorla 2017). We can correlate representational
mental states with neural states, and we can describe how transitions among neural
states track transitions among representationally-specified states. As indicated
above, we do not yet know how to do this. We do not yet know the precise
neural mechanisms that implement intentional mental activity. In principle, though,
we should be able to isolate those mechanisms. Indeed, discovering the neural
mechanisms of cognition is widely considered a holy grail for cognitive science.
What value would mental syntax add to an eventual neural theory of implementation
mechanisms?
Argument from Explanatory Generality (Chalmers 2012) Syntactic description
prescinds from both representational and neural properties. Thus, it offers a degree
of generality distinct from intentional psychology and neuroscience. This distinctive
generality provides us with reason to employ syntactic description. In particular,
a syntactic theory of implementation mechanisms offers advantages over a neural
theory of implementation mechanisms by supplying a different degree of generality.
Reply The argument relies on a crucial premise: that generality is always an
explanatory virtue. One can disambiguate this premise in various ways, using
different notions of “generality.” I doubt that any disambiguation of the premise
will prove compelling. As Potochnik (2010, p. 66) notes, “Generality may be of
explanatory worth, but explanations can be too general or general in the wrong
way.” One can boast generality through disjunctive or gerrymandered descriptions
that add no explanatory value to one’s theorizing (Rescorla 2017; Williamson 2000).
To illustrate, suppose we want to explain why John failed the test. We might note
that
John did not study all semester.
Alternatively, we might note that
John did not study all semester or John was seriously ill.
There is a clear sense in which the second explanation is more general than
the first. Nevertheless, it does not seem superior. One might try to disbar such
counterexamples by saying that generality is a virtue when achieved in a non-
disjunctive or non-gerrymandered way.5 But then one would need to show that
syntactic description is itself non-disjunctive and non-gerrymandered, which carries
us back to the question whether syntactic description is explanatorily valuable.
Thus, I doubt that generic methodological appeals to explanatory generality support
syntactic modeling of the mind.6
Overall, philosophical discussion of mental computation has vastly overem-
phasized formal mental syntax. Certain areas of cognitive science may posit
formal syntactic mental items, but there is no clear reason to believe that mental
computation in general is fruitfully described in syntactic terms.

5. Strevens (2008) offers a detailed theory of explanation based on the core idea that good
explanation abstracts away from as many details as possible. However, his finished theory
significantly compromises that core idea, precisely so as to impugn disjunctive explanations.
Strevens seeks to eliminate disjunctive explanations through a causal contiguity condition on good
explanation (2008, pp. 101–109): when we explain some phenomenon through a causal model,
all the model’s realizers should form a “contiguous set” in “causal similarity space.” He says
that we should pursue greater abstraction only to the extent that we preserve cohesion. He says
that overly disjunctive explanantia violate cohesion, because they have non-cohesive realizers.
Strevens’s causal contiguity condition has dramatic consequences for scientific psychology.
Psychological properties are multiply realizable, so psychological explanations are apparently
realized by processes that form a “non-contiguous set” in “causal similarity space.” Hence, as
Strevens admits (pp. 155–165, p. 167), the cohesion requirement prohibits causal models from
citing psychological properties. This prohibition applies just as readily to syntactic description as
to representational description. So Strevens’s treatment does not provide any support for syntactic
explanation of mental activity. He castigates both syntactic explanation and intentional explanation
as non-cohesive.
6. Potochnik (2010) argues that generality is an explanatory virtue only when it advances the
research program to which an explanation contributes. Theoretical context heavily shapes whether
it is explanatorily beneficial to abstract away from certain details. On this conception, one cannot
motivate syntactic description through blanket appeal to the virtues of explanatory generality. One
would instead need to cite specific details of psychological inquiry, arguing that the generality
afforded by syntactic description promotes psychology’s goals. I doubt that any such argument
will prove compelling.

2.4.3 A Case Study: Mammalian Cognitive Maps

To illustrate the themes of this section, let us consider mammalian cognitive maps.
These have veridicality-conditions. For example, a cognitive map that represents
a landmark as present at some physical location is veridical only if the landmark
is indeed present at that location. Detailed, empirically fruitful theories describe
how mammalian cognitive maps interface with sensory inputs, motor commands,
and self-motion cues. The theories describe computations through which mam-
mals form, update, and deploy cognitive maps. In describing the computations,
researchers cite representational properties that contribute to veridicality-conditions
— e.g. they cite the physical location that a cognitive map attributes to a landmark.
Thus, representational description plays a central role within current theories of
mammalian navigation (Rescorla in press).
Neurophysiological description also plays a central role. In comparison with
other areas of cognitive science, we know a fair amount about the neural under-
pinnings of map-based navigation. For example:
– The rat hippocampus contains place cells, each responding selectively to a
specific spatial location.
– The rat entorhinal cortex contains grid cells, each responding selectively to
multiple spatial locations in the available environment. They are called “grid
cells” because the locations where a given cell fires form a periodic grid that
covers the environment.
Neuroscientists have developed mathematical models describing how place cells,
grid cells, and other such cells support mammalian navigation (Evans et al. 2016;
Giacomo et al. 2011). The models aim to illuminate the neurophysiological mecha-
nisms that underlie formation, updating, and deployment of cognitive maps. To be
sure, we are still a long way from completely understanding those mechanisms.
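
As a rough indication of the kind of neural-level description such models involve, here is a deliberately toy firing-rate sketch of my own (not the specific models of Evans et al. 2016 or Giacomo et al. 2011): a place cell idealized as a Gaussian tuning curve over the animal's location, and a grid cell idealized as a hexagonally periodic pattern built from three plane waves at 60-degree offsets.

    import math

    # Toy firing-rate idealizations, for illustration only. Position is a 2-D
    # point (x, y) in metres; rates are in arbitrary "spikes per second" units.

    def place_cell_rate(pos, centre, width=0.2, r_max=20.0):
        """Gaussian place field: peak firing at `centre`, falling off with distance."""
        dx, dy = pos[0] - centre[0], pos[1] - centre[1]
        return r_max * math.exp(-(dx * dx + dy * dy) / (2.0 * width ** 2))

    def grid_cell_rate(pos, spacing=0.5, phase=(0.0, 0.0), r_max=15.0):
        """Idealized grid field: sum of three plane waves at 60-degree offsets,
        yielding a hexagonally periodic firing pattern over the environment."""
        k = 4.0 * math.pi / (math.sqrt(3.0) * spacing)  # wave number for the lattice
        total = 0.0
        for angle in (0.0, math.pi / 3.0, 2.0 * math.pi / 3.0):
            kx, ky = k * math.cos(angle), k * math.sin(angle)
            total += math.cos(kx * (pos[0] - phase[0]) + ky * (pos[1] - phase[1]))
        # Rescale the sum (which ranges over [-1.5, 3]) to a non-negative rate.
        return r_max * (total + 1.5) / 4.5

    print(place_cell_rate((1.0, 1.0), centre=(1.0, 1.05)))
    print(grid_cell_rate((1.0, 1.0)))

Even this caricature specifies firing rates as functions of the location the cell is taken to track, which fits the pattern described above: representational and neurophysiological description working in tandem, with no formal syntactic level in between.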
Conspicuously lacking from current scientific research into mammalian naviga-
tion: anything resembling syntactic description. The science describes navigational
computations in representational terms, and it explores the neural mechanisms that
implement those representationally-described computations. It does not describe the
mechanisms in multiply realizable, non-representational terms. It does not abstract
away from neural details of the mechanisms. On the contrary, neural details are
precisely what researchers want to illuminate. Of course, one might propose that
we supplement representational and neurophysiological description of mammalian
navigation with syntactic description. For example, one might articulate a causal
topology that prescinds from representational and neural details. But we have yet
to identify any clear rationale for the proposed supplementation. Certainly, current
scientific practice provides no such rationale. Taking current science as our guide,
syntactic description of mammalian navigation looks like an explanatorily idle
abstraction from genuinely explanatory representational and neural descriptions.

2.5 Contrast Between Artificial and Natural Computation

I have drawn a sharp distinction between artificial and natural computing systems.
Syntactic description plays a vital role in mediating between representational
description and physical construction of artificial computing systems. In contrast,
many mental computations are usefully described in representational terms rather
than syntactic terms. Why the disparity? Why is syntactic description so much more
important for artificial computation than natural computation?
Sect. 3 emphasized the crucial pragmatic role that syntax plays within computing
practice. By abstracting away from representational properties, syntactic description
offers a workable blueprint for a physical machine. By abstracting away from
physical properties, syntactic description ignores hardware details that are irrelevant
for many purposes. These are practical advantages that immeasurably advance a
practical task: design and construction of physical machines.
Admittedly, we can imagine a computing practice that eschews syntactic descrip-
tion. However, our own reliance on syntactic description secures important advan-
tages over any such hypothetical practice. To illustrate, suppose an agent designs and
builds a machine to execute the Euclidean algorithm. Suppose the agent describes
his machine in representational terms and hardware terms but not syntactic terms.
Now consider a second machine that has very different hardware but instantiates the
same causal topology. Both duplicates satisfy a common abstract causal blueprint.
This commonality is notable even if the agent does not register it. The agent could
have achieved his computing goals by building the second machine rather than
the first. In eschewing talk about syntax, the agent foregoes valuable descriptions that
promote his own computing ends. He does not employ syntactic descriptions, but
he should.
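
For concreteness, here is a minimal sketch (in Python; the function name and test values are my own) of the representational-level specification that both imagined machines satisfy. It mentions only the numbers operated on, not numerals, voltages, or any other syntactic or hardware vehicle; any device that reliably transits between states representing numbers in this pattern executes the Euclidean algorithm, whatever its hardware.

    def euclid_gcd(m: int, n: int) -> int:
        """Greatest common divisor by repeated division with remainder."""
        while n != 0:
            m, n = n, m % n
        return m

    assert euclid_gcd(252, 105) == 21

On the picture developed above, a syntactic blueprint sits between such a specification and the two hardware descriptions, and it is exactly this shared blueprint that the imagined agent fails to register.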
Thus, norms of good computing design ensure a key role for syntactic description
of artificial computing systems. Syntactic description enables pragmatically fruitful
suppression of representational and hardware properties.
No such rationale applies to the scientific study of mental computation. Psy-
chology is not a practical enterprise. Cognitive scientists are not trying to build a
computing system. Instead, they seek to explain activity in pre-given computing
systems. Constructing an artificial computing system is a very different enterprise
than understanding a pre-given computing system. That formal syntactic description
advances the practical task of designing and constructing artificial computers
does not establish that it advances the explanatory task of understanding a pre-
given computational system. We have seen no reason to think that suppressing
representational and hardware properties of natural computing systems advances
our study of those systems. We have seen no reason to think that formal syntactic
description adds explanatory value to representational and neural description of
mental computation.
Any artificial computing machine was designed by intelligent agents. Good
design practice dictates that those agents sometimes adopt a syntactic viewpoint
Another random document with no related content on Scribd:
to occur in many Platyhelminthes: (1) the larger longitudinal canals,
and (2) the capillary vessels, which commence with (3) the flame-
cells in the parenchyma of the body. The mode of distribution of
these parts is not, however, ascertained. The canals are delicate,
sinuous, apparently intracellular tubes, coursing close to the margin
of the body and sending offsets which suspend the canals to the
dorsal surface, where possibly openings may occur. In dilatations of
these vessels bunches of cilia, and occasionally flame-cells, are
found. Usually, however, flame-cells occur at the commencement or
during the course of the capillaries, which are straight, rarely
branching, tubes of exceeding tenuity, and appear (Lang) to be
outgrowths of the flame-cells, just as the duct is an outgrowth of a
gland-cell. In fact there is little doubt that the stellate flame-cells are
modified parenchymatous gland-cells, containing a lumen filled with
a fluid into which a number of cilia project and vibrate synchronously.
The cells excrete nitrogenous waste substances, which are then
discharged into the capillaries, whence the cilia of the main vessels
drive them presumably to the exterior, though external openings of
the excretory system are not known. Traces of this system have
been observed in young Leptoplana (first by Schultze in 1854) and
also in Cestoplana.

Sensation.—A nervous sheath, with scattered ganglion cells, everywhere underlies the musculature. It is exceedingly faintly
marked on the dorsal surface, but laterally and ventrally forms a
dense network with polygonal meshes. Thickenings of this sheath
give rise to lateral nerves, and also to a pair of stout longitudinal
nerves from which the internal organs are probably innervated. The
brain, hardly distinct in pelagic Polyclads, in most forms does not
differ greatly from that of Leptoplana (p. 13).

The sense organs of Polyclads have the form of tentacles, eyes, otocysts (in Leptoplana otophora), and stiff tactile cilia. The solid
dorsal tentacles of Planoceridae contrast strongly with the folded or
pointed hollow processes of the Cotylea. The former (Fig. 8, A, T)
are muscular and very contractile, and are placed near the brain
some distance from the anterior end. The latter are outgrowths of the
front margin of the body, and are sometimes (Yungia) provided
superficially with olfactory pits and internally with eyes and intestinal
coeca.

The eyes which occur in Polyclads may be divided into (a) a pair of
cerebral groups overlying the brain; (b) those embedded in the
tentacles (tentacular group); and (c) the marginal eyes, which in
Anonymus occur all round the margin. A complex form is sometimes
assumed by the cerebral eyes of Pseudoceridae, resulting probably
from incomplete fission (Fig. 11). Leptoplana otophora was obtained
by Schmarda on the south coast of Ceylon. On each side of the
brain is a capsule containing two otoliths. This is the only known
case of the occurrence of these organs in Polyclads.

Reproduction.—Although Polyclads are able to repair the result of injuries to a very considerable extent, they are not known to multiply
asexually. The two processes are intimately associated, but, though
probably all Turbellaria can regenerate certain lost parts, asexual
reproduction only occurs sporadically.

Fig. 11.—Double eye from the cerebral group of Pseudoceros maximus. (After Lang.)

All known Polyclads are hermaphrodite. The male organs, scattered, like the testes of Leptoplana, over the ventral surface, develop
earlier than the ovaries, though the periods of maturation overlap;
hence the possibility of self-fertilisation, though remote, is still worth
consideration. The genital apertures, through which, in the male,
spermatozoa, and in the female, ova, are emitted, are usually
situated as in Leptoplana (Figs. 2 and 5, ♂ and ♀). In Trigonoporus,
a genus once found at Naples, a secondary female aperture has
been discovered leading into the female genital canal[35]; and in
Anonymus, Polypostia, and Thysanozoon (Fig. 7, E, ♂) two or more
male pores and penes have been found. Anonymus has several
penes (Fig. 7, D, ♂) arranged radially round the body. Polypostia, a
remarkable form described by Bergendal,[36] belonging to the
Acotylea, possesses about twenty such structures ranged round the
female genital aperture. Lang, whose attention was attracted by
these singular facts, made the interesting discovery that
Thysanozoon uses its penes as weapons of offence rather than as
copulating organs, burying them in the skin of another Polyclad
(Yungia) that happened to cross its path, spermatozoa being of
course left in the wound. Lang further found that Prostheceraeus
albocinctus and Cryptocelis alba in this way implanted a
spermatophore in the skin of another individual of the same species,
and he suggested that from this point the spermatozoa wandered
through the tissues till they met with and fertilised the eggs. It is now
known that a similar process of "hypodermic impregnation" occurs
sporadically in several groups of animals.[37] Nevertheless, in some
Polyclads it is probable, and in Stylochus neapolitanus it is certain,
that normal copulation takes place. The sperm-masses are
transferred to a coecal diverticulum of the female genital canal, and
then by a delicate mechanism, of which we know only the effects,
one spermatozoon obtains entrance into one matured ovum, which
differs from the ova of most Turbellaria in that it contains in its own
protoplasm the yolk necessary for the nutrition of the embryo. In
other words, there are no special yolk-glands. After fertilisation, the
ovum in all Polyclads is coated with a shell formed by the shell-
gland, which also secretes a substance uniting the eggs together.
They are deposited on stones and shells, either in plate-like masses
or in spirals (like those of Nudibranchs). Cryptocelis alba lays
masses of an annular shape, with two ova in each shell, and buries
them in sand.

Development.[38]—The first stages in the embryology of Polyclads appear to be very uniform. They result, in all Cotylea and in certain
Planoceridae, in the formation of a Müller's larva (Fig. 12) about a
couple of weeks after the eggs are laid. This larva (1-1.8 mm. long),
which is modified in the Planoceridae, is distinguished by the
presence of a ciliated band, running somewhat transversely round
the body, and usually produced into a dorsal, a ventral, and three
pairs of lateral processes. When swimming the body is placed as in
Fig. 12, and twists round rapidly about its longitudinal axis by means
of the strong locomotor cilia placed in transverse rows upon the
processes. The cilia of each row vibrate synchronously, and recall
the action of the swimming plates of a Ctenophore. It is noteworthy
that whereas Stylochus pilidium passes through a modified or,
according to some authors, a primitive larval stage, its near ally, S.
neapolitanus, develops directly. Most Acotylea indeed develop
directly, and their free-swimming young differ from Müller's larva
merely in the absence of the ciliated band and in the mode of
swimming.

Fig. 12.—Section through Müller's larva of Thysanozoon brocchii (modified from Lang). The right half is seen from inside. × 150.
Semi-diagrammatic. br, Brain; dl, dorsal ciliated lobe; dr, salivary
gland-cells of pharynx; e, eye; ep, ciliated epidermis containing
rhabdites; mg, stomach or main-gut; mg1, unpaired gut branch
over the brain; mo, "mouth" of larva; n, n1, section of nerves; oe,
ectodermic pit forming oesophagus of larva; par, parenchyma
filling the space between the alimentary tract and the body wall;
ph, pharynx lying in the cavity of the peripharyngeal sheath, the
nuclei of which are visible; sl1, sl2, sl3, lateral ciliated lobes of the
right side; vl, ventral ciliated lobe.
Fig. 13.—Diagrammatic transverse sections of a larval Polyclad at
different stages, to illustrate the development of the pharynx.
(After Lang.) A, Larva of the eighth day still within the shell. The
main-gut (mg) is still solid, the epidermis is slightly invaginated,
and a pair of muscular mesodermic thickenings (ms) are present.
B, Young pelagic larva. The epidermic invagination has deepened
and developed laterally. C, The lateral pouches have formed the
wall of the peripharyngeal sheath, enclosing the mesodermic,
muscular, thickening or pharyngeal fold (ph). (Compare Fig. 12.)
Towards the end of larval life, when the ciliated processes (sl, Fig.
12) have aborted, the stage D is reached. By the opening
outwards of the pharyngeal sheath (ph.sh) the two apertures gm,
or true mouth, and m, or external mouth, are formed, which
together correspond with the oesophageal opening of the younger
larva. (Compare the transverse section in Fig. 5.)

Polyclads possess an undoubted mesoderm, which gives rise to the muscles, the pharyngeal fold, and the parenchyma. The ectoderm
forms the epidermis, in the cells of which the rhabdites (Fig. 12)
arise, apparently as so many condensed secretions. From the
ectoderm the brain arises as two pairs of ingrowths, which fuse
together, and from these the peripheral nervous system grows out.
Three pigmented ectoderm cells give rise, by division, to the eyes—
an unpaired cell (Fig. 12, e) to the cerebral group of eyes, and the
other two to the marginal and tentacular groups. The copulatory
organs apparently arise to a large extent as ingrowths from the
ectoderm, from which the accessory glands (prostates, shell-glands)
are also formed. The endoderm forms the lining of the main-gut and
its branches. The pharynx is developed as in Fig. 13, which shows
that the "mouth" of the young larva (C) does not correspond exactly
with that of the adult (D). The salivary glands arise from ectoderm
cells, which sink deeply into the parenchyma. The reproductive
organs (ovaries and testes) possibly arise by proliferation from the
gut-cells (Lang, v. Graff). The change from the larva to the adult is
gradual, the ciliary band being absorbed and the creeping mode of
life adopted.

Turbellaria. II. Tricladida.

The Triclads are most conveniently divided into three groups[39]: (i.)
Paludicola, the Planarians of ponds and streams; (ii.) the Maricola,
the Triclads of the sea; and (iii.) Terricola or Land Planarians. From
the Polyclads they differ in their mode of occurrence; in the
elongated form of their body and almost constant, mid-ventral
position of the mouth; in possessing a single external genital pore
(Monogopora); and in the production of a few, large, hard-shelled
eggs provided with food-yolk.

Occurrence of the Paludicola.—The Planarians of our ponds and streams are the most familiar and accessible Turbellaria. Their
elongated, flattened bodies, and gliding movements, render them
conspicuous objects on the under surface of stones and on the
leaves of aquatic plants, where they live gregariously. The variable
Polycelis nigra (Fig. 14, H) is very abundant in stagnant water and
slowly-moving streams, whereas its ally, P. cornuta (Fig. 14, G),
distinguished by a pair of tentacles, is more local. Planaria
(Dendrocoelum) lactea (A), P. polychroa (I), P. torva, and P. punctata
are not infrequently found together, but the last is at once the largest
and rarest.
Fig. 14.—Forms of Triclads, with the distinguishing specific characters
of certain British forms. A, Planaria lactea O. F. M., × 2; B,
Planaria alpina Dana, × 4 (after Kennel); C, Phagocata gracilis
Leidy (after Woodworth), × 6; C', the same with the pharynges
(ph) extruded; D, Gunda ulvae Oer., × 4; E, Planaria gonocephala
Dug. (after Schmidt), × 4; F, genitalia of Gunda ulvae (after
Wendt); G, head of Polycelis cornuta Schm.; H, head of Polycelis
nigra Ehr.; I, head of Planaria polychroa Schm. K to N show the
distinctive characters of the genital ducts in K, Polycelis nigra; L,
Planaria polychroa; M, Planaria alpina; N, Planaria torva Schultze
(after Iijima and v. Kennel). ga, Genital atrium; go, common
genital opening; mgr, "musculo-glandular organ"; mo, "mouth";
ovd, oviduct; pe, penis; ph, pharynx; pyr, pyriform organs of
unknown significance; sc, sucker; sp, spermatophore lying in (ut)
uterus; vd, vesicula seminalis. (All except C and E are found in
England.)

Planaria alpina (Fig. 14, B) is characteristic of cold mountain streams, but occurs down to sea-level in England, the Isle of Man,
and Ireland, and from its abundance in spring water, probably enjoys
a wide distribution underground. In the Swiss Alps it has been found
at altitudes of over 6000 feet, at lower levels in the Rhone, and also
in the Lake of Geneva. This wide distribution may perhaps be
accounted for, partly, by its faculty for asexual reproduction in
summer, and also, by the production, later in the year, of hard-
shelled eggs which are laid loosely, not attached to stones or plants.[40] But we have no really direct evidence of the means of dispersal
of this or of any of the foregoing species, although they all have a
wide distribution in Europe. Of extra-European forms the accounts
that exist are very fragmentary. The only indubitable diagnostic
character of a Triclad is the structure of its genital ducts, and this is
accurately known in only a few cases. Several species such as
Dicotylus pulvinar (Fig. 16, B), at present known only from Lake
Baikal,[41] and others (Planaria mrazekii, P. albissima) from
Bohemia,[42] will doubtless be found elsewhere when they are
carefully looked for. Phagocata gracilis is a remarkable North
American form, possessing several pharynges (Fig. 14, C and C'),
recalling the independent movement of the pharyngeal lobes of
Discocelis lichenoides (Fig. 9).[43]

Occurrence of the Maricola.—Little as we accurately know of the distribution of the fresh-water Planariae, our knowledge of the
occurrence of the marine forms is still more limited. Gunda
(Procerodes) ulvae (Fig. 14, D) is the commonest European form,
occurring abundantly in the upper part of the littoral zone, on the
shores of the Baltic. G. segmentata from Messina has been carefully
described by Lang,[44] but these are almost the only species of
Maricola which can be accurately determined. They differ from the
Paludicola in the position of the "uterus" behind the genital pore and
in the absence of a "musculo-glandular organ" (Fig. 14, F). A special
interest attaches to the Bdellouridae, a family containing three
species, all parasitic on Limulus from the east coast of America.
These remarkable Triclads usually have a sucker at the hinder end
of the body, by which they attach themselves firmly to the cephalo-
thoracic appendages and to the gill-plates, upon which the eggs may
be found in considerable numbers. One species, Syncoelidium
pellucidum, possesses a pair of problematical organs in the hinder
part of the body, opening to the exterior ventro-laterally by a couple
of chitinous mouth-pieces, but having no connexion with the genital
ducts.[45]
Occurrence and Distribution of Land Planarians.—The
terricolous Triclads or Land Planarians are the most interesting
division of the group. Some forms, such as Bipalium kewense, attain
large dimensions, being usually 6 to 9 inches in length, and
specimens fully extended have measured 18 inches. Their bodies
are frequently banded or striped with brilliant colours. Geoplana
coerulea Mos. has a blue ventral surface and is olive green or dark
Prussian blue above. G. splendens Dendy, is marked dorsally by
three stripes of emerald green alternating with four dark brown
longitudinal bands. The mode of coloration, though somewhat
variable, is an important specific character. Its significance, however,
is not clearly understood. The colours may be a warning signal, as
some Geoplana at least are disagreeable to the taste of man and
some birds[46]; but since Land Planarians are largely nocturnal
animals, living by day under logs, banana leaves, and in other moist
and dark situations, this explanation is clearly insufficient. Two
Geoplana have been noticed by Mr. Dendy which seem to be
protectively coloured. G. triangulata var. australis occurs abundantly
in the beech forest in the South Island of New Zealand, and its
brown back and yellow or orange ventral surface match the leaves
around its haunts. G. gelatinosa again looks like a mere slimy patch
on the rotten bark where it is found. In arid districts, during the dry
season, Land Planarians burrow in the soil and form a cyst, in which
they lie coiled up, after the manner of earthworms.[47] The glutinous
investment of their delicate bodies forms a moist medium in which
the cilia covering the body (and especially the ventral surface) may
constantly and evenly vibrate, and by which they adhere firmly to
their prey. In some tropical Planarians, in addition to possessing
offensive properties, the mucus is so copious in amount and hardens
with such rapidity, that these Triclads may creep over bridges of it,
and may even be blown from one stem or branch of a plant to
another, hanging at the ends of their threads.[48]
Fig. 15.—Some Land Planarians found in Europe. A, Bipalium
kewense Mos. × ⅓ (after Bergendal); B, Rhynchodemus terrestris
O. F. M., × 2; C, Geodesmus bilineatus Metsch., × 2½ (after
Metschnikoff). mr, Region of mouth; gp, region of genital pore.

In Europe there are only two or three indigenous Land Planarians, of which Rhynchodemus terrestris O. F. M. (Fig. 15, B) is the most
widely distributed, and has been found in moist situations for the
most part wherever it has been carefully looked for. It measures
about ¾ inch in length, and is dark grey above, whitish below, and
bears a pair of eyes near the anterior extremity (Fig. 15, B). Bipalium
kewense (Fig. 15, A), which has been found in the forests of Upolu,
Samoa, by Mr. J. J. Lister, has been accidentally imported, from the
(unknown) districts where it is indigenous, with plants and soil to
various parts of the world—England, Germany, the Cape, and also to
Sydney, where it appears to have established itself. In these Bipalia
living in hothouses, the genitalia never appear to attain maturity, and
apparently multiple fission and subsequent reparation of the missing
parts is the only mode of reproduction. Geodesmus bilineatus (Fig.
15, C), which has occurred at Giessen, Würzburg, and Dresden,
has, in all probability, been introduced with ferns from the West or
East Indies. Microplana humicola, described by Vejdovsky from
dunghills in Bohemia, is doubtfully indigenous.

In marked contrast with the poverty of the temperate zones in Land Planarians, is the abundance and great variety of this group in
Southern Asia, South America, and especially in Australasia, where
the rich Land Planarian fauna has been carefully investigated by
Spencer, Dendy, Fletcher, and others, in certain parts of Victoria,
New South Wales, and New Zealand.[49] About forty species of
Planarians have been discovered on the Australian continent, thirty-
five of which belong to the predominant genus Geoplana,
distinguished by the presence of numerous eyes along the border of
the simple anterior extremity. Of the remaining five, four belong to
the genus Rhynchodemus, with, lastly, the introduced Bipalium
kewense. The distribution of any one species, however, is so limited
that only three forms are common to the two former colonies; and
although some of the twenty known New Zealand Planarians (chiefly
species of Geoplana), are identical with Australian species, yet only
one, or possibly two, varieties of these species are Australian also.
In addition to their prevalence in Australasia, the Geoplanidae also
occur in South America, South Africa, Japan, and the East Indies.
The Bipaliidae are characteristic of the Oriental region, being found
in China, Borneo, Bengal, and Ceylon. The Rhynchodemidae are a
cosmopolitan family, occurring in Europe, North and South America,
the Cape of Good Hope, Ceylon, the East Indies, Australia
(particularly Lord Howe Island), and Samoa.[50]

Habits and Structure of Triclads.—The common Planaria (Dendrocoelum) lactea, which usually progresses by ciliary action,
aided, it is said, by muscular contractions of the ventral surface,
performs, if alarmed, a series of rapid "looping" movements, by
affixing a sucker (Fig. 14, A, sc), placed on the under side of the
head, to the substratum, and pulling the posterior end close to this.
The sucker, discovered by Leydig, is even better developed in P.
punctata (Fig. 16, A), P. mrazekii, and P. cavatica, and is an efficient
adhering-organ which has probably been developed from a similar
but simpler structure found in a considerable number of both fresh-
water and marine Triclads (P. alpina, Fig. 16, E). Probably the sucker
of the Land Planarian Cotyloplana (D) is the same structure, but the
two suckers of Dicotylus (B) are at present unique. Planaria dioica,
found by Claparède on the coast of Normandy,[51] is covered with
minute adhesive papillae, similar to those of certain Rhabdocoelida
(e.g. Monotus, Fig. 19, D), enabling it to cling tightly to the Zostera,
and so to resist the loosening action of the waves.

Fig. 16.—Suckers of Triclads. A, Planaria punctata Pall.; a, dorsal surface of head; b, ventral surface (freely moving) showing the
sucker; c, sucker contracted (after Hallez): B, ventral surface of
head of Dicotylus pulvinar Gr., from Lake Baikal (after Grube): C,
dorsal surface of Procotylea fluviatilis Gir. (after Girard): D, sucker
of Cotyloplana whiteleggei Sp. (after Spencer): E, ventral view of
head of Planaria alpina Dana (preserved specimen); hg, adhering
groove; m, thickened musculature forming the margin of the
sucker; sc, sucker; t, tentacles.

The movements of Land Planarians are somewhat peculiar. The ventral surface of Bipalium has a median groove, into which the
ducts of numerous mucus-glands open. This is bordered by two
ridges clothed with long and powerful cilia, which perform the chief
part in propelling the animal, aided, according to Lehnert,[52] by
muscular waves which pass from the head, backwards, i.e. opposite
in direction to those by which a snail slides along. This observation,
however, needs confirmation. The whole body executes sinuous
movements, during which the crescentic head, lifted slightly above
the ground (Fig. 15, A), is constantly altering and regaining its
normal shape, somewhat as a Planaria lactea uses the lobes of its
head. Further examination shows that the margin of the head of
Bipalium is not only provided with eyes, but in addition, with ciliated,
(probably) olfactory pits. Such depressions, innervated directly from
the cerebral ganglia, have been found in sixteen species of
Geoplana, and in one or two species of Rhynchodemus.[53] Some
Land Planarians (a species of Rhynchodemus from Ceylon, and a
Dolichoplana from the Philippines) wriggle out of a box or the hand
with great speed (Moseley).

The skin of Triclads is full of minute rods or rhabdites, which are shot
out in great numbers when the animal is irritated, and doubtless
serve an offensive purpose. The Terricola possess two kinds of
these: (1) needle-like rods; and (2) in Bipalium kewense, flagellated
structures, bent into a V-form and with a slender thread attached to
one end (Shipley). In Geoplana coerulea these bent rods furnish the
blue colour of the ventral surface. The rhabdites arise in all Triclads
in cells below the basement-membrane, which they are said to
traverse in order to reach the epidermis, thus differing in origin, and
also in structure, from the rods of Polyclads.

Food.—Triclads are largely if not wholly carnivorous animals, feeding upon Annelids, Crustacea, Insects, Insect-larvae, and
Molluscs. The mouth is usually mid-ventral or behind the middle of
the body, but in the anomalous Leimacopsis terricola Schm. from the
Andes[54] and in Dolichoplana it is near the anterior end. The
pharynx (Figs. 17, 18, ph) is cylindrical or bell-shaped, exceedingly
dilatable and abundantly supplied with glands and nervous tissue. It
opens into the three main intestinal branches, one of which runs in
the median plane forwards, the others backwards right and left,
enclosing a space in which the genital ducts lie (Figs. 14, A, 17). The
fresh-water Planarians prey upon Oligochaeta, Hydrophilidae
(aquatic beetles), and the commoner pond-snails. Bipalium kewense
pursues earthworms, seizes the upper surface of the anterior end by
the glutinous secretion of its ventral surface, and then proceeds to
envelop part or the whole of the worm within its pharynx, which is
stretched as a thin skin over the body of its struggling prey (Lehnert).
The tissues of the latter pass into the intestine of the Planarian, and
distend it greatly. After such a meal, which lasts from one to five
hours, a Bipalium may remain for three months without seeking food.
Geobia subterranea, a white eyeless form from Brazil, pursues
earthworms (Lumbricus corethrurus) in their burrows, and has been
seen by Fritz Müller sucking the blood out of a young worm.[55]
Geoplana typhlops, a Tasmanian species, is also blind, and pursues
worms, as does G. triangulata (Dendy). In Trinidad, von Kennel[56]
observed that land-snails (Subulinae) were the food of certain Land
Planarians, the name of which, however, he does not state. The
pharynx was employed to suck out the soft parts of the snail even
from the upper whorls of the shell.

Reproduction.—In Planaria lactea the numerous testes (Fig. 17, te) are placed both above and below the alimentary canal throughout
the greater part of its course. The membrane of each gonad is
continued into a minute vas efferens, which unites with those of
neighbouring testes. Two vasa deferentia (v.d) arise thus on each
side, one from the posterior, the other from the anterior testes of the
body, and open into the vesiculae seminales (v.s), which may be
seen in the living animal as tortuous whitish tubes at the sides of the
pharynx (Fig. 14, A). These open into the penis (Figs. 14, A; 17, pe),
a large pyriform organ, the apex of which, when retracted, points
forwards, projecting into the penial cavity. When this apical portion is
evaginated and turned inside out, it is of considerable length, and is
able to pass into the long slender duct of the uterus (ut) of another
individual. The penial sheath (ps) is part of the genital atrium (gs),
which is developed as a pit from the skin, and invests the end of the
genital ducts, the mouth of the pit forming the common genital pore
(go), through which both male and female genital products are
emitted.

There are two ovaries (ov) placed far forwards, between the third
and fourth pairs of intestinal coeca. The oviducts (ovd) lie just over
the lateral nerves, and have a slightly tortuous course, at each
outward bend receiving the duct (yo) of a yolk-gland (yg), so that ova
and yolk are already associated when the oviducts open by a short
unpaired tube into the genital atrium. The yolk-glands develop
rapidly,[57] and when fully formed are massive glands occupying the
spaces between the intestinal branches and the testes which are
then aborting. The so-called uterus (ut), apparently at first a
diverticulum of the genital atrium, expands behind the pharynx into a
receptacle lined by long glandular columnar cells, which, however,
are not all of the same kind. The uterine duct opens into the atrium
just above the aperture of a problematical, eversible, "musculo-
glandular organ" (mgr).

Fig. 17.—Diagrammatic view of the structure of Planaria (Dendrocoelum) lactea. × 7. The body has been cut across and a
portion removed. In the posterior half the alimentary tract of the
left side is removed and the uterus, penis, and muscular organ
sliced open horizontally. The nervous system is represented by
black, and the yolk-glands by dotted lines. br, Brain; ey, eye with
lens and optic nerve; go, external genital aperture for both male
and female products; gs, genital atrium; lg, paired lateral intestinal
branch; ln, longitudinal nerve; mg, unpaired anterior intestine, the
branches of which are cut off close to the main stem; mgr,
eversible "musculo-glandular organ"; nc, nerve-cells in the
pharynx; nn, lateral nerve-twigs; ns, nerve-sheath; ov, ovary; ovd,
oviduct; pe, the eversible penis, the corrugated inner white portion
of which is the apex; ph, pharynx; phs, pharyngeal sheath; pr,
"prostate" or granule-gland (represented by dotted lines opening
into the penis); ps, penial sheath; te, testes; to, tactile lobe of the
head; ut, "uterus" opening into the genital atrium just above mgr;
vd, vasa deferentia; vs, vesicula seminalis; yg, yolk-glands; yo,
openings of the yolk-ducts into the oviducts.

Fertilisation appears to occur in the uterus, where ova, yolk, and spermatozoa, or (in P. torva) spermatophores (Fig. 14, N, sp), are
found. The formation of the cocoon in Planaria lactea is probably
begun in the "uterus," but is undoubtedly completed in the genital
atrium. In P. polychroa, however, the stalked cocoon is formed wholly
in the "uterus." Thus we find two types of cocoons in different
species of the genus Planaria associated with two types of
reproductive organs (Hallez):—

I. Planariae in which the two oviducts open separately into the posterior part of the duct of the uterus. A musculo-glandular organ is
absent. The cocoons are spherical and stalked. Examples—Planaria
polychroa (Fig. 14, L), P. albissima, P. gonocephala.

II. Planariae in which the two oviducts open by means of an unpaired duct into the genital atrium. A musculo-glandular organ present
(Planaria torva (Fig. 14, N), P. mrazekii, P. lactea, P. cavatica), or
absent (P. alpina, Fig. 14, M). The cocoons are sessile.

The genitalia of the Maricola (Fig. 14, F) and Terricola do not differ
very much from those of Planaria. The uterus (greatly reduced in the
Land Planarians) lies behind the genital pore, and several ova,
together with much milky yolk, are enclosed in a capsule which is
formed in the genital atrium.

Asexual Reproduction.[58]—It has long been known that fresh-water Planarians have not only great powers of repairing injuries, but
that they use this faculty in order to multiply by transverse fission.
Planaria alpina and Polycelis cornuta, in summer, separate off the
posterior part of the body, and this ultimately becomes an entire
individual. P. albissima, and especially P. subtentaculata, anticipate
matters so far, that before fission is complete, the new individual has
a head nearly fully formed. The new organs are largely regenerated
in both parent and young, apparently by the division and
specialisation of scattered embryonic cells in the parenchyma. The
asexual reproduction of Land Planarians is not fully proved, though it
is known that they repair injuries to the body completely, and that
Bipalium kewense is often found in hothouses, divided into
fragments which regenerate all the organs of the parent, but like the
latter, do not mature their sexual organs.

Fig. 18.—Semi-diagrammatic view of the excretory system of Planaria lactea. (Partly after Chickoff.) can, Capillary network on both
dorsal and ventral surfaces; g.br, branches of the intestine; lg,
lateral branches of the digestive system; ln, longitudinal nerve; ph,
pharynx, with intermuscular capillary excretory network arising
from the point marked pht; tp, principal vessels of the excretory
system, the external opening of which is not certainly known; vs,
vesicula seminalis.

Excretion.—The excretory organs of Triclads consist of flame-cells, canaliculi, and a pair of longitudinal canals, the external openings of which have not been satisfactorily ascertained. The flame-cells are
difficult to detect in Planaria lactea, and the latest observer, Chickoff,
[59] was unable to see them, although to him we are indebted for
figures of this system in P. lactea (Fig. 18) and P. alpina (P.
montana). In the latter, the flame-cells are distinct, and may open
directly into the two main canals or indirectly through unbranched
canaliculi. The pharynx possesses a special supply of excretory
tubules communicating with the main canals. A similar system has
been described and figured in Gunda segmentata by Lang.[60]
Classification of Tricladida.

(Families, with their genera and British species.)

PALUDICOLA.
  Planariidae: Planaria lactea O. F. M., P. punctata Pall., P. polychroa Schm., P. torva M. Sch., P. alpina Dana; Polycelis nigra Ehr., P. cornuta Schm.; Anocelis; Oligocelis, Procotyla (doubtful genera); Sorocelis; Dicotylus.

MARICOLA.
  Procerodidae (= Gundidae): Procerodes (= Gunda) ulvae Oersted, P. littoralis van Beneden; Cercyra; Uteriporus.
  Bdellouridae: Bdelloura; Syncoelidium.

TERRICOLA.
  Bipaliidae: Bipalium kewense Moseley (introduced).
  Geoplanidae: Geoplana; Geodesmus.
  Rhynchodemidae: Rhynchodemus terrestris O. F. M.
  Belonging to undetermined Families: Dolichoplana; Polycladus; Microplana; Leimacopsis.

Turbellaria. III. Rhabdocoelida.


The Rhabdocoelida include a very heterogeneous assemblage of
usually minute Turbellaria, distinguished collectively from the
Polyclads and Triclads by the form of the digestive tract. This is a
simple or slightly lobed sac, except in the Bothrioplanidae, which in
this and many other points closely resemble the Triclads. It is to the
straight, rod-like nature of the alimentary canal that the name of the
group refers. The size and form of the body, and the structure of the
pharynx and genitalia, vary within wide limits.

The Rhabdocoelida are subdivided into three tribes:—

(1) Acoela, in which a sub-central mouth and pharynx are present, but lead into the parenchyma of the body, not into an intestine with
proper walls. An excretory system has not hitherto been seen. Yolk-
glands are absent. An otolith underlies the brain. The Acoela are
marine.

(2) Rhabdocoela, which possess a complete alimentary tract separated from the body-wall (except for a few suspensory strands)
by a space or body-cavity, filled with fluid. This space is sometimes
(Vortex viridis) lined by an endothelium of flattened parenchymatous
cells. There are two compact testes, which are enclosed (as are the
ovaries and yolk-glands) in a distinct membrane. An otolith is present
in some genera and species. Terrestrial, fresh-water, marine.

(3) Alloeocoela, in which the body-cavity is greatly reduced. Except in the Bothrioplanidae, the gonads have no distinct membrane.
Testes numerous; yolk-glands present. Marine with a few exceptions.

Occurrence and Habits of the Rhabdocoelida.—The Acoela are usually minute, active Turbellaria abounding amongst weeds
throughout the lower half of the Littoral, and the whole of the
Laminarian zone, but are most plentiful in the pools exposed during
spring-tides on our coasts, especially on the shores of Devonshire.
The species of Haplodiscus, however, and Convoluta henseni are
modified pelagic forms found in the Atlantic Ocean.[61] Convoluta
paradoxa (Fig. 19, B) is the commonest British species. It is from 1
to 9 mm. in length, and of a brown colour, marked above by one or
more transverse white bars. The brown colour is due to a symbiotic
alga, the nature of which has not been thoroughly investigated. In an
allied species, however (C. roscoffensis), from the coast of Brittany,
the alga, which is here green, has been carefully examined by
Professor Haberlandt,[62] and it appears from his researches that the
algae form a special assimilating tissue, enabling the Convoluta to
live after the fashion of a green plant. At Roscoff, these elongated
green Convoluta live gregariously in the sandy tide-pools, fully
exposed to the sun's rays, and have the appearance of a mass of
weed floating at the surface of the water. Access to the atmosphere
and to sunlight are necessary in order to enable the assimilating
tissue to form the carbohydrates, upon which this form lives
exclusively. Not only has the alga itself undergone such profound
changes (loss of membrane, inability to live independently after the
death of the host) as to disguise its true nature (a tissue-cell derived
from algal ancestors), but the Convoluta has also undergone
concomitant changes, in form, in the loss of a carnivorous habit, and
in the development of marked heliotropic movements, thus adapting
itself to an holophytic or plant-like mode of nutrition. Nevertheless
the Acoela, as a group, are carnivorous, feeding upon Diatoms,
Copepoda, and small Rhabdocoela, the absence of a digestive tract
indeed being probably more apparent than real.[63]

The Rhabdocoela live under varied conditions. One form, Prorhynchus sphyrocephalus, has been found among plants far from
water in the neighbourhood of Leyden, by De Man.[64] With this
exception the group is purely aquatic, and though a few genera and
even individuals of the same species occur both in salt and fresh
water, whole sub-families and genera are either marine or
paludicolous. Among the latter, Mesostoma, Castrada, Vortex, and
Derostoma are common in brooks and ponds, especially at certain
times, often only for one month (May or June) in the year. Species of
Macrostoma, Stenostoma, and Microstoma are also abundant in

You might also like