Phenomenology and Artificial Intelligence
Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK, and 350 Main Street, Malden, MA 02148, USA
METAPHILOSOPHY, Vol. 33, Nos. 1/2, January 2002
ISSN 0026–1068
ANTHONY F. BEAVERS
I wish to thank Larry Colter, Julia Galbus, and Peter Suber for their comments on earlier
drafts of this essay.
1. The meaning of the word enveloping is difficult to discern in this context, though the definition of the process seems clear. It is “the process of adapting the environment to the agent in order to enhance the latter’s capacities of interactions.” “Ontological accommodation” might be a better term, but I shall keep with Floridi’s usage throughout. Readers will need to remember that it is the process as defined that I mean to indicate by the use of Floridi’s terminology and not be too concerned with making sense of the concept of enveloping.
We must not forget that only under specially regimented conditions can a collection of detected relations of difference [binary relations, for instance] concerning some empirical aspect of reality replace direct experiential knowledge of it. Computers may never fail to read a barcode correctly, but cannot explain the difference between a painting by Monet and one by Pissarro. More generally, mimetic approaches to AI are not viable because knowledge, experience, bodily involvement and interaction with the context all have a cumulative and irreversible nature. (1999, 215–16)
2. “Mimetic” approaches to AI are those that try to build artificial agents that do things the way human beings do. “Non-mimetic” approaches strive for the same result by attempting to emulate thinking along functionalist lines. Floridi’s critique is based on the early analysis of Turing. “Since GOFAI could start from an ideal prototype, i.e., a Universal Turing machine, the mimetic approach was also sustained by a reinterpretation of what human intelligence could be. Thus, in Turing’s paper we read not only that (1) digital computers must simulate human agents, but also that (2) they can do so because the latter are, after all, only complex processors (in Turing’s sense of being UTMs)” (Floridi 1999, 149). Evidence that Turing makes a mimetic mistake is apparent in his admittedly bizarre attempt to estimate the binary capacity of a human brain. (See the closing section of “Computing Machinery and Intelligence.”) Still, though Turing thought of the human being as a UTM, thereby committing a mimetic fallacy, we need not conclude that all mimetic approaches are doomed to fail. It is conceivable and may even be likely that connectionist networks will one day model human cognition at a reasonable level of complexity. Details aside, they do seem to mimic brains, at least minimally. Because these networks are computationally equivalent to UTMs, it is premature to give up on the idea that the brain is a computer.
It is difficult to tell from the context how extensively Floridi means these
words, and much hangs on the word just in the last part of the sentence: “the
former is just a short hand notation for the latter.” Take out the word just and
replace it with “more or less,” and we find ourselves implicitly accepting
some form of transcendental constitution. Such theories do suggest that data
from perceptual experience are picked up by consciousness and put through
a series of operations to construct a “world of objects” that serves to explain
what appears to us in experience. For phenomenologists, the phenomenal
world is just such a summation, a spatial and temporal objectification of the
flux of sense data into a stable and knowable world according to a set of rules that we determine internally and then assign to the objects in the world. In other words, we know it insofar as it is derived from our cognitive processes, not because we find it in the world.

3. I am not claiming that Kant or Husserl, or anyone else for that matter, has furnished a complete or even minimally adequate set of rules for world constitution. My point is only that AI and cognitive-science researchers should want these efforts to succeed. In addition, I agree that traditional attempts at understanding world constitution are too representational, a charge that has been waged continually against Husserl by thinkers like Martin Heidegger and Emmanuel Levinas. But this, by itself, does not mean that the representational account is rendered useless. It means only that it must be situated. Even if embodied action is a necessary condition for human or mimetic representation, a cognitive layer of data processing seems nonetheless to be operative.
Of course, actual phenomenological description is much more subtle
and detailed, but this crude example should serve to illustrate the cognitive
significance of the method, namely, that the objects (and, to generalize, the
properties and relations between them) that were previously taken to
belong to the ontology of the natural world now appear as cognitive structures that result from mental processes directed at sensibility. The previous
objects of nature turn out to be “naturalized objects,” that is, ideal objects
that are constituted on the basis of appearances in concrete experience and
projected outward into the natural world.
Husserl’s first examples treat this process of naturalization in reference to material objects. The perceptions of a material object arise one
at a time in a succession of appearances or perspectives that leads to the
conclusion that “behind” these appearances there exists a thing-in-itself
that causes them. Naturalism treats these things-in-themselves as if they
were absolute objects, failing to treat them in relation to the perceptions
from which they were abstracted:
By interpreting the ideal world which is discovered by science on the basis of
the changing and elusive world of perception as absolute being, of which the
perceptible world would be only a subjective appearance, naturalism betrays
the internal meaning of perceptual experience. Physical nature has meaning
only with respect to an existence which is revealed through the relativity of
Abschattungen [perspectives]—and this is the sui generis mode of existing of
material reality. (Levinas 1973, 10)
[T]he specific being of nature imposes the search, in the midst of a multiple and
changing reality, for a causality which is behind it. One must start from what is
immediately given and go back to that reality which accounts for what is given.
The movement of science is not so much the passage from the particular to the
general as it is the passage from the concrete sensible to the hypothetical superstructure which claims to realize what is intimated in the subjective phenomena. In other words: the essential movement of a truth-oriented thought consists
in the construction of a supremely real world on the basis of the concrete world
in which we live. (Levinas 1973, 15–16)
4. Numbers in the references to Husserl refer to sections in his Ideas and not to the actual page numbers.
5. Husserl uses this expression to speak of a world that is genuinely immanent, that is, built up within consciousness, but projected outward in such a way that it looks extramental. For a sustained treatment of “transcendence in immanence,” see the fifth of Husserl’s Cartesian Meditations.
6. The issue of the relationship between idealism and phenomenology has a long and complicated history of its own. For a sustained commentary, see the lengthy treatment by Herman Philipse in The Cambridge Companion to Husserl (Philipse 1995, 239–322). There is no doubt that Husserl is a transcendental idealist, and Philipse is correct to note that “Husserlian phenomenology without transcendental idealism is nonsensical. If one wants to be a phenomenologist without being a transcendental idealist, one should make clear what one’s non-Husserlian conception of phenomenology amounts to, and which problems it is meant to solve” (Philipse 1995, 277). I do believe that Husserl’s phenomenology needs to be reworked in light of the problem of idealism, but it is unfortunate that the possibility of idealism is immediately taken to mean that phenomenology as such is bankrupt as a tool for learning about cognitive processes. We can use it as a method, temporarily accepting its ontological commitments, without having to posit these commitments as binding outside the practice of the method.
Conclusion
If I am correct in this assessment of the phenomenological enterprise, its applicability to artificial-intelligence and cognitive-science research should be clear. By articulating the mental processes involved in getting
from sense data to a knowable world, it is engaged in understanding the
processes and procedures involved in human cognition. While such an
understanding does not explain cognition or how the brain works, it does
help us understand which processes are instantiated in the brain. Such an
understanding certainly is an aid to cognitive science. Furthermore, the
same understanding can guide our efforts to duplicate intelligence in
machines. After all, we need to know what we want a machine to do before we can build one to do it, and it is insufficient to presuppose a set
of mental acts without careful attention to their precise function within our
cognitive initiatives.
Having said this, I mean in no way to suggest that we can simply apply
Husserl to artificial intelligence. No doubt much of what he has said is
relevant to our purposes, but it would take considerable effort to get what
we need out of his dense and turgid prose. A shorter, and perhaps more
useful, approach would be to borrow his method in the climate of current
cognitive science and rework phenomenology, perhaps along realist lines,
with different goals in mind.
Finally, I must address the issue of why, if I am correct about this application of phenomenology, such a view is not widely held. I can only speculate. A possible reason may be that the tendency of phenomenologists to
write in the tradition of German philosophy, indeed German Idealism,
confuses the issues or, at least, hides their significance for cognitive
science. The language often suggests spiritual powers belonging to a
disembodied psyche at the center of an ideal world—an ego playing God.
This usage lends itself to idealistic interpretations of phenomenology that
make metaphysical commitments instead of methodological ones. It does
not help that Husserl himself may be guilty of such a charge. Realist
phenomenologies friendly to materialism and recent work in cognitive
technology are possible, even while moving in the wake of the phenomenological reduction. But this is not easily seen.
Thinkers like Hubert Dreyfus have used phenomenology negatively to
argue that traditional approaches to artificial intelligence are doomed to
fail (Dreyfus 1992). But understanding phenomenology as an aspect of
cognitive science puts it to constructive use. Metaphysical commitments
aside, phenomenologists like Husserl and Heidegger can be read as
describing (in detail) the informational processes and procedures by which
cognition operates. Although these descriptions may not be immediately in
line with Newell and Simon’s Symbol System Hypothesis or binary
computation, it is a mistake to think that they cannot be translated into
some form that can be modeled by intelligent technology. We should be
Department of Philosophy
University of Evansville
1800 Lincoln Avenue
Evansville, IN 47722
USA
beavers@noetic-labs.com
References
Churchland, Paul. 1988. Matter and Consciousness. Revised edition.
Cambridge, Mass.: MIT Press.
Dennett, Daniel C. 1984. “Cognitive Wheels: The Frame Problem of AI.”
In Minds, Machines and Evolution: Philosophical Studies, edited by
Christopher Hookway, 129–51. New York: Cambridge University
Press.
Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of
Artificial Reason. Cambridge, Mass.: MIT Press.
———. 1997. “From Micro-Worlds to Knowledge Representation: AI at
an Impasse.” In Mind Design II: Philosophy, Psychology, Artificial
Intelligence, edited by John Haugeland, 143–82. Cambridge, Mass.:
MIT Press.
Floridi, Luciano. 1999. Philosophy and Computing: An Introduction. New
York: Routledge.
Heidegger, Martin. 1996. Being and Time, translated by Joan Stambaugh.
Albany, N.Y.: SUNY Press.
Husserl, Edmund. 1960. Cartesian Meditations: An Introduction to
Phenomenology, translated by Dorion Cairns. The Hague: Nijhoff.
———. 1982. Ideas Pertaining to a Pure Phenomenology and to a
Phenomenological Philosophy. First Book. General Introduction to a
Pure Phenomenology, translated by F. Kersten. The Hague: Nijhoff.
Kant, Immanuel. 1929. Critique of Pure Reason, translated by Norman
Kemp Smith. New York: St. Martin’s Press.
Levinas, Emmanuel. 1973. The Theory of Intuition in Husserl’s
Phenomenology, translated by André Orianne. Evanston, Ill.:
Northwestern University Press.
Newell, Allen, and Herbert Simon. 1990. “Computer Science as Empirical
Inquiry: Symbols and Search.” In Mind Design II: Philosophy,