
KARL JASPERS FORUM

TA22 (Jarvilehto)
 
Response 6 (to Malcolm's C20)
ROBOTS, ENVIRONMENT AND CONSCIOUSNESS
by Timo Jarvilehto
4 May 2000, posted 16 May 2000
 
Abstract
I am greatly indebted to Chris Malcolm (CM) for his thorough work with the target article, and for many interesting comparisons between behavioral robotics and the organism-environment theory. His field is quite new to me, and I never thought my ideas would come so close to work in artificial intelligence and robotics. In my response I will first describe some problems related to the design of robots and their environment, and then consider the possibility of robot consciousness.
------------------------------------
<0>
It is very encouraging when researchers arrive at similar conceptions via different routes. This is an indication that the conception has some generality. Some years ago, when I started to publish in English the theoretical ideas I had developed over the preceding fifteen years, I was very surprised to realize that the scientists most interested in these ideas were roboticists (see the discussion in Psycoloquy on Jarvilehto, 1998a). Because of my basically critical attitude towards the mainstream cognitive science of the last decades, and the associated artificial life and artificial intelligence research, I had not followed theoretical developments in these fields. Hence, it was a pleasant surprise to encounter here ideas close to the organism-environment theory.
<1>
When we build a robot or any other machine, what exactly do we build? Let us take a car as a simpler example. When we build a car it is not enough to put its parts together; we must also construct its environment, in the form of roads, bridges, etc. We must also train its driver. A car without the possibility of driving from one place to another is not really a car. Thus, the construction of the specific action environment is an essential part of the building process of the machine, and at least as important as the construction of its "inner" parts. Of course, this is often not so striking, because the action environment of the machine usually exists already before the construction process. However, the planner must know this environment, because he has to be able to plan the structure of the machine so that it will fit its environment.
<2>
This situation was typical of older robotics. The robot was built for a ready-made environment, and its action was guided by a central control and a program that solved the problems arising in different environmental situations. The new idea in behavioral robotics (if I have understood correctly) is that the environment of the robot is not exactly predefined; rather, the robot is built of modules which may, in a way, create the robot's own environment by making contact with new environmental parts without any central control. Such a construction offers the possibility of robots functioning also under circumstances which are not exactly known in advance and, as Malcolm points out, this kind of technology approaches the basic principles of the organism-environment theory.
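
To make this contrast concrete, here is a minimal, purely illustrative sketch in the spirit of behavior-based robotics: instead of one central program that plans for a predefined environment, independent behavior modules each react to whatever contact with the environment they happen to have, and a simple priority order arbitrates between them. The module names and thresholds are my own hypothetical examples, not taken from Malcolm's commentary or the target article.

```python
# Illustrative sketch only: decentralized behavior modules instead of a
# central planner. All names and thresholds are hypothetical.
import random

def avoid_obstacle(senses):
    # Highest-priority behavior: back away if something is too close.
    if senses.get("distance", 1.0) < 0.2:
        return "reverse"
    return None

def seek_light(senses):
    # Mid-priority behavior: turn toward a light source if one is sensed.
    if senses.get("light", 0.0) > 0.5:
        return "turn_to_light"
    return None

def wander(senses):
    # Default behavior: move about without any model of the environment.
    return random.choice(["forward", "turn_left", "turn_right"])

BEHAVIORS = [avoid_obstacle, seek_light, wander]  # priority order, no central control

def step(senses):
    # Each cycle, the first module that responds to the current contact
    # with the environment determines the action.
    for behavior in BEHAVIORS:
        action = behavior(senses)
        if action is not None:
            return action

print(step({"distance": 0.1, "light": 0.8}))  # -> "reverse"
```

Each module is defined only in terms of the rough environmental features it can react to; nothing in the program specifies the environment as a whole.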
<3>
However, if I understand correctly, even in this case the possible environmental features/parts are defined roughly in the construction of the modules, and therefore here, too, the environment of the robot is predetermined, though it may vary more than with the old robots. But do the parts of the environment of the robot, planned by its constructor, really make up its "behavioral" environment? If we regard the robot, too, as a robot-environment system, how can we describe its environment? Here we will encounter surprising difficulties.
<4>
Malcolm states (<4>), with respect to an animal behaving intelligently, that "we can't observe its purposes and knowledge. All we can observe is a creature behaving in a world (I use 'world' here in the sense of local world or Umwelt)." In this proposition there is a very critical problem for the organism-environment theory, one which applies also to the consideration of robots in their environment. It is, of course, true that we cannot observe the purposes or knowledge of an animal/robot. But can we observe its world/environment?
<5>
The organism-environment theory states that the parts of the environment belonging to the organism-environment system are those defined by the structure of the system: "Physical description of a living system can never be a complete description, not only because physics has nothing to say about life as such, but also because the parts of the system are not selected according to the physical laws, but on the basis of the living structure" (TA22, <20>). Thus, when we describe the environment of an animal we do not really describe the parts belonging to the living system; we describe these parts as separated from that system and joined to the system of the observer. Therefore, we cannot observe the "Umwelt" of the animal; we may describe only our own Umwelt and relate this to the body of the animal. When observing the behavior of the animal we may then see how the animal relates to parts of the environment which, in fact, belong to our own system.
<6>
This consideration may be developed even further. When we give a description of our own environment, we do not really describe those environmental parts belonging to the organism-environment system. Instead, we give a description of certain parts of the world from the point of view of the human species as a whole, because the consciously described human environment is a shared environment (TA <89>: "all conscious things are common; therefore the whole human world as it may be described, is a social world. All conscious experiences are common experiences."). When I say something about my environment, it is no longer MY environment, i.e. that which belongs to this specific organism-environment system, but a shared "third person" view of my experience and behavior. The environment cannot be extracted from the organism-environment system and described as if "from inside". The human being cannot describe anything that is completely private, because the contents of his consciousness are common, shared with his conspecifics. This also means that we never have conscious knowledge of our environment insofar as it is regarded as belonging to the organism-environment system; we know it consciously only through the results of our actions (see TA <66-67>).
<7>
We do not know how we are connected to the world, because we act in the world as organism-environment systems. As conscious beings we may at any moment separate some parts of the world as objects of our activity, and these parts, as results of our perception, are shared with other people, for example. When we describe these parts we use words as indicators of results; i.e. the verbal description of a part of the world is only an indicator of the shared part, not identical with this part. Such a verbal description is good enough when we create organization for common results; it gives another human being the possibility of directing his activity to the part of the world in question, and of joining it to his organism-environment system. This real part of the world used in the cooperation, however, is always more than the verbal description reveals.
<8>
This is very close to what Malcolm writes in <63>: "although I can give you complex instructions
in terms of how to get from A to B and you can follow them, if I want to teach you a sensorimotor
skill which I have (like riding a bicycle) but which you haven't, I have to resort to the ancient time-
honoured practice of coaching you through the motions until your own sensorimotor learning skills
start picking up the tricks. Then we will both be able to ride bicycles, and neither of us will know
how we do it."
<9>
From the point of view of the organism-environment theory this is understandable: as our consciousness is related only to the result, we can consciously deal with, and verbally report, only sequences of results, not the processes as such. On the other hand, we can learn how to throw a ball, for example, if we first consciously follow the instructions of the teacher (put the hand like this, press the ball, move the hand, release the ball), and in this process of training our organization is changed in such a way that the intermediate results disappear, and we just throw, without consciously knowing any longer how it happens.
<10>
Cooperation of human beings is possible because we also share, structurally, such aspects of the environment as we cannot describe in language. Thus, we never really know, in the form of conscious knowledge, all that we know in this basic structural sense. We always know more in action than we can describe with words. In language we can express only the results of our action, not the action itself and its structure. Expressing the latter is not possible, because the description of our action is itself action; we cannot jump outside of our life process and separate it as a whole into an object of our description.
<11>
The constructed environment of the machine is a human environment as it is presented by language (or pictures, which amounts to the same thing). In the construction of machines we explicitly separate machine and environment, and use language in describing the parts of the machine and the parts of the environment which are related to it. It is the planner or constructor who considers what will happen with a certain set of elements in an environment that HE knows. However, it is quite possible that features of the world exist for the machine which are not (and even cannot be) parts of the environment of its planner. In fact, this could be the reason why all machines eventually break down. As it is impossible to take into account the whole universe when building a machine, there are always some unknown factors which do not fit into the constructed structure of the machine.
<12>
Let us illustrate these principles by considering what is "simple" and what "complex" in an environment. It is typical in brain research that the functioning of the brain is studied by using "simple" stimuli in order to construct, on the basis of the recordings, the responses to more "complex" ones. Thus, dots, lines, angles, etc. are used as stimuli when trying to solve the problem of how the face of the grandmother, for example, is processed. These stimuli are regarded as simple, whereas the geometric description of the face of the grandmother is very complex.
<13>
From the point of view of the organism, however, the situation may be totally reversed. The dots and lines are "simple" because they can be exactly described by geometry and reproduced in different laboratories. The brain, however, did not evolve in a world of dots or lines (which are only geometrical abstractions), but in a world inhabited by parents and grandmothers whose faces (especially their emotional features) had great survival value. Thus, from the point of view of the organism, the face may be the simplest "stimulus" to deal with. Furthermore, mental activity in general evolved for dealing with objects, not with lines or pictures, which are only secondary abstractions: "Mental activity was a new form of action of the highly developed system that was capable of using its history of experience in achieving results of action and forming systems directed towards the future. With its neural nets, receptors and motor organs and associated heterogeneous environment the organism-environment system could extract from the environment things which it could use in its action, or avoid if they were harmful" (TA, <34>).
<14>
We can extend this consideration to robots (or to any other artifacts), too. We build the artifact into the human world, and then we may simply follow how it uses the environment, which is not really its "Umwelt", but our environment. This means that there are always factors present in the "Umwelt" of the artifact which we cannot know at all, or which we regard as "simple" but which are, in fact, "complex" from the point of view of its functioning.
<15>
There is a very crucial difference between living systems and the "technology" created by humans as living systems. Technology is something that we describe by words, diagrams, construction plans, etc. The brain (or a living cell) is not a technological device, because it does not follow any man-made rules, but its own intrinsic living structure. If words (plans, etc.) relate only to the indicators of common results, then we can never build life according to verbal descriptions. If we follow a verbal description when making an artifact, then we do not create a living thing, but an indicator of a living thing, something that superficially imitates a living thing. We can construct an artificial cell which looks and acts very much like a cell, but which obeys human rules and not the rules of the living structure. Therefore, such a cell is and remains an automaton built for human purposes.
<16>
Every description of life is a metaphor created by humans, which touches only some aspect of life. In constructing artificial life we usually have the problem that we try to build this metaphor, resulting in something that is precisely "artificial". Life cannot be exhaustively described, and even if it could be, the description would not be identical with life. We cannot create life by following linguistic descriptions. Life can be created only by life, not by imitating life. Those who think they can build a living cell on the basis of its description are in the same position as those who think that a poem is the same thing as the experience its reading creates. A poem as a text does not contain any experience, just as the description of a cell does not contain life. Writing a poem is a process in which the world is changed so that somebody, by joining this changed part of the world (the text), can have an experience and become reorganized. For the poet, the reader's use of the poem is always a mystery. He does not know exactly what will happen, any more than the reader knows what happens when he understands the poem.
<17>
This does not mean, of course, that it would be impossible to imitate some features of life. However, this imitation is not life itself, but precisely imitation. The problem with imitation is that it does not reach the content, but only an outer shell. When you imitate eating you will not get satisfied; when you imitate a genius you will always remain an average man. Building functioning machines according to instructions is possible, because machines are constructed by humans from the beginning. Man himself, however, is a "construction" of life and nature.
<18>
Malcolm cites in <8> the common view of artificial intelligence (AI) research claiming that, in comparison to animals, "the specifically human way of deciding what to do next in the best possible way is to rise above emotion and use reason and intelligence". He further states that AI made the implicit presumption that it would not be necessary to implement emotions in computers to get them to solve problems. Is this true? Can humans solve problems without emotional involvement?
<19>
I think the separation of emotion from human problem solving is based on a very general misunderstanding which is probably related to our basic training in school. In school, problem solving is seen as a task set by others, and the solution of the problem is an answer accepted by the teacher. The teacher defines the problem and the pupil gives the correct answer. The only emotions involved may relate to whether the teacher will be satisfied with the answer. However, in real life problems are part of the life process, and an answer is never in itself a solution of the problem, because a problem can be solved only by action. An answer is only the last piece needed for the start of the action. If we use the definitions of the organism-environment theory, then this process is always emotional, because it is a process of reorganization and result achievement (TA <42>: "As the reorganization of the system is a continuous process, emotions are always present and there is no action without emotions."). The idea of emotionless and machine-like problem solving creates a specifically human problem: we think we can analyze the problem linguistically, and then solve it by following the rules expressed in language. However, as pointed out above, this would not lead to the solving of the problem, but only to imitation.
<20>
In <29> Malcolm states that "there is no doubt that our modern human mental capacities, intimately tied up with language, exploit the advantages of digital computation". I am sorry to say that I have a big doubt indeed. It is true that our mental capacity makes digital computation possible, but from this it does not follow that the mental capacity is based on digital computation. In general, the computer metaphor for the brain is badly flawed, and in my opinion we will never know MORE about biological systems by comparing them with man-made artifacts. Of course, the computer is similar in some respects to the brain, because we have built it to carry out tasks similar to those we can accomplish. The spade is also like the hand, but we do not learn much about how the hand functions if we are content with this metaphor. As to digital computation, there is no doubt that computers use it, because they are constructed to do so, but we cannot argue in the reverse direction, i.e. from computers to mental capacities. Digital computation is a human construction; stating that our mental capacities are based on our own construction is only a repetition of our basic metaphor.
<21>
I agree with Malcolm (<6>) that the comparison of computer software with mind is misleading. This is similar to saying that language IS mind. From my point of view, language is for changing our structure in cooperation with other people. Language may be used for solving problems, but then it should be regarded only as a tool; i.e. when somebody defines the problem, the first thing is to understand the description. In my terminology, understanding means reorganization of the organism-environment structure in such a way that one may produce results with respect to the object of understanding.
<22>
But do machines have consciousness? Machines can do many things that humans do, and in many cases even much more efficiently than humans ever could. Machines also use language, which is usually regarded as one criterion for consciousness.
<23>
There are many speculations about the possibility of mental activity or consciousness in machines, and to some extent Malcolm seems inclined to ascribe at least mental activity to a machine, although not consciousness, if I understand correctly. In order to be able to include robots under the umbrella of the organism-environment theory/behavioral robotics, Malcolm wants to widen the requisites for the appearance of mental activity by taking the appearance of "signals" as a starting point.
<24>
In <27> Malcolm cites my text somewhat inaccurately when he says that I identify the stage of mental activity with the development of neurons, which are "specialised for receiving, processing, and transmitting signals". However, I do not say anything about such specialization, because within the frame of my theory the neurons do not send any "signals". That would for me already be a strongly human interpretation of the behavior of neurons. Instead, I say in <31> that "… neuron is a highly specialized cell that differs from all other cells of the organism in its ability to influence directly the activity and metabolic conditions of the other cells".
<25>
The idea that neurons transmit signals is comparable to the idea that robots serve people. The latter is, of course, true; we construct robots for our purposes. However, the former is wrong, because we have not constructed neurons for our purposes; rather, neurons "construct" us as living formations. The idea of information processing or signal transmission is based on the interpretation that neurons are not living units, but serve the purposes of a homunculus. From the point of view of the neuron, there are no signals, only changes in metabolism. Neurons are not interested in stimuli or signal transmission from the environment (and even less in dots or lines, or the faces of our relatives)! The neuron must be active in order to receive the metabolites it needs. This it can achieve by disturbing other neurons, which eventually leads to contraction of muscles and intake of food by the organism (see Jarvilehto, 1998b). Thus the organism is a symphony of the diverse "needs" of active cells which must support each other; if this does not happen, the multicellular constellation breaks down.
<26>
This creates a very difficult problem for the imitation of nervous activity by transistors or other technical devices. The technical devices have been constructed according to the human interpretation of nervous activity; thus, they feature the signal transmission function, but they do not contain the original life process of the neuron, its basic dynamics. If we want to develop a robot which resembles organic action as conceived in the organism-environment theory, we should first develop a unit which takes care of itself. This unit should be such that, when joining with other units and environmental parts, it may obtain the energy necessary for its functioning.
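
Purely as my own illustration (not a design from the target article or from Malcolm's commentary), the following toy sketch shows what such a self-maintaining unit might look like in the simplest possible terms: each unit loses energy at every step and survives only by joining with neighbouring units and environmental parts that can supply energy; units whose needs are not met simply run down. All names and numerical values are hypothetical.

```python
# Toy sketch only: units that "take care of themselves" and must join with
# neighbours and the environment to keep functioning. All values hypothetical.
import random

class Unit:
    def __init__(self, name):
        self.name = name
        self.energy = 5.0

    def alive(self):
        return self.energy > 0

    def step(self, neighbours, environment_energy):
        # The unit is active on its own behalf: it spends energy every step...
        self.energy -= 1.0
        # ...and tries to recover it by joining with a living neighbour,
        # which also opens access to an environmental energy source.
        partners = [n for n in neighbours if n.alive()]
        if partners and random.random() < 0.7:
            donor = random.choice(partners)
            transfer = min(0.5, donor.energy)
            donor.energy -= transfer
            self.energy += transfer + environment_energy
        # A unit whose needs are not met simply runs down; nothing repairs it.

units = [Unit(f"u{i}") for i in range(5)]
for t in range(20):
    for u in units:
        if u.alive():
            u.step([n for n in units if n is not u], environment_energy=0.6)

print({u.name: round(u.energy, 1) for u in units})
```

Which units survive depends on the whole constellation, not on any single unit or any preset plan, which is the point developed in the next paragraph.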
<27>
From this it follows that we cannot preset the results such a constellation of units would produce. For real organisms this process has been going on for millions of years, during which elements have disappeared which do not fit into the formation in the long run, or which destroy the general living conditions of the organism. For humans, for example, this means that neural elements which would seriously endanger the possibilities of cooperation have been selected out, because the joining together of human beings is one necessary condition for human existence. However, it is questionable whether we would like to have such robots. If we build such a formation out of artificial elements which we do not thoroughly know (i.e. their "needs"), how can we be sure that the results which the formation will achieve are not dangerous to our existence? Why should such robots serve people?
<28>
But back to the problem of consciousness in the machine. As I have pointed out, consciousness is regarded in the organism-environment theory as an aspect of a system consisting of several organism-environment systems, which is directed towards common results that are useful for the whole cooperative system (TA <57>). The machine is something that we build to serve this process, and therefore it is just a part of the system. Thus, it is questionable from the very beginning to ask whether a machine can have a consciousness of its own (i.e. even in the case where no human beings exist).
<29>
But can't machines communicate, which seems to be a necessary criterion for consciousness also in the organism-environment theory? In <50> Malcolm writes: "Unfortunately it seems quite feasible that a robot could be developed, which could speak and understand a human language, and which had a concept of its own capabilities, preferences, and taboos". Can the machine be an "autonomous agent" using language for its own purposes?
<30>
My answer is no, because "communication" in this case is an illusion. We may, of course, have robots which send messages to each other and behave in relation to these messages, but the robots may use language only in connection with humans. "Communicating" robots are designed for human purposes, and they will "communicate" only insofar as this kind of action fulfills some human plans. If they start to do things of their own, then they have a malfunction. The use of language by robots (even by complicated "learning" robots) may be compared to how a typewriter uses language. Every key on the keyboard is able to produce a letter, but it depends on the human user whether such communication makes any sense.
<31>
If "understanding" the language is identified with following of the syntactic rules, then Malcolm
may be right. In this case, however, we have another basic mistake associated also with the Chinese
room argument. The basic mistake in this argument is precisely the idea that language is a set of
symbols which can be used without understanding if you only know the syntactic rules. Or that
speaking is the process of production of words in correct order. From my point of view, the words
as such do not carry any meaning, but they are only suggestions for cooperation. It is the production
of the cooperation, joining of organism-environment structures (a process, for which no words
exist), which is critical in the use of language. Communicative cooperation means that the
participants of communication are able to change their structure so as to join their activities in a
common result which is in some sense new for both. The robot can be build to simulate such
cooperation, but even in the best case its activity resembles that of a slave who must act according
to the rules of his master.
<32>
Furthermore, two "communicating" robots are, in fact, parts of the human society that has extended its communicative abilities by these technical devices. A robot is part of the human being; therefore it is even odd to try to imagine its autonomous existence. In this sense the newly coined term "autonomous agent" is at least a little questionable. Robots are not "interested" in communication, any more than spades are interested in digging holes; they do this only in human use. Every machine is an extension of human abilities. Therefore, we could, of course, say that a robot is "conscious", but only in connection with a human being, as a part of his consciousness. The question of consciousness in machines is of a similar form to wondering whether my legs are conscious because they carry me so well to the place I want to go.
<33>
The mistake in claiming that machines may have consciousness (or any other human property when separated from human beings) has a very simple origin: when the human being builds a technical device, he models his own abilities for his own purposes, to support his own actions. In the process of construction he abstracts some of his own characteristics and, with the help of technology, exaggerates them in the machine in order to achieve the desired results more efficiently. He constructs a spade in order to be more efficient in digging holes. A spade is like a hand in form, but more rigid and much more limited in its use. If we now start to wonder whether a spade is "really" a hand, then we have simply forgotten the history of its construction. It would also be strange to use a spade as a model of the hand, because this comparison would hardly teach us anything new about the hand.
<34>
In <53> Malcolm deals with the localization of consciousness and gives the brain a central, though not exclusive, role in locating consciousness. As an example he uses memory, which can be seen as dispersed outside the brain, but in which the brain has a central role. However, here we encounter the general problem of the role of elements in a system in relation to the result achieved. Malcolm (<35>) is quite right in claiming that the brain is important in many cognitive operations, but nobody really knows, at present, how important. Cognitive roles are usually ascribed to different brain areas on the basis of lesion or activation studies. If a lesion prevents a cognitive operation, the conclusion is ready: the operation is located at the place of the lesion! If an area is activated during the operation, then again we have found the locus of the operation! Such conclusions are warranted only if we assume from the start that cognitive operations are located in the brain, and only in the brain. However, if we posit that these operations are carried out by an organism-environment system, then it would be important to determine the specific roles of the parts of the brain within the whole system. If the result (e.g., remembering) is a function of the whole preceding organization, then any change in this organization is reflected in a change in the result, and in this sense all parts actively contribute to the result, not only some parts of the brain.
<35>
Certainly, one can say that some elements are more important than others, and certainly the development of the nervous system was an important prerequisite for the development of consciousness. However, I think it is a very strange use of words to say, for example, that the steering of a car is more strongly located in the steering wheel than in the wheels of the vehicle. Every organism-environment system is unique, and if some decisive parts of the system are permanently destroyed, then no results exactly similar to those achieved before can be achieved. These parts, however, are not necessarily located only in the brain; they may as well be located in the legs (e.g., amputation of a leg changes the whole world of the person), or in other people (e.g., losing a dear one through divorce or accident has dramatic effects on a person's life). Most modern brain research takes it for granted that the most important places can be found in the brain, and therefore no other alternatives are even tested.
<36>
In order to illustrate these problems, let us look at an example: learning to play a piece of music on the piano. When the pupil learns the piece we can put electrodes on his scalp and record changes in his brain activity. Thus: learning happens in the brain! But let us modify the experiment so that we connect the electrodes not to the scalp but to some place inside the piano. Now we will again record systematic changes during learning: in the movements of the hammers, in the succession of activated strings, in the vibration amplitudes, etc. Thus: learning happens in the piano! The fallacy here, of course, is that we locate the results of learning in separate components of the system, which is changed during learning and which only as a whole may produce the learned results. When you learn to write, it makes in principle no difference whether we say that your hand learned to write or your brain, because neither of them learned it. The separated brain can write as little as the separated hand.
<37>
Malcolm writes in <57> that "although my mind is not entirely 'located' in my brain, and has (to use Clark's term [Clark 1998]) 'leaked' into my environment, nevertheless my mind is more strongly located in my brain than anywhere else, in the sense that only in my brain resides the capacity to repair damage to my mind." As pointed out above, the problem is that we do not really know the role of the brain in the organism-environment system. Modern brain research tries to show this role by looking only at the brain, which will, of course, make the brain into the seat of the soul. However, we do not know how far we can substitute environmental aids for brain functions, and we do not really know how far the brain can, under optimal conditions, be reorganized so that lost capabilities may be recovered. In fact, in spite of the immense efforts of recent years, we are quite helpless in questions of rehabilitation.
<38>
Malcolm writes somewhat inaccurately in <55> that the capability to develop full wormhood is inherent in the entire worm. If the capability to develop full wormhood were inherent in the entire worm, what would be the smallest part from which a worm may still develop? Certainly one cell is not enough. This means that in the case of the worm there is a minimal organization which must be preserved for regeneration to occur. But the same is true for the crab, although it seems that this minimum is larger than in the case of the worm. Thus this is not a question of the localization of the capability to regenerate, but a question of how far the system can be destroyed while still being able to repair itself. It is intuitively clear that such a capability is more limited the more specialized cells there are in the system, which seems to be the main difference between the worm and the crab.
<39>
In the case of the cactus (<56>), it certainly does not become independent of its environment when it can resist prolonged drought; it only develops a way to store some necessary components in a certain place within its organism-environment system. The human being has developed this ability to an extreme and can apparently live without a specific environment (in space, for example), but only when he carries the necessary parts of the environment with him (e.g., an oxygen container).
<40>
This is just the danger I am warning of in the target article (<15>): "The cell is, in fact, in such a complex way bound to its environment, to its indefinite and changing parts, that we may no more see these connections, and the cell therefore seems to be independent, separated from the environment." If I were to continue Malcolm's idea logically, I could claim that the brain is no longer necessary once the mind has developed. The point is that the history of the organism-environment system is preserved in the whole system, and it is no longer possible to jump out of the system once it has developed new characteristics. When man acquires his social relations, he is no longer alone in his life, even if he is located on a desert island. Sociality is not dependent on distance as such; we cannot measure loneliness in centimeters! Or would Malcolm claim that I am right now without any social contacts, as I sit alone writing my response to him?
<41>
Malcolm is quite right in stating (<14>) that "if we then decide that knowledge and purposefulness are located entirely in the brain of the creature (or the computer of the robot), then we are forced to describe purposefulness and knowledge in terms of representational structures encoded within the brain (or computer)", and that this leads to the problem of how to tie up the internal and the external. However, his reference to Brentano in this connection is not quite accurate, because it seems that Brentano understood this problem well. Brentano (1924) writes: "The intention of the knower is directed at the thing in itself, and neither at the immanent object nor at the image of the thing; otherwise the tending could never be a transcending. With what does one who mentally relates himself to something occupy himself? With the images of things? No! With the things themselves, which he presents, denies, loves or hates." Brentano states here quite clearly that the intention does not relate to an inner object or a picture of the object, but to the things themselves. Thus, the "outer" object is within the intention, which is close to the idea that it is part of the organism-environment system.
<42>
It is interesting that Koffka (1935), too (and later Gibson, 1979, with his concept of affordance), realized that intention is not located only in the brain, but may be in the things as well (the book says: "read me!"). However, Freeman (1998), for example, although basically accepting the organism-environment idea, holds the conception that the division of organism and environment is needed because it is the brain from which intentional activity is initiated. I think this is not very consistent, for the following reasons. If an organism-environment system is defined as an active system in relation to the result of behavior, then activity must be ascribed equally to all of its elements. No single element has the privilege of being active alone, although the roles of the elements may, of course, differ. When kicking a ball, the supporting leg is an active element in the realization of the result, and its seemingly passive role is different from that of the moving leg (see Jarvilehto, 1998b). Similarly, if we regard an organism-environment system as an intentional system (i.e. goal-directed, organized for a result), then no single part, be it a neuron or the brain, may alone contain this intentionality. In fact, the idea that the brain is the locus of control of behavior is only a reverse way of saying that stimuli direct behavior.
<43>
Thus, consciousness cannot be located in the brain. A brain, whether it consists of protoplasm or is an artificial neurocomputing machine, cannot be conscious as such, any more than a robot can. The brain is only one organ of the body (and even anatomically difficult to define exactly; in fact, there are no means of separating the nervous system from the body). Locating consciousness in the brain or in the machine leads to questions which cannot be answered, because for consciousness to exist we need much more than the brain or the machine alone.
<44>
Another aspect in the consideration of consciousness in machines is the point of view that regards consciousness as some kind of epiphenomenon. As indicated by Malcolm, this is usually formulated with the help of the concept of the zombie. I think this is one of the most unfruitful lines of consideration in consciousness research. In fact, it is strange how little attention has been devoted in consciousness studies to the most obvious manifestations of consciousness, the results of complex human cooperation: buildings, roads, books, etc. Of course, it is often maintained that such results could also be produced by zombies who only mechanically follow the rules. However, in this context it is completely neglected that zombies can produce such results only AFTER we have achieved them first. In this respect zombies are similar to any machines. Furthermore, if somebody finds a simple solution to a difficult problem, there are always people who, after the solution has been explained to them, think they could have done it themselves. Similarly, a zombie may work according to instructions to produce something similar to what a conscious and skilled craftsman produces. However, when a craftsman works he does not follow instructions; he has an idea and simply lets his skill work out the result he consciously has in mind.
<45>
Zombies' behavior is indistinguishable from human behavior only if they are able to cooperate with other humans as we do, and can produce, together with other humans/zombies, results which are genuinely new and useful for humans. But if they can really do this, they are humans and not zombies at all. In fact, the concept of the zombie is possible only if we posit that behavior may be separated from its results, or the use of language from understanding.
<46>
If language is a tool for cooperation, then it is often not so important what words are used, but WHO is using these words, and HOW. The more experience somebody has, the more important his utterances usually are, even if somebody with less experience were to use exactly the same words. This is because the words used by an experienced person are the result of many alternatives, in contrast to the less experienced person, who may use the same words by chance, without any connection to a larger context. In politics, for example, the speaker is more important than the exact words, because the words are used to indicate the possibility of political cooperation. This is also the natural basis for the fact that politicians' speeches usually do not contain anything of substance; they are rather rituals for cooperation and for convincing people of the usefulness of the speaker.
<49>
Nor can the meaning of words be disconnected from the listener's ability to comprehend, or from the listener's relation to the speaker. The words as such do not contain any "truth". This fact is often difficult for students to understand. When they tell the "truths", nothing happens, but when the same words are used by the professor, then everybody takes the words seriously!
-------------------------------
REFERENCES
Brentano, F. (1924) Psychologie vom empirischen Standpunkt. Hamburg: Meiner.
Freeman, W. (1998) The Necessity for Partitioning Organism From Environment. Psycoloquy 9(81).
Gibson, J.J. (1979) The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Jarvilehto, T. (1998a) Role of efferent influences on receptors in knowledge formation. Psycoloquy 9(41).
Jarvilehto, T. (1998b) The theory of the organism-environment system: II. Significance of nervous activity in the organism-environment system. Integrative Physiological and Behavioral Science, 33, 335-343.
Koffka, K. (1935) Principles of Gestalt Psychology. London: Bradford.
-----------------------------------
Timo Jarvilehto
http://wwwedu.oulu.fi/homepage/tjarvile/indexe.htm
Professor of psychology
University of Oulu
PB 2000, 90014 Oulun yliopisto, Finland
e-mail <tjarvile@ktk.oulu.fi>
