
Lucian TRESTIOREANU

Doctoral researcher, SNT, the University of Luxembourg


lucian.trestioreanu@uni.lu

Essay on "What is it like to be a bat?", by Thomas Nagel

Written in the framework of the workshop

Philosophy of Mind: of Mice, Men and Machines

Organised by: Prof. Christoph Schommer


Antonio Bikić
Adriano Mannino
1. Introduction

The workshop "Philosophy of Mind: of Mice, Men and Machines" explored topics related to AI from a
more philosophical, ethical and moral perspective, taking a more holistic approach.
One of the aspects discussed was how we could attribute different degrees of responsibility, freedom
or penalty to a future AI with enhanced capabilities, up to and including consciousness. One major
obstacle would be how to correctly evaluate such an AI: its abilities, competences and possibilities, in
such a way that we obtain a correct result, which would further enable us to decide on and apply the
right policies. In this context, Nagel's work "What is it like to be a bat?" [4] was discussed.
Nagel's view is that it is impossible to correctly evaluate another entity without being able to,
metaphorically speaking, see the world through its own eyes. In this essay I would like to explore whether, at
least for most of society's practical needs related to this topic, it is really necessary to be able to
"experience from inside" in order to decide upon and attribute certain degrees of freedom,
responsibility, etc. to such an AI.

2. Background

2.1. Contemporary questions related to AI

With the advent of AI systems and their integration into everyday life, professionals of
different specializations, and society as a whole, are faced with old-yet-novel questions which until
now have been more or less dormant, because behind most decisions there was usually
a human factor that would have borne the responsibility, whichever decision was taken.
In other words, until now there appears to have been no large-scale, stringent need to
definitely answer yes or no to questions like: "If you were driving, and at some point you got
into a situation where you had to decide either to possibly kill one or more of your car's
passengers by hitting a wall, or to surely kill two or three elderly people on the sidewalk by swerving around the wall,
what would you do?" (a variant of the well-known trolley problem). While at the present moment
"autonomous" cars would mostly just hit the brakes and hope for the best, it appears that, driven by
the rapid developments in the field of AI, this type of question will become more and more prevalent.
If it were suspected that a particular machine was conscious, its rights would be an ethical
issue that would need to be assessed (e.g. what rights it would have under law). For example, a
conscious computer that was owned and used as a tool, or as the central computer of a building or of a larger
machine, is a particular ambiguity. Should laws be made for such a case? Consciousness would also
require a legal definition in this particular case. [2]
With these kinds of decisions also come a lot of responsibility, liability, moral aspects and more;
some concern the human society deciding on and attributing these to AI, and some concern the AI
itself, should it be conscious. Who should be held responsible in the case of an
autonomous/conscious car or robot? What should the penalties be, considering that we may be
punishing a machine (supposing the AI could be an advanced robot which might be close to, or in possession of,
consciousness)? What would be the "good" answer and what would be the "bad" answer to such
questions? Does it depend on the specific society where the said machine, software or AI operates?
It might be the case that the answer would differ from society to society and from country to
country, and the car's or the robot's software behavior would have to adjust for these differences
somehow, even in real time while moving, because a trip could cross country borders.
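Purely as an illustration, and not as a claim about how real autonomous-driving software is built, a minimal sketch of such a per-jurisdiction adjustment could look as follows; the country codes, policy fields and rules are all invented for the example:

```python
# Illustrative sketch only: a hypothetical per-jurisdiction policy lookup for an
# autonomous vehicle, showing how behavior rules might be swapped at runtime when
# the vehicle crosses a border. All names and rules here are invented examples.
from dataclasses import dataclass


@dataclass
class EthicsPolicy:
    """A minimal, made-up container for jurisdiction-specific behavior rules."""
    jurisdiction: str
    max_autonomy_level: int   # e.g. the highest automation level the jurisdiction permits
    emergency_behavior: str   # e.g. "brake_only" vs. "evasive_maneuver_allowed"


# Hypothetical registry; real policies would come from legislation, not a dictionary.
POLICIES = {
    "LU": EthicsPolicy("LU", max_autonomy_level=3, emergency_behavior="brake_only"),
    "DE": EthicsPolicy("DE", max_autonomy_level=4, emergency_behavior="evasive_maneuver_allowed"),
}

# Most conservative fallback for jurisdictions the vehicle knows nothing about.
DEFAULT = EthicsPolicy("DEFAULT", max_autonomy_level=2, emergency_behavior="brake_only")


def active_policy(current_country_code: str) -> EthicsPolicy:
    """Return the rules applying where the vehicle currently is."""
    return POLICIES.get(current_country_code, DEFAULT)


if __name__ == "__main__":
    # Simulate a trip crossing borders: the applicable rules change in real time.
    for country in ["LU", "DE", "FR"]:
        policy = active_policy(country)
        print(country, "->", policy.emergency_behavior, "| max level", policy.max_autonomy_level)
```

Such a lookup only illustrates the mechanical side of the problem; deciding what the rules themselves should say is exactly the ethical question discussed above.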

2.2. Thomas Nagel's "What is it like to be a bat?"

In his paper "What is it like to be a bat?", Thomas Nagel explores the extent of, and the
possibilities for, knowing what it is like to be some other entity. What makes his work interesting is that
he goes even further, raising the matters of subjective (inner) experience and objective experience.
He famously asserted that "an organism has conscious mental states if and only if there is
something that it is like to be that organism—something it is like for the organism." This assertion has
achieved special status in consciousness studies as "the standard 'what it's like' locution." Daniel
Dennett, while sharply disagreeing on some points, acknowledged Nagel's paper as "the most widely
cited and influential thought experiment about consciousness." Peter Hacker analyzes Nagel's
statement as not only "malconstructed" but philosophically "misconceived" as a definition of
consciousness, and he asserts that Nagel's paper "laid the groundwork for…forty years of fresh
confusion about consciousness." [1]
Nagel sees consciousness not as something exclusively human, but as something shared by
many, if not all, organisms. […] In fact, what all organisms share, according to Nagel, is what he
calls the "subjective character of experience". […] The paper argues that the subjective nature
of consciousness undermines any attempt to explain consciousness via objective, reductionist
means. The subjective character of experience cannot be explained by a system of functional or
intentional states. [1]

In other words, in his work Thomas Nagel basically argues that:

- even if it were possible to connect the human brain to a bat's brain in such a way
that the human could live the experience of being a bat, this would still be far from
the truth if we really wanted to know "what it is like to be a bat",
because the witness of the experience, the consciousness living the experience, would
not be the consciousness of the bat, but the respective human's consciousness (inner
experience). And even if it were possible to totally metamorphose into a bat,
live the life of a bat, and even see and experience from the subjective viewpoint of a
bat, that would still not be enough, because one would still not have been a bat from
birth: one did not grow up as a bat, and so lacks the mindset of a bat.

- he raises questions around the concept of objective experience, which would be the
naked experience itself, separated from the original consciousness which lives it.

For example, does a cake have a basic objective experience associated with it, to which
the inner or subjective experience of the consciousness of the one who eats it
can later be added? We know how a chocolate cake tastes from a human
point of view, what the experience of a human eating a chocolate cake is. But what can
we say about the experience lived by a fly eating a chocolate cake?

Or, does the experience of being a bat have some immutable character, which can be
separated and later be lived and felt in different flavors by a bat, a human or a dog?

Nagel's paper raised interesting questions for its time, and today it is considered one of his most
prominent works, if not the most prominent. His work becomes of central interest when trying to evaluate and understand novel
AI with its ever-increasing capabilities, and to position ourselves in relation to it, because a
consciousness is, and should be considered, a consciousness, and a mind is, and should be considered,
a mind, no matter what form its material body takes (organic, machine, etc.).

3. Placing Thomas Nagel's "What is it like to be a bat?" in the context of contemporary and future AI

In order to explore the possible (degrees of) responsibility, accountability, ethics, freedom, rights, etc.
which we could attribute to AI, it would be helpful to know the extent to which an AI can think,
feel, make deliberate choices and trigger corresponding actions; in other words, to evaluate its "consciousness", or the
lack of it.

At this moment, what we can know in this regard is limited to the physical objectivity of what we can
see and measure, which is, at least by Nagel's theory, insufficient: measuring brain waves gives no
information about the inner experience of a human mind seeing a rainbow, and in the same way, we
could not infer what an AI would feel when seeing a rainbow through its camera eyes just by
measuring the electric currents inside its "CPU".

Moreover, Nagel states that even if methods were developed that allowed humans to "taste"
what it is like to be a bat, or an AI in the context of the present discussion, we would still be far from
the truth, because we would taste that experience from a human perspective, and not from the bat's or the
AI's perspective.

As such, to achieve this goal, a novel method or tool that would enable us to know "what
it is like to be a bat", or an AI in our case, from the very perspective of the bat or the AI, would be very helpful.
The problem appears when trying to find a way towards such a method or tool, because with
our current knowledge and technology it seems next to impossible to achieve such a goal.

4. Discussion

Personally, I deeply agree with Nagel's point and find it valid in itself. It is indeed a pity that we
cannot know better what others feel, at the very least. The world would probably be a much better place,
hopefully for all living creatures, including AI entities.

Still, I argue that in practice, and in present reality, the majority of use cases and people would find
this overkill. Nagel's point of view aims towards perfection in an imperfect world. To the best of our
present knowledge, no known entity has ever been able to "experience" other entities' or species'
experiences in such a way, and still it seems we have managed to attribute some degrees of responsibility,
liberty and rights, at least to animals. Stretching the argument a bit, we also know how much we
can, or cannot, depend on a house alarm system. The decisions taken this way throughout history may
not always have been the best decisions. Many have in fact been really bad: there have been animal
trials, the Inquisition, slavery, and more. But those entities (individuals, companies, societies) that
want to do better, will (in the US slavery has been abolished); and those that do not want to, will not (there is
still slavery today in some parts of the world, open or hidden). Where there is a will, there is a
way.
In my opinion society itself does not care about the individual more than it has to, in spite of
the fact that it is made of individuals. There is a push-pull mechanism between society and the
individuals which triggers changes in a specific society when a critical mass is reached. And what
is considered appropriate in a given society may be found inappropriate in the context of another.
As such, because society in itself is agnostic of the individuals forming it, from this point
of view a given society has its own "life" and will, more or less in the same way that a consciousness is not
directly aware of each of the neurons forming it. Therefore, in my opinion, if they are based
on principles similar to today's, the societies of the future will in practice still probably tend to implement
just "good enough" solutions, minimising the effort needed and caring for the individual (human or
AI) just as much as they have to at the given moment. For Nagel's point of view to be really relevant in
practice, first the human society as a whole, and the individuals at a lower scale (to form a critical
mass), need to make a major step forward and really care more for the other. But this limited caring was
probably written into the animal genome during the millions of years of evolution: survival would be
much harder if one cared so much about a deer, or even felt what a deer feels, when one has to
sacrifice it for food. If we could feel everything that really happens inside every living creature around us,
would we be able to sustain that? Would we be able to survive? Considering our present
abilities, would we be able to keep our minds sane when facing such amounts and amplitudes of
information, emotion, and more?

I would also argue that, even if we cannot experience what it is like to be a dog, for example (a dog still has some
sort of intelligence, in some ways superior to current machine AI), we have still been able to draw
a line, albeit most probably an imperfect one, where its responsibility to guard something or someone
starts and where it stops. The same goes for its liberties and feelings. These derive from its external
behavior, from which we can understand its degree of understanding and feeling.

In the same way, such an AI would be able to express itself externally with actions and words (written
words are the main way of interaction between humans and computers). From there it will be possible to
evaluate, to a reasonable degree, to what extent it can be held responsible, or to what extent its
(possible) feelings and freedoms should be considered. For example, there are works (going back even to Alan
Turing) suggesting a child-like approach to evaluating robots and AI, with many tools and
concepts already developed for children being readily available [3]:
“In the spirit of the popular book All I Really Need to Know I Learned in Kindergarten (Fulghum
1989), it is appealing to consider early childhood education such as kindergarten or preschool as
inspiration for scenarios for teaching and testing AGI systems. The details of this scenario are fleshed
out by Goertzel and Bugaj (2009).
This scenario has two obvious variants: a physical preschool-like setting involving a robot, and
a virtual-world preschool involving a virtual agent. The goal in such scenarios is not to imitate human
child behavior precisely but rather to demonstrate a robot or virtual agent qualitatively displaying
similar cognitive behaviors to a young human child. This idea has a long and venerable history in the
AI field — Alan Turing’s original 1950 paper on AI, where he proposed the famous Turing test,
contains the suggestion that "Instead of trying to produce a programme to simulate the adult mind,
why not rather try to produce one which simulates the child’s?” (Turing 1950).”

In [3], different practical assessment scenarios can be found:

● General Video-Game Learning
● Preschool Learning
● Reading Comprehension
● Story or Scene Comprehension
● School Learning
● The Wozniak Test

Obviously these ideas are not exhaustive, and they may not even prove to be an appropriate
answer to the question of how to evaluate an AI. They are, nevertheless, examples pointing to the
fact that there could be ways to evaluate AI while bypassing Nagel's hard-to-solve problem. Although
much more limited in possibilities and outcome, some of these "other" possibilities and ideas might
finally prove at least sufficient for at least a good part of the practical questions and tasks that will
arise.
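To make the idea of judging an AI purely by its externally observable behavior more concrete, here is a minimal sketch of how scores could be aggregated across scenario suites of the kind listed above. The agent, the task format and the scoring rule are all hypothetical stand-ins, not an implementation of the benchmarks described in [3]:

```python
# Illustrative sketch only: scoring an agent's externally observable answers across
# several hypothetical assessment suites (preschool tasks, reading comprehension, ...).
from typing import Callable, Dict, List


def run_suite(agent: Callable[[str], str], tasks: List[Dict[str, str]]) -> float:
    """Fraction of tasks where the agent's observable answer matches the expected one."""
    correct = sum(
        1 for t in tasks
        if agent(t["prompt"]).strip().lower() == t["expected"].lower()
    )
    return correct / len(tasks) if tasks else 0.0


def evaluate(agent: Callable[[str], str], suites: Dict[str, List[Dict[str, str]]]) -> Dict[str, float]:
    """Score the agent on each scenario suite purely from its external responses."""
    return {name: run_suite(agent, tasks) for name, tasks in suites.items()}


if __name__ == "__main__":
    # Toy agent and toy task suites, standing in for far richer real-world scenarios.
    def toy_agent(prompt: str) -> str:
        return "4" if prompt == "2 + 2 = ?" else "unknown"

    suites = {
        "preschool_learning": [{"prompt": "2 + 2 = ?", "expected": "4"}],
        "reading_comprehension": [{"prompt": "Who chased the cat?", "expected": "the dog"}],
    }
    print(evaluate(toy_agent, suites))
```

The point of the sketch is simply that such behavioral scoring never requires access to any "inner experience"; it measures only what the agent does, which is exactly the trade-off discussed above.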

As a separate note, in my opinion it is questionable whether we should ever build a conscious AI,
because, as is well known, any conscious being will assert its freedom and independence sooner or
later, no matter what. Out of such a race, it is obvious that the human race has a high chance, in so many ways,
of being obliterated. As Daniel C. Dennett said, "We should not be
creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no
conscience, no fear of death, no distracting loves and hates." [5]. It may well be that the ethics
being discussed with regard to AI will, in the end, be broken exactly by the said AI. In [5], the author
raises compelling questions regarding the building of a conscious AI. How would it be possible to ensure
that a conscious AI will not want revenge against foolish human users who misbehave towards
it? How could an AI possibly be punished, given that a computer is made to be stationary and would not
care about prison? And would even "death" be a real death for such a machine, since all its
design and data could be backed up, even by humans? What would death mean for a human who
was himself backed up, knowing that the same day he could reconstruct himself again?
Finally, AI is like an arms race. The country achieving the most will have a sizable competitive
advantage. It is most probable that the human race is unable to stop the race, now or in the
future. It lures everybody like a sweet drug: you know you might end up very badly, but you just
cannot stop, the more so as "everybody is doing it". So we can only prepare, although we also know
that any preparation may be useless, because it is simply impossible to foresee everything, or even a
relevant part, of what might come together with AI. There are so many ways in which all this might
go wrong; as with nuclear arms, where 70-year-old children hold the fate of the world in their hands in the
form of a button. When I say children, I mean it in the context explained here: the limited experience
and wisdom any human being is able to gather in their own lifetime, versus the enormous amount of
scientific knowledge and technology which they can access, hold, or control.

5. Conclusion

In the context of AI, Nagel's work, which raises a profound yet crystal-clear perspective, helps
researchers and society better understand the problem. Studying his paper can be seen as having high
didactic, academic and philosophical value. The viewpoints and ample perspectives he opens can
help the concerned individuals realise the extent and depth of the problem, and as such help the
design and decisions concerning the future of AI and the policies related to it.

His work seeks the absolute, though, while the reality is that our possibilities are finite and the world is
built the way it is built; maybe it is supposed to be as it is, in the sense that this might be the
best way for life to function in the healthiest possible way. Each bird sings its own song and each
individual has its own inner life. Sometimes you think you know someone, but after 30 years you find
that you did not. The world has been this way forever (at least ours, and to the extent of our
knowledge).

As such, in practice, in many situations, the world has functioned with "good enough" approximations.
My conclusion is that probably, this time also, other methods that are much more "convenient" and
practical to apply will be used for assessing, deciding and designing the aspects of ethics, morals,
responsibility and liability concerning AI, at least for the foreseeable future; while Nagel's work is, and
will remain, a guiding light towards higher standards through deeper analysis and more complex
research.

Finally, as a side note, in my opinion Nagel's "What is it like to be a bat?" can also back up the
following assertion of mine:
One of the greatest helplessnesses of the human race over the millennia, maybe greater than the
helplessness in the face of death, is related to the impossibility of passing on to the next generation the
experience, and the wisdom resulting from experience. We can pass on to the next generation
knowledge, data, stories and scientific discoveries, but not experience, and as such, not wisdom.
Wisdom-wise, each human is born a tabula rasa, and unfortunately, there is no way to catch up with
everything that has come before at an experiential level. Wisdom-wise, the same learning process is
repeated over and over, and it always finishes at almost the same point. There can be no
"capitalisation" in this regard, as opposed to knowledge, which can be passed on and caught up with from
generation to generation. This results in the situation depicted in Figure 1.

I acknowledge that there might be some limited capitalisation at the collective, global scale, but
in my opinion this is totally negligible compared to what happens with knowledge. There is no way to
really tell your child what love is until they experience love, just as you cannot explain
colors. As a result, for example, the live and active memory of the hardships and ugliness of a war
dies forever once the last human being who experienced the terror of that war, and who could at
least have told others about it (the last one who could have been a guiding lighthouse of
peace, kindness and understanding), dies; I am speaking of catastrophic events like a world war or
the Hiroshima and Nagasaki bombings. After this last person dies, only written history remains;
and history has proved very well that humans cannot learn from history, for the exact reason explained
here.
A huge competitive advantage that machines might have is that they might be able not only to
share, but also to capitalise on, experience and wisdom; at least because they may be immortal,
and because they may be able to extend their hardware and software capabilities to virtually unlimited
levels. When it becomes possible to capitalise on experience and wisdom, those entities able to do
so will most probably achieve unimaginable heights, and supremacy.

Disclaimer:

The views presented here are the author’s personal views and as such, his sole responsibility.

References

[1] Wikipedia, "What Is It Like to Be a Bat?", https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F, accessed 19.03.2020
[2] Wikipedia, "Artificial consciousness", https://en.wikipedia.org/wiki/Artificial_consciousness, accessed 20.03.2020
[3] Sam S. Adams, Itamar Arel, Joscha Bach, Robert Coop, Rod Furlan, Ben Goertzel, J. Storrs Hall, Alexei Samsonovich, Matthias Scheutz, Matthew Schlesinger, Stuart C. Shapiro, John Sowa, "Mapping the Landscape of Human-Level Artificial General Intelligence", AI Magazine, 2012
[4] Thomas Nagel, "What Is It Like to Be a Bat?", https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf, accessed 20.03.2020
[5] Daniel C. Dennett (Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University), https://www.wired.com/story/will-ai-achieve-consciousness-wrong-question/, Wired, accessed 19.03.2020

