
Perspective

https://doi.org/10.1038/s42256-019-0103-7

Homeostasis and soft robotics in the design of feeling machines

Kingson Man and Antonio Damasio

Brain and Creativity Institute, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, CA, USA. e-mail: kman@usc.edu; damasio@usc.edu

Attempts to create machines that behave intelligently often conceptualize intelligence as the ability to achieve goals, leaving
unanswered a crucial question: whose goals? In a dynamic and unpredictable world, an intelligent agent should hold its own
meta-goal of self-preservation, like living organisms whose survival relies on homeostasis: the regulation of body states aimed
at maintaining conditions compatible with life. In organisms capable of mental states, feelings are a mental expression of the
state of life in the body and play a critical role in regulating behaviour. Our goal here is to inquire about conditions that would
potentially allow machines to care about what they do or think. Under certain conditions, machines capable of implementing a
process resembling homeostasis might also acquire a source of motivation and a new means to evaluate behaviour, akin to that
of feelings in living organisms. Drawing on recent developments in soft robotics and multisensory abstraction, we propose a
new class of machines inspired by the principles of homeostasis. The resulting machines would (1) exhibit equivalents to feeling; (2) improve their functionality across a range of environments; and (3) constitute a platform for investigating consciousness, intelligence and the feeling process itself.

We propose the design and construction of a new class of machines organized according to the principles of life regulation, or homeostasis. These machines have physical constructions—bodies—that must be maintained within a narrow range of viability states and thus share some essential traits with all living systems. The fundamental innovation of these machines is the introduction of risk-to-self. Rather than up-armouring or adding raw processing power to achieve resilience, we begin the design of these robots by, paradoxically, introducing vulnerability.

Living organisms capable of mentation are fragile vessels of pain, pleasure and points in between. It is by virtue of that fragility that they gain access to the realm of feeling. The main motivation for this project is a set of theoretical contributions to the understanding of biological systems endowed with feeling. Damasio1 has provided a rationale for the emergence of feelings from the physiology of life regulation. Feelings are intrinsically about something: making it possible for an organism to gravitate towards states of at least good and preferably optimal life regulation, thus maintaining life and extending it into the future. We must add that, in our conceptualization, feelings are of necessity conscious, and play a critical role in the machinery of consciousness.

For the homeostatic machines we envision, behaviours can carry real consequences. The world affords risks and opportunities, not in relation to an arbitrary reward or loss function, but in relation to the continued existence of the machine itself and, more to the point, to the quality of feeling that is the harbinger of the good or bad outcome relative to survival. Rewards are not rewarding and losses do not hurt unless they are rooted in life and death. True agency arises when the machine can take a side in this dichotomy, when it acts with a preference for (or, seen from a different angle, makes a reliable prediction of2) existence over dissolution. A robot engineered to participate in its own homeostasis would become its own locus of concern. This elementary concern would infuse meaning into its particular information processing3. A robot operating on intrinsically meaningful representations might seek especially intelligent solutions to the tasks set before it—that is, augment the reach of its cognitive skills.

We bring a biological perspective to the effort to produce machines with an artificial equivalent of feeling. For this effort, we will rely on recent developments from two fields: materials science and computer science. Turning first to new materials, we note that the past decade has witnessed the birth of a sub-discipline, soft robotics. This was enabled by new discoveries in the design and construction of soft ‘tissues’ embedded with electronics, sensors and actuators. These artificial tissues are flexible, stretchable, compressible and bounce back resiliently—in short, they are naturally compliant with their environments. Combined with conventional parameters such as temperature and energy level, soft materials potentially provide a rich source of information on body and environment.

The second development concerns statistical machine learning algorithms for the creation of abstract representations. New computational techniques may allow us to bring maps of the inner and outer worlds into register. There has been enormous attention paid to the capabilities of deep learning, but here we focus on one particular application of the technology: its ability to bridge across sensory modalities, including not only exteroception but also the modalities concerned with internal organism states—interoception and proprioception. This advance provides a crucial piece of the puzzle of how to intertwine a system’s internal homeostatic states with its external perceptions and behaviour.

Self-interest as fount of creativity
Today’s robots lack feelings. They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space. They also lack selfhood and ‘aboutness’. All these shortcomings are related. It is true that present-day intelligent machines perform extremely well in narrow domains, but they fare poorly in unconstrained interactions with the real world. Our approach diverges from traditional conceptions of intelligence that emphasize outward-directed perception and abstract problem solving.


We regard high-level cognition as an outgrowth of resources that originated to solve the ancient biological problem of homeostasis. Homeostasis manifests as self-interest and inspires creative behaviour in complex environments, natural and social. Current machines exhibit some intelligence but no sense-making, defined as an agent’s “meaningful relation to the environment”4. We propose that meaning begins to emerge when information processing carries homeostatic consequences.

In Shannon’s5 original formalization of information, as reduction in uncertainty of the contents of a message, the problem of the message’s meaning was neatly set aside. Recently, Kolchinsky and Wolpert3 have proposed a formal definition of semantic, or meaningful, information as that subset of Shannon information that is related to a system’s future viability states. They take a causal-counterfactual approach to identify semantic information by calculating how the system’s future viability would have been affected, had that information been different. As the authors note, the success or failure of their definition hinges on the selected measure of viability—which in their case is negative entropy, chosen for its thermodynamic, if not necessarily biologic, interpretability.
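
The causal-counterfactual logic can be illustrated with a toy computation of our own devising, far simpler than Kolchinsky and Wolpert's formalism (which is defined over probability distributions and uses negative entropy as the viability measure): scramble one sensory channel and compare the agent's expected future viability with and without the intervention.

```python
import random

# Toy world: an agent on positions 0..4 must reach a charger before its battery
# runs out. Viability here is simply the battery level at the end of the episode
# (a stand-in for the negative-entropy measure used in the cited work).

def run_episode(scramble_sensor, steps=6, seed=None):
    rng = random.Random(seed)
    charger = rng.randint(0, 4)
    pos, battery = 2, 5
    for _ in range(steps):
        reading = charger                          # sensory message about the world
        if scramble_sensor:                        # counterfactual intervention:
            reading = rng.randint(0, 4)            # destroy the message's correlation
        pos += (reading > pos) - (reading < pos)   # move towards the reported charger
        battery -= 1
        if pos == charger:
            battery = 5                            # recharging restores viability
    return max(battery, 0)

def mean_viability(scramble_sensor, n=5000):
    return sum(run_episode(scramble_sensor, seed=i) for i in range(n)) / n

intact = mean_viability(scramble_sensor=False)
scrambled = mean_viability(scramble_sensor=True)
print(f"viability with intact sensor:    {intact:.2f}")
print(f"viability with scrambled sensor: {scrambled:.2f}")
print(f"'semantic' value of the sensor:  {intact - scrambled:.2f}")
```

The gap between the two outcomes is the portion of the sensor's information that matters to the agent's continued viability; information that could be scrambled without consequence would, on this view, carry no meaning for the agent.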

Unlike other physical systems, living bodies are subject to perennial risk and decay, resulting from their own regular operations of life. But nothing equivalent holds for a disembodied algorithm or current robots whose physical existence is a given and, for practical purposes, guaranteed. In our view, sense data become meaningful when the data can be connected to the maintenance and integrity of the sensing agent—that is, to the organism’s package of regulatory operations that contributes to homeostasis. Without a biological framework, sensory processing that is not attached to a vulnerable body ‘makes no sense’.

Moving beyond embodiment. Our approach to building a machine with something akin to feeling takes place in a historical context of autonomous embodied systems (reviewed in refs. 6,7). Norbert Wiener’s cybernetics placed great emphasis on feedback-based control to produce and maintain states within a desired range. W. Ross Ashby’s felicitously named homeostat demonstrated the emergence of self-restoring stability. Ashby’s device coupled electrical and magnetic sensors and effectors in such a way that, when disturbed, it executed a random parameter search until equilibrium was restored (Fig. 1; see refs. 8,9). Behaviour-based robots, perhaps originating with Grey Walter’s tortoises10,11 and brought to the fore by Rodney Brooks’s subsumption architecture12, relied on the embodiment of the agents—the fact that the AI had a physical body in continuous interaction with the environment—as a crucial source of their ability to behave intelligently. Lipson, Bongard and colleagues have since extended this line of research into the evolution of robots that model their own morphology in order to execute behavioural goals13. Other work has produced agents that can regulate their own susceptibility to environmental cues based on abstract internal variables14. The Cyber Rodent project15 has explored the evolution of neural network controllers to support ‘mating’ and ‘foraging’ behaviour of robots that seek out conspecifics and battery packs in the environment. In simulation experiments making explicit reference to homeostasis, phototactic robots used ‘neural plasticity’ to restore adaptive behaviour following visual field inversion16.

Some preliminary steps have been taken to develop ‘emotional circuits’ to influence robotic goal selection17. Others have constructed robots capable of emotional expressions (typically through facial movements) to facilitate human–robot interaction18, but these motivation schedules and emotional performances have not been rooted in the machine’s own welfare, let alone well-being. Homeostatic-like features, if at all present, were implemented from the outside in: agents were instructed to maximize, or keep within a set range, certain arbitrary values. Unappreciated by the robot itself was the fact that if these values veered to an extreme then its own existence would be jeopardized. The ‘emotions’ and ‘values’ underlying behaviours were not relevant to the continuance of the system itself. Ultimately, these systems lacked a viability constraint. All robots of this class would be described by the philosopher Hans Jonas as biologically indifferent19 and, correspondingly, affectless. Despite behaving with seeming purpose, what these machines did—and this is of the essence—did not matter to the systems themselves.

In brief, the presence of a body serving as an aid or scaffold to problem-solving does not suffice to generate meaning. Nor does calculating an abstract internal parameter and labelling it ‘emotion’ elevate the parameter to this suggestive title. Di Paolo20 has criticized the programme of embodied robotics as still missing an organism-level logic: “emotions don’t come in boxes.” That is why we advocate a transition from ‘embodied artificial intelligence’ to ‘homeostatically motivated artificial intelligence’. Intelligence has been defined as21 “an agent’s ability to achieve goals in a wide range of environments.” But this definition prompts a follow-up question: whose goals? Does an agent that myopically follows orders to the extent that it endangers itself and compromises its ability to carry out future orders deserve to be called intelligent?

Living systems, on the other hand, have the property of selfhood. They continuously construct and maintain themselves against the natural tendency toward dissolution and decay. “This world is at once inviting and threatening”, as Jonas puts it19. “Appetition is the form which the basic self-concern of all life assumes”. Selves, as a condition of existence, must continuously enforce and mend the boundary between self and environment. In the closely related concept of autopoiesis22, systems continuously construct themselves and define their own relations to the environment. Damasio has traced a gradual progression of self-processes, from protoself to core self to autobiographical self, with each advancing stage explained by specific brain–body architectures and the processes they execute. Running throughout the progression of self-processes is the theme of homeostatic life regulation (see, for example, ref. 1).

Ultimately, we aim to produce machines that make decisions and control behaviours under the guidance of feeling equivalents. We envision these machines achieving a level of adaptiveness and resilience beyond today’s ‘autonomous’ robots. Rather than having to hard-code a robot for every eventuality or equip it with a limited set of behavioural policies, a robot concerned with its own survival might creatively solve the challenges that it encounters. These robots would interact with and learn about the environment from an internally generated perspective of self-preservation. Basic goals and values would be organically discovered, rather than being extrinsically designed. These survival-related behaviours could then be yoked to useful human purposes.

Enabling technologies
In order to realize this proposal for a new class of homeostatic machines endowed with equivalents to feelings, we will integrate recent developments from two fields: soft robotics and multisensory abstraction.

Soft robotics. The component structures of living organisms are themselves living and carry their own homeostatic imperatives. Living organisms are composed of living organs and tissues that are in turn composed of living cells. Each level participates in its own self-maintenance, sensing and signalling the state of its life process. These nested levels of material self-concern have not yet found expression in machines. At the intermediate level of tissues, however, the latest developments in soft materials should allow us to design, to some degree, ‘imitations’ of nature.

Why soft materials? The majority of today’s robots are constructed with materials of convenience. What is so different about blob-bot as compared to metal-bot? Consider the ‘life’ of a piece of sheet metal bent into a boxy robot. In general, metal is so durable in comparison to its niche—our human environment—that its integrity and viability is not a consideration.


[Figure 1 schematic: four coupled homeostat modules (A–D). Within each module, wires from the other modules feed coils whose summed magnetic field acts on a pivoting magnet; the position of the magnet’s plate determines the voltage sent to a battery-powered uniselector, and if V deviates from its null value by more than e the uniselector advances, changing the module’s resistance and its output to the other modules’ coils.]

Fig. 1 | Ashby’s homeostat of 1954 exhibited some self-restoring stability. Composed of four identical electrical-magnetic modules, each exerting effects
on the others, the system executed a search for a globally stable state when the voltage (V) of one module exceeded some critical value of error (e) from
the null state. Reproduced from ref. 9, Taylor & Francis.
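
In modern terms, the strategy summarized in Fig. 1 can be sketched in a few lines of code. The following is a conceptual caricature of ours, not a model of Ashby's electromechanical circuit: coupled variables relax under mutual influence, and any module whose essential variable leaves its viable range takes a random step in its own parameters.

```python
import random

# Conceptual sketch of Ashby-style ultrastability. Four variables drive one
# another through weights; whenever a variable leaves its viable range, its
# module 'advances the uniselector', i.e. re-randomizes its incoming weights.
# The random search ends, in effect, once a mutually stabilizing configuration
# of parameters has been found.

random.seed(3)
N, LIMIT, DT = 4, 1.0, 0.1
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-0.5, 0.5) for _ in range(N)]
resets = 0

for step in range(100000):
    # Each variable decays towards its null value while being pushed by the others.
    state = [
        s + DT * (-s + sum(w * o for w, o in zip(weights[i], state)))
        for i, s in enumerate(state)
    ]
    for i, s in enumerate(state):
        if abs(s) > LIMIT:                        # essential variable out of range
            weights[i] = [random.uniform(-1, 1)   # random step change in parameters
                          for _ in range(N)]
            state[i] = 0.0
            resets += 1

print(f"parameter resets during the run: {resets}")
print("state at the end of the run:", [round(s, 4) for s in state])
```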

This is why metals and hard plastics are ubiquitous in robotics: precisely so that, in most cases, material integrity can be safely assumed.

Yet durability comes at a cost. An invulnerable material has nothing to say about its well-being. It rarely encounters existential threats. If we imagine strain gauges embedded throughout a hard surface, they would spend most of their time reporting ‘no change’. The hard knocks of life accumulate until finally a catastrophic failure occurs, and the sensors cry out in unison. The rigid robot presents a monolithic and implacable face to the world, unfeeling by design, its function decoupled from its constitution.

Soft robots, on the other hand, more readily enter into a graceful and sensitive coupling with the environment (Fig. 2a; see reviews23–28). Beginning with vulnerability as a design principle for robots, we propose to extend it down to the very stuff out of which the robot is made. Continuing the example we advanced earlier, the same strain gauges embedded in the volume of a soft material can localize forces and signal graded disruptions in body surface continuity, such as those caused by punctures and tears. As a realized example, Markvicka et al.29 fabricated a soft electronic ‘skin’ that localizes and can trigger responses to damage. They impregnated an elastomer base with droplets of liquid metal that, on rupture, cause changes in electrical conductivity across the damaged surface.

This is not to say that soft materials are necessarily weaker or less resistant to mechanical damage than hard materials30. Soft matter admits of greater complexity and of more ways to regulate and be regulated. Soft materials accommodate themselves to objects rather than shoving objects aside. Under stress, they deform without breaking, then enter dysfunction or gradual decline instead of suffering sudden catastrophic failures. In many cases, soft materials can self-heal, regaining much, if not all, of their pre-injury structural and electrical properties (reviewed in refs. 31,32). In a coup of engineering, Cao et al.33 demonstrated an electronic gel skin that can self-heal after a cut (via ionic interactions); can sense touch, pressure, and strain; and can function in wet or dry conditions!

Soft materials have continuously varying morphology, with more points of contact, control and force dispersion. They densely sample the environment across multiple modalities, including pressure, stretch, temperature and energy level, and return rich information about the evolving interaction. While not sufficient to generate feeling on its own, soft matter is more likely to naturally create the kind of relationship that, we expect, admits of an approximation to feeling.

As an example, consider the robotic octopus arm constructed of silicone and actuated by tendons34 (Fig. 2b). It executes an enveloping, coiled grip on an irregular surface not by calculating an analytical solution of applied torques to each of its microscopic ‘joints’, but rather by conforming itself to the object. To some extent, it holds by allowing itself to be held. Its grip on the world is achieved not by high-level object cognition but by its own material properties. To a considerable extent, so is feeling in biological creatures.
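
The contrast drawn above, between strain gauges in a hard shell that report 'no change' until catastrophic failure and sensors distributed through a soft volume that report graded, localized disruption, can be caricatured in a toy computation (ours, not a model of the device of ref. 29):

```python
import numpy as np

# Toy contrast between a hard shell and a sensorized soft skin. The soft skin
# is modelled as a grid of sensing sites, loosely inspired by liquid-metal
# droplet networks; grid size, puncture location and radius are illustrative.

def soft_skin_report(grid_size=20, puncture=(5, 12), radius=1.5):
    y, x = np.mgrid[0:grid_size, 0:grid_size]
    # Sites within `radius` of the puncture change their local conductivity.
    disrupted = (x - puncture[1]) ** 2 + (y - puncture[0]) ** 2 <= radius ** 2
    if not disrupted.any():
        return "no damage detected"
    rows, cols = np.nonzero(disrupted)
    severity = disrupted.sum() / disrupted.size            # graded, not binary
    location = (float(rows.mean()), float(cols.mean()))    # localized estimate
    return f"damage near {location}, severity {severity:.3f}"

def hard_shell_report(total_stress, failure_threshold=100.0):
    # A rigid body reports nothing useful until it crosses a failure threshold.
    return "catastrophic failure" if total_stress > failure_threshold else "no change"

print(soft_skin_report())
print(hard_shell_report(42.0))
```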

[Figure 2 images: a, stretchable soft-electronic substrates, including a liquid-metal (LM) droplet–elastomer composite whose conductive LM network provides an indication of damage; b, a soft robotic arm gripping by conforming to an object; c, anatomical cross-section of human skin showing epidermis, dermis and subcutis (hypodermis), with hair follicles, sweat and sebaceous glands, blood and lymph vessels, nerve fibres, and Meissner’s and Pacinian corpuscles. Scale bars range from hundreds of micrometres to centimetres.]

Fig. 2 | Artificial and natural soft materials. a, Soft electronics can be embedded on flexible and stretchable substrates. LM, liquid metal. b, Soft robotic
effectors grip by conforming to the object. c, Human skin contains dense embeddings of sensors and effectors for the maintenance of its own integrity.
Reproduced from ref. 23, AAAS (a, top three rows); ref. 29, Wiley (a, bottom row); and ref. 24, Elsevier (b). Credit: National Cancer Institute (c).

The sensors of a living organism are themselves alive and vulnerable to the conditions they are sensing. The retina is not an indifferent piece of silicon, but a curved sheet of photoreceptor cells resting on a bed of capillaries, all bathed in a saline jelly, surrounded by pain sensors and defended by an ultra-rapidly deploying physical barrier: the eyelid.

To take another example from biology, consider the near-miraculous material called skin (Fig. 2c). In its totality, the skin is the largest of the viscera in the human organism, and contains dense embeddings of sensory, motor, nutrient exchange and self-repair systems. Furthermore, the individual cells comprising the skin not only contain their own life-maintenance systems, as all other viscera do, but skin cells also register itch, pain, temperature, stretch, vibration and pressure, thus constituting the interface between self and world. And yet the skin is exquisitely vulnerable. A tiny insect’s jaws can breach the skin and create a large disturbance to the organism. In fact, this literal hair-trigger sensitivity is one of the ‘purposes’ of skin. The skin’s registration and amplification of signals of attack, a loved one’s caresses or the sun’s rays, provides critical information regarding the ongoing governance of life. Things can go well or very badly for soft materials, in more and more interesting ways than for hard matter.

Softness can be computationally modelled as a mesh or lattice of small enough components interacting in large enough numbers. The main challenge is to simulate the material at a high enough resolution for softness to be able to emerge while remaining computationally tractable. Efficient algorithms have been developed to model the dynamics of soft robots35,36, even when composed of heterogeneous materials37.
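
As a minimal illustration of the lattice view described above, a one-dimensional chain of masses and springs already captures the basic ingredients of such simulations: many small elements, local elastic coupling, damping and explicit time stepping. The sketch below is our toy, not the algorithms of refs. 35–37.

```python
import numpy as np

# Minimal mass-spring chain as a 1D stand-in for a soft-body lattice.
# All parameter values are illustrative.
n, k, c, m, dt = 30, 50.0, 0.05, 0.05, 0.002   # nodes, stiffness, damping, mass, step
rest = 1.0                                      # rest length between neighbours
x = np.arange(n) * rest                         # node positions along the chain
v = np.zeros(n)
x[-1] += 0.5                                    # poke the free end (local deformation)

for step in range(20000):                       # semi-implicit Euler integration
    stretch = np.diff(x) - rest                 # per-spring deviation from rest
    f = np.zeros(n)
    f[:-1] += k * stretch                       # each spring pulls its two nodes
    f[1:] -= k * stretch                        # towards each other
    f -= c * v                                  # viscous damping
    v += (f / m) * dt
    v[0] = 0.0                                  # clamp one end to a rigid base
    x += v * dt
    if step % 5000 == 0:
        print(f"t={step * dt:5.1f} s  end displacement={x[-1] - (n - 1) * rest:+.3f}")

# After settling, the chain has absorbed the poke and returned near rest.
print(f"final end displacement={x[-1] - (n - 1) * rest:+.3f}")
```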


Evolutionary algorithms have also been applied to the problem of generating soft robot morphologies and the corresponding motor patterns38. The vast expansion of degrees of movement freedom in soft robots parallels a vast expansion of sensitivity and control. The ensuing complexity of simulation is well worth it because soft materials present a larger opportunity space for maintenance and upkeep—a larger stage for a model of homeostasis and feelings to play out on.

We would also call attention to the interesting case of biohybrid systems, which integrate conventional or soft robotic materials with engineered cells (reviewed in ref. 39). Muscle tissues may be integrated into miniature robots for small-scale actuation, or bacterial cells into hydrogels for long-term sensing and computing40. The caveat, however, is that adding living cells or tissues to more conventional materials may muddy the waters in the goal of understanding the principles behind feeling machines. Putting chunks of biological matter in machines may very well get us some softness and feeling ‘for free’, without explicitly modelling them.

Computing cross-modal associations. The dream of building a robot with a homeostatic self-representation would present a complex exercise in machine learning but it could draw on neuroscience facts and theory—for example, on a neuro-architectural framework originally proposed in 198941,42. According to this framework, sensory inputs coalesce into abstract concepts by being progressively remapped in a neural hierarchical fashion, with each higher level registering more complex features. Nodes in each level also re-instantiate their lower-level constituent features by top-down projections. This convergence–divergence architecture can form representations that bridge across the sensory modalities43,44.

There is an intriguing correspondence between the biologically implemented convergence–divergence architecture and some variants of deep neural networks. Deep Boltzmann machines45 (DBMs; introduced in refs. 46,47), for example, learn hierarchical representations of sensory inputs in a stepwise manner, with increasingly complex internal features acquired as the hierarchy is climbed. DBMs are also generative in that they attempt to reconstruct, at each level back down the hierarchy, the learned features and ultimately the original pattern of sensory energy corresponding to the stimulus. Visual recognition of written digits was one of the earliest practical uses of neural networks48, with auditory speech recognition of digits following somewhat later49. Today’s networks can learn representations that bridge across the auditory and visual modalities to perform cross-modal recognition. Ngiam et al.50 used DBMs and autoencoders to perform audiovisual speech recognition, training on auditory data to recognize the corresponding videos of lip movements. Recognition of objects across the auditory and visual modalities followed soon after51. Curiously, audiovisual-invariant object representations were discovered in the human brain around the same period52. Other modalities have since joined the algorithmic fray, including the significant combination of vision with motor and touch modalities53. Keeping pace with the algorithms, the human brain has been mapped for cross-modal correspondences among vision and touch54, and among vision, hearing and touch55.

Crucially, cross-modal processing is not limited to combinations of external (exteroceptive) sensory modalities, but can also accommodate the internal (proprioceptive and interoceptive) modalities. Damasio56 has proposed that a key to generating a self-perspective is the integration of exteroceptive information (from visual cortices, for example) with information from sensory portals (such as the frontal eye fields) and the musculoskeletal frame, which provides a stable anchor for the evolving sensory processing.

We suggest that deep neural networks are poised to tackle the next great challenge of building correspondences between inner space and outer space, between internal homeostatic data and external sense data. A machine constructed of soft and sensitive tissues and in charge of its own self-regulation will have a wealth of internal data on which to draw to inform its plans and perceptions. A proposal has been made that the feeling of existence itself, or of conscious presence, may be due to predictive coding of internal sensations57. The homeostatic robot will process information with the aid of something akin to feeling. How does the colour, taste and texture, say, of an apple, systematically associate with changes to the ongoing management of life? All of which is to say that the question ‘how does this make you feel?’ might be asked of machines.
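
The kind of bridging described in this section can be made concrete at the smallest possible scale. The sketch below is a linear toy of our own devising (the studies cited above use deep, nonlinear models): a hidden state drives both a simulated exteroceptive channel and a simulated interoceptive channel, and the leading singular vectors of their cross-covariance recover a shared, modality-invariant component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one hidden "state of the organism and its world" drives both
# an exteroceptive channel (vision-like features) and an interoceptive channel
# (energy level, temperature, strain). Dimensions and noise levels are illustrative.
n, d_ext, d_int = 2000, 8, 4
state = rng.normal(size=(n, 1))                       # shared latent cause
extero = state @ rng.normal(size=(1, d_ext)) + 0.5 * rng.normal(size=(n, d_ext))
intero = state @ rng.normal(size=(1, d_int)) + 0.5 * rng.normal(size=(n, d_int))

# Paired directions whose projections agree across the two modalities: the
# leading singular vectors of the cross-covariance matrix (a partial-least-
# squares flavour of cross-modal abstraction; deep networks perform a
# nonlinear, hierarchical version of the same bridging).
X = extero - extero.mean(axis=0)
Y = intero - intero.mean(axis=0)
U, s, Vt = np.linalg.svd(X.T @ Y / n, full_matrices=False)
ext_score = X @ U[:, 0]                               # exteroceptive projection
int_score = Y @ Vt[0, :]                              # interoceptive projection

print(f"correlation of the shared component across modalities: "
      f"{np.corrcoef(ext_score, int_score)[0, 1]:.2f}")
print(f"correlation of that component with the hidden state:   "
      f"{abs(np.corrcoef(ext_score, state[:, 0])[0, 1]):.2f}")
```

A homeostatic robot would need the same kind of shared component, except that one side of the pairing would be its own interoceptive, viability-relevant data rather than a synthetic stand-in.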

Questions and objections
We already have our hands full teaching robots to drive our cars and sweep our floors. Why add new failure modes? Why worry about the Roomba catching a cold? The prospect of adding vulnerability and self-interest to robots provokes a set of common concerns. We attempt to address them here.

Reward, reinforcement and overhead costs of homeostasis. The addition of physical vulnerability opens the robot’s behaviour to new richness. We use the body to implicitly compute a high-dimensional reward function, for which an explicit analytical solution is out of reach. Reward functions are employed in reinforcement learning (RL), a computational framework that originated from the behaviourist tradition of psychology. In computational studies, RL can be used to train a system to perform a complex, multi-step behaviour by designing an appropriate reward function for it to maximize. A chief difficulty is to define the reward function with enough specificity to bring about the intended goal state. Another difficulty is the requirement of enormous amounts of experiential data—on the order of millions of trials and errors—to learn complex behaviour. Machines can acquire vast experience in accelerated computer simulations, but this is not possible for organisms constrained by the material and temporal limits of physical reality.

We regard RL as a powerful tool for certain classes of problems, but RL in general should not be confused or identified with a homeostatic system architecture. We specify a particular target for optimization (homeostatic well-being) and build in a necessary linkage to the physical integrity of the body. In so doing, we hope to reframe terms used by RL practitioners such as reward, punishment and motivation, which, for the most part, lack grounding in biological and phenomenological reality.

On this point, we are encouraged by a strain of computational work58–60 that builds bridges between organism homeostasis, emotions and RL (see ref. 61). Keramati and Gutkin60 mathematically model RL as the traversal of a multidimensional homeostatic space, in which each dimension corresponds to a physiological parameter and has an optimal value. Reward, in this perspective, is not identified with bonbons or dollars or videogame points, but rather with anything that moves the agent towards the location in homeostatic space that minimizes distance to the various optima. As a reviewer of Keramati and Gutkin’s work put it, this makes “reinforcement learning accountable to homeostatic imperatives.” Recent work has extended this homeostatic RL framework to address high-level cognitive, social and economic behaviours62, and considered its relation to active inference63.
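
In this spirit, a drive-reduction reward takes only a few lines to write down. The sketch below follows the general idea of ref. 60; the setpoints, weights and distance function are illustrative choices of ours rather than the published model.

```python
import numpy as np

# Homeostatic space: each dimension is a physiological variable with a setpoint.
# Drive is the distance from the setpoint vector; reward is the reduction of
# that drive produced by an action.

setpoints = np.array([0.8, 37.0, 0.0])        # e.g. charge level, temperature, strain
weights   = np.array([1.0, 0.5, 2.0])         # how much each variable matters

def drive(state):
    """Weighted Euclidean distance from the homeostatic optimum."""
    return float(np.sqrt(np.sum(weights * (state - setpoints) ** 2)))

def reward(state, next_state):
    """Positive when an action moves the agent closer to the optimum."""
    return drive(state) - drive(next_state)

before         = np.array([0.2, 36.5, 0.3])   # depleted, slightly cold, strained
after_recharge = np.array([0.7, 36.8, 0.3])   # an action that restores charge
after_damage   = np.array([0.2, 36.5, 0.9])   # an action that increases strain

print(f"reward for recharging: {reward(before, after_recharge):+.3f}")
print(f"reward for damage:     {reward(before, after_damage):+.3f}")
```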

The well-behaved robot. We aim to build robots with a sense of self-preservation. What could possibly go wrong with such an endeavour? Stories about robots often end poorly for their human creators. We seem to be unsettled by the moral status of artefacts imbued with lifelike qualities. If a genuinely feeling machine’s existence would be threatened by humans, would it necessarily respond in violent self-defence? We suggest not, provided, for example, that in addition to having access to its own feelings, it would be able to know about the feelings of others—that is, if it would be endowed with empathy. For some of the problems arising from giving some feelings to robots, we might attempt to solve them by giving robots more feelings rather than by suppressing them.

We subscribe to a naturalistic account of morality in which behaviours are guided by moral deliberation. Johnson64 argues that moral deliberation involves imagining the consequences of our actions on ourselves and others, and consciously feeling those consequences. Levy65 goes further, arguing that consciousness is required for moral responsibility. A necessary precondition for the attribution of moral responsibility is awareness of the facts and conditions giving our actions their moral significance. Feelings are responsible for introducing in the mind the relevant facts of the body. Assuming a robot already capable of genuine feeling, an obligatory link between its feelings and those of others would result in its ethical and sociable behaviour. As a starting point, we propose two provisional rules for a well-behaved robot: (1) feel good; (2) feel empathy.

The machine is capable of feeling well or ill, but in accordance with rule 1 pursues homeostatic well-being—that is, feeling good. The second rule, enforcing empathy, would make a robot feel the pleasure and pain of others as its own (though not necessarily at full strength). The two rules cycle into and reinforce each other. Empathy acts as a governor on self-interest and as a reinforcer of pro-social behaviour. Actions that harm others will be felt as if harm occurred to the self, whereas actions that improve the well-being of others will benefit the self.
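
The interplay of the two rules can be written as a one-line weighting. This is a formalization of ours; the article states the rules only in prose, and the empathy weight is an illustrative assumption.

```python
# Toy illustration of the two provisional rules: the robot pursues its own
# homeostatic well-being (rule 1) while feeling a fraction of the well-being
# of others as its own (rule 2, 'not necessarily at full strength').

EMPATHY = 0.5

def felt_well_being(own, others, empathy=EMPATHY):
    """What the robot feels: its own state plus empathically shared states."""
    return own + empathy * sum(others)

# An action that helps the robot slightly but harms another agent strongly
# ends up feeling bad overall, and is therefore disfavoured.
selfish_action   = felt_well_being(own=+0.2, others=[-0.8])
prosocial_action = felt_well_being(own=-0.1, others=[+0.6])
print(f"felt value of the selfish action:   {selfish_action:+.2f}")
print(f"felt value of the prosocial action: {prosocial_action:+.2f}")
```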

We are certain that our two rules would eventually get caught in unexpected tangles, but such is the nature of moral decision-making, which is characterized by difficult choices, contingency on circumstances, resistance to rational maximization and guidance by strong yet inarticulable or hidden feelings. What people do not typically do is get stuck in infinite loops of ‘moral’ reasoning, or cause havoc by following strict rules to a bloody end.

Sufficiently advanced AIs have been characterized, unfairly we think, as necessarily devious, paranoid and acquisitive. Their “basic AI drives”66 of self-protection would entail a will to power. But this is not an empirically justified entailment of intelligence. We can just as easily imagine a fully enlightened and withdrawn ‘ascetic’ AI, as we can a demonic maximizer. Our present understanding of the relationship between intelligence and personality cannot yet adjudicate these positions67.

We hope for this section to serve as a preliminary prompt to necessary future discussions on robot ethics. Feeling robots should implement moral deliberation in the way that humans do: as feeling-oriented problem-solving, taking into account the feelings of others. Robots might have even fewer impediments to moral behaviour than humans do—for example, those impediments we inherit from history, culture or biological impulses, which can be engineered out of a robot. Morally perfect behaviour, in robots or otherwise, is likely to be incoherent or unattainable, but moral progress is possible.

But is it the real thing? Would a homeostatic robot be, at best, a simulacrum, replicating some of the behaviours and mechanisms of living organisms but missing a key ingredient of the real thing? Is the ‘wet’ biochemistry of cellular tissue required for authentic homeostasis and for the mental experience we call feeling? These are important and open questions (Box 1). Can all mental phenomena be reduced to information processing, implementable on any arbitrary computing medium? A computer simulation of a hurricane won’t get us wet68, but might simulation of thought, itself being information processing, result in real thinking?

Here we must entertain the possibility that true feeling—the sort of mental state that humans experience when we feel—may indeed be restricted to wet biological tissue and may not be realizable on non-living artefacts. The wetness hypothesis predicts that the potentially crucial mechanisms behind feeling are impossible to realize in alternative materials, due to their lack of the physicochemical properties necessary to replicate the causal chain of events. Models and simulations of the crucial mechanism would be useful maps of the territory, but would not usually replicate the causal structure in a way that is grounded in reality.

Just as importantly, the possession of genuine feeling, however defined or verified, may not be necessary to the practical goal of enhancing robot behaviour. At present, the wetness hypothesis for genuine feelings is untested and untestable. The realness of a thing is tricky to establish when the thing is subjective. Testability may ultimately be a distraction from the task currently at hand, which is to model the thing better and gain an additional understanding of it. As models continue to improve, it is conceivable that they would become instantiations of the modelled phenomena. At some level of detail the map might become indistinguishable from the territory.

Box 1 | Feeling machines as a research platform
The initial goal of the introduction of physical vulnerability and self-determined self-regulation is not to create robots with authentic feeling, but rather to improve their functionality across a wide range of environments. As a second goal, introducing this new class of machines would constitute a scientific platform for experimentation on robotic brain–body architectures. This platform would open the possibility of investigating important research questions:
• What are the specific expansions of intelligence and cognitive capacity achieved by the addition of homeostatic design and affective processing?
• What is the role of cultural transmission and collective problem-solving in societies of homeostatic robots?
• To what extent is the appearance of feeling and consciousness dependent on a material substrate?

Conclusion
We suggest that the artificial agent’s survival should be implemented as a problem to be solved in its own right. The machine’s constitution and viability states have not yet been exploited as an internal source of rich and highly relevant data. The incorporation of soft tissues embedded with sensors and effectors into robots will provide a source of multimodal homeostatic data. Cross-modal algorithms will build abstract associations between the objects of the world and their multidimensional effects on homeostasis. Homeostatic robots might reap behavioural benefits by acting as if they have feeling. Even if they would never achieve full-blown inner experience in the human sense, their properly motivated behaviour would result in expanded intelligence and better-behaved autonomy.

Received: 29 April 2019; Accepted: 4 September 2019; Published online: 9 October 2019

References
1. Damasio, A. The Strange Order of Things: Life, Feeling, and the Making of Cultures (Pantheon, 2018).
2. Friston, K. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138 (2010).
3. Kolchinsky, A. & Wolpert, D. H. Semantic information, autonomous agency and non-equilibrium statistical physics. Interface Focus 8, 20180041 (2018).
4. Kiverstein, J. D. & Rietveld, E. Reconceiving representation-hungry cognition: an ecological-enactive proposal. Adapt. Behav. 26, 147–163 (2018).
5. Shannon, C. E. The mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948).
6. Anderson, M. L. Embodied cognition: a field guide. Artif. Intell. 149, 91–130 (2003).
7. Froese, T. & Ziemke, T. Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif. Intell. 173, 466–500 (2009).

8. Seth, A. K. & Tsakiris, M. Being a beast machine: the somatic basis of selfhood. Trends Cogn. Sci. 969–981 (2018).
9. Cariani, P. A. The homeostat as embodiment of adaptive control. Int. J. Gen. Syst. 38, 139–154 (2009).
10. Walter, W. G. An imitation of life. Sci. Am. 182, 42–45 (1950).
11. Holland, O. E. in Artificial Life V: Proceedings of the 5th International Workshop on the Synthesis and Simulation of Living Systems (eds Langton, C. G. & Shimohara, K.) 34–44 (MIT Press, 1997).
12. Brooks, R. A. New approaches to robotics. Science 253, 1227–1232 (1991).
13. Bongard, J. & Lipson, H. Evolved machines shed light on robustness and resilience. Proc. IEEE 102, 899–914 (2014).
14. Parisi, D. Internal robotics. Conn. Sci. 16, 325–338 (2004).
15. Doya, K. & Uchibe, E. The cyber rodent project: exploration of adaptive mechanisms for self-preservation and self-reproduction. Adapt. Behav. 13, 149–160 (2005).
16. Di Paolo, E. Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions. Proc. Simul. Adapt. Behav. 440–449 (2000).
17. Parisi, D. & Petrosino, G. Robots that have emotions. Adapt. Behav. 18, 453–469 (2010).
18. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 59, 119–155 (2003).
19. Jonas, H. The Phenomenon of Life: Toward a Philosophical Biology (Northwestern Univ. Press, 1966).
20. Di Paolo, E. in Dynamical Systems Approach to Embodiment and Sociality (eds Murase, K. & Asakura, T.) 19–42 (Advanced Knowledge International, 2003).
21. Legg, S. & Hutter, M. Universal intelligence: a definition of machine intelligence. Minds Mach. 17, 391–444 (2007).
22. Maturana, H. R. & Varela, F. J. Autopoiesis and Cognition: The Realization of the Living (Springer, 1991).
23. Rogers, J. A., Someya, T. & Huang, Y. Materials and mechanics for stretchable electronics. Science 327, 1603–1607 (2010).
24. Kim, S., Laschi, C. & Trimmer, B. Soft robotics: a bioinspired evolution in robotics. Trends Biotechnol. 31, 287–294 (2013).
25. Majidi, C. Soft robotics: a perspective—current trends and prospects for the future. Soft Robot. 1, 5–11 (2014).
26. Lu, N. & Kim, D.-H. Flexible and stretchable electronics paving the way for soft robotics. Soft Robot. 1, 53–62 (2014).
27. Pfeifer, R., Iida, F. & Lungarella, M. Cognition from the bottom up: on biological inspiration, body morphology, and soft materials. Trends Cogn. Sci. 18, 404–413 (2014).
28. Rus, D. & Tolley, M. T. Design, fabrication and control of soft robots. Nature 521, 467–475 (2015).
29. Markvicka, E. J., Tutika, R., Bartlett, M. D. & Majidi, C. Soft electronic skin for multi-site damage detection and localization. Adv. Funct. Mater. 29, 1900160 (2019).
30. Martinez, R. V., Glavan, A. C., Keplinger, C., Oyetibo, A. I. & Whitesides, G. M. Soft actuators and robots that are resistant to mechanical damage. Adv. Funct. Mater. 24, 3003–3010 (2014).
31. Kang, J., Tok, J. B. H. & Bao, Z. Self-healing soft electronics. Nat. Electron. 2, 144–150 (2019).
32. Bartlett, M. D., Dickey, M. D. & Majidi, C. Self-healing materials for soft-matter machines and electronics. npg Asia Mater. 11, 19–22 (2019).
33. Cao, Y. et al. Self-healing electronic skins for aquatic environments. Nat. Electron. 2, 75–82 (2019).
34. Laschi, C. et al. Soft robot arm inspired by the octopus. Adv. Robot. 26, 709–727 (2012).
35. Duriez, C. in Proc. IEEE International Conference on Robotics and Automation 3982–3987 (IEEE, 2013).
36. Goldberg, N. N. et al. On planar discrete elastic rod models for the locomotion of soft robots. Soft Robot. https://doi.org/10.1089/soro.2018.0104 (2019).
37. Hiller, J. & Lipson, H. Dynamic simulation of soft multimaterial 3D-printed objects. Soft Robot. 1, 88–101 (2014).
38. Rieffel, J., Knox, D., Smith, S. & Trimmer, B. Growing and evolving soft robots. Artif. Life 20, 143–162 (2014).
39. Ricotti, L. et al. Biohybrid actuators for robotics: a review of devices actuated by living cells. Sci. Robot. 2, eaaq0495 (2017).
40. Liu, X. et al. Stretchable living materials and devices with hydrogel–elastomer hybrids hosting programmed cells. Proc. Natl Acad. Sci. USA 114, 2200–2205 (2017).
41. Damasio, A. The brain binds entities and events by multiregional activation from convergence zones. Neural Comput. 1, 123–132 (1989).
42. Damasio, A. Time-locked multiregional retroactivation: a systems-level proposal for the neural substrates of recall and recognition. Cognition 33, 25–62 (1989).
43. Meyer, K. & Damasio, A. Convergence and divergence in a neural architecture for recognition and memory. Trends Neurosci. 32, 376–382 (2009).
44. Man, K., Kaplan, J., Damasio, H. & Damasio, A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J. Comp. Neurol. 521, 4097–4111 (2013).
45. Salakhutdinov, R. & Hinton, G. Deep Boltzmann machines. Artif. Intell. Stat. 5, 448–455 (2009).
46. Hinton, G. E. & Sejnowski, T. J. Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 448–453 (IEEE, 1983).
47. Ackley, D., Hinton, G. & Sejnowski, T. A learning algorithm for Boltzmann machines. Cogn. Sci. 9, 147–169 (1985).
48. LeCun, Y. et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551 (1989).
49. Graves, A., Eck, D., Beringer, N. & Schmidhuber, J. in Biologically Inspired Approaches to Advanced Information Technology (eds Ijspeert, A. J., Murata, M. & Wakamiya, N.) 127–136 (Springer, 2003).
50. Ngiam, J., Khosla, A. & Kim, M. Multimodal deep learning. In Proc. 28th International Conference on Machine Learning (eds Getoor, L. & Scheffer, T.) 689–696 (2011).
51. Aytar, Y., Vondrick, C. & Torralba, A. SoundNet: learning sound representations from unlabeled video. In Proc. 30th International Conference on Neural Information Processing Systems 892–900 (NIPS, 2016).
52. Man, K., Kaplan, J. T., Damasio, A. & Meyer, K. Sight and sound converge to form modality-invariant representations in temporoparietal cortex. J. Neurosci. 32, 16629–16636 (2012).
53. Lenz, I., Lee, H. & Saxena, A. Deep learning for detecting robotic grasps. Int. J. Rob. Res. 34, 705–724 (2015).
54. Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., Tipper, S. P. & Downing, P. E. Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex. J. Neurophysiol. 104, 1077–1089 (2010).
55. Man, K., Damasio, A., Meyer, K. & Kaplan, J. T. Convergent and invariant object representations for sight, sound, and touch. Hum. Brain Mapp. 36, 3629–3640 (2015).
56. Damasio, A. Self Comes to Mind (Pantheon, 2010).
57. Seth, A. K., Suzuki, K. & Critchley, H. D. An interoceptive predictive coding model of conscious presence. Front. Psychol. 2, 395 (2012).
58. Bersini, H. in Proc. Third International Conference on Simulation of Adaptive Behaviour 325–333 (MIT Press-Bradford Books, 1994).
59. Konidaris, G. & Barto, A. in From Animals to Animats 9 (ed. Nolfi, S.) 346–356 (Springer, 2006).
60. Keramati, M. & Gutkin, B. Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife 3, e04811 (2014).
61. Moerland, T. M., Broekens, J. & Jonker, C. M. Emotion in reinforcement learning agents and robots: a survey. Mach. Learning 107, 443–480 (2018).
62. Juechems, K. & Summerfield, C. Where does value come from? Preprint at https://doi.org/10.31234/osf.io/rxf7e (2019).
63. Morville, T., Friston, K., Burdakov, D., Siebner, H. R. & Hulme, O. J. The homeostatic logic of reward. Preprint at https://doi.org/10.1101/242974 (2018).
64. Johnson, M. Morality for Humans (Univ. Chicago Press, 2014).
65. Levy, N. Consciousness and Moral Responsibility (Oxford Univ. Press, 2014).
66. Omohundro, S. M. The basic AI drives. In Proc. 2008 Conference on Artificial General Intelligence 483–492 (ACM, 2008).
67. DeYoung, C. G. in The Cambridge Handbook of Intelligence 711–737 (Cambridge Univ. Press, 2012).
68. Searle, J. R. Minds, brains and programs. Behav. Brain Sci. 3, 417–457 (1980).

Acknowledgements
We are grateful to H. Damasio for comments on this Perspective. This work was supported by grants from the Berggruen Foundation and the Templeton World Charity Foundation to A.D.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence should be addressed to K.M. or A.D.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© Springer Nature Limited 2019
