
Science and Engineering Ethics (2022) 28:7

https://doi.org/10.1007/s11948-021-00349-y

COMMENTARY

Meaningful Lives in an Age of Artificial Intelligence: A Reply to Danaher

Lucas Scripter1

Received: 14 November 2020 / Accepted: 27 October 2021 / Published online: 30 January 2022
© The Author(s), under exclusive licence to Springer Nature B.V. 2022

Abstract
Does the rise of artificial intelligence pose a threat to human sources of meaning?
While much ink has been spilled on how AI could undercut meaningful human
work, John Danaher has raised the stakes by claiming that AI could “sever” human
beings from non-work-related sources of meaning—specifically, those related to
intellectual and moral goods. Against this view, I argue that his suggestion that AI
poses a threat to these areas of meaningful activity is overstated. Self-transformative
activities pose a hard limit to AI’s impingement on meaningful human activities.
Contra Danaher, I suggest that a wider range of sources for meaning will continue to
exist in a world dominated by AI.

Keywords: Meaning in life · Meaningful work · Artificial intelligence · Automation

Introduction

Recent writers have suggested that digital technologies herald a new age for human
work. It has been called “the fourth industrial revolution” (Schwab 2017) as well
as “the second machine age” (Brynjolfsson and McAfee 2014). Daniel Susskind
(2020) expresses the concern that automation combined with AI may eat away
at opportunities for human work: “Machines will not do everything in the future,
but they will do more. And as they slowly, but relentlessly, take on more and more
tasks, human beings will be forced to retreat to an ever-shrinking set of activi-
ties…Eventually, what is left will not be enough to provide everyone who wants
it with traditional well-paid employment” (p. 5). Given the central role that work
has historically played in human life, these changes raise the question of what will
become of humanity in a world with inadequate work. A number of contemporary
researchers (e.g., Coeckelbergh 2020; Danaher 2017, 2019a, c; Floridi 2014; Kim

and Scheller-Wolf 2019; Susskind 2020; Tegmark 2017) have addressed the options
for meaningful living in a world with severely diminished work options for humans.

* Lucas Scripter
lucasscripter@cuhk.edu.cn
1 School of Humanities and Social Science, The Chinese University of Hong Kong, Shenzhen,
2001 Longxiang Blvd., Longgang District, 518172 Shenzhen, China
In this essay, I will consider a subtle view advanced by John Danaher (2017,
2019a) about the future of meaningful living in a world with AI. Danaher acknowl-
edges, on the one hand, the way in which the rise of AI may liberate many peo-
ple from tedious, boring, or otherwise harmful work. However, he raises the stakes
by suggesting that the worry is not simply that AI may significantly undercut the
opportunities for meaningful employment but that other non-work-related sources
of meaning may be gobbled up as well. Such technological changes brought about
by automation and AI may, in his words, “sever” us from various sources of mean-
ing. This has led him (2019a, b) to defend the meaningfulness of the ludic life, one
immersed in game playing, especially in virtual reality contexts.
In what follows, I will focus on Danaher’s claim that the development of AI could
“sever” human beings from various non-work-related sources of meaning—specifi-
cally, those related to intellectual and moral goods.1 Against this view, I argue that
the threat AI poses to these areas of meaningful activity is overblown, and humanity
will continue to have a wider range of options for meaningful living than Danaher
suggests. More specifically, I maintain, self-reflective and self-transformative activi-
ties pose a hard limit to AI’s encroachment on meaningful human activities. This
analysis reveals that a richer palette of sources for meaning will continue to exist in
a world dominated by AI, even absent the sorts of techno-social utopian changes that
Danaher (2019a) defends.

Danaher’s View

The impact of AI on human prospects for meaningful living depends, at least in
part, on how one conceives of meaning in life. As Danaher (2017, 2019a) notes,
the ascendance of AI might not be so bad given a subjectivist understanding of
meaning, i.e., one that sees meaning as flowing from subjective attitudes or experi-
ences, since it may liberate us to do more subjectively satisfying activities. How-
ever, things look less optimistic once one adds an objective condition for meaning.
Danaher’s (2017, 2019a) “severance” argument is premised on this assumption of
an objective component to meaning. His early version of the position (2017), espe-
cially, is framed using Thaddeus Metz’s (2013) “fundamentality theory” of mean-
ing in life, which holds that there are three main domains for meaningful activity:
the True, the Good, and the Beautiful.2 The central claim that Danaher distills from

1. I will set aside Danaher’s other concerns regarding how AI could undercut human meaning by leading
to a loss of attention, autonomy, and agency, among other things (2019a, b). These concerns require a
separate treatment. For a response to his argument from “moral patiency” see (Chan 2020). My concern
here is simply to provide a response to Danaher’s “severance” argument.
2. I will follow Danaher’s (2017) use of Metz’s (2013) theory not only because it is one of the most
sophisticated to date, but also because the results of the argument are, I believe, transferable. If it can be
shown that there are certain types of meaningful activities that survive the onslaught of AI on the basis
of Metz’s nuanced theory, I believe the argument will hold for a wide range of other more general theo-
ries that also emphasize or include an objective component (e.g., Landau 2017; Smuts 2013; Wolf 2010).


Metz’s sophisticated theory is that meaningful activity realizes objective value in
one of these three domains that correspond to intellectual, moral, and aesthetic pur-
suits. Danaher’s surprising suggestion is that AI poses a threat not merely to mean-
ingful work but also non-employment-related pursuits in domains of both the True
and the Good. He thus concludes that the proliferation of AI will “sever” us from
achieving goods in these spheres. With intelligent machines hogging the goods in
these domains, “we [humans] are left, in effect, with the Beautiful. That certainly
looks like a more impoverished form of existence” (2017, p. 59). What leads Dana-
her to think that meaningful pursuits oriented around the True and the Good might
disappear?
Let’s start with the True. Danaher (2017) observes, “Science is increasingly a ‘big
data’ enterprise, reliant on algorithmic, and other forms of automated assistance, to
process the large datasets and make useful inferences from those datasets. Humans
are becoming increasingly irrelevant to the process of discovery” (p. 57). Danaher
(2017, 2019a) speculates that this trend may go even further and that AI might take
the lead in scientific research itself, leaving even the most gifted human minds to be
mere bystanders. If this happens, he concludes, humans might play little to no role
in making scientific breakthroughs. Thus, for Danaher, the threat posed by AI to the
domain of the True lies in its potential ability to take the lead in areas of intellectual
inquiry.
What about the Good? This seems more robust in the face of the rise of AI. How-
ever, Danaher (2017, 2019a) speculates that AI might be better at solving moral
dilemmas, especially related to the distribution of resources. He writes, “many basic
moral problems—disease, suffering, inequality—are caused by human imperfections
and could be addressed through better automated systems…Removing humans from
the loop could make the world a better—or at least safer—place for all” (2019a, p.
105). Danaher’s point, I take it, is not merely about just allocation but rather dissolv-
ing the underlying problem of material scarcity that gives rise to moral questions of
distributive justice. It may be easy to distribute kidneys, to use Danaher’s example,
if AI can artificially produce them on demand. In such a world, even effective altru-
ists may find that it is most effective to step aside and leave philanthropic issues to
our silicon-based moral superiors. Thus, Danaher (2017) concludes that “the rise of
automation reduces the space in which humans can engage in meaningful and fulfill-
ing moral activities” (p. 57).
If AI takes over not only the social role of employment but also scientific inquiry
and the provision of human material needs, it seems like the sphere of potential
meaning-conducive activities is radically curtailed. Hence, Danaher (2017) con-
cludes, “In the end, the only domain in which humans might be able to meaningfully
contribute to objective outcomes would be in the realm of private, ludic, or aesthetic
activities, producing works of art, or pursuing games, hobbies and sports” (p. 59).

Footnote 2 (continued)
My argument here is thus a conditional one: assuming there is an objective element to meaning in human
life, as Danaher himself seems to do, then AI cannot “sever” us completely or as thoroughly as Danaher
suggests.


Critique of Danaher

Do Danaher’s arguments really warrant such strong conclusions? I will argue they
do not. The apparent plausibility of his argument rests on a narrow construal of the
realms of the True and the Good. If we bring to mind the full richness of these other
domains of meaningful activity, it becomes clear that we are not left simply with the
Beautiful and the Ludic. Even in a world where AI has become ubiquitous and tra-
ditional employment has dried up or changed, there remain, in principle, meaningful
pursuits open to humans in all three domains highlighted by Metz’s (2013) theory—
namely, the True, the Good, and the Beautiful. We have a richer palette of meaning-
ful pursuits available to us than Danaher suggests. And these meaningful pursuits
are not contingent upon the development of new techno-social utopian orders of the
sort described and advocated by Danaher (2019a).
Let’s start with his argument regarding the realm of the True. Even if we grant,
for example, his contentious premise that AI could in the future perform the high-
est functions of human cognition, including spearheading novel scientific research
itself, this doesn’t necessarily eliminate a place for the True in human life. For start-
ers, we might challenge the claim that AI can take over all areas of human inquiry.
Perhaps there are areas of intellectual investigation where human beings will remain
better suited given our own understanding of what it is to be human. For instance,
certain areas of humanities and social science research may remain privileged areas
for human researchers. This is not to say that AI couldn’t be useful in these endeav-
ors, but it might possess merely an auxiliary role rather than taking the lead. Areas
of literary, historical, and cultural analysis may still remain meaningful options for
human inquirers, even if natural scientific research becomes dominated by AI.
One reason to think this might be the case is that certain areas of humanistic
inquiry seem to presuppose conscious experience. It is hard to imagine AI taking
over reflection on, for example, morality, aesthetics, or love, if it cannot experience
these things directly. Of course, the possibility of AI consciousness is a highly spec-
ulative and disputed topic.3 If it turns out that AI consciousness cannot be created or
is not created for technical, legal, or economic reasons (see Schneider 2019), then
there may be certain areas of inquiry that remain the prerogative of human beings,
even if AI can be quite helpful in assisting that research. The point is that such
inquiry cannot be wholly usurped from humanity by our silicon-based counterparts.
Moreover, even assuming that AI can attain consciousness, this still wouldn’t neces-
sarily entail that humans don’t have a distinct and irreplaceable epistemic contribu-
tion to make to humanistic inquiry. This is because AI consciousness still wouldn’t
know what it is like to be a human; it would only know what it is like to be AI.4
Given this gap, there may continue to be a unique role for humans in inquiry into
the human realm. Thus, even granting the extreme scenario that natural science
becomes an enterprise dominated by AI, we have reason to think AI cannot “sever”

3. For a broad overview of the issues see (Schneider 2019; Tegmark 2017; Wooldridge 2020).
4. This move, of course, rests on controversial claims in the philosophy of mind (see Nagel 1974). My
point is not to defend these assumptions but rather to show that it is by no means a given that AI can take
over all forms of intellectual inquiry, even if we grant remarkable scientific advances.


us from all meaningful activities oriented toward the True. Call this the argument
from humanistic inquiry.
Even if AIs turn out to be better literary critics, cultural historians, archaeologists,
and social theorists than humans, however, this would still
not “sever” us from the intellectual dimension of human meaning. There are still other
reasons to think that AI could not in principle pick all the fruits in the realm of the
True. Consider the project of doing philosophy. Is this vulnerable to AI takeover?
Masahiro Morioka (2021) argues that it is in principle possible for AI not only to
productively analyze the works of dead philosophers but also identify new “philo-
sophical thought patterns” (p. 40), i.e., do original philosophy. He goes on to suggest
that AI might develop the capacity to independently and self-reflexively pose ques-
tions concerning fundamental existential questions, e.g., the meaning of life. This
may culminate, he speculates, in a situation where human beings and AI engage
each other as dialogue partners, a momentous change that “would surely open up
a new dimension to philosophy” (p. 41). Would the existence of AI philosophers
entail that humans are “severed” from the good of doing philosophy?
I don’t think so. Even if AI were to develop the capacity for philosophical inquiry,
it would not have the consequence of cutting human beings off from the value that
inheres in doing philosophy. This is not only because human beings could continue
to be in dialogue with AI, but, more importantly, philosophy’s value does not reside
simply in the answer that could be achieved by a supercomputer, an idea famously
parodied in The  Hitchhiker’s Guide to the Galaxy  (1979). If one returns to the
Socratic understanding of philosophy as leading an examined life, then AI could not
in principle “sever” us from this good. As Raimond Gaita (2004) has emphasized,
the Socratic project is necessarily co-extensive with one’s life—it is not the sort of
task that could be achieved and then set aside because it isn’t about the result of
reflection but the reflective activity itself. For the same reason, it is not a task that
could be outsourced to AI. Living an examined life is necessarily a task for one-
self. The Delphic maxim “Know thyself” requires that we pursue the truth ourselves
as individual, self-reflective agents. Artificial intelligence could not in principle be
tasked with examining our lives and then relaying its findings to us. It might inform
us of interesting patterns or observations, as on Morioka’s scenario, but reflective
living requires that we ourselves engage in self-examination. Thus, we have dis-
covered at least one example where the True remains in principle immune to AI’s
encroachment on meaningful human activities. Philosophical reflection in its tradi-
tional understanding, as Pierre Hadot (1995, 2004) and John Cooper (2012) have
argued, was a “way of life.” Keeping in mind this conception of the philosophical
life, we encounter a hard limit to AI’s infringement on the realm of the True. Dana-
her’s argument that AI could in principle come to dominate this sphere of meaning-
ful activity is overblown. Call this the argument from the examined life.
Aside from philosophy, however, is it the case that AI will come to displace
human beings from other sorts of meaningful intellectual  activities? Danaher’s
assumption is that the meaningfulness of intellectual pursuits is located in achieving
breakthroughs and novel results. Indeed, his later (2019a) iteration of the severance
argument highlights the role of making achievements for meaning in life. Only if
one holds this assumption can one reach the conclusion that AI could in principle


“sever” humans from intellectual values. However, if we locate the meaning-giv-
ing power in the process of coming to know rather than in the production of novel
results, then human beings can still lead meaningful lives in their cognitive pursuits,
even on the scenario that AI is spearheading the breakthroughs and novel discover-
ies.5 Here intellectual inquiry has what Iddo Landau (2017) dubs "non-competitive
value" (p. 44), the sort of value that doesn’t hinge on outdoing another, e.g., in this
case, our AI competitors. In other words, what makes an intellectual pursuit a source
of meaning doesn’t depend merely on the discovery of something new but rather
on an agent learning something new for herself. Imagine an exceptionally curious
bookworm with a broad range of intellectual interests. Even if our aspiring polymath
never makes a breakthrough or discovers something hitherto unknown, surely the
hours coming to better understand the world have not been ill-spent. Our intuitions
about meaningful intellectual pursuits don’t solely track pioneering work. It seems
to me that this is still clearly a meaningful use of time because there is value in com-
ing to be a knowledgeable person.6 We can find meaning in learning—not just dis-
covering or inventing. Even on the extreme scenario where human beings are left in
the dust with respect to scientific research, it still remains open to us to learn about
the discoveries being undertaken by AI. This too is a meaningful use of time. Call
this the argument from the value of learning.
We have defended the idea that AI will not take over the realm of the True as
a source for meaning. What about the realm of the Good? Here we should begin
by noting an important difference in the conditions for severability presupposed
by Danaher’s argument in the realms of the True and the Good. With respect to
intellectual goods, for AI to be a threat it only needs to be deployed in spheres of
advanced inquiry and discovery. No broader assumptions about the integration of
AI into society are necessary. One can imagine AI severing us from the pursuit of
the True in a highly unequal society where top scientific inquiry was carried out by
an elite team of AI researchers and their human auxiliaries. By contrast, for AI to
threaten the moral Good, stronger assumptions about how AI is deployed in society
are required. Only if AI is harnessed to overcome scarcity and inequality, would it
pose a threat to certain moral sources of meaning. Unfortunately, strong evidence
suggests that AI may exacerbate social inequalities (e.g., Brynjolfsson and McAfee
2014; Susskind 2020). Thus, it is a plausible scenario in light of existing tendencies
for AI and automation to sever us from the True but, perversely, create the condi-
tions for meaningful activities in the realm of the Good insofar as these technolo-
gies compound social injustices and thereby open up opportunities for meaningful

5. See the distinction between telic and atelic activities in Setiya (2017). In this terminology, Danaher’s
severance argument seems to conceive of meaningful activity primarily in telic or goal-oriented terms,
with an emphasis on making unique achievements. This overlooks or minimizes atelic or process-
oriented activities, although he does elsewhere emphasize the importance of finding meaning in the
process of activities, e.g., of playing games in a virtual utopia (see Danaher 2019a). See also Landau’s
(2017) closely related discussion of “The Paradox of the End” (pp. 145–161).
6. Danaher (2019a) writes, “It might be fun to replicate someone else’s scientific or moral achievements,
but it’s not quite the same as being the first one to do so” (p. 106). While he may be right that pioneering
or groundbreaking work is especially meaningful, I think he sells short the meaningfulness of learning.
It is not simply a matter of it being fun, which stresses merely a subjective condition, but about becoming
a knowledgeable or wise person, which involves an objective condition.


human resistance, reformation, and, perhaps, revolution. Thus, Danaher’s argument
here presupposes we can overcome what he calls the “distributional problem” (2017,
pp. 46, 62) or elsewhere the “deprivation problem” (2019a, p. 100). If we assume, as
Danaher does, that we can overcome this problem, then the question arises: can AI
still sever us from drawing meaning in the realm of the Good?
Let’s grant Danaher’s assumption for the sake of argument that AI is used in a
socially beneficial way to eliminate poverty, disease, and other conditions that make
charity both possible and necessary. Even in these most felicitous of circumstances,
there are still reasons to be skeptical of Danaher’s argument. The deeper problem
is that not all aspects of the moral life are akin to his example of distributing kid-
neys. The moral life goes far beyond questions of distributive justice or even acts
of charity. If we make the modest assumption that the realm of the Good includes
personal virtue, then one may discover the realm of the Good is more heterogeneous
and resistant to AI takeover than Danaher argues in his (2017) presentation of the
“severance” thesis. One would do well to look at the traditions of virtue ethics, e.g.,
Aristotelian, Stoic, Thomist, Confucian, and Buddhist, to name a few, for a reminder
of the great variety and richness of human attempts to envision and live out the good
life.7 It’s hard to see how AI could take over the task of living well, even if it could
solve problems of just distribution and resource scarcity. Even if one could imagine
sophisticated technological “mentor” AIs that assist us in developing good habits,
the key point is that, again, the role of the technology remains auxiliary—it cannot
take over the central role for us because it is something that we must become. If the
foregoing argument is correct, then Danaher is wrong to think that the rise of AI
could lead to a disconnect from the realm of the Good. The task of personal ethical
self-transformation can remain a source of human meaning in a world dominated by
AI. Call this the argument from virtue ethics.
While Danaher (2019a) commends the development of virtues within virtual con-
texts, his preferred techno-social utopian scenario, the argument I’m advancing does
not depend on these sorts of changes. I’m suggesting that the cultivation of virtue
always already remains a source of meaning for us irrespective of how AI develops
and is integrated into society. There is a ceiling to AI takeover of meaning-giving
activities in the realm of the Good.
Thus, we have reason to think that Danaher’s pessimism with respect to sources
of meaning in a world of sophisticated AI is misguided. His argument rests on a nar-
row construal of the domains of the True and the Good. With a broader conception
of these realms, his argument seems implausible. AI cannot sap all the meaning out
of the realms of the True and the Good; nothing can fully “sever” our activity from
intellectual and moral pursuits because, among other things, these are not simply
external ends to be achieved but rather reflections on and transformations of who
we are. Thus, in principle, they cannot be outsourced to AI. Self-reflective and self-
transformative activities of intellectual and moral varieties remain in principle a hard
limit to AI’s encroachment on meaningful pursuits. These continue to be permanent
sources of meaning in life, even in a world awash in super-intelligent machines.

7. For a recent work that emphasizes the value of virtue ethical traditions in relation to technology see
(Vallor 2016). For an account of the relationship between meaning and virtue see (McPherson 2020).


It may be objected that I have been unfair to Danaher and taken his use of
the term “severance” too literally as entailing that human beings are cut off from
drawing meaning from certain sources (notably, the True and the Good). While
some of his writings seem to be defending a strong version of the severance thesis
(especially Danaher 2017), elsewhere Danaher (2019a) moderates the claim by
suggesting our options for meaning will probably “be reduced in scope”, even if
there remains residual access to the True and the Good in a world dominated by
AI (p. 105). In his more recent work, for example, he emphasizes how we can still
access goods of morality and virtue in certain utopian contexts, especially what
he calls the “Virtual Utopia” (2019a, ch. 7), and that there may be “an elite few
workers” (2019a, p. 242) that continue to maintain (and, perhaps, improve) the
underlying technical apparatus and thereby, presumably, continue to draw mean-
ing from their scientific work.
Even if we grant the moderate version of his severance thesis, I hope my argu-
ments convince the reader of three things. First, we may have more ground for
finding meaning in a world dominated by AI than Danaher suggests, even at his
most optimistic. Second, there are in principle limits to the AI conquest of the
True and the Good. While we may lose out in many fields, those self-reflexive
and self-transformative pursuits cannot be usurped by AI because only we can do
them for ourselves. Finally, many of Danaher’s more optimistic moves presup-
pose certain radical socio-technological changes: the widespread incorporation
of technology into biological human life, what he calls the “Cyborg Utopia”, or
the social elevation of virtual reality games, what he calls the “Virtual Utopia.”
My arguments are meant to show how much meaning is still possible irrespective
of these radical revisions of our humanity and/or society. Even if he is right that
these utopian changes would open up new opportunities for meaning in life, con-
troversial theses that are beyond the scope of this paper to evaluate, there would
still, nonetheless, be old-fashioned sources of meaning in a fully automated soci-
ety. My point is to underscore the limits of severability rather than opportunities
for meaning in a radically revised techno-social order.

Conclusion

The rise of AI will change the human condition in ways that we have yet to fully
fathom. Danaher has rightfully raised the issue of whether AI risks undercutting
a wider range of meaningful human pursuits than simply meaningful employ-
ment. As he suggests, we need to think holistically about AI’s potential impact
on the human condition. However, I have argued that he overstates the case for
AI severance. We will still have a wider range of possible meaningful activities
available than he assumes. His argument that we are left with merely aesthetic
and ludic pursuits sells short the moral and intellectual domains of meaningful
activities. These spheres are much richer than Danaher assumes. If we consider
self-reflective and self-transformative projects in both the realms of the True and
the Good, we see that AI cannot sever us from these goods, even if it spearheads


scientific research and eliminates material scarcity. In principle, the development
and proliferation of AI cannot usurp the meaningful activities that we ourselves
must do. These tasks remain ours, even in a future dominated by AI.8

References

Adams, D. (1979). The hitchhiker’s guide to the galaxy. Pan Books.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. Norton.
Chan, B. (2020). The rise of artificial intelligence and the crisis of moral passivity. AI & Society, 35, 991–993.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Cooper, J. M. (2012). Pursuits of wisdom: Six ways of life in ancient philosophy from Socrates to Plotinus. Princeton University Press.
Danaher, J. (2017). Will life be worth living in a world without work? Technological unemployment and the meaning of life. Science and Engineering Ethics, 23, 41–64.
Danaher, J. (2019a). Automation and utopia: Human flourishing in a world without work. Harvard University Press.
Danaher, J. (2019b). In defense of the post-work future: Withdrawal and the ludic life. In M. Cholbi & M. Weber (Eds.), The future of work, technology, and basic income (pp. 113–130). Routledge.
Danaher, J. (2019c). The rise of the robots and the crisis of moral patiency. AI & Society, 34, 129–136.
Floridi, L. (2014). Technological unemployment, leisure occupation, and the human project. Philosophy and Technology, 27, 143–150.
Gaita, R. (2004). Good and evil: An absolute conception (2nd ed.). Routledge.
Hadot, P. (1995). Philosophy as a way of life (A. I. Davidson, Ed.). Blackwell.
Hadot, P. (2004). What is ancient philosophy? (M. Chase, Trans.). Belknap Press.
Kim, T. W., & Scheller-Wolf, A. (2019). Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics, 160, 319–337.
Landau, I. (2017). Finding meaning in an imperfect world. Oxford University Press.
McPherson, D. (2020). Virtue and meaning: A Neo-Aristotelian perspective. Oxford University Press.
Metz, T. (2013). Meaning in life. Oxford University Press.
Morioka, M. (2021). Can artificial intelligence philosophize? The Review of Life Studies, 12, 40–41.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton University Press.
Schwab, K. (2017). The fourth industrial revolution. Crown.
Setiya, K. (2017). Midlife: A philosophical guide. Princeton University Press.
Smuts, A. (2013). The good cause account of the meaning of life. The Southern Journal of Philosophy, 51, 536–562.
Susskind, D. (2020). A world without work: Technology, automation, and how we should respond. Allen Lane.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Penguin.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Wolf, S. (2010). Meaning in life and why it matters. Princeton University Press.
Wooldridge, M. (2020). The road to conscious machines: The story of AI. Penguin.


8. I’m grateful for the comments of two reviewers for this journal.
