Meaningful Lives in an Age of Artificial Intelligence: A Reply to Danaher
https://doi.org/10.1007/s11948-021-00349-y
COMMENTARY
Lucas Scripter1
Received: 14 November 2020 / Accepted: 27 October 2021 / Published online: 30 January 2022
© The Author(s), under exclusive licence to Springer Nature B.V. 2022
Abstract
Does the rise of artificial intelligence pose a threat to human sources of meaning?
While much ink has been spilled on how AI could undercut meaningful human
work, John Danaher has raised the stakes by claiming that AI could “sever” human
beings from non-work-related sources of meaning—specifically, those related to
intellectual and moral goods. Against this view, I argue that his suggestion that AI
poses a threat to these areas of meaningful activity is overstated. Self-transformative
activities pose a hard limit to AI’s impingement on meaningful human activities.
Contra Danaher, I suggest that a wider range of sources for meaning will continue to
exist in a world dominated by AI.
Introduction

Recent writers have suggested that digital technologies herald a new age for human work. It has been called "the fourth industrial revolution" (Schwab 2017) as well as "the second machine age" (Brynjolfsson and McAfee 2014). Daniel Susskind (2020) expresses the concern that automation combined with AI may eat away at opportunities for human work: "Machines will not do everything in the future, but they will do more. And as they slowly, but relentlessly, take on more and more tasks, human beings will be forced to retreat to an ever-shrinking set of activities…Eventually, what is left will not be enough to provide everyone who wants it with traditional well-paid employment" (p. 5). Given the central role that work has historically played in human life, these changes raise the question of what will become of humanity in a world with inadequate work. A number of contemporary researchers (e.g., Coeckelbergh 2020; Danaher 2017, 2019a, c; Floridi 2014; Kim and Scheller-Wolf 2019; Susskind 2020; Tegmark 2017) have addressed the options for meaningful living in a world with severely diminished work options for humans.

* Correspondence: Lucas Scripter, lucasscripter@cuhk.edu.cn. School of Humanities and Social Science, The Chinese University of Hong Kong, Shenzhen, 2001 Longxiang Blvd., Longgang District, 518172 Shenzhen, China.
In this essay, I will consider a subtle view advanced by John Danaher (2017, 2019a) about the future of meaningful living in a world with AI. Danaher acknowledges, on the one hand, the way in which the rise of AI may liberate many people from tedious, boring, or otherwise harmful work. However, he raises the stakes by suggesting that the worry is not simply that AI may significantly undercut the opportunities for meaningful employment but that other non-work-related sources of meaning may be gobbled up as well. Such technological changes brought about by automation and AI may, in his words, "sever" us from various sources of meaning. This has led him (2019a, b) to defend the meaningfulness of the ludic life, one immersed in game playing, especially in virtual reality contexts.

In what follows, I will focus on Danaher's claim that the development of AI could "sever" human beings from various non-work-related sources of meaning—specifically, those related to intellectual and moral goods.1 Against this view, I argue that the threat AI poses to these areas of meaningful activity is overblown, and humanity will continue to have a wider range of options for meaningful living than Danaher suggests. More specifically, I maintain, self-reflective and self-transformative activities pose a hard limit to AI's encroachment on meaningful human activities. This analysis reveals that a richer palette of sources for meaning will continue to exist in a world dominated by AI, even absent the sorts of techno-social utopian changes that Danaher (2019a) defends.
Danaher’s View
1
I will set aside Danaher’s other concerns regarding how AI could undercut human meaning by leading
to a loss of attention, autonomy, and agency, among other things (2019a, b). These concerns require a
separate treatment. For a response to his argument from “moral patiency” see (Chan 2020). My concern
here is simply to provide a response to Danaher’s “severance” argument.
2
I will follow Danaher’s (2017) use of Metz’s (2013) theory not only because it is one of the most
sophisticated to date, but also because the results of the argument are, I believe, transferable. If it can be
shown that there are certain types of meaningful activities that survive the onslaught of AI on the basis
of Metz’s nuanced theory, I believe the argument will hold for a wide range of other more general theo-
ries that also emphasize or include an objective component (e.g., Landau 2017; Smuts 2013; Wolf 2010).
13
Meaningful Lives in an Age of Artificial Intelligence... Page 3 of 9 7
Footnote 2 (continued)
My argument here is thus a conditional one: assuming there is an objective element to meaning in human
life, as Danaher himself seems to do, then AI cannot “sever” us completely or as thoroughly as Danaher
suggests.
Critique of Danaher

Do Danaher's arguments really warrant such strong conclusions? I will argue they do not. The apparent plausibility of his argument rests on a narrow construal of the realms of the True and the Good. If we bring to mind the full richness of these other domains of meaningful activity, it becomes clear that we are not left simply with the Beautiful and the Ludic. Even in a world where AI has become ubiquitous and traditional employment has dried up or changed, there remain, in principle, meaningful pursuits open to humans in all three domains highlighted by Metz's (2013) theory—namely, the True, the Good, and the Beautiful. We have a richer palette of meaningful pursuits available to us than Danaher suggests. And these meaningful pursuits are not contingent upon the development of new techno-social utopian orders of the sort described and advocated by Danaher (2019a).
Let's start with his argument regarding the realm of the True. Even if we grant, for example, his contentious premise that AI could in the future perform the highest functions of human cognition, including spearheading novel scientific research itself, this doesn't necessarily eliminate a place for the True in human life. For starters, we might challenge the claim that AI can take over all areas of human inquiry. Perhaps there are areas of intellectual investigation where human beings will remain better suited, given our own understanding of what it is to be human. For instance, certain areas of humanities and social science research may remain privileged areas for human researchers. This is not to say that AI couldn't be useful in these endeavors, but it might play a merely auxiliary role rather than taking the lead. Areas of literary, historical, and cultural analysis may still remain meaningful options for human inquirers, even if natural scientific research becomes dominated by AI.
One reason to think this might be the case is that certain areas of humanistic inquiry seem to presuppose conscious experience. It is hard to imagine AI taking over reflection on, for example, morality, aesthetics, or love, if it cannot experience these things directly. Of course, the possibility of AI consciousness is a highly speculative and disputed topic.3 If it turns out that AI consciousness cannot be created, or is not created for technical, legal, or economic reasons (see Schneider 2019), then there may be certain areas of inquiry that remain the prerogative of human beings, even if AI can be quite helpful in assisting that research. The point is that such inquiry cannot be wholly usurped from humanity by our silicon-based counterparts. Moreover, even assuming that AI can attain consciousness, this still wouldn't necessarily entail that humans don't have a distinct and irreplaceable epistemic contribution to make to humanistic inquiry. This is because AI consciousness still wouldn't know what it is like to be a human; it would only know what it is like to be AI.4 Given this gap, there may continue to be a unique role for humans in inquiry into the human realm. Thus, even granting the extreme scenario that natural science becomes an enterprise dominated by AI, we have reason to think AI cannot "sever" us from all meaningful activities oriented toward the True. Call this the argument from humanistic inquiry.

3 For a broad overview of the issues, see Schneider (2019), Tegmark (2017), and Wooldridge (2020).

4 This move, of course, rests on controversial claims in the philosophy of mind (see Nagel 1974). My point is not to defend these assumptions but rather to show that it is by no means a given that AI can take over all forms of intellectual inquiry, even if we grant remarkable scientific advances.
Even if AI turns out to be a better literary critic, cultural historian, archaeologist, and social theorist than humans, however, this would still not "sever" us from the intellectual dimension of human meaning. There are still other reasons to think that AI could not in principle pick all the fruits in the realm of the True. Consider the project of doing philosophy. Is this vulnerable to AI takeover? Masahiro Morioka (2021) argues that it is in principle possible for AI not only to productively analyze the works of dead philosophers but also to identify new "philosophical thought patterns" (p. 40), i.e., to do original philosophy. He goes on to suggest that AI might develop the capacity to independently and self-reflexively pose fundamental existential questions, e.g., concerning the meaning of life. This may culminate, he speculates, in a situation where human beings and AI engage each other as dialogue partners, a momentous change that "would surely open up a new dimension to philosophy" (p. 41). Would the existence of AI philosophers entail that humans are "severed" from the good of doing philosophy?
I don't think so. Even if AI were to develop the capacity for philosophical inquiry, it would not have the consequence of cutting human beings off from the value that inheres in doing philosophy. This is not only because human beings could continue to be in dialogue with AI, but, more importantly, because philosophy's value does not reside simply in the answer that could be achieved by a supercomputer, an idea famously parodied in The Hitchhiker's Guide to the Galaxy (Adams 1979). If one returns to the Socratic understanding of philosophy as leading an examined life, then AI could not in principle "sever" us from this good. As Raimond Gaita (2004) has emphasized, the Socratic project is necessarily co-extensive with one's life—it is not the sort of task that could be achieved and then set aside, because it isn't about the result of reflection but the reflective activity itself. For the same reason, it is not a task that could be outsourced to AI. Living an examined life is necessarily a task for oneself. The Delphic maxim "Know thyself" requires that we pursue the truth ourselves as individual, self-reflective agents. Artificial intelligence could not in principle be tasked with examining our lives and then relaying its findings to us. It might inform us of interesting patterns or observations, as in Morioka's scenario, but reflective living requires that we ourselves engage in self-examination. Thus, we have discovered at least one example where the True remains in principle immune to AI's encroachment on meaningful human activities. Philosophical reflection in its traditional understanding, as Pierre Hadot (1995, 2004) and John Cooper (2012) have argued, was a "way of life." Keeping in mind this conception of the philosophical life, we encounter a hard limit to AI's infringement on the realm of the True. Danaher's argument that AI could in principle come to dominate this sphere of meaningful activity is overblown. Call this the argument from the examined life.
Aside from philosophy, however, is it the case that AI will come to displace human beings from other sorts of meaningful intellectual activities? Danaher's assumption is that the meaningfulness of intellectual pursuits is located in achieving breakthroughs and novel results. Indeed, his later (2019a) iteration of the severance argument highlights the role of making achievements for meaning in life. Only if one holds this assumption can one reach the conclusion that AI could in principle displace human beings from these meaningful intellectual pursuits.
5 See the distinction between telic and atelic activities in Setiya (2017). In this terminology, Danaher's severance argument seems to conceive of meaningful activity primarily in telic or goal-oriented terms, with an emphasis on making unique achievements. This overlooks and/or minimizes atelic or process-oriented activities, although he does elsewhere emphasize the importance of finding meaning in the process of activities, e.g., of playing games in a virtual utopia (see Danaher 2019a). See also Landau's (2017) closely related discussion of "The Paradox of the End" (pp. 145–161).

6 Danaher (2019a) writes, "It might be fun to replicate someone else's scientific or moral achievements, but it's not quite the same as being the first one to do so" (p. 106). While he may be right that pioneering or groundbreaking work is especially meaningful, I think he sells short the meaningfulness of learning. It is not simply a matter of its being fun, which stresses merely a subjective condition, but about becoming a knowledgeable or wise person, which involves an objective condition.

7 For a recent work that emphasizes the value of virtue ethical traditions in relation to technology, see Vallor (2016). For an account of the relationship between meaning and virtue, see McPherson (2020).
It may be objected that I have been unfair to Danaher and taken his use of the term "severance" too literally, as entailing that human beings are cut off from drawing meaning from certain sources (notably, the True and the Good). While some of his writings seem to defend a strong version of the severance thesis (especially Danaher 2017), elsewhere Danaher (2019a) moderates the claim by suggesting our options for meaning will probably "be reduced in scope" (p. 105), even if there remains residual access to the True and the Good in a world dominated by AI. In his more recent work, for example, he emphasizes how we can still access goods of morality and virtue in certain utopian contexts, especially what he calls the "Virtual Utopia" (2019a, ch. 7), and that there may be "an elite few workers" (2019a, p. 242) who continue to maintain (and, perhaps, improve) the underlying technical apparatus and thereby, presumably, continue to draw meaning from their scientific work.
Even if we grant the moderate version of his severance thesis, I hope my arguments convince the reader of three things. First, we may have more grounds for finding meaning in a world dominated by AI than Danaher suggests, even at his most optimistic. Second, there are in-principle limits to the AI conquest of the True and the Good. While we may lose out on many fields, self-reflexive and self-transformative pursuits cannot be usurped by AI, because only we can do them for ourselves. Finally, many of Danaher's more optimistic moves presuppose certain radical socio-technological changes: the widespread incorporation of technology into biological human life, what he calls the "Cyborg Utopia," or the social elevation of virtual reality games, what he calls the "Virtual Utopia." My arguments are meant to show how much meaning is still possible irrespective of these radical revisions of our humanity and/or society. Even if he is right that these utopian changes would open up new opportunities for meaning in life, controversial theses that are beyond the scope of this paper to evaluate, there would still, nonetheless, be old-fashioned sources of meaning in a fully automated society. My point is to underscore the limits of severability rather than the opportunities for meaning in a radically revised techno-social order.
Conclusion

The rise of AI will change the human condition in ways that we have yet to fully fathom. Danaher has rightfully raised the issue of whether AI risks undercutting a wider range of meaningful human pursuits than simply meaningful employment. As he suggests, we need to think holistically about AI's potential impact on the human condition. However, I have argued that he overstates the case for AI severance. We will still have a wider range of possible meaningful activities available than he assumes. His argument that we are left with merely aesthetic and ludic pursuits sells short the moral and intellectual domains of meaningful activity. These spheres are much richer than Danaher assumes. If we consider self-reflective and self-transformative projects in both the realms of the True and the Good, we see that AI cannot sever us from these goods, even if it spearheads novel research and achievement in these domains.
References
Adams, D. (1979). The hitchhiker’s guide to the galaxy. Pan Books.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. Norton.
Chan, B. (2020). The rise of artificial intelligence and the crisis of moral passivity. AI & Society, 35,
991–993.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Cooper, J. M. (2012). Pursuits of wisdom: Six ways of life in ancient philosophy from Socrates to Plotinus. Princeton University Press.
Danaher, J. (2017). Will life be worth living in a world without work? Technological unemployment and
the meaning of life. Science and Engineering Ethics, 23, 41–64.
Danaher, J. (2019a). Automation and utopia: Human flourishing in a world without work. Harvard University Press.
Danaher, J. (2019b). In defense of the post-work future: Withdrawal and the ludic life. In M. Cholbi & M.
Warner (Eds.), The future of work, technology, and basic income (pp. 113–130). Routledge.
Danaher, J. (2019c). The rise of the robots and the crisis of moral patiency. AI & Society, 34, 129–136.
Floridi, L. (2014). Technological unemployment, leisure occupation, and the human project. Philosophy
and Technology, 27, 143–150.
Gaita, R. (2004). Good and evil: An absolute conception (2nd ed.). Routledge.
Hadot, P. (1995). Philosophy as a way of life (A. I. Davidson, Ed.). Blackwell.
Hadot, P. (2004). What is ancient philosophy? (M. Chase, Trans.). Belknap Press.
Kim, T. W., & Scheller-Wolf, A. (2019). Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics, 160, 319–337.
Landau, I. (2017). Finding meaning in an imperfect world. Oxford University Press.
McPherson, D. (2020). Virtue and meaning: A Neo-Aristotelian perspective. Oxford University Press.
Metz, T. (2013). Meaning in life. Oxford University Press.
Morioka, M. (2021). Can artificial intelligence philosophize? The Review of Life Studies, 12, 40–41.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton University Press.
Schwab, K. (2017). The fourth industrial revolution. Crown.
Setiya, K. (2017). Midlife: A philosophical guide. Princeton University Press.
Smuts, A. (2013). The good cause account of the meaning of life. The Southern Journal of Philosophy,
51, 536–562.
Susskind, D. (2020). A world without work: Technology, automation, and how we should respond. Allen Lane.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Penguin.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford
University Press.
Wolf, S. (2010). Meaning in life and why it matters. Princeton University Press.
Wooldridge, M. (2020). The road to conscious machines: The story of AI. Penguin.
8 I'm grateful for the comments of two reviewers for this journal.